Content of Database Systems in our journal

    Persistent memory- and shared cache architecture-based high-performance database
    Congcong WANG, Huiqi HU
    Journal of East China Normal University(Natural Science)    2023, 2023 (5): 1-10.   DOI: 10.3969/j.issn.1000-5641.2023.05.001

The upsurge in cloud-native databases has been drawing attention to shared architectures. Although a shared cache architecture can effectively address cache-consistency issues among multiple read-write nodes, problems remain, such as slow persistence, high latency in maintaining cache directories, and timestamp bottlenecks. To address these issues, this study combines a shared cache architecture with novel persistent-memory hardware to realize TampoDB, a database with a three-layer shared architecture consisting of memory, persistent-memory, and storage layers. The transaction execution process was redesigned on this architecture with optimized timestamps and directories, thereby resolving the aforementioned problems. Experimental results show that TampoDB effectively enhances the persistence speed of transactions.
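The three-layer write path described above can be sketched roughly as follows; the class, method names, and log format are illustrative assumptions for exposition, not TampoDB's actual interfaces.

```python
class TampoStyleDB:
    """Toy sketch of a three-layer write path (DRAM cache, persistent-memory
    log, shared storage). All names and structures here are illustrative."""

    def __init__(self):
        self.dram = {}       # volatile working set
        self.pmem_log = []   # stand-in for a persistent-memory redo log
        self.storage = {}    # stand-in for the shared storage layer

    def commit(self, key, value):
        # A transaction counts as durable once its redo record reaches the
        # persistent-memory log; writing to shared storage happens later.
        self.dram[key] = value
        self.pmem_log.append((key, value))
        return len(self.pmem_log) - 1  # log sequence number

    def flush_to_storage(self):
        # Background step: drain the persistent-memory log into storage.
        for key, value in self.pmem_log:
            self.storage[key] = value
        self.pmem_log.clear()

db = TampoStyleDB()
lsn = db.commit("a", 1)
db.flush_to_storage()
```

The point of the split is that commits only wait for the fast persistent-memory append, while the slower storage write happens off the critical path.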

An HTAP database prototype with adaptive data synchronization
    Rong YU, Panfei YANG, Qingshuai WANG, Rong ZHANG
    Journal of East China Normal University(Natural Science)    2023, 2023 (5): 11-25.   DOI: 10.3969/j.issn.1000-5641.2023.05.002

In HTAP (hybrid transactional and analytical processing) databases, balancing resource isolation and data sharing is a difficult problem. Although different vendors achieve resource isolation through different architectures, the freshness that users care about, that is, the gap between the online transactional processing (OLTP) write version and the online analytical processing (OLAP) read version, is determined by the consistency model of data sharing. However, existing HTAP databases apply only one consistency synchronization model for ease of implementation, which conflicts with the multiple consistency requirements of user applications and sacrifices overall system performance for upward compatibility with the highest consistency level. This paper constructs a cost model of the freshness/performance trade-off, proposes a consistency switching algorithm together with a processing strategy for data synchronized before and after switching, and realizes an HTAP database prototype that switches adaptively between sequential consistency synchronization and linear consistency synchronization. This makes it possible to support query loads with different consistency (freshness) requirements and to maximize system performance without adjusting the HTAP architecture. The effectiveness of adaptive switching is also verified by extensive experiments.
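The freshness/performance trade-off behind such switching can be illustrated with a toy mode selector; the lag and cost figures below are made-up per-mode estimates, not the paper's actual cost model.

```python
def choose_sync_mode(required_freshness_ms, lag_ms, cost):
    """Return the cheapest consistency mode whose expected OLTP-to-OLAP
    version lag still satisfies the query's freshness requirement.
    `lag_ms` and `cost` are hypothetical per-mode estimates."""
    candidates = [m for m in lag_ms if lag_ms[m] <= required_freshness_ms]
    if not candidates:
        # No mode is fresh enough: fall back to the strongest (linear) sync.
        return "linear"
    return min(candidates, key=lambda m: cost[m])

# Hypothetical estimates: linear sync is fresher but costs more throughput.
lag = {"sequential": 50.0, "linear": 5.0}   # expected read-version lag (ms)
cost = {"sequential": 1.0, "linear": 3.5}   # relative sync overhead
```

A relaxed analytical query would thus be served under the cheaper sequential sync, while a freshness-critical one triggers linear sync, without changing the architecture.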

    Hybrid granular buffer management scheme for storage and computing separation architecture
    Wenjuan MEI, Peng CAI
    Journal of East China Normal University(Natural Science)    2023, 2023 (5): 26-39.   DOI: 10.3969/j.issn.1000-5641.2023.05.003

The storage-compute separation architecture has emerged as a solution for improving the performance and efficiency of large-scale data processing. However, there are notable performance bottlenecks in this approach, primarily the low access efficiency of object storage and the significant network overhead. Additionally, object storage exhibits low storage efficiency for small files; for instance, ClickHouse, a MergeTree-based database, generates a plethora of small files when storing data. To address these challenges, HG-Buffer (hybrid granularity buffer) is introduced as an SSD (solid-state drive)-based cache management solution that optimizes storage-compute separation between ClickHouse and S3 while also tackling the small-file issue in object storage. The primary objective of HG-Buffer is to minimize network transmission overhead and enhance system access efficiency. This is achieved by introducing the SSD as a caching layer between the compute and storage layers and organizing the SSD buffer at two granularities: an object buffer and a block buffer. The object-buffer granularity corresponds to the data granularity in object storage, while the block-buffer granularity is the data granularity accessed by the system, a subset of the object-buffer granularity. By statistically analyzing data-hotness information, HG-Buffer adaptively selects the storage location for data, improving SSD space utilization and system performance. Experimental evaluations conducted on ClickHouse and S3 demonstrate the effectiveness and robustness of HG-Buffer.
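The two-granularity idea can be sketched as a toy cache that promotes frequently accessed objects from block-level to whole-object caching; the counting threshold and all names are assumptions for illustration, not HG-Buffer's real placement policy.

```python
class HybridGranularityBuffer:
    """Toy two-granularity SSD cache: hot objects are cached whole, colder
    ones only at the block granularity actually read."""

    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold  # assumed hotness cutoff
        self.access_count = {}              # object_id -> reads seen
        self.object_buffer = {}             # object_id -> full object
        self.block_buffer = {}              # (object_id, block_no) -> block

    def on_read(self, object_id, block_no, fetch_block, fetch_object):
        self.access_count[object_id] = self.access_count.get(object_id, 0) + 1
        if object_id in self.object_buffer:
            # Hot path: whole object already cached, no remote access.
            return self.object_buffer[object_id][block_no]
        key = (object_id, block_no)
        if key not in self.block_buffer:
            self.block_buffer[key] = fetch_block(object_id, block_no)
        # Promote frequently touched objects to whole-object caching.
        if self.access_count[object_id] >= self.hot_threshold:
            self.object_buffer[object_id] = fetch_object(object_id)
        return self.block_buffer[key]

# Simulated object store: one object split into four blocks.
store = {"obj": ["b0", "b1", "b2", "b3"]}
buf = HybridGranularityBuffer(hot_threshold=3)
fetch_block = lambda oid, n: store[oid][n]
fetch_object = lambda oid: store[oid]
for n in (0, 1, 2):
    buf.on_read("obj", n, fetch_block, fetch_object)
```

After three reads the object crosses the hotness threshold and subsequent block reads are served from the object buffer, trading SSD space for fewer object-storage round trips.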

    Separate management strategies for Part metadata under the storage-computing separation architecture
    Danqi LIU, Peng CAI
    Journal of East China Normal University(Natural Science)    2023, 2023 (5): 40-50.   DOI: 10.3969/j.issn.1000-5641.2023.05.004

To address the deficiencies of ClickHouse, including underutilization of hardware resources, lack of flexibility, and slow node startup, this paper proposes metadata management strategies under the storage-compute separation architecture, focusing on Part metadata, the most crucial component of metadata, which describes the data. To manage data on remote shared storage effectively, this study collected all Part metadata files and merged them. After key-value mapping, serialization, and deserialization, the merged metadata were stored in a distributed key-value database. Furthermore, a synchronization strategy was designed to ensure consistency between the data on remote shared storage and the metadata in the distributed key-value database. By implementing these strategies, a metadata management system for Part metadata was developed that effectively addresses the slow node startup issue in ClickHouse and supports efficient dynamic scaling of nodes.
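The merge-and-map step can be illustrated with a minimal sketch that folds per-Part metadata records into key-value pairs for a distributed KV store; the key layout and fields are hypothetical, not ClickHouse's actual Part metadata format.

```python
import json

def merge_part_metadata(part_files):
    """Merge per-Part metadata records into key-value entries suitable for
    a distributed KV store. The key scheme 'part/<table>/<part-name>' and
    the record fields are illustrative assumptions."""
    kv = {}
    for part in part_files:
        key = f"part/{part['table']}/{part['name']}"
        kv[key] = json.dumps(part)  # serialize one Part's metadata
    return kv

parts = [
    {"table": "t1", "name": "all_1_1_0", "rows": 100},
    {"table": "t1", "name": "all_2_2_0", "rows": 50},
]
kv = merge_part_metadata(parts)
restored = json.loads(kv["part/t1/all_1_1_0"])  # deserialization step
```

With metadata centralized like this, a starting node can read one KV range instead of listing many small files on remote storage, which is what makes fast startup and dynamic scaling plausible.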

Generating diverse database isolation level test cases with fuzz testing
    Xiyu LU, Wei LIU, Siyang WENG, Keqiang LI, Rong ZHANG
    Journal of East China Normal University(Natural Science)    2023, 2023 (5): 51-64.   DOI: 10.3969/j.issn.1000-5641.2023.05.005

Database management systems play a vital role in modern information systems. Isolation level testing is important for database management systems: it ensures the isolation of concurrent operations and data consistency, prevents data corruption, inconsistency, and security risks, and provides reliable data access to users. Fuzz testing is a method widely used in software and system testing; by searching the test space and generating diverse test cases, it explores the boundary conditions, anomalies, and potential problems of a system to find possible vulnerabilities. This article introduces SilverBlade, a tool for fuzz testing database isolation levels that aims to improve the diversity of generated test cases and explore the isolation-level test space in depth. To search the huge test space effectively, this study designed a structured test input that splits the test space into two subspaces: combinations of concurrent transactions and execution interaction modes. To cover the test space of the core isolation-level implementation more comprehensively, an adaptive search method based on depth and breadth was also designed to mutate test cases effectively. The experimental results show that SilverBlade is able to generate diverse test cases and provide broader coverage of the core implementation code of database isolation levels in the popular database management system PostgreSQL. Compared with similar tools, SilverBlade performed better at improving test coverage in critical areas of the isolation level.
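The "execution interaction" subspace mentioned above can be illustrated by enumerating the interleavings of two transactions' operation sequences, one ingredient such a fuzzer can mutate; this is a generic sketch, not SilverBlade's actual generator.

```python
def interleavings(txn_a, txn_b):
    """Enumerate every interleaving of two transactions' operations,
    preserving each transaction's internal order. Operation labels like
    'r1(x)' (transaction 1 reads x) are a conventional shorthand."""
    if not txn_a:
        return [list(txn_b)]
    if not txn_b:
        return [list(txn_a)]
    # Either transaction may issue its next operation first.
    return [[txn_a[0]] + rest for rest in interleavings(txn_a[1:], txn_b)] + \
           [[txn_b[0]] + rest for rest in interleavings(txn_a, txn_b[1:])]

t1 = ["r1(x)", "w1(x)"]
t2 = ["w2(x)"]
schedules = interleavings(t1, t2)
```

Even this tiny pair yields three schedules; real workloads make the space combinatorially huge, which is why a guided depth/breadth search rather than exhaustive enumeration is needed.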
