Top Read Articles

    A case study on the application of Chinese large language models to the automatic labelling of subject knowledge graphs: Taking morality and law and mathematics as examples
    Sijia KOU, Fengyun YAN, Jing MA
    Journal of East China Normal University(Natural Science)    2024, 2024 (5): 81-92.   DOI: 10.3969/j.issn.1000-5641.2024.05.008
    Abstract views: 702 | HTML views: 14 | PDF (1324 KB) downloads: 1069

    With the rapid development of artificial intelligence technology, large language models (LLMs) have demonstrated strong abilities in natural language processing and a wide range of knowledge applications. This study examined the application of Chinese LLMs to the automatic labelling of knowledge graphs for primary and secondary school subjects, specifically morality and law in the compulsory education stage and high school mathematics. In education, the construction of knowledge graphs is crucial for organizing knowledge systematically. However, traditional knowledge graph construction methods suffer from low efficiency and high labor costs in data labelling. This study aimed to solve these problems using LLMs, thereby improving the level of automation and intelligence in knowledge graph construction. Based on the status quo of domestic LLMs, this paper discusses their application in the automatic labelling of subject knowledge graphs, taking morality and law and mathematics as examples to explain the relevant methods and experimental results. First, the research background and significance are discussed. Second, the development status of domestic LLMs and of automatic labelling technology for subject knowledge graphs is presented. In the methods and model section, an automatic labelling method based on LLMs is explored to improve its application to subject knowledge graphs, and a subject knowledge graph model is used to compare and evaluate the actual effect of the automatic labelling method. In the experiment and analysis section, automatic labelling experiments on the subjects of morality and law and mathematics show that the knowledge graphs of the two disciplines can be labelled automatically with high accuracy and efficiency; a series of valuable conclusions are obtained, and the effectiveness and accuracy of the proposed methods are verified. Finally, future research directions are discussed. Overall, this study provides a new concept and method for the automatic labelling of subject knowledge graphs, which is expected to promote further developments in related fields.
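
    As a rough illustration of the labelling pipeline described above, the following Python sketch prompts a generic chat-style LLM to assign curriculum knowledge points to a text fragment; the call_llm hook, the prompt wording, and the label set are hypothetical placeholders, not the authors' implementation.

    import json

    # Hypothetical label set drawn from a subject knowledge graph (illustrative only).
    KNOWLEDGE_POINTS = ["rule of law in daily life", "civic rights and duties",
                        "quadratic functions", "solid geometry"]

    def build_prompt(fragment: str) -> str:
        """Assemble a labelling prompt listing the candidate knowledge points."""
        options = "\n".join(f"- {kp}" for kp in KNOWLEDGE_POINTS)
        return (
            "You are labelling teaching materials for a subject knowledge graph.\n"
            f"Candidate knowledge points:\n{options}\n\n"
            f"Text:\n{fragment}\n\n"
            'Return a JSON list of the matching knowledge points, e.g. ["..."].'
        )

    def label_fragment(fragment: str, call_llm) -> list[str]:
        """call_llm is any callable mapping a prompt string to the model's reply."""
        reply = call_llm(build_prompt(fragment))
        try:
            labels = json.loads(reply)
        except json.JSONDecodeError:
            labels = []
        # Keep only labels that exist in the knowledge graph, discarding hallucinations.
        return [l for l in labels if l in KNOWLEDGE_POINTS]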

    Research on the impact of a typhoon on the accretion-erosion of mudflats: Based on UAV photogrammetry and in situ hydrodynamic measurements
    Xinmiao ZHANG, Liming XUE, Benwei SHI, Wenxiang ZHANG, Tianyou LI, Biaobiao PENG, Xiuzhen LI, Yaping WANG
    Journal of East China Normal University(Natural Science)    2024, 2024 (4): 150-160.   DOI: 10.3969/j.issn.1000-5641.2024.04.014
    Abstract views: 565 | HTML views: 12 | PDF (2685 KB) downloads: 570

    Extreme events such as typhoons can change mudflat elevation by tens of centimeters. Recognizing changes in accretion-erosion during typhoons and understanding the mechanisms driving them is important for coastal management and ecosystem maintenance. In this study, unmanned aerial vehicle (UAV) photogrammetry based on the Structure-from-Motion (SfM) algorithm was used to generate digital elevation models (DEMs) of a mudflat in eastern Chongming, Yangtze Estuary, before and after the passage of Typhoon “In-Fa” (July 2021). Hydrodynamic measurements were conducted from the bare flats to the marshes to explore the mechanisms behind the DEM changes. The accretion-erosion changes observed by UAV photogrammetry showed an obvious zonation of eroded bare flats and accreted marshes, with a DEM accuracy of 4.1 cm. Under the impact of the typhoon, the erosion of the bare flat and the accretion of the marsh reached an amplitude of ±32 cm. During typhoons, the wave height and water depth in the bare flat increase to the point of wave breaking, and the surface sediment is eroded and carried by the rising tides. In the marshes, by contrast, the sediment-carrying capacity of the water column decreases and the sediment is deposited. Consequently, the mudflat presents an obvious zonation of accretion and erosion. By combining UAV photogrammetry and hydrodynamic measurements, this study provides a new perspective for understanding the impact of typhoons on the accretion-erosion of mudflats.
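
    A minimal sketch of the DEM-differencing step implied above, assuming the pre- and post-typhoon DEMs have already been co-registered onto the same grid; the toy arrays and the masking threshold (set to the 4.1 cm accuracy reported in the abstract) are illustrative, not the paper's workflow.

    import numpy as np

    def accretion_erosion_map(dem_before: np.ndarray, dem_after: np.ndarray,
                              vertical_accuracy: float = 0.041) -> np.ndarray:
        """Elevation change in metres; differences within the DEM accuracy are masked out."""
        change = dem_after - dem_before                      # positive = accretion, negative = erosion
        change[np.abs(change) < vertical_accuracy] = np.nan  # below detection limit
        return change

    # Toy example: a 3x3 grid where the "bare flat" erodes and the "marsh" accretes.
    before = np.array([[1.00, 1.02, 1.05],
                       [1.10, 1.12, 1.15],
                       [1.20, 1.22, 1.25]])
    after  = np.array([[0.80, 0.85, 0.90],
                       [1.10, 1.12, 1.15],
                       [1.45, 1.50, 1.55]])
    change = accretion_erosion_map(before, after)
    print(np.nanmin(change), np.nanmax(change))  # approx -0.20 m erosion, +0.30 m accretion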

    Knowledge graph empowered object-oriented programming C++ teaching reform and practice
    Zhuang PEI, Xiuxia TIAN, Bingxue LI
    Journal of East China Normal University(Natural Science)    2024, 2024 (5): 104-113.   DOI: 10.3969/j.issn.1000-5641.2024.05.010
    Abstract views: 504 | HTML views: 6 | PDF (3157 KB) downloads: 695

    Against the backdrop of the national new engineering education initiative, previous C++ teaching has failed to meet the requirements of high-level sophistication, innovation, and challenge, and suffers from fragmented knowledge points, difficulty in integrating theory with practice, and single-perspective bias. To address these problems, we propose an innovative teaching model that effectively integrates QT (Qt toolkit) and C++ by merging the two courses. This model facilitates the teaching process via a course knowledge graph deployed on the Zhihuishu platform. The breadth of teaching is expanded by effectively linking course knowledge points, integrating and sharing multimodal teaching resources, enhancing multiperspective learning, showcasing the course’s innovative nature, and avoiding single-perspective bias. Simultaneously, the depth of teaching is increased through the construction of a knowledge graph that integrates QT and object-oriented programming (C++), organically combining the knowledge points of both courses. This approach bridges the gap between theory and practice while enhancing the course’s sophistication and level of challenge. Consequently, this study pioneers the reform of C++ teaching and provides valuable references and insights for programming courses under the new engineering education framework.

    SA-MGKT: Multi-graph knowledge tracing method based on self-attention
    Chang WANG, Dan MA, Huarong XU, Panfeng CHEN, Mei CHEN, Hui LI
    Journal of East China Normal University(Natural Science)    2024, 2024 (5): 20-31.   DOI: 10.3969/j.issn.1000-5641.2024.05.003
    Abstract views: 499 | HTML views: 12 | PDF (936 KB) downloads: 247

    This study proposes a multi-graph knowledge tracing method integrated with a self-attention mechanism (SA-MGKT), which models students’ knowledge mastery from their historical performance on problem-solving exercises and predicts their future learning performance. First, a student–exercise heterogeneous graph is constructed to represent the high-order relationships between students and exercises; graph contrastive learning is employed to capture students’ answer preferences, and a three-layer LightGCN is used for graph representation learning. Second, information from concept-association hypergraphs and directed transition graphs is introduced, and node embeddings are obtained through hypergraph convolutional networks and directed graph convolutional networks. Finally, a self-attention mechanism fuses the internal information within the exercise sequence with the latent knowledge embedded in the representations learned from the multiple graphs, leading to a substantial improvement in the accuracy of the knowledge tracing model. Experimental results on three benchmark datasets show improvements of 3.51%, 17.91%, and 1.47%, respectively, in the evaluation metrics compared with the baseline models, validating the effectiveness of integrating multi-graph information and the self-attention mechanism for knowledge tracing.
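
    A minimal PyTorch sketch of the final fusion step described above: embeddings of the same exercise sequence coming from several graph encoders are fused with self-attention and mapped to answer-correctness predictions. Dimensions, head counts, and the random inputs are placeholders, not the authors' SA-MGKT configuration.

    import torch
    import torch.nn as nn

    class MultiGraphFusion(nn.Module):
        """Fuse per-exercise embeddings from several graph encoders with self-attention."""
        def __init__(self, embed_dim: int = 64, num_heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            self.out = nn.Linear(embed_dim, 1)  # predicted probability of a correct answer

        def forward(self, graph_embeddings: torch.Tensor) -> torch.Tensor:
            # graph_embeddings: (batch, seq_len, embed_dim), already combined from the
            # heterogeneous graph, concept hypergraph, and transition graph encoders.
            fused, _ = self.attn(graph_embeddings, graph_embeddings, graph_embeddings)
            return torch.sigmoid(self.out(fused)).squeeze(-1)  # (batch, seq_len)

    # Toy usage with random sequence embeddings.
    batch = torch.randn(8, 20, 64)           # 8 students, 20 exercises each
    model = MultiGraphFusion()
    print(model(batch).shape)                # torch.Size([8, 20])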

    Educational resource content review method based on knowledge graph and large language model collaboration
    Jia LIU, Xin SUN, Yuqing ZHANG
    Journal of East China Normal University(Natural Science)    2024, 2024 (5): 57-69.   DOI: 10.3969/j.issn.1000-5641.2024.05.006
    Abstract views: 496 | HTML views: 18 | PDF (1448 KB) downloads: 279

    Automated content review of digital educational resources is in urgent demand in the era of educational informatization. In particular, in the applicability review that checks whether educational resources exceed national curriculum standards, the knowledge involved easily exceeds the standards yet is difficult to locate. In response to this demand, this study proposes a review method for educational resources based on the collaboration of an educational knowledge graph and a large language model. Specifically, the study first uses ontology concepts to design and construct a knowledge graph for curriculum education in primary and secondary schools. A knowledge localization method is then designed based on teaching content generation, sorting, and pruning, exploiting the strengths of large language models on text generation and ranking tasks. Finally, by detecting conflicts between the core knowledge sub-graph of the teaching content and the teaching paths of the knowledge graph, teaching content that exceeds the national standard is recognized. Experimental results demonstrate that the proposed method effectively addresses the task of reviewing above-standard knowledge in educational resource content, opening a new technological direction for educational applications based on knowledge graph and large language model collaboration.
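
    To make the conflict-detection step concrete, here is a small Python sketch that flags concepts extracted from a piece of teaching content when they fall outside the set of knowledge points allowed on the relevant teaching path; the graph contents and the extracted concept set are hypothetical stand-ins for the paper's knowledge graph and LLM-based localization.

    # Hypothetical curriculum graph: grade -> knowledge points covered up to that grade.
    TEACHING_PATH = {
        "grade7": {"integer arithmetic", "linear equations"},
        "grade8": {"integer arithmetic", "linear equations", "congruent triangles"},
        "grade9": {"integer arithmetic", "linear equations", "congruent triangles",
                   "quadratic functions"},
    }

    def find_above_standard(concepts: set[str], grade: str) -> set[str]:
        """Return the extracted concepts that conflict with the allowed teaching path."""
        allowed = TEACHING_PATH.get(grade, set())
        return concepts - allowed

    # Suppose an LLM-based localization step extracted these concepts from a grade-7 worksheet.
    extracted = {"linear equations", "quadratic functions"}
    print(find_above_standard(extracted, "grade7"))  # {'quadratic functions'} exceeds the standard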

    Knowledge-distillation-based lightweight crop-disease-recognition algorithm
    Wenjing HU, Longquan JIANG, Junlong YU, Yiqian XU, Qipeng LIU, Lei LIANG, Jiahao LI
    Journal of East China Normal University(Natural Science)    2025, 2025 (1): 59-71.   DOI: 10.3969/j.issn.1000-5641.2025.01.005
    Abstract views: 493 | HTML views: 12 | PDF (3454 KB) downloads: 188

    Crop diseases are one of the main factors threatening crop growth. Machine-learning algorithms can detect large-scale crop diseases efficiently and thus support timely treatment, improving crop yield and quality. In large-scale agricultural scenarios, however, limitations in power supply and other conditions mean that the requirements of high-computing-power devices such as servers cannot be met. Most existing deep-network models demand high computing power and cannot easily be deployed on low-power embedded devices, hindering the accurate identification of crop diseases at scale. Hence, this paper proposes a lightweight crop-disease-recognition algorithm based on knowledge distillation. A student model based on a residual structure and an attention mechanism is designed, and knowledge distillation is applied to transfer knowledge from the ConvNeXt model, yielding a lightweight model that maintains high-precision recognition. The experimental results show that the algorithm achieves 98.72% classification accuracy across 39 types of crop diseases with a model size of 2.28 MB, which satisfies the requirements for deployment on embedded devices and offers a practical, efficient solution for crop-disease recognition.
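
    The distillation objective behind this kind of teacher-student transfer is typically a temperature-scaled KL divergence blended with the usual cross-entropy; the PyTorch sketch below shows that generic loss, with the temperature and weighting chosen arbitrarily rather than taken from the paper.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          labels: torch.Tensor,
                          temperature: float = 4.0,
                          alpha: float = 0.7) -> torch.Tensor:
        """Soft-target KL term (teacher -> student) blended with hard-label cross-entropy."""
        soft = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * (temperature ** 2)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # Toy usage: 39 disease classes, batch of 16 images.
    student = torch.randn(16, 39)
    teacher = torch.randn(16, 39)
    labels = torch.randint(0, 39, (16,))
    print(distillation_loss(student, teacher, labels).item())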

    Time series uncertainty forecasting based on graph augmentation and attention mechanism
    Chaojie MEN, Jing ZHAO, Nan ZHANG
    Journal of East China Normal University(Natural Science)    2025, 2025 (1): 82-96.   DOI: 10.3969/j.issn.1000-5641.2025.01.007
    Abstract views: 488 | HTML views: 9 | PDF (1026 KB) downloads: 549

    To improve the ability to predict future events and to address uncertainty effectively, we propose a network architecture based on graph augmentation and attention mechanisms for uncertainty forecasting in multivariate time series. By introducing an implicit graph structure and integrating graph neural network techniques, we capture the mutual dependencies among sequences to model the interactions between time series. Attention mechanisms capture temporal patterns within each sequence to model the dynamic evolution of the time series. We then use the Monte Carlo dropout method to approximate the model parameters and treat the predicted sequences as a stochastic distribution, thereby achieving accurate uncertainty forecasting. The experimental results indicate that this approach maintains a high level of prediction precision while providing reliable uncertainty estimates, offering confidence measures for downstream decision-making tasks.
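
    A minimal PyTorch sketch of the Monte Carlo dropout step: dropout layers are kept active at inference and the network is sampled repeatedly, so the spread of the forecasts serves as an uncertainty estimate. The tiny forecaster and sample count here are illustrative, not the paper's graph-attention architecture.

    import torch
    import torch.nn as nn

    class TinyForecaster(nn.Module):
        """Placeholder forecaster with dropout; stands in for the graph-attention network."""
        def __init__(self, n_series: int = 4, hidden: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_series, hidden), nn.ReLU(), nn.Dropout(p=0.2),
                nn.Linear(hidden, n_series),
            )

        def forward(self, x):
            return self.net(x)

    @torch.no_grad()
    def mc_dropout_forecast(model: nn.Module, x: torch.Tensor, n_samples: int = 100):
        model.train()  # keep dropout active at inference time (the Monte Carlo trick)
        samples = torch.stack([model(x) for _ in range(n_samples)])
        return samples.mean(dim=0), samples.std(dim=0)  # point forecast and uncertainty

    x = torch.randn(1, 4)                      # last observation of 4 correlated series
    mean, std = mc_dropout_forecast(TinyForecaster(), x)
    print(mean.shape, std.shape)               # torch.Size([1, 4]) torch.Size([1, 4])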

    Locally lightweight course teaching-assistant system based on IPEX-LLM
    Jiarui ZHANG, Qiming ZHANG, Fenglin BI, Yanbin ZHANG, Wei WANG, Erjin REN, Haili ZHANG
    Journal of East China Normal University(Natural Science)    2024, 2024 (5): 162-172.   DOI: 10.3969/j.issn.1000-5641.2024.05.015
    Abstract views: 482 | HTML views: 14 | PDF (15203 KB) downloads: 80

    This study introduces and implements a local, lightweight, intelligent teaching-assistant system. Using the IPEX-LLM (Intel PyTorch extension for large language models) acceleration library, the system can efficiently deploy and execute large language models that are fine-tuned with the QLoRA (quantized low-rank adaptation) framework on devices with limited computational resources. Combined with retrieval-augmentation techniques, the system provides flexible course customization through four major functional modules: intelligent Q&A, automated question generation, syllabus creation, and course PPT generation. The system is intended to help educators improve the quality and efficiency of lesson preparation and delivery, safeguard data privacy, support personalized student learning, and offer real-time feedback. Performance tests on the optimized Chatglm3-6B model show the system's rapid inference capability: a 64-token output task is processed within 4.08 s in a resource-constrained environment. A practical case study comparing the functionality of the system with native Chatglm-6B and ChatGPT 4.0 further validates its superior accuracy and practicality.
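
    For the deployment side, the sketch below follows the general pattern of IPEX-LLM's transformers-style API for loading a model with low-bit quantization on a resource-constrained machine; the import path, the load_in_4bit flag, and the model path follow IPEX-LLM's published examples but should be treated as assumptions to verify against the installed version, not as the paper's exact code.

    # Hedged sketch: load a ChatGLM3-6B checkpoint with IPEX-LLM low-bit optimization.
    from transformers import AutoTokenizer
    from ipex_llm.transformers import AutoModelForCausalLM  # assumed import path

    MODEL_PATH = "THUDM/chatglm3-6b"  # placeholder; the paper uses a locally fine-tuned checkpoint

    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_PATH,
        load_in_4bit=True,        # assumed low-bit option for resource-constrained devices
        trust_remote_code=True,
    )

    prompt = "Draft three review questions on binary search trees."
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))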

    Arrangement and analysis of type specimens of the Shanghai Natural History Museum Herbarium
    Ruiping SHI, Bicheng LI, Chunqing WEN, Yunfei ZHANG, Qianqian WU, Xiangkun QIN
    Journal of East China Normal University(Natural Science)    2024, 2024 (4): 82-99.   DOI: 10.3969/j.issn.1000-5641.2024.04.009
    Abstract views: 481 | HTML views: 6 | PDF (13433 KB) downloads: 101

    To ascertain the status of the type specimens held in the Herbarium of the Shanghai Natural History Museum (SHM) and to promote their utilization and sharing, the collection information of the general specimens in SHM was compared with type specimen records from plant specimen resource sharing platforms and plant taxonomy journals, and 418 type specimens were confirmed. These specimens represent 239 species belonging to 147 genera in 69 families, and 390 of them are newly identified type specimens. The quantity, type category, species, dominant groups, collection locations, collection dates, and collectors of the type specimens in the herbarium were compiled and analysed.

    Machine learning-based remote sensing retrievals of dissolved organic carbon in the Yangtze River Estuary
    Hao CHEN, Xianqiang HE, Run LI, Fang CAO
    Journal of East China Normal University(Natural Science)    2024, 2024 (4): 123-136.   DOI: 10.3969/j.issn.1000-5641.2024.04.012
    Abstract views: 481 | HTML views: 17 | PDF (17080 KB) downloads: 136

    Dissolved organic carbon (DOC) is the largest reservoir of active organic matter in the ocean. Accurate characterization of the spatial and temporal patterns of DOC in large-river estuaries and neighboring coastal margins helps improve our understanding of biogeochemical processes and the fate of fluvial DOC across the estuary–coastal ocean continuum. By retrieving the absorption properties of colored dissolved organic matter (CDOM) in the dissolved organic matter (DOM) pool using machine learning models, and based on the correlation between CDOM absorption and DOC concentrations, we developed an ocean DOC algorithm for the GOCI satellite. The results indicated that the Nu-support vector regression (Nu-SVR) model performed best in retrieving CDOM absorption properties, with mean absolute percent differences (MAPD) of 32% and 8.6% for the CDOM absorption coefficient at 300 nm (aCDOM(300)) and the CDOM spectral slope over the 275–295 nm range (S275–295), respectively. DOC concentrations estimated from the seasonal linear relationship between aCDOM(300) and DOC achieved high retrieval accuracy, with MAPD of 11% and 14% for the training dataset based on field measurements and the satellite validation dataset, respectively. Application of the DOC algorithm to GOCI satellite imagery revealed that DOC levels varied dramatically at both seasonal and hourly scales. Surface DOC concentrations were elevated in summer and lower in winter, reflecting the seasonal cycle of Yangtze River discharge, and also changed rapidly on an hourly scale under the influence of tides and local wind regimes. This study provides a useful method for improving our understanding of DOC dynamics and their environmental controls across the estuarine–coastal ocean continuum.
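
    To illustrate the two-stage retrieval scheme (machine-learning inversion of aCDOM(300) from remote-sensing reflectance, then a linear aCDOM–DOC relationship), here is a small scikit-learn sketch; the synthetic data, band choice, and coefficients are placeholders, not the paper's GOCI model.

    import numpy as np
    from sklearn.svm import NuSVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Placeholder training data: remote-sensing reflectance in a few visible bands
    # and matched in-situ aCDOM(300) values (synthetic, for illustration only).
    Rrs = rng.uniform(0.001, 0.02, size=(200, 6))            # 6 spectral bands
    a_cdom_300 = 2.0 - 60.0 * Rrs[:, 2] + rng.normal(0, 0.05, 200)

    model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0))
    model.fit(Rrs, a_cdom_300)

    def mapd(y_true, y_pred):
        """Mean absolute percent difference, the accuracy metric used in the abstract."""
        return 100.0 * np.mean(np.abs(y_pred - y_true) / np.abs(y_true))

    pred = model.predict(Rrs)
    print(f"training MAPD: {mapd(a_cdom_300, pred):.1f}%")

    # Second stage: a seasonal linear aCDOM(300)-DOC relationship (coefficients are made up).
    doc = 40.0 + 25.0 * model.predict(Rrs[:5])               # micromol/L, illustrative
    print(doc.round(1))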

    Personalized knowledge concept recommendation for massive open online courses
    Chao KONG, Jiahui CHEN, Dan MENG, Huabin DIAO, Wei WANG, Liping ZHANG, Tao LIU
    Journal of East China Normal University(Natural Science)    2024, 2024 (5): 32-44.   DOI: 10.3969/j.issn.1000-5641.2024.05.004
    Abstract views: 454 | HTML views: 12 | PDF (1453 KB) downloads: 192

    In recent years, massive open online courses (MOOCs) have become a significant pathway for acquiring knowledge and skills. However, the increasing number of courses has led to severe information overload. Knowledge concept recommendation aims to identify and recommend specific knowledge points that students need to master. Existing research addresses the challenge of data sparsity by constructing heterogeneous information networks; however, there are limitations in fully leveraging these networks and considering the diverse interactions between learners and knowledge concepts. To address these issues, this study proposes a novel method, heterogeneous learning behavior-aware knowledge concept recommendation (HLB-KCR). First, it uses metapath-based random walks and skip-gram algorithms to generate semantically rich metapath embeddings and optimizes these embeddings through a two-stage enhancement module. Second, a multi-type interaction graph incorporating temporal contextual information is constructed, and a graph neural network (GNN) is employed for message passing to update the nodes, obtaining deep embedded representations that include time and interaction type information. Third, a semantic attention module is introduced to integrate metapath embeddings with multi-type interaction embeddings. Finally, an extended matrix factorization rating prediction module is used to optimize the recommendation algorithm. Extensive experiments on the large-scale public MOOCCubeX dataset demonstrate the effectiveness and rationality of the HLB-KCR method.
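
    As a concrete illustration of the first step, here is a small Python sketch of a metapath-guided random walk on a toy learner-course-concept graph; the graph, the metapath pattern, and the walk length are invented for illustration, and the skip-gram training that would follow is omitted.

    import random

    # Toy heterogeneous graph: node -> list of (neighbor, neighbor_type).
    GRAPH = {
        "u1": [("c1", "course"), ("c2", "course")],
        "c1": [("u1", "user"), ("k1", "concept"), ("k2", "concept")],
        "c2": [("u1", "user"), ("k2", "concept")],
        "k1": [("c1", "course")],
        "k2": [("c1", "course"), ("c2", "course")],
    }

    def metapath_walk(start: str, metapath: list[str], walk_length: int) -> list[str]:
        """Random walk that only follows neighbors matching the repeating metapath pattern."""
        walk, node = [start], start
        for step in range(1, walk_length):
            wanted = metapath[step % len(metapath)]
            candidates = [n for n, t in GRAPH.get(node, []) if t == wanted]
            if not candidates:
                break
            node = random.choice(candidates)
            walk.append(node)
        return walk

    random.seed(42)
    # Metapath "user -> course -> concept -> course -> ..." starting from a learner node.
    print(metapath_walk("u1", ["user", "course", "concept", "course"], walk_length=7))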

    Purging diffusion models through CLIP-based fine-tuning
    Ping WU, Xin LIN
    Journal of East China Normal University(Natural Science)    2025, 2025 (1): 138-150.   DOI: 10.3969/j.issn.1000-5641.2025.01.011
    Abstract views: 449 | HTML views: 5 | PDF (1531 KB) downloads: 66

    Diffusion models have revolutionized text-to-image synthesis, enabling users to generate high-quality and imaginative artwork from simple natural-language text prompts. Unfortunately, because of their large and unfiltered training datasets, these models can also generate inappropriate content such as nudity and violence. To deploy such models with a higher level of safety, we propose a novel method, directional contrastive language-image pre-training (CLIP) loss-based fine-tuning, dubbed CLIF. The method utilizes a directional CLIP loss to suppress the model’s ability to generate inappropriate content, and is lightweight and immune to circumvention. To demonstrate the effectiveness of CLIF, we also propose a benchmark called categorized toxic prompts (CTP) for evaluating a text-to-image diffusion model’s propensity to generate inappropriate content. As shown by our experiments on the CTP and common objects in context (COCO) datasets, CLIF significantly suppresses inappropriate generation while preserving the model’s ability to produce general content.
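
    For reference, a generic directional CLIP loss (as used in CLIP-guided generator fine-tuning) compares the direction between two image embeddings with the direction between two text embeddings in CLIP space; the PyTorch sketch below shows that generic loss only, with placeholder embeddings, and is not the specific CLIF objective.

    import torch
    import torch.nn.functional as F

    def directional_clip_loss(img_src: torch.Tensor, img_gen: torch.Tensor,
                              txt_src: torch.Tensor, txt_tgt: torch.Tensor) -> torch.Tensor:
        """1 - cosine similarity between the image-space and text-space edit directions."""
        d_img = F.normalize(img_gen - img_src, dim=-1)
        d_txt = F.normalize(txt_tgt - txt_src, dim=-1)
        return 1.0 - F.cosine_similarity(d_img, d_txt, dim=-1).mean()

    # Placeholder 512-d CLIP embeddings (in practice these come from a frozen CLIP encoder).
    img_src, img_gen = torch.randn(4, 512), torch.randn(4, 512)
    txt_src, txt_tgt = torch.randn(1, 512), torch.randn(1, 512)
    print(directional_clip_loss(img_src, img_gen, txt_src, txt_tgt).item())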

    Study on the influence of a knowledge graph-based learning system design on online learning results
    Kechen QU, Jinchang LI, Deming HUANG, Jia SONG
    Journal of East China Normal University(Natural Science)    2024, 2024 (5): 70-80.   DOI: 10.3969/j.issn.1000-5641.2024.05.007
    Abstract views: 434 | HTML views: 13 | PDF (2268 KB) downloads: 384

    Drawing on constructivism and competency-based theory, this paper proposes an online learning system design method based on a knowledge graph, which breaks with the traditional knowledge structure and builds a multi-dimensional competence framework of knowledge and skills aimed at improving competence. A learning system with the knowledge graph as its underlying logic and linked digital learning resources was built, and teaching practice and empirical research were then carried out. First, the learning system was validated with a questionnaire. Second, taking the ability to “read English academic papers” as the learning task, experimental and control groups were created to evaluate the participants’ understanding of knowledge and skills, memory level, and comprehensive application ability. The results showed that the experimental group outperformed the control group in total, knowledge, skill, and ability scores, supporting the effectiveness and usability of the learning system. Among these, the total and ability scores showed significant differences, indicating that the system helped improve online learning outcomes.

    A formal verification method for embedded operating systems
    Yang WANG, Jingcheng FANG, Xiong CAI, Zhipeng ZHANG, Yong CAI, Weikai MIAO
    Journal of East China Normal University(Natural Science)    2024, 2024 (4): 1-17.   DOI: 10.3969/j.issn.1000-5641.2024.04.001
    Abstract views: 429 | HTML views: 17 | PDF (1364 KB) downloads: 393

    The operating system is the core and foundation of the entire computer system. Its reliability and safety are vital, because faults or vulnerabilities in the operating system can lead to system crashes, data loss, privacy breaches, and security attacks. In safety-critical systems, any error in the operating system can result in significant loss of life and property. Ensuring the safety and reliability of operating systems has therefore long been a major challenge in both industry and academia. Current methods for verifying operating system safety include software testing, static analysis, and formal methods, of which formal methods are the most promising for ensuring safety and trustworthiness. With formal methods, mathematical models of the system can be established and formally analyzed and verified to discover potential errors and vulnerabilities; in an operating system, they can be used to verify the correctness and completeness of system functions as well as system safety. Building on existing formal verification work for operating systems, a formal verification scheme for embedded operating systems is proposed herein. The scheme uses the VCC (verified C compiler), CBMC (C bounded model checker), and PAT (process analysis toolkit) tools to verify the operating system at the unit, module, and system levels, respectively. The scheme was successfully applied to the task-scheduling architecture of a particular operating system, demonstrating a degree of generality for analyzing and verifying embedded operating systems.

    Bioinformatics-based construction of immune prognostic gene model for hepatocellular carcinoma and preliminary model validation
    Linding XIE, Yuan ZHANG, Yihong CAI
    Journal of East China Normal University(Natural Science)    2024, 2024 (4): 100-110.   DOI: 10.3969/j.issn.1000-5641.2024.04.010
    Abstract views: 429 | HTML views: 6 | PDF (4520 KB) downloads: 1240

    The Cancer Genome Atlas (TCGA) and International Cancer Genome Consortium (ICGC) databases were used to collect RNA sequencing data from patients with hepatocellular carcinoma (HCC). The key genes involved in the immune response to HCC were screened using non-negative matrix factorization (NMF) clustering and weighted gene co-expression network analysis (WGCNA). A prognostic gene model was constructed using least absolute shrinkage and selection operator (LASSO) regression analysis, and biological functions were analyzed using gene set enrichment analysis (GSEA). Subsequently, single-sample gene set enrichment analysis (ssGSEA) was used to assess immune infiltration and the related functional differences between patients in the two risk groups. Nomograms combining independent risk factors were constructed with the “rms” package in R to predict overall patient survival. Finally, preliminary clinical validation was performed using the Human Protein Atlas (HPA) database together with real-time quantitative fluorescent PCR (RT-qPCR). In conclusion, we integrated the clinical characteristics of patients with the risk scores to construct a verifiable and reproducible nomogram, providing a reliable reference for the precise treatment of patients in clinical oncology.

    Progress and critical issues in research on micro- and nanoplastics in the human body
    Tiefeng CUI, Daoji LI
    Journal of East China Normal University(Natural Science)    2024, 2024 (6): 1-13.   DOI: 10.3969/j.issn.1000-5641.2024.06.001
    Abstract views: 426 | HTML views: 23 | PDF (925 KB) downloads: 993

    Micro- and nanoplastics (M-NPs) are ubiquitous in the natural environment and have become a topic of concern. However, due to the lack of key data on human exposure to M-NPs, our understanding of the potential health risks posed by the entry of M-NPs into the human body is still limited. Current research indicates that M-NPs are commonly found in various parts of the human body. However, the experimental analysis techniques for M-NPs in the human body have not yet been standardized, with the main differences lying in sample pretreatment and detection methods. This increases the difficulty of conducting systematic research on the distribution, transfer, accumulation, and excretion of M-NPs in the human body. In addition, the study of nanoplastics (< 1 μm) still faces insurmountable technical obstacles. The experimental research results of M-NPs standard samples, although instructive, do not fully reflect the exposure risks of M-NPs in the real environment, and thus, do not have universal scientific significance. This review aims to provide direction for the standardization of experimental analysis and risk assessment for M-NPs in the human body.

    Knowledge graph completion by integrating textual information and graph structure information
    Houlong FAN, Ailian FANG, Xin LIN
    Journal of East China Normal University(Natural Science)    2025, 2025 (1): 111-123.   DOI: 10.3969/j.issn.1000-5641.2025.01.009
    Abstract views: 425 | HTML views: 11 | PDF (1436 KB) downloads: 111

    Based on path query information, we propose a graph attention model that effectively integrates textual and graph structure information in knowledge graphs, thereby enhancing knowledge graph completion. For textual information, a dual encoder based on pre-trained language models is used to obtain separate embedding representations of entities and path query information. An attention mechanism then aggregates the path query information to capture graph structural information and update the entity embeddings. The model was trained with contrastive learning, and experiments on multiple knowledge graph datasets achieved good results in both transductive and inductive settings. These results demonstrate the advantage of combining pre-trained language models with graph neural networks to capture both textual and graph structural information, thereby enhancing knowledge graph completion.
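
    The contrastive training mentioned above typically uses an InfoNCE-style objective that pulls an entity embedding toward the embedding of its own path query and pushes it away from the other queries in the batch; the PyTorch sketch below shows that generic objective with placeholder embeddings, not the paper's exact dual-encoder.

    import torch
    import torch.nn.functional as F

    def info_nce(entity_emb: torch.Tensor, query_emb: torch.Tensor,
                 temperature: float = 0.05) -> torch.Tensor:
        """In-batch contrastive loss: row i of entity_emb matches row i of query_emb."""
        entity_emb = F.normalize(entity_emb, dim=-1)
        query_emb = F.normalize(query_emb, dim=-1)
        logits = query_emb @ entity_emb.T / temperature     # (batch, batch) similarity matrix
        targets = torch.arange(logits.size(0))              # the diagonal holds the positive pairs
        return F.cross_entropy(logits, targets)

    # Placeholder embeddings from a text dual-encoder (e.g. a pre-trained language model).
    queries = torch.randn(32, 256)   # encoded path queries, e.g. "(head, r1/r2, ?)"
    entities = torch.randn(32, 256)  # encoded textual descriptions of the answer entities
    print(info_nce(entities, queries).item())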

    Algorithm for security management and privacy protection of education big data based on smart contracts
    Shaojie QIAO, Yuhe JIANG, Chenxu LIU, Cheqing JIN, Nan HAN, Shuaiwei HE
    Journal of East China Normal University(Natural Science)    2024, 2024 (5): 128-140.   DOI: 10.3969/j.issn.1000-5641.2024.05.012
    Abstract views: 425 | HTML views: 9 | PDF (1051 KB) downloads: 240

    Conventional education big data management faces security risks such as privacy data leakage, questionable data credibility, and unauthorized access. To avoid these risks, a novel education big data security management and privacy protection method based on smart contracts, ASPES, is proposed. It integrates an improved key splitting and sharing algorithm based on Shamir's secret sharing, a hybrid encryption algorithm based on SM2-SHA256-AES, and a smart contract management algorithm based on hierarchical data access control. Experiments conducted on the real MOOCCube dataset indicate that the execution efficiency and security of ASPES are significantly higher than those of state-of-the-art methods, and that ASPES can effectively store and manage education big data and support the reasonable distribution of educational resources. By embedding smart contracts into the blockchain and recording operations such as data reads and writes on the chain, ASPES optimizes the management path, improves management efficiency, ensures fairness in education, and considerably improves the quality of education.
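
    To make the key-splitting component concrete, here is a minimal, self-contained Python sketch of Shamir's (t, n) secret sharing over a prime field; the prime, threshold, and toy secret are illustrative, and the paper's improved splitting scheme and the SM2/SHA256/AES hybrid encryption are not reproduced.

    import random

    PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy 128-bit-scale secret

    def split_secret(secret: int, n: int, t: int):
        """Shamir (t, n): evaluate a random degree-(t-1) polynomial at x = 1..n."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = random.randrange(PRIME)                 # e.g. an AES key encoded as an integer
    shares = split_secret(key, n=5, t=3)
    assert reconstruct(shares[:3]) == key         # any 3 of the 5 shares recover the key
    assert reconstruct(shares[2:]) == key
    print("secret recovered:", reconstruct(random.sample(shares, 3)) == key)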

    Research on software classification based on the fusion of code and descriptive text
    Yuhang CHEN, Shizhou WANG, Zhengting TANG, Liangyu CHEN, Ningkang JIANG
    Journal of East China Normal University(Natural Science)    2025, 2025 (1): 46-58.   DOI: 10.3969/j.issn.1000-5641.2025.01.004
    Abstract views: 422 | HTML views: 14 | PDF (2128 KB) downloads: 342

    Third-party software systems play a significant role in modern software development. Software developers build software by retrieving appropriate dependency libraries from third-party software repositories according to their requirements, avoiding reinventing the wheel and thus speeding up development. However, retrieving third-party dependency libraries can be challenging. Typically, third-party software repositories provide preset tags (categories) for developers to search; when a software's preset tags are labeled incorrectly, developers cannot find the libraries they need, which inevitably affects the development process. This study proposes a software clustering model to address this challenge. The model combines method vectors, method importance, and text vectors to assign software of unknown category to known categories. In addition, because no publicly available dataset exists for this problem, we built a dataset and made it publicly available. The model was tested on this self-built dataset, which comprises 30 categories of software systems from the Maven repository. The category prediction accuracy was 70% for one candidate (top-1) and 90% for three candidates (top-3). The experimental results show that our model can help software developers find suitable software, is useful for classifying software systems in open-source repositories, and can assist developers in quickly locating third-party libraries.
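
    A simplified numpy sketch of the fusion-and-matching idea: code-derived method vectors (weighted by importance) and a descriptive-text vector are concatenated, and an unlabeled library is matched to the closest known category centroids for a top-k prediction. The feature dimensions and weighting scheme are invented for illustration and are not the paper's model.

    import numpy as np

    def fuse(method_vecs: np.ndarray, method_importance: np.ndarray,
             text_vec: np.ndarray) -> np.ndarray:
        """Importance-weighted average of method vectors, concatenated with the text vector."""
        w = method_importance / method_importance.sum()
        code_vec = (w[:, None] * method_vecs).sum(axis=0)
        fused = np.concatenate([code_vec, text_vec])
        return fused / np.linalg.norm(fused)

    def top_k_categories(fused: np.ndarray, centroids: dict[str, np.ndarray], k: int = 3):
        """Rank known category centroids by cosine similarity to the fused vector."""
        scores = {c: float(fused @ v / np.linalg.norm(v)) for c, v in centroids.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    rng = np.random.default_rng(1)
    library = fuse(rng.normal(size=(12, 64)),       # 12 methods, 64-d code embeddings
                   rng.uniform(0.1, 1.0, size=12),  # per-method importance
                   rng.normal(size=32))             # 32-d description embedding
    centroids = {c: rng.normal(size=96) for c in ["logging", "json", "http-client"]}
    print(top_k_categories(library, centroids))     # ranked category candidates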

    Surface-height- and uncertainty-based depth estimation for Mono3D
    Yinshuai JI, Jinhua XU
    Journal of East China Normal University(Natural Science)    2025, 2025 (1): 72-81.   DOI: 10.3969/j.issn.1000-5641.2025.01.006
    Abstract views: 420 | HTML views: 10 | PDF (1215 KB) downloads: 183

    Monocular three-dimensional (3D) object detection is a fundamental but challenging task in autonomous driving and robotic navigation. Directly predicting object depth from a single image is essentially an ill-posed problem. Geometry projection is a powerful depth estimation method that infers an object's depth from its physical height and its projected height in the image plane; however, height estimation errors are amplified when propagated to depth. In this study, the physical and projected heights of object surface points (rather than the height of the object itself) were estimated to obtain several depth candidates. The uncertainties of these heights were also estimated, and the final object depth was obtained by combining the depth candidates according to their uncertainties. Experiments demonstrated the effectiveness of this depth estimation method, which achieved state-of-the-art (SOTA) results on the monocular 3D object detection task of the KITTI dataset.
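
    A small numpy sketch of the underlying geometry and fusion: each surface point gives a depth candidate z = f·H/h from its physical height H and projected pixel height h, and the candidates are combined with inverse-variance weights derived from the predicted uncertainties. The focal length, heights, and uncertainty values are made-up numbers, not outputs of the paper's network.

    import numpy as np

    def depth_candidates(focal_px: float, H_m: np.ndarray, h_px: np.ndarray) -> np.ndarray:
        """Pinhole geometry: depth = focal_length * physical_height / projected_height."""
        return focal_px * H_m / h_px

    def fuse_by_uncertainty(depths: np.ndarray, sigmas: np.ndarray) -> float:
        """Inverse-variance weighting: more certain candidates contribute more."""
        w = 1.0 / sigmas**2
        return float((w * depths).sum() / w.sum())

    focal_px = 700.0                     # illustrative focal length in pixels
    H_m = np.array([1.20, 0.95, 1.45])   # predicted physical heights of surface points (m)
    h_px = np.array([42.0, 33.0, 50.5])  # their projected heights in the image (px)
    sigmas = np.array([0.8, 2.0, 0.5])   # predicted depth uncertainties (m)

    z = depth_candidates(focal_px, H_m, h_px)
    print(z.round(2))                                 # per-point depth candidates around 20 m
    print(round(fuse_by_uncertainty(z, sigmas), 2))   # uncertainty-weighted object depth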
