
Table of Contents

    25 September 2025, Volume 2025, Issue 5
    AI-Enabled Open Source Technologies and Applications
    Research on the GitHub developer geographic location prediction method based on multi-dimensional feature fusion
    Sijia ZHAO, Fanyu HAN, Wei WANG
    2025, 2025 (5):  1-13.  doi: 10.3969/j.issn.1000-5641.2025.05.001

    The geographic location information of developers is important for understanding the global distribution of open source activities and formulating regional policies. However, a substantial number of developer accounts on the GitHub platform lack geographic location information, limiting the comprehensive analysis of the geographic distribution of the global open source ecosystem. This study proposed a hierarchical geographic location prediction framework based on multidimensional feature fusion. By integrating three major categories of multidimensional features—temporal behavior, linguistic culture, and network characteristics—the framework established a four-tier progressive prediction mechanism consisting of rule-driven rapid positioning, name cultural inference, time zone cross-validation, and a deep learning ensemble. Experiments conducted on a large-scale dataset built from 50000 globally active developers demonstrated that this method successfully predicted the geographic locations of 82.52% of the developers. Among these, the name cultural inference layer covered most users with an accuracy of 0.7629, whereas the deep learning ensemble layer handled the most complex cases with an accuracy of 0.7557. A comparative analysis with the prediction results from the Moonshot large language model validated the superiority of the proposed method in complex geographic inference tasks.
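The four-tier progressive mechanism described here is essentially a cost-ordered fallback chain: cheap, precise layers answer first, and each later layer only handles the users the earlier layers could not resolve. A minimal sketch of that control flow (the layer names and field names below are illustrative assumptions, not the paper's implementation):

```python
def tiered_predict(user, layers):
    """Try each prediction layer in order (cheap rules first, expensive
    ensemble last); return the first layer that yields an answer."""
    for name, predict in layers:
        result = predict(user)
        if result is not None:
            return name, result
    return "unresolved", None

# Hypothetical layers, ordered as in the framework: rule-driven lookup,
# name cultural inference, time zone cross-validation, deep learning ensemble.
layers = [
    ("rule", lambda u: u.get("profile_location")),
    ("name-culture", lambda u: u.get("name_culture_guess")),
    ("timezone", lambda u: u.get("timezone_country")),
    ("ensemble", lambda u: "unknown-region"),
]
```

Each layer fires only when all cheaper layers abstain, which matches the reported coverage pattern: the name inference layer covers most users, while the ensemble absorbs the residual hard cases.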

    Application and evaluation of large language models in open source project topic annotation
    Dexin HE, Fanyu HAN, Wei WANG
    2025, 2025 (5):  14-24.  doi: 10.3969/j.issn.1000-5641.2025.05.002

    With the rapid development of open source communities, the number of GitHub projects has increased exponentially. However, a considerable portion of these projects lack explicit topic labels, creating challenges for developers in technology selection and project retrieval processes. Existing topic generation methods rely primarily on supervised learning paradigms that suffer from strong dependencies on high-quality annotated data and other limitations. This study addresses the accuracy and efficiency issues in open source community project topic annotation by conducting the first comprehensive study on the application effectiveness of large language models in GitHub project topic prediction tasks. We constructed a dataset containing 3000 popular GitHub projects that were selected based on a quantitative metric specifically designed to evaluate the activity and influence of open source projects, encompassing multidimensional features including repository names, README documents, and description information. Comparative experiments were conducted using several mainstream large language models from domestic and international sources including Claude 3.7 Sonnet, DeepSeek-V3, Gemini 2.0 Flash, GPT-4o, and Qwen-Plus. The results demonstrated that Claude 3.7 Sonnet achieved optimal performance across most evaluation metrics, and as the dataset scale expanded, the performances of all models tended to stabilize. The experiments proved that large language models exhibited excellent applicability in project topic annotation tasks, although significant performance differences existed among different models. These findings provide an important reference foundation for open source community project management and intelligent annotation system design.

    Liquidity design for ecological industries in the large language model era: Analysis of liquidity elements represented by open-source communities
    Xudong REN, Zhipeng HUANG, Jiaheng PENG, Wei WANG
    2025, 2025 (5):  25-31.  doi: 10.3969/j.issn.1000-5641.2025.05.003

    With the development of the digital economy, the explosion of mobile Internet, and the rise of cloud business models, ecological industries have shown significant vitality in the capital market. This article explores the “liquidity” element of the ecological industry, analyzes its value and role in building the ecological industry, and proposes suggestions for building an ecological industry cluster with open-source communities and open-source talent at its core. By formulating an equation for an “open-source index,” the article provides, for the first time, a method that can quantitatively identify the importance of the open-source ecosystem for building the industrial ecology. Finally, the article proposes specific strategies and improvement suggestions based on domestic mainstream cases of open-source ecological construction in ecological industry development. For future work, the article identifies several directions, including but not limited to the quantitative study of liquidity in the industrial ecology, cross-industry collaboration, the policy and legal environment, and security.

    Interactive data structure and algorithm visualization based on AI agents
    Ruiyang PANG, Xuesong LU
    2025, 2025 (5):  32-42.  doi: 10.3969/j.issn.1000-5641.2025.05.004

    Data structures and algorithms (DSA), as a core course in computer science education, play a key role in cultivating students’ programming skills and algorithmic thinking. Visualization can significantly enhance teaching effectiveness and deepen student understanding in DSA education. However, existing DSA visualization tools often rely on manually written visualization code, leading to limitations such as limited coverage, high maintenance costs, and a lack of interactivity, which make it difficult to meet the needs of dynamic demonstration and personalized teaching. With the outstanding performance of large language models (LLMs) in code generation, automated DSA visualization has become a promising possibility. Therefore, this study proposed an interactive visualization code generation method based on the reasoning and acting (ReAct) AI agent framework, aiming to address the low automation and insufficient interactivity of traditional visualization tools. By leveraging the code generation capabilities of LLMs and integrating with the data structure visualization (DSV) platform interface, the proposed method transformed Python-based DSA code into interactive, executable, and dynamically visualized code, thereby enhancing teaching clarity and the learning experience. To systematically evaluate the effectiveness of the method, we constructed a dataset of 150 pairs of DSA code and corresponding DSV visualization code and compared three approaches—direct prompting, chain-of-thought prompting, and the ReAct AI agent approach—across several mainstream LLMs. The experimental results showed that the proposed ReAct AI agent-based method significantly outperformed the other approaches in terms of compilation rate, execution rate, and usability rate, with the best performance observed with the DeepSeek-R1 model. This demonstrated notable improvements in the accuracy and interactivity of the generated visualization code.
This research confirms the feasibility and advantages of integrating LLMs with agent frameworks in DSA visualization teaching, offering a novel path toward building efficient, personalized, and automated tools for computer programming education.

    ATBench: Benchmark for evaluating analysis trajectories in end-to-end data analysis
    Xufei WANG, Huarong XU, Panfeng CHEN, Mei CHEN, Dan MA, Zhengxi CHEN, Xu TIAN, Hui LI
    2025, 2025 (5):  43-52.  doi: 10.3969/j.issn.1000-5641.2025.05.005

    This paper introduces ATBench, a benchmark designed for evaluating analysis trajectories in end-to-end data analysis tasks, to address the limitations in granularity and domain coverage present in current benchmarks. Analysis trajectories represent the process in which an agent iteratively poses questions, derives insights, and formulates conclusions around a specific analysis goal. Leveraging both existing benchmarks and real Kaggle task data, we constructed 151 evaluation datasets spanning eight distinct domains by employing an annotation strategy that balances goal-driven and exploratory approaches. Additionally, we propose a fine-grained evaluation metric, the analysis trajectory score, to assess an agent's coherent analytical capabilities during end-to-end data analysis tasks. Experimental results demonstrate that ATBench exhibits strong stability and discriminative power, effectively distinguishing performance differences among models in analytical tasks. The results also reveal the limitations in agents’ abilities for coherent analysis and insight discovery, thereby providing data-driven support for future improvements.

    Research on challenges and optimization of large multimodal model applications in treefall scenarios
    Lei FENG, Chaonan LI, Chunjie SHENG, Yuxing SHI, Yicheng HUANG, Jianhong JIN, Yun XU, Yuzhou DU, Nina ZHOU, Sihao MIAO
    2025, 2025 (5):  53-65.  doi: 10.3969/j.issn.1000-5641.2025.05.006

    Large multimodal models (LMMs) exhibit limited robustness in complex visual scenarios, such as identifying responsibility for fallen trees, a limitation that stems from their reliance on single-path reasoning. To address this, this study proposes a novel reasoning optimization method based on Beam Search Chain-of-Thought (BS-CoT). Conventional models often fall into a “first-impression” trap, in which an initial incorrect inference leads to an irreversible analytical failure. The proposed BS-CoT method counteracts this by exploring and evaluating multiple potential inference paths in parallel. It maintains a diverse set of hypotheses about the scene, continuously pruning the less likely ones, which effectively overcomes the tendency to commit to a single, fallacious line of reasoning. This significantly enhances visual decision-making capabilities in complex and noisy environments. To validate its efficacy, we constructed a specialized dataset capturing a wide array of treefall incidents in urban governance. Experimental results demonstrated that the proposed method achieved substantial improvements in both event recall and key information capture rates compared with baseline models. This research not only provides a reliable technical solution for visual decision-making challenges in urban public safety but also introduces a new, more robust paradigm for improving the reasoning reliability of large models in critical applications.
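The parallel-hypothesis idea behind BS-CoT can be sketched as a generic beam search over candidate reasoning paths. The `expand` and `score` functions stand in for the model's step generation and path evaluation; this is an illustrative sketch of the search pattern, not the paper's implementation:

```python
import heapq

def beam_search_reasoning(initial, expand, score, beam_width=3, depth=3):
    """Maintain the top-k reasoning hypotheses at every step instead of
    committing to a single chain; low-scoring branches are pruned."""
    beam = [(score(initial), initial)]
    for _ in range(depth):
        candidates = [(score(nxt), nxt)
                      for _, path in beam
                      for nxt in expand(path)]
        # prune: keep only the highest-scoring hypotheses
        beam = heapq.nlargest(beam_width, candidates, key=lambda t: t[0])
    return max(beam, key=lambda t: t[0])[1]
```

Keeping `beam_width > 1` is what lets an initially unpromising branch survive long enough to overturn a wrong first impression, which is exactly the failure mode the abstract calls the "first-impression" trap.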

    Innovative Practices of Open Source and AI in Education
    Synergy between large language models and open source ecosystems in AI education
    Lijun XU, Li YANG, Ziyi HUANG
    2025, 2025 (5):  66-75.  doi: 10.3969/j.issn.1000-5641.2025.05.007

    To address the challenges of outdated teaching resources, insufficient practical skills, and a lack of value-oriented guidance in education, this study constructs an innovative pedagogical model driven by the dual engines of large language models (LLMs) and the open source ecosystem. The model is designed to bridge the gap between theoretical knowledge and real-world engineering practice by integrating open-source tools, dynamic code repositories, and authentic project scenarios into the curriculum. Meanwhile, LLMs are employed as intelligent teaching assistants to enable personalized learning paths, generate automated feedback, and support immersive ideological and ethical modules. This research was implemented in the course “Artificial intelligence and its applications”, where a mixed-method evaluation was conducted. Quantitative metrics such as attendance, interaction frequency, repository contributions, and assignment performance were used to measure student engagement and learning effectiveness. Additionally, a set of custom-designed assessment formulas was used to evaluate cross-platform transferability and community participation. Experimental results from 90 undergraduate students showed that learners engaged in open-source collaboration and LLM-assisted learning achieved significantly higher scores in both technical proficiency and value cognition than those in the control group. The study demonstrates that the integration of LLMs and open-source collaboration can effectively enhance student autonomy, promote engineering skills, and reinforce ethical awareness. This dual-driven model not only offers a feasible approach for modernizing AI education but also contributes to the broader goal of cultivating socially responsible and technically competent AI talent.

    Static cognitive diagnosis model enhanced by knowledge point relations
    Henggui LIANG, Yihui ZHU, Xiaowen TANG, Mingdong ZHU
    2025, 2025 (5):  76-86.  doi: 10.3969/j.issn.1000-5641.2025.05.008

    Cognitive diagnosis, a core task in personalized education, aims to evaluate students’ mastery of knowledge points using historical response records. Existing static cognitive diagnosis models are typically based on manually annotated key knowledge points, ignoring potential correlations between knowledge points within items as well as differences in how items emphasize specific knowledge points. To address these limitations, this study proposes a static cognitive diagnosis model enhanced by knowledge point relations, the Q-matrix enhanced neural cognitive diagnosis (QENCD) model. The model optimizes the item-knowledge point association vector by constructing knowledge point dependency relationships and item emphasis information, then integrating these features through residual connections. The experimental results showed that the QENCD model significantly outperforms state-of-the-art baselines on the ASSIST09, ASSIST17, and Junyi datasets. This study provides a more precise knowledge modeling method for static cognitive diagnosis.

    Student employment prediction for digital jobs based on behavior in open-source communities
    Linna XIE, Xuesong LU
    2025, 2025 (5):  87-98.  doi: 10.3969/j.issn.1000-5641.2025.05.009

    Accurately predicting students’ post-graduation career paths plays a vital role in talent development in higher education and in refining recruitment strategies in industry. Most existing employment prediction research relies heavily on academic or campus-related data, while overlooking the role of students’ open-source contributions in the process of securing digital-related positions. This study addresses employment prediction for digital roles by analyzing students’ behaviors in open-source communities. We construct a heterogeneous graph comprising student nodes, code repository nodes, and various semantic relationships to model students’ expertise. To enhance prediction performance, we propose two strategies that integrate large language models (LLMs) with graph neural networks: LLM-as-Encoder and LLM-as-Explainer. Experiments on our curated dataset show that the proposed approach outperforms baseline methods, achieving improvements of 7.71% in accuracy and 9.19% in Macro-F1. By leveraging open-source activity, this study supports data-driven decision-making for university career services, aids enterprises in identifying technical talent, and provides students with actionable insights for career planning.

    Open Source Ecosystem: Development and Governance
    Research and analysis of the development of the open source ecosystem in the field of geographic information system
    Yuang ZHANG, Zhong XIE, Qinjun QIU, Liufeng TAO
    2025, 2025 (5):  99-108.  doi: 10.3969/j.issn.1000-5641.2025.05.010

    With the rapid advancement of information technology, the open source paradigm has become popular in multiple domains, including geographic information system (GIS). Developing an open, collaborative, and sustainable open source GIS ecosystem can promote GIS technology innovation, lower implementation costs, and foster development within the field. This study systematically investigates methods for developing an open source GIS ecosystem and its future trends, addressing four main aspects: ① reviewing the development history of open source GIS and the current technological landscape to refine a four-stage evolutionary framework; ② from a GIS perspective and based on the existing open source foundation, proposing a multi-layered ecosystem construction model specifically tailored to GIS; ③ introducing HyperCRX to perform quantitative analysis and visualization of four metrics—OpenRank, Activity, Contributors, and Participants—for eight representative open source GIS projects, thereby revealing differences in their influence, activity levels, and community engagement to reflect the current state of the ecosystem; and ④ summarizing the challenges faced by the open source GIS ecosystem in terms of public perception, talent cultivation, governance mechanisms, data–software coordination, and sustainable business models, as well as outlining future development directions and research hotspots in the era of large-scale models. It is hoped that this study will provide useful references for future research and practical applications.

    Open-source collaboration structure modeling and multilayer-network link-prediction methods
    Pu ZHAO, Qingxi PENG, Yuang ZHANG, Xiejie JIN, Dezhou ZHAO
    2025, 2025 (5):  109-124.  doi: 10.3969/j.issn.1000-5641.2025.05.011

    Collaborative relationships among open-source projects are becoming increasingly complex, involving multiple reuse mechanisms such as dependency co-usage, language consistency, and contributor overlap. Traditional graph models struggle to represent these heterogeneous structures in a unified manner, limiting their ability to identify potential collaboration links. This paper proposes an analytical framework that integrates multilayer graph modeling with structure-aware link prediction, tailored to open-source ecosystems. A three-layer unweighted graph is constructed to capture different types of collaborations, and two structural enhancements—layer overlap modulation and community-aware scoring—are introduced to improve structural perception and semantic interpretability. Experimental results on multiple real-world datasets show that the proposed method consistently outperforms mainstream link prediction algorithms, particularly in networks with high structural heterogeneity. Further analysis reveals that the predicted links exhibit strong community consistency and semantic recoverability. Overall, the proposed approach effectively uncovers latent collaboration paths among open-source projects and provides structural support for reuse modeling and community evolution analysis.
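As a rough illustration of how "layer overlap modulation" and "community-aware scoring" can enhance a classical structural score, consider a common-neighbour measure that is amplified when the overlap recurs across layers and given a bonus when both endpoints share a community. The function and parameter names here are assumptions for the sketch, not the paper's formulation:

```python
def multilayer_link_score(layers, u, v, overlap_boost=0.5,
                          community=None, community_bonus=1.0):
    """Sum common neighbours of (u, v) over all layers, amplify the score
    when the overlap spans several layers, and add a bonus when both
    nodes sit in the same community."""
    # each layer is a dict: node -> set of neighbours in that layer
    per_layer = [len(adj.get(u, set()) & adj.get(v, set())) for adj in layers]
    active_layers = sum(1 for c in per_layer if c > 0)
    # layer overlap modulation: evidence repeated across layers counts extra
    score = sum(per_layer) * (1 + overlap_boost * max(0, active_layers - 1))
    # community-aware scoring: same-community pairs get a bonus
    if community is not None and u in community \
            and community.get(u) == community.get(v):
        score += community_bonus
    return score
```

A pair supported in both the dependency layer and the contributor-overlap layer thus outscores a pair with the same raw neighbour count concentrated in a single layer, which is the intuition behind rewarding structural heterogeneity-spanning evidence.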

    Analysis of the status, hotspots, and trends of open-source innovation: A bibliometric study based on CNKI literature from 2005 to 2024
    Rui WANG, Qiuyue LYU, Jia LIAO
    2025, 2025 (5):  125-139.  doi: 10.3969/j.issn.1000-5641.2025.05.012

    This study systematically analyzes the evolutionary characteristics and research hotspots of open-source innovation in China. A dataset comprising 732 valid journal articles, with “open-source” in the title, was retrieved from the China National Knowledge Infrastructure (CNKI) for the period 2005–2024. A bibliometric approach was employed to examine such dimensions as annual publication volume, disciplinary distribution, keyword co-occurrence and clustering, burst keywords, and timeline evolution. The results indicate that research in this field has progressed through three stages: initial exploration, steady development, and rapid growth, with a significant surge in publications over the past five years. Disciplinary distribution analysis reveals a multidisciplinary landscape centered on library and information science, computer science, and industrial technology, which extends to fields such as education, management, and law. Keyword clustering analysis identifies nine core research areas, accompanied by a review of the representative literature within each cluster. Timeline evolution analysis suggests that future research will likely focus on the deep integration of artificial intelligence (AI) and open-source ecosystems, the evolution of collaboration and governance models in open-source communities, open-source software security and supply chain risk identification, and open-source law and intellectual property protection. On the basis of these findings, we propose several recommendations to foster the sustainable development of open-source innovation in China, including strengthening the synergistic governance of AI and open-source ecosystems, enhancing supply chain security systems, advancing innovations in legal and licensing frameworks, and constructing a digital open-source infrastructure oriented toward industrial and public services.

    A DTA-based activity evaluation method for high-star GitHub repositories
    Mingdong YOU, Jiaheng PENG, Fanyu HAN, Wei WANG
    2025, 2025 (5):  140-150.  doi: 10.3969/j.issn.1000-5641.2025.05.013

    In the context of identifying GitHub’s long-term active, high-star repositories—critical for assisting the development of robust open-source communities and vital digital infrastructure—we propose a novel method for evaluating the long-term activity of these repositories. This method is based on a general time series prediction model adapted to forecast repository activity metrics, rather than one specifically designed for this purpose. A key innovation of our method is the first-time use of the developer activity cycle as a pivotal feature. This improves the accuracy of predictions for repository development trends and provides a more nuanced understanding of project evolution. After meticulously modeling and mining the time series data of various activity indicators, we developed a new activity calculation formula: development trend-based activity (DTA). This formula allows a precise quantitative evaluation of a repository's true activity level. To rigorously validate our methodology, we designed and curated a comprehensive benchmark dataset with fine time granularity and broad coverage. Subsequently, we systematically evaluated the performance of multiple prediction models against this dataset, eventually identifying the best model for forecasting open-source repository activity. The experimental results conclusively demonstrate the effectiveness of our proposed method in accurately predicting the long-term activity of repositories. Consequently, using DTA to evaluate repository activity can enable open-source participants to effectively identify repositories poised for long-term engagement, strategically determine their participation focus, and thereby significantly promote the sustained development of open-source communities and critical digital infrastructure.

    Open source evaluatology: A framework and methodology for evaluating open source ecosystems based on evaluatology
    Shengyu ZHAO, Wei WANG, Fanyu HAN, Jiaheng PENG, Lan YOU
    2025, 2025 (5):  151-161.  doi: 10.3969/j.issn.1000-5641.2025.05.014

    The open source ecosystem, as a critical component of the modern software industry, has garnered increasing attention from both academia and industry regarding its evaluation challenges. However, existing evaluation methods face issues such as inconsistent evaluation standards, lack of theoretical grounding, and poor comparability of evaluation results. Guided by foundational theories of evaluatology, this study introduced a novel interdisciplinary research domain, open source evaluatology, for the first time, and established a theoretical framework and methodological system for evaluating the open source ecosystem. The primary contributions of this paper are as follows: developing the theoretical foundation of open source evaluatology based on the five axioms of evaluatology, and defining fundamental concepts, evaluation dimensions, and standards for open source ecosystem evaluation; designing an evaluation conditions framework comprising five levels, namely problem definition, task instances, algorithm mechanisms, implementation examples, and supporting systems; and proposing a hybrid evaluation model combining statistical and network metrics. Based on experiments conducted using the GitHub dataset, this study validated the proposed method from three dimensions: open source repositories, developers, and communities. The results demonstrated that the proposed evaluation model exhibited strong applicability and explanatory power in open source scenarios.

    Open source hardware: Driving force and future trends for the new industrial revolution
    Menghan HU, Wenjing CHENG, Xiang DAI, Yiqing LIU
    2025, 2025 (5):  162-169.  doi: 10.3969/j.issn.1000-5641.2025.05.015

    In the context of the new industrial revolution, with increasing computational complexity and intensified demand for customization, open source hardware has emerged as a key approach to overcome the limitations of closed architectures and enhance technological autonomy. This study focused on the Reduced Instruction Set Computer-Five (RISC-V) open instruction set architecture and systematically reviewed its ecosystem advantages and industrial value. It compared major domestic and international open source projects in terms of design openness, system flexibility, and collaborative innovation mechanisms. From a temporal perspective, this study analyzed the development trend of open source hardware, evolving from low-level architectural innovation to heterogeneous integration and scenario-based expansion. The findings indicated that open source hardware had broad application prospects in critical areas such as intelligent manufacturing, edge computing, and immersive terminals. This could significantly improve computational efficiency and reduce development complexity and system costs. Open source hardware drives chip design from a closed paradigm toward a shared model, offering new support for industrial intelligence upgrades and strategic technology security.

    OSS Insight: A platform for open source ecosystem spatiotemporal data analysis and insights
    Xiaowei CHEN, Wei WANG, Fanyu HAN, Guanglei BAO, Fei DONG, Hao HUO, Chen LIU
    2025, 2025 (5):  170-182.  doi: 10.3969/j.issn.1000-5641.2025.05.016

    An open source ecosystem abounds with valuable data, yet extracting insights requires innovative data infrastructure and analytical methods. To address this, OSS Insight was developed; it innovatively used a hybrid transactional and analytical processing (HTAP) database for efficient storage and querying of billions of GitHub event records and offered real-time exploration via a visual interface. It delved into spatiotemporal data analysis, modeling developer behaviors and ecosystem evolution, such as visualizing global contribution patterns. Integrated with large language models (LLMs), it enabled natural language to structured query language (SQL) conversion for intelligent querying. A case study of Kubernetes showcased its capabilities in analyzing developers, project evolution, and organizational collaboration. Experiments proved that OSS Insight efficiently analyzed large-scale open source data, and its LLM-driven interaction simplified data analysis and provided automated insights.

    Ethics, Laws, and Security in Open Source and AI
    Brief discussion on fair use for distribution of open-source large model datasets
    Yunhu ZHAO, Yuzhou YANG, Lin QIN
    2025, 2025 (5):  183-190.  doi: 10.3969/j.issn.1000-5641.2025.05.017

    The openness of large models requires not only sharing conventional computer software elements such as model architectures and training codes but also disclosing model parameters and datasets. Applying the analytical frameworks of the “four-factor test” and “three-step test” while considering the transformative nature and purpose of dataset distribution under open licenses as well as the public interest in technological development and application, one may conclude that distributing datasets for open-source large models constitutes fair use, thus obviating the need to obtain copyright licenses from upstream right holders. Such an approach satisfies governance requirements regarding artificial-intelligence transparency and actively contributes to promoting knowledge sharing.

    Dynamics model in two-layer networks and case study on generative artificial intelligence bias cognition propagation
    Hongmiao ZHU, Xiaodong ZHAO, Huimin ZHOU, Jiayin QI
    2025, 2025 (5):  191-201.  doi: 10.3969/j.issn.1000-5641.2025.05.018

    This study developed a model to understand the communication dynamics of generative artificial intelligence (GAI) bias cognition within a two-layer network comprising enterprise managers and ordinary employees. The model integrated the effects of communication between different levels and the impact of cognitive training. Using the next-generation matrix method, the study accurately calculated the propagation threshold R0, which served as a crucial quantitative foundation for effective governance. Specifically, when R0<1, deviant cognition tended to disappear spontaneously, whereas when R0>1, there was a risk of biased cognition spreading. Additionally, the study compared and evaluated two intervention strategies through numerical simulations, providing a comprehensive analysis of the mechanisms that drove the generation and dissemination of deviant cognition within enterprises, supported by case studies.
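The threshold behaviour described above can be reproduced in a toy two-group SIS-style model, with R0 taken as the spectral radius of the next-generation matrix F V^-1. All rate parameters and function names below are illustrative assumptions for the sketch, not values or code from the study:

```python
import math

def r0_two_layer(b_mm, b_me, b_em, b_ee, g_m, g_e):
    """Spectral radius of K = F V^-1 for a manager (m) / employee (e)
    two-group model: b_xy is the rate at which group y spreads biased
    cognition to group x; g_x is group x's correction (recovery) rate."""
    a, b = b_mm / g_m, b_em / g_e   # new cases among managers
    c, d = b_me / g_m, b_ee / g_e   # new cases among employees
    tr, det = a + d, a * d - b * c
    # largest eigenvalue of a 2x2 matrix; discriminant >= 0 since b, c >= 0
    return (tr + math.sqrt(tr * tr - 4 * det)) / 2

def simulate(b_mm, b_me, b_em, b_ee, g_m, g_e, steps=4000, dt=0.05):
    """Forward-Euler integration of the two-group dynamics from a
    small initial fraction of biased cognition in each group."""
    i_m = i_e = 0.01
    for _ in range(steps):
        di_m = (b_mm * i_m + b_em * i_e) * (1 - i_m) - g_m * i_m
        di_e = (b_me * i_m + b_ee * i_e) * (1 - i_e) - g_e * i_e
        i_m += dt * di_m
        i_e += dt * di_e
    return i_m, i_e
```

Running the simulation with subcritical rates (R0 < 1) drives both fractions toward zero, while supercritical rates (R0 > 1) settle at a persistent endemic level, matching the dichotomy the abstract reports.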

    Historical transformations and future prospects of open source ethics
    Biaowei ZHUANG, Runtao LIU
    2025, 2025 (5):  202-208.  doi: 10.3969/j.issn.1000-5641.2025.05.019

    Open source ethics constitute the distinctive moral code and values that the open-source ecosystem should follow. Through the evaluation of open-source socialization behaviors, open-source classic texts, and typical cases, three stages of ethical evolution are observed in open source: the “elite ethics” phase driven by early hackers; the “commercial ethics” phase marked by growing corporate involvement; and the current “external ethics” phase shaped by social responsibility, privacy concerns, and geopolitical issues. The integration of China’s open-source technology and traditional culture has driven open source ethics into a new phase of development.
