Journal of East China Normal University (Natural Science) ›› 2024, Vol. 2024 ›› Issue (5): 20-31. doi: 10.3969/j.issn.1000-5641.2024.05.003

• Learning Evaluation and Recommendation •


SA-MGKT: Multi-graph knowledge tracing method based on self-attention

Chang WANG1,2, Dan MA1,2,*, Huarong XU1,2, Panfeng CHEN1,2, Mei CHEN1,2, Hui LI1,2

  1. State Key Laboratory of Public Big Data, Guiyang 550025, China
  2. School of Computer Science and Technology, Guizhou University, Guiyang 550025, China
  • Received: 2024-07-12 Online: 2024-09-25 Published: 2024-09-23
  • Contact: Dan MA E-mail: dma@gzu.edu.cn
  • Supported by: National Natural Science Foundation of China (61462010); Science and Technology Program of Guizhou Province (黔科合重大专项[2024]003, 黔科合成果[2023]一般010)


Abstract:

This study proposes a multi-graph knowledge tracing method based on self-attention (SA-MGKT), which models students' knowledge mastery from their historical answer records and predicts their future learning performance. First, a student–exercise heterogeneous graph is constructed to represent the high-order relationships between students and exercises; graph contrastive learning is employed to capture students' answering preferences, and a three-layer LightGCN is used for graph representation learning. Second, information from a concept-association hypergraph and a directed transition graph is introduced, with node embeddings obtained through hypergraph convolutional and directed graph convolutional networks. Finally, a self-attention mechanism fuses the internal information of the exercise sequence with the latent information carried by the multi-graph representations, substantially improving the accuracy of the knowledge tracing model. Experiments on three benchmark datasets show encouraging results: compared with the baseline models, classification performance improves by 3.51%, 17.91%, and 1.47% on the three datasets, respectively. These findings validate the effectiveness of fusing multi-graph information with a self-attention mechanism for enhancing knowledge tracing models.
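
To make two of the building blocks named above concrete, the following is a minimal, illustrative PyTorch sketch of LightGCN-style propagation (no feature transforms or nonlinearities, with a layer-averaged readout) and self-attention fusion over per-exercise embeddings drawn from multiple graph views. All identifiers here (lightgcn_propagate, MultiGraphFusion, adj_norm, and so on) are assumptions made for illustration, not the authors' implementation; the paper may combine the graph views differently (e.g., concatenation with a projection rather than the summation used here).

```python
# Illustrative sketch only; not the authors' code.
import torch
import torch.nn as nn

def lightgcn_propagate(emb: torch.Tensor, adj_norm: torch.Tensor,
                       n_layers: int = 3) -> torch.Tensor:
    """LightGCN-style propagation: repeatedly aggregate neighbors over a
    symmetrically normalized adjacency matrix (dense here, for simplicity)
    and average the embeddings from all layers as the readout."""
    layer_embs = [emb]
    for _ in range(n_layers):
        emb = adj_norm @ emb          # pure neighborhood aggregation
        layer_embs.append(emb)
    return torch.stack(layer_embs, dim=0).mean(dim=0)

class MultiGraphFusion(nn.Module):
    """Fuse per-exercise embeddings from several graph encoders with
    self-attention along the exercise sequence."""
    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, seq_emb, hetero_emb, hyper_emb, trans_emb):
        # Combine the multi-graph views of each exercise (summation for
        # brevity), then let self-attention mix information across positions.
        x = seq_emb + hetero_emb + hyper_emb + trans_emb   # (batch, seq, dim)
        out, _ = self.attn(x, x, x)   # causal mask omitted for brevity
        return out

# Toy usage: 2 sequences of 5 exercises with 64-dimensional embeddings.
fusion = MultiGraphFusion(dim=64)
views = [torch.randn(2, 5, 64) for _ in range(4)]
fused = fusion(*views)               # -> (2, 5, 64)
```

In a real knowledge tracing pipeline, the attention output would feed a prediction head estimating the probability of answering the next exercise correctly, and a causal mask would prevent attending to future interactions.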

Key words: knowledge tracing, graph contrastive learning, self-attention

CLC number: