Journal of East China Normal University (Natural Science), 2024, Vol. 2024, Issue (2): 65-75. doi: 10.3969/j.issn.1000-5641.2024.02.008

• Computer Science •


Dual-path network with multilevel interaction for one-stage visual grounding

Yue WANG, Jiabo YE, Xin LIN*

  1. School of Computer Science and Technology, East China Normal University, Shanghai 200062, China
  • Received: 2022-12-08; Online: 2024-03-25; Published: 2024-03-18
  • Contact: Xin LIN, E-mail: xlin@cs.ecnu.edu.cn


Abstract:

This study explores multimodal understanding and reasoning for one-stage visual grounding. Existing one-stage methods extract visual feature maps and textual features separately and then perform multimodal reasoning to predict the bounding box of the referred object. These methods suffer from two weaknesses. First, the pre-trained visual feature extractor introduces text-unrelated visual signals into the visual features, which hinders multimodal interaction. Second, the reasoning process in these methods lacks visual guidance for language modeling. As a result, the reasoning ability of existing one-stage methods is limited. We propose a low-level interaction that extracts text-related visual feature maps and a high-level interaction that incorporates visual features to guide language modeling and then performs multistep reasoning on the visual features. Based on the proposed interactions, we present a novel network architecture called the dual-path multilevel interaction network (DPMIN). Experiments on five commonly used visual grounding datasets demonstrate the superior performance of the proposed method and its real-time applicability.

Key words: visual grounding, multimodal understanding, referring expressions
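To make the two interaction levels described in the abstract concrete, the following minimal PyTorch sketch gives one possible reading of them; it is not the published DPMIN implementation. The low-level interaction is rendered here as a text-conditioned gating of the visual feature map, and the high-level interaction as cross-attention alternating between word and visual features over a few reasoning steps. All module names, dimensions, the gating scheme, and the attention layout are assumptions made for illustration.

# Illustrative sketch only: NOT the authors' DPMIN implementation.
# Shows, under assumed shapes and module names, the two ideas in the abstract:
# (1) a low-level interaction that filters the visual feature map with text,
# (2) a high-level interaction where visual features guide language modeling
#     over several reasoning steps.
import torch
import torch.nn as nn


class LowLevelInteraction(nn.Module):
    """Gate the visual feature map with a sentence embedding (assumed gating scheme)."""
    def __init__(self, vis_dim: int, txt_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(txt_dim, vis_dim), nn.Sigmoid())

    def forward(self, vis_map: torch.Tensor, sent_emb: torch.Tensor) -> torch.Tensor:
        # vis_map: (B, C, H, W), sent_emb: (B, txt_dim)
        g = self.gate(sent_emb)[:, :, None, None]            # (B, C, 1, 1)
        return vis_map * g                                    # suppress text-unrelated channels


class HighLevelInteraction(nn.Module):
    """Visually guided language modeling with multistep reasoning (assumed cross-attention)."""
    def __init__(self, dim: int, steps: int = 3):
        super().__init__()
        self.steps = steps
        self.txt_from_vis = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.vis_from_txt = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, txt: torch.Tensor, vis: torch.Tensor):
        # txt: (B, L, D) word features, vis: (B, HW, D) flattened visual features
        for _ in range(self.steps):
            txt = txt + self.txt_from_vis(txt, vis, vis)[0]   # visual guidance for language
            vis = vis + self.vis_from_txt(vis, txt, txt)[0]   # reasoning step on visual features
        return txt, vis


class DualPathGroundingSketch(nn.Module):
    """Toy one-stage grounding head; the box regressor and dimensions are assumptions."""
    def __init__(self, vis_dim: int = 256, txt_dim: int = 256):
        super().__init__()
        self.low = LowLevelInteraction(vis_dim, txt_dim)
        self.high = HighLevelInteraction(vis_dim)
        self.box_head = nn.Linear(vis_dim, 4)                 # predict (x, y, w, h)

    def forward(self, vis_map: torch.Tensor, word_feats: torch.Tensor) -> torch.Tensor:
        sent_emb = word_feats.mean(dim=1)                     # crude sentence embedding
        vis_map = self.low(vis_map, sent_emb)                 # low-level interaction
        vis_seq = vis_map.flatten(2).transpose(1, 2)          # (B, HW, C)
        _, vis_seq = self.high(word_feats, vis_seq)           # high-level interaction
        return self.box_head(vis_seq.mean(dim=1))             # one box per image


if __name__ == "__main__":
    model = DualPathGroundingSketch()
    boxes = model(torch.randn(2, 256, 20, 20), torch.randn(2, 12, 256))
    print(boxes.shape)  # torch.Size([2, 4])

The sketch keeps the two paths separable: the gating module can be attached to any pre-trained backbone output, while the cross-attention loop is where the number of reasoning steps (a hyperparameter assumed here to be 3) controls the depth of multistep reasoning.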

CLC number: