Adaptive tracking of moving targets in video sequences based on deep learning

Author: LI Jiaqi

Affiliation: School of Film, Modern College of Northwest University, Xi'an, Shaanxi 710130, China

Author biography: LI Jiaqi (1990-), female, M.S., lecturer; main research interest: film and television media. Email: egelge401@163.com

Corresponding author:

Funding:

Ethical statement:

Abstract:

To address the problem of low tracking accuracy in video sequences caused by appearance changes, background clutter, and severe occlusion, a novel two-stage adaptive tracking model is proposed. The model comprises two stages, target detection and bounding box estimation: in the target detection stage, the model coarsely locates the target; in the bounding box estimation stage, the target's position is determined precisely. To cope with the complexity of video scenes and the difficulty of tracking small targets, multi-feature fusion is used to construct a rich target representation. Experimental results show that, compared with Simple Online and Realtime Tracking (SORT), Tracktor++, FairMOT, and Transformer-based models, the proposed model achieves the best overall performance, effectively balancing computational speed and tracking accuracy and demonstrating good application potential.
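The abstract only outlines the architecture, so the following PyTorch sketch is a rough, hypothetical illustration of a two-stage tracker with multi-feature fusion, not the authors' actual model. All names and design details here (MultiFeatureFusion, TwoStageTracker, the heat-map detection head, the box-regression head, the channel sizes) are assumptions introduced for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiFeatureFusion(nn.Module):
    """Fuse feature maps from several backbone stages into one representation.

    Hypothetical module: the abstract only states that multi-feature fusion is
    used; the upsample-concatenate-1x1-convolution scheme here is an assumption.
    """

    def __init__(self, in_channels=(64, 128, 256), out_channels=128):
        super().__init__()
        self.reduce = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, feats):
        # Align every feature map to the largest spatial size, then concatenate.
        target_size = feats[0].shape[-2:]
        aligned = [
            F.interpolate(f, size=target_size, mode="bilinear", align_corners=False)
            for f in feats
        ]
        return self.reduce(torch.cat(aligned, dim=1))


class TwoStageTracker(nn.Module):
    """Stage 1: coarse target detection; stage 2: bounding box estimation."""

    def __init__(self, feat_channels=128):
        super().__init__()
        self.fusion = MultiFeatureFusion(out_channels=feat_channels)
        # Stage 1: coarse localization as a per-pixel objectness heat map.
        self.detect_head = nn.Conv2d(feat_channels, 1, kernel_size=3, padding=1)
        # Stage 2: bounding box estimation, regressing (cx, cy, w, h).
        self.bbox_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_channels, 4),
        )

    def forward(self, feats):
        fused = self.fusion(feats)                        # fused multi-feature map
        heatmap = torch.sigmoid(self.detect_head(fused))  # stage 1: coarse location
        box = self.bbox_head(fused * heatmap)             # stage 2: refined box
        return heatmap, box


# Toy usage with random multi-scale features standing in for a real backbone.
feats = [torch.randn(1, 64, 64, 64),
         torch.randn(1, 128, 32, 32),
         torch.randn(1, 256, 16, 16)]
heatmap, box = TwoStageTracker()(feats)
print(heatmap.shape, box.shape)  # torch.Size([1, 1, 64, 64]) torch.Size([1, 4])
```

In a real tracker the multi-scale features would come from a shared convolutional backbone applied to each frame, and the coarse heat map from stage one would restrict the region from which stage two regresses the final box; the element-wise gating used above is only the simplest possible coupling between the two stages.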

Citation:

LI Jiaqi. Adaptive tracking of moving targets in video sequences based on deep learning[J]. 太赫兹科学与电子信息学报, 2024, 22(11): 1304-1311.

History:
  • Received: 2024-03-31
  • Revised: 2024-05-24
  • Accepted:
  • Online: 2024-12-11
  • Published: