Title

Learning Dual-Fused Modality-Aware Representations for RGBD Tracking

Author
Gao, Shang; Yang, Jinyu; Li, Zhe; et al.
Corresponding Author
Zheng, Feng
Publication Years
2023
ISSN
0302-9743
EISSN
1611-3349
Source Title
Lecture Notes in Computer Science
Volume
13808 LNCS
Pages
478-494
Abstract
With the development of depth sensors in recent years, RGBD object tracking has received significant attention. Compared with traditional RGB object tracking, the additional depth modality can effectively mitigate interference between the target and the background. However, some existing RGBD trackers process the two modalities separately, so particularly useful information shared between them is ignored. On the other hand, some methods attempt to fuse the two modalities by treating them equally, resulting in the loss of modality-specific features. To tackle these limitations, we propose a novel Dual-fused Modality-aware Tracker (termed DMTracker), which aims to learn informative and discriminative representations of the target objects for robust RGBD tracking. The first fusion module extracts the information shared between the modalities based on cross-modal attention. The second integrates the RGB-specific and depth-specific information to enhance the fused features. By fusing both the modality-shared and the modality-specific information in a modality-aware scheme, our DMTracker can learn discriminative representations in complex tracking scenes. Experiments show that our proposed tracker achieves very promising results on challenging RGBD benchmarks. Code is available at https://github.com/ShangGaoG/DMTracker.
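The authors' implementation is available at the repository linked above. Purely as an illustration of the cross-modal attention idea the abstract describes (extracting modality-shared features by letting each modality attend to the other, then enhancing them with modality-specific residuals), a minimal NumPy sketch might look like the following. All function names and the residual-plus-concatenation fusion are assumptions for illustration, not the authors' actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(f_rgb, f_d):
    """Each modality queries the other, so the outputs carry
    information shared between RGB and depth features."""
    d = f_rgb.shape[-1]
    # RGB tokens query depth tokens (scaled dot-product attention).
    attn_rgb = softmax(f_rgb @ f_d.T / np.sqrt(d))
    shared_rgb = attn_rgb @ f_d
    # Depth tokens query RGB tokens.
    attn_d = softmax(f_d @ f_rgb.T / np.sqrt(d))
    shared_d = attn_d @ f_rgb
    return shared_rgb, shared_d

def modality_aware_fusion(f_rgb, f_d):
    # First stage: modality-shared features via cross-modal attention.
    shared_rgb, shared_d = cross_modal_attention(f_rgb, f_d)
    # Second stage (illustrative): re-inject modality-specific features
    # as residuals, then concatenate the two enhanced streams.
    return np.concatenate([shared_rgb + f_rgb, shared_d + f_d], axis=-1)

rng = np.random.default_rng(0)
f_rgb = rng.standard_normal((16, 64))  # 16 tokens of 64-dim RGB features
f_d = rng.standard_normal((16, 64))    # matching depth features
fused = modality_aware_fusion(f_rgb, f_d)
print(fused.shape)  # (16, 128): both enhanced streams side by side
```

The sketch keeps the two fusion stages separate, mirroring the abstract's description: shared information is computed first, and modality-specific information is added afterwards rather than being averaged away.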
SUSTech Authorship
First ; Corresponding
Language
English
Scopus EID
2-s2.0-85151390839
Data Source
Scopus
Citation statistics
Cited Times [WOS]: 0
Document Type
Conference paper
Identifier
http://kc.sustech.edu.cn/handle/2SGJ60CL/524275
Department
Department of Computer Science and Engineering
Affiliation
1. Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
2. University of Birmingham, Birmingham, United Kingdom
3. University of Electronic Science and Technology of China, Chengdu, China
First Author Affiliation
Department of Computer Science and Engineering
Corresponding Author Affiliation
Department of Computer Science and Engineering
First Author's First Affiliation
Department of Computer Science and Engineering
Recommended Citation
GB/T 7714
Gao, Shang; Yang, Jinyu; Li, Zhe; et al. Learning Dual-Fused Modality-Aware Representations for RGBD Tracking[C], 2023: 478-494.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.