Title

VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix

Author
Teng Wang; Wenhao Jiang; Zhichao Lu; et al.
Corresponding Author
Feng Zheng
Publication Year
2022
Conference Name
39th International Conference on Machine Learning (ICML)
ISSN
2640-3498
Source Title
Conference Date
JUL 17-23, 2022
Conference Place
Baltimore, MD, USA
Publication Place
1269 LAW ST, SAN DIEGO, CA, UNITED STATES
Publisher
JMLR-Journal Machine Learning Research
Abstract
Existing vision-language pre-training (VLP) methods primarily rely on paired image-text datasets, which are either annotated with enormous human labor or crawled from the internet and then subjected to elaborate data cleaning. To reduce the dependency on well-aligned image-text pairs, it is promising to directly leverage large-scale text-only and image-only corpora. This paper proposes a data augmentation method, namely cross-modal CutMix (CMC), for implicit cross-modal alignment learning in unpaired VLP. Specifically, CMC transforms natural sentences from the textual view into a multi-modal view, where visually-grounded words in a sentence are randomly replaced by diverse image patches with similar semantics. The proposed CMC has several appealing properties. First, it enhances data diversity while keeping the semantic meaning intact, which helps in settings where aligned data are scarce; second, by attaching cross-modal noise to uni-modal data, it guides models to learn token-level interactions across modalities for better denoising. Furthermore, we present a new unpaired VLP method, dubbed VLMixer, that integrates CMC with contrastive learning to pull together the uni-modal and multi-modal views for better instance-level alignment across modalities. Extensive experiments on five downstream tasks show that VLMixer surpasses previous state-of-the-art unpaired VLP methods. Project page: https://github.com/ttengwang/VLMixer
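As a rough illustration of the CMC augmentation described in the abstract, the Python sketch below replaces visually-grounded words in a sentence with semantically similar image-patch embeddings drawn from a patch bank. The function name, the patch_bank structure, and the replacement probabilities are illustrative assumptions, not the authors' implementation; see the project page above for the actual code.

```python
# Minimal sketch of cross-modal CutMix (CMC) on a tokenized sentence.
# patch_bank, replace_prob, and patches_per_word are assumed for
# illustration; they do not reproduce the paper's exact procedure.
import random

def cross_modal_cutmix(tokens, patch_bank, replace_prob=0.25, patches_per_word=1):
    """Build a multi-modal view of a sentence.

    tokens:      list of word strings from a natural sentence.
    patch_bank:  dict mapping a visually-grounded word to a list of
                 candidate image-patch embeddings with similar semantics.
    Returns a mixed sequence of word strings and patch embeddings.
    """
    mixed = []
    for tok in tokens:
        candidates = patch_bank.get(tok)
        if candidates and random.random() < replace_prob:
            # Sample diverse patches so the same sentence yields varied views.
            k = min(patches_per_word, len(candidates))
            mixed.extend(random.sample(candidates, k=k))
        else:
            mixed.append(tok)
    return mixed
```

For example, with tokens = "a dog sits on the grass".split() and a patch_bank holding several patch embeddings for "dog" and "grass", repeated calls produce different multi-modal views of the same sentence, which is the diversity-with-intact-semantics property the abstract describes.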
SUSTech Authorship
First; Corresponding
Language
English
URL [Source Record]
Indexed By
Funding Project
National Natural Science Foundation of China (61972188, 62122035, 61906081, 62106097); China Postdoctoral Science Foundation (2021M691424, 27208720, 17212120)
WOS Research Area
Computer Science
WOS Subject
Computer Science, Artificial Intelligence
WOS Accession No
WOS:000900130203039
Data Source
Web of Science
Citation statistics
Cited Times [WOS]: 0
Document Type
Conference paper
Identifier
http://kc.sustech.edu.cn/handle/2SGJ60CL/415616
Department
Department of Computer Science and Engineering
Affiliation
1.Department of Computer Science and Engineering, Southern University of Science and Technology
2.Department of Computer Science, The University of Hong Kong
3.Data Platform, Tencent
First Author Affiliation
Department of Computer Science and Engineering
Corresponding Author Affiliation
Department of Computer Science and Engineering
First Author's First Affiliation
Department of Computer Science and Engineering
Recommended Citation
GB/T 7714
Teng Wang, Wenhao Jiang, Zhichao Lu, et al. VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix[C]. San Diego, CA, United States: JMLR-Journal Machine Learning Research, 2022.
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Teng Wang]'s Articles
[Wenhao Jiang]'s Articles
[Zhichao Lu]'s Articles
Baidu Scholar
Similar articles in Baidu Scholar
[Teng Wang]'s Articles
[Wenhao Jiang]'s Articles
[Zhichao Lu]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Teng Wang]'s Articles
[Wenhao Jiang]'s Articles
[Zhichao Lu]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.