Title | LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT |
Author | Wang, Rui; Bai, Qibing; Ao, Junyi; et al. |
Corresponding Author | Wei,Zhihua |
DOI | |
Publication Years | 2022 |
Conference Name | Interspeech Conference |
ISSN | 2308-457X |
EISSN | 1990-9772 |
Source Title | |
Volume | 2022-September |
Pages | 1686-1690 |
Conference Date | SEP 18-22, 2022 |
Conference Place | Incheon, South Korea |
Publication Place | C/O EMMANUELLE FOXONET, 4 RUE DES FAUVETTES, LIEU DIT LOUS TOURILS, BAIXAS, F-66390, FRANCE |
Publisher | ISCA-INT SPEECH COMMUNICATION ASSOC |
Abstract | Self-supervised speech representation learning has shown promising results in various speech processing tasks. However, the pre-trained models, e.g., HuBERT, are storage-intensive Transformers, limiting their scope of applications under low-resource settings. To this end, we propose LightHuBERT, a once-for-all Transformer compression framework, to find the desired architectures automatically by pruning structured parameters. More precisely, we create a Transformer-based supernet that is nested with thousands of weight-sharing subnets and design a two-stage distillation strategy to leverage the contextualized latent representations from HuBERT. Experiments on automatic speech recognition (ASR) and the SUPERB benchmark show the proposed LightHuBERT enables over 10^9 architectures concerning the embedding dimension, attention dimension, head number, feed-forward network ratio, and network depth. LightHuBERT outperforms the original HuBERT on ASR and five SUPERB tasks with the HuBERT size, achieves comparable performance to the teacher model in most tasks with a reduction of 29% parameters, and obtains a 3.5× compression ratio in three SUPERB tasks, e.g., automatic speaker verification, keyword spotting, and intent classification, with a slight accuracy loss. The code and pre-trained models are available at https://github.com/mechanicalsea/lighthubert. |
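The abstract describes a once-for-all supernet whose weight-sharing subnets are indexed by the embedding dimension, attention dimension, head number, feed-forward network ratio, and network depth. As a purely illustrative aid (not the paper's actual configuration and not the API of the linked lighthubert repository), a minimal Python sketch of sampling one subnet configuration from such a search space might look like this:

import random

# Hypothetical search-space axes mirroring the dimensions named in the abstract.
# The candidate values below are invented for illustration; the real LightHuBERT
# space is far larger (the abstract cites over 10^9 architectures).
SEARCH_SPACE = {
    "embed_dim": [512, 640, 768],
    "attn_dim": [512, 640, 768],
    "num_heads": [8, 10, 12],
    "ffn_ratio": [3.0, 3.5, 4.0],
    "depth": [10, 11, 12],
}

def sample_subnet(space, rng=random):
    """Draw one weight-sharing subnet configuration from the supernet space."""
    return {axis: rng.choice(values) for axis, values in space.items()}

def space_size(space):
    """Count configurations in this toy space; per-layer choices would multiply it further."""
    total = 1
    for values in space.values():
        total *= len(values)
    return total

if __name__ == "__main__":
    print("sampled subnet:", sample_subnet(SEARCH_SPACE))
    print("toy space size:", space_size(SEARCH_SPACE))

In an actual once-for-all setup, a configuration like the one sampled here would select a slice of the shared supernet weights, so any nested subnet can be extracted and deployed without retraining.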
Keywords | |
SUSTech Authorship | Others |
Language | English |
URL | [Source Record] |
Indexed By | |
Funding Project | National Natural Science Foundation of China["61976160","61906137","61976158","62076184","62076182"]
; Shanghai Science and Technology Plan Project[21DZ1204800]
; Technology research plan project of the Ministry of Public Security[2020JSYJD01] |
WOS Research Area | Acoustics
; Audiology & Speech-Language Pathology
; Computer Science
; Engineering |
WOS Subject | Acoustics
; Audiology & Speech-Language Pathology
; Computer Science, Artificial Intelligence
; Engineering, Electrical & Electronic |
WOS Accession No | WOS:000900724501174 |
Scopus EID | 2-s2.0-85140048392 |
Data Source | Scopus |
Citation statistics | Cited Times [WOS]: 0 |
Document Type | Conference paper |
Identifier | http://kc.sustech.edu.cn/handle/2SGJ60CL/406917 |
Department | Department of Computer Science and Engineering |
Affiliation | 1.Department of Computer Science and Technology, Tongji University, China; 2.Department of Computer Science and Engineering, Southern University of Science and Technology, China; 3.School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; 4.Microsoft; 5.Peng Cheng Laboratory, China; 6.ByteDance AI Lab |
Recommended Citation GB/T 7714 | Wang,Rui,Bai,Qibing,Ao,Junyi,et al. LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT[C]. C/O EMMANUELLE FOXONET, 4 RUE DES FAUVETTES, LIEU DIT LOUS TOURILS, BAIXAS, F-66390, FRANCE:ISCA-INT SPEECH COMMUNICATION ASSOC,2022:1686-1690. |
Files in This Item: | There are no files associated with this item. |
|
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.