Title | Model Compression by Iterative Pruning with Knowledge Distillation and Its Application to Speech Enhancement |
Author | Wei, Zeyuan; Li, Hao; Zhang, Xueliang |
DOI | |
Publication Years | 2022 |
Conference Name | Interspeech Conference |
ISSN | 2308-457X |
EISSN | 1990-9772 |
Source Title | |
Volume | 2022-September |
Pages | 941-945 |
Conference Date | SEP 18-22, 2022 |
Conference Place | Incheon, South Korea |
Publication Place | C/O EMMANUELLE FOXONET, 4 RUE DES FAUVETTES, LIEU DIT LOUS TOURILS, BAIXAS, F-66390, FRANCE |
Publisher | ISCA - International Speech Communication Association |
Abstract | Over the past decade, deep learning has demonstrated its effectiveness and keeps setting new records in a wide variety of tasks. However, strong model performance usually comes with a huge number of parameters and extremely high computational complexity, which greatly limits the use of deep learning models, particularly in embedded systems. Model compression is therefore attracting increasing attention. In this paper, we propose a compression strategy based on iterative pruning and knowledge distillation. Specifically, in each iteration, we first apply a pruning criterion to drop the weights that have the least impact on performance. The model before pruning then serves as a teacher to fine-tune the pruned model, which acts as the student. After several iterations, we obtain the final compressed model. The proposed method is evaluated on a gated convolutional recurrent network (GCRN) and a long short-term memory (LSTM) network for single-channel speech enhancement. Experimental results show that the proposed compression strategy can reduce the model size of GCRN by a factor of 40 without significant performance degradation. |
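The prune-then-distill loop described in the abstract can be summarized in a short sketch. Below is a minimal PyTorch-style illustration, not the authors' implementation: it assumes global magnitude pruning as the pruning criterion and an MSE loss toward the teacher's output for distillation (the abstract specifies neither), and the names `magnitude_prune`, `distill_step`, `ratio`, and `alpha` are hypothetical.

```python
import copy
import torch
import torch.nn as nn

def magnitude_prune(model, ratio):
    """Zero out the smallest-magnitude fraction `ratio` of surviving weights.

    Global magnitude pruning is an assumed criterion; the paper only says
    "a pruning criterion" drops weights with little impact on performance.
    """
    flat = torch.cat([p.detach().abs().flatten()
                      for p in model.parameters() if p.dim() > 1])
    alive = flat[flat > 0]                      # rank only unpruned weights
    k = max(1, int(ratio * alive.numel()))
    threshold = alive.kthvalue(k).values
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                         # prune weight matrices, keep biases
            masks[name] = (p.detach().abs() > threshold).float()
            p.data.mul_(masks[name])
    return masks

def distill_step(teacher, student, masks, batch, optimizer, alpha=0.5):
    """One fine-tuning step mixing the task loss with a distillation loss."""
    x, y = batch                                # noisy input, clean target
    with torch.no_grad():
        t_out = teacher(x)                      # soft target from the teacher
    s_out = student(x)
    loss = (alpha * nn.functional.mse_loss(s_out, y)
            + (1 - alpha) * nn.functional.mse_loss(s_out, t_out))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    for name, p in student.named_parameters():  # keep pruned weights at zero
        if name in masks:
            p.data.mul_(masks[name])
    return loss.item()

def iterative_prune_distill(model, loader, n_iters=5, ratio=0.5, steps=100):
    """Prune, then distill from the pre-pruning snapshot; repeat n_iters times."""
    for _ in range(n_iters):
        teacher = copy.deepcopy(model).eval()   # model before pruning = teacher
        masks = magnitude_prune(model, ratio)   # model after pruning = student
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        for _, batch in zip(range(steps), loader):
            distill_step(teacher, model, masks, batch, optimizer)
    return model
```

As rough arithmetic, a per-iteration ratio of 0.5 over five iterations leaves about (1 - 0.5)^5 ≈ 3% of the weights, i.e. roughly a 32x reduction, on the order of the 40x reported for GCRN; the actual pruning schedule is not given in this record.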
Keywords | |
SUSTech Authorship | Others |
Language | English |
URL | [Source Record] |
Indexed By | |
WOS Research Area | Acoustics ; Audiology & Speech-Language Pathology ; Computer Science ; Engineering |
WOS Subject | Acoustics ; Audiology & Speech-Language Pathology ; Computer Science, Artificial Intelligence ; Engineering, Electrical & Electronic |
WOS Accession No | WOS:000900724501024 |
Scopus EID | 2-s2.0-85140075848 |
Data Source | Scopus |
Citation statistics | Cited Times [WOS]: 0 |
Document Type | Conference paper |
Identifier | http://kc.sustech.edu.cn/handle/2SGJ60CL/406914 |
Department | Department of Electrical and Electronic Engineering |
Affiliation | 1. Department of Computer Science, Inner Mongolia University, China; 2. Department of Electrical and Electronic Engineering, Southern University of Science and Technology, China |
Recommended Citation GB/T 7714 | Wei Zeyuan, Li Hao, Zhang Xueliang. Model Compression by Iterative Pruning with Knowledge Distillation and Its Application to Speech Enhancement[C]. Baixas, France: ISCA - International Speech Communication Association, 2022: 941-945. |
Files in This Item: | There are no files associated with this item. |