Adversarial VAE with Normalizing Flows for Multi-Dimensional Classification
Exploiting correlations among class variables to facilitate the learning process is a key challenge in Multi-Dimensional Classification (MDC). Label embedding is an efficient strategy for MDC problems. However, previous MDC methods only use this technique for feature augmentation and train a separate model for each class variable. Such two-stage approaches may produce unstable results and achieve suboptimal performance. In this paper, we propose an end-to-end model called Adversarial Variational AutoEncoder with Normalizing Flows (ADVAE-Flow), which encodes both features and class variables into probabilistic latent spaces. Specifically, considering the heterogeneity of class spaces, we introduce a normalizing-flow module to increase the capacity of the probabilistic latent spaces. Adversarial training is then adopted to align the transformed latent spaces obtained by the normalizing flows. Extensive experiments on eight MDC datasets demonstrate the superiority of the proposed ADVAE-Flow model over state-of-the-art MDC models.
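The abstract does not specify which flow family ADVAE-Flow uses, so the following is only an illustrative sketch of the general idea: a chain of invertible transforms applied to a Gaussian latent sample, each contributing a log-determinant correction to the density. It uses a planar flow as a stand-in; the function name, shapes, and flow choice here are assumptions, not the authors' implementation.

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar flow f(z) = z + u * tanh(w.z + b).

    Returns the transformed sample and the log|det Jacobian| term
    needed to track the density of the flowed latent variable.
    """
    a = np.tanh(np.dot(w, z) + b)           # scalar activation h(w.z + b)
    f_z = z + u * a                         # transformed latent sample
    psi = (1.0 - a ** 2) * w                # h'(w.z + b) * w
    log_det = np.log(np.abs(1.0 + np.dot(u, psi)))
    return f_z, log_det

# Stack several flows to increase the capacity of a simple Gaussian
# latent space, as the abstract describes in spirit.
rng = np.random.default_rng(0)
z = rng.standard_normal(2)                  # base sample z0 ~ N(0, I)
log_det_sum = 0.0                           # accumulated log|det J| corrections
for _ in range(4):
    u, w = rng.standard_normal(2), rng.standard_normal(2)
    b = rng.standard_normal()
    z, log_det = planar_flow(z, u, w, b)
    log_det_sum += log_det                  # log q_K(z_K) = log q_0(z_0) - sum
```

In a full model the parameters `u`, `w`, `b` would be learned per flow layer rather than sampled randomly, and the accumulated log-determinant enters the VAE's variational objective.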
Cited Times [WOS]: 0
Document Type: Conference paper
Department: Southern University of Science and Technology
Affiliations:
1. University of California Irvine, Irvine, United States
2. Southern University of Science and Technology, Shenzhen, China
3. Hong Kong University of Science and Technology, Hong Kong
4. City University of Hong Kong, Hong Kong
5. Peng Cheng Laboratory, Shenzhen, China
Corresponding Author Affiliation: Southern University of Science and Technology
Zhang, Wenbo, Gou, Yunhao, Jiang, Yuepeng, et al. Adversarial VAE with Normalizing Flows for Multi-Dimensional Classification [C], 2022: 205-219.