Title | Split-AE: An Autoencoder-based Disentanglement Framework for 3D Shape-to-shape Feature Transfer |
Author | Saha, Sneha; Minku, Leandro L.; Yao, Xin; et al. |
DOI | |
Publication Years | 2022 |
Conference Name | IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) / IEEE World Congress on Computational Intelligence (IEEE WCCI) / International Joint Conference on Neural Networks (IJCNN) / IEEE Congress on Evolutionary Computation (IEEE CEC) |
ISSN | 2161-4393 |
Source Title | |
Volume | 2022-July |
Conference Date | July 18-23, 2022 |
Conference Place | Padua, Italy |
Publication Place | 345 E 47th St, New York, NY 10017, USA |
Publisher | IEEE |
Abstract | Recent advances in machine learning include generative models such as autoencoders (AEs) for learning and compressing 3D data to generate low-dimensional latent representations of 3D shapes. Learning latent representations that disentangle the underlying factors of variation in 3D shapes is an intuitive way to achieve generalization in generative models. However, learning a generative model of 3D shapes whose latent variables are disentangled and represent different interpretable aspects of 3D shapes remains an open problem. In this paper, we propose Split-AE, an autoencoder-based architecture that partitions the latent space into two sets, named content and style codes. The content code represents global features of 3D shapes that differentiate between semantic categories of shapes, while the style code represents distinct visual features that differentiate between shape categories with similar semantic meaning. We present qualitative and quantitative experiments to verify feature disentanglement using our Split-AE. Further, we demonstrate that, given a source shape as an initial shape and a target shape as a style reference, the trained Split-AE combines the content of the source and the style of the target shape to generate a novel augmented shape that possesses the distinct features of the target shape category while maintaining similarity to the global features of the source shape. We conduct a qualitative study showing that the augmented shapes exhibit a realistic, interpretable mixture of content and style features across different shape classes with similar semantic meaning. |
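The abstract describes the core mechanism of Split-AE: an encoder whose latent vector is partitioned into a content code and a style code, and a decoder that reconstructs a shape from both, so that shape-to-shape transfer amounts to recombining the source content code with the target style code. The following is a minimal illustrative sketch of that idea in PyTorch. It is not the authors' implementation: the network sizes, the flattened point-cloud input, the latent dimensions, and the transfer_style helper are assumptions made purely for illustration.

import torch
import torch.nn as nn

class SplitAE(nn.Module):
    """Toy autoencoder whose latent vector is split into content and style codes."""
    def __init__(self, in_dim=2048 * 3, content_dim=64, style_dim=64):
        super().__init__()
        self.content_dim = content_dim
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, content_dim + style_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + style_dim, 512), nn.ReLU(),
            nn.Linear(512, in_dim),
        )

    def encode(self, x):
        z = self.encoder(x)
        # Partition the latent vector into a content code and a style code.
        return z[:, :self.content_dim], z[:, self.content_dim:]

    def decode(self, content, style):
        # Reconstruct a shape from the concatenated content and style codes.
        return self.decoder(torch.cat([content, style], dim=1))

    def forward(self, x):
        content, style = self.encode(x)
        return self.decode(content, style)

def transfer_style(model, source, target):
    """Combine the source content code with the target style code (illustrative helper)."""
    with torch.no_grad():
        content_src, _ = model.encode(source)
        _, style_tgt = model.encode(target)
        return model.decode(content_src, style_tgt)

# Usage with random stand-in data; real inputs would be encoded 3D shapes.
model = SplitAE()
source = torch.randn(1, 2048 * 3)
target = torch.randn(1, 2048 * 3)
augmented = transfer_style(model, source, target)

In this sketch the split is a simple slicing of one latent vector; the paper's training objective for making the two codes disentangled and interpretable is not reproduced here.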
Keywords | |
SUSTech Authorship | Others |
Language | English |
URL | [Source Record] |
Indexed By | |
WOS Research Area | Computer Science; Engineering; Neurosciences & Neurology |
WOS Subject | Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Engineering, Electrical & Electronic; Neurosciences |
WOS Accession No | WOS:000867070907051 |
Scopus EID | 2-s2.0-85140714216 |
Data Source | Scopus |
Citation statistics | Cited Times [WOS]: 0 |
Document Type | Conference paper |
Identifier | http://kc.sustech.edu.cn/handle/2SGJ60CL/415601 |
Department | Department of Computer Science and Engineering |
Affiliation | 1. Honda Research Institute Europe, Offenbach, Germany; 2. School of Computer Science, University of Birmingham, Birmingham, United Kingdom; 3. SUSTech, Department of Computer Science and Engineering, China |
Recommended Citation GB/T 7714 | Saha, Sneha, Minku, Leandro L., Yao, Xin, et al. Split-AE: An Autoencoder-based Disentanglement Framework for 3D Shape-to-shape Feature Transfer[C]. New York, NY, USA: IEEE, 2022. |
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.