Title | DS3-Net: Difficulty-Perceived Common-to-T1ce Semi-supervised Multimodal MRI Synthesis Network |
Author | |
Corresponding Author | Tang, Xiaoying |
DOI | |
Publication Years | 2022 |
Conference Name | 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) |
ISSN | 0302-9743 |
EISSN | 1611-3349 |
ISBN | 978-3-031-16445-3 |
Source Title | |
Pages | 571-581 |
Conference Date | SEP 18-22, 2022 |
Conference Place | Singapore, SINGAPORE |
Publication Place | GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND |
Publisher | SPRINGER INTERNATIONAL PUBLISHING AG |
Abstract | Contrast-enhanced T1 (T1ce) is one of the most essential magnetic resonance imaging (MRI) modalities for diagnosing and analyzing brain tumors, especially gliomas. In clinical practice, common MRI modalities such as T1, T2, and fluid-attenuated inversion recovery (FLAIR) are relatively easy to access, while T1ce is more challenging given the additional cost and the potential risk of allergy to the contrast agent. Therefore, it is of great clinical necessity to develop a method to synthesize T1ce from other common modalities. Current paired image translation methods typically require a large amount of paired data and do not focus on specific regions of interest, e.g., the tumor region, in the synthesis process. To address these issues, we propose a Difficulty-perceived common-to-T1ce Semi-Supervised multimodal MRI Synthesis network (DS3-Net), involving both paired and unpaired data together with dual-level knowledge distillation. DS3-Net predicts a difficulty map to progressively promote the synthesis task. Specifically, a pixelwise constraint and a patchwise contrastive constraint are guided by the predicted difficulty map. Through extensive experiments on the publicly available BraTS2020 dataset, DS3-Net outperforms its supervised counterpart in every respect. Furthermore, with only 5% paired data, the proposed DS3-Net achieves competitive performance with state-of-the-art image translation methods utilizing 100% paired data, delivering an average SSIM of 0.8947 and an average PSNR of 23.60. The source code is available at https://github.com/Huangziqi777/DS-3_Net. |
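Editor's note: the abstract describes a pixelwise constraint guided by a predicted difficulty map. The following minimal Python sketch illustrates one plausible form of such a difficulty-weighted pixelwise loss; it is an assumption for illustration only, not taken from the DS3-Net paper or its repository, and all names (difficulty_weighted_l1, fake_t1ce, real_t1ce, diff_map) are hypothetical.

    # Minimal sketch (assumed, not the authors' code): a pixelwise L1 loss
    # weighted by a predicted difficulty map, mirroring the abstract's
    # description of a difficulty-guided pixelwise constraint.
    import torch

    def difficulty_weighted_l1(synth, target, difficulty):
        # Per-pixel absolute error |synthesized T1ce - real T1ce|
        per_pixel = torch.abs(synth - target)
        # Emphasize pixels judged hard (difficulty assumed in [0, 1])
        weights = 1.0 + difficulty
        return (weights * per_pixel).mean()

    # Hypothetical usage with tensors shaped (batch, channel, height, width)
    fake_t1ce = torch.rand(2, 1, 128, 128)
    real_t1ce = torch.rand(2, 1, 128, 128)
    diff_map = torch.rand(2, 1, 128, 128)
    loss = difficulty_weighted_l1(fake_t1ce, real_t1ce, diff_map)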
Keywords | |
SUSTech Authorship | First; Corresponding |
Language | English |
URL | [Source Record] |
Indexed By | |
Funding Project | National Natural Science Foundation of China [62071210] |
WOS Research Area | Imaging Science & Photographic Technology; Radiology, Nuclear Medicine & Medical Imaging |
WOS Subject | Imaging Science & Photographic Technology; Radiology, Nuclear Medicine & Medical Imaging |
WOS Accession No | WOS:000867434800054 |
Scopus EID | 2-s2.0-85139109005 |
Data Source | Scopus |
Citation statistics | Cited Times [WOS]: 0 |
Document Type | Conference paper |
Identifier | http://kc.sustech.edu.cn/handle/2SGJ60CL/406262 |
Department | Department of Electrical and Electronic Engineering |
Affiliation | 1.Department of Electrical and Electronic Engineering,Southern University of Science and Technology,Shenzhen,China 2.Department of Electrical and Electronic Engineering,The University of Hong Kong,Hong Kong |
First Author Affiliation | Department of Electrical and Electronic Engineering |
Corresponding Author Affiliation | Department of Electrical and Electronic Engineering |
First Author's First Affiliation | Department of Electrical and Electronic Engineering |
Recommended Citation GB/T 7714 | Huang, Ziqi, Lin, Li, Cheng, Pujin, et al. DS3-Net: Difficulty-Perceived Common-to-T1ce Semi-supervised Multimodal MRI Synthesis Network[C]. GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND: SPRINGER INTERNATIONAL PUBLISHING AG, 2022: 571-581. |
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.