Title | Self-supervised Blind2Unblind deep learning scheme for OCT speckle reductions |
Author | |
Corresponding Author | Chen, Jinna |
Publication Years | 2023-06-01 |
DOI | |
Source Title | BIOMEDICAL OPTICS EXPRESS |
ISSN | 2156-7085 |
Volume | 14 |
Issue | 6 |
Abstract | As a low-coherence interferometry-based imaging modality, optical coherence tomography (OCT) inevitably suffers from speckles originating from multiply scattered photons. Speckles hide tissue microstructures and degrade the accuracy of disease diagnoses, thus hindering OCT clinical applications. Various methods have been proposed to address this issue, yet they suffer from heavy computational loads, a lack of high-quality clean image priors, or both. In this paper, a novel self-supervised deep learning scheme, namely, a Blind2Unblind network with refinement strategy (B2Unet), is proposed for OCT speckle reduction using only a single noisy image. Specifically, the overall B2Unet network architecture is presented first; then a global-aware mask mapper and a loss function are devised to improve image perception and to optimize the sampled mask-mapper blind spots, respectively. To make the blind spots visible to B2Unet, a new re-visible loss is also designed, and its convergence is discussed with the speckle properties taken into account. Extensive experiments on different OCT image datasets are finally conducted to compare B2Unet with existing state-of-the-art methods. Both qualitative and quantitative results convincingly demonstrate that B2Unet outperforms state-of-the-art model-based and fully supervised deep-learning methods, and that it is robust and capable of effectively suppressing speckles while preserving important tissue microstructures in OCT images across different cases. © 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement |
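The masking-and-re-visible-loss pipeline summarized in the abstract can be sketched in code. Below is a minimal, hedged PyTorch sketch of a Blind2Unblind-style training step: a global mask mapper denoises grid-masked copies of a single noisy B-scan and gathers the blind-spot predictions, and a re-visible loss couples that estimate to a gradient-detached unmasked pass. The 4x4 masking grid, the weighting factor `eta`, and the generic `denoiser` module are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the Blind2Unblind-style training step described above.
# Grid size, `eta`, and `denoiser` are illustrative assumptions, not the
# authors' exact implementation.
import torch
import torch.nn.functional as F

def global_mask_mapper(noisy, denoiser, grid=4):
    """Denoise grid*grid masked copies of `noisy`, then gather every
    blind-spot prediction back into one full-resolution image."""
    gathered = torch.zeros_like(noisy)
    for i in range(grid):
        for j in range(grid):
            masked = noisy.clone()
            masked[:, :, i::grid, j::grid] = 0.0   # blank the blind spots
            out = denoiser(masked)
            # Keep only the predictions made at the blind-spot locations.
            gathered[:, :, i::grid, j::grid] = out[:, :, i::grid, j::grid]
    return gathered

def re_visible_loss(noisy, denoiser, eta=2.0, grid=4):
    """Blind term on the gathered blind-spot estimate, plus a re-visible
    term coupling it to a gradient-detached unmasked pass."""
    h = global_mask_mapper(noisy, denoiser, grid)  # blind-spot estimate
    with torch.no_grad():
        x_vis = denoiser(noisy)                    # full (visible) pass
    blind = F.mse_loss(h, noisy)
    revis = F.mse_loss(h + eta * x_vis, (1.0 + eta) * noisy) / (1.0 + eta) ** 2
    return blind + revis
```

Intuitively, the blind term trains the network Noise2Void-style on pixels it cannot see, while the re-visible term re-introduces the full noisy input (detached, so no identity shortcut) to keep the blind-spot predictions consistent with a globally informed estimate. At inference, only the unmasked pass `denoiser(noisy)` is needed; the masking machinery exists solely so training can proceed from single noisy images without clean references.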
URL | |
Indexed By | |
Language | English |
SUSTech Authorship | Corresponding |
Funding Project | National Natural Science Foundation of China [62220106006]; Basic and Applied Basic Research Foundation of Guangdong Province [2021B1515120013]; Key Research and Development Projects of Shaanxi Province [2021SF-342]; Key Research Project of Shaanxi Higher Education Teaching Reform [21BG005] |
WOS Research Area | Biochemistry & Molecular Biology; Optics; Radiology, Nuclear Medicine & Medical Imaging |
WOS Subject | Biochemical Research Methods; Optics; Radiology, Nuclear Medicine & Medical Imaging |
WOS Accession No | WOS:001014778000003 |
Publisher | Optica Publishing Group |
Data Source | Web of Science |
Citation Statistics | Cited Times [WOS]: 0 |
Document Type | Journal Article |
Identifier | http://kc.sustech.edu.cn/handle/2SGJ60CL/549208 |
Department | Department of Electrical and Electronic Engineering |
Affiliation | 1. Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China; 2. Northwestern Polytech Univ Shenzhen, Res & Dev Inst, Guangzhou 51800, Peoples R China; 3. Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore; 4. Soochow Univ, Sch Elect & Informat Engn, Suzhou 215006, Jiangsu, Peoples R China; 5. Southern Univ Sci & Technol, Dept Elect & Elect Engn, Shenzhen 518055, Guangdong, Peoples R China |
Corresponding Author Affiliation | Department of Electrical and Electronic Engineering |
First Author's First Affiliation | Department of Electrical and Electronic Engineering |
Recommended Citation GB/T 7714 | Yu, Xiaojun, Ge, Chenkun, Li, Mingshuai, et al. Self-supervised Blind2Unblind deep learning scheme for OCT speckle reductions[J]. BIOMEDICAL OPTICS EXPRESS, 2023, 14(6). |
APA | Yu, Xiaojun, Ge, Chenkun, Li, Mingshuai, Yuan, Miao, Liu, Linbo, ... & Chen, Jinna. (2023). Self-supervised Blind2Unblind deep learning scheme for OCT speckle reductions. BIOMEDICAL OPTICS EXPRESS, 14(6). |
MLA | Yu, Xiaojun, et al. "Self-supervised Blind2Unblind deep learning scheme for OCT speckle reductions." BIOMEDICAL OPTICS EXPRESS 14.6 (2023). |
Files in This Item: | There are no files associated with this item. |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.