Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27117
Title: SGU-Net: Shape-Guided Ultralight Network for Abdominal Image Segmentation
Authors: Lei, T
Sun, R
Du, X
Fu, H
Zhang, C
Nandi, AK
Keywords: medical image segmentation;deep learning;ultralight convolution;adversarial shape-constraint
Issue Date: 19-Jan-2023
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Citation: Lei, T. et al. (2023) 'SGU-Net: Shape-Guided Ultralight Network for Abdominal Image Segmentation', IEEE Journal of Biomedical and Health Informatics, 27 (3), pp. 1431-1442. doi: 10.1109/JBHI.2023.3238183.
Abstract: Copyright © The Author(s) 2023. Convolutional neural networks (CNNs) have achieved significant success in medical image segmentation. However, they typically require a large number of parameters, which makes them difficult to deploy on low-resource hardware such as embedded systems and mobile devices. Although some compact, less memory-hungry models have been reported, most of them suffer a degradation in segmentation accuracy. To address this issue, we propose a shape-guided ultralight network (SGU-Net) with extremely low computational cost. SGU-Net makes two main contributions. First, it presents an ultralight convolution that implements two separable convolutions simultaneously, i.e., asymmetric convolution and depthwise separable convolution; this ultralight convolution not only effectively reduces the number of parameters but also enhances the robustness of SGU-Net. Second, SGU-Net employs an additional adversarial shape constraint that lets the network learn the shape representation of targets through self-supervision, which significantly improves segmentation accuracy for abdominal medical images. SGU-Net is extensively tested on four public benchmark datasets: LiTS, CHAOS, NIH-TCIA, and 3Dircadb. Experimental results show that SGU-Net achieves higher segmentation accuracy with lower memory cost and outperforms state-of-the-art networks. Moreover, we apply the ultralight convolution in a 3D volume segmentation network, which achieves comparable performance with fewer parameters and less memory usage.
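The ultralight convolution described in the abstract can be illustrated with a short sketch. Below is a minimal PyTorch example, assuming the block factors a standard 3x3 convolution into a depthwise asymmetric 3x1/1x3 pair followed by a pointwise 1x1 convolution; the exact layer ordering, normalization, and activations in the paper may differ, and the class name UltralightConv is hypothetical. The point of the sketch is only to show where the parameter savings come from.

import torch.nn as nn

class UltralightConv(nn.Module):
    """Hypothetical sketch of an ultralight convolution combining
    asymmetric and depthwise separable convolution, per the abstract.
    Not the authors' exact design; illustrates parameter savings only."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Depthwise asymmetric pair: a 3x1 then a 1x3 convolution, each
        # applied per channel (groups=in_ch), approximates a 3x3 depthwise
        # convolution with 6 weights per channel instead of 9.
        self.dw_3x1 = nn.Conv2d(in_ch, in_ch, kernel_size=(3, 1),
                                padding=(1, 0), groups=in_ch, bias=False)
        self.dw_1x3 = nn.Conv2d(in_ch, in_ch, kernel_size=(1, 3),
                                padding=(0, 1), groups=in_ch, bias=False)
        # Pointwise 1x1 convolution mixes channels (the "separable" part).
        self.pw = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.dw_1x3(self.dw_3x1(x))
        return self.act(self.bn(self.pw(x)))

if __name__ == "__main__":
    # Parameter comparison against a plain 3x3 convolution (64 -> 64):
    plain = nn.Conv2d(64, 64, 3, padding=1, bias=False)
    light = UltralightConv(64, 64)
    print(sum(p.numel() for p in plain.parameters()))  # 36864
    print(sum(p.numel() for p in light.parameters()))  # 4608 (4480 conv + 128 BN)

Under these assumptions the block uses roughly an eighth of the parameters of a plain 3x3 convolution at the same channel width, which is consistent with the abstract's claim of extremely low memory cost.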
URI: https://bura.brunel.ac.uk/handle/2438/27117
DOI: https://doi.org/10.1109/JBHI.2023.3238183
ISSN: 2168-2194
Other Identifiers: ORCID iDs: Tao Lei https://orcid.org/0000-0002-2104-9298; Rui Sun https://orcid.org/0000-0003-4342-7103; Xiaogang Du https://orcid.org/0000-0002-9702-5524; Huazhu Fu https://orcid.org/0000-0002-9702-5524; Changqing Zhang https://orcid.org/0000-0003-1410-6650; Asoke K. Nandi https://orcid.org/0000-0001-6248-2875.
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © The Author(s) 2023. Published by Institute of Electrical and Electronics Engineers (IEEE). This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Size: 3.16 MB
Format: Adobe PDF


This item is licensed under a Creative Commons Attribution 4.0 License.