Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27263
Full metadata record
DC Field | Value | Language
dc.contributor.author | McConnell, N | -
dc.contributor.author | Ndipenoch, N | -
dc.contributor.author | Cao, Y | -
dc.contributor.author | Miron, A | -
dc.contributor.author | Li, Y | -
dc.date.accessioned | 2023-09-27T16:25:36Z | -
dc.date.available | 2023-09-27T16:25:36Z | -
dc.date.issued | 2023-09-30 | -
dc.identifier | ORCID iD: Alina Miron https://orcid.org/0000-0002-0068-4495 | -
dc.identifier | ORCID iD: Yongmin Li https://orcid.org/0000-0003-1668-2440 | -
dc.identifier.citation | McConnell, N. et al. (2023) 'Advanced Architectural Variations of nnUNet', Neurocomputing, 560, 126837, pp. 1 - 15. doi: 10.1016/j.neucom.2023.126837 | en_US
dc.identifier.issn | 0925-2312 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/27263 | -
dc.description | Data availability: Data utilised is publicly available and a link has been provided. | -
dc.description | Source code available via: https://github.com/niccolo246/Advanced_nnUNet.git | -
dc.description.abstract | The nnUNet is a state-of-the-art deep learning based segmentation framework which automatically and systematically configures the entire network training pipeline. We extend the network architecture component of the nnUNet framework by newly integrating mechanisms from advanced U-Net variations including residual, dense, and inception blocks as well as three forms of the attention mechanism. We propose the following extensions to nnUNet, namely Residual-nnUNet, Dense-nnUNet, Inception-nnUNet, Spatial-Single-Attention-nnUNet, Spatial-Multi-Attention-nnUNet, and Channel-Spatial-Attention-nnUNet. Furthermore, within Channel-Spatial-Attention-nnUNet we integrate our newly proposed variation of the channel-attention mechanism. We demonstrate that use of the nnUNet allows for consistent and transparent comparison of U-Net architectural modifications, while maintaining network architecture as the sole independent variable across experiments with respect to a dataset. The proposed variants are evaluated on eight medical imaging datasets consisting of 20 anatomical regions, which is the largest collection of datasets on which attention U-Net variants have been compared in a single work. Our results suggest that attention variants are effective at improving performance when applied to tumour segmentation tasks consisting of two or more target anatomical regions, and that segmentation performance is influenced by use of the deep supervision architectural feature. | -
dc.format.extent | 1 - 15 | -
dc.format.extent | Print-Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | Elsevier | en_US
dc.rights | Copyright © 2023 Elsevier. All rights reserved. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1016/j.neucom.2023.126837, made available on this repository under a Creative Commons CC BY-NC-ND attribution licence (https://creativecommons.org/licenses/by-nc-nd/4.0/). | -
dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | -
dc.subject | biomedical image segmentation | en_US
dc.subject | nnUNet | en_US
dc.subject | residual | en_US
dc.subject | dense | en_US
dc.subject | inception | en_US
dc.subject | attention | en_US
dc.title | Advanced Architectural Variations of nnUNet | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1016/j.neucom.2023.126837 | -
dc.relation.isPartOf | Neurocomputing | -
pubs.publication-status | Published | -
pubs.volume | 560 | -
dc.identifier.eissn | 1872-8286 | -
dc.rights.holder | Elsevier | -
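The abstract above describes replacing components of the nnUNet architecture with residual, dense, inception, and attention blocks. As a rough, hedged illustration of the kind of building block a residual variant uses, the PyTorch sketch below shows a generic 3D residual convolutional block; the class name, layer choices, and hyperparameters are assumptions for illustration only and are not taken from the authors' code, which is available at https://github.com/niccolo246/Advanced_nnUNet.git.

```python
# Minimal sketch of a residual convolutional block of the kind a
# Residual-style U-Net encoder stage might use. Illustrative only:
# names and layer choices are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class ResidualConvBlock(nn.Module):
    """Two 3x3x3 convolutions with a residual (identity) connection."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1)
        self.norm1 = nn.InstanceNorm3d(out_channels, affine=True)
        self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1)
        self.norm2 = nn.InstanceNorm3d(out_channels, affine=True)
        self.act = nn.LeakyReLU(inplace=True)
        # 1x1x1 projection so the identity path matches the output channel count.
        self.proj = (nn.Identity() if in_channels == out_channels
                     else nn.Conv3d(in_channels, out_channels, kernel_size=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = self.proj(x)
        out = self.act(self.norm1(self.conv1(x)))
        out = self.norm2(self.conv2(out))
        return self.act(out + identity)  # residual addition before final activation


if __name__ == "__main__":
    block = ResidualConvBlock(32, 64)
    x = torch.randn(1, 32, 16, 32, 32)  # (batch, channels, D, H, W)
    print(block(x).shape)               # torch.Size([1, 64, 16, 32, 32])
```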
Appears in Collections: Dept of Computer Science Embargoed Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Embargoed until 30 September 2024 | 10.84 MB | Adobe PDF


This item is licensed under a Creative Commons License.