Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/26038
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Wu, P
dc.contributor.author: Wang, Z
dc.contributor.author: Zheng, B
dc.contributor.author: Li, H
dc.contributor.author: Alsaadi, FE
dc.contributor.author: Zeng, N
dc.date.accessioned: 2023-03-03T10:33:57Z
dc.date.available: 2023-03-03T10:33:57Z
dc.date.issued: 2022-12-21
dc.identifier: ORCID iDs: Zidong Wang https://orcid.org/0000-0002-9576-7401; Han Li https://orcid.org/0000-0003-0276-9756; Fuad E. Alsaadi https://orcid.org/0000-0001-6420-3948; Nianyin Zeng https://orcid.org/0000-0002-6957-2942
dc.identifier: 106457
dc.identifier.citation: Wu, P. et al. (2022) 'AGGN: Attention-based glioma grading network with multi-scale feature extraction and multi-modal information fusion', Computers in Biology and Medicine, 152, 106457, pp. 1 - 10. doi: 10.1016/j.compbiomed.2022.106457 [en_US]
dc.identifier.issn: 0010-4825
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/26038
dc.description.abstract: In this paper, a novel magnetic resonance imaging (MRI)-oriented attention-based glioma grading network (AGGN) is proposed. By applying a dual-domain attention mechanism, both channel and spatial information are considered when assigning weights, which helps highlight the key modalities and locations in the feature maps. Multi-branch convolution and pooling operations are applied in a multi-scale feature extraction module to separately obtain shallow and deep features on each modality, and a multi-modal information fusion module is adopted to fully merge low-level detailed and high-level semantic features, promoting synergistic interaction among the different modalities. The proposed AGGN is comprehensively evaluated through extensive experiments, and the results demonstrate its effectiveness and superiority over other advanced models, as well as its high generalization ability and strong robustness. In addition, even without manually labeled tumor masks, AGGN achieves performance comparable to other state-of-the-art algorithms, which alleviates the excessive reliance on supervised information in the end-to-end learning paradigm. [en_US]
dc.description.sponsorship: This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 30-135-1443). The authors gratefully acknowledge the technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia. [en_US]
dc.format.extent: 1 - 10
dc.format.medium: Print-Electronic
dc.language: English
dc.language.iso: en_US [en_US]
dc.publisher: Elsevier [en_US]
dc.rights: Copyright © 2022 Elsevier Ltd. All rights reserved. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1016/j.compbiomed.2022.106457, made available on this repository under a Creative Commons CC BY-NC-ND attribution licence (https://creativecommons.org/licenses/by-nc-nd/4.0/).
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: artificial intelligence [en_US]
dc.subject: glioma grading [en_US]
dc.subject: feature extraction [en_US]
dc.subject: information fusion [en_US]
dc.subject: magnetic resonance imaging (MRI) [en_US]
dc.title: AGGN: Attention-based glioma grading network with multi-scale feature extraction and multi-modal information fusion [en_US]
dc.type: Article [en_US]
dc.identifier.doi: https://doi.org/10.1016/j.compbiomed.2022.106457
dc.relation.isPartOf: Computers in Biology and Medicine
pubs.publication-status: Published
pubs.volume: 152
dc.identifier.eissn: 1879-0534
dc.rights.holder: Elsevier Ltd.
Appears in Collections: Dept of Computer Science Embargoed Research Papers

Files in This Item:
File: FullText.pdf | Description: Embargoed until 21 December 2023 | Size: 14.82 MB | Format: Adobe PDF


This item is licensed under a Creative Commons License.