Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/25326
Full metadata record
DC Field: Value [Language]

dc.contributor.author: Wang, X
dc.contributor.author: Zhang, Y
dc.contributor.author: Lei, T
dc.contributor.author: Wang, Y
dc.contributor.author: Zhai, Y
dc.contributor.author: Nandi, A
dc.date.accessioned: 2022-10-17T13:29:08Z
dc.date.available: 2022-10-17T13:29:08Z
dc.date.issued: 2022-10-03
dc.identifier: ORCID iDs: Xuan Wang https://orcid.org/0000-0002-0842-6511; Tao Lei https://orcid.org/0000-0002-2104-9298; Yingbo Wang https://orcid.org/0000-0001-6447-8730; Asoke K. Nandi https://orcid.org/0000-0001-6248-2875
dc.identifier: 4941
dc.identifier.citation: Wang, X. et al. (2022) 'Dynamic Convolution Self-Attention Network for Land-Cover Classification in VHR Remote-Sensing Images', Remote Sensing, 14 (19), 4941, pp. 1-20. https://doi.org/10.3390/rs14194941 [en_US]
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/25326
dc.description: Data Availability Statement: The datasets used in this study have been published, and their addresses are https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-potsdam/ (accessed on 30 January 2021) and https://www2.isprs.org/commissions/comm2/wg4/benchmark/2d-sem-label-vaihingen/ (accessed on 30 January 2021).
dc.description.abstract: Current deep convolutional neural networks for very-high-resolution (VHR) remote-sensing image land-cover classification often suffer from two challenges. First, the feature maps extracted by encoders built on vanilla convolution usually contain substantial redundant information, which easily causes misclassification of land cover; moreover, these encoders usually require a large number of parameters and high computational costs. Second, because remote-sensing images are complex and contain many objects with large variations in scale, popular feature fusion modules struggle to improve the representation ability of networks. To address these issues, we propose a dynamic convolution self-attention network (DCSA-Net) for VHR remote-sensing image land-cover classification. The proposed network has two advantages. On the one hand, we designed a lightweight dynamic convolution module (LDCM) using dynamic convolution and a self-attention mechanism. This module extracts more useful image features than vanilla convolution, avoiding the negative effect of useless feature maps on land-cover classification. On the other hand, we designed a context information aggregation module (CIAM) with a ladder structure to enlarge the receptive field. This module aggregates multi-scale contextual information from feature maps with different resolutions using dense connections. Experimental results show that the proposed DCSA-Net is superior to state-of-the-art networks, with higher land-cover classification accuracy, fewer parameters, and lower computational cost. The source code is publicly available. [en_US]
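To make the dynamic-convolution idea behind the LDCM concrete, below is a minimal sketch of a generic dynamic convolution layer of the kind the abstract describes: K parallel candidate kernels are mixed by input-dependent attention weights, so the effective kernel adapts per sample at a cost close to a single convolution. This is an illustrative PyTorch reading of the general technique, not the authors' released DCSA-Net code; the class name DynamicConv2d and the hyperparameters num_kernels and reduction are assumptions for the example.

```python
# Illustrative sketch only: a generic dynamic convolution layer in the
# spirit of the paper's LDCM, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Mixes K candidate kernels with input-dependent attention weights."""
    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4, reduction=4):
        super().__init__()
        self.num_kernels = num_kernels
        self.out_ch = out_ch
        self.pad = kernel_size // 2  # assumes odd kernel_size ("same" output)
        # K candidate kernels held in a single parameter tensor
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        # Squeeze-and-excitation-style gate: one logit per candidate kernel
        hidden = max(in_ch // reduction, 1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, num_kernels, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Softmax over the K kernels gives per-sample mixing weights
        alpha = F.softmax(self.gate(x).view(b, self.num_kernels), dim=1)
        # Aggregate one kernel per sample: (b, out_ch, in_ch, kh, kw)
        weight = torch.einsum('bk,koihw->boihw', alpha, self.weight)
        # Grouped-conv trick: fold the batch into groups so each sample
        # is convolved with its own aggregated kernel in a single call
        x = x.reshape(1, b * c, h, w)
        weight = weight.reshape(b * self.out_ch, c, *self.weight.shape[-2:])
        out = F.conv2d(x, weight, padding=self.pad, groups=b)
        return out.reshape(b, self.out_ch, h, w)
```

As a quick smoke test, `DynamicConv2d(64, 128)(torch.randn(2, 64, 32, 32))` returns a tensor of shape (2, 128, 32, 32). The only overhead beyond a plain convolution is the small pooled gating branch, which is consistent with the abstract's emphasis on fewer parameters and lower computational cost.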
dc.description.sponsorship: National Natural Science Foundation of China (Program No. 61871259, 62271296, 61861024); in part by the Natural Science Basic Research Program of Shaanxi (Program No. 2021JC-47); in part by the Key Research and Development Program of Shaanxi (Program No. 2022GY-436, 2021ZDLGY08-07); in part by the Natural Science Basic Research Program of Shaanxi (Program No. 2022JQ-634, 2022JQ-018); and in part by the Shaanxi Joint Laboratory of Artificial Intelligence (No. 2020SS-03). [en_US]
dc.format.extent: 1 - 20
dc.format.medium: Electronic
dc.publisher: MDPI [en_US]
dc.rights: Copyright © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
dc.rights.uri: https://creativecommons.org/licenses/by/4.0
dc.subject: Land-cover classification [en_US]
dc.subject: feature fusion [en_US]
dc.subject: self-attention [en_US]
dc.subject: lightweight [en_US]
dc.title: Dynamic Convolution Self-Attention Network for Land-Cover Classification in VHR Remote-Sensing Images [en_US]
dc.type: Article [en_US]
dc.identifier.doi: https://doi.org/10.3390/rs14194941
dc.relation.isPartOf: Remote Sensing
pubs.issue: 19
pubs.publication-status: Published
pubs.volume: 14
dc.identifier.eissn: 2072-4292
dc.rights.holder: The authors
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Size: 4.82 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.