Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27119
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Lei, T
dc.contributor.author: Zhang, D
dc.contributor.author: Wang, R
dc.contributor.author: Li, S
dc.contributor.author: Zhang, W
dc.contributor.author: Nandi, AK
dc.date.accessioned: 2023-09-03T17:24:10Z
dc.date.available: 2023-09-03T17:24:10Z
dc.date.issued: 2021-06-09
dc.identifier: ORCID iDs: Tao Lei https://orcid.org/0000-0002-2104-9298; Asoke K. Nandi https://orcid.org/0000-0001-6248-2875
dc.identifier.citation: Lei, T. et al. (2021) 'MFP-Net: Multi-scale feature pyramid network for crowd counting', IET Image Processing, 15 (14), pp. 3522-3533. doi: 10.1049/ipr2.12230 (en_US)
dc.identifier.issn: 1751-9659
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/27119
dc.description.abstract: Copyright © 2021 The Authors. Although deep learning has been widely used for dense crowd counting, it still faces two challenges. Firstly, popular network models are sensitive to the scale variance of human heads, human occlusions, and complex backgrounds because of their repeated use of vanilla convolution kernels. Secondly, vanilla feature fusion often depends on summation or concatenation, which ignores the correlation between different features, leading to information redundancy and low robustness to background noise. To address these issues, a multi-scale feature pyramid network (MFP-Net) for dense crowd counting is proposed in this paper. The proposed MFP-Net makes two contributions. Firstly, a feature pyramid fusion module is designed that adopts rich convolutions with different depths and scales, not only to expand the receptive field but also to improve the inference speed of the model by using parallel group convolution. Secondly, a feature attention-aware module is added in the feature fusion stage; this module fuses local and global information by capturing the importance of the spatial and channel domains, improving model robustness. The proposed MFP-Net is evaluated on five publicly available datasets, and experiments show that it not only provides better crowd counting results than comparative models but also requires fewer parameters. (en_US) (Illustrative sketches of the two modules appear after this record.)
dc.description.sponsorship: National Natural Science Foundation of China, grant numbers 61871259, 61861024, 61701387; Natural Science Basic Research Program of Shaanxi, grant number 2021JC-47; Science and Technology Program of Shaanxi Province of China, grant number 2020NY-172 (en_US)
dc.format.extent: 3522-3533
dc.format.medium: Print-Electronic
dc.language: English
dc.language.iso: en (en_US)
dc.publisher: Institution of Engineering and Technology (IET) (en_US)
dc.rights: Copyright © 2021 The Authors. IET Image Processing published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology. This is an open access article under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: optical, image and video signal processing (en_US)
dc.subject: computer vision and image processing techniques (en_US)
dc.subject: neural nets (en_US)
dc.title: MFP-Net: Multi-scale feature pyramid network for crowd counting (en_US)
dc.type: Article (en_US)
dc.identifier.doi: https://doi.org/10.1049/ipr2.12230
dc.relation.isPartOf: IET Image Processing
pubs.issue: 14
pubs.publication-status: Published
pubs.volume: 15
dc.identifier.eissn: 1751-9667
dc.rights.holder: The Authors
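
The abstract's first contribution describes parallel group convolutions at several depths and scales. Below is a minimal PyTorch sketch of that idea, assuming illustrative kernel sizes (3/5/7), a channel width of 64, and 4 groups; these values and the class name PyramidFusionBlock are assumptions for illustration, not the configuration reported in the paper.

import torch
import torch.nn as nn

class PyramidFusionBlock(nn.Module):
    """Parallel group convolutions at several kernel sizes, fused by a 1x1 conv."""
    def __init__(self, channels: int = 64, groups: int = 4):
        super().__init__()
        # Each branch sees a different receptive field; grouped convolution
        # reduces parameters and FLOPs by roughly a factor of `groups`.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=k,
                          padding=k // 2, groups=groups),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for k in (3, 5, 7)  # illustrative scales, not the paper's choice
        ])
        # 1x1 convolution fuses the concatenated multi-scale responses.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(multi_scale) + x  # residual connection

Because the branches run in parallel and each grouped convolution is cheap, stacking such blocks widens the receptive field without the parameter growth of plain convolutions, which is consistent with the abstract's claims of fewer parameters and faster inference.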
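The second contribution, a feature attention-aware module that captures spatial- and channel-domain importance, can be sketched with a generic channel-gate plus spatial-gate pattern. This squeeze-and-excite style channel attention followed by a pooled-statistics spatial mask is a common construction, offered here as an assumption about the general mechanism rather than the authors' exact module.

import torch
import torch.nn as nn

class AttentionAwareFusion(nn.Module):
    """Reweight features per channel ('what'), then per location ('where')."""
    def __init__(self, channels: int = 64, reduction: int = 8):
        super().__init__()
        # Channel attention: global average pool -> bottleneck MLP -> sigmoid gate.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 convolution over per-pixel channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # suppress uninformative channels
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_gate(stats)  # suppress background regions

Gating on both domains lets the fusion stage down-weight background clutter that plain summation or concatenation would pass through unchanged, which matches the robustness claim in the abstract.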
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf (Adobe PDF, 3.83 MB)


This item is licensed under a Creative Commons Attribution 4.0 License.