Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27048
Full metadata record
DC Field | Value | Language
dc.contributor.author | Liang, Y | -
dc.contributor.author | Li, M | -
dc.contributor.author | Jiang, C | -
dc.contributor.author | Liu, G | -
dc.date.accessioned | 2023-08-24T15:31:03Z | -
dc.date.available | 2021-01-01 | -
dc.date.available | 2023-08-24T15:31:03Z | -
dc.date.issued | 2021-12-15 | -
dc.identifier | ORCID iDs: Maozhen Li https://orcid.org/0000-0002-0820-5487; Changjun Jiang https://orcid.org/0000-0003-0637-9317; Guanjun Liu https://orcid.org/0000-0002-7523-4827. | -
dc.identifier.citation | Liang, Y. et al. (2021) 'CEModule: A Computation Efficient Module for Lightweight Convolutional Neural Networks', IEEE Transactions on Neural Networks and Learning Systems, 0 (early access), pp. 1 - 12. doi: 10.1109/TNNLS.2021.3133127. | en_US
dc.identifier.issn | 2162-237X | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/27048 | -
dc.description.abstract | Lightweight convolutional neural networks (CNNs) rely heavily on the design of lightweight convolutional modules (LCMs). For an LCM, lightweight design based on repetitive feature maps (LoR) is currently one of the most effective approaches. An LoR mainly involves an extraction of feature maps from convolutional layers (CE) and feature map regeneration through cheap operations (RO). However, existing LoR approaches carry out lightweight improvements only from the aspect of RO but ignore the problems of poor generalization, low stability, and high computation workload incurred in the CE part. To alleviate these problems, this article introduces the concept of key features from a CNN model interpretation perspective. Subsequently, it presents a novel LCM, namely CEModule, focusing on the CE part. CEModule increases the number of key features to maintain a high level of accuracy in classification. In the meantime, CEModule employs a group convolution strategy to reduce floating-point operations (FLOPs) incurred in the training process. Finally, this article brings forth a dynamic adaptation algorithm (α-DAM) to enhance the generalization of CEModule-enabled lightweight CNN models, including the developed CENet, in dealing with datasets of different scales. Compared with the state-of-the-art results, CEModule reduces FLOPs by up to 54% on CIFAR-10 while maintaining a similar level of accuracy in classification. On ImageNet, CENet increases accuracy by 1.2% following the same FLOPs and training strategies. | en_US
dc.description.sponsorship | National Key Research and Development Project of China (Grant Number: 2018YFB2100801); Director Foundation Project of National Engineering Laboratory for Public Safety Risk Perception and Control by Big Data (PSRPC); Fundamental Research Funds for the Central Universities; China Electronics Technology Group Corporation (CETC); Shanghai Municipal Science and Technology Major Project (Grant Number: 2021SHZDZX0100). | en_US
dc.format.extent | 1 - 12 | -
dc.format.medium | Print-Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US
dc.rights | Copyright © 2021 Institute of Electrical and Electronics Engineers (IEEE). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works, by sending a request to pubs-permissions@ieee.org. For more information, see https://www.ieee.org/publications/rights/rights-policies.html | -
dc.rights.uri | https://www.ieee.org/publications/rights/rights-policies.html | -
dc.subject | automated machine learning (AutoML) | en_US
dc.subject | convolutional neural networks (CNNs) | en_US
dc.subject | feature map regeneration | en_US
dc.subject | hyperparameter optimization (HPO) | en_US
dc.subject | lightweight | en_US
dc.subject | neural network interpretation | en_US
dc.title | CEModule: A Computation Efficient Module for Lightweight Convolutional Neural Networks | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1109/TNNLS.2021.3133127 | -
dc.relation.isPartOf | IEEE Transactions on Neural Networks and Learning Systems | -
pubs.publication-status | Published | -
pubs.volume | 0 | -
dc.identifier.eissn | 2162-2388 | -
dc.rights.holder | Institute of Electrical and Electronics Engineers (IEEE) | -
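The abstract above notes that CEModule uses a group convolution strategy to reduce FLOPs. As a rough illustration only (a generic convolution cost model, not the paper's exact FLOPs accounting), the sketch below shows why splitting channels into groups divides the multiply-accumulate cost by the group count:

```python
def conv2d_flops(h, w, c_in, c_out, k, groups=1):
    """Multiply-accumulate count for a k x k convolution over an
    h x w output feature map. With `groups` g, each output filter
    sees only c_in / g input channels, so cost drops by a factor g."""
    assert c_in % groups == 0 and c_out % groups == 0
    return h * w * c_out * (c_in // groups) * k * k

# Example shapes chosen for illustration (not taken from the paper).
standard = conv2d_flops(56, 56, 128, 128, 3)            # dense convolution
grouped = conv2d_flops(56, 56, 128, 128, 3, groups=4)   # group convolution
print(standard, grouped, standard // grouped)           # cost ratio equals the group count
```

Group convolution alone does not explain the reported 54% FLOPs reduction; in the paper it works alongside the key-feature design of the CE part and the α-DAM adaptation algorithm.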
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2021 Institute of Electrical and Electronics Engineers (IEEE); personal use permitted (see the dc.rights statement above) | 2.68 MB | Adobe PDF | View/Open


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.