Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/13560
Full metadata record
DC Field | Value | Language
dc.contributor.author | Emary, E | -
dc.contributor.author | Zawbaa, HM | -
dc.contributor.author | Grosan, C | -
dc.date.accessioned | 2016-11-30T15:21:52Z | -
dc.date.available | 2016-11-30T15:21:52Z | -
dc.date.issued | 2017-01-10 | -
dc.identifier.citation | Emary, E., Zawbaa, H.M. and Grosan, C. (2017) 'Experienced Gray Wolf Optimization Through Reinforcement Learning and Neural Networks,' IEEE Transactions on Neural Networks and Learning Systems, 29 (3), pp. 681-694. doi: 10.1109/TNNLS.2016.2634548. | en_US
dc.identifier.issn | 2162-237X | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/13560 | -
dc.description.abstract | In this paper, a variant of the Grey Wolf Optimizer (GWO) that uses reinforcement learning principles combined with neural networks to enhance performance is proposed. The aim is to overcome, through reinforcement learning, the common challenge of setting the right parameters for the algorithm. In GWO, a single parameter controls the exploration/exploitation rate, which influences the performance of the algorithm. Rather than changing this parameter globally for all agents, we use reinforcement learning to set it on an individual basis. The adaptation of the exploration rate for each agent depends on the agent’s own experience and the current terrain of the search space. To achieve this, an experience repository based on a neural network is built to map a set of agents’ states to a set of corresponding actions that specifically influence the exploration rate. The experience repository is updated by all the search agents to continuously reflect experience and enhance future actions. The resulting algorithm, called Experienced Grey Wolf Optimizer (EGWO), is assessed on feature selection problems and on finding optimal weights for neural networks. We use a set of performance indicators to evaluate the efficiency of the method. Results over various datasets demonstrate the advantage of EGWO over the original GWO and other meta-heuristics such as genetic algorithms and particle swarm optimization. | en_US
dc.description.sponsorship | IPROCOM Marie Curie initial training network; People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Programme FP7/2007-2013 (10.13039/501100004963); Romanian National Authority for Scientific Research, CNDI-UEFISCDI | en_US
dc.format.extent | 681 - 694 (14) | -
dc.format.medium | Print-Electronic | -
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.rights | © 2017 IEEE. Open Access. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | -
dc.subject | reinforcement learning | en_US
dc.subject | neural network | en_US
dc.subject | grey wolf optimization | en_US
dc.subject | adaptive exploration rate | en_US
dc.title | Experienced grey wolf optimizer through reinforcement learning and neural networks | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1109/TNNLS.2016.2634548 | -
dc.relation.isPartOf | IEEE Transactions on Neural Networks and Learning Systems | -
pubs.publication-status | Published | -
dc.identifier.eissn | 2162-2388 | -
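
The abstract above describes an exploration/exploitation parameter that is adapted per agent from each agent's own experience, rather than decreased globally as in standard GWO. Below is a minimal Python sketch of that idea, not the authors' implementation: EGWO's neural-network experience repository is replaced by a simple reward-based update rule, and the function name, bounds, and adaptation constants are illustrative assumptions.

# A minimal sketch (not the authors' code) of Grey Wolf Optimization with a
# per-agent exploration parameter, illustrating the idea in the abstract.
# The neural-network experience repository of EGWO is replaced here by a
# simple reward-based adaptation rule; names and constants are assumptions.
import numpy as np

def egwo_sketch(objective, dim, n_agents=10, iters=100, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_agents, dim))   # wolf positions
    a = np.full(n_agents, 2.0)                      # per-agent exploration parameter
    prev_fit = np.array([objective(x) for x in X])

    # alpha, beta, delta: the three best wolves found so far
    order = np.argsort(prev_fit)
    alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()
    f_alpha, f_beta, f_delta = prev_fit[order[0]], prev_fit[order[1]], prev_fit[order[2]]

    for _ in range(iters):
        for i in range(n_agents):
            # Standard GWO position update, but driven by the agent's own a[i]
            new_x = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a[i] * r1 - a[i]
                C = 2 * r2
                D = np.abs(C * leader - X[i])
                new_x += (leader - A * D) / 3.0
            new_x = np.clip(new_x, lb, ub)
            fit = objective(new_x)

            # Reward-based adaptation (stand-in for the RL/NN experience repository):
            # shrink a[i] (exploit) after an improving move, grow it (explore) otherwise.
            a[i] = np.clip(a[i] * (0.95 if fit < prev_fit[i] else 1.05), 0.1, 2.0)
            X[i], prev_fit[i] = new_x, fit

            # Update the leader hierarchy
            if fit < f_alpha:
                f_delta, delta = f_beta, beta
                f_beta, beta = f_alpha, alpha
                f_alpha, alpha = fit, new_x.copy()
            elif fit < f_beta:
                f_delta, delta = f_beta, beta
                f_beta, beta = fit, new_x.copy()
            elif fit < f_delta:
                f_delta, delta = fit, new_x.copy()

    return alpha, f_alpha

# Usage: minimise a simple sphere function
best_x, best_f = egwo_sketch(lambda x: float(np.sum(x**2)), dim=5)

In the paper itself the action that adjusts the exploration rate is chosen by a neural network trained on the agents' shared experience; the multiplicative rule above only stands in for that mechanism so the sketch stays self-contained.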
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText-AAM.pdf | | 805.26 kB | Adobe PDF
FullText.pdf | | 2.75 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.