Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/12984
Full metadata record
DC Field: Value
dc.contributor.author: Mumith, J-A
dc.contributor.author: Karayiannis, T
dc.contributor.author: Makatsoris, C
dc.date.accessioned: 2016-07-20T10:41:22Z
dc.date.available: 2016-07-20T10:41:22Z
dc.date.issued: 2015
dc.identifier.citation: International Journal of Low-Carbon Technologies, pp. 1-9, (2015)
dc.identifier.issn: 1748-1317
dc.identifier.uri: http://ijlct.oxfordjournals.org/content/early/2015/08/16/ijlct.ctv023
dc.identifier.uri: http://bura.brunel.ac.uk/handle/2438/12984
dc.description.abstract: The thermoacoustic heat engine (TAHE) is a type of prime mover that converts thermal power to acoustic power. It is composed of two heat exchangers (the device's heat source and sink), some form of porous medium where the conversion of power takes place, and a tube that houses the acoustic wave produced. Its simple design and the fact that it is one of the few prime movers that do not require moving parts make such a device an attractive alternative for many practical applications. The acoustic power produced by the TAHE can be used to generate electricity, drive a heat pump or drive a refrigeration system. Although the geometry of the TAHE is simple, the behavior of the engine is complex, with more than 30 design parameters affecting the performance of the device; designing such a device therefore remains a significant challenge. In this work, a radical design methodology using reinforcement learning (RL) is employed for the design and optimization of a TAHE for the first time. Reinforcement learning is a machine learning technique that allows optimization by specifying 'good' and 'bad' behavior through a simple reward scheme r. Although its framework is simple, it has proved to be a very powerful tool for solving a wide range of complex decision-making/optimization problems. The RL technique employed by the agent in this work is known as Q-learning. Preliminary results have shown the potential of the RL technique to solve this type of complex design problem, as the RL agent was able to determine the correct configuration of components that would create positive acoustic power output. The learning agent was able to create a design that yielded an acoustic power output of 643.31 W with a thermal efficiency of 3.29%. It is hoped that, with increased understanding of the design problem in terms of the RL framework, it will ultimately be possible to create an autonomous RL agent for the design and optimization of complex TAHEs with minimal predefined conditions/restrictions.
dc.format.extent: ctv023 - ctv023
dc.language.iso: en
dc.publisher: Oxford University Press
dc.subject: Thermoacoustic heat engine
dc.subject: Heat recovery technology
dc.subject: Design
dc.subject: Optimization
dc.subject: Reinforcement learning
dc.title: Design and optimization of a thermoacoustic heat engine using reinforcement learning
dc.type: Article
dc.identifier.doi: http://dx.doi.org/10.1093/ijlct/ctv023
dc.relation.isPartOf: International Journal of Low-Carbon Technologies
pubs.publication-status: Published online
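Note: the abstract above describes a tabular Q-learning agent that selects TAHE component configurations and receives a simple reward for 'good' and 'bad' designs. The paper's actual implementation is not reproduced on this record page; the following is only a minimal, generic Q-learning sketch. The state/action encoding, the evaluate_design reward function and all hyperparameters are illustrative assumptions, not the authors' setup.

    import random
    from collections import defaultdict

    # Illustrative design moves; placeholders, not the paper's actual action set.
    ACTIONS = ["move_stack_left", "move_stack_right", "lengthen_tube", "shorten_tube"]

    def evaluate_design(state):
        """Placeholder reward: stands in for the acoustic power of the design."""
        return -abs(state - 7)  # assumed toy objective with its optimum at state 7

    alpha, gamma, epsilon = 0.1, 0.9, 0.2   # assumed learning-rate, discount, exploration
    Q = defaultdict(float)                  # Q[(state, action)] -> value estimate

    state = 0
    for episode in range(1000):
        # epsilon-greedy choice over the design moves
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        # apply the design move (toy transition) and observe the reward
        next_state = state + (1 if action in ("move_stack_right", "lengthen_tube") else -1)
        reward = evaluate_design(next_state)

        # standard Q-learning update rule
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

        state = next_state

In such a sketch the reward scheme r is simply whatever figure of merit the design simulator returns (for the paper, acoustic power output), which is what lets the agent learn configurations that yield positive power.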
Appears in Collections: Publications

Files in This Item:
Fulltext.pdf (273.54 kB, Adobe PDF)


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.