Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/17962
Full metadata record
DC Field | Value | Language
dc.contributor.author | Liu, Q | -
dc.contributor.author | Cai, J | -
dc.contributor.author | Fan, S-Z | -
dc.contributor.author | Abbod, MF | -
dc.contributor.author | Shieh, J-S | -
dc.contributor.author | Kung, Y | -
dc.contributor.author | Lin, L | -
dc.date.accessioned | 2019-04-29T14:15:02Z | -
dc.date.available | 2019-04-29T14:15:02Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | IEEE Access | en_US
dc.identifier.issn | 2169-3536 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/17962 | -
dc.description.abstract | One of the most challenging predictive data analysis tasks is the accurate prediction of depth of anesthesia (DOA) indicators, which has attracted growing attention because it provides patients with a safe surgical environment by guarding against secondary damage caused by intraoperative awareness or brain injury. However, many researchers rely on heavily handcrafted feature extraction or feature engineering carefully tailored to each patient to achieve very high sensitivity and a low false-prediction rate on a particular dataset, which limits the benefit of the proposed approaches when a different dataset is used. Recently, representations learned by deep convolutional neural networks (CNNs) for object recognition have become widely used models of the processing hierarchy in the human visual system. The correspondence between such models and brain signals acquired at high temporal resolution has been explored less exhaustively. In this paper, deep CNNs with a range of different architectures are designed to identify related activities from raw electroencephalography (EEG). Specifically, an improved short-time Fourier transform (STFT) is used to represent the time-frequency information by extracting spectral images of the original EEG as input to the CNN. CNN models are then designed and trained to predict DOA levels from the EEG spectrum without handcrafted features, providing an intuitive mapping process with high efficiency and reliability. As a result, the best trained CNN model achieved an accuracy of 93.50%, showing that the CNN can learn to approximate the DOA assessments of senior anesthesiologists, which highlights the potential of deep CNNs combined with advanced visualization techniques for EEG-based brain mapping. | en_US
dc.description.sponsorship | This research was financially supported by Lenovo Technology B.V. Taiwan Branch. It was also supported by the National Chung-Shan Institute of Science & Technology in Taiwan (Grant nos. CSIST-095-V301 and CSIST-095-V302) and the National Natural Science Foundation of China (Grant no. 51475342). | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.subject | depth of anesthesia | en_US
dc.subject | convolutional neural network | en_US
dc.subject | electroencephalography | en_US
dc.subject | short-time Fourier transform | en_US
dc.title | Spectrum analysis of EEG signals using CNN to model patient’s consciousness level based on anesthesiologists’ experience | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1109/ACCESS.2019.2912273 | -
dc.relation.isPartOf | IEEE Access | -
pubs.publication-status | Accepted | -
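The preprocessing step described in the abstract (raw EEG converted by STFT into spectral images used as CNN input) can be sketched as follows. This is a minimal illustration only: the sampling rate, window length, and overlap are assumptions, and SciPy's standard `stft` stands in for the paper's "improved STFT".

```python
import numpy as np
from scipy.signal import stft

# Assumed EEG sampling rate (Hz) -- illustrative, not the paper's parameter.
fs = 128
t = np.arange(0, 30, 1 / fs)  # a 30-second EEG segment

# Synthetic EEG-like signal: mixed alpha (10 Hz) and delta (2 Hz) rhythms plus noise.
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 2 * t)
       + 0.2 * np.random.randn(t.size))

# Short-time Fourier transform: 2-second windows with 50% overlap.
f, seg_t, Zxx = stft(eeg, fs=fs, nperseg=2 * fs, noverlap=fs)

# Magnitude spectrogram -- the time-frequency "image" a CNN would take as input.
spectrogram = np.abs(Zxx)
print(spectrogram.shape)  # (frequency bins, time frames)
```

In the workflow the paper describes, each such spectrogram image would be labeled with a DOA level assessed by senior anesthesiologists and used to train the CNN end to end, with no handcrafted features.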
Appears in Collections:Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf |  | 1.55 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.