Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/26490
Title: Customized 2D CNN Model for the Automatic Emotion Recognition Based on EEG Signals
Authors: Baradaran, F
Farzan, A
Danishvar, S
Sheykhivand, S
Keywords: emotion recognition;deep learning;EEG;music;CNN
Issue Date: 14-May-2023
Publisher: MDPI
Citation: Baradaran, F. et al. (2023) 'Customized 2D CNN Model for the Automatic Emotion Recognition Based on EEG Signals', Electronics, 12 (10), 2232, pp. 1 - 21. doi: 10.3390/electronics12102232.
Abstract: Automatic emotion recognition from electroencephalogram (EEG) signals can be considered the main component of brain–computer interface (BCI) systems. In recent years, many researchers have presented algorithms for the automatic classification of emotions from EEG signals with promising results; however, instability, high error, and low accuracy remain the central gaps in this line of research. A model that is stable, accurate, and low in error is therefore essential for the automatic classification of emotions. In this research, a model based on Deep Convolutional Neural Networks (DCNNs) is presented that can classify three emotions (positive, negative, and neutral) from EEG signals recorded under musical stimuli with high reliability. For this purpose, a comprehensive database of EEG signals was collected while volunteers listened to positive and negative music in order to stimulate emotional states. The architecture of the proposed model combines six convolutional layers with two fully connected layers. Different feature-learning and hand-crafted feature selection/extraction algorithms were investigated and compared for classifying emotions. The proposed model achieved 98% accuracy for two classes (positive and negative) and 96% for three classes (positive, neutral, and negative), which is very promising compared with the results of previous research. For a fuller evaluation, the proposed model was also tested in noisy environments; across a wide range of SNRs, classification accuracy remained above 90%. Given its high performance, the proposed model can be used in brain–computer user environments.
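The abstract describes an architecture of six convolutional layers followed by two fully connected layers. The following is a minimal sketch of what such a 2D CNN might look like in PyTorch; the channel counts, kernel sizes, pooling, and the 64×64 single-channel input shape are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a six-conv + two-FC 2D CNN for 3-class emotion
# classification. All layer hyperparameters here are assumptions.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        chans = [1, 16, 32, 32, 64, 64, 128]  # assumed channel progression
        layers = []
        # Six convolutional blocks (conv -> ReLU -> 2x2 max-pool)
        for c_in, c_out in zip(chans, chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*layers)
        # Two fully connected layers; LazyLinear infers the flattened size
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128),          # first fully connected layer
            nn.ReLU(),
            nn.Linear(128, n_classes))   # second fully connected layer

    def forward(self, x):
        return self.classifier(self.features(x))

model = EmotionCNN(n_classes=3)
x = torch.randn(4, 1, 64, 64)  # batch of 4 single-channel "EEG images"
out = model(x)
print(out.shape)  # one logit vector of length 3 per input
```

In practice the EEG time series would first be converted into a 2D representation (e.g. channel-by-time or time-frequency maps) before being fed to such a network.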
Description: Data Availability Statement: The data related to this article is publicly available on the GitHub platform under the title Baradaran emotion dataset.
URI: https://bura.brunel.ac.uk/handle/2438/26490
DOI: https://doi.org/10.3390/electronics12102232
Other Identifiers: ORCID iDs: Sebelan Danishvar https://orcid.org/0000-0002-8258-0437; Sobhan Sheykhivand https://orcid.org/0000-0002-2275-8133.
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf (7.18 MB, Adobe PDF)
Description: Copyright © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).


This item is licensed under a Creative Commons License.