Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27231
Full metadata record
DC Field | Value | Language
dc.contributor.author | Khaleghi, N | -
dc.contributor.author | Rezaii, TY | -
dc.contributor.author | Beheshti, S | -
dc.contributor.author | Meshgini, S | -
dc.contributor.author | Sheykhivand, S | -
dc.contributor.author | Danishvar, S | -
dc.date.accessioned | 2023-09-21T08:45:52Z | -
dc.date.available | 2023-09-21T08:45:52Z | -
dc.date.issued | 2022-11-07 | -
dc.identifier | ORCID iDs: Sebelan Danishvar https://orcid.org/0000-0002-8258-0437 | -
dc.identifier | 3637 | -
dc.identifier.citation | Khaleghi, N. et al. (2022) 'Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network', Electronics (Switzerland), 11 (21), 3637, pp. 1-30. doi: 10.3390/electronics11213637. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/27231 | -
dc.description | Data Availability Statement: The EEG-ImageNet dataset used in this study is publicly available at https://tinyurl.com/eeg-visual-classification (accessed on 10 October 2022). | en_US
dc.description.abstract | Copyright © 2022 by the authors. Understanding how the brain perceives input from the outside world is one of the major goals of neuroscience. Neural decoding lets us model the connection between brain activity and visual stimulation, and images can be reconstructed from brain activity through this modelling. Recent studies have shown that brain activity is influenced by visual saliency, the important parts of an image stimulus. In this paper, a deep model is proposed to reconstruct image stimuli from electroencephalogram (EEG) recordings via visual saliency. To this end, the proposed geometric deep network-based generative adversarial network (GDN-GAN) is trained to map EEG signals to the visual saliency map corresponding to each image. The first part of the proposed GDN-GAN consists of Chebyshev graph convolutional layers, and the input of the GDN part is the functional connectivity-based graph representation of the EEG channels. The output of the GDN is fed to the GAN part of the network to reconstruct the image saliency. The proposed GDN-GAN is trained on the Google Colaboratory Pro platform. Saliency metrics validate the viability and efficiency of the proposed saliency reconstruction network. The weights of the trained network are then used as initial weights to reconstruct the grayscale image stimuli, so the proposed network realizes image reconstruction from EEG signals. | en_US
dc.description.sponsorship | This research received no external funding. | en_US
dc.format.extent | 1-30 | -
dc.format.medium | Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | MDPI | en_US
dc.rights | Copyright © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | visual saliency | en_US
dc.subject | electroencephalogram | en_US
dc.subject | image reconstruction | en_US
dc.subject | geometric deep network | en_US
dc.subject | generative adversarial network | en_US
dc.title | Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.3390/electronics11213637 | -
dc.relation.isPartOf | Electronics (Switzerland) | -
pubs.issue | 21 | -
pubs.publication-status | Published | -
pubs.volume | 11 | -
dc.identifier.eissn | 2079-9292 | -
dc.rights.holder | The authors | -
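The abstract describes the GDN stage as Chebyshev graph convolutional layers applied to a functional connectivity-based graph over the EEG channels. As a rough, self-contained sketch of that building block only (the function name, shapes, and weight layout are illustrative assumptions; the paper's exact layer configuration is not given in this record):

```python
import numpy as np

def chebyshev_graph_conv(x, adj, theta):
    """One Chebyshev graph-convolution layer of order K = len(theta).

    x     : (n_channels, n_features) node features, e.g. per-EEG-channel features
    adj   : (n_channels, n_channels) functional-connectivity adjacency
            (symmetric, non-negative; construction is an assumption here)
    theta : list of (n_features, n_out) filter weights, one per Chebyshev order
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    # Symmetric normalised Laplacian: L = I - D^{-1/2} A D^{-1/2}
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # Rescale the spectrum to [-1, 1] so the Chebyshev recurrence is stable
    lam_max = np.linalg.eigvalsh(lap).max()
    lap_scaled = 2.0 * lap / lam_max - np.eye(n)
    # Chebyshev recurrence: T0 = X, T1 = L~ X, Tk = 2 L~ T(k-1) - T(k-2)
    t_prev, t_curr = x, lap_scaled @ x
    out = t_prev @ theta[0]
    if len(theta) > 1:
        out += t_curr @ theta[1]
    for k in range(2, len(theta)):
        t_prev, t_curr = t_curr, 2.0 * lap_scaled @ t_curr - t_prev
        out += t_curr @ theta[k]
    return out
```

In the architecture the abstract outlines, several such layers would map the channel graph to a feature vector that the GAN generator then upsamples into a saliency map; that generator side is not sketched here.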
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | Copyright © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | 13.79 MB | Adobe PDF


This item is licensed under a Creative Commons License.