Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27231
Title: Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network
Authors: Khaleghi, N
Rezaii, TY
Beheshti, S
Meshgini, S
Sheykhivand, S
Danishvar, S
Keywords: visual saliency;electroencephalogram;image reconstruction;geometric deep network;generative adversarial network
Issue Date: 7-Nov-2022
Publisher: MDPI
Citation: Khaleghi, N. et al. (2022) 'Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network', Electronics (Switzerland), 11 (21), 3637, pp. 1 - 30. doi: 10.3390/electronics11213637.
Abstract: Copyright © 2022 by the authors. Understanding how the brain perceives input data from the outside world is one of the great targets of neuroscience. Neural decoding allows us to model the connection between brain activity and visual stimulation, and images can be reconstructed from brain activity through this modelling. Recent studies have shown that brain activity is influenced by visual saliency, the important parts of an image stimulus. In this paper, a deep model is proposed to reconstruct image stimuli from electroencephalogram (EEG) recordings via visual saliency. To this end, the proposed geometric deep network-based generative adversarial network (GDN-GAN) is trained to map the EEG signals to the visual saliency map corresponding to each image. The first part of the proposed GDN-GAN consists of Chebyshev graph convolutional layers. The input of the GDN part of the network is a functional connectivity-based graph representation of the EEG channels. The output of the GDN is fed to the GAN part of the network to reconstruct the image saliency. The proposed GDN-GAN is trained on the Google Colaboratory Pro platform. Saliency metrics validate the viability and efficiency of the proposed saliency reconstruction network. The weights of the trained network are then used as initial weights to reconstruct the grayscale image stimuli, realizing image reconstruction from EEG signals.
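The abstract's GDN part applies Chebyshev graph convolutions to a functional connectivity-based graph of the EEG channels. The following is a minimal NumPy sketch of one such layer; the function name, array shapes, and the common λ_max ≈ 2 scaling approximation are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def chebyshev_graph_conv(X, A, W):
    """Hypothetical sketch of one Chebyshev graph-convolution layer.

    X : (n_nodes, in_feats)       per-node features (e.g. per-EEG-channel features)
    A : (n_nodes, n_nodes)        symmetric adjacency (e.g. functional connectivity)
    W : (K, in_feats, out_feats)  learnable filter weights, one slice per order
    """
    K = W.shape[0]
    n = A.shape[0]
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    # Normalised graph Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Rescale eigenvalues into [-1, 1]; with the lambda_max ~= 2
    # approximation this reduces to L_tilde = L - I
    L_tilde = L - np.eye(n)
    # Chebyshev recursion: T_0 = X, T_1 = L~ X, T_k = 2 L~ T_{k-1} - T_{k-2}
    T = [X, L_tilde @ X]
    for _ in range(2, K):
        T.append(2 * L_tilde @ T[-1] - T[-2])
    # Sum the K filtered terms, each with its own weight matrix
    return sum(T[k] @ W[k] for k in range(K))
```

A K-order filter of this form aggregates information from each channel's K-hop neighbourhood in the connectivity graph without ever computing an eigendecomposition, which is why Chebyshev layers are a common choice for EEG-channel graphs.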
Description: Data Availability Statement: The EEG-ImageNet dataset used in this study is publicly available in this address: https://tinyurl.com/eeg-visual-classification (accessed on 10 October 2022).
URI: https://bura.brunel.ac.uk/handle/2438/27231
DOI: https://doi.org/10.3390/electronics11213637
Other Identifiers: ORCID iDs: Sebelan Danishvar https://orcid.org/0000-0002-8258-0437; Article number: 3637
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf (13.79 MB, Adobe PDF)
Description: Copyright © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).


This item is licensed under a Creative Commons License.