Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/23632
Title: Generative adversarial network image synthesis method for skin lesion generation and classification
Authors: Mutepfe, F
Kalejahi, BK
Meshgini, S
Danishvar, S
Keywords: DCGAN; dermoscopy; pretraining; skin lesion
Issue Date: 20-Oct-2021
Publisher: Wolters Kluwer - Medknow
Citation: Mutepfe, F. et al. (2021) 'Generative adversarial network image synthesis method for skin lesion generation and classification', Journal of Medical Signals and Sensors, 11(4), pp. 237 - 252. doi: 10.4103/jmss.JMSS_53_20.
Abstract: Background: A common limitation in the treatment of cancer is the difficulty of detecting the disease early. The customary medical practice for cancer examination is a visual inspection by a dermatologist followed by an invasive biopsy. This diagnostic approach, however, is time-consuming and prone to human error. An automated machine learning model is essential to enable fast diagnosis and early treatment.
Objective: The key objective of this study is to establish a fully automatic model that assists dermatologists in the skin cancer management process and improves skin lesion classification accuracy.
Method: The work implements a Deep Convolutional Generative Adversarial Network (DCGAN) using the Python-based deep learning library Keras. We incorporated effective image filtering and enhancement algorithms, such as the bilateral filter, to enhance feature detection and extraction during training. The DCGAN required additional fine-tuning to yield better results. Hyperparameter optimization was used to select the best-performing combination of network hyperparameters. We decreased the learning rate from the default 0.001 to 0.0002 and the momentum of the Adam optimization algorithm from 0.9 to 0.5 to reduce the training instability associated with GAN models, and at each iteration the weights of the discriminative and generative networks were updated to balance the loss between them. We address a binary classification task that predicts the two classes present in our dataset, namely benign and malignant. In addition, well-known metrics such as the receiver operating characteristic area under the curve (ROC-AUC) and the confusion matrix were used to evaluate the results and classification accuracy.
Results: The model generated plausible lesions during the early stages of the experiment, and we could easily visualize a smooth transition in resolution along the way. We achieved an overall test accuracy of 93.5% after fine-tuning most parameters of the network.
Conclusion: This classification model provides spatial intelligence that could be useful for future cancer risk prediction. It remains difficult, however, to generate high-quality synthetic images that closely resemble real samples, and to compare different classification methods, given that some methods use non-public datasets for training.
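
The pipeline described in the abstract (bilateral-filter preprocessing, then a Keras DCGAN trained with Adam at a learning rate of 0.0002 and momentum beta_1 of 0.5) can be sketched roughly as below. This is a minimal illustration, not the authors' code: the 64x64 image resolution, layer sizes, filter parameters (d=9, sigmaColor=75, sigmaSpace=75), and the evaluate() helper are all assumptions made for the example.

    # Illustrative sketch only; hyperparameters follow the abstract,
    # architecture details are assumed.
    import cv2
    import numpy as np
    from tensorflow.keras import layers, models, optimizers
    from sklearn.metrics import roc_auc_score, confusion_matrix

    def preprocess(image_bgr):
        """Edge-preserving bilateral smoothing; d/sigma values are assumed."""
        filtered = cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)
        # Scale pixels to [-1, 1] to match a tanh generator output.
        return (filtered.astype(np.float32) - 127.5) / 127.5

    def build_generator(latent_dim=100):
        """Upsample a latent vector to a 64x64x3 synthetic lesion image."""
        return models.Sequential([
            layers.Dense(8 * 8 * 256, input_dim=latent_dim),
            layers.Reshape((8, 8, 256)),
            layers.Conv2DTranspose(128, 4, strides=2, padding="same"),
            layers.BatchNormalization(),
            layers.LeakyReLU(0.2),
            layers.Conv2DTranspose(64, 4, strides=2, padding="same"),
            layers.BatchNormalization(),
            layers.LeakyReLU(0.2),
            layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),
        ])

    def build_discriminator(input_shape=(64, 64, 3)):
        """Binary real-vs-generated critic, compiled with the reported optimizer settings."""
        model = models.Sequential([
            layers.Conv2D(64, 4, strides=2, padding="same", input_shape=input_shape),
            layers.LeakyReLU(0.2),
            layers.Conv2D(128, 4, strides=2, padding="same"),
            layers.LeakyReLU(0.2),
            layers.Flatten(),
            layers.Dropout(0.3),
            layers.Dense(1, activation="sigmoid"),
        ])
        # Learning rate 0.0002 and beta_1 = 0.5, as stated in the abstract.
        model.compile(optimizer=optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
                      loss="binary_crossentropy")
        return model

    def evaluate(y_true, y_score, threshold=0.5):
        """ROC-AUC and confusion matrix for the benign/malignant classifier."""
        auc = roc_auc_score(y_true, y_score)
        cm = confusion_matrix(y_true, (np.asarray(y_score) >= threshold).astype(int))
        return auc, cm

The alternating generator/discriminator weight updates that balance the two losses, and the downstream benign/malignant classifier, are omitted here for brevity.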
URI: https://bura.brunel.ac.uk/handle/2438/23632
DOI: https://doi.org/10.4103/jmss.JMSS_53_20
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2021 Journal of Medical Signals & Sensors. Published by Wolters Kluwer - Medknow. This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.
Size: 5.48 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.