Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/21904
Title: Estimating Uncertainty in Deep Learning for Reporting Confidence to Clinicians in Medical Image Segmentation and Diseases Detection
Authors: Ghoshal, B
Tucker, A
Sanghera, B
Wong, WL
Keywords: Bias-corrected uncertainty estimation; Classification; Deep learning; Dropweights; Ensembles; Medical image segmentation
Issue Date: 22-Oct-2020
Publisher: Wiley
Citation: Ghoshal, B, Tucker, A, Sanghera, B, Lup Wong, W. Estimating uncertainty in deep learning for reporting confidence to clinicians in medical image segmentation and diseases detection. Computational Intelligence. 2020; 1–34.
Abstract: Deep learning (DL), which involves powerful black-box predictors, has achieved remarkable performance in medical image analysis, such as segmentation and classification for diagnosis. Despite these successes, however, these methods focus exclusively on improving the accuracy of point predictions without assessing the quality of their outputs. Knowing how much confidence there is in a prediction is essential for gaining clinicians' trust in the technology. In this article, we propose an uncertainty estimation framework, called MC-DropWeights, that approximates Bayesian inference in DL by imposing a Bernoulli distribution on the incoming or outgoing weights of the model, including individual neurons. We demonstrate this by decomposing predictive probabilities into the two main types of uncertainty, aleatoric and epistemic, using the Bayesian Residual U-Net (BRUNet) for image segmentation. Approximation methods in Bayesian DL suffer from the "mode collapse" phenomenon in variational inference. To address this problem, we propose ensembling Monte-Carlo DropWeights models by varying the DropWeights rate. For segmentation, we introduce a predictive uncertainty estimator that takes the mean, over classes, of the standard deviations of the class probabilities. For classification, however, we need an alternative approach, since the predictive probabilities from a single forward pass through the model do not capture uncertainty. The entropy of the predictive distribution is a measure of uncertainty, but its plug-in estimate depends on the sample size, and the plug-in estimate of mutual information is subject to sampling bias. We propose Jackknife resampling to correct for this bias, which improves the quality of uncertainty estimates in image classification. We demonstrate that, in practice, our deep-ensemble MC-DropWeights method with the bias-corrected estimator matches or outperforms approximate Bayesian neural networks, both in the quantified uncertainty and in the quality of the uncertainty estimates.
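The techniques summarized in the abstract lend themselves to a compact illustration. The following Python sketch is not code from the paper: it assumes T stochastic forward passes (e.g., with DropWeights kept active at test time) stacked into an array of softmax outputs, and all function names are illustrative. It shows the decomposition of predictive uncertainty into aleatoric and epistemic parts, the segmentation estimator described above (the mean over classes of the per-class standard deviations), and a jackknife bias correction of the plug-in mutual-information estimate.

import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of (a batch of) probability vectors."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(mc_probs):
    """mc_probs: (T, N, C) softmax outputs from T stochastic forward
    passes. Returns total, aleatoric, and epistemic uncertainty per item."""
    mean_p = mc_probs.mean(axis=0)              # (N, C) predictive mean
    total = entropy(mean_p)                     # predictive entropy H(mean_p)
    aleatoric = entropy(mc_probs).mean(axis=0)  # expected entropy E_t[H(p_t)]
    epistemic = total - aleatoric               # mutual information I(y; w)
    return total, aleatoric, epistemic

def jackknife_mi(mc_probs):
    """Jackknife bias-corrected mutual information: leave out one of the
    T passes at a time, then recombine the plug-in estimates."""
    T = mc_probs.shape[0]
    mi_full = decompose_uncertainty(mc_probs)[2]
    mi_loo = np.stack([
        decompose_uncertainty(np.delete(mc_probs, t, axis=0))[2]
        for t in range(T)
    ])
    return T * mi_full - (T - 1) * mi_loo.mean(axis=0)

def segmentation_uncertainty(mc_probs):
    """Per-pixel estimator from the abstract: mean over classes of the
    standard deviation of each class probability across the T passes."""
    return mc_probs.std(axis=0).mean(axis=-1)

# Toy usage with simulated passes (Dirichlet draws stand in for a network).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=(20, 5))  # T=20 passes, N=5 items, C=3 classes
total, aleatoric, epistemic = decompose_uncertainty(probs)
print(epistemic)
print(jackknife_mi(probs))
print(segmentation_uncertainty(probs))

The jackknife combination T*theta_hat - (T-1)*mean(theta_hat_loo) removes the leading O(1/T) bias of the plug-in estimate, which is one standard way to realize the bias correction the abstract refers to. Pooling passes from several such models trained with different DropWeights rates would give the ensemble variant.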
URI: http://bura.brunel.ac.uk/handle/2438/21904
ISSN: 0824-7935
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf (8.73 MB, Adobe PDF)


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.