Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27157
Title: DO-U-Net for Segmentation and Counting: Applications to Satellite and Medical Images
Authors: Overton, T
Tucker, A
Keywords: convolutional neural networks; U-Net; segmentation; counting; satellite imagery; blood smear
Issue Date: 2-Apr-2020
Publisher: Springer Nature
Citation: Overton, T. and Tucker, A. (2020) 'DO-U-Net for Segmentation and Counting: Applications to Satellite and Medical Images', Advances in Intelligent Data Analysis XVIII: 18th International Symposium on Intelligent Data Analysis, IDA 2020, Proceedings. Virtual, 27-29 April, pp. 391-403. doi: 10.1007/978-3-030-44584-3_31.
Series/Report no.: Lecture Notes in Computer Science; LNISA, volume 12080
Abstract: Copyright © The Author(s) 2020. Many image analysis tasks involve the automatic segmentation and counting of objects with specific characteristics. However, we find that current approaches look to either segment objects or count them through bounding boxes, and those methodologies that both segment and count struggle with co-located and overlapping objects. This restricts our capabilities when, for example, we require both the area covered by particular objects and the number of those objects present, especially when this information must be obtained for a large number of images. In this paper, we address this by proposing a Dual-Output U-Net. DO-U-Net is an encoder-decoder style Fully Convolutional Network (FCN) for object segmentation and counting in image processing. Our proposed architecture achieves precision and sensitivity superior to other, similar models by producing two target outputs: a segmentation mask and an edge mask. Two case studies are used to demonstrate the capabilities of DO-U-Net: locating and counting Internally Displaced People (IDP) tents in satellite imagery, and the segmentation and counting of erythrocytes in blood smears. The model was demonstrated to work with a relatively small training dataset, achieving a sensitivity of 98.69% for IDP camps at a fixed resolution, and 94.66% for a scale-invariant IDP model. DO-U-Net achieved a sensitivity of 99.07% on the erythrocytes dataset. DO-U-Net has a reduced memory footprint, allowing for training and deployment on a machine with a low- to mid-range GPU, making it accessible to a wider audience, including non-governmental organisations (NGOs) providing humanitarian aid, as well as health care organisations.
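The following is a minimal sketch of the dual-output idea described in the abstract: a shared U-Net-style backbone with two per-pixel sigmoid heads (a segmentation mask and an edge mask), with touching objects then separated by subtracting the edge mask from the segmentation mask and counted via connected components. This is not the authors' released implementation; the network depth, layer widths, padding choices, input size, and the count_objects helper are illustrative assumptions.

    import numpy as np
    from scipy import ndimage
    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        # Two 3x3 convolutions, as in a standard U-Net stage.
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return x

    def build_do_unet(input_shape=(192, 192, 3)):
        inputs = layers.Input(shape=input_shape)
        # Encoder: two downsampling stages (the published network is deeper).
        e1 = conv_block(inputs, 32)
        e2 = conv_block(layers.MaxPooling2D()(e1), 64)
        # Bottleneck.
        b = conv_block(layers.MaxPooling2D()(e2), 128)
        # Decoder with skip connections.
        d2 = conv_block(layers.Concatenate()([layers.UpSampling2D()(b), e2]), 64)
        d1 = conv_block(layers.Concatenate()([layers.UpSampling2D()(d2), e1]), 32)
        # Two per-pixel sigmoid heads: an object mask and an edge mask.
        seg = layers.Conv2D(1, 1, activation="sigmoid", name="segmentation")(d1)
        edge = layers.Conv2D(1, 1, activation="sigmoid", name="edge")(d1)
        return Model(inputs, [seg, edge])

    def count_objects(seg_mask, edge_mask, threshold=0.5):
        # Subtracting the edge mask erodes object boundaries, so co-located
        # objects separate; connected-component labelling then gives the count.
        interior = np.squeeze(seg_mask - edge_mask) > threshold
        _, n_objects = ndimage.label(interior)
        return n_objects

In this sketch the two heads would be trained jointly against annotated segmentation and edge masks; producing the edge mask as an explicit second target is what allows the simple mask subtraction above to split overlapping objects before counting.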
Description: Acknowledgement: The ALL_IDB1 dataset from the Acute Lymphoblastic Leukemia Image Database for Image Processing was provided by the Department of Information Technology, Università degli Studi di Milano.
Part of the Lecture Notes in Computer Science book series (LNISA, volume 12080). LNCS Sublibrary: SL3 – Information Systems and Applications, incl. Internet/Web, and HCI
URI: https://bura.brunel.ac.uk/handle/2438/27157
DOI: https://doi.org/10.1007/978-3-030-44584-3_31
ISBN: 978-3-030-44583-6 (pbk)
978-3-030-44584-3 (ebk)
ISSN: 0302-9743
Other Identifiers: ORCID iDs: Toyah Overton https://orcid.org/0000-0002-5934-9907; Allan Tucker https://orcid.org/0000-0001-5105-3506.
Appears in Collections:Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © The Editor(s) (if applicable) and The Author(s) 2020. Rights and permissions: Open Access. This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Size: 2.28 MB
Format: Adobe PDF


This item is licensed under a Creative Commons Attribution 4.0 International License.