Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/20924
Title: Low-delay single holoscopic 3D computer-generated image to multiview images
Authors: Alfaqheri, T
Aondoakaa, AS
Swash, MR
Sadka, AH
Keywords: holoscopic 3D imaging;macrolens array;low delay;image disparity;viewpoint image;multiview image;autostereoscopic display
Issue Date: 19-Jun-2020
Publisher: Springer Nature
Citation: Alfaqheri, T. et al. (2020) 'Low-delay single holoscopic 3D computer-generated image to multiview images', Journal of Real-Time Image Processing, 17 (6), pp. 2015–2027. doi: 10.1007/s11554-020-00991-y.
Abstract: Copyright © The Author(s) 2020. Due to the nature of holoscopic 3D (H3D) imaging technology, H3D cameras can capture more angular information than their conventional 2D counterparts. This is mainly attributed to the macrolens array, which captures the 3D scene from slightly different viewing angles and generates holoscopic elemental images based on the fly's eye imaging concept. However, this advantage comes at the cost of reduced spatial resolution in the reconstructed images. Meanwhile, the consumer market is seeking an efficient multiview capturing solution for commercially available autostereoscopic displays, which allow multiple viewers to enjoy a 3D viewing experience simultaneously without wearing 3D glasses. This paper proposes a low-delay content adaptation framework for converting a single holoscopic 3D computer-generated image into multiple viewpoint images. Furthermore, it investigates the effects of varying interpolation step sizes on the converted multiview images using the nearest-neighbour and bicubic sampling interpolation techniques. In addition, it evaluates the effects of changing the macrolens array size, within the proposed framework, on the perceived visual quality, both objectively and subjectively. The experimental work is conducted on computer-generated H3D images with different macrolens sizes. The experimental results show that the proposed content adaptation framework can be used to capture multiple viewpoint images to be visualised on autostereoscopic displays.
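Note: As a rough illustration only, and not the authors' exact pipeline, the sketch below shows the general idea behind converting a holoscopic 3D image into viewpoint images: each elemental image formed behind a macrolens contributes one pixel, taken at the same offset, to a given viewpoint image, and the resulting low-resolution viewpoints are then upscaled with nearest-neighbour or bicubic interpolation for display. The function names, the lens_size and scale parameters, and the use of OpenCV for resizing are assumptions made for this example.

```python
import numpy as np
import cv2  # OpenCV, assumed here only for interpolation-based upscaling


def extract_viewpoint_images(h3d_image, lens_size):
    """Slice a holoscopic 3D image (H x W x C array) into viewpoint images.

    Each viewpoint is built by collecting the pixel at the same (u, v)
    offset inside every elemental image produced by the macrolens array,
    so an m x m lens yields m*m low-resolution viewpoint images.
    """
    m = lens_size
    h, w, c = h3d_image.shape
    # Crop to a whole number of elemental images.
    h3d = h3d_image[: (h // m) * m, : (w // m) * m]
    rows, cols = h3d.shape[0] // m, h3d.shape[1] // m
    viewpoints = np.empty((m, m, rows, cols, c), dtype=h3d.dtype)
    for u in range(m):
        for v in range(m):
            viewpoints[u, v] = h3d[u::m, v::m]  # one pixel per elemental image
    return viewpoints


def upscale(viewpoint, scale, method="bicubic"):
    """Upscale one low-resolution viewpoint image for an autostereoscopic display."""
    interp = cv2.INTER_NEAREST if method == "nearest" else cv2.INTER_CUBIC
    h, w = viewpoint.shape[:2]
    return cv2.resize(viewpoint, (w * scale, h * scale), interpolation=interp)


# Example usage (hypothetical 8 x 8 macrolens):
#   views = extract_viewpoint_images(h3d_img, lens_size=8)
#   display_view = upscale(views[3, 3], scale=8, method="bicubic")
```

Note: larger lens sizes give more viewpoints but lower per-view spatial resolution, which is the resolution trade-off discussed in the abstract.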
URI: https://bura.brunel.ac.uk/handle/2438/20924
DOI: https://doi.org/10.1007/s11554-020-00991-y
ISSN: 1861-8200
Appears in Collections:Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © The Author(s) 2020. Rights and permissions: Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
Size: 2.24 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.