Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/28857
Full metadata record
dc.contributor.author: Islam, T
dc.contributor.author: Miron, A
dc.contributor.author: Liu, X
dc.contributor.author: Li, Y
dc.date.accessioned: 2024-04-24T13:49:17Z
dc.date.available: 2024-04-24T13:49:17Z
dc.date.issued: 2024-02-21
dc.identifier: ORCiD: Tasin Islam https://orcid.org/0000-0001-7568-9322
dc.identifier: ORCiD: Alina Miron https://orcid.org/0000-0002-0068-4495
dc.identifier: ORCiD: Xiaohui Liu https://orcid.org/0000-0003-1589-1267
dc.identifier: ORCiD: Yongmin Li https://orcid.org/0000-0003-1668-2440
dc.identifier.citation: Islam, T. et al. (2024) 'Deep Learning in Virtual Try-On: A Comprehensive Survey', IEEE Access, 12, pp. 29475-29502. doi: 10.1109/ACCESS.2024.3368612.
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/28857
dc.description.abstract: Virtual try-on technology has gained significant importance in the retail industry due to its potential to transform the way customers interact with products and make purchase decisions. It allows users to virtually try on clothing and accessories, providing a realistic representation of how the items would look and fit without the need for physical interaction. The ability to virtually try on products addresses common challenges associated with online shopping, such as uncertainty about fit and style, ultimately enhancing the overall customer experience and satisfaction. As a result, virtual try-on technology has the potential to reduce returns and optimise conversion rates for businesses, making it a valuable tool in the e-commerce landscape. In this paper, we provide a comprehensive review of deep learning based virtual try-on models, focusing on their functionality, technical details, dataset usage, weaknesses, and impact on customer satisfaction. The models are categorised into three main types: image-based, multi-pose, and video virtual try-on models, with detailed examples and technical summaries provided for each category. Additionally, we identify and discuss similarities and differences in these methods. Furthermore, we examine the datasets currently available for building and evaluating virtual try-on models, including the number of images/videos and their resolutions. We present the commonly used methods for both qualitative and quantitative evaluations, comparing synthesised images with previous work and performing quantitative evaluations across various metrics and benchmark datasets. We discuss the weaknesses of current deep learning based virtual try-on models, including challenges in preserving clothing characteristics and textures, the level of accuracy of applying the clothing to the person, and the preservation of facial identities. Additionally, we address dataset bias, particularly the domination of female models, limited diversity in clothing featured, and relatively simple and clean backgrounds in the datasets, which can negatively impact the model’s ability to handle challenging situations. Moreover, we explore the impact of virtual try-ons on customer satisfaction, highlighting the benefits that customers can enjoy, which also reduces returns and optimises conversion rates for businesses.
dc.format.extent: 29475-29502
dc.format.medium: Electronic
dc.language: English
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.rights: Copyright © 2024 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: virtual try-on (VTO)
dc.subject: deep learning
dc.subject: image synthesis
dc.subject: generative adversarial networks (GANs)
dc.subject: diffusion models (DMs)
dc.title: Deep Learning in Virtual Try-On: A Comprehensive Survey
dc.type: Article
dc.date.dateAccepted: 2024-02-19
dc.identifier.doi: https://doi.org/10.1109/ACCESS.2024.3368612
dc.relation.isPartOf: IEEE Access
pubs.publication-status: Published
pubs.volume: 12
dc.identifier.eissn: 2169-3536
dc.rights.license: https://creativecommons.org/licenses/by/4.0/legalcode.en
dc.rights.holder: The Authors
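
The abstract above notes that the survey compares synthesised try-on images with previous work using quantitative metrics on benchmark datasets. As a minimal illustrative sketch only, and not a method taken from the paper, the Python snippet below computes one commonly reported full-reference metric, SSIM, between a synthesised image and a reference image; the file names and the use of scikit-image are assumptions made purely for illustration.

# Minimal sketch (illustrative only): pairwise quantitative evaluation of a
# synthesised try-on image against a reference image using SSIM. The file
# names below are hypothetical placeholders, and SSIM is just one example of
# the metrics a survey of this kind discusses.
from skimage import io, img_as_float
from skimage.metrics import structural_similarity as ssim

def evaluate_pair(synth_path: str, ref_path: str) -> float:
    """Return the SSIM score between a synthesised and a reference image."""
    synth = img_as_float(io.imread(synth_path))
    ref = img_as_float(io.imread(ref_path))
    # SSIM requires images of identical shape; crop both to the shared region.
    h = min(synth.shape[0], ref.shape[0])
    w = min(synth.shape[1], ref.shape[1])
    synth, ref = synth[:h, :w], ref[:h, :w]
    # channel_axis=-1 treats the last axis as colour channels (RGB);
    # data_range=1.0 matches the [0, 1] range produced by img_as_float.
    return ssim(synth, ref, channel_axis=-1, data_range=1.0)

if __name__ == "__main__":
    score = evaluate_pair("synthesised.png", "reference.png")
    print(f"SSIM: {score:.4f}")  # values closer to 1.0 indicate closer agreement

SSIM scores a single image pair; distribution-level metrics such as FID, which compare sets of generated and real images rather than individual pairs, are also widely used for evaluating image synthesis.
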
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright © 2024 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Size: 6.7 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.