Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27284
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Shahi, K
dc.contributor.author: Li, Y
dc.date.accessioned: 2023-10-01T18:54:11Z
dc.date.available: 2023-10-01T18:54:11Z
dc.date.issued: 2023-06-23
dc.identifier: ORCID iD: Yongmin Li, https://orcid.org/0000-0003-1668-2440
dc.identifier.citation: Shahi, K. and Li, Y. (2023) 'Background Replacement in Video Conferencing', International Journal of Network Dynamics and Intelligence, 2 (2), pp. 1-11. doi: 10.53941/ijndi.2023.100004 (en_US)
dc.identifier.uri: https://bura.brunel.ac.uk/handle/2438/27284
dc.description: Data Availability Statement: Source code is available on GitHub and the dataset is available on Kaggle. GitHub repository: https://github.com/kiranshahi/Real-time-Background-replacement-in-Video-Conferencing; dataset: https://www.kaggle.com/datasets/nikhilroxtomar/person-segmentation (en_US)
dc.description.abstract: Copyright © Kiran Shahi, Yongmin Li 2023. Background replacement is one of the most widely used features in video conferencing applications, perhaps mainly for privacy protection, but also for other purposes such as branding, marketing and projecting professionalism. However, the existing implementations in video conferencing tools have serious limitations: most generate strong artefacts when there is a slight change in the perspective of the background, or require a green screen to avoid such artefacts, which results in an unnatural background or can even expose the original background to other users in the video conference. In this work, we aim to study the relationship between the foreground and background in real-time videos. Three different methods are presented and evaluated: the baseline U-Net, the lightweight U-Net MobileNet, and the U-Net MobileNet&ConvLSTM models. These models are trained on public datasets for image segmentation. Experimental results show that both the lightweight U-Net MobileNet and the U-Net MobileNet&ConvLSTM models achieve superior performance compared to the baseline U-Net model. (en_US)
dc.description.sponsorship: This research received no external funding. (en_US)
dc.format.extent: 1-11
dc.format.medium: Electronic
dc.language.iso: en_US (en_US)
dc.publisher: Scilight Press (en_US)
dc.rights: Copyright (c) 2023 Kiran Shahi, Yongmin Li. This work is licensed under a Creative Commons Attribution 4.0 International License.
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: video conferencing (en_US)
dc.subject: background replacement (en_US)
dc.subject: image segmentation (en_US)
dc.subject: U-Net (en_US)
dc.subject: MobileNet (en_US)
dc.subject: ConvLSTM (en_US)
dc.title: Background Replacement in Video Conferencing (en_US)
dc.type: Article (en_US)
dc.relation.isPartOf: International Journal of Network Dynamics and Intelligence
pubs.issue: 2
pubs.publication-status: Published
pubs.volume: 2
dc.rights.holder: Kiran Shahi, Yongmin Li
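
For context on the abstract above: whichever segmentation model is used (U-Net, U-Net MobileNet, or U-Net MobileNet&ConvLSTM), the background replacement step itself reduces to alpha compositing the camera frame over a new background using the predicted per-pixel person mask. A minimal NumPy sketch of that compositing step (a hypothetical illustration, not code from the authors' repository):

```python
import numpy as np

def replace_background(frame, mask, background):
    """Composite a frame over a new background using a soft person mask.

    frame:      (H, W, 3) float array, the camera frame
    mask:       (H, W) float array in [0, 1], where 1 = person (foreground)
    background: (H, W, 3) float array, the replacement background
    """
    alpha = mask[..., None]  # add a channel axis so the mask broadcasts over RGB
    return alpha * frame + (1.0 - alpha) * background

# Toy example: a 2x2 white frame, black replacement background,
# and a mask marking the left column as "person".
frame = np.ones((2, 2, 3))
background = np.zeros((2, 2, 3))
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
out = replace_background(frame, mask, background)
```

A soft (fractional) mask, as produced by the models' final sigmoid layer, blends pixels near the person's silhouette, which is what reduces the hard-edge artefacts the abstract describes.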
Appears in Collections: Dept of Computer Science Research Papers

Files in This Item:
File: FullText.pdf
Description: Copyright (c) 2023 Kiran Shahi, Yongmin Li. This work is licensed under a Creative Commons Attribution 4.0 International License.
Size: 2.36 MB
Format: Adobe PDF


This item is licensed under a Creative Commons License.