Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/25071
Full metadata record
DC Field | Value | Language
dc.contributor.author | Song, W | -
dc.contributor.author | Liu, Z | -
dc.contributor.author | Guo, Y | -
dc.contributor.author | Sun, S | -
dc.contributor.author | Zu, G | -
dc.contributor.author | Li, M | -
dc.date.accessioned | 2022-08-12T12:26:09Z | -
dc.date.available | 2022-08-12T12:26:09Z | -
dc.date.issued | 2022-08-08 | -
dc.identifier | 3825 | -
dc.identifier.citation | Song, W. et al. (2022) ‘DGPolarNet: Dynamic Graph Convolution Network for LiDAR Point Cloud Semantic Segmentation on Polar BEV’, Remote Sensing, 14 (15), 3825, pp. 1 - 18. doi: 10.3390/rs14153825. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/25071 | -
dc.description | Data Availability Statement: Not applicable. | en_US
dc.description.abstract | Copyright: © 2022 by the authors. Semantic segmentation of LiDAR point clouds has become an important research topic for autonomous driving systems. This paper proposes a dynamic graph convolution neural network for LiDAR point cloud semantic segmentation using a polar bird’s-eye view, referred to as DGPolarNet. LiDAR point clouds are converted to polar coordinates and rasterized into regular grids. The points mapped onto each grid are distributed evenly, which addresses the sparse distribution and uneven density of LiDAR point clouds. In DGPolarNet, a dynamic feature extraction module generates edge features for perceptual points of interest sampled by the farthest point sampling and K-nearest neighbor methods. By embedding the edge features with the original point cloud, local features are obtained and input into PointNet to quantize the points and predict semantic segmentation results. The system was tested on the SemanticKITTI dataset, and the segmentation accuracy reached 56.5%. | en_US
dc.description.sponsorship | This research was supported by the Education and Teaching Reform Project of North China University of Technology, Beijing Urban Governance Research Base, the Ministry of Science (MSIT, ICT), Korea, under the High-Potential Individuals Global Training Program (2020-0-01576) supervised by the Institute for Information and Communications Technology Planning and Evaluation (IITP), the Great Wall Scholar Program (CIT&TCD20190304), and the National Natural Science Foundation of China (No. 61503005). | en_US
dc.format.extent | 1 - 18 | -
dc.format.medium | Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | MDPI AG | en_US
dc.rights | Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | semantic segmentation | en_US
dc.subject | polar BEV | en_US
dc.subject | LiDAR point cloud | en_US
dc.subject | dynamic graph convolution network | en_US
dc.title | DGPolarNet: Dynamic Graph Convolution Network for LiDAR Point Cloud Semantic Segmentation on Polar BEV | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.3390/rs14153825 | -
dc.relation.isPartOf | Remote Sensing | -
pubs.issue | 15 | -
pubs.publication-status | Published online | -
pubs.volume | 14 | -
dc.identifier.eissn | 2072-4292 | -
dc.rights.holder | The authors. | -
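As an illustration of the polar BEV rasterization step described in the abstract, the sketch below maps LiDAR points onto a polar bird’s-eye-view grid. The grid resolution (32 rings, 64 sectors) and the 50 m radius cap are assumed values chosen for the example, not settings taken from the paper.

```python
import numpy as np

def polar_bev_grid(points, num_rings=32, num_sectors=64, max_radius=50.0):
    """Assign each LiDAR (x, y, z) point to a (ring, sector) cell of a
    polar bird's-eye-view grid. Grid sizes are illustrative assumptions."""
    x, y = points[:, 0], points[:, 1]
    r = np.sqrt(x**2 + y**2)           # radial distance from the sensor
    theta = np.arctan2(y, x)           # azimuth angle in [-pi, pi)
    # Radial bin, clipped so points beyond max_radius fall in the outer ring
    ring = np.clip((r / max_radius * num_rings).astype(int), 0, num_rings - 1)
    # Angular bin, shifted so the range starts at 0 before discretizing
    sector = ((theta + np.pi) / (2 * np.pi) * num_sectors).astype(int) % num_sectors
    return np.stack([ring, sector], axis=1)

# Example: three points at different ranges and bearings
pts = np.array([[1.0, 0.0, 0.2], [0.0, 25.0, -0.5], [-40.0, -40.0, 1.0]])
cells = polar_bev_grid(pts)
```

Binning by angle gives every sector the same angular width, so distant sectors cover more ground than near ones; this is how a polar grid keeps the per-cell point count more even than a Cartesian BEV grid for rotating-LiDAR scans.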
Appears in Collections:Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | | 2.63 MB | Adobe PDF


This item is licensed under a Creative Commons License.