Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/22544
Full metadata record
DC Field | Value | Language
dc.contributor.author | Yang, H | -
dc.contributor.author | Chen, L | -
dc.contributor.author | Chen, M | -
dc.contributor.author | Ma, Z | -
dc.contributor.author | Deng, F | -
dc.contributor.author | Li, M | -
dc.contributor.author | Li, X | -
dc.date.accessioned | 2021-04-16T09:54:08Z | -
dc.date.available | 2019-01-01 | -
dc.date.available | 2021-04-16T09:54:08Z | -
dc.date.issued | 2019-12-11 | -
dc.identifier.citation | Yang, H., Chen, L., Chen, M., Ma, Z., Deng, F., Li, M. and Li, X. (2019) 'Tender Tea Shoots Recognition and Positioning for Picking Robot Using Improved YOLO-V3 Model', IEEE Access, 7, pp. 180998-181011. doi: 10.1109/ACCESS.2019.2958614. | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/22544 | -
dc.description.abstract | To recognize the tender shoots of high-quality tea and to determine the picking points accurately and quickly, this paper proposes a method for recognizing the picking points of tender tea shoots with an improved YOLO-v3 deep convolutional neural network algorithm. The method realizes end-to-end target detection and recognition of different postures of high-quality tea shoots, balancing efficiency and accuracy. First, in order to predict the category and position of tender tea shoots, an image pyramid structure is used to obtain feature maps of tea shoots at different scales. A residual network block structure is added to the downsampling part, and the fully connected part at the end is replaced by a $1 \times 1$ convolution operation, which preserves recognition accuracy while simplifying the network structure. The K-means method is used to cluster the dimensions of the target boxes. Finally, an image data set of picking points for high-quality tea shoots is built. The accuracy of the trained model on the verification set is over 90%, which is much higher than the detection accuracy of existing research methods. | en_US
dc.description.sponsorship | Natural Science Foundation of Shandong Province under Grant ZR2019MEE102; Key Research and Development Program of Shandong Province under Grant 2018GNC112007; Project of Shandong Province Higher Educational Science and Technology Program under Grant J18KA015. | en_US
dc.format.extent | 180998-181011 | -
dc.format.medium | Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | IEEE | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | image recognition | en_US
dc.subject | YOLO-v3 | en_US
dc.subject | convolutional neural network | en_US
dc.subject | image pyramid | en_US
dc.subject | tea shoot | en_US
dc.title | Tender Tea Shoots Recognition and Positioning for Picking Robot Using Improved YOLO-V3 Model | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.1109/ACCESS.2019.2958614 | -
dc.relation.isPartOf | IEEE Access | -
pubs.publication-status | Published | -
pubs.volume | 7 | -
dc.identifier.eissn | 2169-3536 | -
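
The abstract above states that the K-means method is used to cluster the dimensions of the ground-truth target boxes into anchor priors. The following Python snippet is a minimal sketch of that anchor-clustering step, not the authors' implementation: the use of 1 - IoU as the distance metric, the choice of k = 9 anchors, and the synthetic box sizes are assumptions following common YOLO-v3 practice rather than details taken from the paper.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, assuming boxes and anchors share the same top-left corner."""
    # boxes: (N, 2), anchors: (K, 2) -> result: (N, K)
    inter_w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    inter = inter_w * inter_h
    union = boxes[:, 0:1] * boxes[:, 1:2] + anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster ground-truth (w, h) pairs into k anchor boxes using 1 - IoU as the distance."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the anchor with the highest IoU (smallest 1 - IoU distance).
        assignments = np.argmax(iou_wh(boxes, anchors), axis=1)
        # Recompute each anchor as the mean (w, h) of its assigned boxes.
        new_anchors = np.array([
            boxes[assignments == i].mean(axis=0) if np.any(assignments == i) else anchors[i]
            for i in range(k)
        ])
        if np.allclose(new_anchors, anchors):
            break
        anchors = new_anchors
    # Return anchors sorted by area (small to large).
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

if __name__ == "__main__":
    # Hypothetical example: synthetic widths/heights (in pixels) standing in for labelled shoot boxes.
    boxes = np.abs(np.random.default_rng(1).normal(loc=[60, 90], scale=[20, 30], size=(500, 2)))
    print(kmeans_anchors(boxes, k=9))
```

Clustering with 1 - IoU rather than Euclidean distance keeps large and small boxes on an equal footing, which is why it is the standard choice for selecting YOLO anchor dimensions.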
Appears in Collections: Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | - | 4.29 MB | Adobe PDF


This item is licensed under a Creative Commons License.