Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27838
Full metadata record
DC Field | Value | Language
dc.contributor.author | Cai, L | -
dc.contributor.author | Yu, X | -
dc.contributor.author | Li, C | -
dc.contributor.author | Eberhard, A | -
dc.contributor.author | Nguyen, LT | -
dc.contributor.author | Doan, CT | -
dc.coverage.spatial | Perth, WA, Australia | -
dc.date.accessioned | 2023-12-10T17:28:18Z | -
dc.date.available | 2022-12-02 | -
dc.date.available | 2023-12-10T17:28:18Z | -
dc.date.issued | 2022-12-02 | -
dc.identifier | ORCID iD: Thai Doan Chuong https://orcid.org/0000-0003-0893-5604 | -
dc.identifier.citation | Cai, L. et al. (2022) 'Impact of Mathematical Norms on Convergence of Gradient Descent Algorithms for Deep Neural Networks Learning', Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13728 LNAI, Perth, WA, Australia, 5-8 December, pp. 131-144. doi: 10.1007/978-3-031-22695-3_10. | en_US
dc.identifier.isbn | 978-3-031-22695-3 (ebk) | -
dc.identifier.isbn | 978-3-031-22694-6 (pbk) | -
dc.identifier.issn | 0302-9743 | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/27838 | -
dc.description | The conference poster is also available online at: https://ajcai2022.org/wp-content/uploads/2022/11/poster6569.pdf. The conference paper is not available on this institutional repository. | -
dc.description.abstract | To improve the performance of gradient descent learning algorithms, the impact of different types of norms on deep neural network training is studied. The performance of different norm types used in both finite-time and fixed-time convergence algorithms is compared. The accuracy of the multiclass classification task realized by three typical algorithms using different types of norms is reported, and the improvement of Jorge's finite-time algorithm with momentum or Nesterov accelerated gradient is also studied. Numerical experiments show that the infinity norm can provide better performance in finite-time gradient descent algorithms and strong robustness across different network structures. | en_US
dc.format.extent | 131 - 144 | -
dc.format.medium | Print-Electronic | -
dc.language | English | -
dc.language.iso | en_US | en_US
dc.publisher | Springer Nature | en_US
dc.source | 35th Australasian Joint Conference on Artificial Intelligence (AI 2022) | -
dc.subject | infinity norm | en_US
dc.subject | finite-time convergence | en_US
dc.subject | norms equivalence | en_US
dc.subject | deep neural network | en_US
dc.title | Impact of Mathematical Norms on Convergence of Gradient Descent Algorithms for Deep Neural Networks Learning | en_US
dc.type | Conference Paper | en_US
dc.identifier.doi | https://doi.org/10.1007/978-3-031-22695-3_10 | -
dc.relation.isPartOf | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | -
pubs.finish-date | 2022-12-08 | -
pubs.publication-status | Published | -
pubs.start-date | 2022-12-05 | -
pubs.volume | 13728 LNAI | -
dc.identifier.eissn | 1611-3349 | -
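The abstract above compares gradient descent updates normalized by different norms (the Euclidean 2-norm versus the infinity norm). As a rough illustration only, not the paper's actual algorithms, the sketch below contrasts the two normalizations on a toy quadratic objective; all names here (`normalized_gd`, `grad_fn`, the learning rate) are assumptions made for this sketch.

```python
import numpy as np

def normalized_gd(grad_fn, w0, lr=0.07, ord=2, steps=200):
    """Gradient descent whose step direction is scaled by a chosen norm.

    ord=2 gives Euclidean-norm normalization; ord=np.inf normalizes by
    the largest gradient component, so every step has unit infinity norm.
    """
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(steps):
        g = grad_fn(w)
        n = np.linalg.norm(g, ord)
        if n < 1e-12:          # effectively at a stationary point
            break
        w -= lr * g / n        # norm-normalized update
    return w

# Toy objective f(w) = 0.5 * ||w - target||^2 with gradient w - target.
target = np.array([1.0, -2.0, 0.5])
grad = lambda w: w - target

w_two = normalized_gd(grad, np.zeros(3), ord=2)
w_inf = normalized_gd(grad, np.zeros(3), ord=np.inf)
```

Because the normalized step has constant length, both variants approach the minimizer at a rate set by the learning rate rather than by the gradient magnitude, which is the mechanism behind the finite-time convergence behavior the abstract refers to; the final iterates here land within one step length (0.07) of `target`.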
Appears in Collections: Dept of Mathematics Research Papers

Files in This Item:
File | Size | Format
Poster.pdf | 2.31 MB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.