Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/27840
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chen, H | -
dc.contributor.author | Yin, P | -
dc.contributor.author | Huang, H | -
dc.contributor.author | Wu, Q | -
dc.contributor.author | Liu, R | -
dc.contributor.author | Zhu, X | -
dc.date.accessioned | 2023-12-10T18:20:24Z | -
dc.date.available | 2023-12-10T18:20:24Z | -
dc.date.issued | 2023-12-10 | -
dc.identifier.citation | Chen, H. et al. (2023) 'Typhoon Intensity Prediction with Vision Transformer', NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning: Blending New and Existing Knowledge Systems Workshop, New Orleans, LA, USA, 16 December, pp. 1 - 8. Available at: https://www.climatechange.ai/papers/neurips2023/60 (accessed: 10 December 2023). | en_US
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/27840 | -
dc.description | Tackling Climate Change with Machine Learning workshop at NeurIPS 2023. | en_US
dc.description | Code availability: https://github.com/chen-huanxin/Tint | -
dc.description.abstract | Predicting typhoon intensity accurately across space and time is crucial for issuing timely disaster warnings and facilitating emergency response, with vast potential for minimizing loss of life and property damage and for reducing economic and environmental impacts. Leveraging satellite imagery for scenario analysis is effective but introduces additional challenges due to the complex relations among clouds and the highly dynamic context. Existing deep learning methods in this domain rely on convolutional neural networks (CNNs), whose limited per-layer receptive fields hinder their ability to capture long-range dependencies and global contextual knowledge during inference. In response, we introduce a novel approach, the "Typhoon Intensity Transformer" (Tint), which leverages self-attention mechanisms with global receptive fields per layer. Tint adopts a sequence-to-sequence feature representation learning perspective: it begins by cutting a given satellite image into a sequence of patches and recursively employs self-attention operations to extract both local and global contextual relations between all patch pairs simultaneously, thereby enhancing per-patch feature representation learning. Extensive experiments on a publicly available typhoon benchmark validate the efficacy of Tint against both state-of-the-art deep learning and conventional meteorological methods. Our code is available at https://github.com/chen-huanxin/Tint. | en_US
dc.description.sponsorship | China Postdoctoral Science Foundation (2022M721182). | en_US
dc.format.extent | 1 - 8 | -
dc.format.medium | Electronic | -
dc.language.iso | en_US | en_US
dc.publisher | United Nations | en_US
dc.relation.uri | https://www.climatechange.ai/papers/neurips2023/60 | -
dc.relation.uri | https://github.com/chen-huanxin/Tint | -
dc.source | Neural Information Processing Systems (NeurIPS) Workshop on Tackling Climate Change with Machine Learning | -
dc.title | Typhoon Intensity Prediction with Vision Transformer | en_US
dc.type | Conference Paper | en_US
pubs.publication-status | Published online | -
pubs.publisher-url | https://www.climatechange.ai/papers/neurips2023/4 | -
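
The abstract above describes a ViT-style pipeline: cut a satellite image into a sequence of patches, then apply self-attention so every patch attends to every other patch, yielding a globally informed feature per patch. The following is a minimal NumPy sketch of that general idea only; it is not the authors' Tint implementation, and all shapes, random weights, and helper names (`image_to_patches`, `self_attention`) are hypothetical illustrations.

```python
import numpy as np

def image_to_patches(img, patch):
    # img: (H, W, C) -> (num_patches, patch*patch*C), row-major patch order
    H, W, C = img.shape
    rows, cols = H // patch, W // patch
    p = img[:rows * patch, :cols * patch].reshape(rows, patch, cols, patch, C)
    return p.transpose(0, 2, 1, 3, 4).reshape(rows * cols, patch * patch * C)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Each row of the attention matrix spans ALL patches: a global
    # receptive field in a single layer, unlike a CNN's local kernel.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64, 1))        # toy single-channel "satellite" image
patches = image_to_patches(img, patch=16)     # (16, 256): 4x4 grid of 16x16 patches
d = 32
W_embed = rng.standard_normal((patches.shape[1], d)) * 0.02
X = patches @ W_embed                         # patch embeddings, (16, 32)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.02 for _ in range(3))
Z = self_attention(X, Wq, Wk, Wv)             # globally contextualized patch features
intensity = float(Z.mean(axis=0) @ rng.standard_normal(d))  # toy pooled regression head
print(Z.shape)  # (16, 32)
```

A full model would stack several such attention layers with learned weights and train the regression head against labeled typhoon intensities; this sketch only shows why one attention layer already relates every patch pair.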
Appears in Collections:Dept of Economics and Finance Research Papers

Files in This Item:
File | Description | Size | Format
FullText.pdf | - | 452.17 kB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.