Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/16333
Full metadata record
DC Field | Value | Language
dc.contributor.author | Gaus, YFA | -
dc.contributor.author | Meng, H | -
dc.coverage.spatial | Xi'an, China | -
dc.date.accessioned | 2018-06-12T12:31:31Z | -
dc.date.available | 2018-05-16 | -
dc.date.available | 2018-06-12T12:31:31Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | 2018, pp. 492 - 498 (7) | en_US
dc.identifier.uri | http://bura.brunel.ac.uk/handle/2438/16333 | -
dc.description.abstract | Automatic continuous affect recognition from multiple modalities in the wild is arguably one of the most challenging research areas in affective computing. In addressing this regression problem, the advantages of each modality, such as audio, video and text, have been frequently explored, but in isolation. Little attention has been paid so far to quantifying the relationships among these modalities. Motivated to leverage the individual advantages of each modality, this study investigates behavioral modeling of continuous affect estimation with multimodal fusion approaches, using Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming. The capabilities of each fusion approach are illustrated by applying it to affect estimates generated from the individual modalities using classical Support Vector Regression. The proposed fusion methods were applied to the public Sentiment Analysis in the Wild (SEWA) multimodal dataset, and the experimental results indicate that proper fusion can deliver a significant performance improvement for all affect estimation tasks. The results further show that the proposed systems are competitive with, or outperform, other state-of-the-art approaches. (An illustrative fusion sketch follows this record.) | en_US
dc.format.extent | 492 - 498 (7) | -
dc.language.iso | en | en_US
dc.source | IEEE Conference on Automatic Face and Gesture Recognition | -
dc.subject | linear | en_US
dc.subject | affect | en_US
dc.subject | non-linear | en_US
dc.subject | fusion | en_US
dc.subject | linear regression | en_US
dc.title | Linear and Non-Linear Multimodal Fusion for Continuous Affect Estimation in-the-Wild | en_US
dc.type | Other | en_US
dc.identifier.doi | http://dx.doi.org/10.1109/FG.2018.00079 | -
pubs.finish-date | 2018-05-19 | -
pubs.publication-status | Published | -
pubs.start-date | 2018-05-15 | -
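The sketch below is a minimal, hypothetical illustration of the late-fusion setup described in the abstract, assuming scikit-learn and synthetic stand-in features in place of the actual SEWA audio, video and text descriptors. The SVR-per-modality step and the Linear Regression fusion follow the abstract directly; the exponent-based weighting shown is only one plausible reading of "Exponent Weighted Decision Fusion", not the paper's exact formulation, and the Multi-Gene Genetic Programming variant is omitted.

```python
# Illustrative sketch only: per-modality SVRs fused by (a) linear regression
# and (b) an exponent-weighted average. Synthetic data stands in for SEWA
# features; the weighting scheme is an assumption, not the paper's method.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_train, n_test = 200, 50

# Stand-ins for audio/video/text features and a continuous label (e.g. arousal).
modalities = {
    "audio": rng.normal(size=(n_train + n_test, 20)),
    "video": rng.normal(size=(n_train + n_test, 30)),
    "text":  rng.normal(size=(n_train + n_test, 10)),
}
y = rng.normal(size=n_train + n_test)
y_tr, y_te = y[:n_train], y[n_train:]

# Step 1: train one classical SVR per modality, as described in the abstract.
preds_tr, preds_te = [], []
for X in modalities.values():
    svr = SVR(kernel="rbf", C=1.0).fit(X[:n_train], y_tr)
    preds_tr.append(svr.predict(X[:n_train]))
    preds_te.append(svr.predict(X[n_train:]))
P_tr, P_te = np.column_stack(preds_tr), np.column_stack(preds_te)

# Step 2a: Linear Regression fusion over the per-modality predictions.
lr_fused = LinearRegression().fit(P_tr, y_tr).predict(P_te)

# Step 2b: exponent-weighted decision fusion (one plausible reading):
# weight each modality by its exponentiated, negated training error.
err = np.array([mean_squared_error(y_tr, p) for p in preds_tr])
w = np.exp(-err)
w /= w.sum()
ew_fused = P_te @ w

print("LR-fusion MSE:", mean_squared_error(y_te, lr_fused))
print("EW-fusion MSE:", mean_squared_error(y_te, ew_fused))
```

Note that continuous affect work on SEWA is typically scored with the Concordance Correlation Coefficient rather than MSE; MSE is used here only to keep the sketch self-contained.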
Appears in Collections: Publications

Files in This Item:
File | Description | Size | Format
Fulltext.pdf | - | 279.76 kB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.