Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/20105
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, K | -
dc.contributor.author | Chakrabarty, D | -
dc.date.accessioned | 2020-01-24T14:27:24Z | -
dc.date.available | 2020-01-24T14:27:24Z | -
dc.date.issued | 2018-04-16 | -
dc.identifier | https://arxiv.org/abs/1803.04582v2 | -
dc.identifier | ORCiD: Dalia Chakrabarty https://orcid.org/0000-0003-1246-4235 | -
dc.identifier.citation | Wang, K. and Chakrabarty, D. (2018) 'Deep Bayesian Supervised Learning given Hypercuboidally-shaped, Discontinuous Data, using Compound Tensor-Variate & Scalar-Variate Gaussian Processes', arXiv:1803.04582v2 [stat.ME], pp. 1-43. doi: 10.48550/arXiv.1803.04582. | -
dc.identifier.uri | https://bura.brunel.ac.uk/handle/2438/20105 | -
dc.description.abstract | We undertake Bayesian learning of the high-dimensional functional relationship between a system parameter vector and an observable that is, in general, tensor-valued. The ultimate aim is Bayesian inverse prediction of the system parameters at which test data are recorded. We attempt such learning given hypercuboidally-shaped data that display strong discontinuities, which renders learning challenging. We model the sought high-dimensional function with a tensor-variate Gaussian Process (GP), and use three independent methods for learning the covariance matrices of the resulting Tensor-Normal likelihood. We demonstrate that the discontinuous data demand non-stationary covariance kernels; we achieve non-stationarity by modelling each kernel hyperparameter as a function of the sample function of the invoked tensor-variate GP. Each such function can be shown to be temporally evolving, and is treated as a realisation of a distinct scalar-variate GP, with covariance described adaptively by collating information from a historical set of samples of chosen sample size. We prove that deep learning with two "layers" suffices, where the outer layer comprises the tensor-variate GP, compounded with multiple scalar-variate GPs in the "inner layer", and we undertake inference with Metropolis-within-Gibbs. We apply our method to a cuboidally-shaped, discontinuous, real dataset, and subsequently perform forward prediction to generate data from our model given our results, in order to check the model. | en_US
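The abstract's central device — making a covariance kernel non-stationary by letting a hyperparameter vary with the input — can be illustrated in miniature with the well-known Gibbs kernel, in which the length-scale is itself a function of the input. This is only a minimal 1-D sketch of that general idea, not the paper's compound tensor-variate construction; the length-scale function `ls` below is hypothetical.

```python
import numpy as np

def gibbs_kernel(x1, x2, length_scale_fn):
    """Non-stationary squared-exponential (Gibbs) kernel in 1-D.

    The length-scale is a function of the input rather than a constant,
    which is one standard way to make a covariance kernel non-stationary.
    """
    l1 = length_scale_fn(x1)[:, None]   # l(x) at the first set of points
    l2 = length_scale_fn(x2)[None, :]   # l(x') at the second set
    sq_sum = l1**2 + l2**2
    prefactor = np.sqrt(2.0 * l1 * l2 / sq_sum)
    d2 = (x1[:, None] - x2[None, :])**2
    return prefactor * np.exp(-d2 / sq_sum)

# Hypothetical length-scale function: shorter correlations near x = 0,
# mimicking a region of rapid change (a discontinuity-like feature).
ls = lambda x: 0.2 + np.abs(x)

x = np.linspace(-2.0, 2.0, 50)
K = gibbs_kernel(x, x, ls)

# The resulting Gram matrix is symmetric, unit-diagonal, and
# positive semi-definite, so it is a valid covariance matrix.
assert np.allclose(K, K.T)
assert np.allclose(np.diag(K), 1.0)
assert np.linalg.eigvalsh(K).min() > -1e-9
```

In the paper's setting the analogous hyperparameter functions are themselves modelled with scalar-variate GPs, giving the two-layer compound structure described in the abstract.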
dc.format.extent | 1-43 | -
dc.language.iso | en | en_US
dc.publisher | Cornell University | en_US
dc.rights | Copyright © The Authors 2018 under an arXiv.org perpetual, non-exclusive license 1.0. This license gives limited rights to arXiv to distribute the article, and also limits re-use of any type from other entities or individuals (see: https://info.arxiv.org/help/license/index.html). | -
dc.rights.uri | https://info.arxiv.org/help/license/index.html | -
dc.subject | compound tensor-variate scalar-variate GPs | en_US
dc.subject | covariance kernel parametrisation | en_US
dc.subject | Lipschitz continuity | en_US
dc.subject | deep learning | en_US
dc.title | Deep Bayesian Supervised Learning given Hypercuboidally-shaped, Discontinuous Data, using Compound Tensor-Variate & Scalar-Variate Gaussian Processes | en_US
dc.type | Article | en_US
dc.identifier.doi | https://doi.org/10.48550/arXiv.1803.04582 | -
dc.rights.holder | The Authors | -
Appears in Collections: Dept of Mathematics Research Papers

Files in This Item:
File | Size | Format
FullText.pdf | 747.79 kB | Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.