
Learning Compositional Sparse Gaussian Processes with a Shrinkage Prior

January 20, 2021

Choosing a proper set of kernel functions is an important problem in learning Gaussian Process (GP) models, since each kernel structure has a different model complexity and fit to the data. Recently, automatic kernel composition methods have provided not only accurate prediction but also attractive interpretability through search-based procedures. However, existing methods suffer from slow kernel composition learning. To tackle large-scale data, we propose a new sparse approximate posterior for GPs, MultiSVGP, constructed from groups of inducing points associated with the individual additive kernels in compositional kernels. We demonstrate that this approximation provides a better fit for learning compositional kernels given empirical observations. We also provide a theoretical justification of the error bound compared to the traditional sparse GP. In contrast to the search-based approach, we present a novel probabilistic algorithm that learns a kernel composition by handling the sparsity in kernel selection with a Horseshoe prior. We demonstrate that our model can capture the characteristics of time series with significant reductions in computational time, and that it has competitive regression performance on real-world data sets.
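The two ingredients named in the abstract — an additive compositional kernel and Horseshoe-prior shrinkage over the kernel weights — can be sketched as follows. This is a minimal NumPy illustration under assumed choices (RBF, periodic, and linear base kernels; a squared-weight parameterisation to keep the combined kernel positive semi-definite), not the paper's actual MultiSVGP implementation:

```python
import numpy as np

# Three common base kernels (assumed for illustration).
def rbf(x1, x2, lengthscale=1.0):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def periodic(x1, x2, period=1.0, lengthscale=1.0):
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

def linear(x1, x2):
    return x1[:, None] * x2[None, :]

def horseshoe_weights(n_kernels, rng):
    """Sample kernel weights under a Horseshoe prior:
    tau ~ HalfCauchy(0, 1)        (global shrinkage)
    lambda_i ~ HalfCauchy(0, 1)   (local shrinkage per kernel)
    w_i ~ N(0, tau^2 * lambda_i^2)
    Most weights are shrunk toward zero, but a few can stay large,
    which encodes sparsity in the kernel selection."""
    tau = np.abs(rng.standard_cauchy())
    lam = np.abs(rng.standard_cauchy(n_kernels))
    return rng.normal(0.0, tau * lam)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)

# Additive compositional kernel: weighted sum of base kernel matrices.
base = [rbf(x, x), periodic(x, x), linear(x, x)]
w = horseshoe_weights(len(base), rng)
# Squaring the weights keeps the sum a valid (PSD) kernel matrix.
K = sum(wi ** 2 * Ki for wi, Ki in zip(w, base))
```

In the paper's setting, each additive component would additionally carry its own group of inducing points, and the Horseshoe scales would be inferred variationally rather than sampled once as done here.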


Anh Tong, Toan Tran, Hung Bui, Jaesik Choi

AAAI 2021

