
Improving kernel online learning with a snapshot memory

April 24, 2022

We propose in this paper the Stochastic Variance-reduced Gradient Descent for Kernel Online Learning (DualSVRG), which obtains an ε-approximate linear convergence rate and is not vulnerable to the curse of kernelization. Our approach uses a variance reduction technique to reduce the variance when estimating the full gradient, and further exploits recent work on dual space gradient descent for online learning to achieve model optimality. This is achieved by introducing the concept of an instant memory, a snapshot storing the most recent incoming data instances, and by proposing three transformer oracles, namely the budget, coverage, and always-move oracles. We further develop a rigorous theoretical analysis to demonstrate that our proposed approach obtains the ε-approximate linear convergence rate while maintaining model sparsity, hence encouraging fast training. We conduct extensive experiments on several benchmark datasets to compare our DualSVRG with state-of-the-art baselines in both batch and online settings. The experimental results show that our DualSVRG yields superior predictive performance, while requiring training time comparable to that of the baselines.
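The abstract above describes an SVRG-style variance-reduced update driven by a memory of recent instances. The following is a minimal, illustrative sketch of that general idea only, not the paper's DualSVRG procedure: it approximates an RBF kernel with random Fourier features and applies a variance-reduced update over a small buffer of the most recent instances. All names and hyperparameters (KernelSVRGSketch, rff_dim, capacity, inner_steps, the step size) are assumptions made for illustration; the paper's instant memory, transformer oracles, and dual-space updates are not reproduced here.

    import numpy as np

    # Illustrative sketch (NOT the paper's exact DualSVRG): an SVRG-style
    # variance-reduced online learner for a kernel model approximated with
    # random Fourier features, using a small memory of recent instances.
    class KernelSVRGSketch:
        def __init__(self, input_dim, rff_dim=100, gamma=1.0,
                     capacity=32, lr=0.1, inner_steps=5, seed=0):
            rng = np.random.default_rng(seed)
            # Random Fourier features approximating an RBF kernel
            self.W = rng.normal(scale=np.sqrt(2 * gamma), size=(rff_dim, input_dim))
            self.b = rng.uniform(0, 2 * np.pi, size=rff_dim)
            self.w = np.zeros(rff_dim)   # model weights in feature space
            self.memory = []             # buffer of the most recent (x, y) pairs
            self.capacity, self.lr, self.inner_steps = capacity, lr, inner_steps

        def _phi(self, x):
            return np.sqrt(2.0 / len(self.b)) * np.cos(self.W @ x + self.b)

        def predict(self, x):
            return self._phi(x) @ self.w

        def _grad(self, w, x, y):
            # Gradient of 0.5 * (phi(x)^T w - y)^2 with respect to w
            phi = self._phi(x)
            return (phi @ w - y) * phi

        def update(self, x, y):
            # Store the newest instance; evict the oldest when over capacity
            self.memory.append((x, y))
            if len(self.memory) > self.capacity:
                self.memory.pop(0)

            # SVRG snapshot: freeze the weights and compute the full gradient
            # averaged over the memory at that frozen point
            w_snap = self.w.copy()
            full_grad = np.mean([self._grad(w_snap, xi, yi)
                                 for xi, yi in self.memory], axis=0)

            # Inner loop: variance-reduced stochastic steps
            rng = np.random.default_rng()
            for _ in range(self.inner_steps):
                xi, yi = self.memory[rng.integers(len(self.memory))]
                g = self._grad(self.w, xi, yi) - self._grad(w_snap, xi, yi) + full_grad
                self.w -= self.lr * g

    # Usage sketch: stream synthetic data through the learner
    learner = KernelSVRGSketch(input_dim=5)
    rng = np.random.default_rng(1)
    for _ in range(200):
        x = rng.normal(size=5)
        y = np.sin(x.sum())
        learner.update(x, y)

The variance-reduced step combines the current stochastic gradient, the stochastic gradient at the frozen snapshot, and the full snapshot gradient, which is the standard SVRG construction the abstract refers to; the buffer of recent instances stands in for the paper's snapshot memory.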


Trung Le, Khanh Nguyen, Dinh Phung

Machine Learning Journal 2022

