SEMINAR

Implicit Regularization for Algorithm Design: Neural Collapse and Worst Group Generalization

Speaker

Yiping Lu

Affiliation
Stanford University
Timeline
Fri, Mar 25 2022 - 10:00 am (GMT + 7)
About Speaker

Yiping Lu is a doctoral student in Computational and Mathematical Engineering at Stanford University, working with Lexing Ying and Jose Blanchet. Previously, he received his bachelor's degree in Information and Computing Sciences from the School of Mathematical Sciences, Peking University. His work spans statistical learning, stochastic control, numerical analysis, and computational economics. His recent research focuses on integrating structural/physics information with machine learning, and on robust decision making (experiment design, machine learning, and control).

Abstract

Although overparameterized models have succeeded on many machine learning tasks, their accuracy can drop when the test distribution differs from the training one. At the same time, importance weighting, a traditional technique for handling distribution shift, has been shown both empirically and theoretically to have little or no effect on overparameterized models. In this talk, we aim to understand and fix this problem through Neural Collapse, a recently discovered implicit bias in which the last-layer features and classifiers of a neural network form a highly symmetric geometric pattern during the terminal phase of training. In the first part of the talk, we show how implicit regularization drives the last-layer features and classifiers toward the Neural Collapse geometry, using a surrogate model called the unconstrained layer-peeled model (ULPM). In the second part, we propose importance tempering, which improves the decision boundary and achieves consistently better results for overparameterized models. Theoretically, we justify that the group temperatures should be chosen differently under label shift and under spurious correlation. We also prove that properly selected temperatures can alleviate minority collapse in imbalanced classification. Empirically, importance tempering achieves state-of-the-art results on worst-group classification tasks. This talk is based on recent work done mainly with Wenlong Ji, Zhun Deng, Weijie Su, Zach Izzo, and Lexing Ying.
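The "highly symmetric geometric pattern" of Neural Collapse is the simplex equiangular tight frame (ETF): for K classes, the centered class means converge to K unit vectors whose pairwise cosine is exactly -1/(K-1). A minimal NumPy sketch of that geometry (the function name `simplex_etf` is illustrative, not from the talk):

```python
import numpy as np

def simplex_etf(k):
    """Return a K x K matrix whose columns are the K vertices of a
    simplex equiangular tight frame (ETF) in R^K."""
    return np.sqrt(k / (k - 1)) * (np.eye(k) - np.ones((k, k)) / k)

K = 4
M = simplex_etf(K)

# Every class-mean direction has unit norm.
norms = np.linalg.norm(M, axis=0)

# The Gram matrix has 1 on the diagonal and -1/(K-1) off the diagonal:
# the vectors are maximally and equally separated.
G = M.T @ M
print(norms)
print(G)
```

Under Neural Collapse, the last-layer classifier rows align with these same directions, which is why imbalanced training (minority collapse) or distribution shift distorting this geometry degrades worst-group accuracy.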

Related seminars

Trieu Trinh

Google DeepMind

AlphaGeometry: Solving IMO Geometry without Human Demonstrations
Fri, Jul 5 2024 - 10:00 am (GMT + 7)

Tat-Jun (TJ) Chin

The University of Adelaide

Quantum Computing in Computer Vision: A Case Study in Robust Geometric Optimisation
Fri, Jun 7 2024 - 11:00 am (GMT + 7)

Fernando De la Torre

Carnegie Mellon University

Human Sensing for AR/VR
Wed, Apr 24 2024 - 07:00 am (GMT + 7)

Anh Nguyen

Microsoft GenAI

The Revolution of Small Language Models
Fri, Mar 8 2024 - 02:30 pm (GMT + 7)