Distributional Sliced-Wasserstein and Applications to Generative Modeling

January 19, 2021

Sliced-Wasserstein distance (SW) and its variant, Max Sliced-Wasserstein distance (Max-SW), have been used widely in recent years due to their fast computation and scalability even when the probability measures lie in a very high-dimensional space. However, SW requires many unnecessary projection samples to approximate its value, while Max-SW uses only the single most important projection and therefore ignores the information carried by other useful directions. To address these weaknesses, we propose a novel distance, named Distributional Sliced-Wasserstein distance (DSW), that finds an optimal distribution over projections, balancing the exploration of distinctive projecting directions with the informativeness of the projections themselves. We show that DSW is a generalization of Max-SW, and that it can be computed efficiently by searching for the optimal push-forward measure over a set of probability measures on the unit sphere satisfying certain regularizing constraints that favor distinct directions. Finally, we conduct extensive experiments with large-scale datasets to demonstrate the favorable performance of the proposed distances over previous sliced-Wasserstein-based distances in generative modeling applications.
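To make the baseline that the abstract builds on concrete, below is a minimal NumPy sketch of the standard Monte Carlo estimator of the Sliced-Wasserstein distance between two empirical measures: sample random directions on the unit sphere, project both sample sets to one dimension, and compare sorted projections. The function name `sliced_wasserstein` and parameters such as `n_projections` are illustrative assumptions, not the authors' released implementation; DSW replaces the uniform sampling of directions with an optimized distribution over them.

```python
# A minimal sketch (not the authors' code) of the Monte Carlo estimator of SW_p
# between two empirical measures with equal sample sizes.
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, seed=None):
    """Approximate SW_p between samples X and Y, each of shape (n, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Sample projection directions uniformly on the unit sphere S^{d-1}.
    thetas = rng.normal(size=(n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)

    # Project both sample sets onto every direction: shape (n_projections, n).
    X_proj = thetas @ X.T
    Y_proj = thetas @ Y.T

    # In 1-D, optimal transport between equally weighted samples matches sorted values.
    X_proj.sort(axis=1)
    Y_proj.sort(axis=1)
    per_direction_cost = np.mean(np.abs(X_proj - Y_proj) ** p, axis=1)

    # Average the one-dimensional costs over directions, then take the p-th root.
    return np.mean(per_direction_cost) ** (1.0 / p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(512, 10))
    Y = rng.normal(0.5, 1.0, size=(512, 10))
    print(sliced_wasserstein(X, Y, n_projections=200, seed=1))
```

Max-SW corresponds to keeping only the single direction with the largest one-dimensional cost instead of averaging, while DSW, as described above, learns a distribution over directions under a regularization that encourages them to remain distinct.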

Khai Nguyen, Nhat Ho, Tung Pham, Hung Bui

ICLR 2021

