Machine Learning

Machine learning (ML) occupies a central place in artificial intelligence (AI) and computer science, focusing on how learning algorithms use data to imitate the human learning process. The performance of ML systems has improved steadily (e.g., by acquiring more data and applying modern deep learning methods) and now approaches human-level performance on many tasks. At VinAI, our ML group conducts cutting-edge fundamental research that in turn drives progress in key applications such as computer vision, natural language processing, robotics, smart mobility, human behavior understanding, and machine translation. We study the core of intelligence, effective mechanisms for learning from data and prior knowledge, and how to translate these mechanisms into efficient algorithmic implementations.

In particular, we focus on learning algorithms that can achieve (near) human-level capability in transfer learning and multi-task learning, and we pioneer some of the most advanced methods in ML based on optimal transport and mathematical optimization. Our specific research areas include, but are not limited to:
- Deep generative models
- Representation learning
- Optimal transport
- Continual learning
- Robust and trustworthy ML
- Adversarial ML
- Transfer learning and domain adaptation

ML ICCV Top Tier
Reducing Training Time in Cross-Silo Federated Learning using Multigraph Topology

Federated learning is an active research topic since it enables several participants to…
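
The paper itself studies a multigraph communication topology for cross-silo training, which is not reproduced here. As a point of reference only, the NumPy sketch below shows the basic federated-averaging loop that such work builds on; the helper names (local_update, federated_round) and the linear-regression silos are hypothetical illustrations.

```python
# Minimal sketch of cross-silo federated averaging (illustrative only; the
# multigraph communication topology studied in the paper is not reproduced).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One silo trains a linear model on its private data with plain gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, silos):
    """Each silo updates locally; the server averages, weighted by silo data size."""
    updates, sizes = [], []
    for X, y in silos:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
silos = []
for n in (50, 80, 30):                      # three silos with different data sizes
    X = rng.normal(size=(n, 2))
    silos.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, silos)
print(w)                                    # should approach [2, -1]
```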

ML ICML Top Tier
Self-Attention Amortized Distributional Projection Optimization for Sliced Wasserstein Point-Cloud Reconstruction

Max sliced Wasserstein (Max-SW) distance has been widely known as a solution for…
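
For readers unfamiliar with sliced distances, the NumPy sketch below estimates Max-SW between two point clouds by projecting them onto random unit directions, computing the closed-form 1-D Wasserstein distance on each slice via sorting, and keeping the largest value. This crude random search stands in for the projection optimization; the amortized self-attention projection optimization proposed in the paper is not shown.

```python
# Illustrative Max-SW estimate over random projection directions (assumes both
# point clouds have the same number of points).
import numpy as np

def one_d_wasserstein(a, b, p=2):
    """W_p between two equal-size 1-D empirical distributions: sort and compare."""
    return (np.mean(np.abs(np.sort(a) - np.sort(b)) ** p)) ** (1.0 / p)

def max_sliced_wasserstein(X, Y, n_directions=512, p=2, seed=0):
    """Crude Max-SW estimate: largest 1-D distance over random unit slices."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_directions):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)       # unit projection direction
        best = max(best, one_d_wasserstein(X @ theta, Y @ theta, p))
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(1024, 3))                                # source point cloud
Y = rng.normal(size=(1024, 3)) + np.array([1.0, 0.0, 0.0])    # shifted target cloud
print(max_sliced_wasserstein(X, Y))          # roughly the shift magnitude (~1.0)
```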

ML ICML Top Tier
Vector Quantized Wasserstein Auto-Encoder

Learning deep discrete latent representations offers the promise of better symbolic and summarized…
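
As background for the discrete-latent setting, here is a minimal NumPy sketch of a vector-quantization bottleneck, where each continuous latent vector is snapped to its nearest codebook entry to yield a discrete code. The Wasserstein auto-encoder objective introduced in the paper is not reproduced; the function name vector_quantize and the codebook sizes are illustrative assumptions.

```python
# Illustrative vector-quantization bottleneck: nearest-codebook lookup.
import numpy as np

def vector_quantize(latents, codebook):
    """Map each latent row to the index and value of its nearest codebook vector."""
    # Squared Euclidean distances between every latent and every codebook entry.
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = d2.argmin(axis=1)                # one discrete symbol per latent
    return codes, codebook[codes]            # indices and quantized vectors

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))          # 16 discrete codes, 8-dim embeddings
latents = rng.normal(size=(5, 8))            # e.g. encoder outputs for 5 inputs
codes, quantized = vector_quantize(latents, codebook)
print(codes)                                 # one code index per input
```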

ML IEEE
MAP Estimation With Bernoulli Randomness, and Its Application to Text Analysis and Recommender Systems

MAP estimation plays an important role in many probabilistic models. However, in many…
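
For context, the sketch below shows plain MAP estimation in a Beta-Bernoulli model, where the posterior mode has a closed form. The Bernoulli-randomness algorithm developed in the paper is not reproduced here; the function name beta_bernoulli_map and the prior settings are illustrative assumptions.

```python
# Illustrative closed-form MAP estimate under a Beta prior and Bernoulli likelihood.
import numpy as np

def beta_bernoulli_map(successes, trials, alpha=2.0, beta=2.0):
    """Posterior mode of theta given Beta(alpha, beta) prior and Bernoulli observations."""
    return (successes + alpha - 1.0) / (trials + alpha + beta - 2.0)

rng = np.random.default_rng(0)
theta_true = 0.3
x = rng.binomial(1, theta_true, size=200)     # e.g. clicks or word occurrences
print(beta_bernoulli_map(x.sum(), x.size))    # close to 0.3, pulled toward the prior mean
```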

Related publications

ML ICLR Top Tier
February 19, 2024

Nguyen Hung-Quang, Yingjie Lao, Tung Pham, Kok-Seng Wong, Khoa D Doan

CV ML AAAI Top Tier
January 8, 2024

Tran Huynh Ngoc, Dang Minh Nguyen, Tung Pham, Anh Tran

ML AAAI Top Tier
January 8, 2024

Viet Nguyen*, Giang Vu*, Tung Nguyen Thanh, Khoat Than, Toan Tran

ML NeurIPS Top Tier
October 4, 2023

Van-Anh Nguyen, Trung Le, Anh Tuan Bui, Thanh-Toan Do, Dinh Phung

ML NeurIPS Top Tier
October 4, 2023

Van-Anh Nguyen, Tung-Long Vuong, Hoang Phan, Thanh-Toan Do, Dinh Phung, Trung Le

Seminars & Workshops

Eugene Bagdasaryan

Cornell University

(Un)trustworthy Machine Learning: How to Balance Security, Accuracy, and Privacy
Tue, Apr 4 2023 - 09:30 am (GMT + 7)

Wray Buntine

Monash University

A View in Old & New Machine Learning
Sat, Aug 3 2019 - 09:30 am (GMT + 7)

Thai Ngoc Anh

Georgia Institute of Technology

Developmental Machine Learning
Tue, Nov 24 2020 - 10:00 am (GMT + 7)

Released Source Code

01. Blur-kernel-space-exploring: "Exploring Image Deblurring via Blur Kernel Space" (CVPR 2021)
02. BERTweet: "BERTweet: A pre-trained language model for English Tweets" (EMNLP 2020)

Technical Blog

ML AAAI
March 22, 2024

Viet Nguyen, Giang Vu, Tung Nguyen Thanh, Khoat Than, Toan Tran

July 25, 2023

Hoang Phan, Trung Le, Trung Phung, Tuan Anh Bui, Nhat Ho, Dinh Phung

October 27, 2022

Hoang Phan, Ngoc N. Tran, Trung Le, Toan Tran, Nhat Ho, Dinh Phung