
PhoBERT: Pre-trained language models for Vietnamese

November 17, 2020

We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference. We release PhoBERT to facilitate future research and downstream applications for Vietnamese NLP. Our PhoBERT models are available at this https URL
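Since the released models follow the standard BERT/RoBERTa interface, they can be loaded with the Hugging Face `transformers` library. The sketch below assumes the checkpoints are hosted on the Hugging Face Hub under `vinai/phobert-base` (the large version would be `vinai/phobert-large`), and that the input has already been word-segmented, which PhoBERT's tokenizer expects for Vietnamese:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed Hub identifier for the base model; swap in "vinai/phobert-large"
# for the large version.
MODEL_NAME = "vinai/phobert-base"

phobert = AutoModel.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# PhoBERT expects word-segmented input (multi-syllable words joined with "_",
# e.g. as produced by an external Vietnamese word segmenter).
sentence = "Chúng_tôi là những nghiên_cứu_viên ."

input_ids = torch.tensor([tokenizer.encode(sentence)])
with torch.no_grad():
    # Contextual embeddings: shape (batch, sequence_length, hidden_size).
    features = phobert(input_ids).last_hidden_state
```

The resulting `features` tensor can then feed a task-specific head for the downstream tasks evaluated in the paper, such as POS tagging or NER.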


Dat Quoc Nguyen, Anh Tuan Nguyen

EMNLP Findings 2020
