
BERTweet: A pre-trained language model for English Tweets

October 19, 2020

We present BERTweet, the first public large-scale pre-trained language model for English Tweets. BERTweet has the same architecture as BERT-base (Devlin et al., 2019) and is trained using the RoBERTa pre-training procedure (Liu et al., 2019). Experiments show that BERTweet outperforms the strong baselines RoBERTa-base and XLM-R-base (Conneau et al., 2020), improving on the previous state-of-the-art models on three Tweet NLP tasks: part-of-speech tagging, named-entity recognition and text classification. We release BERTweet under the MIT License to facilitate future research and applications on Tweet data. BERTweet is available at https://github.com/VinAIResearch/BERTweet/
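The released checkpoint plugs into standard Transformer tooling. Below is a minimal sketch of extracting Tweet features with the Hugging Face transformers library, assuming the checkpoint is published on the model hub under the ID vinai/bertweet-base (see the repository above for the exact loading instructions). The example Tweet is pre-normalized in the style the paper describes, with user mentions and URLs replaced by the @USER and HTTPURL placeholders.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed model ID; confirm against the repository's README.
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", use_fast=False)
model = AutoModel.from_pretrained("vinai/bertweet-base")

# A soft-normalized Tweet: user mentions -> @USER, URLs -> HTTPURL.
tweet = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"

input_ids = torch.tensor([tokenizer.encode(tweet)])
with torch.no_grad():
    features = model(input_ids)  # contextual embeddings for each input token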


Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen

EMNLP 2020
