COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks

January 8, 2024

Backdoor attacks pose a critical concern for the practice of using third-party data in AI development. Such data can be poisoned so that a trained model misbehaves whenever a predefined trigger pattern appears, granting the attacker illicit benefits. While most proposed backdoor attacks are dirty-label, clean-label attacks are more desirable because they keep data labels unchanged and thus evade human inspection. However, designing an effective clean-label attack is challenging, and existing clean-label attacks show underwhelming performance. In this paper, we propose a novel mechanism for developing clean-label attacks with outstanding attack performance. The key component is a trigger pattern generator, which is trained together with a surrogate model in an alternating manner. Our proposed mechanism is flexible and customizable, supporting different backdoor trigger types and behaviors for either a single target label or multiple target labels. Our backdoor attacks reach near-perfect attack success rates and bypass all state-of-the-art backdoor defenses, as demonstrated by comprehensive experiments on standard benchmark datasets. Our code is available at https://github.com/VinAIResearch/COMBAT.
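To illustrate the alternating-training idea described above, here is a minimal PyTorch sketch, not the paper's exact algorithm: a trigger generator and a surrogate classifier are updated in turn, with the generator learning a bounded additive trigger and the surrogate trained on a clean-label mixture in which only target-class samples are poisoned. The network architectures, loss weights, trigger bound, and target label below are illustrative assumptions; see the official repository for the actual implementation.

```python
# Hypothetical sketch of alternating training between a trigger generator and a
# surrogate classifier. All architectures and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

TARGET = 0      # assumed target label
EPS = 8 / 255   # assumed bound on the trigger magnitude


class Generator(nn.Module):
    """Maps an image to a small additive trigger pattern."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return EPS * self.net(x)  # perturbation bounded in [-EPS, EPS]


class Surrogate(nn.Module):
    """Small CNN stand-in for the surrogate model."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def poison(x, gen):
    """Apply the generated trigger, keeping pixel values in [0, 1]."""
    return (x + gen(x)).clamp(0, 1)


def alternating_step(gen, clf, opt_g, opt_c, x, y):
    # Step 1: update the surrogate classifier on a clean-label mixture.
    # Only target-class samples are poisoned; their labels stay unchanged.
    mask = (y == TARGET)
    x_mix = x.clone()
    if mask.any():
        x_mix[mask] = poison(x[mask], gen).detach()
    opt_c.zero_grad()
    F.cross_entropy(clf(x_mix), y).backward()
    opt_c.step()

    # Step 2: update the generator so the trigger steers any input toward the
    # target label under the current surrogate classifier.
    opt_g.zero_grad()
    logits = clf(poison(x, gen))
    F.cross_entropy(logits, torch.full_like(y, TARGET)).backward()
    opt_g.step()


if __name__ == "__main__":
    gen, clf = Generator(), Surrogate()
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
    opt_c = torch.optim.SGD(clf.parameters(), lr=0.01, momentum=0.9)
    x = torch.rand(8, 3, 32, 32)        # stand-in CIFAR-like batch
    y = torch.randint(0, 10, (8,))
    alternating_step(gen, clf, opt_g, opt_c, x, y)
```

In this sketch the generator only sees gradients through the surrogate in Step 2, while Step 1 treats the current trigger as fixed, which is one simple way to realize the alternating schedule described in the abstract.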


Tran Huynh Ngoc, Dang Minh Nguyen, Tung Pham, Anh Tran

