
WaNet – Imperceptible Warping-based Backdoor Attack

January 19, 2021

With the thriving of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become an increasing security threat, drawing much research interest in recent years. A third-party model can be poisoned during training so that it works well under normal conditions but behaves maliciously when a trigger pattern appears. However, existing backdoor attacks are all built on noise-perturbation triggers, making them noticeable to humans. In this paper, we instead propose using warping-based triggers. The proposed backdoor outperforms previous methods in a human inspection test by a wide margin, proving its stealthiness. To make such models undetectable by machine defenders, we propose a novel training mode, called the “noise mode”. The trained networks successfully attack and bypass state-of-the-art defense methods on standard classification datasets, including MNIST, CIFAR-10, GTSRB, and CelebA. Behavior analyses show that our backdoors are transparent to network inspection, further proving the efficiency of this novel attack mechanism.
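The key idea is that the trigger is a fixed, subtle image-warping transform rather than an additive noise pattern. Below is a minimal sketch of how such a warping trigger could be applied with PyTorch's `grid_sample`; the helper names, the control-grid construction, and the `strength` parameter are illustrative assumptions and do not reproduce the paper's exact warping-field generation.

```python
import torch
import torch.nn.functional as F

def make_warp_grid(height, width, strength=0.5, control_points=4):
    # Small random control grid, upsampled into a smooth, image-sized offset field.
    # (Hypothetical construction; the paper's warping field is built differently in detail.)
    flow = torch.rand(1, 2, control_points, control_points) * 2 - 1
    flow = flow / flow.abs().mean()  # normalize offset magnitude
    flow = F.interpolate(flow, size=(height, width),
                         mode="bicubic", align_corners=True)
    flow = flow.permute(0, 2, 3, 1) * strength / height  # (1, H, W, 2), small offsets

    # Identity sampling grid in [-1, 1], to which the offsets are added.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, height),
                            torch.linspace(-1, 1, width), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (1, H, W, 2)
    return torch.clamp(identity + flow, -1, 1)

def apply_trigger(images, grid):
    # Warp a batch of images (N, C, H, W) with the fixed backdoor grid.
    return F.grid_sample(images, grid.expand(images.size(0), -1, -1, -1),
                         align_corners=True)
```

Because the same fixed grid is reused for every poisoned sample, the warped images remain visually close to the originals while providing a consistent signal for the backdoored network to learn.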


Anh Tuan Nguyen, Anh Tuan Tran

ICLR 2021

