Input-aware Dynamic Backdoor Attack

November 17, 2020

In recent years, neural backdoor attacks have come to be regarded as a potential security threat to deep learning systems. Backdoored systems, while achieving state-of-the-art performance on clean data, behave abnormally on inputs containing predefined triggers. Current backdoor techniques, however, rely on uniform trigger patterns, which are easily detected and mitigated by existing defense methods. In this work, we propose a novel backdoor attack technique in which the triggers vary from input to input. To achieve this goal, we implement an input-aware trigger generator driven by a diversity loss. A novel cross-trigger test is applied to enforce trigger nonreusability, making backdoor verification impossible. Experiments show that our method is effective across various attack scenarios and multiple datasets. We further demonstrate that our backdoor can bypass state-of-the-art defense methods. An analysis with a well-known neural network inspector again confirms the stealthiness of the proposed attack. Our code is publicly available at this https URL.
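The abstract's key idea, a diversity loss that forces the trigger generator to emit distinct triggers for distinct inputs, can be illustrated with a minimal sketch. The formulation below (the ratio of input distance to trigger distance, minimized during training) is an assumption made for illustration; the function name `diversity_loss` and the NumPy framing are hypothetical, and the paper itself should be consulted for the exact objective.

```python
import numpy as np

def diversity_loss(x1, x2, t1, t2, eps=1e-8):
    """Sketch of a diversity objective: the ratio of the distance between
    two inputs to the distance between their generated triggers.
    Minimizing this value pushes the generator to produce dissimilar
    triggers for dissimilar inputs, preventing a single reusable pattern.
    """
    input_dist = np.linalg.norm(x1 - x2)      # how far apart the inputs are
    trigger_dist = np.linalg.norm(t1 - t2)    # how far apart their triggers are
    return input_dist / (trigger_dist + eps)  # small when triggers differ

# Two distinct inputs (stand-ins for images flattened to vectors).
x1, x2 = np.zeros(4), np.ones(4)

# A generator that reuses one trigger pays a large penalty...
loss_reused = diversity_loss(x1, x2, np.zeros(4), np.zeros(4))
# ...while one that varies its triggers pays a small penalty.
loss_varied = diversity_loss(x1, x2, np.zeros(4), np.ones(4))
print(loss_reused > loss_varied)
```

The same pairwise machinery underlies the cross-trigger test described in the abstract: a trigger generated for one input is applied to a different input, and the attack is trained so that this mismatched pair does not activate the backdoor.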


Anh Tuan Nguyen, Anh Tuan Tran

NeurIPS 2020

