Adversarial Attacks on Images

Deep neural networks have achieved breakthrough progress and are now widely deployed across many domains. However, they are also vulnerable to attack through so-called "adversarial examples": an attacker adds subtle changes to the source data that are imperceptible to human senses, causing the network to make an incorrect classification decision.
By studying how adversarial examples are generated, both in principle and in concrete algorithms, our team analyzes the security vulnerabilities of deep-learning-based systems, builds better defenses against this class of attacks, and helps advance the machine learning field.
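To make the idea concrete, below is a minimal sketch of one classic way such an imperceptible perturbation is crafted, the Fast Gradient Sign Method (FGSM); the ResNet-18 stand-in model, the epsilon budget, and the helper name are illustrative assumptions and are unrelated to the specific attacks listed on this page.

```python
import torch
import torchvision.models as models

# Minimal FGSM sketch: perturb an input image by epsilon * sign(gradient of the
# loss w.r.t. the image) so that the classifier's decision changes. The model,
# epsilon, and function name are illustrative assumptions.

model = models.resnet18().eval()   # substitute the actual target network here
loss_fn = torch.nn.CrossEntropyLoss()

def fgsm_attack(image, true_label, epsilon=8 / 255):
    """image: (1, 3, H, W) tensor in [0, 1]; true_label: (1,) long tensor."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```

The perturbation is bounded by epsilon per pixel, which is why the resulting image looks unchanged to a human observer while the model's prediction flips.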


UPC: Learning Universal Physical Camouflage Attack on Object Detectors

Abstract: We study physical adversarial attacks on object detectors in the wild. Prior work on this problem mostly consists of instance-level attacks crafted in environments that are hard to reproduce. To address this, we propose learning a single adversarial pattern that effectively attacks all instances belonging to the same object category (e.g., person, car), which we call the Universal Physical Camouflage Attack (UPC).

[Paper] [Project Page]
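The camouflage optimization itself is described in the paper; purely as an illustration of the "one pattern, many instances" idea, the sketch below trains a single patch over a batch of images of the same category against a stand-in classifier. The classifier, patch placement, and loss are simplifying assumptions and do not reproduce the UPC pipeline, which attacks detectors and models physical transformations.

```python
import torch
import torchvision.models as models

# Toy sketch of a category-level (universal) pattern: one patch is pasted onto
# every image of the target category and trained to suppress the true class.
# The classifier stand-in, patch size, and fixed placement are assumptions for
# illustration only.

model = models.resnet18().eval()          # stand-in for the victim network
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.zeros(3, 50, 50, requires_grad=True)   # the shared pattern
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, top=80, left=80):
    """Paste the same patch at a fixed location on every image in the batch."""
    patched = images.clone()
    patched[:, :, top:top + 50, left:left + 50] = patch.clamp(0, 1)
    return patched

def train_step(images, labels):
    """images: (N, 3, 224, 224) of one object category; labels: true classes."""
    logits = model(apply_patch(images))
    # Maximize the loss on the true label so the pattern fools all instances.
    loss = -torch.nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```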

G-UAP: Generic Universal Adversarial Perturbation that Fools RPN-based Detectors

Abstract: We propose G-UAP, the first work to craft universal adversarial perturbations that fool RPN-based detectors. G-UAP misleads the Region Proposal Network's foreground predictions toward the background class, so that the detector detects nothing.

Asian Conference on Machine Learning (ACML), 2019
[Paper]
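The paper gives the full formulation; as a rough illustration of the core idea (driving RPN foreground scores toward background with a single image-agnostic perturbation), here is a toy sketch. The tiny objectness head is a stand-in for a real Region Proposal Network, and the perturbation budget and loss are assumptions, not the exact G-UAP objective.

```python
import torch
import torch.nn as nn

# Toy stand-in for an RPN objectness head: per-location background/foreground
# logits. A single universal perturbation `delta` is trained so that, for any
# input image, every spatial location is scored as background and the detector
# proposes nothing. Network, budget, and loss are illustrative assumptions.

rpn_head = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),            # channel 0 = background, 1 = foreground
).eval()
for p in rpn_head.parameters():
    p.requires_grad_(False)

delta = torch.zeros(1, 3, 128, 128, requires_grad=True)  # universal perturbation
optimizer = torch.optim.Adam([delta], lr=0.01)
budget = 10 / 255                                          # L-infinity bound

def update(images):
    """images: (N, 3, 128, 128); one gradient step on the shared perturbation."""
    logits = rpn_head((images + delta).clamp(0, 1))        # (N, 2, H, W)
    # Push every spatial location toward the background class (index 0).
    target = torch.zeros(logits.shape[0], *logits.shape[2:], dtype=torch.long)
    loss = nn.functional.cross_entropy(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    delta.data.clamp_(-budget, budget)                     # keep the perturbation small
    return loss.item()
```

Because the same `delta` is updated across many images, the resulting perturbation is image-agnostic, which is what makes it "universal".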

Back to top