| Date | 2021-05-27 |
|---|---|
| Speaker | 이성윤 |
| Dept. | Seoul National University |
| Room | 27-116 |
| Time | 17:00-18:00 |
Deep learning has shown successful results in many applications. However, deep neural networks have been shown to be vulnerable to small, adversarially designed perturbations of the input that can mislead a network into predicting a wrong label. Such adversarial attacks, and defenses against them, have been studied extensively. However, Athalye et al. (2018) showed that most defenses rely on specific predefined adversarial attacks and can be completely broken by stronger adaptive attacks. Certified methods have therefore been proposed to guarantee stable predictions for every input within a perturbation set. We present this transition from heuristic defenses to certified defenses, and investigate two key features of certified defenses: *tightness* and *smoothness*.
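For readers unfamiliar with adversarial perturbations, the sketch below illustrates one of the simplest attacks, the fast gradient sign method (FGSM) of Goodfellow et al. (2015). The abstract does not commit to any specific attack, so this is purely illustrative; the toy model, input shapes, and `eps` value are hypothetical.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps):
    """FGSM: take one step of size eps in the direction of the sign of the
    loss gradient with respect to the input, then clamp to the pixel range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a hypothetical linear classifier on a random 32x32 RGB "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # input in [0, 1]
y = torch.tensor([3])          # hypothetical true label
x_adv = fgsm_attack(model, x, y, eps=8 / 255)
print((x_adv - x).abs().max())  # the perturbation stays within eps
```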
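On the certified side, a minimal sketch of one well-known certified defense, randomized smoothing (Cohen et al., 2019): the smoothed classifier votes over Gaussian-noised copies of the input and comes with a certified l2 radius. The abstract does not name a specific certified method, so this is only an illustration; moreover, the Monte Carlo estimate below uses the empirical top-class probability where the full procedure would use a statistically valid lower bound.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

def smoothed_predict(model, x, sigma, n=1000):
    """Randomized smoothing sketch: classify n Gaussian-noised copies of x,
    vote, and compute the certified l2 radius R = sigma * Phi^{-1}(p_A)."""
    with torch.no_grad():
        noise = torch.randn(n, *x.shape[1:]) * sigma
        logits = model(x + noise)  # x broadcasts over the n noisy copies
        counts = logits.argmax(dim=1).bincount(minlength=logits.shape[1])
    top = counts.argmax()
    p_a = (counts[top].float() / n).clamp(max=1 - 1e-6)
    # The radius is meaningful only when p_A > 1/2; a negative value
    # corresponds to the smoothed classifier abstaining.
    radius = sigma * Normal(0.0, 1.0).icdf(p_a)
    return top.item(), radius.item()

# Toy usage with the same kind of hypothetical linear classifier as above.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
label, radius = smoothed_predict(model, x, sigma=0.25)
print(label, radius)
```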