
2019年无处不在的对抗样本攻防.pptx (Adversarial Examples Everywhere: Attacks and Defenses, 2019; PPTX, 23 pages)


A Short Intro: Adversarial Examples Everywhere: Attacks and Defenses
Speaker: head of the algorithm team at 扇贝 (Shanbay)

Agenda
01  What is an Adversarial Example?
02  The spear and the shield: common attack and defense algorithms
03  New trends and risks

Motivation: StyleGAN (2018), Tacotron (2017), GPT-2 (2019). Deep learning models already generate outputs that can fool humans; can the models themselves be fooled?

What is an Adversarial Example (对抗样本)?
Inputs that have been intentionally designed to cause a model to make a mistake. They're like optical illusions for machines.
Physical-world examples: the adversarial stop sign, adversarial glasses, the adversarial patch.
People and venues: Ian Goodfellow (Google Brain), Alexey Kurakin (Google Brain), Dawn Song (UC Berkeley), GeekPwn.

Competition on Adversarial Attacks and Defenses 2018 (CAAD), CTF ruleset:
Non-targeted adversarial attack (非定向攻击): slightly modify a source image so that it is classified incorrectly by a generally unknown classifier.
Targeted adversarial attack (定向攻击): slightly modify a source image so that it is classified as a specified target class by a generally unknown classifier.
Defense against adversarial attack.
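To make the two attack objectives above concrete, here is a minimal PyTorch sketch of single-step FGSM [1] in non-targeted and targeted modes. The toy linear classifier, tensor shapes, and epsilon are illustrative assumptions, not the setup used in the slides.

```python
# Minimal sketch (toy setup assumed, not the slide authors' code):
# single-step FGSM in non-targeted and targeted modes.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in classifier; any differentiable image model would do.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def fgsm(x, label, eps, targeted=False):
    """One FGSM step. Non-targeted: increase the loss on the true label.
    Targeted: decrease the loss on the chosen target label."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    step = -eps * x_adv.grad.sign() if targeted else eps * x_adv.grad.sign()
    return (x_adv + step).clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 32, 32)               # placeholder "source image"
x_nontargeted = fgsm(x, torch.tensor([3]), eps=8 / 255)
x_targeted = fgsm(x, torch.tensor([7]), eps=8 / 255, targeted=True)
print(model(x_nontargeted).argmax(1).item(), model(x_targeted).argmax(1).item())
```

In both modes the perturbation is bounded elementwise by eps, which mirrors the "slightly modify" requirement in the ruleset.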

Section 02. The spear and the shield: common attack and defense algorithms

Example attack scenarios:
FGSM (Fast Gradient Sign Method)
BIM (Basic Iterative Method)
MIM (Momentum Iterative FGSM)
ATN (Adversarial Transformation Networks)
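BIM and MIM repeat small FGSM steps, with MIM additionally accumulating a momentum term over normalized gradients [2][5]. A minimal sketch under the same toy assumptions as the previous block (model, shapes, and hyperparameters are mine, not the slides'):

```python
# Minimal sketch of BIM / MIM: iterative FGSM, optionally with momentum.
# The toy model and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def iterative_fgsm(x, label, eps=8 / 255, alpha=2 / 255, steps=10, mu=0.0):
    """mu=0.0 reproduces BIM; mu>0 (e.g. 1.0) adds MIM-style momentum."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                              # accumulated gradient
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum update; the gradient is normalised by its mean absolute
        # value (a scaled L1 norm), as in the MIM paper.
        g = mu * g + grad / grad.abs().mean().clamp(min=1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()

x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_bim = iterative_fgsm(x, y, mu=0.0)   # BIM
x_mim = iterative_fgsm(x, y, mu=1.0)   # MIM
print(model(x_bim).argmax(1).item(), model(x_mim).argmax(1).item())
```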

Fun results (transferability): adversarial examples crafted against one model often fool other models as well. Example image pairs: butterfly / rabbit, parachute / vehicle, aircraft carrier / guillotine.

Example defense scenarios:
Gradient masking
Detection
Image processing and randomization
Adversarial training

Gradient masking: construct a model that does not have useful gradients (Papernot, Nicolas, et al., "Practical Black-Box Attacks against Machine Learning", Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security, ACM, 2017). Such defenses break gradient-based white-box attacks, but they do not break black-box attacks, e.g. adversarial examples made for other models.
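As an illustration of the idea (my own toy sketch, not from the slides), a non-differentiable preprocessing step such as rounding hides the input gradient from a white-box attacker while leaving the decision function largely intact, which is why the gradients vanish but transfer attacks from a substitute model still work:

```python
# Toy illustration of gradient masking via non-differentiable preprocessing.
# The model and input are placeholders, not the setup from the slides.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def masked_model(x):
    # Rounding the input to a coarse grid has zero gradient almost everywhere,
    # so a gradient-based white-box attacker gets no useful signal.
    return backbone(torch.round(x * 7) / 7)

x = torch.rand(1, 3, 32, 32, requires_grad=True)
F.cross_entropy(masked_model(x), torch.tensor([3])).backward()
print(x.grad.abs().max())   # prints 0: the input gradient is masked
# A black-box attacker sidesteps this by crafting the example on a separate,
# fully differentiable substitute model and transferring it.
```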

Detection: identify adversarial inputs before they reach the classifier.

Image processing and randomization: preprocess or randomly transform the input before classification; one example shown is keeping only the 2 most significant bits of each pixel (quantization).
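The "2 most significant bits" trick is simply bit-depth reduction applied before classification. A minimal sketch (the image tensor here is a random placeholder, not data from the slides):

```python
# Sketch of bit-depth reduction: keep only the 2 most significant bits of each
# 8-bit pixel before classification. The image here is a random placeholder.
import torch

def reduce_to_2_msb(img_uint8: torch.Tensor) -> torch.Tensor:
    """img_uint8: uint8 tensor with values in [0, 255]."""
    return img_uint8 & 0b11000000            # zero out the 6 low-order bits

img = torch.randint(0, 256, (3, 32, 32), dtype=torch.uint8)
quantized = reduce_to_2_msb(img)
print(torch.unique(quantized))               # only values in {0, 64, 128, 192}
```

Input-transformation defenses of this kind (bit-depth reduction, JPEG compression, and similar) are studied in [6]; randomization defenses instead resize and pad the input randomly at test time [11], so the attacker cannot backpropagate through one fixed, known transformation.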

Adversarial training: augment a model's training data with adversarial examples crafted on other static pre-trained models [10]. Related approaches include adversarial logit pairing [15] and feature denoising (128 Nvidia V100 GPUs, 52 hours for ImageNet).
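A compressed sketch of the adversarial-training loop. For brevity it crafts the adversarial examples on the fly with FGSM against the model being trained, whereas ensemble adversarial training [10] crafts them on separate static pre-trained models; the data, model, and hyperparameters are placeholders.

```python
# Minimal adversarial-training loop (toy data, model, and hyperparameters).
# FGSM against the current model is used here for brevity; ensemble adversarial
# training [10] instead crafts the examples on static pre-trained models.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def fgsm(x, y, eps=8 / 255):
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

for step in range(100):                      # placeholder training loop
    x = torch.rand(16, 3, 32, 32)            # stand-in for a real data batch
    y = torch.randint(0, 10, (16,))
    x_adv = fgsm(x, y)                       # adversarial copies of the batch
    opt.zero_grad()                          # clear grads left by fgsm()
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    opt.step()
```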

Section 03. New trends and risks

"Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition".

The slide also shows a deliberately scrambled Chinese sentence, "研表究明,汉字的序顺并不定一能影阅响读,比如当你看完这句话后,才发这现里的字全是乱的。" (roughly: "Research shows that the order of characters does not necessarily affect reading; for example, only after finishing this sentence do you realize that all of its characters are scrambled."), illustrating that human reading is robust to character-order perturbations.

References
Key authors: Ian Goodfellow, Kaiming He, Jun Zhu. Key conferences: NIPS, ICML, CVPR. Toolkits: CleverHans, AdvBox, adversarial-robustness-toolbox.

[1] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and Harnessing Adversarial Examples," arXiv:1412.6572 [cs, stat], Dec. 2014.
[2] A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," arXiv:1607.02533 [cs, stat], Jul. 2016.
[3] N. Papernot et al., "CleverHans v2.1.0: an adversarial machine learning library," arXiv:1610.00768 [cs, stat], Oct. 2016.
[4] S. Baluja and I. Fischer, "Adversarial Transformation Networks: Learning to Generate Adversarial Examples," arXiv:1703.09387 [cs], Mar. 2017.
[5] Y. Dong et al., "Boosting Adversarial Attacks with Momentum," arXiv:1710.06081 [cs, stat], Oct. 2017.
[6] C. Guo, M. Rana, M. Cisse, and L. van der Maaten, "Countering Adversarial Images using Input Transformations," arXiv:1711.00117 [cs], Oct. 2017.
[7] J. Hayes and G. Danezis, "Learning Universal Adversarial Perturbations with Generative Models," arXiv:1708.05207 [cs, stat], Aug. 2017.
[8] F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, and J. Zhu, "Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser," arXiv:1712.02976 [cs], Dec. 2017.
[9] T. Pang, C. Du, Y. Dong, and J. Zhu, "Towards Robust Detection of Adversarial Examples," arXiv:1706.00633 [cs], Jun. 2017.
[10] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, "Ensemble Adversarial Training: Attacks and Defenses," arXiv:1705.07204 [cs, stat], May 2017.
[11] C. Xie, J. Wang, Z. Zhang, Z. Ren, and A. Yuille, "Mitigating Adversarial Effects Through Randomization," arXiv:1711.01991 [cs], Nov. 2017.
[12] A. Athalye, N. Carlini, and D. Wagner, "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples," arXiv:1802.00420 [cs], Feb. 2018.
[13] G. F. Elsayed et al., "Adversarial Examples that Fool both Computer Vision and Time-Limited Humans," arXiv:1802.08195 [cs, q-bio, stat], Feb. 2018.
[14] J. Gilmer, R. P. Adams, I. Goodfellow, D. Andersen, and G. E. Dahl, "Motivating the Rules of the Game for Adversarial Example Research," arXiv:1807.06732 [cs, stat], Jul. 2018.
[15] H. Kannan, A. Kurakin, and I. Goodfellow, "Adversarial Logit Pairing," arXiv:1803.06373 [cs, stat], Mar. 2018.
[16] A. Kurakin et al., "Adversarial Attacks and Defences Competition," arXiv:1804.00097 [cs, stat], Mar. 2018.
[17] M.-I. Nicolae et al., "Adversarial Robustness Toolbox v0.3.0," arXiv:1807.01069 [cs, stat], Jul. 2018.
[18] A. Prakash, N. Moran, S. Garber, A. DiLillo, and J. Storer, "Deflecting Adversarial Attacks with Pixel Deflection," arXiv:1801.08926 [cs], Jan. 2018.
[19] D. Su, H. Zhang, H. Chen, J. Yi, P.-Y. Chen, and Y. Gao, "Is Robustness the Cost of Accuracy? A Comprehensive Study on the Robustness of 18 Deep Image Classification Models," arXiv:1808.01688 [cs], Aug. 2018.
[20] J. Uesato, B. O'Donoghue, A. van den Oord, and P. Kohli, "Adversarial Risk and the Dangers of Evaluating Against Weak Attacks," arXiv:1802.05666 [cs, stat], Feb. 2018.
[21] S. Wang et al., "Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks," arXiv:1809.05165 [cs, stat], Sep. 2018.
[22] K. Xu et al., "Structured Adversarial Attack: Towards General Implementation and Better Interpretability," arXiv:1808.01664 [cs, stat], Aug. 2018.
