Recent Progresses in Transfer-based Attack for Image Recognition
Xiaosen Wang, Huawei Singularity Security Lab

CONTENTS
1 Preliminaries
2 Gradient-based Attacks
3 Input Transformation-based Attacks
4 Model-related Attacks
5 Advanced Objective Functions
6 Further Discussion & Conclusion

Preliminaries

DNNs are everywhere in our life: image classification, object detection, autonomous driving, medical diagnostics, facial scan payment, voice recognition.

Adversarial examples are crafted by adding small perturbations that keep them indistinguishable from legitimate inputs, yet they lead to incorrect model predictions. Adversarial examples pose a huge threat to AI applications.
Goodfellow et al. Explaining and Harnessing Adversarial Examples. ICLR 2015.
Wei et al. Adversarial Sticker: A Stealthy Attack Method in the Physical World. TPAMI 2022.
Eykholt et al. Robust Physical-World Attacks on Deep Learning Visual Classification. CVPR 2018.

How to generate adversarial examples?
Training a network: \min_\theta \; \mathbb{E}_{(x,y)\sim\mathcal{D}}\, J(x, y; \theta)
Generating an adversarial example: \max_{\|x^{adv}-x\|_p \le \epsilon} J(x^{adv}, y; \theta) (a minimal code sketch follows the attack taxonomy below)
Notation: \mathcal{D}: training dataset; J(\cdot): loss function; x: clean input; y: ground-truth label; x^{adv}: adversarial example.
Untargeted attack: the victim model predicts the generated adversarial example as any incorrect category.
Targeted attack: the victim model predicts the generated adversarial example as a specific category.

Attack taxonomy:
- White-box attack: the attacker can access any information of the victim model, e.g., architecture, weights, gradients, etc.
- Black-box attack: the attacker can access only limited information of the victim model.
  - Score-based attack: the attacker can obtain the prediction probabilities.
  - Decision-based attack: the attacker can obtain only the prediction label.
  - Transfer-based attack: adversarial examples generated on one (surrogate) model mislead other victim models.
Wang et al. Towards Boosting Adversarial Transferability on Image Classification: A Survey. To be released.
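As a concrete illustration of the objective above, here is a minimal PyTorch sketch of a one-step sign-gradient attack under an L_inf budget (essentially FGSM, formalized in the next section). The classifier `model`, the budget `eps`, and the targeted variant are assumptions for illustration, not values from the slides.

```python
import torch
import torch.nn.functional as F

def one_step_attack(model, x, y, eps=8/255, target=None):
    """One-step sign-gradient attack under an L_inf budget.

    Untargeted: maximize J(x_adv, y); targeted: minimize J(x_adv, target).
    Assumes inputs in [0, 1].
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y if target is None else target)
    grad = torch.autograd.grad(loss, x_adv)[0]
    step = eps * grad.sign() if target is None else -eps * grad.sign()
    return (x + step).clamp(0, 1).detach()
```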

Gradient-based Attacks

Goodfellow et al. Explaining and Harnessing Adversarial Examples. ICLR 2015.
Kurakin et al. Adversarial Examples in the Physical World. ICLR Workshop 2018.
Dong et al. Boosting Adversarial Attacks with Momentum. CVPR 2018.
Lin et al. Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks. ICLR 2020.

Gradient-based adversarial attacks are widely investigated:
- FGSM (Goodfellow et al., 2015): x^{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(x, y; \theta))
- I-FGSM (Kurakin et al., 2018): x_{t+1}^{adv} = x_t^{adv} + \alpha \cdot \mathrm{sign}(\nabla_x J(x_t^{adv}, y; \theta))
- MI-FGSM (Dong et al., 2018): g_{t+1} = \mu \cdot g_t + \frac{\nabla_x J(x_t^{adv}, y; \theta)}{\|\nabla_x J(x_t^{adv}, y; \theta)\|_1}, \quad x_{t+1}^{adv} = x_t^{adv} + \alpha \cdot \mathrm{sign}(g_{t+1})
- NI-FGSM (Lin et al., 2020): x_t^{nes} = x_t^{adv} + \alpha \cdot \mu \cdot g_t, \quad g_{t+1} = \mu \cdot g_t + \frac{\nabla_x J(x_t^{nes}, y; \theta)}{\|\nabla_x J(x_t^{nes}, y; \theta)\|_1}, \quad x_{t+1}^{adv} = x_t^{adv} + \alpha \cdot \mathrm{sign}(g_{t+1})
A sketch of the iterative MI-FGSM loop is given below.
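A minimal PyTorch sketch of the MI-FGSM loop, assuming a differentiable surrogate `model` and inputs of shape (B, C, H, W) in [0, 1]; `eps`, `alpha`, `mu`, and `steps` are illustrative hyperparameters, not values from the slides.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, alpha=1.6/255, mu=1.0, steps=10):
    """MI-FGSM: iterative sign-gradient attack with momentum (untargeted)."""
    x_adv, g = x.clone().detach(), torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Accumulate momentum with an L1-normalized gradient (per image).
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the L_inf ball around x and the valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv
```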

Gradient-based Attacks: Variance Tuning (VT)
Wang et al. Enhancing the Transferability of Adversarial Attacks through Variance Tuning. CVPR 2021.

NI-FGSM finds that Nesterov Accelerated Gradient (NAG), which accelerates the convergence of the optimization process, also enhances transferability. We treat the iterative gradient-based adversarial attack as an SGD optimization process in which, at each iteration, the attacker always chooses the target (surrogate) model for the update; SGD introduces variance due to randomness. (Analogy: standard model training updates the parameters and aims for generalization; adversarial example generation updates the input image x and aims for transferability.)

Gradient variance. Given a classifier with parameters \theta and loss function J(x, y; \theta), an arbitrary image x, and an upper bound \epsilon' for the neighborhood, the gradient variance is defined as
V(x) = \mathbb{E}_{\|x'-x\|_\infty \le \epsilon'}\big[\nabla_{x'} J(x', y; \theta)\big] - \nabla_x J(x, y; \theta).
In practice, we approximate the gradient variance by sampling N examples in the neighborhood of x:
V(x) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_{x^i} J(x^i, y; \theta) - \nabla_x J(x, y; \theta), \quad x^i = x + r_i, \; r_i \sim U[-\epsilon', \epsilon']^d.
At the t-th iteration, we tune the gradient of x_t^{adv} with the gradient variance from the (t-1)-th iteration to stabilize the update direction.

Variance tuning is generally applicable to all iterative gradient-based attacks. For example,
VMI-FGSM: g_{t+1} = \mu \cdot g_t + \frac{\nabla_x J(x_t^{adv}, y; \theta) + v_t}{\|\nabla_x J(x_t^{adv}, y; \theta) + v_t\|_1}, \quad x_{t+1}^{adv} = x_t^{adv} + \alpha \cdot \mathrm{sign}(g_{t+1}),
where v_t is the gradient variance computed at the previous iteration. A sketch of the variance estimate is given below.
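A minimal sketch of the variance estimate, assuming the same PyTorch setup as above; the number of samples `n_samples` and the neighborhood radius `beta * eps` are illustrative choices, not values from the slides.

```python
import torch
import torch.nn.functional as F

def gradient_variance(model, x_adv, y, eps=16/255, beta=1.5, n_samples=20):
    """Approximate V(x): mean gradient over random neighbors minus the gradient at x_adv."""
    x0 = x_adv.clone().detach().requires_grad_(True)
    grad_x = torch.autograd.grad(F.cross_entropy(model(x0), y), x0)[0]
    acc = torch.zeros_like(x0)
    for _ in range(n_samples):
        # Sample a neighbor uniformly within the L_inf ball of radius beta * eps.
        r = torch.empty_like(x0).uniform_(-beta * eps, beta * eps)
        xi = (x0.detach() + r).requires_grad_(True)
        acc += torch.autograd.grad(F.cross_entropy(model(xi), y), xi)[0]
    return acc / n_samples - grad_x
```

In VMI-FGSM, the value returned here plays the role of v_t: it is added to the raw gradient before the L1 normalization in the MI-FGSM momentum update.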

Input Transformation-based Attacks

Xie et al. Improving Transferability of Adversarial Examples with Input Diversity. CVPR 2019.
Dong et al. Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks. CVPR 2019.
Lin et al. Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks. ICLR 2020.
Wang et al. Admix: Enhancing the Transferability of Adversarial Attacks. ICCV 2021.
Long et al. Frequency Domain Model Augmentation for Adversarial Attack. ECCV 2022.

Similar to data augmentation in training, input transformations enhance the diversity of the image and thus boost adversarial transferability:
- DIM (Xie et al., 2019): randomly resize the image and add padding for the gradient calculation.
- TIM (Dong et al., 2019): accumulate the gradient over a set of translated images; to approximate this process, TIM convolves the gradient of the original image with a predefined kernel.
- SIM (Lin et al., 2020): accumulate the gradient over a set of scaled images.
- Admix (Wang et al., 2021): mix the image with images from other categories for the gradient calculation.
- SSA (Long et al., 2022): add noise and randomly mask elements in the frequency domain to generate several images for the gradient calculation.
A sketch of the DIM-style transformation is given below.
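A minimal sketch of the DIM-style random resize-and-pad, assuming 224x224 inputs, an illustrative maximum size of 254, and a diversity probability of 0.5 (all assumptions, not values from the slides). In any of the attacks above, the gradient is then computed on `diverse_input(x)` instead of `x`, provided the surrogate accepts the padded resolution.

```python
import torch
import torch.nn.functional as F

def diverse_input(x, low=224, high=254, prob=0.5):
    """DIM-style transform: random resize, then random zero-padding back to `high` x `high`."""
    if torch.rand(1).item() > prob:
        return x  # keep the original image with probability 1 - prob
    size = int(torch.randint(low, high, (1,)).item())
    resized = F.interpolate(x, size=(size, size), mode="nearest")
    pad = high - size
    top = int(torch.randint(0, pad + 1, (1,)).item())
    left = int(torch.randint(0, pad + 1, (1,)).item())
    # F.pad takes (left, right, top, bottom) for the last two dimensions.
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)
```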

Input Transformation-based Attacks: Structure Invariant Attack (SIA)
Wang et al. Structure Invariant Transformation for better Adversarial Transferability. ICCV 2023.

Assumption: the more diverse the transformed images are, the better transferability the adversarial examples have. The diversity between the original and transformed images is quantified with the LPIPS distance:
\mathrm{LPIPS}(x, \tilde{x}) = \sum_l \frac{1}{H_l W_l} \sum_{h,w} \big\| w_l \odot (\hat{y}^{\,l}_{hw} - \hat{\tilde{y}}^{\,l}_{hw}) \big\|_2^2,
where \hat{y}^{\,l} and \hat{\tilde{y}}^{\,l} are the normalized layer-l features of the two images and w_l is a per-channel weight.

Structure of an image: given an image that is randomly split into blocks, the relative relation between the anchor points is the structure of the image, where an anchor point is the center of an image block. The structure of the image depicts important semantic information for human recognition. Scaling the image blocks with various factors does not change the structure, so the generated image can still be correctly recognized by humans as well as deep models.

To improve diversity while maintaining the semantic information, we apply various image transformations to different image blocks, denoted as the Structure Invariant Transformation (SIT). The proposed transformation significantly improves diversity but maintains structure invariance, and it can be integrated into existing gradient-based methods: the gradient is computed on several transformed copies of the image, as sketched below.
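A minimal sketch of a block-wise transformation in the spirit of SIT, assuming a uniform 3x3 grid split and a small pool of simple per-block transforms; both the grid and the transform pool are illustrative simplifications, not the paper's exact design. The attack would average gradients over several such randomly transformed copies.

```python
import torch

def _transforms():
    # Illustrative per-block transforms; the paper's pool is richer.
    return [
        lambda b: b,                                              # identity
        lambda b: b.flip(-1),                                     # horizontal flip
        lambda b: b * torch.empty(1).uniform_(0.5, 1.5).item(),   # random scaling
        lambda b: b + 0.1 * torch.randn_like(b),                  # additive noise
        lambda b: torch.zeros_like(b),                            # drop the block
    ]

def structure_invariant_transform(x, splits=3):
    """Apply a randomly chosen transform to each block of a (B, C, H, W) image batch."""
    pool = _transforms()
    h_edges = [round(k * x.shape[-2] / splits) for k in range(splits + 1)]
    w_edges = [round(k * x.shape[-1] / splits) for k in range(splits + 1)]
    rows = []
    for i in range(splits):
        row = []
        for j in range(splits):
            block = x[..., h_edges[i]:h_edges[i + 1], w_edges[j]:w_edges[j + 1]]
            t = pool[int(torch.randint(len(pool), (1,)).item())]
            row.append(t(block))
        rows.append(torch.cat(row, dim=-1))
    return torch.cat(rows, dim=-2).clamp(0, 1)
```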

Model-related Attacks

Li et al. Learning Transferable Adversarial Examples via Ghost Networks. AAAI 2020.
Wu et al. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. ICLR 2020.
Guo et al. Backpropagating Linearly Improves Transferability of Adversarial Examples. NeurIPS 2020.

Modifying the surrogate model to boost adversarial transferability:
- Ghost Networks (Li et al., 2020): densely add dropout layers and randomly scale the features passing through the skip connections of ResNets.
- SGM (Wu et al., 2020): use more of the gradient from the skip connections instead of the residual modules by applying a decay factor during backpropagation (a sketch of such gradient scaling follows this list).
- LinBP (Guo et al., 2020): adopt a constant value as the gradient of the ReLU activation and modify the gradient of the residual modules to make backpropagation more linear.
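A minimal sketch of the gradient-decay idea behind SGM, assuming a toy residual block and a custom autograd function; the decay value and the module structure are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GradDecay(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by `gamma` in the backward pass."""

    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return ctx.gamma * grad_output, None

class ToyResidualBlock(nn.Module):
    """Toy residual block: gradients through the residual branch are decayed, the skip path is untouched."""

    def __init__(self, channels, gamma=0.5):
        super().__init__()
        self.branch = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.gamma = gamma

    def forward(self, x):
        return x + GradDecay.apply(self.branch(x), self.gamma)
```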

Model-related Attacks: Backward Propagation Attack (BPA)
Wang et al. Rethinking the Backward Propagation for Adversarial Transferability. Under review.

Backpropagation follows the chain rule:
\frac{\partial J(x, y; \theta)}{\partial x} = \frac{\partial J}{\partial z_L} \cdot \frac{\partial z_L}{\partial z_{L-1}} \cdots \frac{\partial z_2}{\partial z_1} \cdot \frac{\partial z_1}{\partial x}.
ReLU layer: \mathrm{ReLU}(z) = \max(0, z), with gradient \frac{\partial \mathrm{ReLU}(z)}{\partial z} = 1 if z > 0, and 0 otherwise.
Max-pooling layer: \frac{\partial y}{\partial z_i} = 1 if z_i is the maximum value in the window, and 0 otherwise.

Assumption: the truncation of the gradient introduced by non-linear layers in the backward propagation process decays the adversarial transferability. To verify this, we randomly mask the gradient to introduce more truncation, and randomly replace the zeros in the gradient of ReLU or max-pooling layers with ones. Gradient truncation decays the transferability!

Recovering the truncated gradient yields better transferability:
- Replace the gradient of ReLU with that of SiLU: \frac{\partial \mathrm{SiLU}(z)}{\partial z} = \sigma(z)\big(1 + z(1 - \sigma(z))\big), where \sigma is the sigmoid function.
- Adopt the softmax function to calculate the gradient within each window of the max-pooling layer: \frac{\partial y}{\partial z_i} = \frac{\exp(z_i)}{\sum_j \exp(z_j)}.
A sketch of the ReLU-gradient replacement is given below.
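A minimal sketch of replacing the ReLU backward rule with the SiLU derivative via a custom autograd function, in the spirit of BPA (the forward pass stays ReLU, only the backward changes); this is an illustrative reimplementation, not the authors' released code.

```python
import torch
import torch.nn as nn

class ReLUWithSiLUGrad(torch.autograd.Function):
    """Forward: ReLU. Backward: derivative of SiLU, sigma(z) * (1 + z * (1 - sigma(z)))."""

    @staticmethod
    def forward(ctx, z):
        ctx.save_for_backward(z)
        return z.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (z,) = ctx.saved_tensors
        s = torch.sigmoid(z)
        return grad_output * s * (1 + z * (1 - s))

class BPAReLU(nn.Module):
    """Drop-in replacement for nn.ReLU in the surrogate to recover truncated gradients."""
    def forward(self, z):
        return ReLUWithSiLUGrad.apply(z)
```

In practice, one would swap every `nn.ReLU` of the surrogate for `BPAReLU` before running any of the gradient-based attacks above; the analogous trick with a softmax-weighted backward applies to max-pooling windows.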

Advanced Objective Functions

Wang et al. Feature Importance-aware Transferable Adversarial Attacks. ICCV 2021.
Zhang et al. Enhancing the Transferability of Adversarial Examples with Random Patch. IJCAI 2022.
Zhang et al. Improving Adversarial Transferability via Neuron Attribution-based Attacks. CVPR 2022.

Several attacks disrupt the high-level features:
- FIA (Wang et al., 2021): adopt the aggregate gradient to highlight the important features,
\Delta_k = \frac{1}{N} \sum_{n=1}^{N} \frac{\partial J(x \odot M_n, y; \theta)}{\partial f_k(x \odot M_n)}, \quad M_n \sim \mathrm{Bernoulli}(1 - p),
and optimize the objective L = \sum \big(\Delta_k \odot f_k(x^{adv})\big), where f_k denotes the feature map of the k-th layer. A sketch of the aggregate gradient is given below.
- RPA (Zhang et al., 2022): instead of randomly masking individual pixels, RPA randomly splits the image into patches, which are randomly masked for calculating the weight matrix.
- NAA (Zhang et al., 2022): adopt integrated gradients along the path from a baseline input to x for neuron attribution, and disrupt the features weighted by their attribution.
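A minimal sketch of the FIA-style aggregate gradient on an intermediate feature map, assuming a forward hook on a chosen layer of the surrogate; the layer choice, drop probability, and number of masks are illustrative.

```python
import torch
import torch.nn.functional as F

def aggregate_feature_gradient(model, layer, x, y, drop_p=0.3, n_masks=30):
    """FIA-style weight matrix: average dJ(x * M) / df_k over random Bernoulli masks."""
    x = x.clone().detach().requires_grad_(True)
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))
    delta = None
    for _ in range(n_masks):
        # Randomly drop pixels with probability drop_p.
        mask = torch.bernoulli(torch.full_like(x, 1 - drop_p))
        loss = F.cross_entropy(model(x * mask), y)
        grad = torch.autograd.grad(loss, feats["out"])[0]
        delta = grad if delta is None else delta + grad
    handle.remove()
    return delta / n_masks

# The attack then perturbs x to optimize sum(delta * f_k(x_adv)) at the same layer.
```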

Advanced Objective Functions: Semantic and Abstract FEatures disRuption (SAFER)
Wang et al. Disrupting Semantic and Abstract Features for better Adversarial Transferability. Under review.

DNNs usually focus more on high-frequency components (e.g., texture, edges), and high-frequency components are beneficial for boosting adversarial transferability.

SAFER randomly perturbs the semantic and abstract features:
- Blockmix: split the image into blocks and, for each block, keep the original block with a given probability or replace it with the corresponding block of another image otherwise.
- Frequency perturbation: transform the image into the frequency domain, add random noise and masking to the spectrum, and transform it back.
The aggregate gradient over such randomly perturbed copies, \Delta_k = \frac{1}{N} \sum_{n=1}^{N} \frac{\partial J(T_n(x), y; \theta)}{\partial f_k(T_n(x))} with T_n a random Blockmix or frequency perturbation, serves as an object-aware weight matrix, and the attack optimizes L = \sum \big(\Delta_k \odot f_k(x^{adv})\big). A sketch of the Blockmix operation is given below.
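A minimal sketch of the Blockmix idea, assuming a uniform grid split and a keep probability `gamma`; both choices are illustrative and not taken from the (unreleased) paper.

```python
import torch

def blockmix(x, x_other, splits=4, gamma=0.7):
    """Replace each block of x with the corresponding block of x_other with probability 1 - gamma."""
    out = x.clone()
    h_edges = [round(k * x.shape[-2] / splits) for k in range(splits + 1)]
    w_edges = [round(k * x.shape[-1] / splits) for k in range(splits + 1)]
    for i in range(splits):
        for j in range(splits):
            if torch.rand(1).item() > gamma:
                hs, he = h_edges[i], h_edges[i + 1]
                ws, we = w_edges[j], w_edges[j + 1]
                out[..., hs:he, ws:we] = x_other[..., hs:he, ws:we]
    return out
```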

Further Discussion & Conclusion

TransferAttack: a benchmark containing more than 60 transfer-based attack methods. The framework will be released soon!

Summary:
- VT tunes the gradient using the gradient variance of the previous iteration.
- SIA randomly transforms the image blocks while preserving the structure.
- BPA recovers the truncated gradient of non-linear layers.
- SAFER derives an object-aware weight matrix to disrupt significant features.

Thanks for your attention! Q&A
