上海品茶

您的当前位置:上海品茶 > 报告分类 > PDF报告下载

Google:SAIF(Secure AI Framework)安全AI框架(2023)(英文版)(12页).pdf

编号:129448 PDF  DOCX   12页 636.27KB 下载积分:VIP专享
下载报告请您先登录!

Google:SAIF(Secure AI Framework)安全AI框架(2023)(英文版)(12页).pdf

1、Secure AI FrameworkApproachA quick guide to implementingthe Secure AI Framework(SAIF)Table of contentsIntro2Pu?ing SAIF into practice3Step 1-Understand the use3Step 2-Assemble the team3Step 3-Level set with an AI primer4Step 4-Apply the six core elements of SAIF4Expand strong security foundations to

2、 the AI ecosystem4Extend detection and response to bring AI into an organizations threat universe6Automate defenses to keep pace with existing and new threats7Harmonize pla?orm level controls to ensure consistent security across the organization7Adapt controls to adjust mitigations and create faster

3、 feedback loops for AI deployment8Contextualize AI system risks in surrounding business processes9Conclusion111IntroSecure AI Framework(SAIF)is a conceptual framework for secure a?i?cial intelligence(AI)systems.It is inspired by security best practices like reviewing,testing and controlling the supp

4、lychain that Google has applied to so?ware development,while incorporating our understandingof security mega-trends and risks speci?c to AI systems.SAIF o?ers a practical approach toaddress the concerns that are top of mind for security and risk professionals,such as:Securitya)Access managementb)Net

5、work/endpoint securityc)Application/product securityd)Supply chain a?ackse)Data securityf)AI speci?c threatsg)Threat detection and responseAI/ML model risk managementa)Model transparency and accountabilityb)Error-prone manual reviews for detecting anomaliesc)Data poisoningd)Data lineage,retention an

6、d governance controlsPrivacy and compliancea)Data privacy and usage of sensitive datab)Emerging regulationsPeople and organizationa)Talent gapb)Governance/Board reportingThis quick guide is intended to provide high level practical considerations on how organizationscould go about building the SAIF a

7、pproach into their existing or new adoptions of AI.Fu?hercontent will delve deeper into the topics-for now we focus on the priority elements that need to beaddressed under each of the six core elements of SAIF:Expand strong security foundations to the AI ecosystemExtend detection and response to bri

8、ng AI into an organizations threat modelAutomate defenses to keep pace with existing and new threatsHarmonize pla?orm level controls to ensure consistent security across the organizationAdapt controls to adjust mitigations and create faster feedback loops for AI deploymentContextualize AI system ris

9、ks in surrounding business processes2Pu?ing SAIF into practiceStep 1-Understand the useMany organizations are considering using AI for the?rst time,or expanding the AI solutions theyhave to take advantage of new Generative AI(Gen AI)capabilities.In all cases,understanding thespeci?c business problem

10、 AI will solve and the data needed to train the model,will help drive thepolicy,protocols,and controls that need to be implemented as pa?of SAIF.For example,modelsdesigned to analyze or act on existing data,such as models that summarize an analyst repo?ordetect fraud,may implicate fewer complex issu

11、es compared to models that,for example,are usedto make predictions for consumer?nances(e.g.credit risk models),which would raise additionalchallenges due to the potential impact on consumers and applicable consumer protectionobligations.The context of how the models interact with end-users also play

12、s an impo?ant role.Forexample,an AI model exposed externally,that takes in end-user input,will have di?erentrequirements for security and data governance compared to a model used for trading stocks.Alongwith that,using a pre-built model from a third pa?y vs.developing and/or training your own willal

13、so have di?erent implications related to securing the infrastructure and the developmentpla?orm,monitoring model behavior and outcome,threat detection and protections.Thus,thoroughly understanding the AI use case will ensure the implementation of the SAIFcaptures the complexities and risks of the pa

14、?icular deployment.Step 2-Assemble the teamDeveloping and deploying AI systems,just like traditional systems,are multidisciplinary e?o?s andinclude similar elements,such as risk assessment,security/privacy/compliance controls,threatmodeling,and incident response.Additionally,AI systems are o?en comp

15、lex and opaque,have alarge number of moving pa?s,rely on large amounts of data,are resource intensive,can be used toapply judgment-based decisions,and can generate novel content that may be o?ensive,harmful,orcan perpetuate stereotypes and social biases.This expands the composition of the team to in

16、cludestakeholders across multiple organizations,such as:Business use case ownersSecurityCloud EngineeringRisk and Audit teamsPrivacyLegalData Science teamsDevelopment teamsResponsible AI and Ethics3Establishing the right cross-functional team ensures that security,privacy,risk,and complianceconsider

17、ations are included from the sta?and not added a?er the fact.Step 3-Level set with an AI primerAI,especially Gen AI,is a still-emerging and rapidly developing technology.As teams embark onevaluating the business use,the various and evolving complexities,risks,and security controls thatapply-it is cr

18、itical that pa?ies involved understand the basics of the AI model developmentlifecycle,the design and logic of the model methodologies,including capabilities,merits andlimitations.Sta?ing with concepts like AI,machine learning(ML),Deep Learning,Gen AI,largelanguage models(LLMs),etc.,will level set a

19、nd allow non-technical stakeholders to accuratelycapture and evaluate the risks and controls required to manage and deploy AI safely andresponsibly.Step 4-Apply the six core elements of SAIFOnce the use cases and context are known,the team has been assembled and primed on AI,youcan sta?to apply the

20、six elements of SAIF to address some of the concerns mentioned earlier.Itshould be noted that the elements are not intended to be applied in chronological order,ratherlevers that collectively guide the organizations to build and deploy AI systems in a secure andresponsible manner.Expand strong secur

21、ity foundations to the AI ecosystemReview what existing security controls across the security domains apply to AI systemsExisting security controls across the security domains apply to AI systems in a number of ways.Forexample,data security controls can be used to protect the data that AI systems us

22、e to train andoperate.Application security controls can be used to protect the so?ware that AI systems areimplemented in;infrastructure security controls can be used to protect the underlyinginfrastructure that AI systems rely on;and operational security controls can be used to ensure thatAI systems

23、 are operated in a secure manner.The speci?c controls that are needed will vary depending on the use of AI,as well as the speci?c AIsystems and environments.Evaluate the relevance of traditional controls to AI threats and risks using availableframeworksTraditional security controls can be relevant t

24、o AI threats and risks,but they may need to beadapted to be e?ective,or additional layers added to the defense posture to help cover the AIspeci?c risks.For example,data encryption can help to protect AI systems from unauthorized4access by limiting the access of the keys to ce?ain roles,but it may a

25、lso need to be used to protectAI models and their underpinning data from being stolen or tampered with.Pe?orm an analysis to determine what security controls need to be added due to AI speci?cthreats,regulations,etc.Using the assembled team,review how your current controls map to your AI use case,do

26、 a?t forpurpose evaluation of these controls and then create a plan to address the gap areas.Once all ofthat is done,also measure the e?ectiveness of these controls based on whether they lower the riskand how well they address your intended AI usage.Prepare to store and track supply chain assets,cod

27、e and training dataOrganizations that use AI systems must prepare to store and track supply chain assets,code,andtraining data.This includes identifying,categorizing,and securing all assets,as well as monitoringfor unauthorized access or use.By taking these steps,organizations can help protect their

28、 AIsystems from a?ack.Ensure your data governance and lifecycle management are scalable and adapted to AI.Depending on the de?nition of data governance you follow,there are up to six decision domains fordata governance:Data qualityData securityData architectureMetadataData lifecycleData storageAI da

29、ta governance will become more impo?ant than ever.For example,a key underpinning of thee?ectiveness of AI models are the training sets of data.Ensure that you have a proper lifecyclemanagement system when it comes to data sets,with a strong emphasis on security as pa?of thelifecycle(i.e.have securit

30、y measures from creation of data to the ultimate destruction of dataembedded throughout the lifecycle).Data lineage will also play a key pa?and help to answerquestions with regards to privacy and intellectual prope?y.If you know who created the data,where it came from,and what makes up the dataset,i

31、t is much easier to answer questions on theaforementioned topics.As AI adoption grows,your organizations success will likely hinge on scaling these decisiondomains in an agile manner.To help suppo?this e?o?,it is critical to review your data governancestrategy with a cross functional team and potent

32、ially adjust it to ensure it re?ects advances in AI.5Retain and retrainWe are not talking about AI,but rather people.For many organizations,?nding the right talent insecurity,privacy and compliance can be a multi-year journey.Taking steps to retain this talent canadd to your success,as they can be r

33、etrained with skills relevant to AI quicker than hiring talentexternally that may have the speci?c AI knowledge,but lack the institutional knowledge that cantake longer to acquire.Extend detection and response to bring AI into an organizations threatuniverseDevelop understanding of threats that ma?e

34、r for AI usage scenarios,the types of AI used,etc.Organizations that use AI systems must understand the threats relevant to their speci?c AI usagescenarios.This includes understanding the types of AI they use,the data they use to train AIsystems,and the potential consequences of a security breach.By

35、 taking these steps,organizationscan help protect their AI systems from a?ack.Prepare to respond to a?acks against AI and also to issues raised by AI outputOrganizations that use AI systems must have a plan for detecting and responding to securityincidents,and mitigate the risks of AI systems making

36、 harmful or biased decisions.By taking thesesteps,organizations can help protect their AI systems and users from harm.Speci?cally,for Gen AI,focus on AI output-prepare to enforce content safety policiesGen AI is a powe?ul tool for creating a variety of content,from text to images to videos.However,t

37、his power also comes with the potential for abuse.For example,Gen AI could be used to createharmful content,such as hate speech or violent images.To mitigate these risks,it is impo?ant toprepare to enforce content safety policies.Adjust your abuse policy and incident response processes to AI-speci?c

38、 incident types,such as malicious content creation or AI privacy violationsAs AI systems become more complex and pervasive,it is impo?ant to adjust your abuse policy todeal with use cases of abuse and then also adjust your incident response processes to account forAI-speci?c incident types.These typ

39、es of incidents can include malicious content creation,AIprivacy violations,AI bias and general abuse of the system.6Automate defenses to keep pace with existing and new threatsIdentify the list of AI security capabilities focused on securing AI systems,training datapipelines,etc.AI security technol

40、ogies can protect AI systems from a variety of threats,including data breaches,malicious content creation,and AI bias.Some of these technologies include traditional dataencryption,access control,auditing which can be augmented with AI and newer technologies thatcan pe?orm training data protection,an

41、d model protection.Use AI defenses to counter AI threats,but keep humans in the loop for decisions whennecessaryAI can be used to detect and respond to AI threats,such as data breaches,malicious contentcreation,and AI bias.However,humans must remain in the loop for impo?ant decisions,such asdetermin

42、ing what constitutes a threat and how to respond to it.This is because AI systems can bebiased or make mistakes,and human oversight is necessary to ensure that AI systems are usedethically and responsibly.Use AI to automate time consuming tasks,reduce toil,and speed up defensive mechanismsAlthough i

43、t seems like a more simplistic point in light of the uses for AI,using AI to speed up timeconsuming tasks will ultimately lead to faster outcomes.For example,it can be time consuming toreverse engineer a malware binary.However,AI can quickly review the relevant code and provide ananalyst with action

44、able information.Using this information,the analyst could then ask the system togenerate a YARA rule looking for these actions.In this example,there is an immediate reduction oftoil and faster output for the defensive posture.Harmonize pla?orm level controls to ensure consistent security across theo

45、rganizationReview usage of AI and life cycle of AI based appsAs mentioned in Step 1,understanding the use of AI is a key component.Once AI becomes morewidely used in your organization,you should implement a process for periodic review of usage toidentify and mitigate security risks.This includes rev

46、iewing the types of AI models and applicationsbeing used,the data used to train and run AI models,the security measures in place to protect AImodels and applications,the procedures for monitoring and responding to AI security incidents,and AI security risk awareness and training for all employees.Pr

47、event fragmentation of controls by trying to standardize on tooling and frameworksWith the above process in place,you can be?er understand the existing tooling,security controls,and frameworks currently in place.At the same time,it is impo?ant to examine whether yourorganization has di?erent or over

48、lapping frameworks for security and compliance controls to help7reduce fragmentation.Fragmentation will increase complexity and create signi?cant overlap,increasing costs and ine?ciencies.By harmonizing your frameworks and controls,andunderstanding their applicability to your AI usage context,you wi

49、ll limit fragmentation and provide aright?t approach to controls to mitigate risk.This guidance primarily refers to existing controlframeworks and standards,but the same principle(e.g.try to keep the overall number as small aspossible)would apply to new and emerging frameworks and standards for AI.A

50、dapt controls to adjust mitigations and create faster feedback loops for AIdeploymentConduct Red Team exercises to improve safety and security for AI-powered products andcapabilitiesRed Team exercises are a security testing method where a team of ethical hackers a?empts toexploit vulnerabilities in

51、an organizations systems and applications.This can help organizationsidentify and mitigate security risks in their AI systems before they can be exploited by maliciousactors.Stay on top of novel a?acks including prompt injection,data poisoning and evasion a?acksThese a?acks can exploit vulnerabiliti

52、es in AI systems to cause harm,such as leaking sensitive data,making incorrect predictions,or disrupting operations.By staying up-to-date on the latest a?ackmethods,organizations can take steps to mitigate these risks.Apply machine learning techniques to improve detection accuracy and speedAlthough

53、it is critical to focus on securing the use of AI,AI can also help organizations achievebe?er security outcomes at scale(see reference in Step 3 above).AI-assisted detection andresponse capabilities,for example,can be an impo?ant asset for any organization.At the sametime,it is essential to keep hum

54、ans in the loop to oversee relevant AI systems,processes,anddecisions.Over time,this e?o?can drive continuous learning to improve AI base protections,update training and?ne-tuning of data sets for foundation models,and the ML models used forbuilding protections.In turn,this will enable organizations

55、 to strategically respond to a?acks as thethreat environment evolves.Continuous learning is also critical for improving accuracy,reducinglatency and increasing e?ciency of protections.Create a feedback loopTo maximize the impact of the above three elements,it is critical to create a feedback loop.Fo

56、rexample,if your Red Team discovers a way to misuse your AI system,that information should be fedback into your organization to help improve defenses,rather than focusing solely on remediation.Similarly,if your organization discovers a new a?ack vector,it should be fed back into your trainingdata se

57、t as pa?of continuous learning.To ensure that feedback is put to good use,it is impo?ant8to consider various ingestion avenues and have a good understanding of how quickly feedback canbe incorporated into your protections.Contextualize AI system risks in surrounding business processesEstablish a mod

58、el risk management framework and build a team that understands AI-relatedrisksOrganizations should develop a process for identifying,assessing,and mitigating the risksassociated with AI models.The team should be composed of expe?s in AI,security,and riskmanagement.Build an inventory of AI models and

59、 their risk pro?le based on the speci?c use cases andshared responsibility when leveraging third-pa?y solutions and servicesOrganizations should build a comprehensive inventory of AI models and assess their risk pro?lebased on the speci?c use cases,data sensitivity,and shared responsibility when lev

60、eragingthird-pa?y solutions and services.This means identifying all AI models in use,understanding thespeci?c risks associated with each model,and implementing security controls to mitigate thoserisks along with having clear roles and responsibilities.Implement data privacy,cyber risk,and third-pa?y

61、 risk policies,protocols and controlsthroughout the ML model lifecycle to guide the model development,implementation,monitoring,and validationOrganizations should implement data privacy,cyber risk,and third-pa?y risk policies,protocols andcontrols throughout the ML model lifecycle to guide the model

62、 development,implementation,monitoring,and validation.This means developing and implementing policies,protocols,andcontrols that address the speci?c risks associated with each stage of the ML model lifecycle.Keepthe fou?h element of the framework above in mind to ensure you do not create unduefragme

63、ntation.Pe?orm a risk assessment that considers organizational use of AIOrganizations should identify and assess the risks associated with the use of AI,and implementsecurity controls to mitigate those risks.Organizations should also cover security practices tomonitor and validate control e?ectivene

64、ss,including model output explainability and monitoring fordri?.As referenced in Steps 1 and 2,it is impo?ant to create a cross functional team and build adeeper understanding of the relevant use cases to suppo?this e?o?.Organizations can useexisting frameworks for risk assessment to help guide thei

65、r work,but will likely need to augment oradapt their approach to address new emerging AI risk management frameworks.9Incorporate the shared responsibility for securing AI depending on who develops AIsystems,deploys models developed by model provider,tunes the models or useso?-the-shelf solutionsThe

66、security of AI systems is a shared responsibility between the developers,deployers,and usersof those systems.The speci?c responsibilities of each pa?y will vary depending on their role in thedevelopment and deployment of the AI system.For example,the AI system developers areresponsible for developin

67、g AI systems that are secure by design.This includes using secure codingpractices,training AI models on clean data,and implementing security controls to protect AIsystems from a?ack.Match the AI use cases to risk tolerancesThis means understanding the speci?c risks associated with each AI use case a

68、nd implementingsecurity measures to mitigate those risks.For example,AI systems that are used to help makedecisions that could signi?cantly impact peoples lives,such as healthcare or?nance,will likelyneed to be more heavily secured than AI systems that are used for less urgent tasks,such asmarketing

69、 or customer service.10ConclusionAI has captured the worlds imagination and many organizations are seeing oppo?unities to boostcreativity and improve productivity by leveraging this emerging technology.At Google,weve beenbringing AI into our products and services for over a decade,and we remain comm

70、i?ed toapproaching it in a bold and responsible way.SAIF is designed to help raise the security bar and reduce overall risk when developing anddeploying AI systems.To ensure we enable secure-by-default AI advancements,it is impo?ant towork collaboratively.With suppo?from customers,pa?ners,industry and governments,we willcontinue to advance the core elements of the framework and o?er practical and actionableresources to help organizations achieve be?er security outcomes at scale.11

友情提示

1、下载报告失败解决办法
2、PDF文件下载后,可能会被浏览器默认打开,此种情况可以点击浏览器菜单,保存网页到桌面,就可以正常下载了。
3、本站不支持迅雷下载,请使用电脑自带的IE浏览器,或者360浏览器、谷歌浏览器下载即可。
4、本站报告下载后的文档和图纸-无水印,预览文档经过压缩,下载后原文更清晰。

本文(Google:SAIF(Secure AI Framework)安全AI框架(2023)(英文版)(12页).pdf)为本站 (无糖拿铁) 主动上传,三个皮匠报告文库仅提供信息存储空间,仅对用户上传内容的表现方式做保护处理,对上载内容本身不做任何修改或编辑。 若此文所含内容侵犯了您的版权或隐私,请立即通知三个皮匠报告文库(点击联系客服),我们立即给予删除!

温馨提示:如果因为网速或其他原因下载失败请重新下载,重复下载不扣分。
会员购买
客服

专属顾问

商务合作

机构入驻、侵权投诉、商务合作

服务号

三个皮匠报告官方公众号

回到顶部