上海品茶

您的当前位置:上海品茶 > 报告分类 > PDF报告下载

CETaS:2024人工智能安全报告:国际标准所发挥的作用(英文版)(55页).pdf

编号:157882  PDF  DOCX  55页 7.63MB 下载积分:VIP专享
下载报告请您先登录!

CETaS:2024人工智能安全报告:国际标准所发挥的作用(英文版)(55页).pdf

1、DRAFT FOR REVIEW 0 Towards Secure AI How far can international standards take us?Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes March 2024 Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 1 About CETaS.2 Acknowledgements.2 Executive Summary.3 Summary Recommendations.5

2、 Glossary.7 Introduction.8 1.Why is Securing AI Such a Challenge?.11 1.1 Defining the scope of AI security.11 1.2 How does AI security differ from cybersecurity?.12 2.The Role of AI Security Standards.15 2.1 Background on standardisation.15 2.2 What are standards and why are they helpful?.17 2.3 Pro

3、gress towards standards for secure AI.20 3.A Roadmap for Future AI Security Standards.26 3.1 Key gaps in AI security standards.26 3.2 Priority topics for future standards.27 4.Persistent Challenges to Creating AI Security Standards.31 4.1 Optimising timelines for AI security standards.31 4.2 Multist

4、akeholder inclusivity and accessibility of AI standards.34 5.Incentivising,Enforcing and Assessing Standards Adoption.37 5.1 The two-fold challenge of standards adoption.37 5.2 UK strategy on cybersecurity incentives.39 5.3 Encouraging adoption:expanding incentives to cover international standards.4

5、0 5.4 Enforcing adoption:the role of regulation.42 5.5 Assessing adoption:the role of certification.43 6.Situating Standards within a Broader AI Security Toolkit.45 6.1 Why are alternative levers for AI security needed?.45 6.2 Integrating AI security techniques through AI assurance.47 Recommendation

6、s.49 About the Authors.52 Towards Secure AI:How far can international standards take us?2 About CETaS The Centre for Emerging Technology and Security(CETaS)is a research centre based at The Alan Turing Institute,the UKs national institute for data science and artificial intelligence.The Centres miss

7、ion is to inform UK security policy through evidence-based,interdisciplinary research on emerging technology issues.Connect with CETaS at cetas.turing.ac.uk.This research was supported by The Alan Turing Institutes Defence and Security Programme.All views expressed in this report are those of the au

8、thors,and do not necessarily represent the views of The Alan Turing Institute or any other organisation.Acknowledgements The authors are grateful to all those who took part in a research interview or workshop for this project,without whom the research would not have been possible.The authors are als

9、o grateful to Ihsen Alouani,Matilda Rhode,Christopher Thomas,Rob van der Veer and Colin Whorlow for their valuable feedback on an earlier version of this report.We would also like to thank the AI security research team at the NCSC for their comments on an earlier draft.Design for this report was led

10、 by Michelle Wronski.This work is licensed under the terms of the Creative Commons Attribution License 4.0 which permits unrestricted use,provided the original authors and source are credited.The license is available at:https:/creativecommons.org/licenses/by-nc-sa/4.0/legalcode.Cite this work as:Ros

11、amund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes,“Towards Secure AI:How far can international standards take us?,”CETaS Research Reports(March 2024).Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 3 Executive Summary This CETaS Research Report prepares policymakers to und

12、erstand and address the significant cybersecurity challenges that have resulted from the widespread rollout of AI,with a specific focus on the role played by international standards.The range of AI security risks is broad,including instances of privacy attacks on AI systems(e.g.model inversion where

13、 sensitive or personal data from training sets can be reconstructed from an AI model)and attacks which result in failed model outputs(e.g.evasion attacks leading autonomous vehicles to not recognise signs).As capabilities progress,these risks will evolve,with new vulnerabilities already emerging in

14、generative AI models,particularly regarding prompt injection.In response to these risks,some safeguards for secure AI have been proposed with 2023 seeing new official guidance in both the UK and US on secure AI and AI risk management.However,despite these high-level guidance documents,there remains

15、a lack of specific,operationally focused guidelines for securing AI.In this context,international standards have significant potential to promote AI security best practice.International standards already play a foundational role informing cybersecurity approaches.Furthermore,they are becoming increa

16、singly essential in AI governance,serving a critical translation function between high-level AI policies and practical implementation.Our findings indicate that many of the challenges associated with securing AI systems are not new and can be resolved by updating existing cybersecurity standards.How

17、ever,several challenges are new and evolving rapidly.These include difficulties associated with the security of AI supply chains and new threats which have emerged in the context of generative AI.Standards development organisations(SDOs)have recognised that securing AI presents a new and important c

18、hallenge and have begun to introduce international standards which help protect AI systems from attack.These standards are at a nascent stage.We recommend governments redouble their efforts to support SDOs and to ensure crucial international standards are made available and accessible to those who n

19、eed to implement them.We also recommend further investment in related research to advance understandings of adversarial AI,ensuring that future international standards dont just focus on identifying vulnerabilities,but instead offer robust and specific mitigation strategies.Finally,we recommend gove

20、rnments recognise that international standards will Towards Secure AI:How far can international standards take us?4 not resolve all of their AI governance challenges but must be accompanied by adjacent efforts such as upskilling initiatives and more agile technical solutions.Without these combined e

21、fforts,AI systems vulnerabilities to attack will likely increase further,preventing UK society from being able to fully and safely harness the opportunities these technologies bring.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 5 Summary Recommendations For international standards

22、to meet AI security needs,SDOs,governments,industry,and academia must work together.We make recommendations to each group,focusing on five objectives:1.Set a clear roadmap for future international standards on AI security.We recommend resources within SDOs are prioritised to focus initially on expan

23、ding the scope of existing AI terminology standards,on creating new process standards,and on introducing AI security threat mapping standards.Subsequently,attention should turn to mitigation techniques,measurement standards and sector-specific standards.Where possible,existing standards should be up

24、dated rather than starting from scratch,and more time should be dedicated to coordination across SDOs to avoid duplication and to maximise alignment in terminology and frame of reference between standards.2.Improve fundamental understandings of how to secure AI.Research funding should be dedicated t

25、o fundamental AI security questions.Technical research should focus on identifying new methods to protect AI systems from attack.Social research should explore the human factors preventing AI practitioners from implementing security-aware best practices and aim to improve understandings of who will

26、be most impacted by failures to address these challenges.3.Foster a responsive standardisation ecosystem that is better equipped to tackle the challenges AI brings.We recommend national governments are proactive in international standardisation.Interventions should include government-backed horizon

27、scanning to identify key trends early on and government funding for civil society and small and medium size enterprise(SME)participation in standardisation.4.Introduce incentives to encourage uptake of international cybersecurity and AI standards.Existing cybersecurity incentives should be expanded,

28、with increased focus on accountability.In expanding incentives,we recommend greater integration of UK incentives with international standards.For instance,the UK National Cyber Security Centre(NCSC)should consider introducing an AI-specific tier within the Cyber Essentials accreditation scheme.This

29、new certification tier should be informed by international standards.5.Develop guidelines on AI security,which bring international standards together with analogous resources.We recommend the NCSC works with the Department for Towards Secure AI:How far can international standards take us?6 Science,I

30、nnovation and Technology(DSIT)to produce guidelines on AI security that integrate international standards with national policies,technical solutions,and industry cybersecurity guidelines.Ideally,these guidelines should be produced in collaboration with international partners.Rosamund Powell,Sam Stoc

31、kwell,Nalanda Sharadjaya and Hugh Boyes 7 Glossary Definitions:AI:AI is defined in line with the OECD as any machine-based system that,for explicit or implicit objectives,infers,from the input it receives,how to generate outputs such as predictions,content,recommendations,or decisions that can influ

32、ence physical or virtual environments.1 Cybersecurity:In line with the NCSC,we define cybersecurity simply as the means through which individuals and organisations reduce the risk of cyber-attack.2 AI Security:We define AI security as the process of managing the design,implementation and operation o

33、f AI models,systems,and data throughout their lifecycle,to reduce the risk of harm either from deliberate,unwanted,hostile or malicious acts,or failures to act.Standards:Documents that set repeatable voluntary guidelines for how things should be done(as defined by the AI Standards Hub).Any such refe

34、rence document may be considered a standard,but our research is focused on those created by standards development organisations(SDOs).3 Standards Development Organisation(SDO):SDOs are recognised bodies that develop and publish standards.They are composed of technical committees that oversee the dev

35、elopment of standards within a given remit,via procedures designed to achieve consensus among committee members(as defined by NIST).4 CIA Triad:The confidentiality,integrity,and availability triad is a commonly used model to determine cyber threats and cybersecurity best practices.Abbreviations:ETSI

36、:European Telecommunications Standards Institute CEN:European Committee for Standardisation CENELEC:European Committee for Electrotechnical Standardisation ISO:International Organisation for Standardisation IEC:International Electrotechnical Commission NIST:National Institute of Standards and Techno

37、logy OWASP:Open Worldwide Application Security Project BSI:British Standards Institute NPL:National Physical Laboratory 1“OECD AI Principles Overview,”OECD,https:/oecd.ai/en/ai-principles.2 HM Government,“What is cyber security?”National Cyber Security Centre,https:/www.ncsc.gov.uk/section/about-ncs

38、c/what-is-cyber-security.3“Standards at a glance,”AI Standards Hub,https:/aistandardshub.org/resource/main-training-page-example/4-the-main-stages-of-standards-development/.4“Definition of SDO,”NIST,https:/csrc.nist.gov/glossary/term/SDO.Towards Secure AI:How far can international standards take us?

39、8 Introduction Rapid and widespread adoption of AI,and of large language models(LLMs)in particular,has caused concern in the academic community that we have entered into a cybersecurity crisis of artificial intelligence.5 And,while much attention has been paid to the threat of malicious actors using

40、 AI to boost their cyberattack capabilities,6 insufficient emphasis has so far been placed on attacks that target AI systems themselves.These cyberattacks can be associated with a range of harms globally,from the disruption of peoples access to critical services,to the economic impact on businesses

41、who have been targeted,and the harms caused to people whose personal data may have been impacted.Those working in the field of adversarial AI have studied this threat in detail,cataloguing the vulnerabilities of AI systems to a range of attacks from evasion and poisoning to privacy and abuse attacks

42、.7 Given the scale and urgency of the threat,especially in high-risk sectors such as defence and critical infrastructure,governments have been called on to implement stricter measures,for example through mandating compliance with robust codes of practice on AI security.8 However,developing codes of

43、practice that are sufficiently robust,while also being usable by developers,is a significant challenge.Near-term solutions are likely to come from international SDOs who already play a critical role in operationalising high level AI policies in the form of implementable technical and sociotechnical

44、guidance.9 For decades,international standards have provided consensus-driven best practice guidance on topics from information security to technological resilience.10 SDOs have already released crucial standards on AI,for example specifying how to create an AI management system,11 and defining key

45、terminology such as AI bias and reliability.12 5 Andreas Tsamados,Luciano Floridi and Mariarosaria Taddeo,“The Cybersecurity Crisis of Artificial Intelligence:Unrestrained Adoption and Natural Language-Based Attacks,”SSRN(September 2023),https:/ HM Government,The near-term impact of AI on the cyber

46、threat(National Cyber Security Centre:2024),https:/www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat#section_3.7 NIST,“NIST identifies types of cyberattacks that manipulate behavior of AI systems,”NIST News,4 January 2024,https:/www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberatta

47、cks-manipulate-behavior-ai-systems.8 Marcus Comiter,Attacking Artificial Intelligence:AIs Security Vulnerability and what Policymakers Can Do About It(Belfer Center for Science and International Affairs:August 2019),https:/www.belfercenter.org/publication/AttackingAI.9 Hadrien Pouget,“What will the

48、role of standards be in AI governance?,”Ada Lovelace Institute Blog,5 April 2023,https:/www.adalovelaceinstitute.org/blog/role-of-standards-in-ai-governance/.10“Cyber Security Standard:the most popular cyber security standards explained,”IT Governance,https:/www.itgovernance.co.uk/cybersecurity-stan

49、dards;Karen Scarfone,Dan Benigni and Tim Grance,Cyber Security Standards(NIST:2009),https:/www.nist.gov/publications/cyber-security-standards.11“ISO/IEC 42001:2023,”ISO,https:/www.iso.org/standard/81230.html.12“ISO/IEC 22989:2022,”ISO,https:/www.iso.org/standard/74296.html.Rosamund Powell,Sam Stockw

50、ell,Nalanda Sharadjaya and Hugh Boyes 9 Further international standards covering AI security explicitly will be released in the coming months and years as SDOs,including ETSI,CEN/CENELEC and ISO/IEC,have identified this as a priority.13 But relying on international standards alone for the security o

51、f AI is not a dependable strategy.This is because:Significant gaps remain in the international standards landscape.For example,there is minimal coverage of generative AI,and of specific attacks(e.g.model inversion).The highly lengthy and resource-demanding process of standards development prevents S

52、DOs from keeping pace with cutting-edge techniques to defend AI against attack.Concern about the domination of industry in standards-setting is widespread.Trust in standards must be bolstered to ensure they can be relied on for AI security needs.Even when standards are available,they are voluntary.C

53、urrent UK cybersecurity incentives do not sufficiently cover AI,nor do they introduce sufficiently robust lines of accountability.More must be done to integrate international standards with more agile resources(e.g.national policies,academic and industry approaches)as standards alone will not meet e

54、veryones needs.This report addresses these obstacles to maximise the efficacy of international standards for AI security.We do so by answering the following research questions:Why is AI security such a hard problem for SDOs to tackle?(Section 1)How have SDOs addressed the challenge so far and what t

55、hreats should they tackle next?(Sections 2&3)How might implementation of international standards be improved,both through changes to how standards are designed and how uptake is incentivised?(Sections 4&5)How can developers,procurement teams,product owners and others use international standards alon

56、gside other AI security techniques to implement a whole-lifecycle approach to securing AI?(Section 6)13“ISO/IEC JTC 1/SC 42,”ISO,https:/www.iso.org/committee/6794475.html;“Technical Committee(TC)Securing Artificial Intelligence(SAI),”ETSI,https:/www.etsi.org/committee/technical-committee-tc-securing

57、-artificial-intelligence-sai;“CEN-CENELEC JTC 21,”CEN-CENELEC,https:/www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/.Towards Secure AI:How far can international standards take us?10 International standards stand to play a critical role in minimising harms from cyberattack

58、s on AI systems,and more must be done to ensure their widespread implementation.However,it will also be essential to integrate international standards with agile technical solutions that can be updated as and when new threats to AI systems emerge,and to complement these efforts with upskilling initi

59、atives that provide teams with the skills they need to implement standards effectively.Research methodology and limitations Data collection for this study was conducted over a four-month period from October 2023 January 2024 including four core research activities:1.Literature review covering academ

60、ic and policy literature on topics such as AI governance,cybersecurity,and adversarial AI.2.Standards mapping to assess current coverage of AI security concerns by SDOs,focusing on existing standards across ISO/IEC,ETSI,CEN/CENELEC,and ITU.3.Semi-structured interviews with 31 participants across gov

61、ernment,SDOs,industry and academia.4.Research workshop attended by more than 30 experts on international AI security standards.The scope of this report is limited to considerations around the cybersecurity of AI systems(i.e.protecting the AI models,systems and data).AI-enabled cyberattacks and AI ag

62、ents for cyber-defence are not addressed.It is beyond our scope to provide an exhaustive mapping of individual international standards.We do not cover national standards in depth.Finally,bodies producing standards for internal or private use,such as companies and military agencies whose protocols fo

63、r AI security may be proprietary or confidential,are out of scope.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 11 1.Why is Securing AI Such a Challenge?AI technologies bring new challenges for cybersecurity experts for a number of reasons.For example,the probabilistic nature of ma

64、ny AI systems leads to inherent non-determinism which can be exploited by malicious actors.14 Furthermore,AI system performance is intrinsically dependent on both training data and input data,meaning it is possible to manipulate the behaviour of the system by indirectly manipulating properties of tr

65、aining or input data.15 This section explores how novel properties and capabilities of AI systems necessitate new definitions and new methods from those deployed in other cybersecurity domains.1.1 Defining the scope of AI security A clear definition of AI security is essential to support the consens

66、us-driven process of standards making.16 Establishing such a definition is no easy task.The term AI security is already being used in several ways,with different implications for the breadth of our report.AI security can mean anything from:A subset of cybersecurity,focusing on vulnerabilities to adv

67、ersarial attack present within machine learning(ML)systems,17 or A topic extending beyond cybersecurity to overlap with AI safety.18 Depending on context,the latter definition can be particularly relevant to cyberphysical systems(CPS)where AI interacts with the control of a physical entity.19 Howeve

68、r,it can also 14 Mariarosaria Taddeo et al.,“Artificial Intelligence for national security:the predictability problem,”CETaS Research Reports(September 2022).15 HM Government,Guidelines for secure AI system development(National Cyber Security Centre:2023),https:/www.ncsc.gov.uk/collection/guidelines

69、-secure-ai-system-development.16 Before considering the scope of AI security,clarity is required about what is meant by security itself.This report adopts the UK Engineering Councils definition,where security is defined as the state of relative freedom from threat or harm caused by deliberate,unwant

70、ed,hostile or malicious acts.See:www.engc.org.uk/security.17 Micah Musser,Adversarial Machine Learning and Cybersecurity:Risks,Challenges and Legal Implications(CSET:April 2023),https:/cset.georgetown.edu/publication/adversarial-machine-learning-and-cybersecurity/.18 Jessica Newman,“Towards AI Secur

71、ity:Global Aspirations for a More Resilient Future,”CLTC White Paper(February 2019)https:/cltc.berkeley.edu/publication/toward-ai-security-global-aspirations-for-a-more-resilient-future/.19 While this report discusses AI systems,with a particular focus on ML and neural networks,the authors are cogni

72、sant that,where an AI system forms part of a cyber-physical system,there are potential vulnerabilities in the interface between the AI system and its interactions with the CPS.Towards Secure AI:How far can international standards take us?12 be relevant to harms associated with malicious uses of gene

73、rative AI,for instance in creating disinformation.Throughout research engagements,participants frequently preferred the use of narrower definitions.But,while favouring narrower definitions to specifically highlight adversarial threats,it is necessary to acknowledge that AI security is still a sociot

74、echnical rather than purely technical challenge.It is therefore important to look beyond just technical controls to the physical,personnel,and process aspects of security,as well as considering the security of the information employed in AI processes.Bearing these conflicting priorities in mind,we d

75、efine AI security throughout as:Managing the design,implementation and operation of AI models,systems,and data throughout their lifecycle,to reduce the risk of harm either from unwanted,hostile or malicious acts,or failures to act by accountable actors(e.g.developers,deployers,procurers,end users).1

76、.2 How does AI security differ from cybersecurity?There is agreement in the academic literature that a significant overlap exists between traditional cybersecurity and the security of AI.20 The common ground typically relates to security of the computational infrastructure on which AI systems operat

77、e,and the protection of models and data.However,experts disagree about the extent to which AI security is new.21 Below,we explore four ways in which AI security significantly diverges from cybersecurity.New goals for secure AI We can no longer rely upon the conventional security goals of confidentia

78、lity,integrity and availability(CIA triad)as the sole basis for securing AI applications.For example,integrity addresses the completeness of information not its authenticity,i.e.the difference between 20 ENISA,Cybersecurity of AI and Standardisation(ENISA:March 2023),https:/www.enisa.europa.eu/publi

79、cations/cybersecurity-of-ai-and-standardisation.21 Interview with academic expert,1 October 2023.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 13 completeness and validity.Likewise,there is a difference between availability and utility,the former concerning usability and the latter

80、 usefulness.However,there is still disagreement as to the specific properties that should characterise secure AI.Such disagreement reopens the debate about AI system characteristics such as trustworthiness,dependability,resilience,and robustness,raising questions such as:Should robustness come under

81、 security?and Is resilience a better characterisation?22 The specific characteristics of secure AI can also differ depending on context.For instance,in discussions around NISTs AI risk management framework,the goal of secure and resilient AI has been broken down simply into the sub-goals of confiden

82、tiality,integrity,availability,and security-by-design.23 In contrast,the properties of secure AI in the context of CPS have been broken down to focus on the more complex goals of confidentiality,integrity,availability,authenticity,safety,resilience,utility,interoperability,and control.24 Standardisa

83、tion will be dependent on broad agreement on whether we must look beyond the CIA triad,and if so which additional characteristics are critical for secure AI.New modes of attack Some definitions of AI security focus not on defining the properties of secure AI,but instead on cataloguing new ways AI sy

84、stems can be attacked.Examples include open-source vulnerabilities,supply chain attacks,data vulnerabilities,data poisoning and prompt injection.25 Some have categorised these new modes of attack according to whether they affect generative or predictive AI systems.26 Attackers regularly find new way

85、s to undermine AI security,creating difficulties for SDOs wishing to cover the full spectrum of attacks.22 Interview with industry expert(1),13 November 2023;Interview with academic expert,14 November 2023;Interview with industry experts,16 November 2023.23 Jessica Newman,“A Taxonomy of Trustworthin

86、ess for Artificial Intelligence,”CLTC White Paper Series(January 2023),26,https:/cltc.berkeley.edu/wp-content/uploads/2023/01/Taxonomy_of_AI_Trustworthiness.pdf.24 These goals are based on BSI PAS 1192-5,ISO 19650-5,BSI PAS 185,IET/NCSC Code of Practice covering cybersecurity in the built environmen

87、t,the latest NPSA CAPSS guidance.25 Andrew Lohn,“Poison in the Well:Securing the Shared Resources of Machine Learning,”CSET Policy Brief(June 2021),https:/cset.georgetown.edu/wp-content/uploads/CSET-Poison-in-the-Well.pdf.26 Apostol Vassilev et al.,Adversarial Machine Learning:A Taxonomy of Terminol

88、ogy of Attacks and Mitigations(NIST:January 2024),https:/csrc.nist.gov/pubs/ai/100/2/e2023/final.Towards Secure AI:How far can international standards take us?14 New secure-by-design methods to implement The opacity of complex AI systems makes it more challenging to identify if,and how,secure inputs

89、 guarantee secure outputs.Secure-by-design remains a critical principle but must be updated to account for new complex AI systems.Many approaches to AI security focus on linking solutions to specific stages of the AI lifecycle.For example,NCSCs principles for the security of machine learning encoura

90、ge design for security.27 This can require an in-depth understanding of the relationship represented by input data and the desired behaviours or performance of the wider system.NCSC and the National Protective Security Authority(NPSA)both recommend that organisations developing innovative technologi

91、es should manage their assets and should also secure their infrastructure and supply chains.28 For an AI system,these assets and supply chains will differ from other software.New governance challenges Traditional security awareness practices are insufficiently embedded within AI development communit

92、ies often due to a widespread culture of rapid innovation within the developer community.One industry respondent suggested that AI practitioners dont hold themselves to the same rigour as engineers in more regulated,safety crucial industries.29Additionally,because of the pace and scale of AI adoptio

93、n,particularly generative AI,rapidly emerging security challenges are already causing real-world harm.30 This poses significant challenges for both developers and security professionals.27 HM Government,Principles for the security of machine learning(National Cyber Security Centre:August 2022),https

94、:/www.ncsc.gov.uk/collection/machine-learning.28 HM Government,Principles for the security of machine learning(National Cyber Security Centre:August 2022),https:/www.ncsc.gov.uk/collection/machine-learning;“Secure Innovation,”NPSA,https:/www.npsa.gov.uk/secure-innovation.29 Interview with industry e

95、xpert,3 November 2023.30 Andreas Tsamados,Luciano Floridi and Mariarosaria Taddeo,“The Cybersecurity Crisis of Artificial Intelligence:Unrestrained Adoption and Natural Language-Based Attacks,”SSRN(September 2023),https:/ Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 15 2.The Role of AI Sec

96、urity Standards Initial progress towards securing AI has already been made thanks to international standards.However,this work is nascent and remains fragmented,with distinct SDO committees each focusing on differing aspects of AI and cybersecurity.2.1 Background on standardisation What is an SDO,an

97、d which SDOs are leading work on AI standardisation?SDOs are recognised bodies,composed of technical committees,that develop and publish standards,following procedures designed to achieve consensus among committee members.31 There are national,regional,and international SDOs.Key international player

98、s in AI standardisation include ISO,IEC,IEEE,and,increasingly,ITU.32 At the European level,the main standards bodies working on AI are the European Standards Organisations(ESOs)CEN,CENELEC,and ETSI.33 While ETSI is technically a European SDO,its membership model allows its standards to have global r

99、each.The likely division of responsibility between the ESOs,as signalled in the European Commissions draft standardisation request,will involve the bulk of European AI standards coming from CEN-CENELEC,with some specific AI security work from ETSI.34 National standards bodies(NSBs)send delegates to

100、represent them at international SDOs,while at the same time developing national standards and adopting international standards for national use.In the UK,the British Standards Institution(BSI)is leading AI standardisation.The UKs AI Standards Hub,a partnership between the Alan Turing Institute,BSI,a

101、nd the National Physical Laboratory(NPL)is further advancing the role of international AI standards via research,stakeholder engagement and capacity-building initiatives.31“Definition of SDO,”NIST,https:/csrc.nist.gov/glossary/term/SDO.32 Peter Cihon,Standards for AI Governance:International Standar

102、ds to Enable Global Coordination in AI Research&Development(University of Oxford:April 2019),https:/www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf.33“Standard Setting,”EU AI Act,https:/artificialintelligenceact.eu/standard-setting/.34“Standard Setting,”EU AI Act,https:/artif

103、icialintelligenceact.eu/standard-setting/;ENISA,Cybersecurity of AI and Standardisation(ENISA:March 2023),https:/www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation.Towards Secure AI:How far can international standards take us?16 Many of these SDOs have a committee focused on de

104、veloping AI standards:ISO/IEC JTC 1/SC 42:AI subcommittee of the joint ISO/IEC technical committee on information technology35 o BSI ART/1:mirrors the work of SC 42 in addition to producing its own standards36 IEEE AI Standards Committee37 CEN-CENELEC JTC 21:joint committee on AI38 ETSIs Securing AI

105、:works exclusively on security issues39 In addition to AI-focused committees,these SDOs have workstreams on fundamental topics related to AI security,such as cybersecurity,information security,and hardware integrity.In some cases,key AI standardisation activities are being led by bodies that are not

106、 formally SDOs.In the US,for example,ANSI serves as the national standards body,but NIST40 has published a variety of technical reference materials and other forms of guidance on the use of AI.41 These are not formally considered standards but are often treated as though they were by governments,ind

107、ustry,and academia.Other bodies that are not SDOs but produce standard-like forms of technical guidance on AI include not-for-profit organisations like OWASP.42 How are standards developed?Members of a technical committee work to achieve consensus on a standards content,structure,and language.In gen

108、eral,the process involves:proposing and approving a topic for a new standard;drafting the standard;discussing and revising the draft;and in some cases making the standard available for public comment.Once published,standards are periodically updated to reflect scientific and technical advancements.4

109、3 Before publication,committee members must vote to approve a draft standard.At ISO,there must be general 35 ISO/IEC JTC 1/SC 42,https:/www.iso.org/committee/6794475.html.36 BSI,“ART/1 Artificial Intelligence,”https:/ IEEE SA,“Artificial Intelligence Standards Committee,”https:/sagroups.ieee.org/ai-

110、sc/.38 CEN/CENELEC,“Artificial Intelligence,”https:/www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/.39 ETSI TC SAI,https:/www.etsi.org/committee/technical-committee-tc-securing-artificial-intelligence-sai.40 NISTs role as a key American standards body precedes its AI-rela

111、ted work.While not an SDO,NIST coordinates federal government policy on the use of standards and oversees key conformity assessment procedures:See NIST,“What we do,”https:/www.nist.gov/standardsgov/what-we-do.41 NIST,Artificial Intelligence Risk Management Framework(AI RMF 1.0)(NIST:January 2023).42

112、 OWASP,“Project spotlight AI security and privacy guide,”https:/owasp.org/projects/spotlight/.43 AI Standards Hub,“Standards at a glance,”https:/aistandardshub.org/resource/main-training-page-example/4-the-main-stages-of-standards-development/.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hug

113、h Boyes 17 agreement among the committee members as to the content of the standard,with no sustained opposition to substantial issues.44 SDOs permit a range of stakeholders to participate in standards development and are generally encouraging of multistakeholder involvement.45 In practice,committees

114、 are often dominated by industry,46 although this varies a lot across SDOs(for example,committees at the ITU have much more government presence).How are standards used?Anybody involved in the design,development,deployment,assessment or use of an AI system might use an AI standard.Some standards bodi

115、es(ISO/IEC,IEEE,CEN-CENELEC,BSI)make standards available for purchase,while others(ITU,ETSI)make standards free to access.Standards are non-binding and voluntary.Governments can play an important role in shaping how standards are used.While compliance with particular standards is voluntary,in some j

116、urisdictions,mechanisms exist to strongly incentivise the use of standards,or else to harmonise particular standards with regulatory requirements,making adopting those standards the easiest way to comply with the relevant regulation.47 Use of these mechanisms to incentivise adoption of specific stan

117、dards is considered in many jurisdictions to be an important component of public policy,especially where environmental,public health,or safety interests are at play.48 2.2 What are standards and why are they helpful?Standards are documents that set detailed voluntary guidelines for how things should

118、 be done:how sizes and quantities should be measured,how tools and devices should be constructed,how processes should be undertaken and overseen.49 Any such reference 44“Glossary,”ISO,https:/www.iso.org/glossary.html.45“Opportunities to participate,”IEEE,https:/standards.ieee.org/participate/;“ISOs

119、committee on consumer policy,”ISO,https:/www.iso.org/copolco.html.46 Christine Galvagna,“Discussion paper:inclusive AI governance,”Ada Lovelace Institute Discussion Paper(March 2023)https:/www.adalovelaceinstitute.org/report/inclusive-ai-governance/.47“Standardisation policy,”European Commission,htt

120、ps:/single-market-economy.ec.europa.eu/single-market/european-standards/standardisation-policy_en;Office of the Federal Register,“Incorporation by Reference Handbook,”(June 2023)https:/www.archives.gov/files/federal-register/write/handbook/ibr.pdf.48 ISO,Standards and public policy:a toolkit for nat

121、ional standards bodies(ISO:August 2023),https:/www.iso.org/files/live/sites/isoorg/files/publications/en/ISO_Public-Policy-Toolkit.pdf.49“Standards at a glance:What are standards,”AI Standards Hub,https:/aistandardshub.org/resource/main-training-page-example/1-what-are-standards/.Towards Secure AI:H

122、ow far can international standards take us?18 document may be considered a standard,but our research is focused on those created by SDOs,as these are agreed through expert-led,consensus-driven processes.50 In general,SDO standards are seen as industry tools:by setting common rules and expectations f

123、or products and services,standards can lower trade barriers,create market efficiencies,and increase reliability and consumer trust.51 SDO standards bring several advantages(see Table 1).Table 1:Benefits of international standards Stakeholder type Benefits Consumers Increased trust and confidence in

124、product quality,safety,and reliability Interoperability between products and services,improving functionality and usability Industry Increased market access Facilitating regulatory compliance Knowledge and technology transfer Government Readymade implementation guidance to accompany broader governan

125、ce objectives or regulatory requirements Repository of best-practice to inform regulatory policymaking Different types of standards serve different functions.For the purposes of this research,four types of standards(adapted from the AI Standards Hub)will be particularly relevant.52 The purpose of ea

126、ch is illustrated in Figure 1.53 50 Ibid.51“Brokering Standards by Consensus,”ITU,July 2021,https:/www.itu.int/en/mediacentre/backgrounders/Pages/standardization.aspx.52 For our analysis,we have grouped together product testing and performance standards with measurement standards as these types of s

127、tandards serve similar functions when it comes to AI security,with both of these standard types still being at a particularly nascent stage for AI systems.53“Standards at a glance:Different types of standards,”AI Standards Hub,https:/aistandardshub.org/resource/main-training-page-example/2-different

128、-types-of-standards/.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 19 Figure 1:Different types of standards and their functions(adapted from the AI Standards Hub)There is a growing consensus among AI policy experts that SDO standards will be important tools for operationalising com

129、plex AI governance objectives.54 Where regulators may lack the requisite expertise to set detailed guidelines for creating safe,fair,or secure AI systems,they can point to standards that have been developed by committees of technical AI experts instead.Already,governments have set their sights on ke

130、y standards bodies to begin this work.In 2022,the European Commission issued a request to the European Standards Organisations CEN and CENELEC to develop standards in support of safe and trustworthy artificial intelligence,to underpin key requirements of the forthcoming EU AI Act.55 In the US,meanwh

131、ile,NIST has been at the forefront of setting technical frameworks for national AI policy and has received instructions from both Congress56 and the President57 to develop such resources.54 Hadrien Pouget,“What will the role of standards be in AI governance?,”Ada Lovelace Institute Blog,5 April 2023

132、.https:/www.adalovelaceinstitute.org/blog/role-of-standards-in-ai-governance/.55 European Commission,Draft standardisation request to the European Standardisation Organisations in support of safe and trustworthy AI(December 2022),https:/ec.europa.eu/docsroom/documents/52376.56 House of Representativ

133、es,William M.(Mac)Thornberry National Defense Authorization Act for Fiscal Year 2021,Division E:National Artificial Intelligence Initiative Act,Pub.L.No.H.R.6395,Division E,p.1164(2022),https:/www.congress.gov/116/crpt/hrpt617/CRPT-116hrpt617.pdf#page=1210.57 US Executive Order 14110,“Safe,Secure,an

134、d Trustworthy Development and Use of Artificial Intelligence,”October 2023,https:/www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.Towards Secure AI:How far can international standards take us?20 2.3 Progress towards s

135、tandards for secure AI Despite the nascency of AI security as a topic for standardisation,with the first official SDO technical committee covering AI security only launched at the end of 2023,58 several relevant international standards already exist.We do not provide a detailed mapping of existing s

136、tandards,instead highlighting some of the most important standards.Cybersecurity standards Cybersecurity standardisation has been ongoing for decades and is relatively mature.While many of the challenges of securing AI will be new,traditional cybersecurity practice can provide useful groundwork for

137、those wishing to secure AI systems.59 Key standards include:ISO 27000 series which provides guidance on information security management.60 This series defines a set of common terms and processes for organisations to secure the data they own and handle,promoting long-term cyber-resilience.These stand

138、ards can support AI security needs by ensuring that,at a baseline,the information used to train an AI system and the information produced by an AI system are secured.ISO/IEC 29147:2018 on vulnerability disclosure in the field of information security outlines key requirements for organisations seekin

139、g to investigate,disclose,and remedy security vulnerabilities in their IT systems.61 ETSI TC Cyber is also working to produce relevant cybersecurity standards.For example,ETSI TS 103 485 provides guidance on privacy assurance while ETSI TS 103 458 covers attribute-based encryption.62 NIST cybersecur

140、ity resources63 can also be relevant,including their computer security publication series,SP 800;64 their Cybersecurity Framework(CSF)(last updated in 58 Sophia Antipolis,“ETSIs Securing AI group becomes a technical committee to help ETSI to answer the EU AI Act,”ETSI News,17 October 2023,https:/www

141、.etsi.org/newsroom/news/2288-etsi-s-securing-ai-group-becomes-a-technical-committee-to-help-etsi-to-answer-the-eu-ai-act.59 Interview with government representative 1 October 2023,Interview with academic expert,7 November 2023.60“ISO/IEC 27000 family,”ISO,https:/www.iso.org/standard/iso-iec-27000-fa

142、mily.61“ISO.IEC 29147:2018,”ISO,https:/www.iso.org/standard/72311.html.62“ETSI TC Cyber,”ETSI,https:/www.etsi.org/technologies/cyber-security.63 NIST,“Cybersecurity and Privacy Program,”Extended Fact Sheet,July 2022,https:/www.nist.gov/system/files/documents/2022/07/21/Extended%20Cybersecurity%20Vit

143、als%20Fact%20Sheet.pdf.64 NIST,“Information Technology Laboratory,”21 May 2018,https:/www.nist.gov/itl/publications-0/nist-special-publication-800-series-general-information.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 21 February 2024);65 and their cybersecurity Risk Management F

144、ramework(RMF),published as SP 800-37.66 AI standards Increasingly,SDOs are producing standards on responsible and trustworthy AI.Much of this work has focused on defining key concepts,addressing broader characteristics like ethics,trustworthiness,and transparency,and providing overarching risk manag

145、ement guidance.Within SC 42,the ISO/IEC AI committee,key foundational and terminology standards for AI have already been agreed,helping to establish a shared vocabulary and conceptual framework for future standards to build on:ISO/IEC 22989:2022 on AI concepts and terminology.67 ISO/IEC 23053:2022,a

146、 Framework for Artificial Intelligence(AI)Systems Using Machine Learning(ML)provides a more detailed overview of machine learning tasks,as well as the development pipeline.68 As both were published in 2022,there is limited coverage of more recent developments(e.g.transformer architectures which unde

147、rpin LLMs).More recently,two vital process and management AI standards have been published.These standards build on existing ISO management system and risk management standards,including ISO 9001:2015 and ISO 31000:2018.ISO/IEC 42001:2023 provides requirements for implementing an AI management syste

148、m within organisations that provide or use AI systems.69 The standard defines organisational responsibilities and is centred around three key activities:risk assessment,risk treatment,and system impact assessment.ISO/IEC 23894:2023 focuses on risk management for AI systems,pointing to key sources of

149、 risk(data,personnel)and explaining how to address risks once identified.70 Achieving system-level security is considered to be an objective of ISO/IEC 23894,which refers to the 27000 series for a definition of information security risk management 65 NIST,Cybersecurity Framework 2.0(February 2024),h

150、ttps:/nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf.66 NIST,Risk Management Framework for Information Systems and Organizations(December 2018),https:/nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-37r2.pdf.67“ISO/IEC 22989:2022,”ISO,https:/www.iso.org/standard/74296.html.68“ISO/IEC 2305

151、3:2022,”ISO,https:/www.iso.org/standard/74438.html.69“ISO/IEC 42001:2023,”ISO,https:/www.iso.org/standard/81230.html.70“ISO/IEC 23894:2023,ISO,https:/www.iso.org/standard/77304.html.Towards Secure AI:How far can international standards take us?22 and to ISO/IEC TR 24028:2020(Information technology A

152、rtificial Intelligence Overview of trustworthiness in artificial intelligence)71 for a taxonomy of key AI-specific security vulnerabilities.72 Other important resources relating to AI standardisation include:NISTs AI Risk Management Framework(RMF)1.0 which provides instructions for organisations to

153、identify(map),assess(measure),and manage risks posed by their AI systems and includes guidance on withdrawing systems in case of unacceptable risks.The AI RMF requires risks to be identified separately according to each of seven trustworthiness characteristics of AI systems,one of which is security

154、and resilience,73 with trade-offs between the categories to be assessed in order to create a holistic risk management plan.The IEEE 7000 series for AI Ethics and Governance,released between 2020 and 2022.74 These standards cover topics like ethical design(IEEE 7000-2021),transparency(IEEE 7001-2021)

155、,and data privacy(IEEE 7002-2022).While they do not specifically tackle AI cybersecurity concerns except insofar as data privacy and data governance(IEEE 7005-2021)practices contribute to overall security these standards form part of a growing body of international standards that support responsible

156、 AI.AI security standards Standards focused explicitly on security of AI remain somewhat rare.An important exception to this is the work of the ETSI Technical Committee(TC)Securing AI(SAI),which is developing a series of technical reference materials to address various AI security aspects.75 71“ISO/

157、IEC TR 24028:2020,”ISO,https:/www.iso.org/standard/77608.html.72“ISO/IEC 23894:2023,”ISO,Annex A,https:/www.iso.org/standard/77304.html.73 NIST,Artificial Intelligence Risk Management Framework(AI RMF 1.0)(NIST:January 2023).74“GET Program for AI Ethics and Governance Standards,”IEEE,https:/ieeexplo

158、re.ieee.org/browse/standards/get-program/page/series.75 TC SAI looks at AI security in four different ways:1.Security of AI systems themselves,2.Security from malicious AI(or AI-enhanced)systems,3.Security of systems with AI techniques,and 4.Broader security concerns related to the use of AI.We are

159、most interested in their third objective.See:https:/www.etsi.org/technologies/securing-artificial-intelligence.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 23 Before becoming a TC,ETSIs industry specification group(ISG)SAI published several group reports(GRs)covering a variety of

160、topics within AI security:76 ETSI GR SAI 001:AI Threat Ontology77 ETSI GR SAI 002:Data Supply Chain Security78 ETSI GR SAI 004:Problem Statement79 ETSI GR SAI 005:Mitigation Strategy Report80 ETSI GR SAI 006:The role of hardware in security of AI81 ETSI GR SAI 007:Explicability and transparency of A

161、I processing82 ETSI GR SAI 009:Artificial Intelligence Computing Platform Security Framework83 ETSI GR SAI 011:Automated Manipulation of Multimedia Identity Representations84 ETSI GR SAI 013:Proof of Concepts Framework85 These GRs provide informative content on key terms,concepts,and approaches in A

162、I security.86 It will be crucial to raise awareness of this work among AI practitioners,as many perceive these topics to be unaddressed by international standards.Since becoming a TC,SAI has published ETSI TR SAI 104:Collaborative Artificial Intelligence,a technical report which provides an overview

163、 of AI security and performance issues that may stem from interaction or collaboration between AI systems,their stakeholders,and each other.87 Beyond ETSI,ISO/IEC have released a few standards covering AI security:ISO/IEC 27563:2023(Security and privacy in artificial intelligence use cases Best prac

164、tices)outlines security and privacy risks stemming from the AI use cases delineated 76 ETSI,ISG SAI Activity Report(ETSI:2022),https:/www.etsi.org/committee-activity/activity-report-sai;Sophia Antipolis,“ETSI releases three reports on securing artificial intelligence for a secure,transparent and exp

165、licable AI system,”ETSI News,11 July 2023,https:/www.etsi.org/newsroom/press-releases/2259-etsi-releases-three-reports-on-securing-artificial-intelligence-for-a-secure-transparent-and-explicable-ai-system.77“ETSI GR SAI 001,”ETSI,https:/www.etsi.org/deliver/etsi_gr/SAI/001_099/001/01.01.01_60/gr_SAI

166、001v010101p.pdf.78“ETSI GR SAI 002,”ETSI,https:/www.etsi.org/deliver/etsi_gr/SAI/001_099/002/01.01.01_60/gr_SAI002v010101p.pdf.79“ETSI GR SAI 004,”ETSI,https:/www.etsi.org/deliver/etsi_gr/SAI/001_099/004/01.01.01_60/gr_SAI004v010101p.pdf.80“ETSI GR SAI 005,”ETSI,https:/www.etsi.org/deliver/etsi_gr/S

167、AI/001_099/005/01.01.01_60/gr_SAI005v010101p.pdf.81“ETSI GR SAI 006,”ETSI,https:/www.etsi.org/deliver/etsi_gr/SAI/001_099/006/01.01.01_60/gr_SAI006v010101p.pdf.82“ETSI GR SAI 007,”ETSI,https:/www.etsi.org/deliver/etsi_gr/SAI/001_099/007/01.01.01_60/gr_SAI007v010101p.pdf.83“ETSI GR SAI 009,”ETSI,http

168、s:/www.etsi.org/deliver/etsi_gr/SAI/001_099/009/01.01.01_60/gr_SAI009v010101p.pdf.84“ETSI GR SAI 011,”ETSI,https:/www.etsi.org/deliver/etsi_gr/SAI/001_099/011/01.01.01_60/gr_SAI011v010101p.pdf.85“ETSI GR SAI 013,”ETSI,https:/www.etsi.org/deliver/etsi_gr/SAI/001_099/013/01.01.01_60/gr_SAI013v010101p.

169、pdf.86“Types of standards,”ETSI,https:/www.etsi.org/standards/types-of-standards.87“ETSI TR 104 032,”ETSI,https:/www.etsi.org/deliver/etsi_tr/104000_104099/104032/01.01.01_60/tr_104032v010101p.pdf.Towards Secure AI:How far can international standards take us?24 in ISO/IEC TR 24030:2021(Information t

170、echnology Artificial intelligence Use cases).This foundational standard provides a taxonomy of risks and controls,outlining how to begin developing a security and privacy plan.Detailed implementation guidance is not provided.88 ISO/IEC 24029(Artificial intelligence Assessment of the robustness of ne

171、ural networks)series,of which two parts have already been published.ISO/IEC 24029-1:2021(Part 1:Overview)analyses the robustness concept and defines key statistical,formal,and empirical methods of assessing the robustness of neural networks.Detailed guidance on these formal assessment methods is pro

172、vided in ISO/IEC 24029-2:2023(Part 2:Methodology for the use of formal methods),with a final standard,ISO/IEC AWI 24029-3(Part 3:Methodology for the use of statistical methods),currently under development.89 ISO/IEC TR 29119-11:2020(Software and systems engineering:Software testing)offers guidelines

173、 on testing AI-based systems,with an updated version expected in 2024,maps specific AI testing processes to the verification and validation stages of the AI lifecycle.90 Two forthcoming standards from ISO/IEC are also set to be particularly relevant:ISO/IEC CD 27090(Cybersecurity:Artificial intellig

174、ence)is set to offer guidance on identifying and mitigating security threats throughout the AI lifecycle.91 ISO/IEC WD 27091.2(Cybersecurity and Privacy:Artificial intelligence)is set to help organisations identify privacy risks throughout the AI lifecycle and to treat these risks.92 So far,the SDOs

175、 producing the most relevant work for those interested in securing AI have been ISO/IEC and ETSI.The work of these SDOs is likely to remain relevant,as many of the key AI standards they are producing will underpin requirements of the forthcoming EU AI Act.CEN-CENELEC,which has been tasked with stand

176、ards drafting around the EU AI Act,has adopted several ISO AI standards with plans to adopt more,93 while ETSIs work will 88“ISO/IEC TR 27563:2023,”ISO,https:/www.iso.org/standard/80396.html.89“ISO/IEC TR 24029-1:2021,”ISO,https:/www.iso.org/standard/77609.html;ISO/IEC 24029-2:2023,https:/www.iso.or

177、g/standard/79804.html.90“ISO/IEC TR 29119-11:2020,”ISO https:/www.iso.org/standard/79016.html.91“ISO/IEC CD 27090,”ISO,https:/www.iso.org/standard/56581.html.92“ISO/IEC WD 27091.2,”ISO,https:/www.iso.org/standard/56582.html.93 ENISA,Cybersecurity of AI and Standardisation(ENISA:March 2023),https:/ww

178、w.enisa.europa.eu/publications/rnmecybersecurity-of-ai-and-standardisation.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 25 increase in relevance following the Securing AI Groups elevation from an industry specification group to an official technical committee.Even as the field of

179、AI standardisation is relatively immature,with considerable work anticipated over the next several years,relevant standards straddling two key areas of interest cybersecurity and AI can support stakeholders to secure their AI systems today.A key problem so far has been raising awareness of these sta

180、ndards,which is made harder by their fragmentation across multiple SDO committees and by the cost of many of these standards(excluding those released by ETSI and NIST which are free to access).Towards Secure AI:How far can international standards take us?26 3.A Roadmap for Future AI Security Standar

181、ds Despite the progress made by SDOs so far,key gaps remain.This section offers a high-level overview of significant standardisation gaps and priorities for future standards development.3.1 Key gaps in AI security standards To identify standardisation gaps for AI security,we asked experts to classif

182、y perceived gaps according to four major types of international standard.The results of this exercise(shown in Figure 2)demonstrate the need to increase awareness around existing standards at the same time as filling gaps,as several topics cited as future priorities have in fact already been standar

183、dised(for example,terminology standards for AI).Figure 2:Perceived gaps in international standards for AI security.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 27 For more granular analyses of these standards gaps,see the OWASP AI exchange94(especially for analysis of ISO standard

184、s)and ENISA95(especially for analysis of the EU AI Act).3.2 Priority topics for future standards SDOs are under pressure,given demand for standards across such a range of topics,96 and given tight deadlines set by political commitments,particularly the EU AI act and US Executive Order.Consequently,i

185、t will be essential to prioritise resources,focusing on topics which are:A)Of high strategic importance(due to the likelihood and severity of impact if not addressed).B)Mature enough to standardise(due to the availability of pre-standardisation research).C)Best addressed by international SDOs,over a

186、nd above alternative responsible bodies(e.g.national standards bodies,national governments,industry,academia).Below,we prioritise topics for AI security standardisation,grouping them into five categories:1.Adapt existing standards:Topics where closely related international standards are available an

187、d can be expanded.2.Standardise now:Topics which are high priority,where pre-standardisation technical research is available and broad consensus among experts is achievable.3.Standardise in the future:Topics which are of high strategic importance,but where significant research is needed and there is

188、 widespread disagreement about best practice.4.Address through other means:Topics which are of high strategic importance,but are better addressed through government policies,or research by standards-adjacent organisations,academic researchers,or industry.94“OWASP AI Exchange,”OWASP,https:/owaspai.or

189、g/.95 ENISA,Cybersecurity of AI and Standardisation(ENISA:March 2023),https:/www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation.96 CETaS workshop,17 January 2024.Towards Secure AI:How far can international standards take us?28 Adapt existing standards:Participants identified fo

190、ur key standards as having the greatest potential to be adapted to address AI security concerns:ISO/IEC 27001,information security management system could be adapted to cover AI.ISO/IEC 42001 on AI management systems and ISO/IEC 23894 on AI risk management could be expanded to offer more detail on s

191、ecurity considerations.ISO/IEC 22989 on AI concepts and terminology could be expanded for security terminology.There are limitations associated with this approach,given the extent to which AI security differs from traditional cybersecurity,97 given that general-purpose AI standards(e.g.ISO/IEC 42001

192、 and ISO/IEC 23894)often only superficially address security considerations,and given the limited coverage of application security in ISO/IEC 27001.An immediate priority should be to update terminology standards to focus more on security-specific terminology(e.g.confidentiality,integrity,availabilit

193、y,authenticity,safety,resilience,utility,interoperability,and control).Research participants cited this as a gap,highlighting limited awareness of work that has already been done(e.g.ISO/IEC 22989 already covers resilience and robustness).98 Standardise now:A secure-by-design,whole lifecycle standar

194、d for securing AI,building on NCSC and CISAs guidelines on AI security,made in collaboration with over 20 international partners.99 Forthcoming work from ISO on ISO/IEC CD 27090 should be tracked as potentially relevant for such a standard.A taxonomy of threats and risks,including a list of potentia

195、l attacks,building on NISTs research cataloguing the adversarial AI landscape.100 97 ENISA,Cybersecurity of AI and Standardisation(ENISA:March 2023),https:/www.enisa.europa.eu/publications/cybersecurity-of-ai-and-standardisation.98“ISO/IEC 22989:2022,”ISO,https:/www.iso.org/standard/74296.html.99 HM

196、 Government,Guidelines for secure AI system development(National Cyber Security Centre:2023),https:/www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development.100 Apostol Vassilev et al.,Adversarial Machine Learning:A Taxonomy of Terminology of Attacks and Mitigations(NIST:January 2024),http

197、s:/csrc.nist.gov/pubs/ai/100/2/e2023/final.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 29 Transparency requirements for vulnerability disclosure.ISO/IEC 29147:2018 already covers vulnerability disclosure for information technologies so there is potential to build on this work.101

198、 Supply chain due diligence to ensure regular checking across each stage from design to deployment.ETSI has begun to address AI supply chains,although its work is confined to data considerations and needs updating.102 The increasing modularity of AI systems has led to increasingly complex AI supply

199、chains,raising issues from data provenance to secure hardware.103 Greater coverage of hardware concerns as they relate to AI security risks(e.g.semiconductors).SDOs are working on these topics,for example TC 47 at the IEC is dedicated to international standards on semiconductor devices.104 ETSI have

200、 also published on the role of hardware in securing AI.105 This work should be tracked for future standards on hardwares role in AI security.Standardise in the future:Measurement standards were seen as requiring further research to ensure reliable and precise metrics for properties such as AI resili

201、ence and robustness.Research on this should begin now to facilitate standards in the future.Mitigation techniques are still evolving.Pre-standardisation research,particularly from NIST,OWASP,NPL and MITRE,has laid groundwork for future standards.This should be tracked closely by SDOs which have so f

202、ar focused more on process-based AI standards(e.g.ISO/IEC 42001 and ISO/IEC 23894)as opposed to technical controls.101“ISO/IEC 29147:2018,”ISO,https:/www.iso.org/standard/72311.html.102“ETSI GR SAI 002,Securing Artificial Intelligence(SAI)Data Supply Chain Security,”ETSI,August 2021,https:/www.etsi.

203、org/deliver/etsi_gr/SAI/001_099/002/01.01.01_60/gr_SAI002v010101p.pdf.103 Rosamund Powell and Marion Oswald,“Assurance of Third-Party AI Systems for UK National Security,”CETaS Research Report(January 2024),https:/cetas.turing.ac.uk/publications/assurance-third-party-ai-systems-uk-national-security;

204、Ian Brown,“Expert Explainer:allocating accountability in AI supply chains,”Ada Lovelace Institute Paper(June 2023),https:/www.adalovelaceinstitute.org/resource/ai-supply-chains/;Jennifer Cobbe,Michael Veale and Jatinder Singh,“Understanding accountability in algorithmic supply chains,”in FaccT 23:Pr

205、oceedings of the 2023 ACM Conference on Fairness,Accountability,and Transparency(New York:Association for Computing Machinery,2023),1186-1197.104 Semiconductor devices,”IEC TC 47,IEC,https:/www.iec.ch/dyn/www/f.105“ETSI GR SAI 006,Securing Artificial Intelligence:The role of hardware in security of

206、AI,”ETSI,March 2022,https:/www.etsi.org/deliver/etsi_gr/SAI/001_099/006/01.01.01_60/gr_SAI006v010101p.pdf.Towards Secure AI:How far can international standards take us?30 Further work is needed on cybersecurity logging for AI,building on existing resources.Relevant proposals for standards covering A

207、I system logging have been made in response to the EU AI Act,and this work should be tracked.106 Standards which address generative AI explicitly.NISTs taxonomy of adversarial attacks distinguishes between vulnerabilities of predictive AI and of generative AI,acknowledging numerous new modes of atta

208、ck from prompt injection(direct and indirect)to supply chain attacks.107 Vertical,sector and use case-specific standards should be developed(where needed)once solid foundations are built through horizontal standardisation efforts.Address through other means:Adjacent organisations,particularly NIST a

209、nd OWASP,should cover topics which are insufficiently mature for standardisation(e.g.by accelerating research into measurement standards).Industry appears to be making good progress with certain topics such as file formats.These can be deprioritised by SDOs.Education is needed on basic cybersecurity

210、 practices and on the international standards that are already available.More work on certifying compliance with standards is needed.More effort needs to be dedicated to coordinating work between different SDOs,to avoid duplication and inconsistencies across different standards.106“PWI NWIP AI syste

211、m logging AI system logging,”CEN,April 2023,https:/ Apostol Vassilev et al.,Adversarial Machine Learning:A Taxonomy of Terminology of Attacks and Mitigations(NIST:January 2024),https:/csrc.nist.gov/pubs/ai/100/2/e2023/final.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 31 4.Persist

212、ent Challenges to Creating AI Security Standards To fill these standardisation gaps,several challenges within the standards making process will need to be overcome.This section focuses on two crucial tensions to be resolved:optimising timelines and promoting multistakeholder inclusivity.4.1 Optimisi

213、ng timelines for AI security standards The creation of effective standards requires a degree of technological maturity.With a novel research field like AI,it can be challenging to develop standards that meet the practical need of implementers and adopters,creating a risk that early standardisation w

214、ill lead to wasted resources from a lack of uptake.108 Premature standards can also require regularly updates,making it challenging for businesses to keep up.109 This creates a dilemma for SDOs who would normally wait for a lot of maturity before standardisation,but are facing pressure to standardis

215、e now.110 Research engagements show that while much of the AI security field is insufficiently mature for the development of standards,some efforts can be undertaken immediately.For instance,foundational and terminology standards are already emerging,while process standards covering risk management

216、for AI security are also planned for publication.Further to disagreements over whether AI security is sufficiently mature to warrant standardisation,many perceive the standards-setting process itself as too slow,arguing the consensus-building requirements risk any new standards becoming quickly outd

217、ated.111 For example,in ISO the average time from first proposal to standard publication is 3 years.112 Yet this perception of slowness may misrepresent the importance of careful deliberation in standards-setting.113 Firstly,rushing the process can risk creating standards which are not 108 Charles M

218、.Schmidt,Best Practices for Technical Standard Creation(MITRE:April 2017),1,https:/www.mitre.org/sites/default/files/publications/17-1332-best-practices-for-technical-standard-creation.pdf.109 Interview with standards experts(2),1 December 2023;Interview with academic expert,31 October 2023.110 Inte

219、rview with academic expert,16 November 2023.111 Interview with government representative,23 October 2023;Interview with academic expert,7 November 2023;Interview with standards body representative,7 November 2023;Interview with regulator,9 November 2023;Interview with academic expert,14 November 202

220、3;Interview with standards expert,20 November 2023.112“Developing Standards,”ISO,https:/www.iso.org/developing-standards.html.113 Interview with regulator,9 November 2023;Interview with academic expert,14 November 2023.Towards Secure AI:How far can international standards take us?32 robust leading s

221、ome to argue that premature release without enough consultation,consensus and expertise,is the bigger problem.114 Second,moving too quickly could create a perceived lack of time for sustainable adoption internationally.115 This would have knock-on effects on developers who would regularly need to re

222、design systems,116 and on government procurement because systems could become non-compliant very quickly.117 Despite disagreement on optimal timelines for standards-setting,several suggestions were made for improving the agility of existing processes.For instance,moving away from consensus-based agr

223、eements and towards majority-based voting118 and introducing streamlined approaches to submitting comments on draft standards.119 Not all SDOs operate according to strict timelines,and SDOs can often release guidance more quickly if it is not classified explicitly as a standard.For instance,ETSI inc

224、ludes a technical specification route in 12-15 months,which can then be upgraded into a formalised standard.120 In some cases,rather than looking to modify the procedures of international SDOs,efforts can be supplemented by national standards,new technical specifications and other AI security resear

225、ch:1.BSI:offers a Flex standard which operates on a faster timescale,with new iterations of flex standards typically generated within 6 months.121 2.NIST:uploads publicly available guidelines and technical specifications.122 3.OWASP:guidance can be published in 4 months,since it is based on open-sou

226、rce tools(e.g.Slack and GitHub)and anyone can get involved,including people who have limited time to commit.OWASP has also provided work free of copyright and attribution,to be used by SDOs.123 114 Interview with government standards expert,13 November 2023.115 Interview with government representati

227、ve(1),24 October 2023;Interview with industry expert,17 November 2023.116 Interview with industry expert,31 October 2023.117 Interview with industry expert,31 October 2023.118 Interview with academic expert,3 November 2023.119 Interview with standards expert,27 October 2023.120 Interview with govern

228、ment representative,1 October 2023;Interview with government expert,21 November 2023.121 Interview with government representative,1 October 2023;BSI,“Principles of BSI Flex Standardisation,”2021,https:/ Interview with industry experts,16 November 2023.123 Interview with standards body representative

229、,7 November 2023.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 33 While domestic efforts(e.g.NIST and BSI)may be beneficial in improving agility,they should wherever possible remain aligned with the wider international community.124 National standards can also themselves be interna

230、tionalised,with ISO 27001 on information security being a key example of this.125 Furthermore,while standards-adjacent research is useful in offering a more agile approach to AI security,SDOs must be proactive in coordinating with organisations like OWASP,for example through liaison schemes.126 Rath

231、er than focusing purely on making SDOs more agile,the most effective solution is to make the content of standards future-proof.This will not be easy,owing to trade-offs between specificity and longevity.127 The more specific a standard is,the easier it is to use,but the more quickly it becomes obsol

232、ete.128 To improve the longevity of standards,we recommend starting with high-level standards before narrowing down into specifics.129 In doing so,SDOs must avoid the proliferation of taxonomies which vary from one another,instead ensuring consistency and longevity.130 Where possible,standards shoul

233、d not focus on the technology itself but rather on the areas where the technology is being used(e.g.healthcare or defence).131 However,this will not always be possible as certain AI techniques(e.g.large language models)are associated with specific attack vectors(e.g.prompt injection),requiring techn

234、ology-specific standards.Several issues and dilemmas outlined above could be mitigated through horizon-scanning functions being integrated into standards bodies working on AI security.By coordinating working groups across relevant SDOs,key state-of-the-art trends may be identified to inform new stan

235、dards proposals.132 124 Interview with government expert,21 November 2023;Interview with standards expert,20 November 2023.125“The History of ISO 27001,”SecureFrame,https:/ 126 Interview with standards expert,7 November 2023.127 Interview with standards experts,1 December 2023;Interview with standar

236、ds body expert,7 November 2023.128 Interview with government security expert,3 November 2023.129 Interview with industry expert,31 October 2023;Interview with academic expert,16 November 2023;Interview with academic expert(2),16 November 2023.130 Jeferson O.Batista et al.,“Ontologically correct taxo

237、nomies by construction,”(May 2022),https:/ Interview with industry expert,3 November 2023.132 Interview with academic expert(2),16 November 2023.Towards Secure AI:How far can international standards take us?34 4.2 Multistakeholder inclusivity and accessibility of AI standards SDOs often suffer from

238、a lack of stakeholder diversity.This has been outlined as a problem in the area of privacy standards,133 and similar problems are observed in AI security.134 In the experience of research participants,industry representatives tend to be the most prominent members of secure AI WGs,135 and involvement

239、 is not evenly distributed across industry.SMEs were highlighted as struggling to shape AI security standards,due to the cost of memberships in certain standards bodies and the amount of time or resources required to devote efforts to the process.136 Aside from SMEs,civil society organisations face

240、similar barriers to accessing SDO WGs.137 Commenting on the composition of SDO WGs,one interviewee acknowledged it is diverse to the extent you have government,industry and academia.It is probably fair to say civil society isnt there.138 Several schemes do exist to improve diversity at SDOs.For inst

241、ance,ANEC seeks to represent consumer voices in ETSI and CEN-CENELEC,139 and the Systers programme is intended to promote the role of women and non-binary participants at the IETF.140 BSI runs a Consumer and Public Interest Network(CPIN)to boost civil society participation and engages with organisat

242、ions representing SMEs(e.g.the British Chambers of Commerce and the Federation for Small Businesses)to boost SME participation.141 Yet,despite these schemes,and despite detailed research addressing the lack of multistakeholder participation in standards setting,142 there has not been consistent repo

243、rting year-on-year detailing the multi-stakeholder composition at SDOs.Further efforts 133 Sam Stockwell et al.,“The Future of Privacy by Design Technology:Policy Implications for UK Security,”CETaS Research Reports(September 2023):30-33,https:/cetas.turing.ac.uk/publications/future-privacy-design-t

244、echnology.134 Hadrien Pouget,“What will the role of standards be in AI governance?,”Ada Lovelace Institute Blog,5 April 2023,https:/www.adalovelaceinstitute.org/blog/role-of-standards-in-ai-governance/.135 Interview with government representative,23 October 2023.136 Interview with government securit

245、y expert,3 November 2023;Interview with government standards expert,13 November 2023;Interview with academic expert,16 November 2023;Interview with academic expert(2),16 November 2023.137 Interview with standards expert,27 October 2023;Interview with government standards expert,13 November 2023.138

246、Interview with government expert,21 November 2023.139“About ANEC,”ANEC,https:/www.anec.eu/priorities/digital-society.140“IETF Systers,”IETF,https:/www.ietf.org/about/groups/ietf-systers/.141“Join the BSI Consumer&Public Interest Network(CPIN),”BSI,https:/ Christine Galvana,Inclusive AI governance(Ad

247、a Lovelace Institute:March 2023),https:/www.adalovelaceinstitute.org/report/inclusive-ai-governance/.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 35 should be made by SDOs to report on what good looks like to them when it comes to multistakeholder participation so that in future,t

248、he public can obtain a better understanding of the extent to which targets are being met.When there is a lack of inclusivity within standards-setting processes,the end products can suffer.One of the most obvious concerns is that the perceived lack of involvement by key actors will undermine uptake,w

249、hich risks leading to inconsistent approaches to AI security.143 The current dominance of major industry players could also undermine the inclusion of socio-technical perspectives.144 Finally,these imbalances at SDOs can result in biases regarding which global regions are represented in discussions.

250、145 Yet in SDO debates that were perceived as highly technical,the dominance of tech companies was felt to be justified by some interviewees,allowing resources for civil society participation to be directed towards more sociotechnical standards work.146 Further action should be taken to improve dive

251、rsity in standardisation.Firstly,national governments could provide greater support to underrepresented groups,including SMEs,AI researchers and civil society organisations.This could include covering membership fees and raising awareness on how to participate in standards bodies.147 Secondly,margin

252、alised stakeholders could steer their efforts towards organisations which utilise public engagement mechanisms.The EU has launched open consultations on certain aspects of standards,such as setting a strategic roadmap,148 while institutions like NIST and OWASP incorporate accessible communication pl

253、atforms and host multistakeholder workshops to facilitate wider public involvement.149 143 Interview with government representative,1 October 2023.144 Hadrien Pouget,“What will the role of standards be in AI governance?,”Ada Lovelace Institute Blog,5 April 2023,https:/www.adalovelaceinstitute.org/bl

254、og/role-of-standards-in-ai-governance/;Interview with academic expert,16 November 2023.145 Interview with government security expert,3 November 2023.146 Interview with academic expert,31 October 2023;Interview with government expert,21 November 2023.147 Neil Brown et al.,The Role of Standardisation

255、in Support of Emerging Technologies in the UK(Department for Business,Energy&Industrial Strategy:May 2022),9-10,https:/assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1080614/role-of-standardisation-in-support-of-emerging-technologies-uk.pdf;Peter Cihon,Standa

256、rds for AI Governance:International Standards to Enable Global Coordination in AI Research&Development(Future of Humanity Institute:April 2019),3,https:/www.fhi.ox.ac.uk/wp-content/uploads/Standards_-FHI-Technical-Report.pdf;Interview with government representative,1 December 2023.148 Interview with

257、 standards expert,27 October 2023.149 Interview with academic expert,1 November 2023;Interview with standards body representative,7 November 2023.Towards Secure AI:How far can international standards take us?36 Beyond diversity at the standards development stage,further issues arise when standards a

258、re published but remain expensive to access,as is the case for ISO and CEN/CENELEC standards.In accounting for the cost to buy standards,BSI note that on average each standard costs 15,000 to develop and that they do not make profits from standards development.150 Furthermore,where standards are fre

259、e to access,as is the case with ETSI standards,costs can be hidden elsewhere,for instance in the fees paid by industry to contribute to standards setting.151 Given the challenges around making standards completely free to access without raising costs elsewhere,we recommend government prioritises sup

260、port for international standards access towards SMEs and non-profits.Where possible this should be done through supporting existing efforts so as not to duplicate work unnecessarily.For example,both OWASP and the AI Standards Hub already do a lot to educate a range of stakeholders on which standards

261、 are available.152 150“How to buy and access standards,”BSI,https:/ does membership cost?,”ETSI,https:/www.etsi.org/membership/dues.152“OWASP AI Exchange,”OWASP,https:/owaspai.org/;“About the AI Standards Hub,”AI Standards Hub,https:/aistandardshub.org/the-ai-standards-hub/.Rosamund Powell,Sam Stock

262、well,Nalanda Sharadjaya and Hugh Boyes 37 5.Incentivising,Enforcing and Assessing Standards Adoption Even when high quality standards are available,adoption is often lacking.We explore the potential for incentives,enforcement,and certification to increase standards adoption.5.1 The two-fold challeng

263、e of standards adoption Standards adoption is a longstanding challenge in cybersecurity.In 2022,research by the Department for Digital,Culture,Media and Sport found that awareness of cybersecurity standards across industry was low,contributing to limited uptake and certification(only 8%of businesses

264、 reported following ISO 27001,6%adhered to Cyber Essentials,1%to Cyber Essentials Plus,and 4%followed some form of NIST standard).153 Despite these figures,there is a lack of broader data available on uptake of specific standards.Our research suggests that even in cases where standards are said to h

265、ave been followed,their implementation can often be ineffective or insufficient.154 Incentives for standards adoption must therefore address the numerous factors which lead to either ineffective or insufficient standards adoption(see Table 2).153 These figures do rise for large businesses(more than

266、250 employees)where 23%have adhered to ISO 27001,20%to a NIST standard,35%to cyber essentials and 17%to cyber essentials plus.See:HM Government,Cyber Security Breaches Survey 2022(Department for Digital,Culture,Media&Sport:2022),https:/www.gov.uk/government/statistics/cyber-security-breaches-survey-

267、2022/cyber-security-breaches-survey-2022#chapter-2-profiling-uk-businesses-and-charities.154 Interview with academic expert,14 November 2023;Interview with legal expert,2 November 2023;Interview with standards body expert,7 November 2023;Interview with standards experts,1 December 2023.Towards Secur

268、e AI:How far can international standards take us?38 Table 2:factors hindering adoption of international standards for AI security Reasons for insufficient standards adoption Reasons for ineffective standards adoption Awareness from developers of standards is insufficient.155 Standards implementation

269、 is often viewed as a box-ticking exercise.156 The culture of AI is organised to focus on rapid innovation,not robust cybersecurity.157 Standards implementation is frequently outsourced.158 Customers for AI dont know how to ask suppliers for international standards implementation.159 Standards repre

270、sent best practice(rather than strict requirements)and for SMEs can be too resource intensive.160 Without enforcement,the motivation for industry to comply is insufficient.161 Standards shopping hinders efficacy as people find standards to suit their priorities.162 155 Interview with academic expert

271、,14 November 2023.156 Interview with legal expert,2 November 2023.157 Interview with standards body expert,7 November 2023.158 Interview with academic expert,14 November 2023.159 Interview with legal expert,7 November 2023.160 Interview with standards experts,1 December 2023.161 Interview with legal

272、 expert,2 November 2023.162 Interview with academic expert,3 November 2023.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 39 5.2 UK strategy on cybersecurity incentives Several options are available for governments wishing to incentivise cybersecurity,ranging from highly interventio

273、nist strategies such as regulation to no intervention at all(see Figure 3).163 Figure 3:Incentive hierarchy for standards adoption The UK Government has long acknowledged it cannot leave cybersecurity solely to the marketplace,and must introduce the right mix of regulation and incentives.164 As the

274、scale of the cyber threat has grown,so too has the Governments willingness to intervene.The UKs current approach to cybersecurity incentives focuses on four pillars:1.Foundations:Providing guidance on cyber risk management.2.Capabilities:Supporting skilled professionals to implement guidance.3.Marke

275、t incentives:Creating incentives for organisations to invest in cybersecurity.163 A range of stakeholders may use these levers to increase standards uptake,for instance standards development bodies or multilateral organisations.We focus primarily on the role the UK government can and should play to

276、increase uptake of international standards for AI security.164 HM Government,2022 cyber security incentives and regulation review(Department for Digital Culture Media&Sport:January 2022),https:/www.gov.uk/government/publications/2022-cyber-security-incentives-and-regulation-review/2022-cyber-securit

277、y-incentives-and-regulation-review.Towards Secure AI:How far can international standards take us?40 4.Accountability:Holding organisations accountable for managing their cyber risk.165 The emphasis is on education and upskilling over enforcement and regulation.166 This is mirrored in the UK Governme

278、nt Cyber Security strategy,167 and is consistent with the governments pro-innovation AI strategy.168 5.3 Encouraging adoption:expanding incentives to cover international standards Despite acknowledging that cybersecurity incentives need to be strengthened,in part due to new AI-specific security risk

279、s,cybersecurity incentives have not been consistently integrated with incentives for standards implementation,nor have they addressed AI-specific concerns.169 Due to the pace of AI adoption and claims this has led to a cybersecurity crisis,170 the need to expand government incentives,bringing them m

280、ore in line with international standards,should be urgently considered.Our research suggests there is consensus on the need to expand each pillar from the UKs cybersecurity incentive strategy to explicitly encourage uptake of international standards that cover AI security concerns(see Table 3).165 H

281、M Government,Cyber Security Regulation and Incentives Review(December 2016),https:/assets.publishing.service.gov.uk/media/5a7f944940f0b62305b87ffb/Cyber_Security_Regulation_and_Incentives_Review.pdf.166 Within government,enforcement of cybersecurity is more widespread as government departments must

282、base their practices on policies published by both the National Cybersecurity Centre(NCSC)and the Government Security Group(GSG).167 HM Government,Government Cyber Security Strategy(Cabinet Office:2022),https:/assets.publishing.service.gov.uk/media/61f0169de90e070375c230a8/government-cyber-security-

283、strategy.pdf.168 HM Government,A pro-innovation approach to AI regulation(Department for Science Innovation and Technology:March 2023),https:/assets.publishing.service.gov.uk/media/64cb71a547915a00142a91c4/a-pro-innovation-approach-to-ai-regulation-amended-web-ready.pdf.169 The notable exception is

284、government support for the AI Standards Hub.The AI Standards Hub hosts an observatory of AI standards,helps track existing and forthcoming standards,and provides research and training on international AI standards.See:https:/aistandardshub.org.170 Andreas Tsamados,Luciano Floridi and Mariarosaria Ta

285、ddeo,“The Cybersecurity Crisis of Artificial Intelligence:Unrestrained Adoption and Natural Language-Based Attacks,”SSRN(September 2023),https:/ Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 41 Table 3:New incentives for AI security standards Incentive pillar Rationale for expansion Recomme

286、ndation for new incentives Foundations and capabilities Insufficient awareness of cybersecurity and AI security standards.Inconsistencies between UK approaches and international standards have caused confusion for developers on which approach to take.171 Explicitly target the AI community with exist

287、ing cybersecurity training(e.g.through harnessing forums such as the AI Standards Hub).Expand the scope of cybersecurity training to cover international AI security standards.This training should target public sector AI procurers,SMEs working with AI,and cybersecurity experts wanting to expand their

288、 expertise.Market incentives and accountability Financial incentives are likely to be more effective than other measures.Buying power of the public sector can influence the actions of major AI developers,encouraging them to introduce safeguards for secure AI.Updates should made to government AI proc

289、urement processes to ensure specific international standards on AI security,and cybersecurity more broadly,are referenced in invitations to tender.Much could be learnt by the more stringent procurement standards currently in place in Defence.172 Procurement teams across government should receive tra

290、ining on international standards and cybersecurity.Effective incentivisation will require a context-and sector-specific approach.Different levels of intervention and different international standards will be relevant to distinct sectors.In 171 CETaS workshop,17 January 2024.172 Interview with regula

291、tor,9 November 2023.Towards Secure AI:How far can international standards take us?42 the immediate term standards incentivisation in high-risk sectors such as security,defence,healthcare,and critical national infrastructure should be prioritised.5.4 Enforcing adoption:the role of regulation Regulati

292、on can go even further to incentivise adoption of international standards.But the role played by regulation in standards incentivisation can take a number of forms.In the EU,harmonised standards are set to play a critical role in the implementation of the EU AI act.173 Legislation sets out high-leve

293、l legal requirements for AI developers which are clarified by secondary legislation.The plan is that standards will then fill the implementation gap by specifying regulatory requirements in the form of best practice guidance.174 The EU AI act is accompanied by a standardisation request to European S

294、DOs with CEN/CENELEC asked to draft standards which enable organisations to demonstrate they have taken reasonable steps to comply with the Act.175 The EU approach comes with pros and cons.On the one hand,it helps SDOs design their workplans,and promotes a good separation of work between policymaker

295、s,industry,and technical experts.176 However,regulation creates challenges for SDOs,with one expert noting that it will be quite challenging to ensure all relevant standards are available by the end of the 3-year transition period.177 There is also potential for the harsh regulatory environment of t

296、he EU to stifle innovation,178 and for regulation to cause EU standards to diverge from global ones.179 This strategy can be contrasted with the UKs sector-specific approach to regulation,and the UK approach to designated standards.180 Designated standards in the UK are standards recognised by the g

297、overnment as providing evidence that a particular product or service 173 Claire OBrien,Bennett Borden,Mark Rasdale and Daisy Wong,“The role of harmonised standards as tools for AI act compliance,”(DLA Piper:January 2024),https:/ Setting,”EU AI Act,https:/artificialintelligenceact.eu/standard-setting

298、/.175 CEN-CENELEC,“ETUCs position on the draft standardisation request in support of safe and trustworthy AI,”CEN-CENELEC News,1 June 2022,https:/www.cencenelec.eu/news-and-events/news/2022/newsletter/issue-34-etuc-s-position-on-the-draft-standardization-request-in-support-of-safe-and-trustworthy-ai

299、/.176 Interview with industry expert,17 November 2023.177 Interview with standards expert,27 October 2023.178 Interview with legal expert,2 November 2023.179 Interview with government expert,21 November 2023.180 HM Government,A pro-innovation approach to AI regulation:government response(Department

300、for Science,Innovation and Technology:February 2024),https:/www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 43 complies

301、with UK law.181 The designation process does not involve UK government sending out standardisation requests to SDOs.Instead,the focus is on identifying existing standards that closely align with UK law and then considering whether they are suitable for designation.182 To support this approach,the UK

302、s AI Strategy announced the role of the AI Standards Hub to coordinate UK engagement with international standardisation and support AI stakeholders to engage with the standards ecosystem.183 At present,our analysis suggests there is limited potential to incentivise AI security standards through regu

303、lation in the UK.This is due to both the nascency of AI security standards and the lack of interest from UK government in AI regulation in general,at least for the present.184 We therefore recommend UK decisionmakers focus on alternative means through which to increase accountability for security of

304、 AI(e.g.through broader regulation on cybersecurity and through alternative non-regulatory incentives on AI security),while closely tracking international regulatory approaches.The UK should closely track the EU AI act,particularly its influence on SDO workplans,as there will be knock-on impacts for

305、 companies operating in the UK,and on the direction of standardisation more broadly.5.5 Assessing adoption:the role of certification In cybersecurity,there has been a long-standing challenge around certification of best practice.Research has found that 32%of businesses and 29%of charities in the UK

306、invest in some form of cybersecurity certification,with the two most common being NCSCs Cyber Essentials Scheme and compliance with ISO 27001.185 Furthermore,external certification is far from a catch-all solution.Outsourcing compliance checks to external consultancies can result in a lack of unders

307、tanding internally,which does little to help secure AI systems in the long-term.186 Human factors research suggests that certification around standards such as ISO 27001 has become a box ticking exercise with 181“Designated Standards,”HM Government,Department for Business and Trade and Office for Pr

308、oduct Safety and Standards,3 December 2020,https:/www.gov.uk/guidance/designated-standards.182 Ibid.183 HM Government,National AI Strategy(September 2021),https:/assets.publishing.service.gov.uk/media/614db4d1e90e077a2cbdf3c4/National_AI_Strategy_-_PDF_version.pdf 184 HM Government,A pro-innovation

309、approach to AI regulation:government response(Department for Science,Innovation and Technology:February 2024),https:/www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response.185 HM Government,C

310、yber security longitudinal survey:wave 1(DCMS:January 2022),https:/www.gov.uk/government/publications/cyber-security-longitudinal-survey-wave-one/cyber-security-longitudinal-survey-wave-1;Note,these figures differ somewhat from those included from the“Cyber Security Breaches Survey”as they are based

311、 on distinct research from UK government.186 Interview with academic expert,14 November 2023.Towards Secure AI:How far can international standards take us?44 limited quality control and behaviour change from companies claiming to have implemented the standard.187 Compliance audits need to be robust

312、and independent,but this requires further investment in AI skills for those responsible for certification.188 The UK Government is in many ways already a frontrunner on cybersecurity certification thanks to the Cyber Essentials scheme.Recent updates to explicitly support SMEs working on fundamental

313、AI research to become certified under the Cyber Essentials Plus scheme are particularly promising.189 Rather than starting from scratch,Cyber Essentials should be updated when relevant standards for secure AI are released.This could involve introducing an additional tier for Cyber Essentials,focusin

314、g on AI vulnerabilities explicitly.When expanding Cyber Essentials,government should carefully consider when and how to align with international standards.Already,NCSC receive regular inquiries about the overlaps and divergences between Cyber Essentials and alternative standards for cybersecurity.19

315、0 On the one hand,divergence between Cyber Essentials and international standards(e.g.ISO 27001)can be viewed as unhelpful,contributing to fragmentation.191 On the other hand,such divergence can be considered necessary to allow for a more streamlined approach compared to the document-heavy ISO 27001

316、 certification.We recommend that in expanding Cyber Essentials to cover AI explicitly,UK Government prioritises international cooperation to avoid further fragmentation,aligning with international standards on AI wherever possible while aiming to streamline the certification process.187 Interview wi

317、th academic expert,14 November 2023.188 Interview with standards body expert,7 November 2023.189“Funded Cyber Essentials Programme,”HM Government,NCSC,19 December 2022,https:/www.ncsc.gov.uk/information/funded-cyber-essentials-programme.190 Chris Ensor,“Cyber Essentials:are there any alternative sta

318、ndards?”NCSC Blog,23 January 2024,https:/www.ncsc.gov.uk/blog-post/cyber-essentials-are-there-any-alternative-standards.191 CETaS workshop,17 January 2024.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 45 6.Situating Standards within a Broader AI Security Toolkit This section consid

319、ers how a holistic AI assurance process,which brings international standards together with more agile AI security techniques,can help to ensure we are not overly reliant on international standards for AI security needs.6.1 Why are alternative levers for AI security needed?Decisionmakers involved wit

320、h the EU AI Act,the US Executive Order,and to a lesser extent,the UK AI Safety Institute,have each placed significant pressure on SDOs to resolve their implementation challenges by operationalising high-level policies in the form of best practice guidance.192 At the same time,the people steeped in s

321、tandards development tend to be less ambitious.193 They acknowledge that existing standardisation gaps will persist until robust scientific consensus is reached on how we can best secure AI systems from attack.The urgency of the security challenge in AI,particularly in high-risk sectors,means we can

322、not simply wait for SDOs to fill all gaps.What is urgently needed is a holistic and whole-lifecycle AI assurance process which incorporates not just standards,but further levers for AI security such as agile technical solutions(e.g.the MITRE ATT&CK framework),194 government policies(e.g.NCSC AI Secu

323、rity Guidelines),195 and industry standards(e.g.Googles Secure AI Framework).196 Even when standards on AI security are mature,these related techniques for securing AI will continue to be relevant to developers,alongside standards.Table 4 summarises the advantages and weaknesses of these alternative

324、 levers,compared to international standards.197 Across the board,the key advantage is pace and agility,while the key weakness is the creation of a fragmented landscape containing contradictory guidance.192 Interview with academic expert(1),16 November 2023.193 Ibid.194“MITRE ATT&CK,”MITRE,https:/att

325、ack.mitre.org/.195 HM Government,Guidelines for secure AI system development(National Cyber Security Centre:2023),https:/www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development.196“Googles Secure AI Framework,”Google,https:/safety.google/cybersecurity-advancements/saif/.197 Findings in Ta

326、ble 4 are partially based on a CETaS workshop,17 January 2024.Towards Secure AI:How far can international standards take us?46 Table 4:Alternative levers for AI security Alternative lever for AI security Advantages compared to standardisation Weaknesses compared to standardisation Non-regulatory gui

327、delines developed by government e.g.NCSC AI security guidelines,NCSC principles for the security of machine learning.198 Pace of development and ability to update in a more agile way.Less subject to conflicting interests inherent to the multistakeholder standards making process.Risk of being less in

328、formed by wide-ranging expertise.Often principles-based and open to differing interpretations.Lack international acceptance.Could contribute to fragmentation,especially if they diverge from emergent standards.Not enforceable.Industry consortia and specific industry standards e.g.Frontier AI Forum,19

329、9 Google Secure AI Framework200 Faster and more agile.Widespread adoption from associated industry AI developers.Less widely recognised.Risks specific commercial interests being baked into frameworks that become de facto best practice.Much narrower input.Competitive barrier if not open/readily avail

330、able.Not enforceable.198 HM Government,Guidelines for secure AI system development(National Cyber Security Centre:2023),https:/www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development.199“How we work,”Frontier AI Forum,https:/www.frontiermodelforum.org/how-we-work/.200“Googles Secure AI Fr

331、amework,”Google,https:/safety.google/cybersecurity-advancements/saif/.Rosamund Powell,Sam Stockwell,Nalanda Sharadjaya and Hugh Boyes 47 Standards adjacent research e.g.OWASP AI exchange,201 NIST adversarial AI publications,202 Fast moving and agile.Multi-stakeholder and multi-disciplinary.Outputs a

332、re coordinated with SDO research to avoid fragmentation.Less widely recognised.Regular updates make them more difficult to track.Not enforceable.6.2 Integrating AI security techniques through AI assurance Each of these levers for secure AI can be complementary to international standards.But to enabl

333、e a whole-lifecycle approach to AI cybersecurity,information on standards should be collated with analogous resources.203 AI assurance can help to achieve this goal,bringing international standards for secure AI together with agile technical solutions,national policies,academic research and industry approaches.Assurance can also help to ensure that the cybersecurity of AI systems is not considered

友情提示

1、下载报告失败解决办法
2、PDF文件下载后,可能会被浏览器默认打开,此种情况可以点击浏览器菜单,保存网页到桌面,就可以正常下载了。
3、本站不支持迅雷下载,请使用电脑自带的IE浏览器,或者360浏览器、谷歌浏览器下载即可。
4、本站报告下载后的文档和图纸-无水印,预览文档经过压缩,下载后原文更清晰。

本文(CETaS:2024人工智能安全报告:国际标准所发挥的作用(英文版)(55页).pdf)为本站 (白日梦派对) 主动上传,三个皮匠报告文库仅提供信息存储空间,仅对用户上传内容的表现方式做保护处理,对上载内容本身不做任何修改或编辑。 若此文所含内容侵犯了您的版权或隐私,请立即通知三个皮匠报告文库(点击联系客服),我们立即给予删除!

温馨提示:如果因为网速或其他原因下载失败请重新下载,重复下载不扣分。
会员购买
客服

专属顾问

商务合作

机构入驻、侵权投诉、商务合作

服务号

三个皮匠报告官方公众号

回到顶部