A pro-innovation approach to AI regulation

March 2023
CP 815

Presented to Parliament by the Secretary of State for Science, Innovation and Technology by Command of His Majesty

© Crown copyright 2023

This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3. Where we have identified any third-party copyright information you will need to obtain permission from the copyright holders concerned.

This publication is available at www.gov.uk/official-documents.

Any enquiries regarding this publication should be sent to us at: evidence@officeforai.gov.uk

ISBN 978-1-5286-4009-1
E02886733 03/23

Printed on paper containing 40% recycled fibre content minimum. Printed in the UK by HH Associates Ltd. on behalf of the Controller of His Majesty's Stationery Office.

Contents

Ministerial foreword
Executive summary
Part One: Introduction
Part Two: The current regulatory environment
Part Three: An innovative and iterative approach
Part Four: Tools for trustworthy AI to support implementation
Part Five: Territorial application
Part Six: Global interoperability and international engagement
Part Seven: Conclusion and next steps
Annex A: Implementation of the principles by regulators
Annex B: Stakeholder engagement
Annex C: How to respond to this consultation

Ministerial foreword

The Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology

I believe that a common-sense, outcomes-oriented approach is the best way to get right to the heart of delivering on the priorities of people across the UK. Better public services,
high quality jobs and opportunities to learn the skills that will power our future: these are the priorities that will drive our goal to become a science and technology superpower by 2030. Artificial intelligence (AI) will play a central part in delivering and enabling these goals, and this white paper will ensure we are putting the UK on course to be the best place in the world to build, test and use AI technology.

But we are not starting from zero. Having invested over £2.5 billion in AI since 2014, this paper builds on our recent announcements of £110 million for our AI Tech Missions Fund and £900 million to establish a new AI Research Resource and to develop an exascale supercomputer capable of running large AI models, backed up by our new £8 million AI Global Talent Network and £117 million of existing funding to create hundreds of new PhDs for AI researchers.

Most of us are only now beginning to understand the transformative potential of AI as the technology rapidly improves. But in many ways, AI is already delivering fantastic social and economic benefits for real people, from improving NHS medical care to making transport safer. Recent advances in things like generative AI give us a glimpse into the enormous opportunities that await us in the near future if we are prepared to lead the world in the AI sector with our values of transparency, accountability and innovation.

My vision for an AI-enabled country is one where our NHS heroes are able to save lives using AI technologies that were unimaginable just a few decades ago. I want our police, transport networks and climate scientists, and many more, to be empowered by AI technologies that will make the UK the smartest, healthiest, safest and happiest place to live and work. That is why AI is one of this government's five technologies of tomorrow, bringing stronger growth, better jobs, and bold new discoveries. It is a vision that has been shaped by stakeholders and experts in AI, whose expertise and ideas I am determined to see reflected in our department.

The UK has been at the forefront of this progress, placing third in the world for AI research and development. We are home to a third of Europe's total AI companies, and twice as many as any other European country. Our world-leading status is down to our thriving research base and the pipeline of expertise graduating through our universities, the ingenuity of our innovators and the government's long-term commitment to invest in AI.

To ensure we become an AI superpower, though, it is crucial that we do all we can to create the right environment to harness the benefits of AI and remain at the forefront of technological developments. That includes getting regulation right so that innovators can thrive and the risks posed by AI can be addressed. These risks range from physical harm and the undermining of national security to risks to mental health. The development and deployment of AI can also present ethical challenges which do not always have clear answers. Unless we act, household consumers, public services and businesses will not trust the technology and will be nervous about adopting it. Unless we build public trust, we will miss out on many of the benefits on offer.

Indeed, the pace of change itself can be unsettling. Some fear a future in which AI replaces or displaces jobs, for example. Our white paper sets out a vision for a future AI-enabled country in which our ways of working are complemented by AI rather than disrupted by it. In the modern world, too much of our professional lives is taken up by monotonous tasks:
inputting data, filling out paperwork, scanning through documents for one piece of information, and so on. AI in the workplace has the potential to free us up from these tasks, allowing us to spend more time doing the things we trained for: teachers with more time to teach, clinicians with more time to spend with patients, police officers with more time on the beat rather than behind a desk; the list goes on.

Indeed, since AI is already in our day-to-day lives, there are numerous examples that can help to illustrate the real, tangible benefits that AI can bring once any risks are mitigated. Streaming services already use advanced AI to recommend TV shows and films to us. Our satnav uses AI to plot the fastest routes for our journeys, or helps us avoid traffic by intelligently predicting where congestion will be on our journey. And of course, almost all of us carry a smartphone in our pockets that uses advanced AI in all sorts of ways. These common devices all carried risks at one time or another, but today they benefit us enormously.

That is why our white paper details how we intend to support innovation while providing a framework to ensure risks are identified and addressed. A heavy-handed and rigid approach can stifle innovation and slow AI adoption, which is why we set out a proportionate and pro-innovation regulatory framework. Rather than target specific technologies, it focuses on the context in which AI is deployed. This enables us to take a balanced approach to weighing up the benefits against the potential risks. We recognise that particular AI technologies, foundation models for example, can be applied in many different ways, and this means the risks can vary hugely. For example, using a chatbot to produce a summary of a long article presents very different risks to using the same technology to provide medical advice. We understand the need to monitor these developments in partnership with innovators, while also avoiding placing unnecessary regulatory burdens on those deploying AI.

To ensure our regulatory framework is effective, we will leverage the expertise of our world-class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI. This will mean supporting innovation and working closely with business, but also stepping in to address risks when necessary. By underpinning the framework with a set of principles, we will drive consistency across regulators while also providing them with the flexibility needed.

For innovators working at the cutting edge and developing novel technologies, navigating regulatory regimes can be challenging. That's why we are confirming our commitment to taking forward a key recommendation made by Sir Patrick Vallance to establish a regulatory sandbox for AI. This will bring together regulators to support innovators directly and help them get their products to market. The sandbox will also enable us to understand how regulation interacts with new technologies, and to refine this interaction where necessary.

Having exited the European Union, we are free to establish a regulatory approach that enables us to make the UK an AI superpower. It is an approach that will actively support innovation while addressing risks and public concerns. The UK is home to thriving start-ups, which our framework will support to scale up and compete internationally. Our pro-innovation approach will also act as a strong incentive for AI businesses based overseas to establish a presence in the UK.

The white paper sets out our commitment to
engaging internationally to support interoperability across different regulatory regimes. Not only will this ease the burden on business, but it will also allow us to embed our values as global approaches to governing AI develop.

Our approach relies on collaboration between government, regulators and business. Initially, we do not intend to introduce new legislation: by rushing to legislate too early, we would risk placing undue burdens on businesses. But alongside empowering regulators to take a lead, we are also setting expectations. Our new monitoring functions will provide a real-time assessment of how the regulatory framework is performing, so that we can be confident that it is proportionate. The pace of technological development also means that we need to understand new and emerging risks, engaging with experts to ensure we take action where necessary. A critical component of this activity will
be engaging with the public to understand their expectations, raising awareness of the potential of AI and demonstrating that we are responding to concerns.

The framework set out in this white paper is deliberately designed to be flexible. As the technology evolves, our regulatory approach may also need to adjust. Our principles-based approach, with central functions to monitor and drive collaboration, will enable us to adapt as needed, while providing industry with the clarity needed to innovate. We will continue to develop our approach, building on our commitment to making the UK the best place in the world to be a business developing and using AI. Responses to the consultation will inform how we develop the regulatory framework, and I encourage all of those with an interest to respond.

RT HON MICHELLE DONELAN MP
Secretary of State for Science, Innovation and Technology
Department for Science, Innovation and Technology

Executive summary

Artificial intelligence: the opportunity and the challenge

1. Artificial intelligence (AI) is already delivering wide societal benefits, from medical advances[1] to mitigating climate change.[2] For example, an AI technology developed by DeepMind, a UK-based business, can now predict the structure of almost every protein known to science.[3] This breakthrough will accelerate scientific research and the development of life-saving medicines; it has already helped scientists to make huge progress in combating malaria, antibiotic resistance, and plastic waste.

2. The UK Science and Technology Framework[4] sets out the government's strategic vision and identifies AI as one of five critical technologies. The framework notes the role of regulation in creating the environment for AI to flourish. We know that we have yet to see AI technologies reach their full potential. Under the right conditions, AI will transform all areas of life[5] and stimulate the UK economy by unleashing innovation and driving productivity,[6] creating new jobs and improving the workplace.

3. Across the world, countries and regions are beginning to draft the rules for AI. The UK needs to act quickly to continue to lead the international conversation on AI governance and demonstrate the value of our pragmatic, proportionate regulatory approach. The need to act was highlighted by Sir Patrick Vallance in his recent Regulation for Innovation review. The report identifies the
short time frame for government intervention to provide a clear, pro-innovation regulatory environment in order to make the UK one of the top places in the world to build foundational AI companies.[7]

4. While we should capitalise on the benefits of these technologies, we should also not overlook the new risks that may arise from their use, nor the unease that the complexity of AI technologies can produce in the wider public. We already know that some uses of AI could damage our physical[8] and mental health,[9] infringe on the privacy of individuals[10] and undermine human rights.[11]

[1] The use of AI in healthcare and medicine is booming, Insider Intelligence, 2023.
[2] How to fight climate change using AI, Forbes, 2022; Tackling Climate Change with Machine Learning, Rolnick et al., 2019.
[3] DeepMind's protein-folding AI cracks biology's biggest problem, New Scientist, 2022; Improved protein structure prediction using potentials from deep learning, Senior et al., 2020.
[4] The UK Science and Technology Framework, Department for Science, Innovation and Technology, 2023.
[5] Six of the best future uses of Artificial Intelligence, Technology Magazine, 2023; Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy, Dwivedi et al., 2021.
[6] Large dedicated AI companies make a major contribution to the UK economy, with GVA (gross value added) per employee estimated to be £400k, more than double that of comparable estimates of large dedicated firms in other sectors. See AI Sector Study 2022, DSIT, 2023.
[7] Pro-innovation Regulation of Technologies Review: Digital Technologies, HM Treasury, 2023.

5. Public trust in AI will be undermined unless these risks, and wider concerns about the potential for bias and discrimination, are addressed. By building trust, we can accelerate the adoption of AI across the UK to maximise the economic and social benefits that the technology can deliver, while attracting investment and stimulating the creation of high-skilled AI jobs.[12] In order to maintain the UK's position as a global AI leader, we need to ensure that the public continues to see how the benefits of AI can outweigh the risks.[13]

6. Responding to risk and building public trust are important drivers for regulation. But clear and consistent regulation can also support business investment and build confidence in innovation. Throughout our extensive
engagement, industry repeatedly emphasised that consumer trust is key to the success of innovation economies. We therefore need a clear, proportionate approach to regulation that enables the responsible application of AI to flourish. Instead of creating cumbersome rules applying to all AI technologies, our framework ensures that regulatory measures are proportionate to context and outcomes, by focusing on the use of AI rather than the technology itself.

7. People and organisations develop and use AI in the UK within the rules set by our existing laws, informed by standards, guidance and other tools. But
AI is a general-purpose technology and its uses can cut across regulatory remits. As a result, AI technologies are currently regulated through a complex patchwork of legal requirements. We are concerned by feedback from across industry that the absence of cross-cutting AI regulation creates uncertainty and inconsistency, which can undermine business and consumer confidence in AI and stifle innovation. By providing a clear and unified approach to regulation, our framework will build public confidence, making it clear that AI technologies are subject to cross-cutting, principles-based regulation.

Our pro-innovation framework

8. The government will put in place a new framework to bring clarity and coherence to the AI regulatory landscape. This regime is designed to make responsible innovation easier. It will strengthen the UK's position as a global leader in AI, harness AI's ability to drive growth and prosperity,[14] and increase public trust in its use and application.

[8] AI Barometer Part 4: Transport and logistics, Centre for Data Ethics and Innovation, 2021.
[9] How TikTok Reads Your Mind, New York Times, 2021.
[10] Privacy Considerations in Large Language Models, Google Research, 2020.
[11] Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, Alan Turing Institute and Council of Europe, 2021.
[12] Demand for AI skills in jobs, OECD iLibrary, 2021.
[13] Public expectations for AI governance (transparency, fairness and accountability), Centre for Data Ethics and Innovation, 2023.
[14] The AI sector is estimated
to contribute £3.7bn in GVA (gross value added) to the UK economy. AI Sector Study 2022, DSIT, 2023.

9. We are taking a deliberately agile and iterative approach, recognising the speed at which these technologies are evolving. Our framework is designed to build the evidence base so that we can learn from experience and continuously adapt to develop the best possible regulatory regime. Industry has praised our pragmatic and proportionate approach.

10. Our framework is underpinned by five principles to guide and inform the responsible development and use of AI in
all sectors of the economy:

- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress

11. We will not put these principles on a statutory footing initially. New rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances. Instead, the principles will be issued on a non-statutory basis and implemented by existing regulators. This approach makes use of regulators' domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used. During the initial period of implementation, we will continue to collaborate with regulators to identify any barriers to the proportionate application of the principles, and to evaluate whether the non-statutory framework is having the desired effect.

12. Following this initial period of implementation, and when parliamentary time allows, we anticipate introducing a statutory duty on regulators requiring them to have due regard to the principles. Some feedback from regulators, industry and academia suggested
we should implement further measures to support the enforcement of the framework. A duty requiring regulators to have regard to the principles should allow regulators the flexibility to exercise judgement when applying the principles in particular contexts, while also strengthening their mandate to implement them. In line with our proposal to work collaboratively with regulators and take an adaptable approach, we will not move to introduce such a statutory duty if our monitoring of the framework shows that implementation is effective without the need to legislate.

13. In the 2022 AI regulation policy paper,[15] we proposed a small coordination layer within the regulatory architecture. Industry and civil society were supportive of our intention to ensure coherence across the AI regulatory framework. However, feedback often argued strongly for greater central coordination to support regulators on issues requiring cross-cutting collaboration, and to ensure that the overall regulatory framework functions as intended.

14. We have identified a number of central support functions required to make sure that the overall framework offers a proportionate but effective response to risk while promoting innovation across the regulatory landscape:

- Monitoring and evaluation of the overall regulatory framework's effectiveness and the implementation of the principles, including the extent to which implementation supports innovation. This will allow us to remain responsive and adapt the framework if necessary, including where it needs to be adapted to remain effective in the context of developments in AI's capabilities and the state of the art.
- Assessing and monitoring risks across the economy arising from AI.
- Conducting horizon scanning and gap analysis, including by convening industry, to inform a coherent response to emerging AI technology trends.
- Supporting testbeds and sandbox initiatives to help AI innovators get new technologies to market.
- Providing education and awareness to give clarity to businesses and empower citizens to make their voices heard as part of the ongoing iteration of the framework.
- Promoting interoperability with international regulatory frameworks.

[15] Establishing a pro-innovation approach to regulating AI, Office for Artificial Intelligence, 2022.

15. The central support functions will initially be provided from within government but will leverage existing activities and expertise from across the broader economy. The activities described above will neither replace nor duplicate the work undertaken by regulators, and will not involve the creation of a new AI regulator.

16. Our proportionate approach recognises that regulation
is not always the most effective way to support responsible innovation. The proposed framework is aligned with, and supplemented by, a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards. Government will promote the use of such tools. We are collaborating with partners like the UK AI Standards Hub to ensure that our overall governance framework encourages responsible AI innovation (see Part Four for details).

17. In keeping with the global nature of these technologies, we will also continue to work with international partners to deliver interoperable measures that incentivise the responsible design, development and application of AI. During our call for views, industry, academia and civil society stressed that international alignment should support UK businesses to capitalise on global markets and protect UK citizens from cross-border harms.

18. The UK is frequently ranked third in the world across a range of measures, including level of investment, innovation and implementation of AI.[16] To make the UK the most attractive place in the world for AI innovation, and to support UK companies wishing to export and attract international investment, we must ensure international compatibility between approaches. Countries around the world, as well as multilateral forums, are exploring approaches to regulating AI. Thanks to our reputation for pragmatic regulation, the UK is rightly seen by international partners as a leader in this global conversation.

[16] Global AI Index, Tortoise Media, 2022.

Part One: Introduction

1.1 The power and potential of artificial intelligence

19. AI is already delivering major advances and efficiencies in many areas. AI quietly automates aspects of our everyday activities, from systems that monitor traffic to make our commutes smoother,[17] to those that detect fraud in our bank accounts.[18] AI has revolutionised large-scale safety-critical practices in industry, like controlling the process of nuclear fusion.[19] And it has also been used to accelerate scientific advancements, such as the discovery of new medicines[20] or the technologies we need to tackle climate change.[21]

20. But this is just the beginning. AI can be used in a huge variety of settings and has the extraordinary potential to transform our society and economy.[22] It could have as much impact as electricity or the internet, and has been identified as one of five critical technologies in the UK Science and Technology Framework.[23] As AI becomes more powerful, and as innovators explore new ways to use it, we will see more applications of AI emerge. As a result, AI has a huge potential to drive growth[24] and create jobs.[25] It
will support people to carry out their existing jobs, by helping to improve workforce efficiency and workplace safety.[26] To remain world leaders in AI, attract global talent and create high-skilled jobs in the UK, we must create a regulatory environment where such innovation can thrive.

21. Technological advances like large language models (LLMs) are an indication of the transformative developments yet to come.[27] LLMs provide substantial opportunities to transform the economy and society. For example, LLMs can automate the process of writing code and fixing programming bugs. The technology can support genetic medicine by identifying links between genetic sequences and medical conditions. It can support people to review and summarise key points from lengthy documents. In the last four years, LLMs have been developed beyond expectations and they are becoming applicable to an increasingly wide range of tasks.[28] We expand on the development of LLMs and other foundation models in section 3.3.3 below.

[17] Transport apps, like Google Maps and CityMapper, use AI.
[18] Artificial Intelligence in Banking Industry: A Review on Fraud Detection, Credit Management, and Document Processing, ResearchBerg Review of Science and Technology, 2018.
[19] Accelerating fusion science through learned plasma control, DeepMind, 2022; Magnetic control of tokamak plasmas through deep reinforcement learning, Degrave et al., 2022.
[20] Why Artificial Intelligence Could Speed Drug Discovery, Morgan Stanley, 2022.
[21] AI Is Essential for Solving the Climate Crisis, BCG, 2022.
[22] General Purpose Technologies, Handbook of Economic Growth, National Bureau of Economic Research, 2005.
[23] The UK Science and Technology Framework, Department for Science, Innovation and Technology, 2023.
[24] In 2022, annual revenues generated by UK AI companies totalled an estimated £10.6 billion. AI Sector Study 2022, DSIT, 2023.
[25] DSIT analysis estimates over 50,000 full-time workers are employed in AI roles in AI companies. AI Sector Study 2022, DSIT, 2023.
[26] For example, AI can potentially improve health and safety in mining while also improving efficiency. See AI on-side: how artificial intelligence is being used to improve health and safety in mining, Axora, 2023. Box 1.1 gives further examples of AI driving efficiency improvements.
[27] Large Language Models Will Define Artificial Intelligence, Forbes, 2023; Scaling Language Models: Methods, Analysis & Insights from Training Gopher, Borgeaud et al., 2022.

Box 1.1: Examples of AI opportunities

AI helps piece together the first complete image of a black hole
AI can enable scientific discovery. A computer vision model was used to piece together the first ever image of a black hole 55 million
light years away, combining images from eight telescopes around the world.[29]

AI solves decades-old protein-folding puzzle
An AI company based in the UK trained neural networks to predict the structures of proteins, solving a problem that had long stumped scientists. The predictions are advancing the field of structural biology: scientists have already used them to prevent antibiotic resistance,[30] advance disease research,[31] and accelerate the fight against plastic pollution.[32] As we find more uses for AI, it will rewrite scientific fields and change the way we learn about our world.

Deep learning AI
could improve breast cancer screening
AI could transform how diseases are detected, prevented, and treated. Doctors are testing whether deep learning can be applied to breast cancer screening. Currently, every mammogram is double-checked by radiologists, but this is labour-intensive and causes diagnosis delays. A UK medical technology company is working with the NHS to test AI for the second screening, meaning greater numbers of patients could be screened faster, and clinicians could spend more time with patients and provide faster access to treatment.[33]

Farming efficiency increased by AI robots
Applying robotics and AI to field management can make farming more efficient, sustainable and productive. Lightweight, autonomous mapping and monitoring robots operating across the UK can spend hours in the field in all conditions and significantly reduce soil compaction. These systems can digitise the field, providing farmers with data to improve weed and pest management. If these systems become widely used, they could contribute to agricultural and horticultural productivity, reduce the pressure of labour shortages and better preserve the environment.[34]

AI helps accelerate the discovery of new medicines
Significant time and resources are currently needed to develop new and effective medicines. AI can accelerate the discovery of new medicines by quickly identifying potential biologically active compounds from millions of candidates within a short period.[35] Scientists may also have succeeded in using generative AI to design antibodies that bind to a human protein linked to cancer.[36]

AI is used in the fight against the most serious and harmful crimes
The Child Abuse Image Database[37] uses the powerful data processing capabilities of AI to identify victims and perpetrators of child sexual abuse. The quick and effective identification of victims and perpetrators in digital abuse images allows for real-world action to remove victims from harm and ensure their abusers are held to account. The use of AI increases the scale and speed of analysis while protecting staff welfare by reducing their exposure to distressing content.

AI increases cyber security capabilities
Companies providing cyber security services are increasingly using AI to analyse large amounts of data about malware and respond to vulnerabilities in network security at faster-than-human speeds.[38] As the complexity of the cyber threat landscape evolves, the pattern-recognition and recursive learning capabilities of AI are likely to play an increasingly significant role in proactive cyber defence against malicious actors.

[28] See, for example, What are Large Language Models used for?, NVIDIA, 2023.
[29] Black hole pictured for first time in spectacular detail, Nature, 2019.
[30] Accelerating the race against antibiotic resistance, DeepMind, 2022.
[31] Stopping malaria in its tracks, DeepMind, 2022.
[32] Creating plastic-eating enzymes that could save us from pollution, DeepMind, 2022.
[33] Mia mammography intelligent assessment, NHS England, 2021.
[34] Robotics and Autonomous Systems for Net Zero Agriculture, Pearson et al., 2022.
[35] Artificial intelligence, big data and machine learning approaches to precision medicine and drug discovery, Current Drug Targets, 2021.
[36] Unlocking de novo antibody design with generative artificial intelligence, Shanehsazzadeh et al., 2023.
[37] Pioneering new tools to be rolled out in fight against child abusers, Home Office, 2019.

1.2 Managing AI risks

22. The concept of AI is not new, but recent advances in data generation and processing have changed the field and the technology it produces. For example, while recent developments in the capabilities of generative AI models have created exciting opportunities, they have also sparked new debates about potential AI risks.[39] As AI research and development continues at pace and scale, we expect to see even greater impact and public awareness of AI risks.[40]

23. We know that not all AI risks arise from the deliberate action of bad actors. Some AI risks can emerge as an unintended consequence, or from a lack of appropriate controls to ensure responsible AI use.[41]

24. We have made an initial assessment of AI-specific risks and their potential to cause harm, with reference in our analysis to the values that they threaten if left unaddressed. These values include safety, security, fairness, privacy and agency, human rights, societal well-being and prosperity.

25. Our assessment of cross-cutting AI risk identified a range of high-level risks that our framework will seek to prioritise and mitigate with proportionate interventions. For example, safety risks include physical damage to humans and property, as well as damage to mental health.[42] AI creates a range of new security risks to individuals, organisations, and critical infrastructure.[43] Without government action, AI could cause and amplify discrimination that results in, for example, unfairness in the justice system.[44] Similarly, without regulatory oversight, AI technologies could pose risks to our privacy and human dignity, potentially harming our fundamental liberties.[45] Our regulatory intervention will ensure that AI does not cause harm at a societal level, threatening democracy[46] or UK values.

[38] Intelligent security tools, National Cyber Security Centre, 2019.
[39] What is generative AI, and why is it suddenly everywhere?, Vox, 2023.
[40] See, for example, The Benefits and Harms of Algorithms, The Digital Regulation Cooperation Forum, 2022; Harms of AI, Acemoglu, 2021.
[41] AI Accidents: An Emerging Threat, Center for Security and Emerging Technology, 2021.
[42] AI for radiographic COVID-19 detection selects shortcuts over signal, DeGrave, Janizek and Lee, 2021; Pathways: How digital design puts children at risk, 5Rights Foundation, 2021.

Box 1.2: Illustrative AI risks

The patchwork of legal frameworks that currently regulate some uses of AI may not sufficiently address the risks that AI can pose. The following examples are hypothetical scenarios designed to illustrate AI's potential to create harm.

Risks to human rights
Generative AI is used to generate deepfake pornographic video content, potentially damaging the reputation, relationships and dignity of the subject.

Risks to safety
An AI assistant based on LLM te

113、chnology recommends a dangerous activity that it has found on the internet,without understanding or communicating the context of the website where the activity was described.The user undertakes this activity causing physical harm.Risks to fairness47 An AI tool assessing credit-worthiness of loan app

114、licants is trained on incomplete or biased data,leading the company to offer loans to individuals on different terms based on characteristics like race or gender.Risks to privacy and agency Connected devices in the home may constantly gather data,including conversations,potentially creating a near-c

115、omplete portrait of an individuals home life.Privacy risks are compounded the more parties can access this data.Risks to societal wellbeing Disinformation generated and propagated by AI could undermine access to reliable information and trust in democratic institutions and processes.43 The Malicious

116、 Use of Artificial Intelligence,Malicious AI Report,2018.44 Constitutional Challenges in the Algorithmic Society,Micklitz et al.,2022.45 Smart Speakers and Voice Assistants,CDEI,2019;Deepfakes and Audiovisual disinformation,CDEI,2019.46 Artificial Intelligence,Human Rights,Democracy and the Rule of

117、Law,Leslie et al.,2021.47 Government has already committed to addressing some of these issues more broadly.See,for example,the Inclusive Britain report,Race Disparity Unit,2022.A pro-innovation approach to AI regulation 13 Risks to security AI tools can be used to automate,accelerate and magnify the

impact of highly targeted cyber attacks, increasing the severity of the threat from malicious actors. The emergence of LLMs enables hackers[48] with little technical knowledge or skill to generate phishing campaigns with malware delivery capabilities.[49]

1.3 A note on terminology

Terminology used in this paper:[50]

AI or AI system or AI technologies: products and services that are adaptable and autonomous in the sense outlined in our definition in section 3.2.1.

AI supplier: any organisation or individual who plays a role in the research, development, training, implementation, deployment, maintenance, provision or sale of AI systems.

AI user: any individual or organisation that uses an AI product.

AI life cycle: all events and processes that relate to an AI system's lifespan, from inception to decommissioning, including its design, research, training, development, deployment, integration, operation, maintenance, sale, use and governance.

AI ecosystem: the complex network of actors and processes that enable the use and supply of AI throughout the AI life cycle (including supply chains, markets, and governance mechanisms).

Foundation model: a type of AI model that is trained on a vast quantity of data and is adaptable for use on a wide range of tasks. Foundation models can be used as a base for building more specific AI models. Foundation models are discussed in more detail in section 3.3.3 below.[51]

Impacted third party: an individual or company that is impacted by the outcomes of AI systems that they do not use or supply themselves.

[48] Is ChatGPT a cybersecurity threat?, TechCrunch, 2023.
[49] OPWNAI: Cybercriminals starting to use ChatGPT, Check Point Research, 2023.
[50] These are not intended to be legal definitions for the purposes of the framework.
[51] The value chain of general-purpose AI, Ada Lovelace Institute, 2023.

Part Two: The current regulatory environment

2.1 Navigating the current landscape

26. The UK's AI success is, in part, due to our reputation for high-quality regulators and our strong approach to the rule of law, supported by our technology-neutral legislation and regulations. UK laws, regulators and courts already address some of the emerging risks posed by AI technologies (see box 2.1 for examples). This strong legal foundation encourages investment in new technologies, enabling AI innovation to thrive,[52] and high-quality jobs to flourish.[53]

Box 2.1: Example of legal coverage of AI in the UK and potential gaps

Discriminatory outcomes that result from the use of AI may contravene the protections set out in the Equality Act 2010.[54] AI systems are also required by data protection law to process personal data fairly.[55] However, AI can increase the risk of unfair bias or discrimination across a range of indicators or characteristics. This could undermine public trust in AI.

Product safety laws ensure that goods manufactured and placed on the market in the UK are safe. Product-specific legislation (such as for electrical and electronic equipment,[56] medical devices,[57] and toys[58]) may apply to some products that include integrated AI. However, safety risks specific to AI technologies should be monitored closely. As the capability and adoption of AI increases, it may pose new and substantial risks that are unaddressed by existing rules.

[52] Global Innovation Index 2022, GII 2022; Global Indicators of Regulatory Governance, World Bank, 2023.
[53] Demand for AI skills in jobs, OECD Science, Technology and Industry Working Papers, 2021.
[54] The protected characteristics are age, disability, gender reassignment, marriage and civil partnership, race, religion or belief, sex, and sexual orientation.
[55] Article 5(1)(a) Principles relating to processing of personal data, HM Government, 2016.
[56] Electrical Equipment (Safety) Regulations, HM Government, 2016.
[57] Medical Devices Regulation, HM Government, 2002.
[58] Toys (Safety) Regulations, HM Government, 2011.

Consumer rights law[59] may protect consumers where they have entered into a sales contract for AI-based products and services. Certain contract terms (for example, that goods are of satisfactory quality, fit for a particular purpose, and as described) are relevant to consumer contracts. Similarly, businesses are prohibited from including certain terms in consumer contracts. Tort law provides a complementary regime that may provide redress where a civil wrong has caused harm. It is not yet clear whether consumer rights law will provide the right level of protection in the context of products that include integrated AI or services based on AI, or how tort law may apply to fill any gap in consumer rights law protection.

27. While AI is currently regulated through existing legal frameworks like financial services regulation,[60] some AI risks arise across, or in the gaps between, existing regulatory remits. Industry told us that conflicting or uncoordinated requirements from regulators create unnecessary burdens and that regulatory gaps may leave risks unmitigated, harming public trust and slowing AI adoption.

28. Industry has warned us that regulatory incoherence could stifle innovation and competition by causing a disproportionate number of smaller businesses to leave the market. If regulators are not proportionate and aligned in their regulation of AI, businesses may have to spend excessive time and money complying with complex rules instead of creating new technologies. Small businesses and start-ups often do not have the resources to do both.[61] With the vast majority of digital technology businesses employing under 50 people,[62] it is important to ensure that regulatory burdens do not fall disproportionately on smaller companies, which play an essential role in the AI innovation ecosystem and act as engines for economic growth and job creation.[63]

29. Regulatory coordination will support businesses to invest confidently in AI innovation and build public trust by ensuring real risks are effectively addressed. While some regulators already work together to ensure regulatory coherence for AI through formal networks like the AI and digital regulations service in the health sector[64] and the Digital Regulation Cooperation Forum (DRCF), other regulators have limited capacity and access to AI expertise. This creates the risk of inconsistent enforcement across regulators.
There is also a risk that some regulators could begin to dominate and interpret the scope of their remit or role more broadly than may have been intended in order to fill perceived gaps, in a way that increases incoherence and uncertainty. Industry asked us to support further system-wide coordination to clarify who is responsible for addressing cross-cutting AI risks and to avoid duplicate requirements across multiple regulators.

[59] Consumer Rights Act 2015; Consumer Protection from Unfair Trading Regulations, HM Government, 2008.
[60] Such as the Financial Services and Markets Act, HM Government, 2000.
[61] Evidence to support the analysis of impacts for AI governance, Frontier Economics, 2023.
[62] In 2019, 98.8% of businesses in the digital sector had fewer than 50 employees. DCMS Sectors Economic Estimates 2019: Business Demographics, ONS, 2022.
[63] The AI Sector Study found that almost 90% of businesses in the AI sector are small or micro in size. AI Sector Study 2022, DSIT, 2023.
[64] AI and Digital Regulations Service; Care Quality Commission, Health Research Authority, Medicines and Healthcare Products Regulatory Agency, National Institute for Health and Care Excellence, 2023.

Case study 2.1: Addressing AI fairness under the existing legal and regulatory framework

A fictional company, "AI Fairness Insurance Limited", is designing a new AI-driven algorithm to set prices for insurance premiums that accurately reflect a client's risk. Setting fair prices and building consumer trust is a key component of AI Fairness Insurance Limited's brand, so ensuring it complies with the relevant legislation and guidance is a priority.

Fairness in AI systems is covered by a variety of regulatory requirements and best practice. AI Fairness Insurance Limited's use of AI to set prices for insurance premiums could be subject to a range of legal frameworks, including data protection, equality, and general consumer protection laws. It could also be subject to sectoral rules like the Financial Services and Markets Act 2000.[65] It can be challenging for a company like AI Fairness Insurance Limited to identify which rules are relevant and confidently apply them to AI use cases. There is currently a lack of support for businesses like AI Fairness Insurance Limited to navigate the regulatory landscape, with no cross-cutting principles and limited
system-wide coordination.

30. Government intervention is needed to improve the regulatory landscape. We intend to leverage and build on existing regimes, maximising the benefits of what we already have, while intervening in a proportionate way to address regulatory uncertainty and gaps. This will deliver a pro-innovation regulatory framework that is designed to be adaptable and future-proof, supported by tools for trustworthy AI including assurance techniques and technical standards. This approach will provide more clarity and encourage collaboration between government, regulators and industry to unlock innovation.

[65] Financial Services and Markets Act, HM Government, 2000.

Case study 2.2: Adapting regulatory approaches to AI
AI as a medical device

Some UK regulators have led the way and proactively adapted their approaches to AI-enabled technologies. In 2022, the MHRA (Medicines and Healthcare products Regulatory Agency) published a roadmap clarifying in guidance the requirements for AI and software used in medical devices.[66] The regulator is also updating the regulatory framework for medical devices to protect patients and secure the UK's global reputation for responsible innovation in medical device software. As part of this work, the MHRA will develop guidance on the transparency and interpretability of AI as a medical device.[67] The MHRA will consider the specific challenges posed by AI in this context, drawing on the applicable AI regulation cross-sectoral principles and ethical principles for AI in health and social care to issue practical guidance on how to meet legal product safety requirements. The MHRA will work with other regulators such as the Information Commissioner's Office (ICO) and the National Data Guardian to consider patients' data protection and trust in medical devices. This work will provide manufacturers with clear requirements and guidance to attract responsible innovation to the UK.

[66] Software and AI as a Medical Device Change Programme Roadmap, MHRA, 2022.
[67] The exact relation between the concepts interpretability and explainability is the subject of ongoing academic debate. See Interpretable and explainable machine learning: A methods-centric overview with concrete examples, Marcinkevics and Vogt, 2023. We use explainability as the key term in our AI principle in alignment with the OECD.

Part Three: An innovative and iterative approach

3.1 Aims of the regulatory framework

31. Regulation can increase innovation by giving businesses the incentive to solve important problems while addressing the risk of harm to citizens. For example, product safety legislation has increased innovation towards safer products and services.[68] In the case of AI, a context-based, proportionate approach to regulation will help strengthen public trust and increase AI adoption.[69]

32. The National AI Strategy set out our aim to regulate AI effectively and support innovation.[70] In line with the principles set out in the Plan for Digital Regulation,[71] our approach to AI regulation will be proportionate; balancing real risks against the opportunities and benefits that AI can generate. We will maintain an effective balance as we implement the framework by focusing on the context and outcomes of AI.

33. Our policy paper proposed a pro-innovation framework designed to give consumers the confidence to use AI products and services, and provide businesses the clarity they need to invest in AI and innovate responsibly.[72] This approach was broadly welcomed, particularly by industry. Based on feedback, we have distilled our aims into three objectives that our framework is designed to achieve:

o Drive growth and prosperity by making responsible innovation easier and reducing regulatory uncertainty. This will encourage investment in AI and support its adoption throughout the economy, creating jobs and helping us to do them more efficiently. To achieve this objective we must act quickly to remove existing barriers to innovation and prevent the emergence of new ones. This will allow AI companies to capitalise on early development successes and achieve long term market advantage.[73] By acting now, we can give UK innovators a headstart in the global race to convert the potential of AI into long term advantages for the UK, maximising the economic and social value of these technologies and strengthening our current position as a world leader in AI.[74]

[68] The impact of regulation on innovation, Nesta, 2012.
[69] Public expectations for AI governance (transparency, fairness and accountability), Centre for Data Ethics and Innovation, 2023.
[70] National AI Strategy, Office for Artificial Intelligence, 2021.
[71] Plan for Digital Regulation, DSIT (formerly DCMS), 2022.
[72] Establishing a pro-innovation approach to regulating
AI, Office for Artificial Intelligence, 2022.
[73] Economic impacts of artificial intelligence, European Parliament, 2019.
[74] The UK is ranked near the top of the Global AI Index, third only to the US and China. Global AI Index, Tortoise Media, 2022.

o Increase public trust in AI by addressing risks and protecting our fundamental values. Trust is a critical driver for AI adoption.[75] If people do not trust AI, they will be reluctant to use it. Such reluctance can reduce demand for AI products and hinder innovation. Therefore we must demonstrate that our regulatory framework (described in section 3.2) effectively addresses AI risks.

o Strengthen the UK's position as a global leader in AI. The development of AI technologies can address some of the most pressing global challenges, from climate change to future pandemics. There is also growing international recognition that AI requires new regulatory responses to guide responsible innovation. The UK can play a central role in the global conversation by shaping international governance and regulation to maximise opportunities and build trust in the technology, while mitigating potential cross-border risks and protecting our democratic values. There is also an important leadership role for the UK in the development of the global AI assurance industry,[76] including auditing and safety. We will ensure that the UK remains attractive to innovators and investors by promoting interoperability with other regulatory approaches and minimising cross-border frictions. We will work closely with global partners through multilateral and bilateral engagements to learn from, influence and adapt as international and domestic approaches to AI regulation continue to emerge (see part 6).

34. The proposed regulatory framework does not
seek to address all of the wider societal and global challenges that may relate to the development or use of AI. This includes issues relating to access to data, compute capability, and sustainability, as well as the balancing of the rights of content producers and AI developers. These are important issues to consider, especially in the context of the UK's ability to maintain its place as a global leader in AI, but they are outside of the scope of our proposals for a new overarching framework for AI regulation.

35. Government is taking wider action to ensure the UK retains its status as a global leader in AI, for example by taking forward Sir Patrick Vallance's recommendation relating to intellectual property law and generative AI.[77] This will ensure we keep the right balance between protecting rights holders and our thriving creative industries, while supporting AI developers to access the data they need.

[75] Trust in Artificial Intelligence: a five country study, KPMG and the University of Queensland, 2021; Evidence to support the analysis of impacts for AI governance, Frontier Economics, 2023.
[76] "Building on the UK's strengths in the professional services and technology sectors, AI assurance will also become a significant economic activity in its own right, with the potential for the UK to be a global leader in a new multi-billion pound industry." See The roadmap to an effective AI assurance ecosystem, Centre for Data Ethics and Innovation, 2021.
[77] Pro-innovation Regulation of Technologies Review: Digital Technologies, HM Treasury, 2023.

3.2 The proposed regulatory framework

36. Our innovative approach to AI regulation uses a principles-based framework for regulators to interpret and apply to AI within their remits. This collaborative and iterative approach can keep pace with a fast moving technology that requires proportionate action to balance risk and opportunity and to strengthen the UK's position as a global leader in AI. Our agile approach aligns with Sir Patrick Vallance's Regulation for Innovation report,[78] which highlights that flexible regulatory approaches can better strike the balance between providing clarity, building trust and enabling experimentation. Our framework will provide more clarity to innovators by encouraging collaboration between government, regulators, industry and civil society.

37. We have identified the essential characteristics of our regulatory regime. Our framework will be pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative.[79]

o Pro-innovation: enabling rather than stifling responsible innovation.
o Proportionate: avoiding unnecessary or disproportionate burdens for businesses and regulators.
o Trustworthy: addressing real risks and fostering public trust in AI in order to promote and encourage its uptake.
o Adaptable: enabling us to adapt quickly and effectively to keep pace with emergent opportunities and risks as AI technologies evolve.
o Clear: making it easy for actors in the AI life cycle, including businesses using AI, to know what the rules are, who they apply to, who enforces them, and how to comply with them.
o Collaborative: encouraging government, regulators, and industry to work together to facilitate AI innovation, build trust and ensure that the voice of the public is heard and considered.

[78] Pro-innovation Regulation of Technologies Review: Digital Technologies, HM Treasury, 2023.
[79] These characteristics are aligned with existing principles set out in the Plan for Digital Regulation, the report of the independent Taskforce on Innovation, Growth and Regulatory Reform and with the findings of the Pro-innovation Regulation of Technologies Review: Digital Technologies, published in March 2023, which called for a proportionate and agile regulatory approach and acknowledged the importance of achieving a "balance between providing clarity and building public trust, while also enabling development, experimentation, and deployment."

38. The framework, built around the four key elements below, is designed to empower our existing regulators and promote coherence across the regulatory landscape. The four key elements are:

o Defining AI based on its unique characteristics to support regulator coordination (section 3.2.1).
o Adopting a context-specific approach (section 3.2.2).
o Providing a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities (section 3.2.3).
  o The principles clarify government's expectations for responsible AI and describe good governance at all stages of the AI life cycle.
  o The application of the principles will initially be at the discretion of the regulators, allowing prioritisation according to the needs of their sectors.
  o Following this initial non-statutory period of implementation, and when parliamentary time allows, we anticipate introducing a statutory duty requiring regulators to have due regard to the principles.
o Delivering new central functions to support regulators to deliver the AI regulatory framework, maximising the benefits of an iterative approach and ensuring that the
framework is coherent (section 3.2.4).

3.2.1 Defining Artificial Intelligence

39. To regulate AI effectively, and to support the clarity of our proposed framework, we need a common understanding of what is meant by artificial intelligence. There is no general definition of AI that enjoys widespread consensus.[80] That is why we have defined AI by reference to the two characteristics that generate the need for a bespoke regulatory response.

o The adaptivity of AI can make it difficult to explain the intent or logic of the system's outcomes:
  o AI systems are trained once or continually and operate by inferring patterns and connections in data which are often not easily discernible to humans.
  o Through such training, AI systems often develop the ability to perform new forms of inference not directly envisioned by their human programmers.
o The autonomy of AI can make it difficult to assign responsibility for outcomes:
  o Some AI systems can make decisions without the express intent or ongoing control of a human.

40. The combination of adaptivity and autonomy can make it difficult to explain, predict, or control the outputs of an AI system, or the underlying logic by which they are generated. It can also be challenging to allocate responsibility for the system's operation and outputs. For regulatory purposes, this has potentially serious implications, particularly when decisions are made relating to significant matters, like an individual's health, or where there is an expectation that a decision should be justifiable in easily understood terms, like a legal ruling.

41. By defining AI with reference to these functional capabilities and designing our approach to address the challenges created by these characteristics, we future-proof our framework against unanticipated new technologies that are autonomous and adaptive. Because we are not creating blanket new rules for specific technologies or applications of AI, like facial recognition or LLMs, we do not need to use rigid legal definitions. Our use of these defining characteristics was widely supported in responses to our policy paper,[81] as rigid definitions can quickly become outdated and restrictive with the rapid evolution of AI.[82] We will, however, retain the ability to adapt our approach to defining AI if necessary, alongside the ongoing monitoring and iteration of the wider regulatory framework.

42. Below, we provide some illustrative examples of AI systems to demonstrate their autonomous and adaptive characteristics. While many aspects of the technologies described in these case studies will be covered by existing law, they illustrate how AI-specific characteristics introduce novel risks and regulatory implications.

Figure 1: Illustration of our strategy for regulating AI

[80] One of the biggest problems in regulating AI is agreeing on a definition, Carnegie Endowment for International Peace, 2022.
[81] Establishing a pro-innovation approach to regulating AI, Office for Artificial Intelligence, 2022.
[82] As stated in government guidance on using AI in the public sector, we consider machine learning to be a subset of AI. While machine learning is the most widely-used form of AI and will be captured within our framework, our adaptive and autonomous characteristics ensure any current or future AI system that meets these criteria will be within scope. See A guide to using artificial intelligence in the public sector, Government Digital Service and Office for Artificial Intelligence, 2019.

Case study 3.1: Natural language processing in customer service chatbots

Adaptivity: Provides responses to real-time customer messages, having been trained on huge datasets to identify statistical patterns in ordinary human speech, potentially increasing personalisation over time as the system learns from each new experience.

Autonomy: Generates a human-like output based on the customer's text input, to answer queries, help customers find products and services, or send targeted updates. Operates with little need for human oversight or intervention.

Illustrative AI-related regulatory implication: Unintentional inclusion of inaccurate or misleading information in training data, producing harmful instructions or convincingly spreading misinformation.

Case study 3.2: Automated healthcare triage systems

Adaptivity: Predicts patient conditions based on the pathology, treatment and risk factors associated with health conditions from the analysis of medical datasets, patient records and real-time health data.

Autonomy: Generates information about the likely causes of a patient's symptoms and recommends potential interventions and treatments, either to a medical professional or straight to a patient.

Illustrative AI-related regulatory implication: Unclear liability for an AI triage system that provides incorrect medical advice, leading to negative health outcomes for a patient and affecting the patient's ability to obtain redress.

Case study 3.3: Text-to-image generators

Adaptivity: Uses large amounts of online content to learn how to create rich, highly specific images on the basis of
a short text prompt.

Autonomy: Based on text input, these systems generate images that mimic the qualities of human-created art, with no ongoing oversight from the user.

Illustrative AI-related regulatory implication: Reproduction of biases or stereotyping in training data, leading to offensive language or content.

43. Industry, regulators, and civil society responded positively to our proposed definition, recognising that it supports our context-based and flexible approach to AI regulation. We will monitor how regulators interpret and apply adaptivity and autonomy when formulating domain-specific definitions of AI. Government will support coordination between regulators when we see potential for better alignment between their interpretations and use of our defining characteristics.

44. Active and collaborative horizon scanning will ensure that we can identify developments and emerging trends, and adapt our framework accordingly. We will convene industry, academia and other key stakeholders to inform economy-wide horizon scanning activity. This work will build on the activity of individual regulators.

3.2.2 Regulating the use, not the technology

45. Our framework is context-specific.[83] We will not assign rules or risk levels to entire sectors or technologies. Instead, we will regulate based on the outcomes AI is likely to generate in particular applications. For example, it would not be proportionate or effective to classify all applications of AI in critical infrastructure as high risk. Some uses of AI in critical infrastructure, like the identification of superficial scratches on machinery, can be relatively low risk. Similarly, an AI-powered chatbot used to triage customer service requests for an online clothing retailer should not be regulated in the same way as a similar application used as part of a medical diagnostic process.

46. A context-specific approach allows regulators to weigh the risks of using AI against the costs of missing opportunities to do so.[84] Regulators told us that AI risk assessments should include the failure to exploit AI capabilities. For example, there can be a significant opportunity cost related to not having access to AI in safety-critical operations, from heavy industry[85] to personal healthcare (see box 1.1). Sensitivity to context will allow the framework to respond to the level of risk in a proportionate manner and avoid stifling innovation or missing opportunities to capitalise on the social benefits made available by AI.

47. To best achieve this context-specificity we will empower existing UK regulators to apply the cross-cutting principles. Regulators are best placed to conduct detailed risk analysis and enforcement activities within their areas of expertise. Creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators.

[83] See Establishing a pro-innovation approach to regulating AI, Office for Artificial Intelligence, 2022. The context-based approach received wide support in feedback received following publication of this policy paper.
[84] FIDO Direct launched as end-to-end solution to solve water loss, Smart Water Magazine, 2023.
[85] AI on-side: how artificial intelligence is being used to improve health and safety in mining, Axora, 2023.

3.2.3 A principles-based approach

48. Existing regulators will be expected to implement the framework underpinned by five values-focused cross-sectoral principles:

o Safety, security and robustness
o Appropriate transparency and explainability
o Fairness
o Accountability and governance
o Contestability and redress

These build on, and reflect our commitment to, the Organisation for Economic Co-operation and Development (OECD) values-based AI principles, which promote the ethical use of AI.

49. The principles set out the key elements of responsible AI design, development and use, and will help guide businesses. Regulators will lead the implementation of the framework, for example by issuing guidance on best practice for adherence to these principles.

50. Regulators will be expected to apply the principles proportionately to address the risks posed by AI within their remits, in accordance with existing laws and regulations. In this way, the principles will complement existing regulation, increase clarity, and reduce friction for businesses operating across regulatory remits.

51. A principles-based approach allows the frame

218、work to be agile and proportionate.It is in line with the Plan for Digital Regulation,86 the findings from the independent Taskforce on Innovation,Growth and Regulatory Reform,87 the Regulatory Horizons Councils Closing the Gap report on implementing innovation-friendly regulation,88 and Sir Patrick

219、 Vallances Regulation for Innovation report.89 52.Since publishing the AI regulation policy paper,90 we have updated and strengthened the principles.We have:o Reflected stakeholder feedback by expanding on concepts such as robustness and governance.We have also considered the results of public engag

220、ement research that highlighted an 86 Plan for Digital Regulation,DSIT(formerly DCMS),2021.87 The Taskforce on Innovation,Growth and Regulatory Reform independent report,10 Downing Street,2021.The report argues for UK regulation that is:proportionate,forward-looking,outcome-focussed,collaborative,ex

221、perimental,and responsive.88 Closing the gap:getting from principles to practices for innovation friendly regulation,Regulatory Horizons Council,2022.89 Pro-innovation Regulation of Technologies Review:Digital Technologies,HM Treasury,2023.90 Establishing a pro-innovation approach to regulating AI,O

222、ffice for Artificial Intelligence,2022.A pro-innovation approach to AI regulation 27 expectation for principles such as transparency,fairness and accountability to be included within an AI governance framework.91 o Merged the safety principle with security and robustness,given the significant overla

223、p between these concepts.o Better reflected concepts of accountability and responsibility.o Refined each principles definition and rationale.Principle Safety,Security and Robustness Definition and explanation AI systems should function in a robust,secure and safe way throughout the AI life cycle,and

224、 risks should be continually identified,assessed and managed.Regulators may need to introduce measures for regulated entities to ensure that AI systems are technically secure and function reliably as intended throughout their entire life cycle.Rationale for the principle The breadth of possible uses

225、 for AI and its capacity to autonomously develop new capabilities and functions mean that AI can have a significant impact on safety and security.Safety-related risks are more apparent in certain domains,such as health or critical infrastructure,but they can materialise in many areas.Safety will be

226、a core consideration for some regulators and more marginal for others.However,it will be important for all regulators to assess the likelihood that AI could pose a risk to safety in their sector or domain,and take a proportionate approach to managing it.Additionally,AI systems should be technically

227、secure and should reliably function as intended and described.System developers should be aware of the specific security threats that could apply at different stages of the AI life cycle and embed resilience to these threats into their systems.Other actors should remain vigilant of security issues w

228、hen they interact with an AI system.We anticipate that regulators may wish to consider the National Cyber Security Centre(NCSC)principles for securing machine learning models when assessing whether AI actors are adequately prioritising security.92 When applying this principle,regulators will need to

229、 consider providing 91 The Centre for Data Ethics and Innovation(CDEI)has engaged with the public to understand their expectations for AI governance.This engagement has informed our policy development.Participants also referred to a privacy principle,which is embedded in the broader regulatory consi

230、derations as regulators and AI life cycle actors are expected to comply with the UKs data protection framework.Public expectations for AI governance(transparency,fairness and accountability),Centre for Data Ethics and Innovation,2023.92 Principles for the security of machine learning,National Cyber

231、Security Centre,2022.A pro-innovation approach to AI regulation 28 guidance in a way that is coordinated and coherent with the activities of other regulators.Regulators implementation of this principle may require the corresponding AI life cycle actors to regularly test or carry out due diligence on

232、 the functioning,resilience and security of a system.93 Regulators may also need to consider technical standards addressing safety,robustness and security to benchmark the safe and robust performance of AI systems and to provide AI life cycle actors with guidance for implementing this principle in t

233、heir remit.Principle Appropriate transparency and explainability Definition and explanation AI systems should be appropriately transparent and explainable.Transparency refers to the communication of appropriate information about an AI system to relevant people(for example,information on how,when,and

234、 for which purposes an AI system is being used).Explainability refers to the extent to which it is possible for relevant parties to access,interpret and understand the decision-making processes of an AI system.94 An appropriate level of transparency and explainability will mean that regulators have

235、sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles(e.g.to identify accountability).An appropriate degree of transparency and explainability should be proportionate to the risk(s)presented by an AI system.Regulators may ne

236、ed to look for ways to support and encourage relevant life cycle actors to implement appropriate transparency measures,for example through regulatory guidance.Parties directly affected by the use of an AI system should also be able to access sufficient information about AI systems to be able to enfo

237、rce their rights.In applying the principle to their business processes,relevant life cycle actors may be asked to provide this information in the form and manner required by regulators,including through product labelling.Technical standards could also provide useful guidance on available methods to

238、assess,design,and improve transparency and explainability within AI systems recognising that consumers,users and regulators will require different information.95 93 For example,digital security can affect the safety of connected products such as automobiles and home appliances if risks are not appro

239、priately managed.See Principle 1.4:Robustness,security and safety,OECD AI,2019.94 Adapted from IEEE 7001-2021,Standard for Transparency of Autonomous Systems.95 For example IEEE 7001-2021(Active Standard)describes measurable,testable levels of transparency so that autonomous systems can be objective

240、ly assessed,and levels of compliance determined;ISO/IEC TS6254(Under development)will describe approaches and methods that can be used to achieve explainability objectives of stakeholders with regards to ML models and AI systems behaviours,outputs,and results.A pro-innovation approach to AI regulati

241、on 29 Rationale for the principle Transparency can increase public trust,96 which has been shown to be a significant driver of AI adoption.97 When AI systems are not sufficiently explainable,AI suppliers and users risk inadvertently breaking laws,infringing rights,causing harm and compromising the s

242、ecurity of AI systems.At a technical level,the explainability of AI systems remains an important research and development challenge.The logic and decision-making in AI systems cannot always be meaningfully explained in a way that is intelligible to humans,although in many settings this poses no subs

243、tantial risk.It is also true that in some cases,a decision made by AI may perform no worse on explainability than a comparable decision made by a human.98 Future developments of the technology may pose additional challenges to achieving explainability.AI systems should display levels of explainabili

244、ty that are appropriate to their context,including the level of risk and consideration of what is achievable given the state of the art.Principle Fairness Definition and explanation AI systems should not undermine the legal rights of individuals or organisations,discriminate unfairly against individ

245、uals or create unfair market outcomes.Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a systems use,outcomes and the application of relevant law.Fairness is a concept embedded across many areas of law and regulation,including equalit

246、y and human rights,data protection,consumer and competition law,public and common law,and rules protecting vulnerable people.Regulators may need to develop and publish descriptions and illustrations of fairness that apply to AI systems within their regulatory domain,and develop guidance that takes i

247、nto account relevant law,regulation,technical standards,99 and assurance techniques.Regulators will need to ensure that AI systems in their domain are designed,deployed and used considering such descriptions of fairness.Where 96 BritainThinks:Complete transparency,complete simplicity,CDEI and CDDO,2

248、021.97 Trust in Artificial Intelligence:a five country study,KPMG and the University of Queensland,2021;Evidence to support the analysis of impacts for AI governance,Frontier Economics,2023.98 Should AI models be explainable?That depends,Stanford Institute for Human-Centered Artificial Intelligence,

249、2021.99 For example,ISO/IEC TR 24027:2021 describes measurement techniques and methods for assessing bias in AI systems across their life cycle,especially in AI-aided decision-making.A pro-innovation approach to AI regulation 30 concepts of fairness are relevant in a broad range of intersecting regu

250、latory domains,we anticipate that developing joint guidance will be a priority for regulators.Rationale for the principle In certain circumstances,AI can have a significant impact on peoples lives,including insurance offers,credit scores,and recruitment outcomes.AI-enabled decisions with high impact

251、 outcomes should not be arbitrary and should be justifiable.In order to ensure a proportionate and context-specific approach regulators should be able to describe and illustrate what fairness means within their sectors and domains,and consult with other regulators where multiple remits are engaged b

252、y a specific use case.We expect that regulators interpretations of fairness will include consideration of compliance with relevant law and regulation,including:1)AI systems should not produce discriminatory outcomes,such as those which contravene the Equality Act 2010 or the Human Rights Act 1998.Us

253、e of AI by public authorities should comply with the additional duties placed on them by legislation(such as the Public Sector Equality Duty).2)Processing of personal data involved in the design,training,and use of AI systems should be compliant with requirements under the UK General Data Protection

254、 Regulation(GDPR),the Data Protection Act 2018,100 particularly around fair processing and solely automated decision-making.3)Consumer and competition law,including rules protecting vulnerable consumers and individuals.101 4)Relevant sector-specific fairness requirements,such as the Financial Conduc

255、t Authority(FCA)Handbook.Principle Accountability and governance Definition and explanation Governance measures should be in place to ensure effective oversight of the supply and use of AI systems,with clear lines of accountability established across the AI life cycle.AI life cycle actors should tak

256、e steps to consider,incorporate and adhere to the principles and introduce measures necessary for the effective 100 The Data Protection and Digital Information(No.2)Bill reforms the UKs data protection regime(Data Protection Act 2018 and the UK GDPR).101 Guidance on vulnerability includes:FCA guidan

257、ce on vulnerable consumers,FCA,2019;Consumer vulnerability protections,Ofgem,2020;Vulnerable consumers,CMA,2018.A pro-innovation approach to AI regulation 31 implementation of the principles at all stages of the AI life cycle.Regulators will need to look for ways to ensure that clear expectations fo

258、r regulatory compliance and good practice are placed on appropriate actors in the AI supply chain,and may need to encourage the use of governance procedures that reliably ensure these expectations are met.Regulator guidance on this principle should reflect that“accountability”refers to the expectati

259、on that organisations or individuals will adopt appropriate measures to ensure the proper functioning,throughout their life cycle,of the AI systems that they research,design,develop,train,operate,deploy,or otherwise use.Rationale for the principle AI systems can operate with a high level of autonomy

260、,making decisions about how to achieve a certain goal or outcome in a way that has not been explicitly programmed or foreseen.102 Establishing clear,appropriate lines of ownership and accountability is essential for creating business certainty while ensuring regulatory compliance.Doing so for actors

261、 in the AI life cycle is difficult,given the complexity of AI supply chains,as well as the adaptivity,autonomy and opacity of AI systems.In some cases,technical standards can provide useful guidance on good practices for AI governance.103 Assurance techniques like impact assessments can help to iden

262、tify potential risks early in the development life cycle,enabling their mitigation through appropriate safeguards and governance mechanisms.Regulatory guidance should also reflect the responsibilities such life cycle actors have for demonstrating proper accountability and governance(for example,by p

263、roviding documentation on key decisions throughout the AI system life cycle,conducting impact assessments or allowing audits where appropriate).Principle Contestability and redress Definition and explanation Where appropriate,users,impacted third parties and actors in the AI life cycle should be abl

264、e to contest an AI decision or outcome that is harmful or creates material risk of harm.Regulators will be expected to clarify existing routes to contestability and redress,and implement proportionate measures to ensure that the outcomes 102 AI has the potential to learn to solve problems without hu

265、man intervention instructing it to do so,or cope with situations the systems have not encountered before,producing potentially different associated risks that require clear lines of accountability and governance mechanisms to be in place.For example,see AI is learning how to create itself,MIT Techno

266、logy Review,2021.103 For example,ISO/IEC 42001(Under development)will provide guidance for establishing,implementing and maintaining an AI management system within an organisation to develop or use AI systems responsibly.ISO/IEC 23894(Under development)will provide guidance for establishing AI risk

267、management principles and processes within an organisation.A pro-innovation approach to AI regulation 32 of AI use are contestable where appropriate.We would also expect regulators to encourage and guide regulated entities to make clear routes(including informal channels)easily available and accessi

268、ble,so affected parties can contest harmful AI outcomes or decisions as needed.Rationale for the principle The use of AI technologies can result in different types of harm and can have a material impact on peoples lives.AI systems outcomes may introduce risks such as the reproduction of biases or sa

269、fety concerns.People and organisations should be able to contest outcomes where existing rights have been violated or they have been harmed.It will be important for regulators to provide clear guidance on this principle so that AI life cycle actors can implement it in practice.This should include cl

270、arifying that appropriate transparency and explainability are relevant to good implementation of this contestability and redress principle.The UKs initial non-statutory approach will not create new rights or new routes to redress at this stage.53.We anticipate that regulators will need to issue guid

271、ance on the principles or update existing guidance to provide clarity to business.Regulators may also publish joint guidance on one or more of the principles,focused on AI use cases that cross multiple regulatory remits.We are keen to work with regulators and industry to understand the best approach

272、 to providing guidance.We expect that practical guidance will support actors in the AI life cycle to adhere to the principles and embed them into their technical and operational business processes.Regulators may also use alternative measures and introduce other tools or resources,in addition to issu

273、ing guidance,within their existing remits and powers to implement the principles.54.Government will monitor the overall effectiveness of the principles and the wider impact of the framework.104 This will include working with regulators to understand how the principles are being applied and whether t

274、he framework is adequately supporting innovation.Consultation questions:1.Do you agree that requiring organisations to make it clear when they are using AI would improve transparency?2.Are there other measures we could require of organisations to improve AI transparency?3.Do you agree that current r

275、outes to contest or get redress for AI-related harms are 104 While this activity is likely to be led centrally(see part 3.3.1),this will involve continuation of the existing collaboration across government to ensure alignment with(and appropriate leveraging of)existing work being undertaken in relat

276、ion to the National Cyber Strategy,UKRI work on Safe and Trusted AI,the work of the Centre for Connected and Autonomous Vehicles,the NHS AI Lab and other examples.A pro-innovation approach to AI regulation 33 adequate?4.How could current routes to contest or seek redress for AI-related harms be impr

277、oved,if at all?5.Do you agree that,when implemented effectively,the revised cross-sectoral principles will cover the risks posed by AI technologies?6.What,if anything,is missing from the revised principles?A pro-innovation approach to AI regulation 34 Case Study 3.4:Explainable AI in practice The le

278、vel of explainability needed from an AI system is highly specific to its context,including the extent to which an application is safety-critical.The level and type of explainability required will likely vary depending on whether the intended audience of the explanation is a regulator,technical exper

279、t,or lay person.For example,a technical expert designing self-driving vehicles would need to understand the systems decision-making capabilities to test,assess and refine them.In the same context,a lay person may need to understand the decision-making process only in order to use the vehicle safely.

If the vehicle malfunctioned and caused a harmful outcome,[105] a regulator may need information about how the system operates in order to allocate responsibility, similar to the level of explainability currently needed to hold human drivers accountable.

While AI explainability remains a technical challenge and an area of active research, regulators are already conducting work to address it. In 2021, the ICO and the Alan Turing Institute issued co-developed guidance on explaining decisions made with AI,[106] giving organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them. The audience for an explanation of AI's outcomes will often be a regulator, who may require a higher standard of explainability depending on the risks represented by an application. The MHRA's Project Glass Box work is addressing the challenge of setting medical device requirements that take into account adequate consideration of human interpretability and its consequences for the safety and effectiveness of AI used in medical devices.[107]

[105] Responsible Innovation in Self-Driving Vehicles, CDEI, 2022.
[106] Explaining decisions made with AI, ICO and the Alan Turing Institute, 2021.
[107] Software and AI as a Medical Device Change Programme Roadmap, MHRA, 2022.

Case Study 3.5: What the principles mean for businesses in practice

A fictional company, "Good AI Recruitment Limited", provides recruitment services that use a range of AI systems to accelerate the recruitment process, including a service that automatically shortlists candidates based on application forms. While potentially useful, such systems may discriminate against certain groups that have historically not been selected for certain positions.

After the implementation of the UK's new AI regulatory framework, the Equality and Human Rights Commission (EHRC) and the Information Commissioner's Office (ICO) will be supported and encouraged to work with the Employment Agency Standards Inspectorate (EASI) and other regulators and organisations in the employment sector to issue joint guidance. The joint guidance could address the cross-cutting principles relating to fairness, appropriate transparency and explainability, and contestability and redress in the context of the use of AI systems in recruitment or employment. Such joint guidance could, for example, make things clearer and easier for Good AI Recruitment Limited by:
1. Clarifying the type of information businesses should provide when implementing such systems
2. Identifying appropriate supply chain management processes such as due diligence or AI impact assessments
3. Suggesting proportionate measures for bias detection, mitigation and monitoring
4. Providing suggestions for the provision of contestability and redress routes.

Good AI Recruitment Limited would also be able to apply a variety of tools for trustworthy AI, such as technical standards, that would supplement regulatory guidance and other measures promoted by regulators. In their published guidance, regulators could, where appropriate, refer businesses to existing technical standards on transparency (e.g. IEEE 7001-2021), as well as standards on bias mitigation (e.g. ISO/IEC TR 24027:2021). By following this guidance, Good AI Recruitment Limited would be able to develop and deploy their services responsibly.

3.2.4 Our preferred model for applying the principles

55. Initially, the principles will be issued by government on a non-statutory basis and applied by regulators within their remits. We will support regulators to apply the principles using the powers and resources available to them. This initial period of implementation will provide a valuable opportunity to ensure that the principles are effective and that the wider framework is supporting innovation while addressing risks appropriately.

56. While industry has strongly supported non-statutory measures in the first instance, favouring flexibility and fewer burdens, some businesses and regulators have suggested that government should go beyond a non-statutory approach to ensure the principles have the desired impact.[108] Some regulators have also expressed concerns that they lack the statutory basis to consider the application of the principles. We are committed to an approach that leverages collaboration with our expert regulators, but we agree that we may need to intervene further to ensure that our framework is effective.

57. Following a period of non-statutory implementation, and when parliamentary time allows, we anticipate that we will want to strengthen and clarify regulators' mandates by introducing a new duty requiring them to have due regard to the principles. Such a duty would give a clear signal that we expect regulators to act, and support coherence across the regulatory landscape, ensuring that the framework displays the characteristics that we have identified.[109] One of the strengths of this approach is that regulators would still be able to exercise discretion and expert judgement regarding the relevance of each principle to their individual domains.

58. A duty would ensure that regulators retain the ability to exercise judgement when applying the principles in particular contexts, benefiting from some of the flexibility expected through non-statutory implementation. For example, while the duty to have due regard would require regulators to demonstrate that they had taken account of the principles, it may be the case that not every regulator will need to introduce measures to implement every principle. In having due regard to a particular principle, a regulator may exercise their expert judgement and determine that their sector or domain does not require action to be taken. The introduction of the duty will, however, give regulators a clear mandate and incentive to apply the principles where relevant to their sectors or domains.

59. If our monitoring of the effectiveness of the initial, non-statutory framework suggests that a statutory duty is unnecessary, we would not introduce it. Similarly, we will monitor whether particular principles cannot be, or are not being, applied in certain circumstances or by specific regulators because of the interpretation of existing legal requirements or because of technical constraints. Such circumstances may require broader legislative changes. Should we decide there is a need for statutory measures, we will work with regulators to review the interaction of our principles with their existing duties and powers.

Consultation questions:
7. Do you agree that introducing a statutory duty on regulators to have due regard to the principles would clarify and strengthen regulators' mandates to implement our principles, while retaining a flexible approach to implementation?
8. Is there an alternative statutory intervention that would be more effective?

[108] Following publication of our policy paper in July 2022.
[109] Pro-innovation, proportionate, adaptable, trustworthy, clear and collaborative; see paragraph 37 above.

3.2.5 The role of individual regulators in applying the principles

60. In some sectors, principles for AI governance will already exist and may even go further than the cross-cutting principles we propose. Our framework gives sectors the ability to develop and apply more specific principles to suit their own domains, where government or regulators identify these are needed.

61. The Ministry of Defence published its own AI ethical principles and policy in June 2022, which determines HM Government's approach regarding AI-enabled military capabilities. We will ensure appropriate coherence and alignment in the application of this policy through a context-specific approach, and thereby promote UK leadership in the employment of AI for defence purposes. Ahead of introducing any statutory duty to have due regard to our principles, and in advance of introducing other material iterations of the framework, we will consider whether exemptions are needed to allow existing regulators (such as those working in areas like national security) to continue their domain-level approach.

62. Not all principles will be equally relevant in all contexts, and sometimes two or more principles may come into conflict. For example, it may be difficult to assess the fairness of an algorithm's outputs without access to sensitive personal data about the subjects of the processing. Regulators will need to use their expertise and judgement to prioritise and apply the principles in such cases, sharing information where possible with government and other regulators about how they are assessing the relevance of each principle. This collaboration between regulators and government will allow the framework to be adapted to ensure it is practical, coherent and supporting innovation.

63. In implementing the new regulatory framework, we expect that regulators will:
- Assess the cross-cutting principles and apply them to AI use cases that fall within their remit.
- Issue relevant guidance on how the principles interact with existing legislation to support industry to apply the principles. Such guidance should also explain and illustrate what compliance looks like.
- Support businesses operating within the remits of multiple regulators by collaborating and producing clear and consistent guidance, including joint guidance where appropriate.

64. Regulators will need to monitor and evaluate their own implementation of the framework and their own effectiveness at regulating AI within their remits. We understand that there may be AI-related risks that do not clearly fall within the remits of the UK's existing regulators.[110] Not every new AI-related risk will require a regulatory response, and there is a growing ecosystem of tools for trustworthy AI that can support the application of the cross-cutting principles. These are described further in part four.

65. Where prioritised risks fall within a gap in the legal landscape, regulators will need to collaborate with government to identify potential actions. This may include identifying iterations to the framework such as changes to regulators' remits, updates to the Regulators' Code,[111] or additional legislative interventions. Our approach benefits from our strong sovereign parliamentary system, which reliably allows for the introduction of targeted and proportionate measures in response to emerging issues, including by adapting existing legislation if necessary.[112]

66. Sir Patrick Vallance's review has highlighted that rushed attempts to regulate AI too early would risk stifling innovation.[113] Our approach aligns with this perspective. We recognise the need to build a stronger evidence base before making decisions on statutory interventions. In doing so, we will ensure that we strike the right balance between retaining flexibility in our iterative approach and providing clarity to businesses. As detailed in section 3.3.1, we will deliver a range of central functions, including horizon scanning and risk monitoring, to identify and respond to situations where prioritised risks are not adequately covered by the framework, or where gaps between regulators' remits are negatively impacting innovation.

[110] For example, there are only six specific legal services activities that are overseen by regulators in the legal services sector. These "reserved legal activities" are set out in the Legal Services Act, HM Government, 2007 and can only be carried out by those who are authorised (or exempt). AI-driven systems could offer other services like writing wills or contracts (which many might consider to be legal services) without being subject to oversight from legal services regulators.

Case study 3.6: Responding to regulatory policy challenges: self-driving vehicles

Some aspects of a new AI use case may sit outside regulators' existing remits, meaning they do not have a mandate to address specific harms or support a new product to enter the market. The advent of self-driving vehicles highlighted such a regulatory and policy challenge. Where sophisticated AI-enabled software is capable of performing the designated driving task, existing regulatory structures, where responsibility for road safety is achieved by licensing human drivers, are not fit for purpose. This creates uncertainty regarding the development and deployment of self-driving vehicles that cannot be addressed by regulators alone. To achieve the government's ambition to make the UK one of the best places in the world to develop and deploy self-driving vehicle technology,[114] manufacturers need clarity about the regulatory landscape they are operating in, and the general public needs to have confidence in the safety, fairness and trustworthiness of these vehicles.

[111] Regulators' Code, Office for Product Safety and Standards, 2014.
[112] What is the UK Const

323、itution?,The Constitution Unit,University College London,2023.113 Pro-innovation regulation of technologies review:digital technologies,HM Treasury,2023.114 UK on the cusp of a transport revolution,Department for Transport,2021.A pro-innovation approach to AI regulation 40 The government published i

324、ts Connected&Automated Mobility 2025 report115 to address this challenge,describing how the ecosystem could be adapted to spur innovation and secure the economic and social benefits of this technology.The work of the UKs Centre for Connected and Autonomous Vehicles is an example of government acting

325、 to identify regulatory gaps,develop policy and build UK capabilities.A central monitoring and evaluation function,described below,will identify and assess gaps in the regulatory ecosystem that could stifle AI innovation so that government can take action to address them.3.2.6 Guidance to regulators

326、 on applying the principles 67.The proposed regulatory framework is dependent upon the implementation of the principles by our expert regulators.This regulator-led approach has received broad support from across industry,with stakeholders acknowledging the importance of the sector-specific expertise

327、 held by individual regulators.We expect regulators to collaborate proactively to achieve the best outcomes for the economy and society.We will work with regulators to monitor the wider framework and ensure that this collaborative approach to implementation is effective.If improvements are needed,in

328、cluding interventions to drive stronger collaboration across regulators,we will take further action.68.Our engagement with regulators and industry highlighted the need for central government to support regulators.We will work with regulators to develop guidance that helps them implement the principl

329、es in a way that aligns with our expectations for how the framework should operate.Existing legal frameworks already mandate and guide regulators actions.For example,nearly all regulators are bound by the Regulators Code116 and all regulators as public bodies are required to comply with the Human Ri

330、ghts Act.117 Our proposed guidance to regulators will seek to ensure that when applying the principles,regulators are supported and encouraged to:o Adopt a proportionate approach that promotes growth and innovation by focusing on the risks that AI poses in a particular context.o Consider proportiona

331、te measures to address prioritised risks,taking into account cross-cutting risk assessments undertaken by,or on behalf of,government.o Design,implement and enforce appropriate regulatory requirements and,where possible,integrate delivery of the principles into existing monitoring,investigation and e

332、nforcement processes.o Develop joint guidance,where appropriate,to support industry compliance with the principles and relevant regulatory requirements.115 Connected&Automated Mobility 2025,Department for Transport,2022.116 Regulators Code,Office for Product Safety and Standards,2014.117 Human Right

333、s Act,HM Government,1998 A pro-innovation approach to AI regulation 41 o Consider how tools for trustworthy AI like assurance techniques and technical standards can support regulatory compliance.o Engage proactively and collaboratively with governments monitoring and evaluation of the framework.A pro-innovation approach to AI regulation 42 Case Study 3.7:What this means for businesses A fictional

友情提示

1、下载报告失败解决办法
2、PDF文件下载后,可能会被浏览器默认打开,此种情况可以点击浏览器菜单,保存网页到桌面,就可以正常下载了。
3、本站不支持迅雷下载,请使用电脑自带的IE浏览器,或者360浏览器、谷歌浏览器下载即可。
4、本站报告下载后的文档和图纸-无水印,预览文档经过压缩,下载后原文更清晰。

本文(英国政府:2023促进创新的人工智能监管方法白皮书(英文版)(91页).pdf)为本站 (Kelly Street) 主动上传,三个皮匠报告文库仅提供信息存储空间,仅对用户上传内容的表现方式做保护处理,对上载内容本身不做任何修改或编辑。 若此文所含内容侵犯了您的版权或隐私,请立即通知三个皮匠报告文库(点击联系客服),我们立即给予删除!

温馨提示:如果因为网速或其他原因下载失败请重新下载,重复下载不扣分。
会员购买
客服

专属顾问

商务合作

机构入驻、侵权投诉、商务合作

服务号

三个皮匠报告官方公众号

回到顶部