
AI, DATA GOVERNANCE AND PRIVACY: SYNERGIES AND AREAS OF INTERNATIONAL CO-OPERATION

OECD Artificial Intelligence Papers, June 2024, No. 22

Foreword

The report “AI, data governance, and privacy: Synergies and areas of international co-operation” explores the intersection of AI and privacy and ways in which the relevant policy communities can work together to address related risks, especially with the rise of generative AI. It highlights key findings and recommendations to strengthen synergies and areas of international co-operation on AI, data governance and privacy.

This paper was approved and declassified by written procedure by the OECD Digital Policy Committee (DPC) on 20 June 2024 and prepared for publication by the OECD Secretariat. The paper is informed by the contributions of the OECD.AI Expert Group on AI, Data and Privacy (hereafter the “Expert Group”) of the OECD Network of Experts on AI. It was prepared under the aegis of the OECD Working Party on Artificial Intelligence Governance (AIGO) and the OECD Working Party on Data Governance and Privacy, both working parties of the OECD Digital Policy Committee (DPC).

At the time of publishing, the Expert Group was co-chaired by Reuven Eidelman (Israeli Privacy Protection Authority), Denise Wong (Singapore Infocomm Media Development Authority (IMDA)), and Clara Neppel (IEEE European Business Operations). The Expert Group also benefitted from input and guidance from a Steering Group comprised of Yordanka Ivanova (European Commission), Kari Laumann (Norwegian Data Protection Authority), Winston Maxwell (Télécom Paris-Institut Polytechnique de Paris), and Marc Rotenberg (Center for AI and Digital Policy).

The report development and drafting were led by members of the OECD Secretariat, in collaboration with Winston Maxwell (Télécom Paris-Institut Polytechnique de Paris), who was a major contributor to the report: Sarah Bérubé, Celine Caira, and Yuki Yokomori from the OECD AI Unit; Sergi Galvez Duran, Andras Molnar and Limor Shmerling-Magazanik from the OECD Data Governance and Privacy Unit. Clarisse Girot and Karine Perset, Heads of the Data Governance and Privacy Unit and the AI Unit respectively, recognised the value of the two working parties and the associated policy communities working together and provided resources, input and oversight. Gallia Daor, Digital Economy Policy Division, and Audrey Plonk, Deputy Director of Science, Technology and Innovation, provided advice and oversight.

The authors gratefully acknowledge the contributions made by individuals and institutions that took the time to participate in presentations to the Expert Group. Finally, the authors thank Andreia Furtado, Marion Barberis, and Shellie Phillips for administrative and communications support; the overall quality of the report benefited from their engagement.

Note to Delegations: This document is also available on O.N.E under the reference code: DSTI/CDEP/AIGO/DGP(2023)1/FINAL

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area.

© OECD 2024. The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.

Table of contents

Foreword
Acronyms and abbreviations
Abstract
Executive summary
Introduction
1 Generative AI: a catalyst for collaboration on AI and privacy
   The opportunities and risks of generative AI for privacy
   Privacy concerns emerging from generative AI: Privacy Enforcement Authorities step in
   Generative AI enhances the urgency to work on the interplay between AI and privacy regulations
2 Mapping existing OECD principles on privacy and on AI: key policy considerations
   The five values-based principles in the OECD AI Recommendation
   Key policy considerations from mapping AI and privacy principles
   Overview of possible commonalities and divergences in AI and privacy principles
3 National and regional developments on AI and privacy
   International responses by Privacy Enforcement Authorities
   Guidance provided by Privacy Enforcement Authorities on the application of privacy laws to AI
   PEA enforcement actions in AI, including generative AI
4 Conclusion
References
Notes

Tables
Table 1. The OECD AI Principles, revised 2024
Table 2. Overview of similarities and relevant areas of coordination between AI and privacy policy communities
Table 3. Key concepts with different meanings between AI and privacy policy communities

Boxes
Box 1.1. Real and potential risks associated with AI systems

Acronyms and abbreviations

AI: Artificial intelligence
DPA: Data Protection Authority
DPC: Digital Policy Committee
GPU: Graphics processing unit
HPC: High-performance computing
ICT: Information and communication technology
IGO: Intergovernmental organisation
ML: Machine learning
NGO: Non-governmental organisation
NLP: Natural language processing
OECD: Organisation for Economic Co-operation and Development
ONE AI: OECD AI Network of Experts
PEA: Privacy Enforcement Authority
R&D: Research and development
SDG: Sustainable Development Goals
SME: Small and medium-sized enterprise
VC: Venture Capital
WPAIGO: Working Party on Artificial Intelligence Governance
WPDGP: Working Party on Data Governance and Privacy

Abstract

Recent AI technological advances, particularly the rise of generative AI, have raised many data governance and privacy questions. However, AI and privacy policy communities often address these issues independently, with approaches that vary between jurisdictions and legal systems. These silos can generate misunderstandings, add complexities in regulatory compliance and enforcement, and prevent capitalising on commonalities between national frameworks. This report focuses on the privacy risks and opportunities stemming from recent AI developments. It maps the principles set in the OECD Privacy Guidelines to the OECD AI Principles, takes stock of national and regional initiatives, and suggests potential areas for collaboration. The report supports the implementation of the OECD Privacy Guidelines alongside the OECD AI Principles. By advocating for international co-operation, the report aims to guide the development of AI systems that respect and support privacy.

Executive summary

Recent AI technological advances, particularly the rise of generative AI, raise both opportunities and risks related to data protection and privacy. As a general-purpose technology, AI is wide-reaching and rapidly permeating products, sectors, and business models across the globe. Recent progress in generative AI owes much to the availability and use of vast training data stored worldwide. Like the data, the actors involved in the AI lifecycle are distributed across jurisdictions, underscoring the need for global synchronisation, clear guidance and cooperative efforts to address the challenges posed by AI’s impact on privacy.

However, the AI and privacy policy communities currently tend to address challenges separately, without much co-operation, such that their approaches vary across jurisdictions and legal systems. For instance, the practice of scraping personal data to train generative AI raises significant privacy questions and is attracting increasing regulatory attention as a consequence. However, discussions on practical solutions to align data scraping practices with Privacy Guidelines have been limited. Likewise, the practical implementation of individual data protection and privacy rights in the development of generative AI is not yet the subject of collective in-depth reflection. As more countries move to regulate AI, lack of co-operation between these communities could result in misunderstandings as to the actual reach of data protection and privacy laws, as well as in conflicting and/or duplicative requirements that could lead to additional complexity in regulatory compliance and enforcement. As both communities consider possible responses to the opportunities and risks of AI, they could benefit from each other’s knowledge, experience, and priorities through greater collaboration, aligning policy responses and improving complementarity and consistency between AI policy frameworks on the one hand, and data protection and privacy frameworks on the other.

With their differences in history, profiles and approaches, the AI and privacy policy communities have lessons to learn from each other. In recent years, the AI community, including AI researchers and developers from academia, civil society, and the public and private sectors, has formed dynamic and strong networks. Many in the AI community have taken an innovation-driven approach, while the privacy community has generally adopted a more cautious approach marked by decades of implementation of long-standing privacy and data protection laws. The privacy community is also often characterised as more established because of those long-standing laws, and it has evolved over time to include a diverse array of stakeholders such as regulators, privacy and data protection officers, technologists, lawyers, public policy professionals, civil society groups, and regulatory technology providers, among others. This community is focused on establishing privacy safeguards and mitigating risks assessed within often sophisticated and firmly established regulatory frameworks.

Despite these differences, synergies exist, and co-operation is essential. This report identifies areas that would benefit from further synergy and complementarity, including key terminological differences between the two policy communities. It maps existing privacy and data protection considerations to the AI values-based principles set in the OECD’s 2019 Recommendation on AI to identify relevant areas for closer coordination. This mapping illustrates the different interpretations of the privacy and AI communities around key concepts including fairness, transparency and explainability. Understanding these differences is essential for building sustainable co-operation actions.

Actors in both the AI and privacy communities have implemented measures at the national, regional, and global levels to tackle opportunities and risks posed by AI. The report provides a snapshot of national and regional developments on AI and privacy, including guidance provided by privacy regulators on the application of privacy laws to AI and related enforcement actions, specifically regarding generative AI. It finds that while many actions have been taken, including policy initiatives and enforcement actions by Privacy Enforcement Authorities, they could benefit from further coordination as AI-specific laws emerge worldwide.

With its international reach and substantive expertise in both AI and data protection and privacy, the OECD appears as a key forum to strengthen synergies and areas of international co-operation in this area. It can draw on well-established policy work in both areas, including the leading principles included in the 1980 OECD Privacy Guidelines, updated in 2013, and the 2019 OECD Recommendation on AI, updated in 2024. Moreover, in 2024 the OECD established a unique Expert Group on AI, Data, and Privacy, which convenes leading voices in both communities to explore key questions and policy solutions at the intersection of AI and data protection and privacy. Despite the challenges, both this policy work and the ongoing activities within this expert group showcase that broad and lasting co-operation, as well as mutual understanding, are achievable.

To provide a common reference framework for these co-operation opportunities and highlight the OECD’s distinctive role, the report aligns the OECD AI Principles, the first intergovernmental standard on AI, with the well-established OECD Privacy Guidelines, which serve as the foundation for data protection laws globally. The report assesses national and regional initiatives related to AI and privacy and identifies areas for collaboration, such as in the area of Privacy Enhancing Technologies (PETs), that can help address privacy concerns, particularly regarding the “explainability” of AI algorithms. The joint expert group on AI, data, and privacy will play a crucial role in more precisely articulating the concrete opportunities for innovative, technological and regulatory developments of AI that respect privacy and personal data protection rules.

Introduction

Strengthening synergies between the AI and privacy communities

Recent AI advancements, including the rise of generative AI in late 2022, have raised data governance and privacy challenges. Difficult questions have come to the fore around the use of input and output data, data quality and data availability for training AI models. Namely, how to protect the rights and interests of all parties involved, including the individuals to whom the data collected, used, and produced by these models and systems relate.

In contrast to previous AI systems, recent advances in neural networks and deep learning have resulted in larger, more advanced and more compute-intensive AI models and systems. In 2017, a group of researchers introduced a type of neural network architecture called “transformers”, a key conceptual breakthrough underpinning major progress in AI language models and generative AI. These advances centre on “foundation models”, models trained on large amounts of data that can be adapted to a wide range of downstream tasks, such as OpenAI’s Generative Pretrained Transformers (GPT) series. Advances in AI computing infrastructure, such as graphics processing units (GPUs), and the availability and quality of data have also been fundamental in fuelling technological leaps in machine learning, as they form the basic AI production function: algorithms, data and computing resources (OECD, 2024[1]).

With the rise of new machine learning techniques, and generative AI applications in particular, calls have become pressing to consider the privacy implications related to the training and use of AI systems, and for the different policy communities in this area, including policymakers, researchers, civil society, industry, and oversight and enforcement agencies, to help address such concerns by cross-fertilising their respective efforts.

Neural networks have often been referred to as a “black box”. The term “black box” reflects the considerable challenge in understanding how AI systems make decisions, a challenge that is especially apparent in methods based on neural networks.

The OECD is helping to build and strengthen synergies between the AI and privacy communities, drawing on well-established policy work in both areas, and on the leading principles included in the 1980 OECD Privacy Guidelines, updated in 2013, and the 2019 OECD Recommendation on AI (hereafter “OECD Recommendation on AI”), updated in 2024. This analysis concludes that, despite challenges, AI’s innovative, technological and regulatory developments are mainly compatible with, and can even reinforce, privacy and personal data protection rules. By identifying risks and opportunities, mapping the existing OECD Privacy Guidelines to the AI Principles, taking stock of national and regional initiatives, and providing key policy considerations for the way forward, it advances the OECD’s mission to help implement the OECD AI Principles, the world’s first intergovernmental standard on AI, and the well-established OECD Privacy Guidelines, a flagship legal instrument which forms the bedrock of data protection laws globally. Such efforts help to enable international co-operation, at the OECD and beyond, to foster a shared understanding to help chart the course for successful implementation of AI and privacy rules around the globe.

AI and privacy: a dynamic policy landscape

In response to the recent rise of advanced machine learning systems and generative AI, many in both the AI and the privacy and data protection communities are posing questions at the intersection of AI, data governance, and privacy, and organising policy responses. Policy actions and rules around AI systems applied in high-risk areas have also emerged in a growing number of jurisdictions. In likely the best-known example, the European Union’s Artificial Intelligence Act (the EU AI Act) outlines a risk-based regulatory approach to the use of AI systems, including areas of high risk, for example threats to European values such as privacy and data protection (European Parliament, 2024[2]). G7 Digital and Technology Ministers have also put AI high on their agendas through the G7 Hiroshima Process on Generative AI (OECD, 2023[3]), emphasising the need to protect human rights, including the right to privacy. Individual countries have also taken action. For example, the 2023 United States Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directs public sector entities to establish new standards for AI safety and security, privacy protections, equity and civil rights, consumers’ and workers’ rights, and innovation and competition (The White House, 2023[4]).

In parallel to the establishment of AI-specific laws, regulations and standards, regulators in a growing number of jurisdictions have started applying existing privacy and data protection laws (hereinafter “privacy laws” for short) to address the privacy issues generated by the processing of personal data used to train AI systems, such as through enforcement actions or the adoption of ad hoc guidance (see Section 3). Several Privacy Enforcement Authorities (PEAs)1 announced the launch of AI action plans, including the establishment of dedicated AI units. These national initiatives are reinforced by initiatives from the international community of privacy authorities, for example the public statement issued by the G7 Data Protection and Privacy Authorities (DPAs) Roundtable in June 2023 and the “Resolution on Generative Artificial Intelligence Systems” adopted at the 45th meeting of the Global Privacy Assembly, the premier global network of privacy regulators, on 20 October 2023.

With variations between jurisdictions and legal systems, however, the AI and privacy policy communities are still largely responding to AI and privacy challenges independently of each other. As more countries move to regulate AI, this could result in misunderstandings about the actual reach of data protection and privacy laws, and in conflicting and/or duplicative requirements that can lead to additional complexity in the compliance with and enforcement of responsible policies and regulations. As both communities consider possible responses to AI opportunities and risks, they could benefit from each other’s knowledge, experience, and priorities through greater collaboration, aligning policy responses and improving complementarity and consistency between AI policy frameworks on the one hand, and data protection and privacy frameworks on the other.

The existence of parallel work streams involving two different policy communities is not unusual, nor is it necessarily problematic. Each community brings a unique perspective that can lead to a richer policy debate and approach to solutions. Rapid advances in AI as a technology and its diffusion across sectors are, however, putting pressure on different policy communities to devise solutions quickly, and the existence of silos raises the risk of inconsistent policy responses and even misunderstandings due to differences in terminology and approaches.

Achieving consensus on language is often a pre-requisite for effective co-operation. In this vein, the OECD has played an important role in promoting the standardisation of key terminology in the AI and privacy space. Namely, the 2019 OECD Recommendation on AI includes a widely cited definition of an AI system, which was revised in late 2023 to ensure it reflects and addresses important technology and policy developments, notably with respect to generative AI, and heightened concerns around safety, information integrity, and environmental sustainability. The Recommendation has influenced AI policy and legal frameworks around the world, including in the EU AI Act, the Council of Europe, and in standards bodies like the US National Institute of Standards and Technology (NIST). The EU-US Trade and Technology Council (TTC) is also actively collaborating and released an initial draft of a common EU-US Terminology and Taxonomy for Artificial Intelligence on 31 May 2023, which includes terms relevant to both the AI and privacy communities (European Commission and US TTC, 2023[5]).

The OECD Working Party on Artificial Intelligence Governance (WPAIGO) and the OECD Working Party on Data Governance and Privacy (WPDGP) are well positioned to support existing international co-operation efforts on these issues. As part of the OECD.AI Network of Experts (ONE AI), the OECD.AI Expert Group on AI, Data, and Privacy (hereafter “Expert Group”), established in 2024, also helps to bring both communities together to promote synergies and complementarities. OECD Working Party delegates and members of the Expert Group have contributed analysis and insights to this report and to this work more broadly.

The OECD privacy and data protection community is well-established, with a robust “toolbox” relevant to the risks AI raises to individual rights and freedoms. This toolbox comprises various legal instruments, notably the OECD Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data (hereafter “OECD Privacy Guidelines”), adopted in 1980 and revised in 2013 (OECD/LEGAL/0188), and the OECD Recommendation on Enhancing Access to and Sharing of Data (OECD/LEGAL/0463), as well as relevant materials and expertise in implementing fundamental concepts and compliance mechanisms in different contexts. Elements of this toolbox are already being used in several jurisdictions to inform frameworks for the responsible use of AI.

While frameworks related to trustworthy AI are comparatively more recent, they have garnered significant attention globally. In efforts to keep pace with the speed of technological advancements, the AI policy community is rapidly developing and implementing frameworks including on AI risk management and accountability, mitigating bias, promoting explainability of AI system outputs, and improving robustness, among others, along the AI system lifecycle. For example, the OECD’s 2019 Recommendation on AI (OECD/LEGAL/0449), updated in 2024, forms a foundational set of principles that has guided the development of AI laws, regulations and standards globally.

The rapidly evolving AI, data governance, and privacy policy landscape calls for greater collaboration and information sharing across these policy communities. Co-operation strives to support efforts not only to address privacy risks in AI, but also to optimise and enhance the collective benefits of AI to society, unleashing AI’s innovative potential while also protecting privacy and personal data. International co-operation on AI and privacy requires ensuring the long-term interoperability of the legal, technical, and operational frameworks applying to AI and privacy. This will allow policy- and decision-makers to leverage commonalities, complementarities, and elements of convergence in their respective policy frameworks, or, conversely, to identify the stumbling blocks that could hinder the development of common positions or co-operation. These co-operation efforts could help assess whether the AI and Privacy Recommendations need to be updated to reflect the synergies between the AI and privacy communities.

AI and the OECD Privacy Guidelines

On the privacy side, the OECD Privacy Guidelines were adopted in 1980 and revised in 2013 (OECD/LEGAL/0188). They are the cornerstone of the OECD’s work on privacy and are recognised as the global minimum standard for privacy and data protection. The OECD Privacy Guidelines are complemented by other flagship OECD legal instruments relevant to co-operation on privacy and AI issues, including the Recommendation on Cross-border Co-operation in the Enforcement of Laws Protecting Privacy (OECD, 2007[6]), the Recommendation on Enhancing Access to and Sharing of Data (OECD, 2021[7]), and subsequent implementation guidance and work carried out under the auspices of the WPDGP. The Declaration on Government Access to Personal Data Held by Private Sector Entities (OECD/LEGAL/0487) is also relevant when accessing personal data stored in AI systems for national security and law enforcement purposes. In recent years, the WPDGP has produced analyses relevant to privacy and AI, in particular on Privacy Enhancing Technologies (PETs) (OECD, 2023[8]). International co-operation is a core principle of the OECD Privacy Guidelines (Part Six) and a growing area of work on the WPDGP’s agenda, including on the need to clarify the intersection of baseline privacy frameworks with sectoral or other cross-cutting frameworks that include AI and new technologies.

The OECD Privacy Guidelines are technology-neutral and do not explicitly cover the privacy challenges posed by AI, nor those of other specific digital technologies. At the same time, the need to address the potential for bias and other harmful consequences from personal data processing, without hindering innovation and preventing the beneficial uses of these technologies, was highlighted in the 2021 review of the Recommendation (OECD, 2021[9]). One of the dominant themes is the importance of the “explainability” of AI algorithms to ensure accuracy, fairness, and accountability. Experts also noted that AI increases demand for large datasets, which are critical to build AI systems that generate more accurate outputs but also increase privacy-related risks. Furthermore, experts highlighted that most AI principles refer to privacy in general terms but do not establish an explicit connection between the capabilities of AI and the nature of AI-specific privacy challenges, with the possible effect of shifting the focus away from privacy when it comes to AI. Their overall suggestion was that additional guidance may be helpful to ensure that current AI principles sufficiently address privacy-related concerns.

Adherents to the OECD Privacy Guidelines agreed with these points at the time. They noted that the technology-neutral language of the OECD Privacy Guidelines was key to their adaptability and decided not to amend the basic principles to account for AI (OECD, 2021[9]). Rather, Adherents decided that these important matters related specifically to AI could be addressed in mechanisms and guidance related to the OECD Recommendation on AI, which had just been adopted in 2019. The analysis undertaken in this report, and the work of the Expert Group, also demonstrate ways of fulfilling the request for further collaboration with AI communities.

Privacy and the OECD Recommendation on AI

Since 2016, the OECD has undertaken significant work on AI policy and governance through its Working Party on AI Governance (AIGO) (OECD AI, 2023[10]), including the adoption of the OECD Recommendation of the Council on Artificial Intelligence in May 2019, updated in 2024. Comprising five principles applicable to all stakeholders and five recommendations to governments, the OECD AI Principles provide guidance on how governments and other actors can shape a human-centric approach to trustworthy AI. With 46 adherents, the AI Principles emphasise international collaboration, including in the areas of privacy and accountability.

The global significance of the OECD AI Principles cannot be overstated. Since their adoption, countries worldwide have taken actions to codify the Principles through national AI strategies, and soft and hard laws. These include several AI initiatives, including the establishment of AI Offices or AI Commissioners to guide implementation of national or regional laws, regulations and standards, for example in the EU AI Act and Canada’s proposed Artificial Intelligence and Data Act (AIDA). The United States Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence also emphasises the role of the Federal Trade Commission in ensuring fair competition in the AI marketplace and ensuring that consumers and workers are protected from AI harms. Some jurisdictions are also exploring the role of existing national PEAs in implementing AI-related law and regulation.

Exploring the role of existing data protection actors in the implementation of AI frameworks may not come as a surprise, considering that values-based principles such as the OECD AI Principles complement existing OECD standards in areas such as privacy, with privacy and data protection being core components of the Recommendation. Three principles in particular mention data protection and privacy explicitly in the OECD AI Principles (OECD, 2019[11]):

- Principle 1.2: Respect for the rule of law, human rights, and democratic values, including fairness and privacy, throughout the AI system lifecycle.
- Principle 1.4 (Robustness, security and safety) calls for AI actors to, among other things, ensure the traceability of AI systems, including in relation to datasets, processes and decisions made during the AI system lifecycle, and to apply a systematic risk management approach to AI system lifecycle phases to address risks such as privacy, digital security, safety and bias.
- Principle 2.1 (Investing in AI research and development) calls for, among other things, governments to consider public investment, and encourage private investment, in open datasets that are representative and respect privacy and data protection.

Privacy is also referenced in many tools included in the work of AIGO and ONE AI. For example, the OECD Framework for the Classification of AI Systems has been applied to contexts where privacy and data protection are paramount, including in evaluating medical technology applications in the United Kingdom and in Australia.

Key definitions of terms used throughout the report

AI and privacy communities

This report refers to the “AI and privacy communities”, which for the purposes of this analysis are defined as including policymakers and enforcement authorities, as well as experts in academia, civil society, and the private sector. The “AI community”, including AI researchers and developers, and members from academia, civil society, and the public and private sectors, has grown in recent years into a dynamic and strong network focused both on advancing the highly technical aspects of AI and on global AI governance to promote its responsible use. This group is subject to established or emerging rules including AI laws, regulatory frameworks, and standards, which in many parts of the world are still evolving. Many in the AI community have taken an innovation-driven approach, including the exploration of largely uncharted territories as technological leaps occur and new applications are discovered.

The “privacy community”, in contrast, may be characterised as more established, although its profile has dramatically evolved over time to include a very diverse array of stakeholders including regulators, privacy and data protection officers, technologists, lawyers, public policy professionals, civil society groups, and regulatory technology providers, among others. This broad global community has been largely shaped by the growing number of regulatory frameworks that have developed over decades around the globe. Because it operates in this mature context, the privacy community adopts approaches that can generally be described as more cautious than the innovation-driven approaches which tend to characterise the AI community.

The differing nature of these communities’ approaches to AI and privacy issues has implications. While the AI community may benefit from its agility and innovative spirit, it might lack experience with and in-depth understanding of the regulatory implications of technology advancements, and of the regulatory implementation and enforcement of AI-specific rules. Meanwhile, the privacy community, with its depth of regulatory experience, may lack the technical knowledge necessary to fully comprehend how personal data is used in designing, developing, and deploying AI systems. This technical gap could lead to an overly conservative approach, potentially hindering innovation due to concerns over the privacy risks associated with using personal data to train and test AI systems. Building bridges between these communities would not only facilitate compliance with emerging AI laws, but also ensure that AI development continues to thrive within existing data protection and privacy frameworks. A collaborative approach is essential for formulating regulations that protect societal values without impeding technological progress.

AI actors

According to the OECD AI Principles, the term “AI actors” refers to those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI. An AI system lifecycle typically involves several phases: plan and design; collect and process data; build model(s) and/or adapt existing model(s) to specific tasks; test, evaluate, verify and validate; make available for use/deploy; operate and monitor; and retire/decommission. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase.

Generative AI

Generative artificial intelligence (AI) systems create new content (e.g. text, image, audio, or video) in response to prompts, based on the data the models have been trained on. Generative AI is based on machine learning (ML), which has developed gradually since the 1950s. These models leverage deep neural networks to emulate human intelligence (i.e. by imitating the information processing of neurons in the human brain): they are exposed to data (training) and find patterns that are then used to process previously unseen data. This allows the model to generalise based on probabilistic inference (i.e. informed guesses) rather than causal understanding. Unlike humans, who can learn from only a few examples, deep neural networks need hundreds of thousands, millions, or even billions of them, meaning that machine learning requires vast quantities of data (Lorenz, Perset and Berryhill, 2023[12]).
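To make the “find patterns, then generalise by probabilistic inference” idea above concrete, the sketch below is a deliberately tiny character-level model in Python: it merely counts which character follows which in a training text, then samples new text from those counts. It is a toy illustration of the mechanism described above, not how production systems work; modern generative AI replaces the count table with deep neural networks trained on vast corpora, and every name in the snippet is illustrative.

```python
import random
from collections import defaultdict

# Toy character-level "language model": count which character follows
# which in the training text, then generate new content by sampling from
# those counts (probabilistic inference, not causal understanding).
corpus = "the cat sat on the mat. the dog sat on the rug."

counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(corpus, corpus[1:]):
    counts[current][following] += 1

def generate(seed_char="t", length=40, seed=0):
    rng = random.Random(seed)
    out = [seed_char]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:          # no observed continuation for this character
            break
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

print(generate())  # new text assembled from patterns learned from the corpus
```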

Privacy and data protection

The terms “privacy” and “data protection” can have different meanings in the AI policy and privacy policy communities, also factoring in that the concept of “data protection”, as a contraction of “the protection of individuals with regard to the processing of their personal data”, is itself already frequently the subject of misinterpretations. Some members of the AI policy community may in particular view “privacy” as relating principally to the risk of re-identification, data leakage or inferences from AI (OECD, 2023[13]), in practice subsuming privacy and data protection, possibly the larger concept of data governance, and the concept of AI safety. As is evident in the work of privacy authorities on AI systems, however, privacy and data protection go beyond issues of safety and security, as explained in an OECD report (OECD, 2023[14]) and in the Resolution of the Global Privacy Assembly (GPA, 2023[15]). Nevertheless, the risk that the AI community treats privacy and data protection as a well-defined “box” to be “ticked” (OECD, 2023[16]) may lead to underestimating the role of privacy in addressing many of the human rights impacts resulting from AI.

Effective coordination requires a common understanding of the terminology in each domain. The basic concepts, tests, and rules that policymakers, and in particular regulators, use in the privacy space, and the existing obligations they give rise to, are therefore important for the AI policy community to know. This is all the more true as the AI policy community is itself moving towards an era where AI regulation is implemented in practice and may overlap with and complement existing privacy and data protection rules.

1 Generative AI: a catalyst for collaboration on AI and privacy

The need for closer coordination between the AI and privacy communities has been identified for some time. But this need has become even more evident, and urgent, with the emergence of generative AI systems, including language models, that generate various forms of content (e.g. text) based on patterns found in vast volumes of training data. While generative AI creates new opportunities across industries and sectors, including in code development, the creative industries and arts, education, healthcare, and more (OECD, 2023[13]), this technology also raises new risks, as well as amplifying existing ones, including discrimination, polarisation, opaque decision-making, or potential social control.

The opportunities and risks of generative AI for privacy

The OECD has contributed to analytical work and raising awareness around the opportunities and risks posed by AI systems in relation to privacy and data protection, including those posed by generative AI, through its paper on “Advancing accountability in AI: Governing and managing risks throughout the lifecycle for trustworthy AI” (OECD, 2023[14]) and its “Framework for the Classification of AI Systems” (OECD, 2022[17]). Both the AI system lifecycle and the classification of AI systems can provide useful structure for discussion, as the nature of privacy challenges will vary based on the phase of the lifecycle and the type of AI system at stake. Some of these opportunities and risks are explored below.

AI training techniques and other tools may bring new opportunities for enhancing privacy (Privacy Enhancing Technologies)

While there are many questions around possible threats to privacy raised by AI, emerging technology applications also bring new opportunities for enhanced privacy protection (OECD, 2024[1]). These reinforce recent and ongoing OECD work on emerging Privacy-Enhancing Technologies (PETs) (OECD, 2023[8]). PETs refer to a range of digital technologies and techniques that enable the collection, processing, analysis, and sharing of information while safeguarding data confidentiality and privacy. Although many emerging PETs are still in their early stages of development, some of them hold immense potential to advance privacy-by-design principles in AI and foster trust, including with regard to data sharing and re-use across organisations.

For example, researchers are developing different encrypted data processing tools which allow data to remain encrypted while in use and can thus help enhance privacy at various stages throughout the AI system lifecycle (OECD, 2023[18]). Such techniques include homomorphic encryption and trusted execution environments (TEEs), where actors along the AI lifecycle are not able to view the underlying data without permission (O’Brien, 2020[19]; Mulligan et al., 2021[20]).

Other techniques allow executing analytical tasks upon data that are not visible or accessible to those executing the tasks (federated and distributed analytics). For instance, federated learning enables developers to train a model using data within their own network; the resulting local models are then transferred to a central server and combined into an improved model that is shared back with all the users (MIT, 2022[21]). Federated learning solutions are beginning to be implemented in healthcare, showing positive results. That said, ensuring the accessibility of health data for both primary and secondary use purposes remains critical for the development and effective use of AI in healthcare, making it a crucial policy concern.
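As a minimal sketch of the federated mechanics described above, the following Python example (assuming numpy) runs federated averaging over two simulated clients: each client fits a linear model on its own private data, and the server only ever sees and averages the returned weights. All function and variable names are illustrative; real deployments add secure aggregation, communication protocols and additional privacy safeguards that this toy omits.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training pass: plain gradient descent on local data
    for a linear model. The raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One round of federated averaging: each client trains locally and the
    server averages the returned weights, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients])
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.average(local_ws, axis=0, weights=sizes)

# Two clients with private datasets; only model weights are exchanged.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, -1.0] without ever pooling the raw records
```

Only the weight vectors cross the network in each round, which is what allows collaborative training without centralising the underlying data.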

Other techniques increase privacy protections at both the training and use stages by altering the data, by adding “noise” or by removing identifying details. Among such “data obfuscation techniques”, “differential privacy” algorithms ensure that the output of the AI system changes only minimally when a single point of data about an individual is added to or removed from the training dataset (Harvard, 2024[22]).
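A common building block behind such guarantees is the Laplace mechanism, sketched below in Python (numpy assumed) for releasing a differentially private mean. Clipping bounds how much any single individual can shift the statistic, and calibrated noise masks that remaining influence. The function name, bounds and epsilon value are illustrative, and the sketch assumes a fixed, public record count; it is not a production-grade implementation.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Release a mean via the Laplace mechanism.

    Each value is clipped to [lower, upper], so with a fixed, public count n,
    changing one individual's record moves the mean by at most
    (upper - lower) / n. Laplace noise with scale sensitivity / epsilon
    then masks any single individual's influence on the output.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    return clipped.mean() + rng.laplace(0.0, sensitivity / epsilon)

ages = [31, 47, 25, 52, 39, 61, 28]
print(dp_mean(ages, lower=18, upper=90, epsilon=1.0))  # noisy, private mean
```

Smaller epsilon values add more noise and give stronger privacy; the same accounting idea underlies differentially private model training, where the noise is added to gradient updates instead of a released statistic.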

The technique of synthetic data has also attracted significant interest as a PET approach. Synthetic data are generated via computer simulations, machine-learning algorithms, and statistical or rules-based methods, while preserving the statistical properties of the original dataset.2 They can be used to train AI when data are scarce or contain confidential or personally identifiable information. Examples include datasets on minority languages; training computer vision models to recognise objects that are rarely found in training datasets; or data on different types of possible accidents in autonomous driving systems (OECD, 2023[23]).

However, challenges remain. Similar to anonymisation and pseudonymisation, synthetic data can be susceptible to re-identification attacks (Stadler, Oprisanu and Troncoso, 2020[24]), and “re-identification is still possible if records in the source data appear in the synthetic data” (OPC, 2022[25]). Furthermore, some research shows that models trained over a high volume of synthetic data can collapse over time (Shumailov, 2023[26]). In other words, while synthetic data can help fill in some gaps and improve knowledge, they cannot be expected to entirely replace real-world data.
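The statistical flavour of synthetic data generation can be illustrated in a few lines of Python (numpy assumed): fit a simple model of the real data, here just its mean and covariance, and sample entirely new records from it. This is a toy; practical generators typically rely on machine-learning models, and, as the caveats above note, outputs still need auditing so that source records do not leak through. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "real" dataset: 1 000 records of two attributes, e.g. age and income.
real = rng.multivariate_normal(
    mean=[40, 55_000],
    cov=[[100, 30_000], [30_000, 4e8]],
    size=1_000,
)

# Fit a simple statistical model, then sample fresh records from it. The
# synthetic rows mimic the mean/covariance structure of the original data
# without copying any individual record.
synthetic = rng.multivariate_normal(
    real.mean(axis=0), np.cov(real, rowvar=False), size=1_000
)
print(real.mean(axis=0).round(1), synthetic.mean(axis=0).round(1))  # similar stats

# Naive leakage audit: flag synthetic rows implausibly close to a real record
# (a serious audit would use stronger membership-inference tests).
dists = np.linalg.norm(real[:, None, :] - synthetic[None, :, :], axis=2)
print("suspiciously close pairs:", int((dists.min(axis=0) < 1e-3).sum()))
```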

Machine unlearning is another emergent subfield of machine learning that would grant individuals control over their personal data, even after it has been shared. Indeed, recent research has shown that in some cases it may be possible to infer with high accuracy whether an individual’s data was used to train a model, even if that individual’s data has been deleted from a database. Machine unlearning aims to tackle this challenge and would give individuals the possibility to withdraw their consent to the collection and processing of their data and to ask for the data to be deleted, even after they have been shared (Tarun, 2023[27]).
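The baseline against which machine unlearning methods are judged can be sketched simply (Python, numpy assumed): retraining from scratch without the forgotten records provably removes their influence, but at full training cost, which is precisely what unlearning research tries to avoid. The model, an ordinary least squares regression, and all names are illustrative.

```python
import numpy as np

def train(X, y):
    """Ordinary least squares; stands in for any trainable model."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def unlearn_exact(X, y, forget_idx):
    """Exact unlearning baseline: drop the individual's records and retrain
    from scratch. Unlearning research aims to reach the same end state at a
    much lower cost than full retraining."""
    keep = np.ones(len(y), dtype=bool)
    keep[forget_idx] = False
    return train(X[keep], y[keep])

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=200)

w_full = train(X, y)
w_forgotten = unlearn_exact(X, y, forget_idx=[0, 1])  # delete two records
print(w_full, w_forgotten)  # the retrained model no longer reflects them
```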

In consideration of the many promises which PETs hold for enabling data sharing and the next-generation data economy model, the OECD is considering additional deliverables and further synergies to highlight their potential. A key focus concerns established and emerging AI-related use cases in the public and private sectors, including health and finance. Future work will also explore how governments and regulators can best incentivise innovation in and with PETs, and discuss how to measure and compare the effectiveness and impact of different techniques.

AI systems, and generative AI in particular, raise privacy concerns

Technical breakthroughs have fuelled the development of generative AI systems so advanced that users may not be able to distinguish between human and AI-generated content. While such developments are impressive on the technological level, the large amounts of data required to train large AI models, including increasing amounts of personal data acquired through various means, raise serious questions around risks to privacy and data protection.

Generative AI poses privacy risks during both its development and deployment stages. Many developers depend on publicly accessible sources for training data, which often include data about individuals shared online. However, just because data is accessible does not automatically mean that it is free to be collected and used to train AI models. The collection of personal data for training AI systems, like any data processing activity, is subject to the privacy principles set forth in the OECD Privacy Guidelines and in data protection laws globally. Among others, these principles require that personal data be obtained through lawful and fair means, with the knowledge of the data subject, and that any further uses of the data are not incompatible with the original purposes. While individuals may have shared their data consenting to another use or uses, these do not necessarily include training AI models (GPA's International Enforcement Cooperation Working Group, 2023[28]).

More recent research shows that generative AI models are actually able to infer personal attributes of the data subject from large collections of unstructured text (e.g. public forum or social network posts) with high accuracy, yet at a low cost (Robin Staab, 2023[29]). This could result in inferences based on gender, race or age data that exacerbate the risk of harmful bias and discrimination. Moreover, some research pre-dating the prominence of generative AI models already suggested (Ahmed Salem, 2018[30]) that de-identification, which has historically been used to find a balance between using data in aggregate and protecting people's privacy, does not scale to big data datasets. In certain situations, it is possible to reconstruct and de-anonymise original training data by analysing the behaviour of a model that includes it. Such concerns are compounded by the inherent lack of transparency in data processing, in possible contradiction with the “Openness Principle” in the OECD Privacy Guidelines and with related information requirements in national laws. Thus, given the capacity of AI models to “memorise” significant volumes of training data, the large language models behind text-based generative AI tools pose a particular risk of collection, use and re-use of personal data without the knowledge of the persons concerned (Hannah Brown, 2022[31]).

AI systems, and generative AI specifically, can also appear to be in tension with individuals' rights to access, correct and, where necessary, delete their personal data (also known as the “Individual Participation Principle”). Where personal data are used to train machine learning models, their deletion or correction can be complicated, for example because they require additional resources to retrain the model. Furthermore, ensuring these rights in the context of generative AI models might be difficult when training data includes unstructured information curated from the Internet. It might be challenging and resource-intensive to identify the data points associated with an individual in an unstructured dataset.
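To see why, consider the first step of any access, correction or deletion request: locating the individual's data at all. The naive search sketched below (the corpus and identifiers are hypothetical) requires a full pass over the corpus and still misses nicknames, misspellings and purely inferred references, which is what makes such requests resource-intensive at web scale.

    import re

    def find_candidate_records(corpus, identifiers):
        """Naive subject-access search: return the indices of documents
        that mention any known identifier (name, email, ...) of the
        requester. Misses paraphrases and inferred references."""
        patterns = [re.compile(re.escape(ident), re.IGNORECASE)
                    for ident in identifiers]
        return [i for i, doc in enumerate(corpus)
                if any(p.search(doc) for p in patterns)]

    corpus = [
        "Forum post by jane.doe@example.org about hiking.",
        "Unrelated article about weather patterns.",
        "Comment thread mentioning Jane Doe's new job.",
    ]
    print(find_candidate_records(corpus, ["Jane Doe", "jane.doe@example.org"]))
    # [0, 2]

Even once candidate records are found, removing their influence from an already trained model is a separate and harder problem (often discussed as “machine unlearning”), which is why deletion requests can cascade into retraining costs.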

Interactions with users and the feedback loops of autonomous self-learning models may lead to a degradation of the model's accuracy and reliability, introducing the risk of “hallucinations” or other forms of misleading content, disinformation, or misinformation. The privacy concern arises from the fact that this new data, generated by making inferences, can reveal personal information that either has not been disclosed by the individual or is inaccurately attributed to the individual. These various forms of misleading content can also result in security vulnerabilities (Solove, 2024[32]), especially when the AI system is deployed in specific contexts, such as law enforcement, medicine, education, or employment. A question that arises then is whether the privacy rights of individuals are adequately tailored to address these concerns ex post. For instance, if a “hallucination” includes inaccurate information, including personal data, generated by AI, do individuals have a right to have their data corrected and/or deleted? And if the identification and deletion of specific data sets from an AI model is extremely complex, both technically and logistically, to the point of rendering the right of rectification not possible in practice, should then the entire AI model, including the personal data in question, be deleted? As this example shows, it is still difficult to fully appreciate both the privacy risks and the consequences of the application of privacy laws to AI models in the current state of the art.

Box 1.1. Real and potential risks associated with AI systems

The OECD has worked to identify real and potential risks associated with AI systems, including generative AI, across its workstreams. Some risks are listed below:
- The amplification of mis- and dis-information at a large scale and scope, particularly through the creation of artificial content that humans mistake for real content;
- AI model “hallucinations” that give incorrect or non-factual responses in a credible way, or the generation of illicit images such as fake child sexual exploitation material (e.g. “fake nudes”);
- Harmful bias and discrimination at an increased scale;
- Risks to privacy and data governance, at the level of training data, at the model level, at the intersection of data and model levels, or at the human-AI interaction level;
- Challenges to transparency and explainability due to the opacity and complexity of large models;
- The inability to challenge the outcome of models; and
- Privacy breaches through the leaking or inferring of private information, among others.

Sources: (OECD, 2023[13]); (OECD, 2023[14]); (Lorenz, Perset and Berryhill, 2023[12]).

Some risks from generative AI are poorly understood but would be very harmful if they materialised, for example leading to systemic, delayed harms such as embedded and perpetuated bias and labour disruptions, among others, and to collective disempowerment (Lorenz, Perset and Berryhill, 2023[12]). Examples include models that display negative sentiment towards social groups, link occupations to gender (Weidinger, 2022[33]) or express bias regarding specific religions (Abid, 2021[34]). While some risks are identified as explicitly relating to privacy, some risks touch on topics that the privacy community already addresses without these risks necessarily being identified as involving “privacy” by the AI community (e.g. explainability, transparency, self-determination, challenging the output of an automated decision-making process, etc.).

Before delving into these issues, it is important to acknowledge that while measures and instruments from both the AI and privacy communities can help mitigate known harms caused by the use of AI systems, they may have limitations in addressing intentional malicious uses of the technology. This highlights the necessity for broader collaboration between both communities to investigate, prevent, and mitigate potential misuses of AI.

Privacy concerns emerging from generative AI: Privacy Enforcement Authorities step in

The privacy and data protection issues raised by generative AI have quickly become a core area of focus for many PEAs. PEAs address these concerns either at the national level or in the context of the regional or global co-operation networks which are now part of their daily operations. Three recent initiatives of networks of privacy regulators must be specifically noted here, whereas an overview of national and regional initiatives is provided further down (section IV).

G7 Roundtable of Data Protection and Privacy Authorities

In June 2023, the G7 Roundtable of Data Protection and Privacy Authorities (“G7 DPA Roundtable”) issued a statement on generative AI (G7, 2023[35]) listing key areas of concern from a privacy and data protection perspective, which include:
- Legal authority for the processing of personal information, particularly that of minors and children, in relation to training models;
- Security safeguards to protect against threats and attacks that can leak personal information originally processed in the database used to train the model;
- Mitigation and monitoring measures to ensure personal information generated by generative AI tools is accurate and non-discriminatory;
- Transparency measures to promote openness and explainability in the operation of generative AI tools;
- Technical and organisational measures to ensure the ability for individuals affected by or interacting with these systems to exercise their rights, e.g. to erasure or not to be subject to automated decisions;
- Accountability measures to ensure appropriate levels of responsibility among actors in the AI supply chain;
- Limiting collection of personal data to only that which is necessary to fulfil the specified task.

The G7 DPAs urged close attention by technology companies to legal requirements and guidance from the DPAs when developing AI systems and services that use generative AI. The G7 DPAs also stressed that privacy and other human rights must be recognised and protected by those who design, develop, and deploy AI products and services, including generative AI. In Italy, shortly after, on 29 January 2024, following its fact-finding efforts, the PEA formally notified the company concerned of the existence of GDPR breaches (Garante per la protezione dei dati personali, 2024[36]). The Office of the Privacy Commissioner of Canada (OPC), the Office of the Information and Privacy Commissioner for British Columbia, the Commission d'accès à l'information du Québec, and the Office of the Information and Privacy Commissioner of Alberta are also jointly conducting an investigation into OpenAI's ChatGPT (Office of the Privacy Commissioner of Canada, 2023[37]).

Web scraping statement of the GPA's International Enforcement Co-operation Working Group

On 24 August 2023, twelve international PEAs, members of the International Enforcement Co-operation Working Group (IEWG) of the Global Privacy Assembly (GPA), adopted a joint statement to address the issue of data scraping on social media platforms and other publicly accessible sites (GPA's International Enforcement Cooperation Working Group, 2023[28]). The group notes that web scraping raises significant privacy concerns as these technologies can be exploited for purposes including monetisation through reselling data to third-party websites, including to malicious actors, private analysis or intelligence gathering. The joint statement: i) outlines the key privacy risks associated with data scraping; ii) sets out how social media companies (SMCs) and other websites should protect individuals' personal information from unlawful data scraping to meet regulatory expectations; and iii) sets out steps that individuals can take to minimise the privacy risks from data scraping.

Global Privacy Assembly's Resolution on Generative Artificial Intelligence Systems

These concerns were reiterated by the 45th Session of the GPA on 20 October 2023 in a Resolution on Generative Artificial Intelligence Systems (GPA, 2023[15]). In this landmark Resolution, the GPA endorses a series of data protection and privacy principles as core elements for the development, operation, and deployment of generative AI systems, including:
- Lawful basis for processing;
- Purpose specification and use limitation;
- Data minimisation;
- Accuracy;
- Transparency;
- Security;
- Privacy by Design and Default;
- Rights of data subjects;
- Accountability.

Among other findings, the GPA emphasised the difficulty of reconciling the data minimisation and purpose limitation principles, which are core elements of privacy policy globally, with the massive and indiscriminate collection of training data for machine learning.

Generative AI enhances the urgency to work on the interplay between AI and privacy regulations

As seen above, the community of PEAs has been active in responding to privacy risks in generative AI, emphasising an urgent need to reflect on the interplay between privacy frameworks and rules on trustworthy AI in general. Against the backdrop of actions from the privacy and data protection community on the one hand, and the rise in proposed AI regulations like the EU AI Act on the other, comes the question of how these frameworks will fit together, namely as proposed AI legislation will enter into force amid well-established privacy and data protection rules and enforcement action by regulators. Co-regulatory approaches could be envisioned, when appropriate, to avoid duplications and overlaps of requirements under different regulatory frameworks.

In the EU specifically, the EU AI Act amplifies the need for clarity on the interplay of regulations with the General Data Protection Regulation (GDPR). For instance, the GDPR's protections for individuals against forms of automated decision making (ADM) and profiling have been applied by courts and regulators alike for several years, ranging from detailed transparency obligations to applying the fairness principle to avoid situations of discrimination, and strict conditions for valid consent in ADM cases (Barros Vale, 2022[38]). Other studies illustrate how some PEAs take fundamental rights into account in their decisions, which is a relevant aspect of the assessment obligations outlined in many of the AI regulations currently under discussion (Grazia, 2017[39]). While such precedents present opportunities to enhance or clarify obligations under the AI Act through the lens of ADM jurisprudence, the overlap can also be a source of confusion for policy- and lawmakers and generates some lack of clarity regarding compliance. This is especially true for actors such as SMEs that are equipped with limited resources to understand the interplay of AI and data protection and privacy rules in practice. The numerous enforcement actions already undertaken by PEAs in the generative AI domain globally show that large portions of AI practices already fall under intense regulatory scrutiny, making it urgent to bring the two communities together, so as to ensure that the policy objectives underlying the different legislations are all satisfied.

At the same time, to fully unlock the potential of generative AI, substantial, diverse, and relevant data is essential for training models effectively. Greater access to data enables AI models to perform better, as they have the ability to learn from examples in an iterative process. In parallel, varied and high-quality data (e.g. accuracy, completeness, consistency, reliability, validity, timeliness) is key to building trustworthy algorithms and enhancing AI model performance. In this cycle, smooth and efficient data flows are crucial for optimal AI model functioning, ensuring a continuous exchange of information for the ongoing learning and refinement of these models. In addition, the availability of training data from various sources, particularly from as many regions or countries as possible, is essential to contain the risk of bias in AI systems, especially for models used across borders.

In this regard, an increasingly frequently expressed concern is that more barriers to cross-border data flows are being adopted globally, covering both personal and non-personal data, raising a risk that data may become less accessible (or potentially restricted to specific regions or countries) for the development of AI-driven tools. These obstacles are not fundamentally new and have been analysed in part in the context of the work of the OECD to advance international policy discussions to harness the full potential of cross-border data flows under the banner of Data Free Flow with Trust (DFFT) (OECD, 2023[40]). For this, prioritising international collaboration and aligning policy responses to foster trust and facilitate data exchange becomes essential. By capitalising on commonalities and areas of convergence, the privacy and AI communities can address obstacles that might impede the widespread, lawful, and successful deployment of AI.

Relatedly, for data to be available and have the most effective impact, it needs to be appropriately accessible. That means encouraging better coordination, as well as access to and sharing of data between organisations in the public sector and the private sector. These aspects have been addressed in the context of the work leading to the adoption of the OECD Recommendation on Enhanced Access and Sharing of Data (OECD, 2021[7]). In this context, an increasing number of initiatives are emerging to ensure data availability while protecting privacy, which are worth exploring. These include, for example, regulatory sandboxes and/or the involvement of data intermediaries (such as data trusts or data cooperatives) to foster responsible data sharing within the AI economy, which are provided for in a growing number of privacy legislations (such as Singapore's PEA) and are the subject of complementary workstreams at the OECD.

2. Mapping existing OECD principles on privacy and on AI: key policy considerations

The OECD AI Principles, set in the 2019 Recommendation on AI and updated in 2024, can be used as an analytical grid for comparison with the privacy principles set in the landmark 1980 Privacy Guidelines, revised in 2013. The OECD AI Principles are divided into two categories: 1) five value-based principles that serve as guidance for governments to develop AI strategies and policies for trustworthy AI, which can also be used by companies and AI developers; and 2) five recommendations to governments for national policies, for AI ecosystems to benefit societies (OECD, 2019[11]). This mapping aims to identify priority areas of co-operation between the AI and the privacy and data protection communities, and begins by analysing key terminology used in the five values-based principles in the OECD Recommendation on AI. Analysis around the five recommendations to governments, which cover actions to foster vibrant AI ecosystems, could be covered in subsequent analysis.

This mapping exercise permits the identification of:
- Policy areas where joint work between the two communities, at the OECD and beyond, could yield strong mutual benefits;
- Policy areas where synergies are low or non-existent; and
- Terminological differences between the two policy communities that could hinder interoperability and coordination.

This mapping exercise refers to established principles of privacy and data protection which have grown out of the OECD Privacy Guidelines. Some of these established principles, such as data minimisation and rights with regard to automated decision making, do not appear explicitly termed as such in the OECD Privacy Guidelines, but have de facto become state-of-the-art in privacy policy through the implementation of the OECD Privacy Guidelines by OECD members.

Table 1. The OECD AI Principles, revised 2024

Five value-based principles for trustworthy, human-centric AI:
1.1 Inclusive growth, sustainable development and well-being
1.2 Respect for the rule of law, human rights and democratic values, including fairness and privacy
1.3 Transparency and explainability
1.4 Robustness, security and safety
1.5 Accountability

Five recommendations to governments for AI ecosystems to benefit societies:
2.1 Investing in AI research and development
2.2 Fostering an inclusive AI-enabling ecosystem (data, compute, technologies)
2.3 Shaping an enabling interoperable governance and policy environment for AI
2.4 Building human capacity and preparing for labour market transformation
2.5 International co-operation and measurement on trustworthy AI

Note: The 2019 OECD AI Principles were updated by the OECD Ministerial Council in May 2024.

The methodological choice to map privacy considerations onto the OECD AI Principles in no way subsumes one framework into the other. Since this analysis focuses on privacy in the context of AI systems, the OECD AI Principles appeared as an appropriate starting point for comparative analysis. Moreover, an increasing number of data protection laws now include additional provisions relevant to AI (e.g. limitations on automated decision making, explicit incorporation of privacy by design/default principles) and must be considered when analysing the intersection of different frameworks at the national level.

The five values-based principles in the OECD AI Recommendation

The OECD AI Recommendation promotes the use of AI that is innovative and trustworthy and that respects human rights and democratic values, including privacy and data protection. The Recommendation on AI includes a definition of an AI system, which is currently used in AI frameworks around the world, such as the EU AI Act and the Council of Europe's Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. According to this definition, an AI system is “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment” (OECD, updated 2023[41]).

Key policy considerations from mapping AI and privacy principles

The OECD's work on the classification and accountability of AI systems (OECD, 2022[17]; OECD, 2023[14]) has yielded a clear understanding of the AI system lifecycle, a framework that can help both privacy and AI communities analyse risks and mitigation measures in a uniform manner. On the privacy side, the privacy community has advanced methods for evaluating the impacts of AI on privacy rights in specific cases, and how different rights and collective interests, including the utility of AI systems for society, can be balanced in a manner respectful of privacy principles and the rule of law. For certain generative AI risks, particularly those relating to systemic, delayed harms to society, or the risks of autonomous generative AI agents (Lorenz, Perset and Berryhill, 2023[12]), privacy and AI communities share a level of uncertainty regarding how to address such issues, and thus have a high interest in collaborating as concrete situations, risks, and policy solutions emerge. Table 2 outlines preliminary findings on the likely benefits and relevance of coordination between AI and privacy policy communities, based on the five values-based AI principles in the OECD Recommendation on AI.

Table 2. Overview of similarities and relevant areas of coordination between AI and privacy policy communities

Principle 1.1 (Inclusive growth, sustainable development and well-being):
- Weighing economic and social benefits of AI against risks to privacy rights.
- Promoting data stewardship for trustworthy AI in pursuit of beneficial outcomes for people and the planet, where data controllers act with the benefits of data subjects in mind.
- Coordination on considerations for data required to address environmental harms, displacement of labour, and other economic impacts (positive and negative) of AI.

Principle 1.2 (Respect for the rule of law, human rights and democratic values, including fairness and privacy):
- Identifying, assessing and treating AI risks to privacy and data protection.
- Learning from each community's terminology. The principles, concepts, and rules that structure the policy discourse in the privacy realm are important for the AI policy community to know; they include concepts like personal data/information, lawfulness of processing, the purpose and use limitation principles, personal data minimisation, privacy by design/by default, data retention, personal data accuracy, and data subjects' rights (e.g. individual control).
- Existing alignment between AI data preparation for processing (which involves data cleaning and deduplication to ensure the accuracy of the AI model) and the data quality and accuracy privacy principles.
- Harmonising requirements and methodologies of Human Rights Impact Assessments and Privacy Risk Assessments, which are a pillar of accountability under both approaches, when applicable.

Principle 1.3 (Transparency and explainability):
- Coordination on transparency and explainability towards data subjects (the persons impacted by AI processing), which is a long-standing focus of privacy and data protection communities, in co-operation with consumer protection communities.
- Interdisciplinary work on the connection between existing legal requirements for the explainability of AI systems set out in frameworks like the EU GDPR and the current state of the art in the field of explainable AI.

Principle 1.4 (Robustness, security and safety):
- Clarification that both AI risk-based regulation and global data protection rules should be considered together to ensure the effective integration of privacy principles and considerations throughout the AI system lifecycle.
- Coordination on data security (e.g. confidentiality, integrity) and Privacy Enhancing Technologies (PETs).

Principle 1.5 (Accountability):
- Incorporating AI system lifecycle methodology in privacy management programmes (PMPs). AI experts can leverage the work done in the field of privacy accountability by the privacy community, including at the OECD (see the implementation guidance on the OECD Privacy Guidelines).
- Harmonising oversight of Human Rights Impact Assessments and Privacy Risk Assessments when applicable.

Note: This table represents preliminary analysis and is not exhaustive.

Differences in terminology between the two policy communities can hinder mutual understanding and further coordination. Table 3 provides a list of key concepts that often have different meanings between AI and privacy policy communities, in order to promote awareness of these possible variations, and thus improve mutual understanding between the communities and optimise co-operation actions.

Table 3. Key concepts with different meanings between AI and privacy policy communities

Fairness: For AI policy communities, fairness often refers to outcomes from the application of AI (such as predictions, recommendations, or decisions) that are based on algorithms and datasets with consideration for bias, for example through mitigating algorithmic or dataset bias for specific groups (e.g. those categorised by class, gender, race, or sexual orientation). For privacy policy communities, fairness mainly refers to reasonable and transparent practices, respectful of consumers' and citizens' interests. It covers the prohibition of deceptive or misleading practices at the time of data collection, the obligation to handle people's data only in ways they would reasonably expect, and the obligation to consider how the processing may negatively affect the individuals concerned. Discrimination is one form of unfairness, but it is not the only one.

Transparency and explainability: For AI policy communities, transparency, explainability and interpretability have different meanings but overall refer to the good practice of AI actors providing accessible information to users to foster a general understanding of AI systems, making stakeholders aware of their interactions with AI systems, enabling those affected by an AI system to understand the outcome, and enabling those affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors that served as the basis for the prediction, recommendation or decision. For privacy policy communities, transparency is a positive legal obligation to inform individuals, from whom personal data are collected, no later than at the time of data collection, of the use purposes for which consent is requested, the subsequent use then being limited to the fulfilment of those purposes. It also includes notifying these individuals about the data collection and use purposes when the processing of the data is based on another legal basis, so that individuals may consequently exercise their privacy rights, permitting them to exercise human agency, including free choice, optionality and redress.

Privacy and data protection: While the AI policy communities recognise privacy as a human right, “privacy” and “data protection” are commonly used in a narrower sense to refer to personal information included in datasets for training AI, and to the risks related to the loss of personal data through leakage or inference by AI models/systems. For privacy policy communities, privacy and data protection are part of the larger, overall human rights and consumer protection legal fabric, covering threats to fairness, lack of transparency, and threats to data security and robustness, to be addressed through data subjects' rights, accountability, and regulatory intervention. From this perspective, privacy is not only an individual right but also a social value.

Note: This table represents preliminary analysis and is not exhaustive.

Overview of possible commonalities and divergences in AI and privacy principles

Principle 1.1: Inclusive growth, sustainable development and well-being

Sometimes referred to as “People and Planet”, this category covers a wide range of interests and risks, including harms to the environment, impact on jobs, and harms to vulnerable populations. In addition to the risks, Principle 1.1 focuses on the positive effects of AI on society, including improving healthcare and fighting climate change. While collaboration between privacy and AI communities can lead to positive environmental outcomes, such as smart cities initiatives like traffic optimisation or waste management, there appear to be limited synergies for collaboration between privacy and AI policy communities when it comes to protection of the environment. While crucial, environmental protection largely falls outside the scope of privacy and data protection regulation, except when cross-referencing demographic data with environmental data to understand the impact of climate change on various groups. A discussion on the overlaps between data protection, freedoms and the environment seems to be underway, for the moment in a more prospective than normative manner (CNIL, 2023[42]). As well, the Global Privacy Assembly, for instance, mentions environmental harms as among those caused by the indiscriminate collection of training data in violation of the data minimisation principle (GPA, 2023[15]).

The positive effects of AI on economic well-being, for example by decreasing the costs of products and services through, for instance, the automation of specific tasks, or by improving health-related outcomes, are also not the direct focus of data protection regulation. Nevertheless, achieving these benefits can require striking a balance between protection of privacy and other human rights. For example, increased accuracy of AI systems will create better predictions leading to better health outcomes. But higher accuracy may come with trade-offs with privacy and data protection rights, as greater amounts of and higher quality training data are needed. PEAs and courts have been dealing with these trade-offs for many years, applying constitutional mechanisms to resolve conflicts between competing rights and interests in specific cases, using, for example, the “proportionality test” in some OECD member countries. This balancing exercise is also recognised in certain legislations. The GDPR (Art. 1(3)), for instance, notes that “the free movement of personal data should not be restricted nor prohibited for reasons connected with the protection of natural persons with regards to the processing of personal data” (European Union, 2016[43]).

Synergies appear high between privacy and AI communities in discussing how different collective interests, such as better public health or security resulting from AI, can be balanced against increased interference by AI systems with certain human rights. A prime example of this pathway is the collective effort to leverage data-driven tools, including AI, in the fight against the COVID-19 pandemic while respecting data protection and privacy principles.

Principle 1.1 also includes harms to vulnerable populations. Economic displacement is not a direct focus of data protection law, and the effect of AI on jobs is outside the direct scope of data protection regulation. However, common privacy tools such as privacy management programmes (PMPs) require data controllers to develop appropriate safeguards based on privacy risk assessment, and “risk” is intended to be a broad concept, taking into account a wide range of possible harms to individuals (OECD, 2023[44]). As well, the protection of vulnerable groups' personal information, in particular children's, is a core focus of the WPDGP (OECD, 2021[45]; OECD, 2021[46]; OECD, 2021[47]; OECD, 2022[48]). It is furthermore anticipated that the concept of digital vulnerability will be further discussed with a prospective outlook. This is the case for the monitoring of the elderly, the disabled, or patients, for instance as brain-computer interfaces (BCIs) are being used to modulate brain activity for cognitive disorder management, or in relation to the increased use of smart wearable devices to monitor and detect occupational physical fatigue of employees in the workplace.

Key policy consideration: Principle 1.1 - Inclusive growth, sustainable development and well-being

Most of the interests and risks covered by Principle 1.1 are not the core focus of privacy and data protection laws. However, when applying privacy and data protection laws, courts and authorities often consider the social benefit of an AI application, including in improving public health or security, when evaluating the proportionality of its interference with privacy and data protection rights. Co-operation between the two communities could be focused on how to provide guidance on balancing AI social benefits and risks, including risks to privacy rights.

Principle 1.2: Respect for the rule of law, human rights and democratic values, including fairness and privacy

Principle 1.2 is divided into three sub-categories: bias and discrimination; privacy and data governance; and human rights and democratic values (OECD, 2023[14]).

Undue bias and discrimination

Bias and discrimination are important risks associated with AI, studied by both the AI community and the privacy community. Generative AI, as currently developed, deployed and used in the absence of guardrails, has amplified these risks due to the massive scale and scope of the application of such systems and of their input training data during their development phase. While the objectives are aligned between the AI community and the privacy policy community, the way bias and discrimination are studied in the two communities differs.

Bias and discrimination as studied by the AI community

Bias and discrimination are heavily studied in AI policy circles, and are cited as major concerns for generative AI, due to the increase in scale and scope of potential algorithmic bias resulting from foundation models and the massive training data they use (OECD, 2023[13]; Lorenz, Perset and Berryhill, 2023[12]). The work of the AI community has brought to light the many sources of algorithmic bias (OECD, 2023[14]), including: historical bias, representation bias, measurement bias, methodological and evaluation bias, monitoring bias and skewed samples, feedback loops and popularity bias (OECD, 2023[14]). AI scholars have also identified the characteristics of, and incompatibilities between, different forms of fairness or non-discrimination, including equality of opportunity, equality of outcome or statistical parity, and counterfactual justice (OECD, 2023[14]): due to these incompatibilities, creating an “un-biased” AI system is extremely challenging.
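To make these incompatibilities concrete, the hedged Python sketch below (toy data; illustrative only, not drawn from the report) computes two of the criteria named above for a binary classifier: the statistical parity difference and the equality-of-opportunity difference. A classifier can satisfy one exactly while violating the other.

    import numpy as np

    def statistical_parity_diff(y_pred, group):
        """Difference in positive-prediction rates between two groups;
        0 means the statistical parity (equality of outcome) criterion holds."""
        return y_pred[group == 0].mean() - y_pred[group == 1].mean()

    def equal_opportunity_diff(y_true, y_pred, group):
        """Difference in true-positive rates between two groups;
        0 means the equality of opportunity criterion holds."""
        tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
        return tpr(0) - tpr(1)

    # Toy labels and predictions for two groups with different base rates.
    y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
    y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print(statistical_parity_diff(y_pred, group))         # 0.5
    print(equal_opportunity_diff(y_true, y_pred, group))  # 0.0

In this toy example the classifier satisfies equality of opportunity exactly (both groups have the same true-positive rate) yet fails statistical parity, because the two groups have different base rates; this is the kind of incompatibility that the AI fairness literature formalises.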

Bias and discrimination as studied by the privacy community

PEAs examine illegal discrimination as one of the negative effects that may render the processing of personal data unfair, and thus unlawful. The lawfulness of processing personal data is evaluated in light of, among other things, the risk of discrimination that may ensue. The concern for discrimination is raised as one of the most important issues for PEAs when addressing the enforcement challenges posed by emerging technologies (OECD, 2021, p. 48[9]). To prevent discrimination based on the use of personal data, in many countries certain types of personal data have been designated as sensitive, and therefore their permitted uses may be more limited and even prohibited. This may be the case with age discrimination in employment decisions, gender discrimination in credit decisions, or political affiliation in government services and allowance allocation.

Trustworthy AI requires trustworthy data. The quality of the datasets used is essential for the optimal performance of AI systems. Throughout the data collection process, there is a potential for the inclusion of socially constructed biases, inaccuracies, errors, over- or under-representation, and mistakes. Consequently, there is a very high incentive for both AI actors and privacy advocates to seek high-quality datasets: AI actors are keen to work with the most accurate data possible to maintain trustworthiness in the outcomes and high model performance, while individuals strive to ensure that the AI model will not produce negative outcomes based on incorrect, incomplete, or insufficient data. In the OECD Privacy Guidelines, the data quality principle refers to the relevance of personal data in relation to the purposes for which they will be used. The more relevant the data, the higher its quality. Data relevance is crucial for generative AI tools, particularly concerning the potential inclusion of exogenous false or misleading information in the data (OECD, 2023[49]). Accuracy, completeness and timeliness are also important elements of the data quality concept. Complete and up-to-date information not only increases the quality of the data but also improves its accuracy, mitigating the risk of harmful bias.
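As an illustration of how such data quality dimensions can be operationalised in practice, the hedged sketch below (hypothetical field names and thresholds, not drawn from the Guidelines) computes simple completeness and timeliness indicators over a set of records.

    from datetime import date

    def quality_report(records, required_fields, max_age_days):
        """Two simple data quality indicators:
        completeness = share of records with all required fields present;
        timeliness   = share of records updated within max_age_days."""
        today = date.today()
        complete = sum(all(r.get(f) is not None for f in required_fields)
                       for r in records)
        timely = sum((today - r["last_updated"]).days <= max_age_days
                     for r in records)
        n = len(records)
        return {"completeness": complete / n, "timeliness": timely / n}

    records = [
        {"name": "A", "income": 42_000, "last_updated": date(2024, 5, 1)},
        {"name": "B", "income": None,   "last_updated": date(2020, 1, 1)},
    ]
    print(quality_report(records, ["name", "income"], max_age_days=365))

Indicators like these say nothing about relevance, which, as noted above, is the core of the data quality principle; they are at most auxiliary measurements that both communities can share.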

Privacy management programmes (PMPs) include risk assessments that evaluate, among other things, the probability and potential impact of discrimination (OECD, 2023[44]). In jurisdictions, such as the United States, that address privacy and data protection through the lens of consumer protection, the discriminatory effects of an AI system could result in a finding that the system is an “unfair practice” prohibited by law (FTC, 2022[50]). Discrimination is also addressed by other, non-privacy, laws, including employment, education, and banking laws, which complement the legal data protection landscape. Nevertheless, most data protection enforcement authorities interpret their role as being, at least in part, to prevent discrimination, including outcomes resulting from AI systems that process personal data.

Because of the different but complementary focuses of the AI community and the privacy policy community around discrimination, coordination between both communities on the discriminatory effects of AI systems would be highly beneficial. Privacy policy communities can learn from the many nuanced approaches to non-discrimination, such as fairness considerations, being studied by AI experts, while AI policy communities can benefit from the experience of data protection communities in evaluating and sanctioning data controllers whose systems generate discriminatory outcomes. The possibilities for synergies are thus high in combining the AI community's expertise in approaches to bias with the privacy communities' experience in evaluating the discriminatory effects of AI systems in concrete cases.

Terminology differences: different meanings of fairness

Optimal coordination of privacy and AI approaches to non-discrimination can be hampered by definitional and terminological issues.

Fairness in AI discourse

For AI communities, fairness is often understood to refer to outcomes from the application of AI (such as predictions, recommendations, or decisions) that are based on algorithms and datasets with consideration for bias, for example through eliminating algorithmic or dataset bias for specific groups (e.g. those categorised by class, gender, race, or sexual orientation). Bias is a systematic (as opposed to a random) error, associated with certain categories of data inputs. For example, a facial recognition algorithm may make more errors for people wearing glasses than for people without glasses. A difference in error rates between people who wear glasses and people without glasses can be problematic from a technical and operational perspective and might create discrimination or unequal treatment, but would not generally raise legal or ethical concerns, because glasses are not a protected attribute. By contrast, a difference in error rates between men and women, or between people with light skin and dark skin, would be considered unfair, because the bias (systematic error) is linked to an attribute which, in the particular legal and cultural context, is associated with population groups that have been historically disadvantaged. Protected attributes may be absent from the input data, but inferred from other apparently neutral attributes. For example, a postal code may become a proxy for ethnic origin if the postal code corresponds to a neighbourhood where many inhabitants share the same ethnic origin.
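The proxy effect is straightforward to demonstrate numerically. In the hedged sketch below (synthetic data and hypothetical attribute names), a decision rule never sees the protected attribute, yet produces very different outcome rates per protected group because the postal code is correlated with it.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic population: the postal code mirrors the protected
    # attribute 80% of the time and is random otherwise.
    protected = rng.integers(0, 2, n)
    postal = np.where(rng.random(n) < 0.8, protected, rng.integers(0, 2, n))

    # An apparently neutral decision rule that uses only the postal code...
    decision = postal

    # ...still yields disparate outcome rates across protected groups
    # (roughly 0.10 for group 0 versus 0.90 for group 1).
    for g in (0, 1):
        rate = decision[protected == g].mean()
        print(f"group {g}: positive-decision rate = {rate:.2f}")

Because the postal code carries most of the information about the protected attribute, simply removing the protected attribute from the input data does not prevent disparate outcomes, which is why both communities scrutinise apparently neutral features.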

The work on fairness in the AI community has yielded multiple identified sources of bias (systemic; computational and statistical; human-cognitive) (Tabassi, 2023[51]), as well as a realisation that bias can almost never be completely eliminated. Attempts by data scientists to transform fairness into more mathematical properties have helped open debates among legal scholars on non-discrimination laws, in particular on how discrimination should be measured and what should be considered illegal discrimination (Wachter, 2022[52]). Thus, a link exists between AI “fairness” and legal debates on non-discrimination laws.

Further confusion arises when fairness in the AI context is translated into different languages. In AI communities, the French term for fairness is “équité”, which also means “equity” in English. Yet equity in English does not just mean absence of discrimination. It also means conduct that respects good faith and the spirit, not just the letter, of the law. Equity in English can also refer to equal opportunity for disadvantaged groups, allowing in some cases for compensating measures to redress structural disadvantages. Equity, which allows for compensating measures, is often contrasted with equality, which refers to strict equal treatment for each individual, and hence without compensating measures. From an AI standpoint, both equity and equality are variants of fairness in the non-discrimination sense.

Fairness in privacy discourse

In the privacy policy community, fairness is conduct that is consistent with reasonable expectations. Often fairness is best defined by what it is not, i.e. “unfair” practices. Unfair practices are generally illegal under consumer protection law, competition law, privacy law, and non-discrimination law. Unfair practices may involve deceit and lack of transparency (Malgieri, 2020[53]). For the privacy policy community, transparency is a necessary element of fairness. The prohibition of unfair practices may also target imbalances in economic power. This is because fairness in privacy is not solely concerned with the mathematical distribution of resources or outcomes but also considers context (including human decision-making) and other qualitative aspects, such as power imbalances between individuals and those who process their data (ICO, 2023[54]). Therefore, fairness may require imposing extra duties on more powerful actors to ensure that their dealings with less powerful actors reflect more balance (Clifford and Ausloos, 2018[55]). Linked to power imbalances is the individual participation principle of the OECD Privacy Guidelines, which empowers individuals to challenge data relating to them. In certain countries, “procedural fairness”, sometimes referred to as “due process”, may also impose procedural constraints on government actions (Mulligan et al., 2019[56]). This could entail the right to discuss, or challenge, a particular aspect of data processing with a human representative of the other party, and eventually appeal the matter to an independent and impartial decision-maker (Mulligan et al., 2019[56]). In this respect, fairness is linked to respect for human rights such as human dignity, individual autonomy, optionality, and redress.

In matters of privacy and data protection, fairness is sometimes equated to processing that is consistent with the reasonable expectations of the data subject, i.e. processing that would not “surprise” the data subject (Malgieri, 2020[53]). It requires data controllers to handle personal data in a manner that aligns with individuals' reasonable expectations and to avoid using it in any way that could adversely affect them (Datatilsynet, 2018[57]). Practices that would lead to unlawful discrimination would also be considered unfair (CNIL, 2017[58]). Fairness is also associated with the concept of “good faith” (Malgieri, 2020[53]) and with “loyauté” (loyalty) in French. Loyalty and good faith require avoiding conduct that violates ethical norms, and respecting the spirit, not just the letter, of laws (Mulligan et al., 2019[56]; Malgieri, 2020[53]).

To promote mutual understanding, AI and privacy communities should be aware that definitions of “fairness” vary between them.

Key terminology and concepts: privacy and data governance

There is a need for, and benefits of, coordination between AI and privacy communities on the concepts of “privacy and data governance” in the AI Principles. This element of human-centred values and fairness targets core data protection and privacy
