WHITE PAPER

Rising Global Regulation for Artificial Intelligence

Across multiple continents and industries, artificial intelligence ("AI") is a topic of intense focus by governments, research institutions, investors, and corporations, from start-ups to well-established industry players. As technology and regulatory frameworks continue to evolve rapidly, AI legal issues are emerging as a key topic in transactional, litigation, and regulatory compliance contexts. This White Paper outlines key AI regulatory issues and questions that are worthy of consideration by private-sector leaders and in-house counsel.

December 2022

Jones Day White Paper

TABLE OF CONTENTS

Introduction
What is AI?
How is AI Regulated?
Developing a Data Ecosystem
  European Union
  United States
  China
  Japan
Market Access
  European Union
  United States
  China
  Japan
AI Liability
  European Union
  United States
  China
  Japan
Conclusion: Key Considerations for the Private Sector

INTRODUCTION

Across a wide range of industries, including advertising, banking, telecommunications, manufacturing, transportation, life sciences, waste management, defense, and agriculture, the use of AI and interest in its diverse
applications are steadily increasing. Businesses are turning to AI systems, and the related technology of machine learning, to increase their revenue, quality, and speed of production or services, or to drive down operating costs through automating and optimizing processes previously reserved to human labor. Government and industry leaders now routinely speak of the need to adopt AI, maintain a "strategic edge" in AI innovation capabilities, and ensure that AI is used in correct or humane ways. Yet the recent surge of interest in AI sometimes obscures the fact that it remains ungoverned by any single common body of "AI law," or even an agreed-upon definition of what AI is or how it should be used or regulated. With applications as diverse as chatbots, facial recognition, digital assistants, intelligent robotics, autonomous vehicles, medical image analysis, and precision planting, AI resists easy definition, and may implicate areas of law that developed largely before AI became prevalent. Because it is an intangible process that requires technical expertise to design and operate, AI can seem mysterious and beyond the grasp of ordinary people. Indeed, most lawyers or business leaders will never personally train or deploy an AI algorithm, although they are increasingly called on to negotiate or litigate AI-related issues.

This White Paper seeks to demystify AI for nontechnical readers, and reviews the core legal concepts that governments in several key jurisdictions (the European Union, China, Japan, and the United States) are developing in their efforts to regulate AI and encourage its responsible development and use. Although AI legal issues facing companies will often be specific to particular products, transactions, and jurisdictions, this White Paper also includes a checklist of key questions that in-house counsel may wish to address when advising on the development, use, deployment, or licensing of AI, either within a company or in the transactional context. Ultimately, governments are implementing divergent and sometimes conflicting requirements. This scenario, which calls for patient review and a strategic perspective by regulated parties, rewards an ability to explain technical products to regulators in clear, nontechnical terms.

WHAT IS AI?

AI comprises complex mathematical processes that form the basis of algorithms and software techniques for knowledge representation, logical processes, and deduction. One core technology behind AI is machine learning, in which AI models can be trained to learn from a large amount of data to draw correlations and patterns, allowing such models to be used, for example, in processing and making autonomous decisions. Key to each AI is its "objective function": the goal or goals that its developers have designed it to achieve. This objective function can vary widely, from identifying molecules with likely antibiotic properties, to predicting where and when inputs in a transportation or manufacturing system will be needed, to spotting potential safety or security threats
, to generating text, sound, or images that meet certain specifications. To learn to achieve this objective function, AI models can be trained using large data sets, with varying degrees of human oversight and feedback, learning to identify and make predictions based on patterns, likenesses, and fundamental attributes, including ones that humans may never have conceptualized or perceived. The AI is then prompted to apply the model it has honed during training to a real-life situation, where it executes its task. This latter activity is often referred to as "inference."

AI components typically comprise data (both training data for training and raw data for inference) and software processes to execute complex algorithms. When trained and applied correctly, AI-based technology can unlock tremendous gains in productivity, enabling results or insights that would otherwise require prohibitively long periods of time to achieve by means of human reason alone, or even by humans using traditional computing techniques. In some cases, AI can be applied to replace or augment "rote" tasks that a person would otherwise perform much more slowly. In other cases, AI can generate text (including computer code, or responses to basic customer queries), sound, or images (including aspects of architectural or mechanical designs) that either replace the need for human input or serve as a first draft for human review. Often, a human mind, informed by AI inputs, analysis, and recommendations, can home in faster on a key range of options (pharmaceutical, strategic, etc.) warranting closer study.

In many industries, integrating AI-based technology is considered the key to securing long-term competitiveness. Most industrial countries have already started the race for world market leadership in AI technologies through various means, such as public funding. In addition, governments seek to support AI's growth through a legislative framework that allows the technology to develop and optimize its potential. However, as many governments and analysts have noted, the benefits of AI
can contribute to the creation of "echo chambers" that display content based only on a user's previous online behavior, and thereby reinforce their views and interests or exploit their vulnerabilities. AI applications are also increasingly used in objects routinely interacting with people, and could even be integrated in the human body, which can pose safety and security risks.

[Figure: Artificial Intelligence (AI) Components, showing training data, raw data, algorithm, software, an AI training application, a trained AI application, and trial-and-error adjustment.]

Governments seeking to regulate AI aim to build citizen trust in such technology while limiting potentially harmful applications. Yet different governments, and different agencies within the same government, sometimes have different concepts of what constitutes an appropriate manner of training and applying AI. What one authority sees as a feature, another may see as a bug. Further, they, and regulated publics, may disagree on the ideal relative weight to place on key considerations such as privacy, transparency, liberty, and security. As governments apply divergent perspectives to this technically complex (and often inherently multijurisdictional) area, regulated parties
face a complex, sometimes contradictory body of regulatory considerations that are themselves changing rapidly. Training, deploying, marketing, using, and licensing AI, particularly if these activities occur across multiple jurisdictions, increasingly requires a multidisciplinary and multijurisdictional legal perspective.

HOW IS AI REGULATED?

While many laws already apply to AI, ranging from IP protection to competition law and privacy, AI's rapid expansion has alerted legislators worldwide, leading to updates to legal and regulatory frameworks and, in some cases, the creation of entirely new ones. These global legal initiatives generally aim at addressing three main categories of issues.

First, legislation and regulations aim to foster AI deployment by creating a vibrant and secure data ecosystem. Data is required to train and build the algorithmic models embedded in AI, as well as to apply the AI systems for their intended use. In the European Union, AI's hunger for data is regulated in part through the well-known GDPR; additionally, a proposed Data Act facilitating data access and sharing is underway. In comparison, the United States has taken a more decentralized approach to the development and regulation of
AI-based technologies and the personal data that underpins them. Federal regulatory frameworks, often solely in the form of nonbinding guidance, have been issued on an agency-by-agency and subject-by-subject basis, and authorities have sometimes elucidated their standards in the course of Congressional hearings or agency investigations rather than through clearly prescriptive published rules. The People's Republic of China, for its part, has expanded its data security and protection laws, with a particular emphasis on preventing unauthorized export of data. While the central government promulgates generally applicable laws and regulations, specialized government agencies have provided regulations specific to their respective fields, and local governments are exploring more efficient but secure ways to share or trade data in their areas, such as setting up data exchange centers.

Second, regulators in multiple jurisdictions have proposed or enacted restrictions on certain AI systems or uses assessed to pose safety and human rights concerns. Targets for such restrictions include AI robots capable of taking lethal action without a meaningful opportunity for human intervention, or AI social or financial creditworthiness scoring systems that pose unacceptable risks of racial or socioeconomic discrimination. In the European Union, the sale or use of AI applications may become subject to uniform conditions (e.g., standardization or market authorization procedures). For instance, the proposed EU AI Act
aims to prohibit market access for high-risk AI systems, such as AI systems intended for the "real-time" and "post" remote biometric identification of natural persons. Members of Congress in the United States have advanced legislation that tackles certain aspects of AI technology, though in a more piecemeal, issue-focused fashion. For instance, recently passed legislation aims to combat the effect on U.S. cybersecurity and election security of certain applications of generative adversarial networks capable of producing convincing synthetic likenesses of individuals (or "deepfakes"). The PRC and Japan have not yet issued mandatory laws or regulations restricting application of AI in any specific area for concerns such as discrimination or privacy. But similar to the United States, China regulates various aspects important to the realization and development of AI, such as data security, personal information protection, and automation, among others.

Third, governments are just beginning to update traditional liability frameworks, which are not always deemed suitable to adequately deal with damages allegedly "caused by" AI systems, due to the variety of actors involved in the development, interconnectivity
, and complexity of such systems. Thus, new liability frameworks are under consideration, such as establishing strict liability for producers of AI systems, in order to facilitate consumer damage claims. The first comprehensive proposal comes from the European Union's new draft liability rules for AI systems, aimed at facilitating access to redress for asserted "victims of AI" through easier access to evidence, a presumption of causality, and reversal of the burden of proof.

Each of these will be further discussed in the next sections.

DEVELOPING A DATA ECOSYSTEM

Often depicted as the fuel of AI, data is essential to develop and deploy AI systems. AI systems are built with algorithms, which in turn require configuration and training with data sets. Achieving a thriving data ecosystem that meets such AI needs depends on so-called Big Data, i.e., data that fulfills the "triple-V" criteria:

- Volume: abundant data that increases the accuracy of the analysis;
- Variety: data that is diverse in nature and from diverse sources, which the AI system can structure and correlate most efficiently; and
- Velocity: data that is up-to-date and transmitted in real time (e.g., from sensors).

One could also add a fourth "V" of Veracity (i.e., data accuracy). All of these characteristics lead to a fifth "V" of Value: data that fulfills the above criteria presents the most value for AI systems.

Given the central role of data in AI systems, the regulation of data use and access is critical. Availability of and access to extensive, quality-assured data sets are key to the configuration, training, and application of AI systems. However, regulation may impede or advance such use and access. Data sets are not always openly available, and their use can be restricted, for example, by intellectual property or privacy rights. Data ownership is also important and may be impacted by regulation seeking to lower barriers to entry and switching. Furthermore, data regulation can also address the veracity element, as data sets can be biased where implemented data is insufficiently screened and therefore not representative of a model's intended outcome, resulting in biased algorithms that may pose ethical concerns.

European Union

Current Legislation. The European Union has increasingly regulated the use of data, i.e., data processing. Initially, personal data was the focus of such regulation, notably starting in 2016 with the General Data Protection Regulation ("GDPR").[1] By seeking to establish a human-centric approach to technology and to ensure that individuals can better control that their personal data is processed only for a legitimate purpose in a lawful, fair, and transparent way, the GDPR aims to establish a solid framework for digital trust, while providing for free movement of personal data within the European Union and regulating international data flows outside the European Union.

However, tension exists between bedrock GDPR principles (such as purpose limitation and data minimization) and the full deployment of the power of AI and big data.[2] For instance, AI depends on vast quantities of data processed for purposes often not fully determined at the time of collection, in arguable tension with the GDPR's purpose limitation requirement. The use of data for training or using AI also faces potential constraints under the GDPR's requirement to have a legal basis (such as individual consent) for personal data processing. For this reason, for instance, facial recognition based on online data is restricted by data protection authorities in several EU Member States.

For non-personal data, the European Union adopted a Regulation on
the Free Flow of Non-Personal Data[3] in 2018 to ensure free movement of such data and to prohibit Member States from adopting (restrictive) data localization laws similar to those of other jurisdictions such as Russia. Additionally, the European Union's Open Data Directive[4] sets minimum rules allowing government-to-business ("G2B") data sharing through the publishing of data held by public authorities in dynamic and machine-readable formats and through standardized application programming interfaces ("APIs").

Upcoming Legislation. In 2020, the European Union announced a European Strategy for Data[5] to more broadly address all data flows and develop an EU single market for data, such that:

- Data can flow within the European Union and across sectors;
- European rules and values are fully respected, including data protection, consumer protection, and fair competition;
- Rules for access and use of data
are fair, practical, and clear. This includes a clear and trustworthy data governance mechanism and an open but assertive approach to regulating international data flows; and
- Data is both secure and, in the case of industrial data, easily accessible to businesses.

The EU Strategy for Data also identified issues of concern, including insufficient data availability, unequal market power, insufficient data governance, inadequate data infrastructures and technologies, and poor data interoperability and quality.

As a result, the European Union adopted a Data Governance Act ("DGA")[6] in June 2022, which aims at facilitating voluntary data sharing by individuals and businesses through enhanced trust in such sharing. The DGA promotes trusted sharing through neutral data brokers notified to the public authorities and through so-called data altruism organizations for gathering data voluntarily donated by individuals. The DGA further facilitates the sharing of G2B data that is subject to third-party privacy, intellectual property, or commercial confidentiality rights.

Of broader-scale impact, the European Commission also proposed a Data Act[7] in February 2022. This proposed Regulation seeks to facilitate voluntary
business-to-business data sharing and to further business-to-government data sharing in cases of urgency. The proposed Data Act also reviews the existing intellectual property rights framework in order to facilitate data access and use.

In parallel, the European Union is also developing sector-specific data regulation to boost the EU data economy. EU law already provides for some forms of data sharing obligations in the banking sector for payment data,[8] in the energy sector for smart meter/consumption data,[9] and for data provided to or created by digital content/services (all concerning personal data);[10] as well as in the automotive sector for repair and maintenance information[11] and intelligent transport systems[12] (including potentially in-vehicle data[13] and alternative fuels infrastructure[14]) (all non-personal data). The Digital Markets Act ("DMA"),[15] adopted in March 2022 and published in October 2022, also imposes certain data access obligations on those deemed "gatekeepers" of core platform services (e.g., obligations to make available data generated by business users to vendors using the platform, or to provide access to search data to search engine competitors). In addition, the European
Commission will pursue regulatory frameworks for the development of sectoral "data spaces" in the nine areas below.

EU Data Spaces: Industrial (manufacturing); Green Deal; Mobility; Skills; Health; Financial; Energy; Agricultural; Public Administrations.

For the first data space to be established, the European Health Data Space ("EHDS"), the European Commission published a proposed Regulation on May 3, 2022.[16] The draft EHDS Regulation aims at giving patients easy access to their health data and facilitating the sharing of their data with health professionals across the Member States. It also foresees specific rules on secondary use of electronic health data, e.g., for research and personalized medicine.

Table 1: Summary of Main EU Data Access Regulations and Proposals

Name of Legislation | Type of Data | Main Purpose | Status

General:
GDPR | Personal data | Privacy protection | Applicable since May 25, 2018
Free Flow of Data Regulation | Non-personal data | Prevent data localization laws | Applicable since May 28, 2019
Open Data Directive | All data | G2B data sharing | In force since July 16, 2019; Member State implementation by July 17, 2021
DGA | All data | G2B data sharing | Entry into force on June 23, 2022, and applicable from September 2023
Draft Data Act | All data | B2B sharing; B2G data sharing | Proposal submitted on February 23, 2022

Sector-Specific:
DMA | Certain data held by "gatekeepers" | B2B sharing | Entry into force on November 1, 2022, and applicable from May 2, 2023
PSD2 | Payment data | Open payment services | Applicable since January 13, 2018
Electricity Directive | Smart meter/consumption data | Energy consumption data availability | In force since July 4, 2019; Member State implementation by December 31, 2020
Gas Directive | Smart meter/consumption data | Energy consumption data availability | In force since July 13, 2009; Member State implementation by March 3, 2011
Digital Content and Services Directive | Digital content/services data | Digital content/services | In force since June 11, 2019; Member State implementation by July 1, 2021
Motor Vehicle Regulation | Repair and maintenance data | Aftermarkets for repair | Applicable since September 1, 2020
Draft ITS Directive | Intelligent transport systems data | Smart transport systems | Proposal submitted on December 14, 2021
Draft Recharging Infrastructure Regulation | Recharging infrastructure data | Interoperability of recharging infrastructure | Proposal submitted on July 14, 2021
Draft In-Vehicle Data Regulation | In-vehicle data | Autonomous vehicles | Proposal expected in 2023
Draft European Health Data Space Regulation | Health data | B2B sharing in health sector | Proposal submitted on May 3, 2022

Regulatory Oversight of Data Ownership, Data Pooling, Data Access, and Portability. Data increases in value when available in large pools. This need
for big data creates competitive incentives to collect and pool data. In turn, data pooling and aggregation create risks of lock-in effects and of raising barriers to entry and switching through increased network effects, even if data is "non-rivalrous" (i.e., it can always be copied). These issues can be dealt with by EU and/or national competition law. For example, data pooling agreements between competitors would be permitted only in certain circumstances,[17] including when done through trade associations.[18] Similarly, competition authorities could investigate practices whereby certain dominant companies refuse to provide data akin to an essential facility.[19]

EU regulation has also progressively sought to facilitate data portability and access through third parties. The GDPR already requires data portability for personal data under certain circumstances. The Free Flow of Data Regulation, concerning non-personal data, also included rules on the porting of data for professional users via industry codes of conduct. The DMA also includes rules allowing the portability of data held by gatekeepers and sets out data access rights for business users of gateway service providers (such as online marketplaces).
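A recurring thread in these portability rules is that data must be handed over in a structured, commonly used, machine-readable format. As a purely illustrative sketch (the record fields, class names, and export function below are hypothetical, not prescribed by the GDPR, the DMA, or the proposed Data Act), a portability export might serialize a user's records to JSON:

```python
import json
from dataclasses import dataclass, asdict, field
from typing import List

# Hypothetical record schema for illustration only; the legislation
# requires machine-readable output but does not prescribe field names.

@dataclass
class Order:
    order_id: str
    item: str
    price_eur: float

@dataclass
class UserExport:
    user_id: str
    email: str
    orders: List[Order] = field(default_factory=list)

def export_portable_data(user: UserExport) -> str:
    # asdict() recursively flattens nested dataclasses, yielding a
    # structured, machine-readable JSON document for the requester.
    return json.dumps(asdict(user), indent=2, sort_keys=True)

if __name__ == "__main__":
    user = UserExport(
        user_id="u-123",
        email="jane@example.com",
        orders=[Order(order_id="o-1", item="widget", price_eur=9.99)],
    )
    print(export_portable_data(user))
```

In practice, the hard questions such an export raises are legal rather than technical: which records fall within the scope of the request, and whether third-party privacy, intellectual property, or confidentiality rights attach to any of them.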
The proposed Data Act now seeks to bring access and data portability to an entirely new level, as it would include general access and portability rights applicable to all data holders, in particular in the cloud sector. The proposed Data Act would also limit the ability to rely on database IP rights to oppose sharing.

However, imposing a data access obligation does not necessarily mean that access should be given for free. Most legislation does not foresee any pricing mechanism, with few exceptions.[20] This regulatory gap raises the thorny issues of the appropriate level of compensation, price regulation, and the need to apply fair, reasonable, and non-discriminatory, or FRAND, conditions. Such a scenario brings heightened potential for litigation, and businesses should carefully assess related risks.

United States

Patchwork of Competent Authorities. In the United States, administrations and members of Congress of both parties have declared AI one of the central strategic and economic issues of the 21st century, and have convened blue-ribbon panels to advise the White House, Congress, and federal agencies on AI's policy challenges and opportunities.

Work on a substantive legal framework to regulate
AI's development and use has been comparatively slow, with a handful of federal agencies addressing specific issues posed by AI technologies in select fields. For example:

- In response to the increasing prevalence of AI-based automated vehicles, the Department of Transportation's ongoing efforts focus on enabling AI's safe integration into the transportation system and on adopting and deploying AI-based tools in internal operations, research, and citizen-facing services.
- The Food and Drug Administration ("FDA") proposed a regulatory framework for AI-based software incorporated into medical devices.
- The Department of Commerce's Bureau of Industry and Security amended its Export Administration Regulations to impose national security-based license requirements on exports or transfers of certain AI technologies, and the Committee on Foreign Investment in the United States ("CFIUS") has similarly indicated that foreign investments in "critical technology" AI companies may be subject to heightened filing obligations and more searching review.
- The Department of Commerce's NIST (National Institute of Standards and Technology), the Federal Trade Commission ("FTC"), the Consumer Financial Protection Bureau ("CFPB"), and the Federal Housing Finance Agency ("FHFA") have each promulgated guidelines aimed at protecting consumers from misuse of AI.

AI-focused legislative activity has likewise been approached in a piecemeal fashion, at both the federal and state levels. The majority of initiatives at the federal level
have targeted specific trends in AI technologies (e.g., eliminating perceived discriminatory bias in AI-based lending technologies, combating "deepfakes"), or provided funding or other government support to advance the U.S. role in developing AI technology. Importantly, however, federal initiatives generally have been limited to guidance or newly proposed rules rather than final binding standards or new legislation. As might be expected, state legislatures have taken varied approaches when crafting AI-related laws. The majority of state laws are prohibitory in nature, seeking to regulate discriminatory uses of AI and protect consumers' data.

Limited Data Access Through Voluntary Standardization. Data access is critical to promoting and maintaining a vibrant AI ecosystem. Likewise, standardization efforts can sometimes act to encourage growth within the AI sector by facilitating exchange among industry actors and governmental entities. However, increasing concerns over data privacy have prompted legislation within the United States regulating the use of certain types of data. Striking the appropriate balance between promoting advancements in AI technologies and regulating potentially improper uses is likely to be a consistent challenge for U.S. policymakers for the foreseeable future.

At the forefront of the promotion and standardization efforts for AI data issues is the NIST, created in 1901 and housed within the Department of Commerce. The NIST's mission regarding AI is to research and develop standards for AI data, with an emphasis on "cultivating trust in the design, development, use and governance of artificial intelligence technologies and systems" (e.g., through research to ensure that AI technologies are explainable), as well as promoting AI innovation through technical
standard-setting. In response to the National AI Initiative Act of 2020, the NIST also established and administers the National Artificial Intelligence Advisory Committee ("NAIAC"), which provides recommendations to the President on topics related to the current state of U.S. AI competitiveness, the state of the science around AI, and AI issues in the workforce, among others. One goal of the NAIAC is to develop broad access to the high-quality data, models, and computational infrastructure necessary for AI research and development, for both the government and private industry. Part of developing this infrastructure involves a task force charged with implementing a National AI Research Resource, which is envisioned as a shared computing and data infrastructure resource to provide AI researchers with access to computational services and high-quality data. The NAIAC, in this respect, has put out calls for voluntary data-sharing arrangements between industry, federally funded research centers, and federal agencies; increased development in high-performance computing infrastructure; and cloud-based AI, in an effort to advance AI research and technologies.

In addition to overseeing the NAIAC, the NIST is preparing an AI risk management framework ("AI RMF"), a guidance document to help manage AI's potential risks to individuals, organizations, and society. The NIST released a first draft of the AI RMF in March 2022 and a second draft in August 2022, each time requesting comments. The draft AI RMF establishes context for AI risk management; provides guidance on outcomes and activities to carry out the risk management process so as to maximize the benefits while minimizing the risks of AI; and offers sample practices to be considered when developing and implementing AI products and systems.

Legislative efforts at promoting the development of AI have been proposed at both the federal and state levels. In April 2021, the Senate introduced the Advancing American AI Act, which requires federal agencies to take steps to promote AI while ensuring that such developments align with U.S. values including the protection of
privacy, civil rights, and civil liberties. Specifically, the bill charges the Office of Management and Budget with continually refining AI best practices and supporting modernization initiatives; the Office of Federal Procurement Policy with developing a process to ensure that AI contracts align with specific guidelines related to privacy; and the Department of Homeland Security with revising the process for procurement and use of AI-enabled systems to give full consideration to the civil rights impacted by such systems.

States have achieved varying levels of success in passing legislation aimed at
encouraging AI development. For instance, Alabama enacted State Bill 78, which established a Council on Advanced Technology and Artificial Intelligence to review and advise parties on the use and development of AI in the state, while a similar bill failed in Nevada. Some states are also encouraging investment in AI. Pending legislation in Hawaii would establish an income tax credit for investment in qualified businesses that develop cybersecurity and AI within the state.

Limited (State-Level) Regulation of Personal Data. While abundant data is critical to the successful development of AI-based technologies, the prospect of unregulated data collection covering an individual's every online interaction has long worried privacy advocates. In the United States, nationwide regulation for data protection exists only for specific segments of the population. For example, the Health Insurance Portability and Accountability Act ("HIPAA") governs how personal health information can be accessed and shared, while the Family Educational Rights and Privacy Act, or FERPA, accomplishes a similar function for students' private information. Outside of a handful of even more narrowly tailored laws (e.g., the Gramm-Leach-Bliley Act, the Children's Online Privacy Protection Rule, etc.), most federal regulation of AI is concerned with potential discriminatory impact and appropriate market access for AI technologies, rather than the underlying data collection practices that AI-based technologies rely on.

In the absence of federal legislation, individual states are starting to pass laws aimed at enabling individuals to take more control over how their data is monitored and monetized online. California was the first to enact such legislation. The California Consumer Privacy Act ("CCPA") of 2018 mirrors the
GDPR and provides consumers with the right to know what information is being collected, and requires businesses to disclose the consumer's right to delete personal information. Virginia was the second state to pass comprehensive data privacy regulations when it enacted the Consumer Data Protection Act in March 2021. Colorado soon followed with the Protect Personal Data Privacy Act in July 2021. Both acts mirror the CCPA and seek to give consumers more control over data collection. Similar proposed bills on data privacy are currently pending in New York, while one recently failed in the state of Washington.

In short, recent federal efforts in the United States have largely focused on promoting AI policy and standardization, leaving states to regulate data privacy. With regard to data accessibility for individuals, consumers and privacy advocates have called for more comprehensive legislation at the federal level. While a nationwide data privacy act seems increasingly likely, the political consensus to enact a specific piece of such legislation remains to be seen.

Table 2: Summary of Main U.S. Data Access Regulations
(Name of Legislation | Type of Data | Main Purpose | Effective Date)

General
• Privacy Act of 1974 | Personal data held by the U.S. government | Provides rules and regulations for the collection, use, and disclosure of personal information by U.S. government agencies | September 27, 1975
• Federal Trade Commission (FTC) Act | N/A | Allows the FTC and other authorities to prosecute apps or websites that violate their privacy policies or engage in deceptive marketing language as it relates to privacy | September 26, 1914; reorganized on May 24, 1950
• California Consumer Privacy Act (CCPA) | Personal data | Provides privacy protection for California consumers | January 1, 2020, with amendments by the California Privacy Rights Act that go into effect on January 1, 2023

Data specific
• Health Insurance Portability and Accountability Act (HIPAA) | Certain medical information | Protects protected health information held by covered entities | August 21, 1996
• Fair Credit Reporting Act (FCRA) | Credit report information | Restricts use of and access to information related to credit | October 26, 1970; amended on December 4, 2003
• Family Educational Rights and Privacy Act (FERPA) | Student education records | Governs access to educational information and records by public entities | August 21, 1974
• Gramm-Leach-Bliley Act (GLBA) | Certain personal information | Governs the collection, use, and protection of consumer data held by financial institutions | November 12, 1999
• Children's Online Privacy Protection Act (COPPA) | Data from minors | Imposes certain limits on data collection for children under 13 years old | April 21, 2000

China

The PRC does not restrict AI's use or development in AI-specific legislation. However, the PRC is regulating elements needed to build AI technologies, including data (e.g., personal information, facial recognition, big data, algorithms, or automated decision-making).

Data Protection. The PRC boosted its regulation of data protection in 2021 by enacting the PRC Personal Information Protection Law (“PIPL”).21 Notably, consent is required to obtain an individual's personal information, unless one of a limited number of exceptions applies.22 PIPL also forbids the use of automated decision-making to discriminate among individuals, for example by applying different contractual terms based on analyses of personal information such as habits, health, credit status, or financial situation.23 The PRC Antitrust Law (2007) further provides that business operators may not use data, algorithms, technology, etc., to engage in monopolization.24

The PRC also recently tightened data security through its Measures for the Security Assessment of Outbound Data Transfer (2022).25 Under these Measures, the international exchange of AI knowledge or information may be problematic, since data involved in the development or application of AI might be deemed important data. Thus, any international transfer of such data would trigger the data handler's obligation to apply for a security assessment to seek the review and pre-approval of the PRC government. High-end chips, devices, or other technologies may also be the subject of national security and thereby considered highly confidential and prohibited from sharing.

PRC regulators remain well aware that promoting the free flow of data is crucial to the larger-scale application of AI. The PRC encourages the free flow of data and information within the framework of the above-mentioned protective laws and regulations. For example, local governments are exploring methods to facilitate data sharing or trading, such as by establishing platforms for collection of and access to big data, setting up data exchange centers, or designating a new “free trade” zone for the free flow of data, particularly for international data transfers.26

Japan

Data Protection. In Japan, the use of personal information is regulated by the Act on Protection of Personal Information (Act No. 57 of 2003, as amended) (“APPI”).27 Under the APPI, consent is not required to collect personal information, except for sensitive personal information (such as health data). However, data subjects must either be notified of the purpose of the use of personal information, or the purpose of use must be published promptly after collection unless it was already published in advance.28 For transfer of personal data to a third party, the APPI in principle requires data subjects' advance consent unless an exception applies.29 Additionally, in principle, cross-border transfer of personal data requires consent unless an exception applies.30 The APPI's recent 2020 amendment has further heightened the consent requirement and now strictly requires more transparency in obtaining advance consent for the international transfer of personal data. More specifically, a data-exporting entity must inform data subjects of: (i) the country where the third party is located; (ii) the personal information protection system of that country; and (iii) measures taken by the third party to protect the personal information.31

Measures to Facilitate Data Collection and Flow. The strict consent requirement for the transfer of personal data can sometimes conflict with the business and innovation needs for collecting and analyzing vast amounts of data. The following legislation and governmental initiatives seek to address this issue.

Anonymously processed information. By processing information so that a person cannot be identified in accordance with the strict processing rules set forth in the APPI implementation regulations and related guidelines, this anonymously processed information32 can be transferred to a third party without data subjects' consent, but it is subject to additional strict obligations and requirements imposed on the parties
creating and using such information.

Anonymized medical information. Medical data is very useful big data for medical research and development, including the development of AI in relation to medical device and drug development (e.g., image diagnosis). However, the APPI imposes stricter regulations on the use of such medical data than other types of personal data. Collection of sensitive personal information, such as medical history, requires advance consent of the data subjects.33 Further, the transfer restriction is also heightened, as the opt-out scheme that can apply to transfers of other types of personal data does not apply to medical data.34

In order to facilitate use of personal medical data for medical research and development purposes, Japan established secure rules to create and use anonymized medical information, enacting the Act on Anonymized Medical Data to Contribute to the Research and Development in the Medical Field (Act No. 28 of May 12, 2017) (“Next Generation Medical Infrastructure Act”),35 which took effect with the relevant cabinet ordinances and guidelines on May 11, 2018. Under this Act, medical institutions can collect and provide medical information to organizations certified to anonymize medical information without obtaining consent from patients, who only need to be notified of certain required items, including the patient's right to opt out.36 The certified organization then anonymizes the medical information and can provide it to other organizations for use in medical research and development.

Voluntary sharing of personal data: certified information banks. Businesses can be certified as information banks to promote and facilitate the voluntary exchange and sharing of personal data under the APPI's consent requirement regime. Individuals can entrust the handling of certain personal information (including use of smartphone applications, browsing history, purchase records, location data, etc.) to an information bank, providing consent for the information bank to disclose this information to other business entities subject to certain terms and conditions. In return for consenting to disclose this personal data, individuals receive benefits such as discount coupons from the receiving business entities. To establish standards and rules for certification of information banks, the Ministry of Internal Affairs and Communications (“MIC”) and the Ministry of Economy, Trade, and Industry (“METI”) together prepared and published “Guidelines Regarding Certification of Information Entrustment Function” in June 2018 (ver. 1.0).37 Certification as an information bank is voluntary and not required for engaging in this activity, but it is useful to show an organization's credibility and its compliance with security measures to protect privacy.

Competition Regulation on Data Pooling and Lock-In. The Japan Fair Trade Commission prepared and published a “Report of the Working Group on Data and Competition Policy” on June 6, 2017.38 The report confirmed that the current Anti-Monopoly Act (Act No. 54 of April 14, 1947, as amended)39 may apply to and regulate unfair data pooling and lock-in by monopoly and oligopoly firms (e.g., “unreasonable restraint of trade,” “unfair trade practices”).

MARKET ACCESS

Regulators' concerns that certain AI systems could in some instances pose risks to safety or fundamental rights have spurred countries to regulate how such systems can access the market. The asserted risks at stake typically depend on the goal pursued and the area where the AI is used. In just a few examples:

• Algorithms that have the purpose or effect of serving to set up a price cartel
may be caught by antitrust laws.

• Certain large-scale uses of facial recognition technology may trigger questions related to privacy, consent, and individual rights, as shown by the restrictions imposed on Clearview's technology in the United States and the European Union.

• The use of AI systems in selecting job applicants or determining the creditworthiness of borrowers may raise issues related to statutory anti-discrimination protections. Allegations may focus on various factors. For instance, an algorithm may be trained with a historic data set that is identified as reflecting bias, allegedly amplifying past discriminatory hiring practices. Similar effects might also arise from the underrepresentation of a group in the data set or the selection of analyzed characteristics.

The Case of Clearview AI

Clearview AI offers services (with a reported focus on law-enforcement customers) that allow facial recognition based on an extensive database of pictures “scraped” from the internet (social media, etc.). In the European Union, several data protection authorities adopted decisions prohibiting the use of Clearview AI technology, based on the lack of a legal basis to process biometric data (pictures). In the United States, Clearview AI agreed to cease selling individual access to its database inside the country and committed to destroy its existing stock of facial-recognition vectors under the terms of a settlement reached with the American Civil Liberties Union.

Rules on market access for AI systems could be focused on limiting such risks and the subsequent harm caused. This might include adapting existing legal frameworks to the specificities of AI systems, but also creating tailored AI market access legislation.

European Union

Current Legislation. An extensive body of existing EU product safety legislation potentially applies to various AI applications, but attempting to apply this existing legislative framework to new AI systems has raised various problems. For instance, the European Union's current general product safety legislation (dating from 2001) has a limited scope that applies only to products, thereby excluding AI-based services, such as those related to health, financial, or transport services.

In setting out an AI strategy,40 the European Union sought to promote the uptake of AI while addressing the associated risks. One important aspect is regulating market access with a view to ensuring user safety and safeguarding fundamental EU values and rights. After recognizing loopholes in current product safety legislation, the European Commission took action in April 2021 to ensure the safety of AI placed on the market. In addition to its Coordinated Plan on AI,41 outlining necessary policy changes and investment at Member State level, the Commission also set out two proposed Regulations aimed at harmonizing safety requirements and market access of AI applications at the EU level: (i) the AI Act and (ii) the General Product Safety Regulation (to replace the current General Product Safety Directive).

Proposed AI Act. The Commission's proposed AI Act,42 published in April 2021, aims at harmonizing rules for bringing to market, putting into service, and using AI systems in the European Union (see also, “Regulating Artificial Intelligence: European Commission Launches Proposals,”
Jones Day Commentary, Apr. 2021).

Under the proposal's risk-based approach (see figure below):

• Certain AI practices are prohibited, as they are considered a central threat to fundamental rights (e.g., this includes social scoring by governments, but not “killer robots”);

• Certain AI systems are classified as high-risk and subject to conformity assessment procedures before they can be placed (or put into service) on the EU market. High-risk AI includes: (i) AI used for products already covered by specific EU product safety legislation, such as for machinery, toys, radio equipment, cars and other types of vehicles, and medical devices; and (ii) AI used in certain contexts, such as safety in the management and operation of critical infrastructures, human resources, and creditworthiness assessments;

• High-risk AI is also subject to specific obligations such as data governance, human oversight, and transparency; and

• Certain low-risk AI systems, like deepfakes, are subject to harmonized transparency rules.

On enforcement, market monitoring and surveillance are ensured by national regulators with the ability to impose significant fines, under the supervision of an anticipated European Artificial Intelligence Board.

The proposed AI Act is currently expected to be adopted by year-end 2022, enter into force in 2023, and become applicable two years after its entry into force.

Preventing Biases. The proposed AI Act aims at resolving, in particular, the issue of biases allegedly created or amplified by AI. Bias and discrimination are inherent risks of any societal or economic activity, including for AI systems. However, AI's large scale means that the impact of its shortcomings could be much greater and more systematic, thus increasing the impact risks. Allegations of AI-based biases typically result from either the use of low-quality training data or AI system opaqueness that can make it difficult to identify possible flaws in the AI system's design.

While the GDPR can already catch some biases (e.g., through its data accuracy obligation and prohibition of decision-making based solely on profiling), the proposed AI Act may further limit bias risks. Its high-risk AI requirements minimize the risk of algorithmic discrimination, particularly in relation to the quality of data sets used for developing AI systems and the obligations for testing, risk management, documentation, and human oversight throughout the entire AI system's lifecycle.

Proposed General Product Safety Regulation. Toward adapting current legislation to new technologies and their related challenges, the Commission also proposed a Regulation on General Product Safety in June 2021.43 This would replace the General Product Safety Directive,44 whose statutory safety requirements must be met before bringing products to market. The proposed Regulation aims to broaden the current Directive's scope to cover, in particular, AI systems. For example, as mentioned above, the existing General Product Safety Directive's limited scope applies only to
products and does not cover AI-based services. The proposed Regulation would expand definitions, such as “product” and “safety,” to enable regulating new technologies. Furthermore, current EU product safety legislation focuses on a producer placing its product on the market. This means that such legislation does not cover stand-alone software (which is not the final product) or third parties that introduce an AI component to a product after its introduction on the market. These cases will now be covered in the new proposal.

Other Relevant Legislation. Various other sector-specific legislative instruments, which do not focus solely on AI, could also be relevant for market access of AI-related products to the extent that these rules would facilitate cross-border trade by businesses. These include the EU Cybersecurity Act,45 in force since 2019, which establishes an EU-wide cybersecurity certification framework for information and communication technology products, services, and processes; the Regulation on Medical Devices,46 in force since 2017, whose rules include software medical devices; the Regulation on In-Vitro Diagnostic Medical Devices,47 in force since 2017; and the Commission proposal for a Cyber Resilience Act submitted on September 15, 2022.48 (See also, “European Commission Proposes Legislation Imposing New Cybersecurity Requirements on Digital Products,” Jones Day Alert, Sep. 2022.)

United States

Patchwork of Competent Authorities. Federal enforcement authorities have expressed concerns about the potential misuse of AI-based technologies, especially as such misuse might affect individuals. Congress has not enacted any new legislation concerning AI, and, accordingly, the scope and validity of federal action to regulate AI remains uncertain. This stands in contrast to the
comprehensive efforts to categorize and prohibit certain forms of AI as proposed in the European Union.

The FTC was one of the first agencies to assert a role in preventing the misuse of AI-based technologies, via a blog post in April 2021. While helpful to illustrate the agency's priorities, this guidance does not bind regulated parties. The FTC claims to draw its asserted authority to curb potentially discriminatory AI-based practices from Section 5 of the FTC Act, which prohibits unfair or deceptive practices; the Fair Credit Reporting Act; and the Equal Credit Opportunity Act. These theories remain controversial and are subject to ongoing challenges in the courts.

The FTC blog post encourages companies to start with solid AI foundations and improve their data sets, to be mindful of the potential for discriminatory outcomes, and to embrace transparency. Further, the post urges companies to disclose data collection when engaging with consumers. The post cites an FTC complaint alleging that a social media company misled users about their ability to opt out of its facial recognition software as evidence of the FTC's willingness to go after companies that engage in “data malpractice.” The FTC goes on to note that even inadvertent violations will be pursued, and that if a company's AI algorithm results in, for example, credit discrimination against a protected class, the FTC can file a complaint. Finally, the post ends with a warning for companies to “hold yourself accountable – or be ready for the FTC to do it for you.”

Other federal agencies have also voiced their perceived roles in regulating market access and certain forms of AI prohibition, typically as it relates to the potential for discriminating against a protected class. For example:

• The Equal Employment Opportunity Commission announced the launch of an Initiative on AI and Algorithmic Fairness in October 2021. The Initiative is set to examine the use of AI in the hiring and employment process against existing civil rights laws, many of which were enacted decades before the advent of AI.

• Similarly, the Department of Housing and Urban
Development (“HUD”) announced a proposed rulemaking by which algorithms used in housing decisions potentially could be challenged as having a discriminatory impact or effect. This rule, if finalized, is expected to be challenged on the grounds that it exceeds HUD's authority under the Fair Housing Act, as interpreted by the U.S. Supreme Court.

• The CFPB controversially asserted in March 2022 that its “unfairness” authority may be used to regulate discrimination. According to CFPB Director Rohit Chopra (formerly an FTC Commissioner), “Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions.” Industry trade associations recently brought litigation challenging this as exceeding the CFPB's authority as prescribed by Congress under the Dodd-Frank Act and the Equal Credit Opportunity Act. They argue that Congress did not intend for the CFPB to regulate discrimination.

• Finally, the FHFA released an advisory bulletin in February 2022 that provides AI and machine-learning risk management guidance for Fannie Mae and Freddie Mac; it is the first publicly released guidance by a U.S. financial regulator that is focused on AI risk management.

Agencies whose primary concerns with AI are not focused on these potentials for discrimination have also weighed in on the role these technologies are likely to play in their fields. For example, the FDA is proceeding with its 2019 Action Plan on Artificial Intelligence/Machine Learning-Based Software as a Medical Device. The Plan proposes changes to the traditional paradigm of medical device regulation for devices that incorporate or predominantly rely on AI and machine learning. The new approach would provide for a premarket program for such devices. However, the FDA states this would require a commitment from manufacturers on transparency and real-world performance monitoring as part of the premarket submission process.

AI Prohibitions. The U.S. legislative approach to AI prohibition is likewise piecemeal and predominantly issue driven. One prominent example is the 2020 Identifying Outputs of Generative Adversarial Networks, or IOGAN, Act. The Act directs the National Science Foundation and NIST to support research on “deepfakes” (also referred to as “machine-manipulated media” or “digital content forgeries”), which are highly realistic AI-created media. The Act aims
to encourage technology to detect deepfakes for both consumer protection and national security purposes, implicitly recognizing that a statutory prohibition on specific types of content or content generation could raise significant constitutional questions.

AI Bill of Rights. On October 4, 2022, the White House Office of Science and Technology Policy published the blueprint for an AI Bill of Rights, a set of voluntary and nonbinding guidelines with the stated purpose of protecting the public from harmful outcomes or harmful use of technologies that implement AI. (See, “White House Announces Artificial Intelligence Bill of Rights,” Jones Day Alert, Oct. 2022.) The AI Bill of Rights framework applies to companies with “(1) automated systems that (2) have the potential to meaningfully impact the American public's rights, opportunities, or access to critical resources or services.” Companies falling under this
framework are encouraged to follow the five principles outlined in the AI Bill of Rights:

• Safe and effective systems. Companies should ensure automated systems are designed to protect users from harm. To achieve and guarantee this, automated systems should undergo regular monitoring designed to identify and mitigate safety risks.

• Algorithmic discrimination protections. Companies should emphasize equity when developing algorithms through use of representative data and by conducting proactive equity assessments. Discriminatory uses of algorithms and algorithms that generate discriminatory results should be abolished and prohibited.

• Data privacy. Users sharing their data should have agency over how their data is used and be protected from abusive data practices. As such, companies should include built-in data protections and limit collection to data that is “strictly necessary for the specific context.”

• Notice and explanation. Users should be notified when an automated system is in use, and accessible plain language should describe how and why such a system contributes to outcomes that impact users.

• Human alternatives, consideration, and fallback. Companies should provide users with the option to opt out from automated systems and alternatively provide access to a human consultant, where appropriate.

While the AI Bill of Rights sets forth voluntary guidelines only, it may set the stage for future legislation and regulations surrounding the use and implementation of AI.

China

Promoting AI. China's State Council issued a “Development Plan on the New Generation of Artificial Intelligence” (“Plan”) in 2017.49 The Plan anticipated AI as a new economic engine to provide solutions for problems such as an aging population or scarce resources, and as broadly applying in sectors such as education, medical treatment, environmental protection, city operations, and legal services. The Plan identified various challenges to AI development in China, such as:

• A lack of original achievements and talent;

• Large gaps with developed countries in terms of basic theories, core algorithms, key devices, high-end chips, major products or systems, materials, software, etc.;

• Absence of a legal framework; and

• Legal or ethical problems arising from the development of AI, such as the infringement of personal privacy, disruption to industry or employment structures, or impact on social governance and stability.

With this context in mind, the Plan sets out the country's main tasks, including, among others, promulgating laws, regulations, policies, and ethical rules that promote or regulate AI development, and establishing an AI security monitoring and evaluation system to manage any abuse of data, infringement of personal rights, network security risks, or other potential issues.

In 2019, for the purpose of implementing the Plan, the Ministry of Science and Technology issued “Work Guidelines for the Construction of National Open Innovation Platforms for New Generation Artificial Intelligence.”50 The Work Guidelines identify enterprises as the main actors or leaders for constructing AI-related open-source platforms, sharing technology and research resources with the public, and relying on the market to provide funds and continuous support for such platforms. The Work Guidelines encourage cooperation among local governments, industries, research facilities, and universities, and the integration of resources for the purpose of developing such platforms. The Work Guidelines also list the requirements and procedures applicable to businesses leading the construction of such AI-related platforms in specific industrial areas.

To develop experimental fields for AI-related activities on a larger scale, the Ministry of Science and Technology further issued the “Guidelines for the Establishment of the National New Generation Artificial Intelligence Innovation and Development Pilot Zone” in 2020.51 The Guidelines intend to establish selected pilot zones where new laws, regulations, policies, or standards may first be tested to promote AI-related industries and infrastructure. The Guidelines list the requirements and procedures for cities seeking to serve as such pilot zones, and the supporting measures that
an approved city may receive, such as local government funding or resources. Thus far, the Ministry has approved multiple cities for the development of such pilot zones, such as Harbin, Shenyang, and Zhengzhou.52

Government agencies in charge of specific sectors have also issued opinions or guidance to facilitate and support AI-related development in their areas, such as in forestry and grassland,53 higher education,54 medical software products,55 and construction.56

Guidance. Several government agencies (e.g., the National Standardization Administration, the Central Cyberspace Administration Office, the National Development and Reform Commission, the Ministry of Science and Technology, and others) issued “Guidelines for the Construction of the National New Generation Artificial Intelligence Standard System” in 2020. The Guidelines set out eight main categories of AI-related subjects for which standards are to be promulgated:

• Basic and common standards (e.g., terminology or knowledge structure, testing, or evaluation);

• Supporting technology or products (e.g., algorithms, big data, data storage);

• Basic AI software or hardware platforms (e.g., chips, systematic software, development frameworks);

• Key general technologies (e.g., machine learning, calculation, identification);

• Technologies in key areas (language or vision processing, biometrics, virtual reality, human-machine interaction);

• Standards for AI products or services, including industrial standards (e.g., AI's application in manufacturing, agriculture, transportation, medical treatment, education, and public governance);

• Safety standards; and

• Ethical standards.

On ethical risks raised by AI technology, in 2021 the National Information Security Standardization Technical Committee (TC 260) issued the “Network Security Standardization Practice Guide: Guidance for Prevention of Ethical Risks of Artificial Intelligence” (“Ethical Guidance”).57 This provides guidance on better addressing the ethical risks of activities such as AI research and development, design and manufacturing, and applications. The Ethical Guidance requires conducting an ethical risk analysis for
an AI-related activity with respect to the following risks: (i) the ethical impact of AI, which may exceed the expectation, understanding, or control of relevant parties (such as the researcher, developer, designer, or manufacturer); (ii) inappropriate use of AI; (iii) AI infringing on basic human rights, including bodily, privacy, or property rights; (iv) AI discrimination against specific groups of people that may affect justice or equality; and (v) inappropriate conduct or unclear responsibility of relevant parties, thereby negatively impacting social trust or values or infringing on rights. In addition, the Ethical Guidance also sets out obligations on relevant parties to prevent those risks.

Japan

Japan has chosen to provide only non-legally binding guidelines, with the intention of leaving AI's use and development undeterred. This contrasts with the EU-style horizontal and comprehensive regulatory approach to AI, as well as U.S.-style specific and targeted regulations.

On July 9, 2021, METI published a report titled “AI Governance in Japan Ver. 1.1” (“AI Governance Report”).58 Following a review of various regulatory approaches taken in other jurisdictions, the AI Governance Report concluded that for Japan, a desirable AI governance approach would not establish legally binding comprehensive laws and regulations. Rather, Japan would provide guidelines setting out various risk-based options and practical examples to fill in the gaps and achieve the goals of the parties concerned. Based on
the AI Governance Report’s recommended approach, on January 28, 2022, METI published “Governance Guidelines for Implementation of AI Principles Ver. 1.1” (“AI Governance Guidelines”).59 The AI Governance Guidelines consist of: (i) action targets to be implemented; (ii) practical examples that correspond to each action target; and (iii) practical examples for the purposes of carrying out gap analysis between AI governance goals and current circumstances. The AI Governance Guidelines present in total 21 action targets in accordance with the six categories of: (i) conditions and risks analysis; (ii) goal setting; (iii) system design; (iv) implementation; (v) evaluation; and (vi) re-analysis of conditions and risks. The action targets are general and objective targets that should be implemented by every AI company involved in the AI business, typically the development/operation of AI systems that could have a certain level of negative impact on society.60 On the other hand, the practical and gap analysis examples cannot take into account the individual and specific circumstances of every AI company. Accordingly, as the Guidelines indicate, each AI company will determine whether and how to adopt the practical and gap analysis examples to achieve the action targets in light of its own situation.61

Separately, MIC, through the Conference toward AI Network Society, published “Draft AI R&D Guidelines for International Discussions”62 on July 28, 2017, and “AI Utilization Guidelines: Practical Reference for AI Utilization”63
on August 9, 2019. According to its 2022 Annual Report,64 the Conference is considering the review and amendment of these guidelines in light of recent developments in these areas.

AI LIABILITY

Issues. Notwithstanding any market access limitations, AI’s rapid emergence and its distinctive characteristics (such as opacity, unpredictability, connectivity, complexity, and autonomy) have triggered calls for establishing specific liability rules for material and immaterial harm “caused by” AI. One of the challenges raised by AI is the allocation of liability, since damage might be traced back to neither human error nor a product defect and can derive from the above-referenced particularities:

• Machine learning enables digital systems to learn autonomously through experience and by using data, which are not all in the hands of the initial programmer.
• The opacity of AI systems may raise difficulties in understanding how such systems produce a certain output.
• With the internet of things in industrial production, product defects may be due to the connectivity of an increasing number of robots and devices.

In cases where AI “causes” damage, the question therefore arises as to who would be the addressee of a damage claim. The answer is not so simple, as many addressees could be considered, such as the algorithm’s creator, the software producer, the database owner, the connectivity provider, the AI system owner, the AI user, etc. The requirement to demonstrate a causal link raises another challenge caused by the complexity of AI systems and poses a great burden on the injured party. Finally, fault may be difficult to prove in relation to AI systems.

As a result, authorities across the globe are considering introducing specific liability regimes for AI damages, such as joint and several
liability, strict liability (without fault), etc.

European Union

Current Legislation. Member States essentially oversee liability regimes, with only a small part harmonized at EU level. In particular, the Product Liability Directive imposes strict liability on producers for their defective products,65 but it regulates only certain types of damages and applies only in the event of a defect in a product.

Specific Liability Rules for AI. The EU AI strategy (and its annexes66), as well as related expert reports67 and communications,68 concluded that further harmonization of liability rules was required to address AI’s specificities. Following a consultation in October 2021,69 the European Commission published two new proposals on September 28, 2022:

First, the proposed Revised Product Liability Directive aims at modernizing the current EU framework on manufacturers’ liability for defective products to include the following points:

• Extending the definition of “product” to enable strict liability rules to cover intangible products such as software and AI. At present, since most software and applications can be classified as a “service” rather than a “product,” these do not fall under the scope of the current Product Liability Directive.
• Broadening the scope of damages to include cyber vulnerabilities (e.g., connectivity and cybersecurity) and non-material damage (e.g., loss of data, environmental damage).
• Widening the strict liability regime for importers to include online intermediaries (online marketplaces) where consumers cannot identify the producer. Thus, for products originating from outside the European Union, both online intermediaries and importers of physical products would be subject to strict liability rules.
• Extending the notion of “defect” to cover defective refurbished or remanufactured products and defective spare parts that cause damage. This expansion would address the fact that AI systems continuously learn and develop while operating, and are continuously updated with new data and software. Reliance on the so-called “development risk defense” (essentially a state-of-the-art standard at the time of conception) would also be denied for AI products that continue to learn and adapt while in operation.
• Facilitating claims to compensation by requiring manufacturers to disclose necessary information in court and by easing the burden of proof for victims in more complex cases, such as those involving AI-enabled products.

Second, the proposed AI Liability Directive provides for a targeted harmonization of national civil liability rules for AI. It supplements the rules under the above-described proposed Revised Product Liability Directive by introducing two main additional
measures specifically for AI in noncontractual civil law claims for damages:

• Alleviating victims’ burden of proof through the “presumption of causality,” whereby courts can establish the causal link between the damage and noncompliance of providers of AI systems with a certain obligation relevant to the harm (e.g., with a duty of care under EU or national law), if the victims can demonstrate such noncompliance. This presumption is rebuttable by proving that a different cause provoked the damage.
• Empowering courts to order providers of high-risk AI systems (as defined under the proposed AI Act) to disclose relevant information, subject to appropriate safeguards to preserve the legitimate interests of all parties, such as trade secrets or other sensitive information.

The proposal for a revised Product Liability Directive would harmonize liability rules across EU countries and thus reduce legal fragmentation. However, such harmonization would be limited to tort law, while national laws would continue to govern contractual liability (including liability exemptions, etc.).

United States

The United States does not have a comprehensive approach to AI liability at either the national or state level. At the state level, legislatures are updating their general tort laws to cover certain AI-based damages. For example, many states have passed legislation related to autonomous vehicles to update existing damages laws. Broader AI-based harms could be addressed by updating existing product liability laws. Given product liability law’s history of adapting to new technologies, advocates have argued it is the best vehicle to address the potential harms that may result from AI products.

China

Current Legislation. At present, while China does not have a comprehensive approach to AI liability, AI is nonetheless subject to existing liability rules. At the highest judicial levels, Chinese courts are taking an interest in safeguarding individual rights against AI software-related infringements. For example, in April 2022, the Supreme People’s Court identified a number of “model” civil cases on personality
rights issued by lower courts in China. These included a ruling by the Beijing Internet Court, which found that AI software infringed personality rights by using the portrait of a natural person without the person’s consent.70 The AI software at issue allowed users to build an AI virtual character using the plaintiff’s name, portrait, and character traits, and to interact with it. The court ruled that the software provider, by designing this function and algorithm, in fact encouraged users to use the plaintiff’s information in this way. Therefore, it was no longer a neutral technology provider and infringed the plaintiff’s rights to name, portrait, and dignity. This case reflects a thoughtful exploration of standards for assessing AI algorithms and applications, and highlights the significance of protecting personality rights in the AI age.

Japan

Japan has not yet enacted any specific rules to address AI liability issues. Therefore, AI liability is governed by the current civil contractual or tort liability regimes under the Civil Code of Japan (Act No. 89 of April 27, 1896, as amended)71 and the Product Liability Act (Act No. 85 of July 1, 1994).72 Similar to the current EU Product Liability Directive, Japan’s current Product Liability Act covers only a defect of a “product” that is movable property. Therefore, if AI is installed in and constitutes a part of a certain device, the manufacturer of such device could be subject to product liability. However, if AI is not installed in a device and is merely a program, it cannot be construed as a movable object, and thus is not a product. Therefore, liability claims cannot be made against a programmer of AI under the Product Liability Act. The notion of defect73 and the burden of proof, as discussed in the proposed revision of the EU Product Liability Directive, would also need to be examined under the Product Liability Act.

CONCLUSION: KEY CONSIDERATIONS FOR THE PRIVATE SECTOR

For businesses, the innovative development and deployment of AI pose tremendous opportunities but also risks. Navigating these opportunities and risks will require an eye to the evolving
legal issues that AI poses. While each situation, product, and service will pose different questions, general recommendations for addressing the legal implications of AI include:

• Keep abreast of the growing regulation of AI globally. AI regulation is growing across the globe, and interest in AI oversight will expand further over time. When developing new AI systems, companies should anticipate constraints that upcoming regulation may impose, including in terms of conditional market access, increased liability, or data usage. Companies should expect to have to adapt to increasing constraints as more regulations are imposed and, in some legal systems, as new causes of action are created or recognized. The European Union is the front-runner in terms of setting the regulatory constraints, with expected regulations covering: (i) the marketing and use of AI systems; (ii) data access; and (iii) AI liability. This framework may become a blueprint for regulation in some other countries (or by subnational state or local authorities), as the GDPR did for privacy regulation. In the United States, the patchwork approach to AI regulation has meaningful implications for companies, whether well established with AI-based technologies or just entering the field. Depending on its area of business, a company may find itself entering a highly regulated space in which established guidelines govern acceptable practices, or it may have little oversight and be left to develop best practices on its own. However, the establishment of the NAIAC indicates growing interest in taking a more comprehensive approach to AI technologies at the federal level.

• Consider data collection risks and opportunities. When deploying AI, companies should also consider the risks and opportunities of lock-in effects. Companies should consider
their strategies to gather relevant and sufficient data to support their AI-based products and services. The rising importance of data sharing and pooling arrangements, as well as data access, portability, and privacy issues, may create regulatory concerns. In this regard, companies should consider opportunities brought by existing and new regulations in terms of access and portability of data, which may facilitate access to competitors’ data or to data owned by third parties that relates to their own activities. Companies should review data pooling agreements with their competitors under competition and privacy law.

• Maintain privacy when personal data is concerned. AI systems using personal data call for specific attention, as these are already covered by the GDPR and other privacy legislation. The obligation to conduct an impact assessment, under the GDPR and the forthcoming AI Act, should be considered. In the United States, companies can expect the implementation of a national data privacy regulation similar to the GDPR in the near-term future. As the number of states that pass their own data privacy legislation continues to increase, along with growing calls to harmonize data privacy laws between the United States and the European Union, a nationwide privacy regulation is becoming increasingly likely.

• Monitor data flows. Several regulations, like the GDPR in the European Union or the Measures for the Security Assessment of Outbound Data Transfer in China, may constrain the transfer of data or algorithms between jurisdictions. Such considerations can apply to transfers of data within a company, or to collaborative software development projects in which code is transferred between or accessible by personnel in multiple jurisdictions. For example, the United States and China have each signaled an intention to restrict exports of certain high-value AI technologies to each other. Companies should map the data flows triggered by AI use and assess their compliance.

• Put in place an internal structure to limit the risks of discrimination and bias. Specific attention should be given to risks of biases triggered or amplified by the use of AI. It has become increasingly clear that, regardless of the field, governments are motivated to focus on ensuring AI technologies are not used in a discriminatory manner and do not result in discriminatory practices. Given that AI technologies are iterative and learning based, a company should consult with experts to ensure any training data sets are free from biases from the outset. The regulatory agencies that have commented on the matter have made clear that a lack of intent is not exculpatory should an AI system result in discriminatory practices. Internal audits should be considered to map the AI used within a company and assess the need to establish ethics principles and governance (an ethics board, etc.) to control such use.

• Manage liability risks. Navigating multiple increasingly proscriptive, and occasionally conflicting, regulatory regimes and liability concepts will pose a growing array of challenges for companies. Company liability and the service-level landscape warrant careful assessment to minimize exposure to claims based on asserted data protection lapses, malfunction, or bias (e.g., race- or gender-related). Using AI systems, even when
off-the-shelf, can raise specialized questions or concerns in certain contexts, such as in relation to employment matters or public safety. Regulatory compliance should be monitored, and licensing contracts relating to software or data call for careful review to properly allocate liability.

• Protect your AI-related IP rights. AI providers and users generally want to protect their respective IP rights and business data, which may raise additional complexities where AI is involved. For businesses with a multijurisdictional corporate structure, employee or contractor base, or pool of customers or vendors, a key concern will be to protect IP and ensure regulatory compliance in multiple jurisdictions whose governments may approach AI and data regulatory issues in distinctly different manners, and that may restrict the export of data or AI algorithms to each other.

• Integrate AI-specific aspects in M&A transactions.
When conducting an M&A transaction, in particular when an AI system is a key production or target asset, it may be advisable to integrate specific questions within the due diligence to enable identifying any specific risks incurred by AI systems, e.g., in terms of expected restrictions on the market potential of an AI system, the license contracts used for AI systems, whether adequate IP protections have been secured in relevant jurisdictions, the data to be run on AI systems, etc. In addition, the acquisition of AI assets can trigger particular attention under foreign direct investment ex ante control regimes, like CFIUS, which may delay or, in some cases, even prevent the transaction. In each case, attention to these issues in advance can help the parties apportion risk and avoid subsequent delays to closing or post-closing integration.

With each of these issues, legal frameworks are still developing and are subject to change, along with the technology itself, which continues to evolve rapidly as R&D efforts progress and a wider range of organizations focus on adapting AI to their objectives. The law has developed far enough, however, that AI can no longer be regarded as a purely
technical issue confined to the realm of specialists; it is becoming a more mainstream issue for lawmakers, regulators, and practicing lawyers in a range of fields.

LAWYER CONTACTS

This White Paper serves as a starting point for consideration of issues that will in many cases warrant fact-specific review, and we encourage readers to contact the following Jones Day lawyers with questions.

Laurent De Muyter, Brussels, +
Haifeng Huang, Hong Kong/Beijing, +852.3189.7288/+
Carl A. Kukkonen III, San Diego/Silicon Valley, +1.858.314.1178/+
Alexander V. Maugeri, New York, +
Schuyler J. Schouten, San Diego/Washington, +1.858.314.1160/+
Michiru Takahashi, Tokyo, +
Olivier Haas, Paris, +
Jörg Hladjk, Brussels, +
Matthew W. Johnson, Pittsburgh, +
Jeffrey J. Jones, Detroit/Columbus, +1.313.230.7950/+
Mauricio F. Paez, New York, +
Emily J. Tait, Detroit, +
Alexandre G. Verheyden, Brussels, +
Undine von Diemar, Munich, +

ENDNOTES

1 Regulation (EU) 2016/679 of
the European Parliament and of the Council of Apr. 27, 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
2 European Parliament, Panel for the Future of Science and Technology, “Study: The impact of the General Data Protection Regulation (GDPR) on artificial intelligence” (EPRS_STU(2020)641530) (June 2020).
3 Regulation (EU) 2018/1807 of the European Parliament and of the Council of Nov. 28, 2018, on a framework for the free flow of non-personal data in the European Union.
4 Directive (EU) 2019/1024 of the European Parliament and of the Council of June 20, 2019, on open data and the reuse of public sector information (recast). As an EU directive (unlike a directly applicable EU regulation), the Open Data Directive required Member State transposition into national laws by July 16, 2021.
5 Communication from the Commission, “A European Strategy for Data.”
6 Regulation (EU) 2022/868 of the European Parliament and of the Council on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act).
7 See Jones Day Alert, “European Commission Proposes Legislation Facilitating Data Access and Sharing” (Feb. 2022).
8 Directive (EU) 2015/2366. See also European Data Protection Board, “Guidelines 06/2020 on the interplay of the Second Payment Directive and the GDPR” (Dec. 15, 2020).
9 Directive 2019/944 of June 5, 2019, on common rules for the internal market for electricity; Directive 2009/73 of July 13, 2009, concerning common rules for the internal market in natural gas.
10 Directive 2019/770 of May 20, 2019, on certain aspects concerning contracts for the supply of digital content and digital services (Digital Content Directive).
11 Regulation 2007/715, amended by Regulation 2018/858.
12 Directive 2010/40 of July 7, 2010, on the framework for the deployment of intelligent transport systems in the field of road transport and for interfaces with other modes of transport. The Commission published a proposal for a revised ITS Directive in Dec. 2021.
13 The Commission launched a public consultation in March 2022 on the sharing of vehicle-generated data and is expected to publish an EU regulation on access to in-vehicle data in 2023.
14 Proposal for a Regulation on the deployment of alternative fuels infrastructure, July 14, 2021.
15 Regulation 2022/1925 of Sep. 14, 2022, on contestable and fair markets in the digital sector. See Jones Day White Paper, “Digital Markets Act: European Union Adopts New ‘Competition’ Regulations for Certain Digital Platforms” (Aug. 2022).
16 Proposal for a regulation of the European Parliament and of the Council on the European Health Data Space,
COM/2022/197 final.
17 See Draft Horizontal Guidelines, 442.
18 See, e.g., Commission decision of June 30, 2022, in Case AT.40511, where the European Commission made commitments offered by Insurance Ireland, an association of Irish insurers, legally binding under EU antitrust rules. Accordingly, Insurance Ireland must ensure fair and nondiscriminatory access to its Insurance Link information exchange system.
19 See, e.g., Commission decision of Dec. 20, 2012, in Case AT.39654, where the European Commission made commitments offered by Thomson Reuters to create a new license allowing customers, for a monthly fee, to use Reuters Instrument Codes (“RICs”) for data sourced from Thomson Reuters’ competitors.
20 For example, the Open Data Directive limits the exceptions allowing public bodies to charge more than the marginal costs of dissemination for the reuse of their data and strengthens the transparency requirements for public-private agreements involving public-sector information.
21 “Personal Information Protection Law of the People’s Republic of China” (available in Chinese only). See also “China to Start Implementing Restrictions on Cross-Border Transfers of Personal Information,” Jones Day Commentary (Aug. 2022).
22 PIPL, Article 13.
23 PIPL, Article 24.
24 PRC Antitrust Law (2007), Article 9.
25 Measures for the Security Assessment of Outbound Data Transfer (2022), Article 4.
26 See Regulation on Lin-Gang Special Area of China (Shanghai) Pilot Free Trade Zone (2022); Shanghai Data Regulation (2021).
27 Significant amendments to the APPI were recently made in 2020 and 2021. The 2020 amendment in its entirety and the 2021 amendment (available in Japanese only) partially took effect on Apr. 1, 2022. The 2021 amendment will fully take effect on April 1, 2023 (available in Japanese only). An English translation is available
only for the 2020 amendment of the APPI.
28 APPI, Article 21, Para. 1.
29 APPI, Article 27, Para. 1.
30 APPI, Article 28, Para. 1.
31 APPI, Article 28, Para. 2; APPI implementation regulation, Article 17, Para. 2.
32 “Anonymously processed information” is information relating to an individual that is processed such that a specific individual cannot be identified and the original form of the personal information cannot be restored. APPI, Article 2, Para. 2.
33 APPI, Article 20, Para. 2.
34 APPI, Article 27, Para. 2.
35 “Act on Anonymized Medical Data That Are Meant to Contribute to Research and Development in the Medical Field” (available in Japanese only).
36 Next Generation Medical Infrastructure Act, Article 30.
37 The most updated version is version 2.1, published in Aug. 2021 (available in Japanese only). The draft of version 2.2 was published for public comments on June 30, 2022 (available in Japanese only).
38 “Study Group on Data and Competition Policy” (available in Japanese only).
39 “Act on Prohibition of Private Monopolization and Maintenance of Fair Trade (Act No. 54 of April 14, 1947).”
40 Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee, and the Committee of the Regions, Artificial Intelligence for Europe, COM/2018/237 final.
41 Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee, and the Committee of the Regions, Fostering a European approach to artificial intelligence, COM(2021) 205 final.
42 Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final. The proposed AI Regulation is accompanied by a proposal for a new Regulation on Machinery Products, which focuses on the safe integration of an AI system into machinery.
43 Proposal for a Regulation of the European Parliament and of the Council on general product safety.
44 Directive 2001/95/EC of the European Parliament and of the Council
of Dec. 3, 2001, on general product safety.
45 Regulation (EU) 2019/881 of the European Parliament and of the Council of Apr. 17, 2019, on ENISA and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act).
46 Regulation (EU) 2017/745 of the European Parliament and of the Council of Apr. 5, 2017, on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC.
47 Regulation (EU) 2017/746 of the European Parliament and of the Council of Apr. 5, 2017, on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU.
48 Proposal for a Regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and amending Regulation (EU) 2019/1020.
49 “The State Council on Printing and Distributing Notice of the New Generation Artificial Intelligence Development Plan” (available in Chinese only).
50 “Notice of the Ministry of Science and Technology on Printing and Distributing the Guidelines for the Construction of National New Generation Artificial Intelligence Open Innovation Platforms” (available in Chinese only).
51 “Notice of the Ministry of Science and Technology on Printing and
Distributing the Guidelines for the Construction of National New Generation Artificial Intelligence Innovation and Development Pilot Zones (Revised Edition)” (available in Chinese only).
52 Letter from the Ministry of Science and Technology on Supporting Harbin in Building a National New Generation Artificial Intelligence Innovation and Development Pilot Zone (2021); Letter from the Ministry of Science and Technology on Supporting Shenyang to Build a National New Generation Artificial Intelligence Innovation and Development Pilot Zone (2021); Letter from the Ministry of Science and Technology on Supporting Zhengzhou to Build a National New Generation Artificial Intelligence Innovation and Development Pilot Zone (2021).
53 The State Forestry and Grassland Administration issued the Guiding Opinions on Promoting the Development of Artificial Intelligence in Forestry and Grassland in 2019.
54 The Ministry of Education, the National Development and Reform Commission, and the Ministry of Finance issued the Opinions on Promotion of Discipline Integration and Postgraduate Training in the Field of Artificial Intelligence in Colleges and Universities in 2020. These Opinions discuss course and subject development, international exchange of talent, cooperation with enterprises, and funding support, among others. The Ministry of Education also issued an Action Plan for AI Innovation in Colleges and Universities in 2018, aiming to enhance AI-related research, education, talent training, innovation, and application.
55 The State Food and Drug Administration issued the Guiding Principles for the Classification and Definition of Artificial Intelligence Medical Software Products, requiring registration and approval for AI-related medical software products and management of such products according to their medical instrument classification type.
56 The General Office of the Ministry of Housing and Urban-Rural Development approved Beijing and Shenzhen to experiment with using artificial intelligence to review construction drawings. Letter from the General Office of the Ministry of Housing and Urban-Rural Development on the approval of Beijing to carry out the pilot project of using artificial intelligence to review construction drawings (2020); Letter from the General Office of the Ministry of Housing and Urban-Rural Development on the approval of Shenzhen to carry out the pilot project of using artificial intelligence to review construction drawings (2020).
57 “Notice on Issuing the Guidelines for the Practice of Network Security Standards-Guidelines for Prevention of Artificial Intelligence Ethical Security Risks” (available in Chinese only).
58 “AI Governance in Japan Ver. 1.1: Report from the Expert Group on How AI Principles Should Be Implemented,” July 9, 2021.
59 Version 1.0 of the AI Governance Guidelines was published for soliciting public comments on July 9, 2021. It was then finalized and published as Version 1.1 on Jan. 28, 2022.
60 See AI Governance Guidelines, A. Introduction, 4. “How to Use the
AI Governance Guidelines.”
61 Id.
62 The Conference toward AI Network Society, “Draft AI R&D Guidelines for International Discussions” (July 28, 2017).
63 The Conference toward AI Network Society, “AI Utilization Guidelines: Practical Reference for AI Utilization” (Aug. 9, 2019).
64 “Report 2022: Further Promotion of Social Implementation of Safe, Secure and Reliable AI” (available in Japanese only).
65 Council Directive 85/374/EEC.
66 See, in particular, the Staff Working Document on Liability accompanying the EU AI Strategy (SWD(2018) 137).
67 For example, in a 2019 report, the Commission’s Expert Group on Liability and New Technologies examined liability issues in connection with AI technologies. The Expert Group concluded that contractual or tort liability systems do exist in the Member States, but these insufficiently cover all circumstances that justify liability. Consequently, it remains necessary to close these liability gaps.
68 On June 30, 2021, the Commission also issued an inception impact assessment. On Oct. 20, 2020, the European Parliament adopted a resolution, which included a draft for a Regulation on liability for the operation of Artificial Intelligence systems.
69 Commission consultation, “Civil Liability: Adapting Liability Rules to the Digital Age and Artificial Intelligence.”
70 “Nine Model Civil Cases of Judicial Protection of Personality Rights after the Issuance of the Civil Code Published by the Supreme People’s Court (2022)” (available in Chinese only).
71 Civil Code (Part I, Part II, and Part III).
72 Product Liability Act (Act No. 85 of 1994).
73 The term “defect” is defined as a lack of safety that a product should normally have, taking into account the characteristics of the product, the normally foreseeable usage manner, the time at which the manufacturers, etc., delivered the product, and other circumstances of the product. (Article 2, Para. 2.)