
Picking the Right Policy Solutions for AI Concerns
By Hodan Omaar and Daniel Castro | May 20, 2024

INTRODUCTION

Policymakers find themselves amid a chorus of calls demanding that they act swiftly to address risks from artificial intelligence (AI). Concerns span a spectrum of social and economic issues, from AI displacing workers and fueling misinformation to threatening privacy, fundamental rights, and even human civilization. Some concerns are legitimate, but others are not. Some require immediate regulatory responses, but many do not. And a few require regulations addressing AI specifically, but most do not. Discerning which concerns merit responses, and what types of policy action they warrant, is necessary to craft targeted, impactful, and effective policies to address the real challenges AI poses while avoiding unnecessary regulatory burdens that will stifle innovation.

This report covers 28 of the prevailing concerns about AI, and for each one describes the nature of the concern, if and how the concern is unique to AI, and what kind of policy response, if any, is appropriate. To be sure, there are additional concerns that could have been included and others that will be raised in the future, but from a review of the literature on AI and the growing corpus of AI regulatory actions, these are the major concerns that policymakers have to contend with. This report takes 28 of the concerns du jour and groups them into 8 sections: privacy, workforce, society, consumers, markets, catastrophic scenarios, intellectual property, and safety and security. Each concern could warrant a report of its own, but the goal here is to distill the essence of each concern and offer a pragmatic, clear-eyed response.

For each issue, we categorize the appropriate policy response as follows:

Pursue Regulation That Is

AI-specific: Some concerns about AI are best addressed by enacting or updating regulation that specifically targets AI systems. These regulations may prohibit certain types of AI systems, create or expand regulatory oversight of AI systems, or impose obligations on the developers and operators of AI systems, such as requiring audits, information disclosures, or impact assessments.

General: Some concerns about AI are best addressed by enacting or updating regulation that does not specifically target AI but instead creates broad legal frameworks that apply across various industries and sectors. Examples of these regulations include data privacy laws, political advertising laws, and revenge porn laws.

Pursue Nonregulatory Policies That Are

AI-specific: Some concerns about AI are best addressed by implementing nonregulatory policies that target AI. Examples of these policies include funding AI research and development or supporting the development and use of AI-specific industry standards.

General: Some concerns about AI are best addressed by implementing nonregulatory policies that do not target AI but instead focus on the broader technological and societal context in which AI systems operate. Examples of these policies include job dislocation policies to mitigate the risks of a more turbulent labor market or policies to improve federal data quality.

No Policy Needed

Some concerns are best addressed by existing policies or by allowing society and markets to adapt over time. Policymakers do not need to implement new regulatory or nonregulatory policies at this time.

CONTENTS

1. Privacy
1.1. AI may expose PII in a data breach.
1.2. AI may reveal PII included in training data.

1.3. AI may enable government surveillance.
1.4. AI may enable workplace surveillance.
1.5. AI may infer sensitive information.
1.6. AI may help bad actors harass and publicly shame individuals.

2. Workforce
2.1. AI may cause mass unemployment.
2.2. AI may dislocate blue collar workers.
2.3. AI may dislocate white collar workers.

3. Society
3.1. AI may have political biases.
3.2. AI may fuel deepfakes in elections.
3.3. AI may manipulate voters.
3.4. AI may fuel unhealthy personal attachments.
3.5. AI may perpetuate discrimination.
3.6. AI may make harmful decisions.

4. Consumers
4.1. AI may exacerbate surveillance capitalism.

5. Markets
5.1. AI may enable firms with key inputs to control the market.
5.2. AI may reinforce tech monopolies.

6. Catastrophic scenarios
6.1. AI may make it easier to build bioweapons.
6.2. AI may create novel biothreats.
6.3. AI may become God-like and “superintelligent.”
6.4. AI may cause energy use to spiral out of control.

7. Intellectual property
7.1. AI may unlawfully train on copyrighted content.
7.2. AI may create infringing content.
7.3. AI may infringe on publicity rights.

8. Safety and security
8.1. AI may enable fraud and identity theft.
8.2. AI may enable cyberattacks.
8.3. AI may create safety risks.

OVERVIEW OF POLICY NEEDS FOR AI CONCERNS

Concerns that warrant AI-specific regulations:
1.3. AI may enable government surveillance.

3.6. AI may make harmful decisions.
8.1. AI may enable fraud and identity theft.
8.3. AI may create safety risks.

Concerns that warrant general regulations:
1.1. AI may expose PII in a data breach.
1.5. AI may infer sensitive information.
1.6. AI may help bad actors harass and publicly shame individuals.
3.2. AI may fuel deepfakes in elections.
6.1. AI may make it easier to build bioweapons.
7.3. AI may infringe on publicity rights.

Concerns that warrant AI-specific nonregulatory policies:
1.4. AI may enable workplace surveillance.
3.3. AI may manipulate voters.
3.5. AI may perpetuate discrimination.
6.2. AI may create novel biothreats.
6.3. AI may become God-like and “superintelligent.”
6.4. AI may cause energy use to spiral out of control.
7.1. AI may unlawfully train on copyrighted content.
8.2. AI may enable cyberattacks.

Concerns that warrant general nonregulatory policies:
1.2. AI may reveal PII included in training data.
2.2. AI may dislocate blue collar workers.
2.3. AI may dislocate white collar workers.
3.1. AI may have political biases.
7.2. AI may create infringing content.

Concerns that do not warrant new policies:
2.1. AI may cause mass unemployment.
3.4. AI may fuel unhealthy personal attachments.
4.1. AI may exacerbate surveillance capitalism.
5.1. AI may enable firms with key inputs to control the market.
5.2. AI may reinforce tech monopolies.

1. PRIVACY

Risk 1.1: AI may expose personally identifiable information in a data breach.
Policy needs: General regulations.
Policy solution: Policymakers should require companies to publish security policies to promote transparency with consumers. Congress should pass federal data breach notification legislation.

Risk 1.2: AI may reveal sensitive information included in training data.
Policy needs: General nonregulatory policies.
Policy solution: Policymakers should fund research for privacy- and security-enhancing technologies and support industry-led standards for responsible web scraping.

Risk 1.3: AI may enable government surveillance.
Policy needs: AI-specific regulations.
Policy solution: Congress should direct the Department of Justice (DOJ) to establish guidelines for use by state and local law enforcement in investigations that outline specific use cases and capabilities, including when a warrant is necessary for use, as well as transparency guidelines for when to notify the public of law enforcement using AI.

Risk 1.4: AI may enable workplace surveillance.
Policy needs: AI-specific nonregulatory policy.
Policy solution: Policymakers should help set the quality and performance standards of AI technologies used in the workplace.

Risk 1.5: AI may infer sensitive information.
Policy needs: General regulations.
Policy solution: Policymakers should craft and enact comprehensive national privacy legislation that addresses the risks of data-driven inference in a tech-neutral way.

Risk 1.6: AI may help bad actors harass and publicly shame individuals.
Policy needs: General regulations.
Policy solution: Congress should outlaw the nonconsensual distribution of all sexually explicit images, including deepfakes that duplicate individuals’ likenesses in sexually explicit images, and create a federal statute that prohibits revenge porn, including revenge porn involving computer-generated images.

Issue 1.1: AI May Expose Personal Information in a Data Breach

The issue: Data breaches occur when someone gains unauthorized access to data. For instance, an attacker might circumvent security measures to obtain sensitive data, or an insider might inappropriately access confidential information. Users may share personally identifiable information (PII) with AI systems, such as chatbots offering legal, financial, or health services. In the event of a data breach, the transcripts of these conversations could be exposed and accessed improperly, revealing sensitive information. One example is a much-reported incident involving OpenAI’s ChatGPT chatbot in March 2023: due to a bug in an open-source library the system uses, some users were able to see titles from other users’ chat histories.1 While it is true that AI systems could be subject to data breaches, just like any IT system, they have not created or exacerbated the underlying privacy and security risks. Data breaches have been an unfortunate, yet regular, occurrence for the past two decades. In 2022, there were nearly 1,800 data breaches in the United States impacting hundreds of millions of Americans.2

The solution: Policymakers should address the larger problem of data breaches rather than focus exclusively on data breaches involving AI systems. One thing Congress can do is require companies to publish security policies to promote transparency with consumers. Most companies publish privacy policies, which create a transparent and accountable mechanism for regulators to ensure companies are adhering to their stated policies. But no such practice exists for information security, which has resulted in vague standards, regulation by buzzword, and information asymmetry in markets. By publishing security policies, companies would be motivated to describe the types of security measures they have in place rather than just make claims of taking “reasonable security measures.” This is a concrete step that policymakers can take to improve security practices in the private sector.3 Moreover, Congress should pass data breach notification legislation that preempts conflicting state laws.4 All 50 states, as well as the District of Columbia, Guam, Puerto Rico, and the Virgin Islands, have data breach laws; however, each jurisdiction has its own set of rules on how quickly to report a data breach and to whom a security incident should be reported. This patchwork quilt of differing requirements provides decidedly uneven protection for consumers and creates an unnecessarily complex situation for companies, which must spend more time navigating this murky legal terrain than actually protecting consumer data.5

Issue 1.2: AI May Reveal Personal Information Included in Training Data

The issue: Data leaks occur when AI systems reveal private information included in training data. For example, an AI model trained on confidential user data, such as private contracts or medical records, may unintentionally reveal this private information to users. A case in point was the incident wherein the popular chatbot ChatGPT appeared to reveal some bits of the data it had been trained on when researchers prompted it to repeat random words forever.6 Similarly, AI systems may disclose private information when it is inadvertently included in training data, such as personal information scraped from public websites.7 While data leaks are a legitimate privacy concern, they are not unique to AI. Data leaks were an early concern about search engines too, as attackers could use search engines to discover a trove of sensitive data, such as credit card information, Social Security numbers, and passwords, that was scattered across the Internet, often without the affected individuals’ awareness.8 Internet search engines also widely deploy web crawlers, which are automated programs that index the content of webpages, and nongovernment solutions have been successful in the past at addressing the risks posed by the scraping of publicly available data.

The solution: Policymakers can help minimize or eliminate the need for AI-enabled services to process confidential data, while still maintaining the benefits of those services, by investing in research for privacy- and security-enhancing technologies. These are not specific to AI, but they will have important uses for AI. For instance, policymakers should support additional research on topics such as secure multiparty computation, homomorphic encryption, differential privacy, federated learning, zero-trust architecture, and synthetic data.9 They should also fund research exploring the use of “data privacy vaults” to isolate and protect sensitive data in AI systems.10 In this scenario, any PII would be replaced with deidentified data so that large language models (LLMs) would not have access to any sensitive data, thereby preventing data leaks during training and inference and ensuring only authorized users could access the PII.
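To make the vault idea concrete, here is a minimal sketch assuming a hypothetical vault: the PrivacyVault class, the email-matching pattern, and the token format are all illustrative rather than a real product or API. PII is swapped for opaque tokens before text ever reaches an LLM, and only authorized callers can map the tokens back to the original values.

```python
# A minimal sketch of the "data privacy vault" pattern described above.
# PrivacyVault, EMAIL_RE, and the token format are illustrative, not a
# real library; a production vault would handle many PII types and
# enforce real access controls.
import re
import uuid

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class PrivacyVault:
    """Stores PII outside the model's reach; hands out tokens instead."""
    def __init__(self):
        self._store = {}  # token -> original PII

    def deidentify(self, text: str) -> str:
        # Replace each email address with an opaque token and remember it.
        def swap(match):
            token = f"<pii:{uuid.uuid4().hex[:8]}>"
            self._store[token] = match.group(0)
            return token
        return EMAIL_RE.sub(swap, text)

    def reidentify(self, text: str, authorized: bool) -> str:
        # Only authorized callers get tokens mapped back to real PII.
        if not authorized:
            return text
        for token, value in self._store.items():
            text = text.replace(token, value)
        return text

vault = PrivacyVault()
prompt = vault.deidentify("Contact jane.doe@example.com about the contract.")
print(prompt)  # The LLM only ever sees this tokenized text.
print(vault.reidentify(prompt, authorized=True))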

Regarding AI systems that scrape publicly available data, policymakers should support the already burgeoning set of industry-led standards for web scraping.11 The private sector is already taking steps to give website operators more control over whether AI web crawlers scrape their sites.12 Indeed, many websites can use the existing Robots Exclusion Protocol to restrict web crawlers from popular AI companies.
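For instance, a site operator can disallow OpenAI’s published GPTBot crawler while leaving other crawlers unaffected. The sketch below uses Python’s standard urllib.robotparser to show how a compliant crawler would interpret such a file; the robots.txt contents and the “SearchBot” agent are illustrative.

```python
# A minimal sketch of how a compliant crawler interprets a robots.txt
# that blocks one AI crawler. "GPTBot" is OpenAI's published crawler
# user agent; the file contents and "SearchBot" are illustrative.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/article"))    # False
print(parser.can_fetch("SearchBot", "https://example.com/article"))  # True
```

Compliance with the protocol is voluntary on the crawler’s part, which is one reason supporting broader industry-led standards matters.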

There may be instances when PII ends up on public websites that AI systems scrape and consumers don’t want this information there. Federal data privacy legislation would create a baseline set of consumer rights for how organizations collect and use personal data. This legislation should preempt state laws, ensure reliable enforcement, streamline regulation, and minimize the impact on innovation.13

Issue 1.3: AI May Enable Government Surveillance

The issue: AI makes it easier to analyze large volumes of data, including data about individuals, which may lead to increased government surveillance. For instance, governments can track individuals in public spaces, such as through facial recognition technology, or infer sensitive information about individuals based on less-sensitive data. There can be legitimate reasons for this concern. Governments in certain countries have disturbing histories of intruding into the private lives of their citizens, and many fear that they may revert to this type of activity in the future. And some countries, such as China, significantly limit the personal freedoms of their citizens and use surveillance to threaten human rights. Indeed, critics point out that China uses AI-enabled tracking and emotion-recognition technology as part of its domestic surveillance activities, most notably against its Uyghur population, and argue that democratic nations should not use the same technology.14 They fear a slippery slope wherein Western governments might exploit AI for nefarious purposes that trample on citizens’ basic rights.

The solution: Law enforcement agencies should take preemptive steps to recognize the potential impacts of AI on perceptions of acceptable government use of technology for law enforcement activities. Congress should direct DOJ to establish guidelines for use by state and local law enforcement in investigations that outline specific use cases and capabilities, including when a warrant is necessary for use, as well as transparency guidelines for when and how to notify the public of AI use by law enforcement officials. The Facial Recognition Technology Warrant Act, introduced in 2019, would require federal law enforcement to obtain a court order before using facial recognition technology to conduct targeted ongoing public surveillance of an individual; it could serve as a useful model to establish limitations on use, legal requirements for appropriate use, transparency, and approval processes for other AI-enabled law enforcement technologies.15 In addition, as new AI products for law enforcement become available, they should undergo a predeployment review to ensure they meet First and Fourth Amendment protection standards, just as any new technology should. Such assessments should be conducted by federal officials familiar with existing legal requirements and potential applications. DOJ should also conduct independent testing of police tech, as the National Institute of Standards and Technology (NIST) has done for facial recognition algorithms through its Face Recognition Vendor Test, to ensure the technology is accurate and unbiased.16 The General Services Administration (GSA) should establish guidelines to assist agencies in complying with existing government-wide privacy requirements when implementing AI solutions. These guidelines should address different government use cases, including training, service provision, and research.

Issue 1.4: AI May Enable Workplace Surveillance

The issue: One concern about the use of AI in the workplace is that employee monitoring may become unduly invasive, stemming in part from the fact that workers may not know how or when their employers are using the technology. For instance, the Trades Union Congress (TUC), a national trade union center representing 48 unions across the United Kingdom, published a report in 2020 finding that 50 percent of U.K. employees believe their companies may be using AI systems they are not aware of.17 A more complex concern is that the data AI systems collect can reveal, or enable employers to infer, information with varying sensitivity levels, which, if misused, risks autonomy violations. Consider an AI system with eye-tracking technologies that monitors the behavior of delivery drivers by tracking their gaze patterns. Many studies have found that people with autism react differently to stimuli when driving, so an employer may infer from eye-tracking AI software which drivers have autism, even though employees may want to keep this information private.18 However, as a general rule, employees in the United States have little expectation of privacy while on company grounds or using company equipment, including company computers or vehicles, according to judicial rulings by U.S. courts and existing federal laws.19 Addressing AI surveillance concerns with AI-specific regulation would not align with the current legal framework for employee privacy, and therefore any legal reforms should address employee privacy expectations more broadly.

The solution: Policymakers should support the responsible adoption of AI in the workplace, including by helping set the quality and performance standards of AI technologies used in the workplace.20 For instance, they should fund independent testing of commercial systems that measure the behaviors and performance of employees, much like the U.S. Department of Commerce did when it launched a multistakeholder process for the commercial use of facial recognition; in June 2016, a group of stakeholders reached consensus on a set of best practices that offered guidelines for protecting consumer privacy.21 Doing so would help fill knowledge gaps ranging from the accuracy of different workplace tools to their efficacy to the potential uses of these technologies in specific workforce-related applications. Additionally, the Equal Employment Opportunity Commission (EEOC) should investigate potential autonomy violations from processing employee data as part of its AI and algorithmic fairness initiative. There is currently no comprehensive understanding of the adoption, design, and impact of AI tools that process employee data.22 The EEOC’s agency-wide initiative currently focuses on potential harms from bias and discrimination, but the work it is doing to hold listening sessions with key stakeholders about algorithmic tools and their employment ramifications would be valuable for gaining insights into potential autonomy violations.

Issue 1.5: AI May Infer Sensitive Information

The issue: AI can infer information about people’s identities, habits, beliefs, preferences, and medical conditions, including information that individuals may not know themselves, based on other data about those individuals. AI systems can use computational techniques, such as machine learning, to make data-driven inferences. For instance, an AI system may be able to detect rare genetic conditions from an image of a child’s face, or AI-enabled online advertising may infer information about users, such as predicting their age or political leanings based on their online activity. Disclosure of such information without a user’s consent or knowledge can lead to significant reputational harm or embarrassment socially, politically, or professionally when the nature of the inferred information is particularly sensitive or highly personal. While data-driven inferences may present novel risks, these types of inferences can also occur in the absence of AI systems, using standard statistical methods.

The solution: Policymakers should craft and enact comprehensive national privacy legislation that addresses the risks of data-driven inference in a tech-neutral way. This would better position regulators and developers alike to ensure necessary safeguards are consistently implemented as these technologies continue to evolve.23 Policymakers should enact privacy legislation that establishes clear guidelines for the collection, processing, and sharing of various types of data with consideration for varying levels of sensitivity; implements user data privacy rights and safeguards against risks of harm; and strengthens notice, transparency, and consent practices to ensure users can make informed decisions about the data they choose to share, including sensitive biometric and biometrically derived information. Because biometric information is central to many emerging tech use cases and carries inference-related risks, any privacy regulations should include clear definitions of biometric identifying and biometrically derived data and set transparency, consent, and choice requirements consistent with the purpose of its collection and risks of harm. The relevant federal agencies and regulatory bodies that oversee existing privacy regulations should also provide explicit guidance on their application to any new questions arising from AI. For example, the Department of Health and Human Services could offer guidance on when predictions made by AI systems constitute “protected health information” under HIPAA (the Health Insurance Portability and Accountability Act).

Issue 1.6: AI May Help Bad Actors Harass and Publicly Shame Individuals

The issue: AI makes it easier to create fake images, audio, and videos of individuals, which can be used to harass them and harm their personal and professional reputations. Deepfakes, a portmanteau of “deep learning” and “fake,” have been around since the end of 2017, created mostly by people editing the faces of celebrities into pornography. As with all types of nonconsensual pornography, deepfake revenge porn that portrays an individual in a sexual situation that never actually happened can have devastating consequences for victims’ lives and livelihoods. More recently, deepfakes have raised risks for noncelebrities too. In one recent case, students at a New Jersey high school allegedly used AI image generators to produce fake nude images of their female classmates.24 Deepfakes present a unique challenge, as they can fool both humans and computers, which makes it difficult to moderate this content. While the private sector is taking this concern seriously, and companies such as Google, Adobe, and Meta have announced significant partnerships with academic researchers to explore technical solutions, current deepfake detection technologies, such as digital watermarks, embedded metadata, and uploading media to a public blockchain, have limited effectiveness.25 This makes non-technical solutions focused on limiting the spread of deepfakes key.

The solution: Policymakers should implement policies that seek to stop the distribution of this content. There is currently no federal law criminalizing nonconsensual pornography, though such laws exist in 48 states and the District of Columbia, and the Violence Against Women Act Reauthorization Act of 2022 allows victims of nonconsensual pornography to sue for damages in federal court.26 Additionally, 16 states have laws addressing deepfakes.27 Congress should outlaw the nonconsensual distribution of all sexually explicit images, including deepfakes that duplicate individuals’ likenesses in sexually explicit images, and also create a special unit in the Federal Bureau of Investigation (FBI) to provide immediate assistance to victims of actual and deepfake nonconsensual pornography. Moreover, most of the laws criminalizing revenge porn (intimate images and videos of individuals shared online without their permission) do not include computer-generated images, and only about a dozen states have updated their laws to close this loophole. Here too, Congress has an opportunity to act by creating a federal statute that prohibits such activity. The Preventing Deepfakes of Intimate Images Act, introduced in May 2023, would update the Violence Against Women Act to extend civil and criminal liability to anyone who discloses or threatens to disclose digitally created or altered media containing intimate depictions of individuals with the intent to cause them harm or with reckless disregard for the potential harm.28

2. WORKFORCE

Risk 2.1: AI may cause mass unemployment.
Policy needs: No policies needed.
Policy solution: Policymakers do not need to focus on concerns about mass unemployment from AI adoption because the economic evidence does not support this materializing.

Risk 2.2: AI may dislocate blue collar workers.
Policy needs: General nonregulatory policies.
Policy solution: Policymakers should support full employment, nationally and regionally, not just with macroeconomic stabilization policies but also with robust regional economic development policies; ensure as many workers as possible have needed education and skills before they are laid off; reduce the risk of income loss and other financial hardships when workers are laid off; and provide better transition assistance to help laid-off workers find new employment.

Risk 2.3: AI may dislocate white collar workers.
Policy needs: General nonregulatory policies.
Policy solution: Policymakers should ensure that job dislocation policies and programs support all workers whose jobs are impacted by automation so they can train for new jobs. They should also proactively support IT modernization in the public sector, including the adoption of generative AI.

Issue 2.1: AI May Cause Mass Unemployment

The issue: In a 2023 discussion with British Prime Minister Rishi Sunak, tech entrepreneur Elon Musk predicted that AI would make all jobs obsolete, stating that “you can have a job if you want a job but the AI will be able to do everything.”29 Some economists, such as Anton Korinek, economics professor at the Darden School of Business at the University of Virginia, share Musk’s belief in a potentially jobless future. In 2023 testimony before the U.S. Senate, Korinek warned that AI systems, if able to match human cognitive abilities, could lead to the obsolescence of human workers.30 Korinek further argued that there is about a 10 percent chance AI systems reach artificial general intelligence (AGI) in the near future, which could lead to widespread devaluation of human work in all areas.31 However, many concerns about traditional AI leading to mass unemployment are typically based on the “lump of labor fallacy,” the idea that there is a fixed amount of work, and thus productivity growth will reduce the number of jobs.32 The logic goes: if there is a fixed amount of work and workers can now produce twice as much as before, half of the previous workforce becomes jobless. But the data shows this is not the case. Labor productivity has grown steadily for the past century (even if that growth has been slower recently) and unemployment is near an all-time low.33 AI will likely bring changes to the types of work people do and create disruptions, but the economy has mechanisms and institutions in place to adapt and maintain overall employment levels, as long as policymakers effectively manage these transitions. The challenge of AI is therefore not mass unemployment but greater levels of worker transition. The concerns about joblessness from AGI hinge on the existence of AGI, which is a speculative scenario that may take decades to arrive or may never fully materialize. There is no scientific consensus saying it will or is likely to.

The solution: Policymakers do not need to focus on concerns about mass unemployment from traditional AI adoption because the economic evidence does not support this materializing.

Issue 2.2: AI May Displace Blue Collar Workers

The issue: Before the very recent advent of generative AI, concerns about job dislocation centered around AI-enabled automation and robotics. The main concern has been that these technologies will lead to the elimination of certain blue collar jobs because machines can perform repetitive and routine tasks more efficiently than humans, with jobs in industries such as manufacturing, data entry, and customer service being particularly vulnerable. It is true that AI-enabled automation will eliminate some blue collar jobs, much like earlier general-purpose technologies, such as the steam engine or electricity, automated jobs of the past, but the first thing to note is that the current evidence of adoption shows that there is not a tsunami of destruction as some fear. Few companies that have blue collar jobs currently use AI in a significant way. In manufacturing, where the advent of AI can transform how firms design, fabricate, operate, and service products, as well as the operations and processes of manufacturing supply chains, 89 percent of manufacturers report that they are not using AI at all, according to a 2022 report from the National Science Foundation (NSF).34 In key manufacturing industries such as machinery, electronic products, and transportation equipment, less than 7 percent of companies report using AI as a production technology in any capacity. The same is true in nonmanufacturing industries; less than 3 percent of companies in retail trade reported using AI.35 While these numbers will grow, the rate of adoption, as with all other technologies in the past, is likely to be slow. But more importantly, AI-enabled automation will be a net good if there are policies in place to ensure those who are dislocated transition easily into new jobs and new occupations. AI-enabled automation (indeed, all automation) allows workers to be more productive, and more productivity growth is a path to economic and income growth that benefits society. This is because better tools enable companies to produce better products and provide services more efficiently. By boosting productivity, workers can earn more and companies can lower prices, both of which increase living standards.

The solution: Policymakers should ensure that workers are better positioned to navigate a potentially more turbulent, but ultimately beneficial, labor market.36 Policymakers should support full employment, nationally and regionally, not just with macroeconomic stabilization policies but also with robust regional economic development policies; ensure as many workers as possible have needed education and skills before they are laid off; reduce the risk of income loss and other financial hardships when workers are laid off; and provide better transition assistance to help laid-off workers find new employment.

Issue 2.3: AI May Displace White Collar Workers

The issue: Some people are concerned that generative AI will eliminate white collar jobs. A headline from The New York Times in August 2023 encapsulates this sentiment: “In Reversal Because of A.I., Office Jobs Are Now More at Risk.”37 But policymakers should not mistake technical feasibility for economic viability.38 Just because a job is exposed to LLM automation does not mean the technology is likely to replace white collar workers rather than merely augment their skills.39 Tools such as ChatGPT might be able to draft a legal document in half the time a human legal secretary can, but that doesn’t necessarily mean law firms can or should substitute LLMs for their staff, as these tools are still at a stage where they can misrepresent key facts and cite evidence that doesn’t exist; they still need humans to verify and check their outputs. Instead, AI can revalorize the jobs still performed best by humans, such as nursing and teaching, making people’s skills more valuable and supplementing a diminishing workforce.40 David Autor, an economics professor at the Massachusetts Institute of Technology (MIT) who has spent his career exploring how technological change affects jobs, wages, and inequality, underscored this point when he wrote that “the unique opportunity that AI offers to the labor market is to extend the relevance, reach, and value of human expertise.”41 Moreover, other MIT researchers published a recent paper examining the productivity effects of ChatGPT on mid-level professional writing tasks and found that using the chatbot increased not only productivity but job satisfaction too.42

The solution: Policymakers should ensure that job dislocation policies and programs support all workers whose jobs are impacted by automation so they can train for new jobs, including through regional economic development policies, skills retraining policies, and transition assistance policies. Policymakers should also proactively support IT modernization in white collar roles in the public sector, including the adoption of generative AI, to ensure workers reap the productivity, efficiency, and societal gains. The federal government struggles with a variety of challenges, such as slow services and backlogs, significant administrative burden and bureaucratic processes, and impending budget constraints. Taking advantage of new tools at its disposal, including generative AI, will boost mission delivery, help reduce the perceived risk of the technology, and boost domestic demand for AI.43

3. SOCIETY

Risk 3.1: AI may have political biases.
Policy needs: General nonregulatory policy.
Policy solution: Policymakers should treat chatbots like the news media, which is subject to market forces and public scrutiny but is not directly regulated by the government when it comes to expressing political perspectives.

Risk 3.2: AI may fuel deepfakes in elections.
Policy needs: General regulation.
Policy solution: Policymakers should update state election laws to make it unlawful for campaigns and other political organizations to knowingly distribute materially deceptive media.

Risk 3.3: AI may manipulate voters.
Policy needs: AI-specific nonregulatory policy.
Policy solution: Policymakers should update digital literacy programs to include AI literacy, which teaches individuals to understand and use AI-enabled technologies.

Risk 3.4: AI may fuel unhealthy personal attachments.
Policy needs: No policy needed.
Policy solution: There is not enough evidence of impacts on society yet.

Risk 3.5: AI may perpetuate discrimination.
Policy needs: AI-specific nonregulatory policy.
Policy solution: Policymakers should support the development of tools that help organizations provide structured disclosures about AI models and related data.

Risk 3.6: AI may make harmful decisions.
Policy needs: AI-specific regulation.
Policy solution: Policymakers should consider prohibiting the government from using AI systems in certain high-risk, public sector contexts. They should upskill regulators with better AI expertise and develop tools to monitor and address sector-specific AI risks, as the United Kingdom has done.

Issue 3.1: AI May Have Political Biases

The issue: Both sides of the aisle accuse AI companies of designing tools that reflect the partisan views of the companies’ leadership. The most pervasive concerns come from conservatives, who argue generative AI systems display a liberal bias and cite plenty of anecdotal evidence to back up their claims. One of the most oft-reported anecdotes in early 2023 was a claim made on the microblogging site X that ChatGPT wrote an ode to President Biden when prompted but declined to write a similar poem about former President Donald Trump.44 More recently, Google decided to block its AI image generator Gemini from generating images of people after it was criticized for depicting specific white figures, such as the U.S. Founding Fathers or German soldiers, as people of color.45 However, there is limited academic research into whether generative AI systems display anti-conservative bias, and some of the research supporting concerns of anti-conservative bias has been heavily critiqued. For instance, when the prompts from a paper published in the social science journal Public Choice, which found that ChatGPT was more predisposed to answer in ways that aligned with liberal parties internationally, were replicated in a different order by other researchers, ChatGPT exhibited bias in the opposite direction, in favor of Republicans.46 That is not to say chatbots may not exhibit political biases. They very well might lean toward certain ideologies or orientations in their answers, either intentionally or inadvertently, but it would be impossible to build an “unbiased” chatbot because bias itself is relative: what one person considers neutral, another might not.47 Some bias in generative AI systems may be the unintentional result of attempts to implement technical safeguards. Google’s AI generator, for instance, was designed to maximize diversity in an effort to keep the system from amplifying racial and gender stereotypes but resulted in an overcorrection.48

The solution: First Amendment protections place limits on what policymakers can do to regulate AI chatbots’ answers on political speech. The best course of action is for policymakers to treat chatbots like the news media, which is subject to market forces and public scrutiny but is not directly regulated by the government when it comes to expressing political perspectives.49 The availability of open source AI models means people of all political backgrounds can create their own custom AI models and evaluate potential biases in their responses. Independent third-party testers can also evaluate proprietary chatbots to see the extent to which they are biased, much like media watchdog organizations scrutinize the news media. For instance, in January 2023, a team of researchers at the Technical University of Munich and the University of Hamburg posted a preprint of an academic paper explaining how they had prompted ChatGPT with 630 political statements and claimed to uncover the chatbot’s “pro-environmental, left-libertarian ideology.”50 Policymakers can foster oversight and accountability by funding more research into how to measure political bias in AI models through NSF.

Issue 3.2: AI May Fuel Deepfakes in Elections

The issue: Individuals or organizations seeking to influence elections, including foreign adversaries, may exploit advances in generative AI to create realistic media that appears to show people doing or saying things that never happened, a type of media commonly referred to as “deepfakes.” Deepfakes have the potential to influence elections. For example, voters may believe false information about candidates based on fake videos that depict them making offensive statements they never made, thus hurting their electoral prospects. Similarly, a candidate’s reputation could be harmed by deepfakes that use other people’s likenesses, such as a fake video showing a controversial figure falsely supporting that candidate.51 For example, in June 2023, Florida Governor Ron DeSantis’s campaign shared an attack ad showing fake AI-generated images of his primary opponent, former President Donald Trump, hugging former health official Dr. Anthony Fauci.52 Finally, if deepfakes become commonplace in elections, voters may simply no longer believe their own eyes and ears, and they may distrust legitimate digital media showing a candidate’s true past statements or behaviors. Policymakers are rightfully concerned that bad actors will exploit advances in generative AI to influence elections. The public and private sectors have already launched multiple initiatives to create technical solutions to address deepfakes, including research to identify fake content and developing standards to improve attribution for authentic content.53 But focusing exclusively on technical interventions, as many proposed legislative bills seek to do, will not comprehensively address the risk, though some technical interventions are worthwhile.

The solution: State lawmakers should update state election laws to make it unlawful for campaigns and other political organizations to knowingly distribute materially deceptive media that uses a person’s likeness to injure a candidate’s reputation or manipulate voters into voting against that candidate without a clear and conspicuous disclosure that the content they are viewing is fake. Such a requirement would prevent, for example, an opposing campaign from running advertisements using deepfakes without full transparency to potential voters that the media is fake. This transparency requirement should apply to all deceptive media in elections, regardless of whether it is produced with AI. State election laws should focus on setting rules for the political organizations that create and share deepfakes, not on the intermediaries, such as email providers, streaming video providers, or social media networks, used by political operatives to share this content.54 Policymakers should pair these rules with effective enforcement mechanisms. Otherwise, a campaign could spread deepfakes about an opponent a few days before an election knowing that no oversight and consequences would occur until after people had voted.55

Issue 3.3: AI May Manipulate Voters

The issue: AI is changing how candidates for elected office conduct their campaigns. In 2023, there were a smattering of examples of generative AI being used in U.S. political ads, raising concerns that AI-driven political persuasion could lead to the dissemination of manipulative content. The Democratic Party tested the use of generative AI tools to write first drafts of some fundraising messages in March 2023.55 Some worry that political operatives could use AI to craft personalized messages to manipulate voters at scale with targeted disinformation.56 For example, campaigns could flood voters’ social media feeds with AI-created political propaganda designed around their interests. However, while AI may make this problem more acute, the core of the issue is electoral harms from deceptive political outreach and advertising, not specific technologies.

The solution: First Amendment protections place limits on what policymakers can do. The best course of action at this time is for policymakers to update digital literacy programs to include AI literacy.57 AI literacy teaches individuals to understand and use AI-enabled technologies. Whereas existing digital literacy programs might teach individuals how to use a search engine effectively, how to evaluate different sources, and how to interpret statistics, AI literacy would help individuals understand how to spot deepfakes and how to verify whether the results of a ChatGPT prompt are factual or not. Furthermore, there are existing federal laws against fraudulent misrepresentation in campaign communications and existing federal civil rights laws that prohibit the use of misinformation to deprive people of their right to vote.58 DOJ and state attorneys general should commit to enforcing existing civil rights protections related to the electoral process for AI, just as U.S. law enforcement agencies committed to enforcing existing laws for civil rights, fair competition, consumer protection, and equal opportunity for AI systems in early 2023.59 Congress and state policymakers should support these efforts by allocating funding for law enforcement to explore how best to safeguard the electoral process in new technological contexts.

Issue 3.4: AI May Fuel Unhealthy Personal Attachments

The issue: AI companions, which are AI systems designed to interact with humans in a way that mimics companionship or friendship in the form of chatbots, virtual assistants, or even physical robots, are raising concerns about isolation and the formation of unrealistic societal expectations. Some experts are concerned that relying on AI companions may hinder individuals from forming genuine human relationships, leading to increased social isolation. This isolation could have negative effects on mental health and well-being.60 Other experts, such as Dorothy Leidner, who teaches business ethics at the University of Virginia, worry that the idealized representations in physical appearance and emotional responses that AI companions present could lead to a distorted perception of what is considered normal or desirable in human interactions, impacting broader cultural expectations in relationships and behavior.61 Speculating about the role of AI in loneliness is not surprising, as a Washington Post series on technology and loneliness states: “one of our national pastimes is guessing who or what is responsible for loneliness, the ancient human condition. Is it social media? Remote work? The nuclear family? Not enough sidewalks?”62 But the question of whether any technology, including AI, impacts loneliness is too broad and lacks the necessary nuance to understand its specific effects. It’s crucial to consider specific types of technology, who is using them, and their purposes. For instance, a 2023 study from Stanford University researchers finds that about 50 percent of older adults believe using virtual reality (VR) alongside their caregivers is “very or extremely” beneficial to their relationship.63 Meanwhile, social media apps such as TikTok have become a resource for parents to discuss loneliness, online dating apps have become the most common way romantic couples meet, and friend-making apps are becoming a boon for young adults.

The solution: AI companions are not inherently detrimental to social well-being. Policymakers should recognize the diverse ways in which this technology could impact loneliness and social connections. There is little to no research on which segments of society are using AI companions and for what purposes, and therefore it is not yet possible to gauge the impacts on society. Without sufficient data to understand the full scope of impacts on society, policymakers should exercise caution in their approach, lest they inadvertently hinder unforeseen benefits.

Issue 3.5: AI May Perpetuate Discrimination

The issue: A concern about AI systems is that they may mirror and amplify existing biases and discrimination in society, leading to unfair and unjust outcomes. Biased algorithms may produce results or decisions that systemically treat certain individuals less favorably than similarly situated individuals due to a protected characteristic such as their race, sex, religion, disability, or age.64 There have long been calls for policymakers to mitigate these risks by requiring algorithmic transparency, explainability, or both, or to create a master regulatory body to oversee algorithms. While the concern of biased AI is legitimate, U.S. regulators have acknowledged that existing civil rights laws apply to AI systems and new authorities are not necessary to effectively oversee the use of this technology at this time.65 Many new regulatory solutions proposed thus far would be inadequate. Some are impractical, such as those that would require audits for all high-risk AI systems, because the ecosystem for AI audits is still immature, while others would stifle innovation, such as by prohibiting the use of algorithms that cannot explain their decision-making despite being more accurate than those that can.66

The solution: Policymakers should focus on supporting the development of tools that would help organizations provide structured disclosures about AI models and related data to bolster much-needed information flows along the AI value chain that could identify and remedy harmful bias and generally foster AI accountability.67 The National Telecommunications and Information Administration’s (NTIA’s) AI Accountability Report in 2024 rightly recommends that federal agencies improve standard information disclosures using artifacts such as datasheets, model cards, system cards, technical reports, and data nutritional labels.68
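As an illustration of what such a structured disclosure can look like, the sketch below renders a hypothetical model card as a machine-readable record; every field and value is invented for this example rather than drawn from the NTIA report or any real system.

```python
# A minimal sketch of a "model card" style structured disclosure.
# All names and numbers below are hypothetical, for illustration only.
import json

model_card = {
    "model_name": "loan-screening-v2",          # hypothetical system
    "developer": "Example Bank AI Lab",
    "intended_use": "Pre-screening consumer loan applications",
    "out_of_scope_uses": ["employment decisions", "criminal justice"],
    "training_data": "2015-2023 anonymized application records",
    "evaluation": {
        "overall_accuracy": 0.91,
        "false_negative_rate_by_group": {"group_a": 0.06, "group_b": 0.09},
    },
    "known_limitations": "Under-represents applicants with thin credit files",
}

print(json.dumps(model_card, indent=2))
```

Machine-readable disclosures along these lines are what make information flows along the AI value chain auditable at scale.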

Some countries have proposed mandating specific data for AI training to mitigate issues of bias emanating from source data, but doing so is problematic and typically at odds with the technical realities faced by AI developers. Instead, policymakers should proactively improve datasets by ensuring the fair and equitable representation and use of data for all Americans, including improving federal data quality by developing targeted outreach programs for underrepresented communities; enhancing data quality for non-government data; directing federal agencies to update or establish data strategies to ensure data collection is integrated into diverse communities; and amending the Federal Data Strategy to identify data divides and direct agency action.69 Federal agencies should support the development of best practices for dataset labeling and annotation, and aid the development of high-quality, application-specific training and validation data in sensitive and high-value contexts, such as healthcare and transportation.

Issue 3.6: AI May Make Harmful Decisions

The issue: As the public and private sectors increasingly rely on algorithms in high-impact sectors such as consumer finance and criminal justice, a flawed algorithm may potentially cause harm at higher rates. When these algorithms make mistakes, the sheer volume of their decisions could end up significantly amplifying the potential negative impact of these flaws. Consider a human decision-maker at a bank evaluating loan applications. They might evaluate only a handful of loan applications per week, routinely making errors along the way. However, a flawed algorithm misevaluating hundreds of loan applications per week across an entire bank branch would clearly cause harm on a much larger scale. In many cases, flawed algorithms hurt the organization using them. Banks making loans would be motivated to ensure their algorithms are accurate because, by definition, errors such as granting a loan to someone who should not receive one or not granting a loan to someone who is qualified cost banks money. However, using an AI system to make decisions in certain contexts may introduce more potential for harm when multiple entities use the same one, even if an algorithmic tool is more accurate than human evaluators and less error prone than other tools on the market.70 This is somewhat analogous to monoculture in agriculture, wherein a lack of diversity in crops can make the entire system vulnerable to widespread failures. For example, imagine multiple banks using the same algorithmic model to screen and assess loan applications. Even though it might be rational for each bank in isolation to adopt an algorithm, accuracy can become lower than using human evaluators when multiple entities use the same one. While this seems counterintuitive, the potential for this result derives from how the probabilistic properties of rankings work. The key thing is, in some contexts, independence may be more important than accuracy for reducing errors.
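A toy simulation makes that intuition concrete. In the sketch below the error rates are assumed purely for illustration: a qualified applicant applies to three banks, where independent human evaluators each err 20 percent of the time, while a shared, more accurate model errs 10 percent of the time, and its single mistake repeats at every bank.

```python
# A toy Monte Carlo illustration, under assumed error rates, of why
# independent evaluators can beat a single shared, more accurate model:
# with one shared model, its mistake repeats at every bank, while
# independent human errors rarely line up against the same applicant.
import random

random.seed(0)
TRIALS = 100_000
BANKS = 3
P_HUMAN = 0.20  # assumed per-decision error rate of a human evaluator
P_MODEL = 0.10  # assumed error rate of the shared algorithm (more accurate)

shut_out_by_humans = 0
shut_out_by_model = 0
for _ in range(TRIALS):
    # Independent humans: each bank errs on this qualified applicant separately.
    if all(random.random() < P_HUMAN for _ in range(BANKS)):
        shut_out_by_humans += 1
    # Monoculture: one draw decides the applicant's fate at every bank.
    if random.random() < P_MODEL:
        shut_out_by_model += 1

print(f"Rejected everywhere, independent humans: {shut_out_by_humans / TRIALS:.4f}")  # ~0.008
print(f"Rejected everywhere, shared model:       {shut_out_by_model / TRIALS:.4f}")   # ~0.100
```

Under these assumptions, the applicant is wrongly shut out everywhere about 0.8 percent of the time with independent evaluators versus 10 percent with the shared model, even though the model is more accurate on any single decision.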

That said, algorithmic monoculture could be desirable in some settings. It may be the case that in other high-risk areas, multiple decision-makers using a single centralized algorithmic system may reduce errors. In education, for instance, economists have found outcomes have improved as algorithms for school assignment have become more centralized.71 Perhaps in healthcare, the allocation of scarce resources by different hospitals would be best done if they all used the same algorithmic systems. Perhaps not. It isn’t known because it has not been studied yet.

The solution: Policymakers should investigate how different factors affect desired outcomes such as fairness in high-stakes public sector contexts, where market forces are muted and the cost of the error falls largely on the subject of the algorithmic decision. Where there is evidence that consumer welfare is significantly lowered, regulators should consider prohibiting the government from using AI systems for such decisions. They should invest in upskilling regulators in AI expertise and developing tools to monitor and address sector-specific AI risks, as the United Kingdom has done, which will better equip policymakers to establish and enforce sector-specific rules for AI where necessary, such as potential transparency or reporting requirements.72

180、 Rather than pushing for restrictions on targeted advertising,policymakers and civil society should allow the private sector to do what it does best:innovate and develop novel technologies that improve welfare.CENTER FOR DATA INNOVATION 24 Issue 4.1:AI May Exacerbate Surveillance Capitalism The issu

The issue: A November 2023 op-ed in the Financial Times reads, "We must stop AI replicating the problems of surveillance capitalism."73 It warns that AI is making it easier for large tech companies to monetize and profit from the collection, analysis, and use of personal data and user behaviors, an issue dubbed "surveillance capitalism," as detailed in Shoshana Zuboff's book of the same name.74 When it comes to AI, the concern is that companies will be able to better commodify user data and exploit consumers even more than they already do because algorithms will enable them to better analyze user data, better anticipate user preferences, and better personalize user experiences. One of the chief ways powerful companies are doing this, critics say, is by using algorithms and personal data for targeted advertising, trampling consumer privacy and rights. But despite claims that targeted ads are a massive intrusion on consumer privacy, most ad platforms deliver these ads to Internet users without revealing consumers' personal data to the advertisers. And critics of targeted advertising do not acknowledge the ample benefits of personalization to advertisers, publishers, and consumers alike, especially how these ads fund the Internet economy.75 Indeed, targeted online ads form an essential part of the digital economy: Advertisers can link consumers to specific queries and interests and then show them relevant ads as they visit different websites. This has three positive effects. First, consumers see ads for items that are likelier to be relevant to them than the nontargeted ads they encounter in traditional media. Second, advertisers spend their marketing budgets on ads that are likelier to generate a response from the audience, which makes their ad spend more cost-effective and affordable than traditional forms of marketing. This is why personalized ads have been a godsend to small businesses: Millions of enterprises benefit from being able to show their wares to interested customers, rather than wasting money on ads shown to uninterested audiences. Third, websites and app publishers can sell inventory on their sites to advertisers, earning them valuable income and allowing them to offer content and services to users for free.

The solution: Policymakers should not introduce laws that ban targeted advertising, as doing so would hurt consumers, businesses, and publishers. Rather than pushing for restrictions on targeted advertising, policymakers and civil society should allow the private sector to do what it does best: innovate and develop novel technologies that improve welfare for everyone, including publishers (who can continue to earn billions in advertising income), consumers (who can obtain the benefits of free, ad-supported apps and websites, and who often prefer to see ads tailored to their needs rather than being blanketed with irrelevant messages), and advertisers (who can continue to access affordable, effective ads, instead of relying on the kinds of pre-digital marketing that only helps large brands).76

5. MARKET CONCERNS

5.1 AI may enable firms with key inputs to control the market.
Policy needs: No policy needed.
Policy solution: There is no evidence of significant entry barriers to the AI market. If this should change, antitrust policy is already capable of handling most clear threats to competition.

5.2 AI may reinforce tech monopolies.
Policy needs: No policy needed.
Policy solution: Antitrust agencies already have the powers they need to stop problematic acquisitions and partnerships, but they should recognize that vertically integrated AI ecosystems are not inherently problematic and can have procompetitive effects that benefit consumers overall.

Issue 5.1: AI May Enable Firms With Key Inputs to Control the Market

The issue: The top U.S. antitrust regulators, Federal Trade Commission (FTC) Chair Lina Khan and DOJ's antitrust chief Jonathan Kanter, recently argued that government action may be warranted to prevent large technology companies from using anticompetitive tactics to protect their standing in the emerging AI market.77 For example, Kanter warned that the AI industry has a "greater risk of having deep moats and barriers to entry."78 Similarly, FTC staff penned an article in June 2023 arguing that generative AI depends on a set of necessary inputs, such as access to data, computational resources, and talent, and that "incumbents that control key inputs or adjacent markets, including the cloud computing market, may be able to use unfair methods of competition to entrench their current power or use that power to gain control over a new generative AI market."79 However, the generative AI market is still in its early stages, and as of now, there is no evidence of significant entry barriers. Concerns about data being an entry barrier in AI are speculative and unsubstantiated. Firms seeking to create generative AI models can use data from various sources, including publicly available data on the Internet, government and open-source datasets, datasets licensed from rightsholders, data from workers, and data shared by users. They also have the option to generate synthetic data to train their models.80 Some firms, such as OpenAI, Anthropic, and Mistral AI, have succeeded in creating leading generative AI models despite not having access to the large corpus of user data held by social media companies such as Meta and X.com. Additionally, companies with internal data can leverage it to build specialized models tailored to specific tasks or fields, such as financial services or healthcare. Similarly, the compute resources required for training generative AI models have not proven to be an entry barrier. There are numerous players in the cloud server market that provide the necessary infrastructure for training and running AI models. For example, Anthropic used Google Cloud to train its Claude AI models.81 In terms of chips, Nvidia's graphics processing units (GPUs) are popular but face meaningful potential competition from firms such as AMD and Intel.82 Other firms are also investing in chip design and manufacturing, fostering competition in the market.83 For example, Google has invested heavily in Tensor Processing Units (TPUs), which are specialized chips designed to train and run AI models.

The solution: Competition regulations should allow the AI industry to continue to develop new and innovative products without unwarranted restrictions so that both businesses and consumers can access the benefits of AI. If there are documented cases of AI companies engaging in anticompetitive behavior, resulting in harm to consumers, antitrust authorities already can, and should, act. Antitrust policy is already capable of handling most clear threats to competition, and as the FTC itself notes, it is no stranger to dealing with emerging technologies.84

Issue 5.2: AI May Reinforce Tech Monopolies

The issue: A brewing concern is that large, vertically integrated firms that control the entire AI stack, from cloud infrastructure to applications, may engage in anticompetitive practices, such as excluding downstream rivals. This could involve restricting access to essential cloud resources or copying and integrating features from competitors, effectively squeezing them out of the market through their own scale and reach. Additionally, these firms might favor their own AI products and services within their ecosystem, further limiting market access for new entrants. Instead, several competition authorities would like to see "mix-and-match" competition at and between all layers of the vertical chain rather than vertical integration. However, a mix-and-match environment may not drive the same level of competition between generative AI models as one with vertical ecosystems.85 Imagine a cloud provider and an AI model developer partnering in a vertically integrated system. In this setup, if the integrated system loses customers downstream (using AI models), it not only loses those specific sales but also faces reduced scale and revenue potential for its other services higher up in the chain (e.g., cloud services). This means that a loss in one part of the system affects the entire chain more significantly than in a system wherein different parts operate independently. Vertical integration can result in a competitive AI market in which several ecosystems exert pressure on each other, and supporting the emergence of new vertical ecosystems at this early stage of the AI industry could help ensure the AI market does not tip to monopoly.86 It is also important that there are developments in both closed source (proprietary) and open source (accessible to the public) ecosystems, which further contributes to stimulating competition.

The solution: Antitrust agencies already have the powers they need to stop problematic acquisitions and partnerships, but they should recognize that vertically integrated AI ecosystems are not inherently problematic and can have procompetitive effects that benefit consumers overall. They should base decisions on a detailed understanding of markets, including current and future sources of innovation, and focus on increasing social welfare. Agency guidelines explain that nonprice terms also matter when evaluating a merger or acquisition, including "reduced product quality, reduced product variety, reduced service, or diminished innovation."87 Vertical ecosystems in the AI industry often prioritize differentiation over price competition, emphasizing unique features, innovative solutions, and high-quality services to distinguish themselves in the market. Regulators should consider this focus on differentiation when evaluating the competitive landscape of AI ecosystems.

6. CATASTROPHIC SCENARIOS

6.1 AI may make it easier to build bioweapons.
Policy needs: General regulation.
Policy solution: Policymakers should clarify and strengthen existing policies related to biosecurity and biosafety oversight. They should update existing biosecurity practices to include guidance for how providers of labs can verify who is using the lab (customer screening) and what it is being used for (experiment screening).

6.2 AI may create novel biothreats.
Policy needs: AI-specific nonregulatory policies.
Policy solution: Congress should task the Department of Homeland Security (DHS) and Department of Energy (DOE) with developing state-of-the-art evaluations for dangerous biological capabilities. Benchmarks are needed to scope any future regulations.

6.3 AI may become God-like and "superintelligent."
Policy needs: AI-specific nonregulatory policies.
Policy solution: Policymakers should establish a Search for Artificial General Intelligence (SAGI) Institute focused on identifying advanced machine intelligence.

6.4 AI may cause energy use to spiral out of control.
Policy needs: AI-specific nonregulatory policies.
Policy solution: Policymakers should support the development of energy transparency standards for AI models. They should also accelerate the use of AI across government agencies to decarbonize government operations.

Issue 6.1: AI May Make It Easier To Build Bioweapons

The issue: General-purpose AI capabilities could impact the creation of biological threats by increasing malicious actors' access to information and expertise. For instance, some are concerned that LLMs could provide detailed guides on acquiring, synthesizing, and spreading dangerous pathogens such as Ebola, potentially leading to a pandemic.88 This concern is particularly focused on AI-enabled chatbots, which could not only assist experts but also enable scientifically inexperienced users to gather information more easily. Chatbots can help decipher scientific concepts and offer step-by-step instructions, streamlining the information-gathering process. Chatbots could therefore act as biological research assistants, removing the need for users to track down information, decide between multiple sources, and combine these pieces of information into a plan themselves.89 However, while the threat is legitimate, thinking of chatbots as the sole gatekeepers of information overstates how high the barrier to this information is.90 Chatbots are trained on existing information that users could access independently and relatively easily. As one article notes, "A chatbot that lowers the information barrier should be seen as more like helping a user step over a curb than helping one scale an otherwise unsurmountable wall."91 Furthermore, according to a 2024 report from the National Security Commission on Emerging Biotechnology, a congressionally mandated commission, "LLMs do not significantly increase the risk of the creation of a bioweapon."92 Finally, even if users overcome the barrier to scientific information, producing a known, existing pathogen or toxin will likely also require practical laboratory skills and materials for production.

The solution: Policymakers should focus on bolstering protections throughout the biothreat development process, because accessing the basic information and resources to cause biological harm doesn't require advanced AI tools. To this end, policymakers should clarify and strengthen existing policies related to biosecurity and biosafety oversight. For instance, cloud labs, also known as online or virtual labs, are platforms that enable users to conduct scientific experiments and research remotely through cloud computing technology. Instead of needing physical laboratory space and equipment, cloud labs provide a virtual environment wherein users can access and operate scientific instruments, conduct experiments, analyze data, and collaborate with others over the Internet. Policymakers should update existing biosecurity practices to include guidance for how providers of labs can verify who is using the lab (customer screening) and what it is being used for (experiment screening).93

Issue 6.2: AI May Create Novel Biothreats

The issue: General-purpose AI capabilities could impact the creation of biological threats by increasing novelty, meaning they could assist malicious actors in developing novel biological threats or more harmful versions of existing threats. This concern is particularly focused on bio-design tools (BDTs), which can predict and simulate biological molecules and processes that can help researchers understand large-scale biological patterns. Unlike LLMs, which enable users to better access existing information, BDTs generate novel information. Consider DeepMind's AlphaFold, an AI tool that predicts the shape of proteins, a scientific challenge necessary to make important biological discoveries.94 Prior to AlphaFold, scientists had determined the 3D shape of only about 190,000 proteins, or 0.1 percent of known protein structures, each one of which likely took months or years to figure out. DeepMind's AI tool has now expanded that knowledge to more than 200 million predicted protein structures, covering almost every organism in the world that has had its genome sequenced, and made these structures available via a public database. While such tools undoubtedly drive significant innovation, there is a risk that bad actors could misuse and exploit them to design new pathogens or toxins. Additionally, these tools might aid malicious actors in evading detection. For instance, they could generate a protein sequence that mimics a regulated toxin's function while possessing a distinct genetic code, thereby circumventing sequence-based screening measures.95
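The toy sketch below (using made-up strings, not real biological sequences) illustrates why naive sequence-based screening is brittle: an exact substring match catches a listed sequence but misses a hypothetical functional analog whose letters differ.

```python
def kmers(seq: str, k: int = 8) -> set[str]:
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flags(candidate: str, watchlist: list[str], k: int = 8) -> bool:
    """Naive screen: flag if the candidate shares any exact k-mer with the list."""
    cand = kmers(candidate, k)
    return any(cand & kmers(entry, k) for entry in watchlist)

# Invented strings standing in for sequences of concern; not real toxins.
watchlist = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIE"]
analog    = "MRSAYLSKQKQLAFVRAHFTKELDDRLGMVD"  # hypothetically same function, different letters

print(flags(watchlist[0], watchlist))  # True: the listed sequence itself is caught
print(flags(analog, watchlist))        # False: the divergent analog slips through
```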

Because BDTs are specialized AI tools, scientific novices are unlikely to use them successfully. Users need to understand how molecules work and how they can change with genetic, structural, functional, or chemical adjustments, as well as be able to compare different choices based on BDT predictions, make the biomolecule in a lab, and run tests to see if it works.96

The solution: Policymakers should identify specific scenarios in which scientifically knowledgeable users could potentially misuse BDTs and design policies to target these particular areas of concern, avoiding overbearing policies that hinder beneficial applications. Congress should task DHS and DOE with developing state-of-the-art evaluations for dangerous biological capabilities. The problem with the recent executive order on safe and secure AI is that it directs DHS to assess the potential for AI to enhance chemical, biological, radiological, and nuclear threats through consultation with experts, but this will be difficult to do because there has been little progress on developing benchmarks or evaluations for BDTs.97 Without evaluation capabilities, policymakers will not be able to scope any regulations or effectively balance safeguards against the potential benefits. Congress should also direct and fund DOE to establish a sandbox for testing evaluations on a variety of AI-enabled biological tools.

Issue 6.3: AI May Become God-Like and "Superintelligent"

The issue: Most doomsday scenarios predicting catastrophic outcomes stem from the development of what tech entrepreneur and investor Ian Hogarth dubbed "God-like AI" in a now-viral Financial Times article.98 Speaking of AGI, or "superintelligence," Hogarth asserted, "A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI. A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it."99 There are three broad ways developing God-like AI could hypothetically result in existential harms, which, for the purposes of this report, means harms that would annihilate humanity or permanently and drastically curtail its potential. First are accidents: those creating God-like AI systems could unwittingly develop systems that display unintended and harmful behavior resulting in existential or catastrophic harm to human civilization. For example, advanced AI systems that do not "align" with human values (commonly referred to as the "alignment problem") may launch (or refuse to launch) military weapons systems. Second is misuse: malicious actors, such as a rogue state or terrorist organization, could use a God-like AI system to intentionally cause harm. For example, a malicious actor could use advanced AI capabilities to exploit vulnerabilities in LLMs to make them release information on how to design new pathogens that cause mass death. Finally, there could be structural disruptions: God-like AI systems could destabilize the broader environment by creating "structural risks" in harmful ways that do not fall into the accident-misuse dichotomy.100 For example, AI systems that identify or assess the retaliatory capabilities of an adversarial nation could disturb the equilibrium of mutually assured destruction and drastically increase the risk of a nuclear war. However, while there are true believers in the risks of dangerous superintelligent AI wiping out human civilization, as well as those who are skeptical or agnostic, these risks are hypothetical and currently remain unprovable. Other hypothetical but unprovable claims, such as the probability of finding adversarial extraterrestrial life, do not paralyze policymaking around issues such as radio signals, space exploration, or national defense.101

The solution: Policymakers should remain clear-eyed in the face of grandiose, uncertain claims. To contend with existential risks, one of the most necessary functions at this stage is to better understand the threat vectors. Policymakers should establish an SAGI Institute focused on identifying advanced machine intelligence. Its goal should be to develop consensus around signs of AGI, how to test for AGI, different levels of AGI, and what researchers should do if they ever identify AGI.102

Issue 6.4: AI May Cause Energy Use to Spiral Out of Control

The issue: Concerns about the energy and carbon footprint of AI have been around since at least 2019. New Scientist ran the headline "Creating an AI can be five times worse for the planet than a car" in June 2019.103 The concerns have grown more acute, with some worrying that the rapid adoption of AI in recent years, combined with an increase in the size of deep learning models, will lead to a massive increase in energy use, causing potentially devastating environmental impact.104 An October 2023 piece in Scientific American reads, "The AI Boom Could Use a Shocking Amount of Electricity."105 However, looking at the energy cost of AI in isolation, without addressing the benefits, does not answer the question of whether developing an AI model makes sense. AI models have a wide range of applications, including in the climate and energy context. Indeed, powerful AI technologies enable other sectors to become more energy efficient. From powering intelligent transportation systems to enabling smart grids to improving city operations and maintenance, AI is already supporting smarter energy use and reducing greenhouse gas emissions.106 Even the large natural language processing models critics disparage are helping researchers understand the solar panel innovation process and identify climate risks and investment opportunities from public company disclosures.107 The question should not be whether AI models use energy, but rather whether the energy consumption involved generates net-positive societal benefits.

The solution: The impact of AI on energy and the environment should be part of the policy debate, but policymakers should also be careful not to overreact. There are reasonable steps policymakers can take to ensure AI is part of the solution, not part of the problem, when it comes to the environment.108 First, policymakers should support the development of energy transparency standards for AI models, covering both training and inference. In the United States, for example, NIST should work with DOE to develop a recommended best practice for assessing training and inference energy costs. The White House should continue its dialogue with leading AI companies to seek a voluntary commitment to publicly disclose the energy required to train and operate these foundation models, as well as the associated carbon emissions, especially for cloud-based AI service providers. In addition, policymakers should accelerate the use of AI across government agencies to decarbonize government operations. The president should sign an executive order directing the Technology Modernization Fund (a relatively new funding system for federal government IT projects) to include environmental impact as one of the core priority investment areas for projects to fund.
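As a sketch of what such a transparency disclosure could standardize, the back-of-the-envelope calculation below follows the commonly used accounting of accelerator-hours, power draw, data-center overhead (PUE), and grid carbon intensity. Every number is an illustrative assumption, not a measurement of any real model.

```python
# Illustrative training-energy estimate; all inputs are assumed values.
gpu_count = 1_000            # accelerators in a hypothetical training run
gpu_power_kw = 0.7           # ~700 W sustained draw per accelerator (assumed)
training_days = 30
pue = 1.2                    # power usage effectiveness: data-center overhead
grid_kg_co2_per_kwh = 0.4    # assumed grid carbon intensity

energy_kwh = gpu_count * gpu_power_kw * training_days * 24 * pue
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1_000

print(f"{energy_kwh:,.0f} kWh, about {co2_tonnes:,.0f} t CO2")
# 604,800 kWh, about 242 t CO2
```

A disclosure standard would pin down which of these inputs must be reported and how they are measured, so that figures from different providers are comparable.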

7. INTELLECTUAL PROPERTY CONCERNS

7.1 AI may unlawfully train on copyrighted content.
Policy needs: AI-specific nonregulatory policies.
Policy solution: Policymakers should fund research on technical measures that AI firms can use to reduce the risk of inadvertently training on copyrighted content, such as the development of machine-readable opt-out standards. They should also support the creation of training datasets with high-quality data in the public domain.

7.2 AI may create infringing content.
Policy needs: General nonregulatory policies.
Policy solution: Policymakers should consider developing a similarity checker to help courts assess substantial similarity for musical works, regardless of whether a work is created with AI or not.

7.3 AI may infringe on publicity rights.
Policy needs: General regulation.
Policy solution: Congress should provide rightsholders with a federal cause of action for publicity rights to ensure some basic jurisdictional consistency within the United States.

Issue 7.1: AI May Unlawfully Train on Copyrighted Content

The issue: Generative AI systems may train their models on text, audio, images, and videos that are legally accessible to Internet users but are also protected by copyright. AI firms argue that they cannot train LLMs without access to copyrighted work, but they are finding themselves entangled in legal battles with content creators and rightsholders who claim copyright infringement. The New York Times sued OpenAI and Microsoft in 2023, accusing them of "unlawful use" of its work to create their products.109 Getty Images, which owns one of the largest photo libraries in the world, is suing the creator of the AI art generator Stable Diffusion for alleged copyright breaches.110 And three of the biggest music publishers are suing AI company Anthropic, alleging that it is misusing copyrighted song lyrics to train its Claude chatbot.111 The underlying question in all these cases is whether training AI models on copyrighted materials falls under the "fair use" doctrine (or other exceptions to copyright law in other countries).112 The concept of fair use is a well-established principle in copyright law that allows for the limited use of copyrighted material without permission from the copyright holder under certain circumstances.113

While it will ultimately be up to the courts to decide whether a particular use of generative AI infringes on copyright, there is precedent for them to find most uses to be lawful and not in violation of rightsholders' exclusive rights.

The solution: Policymakers should fund research on technical measures that AI firms can use to reduce the risk of inadvertently training on copyrighted content.114 For example, creating standardized opt-out protocols could allow content publishers to indicate that AI firms should not train AI models on their content, and tools to score training data could help provide information on whether an output was influenced by a particular copyrighted text. These attribution scores can then be used as a measure for evaluating the copyright infringement risk associated with the output.115 However, policymakers should not mandate technical mitigations, because some may negatively impact other values, such as free speech.116
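A minimal sketch of what a machine-readable opt-out check could look like, reusing the long-standing robots.txt convention that some AI crawlers already honor; the crawler name "ExampleAIBot" here is hypothetical.

```python
from urllib.robotparser import RobotFileParser

# A training-data crawler would run a check like this before fetching a page.
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the publisher's robots.txt

page = "https://example.com/articles/some-story"
if rp.can_fetch("ExampleAIBot", page):
    print("No opt-out recorded for ExampleAIBot; page may be collected.")
else:
    print("Publisher opted out; exclude this page from the training set.")
```

A standardized protocol would go further than this, for example by distinguishing "do not index for search" from "do not train on," which robots.txt alone cannot express.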

Policymakers should also support the creation of training datasets with high-quality data in the public domain, as the French government has done. The French-Public Domain-Book, or French-PD-Books, is a collection of 289,000 books (containing more than 16 billion words) from the French National Library. The dataset is thought to be the largest AI training dataset composed entirely of text that is in the public domain.117 Some organizations, such as the nonprofit Fairly Trained, have created certifications for LLMs built on such databases.118

Issue 7.2: AI May Create Infringing Content

The issue: In addition to the concern that generative AI systems may unlawfully train on copyright-protected content, generative AI may allow creators to produce output that is similar to existing copyrighted works. While the latest generative AI systems mostly produce novel content, it is possible for these systems to replicate content from their training data. The law allows creators to produce similar works, but it does not allow them to produce identical or nearly identical works. Copyright owners, including those of literary, musical, and artistic works, can claim infringement if someone produces a work that is substantially similar to their own because they have an exclusive right to produce derivative works. Courts have repeatedly intervened in these cases, including for sampling small portions of a song, such as when Queen and David Bowie successfully sued Vanilla Ice because the bass line in "Ice Ice Baby" came directly from "Under Pressure," and for replicating key elements of a song, such as when the estate of Marvin Gaye successfully sued Robin Thicke and Pharrell Williams for the similarities between "Blurred Lines" and "Got to Give It Up."119 Artists can and should continue to enforce their rights in court when someone produces nearly identical work that unlawfully infringes on their copyright, whether that work was created entirely by human hands or involved the use of generative AI.

The solution: Policymakers should develop a similarity checker to help courts assess substantial similarity for musical works, regardless of whether the work is created with AI or not.120 Currently, judges or juries, depending on the specific legal procedure and jurisdiction, evaluate the works in question to determine if there is sufficient similarity between them to warrant a finding of infringement. However, this legal test is one of the most maligned in the legal field for being inconsistently applied and for being opaque and mystifying to courts and litigants.121 Congress should make things more consistent, accurate, and fair by directing and funding the Copyright Office to launch a competition for the private sector to come up with an AI-enabled tool to compare how similar a musical composition or recording is to existing copyright-protected works. This service should be modeled after the popular Turnitin online service that educators use to check how similar their students' written submissions are to existing written works. Importantly, these tools do not claim to identify plagiarism. Instead, they simply flag similarity and provide educators with the information they need to make a judgment. They can check similarity regardless of whether a work was written exclusively by a human or with the help of generative AI tools such as ChatGPT. Perhaps most crucially, creating such a tool would give artists the ability to anticipate ex ante whether their work likely infringes on any existing copyrighted works. Having a portal where artists can check their work before they release it gives them an opportunity to identify areas of risk and change the parts that are flagged as potentially infringing.
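As a sketch of how such a checker might flag (not adjudicate) similarity, the toy example below compares two melodies by their pitch-interval n-grams, so simple transposition does not hide a borrowed line. A production tool would use far richer audio and score features, and the note sequences here are invented.

```python
def interval_ngrams(notes: list[int], n: int = 4) -> set[tuple[int, ...]]:
    """N-grams of pitch intervals; intervals are transposition-invariant."""
    intervals = [b - a for a, b in zip(notes, notes[1:])]
    return {tuple(intervals[i:i + n]) for i in range(len(intervals) - n + 1)}

def melodic_similarity(a: list[int], b: list[int], n: int = 4) -> float:
    """Jaccard overlap of interval n-grams: 0.0 disjoint, 1.0 identical."""
    ga, gb = interval_ngrams(a, n), interval_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

# Hypothetical MIDI pitch sequences; the second is the first transposed up a tone.
original  = [60, 64, 67, 65, 64, 62, 60]
candidate = [62, 66, 69, 67, 66, 64, 62]
print(melodic_similarity(original, candidate))  # 1.0: flagged for human review
```

Like Turnitin, the output is a score for a human to weigh, not a finding of infringement.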

Issue 7.3: AI May Infringe on Publicity Rights

The issue: The right of publicity is the intellectual property right that protects individuals from the unauthorized commercial use of their identity. This right is especially important for celebrities, as it enables them to control how others use their likeness commercially, such as in advertisements or in film and TV. Generative AI, specifically deepfake technology, makes it easier to create content that impersonates someone else. YouTube star MrBeast and actor Tom Hanks have recently warned of AI ads that ape their faces and voices to falsely show them endorsing products.122 And music publishers and labels were disquieted earlier this year by a viral song created with AI-generated vocals imitating recording artists Drake and The Weeknd, which racked up millions of listens before being taken down.123 Generative AI also raises concerns about who will own the rights to certain character elements. For example, if a movie studio wants to create a sequel to a film, can it use generative AI to digitally recreate a character (including the voice and image), or does the actor own those rights? And does it matter how the film will depict the character, including whether the character might engage in activities or dialogue that could reflect negatively on the actor? However, generative AI has not changed the fact that individuals can and should continue to enforce their publicity rights by bringing cases against those who violate them. Courts have repeatedly upheld this right, including in cases involving indirect uses of an individual's identity. In one notable case, game show hostess Vanna White won damages for an advertisement that depicted a robot meant to impersonate her. In another, late-night television star Johnny Carson won a claim against a portable toilet company that used the phrase "Here's Johnny!" without his permission. And questions about ownership will likely be settled through contracts performers sign addressing who has rights to a performer's image, voice, and more. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) signed a deal with leading AI voice company Replica Studios in January 2024 that sets terms for SAG-AFTRA members to license their digital voice replicas to Replica. The agreement includes protections for performers, such as fair compensation, protection of voice data, and the need for a performer's consent before a replicated voice can be used in a project.

The solution: Congress should provide rightsholders with a federal cause of action for publicity rights to ensure some basic jurisdictional consistency within the United States. Currently, publicity rights vary widely within the United States, and the legal concept is sparsely recognized in international jurisdictions. However, the scope of any federal right of publicity should be limited so as not to stifle free speech or impede creative expression. It could, for instance, provide a minimum set of protections for an individual's name, signature, image, and voice against commercial exploitation during their lifetime.124

8. SAFETY AND SECURITY

8.1 AI may enable fraud and identity theft.
Policy needs: AI-specific regulation.
Policy solution: Financial regulatory agencies should update security guidelines to ensure financial institutions do not rely solely on voice authentication for customers.

8.2 AI may enable cyberattacks.
Policy needs: AI-specific nonregulatory policy.
Policy solution: Congress should address the cybersecurity workforce shortage within the federal government by establishing and funding an AI Center of Excellence dedicated to building AI tools and capacity to augment cybersecurity operations.

8.3 AI may create safety risks.
Policy needs: AI-specific regulation.
Policy solution: Congress should charge the newly created AI Safety Institute (housed in the Department of Commerce's NIST) with creating a national AI incident database and a national AI vulnerability database.

Issue 8.1: AI May Enable Fraud and Identity Theft

The issue: In recent months, there has been a concerning rise in nefarious uses of AI-enabled voice cloning. Bad actors have targeted families and small businesses with fraudulent extortion scams. The scams themselves are not new; the term "virtual kidnapping scam" has been around for many years to describe the ways fraudsters trick victims into paying a ransom to free a loved one they believe is being threatened. But AI has made these scams more sophisticated, as the technology can be trained on audio of regular people (which is relatively easy to find on social media platforms such as TikTok, Instagram, and YouTube) and made to sound incredibly authentic, further blurring the line between genuine communication and malicious manipulation. The concerns about AI-enabled voice cloning are legitimate and need concerted efforts across borders.

The solution: Financial regulatory agencies should update security guidelines to ensure financial institutions do not rely solely on voice authentication for customers. Some banks use voice recognition to authenticate customers when they access accounts or conduct transactions over the phone. Given the novel threat vectors of AI voice cloning, regulators should update these guidelines to require robust multi-factor authentication protocols that do not depend on voice recognition, strengthening security against evolving risks.
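One widely deployed non-voice factor such guidelines could point to is the time-based one-time password (TOTP) of RFC 6238. The sketch below is a bare-bones illustration, not a production authenticator, and the secret shown is a placeholder.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)  # time step counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder base32 secret; a bank would provision one per customer device.
print(totp("JBSWY3DPEHPK3PXP"))  # six-digit code that rotates every 30 seconds
```

Because the code derives from a shared secret and the clock rather than anything a caller's voice reveals, a cloned voice alone cannot pass the check.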

Because fraudulent scam calls can come from any part of the world, policymakers should internationalize their efforts to find solutions to detect and mitigate voice clones. The FTC has already launched an exploratory challenge to foster comprehensive solutions to prevent, monitor, and evaluate malicious voice cloning, while the EU's support network for SMEs, the Enterprise Europe Network, is trying to find international business partners for the EU companies that have already found promising solutions.125 Rather than working in silos, governments should prioritize working together to find, grow, and adopt the most cutting-edge solutions. Governments should also prioritize voice cloning research in tandem with clone detection research, as this best allows for a comprehensive understanding of both the vulnerabilities and the effective countermeasures. The United Kingdom is already home to several notable research partnerships in this space, such as Edinburgh University's Centre for Speech Technology's ASVspoof program. Policymakers should seek to bolster these efforts, especially with international pools of voice data.

Issue 8.2: AI May Enable Cyberattacks

The issue: AI may increase the scale and success rate of cyberattacks. In the near term, AI provides attackers with new methods to facilitate cyberattacks, such as helping them better identify vulnerabilities, hide malicious code, craft targeted phishing attacks, and evade cyber defenses. AI systems themselves may also be targets of cyberattacks, from denial-of-service attacks to more advanced data poisoning attacks intended to corrupt AI models so they produce harmful results. On the other hand, many cyberattacks still require human labor.126 A recent report from Georgetown's CSET notes that "even if machine learning technology continues to advance at a rapid pace in other areas, it does not follow that it will also immediately transform offensive cyber operations. For some parts of cyber operations, machine learning techniques may never matter."127 Indeed, attackers are only likely to apply AI to automate cyberattacks if they perceive unique advantages or benefits, but as the report goes on to say, there are many limitations and shortcomings to doing so.128 For instance, there are few large public datasets available for training AI models for cyberattacks. Attackers would likely have to spend time and money to build these themselves in order to create sufficiently good models. However, attackers can quickly adopt AI tools where they will be effective, whereas the government agencies and corporations they are targeting tend to react to technological changes less quickly. For example, the Government Accountability Office (GAO) noted in a 2023 report that since 2010, it has made over 100 recommendations on how to protect critical infrastructure from cyberattacks, but agencies have implemented fewer than half of them.129 Likewise, another 2023 GAO report finds that 70 percent of federal civilian agencies have "ineffective" information security programs, leaving them vulnerable to cyberattacks.130

The solution: Policymakers have taken steps to use AI to address cybersecurity risks. For example, President Biden's AI Executive Order directs agencies to "deploy AI capabilities effectively for cyber defense."131 In addition, the Cybersecurity and Infrastructure Security Agency (CISA) published a roadmap for AI that includes using AI for cyber defense and expanding AI expertise.132 However, the federal government continues to face a cybersecurity workforce shortage, limiting its ability to address cyber threats, including those from AI.133 Congress should do more to address this problem by establishing and funding an AI Center of Excellence dedicated to building AI tools and capacity that can augment and automate cybersecurity operations to close the workforce skills gap.

Issue 8.3: AI May Create Safety Risks

The issue: AI may cause real-world health and safety risks if an AI system fails. For example, an autonomous vehicle may crash if the onboard system fails to recognize a roadway hazard, or an AI-enabled medical device may incorrectly diagnose or treat a patient, leading to undesirable health outcomes. AI already assists in high-stakes domains such as healthcare, criminal justice, and financial services, and the technology's impact on society will only grow as models become more capable. In some of these areas, society may deem certain risks acceptable, such as if AI-enabled vehicles reduce total injuries and fatalities but do not eliminate them. In many cases, the entities responsible for creating or deploying AI systems will have strong market incentives to address safety risks to maintain a brand's reputation and mitigate their liability costs. In addition, in many regulated sectors, such as health care and transportation, existing regulators may also impose safety obligations on companies before they can bring their products to market, such as independent testing or certification requirements. However, safety testing for AI systems is still a developing field, and neither businesses nor regulators know the optimal ways to reliably test the safety and reliability of AI systems. As a result, despite best efforts to test and evaluate AI systems, some may still contain unknown safety risks.

The solution: Congress should charge the newly created AI Safety Institute (housed in NIST) with creating both a national AI incident database and a national AI vulnerability database.134 There is no process in place to systematically track AI failures, vulnerabilities, and incidents to learn from mistakes and uphold public trust. To address this problem, Congress should pass AI-specific legislation to standardize the tracking of incidents from AI systems and the monitoring of AI-specific vulnerabilities, which are not the same as cybersecurity vulnerabilities. The AI Safety Institute should work with other countries to create a common vulnerability reporting and naming standard to facilitate information sharing among stakeholders globally.
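To illustrate what standardized incident tracking might capture, here is a hypothetical minimal record schema; the field names and the "AIID-2024-0042" naming scheme are invented for illustration, and the actual schema would be for NIST and its international counterparts to define.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import date

@dataclass
class AIIncidentRecord:
    """Hypothetical minimal schema for a national AI incident database entry."""
    incident_id: str                 # e.g., "AIID-2024-0042" (invented scheme)
    system_name: str
    deployer_sector: str             # "healthcare", "transportation", ...
    date_observed: date
    harm_category: str               # "safety", "bias", "security", ...
    severity: str                    # "low" | "medium" | "high" | "critical"
    description: str
    related_vulnerability_ids: list[str] = field(default_factory=list)

record = AIIncidentRecord(
    incident_id="AIID-2024-0042",
    system_name="ExampleVision v2",  # fictional system
    deployer_sector="transportation",
    date_observed=date(2024, 3, 1),
    harm_category="safety",
    severity="high",
    description="Perception model failed to detect a stopped vehicle at dusk.",
)
print(json.dumps(asdict(record), default=str, indent=2))
```

A shared schema along these lines is what would let incident reports and vulnerability identifiers be cross-referenced across borders.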

CONCLUSION

Proposals to "regulate AI" mirror the fears being expressed and the harms advanced in the unfolding narrative of AI, in which the shadows of concern loom larger than the promises of progress. While certain issues may require regulation, most are better addressed through other policy actions. By thoughtfully evaluating each concern and tailoring actions accordingly, policymakers can forge targeted, impactful policies that mitigate risks, safeguard fundamental rights, and nurture responsible AI advancement. This strategy is crucial to ensuring that AI continues to drive positive change while mitigating potential risks.

APPENDIX: THE RIGHT POLICY SOLUTIONS FOR AI CONCERNS

Table 1: Concerns that warrant AI-specific regulations

1.3 AI may enable government surveillance.
Policy solution: Congress should direct the Department of Justice (DOJ) to establish guidelines for use by state and local law enforcement in investigations that outline specific use cases and capabilities, including when a warrant is necessary for use, as well as transparency guidelines for when to notify the public of law enforcement using AI.

3.6 AI may make harmful decisions.
Policy solution: Policymakers should consider prohibiting the government from using AI systems in certain high-risk, public sector contexts. They should upskill regulators with better AI expertise and develop tools to monitor and address sector-specific AI risks, as the United Kingdom has done.

8.1 AI may enable fraud and identity theft.
Policy solution: Financial regulatory agencies should update security guidelines to ensure financial institutions do not rely solely on voice authentication for customers.

8.3 AI may create safety risks.
Policy solution: Congress should charge the newly created AI Safety Institute (housed in the Department of Commerce's NIST) with creating a national AI incident database and a national AI vulnerability database.

Table 2: Concerns that warrant general regulations

1.1 AI may expose personally identifiable information in a data breach.
Policy solution: Policymakers should require companies to publish security policies to promote transparency with consumers. Congress should pass federal data breach notification legislation.

1.5 AI may infer sensitive information.
Policy solution: Policymakers should craft and enact comprehensive national privacy legislation that addresses the risks of data-driven inference in a tech-neutral way.

1.6 AI may help bad actors harass and publicly shame individuals.
Policy solution: Congress should outlaw the nonconsensual distribution of all sexually explicit images, including deepfakes that duplicate individuals' likenesses in sexually explicit images, and create a federal statute that prohibits revenge porn, including those with computer-generated images.

3.2 AI may fuel deepfakes in elections.
Policy solution: Policymakers should update state election laws to make it unlawful for campaigns and other political organizations to knowingly distribute materially deceptive media.

6.1 AI may make it easier to build bioweapons.
Policy solution: Policymakers should clarify and strengthen existing policies related to biosecurity and biosafety oversight. They should update existing biosecurity practices to include guidance for how providers of labs can verify who is using the lab (customer screening) and what it is being used for (experiment screening).

7.3 AI may infringe on publicity rights.
Policy solution: Congress should provide rightsholders with a federal cause of action for publicity rights to ensure some basic jurisdictional consistency within the United States.

Table 3: Concerns that warrant AI-specific nonregulatory policies

1.4 AI may enable workplace surveillance.
Policy solution: Policymakers should help set the quality and performance standards of AI technologies used in the workplace.

3.3 AI may manipulate voters.
Policy solution: Policymakers should update digital literacy programs to include AI literacy, which teaches individuals to understand and use AI-enabled technologies.

3.5 AI may perpetuate discrimination.
Policy solution: Policymakers should support the development of tools that help organizations provide structured disclosures about AI models and related data.

6.2 AI may create novel biothreats.
Policy solution: Congress should task the Department of Homeland Security (DHS) and Department of Energy (DOE) with developing state-of-the-art evaluations for dangerous biological capabilities. Benchmarks are needed to scope any future regulations.

6.3 AI may become God-like and "superintelligent."
Policy solution: Policymakers should establish a Search for Artificial General Intelligence (SAGI) Institute focused on identifying advanced machine intelligence.

6.4 AI may cause energy use to spiral out of control.
Policy solution: Policymakers should support the development of energy transparency standards for AI models. They should also accelerate the use of AI across government agencies to decarbonize government operations.
