January 2024

2024 AI POLICY FORECAST

Gregory C. Allen
Georgia Adamson

A Report of the Wadhwani Center for AI and Advanced Technologies

ABOUT CSIS

The Center for Strategic and International Studies (CSIS) is a bipartisan, nonprofit policy research organization dedicated to advancing practical ideas to address the world's greatest challenges.

Thomas J. Pritzker was named chairman of the CSIS Board of Trustees in 2015, succeeding former U.S. senator Sam Nunn (D-GA). Founded in 1962, CSIS is led by John J. Hamre, who has served as president and chief executive officer since 2000.

CSIS's purpose is to define the future of national security. We are guided by a distinct set of values: nonpartisanship, independent thought, innovative thinking, cross-disciplinary scholarship, integrity and professionalism, and talent development. CSIS's values work in concert toward the goal of making real-world impact.

CSIS scholars bring their policy expertise, judgment, and robust networks to their research, analysis, and recommendations. We organize conferences, publish, lecture, and make media appearances that aim to increase the knowledge, awareness, and salience of policy issues with relevant stakeholders and the interested public.

CSIS has impact when our research helps to inform the decisionmaking of key policymakers and the thinking of key influencers. We work toward a vision of a safer and more prosperous world.

CSIS does not take specific policy positions; accordingly, all views expressed herein should be understood to be solely those of the author(s).

© 2024 by the Center for Strategic and International Studies. All rights reserved.

THE WADHWANI CENTER FOR AI AND ADVANCED TECHNOLOGIES

The Wadhwani Center for AI and Advanced Technologies is an initiative within CSIS that produces research on technology governance, regulation, national security, and geopolitics, with a particular focus on AI. The center investigates central topics including export controls, semiconductor supply chains, and the impacts of emerging technologies on national security and global economic policymaking. Our analyses aim to inform policy solutions that address rapidly evolving technology, foster global collaboration on AI and technology, and encourage technological innovation. The center is supported by the Wadhwani Foundation, a nonprofit dedicated to accelerating economic development.

The Wadhwani Center for AI and Advanced Technologies would like to thank the
Wadhwani Foundation for making this report possible.

CONTENTS

Letter from the Director 1
Year in Review 2
  A Timeline of Major Developments in AI in 2023
Top Takeaways 7
  The Wadhwani Center's Key Takeaways from Developments in AI Last Year
Mapping AI Events 16
  A Global Perspective on Key AI Summits and Events
The Year Ahead 18
  Ten Developments to Monitor in 2024
Glossary 21
  Key Definitions for 2024
About the Authors 25
Endnotes 26

LETTER FROM THE DIRECTOR

2023 marked the founding year of the Wadhwani Center for AI and Advanced Technologies. We established this organization at CSIS to provide impactful research and analysis that can keep pace with the rapidly changing technology and policy landscape. While this is always a struggle, I am immensely proud of the strides we have made.

In just the eight months that the Wadhwani Center has been active, we proudly published 10 comprehensive reports, offering insights and recommendations that have resonated with policymakers and industry leaders alike. Our experts testified before Congress on three occasions and had the privilege of briefing our research findings to senior international and U.S. government policymakers, including cabinet-level officials, on dozens of occasions.

Looking ahead, we remain committed to advancing the discourse on AI and technology. Our goals for 2024 are ambitious, focusing on growing our team of talented staff, deepening our research, expanding our outreach, convening timely events, and continuing to inform policy at both national and international levels.

I would like to extend a special thank you to Dr. Romesh Wadhwani and the Wadhwani Foundation for their unwavering and generous support. Their commitment to independent and impactful policy scholarship gave
us this opportunity. I am deeply grateful to all our donors, partners, and staff who have worked so hard to get us off to such a strong start. Thank you for being part of our story. I am confident that this year will bring even more exciting opportunities.

Sincerely,

Gregory C. Allen
Director, Wadhwani Center for AI and Advanced Technologies

YEAR IN REVIEW
A Timeline of Major Developments in AI in 2023

JANUARY–MARCH

- China's Cyberspace Administration implements a new law to manage deepfakes, including by enforcing watermarks on AI-generated content.[1]
- Microsoft pledges a rumored $10 billion multiyear investment in OpenAI, claiming the future impact of AI technology will be equal to that of the PC or the internet.[2]
- The U.S. Department of Defense updates DOD Directive 3000.09, "Autonomy in Weapons Systems," revising the original 2012 directive to reflect new technology capabilities in autonomous systems and AI.[3]
- The U.S. National Institute of Standards and Technology (NIST) releases the NIST AI Risk Management Framework, a set of guidelines for AI development, use, and evaluation aimed at enhancing the transparency and security of AI in businesses and organizations.[4]
- The United States and the European Union announce an agreement to accelerate joint AI research for solving global challenges in climate forecasting, agriculture, healthcare, critical infrastructure, and more.[5]
- Bloomberg reports that the Netherlands and Japan will join U.S. efforts to restrict exports of semiconductor manufacturing equipment to China.[6]
- The White House launches the U.S.-India Initiative on Critical and Emerging Technologies (iCET), a partnership with India to advance technology and defense research and innovation and to ensure semiconductor supply chain resiliency.[7]
- OpenAI's chatbot, ChatGPT, becomes the fastest-growing application in history, reaching 100 million users in January.[8]
- The Department of Defense first announces its "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy" at the Responsible AI in the Military Domain Summit in The Hague, Netherlands, the first summit of its kind.[9]
- Meta's large language model (LLM) LLaMA is announced for limited release.[10]
- The Netherlands announces plans to join the United States in restricting exports of semiconductor technology to China.[11]
- OpenAI reveals its latest large multimodal model, GPT-4, which greatly outperforms GPT-3 and other available models in several areas.[12]
- Nvidia reports it has modified one of its top semiconductor chips, the H100, for export to China as the H800, following U.S. export controls updated last year.[13]
- An open letter calling for a six-month pause on all frontier AI development due to potential catastrophic risks to society is published. Signatories include prominent CEOs and academics.[14]
- Japan announces it will join the United States and the Netherlands in restricting exports of semiconductor manufacturing equipment.[15]

APRIL–JUNE

- China's Cyberspace Administration reveals draft measures to manage generative AI content, including bringing content in line with China's core values.[16]
- The G7 concludes its Digital and Tech Ministers' Meeting (April 29–30) in Takasaki, Japan, declaring member states' commitment to an internationally cooperative, adaptable, and risk-based approach to AI governance.[17]
- Hollywood writers begin months-long strikes over issues including AI's role in the creative industry.[18]
- CEOs of top AI developers OpenAI, Anthropic, Microsoft, and Alphabet meet with President Biden to discuss responsible AI innovation, including companies' responsibility to make products safe.[19]
- OpenAI CEO Sam Altman and IBM vice president Christina Montgomery testify before Congress on the risks of rapid AI development following the quick rise of ChatGPT.[20]
- G7 leaders gathered in Hiroshima discuss inclusive governance for AI at the 2023 summit.[21]
- The nonprofit Center for AI Safety publishes a one-sentence statement arguing that mitigating the risk of extinction from AI should be a "global priority," signed by top AI CEOs and developers, academics, and other civil society figures.[22]
- Nvidia briefly joins tech giants in trillion-dollar market valuation, with shares up over 200 percent since late 2022.[23]
- Senate Majority Leader Chuck Schumer announces the SAFE Innovation Framework for AI Policy at CSIS.[24]
- The Department of Commerce announces a new public working group to implement and build upon NIST's AI Risk Management Framework released in January.[25]
- OpenAI is sued by authors for copyright infringement after training ChatGPT on their works without proper licensing, raising wider copyright concerns around AI systems.[26]

JULY–SEPTEMBER

- The UN Security Council convenes to discuss AI risks for the first time.[27]
- Meta launches open-source LLM LLaMA 2.[28]
- Leading AI firms including OpenAI, Meta, and Google make voluntary commitments to the White House for ensuring safe AI, including testing products before release and watermarking AI-generated content.[29]
- Japan's export controls on 23 types of semiconductor manufacturing equipment go into effect.[30]
- OpenAI, Anthropic, Google, and Microsoft announce a new industry body, the Frontier Model Forum, founded to promote the responsible development of AI and to share knowledge with policymakers.[31]
- Nvidia announces the new cutting-edge semiconductor chip GH200, speeding up processing times for generative AI systems.[32]
- President Biden signs an executive order banning U.S. investment in sensitive technologies in China, such as AI, for national security and competition reasons.[33]
- The Department of Defense reveals a new AI task force, "Lima," to oversee the integration of generative AI capabilities into the department.[34]
- The Department of Defense unveils the new Replicator initiative to accelerate the procurement and fielding of all-domain autonomous and attritable military systems to compete with China.[35]
- Huawei releases the new Mate 60 Pro smartphone with a 7-nanometer semiconductor chip, highlighting China's technical advancements despite U.S. export and investment restrictions.[36]
- Chinese tech company Baidu releases AI chatbot Ernie Bot to the public in China.[37]
- New Dutch restrictions on exporting semiconductor manufacturing equipment to China go into effect.[38]
- The first AI Insight Forum session is held on Capitol Hill. Led by Senate Majority Leader Chuck Schumer, the meeting convenes senators, prominent tech CEOs, and civil society figures to discuss U.S. government oversight of AI.[39]
- The National Security Agency announces a new body, the AI Security Center, to oversee AI adoption in U.S. national security systems.[40]

OCTOBER–DECEMBER

- China aims to boost its total computing power by 50 percent by 2025, a Chinese ministry reports.[41]
- The White House announces new and updated measures to restrict AI and semiconductor technology exports to China and other countries, closing loopholes in 2022 control policies.[42]
- China accepts the United Kingdom's invitation to take part in the UK AI Safety Summit in November amid controversy.[43]
- The UK government reveals plans to form the world's first institute for AI safety.[44]
- President Biden signs the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.[45]
- The G7 releases a statement on the Hiroshima AI Process addressing AI risks and the benefits of fostering an open environment for global collaboration on AI.[46]
- The United Kingdom's AI Safety Summit opens at Bletchley Park, convening prominent political and technology leaders for two days (November 1–2) to discuss international cooperation in AI governance. Twenty-eight countries and the European Union sign the "Bletchley Declaration," announcing an international commitment to AI governance and next steps.[47]
- Secretary of Commerce Gina Raimondo announces the launch of a U.S. AI safety institute at the UK AI summit.[48]
- The Department of Defense releases its AI strategy, directing the accelerated adoption of advanced AI within the DOD.[49]
- Chief Executive Officer Sam Altman is temporarily ousted from OpenAI by the company's board, only to be reinstated five days later on November 21.[51]
- Microsoft reveals a custom-designed semiconductor chip aimed at cutting the high costs of AI products.[50]
- Reuters reports on a new OpenAI project, "Q*," rumored to be a breakthrough in developing artificial general intelligence.[52]
- Russian president Vladimir Putin says at a conference in Moscow that a Russian AI strategy will be released soon to counter the Western and Chinese monopoly on AI.[53]
- Meta announces restrictions on AI-generated content in advertising ahead of the 2024 election, including mandatory disclosure of AI-generated advertising to the public.[54]
- Researchers discover a weakness in ChatGPT that allows it to reveal sensitive information in response to a single prompt.[55]
- Google launches AI model Gemini, the first model to outperform human experts on Massive Multitask Language Understanding (MMLU).[57]
- In a global first, the European Union establishes landmark comprehensive AI regulation in passing the EU AI Act.[58]
- The Financial Times reports that generative AI is widely used by multiple political parties ahead of Bangladesh's 2024 elections as deepfakes and AI-generated misinformation circulate on social media and news outlets.[59]
- The New York Times sues Microsoft and OpenAI for copyright infringement, claiming AI chatbots illegally used millions of its articles for training.[60]
- Nvidia launches the advanced gaming chip GeForce RTX 4090 D for Chinese consumers, adapted to comply with updated U.S. export controls.[61]
- The U.S. Patent and Trademark Office announces a new Semiconductor Technology Pilot Program designed to encourage innovation in semiconductor manufacturing and support the CHIPS and Science Act.[56]

TOP TAKEAWAYS
The Wadhwani Center's Key Takeaways from Developments in AI Last Year

"Now, friends, we come together at a moment of revolution, not one of weapons or of political power, but a revolution in science and understanding that will change humanity. It's been said that what the locomotive and electricity did for human muscle a century and a half ago, artificial intelligence is doing for human knowledge today as we speak. But the effect of AI is far more profound and will certainly occur over a much shorter period of time."[62]
—U.S. Senate Majority Leader Chuck Schumer, June 21, 2023

Existential risk became a mainstream concern for AI governance.

Though the risk of AI leading to catastrophe or human extinction had been a focus for Elon Musk and many AI researchers in prior years, 2023 saw the issue become a genuine priority among global leaders and government policymakers. The shift was led by calls from high-profile figures in the private sector such as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Tesla and xAI CEO Elon Musk, and AI "godfather" Geoffrey Hinton. In May, a coalition of over 350 leading AI experts and executives signed a one-sentence statement that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[63] This statement immediately led to a rhetorical shift among global policymakers.

Concern about AI's malign impact on civilization trickled down to the wider U.S. public. A survey of more than 20,000 Americans by YouGov in April 2023 reported that 46 percent were concerned about AI's potential to cause human extinction and 69 percent supported a proposed six-month pause on AI development.[64] Similar anxieties about the potential catastrophic risk of AI echoed around Washington last year. At a congressional hearing in May, Sam Altman warned that AI could "cause significant harm to the world" and urgently called for greater regulation of the technology.[65] IBM vice president and chief privacy and trust officer Christina Montgomery concurred: "with AI the stakes are simply too high," and "what we need at this pivotal moment is clear, reasonable policy and sound guardrails."[66] At the UK AI Safety Summit, the 29 attending parties agreed to the Bletchley Declaration, which stated that "there is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models."[67]

The United States, the Netherlands, and Japan coordinated export controls to target China's AI and semiconductor technology development.

In late January 2023, reports emerged that the United States had reached an agreement with the Netherlands and Japan for the two countries to impose new export controls restricting China's access to chip-making tools.[68] Earlier, on October 7, 2022, the United States had imposed strict controls on exports of advanced semiconductor manufacturing equipment (SME) technology to China; however, as Japanese and Dutch companies produced important SME technology, such unilateral action had a limited effect. Details on what was included in these new restrictions were scarce until March 2023, when the two countries formally announced they would be moving forward with export controls on a wide range of semiconductor equipment and technology. Neither
country explicitly mentioned China as the target.[69] Japan and the Netherlands capture 99 percent of the world's market share of lithography steppers and scanners, which are crucial for state-of-the-art AI chips.[70] The agreement was therefore a major step forward in the U.S. mission to bar China from gaining the lead in the chip race. Still, other countries like Germany and South Korea are also significant producers in the semiconductor value chain, and the United States will need to persuade them both to get on board with controls if it wants to continue to slow China's technological advancements.

However, the success of these controls in slowing China's technological progress remains mixed. Despite international efforts to prevent China from making significant technological advancements, the announcement of Huawei's new Mate 60 smartphone raised concerns throughout the national security community about the efficacy of the export controls. On October 17, 2023, the U.S. Bureau of Industry and Security announced updates to the October 7 controls.[71] These updates included additional parameters for chips' performance density, the restriction of dozens more items of semiconductor equipment, the expansion of licensing requirements to an additional 22 countries with which the United States has an arms embargo, and the addition of 13 companies to the entity list.[72]

AI chatbots reached billions of users worldwide and continued growing rapidly in scale.

OpenAI's ChatGPT reached over one million users in its first week after launch in November 2022.[73] By November 2023, that number had skyrocketed to more than 100 million monthly active users.[74] OpenAI's success speaks to a wider public entrancement with AI chatbots around the world. Leading AI development companies launched several new large language models (LLMs) last year, including OpenAI's GPT-4, Meta's LLaMA 2, and Google's Gemini. The global demand for this technology has prompted many companies to develop chatbots trained on other languages, such as the Arabic model Jais and the Chinese model Ernie Bot, though the United States still leads in the worldwide development of LLMs.[75]

[Figure source: Charlie Giattino, Edouard Mathieu, Veronika Samborska, and Max Roser, Artificial Intelligence, Our World in Data, 2023, https://ourworldindata.org/artificial-intelligence. Licensed under CC BY 4.0.]

Language models are getting bigger, both in terms of the data they are trained on and their parameters (GPT-4, for example, is rumored to have up to one trillion parameters, compared to GPT-3's 175 billion).[76] In fact, models have grown so large that there are legitimate concerns that companies are reaching the limits of the existing available text training data.[77] However, their growing size comes with growing costs.[78] While training costs are rarely disclosed by companies developing LLMs, OpenAI stated that developing and training GPT-4 cost more than $100 million, and Anthropic CEO Dario Amodei suggested that future training costs could exceed $1 billion.[79] Growing CO2 emissions and water usage from training and operating chatbots are also attracting increased attention for their effect on the environment.[80] What these upward trends mean for AI chatbots' profitability and scalability remains to be seen this year.

Major economies around the world took substantial steps to regulate AI . . .

The United States

In response to the transformative potential of AI, the U.S. government began to regulate it through administrative law, acknowledging the imperative to navigate the complexities of AI risks and begin establishing domestic standards. On January 26, 2023, the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) unveiled the Artificial Intelligence Risk Management Framework (AI RMF 1.0).[81] Developed in close collaboration with both private and public sectors, the AI RMF serves as a comprehensive tool for organizations engaging with AI technologies and is designed to adapt to the evolving AI landscape. Though the RMF is not intended to be applied as part of formal regulation, many have held the framework up as substantial progress in maturing AI governance.

Announced in June 2023 at CSIS, Senate Majority Leader Chuck Schumer's SAFE Innovation Framework marked a strategic effort to confront the profound changes brought about by AI through "comprehensive legislation."[83] Since September 2023, Capitol Hill has seen over 150 AI experts gather as part of Senator Schumer's AI Insight Forums.[84] These forums have covered an array of crucial topics, from AI innovation and workforce considerations to national security and guarding against doomsday scenarios.[85]
Notably, the ninth forum, held on December 6, 2023, featured testimony from the Wadhwani Center's director, Gregory Allen.[86] This forum focused on maximizing AI development to bolster the United States' military capabilities, aligning with Senator Schumer's vision for an "all-hands-on-deck" effort.[87]

Ahead of the UK AI Safety Summit, the Biden administration announced its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence in late October 2023.[88] The order broadly focused on the development of standards and testing mechanisms for AI safety, infrastructure, and social consequences (such as discrimination and effects on labor). Since the announcement, executive agencies like the Department of Defense (DOD) and State Department have released their policies on AI and detailed how the executive order's directives will be applied in their respective agencies.[89]

The European Union

On December 8, the European Union passed its Artificial Intelligence Act, the world's most substantial set of regulations on AI so far.[90] After two and a half years in the making, the act was finally agreed upon following a lengthy 37-hour negotiation between EU states and the European Parliament.[91] EU commissioner Thierry Breton confirmed the event on X, stating that "the EU becomes the very first continent to set clear rules for the use of AI" in passing the AI Act and calling it a "launch pad for EU start-ups and researchers to lead the global AI race."[92] Not all EU leaders agree, however; French president Emmanuel Macron condemned the act on December 11, saying, "we can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea."[93]

The AI Act regulates all AI sold, used, or deployed within the European Union apart from AI used for military purposes, research, and open-source models, though most provisions apply only to "high-risk" AI systems. It advances a risk-based approach to managing AI systems by sorting levels of risk into four categories: unacceptable, high, limited, and minimal to none.[94] Unacceptable risks banned under the act include the use of AI for manipulating human behavior, social scoring, and creating biometric databases based on sensitive social categories such as race or religion. The consequences for failing to comply with these rules are steep: companies could be fined €35 million or 7 percent of global revenue.[95] Full implementation of the new AI law is not expected to begin until 2025 at the earliest, allowing time for the European Commission to establish a regulatory oversight "AI office" in Brussels and for companies to adapt to the new rules.[96]
China

In August 2023, China's generative AI measures went into effect.[97] At the time, these measures were some of the most comprehensive regulations on AI, establishing a regulatory framework for generative AI services offered to the Chinese public. Earlier in the year, China had released a draft that placed significant responsibility on service providers for matters like ensuring the legality of the data used for training and optimization.[98] Service providers would face fines upwards of 100,000 yuan if they failed to meet these standards. However, the finalized version only requires service providers to create measures that prioritize desired values in data training and optimization. The shift from the strict first draft to a more diluted final version is likely a reflection of industry input as well as sensitivity to the current economic challenges that China is facing.

"We face a genuine inflection point in history, one of those moments where the decisions we make in the very near term are going to set the course for the next decades. And with the position we lead the world, the toughest challenges are the greatest opportunities. Look, there's no greater change that I can think of in my life than AI presents as a potential: exploring the universe, fighting climate change, ending cancer as we know it, and so much more."[82]
—U.S. President Joe Biden, October 30, 2023

The measures apply to any generative AI technology that provides services that "generate any text, image, audio, video, or other such content to the public."[99] Services that are not offered to the public are explicitly excluded from the legislation. Additionally, prior to providing services to the public, service providers must apply for a security assessment. In implementation, these have not been difficult to get approved.[100] Chinese military, intelligence, and police services remain broadly exempt from all Chinese AI regulations.

In October, the Chinese government announced its Global AI Governance Initiative at the third Belt and Road Forum.[101] Xi Jinping unveiled the initiative personally, underscoring China's AI governance ambitions, which include enhancing information exchange and technological cooperation with other countries; developing open, fair, and efficient governing mechanisms; and establishing an international institution within the UN framework to govern AI. The initiative calls for representation and equal rights in developing AI, regardless of a country's "size, strength, or social system."[102] The announcement came just weeks after the creation of a BRICS AI study group, which aimed to foster closer AI governance ties among the participating nations.[103]

. . . and to strengthen international cooperation on AI governance.
Global efforts to govern AI increased dramatically in 2023. The United Kingdom hosted the first global AI Safety Summit on November 1–2, convening political and technology leaders for discussions focused on foundation model safety. Attendees included summit host Prime Minister Rishi Sunak, European Commission president Ursula von der Leyen, Vice President Kamala Harris, prominent tech CEOs, and, to the surprise of many, China's vice minister of science and technology, Wu Zhaohui.[105] The most significant achievement of the summit was the Bletchley Declaration, which Sunak called a "landmark achievement that sees the world's greatest AI powers agree on the urgency behind understanding the risks of AI."[106] The declaration broadly aims to promote transparency and accountability within companies developing frontier AI models and to develop AI safety evaluation metrics, tools, and research capabilities.

AI also featured prominently in G7 meetings in 2023, including the Digital and Technology Ministers' Meeting in April and the G7 summit in May. The group released the G7 Leaders' Statement, the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.[107] The code of conduct was the most consequential of the documents, laying out a set of voluntary guidelines for developing advanced AI systems, such as foundation models and generative AI. It is worth noting that these documents set out voluntary guidelines as opposed to binding international regulations. The G7 countries will also publish the Hiroshima AI Process Comprehensive Policy Framework.

Several other international assemblies gathered in 2023 to discuss AI for the first time. In February, the Responsible AI in the Military Domain (REAIM) Summit in The Hague, Netherlands, convened ministers to discuss the responsible use of AI for military applications.[108] In July, the United Nations Security Council met to discuss AI risk, at which Secretary-General António Guterres called for the creation of a UN body "to support collective efforts to govern this extraordinary technology."[109] The urgent need for international cooperation in balancing the potential risks and benefits of AI was a common theme across all these governance efforts.

What remains to be seen is how international pledges will translate into real-world impact in 2024. Last year brought many voluntary commitments and high-level declarations. This is a start. But perhaps the greater challenge will be seeing how the grittier details of legislation, funding, and international standards unfold within a window of interest that may not last
forever.

"We call for global collaboration to share knowledge and make AI technologies available to the public under open-source terms."[104]
—Chinese Vice Minister of Science and Technology Wu Zhaohui, November 1, 2023

The private sector emphasized its own role in responsible AI governance.

As the U.S. government made significant steps toward regulating AI in 2023, the private companies behind AI development became more, not less, important in pursuing this goal. The government was proactive last year in collaborating with industry leaders to responsibly manage AI. This action has come, in part, from a recognition by Congress that it is playing catch-up to a technology and industry that far predates the government's regulatory efforts in 2023. The majority of AI research and development exists in the private sector, as it takes extremely large datasets, technical expertise, and financial investment to develop the kinds of frontier AI models Congress is seeking to regulate. It would make sense, therefore, for Congress to seek AI companies' input on AI legislation as it does with industry leaders in other sectors. Senate Majority Leader Chuck Schumer opened the first AI Insight Forum on September 13, which gathered senators, CEOs, and civil society leaders to discuss AI regulation, noting that "Congress cannot do it alone."[111] He added, "We need help of course from developers and experts who build AI systems."[112] The forums accompanied other congressional hearings that heard from AI CEOs like OpenAI's Sam Altman and IBM's Christina Montgomery last year.[113]

In addition to congressional hearings, AI companies were given greater responsibility for regulating their technologies in 2023 by Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.[114] The order requires AI developers to conduct red teaming on their products and report their findings to the government before release, placing the onus on companies to do the heavy lifting when it comes to due diligence.

However, the private sector's growing influence in AI governance is accompanied by growing concerns that there are fundamental conflicts of interest at play. AI companies' dual role of helping to regulate their innovations while continuing to develop them raises serious questions about whether they can truly place safety over profit. As Marietje Schaake, international policy director at Stanford's Cyber Policy Center, wrote in the Financial Times, "Imagine the chief executive of JPMorgan explaining to Congress that because financial products are too complex for lawmakers to understand, banks should decide for themselves how to prevent money laundering, enable fraud detection and set liquidity to loan ratios. He would be laughed out of the room."[115] Just as lawmakers are not experts in the other industries for which they craft legislation, she and others argue, they must be careful not to be captured by AI CEOs who use technical complexity to advance their own regulatory interests.

The number of AI-related lawsuits rose sharply.

New AI tools began to test existing legal frameworks in 2023, cutting across domains such as data privacy, copyright, and patent law. So far, the leading cause of AI-related lawsuits is data privacy, perhaps unsurprising considering the billions of data points from across the internet that generative AI models are trained on. AI companies have thus far resisted revealing their training data, despite long-running calls for greater transparency and concerns about user privacy from academics and civil society groups.[116] A reported hack of ChatGPT in
114、 early December highlighted some basis for these concerns:when asked to repeat certain words like“poem”ad infinitum,ChatGPT even-tually began to spit out sensitive training data,includ-ing phone numbers,names,and addresses.117 OpenAI has since closed this loophole by restricting ChatGPT from repeati
115、ng words forever.118 It also announced in The development of AI is as fundamental as the creation of the micropro-cessor,the personal computer,the Internet,and the mobile phone.It will change the way people work,learn,travel,get health care,and communi-cate with each other.Entire industries will reo
116、rient around it.Businesses will distinguish themselves by how well they use it.110Philanthropist and Microsoft c o-founder Bill GatesMarch 21,2023“August that website owners can now block the compa-nys data-scraping web crawler,GPTBot,from accessing their pages and data for training purposes.119Open
117、AI has been one of many AI companies to face several lawsuits in 2023 due to alleged privacy and intellectual property violations.One class action law-suit against OpenAI and Microsoft made headlines in June 2023 when it claimed that ChatGPT stole millions of sensitive data from hundreds of millions
118、 of inter-net users during training,including from social media accounts,medical records,and personal accounts.120 Though the complaint was dismissed in court,it was not the only one of its kind;a second lawsuit was filed against the same two companies in September and another,nearly identical,lawsu
119、it was filed against Google parent company Alphabet in July.121 As of the time of writing,both cases are ongoing,though all three companies have moved to dismiss them in court.The implications of AI for copyright law have also come under greater scrutiny in the last year.In 2023,Getty Images filed l
120、awsuits in the United States and the United Kingdom against generative AI company Stability AI for training its model,Stable Diffusion,on copyrighted data and metadata.122 OpenAI,Alphabet,and Microsoft faced multiple class action cases from authors whose work,they claim,has been similarly used for t
121、raining purposes without proper licensing or compensation.123 Text-to-image AI generators like Midjourney,DreamStudio,and DreamUp were the subject of a similar lawsuit filed in January by visual artists whose work has been used and replicated with-out permission.124 Several companies,such as Meta,Mi
122、crosoft,and Google,argued they should not have to pay to train AI models on copyrighted work,citing arguments such as AI training“is like the act of reading a book”and curbing access to copyrighted access would chill AI development.125 These kinds of arguments ask funda-mental questions about AI reg
123、ulation,like whether the unique characteristics of the technology should exempt it from certain laws.Finally,the U.S.Federal Court Circuit upheld a prec-edent in patent law when it ruled in August that AI systems are not eligible to own patents for their“inventions”as they are not human beings.126 T
124、he ver-dict confirmed earlier rulings made by the U.S.Patent Office and the U.S.Copyright Office following years of legal disputes by U.S.computer scientist Stephen Thaler,who first tried to copyright an image produced by his AI system in 2019.Federal Circuit Judge Leonard Stark announced the decisi
125、on in August saying“there is no ambiguity:the Patent Act requires that inventors must be natural persons;that is,human beings.”127 The ruling reflects similar decisions made in the United Kingdom and the European Union.128The Department of Defense took steps to safely adopt and deploy AI in its weap
126、ons systems.The United States was the first country to codify a policy on autonomy in weapon systems when it first adopted DOD Directive 3000.09 in 2012.129 The policy did not ban the development or use of autonomous weaponsindeed many types of autonomous weapons,such as some missile defense and cyb
127、er weapons,had already been in use for decades.What 3000.09 did was place new policy and process requirements for the devel-opment and use of autonomous weapons in offensive and kinetic constructs.However,the policy was widely misunderstood as requiring“a human in the loop”and thus banning fully aut
128、onomous systems,which it did not do.In January 2023,the DOD published an updated 3000.09 that sought to address the confusion and to account for the rise in machine learning AI systems.130 Among other things,it formalized that adherence to the Department of Defense(DOD)s AI Ethical Princi-ples was a
129、 requirement at all stages of development and fielding.Reaffirmed by Deputy Secretary of Defense Kathleen Hicks,the directive mandates rigorous test-ing,reviews,and senior-level scrutiny for autonomous systems,aligning with the DODs Ethical Principles and the Responsible AI(RAI)Strategy.131On the in
130、ternational front,the U.S.Bureau of Arms Control,Deterrence,and Stability unveiled a ground-breaking“Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.”132 This declaration is fully consistent with the ideas under-pinning DOD 3000.09 and seeks to make them int
131、erna-tional,particularly among U.S.allies.Introduced at the Responsible AI in the Military Domain Summit,this ini-tiative has since garnered signatures of support from 49 countries,promoting non-legally-binding guidelines for secure AI deployment in defense contexts.133.and to massively accelerate t
132、he DODs adoption of AI-enabled autonomous systems.On August 28,2023,Deputy Secretary of Defense Kath-leen Hicks announced the Replicator initiative.Rep-licator aims to“field attritable autonomous systems 1514at scale of multiple thousands,in multiple domains,within the next 18-to-24 months.”134 The
133、DOD intends for the initiative to counter Chinas perceived advan-tage in military“mass”by rapidly acquiring and field-ing large quantities of small,attritable,and relatively cheap drones.Reports suggest that this initiative was informed by analysis of the war in Ukraine.135Replicator will not solely
134、 focus on acquiring such sys-tems;it will also build the processes that allow for“replicating the rapid adoption and delivery of tech-nology.”136 This line of effort will try and build a pipe-line that will enable similar efforts in the future.The Defense Innovation Unit(DIU)will play a central role
135、 in Replicator,both working with Indo-Pacific Command(INDOPACOM)end users to establish what capabilities the warfighter needs and exploring which industry partners can provide those capabilities within the relevant time frame.137 At least for now,the DOD is not requesting any additional funding for
Replicator; instead, the initiative will take advantage of “existing funding, existing programming lines, and existing authorities.”138 DOD officials claim the first Replicator systems will be delivered sometime between February and August 2025.139

Corporate interest in AI surged, but the “Magnificent Seven” AI tech giants continued to capture stock markets and global attention.

AI was the magic word of 2023 for firms worldwide. Business intelligence company CB Insights reported that private investment in generative AI reached an all-time high last year, with an almost 450 percent increase from $3.2 billion in 2022 to $17.4 billion by Q3 2023.140 As of March 2023, mentions of AI in executives' earnings calls (where CEOs discuss corporate performance and strategies with investors and leading analysts) had grown by 77 percent compared to 2022.141 By September, that figure had accelerated by a further 366 percent.142 EY's October CEO Outlook Pulse Survey, which interviews 1,200 CEOs globally every quarter, revealed that nearly all CEOs (99 percent) are conscious of AI's potential to disrupt current business models and are planning investments in generative AI, either by redirecting capital from other projects (69 percent) or by raising new capital (23 percent).143 However, the rapid change of the AI ecosystem is a significant barrier to companies' adoption of the technology: 26 percent stated that the breakneck speed at which AI is developing makes investments risky, while another 66 percent stated that the surrounding hype makes it difficult to ascertain whether other companies have legitimate AI capabilities for partnerships and mergers and acquisitions. The extent to which companies effectively adopt AI tools into their business models once the hype settles remains to be seen in 2024.

In contrast, leading technology companies saw enormous returns in 2023. The “Magnificent Seven” (a term given to Meta, Amazon, Apple, Microsoft, Google, Tesla, and Nvidia) continued to dominate the stock market last year, breaking the record for holding the largest proportion of the S&P 500's market cap at 29 percent.144 Goldman Sachs reported that the seven companies' stocks accounted for 71 percent of total returns in 2023, while the other 493 stocks made up just 6 percent.145 While comparably smaller AI companies still attracted a surge in investment, including 15 new AI unicorns in 2023 as of Q3, the lion's share of attention in AI was focused on tech giants in 2023.146

Rapid technological progress showed no signs of slowing down.

2023 was a groundbreaking year for AI, in terms of both raw model capability and the technology's capacity to deliver remarkable breakthroughs in a growing number of scientific fields. The end of the year saw a leap in generative AI capabilities as Google released its new multimodal system, Gemini, in December, possibly foreshadowing a new era for LLMs.147 “Multimodal” describes models that can process data from a variety of inputs, such as text, video, image, or audio, and can produce outputs in a similar array of forms. Unlike other LLMs such as OpenAI's GPT-4, Gemini is natively multimodal, meaning it was trained on diverse inputs.148

Last year also brought significant breakthroughs in AI chip capabilities. Nvidia reached a technological milestone in May 2023 when it unveiled the world's first 100-terabyte GPU memory system, the DGX GH200.149 The new system is a significant advancement over the company's earlier models, particularly the DGX A100, and it provides over 500 times more memory to the GPU shared memory programming model compared to a single DGX A100 320 GB system. These advanced chips open the potential for faster, more efficient AI models and computing systems: advances to look forward to in 2024.

Finally, AI continued to revolutionize an increasingly diverse set of scientific fields. In 2023, AI models from Google DeepMind made significant breakthroughs in both materials science, predicting the “ingredients and properties of another 2.2 million materials,” and meteorology, with its GraphCast model.150 Earlier in the year, Huawei published a similar weather model called Pangu-Weather.151 These breakthroughs could reveal new materials and methods to construct novel batteries, superconductors, and catalysts.

Key figures:
- 29%: The Magnificent Seven's record-breaking share of the S&P 500's market cap in 2023.
- 71%: The Magnificent Seven's share of the S&P 500's total returns in 2023.
- 450%: The increase in mentions of AI in executives' earnings calls as of early 2023.
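The investment-growth figure cited above can be reproduced with simple arithmetic. The sketch below uses only the CB Insights dollar figures quoted in the text ($3.2 billion in 2022 to $17.4 billion by Q3 2023); it is an illustration, not part of the original report:

```python
# Percentage increase in private generative AI investment, per the
# CB Insights figures cited in the text: $3.2B (2022) -> $17.4B (Q3 2023).
start_b = 3.2   # billions of USD, full-year 2022
end_b = 17.4    # billions of USD, through Q3 2023

pct_increase = (end_b - start_b) / start_b * 100
print(f"Increase: {pct_increase:.1f}%")
```

The exact result is about 444 percent, which the report rounds up to "almost 450 percent."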
2023 MAPPING AI EVENTS
A Global Perspective on Key AI Summits and Meetings

- Jan 3, Ukraine: Ukraine's digital transformation minister, Mykhailo Fedorov, states that there is “potential” for introducing fully autonomous killer drones into combat with Russia in the next six months. Though no confirmed evidence of fully autonomous weapons use follows, AI features prominently in the war.152
- Jan 31, Washington, D.C., and New Delhi*: The White House launches the U.S.-India Initiative on Critical and Emerging Technologies (iCET), a partnership with India to compete with China in semiconductor chips, military equipment, and AI.153

*Joint initiative by the United States and India.

- Feb 27, Kigali, Rwanda: The African Union High-Level Panel on Emerging Technologies and the African Union Development Agency convene AI experts to finalize the drafting of the African Union AI Continental Strategy for Africa.154
- Mar 10, Montevideo, Uruguay: The Montevideo Declaration on Artificial Intelligence is announced at the 2023 Latin American Meeting on Artificial Intelligence (March 6-10), urging governments and companies to safely develop AI and AI regulation for Latin American countries.155
- Apr 30, Takasaki, Japan: The G7 concludes its Digital and Tech Ministers' Meeting in Takasaki, declaring member states' commitment to an internationally cooperative, interoperable, and risk-based approach to AI governance.156
- May 19-21, Hiroshima, Japan: G7 leaders gather in Hiroshima to discuss inclusive governance for AI at the 2023 summit.157
- Jul 18, New York City, United States: The UN Security Council convenes to discuss AI risks for the first time.158
- Sep 13, Washington, D.C., United States: The first AI Insight Forum session is held on Capitol Hill. Led by Senate Majority Leader Chuck Schumer, the meeting convenes senators, prominent tech CEOs, and civil society figures to discuss U.S. government oversight of AI.159
- Oct 18, Beijing, China: China announces its Global AI Governance Initiative at the 2023 Belt and Road Forum.160
- Nov 1-2, Bletchley Park, United Kingdom: The United Kingdom's AI Safety Summit convenes political and technology leaders at Bletchley Park.161
- Dec 8, Brussels, Belgium: In a global first, the European Union establishes a landmark comprehensive AI regulation by passing the EU AI Act.162
- Dec 13, Dhaka, Bangladesh: Generative AI is widely used by multiple political parties in the lead-up to Bangladesh's 2024 elections, as deepfakes and AI-generated misinformation circulate on social
media and news outlets.163

2024 THE YEAR AHEAD
Ten Developments to Monitor in 2024

01. How effectively can high-level global AI governance talks translate into tangible impact?
Global AI governance talks resulted in significant high-level commitments in 2023, from the Bletchley Declaration signed at the United Kingdom's AI Safety Summit to the Hiroshima AI Process set in motion under the Japanese G7 presidency last year. How will such commitments translate into actionable policies and enforceable regulations in 2024?

02. How will third-party red teaming work in practice?
Biden's AI executive order requires that AI developers evaluate their frontier models through a practice called red teaming, in which adversarial attacks are simulated to assess the vulnerabilities and robustness of a model. Developers will be required to report their findings to the government before the model is released to the public. How will third-party red teaming work in practice, and will the government be able to use the findings to keep up with the growing size and capabilities of LLMs?

03. Can Congress pass comprehensive AI legislation?
President Biden's AI executive order was a step toward U.S. AI regulation, but its efficacy depends upon Congress passing legislation and budget allowances. Will AI regulation remain a largely bipartisan issue, and how quickly can it pass through Congress given the expected tough timing with 2024 elections?

04. How will the Italian G7 presidency meaningfully build upon the Hiroshima AI Process this year?
In 2023, the Japanese presidency put AI on the G7 agenda and committed to coordinating AI governance efforts under the Hiroshima AI Process. Interoperability between governance frameworks is a daunting yet essential task for avoiding a fragmented global AI landscape, and one that Italy has signaled it will pursue this year. What steps will the Italian presidency take to move the Hiroshima AI Process forward and to deliver the G7's commitment to harmonized AI regulation?

05. Will scaling up AI continue to deliver new capability breakthroughs?
The substantial scaling up of LLMs produced dramatic performance improvements in 2023. Will improvements continue to grow exponentially this year, or will developers hit diminishing returns without making fundamental architectural improvements?

06. Will the DOD's Replicator initiative get the funding it needs?
The DOD's Replicator initiative is a comprehensive effort by the United States to accelerate the delivery of AI-enabled autonomous systems to warfighters at speed and scale. The program aims to address strategic competition with China, a subject of critical importance. What will happen if the DOD receives only token funding?

07. How will U.S. and allied export controls affect China's progress in AI and semiconductors?
Huawei's late August release of the Mate 60 Pro raised serious questions about the enforceability of U.S. and allied export controls and their impact on China's technological trajectory. Can the United States and its allies effectively enforce export restrictions, and how might this influence China's technological trajectory in 2024?

08. How will AI impact major elections this year?
2024 is the biggest election year in history, with over half of the world's population heading to the polls.164 It will be the first year that political parties must define their stance regarding AI, and some voters may see AI regulation featured as a question on their ballots. Both would have been almost unthinkable only a year ago. It will also be the first election cycle in which AI will likely play a significant role in election campaigning, interference, and spreading disinformation. How might the world's busiest election year fare with AI in the mix?

09. Will open-source AI models continue to be available at leading-edge performance?
The open-versus-closed-source debate largely revolves around whether AI development should prioritize transparency, collaboration, and accessibility (open source) or proprietary control, safety, competitive advantage, and intellectual property protection (closed source). Who will emerge victorious in this debate, and what implications could this have for AI's future development, regulation, and democratization?

10. Will tech giants continue to dominate AI development?
The AI landscape is currently dominated by a handful of key players like Google, OpenAI, and Meta. Will tech giants continue to dominate AI development and market returns in 2024, or will we see a more
diversified landscape with winners emerging in niche industries?

GLOSSARY
Key Definitions for 2024

Agent: A system that embodies AI and can perceive its environment, make decisions, and act to achieve specific objectives. Agents are used in the field of reinforcement learning; for example, a self-driving car.

AI chatbot: A computer program that uses AI to converse with, inform, and assist users in a human-like way; for example, Siri and ChatGPT. These may or may not use large language models as the source of their capabilities.

Alignment: The goal of designing AI to act in accordance with human values and desired outcomes.

Artificial general intelligence (AGI): A theorized AI technology that exceeds human abilities in a broad range of intellectual fields and can learn flexibly.

Artificial intelligence (AI): Computer systems that can perform tasks that mimic human intelligence; for example, learning from experience (machine learning), understanding natural language, and recognizing patterns.

Automation: The use of technology to perform tasks without human intervention.

Deep learning: A subset of machine learning where artificial neural networks with many layers are trained to perform tasks by learning patterns from large amounts of data.

Diffusion: A method by which generative models simulate the gradual evolution of images, learning and applying statistical patterns to generate new, complex visual content.

Emergent capabilities: Unexpected functionalities or behaviors that arise from the interaction and complexity of an AI system's components; for example, an AI trained to play a video game may discover an unconventional route not explicitly taught during training.

Existential risk: The potential for AI systems to pose catastrophic and irreversible threats to the trajectory of human civilization, especially related to extinction of the human species.

Explainability: The ability to understand how an AI system makes decisions; closely linked to transparency and accountability.

Foundation model: A pre-trained, generalized AI model that can be built upon to create more specialized models; for example, Google's BERT and OpenAI's GPT series.

Generative AI: AI systems that generate new content such as images, text, audio, and video.

Generative Pre-trained Transformer (GPT): A type of large language model developed by OpenAI that is trained on internet data to process and generate text; GPTs can perform a variety of natural language tasks such as writing code, generating images, and conversing in human-like dialogue.

Graphics Processing Unit (GPU): A specialized electronic circuit designed to accelerate the processing of graphics and parallel computing tasks; GPUs are often used to enhance the performance of deep learning models by facilitating multiple calculations simultaneously. Though originally the same GPU chips were used for computer graphics and AI applications, more recently chips have been introduced that specifically target AI applications, leading some to refer to them as AI chips rather than GPUs.

Hallucination: The generation of inaccurate information by a model; often results from overfitting or exposure to biased data.

Hype: Sensationalized claims surrounding the capabilities and impact of AI, which can lead to inflated public perceptions that may not align with the current state of AI; the term is used to caution against unrealistic expectations.

Large language model (LLM): A type of AI model that has been trained on large amounts of text data to understand and generate human-like language; for example, Baidu's Ernie Bot and Anthropic's Claude.

Natural language processing (NLP): A subfield of AI focused on training computers to understand, interpret, and generate human language.

Neural network: A computational model inspired by the structure of the human brain, consisting of interconnected nodes (neurons) that work together to process and analyze data.

Open sourcing: Making the source code and development details of a software project publicly available; this allows others to contribute to, modify, and use the code in their own projects.

Prompt: A specific instruction given to an AI model to generate a desired output, guiding the model's behavior and responses.

Red teaming: A method where a “red team” (traditionally security engineers) simulates adversarial attacks and challenges to evaluate the security, robustness, and vulnerabilities of AI systems by trying to make the system produce undesired outcomes.

Reinforcement learning: A type of machine learning where an agent learns to make decisions by receiving feedback (rewards or penalties) on its actions.

Supervised learning: A type of machine learning where an AI model is trained on a labeled dataset; the model learns to map inputs to corresponding outputs and generalizes this mapping to make predictions on new data.

Transformer: A state-of-the-art neural network architecture used in advanced machine learning research. Transformers use a technique called “self-attention” to quickly learn the contextual relationships between data points (e.g., words), allowing them to generate more accurate predictions faster. Initially developed for natural language processing, they are now used for a variety of applications such as computer vision, fraud prevention, and drug discovery.

Unsupervised learning: A type of machine learning where a model is given training input data without explicit labels. The algorithm explores and finds patterns on its own, and the correctness of the model is often determined by how well it achieves its intended purpose.

ABOUT THE AUTHORS

Gregory C. Allen is the director of the Wadhwani Center for AI and Advanced Technologies at the Center for Strategic and International Studies (CSIS). Mr. Allen's expertise and professional experience span AI, robotics, semiconductors, space technology, and national security. Prior to joining CSIS, he was the director of strategy and policy at the Department of Defense (DOD) Joint Artificial Intelligence Center, where he oversaw development and implementation of the DOD's AI Strategy, developed mechanisms for AI governance and ethics, and led frequent diplomatic engagements with governments and militaries in Europe and the Indo-Pacific regions, including China. Prior to working at the DOD, he was the head of market analysis and competitive strategy at Blue Origin, a space technology manufacturer and space launch services provider. Mr. Allen's writing and commentary has appeared in the New York Times, the Washington Post, The Economist, Nature, CNN, Foreign Policy, and WIRED. He holds a joint MPP/MBA degree from the Harvard Kennedy School of Government and the Harvard Business School.

Georgia Adamson is a research assistant at the Wadhwani Center for AI and Advanced Technologies at CSIS. She provides research and program support on a range of issues, including emerging global AI governance and policy, advanced technology supply chain security, and the military use of AI technology. Prior to CSIS, Georgia graduated from the University of Cambridge with an MPhil in international development, where she focused on emerging technology and industrial policy in developing economies. She holds an undergraduate degree in English literature.

ENDNOTES
2023 Year in Review

1. Asha Hemrajani, “China's New Legislation on Deepfakes: Should the Rest of Asia Follow Suit?,” The Diplomat, March 8, 2023, https:/; and Laney Zhang, “China: Provisions on Deep Synthesis Technology Enter into Effect,” Library of Congress, April 26, 2023, https://www.loc.gov/item/global-legal-monitor/2023-04-25/china-provisions-on-deep-synthesis-technology-enter-into-effect/.

2. “Microsoft and OpenAI extend partnership,” Microsoft Blog, January 1, 2023, https:/; and Cade Metz and Karen Weise, “Microsoft to Invest $10 Billion in OpenAI, the Creator of ChatGPT,” New York Times, January 23, 2023, https:/.

3. Jim Garamone, “DoD Updates Autonomy in Weapons System Directive,” U.S. Department of Defense, January 25, 2023, https://www.defense.gov/News/News-Stories/Article/Article/3278065/dod-updates-autonomy-in-weapons-system-directive/.

4. Laurie E. Locascio, “Launch of the NIST AI Risk Management Framework,” NIST, January 26, 2023, https://www.nist.gov/speech-testimony/launch-nist-ai-risk-management-framework.

5. The White House, “Statement by National Security Advisor Jake Sullivan on the New U.S.-EU Artificial Intelligence Collaboration,” Press release, January 27, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/01/27/statement-by-national-security-advisor-jake-sullivan-on-the-new-u-s-eu-artificial-intelligence-collaboration/; and European Commission, “The European Union and the United States of America strengthen cooperation on research in Artificial Intelligence and computing for the Public Good,” Press release, January 27, 2023, https://digital-strategy.ec.europa.eu/en/news/european-union-and-united-states-america-strengthen-cooperation-research-artificial-intelligence.

6. Cagan Koc and Jenny Leonard, “Biden Wins Deal with Netherlands, Japan on China Chip Export Limit,” Bloomberg, January 27, 2023, https:/.

7. The White House, “United States and India Elevate Strategic Partnership with the initiative on Critical and Emerging Technology (iCET),” Fact sheet, January 31, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/01/31/fact-sheet-united-states-and-india-elevate-strategic-partnership-with-the-initiative-on-critical-and-emerging-technology-icet/.

8. Krystal Hu, “ChatGPT sets record for fastest-growing user base - analyst note,” Reuters, February 2, 2023, https:/.

9. David Vergun, “U.S. Endorses Responsible AI Measures for Global Militaries,” U.S. Department of Defense, November 22, 2023, https://www.defense.gov/News/News-Stories/Article/Article/3597093/us-endorses-responsible-ai-measures-for-global-militaries/; and Toby Sterling, “US issues declaration on responsible use of AI in the military,” Reuters, February 16, 2023, https:/.

10. “LLaMA: A foundational, 65-billion-parameter large language model,” Meta Blog, February 24, 2023, https:/.

11. Toby Sterling, Karen Freifeld, and Alexandra Alper, “Dutch to restrict semiconductor tech exports to China, joining US effort,” Reuters, March 8, 2023, https:/.

12. 14, 2023, https:/.

13. Stephen Nellis and Jane Lee, “Nvidia tweaks flagship H100 chip for export to China as H800,” Reuters, March 21, 2023, https:/.

14. “Pause Giant AI Experiments: An Open Letter,” Future of Life Institute, March 22, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

15. Michelle Toh and Junko Ogura, “Japan joins the US and Europe in chipmaking curbs on China,” CNN Business, March 31, 2023, https:/.

16. Laney Zhang, “China: Cyberspace Authority Releases Draft Measures Regulating Generative Artificial Intelligence,” Library of Congress, July 5, 2023, https://www.loc.gov/item/global-legal-monitor/2023-07-04/china-cyberspace-authority-releases-draft-measures-regulating-generative-artificial-intelligence/; and Seaton Huang et al., “Translation: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) April 2023,” DigiChina, Stanford University, April 12, 2023, https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/.

17. “Ministerial Declaration: The G7 Digital and Tech Ministers' Meeting,” G7/G20 Documents Database, April 30, 2023, https://g7g20-documents.org/database/document/2023-g7-japan-ministerial-meetings-ict-ministers-ministers-language-ministerial-declaration-the-g7-digital-and-tech-ministers-meeting#section-1.

18. Dawn Chmielewski and Lisa Richwine, “Plagiarism machines: Hollywood writers and studios battle over the future of AI,” Reuters, May 3, 2023, https:/.

19. The White House, “Readout of White House Meeting with CEOs on Advancing Responsible Artificial Intelligence Innovation,” Press release, May 4, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.

20. “Oversight of A.I.: Rules for Artificial Intelligence,” Subcommittee on Privacy, Technology, and the Law, U.S. Senate Committee on the Judiciary, May 16, 2023, https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence.

21. The White House, “G7 Hiroshima Leaders' Communiqué,” Press release, May 20, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/20/g7-hiroshima-leaders-communique/.

22. “Statement on AI Risk: AI experts and public figures express their concern about AI risk,” Center for AI Safety, accessed December 15, 2023, https://www.safe.ai/statement-on-ai-risk#open-letter; and Kevin Roose, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” New York Times, May 30, 2023, https:/.

23. Akash Sriram and Samrhitha A, “Nvidia briefly joins $1 trillion valuation club,” Reuters, May 30, 2023, https:/.

24. “Sen. Chuck Schumer Launches SAFE Innovation in the AI Age at CSIS,” Center for Strategic and International Studies, June 21, 2023, https://www.csis.org/analysis/sen-chuck-schumer-launches-safe-innovation-ai-age-csis.

25. NIST, “Biden-Harris Administration Announces New NIST Public Working Group on AI,” Press release, June 22, 2023, https://www.nist.gov/news-events/news/2023/06/biden-harris-administration-announces-new-nist-public-working-group-ai.

26. Isaiah Poritz, “OpenAI Legal Troubles Mount With Suit Over AI Training on Novels,” Bloomberg Law, June 29, 2023, https:/.

27. United Nations, “International Community Must Urgently Confront New Reality of Generative, Artificial Intelligence, Speakers Stress as Security Council Debates Risks, Rewards,” Press release, July 18, 2023, https://press.un.org/en/2023/sc15359.doc.htm.

28. “Meta and Microsoft Introduce the Next Generation of Llama,” Meta N
219、ews,July 18,2023,https:/ The White House,“Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,”Fact sheet,July 21,2023,https:/www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-a
220、dministration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.30 Tim Kelly et al.,“As Japan aligns with U.S.chip curbs on China,some in Tokyo feel uneasy,”Reuters,July 24,2023,https:/ Model Forum,”OpenAI Blog,July 26,2023,https:/ Nvidia,“
221、NVIDIA Unveils Next-Generation GH200 Grace Hopper Superchip Platform for Era of Accelerated Computing and Generative AI,”Press release,August 8,2023,https:/ Order on Addressing United States Investments in Certain National Security Technologies and Products in Countries of Concern,”The White House,A
222、ugust 9,2023,https:/www.whitehouse.gov/briefing-room/presidential-actions/2023/08/09/executive-order-on-addressing-united-states-investments-in-certain-national-security-technologies-and-products-in-countries-of-concern/.34 U.S.Department of Defense,“DOD Announces Establishment of Generative AI Task
223、 Force,”Press release,August 10,2023,https:/www.defense.gov/News/Releases/Release/Article/3489803/dod-announces-establishment-of-generative-ai-task-force/.35 Joseph Clark,“Hicks Underscores U.S.Innovation in Unveiling Strategy to Counter Chinas Military Buildup,”U.S.Department of Defense News,August
224、 28,2023,https:/www.defense.gov/News/News-Stories/Article/Article/3507514/hicks-underscores-us-innovation-in-unveiling-strategy-to-counter-chinas-militar/.36 Gregory C.Allen,In Chip Race,China Gives Huawei the Steering Wheel(Washington,DC:Center for Strategic and International Studies,October 6,2023
225、),https:/www.csis.org/analysis/chip-race-china-gives-huawei-steering-wheel-huaweis-new-smartphone-and-future.37 Josh Ye and Urvi Manoj Dugar,“China lets Baidu,others launch ChatGPT-like bots to public,tech shares jump,”Reuters,August 31,2023,https:/ Pieter Haeck and Barbara Moens,“Dutch cozy up to U
226、S with controls on exporting microchip kit to China,”Politico,September 1,2023,https:/www.politico.eu/article/the-netherlands-limits-chinese-access-to-chips-tools-asml/.39 Senate Democrats Newsroom,“Majority Leader Schumer Opening Remarks for the Senates Inaugural AI Insight Forum,”Speech transcript
227、,September 13,2023,https:/www.democrats.senate.gov/newsroom/press-releases/majority-leader-schumer-opening-remarks-for-the-senates-inaugural-ai-insight-forum.40 Joseph Clark,“AI Security Center to Open at National Security Agency,”U.S.Department of Defense,September 28,2023,https:/www.defense.gov/Ne
228、ws/News-Stories/Article/Article/3541838/ai-security-center-to-open-at-national-security-agency/.41 Arjun Kharpal,“China Targets 50%Boost in Computing Power as AI Race with U.S.Ramps Up,”CNBC,October 9,2023,https:/ Bureau of Industry and Security,“Public Information on Export Controls Imposed on Adva
229、nced Computing and Semiconductor Manufacturing Items to the Peoples Republic of China(PRC)in 2022 and 2023,”Press release,October 17,2023,https:/www.bis.doc.gov/index.php/about-bis/newsroom/2082.292843 William James and Muvija M.,“China accepts invitation to AI summit in Britain-Deputy UK PM,”Reuter
230、s,October 26,2023,https:/ Ministers Speech on AI:26 October 2023,”GOV.UK,October 26,2023,https:/www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023.45 The White House,“President Biden Issues Executive Order on Safe,Secure,and Trustworthy Artificial Intelligence,”Fact sheet,Oc
231、tober 30,2023,https:/www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.46 The White House,“G7 Leaders Statement on the Hiroshima AI Process,”Press release,October 30,2023,https:/ww
232、w.whitehouse.gov/briefing-room/statements-releases/2023/10/30/g7-leaders-statement-on-the-hiroshima-ai-process/.47“AI Safety Summit 2023,”Topical Events,GOV.UK,November 2023,https:/www.gov.uk/government/topical-events/ai-safety-summit-2023;and“AI Safety Summit 2023:The Bletchley Declaration,”Policy
233、paper,GOV.UK,November 1,2023,https:/www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration.48 U.S.Department of Commerce,“Remarks by Commerce Secretary Gina Raimondo at the AI Safety Summit 2023 in Bletchley,England,”Speech transcript,November 2,2023,https:/merce.gov/news
234、/speeches/2023/11/remarks-commerce-secretary-gina-raimondo-ai-safety-summit-2023-bletchley;and Paul Sandle and David Shepardson,“US to launch its own AI safety institute,”Reuters,November 1,2023,https:/ Joseph Clark,“DOD Releases AI Adoption Strategy,”U.S.Department of Defense News,November 2,2023,h
235、ttps:/www.defense.gov/News/News-Stories/Article/Article/3578219/dod-releases-ai-adoption-strategy/.50 Jake Siegel,“With a systems approach to chips,Microsoft aims to tailor everything from silicon to service to meet AI demand,”Microsoft Source,November 22,2023,https:/ Cade Metz et al.,“Sam Altman is
236、 Reinstated as OpenAIs Chief Executive,”New York Times,November 22,2023,https:/ Anna Tong et al.,“OpenAI researchers warned board of AI breakthrough ahead of CEO ouster,sources say,”Reuters,November 23,2023,https:/ Guy Faulconbridge,“Putin Says West Cannot Have AI Monopoly so Russia Must up Its Game
237、,”Reuters,November 24,2023,https:/ Nick Clegg,“How Meta Is Planning for Elections in 2024,”Meta,December 14,2023,https:/ Milad Nasr et al.,“Scalable Extraction of Training Data from(Production)Language Models,”arXiv,November 28,2023,https:/arxiv.org/abs/2311.17035.56 United States Patent and Tradema
238、rk Office“USPTO Announces Semiconductor Technology Pilot Program in Support of Chips for America Program,”Press release,November 30,2023,https:/www.uspto.gov/about-us/news-updates/uspto-announces-semiconductor-technology-pilot-program-support-chips-america.57“Gemini,”Google DeepMind,December 6,2023,
239、https:/deepmind.google/technologies/gemini/#capabilities.58 European Parliament,“Artificial Intelligence Act:Deal on Comprehensive Rules for Trustworthy Ai,”Press release,December 9,2023,https:/www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensi
240、ve-rules-for-trustworthy-ai.59 Benjamin Parkin,“Deepfakes for$24 a month:how AI is disrupting Bangladeshs election,”Financial Times,December 14,2023,https:/ Michael M.Grynbaum and Ryan Mac,“The Times Sues OpenAI and Microsoft Over A.I.Use of Copyrighted Work,”New York Times,December 27,2023,https:/
Eduardo Baptista, “Nvidia launches new gaming chip for China to comply with US export controls,” Reuters, December 29, 2023, https:/

TAKEAWAYS

62 “Sen. Chuck Schumer Launches SAFE Innovation,” CSIS.
63 “Statement on AI Risk,” Center for AI Safety.
64 “How concerned, if at all, are you about the possibility that AI will cause the end of the human race on earth?,” YouGov US, April 3, 2023, https:/
you support or oppose a six-month pause on some kinds of AI development?,” YouGov US, April 3, 2023, https:/
of A.I.,” Subcommittee on Privacy, Technology, and the Law.
66 “WATCH: OpenAI CEO Sam Altman testifies before Senate Judiciary Committee,” PBS News Hour, May 16, 2023, video, 02:50:06, https://www.pbs.org/newshour/politics/watch-live-openai-ceo-sam-altman-testifies-before-senate-judiciary-committee; and Cat Zakrzewski et al., “CEO behind ChatGPT warns Congress AI could cause harm to the world,” Washington Post, May 16, 2023, https:/
Bletchley Declaration,” GOV.UK.
68 Koc and Leonard, “Biden Wins Deal with Netherlands, Japan”; and Gregory C. Allen and Emily Benson, Clues to the U.S.-Dutch-Japanese Semiconductor Export Controls Deal are Hiding in Plain Sight (Washington, DC: Center for Strategic and International Studies, March 1, 2023), https://www.csis.org/analysis/clues-us-dutch-japanese-semiconductor-export-controls-deal-are-hiding-plain-sight.
69 Allen and Benson, Hiding in Plain Sight.
70 Ibid.
71 Bureau of Industry and Security, U.S. Department of Commerce, “Commerce Strengthens Restrictions on Advanced Computing Semiconductors, Semiconductor Manufacturing Equipment, and Supercomputing Items to Countries of Concern,” Press release, October 17, 2023, https://www.bis.doc.gov/index.php/documents/about-bis/newsroom/press-releases/3355-2023-10-17-bis-press-release-acs-and-sme-rules-final-js/file.
72 Emily Benson, “Updated October 7 Semiconductor Export Controls,” CSIS, Commentary, October 18, 2023, https://www.csis.org/analysis/updated-october-7-semiconductor-export-controls.
73 Sebastian Buckup and Greta Keenan, “Technology to watch: 5 key trends for 2023,” World Economic Forum, January 19, 2023, https://www.weforum.org/agenda/2023/01/5-technology-trends-to-watch-in-2023/#:~:text=1%20Green%20hydrogen%2C%20nuclear%20fusion%20and%20other%20green,artificial%20intelligence%20to%20get%20even%20smarter%20in%202023.
74 Cheyenne DeVon, “On ChatGPT's one-year anniversary, it has more than 1.7 billion users - here's what it may do next,” CNBC, November 30, 2023, https:/
organisations launched 79 AI large language models since 2020, report says,” Reuters, May 30, 2023, https:/
76 Nestor Maslej et al., The AI Index 2023 Annual Report (Stanford, CA: AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, April 3, 2023), 50-57, https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf; and David Nield, “How ChatGPT and other LLMs Work - and Where They Could Go Next,” Wired, April 30, 2023, https:/
77 Gabriel Rivera, “Generative AI tools are quickly running out of text to train themselves on, UC Berkeley professor warns,” Business Insider, July 11, 2023, https:/
78 Maslej, The AI Index 2023 Annual Report, 62-63.
79 Will Knight, “OpenAI's CEO Says the Age of Giant AI Models is Already Over,” Wired, April 17, 2023, https:/
Dario Amodei interview by Dwarkesh Patel, “Dwarkesh Podcast: Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress,” podcast, 01:58:43, timestamp 01:01:37-01:03:36, https:/
Shana Lynch, “2023 State of AI in 14 Charts,” Institute for Human-Centered AI, Stanford University, April 3, 2023, https://hai.stanford.edu/news/2023-state-ai-14-charts.
81 Gina M. Raimondo and Laurie E. Locascio, Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Gaithersburg, MD: National Institute of Standards and Technology, January 2023), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
82 The White House, “Remarks by President Biden and Vice President Harris on the Administration's Commitment to Advancing the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Speech transcript, October 30, 2023, https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/10/30/remarks-by-president-biden-and-vice-president-harris-on-the-administrations-commitment-to-advancing-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
83 “Sen. Chuck Schumer Launches SAFE Innovation,” CSIS.
84 Gabby Miller, “US Senate AI Insight Forum Tracker,” Tech Policy Press, updated December 8, 2023, https://www.techpolicy.press/us-senate-ai-insight-forum-tracker/.
85 Ibid.
86 “Statements From The Ninth Bipartisan Senate Forum on Artificial Intelligence,” Senate Majority Leader Chuck Schumer Newsroom, December 6, 2023, https://www.schumer.senate.gov/newsroom/press-releases/statements-from-the-ninth-bipartisan-senate-forum-on-artificial-intelligence. Gregory Allen's testimony can be read here: https://www.schumer.senate.gov/imo/media/doc/Greg%20Allen%20-%20Statement.pdf.
87 Senate Democrats Newsroom, “Majority Leader Schumer Floor Remarks on the Senate's First AI Insight Forum Taking Place Tomorrow,” September 12, 2023, https://www.democrats.senate.gov/newsroom/press-releases/majority-leader-schumer-floor-remarks-on-the-senates-first-ai-insight-forum-taking-place-tomorrow.
88 The White House, “President Biden Issues Executive Order.”
89 See, for example, Clark, “DOD Releases AI Adoption Strategy”; and U.S. Department of State, “The Department of State Unveils its First-Ever Enterprise Artificial Intelligence Strategy,” Fact sheet, November 9, 2023, https://www.state.gov/the-department-of-state-unveils-its-first-ever-enterprise-artificial-intelligence-strategy/.
90 European Parliament, “Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI,” Press release, December 9, 2023, https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai; and Melissa Heikkilä, “Five things you need to know about the EU's new AI Act,” MIT Technology Review, December 11, 2023, https:/
Lisa O'Carroll, “EU agrees historic deal with world's first laws to regulate AI,” The Guardian, December 8, 2023, https:/
Thierry Breton (@ThierryBreton), “Historic! The EU becomes the very first continent to set clear rules for the use of AI,” X post, December 8, 2023, 5:45 p.m., https:/
Javier Espinoza and Leila Abboud, “EU's new AI Act risks hampering innovation, warns Emmanuel Macron,” Financial Times, December 11, 2023, https:/
Mia Hoffmann, “The EU AI Act: A Primer,” Center for Security and Emerging Technology, September 26, 2023, https://cset.georgetown.edu/article/the-eu-ai-act-a-primer/#:~:text=The%20AI%20Act%20is%20a,systems%20across%20EU%20member%20states.
95 Javier Espinoza, “EU agrees to landmark rules on artificial intelligence,” Financial Times, December 9, 2023, https:/
O'Carroll, “EU agrees historic deal”; and “Europe, a laggard in AI, seizes the lead in its regulation,” The Economist, updated December 12, 2023, https:/
Arjun Kharpal, “China finalizes first-of-its-kind rules governing generative A.I. services like ChatGPT,” CNBC, July 13, 2023, https:/
Josh Ye, “China proposes measures to manage generative AI services,” Reuters, April 11, 2023, https:/
Laney Zhang, “China: Generative AI Measures Finalized,” Library of Congress, July 19, 2023, https://www.loc.gov/item/global-legal-monitor/2023-07-18/china-generative-ai-measures-finalized/.
100 Will Wenshall, “How China's New AI Rules Could Affect U.S. Companies,” Time, September 19, 2023, https:/
Dewey Sim, “Belt and road forum: China launches AI framework, urging equal rights and opportunities for all nations,” South China Morning Post, October 18, 2023, https:/
AI Governance Initiative,” Chinese Ministry of Foreign Affairs, October 20, 2023, https:/
Bill Drexel and Hannah Kelley, “Behind China's Plans to Build AI for the World,” Politico, November 30, 2023, https:/
Wu Zhaohui quoted in Kelvin Chan and Jill Lawless, “Nations pledge to work together to contain catastrophic risks of artificial intelligence,” PBS News Hour, November 1, 2023, https://www.pbs.org/newshour/world/nations-pledge-to-work-together-to-contain-catastrophic-risks-of-artificial-intelligence-at-first-international-safety-summit.
105 “AI Safety Summit Hosted by the UK,” AI Safety Summit, November 2023, https://www.aisafetysummit.gov.uk/.
106 William James and Farouq Suleiman, “Britain publishes Bletchley Declaration on AI safety,” Reuters, November 1, 2023, https:/
The White House, “G7 Leaders' Statement”; and “Hiroshima Process International Guiding Principles for Advanced AI system,” European Commission, October 30, 2023, https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system.
108 “REAIM 2023,” Government of the Netherlands, February 2023, https://www.government.nl/ministries/ministry-of-foreign-affairs/activiteiten/reaim.
109 António Guterres, “Secretary-General's remarks to the Security Council on Artificial Intelligence,” Speeches, July 18, 2023, https://www.un.org/sg/en/content/sg/speeches/2023-07-18/secretary-generals-remarks-the-security-council-artificial-intelligence.
110 Bill Gates, “The Age of AI has begun,” Gates Notes, March 21, 2023, https:/
Senate Democrats Newsroom, “Majority Leader Schumer Opening Remarks.”
112 Ibid.
113 “Oversight of A.I.,” Subcommittee on Privacy, Technology, and the Law.
114 The White House, “President Biden Issues Executive Order.”
115 Marietje Schaake, “We need to keep CEOs away from AI regulation,” Financial Times, June 4, 2023, https:/
Anna Tong, “Stanford Researchers issue AI transparency report, urge tech companies to reveal more,” Reuters, October 18, 2023, https:/
Beatrice Nolan, “Google researchers say they got OpenAI's ChatGPT to reveal some of its training data with just one word,” Business Insider, December 4, 2023, https:/
Aron Mok, “ChatGPT will no longer comply if you ask it to repeat a word forever - after a recent prompt revealed training data and personal info,” Business Insider, December 5, 2023, https:/
Emilia David, “Now you can block OpenAI's web crawler,” The Verge, August 7, 2023, https:/
Platform, https:/
Catherine Thorbecke, “OpenAI, maker of ChatGPT, hit with proposed class action lawsuit alleging it stole people's data,” CNN Business, June 28, 2023, https:/
Emilia David, “A lawsuit alleging privacy violations by OpenAI was dismissed,” The Verge, September 20, 2023, https:/
Brittain, “OpenAI, Microsoft hit with new US consumer privacy class action,” Reuters, September 6, 2023, https:/
Catherine Thorbecke, “Google hit with lawsuit alleging it stole data from millions of users to train its AI tools,” CNN Business, updated July 12, 2023, https:/
Blake Brittain, “Getty Images lawsuit says Stability AI misused photos to train AI,” Reuters, February 6, 2023, https:/
Emilia David, “Getty lawsuit against Stability AI to go to trial in the UK,” The Verge, December 4, 2023, https:/
123 The Authors Guild, “The Authors Guild, John Grisham, Jodi Picoult, David Baldacci, George R.R. Martin, and 13 Other Authors File Class-Action Suit Against OpenAI,” Press release, September 20, 2023, https://authorsguild.org/news/ag-and-authors-file-class-action-suit-against-openai/.
124 Tiana Loving, “Current AI Copyright Cases - Part 1,” Copyright Alliance Blogs, March 30, 2023, https://copyrightalliance.org/current-ai-copyright-cases-part-1/.
125 Wes Davis, “AI companies have all kinds of arguments against paying for copyrighted content,” The Verge, November 4, 2023, https:/
James Vincent, “AI systems can't patent inventions, US federal circuit court confirms,” The Verge, August 8, 2023, https:/
127 Ibid.
128 Ibid.; “AI cannot be the inventor of a patent, appeals court rules,” BBC, September 23, 2023, https:/
European Commission, “Naming AI as inventor on patent applications: EPO Board of Appeal ratifies decision,” News Article, January 10, 2022, https://intellectual-property-helpdesk.ec.europa.eu/news-events/news/naming-ai-inventor-patent-applications-epo-board-appeal-ratifies-decision-2022-01-10_en.
129 U.S. Department
of Defense, “DoD Announces Update to DoD Directive 3000.09, Autonomy In Weapon Systems,” Press release, January 25, 2023, https://www.defense.gov/News/Releases/Release/Article/3278076/dod-announces-update-to-dod-directive-300009-autonomy-in-weapon-systems/.
130 Ibid.
131 DOD Responsible AI Working Council, U.S. Department of Defense Responsible Artificial Intelligence Strategy and Implementation Pathway (Washington, DC: U.S. Department of Defense, June 2020), https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF.
132 “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” Bureau of Arms Control, Deterrence, and Stability, U.S. Department of State, https://www.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/.
133 “REAIM 2023,” Ministry of Foreign Affairs, Netherlands, February 2023, https://www.government.nl/ministries/ministry-of-foreign-affairs/activiteiten/reaim.
134 U.S. Department of Defense, “Deputy Secretary of Defense Kathleen Hicks Keynote Address: The Urgency to Innovate (As Delivered),” Speech transcript, August 28, 2023, https://www.defense.gov/News/Speeches/Speech/Article/3507156/deputy-secretary-of-defense-kathleen-hicks-keynote-address-the-urgency-to-innov/.
135 Phillips Payson O'Brien, “What Ukraine Knows About the Future of War,” The Atlantic, September 14, 2023, https:/
Eric Tegler, “DoD's Mass-Drone Replicator Initiative Is Process Over Production,” Forbes, November 30, 2023, https:/
the Department of Defense Replicator Initiative to Accelerate All-Domain Attritable Autonomous Systems To Warfighters at Speed and Scale,” Defense Innovation Unit, November 30, 2023, https://www.diu.mil/latest/implementing-the-department-of-defense-replicator-initiative-to-accelerate.
138 U.S. Department of Defense, “Deputy Secretary of Defense Kathleen Hicks Remarks: Unpacking the Replicator Initiative at the Defense News Conference (As Delivered),” Speech transcript, September 6, 2023, https://www.defense.gov/News/Speeches/Speech/Article/3517213/deputy-secretary-of-defense-kathleen-hicks-remarks-unpacking-the-replicator-ini/.
139 Jaspreet Gill, “DIU eyeing Feb-Aug 2025 to field first Replicator systems, wants industry input,” Breaking Defense, November 30, 2023, https:/
AI Bible: The ultimate guide to genAI disruption,” CB Insights, November 7, 2023, 27, https:/
Michael Tobin et al., “A.I. is the star of earnings calls as mentions skyrocket 77% with companies saying they'll use for everything from medicine to cybersecurity,” Fortune, March 1, 2023, https:/
AI Bible,” CB Insights, 26.
143 Ernst and Young, “CEOs bet big on generative AI to gain competitive edge despite hurdles to adoption and M&A challenges,” Press release, October 24, 2023, https:/
S&P 500 Index is forecast to return 6% in 2024,” Goldman Sachs, November 20, 2023, https:/
Ibid.; and Josh Schafer, “One chart shows how the Magnificent 7 have dominated the stock market in 2023,” Yahoo! Finance, November 15, 2023, https:/
Gen Teare, “Meet The New AI Unicorns Of 2023,” Crunchbase News, October 27, 2023, https:/
Sundar Pichai and Demis Hassabis, “Introducing Gemini: our largest and most capable AI model,” Google Keyword Blog, December 6, 2023, https://blog.google/technology/ai/google-gemini-ai/#sundar-note.
148 Ibid.
149 Pradyumna Desale, “Announcing NVIDIA DGX GH200: The First 100 Terabyte GPU Memory System,” NVIDIA, May 28, 2023, https:/
Robert Service, “Materials-predicting AI from DeepMind could revolutionize electronics, batteries, and solar cells,” Science, November 29, 2023, https://www.science.org/content/article/materials-predicting-ai-deepmind-could-revolutionize-electronics-batteries-and-solar; and Paul Voosen, “AI churns out lightning-fast forecasts as good as the weather agencies,” Science, November 14, 2023, https://www.science.org/content/article/ai-churns-out-lightning-fast-forecasts-good-weather-agencies.
151 Huawei Cloud, “Reshaping Industries with AI: Huawei Cloud Tackles Big Challenges,” Press release, September 22, 2023, https:/

AI EVENTS IN 2023

152 Frank Bajak and Hanna Arhirova, “Drone advances amid war in Ukraine could bring fighting robots to front lines,” PBS News Hour, January 3, 2023, https://www.pbs.org/newshour/world/drone-advances-amid-war-in-ukraine-could-bring-fighting-robots-to-front-lines.
153 The White House, “United States and India Elevate Strategic Partnership.”
154 “Artificial Intelligence is at the core of discussions in Rwanda as the AU High-Level Panel on Emerging Technologies convenes experts to draft the AU-AI Continental Strategy,” AUDA-NEPAD, March 29, 2023, https://www.nepad.org/news/artificial-intelligence-core-of-discussions-rwanda-au-high-level-panel-emerging.
155 “KHIPU Homepage,” KHIPU, updated March 2023, https://khipu.ai/; and “Montevideo Declaration on Artificial Intelligence and its Impact in Latin America,” Zenodo, March 10, 2023, https://zenodo.org/records/8208793.
156 “Ministerial Declaration,” G7G20 Documents Database.
157 The White House, “G7 Hiroshima Leaders' Communiqué.”
158 United Nations, “International Community Must Urgently Confront New Reality.”
159 Senate Democrats Newsroom, “Majority Leader Schumer Opening Remarks.”
160 Sim, “Belt and road forum.”
161 “AI Safety Summit 2023,” GOV.UK; and “The Bletchley Declaration,” GOV.UK.
162 European Parliament, “Artificial Intelligence Act.”
163 Parkin, “Deepfakes for $24 a month.”

THE YEAR AHEAD

164 “2024 is the biggest election year in history,” The Economist, November 13, 2023, https:/