Stanford University: Artificial Intelligence Index Report 2024 (English, 502 pages)

Introduction to the AI Index Report 2024

Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI's influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI's impact on science and medicine.

The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI. The AI Index is recognized globally as one of the most credible and authoritative sources for data and insights on artificial intelligence. Previous editions have been cited in major newspapers, including The New York Times, Bloomberg, and The Guardian, have amassed hundreds of academic citations, and have been referenced by high-level policymakers in the United States, the United Kingdom, and the European Union, among other places. This year's edition surpasses all previous ones in size, scale, and scope, reflecting the growing significance that AI is coming to hold in all of our lives.
Message From the Co-directors

A decade ago, the best AI systems in the world were unable to classify objects in images at a human level. AI struggled with language comprehension and could not solve math problems. Today, AI systems routinely exceed human performance on standard benchmarks.

Progress accelerated in 2023. New state-of-the-art systems like GPT-4, Gemini, and Claude 3 are impressively multimodal: They can generate fluent text in dozens of languages, process audio, and even explain memes. As AI has improved, it has increasingly forced its way into our lives. Companies are racing to build AI-based products, and AI is increasingly being used by the general public. But current AI technology still has significant problems. It cannot reliably deal with facts, perform complex reasoning, or explain its conclusions.

AI faces two interrelated futures. First, technology continues to improve and is increasingly used, having major consequences for productivity and employment. It can be put to both good and bad uses. In the second future, the adoption of AI is constrained by the limitations of the technology. Regardless of which future unfolds, governments are increasingly concerned. They are stepping in to encourage the upside, such as funding university R&D and incentivizing private investment. Governments are also aiming to manage the potential downsides, such as impacts on employment, privacy concerns, misinformation, and intellectual property rights.

As AI rapidly evolves, the AI Index aims to help the AI community, policymakers, business leaders, journalists, and the general public navigate this complex landscape. It provides ongoing, objective snapshots tracking several key areas: technical progress in AI capabilities, the community and investments driving AI development and deployment, public opinion on current and potential future impacts, and policy measures taken to stimulate AI innovation while managing its risks and challenges. By comprehensively monitoring the AI ecosystem, the Index serves as an important resource for understanding this transformative technological force.

On the technical front, this year's AI Index reports that the number of new large language models released worldwide in 2023 doubled over the previous year. Two-thirds were open-source, but the highest-performing models came from industry players with closed systems. Gemini Ultra became the first LLM to reach human-level performance on the Massive Multitask Language Understanding (MMLU) benchmark; performance on the benchmark has improved by 15 percentage points since last year. Additionally, GPT-4 achieved an impressive 0.96 mean win rate score on the comprehensive Holistic Evaluation of Language Models (HELM) benchmark, which includes MMLU among other evaluations.
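HELM's mean win rate summarizes many scenario-level comparisons in a single number: roughly, the fraction of head-to-head match-ups a model wins against the other evaluated models, averaged over scenarios. The sketch below is only a simplified illustration of that idea, not HELM's actual implementation, and the model names and scores in it are invented.

```python
from itertools import combinations

# Hypothetical per-scenario accuracies; illustrative only, not real HELM results.
scores = {
    "model_a": {"mmlu": 0.86, "boolq": 0.91, "narrative_qa": 0.77},
    "model_b": {"mmlu": 0.70, "boolq": 0.88, "narrative_qa": 0.73},
    "model_c": {"mmlu": 0.62, "boolq": 0.79, "narrative_qa": 0.70},
}

def mean_win_rate(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """For each model, the fraction of (other model, scenario) match-ups it wins;
    ties count as half a win. A simplified stand-in for HELM-style aggregation."""
    models = list(scores)
    scenarios = list(next(iter(scores.values())))
    wins = {m: 0.0 for m in models}
    comparisons = (len(models) - 1) * len(scenarios)
    for a, b in combinations(models, 2):
        for s in scenarios:
            if scores[a][s] > scores[b][s]:
                wins[a] += 1.0
            elif scores[a][s] < scores[b][s]:
                wins[b] += 1.0
            else:
                wins[a] += 0.5
                wins[b] += 0.5
    return {m: wins[m] / comparisons for m in models}

print(mean_win_rate(scores))  # model_a wins every match-up -> 1.0
```

Under this toy aggregation, a model that tops every scenario gets a mean win rate of 1.0, which is the sense in which a score of 0.96 indicates near-uniform dominance across the benchmark's evaluations.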

Although global private investment in AI decreased for the second consecutive year, investment in generative AI skyrocketed. More Fortune 500 earnings calls mentioned AI than ever before, and new studies show that AI tangibly boosts worker productivity. On the policymaking front, global mentions of AI in legislative proceedings have never been higher. U.S. regulators passed more AI-related regulations in 2023 than ever before. Still, many expressed concerns about AI's ability to generate deepfakes and impact elections. The public became more aware of AI, and studies suggest that they responded with nervousness.

Ray Perrault and Jack Clark
Co-directors, AI Index
Top 10 Takeaways

1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning, and planning.

2. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.

3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute.

4. The United States leads China, the EU, and the U.K. as the leading source of top AI models. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union's 21 and China's 15.

5. Robust and standardized evaluations for LLM responsibility are seriously lacking. New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.

6. Generative AI investment skyrockets. Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.

7. The data is in: AI makes workers more productive and leads to higher quality work. In 2023, several studies assessed AI's impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI's potential to bridge the skill gap between low- and high-skilled workers. Still, other studies caution that using AI without proper oversight can lead to diminished performance.

8. Scientific progress accelerates even further, thanks to AI. In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications, from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.

9. The number of AI regulations in the United States sharply increases. The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.

10. People across the globe are more cognizant of AI's potential impact, and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 37% in 2022.
Steering Committee

Co-directors: Jack Clark (Anthropic, OECD); Raymond Perrault (SRI International)

Members: Erik Brynjolfsson (Stanford University); John Etchemendy (Stanford University); Katrina Ligett (Hebrew University); Terah Lyons (JPMorgan Chase & Co.); James Manyika (Google, University of Oxford); Juan Carlos Niebles (Stanford University, Salesforce); Vanessa Parli (Stanford University); Yoav Shoham (Stanford University, AI21 Labs); Russell Wald (Stanford University)

Staff and Researchers

Research Manager and Editor in Chief: Nestor Maslej (Stanford University)

Research Associate: Loredana Fattorini (Stanford University)

Affiliated and Graduate Researchers: James da Costa (Stanford University); Simba Jonga (Stanford University); Elif Kiesow Cortez (Stanford Law School Research Fellow); Anka Reuel (Stanford University); Robi Rahman (Data Scientist); Alexandra Rome (Freelance Researcher); Lapo Santarlasci (IMT School for Advanced Studies Lucca)

Undergraduate Researchers: Emily Capstick (Stanford University); Summer Flowers (Stanford University); Armin Hamrah (Claremont McKenna College); Amelia Hardy (Stanford University); Mena Hassan (Stanford University); Ethan Duncan He-Li Hellman (Stanford University); Julia Betts Lotufo (Stanford University); Sukrut Oak (Stanford University); Andrew Shi (Stanford University); Jason Shin (Stanford University); Emma Williamson (Stanford University); Alfred Yu (Stanford University)
How to Cite This Report

Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, "The AI Index 2024 Annual Report," AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.

The AI Index 2024 Annual Report by Stanford University is licensed under Attribution-NoDerivatives 4.0 International.

Public Data and Tools

The AI Index 2024 Report is supplemented by raw data and an interactive tool. We invite each reader to use the data and the tool in a way most relevant to their work and interests.

Raw data and charts: The public data and high-resolution images of all the charts in the report are available on Google Drive.

Global AI Vibrancy Tool: Compare the AI ecosystems of over 30 countries. The Global AI Vibrancy Tool will be updated in the summer of 2024.

AI Index and Stanford HAI

The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index was conceived within the One Hundred Year Study on Artificial Intelligence (AI100). The AI Index welcomes feedback and new ideas for next year. Contact us at AI-Index-Report@stanford.edu.

The AI Index acknowledges that while authored by a team of human researchers, its writing process was aided by AI tools. Specifically, the authors used ChatGPT and Claude to help tighten and copy edit initial drafts. The workflow involved authors writing the original copy, then utilizing AI tools as part of the editing process.
Supporting Partners

Analytics and Research Partners

Contributors

The AI Index wants to acknowledge the following individuals by chapter and section for their contributions of data, analysis, advice, and expert commentary included in the AI Index 2024 Report:

Introduction: Loredana Fattorini, Nestor Maslej, Vanessa Parli, Ray Perrault

Chapter 1: Research and Development: Catherine Aiken, Terry Auricchio, Tamay Besiroglu, Rishi Bommasani, Andrew Brown, Peter Cihon, James da Costa, Ben Cottier, James Cussens, James Dunham, Meredith Ellison, Loredana Fattorini, Enrico Gerding, Anson Ho, Percy Liang, Nestor Maslej, Greg Mori, Tristan Naumann, Vanessa Parli, Pavlos Peppas, Ray Perrault, Robi Rahman, Vesna Sablijakovic-Fritz, Jim Schmiedeler, Jaime Sevilla, Autumn Toney, Kevin Xu, Meg Young, Milena Zeithamlova

Chapter 2: Technical Performance: Rishi Bommasani, Emma Brunskill, Erik Brynjolfsson, Emily Capstick, Jack Clark, Loredana Fattorini, Tobi Gertsenberg, Noah Goodman, Nicholas Haber, Sanmi Koyejo, Percy Liang, Katrina Ligett, Sasha Luccioni, Nestor Maslej, Juan Carlos Niebles, Sukrut Oak, Vanessa Parli, Ray Perrault, Andrew Shi, Yoav Shoham, Emma Williamson

Chapter 3: Responsible AI: Jack Clark, Loredana Fattorini, Amelia Hardy, Katrina Ligett, Nestor Maslej, Vanessa Parli, Ray Perrault, Anka Reuel, Andrew Shi

Chapter 4: Economy: Susanne Bieller, Erik Brynjolfsson, Mar Carpanelli, James da Costa, Natalia Dorogi, Heather English, Murat Erer, Loredana Fattorini, Akash Kaura, James Manyika, Nestor Maslej, Cal McKeever, Julia Nitschke, Layla O'Kane, Vanessa Parli, Ray Perrault, Brittany Presten, Carl Shan, Bill Valle, Casey Weston, Emma Williamson

Chapter 5: Science and Medicine: Russ Altman, Loredana Fattorini, Remi Lam, Curtis Langlotz, James Manyika, Nestor Maslej, Vanessa Parli, Ray Perrault, Emma Williamson

Chapter 6: Education: Betsy Bizot, John Etchemendy, Loredana Fattorini, Kirsten Feddersen, Matt Hazenbush, Nestor Maslej, Vanessa Parli, Ray Perrault, Svetlana Tikhonenko, Laurens Vehmeijer, Hannah Weissman, Stuart Zweben

Chapter 7: Policy and Governance: Alison Boyer, Elif Kiesow Cortez, Rebecca DeCrescenzo, David Freeman Engstrom, Loredana Fattorini, Philip de Guzman, Mena Hassan, Ethan Duncan He-Li Hellman, Daniel Ho, Simba Jonga, Rohini Kosoglu, Mark Lemley, Julia Betts Lotufo, Nestor Maslej, Caroline Meinhardt, Julian Nyarko, Jeff Park, Vanessa Parli, Ray Perrault, Alexandra Rome, Lapo Santarlasci, Sarah Smedley, Russell Wald, Emma Williamson, Daniel Zhang

Chapter 8: Diversity: Betsy Bizot, Loredana Fattorini, Kirsten Feddersen, Matt Hazenbush, Nestor Maslej, Vanessa Parli, Ray Perrault, Svetlana Tikhonenko, Laurens Vehmeijer, Caroline Weis, Hannah Weissman, Stuart Zweben

Chapter 9: Public Opinion: Maggie Arai, Heather English, Loredana Fattorini, Armin Hamrah, Peter Loewen, Nestor Maslej, Vanessa Parli, Ray Perrault, Marco Monteiro Silva, Lee Slinger, Bill Valle, Russell Wald

The AI Index thanks the following organizations and individuals who provided data for inclusion in this year's report:

Organizations

Center for Research on Foundation Models: Rishi Bommasani, Percy Liang
Center for Security and Emerging Technology, Georgetown University: Catherine Aiken, James Dunham, Autumn Toney
Code.org: Hannah Weissman
Computing Research Association: Betsy Bizot, Stuart Zweben
Epoch: Ben Cottier, Robi Rahman
GitHub: Peter Cihon, Kevin Xu
Govini: Alison Boyer, Rebecca DeCrescenzo, Philip de Guzman, Jeff Park
Informatics Europe: Svetlana Tikhonenko
International Federation of Robotics: Susanne Bieller
Lightcast: Cal McKeever, Julia Nitschke, Layla O'Kane
LinkedIn: Murat Erer, Akash Kaura, Casey Weston
McKinsey & Company: Natalia Dorogi, Brittany Presten
Munk School of Global Affairs and Public Policy: Peter Loewen, Lee Slinger
Quid: Heather English, Bill Valle
Schwartz Reisman Institute for Technology and Society: Maggie Arai, Marco Monteiro Silva
Studyportals: Kirsten Feddersen, Laurens Vehmeijer
Women in Machine Learning: Caroline Weis

The AI Index also thanks Jeanina Casusi, Nancy King, Carolyn Lehman, Shana Lynch, Jonathan Mindes, and Michi Turner for their help in preparing this report; Joe Hinman and Nabarun Mukherjee for their help in maintaining the AI Index website; and Annie Benisch, Marc Gough, Panos Madamopoulos-Moraris, Kaci Peel, Drew Spence, Madeline Wright, and Daniel Zhang for their work in helping promote the report.
Table of Contents

Report Highlights
Chapter 1: Research and Development
Chapter 2: Technical Performance
Chapter 3: Responsible AI
Chapter 4: Economy
Chapter 5: Science and Medicine
Chapter 6: Education
Chapter 7: Policy and Governance
Chapter 8: Diversity
Chapter 9: Public Opinion
Appendix
Report Highlights

Chapter 1: Research and Development

1. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.

2. More foundation models and more open foundation models. In 2023, a total of 149 foundation models were released, more than double the number released in 2022. Of these newly released models, 65.7% were open-source, compared to only 44.4% in 2022 and 33.3% in 2021.

3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute.

4. The United States leads China, the EU, and the U.K. as the leading source of top AI models. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union's 21 and China's 15.

5. The number of AI patents skyrockets. From 2021 to 2022, AI patent grants worldwide increased sharply by 62.7%. Since 2010, the number of granted AI patents has increased more than 31 times.

6. China dominates AI patents. In 2022, China led global AI patent origins with 61.1%, significantly outpacing the United States, which accounted for 20.9% of AI patent origins. Since 2010, the U.S. share of AI patents has decreased from 54.1%.

7. Open-source AI research explodes. Since 2011, the number of AI-related projects on GitHub has seen a consistent increase, growing from 845 in 2011 to approximately 1.8 million in 2023. Notably, there was a sharp 59.3% rise in the total number of GitHub AI projects in 2023 alone. The total number of stars for AI-related projects on GitHub also significantly increased in 2023, more than tripling from 4.0 million in 2022 to 12.2 million.

8. The number of AI publications continues to rise. Between 2010 and 2022, the total number of AI publications nearly tripled, rising from approximately 88,000 in 2010 to more than 240,000 in 2022. The increase over the last year was a modest 1.1%.
Chapter 2: Technical Performance

1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning, and planning.

2. Here comes multimodal AI. Traditionally, AI systems have been limited in scope, with language models excelling in text comprehension but faltering in image processing, and vice versa. However, recent advancements have led to the development of strong multimodal models, such as Google's Gemini and OpenAI's GPT-4. These models demonstrate flexibility and are capable of handling images and text and, in some instances, can even process audio.

3. Harder benchmarks emerge. AI models have reached performance saturation on established benchmarks such as ImageNet, SQuAD, and SuperGLUE, prompting researchers to develop more challenging ones. In 2023, several challenging new benchmarks emerged, including SWE-bench for coding, HEIM for image generation, MMMU for general reasoning, MoCa for moral reasoning, AgentBench for agent-based behavior, and HaluEval for hallucinations.

4. Better AI means better data, which means even better AI. New AI models such as SegmentAnything and Skoltech are being used to generate specialized data for tasks like image segmentation and 3D reconstruction. Data is vital for AI technical improvements. The use of AI to create more data enhances current capabilities and paves the way for future algorithmic improvements, especially on harder tasks.

5. Human evaluation is in. With generative models producing high-quality text, images, and more, benchmarking has slowly started shifting toward incorporating human evaluations like the Chatbot Arena Leaderboard rather than computerized rankings like ImageNet or SQuAD. Public sentiment about AI is becoming an increasingly important consideration in tracking AI progress.

6. Thanks to LLMs, robots have become more flexible. The fusion of language modeling with robotics has given rise to more flexible robotic systems like PaLM-E and RT-2. Beyond their improved robotic capabilities, these models can ask questions, which marks a significant step toward robots that can interact more effectively with the real world.

7. More technical research in agentic AI. Creating AI agents, systems capable of autonomous operation in specific environments, has long challenged computer scientists. However, emerging research suggests that the performance of autonomous AI agents is improving. Current agents can now master complex games like Minecraft and effectively tackle real-world tasks, such as online shopping and research assistance.

8. Closed LLMs significantly outperform open ones. On 10 select AI benchmarks, closed models outperformed open ones, with a median performance advantage of 24.2%. Differences in the performance of closed and open models carry important implications for AI policy debates.

Chapter 3: Responsible AI

1. Robust and standardized evaluations for LLM responsibility are seriously lacking. New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.

2. Political deepfakes are easy to generate and difficult to detect. Political deepfakes are already affecting elections across the world, with recent research suggesting that existing AI deepfake detection methods perform with varying levels of accuracy. In addition, new projects like CounterCloud demonstrate how easily AI can create and disseminate fake content.

3. Researchers discover more complex vulnerabilities in LLMs. Previously, most efforts to red team AI models focused on testing adversarial prompts that intuitively made sense to humans. This year, researchers found less obvious strategies to get LLMs to exhibit harmful behavior, like asking the models to infinitely repeat random words.

4. Risks from AI are becoming a concern for businesses across the globe. A global survey on responsible AI highlights that companies' top AI-related concerns include privacy, data security, and reliability. The survey shows that organizations are beginning to take steps to mitigate these risks. Globally, however, most companies have so far only mitigated a small portion of these risks.

5. LLMs can output copyrighted material. Multiple researchers have shown that the generative outputs of popular LLMs may contain copyrighted material, such as excerpts from The New York Times or scenes from movies. Whether such output constitutes copyright violation is becoming a central legal question.

6. AI developers score low on transparency, with consequences for research. The newly introduced Foundation Model Transparency Index shows that AI developers lack transparency, especially regarding the disclosure of training data and methodologies. This lack of openness hinders efforts to further understand the robustness and safety of AI systems.

7. Extreme AI risks are difficult to analyze. Over the past year, a substantial debate has emerged among AI scholars and practitioners regarding the focus on immediate model risks, like algorithmic discrimination, versus potential long-term existential threats. It has become challenging to distinguish which claims are scientifically founded and should inform policymaking. This difficulty is compounded by the tangible nature of already present short-term risks in contrast with the theoretical nature of existential threats.

8. The number of AI incidents continues to rise. According to the AI Incident Database, which tracks incidents related to the misuse of AI, 123 incidents were reported in 2023, a 32.3 percentage point increase from 2022. Since 2013, AI incidents have grown by over twentyfold. A notable example includes AI-generated, sexually explicit deepfakes of Taylor Swift that were widely shared online.

9. ChatGPT is politically biased. Researchers find a significant bias in ChatGPT toward Democrats in the United States and the Labour Party in the U.K. This finding raises concerns about the tool's potential to influence users' political views, particularly in a year marked by major global elections.
Chapter 4: Economy

1. Generative AI investment skyrockets. Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.

2. Already a leader, the United States pulls even further ahead in AI private investment. In 2023, the United States saw AI investments reach $67.2 billion, nearly 8.7 times more than China, the next highest investor. While private AI investment in China and the European Union, including the United Kingdom, declined by 44.2% and 14.1%, respectively, since 2022, the United States experienced a notable increase of 22.1% in the same time frame.

3. Fewer AI jobs in the United States and across the globe. In 2022, AI-related positions made up 2.0% of all job postings in America, a figure that decreased to 1.6% in 2023. This decline in AI job listings is attributed to fewer postings from leading AI firms and a reduced proportion of tech roles within these companies.

4. AI decreases costs and increases revenues. A new McKinsey survey reveals that 42% of surveyed organizations report cost reductions from implementing AI (including generative AI), and 59% report revenue increases. Compared to the previous year, there was a 10 percentage point increase in respondents reporting decreased costs, suggesting AI is driving significant business efficiency gains.

5. Total AI private investment declines again, while the number of newly funded AI companies increases. Global private AI investment has fallen for the second year in a row, though less than the sharp decrease from 2021 to 2022. The count of newly funded AI companies spiked to 1,812, up 40.6% from the previous year.

6. AI organizational adoption ticks up. A 2023 McKinsey report reveals that 55% of organizations now use AI (including generative AI) in at least one business unit or function, up from 50% in 2022 and 20% in 2017.

7. China dominates industrial robotics. Since surpassing Japan in 2013 as the leading installer of industrial robots, China has significantly widened the gap with the nearest competitor nation. In 2013, China's installations accounted for 20.8% of the global total, a share that rose to 52.4% by 2022.

8. Greater diversity in robot installations. In 2017, collaborative robots represented a mere 2.8% of all new industrial robot installations, a figure that climbed to 9.9% by 2022. Similarly, 2022 saw a rise in service robot installations across all application categories except medical robotics. This trend indicates not just an overall increase in robot installations but also a growing emphasis on deploying robots for human-facing roles.

9. The data is in: AI makes workers more productive and leads to higher quality work. In 2023, several studies assessed AI's impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI's potential to bridge the skill gap between low- and high-skilled workers. Still, other studies caution that using AI without proper oversight can lead to diminished performance.

10. Fortune 500 companies start talking a lot about AI, especially generative AI. In 2023, AI was mentioned in 394 earnings calls (nearly 80% of all Fortune 500 companies), a notable increase from 266 mentions in 2022. Since 2018, mentions of AI in Fortune 500 earnings calls have nearly doubled. The most frequently cited theme, appearing in 19.7% of all earnings calls, was generative AI.
Chapter 5: Science and Medicine

1. Scientific progress accelerates even further, thanks to AI. In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications, from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.

2. AI helps medicine take significant strides forward. In 2023, several significant medical systems were launched, including EVEscape, which enhances pandemic prediction, and AlphaMissense, which assists in AI-driven mutation classification. AI is increasingly being utilized to propel medical advancements.

3. Highly knowledgeable medical AI has arrived. Over the past few years, AI systems have shown remarkable improvement on the MedQA benchmark, a key test for assessing AI's clinical knowledge. The standout model of 2023, GPT-4 Medprompt, reached an accuracy rate of 90.2%, marking a 22.6 percentage point increase from the highest score in 2022. Since the benchmark's introduction in 2019, AI performance on MedQA has nearly tripled.

4. The FDA approves more and more AI-related medical devices. In 2022, the FDA approved 139 AI-related medical devices, a 12.1% increase from 2021. Since 2012, the number of FDA-approved AI-related medical devices has increased by more than 45-fold. AI is increasingly being used for real-world medical purposes.
Chapter 6: Education

1. The number of American and Canadian CS bachelor's graduates continues to rise, new CS master's graduates stay relatively flat, and PhD graduates modestly grow. While the number of new American and Canadian bachelor's graduates has consistently risen for more than a decade, the number of students opting for graduate education in CS has flattened. Since 2018, the number of CS master's and PhD graduates has slightly declined.

2. The migration of AI PhDs to industry continues at an accelerating pace. In 2011, roughly equal percentages of new AI PhDs took jobs in industry (40.9%) and academia (41.6%). However, by 2022, a significantly larger proportion (70.7%) joined industry after graduation compared to those entering academia (20.0%). Over the past year alone, the share of industry-bound AI PhDs has risen by 5.3 percentage points, indicating an intensifying brain drain from universities into industry.

3. Less transition of academic talent from industry to academia. In 2019, 13% of new AI faculty in the United States and Canada were from industry. By 2021, this figure had declined to 11%, and in 2022, it further dropped to 7%. This trend indicates a progressively lower migration of high-level AI talent from industry into academia.

4. CS education in the United States and Canada becomes less international. Proportionally fewer international CS bachelor's, master's, and PhD students graduated in 2022 than in 2021. The drop in international students in the master's category was especially pronounced.

5. More American high school students take CS courses, but access problems remain. In 2022, 201,000 AP CS exams were administered. Since 2007, the number of students taking these exams has increased more than tenfold. However, recent evidence indicates that students in larger high schools and those in suburban areas are more likely to have access to CS courses.

6. AI-related degree programs are on the rise internationally. The number of English-language, AI-related postsecondary degree programs has tripled since 2017, showing a steady annual increase over the past five years. Universities worldwide are offering more AI-focused degree programs.

7. The United Kingdom and Germany lead in European informatics, CS, CE, and IT graduate production. The United Kingdom and Germany lead Europe in producing the highest number of new informatics, CS, CE, and IT bachelor's, master's, and PhD graduates. On a per capita basis, Finland leads in the production of both bachelor's and PhD graduates, while Ireland leads in the production of master's graduates.
Chapter 7: Policy and Governance

1. The number of AI regulations in the United States sharply increases. The number of AI-related regulations has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.

2. The United States and the European Union advance landmark AI policy action. In 2023, policymakers on both sides of the Atlantic put forth substantial proposals for advancing AI regulation. The European Union reached a deal on the terms of the AI Act, a landmark piece of legislation enacted in 2024. Meanwhile, President Biden signed an Executive Order on AI, the most notable AI policy initiative in the United States that year.

3. AI captures U.S. policymaker attention. The year 2023 witnessed a remarkable increase in AI-related legislation at the federal level, with 181 bills proposed, more than double the 88 proposed in 2022.

4. Policymakers across the globe cannot stop talking about AI. Mentions of AI in legislative proceedings across the globe have nearly doubled, rising from 1,247 in 2022 to 2,175 in 2023. AI was mentioned in the legislative proceedings of 49 countries in 2023. Moreover, at least one country from every continent discussed AI in 2023, underscoring the truly global reach of AI policy discourse.

5. More regulatory agencies turn their attention toward AI. The number of U.S. regulatory agencies issuing AI regulations increased to 21 in 2023 from 17 in 2022, indicating a growing concern over AI regulation among a broader array of American regulatory bodies. Some of the new regulatory agencies that enacted AI-related regulations for the first time in 2023 include the Department of Transportation, the Department of Energy, and the Occupational Safety and Health Administration.
Chapter 8: Diversity

1. U.S. and Canadian bachelor's, master's, and PhD CS students continue to grow more ethnically diverse. While white students continue to be the most represented ethnicity among new resident graduates at all three levels, the representation of other ethnic groups, such as Asian, Hispanic, and Black or African American students, continues to grow. For instance, since 2011, the proportion of Asian CS bachelor's degree graduates has increased by 19.8 percentage points, and the proportion of Hispanic CS bachelor's degree graduates has grown by 5.2 percentage points.

2. Substantial gender gaps persist in European informatics, CS, CE, and IT graduates at all educational levels. Every surveyed European country reported more male than female graduates in bachelor's, master's, and PhD programs for informatics, CS, CE, and IT. While the gender gaps have narrowed in most countries over the last decade, the rate of this narrowing has been slow.

3. U.S. K-12 CS education is growing more diverse, reflecting changes in both gender and ethnic representation. The proportion of AP CS exams taken by female students rose from 16.8% in 2007 to 30.5% in 2022. Similarly, the participation of Asian, Hispanic/Latino/Latina, and Black/African American students in AP CS has consistently increased year over year.
Chapter 9: Public Opinion

1. People across the globe are more cognizant of AI's potential impact, and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 38% in 2022.

2. AI sentiment in Western nations continues to be low, but is slowly improving. In 2022, several developed Western nations, including Germany, the Netherlands, Australia, Belgium, Canada, and the United States, were among the least positive about AI products and services. Since then, each of these countries has seen a rise in the proportion of respondents acknowledging the benefits of AI, with the Netherlands experiencing the most significant shift.

3. The public is pessimistic about AI's economic impact. In an Ipsos survey, only 37% of respondents feel AI will improve their job. Only 34% anticipate AI will boost the economy, and 32% believe it will enhance the job market.

4. Demographic differences emerge regarding AI optimism. Significant demographic differences exist in perceptions of AI's potential to enhance livelihoods, with younger generations generally more optimistic. For instance, 59% of Gen Z respondents believe AI will improve entertainment options, versus only 40% of baby boomers. Additionally, individuals with higher incomes and education levels are more optimistic about AI's positive impacts on entertainment, health, and the economy than their lower-income and less-educated counterparts.

5. ChatGPT is widely known and widely used. An international survey from the University of Toronto suggests that 63% of respondents are aware of ChatGPT. Of those aware, around half report using ChatGPT at least once weekly.
Chapter 1: Research and Development

Overview
Chapter Highlights
1.1 Publications: Overview; Total Number of AI Publications; By Type of Publication; By Field of Study; By Sector; AI Journal Publications; AI Conference Publications
1.2 Patents: AI Patents (Overview; By Filing Status and Region)
1.3 Frontier AI Research: General Machine Learning Models (Overview; Sector Analysis; National Affiliation; Parameter Trends; Compute Trends; Highlight: Will Models Run Out of Data?); Foundation Models (Model Release; Organizational Affiliation; National Affiliation); Training Cost
1.4 AI Conferences: Conference Attendance
1.5 Open-Source AI Software: Projects; Stars
Overview

This chapter studies trends in AI research and development. It begins by examining trends in AI publications and patents, and then examines trends in notable AI systems and foundation models. It concludes by analyzing AI conference attendance and open-source AI software projects.

Chapter Highlights

1. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.

2. More foundation models and more open foundation models. In 2023, a total of 149 foundation models were released, more than double the number released in 2022. Of these newly released models, 65.7% were open-source, compared to only 44.4% in 2022 and 33.3% in 2021.

3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute.

4. The United States leads China, the EU, and the U.K. as the leading source of top AI models. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union's 21 and China's 15.

5. The number of AI patents skyrockets. From 2021 to 2022, AI patent grants worldwide increased sharply by 62.7%. Since 2010, the number of granted AI patents has increased more than 31 times.

6. China dominates AI patents. In 2022, China led global AI patent origins with 61.1%, significantly outpacing the United States, which accounted for 20.9% of AI patent origins. Since 2010, the U.S. share of AI patents has decreased from 54.1%.

7. Open-source AI research explodes. Since 2011, the number of AI-related projects on GitHub has seen a consistent increase, growing from 845 in 2011 to approximately 1.8 million in 2023. Notably, there was a sharp 59.3% rise in the total number of GitHub AI projects in 2023 alone. The total number of stars for AI-related projects on GitHub also significantly increased in 2023, more than tripling from 4.0 million in 2022 to 12.2 million.

8. The number of AI publications continues to rise. Between 2010 and 2022, the total number of AI publications nearly tripled, rising from approximately 88,000 in 2010 to more than 240,000 in 2022. The increase over the last year was a modest 1.1%.
1.1 Publications

Overview

The figures below present the global count of English- and Chinese-language AI publications from 2010 to 2022, categorized by type of affiliation and cross-sector collaborations. Additionally, this section details publication data for AI journal articles and conference papers.

Total Number of AI Publications¹

Figure 1.1.1 displays the global count of AI publications. Between 2010 and 2022, the total number of AI publications nearly tripled, rising from approximately 88,000 in 2010 to more than 240,000 in 2022. The increase over the last year was a modest 1.1%.

[Figure 1.1.1: Number of AI publications in the world, 2010-22 (in thousands); 242.29 thousand in 2022. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]

¹ The data on publications presented this year is sourced from CSET. Both the methodology and data sources used by CSET to classify AI publications have changed since their data was last featured in the AI Index (2023). As a result, the numbers reported in this year's section differ slightly from those reported in last year's edition. Moreover, the AI-related publication data is fully available only up to 2022 due to a significant lag in updating publication data. Readers are advised to approach publication figures with appropriate caution.
By Type of Publication

Figure 1.1.2 illustrates the distribution of AI publication types globally over time. In 2022, there were roughly 230,000 AI journal articles compared to roughly 42,000 conference submissions. Since 2015, AI journal and conference publications have increased at comparable rates. In 2022, there were 2.6 times as many conference publications and 2.4 times as many journal publications as there were in 2015.²

[Figure 1.1.2: Number of AI publications by type, 2010-22 (in thousands). 2022 values: Journal 232.67; Conference 41.17; Book chapter 12.88; Preprint 5.07; Article 1.49; Unknown 0.79; Dissertation 0.70; Book 0.57; Other 0.12; Clinical trial 0.05. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]

² It is possible for an AI publication to be mapped to more than one publication type, so the totals in Figure 1.1.2 do not completely align with those in Figure 1.1.1.

By Field of Study

Figure 1.1.3 examines the total number of AI publications by field of study since 2010. Machine learning publications have seen the most rapid growth over the past decade, increasing nearly sevenfold since 2015. Following machine learning, the most published AI fields in 2022 were computer vision (21,309 publications), pattern recognition (19,841), and process management (12,052).

[Figure 1.1.3: Number of AI publications by field of study (excluding Other AI), 2010-22 (in thousands). 2022 values: Machine learning 72.23; Computer vision 21.31; Pattern recognition 19.84; Process management 12.05; Computer network 10.39; Control theory 9.17; Algorithm 8.31; Linguistics 7.18; Mathematical optimization 6.83. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]
By Sector

This section presents the distribution of AI publications by sector (education, government, industry, nonprofit, and other), globally and then specifically within the United States, China, and the European Union plus the United Kingdom. In 2022, the academic sector contributed the majority of AI publications (81.1%), maintaining its position as the leading global source of AI research over the past decade across all regions (Figure 1.1.4 and Figure 1.1.5). Industry participation is most significant in the United States, followed by the European Union plus the United Kingdom, and China (Figure 1.1.5).

[Figure 1.1.4: AI publications (% of total) by sector, 2010-22. 2022 values: Education 81.07%; Industry 7.89%; Government 6.97%; Nonprofit 2.62%; Other 1.46%. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]

[Figure 1.1.5: AI publications (% of total) by sector and geographic area (United States; European Union and United Kingdom; China), 2022. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]

AI Journal Publications

Figure 1.1.6 illustrates the total number of AI journal publications from 2010 to 2022. The number of AI journal publications experienced modest growth from 2010 to 2015 but grew approximately 2.4 times since 2015. Between 2021 and 2022, AI journal publications saw a 4.5% increase.

[Figure 1.1.6: Number of AI journal publications, 2010-22 (in thousands); 232.67 thousand in 2022. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]
AI Conference Publications

Figure 1.1.7 visualizes the total number of AI conference publications since 2010. The number of AI conference publications has seen a notable rise in the past two years, climbing from 22,727 in 2020 to 31,629 in 2021, and reaching 41,174 in 2022. Over the last year alone, there was a 30.2% increase in AI conference publications. Since 2010, the number of AI conference publications has more than doubled.

[Figure 1.1.7: Number of AI conference publications, 2010-22 (in thousands). Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]
1.2 Patents

This section examines trends over time in global AI patents, which can reveal important insights into the evolution of innovation, research, and development within AI. Additionally, analyzing AI patents can reveal how these advancements are distributed globally. Similar to the publications data, there is a noticeable delay in AI patent data availability, with 2022 being the most recent year for which data is accessible. The data in this section comes from CSET.

AI Patents

Overview

Figure 1.2.1 examines the global growth in granted AI patents from 2010 to 2022. Over the last decade, there has been a significant rise in the number of AI patents, with a particularly sharp increase in recent years. For instance, between 2010 and 2014, the total growth in granted AI patents was 56.1%. However, from 2021 to 2022 alone, the number of AI patents increased by 62.7%.

[Figure 1.2.1: Number of AI patents granted, 2010-22 (in thousands); 62.26 thousand in 2022. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]
By Filing Status and Region

The following section disaggregates AI patents by their filing status (whether they were granted or not granted), as well as the region of their publication.

Figure 1.2.2 compares global AI patents by application status. In 2022, the number of ungranted AI patents (128,952) was more than double the number granted (62,264). Over time, the landscape of AI patent approvals has shifted markedly. Until 2015, a larger proportion of filed AI patents were granted. However, since then, the majority of AI patent filings have not been granted, with the gap widening significantly. For instance, in 2015, 42.2% of all filed AI patents were not granted. By 2022, this figure had risen to 67.4%.

[Figure 1.2.2: AI patents by application status, 2010-22 (in thousands); 2022: granted 62.26, not granted 128.95. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]
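The granted and not-granted shares quoted above are simple proportions of all filings in a given year. A minimal check using the 2022 counts cited in the text:

```python
granted_2022 = 62_264       # granted AI patents, 2022 (from the text above)
not_granted_2022 = 128_952  # ungranted AI patents, 2022

total = granted_2022 + not_granted_2022
not_granted_share = 100 * not_granted_2022 / total
print(f"share of 2022 AI patent filings not granted: {not_granted_share:.1f}%")
# ~67.4%, matching the figure cited above.
```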

The gap between granted and not granted AI patents is evident across all major patent-originating geographic areas, including China, the European Union and United Kingdom, and the United States (Figure 1.2.3). In recent years, all three geographic areas have experienced an increase in both the total number of AI patent filings and the number of patents granted.

[Figure 1.2.3: AI patents by application status and geographic area (China; European Union and United Kingdom; United States), 2010-22 (in thousands). Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]
Figure 1.2.4 showcases the regional breakdown of granted AI patents. As of 2022, the bulk of the world's granted AI patents (75.2%) originated from East Asia and the Pacific, with North America being the next largest contributor at 21.2%. Up until 2011, North America led in the number of global AI patents. However, since then, there has been a significant shift toward an increasing proportion of AI patents originating from East Asia and the Pacific.

[Figure 1.2.4: Granted AI patents (% of world total) by region, 2010-22. 2022 values: East Asia and Pacific 75.20%; North America 21.21%; Europe and Central Asia 2.33%; Rest of the world 0.68%; South Asia 0.23%; Latin America and the Caribbean 0.21%; Sub-Saharan Africa 0.12%; Middle East and North Africa 0.03%. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]
Disaggregated by geographic area, the majority of the world's granted AI patents are from China (61.1%) and the United States (20.9%) (Figure 1.2.5). The share of AI patents originating from the United States has declined from 54.1% in 2010.

[Figure 1.2.5: Granted AI patents (% of world total) by geographic area, 2010-22. 2022 values: China 61.13%; United States 20.90%; Rest of the world 15.71%; European Union and United Kingdom 2.03%; India 0.23%. Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]
Figure 1.2.6 and Figure 1.2.7 document which countries lead in AI patents per capita. In 2022, the country with the most granted AI patents per 100,000 inhabitants was South Korea (10.3), followed by Luxembourg (8.8) and the United States (4.2) (Figure 1.2.6). Figure 1.2.7 highlights the change in granted AI patents per capita from 2012 to 2022. Singapore, South Korea, and China experienced the greatest increase in AI patenting per capita during that time period.

[Figure 1.2.6: Granted AI patents per 100,000 inhabitants by country, 2022. Leading countries include South Korea (10.26), Luxembourg (8.73), and the United States (4.23). Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]

[Figure 1.2.7: Percentage change of granted AI patents per 100,000 inhabitants by country, 2012 vs. 2022. The largest increases were in Singapore (5,366%), South Korea (3,801%), and China (3,569%). Source: Center for Security and Emerging Technology, 2023 | Chart: 2024 AI Index report]
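Per capita rates and their decade-over-decade change are straightforward ratios. The sketch below illustrates the arithmetic only; the population figure and patent counts are assumed, illustrative numbers chosen to land near the South Korea values cited above, not data from the report.

```python
def per_100k(patents: int, population: int) -> float:
    """Granted AI patents per 100,000 inhabitants."""
    return patents / population * 100_000

def pct_change(old: float, new: float) -> float:
    """Percentage change between two per-capita rates."""
    return 100 * (new - old) / old

# Illustrative only: ~51.6M inhabitants is an assumed figure for South Korea,
# and both patent counts are invented so the output lands near the cited values.
rate_2022 = per_100k(5_300, 51_600_000)
rate_2012 = per_100k(136, 51_600_000)   # hypothetical 2012 count
print(f"2022 rate: {rate_2022:.1f} per 100k inhabitants")              # ~10.3
print(f"change 2012 vs. 2022: {pct_change(rate_2012, rate_2022):,.0f}%")  # ~+3,800%
```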

1.3 Frontier AI Research

This section explores the frontier of AI research. While many new AI models are introduced annually, only a small sample represents the most advanced research. Admittedly, what constitutes advanced or frontier research is somewhat subjective. Frontier research could reflect a model posting a new state-of-the-art result on a benchmark, introducing a meaningful new architecture, or exercising some impressive new capabilities.

The AI Index studies trends in two types of frontier AI models: "notable models" and foundation models.³ Epoch, an AI Index data provider, uses the term "notable machine learning models" to designate noteworthy models handpicked as being particularly influential within the AI/machine learning ecosystem. In contrast, foundation models are exceptionally large AI models trained on massive datasets, capable of performing a multitude of downstream tasks. Examples of foundation models include GPT-4, Claude 3, and Gemini. While many foundation models may qualify as notable models, not all notable models are foundation models.

Within this section, the AI Index explores trends in notable models and foundation models from various perspectives, including originating organization, country of origin, parameter count, and compute usage. The analysis concludes with an examination of machine learning training costs.

³ "AI system" refers to a computer program or product based on AI, such as ChatGPT. "AI model" refers to a collection of parameters whose values are learned during training, such as GPT-4.

190、ne Learning ModelsOverviewEpoch AI is a group of researchers dedicated to studying and predicting the evolution of advanced AI.They maintain a database of AI and machine learning models released since the 1950s,selecting 1.3 Frontier AI Researchentries based on criteria such as state-of-the-art adva

191、ncements,historical significance,or high citation rates.Analyzing these models provides a comprehensive overview of the machine learning landscapes evolution,both in recent years and over the past few decades.4 Some models may be missing from the dataset;however,the dataset can reveal trends in rela

192、tive terms.1.3 Frontier AI ResearchChapter 1:Research and DevelopmentArtificial IntelligenceIndex Report 20243“AI system”refers to a computer program or product based on AI,such as ChatGPT.“AI model”refers to a collection of parameters whose values are learned during training,such as GPT-4.4 New and

193、 historic models are continually added to the Epoch database,so the total year-by-year counts of models included in this years AI Index might not exactly match those published in last years report.46Artificial IntelligenceIndex Report 2024Chapter 1 PreviewTable of ContentsSector AnalysisUntil 2014,a

Until 2014, academia led in the release of machine learning models. Since then, industry has taken the lead. In 2023, there were 51 notable machine learning models produced by industry compared to just 15 from academia (Figure 1.3.1). Significantly, 21 notable models resulted from industry/academic collaborations in 2023, a new high. Creating cutting-edge AI models now demands a substantial amount of data, computing power, and financial resources that are not available in academia. This shift toward increased industrial dominance in leading AI models was first highlighted in last year's AI Index report. Although this year the gap has slightly narrowed, the trend largely persists.

Figure 1.3.1: Number of notable machine learning models by sector, 2003-23. In 2023: industry, 51; industry-academia collaboration, 21; academia, 15; government, 2. Source: Epoch, 2023 | Chart: 2024 AI Index report.

National Affiliation
To illustrate the evolving geopolitical landscape of AI, the AI Index research team analyzed the country of origin of notable models. Figure 1.3.2 displays the total number of notable machine learning models attributed to the location of researchers' affiliated institutions.5 In 2023, the United States led with 61 notable machine learning models, followed by China with 15, and France with 8. For the first time since 2019, the European Union and the United Kingdom together have surpassed China in the number of notable AI models produced (Figure 1.3.3). Since 2003, the United States has produced more models than other major geographic regions such as the United Kingdom, China, and Canada (Figure 1.3.4).

Figure 1.3.2: Number of notable machine learning models by geographic area, 2023. In 2023: United States, 61; China, 15; France, 8. Source: Epoch, 2023 | Chart: 2024 AI Index report.
Figure 1.3.3: Number of notable machine learning models by select geographic area, 2003-23. In 2023: United States, 61; European Union and United Kingdom, 25; China, 15. Source: Epoch, 2023 | Chart: 2024 AI Index report.
Figure 1.3.4: Number of notable machine learning models by geographic area, 2003-23 (sum). Source: Epoch, 2023 | Chart: 2024 AI Index report.

5 A machine learning model is considered associated with a specific country if at least one author of the paper introducing it has an affiliation with an institution based in that country. In cases where a model's authors come from several countries, double counting can occur.
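As footnote 5 notes, a model counts toward every country in which at least one contributing author's institution is headquartered, so cross-border papers are tallied more than once. A minimal sketch of that tallying logic, using made-up model records rather than Epoch's data, might look like this:

    # Hypothetical records: each notable model maps to the set of countries of
    # its authors' institutions. Example entries only, not Epoch data.
    from collections import Counter

    model_affiliations = {
        "Model A": {"United States"},
        "Model B": {"United States", "United Kingdom"},  # cross-border collaboration
        "Model C": {"China"},
    }

    counts = Counter()
    for countries in model_affiliations.values():
        # A model with authors in several countries increments each of them,
        # which is exactly where double counting can occur.
        counts.update(countries)

    print(counts)  # Counter({'United States': 2, 'United Kingdom': 1, 'China': 1})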

Parameter Trends
Parameters in machine learning models are numerical values learned during training that determine how a model interprets input data and makes predictions. Models trained on more data will usually have more parameters than those trained on less data. Likewise, models with more parameters typically outperform those with fewer parameters. Figure 1.3.5 demonstrates the parameter count of machine learning models in the Epoch dataset, categorized by the sector from which the models originate. Parameter counts have risen sharply since the early 2010s, reflecting the growing complexity of tasks AI models are designed for, the greater availability of data, improvements in hardware, and the proven efficacy of larger models. High-parameter models are particularly notable in the industry sector, underscoring the capacity of companies like OpenAI, Anthropic, and Google to bear the computational costs of training on vast volumes of data.

Figure 1.3.5: Number of parameters of notable machine learning models by sector, 2003-23 (log scale). Source: Epoch, 2023 | Chart: 2024 AI Index report.
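For readers unfamiliar with what a parameter count actually measures, the sketch below counts the learned weights and biases of a small fully connected network; the layer sizes are arbitrary and chosen only for illustration.

    # Count the learnable parameters of a small fully connected network.
    # Each layer contributes (inputs x outputs) weights plus one bias per output.
    layer_sizes = [784, 512, 256, 10]  # arbitrary example architecture

    total_params = sum(
        n_in * n_out + n_out
        for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])
    )
    print(f"{total_params:,} parameters")  # 535,818 for this toy network

    # Frontier LLMs sit many orders of magnitude higher, at tens or hundreds of
    # billions of parameters, which is why Figure 1.3.5 uses a log scale.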

Compute Trends
The term “compute” in AI models denotes the computational resources required to train and operate a machine learning model. Generally, the complexity of the model and the size of the training dataset directly influence the amount of compute needed. The more complex a model is, and the larger the underlying training data, the greater the amount of compute required for training. Figure 1.3.6 visualizes the training compute required for notable machine learning models in the last 20 years. Recently, the compute usage of notable AI models has increased exponentially.6 This trend has been especially pronounced in the last five years. This rapid rise in compute demand has critical implications. For instance, models requiring more computation often have larger environmental footprints, and companies typically have more access to computational resources than academic institutions.

Figure 1.3.6: Training compute of notable machine learning models by sector, 2003-23 (petaFLOP, log scale). Source: Epoch, 2023 | Chart: 2024 AI Index report.

6 FLOP stands for “floating-point operation.” A floating-point operation is a single arithmetic operation involving floating-point numbers, such as addition, subtraction, multiplication, or division. The number of FLOPs a processor or computer can perform per second is an indicator of its computational power. The higher the FLOP rate, the more powerful the computer is. An AI model whose training required more FLOPs consumed more computational resources during training.

Figure 1.3.7 highlights the training compute of notable machine learning models since 2012. For example, AlexNet, one of the papers that popularized the now standard practice of using GPUs to improve AI models, required an estimated 470 petaFLOPs for training. The original Transformer, released in 2017, required around 7,400 petaFLOPs. Google's Gemini Ultra, one of the current state-of-the-art foundation models, required 50 billion petaFLOPs.

Figure 1.3.7: Training compute of notable machine learning models by domain, 2012-23 (petaFLOP, log scale). Labeled models include AlexNet, Transformer, BERT-Large, RoBERTa Large, GPT-3 175B (davinci), Megatron-Turing NLG 530B, PaLM (540B), Llama 2-70B, GPT-4, Claude 2, and Gemini Ultra, grouped by language, vision, and multimodal domains. Source: Epoch, 2023 | Chart: 2024 AI Index report.
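Training compute is typically reported as a total FLOP count. One widely cited rule of thumb for dense transformers, which is not necessarily the estimation method behind the figures above, approximates training FLOPs as roughly 6 x parameters x training tokens. The sketch below applies that approximation to hypothetical numbers and converts the result to petaFLOPs.

    # Rule-of-thumb estimate: training FLOPs ~= 6 * N_parameters * N_tokens.
    # Shown with hypothetical values; this is not the methodology behind
    # Figures 1.3.6 and 1.3.7.

    def training_flops(n_params: float, n_tokens: float) -> float:
        return 6 * n_params * n_tokens

    PETA = 1e15

    # Hypothetical 70B-parameter model trained on 2 trillion tokens.
    flops = training_flops(70e9, 2e12)
    print(f"{flops:.2e} FLOPs = {flops / PETA:,.0f} petaFLOPs")
    # ~8.4e23 FLOPs, i.e., on the order of 8.4e8 petaFLOPs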

Highlight: Will Models Run Out of Data?

As illustrated above, a significant proportion of recent algorithmic progress, including progress behind powerful LLMs, has been achieved by training models on increasingly larger amounts of data. As noted recently by Anthropic cofounder and AI Index Steering Committee member Jack Clark, foundation models have been trained on meaningful percentages of all the data that has ever existed on the internet. The growing data dependency of AI models has led to concerns that future generations of computer scientists will run out of data to further scale and improve their systems. Research from Epoch suggests that these concerns are somewhat warranted. Epoch researchers have generated historical and compute-based projections for when AI researchers might expect to run out of data. The historical projections are based on observed growth rates in the sizes of data used to train foundation models. The compute projections adjust the historical growth rate based on projections of compute availability. For instance, the researchers estimate that computer scientists could deplete the stock of high-quality language data by 2024, exhaust low-quality language data within two decades, and use up image data by the late 2030s to mid-2040s (Figure 1.3.8).

Theoretically, the challenge of limited data availability can be addressed by using synthetic data, which is data generated by AI models themselves. For example, it is possible to use text produced by one LLM to train another LLM. The use of synthetic data for training AI systems is particularly attractive, not only as a solution for potential data depletion but also because generative AI systems could, in principle, generate data in instances where naturally occurring data is sparse, for example, data for rare diseases or underrepresented populations. Until recently, the feasibility and effectiveness of using synthetic data for training generative AI systems were not well understood. However, research this year has suggested that there are limitations associated with training models on synthetic data. For instance, a team of British and Canadian researchers discovered that models predominantly trained on synthetic data experience model collapse, a phenomenon where, over time, they lose the ability to remember true underlying data distributions and start producing a narrow range of outputs.

Figure 1.3.8: Projections of ML data exhaustion by stock type: median and 90% CI dates. Source: Epoch, 2023 | Table: 2024 AI Index report.

Stock type                    Historical projection      Compute projection
Low-quality language stock    2032.4 [2028.4; 2039.2]    2040.5 [2034.6; 2048.9]
High-quality language stock   2024.5 [2023.5; 2025.7]    2024.1 [2023.2; 2025.3]
Image stock                   2046   [2037; 2062.8]      2038.8 [2032; 2049.8]
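Epoch's projections are considerably more sophisticated, but the basic logic of a historical projection can be sketched as follows: extrapolate training-dataset size at an observed growth rate and find the year it crosses a fixed data stock. All numbers below are hypothetical, not Epoch's.

    # Toy version of a data-exhaustion projection: grow training-set size at a
    # fixed annual rate until it exceeds an assumed total data stock.
    import math

    tokens_used_2023 = 1e13   # assumed tokens in the largest 2023 training set
    annual_growth = 2.0       # assumed 2x growth in dataset size per year
    total_stock = 3e14        # assumed stock of usable high-quality tokens

    years_until_exhaustion = math.log(total_stock / tokens_used_2023, annual_growth)
    print(f"Stock exceeded around {2023 + years_until_exhaustion:.1f}")
    # With these assumptions: ~2027.9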

Figure 1.3.9 demonstrates the process of model collapse in a variational autoencoder (VAE) model, a widely used generative AI architecture. With each subsequent generation trained on additional synthetic data, the model produces an increasingly limited set of outputs. As illustrated in Figure 1.3.10, in statistical terms, as the number of synthetic generations increases, the tails of the distributions vanish, and the generation density shifts toward the mean.7 This pattern means that over time, the generations of models trained predominantly on synthetic data become less varied and are not as widely distributed. The authors demonstrate that this phenomenon occurs across various model types, including Gaussian Mixture Models and LLMs. This research underscores the continued importance of human-generated data for training capable LLMs that can produce a diverse array of content.

Figure 1.3.9: A demonstration of model collapse in a VAE. Source: Shumailov et al., 2023.
Figure 1.3.10: Convergence of generated data densities in descendant models (generations 0 through 9). Source: Shumailov et al., 2023 | Chart: 2024 AI Index report.

7 In the context of generative models, density refers to the level of complexity and variation in the outputs produced by an AI model. Models that have a higher generation density produce a wider range of higher-quality outputs. Models with low generation density produce a narrower range of more simplistic outputs.
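The statistical intuition behind model collapse, with tails vanishing as each generation learns only from the previous generation's samples, can be reproduced with a deliberately simple stand-in for a generative model: repeatedly fit a Gaussian to samples drawn from the previous fit. This toy loop is not the Shumailov et al. VAE experiment, only an illustration of the mechanism.

    # Toy illustration of model collapse: each "generation" fits a Gaussian to a
    # finite sample drawn from the previous generation's fit.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0     # generation 0 stands in for the real data distribution
    sample_size = 20         # small samples make the effect visible quickly

    for generation in range(1, 31):
        synthetic = rng.normal(mu, sigma, size=sample_size)  # sample from the current "model"
        mu, sigma = synthetic.mean(), synthetic.std()        # refit the next "model" on synthetic data
        if generation % 5 == 0:
            print(f"generation {generation:2d}: fitted std = {sigma:.3f}")

    # The fitted standard deviation tends to shrink across generations: the tails
    # of the distribution vanish and outputs concentrate around the mean, which
    # mirrors the pattern shown in Figure 1.3.10.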

In a similar study published in 2023 on the use of synthetic data in generative imaging models, researchers found that generative image models trained solely on synthetic data cycles, or with insufficient real human data, experience a significant drop in output quality. The authors label this phenomenon Model Autophagy Disorder (MAD), in reference to mad cow disease. The study examines two types of training processes: fully synthetic, where models are trained exclusively on synthetic data, and synthetic augmentation, where models are trained on a mix of synthetic and real data. In both scenarios, as the number of training generations increases, the quality of the generated images declines. Figure 1.3.11 highlights the degraded image generations of models that are augmented with synthetic data; for example, the faces generated in steps 7 and 9 increasingly display strange-looking hash marks. From a statistical perspective, images generated with both synthetic data and synthetic augmentation loops have higher FID scores (indicating less similarity to real images), lower precision scores (signifying reduced realism or quality), and lower recall scores (suggesting decreased diversity) (Figure 1.3.12). While synthetic augmentation loops, which incorporate some real data, show less degradation than fully synthetic loops, both methods exhibit diminishing returns with further training.

Figure 1.3.11: An example of MAD in image-generation models. Source: Alemohammad et al., 2023.
Figure 1.3.12: Assessing FFHQ syntheses: FID, precision, and recall in synthetic and mixed-data training loops, plotted over training generations for fully synthetic and synthetic augmentation loops. Source: Alemohammad et al., 2023 | Chart: 2024 AI Index report.
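FID, the metric reported in Figure 1.3.12, measures the Fréchet distance between Gaussians fitted to feature embeddings of real and generated images (in practice, Inception-network features). A compact sketch of the calculation, using random arrays in place of real embeddings, is shown below.

    # Frechet Inception Distance between two sets of feature embeddings:
    # FID = ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2*(C_r C_g)^(1/2)).
    # The "features" here are random placeholders; in practice they come from an
    # Inception network applied to real and synthesized images.
    import numpy as np
    from scipy.linalg import sqrtm

    def fid(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
        mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
        cov_r = np.cov(real_feats, rowvar=False)
        cov_g = np.cov(gen_feats, rowvar=False)
        covmean = sqrtm(cov_r @ cov_g)
        if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
            covmean = covmean.real
        return float(((mu_r - mu_g) ** 2).sum() + np.trace(cov_r + cov_g - 2 * covmean))

    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(500, 16))
    fake = rng.normal(0.3, 1.2, size=(500, 16))  # a slightly "off" generator
    print(f"FID ~ {fid(real, fake):.2f}")        # higher = less similar to real data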

Foundation Models

Foundation models represent a rapidly evolving and popular category of AI models. Trained on vast datasets, they are versatile and suitable for numerous downstream applications. Foundation models such as GPT-4, Claude 3, and Llama 2 showcase remarkable abilities and are increasingly being deployed in real-world scenarios. Introduced in 2023, the Ecosystem Graphs is a new community resource from Stanford that tracks the foundation model ecosystem, including datasets, models, and applications. This section uses data from the Ecosystem Graphs to study trends in foundation models over time.8

Model Release
Foundation models can be accessed in different ways. No access models, like Google's PaLM-E, are only accessible to their developers. Limited access models, like OpenAI's GPT-4, offer limited access to the models, often through a public API. Open models, like Meta's Llama 2, fully release model weights, which means the models can be modified and freely used. Figure 1.3.13 visualizes the total number of foundation models by access type since 2019. In recent years, the number of foundation models has risen sharply, more than doubling since 2022 and growing by a factor of nearly 38 since 2019. Of the 149 foundation models released in 2023, 98 were open, 23 limited, and 28 no access.

Figure 1.3.13: Foundation models by access type, 2019-23. In 2023: open, 98; no access, 28; limited, 23. Source: Bommasani et al., 2023 | Chart: 2024 AI Index report.

8 The Ecosystem Graphs make efforts to survey the global AI ecosystem, but it is possible that they underreport models from certain nations like South Korea and China.
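The practical difference between the access types can be sketched in a few lines, assuming the transformers and openai Python packages are installed and that the relevant access is available (a Hugging Face weights license for Llama 2, an OpenAI API key for GPT-4). This is an illustration of the two access patterns, not part of the Ecosystem Graphs methodology.

    # Open model: weights are downloaded locally and can be inspected or fine-tuned.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    inputs = tokenizer("Foundation models are", return_tensors="pt")
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))

    # Limited-access model: only an API is exposed; the weights stay with the developer.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Complete: Foundation models are"}],
    )
    print(response.choices[0].message.content)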

In 2023, the majority of foundation models were released as open access (65.8%), with 18.8% having no access and 15.4% limited access (Figure 1.3.14). Since 2021, there has been a significant increase in the proportion of models released with open access.

Figure 1.3.14: Foundation models (% of total) by access type, 2019-23. In 2023: open, 65.77%; no access, 18.79%; limited, 15.44%. Source: Bommasani et al., 2023 | Chart: 2024 AI Index report.

Organizational Affiliation
Figure 1.3.15 plots the sector from which foundation models have originated since 2019. In 2023, the majority of foundation models (72.5%) originated from industry. Only 18.8% of foundation models in 2023 originated from academia. Since 2019, an ever larger number of foundation models are coming from industry.

Figure 1.3.15: Number of foundation models by sector, 2019-23. In 2023: industry, 108; academia, 28; industry-academia collaboration, 9; government, 4. Source: Bommasani et al., 2023 | Chart: 2024 AI Index report.

Figure 1.3.16 highlights the source of various foundation models that were released in 2023. Google introduced the most models (18), followed by Meta (11) and Microsoft (9). The academic institution that released the most foundation models in 2023 was UC Berkeley (3).

Figure 1.3.16: Number of foundation models by organization, 2023 (industry, academia, and nonprofit). Leading organizations: Google (18), Meta (11), Microsoft (9). Source: Bommasani et al., 2023 | Chart: 2024 AI Index report.

Since 2019, Google has led in releasing the most foundation models, with a total of 40, followed by OpenAI with 20 (Figure 1.3.17). Tsinghua University stands out as the top non-Western institution, with seven foundation model releases, while Stanford University is the leading American academic institution, with five releases.

Figure 1.3.17: Number of foundation models by organization, 2019-23 (sum). Leading organizations include Google (40), OpenAI (20), Meta, Microsoft, DeepMind, and Tsinghua University. Source: Bommasani et al., 2023 | Chart: 2024 AI Index report.

National Affiliation
Given that foundation models are fairly representative of frontier AI research, from a geopolitical perspective it is important to understand their national affiliations. Figures 1.3.18, 1.3.19, and 1.3.20 visualize the national affiliations of various foundation models. As with the notable model analysis presented earlier in the chapter, a model is deemed affiliated with a country if a researcher contributing to that model is affiliated with an institution headquartered in that country. In 2023, most of the world's foundation models originated from the United States (109), followed by China (20) and the United Kingdom (Figure 1.3.18). Since 2019, the United States has consistently led in originating the majority of foundation models (Figure 1.3.19).

Figure 1.3.18: Number of foundation models by geographic area, 2023. Source: Bommasani et al., 2023 | Chart: 2024 AI Index report.
Figure 1.3.19: Number of foundation models by select geographic area, 2019-23. In 2023: United States, 109; China, 20; European Union and United Kingdom, 15. Source: Bommasani et al., 2023 | Chart: 2024 AI Index report.

Figure 1.3.20 depicts the cumulative count of foundation models released and attributed to respective countries since 2019. The country with the greatest number of foundation models released since 2019 is the United States (182), followed by China (30) and the United Kingdom (21).

Figure 1.3.20: Number of foundation models by geographic area, 2019-23 (sum). Source: Bommasani et al., 2023 | Chart: 2024 AI Index report.

Training Cost
A prominent topic in discussions about foundation models is their speculated costs. While AI companies seldom reveal the expenses involved in training their models, it is widely believed that these costs run into millions of dollars and are rising. For instance, OpenAI's CEO, Sam Altman, mentioned that the training cost for GPT-4 was over $100 million. This escalation in training expenses has effectively excluded universities, traditionally centers of AI research, from developing their own leading-edge foundation models. In response, policy initiatives, such as President Biden's Executive Order on AI, have sought to level the playing field between industry and academia by creating a National AI Research Resource, which would grant nonindustry actors the compute and data needed to do higher-level AI research.

Understanding the cost of training AI models is important, yet detailed information on these costs remains scarce. The AI Index was among the first to offer estimates on the training costs of foundation models in last year's publication. This year, the AI Index has collaborated with Epoch AI, an AI research institute, to substantially enhance and solidify the robustness of its AI training cost estimates.9 To estimate the cost of cutting-edge models, the Epoch team analyzed training duration, as well as the type, quantity, and utilization rate of the training hardware, using information from publications, press releases, or technical reports related to the models.10

Figure 1.3.21 visualizes the estimated training cost associated with select AI models, based on cloud compute rental prices. AI Index estimates validate suspicions that in recent years model training costs have significantly increased. For example, in 2017, the original Transformer model, which introduced the architecture that underpins virtually every modern LLM, cost around $900 to train.11 RoBERTa Large, released in 2019, which achieved state-of-the-art results on many canonical comprehension benchmarks like SQuAD and GLUE, cost around $160,000 to train. Fast-forward to 2023, and training costs for OpenAI's GPT-4 and Google's Gemini Ultra are estimated to be around $78 million and $191 million, respectively.

9 Ben Cottier and Robi Rahman led research at Epoch AI into model training cost.
10 A detailed description of the estimation methodology is provided in the Appendix.
11 The cost figures reported in this section are inflation-adjusted.
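The hardware-time approach described above can be illustrated with a back-of-the-envelope calculation: multiply the number of accelerators by the training duration and an hourly rental price. The chip count, duration, and price below are hypothetical placeholders and are not Epoch's estimates for any specific model; the full methodology (including hardware utilization adjustments) is in the Appendix.

    # Back-of-the-envelope training cost from hardware-time and rental prices.
    # All inputs are hypothetical, not Epoch's published estimates.

    def estimated_training_cost(
        num_accelerators: int,
        training_days: float,
        price_per_gpu_hour: float,
    ) -> float:
        gpu_hours = num_accelerators * training_days * 24
        return gpu_hours * price_per_gpu_hour

    # Hypothetical run: 4,096 accelerators for 90 days at $2.50 per GPU-hour.
    cost = estimated_training_cost(4_096, 90, 2.50)
    print(f"${cost:,.0f}")  # ~$22.1 million under these assumptions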

Figure 1.3.21: Estimated training cost of select AI models, 2017-23 (in U.S. dollars). Models shown include the Transformer, BERT-Large, RoBERTa Large, GPT-3 175B (davinci), Megatron-Turing NLG 530B, LaMDA, PaLM (540B), GPT-4, Llama 2 70B, and Gemini Ultra. Source: Epoch, 2023 | Chart: 2024 AI Index report.

Figure 1.3.22 visualizes the training cost of all AI models for which the AI Index has estimates. As the figure shows, model training costs have sharply increased over time.

Figure 1.3.22: Estimated training cost of select AI models, 2016-23 (in U.S. dollars, log scale). Source: Epoch, 2023 | Chart: 2024 AI Index report.

As established in previous AI Index reports, there is a direct correlation between the training costs of AI models and their computational requirements. As illustrated in Figure 1.3.23, models with greater computational training needs cost substantially more to train.

Figure 1.3.23: Estimated training cost and compute of select AI models (training compute in petaFLOP, training cost in U.S. dollars, both log scale). Source: Epoch, 2023 | Chart: 2024 AI Index report.
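One way to make the correlation in Figure 1.3.23 concrete is to fit a line to log cost versus log compute. The sketch below does this on invented (compute, cost) pairs that are loosely inspired by the ranges discussed in this section; it illustrates the style of analysis, not the Index's actual data or results.

    # Fit a power law (a line in log-log space) relating training compute to cost.
    # The (petaFLOP, dollar) pairs below are invented for illustration only.
    import numpy as np

    compute_pflop = np.array([7.4e3, 3.1e6, 2.5e8, 2.1e10, 5.0e10])
    cost_usd = np.array([9.0e2, 1.6e5, 4.3e6, 7.8e7, 1.9e8])

    slope, intercept = np.polyfit(np.log10(compute_pflop), np.log10(cost_usd), 1)
    print(f"cost ~ compute^{slope:.2f}")  # a slope near 1 would mean roughly linear scaling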

1.4 AI Conferences

AI conferences serve as essential platforms for researchers to present their findings and network with peers and collaborators. Over the past two decades, these conferences have expanded in scale, quantity, and prestige. This section explores trends in attendance at major AI conferences.

Conference Attendance
Figure 1.4.1 graphs attendance at a selection of AI conferences since 2010. Following a decline in attendance, likely due to the shift back to exclusively in-person formats, the AI Index reports an increase in conference attendance from 2022 to 2023.12 Specifically, there was a 6.7% rise in total attendance over the last year. Since 2015, the annual number of attendees has risen by around 50,000, reflecting not just a growing interest in AI research but also the emergence of new AI conferences.

Figure 1.4.1: Attendance at select AI conferences, 2010-23 (63.29 thousand attendees in 2023). Source: AI Index, 2023 | Chart: 2024 AI Index report.

Neural Information Processing Systems (NeurIPS) remains one of the most attended AI conferences, attracting approximately 16,380 participants in 2023 (Figure 1.4.2 and Figure 1.4.3). Among the major AI conferences, NeurIPS, ICML, ICCV, and AAAI experienced year-over-year increases in attendance. However, in the past year, CVPR, ICRA, ICLR, and IROS observed slight declines in their attendance figures.

Figure 1.4.2: Attendance at large conferences, 2010-23 (in thousands). In 2023: NeurIPS, 16.38; CVPR, 8.34; ICML, 7.92; ICCV, 7.33; ICRA, 6.60; AAAI, 4.47; ICLR, 3.76; IROS, 3.65. Source: AI Index, 2023 | Chart: 2024 AI Index report.
Figure 1.4.3: Attendance at small conferences, 2010-23 (in thousands). In 2023: IJCAI, 1.99; AAMAS, 0.97; FAccT, 0.83; UAI, 0.48; ICAPS, 0.31; KR, 0.25. Source: AI Index, 2023 | Chart: 2024 AI Index report.

12 This data should be interpreted with caution given that many conferences in the last few years have had virtual or hybrid formats. Conference organizers report that measuring the exact attendance numbers at virtual conferences is difficult, as virtual conferences allow for higher attendance of researchers from around the world. The conferences for which the AI Index tracked data include NeurIPS, CVPR, ICML, ICCV, ICRA, AAAI, ICLR, IROS, IJCAI, AAMAS, FAccT, UAI, ICAPS, and KR.

1.5 Open-Source AI Software

GitHub is a web-based platform that enables individuals and teams to host, review, and collaborate on code repositories. Widely used by software developers, GitHub facilitates code management, project collaboration, and open-source software support. This section draws on data from GitHub, providing insights into broader trends in open-source AI software development not reflected in academic publication data.

Projects
A GitHub project comprises a collection of files, including source code, documentation, configuration files, and images, that together make up a software project. Figure 1.5.1 looks at the total number of GitHub AI projects over time. Since 2011, the number of AI-related GitHub projects has seen a consistent increase, growing from 845 in 2011 to approximately 1.8 million in 2023.13 Notably, there was a sharp 59.3% rise in the total number of GitHub AI projects in the last year alone.

Figure 1.5.1: Number of GitHub AI projects, 2011-23 (1.81 million in 2023). Source: GitHub, 2023 | Chart: 2024 AI Index report.

Figure 1.5.2 reports GitHub AI projects by geographic area since 2011. As of 2023, a significant share of GitHub AI projects were located in the United States, accounting for 22.9% of contributions. India was the second-largest contributor with 19.0%, followed closely by the European Union and the United Kingdom at 17.9%. Notably, the proportion of AI projects from developers located in the United States on GitHub has been on a steady decline since 2016.

Figure 1.5.2: GitHub AI projects (% of total) by geographic area, 2011-23. In 2023: rest of the world, 37.09%; United States, 22.93%; India, 19.01%; European Union and United Kingdom, 17.93%; China, 3.04%. Source: GitHub, 2023 | Chart: 2024 AI Index report.

13 GitHub's methodology for identifying AI-related projects has evolved over the past year. For classifying AI projects, GitHub has started incorporating generative AI keywords from a recently published research paper, a shift from the previously detailed methodology in an earlier paper. This edition of the AI Index is the first to adopt this updated approach. Moreover, the previous edition of the AI Index utilized country-level mapping of GitHub AI projects conducted by the OECD, which depended on self-reported data, a method experiencing a decline in coverage over time. This year, the AI Index has adopted geographic mapping from GitHub, leveraging server-side data for broader coverage. Consequently, the data presented here may not align perfectly with data in earlier versions of the report.

Stars
GitHub users can show their interest in a repository by “starring” it, a feature similar to liking a post on social media, which signifies support for an open-source project. Among the most starred repositories are libraries such as TensorFlow, OpenCV, Keras, and PyTorch, which enjoy widespread popularity among software developers in the AI coding community. For example, TensorFlow is a popular library for building and deploying machine learning models. OpenCV is a platform that offers a variety of tools for computer vision, such as object detection and feature extraction. The total number of stars for AI-related projects on GitHub saw a significant increase in the last year, more than tripling from 4.0 million in 2022 to 12.2 million in 2023 (Figure 1.5.3). This sharp increase in GitHub stars, along with the previously reported rise in projects, underscores the accelerating growth of open-source AI software development.

Figure 1.5.3: Number of GitHub stars in AI projects, 2011-23 (12.21 million in 2023). Source: GitHub, 2023 | Chart: 2024 AI Index report.

In 2023, the United States led in receiving the highest number of GitHub stars, totaling 10.5 million (Figure 1.5.4). All major geographic regions sampled, including the European Union and United Kingdom, China, and India, saw a year-over-year increase in the total number of GitHub stars awarded to projects located in their countries.

Figure 1.5.4: Number of cumulative GitHub stars by geographic area, 2011-23 (in millions). In 2023: United States, 10.45; rest of the world, 7.86; European Union and United Kingdom, 4.53; China, 2.12; India, 1.92. Source: GitHub, 2023 | Chart: 2024 AI Index report.
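Star counts for individual repositories, the raw ingredient behind aggregates like Figures 1.5.3 and 1.5.4, can be read from GitHub's public REST API. The sketch below prints the stargazers_count field for a few well-known AI libraries; unauthenticated requests are rate-limited, so this is purely illustrative.

    # Fetch current star counts for a few AI repositories from the GitHub REST API.
    # Unauthenticated calls are rate-limited; pass an auth token header for heavier use.
    import requests

    for repo in ("tensorflow/tensorflow", "opencv/opencv", "pytorch/pytorch"):
        resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
        resp.raise_for_status()
        print(f"{repo}: {resp.json()['stargazers_count']:,} stars")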

CHAPTER 2: Technical Performance

Preview
Overview
Chapter Highlights
2.1 Overview of AI in 2023
    Timeline: Significant Model Releases
    State of AI Performance
    AI Index Benchmarks
2.2 Language
    Understanding
        HELM: Holistic Evaluation of Language Models
        MMLU: Massive Multitask Language Understanding
    Generation
        Chatbot Arena Leaderboard
    Factuality and Truthfulness
        TruthfulQA
        HaluEval
2.3 Coding
    Generation
        HumanEval
        SWE-Bench
2.4 Image Computer Vision and Image Generation
    Generation
        HEIM: Holistic Evaluation of Text-to-Image Models
        Highlighted Research: MVDream
    Instruction-Following
        VisIT-Bench
    Editing
        EditVal
        Highlighted Research: ControlNet
        Highlighted Research: Instruct-NeRF2NeRF
    Segmentation
        Highlighted Research: Segment Anything
    3D Reconstruction From Images
        Highlighted Research: Skoltech3D
        Highlighted Research: RealFusion
2.5 Video Computer Vision and Video Generation
    Generation
        UCF101
        Highlighted Research: Align Your Latents
        Highlighted Research: Emu Video
2.6 Reasoning
    General Reasoning
        MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI
        GPQA: A Graduate-Level Google-Proof Q&A Benchmark
        Highlighted Research: Comparing Humans, GPT-4, and GPT-4V on Abstraction and Reasoning Tasks
    Mathematical Reasoning
        GSM8K
        MATH
        PlanBench
    Visual Reasoning
        Visual Commonsense Reasoning (VCR)
    Moral Reasoning
        MoCa
    Causal Reasoning
        BigToM
        Highlighted Research: Tübingen Cause-Effect Pairs
2.7 Audio
    Generation
        Highlighted Research: UniAudio
        Highlighted Research: MusicGen and MusicLM
2.8 Agents
    General Agents
        AgentBench
        Highlighted Research: Voyager
    Task-Specific Agents
        MLAgentBench
2.9 Robotics
    Highlighted Research: PaLM-E
    Highlighted Research: RT-2
2.10 Reinforcement Learning
    Reinforcement Learning from Human Feedback
        Highlighted Research: RLAIF
        Highlighted Research: Direct Preference Optimization
2.11 Properties of LLMs
    Highlighted Research: Challenging the Notion of Emergent Behavior
    Highlighted Research: Changes in LLM Performance Over Time
    Highlighted Research: LLMs Are Poor Self-Correctors
    Closed vs. Open Model Performance
2.12 Techniques for LLM Improvement
    Prompting
        Highlighted Research: Graph of Thoughts Prompting
        Highlighted Research: Optimization by PROmpting (OPRO)
    Fine-Tuning
        Highlighted Research: QLoRA
    Attention
        Highlighted Research: Flash-Decoding
2.13 Environmental Impact of AI Systems
    General Environmental Impact
        Training
        Inference
        Positive Use Cases

Overview

The technical performance section of this year's AI Index offers a comprehensive overview of AI advancements in 2023. It starts with a high-level overview of AI technical performance, tracing its broad evolution over time. The chapter then examines the current state of a wide range of AI capabilities, including language processing, coding, computer vision (image and video analysis), reasoning, audio processing, autonomous agents, robotics, and reinforcement learning. It also shines a spotlight on notable AI research breakthroughs from the past year, exploring methods for improving LLMs through prompting, optimization, and fine-tuning, and wraps up with an exploration of AI systems' environmental footprint.

Chapter Highlights

1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning, and planning.

2. Here comes multimodal AI. Traditionally, AI systems have been limited in scope, with language models excelling in text comprehension but faltering in image processing, and vice versa. However, recent advancements have led to the development of strong multimodal models, such as Google's Gemini and OpenAI's GPT-4. These models demonstrate flexibility and are capable of handling images and text and, in some instances, can even process audio.

3. Harder benchmarks emerge. AI models have reached performance saturation on established benchmarks such as ImageNet, SQuAD, and SuperGLUE, prompting researchers to develop more challenging ones. In 2023, several challenging new benchmarks emerged, including SWE-bench for coding, HEIM for image generation, MMMU for general reasoning, MoCa for moral reasoning, AgentBench for agent-based behavior, and HaluEval for hallucinations.

4. Better AI means better data, which means even better AI. New AI models such as SegmentAnything and Skoltech are being used to generate specialized data for tasks like image segmentation and 3D reconstruction. Data is vital for AI technical improvements. The use of AI to create more data enhances current capabilities and paves the way for future algorithmic improvements, especially on harder tasks.

5. Human evaluation is in. With generative models producing high-quality text, images, and more, benchmarking has slowly started shifting toward incorporating human evaluations like the Chatbot Arena Leaderboard rather than computerized rankings like ImageNet or SQuAD. Public feeling about AI is becoming an increasingly important consideration in tracking AI progress.

6. Thanks to LLMs, robots have become more flexible. The fusion of language modeling with robotics has given rise to more flexible robotic systems like PaLM-E and RT-2. Beyond their improved robotic capabilities, these models can ask questions, which marks a significant step toward robots that can interact more effectively with the real world.

7. More technical research in agentic AI. Creating AI agents, systems capable of autonomous operation in specific environments, has long challenged computer scientists. However, emerging research suggests that the performance of autonomous AI agents is improving. Current agents can now master complex games like Minecraft and effectively tackle real-world tasks, such as online shopping and research assistance.

8. Closed LLMs significantly outperform open ones. On 10 select AI benchmarks, closed models outperformed open ones, with a median performance advantage of 24.2%. Differences in the performance of closed and open models carry important implications for AI policy debates.

2.1 Overview of AI in 2023

The technical performance chapter begins with a high-level overview of significant model releases in 2023 and reviews the current state of AI technical performance.

Timeline: Significant Model Releases
As chosen by the AI Index Steering Committee, here are some of the most notable model releases of 2023.

Date: Mar. 14, 2023 | Model: Claude | Type: Large language model | Creator(s): Anthropic
Significance: Claude is the first publicly released LLM from Anthropic, one of OpenAI's main rivals. Claude is designed to be as helpful, honest, and harmless as possible. (Image: Figure 2.1.1. Source: Anthropic, 2023)

Date: Mar. 14, 2023 | Model: GPT-4 | Type: Large language model | Creator(s): OpenAI
Significance: GPT-4, improving over GPT-3, is among the most powerful and capable LLMs to date and surpasses human performance on numerous benchmarks. (Image: Figure 2.1.2. Source: Medium, 2023)

Date: Mar. 23, 2023 | Model: Stable Diffusion v2 | Type: Text-to-image model | Creator(s): Stability AI
Significance: Stable Diffusion v2 is an upgrade of Stability AI
