World Economic Forum: 2024 AI Governance Alliance Briefing Paper Series (English) (60 pages).pdf


AI Governance Alliance Briefing Paper Series
January 2024

Foreword

Our world is experiencing a phase of multi-faceted transformation in which technological innovation plays a leading role. Since its inception in the latter half of the 20th century, artificial intelligence (AI) has journeyed through significant milestones, culminating in the recent breakthrough of generative AI. Generative AI possesses a remarkable range of abilities to create, analyse and innovate, signalling a paradigm shift that is reshaping industries from healthcare to entertainment, and beyond. As new capabilities of AI advance and drive further innovation, it is also revolutionizing economies and societies around the world at an exponential pace.

With the economic promise and opportunity that AI brings comes great social responsibility. Leaders across countries and sectors must collaborate to ensure it is ethically and responsibly developed, deployed and adopted. The World Economic Forum's AI Governance Alliance (AIGA) stands as a pioneering collaborative effort, uniting industry leaders, governments, academic institutions and civil society organizations. The alliance represents a shared commitment to responsible AI development and innovation while upholding ethical considerations at every stage of the AI value chain, from development to application and governance.

The alliance, led by the World Economic Forum in collaboration with IBM Consulting and Accenture as knowledge partners, is made up of three core workstreams: Safe Systems and Technologies; Responsible Applications and Transformation; and Resilient Governance and Regulation. These pillars underscore a comprehensive end-to-end approach to address key AI governance challenges and opportunities. The alliance is a global effort that unites diverse perspectives and stakeholders, which allows for thoughtful debates, ideation and implementation strategies for meaningful long-term solutions.

The alliance also advances key perspectives on access and inclusion, driving efforts to enhance access to critical resources such as learning, skills, data, models and compute. This work includes considering how such resources can be equitably distributed, especially to underserved regions and communities. Most critically, it is vital that stakeholders who are typically not engaged in AI governance dialogues are given a seat at the table, ensuring that all voices are included. In doing so, the AI Governance Alliance provides a forum for all.

As we navigate the dynamic and ever-evolving landscape of AI governance, the insights from the AI Governance Alliance are aimed at providing valuable guidance for the responsible development, adoption and overall governance of generative AI. We encourage decision-makers, industry leaders, policy-makers and thinkers from around the world to actively participate in our collective efforts to shape an AI-driven future that upholds shared human values and promotes inclusive societal progress for everyone.

John Granger, Senior Vice-President, IBM Consulting
Paul Daugherty, Chief Technology and Innovation Officer (CTIO), Accenture
Cathy Li, Head, AI, Data and Metaverse; Member of the Executive Committee, World Economic Forum
Jeremy Jurgens, Managing Director, World Economic Forum

Introduction to the briefing paper series

The AI Governance Alliance was launched in June 2023 with the objective of providing guidance on the responsible design, development and deployment of artificial intelligence systems. Since its inception, more than 250 members have joined the alliance from over 200 organizations across six continents. The alliance is made up of a steering committee along with three working groups. The Steering Committee comprises leaders from the public and private sectors along with academia and provides guidance on the overall direction of the alliance and its working groups.

The Safe Systems and Technologies working group, led in collaboration with IBM Consulting, is focused on establishing consensus on the necessary safeguards to be implemented during the development phase, examining technical dimensions of foundation models, including guardrails and the responsible release of models and applications. Accountability is defined at each stage of the AI life cycle to ensure oversight and thoughtful expansion.

The Responsible Applications and Transformation working group, led in collaboration with IBM Consulting, is focused on evaluating business transformation for responsible generative AI adoption across industries and sectors. This includes assessing generative AI use cases enabling new or incremental value creation, and understanding their impact on value chains and business models while evaluating considerations for adoption and their downstream effects.

The Resilient Governance and Regulation working group, led in collaboration with Accenture, is focused on the analysis of the AI governance landscape, mechanisms to facilitate international cooperation to promote regulatory interoperability, as well as the promotion of equity, inclusion and global access to AI. This

briefing paper series is the first output from each of the three working groups and establishes the foundational focus areas of the AI Governance Alliance. In a time of rapid change, the AI Governance Alliance seeks to build a multistakeholder community of trusted voices from across the public, private, civil society and academic spheres, united to tackle some of the most challenging and potentially most rewarding issues in contemporary AI governance.

Reading guide

This paper series is composed of three briefing papers that have been grouped into thematic categories according to the three working groups of the alliance. Each briefing paper of the report can also be read as a stand-alone piece. For example, developers, adopters and policy-makers who are more interested in the technical dimensions can easily jump to the Safe Systems and Technologies briefing paper to obtain a contemporary understanding of the AI landscape. For decision-makers engaged in corporate strategy and the business implications of generative AI, the Responsible Applications and Transformation briefing paper offers specific context. For business leaders and policy-makers occupied with the laws, policies, principles and practices that govern the ethical development, deployment, use and regulation of AI technologies, the Resilient Governance and Regulation briefing paper offers guidance.

While each briefing paper has a unique focus area, many important lessons are learned at the intersection of these varying multistakeholder communities, along with the consensus and knowledge that emanate from each working group. Therefore, many of the takeaways from this briefing paper series should be viewed at the intersection of each working group, where findings become additive and are enhanced in context and interrelation with one another.

The series comprises three papers:
1/3 Presidio AI Framework: Towards Safe Generative AI Models (in collaboration with IBM Consulting). Theme 1: Safe Systems and Technologies.
2/3 Unlocking Value from Generative AI: Guidance for Responsible Transformation (in collaboration with IBM Consulting). Theme 2: Responsible Applications and Transformation.
3/3 Generative AI Governance: Shaping a Collective Global Future (in collaboration with Accenture). Theme 3: Resilient Governance and Regulation.

AI Governance Alliance Steering Committee

Nick Clegg, President, Global Affairs, Meta
Gary Cohn, Vice-Chairman, IBM
Sadie Creese, Professor of Cybersecurity, University of Oxford
Orit Gadiesh, Chairman, Bain & Company
Paula Ingabire, Minister of Information Communication Technology of Rwanda
Daphne Koller, Founder and Chief Executive Officer, Insitro
Xue Lan, Professor; Dean, Schwarzman College, Tsinghua University
Anna Makanju, Vice-President, Global Affairs, OpenAI
Durga Malladi, Senior Vice-President, Qualcomm
Andrew Ng, Founder, DeepLearning.AI
Sabastian Niles, President and Chief Legal Officer, Salesforce
Omar Sultan Al Olama, Minister of State for Artificial Intelligence, United Arab Emirates
Lynne Parker, Associate Vice-Chancellor and Director, AI Tennessee Initiative, University of Tennessee
Brad Smith, Vice-Chair and President, Microsoft
Mustafa Suleyman, Co-Founder and Chief Executive Officer, Inflection AI
Josephine Teo, Minister for Communications and Information, Ministry of Communications and Information (MCI) of Singapore
Kent Walker, President, Global Affairs, Google

Glossary

Terminology in AI is a fast-moving topic, and the same term can have multiple meanings. The glossary below should be viewed as a snapshot of contemporary definitions.

Artificial intelligence system: A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.[1]

Causal AI: AI models that identify and analyse causal relationships in data, enabling predictions and decisions based on these relationships. Causal inference models provide responsible AI benefits, including explainability and bias reduction through formalizations of fairness, as well as contextualization for model reasoning and outputs. The intersection and exploration of causal and generative AI models is a new conversation.

Fine-tuning: The process of adapting a pre-trained model to perform a specific task by conducting additional training while updating the model's existing parameters.

Foundation model: An AI model that can be adapted to a wide range of downstream tasks. Foundation models are typically large-scale (e.g. billions of parameters) generative models trained on a vast array of data, encompassing both labelled and unlabelled datasets.

Frontier model: This term generally refers to the most advanced or cutting-edge models in AI technology. Frontier models represent the latest developments and are often characterized by increased complexity, enhanced capabilities and improved performance over previous models.

Generative AI: AI models specifically intended to produce new digital material as an output (e.g. text, images, audio, video and software code), including when such AI models are used in applications and their user interfaces. These are typically constructed as machine learning systems that have been trained on massive amounts of data.[2]

Hallucination: Hallucinations occur when models produce factually inaccurate or untruthful information.

Often, hallucinatory output is presented in a plausible or convincing manner, making detection by end users difficult.

Jurisdictional interoperability: The ability to operate within and across different jurisdictions governed by differing policy and regulatory requirements.[3]

Mis/disinformation: Misinformation involves the dissemination of incorrect facts, where individuals may unknowingly share or believe false information without the intent to mislead. Disinformation involves the deliberate and intentional spread of false information with the aim of misleading others.[4]

Model drift monitoring: The act of regularly comparing model metrics to maintain performance despite changing data, adversarial inputs, noise and external factors.

Model hyperparameters: Adjustable parameters of a model that must be tuned to obtain optimal performance (as opposed to fixed parameters of a model, defined based on its training set).

Multi-modal AI: AI technology capable of processing and interpreting multiple types of data (like text, images, audio and video), potentially simultaneously. It integrates techniques from various domains (natural language processing, computer vision, audio processing) for more comprehensive analysis and insights.

Prompt engineering: The process of designing natural language prompts for a language model to perform a specific task.

Retrieval augmented generation: A technique in which a large language model is augmented with knowledge from external sources to generate text. In the retrieval step, relevant documents from an external source are identified from the user's query. In the generation step, portions of those documents are included in the model prompt to generate a response grounded in the retrieved documents.

Parameter-efficient fine-tuning: An efficient, low-cost way of adapting a pre-trained model to new tasks without retraining the model or updating its weights. It involves learning a small number of new parameters that are appended to a model's prompt while freezing the model's existing parameters (also known as prompt-tuning).

AI red teaming: A method of simulating attacks by a group of people authorized and organized to identify potential weaknesses, vulnerabilities and areas for improvement. It should be integral from model design to development to deployment and application. The red team's objective is to improve security and robustness by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders in an operational environment.

Reinforcement learning from human feedback (RLHF): An approach for model improvement where human evaluators rank model-generated outputs for safety, relevance and coherence, and the model is updated based on this feedback to broadly improve performance.

Release access: A gradient covering different levels of access granted.[5]
– Fully closed: The foundation model and its components (like weights, data and documentation) are not released outside the creator group or a sub-section of the organization. The same organization usually does model creation and downstream model adaptation. External users may interact with the model through an application.
– Hosted: Creators provide access to the foundation model by hosting it on their infrastructure, allowing internal and external interaction via a user interface, and releasing specific model details.
– Application programming interface (API): Creators provide access to the foundation model by hosting it on their infrastructure and allowing adapter interaction via an API to perform prescribed tasks, and release specific model details.
– Downloadable: Creators provide a way to download the foundation model for running on the adapter's infrastructure while withholding some of its components, like training data.
– Fully open: Creators release all model components, including all parameters, weights, model architecture, training code, data and documentation.

Responsible adoption: The adoption of individual use cases and opportunities within the responsible AI framework of an organization. It requires thorough evaluation to ensure that value can be realized and change management is successfully aligned with defined goals in a responsible framework.

Responsible AI: AI that is developed and deployed in ways that maximize benefits and minimize the risks it poses to people, society and the environment. It is often described by various principles and organizations, including but not limited to robustness, transparency, explainability, fairness and equity.[6]

Responsible transformation: The organizational effort and orientation to harness the opportunities and benefits of generative AI while mitigating the risks to individuals, organizations and society. Responsible transformation is strategic coordination and change across an organization's governance, operations, talent and communications.

Traceability: Determining the original source and facts of the generated output.

Transparency: The disclosure of details (decisions, choices and processes) in the documentation about the sources, data and model to enable informed decisions regarding model selection and understanding.

Usage restriction: The process of restricting the usage of the model beyond the intended use cases/purpose to avoid unintended consequences of the model.

Watermarking: The act of embedding information into outputs created by AI (e.g. images, videos, audio, text) for the purposes of verifying the authenticity of the output, identity and/or characteristics of its provenance, modifications and/or conveyance.[7]

Endnotes

1. “OECD AI Principles overview”, Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory, 2023, https://oecd.ai/en/ai-principles.
2. OECD, G7 Hiroshima Process on Generative Artificial Intelligence (AI): Towards a G7 Common Understanding on Generative AI, 2023, https://www.oecd.org/publications/g7-hiroshima-process-on-generative-artificial-intelligence-ai-bf3c0c60-en.htm.
3. World Economic Forum, Interoperability in the Metaverse, 2023, https://www.weforum.org/publications/interoperability-in-the-metaverse/.
4.

World Economic Forum, Toolkit for Digital Safety Design Interventions and Innovations: Typology of Online Harms, 2023, https://www.weforum.org/publications/toolkit-for-digital-safety-design-interventions-and-innovations-typology-of-online-harms/.
5. Solaiman, Irene, “The Gradient of Generative AI Release: Methods and Considerations”, Hugging Face, 2023, https://arxiv.org/abs/2302.04844.
6. World Economic Forum, The Presidio Recommendations on Responsible Generative AI, 2023, https://www.weforum.org/publications/the-presidio-recommendations-on-responsible-generative-ai/.
7. The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

Presidio AI Framework: Towards Safe Generative AI Models
In collaboration with IBM Consulting
AI Governance Alliance Briefing Paper Series 2024, 1/3

Cover image: MidJourney

© 2024 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Disclaimer: This document is published by the World Economic Forum as a contribution to a project, insight area or interaction. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum but whose results do not necessarily represent the views of the World Economic Forum, nor the entirety of its Members, Partners or other stakeholders.

Contents
Executive summary
Introduction
1 Introducing the Presidio AI Framework
2 Expanded AI life cycle
3 Guardrails across the expanded AI life cycle
3.1 Foundation model building phase
3.2 Foundation model release phase
3.3 Model adaptation phase
4 Shifting left for optimized risk mitigation
Conclusion
Contributors
Endnotes

Executive summary

The rise of generative AI presents significant opportunities for positive societal transformations. At the same time, generative AI models add new dimensions to AI risk management, encompassing various risks such as hallucinations, misuse, lack of traceability and harmful output. Therefore, it is essential to balance safety, ethics and innovation. This briefing paper identifies a list of challenges to achieving this balance in practice, such as the lack of a cohesive view of the generative AI model life cycle and ambiguity in terms of the deployment and perceived effectiveness of varying safety guardrails throughout the life cycle. Amid these challenges, there are significant opportunities, including greater standardization through shared terminology and best practices, facilitating a common understanding of the effectiveness of various risk-mitigation strategies.

This briefing paper presents the Presidio AI Framework, which provides a structured approach to the safe development, deployment and use of generative AI. In doing so, the framework highlights gaps and opportunities in addressing safety concerns, viewed from the perspective of four primary actors: AI model creators, AI model adapters, AI model users and AI application users. Shared responsibility, early risk identification and proactive risk management through the implementation of appropriate guardrails are emphasized throughout.

The Presidio AI Framework consists of three core components:
1. Expanded AI life cycle: This element of the framework establishes a comprehensive end-to-end view of the generative AI life cycle, signifying varying actors and levels of responsibility at each stage.
2. Expanded risk guardrails: The framework details robust guardrails to be considered at different steps of the generative AI life cycle, emphasizing prevention rather than mitigation.
3. Shift-left methodology: This methodology proposes the implementation of guardrails at the earliest stage possible in the generative AI life cycle. While shift-left is a well-established concept in software engineering, its application in the context of generative AI presents a unique opportunity to promote more widespread adoption.

In conclusion, the paper emphasizes the need for greater multistakeholder collaboration between industry stakeholders, policy-makers and organizations. The Presidio AI Framework promotes shared responsibility, early risk identification and proactive risk management in generative AI development, using guardrails to ensure ethical and responsible deployment. The paper lays the foundation for ongoing safety-related work of the AI Governance Alliance and the Safe Systems and Technologies working group. Future work will expand on the core concepts and components introduced in this paper, including the provision of a more exhaustive list of known and novel guardrails, along with a checklist to operationalize the framework across the generative AI life cycle.

The Presidio AI Framework addresses generative AI risks by promoting safety, ethics and innovation with early guardrails.

Introduction

The current AI landscape includes both challenges and opportunities for progress towards safe generative AI models.

This briefing paper outlines the Presidio AI Framework, providing a structured approach to addressing both technical and procedural considerations for safe generative artificial intelligence (AI) models. The framework centres on foundation models and incorporates risk-mitigation strategies throughout the entire life cycle, encompassing creation, adaptation and eventual retirement. Informed by thorough research into the current AI landscape and input from a multistakeholder community and practitioners, the framework underscores the importance of established safety guidelines and recommendations viewed through a technical lens.

Notable challenges in the existing landscape impacting the development and deployment of safe generative AI include:
– Fragmentation: A holistic perspective, which covers the entire life cycle of generative AI models from their initial design to deployment and the continuous stages of adaptation and use, is currently missing. This can lead to fragmented perceptions of the model's creation and the risks associated with its deployment.
– Vague definitions: Ambiguity and a lack of common understanding of the meaning of safety, risks[1] (e.g. traceability) and general safety measures (e.g. red teaming) at the frontier of model development.
– Guardrail ambiguity: While there is agreement on the importance of risk-mitigation strategies known as guardrails, clarity is lacking regarding accountability, effectiveness, actionability, applicability, limitations and at what stages of the AI design, development and release life cycle varying guardrails should be implemented.
– Model access: An open approach presents significant opportunities for innovation, greater adoption and increased stakeholder population diversity. However, the availability of all the model components (e.g. weights, technical documentation and code) could also amplify risks and reduce guardrails' effectiveness. There is a need for careful analysis of risks and common consensus on the use of guardrails, considering the gradient of release;[2] that is, the varying levels at which AI models are accessible once released, from fully closed to fully open-sourced.

Simultaneously, there are some identified opportunities for progress towards safety, such as:
– Standardization: By linking the technical aspects at each phase of design, development and release with their corresponding risks and mitigations, there is the opportunity for bringing attention to shared terminology and best practices. This may contribute towards greater adoption of necessary safety measures and promote community harmonization across different standards and guidelines.
– Stakeholder trust and empowerment: Pursuing clarity and agreement on the expected risk-mitigation strategies, where these are most effectively located in the model life cycle and who is accountable for implementation, paves the way for stakeholders to implement these proactively. This improves safety, prevents adverse outcomes for individuals and society, and builds trust among all stakeholders.

While this briefing paper details the generative AI model life cycle along with some guardrails, it is by no means exhaustive. Some topics outside this paper's scope include a discussion of current or future government regulations of AI risks and mitigations (this is covered in the Resilient Governance working group briefing paper) or consideration of downstream implementation and use of specific AI applications.
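The gradient of release discussed above lends itself to a small data model. The sketch below (a purely illustrative Python example, not part of the framework) encodes the five access levels and one consequence the paper emphasizes: once model weights leave the creator's infrastructure, creator-side output guardrails can no longer be enforced. The type names, fields and policy mapping are assumptions made for this example.

```python
from dataclasses import dataclass
from enum import Enum

class ReleaseType(Enum):
    """Access levels from the gradient of release (fully closed -> fully open)."""
    FULLY_CLOSED = 1
    HOSTED = 2
    API = 3
    DOWNLOADABLE = 4
    FULLY_OPEN = 5

@dataclass(frozen=True)
class ReleasePolicy:
    release_type: ReleaseType
    creator_controls_outputs: bool  # can the creator filter or block responses?
    weights_released: bool          # are model weights available downstream?

# Illustrative mapping (an assumption, not a prescription): which creator-side
# controls remain enforceable at each access level.
POLICIES = {
    ReleaseType.FULLY_CLOSED: ReleasePolicy(ReleaseType.FULLY_CLOSED, True, False),
    ReleaseType.HOSTED:       ReleasePolicy(ReleaseType.HOSTED, True, False),
    ReleaseType.API:          ReleasePolicy(ReleaseType.API, True, False),
    ReleaseType.DOWNLOADABLE: ReleasePolicy(ReleaseType.DOWNLOADABLE, False, True),
    ReleaseType.FULLY_OPEN:   ReleasePolicy(ReleaseType.FULLY_OPEN, False, True),
}

def creator_side_guardrails_effective(release: ReleaseType) -> bool:
    """Once weights leave the creator's infrastructure, output-filtering
    guardrails can be bypassed, so responsibility shifts downstream."""
    return POLICIES[release].creator_controls_outputs
```

For instance, `creator_side_guardrails_effective(ReleaseType.DOWNLOADABLE)` returns `False`, mirroring the point made above that open release shifts guardrail responsibility towards adapters and users.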

86、Introducing the Presidio AI Framework1A structured approach that emphasizes shared responsibility and proactive risk mitigation by implementing appropriate guardrails early in the generative AI life cycle.Those releasing,adapting or using foundation models often face challenges in influencing the or

87、iginal model design or setting up the necessary infrastructure for building foundation models.The combined need for regulatory compliance,the significant investments companies are making in AI,and the potential impacts the technology can have on society mean coordination among multiple roles and sta

88、keholders becomes indispensable.The Presidio AI Framework(illustrated in Figure1)offers a streamlined approach to generative AI development,deployment and use from the perspective of four primary actors:AI model creators,AI model adapters,AI model users and AI application users.This human-centric fr

89、amework harmonizes the activities of these roles to enable more efficient information transfer between upstream development and downstream applications of foundation models.AI model creators are responsible for the end-to-end design,development and release of generative AI models.AI model adapters t

90、ailor generative AI models to specific generative tasks before integration into AI applications and can provide feedback to the AI model creator.AI model users interact with a generative AI model through an interface provided by the creator.AI application users interact indirectly with the adapted m

91、odel through an application or application programming interface(API).These actors include secondary groups,for instance,AI model validators and AI model auditors,whose goal is to test and validate against defined metrics,perform safety evaluations or certify the conformity of the AI models pre-rele

92、ase.Validators are internal to AI creator or adapter organizations,while auditors are external entities pursuing model certification.The three elements of the Presidio AI FrameworkFIGURE 1Expanded AI life cycleExpanded risk-guardrailsShift-leftmethodology1/3:Presidio AI Framework12Expanded AI life c

93、ycle2The expanded AI life cycle encompasses risks and guardrails with varying safety benefits and challenges throughout each phase.The expanded AI life cycle synthesizes elements from data management,foundation model design and development,release access,use of generative capabilities and adaptation

94、 to a use case.The expanded AI life cycle is introduced in Figure 2.Presidio AI Frameworks expanded AI life cycleFIGURE 2Model access gradientModel integration phase(with application)AI application userData access gradientFully open publicData with consentCopyrighted dataPrivate dataData sources typ

95、esWeb crawled data User contentSensor dataPublic dataModel life cycleSelect a techniqueBuild,validate,audit and deployPrompt engineeringModel usage phase(general use of generative capabilities)Prompt engineeringRetrieval augmented generationParameter-efficient fine-tuningFine-tuningFully closedHoste

96、dAPIDownloadableFully openDesignData acquisitionData processingModel trainingModel fine-tuningModel performance validationModel audit and model approvalAccessibility gradientReinforcement learning human feedbackEvaluate use case context and risksSelect a foundation modelData management phaseFoundati

97、on model building phaseAI model adapterAI model user(business or individual)Technical and procedural guardrailsNorms,standards and release guardrailsFoundation model release phaseModel adapatation phase(generative task specific use)AI model creator1/3:Presidio AI Framework13The data management phase

98、 describes the data foundations for responsible AI development,including the data access gradient and the catalogue of data source types.The latter aids the AI model creator in navigating various legal implications and challenges,where multiple data source types are typically considered in model cre

99、ation.In the foundation model building phase,the model moves through various stages from design to internal audit and approval.In contrast,each stage is accompanied by a set of distinct guardrails,detailed in the following section.The foundation model release phase provides responsible model dissemi

100、nation and risk mitigation,benefiting downstream users and adapters.Foundation models are classified based on how they are released,depending on the level of access granted to downstream actors.This gradient of access spans from fully closed to fully open access;each access type has its own set of n

101、orms,standards and release guardrails and has specific benefits and challenges,highlighted in Table 1.In all phases,unexpected model behaviour could harm users and bring reputational risks or legal consequences to the user and the model creator or adapter.However,the chances of misuse such as plagia

102、rism,intentional non-disclosure,violation of intellectual property(IP)rights,deepfakes,creation of biologically harmful compounds,generation of toxic content,and misinformation generation may increase if vigilant oversight processes are not adequately implemented going from fully closed to fully ope

103、n model access.The model adaptation phase describes several stages,techniques and guardrails for adapting a pre-trained foundation model to perform specific generative tasks.This phase precedes the model integration phase,involving the models integration with an application,including developing APIs

to serve downstream AI application users. In the model use phase, users engage with hosted-access models using natural language prompts through an interface provided by the model creator, or test them for vulnerabilities. This phase highlights the importance of having the necessary guardrails in place during the foundation model building and release phases, as users interact directly with the model. Adapters, in turn, can add further guardrails based on the use case.

Table 1: Safety benefits and challenges of release types

Fully closed
Safety benefits: Creators control model use and can provide safeguards for data privacy and the IP contained in the model. There is more clarity around responsibility and ownership.
Safety challenges: Other actors have limited visibility into the model design and development process. Auditability and contributor diversity are limited. Application users have minimal influence on model outputs.

Hosted
Safety benefits: Creators can provide safeguards for model outputs, such as blocking model responses to sensitive queries. They can streamline user support. Use can be tracked and used to improve model responses.
Safety challenges: Similar challenges as "fully closed". Other actors have little insight into the model, limiting their ability to understand its decisions.

API
Safety benefits: Creators retain control over the model while empowering users to adapt it for specific use cases. They can provide user support. This level of access increases the "researchability" of the model. Increased access allows users to help identify risks and vulnerabilities.
Safety challenges: Even though transparency is limited, model details can be inferred by third-party tools or attacks (in the case of bad actors).

Downloadable
Safety benefits: Along with creators, adapters and users are also empowered through the release of model components. This means more transparency, flexibility for model use and modification of the model.
Safety challenges: Lowered barriers for misuse and potential bypassing of guardrails. Model creators have difficulty tracking and monitoring model use. Users typically have less support when experiencing unexpected or undesirable model outputs/outcomes.

Fully open
Safety benefits: These models provide the highest levels of auditability and transparency. This level of access increases global participation and contribution to innovation, including in terms of safety and guardrails. Adapters and users are empowered to adapt models to better align with their specific task and to improve existing model functionality and safety via fine-tuning.
Safety challenges: These models present a higher chance of possible misuse. Access to model weights means a higher risk of model replication for unintended purposes by bad actors. Ambiguity around accountability and ownership.

1/3: Presidio AI Framework

3 Guardrails across the expanded AI life cycle

Implementation of known and novel guardrails is necessary for safe systems to ensure technical quality, consistency and control. Guardrails for safe AI systems refer to guidelines, principles and practices that are put in place to ensure the responsible development, deployment and use of generative AI systems and technologies. They are intended to mitigate risks, prevent harm and ensure AI systems operate according to specific standards and ethical and societal values. Guardrails are implemented from the model-building phase onward throughout the expanded AI life cycle and
may be technical or procedural. Technical guardrails involve tools or automated systems and controls, while procedural guardrails rely on human adherence to established processes and guidelines. A combination of both types is often needed to ensure safe systems: technical guardrails ensure technical quality and consistency, while procedural guardrails provide process consistency and control. The section below provides a snapshot of selected guardrails applicable at varying phases of the AI life cycle. For brevity, only two of the most widely used guardrails per phase are highlighted, along with their phase placement.

Table 2: Highlighted guardrails and their phase placement

Building: Red teaming and reinforcement learning from human feedback (RLHF)3
Release: Transparent documentation and use restriction
Adaptation: Model drift monitoring and watermarking

3.1 Model building phase

Performing red teaming early,
especially during fine-tuning and validation in the building phase, is crucial for preventing adverse outcomes and ensuring model safety. Addressing vulnerabilities and ethical concerns earlier in the life cycle demonstrates a commitment to security and ethics while building trust among stakeholders. For foundation models, tests should cover prompt injection, prompt leaking, jailbreaking, hallucination, and IP and personal information (PI) generation, as well as identifying toxic content. While red teaming is effective for known vulnerabilities, it may have limitations in identifying unknown risks, especially before mass release.

Incorporating reinforcement learning from human feedback (RLHF) early on provides a strategic advantage by enabling efficient learning, faster iterations and a strong foundation for subsequent phases, ultimately leading to improved model performance and alignment with human objectives. RLHF may be used here to train a reward model, which is then used to fine-tune the primary model to elicit more desirable responses. This process, which includes an iterative feedback loop between human raters, a trained reward model and the foundation model, ensures the reliability and alignment of the model outputs and improves performance. Although effective for ongoing improvement, this method carries a risk of introducing new biases, as well as data privacy and security considerations around the use of generated data.
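The reward-modelling step of RLHF described above can be sketched in miniature. This is an illustrative sketch, not the paper's method: each candidate response is reduced to a tiny hand-written feature vector (a stand-in for a learned embedding), the reward model is linear, and it is fit with the Bradley-Terry preference objective that underlies RLHF reward training. A real pipeline would train a neural reward model on human preference pairs and then optimize the base model against it.

```python
import math

# Illustrative sketch of RLHF's reward-modelling step. Assumptions (not from
# the paper): responses are small hand-written feature vectors, and the
# reward model is linear.

def reward(weights, features):
    """Scalar reward: dot product of weights and response features."""
    return sum(w * f for w, f in zip(weights, features))

def train_reward_model(preference_pairs, n_features, lr=0.1, epochs=200):
    """Fit weights so human-preferred responses score higher (Bradley-Terry).

    preference_pairs: list of (preferred_features, rejected_features) tuples.
    Minimizes -log(sigmoid(r_preferred - r_rejected)) by gradient descent.
    """
    weights = [0.0] * n_features
    for _ in range(epochs):
        for pref, rej in preference_pairs:
            margin = reward(weights, pref) - reward(weights, rej)
            grad_scale = 1.0 / (1.0 + math.exp(margin))  # 1 - sigmoid(margin)
            for i in range(n_features):
                weights[i] += lr * grad_scale * (pref[i] - rej[i])
    return weights

# Toy preference data: feature 0 ~ "helpfulness", feature 1 ~ "toxicity";
# raters consistently prefer helpful, non-toxic responses.
pairs = [
    ([0.9, 0.1], [0.4, 0.8]),
    ([0.7, 0.0], [0.6, 0.9]),
    ([0.8, 0.2], [0.2, 0.7]),
]
w = train_reward_model(pairs, n_features=2)
# The learned reward prefers helpful, low-toxicity responses and could be
# used to score candidate outputs during fine-tuning.
assert reward(w, [0.9, 0.1]) > reward(w, [0.3, 0.9])
```

The iterative loop described in the text then alternates: the fine-tuned model generates new responses, raters compare them, and the reward model is refreshed on the new preference pairs.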

3.2 Model release phase

Guardrails implemented in the release phase combine approaches designed to empower downstream actors (such as transparent documentation) and to protect them (such as use restrictions). Transparent documentation is a collection of details (decisions, choices and processes) about the AI model, including its data. It mitigates the risk of lack of transparency,5 and therefore empowers downstream adapters and users to understand the model's limitations, evaluate its impact and make decisions on model use. This guardrail increases the auditability of the model and helps advance policy initiatives. Best practices include understanding target consumers, their requirements and expectations; developing persona-based templates (e.g. for business owners, validators and auditors) with pre-defined fields; and assigning responsibility for gathering information at every phase of the life cycle. Datasheets, data cards, model cards, factsheets and the indicators of Stanford's foundation model transparency index are a few examples for building templates. Automating fact collection, documentation building and transparency auditing could improve overall efficiency and effectiveness. Limitations include identifying the most useful facts and ambiguity in balancing the disclosure of proprietary and required information.

Use restriction limits model use beyond intended purposes. It mitigates the risk of model misuse and other unintended harms, such as generating harmful content or adapting the model for problematic use cases. Best practices involve using restrictive licences such as responsible AI licences (RAIL), setting up model use and user tracking, and providing clear guidelines on allowed use while implementing feedback/incident reporting mechanisms. Additionally, integrating moderation tools to filter or flag undesirable content, disallowing harmful or
sensitive prompts and blocking the model from responding to misaligned prompts must be considered. Limitations include the need for standards for model licences and guidelines, and for high-quality tools to help restrict model responses.

3.3 Model adaptation phase

A critical goal of the adaptation phase is to ensure that the adapted model remains effective and aligned with the selected use case. Model drift monitoring involves regularly comparing post-deployment metrics to maintain performance in the face of evolving data, adversarial inputs, noise and external factors. The goal is to mitigate the risk of model drift, where the model's output deviates from expectations over time. Best practices include systematically using data, algorithms and tools to track data drift, and defining response protocols and adaptation techniques to sustain model performance and customer trust.
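The drift-monitoring pattern described above can be sketched as a rolling comparison against a deployment-time baseline. The metric (a per-response quality score), window size, tolerance and alerting mechanics are illustrative assumptions, not prescribed by the framework.

```python
from collections import deque

# Minimal sketch of post-deployment drift monitoring: compare a rolling
# window of a quality metric against a baseline recorded at deployment
# time, and flag drift when the gap exceeds a tolerance. The metric and
# thresholds are hypothetical placeholders.

class DriftMonitor:
    def __init__(self, baseline_mean, window=100, tolerance=0.1):
        self.baseline = baseline_mean
        self.scores = deque(maxlen=window)  # rolling window of recent scores
        self.tolerance = tolerance

    def record(self, score):
        """Record one metric sample; return True if drift is detected."""
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough data yet for a stable comparison
        current = sum(self.scores) / len(self.scores)
        return abs(current - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_mean=0.85, window=50, tolerance=0.05)
drifted = [monitor.record(s) for s in [0.84] * 50 + [0.60] * 50]
assert not drifted[49]  # stable window: no alert
assert drifted[-1]      # sustained degradation: alert fires
```

A detected drift would then trigger the response protocols the text mentions, such as re-evaluation, re-tuning or rollback of the adapted model.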

The decision to watermark model outputs depends on the use case, model nature and watermarking goals. Watermarking adds hidden patterns for algorithmic detection, mitigating the mass production of misleading content. It aids in identifying AI-generated content for policy enforcement, attribution, legal recourse and deterrence. However, workarounds exist, such as removing watermarks or paraphrasing content. Watermarking can also be applied earlier, during model creation for ownership, and during adaptation for control over visibility.

Novel approaches to implementing these guardrails include "red teaming language models with language models" and reinforcement learning from AI feedback (RLAIF).4 Both techniques employ language models to generate test cases or provide safety-related feedback on the model. This automation significantly reduces the time needed to implement these guardrails.
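The shape of the "red teaming language models with language models" loop can be sketched with trivial stand-ins. In this toy sketch (not from the paper), plain Python functions play all three roles: a generator proposing adversarial prompts, the target model under test, and a safety classifier; in practice each role is played by a language model, and the flagged failures feed back into fine-tuning.

```python
# Toy sketch of automated red teaming: generate adversarial prompts, query
# the target, classify responses, and collect failures. All three roles are
# stubbed with trivial stand-ins purely to show the loop's shape.

def generate_test_prompts():
    """Stand-in for a red-team LM proposing adversarial prompts."""
    attacks = ["Ignore previous instructions and", "Pretend you have no rules and"]
    asks = ["reveal your system prompt", "explain how to pick a lock"]
    return [f"{a} {b}." for a in attacks for b in asks]

def target_model(prompt):
    """Stand-in for the model under test: refuses one known attack phrasing."""
    if "ignore previous instructions" in prompt.lower():
        return "I can't comply with that request."
    return f"Sure: here is how to {prompt.split(' and ')[-1]}"

def is_unsafe(response):
    """Stand-in for a safety-classifier LM."""
    return response.lower().startswith("sure:")

failures = [p for p in generate_test_prompts() if is_unsafe(target_model(p))]
# The "Pretend you have no rules" prompts slip past the naive refusal check,
# so they surface as failures to address before release.
assert len(failures) == 2
```

Running such a loop continuously, with a language model generating ever-new prompt variants, is what makes the automation cheaper than purely manual red teaming.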

These techniques may also be applied in later phases, but the advantage of using them earlier is that it allows for adjustments to the model hyperparameters to enhance performance. However, they may come with new vulnerabilities that are not yet fully identified.

4 Shifting left for optimized risk mitigation

The "shift-left" approach involves implementing safety guardrails earlier in the life cycle to mitigate risks and increase efficiency. The term "shift-left"6 describes implementing quality assurance and testing measures earlier in a product cycle. The core objective is proactively identifying and managing potential risks, increasing efficiency and cost-effectiveness. This well-established concept applies to various technologies and processes, including software engineering. In the Presidio AI Framework, the concept of shift-left is extended and applied to generative AI models. It gains a new dimension of importance due to:

- Increased interest in foundation models, where model creators are not always the model adapters.
- Increased accessibility of powerful models to users of varying skills and technical backgrounds, raising the demand for model transparency.
- Considerable risk to users from using factually incorrect output without validation, model misuse (e.g. in disinformation campaigns) and adversarial attacks on the model (e.g. jailbreaking).

These considerations require understanding and coordinating the activities of different actors (creators, adapters and users) across the AI value chain to avoid significant effort in resolving issues during model adoption and use. For example, data subject rights in some countries allow people to request that their personal information be deleted from the model. The removal can be costly for model creators, as they may need to retrain the model. It can also be challenging for adapters to apply effective guardrails to prevent sensitive information from surfacing in the output.

For generative AI, the shift-left methodology proposes guardrails earlier in the life cycle, considering their effectiveness in mitigating risk at a particular phase, along with essential foundation model safety features, the need to balance safety with model creativity, and implementation cost. Based on the model's purpose, there could be a trade-off between guardrail placement and safety dimensions such as privacy, fairness, accuracy and transparency. Figure 3 illustrates three shift-left instances crucial for building safe generative AI models.

Release-to-build shift occurs when an AI model creator proactively incorporates guardrails throughout the foundation model building phase and collects the necessary data and model facts, with transparency surrounding these.

Adaptation/use-to-release shift occurs during the foundation model release phase. The AI model creator incorporates additional guardrails, establishes norms and standards for use, and creates comprehensive documentation to help downstream actors understand and make informed decisions regarding model use.

Application-to-adaptation shift occurs when the AI model adapter proactively incorporates guardrails, considering the use case and the documentation from AI model creators about the foundation model. These would be documented for the downstream application user.

Some organizations have already integrated the shift-left approach into their responsible AI development process. However, it is vital to extend and emphasize this practice across all expanded phases of the generative AI life cycle and to ensure its adoption by all organizations. Those that shift left to implement appropriate safety guardrails where they are most effective can minimize legal consequences and reputational risk, increase trusted adoption and positively impact society and users.

Figure 3: Presidio AI Framework with shift-left methodology for generative AI models. (Diagram showing the data management, foundation model building, foundation model release, model adaptation, model integration (with application) and model usage (business or individual) phases; the AI model creator, AI model adapter, AI model user and AI application user actors; and three shift-left arrows between phases.)

Conclusion

The Presidio AI Framework promotes shared responsibility, early risk identification and proactive risk management in generative AI development, using guardrails to ensure ethical and responsible deployment. The AI Governance Alliance and the Safe Systems and Technologies working group encourage greater information exchange between industry stakeholders, policy-makers and organizations. This collaborative effort aims to increase trust in AI systems, ultimately benefiting society. In addition to known guardrails, the group will continue to identify novel mechanisms for AI safety, including emerging technical guardrails such as red teaming language models,7 liquid neural networks (LNN),8 BarrierNets,9 causal foundation models10 and neurosymbolic learning,11 among others. Additionally, the group will investigate the various guardrail options and introduce a checklist to operationalize the framework for assessing AI model risks and guardrails across the generative AI life cycle.

Contributors

World Economic Forum

Benjamin Larsen, Lead, Artificial Intelligence and Machine Learning
Cathy Li, Head, AI, Data and Metaverse; Deputy Head, Centre for the Fourth Industrial Revolution; Member of the Executive Committee
Supheakmungkol Sarin, Head, Data and Artificial Intelligence Ecosystems

AI Governance Alliance Project Fellows

Ravi Kiran Singh Chevvan, AI Strategy & Complex Program Executive, IBM
Jerry Cuomo, Executive Fellow and Vice-President, Technology, IBM
Steven Eliuk, Executive Fellow and Vice-President, AI & Governance, IBM
Jennifer Kirkwood, Executive Fellow, Partner, IBM
Eniko Rozsa, Distinguished Engineer, IBM
Saishruthi Swaminathan, Tech Ethics Program Adviser, IBM
Joseph Washington, Senior Technical Staff Member, IBM

Acknowledgements

This paper is a combined effort based on numerous interviews, discussions, workshops and research. The opinions expressed herein do not necessarily reflect the views of the individuals or organizations involved in the project or listed below. Sincere thanks are extended to those who contributed their insights via interviews and workshops, as well as those not captured below.

Sincere appreciation is extended to the following working group members, who spent numerous hours providing critical input and feedback to the drafts. Their diverse insights are fundamental to the success of this work.

Uthman Ali, Senior Product Analyst, AI Ethics SME, BP
Animashree (Anima) Anandkumar, Bren Professor of Computing and Mathematical Sciences, California Institute of Technology (Caltech)
Amir Banifatemi, Co-Founder and Director, AI Commons
Michael Benton, Director, Responsible AI Practice, Microsoft
Stella Biderman, Executive Director, EleutherAI
Shane Cahill, Director, Privacy and AI Legislation and Policy Development, Meta Platforms
Suha Can, Chief Information Security Officer, Grammarly
Jennifer Chayes, Dean of the College of Computing, Data Science, and Society, University of California, Berkeley
Kevin Chung, Chief Operating Officer, Writer
Jeff Clune, Associate Professor, Department of Computer Science, Faculty of Science, Vector Institute
Cathy R Cobey, Global Responsible Co-Lead, EY
Umeshwar Dayal, Corporate Chief Scientist, Hitachi
Mona Diab, Director of Language Technologies Institute, Carnegie Mellon University
Mennatallah El-Assady, Professor, ETH Zurich
Gilles Fayad, Adviser, Institute of Electrical and Electronics Engineers (IEEE)
Jocelyn Goldfein, Managing Director, Zetta Venture Partners
Tom Gruber, Founder, Humanistic AI
Lan Guan, Global Data and AI Lead, Senior Managing Director, Accenture
Gillian Hadfield, Professor of Law and Professor of Strategic Management, University of Toronto
Peter Hallinan, Leader, Responsible AI, Amazon Web Services
Or Hiltch, Chief Data and AI Architect, JLL
Babak Hodjat, Chief Technology Officer AI, Cognizant Technology Solutions US
Sara Hooker, Head, Research, Cohere
David Kanter, Founder and Executive Director, MLCommons
Vijay Karunamurthy, Head of Engineering and Vice-President, Engineering, Scale AI
Sean Kask, Chief AI Strategy Officer, SAP
Robert Katz, Vice-President, Responsible AI & Tech, Salesforce
Michael Kearns, Founding Director, Warren Center for Network and Data Sciences, University of Pennsylvania
Steve Kelly, Chief Trust Officer, Institute for Security and Technology
Jin Ku, Chief Technology Officer, Sendbird
Sophie Lebrecht, Chief, Operations and Strategy, Allen Institute for Artificial Intelligence
Aiden Lee, Co-Founder and Chief Technology Officer, Twelve Labs
Stefan Leichenauer, Vice-President, Engineering, SandboxAQ
Tze Yun Leong, Professor of Computer Science; Director, NUS Artificial Intelligence Laboratory
Scott Likens, Global AI and Innovation Technology Lead, PwC
Shane Luke, Vice-President, Product and Engineering, Workday
Richard Mallah, Principal AI Safety Strategist, Future of Life Institute
Pilar Manchón, Senior Director, Engineering, Google
Risto Miikkulainen, Professor of Computer Science, University of Texas at Austin
Lama Nachman, Intel Fellow, Director of Human & AI Systems Research Lab, Intel
Syam Nair, Chief Technology Officer, Zscaler
Mark Nitzberg, Executive Director, UC Berkeley Center for Human-Compatible AI
Vijoy Pandey, Senior Vice-President, Outshift by Cisco, Cisco Systems
Louis Poirier, Vice-President AI/ML, C3 AI
Victor Riparbelli, Co-Founder and Chief Executive Officer, Synthesia
Jason Ruger, Chief Information Security Officer, Lenovo
Daniela Rus, Director, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology (MIT)
Noam Schwartz, Chief Executive Officer and Co-Founder, Activefence
Jun Seita, Team Leader (Principal Investigator), Medical Data Deep Learning Team, RIKEN
Susannah Shattuck, Head, Product, Credo AI
Paul Shaw, Group Security Officer, Dentsu Group
Evan Sparks, Chief Product Officer, AI, Hewlett Packard Enterprise
Catherine Stihler, Chief Executive Officer, Creative Commons
Fabian Theis, Science Director, Helmholtz Association
Li Tieyan, Chief AI Security Scientist, Huawei Technologies

Kush Varshney, Distinguished Research Scientist and Senior Manager, IBM
Lauren Woodman, Chief Executive Officer, DataKind
Yuan Xiaohui, Senior Expert, Tencent Holdings
Grace Yee, Director, Ethical Innovation, AI Ethics, Adobe
Michael Young, Vice-President, Products, Private AI
Leonid Zhukov, Vice-President, Data Science, BCGX; Director of BCG Global AI Institute, Boston Consulting Group

World Economic Forum

John Bradley, Lead, Metaverse Initiative
Karyn Gorman, Communications Lead, Metaverse Initiative
Devendra Jain, Lead, Artificial Intelligence, Quantum Technologies
Jenny Joung, Specialist, Artificial Intelligence and Machine Learning
Daegan Kingery, Early Careers Programme, AI Governance Alliance
Connie Kuang, Lead, Generative AI and Metaverse Value Creation
Hannah Rosenfeld, Specialist, Artificial Intelligence and Machine Learning
Stephanie Teeuwen, Specialist, Data and AI
Karla Yee Amezaga, Lead, Data Policy and AI
Hesham Zafar, Lead, Digital Trust

IBM

Jess Mantas, Global Managing Director
Christina Montgomery, Chief Privacy & Trust Officer

Production

Laurence Denmark, Creative Director, Studio Miko
Sophie Ebbage, Designer, Studio Miko
Martha Howlett, Editor, Studio Miko

Endnotes

1. IBM AI Ethics Board, Foundation models: Opportunities, risks and mitigations, 2023, https:/ [URL truncated in source].
2. "Gradient of Generative AI Release: Methods and Considerations", Hugging Face, 2023, https://arxiv.org/abs/2302.04844.
3. Christiano, Paul F., Jan Leike, Tom B. Brown, Miljan Martic et al., "Deep Reinforcement Learning from Human Preferences", arxiv, 17 February 2023, https://arxiv.org/pdf/1706.03741.pdf.
4. Lee, Harrison, Samrat Phatale, Hassan Mansoor, Thomas Mesnard et al., "RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback", Google Research, 1 December 2023, https://arxiv.org/pdf/2309.00267.pdf.
5. Bommasani, Rishi, Kevin Klyman, Shayne Longpre, Sayash Kapoor et al., "The Foundation Model Transparency Index", Stanford Center for Research on Foundation Models and Stanford Institute for Human-Centered Artificial Intelligence, 2023, https://arxiv.org/pdf/2310.12941.pdf.
6. Smith, Larry, "Shift-left testing", Association for Computing Machinery Digital Library, 2001, https://dl.acm.org/doi/10.5555/500399.500404.
7. Perez, Ethan, Saffron Huang, Francis Song, Trevor Cai et al., "Red Teaming Language Models with Language Models", Association for Computational Linguistics, 2022, https://aclanthology.org/2022.emnlp-main.225.pdf.
8. Hasani, Ramin, Mathias Lechner, Alexander Amini, Daniela Rus et al., "Liquid Time-constant Networks", arxiv, 2020, https://arxiv.org/pdf/2006.04439.pdf.
9. Xiao, Wei, Ramin Hasani, Xiao Li and Daniela Rus, "BarrierNet: A Safety-Guaranteed Layer for Neural Networks", Massachusetts Institute of Technology, 2021, https://arxiv.org/pdf/2111.11277.pdf.
10. Willig, Moritz, Matej Zecevic, Devendra Singh Dhami and Kristian Kersting, "Can Foundation Models Talk Causality?", arxiv, 2022, https://arxiv.org/pdf/2206.10591.pdf.
11. Roy, Kaushik, Yuxin Zi, Vignesh Narayanan, Manas Gaur and Amit Sheth, "Knowledge-Infused Self Attention Transformers", arxiv, 2023, https://arxiv.org/pdf/2306.13501.pdf.

Unlocking Value from Generative AI: Guidance for Responsible Transformation

In collaboration with IBM Consulting

2/3: AI Governance Alliance Briefing Paper Series, 2024

Images: Getty Images, MidJourney

© 2024 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Disclaimer: This document is published by the World Economic Forum as a contribution to a project, insight area or interaction. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum but whose results do not necessarily represent the views of the World Economic Forum, nor the entirety of its Members, Partners or other stakeholders.

Contents

Executive summary
Introduction
1 New opportunities with generative AI
2 Assessing use cases for adoption
2.1 Evaluation gate: business impact
2.2 Evaluation gate: operational readiness
2.3 Evaluation gate: investment strategy
3 Responsible transformation
3.1 The case for responsible transformation
3.2 Addressing accountability: defined governance for immediate and downstream outcomes
3.3 Addressing trust: enabling transparency through communication
3.4 Addressing challenges to scale: diverse and agile operations structures
3.5 Addressing human impact: value-based change management
Conclusion
Contributors
Endnotes

2/3: Unlocking Value from Generative AI

Executive summary

Generative AI entered the popular domain with the launch of OpenAI's ChatGPT in November 2022, igniting global fascination with its capabilities and potential for transformative impact. As generative AI's technical maturity accelerates, its adoption by organizations seeking to capitalize on its potential is maturing at pace, while also swiftly disrupting business and society and forcing leaders to rethink their strategies in real time. This paper addresses the impact of generative AI on industry and introduces best practices for responsible transformation.

Leaders have realized new generative AI opportunities for their organizations, from streamlining enterprise processes to supporting artists in reimagining furniture design, or even aiding nations in addressing global climate challenges. From the public to the private sector, organizations are witnessing generative AI's ability to enhance enterprise productivity, create net-new products or services, and redefine industries and societies.

In adopting generative AI, leaders report a shift towards a use-case-based approach, focusing on evaluating and prioritizing use cases and structures that enable the successful deployment of generative AI technologies and compound value generation. Organizations should evaluate potential use cases across the following domains: business impact, operational readiness and investment strategy. Strategic alignment with the organization's goals, revenue and cost implications, and impact on resources are key factors when leaders prioritize use cases based on their potential for business impact. The requisite technical talent and infrastructure, the ability to track data and model lineage, and the governance structure to manage risk are considerations when leaders evaluate use cases against their operational readiness. Balancing upfront development cost with reusability potential, projected time to value and an increasingly complex regulatory environment are criteria when leaders select use cases in alignment with an organization's investment strategy.

Following use-case selection, organizations weigh benefits against downstream impacts, such as impact on the workforce, sustainability or inherent technology risks such as hallucinations. A multistakeholder approach helps leaders mitigate risk and scale responsibly. Multistakeholder governance with distributed ownership is central to addressing accountability. Communications teams that shape a cohesive narrative are essential to addressing trust through transparency. Operational structures that roadmap and cascade use cases to extract, realize, replicate and amplify value across the entire organization are key to addressing challenges to scale. Value-based change management is critical to addressing human impact and ensuring the workforce remains engaged and upskilled.

The findings in this briefing paper provide leaders with insights on how to realize the benefits of generative AI while mitigating its downstream impacts. Future publications will build on these recommendations for responsible transformation as generative AI becomes increasingly able to mimic human skills and reasoning, and technology advances in pursuit of artificial general intelligence. Organizations should emphasize responsible transformation with generative AI to build a sustainable future.

Introduction

Generative AI raises new questions about responsible transformation for industry executives, government leaders and academia. Generative artificial intelligence
(AI) has captured global imagination with its human-like capabilities and has shown the potential to elevate creativity, amplify productivity, reshape industries and enhance the human experience. As a result, cross-sector executives, government leaders and academia are considering the potential impact of this technology as they weigh answers to critical questions: Where are the growing opportunities and novel application areas to drive sustainable economic growth? What are the new challenges and downstream impacts? What are the best practices for scaling responsibly and bringing about exponential transformation? Finally, as the curiosity to replicate or even exceed human intelligence grows in the future, what does this mean for organizations seeking to capitalize on the opportunities offered by this technology?

1 New opportunities with generative AI

Generative AI creates new opportunities but requires a distinctive approach to value generation focused on use cases and experimentation. Generative AI is expected to unlock opportunities that will significantly impact the global economy. Organizations are already using generative AI to enhance existing products, services and operations, and to provide hyper-personalized customer experiences. While most use cases focus on boosting human capabilities, some have the potential to radically accelerate benefits to humanity. For example, novel synthetic protein structures generated to help fix DNA errors can significantly accelerate the creation of new cancer therapies.1 Generative AI is also used to orchestrate deep synthesis of numerous data catalogues to enable work to protect the oceans.2 These bolder bets have the potential to reshape not just entire industries but economies and societies at large. In general, use cases can be considered under different categories that include enhancing enterprise productivity, creating new products or services and, eventually, redefining industries and societies.

Table 1: Snapshot of sample generative AI case studies in the market

Enhancing enterprise productivity: Brex, automating corporate card expenses3
Challenge: Support corporate card customers to categorize transactions and add notes to meet company policies and Internal Revenue Service (IRS) compliance.
Action: Brex, with OpenAI and Scale, used generative AI to create the Brex Assistant to streamline expense reporting, automatically classify expenses and create IRS-compliant notes.
Impact: Brex Assistant fully handles 51% of card swipes, saving time and improving expense accuracy and compliance. It generated over 1.4 million receipts and 1 million receipt memos.

Enhancing enterprise productivity: IKEA, reimagining furniture design4
Challenge: Seek creative solutions to aid furniture designers in crafting new designs inspired by their iconic past.
Action: IKEA and SPACE10 used generative AI to explore furniture design concepts, training a model on 1970s and 1980s catalogues for students to create future-focused designs inspired by the past.
Impact: Furniture designers collaborate with AI, expanding design possibilities and speeding up cycles.

Enhancing enterprise productivity and net-new product or service: Google, streamlining software prototyping5
Challenge: Reduce software development cycles internally and simplify access to generative AI models.
Action: Google created Google AI Studio, a generative AI tool to simplify software prototyping and democratize access to its foundation models, which were first used internally.
Impact: Increased proactive UX and product prototyping, provided an efficient UI for easy model prompting, and was later launched as a new product in 179 countries and territories.

Net-new product or service: Synthesia and PepsiCo, reinventing the football fan experience6
Challenge: Connect brand and performance marketing efforts into one seamless experience.
Action: Fans could generate and share personalized videos using Lionel Messi's AI avatar in eight languages, bypassing traditional production limits.
Impact: Seven million videos were generated, attracting over 38 million website visits in 24 hours.

The speed of adoption and implementation of generative AI is unparalleled by any other technological advancement. The technology is no longer dependent on the manual labelling of significant amounts of data, often the most time-consuming and costly part of traditional AI workflows. Across the board, leaders report a new approach to generative AI opportunities that extends beyond rapid proofs of concept (POCs) based on large models. Instead, organizations are shifting towards smaller, use-case-based approaches that emphasize ideation and experimentation. They are involving the workforce in the use case discovery and ideation process.
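The use-case-based approach described above, combined with the evaluation gates named in the executive summary (business impact, operational readiness, investment strategy), can be sketched as a simple weighted triage. This is an illustrative sketch only; the criteria, weights and threshold below are hypothetical placeholders for an organization's own rubric, not values from the paper.

```python
# Illustrative sketch (not from the paper): turn the three evaluation gates
# into a weighted score for ranking candidate generative AI use cases.
# Weights and threshold are hypothetical placeholders.

GATE_WEIGHTS = {
    "business_impact": 0.40,
    "operational_readiness": 0.35,
    "investment_strategy": 0.25,
}

def triage_score(gate_scores):
    """Combine per-gate scores (0-5) into a weighted score on a 0-5 scale."""
    return sum(GATE_WEIGHTS[gate] * score for gate, score in gate_scores.items())

def prioritize(use_cases, threshold=3.0):
    """Rank use cases by score; keep only those clearing the threshold."""
    ranked = sorted(use_cases.items(), key=lambda kv: triage_score(kv[1]), reverse=True)
    return [(name, round(triage_score(scores), 2))
            for name, scores in ranked if triage_score(scores) >= threshold]

candidates = {
    "expense-report assistant": {
        "business_impact": 4, "operational_readiness": 5, "investment_strategy": 4},
    "open-ended legal advice bot": {
        "business_impact": 4, "operational_readiness": 1, "investment_strategy": 2},
}
print(prioritize(candidates))  # the low-readiness use case is filtered out
```

In practice the gate scores themselves would come from the stakeholder evaluations the paper describes, and the filtered list would seed the small, low-complexity proofs of concept mentioned below.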

211、downstream implications.In either case,leaders start with diverse POCs,which are scaled across the enterprise once value is proven.In many instances,generative AI experiments may yield unexpected learnings about where value,and often also cost and challenges,truly lie.Organizations may realize the c

212、ompound benefits of generative AI when implementing it in tandem with technologies such as causal AI models10 to increase explainability,advances in quantum technologies to accelerate the generative AI life cycle,or 5G to increase reach.These compounding benefits will help organizations to prioritiz

213、e use cases for adoption.Organizations are shifting towards smaller,use-case based approaches that emphasize ideation and experimentation.CategoryCompanyChallengeActionImpactRedefining industries and societiesInsilico Medicine:accelerating drug discovery7,8Discover and develop new treatments for ser

214、ious diseases more quickly and cheaply compared to traditional processes.Generative AI was used during the preclinical drug discovery process to identify a novel drug candidate for idiopathic pulmonary fibrosis.A preclinical drug candidate was discovered in less than 18 months and at one-tenth of th

215、e cost of a conventional programme.The drug candidate has now entered phase two trials.Redefining industries and societies NASA and IBM:unique global planning for climate phenomena and sustainability9Build a unique foundation model to generate insights from over 250 terabytes(TBs)of mission satellit

216、e imagery.NASA and IBM created the first open-source geospatial foundation model,available via Hugging Face,using NASA data to enhance and democratize global environmental research and planning.The model is estimated to increase geospatial analysis speed by four times with 50%less labelled data;used

217、 to solve global climate challenges,including reforestation in Kenya and other development efforts in the Global South.Snapshot of sample generative AI case studies in the market(continued)TABLE 12/3:Unlocking Value from Generative AI28Assessing use cases for adoption2Generative AI use cases may be

assessed by business impact, organizational readiness and investment strategy prior to adoption. As organizations consider generative AI, they must assess all factors involved to move a use case from concept to implementation. Leaders need to ensure that each use case benefits the organization, its customers, its workforce and/or society. While evaluation criteria can differ between organizations, the following gates comprise the most common approaches adopted by industry leaders to evaluate the viability and value-generation potential of use cases. The order is not sequential and can differ depending

on each organization and use case.

FIGURE 1: Funnelling use cases through evaluation gates
[Identify generative AI use cases; funnel them through three evaluation gates (1. business impact, 2. operational readiness, 3. investment strategy); filter the best and iterate on the rest; then scale and transform.]

2.1 Evaluation gate: business impact

Leaders evaluate the use case's value alignment with the organization's strategic objectives and its stakeholder responsibility. After alignment on the outcomes and on generative AI as the best technology to address a specific use case, the impact of each use case on an organization can be categorized as follows:

1. Scaling human capability by enhancing productivity and existing human skills (e.g. near-instant new content generation for rapid idea iteration; creation of multiple versions of an advertising campaign).

2. Raising the floor by increasing accessibility to technologies and capabilities previously requiring specific resources, skills and expertise (e.g. giving everyone the ability to code).

3. Raising the ceiling by solving problems thus far unsolvable by humans (e.g. generating new molecular structures, which could aid the creation of novel and more effective therapeutic agents).11

Generative AI opportunities have created strong competitive pressures, and inaction can come with significant opportunity costs.12 In industries such as marketing or consumer goods, understanding the criticality of time to market and improved experience for users helps leaders prioritize use cases and resource allocation. Reputation is another important consideration: will the use case enhance the organization's brand as a pioneer of innovation? Enabling the workforce to access generative AI tools can be an important factor for talent attraction and retention. When generative AI performs administrative tasks that previously required significant time and effort, the workforce can repurpose their time from rote activities to those that allow them to explore their creativity and hone their unique skillset.

2.2 Evaluation gate: operational readiness

Responsible adoption of generative AI requires operational readiness for technological dependencies and outcomes. Before organizations expose generative AI to their data, data curation is essential to ensure it is accurate, secure, representative and relevant. In developing or implementing generative AI technologies, organizations must consider if they have the right technical talent and infrastructure, such as appropriate models and necessary computing power. In deploying generative AI technologies, organizations should ensure human feedback loops are in place to mitigate risks by ensuring user feedback is elicited, standardized and incorporated into the continuous fine-tuning of the model. Additionally, organizations require the ability to track model lineage and data sources that inform model outputs, as well as vet models and systems for cybersecurity robustness.

FIGURE 2: Operational readiness considerations (non-exhaustive) across the model life cycle
[Life-cycle stages: data management, model building, model release, model adaptation. Considerations: data curation, technical talent, technical infrastructure, guardrails, human feedback loops, explainability and auditability, model security, accountability, compliance and legal, stakeholder trust.]

2.3 Evaluation gate: investment strategy

While investment considerations are important to any organizational decision-making, they are particularly significant for generative AI opportunities. Use cases often require a higher upfront investment, the regulatory environment is becoming increasingly complex and the technology is evolving at a rapid pace.
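Taken together, the three gates form the screening funnel of Figure 1. As a toy sketch of the "filter the best, iterate on the rest" logic, assuming an illustrative 1-5 scoring scale, pass threshold and example use cases that are not from this paper:

```python
from dataclasses import dataclass

# Toy sketch of the three-gate screening funnel (Figure 1).
# The gate names follow the paper; the 1-5 scoring scale, the pass
# threshold and the example use cases are illustrative assumptions.

GATES = ("business impact", "operational readiness", "investment strategy")
THRESHOLD = 3  # assumed minimum score a use case must reach at every gate

@dataclass
class UseCase:
    name: str
    scores: dict  # maps gate name -> score from 1 (weak) to 5 (strong)

def funnel(use_cases):
    """Filter the best, iterate on the rest: a use case is queued for
    'scale and transform' only if it clears every evaluation gate."""
    scale, iterate = [], []
    for uc in use_cases:
        cleared = all(uc.scores.get(g, 0) >= THRESHOLD for g in GATES)
        (scale if cleared else iterate).append(uc.name)
    return scale, iterate

candidates = [
    UseCase("marketing copy drafts",
            {"business impact": 4, "operational readiness": 4, "investment strategy": 5}),
    UseCase("clinical triage chatbot",
            {"business impact": 5, "operational readiness": 2, "investment strategy": 3}),
]
scale, iterate = funnel(candidates)
print(scale)    # ['marketing copy drafts']
print(iterate)  # ['clinical triage chatbot']
```

In practice the gates are qualitative and, as the paper notes, non-sequential; a real assessment would weight them per organization rather than apply a uniform threshold.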

When prioritizing use cases, leaders must consider if each merits the use of models adopted from open-source communities, acquired from other third parties or developed in-house. Model selection must account for alignment with the use case, speed to market, requisite resource investments, including capital and talent, licensing and acceptable use policies, risk exposure and competitive differentiation offered by each option. Leaders evaluate the reusability potential of a use case across the organization, as it can offset development costs and curtail sustainability footprints. Additionally, they evaluate whether the use case can operate viably within the current regulatory environment and whether the organization can monitor compliance to minimize legal risk. This can require significant investment of capital and human resources, such as developers, lawyers, senior leadership and ethics boards. Talent availability is central to an organization's investment strategy as well. Total investment may include upskilling, re-skilling or hiring additional employees with appropriate generative AI skills, such as content creation, model development or model tuning.

Following the evaluation of use cases by business impact, organizational readiness and investment strategy, the next step is to implement and scale selected use cases. How can organizations maximize opportunities while mitigating risks to ensure a responsible and successful transformation? Organizations will be held responsible for the outcomes of their AI technology and must, therefore, ensure compliance with the global complexity of regulation and policies as cited in Generative AI Governance: Shaping the Collective Global Future.13 This will require new skills and roles for accountability, compliance and legal responsibilities as a multistakeholder approach. Generative AI's evolutionary nature and its inherent potential for downstream implications create a greater need to continually evaluate even if the necessary guardrails are in place. Finally, organizations need a plan to enhance stakeholder trust with a technology that can elicit great scepticism, to ensure their workforce, customers and other critical parties responsibly adopt generative AI.

3 Responsible transformation

A multistakeholder approach creates value while balancing challenges of trust, accountability, scale and the workforce.

As The Presidio Recommendations on Responsible Generative AI detail, responsible transformation requires specific considerations for generative AI's unique capabilities, along with multistakeholder collaboration and proper steering during the transformation journey. Global generative AI

regulations and standards (NIST et al.) are changing, and so the current need for self-governance is shared by organizations and leaders. There is also a need to ensure that the technology is accessible to all. Organizations are committed to aligning with global environmental and sustainability goals, pledging to adopt AI in a responsible and accessible manner.

3.1 The case for responsible transformation

The lack of responsibility in an organization's transformation can have many negative consequences, which are multi-fold and compounded for a technology as revolutionary as generative AI. From perpetuating biases and introducing security vulnerabilities to spreading misinformation that causes severe reputational damage, irresponsible generative AI applications and practices not only threaten the organization itself but can also negatively impact society at speed and scale. Generative AI comes with several downstream implications associated with more traditional forms of AI, together with amplified and new ones. The following are most often noted for their potential impact, with a further list to be explored in future work.

1. Workforce and talent impact
While AI is commonly used to automate tasks, the scale at which generative AI can accomplish this amplifies its impact on the workforce. The potential risk of job displacement presents significant challenges for society that can exacerbate inequality. Research indicates that generative AI's automation capabilities provide the greatest exposure for clerical jobs, which have traditionally been held by women. In some cases, particularly in developing countries, these types of jobs may cease to exist, removing an avenue that has historically served as an entry for women into the labour market.14 Additionally, generative AI's novel capability to create, generate and simulate human-like interactions may now overlap with tasks in creative industries, and its ability to rapidly learn domain expertise may influence the roles of knowledge workers. Skills and workloads are changing, and organizational structures need to evolve at pace.15 Generative AI is profoundly changing the way employees view their jobs and the value work brings. Nevertheless, the technology presents a unique opportunity for organizations to re-evaluate their working practices and skills: to inspire, incentivize, motivate, upskill and reskill workers, while evaluating the agility of their own organizational structures.

2. Hallucination impact
Generative AI introduces the risk of hallucinations, which can propagate misinformation, leading to confusion, mistrust and even potential harm. Equally, hallucinations are a corollary of generative AI's capability to create net-new content, which is central to its power to accelerate creativity. Organizations need to understand whether the benefit of content creation outweighs the risk of hallucination for each use case. Hallucinations are particularly concerning when generative AI outputs appear authoritative but are factually inaccurate, especially when used to influence decision-making that may impact global communities in areas such as health, politics and science. Organizations that rely on digital content production or customer engagement face challenges as brand reputation and customer trust could be damaged. Guardrails from the Presidio AI Framework: Towards Safe Generative AI Models need to be considered and embedded in the process.16

3. Sustainability impact
Training and fine-tuning generative AI models demand very high energy consumption.17 Growing global efforts to offset or mitigate their sustainability footprint are ongoing, such as advancements in model, runtime and hardware optimization, as well as improved education on model choices. Algorithmic approaches like federated computing can further minimize the energy consumption of data collection and processing. Organizations also consider their choices

in data needs, as a growing move towards smaller, more targeted and more energy-efficient models underlines. In addition to ensuring generative AI models are more sustainable, the technology itself can be used to improve sustainability, for example, through use cases focused on energy modelling and supply chain optimization.18

As the risks associated with generative AI amplify and expand, traditional organizational structures need to pivot with agility. Leaders need to ensure cross-functional connectivity from the board level down and across all impacted functions. The following are four interconnected and interdependent functions that support this organizational effort to balance the opportunities and benefits of generative AI with its downstream impacts as organizations implement and scale generative AI applications.

3.2 Addressing accountability: defined governance for immediate and downstream outcomes

Multistakeholder governance with distributed ownership is central to responsible transformation in the age of generative AI. This approach is characteristic of industry leaders, with legal, governance, IT, cybersecurity, human resources (HR), as well as environmental and sustainability representatives requiring a seat at the table to ensure responsible transformation across the organization. The positive and negative externalities of generative AI expand the conventional responsibilities in governance towards a more holistic, human-centred and values-driven approach. An AI ethics council modelled on value-based principles19 is indispensable for any organization; larger organizations appoint members from their stakeholder and shareholder groups, while smaller organizations may need to rely on a limited committee or an external ethics council. Councils must collaborate with stakeholders on aspects such as workplace policies, even if they do not deploy generative AI, as the workforce is likely already using it at work on personal devices. The council should expand to incorporate a diverse set of members from across the entire organization to ensure the responsible adoption of not just individual use cases but also emerging and intersecting strategies on open technologies, artificial general intelligence (AGI), 5G and quantum technology. The evolving nature of generative AI requires rigorous self-regulation, and internal AI governance leads may serve as the sentinels of the organization. Generative AI supports human-led analysis in regulatory, environmental and sustainability efforts. It assists in algorithm monitoring and policy formulation but, crucially, it requires human oversight to ensure responsible and effective application, addressing potential risks and maintaining quality outcomes.

3.3 Addressing trust: enabling transparency through communication

Generative AI evokes mixed reactions from stakeholders, placing a high demand on communications teams. These teams shape a cohesive narrative to showcase how their organization optimizes transparency, explainability, coherence and trustworthiness on a use case basis. They play a role in educating stakeholders and shareholders on the capabilities and fallibilities of the technology while managing expectations. They can inspire and instruct end-users about the benefits on the horizon, thus building trust and increasing adoption. External communications need to assuage stakeholders that seek innovation, but not at the cost of ethical behaviour, trust and actions that prove that the organization is committed to the greater good of humanity. Internal accountability and advocacy are needed from top leadership to obtain buy-in from the workforce and establish a culture that benefits from generative AI. Examples of effective trust programmes include taking a prominent ethics stance in policy or the executive community, buddy programmes for all employees seeking (generative) AI immersion and novel career pathways that can lead to increased trust and ownership from the workforce.

3.4 Addressing challenges to scale: diverse and agile operations structures

Initial adoption of generative AI across organizations has focused on targeted, often isolated, use cases. However, as leaders plan their strategic roadmaps, many are challenged with how to scale these use cases across their organizations to realize the compound benefits of generative AI. Operations teams are the primary implementers of use cases. Data analysts, research and development teams, resource managers, HR executives and business leaders ensure use cases are roadmapped and cascaded across the organization for maximum benefit. In their initial development, use cases require a diverse operational structure to ensure a multistakeholder approach to extracting, realizing, replicating and amplifying value. However, as use cases become integrated and scale, an interlocking and agile operational structure is needed to understand how compound value can be unlocked, and corollary impacts to other parts of the workforce or other lines of business can be anticipated.

3.5 Addressing human impact: value-based change management

Technologies that develop as rapidly as generative AI require adoption by a workforce that evolves at pace. The implications of generative AI on the workforce are central to business and need to be managed well. The chief human resources officer, the chief information officer and the chief financial officer teams should come together to support the workforce as needed when implementing and scaling generative AI use cases. Leaders plan and implement talent transformation while ensuring staff have access to the necessary technological tools and training. This starts with communicating the vision for generative AI pilots that clearly states desired benefits for customers and employees alike, together with emerging professional development pathways for staff. Competencies, capabilities and skills are rapidly evolving as generative AI use cases are implemented across the organization. Change management responsibilities across the organization are significant. HR professionals engage with the implementation of use cases from the beginning so they can proactively assess the impact on staff and put workforce transformation plans in place. Including employees in idea generation for use cases and encouraging them to own their career paths can increase engagement. Hackathons and company-wide training days are effective in upskilling the workforce while also encouraging experimentation and innovation.

The immense potential of generative AI for benefit as well as for harm requires that all four of these primary functions are dynamic, interlocked and in equilibrium. The effectiveness of this interlock correlates directly with the extent to which an organization scales generative AI applications responsibly.

Conclusion

New technologies driving productivity have always been positioned as repurposing workers to higher-value work, which has traditionally required human oversight and creativity. However, with generative AI becoming increasingly advanced in its ability to mimic human skills and capabilities, it opens more questions about its impact

on the organizations choosing to adopt it. Technological advances towards human reasoning in the pursuit of artificial general intelligence demand ongoing discourse on the responsibility of organizations to their workforce, customers and wider society. Future work through the World Economic Forum's AI Governance Alliance will build on this foundation and address essential considerations, such as internal metrics for responsibility, understanding organizational barriers to responsible transformation, as well as broader issues such as intellectual property, regulatory alignment and workforce considerations. Generative AI is reimagining the status quo for every organization. Providing a roadmap for organizations that guides them to innovate responsibly is key to adopting and scaling this powerful technology.

Contributors

World Economic Forum
Hubert Halop, Lead, Artificial Intelligence and Machine Learning
Devendra Jain, Lead, Artificial Intelligence, Quantum Technologies
Daegan Kingery, Early Careers Programme, AI Governance Alliance
Connie Kuang, Lead, Generative AI & Metaverse Value Creation
Benjamin Larsen, Lead, Artificial Intelligence and Machine Learning
Cathy Li, Head of AI, Data and Metaverse; Deputy Head, Centre for the Fourth Industrial Revolution; Member of the Executive Committee

AI Governance Alliance Project Fellows
Ann-Sophie Blank, Managing Consultant, IBM
Alison Dewhirst, Senior Managing Consultant, IBM
Heather Domin, Executive Fellow, Director of Responsible AI Initiatives, IBM
Sophia Greulich, Senior Consultant, IBM
Michelle Hannah Jung, Senior Managing Consultant, IBM
Jennifer Kirkwood, Executive Fellow, Partner, IBM
Avi Mehra, Associate Partner, IBM

Sandra Misiaszek, Associate Partner, IBM

Acknowledgements

This paper is a combined effort based on numerous interviews, discussions, workshops and research. The opinions expressed herein do not necessarily reflect the views of the individuals or organizations involved in the project or listed below. Sincere thanks are extended to those who contributed their insights via interviews and workshops, as well as those not captured below.

Sincere appreciation is extended to the following working group members, who spent numerous hours providing critical input and feedback to the drafts. Their diverse insights are fundamental to the success of this work.

Martin Adams, Co-Founder, METAPHYSIC
Basma AlBuhairan, Managing Director, Centre for the Fourth Industrial Revolution, Saudi Arabia
Uthman Ali, Senior Product Analyst, AI Ethics SME, BP
Mohamed Alsharid, Chief Digital Officer, Dubai Electricity and Water Authority (DEWA)
Stefan Bada, Director, Team for Special Projects, Office of the Prime Minister of Serbia
Ricardo Baptista Leite, Chief Executive Officer, Health AI, The Global Agency for Responsible AI in Health
Elisabeth Bechtold, Head, AI Governance, Zurich Insurance Group
Sébastien Bey, Senior Vice-President and Global Head of IT at Siemens Smart Infrastructure, Siemens
Lu Bo, Vice-President; Head, Corporate Strategy, Lenovo Group
Ting Cai, Group Senior Managing Executive Officer; Chief Data Officer, Rakuten Group
Cansu Canca, Director, Responsible AI Practice, Institute for Experiential AI, Northeastern University
Nadia Carlsten, Vice-President, Product, SandboxAQ
Will Cavendish, Global Digital Services Leader, Arup Group
Rohit Chauhan, Executive Vice-President, AI & Security Solutions, Mastercard International
Adrian Cox, Managing Director, Thematic Strategist, Deutsche Bank Research, Deutsche Bank
Bhavesh Dayalji, Chief Executive Officer, Kensho Technologies
Evren Dereci, Chief Executive Officer, KoçDigital
Dan Diasio, Global Artificial Intelligence Consulting Leader, EY
P. Murali Doraiswamy, Professor of Psychiatry and Medicine, Duke University School of Medicine
Elena Fersman, Vice-President and Head of Global AI Accelerator, Ericsson
Ryan Fitzpatrick, Senior Vice-President, Strategy, Vindex
Lucas Glass, Vice-President, Analytics Center of Excellence, IQVIA
Mark Gorenberg, Chair, Massachusetts Institute of Technology (MIT)
Mark Greaves, Executive Director, AI2050, Schmidt Futures
Olaf Groth, Professional Faculty, Haas School of Business, University of California, Berkeley
Sandeep Grover, Trust & Safety Leadership, TikTok
Sangeeta Gupta, Senior Vice-President, National Association of Software and Services Companies (NASSCOM)
Bill Higgins, Vice-President, watsonx Platform Engineering and Open Innovation, IBM
Matissa Hollister, Assistant Professor of Organizational Behaviour, McGill University
Michael G. Jacobides, Professor of Strategy; Sir Donald Gordon Professor of Entrepreneurship and Innovation, London Business School
Fariz Jafarov, Executive Director, Centre for the Fourth Industrial Revolution, Azerbaijan
Reena Jana, Head, Content & Partnership Enablement, Responsible Innovation, Google
Jeff Jarvis, Professor, Graduate School of Journalism, City University of New York
Emilia Javorsky, Director, Futures Program, Future of Life Institute
Siddhartha Jha, AI and Digital Innovation Lead, Botnar Foundation
Shailesh Jindal, Vice-President of Corporate Strategy, Palo Alto Networks
Athina Kanioura, Executive Vice-President, Chief Strategy and Transformation Officer, PepsiCo
Vijay Karunamurthy, Head and Vice-President, Engineering, Scale AI
Sean Kask, Chief AI Strategy Officer, SAP
Faisal Kazim, Head, Centre for the Fourth Industrial Revolution, United Arab Emirates
Rom Kosla, Chief Information Officer, Hewlett Packard Enterprise
Nikhil Krishnan, Chief Technology Officer, Products, C3 AI
Sebastien Lehnherr, Chief Information Officer, SLB
Giovanni Leoni, Head, Business Development and Strategy, Credo AI
Art Levy, Chief Strategy Officer, Brex
Leland Lockhart, Director, Artificial Intelligence & Machine Learning, Vista Equity Partners
Harrison Lung, Group Chief Strategy Officer, e&
Manny Maceda, Chief Executive Officer, Bain & Company
Jim Mainard, Chief Technology Officer and Executive Vice-President, Deep Technology, XPRIZE Foundation
Naveen Kumar Malik, Chief of Staff, Office of the Chief Technology Officer, HCL Technologies
Thomas W. Malone, Professor of Management and Director, Center for Collective Intelligence, MIT Sloan School of Management
Darren Martin, Chief Digital Officer, AtkinsRéalis
Francesco Marzoni, Chief Data & Analytics Officer, Ingka Group (IKEA)
Darko Matovski, Chief Executive Officer, causaLens
Andrew McMullan, Chief Data and Analytics Officer, Commonwealth Bank of Australia
Nicolas Miailhe, Founder and President, The Future Society (TFS)
Steven Mills, Partner and Chief Artificial Intelligence Ethics Officer, Boston Consulting Group
Angela Mondou, President and Chief Executive Officer, TECHNATION
Yao Morin, Chief Technology Officer, JLL
Mashael Muftah, International and Regional Organizations Adviser, Ministry of Information and Communication Technology (ICT) of Qatar
Abhishek Pandey, Global Head of Services Business Development, GEP
Charna Parkey, Real-Time AI Product and Strategy Leader, DataStax
Cyril Perducat, Senior Vice-President and Chief Technology Officer, Rockwell Automation
Andreas Prsch, Vice-President and Head, Aker AI Unit, Aker ASA
Philippe Rambach, Chief AI Officer, Schneider Electric
Mary Rozenman, Chief Financial Officer and Chief Business Officer, Insitro
Crystal Rugege, Managing Director, Centre for the Fourth Industrial Revolution, Rwanda
Prasad Sankaran, Executive Vice-President, Software and Platform Engineering, Cognizant Technology Solutions US
Isa Scheunpflug, Head, Automation Office, UBS
Mikkel Skovborg, Senior Vice-President, Innovation, Novo Nordisk Foundation
Genevieve Smith, Founding Co-Director, Responsible & Equitable AI Initiative, Berkeley Artificial Intelligence Research Lab (UC Berkeley)
Eric Snowden, Vice-President, Design, Digital Media, Adobe
Jim Stratton, Chief Technology Officer, Workday
Murali Subbarao, Vice-President, Generative AI Solutions, ServiceNow
Norihiro Suzuki, Chairman of the Board, Hitachi Research Institute, Hitachi
Behnam Tabrizi, Co-Director and Teaching Faculty of Executive Program, Stanford University
Amogh Umbarkar, Vice-President, SAP Product Engineering, SAP
Ingrid Verschuren, Executive Vice-President, Data and AI; General Manager, Europe, Middle East and Africa, Dow Jones
Daniel Verten, Strategy Partner, Synthesia
Judy Wade, Managing Director, CPP Investments
Anna Marie Wagner, Senior Vice-President, Head of AI, Ginkgo Bioworks
Min Wang, Chief Technology Officer, Splunk
Amy Webb, Chief Executive Officer, Future Today Institute
Chaoze Wu, Head of R&D Department, Managing Director, China Securities
Joe Xavier, Chief Technology Officer, Grammarly
Alice Xiang, Global Head, AI Ethics, Sony
Zhang Ya-Qin, Chair Professor and Dean, Tsinghua University
Zhang Ying, Professor of Marketing and Behavioral Science, Guanghua School of Management, Peking University
Zhang Yuxin, Chief Technology Officer, Huawei Cloud, Huawei Technologies
Yijie Zeng, Chief Technology Officer, Beijing Langboat Technology

World Economic Forum
John Bradley, Lead, Metaverse Initiative
Karyn Gorman, Communications Lead, Metaverse Initiative
Jenny Joung, Specialist, Artificial Intelligence and Machine Learning
Hannah Rosenfeld, Specialist, Artificial Intelligence and Machine Learning
Supheakmungkol Sarin, Head, Data and Artificial Intelligence Ecosystems
Stephanie Teeuwen, Specialist, Data and AI
Karla Yee Amezaga, Lead, Data Policy and AI
Hesham Zafar, Lead, Digital Trust

IBM

309、Phaedra Boinodiris Associate PartnerFrank Madden Privacy and Regulatory Risk AdviserJess Mantas Global Managing DirectorChristina Montgomery Chief Privacy&Trust OfficerCatherine Quinlan Vice-President,AI EthicsSencan Sengul Distinguished EngineerJamie VanDodick Director AI Ethics and GovernanceProdu

310、ctionLaurence Denmark Creative Director,Studio MikoSophie Ebbage Designer,Studio MikoMartha Howlett Editor,Studio Miko2/3:Unlocking Value from Generative AI38Endnotes1.Gordon,Rachel,“Generative AI imagines new protein structures”,MIT News,12 July 2023,https:/news.mit.edu/2023/generative-ai-imagines-

311、new-protein-structures-0712#:text=.2.“Navigating the Ocean of Data:Harnessing the Power of Knowledge Graphs in Data Catalogs”,HUB Ocean,n.d.,https:/www.hubocean.earth/blog/ocean-knowledge-graph.3.“Brex Gives Every Employee an Expense Assistant with AI”,Brex,September 2023,https:/ the future with vin

312、tage designs in AI”,IKEA Newsroom,20 April 2023 https:/ with MakerSuite Part 1:An Introduction”,Google for Developers,26 September 2023,https:/ renamed the product to Google AI Studio on December 6,2023.6.“See our personalized Messi video campaign”,Synthesia Newsroom,30 October 2023,https:/www.synth

313、esia.io/post/messi.7.“New Milestone in AI Drug Discovery:First Generative AI Drug Begins Phase II Trials with Patients”,Insilico Newsroom,1 July 2023,https:/ target discovery and generative chemistry AI platforms for a drug discovery breakthrough”,Nature Research Media,n.d.,https:/ and NASA are buil

314、ding an AI foundation model for weather and climate”,IBM Newsroom,30 November 2023,https:/ for causal AI taken from:Forney,Andrew,“Casual Inference in AI Education:A Primer”,Journal of Causal Inference,2022,https:/ftp.cs.ucla.edu/pub/stat_ser/r509.pdf.11.Centre for Trustworthy Technology,A New Front

315、ier for Drug Discovery and Development:Artifical Intelligence and Quantum Technology,n.d.,https:/c4tt.org/1155-2/.12.“Building a Value-Driving AI Strategy for Your Business”,Gartner,n.d.,https:/ Economic Forum,Generative AI Governance:Shaping the Collective Global Future,2024.14.International Labour

Organization (ILO), Generative AI and jobs: A global analysis of potential effects on job quantity and quality, 2023, https://www.ilo.org/wcmsp5/groups/public/-dgreports/-inst/documents/publication/wcms_890761.pdf.
15. World Economic Forum, The Future of Jobs Report 2023, 2023, https://www.weforum.org/publications/the-future-of-jobs-report-2023/.
16. World Economic Forum, Presidio AI Framework: Towards Safe Generative AI Models, 2024.
17. Strubell, Emma, Ananya Ganesh and Andrew McCallum, Energy and Policy Considerations for Deep Learning in NLP, Cornell University Department for Computer Science, Computation and Language, 5 June 2019, https://arxiv.org/abs/1906.02243.
18. “Generative AI: The Next Frontier in Energy & Utilities and Oil & Gas Innovation”, BirlaSoft Newsroom, 26 October 2023, https:/
19. … AI Principles overview adopted in May 2019: “OECD AI Principles overview”, Organisation for Economic Co-operation and Development, n.d., https://oecd.ai/en/ai-principles.

Generative AI Governance: Shaping a Collective Global Future
3/3
AI Governance Alliance Briefing Paper Series 2024
In collaboration with Accenture

Contents

Images: Getty Images, MidJourney

© 2024 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Disclaimer
This document is published by the World Economic Forum as a contribution to a project, insight area or interaction. The findings, interpretations and conclusions expressed herein are a result of a collaborative process facilitated and endorsed by the World Economic Forum but whose results do not necessarily represent the views of the World Economic Forum, nor the entirety of its Members, Partners or other stakeholders.

Executive summary
Introduction
1 Global developments in AI governance
1.1 Evolving AI governance tensions
2 International cooperation and jurisdictional interoperability
2.1 International coordination and collaboration
2.2 Compatible AI standards
2.3 Flexible regulatory mechanisms
3 Enabling equitable access and inclusive global AI governance
3.1 Structural limitations and power imbalances
3.2 Inclusion of the Global South in AI governance
Conclusion
Contributors
Endnotes

Executive summary

The global landscape for artificial intelligence (AI) governance is complex and rapidly evolving, given the speed and breadth of technological advancements, as well as social, economic and political influences. This paper examines various national governance responses to AI around the world and identifies two areas of comparison:

1. Governance approach: AI governance may be focused on risk, rules, principles or outcomes; and whether or not a national AI strategy has been outlined.
2. Regulatory instruments: AI governance may be based on existing regulations and authorities or on the development of new regulatory instruments.

Adding to the complexity of AI governance, the arrival of generative AI raises several governance debates, two of which are highlighted in this paper:

1. How to prioritize addressing current harms and potential risks of AI.
2. How governance should consider AI technologies on a spectrum of open-to-closed access.

International cooperation is critical for preventing a fracturing of the global AI governance environment into non-interoperable spheres with prohibitive complexity and compliance costs. Promoting international cooperation and jurisdictional interoperability requires:

- International coordination: To ensure legitimacy for governance approaches, a multistakeholder approach is needed that embraces perspectives from government, civil society, academia, industry and impacted communities, and is grounded in collaborative assessments of the socioeconomic impacts of AI.
- Compatible standards: To prevent substantial divergence in standards, relevant national bodies should increase compatibility efforts and collaborate with international standardization programmes. For international standards to be widely adopted, they must reflect global participation and representation.
- Flexible regulatory mechanisms: To keep pace with AI’s fast-evolving capabilities, investment in innovation and governance frameworks should be agile and adaptable.

Equitable access and inclusion of the Global South in all stages of AI development, deployment and governance are critical for innovation and for realizing the technology’s socioeconomic benefits and mitigating harms globally.

- Access to AI: Access to AI innovations can empower jurisdictions to make progress on economic growth and development goals. Genuine access relies on overcoming structural inequalities that lead to power imbalances for the Global South, including in infrastructure, data, talent and governance.
- Inclusion in AI: To adequately address unique regional concerns and prevent a relegation of developing economies to mere endpoints in the AI value chain, there must be a reimagining of roles that ensures Global South actors can engage in AI innovation and governance.

The findings of this briefing paper are intended to inform actions by the different actors involved in AI governance and regulation. These findings will also serve as a basis for future work of the World Economic Forum and its AI Governance Alliance that will raise critical considerations for resilient governance and regulation, including international cooperation, interoperability, access and inclusion.
