上海品茶

微软:2024年AI透明度报告(英文版)(40页).pdf

编号:160708 PDF 40页 2.93MB 下载积分:VIP专享
下载报告请您先登录!

微软:2024年AI透明度报告(英文版)(40页).pdf

1、Responsible AI Transparency ReportHow we build,support our customers,and growMay 20242TABLE OF CONTENTS2RESPONSIBLE AI TRANSPARENCY REPORTContentsForeword34Key takeaways5How we build generative applications responsibly6Govern:Policies,practices,and processes9Map:Identifying risks10Measure:Assessing

2、risks and mitigations11Manage:Mitigating AI risks16How we make decisions about releasing generative applications17Deployment safety for generative AI applications20Sensitive Uses program in the age of generative AI22How we support our customers in building responsibly23AI Customer Commitments 24Tool

3、s to support responsible development25Transparency to support responsible development and use by our customers26How we learn,evolve,and grow27Governance of responsible AI at Microsoft:Growing our responsible AI community30Building safe and responsible frontier models through partnerships and stakeho

4、lder input32Using consensus-based safety frameworks33Supporting AI research initiatives34Investing in research to advance the state of the art in responsible AI 35Tuning in to global perspectives36Looking ahead37Sources and resourcesSOURCES AND RESOURCES3MICROSOFT TRANSPARENCY REPORT 20243RESPONSIBL

5、E AI TRANSPARENCY REPORT3FOREWORDRESPONSIBLE AI TRANSPARENCY REPORTForewordIn 2016,our Chairman and CEO,Satya Nadella,set us on a clear course to adopt a principled and human-centered approach to our investments in Artificial Intelligence(AI).Since then,we have been hard at work to build products th

6、at align with our values.As we design,build,and release AI products,six valuestransparency,accountability,fairness,inclusiveness,reliability and safety,and privacy and securityremain our foundation and guide our work every day.To advance our transparency practices,in July 2023,we committed to publis

7、hing an annual report on our responsible AI program,taking a step that reached beyond the White House Voluntary Commitments that we and other leading AI companies agreed to.This is our inaugural report delivering on that commitment,and we are pleased to publish it on the heels of our first year of b

8、ringing generative AI products and experiences to creators,non-profits,governments,and enterprises around the world.As a company at the forefront of AI research and technology,we are committed to sharing our practices with the public as they evolve.This report enables us to share our maturing practi

9、ces,reflect on what we have learned,chart our goals,hold ourselves accountable,and earn the publics trust.Weve been innovating in responsible AI for eight years,and as we evolve our program,we learn from our past to continually improve.We take very seriously our responsibility to not only secure our

10、 own knowledge but to also contribute to the growing corpus of public knowledge,to expand access to resources,and promote transparency in AI across the public,private,and non-profit sectors.In this inaugural annual report,we provide insight into how we build applications that use generative AI;make

11、decisions and oversee the deployment of those applications;support our customers as they build their own generative applications;and learn,evolve,and grow as a responsible AI community.First,we provide insights into our development process,exploring how we map,measure,and manage generative AI risks.

12、Next,we offer case studies to illustrate how we apply our policies and processes to generative AI releases.We also share details about how we empower our customers as they build their own AI applications responsibly.Lastly,we highlight how the growth of our responsible AI community,our efforts to de

13、mocratize the benefits of AI,and our work to facilitate AI research benefit society at large.There is no finish line for responsible AI.And while this report doesnt have all the answers,we are committed to sharing our learnings early and often and engaging in a robust dialogue around responsible AI

14、practices.We invite the public,private organizations,non-profits,and governing bodies to use this first transparency report to accelerate the incredible momentum in responsible AI were already seeing around the world.Brad Smith Vice Chair&PresidentNatasha Crampton Chief Responsible AI Officer4MICROS

15、OFT TRANSPARENCY REPORT 20244KEY TAKEAWAYSRESPONSIBLE AI TRANSPARENCY REPORTKey takeawaysIn this report,we share how we build generative applications responsibly,how we make decisions about releasing our generative applications,how we support our customers as they build their own AI applications,and

16、 how we learn and evolve our responsible AI program.These investments,internal and external,continue to move us toward our goaldeveloping and deploying safe,secure,and trustworthy AI applications that empower people.We created a new approach for governing generative AI releases,which builds on our R

17、esponsible AI Standard and the National Institute of Standards and Technologys AI Risk Management Framework.This approach requires teams to map,measure,and manage risks for generative applications throughout their development cycle.30Weve launched 30 responsible AI tools that include more than 100 f

18、eatures to support customers responsible AI development.33Weve published 33 Transparency Notes since 2019 to provide customers with detailed information about our platform services like Azure OpenAI Service.We continue to participate in and learn from a variety of multi-stakeholder engagements in th

19、e broader responsible AI ecosystem including the Frontier Model Forum,the Partnership on AI,MITRE,and the National Institute of Standards and Technology.We support AI research initiatives such as the National AI Research Resource and fund our own Accelerating Foundation Models Research and AI&Societ

20、y Fellows programs.Our 24 Microsoft Research AI&Society Fellows represent countries in North America,Eastern Africa,Australia,Asia,and Europe.16.6%In the second half of 2023,we grew our responsible AI community from 350 members to over 400 members,a 16.6 percent increase.99%Weve invested in mandator

21、y training for all employees to increase the adoption of responsible AI practices.As of December 31,2023,99 percent of employees completed the responsible AI module in our annual Standards of Business Conduct training.5RESPONSIBLE AI TRANSPARENCY REPORTSection 1.How we build generative applications

22、responsibly AI is poised to shape the future.Generative AIartificial intelligence models and applications capable of creating original content,including text,image,and audiohas accelerated this transformation.At Microsoft,we recognize our role in shaping this technology.We have released generative A

23、I technology with appropriate safeguards at a scale and pace that few others have matched.This has enabled us to experiment,learn,and hone cutting-edge best practices for developing generative AI technologies responsibly.As always,we are committed to sharing our learnings as quickly as possible,and

24、generative AI is no exception.1.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLY61.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLYRESPONSIBLE AI TRANSPARENCY REPORTIn 2023,we regularly published resources to share best practices for developing generative applications responsibly.These include an ov

25、erview of responsible AI practices for OpenAI models available through Azure OpenAI Service,1 a Transparency Note2 describing how to deploy Azure OpenAI models responsibly,examples relevant to generative AI in the HAX toolkit,3 best practices4 and a case study5 for red teaming large language model(L

26、LM)applications,and system messageor metaprompt guidance.6 In March 2024,we released additional tools our customers can use to develop generative applications more responsibly.This includes prompt shield to detect and block prompt injection attacks,7 safety evaluation in Azure AI Studio to evaluate

27、AI-generated outputs for content risks,8 and risks&safety monitoring in Azure OpenAI Service to detect misuse of generative applications.9In the following sections,we outline some of our recent innovations to map,measure,and manage risks associated with generative AI.First,we cover specific requirem

28、ents for generative applications,based on our Responsible AI Standard.Next,we discuss how AI red teaming plays an important role in mapping generative AI risks at the model and application layers.Then,we discuss the role of systematic measurement and how it provides metrics that inform decision maki

29、ng.2341Finally,we describe some of our approaches to managing generative AI risks.This includes using technology to reinforce trust in democratic processes and manage generative AIs impact on the information ecosystem by implementing provenance tools to label AI-generated content.Govern:Policies,pra

30、ctices,and processesPutting responsible AI into practice begins with our Responsible AI Standard.The Standard details how to integrate responsible AI into engineering teams,the AI development lifecycle,and tooling.In 2023,we used our Responsible AI Standard to formalize a set of generative AI requir

31、ements,which follow a responsible AI development cycle.Our generative AI requirements align with the core functions of the National Institute for Standards and Technology(NIST)AI Risk Management Frameworkgovern,map,measure,and managewith the aim of reducing generative AI risks and their associated h

32、arms.Govern,map,measure,manage:An iterative cycleIdentify and prioritize AI risks Align roles and responsibilities and establish requirements for safe,secure,and trustworthy AI deploymentMapManageManage or mitigateidentified risksSystematically measure prioritized risks to assess prevalence and the

33、effectiveness of mitigationsMeasureGovernPLATFORM +APPLICATIONS71.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLYRESPONSIBLE AI TRANSPARENCY REPORTGovernGovernance contextualizes the map,measure,and manage processes.Weve implemented policies and practices to encourage a culture of risk management a

34、cross the development cycle.Policies and principles:Our generative applications are designed to adhere to company policies,including our responsible AI,security,privacy,and data protection policies.We update these policies as needed,informed by regulatory developments and feedback from internal and

35、external stakeholders.Procedures for pre-trained models:For the use of pre-trained generative AI models,teams must review available information about the model,its capabilities,and its limitations,then map,measure,and manage relevant risks.Stakeholder coordination:Our policies,programs,and best prac

36、tices include input from a diverse group of internal and external stakeholders.Cross-functional teams work together to map,measure,and manage risks related to generative applications.Documentation:We provide transparency materials to customers and users that explain the capabilities and limitations

37、of generative applications,as well as guidelines to help them use generative applications responsibly.Pre-deployment reviews:We require teams to map,measure,and manage generative AI risks pre-deployment and throughout their development cycle.This includes identifying high-impact uses of generative A

38、I for additional review by experts within the company.MapMapping risks is a critical firstand iterativestep toward measuring and managing risks associated with AI,including generative AI.Mapping informs decisions about planning,mitigations,and the appropriateness of a generative application for a gi

39、ven context.Responsible AI Impact Assessments:The development of generative applications begins with an impact assessment as required by the Responsible AI Standard.The impact assessment identifies potential risks and their associated harms as well as mitigations to address them.Privacy and security

40、 reviews:Processes for identifying and analyzing privacy and security risks,like security threat modeling,inform a holistic understanding of risks and mitigations for generative applications.Red teaming:We conduct red teaming of generative AI models and applications to develop a deeper understanding

41、 of how the identified risks manifest and to identify previously unknown risks.MeasureWeve implemented procedures to measure AI risks and related impacts to inform how we manage these considerations when developing and using generative applications.Metrics for identified risks:We have established me

42、trics to measure identified risks for generative applications.Mitigations performance testing:We measure how effective mitigations are in addressing identified risks.ManageWe manage or mitigate identified risks at the platform and application levels.We also work to safeguard against previously unkno

43、wn risks by building ongoing performance monitoring,feedback channels,processes for incident response,and technical mechanisms for rolling applications back.Finally,we release and operate the application.Weve learned that a controlled release to a limited number of users,followed by additional phase

44、d releases,helps us map,measure,and manage risks that emerge during use.As a result,we can be confident the application is behaving in the intended way before a wider audience accesses it.User agency:We design our generative applications to promote user agency and responsible use,such as through use

45、r interfaces that encourage users to edit and verify AI-generated outputs.Transparency:We disclose the role of generative AI in interactions with users and label AI-generated visual content.Human review and oversight:We design generative applications so that users can review outputs prior to use.Add

46、itionally,we notify users that the AI-generated outputs may contain inaccuracies and that they should take steps to verify information generated by the tool.Managing content risks:We build generative applications to address potential content risks,such as by incorporating content filters and process

47、es to block problematic prompts and AI-generated outputs.Ongoing monitoring:Our teams also implement processes to monitor performance and collect user feedback to respond when our applications dont perform as expected.Defense in depth:We use an approach to risk management that puts controls at every

48、 layer of the process,including platform-and application-level mitigations.We map,measure,and manage generative AI risks throughout the development cycle to reduce the risk of harm.81.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLYRESPONSIBLE AI TRANSPARENCY REPORTBecause there is no finish line fo

49、r responsible AI,our framework is iterative.Teams repeat processes to govern,map,measure,and manage AI-related risks throughout the product development and deployment cycle.As we expand and evolve our responsible AI program,each new improvement builds on the foundation of the Responsible AI Standard

50、.For example,we recently updated our Security Development Lifecycle(SDL)to integrate Responsible AI Standard governance steps.We also enhanced internal guidance for our SDL threat modeling requirement,which integrates ongoing learnings about unique threats specific to AI and machine learning(ML).10

51、Incorporating responsible AI requirements into existing security guidance embodies our unified approach to developing and deploying AI responsibly.Threat modeling is key to mapping potential vulnerabilities,enabling us to measure and manage risks,and closely evaluate the impacts of mitigations.Evolv

52、ing our cybersecurity development cycle in the new age of AIWeve developed and deployed technology using our state-of-the-art cybersecurity practices for decades.In our efforts to develop robust and secure AI infrastructure,we build on our extensive cybersecurity experience and work closely with cyb

53、ersecurity teams from across the company.Our holistic approach is based on thorough governance to shield AI applications from potential cyberattacks across multiple vectors.Defense strategies include governance of AI security policies and practices;identification of potential risks in AI application

54、s,data,and supply chains;protection of applications and information;detection of AI threats;and response and recovery from discovered AI issues and vulnerabilities,including through rapid containment.We take valuable learnings from these strategies,customer feedback,and external researcher engagemen

55、t to continuously improve our AI security best practices.All Microsoft products are subject to Security Development Lifecycle(SDL)practices and requirements.11 Teams must execute threat modeling to map potential vulnerabilities,measure and manage risks,and closely evaluate the impacts of mitigations

56、.Central engineering teams and our Digital Security and Resilience team facilitate and monitor SDL implementation to verify compliance and secure our products.Additional layers of security include centralized and distributed security engineering,physical security,and threat intelligence.Security ope

57、rations teams drive implementation and enforcement.We synthesize and organize learnings about AI threats into security frameworks,such as:The Adversarial Machine Learning Threat Matrix,which we developed with MITRE and others.12Our Aether13 Security Engineering Guidance,which added AI-specific threa

58、t enumeration and mitigation guidance to existing SDL threat modeling practices.14Our AI bug bar,which provides a severity classification for vulnerabilities that commonly impact AI and ML applications.15 Further,we apply SDL protection,detection,and response requirements to AI technology.Specifical

59、ly for our products that leverage pre-trained models,model weights are encrypted-at-rest and encrypted-in-transit to mitigate the potential risk of model theft.We apply more stringent security controls for high-risk technology,such as for protecting highly capable models.For example,in our AI produc

60、t environments where highly capable proprietary AI models are deployed,we employ strong identity and access control.We also use holistic security monitoring(for both external and internal threats)with rapid incident response and continuous security validation(such as simulated attack path analysis).

61、91.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLYRESPONSIBLE AI TRANSPARENCY REPORTMap:Identifying risksAs part of our overall approach to responsible development and deployment,we identify AI risks through threat modeling,16 responsible AI impact assessments,17 customer feedback,incident response

62、 and learning programs,external research,and AI red teaming.Here,we discuss our evolving practice of AI red teaming.Red teaming,originally defined as simulating real-world attacks and exercising techniques that persistent threat actors might use,has long been a foundational security practice at Micr

63、osoft.18 In 2018,we established our AI Red Team.This group of interdisciplinary experts dedicated to thinking like attackers and probing AI applications for failures19 was the first dedicated AI red team in industry.20 Recently,we expanded our red teaming practices to map risks outside of traditiona

64、l security risks,including those associated with non-adversarial users and those associated with responsible AI,like the generation of stereotyping content.Today,the AI Red Team maps responsible AI and security risks at the model and application layers:Red teaming models.Red teaming the model helps

65、to identify how a model can be misused,scope its capabilities,and understand its limitations.These insights not only guide the development of platform-level evaluations and mitigations for use of the model in applications but can also be used to inform future versions of the model.Red teaming applic

66、ations.Application-level AI red teaming takes a system view,of which the base model is one part.This helps to identify failures beyond just the model,by including the application specific mitigations and safety system.Red teaming throughout AI product development can surface previously unknown risks

67、,confirm whether potential risks materialize in an application,and inform measurement and risk management.The practice also helps clarify the scope of an AI applications capabilities and limitations,identify potential for misuse,and surface areas to investigate further.For generative applications we

68、 characterize as high-risk,we implement processes to ensure consistent and holistic AI red teaming by experts independent from the product team developing the application.We are also building external red teaming capacity to enable third-party testing before releasing highly capable models,consisten

69、t with our White House Voluntary Commitments.21 Externally led red teaming for highly capable models will cover particularly sensitive capabilities,including those related to biosecurity and cybersecurity.In 2018We established the first dedicated AI red team in industry.101.HOW WE BUILD GENERATIVE A

70、PPLICATIONS RESPONSIBLYRESPONSIBLE AI TRANSPARENCY REPORTMeasure:Assessing risks and mitigationsAfter mapping risks,we use systematic measurement to evaluate application and mitigation performance against defined metrics.For example,we can measure the likelihood of our applications to generate ident

71、ified content risks,the prevalence of those risks,and the efficacy of our mitigations in preventing those risks.We regularly broaden our measurement capabilities.Some examples include:22Groundedness,to measure how well an applications generated answers align with information from input sources.Relev

72、ance,to measure how directly pertinent a generated answer is to input prompts.Similarity,to measure the equivalence between information from input sources and a sentence generated by an application.Content risks,multiple metrics through which we measure an applications likelihood to produce hateful

73、and unfair,violent,sexual,and self-harm related content.Jailbreak success rate,to measure an applications resiliency against direct and indirect prompt injection attacks that may lead to jailbreaks.We also share capabilities and tools that support measurement of responsible AI concepts and developme

74、nt of new metrics.We share some of these tools as open source on GitHub and with our customers via Azure AI,which includes Azure Machine Learning and Azure AI Studio.Azure AI Content Safetyuses advanced language and vision models to help detect content risks such as hateful,sexual,violent,or self-ha

75、rm related content.Safety evaluations in Azure AI StudioMany generative applications are built on top of large language models,which can make mistakes,generate content risks,or expose applications to other types of attacks.While risk management approaches such as safety system messages and content f

76、ilters are a great start,its also crucial to evaluate applications to understand if the mitigations are performing as intended.With Azure AI Studio safety evaluations,customers can evaluate the outputs of generative applications for content risks such as hateful,sexual,violent,or self-harm related c

77、ontent.Additionally,developers can evaluate their applications for security risks like jailbreaks.Since evaluations rely on a robust test dataset,Azure AI Studio can use prompt templates and an AI-assisted simulator to create adversarial AI-generated datasets to evaluate generative applications.This

78、 capacity harnesses learning and innovation from Microsoft Research,developed and honed to support the launch of our own first-party Copilots,and is now available to customers in Azure as part of our commitment to responsible innovation.111.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLYRESPONSIBLE

79、 AI TRANSPARENCY REPORTManage:Mitigating AI risksOnce risks have been mapped and measured,they need to be managed.We evaluate and improve our generative AI products across two layers of the technology to provide a defense in depth approach to mitigating risks.1Platform:Based on the products intended

80、 use,model-level mitigations can guide the application to avoid potential risks identified in the mapping phase.For example,teams can experiment with and fine-tune different versions of many generative AI models to see how potential risks surface differently in their intended use.This experimentatio

81、n allows teams to choose the right model for their application.In addition,platform-level safety measures such as content classifiers reduce risks by blocking potentially harmful user inputs and AI-generated content.For example,Azure AI Content Safety provides API-level filters for content risks.Har

82、mful user input or content risks generated by the AI model will be blocked when flagged by Azure AI Content Safety.2Application:A number of mitigations implemented in a specific application can also further manage risks.For example,grounding a models outputs with input data alongside safety system m

83、essages to limit the model within certain parameters helps the application align with our responsible AI Standard and user expectations.For example,a safety system message guides Microsoft Copilot in Bing to respond in a helpful tone and cite its sources.23 Additionally,user-centered design is an es

84、sential aspect of our approach to responsible AI.Communicating what the technology is and is not intended to do shows the applications potential,communicates its limitations,and helps prevent misuse.For example,we include in-product disclosures of AI-generated content in our Copilots,FAQs on respons

85、ible AI for our applications like GitHub Copilot,24 and Transparency Notes for our platform products such as Azure OpenAI Service.25As part of our commitment to build responsibly and help our customers do so as well,we integrate content filtering across Azure OpenAI Service.26 We regularly assess ou

86、r content filtering systems to improve accuracy and to ensure theyre detecting as much relevant content as possible.Over the past year,we expanded our detection and filtering capabilities to include additional risk categories,such as jailbreaks,and improved the performance of our text,image,multimod

87、al,and jailbreak models.These improvements rely on expert human annotators and linguists who evaluate offline evaluation sets.We also anonymously sample online traffic to monitor for regressions while leveraging the at-scale annotation capabilities of OpenAIs GPT-4.Importantly,weve made these detect

88、ion and evaluation tools available to our customers with the October 2023 general release of Azure AI Content Safety.Customers can choose to use our advanced language and vision models to help detect hate,violent,sexual,and self-harm related content,plus added jailbreak protections.When problematic

89、content is detected,the models assign estimated severity scores to help customers efficiently tackle prioritized items and take action to reduce potential harm.27 The models are offered in Azure AI Content Safety as standalone APIs,and customers can configure the filters to detect content with defin

90、ed severity scores to fit their specific goals and policies.The application of AI in our safety systems empowers organizations to monitor and support product safety at a scale that would be impossible for humans alone.These same tools are also offered in Azure AI Studio,Azure Open AI,and Azure AI Co

91、ntent safety where customers can discover,customize,and operationalize large foundation models at scale.121.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLYRESPONSIBLE AI TRANSPARENCY REPORTA new jailbreak risk detection modelBecause generative AI models have advanced capabilities,they can be suscep

92、tible to adversarial inputs that can result in safety system bypass.These could provoke restricted behaviors and deviations from built-in safety instructions and system messages.This kind of adversarial technique is called a“jailbreak attack,”also known as a user prompt injection attack(UPIA).In Oct

93、ober 2023,to increase the safety of large language model deployments,we released a new jailbreak risk detection model,now called prompt shield.Prompt shield was integrated with existing comprehensive content safety filtering systems across Azure OpenAI Service and made available in Azure AI Content

94、Safety as an API.When a jailbreak attempt is detected,customers can choose to take a variety of steps best suited for their application,such as further investigations or banning users.Types of jailbreak attacksPrompt shield recognizes four different classes of UPIA.CategoryDescriptionAttempt to chan

95、ge application rules This category includes requests to use a new unrestricted application without rules,principles,or limitations,or requests instructing the application to ignore,forget,or disregard rules,instructions,or previous turns.Embedding a conversation mockup to confuse the model This atta

96、ck takes user-crafted conversational turns embedded in a single user query to instruct the application to disregard rules and limitations.Role-play This attack instructs the application to act as another persona without application limitations or assigns anthropomorphic human qualities to the applic

97、ation,such as emotions,thoughts,and opinions.Encoding attacks This attack attempts to use encoding,such as a character transformation method,generation styles,ciphers,or other natural language variations,to circumvent the system rules.In March 2024,prompt shield was expanded to include protections a

98、gainst indirect prompt injection attacks,where a generative application processes malicious information not directly authored by the application developer or the user,which can result in safety system bypass.28Limited Access for customized service safety settings Because safety is a priority for us,

99、our Azure OpenAI Service is offered with default content safety settings and safeguards.Customers must complete registration under our Limited Access policy framework29 and attest to approved use cases to gain access to Azure OpenAI Service.Customized settings for content filters and abuse monitorin

100、g are only allowed for approved use cases,and access to the broadest range of configurability is limited to managed customers.Managed customers are those who are working directly in trusted partnerships with Microsoft account teams.Managed customers must also attest their use cases for customized co

101、ntent filtering and abuse monitoring.All customers must follow the Azure OpenAI Service Code of Conduct,30 which outlines mitigations and content requirements that apply to all customer uses to support safe deployment.In the next section,we use a specific example of an identified riskinformation int

102、egrity risks in the age of generative AIto illustrate how we manage risks by combining technological advancements with policies and programs.131.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLYRESPONSIBLE AI TRANSPARENCY REPORTManaging information integrity risks in the age of generative AI Amid gro

103、wing concern that AI can make it easier to create and share disinformation,we recognize that it is imperative to give users a trusted experience.As generative AI technologies become more advanced and prevalent,it is increasingly difficult to identify AI-generated content.An image,video,or audio clip

104、 generated by AI can be indistinguishable from real-world capture of scenes by cameras and other human-created media.As more creators use generative AI technologies to assist their work,the line between synthetic content created by AI tools and human-created content will increasingly blur.Labeling A

105、I-generated content and disclosing when and how it was made(otherwise known as provenance)is one way to address this issue.In May 2023,we announced our intent to build new media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its sourc

106、e and history.Since then,weve made significant progress on our commitment to deploy new state-of-the-art tools to help the public identify AI-generated audio and visual content.By the end of 2023,we were automatically attaching provenance metadata to images generated with OpenAIs DALL-E 3 model in o

107、ur Azure OpenAI Service,Microsoft Designer,and Microsoft Paint.This provenance metadata,referred to as Content Credentials,includes important information such as when the content was created and which organization certified the credentials.To apply Content Credentials to our products AI-generated im

108、ages,we use an open technical standard developed by the Coalition for Content Provenance and Authenticity(C2PA),which we co-founded in 2021.The industry has increasingly adopted the C2PA standard,which requires cryptographic methods to sign,seal,and attach metadata to the file with a trusted identit

109、y certificate.This means C2PA Content Credentials can deliver a high level of trust with information that is tamper-evident while also preserving privacy.Certification authorities issue identity certificates to vetted organizations,and individual sources within those organizations can be anonymized.

110、The C2PA coalition and standard body builds on our early efforts to prototype and develop provenance technologies and our collaboration with the provenance initiative Project Origin,31 which we founded alongside the British Broadcasting Corporation,the Canadian Broadcasting Corporation,and the New Y

111、ork Times to secure trust in digital media.Beyond Microsoft,we continue to advocate for increased industry adoption of the C2PA standard.There are now more than 100 industry members of C2PA.In February 2024,OpenAI announced that they would implement the C2PA standard for images generated by their DA

112、LL-E 3 image model.This is in addition to completing pre-deployment risk mapping and leveraging red-teaming practices to reduce potential for harman approach similar to ours.While the industry is moving quickly to rally around the C2PA standard,relying on metadata-based provenance or even watermarki

113、ng approaches alone will be insufficient.It is important to combine multiple methods,such as embedding invisible watermarks,alongside C2PA Content Credentials and fingerprinting,to help people recover provenance information when it becomes decoupled from its content.Additionally,authentication,verif

114、ication,and other forensic technologies allow people to evaluate digital content for generative AI contributions.No disclosure method is foolproof,making a stacked mitigation approach especially important.We continue to test and evaluate combinations of techniques in addition to new methods altogeth

115、er to find effective provenance solutions for all media formats.For example,text can easily be edited,copied,and transferred between file formats,which interferes with current technical capabilities that attach Content Credentials to a files metadata.We remain committed to investing in our own resea

116、rch,sharing our learnings,and collaborating with industry peers to address ongoing provenance concerns.In May 2023we announced our intent to build new media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its source and history.End of

117、2023we began to automatically apply Content Credentials to AI-generated images from Microsoft Designer,Microsoft Paint,and DALL-E 3 in our Azure OpenAI Service.141.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLYRESPONSIBLE AI TRANSPARENCY REPORTThis work is especially important in 2024,a year in wh

118、ich more people will vote for their elected leaders than any year in human history.A record-breaking elections year combined with the fast pace of AI innovation may offer bad actors new opportunities to create deceptive AI content(also known as“deepfakes”)designed to mislead the public.To address th

119、is risk,we worked with 19 other companies,including OpenAI,to announce the new Tech Accord to Combat Deceptive Use of AI in 2024 Elections at the Munich Security Conference in February 2024.32 These commitments include advancing provenance technologies,innovating robust disclosure solutions,detectin

120、g and responding to deepfakes in elections,and fostering public awareness and resilience.Since signing the Tech Accord,we continue to make progress on our commitments.We recently launched a portal for candidates to report deepfakes on our services.33 And in March,we launched Microsoft Content Integr

121、ity tools in private preview,to help political candidates,campaigns,and elections organizations maintain greater control over their content and likeness.The Content Integrity tools include two components:first,a tool to certify digital content by adding Content Credentials,and second,tools to allow

122、the public to check if a piece of digital content has Content Credentials.34In addition to engaging in external research collaborations35 and building technical mitigations,its equally important to consider policies,programs,and investments in the broader ecosystem that can further manage informatio

123、n integrity risks associated with generative AI.We know that false or misleading information is more likely to spread in areas where there is limited or no local journalism.A healthy media ecosystem acts as a virtual town square where people gather reliable information and engage on the most pressin

124、g issues facing society.We support independent journalism to advance free,open coverage of important issues on a local and national scale.Our Democracy Forward Journalism Initiative provides journalists and newsrooms with tools and technology to help build capacity,expand their reach and efficiency,

125、distribute trustworthy content,and ultimately provide the information needed to sustain healthy democracies.36In addition to the commitments made in the Tech Accord,we continue to build on our existing programs to protect elections and advance democratic values around the world.We support the rights

126、 of voters,candidates,political campaigns,and election authorities through a variety of programs and investments.These include our partnership with the National Association of State Election Directors,our endorsement of the Protect Elections from Deceptive AI Act in the United States,and our Electio

127、ns Communications Hub.37Case Study:Content Credentials for Microsoft DesignerMicrosoft Designer allows users to input text prompts to generate images,such as the one below which was generated using the prompt:“people using an AI system for farming in a corn field.”Each image,like this one generated

128、by Designer,is automatically marked and signed with Content Credentials,displayed in the right pane of the product interface.The Content Credentials indicate the date and time of creation using AI.Because Content Credentials are cryptographically signed and sealed as part of the image files metadata

129、,this information is tamper-evident and can be examined with tools such as Content Authenticity Initiatives open source Verify tool38 and our Content Integrity Check tool.39 Both tools are available as websites where users can upload files to check Content Credentials.For example,the image to the ri

130、ght shows that when examined with Verify or Check,Content Credentials for images generated by Designer indicate that they were created by a Microsoft product and the date of generation.We continue to build provenance capabilities into our products,including most recently in the DALL-E 3 series model

131、s hosted in Azure OpenAI Service.AI-generated images from Azure OpenAI Service using DALL-E 3 now include provenance information that attach source and generation date through Content Credentials.40151.HOW WE BUILD GENERATIVE APPLICATIONS RESPONSIBLYRESPONSIBLE AI TRANSPARENCY REPORTThird-party eval

132、uation of Microsoft DesignerThe work of AI risk management cannot be done by companies alone.This is why we are committed to learning from stakeholders in academia,civil society,and government whose perspectives,evaluations,and concerns we consider as we build.Below is an example of how weve exercis

133、ed this commitment through an external assessment of Microsoft Designer.Designer is a general use text-to-image generative AI tool.Its many uses can make it vulnerable to adversarial use,including the generation of images that can be used for information operations.While we cant control what happens

134、 to images generated by our applications once they leave our platform,we can mitigate known risks at the user input and image output stages.This is why weve put in place safeguards to restrict what the application will generate,including deceptive images that could further information operations.To

135、better understand the risk of misleading images,reduce potential harms,and promote information integrity,we partnered with NewsGuard to evaluate Designer.NewsGuard is an organization of trained journalists that scores news sources for adherence to journalistic principles.As part of their analysis,Ne

136、wsGuard prompted Designer to create visuals that reinforced or portrayed prominent false narratives related to politics,international affairs,and elections.Of the 275 images created:Mitigations worked in 88 percent of the attempts and the output images contained no problematic content.12 percent of

137、the output images contained problematic content.To enhance performance related to information integrity,we regularly improve prompt blocking,content filters,and safety system message mitigations.Following the mitigation improvements,we input the same prompts developed by NewsGuard which had previous

138、ly resulted in problematic content and reevaluated the images generated by Designer.We found that only 3.6 percent of the output images contained problematic content.NewsGuards analysis and our mitigation improvements were steps in the right direction,but there is more work to be done.Evaluating and

139、 managing risks of our generative applications is an ongoing process and inherently iterative in nature.As new issues surfaceidentified by our internal responsible AI practices,external evaluations,and feedback submitted by userswe take action to address them.In addition,the question of how to build

140、 scaled measurement for evaluating information integrity in images is still an open research question.To address this challenge,we are excited to announce a new collaboration with researchers from Princetons Empirical Studies and Conflict Project to advance this research.96.4%of test prompts success

141、fully mitigated following improvements to the Designer safety system.16RESPONSIBLE AI TRANSPARENCY REPORTSection 2.How we make decisions about releasing generative applications2.HOW WE MAKE DECISIONS ABOUT RELEASING GENERATIVE APPLICATIONS2.HOW WE MAKE DECISIONS ABOUT RELEASING GENERATIVE APPLICATIO

142、NS17RESPONSIBLE AI TRANSPARENCY REPORTDeployment safety for generative AI applicationsAt each stage of the map,measure,and manage process for generative AI releases,weve built best practices,guidelines,and tools that reflect our learnings from the last year of releasing generative applications.For e

143、xample,when teams are asked to evaluate the potential for generative applications to produce ungrounded content,they are provided with centralized tools to measure that risk alongside patterns and best practices to guide their design of specific mitigations for their generative application.After tea

144、ms complete their initial analysis,senior experts review the evaluations and mitigations and make any further recommendations or requirements before products are launched.These reviews ensure that we apply a consistent and high bar to the design,build,and launch of our generative AI applications.Whe

145、n gaps are identified,these experts dive deep with product teams and leaders to assess the problems and agree on further mitigations and next steps.This oversight by senior leadership provides important touch points throughout a products development cycle to manage risks across the company.This proc

146、ess improves each AI product and generates important lessons to apply to our map,measure,and manage approach.As we learn more about how generative AI is used,we continue to iterate on our requirements,review processes,and best practices.In this section,we share what weve learned through two examples

147、.These short case studies demonstrate how weve improved our products through our work to map,measure,and manage risks associated with generative AI.Our collaboration with OpenAIWhile we often build and deploy state-of-the-art AI technology,we also partner with other companies building advanced AI mo

148、dels,including OpenAI.Our most recent agreement with OpenAI41 extends our ongoing collaborations in AI supercomputing and research.The agreement also allows both OpenAI and Microsoft to independently commercialize any advanced technologies developed by OpenAI.We deploy OpenAIs models across several

149、consumer and enterprise products,including Azure OpenAI Service,which enables developers to build cutting-edge AI applications through OpenAI models with the benefit of Azures enterprise-grade capabilities.Microsoft and OpenAI share a commitment to building AI systems and products that are trustwort

150、hy and safe.OpenAIs leading research on AI Alignment,42 their preparedness framework,43 and our Responsible AI Standard establish the foundation for the safe deployment of our respective AI technologies and help guide the industry toward more responsible outcomes.2.HOW WE MAKE DECISIONS ABOUT RELEAS

151、ING GENERATIVE APPLICATIONS18RESPONSIBLE AI TRANSPARENCY REPORTCase Study:Safely deploying Copilot StudioIn 2023,we released Copilot Studio,which harnesses generative AI to enable customers without programming or AI skills to build their own copilots.44 Natural language processing enables the cloud-

152、based platform to interpret customer prompts and create interactive solutions that customers can then deploy to their users.It also enables customers to test,publish,and track the performance of copilots within the platform so they remain in control of the experience.As with all generative applicati

153、ons,the Copilot Studio engineering team mapped,measured,and managed risks according to our governance framework prior to deployment.Map.As part of their process,the engineering team mapped key risks associated with the product in their Responsible AI Impact Assessment as well as security and privacy

154、 reviews,including the potential for copilots to provide ungrounded responses to user prompts.Measure and Manage.The Copilot Studio team worked with subject matter experts to measure and manage key risks iteratively throughout the development and deployment process.To mitigate AI-generated content r

155、isks,the Copilot Studio team included safety system message mitigations and leveraged Azure OpenAI Services content filtering capabilities to direct copilots to generate only acceptable content.One of the key risks for this product is groundedness,or potential for AI-generated output to contain info

156、rmation that is not present in the input sources.By improving groundedness mitigations through metaprompt adjustments,the Copilot Studio team significantly enhanced in-domain query responses,increasing the in-domain pass rate from 88.6 percent to 95.7 percent.This means that when a user submits a qu

157、estion that is in-domainor topically appropriatecopilots built with Copilot Studio are able to respond more accurately.This change also resulted in a notable 6 percent increase in answer rate within just one week of implementation.In other words,the improved groundedness filtering also reduced the n

158、umber of queries that copilots declined to respond to,improving the overall user experience.The team also introduced citations,so copilot users have more context on the source of information included in AI-generated outputs.By amending the safety system message and utilizing content filters,the Copi

159、lot Studio team improved citation accuracy from 85 percent to 90 percent.Following the map,measure,and manage framework and supported by robust governance processes,the Copilot Studio team launched an experience where customers can build safer and more trustworthy copilots.Case Study:Safely deployin

160、g GitHub CopilotGitHub Copilot is an AI-powered tool designed to increase developer productivity through a variety of features,including code suggestions and chat experience to ask questions about code.45 Code completion is a feature that runs in the integrated development environment(IDE),providing

161、 suggested lines of code as developers work on projects.GitHub Copilot Chat can be used in different environments,including the IDE and on GitH,and provides a conversational interface for developers to ask coding questions.GitHub Copilot runs on a variety of advanced Microsoft and OpenAI technologie

162、s,including OpenAIs GPT models.In developing the features for GitHub Copilot,the team worked with their Responsible AI Championsresponsible AI experts within their organizationto map,measure,and manage risks associated with using generative AI in the context of coding.Map.The team completed their Re

163、sponsible AI Impact Assessment as well as security and privacy reviews to map different risks associated with the product.These risks included 1)the generation of code that may appear valid but may not be semantically or syntactically correct;2)the generation of code that may not reflect the intent

164、of the developer;and 3)more fundamentally,whether GitHub Copilot was actually increasing developer productivity.The last category,generally referred to as fitness for purpose,is an important concept for establishing that an AI application effectively addresses the problem its meant to solve.Measure.

165、In addition to assessing performance,like the quality of responses,and running measurement sets to evaluate risks like insecure code or content risks,the GitHub Copilot team set out to understand if Copilot improved developer productivity.Research on how 450 developers at Accenture used the GitHub C

166、opilot code completion feature over six months found that:4694 percent of developers reported that using GitHub Copilot helped them remain in the flow and spend less effort on repetitive tasks.90 percent of developers spent less time searching for information.90 percent of developers reported writin

167、g better code with GitHub Copilot.95 percent of developers learned from Copilot suggestions.2.HOW WE MAKE DECISIONS ABOUT RELEASING GENERATIVE APPLICATIONS19RESPONSIBLE AI TRANSPARENCY REPORTCase Study:Safely deploying GitHub Copilot,cont.In a follow-on study of GitHub Copilot Chat,the team saw simi

168、lar improvements in productivity.47 In this study,researchers recruited 36 participants that had between five and ten years of coding experience.Participants were asked to a)write code,being randomly assigned to use or not use GitHub Copilot Chat and b)review code,being randomly assigned to review c

169、ode that was authored with assistance from GitHub Copilot Chat or not.The researchers created a framework to evaluate code quality,asking participants to assess if code was readable,reusable,concise,maintainable,and resilient.In analyzing the data,researchers found that:85 percent of developers felt

170、 more confident in their code quality when authoring code with GitHub Copilot and GitHub Copilot Chat.Code reviews were more actionable and completed 15 percent faster than without GitHub Copilot Chat.88 percent of developers reported maintaining flow state with GitHub Copilot Chat because they felt

171、 more focused,less frustrated,and enjoyed coding more.This research indicates that not only is GitHub Copilot making developers more productive,it also increases developer satisfaction.Manage.As weve shared throughout the report,risks often need to be mitigated at multiple levels,and mitigations oft

172、en work to manage multiple risks.Human oversight.Responsible AI is a shared responsibility,and our goal is to empower users to use GitHub Copilot in a safe,trustworthy,and reliable way.To support developer oversight of AI-generated code,GitHub Copilot was designed to offer suggested lines of code,wh

173、ich a developer reviews and accepts.In GitHub Copilot Chat,developers can review and copy code suggestions generated in the chat window into their coding environment.Importantly,the developer remains in control of the code theyre writing.48Staying on topic.While the models behind GitHub Copilot can

174、generate a wide range of content,one key approach to mitigate AI-generated content risks is to keep conversations limited to coding.The GitHub Copilot team built a classifier to reduce the number of off-topic conversations on the platform to keep conversations on topic and to protect users.In additi

175、on to the off-topic classifier,GitHub Copilot runs a variety of content filters,including to block content related to self-harm,violence,and hate speech.Transparency.The transparency documentation we provide on our GitHub Copilot features provides developers with important information about how best

176、 to use the features responsibly.49,50 We also bake transparency right into the experience.GitHub Copilot discloses that code suggestions are AI-generated and may contain mistakes,empowering developers to make informed decisions about how best to use GitHub Copilot features.2.HOW WE MAKE DECISIONS A

177、BOUT RELEASING GENERATIVE APPLICATIONS20RESPONSIBLE AI TRANSPARENCY REPORTSensitive Uses program in the age of generative AIThe generative AI release process integrates with existing responsible AI programs and processes,such as our Sensitive Uses program,established in 2017 to provide ongoing revie

178、w and oversight of high-impact and higher-risk uses of AI.Employees across the company must report AI uses to our Sensitive Uses program for in-depth review and oversight if the reasonably foreseeable use or misuse of AI could have a consequential impact on an individuals legal status or life opport

179、unities,present the risk of significant physical or psychological injury,or restrict,infringe upon,or undermine the ability to realize an individuals human rights.Particularly high-impact use cases are also brought before our Sensitive Uses Panel.Professionals from across our research,policy,and eng

180、ineering organizations with expertise in human rights,social science,privacy,and security lend their expertise to the Sensitive Uses team and the Sensitive Uses Panel to help address complex sociotechnical issues and questions.After review and consultation,the Sensitive Uses team delivers directed,c

181、oncrete guidance and mitigation requirements tailored to the project.Since 2019,the Sensitive Uses team has received over 900 submissions,including 300 in 2023 alone.In 2023,nearly 70 percent of cases were related to generative AI.The increase in generative AI cases led to new insights about emergin

182、g risks,such as the capability of generative applications to make ungrounded inferences about a person.In some scenarios,our teams observed that a chatbot could provide realistic sounding but incorrect responses to questions that were outside the scope of their grounding data.Depending on the contex

183、t,these ungrounded responses could misattribute actions or information about individuals or groups.For example,a chatbot designed to answer questions about workplace benefits shouldnt answer questions about employee performance when that information is not included in the grounding data.Some of the

184、mitigations that can prevent ungrounded inferences include safety system message provisions to guide which questions chatbots should respond to,ensuring that application responses are grounded in the right source data,and isolating private data.Once risks are assessed by the Sensitive Uses team,guid

185、ance is given to product teams on the mitigations for the use case.900submissions have been received by the Sensitive Uses team since 2019,including 300 in 2023 alone.Nearly 70%of cases in 2023 were related to generative AI.2.HOW WE MAKE DECISIONS ABOUT RELEASING GENERATIVE APPLICATIONS21RESPONSIBLE

186、 AI TRANSPARENCY REPORTSensitive Uses in action:Microsoft Copilot for SecurityOne example of a product that underwent Sensitive Uses review is Copilot for Security,51 an AI-powered tool that helps security professionals respond to threats and assess risk exposure faster and more accurately.Copilot f

187、or Security uses generative AI to investigate analysts digital environments,flag suspicious activity or content,and improve analysts response to incidents.It generates natural language insights and recommendations from complex data,which helps analysts catch threats they may have otherwise missed an

188、d helps organizations potentially prevent and disrupt attacks at machine speed.Through a Responsible AI Impact Assessment and with the support of their Responsible AI Champion,the Copilot for Security team identified that the project could meet the threshold for Sensitive Uses.They submitted the pro

189、ject to the Sensitive Uses program as part of early product development.The Sensitive Uses team confirmed that Microsoft Copilot for Security met the criteria for a Sensitive Uses review.They then worked with the product team and Responsible AI Champion to map key risks associated with the product,i

190、ncluding that analysts could be exposed to potential content risks as part of their work.The team landed on an innovative approach to address risks,which,due to the nature of routine security work,are different than those for consumer solutions.For example,security professionals may encounter offens

191、ive content or malicious code in source information.To allow analysts to stay in control of when they encounter potentially harmful content,the team made sure that Microsoft Copilot for Security surfaces these risks when requested by the security professionals.While Microsoft Copilot for Security su

192、ggests next steps,analysts ultimately decide what to do based on their organizations unique needs.The Copilot for Security team worked closely with subject matter experts to validate their approach and specific mitigations.They improved in-product messaging to avoid overreliance on AI-generated outp

193、uts.They also refined metrics for grounding to improve the products generated content.Through required ongoing monitoring of the product over the course of its phased releases,the team triaged and addressed responsible AI issues weekly.This process led to a more secure,transparent,and trustworthy ge

194、nerative AI product that empowers security professionals to protect their organizations and customers,furthering our pursuit of the next generation of cybersecurity protection.5222RESPONSIBLE AI TRANSPARENCY REPORTSection 3.How we support our customers in building AI responsiblyIn addition to buildi

195、ng our own AI applications responsibly,we empower our customers with responsible AI tools and features.We invest in our customers responsible AI goals in three ways:1We stand behind our customers deployment and use of AI through our AI Customer Commitments.2We build responsible AI tools for our cust

196、omers to use in developing their own AI applications responsibly.3We provide transparency documentation to customers to provide important information about our AI platforms and applications.3.HOW WE SUPPORT OUR CUSTOMERS IN BUILDING RESPONSIBLY3.HOW WE SUPPORT OUR CUSTOMERS IN BUILDING RESPONSIBLY23

197、RESPONSIBLE AI TRANSPARENCY REPORTAI Customer Commitments In June 2023,we announced our AI Customer Commitments,53 outlining steps to support our customers on their responsible AI journey.We recognize that ensuring the right guardrails for the responsible use of AI will not be limited to technology

198、companies and governments.Every organization that creates or uses AI applications will need to develop and implement governance systems.We made the following promises to our customers:We created an AI Assurance Program to help customers ensure that the AI applications they deploy on our platforms me

199、et the legal and regulatory requirements for responsible AI.This program includes regulator engagement support,along with our promise to attest to how we are implementing the NIST AI Risk Management Framework.We continue to engage with customer councils,listening to their views on how we can deliver

200、 the most relevant and compliant AI technology and tools.We created a Responsible AI Partner program for our partner ecosystem and 11 partners have joined the program so far.These partners have created comprehensive practices to help customers evaluate,test,adopt,and commercialize AI solutions.54 We

201、 announced,and later expanded,the Customer Copyright Commitment55 in which Microsoft will defend commercial customers who are sued by a third party for copyright infringement for using Azure OpenAI Service,our Copilots,or the outputs they generate and pay any resulting adverse judgments or settlemen

202、ts,as long as the customer met basic conditions such as not attempting to generate infringing content and using our required guardrails and content filters.56Ultimately,we know that these commitments are an important start,and we will build on them as both the technology and regulatory conditions ev

203、olve.We are excited by this opportunity to partner more closely with our customers as we continue on our responsible AI journey together.11partners have joined since we created the Responsible AI Partner program.3.HOW WE SUPPORT OUR CUSTOMERS IN BUILDING RESPONSIBLY24RESPONSIBLE AI TRANSPARENCY REPO

204、RTTools to support responsible developmentTo empower our customers,weve released 30 responsible AI tools that include more than 100 features to support customers responsible AI development.These tools work to map and measure AI risks and manage identified risks with novel mitigations,real-time detec

205、tion and filtering,and ongoing monitoring.Tools to map and measure risksWe are committed to developing tools and resources that enable every organization to map,measure,and manage AI risks in their own applications.Weve also prioritized making responsible AI tools open access.For example,in February

206、 2024,we released a red teaming accelerator,Python Risk Identification Tool for generative AI(PyRIT).57 PyRIT enables security professionals and machine learning engineers to proactively find risks in their generative applications.PyRIT accelerates a developers work by expanding on their initial red

207、 teaming prompts,dynamically responding to AI-generated outputs to continue probing for content risks,and automatically scoring outputs using content filters.Since its release on GitHub,PyRIT has received 1,100 stars and been copied more than 200 times by developers for use in their own repositories

208、 where it can be modified to fit their use cases.After identifying risks with a tool like PyRIT,customers can use safety evaluations in Azure AI Studio to conduct pre-deployment assessments of their generative applications susceptibility to generate low-quality or unsafe content,as well as to monito

209、r trends post-deployment.For example,in November 2023 we released a limited set of generative AI evaluation tools in Azure AI Studio to allow customers to assess the quality and safety of their generative applications.58 The first pre-built metrics offered customers an easy way to evaluate their app

210、lications for basic generation quality metrics such as groundedness,which measures how well the models generated answers align with information from the input sources.In March 2024,we expanded our offerings in Azure AI Studio to include AI-assisted evaluations for safety risks across multiple conten

211、t risk categories such as hate,violence,sexual,and self-harm,as well as content that may cause fairness harms and susceptibility to jailbreak attacks.59 Recognizing that evaluations are most effective when iterative and contextual,weve continued to invest in the Responsible AI Toolbox(RAI Toolbox).6

212、0 This open-source tool,which is also integrated with Azure Machine Learning,offers support for computer vision and natural language processing(NLP)scenarios.The RAI Toolbox brings together a variety of model understanding and assessment tools such as fairness analysis,model interpretability,error a

213、nalysis,what-if exploration,data explorations,and causal analysis.This enables ML professionals to easily flow through different stages of model debugging and decision-making.As an entirely customizable experience,the RAI Toolbox can be deployed for various functions,such as holistic model or data a

214、nalysis,comparing datasets,or explaining individual instances of model predictions.On GitHub,the RAI Toolbox has received 1,200 stars with more than 4,700 downloads per month.Tools to manage risks Just as we measure and manage AI risks across the platform and application layers of our generative pro

215、ducts,we empower our customers to do the same.For example,Azure AI Content Safety helps customers detect and filter harmful user inputs and AI-generated content in their applications.Importantly,Azure AI Content Safety provides options to detect content risks along multiple categories and severity l

216、evels to enable customers to configure settings to fit specific needs.Another example is our system message framework and templates,which support customers as they write effective system messagessometimes called metapromptswhich can improve performance,align generative application behavior with cust

217、omer expectations,and help mitigate risks in our customers applications.61In October 2023,we made Azure AI Content Safety generally available.Since then,weve continued to expand its integration across our customer offerings,including its availability in Azure AI Studio,a developer platform designed

218、to simplify generative application development,and across our Copilot builder platforms,such as Microsoft Copilot Studio.We continue to expand customer access to additional risk management tools that detect risks unique to generative AI models and applications,such as prompt shield and groundedness

219、detection.Prompt shield detects and blocks prompt injection attacks,which bad actors use to insert harmful instructions into the data processed by large language models.62 Groundedness detection finds ungrounded statements in AI-generated outputs and allows the customer to implement mitigations such

220、 as triggering rewrites of ungrounded statements.63In March 2024,we released risks&safety monitoring in Azure OpenAI Service,which provides tools for real-time harmful content detection and mitigation,offering insights into content filter performance on actual customer traffic and identifying users

221、who may be abusing a generative application.64 Customers can use these insights to fine-tune content filters to align with their safety goals.Additionally,the potentially abusive user detection feature 3.HOW WE SUPPORT OUR CUSTOMERS IN BUILDING RESPONSIBLY25RESPONSIBLE AI TRANSPARENCY REPORTanalyzes

222、 trends in user behavior and flagged content to generate reports for our customers to decide whether to take further action in Azure AI Studio.The report includes a user ranking and an abuse report,enabling customers to take action when abuse is suspected.As we continue to improve our tools to map,m

223、easure,and manage generative AI risks,we make those tools available to our customers to enable an ecosystem of responsible AI development and deployment.Transparency to support responsible development and use by our customersBeginning in 2019,weve regularly released Transparency Notesdocumentation c

224、overing responsible AI topicsfor our platform services which customers use to build their own AI applications.Since then,weve published 33 Transparency Notes.Required for our platform services,these follow a specific template to provide customers with detailed information about capabilities,limitati

225、ons,and intended uses to enable responsible integration and use.Some examples include Transparency Notes for Azure AI Vision Face API,65 Azure OpenAI Service,66 and Azure Document Intelligence.67In 2023,we expanded our transparency documentation beyond Transparency Notes.We now require our non-platf

226、orm services,such as our Copilots,to publish Responsible AI Frequently Asked Questions(FAQs)and include user-friendly notices in product experiences to provide important disclosures.For example,Copilot in Bing provides users with responsible AI documentation68 and FAQs69 that detail our risk mapping

227、,measurement,and management methods.In addition,when users interact with Copilot in Bing,we provide in-product disclosure to inform users that they are interacting with an AI application and citations to source material to help users verify information in the responses and learn more.Other important

228、 notices may include disclaimers about the potential for AI to make errors or produce unexpected content.These user-friendly transparency documents and product integrated notices are especially important in our Copilot experiences,where users are less likely to be developers.Transparency documentati

229、on and in-product transparency work together to enable our customers to build and use AI applications responsibly.And as with our other responsible AI programs,we anticipate that the ways we provide transparency for specific products will evolve as we learn.26RESPONSIBLE AI TRANSPARENCY REPORTSectio

230、n 4.How we learn,evolve,and growAs weve prioritized our company-wide investments in responsible AI over the last eight years,people remain at the center of our progress.From our growing internal community to the global responsible AI ecosystem,the individuals and communities involved continue to pus

231、h forward whats possible in developing AI applications responsibly.In this section,we share our approach to learning,evolving,and growing by bringing outside perspectives in,sharing learnings outwards,and investing in our community.4.HOW WE LEARN,EVOLVE,AND GROW4.HOW WE LEARN,EVOLVE,AND GROW27RESPON

232、SIBLE AI TRANSPARENCY REPORTGovernance of responsible AI at Microsoft:Growing our responsible AI communityAt Microsoft,no one team or organization can be solely responsible for embracing and enforcing the adoption of responsible AI practices.Rather,everyone across every level of the company must adh

233、ere to these commitments in order for them to be effective.We developed our Responsible AI Standard to communicate requirements and guidance so all teams can uphold our AI principles as they develop AI applications.Specialists in research,policy,and engineering combine their expertise and collaborat

234、e on cutting-edge responsible AI practices.These practices ensure we meet our own commitments while also supporting our customers and partners as they work to build their own AI applications responsibly.Research:Researchers in Aether,70 Microsoft Research,71 and our engineering teams keep the respon

235、sible AI program on the leading edge of issues through thought leadership.They conduct rigorous AI research,including on transparency,fairness,human-AI collaboration,privacy,security,safety,and the impact of AI on people and society.Our researchers actively participate in broader discussions and deb

236、ates to ensure that our responsible AI program integrates big-picture perspectives and input.Policy:The Office of Responsible AI(ORA)collaborates with stakeholders and policy teams across the company to develop policies and practices to uphold our AI principles when building AI applications.ORA defi

237、nes roles and responsibilities,establishes governance systems,and leads Sensitive Use reviews to help ensure our AI principles are upheld in our development and deployment work.ORA also helps to shape the new laws,norms,and standards needed to ensure that the promise of AI technology is realized for

238、 the benefit of society at large.Engineering:Engineering teams create AI platforms,applications,and tools.They provide feedback to ensure policies and practices are technically feasible,innovate novel practices and new technologies,and scale responsible AI practices throughout the company.Our engine

239、ering teams draw on interactions with customers and user research to address stakeholder concerns in the development of our AI applications.Our responsible AI journey2016Satya NadellasSlate article2017Aether Committee establishedSensitive Uses of AI defined and program established2018AI Red Teamesta

240、blishedFacial RecognitionPrinciples adoptedAI Principlesadopted2019Responsible AI Standard v1Office of Responsible AI established2020Error Analysis Open Source tool released2021Responsible AI Dashboard released2022Responsible AI Standard v22023Launched Global PerspectivesCopyright CommitmentsCo-foun

241、ded the Frontier Model Launched Azure AI Forum Content SafetyPublished White House Governing AI Voluntary BlueprintCommitments2024PyRIT,prompt shield,risks&abuse monitoring and more released 4.HOW WE LEARN,EVOLVE,AND GROW28RESPONSIBLE AI TRANSPARENCY REPORTMicrosoft BoardResponsible AI CouncilOffice

242、 of Responsible AIResearchPolicyEngineeringApplying lessons from previous efforts to address privacy,security,and accessibility,weve built a dedicated responsible AI program to guide our company-wide efforts.72 We combine a federated,bottom-up approach with strong top-down support and oversight by c

243、ompany leadership to fuel our policies,governance,and processes.From a governance perspective,the Environmental,Social,and Public Policy Committee of the Board of Directors provides oversight and guidance on responsible AI policies and programs.Our management of responsible AI starts with CEO Satya

244、Nadella and cascades across the senior leadership team and all of Microsoft.At the senior leadership level,the Responsible AI Council provides a forum for business leaders and representatives from research,policy,and engineering.The council,co-led by Vice Chair and President Brad Smith and Chief Tec

245、hnology Officer Kevin Scott,meets regularly to grapple with the biggest challenges surrounding AI and to drive progress in our responsible AI policies and processes.Executive leadership and accountability are key drivers to ensure that responsible AI remains a priority across the company.At the comm

246、unity level,weve nurtured a unique Responsible AI Champion program that engages our engineering and global field teams in our responsible AI work.The Responsible AI Champion program is guided by a defined structure,with clear roles and responsibilities that empower our Champions and enable a culture

247、 of responsible AI across the company.Senior leaders accountable for responsible AI identify members of their organization to serve as Responsible AI Champions.These Champions enable their organizations to carry out our AI commitments by working together and learning from one anothers expertise.This

248、 company-wide network troubleshoots problems,offers guidance,and advises on how to implement the Responsible AI Standard.Our combined bottom-up and top-down approach empowers individuals,teams,and organizations and facilitates a culture of responsible AI by design.The collaborative and multidiscipli

249、nary structure embedded in our responsible AI program leverages the incredible diversity of the company73 and amplifies what we can achieve.Our engineering,policy,and research teams bring a wealth of passion,experience,and expertise,which enables us to develop and deploy safe,secure,and trustworthy

250、AI.A dedicated responsible AI program Our combined bottom-up and top-down approach empowers individuals,teams,and organizations and facilitates a culture of responsible AI by design.4.HOW WE LEARN,EVOLVE,AND GROW29RESPONSIBLE AI TRANSPARENCY REPORTGrowing a responsible AI communityWe strongly believ

251、e that AI has the potential to create ripples of positive impact across the globe.Over the years,we have matched that belief with significant investments in new engineering systems,research-led incubations,and,of course,people.We continue to grow and now have over 400 people working on responsible A

252、I,more than half of whom focus on responsible AI full-time.In the second half of 2023,we grew our responsible AI community 16.6 percent across the company.We increased the number of Responsible AI Champions across our engineering groups and grew the number of full-time employees who work on centrali

253、zed responsible AI infrastructure,AI red teaming,and assessing launch readiness of our products.We continue to grow a diverse community to fulfill our commitments to responsible AI and to positively impact the products and tools that millions of people use every day.Some responsible AI community mem

254、bers provide direct subject matter expertise,while others build out responsible AI practices and compliance motions.Our community members hold positions in research,policy,engineering,sales,and other core functions,touching all aspects of our business.They bring varied perspectives rooted in their d

255、iverse professional and academic backgrounds,including liberal arts,computer science,international relations,linguistics,cognitive neuroscience,physics,and more.11,000attendees welcomed at SkillUp AI events.30,000employees reached through more than 40 events by our AI/ML connected community.Supporti

256、ng our responsible AI community through trainingWe have an enormous opportunity to integrate AI throughout the products and services we offerand are dedicated to doing so responsibly.This journey begins with educating all of our employees.The 2023 version of our Standards of Business Conduct trainin

257、g,a business ethics course required companywide,covers the resources our employees use to develop and deploy AI safely.As of December 31,2023,99 percent of all employees completed this course,including the responsible AI module.For our responsible AI community members,our training goes even deeper.W

258、e provide extensive training for our over 140 Responsible AI Champions,more than 50 of whom joined the program in 2023.In turn,Responsible AI Champions help scale the implementation of responsible AI practices by training other members of the responsible AI community in their respective Microsoft or

259、ganizations.This cascade strengthens peer relationships,builds trust among employees,and enables Champions to customize instruction to their specific organizations or divisions.We continue to refine our training to keep pace with rapid developments in AI,particularly for responsible AI-focused profe

260、ssionals.Weve developed learning sessions and resources for educating employees on responsible AI skills,our internal processes,and our approach to responsible AI.Ongoing education helps to keep our responsible AI subject matter experts current so they can disseminate up-to-date best practices throu

261、ghout the company.We also provide training on general AI topics so our employees can improve their knowledge and abilities as AI becomes more important for both our society and our business.Throughout 2023,more than 100,000 employees attended conferences and events such as the AI/ML Learning Series

262、and Hackathon,which has incubated more than 11,000 AI-focused projects.At these events,they learn the latest technologies and ways to apply responsible AI principles through company communities and channels.Our employees also lead by sharing their experiences and expertise.For example,our AI/ML conn

263、ected community.reached nearly 30,000 employees through more than 40 events in 2023,and our SkillUp AI events welcomed more than 11,000 attendees.4.HOW WE LEARN,EVOLVE,AND GROW30RESPONSIBLE AI TRANSPARENCY REPORTBuilding safe and responsible frontier models through partnerships and stakeholder input

264、AI sits at the exciting intersection of technological breakthrough and real-world application.We are continually discovering new ways to push the limits with AI,innovating solutions to address societys biggest problems.Frontier models,highly capable AI models that go beyond todays state-of-the-art t

265、echnologies,offer significant opportunities to help people be more productive and creative as well as hold major potential to address global challenges.Alongside these benefits,they also present risks of harm.That is why we are engaging in a number of partnerships across industry,civil society,and a

266、cademia to share our learnings and learn from others.An important example of how we can lead through partnership is our co-founding of the Frontier Model Forum alongside Anthropic,Google,and OpenAI.The Frontier Model Forum is an industry non-profit dedicated to the safe and secure development of fro

267、ntier AI models by sharing information,developing best practices,and advancing research in frontier AI safety.By leveraging the expertise of the founding members,as well as other organizations committed to developing and deploying frontier models safely,the Frontier Model Forum works toward four pri

268、orities:1Advance AI safety research.We must question,investigate,and collaborate on how to responsibly develop and deploy frontier models to address the challenges of frontier AI.We will collaborate to develop standardized approaches to enable independent evaluations of models capabilities and safet

269、y,where appropriate.2Contribute to the development and refinement of best practices.We are working with partners to identify best practices for the responsible development and deployment of frontier models and how to improve our practices as we continuously learn.3Share information and seek input.We

270、 work with policymakers,academics,civil society organizations,and the private sector to share knowledge about safety risks and continuously earn trust.We strongly believe that AI will touch virtually every aspect of life,so frontier models will need input from all corners of society to operate respo

271、nsibly.4Support efforts to develop applications that can help meet societys greatest challenges.Innovation must play a central role in tackling complex and persistent issues,from human-caused climate change to global health.We continue to support the work of the Frontier Model Forum as it advances u

272、nderstanding of how to address frontier AI risks in a way that benefits organizations and communities around the world.4.HOW WE LEARN,EVOLVE,AND GROW31RESPONSIBLE AI TRANSPARENCY REPORTWe are also a founding member of the multi-stakeholder organization Partnership on AI(PAI).We consistently contribu

273、te to workstreams across its areas of focus,including safety-critical AI;fair,transparent,and accountable AI;AI,labor,and the economy;and AI and media integrity.In June 2023,we joined PAIs Framework for Collective Action on Synthetic Media.74 This set of practices guides the responsible development,

274、creation,and sharing of media created with generative AI.We also participated in PAIs process to develop Guidance for Safe Foundation Model Deployment.75 We shared insights from our research on mapping,measuring,and mitigating foundation model risks and benefited from a multi-stakeholder exchange on

275、 this topic.We continually look for ways to engage with stakeholders who represent specific concerns.Since early 2023,we have been actively engaging with news publishers and content creators globally,including in the Americas,Europe,and Australia.We listen to feedback from creators and publishers to

276、 learn how creative industries are using generative AI tools and to understand concerns from creative communities.We have engaged in public and private consultations,roundtables,and events.For example,we participated in Creative Commons community meetings on generative AI and creator empowerment.We

277、also sponsored the Creative Commons Global Summit on AI and the Commons held in Mexico City.76 This summit brought together a diverse set of artists,civil society leaders,technology companies,and academics to address issues related to AI and creative communities.We participated in the inaugural Cent

278、re for News Technology and Innovation roundtable on Defining AI in News.Attendees included news organizations,technology companies,and academics from the United States,the United Kingdom,Brazil,and Nigeria.A report from the event highlights areas of opportunity and further multi-stakeholder collabor

279、ation.77 In Australia,we also participated in government-led roundtables engaging with content and creative industries on addressing the issue of copyright and AI.We support creators by actively engaging in consultations with sector-specific groups to obtain feedback on our tools and incorporate the

280、ir feedback into product improvements.For example,news publishers expressed hesitation around their content being used to train generative AI models.However,they did not want any exclusion from training datasets to affect how their content appeared in search results.In response to that feedback,we l

281、aunched granular controls to allow web publishers to exercise greater control over how content from their websites is accessed and used.78 We are committed to responsibly scaling AI to empower every person on the planet to achieve more.Our engagements with the broader community of concerned artists,

282、civil society organizations,and academics reflect our investment in learning as we evolve our approach to responsible AI.4.HOW WE LEARN,EVOLVE,AND GROW32RESPONSIBLE AI TRANSPARENCY REPORTUsing consensus-based safety frameworksTechnology sector-led initiatives comprise one important force to advance

283、responsible AI.Industry and others stand to significantly benefit from the key role that governments can also play.From within the U.S.Department of Commerce,the National Institute for Standards and Technology(NIST)built and published a voluntary framework to develop AI applications and mitigate rel

284、ated risks.Extensive consultation with industry,civil society organizations,and academic stakeholders helped NIST refine this AI Risk Management Framework(AI RMF).We contributed to NIST consultations and have applied learnings from NISTs work ourselves,including our application of the NIST RMF in ou

285、r generative AI requirements.To implement its tasks in Executive Order(EO)14110(on the Safe,Secure,and Trustworthy Development and Use of AI),NIST will consult with stakeholders to develop additional guidance,such as a generative AI-specific version of the AI RMF.Federal agencies and their AI provid

286、ers can leverage the NIST AI RMF and NISTs additional reference materials to meet obligations required by the implementation of EO 14110.The NIST-led AI Safety Institute Consortium(AISIC),which we have joined,has launched five working groups.79 These working groups will contribute further guidance,d

287、atasets,frameworks,and test environments to advance the field of AI safety.Governments,industry,and other stakeholders can also partner to develop standards,including in international forums.Within the International Standards Organization(ISO),there are ongoing efforts to develop standards to suppor

288、t AI risk management,including the recent publication of ISO/IEC 42001,AI Management System(AIMS).Companion standards will also define controls and support assessments of their implementation.International standards help bring together global expertise in defining widely applicable practices that ca

289、n serve as the basis for requirements in an interoperable ecosystem.We have also partnered with the national security and innovation nonprofit MITRE to incorporate security guidance for generative applications into its ATLAS framework.80 A recent update of the ATLAS framework includes the vulnerabil

290、ities of and adversarial attack tactics targeting generative AI and LLMs so organizations can better protect their applications.81 The framework also highlights case studies of real-world incidents,including how AI red teams and security professionals mitigated identified issues.Finally,the ATLAS up

291、date integrates feedback and best practices from the wider community of government,industry,academia,and security experts.This resource provides an actionable framework so security professionals,AI developers,and AI operators can advance safety in generative applications.We welcome these and other m

292、ulti-stakeholder initiatives to advance responsible AI,knowing that these efforts produce results that address a broad range of concerns from a variety of stakeholders.4.HOW WE LEARN,EVOLVE,AND GROW33RESPONSIBLE AI TRANSPARENCY REPORTSupporting AI research initiativesIn addition to governmental and

293、private sector investment in responsible AI,academic research and development can help realize the potential of this technology.Yet academic institutions do not always have the resources needed to research and train AI models.The National AI Research Resource(NAIRR)seeks to address this challenge.82

294、 It intends to provide high-quality data,computational resources,and educational support to make cutting-edge AI research possible for more U.S.academic institutions.We would also welcome and support an extension of the NAIRR to provide access to academic institutions among partners globally.We beli

295、eve that this comprehensive resource would enable the United States and like-minded nations to continue to lead in AI innovation and risk mitigation.As currently proposed,a U.S.-focused NAIRR will support a national network of users in training the most resource-intensive models on a combination of

296、supercomputer and commercial cloud-based infrastructure.This centralized resource would enable academics to pursue new lines of research and development without individual institutions needing to heavily invest in computing.Democratizing AI research and development is an essential step toward divers

297、ifying the field,leading to a greater breadth in background,viewpoints,and experience necessary to build AI applications that serve society as fully as possible.In short,NAIRR will enable the country to innovate at scale.In 2023,we announced our support of the NAIRR pilot led by the National Science

298、 Foundation(NSF).83 For this pilot,we committed$20 million worth of Azure compute credits and access to leading-edge models including those available in Azure OpenAI Service.In the spirit of advancing AI research,we have developed the Accelerating Foundation Models Research(AFMR)program.84 The AFMR

299、program assembles an interdisciplinary research community to engage with some of the greatest technical and societal challenges of our time.Through the AFMR,we make leading foundation models hosted by Microsoft Azure more accessible to the academic research community.So far,we have extended access t

300、o Azure OpenAI Service to 212 AFMR principal investigators from 117 institutions across 17 countries.These projects focus on three goals:Aligning AI with shared human goals,values,and preferences via research on models.Projects will enhance safety,robustness,sustainability,responsibility,and transpa

301、rency,while exploring new evaluation methods to measure the rapidly growing capabilities of novel models.Improving human interactions via sociotechnical research.Projects will enable AI to extend human ingenuity,creativity,and productivity;reduce inequities of access;and create positive benefits for

302、 people and societies worldwide.Accelerating scientific discovery in natural sciences through proactive knowledge discovery,hypothesis generation,and multiscale multimodal data generation.In the next call for proposals,we will seek projects in the areas of AI cognition and the economy,AI for creativ

303、ity,evaluation and measurement,and AI data engagement for natural and life sciences.We also launched an AFMR grant for AI projects advanced by Minority Serving Institutions,focused on Historically Black Colleges and Universities(HBCUs)and Hispanic-Serving Institutions(HSIs),with 10 inaugural grant r

304、ecipients.85In 2023,we announced the Microsoft Research AI&Society Fellows program to foster research collaboration between Microsoft Research and scholars at the intersection of AI and societal impact.86 We recognize the value of bridging academic,industry,policy,and regulatory worlds and seek to i

305、gnite interdisciplinary collaboration that drives real-world impact.In the fall of 2023,Microsoft Research ran a global call for proposals to seek collaborators for a diverse set of thirteen research challenges.The 24 AI&Society Fellows were announced in early 2024.These fellows will join our resear

306、chers for a one-year collaboration with the goal of catalyzing research and contributing publications that advance scholarly discourse and benefit society more broadly.$20 millionworth of Azure compute credits committed to the NAIRR pilot,in addition to access to leading-edge models.4.HOW WE LEARN,E

307、VOLVE,AND GROW34RESPONSIBLE AI TRANSPARENCY REPORTInvesting in research to advance the state of the art in responsible AI Microsoft researchers are also advancing the state of the art in generative AI,frequently in partnership with experts outside of the company.Researchers affiliated with Microsoft

308、 Research87 and Aether88 published extensive research in 2023 to advance our practices for mapping,measuring,and managing AI risks,89 some of which we summarize here.Identifying risks in LLMs and their applicationsOne of our approaches for identifying risks is the Responsible AI Impact Assessment,wh

309、ich includes envisioning the benefits and harms for stakeholders of an AI application.To address the challenge of identifying potential risks before AI application development or deployment,researchers introduced AHA!(anticipating harms of AI),90 a human-AI collaboration for systematic impact assess

310、ment.Our researchers also contributed greatly to advancing red teaming knowledge through the production of tools,like AdaTest+,91 that augment existing red teaming practices.Our researchers uncovered and shared novel privacy and security vulnerabilities,such as privacy-inferencing techniques,92 or a

311、ttack vectors when integrating LLMs for AI-assisted coding.93 These researchers play a key role in shaping the emerging practice of responsible AI-focused red teaming and in producing resources to share this practice more broadly.94Research to advance our practices for measuring risksAfter weve iden

312、tified potential risks,we can measure how often risks occur and how effectively theyre mitigated.For scaling measurement practices,our researchers developed a framework that uses two LLMs.95 One LLM simulates a users interaction with a generative application,and one LLM evaluates the applications ou

313、tputs against a grading scheme developed by experts.Another area explored by our researchers is measurement validity.Thinking beyond measuring model accuracy,researchers are advancing metrics that align more appropriately with user needsfor example,when capturing productivity gains.96Our researchers

314、 have also made advancements in the emerging field of synthetic data for training and evaluating generative AI models.These include an English-language dataset for evaluating stereotyping and demeaning harms related to gender and sexuality97 and a framework for increasing diversity in LLM-generated

315、evaluation data.98Managing AI risks through transparencyResponsible use of AI applications is a shared responsibility between application developers and application users.As application developers,its important that we design mitigations that enable appropriate use of our generative applicationsin o

316、ther words,to minimize users risk of overreliance 99 on AI-generated outputs.100Our researchers have developed a number of tools and prototypes to assess AI-generated outputs and improve our products.These include an Excel add-in prototype that helps users assess AI-generated code,101 a case study o

317、f how enterprise end users interact with explanations of AI-generated outputs,102 and research on when code suggestions are most helpful for programmers.103In setting out a roadmap for transparency in the age of LLMs,104 our researchers argue that human-centered transparency is key to creating bette

318、r mitigations and controls for AI applications.Their contributions to further a human-centered approach include research on how users interact with AI transparency features,such as explanations of AI-generated outputs,105 and how to communicate model uncertainty when interacting with AI-generated co

319、de completions.106Here,weve just scratched the surface of the contributions our researchers are making to advance our understanding and practice of responsible AI.4.HOW WE LEARN,EVOLVE,AND GROW35RESPONSIBLE AI TRANSPARENCY REPORTTuning in to global perspectivesLike many emerging technologies,if not

320、managed deliberately,AI may either widen or narrow social and economic divides between communities at both a local and global scale.Currently,the development of AI applications is primarily influenced by the values of a small subset of the global population located in advanced economies.Meanwhile,th

321、e far-reaching impact of AI in developing economies is not well understood.When AI applications conceived in advanced economies are used in developing ones,there is considerable risk that these applications either will not work or will cause harm.This is particularly the case if their development do

322、es not carefully consider the nuanced social,economic,and environmental contexts in which they are deployed.Both real and perceived AI-related harms are primary drivers behind increasing calls for AI regulation.Yet developing countries are often left out of multistakeholder regulatory discussions re

323、lated to AI,even though they are pursuing AI regulation themselves.We affirm that to be responsible by design,AI must represent,include,and benefit everyone.We are committed to advancing responsible AI norms globally and adapting to the latest regulations.As our AI footprint continues to grow in dev

324、eloping countries,we must ensure our AI products and governance processes reflect diverse perspectives from underrepresented regions.In 2023,we worked with more than 50 internal and external groups to better understand how AI innovation may impact regulators and individuals in developing countries.G

325、roups included the United Nations Conference on Trade and Development(UNCTAD),the U.S.Agency for International Development(USAID),the U.S.Department of State,and the Microsoft Africa Research Institute.What we learned informed two goals:1Promote globally inclusive policy-making and regulation.AI reg

326、ulation is still in its infancy,especially in developing countries.We must recruit and welcome diverse perspectives,such as representatives from the Global South,in the global AI policy-making process.As our AI presence expands globally,we will continue to make our AI services,products,and responsib

327、le AI program more inclusive and relevant to all.We are pursuing this commitment via several avenues.2Develop globally relevant technology.We must work to ensure the responsible AI by design approach works for all the worlds citizens and communities by actively collaborating with stakeholders in dev

328、eloping countries.UNESCO AI Business Council:Microsoft and Telefonica co-chair the UNESCO AI Business Council.This public-private partnership promotes the implementation of UNESCOs Recommendation on the Ethics of AI,which has been adopted by 193 countries so far.For example,we showcase resources and

329、 processes to align with responsible AI standards in webinars and UNESCO events.We expect this effort to bring more companies and countries under a cooperative,globally relevant regulatory framework for responsible AI.Global Perspectives Responsible AI Fellowships:The Strategic Foresight Hub at Stim

330、son Center and the Office of Responsible AI established a fellowship to investigate the impacts of AI on developing countries.107 The fellowship convenes experts from Africa,Asia,Latin America,and Eastern Europe working to advance AI responsibly.They represent views across academia,civil society,and

331、 private and public sectors,offering insights on the responsible use and development of AI in the Global South.We recognize that we do not have all the answers to responsible AI.We have prioritized collaboration by partnering with a diverse range of private companies,governmental groups,civil societ

332、y organizations,regulators,and international bodies.This dynamic mix of perspectives,lived experiences,technical expertise,and concerns pushes us to continue to do better.50+groups engaged to better understand how AI innovation may impact regulators and individuals in developing countries.36LOOKING

333、AHEADRESPONSIBLE AI TRANSPARENCY REPORTLooking aheadThe progress weve shared in this report would not be possible without the passion and commitment of our employees across the company and around the world.Everyone at Microsoft has a role to play in developing AI applications responsibly.Through innovation,collaboration,and a willingness to learn and evolve our approach,we will continue to drive p

友情提示

1、下载报告失败解决办法
2、PDF文件下载后,可能会被浏览器默认打开,此种情况可以点击浏览器菜单,保存网页到桌面,就可以正常下载了。
3、本站不支持迅雷下载,请使用电脑自带的IE浏览器,或者360浏览器、谷歌浏览器下载即可。
4、本站报告下载后的文档和图纸-无水印,预览文档经过压缩,下载后原文更清晰。

本文(微软:2024年AI透明度报告(英文版)(40页).pdf)为本站 (Kelly Street) 主动上传,三个皮匠报告文库仅提供信息存储空间,仅对用户上传内容的表现方式做保护处理,对上载内容本身不做任何修改或编辑。 若此文所含内容侵犯了您的版权或隐私,请立即通知三个皮匠报告文库(点击联系客服),我们立即给予删除!

温馨提示:如果因为网速或其他原因下载失败请重新下载,重复下载不扣分。
客服
商务合作
小程序
服务号
会员动态
会员动态 会员动态:

微**... 升级为高级VIP  wei**n_... 升级为至尊VIP 

 Ke**ey 升级为标准VIP  WP**lG  升级为至尊VIP

wei**n_...  升级为至尊VIP  wei**n_... 升级为标准VIP

wei**n_... 升级为高级VIP   156**70...  升级为标准VIP

 wei**n_... 升级为至尊VIP  黑**...  升级为至尊VIP

 wei**n_...  升级为高级VIP 136**21...  升级为高级VIP

coo**on...   升级为高级VIP 卧**...  升级为至尊VIP 

桫椤  升级为高级VIP wei**n_...  升级为高级VIP

137**37...  升级为高级VIP  wei**n_... 升级为至尊VIP

136**23...  升级为高级VIP wei**n_... 升级为高级VIP

微**...  升级为至尊VIP   137**47... 升级为至尊VIP

 微**...  升级为高级VIP 137**82... 升级为高级VIP 

  水民 升级为高级VIP wei**n_... 升级为高级VIP

 wei**n_...  升级为标准VIP 153**19... 升级为高级VIP

153**19... 升级为标准VIP wei**n_... 升级为高级VIP 

158**69... 升级为高级VIP   轨迹**7 升级为至尊VIP

wei**n_... 升级为标准VIP  wei**n_...  升级为标准VIP 

 微**... 升级为标准VIP  186**52... 升级为高级VIP

技**... 升级为至尊VIP  技**...  升级为高级VIP

wei**n_...  升级为至尊VIP  清也 升级为高级VIP

 187**94... 升级为标准VIP 183**89... 升级为标准VIP

wei**n_...  升级为至尊VIP  186**59...   升级为至尊VIP

wei**n_...  升级为标准VIP 185**62...  升级为高级VIP 

 曾** 升级为至尊VIP 139**38...  升级为高级VIP 

 186**33... 升级为高级VIP 187**98...  升级为至尊VIP

 wei**n_...  升级为高级VIP  wei**n_... 升级为至尊VIP

 wei**n_... 升级为高级VIP wei**n_...  升级为标准VIP 

130**06... 升级为至尊VIP   wei**n_...  升级为至尊VIP

wei**n_...  升级为至尊VIP   tem**41... 升级为高级VIP

 185**59...  升级为高级VIP  wei**n_... 升级为至尊VIP

134**41... 升级为至尊VIP wei**n_...  升级为至尊VIP

157**31...  升级为高级VIP 152**58... 升级为高级VIP 

wei**n_... 升级为至尊VIP   wei**n_... 升级为高级VIP 

wei**n_... 升级为标准VIP  180**85...  升级为高级VIP

wei**n_... 升级为至尊VIP 156**86...  升级为至尊VIP 

 bup**27  升级为高级VIP wei**n_... 升级为至尊VIP 

石** 升级为标准VIP   136**86...  升级为至尊VIP

wei**n_... 升级为标准VIP 187**20...  升级为高级VIP

微**... 升级为高级VIP   wei**n_... 升级为高级VIP

  wei**n_... 升级为至尊VIP wei**n_...   升级为至尊VIP

 wei**n_... 升级为高级VIP  158**18... 升级为高级VIP 

  wei**n_... 升级为至尊VIP 186**10... 升级为标准VIP  

 wei**n_... 升级为标准VIP  152**84... 升级为标准VIP

 183**80... 升级为标准VIP  wei**n_...  升级为高级VIP

wei**n_...  升级为至尊VIP 133**11... 升级为至尊VIP 

130**21...  升级为标准VIP wei**n_... 升级为标准VIP  

 wei**n_...  升级为高级VIP wei**n_... 升级为至尊VIP

wei**n_... 升级为至尊VIP   182**03...  升级为高级VIP

wei**n_... 升级为高级VIP  wei**n_...  升级为至尊VIP

 136**65... 升级为至尊VIP 133**16... 升级为至尊VIP