上海品茶

NAIAC:2023国家人工智能咨询委员会第一年度报告(英文版)(89页).pdf

编号:134829 PDF  DOCX 89页 1.38MB 下载积分:VIP专享
下载报告请您先登录!

NAIAC:2023国家人工智能咨询委员会第一年度报告(英文版)(89页).pdf

1、National Artificial Intelligence Advisory Committee(NAIAC)Year l L MAY2023_J 1 TABLE OF CONTENTS I.Preface A.Letter from the Chair&Vice Chair B.Executive Summary C.Introduction D.Acknowledgements II.NAIAC Year 1 Report A.Themes,Objectives,&Actions B.Year 1 Report Appendix III.Addenda A.Committee Mem

2、ber Perspectives B.Committee Member Biographies C.Committee Overview D.Working Groups E.Briefings,Panels,&Public Comments 2 LETTER FROM THE CHAIR&VICE CHAIR Dear Mr.President,The world has changed dramatically since the National Artificial Intelligence Advisory Committee(NAIAC or the“Committee”)was

3、launched last May.Artificial intelligence(AI)now dominates the public discourse,catalyzing both excitement and concern across the globe.As a result,the relevance of our work as a Committee has increased,and we are grateful for our exceptional fellow Committee members who will ensure NAIAC achieves i

4、ts mission.It is no longer newsworthy to assert that AI is one of the most powerful and transformative technologies of our time.From automating everyday tasks to assisting medical and other scientific breakthroughs,AI is reshaping our society and opening up new opportunities.AI has the potential to

5、assist individuals,organizations,and communities and enable important innovations.AI can also help address societys most pressing challenges,such as climate change and early cancer detection.Once a mostly academic area of study,AI has and will continue to have a profound impact on nearly all sectors

6、 and every aspect of our lives.But direct and intentional action is required to realize AIs benefits,reduce potential risks,and guarantee equitable distribution of its benefits across our society.With the acceleration of AI adoption comes a parallel imperative to ensure its development and deploymen

7、t is guided by responsible governance.Such governance begins with a crucial first step:alignment on standards and best practices.And because its training and use has no physical borders,its governance must be workable and understandable for users throughout society,operating in the wide landscape of

8、 legal jurisdictions.A framework for AI governance must start by evaluating an AI systems potential risks and benefits in a particular use case and for a particular audience.Only then can we determine whether and how to proceed with its development or deployment and ensure that AI systems are worthy

9、 of our trust.While we are enthusiastic about the opportunities AI will bring individuals,communities,our economy,and our country,we also realize that this technology is not without potential and consequential flaws,complexities,and risks.AI applications are susceptible to errors and attacks that un

10、dermine public trust and could violate our laws and norms.In addition,AI can be misused by individuals and organizations to cause significant harm,like cyber intrusions or the spread of misinformation.Biases in AI systems can deepen existing disparities in opportunities and access and result in scal

11、ed discrimination,disproportionately impacting under-represented or disadvantaged communities.And privacy and security concerns stemming from AI remain a significant issue.Our Committee understands the importance of having the U.S.government address both the opportunities for AI to benefit society a

12、s well as the related concerns,establishing rules 3 and standards that comport with democratic values,civil liberties,and universal human rights.In this report and future communications,we aim to help achieve this goal and fulfill our mandate by highlighting top priorities to enable AIs opportunitie

13、s and address its challenges for society,and offering concrete and actionable steps forward.This Spring report shares our work to date as a collective body.We highlight our thoughts,areas of focus,and suggested action items on topics discussed in Year 1,as outlined in our statutory mandate.In Year 1

14、 we focused on:Leadership in Trustworthy Artificial Intelligence,Leadership in Research and Development,Supporting the U.S.Workforce and Providing Opportunity,and International Collaboration.This report also indicates issues we plan to focus on in Years 2 and 3.We flag that we have not addressed one

15、 critical area of discussion in this report:the use of AI technologies in the criminal justice system.This significant and complex issue was specifically identified in our authorizing statute with a mandate to establish a separate subcommittee to address this issue comprehensively.We are thrilled th

16、at its membership was recently approved,and very much look forward to working with the Law Enforcement Subcommittee shortly.In the coming year,we look forward to exploring issues discussed in this report further and also delving into new ones,with our steadfast focus on realizing this Presidential d

17、irective and our Committee mandate.By carefully navigating a clear and thoughtful path and balancing the competing priorities,we believe our country can and will maintain its competitive edge in AI innovation while securing economic opportunity for a broader cross section of the population.We are ho

18、nored to share our Committees insights on how the President and White House can achieve these imperatives in this and future communications.Sincerely,Miriam Vogel James Manyika Chair,NAIAC Vice Chair,NAIAC 4 EXECUTIVE SUMMARY The United States is facing a critical moment:Artificial intelligence(AI)t

19、echnology is rapidly accelerating in capability,and being deployed in more contexts with increasing use cases,both in the public and private realms.This is a moment of both significant opportunity and complexity.Our Committee has come into fruition at a time when our nation can and must position its

20、elf as a global leader in trustworthy,inclusive,and responsible AI.The National Artificial Intelligence Advisory Committee(NAIAC)first convened in May 2022.NAIAC consists of 26 leading experts in AI(listed below)who have been tasked to advise the President and the White House National AI Initiative

21、Office(NAIIO).Committee members have experience across a wide range of domains,from industry to academia to civil society.In their service on the NAIAC,Committee members provide expertise and actionable steps for AI policy and related activities how we develop AI,govern it,and ensure it is equitably

22、 created,accessed,and deployed.This work is intended to guide the U.S.government in leveraging AI in a uniquely American way one that prioritizes democratic values and civil liberties,while also increasing opportunity.Miriam Vogel(Chair)James Manyika(Vice Chair)Yll Bajraktari Amanda Ballantyne Sayan

23、 Chakraborty Jack Clark David Danks Victoria A.Espinel Paula Goldman Susan Gonzales Janet Haven Daniel E.Ho Ayanna Howard Jon Kleinberg Ramayya Krishnan Ashley Llorens Haniyeh Mahmoudian Christina Montgomery Liz OSullivan Fred Oswald Frank Pasquale Trooper Sanders Navrina Singh Swami Sivasubramanian

24、 Keith Strier Reggie Townsend This is the first formal NAIAC report,and covers the first year of our three-year appointment.The report is parsed into four major themes:(1)Leadership in Trustworthy Artificial Intelligence;(2)Leadership in Research and Development;(3)Supporting the U.S.Workforce and P

25、roviding Opportunity;and(4)International Cooperation.Under each theme,the committee offers a number of objectives for engaging with AI,from the logistical(e.g.,“Bolster AI leadership,coordination,and funding in the White House and across the U.S.government”)to the innovative(e.g.,“Create an AI resea

26、rch and innovation observatory”).In total,NAIAC presents 14 objectives.Because this report is intended to be actionable,objectives are tied to recommended actions.These actions entail creating and organizing federal AI leadership roles;standing up research and development initiatives;training civil

27、servants in AI;increasing funding of specific programs;and more.In total,the NAIAC presents 24 actions.5 Ultimately,this report frames AI as a technology that requires immediate,significant,and sustained government attention.The U.S.government must ensure AI-driven systems are safe and responsible,w

28、hile also fueling innovation and opportunity at the public and private levels.The report concludes with a look forward,explaining how the NAIAC will continue its work over the next two years and help sustain the U.S.as a global leader in trustworthy AI.6 INTRODUCTION Artificial intelligence(AI)can u

29、nlock significant opportunities for individuals,organizations,businesses,the economy,and society.AI can fuel life-saving advances in healthcare,enhance educational training and workforce readiness,and facilitate the equitable distribution of opportunity.AI also powers many everyday products and serv

30、ices,and this is only likely to increase as the applicability and usefulness of AI advances.In the last few months alone,our awareness of and interest in AI in our daily lives has increased significantly.The release of powerful new AI technologies to the general public such as Generative AI and Larg

31、e Language Models(LLMs)has opened eyes and imaginations to the potential and versatility of AI.We have seen that AI has the potential to power and propel the American economy by enabling innovation and productivity for a broader cross section of our population.AI also has the potential to help addre

32、ss many of societys greatest opportunities and challenges.It can assist with scientific discovery in the health and the life sciences.It can help with climate science and sustainability.And it can help people today survive or avoid natural disasters,with innovations like wildfire and flood forecast

33、alerts.However,like many new technologies,AI also presents challenges and risks to both individuals and society.For example,AI systems used to attract and retain talent in the workforce can expand opportunity,but could also amplify and perpetuate historical bias and discrimination at unprecedented s

34、peed and scale.Further,AI could be misused in harmful ways,such as spreading disinformation or engaging in cybercrime.AI systems could help enhance access,such as accommodating individuals with disabilities or linguistic barriers,or it could deliver incorrect diagnoses.AI could create economic oppor

35、tunity or worsen the digital divide for individuals and communities.In the workforce,we are likely to see growth of new occupations and decline of others,as well as ongoing changes to many more occupations.All such challenges magnify the need for appropriate AI oversight and safeguards.The balance w

36、e establish in addressing these two divergent AI realities fully harnessing its benefits while also effectively addressing its challenges and risks will significantly impact our future.If navigated appropriately,the U.S.government can ensure that AI creates greater opportunities,providing economic a

37、nd societal benefits for a broader cross section of the population.However,if navigated poorly,AI will further widen the opportunity gap,and trustworthy AI for all may become an unrealized aspiration.1 The importance of this moment extends beyond domestic borders,and the U.S.has an essential leaders

38、hip role on the global stage in ensuring we understand and achieve trustworthy AI.The U.S.must proactively establish mandates and mechanisms to advance 1 For purposes of this report,we rely on the definition of“trustworthy AI”provided in the NIST AI Risk Management Framework:“valid and reliable,safe

39、,secure and resilient,accountable and transparent,explainable and interpretable,privacy-enhanced,and fair with harmful bias managed”7 trustworthy AI and avoid ceding AI leadership to those entities with less equitable and inclusive goals.The National Artificial Intelligence Advisory Committee(NAIAC)

40、was created to advise the President on the intersection of AI and innovation,competition,societal issues,the economy,law,international relations,and other areas that can and will be impacted by AI in the near and long term.Committee members hail from diverse backgrounds academia,industry,civil socie

41、ty,government and all possess deep and complementary expertise in AI.Here,we present our year-one findings:high-level themes,our objectives,proposed actions,and a plan for future Committee activities.Our goal is to help the U.S.government and society at large navigate this critical path to harness A

42、I opportunities,create and model values-based innovation,and reduce AIs risks.Our findings are grounded on core beliefs,such as:the establishment of safe and effective AI systems that are opportunity-creating and beneficial to society;there must exist robust defenses against algorithmic discriminati

43、on,including support for civil rights and civil liberties;data privacy is paramount;and people deserve to know if automated decision making is being used and should always have a recourse like human intervention.This report is divided into four thematic AI areas,based on our focused efforts over the

44、 past year,guided by the concerns listed in our statutory mandate including:Leadership in Trustworthy Artificial Intelligence,Leadership in Research and Development,Supporting the U.S.Workforce and Providing Opportunity,and International Collaboration.Under each theme,we provide our broad objectives

45、 for U.S.leadership,and several,more granular recommended actions.The content was developed by five working groups,with each NAIAC member serving on two working groups,and ultimately presenting the consensus of the full Committee.There are several intended audiences for this report.In line with our

46、congressional mandate,we write this report to advise the President and the White House in navigating AI policy.We also write for the Members of Congress,to whom we are grateful for the creation of NAIAC and for their continued support for our work,and for AI innovators and policymakers more generall

47、y.Finally,as noted in our first NAIAC meeting in May 2022,we will continue to engage a broad cross section of the population that includes underrepresented communities and geographically diverse regions.We will foster a national conversation on AI governance to better understand and achieve trustwor

48、thy AI.We will do this by creating ongoing dialogues,sharing our findings,and amplifying known and new experts in this space.8 DEFINITION OF AI For the purposes of this report,the definition of an AI system is one that was established as a best practice in the recently released NIST AI Risk Manageme

49、nt Framework:“An AI system is an engineered or machine-based system that can,for a given set of objectives,generate outputs such as predictions,recommendations,or decisions influencing real or virtual environments.AI systems are designed to operate with varying levels of autonomy.”(Adapted from:OECD

50、 Recommendation on AI:2019;ISO/IEC 22989:2022.)It is important to note that NAIACs undertakings are a work-in-progress that will continue over the next two years.There are issues not addressed in this first-year report that we will focus on extensively in subsequent reports,as well as in panel discu

51、ssions and other mediums.We highlight some of those areas in the final section of this report.9 ACKNOWLEDGEMENTS This inaugural report represents the collective work of this Committee and does not necessarily represent the complete opinion of each individual Committee member or their organizations.T

52、he Committee would like to express its sincere gratitude to the National Institute of Standards and Technology(NIST)at the Department of Commerce,through the NIST Information Technology Laboratory and the NIST Directors Office,which has been responsible for administering the National Artificial Inte

53、lligence Advisory Committee(NAIAC).In particular,the Committee would like to express its gratitude to the following individuals and agencies that shared their time and insights in briefings for our relevant working groups during the Committees first year:FEDERAL AGENCIES U.S.Department of Commerce U

54、.S.Department of Health and Human Services(HHS)U.S.Department of Justice U.S.Department of Labor U.S.Department of Veterans Affairs National Science Foundation The White House We acknowledge and thank the experts and thought leaders who took the time to share their insights with our Committee,includ

55、ing:Dr.Catherine Aiken,Daniel Chasen,Rene Cummings,Tara Murphy Dougherty,Brian Drake,Dr.Kadija Ferryman,Michele Gilman,Gerard de Graaf,Hon.Don Graves,William Hurd,Andrei Iancu,Cameron Kerry,Dr.Karen Levy,Dr.Percy Liang,Hon.Dr.Laurie Locascio,Deirdre Mulligan,Dr.Alondra Nelson,Dr.Lynne Parker,Hon.Gin

56、a Raimondo,Hon.Julie Su,and Randi Weingarten.We acknowledge and thank the organizations and individuals who generously hosted our Committee meetings,including the leadership and staff at the Department of Commerce;Stanfords Institute for Human-Centered AI(HAI)and Law School,including Dr.Fei-Fei Li,R

57、ussell Wald,Celia Clark,Holly McCall,Tina Huang,and Daniel Zhang;and SAS Institute,including Debbie Williams,Barbara Flannery,Phillip Sloop,and Robert Parker.We acknowledge and thank the individuals whose support made this Committees work and report possible:Dorianna Andrade,Natasha Bansgopaul,Melis

58、sa Banner,James Bond,Alicia Chambers,Tyler Christiansen,Jennifer Chung,Isaac Cui,Landon Davidson,John Garofolo,Evi Fuelle,Ryan Hagemann,Alicia Jayson,Mark Latonero,Christie Lawrence,Chandler Morse,Jennifer Nist,Ayodele Odubela,Serena Oduro,Kathy Pham,Evangelos Razis,Mary Theofanos,Rachel Trello,Crai

59、g Scott,Meredith Schoenfeld,Reva Schwartz,Eli 10 Sherlock,Jenilee Keefe Singer,Elham Tabassi,Melissa Taylor,Shaundra Watson,Jim Wiley,Br A.Williams,Felix Wu,and Cat Xu.Public Comment Submissions Pursuant to the provisions of the Federal Advisory Committee Act,as amended(5 U.S.C.App.)and the William

60、M.(Mac)Thornberry National Defense Authorization Act for Fiscal Year 2021(P.L.116-283,FY 21 NDAA),the NAIAC receives public comments to inform its work.Thank you to all who submitted comments that have informed this report.11 NAIAC YEAR 1 REPORT:THEMES,OBJECTIVES,&ACTIONS THEME:Leadership in Trustwo

61、rthy Artificial Intelligence Objective:Operationalize trustworthy AI governance Action:Support public and private adoption of NIST AI Risk Management Framework Objective:Bolster AI leadership,coordination,and funding in the White House and across the U.S.government Action:Empower and fill vacant AI

62、leadership roles in the Executive Office of the President Action:Fund NAIIO to fully enact their mission Action:Create a new Chief Responsible AI Officer(CRAIO)Action:Establish an Emerging Technology Council(ETC)Action:Fund NIST AI work Objective:Organize and elevate AI leadership in federal agencie

63、s Action:Ensure AI leadership and coordination at each department or agency Action:Continue implementing congressional mandates and executive orders on AI Objective:Empower small-and medium-sized organizations for trustworthy AI development and use Action:Create a multi-agency task force to develop

64、frameworks for small-and medium-sized organizations to adopt trustworthy AI Objective:Ensure AI is trustworthy and lawful and expands opportunities Action:Ensure sufficient resources for AI-related civil rights enforcement 12 THEME:Leadership in Research and Development Objective:Support sociotechni

65、cal research on AI systems Action:Develop a research base and community of experts focused on sociotechnical research in the AI R&D ecosystem Objective:Create an AI Research and Innovation Observatory Action:Create an AI Research and Innovation Observatory to measure overall progress in the global A

66、I ecosystem Objective:Create a large-scale national AI research resource Action:Advance the implementation plan from the NAIRR final report to create a large-scale national research resource THEME:Supporting the U.S.Workforce and Providing Opportunity Objective:Modernize federal labor market data fo

67、r the AI era Action:Support DOL efforts to modernize federal labor market data for the AI era Objective:Scale an AI-capable federal workforce Action:Develop an approach to train the current and future federal workforce for the AI era Action:Train a new generation of AI-skilled civil servants Action:

68、Invest in AI opportunities for federal workforce Action:Boost short-term federal AI talent Action:Reform immigration policies to attract and retain international tech talent 13 THEME:International Cooperation Objective:Continue to cultivate international collaboration and leadership on AI Action:Mai

69、ntain AI leadership by expanding and deepening international alliances Action:Internationalize the NIST AI RMF Objective:Create a multilateral coalition for the Department of Commerce(NOAA)and the Department of State to accelerate AI for climate efforts Action:Establish a U.S.-based multilateral coa

70、lition for international cooperation on accelerating AI for climate efforts Objective:Expand international cooperation on AI diplomacy Action:Fully fund States newly expanded Bureau of Cyberspace and Digital Policy and newly created Office of the Special Envoy for Critical and Emerging Technology Ob

71、jective:Expand international cooperation on AI R&D Action:Stand up MAIRI via the National Science Foundation and Department of State THEME:What is Ahead for NAIAC,Years 2 and 3 YEAR 1 REPORT APPENDIX 14 THEME:Leadership in Trustworthy Artificial Intelligence OBJECTIVE:Operationalize trustworthy AI g

72、overnance _ In January 2023,per congressional mandate,the National Institute of Standards and Technology(NIST)released an AI Risk Management Framework(AI RMF),2 which was created following extensive stakeholder engagement and is already being used in numerous contexts and jurisdictions,such as in th

73、e state of California.3 The AI RMF has been well-received by a broad cross section of stakeholders,including Members of Congress,civil rights organizations,policymakers,industry,and international experts.It provides detailed guidance on how organizations can address AI risks in all phases of the AI

74、lifecycle.This framework presents the Administration with expert guidance on how to best manage AI risks internally,and to facilitate both public and private sector efforts to address these risks.The AI RMF offers a tenet:AI can help address significant and complex societal problems but AI that is n

75、ot developed and deployed responsibly can harm individuals and communities,and potentially violate civil liberties and Constitutional rights.NAIAC examined and discussed the varying degrees and types of risks related to AI,including through NAIAC public meetings in California4 and North Carolina.5 W

76、e understand that trustworthy AI is not possible without public trust,and public trust cannot be attained without clear mechanisms for its transparency,accountability,mitigation of harms,and redress.The Administration should require an approach that protects against these risks while allowing the be

77、nefits of values-based AI services to accrue to the public.As stated in the AI RMF:“AI risk management is a key component of responsible development and use of AI systems.Responsible AI practices can help align the decisions about AI system design,development,and uses with intended aim and values.Co

78、re concepts in responsible AI emphasize human centricity,social responsibility,and sustainability.AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design,develop,and deploy AI to think more critically about context and potential or u

79、nexpected negative and positive impacts.Understanding and managing the risks of AI systems will help to enhance trustworthiness,and in turn,cultivate public trust.”6 2 NIST:Artificial Intelligence Risk Management Framework(AI RMF 1.0)3 Brookings:How California and other states are tackling AI legisl

80、ation 4 NAIAC:Field Hearing 5 NAIAC:Meeting 3 6 NIST:AI RMF 15 NAIAC appreciates that the AI RMF recognizes risk from AI systems as both technical and societal.It provides a roadmap for AI development and deployment to identify new and recurring risks and harms,with the end goal of earning and maint

81、aining trust,both by users internal and external to the process.This process is flexible and is intended to be revisited and implemented throughout the AI lifecycle.ACTION:Support public and private adoption of NIST AI Risk Management Framework NAIAC recommends the White House encourage federal agen

82、cies to implement either the AI RMF,or similar processes and policies that align with the AI RMF,to address risks in all phases of the AI lifecycle effectively,with appropriate evaluation and iteration in place.We believe federal agencies can leverage the AI RMF to address issues relating to AI in s

83、coping,development,and vending processes.These include but are not limited to bias,discrimination,and social harms that arise when building,assessing,and governing AI systems.Indeed,the AI RMF is a country-,industry-,and AI-use case agnostic framework crafted for use by government,businesses,and oth

84、ers to navigate the complex path toward responsible AI governance.To facilitate AI RMF operationalization and adoption in the U.S.government,the Administration should issue an executive order creating a pilot program directing at least three agencies to implement the AI RMF.Agencies would then repor

85、t on their lessons learned within one year,including the challenges,benefits,and potential for more widespread use across the U.S.government.The Office of Management and Budget(OMB),the National AI Initiative Office(NAIIO),or another appropriate designated body should establish an interagency proces

86、s to review the agencies results and determine the effectiveness of the AI RMF and opportunities to expand its implementation.This designated body could also explore whether modifications to the approach are necessary as new versions of the AI RMF are released.AI RMF adoption need not stop at the pu

87、blic sector.The Administration should also encourage private sector adoption through available mechanisms,such as education and training,exchange and amplification of best practices,procurement policies,and conditions on receipt of federal funding.For example,the Administration could direct and fund

88、 NIST to provide continued education and training about the AI RMF and other standards and tools to small businesses who might struggle to implement the framework.Additionally,the 16 Administration could amplify and further support the AI RMFs profile development by stakeholders in coordination with

89、 NIST.7 As another example,OMB could guide agencies on the procurement process to ensure that contracting companies have adopted the AI RMF or a similar framework to govern their AI.8 OBJECTIVE:Bolster AI leadership,coordination,and funding in the White House and across the U.S.government The U.S.go

90、vernment must align on its goals for,and use of,trustworthy AI to maintain global leadership.Effective coordination and funding of federal agency efforts is one critical piece of this effort.A core principle of ensuring trustworthy AI includes meaningful participation of all stakeholders.These are i

91、ndividuals and communities impacted by,or involved in,the design of accountability systems,and redress mechanisms for algorithmic accountability.We understand that determining the appropriate body to lead on trustworthy AI within the White House must consider the internal workings,relationships,and

92、dynamics within the White House.As such,in this report we propose alternate ways for the U.S.government to structure AI leadership.Each way could provide an appropriate and sufficient mechanism to coordinate,lead,and model responsible AI use,governance,and regulation.Leadership and coordination are

93、dependent on funding.And within the White House there are areas where funding appropriations are particularly essential to enabling and maintaining U.S.leadership in AI.The National AI Initiative Office(NAIIO)is tasked with significant responsibility of interagency coordination on matters relating t

94、o AI.9 For most of NAIIOs existence,it has been staffed by three full-time equivalent(FTE)detailed10 employees,nine advisors in total.Without adequate staffing and leadership,NAIIO cannot maintain the level of output needed to meet its ongoing statutory requirements,nor provide the required interage

95、ncy coordination to ensure U.S.AI leadership.Resource challenges in government are not unique to this issue,but are of particular concern in this area.In FY 2021,the National AI Initiative Act(NAIIA)authorized over$1 billion,with escalating sums moving forward,to the Department of Commerces NIST and

96、 7 AI RMF use-case profiles are intended to illustrate implementations of the AI RMF functions,categories,and subcategories for a specific setting or application based on the requirements,risk tolerance,and resources of the Framework user.For example,an AI RMF hiring profile or an AI RMF fair housin

97、g profile 8 This approach could be similar in practice to the Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government 9 See Year 1 Report Appendix,section d 10 GSA:TTS Handbook 17 the National Oceanic and Atmospheric Administration(NO

98、AA),the National Science Foundation(NSF),and the Department of Energy(DOE)to carry out the provisions of the Act.However,these funds were not fully appropriated.The agencies responsible for carrying out the provisions of NAIIA still attempted to implement its mandate,but with insufficient funding an

99、d resources.For example,NIST published its 1.0 version of the AI RMF in January 2023;NSF established 18 AI Institutes;and NSF and OSTP stood up the National AI Research Resource(NAIRR)Task Force,which released its final report in January 2023.However,due to a lack of resources,gaps exist in developm

100、ent and implementation of critical policy initiatives.With regard to coordination,past groups and reports have recommended creating a coordinating entity within the Executive Office of the President to address the technology challenges of today,including how technology intersects with civil rights a

101、nd equity,the economy,and national security.11 Currently,multiple White House Offices,including but not limited to the National Security Council(NSC),the Office of Science and Technology Policy(OSTP),the National Economic Council(NEC),the Domestic Policy Council(DPC)and a constellation of federal ag

102、encies play critical,specific roles in setting U.S.technology policy.In line with their authority and mandate,each entitys role focuses on its distinct domain.Outside of the White House,the Department of State focuses on diplomatic efforts,including its interconnection with technology policy and dev

103、elopment.The Department of Commerce focuses on trade and technology issues through its commercial lens,such as export controls,standards development,and technology governance.And the Department of Labor is exploring AI and emerging technologys impact on the workplace.Each of these departments and of

104、fices initiatives is of critical importance.These issues can have meaningful interrelationships,but also significant redundancy.Further,a lack of coordination can cause confusion and missed opportunities,particularly with the business community,civil society,and in the rapidly developing global AI p

105、olicy community.Although White House offices such as OSTP successfully coordinate AI,the U.S.government would benefit significantly from additional direction and coordination efforts guiding national AI strategy.Specifically,there is a need for a White House entity that is sufficiently resourced to

106、systematically coordinate related technology policy and initiatives across all of its departments and offices.This function could be housed in an office currently in operation or with alternative structures,if appropriately organized and resourced,as outlined below.11 NSCAI:Final Report,chapter 9;Ma

107、y 2021 Amendment to the U.S.Innovation and Competition Act(USICA)filed by Senators Michael Bennet and Ben Sasse(the amendment was not adopted);June 2022 House Resolution 8027 introduced in the 117th Congress by Representatives Bacon,Franklin,Carbajal,and Lamb(the resolution was not adopted);and Nove

108、mber 2022 Platforms Interim Panel Report of the Special Competitive Studies Project 18 ACTION:Empower and fill vacant AI leadership roles in the Executive Office of the President NAIAC recommends the President and OSTP immediately appoint a Director of the National Artificial Intelligence Initiative

109、 Office(NAIIO),which has remained a vacant position since August of 2022,and a Chief Technology Officer of the United States(CTO),which has remained a vacant position in this Administration.These two roles are critical to ensuring leadership and consistency in AI preparedness,policy organization,and

110、 implementation across the executive branch.ACTION:Fund NAIIO to fully enact their mission NAIAC recommends the President or Congress provide sufficient resources for NAIIOs statutorily mandated coordinating functions and oversight responsibilities,including providing no less than six full-time equi

111、valent employees.These roles should be filled by permanent staff with expertise in both trustworthy AI governance and executive branch coordination.ACTION:Create a Chief Responsible AI Officer(CRAIO)NAIAC recommends the President create the permanent role of a Chief Responsible AI Officer(CRAIO).Thi

112、s new role could be announced in an executive order which clearly articulates the CRAIOs responsibilities and authority to coordinate with federal agencies.This position could sit in one of multiple offices,including the OMB or NAIIO,and report to the director in either office.The CRAIO would be tas

113、ked with implementation and advancement of trustworthy AI principles12 across agencies,a cohesive AI interagency strategy,and response to executive orders in this domain.The CRAIO would draw on tools like the AI RMF and Blueprint for an AI Bill of Rights and also meaningful stakeholder engagement,pa

114、rticularly with impacted communities.Further,the CRAIO should determine whether additional Chief AI Officers are necessary in additional agencies where they do not yet exist.The CRAIO should 12 Trustworthy AI principles as defined in Executive Order 13960 and Executive Order 14091 4(b),the AI RMF,an

115、d the Blueprint for an AI Bill of Rights 19 create a structure to interface with counterparts at each federal agency implementing AI.ACTION:Establish an Emerging Technology Council(ETC)NAIAC recommends the establishment of an Emerging Technology Council(ETC).The ETC would coordinate and drive techno

116、logy policy across the U.S.government and ensure that the opportunities and challenges associated with these technologies are addressed in a holistic and ethical manner.The ETC should be led by the most senior levels of the White House.One option would be for the ETC to be led by the Vice President

117、and composed of cabinet and key White House leaders.The ETC would focus attention on three key pillars:(1)civil rights and equity;(2)the economy;and(3)national security.The three pillars should be treated as equal and overlapping policy considerations.The council would provide greater AI and related

118、 technology coordination within the White House and government interagency,and ensure that any gaps among OSTP,NSC,NEC,OMB,departments,and agencies of defense and nondefense posture are filled and linked.Such a council could play an important role in coordinating policies until OSTP and the NAIIO ar

119、e strengthened in responsibilities,resources,and staff to perform the tasks of addressing AI and related technologies in the short term.Or,this council could play a longer-term role in partnership with OSTP and NAIIO based on the focus and efforts designated for their leadership.The suggested member

120、s of the ETC could include:The Vice President(Chair);White House Chief of Staff;National Security Advisor;OSTP Director;NEC Director;U.S.Trade Representative;OMB Director;Director of National Intelligence;Domestic Policy Council Director;and Cabinet Secretaries from the Departments of State,Defense,

121、Treasury,Commerce,Homeland Security,Justice,Energy,Health and Human Services,Labor,and Education.20 The Chair should have flexibility to include other government leaders,as deemed necessary,including leaders who may not be cabinet-level but may be able to provide substantive expertise.The ETC would

122、not replace the NSC,NEC,or OSTP-led NSTC structures,nor would it supplant the independent,mission-specific work of departments and agencies.Rather,the ETC would elevate interrelated key issues in the technology space and treat them as overlapping and adjacent technology and budgetary priorities.Thes

123、e issues may include domestic security,impacts to trade and labor,and supporting human rights,like mitigating algorithmic bias.ACTION:Fund NIST AI work NAIAC recommends adequately funding the National AI Initiative Act(NAIIA)programs and associated AI activities at NIST.NAIIA provides an overarching

124、 framework to strengthen and coordinate AI research,development,demonstration,and education activities across all U.S.departments and agencies,in cooperation with academia,industry,non-profits,and civil society organizations.NIST has not only achieved the significant AI developments with which they

125、have been charged,but also earned international acclaim for those efforts,including on the recently released AI RMF.Yet,NIST is underfunded,especially NISTs Trustworthy and Responsible AI Program.13 Fully funding NIST will advance NIST efforts to carry out NAIIA provisions,such as establishing testb

126、eds for the benchmarking and evaluation of AI systems(a key piece to fulfilling the promise of the AI RMF);increasing participation in standards development activities;and growing technical and sociotechnical staff.Further,lack of funding hinders NISTs ability to educate and thereby strengthen the U

127、.S.business community,AI researchers,AI governance experts,and other stakeholders,including foreign companies and likeminded governments,about the AI RMF.To continue to fulfill this crucial role of providing standards,guidance,and evaluation programs,NIST will require a sufficient sustained budget.W

128、e stress the importance of this recommendation given the high stakes and urgency of these tasks,which are crucial to supporting the development and deployment of AI and will impact government,industry,civil society,and the general public alike.13 See:Section 5301(g)of the National AI Initiative Act;

129、Administrations FY 2023 budget request for NISTs AI activities 21 OBJECTIVE:Organize and elevate AI leadership in federal agencies The U.S.government must lead by example in adopting and promoting trustworthy AI.The President and Congress have prioritized trustworthy AI innovation and adoption both

130、inside and outside the federal government,as demonstrated by numerous executive orders and legislation.14 This progress is welcome.However,a recent assessment of the implementation of AI-specific executive orders and the AI in Government Act demonstrates that the U.S.government can do more to lead b

131、y example.15 Requirements should be implemented to foster agencies strategic planning around AI,increase awareness about agencies use and regulation of AI,and strengthen public confidence in the federal governments commitment to trustworthy AI.16 The assessment of longstanding existing legal require

132、ments in AI-specific executive orders and congressional mandates17 reveals the importance of senior leadership and strategic planning at each department and agency.Agencies need empowered officials and strong organizational leadership to meaningfully comply,in a timely manner,with pre-existing and f

133、orthcoming legal requirements.They also need leadership to capture benefits AI may offer agencies,like increased efficiency and more equitable benefits provision.Promoting innovation and fostering public trust requires a clear and equitable AI strategy that empowers and holds its senior leaders acco

134、untable.Likewise,a well-articulated government AI strategy would help agencies promote consistent,trustworthy AI development,acquisition,and use.Although several do have such a strategy,all U.S.federal departments and agencies would benefit from a strategic plan that articulates their goals for AI d

135、esign,development,procurement,and adoption;that signals approaches to implementing trustworthy AI principles;that creates priorities for promoting trustworthy AI innovation in the private sector;and that builds associated internal organizational and governance structures.18 Research on embedding tru

136、stworthy AI innovation into institutions indicates the importance of having executive-level support and cross-functional teams with technical 14 Executive Order 13859,Maintaining American Leadership in Artificial Intelligence;Executive Order 13960,Promoting the Use of Trustworthy Artificial Intellig

137、ence in the Federal Government;Executive Order,14091,Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government;Executive Orders 13960 and 13859;AI in Government Act 15 Christie Lawrence,Isaac Cui,and Daniel E.Ho:Implementation Challenges to Three Pillars

138、of Americas AI Strategy 16 Year 1 Report Appendix,section e 17 In Executive Order 13859,Executive Order 13960,and the AI in Government Act;Implementation Challenges to Three Pillars of Americas AI Strategy 18 A number of federal departments and agencies have published AI strategies,but the majority

139、of these public facing documents do not provide the level of detail and delineated responsibilities for specific stakeholders necessary for the strategic planning proposed here.e.g.,NAIIO:U.S.Federal Agency AI Strategy Documents 22 and domain expertise that can dedicate significant time and resource

140、s.19 Yet,there is a lack of clarity on who is participating and leading in the U.S.governments current AI ecosystem.No existing executive order or statute requires agencies to identify and designate a senior official to lead its AI efforts.Executive Order 13960 Section 8(c)requires agencies to“speci

141、fy the responsible official(s)at that agency who will coordinate implementation.”20 However,agencies can delegate this position,as well as other AI-specific requirements,to junior staff who may lack sufficient decision-making authority.Conversely,the Foundations for Evidence-Based Policymaking Act o

142、f 2018 required each agency to identify a Chief Data Officer,Evaluation Officer,and Statistical Officer.OMB also provided agencies with a memorandum of guidance and expectations for the designation of these officials and their roles within agencies.21 ACTION:Ensure AI leadership and coordination at

143、each department and agency NAIAC recommends ensuring senior agency leadership(e.g.,a Chief AI Officer)and staff at each department or agency provide clarity and transparency,while also ensuring the executive branch captures the benefits and promotes the adoption of trustworthy AI inside and outside

144、of government.We also recommend five avenues for developing AI strategy coordination and leadership at each department or agency:First,clarify who is leading or participating in the AI ecosystem within the government at the agency level.We suggest creating organizational mappings across federal agen

145、cies and the White House that include:(1)primary authority;(2)leadership team;and(3)point of contact for AI development and agency-level policy making.Second,appoint and resource dedicated AI leadership in agencies.Each agency should have a senior-level official(i.e.,Senior Executive Service,Senior

146、Level22 or political appointee)that is sufficiently resourced and empowered to determine whether an AI tool is appropriate to adopt in the first place and if so,institute oversight for AI development,deployment,and use within the agency.Given the 19 World Economic Forum:Ethics by Design:An organizat

147、ional approach to responsible use of technology;U.C.Berkeley Center for Long-term Cybersecurity:Decision Points in AI Governance,UC Berkeley Center for Long-Term Cybersecurity;Alex Mankoo,Aoife Spengeman,and Danil Mikhailov:Integrating Ethics into Data Science:Insights from a Product Team 20 Executi

148、ve Order 13960,Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government 21 White House:Phase 1 Implementation of the Foundations for Evidence-Based Policymaking Act of 2018:Learning Agendas,Personnel,and Planning Guidance 22 The Senior Level(SL)category is used by“agencies

149、that are statutorily exempt from inclusion in the Senior Executive Service(SES)”to staff positions classified above the GS-15 level:Policy,Data,Oversight:Senior Executive Service,OPM 23 dual obligation to promote the use of appropriate AI and implement the trustworthy AI principles,this agency senio

150、r-level officials responsibilities should include:Acting as the primary point of contact and expertise on AI strategy and trustworthy AI within the given agency,and the coordinating member across agencies,particularly in relation to the NAIIO and the Chief Responsible AI Officers directions and requ

151、ests;Promoting responsible AI innovation within the agency through deliberative design,development,and deployment,such as identifying and overseeing pilot projects and removing internal barriers to AIs creation and use;Overseeing compliance with existing legal requirements23 and future legal require

152、ments,and efforts to manage AI risk(e.g.,AI RMF);and Ensure procurement of AI tools and systems is aligned with the agencys trustworthy AI principles.This official will establish and oversee internal AI governance structures and spearhead collaboration both within the agency and across the interagen

153、cy,including with the Executive Office of the President(through reporting and coordination,where appropriate).There are several pathways to ensure a senior-level official at each agency.First,existing Agency Chief Technology Officers and/or Chief Information Officers could be assigned these capabili

154、ties and responsibilities,if given sufficient resources and authority to extend their work to include this responsibility.Alternatively,the Administration could appoint a Chief AI Officer(CAIO)at those agencies where one does not yet exist.The CAIO would be distinct from,but coordinate with,the Chie

155、f Information Officer and other relevant officials,such as the Chief Data Officer,Evaluation Officer,and Statistical Official.For the agencies with a Responsible AI officer already in place,this person or their superior could serve in this role,and fulfill the responsibilities delineated above.To fi

156、nalize adoption of this recommendation,the President could issue an executive order requiring agencies to(1)designate a senior official to oversee AI efforts in the agency;and(2)provide the designated official with sufficient authorities and resources,including staff,to achieve their responsibilitie

157、s.If the executive order directs agencies to designate a Chief AI Officer(CAIO)to coordinate,oversee,and advise on different elements of AI development,use,and procurement,then the executive order should require OMB to issue a memorandum,similar to the memorandum issued for the Evidence Act.This mem

158、orandum would provide guidance on how to choose and empower a CAIO,as 23 Executive Order 13960,Executive Order 13859,Executive Order 14091,AI in Government Act,among other AI-related laws and executive orders 24 well as how to establish relevant governance bodies and other internal structures.The ex

159、ecutive order should specify the agencies that are subjected to this requirement.24 The NAIAC recommends this requirement apply,at a minimum,to all cabinet-level departments and agencies,and agencies subject to the Chief Financial Officers Act,25 although there are additional departments and agencie

160、s with demonstrable AI use cases that may also benefit from a strategic plan and dedicated senior official.26 Third,reinstitute OMB-led meetings of Deputy Secretaries through“The Presidents Management Council,”to the extent they are not currently in effect,and add AI governance and policy to the age

161、nda.Such meetings ensure senior-level attention on critically important issues that benefit from interagency coordination,to include implementation of executive orders related to AI use.This process could be facilitated through routine meetings convened by the OMB Deputy Director with deputies at th

162、e federal agencies,whose membership would be determined by the OMB Director.Deputies Council meetings could also explore other efficiencies and challenges that are of high-level concern and warrant leadership attention and consensus on action.Such issues in the AI space could include AI procurement

163、protocols,preparedness for emerging and new technologies(e.g.,Generative AI,LLMs),cyberthreats emerging from AI use,and adoption of the AI RMF or a similar framework by appropriate recipients of federal funds.Fourth,develop a strategy at each department or agency for the adoption,approach,and incorp

164、oration of AI systems and trustworthy AI principles.Given the varying levels of adoption of AI use and AI principles across agencies,we expect each agency will develop a strategy specific to its needs to develop and integrate AI,but should respond to requirements to:Promote responsible AI innovation

165、,where appropriate,within the agency through deliberative design,development,and deployment;27 Identify pilot project opportunities;Highlight opportunities to eliminate unnecessary barriers to trustworthy AIs development and deployment;Test AI applications in a manner that ensures compliance with la

166、w and public values;Require substantiation of vendor claims about AI;24 There are hundreds of agencies and sub-agencies and there are some agencies where this requirement may not be relevant or desired(e.g.,if the agency has limited to no uses of AI or has particularly limited staff)25 CIO:2.4 Chief

167、 Financial Officers Act;The White House:The Cabinet 26 David Freeman Engstrom,Daniel E.Ho,Catherine M.Sharkey,and Mariano-Florentino Cullar:Government by Algorithm:Artificial Intelligence in Federal Administrative Agencies;The Administrative Conference of the United States;Christie Lawrence,Issac Cu

168、i,and Daniel E.Ho:Implementation Challenges to Three Pillars of Americas AI Strategy 27 As directed by Executive Order 13960 25 Pursue six strategic objectives for promoting and protecting American leadership in AI,consistent with Executive Order 13859s mandate;Affirmatively advance civil rights,per

169、 Executive Order 14091,including by protecting the public from algorithmic discrimination;Implement trustworthy AI principles,as mandated by Executive Orders 13960 and 14091 when designing,developing,acquiring,and using AI;Realize these functions through sufficient organizational structures,processe

170、s,policies,and responsible parties;and Achieve other stated goals of AI-related executive orders28 and statutes.29 Fifth,foster responsible innovation and procurement of AI.When implementing their AI strategy and governance structures,each agency should embed the trustworthy AI principles into the d

171、evelopment cycle without unduly stifling innovation.30 Each potential use of AI should necessarily start with the question of whether an AI tool is the appropriate and best solution.Agencies should foster a culture of continuous piloting and experimentation,mindful of the multi-stakeholder and socio

172、technical considerations addressed in this NAIAC report.An evaluation process should include testing of AI systems for safety and functionality,assessment of impact on stakeholder groups,and processes for reporting,mitigation,and redress of harms should harms occur.The AI design process should there

173、fore include,at a minimum:Computer scientists,social scientists,legal scholars with technology domain expertise,and other stakeholders including historically impacted communities with lived expertise of AI systems;Piloting and evaluation of interventions;and Performance measurements and evaluation a

174、gainst the status quo baseline,with a commitment to continuous improvement against past-performance baselines.The U.S.government has an opportunity to lead in the procurement of trustworthy commercial AI,which is worthy of significant review and discussion by NAIAC or other capable bodies.28 Executi

ve Order 13960, Executive Order 13859, Executive Order 14091
29 AI in Government Act, Sect. 104(c)
30 AI innovation, however, should be appropriate and guided by the trustworthy AI principles in Executive Order 13960, including: "(b) Purposeful and performance-driven. Agencies shall seek opportunities for designing, developing, acquiring, and using AI, where the benefits of doing so significantly outweigh the risks, and the risks can be assessed and managed."

To promote transparency, agencies and the White House should, where appropriate, publicize actions taken pursuant to this recommendation.

ACTION: Continue implementing congressional mandates and executive orders on AI

NAIAC recommends the continued implementation of existing and forthcoming congressional mandates and executive orders on AI oversight. We understand that the OMB has taken steps to fulfill outstanding obligations in response to past orders and mandates.31 To expedite implementation, federal entities (departments, agencies, and White House-level offices) need sufficient resourcing and staffing to carry out long-standing requirements and implement new efforts. We support continued allocation of resources for these federal agency efforts underway, as well as to understand current AI use and establish a strategy for future AI adoption. In the Year 1 Report Appendix, section b, we note existing AI-related functions required by the cited executive orders and congressional mandates that would be of significant benefit to ensuring transparency and infrastructure to support trustworthy AI. To do so, we recommend the President make appropriate funding requests for increased appropriations for OMB, the Office of Personnel Management (OPM), the General Services Administration (GSA), and relevant federal entities.

GSA's AI Center of Excellence could serve as a helpful resource. Per congressional mandate in the AI in Government Act, GSA facilitates the adoption of trustworthy AI throughout the U.S. government. This includes building workforce exchange mechanisms32 that better advise and consult agencies on AI design, development, acquisition, and use. To increase the number of agencies that can benefit from the Center of Excellence's services, we recommend that the center have an additional funding model. It could receive appropriated funding33 in addition to a revolving fund, as is the case with 18F, where partner agencies must reimburse GSA for labor, material costs, and overhead.34 Some amount of baseline budget could be offered competitively based on specific metrics, including scale of potential impact to citizens, savings in costs to the U.S. government, or significance of potential threat that services would be used to help address.

31 Executive Order 13859, Executive Order 13960, and the AI in Government Act, as addressed in the report Implementation Challenges to Three Pillars of America's AI Strategy
32 18F recruits IT experts that it assigns to agencies. Although 18F used special hiring authorities like Schedule A excepted service, it has increasingly been using competitive service direct-hire authority; U.S. Government Accountability Office: Digital Service Programs Need to Consistently Coordinate on Developing Guidance for Agencies
33 "To carry out its mission, USDS receives appropriated funding, as well as reimbursements from the agencies to which it has extended digital service teams. USDS officials said that the program uses its own appropriations to fund core activities. This funding allows it to prioritize projects with urgency and impact and reduces the barrier to critical technical projects, such as at small agencies with smaller budgets."
34 Ibid.
OBJECTIVE: Empower small and medium sized organizations for trustworthy AI development and use

Trustworthy AI is a stated goal of numerous public and private sector entities. However, one challenge to the widespread adoption of trustworthy AI for societal benefit is the general lack of knowledge and skills required to implement the required translational efforts. This is particularly true among small- and medium-sized organizations (SMOs), which rarely have the resources or capacity to build full divisions or offices for trustworthy AI. We are not aware of a sufficient number of entities providing translational efforts to build capabilities and knowledge for trustworthy AI in SMOs. Currently, practices, standards, and frameworks for designing, developing, and deploying trustworthy AI are created in organizations in a relatively ad hoc way depending on the organization, sector, risk level, and even country. Regulations and standards are being proposed that require some form of audit or compliance, but without clear guidance accompanying them. Advances in trustworthy AI require the development and validation of practical capabilities, scaffolding, training, and guidance on a large scale. This type of work can provide benefits for a wide array of stakeholders. But closing these gaps in resources, knowledge, methodologies, and skills will require critical support and engagement from a broad range of partners. To be sure, some organizations already develop tools, skills, and capabilities for SMOs. For example, there are nonprofits that provide data science expertise for companies working in the public interest. Other nonprofits help SMOs integrate privacy and responsible data stewardship across their companies.35 Nonprofits like these could be brought together as stakeholders, in order to maximize and further grow their impact.
ACTION: Create a multi-agency task force to develop frameworks for small- and medium-sized organizations to adopt trustworthy AI

NAIAC recommends the creation of a multi-agency task force that includes representatives from the Small Business Administration (SBA); NIST; NSF Directorate for Technology, Innovation and Partnerships (TIP); and GSA. This task force should include key stakeholders from across government, industry, academia, and civil society, with an emphasis on inclusion of impacted communities36 and historically marginalized groups. The task force would establish a jointly-funded, public-private entity for: (1) efforts to establish and validate practical methods and frameworks for trustworthy AI development and assessment by SMOs; and (2) creation of workforce development, education, training, and, as appropriate, consultative and evaluative capabilities, for SMOs outside of the U.S. government to advance AI for societal benefit.

This public-private entity should have stable, multi-year funding from multiple stakeholders, including the U.S. government, private philanthropy, and industry.37 This entity should have scientific and administrative advisory boards, with members from both funders and representatives of the public. All of this entity's efforts (best practices, validation measures, voluntary standards, training materials, and so forth) should be made freely available to the public using standard open-source and Creative Commons (CC) licenses. This would ensure that the translational efforts provide maximal public benefit. Any necessary maintenance costs and efforts should be included in each project from the outset. Annual reports should be transparent about projects, engagements, trainings (including any consulting or evaluations), and funding sources. In addition, this entity should drive regular, proactive outreach and engagement with impacted communities, vulnerable populations, key stakeholders, and the general public to identify strategic emphases for the translational efforts, as well as focus areas for its support of capability and workforce development in SMOs. A majority of the entity's projects should be responsive to these specific needs, with other projects determined by competitive proposals. In all cases, the projects should contribute to the development of trustworthy AI for widespread societal benefit.

Industry guidance and insights will be critical to ensure that the translational knowledge and capabilities produced by this entity are relevant and useful in the development of more trustworthy AI. Importantly, the industry engagement and support need not involve proprietary technologies or methods, but only high-level or public information about processes and frameworks that are conducive to trustworthy AI. Although this entity should collaborate with industry organizations, its efforts should not be guided by commercial considerations.

35 EFF Certbot
36 Michele E. Gilman: Beyond Window Dressing: Public Participation for Marginalized Communities in the Datafied Society
37 We suggest at least four reasons why private industry would be interested in participating, including funding. First, these translational efforts would potentially benefit the entire sector if trustworthy AI becomes more widespread. Second, this entity would help build capabilities, skills, and knowledge in potential partner SMOs. Third, these efforts would help to ensure that SMOs are able to use products that were previously open-sourced by larger companies. Fourth, there is potentially significant public benefit from the broader design, development, and use of trustworthy AI, which would provide broad benefits for these companies. This entity would share some similarities to NSF TIP's Convergence Accelerator program, which funds the many translational efforts required to move basic research (much of which is funded by other parts of NSF) into widespread commercial and public use.
The entity should provide guidance and contributions to international discussions, particularly with regard to standards-setting groups and deliberations. The recommended entity would also provide a neutral venue for information gathering and dissemination, as well as convening different stakeholders with interests in establishing and validating best practices for advancing and evaluating trustworthy AI. Efforts that focus on education, training, and workforce development for SMOs would provide complementary projects to NSF's program on Expanding AI Innovation through Capacity Building and Partnerships (ExpandAI) and other efforts that focus on more-traditional educational institutions, including minority-serving institutions (MSIs). These efforts could take the form of direct support and consultation, but this entity should emphasize scalable, accessible efforts. These engagements could also be conducted through the establishment of an entity along the lines of a "trustworthy AI reserve corps," composed of individuals with the necessary expertise and interests who are affiliates, rather than employees or contractors, of the public-private entity.

This entity's efforts could also include:38
- Clear articulation and validation of best practices for collaborative and value-centered design, including risk and benefit elicitation, incorporation, and evaluation;
- Validated processes for red-teaming and other types of adversarial evaluation;
- Development of testbeds and other mechanisms for real-world performance benchmarking, audit, and evaluation of trustworthy AI systems; and
- Educational and training materials appropriate for developers, evaluators, users, or the general public, with a particular emphasis on under-resourced or historically marginalized communities and regions.

38 This list is not intended to be exhaustive, nor are the items in the list discrete and separable, as there are many connections between them (e.g., red-teaming can be part of validated best practices).

OBJECTIVE: Ensure AI is trustworthy and lawful and expands opportunities

In the coming year, NAIAC aims to explore ways to amplify opportunities and access through AI, such as accommodations in the workplace; growing skills and unlocking economic opportunity for workers; and personalized and innovative ways to support our children's education. An important piece of this puzzle is ensuring that use of AI is lawful and that it neither perpetuates nor scales bias and inequality.
U.S. government agencies have recently highlighted that the use of AI-based tools in recruiting, hiring, and monitoring employees can violate existing law if they discriminate against people based on their protected class. The Department of Justice (DOJ) and the Equal Employment Opportunity Commission (EEOC) have noted that AI-based tools used to recruit, hire, and monitor employees can violate the Americans with Disabilities Act by discriminating against people with disabilities.39 Likewise, the DOJ has addressed unacceptable and illegal occurrences where Black and Hispanic rental applicants are discriminated against when algorithmic systems inappropriately score and screen their applications.40 President Biden has clearly articulated his interest in ending discrimination and bias (including algorithmic discrimination and bias), unequivocally stating that "when any person or community is denied freedom, dignity, and prosperity, our entire Nation is held back."41

The use of AI to create opportunity depends significantly on building and maintaining public trust. Already, executive orders have directed agencies to "design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values."42 Dating back to 2016, the U.S. government has increasingly affirmed the importance of combating algorithmic discrimination.43 Heeding this call, federal departments and agencies have directed their respective civil rights authorities and offices to promote equity, prevent and remedy algorithmic discrimination, and eliminate other uses of AI that violate existing law.44 Specifically, the Departments of Justice (DOJ),45 Labor (DOL), Health and Human Services (HHS), Housing and Urban Development (HUD), as well as the Equal Employment Opportunity Commission (EEOC), Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), and General Services Administration (GSA) have released guidance documents for industry,46 launched compliance initiatives, and confirmed that existing anti-discrimination laws apply to algorithmic discrimination.47 In particular, DOJ is a key agency ensuring the enforcement of civil rights and anti-discrimination laws48 as well as laws that touch on the use of AI in many other areas, such as education, healthcare, employment, housing, credit, policing, criminal justice, and access to consumer goods (see section a, table 1 in Year 1 Report Appendix).49

39 DOJ: Justice Department and EEOC Warn Against Disability Discrimination; EEOC: The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees; CDT: How Automated Test Proctoring Software Discriminates Against Disabled Students. Center for Democracy and Technology
40 DOJ: Justice Department Files Statement of Interest in Fair Housing Act Case Alleging Unlawful Algorithm-Based Tenant Screening Practices
41 Federal Register: Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government
42 Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government
43 Executive Office of the President: Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights; DOJ: Justice Department Announces New Initiative to Combat Redlining
44 OSTP: Blueprint for an AI Bill of Rights
45 DOJ: Assistant Attorney General Kristen Clarke Delivers Keynote on AI and Civil Rights for the Department of Commerce's National Telecommunications and Information Administration's Virtual Listening Session
46 EEOC: U.S. EEOC and U.S. Department of Justice Warn against Disability Discrimination
47 For more details, see Fact Sheet, Executive Office of the President, Biden-Harris Administration Announces Key Actions to Advance Tech Accountability and Protect the Rights of the American Public; Executive Order 13985 Equity Action Plan (explaining that GSA is "dedicated to actions that prioritize equitable user experience as a core design principle, mitigate algorithmic bias, improve digital accessibility, and modernize the delivery of government services to the American people")
DOJ's enforcement of civil rights is generally led by its Civil Rights Division (CRT),50 which initiates investigations and compliance with civil rights laws and also acts upon referrals received from other departments and agencies.51 CRT is rising to the challenge of protecting civil rights and enforcing anti-discrimination laws within the AI context by "taking a holistic approach and marshaling its resources" to combat algorithmic discrimination and address "AI issues that intersect with civil rights, civil liberties and equal opportunity."52

The technical talent and resource gap seen across the U.S. government also impacts the DOJ. Currently, under the program for "upholding civil rights in the age of artificial intelligence," the Department lists one attorney53 and has requested 24 full-time employees (FTE), including 15 attorneys. Other federal agencies are devoting resources and hiring new staff to tackle civil rights risks arising from AI. For example, the Consumer Financial Protection Bureau intends to hire 25 technologists to support its supervision and enforcement actions, including on AI.54

In addition, structural and legal impediments can hinder assessments of algorithmic discrimination. Individuals and civil rights agencies often do not have full visibility into a company's AI tool, including information about the data used, the way the tool algorithmically accounts for demographic information or proxy features, or the impact its use has on different demographic groups.55 Allegations of discrimination often require federal prosecutors to demonstrate to a court that someone was plausibly discriminated against because of their specific protected status (such as their race or gender) or because of a specific policy or practice.56 But because of information asymmetries in algorithm-based claims, investigations can be challenging.57

Congress has provided DOJ and other federal agencies with the authority to compel entities to provide certain documents and information to aid investigations.58 Such "administrative subpoenas," which include administrative and civil investigative demand authority (CID), provide executive branch agencies with sufficient investigatory power to fulfill their statutory obligations to regulate and enforce current law.59 As of 2002, Congress had provided approximately 335 administrative subpoena authorities to various executive branch entities.60 Notably, DOJ already has authority to issue CIDs based on antitrust laws, which cover all kinds of economic activity, as well as the False Claims Act, which regulates efforts to defraud the U.S. government in a contract.61 Those laws, like civil rights law, regulate conduct where private parties have a strong interest in hiding their activities, and so prosecutors need some way to compel evidence. In all cases, the government's CIDs must be "reasonably relevant" to the law enforcement purpose,62 which prevents prosecutors from going on unnecessarily burdensome or intrusive fishing expeditions. (See section c, table 2 in Year 1 Report Appendix for a non-exhaustive list of additional examples of CID authority in many areas of civil rights law.)

The proliferation of AI and automated systems used in education, healthcare, housing, employment, credit, policing and criminal justice, and access to consumer goods has therefore placed more strain on civil rights agencies' ability to combat algorithmic discrimination, while simultaneously preventing and remedying traditional discrimination. Thus, the NAIAC's initial research indicates that the CRT could greatly benefit from increased technical talent and resources, and should explore whether these tools, such as CIDs, could also be helpful in this regard. Note: NAIAC will continue to explore this objective in the years ahead via our forthcoming Law Enforcement Subcommittee.

48 DOJ: Justice Department Secures Groundbreaking Settlement Agreement with Meta Platforms, Formerly Known as Facebook, to Resolve Allegations of Discriminatory Advertising; DOJ: Justice Department Announces Settlement with Gap Inc., While Celebrating the 35th Anniversary of a Law Prohibiting Immigration-Related Employment Discrimination; DOJ: Justice Department Settles with Microsoft to Resolve Immigration-Related Discrimination Claims; DOJ: Justice Department Settles with Large Health Care Organization to Resolve Software-Based Immigration-Related Discrimination Claims
49 DOJ: Civil Rights Division, Fiscal Year 2023 Performance Budget; DOJ: Assistant Attorney General Kristen Clarke Delivers Keynote on AI and Civil Rights for the Department of Commerce's National Telecommunications and Information Administration's Virtual Listening Session
50 Note: DOJ uses the acronym "CRT" to refer to the Civil Rights Division
51 See Year 1 Report Appendix, section f
52 DOJ: Assistant Attorney General Kristen Clarke Delivers Keynote on AI and Civil Rights for the Department of Commerce's National Telecommunications and Information Administration's Virtual Listening Session
53 DOJ: Civil Rights Division, Fiscal Year 2023 Performance Budget, Congressional Justification, pg. 98
54 Protocol: In its battle with Big Tech, the CFPB is building an army of engineers
55 For examples, see the "What should be expected of automated systems" action of the Algorithmic Discrimination Protections section of the Blueprint for an AI Bill of Rights
56 Discrimination claims are generally based on disparate treatment or disparate impact. e.g., 29 CFR 1607.11 - Disparate treatment; OCC: Fair Lending; DOJ: Title VI Legal Manual, Section VII: Proving Discrimination - Disparate Impact
57 Kelsey Finn: The Harsh Reality of Rule 8(A)(2): Keeping the Twiqbal Pleading Standard Plausible, Not Pliable, pg. 49; Virginia Foggo and John Villasenor: Algorithms, Housing Discrimination, and the New Disparate Impact Rule, pg. 22
58 See Year 1 Report Appendix, section b; DOJ Administrative Subpoena Authorities Held by the Department of Justice
59 DOJ: Report to Congress on the Use of Administrative Subpoena Authorities by Executive Branch Agencies and Entities, Pursuant to P.L. 106-544, Section II
60 DOJ: Report to Congress on the Use of Administrative Subpoena Authorities by Executive Branch Agencies and Entities, Pursuant to P.L. 106-544, Section 7
61 DOJ: Report to Congress on the Use of Administrative Subpoena Authorities by Executive Branch Agencies and Entities, Pursuant to P.L. 106-544, Section 7; JDSupra: 3 Ways to Respond to a DOJ Civil Investigative Demand (CID)
62 United States v. Morton Salt, 338 U.S. 632, 652 (1950)
ACTION: Ensure sufficient resources for AI-related civil rights enforcement

NAIAC recommends the U.S. government identify and reduce potential algorithmic discrimination by continuing to support civil rights agencies and ensuring they have sufficient tools to address this critical task. NAIAC will continue to engage key stakeholders and explore specific recommendations to protect civil and human rights in the age of AI. This may include strengthening existing or developing new mechanisms across agencies. NAIAC also recommends agencies and departments supporting our civil and human rights in this emerging legal landscape obtain additional resources. As noted above, there are unique challenges arising from investigating, prosecuting, and enforcing settlements related to algorithmic discrimination. And at the same time, if AI is going to be trusted, the general public will need to know that the offices tasked with ensuring the protection of their rights are sufficiently equipped. As such, we suggest:

First, increase funding to DOJ by at least $4.45 million to grow staff to sufficient levels and with adequate technical expertise. We support this DOJ budget request (discussed above) because these resources would enable the CRT to investigate additional potential violations63 of existing law, file lawsuits, and take other relevant enforcement actions, as well as coordinate with other federal agencies on related enforcement actions.

Second, DOJ and other agencies in this space should explore fellowships, secondments, Intergovernmental Personnel Act assignments,64 and other vehicles to bring in technologists to support this work. Working with technologists, social scientists, ethicists, and others with AI expertise helps enforce existing anti-discrimination laws where violations result from the use of algorithmic decision-making. Combatting algorithmic discrimination that violates existing law requires that the CRT "evolve to match a changing legal, commercial, technological, and social landscape."65

Third, determine whether civil investigative demand (CID) and other administrative subpoena authority to investigate algorithmic discrimination that may violate existing anti-discrimination law would be helpful. We recommend exploring other traditional tools that may not yet have been applied to the government's work in the technology space, equivalent to what Congress has provided other DOJ divisions, in order to balance changing technology with existing legal obligations.

63 DOJ: Civil Rights Division, Fiscal Year 2023 Performance Budget, Congressional Justification, pgs. 96, 98
64 The authority under the Intergovernmental Personnel Act allows individuals from academia, state and local governments, Indian tribal governments, nonprofits, and other eligible organizations to work in a federal agency, like the Justice Department, for up to two years: OPM, Policy, Data, Oversight, Hiring Information: Intergovernmental Personnel Act
65 DOJ: Civil Rights Division, Fiscal Year 2023 Performance Budget, Congressional Justification
THEME: Leadership in Research and Development

OBJECTIVE: Support sociotechnical research on AI systems

AI systems are sociotechnical systems and should be studied as such. Sociotechnical research is an approach to studying technologies within their social, political, economic, and cultural contexts. This field recognizes that successful technological deployment requires understanding and integrating human, material, and cultural infrastructures. A sociotechnical approach questions the notion that a technology's impact can be predicted from its technical properties alone. Moreover, this field assumes that technical transformations to an existing process or function will likely have moral and political implications. Therefore, a sociotechnical approach considers not simply how to best use a technology, but whether a given technology is an appropriate means to address a problem, and where it fits alongside alternative technologies and non-technical means.

Methods for conducting sociotechnical research for AI include:
- Drawing on observations gathered from multiple sources, e.g., quantitative, qualitative, or mixed-methods approaches. Interview-based or ethnographic studies, computational analysis of logged data, sociological audits, case studies, and historical analysis are all employed in sociotechnical research. Sociotechnical research may also propose theoretical framings that synthesize insights from observational studies or shape future studies.
- Inductive reasoning to discover the unexpected when technology is deployed. Although sociotechnical research is guided by theory, it is also designed to capture unexpected real-world uses, processes, and consequences when humans and technologies interact. These phenomena are not necessarily good or bad, but reflect that what a technology becomes in practice is dependent on other actors, in addition to users themselves.
- Capturing the viewpoint of those impacted by technology. These methods allow people to have a say in how technology is used and designed for them, making participation a critical element in AI governance.66
- Evaluating AI within contextual settings. AI use must be understood in the real-world contexts for which it was built and with respect to the users for whom it was envisioned.

More and more professions will be using AI, and likewise, we increasingly need to adopt a sociotechnical approach to understanding the opportunities and problems that arise. Sociotechnical research is critical for American leadership in AI R&D. We need values-based AI solutions which go beyond efficiency and cost-savings.67 These solutions should include American values such as equity, just outcomes, fairness, and access to opportunity. Such solutions should use human-centric design, protect human agency and dignity, and lead to positive societal outcomes. AI solutions absent broader engagement with expertise on society, politics, economy, and culture risk perpetuating AI systems that fail in deployment. These systems may integrate poorly with human infrastructures and reproduce old patterns of incomplete, inefficient, biased, and discriminatory solutions. American AI R&D should lead with an integrated approach that prioritizes both the social and the technical elements of innovation and competitiveness. Early research has shown that the incorporation of sociotechnical approaches into the AI development and testing process and in use-feedback can create significantly more positive outcomes for the users, impacted communities, and AI developers.68

Despite this, the U.S. government lacks a system to identify sociotechnical research in public AI funding, including what it is, why it matters, where it is taking place, and how much funding is currently being put toward it.69 Developing those identification and tracking mechanisms would add transparency and facilitate opportunities for collaboration. There is also a need for scale, and thus, for methodologies, tools, standards, and measurement approaches that allow for sociotechnical research to be incorporated rapidly and expansively into the American research environment. Further, the impact of that research must be made visible.70 The recommended actions that follow address some aspects of this, but further steps are needed to ensure an AI environment that prioritizes sociotechnical research. Additionally, U.S. policy systems are slow to comprehend the societal impacts of AI and are not fully prepared to respond to the quickly evolving technology landscape.71 Policy-oriented sociotechnical research is needed to support federal agencies and Congress in making policy and legislative decisions that support open innovation and robust competition while also protecting society, industry, and government from potential negative impacts and harms.

66 Michele E. Gilman: Beyond Window Dressing: Public Participation for Marginalized Communities in the Datafied Society
67 Expert testimony, panelists from Trustworthy AI's panel discussion during the October NAIAC public meeting
68 Deirdre Mulligan and Helen Nissenbaum: The Concept of Handoff as Model for Ethical Analysis and Design; Safiya Noble: Algorithms of Oppression; Karen Levy: Data Driven: Truckers, Technology, and the New Workplace Surveillance
69 Expert testimony, panelists from Trustworthy AI's panel discussion during the October NAIAC public meeting
70 Expert testimony, panelists from Trustworthy AI's panel discussion during the October NAIAC public meeting; NIST: AI RMF
71 Deirdre K. Mulligan and Kenneth A. Bamberger: Procurement As Policy: Administrative Process for Machine Learning
ACTION: Develop a research base and community of experts focused on sociotechnical research in the AI R&D ecosystem

NAIAC recommends the U.S. government make broad, substantial investments in investigating AI through a sociotechnical lens. This R&D spending would dovetail with new public sector vehicles, such as the CHIPS and Science Act and new R&D programs at NIST. We urge financial support for a strong research base and community of experts; for meaningful, usable, and extensible measures of social considerations for AI development and implementation; for frameworks to support future standards; and for standards and best practices which support future policy. These areas should be connected to each other as part of an overall AI R&D ecosystem that integrates societal concerns with technical development. American leadership in AI R&D should prioritize just and equitable AI application and development, right alongside economic development. This requires basic research at the intersection of technology, the humanities, and the social sciences that broadens the conception of AI research well beyond technocratic frames. Therefore, the following must be considered:

The National Science Foundation (NSF), in coordination with other federal agencies, should fund efforts to create sociotechnical basic and applied research methods and extend these to support values-balanced AI R&D. Fundamental research is necessary to identify, collect, and interpret critical sociotechnical factors; to integrate them into the AI technology lifecycle; and to ensure they are applicable to a wide range of use cases. This research should be democratized to support participation by underrepresented groups, organizations involved in non-academic-centric research, and community organizations, as well as those more typically funded technical, scientific, and policy research organizations. Further:
- We hope that the soon-to-be-announced Trustworthy AI Research Institute72 includes research on high-impact sectors and cross-sector examination of AI benefits, harms, and discrimination, and also investigates quantitative and qualitative mitigation measures that will contribute to regulatory rulemaking and tools used during their enforcement;
- The U.S. government should study the societal implications of AI applications created with the intention and/or effect of influencing human behavior at individual, group, and societal levels;
- The U.S. government should develop and continuously improve reusable methods and metrics for sociotechnical AI research and implementation to encourage rapid incorporation into AI ecosystems of use;
- The National AI Research Resource should provide computing resources, data, and R&D tools to achieve critical mass and democratization of sociotechnical research. And we should request that relevant research reports include a section on sociotechnical considerations, risks, mitigations, alternative approaches explored, and positive and negative impacts; and
- The U.S. government should create incentives within funding and publication bodies to promote the widespread development and adoption of sociotechnical innovations and best practices into AI R&D, such as prizes, research grants, best papers, and career grants.

72 Trustworthy AI Research Institute
NAIIO should support AI governance research to close the gap between new empirical research regarding sociotechnical systems and new policy development around AI governance. We need translational research to understand how sociotechnical and legal considerations affect policy design and decision-making where AI is used in the public interest. This research should span a number of federal and public-sphere mission spaces, and determine how such considerations can be best incorporated into policy development practices across executive agencies, legislatures, consortia, scientific bodies, industry, academia, federal government R&D, international law, and other domains. This research should investigate policy considerations that impact individuals, groups, and society at large. This research should also support participation by underrepresented groups, organizations involved in non-academic-centric research, and community organizations as well as more typically funded technical, scientific, and policy research organizations. Further:
- There should be ongoing research into methodologies of accountability and standards-setting for AI development and deployment.
- There should be identification of AI risks to civil rights and civil liberties, as well as novel forms of risk and harm to society posed by automation, including existential risks.
- And there should be study of the legal and process mitigations against risks and harms, including mechanisms and methodologies for validation and testing of systems for safety, ethics, and effectiveness.

NIST should continue to develop approaches and tools, and expand communities, to incorporate sociotechnical approaches into AI test and evaluation mechanisms. Such developments should incorporate measurement science, informed by diverse communities of experience, that provides sociotechnical guidance and tools for AI-driven organizations. NIST programs related to developing sociotechnical system guidance, test and evaluation approaches, and peer-reviewed approaches such as suggested by the AI RMF should be supported. NIST should also advance measurement science research similar to what it has already done in identifying and managing bias in AI, and establish challenges and other test and evaluation mechanisms that incorporate researchers from a broad set of disciplines. Further:
- The outputs of this research should be clear, specific, repeatable metrics and testing methodologies, including standard reference data and implementations, and with measurements and actions that can be readily reported, and that are not onerous to implement; and
- This research should include a diverse and engaged community of stakeholders beyond industry, government, and academia, including underserved communities, researchers with varied backgrounds and disciplines, and community-focused and social good-focused research organizations.

NIST should continue to support the development of consensus-based standards and best practices derived from peer-reviewed measurement research and reporting formats. These should be used to incorporate sociotechnical considerations and research within AI R&D and implementation. Sociotechnical best practices, standards, and policy support should empower the creation of guiding documents, frameworks, tools, and standard reference resources for values-based AI R&D development. NIST should also consider extending these activities to support policy considerations, coordinating this development with sociotechnical policy research. This work could include:
- Development of working groups that are demographically and disciplinarily diverse, and represent a variety of use cases and backgrounds spanning AI creators, AI users, and the breadth of societal stakeholders;
- Balanced representation from industry, academia, government, and underrepresented communities, as well as organizations representing those communities and greater society;
- Hardening and repeatability testing of the standards and best practices; and
- Creation of standard reference data, tools, and applications to support agile and effective incorporation of sociotechnical standards and best practices.
OBJECTIVE: Create an AI Research and Innovation Observatory

The U.S. government plays a key role ensuring that AI advancements have the broadest possible benefit to society. Given the transformative power of AI, investment and policy decisions made by the U.S. government must be informed by up-to-date knowledge of the capabilities and limitations of the latest in AI; the translational value of those advancements; application areas where AI may be underutilized; and promising areas for new investment in fundamental and applied research. Playing this critical role is more challenging than ever, given the accelerating pace of breakthroughs and diffusion of AI technologies. Yet, there is not currently a center of excellence for measuring progress in AI, identifying gaps in AI technology and its use in consequential applications, and distilling and propagating timely insights to key stakeholders across the government. To ensure continued U.S. leadership in AI, the President should consider taking steps to coordinate and galvanize efforts across three essential functions in relation to the AI R&D ecosystem: measure, analyze, and inform.

Measure: Because the AI R&D sector has rapidly progressed in recent years, nations struggle to obtain reliable insights into national AI competitiveness and trajectory from a research point of view. Likewise, we need to ensure the U.S. government is equipped to monitor AI-related developments in the public sector. Some areas where the scarcity of data is particularly acute include: lack of information about federal funding for AI R&D; lack of information about use of AI within government; lack of clarity around the size and maturity of different parts of the federally-funded AI research ecosystem; insufficient information about the AI priorities of individual agencies and across agencies within the federal government; and the environmental cost of AI.73 Further, AI tools and research are increasingly developed as a component of non-AI R&D programs and projects, but they do not typically get reported or emphasized, given that they are not the prime target of research. Finally, we lack standards for labeling lexicons and methodologies, which would allow for consistent reporting of AI research across federal programs and projects.

Analyze: The U.S. government could benefit from synthesizing measures of progress in AI R&D into actionable insights, in order to inform policy and investment. There is an ecosystem of third-party efforts to analyze the AI R&D landscape, including universities, think tanks, and non-government organizations.74 All of these efforts are impeded by a lack of usable data available from the federal government, reflected in the "Measure" section above. But there is an opportunity for the federal government to gather information from these existing activities, synthesizing it into a coherent view of the global AI R&D landscape and the U.S.'s position within it.

Inform: Decision-making on U.S. government investments in AI R&D should be based on as complete a body of information as possible, and would benefit from standards establishing the baseline or types of information required. Expert panels and agency missions largely set funding priorities, with distributed decision-making. Consolidated analysis and situational awareness are needed across the federal government to improve coordinated, consistent, efficient, and effective decision-making. Analysis of federal AI programs would help understand which classes of AI problems and their supporting ecosystems are being effectively catalyzed through federal R&D investments and which aren't. A feedback loop for existing AI R&D data collection efforts would be helpful. This should include standards and guidance to ensure concrete, actionable information is provided to relevant stakeholders about how to improve the infrastructure for measuring the AI R&D ecosystem, with an opportunity for the information to grow and be refined over time. Diverse stakeholder engagement will be as critical to this process as stakeholder dissemination.

73 OECD: Measuring the environmental impacts of Artificial Intelligence compute and application; noting useful efforts by NITRD
74 Stanford University HAI: 2023 AI Index Report
ACTION: Create an AI Research and Innovation Observatory to measure overall progress in the global AI ecosystem

NAIAC recommends the U.S. government create an AI Research and Innovation Observatory (AIRIO) that identifies and measures key indicators of technical progress in AI spanning research and innovation. It would analyze overall impacts and costs across the global AI ecosystem. It would also inform stakeholders across the government of progress to help steer the co-evolution of AI technology and policy, maximizing the impact of the U.S. government's investments in AI. This could be housed at the NSF, the proposed Large-Scale National Research Resource described below, or elsewhere as deemed appropriate.

The AIRIO should perform the following functions:
- Identify or create recommended standards for labeling lexicons and methodologies for such markup and reporting based on research within federal programs and projects;
- Improve the granularity of data available about AI funding, AI programs, and AI usage within the U.S. government by helping federal agencies to consistently label and report AI programs, projects, and budgets;
- Monitor frontiers of AI research outside the U.S. government (e.g., domestically and internationally, in industry and academia). And based on this, identify areas of dramatic progress as well as gaps in technological capabilities, the deployment of AI systems, and our sociotechnical understanding of both;
- Use data about the overall AI landscape as well as granular data about funding, programs, and usage to identify areas of AI research with an increasing amount of societal impact, and also areas that are under-researched in both technical and sociotechnical dimensions of AI. Also, conduct a gap analysis to identify areas of potential positive societal impact from AI where relatively little research is taking place;
- Determine which interventions catalyze AI progress when applied to under-researched areas. Monitor AI pilots and projects to constantly update the library of interventions and the contexts within which they work. Infrastructure required to enable this includes AI sandboxes where innovations and interventions can be tested with observational data from real environments or synthetic data from simulated environments under different regulatory requirements;
- Work with federal agencies to identify sources of data that inform this ecosystem analysis, and to identify gaps in data that make it challenging for agencies to deliver on their AI mandates. Also, conduct informal briefings to identify areas of emerging interest; and
- Regularly compile and synthesize the results of this ecosystem analysis and issue reports with such findings to relevant stakeholders in government, as well as the broader public, at least once every three years (and ideally, more frequently).75
OBJECTIVE: Create a large scale national AI research resource

In January 2023, the National AI Research Resource (NAIRR) Task Force approved their final report, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource.76 One key conclusion was that the AI R&D ecosystem in the U.S. is increasingly inaccessible to many individuals, groups, and organizations. The data and computational resources required to contribute and compete in the advancement of trustworthy AI systems are largely out of reach to many potential users, including students, non-profit organizations, local and tribal agencies, startups, and small businesses. A large-scale national AI research resource would provide much-needed support and opportunities to historically under-resourced and underrepresented groups for innovations in trustworthy AI.

As noted in the NAIRR Task Force report, it is critical that any such resource be developed and deployed in ways that support advances in trustworthiness and innovation for a broad, diverse cross section of the U.S. AI R&D ecosystem. A resource that is reserved for only a few select users, or that is supported by only a few data or compute providers, will ultimately fail to deliver transformative benefits. Any national-scale research resource must be developed with a commitment to its diversity of users, providers, and ultimately benefits. Such a resource will require attending to inevitable ethical, privacy, and civil liberties challenges that arise over time, particularly because many of the benefits will depend on access to datasets that may contain personal or confidential information. There is no perfect design for this type of large-scale national research resource; different plans could be proposed, each with distinct pros and cons. It is thus critical that the resource be designed and implemented through processes of broad consultation and engagement with diverse stakeholders. The NAIRR Task Force engaged in exactly such processes, and their proposed implementation plan provides a detailed, feasible path toward this transformative resource.

75 We note that several organizations in this field undertake this effort (e.g., Stanford University's AI Index Report) and intend for this suggested compilation to have the authority and broader perspective of the U.S. government
76 NAIRR: Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem

ACTION: Advance the implementation plan from the NAIRR final report to create a large-scale national research resource

NAIAC