
BLUEPRINT FOR AN AI BILL OF RIGHTS
MAKING AUTOMATED SYSTEMS WORK FOR THE AMERICAN PEOPLE
OCTOBER 2022

About this Document
The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was published by the White House Office of Science and Technology Policy in October 2022. This framework was released one year after OSTP announced the launch of a process to develop "a bill of rights for an AI-powered world." Its release follows a year of public engagement to inform this initiative. The framework is available online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights

About the Office of Science and Technology Policy
The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office of the President with advice on the scientific, engineering, and technological aspects of the economy, national security, health, foreign relations, the environment, and the technological recovery and use of resources, among other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of Management and Budget (OMB) with an annual review and analysis of Federal research and development in budgets, and serves as a source of scientific and technological analysis and judgment for the President with respect to major policies, plans, and programs of the Federal Government.

Legal Disclaimer

The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper published by the White House Office of Science and Technology Policy. It is intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems. The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or international instrument. It does not constitute binding guidance for the public or Federal agencies and therefore does not require compliance with the principles described herein. It also is not determinative of what the U.S. government's position will be in any international negotiation. Adoption of these principles may not meet the requirements of existing statutes, regulations, policies, or international instruments, or the requirements of the Federal agencies that enforce them. These principles are not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities.

The appropriate application of the principles set forth in this white paper depends significantly on the context in which automated systems are being utilized. In some circumstances, application of these principles in whole or in part may not be appropriate given the intended use of automated systems to achieve government agency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of automated systems in certain settings such as AI systems used as part of school building security or automated health diagnostic systems. The Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of equities, for example, between the protection of sensitive law enforcement information and the principle of notice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and other law enforcement equities.

Even in contexts where these principles may not apply in whole or in part, federal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as existing policies and safeguards that govern automated systems, including, for example, Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020). This white paper recognizes that national security (which includes certain law enforcement and homeland security activities) and defense activities are of increased sensitivity and interest to our nation's adversaries and are often subject to special requirements, such as those governing classified information and other protected data. Such activities require alternative, compatible safeguards through existing policies that govern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and Responsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and Framework. The implementation of these policies to national security and defense activities can be informed by the Blueprint for an AI Bill of Rights where feasible. The Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or defense, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a waiver of sovereign immunity.

Copyright Information
This document is a work of the United States Government and is in the public domain (see 17 U.S.C. 105).

FOREWORD

Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people's opportunities, undermine their privacy, or pervasively track their activity, often without their knowledge or consent.

These outcomes are deeply harmful, but they are not inevitable. Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients. These tools now drive important decisions across sectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone.

This important progress must not come at the price of civil rights or democratic values, foundational American principles that President Biden has affirmed as a cornerstone of his Administration. On his first day in office, the President ordered the full Federal government to work to root out inequity, embed fairness in decision-making processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.1 The President has spoken forcefully about the urgent challenges posed to democracy today and has regularly called on people of conscience to act to preserve civil rights, including the right to privacy, which he has called "the basis for so many more rights that we have come to take for granted that are ingrained in the fabric of this country."2

To advance President Biden's vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by a technical companion, a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public's rights, opportunities, or access to critical needs.3

ABOUT THIS FRAMEWORK

The Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence. Developed through extensive consultation with the American public, these principles are a blueprint for building and deploying automated systems that are aligned with democratic values and protect civil rights, civil liberties, and privacy. The Blueprint for an AI Bill of Rights includes this Foreword, the five principles, notes on Applying the Blueprint for an AI Bill of Rights, and a Technical Companion that gives concrete steps that can be taken by many kinds of organizations, from governments at all levels to companies of all sizes, to uphold these values. Experts from across the private sector, governments, and international consortia have published principles and frameworks to guide the responsible use of automated systems; this framework provides a national values statement and toolkit that is sector-agnostic to inform building these protections into policy, practice, or the technological design process. Where existing law or policy, such as sector-specific privacy laws and oversight requirements, do not already provide guidance, the Blueprint for an AI Bill of Rights should be used to inform policy decisions.

LISTENING TO THE AMERICAN PUBLIC

The White House Office of Science and Technology Policy has led a year-long process to seek and distill input from people across the country, from impacted communities and industry stakeholders to technology developers and other experts across fields and sectors, as well as policymakers throughout the Federal government, on the issue of algorithmic and data-driven harms and potential remedies. Through panel discussions, public listening sessions, meetings, a formal request for information, and input to a publicly accessible and widely-publicized email address, people throughout the United States, public servants across Federal agencies, and members of the international community spoke up about both the promises and potential harms of these technologies, and played a central role in shaping the Blueprint for an AI Bill of Rights. The core messages gleaned from these discussions include that AI has transformative potential to improve Americans' lives, and that preventing the harms of these technologies is both necessary and achievable. The Appendix includes a full list of public engagements.

SAFE AND EFFECTIVE SYSTEMS
You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems. You should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.

ALGORITHMIC DISCRIMINATION PROTECTIONS

You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.

DATA PRIVACY

You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.

NOTICE AND EXPLANATION

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes. You should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome. Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context. Reporting that includes summary information about these automated systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.

HUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law. You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes a description of these human governance processes and assessment of their timeliness, accessibility, outcomes, and effectiveness should be made public whenever possible.

Definitions for key terms in the Blueprint for an AI Bill of Rights can be found in Applying the Blueprint for an AI Bill of Rights. Accompanying analysis and tools for actualizing each principle can be found in the Technical Companion.

APPLYING THE BLUEPRINT FOR AN AI BILL OF RIGHTS

While many of the concerns addressed in this framework derive from the use of AI, the technical capabilities and specific definitions of such systems change with the speed of innovation, and the potential harms of their use occur even with less technologically sophisticated tools. Thus, this framework uses a two-part test to determine what systems are in scope. This framework applies to (1) automated systems that (2) have the potential to meaningfully impact the American public's rights, opportunities, or access to critical resources or services. These rights, opportunities, and access to critical resources or services should be enjoyed equally and be fully protected, regardless of the changing role that automated systems may play in our lives. This framework describes protections that should be applied with respect to all automated systems that have the potential to meaningfully impact individuals' or communities' exercise of:

RIGHTS, OPPORTUNITIES, OR ACCESS
Civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts;
Equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or,
Access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits.

A list of examples of automated systems for which these principles should be considered is provided in the Appendix. The Technical Companion, which follows, offers supportive guidance for any person or entity that creates, deploys, or oversees automated systems. Considered together, the five principles and associated practices of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms. This purposefully overlapping framework, when taken as a whole, forms a blueprint to help protect the public from harm. The measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people's rights, opportunities, and access.

RELATIONSHIP TO EXISTING LAW AND POLICY

The Blueprint for an AI Bill of Rights is an exercise in envisioning a future where the American public is protected from the potential harms, and can fully enjoy the benefits, of automated systems. It describes principles that can help ensure these protections. Some of these protections are already required by the U.S. Constitution or implemented under existing U.S. laws. For example, government surveillance, and data search and seizure, are subject to legal requirements and judicial oversight. There are Constitutional requirements for human review of criminal investigative matters and statutory requirements for judicial review. Civil rights laws protect the American people against discrimination. There are regulatory safety requirements for medical devices, as well as sector-, population-, or technology-specific privacy and security protections. Ensuring some of the additional protections proposed in this framework would require new laws to be enacted or new policies and practices to be adopted. In some cases, exceptions to the principles described in the Blueprint for an AI Bill of Rights may be necessary to comply with existing law, conform to the practicalities of a specific use case, or balance competing public interests. In particular, law enforcement and other regulatory contexts may require government actors to protect civil rights, civil liberties, and privacy in a manner consistent with, but using alternate mechanisms to, the specific principles discussed in this framework. The Blueprint for an AI Bill of Rights is meant to assist governments and the private sector in moving principles into practice.

The expectations given in the Technical Companion are meant to serve as a blueprint for the development of additional technical standards and practices that should be tailored for particular sectors and contexts. While existing laws informed the development of the Blueprint for an AI Bill of Rights, this framework does not detail those laws beyond providing them as examples, where appropriate, of existing protective measures. This framework instead shares a broad, forward-leaning vision of recommended principles for automated system development and use to inform private and public involvement with these systems where they have the potential to meaningfully impact rights, opportunities, or access. Additionally, this framework does not analyze or take a position on legislative and regulatory proposals in municipal, state, and federal government, or those in other countries.

We have seen modest progress in recent years, with some state and local governments responding to these problems with legislation, and some courts extending longstanding statutory protections to new and emerging technologies. There are companies working to incorporate additional protections in their design and use of automated systems, and researchers developing innovative guardrails. Advocates, researchers, and government organizations have proposed principles for the ethical use of AI and other automated systems. These include the Organization for Economic Co-operation and Development's (OECD's) 2019 Recommendation on Artificial Intelligence, which includes principles for responsible stewardship of trustworthy AI and which the United States adopted, and Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which sets out principles that govern the federal government's use of AI. The Blueprint for an AI Bill of Rights is fully consistent with these principles and with the direction in Executive Order 13985 on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. These principles find kinship in the Fair Information Practice Principles (FIPPs), derived from the 1973 report of an advisory committee to the U.S. Department of Health, Education, and Welfare, Records, Computers, and the Rights of Citizens.4 While there is no single, universal articulation of the FIPPs, these core principles for managing information about individuals have been incorporated into data privacy laws and policies across the globe.5 The Blueprint for an AI Bill of Rights embraces elements of the FIPPs that are particularly relevant to automated systems, without articulating a specific set of FIPPs or scoping applicability or the interests served to a single particular domain, like privacy, civil rights and civil liberties, ethics, or risk management. The Technical Companion builds on this prior work to provide practical next steps to move these principles into practice and promote common approaches that allow technological innovation to flourish while protecting people from harm.

DEFINITIONS

ALGORITHMIC DISCRIMINATION: "Algorithmic discrimination" occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Throughout this framework the term "algorithmic discrimination" takes this meaning (and not a technical understanding of discrimination as distinguishing between items).

AUTOMATED SYSTEM: An automated system is any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure. "Passive computing infrastructure" is any intermediary technology that does not influence or determine the outcome of a decision, make or aid in decisions, inform policy implementation, or collect data or observations, including web hosting, domain registration, networking, caching, data storage, or cybersecurity. Throughout this framework, automated systems that are considered in scope are only those that have the potential to meaningfully impact individuals' or communities' rights, opportunities, or access.

COMMUNITIES: "Communities" include: neighborhoods; social network connections (both online and offline); families (construed broadly); people connected by affinity, identity, or shared traits; and formal organizational ties. This includes Tribes, Clans, Bands, Rancherias, Villages, and other Indigenous communities. AI and other data-driven automated systems most directly collect data on, make inferences about, and may cause harm to individuals. But the overall magnitude of their impacts may be most readily visible at the level of communities. Accordingly, the concept of community is integral to the scope of the Blueprint for an AI Bill of Rights. United States law and policy have long employed approaches for protecting the rights of individuals, but existing frameworks have sometimes struggled to provide protections when effects manifest most clearly at a community level. For these reasons, the Blueprint for an AI Bill of Rights asserts that the harms of automated systems should be evaluated, protected against, and redressed at both the individual and community levels.

EQUITY: "Equity" means the consistent and systematic fair, just, and impartial treatment of all individuals. Systemic, fair, and just treatment must take into account the status of individuals who belong to underserved communities that have been denied such treatment, such as Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality.

RIGHTS, OPPORTUNITIES, OR ACCESS: "Rights, opportunities, or access" is used to indicate the scoping of this framework. It describes the set of: civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts; equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or, access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits.

SENSITIVE DATA: Data and metadata are sensitive if they pertain to an individual in a sensitive domain (defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a sensitive domain or sensitive data about an individual (such as disability-related data, genomic data, biometric data, behavioral data, geolocation data, data related to interaction with the criminal justice system, relationship history and legal status such as custody and divorce information, and home, work, or school environmental data); or have the reasonable potential to be used in ways that are likely to expose individuals to meaningful harm, such as a loss of privacy or financial harm due to identity theft. Data and metadata generated by or about those who are not yet legal adults is also sensitive, even if not related to a sensitive domain. Such data includes, but is not limited to, numerical, text, image, audio, or video data.

SENSITIVE DOMAINS: "Sensitive domains" are those in which activities being conducted can cause material harms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liberties and civil rights. Domains that have historically been singled out as deserving of enhanced data protections or where such enhanced protections are reasonably expected by the public include, but are not limited to, health, family planning and care, employment, education, criminal justice, and personal finance. In the context of this framework, such domains are considered sensitive whether or not the specifics of a system context would necessitate coverage under existing law, and domains and data that are considered sensitive are understood to change over time based on societal norms and context.

SURVEILLANCE TECHNOLOGY: "Surveillance technology" refers to products or services marketed for or that can be lawfully used to detect, monitor, intercept, collect, exploit, preserve, protect, transmit, and/or retain data, identifying information, or communications concerning individuals or groups. This framework limits its focus to both government and commercial use of surveillance technologies when juxtaposed with real-time or subsequent automated analysis and when such systems have a potential for meaningful impact on individuals' or communities' rights, opportunities, or access.

UNDERSERVED COMMUNITIES: The term "underserved communities" refers to communities that have been systematically denied a full opportunity to participate in aspects of economic, social, and civic life, as exemplified by the list in the preceding definition of "equity."

FROM PRINCIPLES TO PRACTICE: A TECHNICAL COMPANION TO THE BLUEPRINT FOR AN AI BILL OF RIGHTS

TABLE OF CONTENTS
FROM PRINCIPLES TO PRACTICE: A TECHNICAL COMPANION TO THE BLUEPRINT FOR AN AI BILL OF RIGHTS
USING THIS TECHNICAL COMPANION
SAFE AND EFFECTIVE SYSTEMS
ALGORITHMIC DISCRIMINATION PROTECTIONS
DATA PRIVACY
NOTICE AND EXPLANATION
HUMAN ALTERNATIVES, CONSIDERATION, AND FALLBACK
APPENDIX
EXAMPLES OF AUTOMATED SYSTEMS
LISTENING TO THE AMERICAN PEOPLE
ENDNOTES

USING THIS TECHNICAL COMPANION

The Blueprint for an AI Bill of Rights is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence. This technical companion considers each principle in the Blueprint for an AI Bill of Rights and provides examples and concrete steps for communities, industry, governments, and others to take in order to build these protections into policy, practice, or the technological design process. Taken together, the technical protections and practices laid out in the Blueprint for an AI Bill of Rights can help guard the American public against many of the potential and actual harms identified by researchers, technologists, advocates, journalists, policymakers, and communities in the United States and around the world. This technical companion is intended to be used as a reference by people across many circumstances: anyone impacted by automated systems, and anyone developing, designing, deploying, evaluating, or making policy to govern the use of an automated system. Each principle is accompanied by three supplemental sections:

1. WHY THIS PRINCIPLE IS IMPORTANT: This section provides a brief summary of the problems that the principle seeks to address and protect against, including illustrative examples.

2. WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS: The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that should be tailored for particular sectors and contexts. This section outlines practical steps that can be implemented to realize the vision of the Blueprint for an AI Bill of Rights. The expectations laid out often mirror existing practices for technology development, including pre-deployment testing, ongoing monitoring, and governance structures for automated systems, but also go further to address unmet needs for change and offer concrete directions for how those changes can be made. Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can be provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should be made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property, law enforcement, or national security considerations may prevent public release. Where public reports are not possible, the information should be provided to oversight bodies and privacy, civil liberties, or other ethics officers charged with safeguarding individuals' rights. These reporting expectations are important for transparency, so the American people can have confidence that their rights, opportunities, and access as well as their expectations about technologies are respected.

3. HOW THESE PRINCIPLES CAN MOVE INTO PRACTICE: This section provides real-life examples of how these guiding principles can become reality, through laws, policies, and practices. It describes practical technical and sociotechnical approaches to protecting rights, opportunities, and access. The examples provided are not critiques or endorsements, but rather are offered as illustrative cases to help provide a concrete vision for actualizing the Blueprint for an AI Bill of Rights. Effectively implementing these processes requires the cooperation of and collaboration among industry, civil society, researchers, policymakers, technologists, and the public.

124、public.14 SAFE AND EFFECTIVE SYSTEMS You should be protected from unsafe or ineffective sys-tems.Automated systems should be developed with consultation from diverse communities,stakeholders,and domain experts to iden-tify concerns,risks,and potential impacts of the system.Systems should undergo pre

125、-deployment testing,risk identification and miti-gation,and ongoing monitoring that demonstrate they are safe and effective based on their intended use,mitigation of unsafe outcomes including those beyond the intended use,and adherence to do-main-specific standards.Outcomes of these protective measu

126、res should include the possibility of not deploying the system or remov-ing a system from use.Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community.They should be designed to proactively protect you fr

127、om harms stemming from unintended,yet foreseeable,uses or impacts of automated systems.You should be protected from inappropriate or irrelevant data use in the design,de-velopment,and deployment of automated systems,and from the compounded harm of its reuse.Independent evaluation and report-ing that

128、 confirms that the system is safe and effective,including re-porting of steps taken to mitigate potential harms,should be per-formed and the results made public whenever possible.15 SAFE AND EFFECTIVE SYSTEMS WHY THIS PRINCIPLE IS IMPORTANTThis section provides a brief summary of the problems which

While technologies are being deployed to solve problems across a wide array of issues, our reliance on technology can also lead to its use in situations where it has not yet been proven to work, either at all or within an acceptable range of error. In other cases, technologies do not work as intended or as promised, causing substantial and unjustified harm. Automated systems sometimes rely on data from other systems, including historical data, allowing irrelevant information from past decisions to infect decision-making in unrelated situations. In some cases, technologies are purposefully designed to violate the safety of others, such as technologies designed to facilitate stalking; in other cases, intended or unintended uses lead to unintended harms.

Many of the harms resulting from these technologies are preventable, and actions are already being taken to protect the public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that key development decisions are vetted by an ethics review; others have identified and mitigated harms found through pre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consultation processes that may be applied when considering the use of new automated systems, and existing product development and testing practices already protect the American public from many potential harms. Still, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on these existing practices, increase confidence in the use of automated systems, and protect the American public. Innovators deserve clear rules of the road that allow new ideas to flourish, and the American public deserves protections from unsafe outcomes. All can benefit from assurances that automated systems will be designed, tested, and consistently confirmed to work as intended, and that they will be proactively protected from foreseeable unintended harmful outcomes.

A proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals around the country. An independent study showed that the model predictions underperformed relative to the designer's claims while also causing alert fatigue by falsely alerting likelihood of sepsis.6

On social media, Black people who quote and criticize racist messages have had their own speech silenced when a platform's automated moderation system failed to distinguish this "counter speech" (or other critique and journalism) from the original hateful messages to which such speech responded.7

A device originally developed to help people track and find lost items has been used as a tool by stalkers to track victims' locations in violation of their privacy and safety. The device manufacturer took steps after release to protect people from unwanted tracking by alerting people on their phones when a device is found to be moving with them over time and also by having the device make an occasional noise, but not all phones are able to receive the notification and the devices remain a safety concern due to their misuse.8

An algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit, even if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions were the result of a feedback loop generated from the reuse of data from previous arrests and algorithm predictions.9

AI-enabled "nudification" technology that creates images where people appear to be nude, including apps that enable non-technical users to create or alter images of individuals without their consent, has proliferated at an alarming rate. Such technology is becoming a common form of image-based abuse that disproportionately impacts women. As these tools become more sophisticated, they are producing altered images that are increasingly realistic and are difficult for both humans and AI to detect as inauthentic. Regardless of authenticity, the experience of harm to victims of non-consensual intimate images can be devastatingly real, affecting their personal and professional lives, and impacting their mental and physical health.10

A company installed AI-powered cameras in its delivery vans in order to evaluate the road safety habits of its drivers, but the system incorrectly penalized drivers when other cars cut them off or when other events beyond their control took place on the road. As a result, drivers were incorrectly ineligible to receive a bonus.11

WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS
The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

In order to ensure that an automated system is safe and effective, it should include safeguards to protect the public from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task at hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of the system. These expectations are explained below.

Protect the public from harm in a proactive and ongoing manner

Consultation. The public should be consulted in the design, implementation, deployment, acquisition, and maintenance phases of automated system development, with emphasis on early-stage consultation before a system is introduced or a large change implemented. This consultation should directly engage diverse impacted communities to consider concerns and risks that may be unique to those communities, or disproportionately prevalent or severe for them. The extent of this engagement and the form of outreach to relevant stakeholders may differ depending on the specific automated system and development phase, but should include subject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as civil rights, civil liberties, and privacy experts. For private sector applications, consultations before product launch may need to be confidential. Government applications, particularly law enforcement applications or applications that raise national security considerations, may require confidential or limited engagement based on system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation should be documented, and the automated system that developers were proposing to create, use, or deploy should be reconsidered based on this feedback.

Testing. Systems should undergo extensive testing before deployment. This testing should follow domain-specific best practices, when available, for ensuring the technology will work in its real-world context. Such testing should take into account both the specific technology used and the roles of any human operators or reviewers who impact system outcomes or effectiveness; testing should include both automated systems testing and human-led (manual) testing. Testing conditions should mirror as closely as possible the conditions in which the system will be deployed, and new testing may be required for each deployment to account for material differences in conditions from one deployment to another. Following testing, system performance should be compared with the in-place, potentially human-driven, status quo procedures, with existing human performance considered as a performance baseline for the algorithm to meet pre-deployment, and as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing should include the possibility of not deploying the system.
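As a minimal sketch of the pre-deployment baseline comparison described above (function names, metrics, and the sample figures are illustrative assumptions, not part of the framework), the example below measures a candidate system's accuracy on held-out cases, measures the existing human-driven process on the same cases, and treats the human figure as the minimum bar for a deployment decision.

# Hypothetical pre-deployment check: the candidate automated system must meet or
# exceed the accuracy of the in-place, human-driven status quo on held-out cases.

def accuracy(predictions, outcomes):
    """Fraction of cases where the prediction matched the observed outcome."""
    correct = sum(1 for p, o in zip(predictions, outcomes) if p == o)
    return correct / len(outcomes)

def deployment_decision(system_preds, human_decisions, outcomes):
    system_acc = accuracy(system_preds, outcomes)
    human_acc = accuracy(human_decisions, outcomes)  # baseline the system must meet
    decision = "deploy" if system_acc >= human_acc else "do not deploy"
    return {"system_accuracy": system_acc,
            "human_baseline": human_acc,
            "decision": decision}

if __name__ == "__main__":
    outcomes        = [1, 0, 1, 1, 0, 1, 0, 0]   # observed ground truth
    human_decisions = [1, 0, 1, 0, 0, 1, 0, 1]   # existing status quo process
    system_preds    = [1, 0, 1, 1, 0, 1, 0, 1]   # candidate automated system
    print(deployment_decision(system_preds, human_decisions, outcomes))

In practice the comparison would use domain-appropriate metrics and disaggregated results rather than a single accuracy number; the point is only that "not deploying the system" is an explicit possible outcome of the check.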

Risk identification and mitigation. Before deployment, and in a proactive and ongoing manner, potential risks of the automated system should be identified and mitigated. Identified risks should focus on the potential for meaningful impact on people's rights, opportunities, or access and include those to impacted communities that may not be direct users of the automated system, risks resulting from purposeful misuse of the system, and other concerns identified via the consultation process. Assessment and, where possible, measurement of the impact of risks should be included and balanced such that high impact risks receive attention and mitigation proportionate with those impacts. Automated systems with the intended purpose of violating the safety of others should not be developed or used; systems with such safety violations as identified unintended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessitate rollback or significant modification to a launched automated system.

Ongoing monitoring. Automated systems should have ongoing monitoring procedures, including recalibration procedures, in place to ensure that their performance does not fall below an acceptable level over time, based on changing real-world conditions or deployment contexts, post-deployment modification, or unexpected conditions. This ongoing monitoring should include continuous evaluation of performance metrics and harm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well as ensuring that fallback mechanisms are in place to allow reversion to a previously working system. Monitoring should take into account the performance of both technical system components (the algorithm as well as any hardware components, data inputs, etc.) and human operators. It should include mechanisms for testing the actual accuracy of any predictions or recommendations generated by a system, not just a human operator's determination of their accuracy. Ongoing monitoring procedures should include manual, human-led monitoring as a check in the event there are shortcomings in automated monitoring systems. These monitoring procedures should be in place for the lifespan of the deployed automated system.
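As a rough sketch of the monitoring-with-fallback idea described above (the metric, threshold, and version names are assumptions for illustration, not values prescribed by the framework), the check below compares a recent performance measurement against an acceptable floor and signals reversion to a previously working system version when performance degrades.

# Illustrative ongoing-monitoring check: revert to the previously working system
# version when measured performance falls below an acceptable floor.

ACCEPTABLE_FLOOR = 0.90          # assumed minimum acceptable accuracy
PREVIOUS_VERSION = "model-v1.3"  # known-good fallback version (hypothetical)

def monitoring_check(current_accuracy, current_version):
    if current_accuracy < ACCEPTABLE_FLOOR:
        # Performance has degraded; alert operators and fall back.
        return {"action": "revert",
                "from_version": current_version,
                "to_version": PREVIOUS_VERSION,
                "reason": f"accuracy {current_accuracy:.2f} below floor {ACCEPTABLE_FLOOR:.2f}"}
    return {"action": "continue", "version": current_version}

print(monitoring_check(0.86, "model-v1.4"))  # degraded -> revert to model-v1.3
print(monitoring_check(0.94, "model-v1.4"))  # healthy  -> continue

A real procedure would also log harm assessments and route alerts to the human, manual monitoring path noted above; the automated threshold is only one layer of the check.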

Clear organizational oversight. Entities responsible for the development or use of automated systems should lay out clear governance structures and procedures. This includes clearly-stated governance procedures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing assessment and mitigation. Organizational stakeholders including those with oversight of the business process or operation being automated, as well as other organizational divisions that may be affected due to the use of the system, should be involved in establishing governance procedures. Responsibility should rest high enough in the organization that decisions about resources, mitigation, incident response, and potential rollback can be made promptly, with sufficient weight given to risk mitigation objectives against competing concerns. Those holding this responsibility should be made aware of any use cases with the potential for meaningful impact on people's rights, opportunities, or access as determined based on risk identification procedures. In some cases, it may be appropriate for an independent ethics review to be conducted before deployment.

Avoid inappropriate, low-quality, or irrelevant data use and the compounded harm of its reuse

Relevant and high-quality data. Data used as part of any automated system's creation, evaluation, or deployment should be relevant, of high quality, and tailored to the task at hand. Relevancy should be established based on research-backed demonstration of the causal influence of the data to the specific use case or justified more generally based on a reasonable expectation of usefulness in the domain and/or for the system design or ongoing development. Relevance of data should not be established solely by appealing to its historical connection to the outcome. High quality and tailored data should be representative of the task at hand and errors from data entry or other sources should be measured and limited. Any data used as the target of a prediction process should receive particular attention to the quality and validity of the predicted outcome or label to ensure the goal of the automated system is appropriately identified and measured. Additionally, justification should be documented for each data attribute and source to explain why it is appropriate to use that data to inform the results of the automated system and why such use will not violate any applicable laws. In cases of high-dimensional and/or derived attributes, such justifications can be provided as overall descriptions of the attribute generation process and appropriateness.

Derived data sources tracked and reviewed carefully. Data that is derived from other data through the use of algorithms, such as data derived or inferred from prior model outputs, should be identified and tracked, e.g., via a specialized type in a data schema. Derived data should be viewed as potentially high-risk inputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be carefully validated against the risk of collateral consequences.
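One way to realize "a specialized type in a data schema," sketched below with hypothetical field and attribute names that are not part of the framework, is to record for every attribute whether it is directly observed or derived from a prior model output, along with its provenance, so that derived inputs can be flagged for review before reuse.

# Illustrative data-schema entry distinguishing observed from derived attributes,
# so that model-derived inputs can be surfaced for extra review.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttributeSpec:
    name: str
    source: str                       # e.g., "application form", "risk model v2 output"
    derived: bool                     # True if produced by another algorithm or model
    derivation: Optional[str] = None  # how it was derived, if applicable

schema = [
    AttributeSpec("reported_income", source="application form", derived=False),
    AttributeSpec("predicted_default_risk", source="risk model v2 output",
                  derived=True, derivation="model trained on prior lending outcomes"),
]

# Derived attributes are potential feedback-loop inputs; surface them for review.
for attr in schema:
    if attr.derived:
        print(f"REVIEW: {attr.name} is derived ({attr.derivation})")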

Data reuse limits in sensitive domains. Data reuse, and especially data reuse in a new context, can result in the spreading and scaling of harms. Data from some domains, including criminal justice data and data indicating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in some cases its reuse is limited by law. Accordingly, such data should be subject to extra oversight to ensure safety and efficacy. Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal matters or private sector use) should only occur where use of such data is legally authorized and, after examination, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reasonable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to identify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for replacing individual-level sensitive data.

Demonstrate the safety and effectiveness of the system

Independent evaluation. Automated systems should be designed to allow for independent evaluation (e.g., via application programming interfaces). Independent evaluators, such as researchers, journalists, ethics review boards, inspectors general, and third-party auditors, should be given access to the system and samples of associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual property law), in order to perform such evaluations. Mechanisms should be included to ensure that system access for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to provide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot be revoked without reasonable and verified justification.

Reporting.12 Entities responsible for the development or use of automated systems should provide regularly-updated reports that include: an overview of the system, including how it is embedded in the organization's business processes or other activities, system goals, any human-run procedures that form a part of the system, and specific performance expectations; a description of any data used to train machine learning models or for other purposes, including how data sources were processed and interpreted, a summary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the results of public consultation such as concerns raised and any decisions made due to these concerns; risk identification and management assessments and any steps taken to mitigate potential harms; the results of performance testing including, but not limited to, accuracy, differential demographic impact, resulting error rates (overall and per demographic group), and comparisons to previously deployed systems; ongoing monitoring procedures and regular performance testing reports, including monitoring frequency, results, and actions taken; and the procedures for and results from independent evaluations. Reporting should be provided in a plain language and machine-readable manner.
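To illustrate "plain language and machine-readable" reporting, the sketch below mirrors the report elements listed above as structured JSON; the field names and example values are assumptions chosen for illustration, not a format defined by the framework, and any real report would carry the entity's actual results.

# Illustrative machine-readable rendering of the report elements listed above.
import json

report = {
    "system_overview": "Screening tool embedded in the hiring workflow",
    "performance_expectations": {"overall_accuracy_target": 0.90},
    "training_data": {"sources": ["historical applications"],
                      "known_gaps": "sparse coverage of some applicant groups"},
    "public_consultation": ["concern: accessibility of video interviews"],
    "risk_mitigations": ["removed proxy features correlated with protected traits"],
    "performance_testing": {"error_rate_overall": 0.09,
                            "error_rate_by_group": {"group_a": 0.10, "group_b": 0.08}},
    "ongoing_monitoring": {"frequency": "monthly", "last_result": "within tolerance"},
    "independent_evaluation": {"evaluator": "third-party auditor",
                               "finding": "see published assessment"},
}

print(json.dumps(report, indent=2))  # plain-language fields, machine-readable structure

A structured format like this lets regulators, auditors, or the public process many reports programmatically while the field contents remain readable as plain language.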

HOW THESE PRINCIPLES CAN MOVE INTO PRACTICE
Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.

Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government requires that certain federal agencies adhere to nine principles when designing, developing, acquiring, or using AI for purposes other than national security or defense. These principles, while taking into account the sensitive law enforcement and other contexts in which the federal government may use AI, as opposed to private sector use of AI, require that AI is: (a) lawful and respectful of our Nation's values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) safe, secure, and resilient; (e) understandable; (f) responsible and traceable; (g) regularly monitored; (h) transparent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. Affected agencies across the federal government have released AI use case inventories13 and are implementing plans to bring those AI systems into compliance with the Executive Order or retire them.

The law and policy landscape for motor vehicles shows that strong safety regulations, and measures to address harms when they occur, can enhance innovation in the context of complex technologies. Cars, like automated digital systems, comprise a complex collection of components. The National Highway Traffic Safety Administration,14 through its rigorous standards and independent evaluation, helps make sure vehicles on our roads are safe without limiting manufacturers' ability to innovate.15 At the same time, rules of the road are implemented locally to impose contextually appropriate requirements on drivers, such as slowing down near schools or playgrounds.16

From large companies to start-ups, industry is providing innovative solutions that allow organizations to mitigate risks to the safety and efficacy of AI systems, both before deployment and through monitoring over time.17 These innovative solutions include risk assessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing monitoring, documentation procedures specific to model assessments, and many other strategies that aim to mitigate risks posed by the use of AI to companies' reputation, legal responsibilities, and other product safety and effectiveness concerns.

The Office of Management and Budget (OMB) has called for an expansion of opportunities for meaningful stakeholder engagement in the design of programs and services. OMB also points to numerous examples of effective and proactive stakeholder engagement, including the Community-Based Participatory Research Program developed by the National Institutes of Health and the participatory technology assessments developed by the National Oceanic and Atmospheric Administration.18

195、articipatory technology assessments developed by the National Oceanic and Atmospheric Administration.18The National Institute of Standards and Technology(NIST)is developing a risk management framework to better manage risks posed to individuals,organizations,and society by AI.19 The NIST AI Risk Man

196、agement Framework,as mandated by Congress,is intended for voluntary use to help incorporate trustworthiness considerations into the design,development,use,and evaluation of AI products,services,and systems.The NIST framework is being developed through a consensus-driven,open,transparent,and collabor

197、ative process that includes workshops and other opportunities to provide input.The NIST framework aims to foster the development of innovative approaches to address characteristics of trustworthiness including accuracy,explainability and interpretability,reliability,privacy,robustness,safety,securit

198、y(resilience),and mitigation of unintended and/or harmful bias,as well as of harmful uses.The NIST framework will consider and encompass principles such as transparency,accountability,and fairness during pre-design,design and development,deployment,use,and testing and evaluation of AI technologies a

199、nd systems.It is expected to be released in the winter of 2022-23.21 SAFE AND EFFECTIVE SYSTEMS HOW THESE PRINCIPLES CAN MOVE INTO PRACTICEReal-life examples of how these principles can become reality,through laws,policies,and practical technical and sociotechnical approaches to protecting rights,op

Some U.S. government agencies have developed specific frameworks for ethical use of AI systems. The Department of Energy (DOE) has activated the AI Advancement Council that oversees coordination and advises on implementation of the DOE AI Strategy and addresses issues and/or escalations on the ethical use and development of AI systems.20 The Department of Defense has adopted Artificial Intelligence Ethical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its national security and defense activities.21 Similarly, the U.S. Intelligence Community (IC) has developed the Principles of Artificial Intelligence Ethics for the Intelligence Community to guide personnel on whether and how to develop and use AI in furtherance of the IC's mission, as well as an AI Ethics Framework to help implement these principles.22

The National Science Foundation (NSF) funds extensive research to help foster the development of automated systems that adhere to and advance their safety, security and effectiveness. Multiple NSF programs support research that directly addresses many of these principles: the National AI Research Institutes23 support research on all aspects of safe, trustworthy, fair, and explainable AI algorithms and systems; the Cyber Physical Systems24 program supports research on developing safe autonomous and cyber physical systems with AI components; the Secure and Trustworthy Cyberspace25 program supports research on cybersecurity and privacy enhancing technologies in automated systems; the Formal Methods in the Field26 program supports research on rigorous formal verification and analysis of automated systems and machine learning; and the Designing Accountable Software Systems27 program supports research on rigorous and reproducible methodologies for developing software systems with legal and regulatory compliance in mind.

Some state legislatures have placed strong transparency and validity requirements on the use of pretrial risk assessments. The use of algorithmic pretrial risk assessments has been a cause of concern for civil rights groups.28 Idaho Code Section 19-1910, enacted in 2019,29 requires that any pretrial risk assessment, before use in the state, first be shown to be free of bias against any class of individuals protected from discrimination by state or federal law, that any locality using a pretrial risk assessment must first formally validate the claim of its being free of bias, that all documents, records, and information used to build or validate the risk assessment shall be open to public inspection, and that assertions of trade secrets cannot be used to quash discovery in a criminal matter by a party to a criminal case.

ALGORITHMIC DISCRIMINATION PROTECTIONS

You should not face discrimination by algorithms, and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.

WHY THIS PRINCIPLE IS IMPORTANT
This section provides a brief summary of the problems which the principle seeks to address and protect against, including illustrative examples.

There is extensive evidence showing that automated systems can produce inequitable outcomes and amplify existing inequity.30 Data that fails to account for existing systemic biases in American society can result in a range of consequences: for example, facial recognition technology that can contribute to wrongful and discriminatory arrests,31 hiring algorithms that inform discriminatory decisions, and healthcare algorithms that discount the severity of certain diseases in Black Americans. Instances of discriminatory practices built into and resulting from AI and other automated systems exist across many industries, areas, and contexts. While automated systems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination protections should be built into their design, deployment, and ongoing use.

Many companies, non-profits, and federal government agencies are already taking steps to ensure the public is protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product quality assessment and launch procedures, and in some cases this testing has led products to be changed or not launched, preventing harm to the public. Federal government agencies have been developing standards and guidance for the use of automated systems in order to help prevent bias. Non-profits and companies have developed best practices for audits and impact assessments to help identify potential algorithmic discrimination and provide transparency to the public in the mitigation of such biases.

But there is much more work to do to protect the public from algorithmic discrimination and to use and design automated systems in an equitable way. The guardrails protecting the public from discrimination in their daily lives should include their digital lives and impacts: basic safeguards against abuse, bias, and discrimination to ensure that all people are treated fairly when automated systems are used. This includes all dimensions of their lives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal justice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that automated systems make on underserved communities and to institute proactive protections that support these communities.

- An automated system using nontraditional factors such as educational attainment and employment history as part of its loan underwriting and pricing model was found to be much more likely to charge an applicant who attended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan than an applicant who did not attend an HBCU. This was found to be true even when controlling for other credit-related factors.32

- A hiring tool that learned the features of a company's employees (predominantly men) rejected women applicants for spurious and discriminatory reasons; resumes with the word "women's," such as "women's chess club captain," were penalized in the candidate ranking.33

- A predictive model marketed as being able to predict whether students are likely to drop out of school was used by more than 500 universities across the country. The model was found to use race directly as a predictor, and also shown to have large disparities by race; Black students were as many as four times as likely as their otherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors to guide students towards or away from majors, and some worry that they are being used to guide Black students away from math and science subjects.34

- A risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed evidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on the general recidivism tools, and underpredicts the risk of recidivism for some groups of color on some of the violent recidivism tools. The Department of Justice is working to reduce these disparities and has publicly released a report detailing its review of the tool.35

- An automated sentiment analyzer, a tool often used by technology platforms to determine whether a statement posted online expresses a positive or negative sentiment, was found to be biased against Jews and gay people. For example, the analyzer marked the statement "I'm a Jew" as representing a negative sentiment, while "I'm a Christian" was identified as expressing a positive sentiment.36 This could lead to the preemptive blocking of social media comments such as: "I'm gay." A related company with this bias concern has made their data public to encourage researchers to help address the issue37 and has released reports identifying and measuring this problem as well as detailing attempts to address it.38

- Searches for "Black girls," "Asian girls," or "Latina girls" return predominantly39 sexualized content, rather than role models, toys, or activities.40 Some search engines have been working to reduce the prevalence of these results, but the problem remains.41

- Advertisement delivery systems that predict who is most likely to click on a job advertisement end up delivering ads in ways that reinforce racial and gender stereotypes, such as overwhelmingly directing supermarket cashier ads to women and jobs with taxi companies to primarily Black people.42

- Body scanners, used by TSA at airport checkpoints, require the operator to select a "male" or "female" scanning setting based on the passenger's sex, but the setting is chosen based on the operator's perception of the passenger's gender identity. These scanners are more likely to flag transgender travelers as requiring extra screening done by a person. Transgender travelers have described degrading experiences associated with these extra screenings.43 TSA has recently announced plans to implement a gender-neutral algorithm44 while simultaneously enhancing the security effectiveness capabilities of the existing technology.

- The National Disabled Law Students Association expressed concerns that individuals with disabilities were more likely to be flagged as potentially suspicious by remote proctoring AI systems because of their disability-specific access needs such as needing longer breaks or using screen readers or dictation software.45

- An algorithm designed to identify patients with high needs for healthcare systematically assigned lower scores (indicating that they were not as high need) to Black patients than to those of white patients, even when those patients had similar numbers of chronic conditions and other markers of health.46 In addition, healthcare clinical algorithms that are used by physicians to guide clinical decisions may include sociodemographic variables that adjust or "correct" the algorithm's output on the basis of a patient's race or ethnicity, which can lead to race-based health inequities.47

WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS
The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

Any automated system should be tested to help ensure it is free from algorithmic discrimination before it can be sold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly construed. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The expectations set out below describe proactive technical and policy steps that can be taken to not only reinforce those legal protections but extend beyond them to ensure equity for underserved communities48 even in circumstances where a specific legal protection may not be clearly established. These protections should be instituted throughout the design, development, and deployment process and are described below roughly in the order in which they would be instituted.

Protect the public from algorithmic discrimination in a proactive and ongoing manner

Proactive assessment of equity in design. Those responsible for the development, use, or oversight of automated systems should conduct proactive equity assessments in the design phase of the technology research and development or during its acquisition to review potential input data, associated historical context, accessibility for people with disabilities, and societal goals to identify potential discrimination and effects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive as possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative and quantitative evaluations of the system. This equity assessment should also be considered a core part of the goals of the consultation conducted as part of the safety and efficacy review.

Representative and robust data. Any data used as part of system development or assessment should be representative of local communities based on the planned deployment setting and should be reviewed for bias based on the historical and societal context of the data. Such data should be sufficiently robust to identify and help to mitigate biases and potential harms.

Guarding against proxies. Directly using demographic information in the design, development, or deployment of an automated system (for purposes other than evaluating a system for discrimination or using a system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be avoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can contribute to algorithmic discrimination. In cases where use of the demographic features themselves would lead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated by an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by testing for correlation between demographic information and attributes in any data used as part of system design, development, or use. If a proxy is identified, designers, developers, and deployers should remove the proxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, organizations should ensure a proxy feature is not given undue weight and should monitor the system closely for any resulting algorithmic discrimination.
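As one way such proxy testing might be carried out, here is a small Python sketch that flags features whose statistical association with a demographic column exceeds a chosen cutoff. The flag_proxy_candidates helper, the one-hot correlation measure, and the 0.4 threshold are assumptions made for illustration, not a method specified by the Blueprint.

```python
import pandas as pd

def flag_proxy_candidates(df, demographic_cols, feature_cols, threshold=0.4):
    """Flag candidate proxy features by their association with demographic columns.

    Illustrative only: the threshold and the one-hot correlation score are
    assumptions; a real assessment would pick measures suited to the data.
    """
    flags = []
    for demo in demographic_cols:
        for feat in feature_cols:
            # One-hot encode both columns so the same score works for
            # categorical and numeric attributes alike.
            pair = pd.get_dummies(df[[demo, feat]].astype(str)).astype(float)
            demo_dummies = [c for c in pair.columns if c.startswith(f"{demo}_")]
            feat_dummies = [c for c in pair.columns if c.startswith(f"{feat}_")]
            score = pair.corr().loc[demo_dummies, feat_dummies].abs().max().max()
            if score >= threshold:
                flags.append({"demographic": demo, "feature": feat,
                              "association": round(float(score), 3)})
    return flags
```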

Ensuring accessibility during design, development, and deployment. Systems should be designed, developed, and deployed by organizations in ways that ensure accessibility to people with disabilities. This should include consideration of a wide variety of disabilities, adherence to relevant accessibility standards, and user experience research both before and after deployment to identify and address any accessibility barriers to the use or effectiveness of the automated system.

Disparity assessment. Automated systems should be tested using a broad set of measures to assess whether the system components, both in pre-deployment testing and in-context deployment, produce disparities. The demographics of the assessed groups should be as inclusive as possible of race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. The broad set of measures assessed should include demographic performance measures, overall and subgroup parity assessment, and calibration. Demographic data collected for disparity assessment should be separated from data used for the automated system and privacy protections should be instituted; in some cases it may make sense to perform such assessment using a data sample. For every instance where the deployed automated system leads to different treatment or impacts disfavoring the identified groups, the entity governing, implementing, or using the system should document the disparity and a justification for any continued use of the system.
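A minimal sketch of such an assessment, assuming an evaluation table with a demographic group column, binary labels and predictions, and a model score; the disparity_report helper and the particular measures shown (per-group selection rate, error rate, and a rough calibration check) are illustrative choices, not measures mandated by the Blueprint.

```python
import pandas as pd

def disparity_report(df, group_col, label_col, pred_col, score_col):
    """Per-group performance measures plus simple parity gaps (illustrative)."""
    rows = []
    for group, g in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(g),
            "selection_rate": float((g[pred_col] == 1).mean()),
            "error_rate": float((g[pred_col] != g[label_col]).mean()),
            # Rough calibration check: average score among records with a positive label.
            "mean_score_given_positive": float(g.loc[g[label_col] == 1, score_col].mean()),
        })
    report = pd.DataFrame(rows)
    # Parity gap: spread between the best- and worst-treated groups on each measure.
    gaps = report[["selection_rate", "error_rate"]].max() - report[["selection_rate", "error_rate"]].min()
    return report, gaps
```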

Disparity mitigation. When a disparity assessment identifies a disparity against an assessed group, it may be appropriate to take steps to mitigate or eliminate the disparity. In some cases, mitigation or elimination of the disparity may be required by law. Disparities that have the potential to lead to algorithmic discrimination, cause meaningful harm, or violate equity49 goals should be mitigated. When designing and evaluating an automated system, steps should be taken to evaluate multiple models and select the one that has the least adverse impact, modify data input choices, or otherwise identify a system with fewer disparities. If adequate mitigation of the disparity is not possible, then the use of the automated system should be reconsidered. One of the considerations in whether to use the system should be the validity of any target measure; unobservable targets may result in the inappropriate use of proxies. Meeting these standards may require instituting mitigation procedures and other protective measures to address algorithmic discrimination, avoid meaningful harm, and achieve equity goals.
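One hedged illustration of the model-selection step described above, assuming a caller-supplied evaluate function that returns a model's overall accuracy and its worst-group disparity gap; the select_least_adverse_model helper and the accuracy budget are assumptions made for the sketch.

```python
def select_least_adverse_model(candidates, evaluate, max_accuracy_drop=0.02):
    """Pick the candidate with the smallest worst-group disparity gap while
    staying within an accuracy budget of the best-performing candidate.

    `evaluate(model)` is assumed to return (overall_accuracy, worst_group_gap);
    the 0.02 budget is an illustrative choice, not a prescribed threshold.
    """
    scored = [(model, *evaluate(model)) for model in candidates]
    best_accuracy = max(acc for _, acc, _ in scored)
    eligible = [item for item in scored if item[1] >= best_accuracy - max_accuracy_drop]
    # Among models that remain accurate enough, prefer the least disparate one.
    return min(eligible, key=lambda item: item[2])
```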

Ongoing monitoring and mitigation. Automated systems should be regularly monitored to assess algorithmic discrimination that might arise from unforeseen interactions of the system with inequities not accounted for during the pre-deployment testing, changes to the system after deployment, or changes to the context of use or associated data. Monitoring and disparity assessment should be performed by the entity deploying or using the automated system to examine whether the system has led to algorithmic discrimination when deployed. This assessment should be performed regularly and whenever a pattern of unusual results is occurring. It can be performed using a variety of approaches, taking into account whether and how demographic information of impacted people is available, for example via testing with a sample of users or via qualitative user experience research. Riskier and higher-impact systems should be monitored and assessed more frequently. Outcomes of this assessment should include additional disparity mitigation, if needed, or fallback to earlier procedures in the case that equity standards are no longer met and can't be mitigated, and prior mechanisms provide better adherence to equity standards.

Demonstrate that the system protects against algorithmic discrimination

Independent evaluation. As described in the section on Safe and Effective Systems, entities should allow independent evaluation of potential algorithmic discrimination caused by automated systems they use or oversee. In the case of public sector uses, these independent evaluations should be made public unless law enforcement or national security restrictions prevent doing so. Care should be taken to balance individual privacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and controls allow access to such data without compromising privacy.

Reporting. Entities responsible for the development or use of automated systems should provide reporting of an appropriately designed algorithmic impact assessment,50 with clear specification of who performs the assessment, who evaluates the system, and how corrective actions are taken (if necessary) in response to the assessment. This algorithmic impact assessment should include at least: the results of any consultation, design stage equity assessments (potentially including qualitative analysis), accessibility designs and testing, disparity testing, documentation of any remaining disparities, and details of any mitigation implementation and assessments. This algorithmic impact assessment should be made public whenever possible. Reporting should be provided in a clear and machine-readable manner using plain language to allow for more straightforward public accountability.
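As a brief sketch of how such an assessment might also be captured in machine-readable form alongside its plain-language version: the AlgorithmicImpactAssessment dataclass and its field names below are hypothetical, chosen to mirror the elements listed above rather than any mandated format.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative record of the elements listed above; not a mandated schema."""
    performed_by: str                  # who performs the assessment
    evaluated_by: str                  # who evaluates the system
    consultation_results: list = field(default_factory=list)
    design_stage_equity_assessment: str = ""
    accessibility_testing: str = ""
    disparity_testing_results: dict = field(default_factory=dict)
    remaining_disparities: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    corrective_actions: list = field(default_factory=list)

assessment = AlgorithmicImpactAssessment(
    performed_by="internal responsible-AI team (hypothetical)",
    evaluated_by="independent third-party auditor (hypothetical)",
    disparity_testing_results={"selection_rate_gap": 0.03},
)
print(json.dumps(asdict(assessment), indent=2))  # pair with a plain-language summary
```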

HOW THESE PRINCIPLES CAN MOVE INTO PRACTICE
Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.

The federal government is working to combat discrimination in mortgage lending. The Department of Justice has launched a nationwide initiative to combat redlining, which includes reviewing how lenders who may be avoiding serving communities of color are conducting targeted marketing and advertising.51 This initiative will draw upon strong partnerships across federal agencies, including the Consumer Financial Protection Bureau and prudential regulators. The Action Plan to Advance Property Appraisal and Valuation Equity includes a commitment from the agencies that oversee mortgage lending to include a nondiscrimination standard in the proposed rules for Automated Valuation Models.52

The Equal Employment Opportunity Commission and the Department of Justice have clearly laid out how employers' use of AI and other automated systems can result in discrimination against job applicants and employees with disabilities.53 The documents explain how employers' use of software that relies on algorithmic decision-making may violate existing requirements under Title I of the Americans with Disabilities Act ("ADA"). This technical assistance also provides practical tips to employers on how to comply with the ADA, and to job applicants and employees who think that their rights may have been violated.

Disparity assessments identified harms to Black patients' healthcare access. A widely used healthcare algorithm relied on the cost of each patient's past medical care to predict future medical needs, recommending early interventions for the patients deemed most at risk. This process discriminated against Black patients, who generally have less access to medical care and therefore have generated less cost than white patients with similar illness and need. A landmark study documented this pattern and proposed practical ways that were shown to reduce this bias, such as focusing specifically on active chronic health conditions or avoidable future costs related to emergency visits and hospitalization.54

Large employers have developed best practices to scrutinize the data and models used for hiring. An industry initiative has developed Algorithmic Bias Safeguards for the Workforce, a structured questionnaire that businesses can use proactively when procuring software to evaluate workers. It covers specific technical questions such as the training data used, model training process, biases identified, and mitigation steps employed.55

Standards organizations have developed guidelines to incorporate accessibility criteria into technology design processes. The most prevalent in the United States is the Access Board's Section 508 regulations,56 which are the technical standards for federal information communication technology (software, hardware, and web). Other standards include those issued by the International Organization for Standardization,57 and the World Wide Web Consortium Web Content Accessibility Guidelines,58 a globally recognized voluntary consensus standard for web content and other information and communications technology.

NIST has released Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.59 The special publication: describes the stakes and challenges of bias in artificial intelligence and provides examples of how and why it can chip away at public trust; identifies three categories of bias in AI (systemic, statistical, and human) and describes how and where they contribute to harms; and describes three broad challenges for mitigating bias (datasets, testing and evaluation, and human factors) and introduces preliminary guidance for addressing them. Throughout, the special publication takes a socio-technical perspective to identifying and managing AI bias.

DATA PRIVACY

You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.

WHY THIS PRINCIPLE IS IMPORTANT
This section provides a brief summary of the problems which the principle seeks to address and protect against, including illustrative examples.

Data privacy is a foundational and cross-cutting principle required for achieving all others in this framework. Surveillance and data collection, sharing, use, and reuse now sit at the foundation of business models across many industries, with more and more companies tracking the behavior of the American public, building individual profiles based on this data, and using this granular-level information as input into automated systems that further track, profile, and impact the American public. Government agencies, particularly law enforcement agencies, also use and help develop a variety of technologies that enhance and expand surveillance capabilities, which similarly collect data used as input into other automated systems that directly impact people's lives. Federal law has not grown to address the expanding scale of private data collection, or of the ability of governments at all levels to access that data and leverage the means of private collection.

Meanwhile, members of the American public are often unable to access their personal data or make critical decisions about its collection and use. Data brokers frequently collect consumer data from numerous sources without consumers' permission or knowledge.60 Moreover, there is a risk that inaccurate and faulty data can be used to make decisions about their lives, such as whether they will qualify for a loan or get a job. Use of surveillance technologies has increased in schools and workplaces, and, when coupled with consequential management and evaluation decisions, it is leading to mental health harms such as lowered self-confidence, anxiety, depression, and a reduced ability to use analytical reasoning.61 Documented patterns show that personal data is being aggregated by data brokers to profile communities in harmful ways.62 The impact of all this data harvesting is corrosive, breeding distrust, anxiety, and other mental health problems; chilling speech, protest, and worker organizing; and threatening our democratic process.63 The American public should be protected from these growing risks.

Increasingly, some companies are taking these concerns seriously and integrating mechanisms to protect consumer privacy into their products by design and by default, including by minimizing the data they collect, communicating collection and use clearly, and improving security practices. Federal government surveillance and other collection and use of data is governed by legal protections that help to protect civil liberties and provide for limits on data retention in some cases. Many states have also enacted consumer data privacy protection regimes to address some of these harms. However, these are not yet standard practices, and the United States lacks a comprehensive statutory or regulatory framework governing the rights of the public when it comes to personal data. While a patchwork of laws exists to guide the collection and use of personal data in specific contexts, including health, employment, education, and credit, it can be unclear how these laws apply in other contexts and in an increasingly automated society. Additional protections would assure the American public that the automated systems they use are not monitoring their activities, collecting information on their lives, or otherwise surveilling them without context-specific consent or legal authority.

- An insurer might collect data from a person's social media presence as part of deciding what life insurance rates they should be offered.64

- A data broker harvested large amounts of personal data and then suffered a breach, exposing hundreds of thousands of people to potential identity theft.65

- A local public housing authority installed a facial recognition system at the entrance to housing complexes to assist law enforcement with identifying individuals viewed via camera when police reports are filed, leading the community, both those living in the housing complex and not, to have videos of them sent to the local police department and made available for scanning by its facial recognition software.66

- Companies use surveillance software to track employee discussions about union activity and use the resulting data to surveil individual employees and surreptitiously intervene in discussions.67

WHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS
The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

Traditional terms of service (the block of text that the public is accustomed to clicking through when using a website or digital app) are not an adequate mechanism for protecting privacy. The American public should be protected via built-in privacy protections, data minimization, use and collection limitations, and transparency, in addition to being entitled to clear mechanisms to control access to and use of their data, including their metadata, in a proactive, informed, and ongoing way. Any automated system collecting, using, sharing, or storing personal data should meet these expectations.

Protect privacy by design and by default

Privacy by design and by default. Automated systems should be designed and built with privacy protected by default. Privacy risks should be assessed throughout the development life cycle, including privacy risks from reidentification, and appropriate technical and policy mitigation measures should be implemented. This includes potential harms to those who are not users of the automated system, but who may be harmed by inferred data, purposeful privacy violations, or community surveillance or other community harms. Data collection should be minimized and clearly communicated to the people whose data is collected. Data should only be collected or used for the purposes of training or testing machine learning models if such collection and use is legal and consistent with the expectations of the people whose data is collected. User experience research should be conducted to confirm that people understand what data is being collected about them and how it will be used, and that this collection matches their expectations and desires.

Data collection and use-case scope limits. Data collection should be limited in scope, with specific, narrow identified goals, to avoid mission creep. Anticipated data collection should be determined to be strictly necessary to the identified goals and should be minimized as much as possible. Data collected based on these identified goals and for a specific context should not be used in a different context without assessing for new privacy risks and implementing appropriate mitigation measures, which may include express consent. Clear timelines for data retention should be established, with data deleted as soon as possible in accordance with legal or policy-based limitations. Determined data retention timelines should be documented and justified.
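As a rough illustration of documenting scope limits and retention timelines, the sketch below pairs a collection-policy record with an expiry check; the COLLECTION_POLICY fields, the 90-day window, and the single narrow purpose are assumptions for the example, not values drawn from the Blueprint.

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy record: one narrow purpose, an explicit field list,
# and a documented retention window (values are assumptions for the sketch).
COLLECTION_POLICY = {
    "purpose": "fraud detection on submitted applications",
    "fields_collected": ["application_id", "submitted_at", "device_fingerprint"],
    "fields_excluded": ["precise_location", "contacts"],   # not strictly necessary
    "retention_days": 90,
    "reuse_in_new_context_requires": ["privacy risk assessment", "express consent"],
}

def is_expired(collected_at: datetime, policy=COLLECTION_POLICY) -> bool:
    """True when a record has outlived the documented retention timeline.
    Expects a timezone-aware timestamp."""
    return datetime.now(timezone.utc) - collected_at > timedelta(days=policy["retention_days"])
```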

Risk identification and mitigation. Entities that collect, use, share, or store sensitive data should attempt to proactively identify harms and seek to manage them so as to avoid, mitigate, and respond appropriately to identified risks. Appropriate responses include determining not to process data when the privacy risks outweigh the benefits or implementing measures to mitigate acceptable risks. Appropriate responses do not include sharing or transferring the privacy risks to users via notice or consent requests where users could not reasonably be expected to understand the risks without further support.

Privacy-preserving security. Entities creating, using, or governing automated systems should follow privacy and security best practices designed to ensure data and metadata do not leak beyond the specific consented use case. Best practices could include using privacy-enhancing cryptography or other types of privacy-enhancing technologies or fine-grained permissions and access control mechanisms, along with conventional system security protocols.

Protect the public from unchecked surveillance

Heightened oversight of surveillance. Surveillance or monitoring systems should be subject to heightened oversight that includes at a minimum assessment of potential harms during design (before deployment) and in an ongoing manner, to ensure that the American public's rights, opportunities, and access are protected. This assessment should be done before deployment and should give special attention to ensure there is not algorithmic discrimination, especially based on community membership, when deployed in a specific real-world context. Such assessment should then be reaffirmed in an ongoing manner as long as the system is in use.

Limited and proportionate surveillance. Surveillance should be avoided unless it is strictly necessary to achieve a legitimate purpose and it is proportionate to the need. Designers, developers, and deployers of surveillance systems should use the least invasive means of monitoring available and restrict monitoring to the minimum number of subjects possible. To the greatest extent possible consistent with law enforcement and national security needs, individuals subject to monitoring should be provided with clear and specific notice before it occurs and be informed about how the data gathered through surveillance will be used.

Scope limits on surveillance to protect rights and democratic values. Civil liberties and civil rights must not be limited by the threat of surveillance or harassment facilitated or aided by an automated system. Surveillance systems should not be used to monitor the exercise of democratic rights, such as voting, privacy, peaceful assembly, speech, or association, in a way that limits the exercise of civil rights or civil liberties. Information about or algorithmically-determined assumptions related to identity should be carefully limited if used to target or guide surveillance systems in order to avoid algorithmic discrimination; such identity-related information includes group characteristics or affiliations, geographic designations, location-based and association-based inferences, social networks, and biometrics. Continuous surveillance and monitoring systems should not be used in physical or digital workplaces (regardless of employment status), public educational institutions, and public accommodations. Continuous surveillance and monitoring systems should not be used in a way that has the effect of limiting access to critical resources or services or suppressing the exercise of rights, even where the organization is not under a particular duty to protect those rights.

Provide the public with mechanisms for appropriate and meaningful consent, access, and control over their data

Use-specific consent. Consent practices should not allow for abusive surveillance practices. Where data collectors or automated systems seek consent, they should seek it for specific, narrow use contexts, for specific time durations, and for use by specific entities. Consent should not extend if any of these conditions change; consent should be re-acquired before using data if the use case changes, a time limit elapses, or data is transferred to another entity (including being shared or sold). Consent requested should be limited in scope and should not request consent beyond what is required. Refusal to provide consent should be allowed, without adverse effects, to the greatest extent possible based on the needs of the use case.
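A small sketch of what use-specific consent could look like as data, assuming a hypothetical ConsentRecord tied to one purpose, one entity, and one time window, with a validity check that fails (so consent must be re-acquired) whenever any of those conditions changes; the names and fields are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ConsentRecord:
    """Illustrative use-specific consent record; not a schema defined by the Blueprint."""
    subject_id: str
    use_context: str        # the specific, narrow purpose consented to
    consented_entity: str   # the specific entity allowed to use the data
    expires_at: datetime    # timezone-aware end of the consented time duration

def consent_is_valid(record: ConsentRecord, requesting_entity: str, purpose: str) -> bool:
    """Consent holds only for the same entity, the same purpose, and before expiry;
    a change in any of these conditions means consent must be re-acquired."""
    return (
        requesting_entity == record.consented_entity
        and purpose == record.use_context
        and datetime.now(timezone.utc) < record.expires_at
    )
```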

Brief and direct consent requests. When seeking consent from users, short, plain language consent requests should be used so that users understand for what use contexts, time span, and entities they are providing data and metadata consent. User experience research should be performed to ensure these consent requests meet performance standards for readability and comprehension. This includes ensuring that consent requests are accessible to users with disabilities and are available in the language(s) and reading level appropriate for the audience. User experience design choices
