AI Now Report 2018

Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University

With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University)

DECEMBER 2018

CONTENTS
ABOUT THE AI NOW INSTITUTE
RECOMMENDATIONS
EXECUTIVE SUMMARY
INTRODUCTION
1. THE INTENSIFYING PROBLEM SPACE
1.1 AI is Amplifying Widespread Surveillance
    The faulty science and dangerous history of affect recognition
    Facial recognition amplifies civil rights concerns
1.2 The Risks of Automated Decision Systems in Government
1.3 Experimenting on Society: Who Bears the Burden?
2. EMERGING SOLUTIONS IN 2018
2.1 Bias Busting and Formulas for Fairness: the Limits of Technological “Fixes”
    Broader approaches
2.2 Industry Applications: Toolkits and System Tweaks
2.3 Why Ethics is Not Enough
3. WHAT IS NEEDED NEXT
3.1 From Fairness to Justice
3.2 Infrastructural Thinking
3.3 Accounting for Hidden Labor in AI Systems
3.4 Deeper Interdisciplinarity
3.5 Race, Gender and Power in AI
3.6 Strategic Litigation and Policy Interventions
3.7 Research and Organizing: An Emergent Coalition
CONCLUSION
ENDNOTES

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

ABOUT THE AI NOW INSTITUTE

The AI Now Institute at New York University is an interdisciplinary research institute dedicated to understanding the social implications of AI technologies. It is the first university research center focused specifically on AI's social significance. Founded and led by Kate Crawford and Meredith Whittaker, AI Now is one of the few women-led AI institutes in the world.

AI Now works with a broad coalition of stakeholders, including academic researchers, industry, civil society, policy makers, and affected communities, to identify and address issues raised by the rapid introduction of AI across core social domains. AI Now produces interdisciplinary research to help ensure that AI systems are accountable to the communities and contexts they are meant to serve, and that they are applied in ways that promote justice and equity. The Institute's current research agenda focuses on four core areas: bias and inclusion, rights and liberties, labor and automation, and safety and critical infrastructure.

Our most recent publications include:

- Litigating Algorithms, a major report assessing recent court cases focused on government use of algorithms
- Anatomy of an AI System, a large-scale map and longform essay produced in partnership with SHARE Lab, which investigates the human labor, data, and planetary resources required to operate an Amazon Echo
- Algorithmic Impact Assessment (AIA) Report, which helps affected communities and stakeholders assess the use of AI and algorithmic decision-making in public agencies
- Algorithmic Accountability Policy Toolkit, which is geared toward advocates interested in understanding government use of algorithmic systems

We also host expert workshops and public events on a wide range of topics. Our workshop on Immigration, Data, and Automation in the Trump Era, co-hosted with the Brennan Center for Justice and the Center for Privacy and Technology at Georgetown Law, focused on the Trump Administration's use of data harvesting, predictive analytics, and machine learning to target immigrant communities. The Data Genesis Working Group convenes experts from across industry and academia to examine the mechanics of dataset provenance and maintenance. Our roundtable on Machine Learning, Inequality and Bias, co-hosted in Berlin with the Robert Bosch Academy, gathered researchers and policymakers from across Europe to address issues of bias, discrimination, and fairness in machine learning and related technologies.

Our annual public symposium convenes leaders from academia, industry, government, and civil society to examine the biggest challenges we face as AI moves into our everyday lives. The AI Now 2018 Symposium addressed the intersection of AI ethics, organizing, and accountability, examining the landmark events of the past year. Over 1,000 people registered for the event, which was free and open to the public. Recordings of the program are available on our website. More information is available at www.ainowinstitute.org.

RECOMMENDATIONS

1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. The implementation of AI systems is expanding rapidly, without adequate governance, oversight, or accountability regimes. Domains like health, education, criminal justice, and welfare all have their own histories, regulatory frameworks, and hazards. However, a national AI safety body or general AI standards and certification model will struggle to meet the sectoral expertise requirements needed for nuanced regulation. We need a sector-specific approach that does not prioritize the technology, but focuses on its application within a given domain. Useful examples of sector-specific approaches include the United States Federal Aviation Administration and the National Highway Traffic Safety Administration.

2. Facial recognition and affect recognition need stringent regulation to protect the public interest. Such regulation should include national laws that require strong oversight, clear limitations, and public transparency. Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance. Affect recognition deserves particular attention. Affect recognition is a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and “worker engagement” based on images or video of faces. These claims are not backed by robust scientific evidence, and are being applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology and physiognomy. Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level.

3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems. Government regulation is an important component, but leading companies in the AI industry also need internal accountability structures that go beyond ethics guidelines. This should include rank-and-file employee representation on the board of directors, external ethics advisory boards, and the implementation of independent monitoring and transparency efforts. Third party experts should be able to audit and publish about key systems, and companies need to ensure that their AI infrastructures can be understood from “nose to tail,” including their ultimate application and use.

4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Vendors and developers who create AI and automated decision systems for use in government should agree to waive any trade secrecy or other legal claim that inhibits full auditing and understanding of their software. Corporate secrecy laws are a barrier to due process: they contribute to the “black box effect,” rendering systems opaque and unaccountable, making it hard to assess bias, contest decisions, or remedy errors. Anyone procuring these technologies for use in the public sector should demand that vendors waive these claims before entering into any agreements.

5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. Organizing and resistance by technology workers has emerged as a force for accountability and ethical decision making. Technology companies need to protect workers' ability to organize, whistleblow, and make ethical choices about what projects they work on. This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution. Workers raising ethical concerns must also be protected, as should whistleblowing in the public interest.

6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. The hype around AI is only growing, leading to widening gaps between marketing promises and actual product performance. With these gaps come increasing risks to both individuals and commercial customers, often with grave consequences. Much like other products and services that have the potential to seriously impact or exploit populations, AI vendors should be held to high standards for what they can promise, especially when the scientific evidence to back these promises is inadequate and the longer-term consequences are unknown.

7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. Technology companies and the AI field as a whole have focused on the “pipeline model,” looking to train and hire more diverse employees. While this is important, it overlooks what happens once people are hired into workplaces that exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability. Companies need to examine the deeper issues in their workplaces, and the relationship between exclusionary cultures and the products they build, which can produce tools that perpetuate bias and discrimination. This change in focus needs to be accompanied by practical action, including a commitment to end pay and opportunity inequity, along with transparency measures about hiring and retention.

8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” For meaningful accountability, we need to better understand and track the component parts of an AI system and the full supply chain on which it relies: that means accounting for the origins and use of training data, test data, models, application program interfaces (APIs), and other infrastructural components over a product life cycle. We call this accounting for the “full stack supply chain” of AI systems, and it is a necessary condition for a more responsible form of auditing. The full stack supply chain also includes understanding the true environmental and labor costs of AI systems. This incorporates energy use, the use of labor in the developing world for content moderation and training data creation, and the reliance on clickworkers to develop and maintain AI systems. A sketch of what such component-level accounting might look like in practice follows this list of recommendations.

9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. The people most at risk of harm from AI systems are often those least able to contest the outcomes. We need increased support for robust mechanisms of legal redress and civic participation. This includes supporting public advocates who represent those cut off from social services due to algorithmic decision making, civil society organizations and labor organizers that support groups that are at risk of job loss and exploitation, and community-based infrastructures that enable public participation.

10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.
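To make the component-level accounting of Recommendation 8 concrete, below is a minimal sketch, not taken from the report, of what a “full stack supply chain” record might look like in code. All class names, fields, and the example product are hypothetical illustrations of the components the recommendation names: datasets, models, APIs, and the labor and energy costs behind them.

```python
# Hypothetical sketch of "full stack supply chain" accounting:
# one provenance record per component of an AI product.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ComponentRecord:
    """Provenance for one component in the supply chain."""
    name: str                 # e.g. "training data", "ranking model"
    kind: str                 # "dataset", "model", "api", "infrastructure"
    origin: str               # where it came from (vendor, scrape, annotation firm)
    labor_sources: List[str]  # e.g. contracted clickwork for labeling
    estimated_energy_kwh: float = 0.0  # energy cost over the life cycle, if known


@dataclass
class SupplyChainAudit:
    """A full-stack view of one AI product over its life cycle."""
    product: str
    components: List[ComponentRecord] = field(default_factory=list)

    def missing_provenance(self) -> List[str]:
        # Flag components with no recorded origin: these are the
        # "black box" points where auditing breaks down.
        return [c.name for c in self.components if not c.origin]


audit = SupplyChainAudit(
    product="resume-screening service",
    components=[
        ComponentRecord("training resumes", "dataset",
                        origin="third-party vendor",
                        labor_sources=["crowdworker labeling"]),
        ComponentRecord("ranking model", "model",
                        origin="", labor_sources=[]),
    ],
)
print(audit.missing_provenance())  # -> ['ranking model']
```

In practice, a record of this kind would travel with procurement and audit documents, so that a component with unknown origins is visible before a system is deployed rather than after it causes harm.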

EXECUTIVE SUMMARY

At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight, including basic safeguards of responsibility, liability, and due process, is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues:

1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
4. Unregulated and unmonitored forms of AI experimentation on human populations
5. The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute's location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world's largest AI companies are based.

The AI accountability gap is growing: The techno
