The Morals of Algorithms: A Contribution to the Ethics of AI Systems, NXP, 2020 (14 pages, English)

THE MORALS OF ALGORITHMS
A contribution to the ethics of AI systems

INTRODUCTORY REMARKS

When discussing artificial intelligence (AI) with researchers and scientists, one can easily pick up on their enthusiastic optimism about the tremendous potential that AI-based applications carry for the whole of humankind. This includes machine learning, a subset of AI, which uses computer algorithms to automatically learn and improve from environmental data without being explicitly programmed. From detecting COVID-19 in X-ray and CT scans, to diagnosing skin or breast cancer far more reliably than any human physician could, preventing road fatalities, detecting illegal rainforest logging or fighting wildlife poaching: machine-trained AI systems provide the means to tackle some of the world's most challenging health crises, social problems and environmental issues, potentially helping hundreds of millions of people in advanced, developing and emerging countries.

At the same time, there is increasing concern over potential threats that AI-powered systems pose. Not every new AI application is suitable to instill trust in people's hearts and minds. From racist chatbots, to discriminating algorithms or sexist recruitment tools, there are regular reports about instances of AI gone wrong. Rapidly advancing forms of artificial intelligence can prevent accidents and improve transportation, health care and scientific research. Or, they can violate human rights by enabling mass surveillance, driving cyber attacks and spreading false news.

1. TEACHING MORALS TO MACHINES: THE BASICS

The fundamental ethical dilemma surrounding the questions of how to teach AI values and how to ensure our ethical preferences can be embedded in code will accompany us for quite some time. Casting ethical values into a set of rules for machines to follow is no trivial task. To start with, not even humans appear to uniformly agree on what is right and what is wrong conduct. Ethical principles are, so it seems, quite culture-specific and vary considerably across different groups. In an attempt to show how divergent people's ethical values can be, MIT researchers created a platform called the Moral Machine. This online tool is a variant of the classic trolley problem: study participants are asked to choose whose lives a self-driving car should prioritize in a potentially fatal accident. By asking millions of people around the globe for their solution to the dilemma, the researchers found that people's ethical judgement shows some variation across different cultures.

AI applications, building on big data and combined with the omnipresence of devices and sensors of the IoT, will eventually govern core functions of society. These applications will reach from education to health, science to business, right up to the sphere of law, security and defense, political discourse and democratic decision making. While some societies strive to leverage the potential of AI to achieve compliance with behavioral norms and regulation, other cultures are taking a more cautious approach when it comes to balancing the rights of the individual against the claims of society as a whole.

GOVERNING BODIES

The EU and U.S. are both working on policies and laws targeting artificial intelligence. Both entities have started to draft guidelines for AI-based applications to serve as ethical frameworks for future global usage. These guidelines draw inspiration from the Universal Declaration of Human Rights. The common thread in these guidelines is a human-centric approach in which human beings maintain their unique status of primacy in civil, political, economic and social fields.

The EU's ethical guidelines on AI mirror its basic ethical principles by operationalizing these in the context of AI: cultivating the constitutional commitment to protect universal and indivisible human rights, ensuring respect for the rule of law, fostering democratic freedom and promoting the common good. Other legal instruments further specify this commitment; for instance, the Charter of Fundamental Rights of the European Union or specific legislative acts such as the General Data Protection Regulation (GDPR).

The European Commission has set up the High-Level Expert Group on Artificial Intelligence (AI HLEG). This group has defined respect of European values and principles, as set out in the EU Treaties and the Charter of Fundamental Rights, as the core principle of its "human-centric" approach to AI ethics. In line with its guidelines for trustworthy AI systems, the group has laid down three principles that need to be met throughout the entire lifecycle of an AI system:

Lawful AI: Compliance with all laws and regulations, regardless of the positive (what may or should be done) or negative (what cannot be done) nature of imposed rules of conduct.

Ethical AI: Adherence to ethical principles and values. Cultivating the societal commitment to protect universal and indivisible human rights, ensuring respect for the rule of law, fostering democratic freedom and promoting the common good. Holding up a human-centric approach, in which human beings maintain their unique status of primacy in civil, political, economic and social fields.

Robust AI: Systems should be able to perform in a safe and secure manner, with safeguards carefully installed in order to avoid unintended adverse impacts. Therefore, it is necessary to ensure that AI systems are robust, not only from a technological but also from a social perspective.

HOLISTIC APPROACH TO ETHICAL AND TRUSTWORTHY AI

In spite of the fact that each of these principles is fundamental for a successful deployment of AI systems, people should not regard them as comprehensive or self-sufficient. It is up to us as a society to ensure their harmonization and alignment. In addition to these guidelines and principles, the HLEG has issued an assessment list to aid stakeholders in checking their policies and processes against the requirements for trustworthy AI:

Equitable: Minimize bias in AI.

Traceable: Possess transparent and auditable data sources and methodologies; have appropriate understanding of the AI technologies used.

Reliable: AI will have explicit and well-defined usage; the security, safety and effectiveness of AI deployments will be ensured.

Governable: Possess the ability to deactivate or disengage AI systems; detection and avoidance of unintended consequences.

Responsible: Responsible AI development and deployment.

PRIVATE SECTOR

Governments are not the only entities advancing ethical AI frameworks. As companies look to maintain a competitive edge in a continually evolving marketplace, businesses leading the AI sector, such as IBM, Google, Microsoft and Bosch, have published their own AI ethical principles. Other companies, including Facebook and Amazon, have joined consortiums such as the Partnership on AI (PAI) and the Information Technology Industry Council (ITI). These companies also support standardization bodies, such as the IEEE, that are driving ethical principles for the design of AI systems. These latter consortiums have developed and published their own ethical codes. In a nutshell, these principles are based on:

Transparency: The decision-making process of AI systems should be explainable in such terms that people are able to understand the AI's conclusions and recommendations. Users should be aware at all times that they are interacting with an AI system. We are aware that, by nature, certain algorithms do not offer the capability to explain the entirety of their decision making. In this case, it is important to make clear the nature and functionality of these algorithms.

Fairness: Minimize algorithmic bias through ongoing research and data collection which is representative of a diverse population, aligning with local regulations designed to avoid discrimination.

Safety: Develop and apply strong safety and security practices so that the system behaves reliably and cannot be tampered with.

Privacy: Provide appropriate transparency and control over the use of data. Users should always maintain control over what data is being used and in what context. They must be able to deny access to personal data that they may find compromising or unfit for an AI to know or use.

2. OUR METHODOLOGY: NXP AI PRINCIPLES

Designing trustworthy AI/ML requires secure solutions for a smarter world that reflect ethical principles deeply rooted in fundamental NXP values. We focus on the design, development and deployment of artificial intelligence systems that learn from and collaborate with humans in a deep, meaningful way.

1. The Principle of Non-Maleficence: "Be Good"
AI systems should not harm human beings. By design, AI building blocks should protect the dignity, integrity, liberty, privacy, safety and security of human beings in society and at work. Human well-being should be the desired outcome in all system designs. Algorithmic bias in relation to AI systems that work with personal data should be minimized through ongoing research and data collection. AI systems should reflect a diverse population and prevent unjust impacts on people, particularly those impacts related to characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, disability, political affiliation or religious belief.

2. The Principle of Human Autonomy: "Human-Centric AI"
AI systems should preserve the autonomy of human beings and warrant freedom from subordination to or coercion by AI systems. The conscious act to employ AI and its smart agency, while ceding some of our decision-making power to machines, should always remain under human control, so as to achieve a balance between the decision-making power we retain for ourselves and that which we delegate to artificial agents, as well as to ensure compliance with privacy principles.

3. The Principle of Explainability: "Operate Transparently"
At NXP, we encourage explainability and transparency of AI decision-making processes in order to build and maintain trust in AI systems. Users need to be aware that they are interacting with an AI system, and they need the ability to retrace that AI system's decisions. Explainable AI models will propel AI usage in medical diagnoses, where explainability is an ethical practice often required by medical standards, as well as by EU regulations such as the GDPR. Additionally, an AI system needs to be interpretable. The goal of interpretability is to describe the internals of the system in a way that is understandable to humans. The system should be capable of producing descriptions that are simple enough for a person to understand. It should also use a vocabulary that is meaningful to the user and will enable the user to understand how a decision is made. This might also reveal issues of inadequate or insufficient data used for model training. At the same time, we are aware that measures to protect IP and to provide safety and security features will require maintaining secrecy about certain operating principles. Otherwise, knowledge of internal operating principles could be misused to stage an adversarial attack on the system.

4. The Principle of Continued Attention and Vigilance: "High Standards and Ecosystems"
We aspire to the highest standards of scientific excellence as we work to progress AI development. Drawing on rigorous and multidisciplinary scientific approaches, we promote thought leadership in this area in close cooperation with a wide range of stakeholders. We will continue to share what we've learned to improve AI technologies and practices. Thus, in order to promote cross-industrial approaches to AI risk mitigation, we foster multi-stakeholder networks to share new insights, best practices and information about incidents. We also foster these networks to identify potential risks beyond today's practices, especially those related to human physical integrity or the protection of critical infrastructure. As designers and developers of AI systems, it is imperative that we understand the ethical considerations of our work. A technology-centric focus that solely revolves around improving the capabilities of an intelligent system doesn't sufficiently consider human needs. By empowering our designers and developers to make ethical decisions throughout all development stages, we ensure that they never work in a vacuum and always stay in tune with users' needs and concerns.

5. The Principle of Privacy and Security by Design: "Trusted AI Systems"
AI must rely on two basic principles: security by design and privacy by design. Security and privacy must be taken into account at the very beginning of a new system architecture; they cannot be added as an afterthought. We must apply the highest appropriate level of security and data protection to all hardware and software, ensuring that it is pre-configured into the design, functionalities, processes, technologies, operations, architectures and business models. This also requires establishing a risk-based methodology and verification to be implemented as baseline requirements for the entire supply chain. In our view, the Charter of Trust initiative for cybersecurity in the IoT has already provided an excellent template for this. When it comes to the attribution of liability in case of damages caused by AI-enabled products or services, the implementation of state-of-the-art privacy and security technology can also serve as a key criterion to be assessed. Privacy by design is enabled through secure management of user identities and data security. Traditional software attack vectors must still be addressed, but addressing them does not provide sufficient coverage of the AI/ML threat landscape. The tech industry should not approach next-gen issues with last-gen solutions. Instead, it should strive to build new frameworks and adopt new approaches which address gaps in the design and operation of AI/ML-based services. NXP considers privacy to be a pivotal human right. We are committed to the concept of privacy by design and to incorporating an appropriate level of security and data protection in hardware and software, within the realm of our control.

But once corporate principles are set up, how do we ensure that AI systems are designed, and more importantly deployed, in an ethical way? To that end, ethics officers and review boards should be empowered to oversee these principles and warrant their compliance with ethical frameworks.

ROOTING MORALS ON THE PHYSICAL LEVEL

This leads us to the underlying question: once we have established a basic set of ethical rules for AI, how do we ensure that AI-based systems cannot become compromised? What is often overlooked is the fact that, in order to implement ethical imperatives, we first have to trust AI on a more fundamental level. Can we? While progress in AI has unfortunately coincided with the development of new
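The bias-minimization theme that recurs above, in the Fairness principle and in the Principle of Non-Maleficence, can be made concrete with a quantitative check. Below is a minimal sketch using the demographic parity gap, one common (though by no means the only) proxy for algorithmic bias; the paper does not prescribe a specific metric, and the group names and toy decisions here are invented for illustration:

```python
def positive_rate(decisions):
    """Fraction of favorable (1) decisions in one group's outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in favorable-outcome rate between any two groups.

    A gap near 0 means the model grants favorable outcomes at similar
    rates across groups; a large gap is a signal worth investigating.
    """
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Toy model outputs (1 = favorable decision) for two hypothetical groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 = 75% favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 3/8 = 37.5% favorable
}

print(f"demographic parity gap: {demographic_parity_gap(outcomes):.3f}")
# prints "demographic parity gap: 0.375"
```

Such a check is only a screening tool: a nonzero gap does not by itself prove unjust impact, but monitoring it over time is one practical way to operationalize the "minimize bias" requirement.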
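Similarly, the interpretability goal stated under the Principle of Explainability (describing the internals of a system in terms a person can understand) is often approximated in practice with model-agnostic techniques. The sketch below shows permutation importance, one such technique, not one endorsed by the paper; the toy model, data and seed are invented for illustration:

```python
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's column is randomly shuffled.

    A large drop means the model relies heavily on that feature; a drop
    near zero means the feature barely influences its decisions.
    """
    rng = random.Random(seed)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    shuffled = [list(r) for r in rows]
    for r, v in zip(shuffled, column):
        r[feature_idx] = v
    return accuracy(model, rows, labels) - accuracy(model, shuffled, labels)

# Toy model: predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
model = lambda row: int(row[0] > 0.5)
rows = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
labels = [1, 0, 1, 0]

print(permutation_importance(model, rows, labels, 0))  # decisive feature
print(permutation_importance(model, rows, labels, 1))  # 0.0, unused feature
```

Reporting which inputs drive a decision is exactly the kind of "vocabulary that is meaningful to the user" the text calls for, and, as noted above, it can also surface inadequate or insufficient training data.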
