朱小虎 (Zhu Xiaohu): Responsible AI — Interpretability, Privacy, Governance

Responsible AI: Interpretability, Privacy, Governance
Shanghai

Gemini
Gemini's capabilities and potential applications are vast, but addressing ethical considerations and ensuring responsible development will be crucial for its success. Gemini is natively multimodal, which gives you the potential to transform any type of input into any type of output, reasoning seamlessly across text, images, video, audio, and code.

"We're approaching this work boldly and responsibly. [...] foundation models and infrastructure and bring them to our products and to others, guided by our AI Principles." - Sundar Pichai

"At Google, we're committed to advancing bold and responsible AI in everything we do. Building upon Google's AI Principles and the robust safety policies across our products, we're adding new protections to account for Gemini's multimodal capabilities. At each stage of development, we're considering potential risks and working to test and mitigate them." - Demis Hassabis

Gemini 1.0
1. Gemini Ultra: our largest and most capable model, for highly complex tasks.
2. Gemini Pro: our best model for scaling across a wide range of tasks.
3. Gemini Nano: our most efficient model, for on-device tasks.

"Using a specialized version of Gemini, we created a more advanced code generation system, AlphaCode 2, which excels at solving competitive programming problems that go beyond coding to involve complex math and theoretical computer science." - Demis Hassabis

AlphaCode 2
Figure 1 | High-level overview of the AlphaCode 2 system.

Gemini Pro
Bard uses Gemini Pro. Before bringing it to the public, we ran Gemini Pro through a number of industry-standard benchmarks. In six out of eight benchmarks, Gemini Pro outperformed GPT-3.5, including MMLU (Massive Multitask Language Understanding), one of the key leading standards for measuring large AI models, and GSM8K, which measures grade-school math reasoning.

Gemini Ultra
We are completing extensive trust and safety checks, including red-teaming by trusted external parties, and refining the model using fine-tuning and reinforcement learning from human feedback (RLHF) before making it broadly available. Select customers, developers, partners, and safety and responsibility experts get access for early experimentation and feedback before it rolls out to developers and enterprise customers in early 2024. In 2024, Bard Advanced, a new, cutting-edge AI experience, will give you access to our best models and capabilities, starting with Gemini Ultra.

What is responsible AI?
Responsible AI
1. Responsible AI refers to the ethical and responsible development, deployment, and use of artificial intelligence systems.
2. As AI becomes increasingly integrated into our daily lives, it has the potential to significantly impact society, both positively and negatively.
3. It is crucial to ensure that AI systems are designed and used in a way that minimizes harm and maximizes benefits for individuals and communities.

Responsible Gemini development
A structured approach to responsible deployment in order to identify, measure, and manage foreseeable downstream societal impacts of our models, in line with previous releases of Google's AI technology (Kavukcuoglu et al., 2022). Throughout the lifecycle of the project, we follow the structure below. Ref: https:/

Interpretability
1. AI systems should be transparent and explainable, providing insights into their decision-making process.
2. Interpretability is a critical aspect of AI models, as it allows us to understand how and why a model arrives at a particular decision or prediction. It helps in building trust and accountability in AI systems.

Interpretability spectrum
1. Interpretability plays a crucial role in ensuring fairness in AI systems. It allows for the identification and mitigation of biases and discriminatory patterns, enabling the development of more equitable and inclusive AI models.

Inner Interpretability
1. Mechanistic Interpretability
2. Developmental Interpretability

Interpretability is the key we need.
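As a concrete illustration of the transparency goal above, here is a minimal, self-contained sketch (a generic example, not code from the talk; the data is synthetic and the training loop is plain gradient descent). Because the model is linear, its learned weights directly expose which input features drive its predictions, which is the simplest end of the interpretability spectrum:

```python
import math
import random

random.seed(0)

# Synthetic binary-classification data: the label depends only on feature 0,
# so an interpretable model should assign feature 0 the largest weight.
def make_example():
    x = [random.uniform(-1.0, 1.0) for _ in range(3)]
    y = 1 if x[0] > 0 else 0  # ground truth uses feature 0 only
    return x, y

data = [make_example() for _ in range(500)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression trained with plain batch gradient descent.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):
    gw = [0.0, 0.0, 0.0]
    gb = 0.0
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(data)
    b -= lr * gb / len(data)

# "Interpretability" for a linear model: the weight magnitudes are direct
# attributions, so ranking features by |weight| explains the decisions.
ranking = sorted(range(3), key=lambda i: -abs(w[i]))
print("weights:", [round(wi, 3) for wi in w])
print("most influential feature:", ranking[0])
```

Mechanistic interpretability asks the analogous question of deep networks, where no such direct weight-to-feature reading exists.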

Privacy
Privacy: overview
1. AI systems should respect and protect the privacy of individuals, ensuring that personal data is handled securely and confidentially.
2. Privacy in AI refers to the protection of personal information and data used by AI systems. As AI becomes more prevalent in our daily lives, it is important to ensure that personal information is not misused or mishandled by these systems.

Importance of Privacy in AI
Privacy in AI is important for several reasons. First, it ensures that personal information is not used for malicious purposes, such as identity theft or discrimination. Second, it promotes trust in AI systems, as users are more likely to use them if they feel that their privacy is being protected. Finally, it is important for compliance with regulations, such as the General Data Protection Regulation (GDPR) in the European Union.

Risks of Data Breaches
One of the main privacy considerations in AI is the risk of data breaches. If sensitive data is not properly protected, it can be accessed by unauthorized individuals, leading to potential harm and misuse.

Unauthorized Access
Another privacy concern in AI is unauthorized access. If AI systems are not securely designed and implemented, they can be vulnerable to hackers and other malicious actors who may exploit the system for their own gain.

User Trust
Privacy in AI is crucial for establishing user trust. When users feel that their personal data is being handled responsibly and securely, they are more likely to trust AI systems and engage with them.

Compliance with Regulations
Privacy in AI is also essential for complying with regulations. Many countries have strict data protection laws that require organizations to handle personal data in a secure and transparent manner. Failure to comply with these regulations can result in severe penalties and damage to an organization's reputation.

Privacy Technology: Differential Privacy
Differential privacy is a technique that aims to protect the privacy of individual data points while still allowing for useful analysis and insights. It adds noise to the data to ensure that no individual's information can be distinguished or inferred from the aggregate data. This technique is particularly important in AI applications that involve sensitive personal data.
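The mechanism described above can be sketched in a few lines. This is a generic illustration, not code from the talk: it releases a counting query under epsilon-differential privacy by adding Laplace noise calibrated to the query's sensitivity (1 for a count, since adding or removing one record changes it by at most 1):

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon):
    """Release a counting query under epsilon-differential privacy.

    A count has sensitivity 1, so Laplace noise with scale 1/epsilon
    suffices; smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 61, 38, 45]
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)
print(round(noisy, 2))  # true count is 4; the release is 4 plus Laplace noise
```

The analyst sees only the noisy value, so no single record's presence or absence can be confidently inferred from the output.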

Federated Learning
Federated learning is a privacy-preserving technique that enables training of AI models without the need to centralize data. Instead, the training process takes place locally on individual devices or servers, and only the model updates are shared with a central server. This approach helps protect the privacy of user data while still allowing for model improvement and learning from a diverse range of data sources.

Secure Multiparty Computation
Secure multiparty computation (MPC) is a technique that allows multiple parties to jointly compute a function on their private inputs without revealing their individual inputs to each other. This technique ensures privacy and confidentiality in AI applications that involve collaboration and sharing of sensitive data. MPC enables parties to perform computations while keeping their data encrypted and secure.

Privacy is key to AI-native.
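A minimal sketch of the core MPC primitive, additive secret sharing (a generic illustration, not code from the talk; the same idea underpins secure aggregation of model updates in federated learning): each party splits its private value into random shares that individually reveal nothing, yet the shares of all parties combine to the true joint result:

```python
import random

MOD = 2**61 - 1  # all arithmetic is done modulo a public prime

def share(secret, n_parties):
    """Split a secret into n additive shares; any n-1 of them look random."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Three parties jointly compute the sum of their private inputs.
inputs = [42, 17, 99]
all_shares = [share(x, 3) for x in inputs]

# Party i only ever sees the i-th share of each input...
per_party_sums = [sum(col) % MOD for col in zip(*all_shares)]

# ...yet the published per-party sums reconstruct the true total.
print(reconstruct(per_party_sums))  # 158 = 42 + 17 + 99
```

No party learns another party's input: each share is uniformly random on its own, and only the final aggregate is revealed.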

Governance
Governance: overview
1. AI systems should be developed and deployed in a responsible and ethical manner, adhering to legal and regulatory frameworks.

Ensuring Ethical and Responsible Use
1. Governance in AI plays a critical role in ensuring the ethical and responsible use of AI technologies. It involves establishing policies, guidelines, and frameworks to guide the development, deployment, and use of AI systems.
2. By implementing effective governance measures, organizations can address potential risks and challenges associated with AI, such as bias, privacy concerns, and lack of transparency.
3. Governance in AI also helps in building trust and accountability, as it provides a structured approach to ensure that AI systems are developed and used in a fair, transparent, and responsible manner.

Perspectives on Issues in AI Governance
1. Explainability standards
2. Fairness appraisal
3. Safety considerations
4. Human-AI collaboration
5. Liability frameworks

EU AI Act

Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI. MEPs reached a political deal with the Council on a bill to ensure AI in Europe is safe and respects fundamental rights and democracy, while businesses can thrive and expand.
1. Safeguards agreed on general-purpose artificial intelligence.
2. Limits on the use of biometric identification systems by law enforcement.
3. Bans on social scoring and on AI used to manipulate or exploit user vulnerabilities.
4. Right of consumers to launch complaints and receive meaningful explanations.
5. Fines ranging from 35 million euro or 7% of global turnover down to 7.5 million euro or 1.5% of turnover.

UK AI Policy
US AI Governance
Governance needs innovation.

Our Proposal: Proposed Social Structures
1. Establishment of an AGI Governance Org: an organization composed of representatives from government, industry, academia, and civil society to oversee the development and deployment of AGI.
2. Creation of an AGI Ethics Committee: a committee responsible for developing and enforcing ethical guidelines for the development and deployment of AGI.
3. Establishment of an AGI Transparency Framework: a framework that requires AGI developers to disclose the source code, data, and algorithms used in their products to ensure transparency and accountability.

Keep calm and move responsibly.
Shanghai
