
Principles to Practice: Responsible AI in a Dynamic Regulatory Environment

The permanent and official location for the AI Governance and Compliance Working Group is https://cloudsecurityalliance.org/research/working-groups/ai-governance-compliance

© 2024 Cloud Security Alliance — All Rights Reserved. You may download, store, display on your computer, view, print, and link to the Cloud Security Alliance at https://cloudsecurityalliance.org subject to the following: (a) the draft may be used solely for your personal, informational, noncommercial use; (b) the draft may not be modified or altered in any way; (c) the draft may not be redistributed; and (d) the trademark, copyright or other notices may not be removed. You may quote portions of the draft as permitted by the Fair Use provisions of the United States Copyright Act, provided that you attribute the portions to the Cloud Security Alliance.

Acknowledgments

Lead Authors: Maria Schwenger, Louis Pinault

Contributors: Arpitha Kaushik, Bhuvaneswari Selvadurai, Joseph Martella

Reviewers: Alan Curran MSc, Udith Wickramasuriya, Piradeepan Nagarajan, Rakesh Sharma, Gaetano Bisaz, Hongtao Hao, Jan Gerst, Ashish Vashishtha, Gaurav Singh, Ken Huang, Frederick Hnig, Dirce Hernandez, Tolgay Kizilelma, PhD, Saurav Bhattacharya, Michael Roza, Gabriel Nwajiaku, Vani Mittal, Meghana Parwate, Desmond Foo, Lars Ruddigkeit, Madhavi Najana

CSA Global Staff: Ryan Gifford, Stephen Lumpe

Table of Contents

- Acknowledgments
- Table of Contents
- Safe Harbor Statement
  - Forward-Focused Statements and the Evolving Landscape of Artificial Intelligence
- Document Summary
- Executive Summary
- Introduction
- Scope and Applicability
- Key Areas of Legal and Regulatory Focus for Generative AI
  - Data Privacy and Security
    - General Data Protection Regulation (GDPR) (EU)
      1. Lawful and transparent data collection and processing
      2. Data security and accountability
      3. Individual rights and control
    - California Consumer Privacy Act / California Privacy Rights Act (CCPA/CPRA)
      1. Data collection, storage, use, and disclosure under CCPA/CPRA
      2. Consumer Rights
      3. Compliance & Enforcement
      4. Draft Automated Decision-Making Technology (ADMT) Regulations
      5. California Executive Order on Generative AI
    - European Union AI Act (EU AI Act / EUAIA)
      - EUAIA Compliance for Generative AI
        1. Requirements, Obligations and Provisions
        2. Promoting Innovation (Articles 57-63)
        3. Prohibitions on certain AI practices
    - Health Insurance Portability and Accountability Act (HIPAA)
      - HIPAA Compliance for GenAI
      - Addressing the Impact of GenAI's Hallucinations on Data Privacy, Security, and Ethics
    - DHS Policy Statement 139-07 Impact on Gen AI
    - Federal Trade Commission Policy Advocacy & Research Guidance
      - AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive
      - AI Companies: Uphold Your Privacy and Confidentiality Commitments
    - OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies' Use of Artificial Intelligence
    - President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
  - Non-discrimination and Fairness
    1. Some Existing Anti-discrimination Laws and Regulations
    2. Regulatory Challenges
    3. Regulatory Focus and Techniques
  - Emerging Regulatory Frameworks, Standards, and Guidelines
  - Safety, Liability, and Accountability
    - Considerations Around Generative AI Liabilities, Risks, and Safety
      1. Potential Liability Risks Associated with GenAI Failures
      2. Legal Frameworks for Assigning Liability
      3. Insurance
      - Hallucination Insurance for Generative AI
  - Intellectual Property
    1. Authorship, Inventorship, and Ownership
    - Protecting GenAI Components
      2. Copyright Protection
      3. Patent Protection
      4. Trade Secrets
      5. Licensing and Protection Strategies
      6. Trademarks
      7. Evolving Landscape
      8. Relevant Legislation
- Technical Strategies, Standards, and Best Practices for Responsible AI
  - Fairness and Transparency
  - Security and Privacy
  - Robustness, Control, and Ethical AI Practices
  - How Organizations Can Leverage These Standards
  - Technical Safeguards for Responsible GenAI (Data Management)
  - Case Study - Demonstrating Transparency and Accountability in Practice
  - Ongoing Monitoring and Compliance
- Legal vs. Ethical Considerations in Governing Generative AI
- Conclusion: Addressing the Gaps in AI Governance for a Responsible Future

This document is intended for informational purposes only and does not constitute legal advice.

This research document, prepared for the Cloud Security Alliance (CSA), explores the current landscape of regulatory governance

surrounding Artificial Intelligence (AI). While the document addresses various legal and regulatory frameworks, it is essential to emphasize that the information presented should not be construed as legal guidance applicable to any specific situation.

The regulatory landscape of AI is rapidly evolving, and the interpretation and application of laws and regulations can vary significantly depending on various factors, including:

- Jurisdiction (country or region)
- Specific context (e.g., industry, use case)
- Specific AI technology or application

Therefore, the Cloud Security Alliance and the authors of this document strongly recommend seeking independent legal counsel for any questions or concerns related to the legal implications of AI development, deployment, or use.

Safe Harbor Statement

Forward-Focused Statements and the Evolving Landscape of Artificial Intelligence

This document contains certain statements that may be considered forward-focused in nature. To determine their applicability, we encourage seeking guidance from regulatory bodies and legal counsel in the corresponding countries. The authors and Cloud Security Alliance (CSA) have based these statements on their current knowledge and expectations. It is important to note that forward-focused statements are subject to inherent risks, uncertainties, and assumptions that may cause actual results to differ significantly from those projected or implied by such statements.

The following are some important factors that could affect future developments in the field of Artificial Intelligence (AI) and the associated regulatory landscape, and thus potentially impact the accuracy of the forward-focused statements in this document:

- Rapid technological advancements: The field of AI is constantly evolving, with new technologies and applications emerging rapidly. It is difficult to predict the exact trajectory of these advancements or their impact on various aspects of AI regulation.
- Uncertainties in regulatory frameworks: Regulatory approaches to AI are still under development, and the specific regulations governing AI development, deployment, and use may vary significantly across different jurisdictions and could change over time.
- Emerging ethical considerations: As AI applications become more sophisticated, new ethical considerations will likely arise, potentially leading to additional regulations or guidelines surrounding responsible development and use of these technologies.
- Economic and social factors: The overall economic climate and social attitudes towards AI can influence the development and adoption of new technologies, as well as the regulatory landscape surrounding them.

The authors and the CSA disclaim any responsibility for updating or revising any forward-focused statements in this document to reflect future events or circumstances. Readers are cautioned not to place undue reliance on these statements, which reflect the authors' and CSA's views only as of the date of publication of this document.

Document Summary

This paper provides an overview of the legal and regulatory landscape surrounding AI and Generative AI (GenAI). It highlights the challenges of navigating this complex and dynamic landscape because of the diverse applications of GenAI, differing regulatory approaches taken by global regulators, and the slow adaptation of existing regulations.

The paper aims to equip organizations with the general knowledge they need to fundamentally understand their current standing and navigate the rapidly changing requirements for responsible and compliant AI use. It explores a selection of existing regulations, and lays out considerations and best practices for developing and deploying responsible AI across regional, national, and international levels.

This document provides a high-level overview of the current legal and regulatory landscape for AI, as of the time of writing, including Generative AI (GenAI).

While not exhaustive, it is a starting point for organizations to understand their current position and identify key considerations for navigating the evolving requirements of responsible and compliant GenAI use.

Due to the ongoing advancements in the technology and the evolving legal and policy landscape, providing a complete overview is challenging. Therefore, we recommend utilizing this information as a foundation for staying informed about evolving AI regulations and authorities. It is important to consider that AI regulations come from various levels of government and jurisdictions across the globe. Additionally, laws such as data privacy and anti-discrimination regulations will determine where and how AI can be used, even though they were not specifically designed for that purpose. For example, in the US, AI will be governed by city, state, and federal laws, agency actions, executive orders, voluntary industry agreements, and even common law. It is important to keep this in mind, as the origins of AI regulations are not always intuitive, and a diligent analysis should therefore be conducted in preparation for your AI projects.

The first far-reaching legal framework is the European AI Act, which guarantees the safety and fundamental rights of people and businesses. Certain AI applications are forbidden if they interfere with, or threaten, citizens' rights. Regulations are anticipated for high-risk AI systems, such as Large Language Models (LLMs), because of their significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law.

Executive Summary

Artificial Intelligence (AI) is rapidly transforming our world, holding immense potential to reshape the very fabric of our society. However, this transformative power comes with a critical challenge: the current legal and regulatory landscape is struggling to keep pace with the explosive growth of AI, particularly Generative AI (GenAI). This paper aims to provide a high-level overview of existing legislation and regulations, and their impact on AI development, deployment, and usage. Our goal is to identify areas where legislation lags behind, in search of practical approaches for deploying responsible AI. The current landscape lacks well-established legislation, leaving a gap in addressing potential risks associated with increasingly sophisticated AI functionalities. This creates a situation where existing regulations, like GDPR and CCPA/CPRA, provide a foundation for data privacy but do not offer specific guidance for the unique challenges of AI development, with exceptions too few to be sufficient. With technology innovation not expected to slow down as the big tech giants plan to invest hundreds of billions into AI, the rapid pace of technological innovation has outpaced the ability of legislation to adapt.

A troubling gap is emerging. The widespread use of GenAI, both personal and professional, is happening alongside a lack of proper governance. Malicious actors are already wielding GenAI for sophisticated attacks, and companies are seeing GenAI as a competitive advantage, further accelerating its adoption. This rapid adoption, while exciting, needs to be accompanied by practices for responsible AI development that do not stifle innovation. The ideal solution fosters a global environment that encourages responsible, transparent, and explainable AI use, supported by clear and practical guidelines. To bridge the gap between the boundless potential of AI and the need for responsible development, we need a three-pronged collaborative approach: commitment to responsible AI from all tech companies, clear guidelines from policymakers, and effective regulations from legislatures.

This paper opens a critical dialogue on AI governance, focusing on legislation and regulations. It equips practitioners and businesses venturing into AI with a foundational understanding of the current AI governance landscape and its shortcomings. By highlighting these gaps, we aim to facilitate an open discussion on the necessary legal frameworks for responsible AI development and adoption.

Introduction

The rapidly expanding field of AI necessitates navigating the evolving legal and regulatory landscapes to ensure responsible development, deployment, and innovation while safeguarding individuals and society. Understanding ethical and legal frameworks for AI empowers organizations to achieve three key objectives:

- Building trust and brand reputation: Organizations can build trust with stakeholders and bolster their brand reputation by demonstrating transparent and responsible AI practices.
- Mitigating risks: Proactive engagement with frameworks, and utilizing a risk-based approach, helps mitigate potential legal, reputational, and financial risks associated with irresponsible AI use, protecting both the organization and individuals.
- Fostering responsible innovation: By adhering to best practices, maintaining transparency and accountability, and establishing strong governance structures, organizations can foster a culture of responsible and safe AI innovation, ensuring its positive impact on society alongside its development. Responsible AI, through diverse teams, comprehensive documentation, and human oversight, would enhance model performance by mitigating bias, catching issues early, and aligning with real-world use.

Scope and Applicability

Navigating the

complex legal landscape of AI and, more specifically, Generative AI (GenAI) presents a substantial challenge because of its inherent diversity. This paper delves into the regulatory landscape surrounding AI, encompassing diverse systems such as deep learning models generating realistic text formats (code, scripts, articles), computer vision applications manipulating visual content (facial recognition, deepfakes), stable diffusion (text-to-image models), and reinforcement learning algorithms employed in autonomous systems (self-driving cars, robots). Broader categories like generative adversarial networks (GANs) and large language models (LLMs) underpin numerous GenAI applications, necessitating their inclusion in regulatory considerations. Governing this vast spectrum of rapidly evolving systems necessitates a nuanced approach, as current legislation faces challenges adapting to this dynamic landscape. This creates a critical situation where a rapidly evolving technology permeates our lives and business practices because of competitive pressures, yet is coupled with inadequate and slow-to-adapt legal frameworks.

This paper explores:

- How the most popular existing regulations attempt to address specific areas of GenAI
- Some challenges and opportunities surrounding the development of new legislation
- High-level recommendations and best practices for developing responsible AI principles using explainable AI techniques

This paper utilizes a staged approach to analyze the governance of AI, focusing on the following areas.

Current Document:
- Top-Level Government/Federal Legislation:
  - USA: Executive Orders (e.g., Maintaining American Leadership in Artificial Intelligence, and the Executive Order on the Safe, Secure, and Trustworthy Development and Deployment of Artificial Intelligence), and Congressional Bills (e.g., Algorithmic Accountability Act of 2023) (Proposed)
  - EU: European Commission Policy Papers (e.g., Ethics Guidelines for Trustworthy AI) and Regulations (e.g., Artificial Intelligence Act)
- Major Regional Regulations:
  - California Consumer Privacy Act (CCPA), amended by the California Privacy Rights Act (CPRA)
  - General Data Protection Regulation (GDPR)

Future Considerations:
- National Level: Some regulations from APAC: China (enacted) (Ministry of Science and Technology), Japan (Cabinet Office), South Korea (Ministry of Science and ICT), Singapore, India's national policy AI for All (NITI Aayog); others with emerging AI policies (Canada, UK, Australia)
- International Organizations: Exploring frameworks from OECD (Recommendations on AI), UNESCO (Recommendation on the Ethics of AI), and the Global Partnership on Artificial Intelligence (GPAI), which brings together expertise from science, industry, civil society, governments, international organizations, and academia to foster international cooperation
- ISO/IEC 42001:2023 (AIMS)
- OWASP Top 10 for Large Language Model Applications

Table 1: Scope of Governance Areas

For more information regarding AI governance in specific industries, please see CSA's AI Resilience: A Revolutionary Benchmarking Model for AI Safety document.

Key Areas of Legal and Regulatory Focus for Generative AI

Data Privacy and Security

Generative AI presents unique challenges in the realm of data privacy and security. Its ability to learn from vast amounts of data raises concerns about how personal information is collected, stored, used, shared, and transferred throughout the AI development and deployment lifecycle. Several existing laws and regulations, including the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), and the Health Insurance Portability and Accountability Act (HIPAA), aim to protect individual privacy and data security as follows.

General Data Protection Regulation (GDPR) (EU)

Applicability: The GDPR applies to organizations processing the personal data of individuals in the European Economic Area (EEA), regardless of the organization's location.

Key Provisions:

- Lawful basis for processing, fairness, and transparency: Organizations must have a lawful basis for processing personal data (e.g., user consent, legitimate interest, etc.). It requires clear and specific information about data collection and processing purposes to be provided to individuals.
- Data minimization: Limits the collection and retention of personal data to what is strictly necessary for the stated purpose.
- Data subject rights: Grants individuals various rights over their personal data, including the right to access, rectification, erasure, and restriction of processing.
- Security measures: Requires appropriate technical and organizational measures to protect personal data from unauthorized access, disclosure, alteration, or destruction.
- Automated individual decision-making, including profiling: The data subject's explicit consent is required for automated decision-making, including profiling (GDPR, Article 22).

GDPR Compliance for Generative AI: The EU GDPR requires that individuals provide consent for processing their personal data, including data used in AI systems. In addition, the data protection requirements imply that systems must comply with GDPR principles such as lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity, and confidentiality.

1. Lawful and transparent data collection and processing

Limitations on training and prompt data: The GDPR outlines key principles for handling data as follows:

- Purpose limitation: Data can only be collected and used for specific, clearly defined or compatible purposes.
- Necessity: Only the personal data essential for achieving those purposes can be collected and used.
- Data minimization: The amount of personal data collected and used should be kept to a minimum, only collecting what is absolutely necessary.
- Storage time limitation: Personal data must be stored for as short a time as possible, and time limits for storage must be established and reviewed regularly.

In the context of training data (as well as prompt data, which also might become "training data"), this means collecting and using data only to the extent it is truly needed for the specific training objective.

Informed consent: GDPR requires explicit user consent for collecting and processing personal data used to train Generative AI models. This ensures individuals understand how their data will be used (e.g., for model training or fine-tuning) and have the right to refuse. AI developers must facilitate the exercise of these rights by individuals whose data is processed by AI/ML systems.

Transparency: EU individuals have rights concerning their personal data, such as the right to access, rectify, erase, restrict processing, and data portability.

Organizations must be transparent about how they use personal data in AI and ML, including the purpose, legal basis, and data retention period. Users should be able to understand how their data contributes to the generated outputs.

2. Data security and accountability

Data security: Article 25 of the GDPR states organizations must adopt "data protection by design and by default" and implement appropriate technical and organizational measures to ensure the security of personal data used in foundational models, including encryption, access controls, and data breach notification procedures. Additionally, since LLMs are part of the overall supply chain, their security requires heightened attention to malicious techniques like adversarial attacks, data poisoning, and model bias.

Accountability: Organizations are accountable for using personal data within GenAI-enabled systems and must demonstrate compliance with GDPR. This includes conducting data protection impact assessments and maintaining appropriate records.

Data anonymization and pseudonymization: While anonymization and pseudonymization can help mitigate privacy risks, they may not always be sufficient in the context of GenAI, where even limited information can be used to infer identities.

The potential harm of GenAI outputs: While the GDPR appears to only impact the data used to train models, the regulation also applies to model outputs. This includes addressing unintended generated outputs and the malicious use of deepfakes, which can damage individual reputations and violate ethical principles. Establishing clear guidelines and safeguards is essential to ensure responsible development and use of GenAI, mitigating risks and protecting individuals from potential harm.
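The GDPR does not prescribe any particular pseudonymization technique, and, as noted above, pseudonymization alone may not be sufficient for GenAI. Purely as an illustration, one common approach is to replace direct identifiers with a keyed hash before records enter a training corpus. The sketch below is hypothetical (function and field names are our own, not drawn from any regulation or the CSA paper):

```python
import hmac
import hashlib

# Hypothetical secret held outside the training pipeline. Rotating or
# destroying this key weakens the link back to the original identity.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym).

    Unlike a plain hash, an HMAC cannot be reversed by brute-forcing common
    names or emails without the key. The result is still *pseudonymous*
    under GDPR: whoever holds the key can re-identify the subject.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def scrub_record(record: dict, identifier_fields: tuple = ("name", "email")) -> dict:
    """Return a copy of a training record with direct identifiers pseudonymized."""
    return {
        k: pseudonymize(v) if k in identifier_fields else v
        for k, v in record.items()
    }

record = {"name": "Jane Doe", "email": "jane@example.com", "prompt": "Summarize my invoice"}
scrubbed = scrub_record(record)
assert scrubbed["name"] != "Jane Doe"                # identifier replaced
assert scrubbed["prompt"] == record["prompt"]        # free text untouched: a known residual risk
assert pseudonymize("Jane Doe") == scrubbed["name"]  # deterministic: same input, same pseudonym
```

Note the deliberate limitation in the last two assertions: free-text fields pass through untouched and may still reveal identity, which is exactly the "limited information can be used to infer identities" caveat above.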

3. Individual rights and control

Right to access and rectification: Individuals have the right to understand and access their personal data used in GenAI and request rectification if it is inaccurate or incomplete. This includes information they directly provided or data generated through their interactions with GenAI. However, unlike traditional databases, implementing rectification for AI training data poses challenges because of the large size and interconnected nature of the data, potentially requiring retraining the entire model and causing unintended consequences. To date, the feasibility of rectifying inaccurate information already ingested into an AI model's training data is unclear. While research on data labeling and privacy-preserving techniques is ongoing, ensuring the right to rectification remains an open challenge, and research on how to facilitate this requirement should be monitored.

Right to erasure (right to be forgotten): Individuals have the right to request the erasure of their personal data, which may affect how AI/ML models are trained and used. Implementing this right presents a unique challenge for these models, as personal data can become deeply embedded within their complex internal representations after training. Currently, the technical feasibility and ethical implications of removing specific data points from trained models remain unclear, and there is a lack of reliable processes and established guidance on handling such requests, raising critical questions about balancing individual privacy with the model's overall functionality and societal benefits.

Right to object: Individuals have the right to object to the processing of their personal data for specific purposes, including in the context of GenAI. However, exercising this right in the context of GenAI presents unique challenges. Currently, there is no reliable and standardized process to remove personal data from a training set once the model has been trained on it. Additionally, the right to object might only apply to specific data elements and/or for specific purposes, not necessarily to all of the information used to train the model, potentially limiting the scope of an individual's objection. This highlights the need for ongoing development of transparent and accountable practices for GenAI systems that respect individual privacy rights.

Compliance: The GDPR requires Data Privacy Impact Assessments (DPIA) to be performed for data processing activities. This extends to data processing by AI systems and the risks it would pose to data subjects. Identifying personal data within the large datasets used for training large generative models is difficult, and it remains unclear how the European Union will address GDPR compliance in the context of GenAI.

ADM Governance: Article 22 of GDPR grants individuals the right to object to automated decision-making, including profiling, that has legal or significant effects on them. This means individuals have the right to opt out of Automated Decision-Making (ADM) or contest the decision made by the ADM, particularly when it can significantly impact their lives with biases. As a consequence, companies using ADM are required to have a human appeal review process.
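As one deliberately simplified illustration of such a human appeal review process — not a prescribed compliance mechanism, and with all class and field names being our own — a decision pipeline might route contested, solely automated decisions to a human reviewer:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject_id: str
    outcome: str            # e.g., "approved" / "denied"
    automated: bool = True  # made solely by automated means (Article 22 scope)
    status: str = "issued"

@dataclass
class ADMGovernor:
    """Toy workflow: a data subject may contest a solely automated
    decision, which then requires review by a human."""
    review_queue: list = field(default_factory=list)

    def contest(self, decision: Decision) -> str:
        # Only solely automated decisions fall under the opt-out/contest right here.
        if not decision.automated:
            return "not applicable: a human was already involved"
        decision.status = "pending human review"
        self.review_queue.append(decision)
        return decision.status

    def human_review(self, decision: Decision, new_outcome: str) -> Decision:
        # The reviewer may confirm or overturn; either way the final
        # decision is no longer solely automated.
        decision.outcome = new_outcome
        decision.automated = False
        decision.status = "reviewed"
        self.review_queue.remove(decision)
        return decision

gov = ADMGovernor()
d = Decision(subject_id="u-42", outcome="denied")
assert gov.contest(d) == "pending human review"
final = gov.human_review(d, new_outcome="approved")
assert final.automated is False and final.status == "reviewed"
```

The point of the sketch is structural: every contested automated decision must pass through a human before it becomes final, which is the behavior regulators look for rather than any particular implementation.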

California Consumer Privacy Act / California Privacy Rights Act (CCPA/CPRA)

Applicability: This applies to for-profit businesses that do business in California and meet other threshold requirements, such as generating greater than $25 million USD in global revenue. It grants Californians the right to know what personal data is being collected about them and to request its deletion and/or changes for accuracy. Businesses must also limit their collection and processing of personal information to what is necessary for their disclosed purposes. The CCPA extends to AI/ML systems that rely on this data, requiring organizations to ensure these systems comply with its privacy requirements for training and output generation involving the personal information of California residents. Businesses must carefully consider CCPA obligations when utilizing Californians' personal data in developing and deploying GenAI (foundational) models.

Key Provisions:

- Right to know: Allows consumers to request information about the categories and specific personal information collected about them.
- Right to delete: Grants consumers the right to request the deletion of their personal data collected by the business.
- Right to opt-out: Gives consumers the right to opt out of the sale of their personal information.

NOTE: The CCPA and its extension, the CPRA, use a broader definition of consumer data than the commonly used term Personally Identifiable Information (PII). For this reason, this document adopts the terminology Personal Information (PI) to ensure alignment with the CCPA's scope. PII typically refers to specific data points, like names or social security numbers, that directly identify an individual. The CCPA's definition of PI, however, encompasses a wider range of data points. This includes browsing history, IP addresses, or geolocation data, which might not be considered PII on their own but could be used to identify someone when combined with other information. Therefore, Personal Information more accurately reflects the CCPA's intent regarding consumer data privacy.

CCPA/CPRA Compliance for Generative AI: While the CCPA/CPRA does not present direct technical requirements for GenAI, its focus on individual data rights can introduce significant data management challenges requiring careful practices to ensure compliance, and potentially impacting model performance and functionality. It is important to remember that CCPA/CPRA only protects the personal data of California residents. Some considerations include the following.

1. Data collection, storage, use, and disclosure under CCPA/CPRA

CCPA/CPRA primarily focuses on regulating the collection, use, and disclosure of personal information about California residents by businesses. This applies to data used to train AI/ML models, and to the resulting outputs, if they contain personal information. Californians have the right to access their personal information under CCPA/CPRA. This right may apply to data used to train models, but it is important to distinguish outputs containing personal data from more general model outputs. California residents have the right to know what personal information is being collected about them for AI purposes, the purpose of such collection, and the categories of third parties with whom it is shared. While CCPA/CPRA does not necessarily require disclosing specific training data sources, it certainly emphasizes transparency.

Data provenance — tracking the origin and lineage of data used in training — is essential for CCPA/CPRA compliance, especially considering the vast datasets often used for Generative AI. Complex data provenance can make it challenging to fulfill Right to Access and Right to Know requests. Robust data governance practices, proper logging, and potentially using anonymized training data disclosures can help mitigate these challenges.

2. Consumer Rights

CCPA/CPRA grants consumers specific rights regarding their personal information, including the right to access, delete, and correct their personal information, and to opt out of its sale. The details are:

Right to Know: Requires disclosing details about the collection and use of personal information (PI) for training the model, including specifying data categories used for training (e.g., text, images, audio, or names, locations, etc.), identifying sources of PI (e.g., user interactions, purchased/third-party datasets, social media, public records, etc.), and detailing the purpose of PI usage (e.g., model training, performance evaluation, etc.).

Right to Access: Users can request access to specific data points used in their training data, potentially revealing identifiable information depending on the training process. This may require implementing mechanisms to identify and isolate individual data points within the training data set, which can be technically challenging if anonymization or aggregation techniques are in place.

Right to Deletion: Users have the right to request deletion of their PI used in training, impacting the model in several ways:

- Data removal: This may necessitate retraining the model with the remaining data, potentially affecting performance and generalizability.
- Data modification: Depending on the training process, anonymizing or redacting specific data points might be required, impacting model accuracy and interpretability.
- Knowledge removal: How do you identify learned knowledge in a deep neural network of hundreds of billions of parameters and remove the specific learned information? In practice, this would require retraining LLMs from scratch, which is economically not feasible nor environmentally friendly.

From a technical feasibility point of view, identifying and removing individual data points within a complex training data set can be computationally expensive, time-consuming, or simply impossible at times for advanced AI systems (e.g., LLMs). The question of handling models trained on data that need to be removed remains open.

107、es clearly defining and classifying theGenAI outputs under the CCPAs framework,which may require further clarification andlegal interpretation.3.Compliance&EnforcementCCPA/CPRA compliance primarily involves implementing technical and procedural safeguards toprotect personal information.The Californi

108、a Privacy Protection Agency(CPPA)is a relatively new agency that was formed in2020 and is still in the process of establishing regulations across various areas,includingconsumer data and privacy.The CPPA implements and enforces the California Privacy Rights Act(CPRA)and the California Consumer Priva

109、cy Act(CCPA).While they havent issued specificregulations solely focused on governing AI yet,two key developments touch upon AI,particularlyGenAI.4.Draft Automated Decision-Making Technology(ADMT)RegulationsReleased in November 2023,these draft regulations focus on the responsible use ofautomated de

110、cision-making technology(ADMT),which includes many forms of AI usedfor consumer-facing decisions and in nature are similar to Article 22 of GDPR.The draft regulations outline requirements for businesses using ADMT,such as:Pre-use notice:Informing consumers before using ADMT in a decision-makingproce

111、ss that impacts them.Right to opt-out:Allowing consumers the option to choose not to be subject todecisions made solely by ADMT.Right to access and explanation:Providing consumers with access toinformation about how ADMT is used in making decisions about them,along withexplanations for how the decis

112、ions were reached.Risk assessments:Requiring businesses to conduct risk assessments to identifyand mitigate potential harms associated with their use of ADMT,such as bias anddiscrimination.While not specifically mentioning generative AI,these regulations could apply to any AI technology usedto make

automated decisions about consumers, potentially impacting how businesses deploy and utilize GenAI in California.

5. California Executive Order on Generative AI
In October 2023, California Governor Gavin Newsom issued an executive order establishing a working group to explore the responsible development, adoption, and implementation of GenAI within the state government. This order emphasizes the potential benefits of GenAI but also acknowledges potential risks, such as the spread of disinformation, and the need for responsible deployment. The working group is tasked with developing recommendations for California state agencies on topics such as:
Identifying potential benefits and risks of deploying GenAI.
Establishing ethical principles for using GenAI.
Implementing safeguards to mitigate potential harm.
While not directly regulating the private sector, this executive order signifies California's proactive approach to understanding and potentially shaping the future of GenAI development and use.
As the CPPA continues to evolve and adapt to the complexities of GenAI, additional compliance requirements, and potentially increased complexity, can be expected. This
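The four draft ADMT requirements listed above (pre-use notice, opt-out, access and explanation, risk-assessment support) can be sketched as a thin decision-service wrapper. This is an illustrative sketch only; the class, method, and field names are our own and are not drawn from the draft regulations:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ADMTDecisionService:
    """Hypothetical wrapper illustrating the draft ADMT duties:
    pre-use notice, opt-out, access/explanation, and audit logging."""
    notified_consumers: set = field(default_factory=set)
    opted_out: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def record_notice(self, consumer_id: str) -> None:
        # Pre-use notice: must be given before ADMT is applied.
        self.notified_consumers.add(consumer_id)

    def opt_out(self, consumer_id: str) -> None:
        # Right to opt out of decisions made solely by ADMT.
        self.opted_out.add(consumer_id)

    def decide(self, consumer_id: str, features: dict) -> dict:
        if consumer_id not in self.notified_consumers:
            raise PermissionError("pre-use notice not given")
        if consumer_id in self.opted_out:
            # Route to a human reviewer instead of the automated model.
            outcome = {"decision": "human_review", "automated": False}
        else:
            score = sum(features.values())  # placeholder for a real model
            outcome = {
                "decision": "approve" if score > 0 else "deny",
                "automated": True,
                # Right to access/explanation: expose the inputs used.
                "explanation": {"features_used": sorted(features)},
            }
        # The audit log supports the required risk assessments.
        self.audit_log.append(
            {"ts": datetime.now(timezone.utc).isoformat(),
             "consumer": consumer_id, **outcome})
        return outcome
```

A real deployment would back each piece (notice delivery, opt-out routing, explanation content, log retention) with the specific procedures the final regulations require.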

highlights the need for ongoing efforts to navigate the evolving regulatory landscape while fostering responsible development and deployment of GenAI.

European Union AI Act (EU AI Act/EUAIA)
Applicability: The EU AI Act applies to providers, deployers, importers, distributors, and other operators involved in the development, deployment, and use of artificial intelligence systems in the European Union. It does not apply to military, defense, or national security purposes. It proposes a series of rules and requirements for developers and users of AI systems, focusing on four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk. The act aims to ensure the protection of fundamental rights, such as privacy and non-discrimination, the safety and transparency of AI systems, and the responsible use of AI technology. It will apply to operators based inside and outside the EU if their AI systems are provided or used in the EU market or if they affect people in the EU. The act applies to a wide range of AI applications, including biometric identification, autonomous vehicles, and critical infrastructures, among others.

Key Provisions:
Prohibited Practices (Article 5): Article 5 of the regulation outlines prohibited practices related to AI systems. These practices are prohibited to ensure the protection and safety of individuals and to prevent unethical and harmful use of AI systems. AI systems considered to pose unacceptable risk will be banned in the EU, including AI that manipulates human behavior, social scoring
systems, and the use of real-time remote biometric identification systems in public spaces for law enforcement purposes.
Risk-Based Approach (Article 9): Article 9 of the EU AI Act introduces a risk-based approach to regulating AI systems in the EU, balancing regulation with innovation and ensuring that AI systems are safe and trustworthy while avoiding unnecessary compliance costs. AI systems are classified as either high-risk, limited-risk, or minimal-risk, and the level of regulation varies depending on the level of potential harm they pose to individuals.
High-risk AI systems, such as those used in critical infrastructure, must meet strict requirements, undergo scrutiny, and be pre-approved before deployment. The providers of such systems must comply with the strictest provisions of the regulation, including transparency and explainability, human oversight, and independent verification.
Limited-risk AI systems pose lower risks but must still adhere to specific requirements. Providers of these systems must ensure they meet the relevant legal obligations and transparency and traceability rules.
Minimal-risk AI systems pose little or no risk to individuals. These systems are not subject to the same regulatory requirements but should still comply with legal frameworks applicable to AI systems.
Data Governance (Article 10): The aim of Article 10 is to ensure that the use of data in AI systems is transparent, accountable, and respects individual privacy and data protection rights. It requires that the data used to train and feed AI systems comply with the provisions of the General Data Protection Regulation (GDPR) and other relevant data protection laws. The article mandates that providers of high-risk AI systems must ensure that the data used to train
and feed the AI system is relevant, reliable, unbiased, and free from errors. They should also ensure that the data is properly documented, labeled, and annotated to help monitor and audit the system's performance. Additionally, the article specifies that the data must be transparently managed, and individuals whose data is used must be informed and give their consent.
Transparency and Explainability (Article 13): This article requires that high-risk AI systems be transparent and explainable, allowing individuals to understand how they work and the decisions they make, explaining how they are functioning, and providing access to documentation for users. AI models must be maintained with appropriate records and logging to ensure that they can be audited. This article also establishes the right to be informed and the right to seek human intervention to challenge the decisions taken by AI systems, ensuring that the AI system operates with integrity, accountability, and transparency.
Human Oversight (Article 14): Human oversight should aim to prevent or minimize risks and can be achieved through measures built into the system or implemented by the deployer. Moreover, AI systems must be designed to allow occasional checks by human operators. Natural persons overseeing the system should be able to understand its capabilities, monitor its operation, interpret its output, and intervene if necessary. Specific requirements are outlined for biometric identification systems. High-risk AI systems will be
subject to strict obligations aimed at ensuring human oversight and should be designed to be effectively overseen by natural persons.
Independent Testing and Verification (Articles 57 to 63): High-risk AI systems must undergo independent testing and verification to ensure safety and reliability.
Governance and Certification (Articles 64 to 70): The EU will establish a governance structure and certification framework to ensure that AI systems in the EU meet the required standards and regulations. The regulation establishes a governance framework to coordinate and support the application of
the regulation at the national and Union level. The governance framework aims to coordinate and build expertise at the Union level, make use of existing resources and expertise, and support the digital single market.
Penalties (Article 99): This article sets out the sanctions, measures, and penalties that can be imposed for violating the regulation's provisions. It specifies that member states must establish appropriate administrative or judicial procedures to enforce the provisions of the regulation. It ensures that the regulation is effectively enforced and deters non-compliance by imposing significant penalties for infringements. It seeks to ensure that AI systems are developed, deployed, and used responsibly and ethically, protecting individuals' rights and freedoms. Under the EU Artificial Intelligence Act (EUAIA), sanctions and fines are tiered based on the seriousness of the violation. The tiered approach aims to ensure that penalties are proportionate to the level of harm caused by each violation.

EUAIA Compliance for Generative AI
1. Requirements, Obligations and Provisions
This regulation aims to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI) systems in the European Union (EU). It lays down harmonized rules for the placing on the market, putting into service, and use of AI systems, as well as specific requirements and obligations for high-risk AI systems. It also includes prohibitions on certain AI practices and establishes transparency rules for certain AI systems. Furthermore, it addresses market monitoring, market surveillance governance, and enforcement.

Obligations of Providers of High-Risk AI Systems: Ensure that high-risk AI systems are compliant with the outlined
requirements.
Risk management (Article 9): Providers must carry out a thorough risk assessment for high-risk AI systems, considering potential risks to safety, fundamental rights, and the intended purpose of the system. A risk management system must be established for high-risk AI systems, which includes identifying and analyzing known and foreseeable risks, evaluating risks that may emerge, and adopting risk management measures. Risk management measures should aim to eliminate or reduce identified risks and address the combined effects of the requirements set out in the regulation. The risk management system should ensure that the overall residual risk of the high-risk AI system is judged to be acceptable.
Data quality and governance (Article 10): Providers must ensure that high-risk AI systems are trained on high-quality, relevant, and representative data sets. They must also implement appropriate data governance measures to prevent biases and ensure data accuracy. High-risk AI systems using data training techniques must use high-quality training, validation, and testing data sets. Data governance practices must be implemented, including considerations for design choices, data collection processes, data-preparation processing operations, and addressing biases and data gaps.
Technical documentation (Article 11): Providers must create and maintain accurate and up-to-date technical documentation for high-risk AI systems. This documentation should include information on the system's design, development, configuration, and operation. The documentation should demonstrate compliance with the regulation and provide the necessary information for assessment by authorities and notified bodies. A single set of technical documentation should be prepared for high-risk AI systems falling under the Union harmonization legislation. The Commission may amend the technical documentation requirements through delegated acts.
Record keeping (Article 12): High-risk AI systems must allow for the automatic recording of events (logs) throughout their lifetime. Logging capabilities should enable the identification of risk situations, facilitate post-market monitoring, and monitor the operation of high-risk AI systems.
Transparency and provision of information (Article 13): Providers must ensure that high-risk AI systems are transparent and provide relevant information to users regarding the system's capabilities and limitations. High-risk AI systems must operate transparently to enable deployers to interpret and use system outputs appropriately. Instructions for use should include relevant information about the provider, system characteristics and capabilities, known risks, technical capabilities to explain output, and provisions for interpreting output.
Human oversight and intervention (Article 14): Providers must incorporate appropriate mechanisms for human oversight and intervention in high-risk AI systems. This includes
ensuring that the system can be overridden or stopped by a human operator when necessary. High-risk AI systems must be designed to allow effective oversight by natural persons during system use. Human oversight measures should aim to prevent or minimize risks and can be integrated into the system or implemented by the deployer. Natural persons assigned to human oversight should be able to understand system capabilities and limitations, detect anomalies, interpret system output, and intervene or override system decisions if necessary.
Accuracy, robustness and cybersecurity (Article 15): Providers must ensure that high-risk AI systems are accurate, reliable, and robust. They should minimize errors and risks associated with the system's performance and take necessary measures to address accuracy and robustness issues. A security risk assessment should be conducted to identify risks and implement necessary mitigation measures, taking into account the design of the system. High-risk AI systems are required to undergo comprehensive risk assessments and adhere to cybersecurity standards. They should also achieve an appropriate level of accuracy, robustness, and cybersecurity when language models are utilized. Benchmarks and measurement methodologies may be developed to address technical aspects of accuracy and robustness. Levels of accuracy and relevant metrics should be declared in the accompanying instructions for use.
Specific requirements for certain AI systems (Articles 53 and 55): The regulation identifies specific requirements for certain types of high-risk AI systems, such as biometric identification systems, systems used in critical infrastructure, systems used in education and vocational training, systems used for employment purposes, and systems used by law enforcement authorities.
Providers of high-risk AI systems must also:
Indicate their name, registered trade name or registered trademark, and contact address on the high-risk AI system or its packaging/documentation.
Have a quality management system in place that ensures compliance with the regulation.
Keep documentation, including technical documentation, quality management system documentation, changes approved by notified bodies, decisions issued by notified bodies, and the EU declaration of conformity.
Keep logs generated by the high-risk AI systems for a certain period of time.
Undergo the relevant conformity assessment procedure before placing the high-risk AI system on the market or putting it into service.
Draw up an EU declaration of conformity and affix the CE marking to indicate compliance with the regulation.
Comply with registration obligations.
Take necessary corrective actions and provide information as required.
Demonstrate the conformity
of the high-risk AI system upon a reasoned request of a national competent authority.
Ensure compliance with accessibility requirements.

Importer Obligations:
Verify the conformity of high-risk AI systems before placing them on the market.
Ensure that the high-risk AI system bears the required CE marking, is accompanied by the EU declaration of conformity, and is accompanied by instructions for use.
Ensure that high-risk AI systems are appropriately stored and transported.
Keep copies of the certificate issued by the notified body, instructions for use, and the EU declaration of conformity.
Provide necessary information and documentation to national competent authorities upon request.
Cooperate with national competent authorities in mitigating risks posed by high-risk AI systems.

Distributor Obligations:
Verify that high-risk AI systems bear the required CE marking, are accompanied by the EU Declaration of Conformity, and have instructions for use.
Indicate their name, registered trade name or registered trademark, and contact address on the packaging/documentation, where applicable.
Ensure that storage or transport conditions do not jeopardize the compliance of high-risk AI systems.
Keep copies of the certificate
issued by the notified body, instructions for use, and the EU declaration of conformity.
Provide necessary information and documentation to national competent authorities upon request.
Cooperate with national competent authorities in mitigating risks posed by high-risk AI systems.

2. Promoting Innovation (Articles 57 to 63)
The measures in support of innovation are as follows:
AI Regulatory Sandboxes:
Member States are required to establish AI regulatory sandboxes at the national level, which facilitate the development, testing, and validation of innovative AI systems before they are placed on the market.
Sandboxes provide a controlled environment that fosters innovation and allows for risk identification and mitigation.
They aim to improve legal certainty, support the sharing of best practices, foster innovation and competitiveness, contribute to evidence-based regulatory learning, and facilitate access to the Union market for AI systems, particularly for SMEs and start-ups.
National competent authorities have supervisory powers over the sandboxes and must ensure cooperation with other relevant authorities.
Processing of Personal Data in AI Sandboxes:
Personal data collected for other purposes may be processed in AI regulatory sandboxes solely for developing, training, and testing certain AI systems in the public interest.
Conditions must be met to ensure compliance with data protection regulations, including effective monitoring mechanisms, safeguards for data subjects' rights, and appropriate technical and organizational measures to protect personal data.
Testing of High-Risk AI Systems in Real-World Conditions:
Providers or prospective providers of high-risk AI systems can conduct testing in real-world conditions outside of AI regulatory sandboxes.
They must develop and submit a real-world testing plan to the market surveillance authority.
Testing can be done independently or in partnership with prospective deployers.
Ethical reviews may be required by Union or national law.
Guidance and Support:
Competent authorities within AI regulatory sandboxes provide guidance, supervision, and support to participants.
Providers are directed to pre-deployment services, such as guidance on regulation implementation, standardization, and certification.
The European Data Protection Supervisor may establish an AI regulatory sandbox specifically for Union institutions, bodies, offices, and agencies.
Governance and Coordination:
The regulation establishes a governance framework to coordinate and support the implementation of AI regulation at national and Union levels.
The AI Office develops Union expertise and capabilities in AI,
and supports the implementation of Union AI law.
A Board, a scientific panel, and an advisory forum are established to provide input, advice, and expertise in the implementation of the regulation.
National competent authorities collaborate within a Board and submit annual reports on the progress and results of AI regulatory sandboxes.
The Commission develops a single information platform for AI regulatory sandboxes and coordinates with national competent authorities.
Market Surveillance and Compliance:
Market surveillance authorities designated by Member States enforce the requirements and obligations of the regulation.
They have enforcement powers, exercise their duties independently and impartially, and coordinate joint activities and investigations.
Compliance is enforceable through measures including risk mitigation, restriction of market availability, and withdrawal or recall of AI models.
Involvement of Data Protection Authorities:
National data protection authorities and other relevant national public authorities or bodies with supervisory roles have responsibilities in supervising AI systems in line with Union law protecting fundamental rights.
They may have access to relevant documentation created under the regulation.
Collaboration with Financial Services Authorities:
Competent authorities responsible for supervising Union financial services law are designated as competent authorities for supervising the implementation of the AI regulation,
including market surveillance activities in relation to AI systems provided or used by regulated and supervised financial institutions.
The Commission coordinates with them to ensure coherent application and enforcement of obligations.
Promotion of Ethical and Trustworthy AI:
Providers of AI systems not classified as high-risk are encouraged to create codes of conduct to voluntarily apply some or all of the mandatory requirements applicable to high-risk AI systems.
The AI Office may invite all providers of general-purpose AI models to adhere to the codes of practice.
Transparent Reporting and Documentation:
Providers are required to have a post-market monitoring system to analyze the use and risks of their AI systems.
They must report serious incidents resulting from the use of their AI systems to the relevant authorities.
Technical documentation and exit reports from AI regulatory sandboxes can be used to demonstrate compliance with the regulation.
The Commission and the Board may access the exit reports for relevant tasks.

3. Prohibitions on certain AI practices
Materially distorting human behavior: The placing on the market, putting into service, or use of AI systems with the objective or effect of materially distorting human behavior, which can result in significant harms to physical or psychological health or financial interests, is prohibited. This includes the use of subliminal components or other manipulative or deceptive techniques that subvert or impair a person's autonomy, decision-making, or free choice.
Biometric categorization for sensitive personal data: The use of biometric categorization systems based on natural persons' biometric data to deduce or infer sensitive personal data, such as political opinions, trade union membership, religious or philosophical beliefs, race, sex life, or sexual orientation, is prohibited.
AI systems providing social scoring: AI systems that evaluate or classify natural persons based on their social behavior; known, inferred, or predicted personal characteristics; or personality traits may lead to discriminatory outcomes and the exclusion of certain groups. The use of such AI systems for social scoring purposes that results in detrimental or unfavorable treatment of individuals or groups unrelated to the context in which the data was generated or collected is prohibited.
Real-time remote biometric identification for law enforcement: The real-time remote biometric identification of individuals in publicly accessible spaces for the purpose of law enforcement is considered intrusive and may affect the private life of individuals. This practice is prohibited, except in exhaustively listed and narrowly defined situations
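Organizations screening AI use cases against these prohibitions can encode them as a coarse pre-deployment gate. The category and exception labels below are our own shorthand for the summary above, not legal terms of art, and any real Article 5 assessment requires legal review:

```python
# Coarse pre-deployment gate over the prohibited-practice categories
# summarized above. Labels are illustrative shorthand, not legal terms.
PROHIBITED_USE_CASES = {
    "behavioral_manipulation",             # subliminal/manipulative techniques
    "sensitive_biometric_categorization",  # inferring beliefs, orientation, etc.
    "social_scoring",
    "realtime_remote_biometric_id",        # public spaces, law enforcement
}

# Article 5 allows real-time biometric ID only in narrowly defined cases;
# this set is a simplified stand-in for that exhaustive list.
NARROW_EXCEPTIONS = {
    "missing_person_search",
    "imminent_threat_to_life",
    "serious_offense_suspect_identification",
}

def deployment_allowed(use_case: str, justification: str = "") -> bool:
    """Return True if the use case clears this (simplified) Article 5 screen."""
    if use_case == "realtime_remote_biometric_id":
        # Prohibited by default; permitted only under a listed exception.
        return justification in NARROW_EXCEPTIONS
    return use_case not in PROHIBITED_USE_CASES
```

A gate like this is only a first filter; borderline cases still need human legal and ethical review before deployment.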

where it is strictly necessary to achieve a substantial public interest, such as searching for missing persons, threats to life or physical safety, or the identification of perpetrators or suspects of specific serious criminal offenses.

4. Compliance, Infringements, and Penalties
The regulation provides definitions for various terms and sets out the scope of its application. It emphasizes the protection of personal data, privacy, and confidentiality in relation to AI systems. It includes provisions for compliance, penalties for infringement, and remedies for affected persons. Additionally, it allows for future evaluations and reviews of the regulation, delegates implementing powers to the European Commission, and is set to apply within a specified timeframe after its entry into force.
Provisions for Compliance:
Providers of general-purpose AI models must take necessary steps to comply with the obligations laid down in the regulation within 36 months from the date of entry into force of the regulation.
Operators of high-risk AI systems that have been placed on the market or put into service before a certain date (24 months from the date of entry into force of the regulation) are subject to the requirements of
the regulation only if significant changes are made to their designs.
Public authorities using high-risk AI systems must comply with the requirements of the regulation within six years from the date of entry into force of the regulation.
Penalties for Infringement:
Penalties for infringement under the EU AI Act follow a tiered approach:
The supply of incorrect, incomplete, or misleading information to notified bodies or national competent authorities in reply to a request shall be subject to administrative fines of up to 7,500,000 EUR or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
For violations such as not obtaining certification for high-risk AI systems, not respecting transparency or oversight requirements such as risk management, or breaching the following obligations, the proposed fines are up to 15,000,000 EUR or 3% of the worldwide annual turnover, whichever is higher:
Obligations of providers pursuant to Article 16
Obligations of authorized representatives pursuant to Article 22
Obligations of importers pursuant to Article 23
Obligations of distributors pursuant to Article 24
Obligations of deployers pursuant to Article 26
Requirements and obligations of notified bodies pursuant to Articles 31, 33(1), 33(3), 33(4) or 34
Transparency obligations for providers and users pursuant to Article
50.
For violations such as using AI systems that have been deemed to pose an unacceptable risk, or non-compliance with the prohibited AI practices listed in Article 5 of the regulation, the proposed administrative fines can be up to 35,000,000 EUR or 7% of the worldwide annual turnover, whichever is higher.
The EUAIA requires that any administrative fine imposed take into account all relevant circumstances of the specific situation. This includes the nature, gravity, and duration of the infringement and its consequences, the number of affected persons, and the damage suffered by them. The fine should be evaluated with regard to the purpose of the AI system. Additionally, factors such as whether administrative fines have been imposed by other authorities, and the size, annual turnover, and market share of the operator, should be considered. Other determining factors could include any financial benefits or losses resulting from the infringement, the degree of cooperation with national competent authorities, the responsibility of the operator, the manner in which the infringement became known, whether there was negligence or intentionality on the part of the operator, and any action taken to mitigate harm suffered by those affected. It also states that the rights of defense of the parties concerned should be fully respected in the proceedings and that they are entitled to have access to relevant information, subject to the legitimate interest of individuals or undertakings in the protection of their personal data or business secrets.

Health Insurance Portability and Accountability Act (HIPAA)
The Health Insurance Portability and Accountability Act, or HIPAA, is a federal law enacted in 1996 in the United States, primarily known for its provisions related to healthcare data privacy and security.
Applicability: HIPAA applies to covered entities, including healthcare providers, health plans, and healthcare clearinghouses, that handle protected health information (PHI) of individuals.
Key Provisions:
Minimum necessary standard: Requires covered entities to use
and disclose only the minimum amount of PHI necessary to achieve the intended purpose.
Administrative, technical, and physical safeguards: Mandates the implementation of appropriate safeguards to protect the confidentiality, integrity, and availability of PHI.
Patient rights: Grants individuals certain rights regarding their PHI, including the right to access, amend, and request an accounting of disclosures.

HIPAA Compliance for GenAI
1. Data Privacy and Security:
Data protection requirements: The strict data protection standards of HIPAA are already well established across the technical field, applying to all data usage regardless of technology type or purpose (e.g., strong encryption is mandatory throughout development and deployment to safeguard PHI). However, stakeholders in the realm of GenAI must shift their focus to understanding and implementing the specific nuances of applying these existing principles within the context of GenAI operations and processing. While established rules should not need reinvention, adapting them to this novel context necessitates careful attention to the unique challenges posed by GenAI.
Limitations on training data: HIPAA restricts access to and sharing of PHI, potentially limiting
the amount of medical data available to train GenAI models for healthcare applications. Tracking the origin and compliance of training data becomes crucial to ensure the generated outputs do not inherit privacy concerns. This can complicate development and accuracy in areas like diagnosis, treatment prediction, and personalized medicine, and limit the effectiveness and generalizability of AI models for medical applications.
De-identification requirements: Even de-identified outputs from GenAI trained on PHI might be re-identifiable through subtle patterns, correlations, or advanced techniques, raising
privacy concerns and potentially violating HIPAA. While anonymization and pseudonymization can obscure identities, they often fail to prevent re-identification in the context of GenAI, especially when combined within the model with additional data sources. That necessitates robust privacy-preserving methods (e.g., differential privacy, federated learning, etc.) to protect individual identities effectively.
Limited model sharing: Sharing of GenAI models trained on PHI is also restricted due to privacy concerns, hindering collaboration and advancements in the field.
Stringent access controls, auditing, and tracking: HIPAA mandates strict auditing and tracking of PHI access and use. This would extend to GenAI systems, requiring robust logging and monitoring mechanisms to ensure HIPAA compliance across the entire supply chain.
2. Model training, outputs, and usage:
Limitations on training data: As discussed above, HIPAA restricts access to and sharing of PHI, potentially limiting the amount of medical data available to train GenAI models for healthcare applications. In terms of model training, restricting the ability to train models on diverse and comprehensive healthcare data sets can potentially lead to biased or inaccurate outputs. Implementing differential privacy or other anonymization techniques may help protect patient privacy while still enabling some level of data utility for training.
Sharing and disclosure restrictions: Sharing or disclosing generated content containing PHI is heavily restricted, even if anonymized. This can limit the ability to share medical insights or collaborate on research using GenAI and requires careful design and implementation.
Restricted generation of PHI: Generative AI cannot directly output any data that could be considered

215、 PHI,governing its use even for tasks like generating synthetic medicalrecords for training or testing purposes.Limited downstream use:Generative AI models trained on PHI may not be used indownstream applications that might expose PHI,even if the model itself does not directlyoutput PHI.Model interp

216、retability and explainability:Understanding how a GenAI model arrivesat its outputs is crucial for ensuring it doesnt violate HIPAA by inadvertently disclosingPHI.This necessitates interpretable models and clear explanations of their reasoning.Ensuring transparency and explainability of AI-generated

217、 medical outputs is crucial forbuilding trust and complying with HIPAAs right to an explanation provision.3.HIPAA regulations may also require:Careful output review and monitoring of output results:All outputs generated byGenAI models trained on or utilizing PHI must undergo thorough review to ensur

218、ethey do not contain any identifiable information or have the potential to re-identifyindividuals.That naturally can increase the development time and the complexity ofongoing monitoring of the model outputs.Patient consent and authorization:Using Generative AI for tasks like diagnosis ortreatment r

219、ecommendation requires explicit patient consent and authorization,even if itmay add complexity to the input/output workflows.Auditing and compliance:Organizations using GenAI with PHI must implement robustauditing and compliance measures to ensure adherence to HIPAA regulations asapplicable to all o

220、ther systems under HIPAA regulations.Risk assessments and mitigation plans:GenAI stakeholders must prioritize regularrisk assessments to safeguard patient privacy and maintain HIPAA compliance.Theseassessments should thoroughly evaluate AI/ML systems,enabling the identification ofpotential privacy v

221、iolations and the implementation of targeted mitigation strategies.HIPAA regulations present significant challenges for applying GenAI in healthcare.These challengesdemand a thorough understanding,implementation,and ongoing monitoring of the AI systems.Bycarefully designing these AI systems,employin

222、g robust privacy-preserving techniques,and adheringstrictly to regulations,we can unlock the potential of GenAI to improve healthcare while safeguardingpatient privacy and ensuring responsible and compliant use.Balancing innovation with patient privacyremains a key challenge in this emerging field.T

223、he dynamic regulatory landscape surrounding AI(including GenAI)and ML in healthcare requirescontinuous adaptation by stakeholders to ensure compliance with the evolving interpretations of HIPAAand other relevant regulations,particularly to GenAI systems.Copyright 2024,Cloud Security Alliance.All rig
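The discussion above names differential privacy among the robust privacy-preserving methods. The sketch below illustrates only the core idea, not a HIPAA compliance mechanism: a count query over a hypothetical, already de-identified cohort is released with calibrated Laplace noise. The record fields, diagnosis codes, and epsilon value are invented for the example.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sample from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count. A count query has sensitivity 1
    (adding or removing one patient changes the result by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential
    privacy for this single query."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical de-identified cohort; fields are illustrative only.
cohort = [{"age": 64, "dx": "E11"}, {"age": 71, "dx": "E11"},
          {"age": 55, "dx": "I10"}, {"age": 80, "dx": "E11"}]

noisy = dp_count(cohort, lambda r: r["dx"] == "E11", epsilon=0.5)
print(f"noisy count of E11 diagnoses: {noisy:.1f}")  # true count is 3
```

A smaller epsilon adds more noise and hence stronger privacy; repeated queries consume privacy budget, which a real deployment would have to account for.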

Addressing the Impact of GenAI's Hallucinations on Data Privacy, Security, and Ethics

Hallucinations are the phenomenon where AI systems generate realistic but factually inaccurate or fabricated outputs, such as images, videos, or text, based on the patterns and data they have been trained on. These hallucinations raise significant concerns regarding legislation and regulations surrounding data privacy and security.

One critical area impacted by GenAI hallucinations is data privacy. GenAI models, when fed sensitive data, have the potential to produce outputs that may inadvertently disclose private information about individuals or organizations. This creates a significant challenge for regulatory frameworks, such as the GDPR in Europe or California's CCPA/CPRA, which mandate strict measures to protect personal data from unauthorized access or disclosure. The emergence of AI-generated content blurs the lines between genuine and fabricated information, complicating efforts to enforce data privacy laws effectively.

GenAI's hallucinations also introduce security risks into the regulatory landscape. Malicious actors could fraudulently deceive or manipulate individuals by exploiting AI-generated content, such as fabricated images or text. This poses a direct threat to the integrity and security of data systems, requiring regulatory authorities to adapt existing cybersecurity regulations to address the unique challenges posed by AI-generated content. As GenAI technology evolves and the capabilities of its models advance, ensuring compliance with security standards may become increasingly complex, especially if the authenticity of generated outputs remains uncertain.

As policymakers and regulators grapple with the governance of GenAI, they must also confront the ethical implications of hallucinations. Beyond legal compliance, ethical considerations are crucial in shaping regulatory frameworks for GenAI governance. Questions surrounding the responsible development and use of GenAI models, including the potential impact of hallucinated content on individuals' rights, autonomy, and well-being, demand careful deliberation. Regulatory initiatives must balance fostering innovation and safeguarding societal values, ensuring that GenAI governance frameworks prioritize ethical principles such as transparency, accountability, and inclusivity.

To tackle the issue of AI-generated hallucinations, it is essential to continuously assess AI outputs, verify information against multiple trusted sources, and employ human judgment in evaluating the content's accuracy. Additionally, providing clear prompts and using well-curated training data can help reduce the likelihood of hallucinations from the outset.

GenAI's hallucinations challenge the existing legislative and regulatory frameworks for AI governance, particularly in the domains of data privacy, security, and ethics. Addressing these challenges requires collaboration between policymakers, regulatory authorities, industry stakeholders, and ethicists in developing comprehensive governance mechanisms that effectively manage the risks and opportunities associated with GenAI.

DHS Policy Statement 139-07 Impact on Gen AI

Data Input: Prohibit putting U.S. Department of Homeland Security (DHS) data regarding individuals (regardless of whether it is personally identifiable information (PII) or anonymized), social media content, or any For Official Use Only or Sensitive but Unclassified information, now known as “Controlled Unclassified Information (CUI),” or Classified information into commercial Gen AI tools.
Data Retention: Select options in tools that limit data retention and opt out of inputs being used to further train models.
Output Review and Use: Ensure all content generated or modified using these tools is reviewed by appropriate subject matter experts for accuracy, relevance, data sensitivity, inappropriate bias, and policy compliance before using it in any official capacity, especially when interacting with the public.
Decision-Making: Commercial Gen AI tools may not be used in the decision-making process for any benefits adjudication, credentialing, vetting, or law or civil investigation or enforcement-related actions.
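A "Data Input" prohibition of this kind implies a technical control at the boundary between users and commercial GenAI tools. The sketch below shows the general shape such a pre-submission screen could take; the markings list and regular expressions are illustrative assumptions, not an official DHS control, and a production system would rely on a vetted DLP engine and the authoritative CUI marking list.

```python
import re

# Illustrative patterns only; not an exhaustive or official list.
BLOCKED_MARKINGS = ("CUI", "FOUO", "SECRET", "TOP SECRET")
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

def screen_prompt(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons) before text is sent to a commercial GenAI tool."""
    reasons = []
    upper = text.upper()
    for marking in BLOCKED_MARKINGS:
        if marking in upper:
            reasons.append(f"contains marking: {marking}")
    for pat in PII_PATTERNS:
        if pat.search(text):
            reasons.append(f"matches PII pattern: {pat.pattern}")
    return (not reasons, reasons)

ok, why = screen_prompt("Summarize this CUI report on applicant 123-45-6789")
print(ok, why)  # blocked, with both a marking reason and a PII reason
```

Screening by substring and regex is deliberately conservative and will produce false positives; that bias is usually acceptable when the alternative is leaking CUI or PII.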

Federal Trade Commission Policy Advocacy & Research Guidance

AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive
With data being the driving force behind innovation in tech and business, companies developing AI products increasingly rely on their user bases as a primary source of data. However, these companies must balance access to this data with their promises to protect users' privacy, and any attempt to surreptitiously loosen privacy policies to use more customer information can result in violating the law. Companies cannot change the terms of their privacy policy retroactively, as this would be unfair and deceptive to consumers who may have agreed to the policy under different conditions. The Federal Trade Commission has a history of challenging deceptive and unfair privacy practices by companies, and it will continue to take action against companies that attempt to ignore privacy regulations and deceive consumers. Ultimately, transparency, honesty, and integrity are essential for companies that want to establish trust with their users and avoid legal repercussions.

AI Companies: Uphold Your Privacy and Confidentiality Commitments
Developing AI models requires large amounts of data and resources, and not all businesses have the capacity to develop their own models. Model-as-a-service companies help by providing AI models to third parties through user interfaces and APIs. These companies constantly need data to improve their models, which can sometimes conflict with their obligations to protect user data and privacy. The Federal Trade Commission (FTC) enforces laws against companies that fail to protect customer data and privacy, and those that misuse customer data. Model-as-a-service companies must adhere to their commitments regardless of where they are made and must ensure they do not deceive customers or engage in unfair competition. Misrepresentations, material omissions, and data misuse in the training and deployment of AI models can pose risks to competition. Model-as-a-service companies that violate consumer privacy rights or engage in unfair methods of competition may be held accountable under both antitrust and consumer protection laws.

OMB Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies' Use of Artificial Intelligence

The government-wide policy to mitigate the risks of artificial intelligence and utilize its benefits was announced by Vice President Kamala Harris. This policy was issued as part of President Biden's AI Executive Order (please refer below), aimed at strengthening AI safety and security, promoting equity and civil rights, and advancing American AI innovation. The new policy includes concrete safeguards for federal agencies that use AI in ways that could impact Americans' rights or safety. It aims to remove barriers to responsible AI innovation, expand and upskill the AI workforce, and strengthen AI governance. With this announcement, the Administration is promoting transparency, accountability, and the protection of rights and safety across federal agencies that utilize AI. The key highlights of the government-wide policy to mitigate the risks of artificial intelligence (AI) and harness its benefits are as follows:

Concrete Safeguards for AI: By December 1, 2024, federal agencies will be required to implement concrete safeguards when using AI that could impact Americans' rights or safety. These safeguards include assessing, testing, and monitoring AI's impacts on the public, mitigating the risks of algorithmic discrimination, and providing transparency into the government's use of AI.
Human Oversight in Healthcare: When AI is used in the Federal healthcare system to support critical diagnostic decisions, a human being oversees the process to verify the tool's results and avoid disparities in healthcare access.
Human Oversight in Fraud Detection: When AI is used to detect fraud in government services, there is human oversight of impactful decisions, and affected individuals have the opportunity to seek remedy for AI harms.
Transparency of AI Use: Federal agencies are required to improve public transparency in their use of AI by releasing expanded annual inventories of their AI use cases, reporting metrics about sensitive use cases, notifying the public of AI exemptions along with justifications, and releasing government-owned AI code, models, and data.
Responsible AI Innovation: The policy aims to remove barriers to federal agencies' responsible AI innovation. It highlights examples of AI applications in addressing the climate crisis, advancing public health, and protecting public safety.
Growing the AI Workforce: The guidance directs agencies to expand and upskill their AI talent.
Strengthening AI Governance: The policy requires federal agencies to designate Chief AI Officers to coordinate the use of AI across their agencies and to establish AI Governance Boards to govern the use of AI within their agencies.
Ceasing Use of AI: If an agency cannot apply the specified safeguards, it must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.

President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (AI), issued in October 2023, represents a landmark effort to address societal concerns and establish responsible AI practices. This order focuses on ensuring the safe, secure, and ethical development and use of AI, encompassing key areas like data privacy, ethics, workforce development, and international collaboration. It outlines a plan for creating guidelines and best practices to guide the responsible development and deployment of AI technologies. The plan includes tasking multiple government entities, such as the National Institute of Standards and Technology (NIST), the National Science Foundation (NSF), and the Department of Commerce (DOC), with developing resources and best practices related to existing frameworks and topics like:

Algorithmic fairness and bias
Explainability and interpretability of AI models
Standardized testing and evaluation methodologies

While specific regulatory details are still forthcoming, the order signifies the government's commitment to building a robust framework for trustworthy AI. Although President Biden's executive order doesn't redefine the legal and regulatory landscape surrounding AI, it emphasizes the importance of ethical and accountable use, addressing concerns such as data privacy, security controls, and cybersecurity throughout the data lifecycle.

While not establishing specific regulations, the Safe, Secure, and Trustworthy AI Executive Order lays the groundwork for a comprehensive approach to responsible AI development and use, addressing societal concerns by focusing on data privacy, ethics, workforce development, and international collaboration.

The lack of federal AI regulation has led to a complex situation, with many different state and local regulations being proposed and enacted. As highlighted in BCLP Law's “US State-by-State AI Legislation Snapshot,” this patchwork of regulations creates a key concern.

Non-discrimination and Fairness

Generative AI's ability to produce novel content and influence decision-making raises critical concerns about discrimination and fairness, prompting legal and regulatory scrutiny.

Let's review how the anti-discrimination laws and regulations impact how GenAI is designed, deployed, and used.

1. Some Existing Anti-discrimination Laws and Regulations

Summary of existing and proposed laws that address discrimination based on protected characteristics in AI algorithms and decision-making processes:

Title VII of the Civil Rights Act (US, 1964): Prohibits discrimination in employment based on race, color, religion, sex, and national origin. AI systems (including GenAI) used in hiring, promotions, or performance evaluations could face scrutiny under Title VII if they perpetuate bias against protected groups.

Equal Employment Opportunity and Civil Rights Laws and Authorities (US): Expands Title VII protections to age and disability. Algorithmic bias based on these characteristics is also prohibited. The EEOC's technical assistance document is part of its Artificial Intelligence and Algorithmic Fairness Initiative, which ensures that software (including AI) used in hiring and other employment decisions complies with the federal civil rights laws that the EEOC enforces.

In addition, the Genetic Information Nondiscrimination Act of 2008 is a federal law prohibiting discrimination based on genetic information in employment and health insurance. While it does not directly govern algorithmic decision-making (including decisions made by AI systems), it prohibits discrimination based on genetic information in employment decisions. Companies using generative AI systems still have a responsibility to ensure their systems are fair, unbiased, and do not perpetuate discriminatory practices based on any sensitive information, including genetic information.

Fair Housing Act (US): Bars discrimination in housing based on the same protected characteristics as Title VII. AI-powered tools used in tenant screening or mortgage approvals must comply with these protections.

Equal Credit Opportunity Act (US): Prohibits discrimination in credit based on race, color, religion, national origin, sex, marital status, age, or disability. AI-driven credit scoring models must be carefully evaluated for potential discriminatory impacts.

Several federal civil rights laws, like Title VI, Title IX, and Section 504, prohibit discrimination in educational settings based on race, color, national origin, sex, disability, and age. Schools and educational institutions must comply with these laws to ensure that their practices, including those involving technology like machine learning and AI, do not discriminate against students based on the protected characteristics above.

General Data Protection Regulation (GDPR) (EU): Grants individuals rights to access, rectify, and erase their personal data. This impacts how GenAI systems collect and use personal information to avoid discriminatory outcomes. It requires data controllers to implement safeguards against discriminatory profiling and automated decision-making. In addition, CCPA/CPRA prohibits organizations from discriminating against consumers who exercise their privacy rights.

Algorithmic Accountability Act (US, 2019-2020): Aims to establish federal standards for bias audits and assess fairness, accountability, and transparency of AI algorithms used by government agencies and businesses.

European Union's Artificial Intelligence Act (EU AI Act) (2024): Imposes specific requirements on high-risk AI applications, including addressing bias and discrimination.

New York City Bias in Algorithms Act (US, 2021): Requires audits of AI algorithms used by city agencies for potential bias based on protected characteristics.

California Automated Decision Making Act (US, 2023) / New York Automated Employment Decision Tools Act (US, 2023): Both require businesses to provide notice and explanation when using automated decision-making tools that significantly impact consumers.

CCPA/CPRA prohibits discrimination against consumers who exercise their privacy rights. This could pose potential challenges for GenAI models trained on datasets containing inherent biases. Mitigating such biases to ensure non-discriminatory outputs becomes crucial under CCPA/CPRA.

Americans with Disabilities Act (ADA): A set of standards and regulations that mandate accommodations for people with disabilities. AI systems interacting with the public need to comply with ADA guidelines regarding accessibility.

Fair Credit Reporting Act (FCRA): This law regulates how consumer information is collected and used. AI models used within the financial industry (e.g., for loan determinations) need to ensure compliance with FCRA to avoid unfair biases in decision-making.

Recent instances, like lawsuits in the USA alleging discriminatory hiring practices due to biased AI algorithms and increased scrutiny of AI-powered facial recognition systems based on the EU's GDPR ruling, highlight concerns about potential bias, discriminatory profiling, and the need for compliance with anti-discrimination laws.

These recent examples highlight the potential for bias in AI recruitment, with instances where tools used for selecting candidates have faced allegations of discrimination:

Enforcement in the News: The EEOC's First Lawsuit Over Discrimination Via AI, 2023: “The lawsuit alleged that iTutorGroup failed to hire more than 200 qualified applicants over age 55 because of their age.” The charging party alleged that she applied using her real birthdate and was immediately rejected, applied the next day using a more recent birthdate, and was offered an interview. As a result, “In January 2023, the EEOC issued its draft strategic enforcement plan for 2023 through 2027, which demonstrates clear EEOC focus on discriminatory use of AI throughout the employment life cycle, beginning with recruitment and including employee performance management.”

Discrimination and bias in AI recruitment: a case study, 2023: This case study is a real-world example of bias in AI recruitment. A bank's AI tool for shortlisting job candidates was found to be discriminatory. This case raises important legal considerations and underscores the crucial need to be aware of potential biases when implementing AI tools in the hiring process.

Using AI to monitor employee messages, 2024: This article highlights how large enterprises use an AI service to monitor employees' messages. The service goes beyond simply analyzing sentiment; it can also evaluate text and images for “bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors,” and even how different demographics (e.g., age group, location) respond to company initiatives. Although privacy-protecting techniques like data anonymization are applied, such practices raise concerns about privacy rights and free speech. While some argue that it is an invasion of privacy and could discourage open communication, others see it as a way to identify potential issues and enhance decision-making in protecting the companies. The legal landscape remains unclear, suggesting potential regulatory and societal hurdles for such practices.
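Hiring-discrimination claims like those above are commonly screened with the EEOC's “four-fifths” (80%) rule of thumb: a group whose selection rate falls below 80% of the highest group's rate may indicate adverse impact. A minimal sketch of that check, with invented outcome numbers:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact(outcomes: dict[str, tuple[int, int]],
                   threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}

# Hypothetical screening outcomes from an AI resume-ranking tool.
outcomes = {"group_a": (48, 100), "group_b": (30, 100), "group_c": (45, 90)}
print(adverse_impact(outcomes))  # → {'group_b': 0.6}
```

The four-fifths rule is only a screening heuristic; a flagged ratio calls for statistical follow-up and review of the underlying data and features, not an automatic conclusion of discrimination.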

2. Regulatory Challenges

Current legal frameworks struggle to address non-discrimination and fairness in GenAI due to several limitations:

Applicability gap: Existing laws struggle to address complex AI systems and lack clarity on how concepts like discrimination translate to algorithms and data.
Difficulty in proving bias: Opaque AI systems make it hard to pinpoint and prove discriminatory intent or impact, further complicated by the interconnectedness of factors within these systems.
Enforcement challenges: Limited resources and expertise hinder effective investigations and enforcement, further complicated by the global nature of AI development.
Innovation vs. regulation: Rapidly evolving AI technology outpaces current legal frameworks, creating uncertainties and requiring a delicate balance between innovation and ethical considerations.
Defining and implementing fairness: Achieving fairness in AI is multifaceted. Defining it precisely is complex because of differing interpretations and potential conflicts between fairness principles. Implementing measures to ensure fairness often presents significant technical challenges and requires substantial resources.
Complexity of interpretation: AI models, especially deep learning models, can be incredibly complex. They may consist of millions of parameters, making it difficult to understand how input data is transformed into output predictions. Creating explanations that accurately reflect these transformations is a non-trivial task that requires significant computational resources and time.
Trade-off between accuracy and explainability: More accurate models, such as neural networks, are often less interpretable. On the other hand, simpler, more interpretable models like linear regression or decision trees may not perform as well on complex tasks. Balancing this trade-off to develop a model that is both accurate and explainable is a challenging process. GenAI is the best example of accepting better accuracy for less explainability.
Lack of standardized techniques: While there are some techniques for explaining AI decisions (like Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), etc.), there is no one-size-fits-all method. The appropriate technique can vary depending on the type of model and the specific application, which means that developing explainable AI often requires custom solutions.
Validation of explanations: Verifying that the explanations generated by explainable AI techniques accurately reflect the model's decision-making process is a complex task in itself. This validation process can be time-consuming and computationally expensive.
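Even the simplest model-agnostic probes illustrate the black-box premise that LIME and SHAP share: only the model's predictions are needed. The sketch below uses permutation importance, a simpler relative of those techniques, on an invented toy model; shuffling one feature at a time and measuring the drop in accuracy estimates how much the model relies on that feature.

```python
import random

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic importance: shuffle one feature column at a time and
    average how much the model's score drops. Like LIME and SHAP, this needs
    only black-box access to the model's predictions."""
    rng = random.Random(seed)
    base = metric(y, predict(X))
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(base - metric(y, predict(Xp)))
        importances.append(sum(drops) / n_repeats)
    return importances

def accuracy(y_true, y_pred):
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

def predict(X):
    # Toy "model": the decision depends only on feature 0; feature 1 is noise.
    return [1 if row[0] > 0.5 else 0 for row in X]

X = [[i / 10, random.random()] for i in range(10)]
y = predict(X)
imp = permutation_importance(predict, X, y, accuracy)
print(imp)  # imp[1] is exactly 0.0; imp[0] is substantially positive
```

Validating such explanations is itself non-trivial, as the section notes: a high importance score says the model relies on a feature, not that the reliance is justified or lawful.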

Today, the current legal frameworks are not well-equipped to address concerns about non-discrimination and fairness in the rapidly evolving field of GenAI. Public understanding, consensus building, and adaptable regulations need to be established to bridge this gap.

3. Regulatory Focus and Techniques

Regulatory frameworks for Generative AI should address bias mitigation and fairness across various stages of the development and deployment lifecycle. Some regulatory considerations and the corresponding techniques for addressing bias and fairness are listed below.

Data Debiasing:
Regulatory focus: Data privacy regulations can be leveraged to ensure responsible data collection and usage practices. Specific regulations might mandate data-debiasing techniques for sensitive data or require transparency in data processing pipelines.
Techniques: Data cleaning (e.g., removing biased annotations, identifying and correcting inconsistencies), data augmentation (e.g., generating synthetic data to improve representativeness), data weighting (e.g., assigning higher weights to samples from underrepresented groups). Using so-called safe or pre-sanitized (some professionals prefer pre-processed or de-biased) data sets can be a starting point; however, organizations should consider limitations like incomplete bias mitigation, limited scope, and potential information loss. Companies like IBM offer such data sets as a stepping stone in the initial stages of AI development, and references can be found online as needed (e.g., Wikipedia).
Applicable regulations: Relevant are the General Data Protection Regulation (EU), governing transparency in data processing and responsible data collection practices; the California Consumer Privacy Act (CCPA), providing individuals with the right to access, delete, and opt out of the sale of their personal data, potentially governing data usage for training GenAI models and AI outputs; and the Model Cards documentation framework (Hugging Face) as a standardized documentation framework “with a particular focus on the Bias, Risks and Limitations section.”
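The data-weighting technique listed above can be made concrete with inverse-frequency weights, which up-weight samples from underrepresented groups so each group contributes comparably to the training loss. A minimal sketch with an invented group distribution:

```python
from collections import Counter

def inverse_frequency_weights(groups: list) -> dict:
    """Assign each group a weight proportional to 1/frequency, normalized so
    the average per-sample weight is 1.0. Underrepresented groups get weight > 1."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return {g: n / (k * c) for g, c in counts.items()}

samples = ["a"] * 80 + ["b"] * 15 + ["c"] * 5   # skewed training set
weights = inverse_frequency_weights(samples)
print(weights)  # roughly {'a': 0.417, 'b': 2.222, 'c': 6.667}
```

Reweighting only balances group frequencies; it cannot repair label bias or missing subpopulations, which is one reason the section cautions that pre-sanitized data sets provide incomplete bias mitigation.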

Algorithmic transparency:
Regulatory focus: Transparency regulations can require developers to provide explanations for model outputs, particularly in high-impact applications. This could involve standardized explanation formats or access to relevant data subsets for independent analysis.
Techniques: Explainable AI (XAI) methods (e.g., saliency maps, counterfactuals) that elucidate model decision-making processes.
Applicable regulations: Related are the European Union's Artificial Intelligence Act (EU AI Act), which requires high-risk AI systems to be transparent and explainable, potentially mandating the use of specific explainable AI techniques; NIST's Four Principles of Explainable Artificial Intelligence (2021); and the Model Cards documentation framework (Google Research), which advocates “the value of a shared understanding of AI models.”

Human oversight and feedback:

Regulatory focus: Regulations might require specific human oversight mechanisms for critical decisions or sensitive domains. This could involve qualifications for human reviewers, defined review protocols, or mandatory reporting of identified biases.
Techniques: Human-in-the-loop systems, active learning with human feedback loops, data subjects' explicit consent, and human review of model outputs.
Applicable regulations: The FDA's proposed TPLC approach (2021) advocates for human oversight as manufacturers “monitor the AI/ML device and incorporate a risk management approach and other approaches outlined in “Deciding When to Submit a 510(k) for a Software Change to an Existing Device” Guidance 18 in development, validation, and execution of the algorithm changes.”

Diversity, Equity, and Inclusion (DE&I) in AI Development:
Regulatory focus: Equality and non-discrimination laws can be applied to ensure fair hiring and development practices within AI teams. Regulations might mandate diversity metrics for development teams or require bias impact assessments before deployment.
Techniques: Building a diversity mindset within development teams, incorporating diversity, fairness, impartiality, and inclusion principles into design and testing phases, and conducting bias audits and impact assessments.
Applicable regulations: While specific DE&I regulations for AI are still developing, organizations must proactively adopt ethical standards to ensure their AI systems are fair and equitable, and use the opportunity to “Embed DEI Into Your Company's AI Strategy” (Harvard Business Review, 2024). Industry guidance emphasizes that AI must be ethical and equitable in its approach to ensure it empowers communities and benefits society (World Economic Forum, 2022), avoiding bias and discrimination.

Algorithmic transparency and explainability: Identify requirements for transparency and explainability of AI decisions (e.g., explainable AI initiatives), particularly in high-stakes situations. Explore regulations requiring explainability of AI decisions, particularly in high-risk applications, and how they may impact the organization's approach. Some related documents:

Algorithmic Accountability Act, 2021-2022: Proposed in several states, these bills seek to create transparency and ensure auditing of AI systems used for crucial decisions, to reduce disparate impact in “response to problems already being created by AI and automated systems.”
Algorithmic Accountability Act, 2023-2024: The Algorithmic Accountability Act (September 2023), currently in its introductory stage, aims to establish a framework for responsible development and use of AI systems. While specific details are still being developed, below is our understanding of the areas the act might generally focus on:
Transparency and explainability: Requiring developers to explain how AI systems make decisions, improving public understanding and trust.
Data privacy and security: Establishing safeguards to protect personal data used in training and deploying AI systems.
Algorithmic fairness and bias: Mitigating the potential for discriminatory outcomes by addressing biases in data and algorithms.
Risk assessment and mitigation: Identifying and addressing potential risks associated with AI, such as safety, security, and fairness concerns.
As the bill progresses through the legislative process, details regarding its specific regulations for governing AI and GenAI will become clearer.

317、 to be used equitably and without discrimination.It recommends safeguards againstalgorithmic discrimination and harmful biases.United Nations Global Resolution on Artificial Intelligence:The UNs resolution on supporting safe,trustworthy,and human-centric AI calls for member states to promote the dev

318、elopment and use of AI thatis safe,trustworthy,human-centric,and transparent.It also highlights the importance of ensuring that AIis used in ways that respect human rights and fundamental freedoms,and that it is free from bias anddiscrimination.Additionally,the resolution encourages member states to

work together to develop international norms and standards for the development and use of AI. Some key highlights of the UN's resolution on AI include:

Encouraging member states to promote the development and use of AI that is safe, trustworthy, and human-centric.
Emphasizing the need for AI to be used in ways that respect human rights and fundamental freedoms, while also being transparent and free from bias and discrimination.
Urging member states to cooperate and collaborate with one another in the development of international norms and standards for the development and use of AI.
Encouraging member states to share best practices and experiences in the development and use of AI to help ensure that it benefits society as a whole.
Calling for continued dialogue and engagement with stakeholders from a variety of sectors, including government, civil society, and industry, to help guide the development and use of AI in a responsible and ethical manner.

National Institute of Standards and Technology (NIST) AI Risk Management Framework: This framework aims to help organizations identify, manage, and mitigate risks associated with AI systems, including those related to bias and discrimination. It encourages the inclusion of DEI considerations in AI development. See Artificial Intelligence Risk Management Framework (AI RMF 1.0) for more details about the framework.

Effective AI regulations should promote three key aspects: standardization, accountability, and international cooperation, as follows:

Standardization: This includes establishing common methods for detecting, preventing, and mitigating bias, such as adopting a standardized format for Model Cards as proposed in the ACM Library (2019) paper Model Cards for Model Reporting.
Accountability: Clear frameworks for liability and accountability are needed to incentivize responsible development and deployment of AI.
International Cooperation: Consistent and effective approaches across borders can be achieved through international cooperation on AI regulations.

Frameworks, guidelines, and resources exist to encourage
the ethical, transparent, and trustworthy design, development, deployment, and operations of AI. Some examples are:

The IIA AI Auditing Framework provides a comprehensive approach to evaluating the trustworthiness of AI systems. It focuses on four key areas: governance, ethics, control, and the human factor. More details about the three overarching components (AI Strategy, Governance, and The Human Factor) and seven elements (Cyber Resilience, AI Competencies, Data Quality, Data Architecture & Infrastructure, Measuring Performance, Ethics, The Black Box) can be found in the framework documentation.
IBM's Trusted AI Ethics offers guidelines to ensure AI is designed, developed, deployed, and operated ethically and transparently.
Microsoft's Responsible AI Practices are guidelines and principles for trustworthy AI development and use.
AWS's Core dimensions of Responsible AI are guidelines and principles for safe and responsible development of AI, taking a people-centric approach that prioritizes education, science, and customers.
Google's “Responsible AI Practices and Principles” are designed to guide the development and use of AI responsibly using a human-centered design approach.
The Understanding AI Ethics and Safety guide by the Alan Turing Institute serves as an introductory resource for AI ethics, potential benefits, challenges, and case studies highlighting ethical concerns related to AI.
The AI Incident Database by the Partnership on AI is a repository of real-world examples where AI systems caused unintended harm, and the Ethically Aligned Design (EAD) guidelines by IEEE provide recommendations and frameworks for designing ethical, transparent, and trustworthy AI systems.

These resources provide recommendations to promote the ethical use of AI while being transparent to the user. They also cover topics such as the potential benefits of AI, the challenges involved in implementing it, and case studies related to ethical concerns and incidents where AI systems caused unintended harm.

The OWASP Top 10 for Large Language Model Applications project is an initiative aimed at educating developers, designers, architects, managers, and organizations about potential security risks when deploying and managing Large Language Models (LLMs). This project provides a comprehensive list of the top 10 most critical vulnerabilities often seen in LLM applications, highlighting their potential impact, ease of exploitation, and prevalence in real-world applications. Vulnerabilities include prompt injection and sensitive information disclosure (data leakage), among others.
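As a concrete illustration of the first of these risks, a deployment team might screen untrusted input for common prompt-injection phrasings before the text reaches the model. The sketch below is a minimal, hypothetical heuristic (the pattern list and function names are our own illustration, not part of the OWASP project), and pattern matching alone is not a complete defense; layered controls such as privilege separation and output filtering remain necessary:

```python
import re

# Hypothetical deny-list of phrasings commonly seen in prompt-injection
# attempts. A heuristic like this reduces, but does not eliminate, risk.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) (system )?prompt",
    r"you are now (in )?developer mode",
]


def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)


def build_prompt(user_input: str) -> str:
    """Refuse flagged input; otherwise wrap it in delimiters so the model
    can distinguish untrusted text from system instructions."""
    if flag_prompt_injection(user_input):
        raise ValueError("Input rejected: possible prompt injection")
    return (
        "Summarize the user text between the markers.\n"
        f"<user>\n{user_input}\n</user>"
    )
```

Treating user text as data (delimited and never merged into the instruction channel) reflects the broader OWASP recommendation that untrusted content should not be able to rewrite the system's intent.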