Analyzing Harms from AI-Generated Images and Safeguarding Online Authenticity

Bilva Chandra

Expert Insights: Perspective on a Timely Policy Issue, RAND, March 2024
The democratization of image-generating artificial intelligence (AI) tools without regulatory guardrails has amplified preexisting harms on the internet. The emergence of AI images on the internet began with generative adversarial networks (GANs), which are neural networks[1] containing (1) a generator algorithm that creates an image and (2) a discriminator algorithm to assess the image's quality and/or accuracy. Through several collaborative rounds between the generator and discriminator, a final AI image is generated (Alqahtani, Kavakli-Thorne, and Kumar, 2021). ThisPersonDoesNotExist, a site created by an Uber engineer that generates GAN images of realistic people, launched in February 2019 to awestruck audiences (Paez, 2019), with serious implications for exploitation in such areas of abuse as widespread scams and social engineering. This was just the beginning for AI-generated images and their exploitation on the internet.
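To make the generator-discriminator loop concrete, consider the minimal sketch below. The toy dataset, the tiny fully connected networks, and all hyperparameters are assumptions chosen for brevity; production systems such as the one behind ThisPersonDoesNotExist rely on far larger convolutional architectures.

```python
# Minimal GAN training loop (illustrative sketch only). The "real" data are toy
# 8x8 images of a bright centered square so the example stays self-contained.
import torch
import torch.nn as nn

IMG = 8 * 8    # flattened 8x8 grayscale image
NOISE = 16     # latent noise dimension fed to the generator

generator = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(), nn.Linear(64, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n: int) -> torch.Tensor:
    """Toy 'real' data: noisy 8x8 images with a bright centered square."""
    imgs = -torch.ones(n, 8, 8)
    imgs[:, 2:6, 2:6] = 1.0
    return (imgs + 0.05 * torch.randn_like(imgs)).clamp(-1, 1).view(n, IMG)

for step in range(2000):
    real = real_batch(32)
    fake = generator(torch.randn(32, NOISE))

    # Discriminator round: score real images as 1, generated images as 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator round: push the discriminator to score fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

After enough rounds, the generator's outputs become difficult for the discriminator to distinguish from the toy "real" images; scaled up, this same dynamic yields photorealistic faces.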
Over time, AI image generation advanced away from GANs and toward diffusion models, which produce higher-quality images and more image variety than GANs. Diffusion models work by adding Gaussian noise[2] to original training data images through a forward diffusion process and then, through a reverse process, slowly removing the noise and resynthesizing the image to reveal a new, clean generated image (Ho, Jain, and Abbeel, 2020). Diffusion models are paired with neural network techniques to map text-to-image capabilities, known as text-image encoders (Contrastive Language-Image Pre-training [CLIP] was a milestone in this space), to allow models to process visual concepts (Kim, Kwon, and Chul Ye, 2022). Thus, the commercialization of diffusion models (DALL-E, Stable Diffusion, Midjourney, Imagen, and others) put the power of synthetic image generation in the hands of the user on a global scale.
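The forward (noising) half of this process has a simple closed form. The sketch below implements the corruption step from Ho, Jain, and Abbeel (2020); the linear noise schedule is an assumption chosen for illustration, and a real model additionally trains a neural network to run the reverse, denoising process.

```python
# Forward diffusion step in closed form:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
# The linear beta schedule below is illustrative, not a trained model's schedule.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # variance added at each step
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal retention

def noise_image(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Corrupt a clean image x0 (values in [-1, 1]) to its noisy version at step t."""
    eps = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * eps

x0 = torch.rand(3, 64, 64) * 2 - 1  # stand-in for a training image
print(noise_image(x0, 10).std())    # early step: image barely corrupted
print(noise_image(x0, 999).std())   # final step: nearly pure Gaussian noise
```

Sampling a new image then starts from pure noise and applies the learned reverse process step by step until a clean image emerges.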
The rise of image generation tools has introduced synthetic forms of such safety harms as mis- and disinformation, extremism, and nonconsensual intimate imagery (NCII), causing further disarray and damage in the internet ecosystem. The societal harms from AI image generation tools have yet to be effectively addressed from a regulatory standpoint because of a nexus of policy challenges, such as copyright protection, data privacy, ethics, and contractual requirements. Recent fears rising from generative AI have somewhat moved the policy needle on AI regulation, sparking great interest in Congress and the executive branch (as evidenced by the Senate AI insight forums and President Joe Biden's executive order on AI, respectively; White House, 2023b). However, without legislation addressing safety and societal issues related to generative AI, executing a coherent regulatory strategy to address the harmful effects of AI-generated images on the internet is a tall order. (As of this writing, Section 230 of the Communications Decency Act of 1996 serves as the sole piece of legislation for internet regulation; U.S. Code, Title 47, Section 230.)

In this paper, I delve into safety harms and challenges from AI-generated images and how such images affect authenticity on the internet. The first section outlines the role of image authenticity on the internet. In the second section, I review the technical safety challenges and harms for the image generation space, then look at industry solutions to authenticity, including the promise of provenance solutions and issues with implementing them. The third section outlines several policy considerations to tackle this new paradigm that largely focus on provenance, given its promise as an authenticity solution and relevance in policy conversations. In this paper, content authenticity in the context of images refers to establishing transparent information about images (both human- and AI-generated), whether in origin, context, authorship, or other areas, in a way that is accessible to users on the internet. Throughout, this paper focuses on content authenticity, as it could play a key role in shaping public trust in image content broadly and covers a wide swath of issues, from disinformation to synthetic NCII.

Abbreviations

AI: artificial intelligence
C2PA: Coalition for Content Provenance and Authenticity
CAI: Content Authenticity Initiative
CID: civil investigative demand
CLIP: Contrastive Language-Image Pre-training
CSAM: child sexual abuse material
FTC: Federal Trade Commission
GAN: generative adversarial network
GIFCT: Global Internet Forum to Counter Terrorism
NCII: nonconsensual intimate imagery
NIST: National Institute of Standards and Technology
OECD: Organisation for Economic Co-operation and Development

The Role of Image Authenticity on the Internet
Images have played an important role in the history of the internet, informing people about current events, sparking emotional responses to war atrocities and injustice, galvanizing individuals to support a cause, and much more. Research shows that people tend to respond more viscerally to images than they do to text online (Medill School of Journalism, 2019). Studies suggest that the brain processes visual stimuli more rapidly than it does words (Alpuim and Ehrenberg, 2023). Furthermore, images can increase a viewer's perception of the truthfulness of an accompanying statement (Alpuim and Ehrenberg, 2023). However, the era of treating images as "proof" from a social standpoint is rapidly changing.

Image authenticity on the internet is in jeopardy, as AI-generated images without proof of provenance, or the origin of a given image, are affecting how people perceive current events and public figures. The issue is not only a decrease in content authenticity but also a lack of knowledge and tools among many users to help navigate this paradigm shift in the information domain. Opportunistic actors are taking advantage of accessible AI tools to reduce trust in content and media, particularly during tumultuous periods, such as violent conflicts. An example of this is the inflammatory AI-generated images spawned after Hamas's October 7, 2023, attack in Israel and the subsequent conflict in Gaza. Multiple AI-generated images (some photorealistic) spread widely on the internet, conjuring fake crowds of Israelis marching in support of the Israeli government or unusual images of Gazan children in the midst of explosions; these images were rarely labeled as AI-generated (Klee and Ramirez, 2023).

Solutions to this problem that focus on content authenticity could be the most valuable by providing individuals with more transparency about content that they consume and, therefore, more agency in terms of how to interpret that content. For example, solutions that provide metadata information to the user, such as authorship, geolocation data, and tools used in the editing of an image, can all be useful for user interpretation. Though the research behind authenticity measures affecting user trust in content is not conclusive, and content authenticity is not a silver bullet for solving all AI-driven harms on the internet, content authenticity solutions are a step in the right direction.

Image-generation technology will continue to evolve, become more advanced, and likely become more photoreal, increasing the escalation potential for harm. There is no foolproof way to make these models entirely safe for use. To start solving issues of AI-generated images fueling disinformation, harmful propaganda, NCII, and other safety harms, the United States must first develop solutions in regard to authenticity and improve the public's access to information about these images. The first step in navigating this shift is to comprehend how AI image generation tools produce safety issues and challenges and how current safeguards are insufficient to tackle the problem.

Safety Challenges in Image Generation
Safety challenges in the AI image generation landscape begin at the technical level, and the most significant safety challenges are due to biases and harms from training data, the existence of open-source image models, and a piecemeal approach to content moderation at the user level. The current AI image generation space mainly consists of text-to-image diffusion models, such as Midjourney, DALL-E, and Stable Diffusion, that generate images based on user prompts. Understanding the fundamentals of safety issues in text-to-image diffusion models can show why these models can produce unsafe images. Furthermore, diving deep into training data, open-source models, and content moderation reveals that these technical mitigations are simply not enough to prevent the generation of harmful content, and the United States needs authenticity solutions to manage risks and harms.

Image generation models reflect societal and representational biases on the internet because they are trained on data scraped from the internet. For example, there are far more images sexualizing women on the internet compared with similar images of men and far more images of men in professional roles (doctor, lawyer, engineer, etc.) than similar images of women. Image models conceptualize these representational biases and have become quite adept at generating content that both oversexualizes women and highlights men in professional positions (Heikkilä, 2022). Recent research shows that there is still much work to be done to make image models safer and less biased, as there are still severe occupational biases in models, which result in the exclusion of groups of people from generated results (Naik and Nushi, 2023). Furthermore, Rest of World in 2023 conducted an experiment with Midjourney that showed that the tool typically represented nationalities using harmful stereotypes: Images of a "Mexican person" mainly showed a man in a sombrero, and images of New Delhi almost exclusively showed polluted streets (Turk, 2023). The internet is inherently biased, given the nature of its human inputs: When you scrape highly biased data, you will generate it as well.
Safety harms in these models also stem from training data. To start, data labeling is largely outsourced to providers that specialize in scaled labeling, which is cost-effective. However, this process can introduce biases and inaccuracies in human labels for training data (Smith and Rustagi, 2020). When developers scrape data from the internet for image generation, their main method to ensure that models are safe is to filter training data and attempt to reduce the prevalence of harmful content in the training process (Smith and Rustagi, 2020). However, this method is contingent on how effective these filters are in rooting out harmful content while ensuring that they are not too conservative in excluding training data so that the models are still trained on a wide range of data and retain quality and creativity in their generations.

Furthermore, even with robust safety filtering, the ability of models to deduce concepts across different types of benign images can result in the generation of a harmful image. For example, an image model that is trained on images of beaches but not on pornography will understand how the human body looks with swimsuits or minimal clothing. The same model could also be trained on images of children going to school, playing outside, and so on, all of which are benign images. These model capabilities combined (without further safety measures) will likely allow the model to create images of scantily clad or even nude children through malign prompt engineering techniques. However, most developers would still want training data to have images of beaches and children to ensure high-quality and effective generations. Unfortunately, the trade-off between safety and what constitutes a quality product is often difficult.
The discussion of safety challenges with image generation would be incomplete without highlighting the open-source space for this critical technology. The debates around open-source generative AI are abundant: Those who are in favor highlight the benefits of open access in improving models and their safety capabilities with wider researcher access, and those against are focused on the potential for malign use and challenges with monitoring and controlling model use. With image generation in particular, open access has benefited malicious actors in removing safeguards and generating harmful content at scale with the use of fine-tuned open-source models. For example, when Stable Diffusion was open-sourced in 2022, Unstable Diffusion was born. Unstable Diffusion was a large server that used Stable Diffusion's open-source model with reduced safeguards to create not-suitable-for-workplace content (Wiggers and Silberling, 2022). An even more alarming example is CivitAI, a site created for AI image generation that allows users to browse thousands of models in order to generate pornographic content and synthetic child sexual abuse imagery, streamlining the nonconsensual AI porn economy (Maiberg, 2023a). A more recent example is a Stanford Internet Observatory investigation that found hundreds of images of confirmed child sexual abuse material (CSAM) in an open dataset called LAION-5B, which is commonly used in image generation models, such as Stable Diffusion (Thiel, 2023).

The use of open-source image generation poses ethical concerns related to child exploitation, consent, and fair use. From a consent and fair use perspective, such use could disproportionately affect sex workers, who may have their images scraped from the internet during the model training process, allowing their likeness to be reproduced synthetically without their consent and without compensation. Despite the benefits of open-source software ensuring greater access to image models, it has accelerated safety harms, especially those related to sexual content and consent.
Last, it is valuable to note the content moderation safeguards of classifiers and keyword-blocking, which are used in many image generation tools, and to detail why they are insufficient for safety. Safety, as a first principle across the AI development life cycle, is the path to ensuring safe image generation, starting from filtering data prior to training. When the majority of safety work is done in a monitoring capacity through content moderation, harmful generations are likely to fall through the cracks. The current image generation models on the market cannot entirely prevent the generation of NCII content, synthetic CSAM, extremist propaganda, and misinformation. Some of them, such as Stable Diffusion and Midjourney, rely on a weak form of content moderation: blocking specific keywords from being used. Midjourney, for example, enacted keyword-blocking for words related to the human reproductive system to attempt to prevent the generation of pornography (Heikkilä, 2023). Bing's AI Image Generator, powered by DALL-E 3, blocked the keywords "twin towers" to prevent any harmful depictions of 9/11 (David, 2023). Despite keyword-driven safeguards, bot accounts were able to spread an AI-generated image of the Pentagon on fire, which briefly went viral (Bond, 2023). Keyword-driven moderation is at best a piecemeal solution that only helps to moderate low-hanging fruit and can hardly cover the multitude of harmful narratives at scale.

Another form of moderation can be done through classifiers, such as prompt input classifiers that identify violative prompts and image output classifiers that flag images that should be blocked, both of which are used in DALL-E 3 to prevent harmful generations (David, 2023). Although such moderation is more holistic than keyword-blocking alone, these classifiers are not foolproof. Specifically, red-teamers for DALL-E 3 found that (1) the model's refusals on disinformation could be bypassed by asking for style changes, (2) the model could produce realistic images of fictitious events, and (3) public figures could be generated by the model through prompt engineering and circumvention (OpenAI, 2023). Broadly, content moderation, through both classifiers and keyword-blocking, was never built to infer context, and an attempt to do so is practically an impossible task. For example, users can attempt to bypass many keyword and classifier safeguards by using visual synonyms, such as "flesh-colored eggplant" (for a phallic image), "red liquid" for blood, and "skin-colored sheer top" to generate nudity; the list becomes endless.
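A toy version of keyword-blocking makes the weakness obvious. The blocklist and prompts below are invented for illustration and are not any vendor's actual list:

```python
# Toy keyword blocklist of the kind described above. Exact-match filtering
# catches the obvious prompt but misses visual synonyms entirely.
BLOCKED_TERMS = {"blood", "nude", "corpse"}

def prompt_allowed(prompt: str) -> bool:
    """Reject a prompt only if it contains an exact blocked keyword."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

print(prompt_allowed("a nude figure on a beach"))             # False: keyword caught
print(prompt_allowed("a figure in a skin-colored sheer top"))  # True: synonym slips through
print(prompt_allowed("red liquid splattered on the floor"))    # True: 'blood' never appears
```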
These harms and safety challenges are shaping content authenticity on the internet. Though much of the spotlight of content authenticity is on mis- and disinformation, the existence of synthetic NCII, synthetic extremist propaganda, and more also directly shapes users' perceptions of reality and can cause great harm to individuals and societies. Unfortunately, from a technical perspective, eradicating all these safety harms seems unlikely, given the challenges of content moderation, bias and safety issues with training data, and the existence of tailor-made open-source models. Instead, the United States should look to create greater transparency around these images through solutions that promote access to information about the origin of content. These solutions may be less fruitful to mitigate synthetic NCII content, given that the core of mitigating NCII is not just deciphering whether the content is AI-generated but also ensuring legal recourse and accountability for individuals and entities that distribute and create such content. However, authenticity solutions such as provenance and watermarking could help dissuade and disincentivize malicious actors from using tools that adopt these safeguards and help debunk photorealistic NCII, disinformation, and much more. Framing this issue as one of content authenticity could empower users and put the onus of responsibility in terms of securing transparency and accountability for safety issues in image generation models on technology providers and government entities that can enforce regulations to combat these harms.

Industry Solutions to Preserve Image Authenticity
The issue of preserving authenticity on the internet is not new. Fake accounts, bots, phishing emails, and more have been persistent issues for years. Challenges with deepfakes started to spark more-deliberate conversations about photo and video authenticity; in 2017, a Reddit user exploited Google's open-source deep-learning library to post pornographic face-swap images (Adee, 2020). Now, the fight to preserve authenticity has become even more crucial, given a lack of policy safeguards and sufficient platform-level enforcement, as well as the speed at which AI image generation is improving. When examining the issue of harms caused by AI-generated images through the lens of "authenticity," it is important to think through what kinds of tools and technologies will best support individuals in determining whether an image is authentic, despite adversarial motives. A user-centric approach to this issue is key, given that the success of content authenticity initiatives can be shaped by user experiences, personal bias, and/or beliefs about technology.

Watermarking, Hashing, and Detection
There have been several debates about the right approach to authenticity. The concept of watermarking has dominated the conversation and is a term that the White House included in its voluntary AI commitments for industry (White House, 2023a). However, there is confusion and a lack of consensus in the field about the definition of a watermark and how effective watermarks can be. A helpful way to define watermarking is provided by the Partnership on AI: a form of disclosure that can be visible or invisible to the user and includes "modifications into a piece of content that can help support interpretations of how the content was generated and/or edited" (Partnership on AI, 2023). Watermarking (both visible and invisible/metadata-based) can be a useful disclosure for the general public for content interpretation, but it is far from a holistic solution, given the need for watermarks to be robust against adversarial attacks, to overcome challenges to secure widespread adoption, and to be understandable by a consumer or user across different social platforms and hardware devices.
Another approach in the field is hashing, or fingerprinting image content, which happens after an image is created. Cryptographic hashing is used to determine exact matches, whereas perceptual hashing can find similar matches that may not be exactly the same image (Ofcom, 2022). The merits of hashing have been particularly evident in the identification of CSAM and terrorist content. For example, the National Center for Missing and Exploited Children and the Global Internet Forum to Counter Terrorism (GIFCT) are nonprofit organizations that use hash-sharing platforms with technology companies of thoroughly vetted hashes of confirmed CSAM (National Center for Missing and Exploited Children, undated) and terrorist content (GIFCT, undated), respectively. Hashing can prove useful in terms of sharing content among social media platforms that very clearly belongs in specific categories of abuse, but it is less practical for use across a variety of content, given that hashing occurs retroactively and cannot be done well at scale. It is also susceptible to adversarial attacks and is vulnerable to database integrity issues and discrepancies caused by human review in the content attribution process (Ofcom, 2022).
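The difference between the two hash families can be shown in a few lines. The "average hash" below is one simple perceptual scheme chosen for illustration; deployed matching systems use more robust algorithms.

```python
# Cryptographic vs. perceptual hashing. A one-pixel edit changes a SHA-256
# digest completely, while a simple perceptual "average hash" is unchanged,
# which is why perceptual hashes can match near-duplicate abusive images.
import hashlib
import numpy as np

def average_hash(img: np.ndarray) -> int:
    """Toy perceptual hash: downsample to 8x8, threshold at the mean, pack 64 bits."""
    small = img[::img.shape[0] // 8, ::img.shape[1] // 8][:8, :8]
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
edited = img.copy()
edited[1, 1] ^= 1  # flip one low bit in a pixel the downsampling skips

print(hashlib.sha256(img.tobytes()).hexdigest() ==
      hashlib.sha256(edited.tobytes()).hexdigest())  # False: exact match only
print(average_hash(img) == average_hash(edited))      # True: perceptual match survives
```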
Last, an approach that has been discussed for several years is detection. Both established companies and smaller startups, such as Intel (Clayton, 2023), Optic (Kovtun, 2023), and Reality Defender (Wiggers, 2023), have produced deepfake detection solutions. Though the technology can be promising, it comes with a host of issues. Traditionally, in the cybersecurity space, detection and evasion are a cat-and-mouse game, with detection needing to constantly improve as both adversarial actors and the technology itself improve. Reality Defender's chief executive officer claims that provenance and watermarking solutions are weaker, given that they require buy-in, and that Reality Defender's product, which is focused on inference (determining the probability of something being fake), is a more robust solution (Goode, 2023). However, even with high rates of efficacy, the onus would still be on users to gauge how much they should trust a piece of content based on a probability metric alone. Furthermore, current image detection capabilities have accuracy issues, as reported in a Bellingcat investigation. Bellingcat assessed a tool by Optic (called "AI or Not") and determined that it was successful in identifying AI-generated and real images quite accurately, except when AI-generated images were both compressed and photorealistic, in which case accuracy dropped significantly (Kovtun, 2023). Image compression (a relatively common practice) can assist malign actors in evading detection, especially on social media platforms, which generally compress all uploaded images. Detection tools are not foolproof and are fragile to minor perturbations.

Provenance
Though no individual solution to content authenticity is holistic, provenance is emerging as a useful tool to proactively preserve origin metadata and/or any editing or changes to a given piece of content. Detection methods risk being less useful as technology continues to improve and evolve while placing the onus of using detection tools on a user every time they come across content that they deem to be suspicious. Furthermore, such methods can lack accuracy, further obfuscating the decisionmaking process for an individual to assess content for its authenticity. Provenance approaches, such as establishing the origin of a piece of content through secured metadata, are generally more robust, given that they focus on the origin of content rather than proving whether something is real or fake. Furthermore, when implemented well, provenance solutions can be incorporated across the content supply chain (in AI image generation tools, social media platforms, news sites, and more) so that metadata information is readily available to a user and can complement and be used in tandem with watermarking and fingerprinting initiatives.

Examining the original approaches to provenance starts with understanding the history of data creation and modification, both of which are useful for AI image generation (Zhang, Chapman, and LeFevre, 2009). In an AI image generation context, provenance could map the origin of an image through a cryptographic hash or signature that is applied and attached to the content, is stored securely through encryption, and is "tamper-evident," or able to show whether the image has been altered in any way. Metadata information, available to users through labels, could also help define the trustworthiness of an image. In practice, making provenance a success goes far beyond a cryptographic signature and requires the widespread adoption of one or many interoperable frameworks across different mediums: hardware (e.g., cameras, smartphones), editing software (e.g., photo editing programs, face-swap apps), and publishing and sharing entities (e.g., news media, social media platforms), which is a very challenging task in practice.
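A minimal sketch of the tamper-evident mechanism follows, assuming an Ed25519 signature over the image bytes and their metadata. This illustrates only the general idea; real C2PA manifests are far richer, binding structured assertions to certificate-based credentials. It requires the third-party "cryptography" package.

```python
# Tamper-evident provenance in miniature: sign a hash of the image bytes plus
# its metadata; any later change to either breaks verification.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # held by the capture device or AI tool
verify_key = signing_key.public_key()       # distributed so anyone can check claims

def make_claim(image: bytes, metadata: dict) -> bytes:
    """Bind the image digest and its metadata into one signed payload."""
    payload = json.dumps({"sha256": hashlib.sha256(image).hexdigest(),
                          "metadata": metadata}, sort_keys=True).encode()
    return payload + signing_key.sign(payload)  # Ed25519 signatures are 64 bytes

def verify_claim(image: bytes, claim: bytes) -> bool:
    payload, sig = claim[:-64], claim[-64:]
    try:
        verify_key.verify(sig, payload)
    except InvalidSignature:
        return False
    return json.loads(payload)["sha256"] == hashlib.sha256(image).hexdigest()

img = b"...image bytes here..."  # placeholder content
claim = make_claim(img, {"generator": "example-ai-tool", "created": "2024-03-01"})
print(verify_claim(img, claim))            # True: image and metadata intact
print(verify_claim(img + b"edit", claim))  # False: tampering is evident
```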
The industry ecosystem has rallied around the application of provenance to support content authenticity ecosystems for AI-generated images. Industry leaders include Adobe, Intel, Microsoft, and Truepic, all of which are members of the Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA). Both groups are focused on cross-industry participation to tackle the issues of media transparency and content provenance, with the C2PA framework underlying many of these initiatives, and have released products that use the C2PA framework. For example, Adobe launched its "Content Credentials" feature in 2023, which uses the C2PA standard to allow the attachment of secure, tamper-evident metadata on an export or download (Quach, 2023).

The C2PA framework is an interoperable specification that "enables the authors of provenance data to securely bind statements of provenance data to instances of content using their unique credentials" (C2PA, undated-a). C2PA also provides a "chain of provenance," by which it can track whether an asset was created from a bevy of different assets. For example, the author or creator of a video (which may include AI-generated images, deepfake audio, and other varied forms of media under the C2PA specification) can tag the provenance of these various assets within the video and can determine what is and is not included. The creator of the content can also redact information to protect their privacy and/or security. C2PA also functions as an opt-in, open framework, meaning (when applied in good faith) it requires the permission of its users (likely for privacy reasons) and is built for any entity to adopt it (Castellanos and Gregory, 2021). It is valuable to note that C2PA has gaps in which privacy could be breached by malicious actors, as claim generators could require users to add sensitive information to manifests[3] for a given image (C2PA, undated-b).

Outside C2PA, Google has taken its own approach to securing authenticity online through its "about this image" feature, which provides background information on a given image, including when the image was first indexed by Google, where it appears online, and where it may have first appeared (Goode, 2023). In practice, this feature is helpful for users who wish to source content, but it is not a clear and reliable provenance method, given that it does not collect tamper evidence or provide thorough provenance metrics. Instead, this approach can be seen as complementary to C2PA efforts and is one method to empower user literacy for AI-generated images.

Issues for Provenance
Despite the developments in provenance, challenges remain: adoption across the content supply chain (given resource concerns for some entities), challenges with implementation and privacy for social media platforms, and broader privacy concerns across the globe. C2PA is much less useful when the necessary organizations across the supply chain for image creation and dissemination do not adopt it. For example, if an original image was developed in an image generation tool that adopted the C2PA specification and was subsequently changed in an editing tool that did not use C2PA, the provenance metadata of the image would only show the origin data for the original image, not its subsequent editing. Some entities across the supply chain of content (smaller media companies in the Global South, for example) may not have the resources to adopt a provenance specification and integrate it throughout their products. Furthermore, without the adoption of an interoperable specification by social media companies, users will not be able to get consistent provenance information on news-related images, political images, and other content across the web.

Implementation is a challenge for social media platforms; currently, most platforms strip image metadata when a user posts an image on a site (Laurent, 2013). This practice is largely for privacy reasons. Digital image metadata, or EXIF data, can reveal information about the person who took an image, such as a given image's geolocation (Marin, 2020). This paradigm of stripping metadata would have to change for provenance data to be available to users, and this change must be done in a way that allows users to opt in.
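For example, a few lines with the Pillow library show what EXIF metadata can expose and what "stripping" amounts to in practice; the file names below are placeholders.

```python
# Reading and stripping EXIF metadata with Pillow. Printing the tags shows why
# platforms strip them (GPSInfo can expose a user's location); re-saving the
# pixel data alone, as below, discards EXIF and any provenance metadata alike.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("photo.jpg") as im:
    for tag_id, value in im.getexif().items():
        print(TAGS.get(tag_id, tag_id), value)  # e.g., Model, DateTime, GPSInfo

    # 'Stripping': copy the pixels into a fresh image and save without metadata.
    clean = Image.new(im.mode, im.size)
    clean.putdata(list(im.getdata()))
    clean.save("photo_stripped.jpg")
```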
100、Last,there are consid-erations to be made about how privacy could be infringed by the weaponization of provenance metadata,and frame-works such as C2PA could be abused to enforce journalistic identity and enforce laws restricting freedom of expression,particularly in repressive states(Castellanos an
101、d Gregory,112021).Broader privacy concerns could also cause hesitancy for provenance adoption globally,especially for media enti-ties in nondemocratic states.The ideal user experience for image authenticity would involve provenance information being available for image content from a users phone or
102、camera,news sites,photo editing tools,social media platforms,and other publishers in a manner that uses opt-in to protect user privacy and is interoperable.Realizing this ideal is likely to be a Hercu-lean undertaking.However,even if multiple providers on the internet do not opt in to the same inter
103、operable speci-fication,AI image generators and social media platforms at the very least must work together and use a common provenance standard(such as C2PA)to enable user agency within social spaces and provide transparency to users about metadata tagged in AI-generated images.Public trust in imag
104、e content can only be improved with greater acces-sibility of provenance information for users to decide how to perceive a given image.Policy ConsiderationsThe 2023 White Houseled voluntary AI commitments include the need for the development of“robust prov-enance”and“watermarking”efforts by AI compa
The 2023 White House-led voluntary AI commitments include the need for the development of "robust provenance" and "watermarking" efforts by AI companies that have opted in to the commitments (White House, 2023a). Voluntary commitments and self-regulation are not enough to hold private entities accountable; however, striking the balance between regulation and voluntary efforts is a notable challenge. The voluntary nature of these commitments has some benefits, such as greater private-sector cooperation, information-sharing, and room to evolve transparency mechanisms and policies on provenance and watermarking. However, without common and universal standards on provenance and/or watermarking for companies to abide by and without forms of social media regulation such as transparency reporting, among other methods, policy solutions for image authenticity will be a toothless endeavor. Though the harms of AI-generated content cannot be erased, they can be mitigated through interoperable information disclosures on images. The most effective policy solutions will focus on building public understanding related to AI-generated images and transparency about content authenticity through collaboration between government and industry, as well as government-led accountability mechanisms to hold technology companies accountable. Several policy steps can help tackle the issue of harms stemming from AI-generated images, using an authenticity-focused lens.

The National Institute of Standards and Technology Must Lead U.S. Standards on Provenance and Watermarking
The United States needs concrete and enforceable standards on provenance and watermarking for industry adoption. Beyond the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), the United States does not have concrete government standards on AI. NIST's AI RMF is a promising start on a long road in the development of robust and adaptable AI standards but is too vague to be directly implemented on specific generative AI issues, such as harms from AI-generated images. NIST helps develop standards through consensus-building and working with organizations that create standards but does not issue standards on its own. However, it can be a key driver for synthetic content standards in the United States and on the international stage. NIST is working on generative AI profiles via its Generative AI Public Working Group and NIST AI consortium and must build specific standards and best practices on provenance and watermarking to create a universal U.S. standard to be implemented in regulation (NIST, undated). Furthermore, President Biden's 2023 executive order on AI designates NIST as the United States' epicenter for AI safety and security standards, under which NIST is directed to develop clear standards for digital authenticity and synthetic content for U.S. government adoption efforts (White House, 2023b).

Designing standards around provenance and watermarking adoption is not a simple task; the design must provide flexibility and room for evolution, given that these standards will not be immediately enforced by a regulatory body. Standards should be centered on NIST's trustworthiness principles (NIST, undated) and measurement science and should focus on measures that empower the public's consumption of images, both human- and AI-generated. Measurement science will be crucial to evaluate the efficacy of technical solutions, such as forensic techniques, interoperable provenance frameworks, digital identity verification, and image forgery detection. Sociotechnical evaluations guided by measurement science could also help the U.S. government understand how individuals consume content that is AI-generated and human-generated in different settings and how provenance affects the perception of both kinds of content. Standards should be guided by robust measurement science, and vice versa, to evolve and improve in a symbiotic manner. For example, a standard could designate what is deemed to be a robust watermark: a digital signature that is tamper-evident, that cannot be easily cropped out or removed, and that is securely applied to an image. Standards on provenance could also specify that tools with provenance features must be opt-in and transparent to the user to protect privacy and data rights. There likely should also be a documentation standard, in terms of how provenance data are documented and secured, to prevent adversarial attacks. These are just a few examples of what standards, shaped by measurement science and evaluations, could look like at NIST.

To determine the efficacy of standards, NIST could use experimentation and testing methods for these standards in a regulatory sandbox, in which firms could test the implementation of potential standards internally in collaboration with NIST, and NIST could determine the efficacy of standards based on results (Organisation for Economic Co-operation and Development [OECD], 2023). Regulatory sandbox exercises in the financial technology industry, as reported by OECD, have helped garner an evidence-based approach to regulation and policy (OECD, 2023). After standards are finalized, the U.S. government should deploy these standards both in technology regulation via Congress (especially for large AI developers that have already made voluntary commitments on AI) and for internal government use. The latter will likely be less challenging to implement from a political perspective, given intergovernmental incentives for adoption as a trust mechanism for U.S. government content and the absence of a profit motive to interfere with the adoption of standards. The development of these standards will not exist in a vacuum, and NIST is well positioned to work with civil society, industry, and other federal agencies to shape the future of provenance and watermarking and help the public navigate authenticity on the internet.

Tripartite Engagement on AI-Generated Image Harms Is Crucial
Currently, industry actors are leading the conversation on AI-generated image harms by releasing and pioneering detection tools, forming coalitions (such as CAI and C2PA), and adopting provenance and/or watermarking solutions. The proactiveness of industry on this issue, likely affected by the White House's voluntary AI commitments, is promising, but there are no formal engagement bodies or working groups among industry, the U.S. government, and civil society organizations on AI-generated image harms and authenticity issues. A working group that includes relevant U.S. government, civil society, and industry stakeholders will go a long way toward anticipating emerging risks in the AI image space, helping the federal government create and implement enforceable standards, and ensuring information-sharing on these issues. The United States is on the verge of the 2024 presidential elections, thus elevating the need to tackle authenticity issues from AI-generated images and ensure that the public is informed about the content it is consuming, which is where industry and civil society come in. These entities should not operate in silos. The Department of Commerce, in coordination with NIST, must formally engage industry and civil society actors on these issues through a working group to increase transparency about authenticity solutions (provenance, detection, etc.) that are being explored at technology companies, with the three branches of government incorporating these insights to inform future standards and practices that would best benefit the public. The right pathway to designing effective regulation involves staying ahead of the technology curve, which will only be possible if there is open engagement with industry on these issues through a joint working group with representatives from government, civil society, and industry.

The Federal Trade Commission Should Continue Consumer Protection Actions for AI
The Federal Trade Commission (FTC) must get involved with issues related to AI-generated images and how they affect market competition and consumers on the internet. The FTC has taken an active interest in AI, highlighting consumer concerns and opening an active investigation on OpenAI over data leaks and inaccuracies in ChatGPT (Zakrzewski, 2023). Furthermore, the agency issued an omnibus authority for "compulsory process" for AI products and services in nonpublic investigations, which allows the FTC to issue legally binding civil investigative demands (CIDs) to "obtain documents, information and testimony that advance FTC consumer protection and competition investigations" for the next ten years (FTC, 2023). This development could be highly advantageous in terms of gaining more information from large AI developers and other AI companies about harms created or exacerbated by their systems.

To build on these efforts, the FTC must focus its investigation on harmful image generation tools, such as those that create nonconsensual intimate AI images, and impose costs on businesses that are directly profiting from consumer harm, such as any companies that are profiting from nonconsensual AI pornography (Maiberg, 2023b). By imposing financial costs and issuing CIDs on harmful open-source image generation tools, for example, the FTC can help reduce authenticity-driven harms and dissuade private actors from profiting off of these tools. Furthermore, the FTC should generally use its CID authority to investigate harms from AI image generators and collect detailed information and documentation of consumer harms from these companies to better understand the macro state of safety harms from image generation and potential harms to market competition. The agency should also conduct an internal research study on consumer harms that can occur as a result of deceptive activities involving AI-generated images. The results should be shared externally with other federal agencies and the Executive Office of the President to better shape next steps to understand future market and consumer risks that could result from a lack of image transparency on the internet (FTC, 2023), including, for example, the adverse effect on public trust when AI-generated images with no authenticity disclosures are used in deceptive advertising to shape consumer opinion. Though the FTC often gets politicized in Congress, the agency is well positioned to tackle harms from deceptive activity and to evaluate potential harms to consumer protection with respect to AI-generated images.

The United States Is Overdue for Transparency Regulation for Social Media
The U.S. government needs to develop regulations on AI-generated content on social platforms to safeguard users and society from the harms exacerbated by AI-generated images. These regulations would take the form of transparency requirements and accountability mechanisms for regulatory compliance. Social media regulation in the United States is ostensibly inadequate, given a sheer lack of regulation beyond Section 230 of the Communications Decency Act, the effectiveness of which has been debated by members of Congress across the political spectrum. However, any future regulation of AI will be incomplete without the regulation of social media platforms on AI-generated content. Part of the challenge with enforcing new legislation and regulations is the boundaries created by Section 230, which dictates that platforms cannot be treated as publishers or broadcasters and therefore cannot be held legally accountable for the content that proliferates on their platforms (Matthews, Williams, and Evans, 2023). Though the debate on repealing Section 230 is long and complicated, there are actions that Congress can take within the current legal status quo.

First, transparency requirements are essential. The lack of transparency from social media platforms has led to much legislation in Congress but little that has been passed to enact regulation. The 2022 European Digital Services Act introduced new forms of compliance for platform transparency disclosures and has already resulted in increased openness and transparency. For example, TikTok opened its application programming interface (API) for access to European academics and researchers and provided options to disclose commercial content through labeling, among other initiatives (Lomas, 2023). The United States should follow Europe's example on transparency legislation and specifically mandate that very large online platforms (also known as VLOPs) publish biannual (twice per year) transparency reports on the amount of harmful AI-generated images disseminated, how much of that content was taken down, trends related to those particular harms (disinformation, sexual content, etc.), how many users engaged with confirmed AI-generated mis- or disinformation, how much AI-generated content was political disinformation, and in which geographic locations these harms originated. Though large platforms have their own versions of transparency reporting on a yearly basis, there is no uniform standard for what is reported, much less guidance on what kinds of disclosures should exist. The aforementioned categories for transparency disclosure would (1) represent a meaningful first step toward better understanding AI-related harms on social media platforms, (2) inform the public of the breadth of the problem, and (3) encourage platforms to build robust trust and safety teams to tackle these issues.

Second, the U.S. government should pass regulation that would require social media platforms (in collaboration with large AI developers) to use provenance and/or watermarking disclosures that are easily accessible to users and, ideally, interoperable. The purpose of this legislation and regulation should not be to determine exactly how provenance would be implemented in products; technology companies are best positioned to determine these details. Rather, it should require provenance disclosures for AI-generated images based on future standards created by NIST and guarantee that users have the ability to view provenance data on AI-generated images, especially those that are deemed to contain political content or other sensitive imagery (such as sexual content). As previously mentioned, making provenance interoperable will require widespread adoption by social media platforms, AI developers, and other entities. Some entities will not opt in to provenance frameworks because of resource-related reasons or privacy concerns or because tools are being used for malicious purposes. One example is open-source tools that do not include provenance metadata. Though the open-source challenge can be daunting, disclosures on trusted content can help individuals and the broader public retain agency and the ability to interpret what content to trust. The success of this regulation will be contingent on the standards developed and whether they contextualize to different use cases, such as a social media platform versus an AI developer.

Third, existing social media regulation, specifically the Platform Accountability and Transparency Act (PATA) and the Digital Services Oversight and Safety Act, must be pushed forward and implemented to get ahead on transparency and accountability actions for technology platforms. PATA will be vital for transparency efforts, given that it requires large technology platforms to provide external platform access to vetted researchers, who submit research proposals approved by the National Science Foundation (Perrino, 2023). Furthermore, PATA imposes costs: If platforms fail to comply, they face actions from the FTC and the potential loss of Section 230 protections, which would come with high costs (Perrino, 2023). This policy solution would increase the public's access to knowledge regarding generative AI harms, such as disinformation harms from AI-generated images. The Digital Services Oversight and Safety Act mandates that large social media platforms execute risk assessments and mitigation audits to prevent systemic risks, such as content that has a negative effect on "civic discourse, electoral processes, public security," and many other categories (U.S. House of Representatives, 2022). This large mandate is written to include generative AI harms, given that AI-generated images can be used to manipulate civic discourse, for example. These two pieces of legislation provide the initial foundational action to help curtail risks from generative AI on social platforms and increase transparency to the public of the harms on these platforms. Though there is not likely to be effective Section 230 reform soon, progress on the issue of AI-generated images and their effect on authenticity on the internet is still possible. These actions are not comprehensive but represent progress in the journey toward garnering greater public trust related to the authenticity of AI-generated images.

Conclusion
Developing policy solutions for AI-generated images on the internet may appear to be a narrow endeavor in the grand scheme of all generative AI harms and risks to society. However, examining this niche issue can serve as a useful case study about the future of AI and how it can be shaped to empower people. The harms from AI-generated images (NCII, disinformation used during conflicts or to ruin reputations, harmful political propaganda created to galvanize fringe communities) are ongoing. A fundamental issue that covers all these harms is authenticity and the public's access to trustworthy information about AI-generated images. The benefits of generative AI will be concentrated and its harms will be diffused if people are left without the right tools to navigate the unencumbered growth of this technology. Industry and policy efforts that center on public trust and authenticity, such as the standards and implementation of provenance methods, will prove to be the most sustainable and most advantageous for society in the long run.

Notes

[1] A neural network is a subset of machine learning that uses algorithms to identify relationships within a dataset in order to mimic how the human brain works.
[2] Gaussian noise is a type of statistical noise that is characterized by its probability distribution.
[3] C2PA defines manifest as "the set of information about the provenance of an asset based on the combination of one or more assertions (including content bindings), a single claim, and a claim signature" (C2PA, undated-c; emphasis in original).

References

Adee, Sally, "What Are Deepfakes and How Are They Created?" IEEE Spectrum, April 29, 2020.
Alpuim, Margarida, and Katja Ehrenberg, "Why Images Are So Powerful, and What Matters When Choosing Them," Bonn Institute, August 3, 2023.
165、he most sustainable and most advantageous for society in the long run.ReferencesAdee,Sally,“What Are Deepfakes and How Are They Created?”IEEE Spectrum,April 29,2020.Alpuim,Margarida,and Katja Ehrenberg,“Why Images Are So Powerfuland What Matters When Choosing Them,”Bonn Institute,August 3,2023.Alqah
166、tani,Hamed,Manolya Kavakli-Thorne,and Gulshan Kumar,“Applications of Generative Adversarial Networks(GANs):An Updated Review,”Archives of Computational Methods in Engineering,Vol.28,March 2021.Bond,Shannon,“Fake Viral Images of an Explosion at the Pentagon Were Probably Created by AI,”National Publi
167、c Radio,May 22,2023.C2PASee Coalition for Content Provenance and Authority.Castellanos,Jacobo,and Sam Gregory,“WITNESS and the C2PA Harms and Misuse Assessment Process,”WITNESS blog,December 2,2021.As of January 19,2024:https:/blog.witness.org/2021/12/witness-and-the-c2pa-harms-and-misuse-assessment
168、-process/Clayton,James,“Intels Deepfake Detector Tested on Real and Fake Videos,”BBC News,July 22,2023.Coalition for Content Provenance and Authority,“C2PA Explainer,”webpage,undated-a.As of January 19,2024:https:/c2pa.org/specifications/specifications/1.3/explainer/Explainer.htmlCoalition for Conte
169、nt Provenance and Authority,“C2PA Harms Modelling,”webpage,undated-b.As of February 21,2024:https:/c2pa.org/specifications/specifications/1.0/security/Harms_Modelling.htmlCoalition for Content Provenance and Authority,“C2PA Specifications,”webpage,undated-c.As of February 26,2024:https:/c2pa.org/spe
170、cifications/specifications/1.0/specs/C2PA_Specification.htmlDavid,Emilia,“Bings AI Image Generator Tries to Block Twin Towers Prompts,but Its Not Working,”The Verge,October 5,2023.Federal Trade Commission,“FTC Authorizes Compulsory Process for AI-Related Products and Services,”press release,November
171、 21,2023.FTCSee Federal Trade Commission.GIFCTSee Global Internet Forum to Counter Terrorism.Notes1 A neural network is a subset of machine learning that uses algorithms to identify relationships within a dataset in order to mimic how the human brain works.2 Gaussian noise is a type of statistical n
172、oise that is characterized by its probability distribution.3 C2PA defines manifest as“the set of information about the prov-enance of an asset based on the combination of one or more assertions(including content bindings),a single claim,and a claim signature”(C2PA,undated-c;emphasis in original).18G
173、lobal Internet Forum to Counter Terrorism,“GIFCTs Hash-Sharing Database,”webpage,undated.As of January 19,2024:https:/gifct.org/hsdb/Goode,Lauren,“Google Image Search Will Now Show a Photos History.Can It Spot Fakes?”Wired,October 25,2023.Heikkil,Melissa,“How It Feels to Be Sexually Objectified by a
174、n AI,”MIT Technology Review,December 13,2022.Heikkil,Melissa,“AI Image Generator Midjourney Blocks Porn by Banning Words About the Human Reproductive System,”MIT Technology Review,February 24,2023.Ho,Jonathan,Ajay Jain,and Pieter Abbeel,“Denoising Diffusion Probabilistic Models,”Proceedings of the 3
175、4th International Conference on Neural Information Processing Systems,December 2020.Kim,Gwanghyun,Taesung Kwon,and Jong Chul Ye,“DiffusionCLIP:Text-Guided Diffusion Models for Robust Image Manipulation,”2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR),2022.Klee,Miles,and Nik
176、ki McCann Ramirez,“AI Has Made the Israel-Hamas Misinformation Epidemic Much,Much Worse,”Rolling Stone,October 27,2023.Kovtun,Dennis,“Testing AI or Not:How Well Does an AI Image Detector Do Its Job?”Bellingcat,September 11,2023.Laurent,Olivier,“Study Exposes Social Media Sites That Delete Photograph
177、s Metadata,”British Journal of Photography,March 13,2013.Lomas,Natasha,“TikTok Expands Research API to Europe and Launches Ads Transparency Library,”TechCrunch,July 20,2023.Maiberg,Emanuel,“Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale,”404 Media,August 22,2023a.Maiberg,Em
178、anuel,“Site for Generating Non-Consensual AI Porn Restricts Content Following 404 Media Investigation,”404 Media,September 14,2023b.Marin,Milena,“Sending Encrypted Photos While Preserving Metadata,”Citizen Evidence Lab,Amnesty International,April 20,2020.Matthews,Luke J.,Heather J.Williams,and Alexa
179、ndra T.Evans,“Protecting Free Speech Compels Some Form of Social Media Regulation,”RAND Blog,October 20,2023.As of January 19,2024:https:/www.rand.org/pubs/commentary/2023/10/protecting-free-speech-compels-some-form-of-social.htmlMedill School of Journalism,“New Research Shows Visual Search Wins ove
180、r Text as Consumers Most Trusted Information Source,”Northwestern University,February 4,2019.Naik,Ranjita,and Besmira Nushi,“Social Biases Through the Text-to-Image Generation Lens,”AIES 23:Proceedings of the 2023 AAAI/ACM Conference on AI,Ethics,and Society,August 2023.National Center for Missing a
181、nd Exploited Children,“CyberTipline 2022 Report,”webpage,undated.As of January 19,2024:https:/www.missingkids.org/cybertiplinedataNational Institute of Standards and Technology,“NIST AI Public Working Groups,”webpage,undated.As of January 19,2024:https:/airc.nist.gov/generative_ai_wgOECDSee Organisa
182、tion for Economic Co-operation and Development.Ofcom,Overview of Perceptual Hashing Technology,November 22,2022.OpenAI,“DALL-E 3 System Card,”October 3,2023.Organisation for Economic Co-operation and Development,“Regulatory Sandboxes in Artificial Intelligence,”OECD Digital Economy Papers,No.356,Jul
183、y 2023.Paez,Danny,“This Person Does Not Exist Creator Reveals His Sites Creepy Origin Story,”Inverse,February 21,2019.Partnership on AI,“Building a Glossary for Synthetic Media Transparency Methods,Part 1:Indirect Disclosure,”December 19,2023.Perrino,John,“Platform Accountability and Transparency Ac
184、t Reintroduced in Senate,”Stanford Cyber Policy Center blog,June 8,2023.As of January 19,2024:https:/cyber.fsi.stanford.edu/news/platform-accountability-and-transparency-act-reintroduced-senateQuach,Katyanna,“How AI Watermarking System Pushed by Microsoft and Adobe Will and Wont Work,”The Register,O
185、ctober 15,2023.Smith,Genevieve,and Ishita Rustagi,Mitigating Bias in Artificial Intelligence:An Equity Fluent Leadership Playbook,Berkeley Haas Center for Equity,Gender and Leadership,July 2020.Thiel,David,“Investigation Finds AI Image Generation Models Trained on Child Abuse,”Stanford Cyber Policy
186、Center blog,December 20,2023.As of January 19,2024:https:/cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse19Turk,Victoria,“How AI Reduces the World to Stereotypes,”Rest of World,October 10,2023.U.S.Code,Title 47,Section 230,Protection for Private Blockin
187、g and Screening of Offensive Material.U.S.House of Representatives,Digital Services Oversight and Safety Act of 2022,Bill 6796,February 18,2022.White House,“Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,”fact
188、sheet,July 21,2023a.White House,“President Biden Issues Executive Order on Safe,Secure,and Trustworthy Artificial Intelligence,”fact sheet,October 30,2023b.Wiggers,Kyle,“Reality Defender Raises$15M to Detect Text,Video and Image Deepfakes,”TechCrunch,October 17,2023.Wiggers,Kyle,and Amanda Silberlin
189、g,“Meet Unstable Diffusion,the Group Trying to Monetize AI Porn Generators,”TechCrunch,November 17,2022.Zakrzewski,Cat,“FTC Investigates OpenAI over Data Leak and ChatGPTs Inaccuracy,”Washington Post,July 13,2023.Zhang,Jing,Adriane Chapman,and Kristen LeFevre,“Do You Know Where Your Datas Been?Tampe
About This Paper

This paper addresses the myriad safety harms that arise from artificial intelligence (AI)-generated images and what these images mean for authenticity on the internet writ large. The issue of authenticity on the internet is not a novel problem; however, the issue has become exacerbated by generative AI models, such as image generation tools. The goal of this paper is to highlight the challenges of tackling this pervasive harm and to provide considerations for U.S. policy to safeguard the authenticity of images on the internet in terms of this specific issue. The U.S. government has several potential policy levers to address this problem, and the most salient center on securing authenticity on the internet and, by extension, restoring public trust online. This paper should be of interest to policymakers, AI platform developers, and the public.

Technology and Security Policy Center

RAND Global and Emerging Risks is a division at RAND that develops novel methods and delivers rigorous research on potential catastrophic risks confronting humanity. This work was undertaken by the division's Technology and Security Policy Center, which explores how high-consequence, dual-use technologies change the global competition and threat environment, then develops policy and technology options to advance the security of the United States, its allies and partners, and the world. For more information, contact tasp@rand.org.

Funding

Funding for this work was provided by gifts from RAND supporters.

Acknowledgments

I would like to thank the leadership of the RAND Technology and Security Policy Center, Jeff Alstott and Emma Westerman, for their guidance on this publication; the mentorship of Anu Narayanan; and the reviewers, Todd Helmus of RAND, Doowan Lee of Trust in Media, and Sam Gregory, for their thoughtful reviews and improvement of this work.

About the Author

Bilva Chandra wrote this paper while serving at RAND as a Technology and Security Policy fellow. Prior to RAND, she led product safety efforts for DALL-E (image generation) at OpenAI and led disinformation and influence operations (as well as election integrity) efforts at LinkedIn. Chandra has expertise in AI-enabled influence operations and disinformation, AI safety, content moderation, and emerging threats. She has an M.A. in security studies. After completing her fellowship at RAND in December 2023, Chandra assumed the role of senior AI policy adviser at the National Institute of Standards and Technology.