Confronting Tech Power: 2023 Landscape

Our latest annual report diagnoses concentration of power in the tech industry as a pressing challenge, and points the path forward to seize this moment of change.

ACKNOWLEDGEMENTS

Authored by Amba Kak and Dr. Sarah Myers West. With research and editorial contributions from Alejandro Calcao, Jane Chung, Dr. Kerry McInerney, and Meredith Whittaker. Copyediting by Caren Litherland. Design by Partner & Partners. Special thanks to all those who provided feedback on sections in the report, including Veena Dubal, Ryan Gerety, Sara Geoghegan, Ellen Goodman, Alex Harman, Daniel Leufer, Estelle Masse, Fanny Hidvegi, Tamara Kneese, J. Nathan Mathias, Julia Powles, Daniel Rangel, Gabrielle Rejouis, Maurice Stucke, Charlotte Slaiman, Lori Wallach, Ben Winters, and Jai Vipra.

Cite as: Amba Kak and Sarah Myers West, “AI Now 2023 Landscape: Confronting Tech Power,” AI Now Institute, April 11, 2023, https://ainowinstitute.org/2023-landscape.

Table of Contents

Executive Summary
ChatGPT and More: Large-Scale AI Models Entrench Big Tech Power
Toxic Competition: Regulating Big Tech’s Data Advantage
Algorithmic Accountability: Moving beyond Audits
SPOTLIGHT: Data Minimization as a Tool for AI Accountability
Algorithmic Management: Creating Bright-Line Rules to Restrain Workplace Surveillance
SPOTLIGHT: Tech and Financial Capital
Antitrust: It’s Time for Structural Reforms to Big Tech
Biometric Surveillance Is Quietly Expanding: Bright-Line Rules Are Key
International “Digital Trade” Agreements: The Next Frontier
US-China Arms Race: AI Policy as Industrial Policy
SPOTLIGHT: The Climate Costs of Big Tech

Executive Summary

Artificial intelligence1 is captivating our attention, generating both fear and awe about what’s coming next. As increasingly dire prognoses about AI’s future trajectory take center stage in the headlines about generative AI, it’s time for regulators, and the public, to ensure that there is nothing about artificial intelligence (and the industry that powers it) that we need to accept as given. This watershed moment must also swiftly give way to action: to galvanize the considerable energy that has already accumulated over several years towards developing meaningful checks on the trajectory of AI technologies. This must start with confronting the concentration of power in the tech industry.

The AI Now Institute was founded in 2017, and even within that short span we’ve witnessed similar hype cycles wax and wane: when we wrote the 2018 AI Now report, the proliferation of facial recognition systems already seemed well underway, until pushback from local communities pressured government officials to pass bans in cities across the United States and around the world.2 Tech firms were associated with the pursuit of broadly beneficial innovation,3 until worker-led organizing, media investigations, and advocacy groups shed light on the many dimensions of tech-driven harm.4 These are only a handful of examples, and what they make clear is that there is nothing about artificial intelligence that is inevitable. Only once we stop seeing AI as synonymous with progress can we establish popular control over the trajectory of these technologies and meaningfully confront their serious social, economic, and political impacts: from exacerbating patterns of inequality in
housing,5 credit,6 healthcare,7 and education8 to inhibiting workers’ ability to organize9 and incentivizing content production that is deleterious to young people’s mental and physical health.10

In 2021, several members of AI Now were asked to join the Federal Trade Commission (FTC) to advise the Chair’s office on artificial intelligence.11 This was, among other things, a recognition of the growing centrality of AI to digital markets and the need for regulators to pay close attention to potential harms to consumers and competition. Our experience within the US government helped clarify the path for the work ahead.

Footnotes:
1. The term artificial intelligence has come to mean many different things over the course of its history, and may be best understood as a marketing term rather than a fixed object. See, for example: Michael Atleson, “Keep your AI Claims in Check,” Federal Trade Commission, February 27, 2023; Meredith Whittaker, “Signal, and the Tech Business Model Shaping Our World,” Conference on Steward-Ownership, 2023; Annie Lowery, “AI Isn’t Omnipotent. It’s Janky,” The Atlantic, April 3, 2023.
2. Meredith Whittaker, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, Sarah Myers West, Rashida Richardson, Jason Schultz, and Oscar Schwartz, AI Now 2018 Report, AI Now Institute, December 2018; Tom Simonite, “Face Recognition Is Being Banned–But It’s Still Everywhere,” Wired, December 22, 2021.
3. See Jenna Wortham, “Obama Brought Silicon Valley to Washington,” New York Times, October 25, 2016; and Cecilia Kang and Juliet Eilperin, “Why Silicon Valley Is the New Revolving Door for Obama Staffers,” Washington Post, February 28, 2015.
4. Varoon Mathur, Genevieve Fried, and Meredith Whittaker, “AI in 2019: A Year in Review,” Medium, October 9, 2019.
5. Robert Bartlett, Adair Morse, Richard Stanton, and Nancy Wallace, “Consumer-Lending Discrimination in the FinTech Era,” Journal of Financial Economics 143, no. 1 (January 1, 2022): 30–56.
6. Christopher Gilliard, “Prepared Testimony and Statement for the Record,” Hearing on “Banking on Your Data: The Role of Big Data in Financial Services,” House Financial Services Committee Task Force on Financial Technology, 2019.
7. Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan, “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” Science 366, no. 6464 (October 25, 2019): 447–53.
8. Rashida Richardson and Marci Lerner Miller, “The Higher Education Industry Is Embracing Predatory and Discriminatory Student Data Practices,” Slate, January 13, 2021.
9. Ibid.
10. See Zach Praiss, “New Poll Shows Dangers of Social Media Design for Young Americans, Sparks Renewed Call for Tech Regulation,” Accountable Tech, March 29, 2023; and Tawnell D. Hobbs, Rob Barry, and Yoree Koh, “The Corpse Bride Diet: How TikTok Inundates Teens with Eating-Disorder Videos,” Wall Street Journal, December 17, 2021.
11. Federal Trade Commission, “FTC Chair Lina M. Khan Announces New Appointments in Agency Leadership Positions,” press release, November 19, 2021.

ChatGPT was unveiled during the last month of our time at the FTC, unleashing a wave of AI hype that shows no signs of letting up. This underscored the importance of addressing AI’s role and impact, not as a philosophical futurist exercise but as something that is being used to shape the world around us here and now. We urgently need to be learning from the “move fast and break things” era of Big Tech; we can’t allow companies to use our lives, livelihoods, and institutions as testing grounds for novel technological approaches, experimenting in the wild to our detriment. Happily, we do not need to draft policy from scratch: artificial intelligence, the companies that produce it, and the affordances required to develop these technologies already exist in a regulated space, and companies need to follow the laws already in effect. This provides a foundation, but we’ll need to construct new tools and approaches, built on what we already have.

There is something different about this particular moment: it is primed for action. We have abundant research and reporting that clearly documents the problems with AI and the companies behind it. This means that more than ever before, we are prepared to move from identifying and diagnosing harms to taking action to remediate them. This will not be easy, but now is the moment for this work. This report is written with this task in mind: we are drawing from our experiences inside and outside government to outline an agenda for how we, as a group of individuals, communities, and institutions deeply concerned about the impact of AI unfolding around us, can meaningfully confront the core problem that AI presents, and one of the most difficult challenges of our time: the concentration of economic and political power in the hands of the tech industry, and Big Tech in particular.

There is no AI without Big Tech. Over the past several decades, a handful of private actors have accrued power and resources that rival nation-states while developing and evangelizing artificial intelligence as critical social infrastructure. AI is being used to make decisions that shape the trajectory of our lives: from the deeply impactful, like what kind of job we get and how much we’re paid, and whether we can access decent healthcare and a good education; to the very mundane, like the cost of goods on the grocery shelf and whether the route we take home will send us into traffic.

Across all of these domains, the same problems show themselves: the technology doesn’t work as claimed, and it produces high rates
 of error or unfair and discriminatory results. But the visible problems are only the tip of the iceberg. The opacity of this technology means we may not be informed when AI is in use, or how it’s working. This ensures that we have little to no say about its impact on our lives. This is underscored by a core attribute of artificial intelligence: it is foundationally reliant on resources that are owned and controlled by only a handful of Big Tech firms.

The dominance of Big Tech in artificial intelligence plays out along three key dimensions:

1. The Data Advantage: Firms that have access to the widest and deepest swath of behavioral data insights through surveillance will have an edge in the creation of consumer AI products. This is reflected in the acquisition strategies adopted by tech companies, which have of late focused on expanding this data advantage. Tech companies have amassed a tremendous degree of economic power, which has enabled them to embed themselves as core infrastructure within a number of industries, from health to consumer goods to education to credit.

2. Computing Power Advantage: AI is fundamentally a data-driven enterprise that is heavily reliant on substantial computing power to train, tune, and deploy these models. This is expensive and runs up against material dependencies, such as chips and the location of data centers, that mean efficiencies of scale apply, as well as labor dependencies on a relatively small pool of highly skilled tech workers who can most efficiently use these resources.12 Only a handful of companies actually run their own infrastructure: the cloud and compute resources foundational to building AI systems. What this means is that even though “AI startups” abound, they must be understood as barnacles on the hull of Big Tech: licensing server infrastructure, and
 as a rule competing with each other to be acquired by one or another Big Tech firm. We are already seeing these firms wield their control over necessary resources to throttle competition. For example, Microsoft recently began penalizing customers for developing potential competitors to GPT-4, threatening to restrict their access to Bing search data.13

3. Geopolitical Advantage: AI systems (and the companies that produce them) are being recast not just as commercial products but foremost as strategic economic and security assets for the nation, which need to be boosted by policy and never restrained. The rhetoric around the US-China AI race has evolved from a sporadic talking point to an increasingly institutionalized stance (represented by collaborative initiatives between government, military, and Big Tech companies) that positions AI companies as crucial levers within this geopolitical fight. This narrative conflates the continued dominance of Big Tech as synonymous with US economic prowess, and ensures the continued accrual of resources and political capital to these companies.

To understand how we got here, we need to look at how tech firms presented themselves in their incipiency: their rise was characterized by marketing rhetoric promising that commercial tech would serve the public interest, encoding democratic values like freedom, democracy, and progress. But what’s clear now is that the companies developing and deploying AI and related technologies are motivated by the same things that, structurally and necessarily, motivate all corporations: growth, profit, and rosy market valuations. This has been true from the start.

Footnotes:
12. For example, Microsoft is even rationing access to server hardware internally for some of its AI teams to ensure it has the capacity to run GPT-4. See Aaron Holmes and Kevin McLaughlin, “Microsoft Rations Access to AI Hardware for Internal Teams,” The Information.
13. Leah Nylen and Dina Bass, “Microsoft Threatens to Restrict Data in Rival AI Search,” March 24, 2023.

Why “Big Tech”?

In this report, we pay special attention to policy interventions that target large tech companies. The
term “Big Tech” became popular around 2013,14 as a way to describe a handful of US-based megacorporations, and while it doesn’t have a definite composition, today it’s typically used as shorthand for Google, Apple, Facebook, Amazon, and Microsoft (often abbreviated as GAFAM), and sometimes also includes companies like Uber or Twitter.

It’s a term that draws attention to the unique scale at which these companies operate: the network effects, data, and infrastructural advantages they have amassed. Big Tech’s financial leverage has allowed these firms to consolidate this advantage across sectors, from social media to healthcare to education, and across media (like the recent pivot to virtual and augmented realities), often through strategic acquisitions. They seek to protect this advantage from regulatory threats through lobbying and similar non-capital strategies that leverage their deep pockets.15

Following on from narratives around “Big Tobacco,” “Big Pharma,” and “Big Oil,” this framing draws upon lessons from other domains where consolidation of power in industries has led to movements to reassert public accountability. (As one commentator puts it, “society does not prepend the label Big with a capital B to an industry out of respect or admiration. It does so out of loathing and fear and in preparation for battle.”16) Recent name changes, like Google to Alphabet or Facebook to Meta, also make Big Tech helpful terminology to capture the sprawl of these companies and their continually shifting contours.17

Focusing on Big Tech is
 a useful prioritization exercise for tech policy interventions for several reasons:

Tackling challenges that either originate from or are exemplified by Big Tech companies can address the root cause of several key concerns: invasive data surveillance, the manipulation of individual and collective autonomy, the consolidation of economic power, and the exacerbation of patterns of inequality and discrimination, to name a few.

The Big Tech business and regulatory playbook has a range of knock-on effects on the broader ecosystem, incentivizing and even compelling other companies to fall in line. Google and Facebook’s adoption of the behavioral advertising business model, which effectively propelled commercial surveillance into becoming the business model of the internet, is just one example of this.

Footnotes:
14. Nick Dyer-Witheford and Alessandra Mularoni, “Framing Big Tech: News Media, Digital Capital and the Antitrust Movement,” Political Economy of Communication 9, no. 2 (2021): 2–20.
15. Zephyr Teachout and Lina Khan, “Market Structure and Political Law: A Taxonomy of Power,” Duke Journal of Constitutional Law & Public Policy 9, no. 1 (2014): 37–74.
16. Will Oremus, “Big Tobacco. Big Pharma. Big Tech?” Slate, November 17, 2017.
17. Kean Birch and Kelly Bronson, “Big Tech,” Science as Culture 31, no. 1 (January 2, 2022): 1–14.

Growing dependencies on Big Tech across the tech industry and government make them a single point of failure. A core business strategy for these firms is to make themselves infrastructural, and much of the wider tech ecosystem relies on them in one way or another, from cloud computing to advertising ecosystems and, increasingly, to payments. This makes these companies both a choke point and a single point of failure.

We’re also seeing spillover into the public sector. While a whole spectrum of vendors for AI and tech products sells to government agencies, the dependence of government on Big Tech affordances came into particular focus during the height of the pandemic, when many national governments needed to rely on Big Tech infrastructure, networks, and platforms for basic governance functions.

Finally, this report takes aim not just at the pathologies associated with these companies, but also at the broader narratives that justify and normalize them. From unrestricted innovation as a social good, to digitization, to data as the only way to see and interpret the world, to platformization as necessarily beneficial to society and synonymous with progress (and regulation as chilling this progress), these narratives pervade the tech industry and, increasingly, government functioning as well.

Strategic priorities

Where do we go from here? Across the chapters of this report, we offer a set of approaches that, in concert, will collectively enable us to confront the concentrated power of Big Tech. Some of these are bold policy reforms that offer bright-line rules and structural changes. Others aren’t in the traditional domain of policy at all, but acknowledge the importance of non-regulatory interventions such as collective action and worker organizing, while acknowledging the role public policy can play in bolstering or kneecapping these efforts. We also identify trendy policy responses that seem positive on their surface but, because they fail to meaningfully address power discrepancies, should be abandoned. The primary jurisdictional focus for these recommendations is the US, although where relevant we point to policy windows or trends in other jurisdictions (such as the EU) with necessarily global impacts.

Four strategic priorities emerge as particularly crucial for this moment:

1. Employ strategies that place the burden on companies to demonstrate that they are not doing harm, rather than on the public and regulators to continually investigate, identify, and find solutions for harms after they occur.

Investigative journalism and independent research have been critical to tech accountability: the hard work of those testing opaque systems has surfaced failures that have been crucial for establishing evidence of tech-enabled harms. But, as we outline in the section on Algorithmic Accountability, as a policy response, audits and similar accountability frameworks dependent on third-party evaluation play directly into the tech company playbook
 by positioning responsibility for identifying and addressing harms outside of the company.

The finance sector offers a useful corollary for thinking this through. Much like AI, the actions taken by large financial firms have diffuse and unpredictable effects on the broader financial system and the economy at large. It’s hard to predict any particular harm these may cause, but we know the consequences can be severe, and the communities hit hardest are those that already experience significant inequality. After multiple crisis cycles, there’s now widespread consensus that the onus needs to be on companies to demonstrate that they are mitigating harms and to comply with regulations, rather than on the broader public to root these out.

The tech sector, likewise, has diffuse and unpredictable effects not only on our economy, but on our information environment and labor market, among many other things. We see value in a due-diligence approach that requires firms to demonstrate their compliance with the law, rather than turning to regulators or civil society to show where they haven’t complied: similar in orientation to how we already regulate many goods that have significant public impact, like food and medicine. And we need structural curbs like bright lines and no-go zones that identify types of use and domains of implementation that should be barred in any instance, as many cities have already established by passing bans on facial recognition. For example, in the chapter on Algorithmic Management we identify emotion recognition as a type of technology that should never be deployed, but particularly in the workplace: aside from the clear concerns about its use of pseudoscience and accompanying discriminatory effects, it is fundamentally unethical for employers to seek to draw inferences about their employees’ inner state to maximize their profit. And in Biometric Surveillance, we identify the absence of such bright-line measures as the animating force behind a slow creep of facial recognition and other surveillance systems into domains like cars and virtual reality.

We also need to lean further toward scrutiny of harms before they happen, rather than waiting to rectify harms after they’ve already occurred. We discuss what this might look like in the context of merger reviews in the Toxic Competition section, advocating for an approach to merger reviews that looks to predict and prevent abusive practices before they manifest; and in
Antitrust, we break down how needed legal reforms would render certain kinds of mergers invalid in the first place, and put the onus on companies to demonstrate they aren’t anticompetitive.

2. Break down silos across policy areas, so we’re better prepared to address where advancement of one policy agenda impacts others. Firms play this isolation to their advantage.

One of the primary sources of Big Tech power is the expansiveness of their reach across markets, with digital ecosystems that stretch across vast swathes of the economy. This means that effective tech policy must be similarly expansive, attending to how measures adopted in the advancement of one policy agenda ramify across other policy domains. For example, as we underscore in the section on Toxic Competition, legitimate concerns about third-party data collection must be addressed in a way that doesn’t inadvertently enable further concentration of power in the hands of Big Tech firms. Disconnects between the legal and policy approaches to privacy on the one hand and competition on the other have enabled firms to put forward self-regulatory measures, like Google’s Privacy Sandbox, in the name of privacy that will ultimately lead to the depletion of both privacy and competition, by strengthening Google’s ability to collect information on consumers directly while hollowing out its competitors.

These disconnects can also prevent progress in one policy domain from carrying over to another. Despite years of carefully accumulated evidence on
 the fallibility of AI-based content filtration tools, we’re seeing variants of the magical thinking that AI tools will be able to scan effectively for illegal content crop up once again in encryption policy, with the EU’s recent “chat control” client-side scanning proposals.18

Policy and advocacy silos can also blunt strategic creativity in ways that foreclose alliance or cross-pollination. We’ve made progress on this front in other domains, ensuring for example that privacy and national security are increasingly seen as consonant, rather than mutually exclusive, objectives. But AI policy has been undermined too often by a failure to understand AI materially, as a composite of data, algorithmic models, and large-scale computational power. Once we view AI this way, we can understand data minimization and other approaches that limit data collection not only as protecting consumer privacy, but as mechanisms that help mitigate some of the most egregious AI applications, by reducing firms’ data advantage as a key source of their power and rendering certain types of systems impossible to build. It was through data protection law that Italy’s privacy regulator was the first to issue a ban on ChatGPT,19 and, the week before that, Amsterdam’s Court of Appeal ruled automated firing and opaque algorithmic wages to be illegal.20 FTC officials also recently called for leveraging antitrust as a tool to enhance worker power, including to push back against worker surveillance.21 This opens up space for advocates working on AI-related issues to form strategic coalitions with those that have been leveraging these policy tools in other domains. Throughout this report, we attempt to establish links between related, but often siloed, domains: data protection or competition reform as AI policy (see sections on Data Minimization and Antitrust), or AI policy as industrial policy (see section on US-China Arms Race).

3. Identify when policy approaches get co-opted and hollowed out by industry, and pivot our strategies accordingly.

The tech industry, with its billions of dollars and deep political networks, has been both nimble and creative in its response to anything perceived as a policy threat. There are relevant lessons from the European experience around the perils of shifting from a “rights-based” regulatory framework, as in the GDPR, to a “risk-based” approach, as in the upcoming AI Act, and how the framing of “risk” (as opposed to rights) could tip the playing field in favor of industry-led voluntary frameworks and technical standards.22

Footnotes:
18. Ross Anderson, “Chat Control or Child Protection?” University of Cambridge Computer Lab, October 13, 2022.
19. Clothilde Goujard, “Italian Privacy Regulator Bans ChatGPT,” Politico, March 31, 2023.
20. Worker Info Exchange, “Historic Digital Rights Win for WIE and the ADCU Over Uber and Ola at the Amsterdam Court of Appeals,” April 4, 2023.
21. Elizabeth Wilkins, “Rethinking Antitrust,” March 30, 2023.
22. Fanny Hidvegi and Daniel Leufer, “The EU Should Regulate AI on the Basis of Rights, Not Risks,” Access Now, February 17, 2021.

Responding to the growing chorus calling for bans on facial recognition technologies in sensitive social domains, several tech companies pivoted from resisting regulation to claiming to support it, something they often highlighted in their marketing. The fine print showed that what these companies actually supported were soft moves positioned to undercut bolder reform. For example, Washington
state’s widely critiqued facial recognition law passed with Microsoft’s support. The bill prescribed audits and stakeholder engagement, a significantly weaker stance than banning police use, which is what many advocates were calling for (see section on Biometrics). Similarly, mountains of research and advocacy demonstrate the discriminatory impacts of AI systems and the fact that these issues cannot be addressed solely at the level of code and data. While the AI industry has accepted that bias and discrimination are an issue, companies have also been quick to narrowly cast bias as a technical problem with a technical fix.

Civil society responses must be nimble in responding to Big Tech subterfuge, and we must learn to recognize such subterfuge early. We draw from these lessons when we argue that there is disproportionate policy energy being directed toward AI and algorithmic audits, impact assessments, and “access to data” mandates. Indeed, such approaches have the potential to eclipse and nullify structural approaches to curbing the harms of AI systems (see section on Algorithmic Accountability). In an ideal world, such transparency-oriented measures would live alongside clear standards of accountability and bright-line prohibitions. But this is not what we see happening. Instead, a steady stream of proposals positions algorithmic auditing as the primary policy approach toward AI.

Finally, we also need to stay on top of companies’ moves to evade regulatory scrutiny entirely: for example, firms have been seeking to introduce measures in global trade agreements (see section on Global Digital Trade) that would render regulatory efforts seeking accountability by signatory countries presumptively illegal. And companies have sought to use promises of AI magic as a means of evading stronger regulatory measures, such as by clinging to the familiar false argument that AI can provide a fix for unsolvable problems, such as in content moderation.23

4. Move beyond a narrow focus on legislative and policy levers and embrace a broad-based theory of change.

To make progress and ensure the longevity of our wins, we must be prepared for
94、 the long game,andauthor strategies that keep momentum going in the face of inevitable political stalemates.We can learnfrom ongoing organizing in other domains,from climate advocacy(see section on Climate)thatidentifies the long-term nature of these stakes,to worker-led organizing(see section on Al
95、gorithmicManagement)which has emerged as one of the most e?ective approaches to challenging andchanging tech company practice and policy.We can also learn from shareholder advocacy(see sectionon Tech&Financial Capital),which uses companies own capital strategies to push for accountabilitymeasures-on
96、e example is the work of the Sisters of St.Joseph of Peace using shareholder proposalsto hold Microsoft to account for human rights abuses.The Sisters also used such proposals to seek aban on the sale of facial recognition to government entities,and to require Microsoft to evaluate howthe companys l
97、obbying aligns with its stated principles.24Across these fronts,there is much to learnfrom the work of organizers and advocates well-versed in confronting corporate power.24See Chris Mills Rodrigo,“Exclusive:Scrutiny Mounts on Microsofts Surveillance Technology,”The Hill,June 17,2021;and Issie Lapow
98、sky,“TheseNuns Could Force Microsoft to Put Its Money Where Its Mouth Is,”Protocol,November 19,2021,23Federal Trade Commission,“Combatting Online Harms Through Innovation”,Federal Trade Commission,June 2022.2023 Landscape12Windows for policy movementThese strategic priorities are designed to take ad
99、vantage of current windows for action.We summarizethem below,and review each in more detail in the body of the report.WINDOWS FOR ACTION:THE AI POLICY LANDSCAPEContain tech firms dataadvantage.See:Toxic CompetitionData minimizationData policy is AI policy,and steps taken to curb companies dataadvant
100、age are a key lever in limiting concentration.Create bright-line rules that limit firms ability to collect data onconsumers or produce data about them(also known as data minimization).Connect privacy and competition law both in enforcement and in thedevelopment of AI policy.Firms are using these dis
101、juncts to their ownadvantage.Reform the merger guidelines and enforcement measures such thatconsolidation of data advantages receives scrutiny as part of determiningwhether to allow a merger,and enable enforcers to intervene to stopabusive practices before the harms take place.Build support forcompe
102、tition reforms asa key lever to reduceconcentration in tech.See:AntitrustEnforce competition laws by aggressively curbing mergers that expandfirms data advantage and investigating and penalizing companies whenthey engage in anti-competitive behaviors.Be wary of US versus China“AI race”rhetoric used
for deregulatory arguments in policy debates on competition, privacy, and algorithmic accountability.
- Pass the full package of antitrust bills from the 117th Congress to give antitrust enforcers stronger tools to challenge abusive practices specific to the tech industry.
- Integrate competition analysis across all tech policy domains, identifying places where platform companies might take advantage of privacy measures to consolidate their own advantage, for example, or how concentration in the cloud market has follow-on effects for security by distributing risk systemically.[25]

Regulate ChatGPT and other large-scale models. (See: General Purpose AI)
- Apply lessons from the ongoing debate on the EU AI Act to prevent regulatory carve-outs for "general-purpose AI": large language models (LLMs) and other similar technologies carry systemic risks; their ability to be fine-tuned toward a range of uses requires more regulatory scrutiny, not less.
- Mandate documentation requirements that can provide the evidence to ensure developers of these models are held accountable for data and design choices.
- Enforce existing law on the books to create public accountability in the rollout of generative AI systems and prevent harm to consumers and competition.

[25] For example, a 2017 outage in Amazon Web Services' S3 server took out several healthcare and hospital systems: Casey Newton, "How a typo took down S3, the backbone of the internet," The Verge, March 2, 2017.

- Closely scrutinize
claims to openness; generative AI has structural dependencies on resources available to only a few firms.

Displace audits as the primary policy response to harmful AI. (See: Algorithmic Accountability)
- Audits and data-access proposals should not be the primary policy response to harmful AI. These approaches fail to confront the power imbalances between Big Tech and the public, and risk further entrenching power in the tech industry.
- Closely scrutinize claims from a burgeoning audit economy, with companies offering audits-as-a-service despite no clarity on the standards and methodologies for algorithmic auditing, nor consensus on their definitions of risk and harm.
- Impose strong structural curbs on harmful AI, such as bans, moratoria, and rules that put the burden on companies to demonstrate that they are fit for public and/or commercial release.

Future-proof against the quiet expansion of biometric surveillance into new domains like cars. (See: Biometrics)
- Develop comprehensive bright-line rules to future-proof biometric regulation against changing forms and use cases.
- Make sure biometric regulation addresses broader inferences, beyond just identification.
- Impose stricter enforcement of data minimization provisions that exist
in data protection laws globally as a way to curb the expansion of biometric data collection in new domains like virtual reality and automobiles.

Enact strong curbs on worker surveillance. (See: Algorithmic Management)
- Worker surveillance is fundamentally about employers gaining and maintaining control over workers. Enact policy measures that even the playing field.
- Establish baseline worker protections from algorithmic management and workplace surveillance.
- Shift the burden of proof to developers and employers and away from workers.
- Establish clear red lines around domains (e.g., automated hiring and firing) and types of technology (e.g., emotion recognition) that are inappropriate for use in any context.

Prevent "international preemption" by digital trade agreements that can be used to weaken national regulation on algorithmic accountability and competition policy. (See: Digital Trade)
- Nondiscrimination prohibitions in trade agreements should not be used to protect US Big Tech companies from competition regulation abroad.
- Expansive and absolute secrecy guarantees for source code and algorithms in trade agreements should not be used to undercut efforts to enact laws on algorithmic transparency.
- Upcoming trade
agreements like the Indo-Pacific Economic Framework should instead be used to set a more progressive baseline for digital policy.

It's time to move: years of critical work and organizing have outlined a clear diagnosis of the problems we face, regulators are primed for action, and we have strategies ready
to be deployed immediately for this effort. We'll also need more: those engaged in this work are out-resourced and outflanked amid a significant uptick in industry lobbying and a growing attack on critical work, from companies firing AI ethics teams to universities shutting down critical research centers. And we face a hostile narrative landscape. The surge in AI hype that opened 2023 has moved things backwards, reintroducing the notion that AI is associated with innovation and progress, and drawing considerable energy toward far-off hypotheticals and away from the task at hand.

We intend this report to provide strategic guidance to inform the work ahead of us, taking a bird's-eye view of the landscape and of the many levers we can use to shape the future trajectory of AI, and the tech industry behind it, to ensure that it is the public, not industry, that this technology serves, if we let it serve at all.
Large-Scale AI Models

ChatGPT and More: Large-Scale AI Models Entrench Big Tech Power

Industry is attempting to stave off regulation, but large-scale AI needs more scrutiny, not less.

1. Large-scale general-purpose AI models (such as GPT-3.5 and its user-facing application ChatGPT) are being promoted by industry as "foundational" and a major turning point for scientific advancement in the field. They are also often associated with slippery definitions of "open source." These narratives distract from what we call the "pathologies of scale" that become more entrenched every day: large-scale AI models are still largely controlled by Big Tech firms because of the enormous computing and data resources they require, and they also present well-documented concerns around discrimination, privacy and security vulnerabilities, and negative environmental impacts.

Large-scale AI models like large language models (LLMs) have received the most hype, and fear-mongering, over the past year. Both the excitement and anxiety[26] around these systems serve to reinforce the notion that these models are foundational and a major turning point for advancement in the field, despite manifold examples where these systems fail to provide meaningful responses to prompts.[27]

[26] Future of Life Institute, "Pause Giant AI Experiments: An Open Letter," accessed March 29, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/; Yuval Harari, Tristan Harris, and Aza Raskin, "Opinion | You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills," The New York Times, March 24, 2023, sec. Opinion.

But these narratives distract from what we call the pathologies of scale, which this emergent framing serves to obscure. The term "foundational," for example, was introduced by Stanford
University when announcing a new center of the same name in early 2022,[28] in the wake of the publication of an article listing the many existential harms associated with LLMs.[29] In notably fortuitous timing, the introduction of these models as "foundational" aimed to equate them (and those espousing them) with unquestionable scientific advancement, a stepping stone on the path to "Artificial General Intelligence"[30] (another fuzzy term evoking science-fiction notions of replacing or superseding human intelligence), thereby making their wide-scale adoption seem inevitable.[31] These discourses have since returned to the foreground following the launch of OpenAI's newest LLM-based chatbot, ChatGPT.

On the other hand, the term "general purpose AI" (GPAI) is being used in policy instruments like the EU's AI Act to underscore that these models have no defined downstream use and can be fine-tuned to apply in specific contexts.[32]
It has been wielded to make arguments such as: because these systems lack clear intention or defined objectives, they should be regulated differently or not at all, effectively creating a major loophole in the law (more on this in Section 2 below).[33]

[33] Alex C. Engler, "To Regulate General Purpose AI, Make the Model Move," Tech Policy Press, November 10, 2022, https://techpolicy.press/to-regulate-general-purpose-ai-make-the-model-move.
[32] The EU Council's draft or "general position" on the AI Act text defines General Purpose AI (GPAI) as an AI system "that irrespective of how it is placed on the market or put into service, including as open source software, is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems." See Council of the European Union, "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts: General Approach," November 25, 2022, https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf; see also the Future of Life Institute and University College London's proposal to define GPAI as an AI system "that can accomplish or be adapted to accomplish a range of distinct tasks, including some for which it was not intentionally and specifically trained." Carlos I. Gutierrez, Anthony Aguirre, Risto Uuk, Claire C. Boine, and Matija Franklin, "A Proposal for a Definition of General Purpose Artificial Intelligence Systems," Future of Life Institute, November 2022, https://futureoflife.org/wp-content/uploads/2022/11/SSRN-id4238951-1.pdf.
[31] See National Artificial Intelligence Research Resource Task Force, "Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource," January 2023, https://www.ai.gov/wp-content/uploads/2023/01/NAIRR-TF-Final-Report-2023.pdf; and Special Competitive Studies Project, "Mid-Decade Challenges to National Competitiveness," September 2022, https://www.scsp.ai/wp-content/uploads/2022/09/SCSP-Mid-Decade-Challenges-to-National-Competitiveness.pdf.
[30] See Sam Altman, "Planning for AGI and beyond," March 2023.
[29] Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, https://dl.acm.org/doi/10.1145/3442188.3445922.
[28] See the Center for Research on Foundation Models, Stanford University, https://crfm.stanford.edu; and Margaret Mitchell (@mmitchell_ai), "Reminder to everyone starting to publish in ML: 'Foundation models' is *not* a recognized ML term; was coined by Stanford alongside announcing their center named for it; continues to be pushed by S'ford as *the* term for what we've all generally (reasonably) called 'base models,'" Twitter, June 8, 2022, 4:01 p.m.
[27] Greg Noone, "Foundation models may be the future of AI. They're also deeply flawed," Tech Monitor, November 11, 2021 (updated February 9, 2023), https://techmonitor.ai/technology/ai-and-automation/foundation-models-may-be-future-of-ai-theyre-also-deeply-flawed; Dan McQuillan, "We Come to Bury ChatGPT, Not to Praise It," danmcquillan.org, February 6, 2023, https://www.danmcquillan.org/chatgpt.html; Ido Vock, "ChatGPT Proves That AI Still Has a Racism Problem," New Statesman, December 9, 2022; Janice Gassam Asare, "The Dark Side of ChatGPT," Forbes, January 28, 2023; Billy Perrigo, "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic," Time, January 18, 2023.

Such terms deliberately obscure another fundamental feature of these models: they currently require computational and data resources at a scale that ultimately only the most well-resourced companies can afford to sustain.[34] For a sense of the figures, some estimates suggest it costs 3 million dollars a month to run ChatGPT,[35] and that it cost 20 million dollars in computing to train Pathways Language Model (PaLM), a recent LLM from Google.[36] Currently only a handful of companies with incredibly vast resources are able to build them. That's why the majority of existing large-scale AI models have been almost exclusively developed by Big Tech, especially Google (Google Brain, DeepMind), Meta, and Microsoft (and its investee OpenAI). This includes many off-the-shelf, pretrained AI models that are offered as part of cloud AI services, a market already concentrated in Big Tech players such as AWS (Amazon), Google Cloud (Alphabet), and Azure (Microsoft). Even if costs are lower or come down as these
systems are deployed at scale (and this is a hotly contested claim[37]), Big Tech is likely to retain a first-mover advantage, having had the time and market experience needed to hone their underlying language models and to develop invaluable in-house expertise. Smaller businesses or startups may consequently struggle to successfully enter this field, leaving the immense processing power of LLMs concentrated in the hands of a few Big Tech firms.[38]

This market reality cuts through growing narratives that highlight the potential for "open-source" and "community or small and medium enterprise (SME)-driven" GPAI projects, or even the conflation of GPAI as synonymous with open source (as we've seen in discussions around the EU's AI Act).[39] In September 2022, for example, a group of ten industry associations led by the Software Alliance (or BSA) published a statement opposing the inclusion of any legal liability for the developers of GPAI models.[40] Their headline argument was that this would "severely impact open source development in Europe" as well as "undermine AI uptake, innovation, and digital transformation."[41] The statement leans on hypothetical examples that present a caricature of both how GPAI models work and what regulatory intervention would entail: the classic case cited is of an individual developer creating an open-source document-reading tool and being saddled by regulatory requirements around future use cases it can neither predict nor control.

[41] BSA, "BSA Leads Joint Industry Statement on the EU Artificial Intelligence Act and High-Risk Obligations for General Purpose AI."
[40] See BSA | The Software Alliance, "BSA Leads Joint Industry Statement on the EU Artificial Intelligence Act and High-Risk Obligations for General Purpose AI," press release, September 27, 2022; and BSA, "Joint Industry Statement on the EU Artificial Intelligence Act and High-Risk Obligations for General Purpose AI," September 27, 2022, https://www.bsa.org/files/policy-filings/09272022industrygpai.pdf.
[39] Ryan Morrison, "EU AI Act Should Exclude General Purpose Artificial Intelligence - Industry Groups," Tech Monitor, September 27, 2022, https://techmonitor.ai/technology/ai-and-automation/eu-ai-act-general-purpose.
[38] Richard Waters, "Falling costs of AI may leave its power in hands of a small group," Financial Times, March 9, 2023.
[37] Alex Lohn and Micah Musser, "AI and Compute," Center for Security and Emerging Technology, https://cset.georgetown.edu/wp-content/uploads/AI-and-Compute-How-Much-Longer-Can-Computing-Power-Drive-Artificial-Intelligence-Progress_v2.pdf.
[36] Lennart Heim, "Estimating PaLM's training cost," April 5, 2022, https://blog.heim.xyz/palm-training-cost; Peter J. Denning and Ted G. Lewis, "Exponential Laws of Computing Growth," Communications of the ACM 60, no. 1 (January 2017): 54-65, https://cacm.acm.org/magazines/2017/1/211094-exponential-laws-of-computing-growth/abstract.
[35] See Tom Goldstein (@tomgoldsteincs), "I estimate the cost of running ChatGPT is $100K per day, or $3M per month. This is a back-of-the-envelope calculation. I assume nodes are always in use with a batch size of 1. In reality they probably batch during high volume, but have GPUs sitting fallow during low volume," Twitter, December 6, 2022, 1:34 p.m.; MetaNews, "Does ChatGPT Really Cost $3M a Day to Run?" December 21, 2022; Ben Cottier, "Trends in the dollar training cost of machine learning systems," Epoch, January 31, 2023, https://epochai.org/blog/trends-in-the-dollar-training-cost-of-machine-learning-systems; Jeffrey Dastin and Stephen Nellis, "For tech giants, AI like Bing and Bard poses billion-dollar search problem," Reuters, February 22, 2023; Jonathan Vanian and Kif Leswing, "ChatGPT and generative AI are booming, but the costs can be extraordinary," CNBC, March 13, 2023; Dan Gallagher, "Microsoft and Google Will Both Have to Bear AI's Costs," WSJ, January 18, 2023; "The AI Boom That Could Make Google and Microsoft Even More Powerful," Wall Street Journal, February 11, 2023; Diane Coyle, "Preempting a Generative AI Monopoly," Project Syndicate, February 2, 2023, https://www.project-syndicate.org/commentary/preventing-tech-giants-from-monopolizing-artificial-intelligence-chatbots-by-diane-coyle-2023-02.

The discursive move here is to conflate "open source," which has a specific meaning related to permissions and licensing regimes, with the intuitive notion of being "open" in that these models are accessible for downstream use and adaptation (typically through Application Programming Interfaces, or APIs). The latter is more akin to "open access," though even in that sense
they remain limited, since they only share the API rather than the model or training data sources.[42] In fact, in OpenAI's paper announcing its GPT-4 model, the company said it would not provide details about the architecture, model size, hardware, training compute, data construction, or training method used to develop GPT-4, other than noting it used its Reinforcement Learning from Human Feedback approach, asserting competitive and safety concerns. Running directly against the current push to increase firms' documentation processes,[43] such moves compound what has already been described as a reproducibility crisis in machine learning-based science, in which claims about the capabilities of AI-based models cannot be validated or replicated by others.[44]

Ultimately, this form of deployment only serves to increase Big Tech firms' revenues and entrench their strategic business advantage.[45] While there are legitimate reasons to consider potential downstream harms associated with making such systems widely accessible,[46] even when projects might make their code publicly available and meet other definitions of open source, the vast computational requirements of these systems mean that dependencies between these projects and the commercial marketplace will likely persist.[47]

[47] See Coyle, "Preempting a Generative AI Monopoly."
[46] Arvind Narayanan and Sayash Kapoor, "The LLaMA is Out of the Bag. Should We Expect a Tidal Wave of Disinformation?" Knight First Amendment Institute (blog), March 6, 2023, https://knightcolumbia.org/blog/the-llama-is-out-of-the-bag-should-we-expect-a-tidal-wave-of-disinformation.
[45] A report by the UK's Competition & Markets Authority (CMA) points to how Google's "open" approach with its Android OS and Play Store (in contrast to Apple's) proved to be a strategic advantage that eventually led to similar outcomes in terms of revenues and strengthened its consolidation over various parts of the mobile phone ecosystem. See Competition & Markets Authority, "Mobile Ecosystems: Market Study Final Report," June 10, 2022, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1096277/Mobile_ecosystems_final_report_-_full_draft_-_FINAL_.pdf.
[44] Sayash Kapoor and Arvind Narayanan, "Leakage and the Reproducibility Crisis in ML-based Science," arXiv, July 14, 2022, https://arxiv.org/abs/2207.07048.
[43] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru, "Model Cards for Model Reporting," arXiv, January 14, 2019, https://arxiv.org/abs/1810.03993; Emily Bender and Batya Friedman, "Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science," Transactions of the Association for Computational Linguistics 6 (2018): 587-604, https://aclanthology.org/Q18-1041/; Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford, "Datasheets for Datasets," Communications of the ACM 64, no. 12 (2021): 86-92, https://arxiv.org/abs/1803.09010.
[42] Peter Suber, Open Access (Cambridge, MA: MIT Press, 2019), https://openaccesseks.mitpress.mit.edu.

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
By Dr. Emily M. Bender, Dr. Timnit Gebru, Angelina McMillan-Major, and Dr. Margaret Mitchell

"Are ever larger LMs inevitable or necessary? What costs are associated with this research direction and what should we consider before pursuing it?"

In 2021, Dr. Emily M. Bender, Dr. Timnit Gebru, Angelina McMillan-Major, and Dr. Margaret Mitchell warned against the potential costs and harms of large language models (LLMs) in a paper titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?"[48] The paper led to Google forcing out both Gebru and Mitchell from their positions as the co-leads of Google's Ethical AI team.[49] This paper could not have been more prescient in identifying the pathologies of scale that afflict LLMs. As public discourse is consumed by breathless hype around ChatGPT and other LLMs as an unarguable advancement in science, this research offers sobering reminders of the serious concerns that afflict these kinds of models. Rather than uncritically accept these technologies as synonymous with progress, the arguments advanced in the paper raise existential questions about if, not how, society should be building them at all. The key concerns raised in the paper are as follows:

Environmental and Financial Costs

LLMs are hugely energy-intensive to train and produce large CO2 emissions. Well-documented environmental racism means that marginalized people and people from the Majority World/Global South are more likely to experience the harms caused by heightened energy consumption and CO2 emissions, even though they are also least likely to experience the benefits of these models. Additionally, the high cost of entry and of training these models means that only a small global elite is able to develop and benefit from LLMs. The authors argue that environmental and financial costs should become a top consideration in Natural Language Processing (NLP) research.

Unaccountable Training Data

"In accepting large amounts of web text as representative of all of humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality."

The use of large and uncurated training data sets risks creating LLMs that entrench dominant, hegemonic views. The large size of these training data sets does not guarantee diversity, as they
are often scraped from websites that exclude the voices of marginalized people due to issues such as inadequate Internet access, underrepresentation, filtering practices, or harassment. These data sets run the risk of value-lock, encoding harmful biases into LLMs that are difficult to thoroughly audit.

[49] Cade Metz and Daisuke Wakabayashi, "Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.," The New York Times, December 3, 2020, sec. Technology.
[48] Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-23, FAccT '21 (New York: Association for Computing Machinery, 2021), https://doi.org/10.1145/3442188.3445922.

Creating Stochastic Parrots

Bender et al. further warn that the pursuit of LLM benchmarks may be a misleading direction for research, as these models have access to form, but not meaning. They observe that "an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot." As stochastic parrots, these models are likely to absorb hegemonic worldviews from their training data and produce outputs that contain both subtle forms of stereotyping and outright abusive language. They can also lead to harms based on translation errors, and through their misuse by bad actors to create propaganda, spread misinformation, and deduce sensitive information.

2. Large-scale AI models must be subject to urgent regulatory scrutiny, particularly given the frenzied speed of rollout to the public. Documentation and scrutiny of data and related design choices at the stage of model development is key to surfacing and mitigating harm.

It's not a blank slate. Legislative proposals on algorithmic accountability must be expanded and strengthened, and existing legal tools should be creatively applied to introduce friction and shape the direction of innovation.

There is growing exceptionalism around generative AI models that underplays inherent risks and justifies their exclusion from the purview of AI regulation. We should draw lessons from the ongoing debate in Europe on the inclusion of general purpose AI under the "high risk" category of the upcoming AI Act.

Along with breathless hype around the future potential of AI, the release of ChatGPT (and its subsequent adaptation into Microsoft's search chatbot) immediately surfaced thorny legal questions, such as: who owns and has rights over the content generated by these systems?[50] Is generative AI protected from
lawsuits relating to illegal content it might generate under intermediary liability protections like Section 230?[51]

What's clear is that there are already existing legal regimes that apply to large language models, and we aren't building them from the ground up. In fact, rhetoric that implies this is necessary works mostly in industry's best interests, by slowing the paths to enforcement and updates to the law.

[51] Electronic Frontier Foundation, "Section 230," Electronic Frontier Foundation, n.d., https://www.eff.org/issues/cda230.
[50] James Vincent, "The scary truth about AI copyright is that nobody knows what will happen next," The Verge, November 15, 2022.

A blog post recently published by the FTC outlined several ways the agency's authorities already apply to generative AI systems: if they're used for fraud, cause substantial injury, or make false claims about the systems' capabilities, the FTC has
191、growing consensus around recognized harms from AI systems(particularly inaccuracies,bias,and discrimination)has led to a flurry of policy movement over the last few years centering aroundgreater transparency and diligence around data and algorithmic design practices(See also:AlgorithmicAccountabilit
192、y).These emerging AI policy approaches will need to be strengthened to address theparticular challenges these models bring up,and the current public attention on AI is poised togalvanize momentum where its been lacking.In the EU,this question is not theoretical.It is at the heart of a hotly conteste
193、d debate about whetherthe original developers of so-called“general purpose AI”(GPAI)models should be subject to theregulatory requirements of the upcoming AI Act.52Introduced by the European Commission in April2021,the Commissions original proposal(Article 52a)e?ectively exempted the developers of G
194、PAI fromcomplying with the range of documentation and other accountability requirements in the law.53Thiswould therefore mean that GPAI that ostensibly had no predetermined use or context would not qualifyas high risk another provision(Article 28)confirmed this position,implying stating that develop
195、ers ofGPAI would only become responsible for compliance if they significantly modified or adapted the AIsystem for high-risk use.The European Councils position took a di?erent stance where originalproviders of GPAI will be subject to certain requirements in the law,although working out the specifics
196、of what these would be delegated to the Commission.Recent reports suggest that the EuropeanParliament,too,is considering obligations specific to original GPAI providers.As the inter-institutional negotiation in the EU has flip-flopped on this issue the debate seems to havedevolved into an unhelpful
binary where either end users or original developers take on liability,[54] rather than both having responsibilities of different kinds at different stages.[55] And a recently leaked unofficial US government position paper reportedly states that placing burdens on original developers of GPAI could be "very burdensome, technically difficult and in some cases impossible."[56]

[56] Luca Bertuzzi, "The US Unofficial Position on Upcoming EU Artificial Intelligence Rules," Euractiv, October 24, 2022.
[55] The Mozilla Foundation's position paper on GPAI helpfully argues in favor of joint liability. See Maximilian Gahntz and Claire Pershan, "Artificial Intelligence Act: How the EU Can Take on the Challenge Posed by General-Purpose AI Systems," Mozilla Foundation, 2022.
[54] An article by Brookings fellow Alex Engler, for example, argues that regulating downstream end users makes more sense because "good algorithmic design for a GPAI model doesn't guarantee safety and fairness in its many potential uses, and it cannot address whether any particular downstream application should be developed in the first place." See Alex Engler, "To Regulate General Purpose AI, Make the Model Move," Tech Policy Press, November 10, 2022, https://techpolicy.press/to-regulate-general-purpose-ai-make-the-model-move/; see also Alex Engler, "The EU's attempt to regulate general purpose AI is counterproductive," Brookings, August 24, 2022, https://www.brookings.edu/blog/techtank/2022/08/24/the-eus-attempt-to-regulate-open-source-ai-is-counterproductive/.
[53] European Commission, "Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts," April 21, 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52021PC0206.
[52] Creative Commons, "As European Council Adopts AI Act Position, Questions Remain on GPAI," Creative Commons, December 13, 2022, https://creativecommons.org/2022/12/13/as-european-council-adopts-ai-act-position-questions-remain-on-gpai/; Corporate Europe Observatory, "The Lobbying Ghost in the Machine: Big Tech's covert defanging of Europe's AI Act," February 2023, https://corporateeurope.org/sites/default/files/2023-02/The%20Lobbying%20Ghost%20in%20the%20Machine.pdf; Gian Volpicelli, "ChatGPT broke the EU plan to regulate AI," Politico, March 3, 2023, https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/.

These accounts lose sight of the two most important reasons that large-scale AI models require oversight:

1. Data and design decisions made at the developer stage determine many of the models' most harmful downstream impacts, including the risks of bias and discrimination.[57] There is mounting research and advocacy that argues for the benefits of rigorous documentation and accountability requirements on the developers of large-scale models.[58]

2. The developers of these models, many of which are Big Tech or Big Tech-funded, commercially benefit from these models through licensing deals with downstream actors.[59] Companies licensing these models for specific uses should certainly be accountable for conducting diligence within the specific context in which these models are applied, but to make them wholly liable for risks that emanate from data and design choices made at the stage of original development would result in both unfair and ineffective regulatory outcomes.

[59] See, for example, Madhumita Murgia, "Big Tech companies use cloud computing arms to pursue alliances with AI groups," Financial Times, February 5, 2023; Leah Nylen and Dina Bass, "Microsoft Threatens Data Restrictions in Rival AI Search," Bloomberg, March 25, 2023; Jonathan Vanian, "Microsoft adds OpenAI technology to Word and Excel," CNBC, March 16, 2023; and Patrick Seitz, "Microsoft Stock Breaks Out After Software Giant Adds AI to Office Apps," Investor's Business Daily, March 17, 2023.
[58] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford, "Datasheets for Datasets," arXiv:1803.09010, December 2021; Mehtab Khan and Alex Hanna, "The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability," Ohio State Technology Law Journal, forthcoming, accessed March 3, 2023; and Bender, Gebru, McMillan-Major, and Shmitchell, "On the Dangers of Stochastic Parrots."
[57] Sasha Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need (Cambridge, MA: MIT Press, 2020).

Toxic Competition

Regulating Big Tech's Data Advantage

One of the
key sources of tech firms' power is their data advantage. Privacy and competition law are two tools that, used in concert, can effectively curb this source of some of tech firms' most harmful behavior. Doing so requires strategic calibration of the effects of each on digital markets and on the broader public, implementing a policy approach that takes into account the benefits firms seek to gain via information asymmetries. In this section we identify two domains in which these dynamics are currently playing out: data mergers and adtech. These provide an illustrative example for analysis of the tech industry playbook and a path forward.

Privacy and competition law are too often siloed from one another, leading to interventions that easily compromise the objectives of one issue over the other. Firms are taking advantage of this to amass information asymmetries that contribute to further concentration of their power. Rather than accept the silos of legal expertise and precedent, it's clear that privacy and competition regulators need to work in concert to regulate an industry that draws on invasive surveillance for competitive benefit.60

Considered in isolation, traditional antitrust and privacy analyses could indeed lead in divergent directions. But, as Maurice Stucke and Ariel Ezrachi underscore in their work, competition can be toxic.61 As Stucke puts it, "in the digital platform economy, behavioral advertising can skew the platforms', apps', and websites' incentives. The ensuing competition is about us, not for us. Here firms compete to exploit us in discovering better ways to addict us, degrade our privacy, manipulate our behavior, and capture the surplus."62 Scholars in the EU have taken this line of thinking as far as to explore how competition law could be enforced as a substitute for data protection law, given the endemic nature of such practices within digital markets.63 And though privacy measures that aim to curb data collection are in some instances a significant step toward curbing tech firms' data advantage, they only go so far: for example, some proposals that focus on third-party tracking in isolation offer a giant loophole that enables tech firms to entrench their power.64

Bringing privacy and competition policy into closer consideration can, at best, offer a complementary set of levers for tech accountability that work in concert with one another to check the power of big tech firms.65 If left unattended, pursuing privacy and competition in isolation will enable corporate actors to "resolve" critiques through self-regulatory moves that ultimately expand and entrench, rather than limit, concentrated tech power.66 We see this playing out in two domains in particular: data mergers and adtech.
66 See Cudos, "The Fable of Self-Regulation: Big Tech and the End of Transparency," n.d., https://www.cudos.org/blog/the-fable-of-self-regulation-big-tech-and-the-end-of-transparency-%F0%9F%8C%AB%EF%B8%8F; and Rys Farthing and Dhakshayini Sooriyakumaran, "Why the Era of Big Tech Self-Regulation Must End," Australian Quarterly 92, no. 4 (October–December 2021): 3–10, https://www.jstor.org/stable/27060078.
65 Peter Swire, "Protecting Consumers: Privacy Matters in Antitrust Analysis," Center for American Progress, October 19, 2007, https://www.americanprogress.org/article/protecting-consumers-privacy-matters-in-antitrust-analysis.
64 Maurice E. Stucke, "The Relationship between Privacy and Antitrust," Notre Dame Law Review Reflection 97, no. 5 (2022): 400–417, https://ndlawreview.org/wp-content/uploads/2022/07/Stucke_97-Notre-Dame-L.-Rev.-Reflection-400-C.pdf
63 Giuseppe Colangelo and Mariateresa Maggiolino, "Data Protection in Attention Markets: Protecting Privacy Through Competition?", Journal of European Competition Law & Practice, April 3, 2017, https:/
62 Maurice E. Stucke, "The Relationship between Privacy and Antitrust," Notre Dame Law Review Reflection 97, no. 5 (2022): 400–417, https://ndlawreview.org/wp-content/uploads/2022/07/Stucke_97-Notre-Dame-L.-Rev.-Reflection-400-C.pdf
61 Maurice E. Stucke and Ariel Ezrachi, Competition Overdose: How Free Market Mythology Transformed Us from Citizen Kings to Market Servants (New York: HarperCollins, 2020)
60 Jacques Crémer, Yves-Alexandre de Montjoye and Heike Schweitzer, "Competition policy for the digital era", European Commission, 2019, https://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf; Autorité de la concurrence and Bundeskartellamt, "Competition Law and Data", Bundeskartellamt, May 10, 2016, https://www.bundeskartellamt.de/SharedDocs/Publikation/DE/Berichte/Big%20Data%20Papier.pdf?_blob=publicationFile&v=2; Competition & Markets Authority and the Information Commissioner's Office, "Competition and Data Protection in Digital Markets: A Joint Statement between the CMA and the ICO," May 19, 2021, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/987358/Joint_CMA_ICO_Public_statement_-_final_V2_180521.pdf; Subcommittee on Antitrust, Commercial, and Administrative Law of the Committee on the Judiciary of the House of Representatives, "Investigation of Competition in Digital Markets," July 2022, https://www.govinfo.gov/content/pkg/CPRT-117HPRT47832/pdf/CPRT-117HPRT47832.pdf; Australian Competition & Consumer Commission, "Digital Platforms Inquiry: Final Report and Executive Summary," video, July 26, 2019, https://www.accc.gov.au/focus-areas/inquiries-finalised/digital-platforms-inquiry-0/final-report-executive-summary; and the Competition Commission, "Online Intermediation Platforms Market Inquiry: Provisional Summary Report," July 2022, https://pcom.co.za/wp-content/uploads/2022/07/OIPMI-Provisional-Summary-Report.pdf.

Data Mergers and Acquisitions

Competition analysis must account for how firms leverage commercial surveillance tools and strategies to amass power.

Competition enforcers have now largely converged in agreement that data plays a critical role in digital markets and that regulators need to attend to how data practices shape information asymmetries and market power.67 It then follows that competition analysis must account for how firms leverage commercial surveillance tools and strategies to their competitive advantage and to the detriment of user privacy.68

One primary means through which tech firms have grown their market power is the consolidation of data they are able to collect; and when they can't do so on their own, they buy their way in. Google's acquisition of DoubleClick in 2008 was a bellwether case that led to a practice now widespread in the industry: the acquisition of firms in order to gain an information advantage.69 Google-DoubleClick was itself not a conventional data merger. Google acquired DoubleClick because its own
publisher ad server had failed to gain traction in the adtech industry.70 Through DoubleClick, Google gained control of both the market-leading publisher ad server and an advertising exchange. And at the time of the acquisition, Google emphasized that its privacy policies prohibited the company from combining its own data streams with those obtained from other websites. But the company quietly walked back this internal policy in 2016, notably a point at which the company had amassed greater market power and thus was less exposed to the risk that users would flock to a competitor platform.71 Following the change, Google could combine all its user data into a single user identifier that it could integrate into its buying tools to enable uniquely precise targeting of particular users: an extremely valuable advantage for Google Ads' advertising clients.72 This also made it harder for publishers to track users themselves by masking these user identifiers.73 These negative effects led publishers to complain that the acquisition was anticompetitive; the FTC, however, opted not to block the merger from going forward.74

74 Federal Trade Commission, "Statement of Federal Trade Commission Concerning Google/DoubleClick
", FTC File No. 071-0170, December 20, 2007, https://www.ftc.gov/system/files/documents/public_statements/418081/071220googledc-commstmt.pdf.
73 United States et al. v. Google LLC at 39.
72 See United States et al. v. Google LLC, Case 1:23-cv-00108 (United States District Court for the Eastern District of Virginia, Alexandria Division, January 24, 2023), https:/ Wise, "Val Demings repeatedly presses Google's Pichai on staggering consolidation of consumer data", The Hill, July 29, 2020, https:/ Yiu, "Why Did Google Buy DoubleClick?", Towards Data Science, Medium, May 5, 2020, https:/ Louise Story and Miguel Helft, "Google Buys DoubleClick for $3.1 Billion," New York Times, April 14, 2007, https:/ Steve Lohr, "This Deal Helped Turn Google into an Ad Powerhouse. Is That a Problem?", New York Times, September 21, 2020, https:/ Kemp, "Concealed data practices and competition law: why privacy matters", European Competition Journal 16, no. 2-3 (2020): 628–672, https://doi.org/10.1080/17441056.2020.1839228
67 See Federal Trade Commission, "Remarks of Chair Lina M. Khan as Prepared for Delivery," IAPP Global Privacy Summit 2022, Washington, D.C., April 11, 2022, https://www.ftc.gov/system/files/ftc_gov/pdf/Remarks%20of%20Chair%20Lina%20M.%20Khan%20at%20IAPP%20Global%20Privacy%20Summit%202022.pdf; Jacques Crémer, Yves-Alexandre de Montjoye and Heike Schweitzer, "Competition policy for the digital era", European Commission, 2019, https://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf; Autorité de la concurrence and Bundeskartellamt, "Competition Law and Data", Bundeskartellamt, May 10, 2016, https://www.bundeskartellamt.de/SharedDocs/Publikation/DE/Berichte/Big%20Data%20Papier.pdf?_blob=publicationFile&v=2; The United States Department of Justice, "Assistant Attorney General Jonathan Kanter of the Antitrust Division Delivers Remarks at the Keystone Conference on Antitrust, Regulation & the Political Economy", The United States Department of Justice, March 2, 2023, https://www.justice.gov/opa/speech/assistant-attorney-general-jonathan-kanter-antitrust-division-delivers-remarks-keystone; Federal Trade Commission, "FTC Hearing #6: Privacy, Big Data, and Competition", FTC, November 6-8, 2018, https://www.ftc.gov/news-events/events/2018/11/ftc-hearing-6-privacy-big-data-competition

List of data mergers

Date   Companies               Amount    Completed?
2007   Google / DoubleClick    $3.1b     Yes
2013   Facebook / Onavo        $120m     Yes
2014   Facebook / WhatsApp     $19b      Yes
2016   Microsoft / LinkedIn    $26.2b    Yes
2018   Apple / Shazam          $400m     Yes
2019   Google / Fitbit         $2.1b     Yes
2020   Facebook / Giphy        $315m     No (CMA forced Meta to sell)
2022   Alphabet / BrightBytes  Unknown   Yes
2022   Amazon / One Medical    $3.9b     Yes
2022   Amazon / iRobot         $1.7b     No (EU set to launch an antitrust case)

More explicit instances of data mergers followed, including instances where companies used nonpublic data obtained through an acquisition in order to shape their decision-making. In another notable example, Facebook's acquisition of the virtual private network (VPN) Onavo enabled the company to gain competitive insights by monitoring users' network traffic: internal documents released by
UK regulators demonstrate that Facebook was closely tracking the market reach of its competitors by drawing on Onavo traffic data.75 Facebook then paid users between the ages of 13 and 35 up to $20 per month to sign up for a rebranded version of Onavo called "Facebook Research" to enable the company to gather even more granular data on their usage habits. Apple eventually blocked the app for breaking Apple's policies,76 but the data Facebook collected from Onavo played a significant role in Facebook's decision to acquire WhatsApp, in one of the most famous cases of a larger firm buying and neutralizing a
fast-growing competitor, thereby further extending its reach and collection of data.77

Competition Law Requires Understanding How Data Impacts Market Behavior

Given the clear anticompetitive effects of such mergers, the question that follows is why they were allowed to proceed. Traditional merger analysis offers some foothold for making such claims: the legal analysis involved in competition cases usually involves some form of harm identification to evaluate whether a merger deserves closer scrutiny or is likely to substantially lessen competition or create a monopoly. Given the nature of digital markets, data can take shape as an element of a company's market power; as Elettra Bietti argues, this may best be framed in terms of a firm's ability to produce and capture data through its control over infrastructure.78 This makes understanding companies' data collection practices relevant to competition law: understanding how firms use commercial surveillance to monitor users, competitors, and the market at large is crucial to account for how these firms build their competitive advantage.79

Merger enforcement is increasingly taking such an analysis into account, but the shift has been incremental. For example, while then relatively novel, testimony submitted to the FTC at the time of the proposed DoubleClick acquisition argued that the Commission should consider potential harms under two standard elements of a merger analysis: by examining how the privacy harms enabled by the acquisition could reduce consumer welfare and lead to a reduction in the quality of a good or service, both elements that would fit easily into the FTC's framework for examining the merger.80 The FTC opted not to take up these considerations, and allowed the acquisition to move forward with few restrictions; in a similar decision, the European Commission affirmed that it saw a clear separation between the applicability of EU antitrust laws and data protection rules for evaluating the merger.81

More recently, a 2019 market study by the UK's Competition and Markets Authority (CMA) on online platforms and digital advertising concluded that the lack of controls over the collection and use of personal data by big tech firms indicates that these firms do not face strong enough competitive constraints.82 And regulators are already integrating an analysis that accounts for the role of data into their competition cases. For example, in 2021, when the CMA rejected Meta's attempt to acquire Giphy, it acknowledged that the merger would enable Meta to increase its market power by changing the terms of access, such as by requiring that Giphy customers like TikTok, Twitter, and Snapchat give up more data from UK users in order to access Giphy GIFs.83 And in a concurring statement issued alongside the Federal Trade Commission's decision not to block a merger between Amazon and One Medical, Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya cautioned that Amazon now has access to potentially private health information, because US privacy rules on health-related data (HIPAA) exempt de-identified data writ large, which can nevertheless be used by the company to its own advantage. The statement underscores that the lack of a purpose limitation in HIPAA "cuts against longstanding American information policy".84

80 Peter Swire, "Protecting Consumers: Privacy Matters in Antitrust Analysis," Center for American Progress, October 19, 2007, https://www.americanprogress.org/article/protecting-consumers-privacy-matters-in-antitrust-analysis.
79 Lina M. Khan, "Sources of Tech Platform Power," Georgetown Law Technology Review 2, no. 2 (2018): 325–334, https://georgetownlawtechreview.org/sources-of-tech-platform-power/GLTR-07-2018; Howard A. Shelanski, "Information, Innovation, and Competition Policy for the Internet", University of Pennsylvania Law Review 161 (2013): 1663, https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=1025&context=penn_law_review, provides an earlier perspective that nevertheless concludes that competition policy must account for nonprice effects and consider the cost of underenforcement of competition laws in digital markets.
78 Elettra Bietti, "Data, Context and Competition Policy," ProMarket, March 31, 2023.
77 Ariel Ezrachi and Maurice E. Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Cambridge, MA: Harvard University Press, 2019), 43.
76 Josh Constine, "Apple Bans Facebook's Research App That Paid Users for Data," TechCrunch, January 30, 2019, https:/
75 Karissa Bell, "Highly Confidential Documents Reveal Facebook Used VPN App to Track Competitors," Mashable, December 5, 2018, https:/ UK Parliament, "Note by Damian Collins MP, Chair of the DCMS Committee," December 5, 2018, https://www.parliament.uk/globalassets/documents/commons-committees/culture-media-and-sport/Note-by-Chair-and-selected-documents-ordered-from-Six4Three.pdf.
Merger Guideline Updates Could Be a Boost for Effective Enforcement

One of the factors hindering more muscular enforcement of merger law to curb data practices is a lack of resources for enforcement agencies as the frequency of such mergers increases exponentially. This has led regulators in both the US and EU to seek more powerful tools to more effectively tackle the challenge of data mergers, specifically tools that would enable them to intervene at earlier stages, before harms to competition have occurred.

For example, the FTC and Justice Department are currently considering modernizations to the merger guidelines that may allow antitrust enforcers to intervene before harms to competition are effected (known as the incipiency standard),85 and that would further enable them to more effectively handle digital markets, zero-price ("free") products, multi-sided markets, and data aggregation.86 The Platform Competition and Opportunity Act, part of the package of antitrust bills currently before the US Congress, would declare acquisitions of direct and potential future competitors presumptively invalid, shifting the burden of proof to dominant platforms to demonstrate why a merger would not be anticompetitive.87 And the European Commission (EC) amended its merger referral guidelines to create a lower burden of proof for authorities to justify tests of merger deals for lack of competitiveness in digital markets, enabling them to intervene earlier by encouraging national competition authorities to refer mergers to the EC when the turnover of at least one of the companies concerned does not reflect its future competitive potential.88

86 Federal Trade Commission, "Federal Trade Commission and Justice Department Seek to Strengthen Enforcement Against Illegal Mergers," January 18, 2022, https://www.ftc.gov/news-events/news/press-releases/2022/01/federal-trade-commission-justice-department-seek-strengthen-enforcement-against-illegal-mergers.
85 Andrew I. Gavil, "Competitive Edge: The silver lining for antitrust enforcement in the Supreme Court's embrace of 'textualism'", Equitable Growth, July 28, 2021, https://equitablegrowth.org/competitive-edge-the-silver-lining-for-antitrust-enforcement-in-the-supreme-courts-embrace-of-textualism/; Jonathan Baker, Joseph Farrell, Andrew Gavil, Martin Gaynor, Michael Kades, Michael Katz, Gene Kimmelman, A. Melamed, Nancy Rose, Steven Salop, Fiona Scott Morton, and Carl Shapiro, "Joint Response to the House Judiciary Committee on the State of Antitrust Law and Implications for Protecting Competition in Digital Markets," Congressional and Other Testimony 18, 2020, https://digitalcommons.wcl.american.edu/pub_disc_cong/18/
84 Federal Trade Commission, "Statement of Commissioner Alvaro M. Bedoya Joined by Commissioner Rebecca Kelly Slaughter Regarding Amazon.com, Inc.'s Acquisition of 1Life Healthcare, Inc.", FTC, February 27, 2023, https://www.ftc.gov/system/files/ftc_gov/pdf/2210191amazononemedicalambstmt.pdf
83 Competition and Markets Authority, "CMA Orders Meta to Sell Giphy," October 18, 2022, https://www.gov.uk/government/news/cma-orders-meta-to-sell-giphy.
82 Competition and Markets Authority, "Online Platforms and Digital Advertising Market Study," July 3, 2019, https://www.gov.uk/cma-cases/online-platforms-and-digital-advertising-market-study.
81 Giuseppe Colangelo and Mariateresa Maggiolino, "Data Protection in Attention Markets: Protecting Privacy Through Competition?", Journal of European Competition Law & Practice, April 3, 2017, https:/

ADTECH

Bright-line rules restricting first-party data collection for advertising purposes will effectively tackle toxic competition.

"The shift from third- to first-party data collection is being propelled by both regulatory momentum and proactive Big Tech initiatives. Ostensibly privacy-enhancing, this shift only entrenches Big Tech's data advantage, with deleterious effects on both privacy and competition."

Critiques of business models that rely on surveillance and profiling of consumers have reached a crescendo in recent years.89 While some bolder advocacy proposals suggest a complete ban on behavioral targeting business models,90 or require divestment of ownership across multiple parts of the

90 Congresswoman Anna G. Eshoo, "Eshoo, Schakowsky, Booker Introduce Bill to Ban Surveillance Advertising," press release, January 18, 2022, https://eshoo.house.gov/media/press-releases/eshoo-schakowsky-booker-introduce-bill-ban-surveillance-advertising.
89 See Federal Trade Commission, "Remarks of Chair Lina M. Khan as Prepared for Delivery," IAPP Global Privacy Summit 2022, Washington,
280、n Surveillance Advertising,”press release,January 18,2022,https:/eshoo.house.gov/media/press-releases/eshoo-schakowsky-booker-introduce-bill-ban-surveillance-advertising.89See Federal Trade Commission,“Remarks of Chair Lina M.Khan as Prepared for Delivery,”IAPP Global Privacy Summit 2022,Washington,
281、D.C.,April11,2022,https:/www.ftc.gov/system/files/ftc_gov/pdf/Remarks%20of%20Chair%20Lina%20M.%20Khan%20at%20IAPP%20Global%20Privacy%20Summit%202022.pdf;Carissa Vliz,“Privacy Is Power:Why and How You Should Take Back Control of Your Data,”International Data Privacy Law 12,no.3(August 2022):255257,ht
282、tps:/doi.org/10.1093/idpl/ipac007;Shoshana Zubo?,The Age of Surveillance Capitalism:The Fight for a Human Future atthe New Frontier of Power(New York:Public A?airs,2020);Kylie Jarrett,Feminism,Labour and Digital Media:The Digital Housewife(London:Routledge,Taylor et Francis Group,2017);Lina Dencik a
283、nd Javier Sanchez-Monedero,“Data Justice,”Internet Policy Review 11,no.1(January 14,2022),https:/doi.org/10.14763/2022.1.1615;Safiya Umoja Noble,Algorithms of Oppression:How Search Engines Reinforce Racism(New York:NYUPress,2018););Haleluya Hadero,“As Amazon Grows,So Does Its Surveillance of Consume
284、rs,”Chicago Tribune,August 23,2022,https:/ Gilliard,“TheRise of Luxury Surveillance,”Atlantic,October 18,2022,https:/ Sign-Up Fast Trackto Surveillance,Consumer Groups Say,June 30,2022,https:/www.bbc.co.uk/news/technology-61980233;Natasha Lomas,“Metas SurveillanceBiz Model Targeted in UK Right to Ob
285、ject GDPR Lawsuit,”TechCrunch,November 21,2022,https:/ Katherine Tangalakis-Lippert,“Amazons Empire ofSurveillance:Through Recent Billion-Dollar Acquisitions of Health Care Services and Smart Home Devices,the Tech Giant Is Leveraging Its MonopolyPower to Track Every Aspect of Our Lives,”Business Ins
286、ider,August 28,2022,https:/ Commission,“Communication from the Commission:Commission Guidance on the Application of the Referral Mechanism Set Out inArticle 22 of the Merger Regulation to Certain Categories of Cases,O?cial Journal of the European Union 113(2021):16.A proposal by the UKgovernment in
discussions around the DMA would have gone even further to reverse the burden of proof in digital mergers for dominant firms, though it was ultimately not adopted; see Christopher T. Marsden and Ian Brown, "App stores, antitrust and their links to net neutrality: A review of the European policy and academic debate leading to the EU Digital Markets Act," Internet Policy Review 12, no. 1 (2023), https://doi.org/10.14763/2023.1.1676
87 Platform Competition and Opportunity Act of 2021, H.R. 3826, 117th Congress (2021–2022), https://www.congress.gov/bill/117th-congress/house-bill/3826/text.

digital advertising ecosystem,91 a large majority of regulatory regimes92 and legislative proposals93 have instead homed in on restricting the collection of third-party data through cookie tracking.

Big Tech firms have swiftly responded to these headwinds by supporting, and even leading, this transition away from third-party tracking toward first-party collection of users' data.94 In lieu of longstanding methods of third-party data collection, large firms are exploiting the fact that they directly control the vast majority of the environment in which data is collected: they are able to take advantage of the network effects associated with the scale at which they operate by collecting, analyzing, and using data within platforms they wholly own and control.95 This is a product of an environment in which these firms are so dominant that it is virtually impossible not to use their systems.96

Companies like Google might best be characterized as ecosystems:97 "the providers of the very infrastructure of the internet, so embedded in the architecture of the digital world that even their competitors have to rely on their services," as journalist Kashmir Hill put it.98 Maintaining an ecosystem enables dominant firms to derive insights from multiple points in a market, and to leverage them across their lines of business. Concerns that this harms competition have led competition enforcers to conclude that some intervention may be needed to level the playing field in arenas such as mobile apps99 and cloud computing.100

Given this state of affairs, Big Tech firms no longer need third-party tracking; in fact, shifting to first-party tracking only helps reinforce their dominant position, building a moat that can stave off any potential incipient competitors. Policy stances that focus primarily on third-party tracking are already out of date in light of this situation. Without more aggressive approaches to curbing first-party data collection and its anticompetitive effects, we could find ourselves in a world where concentration of power in the tech industry is greatly increased, rather than limited, by efforts at privacy accountability.

100 See Ofcom, "Cloud Services Market Study", Ofcom, October 6, 2022, https://www.ofcom.org.uk/_data/assets/pdf_file/0025/244825/call-for-inputs-cloud-market-study.pdf; Authority for Consumers & Markets, "Market Study into Cloud Services," September 5, 2022, https://www.acm.nl/en/publications/market-study-cloud-services; Autorité de la concurrence, "The Autorité de la concurrence Starts Proceedings Ex Officio to Analyse Competition Conditions in the Cloud Computing Sector," January 27, 2022, https://www.autoritedelaconcurrence.fr/en/communiques-de-presse/autorite-de-la-concurrence-starts-proceedings-ex-officio-analyse-competition; Japan Fair Trade Commission, "Report Regarding Cloud Services," June 28, 2022, https://www.jftc.go.jp/en/pressreleases/yearly-2022/June/220628.html; and Fair Trade Commission, "KFTC Announces Results of Cloud Service Market Study," December 28, 2022, https://www.ftc.go.kr/solution/skin/doc.html?fn=06ceef699d6065867e4c69b26c1b3be720409a638b2cd9e3c5b91c70324ab5b4&rs=/fileupload/data/result/BBSMSTR_000000002402.
99 See National Telecommunications and Information Administration, United States Department of Commerce, "Competition in the Mobile App Ecosystem," February 1, 2023, https://ntia.gov/report/2023/competition-mobile-app-ecosystem; and Competition and Markets Authority, "Mobile Ecosystems Market Study," June 15, 2021, https://www.gov.uk/cma-cases/mobile-ecosystems-market-study.
98 Kashmir Hill, "I Tried to Live without the Tech Giants. It Was Impossible," New York Times, July 31, 2020, https:/
97 Whether or not the largest cloud
providers are characterized as ecosystems is the focus of an ongoing study by the UK's Ofcom. See Ofcom, "Cloud Services Market Study," October 6, 2022, https://www.ofcom.org.uk/_data/assets/pdf_file/0025/244825/call-for-inputs-cloud-market-study.pdf.
96 See Kashmir Hill, "I Cut the Big Five Tech Giants from My Life. It Was Hell," Gizmodo, February 7, 2019, https:/ Nordine Abidi and Ixart Miquel-Flores, "Too Tech to Fail?", Faculty of Law Blogs, University of Oxford, July 13, 2022, https://blogs.law.ox.ac.uk/business-law-blog/blog/2022/07/too-tech-fail.
95 Michael Veale, "Adtech's New Clothes Might Redefine Privacy More than They Reform Profiling," Netzpolitik.org, February 25, 2022, https://netzpolitik.org/2022/future-of-online-advertising-adtechs-new-clothes-might-redefine-privacy-more-than-they-reform-profiling-cookies-meta-mozilla-apple-google.
94 See Brian X. Chen and Daisuke Wakabayashi, "You're Still Being Tracked on the Internet, Just in a Different Way," New York Times, April 6, 2022, https:/ Perry Keller, "After Third Party Tracking: Regulating the Harms of Behavioural Advertising Through Consumer Data Protection," May 4, 2022, https:/
93 American Data Privacy and Protection Act, H.R. 8152, 117th Congress (2021–2022), https://www.congress.gov/bill/117th-congress/house-bill/8152/text; and Banning Surveillance Advertising Act of 2022, H.R. 6416, 117th Congress (2021–2022), https://www.congress.gov/bill/117th-congress/house-bill/6416.
92 State of California Department of Justice, Rob Bonta, Attorney General, "California Consumer Privacy Act (CCPA)," February 15, 2023, https://oag.ca.gov/privacy/ccpa; General Data Protection Regulation, https://gdpr-info.eu/.
91 Senator Mike Lee, "Lee Introduces Digital Advertising Act," press release, May 19, 2022.

The Privacy Sandbox Effect

Google's Privacy Sandbox offers a useful case in point. The company has historically relied heavily on the use of third-party cookies for targeted advertising, which has been central to how it makes money.101 Following the passage of the General Data Protection Regulation (GDPR) and other data protection regulations indicating that behavioral advertising would be under the regulatory spotlight, Google began to develop a strategy that would enable the company to reduce its reliance on third-party cookies in the face of increasing regulatory constraints, while nevertheless maintaining its control of the market.102 This culminated in Google's announcement of its Privacy Sandbox initiative:103
309、a self-regulatory moveaimed at heading o?more stringent forms of regulation that could directly target its capacity to drawinferences about and profile consumers.The announcement soon led to complaints by publishers that the proposed moves would serve toundermine their ability to generate revenue th
310、rough advertising by limiting their ability to target users,and that this would ultimately entrench Googles market power.104This prompted the UK Competitionand Markets Authority to open an investigation into these concerns.The CMA informed Google that itsproposals would likely amount to an“abuse of
311、dominance position.”Under UK law,this describes asituation when one or a group of enterprises uses its dominant position in a market to either directlyexploit consumers or exclude other competitors from the market.105Following a traditional competition analysis,the likely outcome would be to require
more widespread sharing of the data Google collects with publishers: it would treat data as an essential resource to the market, and the cure would be to ensure that it was more widely available to other market players. But such a remedy would carry widespread harms to consumer privacy: harms that would necessarily exploit consumer interests and lead to a race to the bottom by expanding, rather than limiting, commercial surveillance practices. An integrated analysis would thus institute curbs on these kinds of data collection practices in the first place, ensuring that nobody, regardless of their market position, gets to benefit from the exploitation of consumers' data, whether collected via third-party or first-party tracking.[106]

To avert enforcement action by the CMA, Google offered a set of commitments that it hoped would bring the proposed framework into compliance with UK competition law. Arguably the strongest of these commitments was the implementation of "data silos" to ensure that Google has the same level of access to data that others do, a commitment very much in line with a traditional competition analysis. But even here, it will be challenging for enforcement agencies to ensure that Google is following through on this promise: the company has set up an internal monitoring committee that will ostensibly audit the system for compliance, but this puts the burden on the regulator rather than on Google to avert any harms, and it remains unclear how effective this regime will be or what will happen when, or if, it fails.

101. Sarah Myers West, "Data Capitalism: Redefining the Logics of Surveillance and Privacy," Business & Society 58, no. 1 (January 2019): 20–41, https://doi.org/10.1177/0007650317718185.
102. David Eliot and David Murakami Wood, "Culling the FLoC: Market Forces, Regulatory Regimes and Google's (Mis)steps on the Path Away from Targeted Advertising," Information Polity 27, no. 2 (2022): 259–274, https://dl.acm.org/doi/abs/10.3233/IP-211535.
103. Google, "The Privacy Sandbox."
104. Competition and Markets Authority, "CMA to Investigate Google's Privacy Sandbox Browser Changes," press release, January 8, 2021, https://www.gov.uk/government/news/cma-to-investigate-google-s-privacy-sandbox-browser-changes.
105. Office of Fair Trading, "Abuse of a Dominant Position: Understanding Competition Law," December 2004, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/284422/oft402.pdf.
106. See, for example, Maurice E. Stucke, "The Relationship between Privacy and Antitrust," Notre Dame Law Review Reflection 97, no. 5 (2022): 400–417, https://ndlawreview.org/wp-content/uploads/2022/07/Stucke_97-Notre-Dame-L.-Rev.-Reflection-400-C.pdf. Stucke's analysis is consistent with the FTC's recently updated policy statement on its Section 5 Unfair Methods of Competition authorities. See FTC, "Policy Statement Regarding the Scope of Unfair Methods of Competition Under Section 5 of the Federal Trade Commission Act, Commission File No. P221202," November 10, 2022, https://www.ftc.gov/system/files/ftc_gov/pdf/P221202Section5PolicyStatement.pdf.

Other commitments included offering consultation with third parties about Privacy Sandbox, enabling the CMA to test the effects of its implementation of Privacy Sandbox proposals via a monitoring trustee, and "incorporating user controls" into the system. In exchange for accepting the voluntary commitments, the CMA agreed not to continue its investigation.[107] But in the absence of a more structural separation or bright-line rules, we're essentially left having either to take Google's word for it or to develop expensive and untested inspection tools to evaluate whether they do what
they say they will do.

"Big Tech firms get away with inflicting privacy harms on us because of the absence of competition in tech, making it especially important for antitrust analysis to be integrated broadly across tech policy domains. Breaking down the silos between tech policy issues will enable a clearer picture of the larger whole."

Looking to the realm of cybersecurity may be instructive: security professionals have long expressed concerns about tech monopolies because they create a single target and point of failure that can have significant downstream consequences if breached. A report produced in
2003 by the Computer and Communications Industry Association, a group of leading security experts, warned that Microsoft's employment of software designs that lock users into their products led to a dangerous environment in which the world's dominant operating system was riddled with vulnerabilities, exposing its end users to viruses.[108] As the researchers noted, "Microsoft must not be allowed to impose new restrictions on its customers, imposed in the way only a monopoly can do, and then claim that such exercise of monopoly power is somehow a solution to the security problems inherent in its products. The prevalence of security flaws in Microsoft's products is an effect of monopoly power; it must not be allowed to become a reinforcer."[109] The report is particularly notable not only given our present-day climate, but also for its depiction of how dominant tech firms are incentivized to present solutions to problems caused by monopoly power that reinforce their monopoly power. The researchers cautioned that regulators must treat security policy as intertwined with competition policy, not separate from it.

Today, many other policy domains in focus for the tech accountability community are similarly intertwined with
competition. In addition to security, policy domains including privacy, content moderation, and algorithmic discrimination at once shape and are shaped by competition dynamics within digital markets. Yet the effects of concentration in the tech industry are rarely integrated into policy analysis across these domains, and antitrust remains largely separated from other tech policy arenas, both in terms of the regulatory regimes and the expertise needed to engage with them. In order to seek accountability more effectively within the tech industry, these silos must necessarily be broken down to gain a clear picture of the larger whole.

107. Competition and Markets Authority, "Case 50972 Privacy Sandbox Google Commitments Offer," February 4, 2022, https://assets.publishing.service.gov.uk/media/62052c6a8fa8f510a204374a/100222_Appendix_1A_Google_s_final_commitments.pdf.
108. See Robert Lemos, "Report: Microsoft Dominance Poses Security Risk," September 24, 2003; and Danny Penman, "Microsoft Monoculture Allows Virus Spread," September 25, 2003.
109. "Cyberinsecurity: The Cost of Monopoly: How the Dominance of Microsoft's Products Poses a Risk to Security," AUUGN 24, no. 4 (December 2003): 49.

Regulators are already moving swiftly to ensure that issues relating to privacy and competition work in concert, or at least stay in conversation