
BIS Working Papers No 1194
Intelligent financial system: how AI is transforming finance
by Iñaki Aldasoro, Leonardo Gambacorta, Anton Korinek, Vatsala Shreeti and Merlin Stein
Monetary and Economic Department
June 2024
JEL classification: E31, J24, O33, O40
Keywords: Artificial Intelligence, Generative AI, AI Agents, Financial System, Financial Institutions

BIS Working Papers are written by members of the Monetary and Economic Department of the Bank for International Settlements, and from time to time by other economists, and are published by the Bank. The papers are on subjects of topical interest and are technical in character. The views expressed in them are those of their authors and not necessarily the views of the BIS.

This publication is available on the BIS website (www.bis.org).

© Bank for International Settlements 2024. All rights reserved. Brief excerpts may be reproduced or translated provided the source is stated.

ISSN 1020-0959 (print)
ISSN 1682-7678 (online)

Intelligent financial system: how AI is transforming finance

I Aldasoro (BIS), L Gambacorta (BIS & CEPR), A Korinek (Univ. of Virginia & GovAI), V Shreeti (BIS), M Stein (Univ. of Oxford)

Abstract

At the core of the financial system is the processing and aggregation of vast amounts of information into price signals that coordinate participants in the economy. Throughout history, advances in information processing, from simple book-keeping to artificial intelligence (AI), have transformed the financial sector. We use this framing to analyse how generative AI (GenAI) and emerging AI agents, as well as, more speculatively, artificial general intelligence will impact finance. We focus on four functions of the financial system: financial intermediation, insurance, asset management, and payments. We also assess the implications of advances in AI for financial stability and prudential policy. Moreover, we investigate potential spillover effects of AI on the real economy, examining both an optimistic and a disruptive AI scenario. To address the transformative impact of advances in AI on the financial system, we propose a framework for upgrading financial regulation based on well-established general principles for AI governance.

JEL codes: E31, J24, O33, O40. Keywords: artificial intelligence, generative AI, financial system, financial institutions.

We thank seminar participants at ECB and GovAI seminars and Douglas Araujo, Fernando Perez-Cruz, Fabiana Sabatini, David Stankiewicz and Andrew Sutton for helpful comments and suggestions, and Ellen Yang for research assistance. Contact: Aldasoro (inaki.aldasoro@bis.org), Gambacorta (leonardo.gambacorta@bis.org), Korinek, Shreeti (vatsala.shreeti@bis.org) and Stein. The views expressed here are those of the authors only and not necessarily those of the Bank for International Settlements.

1 Introduction

Like the brain of a living organism, the financial system processes vast amounts of dispersed information and aggregates it into price signals that facilitate the coordination of all the players of the economy and guide the allocation of scarce resources. It not only enables the efficient flow of capital but also contributes to the economic system's overall health by managing risk, maintaining liquidity and supporting stability. Financial markets and intermediaries, when they function well, are a fundamental source of progress and welfare. Conversely, the role of financial policy and regulation is to correct instances of "brain malfunction" and to instead harness the intelligence of the financial system to enhance social welfare.

Processing all the necessary information and coordinating the actions of numerous participants in the economy is a notably complex problem. As the brain of the economy, financial markets and intermediaries have played this role for a long time. At any given point in time, their capacity to do so was shaped in large part by the information processing technology available. For example, over the years, technological advancements like telecommunications and the internet have continuously enhanced the capacity of financial markets to solve economic problems: a brain that can process more information more efficiently is better suited to solving increasingly complex tasks. It is then no surprise that financial markets have been a magnet for both cutting-edge information processing technology and for sophisticated human talent. Most recently, the information processing capabilities of the financial system have been enhanced by fast-paced advancements in artificial intelligence (AI).

In this paper, we describe the evolution of the financial sector through the prism of advancements in information processing, with a special focus on AI. We evaluate the opportunities and challenges created for the financial sector from different generations of AI, including machine learning (ML), generative AI (GenAI), and the emergence of AI agents. We also provide a discussion of the effects of AI on financial stability and the risk of real sector disruptions caused by AI. In light of these insights and increasing AI adoption, we discuss the implications for the regulation of the financial sector.

Over the course of human history, the trajectory of methods for information processing, of which AI is part, has been closely linked with developments in commerce, trade and finance. In Section 2 we describe this trajectory in detail. History is replete with examples of the financial system either sparking a change in the arc of technological development, or itself being an early adopter of technology. From the abacus of the ancient Sumerians to double-entry book-keeping, the evolution of information processing technology and finance has often gone hand in hand. Over the last century, the most significant advance in the realm of information processing was the invention of computers in the 1950s. This allowed for the automation of many analytic and accounting functions that were very useful for the functioning of the financial system. As computational power increased over time, more sophisticated technologies emerged that allowed for the processing of non-traditional data, like machine learning models and, most recently, GenAI.

Each generation of information processing technology has had a large footprint on the financial system, and has opened new doors for efficiency and innovation. We discuss some of these in Section 3. In general, AI has enhanced the ability of the financial system to process information, analyse data, identify patterns and make predictions. Early rule-based systems were already deployed for automated trading and fraud detection. As technology advanced, the use cases of AI in the financial sector became more complex. Machine learning and deep learning models are used extensively in asset pricing, credit scoring, and risk analysis. While GenAI is nascent, the financial system is already adopting it to enhance back-end processing, customer support and regulatory compliance.

At the same time, as technology has grown more complex, so have the risks and challenges for the financial system. Challenges include lack of transparency of complex machine learning models, dependence on large volumes of data, threats to consumer privacy, cybersecurity and algorithmic bias. GenAI has exacerbated some of these challenges and increased the dependence on data and computing power. There are additional concerns about market concentration and competition as GenAI models are produced by a few dominant companies.

There are other, potentially more serious risks to financial stability associated with the use of AI in the financial system. Even early rule-based computer trading systems were associated with cascade effects and herding, for example in the 1987 US stock market crash. With machine learning models, the risks of uniformity, model herding and network connectedness have only compounded. Additionally, from the point of view of regulators, the use of advanced AI techniques poses a further challenge: the proliferation of complex interactions and the inherent lack of explainability makes it difficult to spot market manipulation or financial stability risks in time. With GenAI, co-pilots and robo-advising can mean that decisions become more homogeneous, potentially adding to systemic risk.

In Section 4 of the paper, we highlight another important aspect of AI use for the financial system: the risk of financial spillovers from disruptions in the real economy. We portray two scenarios, one in which widespread use of AI leads to productivity gains with largely benign effects, and a more disruptive scenario with significant labour market displacement. We describe the potential impact on the economy and discuss the policy implications of both these scenarios.

In light of these scenarios, in Section 5 we discuss how AI should be regulated going forward. First, we provide general principles for AI regulation based on social well-being, transparency, accountability, fairness, privacy protection, safety, extent of human oversight and robustness of AI systems. We also provide a comparative discussion of different regulatory models based on experiences in the US, EU and China, and highlight the urgent need for international coordination on how to regulate the integration of AI into the global financial system. Finally, in Section 6 we conclude and discuss some avenues for further research.

32、onof information processing technology.To understand the implications of AI for fi-nance it is therefore helpful to examine the historical development of computationalmethods in tandem with concurrent developments in money and finance.Advancesin computational hardware and software have enabled the e

33、volution of advancedanalytics,machine learning,and generative AI.At each technological turn in thepast,the financial system has either been a catalyser of change or an early adopterof technology.The origins of computation can be traced back to ancient Sumerians and theabacus,the first known computin

34、g device.This was one of the earliest instancesof numerical systems being crafted to address financial needs.Laws have also beendriven by the changing needs of commerce and finance:the Code of Hammurabi,one of the earliest legal edicts,laid out laws to govern financial transactions asearly as the 18

35、thcentury BCE.Similarly,medieval Italian city-states pioneereddouble-entry book-keeping,a seminal development in accounting that opened the3door to an unprecedented expansion of commerce and finance.In fact,double-entry book-keeping underpins regulation,taxation,governance,contract law,andfinancial

36、regulation to this day.ComputationOver time,analytic tools saw tremendous advances at an increas-ing pace.One of the most significant of these advances occurred in the last century:the invention of computers.Unsurprisingly,the financial sector was among the firstto adopt and use computers.For exampl

37、e,the IBM 650,introduced in 1954,be-came popular partly because of the efficiency improvements it brought in finance.Inthe early days of modern computing,capabilities were limited to basic arithmetic,logical and symbolic operations(for example,following“if-then”rules)to solveproblems.With more compu

38、ting power,analytic capabilities evolved and allowedAI to emerge from basic computer systems.Artificial IntelligenceAI broadly refers to computer systems that perform tasksthat typically require human intelligence(Russell and Norvig,2010).1Alan Turingand John von Neumann laid the theoretical groundw

39、ork,delineating principles thatwould become the cornerstone for subsequent computational and AI advancements(Turing(1950),von Neumann et al.(1945),von Neumann(1966).For much of the20th century,AI was dominated by GOFAI and expert systems that were developedin the wake of these seminal contributions.

40、2GOFAI emerged in the late 1950s andcontinued to be the dominant paradigm through the 1980s.During this period,AI researchers focused on developing rule-based systems to emulate human intelli-gence,based on logical rules and symbolic representations.While highly useful forbasic financial functions(e

41、.g.risk management,basic algorithmic trading rules andcredit scoring,fraud detection),they were far from human-level abilities in patternrecognition,handling uncertainty and complex reasoning.Hardware advances en-abled small desktop computers,such as the personal computer in the 1980s and1990s.The a

42、bility to store data and perform basic analytics using spreadsheetsand other computer programs led to wide adoption and efficiency improvements infinance(Ceruzzi,2003).1The term artificial intelligence was first coined by the mathematician John McCarthy in anow mythical workshop at Dartmouth College

43、 in 1956.2GOFAI stands for“Good Old Fashioned AI”,a term coined by philosopher John Haugelandto refer to classic symbolic AI,based on the idea of encoding human knowledge and reasoningprocesses into a set of rules and symbols(Haugeland,1985).4Machine LearningThe next wave of progress came with machi

Machine Learning. The next wave of progress came with machine learning (ML), a sub-field of AI (Figure 1). ML algorithms can autonomously learn and perform tasks, for example classification and prediction, without explicitly spelling out the underlying rules. Like earlier advances in information processing, ML was quick to be adopted in finance, even though in the early days its usefulness was limited by computing power. Early examples of ML relied on large quantities of structured and labelled data.[3]

Figure 1: Decoding AI. Source: Authors' illustration.

The most advanced ML systems are based on deep neural networks, which are algorithms that operate in a manner inspired by the human brain.[4] Deep neural networks are universal function approximators that can learn systematic relationships in any set of training data, including, increasingly, in complex, unstructured datasets (Hornik et al. (1989), Goodfellow et al. (2016), Broby (2022), Huang et al. (2020)). These developments enabled financial institutions to analyse terabytes of signals, including news streams and social media sentiment. At an aggregate level, this led to increasingly fast-paced and dynamic markets, with optimised pricing and valuation. However, as these models dynamically adapt to new data, often without human intervention, they are somewhat opaque in their decision-making processes (Gensler and Bailey (2020), Cao (2020)).

Footnote 3: Structured data refers to organised, quantitative information that is stored in relational databases and is easily searchable. It typically includes well-organised text and numeric information. Unstructured data is information that is not organised based on pre-defined models. It can include information in text and numeric formats but also audio and video. Some examples of unstructured data include text files like emails and presentations, social media content, sensor data, satellite images, digital audio, video, etc.

Footnote 4: In such systems, the input layer of artificial neurons receives information from the environment, and the output layer communicates the response; between these layers are "hidden" layers (hence the "deep" in deep learning) where most of the information processing takes place through weighted connections to previous layers.
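
The following purely illustrative sketch, which is not drawn from the analysis in this paper and assumes only the NumPy library, makes the layered structure described in footnote 4 concrete: a tiny network with two hidden layers performs a single forward pass, with information flowing from inputs to outputs through weighted connections. The layer sizes, weights and activation function are arbitrary placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        # a simple non-linearity applied between layers
        return np.maximum(0.0, x)

    # a toy "deep" network: 8 inputs -> 16 -> 16 -> 1 output
    layer_sizes = [8, 16, 16, 1]
    weights = [rng.normal(scale=0.1, size=(m, n))
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        # information flows from the input layer through the "hidden"
        # layers to the output layer via weighted connections
        h = x
        for w, b in zip(weights[:-1], biases[:-1]):
            h = relu(h @ w + b)
        return h @ weights[-1] + biases[-1]

    x = rng.normal(size=8)   # e.g. eight numeric input signals
    print(forward(x))        # the (untrained) network's output

Training would adjust the weights to fit observed input-output pairs; with enough hidden units such a network can approximate a very wide class of functions, which is the "universal approximation" property referred to above.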

Figure 2: Training compute times of major machine learning systems. Notes: The fitted line indicates the time needed for the doubling of compute requirements. Source: Epoch (2022).

Generative AI. For the past 15 years, i.e. since the beginning of the deep-learning era, the computing power used for training the most cutting-edge AI models has doubled every six months, much faster than Moore's law would suggest (Figure 2). These advances have given rise to rapid progress in artificial intelligence and are behind the advent of the recent generation of GenAI systems, which are capable of generating data.
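
As a rough back-of-the-envelope comparison (an illustrative calculation, not a figure reported in the paper): doubling every six months over 15 years compounds to 2^30, roughly a billion-fold increase in training compute, whereas doubling every two years, a common reading of Moore's law, compounds to about 2^7.5, or a factor of roughly 180 over the same period, a gap of nearly seven orders of magnitude.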

The most important type of GenAI are Large Language Models (LLMs), best exemplified by systems like ChatGPT, that specialise in processing and generating human language.

LLMs are trained on enormous amounts of data to predict the continuation of text based on its beginning, for example, to predict the next word in a sentence. During their training process, they learn how different words and concepts relate to each other, allowing them to statistically associate concepts and developing what many interpret as a rudimentary form of understanding. Drawing from this simple but powerful principle, LLM-based chatbots can generate text based on a starting point or a "prompt". A leading explanation for the capacity of modern LLMs to produce reasonable content across a wide range of domains is that the training process leads such models to generate an internal representation of the world (or "world model") based on which they can respond to a wide variety of prompts (Li et al., 2023).
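
A stylised way to see the "predict the continuation of text" principle is a bigram model that counts which word tends to follow which in a toy corpus and proposes the most frequent continuation. This is an illustrative sketch, not how the paper describes LLMs: real LLMs learn distributed representations over subword tokens with deep networks rather than word counts.

    from collections import Counter, defaultdict

    corpus = ("the bank raised rates . the bank raised capital . "
              "the fund cut rates .").split()

    # count how often each word follows each other word
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        # return the statistically most frequent continuation seen in training
        return follows[word].most_common(1)[0][0]

    print(predict_next("bank"))   # -> 'raised'
    print(predict_next("the"))    # -> 'bank', the most frequent continuation

The same statistical-association idea, scaled up by many orders of magnitude in data, parameters and context length, is what allows LLM-based chatbots to continue an arbitrary prompt.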

The use cases of LLMs have blossomed across many sectors. LLMs can generate, analyse and categorise text, edit and summarise, code, translate, provide customer service, and generate synthetic data. In the financial sector, they can be used for robo-advising, fraud detection, back-end processing, enhancing end-customer experience, and internal software and code development and harmonisation. Regulators around the world are also exploring applications of GenAI and LLMs in the areas of regulatory and supervisory technologies (Cao, 2022).[5]

The different iterations of AI described above can be seen as a continuous process of increasing both the speed of information processing in finance and the ability to include more types of information in decision-making. At present, AI has an advantage over humans in areas with fast feedback cycles ("reward loops") to calibrate its decision-making, high degrees of digitisation of relevant data and large quantities of data. For these reasons, autonomous computer systems are currently deployed mainly in areas that fit these characteristics, for example high-frequency trading. With increasing capabilities, autonomous computer systems might over time also be at an advantage in medium-term and long-term markets (e.g. short-term derivatives and bonds, respectively), as well as in other applications.

AI Agents. The next frontier on which leading AI labs are currently working is AI agents, i.e. AI systems that build on advanced LLMs such as GPT-4 or Claude 3 and are endowed with planning capabilities, long-term memory and, typically, access to external tools such as the ability to execute computer code, use the internet, or perform market trades.[6] Autonomous trading agents have been deployed in specific parts of financial markets for a long time, for example in high-frequency trading. What distinguishes the emerging generation of AI agents is that they have the intelligence and planning capabilities of cutting-edge LLMs. They can, for example, autonomously analyse data, write code to create other agents, trial-run it, update it as they see fit, and so on.[7] AI agents thus have the potential to revolutionise many different functions of financial institutions, just as autonomous trading agents have already transformed trading in financial markets.

Footnote 5: To be sure, current generative AI systems also have clear limitations. For example, they have been shown to fail at elementary reasoning tasks (Berglund et al., 2023; Perez-Cruz and Shin, 2024).

Footnote 6: The term "AI agents" reflects AI systems that increasingly take on agency of their own. Chan et al. (2024) define agency as "the degree to which an AI system acts directly in the world to achieve long-horizon goals, with little human intervention or specification of how to do so. An (AI) agent is a system with a relatively high degree of agency; we consider systems that mainly predict without acting in the world, such as image classifiers, to have relatively low degrees of agency".

Footnote 7: Prototypes of such AI agents currently exist mainly in the realm of coding, where Devin, an autonomous coding agent developed by startup Cognition Labs, or SWE-agent, developed by researchers at Princeton (Yang et al., 2024), can autonomously take on entire software projects.
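
A minimal sketch of the agent pattern just described may help fix ideas. It is an illustration only: the llm() stub, the toy tool and the loop structure are hypothetical placeholders and do not correspond to any specific vendor API or to a system discussed in the paper. The model proposes a next step, the step is executed with an external tool, the result is written to memory, and the loop repeats until the model declares the goal reached.

    # Hypothetical skeleton of an LLM-based agent loop (illustration only).

    def llm(prompt: str) -> str:
        # placeholder for a call to a large language model;
        # here it terminates immediately so the sketch runs end to end
        return "FINISH: no action taken (stub model)"

    def run_python(code: str) -> str:
        # placeholder "tool"; a real agent might execute code, query data
        # feeds or place orders, each behind appropriate controls
        return f"executed: {code!r}"

    TOOLS = {"run_python": run_python}

    def agent(goal: str, max_steps: int = 5) -> list[str]:
        memory: list[str] = []          # long-term memory of past steps
        for _ in range(max_steps):
            prompt = f"Goal: {goal}\nHistory: {memory}\nNext step?"
            decision = llm(prompt)      # planning happens inside the model
            if decision.startswith("FINISH"):
                memory.append(decision)
                break
            tool_name, _, argument = decision.partition(":")
            result = TOOLS.get(tool_name.strip(), run_python)(argument.strip())
            memory.append(f"{decision} -> {result}")
        return memory

    print(agent("summarise yesterday's trading volumes"))

The planning, memory and tool access highlighted above correspond to the prompt construction, the memory list and the TOOLS mapping in this skeleton; everything consequential happens inside the model call and the tools, which is precisely where the oversight questions discussed later arise.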

Artificial General Intelligence. For several of the leading AI labs, the ultimate goal is the development of Artificial General Intelligence (AGI), which is defined as AI systems that can essentially perform all cognitive tasks that humans can perform (Morris et al., 2024). Unlike current narrow AI systems, which are designed to perform specific tasks with a pre-defined range of abilities, AGI would be capable of reasoning, problem-solving and abstract thinking across a wide variety of domains, and of transferring knowledge and skills across different fields, just like humans. Relatedly, some AI researchers and AI lab leaders speak of Transformative AI (TAI), which is defined as AI that is sufficiently capable to radically transform the way our economy and our society operate, for example because such systems can autonomously push forward scientific progress, including AI progress, at a pace that is much faster than what humans are used to, or because they significantly speed up economic growth (Suleyman and Bhaskar, 2023). There is an active debate on whether and how fast concepts such as AGI or TAI may be reached, with strong views on both sides of the debate. As economists, we view it as prudent to give some credence to a wide range of potential scenarios for the future (Korinek, 2023b).

3 AI transforming finance

3.1 Opportunities and challenges of AI for finance

The integration of the rapidly evolving capabilities of AI is transforming the financial system. But as we have seen in Section 2, AI is just the latest information processing technology to do so. Table 1 summarises the impact of the technologies we described earlier, from traditional analytics to AI agents, on four key financial functions: financial intermediation, insurance, asset management and payments.

Traditional Analytics. Early rule-based systems were adopted in financial intermediation and insurance markets to automate risk analysis (Quinn (2023)). In asset management, they allowed for automated trading and the emergence of new products like index funds. In payments, they automated a significant part of the infrastructure and were also useful for fraud detection. While these models were generally easy to interpret, they were also rigid and required significant human supervision. They typically had a small number of parameters, a key limitation in their effectiveness.

Moreover, the automation of information processing requires large volumes of data, which comes with its own challenges. For example, in the financial sector this is often sensitive, personal data. Ensuring that data are collected, stored and processed in compliance with privacy laws (such as GDPR) is a complex challenge. Other challenges relate to cybersecurity and the risk of adversarial attacks. Sharing data with third-party vendors (e.g. AI service providers) can expose sensitive information. Moreover, IT systems can be targets of attacks. This requires the implementation of robust encryption and authentication protocols whenever data and algorithms are shared. All of these challenges carry over to the context of modern AI.

Machine Learning. Advances in ML unlocked a new range of applications of AI in finance. Whereas earlier generations of computational advances relied on processing numbers, ML can process a wide range of data formats. Kelly et al. (2023) identify three factors intrinsic to finance that make the use of ML particularly relevant. First, expected prices or predictions of prices are central to the analysis of financial markets. Second, the set of relevant information for prediction analysis is typically very large and can be challenging to incorporate in traditional models. Third, the analysis of financial markets can critically depend on underlying assumptions about functional forms, over which agreement is often lacking. Machine learning models can be powerful in this context, as they can incorporate vast amounts of data (and thus, information sets) and are based on flexible, non-parametric functional forms. Owing to these benefits, ML models have been widely applied in finance and economics (Athey (2018)).
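
The point about flexible functional forms can be illustrated with a toy example of our own (not an exercise from the paper; it assumes scikit-learn is available): a linear model and a non-parametric tree ensemble are fitted to the same simulated, deliberately non-linear "return" data and their fit is compared.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    # simulate a signal whose effect on "returns" is non-linear
    x = rng.normal(size=(2000, 1))
    y = np.sin(2 * x[:, 0]) + 0.1 * rng.normal(size=2000)

    linear = LinearRegression().fit(x, y)
    forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(x, y)

    # in-sample R^2: the linear model misses the curvature that the
    # flexible, non-parametric model picks up without being told the form
    print("linear R^2:", round(linear.score(x, y), 2))
    print("forest R^2:", round(forest.score(x, y), 2))

In-sample fit of course overstates the benefit; in practice such models are judged out of sample, and their flexibility is exactly what makes their decisions harder to interpret, as discussed below.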

Machine learning has a range of use cases across the four economic functions we consider. In financial intermediation, the use of ML models can reduce credit underwriting costs and expand access to credit for those previously excluded, although few financial institutions have taken advantage of the full range of these opportunities. ML models can also streamline client onboarding and claims processing in several industries, particularly in insurance. Across industries, but especially in insurance and payments, ML models are used to detect fraud and identify security vulnerabilities.

ML is also heavily used in asset pricing, in particular to predict returns, to evaluate risk-return trade-offs, and for optimal portfolio allocation. Thanks to their ability to analyse large volumes of data relatively quickly, ML models also facilitate algorithmic trading (OECD, 2021). In payments, ML models can provide new tools for better liquidity management. Finally, not only the private financial sector benefits from ML: these models are also increasingly used by regulators to detect market manipulation and money laundering.

The opportunities created by ML models also come with risks and challenges. The flip side of flexible, highly non-linear machine learning models is that they often function like black boxes. The decision process of these models, for example whether or not to grant credit, can be opaque and hard to decipher.

Generative AI. Mostly in the form of LLMs, GenAI is part of the new frontier and comes with its own set of opportunities. Two key aspects of GenAI are particularly useful for the financial sector. First, whereas earlier computational advances have made the processing of traditional financial data more efficient, GenAI allows for increased legibility of new types of (often unstructured) data, which can enhance risk analysis, credit scoring, prediction and asset management. Second, GenAI provides machines the ability to converse like humans, which can improve back-end processing, customer support, robo-advising and regulatory compliance. Moreover, it also allows for the automation of tasks that were until recently considered uniquely human, for example advising customers and persuading them to buy financial products and services (Matz et al., 2024).

Table 1: Opportunities, challenges and financial stability implications of computational advances
(columns: financial intermediation; insurance; asset management; payments)

Traditional analytics
- Opportunities: rule-based risk analysis, greater competition (intermediation and insurance); risk management, portfolio optimisation, automated and high-frequency trading (asset management); fraud detection (payments)
- Challenges: rigid, requires human supervision, small number of parameters, threats to consumer privacy, emergence of data silos; zero-sum arms races, flash crashes; technical vulnerabilities
- Financial stability: herding, cascade effects and flash crashes, such as the US stock market crash of 1987

Machine learning
- Opportunities: credit risk analysis, lower underwriting costs, financial inclusion (intermediation); insurance risk analysis, lower processing costs, fraud detection (insurance); analysis of new data sources, high-frequency trading (asset management); new liquidity management tools, fraud detection and AML (payments)
- Challenges: black-box mechanisms, algorithmic discrimination; zero-sum arms races, model herding, algorithmic coordination; new liquidity crises, increased cyber risks
- Financial stability: herding, network interconnectedness, lack of explainability, single point of failure, concentrated dependence on third-party providers

Generative AI
- Opportunities: credit scoring with unstructured data, easier back-end processing, better customer support (intermediation); better risk analysis with newly legible data, easier compliance (insurance); robo-advising, asset embedding, new products, virtual assistants (asset management); enhanced KYC and AML processes (payments)
- Challenges: hallucinations, increased market concentration, consumer privacy concerns, algorithmic collusion
- Financial stability: herding, uniformity, incorrect decisions based on alternative data, macroeconomic effects of potential labour displacement

AI agents
- Opportunities: automated design, marketing and sale of new financial products without human intervention; increase in speed of information processing; faster payment flows, fraud prevention
- Challenges: new risks to consumer protection, cybersecurity, potential overreliance, fraud and unforeseen risks; cybersecurity, fraud, unforeseen risk concentration with AI agent interactions; sudden liquidity crises, fraud with deception and unforeseen risks
- Financial stability: misalignment risks, inherent unsuitability of AI agents for aspects of macroprudential policies

The financial industry has already started adopting GenAI. OECD (2023) provides several recent examples: Bloomberg recently launched a financial assistant based on a finance-specific LLM, and the investment banking division of Goldman Sachs uses LLMs to provide coding support for in-house software development. Several other companies use GenAI to provide financial advice to customers and help with expense management, as well as through co-pilot applications.

Despite these potential benefits and growing adoption, LLMs also create new risks for the financial sector. They are prone to "hallucinations", i.e. to generating false information as if it were true. This can be especially problematic for customer-facing applications.[8] Moreover, as algorithms become more standardised and are uniformly used, the risk of herding behaviour and procyclicality grows.

Footnote 8: A recent example of this risk, outside of finance, is Air Canada being held liable for the false information that its LLM-powered chatbot provided to a customer.

There are also concerns about market concentration and competition. GenAI is fed by vast amounts of data and is very hungry for computing power, and this leads to a risk that it will be provided by a few dominant companies (Korinek and Vipra, 2024). Notably, big tech companies with deep pockets and unparalleled access to compute and data are well positioned to reinforce their competitive advantage in new markets. Regulators, especially competition authorities, have also started highlighting intentional and unintentional algorithmic collusion, especially with algorithms based on reinforcement learning (Assad et al. (2024), Calvano et al. (2020), OECD (2021)), with potential implications for algorithmic trading in financial markets. The data-intensive nature of GenAI, combined with the reliance on a few (big tech) providers, also exacerbates consumer privacy and cybersecurity concerns (Aldasoro et al., 2024a).

AI agents. AI agents are AI systems that act directly in the world to achieve mid- and long-term goals, with little human intervention or specification of how to do so. While current AI agents (like those supporting software engineering (Scott, 2024)) might be limited in their planning ability, the pace of advancements might lead to more capable agents in the near future. Such AI agents come with opportunities to process novel types of information more quickly than humans and to act autonomously, e.g. for designing software or performing data analysis. AI agents could expand high-frequency information processing and autonomous action from trading to other parts of finance. For example, they could soon autonomously design, market, and sell financial products and services.

Challenges can arise in a world with increasing adoption of AI agents in finance, and in sectors affecting finance, without oversight and security measures. In the short term, these might include cybersecurity, fraud and unequal access due to hyper-personalised digital financial assistants; in the medium term, a potential liquidity crisis or a structural over-reliance on AI agents.

The case of algorithmic trading illustrates the challenges that AI agents with medium-term planning could pose in other environments. Correlated failures, in the form of flash crashes due to correlated autonomous actions, might happen in a different form in financial intermediation, asset management, insurance and payments. While for algorithmic trading there is a clear digitised environment with precise short-term rewards, AI agents in other environments require more sophisticated reinforcement loops (Cohen et al. (2024)). These action-reward loops might be created over time, as AI agents become more and more capable of acting in unstructured, open-ended environments. Contingent upon the configuration of action-reward loops, novel risks might emerge, including the challenge of aligning agents with human goals over longer time horizons (Christian (2021)).

AI agents could also pose significant systemic risks if their behaviour is highly correlated, their actions are difficult to explain, oversight is missing, or their behaviour is not transparent or is misaligned. Appendix A discusses the hypothetical influence of AI agents on a financial crisis. For example, an AI designed for efficient asset allocation might start exploiting market inefficiencies in ways that lead to increased volatility or systemic imbalances.

3.2 AI and financial stability

Even with limited capabilities, computational advances already had important implications for financial stability. The US stock market crash of 1987 is an illustrative example. In October 1987, stock prices in the United States declined significantly, the biggest ever one-day price drop in percentage terms. This was attributed in large part to the dynamics created by so-called portfolio insurance strategies, which relied on rule-based computer models that placed automatic sell orders when security prices fell below a pre-determined level. Initial rule-based selling by many institutions using this strategy led to cascade effects and further selling, and eventually the crash of October 1987 (Shiller (1988), United States Presidential Task Force on Market Mechanisms (1988)).
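
The cascade mechanism is easy to mimic in a stylised simulation with made-up numbers (an illustration of ours, not a model from the paper): a price shock pushes some rule-based sellers below their trigger, their selling moves the price further, which trips more triggers, and so on.

    import numpy as np

    rng = np.random.default_rng(2)

    price = 100.0
    impact = 0.4                              # price drop caused by each forced seller
    triggers = rng.uniform(90, 99, size=50)   # stop-loss levels of 50 rule-based sellers
    sold = np.zeros(50, dtype=bool)

    price -= 2.0                              # initial exogenous shock
    rounds = 0
    while True:
        newly_triggered = (~sold) & (price < triggers)
        if not newly_triggered.any():
            break
        sold |= newly_triggered
        price -= impact * newly_triggered.sum()   # forced selling moves the price further
        rounds += 1

    print(f"final price {price:.1f} after {rounds} rounds, "
          f"{sold.sum()} of 50 sellers triggered")

A modest initial shock can end with most triggers tripped and a far larger cumulative price fall than the shock itself, which is the essence of the 1987 dynamic and of the model-herding concerns discussed next.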

Machine learning added new dimensions to financial stability concerns, mostly due to increased data uniformity, model herding and network interconnectedness. The first dimension is the reliance of ML models on similar data. Due to economies of scale and scope in data collection, often there are only a few producers (for example, big techs) of the large datasets critical to train these models. If most ML applications are based on the same underlying datasets, there is a higher risk of uniformity and procyclicality arising from standardised ML models. The second dimension is "model herding": the inadvertent use of similar optimisation algorithms. The use of similar algorithms can contribute to flash crashes, increase market volatility and contribute to illiquidity during times of stress (OECD, 2021).[9] Algorithms that react simultaneously to market signals may increase volatility and market disruptions (Svetlova, 2022). This problem is exacerbated when financial firms rely on the same third-party providers, which is pervasive in the AI space. A third dimension is that of network interconnectedness, which may create new failure modes (Gensler and Bailey (2020), OECD (2017), Georges and Pereira (2021)).

Footnote 9: Khandani and Lo (2008) argue that model herding was one of the main reasons behind the 2007 hedge fund crisis.

There are also other characteristics inherent to ML models that have implications for financial stability. In particular, the black-box nature of ML models that arises due to their complexity and non-linearity makes it often hard to understand how and why such models reach a particular prediction. This lack of explainability might make it difficult for regulators to spot market manipulation or systemic risks in time.

Like other ML models, the pervasive use of GenAI will present new challenges (Anwar et al., 2024) and will likely also have consequences for financial stability. As noted earlier, one of the most powerful tools made possible by language models is the increased legibility of alternative forms of data. Compared to traditional data sources, alternative data can have shorter time series or sample sizes. Recommendations or decisions based on alternative data may therefore be biased and not generalisable (the so-called fat tail problem, Gensler and Bailey (2020)). Financial or regulatory decision making based on alternative data would need to be very mindful of this limitation.

The risks arising from the use of homogeneous models highlighted above also apply to GenAI. A key application of GenAI in the financial sector is the use of LLMs for customer interactions and robo-advising. Since many of these applications are likely to rely on the same foundational models, there is a risk that the advice provided by them becomes more homogenised (Bommasani and others, 2022). This may by extension exacerbate herding and systemic risk.

Financial stability concerns that derive from the uniformity of datasets, model herding, and network interconnectedness are further exacerbated by specific characteristics of GenAI: increased automaticity, speed and ubiquity. Automaticity refers to GenAI's ability to operate and make decisions independently, increasingly without human intervention. Speed pertains to AI's capability to process and analyse vast amounts of data at rates far beyond human capacity, enabling decisions to be made in fractions of a second. Ubiquity highlights GenAI's potentially widespread application across various sectors of the economy, and its integration into everyday technologies.

A number of systemic risks could arise from the use of AI agents. These agents are characterised by direct actions with no human intervention and a potential for misalignment with regard to long-term goals (Chan et al., 2024). The fundamental nature of the resulting risks is well known from both the literature on financial regulation and the literature on AI alignment and control (Korinek and Balwit, 2024): if highly capable agents are given a single narrow goal, such as profit maximisation, they blindly pursue the specified goal without paying attention to side goals that have not been explicitly spelled out but that an ethical human actor would naturally consider, such as avoiding risk shifting or preserving financial stability. Moreover, even when constraints such as satisfying the requisite financial regulations are specified, AI agents may develop a super-human ability to pursue the letter rather than the spirit of the regulations and engage in circumvention. As an early example, an LLM that was asked to maximise profits as a stock trader in a simulation engaged in insider trading even though it knew this was illegal. Moreover, when caught, the LLM lied about it (Scheurer et al., 2023). We discuss some of the broader risks in Appendix A in a thought experiment looking at how such agents could have interacted with known causes of the great financial crisis of 2008/09. As AI agents advance towards AGI, the resulting risks would be greatly amplified.

3.3 AI use for prudential policy

As the private sector increasingly embraces AI, policymakers may find it increasingly useful to employ AI for both micro- and macroprudential regulation. In fact, they may have no choice but to resort to AI to process the large resulting quantities of data produced by regulated financial institutions.[10] Microprudential policy concentrates on the supervision of individual financial institutions, whereas macroprudential policy concerns itself with the supervision of the financial system as a whole. AI can be leveraged for both types of prudential policies but comes with a different set of risks in each domain.

Footnote 10: See for example Araujo et al. (2024) for an overview of applications in central banking.

For microprudential policy, AI might enable more sophisticated risk assessment models and improve the prediction of institutional failures or spot market manipulation. However, the routine use of such methods is still far away. As AI is particularly adept at recognising patterns in large volumes of data, it could be a powerful tool for supervisors to predict emerging risks for financial institutions. Moreover, GenAI in particular can be powerful for regulatory reporting and compliance by allowing automation of repetitive tasks.

Some of the implications of AI for microprudential policy are already discussed in the previous section. The main limitations include exacerbated threats to consumer privacy, challenges arising from the black-box nature of algorithms, the risk of algorithms magnifying biases that exist in input data and exposure to sophisticated cyber attacks.

The use of AI for macroprudential policy carries a different set of difficulties. Danielsson and Uthemann (2023) identify five main challenges for traditional ML: i) data availability, ii) uniqueness of financial crises, iii) the Lucas critique (Lucas (1976)), iv) lack of clearly defined objectives, and v) challenges of aligning regulatory and AI objectives. Financial crises are very disruptive but, fortunately, rather rare events. Since the usefulness of ML is directly linked to the amount of data fed into models, applications in macroprudential policy may be limited, and extrapolating from a few data points may lead to incorrect outcomes.

A related challenge is the uniqueness of each financial crisis. Just as this makes it difficult for humans to predict financial crises, it also makes it difficult for AI: although crises have some commonalities, each has its own specific risk factors which, while rationalisable ex post, are nearly impossible to understand ex ante.

The reason is their "unknown-unknowns" characteristic (Knight (1921), Danielsson et al. (2022)). Accordingly, even if AI were able to learn from past crises, the lessons might have limited applicability for predicting the next one. Moreover, even in the cases where AI is able to generate insights from a specific crisis episode, the policy insights themselves will change the environment of decision making, the so-called Lucas critique (Lucas, 1976).

Due to constraints of data and the uniqueness of financial crises, it is often challenging even for regulators to have clearly defined objectives for macroprudential policy (Danielsson and Uthemann, 2023). Objectives may be fairly broad, such as "maintaining financial stability", which may be difficult to parse for AI.

Finally, there is a risk of misalignment, which is made worse by incomplete information on the objectives of macroprudential policy. AI and humans may have very different ways of reaching the same objective, and there is a risk that AI adopts ways that are detrimental to social welfare or out of touch with ethical or moral standards.

However, just like humans have found ways of dealing with these challenges, future advances in AI may open up new possibilities for macroprudential regulation that go beyond the limitations of traditional ML. As AI systems become more advanced, they may be able to better deal with the limited data on financial crises by learning from a much broader set of data sources, including granular data on financial transactions, news and sentiment analysis, and simulations of hypothetical crisis scenarios. They may also be able to identify more generalisable patterns of systemic risk that are robust across different types of crises. Moreover, future AI systems that can engage in counterfactual reasoning and causal inference could help regulators better understand the potential consequences of different policy interventions, even in a world where the Lucas critique applies. And as AI alignment techniques improve, it may become possible to specify clear objectives for AI systems to optimise, while ensuring they do so in a way that is consistent with human values and regulatory intent. By leveraging these advances, future AI could become a powerful tool for enhancing the speed, scope, and precision of macroprudential regulation, helping to build a more resilient and stable financial system.

4 Risk of AI disruption

While the financial sector is good at smoothing small shocks in the real economy and at helping the economy adjust, large shocks to the economy run the risk of disrupting the financial sector and thus being amplified. Advances in AI pose a risk of disrupting many sectors of the economy and their workforce. Depending on the extent of the disruption, this may lead to financial stability risks. This is not just a theoretical possibility, as there are precedents of significant disruptions in the real economy spilling over to the financial sector. For example, in the 1920s the mechanisation of agriculture displaced more than 10% of the US workforce from the agricultural sector and led to widespread mortgage defaults, which played an important role in the financial crisis of 1929 and the ensuing Great Depression. A growing view among technology experts and business leaders is that advances in AI may be even more transformative in coming years (see, e.g., Korinek, 2023b). To be sure, recent data do not show signs of any such large-scale disruption yet. However, policymakers are well advised to have contingency plans in case transformative AI scenarios materialise.

To span the range of possible outcomes, we lay out two scenarios. The first is an optimistic scenario in which advances in AI are more likely to benefit financial stability. The second is a downside scenario in which the real effects of AI disrupt financial stability. Of course, there are many realistic scenarios in between these two extremes that are worth preparing for.

Optimistic AI scenario. In the most benign scenario, AI will lead to a marked increase in productivity without having significant disruptive effects. So far, the use of AI tools in companies has increased worker productivity, from customer support (Brynjolfsson et al. (2023)) and programmers (Peng et al. (2023)) to a variety of other business professionals (Noy and Zhang (2023)), and even economists (Korinek, 2023a). Recent evidence points to a positive correlation between AI adoption and firms' productivity (Yang (2022), Czarnitzki et al. (2023)), although this likely differs across occupations and sectors (Felten et al., 2023).[11]

Footnote 11: See Autor (2022) for a broader overview of the labour market implications of technological change, with a focus on artificial intelligence.

Under this scenario, one can think of AI as a positive (and moderate) productivity shock with a differential effect across sectors.

Calibrating a macroeconomic multi-sector model using an index of exposure to AI across sectors based on Felten et al. (2021), Aldasoro et al. (2024b) find that AI can significantly raise output, consumption and investment in the short and long run. The supply shock may be disinflationary in the short run if households and firms do not fully anticipate the effects of AI in the economy. But irrespective of how agents form expectations, the long-run effect is inflationary.

This scenario could lead to a Goldilocks situation for monetary policy. Greater use of AI could ease inflationary pressures in the near term, thereby supporting central banks in their task of bringing inflation back to target. In the medium to longer term, inflation could rise because of greater AI-induced demand, but central banks could dampen demand by tightening policy. AI's positive contribution to growth could offset some of the detrimental secular developments that threaten to depress growth going forward, including population aging, re-shoring and changes in global supply chains, as well as geopolitical tensions and political fragmentation. The positive effects on output could enhance the capacity of economies to service debts, with positive effects on debt sustainability. The revaluation of financial assets that would come from higher productivity could also support this process, provided rising borrowing costs do not overshadow growth effects.

For the financial sector more broadly this scenario would come with challenges, although not insurmountable ones. The likely job turnover that may come from AI automating some tasks could affect spending patterns by consumers as well as the ability to repay loans by both consumers and corporations. Knock-on effects through defaults could in turn affect the financial sector, which itself would need to support the resource reallocation arising from such displacements in the first place. This challenge will of course be tougher the higher the exposure of financial institutions to the most affected sectors.

Disruptive AI scenario. Alternative scenarios could be vastly more disruptive. Some AI experts predict that highly capable autonomous AI agents may reach the level of AGI within the decade and may be able to automate virtually all tasks performed by humans. Even if new tasks are invented, such machines might be just as good at the new tasks as humans, giving rise to massive job market disruption. Unconstrained by the scarcity of labour, output would take off exponentially under such circumstances. This would result in very large disruptions for corporations and, especially, the workforce, as labor would be severely devalued. Korinek (2023b) considers two such scenarios where AGI is reached within five or, alternatively, 20 years. For simplicity and in order to highlight the key implications of such a shift, here we consider the move to AI agents without pinning down a specific timeline. Korinek and Suh (2024) consider these scenarios within a macroeconomic model of automation, highlighting the wide dispersion of outcomes as a function of the speed of automation.

Rapid scenarios of AI growth, for example AI take-off scenarios, could trigger massive redistributions of income and wealth in a short amount of time. To make this tangible, let us describe how transformative advances in AI may affect factor and goods prices in further detail. To be sure, the slower the advent of transformative AI and the associated structural transformation, and the better any supporting policy measures are, the less we would have to be concerned about the described adverse consequences.

First, rapid advances in AI may significantly devalue labor compared to capital, risking widespread consumer defaults unless countervailing policy actions are taken. In recent decades, there have already been some indications that the labor share of income has declined somewhat, and large categories of workers have seen their income stagnate (Autor, 2022). More recently, early studies of generative AI's impact on the workforce indicate that the skill premium that highly educated workers are earning is deflating (Noy and Zhang, 2023). By contrast, the value of the hardware underpinning the "digital brains" behind generative AI systems is rising rapidly.

Second, rapid advances in AI may undermine traditional businesses and reallocate corporate revenue to new companies that are built around AI. This transition may occur much faster than the regular churning of businesses, risking corporate defaults. For example, Sam Altman, the CEO of OpenAI, has recently suggested that he expects we will soon see trillion-dollar companies with no (human) workforce that may rapidly take over certain business sectors. The winner-takes-all effects of digital technologies may reinforce such dynamics.

Third, if rapid advances in AI significantly accelerate economic growth and prices, interest rates may go up by an order of magnitude (Chow and Mazlish, 2023). This surge could lead to a severe deterioration in credit quality and widespread defaults, potentially placing the balance sheets of financial institutions under serious stress.

Fourth, governments may experience a significant reduction in tax revenues if labor markets, their primary revenue source, are undermined, thereby calling their debt sustainability into question.

Fifth, while rapid advances in AI may boost growth in countries at the forefront of technological development, they might also lead to a new form of "intelligence divide". This divide could leave other countries behind, resulting in severe terms-of-trade losses (Korinek and Stiglitz, 2021).

Table 2: Potential disruptions and effects on financial stability in a disruptive AI scenario
(potential disruption: effect on financial stability)
- Labor devaluation: widespread consumer defaults, unless countervailing policy actions are taken
- Corporate revenue reallocation: undermining of traditional businesses, risk of corporate defaults
- Accelerated growth and prices: significant increase in interest rates, deterioration in credit quality, potential stress on financial institution balance sheets
- Reduced tax revenues: questioning of government debt sustainability
- "Intelligence divide" across countries: severe terms-of-trade losses for countries left behind
- Political discontent and instability: further undermining of financial stability

Finally, all these disruptions to the real economy may also give rise to political discontent and instability, which could further undermine financial stability (Bell and Korinek, 2023). Table 2 provides a summary.

In summary, the financial stability implications of AI disruption in the real economy could vary widely depending on the pace and extent of AI adoption. In optimistic scenarios, AI could boost productivity and growth without severe disruptions, easing inflationary pressures and debt burdens, albeit with some challenges for the financial sector in managing the associated labor market shifts. However, more disruptive scenarios in which highly capable AI systems rapidly automate human tasks could lead to severe economic dislocations, such as sudden income and wealth redistributions, corporate and consumer defaults, surging interest rates, reduced government revenues, and political instability, all of which could significantly undermine financial stability.

5 Upgrading financial regulation for AI

5.1 Principles for AI regulation

The risks posed by AI expand the focus of financial regulation beyond traditional policy objectives. In addition to policy concerns such as financial stability, market integrity, efficiency, and competition, questions of consumer rights like privacy and algorithmic discrimination also take centre stage. Moreover, AI introduces new geopolitical risks, most notably those from the geographical concentration of the production of microchips and advanced semiconductors. Striking the right balance between harnessing the benefits of AI and managing its risks is thus crucial for economic policy-making. This in turn requires a careful yet comprehensive regulatory response that incorporates technological, societal and ethical considerations.

As advances in AI are accelerating, regulation must be proactive, anticipating potential issues that future AI systems may create. A preventive approach that tackles such issues before escalation can prove more effective from a societal standpoint. This could include, for example, evaluating AI models against systemic, national security and societal risks.

Yet not all risks associated with AI necessitate regulatory intervention. Regulatory measures should target risks that manifest as externalities that impact specific policy objectives (broader financial stability, market integrity, competition, data privacy, and consumer protection). At the same time, risks that do not generate externalities or do not directly influence these intermediate objectives (for example customer experience and service risks, technology adoption risks, etc.) can be managed effectively through market mechanisms. Striking the right balance is key to avoid stifling innovation while minimising adverse externalities on the financial system and its participants.

The complexity of GenAI and foundation models, as well as current advances towards AI agents, makes it challenging to predict their impact, potentially leading to unforeseen risks. This limits the ability of regulation to develop effective rules quickly. Thus, it is crucial to establish and operationalise regulatory principles that pre-emptively mitigate future risks.

Both national and international standard-setters have defined general principles for regulating and managing AI systems that can be applied throughout the value chain, from development to deployment of AI.

Both national and international standard-setters have defined general principles for regulating and managing AI systems that can be applied throughout the value chain, from the development to the deployment of AI. For example, the EU defined an Assessment List for Trustworthy Artificial Intelligence (EU ALTAI; see Ala-Pietilä et al., 2020; EU, 2024); in the US, NIST defined characteristics for trustworthiness as part of its AI Risk Management Framework (NIST, 2023); and China defined responsible AI principles (China Technology Ministry, 2019). The ISO standard ISO/IEC 23894:2023 provides guidance on risk management for AI systems. These frameworks form the cornerstone of many AI regulatory initiatives. Although some of the details vary, there are commonalities among them. The following is a summary list of principles:

- Societal and environmental well-being. The development and use of AI should be done in ways that benefit society at large, including environmental sustainability. This principle involves considering the long-term impact of AI on social structures, democracy and the planet.
- Transparency. AI systems should be open and understandable, requiring clear explanations of how AI systems work, the logic behind decisions and the data used.
- Accountability. Entities developing or deploying AI are responsible for the outcomes of their systems, ensuring mechanisms are in place to address any negative impacts or errors.
- Fairness. AI systems should not perpetuate biases or discrimination, ensuring equitable treatment across diverse populations.
- Privacy protection. AI systems should safeguard personal data, adhering to data protection laws and principles.
- Safety and security. AI systems should operate reliably and safely under all conditions, implementing safeguards against failures, misuse or malicious attacks.
- Human oversight. Human judgment should be involved in critical decision-making processes, emphasising the importance of human expertise and ethics in guiding AI actions.
- Robustness and reliability. AI systems should perform consistently under various conditions without failure, ensuring accuracy and reliability over time.
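To make the discussion concrete, the sketch below shows one hypothetical way a supervisor or a financial institution could encode these principles as a machine-readable assessment checklist, so that every AI use case is reviewed against the same criteria. It is a minimal illustration and not part of any of the frameworks cited above; the principle names follow the list, while the example use case, class names and scoring scheme are assumptions.

```python
from dataclasses import dataclass, field

# The eight principles summarised above, used as checklist items.
PRINCIPLES = [
    "Societal and environmental well-being",
    "Transparency",
    "Accountability",
    "Fairness",
    "Privacy protection",
    "Safety and security",
    "Human oversight",
    "Robustness and reliability",
]

@dataclass
class PrincipleAssessment:
    principle: str
    satisfied: bool
    evidence: str  # e.g. a link to model documentation or an audit report

@dataclass
class AIUseCaseReview:
    use_case: str
    assessments: list = field(default_factory=list)

    def record(self, principle: str, satisfied: bool, evidence: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        self.assessments.append(PrincipleAssessment(principle, satisfied, evidence))

    def gaps(self) -> list:
        """Principles not yet assessed, or assessed but not satisfied."""
        covered = {a.principle for a in self.assessments if a.satisfied}
        return [p for p in PRINCIPLES if p not in covered]

# Hypothetical usage: a credit-scoring model reviewed before deployment.
review = AIUseCaseReview("credit scoring model")
review.record("Transparency", True, "model card v1.2")
review.record("Fairness", False, "disparate impact test pending")
print(review.gaps())  # principles still requiring evidence before approval
```

The design choice here is simply that every principle must carry explicit evidence; anything not covered shows up as a gap rather than being silently waived.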

5.2 Regulatory models

Bradford (2023) identifies three primary regulatory models, adopted in the US, China and the EU. The "market-driven" regulatory model in the US is characterised by a market-based approach that emphasises innovation, self-regulation and scepticism of government intervention. The "state-driven" regulatory model in China utilises technology for political objectives and aims to grow the industry while exporting technology infrastructure. The "rights-driven" regulatory model of the EU is focused on protecting individual and societal rights and the equitable distribution of digital transformation gains. These regulatory models, while distinct, are not mutually exclusive and show a tendency to converge towards the principles highlighted above, as well as towards rather similar operationalisations.

In the United States, the regulation of AI has evolved from voluntary guidance to executive actions. Initially, the Blueprint for an AI Bill of Rights in October 2022 laid foundational ethical considerations. This was followed by voluntary commitments from leading AI firms in July 2023, signalling industry readiness to address AI's societal impacts. The shift towards regulatory oversight was marked by the Executive Order on Safe, Secure, and Trustworthy AI in November 2023, which mandated over 25 agencies to address AI-related harms, including security, privacy and discrimination. These agencies are now tasked with establishing rules, funding research, assessing risks and enforcing transparency through safety tests and reporting by AI developers. However, there has not been significant legislative action on AI regulation.

China's AI regulation has evolved from a state-driven approach to more sector-specific guidance. The 2018 Guiding Opinions for financial institutions mandated algorithm filing, risk disclosure and manual intervention to mitigate pro-cyclicality risk in financial markets, highlighting a cautious approach to AI's systemic impacts. The 2022 Deep Synthesis Provisions and the 2023 Generative AI Provisions set the stage for regulatory oversight, emphasising adherence to socialist values, content reliability and discrimination prevention. An AI Law is underway, proposing a framework for public-facing generative AI systems, including content standards, privacy respect and a mandatory filing to the algorithm registry.

The European Union's AI Act, approved in February 2024, aims to ensure that AI technologies are safe and respect fundamental rights while fostering innovation and economic growth. This regulatory framework introduces a risk-based approach that categorises AI systems according to the risk they pose to users. For example, the act identifies specific applications of AI that pose unacceptable risks and are therefore prohibited. These include social scoring, manipulation or exploitation of vulnerabilities and certain uses of biometric identification. The EU AI Act also introduces governance rules for AI applications that might pose risks to health, safety, fundamental rights, the environment, democracy and the rule of law. For these high-risk categories, stringent regulatory requirements are set.
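As a stylised illustration of this risk-based approach, the sketch below maps a few financial use cases onto risk tiers of the kind just described. It is a simplification for exposition only: the tier labels, the example mapping and the associated actions are assumptions made for illustration, and the Act's actual legal categories, exemptions and obligations are considerably more detailed.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"           # e.g. social scoring, exploitation of vulnerabilities
    HIGH = "stringent requirements"       # e.g. creditworthiness assessment of consumers
    LIMITED = "transparency obligations"  # e.g. chatbots that must disclose they are AI
    MINIMAL = "no specific obligations"

# Hypothetical mapping of financial use cases to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "social scoring of customers": RiskTier.UNACCEPTABLE,
    "creditworthiness assessment of consumers": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "internal spam filtering": RiskTier.MINIMAL,
}

def required_action(use_case: str) -> str:
    """Return the (illustrative) compliance action implied by a use case's tier."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        return "do not deploy: prohibited practice"
    if tier is RiskTier.HIGH:
        return "register, document, test and monitor before and after deployment"
    if tier is RiskTier.LIMITED:
        return "disclose AI use to customers"
    return "standard internal governance"

for case in EXAMPLE_USE_CASES:
    print(f"{case}: {required_action(case)}")
```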

5.3 Operationalising regulatory principles for AI

To operationalise the principles above for GenAI, policymakers and stakeholder processes have brought forward specific considerations across the value chain. These are embodied in the EU AI Act (EU, 2024), in NIST's AI Risk Management Framework for GenAI (Barrett et al., 2023) and in China's Generative AI provisions, as well as in the UN's adoption of a resolution on AI safety in March 2024. These regulatory initiatives apply to GenAI and partly also to AI agents, especially when they are based on foundation models or used in high-risk systems. Chan et al. (2024) and Janjeva et al. (2023) propose dedicated measures to apply these principles to AI agents and to ensure greater systemic resilience. Building upon these operationalisations could be a useful step for regulating emerging AI agents in finance.

Table 3 presents a summary of the principal considerations for regulating GenAI and AI agents in finance, based on current regulatory and oversight proposals. The rows in Table 3 correspond to the four categories of the US NIST RMF: govern, map, measure and manage. These correspond closely to the EU AI Act's best practices (through the AI Act's code of practice and standard development, visibility measures, requirements for evaluations of high-risk systems and GPAI models with systemic risks, and connected regulatory incentives), as well as to China's recent GenAI initiatives. The columns correspond to the three main stages of the AI value chain: (i) design and training, (ii) deployment and usage, and (iii) longer-term diffusion. Most considerations are specifically mentioned in the regulations and guidelines above or put forward in proposals across regions. As shown in the table, the key aspects span the entire life cycle of GenAI systems and AI agents, from the initial design, training and evaluation stages, through their deployment and ongoing usage, and ultimately to the longer-term diffusion and impact assessment. Appendix B illustrates how these considerations could be specified in the case of an advanced AI chatbot for loan applications.

Table 3: Considerations for regulating AI

1. Govern / Promote best practices
   - Design, training and evaluation: governance and developer guidelines; operational design domains
   - Deployment and usage: pre-deployment checklists; stepwise roll-out processes
   - Longer-term diffusion: develop skills and capacity of both regulators and industry; understand public perception

2. Map / Create visibility
   - Design, training and evaluation: technical documentation; information access
   - Deployment and usage: coordinated labelling; visibility into AI agents
   - Longer-term diffusion: identify foreseeable impacts; monitor global AI adoption

3. Measure / Evaluate risks and capabilities
   - Design, training and evaluation: evaluate capabilities; third-party audits
   - Deployment and usage: incident sharing; adversarial and stress testing
   - Longer-term diffusion: measuring economic impacts; war-gaming of AI risks; evaluate sectoral transformation

4. Manage / Establish incentives
   - Design, training and evaluation: AI assurance ecosystem; registering high-risk use cases
   - Deployment and usage: specifying "red lines"; clarity on liability
   - Longer-term diffusion: redistributive economic policies; ensure competition and substitutability

In the design, training and evaluation phase, the main considerations cover governance and developer guidelines, the need to create visibility through technical documentation and information access, as well as evaluating the inherent risks and capabilities of the AI systems.

Moving to the deployment and usage stage, commonly mentioned considerations include pre-deployment checklists, stepwise roll-out processes to minimise risks, and the importance of coordinated labelling and monitoring of the AI agents' activities and impacts.

Finally, governing longer-term diffusion includes developing regulators' skills and public engagement to specify oversight and guidelines, while also measuring the economic, sectoral and redistributive implications as AI technologies become more widely adopted in the financial sector. However, it is important to note that much remains to be empirically tested and validated, and concerted international cooperation to establish more robust standards and build greater regulatory capacity will be crucial going forward.

5.4 Need for international cooperation

The preceding discussion underscores the need for global cooperation on AI regulation. Indeed, authorities are increasingly collaborating to harmonise regulatory standards and enhance cooperation, recognising that AI transcends national borders. Common regulatory standards are needed in particular on AI governance rules and risk assessment methodologies. Standardising AI governance rules internationally is crucial to ensure consistent ethical and safety standards, prevent regulatory arbitrage, and foster global cooperation. Uniform guidelines can enhance trust, facilitate cross-border AI applications, and address global challenges like privacy, security and equitable access effectively.

There is also a need for standardised risk assessment methodologies of AI models that take into account the unique attributes of AI, such as adaptability and learning over time. These methodologies should consider the potential for AI systems to develop unforeseen behaviours or outcomes, necessitating continuous oversight and the ability to adjust regulatory measures as the technology matures and integrates more deeply into societal infrastructures.
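One concrete ingredient such a methodology could include is routine drift monitoring, so that oversight keeps pace as models adapt or their input environment changes. The sketch below computes the population stability index (PSI), a widely used drift statistic, on a model's score distribution; the function name, data, threshold and monitoring rule are illustrative assumptions rather than a prescribed standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference score distribution (e.g. at validation time)
    and the distribution currently observed in production."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip both samples into the reference range so outliers land in the edge bins.
    exp_pct = np.histogram(np.clip(expected, edges[0], edges[-1]), bins=edges)[0] / len(expected)
    act_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)  # avoid log(0) and division by zero
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical monitoring rule: re-validate the model when drift exceeds a threshold
# (0.25 is a commonly used rule of thumb for a material shift).
rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 50_000)   # scores when the model was approved
production = rng.normal(0.3, 1.1, 50_000)  # scores after the environment shifted
psi = population_stability_index(reference, production)
print(f"PSI = {psi:.3f}" + ("  -> investigate / re-validate" if psi > 0.25 else "  -> stable"))
```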

Global collaboration on AI focuses on ensuring safety and transferring knowledge and best practices, so that all regions of the world can benefit from AI advancements responsibly. Initiatives like the G7 Hiroshima Process and the Transatlantic Trade and Technology Council underscore the importance of international collaboration in establishing standards for the safe and ethical use of AI.12

12 Meanwhile, the Global Partnership on Artificial Intelligence (GPAI) and the UN AI Advisory Body emphasise aligning AI development with global goals such as sustainability and equity. China's Global AI Governance Initiative represents a significant move towards creating a cooperative framework for AI governance, focusing on people-centred development and sustainable growth.

The integrity and smooth functioning of the financial system is paramount to the stability and prosperity of modern society. As AI becomes increasingly integrated into financial operations, it is crucial that these initiatives pay close attention to the potential challenges that AI poses to financial markets. Maintaining financial stability is an important pillar of AI safety, as any disruption or instability in the financial system can have far-reaching consequences, affecting individuals, businesses and entire nations.

6 Conclusion

This study underscores the crucial role of artificial intelligence in shaping the dynamics of the financial system, conceived of as the "brain" of the economy. By studying the evolutionary path from rule-based systems to GenAI, we highlight how AI technologies have progressively augmented information processing, risk management and customer service within the financial sector, enhancing its cognitive capacity. But while AI presents significant opportunities for efficiency gains and innovation, it also introduces complex challenges, including model opacity, data dependency and systemic stability concerns. Thus, effective regulation and governance frameworks are important to harness the benefits of AI while mitigating associated risks, emphasising transparency, fairness and global collaboration. At the same time, authorities should be mindful that not all risks need regulation: regulation should target risks that manifest as externalities, leaving market mechanisms to address those that do not.

By bringing attention to the interconnectedness between AI advancements and the broader economy, we also highlight the potential spillover and spillback effects between the real economy and the financial system. As AI permeates business operations and decision-making processes, there is a need to carefully consider its implications for employment, productivity, income distribution and the broader economy. Policy responses must account for diverse scenarios, ranging from productivity gains to significant labor market disruptions, to ensure inclusive economic growth and stability.

Looking ahead, continued vigilance and adaptive regulatory approaches are warranted. By fostering dialogue among stakeholders and promoting interdisciplinary collaboration, policymakers can develop robust frameworks that harness innovation to promote societal welfare. Ongoing research and empirical analyses are essential to deepen our understanding of AI's impact on the financial system and to guide informed policy decisions in a rapidly evolving technological landscape. Ultimately, by leveraging the transformative potential of AI while safeguarding against its risks, we can foster a more resilient and equitable financial ecosystem for the benefit of society as a whole.

References

Acharya, V and M Richardson, "Causes of the financial crisis," Critical Review, 2009, 21(2-3), 195-210.

Ala-Pietilä, P, Y Bonnet, U Bergmann, M Bielikova, C Bonefeld-Dahl, W Bauer, L Bouarfa, R Chatila, M Coeckelbergh, V Dignum et al., The Assessment List for Trustworthy Artificial Intelligence (ALTAI), European Commission, 2020.

Aldasoro, I, O Armantier, S Doerr, L Gambacorta and T Oliviero, "GenAI and US households: job prospects amid trust concerns," BIS Bulletin, 2024, forthcoming.

Aldasoro, I, S Doerr, L Gambacorta and D Rees, "The impact of artificial intelligence on output and inflation," BIS Working Paper, 2024, no 1179.

Anwar, U, A Saparov, J Rando, D Paleka, M Turpin, P Hase, E S Lubana, E Jenner, S Casper, O Sourbut et al., "Foundational Challenges in Assuring Alignment and Safety of Large Language Models," arXiv:2404.09932, 2024.

Araujo, D, S Doerr, L Gambacorta and B Tissot, "Artificial intelligence in central banking," BIS Bulletin, 2024.

Assad, S, R Clark, D Ershov and L Xu, "Algorithmic pricing and competition: Empirical evidence from the German retail gasoline market," Journal of Political Economy, 2024, 132(3).

Athey, S, "The impact of machine learning on economics," in The Economics of Artificial Intelligence: An Agenda, University of Chicago Press, 2018, pp 507-547.

Autor, D, "The Labor Market Impacts of Technological Change: From Unbridled Enthusiasm to Qualified Optimism to Vast Uncertainty," NBER Working Paper, 2022, no w30074.

Barrett, A M, J Newman, B Nonnecke, D Hendrycks, E R Murphy and K Jackson, "AI risk-management standards profile for general-purpose AI systems (GPAIS) and foundation models," Center for Long-Term Cybersecurity, UC Berkeley, 2023, https://perma.cc/8W6P-2UUK.

Bell, S A and A Korinek, "AI's Economic Peril," Journal of Democracy, 2023, 34(4), 151-161.

Berglund, L, M Tong, M Kaufmann, M Balesni, A C Stickland, T Korbak and O Evans, "The Reversal Curse: LLMs trained on 'A is B' fail to learn 'B is A'," 2023.

Bommasani, R and others, "On the Opportunities and Risks of Foundation Models," 2022.

Bradford, A, Digital Empires: The Global Battle to Regulate Technology, Oxford University Press, 2023.

Broby, D, "The use of predictive analytics in finance," The Journal of Finance and Data Science, 2022, 8, 145-161.

Brynjolfsson, E, D Li and L R Raymond, "Generative AI at Work," NBER Working Paper, 2023, no 31161.

Calvano, E, G Calzolari, V Denicolò and S Pastorello, "Artificial intelligence, algorithmic pricing, and collusion," American Economic Review, 2020, 110(10), 3267-3297.

Cao, L, "AI in finance: A review," Available at SSRN 3647625, 2020.

Cao, L, "AI in finance: challenges, techniques, and opportunities," ACM Computing Surveys (CSUR), 2022, 55(3), 1-38.

Ceruzzi, P E, A History of Modern Computing, MIT Press, 2003.

Chan, A, C Ezell, M Kaufmann, K Wei, L Hammond, H Bradley, E Bluemke, N Rajkumar, D Krueger, N Kolt et al., "Visibility into AI Agents," arXiv:2401.13138, 2024.

Chia, H, "In machines we trust: Are robo-advisers more trustworthy than human financial advisers?," Law, Technology and Humans, 2019, 1, 129-141.

China Technology Ministry, "Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence," 2019.

Chow, T, B Halperin and J Z Mazlish, "Transformative AI, existential risk, and asset pricing," Working Paper, 2023.

Christian, B, "The Alignment Problem," https://brianchristian.org/the-alignment-problem/, 2021, accessed 5 April 2024.

Cohen, M K, N Kolt, Y Bengio, G K Hadfield and S Russell, "Regulating advanced artificial agents," Science, 2024, 384(6691), 36-38.

Consulich, F, M Maugeri, T N Poli, G Trovatore et al., "AI and market abuse: do the laws of robotics apply to financial trading?," CONSOB Legal Research Papers (Quaderni Giuridici), 2023, no 29.

Czarnitzki, D, G P Fernández and C Rammer, "Artificial Intelligence and Firm-Level Productivity," Journal of Economic Behavior & Organization, 2023, 211, 188-205.

Danielsson, J and A Uthemann, "On the use of artificial intelligence in financial regulations and the impact on financial stability," arXiv:2310.11293, 2023.

Danielsson, J, R Macrae and A Uthemann, "Artificial intelligence and systemic risk," 2022.

Epoch, "Parameter, Compute and Data Trends in Machine Learning," 2022, accessed 17 January 2024.

EU, "Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts," 2024.

Felten, E, M Raj and R Seamans, "Occupational, Industry, and Geographic Exposure to Artificial Intelligence: A Novel Dataset and Its Potential Use," Strategic Management Journal, 2021, 42(12), 2195-2217.

Felten, E, M Raj and R Seamans, "Occupational Heterogeneity in Exposure to Generative AI," Available at SSRN, April 2023.

Gensler, G and L Bailey, "Deep learning and financial stability," Available at SSRN 3723132, 2020.

Georges, C and J Pereira, "Market stability with machine learning agents," Journal of Economic Dynamics and Control, 2021, 122, 104032.

Goodfellow, I, Y Bengio and A Courville, Deep Learning, MIT Press, 2016.

Hassija, V, V Chamola, A Mahapatra, A Singal, D Goel, K Huang, S Scardapane, I Spinelli, M Mahmud and A Hussain, "Interpreting black-box models: a review on explainable artificial intelligence," Cognitive Computation, 2024, 16(1), 45-74.

Haugeland, J, Artificial Intelligence: The Very Idea, MIT Press, 1985.

Helleiner, E, "Understanding the 2007-2008 global financial crisis: Lessons for scholars of international political economy," Annual Review of Political Science, 2011, 14, 67-87.

Hornik, K, M Stinchcombe and H White, "Multilayer feedforward networks are universal approximators," Neural Networks, 1989, 2(5), 359-366.

Huang, J, J Chai and S Cho, "Deep learning in finance and banking: A literature review and classification," Frontiers of Business Research in China, 2020, 14(1), 1-24.

Janjeva, A, N Mulani, R Powell, J Whittlestone and S Avin, "Strengthening Resilience to AI Risk: A Guide for UK Policymakers," Centre for Emerging Technology and Security, 2023.

Kelly, B, D Xiu et al., "Financial machine learning," Foundations and Trends in Finance, 2023, 13(3-4), 205-363.

Khandani, A E and A W Lo, "What happened to the quants in August 2007? Evidence from factors and transactions data," Technical Report, National Bureau of Economic Research, 2008.

Knight, F H, Risk, Uncertainty and Profit, vol 31, Houghton Mifflin, 1921.

Korinek, A, "Generative AI for Economic Research: Use Cases and Implications for Economists," Journal of Economic Literature, December 2023, 61(4), 1281-1317.

Korinek, A, "Scenario Planning for an A(G)I Future," Finance & Development, December 2023, pp 30-33.

Korinek, A and A Balwit, "Aligned with Whom? Direct and Social Goals for AI Systems," in J Bullock et al (eds), Oxford Handbook of AI Governance, Oxford University Press, 2024, pp 65-85.

Korinek, A and D Suh, "Scenarios for the Transition to AGI," NBER Working Paper, 2024, no w32255.

Korinek, A and J E Stiglitz, "Artificial Intelligence, Globalization, and Strategies for Economic Development," NBER Working Paper, 2021, no 28453.

Korinek, A and J Vipra, "Concentrating Intelligence: Scaling Laws and Market Structure in Generative AI," prepared for Economic Policy, 2024.

Li, K, A Hopkins, D Bau, F Viegas, H Pfister and M Wattenberg, "Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task," arXiv:2210.13382, 2023.

Lucas, R E, "Econometric policy evaluation: A critique," Carnegie-Rochester Conference Series on Public Policy, vol 1, North-Holland, 1976, pp 19-46.

Matz, S C, J D Teeny, S S Vaid et al., "The potential of generative AI for personalized persuasion at scale," Scientific Reports, 2024, 14, 4692.

Morris, M R, J Sohl-Dickstein, N Fiedel, T Warkentin, A Dafoe, A Faust, C Farabet and S Legg, "Levels of AGI: Operationalizing Progress on the Path to AGI," 2024.

NIST, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," 2023.

Noy, S and W Zhang, "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence," Science, 2023, 381(6654), 187-192.

OECD, "Collusion: Competition Policy in the Digital Age," 2017.

OECD, "Artificial Intelligence, Machine Learning and Big Data in Finance: Opportunities, Challenges, and Implications for Policy Makers," 2021.

OECD, "Generative artificial intelligence in finance," 2023, no 9.

Peng, S, E Kalliamvakou, P Cihon and M Demirer, "The Impact of AI on Developer Productivity: Evidence from GitHub Copilot," arXiv preprint, 2023.

Perez-Cruz, F and H S Shin, "Testing the cognitive limits of large language models," BIS Bulletin, January 2024, no 83.

Quinn, B, "Explaining AI in Finance: Past, Present, Prospects," arXiv:2306.02773, 2023.

Russell, S J and P Norvig, Artificial Intelligence: A Modern Approach, 2010.

Scheurer, J, M Balesni and M Hobbhahn, "Technical report: Large language models can strategically deceive their users when put under pressure," arXiv:2311.07590, 2023.

Scott, W, "Introducing Devin, the first AI Software Engineer," 2024.

Shiller, R J, "Portfolio insurance and other investor fashions as factors in the 1987 stock market crash," NBER Macroeconomics Annual, 1988, 3, 287-297.

Suleyman, M and M Bhaskar, The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma, Crown, 2023.

Svetlova, E, "AI ethics and systemic risks in finance," AI and Ethics, 2022, 2(4), 713-725.

Turing, A M, "Computing Machinery and Intelligence," Mind, 1950, 59(236), 433-460.

United States Presidential Task Force on Market Mechanisms, Report of the Presidential Task Force on Market Mechanisms, US Government Printing Office, 1988.

von Neumann, J, Theory of Self-Reproducing Automata, University of Illinois Press, 1966.

von Neumann, J, H Goldstine, A Burks, N Metropolis et al., "First Draft of a Report on the EDVAC," Technical Report, University of Pennsylvania, Moore School of Electrical Engineering, 1945.

Yang, C H, "How Artificial Intelligence Technology Affects Productivity and Employment: Firm-Level Evidence from Taiwan," Research Policy, 2022, 51(6).

Yang, J, C E Jimenez, A Wettig, S Yao, K Narasimhan and O Press, "SWE-agent: Agent-Computer Interfaces Enable Software Engineering Language Models," 2024.

A Systemic risk from AI agents

Systemic risks could also increase due to a widespread use of AI agents. These agents are characterised by direct actions with no human intervention and a potential for misalignment with regard to long-term goals (Chan et al. (2024)). As a simple exercise, Table 4 describes the hypothetical influence of AI agents in a specific scenario: would the 2008 financial crisis have been more severe if AI agents had been integrated in the 2008 economy?

The first column reports some of the core reasons for the 2008 financial crisis according to the literature (Helleiner (2011), Acharya and Richardson (2009)): 1) shortcomings in financial practices (inadequate risk assessment and securitisation), 2) regulation (belief-based oversight, limited oversight of rating agencies) and 3) global industry structure (interconnectedness of financial institutions, incentive misalignment); economic policies like housing ownership programs are very context-specific and thus excluded here.

The table suggests that the extensive deployment of AI agents in finance could exacerbate the risk of a financial crisis. This risk stems from automated risk assessments, complex automated oversight, increased interconnectedness and incentive misalignment. These risks depend on the extent to which different AI agents are correlated and interact, the visibility into AI agents, the effectiveness of oversight mechanisms and deployment restrictions, and the alignment of AI agents. As the use of AI agents with open goals and limited human intervention is still minimal, there is an opportunity to deploy them responsibly. This responsible deployment would involve implementing specific oversight and visibility mechanisms, with initial examples outlined by Chan et al. (2024).

Table 4: Hypothetical influence of AI agents on a financial crisis

Financial practices
- Inadequate risk assessment. Potential impact of AI agents: automated risk assessments might parse more information but be correlated, biased or manipulated. Depends on: the extent to which different AI agents are correlated and interact in undesirable ways. Current progress on condition: not much; a highly concentrated AI ecosystem built on similar data, with similar training and biases.
- Inadequate risk-sharing. Potential impact of AI agents: limited; potentially more complex AI-driven securitisation.

Regulation
- Complexity complicates oversight. Potential impact of AI agents: increasing complexity, but potential AI use by regulators. Depends on: visibility into AI agents' operations. Current progress on condition: mostly "black-box" AI, with visibility and explainability lagging (Chan et al. (2024), Hassija et al. (2024)).
- Limited oversight of rating agencies. Potential impact of AI agents: limited; potentially easier to scale oversight with AI agents.

Global industry structure
- Interconnectedness of financial institutions. Potential impact of AI agents: interdependent agents with opaque, global interactions, or non-correlated agents identifying interconnections beforehand. Depends on: effective oversight or implementation of "circuit breakers". Current progress on condition: regulations since 2008 on interconnectedness (e.g. Basel accords), limited on AI in finance (Consulich et al. (2023), Chia (2019)).
- Incentive misalignment. Potential impact of AI agents: AI agents could be better or worse aligned with the public's vs. financial professionals' interests. Depends on: alignment of AI agents. Current progress on condition: AI alignment remains a mostly unsolved problem (Christian (2021)).
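To make the role of correlation concrete, the toy simulation below compares aggregate losses when many lenders rely on independent risk models with the case in which they all rely on a single shared model whose errors are perfectly correlated. It is not part of the paper's analysis, and all parameters are illustrative assumptions; the point it illustrates is simply that shared errors fatten the tail of system-wide losses even when each individual model is equally accurate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_banks, n_sims = 100, 10_000
true_default_rate = 0.02   # hypothetical
exposure_per_bank = 1.0    # normalised

def simulate(correlated: bool) -> np.ndarray:
    """Aggregate credit losses when banks' model errors are shared or independent."""
    if correlated:
        # All banks use the same model: one common mis-estimation per scenario.
        error = rng.normal(0, 0.01, size=(n_sims, 1)).repeat(n_banks, axis=1)
    else:
        # Each bank's model errs independently.
        error = rng.normal(0, 0.01, size=(n_sims, n_banks))
    perceived = np.clip(true_default_rate + error, 0, 1)
    # Stylised assumption: losses are realised only when risk is under-priced.
    losses = np.maximum(true_default_rate - perceived, 0) * exposure_per_bank
    return losses.sum(axis=1)

for corr in (False, True):
    agg = simulate(corr)
    print(f"correlated={corr}: 99th percentile of aggregate loss = {np.quantile(agg, 0.99):.3f}")
```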

B Operationalising oversight principles: considerations for an AI chatbot for loan applications

The responsible development and deployment of GenAI and AI agents is important for ensuring the stability and integrity of the financial sector. As AI becomes increasingly integrated into financial operations, specific measures must be taken to address the challenges and risks posed.

As an example of how to specify the principles and general measures mentioned in Section 5, consider the case of an AI chatbot designed to assist with loan applications. Suppose the AI chatbot can a) answer questions on loan applications, b) assess customers' identity and credit-worthiness with connected tools, and c) send personalised loan offers via e-mail. Governing the design, training, deployment and long-term use of such a powerful AI system requires a comprehensive framework that aligns with established regulatory principles and best practices.

The following figure outlines a comprehensive framework for the design, training, testing, deployment and long-term management of chatbots in the financial sector. The figure shows what measures need to be adopted so that the chatbot satisfies the discussed principles. As shown in the figure, the framework highlights key considerations across the chatbot lifecycle, emphasising the importance of coordinated governance, technical documentation and post-deployment monitoring to ensure compliance, mitigate risks, and track the evolving adoption and implications of GenAI and AI agents within the financial services industry.

Figure 3: Example oversight considerations for an advanced AI chatbot for loan applications
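To illustrate what such oversight measures could look like in code, the sketch below wraps the hypothetical loan-application chatbot's most sensitive action (sending a personalised loan offer) behind identity verification, an audit log and a human-approval gate. All class and function names are invented for illustration, and the stubs stand in for real systems; the framework in the figure is broader and covers the full lifecycle.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LoanOffer:
    customer_id: str
    amount: float
    rate: float

class AuditLog:
    """Minimal technical-documentation trail (transparency, accountability)."""
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, event: str, **details) -> None:
        self.entries.append({"time": datetime.now(timezone.utc).isoformat(),
                             "event": event, **details})

class LoanChatbot:
    def __init__(self, model, identity_checker, human_reviewer, log: AuditLog):
        self.model = model                    # e.g. a wrapped language model (assumed interface)
        self.identity_checker = identity_checker
        self.human_reviewer = human_reviewer  # human-oversight gate
        self.log = log

    def answer_question(self, question: str) -> str:
        reply = self.model.generate(question)
        self.log.record("question_answered", question=question)
        return reply

    def send_offer(self, customer_id: str, offer: LoanOffer) -> bool:
        # Safety and privacy: verify identity before using personal data.
        if not self.identity_checker.verify(customer_id):
            self.log.record("offer_blocked", reason="identity check failed", customer=customer_id)
            return False
        # Human oversight: a credit officer approves before anything is sent.
        if not self.human_reviewer.approve(offer):
            self.log.record("offer_blocked", reason="human reviewer rejected", customer=customer_id)
            return False
        self.log.record("offer_sent", customer=customer_id, amount=offer.amount)
        return True

# Stubs standing in for the real model, KYC system and credit officer.
class StubModel:
    def generate(self, prompt: str) -> str:
        return "This is a placeholder answer about loan applications."

class StubIdentityChecker:
    def verify(self, customer_id: str) -> bool:
        return customer_id.startswith("KYC-")   # stand-in for a real KYC check

class StubHumanReviewer:
    def approve(self, offer: LoanOffer) -> bool:
        return offer.amount <= 50_000           # stand-in for a credit officer's review

bot = LoanChatbot(StubModel(), StubIdentityChecker(), StubHumanReviewer(), AuditLog())
print(bot.answer_question("What documents do I need for a mortgage?"))
print(bot.send_offer("KYC-1042", LoanOffer("KYC-1042", amount=25_000, rate=0.049)))
```

The design choice is that the chatbot can answer questions freely, but any action with financial consequences passes through verification, logging and a human decision, mirroring the human-oversight and accountability principles discussed in Section 5.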

