Digital Infrastructure Technology Trends White Paper

Version: V1.0    Date: 2023.05    Author: ZTE

Copyright 2023 ZTE Corporation. All rights reserved.

Copyright Notice: Copyright of this document is owned by ZTE Corporation. No unit or individual may use or disclose the proprietary information of ZTE Corporation, or any images, tables, data, or other information contained in this document, without the written permission of ZTE Corporation.
The information in this document will continue to be updated as technologies evolve.

Contents

1. Foreword
2. Requirements and Challenges for Digital Infrastructure Technology
  2.1. Requirements and Constraints of Future Digital Infrastructure Technology
  2.2. Challenges to the Traditional Technology Development Path
  2.3. Overview of Future Technology Development Path
3. Connectivity
  3.1. Overview
  3.2. Physical Layer (Wireless): 5G-A & 6G Requires More Spatial Multiplexing and Extended Frequency Bands
  3.3. Physical Layer (Optical): Single-Wavelength Rate Improvement, Band Extension, and Spatial Multiplexing
  3.4. Packet Layer: Packet Forwarding Chip Architecture That Takes into Account Both Capacity and Flexibility
  3.5. Application Layer: Video Compression Efficiency Is Further Improved with Neural Network-based Video Coding
  3.6. Interconnection: Replacement of Electrical to Optical
4. Computing Power
  4.1. Overview
  4.2. Chip Architecture: DSA & 3D Stacking & Chiplet
  4.3. Computing Architecture: The Integration of Computing and Storage
  4.4. Computing Architecture: Peer-to-peer Computing
  4.5. Network Architecture: The IP Network Technology That Supports the Convergence of Computing and Networks
5. Intelligence
  5.1. Overview
  5.2. AI Chip: Increase Computing Power/Energy Ratio
  5.3. AI Algorithm: Evolution from Dedicated Small Models to General Large Models
  5.4. AI for Network Automation: Empower Autonomous Networks to a Higher Level
6. Conclusion
7. References

1. Foreword

Technological innovation is the core driving force behind productivity progress and industrial development. Klaus Schwab, founder of the World Economic Forum,
pointed out in his book The Fourth Industrial Revolution that since the 18th century, humanity has experienced four industrial revolutions led by technological innovation.

The first industrial revolution began around 1760, marked by the invention and widespread application of the steam engine and railway, which transitioned humanity from manual labor to mechanized production. The second industrial revolution started in the late 19th century with the wide utilization of electricity, ushering in the era of mass production. The third industrial revolution emerged in the mid-20th century, driven by communication technology, computer technology, and the internet (collectively referred to as information and communication technology, or ICT), leading humanity into an era of automated production.

The current fourth industrial revolution is a continuation of the third, but with exponentially increasing speed, scope, and impact of technological innovation. It is primarily characterized by digitalization and intelligence, with iconic technologies such as the Internet of Things, big data, and artificial intelligence progressively shifting society from digital to intelligent.

Efficient digital infrastructure serves as the fundamental cornerstone of a digital and intelligent society. New applications such as industrial interconnection, holographic communication, the metaverse, and autonomous driving place greater demands on information and communication technologies. However, it is important to note that the development of ICT is built upon breakthroughs in mathematics and physics, such as electromagnetism, quantum mechanics, and information theory, achieved from the late 19th century to the middle of the 20th century. In recent decades, advancements in the basic sciences have decelerated, posing formidable challenges for future technological progress. The traditional technological evolution route faces limitations imposed by Moore's Law, Shannon's Theorem, and carbon emission reduction. Therefore, there is a pressing need for fundamental innovations in basic theory, core algorithms, and system architecture.

This white paper is an interpretation of the future technology development trends of digital infrastructure, jointly prepared by ZTE's Technical Expert Committee. In contrast to typical industry white papers that focus on business models, application visions, and technology requirements, this technical white paper focuses more on the challenges confronting technology development and the paths towards technological realization that overcome them.

Chapter 2 outlines the technical requirements of future business scenarios and proposes three key technical elements of digital infrastructure: connectivity, computing power, and intelligence. The development of these three elements faces the challenges posed by the Shannon limit, the slowdown of Moore's Law, and insufficient understanding of the nature of intelligence, which poses significant challenges to future technological advancement.

Chapters 3 to 5 describe the specific technical trends in three directions: connectivity, computing power, and intelligence. Each direction presents the future technology challenges and solutions, alongside ZTE's technological innovations and predictions for future trends.

Chapter 6 provides a summary of the entire white paper, along with some thoughts on how digital infrastructure capabilities can better serve all industries.

ZTE's technological innovation aligns with the technological development trend and industrial requirements. We will continue to work with our industry partners to promote technological innovation and contribute to the progress towards a digital and intelligent society.

2. Requirements and Challenges for Digital Infrastructure Technology

2.1. Requirements and Constraints of Future Digital Infrastructure Technology
Since the invention of Morse code and the telegraph in 1837, the development of ICT has significantly transformed human lifestyles and production methods. The scale of the global digital economy continues to rise. In 2021, the added value of the digital economy in 47 major countries worldwide reached 38.1 trillion US dollars, a nominal increase of 15.6% over the previous year and 45.0% of GDP [01]. In 2022, China's digital economy reached 50.2 trillion yuan, a year-on-year growth of 10.3%, surpassing the nominal GDP growth rate for 11 consecutive years [02].

Efficient digital infrastructure is a fundamental and essential capability of the digital economy. In the ToC and ToH fields, the outbreak of applications such as short video and live broadcasting, and the popularity of online education and remote working, impose greater demands on network bandwidth and coverage. In the ToB field, the in-depth expansion and integration from ICT into OT (the production domain) also raises expectations regarding network performance, economic viability, security, and reliability.

Digital infrastructure encompasses three basic elements: connectivity, computing power, and intelligence.

Connectivity is the core feature of the Internet. Connection rates have increased from about one character per second on the earliest telegraphs to today's dual-gigabit access (that is, both wireless access and optical access reach gigabit rates) and dozens of Tbps per optical fiber in the backbone network. Wireless communication networks are typically upgraded about once every ten years, each generation increasing the rate by a factor of ten. For 2030 (6G), with the emergence of new services such as holographic communication and the metaverse, the demand for connectivity is expected to increase by one to two orders of magnitude compared with current (5G) technology [04].

In the digital society, computing power has become as essential as water, electricity, and gas. According to an assessment by IDC, Inspur, and Tsinghua University, each one-point increase in the computing power index raises the digital economy and GDP by 3.5‰ and 1.8‰, respectively [05]. According to CAICT (China Academy of Information and Communications Technology), the total computing power of global computing equipment reached 615 EFlops in 2021 and is expected to reach 56 ZFlops in 2030, an average annual growth rate of 65% [06].

With the breakthrough of deep neural network algorithms over the past decade, artificial intelligence has become a driving force for society's advance from digital to intelligent. The premise of digitization is to represent the physical world with mathematical models. Prior to the breakthrough of AI technology, a large number of complex systems in the real world could not be represented by mathematical models. The essence of a deep neural network is to use large-scale interconnected neural nodes to approximate the mathematical models of various complex systems (such as human cognitive systems or highly nonlinear physical systems), which greatly expands the breadth and depth of digital applications.

It is evident that connectivity, computing power, and intelligence are the fundamental technical requirements of future digital applications. The foundation of the future digital society lies in integrated computing-networking infrastructure and intelligent service systems. Under the influence of the data deluge, connectivity, computing power, and intelligence are mutually complementary, with an ever closer relationship and an ever less distinct boundary.

Figure 2.1 illustrates the relationship between various future application scenarios and the three technical elements. These scenarios are derived from the future network application scenarios proposed by the ITU focus group FG-NET-2030 in June 2020 [07][08].

Figure 2.1 Mapping Between Future Scenarios and Three Technical Elements

At the same time, the goal of sustainable development imposes higher requirements for energy conservation and environmental protection. Information flow has the potential to enhance the efficiency of logistics and energy utilization, thereby reducing the overall carbon emissions associated with human activities. For instance, SMARTer 2030, a report by GeSI (Global Enabling Sustainability Initiative), projects that ICTs (Information and Communication Technologies) will contribute to a 20% reduction in global carbon emissions by 2030 [09]. However, it is crucial to address the carbon emissions generated by the ICT industry itself: according to the same GeSI report, the carbon emissions of the information and communications industry are expected to account for 1.97% of global carbon emissions by 2030. Hence, future technological advancements must treat energy conservation and emission reduction as critical constraints.

2.2. Challenges to the Traditional Technology Development Path
From the late 19th century to the middle of the 20th century, human breakthroughs in electromagnetism, quantum mechanics, information theory, and other scientific theories laid the foundation of modern information and communication technology. The three major elements of ICT (connectivity, computing power, and intelligence) have their own development paths, but also show mutual support and synergistic progress.

The main way to improve communication data rates is to develop better algorithms (modulation and demodulation, shaping and compensation, forward error correction, etc.) that approach the Shannon limit. Advanced algorithms bring increased computational complexity, and must rely on progress in microelectronics to obtain stronger digital signal processing capabilities.

The microprocessor, developed in accordance with Moore's Law, has improved its performance by more than a billion times over the past fifty years, and progress in semiconductor technologies has also driven the development of other chips, including digital signal processors (DSPs), network processors, and switching chips for communication. Advances in chip technology have enabled more complex communication algorithms.

AI algorithms have an unprecedented demand for increasingly powerful chips, and distributed AI computing places high requirements on network bandwidth and delay. In turn, increasing bandwidth requires the support of AI technologies, such as physical-layer optimization and lossless network parameter optimization.

It can be seen that the development of each of connectivity, computing power, and intelligence relies on the support of the other factors; conversely, the stagnation of any one technological direction will affect the development of the others. And the future paths of all three technology elements face their own difficulties.

(1) The communication algorithm is approaching the Shannon limit.

The Shannon theorem (C/W = log2(1 + S/N)) reveals the relationship between spectral efficiency (the maximum data rate that can be transmitted per unit bandwidth) and the signal-to-noise ratio. In communication practice, the minimum SNR tolerance is often determined by the pre-defined data rate and channel width. This SNR tolerance represents the limit of the signal quality required to achieve error-free transmission.

In 2001, a research paper pointed out [10] that an LDPC coding algorithm used in wireless communications achieves an SNR tolerance of 0.31 dB, while the Shannon limit in this scenario is 0.18 dB, a difference of only 0.13 dB. This means that the actual signal-to-noise ratio tolerance is only 3% above the Shannon limit (10^0.013 ≈ 1.03).
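This closeness is easy to verify numerically. Below is a minimal sketch: the 0.31 dB and 0.18 dB figures are the ones quoted above, and everything else is standard arithmetic.

```python
import math

def spectral_efficiency(snr_db: float) -> float:
    """Shannon limit C/W = log2(1 + S/N), in bit/s/Hz."""
    return math.log2(1 + 10 ** (snr_db / 10))

achieved_db = 0.31   # SNR tolerance of the LDPC scheme (from the cited paper)
shannon_db = 0.18    # Shannon limit for the same scenario

gap_db = achieved_db - shannon_db        # 0.13 dB
gap_linear = 10 ** (gap_db / 10)         # 10^0.013 ~= 1.03

print(f"gap = {gap_db:.2f} dB, i.e. {100 * (gap_linear - 1):.0f}% above the limit")
print(f"C/W at the 0.18 dB limit: {spectral_efficiency(shannon_db):.2f} bit/s/Hz")
```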
Furthermore, based on test data from ZTE, current optical transmission algorithms using 4/16QAM modulation achieve a signal-to-noise ratio tolerance approximately 1 dB away from the Shannon limit. This implies that future algorithms can only increase the transmission distance by 25% or improve the spectral efficiency by 0.33 bps/Hz.

As a system approaches the Shannon limit, the performance benefit of increasing algorithm complexity diminishes. In many cases, several times the computational workload is required to achieve only a marginal improvement in performance. Therefore, even if future algorithms continue to approach the Shannon limit, their demand for computing power will far exceed the levels achievable through Moore's Law.

(2) Microelectronics is approaching the boundaries set by physics.

Moore's Law, which represents the advancement of microelectronics, is also encountering greater difficulties.

Prior to the 28 nm process, the industry increased the number of transistors per unit area by reducing transistor size, such as gate length. However, transistors cannot continue to shrink due to quantum tunneling, parasitic capacitance, and other issues (a silicon atom is 0.2 nm in diameter, so a gate length of 20 nm spans only about 100 silicon atoms). Therefore, innovative transistor structures, such as FinFET and GAA, have been introduced. However, these complex structures come with higher costs and power consumption, which become limiting factors for advancing process nodes.

It appears that the technological benefits derived from mankind's fundamental theoretical breakthroughs in the microscopic world (such as quantum mechanics) have nearly reached their limits. The current technological revolution primarily utilizes quantum phenomena from a macro-statistical perspective, while the observation and manipulation of physical matter still rely on macroscopic means such as current, voltage, and light intensity.

In order to fully release the potential of quantum technology, it is necessary to accurately control and observe microscopic particles, such as photons, electrons, and cold atoms, and their quantum states. Scientific research in this area is still in its initial stage, and there are significant uncertainties regarding the future development path, methods, and goals.

(3) Intelligence technology lacks the guidance of cognitive science.

Research on artificial intelligence began in the 1950s, shortly after the birth of computers, and the real breakthrough came after the success of deep neural networks in 2006. However, neural network-based artificial intelligence algorithms are only a superficial simulation of the physiological structure of the human brain. The deeper working mechanism of human intelligence, which falls within the scope of cognitive science, has not yet been fully understood, nor has it seen a significant breakthrough.

Current deep learning technology relies heavily on large-scale computing power and data. However, in light of the slowdown of Moore's Law and the increasing need for energy conservation and emission reduction, this technology path is difficult to sustain in the long run. At present, the growth rate of computing power for artificial intelligence is much higher than that of Moore's Law. Particularly with the emergence of large models like Transformers, the computing power required for training has grown by 275 times every two years on average, far exceeding the 2x-per-two-years growth rate of Moore's Law [11]. It is estimated that AI will consume approximately 15% of the world's electricity in the next decade, placing a substantial burden on the environment.
In general, the development of ICT has approached the boundaries set by three fundamental theories: mathematics (Shannon's theorem), physics (quantum mechanics), and cognitive science. Each step forward requires greater resources than ever before, posing a major challenge for the current trajectory of technological evolution.

2.3. Overview of Future Technology Development Path

How to break through technical bottlenecks and build a digital foundation of connectivity, computing power, and intelligence is a major task facing us now. Chapters 3 to 5 of this white paper outline potential paths for future technological development from these three perspectives.

In his book The Nature of Technology, American thinker Brian Arthur proposes that the nature of technology is the collection of phenomena captured and utilized. Technology evolution is similar to biological evolution: it is a process of combinatorial evolution, in which a new technology is a new combination of existing technologies. We believe that in the future, in addition to exploiting the potential of existing technologies, another promising path lies in the collaboration of multiple technologies and the optimization of system architecture.

The architecture of ICT systems, be it computing or network architecture, is characterized by modularity, layering, and decoupling. For example, the Von Neumann computing architecture separates computing and storage, while network architecture employs protocol layering and inter-layer decoupling. The advantage of separation and decoupling is that each module develops independently, facilitating innovation and maintenance. However, when a single module encounters a performance bottleneck, achieving optimal performance for specific services often requires the collaboration and fusion of modules. This collaboration and fusion can lead to performance improvements and reduced power consumption.

In the following chapters, we describe both the deeper exploitation of existing technical paths (for example, the development of new spectrum bands and channels in wireless and optical communications) and the coordination and integration of multiple technologies, such as optical-electrical integration, computing-memory integration, and computing-network convergence.

Table 2-1 provides an overview of the technical development paths in three directions: connectivity, computing power, and intelligence.
Table 2-1 Overview of Future Technology Development Path

Connectivity. In-depth exploitation: improve spectral efficiency toward the Shannon limit; spectrum band extension; space division multiplexing. Coordination and integration: optical-electrical integration; innovative packet forwarding chip architecture.

Computing power. In-depth exploitation: More Moore, continuing to pursue higher transistor density through innovations in transistor structure. Coordination and integration: integration of computing, memory, and network; peer-to-peer distributed systems.

Intelligence. In-depth exploitation: AI chip architecture innovation for a higher computing power/energy consumption ratio; AI algorithms evolving from diversified small dedicated models to general large models. Coordination and integration: intelligent capabilities empowering digital infrastructure, industries, and enterprises.

3. Connectivity

3.1. Overview

Improving connection bandwidth is a key objective of information and communication technology.
Currently, both wireless access (5G) and wired access (10G-PON) are capable of providing users with dual-gigabit access bandwidth. Furthermore, long-distance 400G single-wavelength transmission technology is being deployed in backbone optical networks. As described in Chapter 2, over the next 5 to 10 years, the demand for bandwidth is expected to increase by one to two orders of magnitude.

Enhancing network bandwidth involves not only improving physical-layer transmission capacity but also enhancing data processing capabilities at the packet and application layers. Additionally, the bandwidth of the interconnections between racks and within devices needs to improve accordingly.

(1) Physical layer

The physical layer is based on electromagnetic theory; the field of a modulated carrier can be expressed as:

E(x, y, t) = A · e^{j(2πft + φ)} · ê_p · F(x, y)

where the amplitude A and phase φ carry the modulation symbol, f is the carrier frequency (wavelength), ê_p is the polarization, F(x, y) is the spatial distribution, and the symbol period (the inverse of the baud rate) sets how often A and φ change.

According to the formula, there are five dimensions that can be multiplexed in communication: polarization, spatial distribution, amplitude + phase (QAM, Quadrature Amplitude Modulation), wavelength, and symbol period (baud rate). Polarization, QAM, and baud rate together determine the single-wave rate. Therefore, the total transmission rate can be written as:

Total rate = single-wave rate × number of wavelengths × number of spatial channels, where single-wave rate = number of polarizations × baud rate × log2(QAM order)

Note: this formula is a simplified representation; wireless spatial multiplexing is much more complicated.

The formula indicates that transmission capacity can be increased by enhancing the single-wave rate, expanding the bandwidth, and implementing spatial multiplexing. However, the single-wave rate is limited by Shannon's theorem. To improve the single-wave rate, on one hand, spectral efficiency can be enhanced through high-order modulation, polarization multiplexing, and other technologies, approaching the Shannon limit; on the other hand, the single-wave bandwidth can be increased by raising the baud rate. Band extension expands the frequency band available for communication, while spatial multiplexing increases the number of channels, resulting in a significant capacity boost.
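The formula can be illustrated with a small calculator. The parameter values below are hypothetical and ignore FEC overhead and implementation margins:

```python
import math

def total_rate_tbps(baud_gbd: float, qam_order: int, n_pol: int,
                    n_wavelengths: int, n_spatial: int) -> float:
    """Total rate = single-wave rate x wavelengths x spatial channels,
    where single-wave rate = polarizations x baud rate x log2(QAM order)."""
    single_wave_gbps = n_pol * baud_gbd * math.log2(qam_order)
    return single_wave_gbps * n_wavelengths * n_spatial / 1000.0

# Example: 128 GBd, 16QAM, dual polarization, 80 wavelengths, 1 spatial channel
print(total_rate_tbps(128, 16, 2, 80, 1))   # -> 81.92 Tbps per fiber
```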
Table 3-1 is a brief summary of the foregoing five dimensions in wireless and optical communications. For more detailed information, refer to Section 3.2 and Section 3.3.

Table 3-1 The Status Quo and Future Development of Wireless and Optical

Single-wave rate (dimensions: amplitude and phase (QAM), polarization, baud rate). Wireless status quo: 1024QAM has been standardized, but has not been put into commercial use yet. Wireless trend: higher modulation order, increasing constellation shaping gain, and joint coding-modulation optimization. Optical transmission (long distance) status quo: coherent 4/16QAM modulation is close to the Shannon limit; baud rates of 64-128 GBd. Optical trend: continue to increase the baud rate, and improve the SNR by using new optical fibers/amplifiers.

Band extension / spectrum efficiency improvement (dimension: wavelength). Wireless status quo: 200 MHz carrier aggregation; sub-band full-duplex is being standardized. Wireless trend: CA, full-duplex technology, millimeter wave/terahertz. Optical status quo: the 12 THz spectrum of the C+L bands supports 80 waves at 400G per wave. Optical trend: extension to the S+C+L bands.

Space division multiplexing (dimension: space). Wireless status quo: 64TR/16-stream has been put into commercial use; NCR is being standardized. Wireless trend: eMIMO/beam, distributed MIMO, ultra-large-aperture ELAA, cell-free, RIS, NCRs, etc. Optical status quo: not in commercial use. Optical trend: multi-core fiber and few-mode fiber; multi-core weak coupling may be commercialized first.

(2) Packet Layer

Since the birth of the Internet, packet technology, represented by IP and Ethernet, has been the core of the network domain. The packet processing capability of network devices often becomes a bottleneck for improving network capacity and performance.
82、e Trend:Multi-core fiber andfew-mode fiber;Multi-coreweak coupling may becommercialized first(2)Packet LayerSince the birth of the Internet,the packet technology represented by IP and Ethernet is the core ofthe network domain.The packet processing capability of network devices often becomes abottlen
83、eck for improving network capacity and performance.Effective packet processing requiresa balance between capacity and agility.The performance of packet processing depends not only onthe progress of the chip technology,but also on the improvement of the packet processing chipDigital Infrastructure Te
84、chnology Trends White Paperarchitecture.Section 3.4 describes the evolution of the packet forwarding architecture in thefuture.(3)Application LayerEfforts to improve the video compression ratio are closely linked to communication capacity.Withthe advancements in applications such as XRs and holograp
85、hics,it is expected that video trafficwill account for over 90%of the total Internet traffic by 2030.Section 3.5 describes the utilizationof deep learning in video coding technology.(4)InterconnectionWith the increase in link bandwidth and port density,the interconnection buses of ICT devicesmay bec
86、ome a bottleneck.Optical interconnection offers significant advantages over electricalinterconnection in terms of performance and power consumption.As CPO(Co-Packaged Optics)technology continues to mature,the trend of“optical replacing copper”may emerge withindevices as well.Please refer to Section
87、3.6 for more details.3.2.Physical Layer(wireless):5G-A&6G Requires More Spatial Multiplexingand Extended Frequency Bands.Since the 1980s,mobile communications have gradually evolved from 1G to 5G.Currently,5Ghas been widely deployed worldwide,while the development of 6G is underway.As discussed in C
As discussed in Chapter 2, in response to future service requirements, 6G aims to achieve improvements of one to two orders of magnitude over 5G in core features such as bandwidth, delay, and reliability. The initial 3GPP version of 6G is expected to be released in 2030; prior to that, there will be three to four versions of enhanced 5G technology known as 5G-Advanced.

Regarding spectral efficiency, while low-order and medium-order modulation have approached the single-link Shannon limit, a gap remains at high-order modulation. Additionally, 6G will focus on increasing bandwidth, improving bandwidth utilization, and enhancing spatial multiplexing capabilities. This includes technologies such as carrier aggregation, full-duplex transmission, utilization of higher frequency spectrum (beyond 6 GHz and up to the terahertz range), Non-Orthogonal/Orthogonal Frequency Division Multiplexing (OFDM) and its variations, high-frequency waveforms and sensing waveforms, massive MIMO and extremely large-scale MIMO, Reconfigurable Intelligent Surface (RIS) technology, Network-Controlled Repeaters (NCRs), and more. These technologies represent a fundamental trend wherein increasingly powerful computing capabilities are leveraged to achieve better resource utilization efficiency. Several typical technologies are described below.

(1) Higher-Order Modulation / Constellation Shaping / Coded Modulation Schemes

Currently, modulation schemes reach up to 1024QAM, allowing each symbol to carry 10 bits. To further enhance spectral efficiency, the modulation order may be increased to 4096QAM or even higher. However, at high modulation orders the traditional square QAM constellation is no longer efficient: the higher the target spectral efficiency, the further the square constellation falls from the Shannon limit. Therefore, higher-order modulation techniques based on geometric shaping or probabilistic shaping are expected to approach the Shannon limit more closely, especially in regions with high signal-to-noise ratio (SNR).
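As an illustration of the probabilistic-shaping idea, the minimal one-dimensional sketch below (the lambda value and amplitude set are chosen arbitrarily for illustration) weights constellation amplitudes with a Maxwell-Boltzmann distribution, so low-energy points are transmitted more often and the average transmit power drops for the entropy carried:

```python
import math

def maxwell_boltzmann(amplitudes, lam):
    """P(a) proportional to exp(-lam * a^2): low-energy points sent more often."""
    w = [math.exp(-lam * a * a) for a in amplitudes]
    total = sum(w)
    return [x / total for x in w]

amps = [1, 3, 5, 7]            # per-dimension amplitudes of a 64QAM grid
for lam in (0.0, 0.05):        # lam = 0 reproduces the unshaped (uniform) case
    p = maxwell_boltzmann(amps, lam)
    entropy = -sum(x * math.log2(x) for x in p)           # information per amplitude
    avg_energy = sum(x * a * a for x, a in zip(p, amps))  # mean transmit energy
    print(f"lam={lam}: H={entropy:.2f} bit, avg energy={avg_energy:.1f}")
# Shaping trades a little entropy (2.00 -> ~1.64 bit) for a much lower average
# energy (21.0 -> ~9.2), which is where the shaping gain comes from.
```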
(2) Improving Spectrum Efficiency: Full Duplex and Sub-band Full Duplex

Full-duplex is a new technology that improves network data rates and spectrum utilization. For future high-bandwidth, low-delay services, full-duplex uses unpaired spectrum resources: by relaxing the mutually exclusive restrictions on the use of DL/UL resources, spectrum usage efficiency can be increased and transmission delay reduced. However, to implement full duplex, a base station or terminal must cancel self-interference (SI) in order to transmit and receive simultaneously. The implementation complexity and hardware cost are still relatively high, especially for a massive MIMO transceiver; in practice, therefore, multi-antenna technology is currently mutually exclusive with full-duplex technology.

Current research primarily focuses on configurations with a relatively small number of antennas and on sub-band full-duplex, where separate frequencies are allocated for uplink and downlink resources. This approach allows flexible configuration of more uplink resources, thereby reducing uplink and downlink delays and improving uplink coverage and capacity. Although sub-band full-duplex reduces the interference-cancellation requirements at the base station, mutual interference between user equipment (UE) remains a significant challenge that necessitates industry-wide collaboration.

(3) Expanding More Spectrum: Terahertz Technology

As a potential 6G basic technology, THz refers to the 100 GHz-10 THz frequency band, which offers large, continuous available bandwidth. It will help build 6G short-distance, high-rate transmission systems.

However, terahertz technology has certain drawbacks. Compared to millimeter waves, terahertz frequencies experience significant propagation path loss, and outdoor communication is also susceptible to additional loss from rain and fog. Moreover, limitations such as the low output power of transmitter power amplifiers, the high noise figures of low-noise amplifiers, and the difficulty of designing and manufacturing high-gain antennas greatly restrict the transmission range of terahertz waves.
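The scale of the path-loss problem follows directly from the free-space path loss formula, FSPL = 20·log10(4πdf/c). A quick numeric sketch (free space only; rain, fog, and atmospheric absorption make the terahertz case worse still):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20*log10(4*pi*d*f/c), in dB."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for f_ghz in (28, 300):   # a millimeter-wave band vs. a terahertz-range band
    print(f"{f_ghz} GHz @ 100 m: {fspl_db(100, f_ghz * 1e9):.1f} dB")
# 28 GHz -> ~101.4 dB, 300 GHz -> ~122.0 dB: about 20.6 dB more loss at the
# same distance, before any weather or absorption effects are counted.
```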
Terahertz technology can be combined with multi-antenna systems, using extremely narrow beams to mitigate path fading and extend propagation distances. Additionally, the application of reconfigurable intelligent surfaces (RIS) in the terahertz band is a future development trend: a dense distribution of RIS both indoors and outdoors can have a positive impact on terahertz coverage.

(4) More Space-Division Multiplexing: Extremely Large-Scale Antennas and Distributed MIMO

Extremely large-scale antennas can effectively enhance uplink capacity and the coverage of new frequency bands.

For emerging industrial Internet applications, such as machine vision in modern factories, throughput on the order of Gbps or 10 Gbps is necessary. Potential solutions include increasing the number of antennas or MIMO layers to support more uplink connections in NR (the 5G air interface), enabling more users with MU-MIMO, and introducing more flexible carrier distribution and aggregation. 5G-Advanced supports up to 24 orthogonal demodulation reference signal (DMRS) ports, allowing up to 24 users on common time-frequency resources if each user employs single-stream uplink transmission. Additionally, 5G-Advanced supports more powerful uplink terminals, with a single user capable of supporting up to 8 streams. This greatly improves peak rates and effectively enhances uplink throughput, particularly in dense network deployments.

The future trend in space-division multiplexing emphasizes higher levels of distribution and larger equivalent apertures. It progresses from systems like MTP/eCoMP with a small number of access points (APs), to larger-scale heterogeneous distributed MIMO, and further evolves into cell-free networks with extensive AP scales. Large-scale distributed MIMO systems must address challenges such as time-frequency synchronization, fronthaul bandwidth, and AP power supply.

(5) Improving Channel Coverage Quality: Reconfigurable Intelligent Surfaces (RIS)

Reconfigurable Intelligent Surface (RIS) is a wireless environment optimization technology characterized by low cost, low energy consumption, high reliability, and large capacity. RIS enhances coverage, throughput, and energy efficiency for users at the cell edge through the following approaches:

a. Providing effective reflection propagation paths to avoid coverage holes when the direct propagation paths are blocked.
b. Implementing beamforming for target users, making full use of space diversity and multiplexing gains.
c. Implementing null (zero-point) beamforming toward interfered UEs to suppress inter-cell interference.

In essence, RIS is a distributed spatial multiplexing technology, whereas massive MIMO is a centralized spatial multiplexing technology. Due to its low cost, RIS is easy to deploy on a larger scale.

3.3. Physical Layer (Optical): Single-Wavelength Rate Improvement, Band Extension, and Spatial Multiplexing

High speed, large capacity, and long distance are the most important requirements for optical transmission.
The current 200G PM-QPSK (polarization-multiplexed quadrature phase shift keying) system using the Super C band (ultra-wide C-band) has been widely commercially deployed, and 400G PM-QPSK is expected to be commercially available in 2023.

As discussed in Section 3.1, communication capacity can be enhanced through three technical approaches: single-wavelength rate improvement, band extension, and space division multiplexing. Single-wavelength rate improvement is the most cost-effective method for expanding capacity, while new band expansions, such as the L-band and S-band, effectively double the available spectrum. Furthermore, space division multiplexing has the potential to significantly multiply the capacity of a single fiber.

(1) Single-wave rate improvement

According to Shannon's theorem, increasing the single-wave rate means increasing the spectral efficiency and/or the bandwidth (baud rate). Improving spectral efficiency requires a higher signal-to-noise ratio (SNR) at the receiving end. New fibers (such as G.654 fiber and hollow-core fiber) can reduce loss and nonlinearity and, combined with lower-noise amplifiers, support a doubling of single-channel capacity. Advanced coherent DSP chips employ high-performance modulation and demodulation techniques along with high-coding-gain Forward Error Correction (FEC), allowing the SNR tolerance to approach the theoretical value and the transmission rate to approach the upper limit of the channel capacity.

Regarding the improvement of single-wave bandwidth, the bandwidth of chips and optical devices must be enhanced. This enables an increase in baud rate from 64 GBd to 96 GBd/128 GBd, with continued evolution towards 180 GBd and beyond.

(2) Band extension
Band extension is the primary approach to increasing the capacity of single-mode fiber. Adhering to the principle of increasing the single-wave rate without reducing the number of waves, so that capacity doubles, the C4T, C6T, and C6T+L6T bands are employed for the long-haul modes of 100G, 200G, and 400G, respectively. Currently, the commercialization of long-haul 400G relies on the C+L bands. The next step in capacity improvement will be long-haul 800G, combined with expansion to the S+C+L bands.

Band extension relies on materials technology for new-band optical devices that support a broader range of wavelengths. Examples include amplifiers utilizing Tm/Bi ion or substrate doping processes, 128GBd+ TFLN (thin-film lithium niobate) coherent modulators in optical modules, multi-band external cavity technology in ITLAs (tunable lasers), and multi-band anti-reflection coating designs in WSS (wavelength selective switch) devices.

(3) Space division multiplexing

Space division multiplexing, which increases the number of fiber cores and transmission modes, can greatly improve the capacity of a single fiber. This technical approach can be categorized into multi-core weak coupling, multi-core strong coupling, few-mode weak coupling, and few-mode strong coupling. Among them, multi-core weak-coupling fibers/devices are relatively mature and capable of long-distance transmission. Due to their advantages in energy consumption and size/density, multi-core fibers show particular promise for submarine cable applications. Few-mode weak-coupling fibers have limited transmission distance and may be used in data center interconnects (DCI). Multi-core strong-coupling fibers and few-mode strong-coupling fibers, however, are not expected to be practical in the near future.

3.4. Packet Layer: Packet Forwarding Chip Architecture That Takes into Account Both Capacity and Flexibility

We believe that over the next ten years, the forwarding capability of packet chips will continue to be crucial for improving network bandwidth.
Currently, the industry has released chips with a processing capability of 51.2 Tbps. Following the trend of doubling chip capability every 2 to 3 years, the processing capability of a single chip is estimated to reach 102.4 Tbps by 2025 to 2026, and the maximum processing capability of a single chip is expected to reach 204.8 Tbps by 2030.

Simultaneously, over the next ten years, packet chips will need to enhance their flexible service processing capabilities. This involves strengthening chip programmability to accommodate the innovation of new services, and reducing chip forwarding delay to meet the low-latency requirements of emerging scenarios such as digital twins and the metaverse. Given these business needs, we believe that future chips will rely not only on process technology progress but also on innovations in architecture design and algorithms.

There are currently two mainstream programmable forwarding architectures: (1) the parallel RTC (Run To Completion) architecture; (2) the serial pipeline architecture.

The parallel RTC architecture offers large-capacity tables, a vast instruction space, and the ability to process complex services. However, it has a higher forwarding delay and cannot meet the requirements of low-latency services. The serial pipeline architecture, on the other hand, has relatively low delay and deterministic jitter, but smaller forwarding tables and limited programmability, making it unsuitable for processing complex services.

Figure 3.1 Parallel RTC Architecture and Serial Pipeline Architecture

We propose a new hybrid forwarding chip architecture that combines the parallel and serial architectures. Through orchestration, this architecture dynamically allocates services with different characteristics to the appropriate forwarding engine, thereby meeting the requirements for future network performance, delay, and service expandability.

In low-latency scenarios, all low-latency services are processed by the serial pipelines, while a few complex services, such as those involving large-capacity forwarding table searches, are processed in parallel by the Run To Completion (RTC) cores. In this scenario, since the services processed by the RTC cores are relatively simple and require few instructions, the architecture can still ensure a relatively low processing delay.

In other scenarios, such as general-purpose router scenarios, the services that the chip must process are highly complex and involve searches of multi-level, large-capacity tables. In such cases, the parallel RTC architecture is needed to overcome the limited programmability of purely serial pipelines.

Figure 3.2 Parallel and Serial Hybrid Architecture of the Packet Forwarding Chip

Based on our technical evaluation, if an appropriate service orchestration model is selected, the forwarding delay of the hybrid architecture is essentially equivalent to that of the serial pipeline architecture, and about 40% lower than that of the parallel RTC architecture. The chip area of the hybrid architecture is about 15%-20% smaller than that of either the parallel RTC architecture or the serial pipeline architecture, and its power consumption is about 12%-20% lower. We believe that the serial-parallel hybrid architecture can not only reduce the cost of chip development, but also significantly shorten the development and deployment cycle of new functions and quickly adapt to continuously changing business needs.
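The orchestration step can be sketched as a classifier over service descriptors. This is a conceptual model only; the class, field names, and thresholds below are invented for illustration and are not the actual chip's orchestration logic:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    instruction_count: int   # rough length of the forwarding program
    table_levels: int        # depth of multi-level table lookups

def assign_engine(svc: Service, max_instr: int = 64, max_tables: int = 2) -> str:
    """Map each service onto the engine that suits it: short programs with
    shallow lookups fit the low-latency serial pipeline; complex programs
    with deep, large-capacity lookups go to the parallel RTC cores."""
    if svc.instruction_count <= max_instr and svc.table_levels <= max_tables:
        return "serial pipeline"
    return "parallel RTC"

for svc in (Service("L2 bridging", 24, 1),
            Service("SRv6 policy + hierarchical FIB", 180, 4)):
    print(f"{svc.name} -> {assign_engine(svc)}")
```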
3.5. Application Layer: Video Compression Efficiency Is Further Improved with Neural Network-based Video Coding

Video traffic accounted for over 70% of Internet traffic in 2020. The pursuit of improved video encoding quality and compression efficiency at the same visual quality is a key driving factor in video technology development. Video encoding aims to enhance compression efficiency within an acceptable range of information loss, thereby reducing video transmission bandwidth requirements. This represents another way to work around the limitations imposed by Shannon's theorem.

The Joint Video Experts Team (JVET), jointly established by ISO/IEC JTC1 SC29 and ITU-T SG16 VCEG, released the video coding standard H.266/VVC (Versatile Video Coding) [14] in August 2020. Within the traditional hybrid coding framework, H.266/VVC adopts predictive coding, transform coding, and entropy coding techniques to reduce redundancy in the spatial, temporal, frequency, inter-component, and human-visual-perception domains. Compared with H.265/HEVC, H.266/VVC achieves about 50% bitrate savings at the same visual quality.

However, the complexity of video coding algorithms inevitably grows with ever finer-grained block partitioning methods and coding modes, and with more complex prediction and transformation technologies. It is becoming clear that it is difficult to further improve video compression efficiency with traditional coding technologies alone. Deep learning has achieved great success in computer vision tasks such as image classification and object detection. In recent years, deep learning has defined a new structural paradigm for image/video coding frameworks and significantly improved the performance of image and video encoders.

Neural Network-based Video Coding (NNVC) technologies mainly include: hybrid video coding, which combines traditional video coding with neural network coding; and complete end-to-end neural network video coding.

(1) Hybrid video coding technology

Hybrid video coding integrates deep neural networks into traditional video coding frameworks to further enhance compression performance. One approach in this category uses deep learning to expedite the search over candidate block partitions and prediction modes, thereby reducing search complexity and computational overhead. Another approach focuses on non-standard solutions that aim solely to improve compression efficiency, employing techniques such as super-resolution and post-processing filtering. The former performs super-resolution on the decoded image, producing high-resolution, high-quality reconstructions that effectively enhance coding efficiency; the latter establishes a direct value mapping between reconstructed pixels and original pixels, enhancing the quality of reconstructed images through filter-based strategies.

(2) End-to-end neural network video coding technology
End-to-end neural network video coding leverages deep learning to handle the entire encoding and decoding process. By training neural networks on extensive datasets, these models learn the inherent knowledge required to remove video compression artifacts. The superior compression performance of end-to-end neural network video coding can be attributed to its powerful non-linear transformation and mapping capabilities. Furthermore, the end-to-end neural network encoder optimizes the entire coding loop, mitigating the local optima encountered in the manual design or independent optimization of modules in traditional encoders. This overall optimization enhances the coding performance of the system as a whole.

Although neural network-based video coding can greatly improve compression efficiency, its high decoding complexity makes implementation challenging in the short term. Currently, industry manufacturers are actively studying the joint optimization of traditional video coding and neural network-based video coding. For example, the Exploration Experiments on Neural Network-based Video Coding (EE1) [15] and the Exploration Experiment on Enhanced Compression beyond VVC capability (EE2) [16], carried out by the JVET, combine the compression advantages of traditional predictive-transform coding tools with the quality-improvement advantages of deep neural network methods. The test results show that under the RA and AI configurations, the BD-rate savings for Y, Cb, and Cr are -21.17%, -32.29%, -33.05% and -11.06%, -22.62%, -24.13%, respectively, which indicates that neural network-based enhanced video coding has the technical potential to evolve into the next generation of video coding standards.
3.6. Interconnection: Replacement of Electrical to Optical

For ICT equipment, wider connections mean higher interconnection rates and density with lower per-bit power consumption and per-bit cost. Optical interconnections have unparalleled advantages over electrical interconnections in terms of capacity and power consumption. Therefore, as data rates and connection density increase, the interconnections inside equipment also show the trend of "optical in, copper out."

Optical interconnections offer an additional advantage: they greatly extend the interconnection distance between devices. This means that more switching boards and line cards can be interconnected within a 3-stage CLOS architecture, resulting in a low-cost, low-latency, low-power solution for high-capacity information and communication equipment [17].

CPO (Co-Packaged Optics) technology shrinks optical engines and co-packages them with the main chip. It is a crucial technology for bringing optical interconnects into board-to-board and chip-to-chip links, offering reduced power consumption, improved signal integrity, reduced costs, and other benefits. Compared to front-panel pluggable optical modules (FPP), CPO significantly shortens the distance between the main chip and the optical components, yielding significant cost and power savings. Taking 112G SerDes as an example, when the SerDes PCB trace length is reduced from 1000 mm (CEI-112G-LR) to 50 mm (CEI-112G-XSR), power consumption is reduced by approximately 75% [18]. For CPO in linear links, the elimination of the internal DSP allows even greater reductions in overall cost and power consumption [19].

Figure 3.3 Illustration of the evolution from pluggable optical modules to CPO
Low-power, high-density, high-capacity co-packaged optics represents the future development trend for switch chips. In response to the pressures of power consumption, signal integrity (SI), and cost, industry stakeholders are actively promoting the standardization and industrialization of CPO. Switches with a capacity of 102.4T are expected to be the starting point for large-scale CPO deployment. At the same time, front-panel pluggable optical modules (FPP) continue to evolve and improve through various new technologies. Notably, Linear-drive Pluggable Optics (LPO) has garnered significant attention recently due to its advantages in power consumption and cost over conventional DSP-based (non-linear) pluggable optical modules. However, it remains challenging for LPO to fully cover the existing scenarios of incoherent optical modules; LPO can be seen as a stepping stone towards CPO technology [21]. In conclusion, CPO and pluggable optical modules will coexist for a considerable period of time.

In HPC/AI networks and equipment, there is also significant pressure in terms of power consumption, cost, and latency. Optical I/O, a specific form of CPO, is a promising technology for chip-to-chip interconnection between computing chips such as CPUs, GPUs, and XPUs. It is expected that, at a future 200G channel bandwidth, a power consumption as low as 0.1 pJ/bit can be achieved [23]. With the recent popularity of ChatGPT, CPO in the form of optical I/O is anticipated to be the first to reach large-scale commercial deployment, in HPC/AI networks and devices.
The ultimate goal of CPO development is the monolithic integration of optoelectronics. This aspiration represents the Holy Grail of optoelectronic integration, but it also entails significant challenges.
4. Computing Power

4.1. Overview

With the proliferation of new high-performance computing applications, such as artificial intelligence, privacy computing, AR/VR, and gene testing/biomedical research, the demand for computing power is rising rapidly. For instance, the computational requirements of large AI models are growing at a pace that exceeds Moore's Law.

Figure 4.1 The computing power demand of large AI models is growing much faster than Moore's Law [11]

Since the advent of the microprocessor, growth in computing power has followed Moore's Law: increasing the number of gates per unit chip area to enhance processor performance while reducing cost and power consumption. In recent years, however, this approach has faced growing challenges. Relying on continued miniaturization alone no longer delivers the performance that modern applications demand.

In the post-Moore's Law era, continuous innovation in processes and materials provides opportunities to enhance the computing power of chips. There are two primary approaches:

More Moore: pursuing higher transistor density by innovating transistor structures, such as FinFET and GAA. However, this path poses challenges in terms of cost and power consumption.

Beyond CMOS: exploring new materials and processes beyond CMOS technology; for instance, fabrication processes based on carbon nanotubes, molybdenum disulfide, and other two-dimensional materials, as well as transistors leveraging the quantum tunneling effect. However, this path carries significant uncertainty and will require substantial time to mature.

On the other hand, architectural innovation plays a crucial role in enhancing computing power density and optimizing resource utilization, thereby extending the effective life of Moore's Law. This chapter focuses on the following aspects:

Chip level: domain-specific optimization through collaborative software-hardware design, and the use of 3D stacking and Chiplet technologies to reduce chip design and manufacturing costs. (See Section 4.2)

Computing system level: new computing architectures and paradigms, such as computing in memory, for energy-efficient computing (see Section 4.3); and peer-to-peer system architectures that optimize computing, control, and data paths. (See Section 4.4)

Network level: innovations in network architecture and the integration of computing and networking to enhance the efficiency of computing power resource scheduling. (See Section 4.5)

4.2. Chip Architecture: DSA & 3D Stacking & Chiplet
In their 2019 article "A New Golden Age for Computer Architecture," Turing Award winners John Hennessy and David Patterson propose that as Moore's Law fades, a software-hardware co-design approach known as Domain Specific Architecture (DSA) becomes dominant. This approach defines computing architectures tailored to the problems of a particular domain. Artificial intelligence (AI) chips and the emerging DPUs (Data Processing Units) are typical examples of DSA.

DSA employs efficient architectures designed for specific domains: dedicated memory to minimize data movement, chip resources apportioned between computation and storage according to application characteristics, simplified data types, and domain-specific programming languages and instructions. Compared to Application Specific Integrated Circuits (ASICs), DSA offers similar performance and energy efficiency for the same number of transistors, while retaining flexibility and versatility within its field.

For instance, ZTE's customized AI chip architecture, Quark, abstracts computing resources into tensor, vector, and scalar engines based on the computational characteristics of deep neural networks. It separates computation from control, efficiently scheduling the various processing engine (PE) units through an independent control engine (CE), enabling efficient execution of diverse deep learning computations. Thanks to the customized hardware-software design, DSA can achieve significantly higher performance, up to tens or even hundreds of times that of traditional CPUs, at the same power consumption.

Figure 4.2 Customized Architecture of ZTE Quark

Moore's Law is primarily evaluated in the 2D space of chip manufacturing. As miniaturization becomes more challenging, 3D stacking has emerged as a crucial means to improve chip density. 3D stacking vertically stacks dies without changing the original package area; this design architecture helps address the memory wall problem by enabling better scalability and energy efficiency.

Chiplet technology is considered key to extending Moore's Law. It modularizes chip design and splits large dies into smaller ones, effectively improving yield and reducing complexity. In addition, different chiplets can be manufactured separately as required (for example, the core computing logic can use an advanced process to improve performance, while the peripheral interfaces use a mature process to reduce costs) and then assembled using advanced packaging technologies, which effectively reduces manufacturing costs.
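The yield benefit of splitting a large die can be quantified with the classic Poisson yield model, Y = exp(-A * D0). This is a hedged sketch: the defect density and die areas below are hypothetical, not figures from any specific process:

```python
import math

def die_yield(area_cm2: float, defect_density: float) -> float:
    """Poisson yield model: fraction of defect-free dies, Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * defect_density)

D0 = 0.1   # defects per cm^2 (hypothetical process)
print(f"monolithic 8 cm^2 die:  {die_yield(8.0, D0):.0%}")   # ~45%
print(f"single  2 cm^2 chiplet: {die_yield(2.0, D0):.0%}")   # ~82%
# Four known-good 2 cm^2 chiplets can replace the 8 cm^2 monolithic die;
# testing dies before assembly converts the per-die yield gain into cost savings.
```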
183、roach offers three key advantages:designflexibility,lower costs,and shorter time-to-market.The primary challenge associated with Chiplettechnology lies in interconnection techniques.To address this,the UCIe Industry Alliance wasfounded on March 2,2022.The maturity of the Chiplet industry and the est
184、ablishment of acompleteindustrychainencompassinginterconnectioninterfaces,architecturedesign,manufacturing,and advanced encapsulation are expected.4.3.ComputingArchitecture:The Integration of Computing and StorageThe classic Von Neumann computing architecture follows a paradigm of separating computa
185、tionand storage.However,if the memory access speed fails to keep pace with CPU performance,itcan create a bottleneck known as the memory wall.Google conducted a study on the powerconsumption of its products and discovered that over 60%of the systems power consumption wasattributed to read and write
186、operations between CPUs and memories21.With the advancement ofDigital Infrastructure Technology Trends White Paperbig data and artificial intelligence,the conventional computing architecture is increasingly limitingthe performance of emerging data-intensive applications,necessitating the development
Computing-memory integration technology involves a collaborative design that optimizes computation and memory around application requirements. Its aim is to reduce unnecessary data movement, increase data read and write bandwidth, and improve energy efficiency, thereby overcoming the limitations imposed by the memory wall and power consumption.

Figure 4.3 Three Architectures of Computing-Memory Integration

There are three forms of computing-memory integration architecture: Processing Near Memory (PNM), Processing In Memory (PIM), and Computing In Memory (CIM).
(1) Processing Near Memory
Near-memory computing introduces computing power at the data cache location, generates local processing results, and returns the results directly, reducing data movement, speeding up processing, and improving security. As shown in Figure 4.3, a data logic layer is added to a data-centric application, and cache-side processing is introduced to minimize data migration.

(2) Processing In Memory
PIM integrates a computing engine inside the memory, typically DRAM. The objective is to perform simple processing directly while reading and writing data, without copying the data to the processor; a simple example is converting Celsius to Fahrenheit on the fly. Processing in memory essentially still follows a computing-memory separation architecture, but with memory and computing closely integrated, thereby reducing the overhead caused by data movement. Memory manufacturers are driving its commercialization.

(3) Computing In Memory
CIM embeds computation units into the memory itself and is particularly suited to executing highly parallel matrix-vector products. It has promising applications in machine learning, cryptography, differential equation solving, and more.
Figure 4.4 Computing in Memory Architecture

CIM adopts a unified computing and memory design. Taking the matrix-vector multiply-accumulate operation in deep neural networks as an example, the architecture shown in Figure 4.4 is commonly used. It consists of an input DAC, a storage-cell array, an output ADC, and other auxiliary circuits. The weight data is stored in the storage cells, and the input undergoes DAC conversion to perform read and write operations on the stored data. Following Ohm's law and Kirchhoff's current law, the output currents of the storage cells are accumulated automatically and then sampled by the ADC and converted into the output digital signal, completing the matrix-vector multiply-accumulate operation.
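As a numerical illustration of the analog dot product just described, the following sketch models an idealized crossbar, ignoring DAC/ADC quantization, wire resistance, and device non-idealities; the conductance and voltage values are invented:

import numpy as np

# Idealized CIM crossbar: weights are stored as cell conductances G (siemens),
# the DAC applies the input vector as voltages V on the rows, and by Ohm's law
# each cell contributes I = G * V; Kirchhoff's current law sums the currents
# down each column, so the column currents equal the matrix-vector product.
G = np.abs(np.random.randn(4, 3)) * 1e-6   # 4x3 conductance array (hypothetical)
V = np.array([0.1, 0.3, 0.2, 0.4])         # input voltages from the DAC

I_col = V @ G          # column currents: I_j = sum_i V_i * G_ij
print(I_col)           # what the ADC would sample, one value per column

assert np.allclose(I_col, G.T @ V)  # identical to a digital matrix-vector product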
4.4. Computing Architecture: Peer-to-Peer Computing

The traditional computing system is built around the CPU. Surging business demands place ever higher requirements on system processing power, yet data exchanged between accelerators usually has to pass through the CPU, which easily becomes a bottleneck and limits efficiency.

Peer-to-peer systems based on the xPU (a data-centric processing unit) can establish a new type of distributed computing architecture. As shown in Figure 4.5, a peer-to-peer system is formed by interconnecting multiple nodes with similar structures. Each node has an xPU at its core, which attaches various heterogeneous computing resources such as CPUs, GPUs, and other computing chips. The primary function of the xPU is to access and interconnect the heterogeneous computing resources within the node and across other nodes. The general-purpose processor core inside the xPU manages and schedules computing resources within the node. The CPU is no longer the central component of the node: the CPU, GPU, and other computing chips are placed on an equal footing, and tasks are allocated by the xPU based on the characteristics and capabilities of each computing chip.

A new transmission protocol based on memory semantics is used within and between nodes in a peer-to-peer system. Compared with existing transport protocols such as TCP and RoCE, memory-semantics-based transport protocols offer advantages such as low latency and high scalability.

Figure 4.5 Peer-To-Peer Computing System

A server based on the peer-to-peer computing architecture can be seen as a distributed computing system, which facilitates independent planning and development of each node in the industry chain, allowing each to leverage its strengths. By utilizing peer-to-peer memory-semantic interconnections, the system can be expanded smoothly, treating the vast distributed computing power as a single computer.
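The scheduling role of the xPU can be illustrated with a toy model. The class names, the task-affinity rule, and the least-loaded policy below are invented for illustration and are not a real xPU interface:

from dataclasses import dataclass

@dataclass
class Engine:
    kind: str        # "cpu", "gpu", ... (heterogeneous resources on a node)
    load: int = 0

@dataclass
class Task:
    name: str
    prefers: str     # the engine kind this task runs best on

class XPU:
    """Per-node fabric port: places tasks on local or remote engines directly,
    with no CPU in the forwarding path."""
    def __init__(self, engines):
        self.engines = engines
    def place(self, task, peers):
        # search local engines first, then peers, picking the least-loaded match
        candidates = [e for x in [self] + peers for e in x.engines
                      if e.kind == task.prefers]
        best = min(candidates, key=lambda e: e.load)
        best.load += 1
        return best

node_a = XPU([Engine("cpu"), Engine("gpu")])
node_b = XPU([Engine("gpu"), Engine("npu")])
print(node_a.place(Task("render", prefers="gpu"), peers=[node_b]).kind)  # gpu

The design choice this mimics is that placement is decided per task by the xPU fabric rather than funneled through a central CPU, which is what lets the cluster behave like a single computer.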
4.5. Network Architecture: The IP Network Technology That Supports The Convergence of Computing and Networks

With the development of edge computing, computing power resources are increasingly deployed in a distributed manner. Given that the growth of network bandwidth is constrained by Shannon's theorem, the improvement of computing capability is constrained by Moore's Law, and the need for energy conservation keeps growing, effective scheduling of network and computing power resources with fine-grained system operation becomes inevitable. By leveraging high-speed, flexible, and intelligent networks, the convergence of computing and networks can integrate distributed computing power nodes across different regions, provide open computing power services, and enhance the efficient utilization of computing and network resources.

The integration of computing and networks is driven by two main factors. First, on the demand side, the scheduling of computing power and networks must be coordinated to meet services' unified demand for computing resources and network connectivity. For instance, high-resolution VR cloud games require not only computing resources from dedicated graphics processing units (GPUs) for rendering but also deterministic network connections to keep end-to-end latency within 10 ms. Second, on the supply side, the deep convergence of computing and networks, leveraging the ubiquitous and distributed nature of network facilities, enables the distributed deployment of computing power resources to meet diverse application requirements in terms of latency, energy consumption, and security.
The convergence of computing and networks poses challenges to IP network technologies. From the perspective of the Internet architecture, "computing" is typically associated with upper-layer applications, while "network" relates to lower-layer connections. IP technology, positioned in the middle layer, plays a crucial role in connecting the upper and lower layers. The design of traditional IP networks follows a layered, end-to-end principle, which allows services to be developed independently of networks, lowering the threshold for service innovation and facilitating rapid service deployment. However, this principle also leaves services operating in a best-effort mode, decoupled from the underlying networks.

Consequently, it is challenging for future IP networks to bridge the gap between services and networks and realize coordinated, fine-grained management of computing power resources and network resources. To address this challenge, ZTE proposes an innovative architecture: the Service Awareness Network (SAN) [25].
211、ge,ZTE proposes an innovativearchitecture Service Awareness Network(SAN)25.Figure 4.6 Service-Aware Network(SANs)ArchitectureThe architecture of a service-aware network is illustrated in Figure 4.6.The core concept is toencapsulate the computing power resources and network resources provided by serv
212、ice providersas services and assign a unique service identifier to each service.These services can bedynamically deployed anywhere within the network as needed.A service sub-layer is introduced atthe IP network layer to enable service perception,routing,and scheduling.As a result,aservice-aware netw
213、ork encompasses three core design elements:(1)A service ID that can be recognized horizontally from terminal to network to cloud andvertically from applications to network facilitiesA service ID can either identify a connection type of service(i.e.,providing network resources toestablish an end-to-e
214、nd connection from a terminal to the cloud)or a computing type of service(i.e.,providing computing power resources to compute)and it receives unified service governancefrom terminals,networks,and clouds.The application layer can directly use a service ID to initiatea location-independent transport l
215、ayer connection,without the need for DNS domain nameresolution.Such an operation greatly reduces the service response time and supports mobility bynature.Digital Infrastructure Technology Trends White Paper(2)A service sub-layer(3.5 layer)that is introduced at the IP network layer to implementnetwor
216、k-centric service interconnection.The introduction of an identifier-centric service sub-layer to traditional IP host routes offersseveral advantages.It enables the network to perceive the computing power requirements ofservice consumers and the computing power resource status of service providers.Le
217、veragingservice routing,service requirements can be adequately satisfied by available resources,promoting the evolution of network interconnection from host-centric to service-centric.(3)Aconnection sub-layer with enhanced capabilityThe connection sub-layer enhances the basic capabilities of the net
218、work,including the ability toprovide deterministic connections and intrinsic security.The connection sub-layer ensures that thenetwork meets the connection-level QoS requirements of the service.In summary,the service-aware network provides computing power services and network servicesin a unified ma
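The following toy sketch shows how the three elements could combine: a client addresses a service ID, and the service sub-layer selects an instance by jointly checking the connection requirement and the computing-power status. The table layout, field names, and selection policy are invented for illustration and are not part of the SAN specification [25]:

# Toy service routing: the client names a service ID (no DNS step), and the
# service sub-layer picks an instance anywhere in the network using both the
# connection requirement (RTT bound) and computing-power status (free GPU).
service_table = {
    "svc:render/4k": [
        {"node": "edge-1",  "rtt_ms": 3,  "gpu_free": 0.2},
        {"node": "metro-2", "rtt_ms": 8,  "gpu_free": 0.7},
        {"node": "cloud-9", "rtt_ms": 25, "gpu_free": 0.9},
    ],
}

def route(service_id, max_rtt_ms):
    # filter by the connection-level requirement, then prefer the instance
    # with the most available computing power
    ok = [i for i in service_table[service_id] if i["rtt_ms"] <= max_rtt_ms]
    return max(ok, key=lambda i: i["gpu_free"])["node"] if ok else None

print(route("svc:render/4k", max_rtt_ms=10))  # metro-2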
In summary, the service-aware network provides computing power services and network services in a unified manner and achieves efficient scheduling of computing power resources and network resources. This approach not only guarantees quality of service but also meets the requirements for energy saving and emission reduction.

5. Intelligence

5.1. Overview

AI technology serves as the driving force propelling humanity into the era of intelligence. Recognizing the significance of AI technologies in leading the new wave of industrial transformation, many countries around the world have actively promoted intelligent infrastructure construction and research across various fields.

The fundamental elements of AI technology are computing power, algorithms, and data. Data is intricately linked to specific business domains, and the establishment of an open, shared, and circulatable data resource system is crucial for a digital society. Computing power and algorithms form the foundational capabilities that digital infrastructure should possess.

The advancements in AI technology since 2016 have been remarkable. However, certain bottlenecks must still be overcome to meet people's expectations. For instance, achieving powerful intelligent capabilities often necessitates complex algorithms and extensive computing power, resulting in high costs, energy consumption, and environmental pressures. While dedicated artificial intelligence for specific domains has demonstrated performance superior to human abilities, general intelligence is still in its nascent stages. As discussed in Chapter Two, breakthroughs in cognitive science are yet to be achieved, and there is a lack of theoretical guidance in the development of AI technology.

As the requirements for AI computing power surpass the scope of Moore's Law, the industry faces the important task of building more efficient AI chips. Section 5.2 delves into the innovative directions of AI chip architectures for attaining higher computing power/energy ratios. The success of ChatGPT has positioned large models as a promising research direction for Artificial General Intelligence (AGI). Section 5.3 highlights the trends in large model technology and its expanding applications, which may evolve into a new platform layer; Model-as-a-Service (MaaS) emerges as a potential business model, offering universal AI capabilities for diverse scenarios. Additionally, Section 5.4 examines network intelligence as a use case for intelligent infrastructure. The telecom industry has long been intrigued by how AI enables network operation and maintenance and facilitates the digital transformation of the network itself. With the support of more efficient AI computing power and new algorithms such as large models, network intelligence is anticipated to progress from the current L2-L3 level to the L4-L5 level in the near future.
5.2. AI Chip: Increase Computing Power/Energy Ratio

As described in Chapter 2, the rapid increase in the energy consumption of AI computing will place a heavy burden on the environment, so continuous research on more efficient AI chips is necessary. There are two feasible directions for achieving a high TOPS/W (computing power/energy ratio) in AI chips: spatial computation and approximate computation.

(1) Spatial computation
The power consumption of an AI chip is positively correlated with the distance that data travels inside the chip. With innovative chip architecture design, the energy consumption of the chip can be significantly reduced by minimizing the distance that each operation's data needs to travel.

Dividing a large computing core into multiple smaller cores can effectively reduce the average distance data needs to move, thereby reducing energy consumption, and this has become the design trend for new AI chips. However, such multi-core parallel computing introduces additional overhead, reducing computational efficiency. Spatial computation is a collaborative design of hardware and software architecture: a computing task is divided into multiple subtasks, the subtasks are assigned to different computing cores, and the data transmission paths between tasks are planned to minimize data movement distance, aiming for optimal performance at the lowest power consumption.
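A minimal sketch of this task-splitting idea follows, using a row-tiled matrix multiplication and a deliberately crude cost model (bytes of operands each core must fetch); real spatial compilers also plan inter-core routes and operand reuse:

import numpy as np

# Split one matrix multiplication into per-core tiles so each core's slice of
# A stays local, and count the bytes each core must move. The model exposes
# the overhead the text mentions: B is re-fetched by every core.
def tiled_matmul(a, b, cores):
    n = a.shape[0]
    tile = n // cores
    out = np.empty((n, b.shape[1]))
    moved = 0
    for c in range(cores):                 # one row-block of A per core
        rows = slice(c * tile, (c + 1) * tile)
        out[rows] = a[rows] @ b            # core c only touches a[rows] and b
        moved += a[rows].nbytes + b.nbytes
    return out, moved

a, b = np.random.randn(64, 64), np.random.randn(64, 64)
y4, m4 = tiled_matmul(a, b, cores=4)
assert np.allclose(y4, a @ b)
print(m4)  # grows with core count: the overhead the compiler must plan around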
To implement multi-core spatial computation, hardware and software need to be co-designed. In hardware, the computing cores can add support for common communication patterns of AI parallel computing, such as Scatter, Gather, and Broadcast, optimizing the topology and dynamic routing capability of the on-chip network. In software, because spatial-computation optimization is too complex to be borne solely by developers, the compiler needs to automatically divide tasks, assign them, and plan routes, while the runtime handles anomalies such as packet loss, disorder, and congestion.

One evolutionary path for future spatial computation is in-memory computing, which can divide a macro computing core into tens of thousands of micro computing cores rather than just hundreds of mini cores. In this architecture, the average movement distance of data is further reduced to the micrometer scale, and power efficiency can exceed 10 TOPS/W at INT8. For example, Untether AI's Boqueria chip has more than 300,000 processing elements and reaches 30 TFLOPS/W at FP8 [26].
Another evolutionary path for spatial computation is deterministic design. For example, Groq's tensor streaming processors (TSPs) use a deterministic hardware design [27]: the compiler can precisely schedule computation, memory access, and data transmission on each core to avoid access conflicts on shared resources.

(2) Approximate computation
One characteristic of deep learning models is that they do not require high precision; errors that occur during computation do not significantly affect the final outcome of the model. Approximation algorithms reduce memory usage and computational complexity, making computation more efficient.

Low-precision computing is an important technical direction for deep learning. Using low-precision data types reduces chip area and energy consumption. For example, INT8 multiplication and addition operations consume only about 1/30 and 1/15, respectively, of the energy of their 32-bit floating-point (FP32) counterparts [28]. In current mixed-precision training, FP16 half-precision and FP32 single-precision floating-point numbers may be used together to complete model training.

Since inference requires less precision, the model can be converted to a lower-precision data type after training, a technique called model quantization. INT8 quantization technology has matured significantly, while INT4 quantization still faces some challenges.
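For concreteness, a minimal post-training INT8 quantization sketch (symmetric, per-tensor scaling; production toolchains typically add per-channel scales, calibration data, and zero-points):

import numpy as np

# Symmetric per-tensor INT8 quantization: map trained FP32 weights to int8
# with a single scale, then dequantize for use at inference time.
w = np.random.randn(256, 256).astype(np.float32)   # trained FP32 weights

scale = np.abs(w).max() / 127.0                     # symmetric range mapping
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq  = w_int8.astype(np.float32) * scale          # dequantize at inference

print("max abs error:", np.abs(w - w_deq).max())    # small relative to |w| max
print("bytes saved:", w.nbytes - w_int8.nbytes)     # 4x smaller weight storage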
Another type of approximate computation is sparse computation. It has been observed that the weights of deep learning models are sparse, meaning some weights are zero or very close to zero; in Transformer models sparsity is even more prevalent. Exploiting this sparsity can eliminate unnecessary computations, thereby improving the efficiency of model computation. For instance, the 2-out-of-4 sparse acceleration in Nvidia A100 GPUs can double the chip's equivalent computing power at the same energy consumption [29]. In the future, coordinated software and hardware approaches to sparse computation will remain a promising technology direction.
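The 2-out-of-4 pattern can be reproduced in a few lines: in every contiguous group of four weights, keep the two largest magnitudes and zero the rest, which is the structure the A100's sparse tensor cores exploit [29]. A minimal sketch:

import numpy as np

def prune_2_of_4(w):
    flat = w.reshape(-1, 4).copy()
    # indices of the two smallest-magnitude entries in each group of four
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]
    np.put_along_axis(flat, drop, 0.0, axis=1)      # zero them out
    return flat.reshape(w.shape)

w = np.random.randn(8, 8)
ws = prune_2_of_4(w)
assert (np.count_nonzero(ws.reshape(-1, 4), axis=1) <= 2).all()
print(np.count_nonzero(ws) / ws.size)  # 0.5: half of the weights removed

Because the sparsity is structured (fixed per group of four) rather than random, the hardware can skip the zeroed multiplications with a compact index, which is what makes the speedup practical.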
In the next 10 years, improving energy efficiency through manufacturing processes will become increasingly challenging. Spatial computation and approximate computation have significant potential to raise the energy efficiency of chips: compared to current mainstream AI chips, these approaches can increase chip efficiency by dozens of times, providing a powerful guarantee for the AI industry to achieve the dual-carbon goal.

5.3. AI Algorithm: Evolution from Dedicated Small Models to General Large Model

The nature of an AI algorithm is to provide a mapping between the real world and the digital world, and the quality of the algorithm depends on how accurately its mathematical model represents the real problem. From early statistical machine learning to CNNs, BERT, and the Transformer, and on to the most recent GPTs, models have grown ever larger and matched the real world ever better. In particular, the emergence of GPTs such as ChatGPT and GPT-4 has revolutionized the field of AI. Large models have become the development trend of artificial intelligence algorithms and have kicked off the development of artificial general intelligence.

(1) Basic model behind AIGC: the Transformer
In 2017, Google invented the Transformer, a new deep learning model based on the attention mechanism (a minimal sketch of which follows), initially used only for machine translation. In 2018, BERT [30] divided training for a single task into two stages, task-independent pre-training and task-related fine-tuning, making the Transformer a universal model capable of handling multiple language tasks. In the same period, the Transformer-based OpenAI models used a different pre-training idea from BERT: only the Transformer's decoder is used to pre-train a language model. This likewise proved the model's universality and achieved better results as data and model scale expanded. In 2020, GPT-3 was born, the first model with over one hundred billion parameters, triggering a computing power arms race.
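For reference, the core of the attention mechanism mentioned above fits in a few lines. This is single-head scaled dot-product attention, reduced to its essentials, without masking or the learned projection matrices used in a full Transformer:

import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the keys
    return weights @ V                             # weighted mix of the values

n_tokens, d = 5, 16
Q = K = V = np.random.randn(n_tokens, d)           # self-attention: same sequence
out = attention(Q, K, V)
print(out.shape)  # (5, 16): one context-aware vector per token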
(2) Multi-modal large model: CLIP
The Transformer has become the universal model for natural language processing, but can its versatility extend beyond language tasks? In 2020, ViT [31] demonstrated that the Transformer can handle image tasks better than traditional convolutional neural networks (CNNs). The CLIP model proved that the same Transformer model can process data in both the natural language and image modalities. Subsequently, Chinese researchers proposed tri-modal models, and applications such as text-to-image are emerging. In 2022, the open-source Stable Diffusion [32], based on the diffusion model, generates clear, high-resolution images, further expanding the application scenarios of AIGC.

(3) Reinforcement Learning from Human Feedback: ChatGPT
OpenAI took an in-depth look at the potential of the GPT-3 model after it was developed. In 2021, Codex replaced natural language with source code as the training corpus, so that the Codex model (also based on GPT-3) could generate code. In 2022, GPT-3.5 was trained on a mixture of natural language and source code, giving the model the chain-of-thought capability. InstructGPT [33] uses human feedback to make the content generated by the model better aligned with human values. All of these led to ChatGPT, which enhanced the ability to model conversation history, capture user intent effectively, maintain contextual understanding for continuous conversations, and extract useful knowledge from massive amounts of data and apply it logically.

(4) Large models stimulate industry applications
With the release of the GPT-4 large model and its leap in performance, large models are expected to find further applications in various fields. With their authenticity, diversity, controllability, and composability, large models are expected to help enterprises improve the efficiency of content production and provide more diversified, dynamic, and interactive content.

Figure 5.1 Timeline for Large Model Progress and the Associated Applications [34]
The large model represents a breakthrough in deep learning technology. Its most significant advantage over traditional deep learning algorithms lies in its exceptional universality. Unlike traditional models, which can only handle a single task, large models can perform many tasks. This addresses the fragmentation of artificial intelligence applications seen in recent years, reducing the cost of migrating across different scenarios. The universality of large models allows a single trained model to accomplish dozens of tasks or more, and their in-context learning ability enables them to acquire new tasks without re-training. This universality positions large models as a new platform, empowering a wide range of applications at higher levels.
5.4. AI for Network Automation: Empower Autonomous Network to Higher Level

Network automation refers to the implementation of automatic configuration, fault self-healing, and automatic optimization in networks, ensuring flexible service provisioning, high reliability, and high performance.

In 2019, TM Forum introduced the concept of Autonomous Networks in response to the communication industry's requirements. In 2022, it released the white paper "Autonomous Networks: Empowering Digital Transformation from Strategy to Implementation". TM Forum has put forward the "three Zeros, three Selfs" vision, which aims to achieve three "Zero" user experiences (Zero Wait, Zero Touch, Zero Trouble) by implementing three "Self" capabilities (Self-Serving, Self-Fulfilling, Self-Assuring) at the network O&M layer, as shown in Figure 5.2.

Figure 5.2 TMF Vision of Autonomous Networks

TM Forum has also proposed autonomy upgrading standards, categorized into six levels (L0 to L5) and six dimensions (Execute, Perception, Analysis, Decision, Intention/Experience, and Application), as depicted in Figure 5.3.

Figure 5.3 Autonomous Network Levels

To reach the L4 level of an autonomous network, the key lies in integrating AI algorithms into scenarios such as network self-configuration, fault self-healing, and quality self-optimization.

(1) Intent-Based Closed Loop to Support Network Self-Configuration
Currently, services are predominantly configured manually. As intent technology matures, service parameters and system parameters can be configured automatically through customer intent perception, intent translation, and closed-loop verification based on AI algorithms (a toy sketch follows). Intent-based closed-loop network self-configuration provides a superior zero-wait, zero-touch experience in both consumer (ToC) and business (ToB) scenarios.
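A toy sketch of such a closed loop: perceive an intent, translate it into parameters, apply them, verify against the intent, and self-correct on failure. The intent schema, translation rule, and KPI stub below are all invented for illustration:

# Intent-based closed loop: translate -> apply -> verify -> adjust and retry.
def translate(intent):
    # map a customer intent to candidate service parameters (hypothetical rule)
    return {"bandwidth_mbps": intent["min_mbps"], "path": "primary"}

def measure(config):
    # stand-in for live KPI collection after the configuration is pushed
    return config["bandwidth_mbps"] * (0.8 if config["path"] == "primary" else 1.0)

def closed_loop(intent, retries=3):
    config = translate(intent)
    for _ in range(retries):
        if measure(config) >= intent["min_mbps"]:   # verification step
            return config
        config["path"] = "alternate"                # self-correcting adjustment
        config["bandwidth_mbps"] *= 1.25
    raise RuntimeError("intent could not be satisfied")

print(closed_loop({"min_mbps": 100}))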
(2) Multi-Dimensional Data Analysis to Support Self-Healing of Network Faults
At present, fault recovery primarily relies on aggregating and analyzing multi-dimensional data such as alarms, performance KPIs, logs, and service indicators to generate events, with closed-loop fault handling implemented through intelligent event management. In the foreseeable future, the integration of multi-modal large models (e.g., NLP, network, and visual models), combined with richer dimensional data, end-to-end training, and knowledge extraction technologies, will significantly improve O&M accuracy and expand the range of O&M scenarios.

(3) Interpretable Algorithms to Support Network Quality Self-Optimization
The interpretability of AI algorithms is crucial for autonomous networks. An interpretable algorithm allows people to understand the decision-making process, including the reasons, methods, and content of the decisions made by the algorithm model. Autonomous networks play a vital role in the telecommunications field and directly influence service quality; a problematic algorithm recommendation may lead to complaints. Interpretability helps users safely deploy algorithm recommendations in production environments. Additionally, O&M personnel can comprehend the decisions made by the model and identify causes of deviation, optimizing and enhancing model performance.

In the long term, with the support of future communication technologies, big data, and computing power networks, autonomous networks will evolve step by step toward the full-stack L5 level, ultimately achieving complete self-X autonomy and fulfilling the zero-X objectives of zero wait, zero touch, and zero trouble.
6. Conclusion

Since the 18th century, technology has been one of the core factors of production. Over the past 20 years, with the rapid development of emerging ICT technologies such as cloud computing, artificial intelligence, and mobile communication, human beings have become increasingly capable of mining information and acquiring knowledge from massive amounts of data. Data, along with other production factors, will drive the high-quality growth of the digital economy and empower the construction of the digital society.

By 2030, the digital infrastructure of connectivity, computing power, and intelligence will serve as the foundation of the digital era. Expanded connectivity will enable new high-bandwidth applications, such as the metaverse and 3D holographic communication. Enhanced computing power will support the storage of vast amounts of data and enable real-time processing. Increased intelligence will inject powerful capabilities into the digital infrastructure, further promoting the evolution of communication networks toward intelligent networks, the digital economy toward an intelligent economy, and the digital society toward an intelligent society.

Figure 6.1 Digital Nebula Empowers Digital Transformation

Expanded connectivity, enhanced computing power, and increased intelligence need to operate in close coordination and support one another. The key capability lies in forming a complex software system that can integrate, coordinate, and empower new ICT technologies as needed. In 2022, ZTE launched Digital Nebula (DN), followed by DN 2.0 in 2023 [35], aiming to build a cloud-native, service-oriented, and data-driven digital solution and platform that fully integrates and leverages the digital infrastructure of connectivity, computing power, and intelligence. Industrial customers can use DN to further develop their own digital platforms, resolve the tension between diversified applications, unified governance, and efficiency improvement, and achieve resilient services, scalable systems, and cost reduction.

ZTE adheres to its positioning as a Driver of the Digital Economy and embraces the concept of openness and win-win. As a provider of digital infrastructure products and technologies, ZTE offers world-leading cloud, network, edge, terminal, software, and industrial products, and actively shares its core atomic capabilities to assist carriers and large enterprises. Additionally, ZTE supports the rapid growth of SMEs and promotes coexistence and win-win relationships with ecosystem partners.

We expect that the release of this white paper will facilitate further in-depth communication, and we sincerely solicit feedback on the technological development of ICT.
7. References

[1] China Academy of Information and Communications Technology: White Paper on Global Digital Economy (2022), December 2022
[2] China Academy of Information and Communications Technology: Report on the Development of China's Digital Economy (2023), April 2023
[3] Chinese government network: The Digital China Construction Overall Deployment Plan, http:/ 5743484.htm
[4] Fang Min, Duan Xiangyang, Hu Liujun: 6G Technology Challenges, Innovation, and Outlook, ZTE Technology, June 2020, Issue 3
[5] IDC & Inspur & Tsinghua: Global Computing Index 2021-2022
[6] China Academy of Information and Communications Technology (CAICT): White Paper on China's Computing Power Development Index (2022)
[7] ITU-T FG-NET2030: Representative Use Cases and Key Network Requirements for Network 2030, January 2020
[8] ITU-T FG-NET2030: Additional Representative Use Cases and Key Network Requirements for Network 2030, June 2020
[9] GeSI: SMARTer2030, ICT Solutions for the 21st Century, 2015
[10] T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes, IEEE Transactions on Information Theory, Vol. 47, No. 2, February 2001
[11] Amir Gholami, https:/
[12] Tay, Y., Dehghani, M., Bahri, D., & Metzler, D. (2022). Efficient transformers: A survey. ACM Computing Surveys, 55(6), 1-28
[13] Strubell, E., Ganesh, A., McCallum, A. Energy and policy considerations for deep learning in NLP. https://arxiv.org/abs/1906.02243
[14] ISO/IEC 23090-3, Information technology, Coded representation of immersive media, Part 3: Versatile video coding, First edition, 2021-02
[15] JVET-AB2023, EE1: Summary of Exploration Experiments on Neural Network-based Video Coding
[16] JVET-AB2024, Exploration Experiment on Enhanced Compression beyond VVC capability (EE2)
[17] Alexey Andreyev, Xu Wang, Alex Eckert, Reinventing Facebook's data center network, March 14, 2019
[18] M. LaCroix et al., "A 116Gb/s DSP-Based Wireline Transceiver in 7nm CMOS Achieving 6pJ/b at 45dB Loss in PAM-4/Duo-PAM-4 and 52dB in PAM-2," ISSCC, pp. 132-133, Feb. 2021
[19] ODCC-2022-0300 A, White Paper on 112G Linear Optical Interconnection Solution, P7, 2022-09
[20] Rakesh Chopra, Looking Beyond 400G, P5, TEF2021, January 25, 2020
[21] Janet Chen, Meta; Rob Stone, Meta: Perspective on Linear Drive Pluggable Optics, OIF 2023.123.01
[22] William Dally, Accelerating Intelligence, P60, GTC China, December 14, 2020
[23] LightCounting comments on CPO panel discussion at Photonics West, "Our industry is at a crossroads", February 2023
[24] A. Boroumand et al., Google workloads for consumer devices: Mitigating data movement bottlenecks, Proc. 23rd Int. Conf. on Architectural Support for Programming Languages and Operating Systems, 2018
[25] ZTE Corporation: White Paper on Future Evolution of IP Networks (2.0), August 2022
[26] Beachler, R., Snelgrove, M. Untether AI: Boqueria. Proceedings of 2022 IEEE Hot Chips 34 Symposium (HCS). IEEE, 2022: 1-19. DOI: 10.1109/HCS55958.2022.9895618
[27] Abts, D., Kim, J., Kimmell, G., et al. The Groq Software-defined Scale-out Tensor Streaming Multiprocessor: from chips-to-systems architectural overview. Proceedings of 2022 IEEE Hot Chips 34 Symposium (HCS). IEEE, 2022: 1-69. DOI: 10.1109/HCS55958.2022.9895630
[28] Horowitz, M. 1.1 Computing's energy problem (and what we can do about it). Proceedings of 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC). IEEE, 2014: 10-14. DOI: 10.1109/ISSCC.2014.6757323
[29] Pool, J. Accelerating inference with sparsity using the NVIDIA Ampere architecture and NVIDIA TensorRT. 2022-10-12. https:/
[30] Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805
[31] Dosovitskiy, A., Beyer, L., Kolesnikov, A., et al. An image is worth 16x16 words: Transformers for image recognition at scale. 2022-10-12. https://arxiv.org/abs/2010.11929
[32] Borji, A. (2022). Generated faces in the wild: Quantitative comparison of Stable Diffusion, Midjourney and DALL-E 2. arXiv preprint arXiv:2210.00586
[33] Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., ... & Lowe, R. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730-27744
[34] Sequoia Capital: Generative AI: A Creative New World, https:/
[35] ZTE Digital Nebula 2.0, https:/