Abstract

Edge computing, as a key technology of the next generation of radio access networks (RAN), has driven the decentralization of networks and computing facilities. Edge servers closer to user terminals can significantly reduce service latency and cope with emerging new scenarios. Simultaneously, the rapid development of artificial intelligence (AI) plays a significant role in enhancing the performance of edge computing, helping edge devices cope with the rapidly increasing data at the edge. Therefore, combining the local data of the edge with the strong computing capabilities of AI, known as edge intelligence, can enhance data processing capabilities at the edge, improve the overall performance of wireless communication systems, and enhance user service experience. Edge intelligence has been a rapidly developing research field in recent years, and this white paper aims to analyze the current research progress in edge intelligence. It mainly includes:

(1) 6G Edge Intelligence Networks and Infrastructure: First, the edge-native intelligent architecture for 6G networks is analyzed. Then, the edge intelligence computing infrastructure is introduced, including edge intelligent hardware and cloud platforms. Finally, the edge intelligence network infrastructure is described, including the edge intelligence access network and core network.

(2) Key Technologies of Edge-Native Intelligence: These are introduced from the aspects of model lightweighting, edge-cloud collaborative intelligence, edge intelligence deployment, and deep edge nodes. Edge intelligence in wireless federated learning is also explained in detail, including model sparsification and model quantization in federated learning.

(3) Applications of Edge-Native Intelligence: Typical applications of edge-native intelligence are analyzed, such as smart transportation, smart manufacturing, and smart energy saving.

Contents

1. Introduction
   1.1 Background
   1.2 Overview of Edge Computing and Edge-Native Intelligence
   1.3 Importance of Edge-Native Intelligence
2. 6G Edge Intelligence Networks and Infrastructure
   2.1 Edge-Native Intelligence Architecture for 6G
   2.2 Edge Intelligence Computing Infrastructure
   2.3 Edge Intelligence Network Infrastructure
3. Key Technologies of Edge-Native Intelligence
   3.1 Model Lightweighting
   3.2 Edge-Cloud Collaborative Intelligence
   3.3 Wireless Federated Learning in Edge Intelligence
   3.4 Edge Intelligence Deployment
   3.5 Deep Edge Nodes
4. Edge-Native Intelligence Applications
   4.1 Smart Transportation
   4.2 Smart Manufacturing
   4.3 Intelligent Energy Saving
5. Development and Challenges of Edge-Native Intelligence
6. Acknowledgment

1. Introduction

1.1 Background

From 1G to 5G, communication
technology has undergone multiple upgrades and transformations, significantly improving data transfer rates, reducing latency, and expanding network coverage. However, with the rapid development of technologies such as the Internet of Things (IoT) and AI, the Internet of Everything and increasingly complex application scenarios pose challenges that existing network architectures cannot meet. Therefore, as the next generation of communication technology, 6G must offer higher performance and more powerful intelligent capabilities, driving the transition of edge-side networks from the Internet of Everything to the Intelligence of Everything. To better adapt to future diverse and complex user requests and application scenarios, the concept of edge-native intelligence came into being, integrating intelligent technology into the design and implementation of communication systems. [1]

In recent years, AI theory and technology have progressed and found widespread application in industrial scenarios. However, most AI services are typically deployed on cloud servers. With the advent of the Internet of Everything era, the number of terminal devices and the amount of data they generate are increasing rapidly. The centralized processing method, which uploads all data to the cloud, cannot meet users' low-latency requirements. Consequently, edge computing emerged alongside the development of the IoT and AI. However, current edge computing implementations fail to meet the demands of complex service scenarios. Therefore, edge-native intelligence has the potential to become the next research hotspot in edge computing. [2]

Edge-native intelligence enables dynamic self-sensing and self-optimization between the various units of the native network. It breaks away from the traditional plug-in AI architecture by deeply integrating AI into every layer of the network to enhance overall system efficiency, achieving autonomous sensing of the full lifecycle and self-management within the network architecture. [3]

1.2 Overview of Edge Computing and Edge-Native Intelligence

Edge computing: The concept of edge computing
is introduced to alleviate the processing pressure on cloud data centers. It is a technology that migrates computing from central servers to the edge, near the devices. The core idea is to integrate network, computing, storage, and application services into a platform close to the data source, so that services can be provided nearby. This reduces the processing load of cloud computing and addresses data transfer latency, meeting user needs for real-time service, intelligent applications, security, and privacy protection.

Edge-native intelligence: Edge intelligence is the next stage of development after edge computing. With the rapid development and iteration of edge computing and AI technologies, the concept of edge intelligence came into being. It executes AI algorithms, a more complex class of data analysis task, at the edge. Deploying AI applications on edge nodes, especially on mobile and IoT devices, requires the support of edge computing. First, edge nodes need to provide the hardware and programming libraries that basic AI operations require. Second, an edge computing platform is needed for resource management and task scheduling on edge nodes. Finally, the task offloading and data security problems in cloud-edge collaborative AI must be solved. [4]

As AI technology continues to evolve, the level of intelligence in edge devices has risen. Initially, edge intelligence primarily focused on running AI algorithms and models on edge devices to achieve rapid data processing and response. This approach had a relatively low level of intelligence because the limited functionality and performance of edge devices prevented the execution of complex AI algorithms and models. [5] With ongoing technological advancements, the performance and intelligence of edge devices have significantly improved, and the concept of edge-native intelligence has gradually emerged. Edge-native intelligence emphasizes integrating AI technology into edge devices, giving them autonomous data processing and analysis capabilities. This enables edge devices to better adapt to complex application scenarios and improves the speed and efficiency of data processing and response. [6]

1.3 Importance of Edge-Native Intelligence

The importance of edge-native intelligence includes the following aspects:

(1) Full Unleashing
of Data Potential at the Network Edge Through AI: With the surge in the number of mobile devices, a massive amount of data (e.g., audio, images, and videos) will be generated on the device side. AI algorithms become essential here: they can quickly analyze these large volumes of data and extract features from them, enabling high-quality decision-making. This helps reduce manual intervention and error rates, improving service efficiency and reliability. [7]

(2) Expansion of Intelligent Algorithm Deployment Scope with Richer Data and Application Scenarios: In the traditional cloud computing model, data is generally uploaded to and stored in the cloud because of its extremely high computing performance. [8] However, with the rapid development of the Internet of Everything era, the traditional cloud computing model is gradually shifting towards the edge computing model. In the future, the edge side will generate a massive amount of IoT data. If all of it had to be uploaded to the cloud for AI processing, it would occupy a large amount of bandwidth and put great computing pressure on cloud data centers. To address these challenges, offloading cloud computing power to the edge enables low-latency data processing, achieving a high-performance edge intelligence processing model. [9]

(3) Better System Availability and Scalability with Edge-Native Intelligence: AI technology has achieved tremendous success in many everyday digital products and services, such as video surveillance and smart homes. AI is also a critical driving force at the forefront of innovation, including areas like autonomous driving and smart finance. Therefore, AI should be brought closer to people, data, and terminal devices. Because data processing then occurs locally, edge devices can continue to operate even if the central server encounters issues. Additionally, as new applications are added or existing ones upgraded, edge devices can easily expand or be modified, providing greater flexibility.

(4) Enhanced Availability and Accessibility of AI Applications: With the enhanced processing capabilities of edge devices, more AI applications can run on the devices themselves rather than relying solely on cloud servers. This increases the usability and accessibility of AI. [10]

References
[1] S. Talwar, N. Himayat, H. Nikopour, F. Xue, G. Wu and V. Ilderem, "6G: Connectivity in the Era of Distributed Intelligence," IEEE Communications Magazine, vol. 59, no. 11, pp. 45-50, Nov. 2021.
[2] M. Elsayed and M. Erol-Kantarci, "AI-Enabled Future Wireless Networks: Challenges, Opportunities, and Open Issues," IEEE Vehicular Technology Magazine, vol. 14, no. 3, pp. 70-77, Sep. 2019.
[3] S. Deng, H. Zhao, W. Fang, J. Yin, S. Dustdar and A. Y. Zomaya, "Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence," IEEE Internet of Things Journal, vol. 7, no. 8, pp. 7457-7469, Aug. 2020.
[4] M. Pan, W. Su and Y. Wang, "Review of Research on the Curriculum for Artificial Intelligence and Industrial Automation based on Edge Computing," 2021 International Conference on Networking and Network Applications (NaNA), Lijiang City, China, 2021, pp. 222-226.
[5] Y. Xiao, G. Shi, Y. Li, W. Saad and H. V. Poor, "Toward Self-Learning Edge Intelligence in 6G," IEEE Communications Magazine, vol. 58, no. 12, pp. 34-40, Dec. 2020.
[6] H. Hu and C. Jiang, "Edge Intelligence: Challenges and Opportunities," 2020 International Conference on Computer, Information and Telecommunication Systems (CITS), Hangzhou, China, 2020, pp. 1-5.
[7] M. Mukherjee, R. Matam, C. X. Mavromoustakis, H. Jiang, G. Mastorakis and M. Guo, "Intelligent Edge Computing: Security and Privacy Challenges," IEEE Communications Magazine, vol. 58, no. 9, pp. 26-31, Sep. 2020.
[8] Y. Sun, B. Xie, S. Zhou and Z. Niu, "MEET: Mobility-Enhanced Edge inTelligence for Smart and Green 6G Networks," IEEE Communications Magazine, vol. 61, no. 1, pp. 64-70, Jan. 2023.
[9] Q. Cui, Z. Gong, W. Ni, Y. Hou, X. Chen, X. Tao and P. Zhang, "Stochastic Online Learning for Mobile Edge Computing: Learning from Changes," IEEE Communications Magazine, vol. 57, no. 3, pp. 63-69, Mar. 2019.
[10] M. Yao, M. Sohul, V. Marojevic and J. H. Reed, "Artificial Intelligence Defined 5G Radio Access Networks," IEEE Communications Magazine, vol. 57, no. 3, pp. 14-20, Mar. 2019.

2. 6G Edge Intelligence Networks and Infrastructure

2.1 Edge-Native Intelligence Architecture for 6G

As a key enabling technology for the next generation of radio wireless networks, Multi-access Edge Computing (MEC) can support a plethora of emerging services. With the continuous development of AI,
its application in MEC is becoming increasingly widespread. In 5G networks, however, AI is only used as an add-on application to assist MEC. In 6G networks, MEC will incorporate AI from the initial design phase, treating it as an integral part of the MEC system. This approach aims to enhance the flexibility and openness of MEC, better addressing constantly emerging application scenarios and user demands. As a result, the edge-native intelligence architecture has been proposed, which is based on the decoupling and reconstruction of AI functions to provide users with customized AI services.

2.1.1 Overview of the Architecture

The edge-native intelligence architecture consists of four layers and three planes, as shown in Figure 2.1. The four layers are the infrastructure layer, virtualization layer, function layer, and application layer; the three planes are the control plane, AI plane, and management and orchestration (MANO) plane.

Figure 2.1 Edge-Native Intelligence Architecture

I. Four layers:

Infrastructure layer: Located at the bottom of the edge-native intelligence architecture, it encompasses all communication, storage, and computing resources in the system. Communication resources include Wi-Fi and the Internet; storage resources include memory, Hard Disk Drives (HDD), and Solid State Drives (SSD); computing resources include Central Processing Units (CPU) and Graphics Processing Units (GPU).

Virtualization layer: Positioned above the infrastructure layer, it abstracts the underlying resources into a resource pool for use by upper-layer network functions. When service demands arise, the virtualization layer can create Docker containers and run them in the resource pool to supply network functions, ensuring their normal operation and thereby guaranteeing customized AI services.

Function layer: Located above the virtualization layer, it consists of decoupled network functions, namely control functions and AI functions, and a service bus. Different network functions can be activated, released, and reconfigured in real time based on service requirements, interconnected through the service bus.

Application layer: Located at the top of the edge-native intelligence architecture, it includes diverse network applications. The application layer interacts directly with users and, upon user request, automatically invokes the network functions of the function layer and the Docker containers of the virtualization layer to provide services.

II. Three planes:

Control plane: Responsible for the transmission and processing of control signaling from the infrastructure layer to the application layer.

MANO plane: Transforms service requests from the control plane into MANO commands, and coordinates and manages the system's functions and resources. The MANO plane includes the Virtualized Infrastructure Manager (VIM), Functional MANO, and Application MANO, dedicated to the management and orchestration of resources, functions, and applications, respectively.

AI plane: Also known as the native AI plane, it is the core of the edge-native intelligence architecture, responsible for learning user and network behavior and demands to achieve self-operation of the network. Its virtualization layer provides a runtime environment library for AI applications, such as PyTorch and TensorFlow, selected according to application requests and resource state. The AI plane includes decoupled AI functions and a service bus in its virtualization layer, while its application layer comprises a template selector and an intelligent algorithm model library for flexible reconstruction of edge-native intelligence.
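The four-layer, three-plane structure described above can be captured as a small data sketch. The component lists follow this section's text; the lookup helper and the dictionary shape are purely illustrative, not part of the architecture specification:

```python
# Sketch of the four-layer / three-plane edge-native intelligence
# architecture (Figure 2.1) as a plain data structure.

ARCHITECTURE = {
    "layers": {
        "infrastructure": ["communication", "storage", "computing"],
        "virtualization": ["resource pool", "Docker containers"],
        "function": ["control functions", "AI functions", "service bus"],
        "application": ["network applications"],
    },
    "planes": {
        "control": "transmits and processes control signaling",
        "MANO": "turns service requests into MANO commands (VIM, Functional MANO, Application MANO)",
        "AI": "learns user and network behavior to achieve self-operation",
    },
}

def components_of(layer: str) -> list:
    # Return the components this section assigns to a given layer.
    return ARCHITECTURE["layers"][layer]
```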
2.1.2 Design and Implementation of the Native AI Plane

In the edge-native intelligence architecture, the microservice-based AI plane is decoupled into independent AI functions that can be activated and invoked on demand. When an application request arrives, the decoupled AI functions can be combined on demand to provide AI services to users, thus achieving edge-native intelligence.

I. Decoupling of the edge-native intelligence plane:

As shown in Figure 2.1, in the edge-native intelligence plane, AI services are decoupled into the Data Collection Function (DCF), Data Preprocessing Function (DPF), Model Training Function (MTF), Model Validation Function (MVF), and Data Storage Function (DSF). Each function is described as follows:

DCF: Collects the raw data required for AI model training and generates the corresponding training dataset.

DPF: Preprocesses raw data containing invalid components. It removes invalid or offset content through data sampling, feature extraction, and dimensionality reduction, and converts the data into the format required for AI model training.

MTF: Selects the appropriate AI algorithm according to service requirements and trains the core model of the AI algorithm.

MVF: Evaluates the performance of the AI model during model training or real-time inference.

DSF: Stores and manages all data and AIF-related parameters of the AI plane.

Communication and interaction between different AI functions occur through a unified service bus. AI functions can also communicate with control functions via the service bus and be activated by Functional MANO based on service type.
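As a rough illustration of this decoupling, the sketch below wires toy stand-ins for DCF, DPF, MTF, and MVF through a registry that plays the role of the service bus. The function bodies, the sample data, and the "mean" model are invented for the example and are not part of the white paper's design:

```python
# Toy sketch: decoupled AI functions composed on demand over a
# service-bus-like registry, then activated in order as Functional
# MANO would do for a given service type.
from statistics import mean

BUS = {}  # minimal stand-in for the unified service bus

def register(name):
    def deco(fn):
        BUS[name] = fn
        return fn
    return deco

@register("DCF")
def collect():                    # Data Collection Function
    return [1.0, 2.0, None, 3.0]  # raw samples, one invalid

@register("DPF")
def preprocess(raw):              # Data Preprocessing Function
    return [x for x in raw if x is not None]

@register("MTF")
def train(data):                  # Model Training Function: toy mean model
    return {"prediction": mean(data)}

@register("MVF")
def validate(model, data):        # Model Validation Function
    return abs(model["prediction"] - mean(data)) < 1e-9

def run_service(order):
    """Activate the named functions in sequence to serve one request."""
    data = BUS[order[0]]()
    data = BUS[order[1]](data)
    model = BUS[order[2]](data)
    return model, BUS[order[3]](model, data)

model, ok = run_service(["DCF", "DPF", "MTF", "MVF"])
```

The point of the registry is that the pipeline is assembled per request from independently registered functions, mirroring how the decoupled AIFs are combined on demand.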
II. Reconstruction of the edge-native intelligence plane:

Edge-native intelligence reconstruction borrows the idea of templates and instantiation. It performs AI function activation, runtime configuration, and resource allocation based on service type to achieve customized AI services.

Template: Provides a common solution for a class of edge intelligent services by extracting and abstracting their commonalities. The edge-native intelligence template involves key elements such as the template information (Tinf) and the template identifier (Tid). The template information encompasses the components of the AI application, namely the types of AIF, the required resources, and the runtime environments, and is stored in the intelligent algorithm model library. The template identifier distinguishes the templates corresponding to different AI applications and is stored in the template selector. Before a template is used, predefined operations are necessary, defining parameters related to function activation, resource allocation, and runtime environment configuration according to the specific AI application requirements.

Instantiation: Creates an AI application instance based on the parameters defined in the template to respond to AI service requests. As shown in Figure 2.2, the edge-native intelligence instantiation process includes the following steps:

1) MANO continuously monitors the application layer and sends a template selection request to the template selector when an application request is received.
2) The template selector selects the corresponding template according to the application type and sends its Tid to the intelligent algorithm model library to request the Tinf.
3) The intelligent algorithm model library extracts the corresponding Tinf of the template and returns it to the template selector.
4) The template selector sends the received Tinf to the MANO plane.
5) The MANO plane performs the instantiation operation according to the received Tinf:
(a) Configures the runtime environment library required by the application.
(b) Allocates the required resources.
(c) Activates the relevant AIF.
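The steps above can be sketched in code. The Tid value, the example application type, and the Tinf fields below are hypothetical placeholders; only the selector-to-library-to-MANO flow mirrors the five steps:

```python
# Sketch of the template/instantiation flow of Figure 2.2.
# "T-001" and "video-analytics" are invented example identifiers.

MODEL_LIBRARY = {  # intelligent algorithm model library: Tid -> Tinf
    "T-001": {
        "runtime": "PyTorch",
        "resources": {"cpu": 2, "gpu": 1},
        "aifs": ["DCF", "DPF", "MTF", "MVF", "DSF"],
    },
}

TEMPLATE_SELECTOR = {"video-analytics": "T-001"}  # app type -> Tid

def instantiate(app_type):
    # Steps 1-2: MANO forwards the request; the selector maps it to a Tid.
    tid = TEMPLATE_SELECTOR[app_type]
    # Step 3: the library returns the Tinf for that template.
    tinf = MODEL_LIBRARY[tid]
    # Step 4 is the hand-off of Tinf back to the MANO plane.
    # Step 5: MANO configures the runtime, allocates resources, activates AIFs.
    return {
        "runtime": tinf["runtime"],
        "allocated": dict(tinf["resources"]),
        "active_aifs": list(tinf["aifs"]),
    }

instance = instantiate("video-analytics")
```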
Figure 2.2 Edge-Native Intelligence Instantiation Process

2.2 Edge Intelligence Computing Infrastructure

2.2.1 Edge Intelligent Hardware

With the rapid development of technology, edge intelligent hardware has gradually become a focal point where the IoT, AI, and cloud computing intersect. This type of intelligent hardware not only possesses real-time, efficient data processing capabilities but can also make intelligent decisions at the network edge, significantly alleviating data processing pressure on the cloud and improving overall system responsiveness and efficiency.

In terms of customer demands, edge intelligent hardware caters to various industries, which place high requirements on environmental adaptability, real-time processing, security, and stability. For example, in smart manufacturing, edge intelligent hardware can collect, process, and analyze data from factory production lines in real time, enabling automation and intelligence in the production process. In the medical field, edge intelligent hardware can analyze patients' physiological data, enabling remote healthcare and intelligent diagnosis.

From a technical perspective, edge intelligent hardware incorporates advanced algorithms and data processing technologies, enabling highly efficient data processing and analysis. It adopts a multitude of sensors, communication technologies, and software definitions, achieving interconnection and interoperability with various devices and systems. Moreover, edge intelligent hardware stands out for its low power consumption and high reliability, readily meeting usage requirements in diverse harsh environments.

In terms of product form, edge intelligent hardware can manifest as various devices such as intelligent cameras, intelligent sensors, intelligent robots, and edge servers. These devices can connect with various equipment and systems, facilitating data sharing and collaborative processing. They can also be remotely managed and controlled through the cloud, enabling remote monitoring and maintenance.

I. Edge intelligent hardware requirements

As shown in the table below, considering
70、s can connect with various equipment and systems,facilitatingdata sharing and collaborative processing.Moreover,they can undergo remotemanagement and control through the cloud,enabling remote monitoring andmaintenance of devices.I.Edge intelligent hardware requirementsAs shown in the table below,con
71、sidering the distance from the hardwaredeployment location to the data center,edge intelligent hardware can be categorizedinto Near Edge and Far Edge.Near Edge primarily involves the descent of cloudcomputing,resembling cloud data centers in functionality,with powerful andcomprehensivecomputingcapab
72、ilities.Thehardwareproductformsincludeintegrated cabinets and heavy-edge servers.Far Edge focuses more on specificapplications at the edge site,with strong relevance to specific applications such asdata aggregation/transformations,protocolparsing,industrial control,and AIinference.The hardware produ
73、ct forms are diverse,including industrial computers,PLCs,gateways,and MEC.14/123FunctionProduct ExamplesNearEdgeDeep edge computingRegional data centers,CDN(content deliverynetworks),telecom data centers,hosting serviceprovidersDeep edge computingLocal data centers,heavy-edge servers,micro datacente
74、rs(integrated cabinets)FarEdgeAggregation analysis and control,datamanagementAIBox,MEC,HCI(hyper-convergedinfrastructure)Aggregation,conversion,filtering,datareduction,forwardingGateways,small cells,routers,access pointsAnalog to digital conversion(sensors),sending control data(actuators),directanal
75、ysis/controlIndustrial computers,PLC(programmable logiccontroller),DCS(distributed controller),etc.Edge computing hardware products have their unique characteristics,distinctfrom the hardware products of cloud computing and edge computing.The reasonsbehind this distinction are the primary demands fa
(i) Diverse and complex application scenarios:

(1) The diversity of edge deployment requires different infrastructure combinations. Edge deployment spans various industry applications, user scenarios, and vertical domains. It involves a wide range of infrastructure solutions, making the edge solution ecosystem highly complex in terms of product forms, configurations, and management tools.

(2) Edge computing is experiencing rapid growth in industries such as telecommunications, utilities, manufacturing, and finance. Telecom operators are actively building edge computing platforms, leading market development. Other industries, particularly utilities, manufacturing, and finance, are also accelerating the adoption of edge computing by deploying dedicated edge infrastructure to enhance efficiency in use cases such as the Industrial Internet, grid management, and smart commercial buildings.

(3) The vigorous development of technologies like AI, machine learning, big data models, and heterogeneous computing further propels the growth of the edge computing market. Compute-intensive analytical workloads are proliferating across many industries and use cases, unlocking the potential of untapped data, most of which resides or is generated at the edge. The expected convergence of AI-native computing capabilities with the performance requirements of new analytical platforms will drive many new edge infrastructure deployments. The diversity of AI applications also diversifies the demand for edge computing hardware, software, services, and solutions.

(ii) Long-lifecycle product demands:

(1) In edge computing applications across industries like transportation, healthcare, energy, and industry (such as rail traffic control systems, medium to large medical equipment, substation/distribution station control units, and industrial control DCS/MES), products often go through a long lifecycle involving product design, research and development, testing and verification, implementation and operation, and later maintenance. Therefore, a lifecycle of 5-7 years or even longer is crucial for edge computing products in these application scenarios. This implies not only higher stability but also lower maintenance costs in the later stages.

(2) The long lifecycle applies to the entire service system, including not only edge computing hardware devices but also the platforms, service applications, protocols, and generated data running on them.

(3) The extended lifecycle encompasses not just runtime but also the ongoing provision, service, and updating of hardware devices, and the continuous evolution of platforms and service applications.

(iii) Demanding operating environments:

(1) Harsh physical conditions
Edge devices are deployed in diverse locations, facing complex physical environments with varying temperature, humidity, and electromagnetic radiation levels. Overall, edge computing physical environments can be classified as indoor and outdoor.

Unlike the controlled data center environment, indoor edge computing environments typically consist of ordinary computer rooms or human-machine co-existing spaces such as factory workshops and retail stores. These environments often have limited air conditioning and air filtration compared to data centers, so dustproofing and temperature requirements are somewhat higher. In human-machine co-existing scenarios, there are also noise requirements for edge computing devices.

Outdoor edge computing environments are even more complex and demanding. Temperatures can range from -20°C to 60°C, posing significant challenges to edge computing hardware in terms of waterproofing, lightning protection, and vibration resistance.

(2) More stringent data security environment

The rapid development of edge computing has brought network attack threats to the network edge. Common countermeasures still rely on relatively traditional network security techniques, which struggle to resist multi-source, cross-domain intrusions and attacks in edge computing. The computing capacity, storage capacity, and energy of nodes in the edge computing architecture are limited, and existing security protection measures cannot be fully applied to edge nodes. The cost of attacking a single computing or service node is much lower than attacking a powerful central server, making edge nodes more likely to attract attackers' attention. The network edge is closer to the devices of the Internet of Everything and involves a large amount of personal privacy data. Once the communication and decision-making mechanisms at the network edge are attacked, system functions are affected more directly. The lower cost and higher payoff of attacks in the edge computing architecture expose it to significant security threats, which exist at multiple levels, including the edge computing nodes themselves, edge management nodes, and the interactions between layers. Furthermore, the network edge is characterized by limited resources, large heterogeneity in software and hardware, and the wide distribution of a massive
number of terminals, making traditional data center security protection solutions inapplicable.

(iv) Distributed deployment:

Edge computing, being closer to the source of data than centralized cloud deployments, is distributed across many regions and locations. With the development of edge computing and cloud-edge collaboration, and the growing demand for lean field management, the need for on-site efficiency is increasingly evident. Meanwhile, technological advances in edge intelligence, IoT control, software-defined edge, and 5G/6G private networks allow distributed deployment to effectively meet the requirements of low latency, high throughput, and flexible application deployment and scheduling in edge computing scenarios. This trend continues to drive the deployment and application of edge computing at customer sites. Managing all these computing nodes becomes a significant challenge when there are hundreds or thousands of locations, and future expansion on the edge side must also be considered. Enterprises heavily involved in the IoT may in some cases find themselves managing millions of different endpoints. This underscores the importance of automating operations as much as possible, eliminating the need for manual management of daily activities.

II. Technical characteristics of edge computing

1. Edge heterogeneous accelerated computing:

In the fragmented edge computing market, unstructured data accounts for 80%. CPUs alone cannot efficiently process
massive parallel computing and complex pattern recognition workloads. Heterogeneous computing that combines CPUs with intelligent computing power (GPU/FPGA/ASIC) can effectively handle diverse workloads while significantly optimizing energy consumption, representing the technological trend of the post-Moore era.

Figure 2.3 Diversity of Application Scenarios Driving the Development of Edge Heterogeneous Accelerated Computing
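The dispatch decision behind CPU-plus-accelerator heterogeneous computing can be sketched minimally; the workload taxonomy below is an assumption for illustration, not a classification from this white paper:

```python
# Minimal sketch: route massively parallel workloads (images, video,
# pattern recognition) to an accelerator, and keep control-flow-heavy
# work (e.g., protocol parsing) on the CPU.

PARALLEL_KINDS = {"image", "video", "audio", "pattern-recognition"}

def dispatch(workload_kind: str) -> str:
    """Pick a processing target for a workload by its kind."""
    return "accelerator" if workload_kind in PARALLEL_KINDS else "cpu"
```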
101、oftware-defined edge:As edge computing finds applications across various industries and scenarios,software-defined approaches are gradually becoming a crucial technological trend inthefieldofedgecomputing.Byleveraginggeneral-purposehardwareandimplementing soft-hard decoupling,this approach effective
102、ly improves hardwareresource utilization,reduces hardware costs,and enhances the manageability of edgehardware resources.Application scenarios encompass software-defined 5G networks,software-defined industrial control devices,software-defined edge security,andsoftware-defined vehicles.Taking softwar
103、e-defined industrial control devices as anexample,a notable representative is the new generation of PLCs that feature soft-harddecoupling and distributed control.While ensuring reliability and ease of use,thesedevices offer data interoperability and inherent information security.They can notonly cat
104、er to the real-time data processing requirements of traditional applicationautomation systems but also support new applications such as non-real-time dataanalysis,storage,and computation.3.Edge-native intelligence:Edge AI refers to the execution of AI algorithms on edge devices or serverslocated nea
105、r the edge.Given the diverse and complex nature of edge scenarios,edge-native intelligence places greater emphasis on model training and inference atthe edge,which fundamentally differentiates it from cloud-native approaches.Aprevailingtrendinvolvesfine-tuning,compressing,andquantizingindustrypre-tr
106、ained models based on multi-task generalization on edge data at the edge side,19/123resulting in lightweight edge models.With the advancement and application of largemodels,they will further facilitate the deployment of AI in long-tail edge scenarios,which in turn necessitates more robust edge-nativ
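The quantization step in this fine-tune/compress/quantize pipeline can be sketched at its simplest. The following is an illustrative example, not taken from the white paper, of symmetric per-tensor int8 post-training quantization; the helper names are hypothetical.

```python
# Illustrative sketch: symmetric per-tensor int8 post-training
# quantization, one common step in producing lightweight edge models.
# Helper names are hypothetical, not from any specific framework.

def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid a zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding error is bounded by half a quantization step (scale / 2).
assert all(abs(w - r) <= scale / 2 for w, r in zip(weights, restored))
print(q)  # [42, -127, 0, 90]
```

Storing the int8 values plus one float scale cuts weight storage roughly 4x versus float32, at the cost of the bounded rounding error checked above.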
III. Edge computing hardware form factors

Edge intelligent hardware products come in a wide variety of form factors. This chapter primarily introduces four typical categories of edge intelligent hardware products: edge servers, industrial computers, gateways, and edge converged cabinets.

1. Edge server:

Edge servers are servers deployed in edge data centers or near-edge locations. They include general-purpose servers used in edge computing scenarios and edge-optimized servers specifically designed for edge environments.

General-purpose servers for edge computing scenarios are nearly identical to those used in data center environments. They are typically applied in relatively friendly environmental conditions.

Compared to general-purpose servers, edge-optimized servers have been optimized in terms of appearance, function, stability, and other aspects (as shown in the table below). For example, they may have lower power consumption, a wider working temperature range, multiple installation methods, and integrated security and OT. In terms of product form factor, edge-optimized servers can be further divided into purpose-built edge servers and edge micro servers.

Purpose-built edge servers are small servers designed and built for specific functions. They are deployed in specific use cases such as security and video surveillance. They have specific form factors, low power consumption, wide working temperature ranges, and multiple interface types to adapt to harsh environments. They are usually not deployed in standard data centers but find their place in edge data centers. Edge micro servers are robust and durable computing devices designed for embedded usage in various environmental conditions. They provide enterprise-level computing and management functions.

Characteristics of Edge-Optimized Servers:
- Design: a wider temperature range; moisture, dust, corrosion, and seismic resistance; electromagnetic compatibility.
- Chassis: smaller chassis depth compared to general-purpose servers, suitable for existing base station sites, edge data centers, or specific industrial field locations.
- Operation & maintenance: automated operations and maintenance with remote control, minimizing manual intervention; provides a unified O&M management interface.
- Security: safeguards against potential network attacks in complex deployment environments to ensure data security, high availability, and consistency; hardware-level protection against malicious interference.
- I/O: front-end I/O design for more convenient operation and deployment.
- Installation method: multiple installation methods, such as wall-mounted installation; simple installation and removal.
- Power consumption: low power consumption; supports DC and AC power and accommodates wiring limitations.
- Network: high reliability, low latency, wireless support, etc.

2. Industrial computers:

Industrial computers are designed specifically for industrial scenarios, featuring rich input-output interfaces capable of connecting
various industrial control system devices such as sensors, actuators, and instruments. They exhibit high reliability and stability, adapting to diverse and harsh working environments. With the development of the Industrial Internet, the integration of industrial computers with edge computing is becoming increasingly seamless. By applying technologies like AI and big data analytics to edge computing, industrial computers are becoming more intelligent, enabling more efficient and precise automation control and decision-making.

Key features of industrial computers:
- Reliability: Industrial computers operate reliably in environments with dust, smoke, high/low temperatures, humidity, vibration, and corrosion. They boast rapid diagnostics and maintainability, with a mean time to repair (MTTR) of generally 5 minutes and a mean time to failure (MTTF) exceeding 100,000 hours. In contrast, ordinary PCs have an MTTF of only 10,000 to 15,000 hours.
- Real-time performance: Industrial computers perform real-time online monitoring and control of industrial production processes. They respond rapidly to changes in working conditions, promptly collecting data and performing output adjustments (this watchdog function is absent in ordinary PCs). Industrial computers also feature self-recovery in case of emergencies, ensuring uninterrupted system operation.
- Expandability: Due to their motherboard-plus-CPU-card structure, industrial computers boast strong input-output capabilities, supporting the expansion of up to 20 cards. This allows connection to various peripherals and cards in industrial settings, such as road controllers, video surveillance systems, and vehicle detection devices, to accomplish diverse tasks.
- Compatibility: Industrial computers can simultaneously utilize ISA, PCI, PCIe, and PICMG resources. They support various operating systems, multiple programming languages, and multitasking operating systems.

Figure 2.4 Schematic Diagram of an Industrial Computer (rackmount, embedded, and tower industrial computers; industrial panel PC/industrial monitor)
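The MTTF and MTTR figures in the reliability bullet above translate into steady-state availability as MTTF / (MTTF + MTTR). The arithmetic below is illustrative; the MTTR assumed for the ordinary PC is a hypothetical value used only for comparison.

```python
# Steady-state availability = MTTF / (MTTF + MTTR).
# Industrial-computer figures come from the reliability bullet above;
# the ordinary-PC MTTR is an assumption made only for comparison.

def availability(mttf_hours, mttr_hours):
    return mttf_hours / (mttf_hours + mttr_hours)

industrial = availability(100_000, 5 / 60)  # MTTF 100,000 h, MTTR 5 min
ordinary = availability(12_500, 5 / 60)     # midpoint of 10,000-15,000 h

print(f"industrial computer: {industrial:.7f}")  # 0.9999992
print(f"ordinary PC:         {ordinary:.7f}")
```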
3. Edge gateway:

An edge gateway is an IoT gateway with edge computing capabilities that locally analyzes and processes massive amounts of terminal data. Edge gateways, together with cloud-based IoT platforms, constitute the architecture of edge computing IoT.

Edge gateways typically support a variety of industrial IoT interfaces (such as PLC, RF, RS-485, and DI) and protocols, allowing flexible connection to various sensors and terminals. They also open up software and hardware resources, supporting container deployment. Industry applications can be deployed on demand within containers to achieve local processing of terminal device data. Meanwhile, the IoT platform can interconnect with various industry application systems, facilitating intelligent connectivity of terminal devices.

Figure 2.5 Schematic Diagram of Edge Computing IoT Architecture (sensors and terminals such as streetlights, power distribution facilities, and elevator controllers connect to an edge computing gateway over PLC/RF/RS-485/DI; the gateway, with container management and industry apps, connects to a cloud-based IoT platform and industry applications via 5G/4G/ETH)

With advancements in computing technology, miniaturized, high-performance processors enable the decentralization of computing power. Traditional gateways, which primarily served data communication and conversion purposes, are now entrusted with more tasks and expectations. They perform complex protocol parsing and conversion, prioritize analysis before transmission, and enable rapid edge decision-making, significantly improving the overall efficiency of the end-edge-cloud architecture.

Continuous processor updates, heterogeneous acceleration, and other advancements lead to steady improvements in the computing-power-to-power-consumption ratio of CPUs/GPUs/NPUs. As 5G and 6G communication technologies advance, faster and more reliable wireless communication gradually replaces some wired data transmissions, allowing for more flexible deployment.

4. Integrated edge cabinet:

An integrated edge cabinet is a complete cabinet product that integrates edge server nodes, switches, storage, PDUs, power distribution, rack air conditioning, and various other devices. It serves as the smallest unit for products, integrating the necessary devices within the cabinet and pre-installing customer application software. This enables rapid edge deployment of IT equipment, quick service launch, and the deployment of edge applications in non-data-center scenarios. Components of the integrated edge cabinet include servers, switches, distribution boxes, PDUs, UPS, battery packs, rack-mounted air conditioning, emergency fans, monitoring displays, monitoring hosts, environmental detection gateways, smoke detectors, temperature and humidity detectors, water detectors, lighting, front and rear door switch detectors, and more.

Compared to data center cabinets, integrated edge cabinets need to consider more factors related to the complex edge computing user environment, such as dust prevention, waterproofing, and the inability to attach air-conditioning cabinets externally.

Figure 2.6 Schematic Diagram of an Integrated Edge Cabinet (smart lighting and door sensor unit, power distribution unit, water ingress detection, emergency fans, temperature and humidity sensors, UPS, IT equipment, PDU, battery module, and cooling equipment)

2.2.2 Edge Intelligent Cloud Platform

As described in Section 2.1.1, edge computing devices are numerous and diverse, and are deployed in different regions or locations. To coordinate the
use of these devices, reduce application adaptation difficulty, and lower the complexity of deployment and operation management, a distributed software platform is needed that solves these problems while meeting the growing demand for intelligent applications. The edge intelligent cloud platform is such a distributed software platform. It adopts cloud-native technology, deploying a system on the edge and in the cloud and realizing unified, rapid deployment of container-based microservices. The overall architecture consists of three main parts: the edge side, the cloud side, and collaboration mechanisms.

Edge side: This part includes edge hardware and the edge-native intelligent platform running on it. Edge hardware can be a single edge device or a cluster of multiple edge devices. The edge-native intelligent platform is a key component of the edge intelligent cloud platform. It provides a solution for running applications and services on edge devices using containers, supplying the necessary infrastructure and services for the edge computing environment, including container orchestration, data storage, security management, and monitoring.

Cloud side: This part comprises the cloud computing system and the cloud management platform. The cloud computing system operates in the cloud data center, serving as a cluster running the edge-native intelligent platform or a cluster running a cloud-native platform. It collaborates with the aforementioned edge-side systems, receiving data from the edge side, conducting extensive data processing and analysis, and feeding processed data back to edge devices. The cloud management platform unifies the management, scheduling, and monitoring of edge-side and cloud-side computing systems, orchestrating and managing the lifecycle of application services to ensure system operation and effective service provision.

Collaboration mechanisms: This part encompasses cloud-edge communication protocols, cloud-edge data synchronization, and cloud-edge application collaboration. The cloud-edge communication protocol defines the specifications and standards for data transmission and communication between the edge and cloud sides, ensuring stable and secure data transfer. Cloud-edge data synchronization involves the process of synchronizing and coordinating data between the edge and cloud sides, ensuring data integrity and consistency. Cloud-edge application collaboration refers to the collaborative work process between edge-side and cloud-side applications, realizing distributed computing and intelligent services.

Figure 2.7 Overall Architecture of an Edge-Native Intelligent Cloud Platform

Figure 2.7 shows an implementation of an edge intelligent cloud platform. It deeply integrates with various edge devices and provides convenient edge computing capabilities in the form of an all-in-one solution with integrated hardware and software. The platform supports multi-edge computing, storage, networking, lightweight virtualization, and fully integrated management. It offers various network edge access capabilities, including 4G/5G, Wi-Fi, and fixed networks. Additionally, it supports dynamic sensing scheduling of edge applications and resources, cross-edge intelligent orchestration, and cloud-edge collaboration, along with unified intelligent autonomous operation and maintenance management.

On the cloud side, it provides multi-edge cluster management, cloud-edge resource collaboration, application distribution, and service data interaction capabilities, and can be deployed in customer cloud data centers or public clouds. On the edge side, built on edge computing servers, it constructs an integrated hardware-software, compute-storage-network converged, heterogeneous lightweight edge computing platform. With seamless integration through multi-access capabilities, it achieves local data offloading, aggregating data at the edge for computation. This setup not only saves data transmission for users but also provides low-latency, highly reliable services. It offers a novel ICT infrastructure tailored for intelligent applications in diverse access scenarios.

Main features of mainstream edge intelligent cloud platforms:
- Convenient application deployment: Edge applications are hosted in containers or virtual machines, and the platform provides a user-friendly interface for importing application images.
- Flexible data access: Supports various forms of data access such as 4G, 5G, Wi-Fi, and broadband. Data is offloaded to the edge platform for processing and computation, with results promptly fed back to the production system or uploaded to the cloud.
- Efficient application management: Capable of identifying application types and monitoring resource usage in real time, facilitating seamless adjustments to resource allocation and scaling.
- Optimized application empowerment: Targeted resource optimization is performed for specific types of applications. For example, GPU virtualization enables the same computing power to support a greater number of AI or other GPU-intensive applications.
- Simplified deployment and operations and maintenance (O&M): Edge computing platforms deployed in an all-in-one form factor achieve plug-and-play functionality, automatic networking, and whole-machine replacement during malfunctions. They possess intelligent O&M capabilities, continuously monitoring and collecting multidimensional O&M data, with automatic fault alerts and repairs.
- Lightweight global security protection: Utilizes unified identity authentication services to provide account authentication. Leveraging other relevant contextual information (identity, threat/trust assessment, roles, location/time, device configurations, etc.) enhances security policies. Additionally, it offers encryption for both dynamic and static data on
the edge platform.

The following sections provide detailed explanations of the edge-native intelligent platform, the cloud-side O&M management platform, and the collaboration mechanisms.

I. Edge-native intelligent platform

The edge-native intelligent platform is also based on container technology and K8s orchestration capabilities. Containers, compared to physical machines and virtual machines, are lightweight, easy to deploy, support multiple environments, have short startup times, and are easily scalable and migratable. They address the significant heterogeneity of edge devices. Kubernetes has become the mainstream option for container orchestration in the cloud and the data center. It utilizes microservices and containers to encapsulate functional modules, managed and deployed by declarative orchestration tools. As an extension of cloud-side service, using similar or identical technology at the edge is a logical decision. However, to adapt to the lightweight nature of edge devices, lightweight cloud-native platforms such as K3s, KubeEdge, and MicroK8s are used on the edge. These are all certified Kubernetes distributions that fully support the K8s API while providing a smaller footprint and higher reliability. These platforms can run on x86 and Arm hardware, supporting heterogeneous devices with GPUs, NPUs, and VPUs. The network environment between edge nodes and the central cloud is complex, and edge nodes may become disconnected from the central cloud for various reasons. Due to the large number of edge nodes and the complexity of the environment, IT operators may find it challenging to promptly maintain and restore connectivity for disconnected nodes. Therefore, the edge cloud-native platform needs offline autonomous capabilities to ensure the continuity and reliability of edge-side service operations.

Figure 2.8 Edge-Native Intelligent Platform for MEC

Figure 2.8 shows an implementation of an edge-native intelligent platform for Multi-Access Edge Computing (MEC). It is built upon K3s and lightweight virtualization for seamless integration of computing and storage, driven by a kernel-supported resource service. The platform features rapid application onboarding, flexible orchestration, elastic resource management support, core technological capabilities like edge clustering, and a framework for interaction between edge clusters and the central cloud. It provides low-latency, reliable, elastic, collaborative, and secure edge computing services. Serving as a novel micro data center constructed at the network edge, it allows quick and flexible integration of third-party applications, supports centralized remote O&M, opens up network infrastructure capabilities, and offers real-time intelligent application services. This platform can accommodate various application scenarios under the 5G-based MEC edge cloud model. The platform is lean enough to run on a single server or on a cluster of multiple servers.

Key functions include:
- Edge management: Provides an edge console that manages all capabilities of edge nodes.
- Edge containers: Provides container orchestration based on Kubernetes for unified management.
- Secure containers: Utilizes lightweight virtual machines to run user container instances, balancing virtual-machine security with the rapid manageability of container technology and providing optimal security and performance assurance for edge nodes.
- Lightweight virtualization: Provides lightweight virtualization with seconds-level startup time and a low memory footprint.
- Image management: Offers virtual machine image upload, updates, deletion, and categorized management.
- Virtual machine templates: Creates templates to save virtual machine configurations.
- Hardware acceleration: Supports various types of accelerators, including GPU cards, optimizing edge hardware through hardware and accelerator integration.
- Network isolation: Provides subnet layer-2 isolation based on virtual networks, ensuring secure network communication.
- Edge intelligent O&M: Offers rich edge alert management and custom alert rules, meeting diverse intelligent O&M requirements.
- Access control: Supports access control based on user groups.

II. Cloud-side O&M management platform

The number of edge nodes is extensive, and the
types of edge devices are diverse. The applications of edge devices are complex, and the edge environment is intricate. The traditional data center O&M mode faces huge challenges and cannot guarantee the stable and continuous operation of widely distributed edge services at a low cost. Edge computing requires innovative technology to achieve automation and intelligence in edge node access, application deployment, and O&M. In the LF Edge Akraino project, the concept of Zero Touch Provisioning and Operations (ZTP) has been proposed, demanding technical support for the integration, orchestration, and delivery of multiple applications and services. This is realized using the Open Network Automation Platform (ONAP). Such a design significantly reduces the cost of managing and maintaining edge computing while respecting the limitations of hardware resources and costs. The "build once, push anywhere" edge DevOps approach uses cloud technology on the edge, regarding the edge node as the build target of the CI/CD process and the cloud repository as the trusted code source of these edge systems. The container-based packaging method reduces the workload of deploying applications to specific edge hardware.

For the O&M of the edge device itself, a lightweight mechanism is needed to collect the state of the edge node in terms of network traffic, CPU utilization, processes, memory, etc., and to provide systematic, comprehensive edge node state monitoring for O&M personnel in real time. In addition, self-inspection capabilities are needed. When a potential fault is predicted, appropriate measures can be taken for preventive maintenance; when a fault is detected, the root cause is located based on the monitoring data and appropriate self-repair measures are taken. Problems that cannot be automatically repaired are escalated to O&M personnel.
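The self-inspection idea above can be reduced to a minimal sketch, assuming simple threshold-based rules. The thresholds and metric names here are hypothetical, and a real agent would collect metrics from the node rather than receive them in a dict.

```python
# Illustrative sketch (not from the white paper): a minimal edge-node
# self-inspection check with hypothetical thresholds. A real collector
# would read /proc, cgroups, or an agent API instead of a dict.

THRESHOLDS = {"cpu_util": 0.90, "mem_util": 0.85, "net_errors": 100}

def inspect(metrics):
    """Return a list of alert strings for metrics exceeding thresholds."""
    return [
        f"{name} = {value} exceeds limit {THRESHOLDS[name]}"
        for name, value in metrics.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

node_state = {"cpu_util": 0.97, "mem_util": 0.41, "net_errors": 3}
for alert in inspect(node_state):
    print("ALERT:", alert)   # would be forwarded to O&M personnel
```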
The cloud-side O&M management platform is the edge cloud platform component that realizes the above functions. It can be regarded as an edge hub: logically central, it is actually deployed in the cloud or a data center and connected to the managed edge nodes through the network. In addition to managing a large number of edge node devices and edge-native cloud platforms, it also realizes the collaborative scheduling of cloud and edge resources. The platform is itself cloud-native, composed of containerized microservices, and can be easily deployed on a cloud-native platform to achieve unified cloud-edge O&M management.

Figure 2.9 Example of Cloud-Side O&M Management Platform Functions

Figure 2.9 illustrates a cloud-side O&M management platform called Central Management and Orchestration (CMO), responsible for cloud-edge collaborative management. The overall design
goal is to build a unified cloud-edge management platform for edge clouds. It achieves this by integrating the management of cloud-edge resources, data, and applications, providing automated support for the entire lifecycle of a user's edge computing service, including onboarding, deployment, scheduling, optimization, and collaboration. CMO functions include:
- Heterogeneous edge node/cluster management: A unified heterogeneous edge access framework that supports the access and unified management of MEC and lightweight edge clusters.
- Unified edge resource management: A one-stop view of edge clusters, nodes, and resources, supporting multiple types of edge management frameworks.
- Intelligent orchestration and deployment of applications: A unified application orchestration model that adapts to different edges, supports batch application deployment, and provides policy-based intelligent scheduling.
- Rich cloud-edge collaboration mechanisms: A cloud-edge data collaboration framework that supports application distribution and data synchronization.
- Edge intelligent O&M: Unified edge monitoring and alerting, plus fault detection and management for edge-cloud integrated machines, utilizing big data analysis, intelligent algorithms, multi-dimensional health analysis, rapid localization, and quick repair to enhance the efficiency of automated O&M.
- Security management: Includes container image scanning, vulnerability analysis, intelligent detection, and cloud-based database auditing with different granularities and policies.

III. Collaboration mechanisms

(1) Cloud-edge collaboration:

Figure 2.10 Capabilities and Connotations of Cloud-Edge Collaboration (ECC Edge Computing Reference Architecture 3.0)

For applications in the industry's digital transformation, sending all data to cloud computing can result in bandwidth bottlenecks, and application latency cannot be guaranteed. Relying entirely on edge-based distributed architectures makes edge system and application management complex, and the limitations of edge nodes make scalability and high availability of applications challenging. Therefore, collaborative integration of cloud computing and edge computing is needed to reduce latency, improve scalability, enhance information access, and make service development more agile. The edge-native cloud platform is not a single component or layer but involves end-to-end open platforms for EC-IaaS, EC-PaaS, and EC-SaaS. According to the Edge Computing Reference Architecture
3.0, the capabilities and connotations of cloud-edge collaboration involve comprehensive collaboration at the IaaS, PaaS, and SaaS levels, mainly comprising six types of collaboration: resource collaboration, data collaboration, intelligent collaboration, application management collaboration, business management collaboration, and service collaboration.

Resource collaboration refers to the coordinated management and scheduling of resources between edge computing nodes and the cloud. Edge computing nodes provide basic infrastructure resources such as computing, storage, network, and virtualization, with local resource scheduling and management capabilities. They can also collaborate with the cloud, accepting and executing cloud-side resource scheduling and management policies, including device management, resource management, and network connection management. The cloud provides a global view of resource scheduling, making the utilization of edge node resources more real-time and effective.
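The cloud's global view of resource scheduling can be illustrated with a toy placement policy. This sketch is not from the white paper; it simply places each task on the edge node with the most free capacity and falls back to the cloud when nothing fits.

```python
# Illustrative sketch (not from the white paper): a cloud-side scheduler
# with a global view places each task on the edge node with the most
# free capacity, offloading to the cloud when no edge node fits.

def place(task_demand, edge_free, cloud="cloud"):
    """edge_free maps node name -> free capacity units; returns the chosen node."""
    candidates = {n: f for n, f in edge_free.items() if f >= task_demand}
    if not candidates:
        return cloud                    # no edge node fits: offload to cloud
    node = max(candidates, key=candidates.get)
    edge_free[node] -= task_demand      # reserve capacity on that node
    return node

free = {"edge-a": 4, "edge-b": 7}
print(place(3, free))   # edge-b: it has the most free capacity
print(place(6, free))   # cloud: no edge node has 6 units left
```

A production scheduler would of course weigh latency, data locality, and policy constraints alongside raw capacity; this only shows the global-view idea.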
Data collaboration involves data interaction and coordination between edge computing nodes and the cloud. In cloud-edge collaboration, edge computing nodes are mainly responsible for the collection, processing, and analysis of on-site/terminal data, uploading processed results and related data to the cloud. The cloud provides storage, analysis, and value mining for massive data. Cloud-edge data collaboration supports a controlled, ordered flow of data between the edge and the cloud, forming an efficient, low-cost data lifecycle management and value mining process.

Intelligent collaboration refers to smart processing and capability coordination between edge computing nodes and the cloud. Edge computing nodes execute inference according to AI models, achieving distributed intelligence, while the cloud conducts centralized AI model training and deploys models to edge computing nodes. This collaborative approach effectively combines distributed and centralized intelligence, enhancing the efficiency and accuracy of AI processing while reducing its costs.

Application management collaboration involves the provision of application deployment and runtime environments by edge computing nodes, together with the management and scheduling of the lifecycle of multiple applications on those nodes. The cloud primarily provides application development and testing environments, as well as capabilities for managing the application lifecycle, including push, installation, uninstallation, updates, monitoring, and logging.

Business management collaboration refers to edge computing nodes mainly providing modular, microservice-based applications/digital twins/networks, etc., while the cloud mainly provides business orchestration capabilities for applications/digital twins/networks according to customer needs and provides customers with related network value-added services on demand.

Service collaboration refers to the collaboration of EC-SaaS (Edge Computing SaaS) and cloud SaaS in terms of quality of service and service efficiency at the user application layer.
Federated learning is a typical intelligent application paradigm that uses cloud-edge collaboration mechanisms. Traditional methods collect device data for model training in the cloud, which poses challenges in terms of bandwidth consumption and latency, as well as serious privacy risks when data is stored in the cloud. In this scenario, cloud-edge collaborative model training is a good choice. Thanks to the data collection capability of the edge side, the generalization performance of the finally trained model will be better. The edge side is responsible for data collection and partial model training, while the cloud side is responsible for aggregating and updating the edge-side models and sending them back to the edge.

Taking face recognition applications as an example, the traditional training process requires the edge side to collect face data and interact directly with the central server. However, direct data interaction inevitably leads to privacy disclosure problems. Face recognition model training under cloud-edge collaboration does not need to upload face data to the central server, which prevents privacy disclosure problems to a certain extent.
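The cloud-edge training split described above is the federated averaging (FedAvg) pattern. The sketch below is illustrative, not the paper's algorithm: two simulated edge nodes fit a one-parameter model on their private data, and the cloud only ever sees their weights, never their samples.

```python
# Illustrative federated averaging (FedAvg) sketch, not from the white
# paper: edge nodes fit y = w * x on local data; the cloud averages the
# returned weights, so raw samples never leave the edge.

def local_update(weights, data, lr=0.1):
    """One gradient step of least squares on an edge node's private data."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def fed_avg(models, sizes):
    """Cloud side: average edge models, weighted by local dataset size."""
    total = sum(sizes)
    return [sum(m[0] * n for m, n in zip(models, sizes)) / total]

edge_data = [
    [(1.0, 2.0), (2.0, 4.0)],   # node A observes y = 2.0 * x
    [(1.0, 2.1), (3.0, 6.3)],   # node B observes y = 2.1 * x
]
global_model = [0.0]
for _ in range(50):             # communication rounds
    local_models = [local_update(global_model, d) for d in edge_data]
    global_model = fed_avg(local_models, [len(d) for d in edge_data])
print(round(global_model[0], 2))  # 2.07, a compromise between the nodes
```

Only the scalar weights cross the network in each round, which is exactly the privacy and bandwidth argument made above.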
The Sedna component of KubeEdge builds on the edge-cloud collaboration capabilities provided by KubeEdge to realize cross-edge-cloud collaborative training and collaborative inference for AI. It supports the seamless migration of existing AI applications to the edge and quickly realizes cross-edge-cloud incremental learning, federated learning, collaborative inference, and other capabilities. It ultimately reduces the cost of building and deploying edge AI services, improves model performance, and protects data privacy. Sedna application scenarios include edge-cloud collaborative incremental learning, federated learning, lifelong learning, and federated inference.

Figure 2.11 Sedna Architecture Diagram

(2) Edge node collaboration:

In certain scenarios, owing to unstable cloud-edge communication, bandwidth constraints, and traffic costs, it becomes necessary to rely on the mutual collaboration of edge nodes to accomplish intelligent tasks. In such scenarios, local edge nodes exhibit favorable network conditions, with stable communication, high bandwidth, and essentially cost-free communication between them. However, individual edge nodes have limited processing capacity and cannot independently handle certain sporadic, complex tasks. The collaboration of multiple edge nodes can fulfill the computational, storage, and other requirements of these tasks. Edge-to-edge collaboration is often coordinated and controlled by cloud-side components; it can therefore be considered an application scenario of cloud-edge collaboration in which the control plane is located in the cloud while the service plane is at the edge. The main distinction between edge-to-edge collaboration and regular cloud-edge collaboration lies in addressing network interoperability among multiple edges.

Figure 2.12 EdgeMesh Architecture Diagram

EdgeMesh is a solution implemented by KubeEdge for interconnecting edge nodes in edge scenarios. As the data plane component of the KubeEdge cluster, EdgeMesh provides simple service discovery and traffic proxy functions for applications, thereby abstracting away the complex network structure of edge scenarios. In edge computing scenarios, network topologies are complex, and edge nodes in different regions often have non-interconnected networks; mutual traffic between applications is a primary requirement for services. EdgeMesh meets the new requirements of edge scenarios, such as limited edge resources, unstable edge-to-cloud networks, and complex network structures, achieving high availability, high reliability, and an extremely lightweight footprint:
- High availability: Uses the capabilities provided by LibP2P to connect the networks between edge nodes, dividing communication between edge nodes into LAN and cross-LAN cases. Communication within the LAN uses direct node-to-node connections; communication across LANs establishes a direct tunnel between agents when hole punching succeeds, and otherwise forwards the traffic through a relay.
- High reliability (offline scenario): Metadata is downloaded and stored locally through the KubeEdge edge-cloud channel, without the need to access the cloud apiserver; EdgeMesh internally integrates a lightweight node-level DNS server, so service discovery does not rely on the cloud CoreDNS.
211、ta centernetworks.It integrates capabilities in computing,networking,and applications toprovide edge intelligent services nearby,meeting critical requirements in industrydigitization such as agile connectivity,real-time service,data optimization,andapplication intelligence.Operators are the primary
212、providers of edge computingnetwork infrastructure.In the Edge Computing Network Technology White Paperjointly released by the Edge Computing Consortium(ECC)and the Network 5.0Industry and Technology Innovation Alliance(N5A),the network infrastructuretraversed from user systems to edge computing syst
213、ems is defined as Edge ComputingAccess(ECA).Operators conduct in-depth research and practice in the integration ofedge computing and networks.This section focuses on analyzing the key technologiesinvolved in ECA.Figure 2.13 Schematic Diagram of Edge Computing Network Infrastructure(White Paperon Ope
214、rator Edge Computing Network Technology)In the edge intelligent access network,the campus network includes theenterprises internal network,the LAN in the factory.Common network technologiesinclude L2/L3 local area network,Wi-Fi,TSN(Time Sensitive Network),fieldbus.38/123The access network needs to s
215、upport simultaneous access of mobile and fixed users.In the mobile network,access methods such as 2G/3G/4G/5G need to be provided.Inthe fixed network,access methods such as optical access network Passive OpticalNetwork(PON)and various dedicated lines need to be provided.Operators providesimultaneous
access to Mobile Broadband (MBB) and Fixed Broadband (FBB) access networks through edge computing, and cloud service interaction across the two networks. At the edge gateway, the mobile network uses the User Plane Function (UPF) for mobility management, traffic offloading, charging, Quality of Service (QoS), and other issues, while the fixed network uses the Broadband Remote Access Server (BRAS/BRAS-UP) for traffic offloading.

The diverse deployment locations and service scenarios of edge computing present new requirements for the access scope and performance metrics of networks, such as latency, bandwidth, and high concurrency. Therefore, the Edge Computing Access (ECA) network needs to achieve the following key technologies:

(1) Cloud-network integration: To meet the access requirements of different devices at the service site, operators construct multiple access networks based on the regional locations of edge business applications, leading to increased capital expenditure (CAPEX). Simultaneously, with continuous adjustments and changes in service applications, networks are required to be flexible, enabling the agile launch of new services and imposing higher demands on operational expenditure (OPEX). Hence, ECA needs to adopt cloud-network integration technology to meet operators' requirements for reducing CAPEX and OPEX.

(2) Heterogeneous computing: Different edge computing applications possess distinct service characteristics. For instance, Content Delivery Networks (CDN) require high-bandwidth video services, smart manufacturing demands low-latency deterministic networks, and smart transportation necessitates highly reliable connections with low latency. Due to the diversity of edge devices and their limited resources, it is essential to leverage various types of computing resources, such as Kunpeng, ARM, x86, GPU, NPU, and FPGA, for heterogeneous computing in ECA.

(3) Intelligent native integration: Different service demands have varying requirements for intelligence. An increasing number of service applications require networks to have specific intelligent capabilities to achieve real-time adjustment of network resources, for example, providing different Quality of Service (QoS) capabilities for different services, and scheduling delay-insensitive tasks to other idle nodes for computation to ensure timely processing of local high-priority tasks. Therefore, endogenous intelligence is required in ECA.

I. Characteristics of edge intelligent access network technology

(i) Cloud-network integration:

The edge intelligent access network breaks the historical chimney-style construction of fixed and mobile networks that were developed independently. It promotes the transformation of the traditional, relatively rigid and closed network architecture into a simple, agile, open, and converged new architecture. Cloud-network integration is a key technology here. Its development will go through three stages: network cloudification, cloud-network integration, and integration and openness. Finally, the traditionally independent cloud computing resources and network facilities will be integrated to form a system of integrated supply, integrated operation, and integrated service.

(1) The first stage is network cloudification. Fixed and mobile communication networks contain numerous traditional devices such as firewalls, switches, NAT, wireless base stations, and core networks. These devices are characterized by integrated hardware and software, implement closed architectures, and are proprietary. Each device implements a dedicated function — a chimney-style independent construction. Network cloudification transforms this traditional chimney-style architecture into a decoupled, open, and cloud-based architecture through SDN and NFV technologies. At the lowest level, it uses general-purpose hardware such as servers, switches, and memory. For specific computing or network forwarding requirements, a hardware accelerator is
used as needed. Above the hardware, virtualization technologies, along with open-source code such as Linux/KVM, QEMU, and OpenStack, are widely applied, forming an intermediate layer known as the cloud operating system. All network functions are implemented in software and run as upper-layer applications.

Figure 2.14 Software and Hardware Decoupled, Open, and Cloud-Based Network (layers shown: general hardware, hardware accelerators, and hyper-converged appliances; the cloud operating system; and software network functions such as vRAN, v5G, vBRAS, and vCPE)

This architecture has three main advantages:

Compared to proprietary hardware, the use of general-purpose hardware significantly lowers costs.

The implementation of network functions through software-defined, virtualization, and cloudification technologies allows for the rapid deployment of new services and the upgrading of existing ones, while also reducing the network's O&M costs.

In an open and decoupled architecture, different software and hardware components can be selected from different vendors, avoiding vendor lock-in. This in turn reduces costs, promotes industry prosperity, and encourages innovation.

During this construction phase, network equipment undergoes virtualization and cloudification through technologies such as SDN and NFV, primarily targeting the central cloud, including the operator's core network, data center switches, and firewalls. The cloudification of network functions mostly remains at the stage of softwarization and virtualization, and has not yet reached the cloud-native design of cloud-network integration.

(2) The second stage is the real integration of cloud and network. Network functions, from design to delivery, follow cloud-native principles, including microservices, containerization, DevOps, and continuous delivery. Additionally, cloud resource pools with different capabilities and locations, such as edge clouds, enterprise private clouds, operator communication clouds, and public clouds, can carry both network functions and service applications, collaborating to form ubiquitous computing and networking resources. In mobile networks, the 5G core network data plane is moved to the edge cloud, and scenarios like cloud desktops and cloud gaming represent the upward shift of terminal-side computation into the cloud. The coordination between edge, cloud, and network is evident. In the fixed network domain, by deploying cloud gateways near BRAS nodes, edge cloud-side applications such as cloud desktops and cloud NAS are provided to users over a Layer 2 internal network. The cloud gateway is responsible for selectively forwarding edge applications as needed and providing user interfaces for interaction with edge applications. Such scenarios belong to value-added services within the edge cloud, with additional value-added applications gradually increasing in the future.

(3) The third stage is industry integration and intelligent openness. This stage has two characteristics. On the one hand, cloud-network infrastructure is further integrated with new technologies such as AI, big data, and blockchain to create a more intelligent cloud-network infrastructure. On the other hand, the capabilities of this infrastructure are further opened up to the upper layer. Different industry applications can flexibly configure, call, and even dynamically customize cloud-network infrastructure capabilities according
to their needs, forming a deep integration of industry applications and cloud-network infrastructure.

In edge intelligent access networks, cloud-network integration has been extensively practiced in operators' networks. Firstly, in radio access networks, from base stations to the core network, global operators are driving cloudification. The core network, starting from the EPC, has undergone NFV virtualization and cloudification, and the 5G core network is now almost entirely cloudified. As for the cloudification of wireless base stations, many overseas operators have started with macro stations, while in China the cloudification of indoor small base stations is prioritized. For wired networks, broadband access gateways, BRAS, and other devices are also undergoing cloudification.

(ii) Heterogeneous computing:

In the edge intelligent access network, an essential feature of network cloudification is the use of Commodity Off-The-Shelf (COTS) general-purpose hardware. However, communication network elements have stringent requirements in terms of performance, real-time capability, and reliability. Meeting these requirements on general-purpose hardware poses several challenges:

(1) CPU instruction set: The CPU instruction set is designed to meet the diverse demands of various applications. SIMD instructions were introduced early to cater to the needs of multimedia software with a significant amount of vector operations. Many algorithms in the 5G physical layer can optimize performance using the CPU's vector instruction set. However, the number of vector computation units, the supported vector bit width, and the latency of vector operations are all limited. Additionally, the power consumption of vector computation units is often much higher than that of other types of computation units.
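As a rough illustration of the scalar-versus-vector gap (Python/NumPy here is only an analogy for SIMD dispatch, not physical-layer code): the same per-element multiply can be issued one element at a time, or over a whole contiguous buffer at once.

```python
import numpy as np

def scale_scalar(x, a):
    # One multiply per loop iteration, the way a scalar pipeline issues work.
    out = [0.0] * len(x)
    for i in range(len(x)):
        out[i] = a * x[i]
    return out

def scale_vector(x, a):
    # NumPy applies the multiply over contiguous memory in bulk; the
    # underlying kernel can use SIMD vector instructions where available.
    return a * x

samples = np.arange(8.0)
assert np.allclose(scale_scalar(samples, 2.0), scale_vector(samples, 2.0))
```

The limits noted above (unit count, vector width, operation latency) are exactly what this analogy hides: the bulk path is only as fast as the hardware's vector units allow.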
(2) The design of CPU microarchitectures introduces uncertain delays, impacting real-time capability. Mechanisms such as multi-level caches and memory prefetching are designed to optimize memory access performance; however, events like cache misses, TLB misses, and page faults significantly increase the latency of the corresponding instructions, leading to performance uncertainty. CPU microarchitectures also include design features like branch prediction and instruction scheduling, which rely on numerous prediction algorithms. While accurate predictions optimize CPU performance, incorrect predictions trigger a series of recovery mechanisms, adding performance overhead. This uncertainty poses a challenge for communication workloads with hard real-time requirements.

(3) General-purpose CPUs are designed for multitasking and shared CPU usage, and thread scheduling and switching can impact performance and real-time capability. Common mitigations involve CPU core isolation, core binding, and interrupt affinity to reduce the impact of external events on thread execution. However, configuring and optimizing these techniques is complex and often results in wasted CPU resources.

In the 5G access base station protocol stack, the physical layer includes numerous computationally intensive, real-time operations. Take Polar coding, the 5G control channel coding scheme specified by 3GPP: it is crucial in the coding chain, and its encoding process involves complex matrix operations that demand high computational power. Similarly, 5G requires highly efficient computation for operations like FFT and IFFT at a 4096-point size. Therefore, the performance and real-time requirements of the 5G physical layer pose significant challenges for the processing capabilities of general-purpose CPUs.
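The cost of the 4096-point transform is easy to make concrete (NumPy stands in for the dedicated hardware here; the operation count and the exact round-trip property are the point, not the library):

```python
import numpy as np

N = 4096  # transform size cited for the 5G physical layer above

rng = np.random.default_rng(0)
# A random complex block standing in for one symbol's worth of baseband samples.
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X = np.fft.fft(x)   # to the frequency domain
y = np.fft.ifft(X)  # back to the time domain: must reproduce the input

assert np.allclose(x, y)

# A radix-2 FFT of size N performs on the order of (N/2) * log2(N) butterflies:
butterflies = (N // 2) * int(np.log2(N))
```

At 24 576 butterflies per 4096-point transform, repeated every symbol, the real-time budget quickly exceeds what a general-purpose core can spare.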
In response to these challenges, heterogeneous architectures combining CPUs with hardware accelerators have been widely adopted in cloud-network integration. For instance, in 5G base stations, a general-purpose CPU can handle the RAN's layer-two-and-above protocol stack, while the physical-layer computation is offloaded to a hardware accelerator. Hardware accelerators can take various forms, including FPGAs, DSPs, ASICs, and GPUs.

The relationship between CPUs and hardware accelerators can be realized through multiple schemes:

The accelerator is plugged into the PCI-E slot of a general-purpose server as an independent card.

The CPU and acceleration chip are integrated and packaged together, connected through a PCI-E interface.

The CPU and accelerator are deeply integrated into a single-chip SoC solution.

These three schemes are progressively better in performance, power consumption, and cost, but their flexibility decreases accordingly.

Figure 2.15 Heterogeneous Computing Hardware Form

There are two calling modes between the CPU and the hardware accelerator: Lookaside mode and Inline mode. In Lookaside mode, invoking the hardware accelerator resembles a function call in a software system. When the application reaches the part that needs the accelerator, the CPU calls it through an API; after the accelerator completes its processing, the CPU continues executing the rest of the program. Lookaside mode is more suitable for accelerators that support only part of the physical-layer functions, such as LDPC encoding and decoding. In Inline mode, the accelerator supports Low PHY, High PHY, or the entire physical layer. When uplink data enters the base station, the data first undergoes complete physical-layer processing in the accelerator, and the CPU then handles the upper-layer protocol stack.
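The Lookaside pattern can be sketched as an ordinary function call, with a toy placeholder standing in for the accelerator API (the names and the "coding" are invented for illustration, not a real driver interface):

```python
def ldpc_encode_accel(bits):
    # Placeholder for the accelerator invocation: in Lookaside mode the CPU
    # hands one well-delimited job (here, a toy systematic "code" that appends
    # inverted bits) to the hardware and resumes when the result returns.
    return bits + [b ^ 1 for b in bits]

def process_uplink_block(bits):
    payload = list(bits)                # CPU-side work before the offload point
    coded = ldpc_encode_accel(payload)  # the only step the accelerator runs
    # The CPU continues with the rest of the pipeline after the call returns.
    return {"coded": coded, "rate": len(payload) / len(coded)}

result = process_uplink_block([1, 0, 1, 1])
```

In Inline mode, by contrast, the whole physical-layer stage would sit in the data path itself, and the CPU would see only fully processed data rather than making per-function calls.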
Heterogeneous computing brings new requirements for software. From the perspective of the cloud resource pool, when selecting the server nodes on which virtual machines or containers will run, the cloud platform scheduler needs to sense the hardware capabilities of each node and which types of hardware accelerators it hosts. It also needs to sense the hardware requirements of the applications hosted by those virtual machines or containers, so that matching hardware can be found during scheduling or migration. From the perspective of software and hardware decoupling, 5G base station software expects standardized APIs adaptable to different hardware accelerators, thereby reducing the complexity of base station software development and improving portability. The Open Radio Access Network Alliance defines the Acceleration Abstraction Layer (AAL) as the hardware accelerator abstraction layer. AAL aims to make it easier for applications to use and control hardware accelerators, facilitating their scheduling and management so as to fully utilize hardware resources. This work is currently ongoing.
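The "sense and match" step described for the scheduler can be sketched as a filter over per-node inventories (the node names and accelerator labels are hypothetical; real platforms expose this information via mechanisms such as Kubernetes extended resources):

```python
# Each node advertises its accelerator types and spare CPU capacity.
NODES = {
    "node-a": {"accelerators": {"fpga-ldpc"}, "free_cpus": 16},
    "node-b": {"accelerators": {"gpu"}, "free_cpus": 32},
    "node-c": {"accelerators": set(), "free_cpus": 64},
}

def place(required_accel, required_cpus):
    # Keep only nodes hosting the required accelerator with enough free CPU,
    # mirroring the scheduler's hardware-aware matching at placement time.
    fits = [name for name, node in sorted(NODES.items())
            if required_accel in node["accelerators"]
            and node["free_cpus"] >= required_cpus]
    return fits[0] if fits else None

assert place("fpga-ldpc", 8) == "node-a"
assert place("gpu", 64) is None  # accelerator exists, but not enough free CPU
```

The same matching must be re-run on migration, since a workload bound to an FPGA-backed function cannot land on a node without one.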
Figure 2.16 Open Radio Access Network Alliance AAL Standardization

(iii) Intelligent native integration:

Edge computing sinks computing from centralized data centers to the edge of the communication network's access network. By physically integrating the network with computing, it provides low-latency computing services in a distributed manner at the network edge, closer to users, meeting the requirements of low-latency, high-bandwidth scenarios such as video acceleration. However, while edge computing is physically deployed within edge networks, the logical orchestration and scheduling systems for computing and networks remain independent and lack flexible dynamics. This makes a unified control plane for network and computing unattainable, and makes it challenging to respond promptly to the requirements of real-time and mobile services.

For computational power integrated at the network element level, major players in the communication industry implement two main solutions:

(1) Adding independent computing boards or dedicated computing servers to network devices to achieve hardware integration of computing and networking. For example, additional computing boards are added to the 5G baseband unit (BBU) to provide extra computing power.

(2) The Computing Native Network (CNN). This solution requires no additional hardware; instead, it decouples the idle computing power of a large number of 5G baseband units from communication services to provide flexible, real-time, and low-cost computing-power support for computing applications. The computing native network tightly integrates computing power with the communication network to meet various computing needs.

The introduction of these two solutions promotes the integration of computing power and networking in the communications field, providing stronger computing power and flexibility.

The computing native network is based on ICT technologies such as 5G, cloud-native, and AI, providing connectivity, computing power, and application services for the entire industry. The network is built on top of the 3GPP 5G network protocol and achieves the decoupling of communication services from physical CPU cores through virtualization. Computing power is quantified with vCPU cores as the basic unit, and idle computing power is isolated from communication-service computing power through virtual machine and hypervisor multi-core CPU virtualization. Computing tasks are carried on the idle computing power of network elements in the base station. This decoupling ensures priority for the QoS of communication services and realizes strict isolation of internal computing-power boundaries. At the same time, a security boundary between computing resources and communication resources is set to protect the stability of communication services, and this boundary is adjusted intelligently in real time to ensure the priority and stable quality of communication services.
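A toy admission check for the isolation just described, with hypothetical numbers: a reserved share of vCPU cores is never released to computing tasks, so communication-service QoS is protected by construction.

```python
TOTAL_VCPUS = 32      # hypothetical capacity of one baseband unit
COMM_RESERVED = 20    # share strictly reserved for communication services

def admit_compute_task(requested, compute_in_use):
    # Computing tasks may only occupy the idle share; the reserved
    # communication boundary is never crossed.
    idle = TOTAL_VCPUS - COMM_RESERVED - compute_in_use
    return requested <= idle

assert admit_compute_task(8, 0)       # fits within the 12 idle vCPUs
assert not admit_compute_task(8, 6)   # would intrude on the reserved share
```

The "intelligent real-time adjustment" of the boundary described in the text would amount to tuning COMM_RESERVED from observed communication load, rather than keeping it fixed as in this sketch.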
At present, computing native technology has been successfully applied to XR remote expert-assisted maintenance in a power grid project. A power plant must undergo a month-long equipment shutdown and maintenance every year. During this period, only specially trained personnel wearing special protective suits can enter the plant for maintenance. When difficulties arise during maintenance, the staff cannot leave the plant for help, which seriously delays the entire maintenance process and causes significant economic losses. Through the computing native network, a closer connection is established between maintenance engineers, remote experts, and maintenance operations. Using 5G AR glasses that move with the maintenance personnel, visual, real-time interaction for complex maintenance processes is provided via wireless access to the 5G network, and remote experts can guide maintenance tasks through AR. The computing native network not only offers the 5G high-bandwidth, low-latency, high-definition interactive video transmission needed for AR remote expert assistance, but also supports the scheduling of computing services for scene rendering and image distribution, so that maintenance personnel can share what they see with remote experts at any time and receive real-time analysis and guidance, reducing maintenance costs and enhancing efficiency.

Based on computing native networks, mobile network infrastructure has evolved from merely providing connectivity services to a new type of unified infrastructure that provides both connectivity and computing services. This infrastructure caters to the connectivity and distributed computing needs of AI, better supporting the integrated evolution towards sensing, computing, and intelligence for 6G.

II. Future evolution of edge intelligent access networks

This chapter has introduced three key
technologies for edge intelligent access networks: cloud-network integration, heterogeneous computing, and intelligent native integration. These technologies are currently under development and deployment. Looking towards the future 6G network architecture: in December 2023, the IMT-2030 (6G) Promotion Group released the white paper 6G Network Architecture Vision. Based on the latest ITU-R scenario requirements and key capability indicators, the paper proposes enhancements and refinements to the overall 6G network architecture in terms of service scope and service capability expansion, as well as improvement of the network's own capabilities, and sets the overall direction of a platform-based service network. Future edge intelligent access networks will be a new generation of communication systems that deeply integrate communication technologies, computing networks, and AI. These networks will exhibit strong cross-disciplinary, cross-domain development characteristics. They will fully support digital transformation building on 5G networks and will be characterized by the following technical aspects:

Cloud-native technology will serve as the cornerstone of 6G network development, further promoting network innovation.

Computing networks will become the underlying infrastructure resource of communication systems, and the convergence of networks and computing will become a new development trend.

AI technologies represented by ChatGPT will become crucial applications in the 6G era, significantly impacting networks.

Distributed technologies such as blockchain will contribute to the collaborative development of centralized and distributed networks.

2.3.2 Edge Intelligent Core Network

As various industries undergo deep digital transformation, the development of applications such as smart manufacturing, smart transportation
, and the Industrial Internet of Things (IIoT) has led toB services to transition from production assistance to the core of industrial production. Many new application scenarios introduce differentiated Service Level Agreement (SLA) requirements, increasing the complexity of network operations. Integrating intelligence into network aspects such as service, experience, O&M, and green technology is a continuous demand for the expansion of new spaces and new service formats in 5G/6G, and a key factor in achieving digital and intelligent transformation. The exponential growth of network complexity is likewise a major driver towards network intelligence. This necessitates an intelligent native network architecture that possesses real-time sensing, modeling and prediction, and multi-dimensional decision-making capabilities. It should achieve resource scheduling based on network-intelligence optimization, optimize the coordination of coverage and capacity, and simplify O&M through intelligent station self-planning, self-activation, and self-healing. Network intelligence should also balance performance and energy efficiency through intelligent green technology, promoting a comprehensive transformation towards network intelligence.

Simultaneously, the number of intelligent terminal connections is growing rapidly, and data scales are expanding exponentially. Faced with massive data, traditional central clouds fall short in terms of timeliness, transmission distance, and security, particularly in scenarios like industrial manufacturing, autonomous driving, and remote healthcare. Cloud computing is no longer sufficient to meet the bandwidth, latency, and other requirements of new service scenarios. Therefore, pushing network and computational power towards the edge has become a necessity, and the edge intelligent computational network emerges as the preferred choice to meet these demands.

(i) Intelligent evolution:

The network evolution described above implies that the core network will manage a larger and more complex
amount of data. Currently, the core network lacks sufficient intelligence to provide on-demand services and higher network-resource utilization efficiency. It therefore needs global AI capabilities provided through the collaboration of distributed intelligent nodes, achieving intelligent native integration. This requires the network to natively support AI and treat AI capabilities as a fundamental network service, realizing AI as a Service (AIaaS). This will enable the network to self-learn, self-evolve, and empower industry AI, building a ubiquitous intelligent ecosystem across all industries — that is, intelligent native integration.

Literally speaking, intelligent native integration can be divided into two parts: intelligent and native. First, intelligent signifies the use of AI/ML as a core technology for the network's self-sensing, analysis, and optimal decision-making. AI technology, with its robust learning, analysis, and decision-making capabilities, coupled with distributed network AI capabilities, collaborates with terminal AI and cloud AI to achieve ubiquitous intelligence across all industries, embodying the omnipresent-AI concept. Second, native denotes innate, indicating that AI applications should be seamlessly supported in the network from the initial design phase of 6G networks. These AI applications encompass the network's own AI applications and industry-specific AI applications.

(ii) Edge deployment:

A core network based on the Service Based Architecture (SBA) can support lightweight NFV deployment, offering on-demand designed network functionalities that cater to the mobility and QoS requirements of diverse scenarios. It can be flexibly deployed for industry scenarios with new architectures and functionalities.

Mixed networking with UPF sinking: UPF and MEC can be deployed in enterprises' factories and campuses, providing data isolation and on-demand customized 5G private network services. This approach allows stacking the 5G local network applications needed by industry, featuring 5G integrated positioning, low latency, and high reliability for differentiated value-added services. However, it shares the 5GC control plane with the public network, making signaling data and network functionalities reliant on the real-time conditions of the public network.

Lightweight 5GC independent networking: AMF, SMF, UDM, and PCF network elements are deployed in enterprises' factories and campuses, with the control plane entirely sunk and a lightweight core network built locally. This establishes a high-bandwidth, low-latency, physically isolated basic connection network, ensuring complete isolation of user data from public network data, unaffected by changes in the public network. The UPF is dedicated exclusively to industry users, with both data and signaling planes strictly isolated from the public network. The capabilities within the core network can be extensively opened up to the industry, catering to more customized requirements.

I. Intelligent evolution of the edge intelligent core network

3GPP defined the NWDAF (Network Data Analytics Function) in Rel-15. It serves as the core network's AI + big data engine and is a crucial component within the core network. NWDAF collects raw data from NFs, AFs, and OAM within the core network, intelligently analyzes this data, and outputs analytical results to NFs, AFs, and OAM for network and service optimization. NWDAF aims to simplify data generation and usage within the core network, generating insights and taking actions based on them. It is responsible for data analysis and intelligent decision-making to optimize network performance and enhance the end-user experience.

Figure 2.17 Schematic Diagram of NWDAF Intelligent Framework
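NWDAF's collect-analyze-expose loop can be sketched as below (the function and field names are illustrative stand-ins, not the 3GPP service-based interface definitions):

```python
def collect(sources):
    # Gather raw measurements reported by NFs/AFs/OAM (canned samples here).
    return [m for src in sources for m in src["metrics"]]

def analyze(samples):
    # A deliberately trivial analytics step: average load across all reports.
    return {"avg_load": sum(samples) / len(samples)}

def publish(consumers, analytics):
    # Expose the analytics result to subscribed consumers (e.g. PCF, OAM, an AF).
    return {c: analytics for c in consumers}

sources = [{"nf": "AMF", "metrics": [0.4, 0.6]},
           {"nf": "SMF", "metrics": [0.8]}]
report = publish(["PCF", "OAM"], analyze(collect(sources)))
```

In the real architecture each of these steps is a standardized service operation over the SBA, and the "analytics" are trained models rather than an average; the loop structure is the point here.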
In Rel-18, further research will be conducted on enhancing NWDAF with in-network AI. This includes building commercial-grade data-privacy protection solutions by combining federated learning with mobile communication technologies to fully utilize the massive data of telecommunication networks. Additionally, during the model training and inference stages, feeding execution results back as model input data allows the model to optimize based on outcomes, thus improving analysis accuracy. Collaboration with MDAF, the intelligent analysis function in the network management domain, leverages analysis results from the management side to enhance accuracy by enriching the input information from the network side.

(i) NWDAF network service characteristics:

The core idea of edge computing is to move computing tasks from the cloud to the network edge closer to users, reducing data-transmission latency and bandwidth consumption. NWDAF, with functions such as data collection, model generation, and intelligent analysis, deploys network data analysis functions at edge nodes, enabling real-time capture and analysis of network data for real-time network monitoring and security protection. It can intelligently assess the state of a user's service experience based on multi-dimensional metrics such as uplink and downlink bandwidth, latency, and jitter, triggering network-side measures to ensure service quality. This effectively reduces data-transmission latency and improves data-processing efficiency.
309、on requirements atdifferent levels,enabling network function entities,and achieving low-cost,high-efficiency intelligent closed-loop operations for network operators.In specific service assurance scenarios,through systematic collaborationwith multiple network elements such as UPF and PCF,combining r
310、eal-timesensing,intelligent identification,and data analysis,a user experience baseline53/123is established.This facilitates dynamically triggered protection mechanisms andprovideseffectivesensingassurancetoend-users,achievingend-to-endclosed-loop service for data service experience assurance.()NWDA
311、F-related application scenarios:NWDAF is a data-aware analysis network element that,based on networkdata,autonomously senses and analyzes the network.It actively participates inthe entire lifecycle of network planning,construction,O&M,optimization,andoverall management.This facilitates easy maintena
312、nce and control of thenetwork,enhances network resource utilization efficiency,and improves userexperience.NWDAF is applicable to various service scenarios,especially significantfor large-scale applications such as the IoT and mobile internet.Firstly,in IoTapplications,NWDAF can monitor and analyze
313、IoT device data in real time,providing real-time data analysis and decision support.Secondly,in mobileinternet applications,NWDAF offers real-time network monitoring and securityprotection,safeguarding user privacy and data security.NWDAF can also beapplied in smart transportation,smart manufacturin
314、g,smart city,and otherfields.Here are some examples of application scenarios:(1)Improving system efficiency by analyzing and processing large amounts ofnetwork dataIntelligentdatacollection:Minimizesdatamovementbetweenaccess/aggregation and cloud-to-central locations.Efficient integration and testin
315、g:Built-in/pre-tested NWDAF delivery andplug-and-play interoperability between different generations(2G/3G/4G/5G).Simpler orchestration:Built-in NWDAF is part of the cloud-native networkfunction and can be deployed in the same orchestration process.(2)Customization or optimization of terminal parame
316、tersNWDAF assesses and analyzes different types of users by collecting informationon user connection management,mobility management,session management,and54/123access services.Utilizing reliable analytical and predictive models,NWDAFconstructs user profiles,determines user movement trajectories,and
service usage patterns, and predicts user behavior. Based on this analytical and predictive data, the network optimizes user mobility management parameters and wireless resource management parameters.

(3) Service (path) optimization

The Internet of Vehicles (IoV) is a crucial network technology. In the context of autonomous driving scenarios within the IoV, predicting the network performance of the base stations that vehicles are about to pass (such as QoS information and service load) plays a vital role in enhancing the QoS of the IoV. For instance, the IoV server can decide whether to continue in unmanned driving mode based on the predicted network performance. By collecting information on network performance and region-specific service loads, and using reliable network performance analysis and predictive models, NWDAF performs statistical and predictive analysis of network performance, assisting the AF in optimizing parameters.

(4) AF optimization of service parameters

Similarly, NWDAF's statistical and predictive analysis of network performance can assist the AF in optimizing service parameters.

Overall, in the field of intelligence, NWDAF realizes an endogenous form of core network intelligence, in contrast to the approach of providing AI algorithms as external plug-ins. However, challenges such as difficulty in data acquisition, ensuring data quality, data privacy protection for operators' vast data, the inability to share data between different manufacturers, and the lack of effective means to verify and
guarantee the application effects of various AI models make it difficult for NWDAF to collect effective and comprehensive data for modeling. This results in unsatisfactory model evaluation results, and the performance and efficiency of artificial intelligence fall short of expectations.

Facing a 6G network characterized by multi-domain integration, ubiquitous connectivity, and resource heterogeneity, incremental, patch-like capability enhancements can hardly meet the diverse and varied service requirements of large-scale networking. In response to these challenges, 6G networks need a unified network architecture design with endogenous intelligence, that is, a design that deeply integrates network connectivity with the three elements of AI (computing power, algorithms, and data) at the architecture level. Core technical capabilities such as security and AI are built into the 6G architecture and permeate the entire lifecycle of every field, every network, and every unit. Through this endogenous design, core technical capabilities such as security and AI are integrated with the communication network to the deepest extent. The ultimate goal is to construct a comprehensive intelligent system within the network, ensuring efficient and high-quality provision of intelligent services.

The new 6G intelligent-native integration needs to achieve multiple capabilities:

Deep integration of communication networks and AI computing: Traditional networks require frequent interaction and coordination between communication and computing protocols to provide AI services. In 6G, there is a need for a set of native AI protocols that integrate communication and computing, achieving coordinated control and support for the required AI services. This includes connectivity and
distributed computing services essential for AI, as well as integrated control of connection and computation based on AI. The deep integration of computation and communication is realized across the orchestration-management, control-plane, and user-plane dimensions.

Automated integration of AI element resources: Utilizing technologies such as federated learning and network-computing integration, 6G should automatically integrate AI element resources (connectivity, computing power, algorithms, and data) within the network. This automated integration supports interaction and integration among AI elements, providing a locally automated, integrated AI runtime environment. This ensures efficient operation of AI workflows and real-time feedback of AI service operational results. This design addresses the current inefficiencies and inaccuracies in AI technology applications within network scenarios. The design also facilitates the deployment of AI services near data sources and service objects to meet real-time requirements.

End-to-end intelligent collaboration of the network: Through distributed training, federated training between nodes in multiple regions, joint inference, and multi-agent collaboration technologies, intelligent service collaboration between multiple nodes from the terminal, wireless access, core netw