China Mobile Think Tank: 5G-A/6G New Computing Plane for Mobile Network and Computing Convergence White Paper (2024 Edition, English version)

5G-A/6G New Computing Plane for Mobile Network and Computing Convergence (2024 Edition)

Foreword

New services of 6G and future networks require the ultimate experience, which calls for the convergence of mobile network and computing. This white paper introduces the motivation, features and requirements of mobile network and computing convergence, proposes a 6G architecture and key technologies, and provides typical use cases and potential solutions. It is expected that this white paper will help the formation of industrial consensus.

The copyright of this white paper belongs to all the contributor companies. Without authorization, no entity or individual is allowed to reproduce or copy the contents of this white paper in part or in whole.

CONTENTS

Foreword
1. Introduction
1.1. Motivation of mobile network and computing convergence
1.2. Features of mobile network and computing convergence
2. Key requirements
2.1. Wide-area continuous computing service coverage
2.2. Computing resource awareness and selection
2.3. E2E experience guarantee
2.4. Terminal-in-network-edge-cloud computing network coordination
2.5. Path selection and optimization
2.6. Identification of service requests and associated QoS/resource requirements
3. Architecture considerations
3.1. Evolution of 5G and Edge system architectures
3.2. Considerations of 6G system architecture
4. Key technologies
4.1. Communication and computing converged orchestration
4.2. E2E QoS control
4.3. Dynamic UE workload offloading and scheduling
4.4. Security and privacy for computing
4.5. Computing capability exposure
5. Typical use cases and potential solutions
5.1. Case 1: Extended Reality (XR) applications offloading
5.1.1. Description and requirements
5.1.2. Potential solutions: Wireless remote computing
5.2. Case 2: Real-time gaming and AI powered by remote computing
5.2.1. Description and requirements
5.2.2. Potential solutions: Distributed rendering across client/network/cloud
5.3. Case 3: Enabling factory-scale collaborative edge robotics system
5.3.1. Description and requirements
5.3.2. Potential solutions: Edge robotics offloading system
6. Conclusion and future work
7. References
Contributors

1. Introduction

1.1. Motivation of mobile network and computing convergence

With the development of communication technology and the large-scale commercial use of edge computing, more and more data will be processed locally at the edge of mobile networks [1]. According to Gartner's forecast, more than 75% of data will be processed at the edge by 2025. In current 5G systems, the network and computing (e.g., edge/cloud computing) are separate, which makes it challenging to meet the experience requirements of some new services. According to the IMT-2030 perspective, new services such as immersive communication and holographic communication will be widely used in 6G [2]. For example, XR services require up to 100 Mbps of downlink bitrate, 5 ms of latency and over 99% frame reliability. AI cloud gaming calls for significant GPU resources to implement rendering. Industrial robotics systems need a reliability of over 99.9999%. All the scenarios above have requirements for massive in-network computing, which cannot be fulfilled via the current communication network. Thus, the next-generation system will enable ubiquity of network access and computing capability, where computing and network need to be deeply converged and developed together to maximize resource efficiency and optimize the user's experience.

Figure 1-1: New capability requirements of IMT-2030

The convergence of network and computing provides multifaceted benefits for optimizing the processing and transmission of data, which is crucial to emerging technologies and applications, including XR, AI cloud gaming, industrial robotics control, and other AI usages. A converged architecture of computing and networking supports high-quality but low-cost service delivery through coordinated performance monitoring

and joint QoS control. In scenarios of time-varying service demands and network traffic states, the convergence framework dynamically and elastically optimizes where and when data is processed, to reduce latency, maximize reliability, and enable real-time processing. Furthermore, the convergence improves resource utilization efficiency by jointly allocating resources and balancing workloads, thereby preventing bottlenecks in either the communication or computation component.

1.2. Features of mobile network and computing convergence

Mobile network and computing convergence (MNCC) is a new capability

in which mobile network and computing are deeply integrated. With it, the goals of automated deployment, optimal routing, and load balancing of services are achieved through ubiquitously deployed computing nodes. The new network infrastructure is capable of perceiving computing resources both inside and outside the network. This infrastructure enables the network to perform computing resource scheduling on demand and in real time, thereby improving computing resource utilization efficiency and enhancing user experience.

MNCC is based on mobile network capabilities and the computing infrastructure [3]. The objective is to provide integrated communication and computing services with the required E2E QoS (rate, latency, reliability). MNCC has the following features:

1) MNCC provides on-demand, integrated communication and computing services. It realizes the optimal connection between user applications and available computing/networking resources by selecting the optimal access path from user to application for the best user experience. It also features ubiquitous mobile access, terminal-network coordination, controllable experience, and high-stability communication and computing. Therefore, by incorporating computing capability, the network can provide high-quality mobile access capabilities for both communication and computing services.

2) MNCC is built on mobile network infrastructure. In addition to maintaining the traditional core capabilities of mobile networks, it introduces two key innovative architectural capabilities: the integration of network and computing, and unified scheduling of communication and computing resources. It encompasses computing resources spanning from terminal and in-network to edge and cloud, leveraging the network's unique distributed location advantage. MNCC also enhances QoS over a wide area by joint scheduling of

communication and computing resources. Consequently, MNCC provides a deterministic and optimized end-to-end service experience, where the network supports integrated networking-plus-computing resource co-scheduling.

2. Key requirements

To realize the features of MNCC mentioned above, the following key requirements must be satisfied: wide-area continuous computing capability coverage, computing resource awareness and discovery, E2E experience guarantee, terminal-in-network-edge-cloud computing coordination, and path selection and optimization.

2.1. Wide-area continuous computing service coverage

MNCC requires the capability of wide-area continuous computing service coverage, necessitating the joint planning of network coverage, user plane and edge computing nodes. In wide-area mobility scenarios, MNCC determines the consistency mode for service experience according to the requirements of computing tasks, e.g., whether the computing task is delay-sensitive, continuity-sensitive, or both.

For delay-sensitive tasks, MNCC selects, adjusts and schedules the optimal computing node (e.g. the computing node closest to the terminal) for the computing task, based on real-time terminal location, to ensure the lowest delay. For continuity-sensitive tasks, after computing node selection, MNCC should guarantee that the computing task stays anchored to the selected computing node and adjust the forwarding path to ensure data consistency.

2.2. Computing resource awareness and selection

Computing resources are the foundation of MNCC, and therefore awareness of computing resources is a key capability to possess. In computing and network convergence, different data processing scenarios have different computing requirements. The key tasks for building the convergence, deploying services, and accurately scheduling computing resources are as follows: how to model and measure computing capability while shielding the heterogeneity of computing resources; how to atomically abstract computing services to provide unified computing capabilities for applications; and how to establish a unified model for different workload distribution modes:

Network- or

cloud-side computing. Data sent by the terminal needs computing at the network or cloud side. MNCC requires the capability of selecting computing resources based on the task requirements and assigning computing tasks to appropriate computing nodes.

Terminal-network-cloud collaboration. When a service is initiated by a terminal, the network needs to determine whether the terminal possesses enough computing resources to execute the computation locally, or whether terminal-edge collaborative computing is needed. In the latter case, MNCC should have the capability to discover the required computing resources and schedule them for the task.

For both modes, the selection of computing resources requires policies based on terminal location, E2E QoS requirements, network coverage and capacity, current and historical network QoS information, computing node deployment (e.g. location, resource type, resource quantity, bandwidth) and user subscriptions.
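A selection policy of this kind can be sketched as a filter-then-rank step. The sketch below is illustrative only: the field names, the feasibility conditions, and the tie-breaking order are assumptions for demonstration, not attributes defined by this white paper.

```python
# Hypothetical sketch of the computing-node selection policy described
# above: hard requirements filter the candidates, then latency and load
# rank them. All field names and values are illustrative.
def select_node(task, nodes):
    """Pick the node that satisfies the task's hard requirements and
    minimizes a (latency, load) score."""
    def feasible(n):
        return (n["resource_type"] == task["resource_type"]
                and n["free_capacity"] >= task["demand"]
                and n["latency_ms"] <= task["max_latency_ms"])
    candidates = [n for n in nodes if feasible(n)]
    if not candidates:
        return None  # trigger wider discovery or reject the task
    # Prefer low latency; break ties by lightest load.
    return min(candidates, key=lambda n: (n["latency_ms"], n["load"]))
```

In a real system the feasibility predicate would also consult subscription data and historical QoS, as listed above.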

2.3. E2E experience guarantee

The E2E experience guarantee of an application for a user stems from both the communication and computing segments. Legacy mobile communication and edge/cloud computing are two separate systems, and scheduling is done independently on each side. Therefore, there is no deterministic mechanism to guarantee the E2E experience. With communication and computing converged in one system, the scheduling of communication and computing tasks can be jointly controlled to ensure the E2E experience. By knowing the capabilities, status and performance of both the communication

and computing segments, the scheduling of the communication and computing resources can be dynamically adjusted to meet the E2E experience requirements without over-provisioning.

2.4. Terminal-in-network-edge-cloud computing network coordination

MNCC should make the best use of the computing resources across multiple domains, including the central cloud, edge cloud, in-network computing, and terminals. Specifically, MNCC provides ubiquitous computing resources physically interconnected among these domains, and synergistically orchestrates computing tasks among them. For example, if an object recognition and detection task before XR rendering is implemented on the cloud side, it will lead to a large amount of data transmission and a correspondingly high network bandwidth requirement. However, if the computation is performed only on the terminal side, it risks a large rendering/processing delay due to the limitation of computing resources, thereby lowering user experience. Therefore, the computing resources of the terminal, in-network nodes and edge need to be jointly utilized, where each entity handles a portion of the computing task to minimize the service delay and the terminal's energy consumption. As an example, edge computing supports complex inference (e.g. a large AI/ML model), while the terminal performs a small amount of data preprocessing. Moreover, by considering global computing resources, and by collaborative scheduling of computing and network resources in accordance with the service requirements, MNCC can further improve the experience of users at scale.

2.5. Path selection and optimization

To access computing nodes for services, path selection and optimization should be performed in the network. In the existing path selection mechanism, the network selects the UPF (User Plane Function) via the SMF (Session Management Function) based on the availability and load of existing UPF resources within the SMF's service area. However, it neither fully considers the balance of resources on a global scale nor the alignment between user QoS requirements and user plane paths. Therefore, it cannot select the optimal user plane path dynamically and

efficiently to provide the ultimate computing experience to the user. For control plane path selection, when a user accesses the mobile communication network, it is necessary to consider the real-time status management and control capabilities of the mobile network, including global network resources and computing node workloads, to select the most suitable access control network function for the user.

As a result, MNCC needs to utilize the network to dynamically build interconnection between distributed computing and storage resources. Through multi-dimensional coordinated allocation of network, storage, and computing resources, an application can be scheduled among the widely distributed computing resources in the MNCC. By jointly considering and controlling the communication delay (including propagation delay and transmission delay), global optimization of computing resource usage can be achieved with guaranteed service quality and user experience.
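The joint treatment of propagation, transmission and computing delay can be written as a simple additive cost model. The sketch below is a minimal illustration under assumed units (delays in ms, bandwidth in Mbps, compute capacity in GOPS); none of the figures or field names come from the white paper.

```python
# Illustrative sketch: choose the (path, node) option minimizing
# total delay = propagation + transmission + computing, subject to a
# QoS bound. All names and numbers are invented for illustration.
def total_delay_ms(option, data_mbit, workload_gops):
    prop = option["prop_ms"]                        # propagation delay
    trans = data_mbit / option["bw_mbps"] * 1000.0  # transmission delay
    comp = workload_gops / option["gops"] * 1000.0  # computing delay
    return prop + trans + comp

def best_option(options, data_mbit, workload_gops, qos_bound_ms):
    scored = [(total_delay_ms(o, data_mbit, workload_gops), o) for o in options]
    feasible = [(d, o) for d, o in scored if d <= qos_bound_ms]
    return min(feasible, default=(None, None), key=lambda x: x[0])[1]
```

Note how a nearby node with a slow link can lose to a farther node with ample bandwidth and compute, which is exactly why path and node must be selected jointly.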

2.6. Identification of service requests and associated QoS/resource requirements

In some terminal-in-network-cloud collaboration use cases, such as federated learning and UE computation offloading, application modules or computing tasks will be dynamically deployed or offloaded to computing nodes in the network based on the UE's requests and the available resources.

The UE's request can be an over-the-top application request or a specific computing service request to the network entities. Regardless of how service requests are expressed and transported, the network needs to identify the requests and resolve the required resource and QoS requirements, including the number of application modules or computing tasks to be deployed, the amount of computing resources required, the expected latency, the bandwidth across the selected path, and other necessary requirements. These requirements need to be reflected explicitly or implicitly in the UE's request.

3. Architecture considerations

3.1. Evolution of 5G and Edge system architectures

In the 5G era, the 5GC and the Edge Data Network are two separate networks that are operated, controlled and managed independently. The Edge Data Network can consume some 5GC network capabilities through the NEF (for example, over the EDGE_7, EDGE_2 and EDGE_8 interfaces; see 3GPP TS 23.558 [4]) to fulfil the requirements of edge service operations, such as AF (Application Function) traffic influence functionality, UE location information, establishment of an AF session with QoS, and QoS monitoring capability.
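A request of this kind is sketched below. The endpoint path and field names only follow the general shape of a NEF-style northbound "AF session with QoS" request; they are simplified illustrations, not the normative payload of 3GPP TS 23.558 or the NEF API specifications.

```python
import json

# Illustrative sketch of an edge platform asking the 5G core (via a
# NEF-style northbound API) for an AF session with QoS. The path and
# field names are simplified examples, not normative 3GPP definitions.
def build_qos_session_request(ue_ip, qos_reference, notify_url):
    return {
        "method": "POST",
        "path": "/af-session-with-qos/v1/subscriptions",  # illustrative
        "body": {
            "ueIpv4Addr": ue_ip,
            "qosReference": qos_reference,          # operator-defined QoS profile
            "notificationDestination": notify_url,  # where QoS reports go
        },
    }

req = build_qos_session_request("198.51.100.7", "qos-profile-lowlatency",
                                "https://edge.example.com/qos-events")
print(json.dumps(req["body"], indent=2))
```

The notification destination is what lets the edge side receive QoS monitoring reports, i.e. the limited coordination discussed next.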

However, these capabilities are not sufficient to ensure and optimize the E2E QoS that stems from both the communication and computing segments. With this limited coordination, the 5G core network is not aware of the QoS provided by the Edge Data Network, and the Edge Data Network has

no knowledge about the network conditions of the AN (access network) or the terminal. Therefore, there is no way to coordinate resource management and traffic scheduling on both sides to ensure and optimize the E2E QoS. To enable MNCC and control the E2E QoS between the 5G system and the Edge Data Network, the coordination between the two systems needs to be extended to support:

the 5G system obtaining the information and status of the Edge server, such as location, capacity, usage, bandwidth, latency, throughput, etc.; and

the 5G system and/or the Edge Data Network obtaining the information of the Access Network and terminal, such as latency, throughput, capacity, coverage area, number of users, reliability, etc.

The extended coordination could be achieved by enhancing the interfaces (EDGE_7, EDGE_2 and EDGE_8) between the 5GC and the Edge Data Network, and the interfaces (e.g., EDGE_M) between the Management and Orchestration Systems of both sides, as illustrated in Figure 3-1.

Figure 3-1: Enhanced coordination between 5G network and Edge Data Network

3.2. Considerations of 6G system architecture

The architecture of 6G MNCC is illustrated in Figure 3-2. In the 6G system [5], computing execution is integrated in the terminal, network and edge to execute computing tasks. MNCC is enabled by both continuum scheduling and E2E session management. Continuum Scheduling includes joint communication scheduling and computing task scheduling. The communication scheduling part orchestrates the communication functions, including deployment, scaling and healing of the functions. The computing task scheduling part identifies the requirements of computing tasks, decomposes and summarizes the tasks based on those requirements, and completes the dynamic deployment of applications based on the capability, capacity and E2E performance of the available network and computing resources. Communication scheduling and computing task scheduling work in a coordinated way, with each considering the status of the other. For example, the capacities of a communication function and an application may need to be expanded (by scaling out) simultaneously to support more users in an area.

Figure 3-2: Architecture of 6G MNCC

The E2E Session Management function supports Communication Session Management and Computing Session Management. Communication Session Management controls the QoS for the communication session, like the SMF in the 5G system. Computing Session Management is responsible for computing session modeling, and for computing QoS and session continuity control. The Computing Session Management part propagates the computing QoS parameters for the computing session to the network functions or computing nodes containing the computing resources, to control the execution and scheduling of the computing tasks. Communication Session Management and Computing Session Management work jointly to control and guarantee the E2E QoS of the applications for users. For example, with knowledge of the communication status, the computing scheduling can be adjusted to meet the E2E requirements without over-provisioning of computing resources, and vice versa. Together with the enhanced terminal, network, edge and cloud, these functions provide support for global control of computing sessions, mobility management, and computing-network collaboration, as well as efficient and on-demand orchestration of computing resources, satisfying the requirements of 6G new service scenarios with higher computing speed, greater concurrent computing capability, and other high-performance computing requirements of mobile networks.

4. Key technologies

4.1. Communication and computing converged orchestration

The infrastructure of MNCC

is composed of numerous computing nodes. In the foreseeable future, the total number of computing and networking nodes is expected to reach a magnitude of thousands or even higher, forming a distributed resource network with extremely complex topology. Therefore, achieving precise and efficient allocation and scheduling of resources in a network of such scale and complexity has become a critical challenge.

From the perspective of commercial availability and user-friendliness, tenants of the computing resources may lack both the willingness and the capability to analyze their requirements. Consequently, they may not be able to state proper requirements. Instead, they may only provide requirements on experience as perceived at the service level. Besides, incentive mechanisms may need to be designed to encourage the tenants of the computing resources to be involved in computing tasks.

In this case, MNCC needs the capability to translate the high-level, coarse-grained, and abstract service requirements of tenants into specific, detailed, and implementable requirements on computing and networking resources. It should also integrate information such as the geographical location, network conditions, service load, and type of each computing node in the current network to achieve precise and intelligent scheduling. This scheduling ensures that the tenants' original service requirements are fulfilled effectively.
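The translation step can be pictured as a catalog lookup that maps an experience-level request to resource-level demands. The catalog entries, names and numbers below are invented for illustration; a real orchestrator would derive such profiles from measured service behavior.

```python
# Hypothetical sketch of translating a tenant's service-level request
# into implementable resource requirements. Profiles are illustrative.
SERVICE_CATALOG = {
    "cloud_gaming_1080p60": {
        "gpu_tflops": 4.0, "downlink_mbps": 30, "e2e_latency_ms": 20},
    "xr_offload_4k90": {
        "gpu_tflops": 8.0, "downlink_mbps": 100, "e2e_latency_ms": 10},
}

def translate(tenant_request):
    """Map an 'experience-level' request to resource demands."""
    profile = SERVICE_CATALOG[tenant_request["service"]]
    users = tenant_request["concurrent_users"]
    return {
        "gpu_tflops": profile["gpu_tflops"] * users,      # aggregate compute
        "downlink_mbps": profile["downlink_mbps"] * users,  # aggregate bandwidth
        "e2e_latency_ms": profile["e2e_latency_ms"],      # per-session bound
    }
```

The aggregate demands then feed the placement logic, while the per-session latency bound constrains which nodes and paths are eligible.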

4.2. E2E QoS control

Given that the existing, mutually independent QoS control mechanisms of communication and computing cannot satisfy the new E2E QoS requirements, it is necessary to introduce a joint QoS control mechanism within the MNCC framework covering both the communication and computing segments.

Communication QoS control follows an approach similar to that of legacy networks, with additional knowledge of the status on the computing side. This enables the communication QoS control to be adaptive to the dynamic E2E environment, rather than being fixed and focused solely on the radio network.

Computing QoS control can be achieved in the following ways:

The terminal-side application marks the QoS requirements with specified parameters, and the network translates these parameters into the requirements for computing resources, including the type of computing resources, the amount of computing resources, the communication latency, and the computing latency, which, together with additional information such as UE location, are necessary to support computing task scheduling and computing resource allocation.
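One concrete piece of this translation is splitting the terminal-marked E2E latency budget between the two segments. The sketch below is a minimal illustration; the parameter names and the simple "reserve measured communication latency, give the rest to computing" policy are assumptions, not parameters defined by this white paper.

```python
# Illustrative sketch of splitting a terminal-marked E2E latency budget
# into communication and computing shares. Names/policy are invented.
def split_latency_budget(e2e_ms, measured_comm_ms):
    """Reserve the measured communication latency and hand the rest of
    the E2E budget to the computing segment."""
    computing_ms = e2e_ms - measured_comm_ms
    if computing_ms <= 0:
        raise ValueError("communication alone exceeds the E2E budget; "
                         "reselect the path or the computing node")
    return {"communication_ms": measured_comm_ms,
            "computing_ms": computing_ms}
```

The computing share then bounds the admissible computing nodes, which is why the two session management functions must act on a shared view.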

The network establishes sessions based on both communication and computing QoS parameters. The computing and networking session management functions can select different forwarding paths and computing nodes to process the requested computing tasks. The type and size of a computing task affect the selection of computing nodes with respect to the conditions of the network and computing segments, while the communication latency determines the selection of user plane paths. For these reasons, the selection of computing resources and communication paths should follow a unified policy to guarantee E2E QoS.

4.3. Dynamic UE workload offloading and scheduling

With pervasive computing resource distribution within the network, the efficiency of workload offloading and scheduling is heavily impacted by network conditions. This section explores how workloads can be dynamically offloaded and scheduled across different remote execution environments and discusses the impact of communication link latency and congestion on offloading decisions, where the computing capabilities in remote execution environments include Hardware as a Service (HaaS), Platform as a Service (PaaS), and Software

as a Service (SaaS). Effective optimization strategies with respect to these factors are essential for enhancing E2E QoS performance and optimizing resource utilization, and consist of the following aspects:

1) Workload modeling. Modeling of tasks involves identifying the computational requirements, data dependencies, bandwidth requirements, and execution timelines of tasks. Accurate models help in predicting the performance impacts of offloading and in making informed decisions about where and when to offload tasks.

2) Workload mobility. This aspect involves continuous and efficient execution of tasks as devices move between different locations, which requires managing the handover of tasks across access networks and execution environments while adapting to changing network conditions.

3) Workload distribution and synchronization. Efficient distribution and synchronization of workloads across multiple nodes and environments are vital to enable harmonized parallel computing with distributed resources.

4.4. Security and privacy for computing

MNCC brings new challenges for security:

Each application has its specific security requirements, such as data governance and compliance when financial transaction or healthcare data are part of the application service. Consequently, the applications running on the same computing node may have different levels of security requirements, which call for different protection mechanisms. This issue becomes even more challenging in software-defined networks, in which the network functions are perpetually mission-critical.

These challenges give rise to the need for new security considerations.

1) Security-aware orchestration

The highly distributed computing and communication system contains security domains that are protected with different levels of security mechanisms, and a workload needs to be deployed into the appropriate domain based on its security requirements. Therefore, the orchestration function needs to enforce a trusted software supply chain by using a white-list to deploy approved software into the corresponding security domain. Confidential computing adds memory encryption at the workload level, and provides security and privacy between network functions and the various edge applications/services.

2) Workload isolation

In the scenario where network functions and applications are deployed onto the same physical server, the hypervisor needs to ensure that the network function and the application will not cause security issues during data transmission, storage and processing between them. In addition, the hypervisor needs to provide isolation mechanisms for workloads to prevent attackers from accessing them. Given that the network function is critical infrastructure, a zero-trust architecture should be built into the server for the applications and network functions.

3) Network layer security

Various methods can be used to improve network security, including rule-based methods, such as network ACLs (Access Control Lists), and traditional pattern-based or AI-based methods for anomaly detection. Once a security issue is detected, actions including blocking the traffic, changing the ACL, moving workloads to different network domains, etc., should be taken immediately to protect the network from further attacks.

4) Application layer security

Various attacks target the application layer, where

attackers attack the workloads via JavaScript, files (PDF, PE, Word files), DNS, or command & control. The system needs to have mechanisms in the application layer to prevent those attacks and keep workloads secure.

5) Data Loss Prevention

Keeping data secure and preventing data breaches is one of the most important aspects of workload security. Data loss prevention should be top-prioritized when considering how and where to deploy workloads. The system needs to use security-aware orchestration and detection methods, such as pattern matching and AI, to prevent any kind of data loss. Moreover, securing data with lifecycle management spanning data at rest, data in transit and data at runtime should be tackled by the system. Among the new security technologies, post-quantum cryptography can be considered a promising way of achieving better security.

6) AI-driven behavior analysis and vulnerability scanning

The system exhibits different behaviors when operating normally versus under cybersecurity attack. A promising type of approach is AI-based techniques that analyze platform telemetry and CPU, memory and I/O metrics to build a behavior model. When system behavior deviates from normal behavior, the security protection mechanism triggers and enforces a vulnerability scan for system security auditing, responds to the threats, and remedies any incurred issues.

4.5. Computing capability exposure

The capability exposure of MNCC includes the exposure of network capabilities and computing capabilities and can be categorized into two types: i) exposure to the network itself and ii) exposure to external applications and industries.

The exposure to the network itself enables the mobile communication network to be aware of the available computing resources, which is essential to a unified system architecture with inherent networking and computing capabilities. On the other hand, the exposure to external applications and industries enables the mobile communication network to provide computing services to external industrial applications. MNCC may also provide a platform for exposing the computing resources to application tenants, besides the communication services. Through the capability exposure interface, MNCC provides its tenants services that are in line with the operational habits of cloud computing tenants, which saves users from having to learn about diverse network access and complex networking logic in their tenant environments.

5. Typical use cases and potential solutions

5.1. Case 1: Extended Reality (XR) applications offloading

5.1.1. Description and requirements

The human-machine interface has always been essential to empower users to fully leverage new technological advances. XR encompasses three types of experiences: Virtual Reality (VR), Mixed Reality (MR) and Augmented Reality (AR). In VR experiences the user is fully immersed in a computer-generated 3D environment. Thus, the user is completely detached from the real world and anchored to the digital world [6]. In recent years, VR devices have enabled the visualization of a 3D video pass-through, letting the user interact with the real world [7]. However, users are still anchored to the digital world. These new types of devices are known as MR devices. This approach can mimic the behavior of AR devices, where the user can see the real world directly, i.e., the user is anchored in the real world, and 3D assets are overlaid seamlessly on the real environment through the use of see-through holographic lenses [8]. In years to come, XR devices are expected to combine the main benefits of the aforementioned devices, enabling fully immersive experiences in both the digital and the real world [9].

Table 5-1: Specifications of state-of-the-art VR/MR and AR devices vs ideal devices

| Metric | Quest 3 | Ideal VR | Microsoft HoloLens 2 | Ideal AR |
|---|---|---|---|---|
| Resolution (MPixels) | 9.1 | 200 | 4.4 | 200 |
| Field-of-view (degrees) | 110 | Full: 165x175; stereo: 120x135 | 52 diagonal | Full: 165x175; stereo: 120x135 |
| Refresh rate (Hz) | — | 144 | — | — |
| Motion-to-photon latency (ms) | 20 | 20 | 9 | 7 |
| Silicon area (mm2) | N/A | 100-200 | 173 | 100 |
| Weight (grams) | — | — | — | — |

Even though current devices succeed in demonstrating the potential of such new interfaces, their current state of the art is far from what is considered ideal. To be fully embraced by the mainstream consumer, XR devices must have

a combination of low weight, high resolution and frame rate, and low latency. Tables 5-1 to 5-4 show a spec comparison between current XR devices on the market and their ideal specifications [10], VR gaming wireless offloading requirements, XR enterprise collaboration and workplace productivity wireless offloading requirements, and XR social experiences wireless offloading requirements, respectively. As can be seen, device weight, resolution and power consumption are still far away from the ideal values. To close the gap, the industry is exploring the offload of XR compute modules to the edge/cloud. In what follows, we explore the architecture of such systems and how 5G/6G will play a pivotal role in enabling such solutions.

Table 5-2: VR gaming wireless offloading requirements. Reported latencies refer to the wireless link only

| Traffic stream | KPI | Specification |
|---|---|---|
| Video frames (downlink) | Throughput | 100 Mbps to 200 Mbps (4k/8k, 72-120 Hz) |
| | Latency | P75: 5 ms; P95: 10 ms; P99: 50 ms |
| Pose/IMU/controller commands (uplink) | Throughput | 2 Mbps |
| | Latency | P90: 2 ms; P99.9: 10 ms |
| Mic audio (uplink) | Throughput | 1 Mbps |
| | Latency | P90: 10 ms; P99.9: 15 ms |
| Haptics (downlink) | Latency | P90: 10 ms; P99.9: 15 ms |
| Audio (downlink) | Throughput | 2 Mbps |
| | Latency | P90: 10 ms; P99.9: 15 ms |

Table 5-3: XR enterprise collaboration and workplace productivity wireless offloading requirements. Reported latencies refer to the wireless link only

| Traffic stream | KPI | Specification |
|---|---|---|
| Video frames (downlink) | Throughput | Min. 30 Mbps (1080p, 72 Hz) |
| | Period | Nominal: 13.8 ms; P99: 20 ms |
| | Latency | P75: 10 ms; P95: 15 ms; P99: 20 ms |
| Pose/IMU/controller commands (uplink) | Throughput | 2 Mbps |
| | Latency | P90: 2 ms; P99.9: 10 ms |
| Mic audio (uplink) | Throughput | 1 Mbps |
| | Latency | P90: 10 ms; P99.9: 15 ms |
| Haptics (downlink) | Latency | P90: 10 ms; P99.9: 15 ms |
| Audio (downlink) | Throughput | 2 Mbps |
| | Latency | P90: 10 ms; P99.9: 15 ms |

Table 5-4: XR social experiences wireless offloading requirements. Reported latencies refer to the wireless link only

| Traffic stream | KPI | Specification |
|---|---|---|
| Video frames, app data | Throughput | 20-30 Mbps (60-120 Hz video) |
| | Latency | P75: 10 ms; P95: 15 ms; P99: 50 ms |
| Sensor data, audio | Throughput | Up to 5 Mbps |
| | Latency | P90: 10 ms; P99: 20 ms |

5.1.2. Potential solutions: Wireless remote computing

Most popular head-mounted displays (HMDs) are self-contained. The body of the headset contains not only the XR screen and lenses, but also a System-on-Chip (SoC) to locally render the experience and to process any sensing and/or input device tracking, e.g. hand controllers.
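Offloading part of that SoC workload over a wireless link is only viable when the link meets percentile latency targets such as those in Tables 5-2 to 5-4. A small check of measured samples against such targets is sketched below; the nearest-rank percentile and the sample data are illustrative choices, and the thresholds used in the usage example mirror the Table 5-2 downlink-video row.

```python
# Check measured one-way latency samples (ms) against percentile targets
# like those in Table 5-2. Uses a simple nearest-rank percentile; the
# sample data in any usage is invented for illustration.
def percentile(samples, p):
    s = sorted(samples)
    # nearest-rank: smallest value with at least p% of samples <= it
    k = max(0, -(-len(s) * p // 100) - 1)  # ceil(n*p/100) - 1
    return s[int(k)]

def meets_targets(samples, targets):
    """targets: {percentile: max_ms}, e.g. {75: 5, 95: 10, 99: 50}."""
    return all(percentile(samples, p) <= limit
               for p, limit in targets.items())
```

For example, a link whose samples are mostly 3 ms with occasional 8 ms spikes satisfies `{75: 5, 95: 10, 99: 50}`, while a link with uniformly spread 1-100 ms samples does not.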

of the headset contains not only the XR screen and lenses, but also a System-on-Chip (SoC) to locally render the experience and to process any sensing and/or input device tracking, e.g. hand controllers. In some instances, the SoC can render high fidelity images, but this comes at a cost of weight and convenience [9, 11]. In other approaches, HMDs are lightweight but with limited field-of-view and low fidelity images [12]. Combining the benefits of these two opposite approaches is possible by offloading some of the SoC processing to a separate compute unit. To enable a mobile user-friendly experience, offloading can be enabled by highly reliable and deterministic wireless communications.

Figure 5-1: Wireless remote compute architecture using 6G networks

Figure 5-1 shows an example of the different modalities of XR compute offloading [13]. Current state-of-the-art XR devices perform simultaneous localization and mapping (SLAM) and mapping optimization using point cloud data sets. These functions allow the XR device to build a map of the environment and locate itself within it. In addition, object detection and tracking enables the 3D semantic segmentation of the perceived environment, allowing a more realistic interaction between real and digital objects. Finally, the rendering and multimedia processing takes care of rendering the scene, computing image re-projection, and encoding and decoding streaming video. In the figure, compute modules are organized based on their time criticality and compute requirements. Thus, high offload scenarios are only suitable for wireless networks supporting both high determinism/low latency and high throughput.

In the low offload scenario, most of the processing happens locally in the HMD. As the local spatial map is created and localization happens in the device, associated point clouds are compressed and sent to the cloud through 5G/6G connectivity, where a point cloud is integrated into an existing global spatial map. The medium offload scenario considers the point cloud and local spatial map generation done in the network/edge/cloud. This is achieved by transmitting compressed video collected by the HMD cameras to the network/edge/cloud where it is processed. In addition, in this scenario object detection is also performed. In the extreme scenario, object tracking and HMD localization using SLAM are also offloaded. In this case, sensor data including inertial measurement unit and LiDAR are also sent to the network/edge/cloud. In all three cases, rendering can be performed locally or remotely through the rendering and multimedia processing module. As an example, Table 5-5 shows the high level requirements expected for compute rendering [13, 14].

Table 5-5: Video streaming requirements for cloud gaming, VR and AR experiences

Use cases | DL bitrates (Mbps) | UL bitrates (Mbps) | Motion-to-photon latency (ms) | Frame Reliability (%)
Cloud gaming | 8-30 | 0.3 | 10-30 | 99
VR | 30-100 | 2 | 5-20 | 99
AR | 2-60 | 2-20 | 5-50 | 99

5.1.2.1. Distributed offloading and QoS control

To support different modalities of XR computing offloading, the corresponding computing functions need to be
pre-deployed in the system. According to the QoS requirements of the workload, the orchestration function deploys the computing functions to the appropriate compute nodes in the network or edge, based on the knowledge of both the communication and computing capability and performance associated with the compute nodes. Each compute node serves a certain area where the E2E QoS requirements of the XR devices can be met.

When an XR device requests offloading of a specific workload, the orchestration function selects the appropriate compute node where the corresponding computing function has been installed, or installs the computing function if it has not been installed yet, to serve the XR device.

After the compute node is selected, the communication and computing session management function establishes the communication and computing sessions for the offloading with the corresponding QoS configurations to meet the E2E QoS requirements. If the performance on the communication or computing side cannot be kept for some reason, the communication and computing session management function dynamically adjusts the QoS configurations on the other side, or reports to the orchestration function to select another computing function associated with the compute node for guaranteeing the QoS requirement.
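As a rough illustration of the selection logic described above, the sketch below picks a compute node against an E2E latency budget; the node attributes, names and values are illustrative assumptions, not any standardized interface.

```python
# Illustrative sketch: QoS-driven compute node selection for XR offloading.
# Node attributes (net_latency_ms, compute_latency_ms, functions, load) are
# hypothetical; a real orchestrator would obtain them from monitoring.

def select_compute_node(nodes, workload, e2e_budget_ms):
    """Pick the least-loaded node whose communication + computing latency
    fits the E2E budget, preferring nodes with the function pre-installed."""
    candidates = []
    for node in nodes:
        total = node["net_latency_ms"] + node["compute_latency_ms"]
        if total <= e2e_budget_ms:
            # Installing the function on demand implies one-off setup delay.
            setup_penalty = 0 if workload in node["functions"] else 1
            candidates.append((setup_penalty, node["load"], total, node))
    if not candidates:
        return None  # report to orchestration: no node can meet the QoS
    return min(candidates, key=lambda c: c[:3])[-1]

nodes = [
    {"name": "edge-1", "net_latency_ms": 4, "compute_latency_ms": 8,
     "functions": {"slam"}, "load": 0.7},
    {"name": "metro-1", "net_latency_ms": 9, "compute_latency_ms": 5,
     "functions": set(), "load": 0.2},
]
best = select_compute_node(nodes, "slam", e2e_budget_ms=20)
print(best["name"])  # edge-1: meets the budget with the function installed
```

If no candidate fits the budget, the function returns None, mirroring the report back to the orchestration function described above.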

5.1.2.2. Communication-aware offloading control

Using the example described above, there are multiple ways offloading can be optimized. For instance, the XR application can choose dynamically how to operate depending on the 5G/6G network conditions and the compute capabilities at the network/edge/cloud. Thus, the application can choose to operate in low offload mode when network conditions are poor, or when not enough compute is available at the network/edge/cloud. Likewise, under ideal network and compute conditions, the application might select to operate in high offload mode.
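A minimal sketch of this mode selection follows; the throughput, latency and compute thresholds are assumptions for illustration only, not values from the tables above.

```python
# Illustrative sketch: choosing the XR offload mode from network/compute
# state. Threshold values are illustrative assumptions.

def choose_offload_mode(throughput_mbps, latency_ms, remote_compute_free):
    """Return 'high', 'medium' or 'low' offload mode."""
    if throughput_mbps >= 100 and latency_ms <= 10 and remote_compute_free >= 0.5:
        return "high"    # SLAM, tracking and rendering can move to the edge
    if throughput_mbps >= 30 and latency_ms <= 20 and remote_compute_free >= 0.2:
        return "medium"  # offload spatial map generation and object detection
    return "low"         # keep most processing on the HMD

print(choose_offload_mode(150, 8, 0.6))   # high
print(choose_offload_mode(50, 15, 0.3))   # medium
print(choose_offload_mode(20, 40, 0.9))   # low
```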

A different approach to this method is to use dynamic Quality-of-Service (QoS) while operating in high offload mode. For instance, the video uplink data stream can change its QoS dynamically depending on the XR user's actions to allow for a better utilization of the channel bandwidth. If the user is not moving their head quickly, e.g. reading a document, SLAM might not require high video frame rates in the uplink to achieve good XR pose estimation. Thus, dropping packets (due to low QoS tagging) might be tolerated. In the opposite scenario where the user's head movement is high, e.g. VR gaming, tagging uplink video traffic is required to enable sufficient SLAM performance for head-tracking. This process can be done dynamically, as oftentimes XR users alternate periods of significant and moderate head movement.

5.2. Case 2: Real-time gaming and AI powered by remote computing

5.2.1. Description and requirements

Real-time gaming is a key workload that constantly demands system-level improvements in the hardware ecosystem (CPU/GPU/AI accelerators/memory speed/display, etc.) and the software ecosystem (multi-GPU support, enabling network/edge/cloud-based systems) to deliver a compelling gaming experience to the end user. The following figure illustrates the key challenges (latency, quality, throughput) in gaming as it shifts today from the client gaming to the cloud gaming model. While providing higher throughput, cloud gaming comes at the cost of increased latency to stream the rendered content to the client.

Figure 5-2: Real-time gaming requirements

There are two kinds of cloud gaming execution model: Frame Streaming and Command Streaming. In the frame streaming execution model, the game application is launched on the cloud instance and rendered frames are encoded and streamed to the client device. The client system decodes the video stream, displays the content and continuously transmits the user inputs to the cloud instance to be sent to the gaming process. In the command streaming model, the game application is launched on the cloud instance, but instead of rendering the frame, the APIs to render the frame
along with the game assets are transmitted to the client system to execute using the client GPU. The frame streaming model has become quite popular as it eliminates the requirement for a medium/high end GPU on the client device, allowing any user to play the latest games at the highest quality settings on their personal computer, tablet or smartphone.

Cloud gaming faces the following challenges:

1) End-to-end latency. End-to-end latency is the time measured between user inputs (keyboard/mouse/game controller) and visually seeing a display update on the screen. The following diagram shows the breakdown of the multiple stages these inputs travel through, from the client system to the cloud and back to the display. The red blocks represent variable processing delays, which account for the overall variable end-to-end latency in the system. A better gaming experience not only requires low latency but also demands a bounded latency irrespective of the many factors (shown in red blocks below) that contribute to the overall variable latency in the end-to-end system.

Figure 5-3: E2E latency challenges

2) Frame quality. The loss of frame quality is due to network congestion and bandwidth limitations. These unpredictable network conditions highly impact the frame quality during the frame encoding process, which not only impacts high frequency details in the rendered frame but also unnecessarily wastes GPU computing cycles to render them at the highest quality settings.

Figure 5-4: Frame quality challenges

5.2.2. Potential solutions: Distributed rendering across client/network/cloud

To solve the latency and quality challenges in real-time gaming, the first step is to place the rendering workload(s) on the most appropriate computing node. This involves splitting the rendering tasks based on the nature of the contents and their latency and quality requirements, and distributing the tasks to the most appropriate GPUs (client, network, edge and cloud). By converging the network and computing, the system can enable the client to select the most appropriate remote GPUs for the various rendering tasks with the knowledge of both the communication and computing capability and
performance associated with each remote GPU.

Besides the rendering distribution, another significant approach to tackle these challenges is to bring the best of the frame streaming and command streaming models together, to leverage end-to-end compute between client/edge/cloud based on latency and quality requirements. In real usage, the terminal user can send the latency and quality expectations as well as local computing capabilities to the edge scheduler, e.g., E2E network latency 50 ms, 30 Mbps, and so on; the scheduler can then determine the scheduling policy. For example, distributing certain workloads as the command streaming model and certain workloads as rendered frames between cloud and client systems can not only lower the E2E latency but also help achieve bounded throughput and quality in the overall gaming experience. There are two policies for the workload offloading:

1) Workload splitting between client and network/edge: this policy determines what kind of workload output is visual-quality sensitive but lightweight, e.g., HUD rendering for the game, and offloads it to the terminal side.

2) Rendering & encoding policy choice based on terminal feedback: the terminal timely feeds back the network status as well as the visual quality to the edge scheduler, for example, packet loss and packet transmission latency every 50 ms; the edge scheduler then adjusts the rendering and encoding policy to guarantee the QoS.

Figure 5-5: Distributed rendering across client/network/cloud

5.2.2.1. GPU registration, discovery, selection and switch

To enable the distribution of rendering workloads to the remote GPUs which can best meet the latency and quality requirements, the converged network and computing system needs to support the following:

GPU registration: the computing nodes in the network, edge and cloud register the available GPUs with the orchestration function, with information related to the capacity, performance, latency, and the location or serving area.

GPU discovery: the client discovers from the orchestration function the available GPUs that can serve the area where the client is located.

GPU selection: the client selects the remote GPUs that are suitable for the specific rendering tasks based on the latency and quality requirements, and offloads the rendering tasks to the selected remote GPUs.

GPU switch: for a mobile client, e.g., a user playing the game in a moving car or train, when the client moves out of the service area of the serving GPU, or the QoS requirements cannot be met by the serving GPU any longer due to the client's movement, the rendering task needs to be switched to another GPU to ensure service continuity and QoS. This involves GPU discovery and selection for the new area, and also game status and context transfer between the two GPUs for the switch.
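A toy sketch of the registration, discovery and selection steps above; the registry structure and attribute names are assumptions for illustration, not a defined API.

```python
# Illustrative sketch: a GPU registry supporting registration, discovery
# and selection as described above. Attribute names are hypothetical.

class GpuRegistry:
    def __init__(self):
        self._gpus = []

    def register(self, gpu_id, serving_area, latency_ms, capacity):
        # GPU registration: nodes announce capability and serving area.
        self._gpus.append({"id": gpu_id, "area": serving_area,
                           "latency_ms": latency_ms, "capacity": capacity})

    def discover(self, client_area):
        # GPU discovery: only GPUs serving the client's area are visible.
        return [g for g in self._gpus if g["area"] == client_area]

    def select(self, client_area, max_latency_ms):
        # GPU selection: highest-capacity GPU within the latency budget.
        ok = [g for g in self.discover(client_area)
              if g["latency_ms"] <= max_latency_ms]
        return max(ok, key=lambda g: g["capacity"]) if ok else None

reg = GpuRegistry()
reg.register("edge-gpu-a", "cell-1", latency_ms=6, capacity=40)
reg.register("cloud-gpu-b", "cell-1", latency_ms=25, capacity=90)
choice = reg.select("cell-1", max_latency_ms=10)
print(choice["id"])  # edge-gpu-a: the cloud GPU exceeds the latency budget
```

On mobility, a GPU switch would amount to re-running discover/select for the new area and transferring the game status and context to the newly selected GPU.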

5.2.2.2. Latency aware split rendering

Many game engines maintain hierarchical information about the scene to be rendered. By leveraging that information, we can categorize some workloads as medium to high latency workloads (such as physics simulation, dynamic reflections, etc.) to be executed on a remote GPU (running on the edge or cloud), and use the local GPU to render the low latency workloads. Another way to split the workload between network/edge/cloud and client is to leverage the frame streaming and command streaming models along with AI-based upsampling to distribute the workload as shown in the diagram below.

Figure 5-6: Latency aware split rendering
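The latency-based categorization above can be sketched as follows; the workload names and per-workload tolerances are illustrative assumptions.

```python
# Illustrative sketch: classifying render workloads by latency tolerance,
# then assigning them to the local vs. a remote GPU. The workload names
# and tolerance values are assumptions for illustration.

LATENCY_TOLERANCE_MS = {
    "camera_view_render": 5,      # must track head/input with low latency
    "hud_render": 8,
    "physics_simulation": 40,     # tolerates medium/high latency
    "dynamic_reflections": 60,
}

def split_workloads(remote_round_trip_ms):
    """Workloads whose tolerance exceeds the remote round trip go remote."""
    plan = {"local": [], "remote": []}
    for name, tolerance in LATENCY_TOLERANCE_MS.items():
        target = "remote" if tolerance > remote_round_trip_ms else "local"
        plan[target].append(name)
    return plan

plan = split_workloads(remote_round_trip_ms=20)
print(sorted(plan["remote"]))  # ['dynamic_reflections', 'physics_simulation']
```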

5.2.2.3. Split text rendering

3D text renderings and Heads Up Displays (HUD) are quite common in computer games to show the player stats, game stats and menus to the user. These stats are vital for the user to play the game. A highly varying network bandwidth causes the cloud gaming service to encode the frame within a constrained bitrate, leading to visual artifacts and low quality in the encoded frame. This issue is amplified when the rendered frame also includes high frequency information such as texts and detailed overlays that may not be easy to read when encoded with a constrained bitrate. The following diagram illustrates the quality of the encoded frame at various Quantization Parameter (QP) settings.

Figure 5-7: Quality loss of HUD at various QP settings

Traditional solutions tried to fix some of these issues on the media encoder side, by assigning a smaller QP in the frame regions where texts are displayed on the screen, or by assigning a smaller base-level QP for the overall frame for games that have text in them. However, a smaller QP also brings stutter to game play on the client side, as a smaller QP requires more bits to encode the frame; thus more data must be transferred per frame, and this cannot be sustained over a variable network bandwidth to deliver 60 FPS gameplay all the time.

Our solution detects draw calls that are rendering text and HUDs (both in the projected plane and in 3D) in each frame and reroutes them as command/API streams to execute on the client GPU after the frame is decoded.

Figure 5-8: Split text renderings
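The draw-call rerouting just described can be sketched as a simple splitter; the draw-call structure and type names are hypothetical.

```python
# Illustrative sketch: rerouting text/HUD draw calls to the client as a
# command stream while the rest of the frame is encoded as video.
# The draw-call structure and type names are assumptions.

def route_draw_calls(draw_calls):
    """Split a frame's draw calls into a video path and a command path."""
    encode_for_video, command_stream = [], []
    for call in draw_calls:
        if call["type"] in {"text", "hud"}:
            # High-frequency overlay content: execute on the client GPU
            # after the video frame is decoded, preserving full quality.
            command_stream.append(call)
        else:
            encode_for_video.append(call)
    return encode_for_video, command_stream

frame = [
    {"type": "geometry", "id": 1},
    {"type": "hud", "id": 2},
    {"type": "text", "id": 3},
]
video, commands = route_draw_calls(frame)
print([c["id"] for c in commands])  # [2, 3]
```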

Besides retaining text and high frequency overlays at the highest quality, our approach also helped to save encoding bitrate by up to 20% when we encode the frame without text, by removing the HUD information (which occupies a very small area) from encoding while keeping the same quality.

5.2.2.4. Network & render aware encoding

The split text rendering approach addresses the challenges of preserving visual quality, especially around high frequency visual information in the rendered frame. While this greatly helps in preserving the quality of text and HUDs, we still suffer geometry and shading quality loss in the rest of the regions in the frame.

To address this geometry quality loss, "rendering knobs" are introduced to tweak during heavy network congestion, bandwidth limitations or heavy GPU utilization by concurrent game instances. The following diagram illustrates the intermediate stages in the rendering pipeline (G-buffers) used to decide the QP settings on key regions in the rendered frame (i.e., preserving higher QP around the main character in the game). Besides this, we can also dynamically adjust several rendering settings, such as texture detail reduction by adjusting the base mip-level of highly detailed textures in the game, texture bias control, and shading rate control based on network congestion. This approach not only helps with reducing the bitrate of the encoded frame but also helps to improve rendering efficiency by rendering at a desired shading rate during network congestion. Additionally, rendering at lower texture detail also helps reduce the overall texture access bandwidth during rendering of the frame; thus games that are memory bound can run smoothly with this approach.

Figure 5-9: Network & render aware encoding
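A minimal sketch of how such "rendering knobs" might be driven by a congestion signal; the knob names and the mapping are illustrative assumptions, which a real pipeline would translate to engine settings (mip bias, variable rate shading, encoder QP).

```python
# Illustrative sketch: adapting rendering/encoding knobs to network
# congestion. The values and mapping are assumptions for illustration.

def adapt_render_knobs(congestion):
    """congestion in [0, 1]; returns encoder/renderer settings."""
    return {
        "base_qp": 22 + int(10 * congestion),       # coarser quantization
        "mip_bias": round(2.0 * congestion, 1),     # lower texture detail
        "shading_rate": "2x2" if congestion > 0.5 else "1x1",
    }

print(adapt_render_knobs(0.0))  # {'base_qp': 22, 'mip_bias': 0.0, 'shading_rate': '1x1'}
print(adapt_render_knobs(0.8))  # {'base_qp': 30, 'mip_bias': 1.6, 'shading_rate': '2x2'}
```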

5.2.2.5. Multi-GPU rendering & AI

We propose a Ghost GPU approach to seamlessly enable remote rendering capabilities in a client PC as well as in edge or cloud data centers. The core idea is to present remote GPU support as a (virtual) graphics adapter in the operating system (as Ghost GPUs), where applications can use the current graphics & compute APIs to access the GPU. The user can choose to fully utilize local GPUs when enough local compute is available, and use additional Ghost GPUs when additional compute is required. This innovation also helps to enable advanced rendering features (such as ray tracing and machine learning acceleration) beyond the capabilities of the local GPU, where the Ghost GPU (a virtual GPU adapter) acts as a physical device to the game application.

The Ghost GPU approach also helps developers to leverage multi-GPU APIs and seamlessly enable their applications on an end-to-end system with no overheads.
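The adapter idea above can be sketched as a proxy object that exposes the same interface as a local GPU; the interface below is hypothetical, not an actual driver API.

```python
# Illustrative sketch: a "Ghost GPU" presented through the same interface
# as a local GPU, transparently forwarding work to a remote node. The
# interface and endpoint names are assumptions for illustration.

class LocalGpu:
    def submit(self, workload):
        return f"local:{workload}"

class GhostGpu:
    """Looks like a GPU adapter to the application, executes remotely."""
    def __init__(self, remote_endpoint):
        self.remote_endpoint = remote_endpoint

    def submit(self, workload):
        # A real system would serialize the API calls to the remote GPU.
        return f"{self.remote_endpoint}:{workload}"

# The application iterates over adapters without knowing which are remote.
adapters = [LocalGpu(), GhostGpu("edge-node-7")]
results = [gpu.submit("ray_tracing_pass") for gpu in adapters]
print(results)  # ['local:ray_tracing_pass', 'edge-node-7:ray_tracing_pass']
```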

This approach also helps battery-operated devices to dynamically offload workloads to remote GPUs and AI accelerators to extend battery life, and to switch to local compute in powered mode.

(a) Ghost GPU (b) End to End Contract mechanism
Figure 5-10: Ghost GPU rendering

5.3. Case 3: Enabling factory-scale collaborative edge robotics system

5.3.1. Description and requirements

The Industrial IoT market is going through an incredible digital transformation towards fully connected, flexible and intelligent autonomous systems. Collaborative multi-robot systems will be an integral component of the highly automated factories of the future. Collaborative mobile robots, where robots work together as a team, can enable highly reconfigurable work cells that adapt to new processes and tasks (e.g. joint inspection, assembly, packaging, warehousing) in logistics, retail, hospitality, healthcare, agriculture, and transportation applications. Future robotic applications such as reconfigurable manufacturing, unmanned robotic facilities, and fully autonomous robotic service fleets will require robots with advanced navigation and object handling (manipulation) capabilities. Such robots will have a multitude of sensors generating a huge amount of data. They will need advanced perception and prediction capabilities, human-like cognition capability, and the ability to continuously learn, adapt and evolve. Also, future robotic systems will require collaboration among multiple robots and collaboration/co-existence with humans. For these advanced capabilities, robots will need massive computing capabilities for running computationally complex AI algorithms. Moreover, to achieve high operational speeds, computing functions for these robots will have to be executed with very low latency, with safety and reliability, and with better energy efficiency, which is highly challenging for today's battery-powered mobile robot systems.

Computing platforms available for robots today are largely "Robot-centric", as they focus on packing more and more computing on the robot. However, relying solely on on-robot compute will not be sufficient, as there is a limit to how much computing can be packed on a robot, and this comes at a trade-off with respect to cost, size, computing latency, energy efficiency, operating time on battery, etc. Hence, there is a need for a complementary "Edge-centric" robotics paradigm that combines advancements in Edge computing, AI and advanced wireless communications towards delivering E2E optimized Edge Computing-based solutions for robotics. Such an Edge-centric systems approach for robotics is also referred to as "Edge Robotics".

Two common Edge robotic use cases are illustrated in Figure 5-11. As shown in Figure 5-11(a), Use Case 1 concerns static robot arms which are used to perform robotic manipulation, i.e. pick, place and handling of objects using robot arms with grippers. This use case is used to accomplish tasks such as picking up objects from a conveyer belt, sorting, assembly, object retrieval, shelf stacking, etc. In this use case, cameras in the infrastructure send data over a wireless network to an on-premise Edge server. The Edge server estimates the 3D poses of the moving objects on the belt in real-time and sends the object poses over the wireless network to the robot for the pickup action. Each camera frame is subject to communication and compute latencies in the distributed Edge system. Hence, by the time the robot gets a 3D pose estimate of the object on the conveyer belt from the Edge, the object location has changed significantly, depending on the E2E latency and conveyer belt speed, and as a result the arm misses picking the objects. The RGB-D camera generates
about 50 MBytes per second (at VGA, 24 bits/pixel resolution, 25 frames/sec), which need to be compressed and sent over wireless to the Edge server. The computational latency of perception algorithms can be up to hundreds of milliseconds, depending on their complexity. In addition, the sensing-to-actuation round-trip latency can often exceed 100 ms when uplink and downlink latencies are added to the E2E latency. At a conveyer belt speed of 0.8 m/s, a 100 ms latency translates into an error of about 8 cm in the estimation of the object location on the belt, which can cause the object pick-up task to fail, unless compensated by an intelligent state correction approach as described later.

(a) Use case 1: Conveyer object pickup (b) Use case 2: Autonomous robot navigation
Figure 5-11: Two edge robotic use cases

Another key robotic use case in factory environments is the navigation of Autonomous Mobile Robots (AMRs). AMRs are used for transportation of payload boxes, objects and shelves across different parts of the factory or warehouse environment. AMRs are battery powered, and their up-time (operating time on a battery charge) is an important metric which directly impacts their productivity. Advanced AMRs have several cameras, LiDAR and other sensors which are used for localization, environment mapping, obstacle avoidance and navigation, and the energy consumed by an AMR's computational subsystem is often comparable to the energy consumed for mobility. Therefore, due to battery capacity, size and cost limitations, AMRs can benefit substantially from offloading some of their computationally heavy functions, such as SLAM (Simultaneous Localization and Mapping) and path-planning, to an Edge server (as shown in Figure 5-11(b)). In addition, due to their mobility, AMR tasks fundamentally rely on wireless communications in the factory. As shown in Figure 5-11(b), data from on-robot 3D cameras is transmitted to and processed at the Edge server for a variety of navigation functions, such as mapping, occupancy grid generation, localization, path planning and multi-robot coordination. This information is used to compute waypoints, velocity and steering commands, which are sent back to each AMR over the wireless network. AMRs also need large data exchanges over wireless networks (for camera data), and computational workloads can also cause significant latency. For example, at a navigation speed of 2 m/s, a sensing-to-actuation latency of 100 ms can result in an AMR localization error of about 20 cm, leading to path-planning errors and an increase in potential collisions, motivating the need for state correction for AMR navigation as well.
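The error figures quoted for both use cases follow directly from error ≈ speed × latency; a quick check:

```python
# Quick check of the latency-induced position errors quoted above:
# error[cm] = speed[m/s] * latency[ms] / 10

def position_error_cm(speed_m_s, latency_ms):
    return speed_m_s * latency_ms / 10.0

print(position_error_cm(0.8, 100))  # 8.0  -> ~8 cm on the conveyer belt
print(position_error_cm(2.0, 100))  # 20.0 -> ~20 cm AMR localization error
```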

5.3.2. Potential solutions: Edge robotics offloading system

Figure 5-12: A factory-scale edge robotic system

Figure 5-12 shows a factory of the future enabled by Edge robotics [15]. Here, rather than relying solely on on-board compute, mobile robots and sensors in the infrastructure, such as cameras, offload their computing functions (computing workloads) to the Edge server over a wireless time-sensitive network. The Edge Computing servers perform high-speed data processing for the robots for perception, planning and control-coordination, and send action commands back to the robots while achieving low end-to-end (E2E) sensing-to-control latency. In such a system, the sensing and computing functions are distributed across the system and virtualized on the Edge server in the form of robot microservices. However, to enable robots to effectively leverage the computing capabilities on the Edge server, it is necessary to meet the latency and reliability targets while offloading the compute functions of multiple robots to the Edge server. In addition, it is also necessary to efficiently utilize the wireless and computing resources without compromising robotic task efficiency and safety. This requires continuum orchestration of the communications and computing resources, and coordinated communication and computing scheduling for the robotic tasks, to ensure the E2E QoS. Furthermore, under unreliable network conditions and unstable computing resource availability, the communications and computing resources need to be carefully managed and provisioned to ensure the robots can operate successfully and safely.

5.3.2.1. Edge robotic system co-design

The traditional approach of designing the compute, communications and control components in an Edge system as independent "silos" leads to inefficiencies, and limits capacity and scalability.

In contrast, our solution follows a "Co-design" approach, which entails joint optimization of robotic control, wireless network resource management and distributed compute orchestration, to
help meet the stringent latency and reliability requirements, while also optimizing wireless and compute resource utilization. This "Co-design" can be a key component in the "communication-computing convergence" vision for future NextG systems.

(a) Co-design framework (b) Hardware prototype
Figure 5-13: Co-design framework and hardware prototype

The Co-design Framework [16] is shown in Figure 5-13(a), which has two key co-design components:

QoS-aware Robot Adaptation enables robots to adapt to an unreliable network. This component enables robots to adapt to latency and data loss effects, and improves the success rate of the robotic tasks under unreliable network conditions. Here, the state of the available computing and wireless network QoS (which includes QoS metrics such as latency, packet loss, channel quality, etc.) is used by the QoS-aware Robot Adaptation block to correctly predict the state of the robotic control system even in the presence of delayed or missing packets, and to adapt its control policy to achieve resilience to network effects.
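A minimal sketch of the state-correction idea behind QoS-aware Robot Adaptation (predicting the stale pose forward by the measured E2E latency); the constant-velocity conveyer model is an illustrative assumption, whereas the framework described above uses a trained AI model.

```python
# Illustrative sketch: QoS-aware state correction. The Edge's pose estimate
# is stale by the E2E latency, so predict the state forward before acting.
# A constant-velocity conveyer model is assumed for illustration.

def correct_object_position(measured_pos_m, belt_speed_m_s, e2e_latency_ms):
    """Predict where the object will be when the command takes effect."""
    return measured_pos_m + belt_speed_m_s * e2e_latency_ms / 1000.0

# Object measured at 1.20 m along the belt; 0.8 m/s belt; 100 ms E2E latency.
print(correct_object_position(1.20, 0.8, 100))  # 1.28
```

Without this forward prediction the robot would target the stale position and, per the numbers above, miss by roughly 8 cm.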

The Control-aware Dynamic QoS Adaptation block minimizes computing and wireless resource utilization without impacting robotic task performance. This component improves computing and wireless resource utilization. Here, the requirements of the robotic control task are used by the "Control-aware Dynamic QoS" block to dynamically allocate resources and to adapt the wireless network QoS, allocating the minimal resources needed to successfully complete the robotic task. This component could further provide feedback to the network and computing control functions to optimize the QoS control for both communication and computing.
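A toy sketch of control-aware dynamic QoS: the uplink camera rate is raised only when the control task is in a critical phase, otherwise reduced to free wireless resources. The phases, thresholds and rates are assumptions for illustration.

```python
# Illustrative sketch: control-aware dynamic QoS. Uplink camera traffic is
# scaled with task criticality; thresholds and rates are assumptions.

def uplink_frame_rate(distance_to_pickup_m, base_fps=25):
    """Scale the camera uplink rate with task criticality."""
    if distance_to_pickup_m < 0.2:   # critical phase: full fidelity
        return base_fps
    if distance_to_pickup_m < 1.0:   # approach phase
        return base_fps // 2
    return base_fps // 5             # idle phase: minimal uplink traffic

print(uplink_frame_rate(0.1))  # 25
print(uplink_frame_rate(0.5))  # 12
print(uplink_frame_rate(3.0))  # 5
```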

5.3.2.2. Applying the co-design framework to real-world use cases

The co-design framework is applied to the conveyer object pick-up and AMR navigation use cases in simulation, and thereafter applied to a real-world hardware prototype.

For the conveyer object pick-up use case, the QoS-aware State Correction co-design module is an AI model that is trained to predict the correct pose of the object on the conveyer, given the E2E latency and belt speed, so that the robot can pick up objects successfully. This improves the resilience of robotic control to network effects such as delayed or dropped packets. The experiments have demonstrated [16] that without state prediction, the object pickup success rate drops significantly (to about 60%) as E2E latency increases (to 80 ms), whereas with state correction it is possible to maintain a high success rate of object pickup (close to 100%) even at high E2E latencies.

Just as robots adapt to network effects, the second co-design module, called the Control-aware Dynamic QoS, adapts the QoS of the wireless network to the needs of the robotic task. The Control-aware Dynamic QoS module is used to improve wireless network utilization by reducing the uplink traffic. With Dynamic QoS we have demonstrated more than 70% reduction in uplink traffic over the wireless network [16], without significantly impacting the object pick-up success rate, which frees up the wireless bandwidth to support more robots. The results have been validated on a real-world prototype system using a UR5 robot arm from Universal Robots, a wireless 5G network and a Xeon-based Edge server, shown in Figure 5-13(b).

Coming to the second robotic use case of Autonomous Mobile Robots illustrated in Figure 5-11(b): to achieve successful navigation after offloading the SLAM computing workload to the Edge server, it is necessary to determine the correct pose (location and orientation of the AMR) using camera or LiDAR data processed on the Edge server, and then apply QoS-aware State Correction to compensate for E2E latencies; otherwise the
AMR starts drifting away from its intended trajectory. Offloading the SLAM and other compute functions to the Edge reduces the computing load on the on-robot compute platform, and thereby reduces the robot's energy expenditure and extends the robot's operating time on battery, which is an important productivity metric. To demonstrate the energy savings, we implemented an Edge-based AMR Edge-robotic system as shown in Figure 5-14(a), where the ORB-SLAM computing workload was offloaded to a Xeon-based Edge server. The AMR is a custom Turtlebot3 platform enabled with an Intel Core i7 based system.

(a) A real-world prototype of an AMR edge robotic system using a Turtlebot3 AMR with Intel Core i7 based compute (b) Energy of different AMR sub-components with the SLAM function running on-robot and when the SLAM function is offloaded to the edge
Figure 5-14: Implementation prototype and energy savings of the AMR edge robotic system

As can be seen from Figure 5-14(b), moving just the SLAM compute function to the Edge server results in an overall 29% reduction in AMR energy consumption, even after discounting the 5% added energy cost of sending camera data over a wireless network to the Edge [17]. Offloading AMR functions to the Edge can further help in reducing the demands on the AMR's computational subsystem, resulting in additional cost, size, thermal and complexity reduction benefits for the AMRs.

The aforementioned use cases illustrate the feasibility and value proposition of Network/Edge-centric design for future industrial automation and robotic systems. These examples also demonstrate how E2E Co-design plays a critical role for compute-communication convergence in NextG systems in meeting tight latency, reliability, and resource efficiency targets. Future edge robotic systems could be further evolved into an in-network robotic system to maximally
optimize the E2E QoS by utilizing the highly distributed computing capabilities in the NextG system, empowered by joint orchestration and control of communication and computing resources.

6. Conclusion and future work

This white paper starts with the motivation and features of the MNCC, and derives key requirements. Based on the requirements and the 5G EDN architecture, the 6G MNCC architecture is derived. Then 5 key technologies are introduced. Three typical use cases are introduced to demonstrate the wide usage of MNCC: XR application offloading, real-time gaming and AI enabled by remote GPU, and collaborative
robotics systems.

The paper provides the business cases, challenges and a clear technology path for MNCC that unleashes the combined power of communication and computing to empower the ultimate experience for the end users (consumers and enterprises). We would like to invite more discussion and collaboration with partners on MNCC globally to form further industrial consensus, and together drive the technology evolution to enable ubiquitously integrated communication and computing services for the industry with the unique advantage of the convergence.

7. References

[1] S. Wang, T. Sun, H. Yang, X. Duan and L. Lu, "6G Network: Towards a Distributed and Autonomous System," 2020 2nd 6G Wireless Summit (6G SUMMIT), Levi, Finland, 2020, pp. 1-5.
[2] IMT-2030 (6G) Promotion Group, "Outlook on 6G Architecture White Paper", Dec. 2023.
[3] X. D. Duan et al., "6G Architecture Design: from Overall, Logical and Networking Perspective," IEEE Communications Magazine, vol. 61, no. 7, pp. 158-164, July 2023, doi: 10.1109/MCOM.001.2200326.
[4] 3GPP TS 23.558, "Architecture for enabling Edge Applications".
[5] X. Wang et al., "Holistic service-based architecture for space-air-ground integrated network for 5G-advanced and beyond," China Communications, vol. 19, no. 1, pp. 14-28, Jan. 2022.
[6] N. Rendevski, D. Trajcevska, M. Dimovski, K. Veljanovski, A. Popov, N. Emini, and D. Veljanovski, "PC VR vs Standalone VR Fully-Immersive Applications: History, Technical Aspects and Performance," in 2022 57th International Scientific Conference on Information, Communication and Energy Systems and Technologies (ICEST), pp. 1-4, 2022.
[7] "Meta Passthrough." https:/
[8] "Hololens 2." https:/
[9] "Vision Pro." https:/
[10] S. V. Adve, "ILLIXR: An Open Testbed to Enable Extended Reality Systems Research," IEEE Micro, vol. 42, no. 4, pp. 97-106, 2022.
[11] "Meta Quest 3." https:/
[12] "ThinkReality A3." https:/
[13] A. Zaidi, "XR and 5G: Extended reality at scale with
214、 time-critical communication,”tech.rep.,Ericsson,82021.14“Wi-Fi brings immersive experiences to life,”White paper,Wi-Fi Alliance,2024.15Amit Baxi,Mark Eisen et al.,“Towards Factory-scale Edge Robotic Systems:Challenges and Research Directions,”IEEE Internet of Things Magazine,Sept2022416Eisen,Mark,S

215、antosh Shukla,Dave Cavalcanti,and Amit S.Baxi.Communication-Control Co-design in Wireless Edge Industrial Systems.In2022 IEEE 18th International Conference on Factory Communication Systems(WFCS),pp.1-8.IEEE,2022.17Vincent Mageshkumar,Amit Baxi et al.,“Adaptive Energy Optimization forEdge-Enabled Aut

216、onomous Mobile Robots”,International Conference onCOMmunication Systems&NETworkS(COMSNETS),2024ContributorsChina Mobile:Zehao Chen,Zhenglei Huang,Yushuang Hu,Xiaodong Duan,Lu Lu,Tao SunIntel:Yizhi Yao,Thomas Luetzenkirchen,Shilpa Talwar,Javier Perez-ramirez,Selvakumar Panneer,Ben Lin,Amit Baxi,Heqin

217、g Zhu,David Lu,Hao FengInspur:Zicheng Wang,Xian Gao,Wei LinZTE:Jianfeng Zhou,Lijuan ChenNokia Shanghai Bell:Gang Liu,Kaibin Zhang,Gang ShenLenovo:Lizhuo Zheng,Haiyan Luo,Xin GuoAsiaInfo:Ye Ouyang,Shoufeng Wang,Jie SunVIVO:Xiaobo Wu,Xiaowen Sun,Yanchao KangCICT Mobile:Yapeng Wang,Ruiyan Qin,Hui XuH3C:Pei Li,Xuejin Yang
