4-4 Reinforcement Learning Application Practice Based on Environment Virtualization


Reinforcement Learning Application Practice Based on Environment Virtualization
Yu Yang (俞扬), Nanjing University / Polixir (南栖仙策)

Reinforcement learning (RL) finds an optimal policy through repeated trial-and-error interaction with the environment: the agent acts, and receives observations and rewards in return. RL is the branch of machine learning concerned with learning to make decisions. Within machine learning, supervised learning powers face recognition, image recognition and statistical prediction; reinforcement learning powers AI for Go and games; unsupervised learning powers dimensionality reduction, data compression and data visualization.

Reinforcement Learning: about the intelligence of actions.

About Reinforcement Learning
Supervised learning objective: J(\theta) = \int_x p(x)\,\mathrm{loss}_\theta(x)\,dx
Reinforcement learning objective: J(\pi) = \int_{\mathrm{Traj}} p(\tau)\,R(\tau)\,d\tau, where p(\tau) = p(s_0)\prod_{i=1}^{T} p(s_i \mid a_i, s_{i-1})\,\pi(a_i \mid s_{i-1})

Agent and environment: the agent sends an action (a decision) to the environment, and the environment returns the next state and a reward.

Why SL has wide applications: supervised learning is much more data-driven, with less hand-crafted knowledge and therefore more applications.

"The actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds ... We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done." (R. Sutton, The Bitter Lesson)
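To make the two objectives concrete, here is a minimal sketch (mine, not from the talk) of the agent-environment loop and a Monte Carlo estimate of J(\pi); it assumes a Gymnasium-style reset/step interface, and `env` and `policy` are placeholders.

```python
import numpy as np

def rollout(env, policy, horizon=200):
    """Run one episode: observe a state, act, receive a reward, repeat."""
    state, _ = env.reset()
    total_return = 0.0
    for _ in range(horizon):
        action = policy(state)                                       # a_i ~ pi(a | s_{i-1})
        state, reward, terminated, truncated, _ = env.step(action)   # s_i ~ p(s | a_i, s_{i-1})
        total_return += reward
        if terminated or truncated:
            break
    return total_return

def estimate_objective(env, policy, n_episodes=100):
    """Monte Carlo estimate of J(pi) = E_tau[R(tau)] by averaging sampled returns."""
    return float(np.mean([rollout(env, policy) for _ in range(n_episodes)]))
```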

Human-level records of RL: TD-Gammon (1992), Deep Q-Network (2014), AlphaGo (2016), AlphaZero (2018), AlphaStar (2019), MuZero (2020), Agent57 (2020).

Industrial problem example: hybrid mode control. The available data come from a bad policy, and the task carries a global constraint.

Demands in industrial applications:
1. Trial-and-success: no errors allowed; the controller must be adaptive.
2. Very few data: decision data is always small.
3. Fully offline evaluation: a performance expectation and confidence are needed before going online.
4. Other challenges: reward functions keep changing, and practitioners mostly have no knowledge about RL for their decision-making tasks.

Recent application by DeepMind: J. Degrave et al. Magnetic control of tokamak plasmas through deep reinforcement learning. Nature 602:414-419, 2022.

"We use a simulator that has enough physical fidelity to describe the evolution of plasma shape and current, while remaining sufficiently computationally cheap for learning."

"This achievement required overcoming gaps in capability and infrastructure through scientific and engineering advances:
1. an accurate, numerically robust simulator;
2. an informed trade-off between simulation accuracy and computational complexity;
3. a sensor and actuator model tuned to specific hardware control;
4. realistic variation of operating conditions during training;
5. a highly data-efficient RL algorithm that scales to high-dimensional problems;
6. an asymmetric learning setup with an expressive critic but fast-to-evaluate policy;
7. a process for compiling neural networks into real-time-capable code and deployment on a tokamak digital control system."
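Item 6 in the list above is the architectural choice that transfers most readily to other control problems. Below is a rough sketch of what such an asymmetric setup can look like; the layer widths are illustrative assumptions of mine and are not taken from the paper. The critic only runs during training, so it can afford to be large, while the policy must execute inside a real-time control loop.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Expressive critic: used only during training, so it can afford to be large."""
    def __init__(self, obs_dim, act_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

class FastPolicy(nn.Module):
    """Small policy network: cheap enough to evaluate inside the real-time loop."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)
```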

A general development process in applications: business understanding, defining the business problem, data processing, algorithm tuning, and deployment and operation, carried out by algorithm engineers, labeling engineers and operations staff, with the data and the algorithms/models adjusted dynamically along the way (from Weinan's talk and from HL Zhen's talk).

Data-driven RL, i.e. offline RL: data in, policy training out. No model can be employed without validation.
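The most basic instance of "data in, policy out" is behavior cloning on the logged (state, action) pairs; the offline RL methods categorized later improve on it, but the sketch below shows the fully offline training shape. File names and network size are placeholder assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Logged decision data from the old (possibly bad) policy; file names are placeholders.
states = np.load("logged_states.npy")    # shape (N, state_dim)
actions = np.load("logged_actions.npy")  # shape (N, action_dim)

# Behavior cloning: supervised regression from state to action, trained fully offline.
bc_policy = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
bc_policy.fit(states, actions)

def act(state):
    """The cloned policy; it still needs fully offline evaluation before going online."""
    return bc_policy.predict(state.reshape(1, -1))[0]
```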

Data-driven RL (offline RL) raises its own questions: (1) online selection w.r.t. a deterministic policy; (2) online selection vs. offline selection; (3) the impact of conservative data.

Demands in industrial applications (revisited):
1. Trial-and-success: no errors allowed; the controller must be adaptive.
2. Very few data: decision data is always small, and there are no useful simulators (even for many industrial tasks).
3. Fully offline evaluation: a performance expectation, confidence for going online, and global constraints.
4. Other challenges: reward functions keep changing, and practitioners mostly have no knowledge about RL for their decision-making tasks.

Simulators/models: learning environment models. From historical action-response data (s_0, a_0), (s_1, a_1), (s_2, a_2), ... the environment model learns to predict s_1, s_2, s_3, ...
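A minimal sketch of this step, assuming the logged transitions are already arranged as arrays: fit a supervised regressor from (s, a) to the next state. The model class and names are illustrative assumptions; the two problems discussed next show where this naive fit breaks.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_env_model(states, actions, next_states):
    """Supervised transition model: predict s_{t+1} from (s_t, a_t) on logged data."""
    inputs = np.concatenate([states, actions], axis=1)
    model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=1000)
    model.fit(inputs, next_states)
    return model

def model_step(model, state, action):
    """One step in the learned (virtual) environment."""
    x = np.concatenate([state, action]).reshape(1, -1)
    return model.predict(x)[0]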

Problem 1: compounding error. In the hybrid mode control task, rolling the supervised-learned model out for 1800 steps makes the rollout trajectory drift far away from the real one.

Given \max_{s,a} \|P(s,a) - \hat{P}(s,a)\|_1 \le \epsilon, then for any policy \pi, \|V^{\pi}_{M} - V^{\pi}_{\hat{M}}\|_\infty \le \frac{2\epsilon\gamma}{(1-\gamma)^2}: even a small step-wise error can lead to a large difference in value (Michael Kearns and Satinder Singh. Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209-232, 2002).

Compounding error is solved by distribution matching: with distribution-matching model learning, the compounding-error problem is eliminated (Tian Xu, Ziniu Li, Yang Yu. Error bounds of imitating policies and environments. NeurIPS 2020).
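A toy numeric illustration of the compounding effect (mine, not from the talk): a one-dimensional learned model whose dynamics coefficient is off by about 1% per step stays close to the real trajectory at first, but the gap explodes over a few hundred steps, the same phenomenon seen in the 1800-step rollout above.

```python
real_coef, model_coef = 1.01, 1.02   # roughly a 1% relative one-step model error
s_real, s_model = 1.0, 1.0
for t in range(1, 601):
    s_real *= real_coef               # real dynamics:    s <- 1.01 * s
    s_model *= model_coef             # learned dynamics: s <- 1.02 * s
    if t in (10, 100, 600):
        print(f"step {t:4d}: real {s_real:12.2f}   model {s_model:12.2f}")
# The per-step error is tiny, but after 600 steps the model trajectory is
# hundreds of times larger than the real one: errors compound over the horizon.
```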

Back to the application: hybrid mode control, with rollouts in repeated experiments. The mode control is then optimized with RL inside the learned environment model, aiming to reduce fuel consumption under the same end-point battery level. Tested fuel consumption: old policy 4.68, optimized policy 4.56. The decisions of the old and the optimized policy can also be inspected and explained side by side.

Problem 2: execution bias (Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang. Adversarial Counterfactual Environment Model Learning. CoRR abs/2206.04890, 2022). A motivating example: dosage response curves in 6 cities.

From the same training data, the learned model differs depending on whether it follows the real causal model (s and a jointly determine s') or the causal model of the data-collecting policy (s determines a, which determines s'). This is the same structure as Simpson's paradox (https://en.wikipedia.org/wiki/Simpson%27s_paradox).

Problem 2, an extremely simple case. Sampled data (s, a, s'): (1, -0.1, 0.9), (0.9, -0.09, 0.81), (0.81, -0.081, 0.729). The real dynamics are s' = s + a, and the data were collected by the policy a = -0.1 s, so linear regression cannot identify the real model. An infeasible fix is to add exploration noise to the data-collecting policy, a = -0.1 s + 0.0001 rand(), a = -0.1 s + 0.001 rand(), a = -0.1 s + 0.01 rand(). Is that fatal? In practice, yes: this kind of curve, the state s approaching a certain value under s' = s + a with a = -0.1 s, is commonly found in control tasks.

Solution: adversarial counterfactual risk minimization, compared against plain supervised learning and IPW on the dosage-response example.
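The "extremely simple case" can be reproduced in a few lines. Under the deterministic logging policy a = -0.1 s, the regressors s and a are perfectly collinear, so regression cannot tell the real model s' = s + a from other models that fit the same data; a little exploration noise in the logged actions restores identifiability, which is exactly the "infeasible solution" above (the data are already collected). This is my illustrative sketch of the slide's point, not the adversarial counterfactual method itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def collect(noise_scale, n=200):
    """Roll out the real dynamics s' = s + a under the logging policy a = -0.1*s."""
    s, rows = 1.0, []
    for _ in range(n):
        a = -0.1 * s + noise_scale * rng.uniform(-1, 1)
        s_next = s + a                       # real causal model
        rows.append((s, a, s_next))
        s = s_next
    return np.array(rows)

for noise in (0.0, 0.01):
    data = collect(noise)
    X, y = data[:, :2], data[:, 2]
    # Least-squares fit of s' = w_s * s + w_a * a; the real coefficients are (1, 1).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"noise={noise}: fitted w_s={w[0]:.3f}, w_a={w[1]:.3f}")
# With noise=0.0 the two regressors are collinear (a = -0.1*s), so the fit lands on
# a different model that explains the same data; with a little exploration noise the
# real model (1, 1) is recovered exactly.
```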

Application results of adversarial counterfactual environment model learning (Chen et al., CoRR abs/2206.04890, 2022).

More applications 1: pump efficiency modeling. The flow-efficiency relation modeled from daily operation data is compared against experimental test results.

More applications 2: control system modeling, showing the training data and a robustness test.

Categories of offline RL. Offline RL methods split into model-free and model-based, and the model-based methods further into model-assisted and fully model-based.

Model-free:
- BCQ: Scott Fujimoto, David Meger, Doina Precup. Off-policy deep reinforcement learning without exploration. ICML 2019.
- CQL: Aviral Kumar, Aurick Zhou, George Tucker, Sergey Levine. Conservative Q-learning for offline reinforcement learning. NeurIPS 2020.

Model-assisted:
- MOPO: T. Yu, G. Thomas, L. Yu, S. Ermon, J. Y. Zou, S. Levine, C. Finn, T. Ma. MOPO: Model-based offline policy optimization. NeurIPS 2020.
- COMBO: T. Yu, A. Kumar, R. Rafailov, A. Rajeswaran, S. Levine, C. Finn. COMBO: Conservative offline model-based policy optimization. NeurIPS 2021.

Fully model-based:
- Vtaobao (AAAI'19), Vdidi (KDD'19).
- Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang. Adversarial Counterfactual Environment Model Learning. CoRR abs/2206.04890, 2022.
- Tian Xu, Ziniu Li, Yang Yu. Error bounds of imitating policies and environments. NeurIPS 2020.

Environment learning connects the real world and the virtual world: data from the real world are used to learn a virtual environment, the RL solver optimizes the policy in the virtual world, and the policy is deployed back to the real world. This is data-driven RL for real-world decision-making.

Thank you!

Tasks in most RL papers today vs. tasks we are solving by RL now.
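As a pointer for the model-assisted branch, the core of MOPO can be condensed into a short sketch: roll out in an ensemble of learned models and subtract an uncertainty penalty (here, ensemble disagreement) from the predicted reward before feeding the synthetic transitions to any off-the-shelf RL algorithm. This is a hedged paraphrase of the idea; function and variable names are mine, not the authors' code.

```python
import numpy as np

def penalized_step(ensemble, state, action, lam=1.0):
    """One MOPO-style step in the learned environment.

    `ensemble` is a list of learned models, each mapping (state, action) to a
    (next_state, reward) pair; their disagreement serves as the uncertainty u(s, a).
    """
    preds = [model(state, action) for model in ensemble]
    next_states = np.stack([p[0] for p in preds])
    rewards = np.array([p[1] for p in preds])
    uncertainty = float(next_states.std(axis=0).max())     # ensemble disagreement
    k = np.random.randint(len(ensemble))                   # sample one member's prediction
    return next_states[k], rewards[k] - lam * uncertainty  # r_tilde = r - lam * u(s, a)
```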
