A Unified Distributed Training Framework: Whale
Ang Wang (wangang.wa@alibaba-), PAI, Alibaba Cloud
15/12/2020

Motivation
[Charts: model parameter counts in Parameters (M), growing toward 175,000M, versus GPU memory of 8 / 16 / 32 / 80 GB for P4 / P100 / V100 / A100]
- Models are getting larger and more complex.
- Larger models lead to better results, with lower validation perplexities.
- Model size grows far beyond the pace of hardware upgrades.

Models are getting larger
- Data Parallelism (DP) is widely used in distributed training because it is simple and easy to implement.
- DP is not always optimal for every distributed training workload.
- It is necessary to find an efficient parallel strategy that makes full use of the resources and speeds up training.
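For context, the data parallelism described above replicates the full model on every GPU and splits each global batch across the replicas, all-reducing gradients each step. The sketch below illustrates this with TensorFlow's generic tf.distribute.MirroredStrategy; it is not Whale's own API, and the model and dataset are placeholder assumptions chosen only for illustration.

```python
import tensorflow as tf

# Plain data parallelism: one full model replica per local GPU; each
# training step's batch is split across replicas and the resulting
# gradients are all-reduced before the weight update.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Placeholder model: any Keras model built under the scope is replicated.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# The global batch is divided among replicas, but every GPU still holds
# the whole model, which is exactly the memory limitation noted above.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
model.fit(x_train, y_train, batch_size=256, epochs=1)
```

Because every replica keeps a complete copy of the weights, this style of parallelism stops helping once the model itself no longer fits in a single device's memory, which motivates the alternative strategies discussed next.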
Motivation
- Distribute the training workload with data parallelism.
- Data parallelism becomes less optimal for many distributed workloads.

Motivation
- It is difficult to increase the batch size on a single GPU device due to the limited GPU memory capacity.
- Large weight