Huawei Machine Translation: Architecture and Model Acceleration
Wei Daimeng, Huawei 2012 Labs / Machine Translation Lab

About the speaker
- Senior technical expert at Huawei and lead of machine translation algorithms; products shipped in Huawei Cloud, HMS, Huawei phones, and more
- M.S. from Peking University; research interests: machine translation, simultaneous interpretation, semantic understanding
- Led the team to multiple first places in the WMT20/21/22 news, biomedical, and efficiency tracks, multiple firsts at IWSLT 22, and multiple firsts at WAT20
- 30+ papers at AAAI, ACL, EMNLP, ICASSP, and other venues

Agenda
- Machine translation overview
- Model inference problems
- On-device inference acceleration
- Huawei machine translation
- Summary

Machine Translation Overview
1. Mainstream machine translation models consist of an Encoder and a Decoder: the Encoder encodes the entire source sequence into multidimensional vectors, and the Decoder decodes those vectors into the target text.
2. The Attention model captures the word alignment between source and target, telling the decoder which part of the source to focus on when generating each target word, which improves translation quality on long sentences.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. "Attention Is All You Need." Neural Information Processing Systems (2017).
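The attention mechanism described above can be sketched as scaled dot-product attention, following the Vaswani et al. (2017) formulation; the function name and shapes here are illustrative, not from the talk:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (tgt_len, d); K, V: (src_len, d). Returns context vectors and weights."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (tgt_len, src_len) alignment scores
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # soft word alignment, rows sum to 1
    return weights @ V, weights                    # context per target position
```

Row i of `weights` shows which source positions the decoder attends to when producing target word i, which is exactly the alignment signal the slide refers to.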
Model Inference Problems
A typical Transformer configuration:
- Encoder: 6 layers; Decoder: 6 layers
- Hidden size: 1024
- Parameters: 200M; model size: 800 MB

Typical decoding cost of this Transformer on different hardware:

Platform      Decoding latency
GPU (T4)      45 ms/token
CPU (Intel)   150 ms/token
ARM           - (on-device is the most challenging)

The model is too large and the computation too heavy. Making the model smaller means trading off along two axes: quality vs. speed (computation) and quality vs. size (storage).

On-Device Inference Acceleration
A small model means small storage and low computation, but worse quality. Can a small model still reach high quality? Knowledge distillation.

Geoffrey E. Hinton, Oriol Vinyals and Jeffrey Dean. "Distilling the Knowledge in a Neural Network." arXiv: Machine Learning (2015).
Yoon Kim and Alexander M. Rush. "Sequence-Level Knowledge Distillation." Empirical Methods in Natural Language Processing (2016).
Markus Freitag, Yaser Al-Onaizan and Baskaran Sankaran. "Ensemble Distillation for Neural Machine Translation." arXiv: Computation and Language (2017).
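Sequence-level knowledge distillation (Kim and Rush, 2016) can be sketched as follows: the teacher decodes the training sources, and the student trains on those synthetic targets instead of the human references. The toy teacher and corpus below are placeholders, not real models:

```python
def distill_corpus(sources, teacher_translate):
    """Build a student training set from teacher outputs (sequence-level KD)."""
    return [(src, teacher_translate(src)) for src in sources]

# Toy stand-in for a large teacher model's beam-search decoder.
def toy_teacher(src):
    return src.upper()  # placeholder "translation"

student_data = distill_corpus(["guten tag", "danke"], toy_teacher)
```

The small student then fits the teacher's (simpler, more consistent) output distribution rather than the raw data, which is what lets it keep most of the teacher's quality.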
With knowledge distillation, the small model retains roughly 96% of the teacher's quality. TinyBERT demonstrates the same recipe: a small model with high quality.

Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang and Qun Liu. "TinyBERT: Distilling BERT for Natural Language Understanding." EMNLP 2020.

On-Device Inference Acceleration: Model Compression, Low-Precision Inference
A small model means a smaller footprint and faster inference. Quantization-aware training inserts quantization layers (FP32 to Int8 to FP32) and trains end to end.

Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam and Dmitry Kalenichenko. "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference." Computer Vision and Pattern Recognition (2018).
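The FP32-to-Int8-to-FP32 "fake quantization" layer used during training can be sketched as a quantize-then-dequantize round trip; this is a simplified symmetric per-tensor version, not Huawei's actual implementation:

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Simulate integer quantization in FP32: quantize, then dequantize."""
    qmin, qmax = -(2 ** (num_bits - 1) - 1), 2 ** (num_bits - 1) - 1
    scale = max(np.abs(x).max(), 1e-8) / qmax      # symmetric per-tensor scale
    q = np.clip(np.round(x / scale), qmin, qmax)   # values snapped to the int grid
    return q * scale                                # back to FP32 for the next layer
```

Inserting this in the forward pass exposes the model to quantization error during training, so the learned weights stay accurate when real Int8 arithmetic is used at inference time.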
Going below 8 bits, plain 4-bit quantization loses too much; a log-scale 4-bit code works better, but it has a zero-value problem, and it matters whether quantization is introduced early or late in training.

Alham Fikri Aji and Kenneth Heafield. 2020. "Compressing Neural Machine Translation Models with 4-bit Precision." In Proceedings of the Fourth Workshop on Neural Generation and Translation, pages 35-42, Online. Association for Computational Linguistics.

Model               BLEU (WMT14)   Parameter size
32 bit              26.5           260M
8 bit               26.4 (-0.1)    66M
4 bit               24.3 (-2.2)    35M
Log 4 bit (late)    25.1 (-1.4)    35M
Log 4 bit (early)   26.2 (-0.3)    35M

Direct 4-bit quantization hurts quality badly; log-scale 4 bit with quantization-aware training introduced early is the key.

Int8 Inference
Matrix multiplication (GEMM) dominates the model's computation. Int8 inference replaces floating-point operations with integer operations for speed, and handling quantization and dequantization well is the key to the speedup. (Huawei Noah's Ark high-performance inference lab: https:/ )

Daya Shanker Khudia, Jianyu Huang, Protonu Basu, Summer Deng, Haixin Liu, Jongsoo Park and Mikhail Smelyanskiy. "FBGEMM: Enabling High-Performance Low-Precision Deep Learning Inference." arXiv: Learning (2021).
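The Int8 GEMM idea can be sketched as below: quantize both operands once, accumulate in Int32, and dequantize once at the end, so the inner loop is pure integer arithmetic. This is a minimal per-tensor sketch, not the optimized Noah or FBGEMM kernels:

```python
import numpy as np

def quantize(x, qmax=127):
    """Symmetric per-tensor quantization to int8 plus its scale."""
    scale = max(np.abs(x).max(), 1e-8) / qmax
    return np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8), scale

def int8_matmul(a, b):
    """Approximate a @ b using int8 operands and an int32 accumulator."""
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    acc = qa.astype(np.int32) @ qb.astype(np.int32)  # the integer GEMM
    return acc * (sa * sb)                           # dequantize once, at the end
```

Doing quantize/dequantize only at the boundaries, rather than per operation, is what the slide means by "handling quantization and dequantization well".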
On-Device Inference Acceleration: Structure Optimization, Parameter Sharing, Multilingual Models
Decoding is autoregressive: tgt_t3 is generated from src_t1, src_t2 and the already generated tgt_t1, tgt_t2. Can the model be made smaller without losing capability? One answer is a deep encoder (25-40 layers) with a shallow decoder (3 layers).

Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong and Lidia S. Chao. "Learning Deep Transformer Models for Machine Translation." Meeting of the Association for Computational Linguistics (2019).

Parameter sharing: sharing weights between adjacent layers works best.

Multilingual models:
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda B. Viégas, Martin Wattenberg, Greg S. Corrado, Macduff Hughes and Jeffrey Dean. "Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation." Transactions of the Association for Computational Linguistics 5 (2017): 339-351.
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen and Yonghui Wu. "Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges." arXiv: Computation and Language (2019).
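Adjacent-layer parameter sharing, reported above as the best-performing variant, can be sketched by reusing one parameter set for each pair of neighboring layers. The layer here is a stand-in (a single tanh projection), not the full Transformer block:

```python
import numpy as np

class TiedStack:
    """A stack of layers in which each pair of adjacent layers shares weights."""
    def __init__(self, num_layers, d, rng):
        # One parameter set per pair of adjacent layers: ~half the storage.
        self.params = [rng.normal(size=(d, d)) / np.sqrt(d)
                       for _ in range((num_layers + 1) // 2)]
        self.num_layers = num_layers

    def forward(self, x):
        for i in range(self.num_layers):
            w = self.params[i // 2]  # layers 0 and 1 share, 2 and 3 share, ...
            x = np.tanh(x @ w)
        return x
```

The model keeps its depth (and most of its capability) while roughly halving the parameter storage, which is the size/quality trade the slide is pointing at.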
Packing 10 languages into one model works; at 30 languages the model's capacity is no longer sufficient.

Interim summary of strategies so far, each judged on quality, speed, and size: knowledge distillation, quantized inference, model structure, parameter sharing, multilingual models. The next two techniques, ShortList and the decoder structure, focus on reducing computation.

ShortList Optimization
The Decoder's output projection dominates its compute: with hidden size h = 512 and voc_size = 32000, the projection costs about 10x the FFN input. Word alignment gives each source word about 100 target candidates; for a 16-word sentence that is about 300 candidates after deduplication, shrinking the projection from 512x32000 to 512x300. The alignments can be produced with fast_align, using about 75 candidates per word.

Biao Zhang, Deyi Xiong and Jinsong Su. "Accelerating Neural Transformer via an Average Attention Network." Meeting of the Association for Computational Linguistics (2018).
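The ShortList idea can be sketched as slicing the output projection down to the alignment-derived candidate set before computing logits. fast_align itself is not shown; the candidate table below is a toy placeholder:

```python
import numpy as np

def shortlist_logits(hidden, out_proj, src_tokens, cand_table):
    """hidden: (d,); out_proj: (d, vocab). Project onto aligned candidates only."""
    # Union of each source token's candidate target words, deduplicated.
    cands = sorted({c for tok in src_tokens for c in cand_table.get(tok, [])})
    return np.asarray(cands), hidden @ out_proj[:, cands]  # (d,) x (d, |cands|)
```

With ~300 candidates instead of 32000, the per-step projection shrinks by roughly 100x, which is where the speedup in the slide comes from.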
Yann N. Dauphin, Angela Fan, Michael Auli and David Grangier. "Language Modeling with Gated Convolutional Networks." International Conference on Machine Learning (2017).

On-Device Inference Acceleration: Decoder Structure
Cheaper alternatives to decoder self-attention: LSTM, SRU, SRU+.

Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai and Yoav Artzi. "Simple Recurrent Units for Highly Parallelizable Recurrence." EMNLP 2017.
Tao Lei. 2021. "When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute." Association for Computational Linguistics 2021.
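The appeal of SRU as a decoder block is that all heavy matrix multiplies depend only on the inputs, so they can be batched across time, leaving only cheap elementwise operations in the sequential loop. A minimal sketch of the SRU cell equations (simplified from Lei et al.; parameter names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru(x, W, Wf, bf, Wr, br):
    """x: (T, d). The three matmuls run over the whole sequence at once."""
    xt = x @ W                        # candidate states, batched over time
    f = sigmoid(x @ Wf + bf)          # forget gates, batched over time
    r = sigmoid(x @ Wr + br)          # reset gates, batched over time
    c = np.zeros(x.shape[1])
    hs = []
    for t in range(x.shape[0]):       # sequential part: elementwise ops only
        c = f[t] * c + (1.0 - f[t]) * xt[t]
        hs.append(r[t] * c + (1.0 - r[t]) * x[t])  # highway connection
    return np.stack(hs)
```

Compared with an LSTM, whose matmuls depend on the previous hidden state and must run step by step, this structure keeps the GEMMs parallel, which is why it decodes faster at similar quality.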
Decoder structures compared in our WMT22 efficiency submission: SRU+, AA (average attention), and SRU.

Summary
Strategies, each judged on quality, speed, and size: knowledge distillation, quantized inference, model structure, parameter sharing, multilingual models, shortlist, and decoder structure. See our WMT22 Efficiency Task submission:

Hengchao Shang, Ting Hu, Daimeng Wei, Zongyao Li, Xianzhi Yu, Jianfei Feng, Ting Zhu, Lizhi Lei, Shimin Tao, Hao Yang, Ying Qin, Jinlong Yang, Zhiqiang Rao and Zhengzhe Yu. 2022. "HW-TSC's Submission for the WMT22 Efficiency Task." In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 677-681, ACL 2022.
Kenneth Heafield, Biao Zhang, Graeme Nail, Jelmer Van Der Linde and Nikolay Bogoychev. 2022. "Findings of the WMT 2022 Shared Task on Efficient Translation." In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 100-108, Abu Dhabi, United Arab Emirates (Hybrid). ACL 2022.

Huawei Machine Translation
(Architecture diagrams; no further text on these slides.)

Conclusion
1. Center on the business and optimize per scenario.
2. Stay grounded in the constantly evolving landscape of deep learning.
3. Build on our own strengths and keep iterating.

Strategies applied across GPU, CPU, and ARM: knowledge distillation, quantized inference, model structure, parameter sharing, multilingual models, shortlist, decoder structure.