
A Tutorial on Large Multi-Modal Generative Models
Fan Bao (Tsinghua University; CTO, ShengShu)

What is Multi-Modality?
- Modality: a way to organize information
- Visual information: images, videos
- Spatial information: 3D
- Abstract information: text

Large Multi-Modal Models:
- Sufficiently understand interleaved inputs of various modalities
- Smartly choose a proper modality as the output, i.e., a proper way to output information
- Each modality carries its own special knowledge

Paradigms of Large Models (LLMs vs. large multi-modal models)
- Architecture: LLMs have converged to the transformer; multi-modal models still have many solutions, not an absolutely optimal one
- Scaling law: for LLMs, big data plus trillions of parameters yields emergent abilities; for multi-modal models this is at an early stage of verification
- Alignment: instruction tuning plus RLHF turns LLMs into friendly assistants of humans; for multi-modal models this is at an early stage of verification

Schemes for Large Multi-Modal Models
- Extend Large Language Models
- Extend Diffusion Models

Extend Large Language Models
- Adapter mode: add learnable modules to the LLM decoder (Flamingo)
- Feature alignment mode: align features of other modalities to the embedding space of language tokens
  - Freeze the LLM: ClipCap, BLIP-2, PaLM-E
  - Learn all parameters: KOSMOS, Emu

Adapter Mode: Flamingo
- Freeze the self-attention layers of the LLM
- Inject learnable cross-attention layers
- Image embeddings serve as key and value; text embeddings serve as query (a sketch of this adapter follows)
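A minimal PyTorch sketch of the adapter pattern above (not Flamingo's actual implementation; dimensions and names are illustrative). Text hidden states query the image embeddings through a new cross-attention layer, and a tanh gate initialized at zero keeps the frozen LLM's behavior intact at the start of training:

```python
import torch
import torch.nn as nn

class GatedCrossAttentionAdapter(nn.Module):
    """Learnable adapter injected before a frozen LLM block (Flamingo-style sketch).

    Text hidden states attend to image embeddings: text is the query,
    image features are the key and value. A tanh gate initialized at zero
    makes the adapter an identity mapping at the start of training.
    """

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.gate = nn.Parameter(torch.zeros(1))  # tanh(0) = 0 -> no change at init

    def forward(self, text_h: torch.Tensor, image_h: torch.Tensor) -> torch.Tensor:
        # text_h: (batch, text_len, d_model); image_h: (batch, num_img_tokens, d_model)
        attn_out, _ = self.cross_attn(query=self.norm(text_h), key=image_h, value=image_h)
        return text_h + torch.tanh(self.gate) * attn_out  # gated residual

# Usage: only the adapter's parameters are trained; the LLM stays frozen.
adapter = GatedCrossAttentionAdapter()
text = torch.randn(2, 16, 512)
image = torch.randn(2, 64, 512)
fused = adapter(text, image)  # (2, 16, 512), fed to the frozen LLM block
```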

Adapter Mode: Flamingo
- Shows some simple in-context learning abilities across images and language
- The ability comes from 43M webpages of interleaved image-text data and 1.8B image-text pairs
- Only outputs a single modality: language

Feature Alignment Mode: ClipCap
- Learn a mapping network that converts visual features into the language embedding space
- The converted visual features serve as a prefix; the language model generates the subsequent text (see the sketch below)
- Only supports the captioning task
- Work before the burst of large models: data size of 100M image-text pairs
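A sketch of the ClipCap-style mapping network, assuming illustrative dimensions (a CLIP feature of size 512, language-model embeddings of size 768, a prefix of 10 tokens); ClipCap also explored a transformer mapper, but an MLP suffices to show the idea:

```python
import torch
import torch.nn as nn

class ClipCapMapper(nn.Module):
    """Maps a single CLIP image feature to a sequence of prefix embeddings
    in the language model's token-embedding space (ClipCap-style sketch)."""

    def __init__(self, clip_dim: int = 512, lm_dim: int = 768, prefix_len: int = 10):
        super().__init__()
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, lm_dim * prefix_len // 2),
            nn.Tanh(),
            nn.Linear(lm_dim * prefix_len // 2, lm_dim * prefix_len),
        )

    def forward(self, clip_feat: torch.Tensor) -> torch.Tensor:
        # clip_feat: (batch, clip_dim) -> prefix: (batch, prefix_len, lm_dim)
        return self.mlp(clip_feat).view(-1, self.prefix_len, self.lm_dim)

# The prefix is concatenated before the caption's token embeddings; only the
# mapper is trained, teaching the frozen language model to continue the text.
mapper = ClipCapMapper()
prefix = mapper(torch.randn(4, 512))                # (4, 10, 768)
caption_emb = torch.randn(4, 20, 768)               # embedded caption tokens
lm_input = torch.cat([prefix, caption_emb], dim=1)  # fed to the frozen LM
```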

Feature Alignment Mode: PaLM-E
- Uses more modalities: text, images, and sensor signals as inputs
- Trained on data from robotics tasks

Feature Alignment Mode: KOSMOS
- Tunes all parameters from scratch
- 1.6B parameters, not very large
- Fine-grained image understanding

Feature Alignment Mode: Emu
- Finetunes all parameters, based on LLaMA
- Also predicts the CLIP image feature using an L2 loss (a sketch of the combined objective follows)
- Supports image generation with a Stable Diffusion decoder
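A hedged sketch of the two training signals just mentioned: next-token cross-entropy on the language tokens plus L2 regression of the image-token positions onto CLIP image features. Shapes, the function name, and the equal weighting of the two terms are hypothetical; Emu's exact token layout may differ:

```python
import torch
import torch.nn.functional as F

def emu_style_loss(text_logits, text_targets, pred_img_feats, clip_img_feats):
    """Sketch of the Emu-style joint objective described above:
    next-token cross-entropy on language tokens, plus L2 regression of the
    predicted visual features onto CLIP image features. The regressed
    features can later be decoded to pixels with a diffusion decoder."""
    # text_logits: (batch, text_len, vocab); text_targets: (batch, text_len)
    ce = F.cross_entropy(text_logits.flatten(0, 1), text_targets.flatten())
    # pred_img_feats, clip_img_feats: (batch, num_img_tokens, feat_dim)
    l2 = F.mse_loss(pred_img_feats, clip_img_feats)
    return ce + l2  # equal weighting is an assumption of this sketch

# Hypothetical shapes, just to show the call:
loss = emu_style_loss(
    torch.randn(2, 16, 32000), torch.randint(0, 32000, (2, 16)),
    torch.randn(2, 64, 1024), torch.randn(2, 64, 1024),
)
```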

Feature Alignment Mode: LLaVA, MiniGPT-4, mPLUG-Owl

Extend Diffusion Models
- UniControl: unify multiple image-and-text-to-image tasks
- Versatile Diffusion: a multi-task framework for diffusion models
- UniDiffuser: build a multi-modal framework starting from probabilistic modeling

UniControl
- Treats the task descriptor as a new condition (see the sketch below)
- Generalizes to new tasks
- Architecture similar to ControlNet and the T2I-Adapter
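A deliberately simplified sketch of "task descriptor as a new condition": a learned embedding per task is fused with the usual text and control-image conditioning, so one diffusion model can serve many image-and-text-to-image tasks. UniControl itself builds on a ControlNet-style network with task-aware modules, so treat this only as the conditioning idea, with all names and dimensions hypothetical:

```python
import torch
import torch.nn as nn

class TaskAwareConditioner(nn.Module):
    """Fuses text, control-image, and task-descriptor embeddings into a
    single conditioning vector for a multi-task diffusion denoiser (sketch)."""

    def __init__(self, d_cond: int = 256, num_tasks: int = 8):
        super().__init__()
        # One learned embedding per task, e.g. 'canny-to-image', 'depth-to-image'
        self.task_emb = nn.Embedding(num_tasks, d_cond)
        self.fuse = nn.Linear(3 * d_cond, d_cond)  # text + control image + task

    def forward(self, text_c, control_c, task_id):
        # text_c, control_c: (batch, d_cond); task_id: (batch,) integer ids
        cond = torch.cat([text_c, control_c, self.task_emb(task_id)], dim=-1)
        return self.fuse(cond)  # conditioning vector fed to the denoiser

cond = TaskAwareConditioner()(torch.randn(2, 256), torch.randn(2, 256),
                              torch.tensor([0, 3]))
```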

Versatile Diffusion
- A multi-task framework for diffusion models
- Four streams in the architecture; each task corresponds to a stream
- Trained on four losses at the same time

UniDiffuser
- A principled probabilistic modeling framework for multi-modality
- Models all distributions determined by two modalities (x0, y0):
  - Conditional distributions: q(x0|y0), q(y0|x0)
  - Joint distribution: q(x0, y0)
  - Marginal distributions: q(x0), q(y0)

UniDiffuser: Probabilistic Modeling
- Perturb each modality with its own timestep: x0 to x_tx at timestep tx, y0 to y_ty at timestep ty
- Marginal distribution q(x0): ty = T, i.e., y is pure noise
- Conditional distribution q(x0|y0): ty = 0, i.e., y0 is observed clean
- Joint distribution q(x0, y0): tx = ty, i.e., both modalities are perturbed together
- Estimating a single joint noise prediction network eps(x_tx, y_ty, tx, ty) can solve all of these problems
- Minimal modification to the original diffusion: the standard noise prediction loss, with the two timesteps sampled independently (a sketch follows)
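A sketch of the joint noise prediction network following the UniDiffuser formulation, where each modality carries its own timestep. The tiny MLP stands in for the real transformer, and all shapes are hypothetical; the point is how the choice of (tx, ty) selects the marginal, conditional, or joint case:

```python
import torch
import torch.nn as nn

T_MAX = 1000  # number of diffusion steps (hypothetical)

class JointNoisePredictor(nn.Module):
    """Stand-in for UniDiffuser's joint noise prediction network
    eps_theta(x_tx, y_ty, tx, ty): each modality has an independent timestep."""

    def __init__(self, dx: int = 16, dy: int = 16, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dx + dy + 2, hidden), nn.SiLU(), nn.Linear(hidden, dx + dy)
        )

    def forward(self, x_t, y_t, t_x, t_y):
        h = torch.cat([x_t, y_t, t_x[:, None] / T_MAX, t_y[:, None] / T_MAX], dim=-1)
        out = self.net(h)
        return out[:, : x_t.shape[-1]], out[:, x_t.shape[-1]:]  # (eps_x, eps_y)

# One network covers all distributions, per the UniDiffuser formulation:
model = JointNoisePredictor()
x_t = torch.randn(4, 16)                         # noised modality x
t = torch.randint(0, T_MAX, (4,))
t0 = torch.zeros(4, dtype=torch.long)            # timestep 0: modality kept clean
tT = torch.full((4,), T_MAX, dtype=torch.long)   # timestep T: modality fully noised

y0 = torch.randn(4, 16)                          # clean data for modality y
eps_cond = model(x_t, y0, t, t0)                 # conditional q(x0|y0): ty = 0
eps_marg = model(x_t, torch.randn(4, 16), t, tT) # marginal q(x0): y is pure noise
eps_joint = model(x_t, torch.randn(4, 16), t, t) # joint q(x0, y0): shared timestep
```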

UniDiffuser: Architecture
- The latent embeddings of both modalities can then be modeled with a single transformer; we use the U-ViT
- Long skip connections between shallow and deep layers greatly improve the performance (see the sketch below)
- Bao et al., All are Worth Words: A ViT Backbone for Diffusion Models
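A simplified sketch of U-ViT's long skip connections, with standard transformer encoder layers standing in for U-ViT's actual blocks. The key mechanism is the feature-wise concatenation of a shallow block's activations with the matching deep block's input, fused by a linear layer:

```python
import torch
import torch.nn as nn

class TinyUViT(nn.Module):
    """Simplified sketch of U-ViT's long skip connections: activations of
    shallow transformer blocks are concatenated (feature-wise) with the
    matching deep blocks' inputs and fused by a linear projection."""

    def __init__(self, d: int = 64, depth: int = 4, heads: int = 4):
        super().__init__()
        mk = lambda: nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
        self.down = nn.ModuleList(mk() for _ in range(depth))
        self.mid = mk()
        self.up = nn.ModuleList(mk() for _ in range(depth))
        self.skip_proj = nn.ModuleList(nn.Linear(2 * d, d) for _ in range(depth))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        skips = []
        for blk in self.down:                 # "shallow" half: stash activations
            tokens = blk(tokens)
            skips.append(tokens)
        tokens = self.mid(tokens)
        for blk, proj in zip(self.up, self.skip_proj):  # "deep" half: fuse skips
            tokens = proj(torch.cat([tokens, skips.pop()], dim=-1))
            tokens = blk(tokens)
        return tokens

out = TinyUViT()(torch.randn(2, 32, 64))  # (batch, tokens, dim), e.g. noised latents
```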

UniDiffuser
(Results shown as figures in the original slides.)

Conclusion
Multi-modality is still at an early stage; there is much to explore:
- Representation: a unified representation for different modalities, e.g., 3D and videos
- Architecture: does the current one (e.g., an extended LLM) support deep understanding of various modalities?
- Data: compared to LLMs, is all the data on the Internet sufficient to train a very strong multi-modal model? What to do if not?

References
- Flamingo: a Visual Language Model for Few-Shot Learning
- ClipCap: CLIP Prefix for Image Captioning
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
- PaLM-E: An Embodied Multimodal Language Model
- Language Is Not All You Need: Aligning Perception with Language Models (KOSMOS)
- Generative Pretraining in Multimodality (Emu)
- Visual Instruction Tuning (LLaVA)
- MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models
- mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality
- UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild
- Versatile Diffusion: Text, Images and Variations All in One Diffusion Model
- One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale (UniDiffuser)
