A Tutorial on Large Multi-Modal Generative Models
Fan Bao, Tsinghua University; CTO of ShengShu

What is Multi-Modality?
- Modality: a way to organize information
- Visual information: images, videos
- Spatial information: 3D
- Abstract information: text
- Large multi-modal models: sufficiently understand interleaved inputs of various modalities, and smartly choose a proper modality as the output, i.e., a proper way to present information
- Each modality carries its own special knowledge

Paradigms of Large Models
- Architecture: LLMs have converged to the transformer; for large multi-modal models there are many solutions, with no clearly optimal one yet
- Scaling law: for LLMs, big data plus trillions of parameters yields emergent abilities; for multi-modal models, still at an early stage of verification
- Alignment: for LLMs, instruction tuning plus RLHF yields a friendly assistant for humans; for multi-modal models, still at an early stage of verification

Schemes for Large Multi-Modal Models
- Extend large language models
- Extend diffusion models

Extend Large Language Models
- Adapter mode: add learnable modules to the LLM decoder
  - Flamingo
- Feature alignment mode: align features of other modalities to the embedding space of language tokens
  - Freeze the LLM: ClipCap, BLIP-2, PaLM-E
  - Learn all parameters: KOSMOS, Emu

Adapter Mode: Flamingo
- Freeze the self-attention layers of the LLM
- Inject learnable cross-attention layers
- Image embeddings serve as keys and values; text embeddings serve as queries
- Shows some simple in-context learning abilities over interleaved images and language
- The ability comes from 43M webpages of interleaved image-text data and 1.8B image-text pairs
- Outputs only a single modality: language
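The adapter mechanism can be sketched as a cross-attention block in which visual features act as keys and values while text hidden states act as queries. The following is a minimal illustration, loosely following Flamingo's gated cross-attention; the function name, shapes, and single-head linear projections are simplifying assumptions, not Flamingo's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(text_h, img_emb, Wq, Wk, Wv, gate):
    """One gated cross-attention layer, Flamingo-style: text hidden
    states are the queries; image embeddings give keys and values.
    `gate` is a learnable scalar initialized to 0, so at initialization
    the frozen LLM's computation is unchanged."""
    q = text_h @ Wq                          # (T_text, d)
    k = img_emb @ Wk                         # (T_img, d)
    v = img_emb @ Wv                         # (T_img, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return text_h + np.tanh(gate) * (attn @ v)   # gated residual branch

rng = np.random.default_rng(0)
d = 8
text_h = rng.standard_normal((5, d))         # 5 text tokens
img_emb = rng.standard_normal((3, d))        # 3 visual tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

out = gated_cross_attention(text_h, img_emb, Wq, Wk, Wv, gate=0.0)
# With gate = 0 the layer is the identity: a safe initialization that
# leaves the pretrained LLM's behavior intact before adapter training.
assert np.allclose(out, text_h)
```

The zero-initialized gate is the key design choice: the new layers can be trained on interleaved image-text data without perturbing the frozen LLM at the start.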
Feature Alignment Mode: ClipCap
- Learn a mapping network that converts visual features into the language embedding space
- The converted visual features serve as a prefix; the language model generates the subsequent text
- Supports only the captioning task
- Work from before the burst of large models: data size of 100M image-text pairs

Feature Alignment Mode: PaLM-E
- Uses more modalities: text, images, and sensor signals as inputs
- Trained on data from robotics tasks

Feature Alignment Mode: KOSMOS
- Tunes all parameters from scratch
- 1.6B parameters, not very large
- Fine-grained image understanding
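The ClipCap-style prefix idea above can be sketched as follows. All dimensions are illustrative, and the single linear mapping is a simplifying assumption (ClipCap itself uses an MLP or transformer mapper); this shows the data flow, not the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
clip_dim, lm_dim, prefix_len = 512, 768, 4

# Hypothetical mapping network: one linear layer that turns a single
# CLIP image feature into `prefix_len` pseudo-token embeddings.
W = rng.standard_normal((clip_dim, prefix_len * lm_dim)) * 0.02

def map_to_prefix(clip_feat):
    """Project a CLIP image feature into the LM's embedding space."""
    return (clip_feat @ W).reshape(prefix_len, lm_dim)

clip_feat = rng.standard_normal(clip_dim)    # stand-in CLIP feature
prefix = map_to_prefix(clip_feat)

# The prefix is concatenated in front of the caption's token
# embeddings; the (frozen) LM then generates the caption after it.
caption_emb = rng.standard_normal((10, lm_dim))   # 10 caption tokens
lm_input = np.concatenate([prefix, caption_emb], axis=0)
assert lm_input.shape == (prefix_len + 10, lm_dim)
```

Only the mapper is trained when the LLM is frozen, which is what makes this scheme cheap relative to full finetuning.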
Feature Alignment Mode: Emu
- Finetunes all parameters, based on LLaMA
- Also predicts the CLIP image feature using an L2 loss
- Supports image generation with a Stable Diffusion decoder

Feature Alignment Mode: LLaVA, MiniGPT-4, mPLUG-Owl
- Further examples of the feature alignment approach

Extend Diffusion Models
- UniControl: unify multiple image-and-text-to-image tasks
- Versatile Diffusion: a multi-task framework for diffusion models
- UniDiffuser: build a multi-modal framework starting from probabilistic modeling

UniControl
- Treats the task descriptor as a new condition
- Generalizes to new tasks
- Architecture similar to ControlNet and T2I-Adapter

Versatile Diffusion
- A multi-task framework for diffusion models
- Four streams in the architecture; each task corresponds to one stream
- Trained on the four losses at the same time

UniDiffuser
- A principled probabilistic modeling framework for multi-modality
- Models all distributions determined by the two modalities (x0, y0)
UniDiffuser: Probabilistic Modeling
- Model all distributions determined by the two modalities (x0, y0):
  - Conditional distributions: q(x0|y0), q(y0|x0)
  - Joint distribution: q(x0, y0)
  - Marginal distributions: q(x0), q(y0)
- Perturb the two modalities with independent noise levels (t_x, t_y) and learn a joint noise prediction network ε_θ(x_{t_x}, y_{t_y}, t_x, t_y):
  - Marginal q(x0): set t_y = T, so y is pure noise and carries no information
  - Conditional q(x0|y0): set t_y = 0, so y stays clean
  - Joint q(x0, y0): set t_x = t_y = t
- Estimating this single network can solve all of these problems
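The timestep trick can be sketched as follows. This is a toy illustration with made-up shapes and a deliberately trivial stand-in "network", showing only how one ε_θ(x, y, t_x, t_y) is queried for the joint, conditional, and marginal tasks; it is not the actual UniDiffuser model:

```python
import numpy as np

T = 1000  # maximum diffusion timestep

def eps_theta(x_t, y_t, t_x, t_y):
    """Stand-in joint noise prediction network: predicts the noise
    added to both modalities at levels (t_x, t_y). A real model would
    be a transformer (U-ViT) over both modalities' latent tokens;
    this toy parameterization exists only to make shapes concrete."""
    return x_t * (t_x / T), y_t * (t_y / T)

x_t = np.ones(4)   # noised image latent (toy)
y_t = np.ones(3)   # noised text embedding (toy)
t = 500

# One network, three tasks, selected purely by the timestep pair:
eps_joint = eps_theta(x_t, y_t, t, t)   # joint q(x0, y0): t_x = t_y
eps_cond  = eps_theta(x_t, y_t, t, 0)   # conditional q(x0|y0): y clean
eps_marg  = eps_theta(x_t, y_t, t, T)   # marginal q(x0): y pure noise
```

This is why a minimal modification suffices: standard diffusion training already covers every (t_x, t_y) pair, so one set of weights serves all of the distributions listed above.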
UniDiffuser: Probabilistic Modeling
- Requires only a minimal modification to the original diffusion training

UniDiffuser: Architecture
- A transformer models the latent embeddings of both modalities
- We use U-ViT (Bao et al., All are Worth Words: A ViT Backbone for Diffusion Models)
- Long skip connections between shallow and deep layers greatly improve performance

Conclusion
Multi-modality is still at an early stage, with much to explore:
- Representation: a unified representation for different modalities, e.g., 3D and videos
- Architecture: does the current one, e.g., an extended LLM, support deep understanding of various modalities?
- Data: compared to LLMs, is all the data on the Internet sufficient to train a very strong multi-modal model? And what to do if not?

References
- Flamingo: a Visual Language Model for Few-Shot Learning
- ClipCap: CLIP Prefix for Image Captioning
- BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
- PaLM-E: An Embodied Multimodal Language Model
- Language Is Not All You Need: Aligning Perception with Language Models (KOSMOS)
- Generative Pretraining in Multimodality (Emu)
- Visual Instruction Tuning (LLaVA)
- MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models
- mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality
- UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild
- Versatile Diffusion: Text, Images and Variations All in One Diffusion Model
- One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale (UniDiffuser)