1、Generative Pre-Training of Graph Neural Networks
Ziniu Hu1, Yuxiao Dong2, Kuansan Wang2, Kai-Wei Chang1, Yizhou Sun1
1University of California, Los Angeles  2Microsoft Research, Redmond

Learning from Unlabeled Data
- Unlabeled data: accessible, abundant (1000x more)
- Labeled data: expensive, scarce

2、Learning from Unlabeled Data
- Anomaly detection on a graph:
  - Labeled nodes: malicious accounts (scarce)
  - Unlabeled nodes: the whole graph (abundant)
- Recipe: unsupervised pre-training with the abundant unlabeled data, then fine-tuning with the few labeled data.

3、Unsupervised Pre-Training in NLP and CV
Recent progress in pre-training for NLP and CV shows that we can train very deep models (Transformer, ResNet) on unlabeled data to learn generic knowledge.
- Yinhan Liu et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach
- Ting Chen et al. A Simple Framework for Contrastive Learning of Visual Representations
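The pre-train-then-fine-tune recipe above can be sketched as a toy two-stage pipeline. This is a minimal illustration only, not the paper's generative pre-training objective: PCA stands in for the self-supervised task, a logistic head stands in for the downstream classifier, and all data, shapes, and hyperparameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: many unlabeled examples, a tiny labeled subset.
X_unlabeled = rng.normal(size=(1000, 8))           # abundant unlabeled features
X_labeled = X_unlabeled[:10]                       # scarce labeled subset
y_labeled = (X_labeled[:, 0] > 0).astype(float)    # toy binary labels

# Stage 1 - unsupervised pre-training: fit an encoder on unlabeled data alone
# (PCA via SVD here, as a placeholder for a real self-supervised objective).
mu = X_unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlabeled - mu, full_matrices=False)
encoder = Vt[:4].T                                 # 8-dim -> 4-dim projection

# Stage 2 - supervised fine-tuning: train a small logistic head on the few
# labeled examples, reusing the pre-trained encoder as-is.
H = (X_labeled - mu) @ encoder                     # encode the labeled data
w = np.zeros(4)
for _ in range(500):                               # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(H @ w)))
    w -= 0.5 * H.T @ (p - y_labeled) / len(y_labeled)

preds = ((H @ w) > 0).astype(float)
accuracy = (preds == y_labeled).mean()
```

The point of the sketch is the division of labor: the encoder is fit once on the abundant unlabeled pool, and only the small head is trained on the scarce labels.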