1-2 Fairness and Explainability in Graph Learning.pdf


Fairness and Explainability in Graph Neural Networks
Enyan Dai, College of Information Sciences and Technology, The Pennsylvania State University

Contents
01 Fairness: Background; Definitions; Adversarial Debiasing; Fairness Constraints
02 Explainability: Post-hoc Explanations (GNNExplainer); Self-Explainable GNN (SE-GNN)

01 Fairness

Discrimination/Biases in Machine Learning
Example: face recognition systems have been found to perform poorly for darker-skinned females.

Introduction
Graph neural networks carry a higher risk of discrimination:
- People linked in the network tend to have the same sensitive attributes.
- Message passing leads linked nodes to receive similar predictions.
Example: salary prediction. Because, for historical reasons, males in the dataset generally earn higher salaries, nodes whose ground-truth salary is high can be predicted low, and vice versa, depending on the sensitive attributes of their neighbors.

Empirical Analysis
Graph neural networks carry a higher risk of discrimination: large $\Delta_{SP}$ and $\Delta_{EO}$ indicate an unfair model, and results on the Pokec-z dataset confirm this empirically.
Dai, Enyan, and Suhang Wang. "Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information." WSDM 2021.

Definitions of Group Fairness
Node classification: let $s \in \{0,1\}$ and $y \in \{0,1\}$ denote the sensitive attribute and the label.
- Statistical parity: the prediction should be independent of the sensitive attribute:
  $P(\hat{y} = 1 \mid s = 0) = P(\hat{y} = 1 \mid s = 1)$
- Equal opportunity: the probability of an instance in the positive class being assigned a positive outcome should be equal for both subgroups:
  $P(\hat{y} = 1 \mid y = 1, s = 0) = P(\hat{y} = 1 \mid y = 1, s = 1)$
Link prediction:
- Dyadic fairness: an extension of statistical parity to link prediction. It requires the link predictor to give predictions that are independent of the sensitive attributes of the two target nodes:
  $P(g(u, v) \mid s_u = s_v) = P(g(u, v) \mid s_u \neq s_v)$,
  where $g(u, v)$ denotes the link prediction for nodes $u$ and $v$. (A toy sketch of the node-classification metrics follows.)
Dai, Enyan, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, and Suhang Wang. "A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability." 2022.
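As a minimal illustration of the two node-classification metrics above (my own sketch, not from the slides), the snippet below computes the statistical parity gap and the equal opportunity gap from binary predictions, labels, and sensitive attributes; the function and array names are hypothetical.

```python
import numpy as np

def fairness_gaps(y_pred, y_true, s):
    """Statistical parity gap and equal opportunity gap for binary
    predictions y_pred, labels y_true, and sensitive attribute s (all 0/1)."""
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))

    # Delta_SP = |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|
    delta_sp = abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

    # Delta_EO = |P(y_hat = 1 | y = 1, s = 0) - P(y_hat = 1 | y = 1, s = 1)|
    pos = y_true == 1
    delta_eo = abs(y_pred[pos & (s == 0)].mean() - y_pred[pos & (s == 1)].mean())
    return delta_sp, delta_eo

# Toy usage with made-up values; both gaps would be 0 for a perfectly fair model.
print(fairness_gaps([1, 0, 1, 1], [1, 0, 1, 0], [0, 0, 1, 1]))
```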

Adversarial Debiasing in FairGNN
- A GCN-based sensitive attribute estimator obtains estimated sensitive attributes (only limited sensitive attribute information is available).
- The adversary aims to predict the sensitive attribute from $h$, the representation learned by the GNN classifier.
- The classifier aims to learn a representation that fools the adversary into predicting the wrong sensitive attributes, thereby eliminating the sensitive information from the representation used for classification.
When the adversarial loss reaches its global minimum, $P(\hat{y} \mid s = 0) = P(\hat{y} \mid s = 1)$, i.e., statistical parity holds. (A simplified training-loop sketch follows.)
(Dai and Wang, WSDM 2021)
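The sketch below illustrates the adversarial debiasing idea in PyTorch under simplifying assumptions of my own: the GNN encoder is replaced by a plain MLP, the estimated sensitive attributes are random placeholders, and all module and tensor names are hypothetical. It is not the FairGNN implementation, only the two-player training pattern the slide describes.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(200, 16)                     # node features (toy stand-in for a graph)
y = torch.randint(0, 2, (200,)).float()      # task labels
s_hat = torch.randint(0, 2, (200,)).float()  # estimated sensitive attributes

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # stands in for the GNN encoder
classifier = nn.Linear(32, 1)                           # label head on top of h
adversary = nn.Linear(32, 1)                            # tries to recover s_hat from h
bce = nn.BCEWithLogitsLoss()
opt_model = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)

for step in range(100):
    # 1) Train the adversary to predict the sensitive attribute from the (frozen) representation.
    h = encoder(x).detach()
    opt_adv.zero_grad()
    bce(adversary(h).squeeze(1), s_hat).backward()
    opt_adv.step()

    # 2) Train encoder + classifier: fit the labels while fooling the adversary
    #    (i.e., maximize the adversary's loss, weighted by alpha).
    alpha = 1.0
    h = encoder(x)
    loss = bce(classifier(h).squeeze(1), y) - alpha * bce(adversary(h).squeeze(1), s_hat)
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()
```

At the equilibrium of this game the representation carries no usable information about the sensitive attribute, which is what links the adversarial objective to statistical parity on the slide.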

Fairness Constraints
Minimize the absolute covariance between the estimated sensitive attribute $\hat{s}$ and the prediction $\hat{y}$. Adding this covariance constraint to the objective:
- further ensures fairness;
- stabilizes the training of adversarial debiasing.
(Dai and Wang, WSDM 2021; a sketch of one way to write the regularizer follows.)
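A hedged sketch of the covariance-based constraint described above: penalize the absolute covariance between the estimated sensitive attribute and the soft prediction. The function and tensor names are hypothetical, and this is only one plausible way to express the regularizer, not the paper's exact code.

```python
import torch

def covariance_penalty(s_hat: torch.Tensor, y_prob: torch.Tensor) -> torch.Tensor:
    """|Cov(s_hat, y_prob)| over a batch of nodes."""
    s_centered = s_hat - s_hat.mean()
    y_centered = y_prob - y_prob.mean()
    return (s_centered * y_centered).mean().abs()

# Toy usage: the penalty would be added to the task loss, e.g.
#   loss = task_loss + lambda_cov * covariance_penalty(s_hat, y_prob)
s_hat = torch.rand(200)    # estimated sensitive attributes (toy values)
y_prob = torch.rand(200)   # predicted probabilities (toy values)
print(covariance_penalty(s_hat, y_prob))
```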

Results of FairGNN
- FairGCN and FairGAT use different GNN backbones as the classifier.
- Both achieve fairness with little drop in classification performance.
(Dai and Wang, WSDM 2021)

General Framework of Adversarial Debiasing
Training dataset → GNN encoder → representations consumed by an adversary model and a classifier model. The open design questions are which GNN encoder, which adversary, and which classifier to use.
(Dai et al., survey, 2022)

Fairness Constraints: General Framework
Training dataset → GNN classifier → predicted vectors → fairness regularizer. The open design question is which fairness regularizer to apply.
(Dai et al., survey, 2022)

Datasets
An overview of the datasets commonly used in fair graph learning; see the survey for the full table.
(Dai et al., survey, 2022)

Future Directions in Fairness
1. Attack and defense in fairness
2. Fairness on heterogeneous graphs
3. Fairness without sensitive attributes
(Dai et al., survey, 2022)

02 Explainability

Introduction
Motivating example: in a transaction network, a graph neural network assigns a user a low credit score, and we need to explain why. Explainability of graph neural networks is required for various applications:
- credit estimation
- fraud detection
- drug generation
(Dai et al., survey, 2022)

Post-hoc Explanations: GNNExplainer
- Identify the subgraph that is important for a given prediction.
- Objective function: find the subgraph that contributes most to the prediction. (A toy edge-mask sketch follows.)
Ying, Zhitao, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. "GNNExplainer: Generating Explanations for Graph Neural Networks." Advances in Neural Information Processing Systems 32 (2019).
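The following is a minimal sketch of the post-hoc idea behind GNNExplainer rather than the authors' implementation: learn a soft mask over the edges seen by a frozen model so that the masked graph preserves the original predictions, with a sparsity penalty that keeps the explanation small. The one-layer dense-adjacency "GNN", the coefficients, and all names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, d = 8, 5
A = (torch.rand(n, n) < 0.3).float()
A = ((A + A.t()) > 0).float()                 # toy symmetric adjacency matrix
X = torch.randn(n, d)                         # node features
W = torch.randn(d, 2)                         # frozen weights of a "trained" one-layer GNN

def gnn_logits(adj):
    return adj @ X @ W                        # one propagation step + linear head

target = gnn_logits(A).argmax(dim=1)          # the predictions we want to explain

edge_mask = torch.nn.Parameter(torch.zeros_like(A))   # logits of a soft edge mask
opt = torch.optim.Adam([edge_mask], lr=0.05)
for _ in range(200):
    masked_adj = torch.sigmoid(edge_mask) * A          # mask only existing edges
    logits = gnn_logits(masked_adj)
    # Keep the original predictions (cross-entropy) while making the mask sparse.
    loss = F.cross_entropy(logits, target) + 0.05 * masked_adj.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

explanation = (torch.sigmoid(edge_mask) * A) > 0.5     # surviving edges = explanation subgraph
print(explanation.nonzero())
```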

Self-Explainable GNN: SE-GNN
- Identifies interpretable K-nearest labeled nodes for node classification.
- Two-fold self-explanations:
  1. the K-nearest labeled nodes and their labels;
  2. explanations in local graph structure similarity: edge matching between the local graph of the target node and the local graph of a labeled node, for interpretable similarity modeling.
Dai, Enyan, and Suhang Wang. "Towards Self-Explainable Graph Neural Network." CIKM 2022.

Proposed SE-GNN
1. Interpretable similarity modeling: explicitly models node similarity and local structure similarity with explanations, and identifies the K-nearest labeled nodes of the target node.
2. Prediction with K-nearest labeled nodes: a novel loss function for accurate predictions and similarity modeling.
3. Self-supervisor: self-supervision that benefits similarity modeling for explanations.
(Dai and Wang, CIKM 2022)

Interpretable Similarity Modeling: Node Similarity
Encoder:
1. Encode the node features.
2. Apply one GCN layer with a residual connection; a deeper GCN may lead to the oversmoothing issue, which negatively affects similarity modeling.
The node similarity of $v_i$ and $v_j$ is then computed from their node embeddings (a rough sketch of such an encoder follows).
(Dai and Wang, CIKM 2022)
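Below is a rough sketch of the kind of shallow encoder this slide describes: a feature encoder followed by a single GCN-style propagation with a residual connection, plus a cosine node similarity. The MLP feature encoder, the dense normalized adjacency, and all names are simplifying assumptions of mine, not the SE-GNN code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowEncoder(nn.Module):
    """Feature encoder + one GCN-style layer with a residual connection."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.gcn = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj_norm):
        h = self.mlp(x)
        # A single propagation step; the residual keeps node identity and avoids
        # the oversmoothing a deeper stack could introduce.
        return h + F.relu(adj_norm @ self.gcn(h))

def node_similarity(z_i, z_j):
    """Cosine similarity between two node embeddings."""
    return F.cosine_similarity(z_i, z_j, dim=-1)

# Toy usage with a placeholder normalized adjacency.
n, d = 6, 8
x = torch.randn(n, d)
adj_norm = torch.eye(n)
z = ShallowEncoder(d, 16)(x, adj_norm)
print(node_similarity(z[0], z[1]))
```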

Local Structure Similarity
- Edge embeddings: the embedding of the edge linking $v_i$ and $v_j$ is built from the embeddings of its two endpoints.
- Edges of the target node's local graph are matched to edges of the labeled node's local graph; the matched edge pairs serve as the structure-level explanation.
- Local structure similarity: the average edge similarity over the matched edges (a toy matching sketch follows).
(Dai and Wang, CIKM 2022)
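The sketch below is one simple way to realize the edge-matching idea: build an edge embedding from its endpoint embeddings, greedily match each edge of the target node's local graph to its most similar edge in the labeled node's local graph, and average the matched similarities. The greedy matching, the mean-of-endpoints edge embedding, and all names are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def edge_embeddings(z, edges):
    """Embedding of each edge (i, j), here the mean of its endpoint embeddings."""
    return torch.stack([(z[i] + z[j]) / 2 for i, j in edges])

def local_structure_similarity(z, edges_target, edges_labeled):
    """Greedily match every target-local edge to its most similar labeled-local edge;
    return the average similarity of the matched edges and the matched index pairs."""
    e_t = F.normalize(edge_embeddings(z, edges_target), dim=1)
    e_l = F.normalize(edge_embeddings(z, edges_labeled), dim=1)
    sims = e_t @ e_l.t()                       # pairwise cosine similarities
    best = sims.argmax(dim=1)                  # best labeled-local match per target edge
    matches = list(zip(range(len(edges_target)), best.tolist()))
    return sims.max(dim=1).values.mean(), matches

# Toy usage: a shared embedding table and two small local graphs.
z = torch.randn(10, 16)
score, matches = local_structure_similarity(z, [(0, 1), (0, 2)], [(5, 6), (5, 7), (6, 7)])
print(score, matches)   # the matched pairs are the structure-level explanation
```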

Prediction with K-Nearest Labeled Nodes
- Overall similarity combines node similarity and local structure similarity.
- Predict by a weighted average of the labels of the K-nearest labeled nodes, weighted by their similarity to the target node (a toy sketch follows).
- Objective function: maximize the similarity scores between the target node and candidate K-nearest labeled nodes that belong to the same class as the target node, with randomly sampled nodes from other classes as negatives. This facilitates both classification and node similarity modeling.
(Dai and Wang, CIKM 2022)
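As an illustration of similarity-weighted label voting (a sketch under my own assumptions about the weighting, not the paper's exact formulation), the snippet below predicts class probabilities from the K most similar labeled nodes; all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def knn_label_prediction(similarity, labels, num_classes, k=3):
    """similarity: overall similarity of the target node to every labeled node;
    labels: classes of the labeled nodes. Returns class probabilities from a
    similarity-weighted average of the K nearest labeled nodes' one-hot labels."""
    topk = torch.topk(similarity, k)
    weights = torch.softmax(topk.values, dim=0)           # normalize the K similarities
    onehot = F.one_hot(labels[topk.indices], num_classes).float()
    probs = (weights.unsqueeze(1) * onehot).sum(dim=0)
    return probs, topk.indices                             # the indices are the explanation

# Toy usage: 5 labeled nodes, binary classes.
sim = torch.tensor([0.9, 0.1, 0.7, 0.4, 0.8])
lbl = torch.tensor([1, 0, 1, 0, 0])
print(knn_label_prediction(sim, lbl, num_classes=2, k=3))
```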

Enhance Explanation with Self-Supervision
- Contrastive self-supervised task: from an input local graph, create augmented views by edge perturbation and attribute masking, pass them through the encoder, and maximize the similarity between the node embeddings and edge embeddings of the corresponding views.
- One objective function for node similarity and one for edge similarity (a coarse sketch follows).
(Dai and Wang, CIKM 2022)
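A coarse sketch of the contrastive self-supervision step under my own simplifications: two augmented views of the same graph are produced by edge perturbation and attribute masking, both are encoded, and the agreement between corresponding node embeddings is maximized (the same pattern would apply to edge embeddings). The augmentation rates, the one-layer encoder, and all names are placeholders; a full contrastive loss would also use negative pairs.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def augment(x, adj, drop_edge=0.2, mask_feat=0.2):
    """Edge perturbation (randomly drop edges) + attribute masking (zero feature columns)."""
    adj_aug = adj * (torch.rand_like(adj) > drop_edge).float()
    x_aug = x * (torch.rand(1, x.size(1)) > mask_feat).float()
    return x_aug, adj_aug

def encode(x, adj, w):
    return F.relu(adj @ x @ w)                 # one-layer GNN stand-in

n, d, h = 12, 8, 16
x, adj = torch.randn(n, d), torch.eye(n)       # toy features and placeholder adjacency
w = torch.randn(d, h, requires_grad=True)
opt = torch.optim.Adam([w], lr=1e-2)

for _ in range(50):
    z1 = F.normalize(encode(*augment(x, adj), w), dim=1)
    z2 = F.normalize(encode(*augment(x, adj), w), dim=1)
    # Maximize the cosine similarity between the two views of every node.
    loss = -(z1 * z2).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```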

Node Classification on Real-World Datasets
- Baselines include state-of-the-art GNNs, robust GNNs, and self-supervised methods; KNN is applied to the embeddings learned by the different encoders.
- SE-GNN achieves results comparable to state-of-the-art self-supervised GNNs.
(Dai and Wang, CIKM 2022)

Evaluate the Edge Matching Explanation
Qualitative visualization: node locations are based on the node attributes, node colors denote the node labels, and edges sharing the same number are matched edges.
(Dai and Wang, CIKM 2022)

Quantitative Explanation Evaluation
- Quality of the K-nearest labeled node explanation.
- Quality of the edge matching explanation (based on the subgraphs of the nearest labeled nodes).
- We generate a synthetic dataset, Syn-Cora, that provides ground truth for both the K-nearest labeled node explanation and the edge matching explanation.
(Dai and Wang, CIKM 2022)

Future Directions in Explainability
1. Benchmark datasets for explanations
2. Class-level explanations
3. Applying explanations to fairness and robustness
(Dai et al., survey, 2022)

Thanks for Listening
More details can be found in our survey:
Paper: https://arxiv.org/pdf/2204.08570.pdf
Repository: https:/
