Fairness and Explainability in Graph Neural Networks

Enyan Dai
College of Information Sciences and Technology, The Pennsylvania State University

CONTENT
01 Fairness: Background / Definitions / Adversarial Debiasing / Fairness Constraints
02 Explainability: Post-hoc Explanations (GNNExplainer) / Self-Explainable GNN (SE-GNN)

01 Fairness

Discrimination/Biases in Machine Learning
Face recognition performs poorly for darker-skinned females.

Introduction
Graph neural networks have a higher risk of discrimination:
- People linked in the
network tend to have the same sensitive attributes.
- Message passing leads linked nodes to have similar predictions.

An example of salary prediction (due to historical reasons, males generally earn a higher salary in the dataset):
[Figure: salary-prediction example; some nodes with ground truth "high" are predicted "low" and some with ground truth "low" are predicted "high", following their linked group rather than their labels.]

Empirical Analysis
Graph neural networks have a higher risk of discrimination: large Δ_SP and Δ_EO indicate an unfair model. Results on Pokec-z.
Dai, Enyan, and Suhang Wang. Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information. WSDM 2021.

Definitions of Group Fairness
Node classification: let s ∈ {0, 1} and y ∈ {0, 1} denote the sensitive attribute and the label.
- Statistical parity: the prediction should be independent of the sensitive attribute:
  P(ŷ = 1 | s = 0) = P(ŷ = 1 | s = 1)
- Equal opportunity: the probability of an instance in the positive class being assigned a positive outcome should be equal for both subgroups:
  P(ŷ = 1 | y = 1, s = 0) = P(ŷ = 1 | y = 1, s = 1)
Link prediction:
- Dyadic fairness: an extension of statistical parity to link prediction. It requires the link predictor to give predictions independent of the sensitive attributes of the target nodes:
  P(g(u, v) | s(u) = s(v)) = P(g(u, v) | s(u) ≠ s(v)),
  where g(u, v) denotes the link prediction for nodes u and v.
Dai, Enyan, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, and Suhang Wang. A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. 2022.
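The two node-classification definitions above can be measured empirically from model outputs. Below is a minimal sketch (the function names are ours, not from the talk), assuming 0/1 lists for predictions, labels, and sensitive attributes:

```python
# Empirical group-fairness gaps (illustrative helper, not from the talk):
#   delta_sp = |P(yhat=1 | s=0) - P(yhat=1 | s=1)|
#   delta_eo = |P(yhat=1 | y=1, s=0) - P(yhat=1 | y=1, s=1)|

def positive_rate(y_hat, mask):
    """Fraction of predicted positives among entries where mask is True."""
    selected = [p for p, m in zip(y_hat, mask) if m]
    return sum(selected) / len(selected)

def fairness_gaps(y_hat, y, s):
    """y_hat, y, s: 0/1 lists over the evaluated nodes."""
    delta_sp = abs(
        positive_rate(y_hat, [si == 0 for si in s])
        - positive_rate(y_hat, [si == 1 for si in s])
    )
    delta_eo = abs(
        positive_rate(y_hat, [yi == 1 and si == 0 for yi, si in zip(y, s)])
        - positive_rate(y_hat, [yi == 1 and si == 1 for yi, si in zip(y, s)])
    )
    return delta_sp, delta_eo
```

A model is considered fairer as both gaps approach zero; this is exactly the sense in which large Δ_SP and Δ_EO indicate an unfair model.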
Adversarial Debiasing in FairGNN
- The adversary aims to predict sensitive attributes from h, the representation learned by the GNN classifier.
- The classifier aims to learn representations that fool the adversary into predicting wrong sensitive attributes.
[Figure: a GCN-based sensitive attribute estimator feeds estimated sensitive attributes to the adversary.]
- Sensitive attribute estimator: obtains estimated sensitive attributes.
- Classifier and adversary: eliminate the sensitive information in the representation used for classification.
When the adversarial loss reaches its global minimum: P(ŷ | s = 0) = P(ŷ | s = 1), i.e., statistical parity.
Dai, Enyan, and Suhang Wang. Say no to the discrimination: Learning fair graph neural networks with limited sensitive attribute information. WSDM 2021.

Fairness Constraints
Minimize the absolute covariance between the estimated sensitive attributes ŝ and the prediction ŷ. Adding this covariance constraint:
- further ensures fairness;
- stabilizes the training of adversarial debiasing.

Results of FairGNN
- FairGCN and FairGAT use different backbones as the classifier.
- FairGCN and FairGAT achieve fairness with little drop in performance.
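The covariance constraint above is easy to state concretely. The sketch below (names are ours; in FairGNN this quantity is added to the training loss and driven toward zero) computes |Cov(ŝ, ŷ)| over a batch:

```python
# Covariance-based fairness regularizer (illustrative sketch):
# R = |Cov(s_hat, y_hat)|; a value near 0 means the predictions carry
# little linear information about the estimated sensitive attribute.

def abs_covariance(s_hat, y_hat):
    """|Cov(s_hat, y_hat)| for estimated sensitive attributes and predictions."""
    ms = sum(s_hat) / len(s_hat)
    my = sum(y_hat) / len(y_hat)
    return abs(sum((a - ms) * (b - my) for a, b in zip(s_hat, y_hat)) / len(s_hat))
```

Unlike the adversarial term, this regularizer is a simple differentiable statistic, which is why it also helps stabilize training.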
General Framework of Adversarial Debiasing
[Diagram: training dataset → GNN encoder → classifier model, with an adversary model attached to the encoder output.]

Fairness Constraints (general framework)
[Diagram: training dataset → GNN classifier → predicted vectors → fairness regularizer.]

Datasets
Dai, Enyan, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, and Suhang Wang. A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. 2022.

Future Directions
1. Attack and defense in fairness
2. Fairness on heterogeneous graphs
3. Fairness without sensitive attributes

02 Explainability

Introduction
[Figure: a transaction network where a GNN predicts a low credit score; "Why?"]
Explainability of graph neural networks is required for various applications:
- Credit estimation
- Fraud detection
- Drug generation

Post-hoc Explanations: GNNExplainer
- Identify the important subgraph.
- Objective function: find the subgraph that contributes most to the prediction.
Ying, Zhitao, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. GNNExplainer: Generating explanations for graph neural networks. Advances in Neural Information Processing Systems 32 (2019).
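GNNExplainer itself learns a continuous edge mask by gradient descent so that the masked subgraph preserves the model's prediction. The toy sketch below is a gradient-free caricature of the same idea (all names are ours, and `model_prob` is a hypothetical stand-in for a trained GNN): rank edges by how much deleting each one changes the model's output, and keep the top ones as the explanation subgraph.

```python
# Gradient-free caricature of post-hoc subgraph explanation (the real
# GNNExplainer optimizes a soft edge mask instead of deleting edges).

def edge_importance(edges, model_prob):
    """Score each edge by the drop in predicted probability when removed."""
    base = model_prob(edges)
    return {e: base - model_prob([x for x in edges if x != e]) for e in edges}

def explain(edges, model_prob, top_k):
    """Return the top_k most important edges as the explanation subgraph."""
    imp = edge_importance(edges, model_prob)
    return sorted(edges, key=lambda e: imp[e], reverse=True)[:top_k]
```

For a model whose output hinges on one edge, that edge dominates the ranking, which is the behavior a faithful explainer should recover.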
Self-Explainable GNN
Dai, Enyan, and Suhang Wang. "Towards Self-Explainable Graph Neural Network." CIKM 2022.
- Identify interpretable K-nearest labeled nodes for node
classification.
- Two-fold self-explanations:
  1. The K-nearest labeled nodes and their labels
  2. Explanations of local graph structure similarity (edge matching for interpretable similarity modeling)
[Figure: local graphs of the target node and a labeled node.]

Proposed SE-GNN
1. Interpretable similarity modeling: explicitly models node similarity and local structure similarity with explanations, and identifies the K-nearest labeled nodes of the target node.
2. Prediction with K-nearest labeled nodes: a novel loss function for accurate predictions and similarity modeling.
3. Self-supervisor: self-supervision to benefit similarity modeling for explanations.

Interpretable Similarity Modeling: Node Similarity
Encoder:
1. Encode the node features.
2. Apply one GCN layer with a residual connection: a deeper GCN may lead to an oversmoothing issue that negatively affects similarity modeling.
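A minimal sketch of this encoder design (illustrative only; the actual SE-GNN encoder uses learned weight matrices, which we omit here): one mean-aggregation GCN layer plus a residual connection, with node similarity taken as the cosine similarity of the resulting embeddings.

```python
import math

def gcn_layer_with_residual(features, neighbors):
    """One mean-aggregation GCN layer; the residual term keeps each node's
    own signal, mitigating the oversmoothing a deeper stack would cause."""
    out = []
    for i, x in enumerate(features):
        agg = [sum(features[j][d] for j in neighbors[i]) / len(neighbors[i])
               for d in range(len(x))]
        out.append([xi + ai for xi, ai in zip(x, agg)])  # residual + aggregate
    return out

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))
```

Even one round of aggregation already pulls linked nodes' embeddings together, which illustrates both why a single layer suffices for similarity modeling and why stacking many layers would oversmooth.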
[Figure: the encoder maps the local graphs of the target node and a labeled node to node embeddings, from which their node similarity is computed.]

Local Structure Similarity
- Edge embeddings are computed for each edge in the two local graphs.
- Local structure similarity: the average edge similarity of the matched edges.
[Figure: edges of the two local graphs are matched pairwise; the matching itself serves as the explanation.]

Prediction with K-Nearest Labeled Nodes
- Overall similarity combines node similarity and local structure similarity.
- Predict by weighted averaging of the labels of the K-nearest labeled nodes.
- Objective function: maximize the similarity scores between the target node and the candidate K-nearest labeled nodes that belong to the same class, against randomly sampled nodes that belong to other classes. This facilitates both classification and node similarity modeling.

Enhance Explanation with Self-Supervision
- Augment the input local graph with edge perturbation and attribute masking.
- Contrastive self-supervised task: maximize the similarity between the node embeddings and edge embeddings of the original and augmented graphs.
- Separate objective functions for node similarity and edge similarity.
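The prediction step described above (weighted averaging of the labels of the K nearest labeled nodes) can be sketched as follows. This is a simplification with hypothetical names: the overall similarities are taken as given, whereas SE-GNN computes them from the node and structure terms.

```python
# Illustrative K-nearest-labeled-nodes prediction (not the official code):
# select the K labeled nodes most similar to the target, then average
# their labels weighted by similarity.

def knn_predict(sims, labels, num_classes, k):
    """sims[i]: similarity of labeled node i to the target (assumed > 0);
    labels[i]: its class id. Returns a normalized class distribution."""
    top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
    scores = [0.0] * num_classes
    for i in top:
        scores[labels[i]] += sims[i]
    total = sum(scores)
    return [sc / total for sc in scores]
```

The selected neighbors and their labels are exactly the first kind of self-explanation: the prediction is transparent because it is a weighted vote of identifiable labeled nodes.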
Node Classification on Real-World Datasets
- Baselines include self-supervised methods, state-of-the-art GNNs, robust GNNs, and KNN applied to embeddings learned by different encoders.
- SE-GNN achieves results comparable with state-of-the-art self-supervised GNNs.

Evaluate the Edge Matching Explanation
[Figure: node locations are based on the node attributes; node colors denote node labels; edges with the same number denote that they are matched.]

Quantitative Explanation Evaluation
- Quality of the K-nearest labeled nodes explanation.
- Quality of the edge matching explanation (based on the subgraphs of the nearest labeled nodes).
- We generate a synthetic dataset, Syn-Cora, that provides ground truth for both the K-nearest labeled node explanation and the edge matching explanation.

Future Directions
1. Benchmark datasets for explanations
2. Class-level explanations
3. Applying explanations to fairness and robustness

Thanks for Listening
More details can be found in our survey:
Paper: https://arxiv.org/pdf/2204.08570.pdf
Repository: https:/