Exploring the Out-of-Distribution Generalization Problem in Graph Explainability
Junfeng Fang, PhD, University of Science and Technology of China
DataFunSummit #2023

Guiding questions:
1. Are the current explainability evaluation metrics really "fair"?
2. Why do explanation algorithms introduce the OOD problem?
3. How can we achieve joint network-data explanation?
(Presentation style notes: 1) avoid formulas; 2) mixed Chinese and English.)

Background

How to define explainability?
Explainability is the degree to which a human can understand the model's result [1].
[1] Miller, Tim. "Explanation in Artificial Intelligence: Insights from the Social Sciences." arXiv preprint arXiv:1706.07269 (2017).

For GNNs, this means finding which fractions of the input are most influential to the GNN's prediction, i.e., finding the important subgraph. Example (BA3-motif): each input graph contains a Cycle, Grid, or House motif that determines its label, and the explanation should recover that motif. (Figure: explanatory subgraphs with edge-importance masks, e.g. m = 0.03 vs. m = 0.94.)

Existing methods: SA, Grad-CAM, GNNExplainer, PGExplainer, CF-GNNExplainer, GraphMask, GSAT, DIR*, GREA*, Gem, PGM-Explainer.
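The subgraph-extraction step described above can be illustrated in a few lines. A minimal sketch, assuming hypothetical inputs: `top_k_subgraph` and the hard-coded scores stand in for the edge masks that a real explainer such as GNNExplainer would learn from the model.

```python
# Minimal sketch of subgraph-style explanation: rank edges by an
# importance score and keep the top-k as the explanatory subgraph.
# The scores below are invented stand-ins for learned explainer masks.

def top_k_subgraph(edges, scores, k):
    """Return the k highest-scoring edges as the explanation."""
    ranked = sorted(zip(edges, scores), key=lambda p: p[1], reverse=True)
    return [e for e, _ in ranked[:k]]

# Toy graph: a 3-cycle (the label-defining motif) plus two noise edges.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)]
scores = [0.90, 0.94, 0.88, 0.03, 0.05]  # e.g. masks from an explainer

explanation = top_k_subgraph(edges, scores, k=3)
print(explanation)  # [(1, 2), (0, 1), (2, 0)] — the cycle edges
```

The choice of k (or a threshold on the scores) is itself a design decision; the methods listed above differ mainly in how the scores are produced, not in this selection step.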
Evaluation Metrics
"Evaluating Post-hoc Explanations for Graph Neural Networks via Robustness Analysis." NeurIPS 2023 Oral (top 2%).

Existing evaluation metrics:
1. Human supervision: check whether the explanations align with human knowledge. This is often too subjective to provide quantifiable assessments.
2. Agreement between the generated and ground-truth explanations (e.g., Precision and Recall against the Cycle/Grid motifs). Unfortunately, ground-truth explanations are usually unavailable and labor-intensive to annotate.
3. Feature Removal (RM): first remove the unimportant features, feed the remaining part (i.e., the explanatory subgraph) into the GNN, and then observe how the prediction changes.
   Caveat of RM: the after-removal subgraphs are likely to lie off the distribution of full graphs, so the GNN is forced to handle these off-manifold inputs and easily produces erroneous predictions.
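The Feature Removal protocol can be sketched as follows. This is a toy illustration, not the paper's implementation: `toy_model` is an invented stand-in for a trained GNN (a real GNN's unreliable behavior on the off-manifold subgraph is exactly what the caveat above warns about).

```python
import math

# Sketch of the Feature Removal (RM) metric: feed only the explanatory
# subgraph to the model and measure how much the prediction changes.
# `toy_model` is a hypothetical stand-in: it scores a graph by how many
# cycle-motif edges it contains, squashed through a sigmoid.

def toy_model(edges):
    cycle = {(0, 1), (1, 2), (2, 0)}
    hits = sum(e in cycle for e in edges)
    return 1.0 / (1.0 + math.exp(-(2.0 * hits - 3.0)))  # p(class = cycle)

def removal_fidelity(full_edges, explanation_edges):
    """Prediction change when only the explanation is kept (lower = better)."""
    return toy_model(full_edges) - toy_model(explanation_edges)

full = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)]
print(removal_fidelity(full, [(0, 1), (1, 2), (2, 0)]))  # 0.0: prediction preserved
print(removal_fidelity(full, [(3, 4), (4, 5)]))          # ~0.9: motif missed
```

Note how the metric silently assumes the model behaves sensibly on the bare subgraph; for a GNN trained only on full graphs, that assumption is what breaks.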
4. Generation-based metrics: instead of feeding the subgraph directly, use a generative model to generate a new full graph conditioned on the subgraph.
   Caveat of generation-based metrics: they respect the data distribution to some extent, but they can be inconsistent with the GNN's behavior and lose control over the infilled part.

Can we design a metric that respects the data distribution and the GNN's behavior simultaneously?

STEP 1: Formulation of adversarial robustness.
In plain terms: how do we judge whether a given explanation is good? If it is good, then arbitrarily perturbing the edges outside the explanation should not flip the output. The robustness of an explanation can therefore be defined as the minimal perturbation that flips the output.

STEP 2: Finding a tractable objective.
Two observations: perturbations may fail to find any adversarial example, and current attack methods typically settle for an alternative sub-optimal solution. Lagrange duality yields a tractable surrogate objective. In plain terms: each round, perturb only part of the non-explanation edges and observe how the output changes.

STEP 3: OOD reweighting block.
The explanatory subgraph may violate the data distribution, so we measure the degree of violation to ensure that the evaluation process respects the data distribution. In plain terms: the worse the subgraph can be reconstructed, the more it violates the original data manifold.

Overall framework: the resulting metric is OAR, with a simplified variant SimOAR. The key point: do not feed the subgraph into the GNN directly. Instead, fix the subgraph, perturb the remaining ("grey") edges at a ratio k, and observe the average change in the output.

Results: (quantitative result figures omitted.)

Explanation Method
"Cooperative Explanations of Graph Neural Networks." WSDM 2023.

Revealing the OOD limitation.
In plain terms: feeding Gs directly into a model f trained on the full graphs G is incorrect. Can we then find a model trained on (and thus suited to) Gs? But Gs is derived from f, and the new model would in turn be derived from Gs: a circular dependency.

CGE: theoretical basis.
CGE builds on the Lottery Ticket Hypothesis (LTH).
- LTH in GNNs: generate a mask over the network, train, rewind; iteratively select a subnetwork.
- Explainer (GNNExplainer): generate a mask over the graph, train, rewind; iteratively select a subgraph.

Incorporating LTH and GNNExplainer (Cooperative GNN Explanation, CGE):
- Graph side: generate mask, train for MMI (maximizing mutual information, not fitting the label), rewind.
- Network side: generate mask, train for MMI (not fitting the label), rewind.

Employing CGE to optimize existing methods: existing methods generate importance scores for each edge (a mask M1), which CGE then trains for MMI jointly with the network mask.

Tricks:
- Parameter simplification: force the masks of all neurons in one convolutional unit to share the same value.
- Early stopping.

(Algorithm details omitted.)

Experiment: quantitative & qualitative analysis. (Result tables and figures omitted.)

Theory in GNN Explanation
"On Regularization for Explaining Graph Neural Networks: An Information Theory Perspective." (URL truncated in source: https:/ )

Why OOD? Thanks!
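As a closing sketch, the (Sim)OAR key point described earlier (fix the explanatory subgraph, perturb a ratio k of the remaining edges, and average the output change) can be illustrated in a few lines. `toy_model` is again a hypothetical stand-in for a trained GNN, not the paper's implementation.

```python
import math
import random

# Hedged sketch of the key idea: keep the explanation fixed, randomly
# remove a ratio k of the non-explanation edges, and average |Δoutput|.
# `toy_model` is an invented stand-in scoring cycle-motif membership.

def toy_model(edges):
    cycle = {(0, 1), (1, 2), (2, 0)}
    hits = sum(e in cycle for e in edges)
    return 1.0 / (1.0 + math.exp(-(2.0 * hits - 3.0)))

def avg_output_change(full_edges, explanation, k=0.5, trials=200, seed=0):
    """Average |Δoutput| when a ratio k of non-explanation edges is dropped."""
    rng = random.Random(seed)
    expl = set(explanation)
    rest = [e for e in full_edges if e not in expl]
    base = toy_model(full_edges)
    n_drop = max(1, int(k * len(rest)))
    deltas = []
    for _ in range(trials):
        dropped = set(rng.sample(rest, n_drop))
        perturbed = [e for e in full_edges if e not in dropped]
        deltas.append(abs(toy_model(perturbed) - base))
    return sum(deltas) / trials

full = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)]
good = [(0, 1), (1, 2), (2, 0)]  # covers the label-relevant motif
bad = [(3, 4), (4, 5)]           # misses it
print(avg_output_change(full, good))  # ~0: output robust to perturbing the rest
print(avg_output_change(full, bad))   # > 0: perturbations hit the motif
```

A good explanation leaves only label-irrelevant edges outside itself, so perturbing them barely moves the output; a bad one leaves the motif exposed to perturbation, and the average change grows.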