How can we explain the node classification predictions of a graph neural network (GNN)? Distill n' Explain (DnX) is a novel GNN explainer that highlights the nodes that influenced a prediction: it first distills the GNN into a simpler surrogate and then explains that surrogate instead.
Abstract
Explaining node predictions in graph neural networks (GNNs) often boils down to finding graph substructures that preserve predictions. Finding these structures usually implies back-propagating through the GNN, bonding the complexity (e.g., number of layers) of the GNN to the cost of explaining it. This naturally begs the question: Can we break this bond by explaining a simpler surrogate GNN? To answer the question, we propose Distill n’ Explain (DnX). First, DnX learns a surrogate GNN via knowledge distillation. Then, DnX extracts node- or edge-level explanations by solving a simple convex program. We also propose FastDnX, a faster version of DnX that leverages the linear decomposition of our surrogate model. Experiments show that DnX and FastDnX often outperform state-of-the-art GNN explainers while being orders of magnitude faster. Additionally, we support our empirical findings with theoretical results linking the quality of the surrogate model (i.e., distillation error) to the faithfulness of explanations.
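Since the abstract compresses the whole pipeline into a few sentences, a minimal PyTorch sketch may help make it concrete. Everything below is an illustrative assumption, not the authors' released code: the function names (propagate, distill_surrogate, explain_node, fast_explain_node), the dense-adjacency setup, and the gradient-based mask in explain_node (the paper instead solves a simple convex program) are placeholders that only mirror the structure described above, namely distilling the GNN into a linear SGC-style surrogate and then extracting node-level explanations from that surrogate.

import torch
import torch.nn.functional as F

def propagate(adj, x, hops):
    # Symmetrically normalized K-hop feature propagation, S^K X (as in SGC).
    a = adj + torch.eye(adj.size(0))            # add self-loops
    d = a.sum(dim=1).pow(-0.5)
    s = d.unsqueeze(1) * a * d.unsqueeze(0)     # D^{-1/2} (A + I) D^{-1/2}
    for _ in range(hops):
        x = s @ x
    return x

def distill_surrogate(adj, feats, gnn_logits, hops=2, epochs=200, lr=0.01):
    # Step 1: fit a linear surrogate Theta so that S^K X Theta mimics the
    # predictions of the GNN being explained (knowledge distillation).
    z = propagate(adj, feats, hops)
    theta = torch.zeros(feats.size(1), gnn_logits.size(1), requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    target = gnn_logits.softmax(dim=1)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.kl_div((z @ theta).log_softmax(dim=1), target,
                        reduction="batchmean")
        loss.backward()
        opt.step()
    return theta.detach()

def explain_node(adj, feats, theta, node, hops=2, epochs=300, lr=0.1):
    # Step 2 (DnX-style): learn node-importance weights that preserve the
    # surrogate's prediction for `node`. A gradient-based approximation of
    # the convex program mentioned in the abstract.
    mask_logits = torch.zeros(adj.size(0), requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=lr)
    with torch.no_grad():
        target = (propagate(adj, feats, hops) @ theta)[node].argmax()
    for _ in range(epochs):
        opt.zero_grad()
        m = torch.sigmoid(mask_logits)          # importance weights in (0, 1)
        z = propagate(adj, m.unsqueeze(1) * feats, hops)
        loss = F.cross_entropy((z @ theta)[node].unsqueeze(0),
                               target.unsqueeze(0))
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits).detach()

def fast_explain_node(adj, feats, theta, node, hops=2):
    # Step 2 (FastDnX-style): because the surrogate is linear, the logit of
    # `node` decomposes into a sum of per-node contributions; score each
    # node by its contribution to the predicted class, with no optimization.
    a = adj + torch.eye(adj.size(0))
    d = a.sum(dim=1).pow(-0.5)
    s = d.unsqueeze(1) * a * d.unsqueeze(0)
    sk = torch.linalg.matrix_power(s, hops)     # S^K
    cls = (sk[node] @ feats @ theta).argmax()   # surrogate's predicted class
    contrib = sk[node].unsqueeze(1) * feats     # one weighted row per node
    return (contrib @ theta)[:, cls]            # per-node importance scores

# Usage (shapes only; `gnn_logits` come from the GNN being explained):
#   theta = distill_surrogate(adj, feats, gnn_logits)
#   scores = fast_explain_node(adj, feats, theta, node=0)

The fast_explain_node variant illustrates why FastDnX can skip optimization entirely: with a linear surrogate, each node's contribution to a logit is available in closed form, which is what makes it orders of magnitude faster.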
Materials
BibTeX
@inproceedings{2023-DnX,
  title = {Distill n' Explain: Explaining Graph Neural Networks Using Simple Surrogates},
  author = {Tamara Pereira and Erik Nascimento and Lucas Resck and Diego Mesquita and Amauri Souza},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year = {2023},
  url = {http://www.visualdslab.com/papers/DnX},
}