Interpretable Explanations for Probabilistic Inference in Markov Logic

Abstract

Markov Logic Networks (MLNs) represent relational knowledge using a combination of first-order logic and probabilistic models. In this paper, we develop an approach to explain the results of probabilistic inference in MLNs. Unlike approaches such as LIME and SHAP that explain black-box classifiers, explaining MLN inference is harder since the data is interconnected. We develop an explanation framework that computes importance weights for MLN formulas based on their influence on the marginal likelihood. However, it turns out that computing these importance weights exactly is a hard problem, and even approximate sampling methods are unreliable when the MLN is large, resulting in non-interpretable explanations. Therefore, we develop an approach in which we reduce the large MLN into simpler coalitions of formulas that approximately preserve relational dependencies and generate explanations based on these coalitions. We then weight explanations from different coalitions and combine them into a single explanation. Our experiments illustrate that our approach generates more interpretable explanations on several text processing problems compared to other state-of-the-art methods.

Publication Title

Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021
