Electronic Theses and Dissertations

Date

2021

Document Type

Dissertation

Degree Name

Doctor of Philosophy

Department

Computer Science

Committee Chair

Deepak Venugopal

Committee Member

Vasile Rus

Committee Member

Amy Cook

Committee Member

Xiaofei Zhang

Abstract

Explaining the results of Artificial Intelligence (AI) and Machine Learning (ML) algorithms is crucial given the rapid growth and potential applicability of these methods in critical domains such as healthcare, defense, and autonomous driving. While AI/ML approaches yield highly accurate results on many challenging tasks, including natural language understanding, visual recognition, and game playing, the principles underlying those results are not easily understood. As a result, trust in AI/ML methods remains significantly lacking in critical application domains. While there has been progress in explaining classifiers, two significant drawbacks remain. First, current explanation approaches assume that data instances are independent, which is problematic when the data is relational in nature, as it is in many real-world problems. Second, explanations that rely only on individual instances are less interpretable because they do not use relational information, which may be more intuitive for a human user to understand. In this dissertation, we develop explanations using Markov Logic Networks (MLNs), highly expressive statistical relational models that combine first-order logic with probabilistic graphical models. Because MLNs are symbolic models, it is possible to extract explanations from them that are human-interpretable. However, doing so is computationally hard for large MLNs, since attributing the influence of symbolic formulas to predictions requires probabilistic inference. In this dissertation, we develop a suite of fundamental techniques for i) explaining probabilistic inference in MLNs and ii) using MLNs as symbolic models for specifying relational dependencies that can be used in other explanation methods. This dissertation thus significantly advances the state of the art in explanations for relational models and helps improve transparency and trust in these models.
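As background for readers unfamiliar with MLNs, the following is a minimal, self-contained sketch (not code from the dissertation) of how an MLN scores a possible world: each weighted first-order formula is grounded over a small domain, and the world's unnormalized probability is exp(weight x number of satisfied groundings). The rule, weight, and domain below are illustrative assumptions, loosely based on the classic "smokers and friends" example.

```python
import math

# Toy MLN over a two-person domain {A, B} with a single weighted rule
# (weight 1.5, chosen arbitrarily for illustration):
#   Friends(x, y) AND Smokes(x) => Smokes(y)

def implies(a, b):
    # Material implication: a => b is equivalent to (not a) or b.
    return (not a) or b

def satisfied_groundings(world):
    """Count groundings of the rule satisfied in the given world."""
    people = ["A", "B"]
    count = 0
    for x in people:
        for y in people:
            count += implies(
                world["Friends"][(x, y)] and world["Smokes"][x],
                world["Smokes"][y],
            )
    return count

def unnormalized_prob(world, weight=1.5):
    """P(world) is proportional to exp(weight * #satisfied groundings)."""
    return math.exp(weight * satisfied_groundings(world))

# One possible world: A smokes, B does not; A and B are friends.
world = {
    "Smokes": {"A": True, "B": False},
    "Friends": {("A", "A"): False, ("A", "B"): True,
                ("B", "A"): True, ("B", "B"): False},
}
print(satisfied_groundings(world))   # 3 of 4 groundings satisfied
print(unnormalized_prob(world))      # exp(1.5 * 3)
```

Normalizing these scores over all possible worlds yields a probability distribution; the hardness of that normalization (and of inference generally) is what makes extracting formula-level explanations from large MLNs computationally challenging, as the abstract notes.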

Comments

Data is provided by the student.

Library Comment

Dissertation or thesis originally submitted to ProQuest
