Augmenting Deep Learning with Relational Knowledge from Markov Logic Networks

Abstract

Neuro-symbolic learning, where deep networks are combined with symbolic knowledge, can help regularize the model and control overfitting. In particular, for applications where data instances are not independent, domain knowledge can be used to specify relational dependencies that may be hard to infer purely from the data. Symbolic AI models such as Markov Logic Networks (MLNs), which are based on first-order logic, are designed to represent and reason with uncertain background knowledge. However, learning and inference algorithms in such models are known to be slow and inaccurate. In this paper, we develop a novel model that combines the best of both worlds, namely the scalable learning capabilities of DNNs and the symbolic knowledge specified in MLNs. To do this, we infer symmetries in the data based on the relational knowledge encoded in an MLN knowledge base and train a Convolutional Neural Network (CNN) to learn kernels that combine symmetrical variables. However, doing so forces us to split the relational data into independent instances for CNN training, which may result in a loss of relational dependencies, adding noise/uncertainty to the learned model. Therefore, instead of a single model, we learn a distribution over the model parameters. Our experiments illustrate that our model outperforms purely-MLN and purely-DNN based models in several different problem domains.
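
The abstract only sketches the approach, so the following is a minimal, hypothetical illustration (not the authors' implementation) of the two ideas it mentions: a CNN whose input channels group variables that the MLN knowledge base treats as symmetric, and a distribution over model parameters, here approximated with Monte Carlo dropout. All class names, shapes, and hyperparameters are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RelationalCNN(nn.Module):
    """Hypothetical sketch: one input channel per group of variables
    inferred to be symmetric from the MLN knowledge base."""

    def __init__(self, n_symmetry_groups, n_classes, p_drop=0.3):
        super().__init__()
        self.conv = nn.Conv1d(n_symmetry_groups, 16, kernel_size=3, padding=1)
        self.drop = nn.Dropout(p_drop)   # kept active at test time for MC sampling
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):                # x: (batch, groups, length)
        h = torch.relu(self.conv(x))
        h = self.drop(h).mean(dim=-1)    # pool over the sequence dimension
        return self.head(h)

def mc_predict(model, x, n_samples=20):
    """Average predictions over dropout samples to approximate a
    distribution over model parameters rather than a point estimate."""
    model.train()  # keep dropout active during prediction
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(0), probs.std(0)   # predictive mean and spread

if __name__ == "__main__":
    model = RelationalCNN(n_symmetry_groups=4, n_classes=3)
    x = torch.randn(8, 4, 32)            # 8 instances split from relational data
    mean, std = mc_predict(model, x)
    print(mean.shape, std.shape)         # torch.Size([8, 3]) torch.Size([8, 3])
```

In this sketch, the predictive spread across dropout samples stands in for the noise/uncertainty the abstract attributes to splitting the relational data into independent training instances.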

Publication Title

Proceedings - 2020 IEEE International Conference on Big Data, Big Data 2020
