Learning mixtures of MLNs
Abstract
Weight learning is a challenging problem in Markov Logic Networks (MLNs) due to the large size of the ground propositional probabilistic graphical model that underlies the first-order representation of MLNs. Though more sophisticated weight learning methods that use lifted inference have been proposed, such methods typically scale up only in the absence of evidence, that is, in generative weight learning. In discriminative learning, where the evidence typically destroys symmetries, existing approaches lack scalability. In this paper, we propose a novel, intuitive approach for learning MLNs discriminatively by utilizing approximate symmetries. Specifically, we reduce the size of the training database by clustering approximately symmetric atoms together and selecting a representative atom from each cluster. However, each choice of representatives from the clusters induces a different distribution, increasing the uncertainty in our learned model. To reduce this uncertainty, we learn a finite mixture model by stacking the different distributions, where the parameters of the model are learned using an EM approach. Our results on several benchmarks show that our approach is significantly more scalable and accurate than existing state-of-the-art MLN learning methods.
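The abstract describes a two-stage procedure: cluster approximately symmetric atoms and pick a representative per cluster, then fit mixture weights over the stacked distributions with EM. The sketch below illustrates that shape in Python; it is a minimal illustration under stated assumptions, not the paper's algorithm. The feature-vector representation of atoms, the use of k-means as the clustering step, and the `log_likelihoods` input (precomputed per-component log-likelihoods of the training data) are all assumptions for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_atoms(atom_features, n_clusters, seed=0):
    # Hypothetical stand-in: group approximately symmetric atoms by
    # clustering their feature vectors with k-means.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(atom_features)
    reps = []
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        # Representative atom: the cluster member closest to the centroid.
        dists = np.linalg.norm(atom_features[members] - km.cluster_centers_[k], axis=1)
        reps.append(members[np.argmin(dists)])
    return km.labels_, np.array(reps)

def em_mixture_weights(log_likelihoods, n_iters=100, tol=1e-6):
    # EM for the mixture weights pi of K fixed component distributions.
    # log_likelihoods[i, k] = log p_k(x_i), where component k corresponds to
    # one choice of cluster representatives (assumed precomputed).
    n, K = log_likelihoods.shape
    pi = np.full(K, 1.0 / K)
    prev_ll = -np.inf
    for _ in range(n_iters):
        # E-step: responsibilities r[i, k] proportional to pi[k] * p_k(x_i),
        # computed in log space for numerical stability.
        log_r = np.log(pi) + log_likelihoods
        log_norm = np.logaddexp.reduce(log_r, axis=1, keepdims=True)
        r = np.exp(log_r - log_norm)
        # M-step: mixture weights are the mean responsibilities.
        pi = r.mean(axis=0)
        ll = log_norm.sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return pi

# Toy usage with synthetic data (dimensions and counts are illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # 200 atoms, 5-dim features (assumed)
labels, reps = cluster_atoms(X, n_clusters=4)
fake_ll = rng.normal(size=(200, 3))  # stand-in per-component log-likelihoods
print(em_mixture_weights(fake_ll))
```

Fixing the component distributions and learning only the mixture weights keeps the EM updates closed-form; the paper's actual formulation of each component (one per representative choice) is not specified in the abstract, so the Gaussian-noise stand-in above is purely for demonstration.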
Publication Title
32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Recommended Citation
Islam, M., Sarkhel, S., & Venugopal, D. (2018). Learning mixtures of MLNs. 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, 6359–6366. Retrieved from https://digitalcommons.memphis.edu/facpubs/2928