Efficient weight learning in high-dimensional untied MLNs
Existing techniques for improving the scalability of weight learning in Markov Logic Networks (MLNs) are typically effective when the parameters of the MLN are tied, i.e., several ground formulas in the MLN share the same weight. However, to improve accuracy in real-world problems, we typically need to learn separate weights for different groundings of the MLN. In this paper, we present an approach to perform efficient weight learning in MLNs containing high-dimensional, untied formulas. The fundamental idea in our approach is to help the learning algorithm navigate the parameter search space more efficiently by a) tying together groundings of untied formulas that are likely to have similar weights, and b) setting good initial values for the parameters. To do this, we follow a hierarchical approach: we first learn the parameters that are to be tied using a non-relational learner, and then use a relational learner to learn the tied-parameter MLN, initialized with values derived from the parameters learned by the non-relational learner. We illustrate the promise of our approach on three real-world problems and show that it yields far more scalable and accurate results than existing state-of-the-art relational learning systems.
International Conference on Artificial Intelligence and Statistics, AISTATS 2018
Al Farabi, K., Sarkhel, S., & Venugopal, D. (2018). Efficient weight learning in high-dimensional untied MLNs. International Conference on Artificial Intelligence and Statistics, AISTATS 2018, 1637-1645. Retrieved from https://digitalcommons.memphis.edu/facpubs/2763
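The abstract's hierarchical recipe can be sketched in miniature. The sketch below is illustrative only (none of its names or details come from the paper, and it substitutes a toy 1-D k-means for whatever tying scheme the authors actually use): (1) a non-relational learner produces one weight per grounding, (2) groundings with similar learned weights are tied together, and (3) each tie-group's mean weight becomes the initial value handed to the relational (tied-parameter) learner.

```python
# Hedged sketch of the hierarchical tying idea from the abstract.
# All names here are illustrative; the per-grounding weights are faked
# with noisy draws around two latent centers, standing in for, e.g.,
# logistic-regression coefficients learned by a non-relational learner.
import random

random.seed(0)

# Step 1: per-grounding weights from a non-relational learner (simulated).
true_centers = [0.5, 3.0]
per_grounding = [random.gauss(random.choice(true_centers), 0.1)
                 for _ in range(200)]

# Step 2: tie groundings whose weights are similar (simple 1-D k-means).
def kmeans_1d(xs, k=2, iters=50):
    centers = sorted(random.sample(xs, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda j: abs(x - centers[j]))
            groups[j].append(x)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

centers, groups = kmeans_1d(per_grounding, k=2)

# Step 3: tie-group means serve as initial values for the relational,
# tied-parameter weight learner.
init_weights = sorted(centers)
print(init_weights)
```

Because the search space now has one parameter per tie-group rather than one per grounding, and each parameter starts near a plausible value, the relational learner has far less work to do.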