Efficient weight learning in high-dimensional untied MLNs

Abstract

Existing techniques for improving the scalability of weight learning in Markov Logic Networks (MLNs) are typically effective when the parameters of the MLN are tied, i.e., several ground formulas in the MLN share the same weight. However, to improve accuracy in real-world problems, we typically need to learn separate weights for different groundings of the MLN. In this paper, we present an approach to perform efficient weight learning in MLNs containing high-dimensional, untied formulas. The fundamental idea in our approach is to help the learning algorithm navigate the parameter search space more efficiently by a) tying together groundings of untied formulas that are likely to have similar weights, and b) setting good initial values for the parameters. To do this, we follow a hierarchical approach, where we first learn the parameters that are to be tied using a non-relational learner. We then use a relational learner to learn the tied-parameter MLN, with initial values derived from the parameters learned by the non-relational learner. We illustrate the promise of our approach on three different real-world problems and show that it is significantly more scalable and accurate than existing state-of-the-art relational learning systems.
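
The sketch below is a minimal illustration of the two ideas named in the abstract, not the authors' implementation: it assumes each grounding of an untied formula comes with a feature vector and a binary label, groups groundings into tied-weight clusters, and derives an initial weight per cluster from a simple non-relational learner (logistic regression here). The function name `tie_and_initialize` and the downstream relational-learning step are hypothetical.

```python
# Illustrative sketch (assumptions noted above): tie groundings via clustering
# and initialize the tied weights from a non-relational model's predictions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression


def tie_and_initialize(grounding_features, grounding_labels, n_clusters=10, seed=0):
    """grounding_features: (n_groundings, d) array of per-grounding features.
    grounding_labels: (n_groundings,) binary labels for the non-relational learner.
    Returns (cluster_ids, initial_weights): one tied-weight group id per grounding
    and one initial weight per group."""
    # Step 1: tie together groundings that are likely to need similar weights.
    clustering = KMeans(n_clusters=n_clusters, random_state=seed).fit(grounding_features)
    cluster_ids = clustering.labels_

    # Step 2: fit a fast non-relational model on the same per-grounding features.
    nonrel = LogisticRegression(max_iter=1000).fit(grounding_features, grounding_labels)

    # Step 3: initialize each tied weight from the non-relational model's
    # average log-odds score over the groundings assigned to that cluster.
    scores = nonrel.decision_function(grounding_features)
    initial_weights = np.array([scores[cluster_ids == c].mean() for c in range(n_clusters)])
    return cluster_ids, initial_weights


# A relational learner would then train the tied-parameter MLN starting from
# `initial_weights` (hypothetical downstream step, outside this sketch).
```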

Publication Title

International Conference on Artificial Intelligence and Statistics, AISTATS 2018
