Efficient inference for untied MLNs
We address the problem of scaling up local-search and sampling-based inference in Markov logic networks (MLNs) that have large shared substructures but no (or few) tied weights. Such untied MLNs are ubiquitous in practical applications, yet they exhibit very few symmetries, and as a result lifted inference algorithms, the dominant approach for scaling up inference, perform poorly on them. The key idea in our approach is to reduce the hard, time-consuming sub-task in sampling algorithms, namely computing the sum of weights of features satisfied by a full assignment, to the problem of computing a set of partition functions of graphical models, each defined over the logical variables of a first-order formula. The importance of this reduction is that, when the treewidth of all these graphical models is small, it yields an order-of-magnitude speedup. When the treewidth is large, we propose an over-symmetric approximation and experimentally demonstrate that it is both fast and accurate.
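To illustrate the flavor of the reduction, here is a minimal sketch (not the paper's algorithm) for a single clause S(x) ∨ T(y) over a shared domain. The predicate names, domain size, and truth values are hypothetical. Counting the groundings that a full assignment satisfies can be done by brute-force enumeration over all (x, y) pairs, or via a partition function over the logical variables: the unsatisfied-grounding count factorizes into a product of per-variable sums because the corresponding graphical model is disconnected, turning an O(n²) enumeration into an O(n) computation.

```python
import random

# Hypothetical setup: a domain of n constants and truth values for the
# unary predicates S and T under some full assignment.
n = 200
random.seed(0)
S = [random.random() < 0.5 for _ in range(n)]
T = [random.random() < 0.5 for _ in range(n)]

def brute_force_count():
    # Enumerate all n^2 groundings of the clause S(x) v T(y).
    return sum(1 for x in range(n) for y in range(n) if S[x] or T[y])

def factored_count():
    # A grounding is UNsatisfied iff S(x) is false and T(y) is false.
    # The number of unsatisfied groundings is the partition function of a
    # graphical model over the logical variables x and y; here that model
    # is disconnected, so Z factorizes into a product of per-variable sums.
    z_unsat = sum(not s for s in S) * sum(not t for t in T)
    return n * n - z_unsat

assert brute_force_count() == factored_count()
```

For clauses whose logical variables interact, the factorization is no longer a simple product, but when the induced graphical model has small treewidth the partition function can still be computed efficiently by variable elimination, which is the source of the speedup the abstract describes.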
IJCAI International Joint Conference on Artificial Intelligence
Sarkhel, S., Venugopal, D., Ruozzi, N., & Gogate, V. (2017). Efficient inference for untied MLNs. IJCAI International Joint Conference on Artificial Intelligence, 4617-4624. https://doi.org/10.24963/ijcai.2017/644