On lifting the Gibbs sampling algorithm
Abstract
First-order probabilistic models combine the power of first-order logic, the de facto tool for handling relational structure, with probabilistic graphical models, the de facto tool for handling uncertainty. Lifted probabilistic inference algorithms for these models have been the subject of much recent research. The main idea in these algorithms is to improve the accuracy and scalability of existing graphical models' inference algorithms by exploiting symmetry in the first-order representation. In this paper, we consider blocked Gibbs sampling, an advanced MCMC scheme, and lift it to the first-order level. We propose to achieve this by partitioning the first-order atoms in the model into a set of disjoint clusters such that exact lifted inference is polynomial in each cluster given an assignment to all other atoms not in the cluster. We propose an approach for constructing the clusters and show how it can be used to trade off accuracy against computational complexity in a principled manner. Our experimental evaluation shows that lifted Gibbs sampling is superior to the propositional algorithm in terms of accuracy, scalability, and convergence.
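As context for the blocked scheme the abstract describes, the following is a minimal sketch of a generic (propositional) blocked Gibbs sampler. All names here (blocked_gibbs, joint_prob, the toy model) are hypothetical illustrations, not the paper's implementation; in the paper's lifted algorithm, each cluster's conditional is computed by exact lifted inference over first-order atoms in polynomial time, rather than by the brute-force enumeration used below.

```python
import random
from itertools import product

def blocked_gibbs(clusters, joint_prob, init_state, num_iters):
    """Blocked Gibbs sampling over a disjoint partition of binary variables.

    clusters   : list of lists of variable names (a disjoint partition)
    joint_prob : maps a full assignment (dict) to an unnormalized probability
    init_state : dict giving an initial {0, 1} assignment to every variable
    num_iters  : number of sweeps over all clusters
    """
    state = dict(init_state)
    samples = []
    for _ in range(num_iters):
        for cluster in clusters:
            # Score every joint assignment to this cluster, holding all
            # variables outside the cluster fixed at their current values.
            assignments = list(product([0, 1], repeat=len(cluster)))
            weights = []
            for values in assignments:
                state.update(zip(cluster, values))
                weights.append(joint_prob(state))
            # Sample the whole cluster jointly from its conditional.
            chosen = random.choices(assignments, weights=weights)[0]
            state.update(zip(cluster, chosen))
        samples.append(dict(state))
    return samples

if __name__ == "__main__":
    # Toy model over four binary atoms: atoms inside a cluster prefer to
    # agree (a crude stand-in for a symmetric first-order model).
    def joint_prob(s):
        score = (s["A"] == s["B"]) + (s["C"] == s["D"])
        return 2.0 ** score

    clusters = [["A", "B"], ["C", "D"]]
    init = {"A": 0, "B": 0, "C": 0, "D": 0}
    samples = blocked_gibbs(clusters, joint_prob, init, num_iters=1000)
    print(sum(s["A"] == s["B"] for s in samples) / len(samples))
```

Sampling a cluster jointly, rather than one variable at a time, is what distinguishes blocked Gibbs from the plain sampler; the enumeration above is exponential in cluster size, which is precisely the cost the paper's cluster construction keeps polynomial via lifting.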
Publication Title
Advances in Neural Information Processing Systems
Recommended Citation
Venugopal, D., & Gogate, V. (2012). On lifting the Gibbs sampling algorithm. Advances in Neural Information Processing Systems, 25, 1655-1663. Retrieved from https://digitalcommons.memphis.edu/facpubs/3033