Adaptive blocked Gibbs sampling for inference in probabilistic graphical models

Abstract

Inference is a central problem in probabilistic graphical models and is often the main sub-step in probabilistic learning procedures. Thus, accurate inference algorithms are essential both for answering queries on a learned model and for learning a robust model. Gibbs sampling is arguably one of the most popular approximate inference methods, widely used for probabilistic inference in several domains including natural language processing and computer vision. Here, we develop an approach that improves the performance of blocked Gibbs sampling, an advanced variant of the Gibbs sampling algorithm. Specifically, we utilize correlation among variables in the probabilistic graphical model to develop an adaptive blocked Gibbs sampler that automatically tunes its proposal distribution based on statistics derived from previous samples. In particular, we adapt the proposal such that blocks containing highly correlated variables are sampled more often than others. This in turn improves the probability estimates given by the sampler, since hard-to-sample variables are selected more often during the sampling procedure. Further, since adaptation breaks the Markov property of the sampler, we develop a method that guarantees convergence to the correct stationary distribution despite the sampler being non-Markovian, by diminishing the adaptation of the selection probabilities over time. We evaluate our method on several discrete probabilistic graphical models taken from UAI challenge problems spanning different domains, and show that our approach is more accurate than methods that ignore correlation information in the proposal distribution of the sampler.
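
The adaptive mechanism described in the abstract can be illustrated in a few lines. Below is a minimal sketch in Python, not the authors' implementation: it assumes a generic sample_block(state, block) routine that performs an ordinary blocked Gibbs update, a user-supplied partition of variables into blocks, and a simple mean-absolute-correlation score; all names and the 1/t adaptation schedule are illustrative assumptions.

import numpy as np

def adaptive_blocked_gibbs(sample_block, blocks, init_state, n_iters, seed=None):
    """Blocked Gibbs sampling with adaptive, diminishing block selection.

    sample_block(state, block) -> new state with the given block resampled
    blocks: list of lists of variable indices (a partition of the variables)
    """
    rng = np.random.default_rng(seed)
    n_blocks = len(blocks)
    probs = np.full(n_blocks, 1.0 / n_blocks)  # block-selection proposal
    state = np.asarray(init_state, dtype=float).copy()
    samples = [state.copy()]
    for t in range(1, n_iters + 1):
        k = rng.choice(n_blocks, p=probs)       # choose a block to resample
        state = sample_block(state, blocks[k])  # ordinary blocked Gibbs step
        samples.append(np.asarray(state, dtype=float).copy())
        if len(samples) > 10:  # need enough draws to estimate correlations
            hist = np.asarray(samples)
            # Score each block by the mean absolute pairwise correlation of
            # its variables, estimated from the samples drawn so far.
            corr = np.nan_to_num(np.abs(np.corrcoef(hist, rowvar=False)))
            scores = np.array([corr[np.ix_(b, b)].mean() for b in blocks]) + 1e-9
            target = scores / scores.sum()
            # Diminishing adaptation: the change to the selection
            # probabilities shrinks as O(1/t), so the sampler is
            # asymptotically Markovian and retains the correct
            # stationary distribution.
            gamma = 1.0 / t
            probs = (1.0 - gamma) * probs + gamma * target
    return np.asarray(samples)

Mixing the current selection probabilities with the correlation-based target (rather than replacing them outright) keeps every block's selection probability positive, and the 1/t decay of the mixing weight is one standard way to satisfy the diminishing-adaptation condition mentioned in the abstract.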

Publication Title

Proceedings of the International Joint Conference on Neural Networks
