Learning Belief Networks in the Presence of Missing Values and Hidden Variables

N. Friedman

To appear in the Fourteenth International Conference on Machine Learning (ICML '97).



In recent years there has been a flurry of work on learning probabilistic belief networks. Current state-of-the-art methods have been shown to be successful in two learning scenarios: learning both network structure and parameters from complete data, and learning parameters for a fixed network from incomplete data, that is, in the presence of missing values or hidden variables. However, no method has yet been demonstrated to effectively learn network structure from incomplete data.

In this paper, we propose a new method for learning network structure from incomplete data. This method is based on an extension of the Expectation-Maximization (EM) algorithm to model selection problems that searches for the best structure inside the EM procedure. We prove the convergence of this algorithm, and adapt it for learning belief networks. We then describe how to learn networks in two scenarios: when the data contain missing values, and in the presence of hidden variables. We provide experimental results that show the effectiveness of our procedure in both scenarios.
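The idea of searching for structure inside the EM procedure can be sketched as follows. This is a minimal toy illustration, not the paper's exact algorithm: it assumes just two binary variables, two candidate structures ("independent" vs. "x->y"), a small hand-made data set with `None` marking missing values, and a BIC-style penalized expected log-likelihood as the structure score. In each iteration the E-step computes expected counts under the current model, and the M-step scores every candidate structure against those same expected counts.

```python
import math

# Toy data set over two binary variables; None marks a missing value.
# (Illustrative assumption, not data from the paper.)
DATA = [(1, 1), (1, 1), (0, 0), (0, None), (1, None), (0, 0), (1, 1), (None, 0)]

def joint(structure, params, x, y):
    """P(x, y) under the given candidate structure and parameters."""
    px = params["px"] if x == 1 else 1 - params["px"]
    if structure == "independent":
        py = params["py"] if y == 1 else 1 - params["py"]
    else:  # "x->y": Y's distribution depends on X
        p = params["py|x"][x]
        py = p if y == 1 else 1 - p
    return px * py

def expected_counts(structure, params):
    """E-step: distribute each record over its possible completions."""
    counts = {(x, y): 0.0 for x in (0, 1) for y in (0, 1)}
    for ox, oy in DATA:
        cands = [(x, y) for x in (0, 1) for y in (0, 1)
                 if (ox is None or x == ox) and (oy is None or y == oy)]
        z = sum(joint(structure, params, x, y) for x, y in cands)
        for x, y in cands:
            counts[(x, y)] += joint(structure, params, x, y) / z
    return counts

def mle(structure, counts):
    """Maximum-likelihood parameters from expected counts."""
    n = sum(counts.values())
    params = {"px": (counts[(1, 0)] + counts[(1, 1)]) / n}
    if structure == "independent":
        params["py"] = (counts[(0, 1)] + counts[(1, 1)]) / n
    else:
        params["py|x"] = {
            0: counts[(0, 1)] / (counts[(0, 0)] + counts[(0, 1)]),
            1: counts[(1, 1)] / (counts[(1, 0)] + counts[(1, 1)]),
        }
    return params

def expected_score(structure, params, counts):
    """Expected log-likelihood of the completed data minus a BIC penalty."""
    ll = sum(c * math.log(joint(structure, params, x, y))
             for (x, y), c in counts.items() if c > 0)
    n_params = 2 if structure == "independent" else 3
    return ll - 0.5 * n_params * math.log(len(DATA))

def structural_em(iterations=10):
    structure, params = "independent", {"px": 0.5, "py": 0.5}
    for _ in range(iterations):
        counts = expected_counts(structure, params)  # E-step
        # M-step: score every candidate structure against the SAME
        # expected counts; keep the best structure and its parameters.
        scored = [(expected_score(s, mle(s, counts), counts), s)
                  for s in ("independent", "x->y")]
        _, structure = max(scored)
        params = mle(structure, counts)
    return structure, params
```

On this toy data the observed records are strongly correlated, so the search should settle on the dependent structure "x->y" despite its extra parameter costing more under the BIC penalty.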
