
Daphne Koller Publications

Learning Module Networks (2005)

by E. Segal, D. Pe'er, A. Regev, D. Koller, and N. Friedman
[older version, 2003]

Abstract: Methods for learning Bayesian networks can discover dependency structure between observed variables. Although these methods are useful in many applications, they run into computational and statistical problems in domains that involve a large number of variables. In this paper, we consider a solution that is applicable when many variables have similar behavior. We introduce a new class of models, module networks, that explicitly partition the variables into modules, so that the variables in each module share the same parents in the network and the same conditional probability distribution. We define the semantics of module networks, and describe an algorithm that learns the module composition and dependency structure from data. Evaluation on real data in the domains of gene expression and the stock market shows that module networks generalize better than Bayesian networks, and that the learned module network structure reveals regularities that are obscured in learned Bayesian networks.
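The core structural idea, that all variables assigned to a module share one parent set and one conditional probability distribution, can be illustrated with a small sketch. This is a hypothetical toy implementation, not the paper's code; the class names, the binary-variable restriction, and the toy regulator/gene example are assumptions made for illustration.

```python
# Hypothetical sketch of a module network: each variable is assigned to a
# module, and every variable in a module shares the same parents and the
# same conditional probability distribution (CPD).
import random

class Module:
    def __init__(self, parents, cpd):
        self.parents = parents  # names of the module's parent variables
        self.cpd = cpd          # maps a tuple of parent values -> P(var = 1)

def forward_sample(order):
    """Sample all variables once, given (variable, module) pairs listed in
    a topological order of the module dependency graph."""
    values = {}
    for var, module in order:
        parent_vals = tuple(values[p] for p in module.parents)
        values[var] = 1 if random.random() < module.cpd[parent_vals] else 0
    return values

# Toy example (assumed): one root module for a regulator, and one child
# module shared by two genes, so both genes reuse the same parents and CPD.
root = Module(parents=[], cpd={(): 0.5})
child = Module(parents=["reg"], cpd={(0,): 0.1, (1,): 0.9})
order = [("reg", root), ("g1", child), ("g2", child)]
print(forward_sample(order))
```

Because `g1` and `g2` point at the same `Module` object, the model has far fewer free parameters than a Bayesian network that gives each variable its own CPD, which is the statistical advantage the abstract describes.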

Download Information

E. Segal, D. Pe'er, A. Regev, D. Koller, and N. Friedman (2005). "Learning Module Networks." Journal of Machine Learning Research, 6, 557-588. pdf

Bibtex citation

@article{segal2005learning,
  title = {Learning Module Networks},
  author = {E. Segal and D. Pe'er and A. Regev and D. Koller and N. Friedman},
  journal = {Journal of Machine Learning Research},
  year = 2005,
  volume = 6,
  month = {April},
  pages = {557--588},
}