Two of the most important threads of work in knowledge representation today are frame-based representation systems (FRSs) and Bayesian networks (BNs). FRSs provide an excellent representation for the organizational structure of large, complex domains, but their applicability is limited by their inability to deal with uncertainty and noise. BNs provide an intuitive and coherent probabilistic representation of our uncertainty, but are very limited in their ability to handle complex structured domains. In this paper, we provide a language that cleanly integrates these approaches, preserving the advantages of both. Our approach allows us to provide natural and compact definitions of probability models for a class, in a way that is local to the class frame. These models can be instantiated for any set of interconnected instances, resulting in a coherent probability distribution over the instance properties. Our language also allows us to represent important types of uncertainty that cannot be accommodated within the framework of traditional BNs: uncertainty over the set of entities present in our model, and uncertainty about the relationships between these entities. We provide an inference algorithm for our language via a reduction to inference in standard Bayesian networks. We describe an implemented system that allows knowledge bases built in most of the main frame systems in existence today to be annotated with probabilistic information, and that uses this information to answer probabilistic queries.
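To make the class-level modeling concrete, the sketch below illustrates the general idea in Python under simplifying assumptions; the names (ClassFrame, Instance, joint_probability, the "takes.difficulty" parent notation) are our own illustration, not the paper's actual syntax. Each class frame carries a local conditional probability model for its slots, possibly conditioned on slots reached through relations to other instances, and a set of interconnected instances is unrolled into a single joint distribution by multiplying the per-slot conditional probabilities.

```python
# Illustrative sketch only: class frames annotated with local probability
# models, instantiated over interconnected instances to yield a joint
# distribution. Names and structure are hypothetical, not the paper's syntax.

class ClassFrame:
    def __init__(self, name):
        self.name = name
        self.slots = {}   # slot name -> list of possible values
        self.cpds = {}    # slot name -> (parent slot paths, CPD table)

    def add_slot(self, slot, values):
        self.slots[slot] = values

    def add_cpd(self, slot, parents, table):
        # table maps a tuple of parent values -> {slot value: probability}
        self.cpds[slot] = (parents, table)


class Instance:
    def __init__(self, frame, name):
        self.frame, self.name = frame, name
        self.relations = {}   # relation slot -> another Instance


def joint_probability(instances, assignment):
    """Probability of a full assignment {(instance name, slot): value},
    computed as the product of each slot's local conditional probability."""
    p = 1.0
    for inst in instances:
        for slot, (parents, table) in inst.frame.cpds.items():
            parent_vals = []
            for parent in parents:
                if "." in parent:                  # slot of a related instance
                    rel, pslot = parent.split(".")
                    parent_vals.append(assignment[(inst.relations[rel].name, pslot)])
                else:                              # slot of this instance
                    parent_vals.append(assignment[(inst.name, parent)])
            p *= table[tuple(parent_vals)][assignment[(inst.name, slot)]]
    return p


# Example: a Student frame whose 'grade' depends on the related Course's 'difficulty'.
course = ClassFrame("Course")
course.add_slot("difficulty", ["easy", "hard"])
course.add_cpd("difficulty", [], {(): {"easy": 0.6, "hard": 0.4}})

student = ClassFrame("Student")
student.add_slot("grade", ["high", "low"])
student.add_cpd("grade", ["takes.difficulty"],
                {("easy",): {"high": 0.8, "low": 0.2},
                 ("hard",): {"high": 0.4, "low": 0.6}})

cs101 = Instance(course, "cs101")
alice = Instance(student, "alice")
alice.relations["takes"] = cs101

# Joint probability of one full assignment over the instantiated model.
print(joint_probability([cs101, alice],
                        {("cs101", "difficulty"): "hard",
                         ("alice", "grade"): "high"}))
```

In this view, unrolling the instances and their local models is what produces the equivalent standard Bayesian network over which inference can then be performed.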