Consider an agent (or expert system) with a knowledge base KB that includes statistical information (such as "90\% of patients with jaundice have hepatitis"), first-order information ("all patients with hepatitis have jaundice"), and default information ("patients with jaundice typically have a fever"). A doctor with such a KB may want to assign a degree of belief to an assertion $\varphi$ such as "Eric has hepatitis". Since the actions the doctor takes may depend crucially on this degree of belief, we would like to specify a mechanism by which she can use her knowledge base to assign a degree of belief to $\varphi$ in a principled manner. We have been investigating a number of techniques for doing so; in this paper we give an overview of one of them. The method, which we call the random-worlds method, is a natural one: for any given domain size $N$, we consider the fraction of models satisfying $\varphi$ among models of size $N$ satisfying KB. If we do not know the domain size $N$, but know that it is large, we can approximate the degree of belief in $\varphi$ given KB by taking the limit of this fraction as $N$ goes to infinity. As we show, this approach has many desirable features. In particular, in many cases that arise in practice, the answers we get using this method provably match heuristic assumptions made in many standard AI systems.
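As a toy illustration (a sketch of our own, not an implementation from this paper), the fraction-of-models computation can be brute-forced for a tiny vocabulary: two unary predicates Hep and Jaun, a KB asserting "all patients with hepatitis have jaundice" together with "Eric has jaundice", and the query Hep(eric). The names and encoding below are purely illustrative assumptions.

```python
# Brute-force random-worlds sketch (hypothetical toy example).
# Vocabulary: unary predicates Hep, Jaun over a domain of size n.
# KB: forall x. Hep(x) -> Jaun(x), and Jaun(eric); query: Hep(eric).
from itertools import product

def degree_of_belief(n):
    """Fraction of size-n models of KB that also satisfy Hep(eric)."""
    # A world assigns each domain element a pair (hep, jaun);
    # we fix "eric" to be element 0.
    states = list(product([False, True], repeat=2))
    kb_models = 0
    query_models = 0
    for world in product(states, repeat=n):
        # Check KB: forall x. Hep(x) -> Jaun(x), and Jaun(eric).
        if all((not hep) or jaun for hep, jaun in world) and world[0][1]:
            kb_models += 1
            if world[0][0]:  # Hep(eric) holds in this model
                query_models += 1
    return query_models / kb_models

for n in (1, 2, 3, 4):
    print(n, degree_of_belief(n))
```

In this particular toy KB the fraction is the same for every domain size (each element independently has one jaundiced-with-hepatitis state out of the two states compatible with jaundice), so the limit, and hence the degree of belief assigned to Hep(eric), is 1/2; in general the fraction varies with $N$ and only the limit is taken as the degree of belief.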