The clique tree algorithm is the standard method for performing inference in Bayesian networks. It works by manipulating clique potentials --- distributions over the variables in a clique. While this approach works well for many networks, it is limited by the need to maintain an exact representation of the clique potentials. This limitation makes the clique tree algorithm impractical for certain large discrete networks and inapplicable to most continuous and hybrid networks. This paper presents a new unified approach that combines approximate inference with the clique tree algorithm, thereby circumventing this limitation. Many known approximate inference algorithms can be viewed as instances of this approach: likelihood weighting, most approximate inference algorithms for dynamic Bayesian networks, inference for Conditional Gaussian networks, and more. The algorithm essentially performs clique tree propagation, using approximate inference to estimate the density in each clique. In many settings, the approximate clique potential can be computed easily using statistical importance sampling. Unlike exact clique tree propagation, a single pass may not yield sufficiently accurate results. The algorithm addresses this problem with an iterative scheme: it estimates the error at each clique and re-evaluates the potentials of those cliques where the error is large. The resulting algorithm combines the best features of exact and approximate inference: like approximate inference algorithms, it can handle complex domains, including hybrid networks; like the clique tree algorithm, it exploits the locality structure of the Bayes net to reduce the dimensionality of the probability densities involved in the computation.
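The abstract mentions that the approximate clique potential can often be computed by statistical importance sampling. The paper's own estimator is not given here, so the following is only a minimal Python sketch of that idea for a single discrete clique: samples drawn from a proposal distribution are reweighted by the ratio of the (unnormalized) target potential to the proposal probability, and the accumulated weights are normalized into an estimated potential table. All function names and the demo distribution are hypothetical.

```python
import random
from collections import defaultdict

def importance_sample_potential(target_unnorm, proposal_sample, proposal_prob,
                                n=10000, seed=0):
    """Estimate a discrete clique potential by importance sampling (sketch).

    target_unnorm(x)    -- unnormalized target potential at assignment x
    proposal_sample(rng) -- draw one assignment from the proposal q
    proposal_prob(x)    -- probability of assignment x under q
    Returns a normalized table {assignment: estimated probability}.
    """
    rng = random.Random(seed)
    weights = defaultdict(float)
    for _ in range(n):
        x = proposal_sample(rng)
        # Importance weight: target density over proposal density.
        weights[x] += target_unnorm(x) / proposal_prob(x)
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

# Hypothetical demo: target potential proportional to (1, 3, 6) over the
# values {0, 1, 2} of a tiny clique, sampled through a uniform proposal.
target = {0: 1.0, 1: 3.0, 2: 6.0}
est = importance_sample_potential(
    target_unnorm=lambda x: target[x],
    proposal_sample=lambda rng: rng.randrange(3),
    proposal_prob=lambda x: 1.0 / 3.0,
)
```

With enough samples, `est` approaches the normalized target (0.1, 0.3, 0.6); in the full algorithm, an error estimate at each clique would decide whether such a potential needs to be re-evaluated with more samples.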