In many domains, we are interested in unveiling the structure of the underlying distribution, e.g., whether one variable is a direct parent of another. Bayesian model selection attempts to find the MAP model and use its structure to answer these questions. However, when the amount of available data is modest, there may be many models with non-negligible posterior probability. Thus, we want to compute the Bayesian posterior of a feature, i.e., the total posterior probability of all models that contain it. In this paper, we propose a new approach for this task based on Markov chain Monte Carlo (MCMC) methods. Rather than running MCMC directly over network structures, however, we run MCMC over orderings of the network variables. This space is much smaller and more regular than the space of structures, and has a smoother posterior `landscape'. We begin by showing how to efficiently sum over the exponential number of networks consistent with a given ordering. This allows us to compute, for a given ordering, both the marginal probability of the data and the posterior probability of a feature. We then use these results to construct an MCMC scheme over variable orderings. We present empirical results on synthetic and real-life datasets, comparing our approach to full model averaging (when feasible), to MCMC over network structures, and to a non-Bayesian bootstrap approach.
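The ordering-based MCMC scheme described above can be illustrated with a minimal Metropolis sampler over permutations. This is only a sketch: `order_score` is a toy stand-in for the log marginal likelihood of an ordering (in the actual method, that quantity is computed by summing over each variable's candidate parent sets among its predecessors in the ordering); the proposal simply swaps two positions, which is symmetric.

```python
import math
import random

def order_score(order):
    # Toy stand-in for log P(D | ordering). The real score sums over
    # parent sets consistent with the ordering; here we just prefer
    # orderings close to the identity permutation (0, 1, ..., n-1).
    return -sum(abs(pos - var) for pos, var in enumerate(order))

def mcmc_over_orderings(n_vars, n_steps, seed=0):
    """Metropolis sampler over orderings (permutations) of n_vars variables."""
    rng = random.Random(seed)
    order = list(range(n_vars))
    rng.shuffle(order)  # random starting ordering
    samples = []
    for _ in range(n_steps):
        # Propose a new ordering by swapping two random positions.
        i, j = rng.sample(range(n_vars), 2)
        proposal = order[:]
        proposal[i], proposal[j] = proposal[j], proposal[i]
        # Metropolis acceptance test in log space (symmetric proposal).
        if math.log(rng.random()) < order_score(proposal) - order_score(order):
            order = proposal
        samples.append(tuple(order))
    return samples

samples = mcmc_over_orderings(n_vars=5, n_steps=2000)
```

Posterior feature probabilities would then be estimated by averaging, over the sampled orderings, the feature's posterior given each ordering; that per-ordering computation is what the efficient summation makes tractable.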