The problem of developing good policies for partially observable Markov decision processes (POMDPs) remains one of the most challenging areas of research in stochastic planning. One line of research in this area involves the use of reinforcement learning with belief states, probability distributions over the underlying model states. This is a promising method for small problems, but its application is limited by the intractability of computing or representing a full belief state for large problems. Recent work shows that, in many settings, we can maintain an approximate belief state, which is fairly close to the true belief state. In particular, great success has been shown with approximate belief states that marginalize out correlations between state variables. In this paper, we investigate two methods of full belief state reinforcement learning and one novel method for reinforcement learning using factored approximate belief states. We compare the performance of these algorithms on several well-known problems from the literature. Our results demonstrate the importance of approximate belief state representations for large problems.
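The factored approximation mentioned above replaces a joint belief over all state variables with the product of its per-variable marginals. The following minimal sketch, with hypothetical numbers and variable names chosen purely for illustration, shows how marginalizing a two-variable belief discards the correlation between the variables:

```python
from itertools import product

# Hypothetical joint belief over two binary state variables (x1, x2),
# deliberately correlated: (0,0) and (1,1) are much more likely.
joint = {
    (0, 0): 0.4,
    (0, 1): 0.1,
    (1, 0): 0.1,
    (1, 1): 0.4,
}

def marginal(joint, var):
    """Marginal distribution of variable index `var` from a joint belief."""
    m = {}
    for state, p in joint.items():
        m[state[var]] = m.get(state[var], 0.0) + p
    return m

# Factored approximation: keep only the per-variable marginals and
# represent the belief as their product, dropping all correlations.
m1, m2 = marginal(joint, 0), marginal(joint, 1)
approx = {(a, b): m1[a] * m2[b] for a, b in product(m1, m2)}

# The factored belief assigns 0.25 to (0,0), although the true joint
# belief assigns 0.4: the x1-x2 correlation has been marginalized out.
print(approx[(0, 0)])
```

The factored representation needs only the marginals (linear in the number of state variables) rather than the full joint (exponential), which is what makes it attractive for large problems.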