There is no shortage of texture models to choose from. Since no single representation has been shown to perform well across a wide range of textures, many ideas have been tried. Some have been created from scratch, while others have been borrowed from other disciplines. Models are often conceived by thinking of the image not as a set of pixels, but rather as a function I(x,y) defined over a continuous region of the plane. How the model is discretized and then implemented is no less important than the theory behind it.
In general, the purpose of a texture model is to provide a means of transforming a window of an image into a set of numbers, which in this paper will be called a feature vector. The feature vector can be thought of as a point in an n-dimensional feature space. The representation is ``good,'' then, if windows taken from the same texture sample form a tight cluster in feature space, and if windows taken from different texture samples lie far apart. An inherent problem worth noting is that as a window moves across a boundary from one texture to another, the resulting vectors do not travel in a straight line from one cluster to the other; in fact, they may cross a region of feature space belonging to an entirely different texture.
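The window-to-feature-vector idea can be illustrated with a small sketch. The three features used here (window mean, variance, and mean absolute horizontal difference) are a hypothetical choice for illustration only, not a model advocated in this paper; the two synthetic ``textures'' are likewise contrived so that the clusters separate cleanly.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_vector(window):
    # A toy 3-dimensional feature vector (illustrative choice):
    # mean intensity, intensity variance, and mean absolute
    # horizontal difference (a crude spatial-frequency measure).
    dx = np.abs(np.diff(window, axis=1))
    return np.array([window.mean(), window.var(), dx.mean()])

def smooth_texture(n=16):
    # Low-contrast texture: near-constant gray with mild noise.
    return 0.5 + 0.05 * rng.standard_normal((n, n))

def stripe_texture(n=16):
    # High-contrast texture: vertical stripes of alternating intensity.
    base = np.tile([0.0, 1.0], (n, n // 2))
    return base + 0.05 * rng.standard_normal((n, n))

# Map many windows of each texture into feature space.
a = np.array([feature_vector(smooth_texture()) for _ in range(20)])
b = np.array([feature_vector(stripe_texture()) for _ in range(20)])

# Windows from the same texture should cluster tightly, and the two
# cluster centers should be far apart relative to that spread.
within = max(np.linalg.norm(a - a.mean(0), axis=1).mean(),
             np.linalg.norm(b - b.mean(0), axis=1).mean())
between = np.linalg.norm(a.mean(0) - b.mean(0))
print(between / within)  # large ratio: well-separated clusters
```

On these contrived inputs the between-cluster distance dwarfs the within-cluster spread; on real textures the clusters are noisier, and the boundary-crossing problem described above means intermediate windows may fall in neither cluster.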
The models can be roughly divided into three main categories: image pyramids, which try to capture spatial frequencies at different scales; random fields, which assume that pixel values are generated by a two-dimensional stochastic process; and statistical methods, a ``catch-all'' phrase grouping together a number of older techniques that are less widely used than the first two.