Texture Segmentation on Mars

An Example

Mark A. Ruzon

This is an actual image taken on Sol 3 of the Mars Pathfinder mission:

[Image: Mars Pathfinder Sol 3 image, 248 rows x 256 columns]

The goal is to take such an image and decide what parts are "interesting" in a geologic sense. Obviously, the definition of "interesting" depends on which geologist you ask, which image you are looking at, and what sort of tasks are to be performed. Nonetheless, a first approximation has been made.


We proceed by sampling the image with overlapping square windows (15 x 15 pixels in this example, with an offset of 4 pixels between adjacent windows). The pixels within each window are modeled as a Gaussian Markov Random Field (GMRF), and a six-dimensional feature vector is extracted from each window, consisting of the mean, the variance, and four autocovariance values.
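For concreteness, the window sampling and feature extraction might look like the Python sketch below. The window size and offset are the ones given above; the particular four autocovariance lags (horizontal, vertical, and the two diagonals) are an assumption, since the exact GMRF neighbor set is not spelled out here.

    import numpy as np

    def window_features(win):
        """Six-dimensional feature vector for one window: mean, variance,
        and four autocovariance values.  The four lags used here
        (horizontal, vertical, and the two diagonals) are an assumption."""
        w = win.astype(float)
        mu, var = w.mean(), w.var()
        z = w - mu
        def autocov(dy, dx):
            # Average product of the window with itself shifted by (dy, dx).
            a = z[max(dy, 0):z.shape[0] + min(dy, 0), max(dx, 0):z.shape[1] + min(dx, 0)]
            b = z[max(-dy, 0):z.shape[0] + min(-dy, 0), max(-dx, 0):z.shape[1] + min(-dx, 0)]
            return (a * b).mean()
        lags = [(0, 1), (1, 0), (1, 1), (1, -1)]
        return np.array([mu, var] + [autocov(dy, dx) for dy, dx in lags])

    def extract_features(img, win=15, step=4):
        """Slide a win x win window over the image with the given offset and
        collect one feature vector (and its top-left corner) per position."""
        feats, corners = [], []
        for r in range(0, img.shape[0] - win + 1, step):
            for c in range(0, img.shape[1] - win + 1, step):
                feats.append(window_features(img[r:r + win, c:c + win]))
                corners.append((r, c))
        return np.array(feats), corners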

We assign each feature vector to one of five pre-chosen classes. Because the windows overlap, each pixel is covered by several windows, so a "voting" procedure over these overlapping window labels determines the final label assigned to each pixel. A sketch of these two steps follows; the resulting labeling constitutes a partition, shown below along with the map for each class:
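How a feature vector is assigned to one of the five pre-chosen classes is not specified above, so the nearest class mean is used purely as a stand-in in this sketch, and the majority vote over overlapping windows is one plausible reading of the "voting" procedure.

    import numpy as np

    def classify_windows(feats, class_means):
        """Assign each window's feature vector to the nearest of the five
        pre-chosen class means (Euclidean distance).  The actual classifier
        is not specified in the text; this is a stand-in."""
        d = ((feats[:, None, :] - class_means[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    def vote_labels(shape, corners, window_classes, win=15, n_classes=5):
        """Every window casts its class as a vote for each pixel it covers;
        each pixel takes the class with the most votes.  Border pixels not
        covered by any window default to class 0 here."""
        votes = np.zeros(shape + (n_classes,), dtype=np.int32)
        for (r, c), k in zip(corners, window_classes):
            votes[r:r + win, c:c + win, k] += 1
        return votes.argmax(axis=2)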

[Image: Final Segmentation]
[Image: "Sky/Flat"]
[Image: "Dust/Horizon"]
[Image: "Pebbles"]
[Image: "Dark Rock"]
[Image: "Light Rock"]

A few caveats are in order here. First, although the five classes have been given semantic labels, it should not be inferred that the algorithm can delineate objects, though happily this does happen from time to time. Second, the output of the algorithm on the boundary between two textures may not belong to either of the textures involved; for example, the horizon winds up in a class with dust, even though the textures on either side of it are rock and sky, and dust and the horizon are two very different things. Finally, the output can be made more aesthetically pleasing by decreasing the offset between windows, but only at the expense of increasing the computation time quadratically.


This segmentation is, in a sense, the ultimate goal because it provides a primitive representation that can be used in a number of different applications. It can be the first step in building a three-dimensional model of the terrain. It can be used with a rover's obstacle avoidance sensors to estimate the size and shape of objects rather than just their heights. It can be used to detect layers in outcrops, although preliminary results show that a different set of texture classes may be required for this purpose. It can also be used simply to exclude the sky from later calculations such as binocular disparity.

The demonstration chosen in this example is selective image compression. To maximize the amount of relevant science information in the images sent back, it would be helpful to compress different areas of the image at rates inversely proportional to their relative importance.

There are two avenues we can pursue here. The first is to make statements like "Pebbles are always uninteresting" or "Rocks are always interesting." If the classes we have chosen match up well with their semantic labels, this may be viable. The second is the novelty approach, which simply says, "The textures that appear least often in the image are the most interesting." Thus a dark, rock-like texture sitting in an image of pebble-like texture is important, as is a bright patch in an image dominated by dark rock.

In the algorithm, we recognize three general categories. The "background" category consists of the most numerous class (in terms of number of pixels) in the image, plus other classes in order of decreasing pixel count, until the total exceeds a pre-set threshold. The remaining classes belong to the "foreground" category. The third category is "sky," which is removed from all of the calculations mentioned. This is because the sky is very different from all other textures likely to be found on Mars, so the chance that a non-sky pixel is classified as sky is usually small.
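A sketch of this grouping follows, assuming the pre-set threshold is expressed as a fraction of the non-sky pixels; the actual form and value of the threshold are not given here.

    import numpy as np

    def split_categories(labels, sky_class, frac_threshold=0.5):
        """Group the texture classes into 'background' and 'foreground',
        ignoring sky.  Background classes are accumulated in order of
        decreasing pixel count until the running total exceeds the
        threshold; everything else is foreground."""
        non_sky = labels[labels != sky_class]
        classes, counts = np.unique(non_sky, return_counts=True)
        order = np.argsort(counts)[::-1]              # most numerous first
        background, total = [], 0
        for k, n in zip(classes[order], counts[order]):
            background.append(int(k))
            total += int(n)
            if total > frac_threshold * non_sky.size:
                break
        foreground = [int(k) for k in classes if int(k) not in background]
        return background, foreground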

The final images have been classified after dividing the image into 8 x 8 blocks, since this is the block size used by a standard JPEG compression algorithm. A sketch of the per-block assignment follows, and the resulting block maps are shown below it.
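Taking the majority category of the pixels inside each 8 x 8 block, as in this sketch, is an assumption; the exact per-block rule is not stated above.

    import numpy as np

    def block_categories(labels, sky_class, background_classes, block=8):
        """Assign each 8 x 8 block the category ('sky', 'background', or
        'foreground') of the majority of its pixels."""
        rows, cols = labels.shape
        out = np.empty((rows // block, cols // block), dtype='<U10')
        for i in range(rows // block):
            for j in range(cols // block):
                blk = labels[i * block:(i + 1) * block, j * block:(j + 1) * block]
                vals, counts = np.unique(blk, return_counts=True)
                k = int(vals[counts.argmax()])
                if k == sky_class:
                    out[i, j] = 'sky'
                elif k in background_classes:
                    out[i, j] = 'background'
                else:
                    out[i, j] = 'foreground'
        return out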

[Image: "Foreground" blocks]
[Image: "Background" blocks]
[Image: "Sky" blocks]

Note that not all of the sky ends up in the "Sky" category. This is due to the large window size and the choice of a conservative approach. The foreground areas have been dilated at the end to provide a little context, as well as to include any corners that may have been unnecessarily skipped. Also note that the terms "foreground" and "background" do not necessarily correspond to the common usage of those terms.
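The dilation can be done with a standard morphological operation on the foreground block mask. Growing by a single block and leaving sky blocks untouched are both assumptions in the sketch below.

    import numpy as np
    from scipy.ndimage import binary_dilation

    def dilate_foreground(categories, iterations=1):
        """Grow the foreground block mask to provide context and pick up
        corners that may have been skipped.  Sky blocks are left alone
        (an assumption); the growth spills only into background blocks."""
        fg = (categories == 'foreground')
        grown = binary_dilation(fg, iterations=iterations)
        out = categories.copy()
        out[grown & (categories == 'background')] = 'foreground'
        return out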


Given this partition, the sky can be compressed at 100:1 or more, while the background can be compressed at 12:1 or 24:1. The foreground can be sent back using lossless compression or JPEG compression at 6:1. Thus the number of bits returned to Earth is much smaller, while the essential information in the image is preserved.
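As a back-of-the-envelope check on the downlink savings, the following sketch uses made-up area fractions and an assumed 8 bits per pixel together with the compression ratios quoted above; with these particular numbers the overall ratio works out to roughly 17:1.

    rows, cols, bits_per_pixel = 248, 256, 8        # image size from above; bit depth assumed
    fractions = {'sky': 0.30, 'background': 0.50, 'foreground': 0.20}   # hypothetical area fractions
    ratios = {'sky': 100.0, 'background': 24.0, 'foreground': 6.0}      # compression ratios quoted above

    raw_bits = rows * cols * bits_per_pixel
    sent_bits = sum(fractions[c] * raw_bits / ratios[c] for c in fractions)
    print(f"raw: {raw_bits} bits, sent: {sent_bits:.0f} bits, "
          f"overall compression: {raw_bits / sent_bits:.1f}:1")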

In conclusion, texture segmentation is not going to do what a human geologist would do upon viewing an image, but there are applications where it can be quite useful. In addition, at 11 seconds on an SGI workstation, this particular algorithm is extremely fast compared with the published literature. It is estimated that such a process would take 3 minutes of real time while a Pathfinder-type probe went about its other duties, and this interval could be shortened somewhat if necessary. This example demonstrates a potentially significant improvement in both the quality and the amount of science image data returned.


The reference used for the GMRF model is:

Schwartz, O. and Quinn, A., "Fast and Accurate Texture Based Image Segmentation," Proceedings of the 1996 International Conference on Image Processing.


Page maintained by mark34@cs.stanford.edu