# Detection of Linear Layers

## Mark A. Ruzon

For the task of determining the geologic processes that occurred in a given area, there seems to be no more telling feature of a landscape than the existence of layers. The processes of wind, water, and heat all seem to leave their footprints behind in the form of layers. Automatic detection of layers, therefore, is of the utmost importance in trying to understand the geology of an area where humans are currently unable to go, such as Mars.

But how do we detect layers? It can't be from the content of the layer itself, for if we stand very close to an outcrop so that we see only one region, then the layer has somehow disappeared. Detection of layers is possible only when one sees the discontinuities, or boundaries, between two adjacent layers. The boundaries can be of two general types, which have been termed "linear" and "non-linear." The former is the subject of this document, and the latter shall be briefly discussed later.

Here is an obvious example of an image with linear layers:

[Image: an outcrop showing parallel bedding]

Linear layers are defined to be those layers whose boundaries are distinguishable by intensity edges. If we try to define the term without reference to boundaries, the best we can say is that it is a long, narrow, coherent, and traceable feature of an image. Examples in geology include parallel bedding (as in the above image), cross bedding, schists, and gneisses. Examples of non-linear layers include graded bedding, quartzite, and marble.

The problem is that all interesting images contain a wealth of intensity edges, so we cannot simply use the presence of edges to imply the presence of layers. Our more sophisticated model holds that linear layers exist in regions where there are many edges, the majority of which run in the same direction. To make this definition precise enough to use, we create an edge-based model of texture using the algorithm described below.

First, we run an edge detection algorithm (the Canny algorithm in this case) to detect areas of high intensity gradient and connect them into edges. We then take a small window of the image and compute a histogram of the orientations of all the 8-pixel line segments within that window. From the histogram we compute three numbers that characterize the texture:
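Assuming the edge-detection stage has already fitted short segments to the detected edges and returned their endpoints, the windowed histogram step might be sketched as follows (the Canny stage itself is omitted, and the eight-bucket count is an assumption for illustration):

```python
import math

def orientation_histogram(segments, n_buckets=8):
    """Bin edge-segment orientations into a histogram over [0, 180) degrees.

    `segments` is a list of (x0, y0, x1, y1) endpoints of the short
    line segments fitted to the edges inside one window. The endpoint
    representation is an assumption; the original fits 8-pixel segments.
    """
    hist = [0] * n_buckets
    width = 180.0 / n_buckets
    for x0, y0, x1, y1 in segments:
        # atan2 returns an angle in (-180, 180]; fold it into [0, 180)
        # because a segment has an orientation but no direction.
        angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180.0
        hist[min(int(angle / width), n_buckets - 1)] += 1
    return hist
```

With eight buckets, each bucket spans 22.5 degrees, matching the orientation ranges described below.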

• the dominant orientation, found by locating the highest pair of adjacent buckets in the histogram;
• the dominance, the percentage of all the segments in the window that fall into those two buckets;
• the edge point density, simply the sum of all the buckets in the histogram. It is an absolute measure, whereas the other two are relative.

These three parameters can be understood better by looking at their application to the above image.

This image shows the spatial locations of the areas of different dominant orientations. The black areas have no edges and therefore no dominant orientation. The darkest grey (which is most of the image) represents areas whose dominant orientation is between 0 and 22.5 degrees with respect to the positive x-axis. As the shades get brighter, the orientation moves counterclockwise in the plane. The white areas have an orientation between 157.5 degrees and 180 degrees.
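The mapping from orientation to grey level described above can be sketched as a lookup from the dominant-orientation bucket to a display value (the exact grey levels are an assumption for illustration; only the ordering, black for "no edges" through white for the last bucket, comes from the text):

```python
def orientation_to_grey(bucket, n_buckets=8):
    """Map a dominant-orientation bucket (0..n_buckets-1) to a grey
    level for display. None means no edges and maps to black."""
    if bucket is None:
        return 0
    # Spread the buckets over the non-black greys, brightest last,
    # so orientation increases as the shade gets brighter.
    return int((bucket + 1) * 255 / n_buckets)
```

Bucket 0 (0 to 22.5 degrees) gets the darkest non-black grey, and bucket 7 (157.5 to 180 degrees) gets white, matching the description of the image.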

This image measures the dominance of the dominant orientation in each window. The brighter pixels correspond to windows with high percentages of edges at the orientation specified by the previous image. As you can see, most areas of this image are very highly dominant.

The final image in the set shows the density of edge points in each window of the image. Note the upper right-hand corner: though the dominance (see the previous image) is very high, the density is much lower because only one long edge was detected there. That area will eventually be discarded as a candidate for layering.

The real question is how these three heterogeneous quantities (an angle, a percentage, and an absolute count) are combined into a yes or a no on the existence of layers in the image. We do this by relying on the fact that adjacent pixels often share the same dominant orientation label, which partitions the image into regions. For each region, we check whether a certain percentage of its pixels correspond to windows whose dominance is above a certain level. If so, the region is called a dominant region, and its pixels become candidates for being labeled as layers. We then check each pixel in the region to see whether its edge point density is above yet another threshold; if it is, the pixel is included. If the region is not found to be dominant, it is ignored entirely.
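The two-stage decision above can be sketched as follows. The region, dominance, and density inputs are assumed to come from the earlier steps, and all three threshold values are illustrative assumptions, not the values used by the original system:

```python
def label_layer_pixels(regions, dominance, density,
                       region_frac=0.75, dom_thresh=0.6, den_thresh=20):
    """Return the set of pixels labeled as layers.

    `regions` maps a region id to the list of pixel coordinates sharing
    a dominant-orientation label; `dominance` and `density` map a pixel
    to its window's dominance and edge point density.
    """
    layer_pixels = set()
    for pixels in regions.values():
        # Stage 1: the region qualifies only if enough of its pixels
        # sit in highly dominant windows.
        n_dominant = sum(1 for p in pixels if dominance[p] >= dom_thresh)
        if n_dominant / len(pixels) < region_frac:
            continue  # not a dominant region: ignored entirely
        # Stage 2: within a dominant region, keep only pixels whose
        # windows are dense enough in edge points.
        for p in pixels:
            if density[p] >= den_thresh:
                layer_pixels.add(p)
    return layer_pixels
```

This is what drops the sparse upper corner mentioned earlier: its region may pass the dominance test, but its pixels fail the density test and are excluded individually.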

The effect of this algorithm is to ensure that the regions we get back are generally coherent (neither fragmented nor riddled with small holes) and meet the criteria for our conception of what a linear layer is. After applying this algorithm to our example, we get the following output:

As you can see, the layers were found quite easily. This is, however, an image where the layers are very strong. How does the algorithm behave on images where the layers are not quite so clear, or altogether nonexistent? The output for a few images is shown below:

The results indicate that the algorithm does a good job of finding layers where they exist, and an equally good job of not finding layers where they don't.

Note the last image, however. Though layers were found only in certain places, the truth is that geologically, the entire image consists of different layers. The problem in detecting these layers is that the boundary between them is not marked by long, parallel edges. Instead, the differences between layers are textural and involve different particle sizes or other patterns. These are "non-linear" layers according to this document. Though semantically the two represent the same thing, the visual processes for finding them are completely different, and therefore non-linear layers require a completely different algorithm.

That algorithm would use texture segmentation to partition the image into regions of homogeneous texture, which would ideally correspond to the different layers. It is not currently known how well this could be done. Even if it could be done well, such an algorithm could never conclude on its own that an image does or does not contain layers, because more information is required. If the different textures lie at different distances from the camera, then they are not layers. Depth information would therefore be needed so that the system could know whether it was looking at an outcrop in the first place. Even our detector could be tricked into finding layers where there were none, given the right circumstances: a fundamental assumption is that a large number of mostly parallel lines will not occur in a small part of the image unless they correspond to layering.

In spite of this limitation, linear layers account for a good proportion of the layers that exist on Earth or on Mars, so we can consider the potential applications of our output. The most obvious is alerting a rover (or a lander able to pick up rocks and examine them closely) that layers have been found so that a more detailed analysis can be made. Another is using the extent of the layers found in an image as an "interest measure" for autonomously deciding whether the image is worth sending back.

Page maintained by mark34@cs.stanford.edu