
A Short History of
Color Edge Detection

Mark A. Ruzon


In 1977, Professor Ramakant Nevatia of USC published the first journal paper (we think) on color edge detection, in which he extended the Hueckel operator, developed four years earlier, to color images. Since then, at least 17 other journal papers (14 according to the USC Computer Vision Bibliography, plus 3 more not listed) and a large number of conference papers have been published.

Nearly all of them still try to "extend" a greyscale edge detector to color images. While it seems intuitive to start from the solution to an easier problem when attacking a more complex one, there is a drawback in this particular case. We perceive grey levels as ordered; the fact that "medium grey" and "white" can be averaged together to produce "medium-light grey" bothers no one. However, if you take a red region and a green region and try to approximate them with a single color, you run into problems immediately. Depending on how your color space arranges colors, the average could be yellow (in the spectrum), grey (CIE-Lab color space), yellowish-grey (RGB, HSV), or impossible (the opponent-colors theory). Even if we could agree on the "correct" color space, none of these colors is perceptually similar to either red or green. This is the justification for the compass operator's use of color signatures.
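To make the RGB case concrete, here is the averaging computation spelled out (a toy calculation, not part of any detector):

```python
# Average pure red and pure green, component-wise, in RGB.
# The result (127.5, 127.5, 0) is olive: a darkish, yellowish grey
# that resembles neither of the two input colors.
red = (255, 0, 0)
green = (0, 255, 0)
mean = tuple((a + b) / 2 for a, b in zip(red, green))
```

Running the same exercise in CIE-Lab or an opponent-color representation gives a different answer, which is exactly the point: the "average" of two colors is an artifact of the coordinate system.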

Most of the literature can be placed into three categories: output fusion methods, multi-dimensional gradient methods, and vector methods. Output fusion appears to be the most popular; the idea is to perform edge detection three times, once each for red, green, and blue (or whatever color space is being used), and then fuse the three outputs into one edge map, as shown by the following diagram:

[Figure: output fusion schematic]
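The pipeline in the diagram can be sketched in a few lines of Python. This is a minimal illustration assuming NumPy; the Sobel filters, the logical-OR fusion rule, and the threshold value are my illustrative choices, not those of any particular paper:

```python
import numpy as np

def sobel_magnitude(channel):
    """Gradient magnitude of a single band via 3x3 Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = channel.shape
    padded = np.pad(channel.astype(float), 1, mode="edge")
    mag = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            mag[i, j] = np.hypot(np.sum(win * kx), np.sum(win * ky))
    return mag

def output_fusion_edges(image, threshold=100.0):
    """Detect edges independently in each band, then OR the edge maps."""
    maps = [sobel_magnitude(image[..., c]) > threshold
            for c in range(image.shape[-1])]
    return np.logical_or.reduce(maps)

# Toy image: a red region on the left, a green region on the right.
img = np.zeros((8, 8, 3))
img[:, :4, 0] = 255  # red half
img[:, 4:, 1] = 255  # green half
edges = output_fusion_edges(img)  # True along the red/green boundary
```

Note that each band sees only part of the boundary; here both the red and the green channel happen to respond, but in general the fusion step must reconcile three edge maps that need not agree.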

Multi-dimensional gradient methods short-circuit the process somewhat by combining the three gradients into one and detecting edges only once:

[Figure: multi-dimensional gradient schematic]

We find the decomposition of a multi-band image into a number of single-band images unsatisfying. While it is true that humans perceive color in three dimensions, it is unlikely that edge detection or gradient computation takes place by projecting colors onto three separate axes.

There are two notable papers that treat a pixel's color as a vector throughout. One, by Yang and Tsai (1996), tries to find, for each 8x8 image block, the best axis in color space onto which to project the image data, creating a single-band image. The other, by Trahanias and Venetsanopoulos (1996), uses vector order statistics to compute a variety of statistical measures for edge detection. Neither has become popular.

The main benefit of an edge model that relies on color signatures and the Earth Mover's Distance is that it can be applied to any image range -- black-and-white, greyscale, color, or other multi-spectral ranges. We need only specify the ground distance between two points. The resulting signatures should be perceptually meaningful.
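To show what "specify the ground distance" buys, here is the greyscale special case. This is my own sketch, not the compass operator's implementation: when the signature points live on a line and the ground distance is |x - y|, the EMD has a closed form as the area between the two cumulative distributions, so no transportation problem needs to be solved:

```python
def emd_1d(sig_a, sig_b):
    """Earth Mover's Distance between two 1-D signatures of equal total weight.

    A signature is a list of (position, weight) pairs. With ground
    distance |x - y|, EMD equals the area between the two CDFs.
    """
    points = sorted({p for p, _ in sig_a} | {p for p, _ in sig_b})

    def cdf(sig, x):
        return sum(w for p, w in sig if p <= x)

    total = 0.0
    for left, right in zip(points, points[1:]):
        total += abs(cdf(sig_a, left) - cdf(sig_b, left)) * (right - left)
    return total

# All mass at grey level 0 vs. all mass at grey level 100:
# every unit of "earth" moves 100 units, so the distance is 100.
d = emd_1d([(0.0, 1.0)], [(100.0, 1.0)])
```

For color, the same machinery applies unchanged once the ground distance is, say, Euclidean distance in CIE-Lab; only the solver must then handle the full transportation problem.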

Page maintained by mark34@cs.stanford.edu