Improving Canny's Edge Detector with Space-Variant Filters

Philip L. Tsai
tsailipu@cs.stanford.edu

Note: The formats of the code and input/output images have deviated from those mentioned in my proposal. Specifically, the code is now all in Matlab, because the algorithm was much easier to implement there. Also, since I use Matlab's imread() and imwrite() functions to read and write image files, the input images can now be in any format recognized by imread() (although I use 'bmp' most of the time); similarly, the output formats can be any recognized by imwrite(). For showing the images on-line, however, I have transformed all of them to JPEG files. Finally, I have decided to output the edge map in binary format instead of a gray-scale edge-strength map, as I believe a binary edge map is much easier (for humans) to see and more convenient, for instance, to use as input to other practical applications.


  1. Inputs
    Shadow: [image]
    Clock: [image]
    Lamp: [image]
    Venice: [image]
  2. Outputs
    For each input image above, in order:
    Canny's: [image]
    Mine: [image]
(Note: The code for Canny's edge detector is from Matlab 5.2's Image Processing Toolbox. To be fair to both algorithms, I use the same, default parameters (i.e. sigma = 1, threshold automatically selected by edge()).)
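To make the baseline concrete: Matlab's edge(img, 'canny') runs the full Canny pipeline. Here is a rough, self-contained sketch of that baseline in Python/NumPy (not the Matlab code used above); it deliberately omits the non-maximum suppression and hysteresis thresholding that edge() does perform, and the thresh parameter is my own simplification of the automatic threshold selection.

```python
import numpy as np

def simple_canny(img, sigma=1.0, thresh=0.2):
    """Toy stand-in for edge(img, 'canny'): Gaussian smoothing, image
    gradients, and one global threshold on gradient magnitude.
    (The real detector adds non-maximum suppression and hysteresis.)"""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    g /= g.sum()
    # Separable Gaussian smoothing: filter the rows, then the columns.
    sm = np.apply_along_axis(lambda v: np.convolve(v, g, mode='same'), 1, img)
    sm = np.apply_along_axis(lambda v: np.convolve(v, g, mode='same'), 0, sm)
    gy, gx = np.gradient(sm)
    mag = np.hypot(gx, gy)
    # Binary edge map, matching the output format used above.
    return mag > thresh * mag.max()
```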

  1. Brief Discussion
    The algorithm is still incomplete, as I am still trying to understand the effect of each parameter I use in the algorithm; there is still room for experiments and possible improvements. However, the results so far look promising. A detailed discussion and analysis of the method I use are omitted here and will be covered in the final report. Below are some comments on the performance of my method so far.

    Most of the coding has been done, although some bugs may still exist. An outline of my method is (Note: I use [output_name] to represent the output image from each substage in my algorithm):
    1. Obtain the high-frequency components of the input image (basically, subtract the diffused input image from the original one) -- call it [highimg].
    2. Apply the Cellular Neural Network (CNN) falling-membrane filter to [highimg], obtaining [fallimg].
    3. Multiply [highimg] by [fallimg], which acts as a space-variant, multiplicative mask -- call the output [maskimg].
    4. Linearly blend [maskimg] with the low-frequency components of the input image (i.e. the diffused input image) -- call the output [finalimg].
    5. Apply Canny's edge detector to [finalimg], obtaining the final edge map.
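The data flow of the steps above can be sketched as follows. This is a Python/NumPy illustration, not my Matlab code: gaussian_blur stands in for the diffusion template, falling_membrane is only a crude min-relaxation placeholder for the actual CNN falling-membrane template (whose dynamics follow Crounse's simulator), and alpha is a hypothetical name for the linear-blending coefficient.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian smoothing; a stand-in for the diffusion template.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, out)

def falling_membrane(img, iters=5):
    # Placeholder only: a crude relaxation toward local minima, NOT the
    # actual CNN falling-membrane dynamics.
    out = img.copy()
    for _ in range(iters):
        p = np.pad(out, 1, mode='edge')
        neigh = np.minimum.reduce([p[:-2, 1:-1], p[2:, 1:-1],
                                   p[1:-1, :-2], p[1:-1, 2:]])
        out = np.minimum(out, 0.5 * (out + neigh))
    return out

def enhance(img, sigma=1.0, alpha=0.5):
    lowimg   = gaussian_blur(img, sigma)          # diffused input
    highimg  = img - lowimg                       # high-frequency components
    fallimg  = falling_membrane(highimg)          # step 2 (placeholder)
    maskimg  = highimg * fallimg                  # space-variant mask (step 3)
    finalimg = alpha * maskimg + (1 - alpha) * lowimg  # linear blend (step 4)
    return finalimg                               # step 5: run Canny on this
```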

    As we can see, my method is able to pick up more edges in most parts of the input images than Canny's, as illustrated in the shadow and lamp images. For instance, even the edges contained in the reflection of the lamp on the window are nicely picked up by my method. However, some small edge details do get lost, such as the string suspending the wooden doll in the shadow image; and a few less interesting ones are also picked up, such as the fine, "differential" shadings in the wooden doll's body and its shadow. This is probably an inevitable effect of the space-variant method I use so far, but there is still room to see how the different parameters I use can help here (e.g. how much diffusion should be done, the linear-blending coefficient, etc.). Also, my output images contain less "noise" (uninteresting edge information) in the background, as can be seen in the clock and venice outputs -- i.e. in the wooden floor of the clock image and in the water of the venice image.

    Areas to further explore and analyze:

    The effects of different parameters for the diffusion and falling-membrane templates (the diffusion template is used for obtaining [highimg], the high-frequency components of the input image);
    Some other image enhancement techniques;
    How my method performs over a wider image dataset.

  2. Link to Source Code
    Here is a copy of the directory containing all of my code so far. Many thanks to Kenneth Crounse for the Matlab implementation of the Cellular Neural Network simulator.

    alg2.m is the file I use to produce my edge output files above.

    Because of time constraints, I was unable to explore much of bilateral filtering in the context of edge detection. Therefore, I omit any discussion and output images here. However, thanks to Professor Tomasi, I have a Matlab implementation, bilateral.m, and have played with it a little as well. If time permits, I might try to include the results of the bilateral filter as applied to edge detection.
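bilateral.m itself is not reproduced here, but to give a sense of what bilateral filtering does (Tomasi and Manduchi's edge-preserving smoothing [3]), here is a naive Python/NumPy sketch. The parameter names, the specific sigma values, and the wrap-around border handling via np.roll are my own simplifications, not those of bilateral.m.

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    # Naive bilateral filter: each pixel becomes a normalized average of
    # its neighbors, weighted by BOTH spatial distance (sigma_s) and
    # intensity difference (sigma_r), so smoothing stops at strong edges.
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # np.roll wraps around at the borders -- a sketch-level shortcut.
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            ws = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s**2))
            wr = np.exp(-(shifted - img)**2 / (2.0 * sigma_r**2))
            w = ws * wr
            out += w * shifted
            norm += w
    return out / norm
```

Note how a small sigma_r makes cross-edge weights vanish, so a sharp step survives the smoothing almost untouched -- the edge-preserving property that makes the filter interesting as a pre-processing step for edge detection.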


References:
[1] E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision, Prentice Hall, 1998.
[2] K. R. Crounse, Image Processing Techniques for Cellular Neural Network Hardware, U.C. Berkeley, 1997.
[3] C. Tomasi and R. Manduchi, "Bilateral Filtering for Gray and Color Images," ICCV, 1998.