- Brief Discussion
The algorithm is still incomplete, as I am still trying to understand the effect of each parameter in the algorithm; there is still room for experiments and possible improvements. However, the results so far look promising. A detailed discussion and analysis of the method are omitted here and will be covered in the final report. Below are some comments on the performance of my method so far.
Most of the coding has been done, although some bugs may still exist.
An outline of my method follows (note: I use [output_name] to denote the output image from each substage of my algorithm):
Obtain the high-frequency components of the input image (basically, subtract the diffused input image from the original) -- call it [highimg].
Apply Cellular Neural Network (CNN)'s falling membrane filter to [highimg], obtaining [fallimg].
Multiply [highimg] with [fallimg], which acts as a space-variant, multiplicative mask -- call the output [maskimg].
Linearly blend [maskimg] with the low-frequency components of the input image (i.e., the diffused input image) -- call the output [finalimg].
Apply Canny's edge detector to [finalimg], obtaining the final edge map.
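The five steps above can be sketched roughly as follows. This is only an illustrative sketch in Python/NumPy, not the actual Matlab code: diffusion is approximated by Gaussian smoothing, the CNN falling-membrane filter is replaced by a hypothetical stand-in (an iterated relaxation toward local minima -- the real CNN template differs), and the function and parameter names (`edge_pipeline`, `sigma`, `alpha`) are my own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

def falling_membrane(img, iters=10, rate=0.2):
    # Hypothetical stand-in for the CNN falling-membrane filter:
    # let values "fall" toward the local minimum over a few iterations.
    out = img.copy()
    for _ in range(iters):
        out = (1 - rate) * out + rate * minimum_filter(out, size=3)
    return out

def edge_pipeline(img, sigma=2.0, alpha=0.5):
    lowimg = gaussian_filter(img, sigma)   # diffused (low-frequency) image
    highimg = img - lowimg                 # [highimg]: high-frequency part
    fallimg = falling_membrane(highimg)    # [fallimg]
    maskimg = highimg * fallimg            # [maskimg]: multiplicative mask
    # [finalimg]: linear blend of the masked detail with the low frequencies
    finalimg = alpha * maskimg + (1 - alpha) * lowimg
    return finalimg                        # Canny is then run on finalimg
```

The final Canny step is left as a comment, since any standard implementation can be applied to `finalimg`.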
As we can see, my method picks up more edges in most parts of the input images than Canny's, as illustrated in the shadow and lamp images. For instance, even the edges contained in the reflection of the lamp on the window are nicely picked up by my method. However, some small edge details do get lost, such as the string suspending the wooden doll in the shadow image, and a few less interesting ones are also picked up, such as the fine, "differential" shadings in the wooden doll's body and its shadow. This is probably an inevitable effect of the space-variant method I use so far, but there is still room to see how the parameters can help here (e.g., how much diffusion to apply, the linear-blending coefficient, etc.). Also, my output images contain less "noise" (uninteresting edge information) in the background, as can be seen in the clock and venice outputs -- i.e., in the wooden floor of the clock image and in the water of the venice image.
Areas to further explore and analyze:
The effects of different parameters for the diffusion and falling-membrane templates (the diffusion template is used to obtain [highimg], the high-frequency components of the input image);
Some other image enhancement techniques;
How my method performs over a wider image dataset.
- Link to Source Code
Here is a copy of the directory containing all of my code so far. Many thanks to Kenneth Crounse for the Matlab implementation of the Cellular Neural Network simulator.
alg2.m is the file I used to produce the edge output files above.
Because of time constraints, I was unable to explore bilateral filtering much in the context of edge detection, so I omit any discussion and output images here. However, thanks to Professor Tomasi, I have a Matlab implementation, bilateral.m, and have played with it a little as well. If time permits, I might include the results of the bilateral filter as applied to edge detection.
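For reference, the idea behind the bilateral filter can be sketched as below. This is a brute-force illustrative sketch in Python/NumPy, not the bilateral.m mentioned above; the parameter names (`sigma_s` for the spatial scale, `sigma_r` for the range scale, `radius` for the window) are my own.

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    # Each output pixel is a weighted average of its neighborhood, where the
    # weights combine spatial closeness and intensity similarity, so edges
    # are preserved while smooth regions are blurred.
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # spatial weights
    pad = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weights: similarity to the center pixel's intensity
            rangew = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rangew
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

Applied before edge detection, such a filter could play a role similar to the diffusion step: it suppresses fine texture while keeping strong edges sharp.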