
Review

 

Mathematical definitions of image structures that are well localized in two dimensions have been even more diverse in the past than definitions of edges. Much of the work on two dimensional features has concentrated solely on ``corners'', in name and in practice, that is, features formed at boundaries between only two image brightness regions, where the boundary curvature is ``sufficiently high''. Much theory has been presented which is only strictly accurate for such simple image structures. However, many other types of localized structure arise, for example, ``T'', ``Y'' and ``X'' junctions, as well as less rigidly defined structures. These will not usually fit into the model of corner finding theory, and so will not be correctly dealt with.

A few methods which find points of localized image structure directly (i.e., take the phrase ``well localized'' as the ``brief'' for developing an algorithm) have been used, for example the Moravec interest operator and the Plessey algorithm. These are described below. This relaxation of the corner ``definition'' is to be commended; however, each of these methods has its own shortcomings. In the case of the Plessey feature point detector, the increase in reliability of point detection and in the variety of points which may be detected (when compared with ``corner'' based methods, such as Kitchen-Rosenfeld and Dreschler-Nagel) is balanced by a loss in localization accuracy (for reasons explained below).

The SUSAN detector makes no assumptions about the form of the local image structure around a well localized point. Neither does it look for ``points of interest''. Instead, it analyzes different regions separately, using direct local measurements, and finds places where individual region boundaries have high curvature, i.e., finds corners formed by single regions. Thus at junctions involving more than two regions, for example, ``T'' junctions, more than one region may contribute to the detection of a ``corner'', and all junctions will be correctly processed, no matter how complicated they are. Also, no assumptions about the internal structures of the regions around a ``corner'' are made; it is common to assume that the regions are constant in value, but in the case of the SUSAN detector, sloping regions are allowed, as shown in later examples.
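
As a preview of the next section, the following minimal sketch (in Python with NumPy, an implementation choice rather than anything from the original work) shows the underlying USAN measurement on which the detector is based. The mask radius and brightness threshold are illustrative values only; the full detector, including the geometric threshold, non-maximum suppression and false positive suppression, is described in the next section.

import numpy as np

def usan_area(image, x, y, radius=3, t=27):
    # Count mask pixels whose brightness is similar to that of the
    # nucleus (the centre pixel); a small USAN area indicates a corner.
    # 'radius' and the brightness threshold 't' are illustrative values,
    # and (x, y) is assumed to lie at least 'radius' pixels from the border.
    nucleus = float(image[y, x])
    area = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx * dx + dy * dy > radius * radius:
                continue  # keep the mask approximately circular
            diff = float(image[y + dy, x + dx]) - nucleus
            area += np.exp(-((diff / t) ** 6))  # smooth similarity measure
    return area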

A few methods of using binary edge maps to find corners have been suggested. The edges are detected and then edge curvature is calculated in order to find corner locations. See, for example, [20], [3] and [36]. Clearly this sort of approach has severe problems with junctions.
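
A minimal sketch of this style of approach, under the assumption that an ordered chain of edge pixel coordinates is already available from an edge detector, might estimate curvature from turning angles; corners would then be reported at curvature maxima above a threshold. The chord length used here is an illustrative parameter.

import numpy as np

def turning_angle_curvature(edge_chain, span=3):
    # Estimate curvature at each point of an ordered edge chain from the
    # turning angle between the incoming and outgoing chords of length 'span'.
    pts = np.asarray(edge_chain, dtype=float)
    curvature = np.zeros(len(pts))
    for i in range(span, len(pts) - span):
        back = pts[i] - pts[i - span]      # incoming chord
        fwd = pts[i + span] - pts[i]       # outgoing chord
        a1 = np.arctan2(back[1], back[0])
        a2 = np.arctan2(fwd[1], fwd[0])
        # wrapped angle difference in (-pi, pi]
        curvature[i] = np.abs((a2 - a1 + np.pi) % (2 * np.pi) - np.pi)
    return curvature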

In [38] and [39], Moravec developed the idea of using ``points of interest''. These are defined as occurring when there are large intensity variations in every direction. This definition is realized by computing an un-normalized local autocorrelation in four directions and taking the lowest result as the measure of interest. This response is thresholded and then local non-maxima are suppressed. However, some problems with the Moravec operator have been identified (principally by the proponents of the Plessey detector, which builds on the same feature definition): the response is anisotropic, because only four directions are used for finding the local autocorrelation; the response is noisy, because the ``window'' used for finding the autocorrelation has a square profile; and the operator is sensitive to strong edges, because the response is given by the minimum of the autocorrelation measurements rather than by the variation in them.
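
A rough sketch of the measure just described is given below; the window half-size of two pixels and the omission of the final thresholding and non-maximum suppression steps are simplifications made for brevity, not details taken from [38] or [39].

import numpy as np

def moravec_response(image, window=2):
    # Minimum over four shift directions of the un-normalized local
    # autocorrelation (sum of squared differences) inside a square window.
    img = image.astype(float)
    h, w = img.shape
    shifts = [(1, 0), (0, 1), (1, 1), (1, -1)]  # the four directions used
    response = np.zeros_like(img)
    r = window
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            ssd = []
            for dy, dx in shifts:
                shifted = img[y - r + dy:y + r + 1 + dy,
                              x - r + dx:x + r + 1 + dx]
                ssd.append(np.sum((patch - shifted) ** 2))
            # taking the minimum is what makes the operator sensitive to
            # strong edges, as noted above
            response[y, x] = min(ssd)
    return response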

In [4] Beaudet enhanced high curvature edges (i.e., looked for saddle points in the image brightness surface) by calculating image Gaussian curvature (the product of the two principal curvatures). In [29] Kitchen and Rosenfeld used a local quadratic surface fit to find corners. The parameters of the surface were used to find the gradient magnitude and the rate of change of gradient direction; the product of these quantities was used to define ``cornerness'', and local maxima were reported as corners. In [16] Dreschler and Nagel defined corners to be points lying between extrema of Gaussian curvature. Later (see, for example, [41]) Nagel used the definition (of corner position) of ``the point of maximum planar curvature for the line of steepest slope''. In [76] Zuniga and Haralick found corners at significant changes in curvature along edges.
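
For illustration, the following sketch computes both Beaudet's determinant measure and the Kitchen-Rosenfeld ``cornerness'' from simple central-difference derivatives; the discrete derivatives stand in for the surface fits used in the original papers, so this is an approximation rather than a faithful reimplementation.

import numpy as np

def beaudet_and_kitchen_rosenfeld(image):
    img = image.astype(float)
    Iy, Ix = np.gradient(img)          # first derivatives
    Iyy, _ = np.gradient(Iy)           # second derivatives
    Ixy, Ixx = np.gradient(Ix)
    # Beaudet: determinant of the Hessian, related to the Gaussian
    # curvature of the brightness surface.
    beaudet = Ixx * Iyy - Ixy ** 2
    # Kitchen-Rosenfeld: gradient magnitude times the rate of change of
    # gradient direction, i.e. edge contour curvature times edge strength.
    grad_sq = Ix ** 2 + Iy ** 2
    kr = (Ixx * Iy ** 2 - 2.0 * Ixy * Ix * Iy + Iyy * Ix ** 2) / (grad_sq + 1e-12)
    return beaudet, kr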

In [43] Noble shows that these four approaches are all making the same underlying measurement, that is, the product of the gradient magnitude and the edge contour curvature. They are all sensitive to noise, and have relatively low reliability, as shown in various tests in [24].

In [15] Deriche and Giraudon use Beaudet's ``cornerness'' measure in conjunction with edge detection to obtain accurate corner localization. Corners are found at two different scales, and a line is drawn between each corner at one scale and the same corner at the other. The intersection of this line with the nearest zero crossing of the Laplacian edge is then defined as the correct position of the corner. It is not clear whether sensitivity to noise and reliability of detection are improved with this method.

In [74] Wang and Brady develop the above curvature based methods. As well as requiring that curvature be a maximum and above a threshold, they require that the gradient perpendicular to the edge be a maximum and above a threshold. False corner response suppression is also performed to prevent corners from being wrongly reported on strong edges. However, in the presence of large amounts of noise, this is not sufficient to prevent false responses on strong diagonal edges. The corners are found at different smoothing levels, allowing the corner positions at zero smoothing to be estimated.

In [26] Harris and Stephens described what has become known as the Plessey feature point detector. This is built on similar ideas to the Moravec interest operator, but the measurement of local autocorrelation is estimated from first order image derivatives. The variation of the autocorrelation over different orientations is found by calculating functions related to the principal curvatures of the local autocorrelation. This well-conditioned algorithm gives robust detection, that is, feature points are reliably detected. However, localization accuracy is poor, particularly at certain junction types (see results presented in Section 5.4).

This occurs for two reasons. The first is that the method can only work accurately at most junctions if the masks used to find the derivatives and to perform smoothing have zero size. This can be easily understood in the case of ``T'' junctions, where the two ``smaller'' regions at the junction have similar brightness, and the ``larger'' region has a very different brightness from the other two. See, for example, the junction near the lowest and leftmost marker shown in Figure 21. Clearly, for finite mask sizes, one curvature (that measured perpendicular to the strong edge) will have a maximum value all along that edge. However, the other curvature (that parallel to the strong edge) will have a value which is constant at all points along the weak edge and which starts to drop as the strong edge is approached, the distance at which this begins to occur being half the size of the masks used. Thus the position at which both curvatures are largest is not well defined, and when some compromise position is chosen, it will be displaced away from the real junction position. The other reason for localization error is explained fully in [62], where it is shown that the actual measure of interest depends to a large extent on the relative values of the two curvatures, and is greatest when they are equal. In the case described above, the two curvatures will be equal at some point along the weak edge, away from the strong edge, and therefore away from the correct position of the junction.
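
The following sketch illustrates the measure just described, building the smoothed first-derivative autocorrelation matrix and combining its determinant and trace into a single response in the commonly cited form of the Plessey/Harris detector; the smoothing scale sigma and the constant k are illustrative values, not taken from [26].

import numpy as np
from scipy.ndimage import gaussian_filter

def plessey_response(image, sigma=2.0, k=0.04):
    img = image.astype(float)
    Iy, Ix = np.gradient(img)
    # autocorrelation (structure) matrix estimated from first derivatives
    # and smoothed over a Gaussian window
    Axx = gaussian_filter(Ix * Ix, sigma)
    Ayy = gaussian_filter(Iy * Iy, sigma)
    Axy = gaussian_filter(Ix * Iy, sigma)
    # response combining the two principal curvatures via det and trace;
    # the finite smoothing window is what causes the junction
    # mislocalization discussed in the text
    det = Axx * Ayy - Axy ** 2
    trace = Axx + Ayy
    return det - k * trace ** 2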

In [43] Noble relates the Plessey operator to the four ``greylevel corner detectors'' mentioned above, and shows that it can be viewed as measuring image curvature also.

In [19] Förstner and Gülch describe a method which appears to use exactly the same measure of ``cornerness'' as the Plessey operator (note the earlier statement that this work predates the Plessey paper). However, a more complicated implementation is used; the detection and localization stages are separated into the selection of windows in which features are known to reside, followed by feature location within the selected windows. This results in improved localization accuracy with some corner types, but the algorithm is, as one would expect, slower than the Plessey detector.

In [12] Cooper et al. use image patch similarity tests in two edge based corner finding algorithms. In the first method the similarity function (an un-normalized local autocorrelation function) is found at positions in both directions along an edge; if the function gives a high result in either direction parallel to the edge then a corner is reported. This is clearly similar to the principle behind the Moravec and Plessey detectors. In the second method the results of the similarity tests performed along the edge are used to estimate image curvature directly. In both cases the localization accuracy at junctions is poor.

In [43] Noble uses mathematical morphology (as mentioned earlier), applied to the approach of facet based structure modelling, to propose a two dimensional feature detector. Gaussian curvature is used to determine whether local patches are elliptic, hyperbolic, parabolic or flat; this information is extracted from the image using morphological techniques. It is not clear how well junctions will be dealt with, as no detector is actually developed, and results of testing the morphological approach on two dimensional features are not presented.

In [70] Venkatesh uses the local energy approach discussed earlier to find two dimensional features. Maxima in local energy are searched for along the edge direction. Only tests performed on very simple (and noise free) images are presented.

In [53] Rosenthaler et al. use an integrated edge and ``keypoint'' detector to find two dimensional features. Oriented directional energy filters are used to find first and second derivatives of energy along the direction of each filter; these are combined to give a final response which is a maximum at ``keypoints''. The results presented are good, but performance in the presence of noise is not investigated.

In [33] Liu and Tsai assume that only two regions form a corner. The image is divided up into patches, and each patch is segmented into two regions using grey level moment preserving thresholding. The two regions are then analyzed to determine whether a corner exists anywhere within the patch. If it does, the corner's position and angle are found. The method clearly cannot cope with junctions. It has apparently only been tested on very simple images, and does not function well in the presence of noise.

In [37] Mehrotra et al. use half-edge detection to find corners. Directional masks which are zero in one half are used to find either gradient maxima or zero crossings of the second derivative. (The line splitting the mask into ``zero'' and ``non-zero'' parts is perpendicular to the measured edge direction. Large masks are used at eight orientations.) Then for each edge point the difference in orientation between the neighbouring edge points is used to find corners. Results are presented for very simple images only.

In [52] Rohr uses strict models to detect and localize corners and junctions. The assumptions are made that regions making up junctions have constant underlying values and that junctions are well isolated in the image. Large masks are used to fit local structures to models with several parameters, including the feature position and orientation, the intensities of regions touching at the junction, the amount of blurring, and the relative positions of the edges forming the junction. Thus feature detection from a large scale viewpoint is performed. Results are only presented for very simple low noise images.

In [58] Singh and Shneier attempt to overcome the problem of the trade-off between reliable detection and accurate localization by fusing various corner finding methods. Initially a template based corner detector is described. Three different methods for performing the matching between the image and the template are given, and it is shown that they have differing degrees of reliability and accuracy. (These two quantities are related to each other for any one method by a qualitative ``uncertainty principle''.) Fusing the three methods' outputs should give both reliable and accurate results. In the second part of the paper, a gradient based approach is taken, similar to that of Kitchen and Rosenfeld. Different methods are described, each using a slightly different definition of ``cornerness''. Again, the different methods are fused to provide a final list of detected corners. Results presented appear poor.

In [46] Paler et al. note that at a corner the median of local brightness values taken over a small mask is significantly different from the centre value. Thus the difference between the median and the centre value is used to produce a corner response. This approach is limited to situations where the edge widths and the contrast between ``the object and the background'' can be accurately estimated. Junctions cannot be dealt with at all.
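
A minimal sketch of this measure, with an illustrative mask size, is:

import numpy as np
from scipy.ndimage import median_filter

def median_centre_response(image, size=5):
    # Difference between the local median over a small mask and the
    # centre pixel value; the mask 'size' is an illustrative choice.
    img = image.astype(float)
    return np.abs(img - median_filter(img, size=size))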


