Next: Segmentation Based on Analytic Image Transformations Up: Image Segmentation and Object Tracking Previous: Segmentation Based on Local Measurements

Segmentation Based on Simple Clustering or Inconsistency with Background Flow

In [64] Meygret and Thonnat match edge contours at multiple scales in stereo sequences to recover image flow. They then link the contours into groups with locally consistent flow, assuming purely translational motion, which produces patches of ``separate motion''. The results presented suggest that the large areas containing no linked edges (and therefore no motion or segmentation information) could be better represented if the motion of two-dimensional features were used; for example, road markings appear as objects separate from each other and from the background because they are unlinked in the image.

In [110] Thompson attempts to find moving objects in situations where only translational image motion is taking place; thus no divergence, rotation, or deformation is allowed. The brightness change constraint equation (BCCE) is used to find the normal component of flow, and the flow measurements are combined with image contrast information to aid the determination of object outlines. The outlines are found by region merging, on the basis of similarity in brightness (together with consistency in flow, or lack of flow information) or similarity in flow.
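The BCCE fixes only the component of flow along the brightness gradient, with signed magnitude -It/|grad I|. A minimal NumPy sketch of this computation follows; the function name, the crude derivative approximations, and the flat-region threshold are our illustrative choices, not Thompson's implementation:

```python
import numpy as np

def normal_flow(prev, curr):
    """Normal component of image flow from the BCCE.

    The BCCE, Ix*u + Iy*v + It = 0, constrains only the flow
    component along the brightness gradient; its signed magnitude
    is -It / |grad I|.
    """
    Iy, Ix = np.gradient(curr)               # spatial derivatives (rows, cols)
    It = curr - prev                         # crude temporal derivative
    mag = np.hypot(Ix, Iy)
    mag = np.where(mag < 1e-3, np.inf, mag)  # suppress flat regions
    vn = -It / mag                           # signed normal-flow magnitude
    return vn * Ix / mag, vn * Iy / mag      # (x, y) normal-flow components
```

For a brightness ramp translating one pixel to the right between frames, the recovered normal flow is one pixel per frame along the gradient direction, as the BCCE predicts.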

In [111] Thompson and Pong note that discontinuities (and more general variation) in flow can be due either to independent motion or to depth discontinuities. To help make the distinction they discuss a series of additional constraints, such as assuming knowledge of the camera's motion, or assuming that the world is planar. They then use these constraints to find moving objects by virtue of local discontinuities in flow. The flow is found using the method of Barnard and Thompson described in [4], that is, from the motion of corners.
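The underlying detection step, flagging points where the flow field changes sharply, can be sketched as below. Note this assumes a dense flow field for illustration, whereas Barnard and Thompson's corner matching yields sparse flow; the function name and threshold are ours:

```python
import numpy as np

def flow_discontinuities(u, v, thresh=1.0):
    """Mark points where the flow field (u, v) varies sharply.

    Such discontinuities may indicate either an independently moving
    object or a depth boundary; additional constraints (known camera
    motion, planar world, ...) are needed to tell the two apart.
    """
    gu_y, gu_x = np.gradient(u)
    gv_y, gv_x = np.gradient(v)
    variation = np.sqrt(gu_x**2 + gu_y**2 + gv_x**2 + gv_y**2)
    return variation > thresh
```

On a piecewise-constant flow field the mask fires only along the boundary between the two regions, which is exactly where either a motion or a depth discontinuity must lie.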

In [60] McLauchlan et al. assume zero camera translation and known rotation (a valid assumption in their case, as they are using a pan-tilt-verge stereo head fixed in space). The background flow can then be subtracted out completely, leaving non-zero flow only on moving objects. Normal flow is found using the BCCE. Because of the simplicity of the algorithm, and the use of dedicated parallel hardware, it runs in real time.
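The reason the subtraction is exact is that for zero translation the image flow depends only on the (known) rotation and the focal length, not on scene depth. A sketch of this cancellation, assuming a full flow field and illustrative sign conventions, focal length, and threshold (the function names are ours, not McLauchlan et al.'s):

```python
import numpy as np

def rotational_flow(shape, omega, f=500.0):
    """Predicted image flow for a purely rotating camera.

    For zero translation the flow depends only on the rotation
    (wx, wy, wz) and focal length f, not on scene depth, so it can
    be subtracted exactly.  Coordinates are relative to the image
    centre; the focal length and sign conventions are illustrative.
    """
    wx, wy, wz = omega
    h, w = shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    x -= w / 2.0
    y -= h / 2.0
    u = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    v = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return u, v

def moving_object_mask(u_meas, v_meas, omega, f=500.0, thresh=0.5):
    """Flag pixels whose measured flow is inconsistent with the
    rotation-only background field."""
    u_bg, v_bg = rotational_flow(u_meas.shape, omega, f)
    residual = np.hypot(u_meas - u_bg, v_meas - v_bg)
    return residual > thresh
```

After cancellation, any residual flow above the noise threshold is attributed to independent motion.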

Nelson and Aloimonos, in [77], use very coarse measurements of image flow divergence, obtained from spatiotemporal filters, to give ``time-to-impact'' estimates. In [76] Nelson describes a system similar to that of McLauchlan et al., which finds the normal flow using the BCCE and determines those parts of the image that are not consistent with the rigid background motion. This is done by comparing the normal flow field with simple precomputed candidate motions, allowing the background flow to be cancelled. No attempt is made to segment the image flow into differently moving objects, or to model or track shapes over time. The same paper also proposes a method which detects motion by virtue of image acceleration, assumed to be high only for independently moving objects; this will clearly work only for objects which have high acceleration in the world. Both of these systems are implemented to run at approximately half real-time speed.
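The divergence-based time-to-impact estimate rests on a simple relation: for a camera approaching a frontoparallel surface along its optical axis the flow is u = x/tau, v = y/tau, so div(u, v) = 2/tau and tau = 2/div. A minimal sketch of this estimate follows; it is our illustration of the relation, not Nelson and Aloimonos's spatiotemporal-filter implementation:

```python
import numpy as np

def time_to_impact(u, v, dt=1.0):
    """Coarse time-to-impact from the divergence of the flow field.

    For approach towards a frontoparallel surface, u = x/tau and
    v = y/tau, so div(u, v) = 2/tau and tau = 2/div.  A single
    global divergence estimate (the image average) is used, in the
    spirit of very coarse measurements.
    """
    du_dx = np.gradient(u, axis=1)
    dv_dy = np.gradient(v, axis=0)
    div = np.mean(du_dx + dv_dy)     # average divergence over the image
    return np.inf if div <= 0 else 2.0 * dt / div
```

Because only one averaged number is extracted per frame, the estimate is robust to noise but, as noted above, provides no segmentation into differently moving objects.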






© 1997 Stephen M Smith. LaTeX2HTML conversion by Steve Smith (steve@fmrib.ox.ac.uk)