The segmentation of image sequences into differently moving objects (or, more correctly, their projections in the image) is one of the aims of this thesis. This section reviews previous work on segmentation. The different approaches taken in the past could be ordered either by the method used to find the segmentation, or by the assumptions made about the world, the objects in it, the motion of the camera, and the motions of the objects. The former approach is taken here, to give greater continuity to the discussion.
The segmentation methods fall into (or between) two groups: those detecting flow discontinuities (local operations) and those detecting patches of self-consistent motion according to set criteria (global measurements). Proponents of the latter approach usually point out that the former is very sensitive to noisy flow measurements. The methods of statistical regularization and image transformation fall somewhere between these groups, attempting respectively to achieve global minimization of locally defined cost functions, or to find best fits to image transformations.
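The two groups can be illustrated with a minimal sketch. The function names, the gradient-magnitude threshold, and the choice of an affine motion model are assumptions for illustration only, not the methods of any particular work reviewed here: the local approach marks pixels where the flow field changes sharply, while the global approach fits a single parametric motion to a patch and judges self-consistency by the residual.

```python
import numpy as np

def flow_discontinuities(flow, threshold):
    """Local approach: mark pixels where the spatial gradient of the
    flow field is large, i.e. candidate motion boundaries.
    flow: (H, W, 2) array of (u, v) flow vectors."""
    du_y, du_x = np.gradient(flow[..., 0])
    dv_y, dv_x = np.gradient(flow[..., 1])
    grad_mag = np.sqrt(du_x**2 + du_y**2 + dv_x**2 + dv_y**2)
    return grad_mag > threshold

def affine_fit_residual(points, flow_vectors):
    """Global approach: fit a 2-D affine motion model u = A p + b to a
    patch by least squares; a low mean residual indicates the patch
    moves self-consistently under one motion."""
    P = np.hstack([points, np.ones((len(points), 1))])      # (N, 3)
    params, *_ = np.linalg.lstsq(P, flow_vectors, rcond=None)  # (3, 2)
    residuals = flow_vectors - P @ params
    return np.sqrt((residuals**2).sum(axis=1)).mean()
```

As the review notes, the local test responds to noise in the flow as readily as to true boundaries, whereas the global fit averages noise out but presupposes a motion model for each patch.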
Much of the segmentation research reviewed here relies on the assumption that object motion is restricted to certain movement types, most typically translational motion. For obvious reasons, a more general approach was preferred in the work reported in this thesis, so that a wider range of world events could be handled correctly.
It is notable how little work has involved the temporal integration, or tracking, of segmentation results. The research described in this thesis covers this area as well as the segmentation itself: a temporally coherent list of segmented objects is maintained as time proceeds and objects move about in the image.