Next: Conclusions and Future Work Up: ASSET 2 Previous: Real-Time Implementation of ASSET-2

Results

 

In this section the results of testing ASSET-2 are presented. The image sequences were produced by DRA except where indicated otherwise.

Figure 9 shows the tracking of the Landrover shown earlier, from the first tracked frame. On the left, the edge correction stage has not been used to ``fine-tune'' the cluster boundaries. On the right, image edges have been used; they are first applied in frame 8. In both cases ASSET-2 is working stably, and the improvement over time of the estimated boundary is evident. By frame 10 the advantage of using edges is clear. The central rectangle marks the tracked centre of gravity of the cluster, with a vector emanating from it showing the tracked image motion of the cluster; the vector is not visible here because the image velocity is very low. (Edge correction was also used in the tests shown in Figures 10 to 14, but not in the remaining tests.)

  
Figure 9: Seven frames showing the increasing accuracy of the estimated object boundary as time progresses from the initialization of a filtered cluster. On the left, the edge correction stage has not been used to ``fine-tune'' the cluster boundaries. On the right, image edges have been used; they are first applied in frame 8.
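The edge correction stage described above can be sketched as a simple boundary-snapping step: each estimated boundary point is moved to the strongest image edge within a small search window. This is a hedged illustration only; the function and parameter names are ours, not ASSET-2's actual implementation.

```python
import numpy as np

def refine_boundary(boundary, edge_strength, search_radius=3):
    """Snap boundary points (y, x) to the strongest edge response within
    a (2r+1)x(2r+1) window; points with no nearby edge stay put.
    Illustrative sketch, not the paper's algorithm in detail."""
    h, w = edge_strength.shape
    refined = []
    for y, x in boundary:
        y0, y1 = max(0, y - search_radius), min(h, y + search_radius + 1)
        x0, x1 = max(0, x - search_radius), min(w, x + search_radius + 1)
        window = edge_strength[y0:y1, x0:x1]
        if window.max() > 0:           # an edge exists nearby: snap to it
            dy, dx = np.unravel_index(np.argmax(window), window.shape)
            refined.append((y0 + dy, x0 + dx))
        else:                          # keep the flow-based estimate
            refined.append((y, x))
    return refined
```

This matches the behaviour seen in Figure 9: where isolated image edges exist near the flow-derived boundary, the outline locks onto them; elsewhere the original estimate is retained.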

Figure 10 shows frame 44 of the ambulance sequence. The boundaries are being accurately and stably tracked. No spurious objects are present.

  
Figure 10: The output of ASSET-2 at frame 44 of the ambulance sequence.

Figure 11 shows sections of frames 100 to 166 of the ambulance sequence. Both vehicles are still being accurately tracked, with the exception of a minor error in the top edge of the Landrover; there is no single isolated image edge corresponding to the top of the Landrover, so there is a slight inaccuracy in the estimated boundary there. In frame 106 the ambulance is partially occluded by the Landrover. ASSET-2 shows that it has recognized this occlusion by marking the ambulance's boundary with a darker line. The tracking of the two vehicles remains good, with the ``unocclusion'' of the ambulance being recognized in frame 148.

  
Figure 11: Twelve frames showing how ASSET-2 tracks objects before, during and after occlusion, using frames 100 to 166 of the ambulance sequence.

Figure 12 shows ASSET-2 tracking a radio-controlled truck in a situation where occlusion by a static obstacle takes place. As the truck passes behind the static obstacle, occlusion is automatically registered and shown by changing the brightness of the superimposed outline. The shape is frozen, but the position is updated using the existing 2D motion model. Without acceleration modelling, when the vehicle emerges from behind the obstacle it is a long way ahead of the expected position. However, using acceleration modelling (as shown here), the prediction of the object's position is accurate enough to allow the old cluster to rejoin the truck once it reappears (shown by the continuity of the cluster label, here the number 3).

  
Figure 12: Example output from ASSET-2; a moving object is occluded by a static obstacle.
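The paper does not give the exact form of its motion model; as a hedged sketch, coasting an occluded cluster with acceleration modelling amounts to a constant-acceleration prediction of the centroid for each occluded frame. The names and the one-frame time step below are our assumptions.

```python
def predict_occluded(pos, vel, acc, n_frames):
    """Advance a 2D position through n_frames of occlusion using the
    constant-acceleration update x' = x + v + a/2, v' = v + a per frame.
    Illustrative sketch of coasting, not ASSET-2's actual filter."""
    x, y = pos
    vx, vy = vel
    ax, ay = acc
    for _ in range(n_frames):
        x, y = x + vx + 0.5 * ax, y + vy + 0.5 * ay
        vx, vy = vx + ax, vy + ay
    return (x, y), (vx, vy)
```

With a zero acceleration term this reduces to the constant-velocity prediction that, as noted above, leaves the reappearing truck a long way ahead of the expected position when it is in fact accelerating.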

Figure 13 shows a Ford vehicle travelling away from the camera. In this case, the camera is static. This shows ASSET-2 functioning accurately in a situation where there is no background flow, and the tracking continues until the motion of the vehicle is less than one pixel per frame.

  
Figure 13: The output of ASSET-2 at frame 9 of the Ford sequence.

Figure 14 shows an image sequence taken from a moving vehicle at night with an infra-red camera. The quality of the infra-red images is not as good as images taken with normal video cameras; the resolution is not as high, and there are horizontal dark and light stripes superimposed on the image. (Both of these degradations are just visible on the printed picture shown here.) However, ASSET-2 still functions; the overtaking vehicle is adequately tracked. The sequence was kindly provided by Pilkington Plc.

  
Figure 14: The output of ASSET-2 at frame 8 of a sequence taken by an infra-red camera at night.

Figure 15 shows a frame from a sequence in which a Landrover fills a large portion of the image, whilst being followed by the vehicle carrying the video camera. Even this large object is successfully tracked as a single cluster.

  
Figure 15: The output of ASSET-2 at frame 7 of a sequence taken following a Landrover at close range.

Figure 16 shows ASSET-2 tracking a moving aircraft. The original video was taken with a hand-held, long-focal-length camcorder; the resulting image sequences are very shaky, but a suitable object motion model (no acceleration modelling, low velocity updates and high position updates) allows successful tracking of various aircraft seen at the airshow, including independent tracking of several helicopters simultaneously. (The helicopters are small and the output is only meaningful if viewed dynamically, hence it is not shown here.)

  
Figure 16: The output of ASSET-2 when given a sequence from an airshow.
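The motion-model tuning described above (high position updates, low velocity updates, no acceleration term) can be illustrated with a generic alpha-beta tracking filter: a high position gain lets the track follow the jittery measurements from the shaky camera, while a low velocity gain keeps the velocity estimate stable. This is a hedged sketch with illustrative gain values, not ASSET-2's actual filter.

```python
def alpha_beta_step(x, v, z, alpha=0.9, beta=0.05):
    """One alpha-beta filter update (1D, dt = 1 frame): predict with
    constant velocity, then correct position strongly (alpha) and
    velocity weakly (beta) toward the measurement z.
    Gain values are illustrative assumptions."""
    x_pred = x + v              # constant-velocity prediction
    r = z - x_pred              # innovation (measurement residual)
    x_new = x_pred + alpha * r  # high position update
    v_new = v + beta * r        # low velocity update
    return x_new, v_new
```

With gains like these, camera shake shows up as position jitter that the filter tolerates, while the velocity estimate changes only slowly, which is the behaviour the airshow sequence requires.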

Figure 17 shows ASSET-2 tracking several independently moving vehicles at a roundabout. The video sequence is taken by a traffic monitoring camera which, though mounted in a stationary position, moves around as its platform sways. The number of radii in the shape model was reduced to 8 (hence the more ``quantized'' outlines) due to the limited graphics speed of the Framestore. The sequence was kindly provided by Roke Manor Research Limited.

  
Figure 17: The output of ASSET-2 when given a sequence from a traffic monitoring video camera.
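The shape model mentioned above stores the outline as a set of radii at fixed angles around the cluster centroid, so reducing the count to 8 coarsens the drawn boundary, which is why the outlines in Figure 17 look more ``quantized''. The reconstruction below is illustrative; the names and details are ours, not the paper's implementation.

```python
import math

def radial_outline(centre, radii):
    """Convert N radii (at N equally spaced angles around the centroid)
    back into outline points. Fewer radii give a coarser outline.
    Illustrative sketch of a radial shape model."""
    cx, cy = centre
    n = len(radii)
    return [(cx + r * math.cos(2 * math.pi * i / n),
             cy + r * math.sin(2 * math.pi * i / n))
            for i, r in enumerate(radii)]
```

The representation is compact (N numbers per object) and cheap to draw, which fits the limited graphics speed of the Framestore noted above.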

Figure 18 shows ASSET-2 tracking several independently moving vehicles on a motorway. Simple geometry calculations and calibrations are used to convert image speed and position to vehicle speeds, which are superimposed over the vehicles.

  
Figure 18: The output of ASSET-2 when given a sequence from a traffic monitoring video camera over a motorway, with real speeds calculated.
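The ``simple geometry calculations and calibrations'' are not detailed in the text. As a hedged illustration, a pin-hole, small-angle model converts image speed to road speed once the vehicle's range is known: a pixel at range d subtends roughly d/f metres for a focal length of f pixels. All parameter names and values below are illustrative assumptions, not the calibration actually used.

```python
def pixels_to_metres_per_second(pixel_speed, distance_m,
                                focal_px=800.0, fps=25.0):
    """Convert an image speed (pixels/frame) to a road speed (m/s) for
    a vehicle at roughly distance_m from the camera, using the
    small-angle approximation metres_per_pixel = distance_m / focal_px.
    focal_px and fps are illustrative assumptions."""
    metres_per_pixel = distance_m / focal_px
    return pixel_speed * metres_per_pixel * fps
```

In practice the range would itself come from the calibration (e.g. from the vehicle's image row under a ground-plane assumption), so the conversion factor varies across the image.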






LaTeX2HTML conversion by Steve Smith (steve@fmrib.ox.ac.uk)