The predictive movement model that we chose was the particle filter, which takes many samples of an image and then estimates the target's movement from their probability distribution and a dynamic motion model. The color-based particle filter demonstrated better tracking than the scale-invariant feature transform (SIFT) [2]* or speeded-up robust features (SURF) [3]* under varying illumination, background noise, rotation, and occlusion. A particle filter, also known as a sequential Monte Carlo filter, converges on the location of the target by taking multiple samples of the image surrounding the target's predicted position. It weights each sample by its likelihood of matching the target and forms an estimate by computing the weighted mean over all samples. As tracking progresses, samples with weak likelihoods are discarded and replaced with stronger samples that more closely resemble the target. The target's position is predicted by propagating the particles according to a dynamic Newtonian motion model and adding a random multivariate noise term, which allows the target's position to deviate from the expected dynamics.
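As a rough sketch of how such a tracker fits together (not the project's actual code), the Python/NumPy snippet below runs one predict / weight / estimate / resample cycle, scoring each particle's color histogram against a reference histogram with the Bhattacharyya coefficient. The patch size, noise scales, and all function and variable names are illustrative assumptions.

import numpy as np

def color_hist(patch, bins=8):
    # Joint RGB histogram of an image patch, normalized to sum to 1.
    h, _ = np.histogramdd(patch.reshape(-1, 3),
                          bins=(bins, bins, bins),
                          range=((0, 256), (0, 256), (0, 256)))
    h = h.ravel()
    return h / (h.sum() + 1e-12)

def weights(frame, particles, ref_hist, half=15, sigma=0.1):
    # Weight each particle by the Bhattacharyya similarity between the
    # reference histogram and the histogram of the patch around it.
    H, W = frame.shape[:2]
    w = np.zeros(len(particles))
    for i, (x, y, vx, vy) in enumerate(particles):
        x0, x1 = int(max(x - half, 0)), int(min(x + half, W))
        y0, y1 = int(max(y - half, 0)), int(min(y + half, H))
        if x1 <= x0 or y1 <= y0:
            continue                      # particle drifted off-frame
        bc = np.sum(np.sqrt(color_hist(frame[y0:y1, x0:x1]) * ref_hist))
        w[i] = np.exp(-(1.0 - bc) / (2 * sigma ** 2))
    s = w.sum()
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))

def step(frame, particles, ref_hist, rng, pos_noise=8.0, vel_noise=2.0):
    # Predict: constant-velocity (Newtonian) motion plus Gaussian noise so
    # the particles can deviate from the expected dynamic model.
    particles[:, 0:2] += particles[:, 2:4]
    particles[:, 0:2] += rng.normal(0.0, pos_noise, (len(particles), 2))
    particles[:, 2:4] += rng.normal(0.0, vel_noise, (len(particles), 2))

    # Weight and estimate: weighted mean of the particle positions.
    w = weights(frame, particles, ref_hist)
    estimate = np.average(particles[:, 0:2], axis=0, weights=w)

    # Resample: particles with weak likelihoods are dropped and replaced
    # by copies of stronger ones.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx].copy(), estimate

# Example: track a 30 x 30 patch with 100 particles of state [x, y, vx, vy].
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)   # stand-in frame
ref_hist = color_hist(frame[100:130, 150:180])                # target patch
particles = np.column_stack([rng.normal(165.0, 10.0, 100),
                             rng.normal(115.0, 10.0, 100),
                             np.zeros(100), np.zeros(100)])
particles, estimate = step(frame, particles, ref_hist, rng)

In a real tracker the loop over frames would repeat the step() call, and the reference histogram would come from a user-selected or detected target region rather than a random patch.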

 

*References can be found on the complications page

Figure 1: Left: Sample of Guesses (~100 Blue Circles); Right: Most Likely Match
Figure 2: Particle Filter Matches Throughout Path
   
Videos of the Sampling and Tracking

[Two embedded Flash videos: one of the particle sampling and one of the tracking results.]