The hardware used was Canon VB-C60 Network Cameras; their mechanical operation speed was adequate for potentially fast-moving targets, and their zoom was powerful enough to cover large outdoor areas. The cameras communicate over HTTP, a request-response protocol that carries all information between our algorithm and the cameras. The system's software is written in C and implements the HTTP[4]* protocol as well as Willow Garage's OpenCV package. OpenCV handled all of the visual output and preprocessing that was required, including target selection, displaying the different videos (live feed, tracking feed, processing feed, etc.), and debugging output. After using these tools to prove the concept of tracking with a single camera, we were able to expand to more applicable and useful situations.
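As a rough illustration of this request-response setup, the sketch below issues a pan/tilt command to a camera over a plain TCP socket. The CGI path /control.cgi and the pan/tilt parameter names are placeholders rather than the VB-C60's actual control API, which should be taken from the camera's HTTP documentation.

/*
 * Minimal sketch of issuing a pan/tilt command to a network camera
 * over HTTP.  The host, port, CGI path, and parameter names below are
 * placeholders -- the real Canon VB-C60 endpoint differs.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static int send_ptz_request(const char *host, int pan, int tilt)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0)
        return -1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);

    /* Build a plain HTTP GET carrying the desired pan/tilt values. */
    char req[256];
    snprintf(req, sizeof(req),
             "GET /control.cgi?pan=%d&tilt=%d HTTP/1.1\r\n"
             "Host: %s\r\nConnection: close\r\n\r\n",
             pan, tilt, host);
    write(fd, req, strlen(req));

    /* Read (and discard) the camera's response status line. */
    char resp[512];
    read(fd, resp, sizeof(resp) - 1);
    close(fd);
    return 0;
}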

Knowing the cameras' positions relative to one another, both in Euclidean distance and in orientation, makes it possible to determine where a target is with respect to one camera as long as its position with respect to another camera is known.  This transformation is known as a Jacobian, and it is the foundation for transferring the target's coordinates from one camera to the next.  During surveillance operations, further values are hard coded into the cameras, such as specific ranges of interest where adjacent cameras should change their orientation and attempt to begin tracking the target.  Two major applications were investigated for these coordinated cameras: one where all the cameras are in a single 'smart' room, tracking selected objects as they interact with one another, and the other a distributed surveillance system that passes information from camera to camera as the target moves between the different viewpoints.  The smart room implementation takes target selections from a MASTER camera, which then finds the optimal DRONE cameras to track the target's movements.  In the distributed surveillance system, the MASTER camera is only in command while selecting its target, and then behaves like a DRONE camera when the number of targets exceeds the number of DRONES.
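To make the coordinate transfer concrete, the short sketch below applies a planar rotation-plus-translation to re-express a target position from one camera's frame in another's. The struct and function names are illustrative, and the angle and offsets stand in for the hand-measured relative pose described above (see Figure 2 for the transformation matrix used in the actual system).

/*
 * Sketch of camera-to-camera coordinate transfer.  theta is the
 * relative orientation of camera B with respect to camera A (radians),
 * and (dx, dy) is A's position expressed in B's frame; both are assumed
 * to be measured in advance and hard coded.
 */
#include <math.h>

typedef struct { double x, y; } point2d;

/* Convert a target position known in camera A's frame into camera B's frame. */
static point2d transfer_target(point2d in_a, double theta, double dx, double dy)
{
    point2d in_b;
    in_b.x = cos(theta) * in_a.x - sin(theta) * in_a.y + dx;
    in_b.y = sin(theta) * in_a.x + cos(theta) * in_a.y + dy;
    return in_b;
}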

A full video demonstration is available HERE

*References can be found on the complications page

Figure 1: Distributed Surveillance Orientation
Figure 2: Jacobian Transformation Matrix
Figure 3: Decision Flow Chart