This paper presents a real-time, dynamic system that uses high-resolution gimbals and motorized lenses with position encoders on their zoom and focus elements to “recalibrate” the system as needed to track a target. Systems that calibrate an initial mapping between pixels of a wide field-of-view (FOV) master camera and the pan-tilt (PT) settings of a steerable narrow-FOV slave camera assume that the target travels on a plane. As the target moves through the FOV of the master camera, the slave camera’s PT settings are adjusted to keep the target centered within its FOV. In this paper, we describe a system we have developed that allows both cameras to move and extracts the 3D coordinates of the target. This is done with only a single initial calibration between pairs of cameras and high-resolution pan-tilt-zoom (PTZ) platforms. Using the PT settings of the PTZ platform together with the precalibrated settings of a preset zoom lens, the 3D coordinates of the target are extracted, and their accuracy is compared against that of a laser range finder and of a static-dynamic camera pair.
Investigating the use of cooperating camera systems for real-time, high-definition video surveillance to detect and track anomalies over time and over adjustable fields of view moves us toward the development of an automated, smart surveillance system. The master-slave architecture, in which a wide field-of-view camera scans a large area for an anomaly and controls a narrow field-of-view camera to focus on a particular target, is commonly used in surveillance setups to track an object. The static-camera solution, and the master-slave architecture with a static master camera, are well-researched problems, but both are limited by the field of view of the master camera.
In particular, due to the computational complexity of object identification, having such systems operate in real time is a hurdle in itself. These setups often use background subtraction to detect a target within the FOV of the static camera, and a homography mapping between the pixels of the static camera and the pan/tilt (PT) settings of the slave camera to focus on the target. Look-up tables and interpolation functions are common tools used to navigate the different settings and find the optimum setting for target tracking. Essentially, a constraint is placed on the target, such as the percentage of the image it must cover, the centering of the target within the image at all times, or a combination of the two, and the intrinsic/extrinsic parameters are varied to find the setting that best satisfies these constraints.
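The pixel-to-angle mapping described above can be sketched as follows. This is an illustrative assumption, not the implementation from any cited system: the function names, the degree-valued angles, and the direct-linear-transform (DLT) fit are ours, and a real system would estimate the mapping from measured calibration correspondences.

```python
import numpy as np

def pixel_to_pan_tilt(H, u, v):
    """Map a master-camera pixel (u, v) to slave (pan, tilt) via a
    3x3 homography H, using the usual projective normalization."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

def fit_homography(pixels, angles):
    """Estimate H by DLT from >= 4 correspondences between master
    pixels and measured slave (pan, tilt) settings."""
    A = []
    for (u, v), (pan, tilt) in zip(pixels, angles):
        A.append([u, v, 1, 0, 0, 0, -pan * u, -pan * v, -pan])
        A.append([0, 0, 0, u, v, 1, -tilt * u, -tilt * v, -tilt])
    # The null vector of A (last right-singular vector) is H up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)
```

A look-up table would store one such H per calibrated configuration; the interpolation functions mentioned above blend between them for intermediate settings.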
This paper presents a dual-dynamic camera system that uses in-house designed, high-resolution gimbals and commercial-off-the-shelf (COTS) motorized lenses with position encoders on their zoom and focus elements to “recalibrate” the system as needed to track a target. The encoders on the lenses and gimbals of the master camera control the slave camera to zoom in on and follow a target, as well as to extract its 3D coordinates relative
to the position of the master camera. This system interpolates the homography matrix between pixels of the master camera and angles of the slave camera for different pan/tilt settings of the master camera. The master camera keeps the target within a specific region of the image, adjusting its own angle based on the target's trajectory so that the target stays within that region.
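These two ideas can be sketched together as follows. The bilinear blend over a grid of calibrated homographies and the fractional keep-in margin are our illustrative assumptions; the paper does not specify the interpolation scheme or the region geometry.

```python
import numpy as np

def interpolate_homography(h_grid, pans, tilts, pan, tilt):
    """Bilinearly interpolate calibrated homographies stored on a grid of
    master orientations. h_grid[i][j] is the 3x3 homography calibrated
    with the master camera at (pans[i], tilts[j]); pans/tilts ascend."""
    i = int(np.clip(np.searchsorted(pans, pan) - 1, 0, len(pans) - 2))
    j = int(np.clip(np.searchsorted(tilts, tilt) - 1, 0, len(tilts) - 2))
    s = (pan - pans[i]) / (pans[i + 1] - pans[i])
    t = (tilt - tilts[j]) / (tilts[j + 1] - tilts[j])
    H = ((1 - s) * (1 - t) * h_grid[i][j] + s * (1 - t) * h_grid[i + 1][j]
         + (1 - s) * t * h_grid[i][j + 1] + s * t * h_grid[i + 1][j + 1])
    return H / H[2, 2]  # fix the projective scale

def target_outside_region(cx, cy, width, height, margin=0.2):
    """True when the target centroid (cx, cy) leaves the central keep-in
    region of a width x height image; the master then adjusts its angle."""
    return not (margin * width <= cx <= (1 - margin) * width
                and margin * height <= cy <= (1 - margin) * height)
```

Interpolating the matrices entrywise (then renormalizing) is a pragmatic choice for homographies that vary smoothly with small master-camera rotations.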
The homography mapping between the master and slave cameras is updated any time the master camera moves, so that control of the slave camera remains continuous. The master camera suspends background subtraction whenever it detects that it needs to move and reinitializes it once the movement is complete. The system operates in real time, and since the encoder settings are in absolute coordinates, it can potentially provide a 3D reconstruction of the target's trajectory.
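Because both encoders report absolute angles, each camera defines a viewing ray in a common frame, and the target's 3D position follows by ray intersection. The sketch below uses the midpoint of the shortest segment between the two rays; the angle convention (pan about the vertical axis, tilt as elevation, in degrees) and the function names are our assumptions, not the paper's stated method.

```python
import numpy as np

def ray_from_pan_tilt(pan_deg, tilt_deg):
    """Unit viewing ray for a camera at the given absolute pan/tilt,
    with pan measured about the vertical axis and tilt as elevation."""
    p, t = np.radians([pan_deg, tilt_deg])
    return np.array([np.cos(t) * np.sin(p), np.sin(t), np.cos(t) * np.cos(p)])

def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + s*d1 and
    c2 + t*d2 (the standard closest-point construction for skew lines)."""
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # approaches 0 for near-parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```

In practice encoder noise means the two rays rarely intersect exactly, which is why the midpoint (rather than a true intersection) is the usual estimate, and why near-parallel geometry between the two cameras degrades depth accuracy.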