Dual Dynamic PTZ Tracking Using Cooperating Cameras

Abstract

Figure: Master-slave camera surveillance setup, with the gimbals shown serving as the master and slave cameras.

This paper presents a real-time, dynamic system that uses high-resolution gimbals and motorized lenses with position encoders on their zoom and focus elements to “recalibrate” the system as needed to track a target. Systems that calibrate an initial mapping between pixels of a wide field-of-view (FOV) master camera and the pan-tilt (PT) settings of a steerable narrow-FOV slave camera assume that the target is travelling on a plane. As the target travels through the FOV of the master camera, the slave camera's PT settings are adjusted to keep the target centered within its FOV. In this paper, we describe a system we have developed that allows both cameras to move and that extracts the 3D coordinates of the target. This is done with only a single initial calibration between pairs of cameras and high-resolution pan-tilt-zoom (PTZ) platforms. Using the PT settings of the PTZ platform along with the precalibrated settings of a preset zoom lens, the 3D coordinates of the target are extracted, and their accuracy is compared with that of a laser range finder and of a static-dynamic camera pair.

1. Introduction
Investigating the use of cooperating camera systems for real-time, high-definition video surveillance that detect and track anomalies over time and over adjustable fields of view moves us toward an automated, smart surveillance system. The master-slave architecture, in which a wide field-of-view camera scans a large area for an anomaly and controls a narrow field-of-view camera that focuses in on a particular target, is commonly used in surveillance setups to track an object [1]-[3]. The static-camera solution [4]-[6] and the master-slave architecture with a static master camera [6] [7] are well-researched problems, but both are limited by the field of view of the master camera.

In particular, due to the computational complexity of object identification, having such systems operate in real time is a hurdle in itself [1] [2] [8]. These setups often use background subtraction to detect a target within the FOV of the static camera and use a homography mapping between the pixels of the static camera and the pan/tilt (PT) settings of the slave camera to focus on the target. Look-up tables [3] and interpolation functions [9]-[11] are common tools for navigating through the different settings to find the optimum setting for target tracking [6]. Essentially, a constraint is placed on the target, such as the percentage of the image it must cover, or the requirement that it remain centered within the image at all times, or a combination of the two, and the intrinsic/extrinsic parameters are varied to find the setting that best satisfies these constraints.
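To make the pixel-to-angle mapping concrete, the following is a minimal sketch, not the implementation from any of the cited works, of how a calibrated 3x3 matrix, or a small look-up table of such matrices interpolated over the master camera's pan/tilt grid, could convert a target's pixel location in the master image into pan/tilt commands for the slave camera. All names, the grid layout, and the treatment of the matrix as mapping pixels directly to pan/tilt values are illustrative assumptions.

```python
import numpy as np

def map_pixel_to_slave_pt(pixel_xy, H):
    """Map a master-camera pixel to slave pan/tilt (degrees) through a
    3x3 matrix H estimated during the initial calibration."""
    u, v = pixel_xy
    p = H @ np.array([u, v, 1.0])          # homogeneous mapping
    pan, tilt = p[0] / p[2], p[1] / p[2]   # dehomogenize
    return pan, tilt

def interpolate_homography(master_pan, master_tilt, lut):
    """Bilinearly blend the matrices stored in a look-up table
    lut[(pan_i, tilt_j)] -> H for the master pan/tilt grid nodes
    surrounding the current master orientation (illustrative only)."""
    pans = sorted({p for p, _ in lut})
    tilts = sorted({t for _, t in lut})
    p0 = max([p for p in pans if p <= master_pan], default=pans[0])
    p1 = min([p for p in pans if p >= master_pan], default=pans[-1])
    t0 = max([t for t in tilts if t <= master_tilt], default=tilts[0])
    t1 = min([t for t in tilts if t >= master_tilt], default=tilts[-1])
    wp = 0.0 if p1 == p0 else (master_pan - p0) / (p1 - p0)
    wt = 0.0 if t1 == t0 else (master_tilt - t0) / (t1 - t0)
    H = ((1 - wp) * (1 - wt) * lut[(p0, t0)] + wp * (1 - wt) * lut[(p1, t0)]
         + (1 - wp) * wt * lut[(p0, t1)] + wp * wt * lut[(p1, t1)])
    return H / H[2, 2]                     # keep a consistent scale
```

In this sketch the calibration matrix is parameterized to map master pixels directly to slave pan/tilt values; the published systems may instead map pixel to pixel and then convert to angles, and the interpolation grid is only meant to illustrate the look-up-table idea.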

This paper presents a dual-dynamic camera system that uses in-house designed, high-resolution gimbals [12] and commercial-off-the-shelf (COTS) motorized lenses with position encoders on their zoom and focus elements to “recalibrate” the system as needed to track a target. The encoder readings from the lenses and gimbals of the master camera allow the slave camera to zoom in on and follow a target, and allow the target's 3D coordinates to be extracted relative to the position of the master camera. This system interpolates the homography matrix between pixels of the master camera and angles of the slave camera for different pan/tilt settings of the master camera. The master camera keeps the target within a specific region of the image and adjusts its own angles, based on the trajectory of the target, to force the target to stay within that region.
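As a rough illustration of this region-keeping behavior, and not the published control law, the master camera can test whether the target's centroid has left a central region of the image and, if so, command a pan/tilt step proportional to the offset, biased along the target's estimated direction of travel. The region size, gains, and pixel-to-degree conversion below are assumed values.

```python
import numpy as np

def master_region_step(centroid, velocity, image_size,
                       region_frac=0.6, deg_per_pixel=(0.02, 0.02)):
    """Return an illustrative (pan, tilt) increment in degrees that
    re-centers the target once it drifts outside the central region.

    centroid, velocity : target position and image-plane velocity (pixels)
    image_size         : (width, height) of the master image
    region_frac        : fraction of the image treated as the keep-in region
    deg_per_pixel      : assumed small-angle conversion from pixels to degrees
    """
    w, h = image_size
    cx, cy = w / 2.0, h / 2.0
    half_w, half_h = region_frac * w / 2.0, region_frac * h / 2.0

    dx, dy = centroid[0] - cx, centroid[1] - cy
    if abs(dx) <= half_w and abs(dy) <= half_h:
        return 0.0, 0.0                      # target still inside the region

    # Lead the target slightly along its trajectory so it re-enters
    # the region ahead of its motion rather than right at the boundary.
    lead = 0.25
    pan_step = (dx + lead * velocity[0]) * deg_per_pixel[0]
    tilt_step = (dy + lead * velocity[1]) * deg_per_pixel[1]
    return pan_step, tilt_step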

The homography mapping between the master and slave cameras is updated any time the master camera moves, so that the control between the master and slave cameras remains continuous. The master camera turns off background subtraction each time it detects that it needs to move and reinitializes it after it has completed its movement. The system operates in real time, and because the encoder settings are in absolute coordinates, it can potentially be used to provide a 3D reconstruction of the trajectory of the target.
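Because both gimbals report absolute pan/tilt angles, the target's 3D position can in principle be triangulated by intersecting the two cameras' viewing rays. The following is a minimal sketch under assumed conventions (pan about the vertical axis, tilt measured from the horizontal plane, and a baseline between the camera centers known from the initial calibration); it takes the midpoint of the shortest segment between the two rays and is not the authors' published method.

```python
import numpy as np

def pt_to_ray(pan_deg, tilt_deg):
    """Unit viewing ray for the assumed conventions: pan rotates about the
    vertical (z) axis, tilt elevates the ray out of the horizontal plane."""
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    return np.array([np.cos(tilt) * np.cos(pan),
                     np.cos(tilt) * np.sin(pan),
                     np.sin(tilt)])

def triangulate(master_origin, master_pt, slave_origin, slave_pt):
    """Midpoint of the shortest segment between the two viewing rays,
    taken as the estimated 3D target position (None if rays are parallel)."""
    o1, o2 = np.asarray(master_origin, float), np.asarray(slave_origin, float)
    d1, d2 = pt_to_ray(*master_pt), pt_to_ray(*slave_pt)

    # Solve for ray parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b
    if abs(denom) < 1e-9:                 # near-parallel rays: no reliable fix
        return None
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Hypothetical example: a 2 m baseline along x, both cameras aimed at
# roughly the same point a few metres in front of the pair.
target = triangulate([0, 0, 0], (11.3, 3.4), [2.0, 0, 0], (18.4, 5.4))
```

In practice the accuracy of such an estimate depends on the baseline, the angular resolution of the gimbal encoders, and how well the two cameras are synchronized, which is why the paper compares the extracted coordinates against a laser range finder and a static-dynamic camera pair.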

Source:

Journal: Intelligent Control and Automation
DOI: 10.4236/ica.2015.61006
Paper ID: 53319
