Military Embedded Systems

Fusing video and radar tracks in multisensor military security

January 29, 2014

Dr. David G. Johnson

Cambridge Pixel

Effective fusion of multiple sensors, such as radars and cameras, is the key to presenting a situational display that informs the operator and supports critical decision-making in military security applications. However, while a track display with track fusion offers the benefit of simplifying the presentation based on an assessment of threat, the approach is only as effective as the rules used to process, filter, and select the information. Complementing the processed display with the ability to show primary sensor data allows complex information to be presented simply where there is confidence in the data interpretation, while still permitting the operator to observe raw sensor data for manual interpretation, verification, or simply reassurance.

A complex military security system uses multiple overlapping sensors to provide coverage of an area of interest. These sensors include radars and cameras, which may be co-located and combined in range to provide near-, medium-, and long-range detection, or sited at different locations to enlarge the geographical coverage. A moving target acquired by one sensor may then be tracked, with continuity of identity, across multiple sensors, ensuring that the operator is presented with a consistent view of the target as it moves through the coverage of each. The challenge is to present sensor and processed data in a way that supports the operator's interpretation of the situation, with neither so much data that there is a risk of confusion nor so little that critical information may be missing.

Presenting the operator with a high-level interpretation of the scene requires automatic identification of targets from sensor data and subsequent fusion of those tracks across overlapping sensors. Removing the raw sensor data and presenting processed, filtered, and prioritized reports simplifies the display and ensures that the operator sees only the relevant information. The key is getting the processing right, so that real targets of interest are rarely rejected (maximizing the probability of detection) and false targets are reliably rejected (minimizing the probability of false alarm). A simple association scheme is sketched below.
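
To make the idea concrete, the sketch below shows one simple way tracks from two overlapping sensors might be associated so that a target keeps a single identity as it crosses between them. It gates on position only, and the structure, names, and gate size are illustrative assumptions rather than a description of any particular product; a real tracker would also gate on velocity and measurement uncertainty.

```cpp
// Minimal sketch of track-to-track association between two overlapping
// radars. All names and the 100 m gate are illustrative assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

struct Track {
    int id;       // identity to be preserved across sensors
    double x, y;  // position in a common coordinate frame (metres)
};

// Associate each sensor-B track with the nearest fused track inside
// a fixed gate, so the fused picture reports one target, not two.
void fuseTracks(std::vector<Track>& fused,
                const std::vector<Track>& sensorB, double gateMetres)
{
    for (const Track& tb : sensorB) {
        const Track* best = nullptr;
        double bestDist = gateMetres;
        for (const Track& ta : fused) {
            double d = std::hypot(ta.x - tb.x, ta.y - tb.y);
            if (d < bestDist) { bestDist = d; best = &ta; }
        }
        if (best)
            std::printf("Track %d associated with fused track %d\n",
                        tb.id, best->id);
        else
            fused.push_back(tb);  // no counterpart: new fused track
    }
}

int main() {
    std::vector<Track> fused = {{1, 4900.0, 120.0}};  // from sensor A
    std::vector<Track> b     = {{7, 4930.0, 110.0}};  // from sensor B
    fuseTracks(fused, b, 100.0);  // track 7 associates with track 1
}
```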

 

Figure 1: Track data and radar video from two sensors.


Displaying multisensor data

Figure 1 shows two overlapping radars providing enhanced geographical coverage around a security installation. This arrangement is typical of the protection of a high-value asset, where one radar type observes short-range targets (for example, out to 5 km) and a different sensor covers longer-range targets (for example, out to 50 km). The display shows the presentation of the radar video (yellow for the short-range sensor, orange for the long-range sensor). Automatic processing has analyzed the radar video to identify targets of potential interest; these are shown as white track symbols. The display can be simplified, as seen in Figure 2, by removing the primary radar video and leaving only the tracks. In this example, tracks may be derived from a single primary radar or from the fusion of the two.

Track extraction is an automatic software process configured to create tracks after considering several scans of radar video in which a target-like response appears consistently. The number of scans considered affects the speed of detection: observing for longer reduces false alarms but delays acquisition. A balance must be struck between the two, with different applications favoring faster acquisition or fewer false alarms; a common confirmation rule is sketched below. The presentation in Figure 2 removes the sensor data and shows only the targets that have passed the detection criteria.
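
A widely used confirmation rule of this kind is "M-of-N" initiation: a candidate becomes a confirmed track only after it is detected in at least M of the last N scans. The sketch below is a minimal illustration of that rule; the class name and the example values of M and N are assumptions, not settings from a specific tracker.

```cpp
// Minimal sketch of M-of-N track confirmation. Raising N suppresses
// false alarms but slows acquisition; the names are illustrative.
#include <deque>

class TrackInitiator {
public:
    TrackInitiator(int m, int n) : m_(m), n_(n) {}

    // Call once per radar scan; returns true once the candidate has
    // been detected in at least M of the last N scans.
    bool update(bool detectedThisScan) {
        history_.push_back(detectedThisScan);
        if (static_cast<int>(history_.size()) > n_)
            history_.pop_front();
        int hits = 0;
        for (bool d : history_) hits += d ? 1 : 0;
        return hits >= m_;
    }

private:
    int m_, n_;                 // confirm after M hits in N scans
    std::deque<bool> history_;  // sliding window of recent scans
};

// Example: confirm a track after 3 detections in any 5 consecutive
// scans -- TrackInitiator initiator(3, 5);
```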

 

Figure 2: Display simplified to show track data only with radar video removed.


If there is some uncertainty in the interpretation of the high-level fused display of Figure 2, the operator may enable the display of the primary sensor data, as shown in Figure 1. The radar video is presented as graphical layers drawn in semitransparent colors, so the additional information enriches the picture without hiding the overlays. A priority assigned to each layer, together with transparency in the graphics rendering, ensures that the sensor video does not obscure the processed track labels. The objective is to present the sensor information in a way that aids interpretation of the security picture but does not mask the presentation of other information.
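
As a sketch of how such a layered display might be organized, the fragment below keeps a list of layers, each carrying a priority and an alpha value, and draws them lowest priority first so that the track overlay always lands on top. The structure and names are illustrative assumptions, not a specific product API.

```cpp
// Minimal sketch of priority-ordered display layers. Layers are drawn
// lowest priority first, so the opaque track symbols and labels are
// never obscured by the semitransparent radar video beneath them.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Layer {
    const char* name;
    int priority;        // higher values draw later (on top)
    std::uint8_t alpha;  // 0 = invisible, 255 = fully opaque
};

void renderFrame(std::vector<Layer>& layers)
{
    std::sort(layers.begin(), layers.end(),
              [](const Layer& a, const Layer& b) {
                  return a.priority < b.priority;
              });
    for (const Layer& l : layers) {
        // A real renderer would alpha-blend each layer's image into
        // the frame buffer here, using l.alpha (see the blend below).
        (void)l;
    }
}

// Typical ordering: map underlay, two radar videos, track overlay.
// std::vector<Layer> layers = {{"map", 0, 255}, {"radarShort", 1, 128},
//                              {"radarLong", 2, 128}, {"tracks", 3, 255}};
```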

Fusing radar video

What the static screenshot of Figure 1 cannot show is that the radar images are continuously redrawn as the radar sweeps, with the display typically updated 30 times per second. This presents a moving radar sweep that matches the scanning of the radar, reassuring the operator of an active measurement process. The display processing that creates the multilayered picture therefore combines scan-converted radar and graphics images 30 times per second, compositing them into a single image that is then copied to the display window. The processing builds the display in stages: draw the underlay map, add the alpha-blended radar layers, then draw the overlay graphics. The choice of colors and the degree of transparency in the alpha blending determine the appearance of the radar layers, from invisible to highly visible.
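
The per-pixel arithmetic behind that compositing is a conventional "source over" alpha blend; the sketch below shows the blend of a semitransparent radar pixel onto the map underlay. The pixel format and names are illustrative assumptions.

```cpp
// Minimal sketch of a per-pixel "source over" alpha blend, as used
// when compositing a semitransparent radar layer onto the underlay.
#include <cstdint>

struct RGB { std::uint8_t r, g, b; };

// Blend src over dst with the given alpha: 0 leaves the radar layer
// invisible, 255 lets it fully obscure the map beneath.
inline RGB blend(RGB dst, RGB src, std::uint8_t alpha)
{
    auto mix = [alpha](std::uint8_t d, std::uint8_t s) {
        return static_cast<std::uint8_t>((s * alpha + d * (255 - alpha)) / 255);
    };
    return {mix(dst.r, src.r), mix(dst.g, src.g), mix(dst.b, src.b)};
}

// Per frame (typically 30 times per second): start from the map,
// blend each radar layer in priority order, then draw the opaque
// track symbols and labels on top.
```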

The combination of multiple images is comfortably handled by a modern graphics card in a Windows or Linux software application. The application shown in Figures 1 and 2 is a Windows application that runs on standard computer hardware. For larger screen sizes and multiple radars, a PCI Express graphics card accelerates the composition of the layers. Such a software solution is attractive to users because today’s standardized single-board computers or desktop PCs can easily handle complex real-time graphics. The specialized hardware that historically underpinned this type of solution is no longer needed, and the software solution is flexible enough to give users the option to combine processed and real-time sensor information onto a single display.

With the real-time data enabled selectively, on request or when uncertainty demands it, the result of this approach is a high-level display of fused information that can be expected to be correct most of the time, backed by the raw sensor data to assist in challenging scenarios.


Dr. David G. Johnson is technical director at Cambridge Pixel. He holds a B.Sc. in electronic engineering and a Ph.D. in sensor technology from the UK's University of Hull. He has worked in radar processing and display for 20 years and led teams developing software solutions for military radar tracking and radar scan conversion. Dr. Johnson can be reached at [email protected].

Cambridge Pixel | New Cambridge House, Litlington, Royston, Herts | +44 (0) 1763 852749 | www.cambridgepixel.com

 
