Image fusion enhances light armored vehicle capability
November 11, 2008
There is a wide variety of video imaging sensors, each offering optimal viewing or detection characteristics depending on the type of object being observed and the meteorological, lighting, or background conditions. To maximize the probability of detection, images from more than one sensor can be compared in real time to maintain image context and highlight objects of interest. Images can be displayed from each sensor side-by-side for an operator to discriminate directly.
Or, alternatively, the images can be fused to create a truly realistic composite view from both sensors. Such a sensor fusion system would be ideal for deployment on light armored vehicles or Unmanned Aerial Vehicles (UAVs) if the critical criteria of space, weight, power, and affordability are met.
A typical small armored vehicle will often be fitted with two or more cameras, each operating in a different part of the spectrum (for example, visible, near infrared, or thermal infrared). In general, a thermal imager is best at detecting the thermal signature of troops or ground vehicles. In contrast, the background – and therefore the context for the objects – can be identified better from the visible image. The context contains important information enabling a reconnaissance mission to report accurate dispositions of hostile troops or vehicles to other participants, for example (Figure 1).
The figure's leftmost image is from a TV camera, and the center image is from a thermal infrared camera that cannot discriminate the background detail because of adverse weather conditions. The fused image on the right clearly shows both the vehicle (which has been reversed to black) and the background detail together. Using information in this way has great practical advantages for tactical or covert surveillance as multiple sensors can see through smoke, mist, precipitation, and camouflage. When used from the air, image fusion provides better definition of topographical features, while the visible spectrum also provides color discrimination between similar object types.
One method for fusing images is to use a linear combination of pixel intensities. However, this relies on matched optical characteristics of the sensors. It also has the effect of highlighting or even canceling objects that appear on both sensors while dimming those that appear on only one. An alternative technique is to use colors to differentiate between the images, but this results in a loss of realism. The final alternative, which avoids these unwanted effects, is to use a multi-resolution algorithm. It decomposes each of the images into low, medium, and high resolutions to provide representations of spatial context, object size, and fine detail, which are then combined with different weightings to form the displayable image. The algorithm preserves the clarity of objects detected by only one sensor and combines objects from both sensors in a natural form.
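The contrast between the two approaches can be sketched in a few lines of Python. This is a toy illustration on 1-D intensity rows, not the article's actual algorithm: the `lowpass`/`highpass` split stands in for a real multi-band pyramid, and the two-frame input is assumed to be already registered.

```python
# Toy comparison of linear fusion vs. a two-band multi-resolution
# fusion, on 1-D intensity rows (real systems use 2-D pyramids).

def lowpass(row):
    """3-tap moving average with edge replication (the coarse band)."""
    padded = [row[0]] + row + [row[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3.0
            for i in range(len(row))]

def linear_fuse(a, b, w=0.5):
    """Pixel-wise weighted average: simple, but a feature present in
    only one sensor is dimmed by the weighting."""
    return [w * x + (1.0 - w) * y for x, y in zip(a, b)]

def multires_fuse(a, b):
    """Average the coarse bands, but in the detail band keep whichever
    sensor's response is stronger, so an object seen by only one
    sensor survives at full contrast."""
    low_a, low_b = lowpass(a), lowpass(b)
    high_a = [x - l for x, l in zip(a, low_a)]
    high_b = [x - l for x, l in zip(b, low_b)]
    fused_low = [(la + lb) / 2.0 for la, lb in zip(low_a, low_b)]
    fused_high = [ha if abs(ha) >= abs(hb) else hb
                  for ha, hb in zip(high_a, high_b)]
    return [l + h for l, h in zip(fused_low, fused_high)]

# A hot target visible only to the thermal sensor:
thermal = [10.0, 10.0, 90.0, 10.0, 10.0]
visible = [10.0, 10.0, 10.0, 10.0, 10.0]

avg   = linear_fuse(thermal, visible)    # target averaged down to 50
multi = multires_fuse(thermal, visible)  # detail band kept at full strength
```

Running this shows the linear average halving the single-sensor target's contrast, while the multi-resolution result keeps its detail component intact.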
Whichever method is used to fuse the images, additional correction is required to resolve physical and temporal misalignment. Physical misalignment can be caused by the positions of the sensors on the vehicle or by unmatched optical characteristics. This can be corrected by warping one image to fit the other. Temporal alignment is often required if the sensors or their video transmission paths delay one image stream more than the other, resulting in moving objects appearing in more than one position.
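A minimal sketch of the two corrections, assuming the misalignment has already been measured offline as a fixed pixel shift and a fixed frame delay (real systems apply a full affine or perspective warp and measure the delay dynamically):

```python
# Physical alignment: a nearest-neighbour translation stands in for the
# general warp that maps one sensor's geometry onto the other's.
# Temporal alignment: a fixed-length buffer delays the faster stream.
from collections import deque

def warp_shift(img, dx, dy, fill=0):
    """Shift a 2-D image (list of rows) by (dx, dy) pixels,
    filling uncovered pixels with `fill`."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = img[sy][sx]
    return out

class DelayLine:
    """Buffers the faster stream by a fixed number of frames so that
    moving objects line up in both streams before fusion."""
    def __init__(self, frames):
        self.buf = deque(maxlen=frames + 1)

    def push(self, frame):
        self.buf.append(frame)
        return self.buf[0]  # oldest buffered frame

# Example: warp the visible frame by one pixel, delay it by two frames.
frame = [[1, 2], [3, 4]]
aligned = warp_shift(frame, 1, 0)   # rows become [0, 1] and [0, 3]
delay = DelayLine(2)
outputs = [delay.push(i) for i in range(5)]  # [0, 0, 0, 1, 2]
```

Once both streams are in the same geometry and the same time base, either fusion method above can be applied frame by frame.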
Hardware or software solution
Image fusion could be implemented purely in software, particularly where raw sensor images can be packetized and streamed over a network to centralized computing resources. However, this level of resource is unusual for small, light armored vehicles, making a hardware solution an attractive alternative. To meet this demand, GE Fanuc Intelligent Platforms has developed the IMP20 image fusion module, measuring only 4" x 2.7" x 0.5" (100 mm x 68 mm x 12 mm). It accepts PAL or NTSC video from two out of three sources. Using a new approach to the multi-resolution algorithm, it performs image fusion and image warping in real time, providing a video output that can be synchronized to a central source and overlaid with graphical symbology if required.
Image fusion is now a practical and affordable technology for a broad range of light tactical ground vehicles and other sensor platforms. It can be deployed with no additional embedded computing support to provide enhanced detection capability. Combined with other image processing technologies such as tracking and recognition/classification, it can provide the "extra edge" critical to battlefield situational awareness.
To learn more, e-mail Duncan Young at [email protected].