Military Embedded Systems

Enhanced Full-Motion Video advances military UAS surveillance - Multistream image processing and routing in real time


October 06, 2010

Jack Wade

ZMicro


Live Full-Motion Video (FMV) from UASs has become indispensable in military and intelligence operations. As these video streams play an increasing role in warfighting, inconsistent or poor image quality is becoming a threat to mission success. Fortunately, advances in FPGA-based parallel processing technology now allow real-time image processing that can significantly improve the quality and usefulness of UAS video feeds. An FMV-routing platform is also integral in ensuring FMV data gets into the right hands.

Military strategists are tooling up with Unmanned Aircraft Systems (UASs) to provide the visual surveillance today’s combat environment requires. Last year, for the first time, the Air Force trained more unmanned aircraft pilots than conventional fighter and bomber pilots combined. Robotics, and particularly UASs, have changed the nature of warfighting.

UASs can transmit a direct video feed to a nearby ground control station or broadcast via satellite to command centers around the globe. They provide an on-demand, close-up view of the combat zone that would not otherwise be possible. UASs make it possible for commanders to make decisions and execute missions from a safe distance without endangering the lives of their troops. The key enabling technology responsible for the breakthrough capabilities of the UAS and the subsequent transformation of warfare is Full-Motion Video (FMV). Practically speaking, without FMV from the aircraft’s onboard cameras, pilots would not be able to navigate the UAS remotely from the ground (Figure 1).

 

Figure 1: Practically speaking, without FMV from the aircraft’s onboard cameras, pilots would not be able to navigate the UAS remotely from the ground. U.S. Air Force photo by Senior Airman Tiffany Trojca



 

 

The problem is that even the most sophisticated camera, from the best vantage point, may not be able to provide the clear and reliable visibility needed for mission-critical applications. For example, field-deployed UAS aircraft encounter challenging visibility conditions such as dusk, dawn, fog, haze, smoke, and bright sunlight. Furthermore, producing high-quality imagery from a moving platform such as a UAS poses interesting challenges because of the constantly shifting image perspectives. The quality of the video imagery can be further compromised by a narrow camera field of view, data link degradation, bandwidth limitations, or a highly cluttered visual scene such as an urban area or mountainous terrain. In fact, the military reports that inconsistent quality of FMV imagery is a serious problem. Mission success often depends on the ability to positively identify targets, and this requires clear – and in many cases, enhanced – imagery.

Fortunately, it is now possible to address these problems and to significantly enhance FMV image quality. Recent advances in FPGA-based parallel processing technology now provide the computing power necessary to use sophisticated image processing software algorithms that can yield dramatically enhanced FMV in real time. A platform for routing FMV into the proper hands is also integral.

Parallel processing breaks the speed barrier

The science of digital image processing has produced a large body of useful software algorithms, most of which were originally developed to process still images. These algorithms tend to be computationally intensive, and in the past, processor technology did not offer the speed necessary to keep up with the demands of FMV. To understand the processing requirements for FMV, consider the following example. The number of frames per second, the number of lines per frame, and the number of pixels per line depend on the video standard. National Television System Committee (NTSC) standard-definition video is digitized at 720 x 480 (full D1) resolution at 30 frames per second, which results in a 10.4 MHz pixel rate. High-Definition (HD) video standards carry up to six times more data per frame, so the computational load can be six times higher.
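The arithmetic behind these figures is straightforward; a quick sketch (the function name is illustrative):

```python
# Back-of-the-envelope pixel rates for the video formats discussed above.
def pixel_rate(width, height, fps):
    """Active-video pixel rate in pixels per second."""
    return width * height * fps

ntsc = pixel_rate(720, 480, 30)  # NTSC full D1 at 30 frames per second

print(f"NTSC D1: {ntsc / 1e6:.1f} Mpixels/s")  # ~10.4
print(f"An HD 1080-line frame has {1920 * 1080 / (720 * 480):.0f}x "
      f"the pixels of a D1 frame")             # 6x
```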

The actual amount of processing required per pixel depends on the image processing algorithm. For example, a popular edge detection algorithm requires approximately 130 operations per pixel. At that cost, a serial implementation on a programmable DSP with a 600 MHz clock can process only about 4.6 megapixels per second, which is not sufficient to support real-time video streams at high resolution. High-Definition Full-Motion Video (HD FMV) streaming, for example, must be processed at 30 fps at a full resolution of 1,920 x 1,080 pixels: a pixel rate of roughly 62 megapixels per second. Additionally, the time required to process one sample can be longer than the time between the arrivals of two consecutive samples, so operations must be executed in parallel. Consequently, real-time processing of digital video signals requires parallel processing to handle the high data throughput.
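The throughput budget implied by these numbers can be checked directly. A sketch, assuming one operation per clock cycle on the serial DSP:

```python
# Throughput budget for a serial DSP versus real-time HD requirements,
# using the figures cited above: 130 operations per pixel on a 600 MHz DSP.
OPS_PER_PIXEL = 130
DSP_CLOCK_HZ = 600e6

# Pixels/s a serial DSP can sustain, assuming one operation per cycle.
serial_throughput = DSP_CLOCK_HZ / OPS_PER_PIXEL

# Active pixels/s required for 1080p at 30 frames per second.
hd_requirement = 1920 * 1080 * 30

print(f"Serial DSP: {serial_throughput / 1e6:.1f} Mpixels/s")      # ~4.6
print(f"HD 1080p30 needs: {hd_requirement / 1e6:.1f} Mpixels/s")   # ~62.2
print(f"Parallelism required: ~{hd_requirement / serial_throughput:.0f}x")
```

The gap of more than an order of magnitude is why the serial approach cannot keep up, and why parallel hardware is needed.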

Today, there are FPGAs that can process FMV – and be scaled to handle any number of video streams – in real time. The advantage of FPGAs over other processors is their unmatched capacity for parallel processing: an FPGA can implement an arbitrary number of data paths and operations, up to the limit of the device capacity. But as fast as FPGAs are, achieving the zero latency required for real-time FMV takes some ingenuity. The trick is to calculate the necessary adjustment from one frame but apply it to the following frame, and so on. Calculations for the next frame are done in parallel with processing of the current frame, so no latency is introduced.
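The frame-pipelined scheme described above can be sketched in software terms. This is a minimal illustration, not the actual FPGA implementation; the brightness-gain adjustment and the function names are hypothetical stand-ins:

```python
# Sketch of the pipelined scheme: the adjustment computed from frame N
# is applied to frame N+1, so analysis never stalls the output path.
def compute_adjustment(frame):
    # Illustrative analysis: derive a gain that pulls mean brightness to 128.
    return {"gain": 128 / max(1, sum(frame) // len(frame))}

def apply_adjustment(frame, adj):
    # Illustrative correction: scale pixels, clamped to 8-bit range.
    return [min(255, int(p * adj["gain"])) for p in frame]

def process_stream(frames):
    adj = {"gain": 1.0}  # identity until the first analysis completes
    for frame in frames:
        out = apply_adjustment(frame, adj)  # uses the *previous* frame's analysis
        adj = compute_adjustment(frame)     # in hardware, runs in parallel with output
        yield out
```

In hardware the two stages run concurrently on separate logic, so each frame is emitted as fast as it arrives.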

Open-architecture algorithms afford quality imagery

Using an image processing platform based on FPGAs makes it possible to apply existing and new image processing algorithms to address the many obstacles to high-quality FMV imagery.

Such a platform would benefit from an open architecture that would facilitate algorithm porting and thus make it easier to tap into a wide variety of image processing algorithms that are commercially available.

Existing image processing algorithms offer countless possibilities for deriving new useful information from FMV streams. Image enhancement algorithms filter out visual distractions while adjusting contrast and color to help the eye focus on elements of interest. Other algorithms can pinpoint a subject hidden within a large landscape, like finding a needle in a haystack. Image mosaic algorithms can stitch together images from multiple cameras and multiple view angles to form a single unified higher-resolution view. There are also encoding/decoding and compression/decompression algorithms that make image data transmission and storage more efficient. Algorithms can be used individually, or multiple algorithms can be applied to the same video stream.

FPGA-based image processing algorithms can operate in both parallel and sequential modes. In parallel mode, an image frame can be fed simultaneously into multiple FPGAs, each working on its assigned area of interest. In sequential (or daisy-chaining) mode, the output of one FPGA is fed into a subsequent FPGA for additional processing. For example, a video stream might first be processed to eliminate spatial or temporal noise before it flows into a second algorithm that performs stabilization, object recognition, or other functions. Many different types of algorithms are important for UAS applications (Figure 2), including chroma keying, stabilization, fusion, locally adaptive contrast, and tracking of movers, among many others.
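The two composition modes can be sketched as follows. The stage functions here are illustrative placeholders for FPGA algorithm cores, not real algorithms:

```python
# Sketch of the two FPGA composition modes described above.

def daisy_chain(frame, stages):
    """Sequential mode: each core consumes the previous core's output."""
    for stage in stages:
        frame = stage(frame)
    return frame

def tiled(frame, n_tiles, stage):
    """Parallel mode: each core works on its assigned region of interest."""
    size = len(frame) // n_tiles
    tiles = [frame[i * size:(i + 1) * size] for i in range(n_tiles)]
    # In hardware the tiles are processed concurrently; here we just
    # process each region and reassemble the frame.
    return [p for tile in tiles for p in stage(tile)]

# Placeholder stand-ins for hardware algorithm cores:
denoise = lambda f: [max(0, p - 1) for p in f]   # illustrative noise filter
stabilize = lambda f: list(f)                    # illustrative stabilizer

enhanced = daisy_chain([10, 20, 30], [denoise, stabilize])
```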

 

Figure 2: Many different types of algorithms are important for UAS applications. On this display, an algorithm dehazes images for fog, smoke, and sand-storm environments.



 

 

A platform for dynamic, enhanced FMV routing

Capturing quality imagery at real-time speeds is only part of the solution. It is equally important to be able to route the right visual information to the right person at the right time. For example, ground station personnel may use FMV from a UAS to pilot the aircraft, while command personnel in Washington will require different views of the same video feed to identify targets.

An integrated platform combining large-scale video routing capabilities with sophisticated image processing would be ideal for military field operations. This platform might use a switched fabric to serve as a video matrix switch to allow any video input to be routed to any video output, including multiple outputs. Video inputs could be routed to any of the algorithms, or any combination of algorithms. Such a platform could take any video source and route it to any combination of attached displays or network connections. It could route multiple sources to one monitor, or to virtual screens within a monitor. Operators could turn image processing functions on or off, or swap the primary and picture-in-picture windows using a touch screen.
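The matrix-switch idea can be sketched as a routing table that maps each output to a source and an optional algorithm chain. The class and method names are illustrative, not an actual product API:

```python
# Sketch of any-input-to-any-output video routing: one source can fan
# out to many outputs, each with its own processing chain.
class VideoRouter:
    def __init__(self):
        self.routes = {}  # output_id -> (input_id, [algorithms])

    def route(self, input_id, output_id, algorithms=()):
        """Connect an input to an output, optionally through algorithms."""
        self.routes[output_id] = (input_id, list(algorithms))

    def deliver(self, input_id, frame):
        """Fan a frame out to every output routed from this input."""
        delivered = {}
        for out, (src, algos) in self.routes.items():
            if src == input_id:
                processed = frame
                for algo in algos:
                    processed = algo(processed)  # per-output processing chain
                delivered[out] = processed
        return delivered
```

Under this sketch, a pilot's display and a command-center display can receive the same UAS feed with different processing applied to each.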

This type of platform provides mission-configurable chat, moving maps, heads-up display, sensor video, and situational awareness. For UAS surveillance, this technology could be installed at a UAS Ground Control Station (GCS) to apply image enhancement and edge detection algorithms to incoming video streams. The edge detection algorithm would identify anomalous shapes and highlight details for surveillance and Bomb Damage Assessment (BDA).

Groundbreaking image clarity

Live, high-quality, real-time FMV from UASs has become indispensable in military and intelligence operations. New COTS systems that take advantage of FPGAs for real-time, scalable image processing – such as the Z Microsystems Any-Image-Anywhere (AIA) system – provide visual clarity not possible before.

Jack Wade is Founder and CEO of Z Microsystems, a manufacturer of rugged, mission-ready computers and display equipment for the U.S. military. He is a recognized integrator and signal-processing expert. He regularly works with the USAF and UAS suppliers to enhance UAS performance. He can be contacted at [email protected].

Z Microsystems 858-831-7010 www.zmicro.com

 

Featured Companies

ZMicro

9820 Summers Ridge Road
San Diego, CA 92121