Military Embedded Systems

Advanced image processing enables UAVs to fulfil their potential

Story

July 08, 2008

Doug Scott

Real-time image processing hardware has significant potential to solve, efficiently and practically, a number of problems currently faced by autonomous systems. A recent trial of autonomous airborne refuelling demonstrated - with great success - the potential of applying image processing to an extremely challenging problem.

There are many reasons why Unmanned Aerial Vehicles (UAVs) are attracting so much interest - and investment - from military organizations around the world. By not endangering a pilot's life, they can fly missions that would otherwise be judged too dangerous. True autonomy - the goal of much UAV research - allows an overall reduction in personnel. And the fact that mission length is potentially not a function of pilot fatigue is highly attractive.

However, mission length - until such time as solar power becomes a viable alternative - is compromised not only by pilot-related factors, but also by the need to refuel. The ideal UAV will never need to return to base for refuelling, and this requirement has been at the heart of substantial research into in-flight refuelling.

Much development effort has been expended on the potential for GPS technology to enable a refuelling tanker and a receiver UAV to move within close enough proximity of each other to allow the UAV's fuel probe to dock with the tanker's drogue. However, the degree of precision required - especially as either craft is vulnerable to disturbance factors such as turbulence - is at the very outer limits of GPS accuracy. An alternative technology - advanced image tracking and image processing - needed to be developed to complement the GPS system and enable this complex, highly sophisticated maneuver (Figure 1).

Figure 1

Evaluating that alternative was the goal of a joint project, the Autonomous Airborne Refueling Demonstration (AARD), developed by the Defense Advanced Research Projects Agency (DARPA) and Sierra Nevada Corporation (SNC). The combined effort produced the first successful demonstration of autonomous probe-and-drogue airborne refuelling, conducted at NASA Dryden Flight Research Center.

DARPA and SNC settled on using the probe-and-drogue (or hose-and-drogue) refuelling method in the demonstration because it is widely perceived as the most difficult to automate - a function of the flexibility of the hose and its susceptibility to aerodynamic disturbance. Octec - now part of GE Fanuc Intelligent Platforms - and Sierra Nevada Corporation teamed to develop and deliver the image capture and processing functionality that was central to the demonstration's success.

The challenge

At the project's outset, a number of key studies had to be carried out, some of them interrelated - for example, determining the optimum viewpoint location for the image tracking device and ascertaining the ideal field of view.

Among these studies was an evaluation of alternative image capture approaches. It was known that whichever technology was chosen, the requirement would be to provide a range measurement accuracy of approximately 36 inches at a range of 100 feet to establish the relative positioning of the probe and drogue, closing to an accuracy of 4 inches at a range of 12 feet to allow probe insertion.

Key considerations in selecting the image capture device and the medium used to transmit the captured image data for processing included:

  • The resolution of the captured image
  • The ruggedness of the device
  • Size and weight
  • Susceptibility of the transmission medium to the ElectroMagnetic Interference (EMI) that could be expected to be present in an RF-rich environment

It was determined that a high-resolution digital sensor would be the ideal solution; however, its adoption was precluded by the development time needed to customize the tracker hardware to the sensor's digital interface standard. While a high-density fiber optic transmission line was believed to offer the best resistance to EMI, testing revealed image resolution so poor that the tracker could not detect a several-pixel object at the extended 30-meter range. The fiber cable also suffered from a relatively large number of dead pixels/fibers.

Finally, a standard NTSC video camera was found to deliver sufficient image resolution to resolve the drogue and basket at a range of 30 meters. A "remote-head" version of the camera was selected to minimize the size and weight impact at the chosen mounting point. The associated transmission cable offered acceptable resistance to EMI, and the video showed no noticeable interference artifacts.

Evaluating alternative sensor-mounting positions

Although the application is ultimately intended for completely unmanned platforms - both the refuelling tanker and the UAV - the demonstration took place using a manned NASA F-18 flight research aircraft. Four candidate locations for the remote sensor on the F-18 were identified by the NASA flight crew, but modelling and simulation of the desired flight profile and viewpoint narrowed the selection to two. The Head Up Display (HUD) view gave close operating range to the drogue and offered the maximum likelihood of the drogue being within the field of view. The view from the inboard right pylon gave good drogue visibility in the terminal phase and had the advantage of being an existing camera position. Both mounting points, however, also had disadvantages that needed to be factored in.

These disadvantages were driven to a large extent by the fact that the tracking algorithm required a minimum of several pixels on the drogue target for recognition and tracking; this, in conjunction with the NTSC sensor's resolution capability, dictated a maximum Field Of View (FOV) of 55°. Too narrow a field of view, on the other hand, would make acquiring the drogue more difficult and would also cause problems as the drogue came closer and filled the entire field of view. In the event, the optimal field of view was determined to be 55°.
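
The trade-off can be illustrated with a short geometric sketch. The figures below are assumptions for illustration only - a digitized NTSC line width of 640 pixels and a drogue diameter of 0.8 meters - not measurements from the trial:

    import math

    def pixels_on_target(diameter_m, range_m, fov_deg=55.0, h_pixels=640):
        """Approximate width in pixels that a circular target subtends.

        Simple pinhole model: the sensor's h_pixels span the full
        horizontal field of view.
        """
        subtended = 2.0 * math.atan(diameter_m / (2.0 * range_m))
        return h_pixels * subtended / math.radians(fov_deg)

    # Assumed 0.8 m drogue diameter; ranges from the trial profile.
    for r_m in (30.0, 12.0, 3.0):
        print(f"{r_m:4.0f} m -> {pixels_on_target(0.8, r_m):6.1f} px")

With these assumed numbers, the target shrinks from well over a hundred pixels at 3 meters to under 20 pixels at 30 meters, which is why acquisition at the extended range drove the resolution requirement.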

For the demonstration's purposes, two camera positions were used primarily to evaluate their effectiveness. However, deployment of the image-processing hardware within a UAV environment requires only a single camera, minimizing weight and power consumption and avoiding further complexity arising from installing two or more camera sensors at multiple locations on the airframe. This was possible because of the significant development effort in creating algorithms that can accurately estimate range from a single camera.

Provisions were also made in the algorithm development to eliminate background clutter that could be mistaken for the drogue in the captured image. For example, some of the aircraft structures, such as the engine exhaust nozzles and fuel hose exit aperture, appear very much like the drogue at extended distances under certain lighting conditions. Areas behind the tanker that should be avoided - described as "avoidance volumes" - were determined through simulation.
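
How the avoidance volumes were applied at runtime is not described; one plausible scheme, sketched below with purely hypothetical region coordinates, is to discard any candidate detection whose image position falls inside a region pre-computed from those volumes:

    from dataclasses import dataclass

    @dataclass
    class Region:
        """Axis-aligned image-plane region, in pixel coordinates."""
        x0: int
        y0: int
        x1: int
        y1: int

        def contains(self, x: float, y: float) -> bool:
            return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

    # Hypothetical regions standing in for the projected avoidance
    # volumes around the exhaust nozzles and hose exit aperture.
    AVOIDANCE = [Region(0, 0, 120, 479), Region(500, 0, 639, 180)]

    def plausible_drogue(x: float, y: float) -> bool:
        """Reject candidate detections inside any avoidance region."""
        return not any(r.contains(x, y) for r in AVOIDANCE)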

Another challenge - accurately identifying the position and distance of the drogue relative to the hose - emerged from analysis of trial video. This showed that the instability of the drogue's outer rim (Figure 2) made it an inappropriate reference point. The drogue's solid inner hub, however, was found to exhibit stable high contrast, making it relatively easy for the sensor to identify and providing a practical calibrated reference point.

Figure 2

Measuring range from a single camera

A significant effort was expended to develop the appropriate range estimation algorithms, and two were proposed. One was based on a classical centroid approach; the other was a model-based approach. The algorithms were implemented in MATLAB and tested against the simulated models.

It was determined that the two algorithms had complementary strengths and weaknesses. The centroid-based approach delivered superior resolution because it measures an area in image pixels, while the model-based approach, which compares the observed video pattern with a known reference, was more stable and more accurate.
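
The project implemented these algorithms in MATLAB; the following Python sketch (using NumPy and OpenCV) conveys the two ideas in simplified form. The circular-target assumption and the function parameters are illustrative, not the project's actual algorithms:

    import numpy as np
    import cv2

    def range_from_area(mask, true_diameter_m, focal_px):
        """Centroid-style estimate: infer range from the segmented
        target's area. For a roughly circular target the apparent
        diameter is 2*sqrt(area/pi), and the pinhole relation gives
        range = focal * true_diameter / apparent_diameter."""
        area_px = float(np.count_nonzero(mask))
        apparent_px = 2.0 * np.sqrt(area_px / np.pi)
        return focal_px * true_diameter_m / apparent_px

    def range_from_model(image, template, template_range_m, scales):
        """Model-based estimate: search for the template scale that
        best matches the observed pattern; apparent size varies as
        1/range, so range = reference range / best scale."""
        best_scale, best_score = 1.0, -np.inf
        for s in scales:
            resized = cv2.resize(template, None, fx=s, fy=s)
            score = cv2.matchTemplate(image, resized,
                                      cv2.TM_CCOEFF_NORMED).max()
            if score > best_score:
                best_scale, best_score = s, score
        return template_range_m / best_scale

The area measurement changes smoothly field to field (hence the fine resolution), whereas the scale search is quantized by the scale step but anchored to a known reference appearance (hence the stability).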

Figures 3a and 3b show the estimates reported by each algorithm against a known true range for each image field in the video sequence. As expected, the model-based approach is noisier than the centroid-based approach at longer ranges; the centroid-based algorithm, however, suffers from gain and offset bias in its measurements.

Figure 3

The decision was made to implement both algorithms running in parallel. This resulted in a measurement accuracy of better than 10 centimeters at a range of approximately 3 meters. The desired accuracy was originally specified at 1-sigma - that is, to be met 68 percent of the time; the accuracy achieved was predominantly better, falling within the specified bound about 90 percent of the time - almost 2-sigma.
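
The article does not detail how the parallel estimates were combined. One plausible arrangement - sketched below, with the linear bias model and 50/50 blend being assumptions - uses the model-based track to correct the centroid track's gain and offset bias, and scores accuracy as the fraction of fields within tolerance:

    import numpy as np

    def fuse_tracks(centroid_est, model_est):
        """Fit the centroid track's gain/offset bias against the more
        accurate model-based track, then average the corrected pair."""
        gain, offset = np.polyfit(centroid_est, model_est, 1)
        corrected = gain * np.asarray(centroid_est) + offset
        return 0.5 * (corrected + np.asarray(model_est))

    def fraction_within(estimates, truth, tol_m=0.10):
        """Fraction of video fields whose range error is inside the
        tolerance - cf. the ~90 percent quoted for the 10 cm figure."""
        err = np.abs(np.asarray(estimates) - np.asarray(truth))
        return float((err <= tol_m).mean())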

At the heart of the automated airborne refuelling demonstration was Octec's ADEPT-60 sensor-based video tracking and image processing module (Figure 4). The module is natively equipped with several image processing algorithms and was extended to combine the measurements from the newly developed range estimation algorithms. This provided the flexibility to estimate range without direct interaction between the underlying algorithms.

Figure 4

As the receiver closed on the drogue, the drogue occupied an increasing proportion of the field of view - from a few pixels at a distance of 30 meters to more than half the field of view during the docking phase. The tracking algorithm was therefore modified to hold a resizable image template of the target in memory, dynamically updated using the estimated drogue size. The template image also served as a first-stage mechanism for reacquiring the drogue in the event of an intermittent track loss.
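
A simplified sketch of such a dynamically resized template - using OpenCV as a stand-in for the ADEPT-60's actual implementation, with hypothetical class and method names - might look like this:

    import cv2
    import numpy as np

    class DrogueTemplate:
        """Resizable reference image of the drogue, updated while the
        track is healthy and used to reacquire after a track loss."""

        def __init__(self, patch: np.ndarray, size_px: float):
            self.patch = patch        # stored appearance of the drogue
            self.size_px = size_px    # drogue size when patch was taken

        def update(self, patch: np.ndarray, size_px: float) -> None:
            """Refresh the template from the current, good track."""
            self.patch, self.size_px = patch, size_px

        def reacquire(self, frame: np.ndarray, est_size_px: float):
            """Rescale the template to the estimated drogue size and
            search the whole frame for the best match."""
            scale = est_size_px / self.size_px
            resized = cv2.resize(self.patch, None, fx=scale, fy=scale)
            result = cv2.matchTemplate(frame, resized,
                                       cv2.TM_CCOEFF_NORMED)
            _, score, _, location = cv2.minMaxLoc(result)
            return location, score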

As noted earlier, the selected mounting points for the image sensors were not without their drawbacks. Most significant among these was a concern that the HUD cockpit window would induce significant image distortion. This issue was addressed by calibrating the system using "real world" measurements of various 3D points around the aircraft and hangar, and tracking these same points through the image capture sensor. An optimization routine was written in MATLAB to model the camera and lens parameters, and the known 3D points were reprojected through the model and compared with those measured by the vision system. This allowed the processing algorithms to be modified to compensate for anomalies in the camera's field of view.
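
The original routine was written in MATLAB; a comparable Python sketch using SciPy's least-squares solver and OpenCV's point projection illustrates the reprojection approach. The parameterization shown - two radial distortion terms plus a single camera pose - is an assumption for illustration:

    import numpy as np
    import cv2
    from scipy.optimize import least_squares

    def reprojection_residuals(params, pts3d, pts2d):
        """Error between surveyed 3D points reprojected through a
        pinhole + radial-distortion model and their measured image
        positions. params = [fx, fy, cx, cy, k1, k2, rvec, tvec]."""
        fx, fy, cx, cy, k1, k2 = params[:6]
        rvec, tvec = params[6:9], params[9:12]
        K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
        dist = np.array([k1, k2, 0.0, 0.0])
        projected, _ = cv2.projectPoints(pts3d, rvec, tvec, K, dist)
        return (projected.reshape(-1, 2) - pts2d).ravel()

    def calibrate(pts3d, pts2d, initial_guess):
        """Refine camera and lens parameters by least squares,
        analogous to the project's MATLAB optimization routine."""
        fit = least_squares(reprojection_residuals, initial_guess,
                            args=(pts3d, pts2d))
        return fit.x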

The next phase

The demonstration proved that advanced vision-based sensors for image capture and state-of-the-art image processing technology can be applied to augment the existing capabilities of sophisticated GPS-based positioning systems. However, work to maximize the viability and deployability of this technology continues. Overall performance in acquiring the target image can be increased. Operating range - the initial distance at which the drogue can be captured - can also be extended, reducing the reliance on very precise GPS-based measurements. Finally, work will be done to enable drogue type identification, recognition of anomalous drogue behavior, and compensation for the effects of lighting and weather conditions.

Doug Scott is an algorithms development engineer at Octec Ltd., part of GE Fanuc Intelligent Platforms. He joined Octec in 2001 to develop various algorithms for automatic video trackers and image processing hardware. He holds a BSc Eng. in Electronics from the University of the Witwatersrand, South Africa. He can be reached at [email protected].

GE Fanuc Intelligent Platforms
+44 (0)1344-465200
www.gefanucembedded.com

 
