Military Embedded Systems

ISR signal processing brings performance to sensors and enables AI at the edge

Story

September 28, 2018

John McHale

Editorial Director

Military Embedded Systems

Military intelligence, surveillance, and reconnaissance (ISR) applications continue to make demands on signal-processing designers for more performance, better thermal management, and reduced size, weight, and power (SWaP). These systems - as they move closer to the sensor on various platforms - are also starting to enable artificial intelligence (AI) solutions at the edge.

Speeding up “sensor to shooter” time is a bit of a blunt term, as ISR sensor data sent to a warfighter does not always end with shots being fired. However, filtering data at the sensor level does shorten the time it takes for actionable intelligence to actually reach warfighters, enabling them to make better, faster, and more informed decisions. Some call this new reality “shortening the sensor chain.”

To enable such performance, sensor systems integrators rely on high-performance embedded signal-processing solutions that leverage the latest commercial processors and FPGAs [field-programmable gate arrays].

“Our [defense] customers want wider signal bandwidths, improved dynamic range, higher channel density, and lower cost per channel,” says Rodger Hosking, Vice President and Founder, Pentek (Upper Saddle River, New Jersey). “There is no ultimate ‘good-enough target’ for any of these parameters because each step of improvement opens up new applications, extends the range of deployment environments, increases detection range, improves acquisition of small signals in the presence of large ones, and accommodates the newer wideband spread-spectrum and encryption techniques for signals that must be captured and generated.

“As technology moves on to wider bandwidths, applications can be deployed in different ways and with different cost profiles,” Hosking continues. “For example, lower-cost form-factor profiles now have functionality that was unaffordable before. Applications such as small drones are driving this. The drone needs to be able to detect a faraway small signal while there are large signals right next to it. Capturing that small signal is akin to pulling something out of a noisy environment – a difficult task, made easier by modern signal-processing techniques.”
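
One way to put rough numbers on pulling a small signal out from next to a large one is the ideal quantization-noise formula for an N-bit converter. The short Python sketch below is illustrative only: real ADCs fall short of these ideal figures because of clock jitter, spurs, and front-end noise, but the formula shows why every extra bit of converter resolution widens the gap between a strong blocker and the weakest signal that can still be resolved.

# Illustrative only: ideal quantization SNR for an N-bit converter, a common
# yardstick for dynamic range. Real ADCs fall short of these numbers because
# of clock jitter, spurs, and front-end noise.
def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (8, 12, 14, 16):
    print(f"{bits}-bit ADC: ~{ideal_snr_db(bits):.1f} dB ideal dynamic range")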

Increased bandwidths also mean increased use of all types of commercial processing elements. “Higher bandwidths mean more data and a larger variety of data set sizes,” says Tammy Carter, Senior Product Manager for OpenHPEC products at Curtiss-Wright Defense Solutions (Ashburn, Virginia). “We are seeing more systems that mix DSPs [digital signal processors], GPGPUs [general-purpose graphics processing units], and FPGAs, with no single technology taking over the whole signal-processing chain. Four or five years ago, I would’ve said that GPUs, as far as signal processing goes, were totally dead, but there has been a resurgence as defense integrators embrace the increasing throughput and decreased SWaP that GPUs bring to a system when compared to CPU-only solutions. Other factors include the ease of reconfigurability when compared to FPGAs and latency gains that can be achieved by data transfer directly to the GPU without having to go through the CPU.”

In addition to demanding more processing and performance, defense integrators are also requiring all of this functionality to be “on an inherently secure platform built on open architectures capable of supporting multiple channels,” says Peter Thompson, Vice President, Product Management, at Abaco Systems (Huntsville, Alabama). “And they want all this at lower cost, delivering lower latency with less jitter – and with minimal NRE [non-recurring engineering cost].”

Packaging processing power with the sensor

The military’s seemingly unquenchable thirst for ISR data has naturally resulted in even more information being gathered by the sensors; there’s so much data, however, that it can’t get down the current data links fast enough. Designers are therefore looking to pack increasing amounts of processing capability next to the sensor to filter some of that data before it gets sent to human operators.

“The need for more processing power is always there,” Thompson notes. “It manifests itself in different ways, though – raw FLOPS [floating-point operations per second], core count, and cache sizes; for FPGA solutions, more gates, more DSP slices, tighter integration of programmable logic, processors, and analog conversion, bigger memories with higher bandwidth, and so on. We’re regularly deploying 16 lanes of PCIe Gen3 for a bandwidth of 15.75 GB/second, and 40 Gb Ethernet, with 100 GbE around the corner.”
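
As a back-of-the-envelope check on the PCIe figure Thompson cites, the short sketch below multiplies the Gen3 per-lane rate by the 128b/130b encoding efficiency and the lane count. These are nominal link-layer values, not a vendor specification; real-world throughput is further reduced by protocol overhead.

# Rough sanity check of the quoted 16-lane PCIe Gen3 bandwidth (nominal
# values only; protocol overhead reduces achievable throughput).
GT_PER_S = 8.0            # Gen3 raw line rate per lane, gigatransfers/s
ENCODING = 128.0 / 130.0  # Gen3 128b/130b encoding efficiency
LANES = 16

per_lane_gbytes = GT_PER_S * ENCODING / 8.0    # bits per transfer -> bytes
print(f"~{per_lane_gbytes * LANES:.2f} GB/s")  # ~15.75 GB/s, as quoted above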

More defense customers are requiring their equipment be able to analyze, quantify, and packetize data at the sensor level before it is downlinked – which also affects the SWaP requirements, says Roy Keeler, Senior Product and Business Development Manager, Aerospace & Defense, ADLINK Technology (San Jose, California). “We also see the envelope being pushed to reduce SWaP as existing processing power is right at the limit for conventional packaging technology today.” For these applications, ADLINK offers the HPERC-IBR rugged small form factor computer. (Figure 1.)

 

Fiber and FPGAs are essential for adding processing at the sensor level: “We are seeing a strong desire to minimize data movement, and maximize the use of fiber,” says Noah Donaldson, Vice President of Product Development at Annapolis Micro Systems (Annapolis, Maryland). One of the ways Annapolis addresses these requirements is with its WILD FMC+ GM60 ADC & DAC card, which is designed for positioning closer to the sensor, he continues. It is one-third smaller and lighter than 3U VPX, yet using the Xilinx Zynq UltraScale+ RF System-on-Chip (RFSoC) technology, it has full FPGA processing and converter capability. (Figure 2.)

Active electronically scanned array (AESA) “radar data is processed initially with smart sensors, and then the resulting sensor data is aggregated over fiber to larger FPGAs,” says Denis Smetana, Senior Product Manager, FPGA Products, Curtiss-Wright Defense Solutions. “It is becoming more important for FPGA modules and processor modules to be able to directly interface to fiber connections, such as 40 GbE, to better support these fat pipes of data coming from the sensors.” Curtiss-Wright offers the VPX3-534 3U VPX Kintex UltraScale FPGA 6 Gsps transceiver for ISR applications. It combines high-speed multichannel analog I/O, user-programmable FPGA processing, and local processing in a single 3U VPX slot for direct RF wideband processing at up to 6 Gsps.
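
As a quick illustration of what a 6 Gsps converter buys in a direct RF architecture, the sketch below applies the Nyquist criterion. It assumes simple first-Nyquist-zone sampling and ignores the anti-alias filtering and converter front-end limits a real design must account for.

# Illustrative Nyquist arithmetic for the 6 Gsps figure above: sampling at
# fs captures at most fs/2 of instantaneous analog bandwidth, which is why
# such rates allow wideband RF to be digitized without analog downconversion.
sample_rate_gsps = 6.0
usable_bw_ghz = sample_rate_gsps / 2.0
print(f"Instantaneous bandwidth (first Nyquist zone): {usable_bw_ghz} GHz")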

Many in the industry agree that the RF SoC FPGA released by Xilinx this year has also made it easier to get signal-processing capability next to the sensor. “The RF SoC is a key enabler to getting more signal-processing functionality closer to the antenna,” Hosking says. “Our Model 6001 RFSoC Module (RFSoM) uses the Xilinx RFSoC Zynq UltraScale+ FPGA that has eight A/D and D/A converters, a quad-core ARM processor, dual 100 GbE interfaces, and FPGA resources.”

The deployment of multiple-input and multiple-output (MIMO) RF systems is driving huge interest in RF SoC products such as the Abaco VP430, which leverages the Xilinx ZU27DR RF system-on-chip (RFSoC) and combines eight input and eight output channels, a large FPGA fabric, and multiple processor cores into a single slot, Thompson says.

“We are seeing adoption of the Xilinx RF SoC as being a big factor in reducing latency for [electronic warfare] applications,” Smetana notes. “It can also enable a higher number of sensors, and it solves the latency issue by burying it in a low-latency path inside the FPGA.”

 

Figure 2: The WILD FMC+ GM60 ADC & DAC card from Annapolis Micro Systems leverages Xilinx RF SoC technology.



However, placing more capability with the antenna and sensor is also an RF challenge: “The closer you get to the antenna, the less you have to rely on sending RF signals through coaxial cables, which often becomes the limiting factor on system performance,” Hosking says. “One brute-force solution is putting big heavy boxes up next to the antenna. A far better strategy is the use of small digitizer pods near the antenna and new optical interface standards that enable optical links to move the data up and down the link. This gets rid of the degradation problem while also supporting wide bandwidths.

“First, due to advances in data converters and DSP engines, the well-known degradation of RF signals traveling through long coaxial cables from an antenna is now often the limiting factor in system performance,” Hosking explains. “Secondly, new monolithic signal devices that integrate the functions of RF, data conversion, and signal processing make it more practical to implement these previously bulky elements in smaller subsystems mounted closer to the antenna. Thirdly, the new, open standards for high-speed optical digital links capable of delivering digitized RF signals are now being widely adopted. Lastly, the adoption of digital RF protocol standards like VITA 49 helps remove the barriers of incompatibility between vendors and boosts confidence among defense customers so that they can take advantage of this new technology.”
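
The sketch below is purely conceptual and does not reproduce the normative VITA 49 packet layout; it only illustrates the idea Hosking credits the standard with, namely wrapping digitized RF samples with a stream identifier and timestamp metadata so that hardware from different vendors can exchange them unambiguously over a digital link. The field choices and sizes here are hypothetical.

# Conceptual illustration only: NOT the normative VITA 49 bit layout.
# Digitized IQ samples are tagged with a stream ID and timestamps so any
# compliant receiver on the optical link can interpret them.
import struct
import time

def wrap_iq_payload(stream_id: int, iq_samples: bytes) -> bytes:
    seconds = int(time.time())   # integer-seconds timestamp
    fractional = 0               # fractional-seconds field, zeroed here
    header = struct.pack(">IIQ", stream_id, seconds, fractional)
    return header + iq_samples

packet = wrap_iq_payload(stream_id=0x42, iq_samples=b"\x00\x01" * 512)
print(f"{len(packet)} bytes ready for the digital link")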

Efforts to add processing capability at the sensor level are also “coupled with a demand for an increase in AI capability at the edge, near the sensor, to enable immediate data analysis,” Keeler says.

AI at the edge

One way to leverage all of that signal-processing capability is to have the hardware enable AI algorithms that will help better filter the data the sensors collect.

“AI will be part of the trend towards adding more processing at the sensor level, especially for target classification,” Carter says. “The processor will perform the classification to identify an object such as a tank, determine whether it is friend or foe, consider the speed and direction of its movement, and even determine the level of the threat. Performing all these tasks on the platform reduces the amount of data that must be transferred to and from the central control station. These AI systems will get more actionable intelligence to the operators in the field faster and enable them to make better decisions. This will be especially true with airborne radar systems, as much more of the classification and the resulting actions will be done in the air.”
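
A minimal sketch of the data-reduction idea Carter describes appears below. The classifier is a placeholder and the message fields and sizes are hypothetical; the point is simply that downlinking compact track reports instead of raw sensor frames cuts what has to cross the datalink by orders of magnitude.

# Minimal sketch of on-platform classification before downlink. The
# classifier and message format are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackReport:
    object_class: str   # e.g. "tank"
    friend_or_foe: str  # e.g. "unknown"
    speed_mps: float
    heading_deg: float
    threat_level: int   # 0 (benign) .. 5 (critical)

def classify_frame(frame: bytes) -> Optional[TrackReport]:
    # A real system would run neural-network inference on a GPU or FPGA here.
    return TrackReport("tank", "unknown", 12.0, 270.0, 3) if frame else None

raw_frame = bytes(4 * 1024 * 1024)   # hypothetical 4 MB sensor frame
report = classify_frame(raw_frame)
if report is not None and report.threat_level >= 2:
    print(f"Downlink a few dozen bytes instead of {len(raw_frame)}: {report}")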

AI is starting to make an impact “in areas such as autonomously operated vehicles and cognitive systems such as radars and EW systems that adapt to changes in the electromagnetic spectrum autonomously and intelligently,” Abaco’s Thompson says. “There are many technologies being applied to such problems, including GPUs and FPGAs. There are many dedicated sensor-processing chips that are designed specifically for this domain, and we are constantly exploring their applicability to the rugged mil/aero environment.”

“The key enabler for many AI applications is the tight coupling of the RF, data converter, and DSP functions within an FPGA-based product,” Hosking says. “This is an ideal platform for custom development of sophisticated, real-time analysis and decision-making AI algorithms. Algorithms are being designed for FPGAs to adapt to particular signals that might never have been seen before and then track or dissect them.”

AI also encompasses the concept known as deep learning, which is gaining much traction in defense circles. “Every year we are receiving more questions about deep learning,” Carter notes. “People are unsure exactly how to utilize it yet, but are trying to get ahead of the curve by picking the processors, GPUs, FPGAs, etc. that will enable the deployment of deep-learning applications once the neural networks are more mature for the ISR arena.”

Getting rid of the heat

If only AI systems could cool off the electronics that make them possible – alas, not yet. Until AI can cool its own systems, it will be up to hardware designers to manage the thermals generated by high-performance processors and FPGAs; they are actually coming up with some innovative solutions.

“We are definitely seeing [an enhanced focus] on thermal-management requirements across the board, along with a demand for more liquid cooling in smaller systems,” Curtiss-Wright’s Carter says. “Liquid cooling has been around for years but could be fairly messy from a COTS [commercial off-the-shelf] standpoint. For new higher-powered devices, Air Flow Through (AFT) cooling is being adopted, as it is currently the only way to effectively cool high-power systems.”

Constant innovation is needed to keep up with the thermal densities today and tomorrow, Thompson says. “We take an end-to-end approach to reduce thermal resistance at all points of a system – die to heat spreader, heat spreader to heat sink, heat sink to wedgelock, wedgelock to chassis, chassis to environment. We employ novel materials and assembly techniques, embedded heat pipes, controlled tolerances, self-adjusting interfaces, and more. The aim is to allow our processors to perform at maximum clock rates, even at the highest ambient temperatures, and to increase reliability.”
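
Thompson’s end-to-end chain can be thought of as thermal resistances in series: junction temperature is roughly the ambient temperature plus dissipated power times the sum of the resistances. The figures in the sketch below are made up purely for illustration and are not taken from any Abaco product.

# Illustrative series thermal-resistance chain (made-up values, degC per watt).
# Lowering any link lowers junction temperature for the same dissipated power.
resistances_c_per_w = {
    "die_to_heat_spreader":   0.10,
    "spreader_to_heat_sink":  0.05,
    "heat_sink_to_wedgelock": 0.15,
    "wedgelock_to_chassis":   0.10,
    "chassis_to_environment": 0.30,
}

power_w = 80.0     # hypothetical processor dissipation
ambient_c = 55.0   # hypothetical ambient/chassis temperature

junction_c = ambient_c + power_w * sum(resistances_c_per_w.values())
print(f"Estimated junction temperature: {junction_c:.1f} degC")  # 111.0 degC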

At Annapolis Micro Systems, “our cooling capability has evolved as our boards’ processing capabilities – and attendant cooling requirements – have increased. It’s gone from air to conduction to AFT,” Donaldson says. “And now, for our highest performing systems, we are getting into liquid-flow-through (LFT) cooling.”

“[The] LFT cooling standard – VITA 48.4 – that was recently released will also be used for these high-power systems where possible,” Smetana says. “LFT cooling will most often be applied where the infrastructure already exists. Previously, LFT hardware was developed as custom hardware, but it is now starting to migrate to COTS.”

Managing thermals, say experts, must begin with early board development and not be an after-the-fact add-on.

“Often, with a real high-density device that generates much heat, it’s too late to design thermal management – the product is done,” Hosking notes.

“Thermal management needs to start in the product-development phase. Taking care of this up front also makes it easier for defense system integrators. Today, it is virtually impossible to tack on thermal-management provisions after the product is designed. System integrators look to vendors of board-level products with good thermal design so that they can cool their systems without impacting schedules by having to develop a custom solution.”