Military Embedded Systems

FPGA or GPU? - The evolution continues


September 16, 2014

Charlotte Adams

Abaco Systems

A GE Intelligent Platforms perspective on embedded military electronics trends

Designers of high-performance embedded computing (HPEC) systems for the military and aerospace market have options when choosing the primary processor for signal- and image-processing applications: they can cast field-programmable gate arrays (FPGAs) or graphics processing units (GPUs) in the starring role.

In the past the military was wedded to FPGAs mostly because there was no middle ground between FPGAs and cost-prohibitive application-specific integrated circuits (ASICs). Program managers thought nothing of building a complete electronic warfare (EW) system with FPGAs.

Two developments, however, are changing this picture: First, GPUs have emerged that are nearing parity with FPGAs in both performance and power consumption. Second, the military itself has changed, with budgetary necessity driving officials to demand size, weight, and power (SWaP) tradeoffs. As a result, GPUs are becoming more popular and may eventually overshadow FPGAs, pushing them into a subordinate role.

FPGAs vs. GPUs

FPGAs have certain advantages. To begin with, these chips are hardware implementations of algorithms, and hardware is always faster than software. FPGAs are also more deterministic; their latencies are still an order of magnitude lower than those of GPUs – hundreds of nanoseconds vs. single-digit microseconds. (GPU users compensate by designing for the worst-case timing of their particular applications.)

GPUs historically have been power hogs, which is problematic in battery-dependent scenarios, but the latest GPU products have reduced that liability. NVIDIA’s Tegra K1 CPU/GPU system-on-chip, for example, burns less than 10 W. GE Intelligent Platforms, taking notice of this improvement, has announced an agreement with NVIDIA to add Tegra K1-based products to its stable of GPU offerings (see Figure 1).

 

Figure 1: GE Intelligent Platforms is NVIDIA’s preferred provider of products based on the new Tegra K1 to serve users in the military/aerospace market.


Unlike FPGAs, GPUs run software, and executing an algorithm in software takes time. Instructions have to be fetched and queued up, math operations have to be performed, and results have to be sent to memory. GPUs also have their own advantages. On the hardware side, GPUs’ massively parallel construction enables them to run a software algorithm much faster than a conventional processor could. GPUs also run their software very close to the hardware, enhancing speed and controllability.

Unlike FPGAs, GPUs excel in floating-point operations. GPU cores are native hardware floating-point processors. A 384-core GPU can run 384 floating-point math operations every clock cycle. This capacity makes GPUs a natural fit for floating-point-intensive signal- and image-processing applications.
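
To make that concrete, the minimal CUDA sketch below maps one thread to each sample so every core performs a hardware floating-point multiply. The kernel and buffer names are illustrative rather than drawn from any particular product, and the block size of 384 simply echoes the 384-core example above.

// Minimal CUDA sketch of per-core floating-point parallelism. Names are
// illustrative; the block size of 384 echoes the 384-core example above.
#include <cuda_runtime.h>

__global__ void scale_samples(const float* in, float* out, float gain, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per sample
    if (i < n)
        out[i] = in[i] * gain;                       // a hardware float multiply per thread
}

int main()
{
    const int n = 1 << 20;
    float *d_in, *d_out;
    cudaMalloc(&d_in,  n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    scale_samples<<<(n + 383) / 384, 384>>>(d_in, d_out, 0.5f, n);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}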

In fact, many newer signal-processing algorithms are aimed at GPUs. Moreover, GPUs are designed with very fast memory, and new direct memory access (DMA) techniques allow high-volume sensor data to be streamed to the GPU without consuming GPU clock cycles.
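
The exact DMA path from sensor to GPU memory is vendor-specific, so the sketch below shows only the generic CUDA pattern such techniques build on: asynchronous copies issued to two streams so the GPU's DMA engine moves one block of data while a kernel processes another. The function and buffer names are hypothetical, and the host buffer is assumed to be pinned (allocated with cudaHostAlloc) so the copies are truly asynchronous.

// Hedged sketch, not a vendor-specific data path: asynchronous copies issued
// to two CUDA streams so the GPU's DMA engine moves one block of sensor data
// while a kernel processes another. process_block is a placeholder kernel,
// and pinned_host_buf is assumed to be pinned memory (cudaHostAlloc).
#include <cuda_runtime.h>

__global__ void process_block(float* data, int n) { /* per-sample DSP would go here */ }

void stream_sensor_data(float* pinned_host_buf, int n_blocks, int block_len)
{
    cudaStream_t streams[2];
    float*       d_buf[2];
    for (int s = 0; s < 2; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&d_buf[s], block_len * sizeof(float));
    }

    for (int b = 0; b < n_blocks; ++b) {
        int s = b & 1;                                   // ping-pong between streams
        cudaMemcpyAsync(d_buf[s], pinned_host_buf + (size_t)b * block_len,
                        block_len * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        process_block<<<(block_len + 255) / 256, 256, 0, streams[s]>>>(d_buf[s], block_len);
    }

    for (int s = 0; s < 2; ++s) {
        cudaStreamSynchronize(streams[s]);
        cudaStreamDestroy(streams[s]);
        cudaFree(d_buf[s]);
    }
}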

GPUs also offer good backward compatibility. If an algorithm changes, the new software can run on older chips. FPGAs are more problematic on this count: It’s no small matter to upgrade the algorithm on an FPGA or to move an algorithm to a newer FPGA. GPUs, furthermore, are supported with a wide array of open development tools and free math function libraries.
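
As a small, hedged example of those libraries, NVIDIA's cuFFT (shipped with the CUDA toolkit) performs FFTs on device data with a few calls; the buffer name and transform length below are illustrative.

// Hedged example of one free GPU math library, NVIDIA's cuFFT: a single
// in-place forward FFT on device data. Buffer name and length are illustrative.
#include <cufft.h>

void forward_fft(cufftComplex* d_signal, int n)
{
    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);                    // plan one 1D complex-to-complex FFT
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);  // execute in place on the GPU
    cufftDestroy(plan);
}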

GPUs are increasingly found in radar processing, for example, where flexibility is valuable. Radar has numerous modes, some of which pilots want to run simultaneously. GPUs are right for this application, as they can run multiple processing pipelines at the same time. While FPGA manufacturers offer the ability to synthesize a small number of algorithm “images” on the same chip, the algorithms can’t be run simultaneously. It takes a second or so – an eternity in EW – to switch between them.
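
As a rough illustration of that concurrency, the hedged sketch below launches two placeholder "mode" kernels into separate CUDA streams; kernels in different streams are allowed to execute at the same time, so no reconfiguration step is needed to keep both modes running.

// Hedged sketch: two radar "modes" as two placeholder kernels launched into
// separate CUDA streams, which the GPU may execute concurrently.
#include <cuda_runtime.h>

__global__ void mode_search(float* d, int n) { /* search-mode pipeline */ }
__global__ void mode_track (float* d, int n) { /* track-mode pipeline  */ }

void run_modes_concurrently(float* d_search, float* d_track, int n)
{
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Kernels in different streams can run at the same time, so both modes
    // make progress without reconfiguring the device.
    mode_search<<<(n + 255) / 256, 256, 0, s1>>>(d_search, n);
    mode_track <<<(n + 255) / 256, 256, 0, s2>>>(d_track,  n);

    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
}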

Is collaboration the key?

The long-term trend in embedded military signal- and image-processing applications seems to be the adoption of the GPU as the primary processing engine, with the FPGA in a supporting role as the data pipe between the antenna and the GPU. Central processing units (CPUs) would play a management role, interpreting the results of the GPU and sending the “answer” to the user.

Such a combined system would play to the strengths of each type of processor while maximizing system efficiency. The FPGA would forward incoming sensor data at high speeds, while the GPU would handle the heavy algorithmic work. Then the CPU would step in to winnow out false positives from the GPU’s output. Since the FPGA would have fewer responsibilities, it could be smaller and less difficult to design and therefore cheaper and faster to field.
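
A simplified sketch of that division of labor appears below. fpga_dma_read, detect_targets, and the score threshold are hypothetical stand-ins for the FPGA data pipe, the GPU detection kernel, and the CPU's false-positive filter; a real system would use vendor-specific DMA and far richer detection logic.

// Hedged sketch of the FPGA -> GPU -> CPU division of labor described above.
// All names and the 0.9 threshold are placeholders, not a real system's API.
#include <cuda_runtime.h>
#include <vector>

struct Detection { float range, score; };

// Placeholder for the FPGA's job: stream raw sensor samples into host memory.
static void fpga_dma_read(float* dst, int n) { for (int i = 0; i < n; ++i) dst[i] = 0.f; }

// Placeholder GPU kernel: the heavy per-sample detection math would live here.
__global__ void detect_targets(const float* samples, Detection* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = Detection{ (float)i, samples[i] };
}

std::vector<Detection> process_frame(int n)
{
    std::vector<float> samples(n);
    fpga_dma_read(samples.data(), n);                        // FPGA: fast data pipe

    float* d_samples; Detection* d_dets;
    cudaMalloc(&d_samples, n * sizeof(float));
    cudaMalloc(&d_dets,    n * sizeof(Detection));
    cudaMemcpy(d_samples, samples.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    detect_targets<<<(n + 255) / 256, 256>>>(d_samples, d_dets, n);  // GPU: algorithmic heavy lifting

    std::vector<Detection> dets(n);
    cudaMemcpy(dets.data(), d_dets, n * sizeof(Detection), cudaMemcpyDeviceToHost);

    std::vector<Detection> confirmed;                        // CPU: winnow out false positives
    for (const Detection& d : dets)
        if (d.score > 0.9f) confirmed.push_back(d);

    cudaFree(d_samples);
    cudaFree(d_dets);
    return confirmed;
}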

defense.ge-ip.com

 
