Military Embedded Systems

Increasing data-transfer performance in VPX systems

Story

November 16, 2017

Thierry Wastiaux

Interface Concept

The combination of digital simulation and high-performance calculation is radically transforming industry and research. It is now possible to develop products and services without spending precious time and dollars on replicating work already done. This is certainly the case in the military and aeronautics fields, particularly in the area of building size, weight, and power (SWaP)-constrained embedded computing architectures for use in electronic warfare and radar applications.

Developing a high-performance embedded computing (HPEC) capability requires strong knowledge of the technologies involved, particularly high-speed data transfer. It goes without saying that high-performance data connectivity is essential to getting computing resources such as processors and field-programmable gate arrays (FPGAs) to work together.

In today’s embedded systems for defense and aerospace applications, the VITA 65 OpenVPX standard appears to be the best choice for supplying the needed connectivity and packing maximum processing power into SWaP-constrained HPEC architectures.

For data transfers, VPX defines the concept of a data plane separated from the control plane. To get the best from the VPX standard, some conditions have to be met: The most robust, highest-throughput protocols must be chosen to operate the data plane, and the processors must be relieved of the task of transferring data from memory to memory through the use of direct memory access (DMA) engines.
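The DMA offload idea can be pictured with the minimal C sketch below. The dma_* names are hypothetical placeholders standing in for a vendor driver, not a real API; the point is simply that the processor describes the transfer and stays free for computation while the engine moves the data.

```c
/* Sketch of the DMA-offload idea: the processor describes a memory-to-
 * memory transfer and hands it to a DMA engine instead of copying the
 * data itself. The dma_* functions below are hypothetical placeholders
 * standing in for a vendor driver, not a real API. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t src;   /* bus address of the source buffer      */
    uint64_t dst;   /* bus address of the destination buffer */
    uint32_t len;   /* transfer length in bytes              */
} dma_desc_t;

/* Placeholder "driver": a real one would program the DMA engine of a
 * PCIe switch or FPGA rather than print a message. */
static int dma_submit(const dma_desc_t *d)
{
    printf("DMA: copy %u bytes from 0x%llx to 0x%llx\n",
           d->len, (unsigned long long)d->src, (unsigned long long)d->dst);
    return 0;
}

static int dma_poll_done(void) { return 1; }   /* pretend it finished */

int main(void)
{
    dma_desc_t d = { .src = 0x1000, .dst = 0x2000, .len = 4096 };
    dma_submit(&d);                /* the engine moves the data...        */
    while (!dma_poll_done()) {
        /* ...while the CPU stays free for signal processing instead of
         * burning cycles in a memcpy() loop. */
    }
    return 0;
}
```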

State-of-the-art protocols for the data plane

The interconnect used in VPX HPEC systems must be robust, fast, flexible, and very power-efficient. The designer must also consider the size of the targeted computing system in terms of the number of computing payloads. It is now well accepted by the market that PCI Express (PCIe), in its Gen 2 or Gen 3 implementations, and 10GBASE-KR are currently the protocols of choice for building HPEC VPX systems.

For PCIe, CRC [cyclic redundancy check] control at the data link layer, along with hardware retransmission mechanisms, gives users added robustness. Continuous effort led by Intel has brought steady increases in speed and performance. The maximum theoretical bandwidth per lane is 7.88 Gbps in Gen 3, thanks to the reduced overhead of the 128b/130b encoding, which also keeps power consumption low; this translates into a throughput of 31.5 Gbps on a PCIe x4 link. The semiconductor industry has developed PCIe switch components, which usually include direct memory access engines for fast data transfers. These switches may be distributed across the different boards of a system or concentrated in a central switch handling both the control plane and the data plane.
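As a quick check, the 7.88 Gbps per-lane and 31.5 Gbps x4 figures quoted above follow directly from the Gen 3 signaling rate and the 128b/130b overhead; the short C snippet below is purely illustrative:

```c
/* Back-of-the-envelope check of the PCIe Gen 3 figures quoted above:
 * 8 GT/s per lane, with 128 payload bits carried in every 130 bits. */
#include <stdio.h>

int main(void)
{
    double line_rate_gbps = 8.0;                        /* Gen 3, per lane */
    double per_lane = line_rate_gbps * 128.0 / 130.0;   /* ~7.88 Gbps      */
    double x4_link  = 4.0 * per_lane;                   /* ~31.5 Gbps      */

    printf("per lane: %.2f Gbps, x4 link: %.1f Gbps\n", per_lane, x4_link);
    return 0;
}
```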

In the PCIe architecture, each Intel processor is the root complex of its own PCIe domain and enumerates all of its endpoints. Enabling parallel processing in HPEC systems therefore means that designers must develop software to enable seamless communication between the processors.

This approach has been successfully implemented in high-performing VPX 3U systems and appears well-suited to the design of small to medium VPX 6U systems where space is very limited. For example, in radar or electronic warfare (EW) applications, such an approach enables integration of front-end processing FPGA modules that are directly connected to the different sensors. For small to medium-sized VPX 6U systems with just a few processor boards, PCIe is clearly the right choice. However, when more processing power is required with more processor boards, the PCIe approach becomes cumbersome because of the many point-to-point links needed between the boards.
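To see why, consider the link count: if each of N payload boards needs a direct PCIe connection to every other board, the fabric must carry N(N-1)/2 point-to-point links, which grows quickly. A short illustration, assuming a full mesh:

```c
/* Number of point-to-point links required to fully mesh N payload boards. */
#include <stdio.h>

int main(void)
{
    for (int n = 2; n <= 10; n += 2)
        printf("%2d boards -> %2d point-to-point links\n",
               n, n * (n - 1) / 2);
    return 0;
}
```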

The IEEE 802.3 Ethernet standard defines 10GBASE-KR and 40GBASE-KR4, the backplane physical layer (PHY) implementations of 10GBASE-R and 40GBASE-R, both based on the 64B/66B code. The 64B/66B code of the physical coding sublayer (PCS) allows robust error detection, and its encoding ensures that sufficient transitions are present in the PHY bit stream to make clock recovery possible at the receiver. The physical medium dependent (PMD) sublayer of 10GBASE-KR enables transmission on one lane at 10.3125 Gbps, while the PMD sublayer of 40GBASE-KR4 enables transmission on four lanes at the same rate.
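The 10.3125 Gbps lane rate follows directly from the 64B/66B overhead; the short snippet below illustrates the arithmetic:

```c
/* 64B/66B encoding sends 66 bits on the wire for every 64 payload bits,
 * so a 10 Gbps MAC rate becomes 10.3125 Gbps on each backplane lane. */
#include <stdio.h>

int main(void)
{
    double payload_gbps = 10.0;                      /* 10GBASE-R MAC rate */
    double line_rate = payload_gbps * 66.0 / 64.0;   /* 10.3125 Gbps       */

    printf("10GBASE-KR lane rate:  %.4f Gbps\n", line_rate);
    printf("40GBASE-KR4: 4 lanes x %.4f Gbps\n", line_rate);
    return 0;
}
```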

With 10GBASE-KR, systems designers are driven back to a central-switching approach, which becomes necessary when building large VPX HPEC systems with many processor and application boards.

The 6U VPX Cometh4510a, with its 48 10GBASE-KR data plane ports and 16 1000BASE-KX control plane ports (along with additional optical front ports), can be used as the backbone of a large central-switched 6U VPX architecture. In large 3U VPX systems, designers can consider the Cometh4590a, which offers generous bandwidth and as many as 29 10GBASE-KR ports. (Figure 1.)

 

Figure 1: The Cometh4510a (left) and 4590a (right) switches can be used for 6U and 3U architectures, respectively. Photos courtesy of Interface Concept.



Even though these data plane protocols are sufficient for building the high-performance embedded systems that the market requires today, the industry is looking for even faster protocols.

Increasing the throughput to 25 Gbps on a differential pair

Standards-making bodies are currently examining how to increase high-density backplane serial throughput to 25 Gbps per differential pair. As an example, the IEEE 802.3 Ethernet Working Group has begun discussions on a 100 Gigabit backplane Ethernet standard with four lanes, each running at 25 Gbps.

A white paper on the subject from TE Connectivity (“A comparison of 25 Gbps NRZ & PAM-4 Modulation used in Legacy & Premium Backplane Channels,” 2012) sets out that backplane performance levels are essentially categorized by their insertion loss, return loss, and insertion loss-to-crosstalk ratio (ICR) on a defined channel. This channel is specified using a backplane system with two daughterboards connected to a backplane, with the length of the differential pairs and the connectors carefully specified.
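As a rough illustration of how these figures of merit relate (the dB values below are arbitrary assumptions, not numbers from the white paper), the crosstalk contributions of the individual aggressor pairs are combined as a power sum and compared against the insertion loss:

```c
/* Illustrative sketch of the three channel figures of merit, with all
 * quantities expressed as positive dB of loss at one frequency point:
 * the power-sum crosstalk (PSXT) combines the individual aggressor
 * couplings, and the insertion-loss-to-crosstalk ratio (ICR) is the
 * margin between signal attenuation and that combined crosstalk. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double il_db   = 18.0;                     /* insertion loss (assumed)  */
    double xt_db[] = { 45.0, 48.0, 52.0 };     /* per-aggressor crosstalk   */
    int n = (int)(sizeof xt_db / sizeof xt_db[0]);

    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += pow(10.0, -xt_db[i] / 10.0);    /* add crosstalk powers      */
    double psxt_db = -10.0 * log10(sum);       /* power-sum crosstalk loss  */

    double icr_db = psxt_db - il_db;           /* higher ICR = cleaner link */
    printf("PSXT = %.1f dB, ICR = %.1f dB\n", psxt_db, icr_db);
    return 0;
}
```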

Per the white paper, two receiver architectures are to be considered: The first is based on a conventional analog signal-processing approach; the second uses a digital signal processing (DSP)-based receiver. In both architectures, the transmitter settings, the channel transfer functions, the sampling points, and the coefficients of the equalizer in the receiver are chosen to maximize the signal-to-noise ratio (SNR). In the simulations, the SNR margin is computed as the difference between the SNR at the decision point of the receiver and the SNR required to achieve the target probability of symbol error.
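A deliberately simplified way to picture that margin computation is to take an ideal additive-white-Gaussian-noise channel (ignoring the ISI, crosstalk, and equalizer effects that the white paper's simulations capture), find the SNR that ideal NRZ or PAM-4 detection needs to hit a target symbol-error probability, and subtract it from the SNR available at the receiver's decision point:

```c
/* Minimal AWGN sketch of the SNR-margin idea: the "required SNR" is the
 * Es/N0 at which ideal NRZ (2-PAM) or PAM-4 detection reaches a target
 * symbol-error probability; the margin is whatever the decision-point
 * SNR exceeds that by. Real backplane channels add ISI, crosstalk, and
 * equalizer effects that this sketch ignores. */
#include <stdio.h>
#include <math.h>

static double q_func(double x) { return 0.5 * erfc(x / sqrt(2.0)); }

/* Symbol-error probability of ideal M-PAM in AWGN at average Es/N0. */
static double pam_ser(int m, double esn0)
{
    double arg = sqrt(6.0 * esn0 / (double)(m * m - 1));
    return 2.0 * (1.0 - 1.0 / m) * q_func(arg);
}

/* Smallest Es/N0 (in dB) that meets the target symbol-error probability. */
static double required_snr_db(int m, double target)
{
    double snr_db = 0.0;
    while (pam_ser(m, pow(10.0, snr_db / 10.0)) > target)
        snr_db += 0.01;
    return snr_db;
}

int main(void)
{
    double target = 1e-12;                  /* example error-rate target */
    printf("required SNR, NRZ  : %.1f dB\n", required_snr_db(2, target));
    printf("required SNR, PAM-4: %.1f dB (at half the symbol rate)\n",
           required_snr_db(4, target));
    /* SNR margin = decision-point SNR (from simulation or measurement)
     * minus the required SNR printed above. */
    return 0;
}
```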

The TE simulations found that one of the main sources of SNR margin degradation is channel reflections. Proper connector design and the suppression of any stubs on the PCB (through backdrilling) thus become essential.

It also appears clearly that PAM-4 [four-level pulse-amplitude modulation] signaling offers better SNR margins than nonreturn-to-zero (NRZ) signaling. In addition, its lower symbol rate enables advanced equalization in DSP-based receiver architectures. But the simulations also reveal that PAM-4 signaling by itself is not sufficient for operation in legacy 10GBASE-KR-compliant backplanes: Megtron6 or improved FR4 (as defined by the IEEE 802.3ap Backplane Ethernet Task Force) and improved connectors must be used.

The development and the standardization of these new electrical technologies on the backplane will take some time. Meanwhile, VPX system designers looking for data transfers beyond 10 Gbps on a differential pair can use optical links as standardized in the VITA 66.x family of optical backplane standards. As an example, the MIL-STD-compliant LightCONEX product from Reflex Photonics is designed to provide 12 transmit lanes and 12 receive lanes at 10 Gbps each, plugging directly into a VITA 66.4 aperture and supporting extended temperature ranges. Such an approach avoids running optical cables over the boards between soldered transceivers and the VITA 66.x connectors, with the inherent vibration issues that can lead to wear points.

In the evolution of VPX toward higher throughput on differential pairs, designers will have to build new high-quality backplane links using low-loss dielectrics, smooth copper, and low-reflection, low-crosstalk connectors. Board design will be even more demanding in terms of impedance control, with the systematic use of backdrilling. PAM-4 will probably be used, as it improves the SNR margin, and high levels of equalization will also be mandatory. In the meantime, the PCIe and 10GBASE-KR protocols and optical VITA 66.x links remain good options for designers when more throughput is needed.

Thierry Wastiaux is senior vice president of sales at Interface Concept, a European manufacturer of electronic embedded systems for defense, aerospace, telecom, and industrial markets. He has 25 years of experience in the telecom and embedded systems market, having held positions in operations, business development, and executive management. Prior to joining Interface Concept, he was responsible for the operations of the Mobile Communication Group and the Wireless Transmission Business Unit at Alcatel-Lucent. He holds an M.Sc. from France’s Ecole Polytechnique. Readers may contact Thierry at [email protected].

Interface Concept www.interfaceconcept.com

 
