10 Gigabit backplane Ethernet for embedded supercomputers
June 14, 2016
Designers of next-generation high-performance embedded computing (HPEC) solutions for demanding intelligence, surveillance, and reconnaissance (ISR) systems applications got a boost from the introduction of Intel’s multicore Xeon D system-on-chip (SoC) processor earlier in 2016. This device provides as many as 16 cores in the same power footprint as earlier four-core devices and features the rugged ball-grid-array (BGA) packages and extended temperature range needed for deployed applications.
Since this processor family and Ethernet serve as the workhorses of the HPC clusters that drive large-scale commercial data centers, the development tools used to build those highly scalable supercomputers can be leveraged to help build HPEC systems. The Xeon D allows a level of compute performance that was previously available only on 6U VPX processing boards to migrate to size, weight, and power (SWaP)-optimized 3U cards. This higher performance means that a wide variety of high-end ISR applications such as radar, sensor, and image processing can now be deployed on smaller space- and weight-constrained platforms.
An essential element of HPEC systems is support for high-speed 10 and 40 Gigabit Ethernet (GbE) networks. More good news for the HPEC system designer is the fact that the Xeon D features two ports of built-in 10GBASE-KR “backplane” Ethernet. This capability is important because ISR applications are increasingly turning to large-scale SWaP-optimized 3U VPX architectures, with ten or more boards per box; KR Ethernet hits the sweet spot by supporting 10 GbE on a single serializer/deserializer (SerDes) lane. The upshot: 3U cards, with many fewer available backplane pins than 6U VPX, can now support 10 GbE with only four pins on the backplane, compared to the eight or 16 pins it takes to support other styles of Ethernet.
For HPEC system designers, the benefit is clear: KR Ethernet can deliver four times as many 10 GbE ports as previously possible. With 10GBASE-KX4 or XAUI technology, 3U VPX Ethernet switch cards were effectively limited to eight ports. Since 10GBASE-KR uses far fewer pins, a 3U switch card can now support 32 ports of 10 GbE, enabling system designers to build much larger 3U-based HPEC systems. While it was possible to add KR Ethernet to a 3U board before the advent of the Xeon D, doing so required a separate Ethernet controller, consuming valuable board real estate and power. With the Xeon D, HPEC designers get true supercomputer multicore performance with KR Ethernet provided natively on-chip.
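To make the pin arithmetic concrete, the short Python sketch below works through the port counts quoted above. The pins-per-port figures (four for 10GBASE-KR, 16 for 10GBASE-KX4/XAUI) come from the discussion here; the 128-pin fabric budget is purely an illustrative assumption, not a figure for any particular backplane or switch card.

```python
# Illustrative pin-budget arithmetic for 3U VPX backplane Ethernet.
# Pins-per-port figures follow the text: 10GBASE-KR uses one SerDes
# lane (4 pins per port), while 10GBASE-KX4/XAUI uses four lanes
# (16 pins per port). The 128-pin fabric budget is an assumed example.

PINS_PER_PORT = {
    "10GBASE-KR": 4,    # 1 SerDes lane x (TX+, TX-, RX+, RX-)
    "10GBASE-KX4": 16,  # 4 SerDes lanes x 4 pins
}

def max_ports(ethernet_type: str, fabric_pins: int) -> int:
    """Return how many 10 GbE ports fit within a backplane pin budget."""
    return fabric_pins // PINS_PER_PORT[ethernet_type]

if __name__ == "__main__":
    fabric_pins = 128  # hypothetical pin budget for a 3U switch slot
    for eth in PINS_PER_PORT:
        print(f"{eth}: up to {max_ports(eth, fabric_pins)} ports "
              f"with {fabric_pins} fabric pins")
```

With that assumed 128-pin budget, the sketch reproduces the ratio described above: 32 KR ports versus eight KX4/XAUI ports.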
Because many systems will require a mix of boards and a range of Ethernet types, it’s important to be able to support 1 GbE, 10 GbE, and 40 GbE network speeds. Since the IEEE standards for backplane Ethernet are closely related, the SerDes interfaces on newer switch chips can be configured to support all three speeds. 3U switches based on backplane Ethernet can potentially support 32 x 1 GbE, 32 x 10 GbE, 8 x 40 GbE, or a mix of all three. Those numbers mean that today’s newest 10 GbE processors can coexist in the same system with older cards that support only 1 GbE. For future-proofing, when 40 GbE boards become widely available, the same switch card will be able to support them as well.
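As a rough illustration of how a configurable SerDes fabric can be divided among those speeds, the sketch below allocates lanes for a mixed 1/10/40 GbE port map. The one-lane-per-port figure for 1000BASE-KX and 10GBASE-KR and the four-lane figure for 40GBASE-KR4 follow the backplane Ethernet variants named in this article; the 32-lane fabric size is an assumption chosen only to match the 32 x 10 GbE and 8 x 40 GbE totals above.

```python
# Minimal sketch of mixed-speed lane allocation on a backplane Ethernet
# switch. Assumes a 32-lane SerDes fabric (consistent with the article's
# 32 x 10 GbE / 8 x 40 GbE figures); lane counts per port type follow
# the IEEE backplane Ethernet variants named in the text.

LANES_PER_PORT = {
    "1000BASE-KX": 1,   # 1 GbE, one lane
    "10GBASE-KR": 1,    # 10 GbE, one lane
    "40GBASE-KR4": 4,   # 40 GbE, four bonded lanes
}

TOTAL_LANES = 32  # assumed switch fabric size

def lanes_needed(port_mix: dict[str, int]) -> int:
    """Total SerDes lanes consumed by a requested port mix."""
    return sum(LANES_PER_PORT[t] * n for t, n in port_mix.items())

def fits(port_mix: dict[str, int]) -> bool:
    """True if the requested mix fits within the switch's lane budget."""
    return lanes_needed(port_mix) <= TOTAL_LANES

if __name__ == "__main__":
    # Example mix: eight legacy 1 GbE links, twenty 10 GbE links,
    # and one 40 GbE uplink -- 8 + 20 + 4 = 32 lanes.
    mix = {"1000BASE-KX": 8, "10GBASE-KR": 20, "40GBASE-KR4": 1}
    print(f"{lanes_needed(mix)} of {TOTAL_LANES} lanes used; fits: {fits(mix)}")
```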
An example of a next-generation 3U VPX Ethernet switch card that supports KR Ethernet is Curtiss-Wright’s VPX3-687 (Figure 1). The open-architecture card uses industry-standard connectors to provide 320 Gbps of line-rate switching, with support for 1000BASE-KX, 10GBASE-KR, and 40GBASE-KR4. In addition to supporting backplane Ethernet interconnects, this switch also offers media access control-to-physical layer (MAC-PHY) links such as XFI and SFI for connection to optical transceivers for outside-the-box networking.
Providing the right mix of optics on a switch has been a challenge in the past: there are many different standards using different wavelengths, fiber and connector types, and power levels for short or long reach. Designing a switch that can connect to off-the-shelf optical transceivers enables the system designer to address the project’s unique optical data requirements.
Figure 1: The Curtiss-Wright VPX3-687 3U VPX Ethernet switch can offer 320 Gbps of line-rate switching plus links for connection to optical transceivers.
As the 3U form factor becomes increasingly popular for building scalable HPEC systems, the ability of the network switch card to support the higher-speed processing made possible by the Xeon D processor becomes critical. Fulfilling the promise of supercomputing-class performance in the harsh environments of deployed defense applications requires flexible network switching that speaks today’s 10 GbE language while supporting older 1 GbE cards and standing ready for the next generation of 40 GbE cards over the horizon.
Andrew McCoubrey is product marketing manager for switching and routing products, C4 Solutions Group, Curtiss-Wright. www.cwcdefense.com