The Ethernet bunny
June 24, 2008
While Ethernet is known for its ubiquity, it's also gaining even more market traction as its 10 GbE version speeds things up.
The embedded computer industry, both civilian and military, follows the semiconductor roadmaps that chip manufacturers determine. In the past few years, a fundamental technology change has occurred. Traditional bit-parallel buses, which have dominated for 25 years and underpin platform standards such as VME and CompactPCI, are slowly going away. They are being replaced with switched serial interconnects, sometimes called switched fabrics, which route data from a source to a destination on demand through a switch. To keep pin counts down, the data are transmitted serially at high data rates. Switched serial systems have many advantages, including higher transmission speeds than could ever be achieved over a parallel bus, thanks to reduced capacitance. The failure domain is also small, because one card cannot bring down the entire data bus, and this characteristic enables very robust and highly available systems to be designed and built.
When this transition began a few years ago, a plethora of serial interconnect technologies was announced. The trade press and analyst communities then devoted a lot of ink to debating the "bus wars," attempting to determine the winners and losers. That has largely settled down. PCI Express, driven by Intel and widely adopted, has become the dominant chip-to-chip interconnect but is almost always confined to a board and does not generally go onto the backplane. There are some exceptions, including the CompactPCI Express standard, which added a PCI Express fabric to that backplane. Serial RapidIO has become the interconnect of choice for DSP manufacturers, providing some excellent low-latency properties. InfiniBand and Fibre Channel see limited use in storage networks but are not widely supported. Surprisingly, Ethernet has emerged as the clear winner.
Ethernet, you say? Isn't that old, slow technology insufficiently deterministic to build real-time systems? The answer is no. Ethernet has been around for a long time, but it just gets faster and faster. Many of us remember Ethernet as 10 Mbps technology piped over big yellow cables. But Ethernet is the technology that drives all modern networks and the Internet. Hundreds of millions of Ethernet interconnects are added every year, and that volume has provided major impetus for continuous improvement. Just a few years ago, Ethernet speeds topped out at about 1 Gbps. Now the industry is upgrading to 10 Gbps technology, and 40 Gbps technology will arrive in a few years. TCP/IP Offload Engines (TOEs) have largely eliminated the main processor overhead formerly associated with processing packets. As a result, latency has virtually disappeared and is now measured in nanoseconds. So Ethernet, like the Energizer Bunny, just keeps going and going and going ...
The platform standards world is moving quickly to develop 10 Gbps backplane technology, and new challenges are being addressed. At these high speeds, every part of the transmission path plays a critical role and must be carefully designed. Trace lengths of differential data pairs must be matched, capacitances must be minimized, and skew, jitter, and crosstalk must be managed. Furthermore, in a world where customers want to buy interoperable system parts from different vendors, budgets for all of these parameters must be established for each part of the data transmission path, including boards, connectors, and the backplane itself.
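To see why trace matching matters so much at these speeds, a rough back-of-the-envelope calculation helps. The figures below are illustrative assumptions, not values from any standard: roughly 170 ps/inch is a commonly quoted propagation delay for FR-4 stripline, and the unit interval assumes the 10.3125 Gbaud line rate used by serial 10 Gb Ethernet.

```python
# Rough intra-pair skew estimate for a backplane differential pair.
# Illustrative values only: ~170 ps/in is a typical FR-4 stripline
# propagation delay; 10.3125 Gbaud is the serial 10 GbE line rate.
PROP_DELAY_PS_PER_IN = 170.0
UI_PS = 1e6 / 10_312.5  # one unit interval in ps (~96.97 ps)

def skew_ps(mismatch_in: float) -> float:
    """Timing skew caused by a trace-length mismatch given in inches."""
    return mismatch_in * PROP_DELAY_PS_PER_IN

def skew_in_ui(mismatch_in: float) -> float:
    """The same skew expressed as a fraction of one unit interval."""
    return skew_ps(mismatch_in) / UI_PS

# A mere 50-mil (0.05 in) mismatch already consumes close to 9% of a bit period:
print(f"{skew_ps(0.05):.1f} ps, {skew_in_ui(0.05):.1%} of one UI")
```

Even this crude model shows why every mil of mismatch, every connector, and every via must be accounted for in a system-level timing budget.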
Fortunately, pioneering groups such as PICMG's Interconnect Channel Characterization Committee have been working hard to define measurement and interoperability criteria for these high-speed data paths. Also, the IEEE recently released an important standard defining 10 Gb data paths over backplanes. Named IEEE 802.3ap, this standard defines several 10 Gb alternatives, including data paths of four pairs down to one pair. It is the single pair standard, 10GBASE-KR, that presents the most challenges because it operates at the highest data rate. PICMG has also reopened its PICMG 3.1 specification, which defines Ethernet over the backplane for AdvancedTCA systems, to accommodate the new IEEE standard.
Speaking of PICMG, the organization has been active on a number of other fronts. Of most interest to this readership is the development of ruggedized versions of MicroTCA. This effort has now split into two separate groups, one working on ruggedized air-cooled MicroTCA and the other working on even more rugged conduction-cooled MicroTCA. A number of major aerospace vendors are helping this effort by actively participating in the development of the two standards. MicroTCA, often referred to as AdvancedTCA's "little brother," is garnering a lot of interest in the traditional mil/aero communities because it is small, powerful, and can be made highly available. This high availability, a legacy of the telecom industry that has relied on it for years, is of particular interest to this community. Quite a bit of recent effort has gone into connector testing, as military customers insist on seeing the test data. The existing MicroTCA connector appears more than adequate in terms of its physical robustness, and it already operates beyond the 10 Gbps-per-pair speed that everyone realizes will soon be needed.
PICMG is active on a number of other fronts. A new revision of the core AdvancedTCA specification, PICMG 3.0, has just been released. There are no major changes, but there are many small refinements derived from almost five years of development and deployment. A version of the Advanced Mezzanine Card standard that supports RapidIO, AMC.4, is now undergoing member review. Another group is developing a design guide for the COM Express single board computer standard to help users design application-specific base boards. Yet another group is working on a second revision of the AMC.1 specification. All of this work will, with luck, be wrapped up by the end of the year, when new challenges await.
To learn more, e-mail Joe at [email protected].
For more information on PICMG, go to www.picmg.com.