Military Embedded Systems

OpenVPX standard hits five years of enabling interoperability in military embedded systems


October 22, 2015

John M. McHale III

Editorial Director

Military Embedded Systems


Every month the McHale Report will host an online roundtable with experts from the defense electronics industry, from major prime contractors to defense component suppliers. Each roundtable will explore topics important to the military embedded electronics market. This month we discuss the fifth anniversary of the OpenVPX standard. Five years have passed since OpenVPX/VITA 65 was ratified as an ANSI standard, and six years since the OpenVPX Industry Working Group was founded by Mercury Systems in 2009. Other companies, such as GE Intelligent Platforms and Aitech, were solicited to help form the consortium and lead it. Later that year, the OpenVPX Technical Working Group was formed, with participation opened up to any VITA member in good standing.

This month’s panelists include: Jerry Gipper, Executive Director of VITA; Ian Dunn, VP and General Manager, and Robert Grochmal, OpenRFM Program Director, of Mercury Systems; Doug Patterson, VP of Military and Aerospace Business at Aitech Defense Systems; Steve Edwards, Director of Product Management, and Mike Slonosky, Product Marketing Manager for Power Architecture SBCs, at Curtiss-Wright Defense Solutions; and Richard Kirk, Director of Core Computing at GE Intelligent Platforms.

MCHALE REPORT: Five years have passed since the OpenVPX (VITA 65) standard was ratified. The standard essentially enables interoperability across multiple suppliers and subsystem building blocks, supporting the growth of VPX technology in military embedded systems. How has the standard evolved since its debut in 2010?

GIPPER: The OpenVPX systems specification establishes an architecture framework that defines and manages module and backplane designs, including pinouts, and sets interoperability points within VPX while maintaining full compliance with VPX. By definition, it is a living document that has to be continuously updated to reflect the latest additions and enhancements to the baseline VPX specifications. The OpenVPX working group is continuously soliciting inputs for the next release. To date, there have been two major releases, with a third nearing completion.

DUNN & GROCHMAL: OpenVPX is a continuously evolving specification maintained by the VITA Standards Organization (VSO) and includes many of the same participants that helped create the original specification. Every few years a new version of the specification is released, and during the interim periods formal guidance is issued for new designs, such as fiber-optic connectivity, RF support, and new switch fabric speeds. Additionally, new supporting specifications are coming to fruition, such as VITA 62 (OpenVPX power supplies), VITA 68 (OpenVPX backplane signal integrity), and VITA 46.11 (system management). Thus, OpenVPX has its own ecosystem of supporting specifications that, when used together, ensure strong module- and system-level tool kits for developing complex next-generation military embedded systems.

KIRK: The main evolution has been the proliferation of module (and slot) profiles. Much effort has also gone into expanding the standard to support optical profiles (based on VITA 66). The large number of profiles means that finding two or more suppliers offering exactly the same profiles can be difficult. However, OpenVPX has created a perception of “interoperability via multiple suppliers” that integrators can believe in, and it has successfully calmed concerns about divergence. In reality, a custom backplane is almost always required to fully implement a specific system, but standard OpenVPX development systems can be used successfully in the early evaluation and development stages of a project.

The theory of open industry standards is an excellent one. The reality is not quite so straightforward, and that’s inevitable; but with the right planning, the interoperability envisaged by the VITA 65 working group is still very achievable.

EDWARDS & SLONOSKY: The last release was in 2012. The working group is now finalizing the third revision of the specification, which should be out in the first half of 2016. This new revision will bring the standard up to date with Gen 3 backplanes (8-10+ Gbaud), as well as incorporate options from VITA 66 (optical) and VITA 67 (RF) into profiles. There is also a lot of new work, primarily in 3U, to define radial clocks for high-precision clocking; one common use case is software-defined radio applications.

PATTERSON: The OpenVPX standard has stabilized with very few changes and/or modifications since it was first accepted and ratified by VITA in 2010 as ANSI/VITA 65-2010 (R:2012). This minimal evolution is a testament to the initial efforts of developing a clearly defined open standard.

The largely cosmetic revisions in 2012 to the VITA 65 OpenVPX standard merely corrected some document references and other minor errors, more clearly defined payload and peripheral slots, and added a common reference clock. None of these changes radically altered the base standard. They were made mainly to better assure interoperability between boards from different COTS providers.

MCHALE REPORT: For what military applications is OpenVPX best suited?

PATTERSON: Any C4ISR application where moving large packets of high-speed data from one board in a system to another is critical to the system’s real-time performance. Typical applications include radar, sonar, software-defined radio (SDR), and high-definition video, either pre-processed and compressed or uncompressed. As Moore’s Law enables lower size, weight, and power (SWaP) possibilities, and as the demand for miniaturization continually drives system designs, engineers are faced with multiple options today that didn’t exist even five years ago.

Because VITA 46 VPX didn’t have the luxury of an already established user community and ecosystem, it got off to a somewhat rough and chaotic start. However, VITA 65 OpenVPX and related mechanical and environmental standards were designed and developed to ensure future interoperability at the board level across multiple commercial-off-the-shelf (COTS) suppliers, and were based on the fledgling adoption of VITA 46 by the advanced user community.

DUNN & GROCHMAL: OpenVPX is ideally suited for rugged on-platform, server-class processing and similar applications where very large amounts of sensor data need to be acquired in real or near-real time, then processed and disseminated for on- or off-platform use, all within a low-SWaP envelope. The OpenVPX architecture outperforms others in applications such as radar; intelligence, surveillance, and reconnaissance (ISR); big data; and broad electromagnetic spectrum awareness and dominance. OpenVPX gives system integrators a well-defined path to specify interoperability, which is becoming increasingly imperative, especially with the DoD’s renewed focus on open systems architecture and cross-service platform convergence. In addition, OpenVPX provides the hardware underpinnings necessary for software interoperability and portability, enabling initiatives like the Future Airborne Capability Environment (FACE), VICTORY, and Open Mission Systems (OMS) to achieve their performance goals.

KIRK: OpenVPX is best suited to high-compute, high-throughput applications that can take advantage of high data rates such as 10 and 40 Gigabit Ethernet. That includes a whole slew of applications, ranging from smaller command and control systems based on a handful of 3U boards through to highly sophisticated radar, ISR, and electronic warfare systems that require large numbers of 6U boards to attain the compute power and system bandwidth the application needs.

That said, OpenVPX is also finding footholds at the more prosaic command and control end of the spectrum. Suppliers like GE are focused on OpenVPX as the platform of choice for new defense and aerospace programs, so there is more choice and “modernity” available in their VPX portfolios than in older platforms such as VME and CompactPCI. Many programs are choosing OpenVPX modules because they offer compact solutions and feature modern silicon with long lifetimes, even if they don’t need the full capabilities of OpenVPX. The ruggedness and the ability to swap boards in the field afforded by VITA 48/REDI are attractive for many applications where depot repair of complete LRMs is not desirable.

OpenVPX does come at a relatively high price, though, so it isn’t always affordable for programs on very tight budgets, which is why there are still places where VME and CompactPCI thrive.

EDWARDS & SLONOSKY: It seems that VPX is being mandated for most, if not all, new programs. VPX, by its nature, is very flexible and can support a whole host of applications, from mission computing to sensor and radar processing. 3U is gaining traction in smaller and even some larger systems, due to the smaller footprint of the modules combined with the processing power that can be implemented on them.

MCHALE REPORT: When the OpenVPX working group was founded it was set up outside of VITA, which caused a bit of controversy at first but proved to be very successful in the long run. What lessons were learned from that process that have enhanced the standards development process within VITA?

GIPPER: VITA had traditionally dealt primarily with foundational hardware specifications like VMEbus. VPX lacked a system-level framework that could be used to guide integration and steer development. Some of the companies that were focused on system-level development saw this deficiency and launched the OpenVPX architectural framework effort. They were not even quite sure what they needed to do at that time.

Since then, the members have been much better at stepping back and taking a full-system point of view with an added emphasis on interoperability; hence the launch of the VITA 80 working group, focused on developing methodologies for testing interoperability among VPX modules. The supply- and demand-side members of the VPX ecosystem have been working very closely together to round out development of future specifications and standards, with each side taking leadership where appropriate.

DUNN & GROCHMAL: We live in a complex world where time to market is an intrinsic industry advantage. Infusing a consensus-driven environment with a sense of urgency is difficult, and the process can conflict with the increasing demand for timely solutions. Sometimes “out of band” efforts, in which a team of domain experts generates a solid draft specification that is then refined by an expanded panel of experts, can be the quickest path to a solution. We also found that having a committed customer on board early as a sponsor (in this case, The Boeing Company) is critical so the standards team can stay focused on producing a relevant specification. Additionally, setting clear goals in a standards body, including schedules and comprehensive project execution, is a must. The OpenVPX Industry Working Group kept the mission on target from the start through hard work and dedication by many people from many different companies, often competitors.

As a result of the OpenVPX initiative, we believe that VITA is now more widely recognized as an authority on system-level embedded standards. This is a great step forward, initially stimulated by Ray Alderman’s vision and support of the OpenVPX specification. Ray deserves acknowledgement for his support of OpenVPX from the outset and for his drive to create a new way of thinking and a focus on systems within VITA.

EDWARDS & SLONOSKY: The original intent was to bypass the perceived “bureaucracy” within VITA, but the exclusion of some companies from the process is not in the best interest of the industry. The primary lesson learned is that we need to be able to operate quickly at times to get a spec to a releasable state in less than 12 months. That can happen within the VITA community, but only if clear ground rules are set up front and member companies dedicate resources to the effort for the good of the industry. OpenVPX was successful because the companies involved dedicated resources to it, and they continue to do so in order to keep the specification relevant as technology moves forward.

KIRK: A standard always benefits from being open and debated within the community by all willing participants. What the OpenVPX working group demonstrated is that standards don’t necessarily need to come from independent bodies: where there is consensus, even among competitors, that customer needs are not being addressed by the current standards infrastructure, it’s appropriate, in our view, for those companies to step in and give customers what they need. That really shouldn’t be controversial.

Sometimes, it’s possible for inertia to set in when a standard is governed by many entities, and the very nature of standards bodies means that they may not make progress as quickly as the marketplace, customers, and technology demand. Occasionally, a disruptive force is required to break the cycle and get things on track: you might describe it as a “wake up call.” It probably should not be the model in all cases, but on occasion it can be effective. One lesson is, therefore, that the VITA Standards Organization should be prepared to move faster.

PATTERSON: Open standards must remain open to give users the ability to implement systems where board interoperability is not only assured but mandated. The OpenVPX standard, VITA 65, was designed as a system-level VPX specification to address this potential interoperability issue. The specification defines backplane- and board-level profiles to ensure the interoperability of products used in developing systems and subsystems. OpenVPX also narrows down all the interconnect options offered in VITA 46, potentially reducing the need for custom development of backplanes and chassis for every application.

While this was the intent of OpenVPX, VPX itself is defined as a point-to-point, mesh fabric interconnect, meaning there are, by design, a great many architecture and backplane interconnection options in VPX. The “one backplane design meets all needs” idea is more of a lofty goal than a reality. With the parallel buses of VME or CompactPCI, the most pressing issue was simply the number of slots needed for a development system; user I/O pins were set in stone by their respective standards, ensuring there was only one option for the electrical and communication protocols across the backplane.

One of the major lessons learned is that it is a herculean task not only to define a new, unused and untried standard that covers all possible implementation options, but also to offer it as a unified, open standard that end users can start developing applications against immediately. By contrast, the VME and CompactPCI standards were already in use and offered by several COTS vendors. Those standards were developed to ensure future interoperability at the board level and were based on an already existing, established user community.

MCHALE REPORT: The creation of the VPX standard was necessary because VME had hit a performance ceiling for supercomputing and signal-processing-intensive applications, yet VME still thrives in certain niches. Will there be a ceiling for VPX like there was with VME? If so, what is it?

KIRK: The huge leap in capability from VME to VPX raised the ceiling by such a large degree that the majority of applications will be well served for several years to come. That said, the classic VITA 46 copper connectors are limited in throughput and won’t keep up as the need for data rates grows beyond 15 Gbits/second. Those connectors act as something of a ceiling in that they limit us to 40GbE and FDR-10 InfiniBand, when the silicon is available now to drive 100GbE and EDR InfiniBand. Many applications are just fine with 10GbE or 40GbE, though, so they are not hitting a wall; but a few push the envelope due to sensor rates (think multi-gigapixel focal plane arrays), latency (for example, electronic warfare), or bandwidth (video SAR, for instance).
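
As a rough illustration of the arithmetic behind those figures, the short Python sketch below converts per-lane signaling rates into usable fat-pipe throughput. It assumes a four-lane pipe and 64b/66b line coding, and uses the commonly published per-lane rates for the backplane Ethernet and InfiniBand variants Kirk names; the numbers are illustrative, not drawn from any VITA specification.

LANES = 4                    # lanes in a four-lane "fat pipe" (assumption for illustration)
CODING_EFFICIENCY = 64 / 66  # 64b/66b line-coding overhead (assumed)

def fat_pipe_gbps(baud_per_lane):
    # Approximate usable throughput of a four-lane pipe, in Gb/s
    return LANES * baud_per_lane * CODING_EFFICIENCY

for label, baud in [("40GBASE-KR4 / FDR-10 (~10.3 Gbaud per lane)", 10.3125),
                    ("100GBASE-KR4 / EDR (~25.8 Gbaud per lane)", 25.78125)]:
    print(f"{label}: ~{fat_pipe_gbps(baud):.0f} Gb/s")

# The ~10.3 Gbaud lanes fit under the ~15 Gbaud connector ceiling cited above;
# the ~25.8 Gbaud lanes needed for 100GbE and EDR InfiniBand do not.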

With the use of optical modules, it is possible to extend VPX to serve the real high end of the application spectrum, but this is a more specialized area where customized solutions may be more common than off-the-shelf ones. Generally speaking, though, optical will help; but until we have optical waveguides in the backplane instead of bunches of fiber pigtails, we have problems with ruggedization. Maybe new copper connectors will emerge that will help, or maybe new encoding methods will come along (consider, for example, the switching bandwidth of 10GBASE-T versus that of 10GBASE-KR).

EDWARDS & SLONOSKY: VPX is meant to be extensible, but there are practical limits. Some of the ones already being pushed are thermal limits and speed over the backplane. The primary limit is the VPX connector. It was initially designed to support 6.25 Gbaud. We are seeing reliable operation at 10+ Gbaud, but everyone in the industry agrees that the connector, perhaps modified and with proper design of the module and backplane, has an upper limit somewhere between 14 and 16 Gbaud. At that point a new connector will have to be chosen. It may still be called VPX, but it is unlikely to be backwards compatible with the existing connector.

The technologies coming from Intel, Xilinx, Altera, Nvidia, and others are pushing us to 150-200 W and beyond. This is past the limits of standard conduction-cooled modules. While VPX can handle the power, the system-level design may need to change to use liquid flow-through cooling at the module level. Not all programs are prepared to do this, so trade-offs will have to be made.

PATTERSON: Programs are driven by humans, and generally all humans resist change. The same “homily” applies to aerospace and defense programs. As component obsolescence takes its death grip on the older silicon used to make VME and CompactPCI systems, the natural progression will be toward VPX or similar high-speed serial standards. However, there will come a time when high-speed serial copper-based backplane interconnects are not fast enough to implement the next generation of supercomputers, and solutions such as fiber optics will be sought.

DUNN & GROCHMAL: OpenVPX solutions, with their rugged, multi-plane architecture and high-speed fabric capability, provide the governing lynchpins for a balanced processing architecture, digital and RF, with the ability to process and move huge amounts of sensor data. We find ourselves in a world of increasing threats where cyber, radar, and electronic warfare data need to be processed, often simultaneously, to give the warfighter the best battlespace options. We believe that while VME will continue to meet less data-intensive application requirements through each tech refresh, it is not as well suited as OpenVPX to addressing these increasing threats within a time-sensitive, cross-service, highly networked environment.

So basically, we believe that OpenVPX-based architecture solutions are here to stay and will likely be around longer than VME. At Mercury, our focus is increasingly on OpenVPX solutions, as that is what our customers are requesting. We have deployed many OpenVPX-based solutions, and we continue to see high demand for them. Hundreds of OpenVPX products exist in the VITA ecosystem. OpenVPX is a flexible specification that supports technology expansion. For example, increases in Ethernet speeds can easily be accommodated by the OpenVPX specification or by VITA 67, a complementary specification for RF backplane connectivity co-resident with processing.

GIPPER: Parallel buses like VMEbus have inherent limits on what performance levels can be reached. Switched serial interconnects have limits, bound by the laws of physics, on the performance each physical link can attain. There are multiple dimensions that one can work in with today’s technology. The first is the transmission speed of the SERDES that are used; SERDES will continue to move up in performance toward some physical limitation. The second dimension addresses the performance problem by using more physical links, up to some undefined limit. Too many links, however, and you are back to the problems of a parallel bus with too many bits.
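
The first two of those dimensions can be illustrated with a simple, hypothetical calculation: at a fixed aggregate-throughput target, slower SERDES force more parallel links. In the Python sketch below, the 400 Gb/s target and the 64b/66b coding efficiency are assumptions chosen purely for illustration.

import math

def lanes_needed(target_gbps, serdes_gbaud, coding_efficiency=64 / 66):
    # Physical links required to reach a target aggregate throughput,
    # given a per-lane SERDES rate (coding efficiency assumed for simplicity)
    per_lane_gbps = serdes_gbaud * coding_efficiency
    return math.ceil(target_gbps / per_lane_gbps)

for gbaud in (6.25, 10.3125, 25.78125):
    print(f"{gbaud:8.4f} Gbaud per lane -> {lanes_needed(400, gbaud)} lanes for 400 Gb/s")

# Slower SERDES force many more parallel links, which is the
# "back to a parallel bus with too many bits" problem described above.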

The third dimension is the physical limitation of the transport medium. Currently, copper interconnects are used at the backplane level. Moving to optical interconnections will open the door to much higher transfer rates and, at the same time, reset the number-of-pipes issue, because optical pipes are physically many times smaller than their copper equivalents. On top of that, optical is still shrinking in physical size, so once we break through from copper to optical at the backplane level, I don’t see much of a constraint for many years to come.

MCHALE REPORT: What is next for VPX technology? SpaceVPX? Optical backplanes? Predict the future.

EDWARDS & SLONOSKY: Optical is already here, but for I/O rather than for data between modules across the backplane. That is coming, and when it does it will find its way into OpenVPX or its successor. SpaceVPX is also here, but in a limited sense. What is holding it back is the relatively small size of the market. As that market segment grows, I would expect to find more module suppliers developing standard products to meet the requirements of SpaceVPX.

We are already exploring the next connector technology. It may be optical or it may be copper, but it is going to need to support at least 25 Gbaud on the backplane. That enables 100 GigE and other technologies. The FPGA vendors already have SERDES that operate above 30 Gbaud, and that needs to be considered as well.

Expect to continue to see profiles created for specific market segments, such as the radial clocks for SDR applications and perhaps multiple data planes for high-performance radar applications. VITA 48 will also need to continue to evolve to support the cooling challenge.

PATTERSON: Anything today that is considered high-speed digital is merely over-driven analog. High-speed digital signals are no longer defined by signal-level transitions always happening some fixed amount of time after an event, but rather by the probability of a signal-level transition occurring within some time window.

The physical limits of copper include series inductance and parasitic capacitance, both of which degrade a signal’s speed and its propagation over distance. And as digital signal speeds increase, so does the components’ power dissipation. As a compromise driven by the limits of physics, and to reduce the overall power of the system, the signal-transition voltage windows are reduced, which greatly decreases the signal-to-noise ratio. That compromise makes it harder to determine with high confidence when a noisy signal has actually transitioned to the opposite state. This is why a changing digital signal is now characterized by the probability of a transition occurring rather than by a clearly defined and stable state (hence connector eye patterns).
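
That trade-off can be sketched with a deliberately simplified model: NRZ signaling, Gaussian noise, and a decision threshold at mid-eye. In the illustrative Python snippet below, the eye amplitudes and noise figure are hypothetical, not measurements of any particular connector or backplane.

import math

def q_function(x):
    # Gaussian tail probability Q(x) = P(standard normal > x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def nrz_ber(eye_amplitude_mv, noise_rms_mv):
    # Approximate bit error rate: the chance that noise pushes a sample
    # past the decision threshold at half the eye amplitude
    return q_function((eye_amplitude_mv / 2) / noise_rms_mv)

for swing_mv in (800, 400, 200):   # shrinking signal swing to save power
    print(f"{swing_mv} mV eye, 30 mV RMS noise: BER ~ {nrz_ber(swing_mv, 30):.1e}")

# Smaller swings cut power but raise the probability of mistaking noise for a
# transition, which is exactly the eye-pattern trade-off described above.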

Solutions for the limitations of backplanes and boards that use copper vias will be sought, found, and implemented. At the moment, fiber-optic backplane interconnects hold the promise of nearly eliminating the backplane connector’s parasitic capacitance and inductance, making even higher speeds not just possible but achievable.

GIPPER: Watching VPX mature has been very interesting. Now that the consumer side of the equation understands the system-level approach to using VPX, I expect to see many more application-specific implementations built on the solid VPX framework of technology. SpaceVPX is a great example: its user base took the established VPX technology, made a few minor enhancements to make modules more fault-tolerant, and added value to the original VPX suite of specifications, all while developing a solution for specific applications that need a reliable level of system redundancy. Now we are seeing the same effort being undertaken by CERDEC with the Modular Open Radio Frequency Architecture, or MORA, which will open up RF interfaces to enable rapid insertion of new capabilities and broader interoperability, as well as reduce size, weight, and power (SWaP) for future ground vehicles. VPX is a key element of MORA.

I fully anticipate that other embedded high-performance application segments will emerge to define their own specific implementations, again built on the VPX framework, to solve their problems. All of this adds value to VPX, giving it broader appeal to the target industries. Optical interconnects are already playing a role in VPX-based platforms. Optical backplanes are on the roadmap, with plenty of work still to be done on the technology, but when they are widely deployed will be driven more by market economics than by technology. New metrics for measuring system costs that give a higher value to performance may be needed to get through the optical inflection point.

Many people argued that VPX was too open, that it would fail like Futurebus. I think the VPX pioneers had a bigger picture in mind; they weren’t 100 percent sure what would work best for them, but they did not want to be overly restrictive, leaving room for innovation. Since it is a living technology, it adapts quickly to situations where it can rise to be the best solution possible. More OpenVPX profiles will emerge; some will become widely accepted within the industry while others will not survive. Fortunately, nothing in the technology or the specifications prohibits creative application of VPX.

DUNN & GROCHMAL: We foresee the U.S. government labs (e.g., the Air Force Research Laboratory (AFRL), the Naval Research Laboratory (NRL), and others) using OpenVPX and blade architectures like ATCA as the basis for their next-generation embedded computing architectures, which will dictate future deployments for decades. The recent VITA ratification of SpaceVPX (VITA 78), developed at AFRL, is an extension of OpenVPX that adds a level of redundancy given the unique requirements of the space environment. The U.S. Navy, Army, and Special Operations Forces, driven by the government’s demand for affordable, interoperable solutions, are creating reference architectures for the defense primes to use in next-generation platforms, which must last and enable affordable tech refreshes for 20-30 years (for example, the U.S. Air Force’s Next-Generation Radar (NGR) initiative). From our viewpoint, VITA’s VSO, with industry and government participation, can provide the crucible from which new deployable, affordable innovations built on OpenVPX and VPX architectures can evolve. VPX extensibility, combined with OpenVPX’s system-level considerations, provides the flexibility to incorporate future innovations and ensure longevity.

KIRK: Optical, for sure. That’s an easy guess, as it’s already happening. It would certainly help, though, to have rugged, affordable optical waveguide backplanes. Someone, however, will find a way to extend copper to get us beyond perhaps 15 Gbits/second, and that would serve for another five or so years.

One of two other things we may well see in the future of OpenVPX is driven by integrators who want to be able to use boards from different vendors in the same slot of a deployable system that has a dedicated backplane layout. One of the impediments to that is the one I noted previously: the proliferation of slot and module profiles. I can see a culling of the profiles down to a more manageable number.

Another is the Wild West spirit that still pervades the pins set aside for user I/O. Maybe it’s time to define profiles that cover some of the more common uses of the user-defined pins, such as USB, serial, SATA, and so on, in a way that would allow systems to be designed with a common denominator of functionality.

We’re seeing lots of systems moving to 3U, but we have some bottlenecks there in terms of the number of pins available to the backplane, which make systems less scalable than many customers desire. New connectors with a higher pin count, and higher bandwidth, would be an excellent development.
