VITA 65 in record time - Interview with Dr. Ian Dunn, Chief Technology Officer, Advanced Computing Solutions, Mercury Computer Systems, Inc.
December 09, 2010
Editor's Note: Gone are the days when a VITA spec takes years to become a reality. The following interview is a behind-the-scenes look at why OpenVPX was born and how it achieved rapid ANSI approval. Editor Chris Ciufo interviewed Ian Dunn shortly after ANSI blessed the spec. Edited excerpts follow.
MIL EMBEDDED: Last year something called OpenVPX was born, amidst controversy. Where has the standard ended up?
DUNN: We made it through standardization with VITA and ANSI. It took a few months longer than we anticipated, but that’s not bad for outside standards work. The more important deadline we wanted to keep was the working group’s commitment to the original date of handing it off to VITA, which occurred on time [at MILCOM in October 2009]. The original hope was to get the ANSI process through in one round with VITA.
MIL EMBEDDED: How many rounds did it go with VITA and ANSI?
DUNN: It went two rounds with VITA and two rounds with ANSI, too. The official publication date of VITA 65 as an ANSI standard was June 18. I think it was May 24 that OpenVPX was first submitted to ANSI, so the entire OpenVPX process was about a year and a half.
MIL EMBEDDED: What were the biggest achievements – and disappointments – in the OpenVPX process?
DUNN: An important achievement is that VITA 65 is a “living, breathing” specification. We worked out questions such as: How can it be augmented, and how can things be deleted? For example, what if someone wants to add a profile, or add a definition to the architecture? The normal VITA cadence was to take years to make a major revision. We approached VITA about this, and their statement was that each working group could set its own guidelines within its own specification. For ANSI, my understanding is that if a committee ratifies an update process in the spec and ANSI signs off on it, the spec can be modified without waiting for those major revision cycles.
MIL EMBEDDED: OK, so faster revisions … any other goals with OpenVPX?
DUNN: There were really a couple of primary goals. First, we were very concerned with the status of the embedded industry in general. We were seeing a lot of pressure from the primes to just replace embedded form factor architectures with bladed architectures and then to start ruggedizing those. Second, a lot of it had to do with interoperability being mismatched with mainstream, market-driven technology. And the final goal was to create an environment where the amount of variability was minimized. We were convinced that we could live with a handful of profiles. Later on, as we engaged everybody, we figured it was more than that. In reality we’ve ended up with an environment where the core usable profiles number probably a dozen, if not more.
MIL EMBEDDED: Can you explain what a “profile” is?
DUNN: The OpenVPX standard defines the allowable combinations of interfaces between the module, backplane, and chassis to facilitate interoperability and reuse. These interfaces are called profiles. For example, slot profiles define the connector type and how each pin, or pair of pins, is allocated. The module profile defines what communication protocol may be used on each module interface as defined in a corresponding slot profile, connector type, module height (6U/3U), and cooling method (forced air/conduction). The backplane profile defines which pins or set of pins is routed in the backplane, and which pins are routed to the rear transition module. The backplane profile also defines allowed slot-to-slot pitch.
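[Editor's note: The profile layering Dunn describes can be sketched as a simple data model. The class and field names below are illustrative only, not identifiers from the VITA 65 text.]

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SlotProfile:
    name: str
    connector_type: str      # connector family populating the slot
    pin_allocation: dict     # pin (or differential pair) -> logical port

@dataclass(frozen=True)
class ModuleProfile:
    name: str
    slot_profile: SlotProfile  # slot profile the module mates with
    protocols: dict            # logical port -> communication protocol
    height: str                # "6U" or "3U"
    cooling: str               # "forced air" or "conduction"

@dataclass(frozen=True)
class BackplaneProfile:
    name: str
    slot_profiles: list        # slot profile at each physical slot position
    routing: dict              # pin -> "backplane" or "RTM" (rear transition module)
    slot_pitch_inches: float   # allowed slot-to-slot pitch

def compatible(module: ModuleProfile, backplane: BackplaneProfile, slot: int) -> bool:
    """A module can occupy a backplane slot only if the slot profiles match."""
    return backplane.slot_profiles[slot] == module.slot_profile
```

The shared slot profile is the common contract between module and backplane vendors, which is the interoperability the standard is after.]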
MIL EMBEDDED: OK so any other achievements before we move on?
DUNN: The enterprise goal – that was a very important outcome from our perspective, too: OpenVPX can now be used to build enterprise-class architectures on an embedded backplane. Customers can now build a development environment with PCs, or blades, or servers, and transition it to a rugged design. This means OpenVPX vendors are in a position to sell rugged, enterprise-class products at the end of the day. [Editor’s note: As we went to press, Mercury announced what they call the industry’s first dual quad-core Xeon embedded server on a VPX form factor. Clearly the company is executing a long-term strategic plan.]
The first important aspect of this enterprise capability is rugged LRUs. The second is the basic ability to do giga-clustering. And the final element is the expansion plane capability: an enterprise-class architecture has a PCI-based expansion plane, and we really wanted that concept to find its way into a rugged architecture.
MIL EMBEDDED: Mercury was very clear about allowing nothing to derail OpenVPX’s schedule.
DUNN: Yes. I’ve got to credit the community because all the OpenVPX players understood that. At first, I thought, “Wow, you know, we’re really talking about a business decision here. So how do I get access to the business leaders of these corporations?” Well unfortunately, I had to create some controversy, and I had to put people’s businesses on the line. [Editor’s note: See Chris Ciufo’s column entitled “OpenVPX Industry Working Group: Open for business, or just controversy?” at www.mil-embedded.com/articles/id/?3818.]
MIL EMBEDDED: What about the primes’ viewpoint?
DUNN: Regarding the primes who are not on the committee with us, but who are out there winning deals and winning programs: One of the outward-facing accomplishments we wanted to achieve was to reset the image of the embedded community so it would not be perceived as a closed-architecture community. Of course this was terribly important for Mercury because we had the reputation of being the quintessential closed company with RACEway, RACE++, and so on.
After OpenVPX we’ve made some progress. Particularly after it was handed over to VITA, we talked to programs and offices within the business management at the primes and, even to a certain extent, the government. We talked up the fact that OpenVPX is in the spirit of what the government is trying to accomplish from an open architecture perspective. Now realistically, the government is many, many layers above the hardware, usually. But it created a good conversation because it meant the primes could use OpenVPX as a way to showcase how they were being open, and we found some very good reception.
MIL EMBEDDED: With all these goals already achieved, where will OpenVPX head now?
DUNN: While there are more OpenVPX profiles than perhaps we wanted with our original goal, one of the architecture’s successes is that it locked down certain categories of signals. This is the planes concept in the specification: the management plane; the control plane, which is the giga-clustering plane; the data plane; the expansion plane; and then the user I/O [plane].
I think the next frontier is the user I/O dimension, which is the sensor plane or the “user plane.” This part of a subsystem is very useful to the primes because it’s where, for instance in ground combat operations, computers now need to include user interfaces while serving as net-centric resources for the entire mission vehicle. The original VPX space had nothing locked down, with all kinds of planes everywhere. OpenVPX took the six or seven dot specs that everybody really could agree upon and locked them down into planes. Now what’s left are the other six or seven dot specs that weren’t making a lot of progress, and that has allowed everybody to put real, core focus into the things that are still left.
MIL EMBEDDED: Let’s switch gears and talk about Mercury’s Cell BE effort of a couple years ago. What happened with that?
DUNN: When Cell hit the street, one of the things Mercury was excited about was that it was the first processor in a while that could simultaneously execute the front and back ends of a sensor computing problem. So it could be used to do signal processing and signal exploitation, data exploitation, and data mining. And as we went around, the primes were all equally excited about this.
We developed a strategic relationship with IBM, then jointly developed some software and a value proposition. We then started helping prime contractors – both commercial and defense – to use the Cell to create larger solutions, whether that be in industrial imaging, ISR, communications, or whatever. We also classified the Cell internally as a game chip, and lumped it in with the investments the rest of the industry was making in using gaming chips for high-performance computing.
And the Cell being the first and the most programmable variant of that, we jumped on and went after it with a vengeance. Now, when it hit the street, I was astounded at the number of customers who tried it. The main message that came back was that it was a great chip but that it was a productivity challenge: A lot of customers had a hard time getting performance out of Cell BE because of programmability issues. So Mercury offered to do customers’ work for them, and we ended up doing quite a bit of that. Of course, we had a scalability issue there; we could only help so many.
MIL EMBEDDED: You had some commercial design wins with Cell BE, such as Massachusetts General Hospital.
DUNN: Yes, that’s right, and we also had some industrial imaging wins; however, in the meantime, on the defense side we were not as successful because in many cases we couldn’t do the programming for defense customers – because of classification or security issues. And so we only ended up with one or two adoptions there.
And with the exception of the big OEMs that IBM was servicing – Sony and Toshiba – I think a lot of the opportunities that IBM saw for it were also challenging because, as I mentioned, customers simply could not figure out how to efficiently get performance out of Cell BE, even though it was a phenomenal architecture. That led IBM not to invest in a follow-on roadmap. So we’ve turned all those internal investments and personnel expertise over to the more generalized category of GPGPU. That way, we use our Cell BE learning and expertise to create a larger processing category that we’re now building into products.
MIL EMBEDDED: Makes perfect sense. Last question: Curtiss-Wright recently acquired Hybricon, and Kontron acquired AP Labs. Do you have any comments on these acquisitions or why the industry might be turning toward systems integration houses?
DUNN: Well, the way that we look at the acquisitions of some of our industry peers is that first and foremost, they’re filling out their catalog, really – whether it’s with boxes, backplanes, or whatever. That makes them more responsive to industry integrators, whether primes or otherwise. There’s a class of companies that can do that well, but that’s not really the way we think the industry’s going, or where the value proposition is.
The government’s moving to a model where they really want to procure capabilities, and they’re looking to the primes for that. They’re looking to commercial companies for that, and that’s the strategy and marketplace we’re after. We obviously must have our own products and third-party sourcing and relationships, but we’re really focused at the solutions level, at the integrated capability level. Or maybe our goal is to be just below that – to be what we’re calling the “application-ready level” to help the prime contractors build solutions faster, get them to the warfighter faster.
MIL EMBEDDED: Mercury is certainly among the top five systems integrators in the industry. You guys have never really been a board-level supplier. You’ve always been a solve-the-problem company.
DUNN: Yeah, and it’s a core capability and core asset of our company.
Dr. Ian Dunn is Mercury Computer Systems, Inc.’s CTO and senior architect responsible for technology strategy and R&D projects. He has 20 years’ experience designing and programming parallel computers for real-time signal processing applications. Before joining Mercury in 2000, he consulted for Disney Imagineering and Northrop Grumman on automation and computing projects. Ian received his doctoral degree in Electrical Engineering from Johns Hopkins University and his undergraduate degree in Electrical Engineering from Oregon State University. He has authored numerous papers and a book on designing signal processing applications for high-performance computer architectures. He can be contacted at [email protected].
Mercury Computer Systems 866-627-6951 www.mc.com