Virtualization improves efficiency of legacy military embedded systems

October 11, 2019

By Sally Cole, Senior Editor, Military Embedded Systems

Virtualizing legacy embedded systems improves their performance, efficiency, and security, and helps meet size, weight, and power requirements for military aircraft and ground vehicles.

The U.S. Department of Defense owns myriad “legacy” embedded systems that are being given a new lease on life, thanks to the wonders of virtualization.

“Virtualization is a must-have capability in next-generation software-based military systems,” says Ray Petty, vice president of aerospace and defense for Wind River (Alameda, California). “It enables the use of multiple application and operating system environments on a shared compute platform by abstracting away the exact computer architecture from the applications – removing underlying hardware and software dependencies on both new and legacy applications. It also enables a single compute platform to be used by multiple applications from different domains and suppliers.”

What exactly does a virtualization platform do? Wind River happens to offer one called the Helix Virtualization Platform, an adaptive software environment that consolidates multiple operating systems and mixed-criticality applications onto a single compute platform to simplify, secure, and future-proof designs within the aerospace and defense markets. Applications can be legacy or new, built on industry standards such as ARINC 653, POSIX, or FACE, and can run on operating systems such as Linux, VxWorks, and others.

The virtualization of military systems is already well underway, because “it addresses many of the military’s software development, test, and security concerns,” Petty says.

Benefits of virtualization for embedded systems

Virtualization offers many benefits for embedded systems – especially legacy military ones.

“Virtualization is an amazing technology,” says Chris Ciufo, chief technology officer for General Micro Systems. “While it’s not new, it’s only within the past five to 10 years that processors and the systems that run them have had enough performance and resources so that when you virtualize you still have enough processing capability left over to do other things.”

Virtualization “decouples legacy hardware and software dependencies and allows for rapidly repurposing mission platforms,” Petty says. “It also enables systems at the edge to have dynamic on-the-fly information technology (IT) and operational technology (OT). This is ideal for rapidly changing security or warfare environments, or where there is a high failure or degradation rate of capabilities.”

For a virtualization platform to be most efficient, “it must allow applications using a broad range of commercial and proprietary guest operating systems to run without penalty,” Petty adds. “Virtualized systems should also enable the continued use of legacy software applications while combining them with new capabilities within new operating environments.”

The biggest benefit virtualization provides, according to Chip Downing, senior market development director of aerospace and defense for RTI (Sunnyvale, California), is that a systems integrator can now take a legacy system, typically on a standalone or federated system, and deploy it into a shared compute platform that’s more modern and can be more easily upgraded.

“Now we can have legacy and new applications, each with their own operating system and application libraries, running on a virtualized compute platform,” explains Downing. “Once you have a virtualization layer underneath these different applications and operating systems, the potential to upgrade that platform is very attractive – you can simply install new hardware with the virtualization layer and run the existing set of applications. This should incur very little change, except for potentially higher performance of the different applications on the platform due to more modern and typically faster processors. It makes performing upgrades much easier and allows you to give aging legacy platforms a new life through virtualization on the latest hardware.”
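Downing’s point about a stable virtualization layer can be pictured with a small sketch. The Python below is purely illustrative (it is not Helix or any vendor’s API, and all names, core counts, and memory sizes are invented): guest partitions are described independently of the hardware, so a hardware refresh changes only the host description, not the application manifest.

```python
# Hypothetical sketch: guest partitions described independently of the hardware,
# so the same manifest can be redeployed when the underlying host is upgraded.
from dataclasses import dataclass

@dataclass(frozen=True)
class GuestPartition:
    name: str          # e.g., a legacy mission application
    guest_os: str      # e.g., "VxWorks", "Linux"
    cores: int         # virtual cores requested
    memory_mb: int     # guest RAM

@dataclass(frozen=True)
class HostPlatform:
    name: str
    cores: int
    memory_mb: int

def can_host(host: HostPlatform, partitions: list[GuestPartition]) -> bool:
    """Check that the new hardware can absorb the unchanged guest manifest."""
    return (sum(p.cores for p in partitions) <= host.cores and
            sum(p.memory_mb for p in partitions) <= host.memory_mb)

# The manifest stays the same; only the host description changes on upgrade.
manifest = [
    GuestPartition("legacy-radar-track", "VxWorks", cores=1, memory_mb=256),
    GuestPartition("new-sensor-fusion", "Linux", cores=2, memory_mb=2048),
]
old_host = HostPlatform("legacy-sbc", cores=2, memory_mb=2048)
new_host = HostPlatform("modern-multicore-sbc", cores=8, memory_mb=16384)
print(can_host(old_host, manifest), can_host(new_host, manifest))  # False True
```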

Multicore hardware “is driving a huge change in embedded systems where traditionally we had an embedded system with just one processor, one scheduler or RTOS, and one set of applications,” Downing says. “Now, we can virtualize and consolidate the different platforms onto one shared compute platform.”

Downing points out that a rich ecosystem of suppliers – DDC-I, Green Hills Software, Lynx Software, Sysgo, Star Lab, and Wind River – has created embedded virtualization platforms with RTCA DO-178C and EUROCAE ED-12C airborne safety-certification evidence.

This means a huge improvement in efficiency when it comes to military aircraft, ground vehicles, unmanned aerial vehicles, or other platforms with an assortment of federated boxes and strict size, weight, and power (SWaP) restrictions. (Figure 1.) “By virtualizing these traditionally federated embedded applications on a new compute platform, you can vastly reduce SWaP,” Downing adds. “And on platforms with certification evidence, systems integrators can mix high levels of criticality with low levels of criticality – whether it’s safety or security – on a shared compute platform. This has extremely high value and can save tens of millions of dollars during the life of an airborne system.”

Figure 1 | Virtualizing traditionally separate operations can bring SWaP benefits to such applications as military aircraft, ground vehicles, or unmanned aerial systems.
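One mechanism behind safely mixing criticality levels is strict time partitioning of the kind standardized in ARINC 653. The toy Python loop below is only a conceptual sketch, with invented partition names, design assurance levels, and millisecond budgets; a certified hypervisor enforces these windows with far more rigor and with the certification evidence described above.

```python
import time

# Hypothetical major frame: each partition gets a fixed time window (ms),
# so a low-criticality guest cannot starve a high-criticality one.
# Names and durations are illustrative only.
MAJOR_FRAME = [
    ("flight-control (DAL A)", 20),
    ("display-manager (DAL C)", 10),
    ("maintenance-log (DAL E)", 5),
]

def run_partition(name: str, budget_ms: int) -> None:
    # Stand-in for dispatching a guest; here we just burn the time budget.
    print(f"running {name} for {budget_ms} ms")
    time.sleep(budget_ms / 1000.0)

def run_major_frame(frames: int = 3) -> None:
    for _ in range(frames):
        for name, budget_ms in MAJOR_FRAME:
            run_partition(name, budget_ms)  # window ends even if the guest is idle

run_major_frame()
```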


Virtualization not only makes it possible to reduce SWaP on military manned or unmanned aircraft, ground vehicles, and other weight-sensitive military equipment, but also improves operating characteristics and reduces energy consumption. “The traditional way to add capability was to add another federated box containing that capability, which increased platform weight and power consumption,” Downing says. “Military systems integrators can now virtualize both the legacy and new software on a shared compute platform and add new capability by simply adding another software partition, not an entire new box of capability. This decreases cost of deployment and increases the efficiency of the military platform during its service lifetime. Virtualization also enables rapid response to new threats by allowing the insertion of new capabilities in very compressed time frames.”
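To make the SWaP argument concrete, here is a back-of-the-envelope Python calculation. The weights and power draws are invented, illustrative numbers rather than figures from any program; the point is simply that several federated boxes collapse into one shared platform plus software partitions.

```python
# Illustrative only: consolidating federated boxes onto one shared platform.
federated_boxes = {           # (weight_kg, power_w) per invented legacy box
    "mission-computer": (6.0, 80),
    "video-processor":  (4.5, 60),
    "comms-gateway":    (3.0, 40),
}
shared_platform = (7.0, 120)  # one multicore box hosting all three as partitions

old_weight = sum(w for w, _ in federated_boxes.values())
old_power = sum(p for _, p in federated_boxes.values())
new_weight, new_power = shared_platform

print(f"weight: {old_weight} kg -> {new_weight} kg")
print(f"power:  {old_power} W -> {new_power} W")
# Adding a new capability now means adding a partition, not a fourth box.
```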

The overall appeal of virtualizing legacy systems is that “you can have essentially the same legacy system, which looks like it’s still running on the older system, processor, and environment, but it’s now running on a shiny new processor and system that can also be doing many other things,” Ciufo says. “This means that legacy systems can be kept alive a lot longer without many changes – saving the government, the contractor, the prime, etc. the cost of recertifying a system. It’s a tremendous benefit to the defense industry because it often costs more to recertify a system than to redesign it. Virtualization is a boon for modern defense systems when it can be used.”

What does virtualizing embedded systems mean for security?

Virtualizing embedded systems is primarily viewed as a way to make them more secure by essentially creating a moving target, but it can potentially expand the attack surface – depending on how connected the systems or devices are.

Embedded systems can be more secure “because a virtualized platform has an abstraction layer that enables the reliable movement of software operating systems and applications from platform to platform,” Downing says. “Traditionally, you just had one target, one operating system, one processor. If someone worked at it long enough they could figure out a way to break into it. Virtualization allows you to move those applications around – throughout an aircraft or system. It’s a moving target because it’s abstracted away now; it’s not just running on bare hardware. Even if you’re able to compromise the hardware and take advantage of a security flaw in the hardware platform, there is now a virtualization layer above the hardware, providing one more layer, and possibly a dynamic layer, that separates an application environment and its adversaries.”
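The “moving target” idea can be illustrated with a few lines of Python. This is a conceptual sketch only, with invented names, and no claim that any particular hypervisor works this way: the placement of guest partitions on physical cores is reshuffled periodically, so a mapping an attacker has observed soon goes stale.

```python
import random

# Purely conceptual: periodically reshuffle which core each guest runs on,
# so the physical placement an adversary observed quickly becomes stale.
partitions = ["legacy-app", "crypto-service", "sensor-io", "telemetry"]
cores = [0, 1, 2, 3]

def reshuffle_placement(seed: int) -> dict[str, int]:
    rng = random.Random(seed)
    shuffled = cores[:]
    rng.shuffle(shuffled)
    return dict(zip(partitions, shuffled))

for epoch in range(3):  # e.g., once per mission phase or timer tick
    print(epoch, reshuffle_placement(seed=epoch))
```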

On the other hand, “Embedded devices and systems have traditionally run in relative isolation and were protected from a wide range of security threats,” Petty says. “Today’s devices and systems are often connected to corporate networks, public clouds, or the Internet directly.”

Defense and intelligence networks will likely “embrace cloud capabilities and leverage commercial smartphone platforms while maintaining highly secure domains,” he points out. “This wide connectivity yields substantial gains in functionality and usability but also makes devices more vulnerable to attack, intrusion, and exploitation.”

The “connected era” is elevating secure connectivity to an essential system requirement, which wasn’t necessarily a top priority in the past. “Often, the embedded hardware and software from previous-generation devices weren’t designed to enable secure connections or to include network security components, such as firewalls, intrusion protection, or other security-focused functionality,” Petty adds. “Developers can’t assume network environments will be private and protected, nor can they predict how their devices might be connected in the field. They also can’t predict the impact of future connected devices on their products.”

The most efficient way to find the right balance between device capability and security “is by defining and prioritizing the device security requirements with the rest of the system and its development environment, including the network environment,” Petty says. “For maximum efficiency, this should be done early in the product life cycle.”

Emerging trends

One major trend is that “software-defined open virtualization solutions are proving to be a smarter way to implement next-gen military systems, since they’re easier to maintain and enable future software and hardware upgrades with minimal risks, costs, and downtime,” Petty says.

Today’s embedded microprocessors “have hardware-assisted virtualization IP that supports full operating system environments in virtualized machines within a shared compute environment,” he adds. “Although this capability has existed in enterprise and IT spaces for more than a decade, it’s just now becoming commonplace in embedded devices and OT [operational technology] environments. The reduced SWaP-C requirements make using virtualization a very attractive alternative for next-gen designs.”
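On a Linux x86 development host, the hardware-assisted virtualization support Petty describes can be spotted in the CPU flags: “vmx” indicates Intel VT-x and “svm” indicates AMD-V. The short Python check below is a development-host convenience, not something that would ship on a deployed target.

```python
# Quick check for hardware virtualization extensions on a Linux x86 host.
# Looks for the "vmx" (Intel VT-x) or "svm" (AMD-V) flags in /proc/cpuinfo.
def has_hw_virtualization(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return bool({"vmx", "svm"} & flags)
    except OSError:
        pass
    return False

if __name__ == "__main__":
    print("hardware virtualization available:", has_hw_virtualization())
```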

Another virtualization trend, while not exactly new, is a positive sign for military embedded systems: “Companies are figuring out how to certify the safety and security of hypervisors on multicore processors,” Downing says. “This is encouraging because avionics and other critical platforms can now keep up with innovations in hardware.” (Figure 2.)

Figure 2 | Trends in virtualization point toward certification of safety and security of hypervisors on multicore processors, which means that avionics and other mission-critical platforms can keep pace with hardware innovations.


For a few years, “simply doing a dual-core or quad-core was problematic, but the industry has now figured out how to solve the multicore contention and interference issues,” Downing adds. “It’s a huge trend because we can get a lot more capability into an embedded device now that it’s multicore and supports a wide range of operating systems and applications. Plus, these platforms can support not only a real-time operating system but also a larger operating system, like Linux, and run it on that virtualized platform. With this technology foundation in place, we can then easily upgrade that platform at a future date without disrupting the existing code.”

A really interesting new trend is the use of data distribution service (DDS) within virtualized multicore environments. “In the past, as companies integrated applications onto single-core ARINC 653 platforms, the demand for DDS waned because the ARINC 653 environment reliably managed the communications between the application partitions,” Downing says. “Now, with consolidated multicore platforms, the need for real-time communications between applications with a reliable quality of service capability is increasing because of the complexity of multicore processors and the potential multicore contention and interference that occurs with a mixed operating system virtualization environment. It’s literally a distributed system that needs a robust connectivity foundation to manage the interoperability between virtualized applications. The new distributed system is now a virtualized multicore platform.”
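DDS itself is data-centric publish/subscribe organized around topics with per-topic quality-of-service settings. The sketch below is a deliberately simplified, in-process stand-in written for illustration; it is not the RTI Connext API or the OMG DDS API, and the class and topic names are invented. It only shows the topic-plus-QoS shape of the communication between virtualized applications.

```python
# Illustrative in-process stand-in for DDS-style topic pub/sub (not a real DDS API).
from collections import defaultdict, deque
from dataclasses import dataclass
from typing import Callable

@dataclass
class Topic:
    name: str
    reliable: bool = True      # crude stand-in for DDS RELIABLE vs BEST_EFFORT QoS
    history_depth: int = 8     # crude stand-in for HISTORY depth

class Bus:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)
        self._history: dict[str, deque] = {}

    def subscribe(self, topic: Topic, callback: Callable[[dict], None]) -> None:
        self._subs[topic.name].append(callback)

    def publish(self, topic: Topic, sample: dict) -> None:
        hist = self._history.setdefault(topic.name, deque(maxlen=topic.history_depth))
        hist.append(sample)
        for cb in self._subs[topic.name]:
            cb(sample)  # a real DDS also handles discovery, transport, and QoS enforcement

# Two "virtualized applications" exchanging track data over a shared topic.
bus = Bus()
tracks = Topic("sensor/tracks", reliable=True)
bus.subscribe(tracks, lambda s: print("fusion app received:", s))
bus.publish(tracks, {"id": 42, "lat": 33.94, "lon": -118.40})
```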

Another emerging trend is to virtualize peripherals “that hadn’t previously been accessible to the microprocessor,” Ciufo says. “Early on, virtualization primarily relied on multicore processors to run multiple synthetic environments – typically one per core. Then Ethernet ports were virtualized so that eight Ethernet ports can be shared among four virtualized environments, which makes it look like 32 ports are available to the system.”
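Ciufo’s Ethernet example reduces to a simple mapping, sketched below with invented interface names: each of four virtual environments gets its own handle onto each of eight physical ports, so the system appears to expose 8 x 4 = 32 ports while the silicon still has eight.

```python
# Illustrative only: 8 physical Ethernet ports shared across 4 virtual environments.
physical_ports = [f"eth{i}" for i in range(8)]
virtual_envs = [f"ve{j}" for j in range(4)]

# Each environment gets its own virtual interface bound to every physical port.
virtual_ports = {
    (ve, port): f"{ve}:{port}.v"   # invented naming, just to count the mappings
    for ve in virtual_envs
    for port in physical_ports
}
print(len(virtual_ports))  # 32 virtual ports multiplexed onto 8 physical ones
```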

Ciufo says he is also noticing an effort to virtualize other processing resources, such as general-purpose computing on graphics processing units (GPGPUs) and other digital signal processing assets in the system known as coprocessors, which include algorithm processors, artificial intelligence processors, and vector processors. “These high-performance system resources do a lot of computational work,” he notes. “So the trend now is to virtualize algorithm processors like digital signal processors, GPGPU processors, and other compute resources that previously were dedicated to a processor but now have enough horsepower to also be virtualized and shared between synthetic environments.”

This is significant, according to Ciufo, because these coprocessing algorithm resources tend to be bolted to only one part of the system. Virtualizing them will “require new software to be written, and virtual environment providers will need to describe within their software how they plan to deal with data moving to and from those virtualized resources,” he says. “Since these are high-performance resources that work very quickly in terms of data throughput and movement, it also requires the virtualization companies that provide the software to rethink how they deal with their own passing of data in and out of the virtualized environment – including the interrupts that are required to deal with those resources. So it’s not a trivial task, but we’re definitely seeing a trend of using high-powered coprocessor compute resources and virtualizing them too.”