Military Embedded Systems

Allowing for GPOS and RTOS: The unique virtualization needs of mission-critical embedded systems


September 23, 2009

Chris Main

TenAsys

The prevalence of multicore processors on the computing scene is now a fact of life, and OEMs are experimenting with ways to partition their applications on different processor cores. In the embedded computing world, this can mean hosting multiple, heterogeneous operating systems on the same processor chip at the same time. Thus, virtualization is key. While some Virtual Machine Managers (VMMs) blend only General Purpose Operating Systems (GPOSs), mission-critical virtualization schemes containing both Real-Time Operating Systems (RTOSs) and GPOSs present their own unique challenges. Chris reveals the finer points of virtualization and determinism, presents an example, then explains how legacy applications fit into the scenario.

Considering the ever-increasing pervasiveness of multicore processors in the embedded realm, virtualization is the key to enabling multiple operating systems to coexist on a multicore processor chip. However, each virtualized embedded system is different, particularly when an embedded system involves mission-critical or highly secure applications running on separate guest OSs on the same platform. Often, a Real-Time Operating System (RTOS) and a General Purpose Operating System (GPOS) will be combined. Different OSs are required because real-time or machine-directed tasks have different needs for OS functionality than general purpose or human-directed tasks.

But how virtualization is implemented makes all the difference when building such mission-critical embedded systems. The responsiveness of the system must be preserved, which means preserving the original system’s ability to respond to stimuli in a time-predictable and repeatable, that is, deterministic, manner. But therein lies the challenge: Not all Virtual Machine Manager (VMM) implementations are created equal. The VMMs used in server applications, for example, make maximum resource utilization their highest priority, while hypervisors built for the needs of telecommunications applications typically focus on data throughput.

However, neither focuses on responsiveness to external events, as is needed in virtualization schemes for real-time defense applications. For most mission-critical military embedded systems, a special kind of virtualization – embedded virtualization – is required in order to respond with determinism to a range of external events. This ideal virtualization approach for military embedded systems additionally allows OEMs to save investment cost and preserve intellectual property by making it easy to host their legacy, real-time applications alongside new system elements with minimal, if any, changes to their existing code. Hosting legacy applications is much tougher than simply executing the code on a VMM or hypervisor, however. Our discussion explores the relationship between virtualization and determinism, includes an example, and explains how to fold in legacy applications.

Deterministic response to events

The key to building a deterministic VMM is to first deal with the issue of how processor interrupts are delivered to each of the guest operating environments. Ensuring determinism with the shortest possible response times requires differentiating between I/O resources that can be virtualized and those that should not be. For example, disk accesses are typically not time-critical elements in an embedded system, so they can be virtualized. In this way, a single disk can be shared between multiple operating environments. In contrast, interrupts from hardware devices, such as encoders that provide inputs for closed-loop motion control, need to be handled according to a precise schedule in order for the application to work predictably. So the interrupt inputs to these processes must be, in effect, “hardwired” to the processor that runs the real-time control programs.
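
To make the idea concrete, the following sketch pins a device interrupt to a single core by writing a CPU mask through a Linux-style /proc/irq interface. The interface, the IRQ number (19), and the mask (0x2, core 1) are illustrative assumptions only; a given embedded VMM will expose its own mechanism for routing interrupts to the core that owns the real-time workload.

    /* Illustrative sketch: pin a hypothetical device IRQ to CPU core 1 so the
     * real-time guest that owns that core services it directly. A Linux-style
     * /proc/irq interface is assumed here purely for illustration; an embedded
     * VMM would expose its own interrupt-routing mechanism. */
    #include <stdio.h>
    #include <stdlib.h>

    static int pin_irq_to_core(int irq, unsigned cpu_mask)
    {
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f) {
            perror("fopen");
            return -1;
        }
        fprintf(f, "%x\n", cpu_mask);  /* e.g. 0x2 = core 1 only */
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* IRQ 19 and mask 0x2 are hypothetical values for illustration. */
        return pin_irq_to_core(19, 0x2) ? EXIT_FAILURE : EXIT_SUCCESS;
    }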

An embedded virtualization platform must enable isolation between multiple operating systems, with a minimum of virtualization overhead. Accordingly, operating system software and applications hosted by this platform are allowed direct access to critical I/O devices in order to maintain deterministic response to device events. General-purpose virtualization approaches that virtualize the entire machine environment might maximize the utilization of the CPU at the expense of responsiveness to external events. They attempt to maximize the utility of the platform and typically do this by allocating work to CPUs as they become available. In this way they can use a high proportion of the available CPU cycles (over 90 percent claimed in some cases), which allows them to reduce the hardware cost of running a given number of server applications.

In contrast, an embedded VMM implementation must maximize the predictability of applications’ responses to hardware events; CPU utilization is not as important. The overriding factor is the performance of a given interface to the CPU, which the VMM ensures by isolating hardware between virtual environments.
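
At the application level, the same principle shows up as dedicating a core to the time-critical work. The sketch below binds a control thread to one core and gives it a fixed real-time priority using POSIX calls; these calls stand in for whatever primitives a particular RTOS or VMM actually provides, and the core number and priority are hypothetical.

    /* Sketch only: bind a real-time worker thread to core 2 and run it under
     * SCHED_FIFO. The POSIX calls, core number, and priority are stand-ins for
     * whatever primitives a particular RTOS or VMM actually provides. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *control_loop(void *arg)
    {
        (void)arg;
        /* ... deterministic control work would run here ... */
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 80 };  /* illustrative priority */
        cpu_set_t cpus;
        pthread_t t;

        CPU_ZERO(&cpus);
        CPU_SET(2, &cpus);  /* core 2: hypothetical choice of dedicated core */

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);
        pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

        if (pthread_create(&t, &attr, control_loop, NULL) != 0) {
            fprintf(stderr, "pthread_create failed (SCHED_FIFO may need elevated privileges)\n");
            return 1;
        }
        pthread_join(t, NULL);
        return 0;
    }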

Example: GPOS/RTOS virtualization

Consider the following example, wherein isolation of hardware between virtual environments is a factor in a system using multiple computing subsystems. The application’s purpose is to retrieve small unmanned vessels and place them aboard Navy ships. The problem is how to guide a robot that is affixed to a ship so that it can attach a line or fixture to an unmanned floating vehicle while both are tossing at sea. The robotic crane uses a vision system to see exactly where the vehicle’s attachment point is and runs algorithms that predict where it will be in the future as it is moved by the action of the sea. The robot crane is guided by the motion control system to the place where a hook or latch is predicted to be, and contact is made as the two come together.

As stated, this application utilizes multiple computing subsystems. One of the processors is dedicated to processing tasks relating to the vision system, while the tasks associated with driving the motion subsystem are dedicated to another CPU. This scheme allows the processing of the vision system to monopolize the cycles of one processor without affecting the other functions of the system, and vice versa. A third processor, with no time-critical processing needs, supports the Human-Machine Interface (HMI). Before the advent of multicore CPUs and VMM software, these three processing subsystems (see Figure 1) would have been implemented as three separate computing units, with their own processor cards, memories, power-conditioning circuitry, and so on.

 

Figure 1: Embedded system built as a set of independent subsystems



Now, the three separate subsystems can be hosted on different cores of the same processor chip, enabling system cost savings without sacrificing performance and determinism of the separate functions. As Figure 2 shows, one of the keys to maintaining the responsiveness of the system is to dedicate processor cores and associated I/O to separate operating environments.
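
The fragment below sketches, in C, the kind of static partition description that captures this arrangement: each guest owns one core and its time-critical devices outright, while only non-critical resources are shared. The structure, guest names, and device lists are hypothetical illustrations, not the configuration format of any particular VMM product.

    /* Hypothetical partition table for the three-subsystem design: one core and
     * its time-critical I/O per guest, with only non-critical resources shared. */
    #include <stdio.h>

    struct partition {
        const char *guest;      /* operating environment hosted in this partition */
        int         core;       /* processor core dedicated to the guest */
        const char *direct_io;  /* devices passed through, not virtualized */
        const char *shared_io;  /* non-time-critical devices that may be virtualized */
    };

    static const struct partition system_map[] = {
        { "Vision RTOS", 0, "camera frame grabber, vision interrupt",  "disk, virtual Ethernet" },
        { "Motion RTOS", 1, "encoder inputs, motion-control interface", "disk, virtual Ethernet" },
        { "HMI GPOS",    2, "display, keyboard",                        "disk, virtual Ethernet" },
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof(system_map) / sizeof(system_map[0]); i++)
            printf("core %d -> %s (direct I/O: %s)\n",
                   system_map[i].core, system_map[i].guest, system_map[i].direct_io);
        return 0;
    }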

 

Figure 2: Multicore processors and Virtual Machine Manager software enable multiple processing subsystems to be implemented on the same platform, saving system costs without sacrificing determinism.


To preserve determinism, embedded VMM developers must plan virtualization carefully so that interrupt overhead is predictable, measurable, and minimal. Also, each CPU core must have its own task scheduler and virtual machine, rather than using a single master scheduler designed to share multiple cores. An added benefit of this approach is elimination of the overhead associated with a master scheduler, which many virtualization schemes use to implement Symmetrical Multiprocessing (SMP) to manage execution of multiple GPOSs on multiple processor cores. The SMP scheduler has a relatively high overhead compared to the embedded multiprocessing approach, which does not impose scheduling policies at a system level.
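
One practical way to confirm that interrupt and scheduling overhead really is predictable, measurable, and minimal is to time a periodic real-time task on its dedicated core and record the worst-case deviation from the nominal period. The following measurement sketch uses POSIX clock calls as stand-ins for platform-specific timing services; it is a generic test harness, not part of any specific VMM.

    /* Sketch: measure the scheduling jitter of a 1 ms periodic loop. The
     * worst-case deviation from the nominal wakeup time is a rough proxy for
     * how deterministic the core/VMM configuration is. */
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <time.h>

    #define PERIOD_NS  1000000L   /* 1 ms nominal period */
    #define ITERATIONS 10000

    static long ns_diff(struct timespec a, struct timespec b)
    {
        return (a.tv_sec - b.tv_sec) * 1000000000L + (a.tv_nsec - b.tv_nsec);
    }

    int main(void)
    {
        struct timespec next, now;
        long worst = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < ITERATIONS; i++) {
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);

            long late = ns_diff(now, next);   /* how far past the deadline we woke */
            if (late > worst)
                worst = late;
        }
        printf("worst-case wakeup latency: %ld ns over %d cycles\n", worst, ITERATIONS);
        return 0;
    }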

Folding in real-time, legacy processes

As mentioned earlier, one of the most valuable uses for multi-OS embedded systems is incorporating legacy real-time processes into new or upgraded products. Military OEMs typically have a large intellectual property investment that they don’t want to risk or discard when moving to a new platform. Typically, these OEMs will start out by just running their legacy RTOS alongside Windows on a VMM. But as time goes on, they might find they need expanded real-time functionality and end up running multiple RTOSs alongside Windows.

When integrating legacy applications into a virtualization scheme that combines an RTOS and a GPOS, a second issue besides determinism is managing efficient communication between the environments. To move legacy applications from a multiplatform environment such as that shown in Figure 1 to a single-platform, multicore, multi-OS environment (see again Figure 2), it pays to virtualize standard resources that are not time-critical (disk services for booting, serial terminal services for logging, and Windows virtual communications services such as virtual Ethernet and virtual serial interconnects), while refraining from virtualizing I/O that is critical to delivering determinism (for example, the motion-control interface and vision subsystem). To achieve maximum performance with minimal lost data (the typical requirement is none), handling the interrupt deterministically is critical.
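
For the non-time-critical path between environments, a virtual Ethernet link behaves like any other network interface. The sketch below sends a short status message from the real-time side to a listener on the Windows side over TCP; the address, port, and message content are hypothetical values on an assumed internal virtual network segment.

    /* Sketch: send a status message across a virtual Ethernet link between
     * guest environments. The address 192.168.100.2 and port 5000 are
     * hypothetical values on an assumed internal virtual network segment. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in peer = { 0 };
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(5000);                        /* hypothetical port */
        inet_pton(AF_INET, "192.168.100.2", &peer.sin_addr);  /* GPOS side of the link */

        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        const char *msg = "capture-arm status: latched, position nominal\n";
        send(fd, msg, strlen(msg), 0);  /* status traffic only; no hard deadline */
        close(fd);
        return 0;
    }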

An additional aspect of making legacy software work easily in an embedded virtual environment is that of providing support for software loading. Instead of requiring each guest operating system to be modified using a special Board Support Package (BSP) to make the software operate properly in the VMM, the embedded VMM platform should allow each guest operating system to boot as it normally would on a PC, without change. This eliminates the need for software modifications and can considerably decrease the cost and simplify the implementation of embedded systems.

Leveraging multicore chips via embedded virtualization

Embedded virtualization gives OEMs the opportunity to take full advantage of the new multicore processor chips in platforms where RTOS and GPOS both reside, to decrease system costs and preserve legacy code without sacrificing determinism. This new technology could not have come at a better time for the military embedded systems marketplace, as all OEMs are looking for ways to increase their efficiency.

Chris Main, CTO, has led the development of TenAsys’ virtualization technologies. He earned a graduate degree in Physics from York University (UK) and a postgraduate degree in Education from Bath University. Chris has worked in real-time systems starting with minicomputers and later worked in the iRMX group at Intel. He was on the original development team for INtime and is a cofounder of TenAsys. He can be contacted at [email protected].

TenAsys Corp. 503-748-4720 www.tenasys.com

 
