Next-generation multicore hypervisors accelerate legacy software migration in A&D systems
October 06, 2010
Maintaining deployed Aerospace and Defense (A&D) systems is a challenge that can span long life cycles, integrating legacy platforms with new capabilities while continuing to drive down Size, Weight, and Power (SWaP) demands. Deploying systems based upon advanced multicore processors running mixed OS and hypervisor environments is an effective strategy for bridging existing assets into future platforms while consolidating separate environments into a smaller system footprint.
There is an unstoppable trend in aerospace and defense systems toward more advanced multicore silicon platforms that offer far more processing power than legacy technologies. This not only enables a dramatic reduction in SWaP; when combined with advanced operating environments, it also enables a new generation of systems that can endure technology refresh cycles with significantly less testing. These platforms can now host multiple OS environments, such as VxWorks and Linux, delivering even greater scalability and economy than traditional platforms.
This is a significant departure from existing single-core, single-OS systems. It requires increased awareness of the underlying processor boards and peripherals, the interaction of the cores, and the supported OSs. A multicore platform shares more system resources (such as interrupt controllers, devices, and caches) than single-core and traditional multiprocessor environments, so configuring and deploying these systems demands greater attention to system design. Efficient embedded hypervisors, hosting diverse guest OSs and controlling the allocation and utilization of individual cores, offer a compelling way to abstract away a significant element of risk in next-generation programs.
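As a small illustration of why shared resources demand design attention, the sketch below shows one classic multicore pitfall: two cores incrementing logically independent counters that happen to share a cache line will contend for that line ("false sharing"). The 64-byte line size, the GCC alignment attribute, and the trivial workload are illustrative assumptions, not taken from any particular platform.

```c
/* Minimal sketch of cache-line contention on a multicore part.
 * Each counter is padded to its own (assumed 64-byte) cache line so the
 * core that writes it owns the line; removing the pad puts both
 * counters on one line and recreates the "false sharing" contention. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64            /* assumed cache-line size */
#define ITERATIONS 10000000UL    /* arbitrary workload */

struct counter {
    volatile uint64_t value;
    uint8_t pad[CACHE_LINE - sizeof(uint64_t)];
} __attribute__((aligned(CACHE_LINE)));

static struct counter counters[2];    /* one counter per core/thread */

static void *bump(void *arg)
{
    struct counter *c = arg;
    for (unsigned long i = 0; i < ITERATIONS; i++)
        c->value++;                   /* independent per-thread work */
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;

    pthread_create(&t0, NULL, bump, &counters[0]);
    pthread_create(&t1, NULL, bump, &counters[1]);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    printf("counts: %llu %llu\n",
           (unsigned long long)counters[0].value,
           (unsigned long long)counters[1].value);
    return 0;
}
```

Built with -pthread, the padded layout typically runs measurably faster on real multicore hardware than the same code without the padding, which is exactly the kind of interaction that single-core designs never had to consider.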
Enabling legacy in next-generation systems
One of the biggest challenges of migrating legacy applications into future systems is combining those applications with newer operating environments. Although this can be accomplished with AMP and SMP multicore configurations, an efficient embedded hypervisor not only allows legacy applications to be carried forward, but also enables a smaller platform footprint. This creates an efficient, forward-looking design that can easily sustain a hardware refresh cycle.
A multicore embedded hypervisor is the only platform that can truly enable rapid technology refresh. By abstracting away the exact hardware environment, it allows system designers to move from a four-core system to an eight-core platform in the same silicon family without forcing a complete retest of all applications. Let's take a closer look at these ideas.
Next-generation platform choices
There are three basic configuration options for multicore systems:
1. AMP – Asymmetric multiprocessing configurations, where each core has a separate instance of an OS.
2. SMP – Symmetric multiprocessing configurations, where one operating system controls access to all or a subset of the processor cores. Allocation of tasks/threads to a specific core, along with communication between threads/tasks running on separate cores, is managed by the SMP OS (see the affinity sketch after this list).
3. Embedded hypervisor – The most flexible method for configuring multicore systems is to employ a hypervisor that abstracts away the underlying hardware environment and controls partitioning of all processor cores and peripherals on the board.
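As an illustration of the second option, here is a minimal sketch of pinning a worker thread to a specific core under an SMP OS. It assumes a Linux guest and uses the GNU pthread affinity extensions; the chosen core index and the trivial workload are placeholders.

```c
/* Minimal sketch: under an SMP OS (here a Linux guest), the OS decides
 * which core runs each thread; the application can constrain that
 * choice with an affinity mask. Assumes the GNU pthread extensions. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    /* Placeholder workload; a real system would run the application
     * task that was assigned to this core. */
    printf("worker running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    cpu_set_t cpus;

    /* Ask the SMP scheduler to keep the new thread on core 1. */
    CPU_ZERO(&cpus);
    CPU_SET(1, &cpus);

    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```

An AMP configuration avoids this kind of in-OS core management entirely by booting a separate OS image per core, while the hypervisor option moves the decision out of the guest OSs and into the configuration of the virtual boards described next.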
An embedded hypervisor is a thin layer of code that partitions the hardware into virtual environments (referred to as "virtual boards") and runs an OS inside each virtual board. Virtual boards run in separate address spaces [protected by the Memory Management Unit (MMU)]. A virtual board can run on a single core, run SMP across multiple cores, or be scheduled alongside other virtual boards on a single core using either a priority-preemptive or time-based schedule. This is accomplished by virtualizing or partitioning key components of a system (a configuration sketch follows the list):
- CPU – By virtualizing the CPU, one can either run multiple virtual boards on top of one physical processing core or dedicate a single core or a set of cores to a single virtual board.
- Memory – Memory virtualization partitions the physical memory so that multiple virtual boards can each use a portion of the real memory. This allows more efficient memory use and creates an abstraction layer for separating and controlling memory access.
- Devices – Devices can either be partitioned (dedicated to a single virtual board) or virtualized and shared between multiple virtual boards.
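To make these ideas concrete, the sketch below shows the kind of static partition table an embedded hypervisor could consume at boot: each virtual board receives a set of cores, an MMU-enforced memory window, a list of dedicated devices, and a scheduling policy. All type, field, and device names here are hypothetical illustrations, not the configuration format of the Wind River Hypervisor or any other product.

```c
/* Hypothetical sketch of the static partition table an embedded
 * hypervisor might consume at boot. Every name and field here is
 * illustrative, not an actual product's configuration format. */
#include <stdint.h>
#include <stdio.h>

enum sched_policy {
    SCHED_PRIORITY_PREEMPTIVE,   /* virtual boards share a core by priority */
    SCHED_TIME_PARTITIONED       /* fixed time slices per virtual board     */
};

struct virtual_board {
    const char       *name;        /* label for the guest environment    */
    uint32_t          core_mask;   /* one bit per physical core assigned */
    uint64_t          mem_base;    /* guest memory window base ...       */
    uint64_t          mem_size;    /* ... enforced by the MMU            */
    const char      **devices;     /* devices dedicated to this guest    */
    enum sched_policy policy;
    uint8_t           priority;    /* used by priority-preemptive boards */
};

static const char *legacy_devs[] = { "uart0", "mil1553_0", NULL };
static const char *linux_devs[]  = { "eth0", "sata0", NULL };

/* Example layout: a legacy VxWorks guest pinned to core 0, and a Linux
 * guest running SMP across cores 1-3. */
static const struct virtual_board boards[] = {
    { "legacy_vxworks", 0x1, 0x00000000, 0x10000000,
      legacy_devs, SCHED_PRIORITY_PREEMPTIVE, 10 },
    { "linux_smp",      0xE, 0x10000000, 0x30000000,
      linux_devs,  SCHED_TIME_PARTITIONED,     0 },
};

int main(void)
{
    /* Print the partition layout, as a stand-in for the hypervisor
     * consuming it at boot time. */
    for (size_t i = 0; i < sizeof(boards) / sizeof(boards[0]); i++)
        printf("%-16s cores=0x%X mem=%llu MB\n", boards[i].name,
               (unsigned)boards[i].core_mask,
               (unsigned long long)(boards[i].mem_size >> 20));
    return 0;
}
```

Because each guest OS sees only its virtual board, a technology refresh from a four-core to an eight-core device of the same family can, in principle, be handled by widening the core masks and revalidating the hypervisor configuration rather than by modifying and retesting every application.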
Selecting the optimal hypervisor
Many different types of hypervisors are available; the best known are full-featured IT hypervisors such as VMware, KVM, and Xen. These hypervisors abstract the physical hardware and offer comprehensive features such as remote management, load balancing, and failover. But those features require expensive scheduling algorithms, making such hypervisors unsuitable for small, deterministic embedded systems, which instead need a thin hypervisor that preserves the real-time capabilities and determinism of the RTOS. Embedded hypervisors, such as the Wind River Hypervisor or the VxWorks MILS Separation Kernel (SK), are optimized for performance, isolation, and certification.
Moreover, hypervisor technology offers three key capabilities in the A&D market:
1. Ease of migration of legacy applications and operating environments into new platforms.
2. The ability to reduce SWaP by consolidating stand-alone federated systems into smaller, more efficient platforms.
3. Ease of future technology refreshes to denser multicore silicon in similar processor families.
The key value of a hypervisor for aerospace and defense is the separation of legacy software environments from new hardware environments, which significantly reduces both testing and upgrade time for next-generation systems.
Chip Downing is the Senior Director for Aerospace and Defense at Wind River. He can be contacted at [email protected].