Military Embedded Systems

Automated performance measurement and timing analysis help military embedded systems avoid early obsolescence

Story

March 07, 2012

Andrew Coombes

Rapita Systems

The ongoing success of military embedded systems on land, sea, and air depends on the ability to modify those systems to meet emerging requirements. Over time, accumulated modifications to software-based systems degrade the performance of the system. Eventually, that degradation leaves system developers with a choice: abandon planned new features, or replace the hardware and accept early obsolescence. There is an alternative. Automated performance measurement and timing analysis technology give developers the tools to optimize away much of the performance degradation caused by accumulated modifications, avoiding both outcomes.

Military embedded systems are typically enhanced many times during their lifetime. Many of these enhancements are software updates. Over time, the software updates cumulatively increase the demands placed on the computing platform. This can lead to the hardware’s capabilities becoming insufficient to meet application demands, potentially resulting in intermittent failures.

System developers then face the difficult choice of either abandoning planned new features, leading to capability decay, or replacing the hardware (that is, early obsolescence).

A viable alternative requires the identification of high-impact, low-risk strategies for optimizing software, thereby maximizing the service life of the computing platform. This alternative includes automated performance measurement and timing analysis.

The problem of performance

Military embedded systems, especially avionic systems such as the BAE Systems Hawk's mission control computer, are often real-time systems. Real-time systems are distinct because their correct behavior depends both on their operations being logically correct and on the time at which those operations are performed. Engineers developing these systems must be able to provide convincing evidence that the software always executes within its time constraints.

The nature of software means that every execution can take a different path through the code, leading to different execution times. Even when the system is used in the same way, differences in internal state can produce widely varying execution times. Because of this, it is entirely possible to test software rigorously without seeing any timing problems, then to encounter a situation in actual use that causes significant ones. To be sure a system always meets its timing constraints, it is therefore necessary to establish its Worst-Case Execution Time (WCET), which is also a consideration under DO-178B.

Finding Worst-Case Execution Time

Measurement is an approach often taken to obtain confidence in the timing behavior of a real-time system. To measure timing, engineers typically place instrumentation points at the start and end of sections of code they wish to measure. These points record the elapsed time, either by toggling an output port (monitored via an oscilloscope or logic analyzer) or by reading an on-chip timer and recording the resulting timestamps in memory.

Unfortunately, the largest observed execution times, known as high-water marks, might not reflect the longest time that the code could take to execute. This happens when the longest path through the code has not been exercised by tests, as illustrated in Figure 1. Two tests, represented in Figure 1 by the green path and the blue path, are run. The observed execution times from these tests are 110 and 85 respectively. Although these tests execute all code in the software, there is a third path (shown in red) with an execution time of 140, making it the longest path.

 

Figure 1: Execution paths: High-water marks might not reflect the longest time that the code could take to execute. This happens when the longest path through the code has not been exercised by tests.


This example shows that simply executing all code isn’t enough to exercise the longest path. For nontrivial code, it is very hard to devise tests that are certain to drive the code down its longest path. This situation can be avoided by adding instrumentation points at each decision point in the code. Whenever an instrumentation point is executed, its ID and a timestamp are recorded. Running a series of tests on the system results in the creation of a timing trace. Combining the timing information from the trace with information about the structure of the code makes it possible to derive detailed information about the timing behavior of the software, including predictions of WCET.

For typical military applications, which can run into millions of lines of code, it would be extremely laborious to instrument programs by hand; moreover, the volume of trace data typically produced would make manual attempts to combine trace data with program structural information infeasible. Fortunately, the tasks of program instrumentation, trace processing, combining trace data with program structural information, and data mining/presentation are all amenable to automation. RapiTime from Rapita Systems is an automated performance measurement and timing analysis technology that helps solve the challenge of obtaining detailed timing information about large military embedded systems implemented in C, C++, or Ada.

Performance optimization

Knowing the WCET is only one part of the solution: When faced with the problem of a software component that overruns its execution time budget, it is essential that a systematic, scientific approach is taken to optimizing the component’s performance.

Software performance optimization requires three questions to be answered:

  • Where is the best place to optimize?
  • Is the proposed optimization making an improvement?
  • How much improvement can be made?

Where is the best place to optimize?

In a typical complex application:

(1) Most subprograms are not actually on the worst-case path; they contribute nothing to the worst-case execution time. Optimization of these subprograms would not reduce the WCET at all.

(2) Many subprograms contribute a small amount to the WCET and so do not represent good candidates for optimization. Effort spent optimizing these subprograms would not constitute an effective use of resources.

(3) A small number of subprograms contribute a large fraction of the overall WCET (Figure 2). These subprograms are the prime candidates for optimization.

 

Figure 2: Cumulative contribution of subprograms to the overall WCET


By inspecting WCET information, engineers can easily identify a relatively small number of components where optimization could potentially have a large impact on the overall worst-case execution time.

Am I improving things?

It is sometimes tempting to short-circuit the analysis process by guessing where the worst-case hotspots are, optimizing that code, and then seeing what the effect is. However, the experience of software optimization tells us that even highly skilled software engineers with an in-depth understanding of their code find it almost impossible to identify the significant contributors to the WCET, and hence the best candidates for optimization, without access to detailed timing information.

Often it seems so obvious – “It must be that section of code that makes all those floating-point calculations that is the best candidate for optimization” – when actually, some innocuous-looking assignment hides a memory copy that is taking nearly all of the time. The answer to this problem is simple: Don’t guess, measure. Then repeat the measurement to quantify the improvement (or lack thereof).

How much improvement can be made?

Table 1 indicates the level of improvement in Worst-Case Execution Time that can be obtained through a straightforward process of software optimization. These results were achieved using RapiTime technology to provide detailed timing information on the mission computer of a BAE Systems Hawk. The optimizations led to an overall decrease of 23 percent in WCET.

 

Table 1: Optimization improvements on a BAE Systems Hawk mission computer


The benefits of WCET and performance optimization

Access to automated performance measurement and detailed timing analysis during the modification of military embedded systems can provide a number of advantages to the developer:

1. A systematic, scientific approach is used to gain confidence in the system’s timing behavior.

2. Detailed information about worst-case execution time allows candidates for optimization to be quickly identified.

3. Automated measurement allows the effectiveness of candidate optimizations to be assessed.

The ability to make the most effective timing optimizations keeps the hardware from becoming unnecessarily obsolete, eliminating the forced choice between abandoning planned new features and replacing the hardware early.

Dr. Andrew Coombes is Marketing and Engineering Services Manager at Rapita Systems. For the past 15 years, he has helped develop and commercialize software tools for embedded, real-time applications. He received his DPhil in Computer Science at the High-Integrity Systems Engineering Group at the University of York (UK) before working in a consultancy and for the BAE Systems Dependable Computing Systems Centre (DCSC). Contact him at [email protected]

Rapita Systems +44 1904 567747 www.rapitasystems.com

 

Featured Companies

Rapita Systems

41131 Vincenti Ct.
Novi, MI 48375