Military Embedded Systems

Test and analysis tools help verify and enforce security in military systems

September 15, 2016

Chris Tapp

LDRA Technology

Jay Thomas

LDRA Technology

Secure coding practices, properly tested and verified, can help assure the reliable and safe operations of military systems. Organizations should start from the ground up, using a combination of static and dynamic analysis, unit and integration testing, and requirements traceability.

Security breaches in military systems can be devastating. One example of a security disruption with severe consequences was the purported capture in 2011 of a U.S. RQ-170 unmanned aerial vehicle (UAV) operated by the CIA over Iran. According to Iran, the craft was landed safely by Iranian cyberwarfare units that managed to take it over. The assertion was that the UAV was captured by jamming both the satellite- and land-originated control signals to it, followed by a GPS spoofing attack that fed the UAV false coordinates so that it landed in Iran at what it took to be its home base in Afghanistan.

While the actual details may never be clear, it does appear that the drone was compromised to the extent that it could be safely landed in Iranian territory and passed into the possession of the enemy for possible reverse-engineering. Something in the software of that drone allowed access to at least one part of the system, which apparently opened access to its vital internals.

Securing embedded systems

Embedded systems now pervade the military in everything from vehicle control, communications, weapons control, and guidance to autonomous and semi-autonomous systems, including UAVs and similar craft. These devices are now interconnected for control and coordination purposes. In the interests of personnel safety, the ability to accomplish their mission, and often of national security, these devices must be safe for their operators and reliable in their operations. In addition, it is imperative that they be secure from unauthorized access and attack. If they are not secure, they cannot be considered safe or reliable. Thus the requirements for safety, reliability, and security are inseparable and interdependent.

Such requirements cannot be afterthoughts, but must be built in from the ground up. They also often require that software adhere to certain coding guidelines such as MISRA or CERT C and must follow industrial or government-mandated standards such as DO-178C. As these systems increasingly become subject to certification requirements, correctness in coding and functionality must be proven and documented.

Despite the many strategies available to implement security, it is still necessary to ensure that they are correctly coded – both in terms of coding standards and in terms of correct functionality in the overall application. Protocols such as transport layer security (TLS) – the successor to the secure sockets layer (SSL) – and the secure file transfer protocol (SFTP) are now widely used but are often acquired from outside the organization. Other strategies include secure device drivers, procedures for the remote installation of secure and encrypted firmware upgrades, and personal-verification mechanisms such as passwords, retina scans, and radio-frequency identification (RFID) chips to secure access. Layered security strategies that allow only selected access to parts of the system can likewise introduce flaws that can be exploited if not detected.

In the past, organizations may have been able to check their code with manual code reviews and software walkthroughs. However, the size and complexity of today’s critical programs make it impossible to assure complete analysis with such methods and means. A new arsenal of test and analysis tools and methods is needed to meet today’s security requirements.

Establishing and enforcing security

Today’s comprehensive tool suites integrate tools for testing, analysis, and verification in a single development environment. The use of the tool environment may also help establish a disciplined methodology within an organization that can help teams cooperate even though they may be working in different locations.

In order to meet certification or qualification requirements, tools that enable bidirectional requirements traceability – from requirements and design to implementation, verification activities, and artifacts – can differentiate an organization from the competition and ensure the shortest path to device approval. A requirements-management tool allows teams to work on individual activities and link code and verification artifacts back to higher-level objectives.

Bidirectional traceability, based on a requirements document, is needed to ensure that every high-level requirement is covered by one or more low-level requirements and that every low-level requirement can be traced back to a high-level requirement.
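As a minimal illustration of how such links can be captured at the code level – the tag convention, requirement identifiers, and function names below are hypothetical, not any particular tool's syntax – requirement tags embedded in comments let a requirements-management tool build the trace matrix in both directions:

#include <stdbool.h>

/* Hypothetical requirement tags (illustrative only):
 * HLR-042:   Navigation updates that fail integrity checks shall be rejected.
 * LLR-042.1: validate_nav_update() shall return NAV_REJECTED when the
 *            message integrity check fails.
 */

typedef enum { NAV_ACCEPTED, NAV_REJECTED } nav_status_t;

typedef struct {
    unsigned int payload;
    unsigned int check;   /* integrity check value carried by the message */
} nav_msg_t;

static bool integrity_ok(const nav_msg_t *msg)
{
    /* Placeholder integrity check, standing in for a real MAC. */
    return (msg->check == (msg->payload ^ 0xA5A5A5A5u));
}

/* Implements LLR-042.1; traces up to HLR-042. */
nav_status_t validate_nav_update(const nav_msg_t *msg)
{
    return integrity_ok(msg) ? NAV_ACCEPTED : NAV_REJECTED;
}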

Beyond that, tools are also needed to perform extensive foundational tests based on static analysis, dynamic coverage analysis, and unit/integration testing. These results help assure security, functional safety, and compliance with coding standards, as well as the ability to trace requirements and confirm that they actually function as expected.

Static and dynamic analysis: partners in security

In assuring security, the two main concerns are data and control. Designers must consider who has access to data, who can read and write it, how the data flows, and the different levels of access and control. To address these issues, static and dynamic analysis must be used together.

On the static-analysis side, the tools work with the uncompiled source code to check it against the selected rules, which can be any combination of the supported standards along with any custom rules and requirements that the developer or company may specify. The tools can also look for software constructs that can compromise security, check memory protection to determine who has access to which portions of memory, and trace pointers that may reference a given memory location. For best results, the information should be presented in graphical screen displays for easy assessment of coding-standards compliance.
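A minimal sketch of the kind of construct such tools flag – the function names are hypothetical, but the defect pattern is covered by real rules such as CERT C STR31-C – is an unbounded copy of externally supplied data into a fixed-size buffer:

#include <stdio.h>
#include <string.h>

#define ID_LEN 16u

/* Flagged by static analysis: externally supplied data flows into a
 * fixed-size buffer with no bounds check (cf. CERT C STR31-C). */
void store_id_unsafe(char dest[ID_LEN], const char *input)
{
    strcpy(dest, input);               /* possible buffer overflow */
}

/* Compliant rework: bounded copy with guaranteed null termination. */
void store_id_safe(char dest[ID_LEN], const char *input)
{
    (void)snprintf(dest, ID_LEN, "%s", input);
}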

Dynamic analysis tests the compiled code, which is linked back to the source code using the symbolic data generated by the compiler. Dynamic analysis – especially code-coverage analysis – requires extensive testing. Developers can generate and manage their own test cases by hand, working from a requirements document; this traditional method stimulates and monitors sections of the application with varying degrees of effectiveness. Given the size and complexity of today's code, however, it is often insufficient on its own to achieve certain required certifications.

 

Figure 1: The dynamic-analysis capabilities of the LDRA tool suite produce reports of variable and parameter usage that are based on the current test run. The report highlights the file and location within the file where the variable was used, with custom filters that allow more refined testing.



Security requires rigorous and thorough testing for functional vulnerabilities as well as for adherence to coding rules and directives in the running application. Coverage-analysis requirements – statement coverage, branch/decision coverage, procedure/function-call coverage, or, in more rigorous environments, modified condition/decision coverage (MC/DC) – can often require both source- and object-code analysis. They will likely also require automated test generation as a means of measuring the effectiveness of the testing (Figure 1).
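To see why MC/DC demands more than branch coverage, consider a decision with three conditions (a hypothetical example, not drawn from any particular system). Branch coverage is satisfied by only two tests, but MC/DC requires demonstrating that each condition independently affects the outcome:

#include <stdbool.h>

/* One source-level decision with three conditions. */
bool release_authorized(bool operator_ok, bool target_locked, bool safety_off)
{
    return operator_ok && target_locked && safety_off;
}

/* A minimal MC/DC test set needs N+1 = 4 cases for N = 3 conditions:
 *   T1: (true,  true,  true ) -> true    baseline
 *   T2: (false, true,  true ) -> false   operator_ok alone flips the result
 *   T3: (true,  false, true ) -> false   target_locked alone flips the result
 *   T4: (true,  true,  false) -> false   safety_off alone flips the result
 * Branch coverage would be satisfied by T1 and T2 alone. */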

Automatic test generation is based on the static analysis of the code and uses this information to determine the proper stimuli for the software components in the application during dynamic analysis. This backbone of essential boundary-value testing can easily be extended with functional tests created manually from the requirements document. These should include functional security tests, such as simulated attempts to seize control of a device or to feed it incorrect data that would change its mission. In addition, functional testing should cover robustness, such as checking the results of disallowed inputs and anomalous conditions.
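A sketch of what such boundary-value tests, extended with robustness cases, might look like – clamp_fin_angle() and its limits are hypothetical stand-ins for a real unit under test:

#include <assert.h>
#include <stdint.h>

#define FIN_MIN (-30)
#define FIN_MAX ( 30)

/* Unit under test: clamp a commanded fin angle to its allowed range. */
int32_t clamp_fin_angle(int32_t deg)
{
    if (deg < FIN_MIN) { return FIN_MIN; }
    if (deg > FIN_MAX) { return FIN_MAX; }
    return deg;
}

int main(void)
{
    /* Boundary values: at and just outside each limit. */
    assert(clamp_fin_angle(FIN_MIN)     == FIN_MIN);
    assert(clamp_fin_angle(FIN_MIN - 1) == FIN_MIN);
    assert(clamp_fin_angle(FIN_MAX)     == FIN_MAX);
    assert(clamp_fin_angle(FIN_MAX + 1) == FIN_MAX);

    /* Robustness: anomalous inputs far outside the expected envelope,
     * such as spoofed or corrupted command data. */
    assert(clamp_fin_angle(INT32_MIN) == FIN_MIN);
    assert(clamp_fin_angle(INT32_MAX) == FIN_MAX);
    return 0;
}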

Diving into the code

Finding security flaws can involve more subtle issues. For example, there is danger associated with areas of "dead" code that could be activated by a hacker or by obscure events in the system for malicious purposes. Although it is ideal to implement security from the ground up, most projects include pre-existing code that may not have been subjected to the same rigorous testing as the current project. Used together, static and dynamic analysis can reveal areas of dead code, which can be a source of danger or may simply take up needed space.
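A small illustration of the hazard – the names and the deliberately simplified configuration flag are hypothetical – is a maintenance path that is unreachable in the deployed build but remains in the binary. Static analysis reports the branch as dead, and coverage analysis confirms it never executes:

#include <stdbool.h>

#define MAINT_MODE false   /* compile-time configuration */

bool command_allowed(unsigned int cmd)
{
    if (cmd > 0x7Fu) {
        return false;
    }
    /* With MAINT_MODE fixed at false, this branch is unreachable: static
     * analysis flags it as dead, and coverage analysis confirms it never
     * runs. Left in the binary, it is exactly the kind of latent path an
     * attacker might try to reactivate. */
    if (MAINT_MODE) {
        return true;       /* bypasses the range check above */
    }
    return (cmd != 0u);
}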

The ability to distinguish between truly dead code and seldom-used code is yet another reason why bidirectional requirements traceability is important: it makes it possible to check that every requirement is met by code in the application and also to trace code in the application back to a requirement. If neither of those routes shows a connection, the code does not belong there.

Static analysis therefore serves both to analyze source code for proper programming practices and to help dynamic analysis set up for coverage testing, functional testing, and control- and data-flow analysis. The latter are essential to highlight and correct potential problem areas and to produce software-quality metrics.

Companies developing to meet stringent security requirements in airborne or combat systems may be required to demonstrate analysis of data flow and control flow for software certification. In the case of certifying airborne software and systems under DO-178C, verification is required at the object level. This involves the ability to relate code coverage at the source-code level with that achieved at the object-code level. In some cases, it may also be necessary to provide the mechanism to extend the code coverage at the assembler level. This extension can be especially helpful for certification at DO-178C Level A, where software failure could result in loss of aircraft and/or loss of life.
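The classic reason source-level coverage can diverge from object-level coverage is short-circuit evaluation. The sketch below is illustrative only, since the exact object code depends on the compiler:

#include <stdbool.h>

/* A single source statement... */
bool gate(bool a, bool b)
{
    return a && b;   /* one source-level decision */
}

/* ...typically compiles to two conditional branches at the object level
 * (short-circuit evaluation: test 'a', branch; test 'b', branch).
 * Source-level statement coverage can report 100 percent while one of
 * those object-code branches was never taken, which is why DO-178C
 * Level A requires the source-to-object coverage correlation described
 * above. */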

Start with unit testing and grow the project

Thinking about and developing for security from the ground up doesn't help much unless testing can also occur from the ground up – and that includes testing on a host development system before target hardware is available. At this early stage, long before the project nears completion, it is possible to do unit testing and then integration testing as pieces come together from different teams or developers.

This approach also applies to code that may be written from scratch, brought in from other projects, purchased as a commercial product, or obtained as open source. Even in-house code needs to be checked, because it may not originally have been subjected to the same analysis. The case for unit-test tools ultimately comes down to economics: the later a defect is found in product development, the more costly it is to fix (Figure 2).

 

Figure 2: The cost of fixing defects increases dramatically the later in the development cycle they are addressed.



Functional testing on the host can be done without consideration for hardware timing and in some cases can be performed on a host-based virtual target with simulated peripherals. The same tests executed on the host must then also be executed on the target hardware to confirm correct functional behavior there.
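As a minimal sketch of host-based testing before hardware exists – read_altimeter() and filtered_altitude() are hypothetical names – the hardware-access layer is stubbed so the logic above it can be exercised on the host, with the same test rerun later on the target:

#include <assert.h>
#include <stdint.h>

/* Host-side stub: on target, read_altimeter() would talk to the real
 * sensor; here it replays canned values. */
static const int32_t fake_samples[] = { 1000, 1003, 997, 1001 };
static unsigned int sample_idx;

int32_t read_altimeter(void)
{
    int32_t v = fake_samples[sample_idx];
    sample_idx = (sample_idx + 1u) % 4u;
    return v;
}

/* Unit under test: simple moving-average filter over the sensor. */
int32_t filtered_altitude(void)
{
    int32_t sum = 0;
    for (unsigned int i = 0u; i < 4u; i++) {
        sum += read_altimeter();
    }
    return sum / 4;
}

int main(void)
{
    assert(filtered_altitude() == 1000);  /* (1000+1003+997+1001)/4 */
    return 0;
}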

Applying a comprehensive test and analysis framework to an organization's development process greatly improves the thoroughness and accuracy of the security measures that protect vital systems.

Jay Thomas, a technical development manager for LDRA Technology, has been working on embedded software applications in aerospace systems since 2000. He specializes in embedded verification implementation and has helped clients on projects including the Lockheed Martin JSF and the Boeing 787, as well as on medical and industrial applications.

Chris Tapp is a field applications engineer at LDRA with more than 20 years of experience in embedded software development. He graduated from the University of Durham in 1987 and has spent most of his career working in the automotive, industrial control, and information technology industries. He serves on the MISRA C working group and is currently chairman of the MISRA C++ working group.

LDRA www.ldra.com

 
