Military Embedded Systems

Mitigating security risks early in the development life cycle

September 04, 2012

Jane Goh

Coverity

By limiting the number of primitives within code, developers can make the process of exploiting software much more difficult for hackers, thereby increasing the cost of exploitation and reducing its likelihood.

Software has increasingly become crucial to a military’s field defense and combat support capabilities. Embedded software in military and aerospace systems has to be both reliable and secure because security vulnerabilities can be just as dangerous as the functional defects the industry has developed so many controls to prevent.

Many of the same techniques used to address functional or quality defects can also reduce security vulnerabilities. When it comes to software development, security defects should be treated like software defects and managed as part of the development process. Indeed, the distinction between security and quality can sometimes be a subtle one; the defect that manifests itself as a system failure today could be exploited by an attacker tomorrow.

Defects are essentially potential exploitation primitives[1] that can be creatively strung together by hackers into an attack. Developers can make the process of exploiting software much more difficult for the attacker by eliminating as many primitives as possible. The following example illustrates how multiple primitives can be chained together to achieve remote code execution.

Example of a multi-primitive attack

Let’s assume that a security vulnerability exists in the code that resides on a remote server. While identifying the root cause is sufficient for remediating the flaw, successful exploitation of that vulnerability depends on multiple pre-existing conditions. For this example, we assume the attacker’s goal is Remote Code Execution (RCE), that is, running code of the attacker’s choosing on the remote machine. Although triggering the security vulnerability is necessary to achieve RCE, getting there actually requires many small steps that we refer to as exploitation primitives. By chaining these primitives together, the attacker can create an exploit that works reliably and maintains stability after the exploit has run its course.

In our example, the attacker uses four distinct primitives (although an attack is by no means limited to four). The first primitive used is the soft leak[2], which leverages legitimate program functionality to manipulate memory in the targeted application without any stability or security repercussions. These primitives happen to be the most common because they rely on intended, valid program functionality. For example, a server, by design, will accept requests from a client. The client sends information that is held until the session terminates. By figuring out how these requests and sessions work, an exploit writer can make informed assumptions about the memory layout of a particular application based on its functionality.
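
To make the soft leak concrete, here is a minimal sketch of the kind of server code just described; the structure and function names are hypothetical and only illustrate the pattern of client-controlled, session-lifetime allocations:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical per-session state the server keeps until the session ends. */
    struct session {
        char  *request_data;   /* copy of the client's request payload */
        size_t request_len;
    };

    /* Intended, valid functionality: cache whatever the client sent.
     * Nothing is defective here, yet every request lets a remote client
     * place an allocation of a size he chooses on the server's heap and
     * keep it alive simply by keeping the session open. */
    int handle_request(struct session *s, const char *payload, size_t len)
    {
        s->request_data = malloc(len);
        if (s->request_data == NULL)
            return -1;
        memcpy(s->request_data, payload, len);
        s->request_len = len;
        return 0;
    }

    /* The memory is released only at session termination, so the lifetime
     * and size of these heap chunks are effectively under client control. */
    void end_session(struct session *s)
    {
        free(s->request_data);
        s->request_data = NULL;
        s->request_len = 0;
    }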

The next primitive used is the hard leak[2]. The hard leak, or resource leak, is quite familiar to most C/C++ programmers. It occurs when the programmer forgets to free memory that was acquired dynamically at runtime. While most programmers think of this as a quality problem that will, at worst, result in excessive memory consumption, many exploitation artists see it as an opportunity to ensure exploit stability. By acquiring memory permanently, an attacker can ensure that certain portions of memory are never reused for the remaining lifetime of the process.
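
A minimal sketch of how such a hard leak typically appears in code; the function and its names are hypothetical:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical handler: copies a client-supplied name, then validates it. */
    int cache_client_name(const char *name, char **out)
    {
        char *copy = malloc(strlen(name) + 1);
        if (copy == NULL)
            return -1;
        strcpy(copy, name);

        if (strlen(copy) > 64)
            return -1;      /* early return: 'copy' is never freed (a hard leak) */

        *out = copy;        /* the caller takes ownership on the success path */
        return 0;
    }

An attacker who can drive the error path repeatedly pins heap memory that will never be reclaimed for the life of the process, which is exactly the stability property described above.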

The third primitive used is the integer overflow. If a mathematical operation attempts to store a number larger than the integer type can hold, the excess is lost; this loss is sometimes referred to as an integer wrap. For example, an unsigned 32-bit integer can hold a maximum value of 4,294,967,295 (UINT_MAX). Adding 1 to that maximum value causes the integer to start counting again at zero (UINT_MAX + 1 == 0). A real-world analogy is a six-digit car odometer rolling over from 999,999 back to 000,000. If an attacker can feed such an overflowed integer into an allocation routine, the program allocates less memory than was intended.
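
A minimal sketch of this primitive in an allocation routine, assuming a platform where size_t is 32 bits; the function is hypothetical:

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical routine: 'count' comes from untrusted input. */
    uint32_t *alloc_records(uint32_t count)
    {
        /* With a 32-bit size_t, count * sizeof(uint32_t) wraps when
         * count > UINT32_MAX / 4; e.g. count = 0x40000001 yields 4,
         * so only a 4-byte buffer is returned while the caller still
         * believes it holds 'count' elements. */
        return malloc(count * sizeof(uint32_t));
    }

    /* The fix is to reject sizes that would wrap before allocating:
     *     if (count > SIZE_MAX / sizeof(uint32_t))
     *         return NULL;
     */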

Finally, the last primitive used is a buffer overflow. This is the most common kind of defect understood to have security implications in C/C++ programs. A buffer overflow is caused when the program writes past the end of a buffer, resulting in corruption of adjacent memory contents. In some instances, this may result in overwriting the contents of the stack or heap in ways that allow an attacker to subvert the normal operation of the system and, ultimately, take over the flow of control from the program.
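
A minimal sketch of a heap-based buffer overflow of the kind described above; the handler and its fixed buffer size are hypothetical:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical message handler: 'len' is read from the untrusted message. */
    void handle_field(const char *src, size_t len)
    {
        char *dst = malloc(64);          /* fixed-size field buffer */
        if (dst == NULL)
            return;

        /* Missing bounds check: any len > 64 writes past the end of 'dst',
         * corrupting whatever the allocator placed next to it (another
         * object, or heap metadata). */
        memcpy(dst, src, len);

        /* ... use dst ... */
        free(dst);
    }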

Primitive use in RCE

Now that the primitive types have been covered, let’s discuss how the attacker in our example uses them to achieve remote code execution. First, by using existing program functionality, the attacker sends valid requests that cause the server to allocate many chunks of memory whose sizes are based on his input. This might seem harmless, but it is vital to achieving heap determinism: manipulating the memory layout of the application into a known, desirable state, which is essential when exploiting heap-based buffer overflows. Next, the exploit author wants some memory that, once allocated, will never be freed again. By leveraging hard leaks within the application, he can obtain memory that survives for the life of the process, resulting in greater post-exploitation stability.

Next, the attacker triggers the integer overflow, causing the program to allocate an undersized heap buffer. This creates a mismatch between the actual size of the allocated buffer and the number of data elements it is expected to hold. The attacker can then leverage a buffer overflow to overwrite the contents of adjacent memory. Imagine, for example, being unable to tell where the last line of a piece of ruled paper is: keep writing sentence after sentence and you will eventually write onto your desk, and potentially onto that nice new shirt. By overwriting adjacent memory, the attacker can replace important information with data that he controls.
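
A minimal sketch of how these last two primitives combine; the record-parsing routine is hypothetical, and the wrap again assumes a 32-bit size_t:

    #include <stdint.h>
    #include <stdlib.h>

    struct record { uint32_t id; };

    /* Hypothetical parser: 'count' and 'input' come from the attacker. */
    struct record *read_records(const struct record *input, uint32_t count)
    {
        /* Integer overflow: count * sizeof(struct record) wraps for large
         * 'count', so 'out' is far smaller than the caller intends. */
        struct record *out = malloc(count * sizeof(struct record));
        if (out == NULL)
            return NULL;

        /* Buffer overflow: the loop still trusts 'count' and writes far past
         * the end of the undersized allocation; what gets corrupted depends
         * on the heap layout the attacker arranged with the first two
         * primitives. */
        for (uint32_t i = 0; i < count; i++)
            out[i] = input[i];

        return out;
    }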

The ability to chain primitives together, regardless of their individual severity, gives the attacker greater control over exploitation and post-exploitation behavior (Figure 1). If our attacker did not have the ability to create hard leaks within the application, he would have had to find another way to ensure that his memory was not freed when his session timed out, or at least accept that an eventual program crash was inevitable. And if the integer overflow did not exist, there would have been nothing for our attacker to exploit in the first place.

 

Figure 1: An attacker chains primitives together to perform arbitrary code execution. By limiting the number of primitives within code, developers can make the process of exploiting software much more difficult, thereby increasing the cost and decreasing the likelihood of exploitation.


The link between exploitation primitives and security vulnerabilities can be direct or indirect. Certain kinds of primitives, such as buffer overflows, can lead to many different kinds of vulnerabilities, depending on the skill, creativity, and determination of the attacker. What is clear, however, is that having more primitives available makes it easier for an attacker to leverage more severe vulnerabilities and develop damaging exploits. Therefore, finding and eliminating large numbers of exploitation primitives early in the development process can greatly help in reducing security vulnerability exposure and maintenance costs over the entire time the application is in service.

A practical approach to secure code development

Developing reliable and secure software is a tough challenge, particularly when initiatives to integrate security testing early into the development life cycle have not been widely adopted. It is not that developers don’t want to build secure products; they are focused on delivering new features and functionality and are often under intense pressure to meet release deadlines. Beyond the lack of financial incentives to invest in strengthening security, developers are not traditionally trained to be security experts. Computer science programs have focused on producing programmers with the foundation to become good application developers, but not necessarily security experts. As a result, developers today are by and large unaware of the myriad ways they can introduce security problems into their code, and lack the wherewithal to fix them when they are found.

Development testing solutions need to be designed from the perspective of the developer. This means addressing the major issues that have made developers shy away from traditional security assessment tools: lack of usability and high false positive rates. Development managers seeking to integrate security testing into their process should look for automated development testing tools that are able to deliver the following:

  • Clearly explained defects with little noise: Developers simply don’t have time to waste trying to sift through noisy results, or reproducing phantom defects that aren’t really there. They need defects that are easy to understand with as few false positives as possible.
  • Detection of defects early and often, as code is written: It takes significant effort to determine the exact cause of defects, and fixing defects can involve extensive architectural changes. Finding critical defects as early as possible enables development teams to anticipate workload and impact to release schedules, thereby reducing cost to the overall project.
  • Actionable and correct advice on how to fix security defects: Defect remediation advice provided as part of security assessments usually isn’t customized for the framework, language, or libraries used in the software package. It is hard for developers to translate generic advice into a fix that works, which often leads to a wrong or incomplete fix being applied, causing churn and rework.

Defects are an inevitable fact of software development. While it might not be possible to completely prevent vulnerabilities from being introduced during code development, the technology and processes exist now to assist developers in finding and fixing these defects as quickly and efficiently as possible.

Jane Goh, Senior Manager of Product Marketing at Coverity, has more than 10 years of technology marketing experience, and is focused on security development testing solutions at Coverity. Prior to joining Coverity, she held senior marketing positions at leading IT security companies including Imperva, Check Point Technologies, and VeriSign. She can be reached at [email protected].

References:

1. M. Bishop, “Vulnerabilities Analysis,” Proceedings of the Second International Symposium on Recent Advances in Intrusion Detection, pp. 125-136, September 1999.

2. N. Waisman, “Understanding and bypassing Windows Heap Protection,” 2007, http://immunityinc.com/downloads/Heap_Singapore_Jun_2007.pdf

Coverity 415-321-5200 www.coverity.com

 
