Military Embedded Systems

Legacy enhancement: Nightmare or dream?


August 14, 2009

Mark Pitchford

LDRA Technology

It can be a real nightmare when a customer wants a new solution based upon legacy software for which the original developers left no clear indication of design criteria. But finding the proper analysis methodologies and tool suites can help developers get a good night's rest.

Leafing through most sales brochures for software test tools, anyone could be forgiven for thinking that all software projects begin with a blank sheet of paper, a clear set of design criteria, and ample time to design and develop the ideal solution.

In reality, most development work involves enhancing existing code, all too frequently written by a succession of application gurus who were more keen on getting something working than on maintaining documentation or adhering to coding standards, and who have since left the company.

Avoiding false dawns

So what kind of nightmare unfolds when an order hinges on such Software of Unproven Pedigree (SOUP) acquiring enhanced functionality? Or worse: What if requirements dictate that modern coding standards be retrospectively applied to it (such as JSF AV C++, MISRA-C:2004, the CERT C Secure Coding Standard, or even internal company rules)?
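To make the retrospective-standard scenario concrete, here is a minimal, hypothetical sketch (function names and return values invented for illustration) of a legacy idiom that standards such as MISRA C typically flag, alongside a compliant rework; specific rule numbers are deliberately omitted.

```c
/* Illustrative only: read_status and its return values are hypothetical. */
static int read_status(void)
{
    return 3;  /* stand-in for reading a hardware status register */
}

/* Legacy style: assignment buried inside a condition. Coding standards
 * such as MISRA C typically flag this mixing of side effect and test. */
int legacy_poll(void)
{
    int status;
    if ((status = read_status()) != 0) {
        return status;
    }
    return -1;
}

/* Standard-friendly rework: the side effect is separated from an
 * explicit Boolean comparison. Behavior is unchanged. */
int compliant_poll(void)
{
    int status = read_status();
    if (status != 0) {
        return status;
    }
    return -1;
}
```

The point of the rework is that behavior is preserved while the code becomes mechanically checkable, which matters when the standard is being imposed on code nobody fully understands.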

Some software testing products suggest that they can find all your software errors simply by deploying mathematical techniques to predict dynamic behavior through static analysis. It sounds like a dream come true. After all, it saves the developer from rethinking or revamping current programming practices, never mind past ones. However, as in the rest of life, if something sounds too good to be true, it probably is. Such an approach can find some specific runtime errors such as division by zero or out-of-bounds array accesses, and if that is all that is required then it might be a winner. But the volume of false warnings such tools raise is likely to lead to sleepless nights in a cold sweat.
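The two defect classes mentioned above are worth seeing side by side. The sketch below (a hypothetical sensor-averaging routine; names and bounds are invented) shows the kind of paths a static runtime-error checker can flag without ever executing the code, next to a guarded version the same tool would pass.

```c
#define NUM_CHANNELS 8

/* Defect patterns a static runtime-error checker can flag without
 * executing the code. Deliberately never called in this sketch. */
int unsafe_average(const int readings[NUM_CHANNELS], int count)
{
    int sum = 0;
    for (int i = 0; i <= count; i++) {  /* reads readings[count]: out of
                                           bounds when count == NUM_CHANNELS */
        sum += readings[i];
    }
    return sum / count;                 /* divides by zero when count == 0 */
}

/* Guarded version: both flagged paths are now unreachable. */
int safe_average(const int readings[NUM_CHANNELS], int count)
{
    if (count <= 0 || count > NUM_CHANNELS) {
        return 0;                       /* defined behavior for bad input */
    }
    int sum = 0;
    for (int i = 0; i < count; i++) {
        sum += readings[i];
    }
    return sum / count;
}
```

Note that a tool can prove the second version free of these two runtime errors, but it still cannot tell you whether averaging is the right thing to do with those readings; that is the functional gap discussed next.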

Even if a few runtime errors can be found this way, the resulting solution still has to be proven from a functional perspective, and no static analysis approach can do that. The danger of functionally significant errors going undetected is therefore very real: customers will not pay, devices are deployed without complete testing, and the specter of what lies behind the unintelligible warnings could haunt the soundest sleeper.

Given that the analysis of dynamic behavior by static means is of limited use, what else is there? What of the better-established dual-pronged approach of sound coding standards coupled with the dynamic analysis of code by execution? Does that really demand a green field application with an idealized development path to become a dream solution?

Heading out of the darkness

In an effort to change the nightmare into a dream even when the starting code has no complete, written specification, developers need to start with the specification for any amendments and let the original source code simply define the existing behavior. So, the trick is now to capture that existing behavior and ensure that it only changes where change is demanded by the incoming order, even if the new project demands a higher coding standard.

Some dynamic test tools provide just such a point of reference by statically analyzing the source code. The knowledge gleaned is then used to automatically generate unit tests that create a personality profile of the behavior of each existing method or function. Groups of functions can fulfil a similar role where modularity is not the order of the day. By generating such unit tests on the original code and exercising them on the revised sources, it is possible to ensure that behavior has only changed where change is required.
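A hand-written sketch of what such a generated "personality profile" test amounts to may help. The function below is hypothetical (an invented ADC scaling routine), and the crucial point is in the comments: the expected values are captured from the original code's observed behavior, not derived from a specification.

```c
#include <assert.h>

/* Legacy function whose behavior we capture rather than redesign.
 * Hypothetical: converts a raw 10-bit ADC count to tenths of a volt
 * on a 5 V reference, the way the fielded code always has. */
int adc_to_decivolts(int raw)
{
    return (raw * 50) / 1023;
}

/* Characterization ("personality profile") tests: the expected values
 * below were recorded by running the ORIGINAL code, not taken from a
 * specification. Rerunning them against revised sources confirms that
 * behavior changed only where the new order demands it. */
void profile_adc_to_decivolts(void)
{
    assert(adc_to_decivolts(0)    == 0);
    assert(adc_to_decivolts(512)  == 25);
    assert(adc_to_decivolts(1023) == 50);
}
```

In practice a tool generates these calls and expected values automatically across every function it can reach; the hand-rolled version above just shows what the resulting point of reference looks like.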

Although this personality profile approach has merit and makes sleep a little easier, issues remain. For instance, it assumes that the original source code is sound because the application has been proven in the field. But what if some of the new functionality builds on code that was rarely exercised in the field? And even the well-proven parts of the code are likely to now be handling very different data. How will they behave?

Studies indicate that software subjected only to functional testing and field use is still likely to contain code that has never been exercised. Dynamic test tools highlight unproven or rarely used execution paths, and static analysis tools help isolate areas that are vulnerable to the newly introduced data sets and in need of more testing. When that work is complete, the personality profile tests established at the outset are modified to provide a regression test set, both to prove the current phase and to ease the path for any additional development work.
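What a dynamic-coverage tool automates can be illustrated by hand. The sketch below (a hypothetical mode-select routine with hand-rolled probes; real tools instrument the code for you) counts executions of each branch so that a path the fielded application never took stands out immediately.

```c
#include <stdio.h>

/* Hand-rolled branch probes; a coverage tool inserts these for you. */
static unsigned branch_hits[2];

/* Hypothetical gain-selection routine. */
int select_gain(int signal_level)
{
    if (signal_level > 100) {
        branch_hits[0]++;   /* high-signal path: exercised often in the field */
        return 1;
    } else {
        branch_hits[1]++;   /* low-signal path: perhaps never hit in the field */
        return 4;
    }
}

/* After a test run, any zero counter marks an unproven path. */
void report_coverage(void)
{
    for (int i = 0; i < 2; i++) {
        if (branch_hits[i] == 0) {
            printf("branch %d never executed: needs a test\n", i);
        }
    }
}
```

A branch that the report flags as never executed is exactly the kind of code the new data sets may drive for the first time, and so exactly where the extra testing effort belongs.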

In the military world, thanks to enforced DO-178B standards, few nightmare applications are quite this unsettling. However, projects do exist where at least some elements of this extreme scenario ring true. So, it is useful to know which methodologies and which tool suites provide the means to address each of these issues. Prudent, careful use of these proven techniques can yield the results you dream of.

Sleep well.

Mark Pitchford has more than 25 years' experience in software development for engineering applications, the majority of which have involved the extension of existing code bases. He has worked on many significant industrial and commercial projects in development and management, both in the UK and internationally, including extended periods in Canada and Australia. For the past seven years, he has specialized in software testing and works throughout Europe and beyond as a field applications engineer with LDRA. He can be reached at [email protected].

