Military Embedded Systems

GUEST BLOG: From code to behavior – Software assurance in safety- and mission-critical edge systems

January 15, 2026

In today’s defense and aerospace systems, the software stack is rapidly becoming as complex and as critical as the hardware it runs on. Modern edge platforms increasingly support multithreaded real-time applications, machine learning (ML) inference, over-the-air (OTA) updates, and third-party integrations. In these environments, deterministic behavior, system robustness, and security are not optional – they are mission requirements.

These growing complexities demand a new way of thinking about software assurance. Traditionally, the focus has been on verifying code quality before deployment through methods like static analysis, unit testing, integration testing, and formal verification. These practices remain essential, but they’re no longer sufficient on their own.

In high-reliability environments where embedded systems interact with the real world under real conditions, runtime behavior becomes just as critical to safety, security, and mission success as predeployment code correctness. A modern approach to software assurance must therefore operate in two dimensions: validating the code at design time and ensuring system integrity at runtime.

Runtime behavior: The overlooked dimension

Even when code passes all verification gates during development, real-world operation introduces unpredictable variables that static tests cannot fully capture. Timing drift, task contention, hardware interrupts, and fluctuating power or thermal conditions can all affect behavior. OTA updates may change execution paths in unforeseen ways, while third-party components or system integrations can introduce dependencies and interactions that only reveal themselves during deployment.

These conditions create fertile ground for subtle and dangerous failures. For example, a race condition might only surface under specific timing and load conditions, leading to latency spikes or missed sensor readings. In systems controlling flight paths, weapons, or navigation, such anomalies can degrade performance or compromise mission success entirely.

From a cybersecurity perspective, the challenge deepens. Advanced persistent threats may exploit behavioral triggers that bypass static defenses. The XZ Utils backdoor incident (a supply-chain attack on a widely used open-source Linux/Unix compression library, discovered in 2024 before it reached most production systems) is a stark reminder that vulnerabilities may remain dormant in verified codebases until activated under specific runtime conditions.

Continuous observability: A proactive response

To close this critical gap, engineering teams in the embedded sector are turning to continuous observability – the ability to monitor and analyze software behavior during live operation without disrupting performance or system stability.

In safety- and mission-critical systems, this capability enables early detection of anomalies, often before they escalate into operational failures. Teams can trace root causes of subtle issues such as timing jitter or resource starvation without relying on incomplete logs or elusive test-case replication. This approach also supports safe OTA deployment by validating the real-world impact of updates and enabling controlled rollback if needed.

From a regulatory perspective, continuous observability can provide hard evidence of runtime stability and behavioral compliance, a factor that is increasingly valuable in audits, certifications, or mission readiness evaluations.

Defense systems in practice: An avionics case study

Consider a defense avionics application running on a partitioned embedded platform with mixed-criticality workloads. In the lab, the system passes all static checks and unit tests. However, during field trials, a subtle race condition causes periodic latency spikes in a mission-critical control loop – an issue never observed in simulation or test environments.

With runtime observability in place, the system detects the anomaly as it occurs. Engineers retrieve a detailed execution trace that reveals the precise interaction pattern causing the problem. The team is able to resolve the issue rapidly without costly flight test rework or mission risk. This transition from reactive debugging to proactive assurance illustrates the power of monitoring not just what the system is designed to do, but what it actually does.

Bringing runtime assurance to COTS platforms

Modern commercial off-the-shelf (COTS)-based embedded platforms are already evolving to support this dual-layer model of software assurance. Secure architectures like Lynx MOSA.ic, when combined with timing-analysis tools such as Spyker, provide robust design-time validation. Augmenting this with runtime tools enables a full-spectrum view of system behavior – from code-level correctness to operational integrity.

This layered strategy not only helps teams comply with standards like DO-178C, ISO 26262, and IEC 61508, but also strengthens resilience against cybersecurity threats under evolving frameworks such as NIST SP 800-53 and ISO/SAE 21434.

Toward a runtime integrity stack

As defense and aerospace systems grow increasingly software-defined – driven by artificial intelligence (AI), autonomy, and connected platforms – runtime assurance will become more than a best practice. It will be a baseline requirement.

In the near future, we can expect certification bodies to demand runtime evidence of integrity, not just static analysis results. Cybersecurity standards are likely to mandate behavioral anomaly detection as a line of defense. DevSecOps pipelines will expand to include observability tools for in-field monitoring, closing the loop between development, deployment, and operation.

This trend points toward a new conceptual model: the runtime integrity stack – a framework in which design-time validation and runtime assurance coexist to deliver trustworthy, resilient embedded systems.

What this means

The most serious failures in embedded systems often don’t stem from code we can see; they arise from behavior we didn’t anticipate. In defense and aerospace applications, especially those built on COTS components and open architectures, relying solely on static verification is no longer sufficient.

By embracing a two-dimensional model of software assurance – one that combines code correctness with real-time behavioral monitoring – engineering teams can mitigate risk, improve operational readiness, and build more resilient systems. In an era where mission success often depends on unseen software dynamics, runtime observability isn’t just a tool – it’s a strategic imperative.

Andreas Lifvendahl is CEO of Percepio AB.

Percepio · https://percepio.com/