Military Embedded Systems

Cyberwarfare: The “invisible” weapon

August 05, 2021

Sally Cole

Senior Editor

Military Embedded Systems

The era of no consequences for cyberwarfare is ending, with NATO’s declaration that a cyberattack on one ally will be treated as “an attack against all.”

Cyberwarfare is occurring now all over the world, with great frequency, and it can paralyze a military unprepared to deal with it – especially if that military’s weapons systems or networks are vulnerable.

Cyberwarfare isn’t always recognized for what it is “because the most current rules of war we’re operating under tend to be in terms of conventional tactics – things you can see,” says Tarah Wheeler, a Cyber Security Fellow at the Belfer Center for Science and International Affairs, Kennedy School of Government, Harvard University (Cambridge, Massachusetts).

While it’s easy to understand a missile being fired as an act of war, “it’s more difficult for people to understand or come to grips with the idea that something they can’t see can be an act of war,” Wheeler adds.

Jens Stoltenberg, the secretary-general of NATO, just reiterated what he said on the subject in 2019: that a serious cyberattack can in fact trigger Article 5, part of the Washington Treaty, wherein “an attack against one ally is treated as an attack against all.”

“The Secretary General of NATO is saying that a cyberattack can trigger Article 5 and people are still having a hard time understanding war on computers is real,” Wheeler points out. “Cyberwarfare is happening on computers now and it’s not magic. It’s very real, in the same kind of way a hummingbird’s wings aren’t being compelled by Harry Potter’s wand; it’s just a function of mechanical stuff happening too quickly for you to see, at the wrong frequency for the human eye to detect. This is precisely what’s happening in cyberwarfare – it’s simply happening in a place you can’t see, but you can clearly see its effects.”

Wheeler describes two attacks that occurred in 2017 as not only acts of war but as actual war crimes. “I’d call the WannaCry ransomware attack that began May 12, 2017, a war crime, when North Korean military hackers unleashed an attack directed toward the U.S., which ended up taking down the U.K.’s National Health Service (NHS) and Telefonica, a huge telecommunications company,” she says. “Do you recall about four years ago when everyone’s computer screens started turning red?”

The attack took down all the Windows 7 machines connected to the NHS network. Emergency rooms shut down, people were unable to get cancer treatments, and some data is still missing. “Computers were not able to be decrypted, and if a 23-year-old kid in West Devonshire – Marcus Hutchins – hadn’t found a kill switch for that attack, we don’t know how many people could have died as a result,” Wheeler says.
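The kill switch itself was remarkably simple: on execution, WannaCry tried to reach a hardcoded, unregistered web domain and stood down if the request succeeded; Hutchins stopped the outbreak by registering that domain. A minimal Python sketch of the pattern (the domain below is a placeholder, not the real one):

```python
import urllib.request

# Placeholder only; WannaCry used a long, hardcoded gibberish domain.
KILL_SWITCH_DOMAIN = "http://example-kill-switch-domain.test/"

def kill_switch_active() -> bool:
    """Return True if the kill-switch domain resolves and responds.

    While the domain was unregistered, the request failed and the worm
    proceeded; once the domain was registered, the request succeeded
    and new infections halted before encrypting anything.
    """
    try:
        urllib.request.urlopen(KILL_SWITCH_DOMAIN, timeout=5)
        return True   # Domain answered: stand down.
    except OSError:
        return False  # No answer: the worm continued its attack.

if __name__ == "__main__":
    if kill_switch_active():
        print("Kill switch tripped; the malware would exit here.")
    else:
        print("Kill switch silent; the malware would proceed.")
```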

The second attack Wheeler points to is NotPetya, released by Russian hackers in June 2017. “As opposed to the messy, terrible code written by North Korea’s hackers, NotPetya was elegantly done,” she continues. “It’s pretty terrifying when you look at the code.”

NotPetya was an attack on Ukraine’s accounting systems; “it ended up taking down systems everywhere around the globe – including Maersk shipping,” Wheeler notes. “I compare it to the Ever Given container ship recently stuck in the Suez Canal for six days, which stopped 12% of world shipping. NotPetya stopped 40% of global shipping for weeks, and the hackers didn’t care that it harmed civilians outside the bounds of what they had drawn up as the battlefield.”

In these two attacks, hospitals and food supplies were affected, and people’s ability to communicate with one another was disrupted. “Under Article 8 of the Rome Statute of the International Criminal Court at The Hague, attacking hospitals is a war crime,” Wheeler adds.

One thing being done internationally now is to “push for rapid attributions so we can, with confidence, point our fingers at the right place to establish some level of visibility and joint international accountability,” says Dennis Moreau, a cybersecurity architect in the CTO Office for VMware (Palo Alto, California).

Still, “the level of supply-chain-related attacks and consequent damage during the past year – the hacking of SolarWinds, Colonial Pipeline, Codecov, and Kaseya – may demand a more immediate, effective response,” he notes. “We simply cannot tolerate the escalating and ongoing disruption of critical infrastructure at the scale we’re seeing.”

Weapons systems

Two U.S. Government Accountability Office (GAO) reports have flagged weapons systems’ cybersecurity as inadequate and concluded that the U.S. Department of Defense (DoD) needs to more clearly communicate its cybersecurity requirements to contractors (https://www.gao.gov/products/gao-21-179).

A 2018 GAO report revealed that during cybersecurity tests of weapons systems being developed by the DoD, testers playing the role of adversary were able to take control of systems relatively easily and operate largely undetected. These weapons systems are becoming increasingly computerized and networked, which makes cybersecurity even more critical (https://www.gao.gov/products/gao-19-128). (Figure 1.)

[Figure 1 | Today’s weapon systems are heavily computerized, which opens more attack opportunities for adversaries (represented below in a fictitious weapon system for classification reasons). Image: GAO.]

So, what might it look like if weapons systems get hacked? “No one is going to be able to pluck a weapon out of the air and retarget it,” Wheeler says. “It will potentially just fall to the ground or cease firing.”

In other scenarios, a targeting system could go awry or never arm, or the person able to figure out why the wires are crossed is unavailable. “These systems are often not only technical, but very few people know how to operate them,” she explains. “These are platforms, but the entire system is really only [understood] by a few people. This is a point of failure.”

Another problem revealed by the most recent GAO report is that program contracts are omitting cybersecurity requirements, acceptance criteria, or verification processes. While the DoD has developed a range of policy and guidance documents to improve weapons systems’ cybersecurity since GAO’s 2018 report, GAO now says the government must specifically address how acquisition programs should include cybersecurity requirements, acceptance criteria, and verification processes in contracts. In other words: If it isn’t in the contract, don’t expect to get it.

“When it comes to embedded systems, it’d be helpful if vendors selling to the DoD ensured their product has already undergone the Cybersecurity Maturity Model Certification (CMMC) process,” Wheeler says.

The DoD is starting to be more rigorous about requiring this across the board, and “we’re calling for them to require it in 100% of cases in the future,” she adds. “If they can ensure these security holes are all patched up and secure, and that their products are upgradeable and patchable, it’d solve an awful lot of problems. It means we’re talking about rolling out a patch instead of having to refuse shipment or having to take it back in again.”

[Figure 2 | Cyberwarfare operators configure a threat intelligence feed for daily watch in the Hunter’s Den at Warfield Air National Guard Base, Middle River, Maryland. The operators are assigned to the 275th Cyber Operations Squadron of the 175th Cyberspace Operations Group of the Maryland Air National Guard. U.S. Department of Defense photo.]

Yet another area of concern is embedded systems operating beyond spec: “Embedded systems are just computers attached to things, but the number one question people need to be asking when they put computers into things is: When is the last day we want to support this? And do we expect our customers to throw it out or are they going to continue to operate it beyond spec?” Wheeler says. “It’s a different situation when it’s a military targeting system that’s out-of-date and unpatched, or a person who worked on that system has left.”

This means “we depend on vendors, military contractors, the DoD, or any purchaser to demand these systems be upgradeable and patchable,” Wheeler says. “GAO is calling this out and doing yeoman’s work, pointing out that they went from zero to 40% from 2018 to 2021, but let’s try to do better next time.”

Networks under the microscope

Network security is also receiving scrutiny, and “there is no finish line in trustedness, posture, or hardening,” Moreau says. “Systems and their protections must be designed to operate assuming a state of compromise.” (Figure 2.)

This raises the bar “from being just demonstrably compliant to operationally resilient,” Moreau explains. “A focus on containment is necessary because vulnerabilities and exploits may be launched from the inside out, from the development pipeline, from the supply chain, or – as recently demonstrated – from external trusted/privileged services. Perimeter protection isn’t adequate if the protected services are already compromised. This concern has stimulated the current national and international focus on supply-chain visibility and assessment, and zero-trust as a policy approach.”

The military now appears to be focused on building small things with even smaller pieces, which Moreau finds intriguing, because the approach also enables designing for compartmentalization. “If you’re building a component, however important it is, arrange a sandbox so if it fails you can contain the failure,” he advises. “And don’t trust an opaque orchestration process that’s going to touch 800,000 systems. Continuously and actively validate the resilience of the deployed capability.”
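As a minimal sketch of that advice, assuming a Linux host and a component shipped as a standalone executable (the binary name in the usage comment is hypothetical): a caller can give the component hard CPU and memory caps, so a runaway failure is contained in the child process rather than shared with everything around it.

```python
import resource
import subprocess

def run_sandboxed(cmd: list[str], cpu_seconds: int = 5,
                  mem_bytes: int = 256 * 1024 * 1024) -> int:
    """Run an untrusted component with hard CPU and memory caps (Linux).

    This is containment in miniature: if the component spins or balloons,
    the kernel kills it, and the failure never reaches the caller.
    """
    def limit():
        # Applied in the child just before exec, so the caps bind
        # only the sandboxed component, not the calling process.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    proc = subprocess.run(cmd, preexec_fn=limit, capture_output=True,
                          timeout=cpu_seconds * 2)
    return proc.returncode

# Example with a hypothetical component binary:
# run_sandboxed(["./telemetry_parser", "--input", "frame.bin"])
```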

In operation, “you need balance, observability, and the ability to actionably ‘close the loop’ to contain and respond if things go sideways,” Moreau adds. “With effective containment, if it does go sideways, the failure isn’t wiping out your entire infrastructure and with it your response capability. Automation without actionable feedback encourages doing the wrong thing very efficiently.”

Moreau recommends that the DoD pay attention to design time and requirements – and then make both an operational part of the entire life cycle. “When you need an app to do something with an API (application programming interface) or to always ‘color inside a behavior envelope’ and not do something, you need to make those guardrails (or constraints) very explicit at design time, very tested in development and deployment, but also continuously monitored at run time.”
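A sketch of what an explicit behavior envelope could look like in code, using a hypothetical actuator API purely for illustration: the allowed parameter ranges are declared once at design time, enforced on every call, and violations are logged so runtime monitoring sees them rather than having them silently clamped.

```python
import functools
import logging

logging.basicConfig(level=logging.WARNING)

def envelope(**bounds):
    """Declare an explicit behavior envelope for a function's parameters.

    Each keyword maps a parameter name to an inclusive (low, high) range.
    Calls outside the envelope are rejected and logged, giving the runtime
    monitor a concrete violation to investigate.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(**kwargs):
            for name, (lo, hi) in bounds.items():
                value = kwargs.get(name)
                if value is None or not (lo <= value <= hi):
                    logging.warning("envelope violation: %s=%r outside [%s, %s]",
                                    name, value, lo, hi)
                    raise ValueError(f"{name} outside declared envelope")
            return fn(**kwargs)
        return wrapper
    return decorator

# Hypothetical actuator command, for illustration only.
@envelope(azimuth_deg=(0, 359), elevation_deg=(-5, 85))
def slew_turret(*, azimuth_deg: float, elevation_deg: float) -> None:
    print(f"slewing to {azimuth_deg}, {elevation_deg}")

slew_turret(azimuth_deg=120, elevation_deg=30)   # inside the envelope
# slew_turret(azimuth_deg=120, elevation_deg=95) # raises ValueError, logged
```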

If you have the continuity across what an artifact is intended to do, what it demonstrated in testing, and what it’s doing in operation, then “you’re in the best position to be able to deal with confidently sustaining flexibility, agility, and automation into that environment so it works for you, instead of it being the thing that confuses and paralyzes you because you don’t adequately know what’s going on,” he adds.

In the realm of embedded systems, it’s important to know how the source of that technology produced and tested it, as well as how users should interact with it, which APIs it exposes and calls, how frequently, and with which parameters.

“This kind of intentional context helps determine the operational guardrails you should wrap around this technology so your checks and balances, surveillance, and feedback system have a clear baseline of expected behavior,” Moreau explains. “And if that baseline is violated in production, you’ve got something to investigate – with high confidence and great actionability.”
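A toy version of such a baseline check, assuming per-minute API call counts are already being collected (every name and number below is illustrative): expected rates come from testing, and production counts that stray far from them, or calls that were never in the baseline at all, get flagged for investigation.

```python
from collections import Counter

# Expected calls per minute, derived (hypothetically) from test runs.
BASELINE = {"get_status": 60, "set_mode": 2, "upload_log": 1}
TOLERANCE = 3.0  # Flag anything more than 3x the expected rate.

def check_baseline(observed: Counter) -> list[str]:
    """Compare observed per-minute API call counts against the baseline."""
    violations = []
    for api, count in observed.items():
        expected = BASELINE.get(api)
        if expected is None:
            violations.append(f"{api}: not in baseline at all ({count} calls)")
        elif count > expected * TOLERANCE:
            violations.append(f"{api}: {count} calls vs ~{expected} expected")
    return violations

observed = Counter({"get_status": 58, "set_mode": 40, "factory_reset": 1})
for v in check_baseline(observed):
    print("INVESTIGATE:", v)
```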

You’ll see this concept emerge in Internet of Things environments through an embedded-device standard called MUD (Manufacturer Usage Description), published by the IETF as RFC 8520. “Even within the commercial ecosystem, we’ve decided this is important enough that we need manufacturers to describe what their device is supposed to do, so we can lock out all other behaviors,” Moreau says. “This kind of information allows you to establish operationally independent guardrails, whether firewall rules, API gateway rules, or an analytic control to provide automatable containment of the intended behavior. This applies to chips, components, software, and services in any situation.”
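The mechanics can be sketched with a deliberately simplified, MUD-like device description (the real RFC 8520 format is a richer JSON document; these hostnames are invented): the manufacturer declares the only network flows the device should ever use, and everything else is denied by default.

```python
# Simplified, MUD-like behavior declaration, not the full RFC 8520 format.
DEVICE_PROFILE = {
    "systeminfo": "hypothetical building sensor",
    "allowed-flows": [
        {"dst": "update.vendor.example", "port": 443, "proto": "tcp"},
        {"dst": "ntp.vendor.example",    "port": 123, "proto": "udp"},
    ],
}

def to_firewall_rules(profile: dict) -> list[str]:
    """Translate the declared behavior into allow rules plus a default deny."""
    rules = [
        f"allow {flow['proto']} to {flow['dst']} port {flow['port']}"
        for flow in profile["allowed-flows"]
    ]
    rules.append("deny all")  # Everything not declared is locked out.
    return rules

for rule in to_firewall_rules(DEVICE_PROFILE):
    print(rule)
```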

The intentional declaration of what a part is supposed to do is an important way to ensure that the process, people, and technology behind that part can actually be trusted.

“These kinds of considerations are front and center in a recent executive order by President Biden, which promises to change the world of cybersecurity and consumption of technologies,” Moreau notes. “At the same time, we’re seeing international recognition of the same principles – enabling visibility into the supply chain and setting guardrails via policy approaches, like zero-trust, to get proactive and ‘in control’ of embedded systems, technologies, and software.”