Military Embedded Systems

Assured mission-critical cloud computing across blue and gray networks


October 07, 2016

Sally Cole

Senior Editor

Military Embedded Systems

WRIGHT-PATTERSON AIR FORCE BASE, Ohio. One way the U.S. Air Force Research Laboratory is currently pursuing secure cloud computing science and technologies is by supporting the work of the Assured Cloud Computing (ACC) University Center of Excellence at the University of Illinois (UI) at Urbana-Champaign.

ACC is developing technology for assured, mission-critical cloud computing across “blue” and “gray” networks. Blue networks are military networks considered secure, while gray networks are those in private hands or run by other nations and may not be secure. The team’s goal is to ensure the confidentiality and integrity of data and communications so that missions get done – even amid cyberattacks and failures.

A computational cloud for military purposes may involve both blue and gray networks, so it’s often necessary to coordinate computation across a mixture of these resources.

But this isn’t easy: According to the ACC team, overseas commitments and operations can strain network-centric operations with challenges in the form of global networking requirements, government and commercial off-the-shelf (COTS) technology, secure computing across blue and gray networks, and agility and mobility.

Assured mission-critical cloud computing across blue and gray networks requires “end-to-end and cross-layered security,” says the ACC team; this level of security involves multiple layers from the end device through the network and up to the applications or computations at the data center.

A survivable and distributed cloud-computing-based infrastructure “requires the configuration and management of dynamic systems-of-systems with both trusted and partially trusted resources – data, sensors, networks, computers, etc. – and services sourced from multiple organizations,” emphasizes the ACC team on its website.

For mission-critical computations and workflows that rely on such dynamically configured systems-of-systems, “it’s necessary to ensure that a given configuration doesn’t violate any security or reliability requirements,” the ACC adds.

“And it should be possible to model the trustworthiness of a workflow or computation’s completion for a given configuration to specify the right configuration for high assurances.”

So far, the ACC team has demonstrated that it’s possible to “build mission-critical cloud computing elements, deliver real-time results to secure the cloud, and make the cloud reliable,” says Roy Campbell, who leads ACC and is also a professor in UI’s Department of Computer Science.

By improving the functioning of NoSQL databases, which cloud systems frequently use, and by developing more advanced scheduling algorithms, the team has increased the performance of these databases and shown that they can be relied upon to finish tasks on deadline – a critical requirement for the military.
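The article doesn’t publish the team’s algorithms, but the idea of guaranteeing that database tasks finish on deadline can be illustrated with a classic deadline-aware policy such as earliest-deadline-first (EDF). The task names and durations below are hypothetical; this is a minimal sketch of the technique, not ACC’s implementation:

```python
import heapq

def edf_schedule(tasks, now=0.0):
    """Order tasks earliest-deadline-first and check schedule feasibility.

    tasks: list of (name, duration, deadline) tuples.
    Returns (order, all_met): the execution sequence, and True only if
    every task finishes by its deadline.
    """
    # Min-heap keyed on deadline, so the most urgent task always runs next.
    heap = [(deadline, name, duration) for name, duration, deadline in tasks]
    heapq.heapify(heap)

    clock = now
    order = []
    all_met = True
    while heap:
        deadline, name, duration = heapq.heappop(heap)
        clock += duration           # run the task to completion
        order.append(name)
        if clock > deadline:        # finished after its deadline
            all_met = False
    return order, all_met

# Hypothetical database jobs: (name, duration in seconds, deadline).
jobs = [("compaction", 3.0, 10.0), ("read-repair", 1.0, 2.0), ("query", 0.5, 5.0)]
order, all_met = edf_schedule(jobs)
```

For this workload, EDF runs "read-repair" first (tightest deadline) and every task meets its deadline; a scheduler like this can report infeasibility before a mission-critical task is ever late.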

Campbell says that the ACC research has the potential to save the government money by allowing the use of “gray” networks for missions, rather than building colossal networks. “It’s also going to provide an additional layer of protection, because we can apply computing resources more liberally to missions,” he adds. “Our research provides more guarantee, allowing the armed forces to have more computing support for its work.”

The team’s next goal is to develop new methods to manage real-time streaming within the cloud. And now that the networking industry is embracing software-defined networking, the team is exploring ways to apply it to cloud systems.

The ACC team is also focusing its efforts on flexible and dynamic distributed cloud-computing-based architectures that are survivable; novel security primitives, protocols, and mechanisms to secure and support assured computations; algorithms and techniques to enhance end-to-end timeliness of computations; algorithms that detect security policy or reliability requirement violations in a given configuration; algorithms that dynamically configure resources for a given workflow based on security policy and reliability requirements; and algorithms, models, and tools to estimate the probability of completion of a workflow for a given configuration.
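One of those goals – detecting security-policy violations in a given configuration – can be sketched as a simple check of each resource’s trust level against what the policy requires. The trust levels, resource names, and policy shape below are hypothetical illustrations, not ACC’s actual model:

```python
# Hypothetical trust ordering: gray (untrusted) < partially-trusted < blue (secure).
TRUST_RANK = {"gray": 0, "partially-trusted": 1, "blue": 2}

def policy_violations(configuration, policy):
    """Flag resources whose trust level falls below policy.

    configuration: {resource_name: actual trust level}
    policy:        {resource_name: minimum required trust level}
    Returns a list of (resource, actual, required) tuples, one per violation;
    a missing resource also counts as a violation.
    """
    violations = []
    for resource, required in policy.items():
        actual = configuration.get(resource)
        if actual is None or TRUST_RANK[actual] < TRUST_RANK[required]:
            violations.append((resource, actual, required))
    return violations

# Example configuration mixing blue and gray resources.
config = {"sensor-feed": "gray", "datastore": "blue", "compute-node": "partially-trusted"}
policy = {"sensor-feed": "partially-trusted", "datastore": "blue", "compute-node": "partially-trusted"}
```

Here the gray sensor feed is flagged because the policy demands at least a partially trusted resource in that role; a real system would run such a check before admitting a configuration to a mission workflow.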