Military Embedded Systems

The future of human-AI systems is already here – it’s just not evenly architected


June 15, 2021


By Clodéric Mars

“Network-centric warfare,” “network of networks,” “system of systems,” “combat cloud,” “kill web”: This idea of an interconnected ecosystem seems as ubiquitous as it is challenging to implement, deploy, and leverage harmoniously at strategic levels in the military. With recent advances in artificial intelligence (AI) to factor in, the advent of what is now commonly referred to as the kill web relies on solving the challenges that come with the design of an architecture that allows human users and AI to safely coexist in that system of systems. Such an architecture needs to be evolutive and support iterative improvements, plus it must be technologically and structurally adaptive, scalable, modular, and, of course, secure – all while placing human decision at its core.

The operational structure, strategies, technologies, and ethical concerns of the future remain unknowable, but they will undoubtedly need to include both human users and AI agents, and both need to be trained. Introducing a modular and flexible architectural layer that remains consistent – from military simulation and training requirements to real-life operational needs – decisively answers these challenges and expands the capabilities of AI agents towards more strategic support. We now have the means to build such a layer in a tech-agnostic, distributed, and efficiently orchestrated way – allowing AIs and human users to collaborate more efficiently, reliably, and safely.

Laying the groundwork

Regardless of the kinds of technologies used in AI development, some basic high-level principles always apply. Much like people, AI agents need to accumulate experience and become able to estimate and reason about outcomes to achieve any sort of solid capabilities. Some AIs are better suited to certain tasks than others.

The breadth of modern AI is staggering: from statistical analysis and classical supervised or unsupervised learning to more modern techniques such as reinforcement learning (RL) or imitation learning (IL), all the way up to more experimental ones like genetic algorithms – and even nonlearning approaches like planning (e.g., STRIPS, GOAP) or search (e.g., case-based reasoning, Monte Carlo tree search), not to mention any other existing or upcoming AI technologies. Such an architecture must be able to accommodate all of these, and even hybridize them, to leverage both the current state of the art and future advances in AI. Most importantly, when working in collaboration, whether with humans or other AI agents, the ability to communicate – that is, to exchange and receive key data – is paramount. Orchestrating data generation, acquisition, and communication is therefore at the heart of the required architecture.

Needed: a system of systems that accommodates different aspects of warfare, as well as all sorts of human and artificial actors working together, and that will request, filter, and direct data coming from heterogeneous actors and systems built to different specifications. That challenge can be solved with common underlying data structure definitions, specified using object models (OMs) or interface description languages (IDLs) such as HLA’s Federation Object Model (FOM) or protocol buffers, to establish and, where necessary, “translate” what AI or human actors need to train and operate. Both the architecture and its orchestrator can absolutely be built with interoperability and tech-stack agnosticism in mind. The internet is a proven example of how, with the right underlying protocols and architectural foundations, such networks can be future-resilient and adaptable: it enables communication between vastly different types of software, hardware, operating systems, and the like, spanning decades. However, this example only stresses how polished those foundations need to be. To eventually achieve these polished and long-sustainable foundations, two other key aspects of such an architecture must be enforced, from both a technological and a work-approach standpoint.
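As a rough illustration of such shared data structure definitions, the following Python sketch declares an observation and action schema that every actor could agree on. The field names are hypothetical; in a real system, the same structures would be declared once in an IDL such as a protocol buffers file or an HLA FOM and generated for each technology stack.

```python
# Minimal sketch (hypothetical field names): a shared "observation" and
# "action" schema that every actor (human interface, RL agent, planner)
# agrees on, regardless of the technology behind it. In practice these
# structures would live in an IDL definition rather than Python code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrackObservation:
    track_id: str          # identifier of the observed entity
    position: List[float]  # e.g., latitude, longitude, altitude
    velocity: List[float]
    confidence: float      # sensor/estimation confidence in [0, 1]

@dataclass
class Observation:
    timestamp: float
    tracks: List[TrackObservation] = field(default_factory=list)

@dataclass
class Action:
    actor_id: str      # which human or AI actor issued the action
    kind: str          # e.g., "designate", "request_sensor", "hold"
    target_track: str  # track the action refers to, if any

def serialize(observation: Observation) -> dict:
    """Stand-in for IDL-generated serialization: a plain dict that any
    actor or service could consume over the wire."""
    return {
        "timestamp": observation.timestamp,
        "tracks": [vars(t) for t in observation.tracks],
    }
```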

Modularity and iterative implementation

Aside from the orchestrator, the elements of a modern architecture designed to enable human users and AI agents to collaborate will need to be implemented through both a modular design and an iterative process. This structure will ensure the architecture can operate in a distributed, scalable, and evolutive way, but also course-correct easily and reach that seamless internet-style interoperability. Formalizing its components as microservices that communicate through sturdy, efficient, structured application-layer protocols such as gRPC supports this modularity, as well as the distributed deployment model that is paramount to using it in different contexts in a scalable way.
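A minimal sketch of what that modularity might look like follows, assuming hypothetical names and plain in-process Python objects in place of actual gRPC microservices: an orchestrator routes each observation to whatever actor modules are registered, so modules can be added, swapped, or redeployed without touching the rest.

```python
# Minimal sketch (all names hypothetical): an orchestrator that routes
# observations to registered actor services and collects their actions
# through one common interface. In a production system each ActorService
# would be a separate process reached over a protocol such as gRPC; here
# they are plain Python objects to keep the example self-contained.
from abc import ABC, abstractmethod
from typing import Dict, List

class ActorService(ABC):
    """Common contract every actor module implements, whether it wraps a
    human interface, an RL policy, or a planner."""

    @abstractmethod
    def act(self, observation: dict) -> dict:
        ...

class Orchestrator:
    def __init__(self) -> None:
        self._actors: Dict[str, ActorService] = {}

    def register(self, name: str, actor: ActorService) -> None:
        # Modules can be added or swapped without touching the others.
        self._actors[name] = actor

    def step(self, observation: dict) -> List[dict]:
        # Fan the same observation out to every actor and gather actions.
        return [actor.act(observation) for actor in self._actors.values()]

class ScriptedActor(ActorService):
    def act(self, observation: dict) -> dict:
        return {"actor": "scripted", "kind": "hold"}

if __name__ == "__main__":
    orchestrator = Orchestrator()
    orchestrator.register("scripted", ScriptedActor())
    print(orchestrator.step({"timestamp": 0.0, "tracks": []}))
```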

Markov Decision Processes – a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision maker – provide a proven framework to model how AI and humans operating in shared environments could interact, using a definition of what they can perceive (their observation spaces) and what they can do (their action spaces). Environments transition from one state to another; observations of those states call for actions that change those environments, trigger new observations, and so on.
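The loop below is a minimal, self-contained sketch of that perceive-and-act cycle; the environment, its transition noise, and the reward are placeholder assumptions rather than anything resembling a real simulation.

```python
# Minimal sketch of an observe/act loop, loosely following a Markov
# Decision Process. Everything here is a hypothetical placeholder.
import random
from typing import Tuple

class Environment:
    def __init__(self) -> None:
        self.state = 0  # hidden true state

    def observe(self) -> int:
        # What the actor perceives: its observation space (here, an int).
        return self.state

    def step(self, action: int) -> Tuple[int, float]:
        # Outcomes are partly under the actor's control, partly random.
        self.state += action + random.choice([-1, 0, 1])
        reward = -abs(self.state)  # e.g., reward staying close to state 0
        return self.observe(), reward

def policy(observation: int) -> int:
    # Action space: move -1, 0, or +1; a trivial corrective policy.
    return -1 if observation > 0 else (1 if observation < 0 else 0)

env = Environment()
obs = env.observe()
for _ in range(5):
    action = policy(obs)            # observation -> action
    obs, reward = env.step(action)  # action -> new state -> new observation
```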

Naturally, as they learn to operate together, humans and AI agents interact with these environments and each other. They should first have the opportunity to do so in a consequence-free setting: military simulations, training exercises, and the like. Modularity and iterative implementation enable continuous evaluation and progress of the architecture components in terms of scope and sturdiness, but one critical aspect that is too often overlooked in modern AI-based systems is the transition between consequence-free training settings and real-life ones. Building high-fidelity simulation environments is but one element that helps AI agents and human users alike train in a way that will facilitate a smooth transition to real operational contexts.

However, from the point of view of the architecture itself, smoothing out that transition requires little to no difference not only between a simulated environment and a real one, but also in the way all the elements work together. The ability to go back and forth between constructive, virtual, or live simulations and real operational settings as smoothly as possible is yet another critical facet of what we mean by modularity and iterative implementation. Inputs and outputs should remain the same from sim to real. As new modules are developed and others improved or deployed for real-life operation, the ability for AI agents and human users to operate seamlessly from simulation and training to real-life operations, with iterative cycles as short as possible, will be paramount for such an architecture to provide the flexibility, safety, and efficiency required.
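One way to read “inputs and outputs should remain the same from sim to real” is sketched below, with hypothetical names: the same actor-facing interface is backed either by a simulation or by a live operational feed, so switching between them becomes a configuration choice rather than a code change for the actors.

```python
# Minimal sketch (hypothetical names): the same actor-facing interface is
# served by either a simulation back end or a live data feed, so nothing
# downstream changes between training and operations.
from abc import ABC, abstractmethod

class EnvironmentAdapter(ABC):
    """Inputs and outputs stay identical from sim to real."""

    @abstractmethod
    def observe(self) -> dict:
        ...

    @abstractmethod
    def apply(self, action: dict) -> None:
        ...

class SimulatedEnvironment(EnvironmentAdapter):
    def observe(self) -> dict:
        return {"source": "simulation", "tracks": []}

    def apply(self, action: dict) -> None:
        pass  # would update the simulation state

class LiveEnvironment(EnvironmentAdapter):
    def observe(self) -> dict:
        return {"source": "live_feed", "tracks": []}

    def apply(self, action: dict) -> None:
        pass  # would forward the action to the operational system

def make_environment(mode: str) -> EnvironmentAdapter:
    # Switching between training and operations is a configuration choice,
    # not a code change for the actors using the adapter.
    return SimulatedEnvironment() if mode == "sim" else LiveEnvironment()
```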

Standards like High-Level Architecture (HLA) – developed to provide a common architecture for distributed modeling and simulation – can help, but they primarily provide a way for simulations to interoperate; an extra layer of formalism to optimize the learning process of artificial agents in those systems is still necessary. When considering AI-powered collaborative agents learning from human actions and decisions, this also means that human users should be involved as early as possible in the process – not only for the sake of the AI agents’ performance, but also so that human users can train alongside them and familiarize themselves as soon as possible with what those new AI agents can or cannot do for them, and with them.

Human-centric design

It is essential to keep in mind that AI agents are no magic bullet. As extraordinary as these newer technological allies can be, automation should not be the end goal of their use, but rather the means for humans to better focus on what they do best and to support them in what they can’t do as well. The amount of data and the dynamics of a modern kill web are staggering, and AI can help sort through it all like no other tool can. But humans remain at the center of the decision-making process, and therefore at the center of these systems.

A tool that keeps the human-centric view in mind is Cogment, an AI-human framework built around those key pillars of tech-agnosticism, multiagent and multimethod capabilities, modularity, flexibility, scalability, and adaptability. (Figure 1.)

[Figure 1 | Example of a modular architecture where simulated and real environments as well as actors, AI and humans alike, interoperate through a centralized orchestrator. In this example, the federated ecosystem is instantiated for three typical use cases, from preparation to training to operations.]

Such a platform factors in a future in which AIs and humans are intricately intertwined in increasingly complex and expansive ecosystems, while accommodating the rapid progress of the AI state of the art. Rebalancing focus towards the human element is even more crucial in the context of military applications.

Clodéric Mars has been building and deploying AI technology since 2006, closing the technology divide between machine learning and simulation while applying deep tech methodologies to solve complex engineering challenges and functionalize advanced AI for commercial usage. From his start as a developer to becoming a CTO, he has been primarily focused on AI and ML algorithms, applied data science, distributed cloud architecture, API design, product management, and team building. He is a recognized public speaker and organizer at AI industry events.

AI Redefined • https://ai-r.com/
