Military Embedded Systems

AI-enabled vehicles will usher in true human-machine teaming in the field


August 04, 2020

Lisa Daigle

Assistant Managing Editor

Military Embedded Systems

Figure 1 | U.S. Army photo.

Autonomous vehicles in science fiction and lore – think Asimov’s “Sally” self-driving car, Arthur C. Clarke’s automatic cars, or Knight Rider’s KITT automobile with an attitude – are often able to operate independently and in concert with their humans. The phrase “no longer science fiction, but science fact” gets thrown around so much that it’s lost some of its punch, but in the case of the Army Research Lab’s work on artificial intelligence (AI)-enabled vehicles, it’s an apt observation.

Human-machine teaming – in which soldiers trust their vehicle systems to operate alongside them as actual partners instead of at peoples’ specific direction – is well on its way to being real, as the U.S. Army is working on several projects related to artificial intelligence (AI) and machine learning (ML).

Field and combat vehicles need autonomy not just for simple point-to-point mobility but, more realistically, to navigate and act on the many fast-changing variables involved in maneuvering across complex terrain, evading incoming hostile attack, and countering threats on land, at sea, or from the air. These vehicles must also operate at what's known as "operational tempo," that is, keeping pace with the speed of the operations unfolding around them.

The U.S. Army Combat Capabilities Development Command’s Army Research Laboratory (ARL) designated several research programs as essential for future soldier capabilities. One such initiative, the Artificial Intelligence for Maneuver and Mobility (AIMM) Essential Research Program, is working toward reducing soldier distractions in the field through the true integration of autonomous systems in Army vehicles.

Dr. John Fossaceca, AIMM program manager, says that his team is trying to develop the foundational capabilities that will enable autonomy in the next generation of combat vehicles.

In a recent episode of the ARL’s “What We Learned Today” podcast, Fossaceca details the kinds of tasks the next-generation vehicles will handle: “The main purpose of this essential research program is to build autonomous systems that help the Army effectively execute multidomain operations,” Fossaceca says.

“We don’t want soldiers to be operating these remote-controlled vehicles with their heads down, constantly paying attention to the vehicle in order to control it. We want these systems to be fully autonomous so that these soldiers can do their jobs and these autonomous systems can work as teammates and perform effectively in the battlefield.” (Figure 1.)

Fossaceca notes that vehicles designed for military use face very different operating conditions than self-driving cars for commercial use. Makers of commercially available self-driving cars tend to design for operation on pristine roads, with traffic and pedestrians the major obstacles. Military autonomous vehicles, in contrast, must plan for environments that are much more diverse, including areas that may not even have a road.

“Soldiers may have to operate in forests or deserts, or in a certain manner like moving stealthily in order to achieve some objective,” Fossaceca says. “This is very different than what’s happening in the autonomous vehicle industry, which is the main model of autonomy available today in terms of autonomous vehicle research.”

Behind the drive to true vehicle autonomy, Fossaceca says, is ARL's work to improve the autonomy software stack, a collection of algorithms, libraries, and software components. The software stack – originally the work product of the ARL's now-completed 10-year Robotics Collaborative Technology Alliance – consists of programs that allow autonomous vehicles to perform functions such as navigation, planning, perception, control, and reasoning. The autonomy software stack also contains a world model that the intelligent system can use as reference.
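To make the stack's structure concrete, the sketch below shows one way such modules (perception, planning, control) might share a common world model. This is purely illustrative: every class and method name here is hypothetical and does not reflect ARL's actual software.

```python
# Illustrative sketch only: a minimal layout of an autonomy software stack
# of the kind described above, in which perception, planning, and control
# modules all read from and update a shared world model. All names are
# hypothetical, not drawn from ARL's actual code.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Shared reference state that every module reads and updates."""
    obstacles: list = field(default_factory=list)
    pose: tuple = (0.0, 0.0)  # vehicle x, y position

class Perception:
    def update(self, world: WorldModel, sensed_obstacles: list) -> None:
        # Fuse new sensor detections into the shared world model.
        world.obstacles.extend(sensed_obstacles)

class Planner:
    def plan(self, world: WorldModel, goal: tuple) -> list:
        # Trivial straight-line "plan" standing in for real route
        # planning over the world model's terrain and obstacle data.
        return [world.pose, goal]

class Control:
    def step(self, world: WorldModel, waypoint: tuple) -> None:
        # Drive the vehicle to the next waypoint.
        world.pose = waypoint

# Wiring the modules together around the single shared world model:
world = WorldModel()
Perception().update(world, sensed_obstacles=[(3.0, 4.0)])
path = Planner().plan(world, goal=(10.0, 0.0))
Control().step(world, path[-1])
print(world.pose)  # prints (10.0, 0.0)
```

The design point the sketch captures is the one the article names: each function (navigation, planning, perception, control, reasoning) is a separate component, but all of them operate against a single world model rather than maintaining private copies of the environment.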

Occurring in parallel right now are two lines of effort (LOE): LOE 1 is work on basic mobility, while LOE 2 involves improving the software's decision-making ability, which encompasses collaborative learning and advanced reasoning. "These are happening at the same time and teams run experiments every six months," Fossaceca says.

Next up for the researchers: creating a single platform that will first perform narrow AI – algorithms that can complete very specific tasks consistently – and then use these capabilities to multitask under complex conditions. Longer term, the AIMM researchers are working on the Scalable Adaptive and Resilient Autonomy (SARA) program, which leverages collaborators outside the laboratory to accelerate the pace of emerging research on autonomous mobility and human-machine teaming.
