DoD must innovate in AI by 2025
June 15, 2021
Developing artificial intelligence (AI) technology for the battlefield is a top priority for the U.S. Department of Defense (DoD) as its adversaries continue to scale up their own AI and machine learning capabilities. Much of the DoD’s AI wizardry is spun out of the U.S. Defense Advanced Research Projects Agency (DARPA), which looks to enable machines to become trusted, collaborative partners of not just warfighters but all humans.
Much as they have driven the increased emphasis on cybersecurity, trusted computing, and microelectronics, the U.S. government and the DoD are making dominance in AI and machine learning (ML) technology essential across all domains.
The pressure is on: Earlier in 2021, the National Security Commission on Artificial Intelligence (NSCAI) submitted its Final Report to Congress and the President, outlining a need for the DoD to be AI-ready by 2025. The NSCAI report defines “AI-ready” as “Warfighters enabled with baseline digital literacy and access to the digital infrastructure and software required for ubiquitous AI integration in training, exercises, and operations.” [Note: this report can be found at https://reports.nscai.gov/final-report/table-of-contents/]
“Even with the right AI-ready technology foundations in place, the U.S. military will still be at a battlefield disadvantage if it fails to adopt the right concepts and operations to integrate AI technologies,” the report continues. “Throughout history, the best adopters and integrators, rather than the best technologists, have reaped the military rewards of new technology. The DoD should not be a witness to the AI revolution in military affairs, but should deliver it with leadership from the top, new operating concepts, relentless experimentation, and a system that rewards agility and risk.”
It’s important to realize “you can’t just flip a switch and have these capabilities in place,” according to NSCAI Commissioners Andy Jassy and Ken Ford. “It takes steady, committed work over a long period of time to bring these capabilities to fruition.”
The report says “the DoD must act now to integrate AI into critical functions, existing systems, exercises, and wargames to become an AI-ready force by 2025.” To get there, the report recommends the U.S. government responsibly develop and use AI technologies, with an emphasis on implications and applications of AI for defense and national security.
Hardware will play a key role. The report notes that, given tensions between China and Taiwan, the U.S. is dangerously close to losing access to the vast majority of cutting-edge microelectronics (fabricated in Taiwan) that power U.S. companies and the military. It recommends revitalizing domestic semiconductor design and manufacturing to ensure that the U.S. stays two generations ahead of its adversaries. (See more on this in our Editor’s Perspective on page 5.)
The report also stresses the importance of innovation: “The U.S. needs to sustain and increase investment in AI research to set conditions for accessible domestic AI innovation and drive the breakthroughs to win the technology competition through establishing a national AI research infrastructure and doubling Federal investments in AI R&D to reach $32B by 2026.”
AI R&D starts with DARPA
Military AI research began decades ago. A few examples: In the 1960s, the DoD began training computers to mimic basic human reasoning. By the 1990s, work on machine learning (ML) advanced from knowledge-driven to data-driven approaches, and computer programs were created to analyze vast amounts of data and “learn” from the results. Deep learning, which uses algorithms to let computers recognize objects and text within images and videos, advanced in the 2000s and 2010s. Computer vision, a combination of ML and neural networks, now can autonomously find objects of interest within video and imagery from drones within war zones. [Note: For more on AI/ML history, read https://militaryembedded.com/ai/machine-learning/artificial-intelligence-timeline]
“Our vision for AI today is the same as it was at the very beginning: to enable machines as trusted, collaborative partners to help humans solve important national-security problems,” says Valerie Browning, director of the Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office.
AI is already providing value to humans for tasks in which execution decisions are either governed by a limited set of well-understood rules or can be based on statistical pattern recognition, she says; this value will continue to increase.
“But AI that can be trusted to augment and support humans in a broader range of real-world, time-critical tasks within dynamic and unknown environments – as is often the case for military applications – remains an aspirational goal for DARPA,” Browning notes. “As far as we’ve come, we have that much further and more to go to achieve the original DARPA AI vision of truly symbiotic, trusted collaborative partnerships between humans and machines.”
Enabling AI within military systems
There are challenges involved with using AI within the military and its systems; the biggest is that “the military operational environment can be very dynamic and is often unknown,” says Browning. “This creates challenges in acquiring and making available the copious amount of data needed to train today’s state-of-the-art AI systems.”
Even when it is possible to train an AI system to perform a particular task, “adapting it to a new task or environment is typically not possible without significant retraining that may or may not preserve the competency of the system for prior learned tasks,” she adds.
Current AI systems “work well in applications where the consequences of ‘getting it wrong’ are tolerable and noncatastrophic,” Browning points out. “The increasing complexity and speed of military operations places a high bar for AI systems that support ‘faster-than-thought’ decision-making in situations where human lives are at risk. And realistic test and evaluation of AI systems in terms of how they will perform in these types of applications is extremely difficult.”
Military use of AI
AI is currently being used for military applications such as language translation, image classification, medical diagnosis, cyber defense, and automation of critical business processes including software accreditation and security clearance vetting.
“We have a clear understanding of the limitations of current state-of-the-art AI based on machine learning, so we can reasonably predict the type of applications that can benefit from near-term AI. In the longer term, as the original DARPA vision of truly trusted and collaborative human-machine partnerships come to fruition, we can expect to see AI increasingly deployed to support time-critical decision-making within tactical environments,” Browning says.
One challenge to adopting AI for military applications is that users want to understand how it reaches a conclusion.
“DARPA’s Explainable AI (XAI) program has made significant advancements toward AI algorithms that are more transparent and understandable to a broad range of users,” says Matt Turek, a program manager in DARPA’s Information Innovation Office. “We’ve created new XAI techniques that allow AI developers to better introspect and understand machine-learning models during the development process.”
XAI has also built new approaches for explaining decisions to operational users, Turek adds, like highlighting the region of an image that most influenced a decision. “We’ve developed processes for explaining AI systems to commanders, such as a new after-action review process for AI that uncovers key decision points for an autonomous system after a mission,” he adds.
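The image-region explanations Turek describes can be illustrated with a simple occlusion test: mask one region of the image at a time and measure how much the model’s confidence drops. The sketch below is a generic technique, not DARPA’s XAI code; the function names and the toy “model” are invented for illustration.

```python
import numpy as np

def occlusion_saliency(image, predict, patch=4):
    """Score each patch by how much masking it changes the model's confidence.

    `predict` is any function mapping an image array to a scalar confidence;
    a larger saliency value means the region influenced the decision more.
    """
    base = predict(image)
    h, w = image.shape
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i+patch, j:j+patch] = 0.0  # occlude one patch
            saliency[i // patch, j // patch] = base - predict(masked)
    return saliency

# Toy "model": confidence is the mean brightness of the top-left quadrant.
def toy_predict(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
sal = occlusion_saliency(img, toy_predict)
# Only patches inside the top-left quadrant get positive saliency,
# correctly highlighting the region that drove the "decision."
```

The same idea scales to real classifiers: sliding an occluder over the input and replotting the confidence drop as a heat map gives an operator a visual answer to “what did the model look at?”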
Real-time conversational AI for robots
Speaking is the most natural way for people to interact with complex autonomous agents or robots. Knowing this, researchers from the U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory and the University of Southern California’s Institute for Creative Technologies devised a way to flexibly interpret and respond to soldier intent derived from spoken dialogue with autonomous systems.
The lab’s Joint Understanding and Dialogue Interface (JUDI) system relies on a statistical classification technique to enable conversational AI via state-of-the-art natural-language understanding and dialogue-management technologies.
“The statistical language classifier enables autonomous systems to interpret the intent of a soldier by recognizing the purpose of the communication and performing actions to realize the underlying intent,” explains Army researcher Felix Gervits. “For example, if a robot receives a command to turn 45 degrees and send a picture, it could interpret the instruction and carry out the task.”
The classifier is trained on a labeled data set of human-robot dialogue generated during a collaborative search-and-rescue task. It learned “a mapping” of verbal commands to responses and actions – allowing it to apply this knowledge to new commands and to respond in an appropriate manner. (Figure 1.)
[Figure 1 | Army researchers create a novel approach to allow autonomous systems to interpret and respond to soldiers. Image courtesy U.S. Army/1st Lt. Angelo Mejia.]
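The mapping Gervits describes, from a spoken command to an intent the robot can act on, can be sketched with a minimal bag-of-words classifier. The training pairs and intent labels below are invented stand-ins for the Army’s search-and-rescue dialogue corpus, not the lab’s actual system.

```python
from collections import Counter

# Tiny labeled set of (command, intent) pairs, standing in for the
# human-robot dialogue corpus described above (labels are illustrative).
TRAINING = [
    ("turn left forty five degrees", "ROTATE"),
    ("rotate to face the door", "ROTATE"),
    ("send me a picture", "SEND_IMAGE"),
    ("take a photo and send it", "SEND_IMAGE"),
    ("move forward two meters", "MOVE"),
    ("go ahead slowly", "MOVE"),
]

def bag(text):
    """Bag-of-words representation: word counts, case-folded."""
    return Counter(text.lower().split())

def classify(utterance):
    """Pick the intent whose training example shares the most words
    with the input; fillers like 'uh' simply contribute no overlap."""
    words = bag(utterance)
    def overlap(example):
        return sum((bag(example[0]) & words).values())
    best = max(TRAINING, key=overlap)
    return best[1]

print(classify("please turn 45 degrees"))    # -> ROTATE
print(classify("uh send a picture please"))  # -> SEND_IMAGE
```

Because fillers and disfluencies add no overlap with any training example, they are effectively ignored, which hints at why a statistical classifier handles noisy speech more gracefully than a rigid grammar would.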
The researchers say that the technique can be applied to combat vehicles and autonomous systems to enable advanced real-time conversational capability for soldier-agent teaming. “By creating a natural speech interface to these complex autonomous systems, researchers can support hands-free operation to improve situational awareness and give our soldiers the decisive edge,” Gervits says.
Interacting with conversational agents requires little to no training for soldiers. “There is no requirement to change what they would say,” he adds. “A key benefit is the system also excels at handling noisy speech, which includes pauses, fillers, and disfluencies – all features one would expect in a normal conversation with humans.”
The classifier is trained ahead of time, so it can operate in real time without processing delays in conversation. This technique supports increased naturalness and flexibility in soldier-agent dialogue and can improve the effectiveness of these kinds of mixed-agent teams, Gervits says.
AI-enabled malign information campaigns an emerging, morphing challenge
A different form of AI weaponry is emerging in the form of the insidious spread of disinformation campaigns on social media, which can be shockingly effective on a targeted and massive scale.
The NSCAI report warns: “The prospect of adversaries using machine learning, planning, and optimization to create systems to manipulate citizens’ beliefs and behavior in undetectable ways is a gathering storm. Most concerning is the prospect that adversaries will use AI to create weapons of mass influence to use as leverage during future wars, in which every citizen and organization becomes a potential target.”
One of the report’s recommendations is to fund DARPA to coordinate multiple research programs to detect, attribute, and disrupt AI-enabled malign information campaigns and to authenticate the provenance of digital media. This approach would “amplify ongoing DARPA research programs to detect synthetic media and expand its efforts into attributing and disrupting malign information campaigns,” the report states.
DARPA is exploring how to combat these threats through its Media Forensics (MediFor) program, which “developed tools that automatically produce a quantitative integrity score indicating if an image or video was manipulated or AI-generated,” Turek says. “MediFor technology is foundational for detecting deep fakes and other forms of AI-manipulated media.”
For its part, DARPA’s Semantic Forensics (SemaFor) program is building tools to detect, attribute, and characterize falsified text, images, audio, and video; the SemaFor program launches later in 2021. DARPA’s Influence Campaign Awareness and Sensemaking (INCAS) program will develop techniques to help analysts detect, characterize, and track geopolitical influence campaigns with quantified confidence.
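One simple way to picture the “quantitative integrity score” Turek mentions is as a fusion of several independent detectors, each reporting a probability that the media was manipulated. The fusion rule, detector names, and weights below are hypothetical illustrations, not MediFor’s actual method.

```python
def integrity_score(detector_scores, weights=None):
    """Fuse detector outputs into one integrity score in [0, 1].

    Each detector reports P(manipulated); the result is
    1 minus the weighted-average evidence of manipulation,
    so LOWER scores mean the media is MORE likely manipulated.
    """
    if weights is None:
        weights = [1.0] * len(detector_scores)
    total = sum(weights)
    evidence = sum(w * s for w, s in zip(weights, detector_scores)) / total
    return 1.0 - evidence

# Three hypothetical detectors (splice artifacts, GAN fingerprints,
# metadata inconsistencies) examine an image; the splice detector
# is weighted most heavily.
score = integrity_score([0.9, 0.7, 0.2], weights=[2.0, 1.0, 1.0])
print(score)  # low integrity: strong evidence of manipulation
```

Collapsing multiple forensic signals into a single number is what lets a non-specialist analyst triage large volumes of media quickly, deferring detailed inspection to the low-scoring items.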