Military Embedded Systems

Army scientists find catch-22 in using AI with military applications

News

February 12, 2019

Mariana Iriarte

Technology Editor

Military Embedded Systems

Dr. James Schaffer. U.S. Army Graphic by Jhi Scott.

ADELPHI, Md. - The U.S. Army is collaborating with researchers from the University of California, Santa Barbara (UCSB) to test the hypothesis that many people trust their own abilities far more than those of an artificial intelligence (AI) system, and to examine what that bias means for military applications.

"The U.S. Army continues to push the modernization of its forces, with notable efforts including the development of smartphone-based software for real-time information delivery such as the Android Tactical Assault Kit, or ATAK, and the allocation of significant funding towards researching new AI and machine learning methods to assist command and control personnel," explains Dr. James Schaffer, scientist for RDECOM's Army Research Laboratory, the Army's corporate research laboratory (ARL), at ARL West in Playa Vista, California.

According to Schaffer, despite these advances there is still a significant gap in basic knowledge about how to apply AI, and it remains unknown which characteristics of AI will or will not help military decision-making processes.

"For instance, many research studies and A/B testing, such as those performed by Amazon, have experimented with different forms of persuasion, argumentation and user interface styles to determine the winning combination that moves the most product or inspires the most trust," Schaffer says. "Unfortunately, there are big gaps between the assumptions in these low-risk domains and military practice."

For this research, the scientists constructed an abstract game similar to the Iterated Prisoner's Dilemma, in which players must choose in every round whether to cooperate with or defect against their co-players; the abstraction was chosen so that all relevant factors could be controlled. The research team developed an online version of the game in which players earned points by making good decisions in each round.
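
The article does not give the game's exact rules or point values, so the sketch below is only a rough illustration of how one round of a Prisoner's Dilemma-style game can be scored. The payoff values used (temptation 5, mutual cooperation 3, mutual defection 1, sucker's payoff 0) are the textbook defaults, not the study's actual scoring.

```python
# Minimal sketch of one Iterated Prisoner's Dilemma round, assuming the
# standard textbook payoffs; the study's actual point scheme is not
# described in the article.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation (R, R)
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation (S, T)
    ("defect",    "cooperate"): (5, 0),  # temptation vs. sucker's payoff (T, S)
    ("defect",    "defect"):    (1, 1),  # mutual defection (P, P)
}

def play_round(player_choice: str, opponent_choice: str) -> tuple[int, int]:
    """Return (player_points, opponent_points) for one round."""
    return PAYOFFS[(player_choice, opponent_choice)]

# Example: the player defects while the opponent cooperates.
print(play_round("defect", "cooperate"))  # -> (5, 0)
```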

In addition, an AI generated advice in each round, which was shown alongside the game interface and made a suggestion about which decision the player should make, Army officials explain. Players were free to accept or ignore the AI's suggestions. The AI's advice also varied across conditions: some of it was deliberately inaccurate, some required game information to be entered manually before the advice could be accessed, and some justified its suggestions with rational arguments.
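
The study's actual advisor implementation is not described in the article. As a hypothetical sketch only, the deliberately inaccurate condition could be modeled as an advisor that returns the better choice with some probability; the function name and the `accuracy` parameter here are invented for illustration.

```python
import random

def advisor_suggestion(best_choice: str, accuracy: float) -> str:
    """Hypothetical advisor: returns the better choice with probability
    `accuracy`, otherwise the opposite choice, mimicking a deliberately
    inaccurate advice condition."""
    if random.random() < accuracy:
        return best_choice
    return "defect" if best_choice == "cooperate" else "cooperate"

# Example: an advisor that is right only 60 percent of the time.
suggestion = advisor_suggestion("cooperate", accuracy=0.6)
```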

"What was discovered might trouble some advocates of AI - two-thirds of human decisions disagreed with the AI, regardless of the number of errors in the suggestions," Schaffer says.

The more familiar players reported being with the game beforehand, the less they used the AI, an effect that persisted even when controlling for the AI's accuracy. This implies that improving a system's accuracy alone will not increase adoption among this population.
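
The article does not detail how the researchers controlled for accuracy. One minimal way to illustrate the idea is to stratify AI-adherence rates by accuracy condition, so the familiarity effect can be compared within each accuracy level; the sketch below uses made-up records and field names purely for illustration.

```python
from collections import defaultdict

# Hypothetical records: (self_reported_familiarity, accuracy_condition,
# followed_ai). The data and schema are illustrative, not the study's.
records = [
    ("low",  "high_acc", True),  ("low",  "high_acc", True),
    ("high", "high_acc", False), ("high", "high_acc", True),
    ("low",  "low_acc",  True),  ("low",  "low_acc",  False),
    ("high", "low_acc",  False), ("high", "low_acc",  False),
]

# Adherence rate per (familiarity, accuracy) cell: stratifying by the
# advisor's accuracy is a simple way of "controlling" for it.
cells = defaultdict(list)
for familiarity, acc_bin, followed in records:
    cells[(familiarity, acc_bin)].append(followed)

for key, vals in sorted(cells.items()):
    print(key, sum(vals) / len(vals))
```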

"This might be a harmless outcome if these players were really doing better - but they were in fact performing significantly worse than their humbler peers, who reported knowing less about the game beforehand," Schaffer explains. "When the AI attempted to justify its suggestions to players who reported high familiarity with the game, reduced awareness of gameplay elements was observed - a symptom of over-trusting and complacency."

Despite these symptoms of over-trust, a corresponding increase in agreement with the AI's suggestions was not observed, presenting system designers with a catch-22, Schaffer comments. This research highlights the ongoing usability problems of complex, opaque systems such as AI, despite continued advances in accuracy, robustness, and speed.

"Rational arguments have been demonstrated to be ineffective on some people, so designers may need to be more creative in designing interfaces for these systems," Schaffer continues. He explains that this could be accomplished through appealing to emotions or competitiveness, or even by removing presence from the AI, such that users do not register its presence and thus do not anchor on their own abilities.

"Despite challenges in human-computer interaction, AI-like systems will be an integral part of the Army's strategy over the next five years," Schaffer adds in an Army release. "One of the principle challenges facing military operations today is rapid response from guerilla adversaries, who often have shorter command chains and thus can act and react more rapidly than the U.S. Armed Forces. Complex systems that can rapidly react to a changing environment and expedite information flow can improve response times and help maintain op-tempo - but only if given sufficient trust by its users."

This research will appear in the proceedings of the ACM's 2019 conference on Intelligent User Interfaces. For more, visit: https://iui.acm.org/2019/.

