Army study finds autonomous machines increase cooperation among individuals
News | February 12, 2019
ABERDEEN PROVING GROUND, Md. -- Researchers from the U.S. Combat Capabilities Development Command's Army Research Laboratory, the Army's Institute for Creative Technologies and Northeastern University (NU) collaborated on a new research study suggesting that the use of autonomous machines increases cooperation among individuals.
The research team, led by Dr. Celso de Melo, ARL, along with Drs. Jonathan Gratch, ICT, and Stacy Marsella, NU, conducted a study of 1,225 volunteers who participated in computerized experiments involving a social dilemma with autonomous vehicles.
[Army graphic depicts a potential autonomous driving setup.]
"Autonomous machines that act on people's behalf -- such as robots, drones and autonomous vehicles -- are quickly becoming a reality and are expected to play an increasingly important role in the battlefield of the future," de Melo explains. "People are more likely to make unselfish decisions to favor collective interest when asked to program autonomous machines ahead of time versus making the decision in real-time on a moment-to-moment basis."
Even with the promise of increased efficiency, de Melo states it's not clear whether this paradigm shift will change how people decide when their self-interest is pitted against the collective interest. "For instance, should a reconnaissance drone prioritize intelligence gathering that is relevant to the squad's immediate needs or to the platoon's overall mission?" de Melo asks. "Should a search-and-rescue robot prioritize local civilians or focus on mission-critical assets?"
The researchers published their results in the Proceedings of the National Academy of Sciences (PNAS) in a paper titled "Human cooperation when acting through autonomous machines."
"Our research in PNAS starts to examine how these transformations might alter human organizations and relationships," Gratch explains. "Our expectation, based on some prior work on human-intermediaries, was that AI representatives might make people more selfish and show less concern for others."
In the paper, the results indicate that volunteers programmed their autonomous vehicles to behave more cooperatively than they behaved when driving themselves. According to the evidence, this happens because programming a machine in advance makes selfish short-term rewards less salient, leading people to consider broader societal goals.
"We were surprised by these findings," Gratch says. "By thinking about one's choices in advance, people actually show more regard for cooperation and fairness. It is as if by being forced to carefully consider their decisions, people placed more weight on prosocial goals. When making decisions moment-to-moment, in contrast, they become more driven by self-interest."
The results further show the effect occurs in an abstract version of the social dilemma, which the researchers say indicates the finding generalizes beyond the domain of autonomous vehicles.
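To make the payoff structure of such a dilemma concrete, the sketch below (in Python, as a hypothetical illustration rather than the study's actual task) models a symmetric two-driver encounter in which yielding is the cooperative choice: pushing ahead pays more for an individual driver in the moment, yet both drivers end up worse off when each pushes ahead.

# Hypothetical illustration of a two-driver social dilemma; the payoff
# numbers are assumptions for the sketch, not values from the study.
PAYOFF = {
    ("yield", "yield"): (3, 3),   # mutual cooperation: smooth merge for both
    ("yield", "push"):  (0, 5),   # the pushy driver gains at the other's expense
    ("push",  "yield"): (5, 0),
    ("push",  "push"):  (1, 1),   # mutual defection: gridlock, worst joint outcome
}

def outcome(a, b):
    """Return (payoff_a, payoff_b) for one encounter between two drivers."""
    return PAYOFF[(a, b)]

# A driver deciding moment-to-moment is tempted by the short-term gain of pushing;
# a pre-programmed "always yield" policy commits to cooperation in every encounter.
print(outcome("push", "yield"))   # (5, 0): selfish short-term win
print(outcome("yield", "yield"))  # (3, 3): better collective result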
"The decision of how to program autonomous machines, in practice, is likely to be distributed across multiple stakeholders with competing interests, including government, manufacturers and controllers," de Melo continues. "In moral dilemmas, for instance, research indicates that people would prefer other people's autonomous vehicles to maximize preservation of life (even if that meant sacrificing the driver), whereas their own vehicle to maximize preservation of the driver's life."
Researchers note that autonomous machines have the potential to shape how these dilemmas are resolved, and that stakeholders thus have an opportunity to promote a more cooperative society.