Military Embedded Systems

Stories are part of the curriculum for artificial intelligence robots


June 17, 2016

Mariana Iriarte

Technology Editor

Military Embedded Systems

Photo by ONR

Researchers at Georgia Tech, with support from the Office of Naval Research (ONR), have designed an arguably ingenious artificial intelligence (AI) software that will teach robots the difference between right and wrong. Through the art of storytelling, the software, called Quixote, should teach robots acceptable behavior in social settings.

I’m sure I won’t be the first or the last to make the association between the software name “Quixote” and the legendary story The Ingenious Gentleman Don Quixote of La Mancha by Miguel de Cervantes Saavedra. The main character is driven by wild fantasies that originate from all the romantic stories he has read. Essentially, the character has no connection to reality and sets out on a journey whose final result is death, not just of the character but also a metaphorical death of chivalry.

In an ONR release, program manager Marc Steinberg says, “For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive, and trustworthy.” Therein lies the rub. “One important question is how to explain complex concepts such as policies, values, or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”

The intriguing factor of this program is that researchers are teaching binary numbers, 010101s, how to act the way we were taught as children: through the act of storytelling. While stories hold an immense amount of wisdom, it is up to the reader to interpret the meaning of the story. To that end, how do you teach an artificial intelligence agent the right meaning of a story, especially when we humans all interpret words differently?

The hope is that the software Quixote serves as a human user manual, according to Dr. Mark Riedl, associate professor and director of Georgia Tech’s Entertainment Intelligence Lab. The software is supposed to teach AI robots how to interact with humans in the safest and most trustworthy way.

Using stories taken from the internet that highlight daily social interactions, the team of researchers created a virtual agent and placed it in “game-like scenarios,” where it earned points and, interestingly enough, positive reinforcement “for emulating the actions of protagonists in the stories.”
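The release doesn’t spell out the implementation, but the mechanism it describes, rewarding an agent for matching a protagonist’s sequence of actions, resembles simple reward shaping in reinforcement learning. The Python sketch below works under that assumption; the pharmacy scenario, action names, and training numbers are all invented for illustration, not taken from the Georgia Tech system.

```python
import random

# Hypothetical sketch of story-based reward shaping: the agent earns
# positive reinforcement for taking the action the story's protagonist
# took at each step, and a penalty otherwise.

# A "story" reduced to the protagonist's ordered actions in a pharmacy errand.
PROTAGONIST_ACTIONS = ["enter_store", "wait_in_line", "pay_cashier", "exit_store"]
ALL_ACTIONS = PROTAGONIST_ACTIONS + ["grab_and_run"]  # the antisocial shortcut

def reward(step, action):
    """+1 for emulating the protagonist at this step, -1 otherwise."""
    return 1.0 if action == PROTAGONIST_ACTIONS[step] else -1.0

# Tabular values over (step, action) pairs, learned with an
# epsilon-greedy bandit update at each step of the errand.
q_table = {(s, a): 0.0 for s in range(len(PROTAGONIST_ACTIONS)) for a in ALL_ACTIONS}
alpha, epsilon, episodes = 0.1, 0.1, 5000

for _ in range(episodes):
    for step in range(len(PROTAGONIST_ACTIONS)):
        if random.random() < epsilon:   # explore occasionally
            action = random.choice(ALL_ACTIONS)
        else:                           # otherwise exploit the best-known action
            action = max(ALL_ACTIONS, key=lambda a: q_table[(step, a)])
        r = reward(step, action)
        q_table[(step, action)] += alpha * (r - q_table[(step, action)])

# After training, the greedy policy reproduces the protagonist's behavior.
learned = [max(ALL_ACTIONS, key=lambda a: q_table[(s, a)])
           for s in range(len(PROTAGONIST_ACTIONS))]
print(learned)  # expected: the protagonist's sequence, not "grab_and_run"
```

The point of the toy example is the shape of the incentive, not the algorithm: because the antisocial shortcut never pays off, the learned policy converges on the socially acceptable sequence the story models.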

The results? The agent went through approximately 500,000 simulations with a 90 percent success rate, leaving a 10 percent chance that the virtual AI acts negatively. It’s a cringe-worthy moment.

Riedl says, “These games are still fairly simple, more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”

The key here is that the robots will have encoded algorithms that make them act in a certain way. While humans have free will, these robots are driven by code. Over time, will these robots evolve as humans have over centuries? Will they adapt as society changes and moves from one moral standard to another?

“Within a decade, there will be more robots in society, rubbing elbows with us,” Riedl says. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”

Cervantes wrote his story depicting the eventual expiration of chivalry; will this Quixote be teaching AI robots the eventual death of humanity? I don’t think so. We don’t even understand what our driving factor is. Human emotions, feelings, the connection to our perpetual “soul”: that thing that drives us to act is still a mystery to us. While robots may be more logical than we are, I don’t see our eventual demise coming at the hands of an artificial intelligence. That privilege will be left solely to us.

For more information on Riedl’s research under ONR’s Science of Autonomy program, click here. To read his papers on this topic, click here and here.