Story time could teach robots human etiquette

Teaching Siri to say “please” and “thank you” could mean the difference between coexistence with artificial intelligence and a robot apocalypse.

With AI-based machines taking on more in the 21st century, from manual labor to military assistance, researchers think those machines need to learn how to interact with people in order to avoid situations that are unethical or harmful to humans.

The Office of Naval Research is working with researchers at Georgia Tech to program robots with human morals using a software system called Quixote.

“What we’re trying to do with this research project is how to teach robots and artificial intelligence systems proper behavior,” said Mark Riedl, a researcher for Quixote and director of the Entertainment Intelligence Lab at Georgia Tech. “Humans learn what we call social norms and social conventions — little rules of thumb that keep us from rubbing elbows when we’re out in society and we have to interact with each other. These are simple things like learning to stand in line or paying for things before we take them from a store — things that we take for granted, but are in fact very, very hard to teach a robot.”

The issue becomes how to turn manners, like waiting in line or saying hi, into hard code. The answer to that seemingly complicated question turns out to be simple: story time.

Riedl and other researchers take natural-language procedural stories and feed them into Quixote, which converts them into signals that “reward” or “punish” an AI based on how closely it acts like the story’s protagonist.

“The robot actually thinks it’s playing a little game,” Riedl said. “It gets +10 points every time it does something correct or similar to a story, and -1 or -5 points every time it does something different. It’s just trying to get as many points as it can.”
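Riedl’s description maps naturally onto a reward function. The sketch below, in Python, illustrates the idea; the shopping scenario, action names and penalty rules are hypothetical stand-ins rather than Quixote’s actual implementation, though the point values follow the ones Riedl quotes.

```python
# Hypothetical illustration of story-shaped rewards, not Quixote's code.
# The "story" is an action sequence extracted from a natural-language
# procedural story, e.g. one about buying something at a store.
STORY_ACTIONS = ["enter_store", "take_item", "wait_in_line", "pay", "leave"]

def story_reward(step: int, action: str) -> int:
    """Score an action against what the story's protagonist did at this step."""
    if step < len(STORY_ACTIONS) and action == STORY_ACTIONS[step]:
        return 10   # +10: matches the protagonist's behavior
    if action == "leave" and step < len(STORY_ACTIONS) - 1:
        return -5   # -5: a serious deviation, e.g. leaving without paying
    return -1       # -1: a minor deviation
```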

Over time, Riedl said, the AI begins to remember which actions earn it rewards and learns to avoid ones that cost it points, ultimately learning through trial and error the way humans do.
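In reinforcement learning terms, that trial-and-error loop is essentially what tabular Q-learning does. Here is a minimal sketch, reusing the hypothetical story_reward function above (this is standard textbook Q-learning, not Quixote’s actual code):

```python
import random
from collections import defaultdict

ACTIONS = ["enter_store", "take_item", "wait_in_line", "pay", "leave"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

# Q[step][action]: running estimate of how many points each action is worth
# at each point in the scenario. Starts at zero for everything.
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

for _ in range(5000):                    # many practice episodes
    for step in range(len(STORY_ACTIONS)):
        # Mostly exploit what has worked so far; occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(Q[step], key=Q[step].get)
        reward = story_reward(step, action)
        future = max(Q[step + 1].values()) if step + 1 < len(STORY_ACTIONS) else 0.0
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best next action.
        Q[step][action] += ALPHA * (reward + GAMMA * future - Q[step][action])
```

After enough episodes, the highest-valued action at each step is the protagonist’s, so the greedy policy reproduces the story, which is the imitation behavior Riedl describes.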

The Office of Naval Research and researchers like Riedl are looking at ways morally trained AIs could be put to military use, including the possibility of going out on missions with humans.

“There are going to be robot companions who are out there on teams working on particular missions and objectives, and we want them to understand the way we think and the way we [humans] work together in teams, so that they’re not constantly stepping on our feet,” Riedl said.

Riedl also believes morally trained robots can help with rapidly evolving training simulators.

“Now as we build more and more complicated virtual training simulations, we might want to simulate entire societies or an entire town involving civilians,” said Riedl. “Being able to teach the computer how to be a civilian in a foreign country is a rapid way of creating these virtual simulations that we can then go and run various hypothetical scenarios through.”

For now, the stories Quixote programs into robots are closer to instructions than any tale out of Mother Goose, but by reverse engineering social cues into reward signals, robots are learning to program themselves.

“It turns out that stories are really good ways of encoding social norms and social conventions,” Riedl said. “Every time we tell a story, we bake in our understanding of how society works. Over time, much as humans learn from trial and error, it learns that it should do the things that earn praise and rewards more often than the things that bring punishment. It starts to imitate the protagonist in the story, to imitate us.”
