Robots behaving badly

Aaron Steinfeld (Photo provided by Aaron Steinfeld)

How do humans react when robots … misbehave?

That’s probably a question most people don’t even contemplate — machines are supposed to behave objectively and reliably, after all. But Aaron Steinfeld, an associate research professor at Carnegie Mellon University’s Robotics Institute, knows that the answers to that question could have some pretty important societal implications.

Steinfeld will present his research findings as part of the Pittsburgh Humanities Festival on March 26 at 4:30 p.m. at the Trust Arts Education Center, in a talk called “Human Reactions to In/Appropriate Robot Behavior.”

Steinfeld, a Buffalo native who has been in Pittsburgh since 2001, also does research in transportation and disabilities, having honed his skills with advanced degrees in engineering from the University of Michigan and post-doc work at U.C. Berkeley. While at Berkeley, Steinfeld worked in a driver/vehicle interaction lab and on self-driving cars and other advanced vehicle systems.

“My research is a mixture of transportation, disabilities and robotics,” he said, “and the unifying question across all of these is how do humans interact with complex systems, especially when they are out and about in the real world and moving?”

A few years ago, Steinfeld began collaborating with Holly Yanco, a professor at the University of Massachusetts Lowell, on a project focused on “people’s trust in robots, specifically autonomous robots,” Steinfeld explained. The robots used for the research were what are known as “nonsocial,” meaning they were more task-based and utilitarian and less “Star Wars” C-3PO.

The research, which involved varying the reliability of the robot to gauge the effect on people’s trust, produced some interesting, though not always unexpected, results.

“We would intentionally have the robot become less reliable during certain periods of the study, and look at how human trust changed as a result of that,” he said.

Building on the work of another study, Steinfeld and his colleagues decided to test human reaction when a robot, working as a team with a human on a task, began to assign blame when things went wrong and assign credit when things went well.

The conditions were varied in three ways: (1) if the situation went badly, the robot would blame itself, and if things went well, it would praise the human; (2) if the situation went badly, the robot would blame the human, and it would praise itself when things went well; and (3) the robot treated itself and the human as a team, blaming the team if things went badly and praising the team if things went well.
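As a rough illustration of those conditions (the article doesn’t describe the study’s actual software, so the condition names and the robot’s phrasing below are invented for the sketch), the feedback logic might look something like this in Python:

```python
# A hypothetical sketch of the three blame/credit conditions described above.
# The condition names and the robot's wording are illustrative assumptions only.

def robot_feedback(condition: str, task_went_well: bool) -> str:
    """Return what the robot says under a given condition and task outcome."""
    if condition == "self_blame":   # (1) robot blames itself, praises the human
        return "Great job, that was all you!" if task_went_well else "That failure was my fault."
    if condition == "human_blame":  # (2) robot blames the human, praises itself
        return "I handled that perfectly." if task_went_well else "You caused that failure."
    if condition == "team":         # (3) robot treats itself and the human as a team
        return "We did that well together." if task_went_well else "We fell short as a team."
    raise ValueError(f"unknown condition: {condition}")

# Example: each condition's response to a failed task
for c in ("self_blame", "human_blame", "team"):
    print(c, "->", robot_feedback(c, task_went_well=False))
```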

“And, as expected, people responded very badly to the robot blaming them,” Steinfeld said with a laugh. “People had very strong social reactions when they were being blamed, which is not unexpected.”

But people’s trust in the robot dropped under all three conditions, he said, even when the robot blamed itself and praised the human, and even in the team condition.

“To me, it suggests that just introducing the concept of blame creates some of the issues we see with human/human interactions,” Steinfeld said. “So, if you have an employee who regularly blames himself or praises you, you might be a little suspicious of how trustworthy is this person. Likewise, if they say that you did bad and they did good, your guard would go up, too. So, I think people are bringing in some of their human-to-human behaviors when the robot starts doing this.”

Another project, introduced by some of Steinfeld’s students, had four humans playing a game, with the robot congratulating the winner of each round and giving that person a reward. The program was designed, however, so that if two players’ reaction times were very close, the robot would declare the person who had won the least up to that point the winner of that round.

“The idea was if everyone was winning they would feel better,” Steinfeld said.
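As a rough illustration of that rule (the actual program isn’t shown in the article, so the closeness threshold and the example round below are invented), the round-winner selection might be sketched like this:

```python
# A hypothetical sketch of the round-winner rule described above: normally the
# fastest player wins, but if the two fastest are nearly tied, the round goes
# to whoever has won the fewest rounds so far. Threshold and data are assumed.

def declare_winner(reaction_times: dict, wins_so_far: dict, close_margin: float = 0.05) -> str:
    """Pick the round winner from players' reaction times (seconds)."""
    ranked = sorted(reaction_times, key=reaction_times.get)  # fastest first
    fastest, runner_up = ranked[0], ranked[1]
    if abs(reaction_times[fastest] - reaction_times[runner_up]) <= close_margin:
        # Too close to call: favor the player who has won less often.
        return min((fastest, runner_up), key=lambda p: wins_so_far[p])
    return fastest

# Example round with four players
times = {"A": 0.41, "B": 0.43, "C": 0.60, "D": 0.75}
wins = {"A": 3, "B": 0, "C": 1, "D": 1}
print(declare_winner(times, wins))  # -> "B" (nearly tied with A, but fewer wins)
```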

The manipulation did not, in fact, cause people to feel better, though. Instead, they became suspicious “that something might not be right,” Steinfeld said. Their assumption was that the robot was somehow malfunctioning.

“They did not assume that the robot was being deceptive,” Steinfeld noted. “And this goes back to some other research by other people who have shown that people associate objectivity, fairness, and honesty with robots, and they don’t assume that robots will be deceptive. So those preconceived notions of robot characteristics overrode people’s interpretations of the robot’s behavior, which is why they assumed the malfunction rather than deceptive action.”

The research overall is showing that “people will assign some kinds of human characteristics to the robot, or they will fall back on human-to-human interactions with the robot, but they won’t necessarily assign other human characteristics. Certain assumptions are brought in, while other assumptions are not.”

Knowing how humans react to unexpected robot behavior can be useful in manipulating human behavior, Steinfeld said.

“There are times when you might want humans to react a certain way,” Steinfeld said. “You might want to lower their trust in a robot because they are being too reliant on it, or you might want them to start worrying about a robot’s capabilities — maybe some part of its performance is failing — and you really want to draw their attention to the topic. There might be iterations the robot can use to raise these issues to humans in subtle ways that still appear to be rather powerful.”

Steinfeld is well aware of the ethical questions surrounding the subject of his research.

“There is an ethical question about whether you should use deceptive techniques to induce certain human behaviors when they are around robots, but by the same token, humans do this to each other all the time,” he said. “We tell white lies, we exaggerate things here and there to draw people’s attention to things. We might do certain things to produce an effect we want to see. Anyone who teaches a class has a bag of tricks to engage their students or get students to pay attention to things. There are examples where people are using slightly deceptive techniques to produce very honorable and altruistic effects. And that is a huge ethical question, ‘Should you do these things?’”

While humans do these types of things all the time, he said, “this raises obvious questions about whether robots should be doing these things too. But intentionally having a robot do something that is slightly untrue or slightly incorrect might be a powerful way of keeping people safe.”

Steinfeld and his wife, Lisa (a former writer for the Chronicle), live with their children in Mt. Lebanon, where they are active members of Temple Emanuel of South Hills. Coming to Pittsburgh, and to CMU, was a good move, both professionally and personally, for Steinfeld.

“I think Pittsburgh is a wonderful place to explore robotics,” he said. “Obviously, the Robotics Institute and Carnegie Mellon have a long tradition of world-class robotics research, and it’s really great to have the opportunity to collaborate with so many experts in the field.

“The other thing I keep noticing about Pittsburgh is that the people … are kind of the best of both worlds. They have the Midwestern niceness, but things here move at more of an East Coast speed,” he continued. “When we at Carnegie Mellon are interacting with the community, we get an enormous amount of valuable feedback, a lot of help when we seek help, and it’s great to feel like you are doing work in a place where people appreciate it.”

Toby Tabachnick can be reached at tobyt@thejewishchronicle.net.
