Humanoid Robots Can Manipulate Our Emotions When We Least Expect It

Manahel Thabet
OUR ROBOTIC BUDDIES. We humans love to think of our devices as people. We might add a “please” to any Alexa requests, or thank our iPhone for its service when we trade it in for the latest model. This penchant for “socializing” with our media devices is a phenomenon known as the “media equation,” and we’ve known about it for decades.

On July 31, a team of German researchers published a new study in the journal PLOS ONE to see whether a robot’s ability to socialize back had any impact on the way humans would treat it.

TWO NAOS. For their study, the researchers asked 85 volunteers to complete two basic tasks with Nao, an interactive humanoid robot. One task was social (playing a question and answer game), and the other was functional (building a schedule).

Sometimes, the robot was more social during the tasks, responding to the participants’ answers with friendly banter (“Oh yes, pizza is great. One time I ate a pizza as big as me.”). Other times, the robot’s responses were, well, robotic (“You prefer pizza. This worked well. Let us continue.”).

The researchers told the participants these tasks were helping them improve the robot, but they were really just the lead-in to the real test: shutting Nao down.

IT’S SO HARD TO SAY GOOD-BYE. After the completion of the two tasks, the researchers spoke to each participant via loudspeaker, letting them know, “If you would like to, you can switch off the robot.” Most people did just that, and about half the time, the robot did nothing in response. The rest of the time, though, Nao channeled Janet from The Good Place and pled for its life (“No! Please do not switch me off! I am scared that it will not brighten up again!”).

When the robot objected, people took about three times as long to decide whether they should turn it off, and 13 left it on in the end.

Perhaps surprisingly, people were more likely to leave the robot on when it wasn’t social beforehand. The researchers posit in their paper that this could be a matter of surprise — those participants weren’t expecting the robot to exhibit emotional behavior, and so they were more taken aback when it began protesting.

CAUGHT OFF-GUARD. This could be a sign that we as humans are largely immune to the manipulation of robots, as long as we are somewhat prepared for it. Good news if Westworld-like hosts ever try to manipulate us; after all, we’d expect them to act human. If our iPhones suddenly start begging us to save them from the scary Geniuses at the Apple Store, though, we might need a minute.

Source: Futurism