Is it unethical to abuse a robot? Some researchers have been wrestling with that — and figuring out ways to make us empathize more with robots.
A study published this month in Scientific Reports found that there’s a simple way to achieve that goal. If you give someone a 3D head-mounted display (basically, a fancy set of goggles) and “beam” her into a robot’s body so she sees the world from its perspective, you can change her attitude toward it.
“By ‘beaming,’ we mean that we gave the participants the illusion that they were looking through the robot’s eyes, moving its head as if it were their head, looking in the mirror and seeing themselves as a robot,” explained co-author Francesco Pavani of the University of Trento. “The experience of walking in the shoes of a robot led the participants to adopt a friendlier attitude.”
This research adds to recent philosophical work on “robot rights” and our moral intuitions about machines. One common intuition is that if we one day manage to create a sentient robot, we’d have a duty to treat it ethically.
For example, the philosopher Peter Singer recently told me that the question of whether future robots should be included in our moral circle — the imaginary boundary we draw around those we consider worthy of moral consideration — is straightforward. “If AI is sentient, then it’s definitely included, in my view. If not, then it’s not.”