Tuesday, February 12, 2013

Do You Have the Heart to Kill A Robot?

Most of us wouldn't kill someone or something that doesn't want to die. At least I hope not.

Even if you're mad at your coworker for messing up the report you've been working on, the worst you'd probably do is admonish him. If your dog chews up your slippers, you'd say "Bad Dog!!" and let it go at that.
Will something like these Star Wars guys be our best friends some day?

What if a robot begs you not to kill it? A robot is a machine. You turn it on and off like you would the clothes dryer or the lawn mower.

But as computerization gets more complex, more interactive, more seemingly emotional, would you have the heart to shut off a complex robot, one that seems to have some human qualities, one that is likeable, essentially wiping out its data, its memories?

I know that sounds silly. Of course you'd shut off a machine when you're done with it. We do it to our laptops and iPhones all the time.

But is it such a silly question? An experiment shows that people are very uncomfortable shutting off a robot when it "asks" them not to, when it essentially says they'd hurt or kill it if they do.

According to the NPR report on Christoph Bartneck's research:

"Bartneck studies human-robot relations, and he wanted to know what would happen if a robot in a similar position to the "learner" begged for its life. Would there be any moral pause? Or would research subjects simply extinguish the life of a machine pleading for its life without any thought or remorse?"

Researchers put people in front of or next to robots that answered questions helpfully, in a voice that seemed friendly. The robot was made to look like a nice, cartoonish cat.

The people working with the robots were told they must shut them off when finished. But humans seem to naturally anthropomorphize the pleasant robots.

According to NPR:

"In videos of the experiment, you can clearly see a moral struggle as the research subject deals with the pleas of the machine. "You are not really going to switch me off, are you?" the cat robot begs, and the humans sit, confused and hesitating. "Yes. No. I will switch you off!" one female research subject says, and then doesn't switch the robot off."

"People started to have dialogues with the robot about this," Bartneck says, "Saying, 'No! I really have to do it now, I'm sorry! But it has to be done!' But then they still wouldn't do it."

There they sat, in front of a machine no more soulful than a hair dryer, a machine they knew intellectually was just a collection of electrical pulses and metal, and yet they paused.


I can understand the reluctance of these people to shut off the machines. Intellectually, I know they are machines, and I'm sure the research subjects did, too. But when will it come to the point where machines, robots, begin to take on enough humanlike and emotional characteristics that we relate to them enough to become their friends?

Will it be like our relationships with dogs? Or acquaintances? Or something more akin to close friends? What if they become something that knows us better, and empathizes with us more, than any of our human or dog or cat friends?

Then what happens to us, and the machines, and our perhaps mutual relationships if the robots need repairs, or need to be rebooted? And the Next Big Thing comes along every few minutes, it seems.

If we get attached to our best friend robot, and she becomes outmoded and outdated, do we trade her in for a better model, kind of like a cad does when he thinks his wife is too old? Or do we stick with our tried-and-true robot friends?

These questions are beginning to sound less and less theoretical and fantastic, and more like something many of us will have to actually deal with.

And I thought dealing with my human acquaintances was fun and challenging and complex enough.
