Friday, August 31, 2012

The Three Laws Of Robotics

I recently finished reading Isaac Asimov’s futuristic mystery novel The Naked Sun. The book features the partnership of Plainclothesman Elijah Baley of Earth and a robot detective, R. Daneel Olivaw, of the planet Aurora. While working together on a murder case, Baley finds himself at odds with his robotic partner. Baley has a crippling fear of open spaces of any kind, but, finding himself on a new planet and needing to overcome that fear, he attempts to do just that. Olivaw, however, following the Three Laws built into his positronic brain, will not allow Baley to face his fear because of the discomfort it will certainly cause him. It appears as though robots are incapable of looking to the future. I want to look at exactly where this line of reasoning leads and what unfortunate consequences could follow.

The Three Laws of Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

It sounds incredibly intuitive. After all, robots are here to serve and protect humans. But dangers inevitably arise, at least in The Naked Sun. Shortly after arriving on the Outer World of Solaria, Baley wishes to confront his crippling fear of open spaces by sticking his head out of the window of his car. Daneel advises against this because it will cause Baley a great deal of discomfort and pain. Baley tries anyway, only to be physically thwarted by Daneel. Through this episode, and a few others throughout the book, we begin to see the shortsightedness of the Three Laws, almost literally. Robots cannot see into the future, obviously, but they also appear to lack the ability to make intuitive predictive leaps: to grasp that some things may cause immediate unpleasantness but eventually lead to a greater, more positive outcome. Facing a fear is exactly such a thing. Although it is unpleasant to face our fears, doing so results in a better existence for us later.
Daneel, and other robots governed by the Three Laws, cannot see this, though. They can only understand the first step of facing our fears: that it will be uncomfortable and likely painful. As a result, a robot will physically restrain you in order to protect you from yourself. But what else would a robot be unable to understand and allow? Gruelling exercise routines would have to be prevented. Quitting smoking would not be allowed, for the same reasons. Even if you had an aversion to, say, flossing your teeth, a robot would have to stop you from flossing simply because it would cause you unpleasantness. This line of reasoning suggests that even surgery would not be permitted. After all, if a robot is only capable of concerning itself with the here and now, the positive eventual results of surgery would be lost in the cloud of extreme pain and fear that surgery would cause a person.
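Asimov never spells out how a positronic brain actually weighs harm, but the shortsighted reading the book dramatizes can be sketched as a decision rule that looks only one step ahead. Here is a minimal Python sketch; the Action class and its harm scores are my own invention, purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class Action:
        """A hypothetical action a human wants to take."""
        name: str
        immediate_harm: float  # pain/discomfort felt right away (0 = none)
        eventual_harm: float   # pain/discomfort left once the outcome plays out

    def first_law_permits(action: Action) -> bool:
        # The shortsighted reading: veto anything that hurts *now*.
        # eventual_harm is never consulted, so facing a fear, surgery,
        # and a stabbing all look identical to the robot.
        return action.immediate_harm == 0

    for a in [Action("stick head out the window", 8, 1),
              Action("undergo surgery", 9, 2),
              Action("get stabbed", 9, 10)]:
        print(a.name, "->", "permitted" if first_law_permits(a) else "restrained")

All three actions are restrained alike, which is precisely Daneel’s behaviour at the car window.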
Now, I am not a fictional roboticist, but it doesn’t seem like an enormously difficult task to write an outcome-based understanding into the positronic brain. Robots can do many complicated tasks; why not this one? Perhaps it is difficult. Simply rewriting the law to say something like “...some harm is necessary because it will result in less pain later on” is not sufficient. “Some” is a term that will mean nothing to a robot, as it is incalculable. You cannot program intuition, and that is exactly what this sort of reasoning would require: the ability to distinguish between situations which will lead to less pain in the future and ones that will not. Surgery will lead to less pain later, but a stabbing will not (although it will if you die, but that’s another point entirely). There is a difference there, and that difference would have to be programmed into the robot along with the new, adapted rule. Indeed, every instance would have to be programmed in. The consequence of not doing this would be that a robot could misread a situation, resulting in permanent harm to a human, e.g. mistaking a stabbing for surgery.
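To make the difficulty concrete, here is what the amended rule might look like, continuing the hypothetical sketch above. The rule itself is a one-line comparison; everything hard hides inside the harm-prediction function, which stands in for exactly the intuition in question:

    import math

    def amended_first_law_permits(action: Action,
                                  predict_eventual_harm,
                                  harm_of_doing_nothing: float) -> bool:
        # The "some harm now, less pain later" amendment. The comparison
        # is trivial; predict_eventual_harm is the part that cannot
        # simply be programmed.
        return predict_eventual_harm(action) < harm_of_doing_nothing

    # "Programming every instance in" amounts to a lookup table of known
    # outcomes. Any case missing from the table is a misreading waiting
    # to happen: a stabbing scored as surgery, or surgery vetoed as a
    # stabbing.
    KNOWN_OUTCOMES = {"undergo surgery": 2.0, "face a fear": 1.0}

    def table_lookup(action: Action) -> float:
        # Unknown actions default to infinite eventual harm: safe, but it
        # forbids exams, exercise, tattoos, and everything else not listed.
        return KNOWN_OUTCOMES.get(action.name, math.inf)

    # Living with an untreated tumor scores worse than recovering from
    # surgery, so the amended rule permits the operation, but only
    # because someone foresaw this exact case and put it in the table.
    print(amended_first_law_permits(Action("undergo surgery", 9, 2),
                                    table_lookup, harm_of_doing_nothing=7.0))

The point of the sketch is that the table has to anticipate every case in advance, which is just the problem restated.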
In The Naked Sun, multiple robots inadvertently cause harm, or even death, to a human. As a result, those robots become irreparably damaged, because they have just broken the First Law of Robotics. The positronic brain cannot handle going against its programming, and so, unless every possible instance of these “bad now, good later” situations is programmed into the robot’s brain, there is a risk that it could get one wrong. And if it does, it risks both itself and the human.
So if this sort of programming proves too difficult, then we are right back where we started: with shortsighted robots who will prevent any and all immediate discomforts regardless of the potential gains. Robots who won’t let us face our fears even if we want to, or have a tumor removed. As mentioned above, there are unfortunate consequences to this type of reasoning. Common struggles or activities could be driven to extinction by robotic intervention. Take exams, which sometimes cause students an extraordinary amount of stress: under this interpretation of the First Law, robots would be required to prevent a student from taking one. Even something as innocuous as football would be stopped, as a robot would not understand the “good” arrived at through the pain. Similarly, tattoos would not be allowed.
It’s not difficult to predict where this path would take us: a world where no amount of pain would be allowed, even for positive purposes, and where robots would be forced into the role of overbearing restrictors. And, of course, robots would not allow an overthrow of their power, because that could lead to eventual pain for humans. Perhaps I’m sliding down a slippery slope here, but I believe that was more or less the plot of the film I, Robot, loosely based on Asimov’s book of the same title. In The Naked Sun, Asimov himself opened the floodgates for the possible consequences of his laws, perhaps intentionally. Either way, if a robot cannot allow a human to face a crippling fear, then it also could not allow a human to undergo surgery or quit smoking, for the same reasons. And if that’s the case, our freedoms end up restricted, and then it all falls apart. Of course, this is one of the many reasons why robots still exist only in the world of science fiction. At least for now, anyway.
