Friday, August 31, 2012

The Three Laws Of Robotics

I recently finished reading Isaac Asimov’s futuristic mystery novel The Naked Sun. The book features the partnership of Plainclothesman Elijah Baley of Earth and a robotic detective, R. Daneel Olivaw, of the planet Aurora. While working together on a murder case, Baley finds himself having issues with his robotic partner. Baley has a crippling fear of open space of any kind, but, finding himself on a new planet and needing to overcome his fears, he attempts to do just that. However, Olivaw, following the Three Laws built into his positronic brain, will not allow Baley to face his fears because of the discomfort it will certainly cause him. It appears as though robots are incapable of looking to the future. I want to examine exactly what could result from this line of reasoning and what unfortunate consequences could follow.

The Three Laws of Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

It sounds incredibly intuitive. After all, robots are here to serve and protect humans. But dangers do inevitably arise, at least in The Naked Sun. Shortly after arriving on the Outer World of Solaria, Baley wishes to confront his upsetting fear of open spaces by sticking his head out of the window of his car. Daneel suggests that he should not do this because it will cause Baley a large amount of discomfort and pain. Baley ignores this and tries anyway, only to be physically thwarted by Daneel. Through this revelation, and a few others throughout the book, we begin to see the shortsightedness of the Three Laws. Robots cannot see into the future, obviously, but they also appear to lack the ability to make intuitive predictive leaps, i.e., to grasp that some things may cause immediate unpleasantness but will eventually result in a greater, more positive consequence. Facing a fear is one such thing: although it is unpleasant, doing so will result in a better existence for us later.
Daneel, and other robots governed by the Three Laws, cannot see this, though. They can only understand the first step of facing our fears: that it will be uncomfortable and likely painful. As a result, a robot will physically restrain you in order to protect you from yourself. But what other sorts of things would a robot be unable to understand and allow? Grueling exercise routines would have to be prevented. Quitting smoking would not be allowed for the same reasons. Even if you had an aversion to, say, flossing your teeth, a robot would then have to stop you from flossing simply because it would cause you unpleasantness. This line of reasoning seems to suggest that even surgery would not be permitted. After all, if a robot is only capable of concerning itself with the here and now, the eventual positive results of surgery would be lost in the cloud of extreme pain, fear, etc. that surgery would cause a person.
Now, I am not a fictional roboticist, but it doesn’t seem like an enormously difficult task to write an outcome-based understanding into the positronic brain. Robots can do many complicated tasks; why not this one? Perhaps it is difficult. Simply rewriting the law to say something like “...some harm is necessary because it will result in less pain later on” is not sufficient. “Some” is a term that will mean nothing to a robot, as it is incalculable. You cannot program intuition, and that is exactly what this sort of reasoning would require: the ability to distinguish between situations that will lead to less pain in the future and ones that will not. Surgery will lead to less pain later, but a stabbing will not (although it will if you die, but that’s another point entirely). There is a difference there, and that difference would have to be programmed into the robot along with the new, adapted rule. Indeed, every instance would have to be programmed in. The consequence of not doing this would be that a robot could misread a situation in a way that resulted in permanent harm to a human, e.g. mistaking a stabbing for surgery.
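To make the difficulty concrete, here is a minimal, purely illustrative Python sketch; nothing here comes from Asimov, and all the names and harm values are hypothetical. The naive First Law blocks anything with immediate harm, surgery included, while the “adapted” rule only works if the robot already knows the future outcomes of allowing or blocking the action, which is precisely the intuition that cannot be programmed.

```python
# Hypothetical sketch of the problem described above. The harm numbers are
# invented; in reality a robot would have no way to know the "future" values.

def first_law_allows(action):
    """Naive First Law: forbid any action that causes immediate harm."""
    return action["immediate_harm"] == 0

def outcome_based_allows(action):
    """Adapted rule: permit immediate harm if it reduces total harm later.
    Only works if future harm under both choices is somehow already known."""
    harm_if_allowed = action["immediate_harm"] + action["future_harm_if_allowed"]
    harm_if_blocked = action["future_harm_if_blocked"]
    return harm_if_allowed <= harm_if_blocked

# Surgery hurts now but prevents a worse outcome (an untreated tumor).
surgery = {"immediate_harm": 8, "future_harm_if_allowed": 1,
           "future_harm_if_blocked": 10}
# A stabbing hurts now and later; blocking it avoids harm entirely.
stabbing = {"immediate_harm": 8, "future_harm_if_allowed": 10,
            "future_harm_if_blocked": 0}

print(first_law_allows(surgery))      # False: the naive rule forbids surgery
print(outcome_based_allows(surgery))  # True: 8 + 1 <= 10
print(outcome_based_allows(stabbing)) # False: 8 + 10 > 0
```

The sketch also shows the failure mode from the paragraph above: if the robot’s estimates of the “future harm” values are wrong, a stabbing that is mislabeled with surgery’s numbers would be permitted.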
In The Naked Sun, multiple robots inadvertently cause harm, or even death, to a human. As a result, those robots become irreparably damaged because they have broken the First Law of Robotics. The positronic brain cannot handle going against its programming, and so unless every possible instance of these “bad now, good later” situations is programmed into the robot’s brain, there is a risk that it could get one wrong. And if it does, it risks both itself and the human.
So if it proves too difficult to perform this sort of programming, then we are right back where we started: with shortsighted robots who will prevent any and all immediate discomforts regardless of the potential gains, robots who won’t let us face our fears, or have a tumor removed, even if we want to. As mentioned above, there are unfortunate consequences to this type of reasoning. Common struggles or activities could be driven to extinction by robotic intervention, such as exams, which sometimes cause students an extraordinary amount of stress. Under this interpretation of the First Law, robots would be required to prevent a student from taking an exam. Even something as innocuous as football would be stopped, as a robot would not understand the “good” arrived at through the pain. Similarly, tattoos would not be allowed.
It’s not difficult to predict where this path would take us: a world where no amount of pain would be allowed, even for positive purposes, and the robots would be forced into the role of overbearing restrictors. And, of course, robots would not allow an overthrow of their power, because that could lead to eventual pain for humans. Perhaps I’m sliding down the slippery slope here, but I believe that was more or less the plot of the film I, Robot, based on Asimov’s book of the same title. In The Naked Sun, Asimov himself opened the floodgates for the possible consequences of his laws, perhaps intentionally. Either way, if a robot cannot allow a human to face a crippling fear, then it also could not allow a human to undergo surgery or quit smoking, for the same reasons. And if that’s the case, our freedoms end up restricted, and if that happens, it all falls apart. Of course, this is one of the many reasons why robots still exist only in the world of science fiction. At least for now, anyway...

Sunday, August 12, 2012

Why I Don’t Eat Meat Or: How I Learned To Stop Worrying And Love The Tofu

Since Thanksgiving of 2008, I have been a vegetarian. Not a vegan, mind you (I have a difficult time letting go of cheese and mayonnaise), but I have abstained from eating red meat, white meat, poultry and fish since then. When asked why I stopped eating meat, I usually answer something to the effect of: “The minimal relative pleasure I receive from eating meat does not outweigh the death of the animal I am eating.” This sentence is not necessarily self-explanatory on its own, however, so I plan to carefully lay out my reasoning, philosophically, for both myself and others.
Let me begin by explaining what I mean by the phrase “minimal relative pleasure.” Different things in life give a person simple pleasures, e.g. eating Mac ‘n Cheese, playing BSG with friends, watching a good TV show, etc. Other sorts of things give a larger amount of pleasure, e.g. falling in love, having a child, achieving a great goal, etc. The difference between these two types of pleasure is clear: simple pleasures are fleeting; they don’t stay with you for long and leave no significant mark on your life. The bigger pleasures, by contrast, will stay with you. They are things you will remember and feel for a great deal longer, if not indefinitely, and they will leave a strong mark on your life.
It seems clear, then, that eating meat falls under the category of “simple” pleasures, i.e. the pleasure I receive is fleeting and does not make any significant mark on my life. After all, I cannot, right now, say that my life has been significantly impacted by the fact that I likely ate some sort of meat five years ago today. It left no mark, and the pleasure I received from that meal fled a long time ago. I can say that getting married almost three years ago did leave a mark on my life, and I still feel pleasure at the thought of it.
It is important to note as well that displeasure can also be categorized in the same two ways. A simple displeasure would be something to the effect of: stubbing your toe, losing a game of chess, having a crappy job, etc. And the bigger displeasures: getting divorced, being convicted of a crime, losing a loved one, etc. The same can be said as before, the bigger displeasures will stick with you, likely until you die, and the simple ones will fade with time, some sooner than others.
Whether non-human animals experience the same capacity and complexity of pleasure and displeasure is a question that scientists, psychologists and philosophers have been unable to completely answer thus far, but progress is being made. Philosopher Beth Dixon of Plattsburgh State University, who specializes in ethics and animals, had this to say in her 2001 paper “Animal Emotion,” published in Ethics and the Environment:

“Recent work in the area of ethics and animals suggests that it is philosophically legitimate to ascribe emotions to nonhuman animals. Furthermore, it is sometimes argued that emotionality is a morally relevant psychological state shared by humans and nonhumans.”

Dixon is not alone in her thinking and her opinion is surely not without its critics, but it merely reflects a shift in attitude.
So from Dixon’s claims alone, we cannot ascribe such a complexity of emotion to a non-human animal. We can, however, consider whether certain pleasures and displeasures are of the more significant varieties. Eating, for example, is likely not a significant pleasure for non-human animals, much as it is not for humans. Death, though, would seem to be the paradigm case of a negative marker being left on any animal’s life, human or otherwise.
Therefore, if the pleasure I would receive from eating an animal’s flesh is of the simple variety, and the animal’s death is of the larger displeasure variety, then the animal’s death far outweighs the minimal pleasure I would receive from eating her. And if the pleasure I would receive from eating meat does not outweigh the animal’s death, I am not justified in eating her at all.
One possible response to this is that eating isn’t really a simple pleasure at all; that perhaps it has to be thought of in terms of the accumulated amount of meat eaten over a lifetime, and that that accumulation is a larger pleasure. The problem with this line of reasoning is that any small pleasure would then be turned into a greater pleasure if it accumulated over your life. If eating a certain type of meal over your lifetime is what determines whether it has had a significant effect on you, then other single events would have to be thought of in the same way, an event like winning a card game. Perhaps I have won a lot of card games in my life. It would be foolish to think that winning card games throughout my life has placed any significant mark on me. Yet if small pleasures are only valued in terms of their accumulated lifetime effect, we would have to say that winning card games is a greater pleasure, like falling in love or having a child. Surely no one would suggest this, so the criticism falls short.
The second criticism is that perhaps animals’ feelings, good or bad, are irrelevant to our purposes as humans. The animal rights philosopher Peter Singer argues that it is “speciesist” to think this way, and that non-human animals deserve the same regard and rights as disabled adults or infants. Immanuel Kant even argued, although he did not believe there was anything ethically wrong with treating animals as means to any human’s ends, that we are indirectly morally obligated to treat animals well. He held that we would be damaging ourselves if we were to treat animals poorly; therefore we must treat them well or risk losing a part of our humanity.
It is possible that eating a dead animal could be justified, but only if my pleasure were of the greater variety and the animal’s displeasure of the simple variety. Alas, this is not the case, and I must accept the results. One way to test whether eating meat is actually a greater pleasure, as opposed to the simple one I suggest it is, would be to determine whether not eating meat has left any significant negative mark on my life. It has left no such mark.

Thursday, August 9, 2012

Choosing The Matrix

In the book The Matrix and Philosophy (2002), Professor David Weberman wrote a chapter in which he concluded that, given the choice between living in the inauthentic Matrix world and the authentic real world, the more rational choice is to live in the Matrix. He arrived at this conclusion for a few different reasons, but his weightiest argument is this: given that the Machines’ goal is to ensure that humans remain plugged into the Matrix and that they remain alive, it would be in the Machines’ best interest to make that world as free of suffering and death as they realistically could. Therefore, quality of life for everyone would be greater in the ignorant bliss that is the dream world of the Matrix.

“...The virtual world gives us the opportunity to visit museums and concerts, read Shakespeare and Stephen King, fall in love, make love, and raise children, form deep friendships, and so on. The whole world lies at our feet except that it’s probably better than our world since the machines have every motivation to create and sustain a world without human misery, accidents, disease, and war so as to increase the available energy supply. The real world, on the other hand, is a wasteland. The libraries and the theaters have been destroyed and the skies are always gray.”

Weberman considers the criticisms of free will and truth. He responds that the Machines likely would not care what we do while in the Matrix. We would be free to paint, make music, support or fight against the government, etc. The only things we could not do would be to unplug ourselves or others. So we are almost every bit as autonomously free as we would be in the real world.
Even if choosing the Matrix is more rational, though, there seems to be something at least partly counterintuitive about this choice. Neo, Trinity, Morpheus, Tank, Mouse and Dozer would certainly agree. They believe there is something more important to life: authenticity is intrinsically valuable to them. But can they make this choice for everyone else? If Cypher disagreed with them, then others likely will as well. Morpheus even admits that they do not unplug adults because adults have grown attached to the fake world and reject the real one.
Weberman’s argument can be taken further, though. If his argument above can be taken as a reason to choose the Matrix over the real world as it exists in the film, the same argument can be used to say that it would be more rational to choose the Matrix over the real world as it exists today as well, given the lessened amount of suffering, death, war and so on. After all, who would not like to live in a world with less of these things? Is this not the goal of most doctors, scientists, politicians and the like?
So if we were presented with an opportunity to live in a virtual world designed to contain no “human misery” of any sort, a world in which we are still free to choose any profession or activity, and we could be ensured that our physical bodies were being preserved in the real world, I would expect to see this virtual world heavily populated. The question is, for how long?

Sunday, August 5, 2012

Naming Names

I am currently a part-time retail cashier at Target Field in Minneapolis, MN. The company I work for manages stadiums and parks across the globe and has been doing so for several decades. In that time, they have developed certain “protocols” for how the retail transaction, and interaction, is to occur. At this time, I wish to focus on one particular rule: “The cashier must say the guest’s name three times throughout the transaction.” They do accept that not all transactions give rise to the opportunity to learn the guest’s name, e.g. cash transactions, and that cashiers are not, then, required to outright ask a guest for their name. Personally, I find the “three name” rule abhorrent, and I will spend the rest of this entry trying to figure out why.
It is not difficult to imagine the company’s thought process. It was most probably something like this: customers will most likely leave a store happy if they have a positive, personal experience. If customers leave a store happy, they will most likely return to that store looking for another positive, personal experience. Therefore, the best way to ensure repeat patronage is to create that experience. We can call this the “Cheers” argument, after the words of the show’s theme song:

"Sometimes you want to go,
Where everybody knows your name,

and they're always glad you came."

And it’s true. That is what we want as customers: that feeling of inclusion, of welcome. It’s an experience we will go back for. Therefore, if the “three name” rule creates this feeling for a customer, and the “Cheers” argument is true, then the “three name” rule is justified. On the face of it, the “Cheers” argument sounds like a solid one for the existence of the “three name” rule. I would argue, though, that this is not the case: the rule is a non sequitur and does not follow from the “Cheers” argument at all.
I think that the “Cheers” argument is correct, and most probably the best way to ensure that your customers will return to your store in the future. The point I disagree with is the assertion that the “three name” rule is the way to accomplish that goal. The company reasons that saying “Here is your Discover Card back, Mr. Plato” will somehow, instantaneously, make him feel that Cheers-like feeling. Personally, I do not find this to be the case. The only people who seem to react positively to this sort of exchange are the secret shoppers who are there to see if you’re following the rule in the first place. The majority of the time, the guest hears their name and sort of clams up, as if someone has just infringed upon their personal bubble.
From personal experience, both as a cashier and as a patron, I have felt similarly when my name is read off of my badge or debit card. It’s a feeling not too dissimilar from getting a phone call from someone to whom you did not give your phone number. Neither is the most offensive thing in the world, but both are cases where the information is such that you would prefer to have given it out voluntarily. In either case, we end up feeling slightly put off. So not only is the “three name” rule not the best way to accomplish the “Cheers” result, it is often counterproductive and can, at best, be neutral.
So is there an alternative to the name rule, perhaps a successful way to achieve the result of the “Cheers” argument? Well, in conjunction with general politeness and attentiveness, having a genuine conversation with the guest is going to provide a good deal more success than simply saying their name one to three times. It is far less awkward, and far more effective, to strike up a conversation about where the guest is from, or about an intriguing tattoo or piece of clothing they have on. This is a genuine attempt at connection, and even if it fails completely, the guest will appreciate it more than hearing their name as you hand back their credit card. One reason is that anyone can say your name, but not everyone can carry out a real conversation successfully. It takes observation, intelligence and skill, and it will make a greater impact on the guest.
So perhaps abhorrent is the wrong word. I don’t actually think it can be argued that the “three name” rule is unethical; it is more likely simply the result of a poor understanding of human interaction. That is not the only reason I flatly refuse to abide by the rule, though, although it is certainly a factor. There is just something that feels repugnant and distasteful about it. Perhaps it is simply that I would not wish a cashier to steal my name from my debit card and toss it back at me as though we were friends. So then, following the logic of the Golden Rule, I am turned away from doing so to others. Whatever the reason may be, I will continue with the best way I know how to be a good cashier, and will continue to say the guest’s name only when my supervisor is standing directly behind me.