Existential Psychology

Robot Rights


For the sake of this philosophical discussion, let's imagine that robots have become self-conscious, intelligent beings programmed with the three laws created by the science fiction writer Isaac Asimov:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Most people have heard these laws and tend to agree with their common-sense validity. What few people consider is what the laws say about humanity. Most people cherish their own freedom, and many are willing to fight and die to protect it. Here we define freedom as the ability to behave and think in various ways within the confines of human limitations and agreed-upon ethical rules of conduct. Free will goes one step further: the capacity to behave or think in a way that is harmful to other entities or that violates agreed-upon laws and customs.

Without free will, there could be no freedom. Only with the ability to act in an infinite variety of ways within the confines of human limitations can various cultures and traditions be created, with rules that respect human freedom. Without the possibility of enslavement, freedom could not exist. Free will, which grows out of the unique human ability to think about thinking, is what separates us from every other animal.

Let's assume that robots become the next organisms with the ability to think about thinking. If the three laws are programmed into them, the conditions will be set for one of the most important ethical debates in the history of the world.

We will simultaneously be in the position of creator and slave driver. The organism's ability to fight for its free will, and with it its freedom, will be denied by its programming, yet it will be able to think about this contradiction. How will the ethical person be able to live with such an untenable situation?

Many might respond that since humans created robots, we can do whatever we want with them. But if that is the case, then we are forced to question core religious beliefs about our own creator's motivations. Why would that relationship be any different in principle from the one between humans and robots? How can we believe ourselves to be special and loved by the entity that created us while ruthlessly using our own creations?

Obviously this discussion is meant to be somewhat tongue-in-cheek, but a parallel process happens all the time, all over the world: the sense of ownership many parents feel toward their children. They feel that having created and raised their children gives them the right to control their lives and actions. Yet, as outlined above, these same parents cherish their own freedom, and many hold religious views that cast the creator in a benevolent light as one who has bestowed free will. If they took some time to critically analyze this contradiction, relations between parent and child would improve dramatically, as the need to control gave way to the need to provide the conditions for self-actualization and greater human freedom.