Sunday, November 28, 2010

Reading Response 7: Asimov's Four Laws of Robotics

We cannot imagine what the world would look like if robots could think independently like human brains, command people to serve them, and dominate the world in place of human beings. It would certainly be a catastrophe, one caused by our own hands and difficult to reverse. Since we do not want to face such a situation, we must prevent it at the very beginning of the robot design process. Asimov's Four Laws of Robotics meet this expectation.

The Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm. Although robots are not yet developed enough to devastate the human world, designers should not ignore this potential danger, which has been vividly described in today's science fiction. Robots must be programmed to protect all of humanity at all times and in all places, never to harm people, let alone to entertain the idea of governing them.

The First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics. This principle puts "safety" first. Today, almost all products are designed with multiple safeguards to minimize threats to people's safety; in other words, effective safety measures are one of the criteria for judging whether a product is acceptable. A lot of effort now goes into safety, which is very different from what I experienced several years ago in an unforgettable accident that nearly cost me my life. One day, at the age of 7, I rushed into an elevator just as the door was about to close, and I got caught between the two closing doors. I struggled with all my strength to free myself, but my efforts seemed futile and the doors never stopped pressing on me. Nobody else was around, and the growing pressure left me in desperate pain and barely able to breathe. Thank God, a "hero" finally came, pressed the open button, and saved my life. Although 15 years have passed, every time I recall that day I break out in a cold sweat! Thanks to the focus on safety, I dare to step into elevators again.

The Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First Law. At present, robots have no choice but to obey people's commands. The important implication of this law is that, once robots are smart enough, some decisions can be left to the machines on the condition that people are protected, which means a robot can refuse to perform a task that is unsafe.

The last law in the series is that a robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law. A robot certainly costs an enormous amount of money, time, intelligence, and perspiration, so a robot that protects itself also protects a human investment. If we hold to the four laws in the design process, robots will have a brighter future and serve us in better ways.
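To show how the four laws fit together as one decision, here is a minimal sketch of the priority check, written by me in Python as an illustration rather than taken from Asimov or any real robot. The functions harms_humanity, harms_a_human, and endangers_self are hypothetical stubs standing in for whatever sensing and prediction a real system would need.

    # Hypothetical stub: Zeroth Law check - would this action harm humanity as a whole?
    def harms_humanity(action):
        return False

    # Hypothetical stub: First Law check - would this action injure a human being?
    def harms_a_human(action):
        return False

    # Hypothetical stub: Third Law check - would this action destroy the robot itself?
    def endangers_self(action):
        return False

    def may_execute(action, ordered_by_human):
        """Return True only if the action is permitted, checking the laws in priority order."""
        if harms_humanity(action):         # Zeroth Law outranks everything else
            return False
        if harms_a_human(action):          # First Law: refuse even a direct order
            return False
        if ordered_by_human:               # Second Law: obey orders that pass the checks above
            return True
        return not endangers_self(action)  # Third Law: self-preservation comes last

    print(may_execute("open the elevator door", ordered_by_human=True))  # prints True

The point of the sketch is simply that each law is only consulted after every higher law has been satisfied, which is exactly the ordering the four laws describe.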
