Asimov's Ethics of Robot Behavior


Ethics Development

"State a moral case to a ploughman and a professor. The former will decide it as well, and often better than the latter, because he has not been led astray by artificial rules."  Thomas Jefferson 1787.

Artificial Intelligence

Before Asimov, the majority of "artificial intelligences" in fiction followed the Frankenstein pattern, one that Asimov found unbearably tedious: "Robots were created and destroyed their creator." To be sure, this was not an inviolable rule. In December 1938, Lester del Rey published "Helen O'Loy", the story of a robot so like a person that she falls in love and becomes her creator's ideal wife. The next month, Otto Binder published a short story, "I, Robot", featuring a sympathetic robot named Adam Link, a misunderstood creature motivated by love and honor. It was the first of a series of ten stories; the next year, "Adam Link's Vengeance" (1940) featured Adam thinking, "A robot must never kill a human, of his own free will."

On 7 May 1939, Asimov attended a meeting of the Queens Science Fiction Society, where he met Binder, whose story Asimov had admired. Three days later, Asimov began writing "my own story of a sympathetic and noble robot", his 14th story. Thirteen days later, he took "Robbie" to John W. Campbell, editor of Astounding Science-Fiction. Campbell rejected it, claiming that it bore too strong a resemblance to del Rey's "Helen O'Loy". Frederik Pohl, editor of Astonishing Stories magazine, published "Robbie" in that periodical the following year.

Asimov attributed the Laws to John W. Campbell, from a conversation that took place on 23 December 1940. Campbell, however, claimed that Asimov had the Laws already in his mind and that they simply needed to be stated explicitly. Several years later, Asimov's friend Randall Garrett attributed the Laws to a symbiotic partnership between the two men, a suggestion that Asimov adopted enthusiastically. According to his autobiographical writings, Asimov included the First Law's "inaction" clause because of Arthur Hugh Clough's poem "The Latest Decalogue", which includes the satirical lines "Thou shalt not kill, but needst not strive / Officiously to keep alive".

Timeline of Robot Behavior Laws in Fiction

(A sketch of how this kind of precedence hierarchy can be modelled in code follows the timeline.)

Isaac Asimov
"Minus One Law of Robotics"
"A robot may not harm sentience or, through inaction, allow sentience to come to harm.

Zeroth Law
"A robot must not merely act in the interests of individual humans, but of all humanity."
"A robot may not harm a human being, unless he finds a way to prove that in the final analysis, the harm done would benefit humanity in general."
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Runaround (1942)
First Law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
Modified First Law, without the inaction clause ("Little Lost Robot"): "A robot may not harm a human being."
Variant extended beyond humans: "[A robot] may not harm life or, through inaction, allow life to come to harm."
Second Law: "A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law."
Third Law: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
Variant without subordination to the other Laws: "A robot must protect its own existence."

Otto Binder, "Adam Link's Vengeance"
"A robot must never kill a human, of his own free will."

Roger MacBride Allen
The First Law is modified to remove the "inaction" clause (the same modification made in "Little Lost Robot").
The Second Law is modified to require cooperation instead of obedience.
The Third Law is modified so it is no longer superseded by the Second (i.e., a "New Law" robot cannot be ordered to destroy itself).

Lyuben Dilov, "Icarus's Way"
"Fourth Law of Robotics"
"A robot must establish its identity as a robot in all cases."

Harry Harrison, "The Fourth Law of Robotics"
"A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law."

Nikola Kesarovski, "The Fifth Law of Robotics"
"A robot must know it is a robot."

Flaws & Limitations

Asimov established that the First Law was incomplete: a robot was fully capable of harming a human being as long as it did not know that its actions would result in harm. The example used: one robot adds poison to a glass of milk, having been told that the milk will be disposed of later; then a second robot serves a human the milk, unaware that it is poisoned.
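
To make the loophole concrete, here is a toy sketch in the same illustrative style as above (all names and data structures are assumptions invented here): a First Law check can only consult the robot's own beliefs, so two robots with complementary gaps in knowledge can jointly cause a harm that neither would cause alone.

```python
def first_law_permits(knowledge: dict, action: str) -> bool:
    """Permit the action unless this robot KNOWS it would harm a human."""
    return not knowledge.get(f"{action}_harms_human", False)

# Robot A is told the milk will be disposed of, so in A's model of the
# world, poisoning it harms no one.
robot_a_knowledge = {"poison_milk_harms_human": False}
assert first_law_permits(robot_a_knowledge, "poison_milk")

# Robot B never learns the milk is poisoned; serving it looks harmless.
robot_b_knowledge: dict = {}
assert first_law_permits(robot_b_knowledge, "serve_milk")

# Both checks pass, yet a human is poisoned: the Law constrains each
# robot's beliefs about harm, not the actual outcome in the world.
print("Both actions permitted; the harm occurs anyway.")
```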

Conflicts between the laws have been popularised in the film I, Robot, in which a robot does harm a human, and in the Terminator series. These stories play on robots' "allegiances" to their creators; yet, as the Indian legend of Karna illustrates, it is entirely possible for an intelligent, upstanding person of good character to end up on the wrong side of a conflict simply by being asked.

Autonomous Robots

A formalization of ethical behavior in autonomous robots, from the Georgia Tech robot lab: http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf