The N Laws

via Mark, this gem from Slashdot, about Asimov’s Three Laws of Robotics:

Have any of them actually read I, Robot? I swear to god, am I in some tiny minority in believing that this book was not about promulgating the infallible virtue of these three laws, but was instead a series of parables about the failings that result from codifying morality into inflexible dogma?

The beauty of the Three Laws was that every story he ever wrote about them turned on an apparent violation of them. Of course, the apparent violation was always revealed to be false, and the Three Laws remained supreme, never actually violated (unlike in the regrettable I, Robot movie). But it was always astonishing how Asimov could start with such a restricted premise and yet extract such fascinating complexity from it. That was part of his genius.

Of course, when we talk about the Three Laws, we really mean the First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. But what exactly constitutes harm? And what are the limits of inaction? It was by considering these issues that R. Daneel and R. Giskard ultimately formulated the Zeroth Law: replacing “human” with “humanity.” In a sense, the dominant political philosophy of both Left and Right is really just a variant of the Zeroth Law, complete with the same struggle over “harm” and “inaction.” And therein lies, perhaps, most of the dysfunction.

7 thoughts on “The N Laws”

  1. They keep showing up all over the place. In “World of Narue” they’re distilled down into six words:

    1. Protect life
    2. Obey orders
    3. Protect yourself

    Which loses the precedence, and some nuance, but still represents the original pretty well. The big change is “life” instead of “humans” in the first law. Among other things, that means “don’t kill” but DOES NOT mean “don’t harm”. Which becomes clear when Rin actively tries to harm Kazuto and Narue by trying to break up their relationship. (She later apologizes to Narue for doing so, and points out that she was ordered to do it and could not disobey.)
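
    As a concrete aside on the precedence point: here is a minimal sketch of how a fixed priority order could be encoded so that a higher law always outweighs the lower ones. The Action fields and the candidate actions are invented for illustration; nothing like this appears in the series or in Asimov.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        endangers_life: bool    # violates "protect life"
        disobeys_order: bool    # violates "obey orders"
        endangers_self: bool    # violates "protect yourself"

    def violation_key(a):
        # Tuple comparison is lexicographic, so a single "protect life"
        # violation outweighs any combination of lower-law violations.
        return (a.endangers_life, a.disobeys_order, a.endangers_self)

    candidates = [
        Action("obey and self-destruct", False, False, True),
        Action("refuse the order", False, True, False),
        Action("obey and endanger a bystander", True, False, False),
    ]

    # Picks "obey and self-destruct": self-sacrifice beats disobedience,
    # and both beat endangering a life.
    print(min(candidates, key=violation_key).name)
    ```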

  2. The question of what constitutes harm would be difficult to answer. How immediate would the harm have to be to warrant interference? Smoking a cigarette? Global warming?

  3. Steven, that is pretty cool. I note that the precedence is retained somewhat if they are presented in that order.

    Quorlox – that’s precisely the point. The Zeroth Law came about because Daneel and Giskard meditated upon that very word. If you think about it, only a sentient machine can really follow the Laws, because it has to interpret harm. A “dumb” sentient machine can interpret it in the obvious way: don’t shoot people, don’t let a safe fall on them, etc. A smart sentience, though, can start to see more subtle shades of harm, and proceed from there…

    And of course a non-sentient machine would need to have “harm” completely defined, and if you forgot to hard-code the fact that humans die when safes fall on them or when they get shot, neither event would trigger the First Law.
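
    To make that concrete, here is a toy sketch of a non-sentient machine whose notion of harm is nothing more than an enumerated list; the event strings and function name are hypothetical. Any scenario the programmer forgot simply never registers:

    ```python
    # "Harm" as a hard-coded enumeration rather than an interpretation.
    HARMFUL_EVENTS = {
        "human is shot",
        "safe falls on human",
        # "human is poisoned" was never added by the programmer...
    }

    def first_law_triggered(event):
        # A non-sentient check is literal membership: no interpretation,
        # no generalization, only what was explicitly listed.
        return event in HARMFUL_EVENTS

    print(first_law_triggered("safe falls on human"))  # True
    print(first_law_triggered("human is poisoned"))    # False: the forgotten case goes unnoticed
    ```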

  4. But don’t humans have a problem defining “harm” in absolute terms? I live in Boston, and they’ve banned smoking in bars (maybe everywhere indoors) because they consider it “harmful” to the public at large. What if a robot (sentient or not) believed smoking was always harmful? What steps would it take to stop people from smoking? If you asked a dozen different people to define what is and is not harmful, you’d probably end up with a dozen different answers.

    For a large number of robots, I’d be worried about who would provide the basic list of good and bad. Imagine how different robots programmed by Pat Robertson and Richard Dawkins would be. 🙂

  5. Humans have repeatedly demonstrated that they have difficulty evaluating, even with complete information in isolated scenarios, which of a given set of actions would produce the least harm, both individually and socially. I would not trust robots to be any better, since even a robot will never have complete information. In fact, we should probably avoid full-fledged AI if at all possible and keep robots limited in scope.

  6. Good point. In many ways the entire issue of the Three Laws is a metaphor for humanity itself.

    However, one could argue that a non-robotic AI (a distributed intelligence housed, for example, at Google) that does have access to all knowledge that can be known might approach “super-rationality” in some sense. But even then I am skeptical, because of the chaos inherent in any deterministic system. And the assumption that perfect knowledge = ideal action implies a deterministic universe at a fundamental level, which I instinctively recoil from.

  7. Not to mention that the “ideal action” might not be morally correct. You never know when genocide of the human race might be deemed most appropriate. 😀

    I took some chaos theory classes in undergrad and it is cool stuff.
