• leftzero@lemmynsfw.com · 4 months ago

      Asimov didn’t design the three laws to make robots safe.

He designed them so robots would break in ways that made Powell and Donovan’s lives miserable, to particularly hilarious effect (for the reader, not the victims).

      (They weren’t even designed for actual safety in-world; they were designed for the appearance of safety, to get people to buy robots despite the Frankenstein complex.)

  • FaceDeer@fedia.io · 4 months ago

        I wish more people realized that science fiction authors aren’t even trying to make good predictions about the future, even if that were something they were good at. They’re trying to tell stories people will enjoy reading, and that will therefore sell well. Stories where nothing goes particularly wrong tend not to have a compelling plot, so authors write about technology going awry; they insert scary stuff because people find reading about scary stuff fun.

        There might actually be nothing bad about the Torment Nexus, and the classic sci-fi novel “Don’t Create The Torment Nexus” might have been nonsense. We shouldn’t be making policy decisions based on it.

    • Voroxpete@sh.itjust.works · 4 months ago

      Asimov’s stories were mostly about how it would be a terrible idea to put kill switches on AI, because he assumed that perfectly rational machines would be better, more moral decision-makers than human beings.