Taming the Monsters of Tomorrow

DRAFT
 
'''Kupferschmidt, Kai. "Taming the Monsters of Tomorrow."''' In the 2018 ''Frankenstein'' centenary coverage in ''Science'' vol. 359, issue 6372 (12 January 2018): specifically 152-55. <www.sciencemag.org>.
 
  
Looks at real-world books and projects trying to anticipate existential threats to humanity and human civilization, whether extinction or the reduction of our material culture to, say, medieval levels. Briefly discusses works dealing with various threats, including biological ones, but stresses the potential dangers of AI, as handled in such works as Nick Bostrom's ''Superintelligence: Paths, Dangers, Strategies'' (Oxford, UK: Oxford UP, 2014).[http://tinyurl.com/y7cjel59][http://tinyurl.com/ycd9fowe] For Bostrom and others, if a supercomputer acquired superintelligence and "a will of its own, it might turn malevolent and actively seek to destroy humans, like HAL, the computer that goes rogue aboard a spaceship in Stanley Kubrick's film ''[[2001: A SPACE ODYSSEY (film)|2001: A Space Odyssey]]''" — although a more likely danger is AI error (155). The threat of machine takeover by AI is developed in the ''[[Colossus]]'' novels and the film [[COLOSSUS: THE FORBIN PROJECT]], and the destruction (or attempted destruction) of the human species in such works as the [[THE TERMINATOR|TERMINATOR]] movies and Harlan Ellison's horrific short story "[[I Have No Mouth, and I Must Scream]]." Max Tegmark holds that "The real problem with AI is not malice, it's incompetence" — but this theme is developed much less (if at all?) in apocalyptic SF.
  
  
 
[[CATEGORY: Background]]  
 
 
RDE, Initial Compiler 08Feb18
 
