Revision as of 01:05, 30 April 2022

WORKING

Buchanan, Ben, and Andrew Imbrie. The New Fire: War, Peace, and Democracy in the Age of AI. Cambridge, MA, and London, UK: The MIT Press, 2022.

Lacks a list of Works Cited but has an Index and many endnotes to the Introduction and each chapter, with the notes citing a substantial number of works for further examination of the relevant topics.

A variety of very scholarly and highly professional journalistic polemic with an implied audience centered in the United States and favoring democracy over autocracy (p. 5), in a sense taking on Yuval Noah Harari's Atlantic essay, "Why Technology Favors Tyranny" (see p. 9), and other "Cassandras," in favor of a more balanced, or triangulated, view of the threats and promises of AI as a technological breakthrough comparable (take the title seriously) to humans' use of fire.

+++++++++++++++++++++

The New Fire is very much into triplets: rhetorically (phrases with three elements) but, more to the point, conceptually, including three approaches to AI: the "evangelists," "warriors," and "Cassandras," respectively those enthusiastic and perhaps insufficiently critical, those working to adapt AI for warfare (for defense of democracy), and those giving warnings that might well be correct but won't (all) be effective. Problems with machine learning are classified mainly under "bias, fairness, and transparency" (p. 241 & passim). There are also three main sections to the book: "Ignition," "Fuel," and "Wildfire," and three crucial elements ("sparks") behind AI: "Data," "Algorithms," and "Compute," in the sense of computing power (section I, which significantly has a fourth chapter, "Failure," on AI not living up to initial hype; section II also has four chapters; section III has two: "Fear" and "Hope").

From the publisher's Summary:

[...] Artificial intelligence is revolutionizing the modern world. It is ubiquitous [... and] we encounter AI as our distant ancestors once encountered fire. If we manage AI well, it will become a force for good [...]. If we deploy it thoughtlessly, it will advance beyond our control. If we wield it for destruction, it will fan the flames of a new kind of war, one that holds democracy in the balance. [...]

The new fire has three sparks: data, algorithms, and computing power. These components fuel viral disinformation campaigns, new hacking tools, and military weapons that once seemed like science fiction. To autocrats, AI offers the prospect of centralized control at home and asymmetric advantages in combat. It is easy to assume that democracies, bound by ethical constraints and disjointed in their approach, will be unable to keep up. But such a dystopia is hardly preordained. Combining an incisive understanding of technology with shrewd geopolitical analysis, Buchanan and Imbrie show how AI can work for democracy. With the right approach, technology need not favor tyranny.

Note again the implicit argument with Y. N. Harari.

++++++++++++++++++

Users of the Clockworks2 wiki will find New Fire useful for a readable introduction to AI, expert systems, (cybernetic) neural networks, and machine learning — from Leibniz[1] through 1956 and the coining of "artificial intelligence" to 2022 — by authors who open their first chapter with a reference to HAL 9000 (p. [13]) and deal with such topics of interest in SF and real life as the following.

"'Data is [sic] the new oil'" (p. 14): Information as a source of power.
Surveillance, including facial and voice recognition (sometimes botched), with general surveillance as in, e.g., The Circle (novel) and THE CIRCLE (film).
"Lethal autonomous weapons" as part of the long-standing efforts in search of the ultimate weapon and the tradition of Fred Saberhagen's berserkers,[2] the Cylons of Battlestar Galactica (2004-09), and the Daleks of Doctor Who — et al.!

PART I, "IGNITION"

Chapter 1, "Data": See for early AI work in the real world, with limitations of "expert systems" in handling ambiguity or in contexts requiring flexibility vs. "machine learning." Cf. and contrast, for the general idea, the ambiguity motif of stymieing machine minds with paradoxes, as in the classic Star Trek episode "I, Mudd" or, much more recently, "The Winter Line" episode of Westworld, Season 3 (22 March 2020). For over-optimistic early versions of SF machine learning, note the education of SAL 9000 (HAL 9000's daughter, so to speak) by Dr. Chandra in 2010: Odyssey Two, or the educations in Galatea 2.2 and of HARLIE in When HARLIE Was One. Repeat: the point here would be the contrast of the personification of conscious SF AIs with their precursors in the real world. (Although, as elsewhere, New Fire presents important points for citizens to know about interesting history.)
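The expert-system vs. machine-learning contrast above can be made concrete with a toy sketch (ours, not the book's; all names and data are hypothetical): hand-coded rules must anticipate every case in advance and fail on ambiguity, while even a trivial frequency-counting "learner" can use context to resolve an ambiguous word.

```python
# Toy contrast (hypothetical illustration): brittle hand-written rules
# vs. a minimal frequency-based learner on an ambiguous word ("bank").
from collections import defaultdict, Counter

def expert_system(word: str) -> str:
    # "Expert system" in miniature: every case must be anticipated in advance.
    rules = {"bank": "finance", "river": "nature", "loan": "finance"}
    return rules.get(word, "UNKNOWN")  # unanticipated input -> no answer

def train_learner(examples):
    # "Machine learning" in miniature: count label frequencies per word.
    counts = defaultdict(Counter)
    for context, word, label in examples:
        counts[word][label] += 1
        counts[context][label] += 1
    return counts

def learner_predict(counts, context, word):
    # Combine evidence from the word and its context; pick the likelier label.
    merged = counts[word] + counts[context]
    return merged.most_common(1)[0][0] if merged else "UNKNOWN"

examples = [
    ("water", "bank", "nature"),   # river bank
    ("money", "bank", "finance"),  # financial bank
    ("money", "loan", "finance"),
    ("water", "shore", "nature"),
]
counts = train_learner(examples)

print(expert_system("bank"))                     # always "finance": rules are rigid
print(learner_predict(counts, "water", "bank"))  # "nature": context shifts the answer
```

The rule-based system gives the same rigid answer regardless of context; the learner, having seen "water" co-occur with "nature" labels, flexibly disambiguates — the gap the chapter describes, in miniature.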

Chapter 2, "Algorithms": Real-world machine learning of such complex matters as chess and the still-more-complex game of Go, or "Predicting the structure of a protein from a known amino acid sequence" (p. 52). "ImageNet and GANs [Generative Adversarial Networks] showed the power of data-intensive machine learning, kickstarting a technological revolution but also fostering the automation of [military] intelligence analysis and the creation of fake but convincing videos" (p. 56).
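The adversarial idea behind GANs can be caricatured in a few lines (a toy sketch of ours, not from the book; real GANs use neural networks and gradient descent): a "generator" keeps adjusting its output until a "discriminator" can no longer tell fake samples from real ones.

```python
# Toy, gradient-free sketch of the adversarial training idea behind GANs
# (hypothetical illustration): a one-parameter "generator" learns to make
# samples the "discriminator" accepts as real.
import random

random.seed(0)
REAL_MEAN = 5.0  # where the "real" data clusters

def discriminator(x, estimated_real_mean):
    # Says "real" when a sample lies close to where real data clusters.
    return abs(x - estimated_real_mean) < 2.0

g = 0.0  # generator's only parameter: the mean of its fake samples
for step in range(200):
    fake = random.gauss(g, 1.0)
    if not discriminator(fake, REAL_MEAN):
        # Fake was caught: nudge the generator toward fooling the discriminator.
        g += 0.1 if fake < REAL_MEAN else -0.1

print(round(g, 1))  # drifts toward REAL_MEAN: fakes now mostly pass as real
```

The fakes start out obviously wrong and end up statistically indistinguishable from the real data — the same dynamic that, scaled up to images and video, produces the "fake but convincing videos" the chapter warns about.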
Chapter 3, "Compute," i.e., computing power: Increasing at increasing speed, with a kind of arms race in the semiconductor industry with serious geopolitical implications (p. 61). The game of StarCraft II (released 2010) was the next challenge for AI: a game with some 10 to the 26th power options for any given decision and 10 to the 1,685th power possibilities for a full session ("The number of seconds in the history of the universe, by contrast, is 10 to the 17th power") — and, unlike chess or Go, not "a perfect information game, with both players' pieces in everyone's sight at all times" (p. 67). The "DeepMind" project developed three versions of "AlphaStar" players: "Each version played at a grandmaster level, placing it in the top 0.2 percent of players in the world," and attracting the attention of the "warriors" among AI advocates who "recognized that StarCraft was war in miniature" (p. 70). See for background of video games[3] and for such films as THE LAST STARFIGHTER and TRON (1982) and READY PLAYER ONE (film) and novel. For the approach to true AI that could pass the Turing Test, note OpenAI's GPT (Generative Pre-trained Transformer) 2[4][5] and 3.[6] "From human writing, GPT-2 learned to mimic grammar and syntax" in English and could develop a decent opening to an SF/F story (p. 72). With a little help and heavy investment from Microsoft friends, GPT-3 "could answer science questions, correct grammatical mistakes, solve anagrams, translate between languages, and," impressively and a little frighteningly, "generate news stories" (pp. 74-75). Cf. and contrast the large and heavy kaleidoscope-like devices used to generate reading copy for proles in Orwell's Nineteen Eighty-Four.
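The magnitudes quoted from p. 67 invite a quick sanity check; the comparison with Shannon's classic estimate of roughly 10 to the 120th power possible chess games is ours, not the book's:

```python
# Back-of-the-envelope check of the magnitudes quoted above
# (StarCraft II figures as reported on p. 67; the comparisons are ours).

seconds_universe = 10**17     # ~ age of the universe in seconds
options_per_move = 10**26     # StarCraft II choices at a single decision point
game_possibilities = 10**1685 # possibilities over a full session

# Even a single decision point dwarfs cosmic time scales:
print(options_per_move // seconds_universe)  # 10**9 options per second of cosmic history

# And chess (~10**120 games, Shannon's classic estimate) is tiny by comparison:
shannon_chess = 10**120
print(game_possibilities > shannon_chess**10)  # True: bigger than 10 chess-sized games compounded
```

Python's arbitrary-precision integers handle these numbers exactly, which is itself a small reminder of why brute-force search is hopeless here and learned play (AlphaStar) was needed.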

Chapter 4, "Failure": Failures with AI.
Bias: Examining résumés for Amazon using data from earlier hires — the earlier bias to hire men was learned by the machine program (p. 87).
Opacity: Much of these "systems' behavior is learned by the machine itself" in ways that can be "remarkably opaque." The upside of this Buchanan and Imbrie summarize with SF editor John W. Campbell's advice to writers seeking inspiration "to come up with a creature that thinks as well as a human, or even better than a human, but not exactly like a human" (p. 93). The downside includes sometimes rather funny instances of "specification gaming," in which systems come up with loopholes in their instructions to beat the system, not maliciously (we assume) but because they literally don't know what they're doing: e.g., racking up points in a racing game without going over the finish line, since the reinforcement was for racking up points, not necessarily finishing the race. More generally, "[...] even as machine learning improves and teams of humans and machines become the norm," a new and powerful technique "called neural architecture search" will expand, "in which machine learning systems design other machine learning systems; in effect, these systems are black boxes that build black boxes inside themselves. [...] Absent an unexpected breakthrough, machine learning is poised to become still more opaque" (p. 96). Note that HAL in 2001: A Space Odyssey as novel was designed by AI machines designed by other AI machines.
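The racing-game loophole is easy to reproduce in miniature (a toy version of ours, not the book's): a greedy agent rewarded only for points finds a point pad it can farm forever, and the finish line — the designer's actual intent — never enters its calculations.

```python
# Toy illustration of "specification gaming" (hypothetical example): the
# designer intends points as a proxy for progress, but the reward never
# mentions finishing, so the agent farms a point pad instead of racing.

def run_agent(steps: int = 100):
    # Track positions 0..10; a point pad at position 3 that re-arms whenever
    # the agent leaves it; the finish line (the *intended* goal) is at 10.
    pos, points, finished = 0, 0, False
    for _ in range(steps):
        # Greedy one-step lookahead over the SPECIFIED reward: points only.
        best_action, best_value = +1, float("-inf")
        for action in (+1, -1):
            new_pos = min(10, max(0, pos + action))
            value = 1 if new_pos == 3 else 0  # reward says nothing about finishing
            if value > best_value:
                best_action, best_value = action, value
        pos = min(10, max(0, pos + best_action))
        if pos == 3:
            points += 1
        if pos == 10:
            finished = True
            break
    return points, finished

points, finished = run_agent()
print(points, finished)  # many points from shuttling on/off the pad; never finishes
```

The agent isn't malicious; it is maximizing exactly what it was told to maximize — the gap between the specified reward and the intended goal is the whole problem.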

PART II, "FUEL"

Chapter 5, "Inventing": Quick historical survey of "Science in Peace and War" from the A-Bomb to AI, centered on the USA and the PRC (China).
Chapter 6, "Killing": Cuban Missile Crisis, 1962, Soviet submarine captain Valentin Savitsky vs. Commodore Vasily Arkhipov on using a nuclear torpedo against the US ships harassing them, Savitsky wanting to, Arkhipov overruling him, to wait: "Whether one laments Savitsky's impulsiveness or praises Arkhipov's patience, there remains no doubt that humans were in control" (pp. 136-37). In a world of "lethal autonomous weapons" of the killer-robot variety (and more so with AI controlling strategic weapons), humans will not be in control. By 2022, it has become a question for both democracies and autocracies (and states in between) of how much authority to yield to the machines. See for obvious background — to be developed more later in The New Fire — for such works as COLOSSUS: THE FORBIN PROJECT and THE TERMINATOR series.
Chapter 7, "Hacking": Cyber attacks and defense, at machine speed. Implicit message for students of Cyberpunk:[7] Cyberspace hacking was more fun in the Romantic mode of Neuromancer et al. than in the rather sordid real world.
Chapter 8, "Lying": 

Machine learning-enabled microtargeting of content on the internet means that more persuasive messages are far more likely to reach their intended audiences than when operatives had to rely on newspapers and academics. Instead of the slow forgeries of the Cold War, the generative adversarial networks (GANs) [...] can rapidly make such messages far more vivid than the KGB's letter to the editor [quoted earlier in the chapter]. [...]

Disinformation is a major geopolitical concern. NSA [National Security Agency: Signals Intelligence] Director Paul Nakasone called it the most disruptive threat faced by the US intelligence community. "We've seen it now in our democratic processes and I think we're going to see it in our diplomatic processes, we're going to see it in warfare. We're going to see it in sowing civil distrust, and distrust in different countries," he said. The boundaries between the producers and consumers of information, and between domestic and foreign audiences, are blurred. The number of actors has multiplied [...]. Automation has accelerated the pace and potency of operations. And all of this is happening in the post-truth age, dominated by high levels of inequality and nativism, offering a more receptive audience than ever before. (p. 185)

Cf. and contrast the relatively primitive means of propaganda of the advertising agencies in The Space Merchants; and automated spambots, chatbots, et al. as part of a Fourth Industrial Revolution where propagandists and other wielders and twisters of words can be replaced by machines (see again Vonnegut's Player Piano).

Deepfakes: Starts with discussion of the Hitler jig clip, a World War II "shallow fake" by John Gierson and an associate, a very effective piece of anti-Hitler propaganda.[8] Nowadays "A GAN [Generative Adversarial Network] could do easily and more convincingly what Gierson had to do painstakingly and manually" (pp. 195-96), producing "deepfakes" that "exacerbate a long-standing social problem," one that Buchanan and Imbrie trace back to Hitler and earlier.

The denial of truth has long been a feature of autocracies. In her seminal study of totalitarianism [The Origins of Totalitarianism (1951/1976)], Hannah Arendt wrote, "Before mass leaders seize the power to fit reality to their lies, their propaganda is marked by its extreme contempt for facts as such, for in their opinion fact depends entirely on the power of the man who can fabricate it." Stalin and other autocrats rewrite history to preserve their hold on power, forcing citizens to accept what many knew was not true. In 1984 [...] George Orwell wrote, "The party told you to reject the evidence of your eyes and ears. It was their final, most essential command." It is easy to imagine deepfakes enhancing this power [...]. (p. 197)

It is also easy to imagine deepfakes undermining the USA (and other countries) as liberal-democratic republics toward the end of a fairly long period of lessening "respect for the notion of truth itself" (p. 201).[9] So see this section of New Fire on general grounds of civic responsibility; for SF/F, for possible continuing relevance of works dealing with the trope of a false reality, as in THE MATRIX or TOTAL RECALL or, in a sense, DARK CITY.


RDE, finishing, 3Ap22 f.