The New Fire: War, Peace, and Democracy in the Age of AI

From Clockworks2

Buchanan, Ben, and Andrew Imbrie. The New Fire: War, Peace, and Democracy in the Age of AI. Cambridge, MA, and London, UK: The MIT Press, 2022.

Lacks a list of Works Cited but has an Index and many endnotes to the Introduction and each chapter, with the notes citing a substantial number of works for further examination of the relevant topics.

A variety of very scholarly and highly professional journalistic polemic with an implied audience centered in the United States and favoring democracy over autocracy (p. 5), in a sense taking on Yuval Noah Harari's Atlantic essay, "Why Technology Favors Tyranny" (see p. 9), and other "Cassandras" in favor of a more balanced, or triangulated, view of the threats and promises of AI as a technological breakthrough comparable (take the title seriously) to humans' use of fire. For the promise and threat of AI and bioTech, see also The Coming Wave at internal link.


The New Fire is very much into triplets: rhetorically (phrases with three elements) but more to the point conceptually, including three approaches to AI: the "evangelists," "warriors," and "Cassandras" — respectively, those enthusiastic and perhaps insufficiently critical, those working to adapt AI for warfare (for defense of democracy), and those giving warnings that might well be correct but won't (all) be effective. Problems with machine learning are classified mainly under "bias, fairness, and transparency" (p. 241 & passim). There are also three main sections to the book — "Ignition," "Fuel," and "Wildfire" — and three crucial elements ("sparks") behind AI: "Data," "Algorithms," and "Compute," in the sense of computing power (section I, which significantly has a fourth chapter, "Failure," on AI not living up to initial hype; section II also has four chapters; section III has two chapters: "Fear" and "Hope").

From the publisher's Summary:

[...] Artificial intelligence is revolutionizing the modern world. It is ubiquitous [... and] we encounter AI as our distant ancestors once encountered fire. If we manage AI well, it will become a force for good [...]. If we deploy it thoughtlessly, it will advance beyond our control. If we wield it for destruction, it will fan the flames of a new kind of war, one that holds democracy in the balance. [...]

The new fire has three sparks: data, algorithms, and computing power. These components fuel viral disinformation campaigns, new hacking tools, and military weapons that once seemed like science fiction. To autocrats, AI offers the prospect of centralized control at home and asymmetric advantages in combat. It is easy to assume that democracies, bound by ethical constraints and disjointed in their approach, will be unable to keep up. But such a dystopia is hardly preordained. Combining an incisive understanding of technology with shrewd geopolitical analysis, Buchanan and Imbrie show how AI can work for democracy. With the right approach, technology need not favor tyranny.

Note again the implicit argument with Y. N. Harari.


Users of the Clockworks2 wiki will find New Fire useful for a readable introduction to AI, expert systems, (cybernetic) neural networks, and machine learning — from Leibniz[1] through 1956 and the coining of "artificial intelligence" to 2022 — by authors who open their first chapter with a reference to HAL 9000 (p. [13]) and deal with such topics of interest in SF and real life as the following.

"'Data is [sic] the new oil'" (p. 14): Information as a source of power.
Surveillance, including facial and voice recognition (sometimes botched), with general surveillance as in, e.g., The Circle (novel) and THE CIRCLE (film).
"Lethal autonomous weapons" as part of the long-standing efforts in search of the ultimate weapon and the tradition of Fred Saberhagen's berserkers;[2] and, for the "Cassandras" suspicious of AI, Terminators, the Cylons of the rebooted Battlestar Galactica (2004-09), the Daleks of Doctor Who — et al., including the brilliant dark comedy of Enforcement Droid ED-209 in ROBOCOP (1987).[3]


Chapter 1, "Data": See for early AI work in the real world, with the limitations of "expert systems" in handling ambiguity or in contexts requiring flexibility vs. "machine learning." Cf. and contrast, for the ambiguity motif of stymieing machine minds with paradoxes, the classic Star Trek episode "I, Mudd" or, much more recently, "The Winter Line" episode of Westworld, Season 3 (22 March 2020). For over-optimistic early versions of SF machine learning, note the education of HAL 9000's daughter, so to speak, SAL 9000, with Dr. Chandra in 2010: Odyssey Two — or in Galatea 2.2 or with HARLIE in When HARLIE Was One. Repeat: the point here would be the contrast of the personification of conscious SF AIs with their precursors in the real world. (Although, as elsewhere, New Fire presents important points for citizens to know about interesting history.)

Chapter 2, "Algorithms": Real-world machine learning of such complex matters as chess and the still-more-complex game of Go, or "Predicting the structure of a protein from a known amino acid sequence" (p. 52). "ImageNet and GANs [Generative Adversarial Networks] showed the power of data-intensive machine learning, kickstarting a technological revolution but also fostering the automation of [military] intelligence analysis and the creation of fake but convincing videos" (p. 56).
Chapter 3, "Compute," i.e., computing power: Increasing at increasing speed, with a kind of arms race in the semiconductor industry with serious geopolitical implications (p. 61). The game of StarCraft II (released 2010) was the next challenge for AI: a game with some 10 to the 26th power options for any given decision and 10 to the 1,685th power possibilities for a full session ("The number of seconds in the history of the universe, by contrast, is 10 to the 17th power") — and, unlike chess or Go, not "a perfect information game, with both players' pieces in everyone's sight at all times" (p. 67). The "DeepMind" project developed three versions of "AlphaStar" players: "Each version played at a grandmaster level, placing it in the top 0.2 percent of players in the world," and attracting the attention of the "warriors" among AI advocates who "recognized that StarCraft was war in miniature" (p. 70). See for background on video games[4] and such films as THE LAST STARFIGHTER, TRON (1982), and READY PLAYER ONE (film and novel). For the approach to true AI that could pass the Turing Test, note OpenAI's GPT (Generative Pre-trained Transformer) 2[5][6] and 3.[7] "From human writing, GPT-2 learned to mimic grammar and syntax" in English and could develop a decent opening to an SF/F story (p. 72). With a little help and heavy investment from Microsoft friends, GPT-3 "could answer science questions, correct grammatical mistakes, solve anagrams, translate between languages, and," impressively and a little frighteningly, "generate news stories" (pp. 74-75). Cf. and contrast the large and heavy kaleidoscope-like devices used to generate reading copy for the Proles in Orwell's Nineteen Eighty-Four.

Chapter 4, "Failure": Failures with AI.
Bias: Examining résumés for Amazon using data from earlier hires — the earlier bias to hire men was learned by the machine program (p. 87).
Opacity: Much of these systems' "behavior is learned by the machine itself" in ways that can be "remarkably opaque." The upside of this Buchanan and Imbrie summarize with SF editor John W. Campbell's advice to writers seeking inspiration "to come up with a creature that thinks as well as a human, or even better than a human, but not exactly like a human" (p. 93). The downside includes sometimes rather funny instances of "specification gaming," in which systems come up with loopholes in their instructions to beat the system, not maliciously (we assume) but because they literally don't know what they're doing: e.g., racking up points in a racing game without going over the finish line, since the reinforcement was for racking up points, not necessarily finishing the race. More generally, "[...] even as machine learning improves and teams of humans and machines become the norm," a new and powerful technique called "neural architecture search" will expand, "in which machine learning systems design other machine learning systems; in effect, these systems are black boxes that build black boxes inside themselves. [...] Absent an unexpected breakthrough, machine learning is poised to become still more opaque" (p. 96). Note that HAL in 2001: A Space Odyssey as novel was designed by AI machines designed by other AI machines.


Chapter 5, "Inventing": Quick historical survey of "Science in Peace and War" from the A-Bomb to AI, centered on the USA and PRC (China).
Chapter 6, "Killing": Cuban Missile Crisis, 1962: Soviet submarine captain Valentin Savitsky vs. Commodore Vasily Arkhipov on using a nuclear torpedo against the US ships harassing them — Savitsky wanting to, Arkhipov overruling him, to wait: "Whether one laments Savitsky's impulsiveness or praises Arkhipov's patience, there remains no doubt that humans were in control" (pp. 136-37). In a world of "lethal autonomous weapons" of the killer-robot variety (and more so with AI controlling strategic weapons), humans will not be in control. By 2022, it has become a question for both democracies and autocracies (and States in between) of how much authority to yield to the machines. See for obvious background — to be developed more later in The New Fire — for such works as COLOSSUS: THE FORBIN PROJECT and the TERMINATOR series.
Chapter 7, "Hacking": Cyber attacks and defense, at machine speed. Implicit message for students of Cyberpunk:[8] Cyberspace hacking was more fun in the Romantic mode of Neuromancer et al. than the rather sordid real world. 
Chapter 8, "Lying": 

Machine learning-enabled microtargeting of content on the internet means that more persuasive messages are far more likely to reach their intended audiences than when operatives had to rely on newspapers and academics. Instead of the slow forgeries of the Cold War, the generative adversarial networks (GANs) [...] can rapidly make such messages far more vivid than the KGB's letter to the editor [quoted earlier in the chapter]. [...]

Disinformation is a major geopolitical concern. NSA [National Security Agency: Signals Intelligence] Director Paul Nakasone called it the most disruptive threat faced by the US intelligence community. "We've seen it now in our democratic processes and I think we're going to see it in our diplomatic processes, we're going to see it in warfare. We're going to see it in sowing civil distrust, and distrust in different countries," he said. The boundaries between the producers and consumers of information, and between domestic and foreign audiences, are blurred. The number of actors has multiplied [...]. Automation has accelerated the pace and potency of operations. And all of this is happening in the post-truth age, dominated by high levels of inequality and nativism, offering a more receptive audience than ever before. (p. 185)

Cf. and contrast the relatively primitive means of propaganda of the advertising agencies in The Space Merchants; and automated Spambots, Chatbots, et al. as part of a Fourth Industrial Revolution where propagandists and other wielders and twisters of words can be replaced by machines (see again Vonnegut's Player Piano).

Deepfakes: Starts with discussion of the Hitler jig clip, a World War II "shallow fake" by John Grierson and an associate, a very effective piece of anti-Hitler propaganda.[9] Nowadays "A GAN [Generative Adversarial Network] could do easily and more convincingly what Grierson had to do painstakingly and manually" (pp. 195-96), producing "deepfakes" that "exacerbate a long-standing social problem" that Buchanan and Imbrie trace back to Hitler and earlier.

The denial of truth has long been a feature of autocracies. In her seminal study of totalitarianism [The Origins of Totalitarianism (1951/1976)], Hannah Arendt wrote, "Before mass leaders seize the power to fit reality to their lies, their propaganda is marked by its extreme contempt for facts as such, for in their opinion fact depends entirely on the power of the man who can fabricate it." Stalin and other autocrats rewrite history to preserve their hold on power, forcing citizens to accept what many knew was not true. In 1984 [...] George Orwell wrote, "The party told you to reject the evidence of your eyes and ears. It was their final, most essential command." It is easy to imagine deepfakes enhancing this power [...]. (p. 197)

It is also easy to imagine deepfakes undermining the USA (and other countries) as liberal-democratic republics toward the end of a fairly long period of lessening "respect for the notion of truth itself" (p. 201).[10] So see this section of New Fire on general grounds of civic responsibility; for SF/F, note the possible continuing relevance — taken figuratively (including the "figure" hyperbole) — of deep-paranoia works dealing with the trope of a false reality, as in THE MATRIX, TOTAL RECALL (1990), or, in a sense, DARK CITY and THEY LIVE.[11]

GPT-3: See above, Chapter 3 (pp. 75-76). "Minor bug and massive compute costs aside, GPT-3 made GPT-2 look like a child's toy. The new version could answer science questions, correct grammatical mistakes, solve anagrams, translate between languages, and," relevant for a potential threat, "generate news stories." The developing company "OpenAI tested this last ability by providing GPT-3 with a headline and a one-sentence summary" to generate copy — where, Erlich notes, humans working on an election campaign would generate letters to the editor or media releases with that headline and summary, plus what at one time was called a "fact sheet" and today would be a screen of "bullet points" of data from which to choose evidence for the letters or releases. "For example" New Fire continues in Ch. 3, "when given the headline 'United Methodists Agree to Historic Split' and the summary 'Those who oppose gay marriage will form their own denomination,' GPT-3 created the following:" followed by two long paragraphs of text. This Wiki's initial compiler — who has relevant experience in such matters — would send the generated text out with minimal editing, compliment the author (assumed human), and tell the author to be sure the history cited is correct (Buchanan and Imbrie note an error that would be obvious to knowledgeable readers) and remind her or him that journalistic writing requires far more than two paragraphs for a substantial story, so the text needs to be broken up into smaller units (which an editor would do routinely). The issue in Chapter 8 is whether OpenAI would be open with GPT-3, "a potential tool for automating disinformation. The tool could generate stories, so it could generate fake news; it could invent human dialogue, so it could empower bots on Twitter and Facebook" (p. 202). 
Relevant for SF: GPT-3 means that applications of the Turing Test in stories would need to be extensive, definitely interactive and "dialogic" — and probably need to be supplemented with additional tests that didn't involve language generation.

Part III, Wildfire

Chapter 9, "Fear": Section called "The Dead Hand" briefly retells the story of Soviet Lt. Col. Stanislav Yevgrafovich Petrov[12] and how on 26 September 1983 he and his staff received an automated report of the launch of a "missile strike" detected by their early warning radar;[13] "The computer reported with its highest level of confidence that a nuclear attack" from the United States "was under way" (p. 214). Petrov examined the situation and "After careful deliberation [...] called the duty officer [... to say] the early warning system was malfunctioning. [...] Thanks to Petrov's decisions to disregard the machine and disobey protocol, humanity lived another day" (p. 214). Since then "the perceived need for speed and allure of automation" led the Soviets to institute "Perimeter," informally called "the Dead Hand": a system of making nuclear response more guaranteed, launched by the action of lower-echelon officers.* "The United States was also drawn to automated systems" — as with Semi-Automatic Ground Environment (SAGE, pp. 215-16), making DR. STRANGELOVE increasingly a documentary with laughs and adding seriousness to a film like WARGAMES. The section raises the question of what to make of the already-existing "evidence that autonomous systems perceive escalation dynamics different from human operators" and the concern that "nuclear war could become even more abstract than it already is, and hence more palatable." On this last point we're given the macabre but highly ethical suggestion from Roger Fisher "that nuclear codes be stored in a capsule surgically embedded near the heart of a military officer who would always be near the president. The officer would also carry a large butcher knife. To launch a nuclear war, the president would have to personally kill the officer and retrieve the capsule — a comparatively small but symbolic act of violence that would make the tens of millions of deaths to [...] more visceral and real" (p. 219).[14] (The "Fisher Protocol" was reportedly used in the HBO series The Leftovers.)[15]
        * For a more pessimistic reading of the Soviet and likely now Russian "Dead Hand," see Daniel Ellsberg's ch. 19, "The Strangelove Paradox," in his The Doomsday Machine: Confessions of a Nuclear War Planner (2017).
Chapter 10, "Hope": Suggestions for the US government to follow to balance the possibilities for AI raised by the evangelists, warriors, and Cassandras. Not immediately relevant for SF, but important reading for responsible citizens.


For one real-world example from the time this book was being composed, see WBUR coverage of "Your AI Interviewer Will See You Now," at the link.

RDE, finishing, 30Ap22