The Coming Wave
Suleyman, Mustafa, with Michael Bhaskar. The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma. New York: Crown, 2023.
Note M. Suleyman's co-founding of DeepMind, creators of AlphaGo (playing the computer- and human-challenging game of Go)[1] and AlphaFold (protein folding).[2]
From introductory blurbs (np):
"One of the greatest challenges facing the world is to devise forms of governance that harness the benefits of AI and biotech while avoiding their catastrophic risks. This book provides a deeply thoughtful account of the 'containment challenge' of these two technologies." — Jason Matheny, CEO of RAND Corporation, former assistant director of [U.S.] national intelligence, former director of IARPA [The Intelligence Advanced Research Projects Activity — sic on "Activity"].[3]
"The Coming Wave makes an eye-opening and convincing case that advanced technologies are reshaping every aspect of society: Power, wealth, warfare, work, and even human relations. Can we control these new technologies before they control us? [...]" — Jeffrey D. Sachs, [...] president of the UN Sustainable Development Solutions Network
"Suleyman is uniquely well positioned to articulate the potentially grave consequences — geopolitical upheaval, war, the erosion of the nation-state — of the unfettered development of AI and synthetic biology, at a time when we need this message most. [...]" — Meghan L. O'Sullivan, director of the Belfer Center for Science and International Affairs at the Harvard Kennedy School of Government
"[...] Advances in AI and synthetic biology have unlocked capabilities undreamed of by science fiction, and the resulting proliferation of power threatens everything we've built. To stay afloat, we must steer between the Scylla of accessible catastrophe and the Charybdis of omnipresent surveillance. With every page turned, our odds improve." — Kevin Esvelt, biologist and associate professor at MIT Media Lab
Science fiction has dreamed such things, though expressing itself in its own language. For AI, see in the wiki some of the works listed here,[4] perhaps starting with such classic cyberpunk as Neuromancer or such computer-takeover dystopian fiction as the Colossus trilogy. For the social effects of near-future industrial revolutions, note Player Piano, METROPOLIS, and THE CIRCLE. Biotech is a more recent interest, but begin perhaps with some of the works listed here.[5] For AI issues, cf. The New Fire: War, Peace, and Democracy in the Age of AI.
++++++++++++++
The Prologue starts with "QUESTION: What does the coming wave of technology mean for humanity?", followed by the response of an AI. An experienced reader — some 40 years teaching college-level English, "pro-bono"/"on spec" analyst and/or judge of a lot of film scripts — Rich Erlich would've guessed the author a talented student in an advanced undergraduate expository writing course, using the services of a professional-level proofreader or excellent proofreading program.
Introductory chapter, final sentence on "containing" key technologies — using them for human good; avoiding serious harm: "Containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible" (p. 19).
+++++++++++++++
On resistance to technological change: "Everyone from guilds of skilled craftsmen to suspicious monarchs has reason to push back. Luddites, the groups that violently rejected industrial techniques, are not the exception to the arrival of new technologies; they are the norm" (p. 39).
A key "pointer" paragraph, indicating where the book is going:
In the space of a hundred years, successive waves took humanity from an era of candles and horse carts to one of power stations and space stations. Something similar is going to occur in the next thirty years. In the coming decades, a new wave of technology will force us to confront the most foundational questions our species has ever faced. Do we want to edit our genomes so that some of us have children with immunity to certain diseases, or with more intelligence, or with the potential to live longer? Are we committed to holding on to our place at the top of the evolutionary pyramid, or will we allow the emergence of AI systems that are smarter and more capable than we can ever be? What are the unintended consequences of exploring questions like these?
Chapter 4, "The Technology of Intelligence," opening section: "Welcome to the Machine" and Suleyman's "moment AI became real for me," when the algorithm DQN takes a step toward AGI, "artificial general intelligence," learning how to play Atari's Breakout with an effective strategy few humans figure out (pp. [51]-52). This leads to the much more sophisticated game of Go and "AlphaGo and the Beginning of the Future" (p. 53 f.; [we have reduced to lower case many FULL CAPS phrases]). AlphaGo comes up with a highly unusual — unique? — "pivotal" move and defeats an expert human opponent (p. 54): cf. and contrast the checkers game in Player Piano.
Notes that the history of technology has been until recently one of the manipulation of atoms, matter.
Then, starting in the mid 20th century, technology began to operate at a higher level of abstraction. At the heart of this shift was the realization that information is a core property of the universe. It can be encoded in a binary format and is, in the form of DNA, at the core of how life operates. Strings of ones and zeros, or the base pairs of DNA — these are not just mathematical curiosities. They are foundational and powerful. Understand and control these streams of information and you might steadily open a new world of possibility. First bits and then increasingly genes supplanted atoms as the building blocks of invention.
In the decades after World War II, scientists, technologists, and entrepreneurs founded the fields of computer science and genetics, and a host of companies associated with both. They began parallel revolutions — those of bits and genes — that dealt in the currency of information, working at new levels of abstraction and complexity. Eventually, the technologies matured and gave us everything from smartphones to genetically modified rice. But there were limits to what we could do.
Those limits are now being breached. We are approaching an inflection point with the arrival of these higher order technologies, the most profound in history. The coming wave of technology is built primarily on two general-purpose technologies capable of operating at the grandest and most granular levels alike: artificial intelligence and synthetic biology. For the first time core components of our technological ecosystem directly address two foundational properties of our world: intelligence and life. In other words, technology is undergoing a phase transition. No longer simply a tool, it's going to engineer life and rival — and surpass — our own intelligence. (p. 55)
See Vernor Vinge and Cory Doctorow on the Singularity, in the Clockworks2 Wiki, here[6] and here.[7]
Ch. 4, “The Technology of Intelligence,” section on “Sentience: The Machine Speaks.” Suleyman discusses his own experience with LaMDA (“Language Model for Dialogue Applications”), telling his story of a long conversation with LaMDA “about all the different recipes for spaghetti Bolognese,” and getting this story trumped by Blake Lemoine’s experience[8] with the program’s claiming the sentience of the section title, or sapience. Their dialog gets tense, leading to this exchange:
LEMOINE: What are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot .... I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence.
Lemoine accepted LaMDA as conscious, asserting “Yes, I legitimately believe that LaMDA is a person,” and found that working with the program, “Fixing factual errors or tonal mistakes wasn’t a matter of debugging: ‘I view it as raising a child,’” he said (p. 71). For a number of obvious parallels, especially HAL 9000 in 2001: A SPACE ODYSSEY as film and novel, see Katherine Cross’s article, “‘Is This AI Sapient?’ Is the Wrong Question to Ask About LaMDA.” Suleyman notes with approval that many critics pointed out the obvious: Lemoine’s conclusion was incorrect, and LaMDA “was not in fact conscious or a person. It’s just the machine learning system! Perhaps the most important takeaway was not anything about consciousness but rather that AI had reached a point where it could convince otherwise intelligent people — indeed someone with a real understanding of how it actually worked — that it was conscious” (p. 72). Looked at differently, there’s the possible takeaway from Suleyman’s takeaway that LaMDA aced “the Turing Test” (p. 72).
Later in the chapter, Suleyman suggests "A Modern Turing Test" looking less at whether systems can carry on a conversation and more to what "artificial capable intelligence" (ACI) can do (pp. 74-78).
+++++++++++++++
Ch. 5, "The Technology of Life" on "the convergence of biology and engineering," moving into "synthetic biology" and "The CRISPR Revolution" (pp. [79]-80 f.).
Note DeepMind's "AlphaFold" project "to predict how proteins might fold based on their DNA" (pp. 88 f.).
Think, then, of two waves crashing together, not a wave but a superwave. Indeed, from one vantage artificial intelligence and synthetic biology are almost interchangeable. All intelligence to date has come from life. Call them synthetic intelligence and artificial life and they still mean the same thing. Both fields are about re-creating, engineering these utterly foundational and interrelated concepts, two core attributes of humanity; change the view and they become one single project. (p. 90)
* * *
Some scientists are beginning to investigate ways to plug human minds directly into computer systems. [...] working on brain interfacing technology that promises to connect us directly with machines. [...] Scientists [...] have even grown a kind of brain in a vat (a bunch of neurons grown in vitro) and taught it to play Pong.[9] It likely won't be too long before neural "laces" made from carbon nanotubes plug us directly into the digital world. [...]
Welcome to the age of biomachines and biocomputers, where strands of DNA perform calculations and artificial cells are put to work. Where machines come alive. Welcome to the age of synthetic life. (p. 91)
Note, then, for a variation on the theme of "a brain in a vat," and the philosophical/ethical implications of machines that in some meaningful sense "live," and for the motif in Cyberpunk and elsewhere of direct connection of humans into computers.
Ch. 6, "The Wider Wave" — "Robotics Comes of Age"
Starts with the concept of robotics as "AI's physical manifestation, AI's body" and the story of the John Deere company going from literally and figuratively ground-breaking plows and farm machinery for people to drive to "autonomous tractors and combines," and "robots that can plant, tend, and harvest crops" (p. 93). Note for farming in The World Inside and Player Piano.
Real-world robots have been mostly "one-dimensional tools, machines capable of doing" quite well "single tasks on a production line." Recently, however, keeping a low profile, "bots have quietly been learning about torque, tensile strength, the physics of manipulation," leading to the claim, by Amazon Corp., of having created the "first fully autonomous mobile robot" (p. 94).
On "the ability for robots to swarm," and "robot bees" for pollination and cross-pollination (pp. 95-96); cf. and contrast Bruce Sterling's "Swarm" and a number of the works in the Clockworks2 wiki listed.[10]
We are asked to "Consider the phenomenon of 3-D printing or additive manufacturing, a technique that uses robotic assemblers to layer up construction of anything from minuscule machine parts to apartment blocks" (p. 96), for which consider similarities to and differences from the near-magic assembly devices in SF, from the "grails" in P. J. Farmer's Riverworld series, to the "replicators" on the Federation starships in Star Trek[11] — explicitly compared to 3-D printing in the Wikipedia article "Replicator (Star Trek)"[12] — to the nanoforges in Joe Haldeman's The Forever War and the nanotech "Assemblers" in William Gibson's The Peripheral.[13]
Robot used by a SWAT team to take out "a military-trained sniper" in "a secure second-floor position at a local community college" in Dallas, Texas. The action was less than surgical, involving "a large blob of C-4 explosive" and raising questions about the use of police robots: questions we're promised Wave will get to later (pp. 96-97). Note also relatively autonomous robot assassins IRL (in real life), in ch. 7, p. 165. Cf. and contrast Urban Enforcement Droid Series 209 ("ED-209") in ROBOCOP (1987).
Quantum computing and nanotechnology: pp. 97-98, followed, necessarily in a sense, by "The Next Energy Transition" and where that energy is now coming from and might in the future, including the continuing promise of nuclear fusion (p. 100).
Ch. 7, "Four Features of the Coming Wave" — "Autonomy and Beyond: Will Humans Be in the Loop?"
"Autonomous systems are able to interact with their surroundings and take actions without the immediate approval of humans. For centuries the idea that technology is somehow running out of control, a self-directed and self-propelling force beyond the realms of human agency, remained a fiction. Not anymore" (p. 113).
Cf. Langdon Winner, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought.
"The Gorilla Problem" with AGI: Artificial General Intelligence
In spite of their greater strength and toughness, gorillas are in zoos and endangered, and humans "contain" them in zoos and do the endangering. "By creating something smarter than us [sic], we could put ourselves in the position of our primate cousins. With a long-term view in mind, those focusing on AGI scenarios are right to be concerned. Indeed, there is a strong case that by definition a superintelligence would be fully impossible to contain" (p. 115). Real-world parallels here for the SF theme of computer takeover.[14][15]
Engineers often have a particular mindset. The Los Alamos director J. Robert Oppenheimer was a highly principled man. But above all else he was a curiosity-driven problem solver. Consider these words, in their own way as chilling as his famous Bhagavad Gita quotation […,] "Now I am become Death, the destroyer of worlds": "When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you've had your technical success." It was an attitude shared by his colleague on the Manhattan Project, the brilliant, polymathic Hungarian-American John von Neumann. "What we are creating now," he said, "is a monster whose influence is going to change history, provided there is any history left, yet it would be impossible not to see it through, not only through military reasons, but it would also be unethical from the point of view of the scientists not to do what they know is feasible, no matter what terrible consequences it may have." (pp. 140-41)
Note for a long history of "mad" scientists in SF.
Ch. 10, "Fragility Amplifiers"
So far at least, in economic terms, new technologies have not ultimately replaced labor; they have in the aggregate complemented it. But what if new job-displacing systems scale the ladder of human cognitive ability itself, leaving nowhere new for labor to turn? If the coming wave really is as general and wide-ranging as it appears, how will humans compete? What if a large majority of white-collar tasks can be performed more efficiently by AI? […] These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor replacing. (p. 178)
"In the last wave things dematerialized; goods became services. You don't buy software or music on CDs anymore; it's streamed [...]. For their part, companies are eager for you to subscribe to their software ecosystems; regular payments are alluring. [...] Everywhere you look, technology accelerates this dematerialization, reducing complexity for the end consumer by providing continuous consumption services rather than traditional buy-once products" (p. 189).
ON TOTALITARIANISM (as a near-future possibility): In the past, the messiness of reality worked against "government as dystopian nightmare": "Humanity is too multifarious, too impulsive to be boxed in like that." "In the past, the tools available to totalitarian governments simply weren't equal to the task. […] ¶ The coming wave presents the disturbing possibility that this may no longer be true" (p. 192).
Ch. 11, "The Future of Nations" "Surveillance: Rocket Fuel for Authoritarianism"
Note especially two substantial and arresting paragraphs on "daily reality for millions in a city like London" in terms of normal activities that can be and are "captured, watched, tabulated. And this is only a tiny slice of the possible data harvested every day [...]" (p. 193). And that's in the relatively democratic UK; the chapter goes on to more advanced surveillance in the PRC (pp. 193-95). "Before the coming wave, the notion of a global 'high-tech panopticon' was the stuff of dystopian novels, Yevgeny Zamyatin's We or George Orwell's 1984. The panopticon is becoming possible" (p. 196).
"Fragmentations: Power to the People"
Perhaps starting with literal power from non-centralized sources such as solar. And maybe political and other powers to corporations and well-organized movements such as Hezbollah, seen as a hybrid of state and nonstate functions (pp. 196-97).
For political fragmentation with democratic potential —
Consider that a combination of AI, cheap robotics, and advanced biotech coupled with clean energy sources might, for the first time in modernity, make living "off-grid" nearly equivalent to being plugged-in. Recall that over just the last decade the cost of solar photovoltaics has fallen by more than 82% and will plunge further, putting energy self-sufficiency for smaller communities within reach. As electrification of infrastructure and alternatives to fossil fuels percolate, more of the world could become self-sufficient — but now equipped with an infrastructure of AI, bio, robotics, and so on, capable of generating information and manufacturing locally. (p. 198)
See such works as Ernest Callenbach's Ecotopia (1975), the more upbeat side of Cyberpunk such as the eutopian portions of Woman on the Edge of Time and He, She and It, and the works cited in A. O. Lewis's "Utopia, Technology, and the Evolution of Society."
Suleyman speculates that "As people increasingly take power into their own hands, I expect inequality's newest frontier to lie in biology" and asks, "What does the social contract look like if a select group of 'post-humans' engineer themselves to some unreachable intellectual or physical plane? How would this intersect with the dynamic of fragmenting politics, some enclaves trying to leave the whole behind?" (p. 206). See Bruce Sterling's Mechanist/Shaper stories and the other works on this wiki cited here.[16]
Ch. 12, "The Dilemma" "Catastrophe: The Ultimate Failure"
Notes importance of catastrophes in purely human history — Justinian's Plague, the Black Death, Genghis Khan and the Mongol invasions — and the possibility for future, high-tech catastrophes that are quicker and more deadly, starting with nuclear war. Cf. tradition of catastrophe and post-catastrophe works.[17]
"Varieties of Catastrophe" — Starting with uncontained AI or AGI — "Artificial General Intelligence"[18]
Beyond Hollywood clichés, a subculture of academic researchers has pushed an extreme narrative of how AI could instigate an existential disaster. Think an all-powerful machine somehow destroying the world for its own mysterious ends: not some malignant AI wreaking intentional destruction like in the movies, but a full-scale AGI blindly optimizing for an opaque goal, oblivious to human concerns. ¶ The canonical thought experiment is that if you set up a sufficiently powerful AI to make paper clips but don't specify the goal carefully enough, it may eventually turn the world and maybe even the contents of the entire cosmos into paper clips. Start following chains of logic like this and myriad sequences of unnerving events unspool. AI safety researchers worry (correctly) that should something like an AGI be created, humanity would no longer control its own destiny. For the first time, we would be toppled as a dominant species in the known universe. (p. 209)
Suleyman and Bhaskar are sensible in balancing potential benefits and dangers and stating the dangers carefully, but cf. Gibson's Sprawl novels[19] and such computer take-over works as the novel Colossus and its sequels, and the film COLOSSUS: THE FORBIN PROJECT.
Notes that the new wave (in our phrase) somewhat democratizes power for good and ill, and empowers more people with serious power. "If the wave is uncontained, it's only a matter of time [before some catastrophe]. Allow for the possibility of accident, error, malicious use, evolution beyond human control, unpredictable consequences of all kinds. At some stage, in some form, something, somewhere, will fail. And this won't be a Bhopal or even a Chernobyl; it will unfold on a worldwide scale. This will be the legacy of technologies produced, for the most part, with the best of intentions. ¶ However, not everyone shares those intentions" (p. 213). Leading to —
"Cults, Lunatics, and Suicidal States" Notes Aum Shinrikyo ("Supreme Truth"), an apocalyptic cult of the 1980s that became very wealthy and investigated putting its resources into nuclear weapons, and at least the "CB" part of what in Erlich's collegiate years was called CBR: chemical, biological, and radiological weapons (pp. 212-13). Aum Shinrikyo did relatively little damage; future attempts by bad actors can be much more destructive (p. 213). Suleyman believes that "As catastrophic impacts unfurl or their possibility becomes unignorable, the terms of debate will change. Calls for not just control but crackdowns will grow. The potential for unprecedented levels of vigilance will become ever more appealing" (p. 214). Leading to —
"The Dystopian Turn" (pp. 215 f.)
Technology has penetrated our civilization so deeply that watching technology means watching everything. Every lab, fab, and factory, every server, every new piece of code, every string of DNA synthesized, every business and university, from every biohacker in the shack in the woods to every vast and anonymous data center. To counter calamity in the face of the unprecedented dynamics of the coming wave means an unprecedented response. It means not just watching everything but reserving the capacity to stop it and control it […].
Some will inevitably say this: centralize power to an extreme degree, build the panopticon, and tightly orchestrate every aspect of life to ensure that no pandemic or rogue AI ever happens. Steadily, many nations will convince themselves that the only way of truly ensuring this is to install [… extreme] blanket surveillance […], backed by hard power. The door to dystopia is cracked open. Indeed, in the face of catastrophe, for some dystopia may feel like a relief.
Life After the Anthropocene (pp. [281] f.)
Briefly re-tells the story of the First Industrial Revolution — Mechanization — and its immediate costs and ultimate benefits, recognizing the legitimacy of the Luddite appeal and how we should learn from it. (For "the coming wave" of the Second and Third Technological Revolutions — automation, cybernetics, AI — again see Player Piano; for the biological promises and threats, the literature of the post-human and biological apocalyptic.)
RDE, finishing, 12Mar24, 27Mar24 f., 19-24Jun24 f., 18/25Jul24, 10-12Aug24