Darwin Among the Machines: The Evolution of Global Intelligence

From Clockworks2

Dyson, George B. Darwin Among the Machines: The Evolution of Global Intelligence. Cambridge, MA: Perseus Books (Helix), 1997. To be perhaps studied with, but not to be confused with, Samuel Butler's essay, "Darwin Among the Machines."

Opening sentences: "This is a book about the nature of machines. It is framed as history but makes no claim to have separated the fables from the facts. Both mythology and science have a voice in explaining how human beings and technology arrived at the juncture that governs our lives today" (p. xi; "Preface: Edge of the World").

In his review of 30 September 1998, Tal Cohen notes

• The main idea suggested by George Dyson in this book is simple: In the digital universe, too, a conscious mind will evolve naturally, rather than as the result of some design. Artificial Intelligence researchers might as well spend their time searching for signs of intelligence on the net rather than trying to develop it. [* * *]
• Dyson’s main claim is that the evolution of a conscious mind from today’s technology is inevitable. It is not clear whether this will be a single mind or multiple minds, how smart that mind would be, or even whether we will be able to communicate with it. He also clearly suggests that there are forms of intelligence on Earth that we are currently unable to understand.[1]

Note this book's appearance about the same time as, but without citation to, Vernor Vinge's 1993 essay, "The Coming Technological Singularity."

Darwin might be put in useful dialog with Jessica Riskin's 2016 The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick.


From a couple of the blurbs at the beginning of the Helix edition (pages unnumbered):

[A]n almost perfect example of the effective literary treatment of scientific subjects. — Philip W. Anderson, Nature
Charles Darwin and the computer seem an unlikely couple. But [...] Dyson makes us wonder why we've waited so long to pair the two. That's one of the many surprises Dyson offers in his idiosyncratic view of the evolution of computers ... a cogent, succinct history of the thinkers and tinkering that paved the way, occasionally unwittingly, to today's technology ... The final irony: it took someone from outside the discipline to advance our understanding of the ongoing dance of life, evolution[,] and machines. — Katie Hafner, Newsweek

Dyson also deals with Erasmus Darwin.



1. Leviathan
 Opens with a discussion of Thomas Hobbes's Leviathan, where "the artificial life [AL] and artificial intelligence [AI] that so animated Hobbes's outlook [...] was not the discrete, autonomous mechanical intelligence conceived by the architects of digital processing in the twentieth century. Hobbes's Leviathan was a diffuse, distributed, artificial organism more characteristic of the technologies and computational architectures approaching with the arrival of the twenty-first." Moves on with a quick survey from Hobbes as "the patriarch" of AI to (briefly) Gottfried Wilhelm Leibniz to Marvin Minsky and Alan Turing (pp. 1-2 f.) — and from there to networked computers and other digital devices up to 1995 or so, with a glance back at H. G. Wells, working in 1938 from the idea of a "World Encyclopaedia" toward his desire for "'widespread world intelligence conscious of itself'" in a World Brain (p. 10).[2][3]
2. Darwin Among the Machines
 Erasmus Darwin as well as Charles, and Samuel Butler in defense of Erasmus Darwin — with Dyson commenting on E. Darwin's contributions to the Industrial Revolution in England (pp. 21-22), and SF through M. W. Shelley's Frankenstein (p. 22).
G. B. Dyson discusses his father Freeman J. Dyson's "detour into theoretical biology" with Origins of Life (1985 [revised 1999])[4] and its discussion of metabolism as central to life and replication, and the distinction between (exact) replication, producing an "exact copy" as with molecules and genes (sic), and reproduction, "producing a similar copy" (p. 29). This leads to a flashback, so to speak, to John von Neumann's "General and Logical Theory of Automata" (1948): "from which my father, in his Origins of Life, condensed the essential truths that 'metabolism and replication, however intricately they may be linked in the biological world as it now exists, are logically separable. It is logically possible to postulate organisms that are composed of pure hardware and capable of metabolism but incapable of replication. It is also possible to postulate organisms that are composed of pure software and capable of replication but incapable of metabolism'" (Darwin p. 32, quoting Origins, p. 7). G. B. Dyson has it that "The origins of life as we know it — and life as we are creating it — are to be found in the cross-fertilization between self-sustaining metabolism and self-replicating code," leading to the "coalescence of the kingdom of numbers with the kingdom of machines" (p. 32) and, to complete the thought: a kind of mechanical life, AI as AL.
3. The General Wind
 Starts with G. W. Leibniz at 24 in 1670 complaining in a letter to T. Hobbes (age 82) about Hobbes's not saying "something more clearly about the nature of mind," and observes that "Ever since Hobbes and Leibniz, the nature of mind has been inextricably linked to the nature of machines. Mind has either been defined as a property of the machine [...] or, alternatively, as a property beyond the machine [...]" (p. 35). Moves on from Leibniz's imagining "a digital computer" (p. 37) to the 19th century and Charles Babbage and his idea for an "analytical engine" (actual work starting in 1834), to George Boole (1815-1864), inventor of Boolean algebra, to Alfred Smee, author of (among much else) Elements of Electro-Biology, or, the Voltaic mechanism of Man (1849; sic on capitalization). Concludes that "Hobbes's ratiocination, Leibniz's calculus ratiocinator, Babbage's mechanical notation, Boole's laws of thought, and Smee's conceptual cyphers" — in his analysis of natural language — "all attempted to formalize the correspondence among a mechanical system of things, a mathematical system of symbols, and our mental system of thought and ideas," which leads Dyson to Kurt Gödel (1906-1978), the great scope of human abilities in those areas, and, better known, the limits Gödel demonstrated of that scope (pp. 49-50). Still both answered and unanswered: "Can machines calculate?" Babbage's could. "Can machines think?" Dyson has Hobbes believing in "strong AI." "Can machines become conscious?" Smee thought that the close concurrence between the stuff of the world and our ultimate perception of it is Reality, and that a mental image without the external matter is Thought.
And then: "'The power to distinguish between a thought and a reality is called Consciousness'", which would make consciousness difficult for machines lacking experience of the external world; but the definition of "consciousness" comes from Smee's book from 1849, Principles of the Human Mind deduced from Physical Laws (sic on capitalization), and Dyson believes "Smee made the leap between mind and mechanism," and approached the idea of the neural net — although consciousness remains a question (Dyson pp. 46, 47). "Can machines have souls?" (questions from Dyson p. 50, with the last question definitely remaining open). 
4. On Computable Numbers 
 Headnote (italics removed): "In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will[,] providing mansions for the souls he creates." — Alan Turing, "Computing Machinery and Intelligence," Mind 59 (October 1950): 443. There are a number of players in this chapter, but Turing is the star, and of interest here is this essay by Turing in, significantly, Mind; also: "Intelligent Machinery," report submitted to the National Physical Laboratory, 1948, rpt. Donald Michie, ed., Machine Intelligence, vol. 5 (1970): 3 for Dyson's citation (endnotes pp. 239, 240). Among the secondary players, note — allowing for headline hype — Emmanuel Scheyer on the potential of what for us are old technologies of read-out and instruction strips (picture a "stock ticker") and IBM cards: "When Perforated Paper Goes to Work: How Strips of Paper Can Endow Inanimate Machines with Brains of Their Own," Scientific American 127 (December 1922): 395 and thereabout (endnotes, p. 239). (The punch cards of the "Jacquard loom" were to have a notable history.[5] Indeed, the first efficient data-retrieval system used by the Initial Compiler of this wiki was an "edge-notched card" system, sorted with a sorting needle,[6] a system used into the 1960s in libraries and elsewhere.[7])

The chapter succinctly covers computer development, with much about Bletchley Park during World War II, and its alumni afterwards, with Turing notable, including

"[...] reflections on artificial intelligence, labeled 'mechanical intelligence' in language that remains more precise. [...] 'An unwillingness to admit the possibility that mankind can have any rivals in intellectual power,' Turing wrote in 1948" in "Intelligent Machinery" (see above) "occurs as much amongst intellectual people as amongst others: they have more to lose" (Dyson p. 70).
What may be Turing's most important contribution to cognition theory and other sciences is also important background adding seriousness to considerations of "young" computers, as in The Adolescence of P-1 (1977) and the second childhood of HAL 9000 and "his" kin in A. C. Clarke's 2001: A Space Odyssey and its successors. Dyson notes that Turing saw a "need to develop fallible machines able to learn from their own mistakes," and, generally, "a 'learning machine,'" a "'machine that can learn from experience'" (p. 70). Turing "saw evolutionary computation as the best approach to truly intelligent machines," with AI lying at the end of "an incremental path of trial-and-error," and asked rhetorically why "'try to produce a program to simulate the adult mind'" rather than "'try to produce one that simulates the child's?'" (quoted Dyson p. 71 ["Computing Machinery" 456]).

5. The Proving Ground
Focus character: John von Neumann and the development of bombs and other implements of war — including mathematical implements — and computers. Important for its treatment of the military-industrial complex's role in the evolution of computers, with heavy emphasis on the "military" part. Note also a reference to the meteorologist and pacifist Lewis Fry Richardson, author of the important book Statistics of Deadly Quarrels (published 1960) and, relevant here, what Dyson calls "an odd but insightful paper" titled "The Analogy Between Mental Images and Sparks" (Psychological Review 37.3 [May 1930]), which "includes schematic diagrams of two simple electronic devices that Richardson constructed to illustrate his theories on the nature of synaptic function deep within the brain. One of these circuit diagrams is captioned 'Electrical Model illustrating a Mind having a Will but capable of only Two Ideas'" ("Analogy" p. 222). Dyson holds that "Richardson had laid the foundations for massively parallel computing in the absence of any equipment except his own imagination; now, with nothing but a few bits of common electrical hardware, he gave bold hints as to the physical basis of mind," ideas which would become more material with the work on computers by von Neumann et al. (Dyson p. 87, and p. 242 for notes).
6. Rats in a Cathedral
Focus character: Again John von Neumann, significant for AI and one form of AL (Artificial Life). In Dyson's view, von Neumann, unlike Alan Turing, Edmund C. Berkeley, and others,

saw digital computers as mathematical tools. That they were members of a more general class of automata that included nervous systems and brains did not imply that they could think. He rarely discussed artificial intelligence. Having built one computer, he became less interested in the question of whether such machines could learn to think and more interested in the question of whether such machines could learn to reproduce. [Dyson p. 108] * * *

Von Neumann may have envisaged a more direct path toward artificial intelligence than the restrictions of the historic von Neumann architecture suggest. [...] Von Neumann knew that a structure vastly more complicated, flexible, and unpredictable than a computer was required before any electrons might leap the wide and fuzzy distinction between arithmetic and mind. [...]

As a practicing mathematician and an armchair engineer, von Neumann knew that something as complicated as a brain could never be designed; it would have to be evolved. To build an artificial brain, you have to grow a matrix of artificial neurons first. (pp. 108-09)

7. Symbiosiogenesis
 Focus on Nils Aall Barricelli and his work at the Institute for Advanced Study (IAS) starting in 1953. Following a theory (again fashionable) going back to Russian scientists early in the 20th c., "mathematical biologist" Barricelli "believed that 'genes were originally independent, virus-like organisms which by symbiotic association formed more complex units'" and attempted "to construct a symbiogenetic model of the origin of life" (Dyson p. 111). So far, so conventional in its science, if definitely advanced at the time for using an early computer. Moving into the new and unusual is the suggestion that the numerical entities in the computer could themselves evolve into a form of AL: artificial life.

Barricelli enlarged on the theory of cellular symbiogenesis, formulating a more general theory of "symbioorganisms," defined as "self-reproducing entities of any kind." Extending the concept beyond familiar (terrestrial) and unfamiliar (extraterrestrial) chemistries in which populations of self-reproducing molecules might develop by autocatalytic means, Barricelli applied the same logic to self-reproducing patterns of any nature in space or time — such as might be represented by a subset of the 40,960 bits of information [...] within the memory of the new machine at the IAS. "The distinction between an evolution experiment performed by numbers in a computer or by nucleotides in a chemical laboratory is a rather subtle one," he observed [in an article in 1962]. (Dyson, p. 113) * * *

As Alan Turing had blurred the distinction between intelligence and nonintelligence by means of his universal machine, so Barricelli's numerical symbioorganisms blurred the distinction between living and nonliving things. Barricelli cautioned his audience against "the temptation to attribute to the numerical symbioorganisms a little too many of the properties of living beings" and [...] stressed that although numerical symbioorganisms and known terrestrial life-forms exhibited parallels in evolutionary behavior, this did not imply that numerical symbioorganisms were alive. [...] "[But —] They are not models, not any more than living organisms are models. They are a particular class of self-reproducing structures [...]." As to whether they are living, "it does not make sense to ask [...] as long as no clear-cut definition of 'living' has been given." A clear-cut definition of "living" remains elusive to this day. (Dyson, p. 117)

Much more recently the evolutionary biologist Thomas Ray has worked with "Tierra, a stripped-down [...] virtual computer," and on 3 January 1993, Ray reports, a "'self-replicating program ran on my virtual computer'" without bringing down the host, non-virtual computer. "'The power of evolution had been unleashed inside the machine, but accelerated to ... megahertz speeds'" (Dyson, p. 125). And if such self-replicating programs achieve true AL and, so to speak, get loose in the wild? It's unlikely — "Outside this special environment they are only data" — but the scenarios studied for such events have been called, flippantly but significantly, "Jurassic Park" and "Terminator 2" (p. 127).
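The flavor of such in-memory evolution experiments can be suggested with a toy sketch. This is a hypothetical illustration only: the selection and mutation rules below are invented for the example and are not Barricelli's actual rules or Ray's Tierra instruction set.

```python
import random

random.seed(0)

# A toy "universe": a limited memory in which bit-pattern "organisms"
# copy themselves imperfectly and compete for space. Barricelli had
# 40,960 bits on the IAS machine; we use a much smaller population.
POPULATION = 8            # number of 8-bit toy organisms
GENERATIONS = 50

def fitness(org):
    # Arbitrary selection pressure invented for the example:
    # favor patterns containing more 1-bits.
    return sum(org)

def reproduce(org, rate=0.05):
    # Imperfect self-copying: each bit may flip with small probability.
    return [b ^ (random.random() < rate) for b in org]

population = [[random.randint(0, 1) for _ in range(8)] for _ in range(POPULATION)]
for _ in range(GENERATIONS):
    # Patterns favored by the selection rule fill the limited memory;
    # the rest are overwritten, as in any evolutionary scheme.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 2]
    population = survivors + [reproduce(o) for o in survivors]

print(max(fitness(o) for o in population))  # selection drives 1-bits upward
```

The point of the sketch is Barricelli's: nothing here "knows" it is evolving; replication plus variation plus limited memory is enough for selection to act on pure patterns of bits.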

Dyson concludes that as the 20th c. ended, "the distinctions between A-life (represented by strings of electronic bits) and B-life (represented by strings of nucleotides) are being traversed by a language that comprehends them both" (p. 129).

8. On Distributed Communications

Starts with a brief but exciting history of signals, in the military sense — and this is another area where various militaries have played a large role — from the legend of the signal fires announcing the fall of Troy to optical and then electrical telegraphy.

Telegraph engineers were the first to give substance to what had been shown by Leibniz two centuries earlier and would be formalized by Alan Turing in the century ahead: all symbols, all information, all meaning, and all intelligence that can be described in words or numbers can be encoded (and thus transmitted) as binary sequences of finite length. [* * *] High-speed automatic telegraph instruments were the ancestors of modern computers and gave the electromagnetic data-processing industry its start. [...] Like the molecules that convey hereditary information between living cells, telegraph equipment performed the function of recording, storing, and transferring sequences of code. (pp. 143-44)

See this chapter, then, as background for later AI and perhaps AL.

Also see for dating to 1835 the first appearance in print of "cybernétique" (p. 141) and to 1858 the introduction of "perforated paper tape as a means of automatic signal transmission" (143), and a place to start a history of paper tape as an important way to store, transmit, and share information, "augmented by ubiquitous punched-card equipment" and punched cards (p. 144 & passim). For a comic treatment of a life created via IBM card(s), see "For a While There, Herbert Marcuse, I Thought You Were Maybe Right About Alienation and Eros" (1972). And see this chapter for a brief reference to "Theseus, a mechanical mouse constructed by information theorist Claude Shannon in 1950. Guided by the intelligence of seventy-two electromagnetic relays, Theseus was able to find its way around a 5 x 5 maze" (p. 150).

9. Theory of Games and Economic Behavior 

Economic behavior and other "Games" and their relationship with intelligence (perhaps even mind): "That brains in nature operate more as economies than as digital computers should come as no surprise. Indeed, economic principles are the only known way to evolve intelligent systems from primitive components that are not intelligent themselves," as no less a theorist than Marvin Minsky said in The Society of Mind (1986).[8] Dyson notes that if there is a "goal of life and intelligence" (a telos or finis in older terminologies) it would be difficult to specify, but there is "a general aim" that "can be detected in the tendency toward a local decrease in the entropy of that fragment of the universe considered to be intelligent or alive," which perhaps is to say, in a way open to measurement, "that life and intelligence tend to organize themselves" (p. 170).

Notes 1965 speculation by Irving J. Good "on the development of an ultraintelligent machine, later described as 'a machine that believes that people cannot think'" ("Ethical Machines" [unpublished draft]). Complicating questions of AI is that "Central to the development of an indisputable mechanical intelligence is the question of what meaning is and how meaning is evolved. In Good's analysis, meaning and economy are deeply intertwined; where there is meaning, there is an economy of things representing information (or information representing things) by which the meaning of things can be evaluated and from which meaningful information structures can be built" (p. 171).

10. There's Plenty of Room at the Top

The chapter title alludes playfully to the 1957 novel and 1959 film Room at the Top, as used by Richard Feynman in a December 1959 talk, "There's Plenty of Room at the Bottom," which included imagining small machines instructed to build successively smaller and smaller machines, as with computers (p. 173). Dyson is also interested in "Large, self-organizing systems" (p. 186) such as those named "Leviathan" and "Pandemonium" and the more familiar SAGE (from Wikipedia: "Semi-Automatic Ground Environment [...] a system of large computers and associated networking equipment that coordinated data from many radar sites and processed it to produce a single unified image of the airspace over a wide area. SAGE directed and controlled the NORAD response to a possible Soviet air attack [...]").[9]

Relevant here:

Consideration of "The human-machine system" with such operations, including the work of Beatrice and Sydney Rome which used a unit of their invention, "the 'taylor' (after F. W. Taylor, founder of time-and-motion studies)" (p. 182 f.).
The observation by Nils Barricelli in 1963 that alien forms of intelligence might have trouble recognizing one another's intelligence, leading Dyson to comment, "Likewise, to conclude from the failure of individual machines to act intelligently that machines are not intelligent may present a spectacular misunderstanding of the nature of intelligence among machines" (p. 187).
"The more we come to understand the information-processing systems epitomized by the human brain, the more we find them to be functioning as evolutionary systems, and the more we come to understand evolutionary systems, the more we discover them to be operating as information-processing machines" (p. 188).
"The genesis of life or intelligence within or among computers goes approximately as follows: (1) make things complicated enough, and (2) either wait for something to happen by accident or make something happen by design" (p. 177). For "accident," see the coming to consciousness and self-awareness of the computer "Mike/Michelle"[10] in R. A. Heinlein's The Moon Is a Harsh Mistress.
"Individual cells are persistent patterns composed of molecules that come and go; organisms are persistent patterns composed of individual cells that come and go; species are persistent patterns composed of individuals that come and go. Machines, as [Samuel] Butler showed with his analysis of vapor engines in 1863 are enduring patterns composed of parts that are replaced from time to time and reproduced from on generation to the next. A global organism — and a global intelligence — is the next logical type [...]" (p. 191), getting to the point identified in Dyson's subtitle for this book. 

11. Last and First Men

Begins with a good deal of information about Olaf Stapledon's service in World War I as an ambulance driver (as the final chapter will get into the Great War service of G. B. Dyson's grandfather). The title and headnote are from Stapledon's Last and First Men (1930);[11] Stapledon also wrote Star Maker (1937).[12] In a personal communication to Dyson, Irving J. Good noted that he'd read both of Stapledon's SF novels by 1948; in 1965, Good speculated "on the development of ultraintelligent machines" and "saw wireless communication as the best way to construct an ultraparallel information-processing machine" (p. 205). On a large enough scale, such interconnectedness might yield — in phrases elsewhere in the chapter — "Distributed intelligence, or composite mind" (p. 209).

12. Fiddling While Rome Burns

The chapter title, as well as being a figure of speech, is the title of the 1954 autobiography of G. B. Dyson's grandfather (p. 221), director of the Royal College of Music in London during the Blitz. (The chapter deals seriously with music and how music might figure in human evolution and the development of consciousness.) The reference is also to what we humans are doing while the developments Dyson discusses in this book proceed apace, an implication reinforced by Dyson's discussion of not just Robert Greene's 1594 play Friar Bacon and Friar Bungay (which a reader might know from a course in Tudor drama) but also the Famous History of Frier [sic] Bacon, an earlier anonymous pamphlet, and the background of both (pp. 212-14). The moral of these stories centers on a talking brass (mechanical) head that must be listened to when it speaks, not when we find it convenient.

Dyson also brings in A. C. Clarke's Childhood's End, with its happy/tragic ending of the assumption of humanity's children into the Overmind, marking an apotheosis for our species and the quickly approaching extinction of Homo sapiens in our current form. In a combination of hard science and what many of us would see as mysticism — as in Childhood's End and Clarke and Kubrick's 2001 (p. 224) — Dyson notes how Samuel Butler, moving toward his break with Darwin, in 1877 "sought to encompass Darwinism — from the life of germs to the germination of species — within a framework of all-pervasive mind" (p. 217). He notes Olaf Stapledon as believing "that the mind of the individual and the mind of the species need not remain estranged. 'Our experience was enlarged not only spatially but temporally [...]' explains the narrator of Last and First Men, concerning the composite mind toward which our species had inexorably evolved" (pp. 217-18), but with that composite "race-mind" possibly able to relate to the "First Men," our species. Possibly. Noting, with Childhood's End in mind: "Alien beings are unlikely to bear any resemblance, mental or physical, to human beings, and it is presumptuous to assume that artificial intelligence will operate on a level, or a time scale, that we are able to comprehend. As we merge toward collective intelligence, our own language and intelligence may be relegated to a subsidiary role or left behind. When the brass head speaks, there is no guarantee that it will speak in a language that we can understand" (p. 224). More optimistically, "If all goes well, our children will be linked ever more closely to the myriad ganglia embedded in their lives while remaining members of the human race. In the distant future, they may look back on us as children and wonder how, before symbiosis with telepathic machines, it was possible for us to communicate, or even think" — and/but that's "If all goes well" (p. 226).

RDE, finishing, 23Jan-13Feb22