SOME HISTORICAL NOTES



From: Stephen Wolfram, A New Kind of Science
Notes for Chapter 2: The Crucial Experiment
Section: Why These Discoveries Were Not Made Before
Page 878

Close approaches. The basic phenomena in this chapter have come at least somewhat close to being discovered many times in the past. The historical progression of primary examples of this seems to be as follows:

500s-200s BC: Simply-stated problems such as finding primes or perfect numbers are presumably seen to have complicated solutions, but no general significance is attached to this (see pages 132 and 912).

1200s: Fibonacci sequences, Pascal’s triangle and other rule-based numerical constructions are studied, but are found to show only simple behavior.

1500s: Leonardo da Vinci experiments with rules corresponding to simple geometrical constraints (see page 875), but finds only simple forms satisfying these constraints.

1700s: Leonhard Euler and others compute continued fraction representations for numbers with simple formulas (see pages 143 and 917), noting regularity in some cases, but making no comment in other cases.

1700s and 1800s: The digits of π and other transcendental numbers are seen to exhibit apparent randomness (see page 136), but the idea of thinking about this randomness as coming from the process of calculation does not arise.

1800s: The distribution of primes is studied extensively - but mostly its regularities, rather than its irregularities, are considered. (See page 132.)

1800s: Complicated behavior is found in the three-body problem, but it is assumed that with better mathematical techniques it will eventually be resolved. (See page 974.)

1880s: John Venn and others note the apparent randomness of the digits of π, but somehow take it for granted.

1906: Axel Thue studies simple substitution systems (see page 895) and finds behavior that seems complicated - though it turns out to be nested.

1910s: Gaston Julia and others study iterated maps, but concentrate on properties amenable to simple description.

1920: Moses Schönfinkel introduces combinators (see page 1127) but considers mostly cases specifically constructed to correspond to ordinary logical functions.

1921: Emil Post looks at a simple tag system (see page 896) whose behavior is difficult to predict, but, failing to prove anything about it, goes on to other problems.

1920s: The Ising model is introduced, but only statistics of configurations, and not any dynamics, are studied.

1931: Kurt Gödel establishes Gödel’s Theorem (see page 782), but the constructions he uses are so complicated that he and others assume that simple systems can never exhibit similar phenomena.

Mid-1930s: Alan Turing, Alonzo Church, Emil Post and others introduce various models of computation, but use them in constructing proofs, and do not investigate the actual behavior of simple examples.

1930s: The 3n+1 problem (see page 906) is posed, and unpredictable behavior is found, but the main focus is on proving a simple result about it.
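
For reference, the iteration behind the 3n+1 problem is simple to state: halve n when it is even, and replace it by 3n+1 when it is odd; the simple result referred to above is the still-unproven conjecture that every starting value eventually reaches 1. A minimal Python sketch (illustrative only; the helper name collatz_steps is ours, not from the book):

    def collatz_steps(n):
        # Illustrative sketch of the 3n+1 iteration; not code from the book.
        # Count the iterations needed for n to reach 1.
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    # Step counts fluctuate irregularly even for consecutive starting values,
    # which is the unpredictable behavior mentioned above.
    print([collatz_steps(n) for n in range(1, 20)])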

Late 1940s and 1950s: Pseudorandom number generators are developed (see page 976), but are viewed as tricks whose behavior has no particular scientific significance.

Late 1940s and early 1950s: Complex behavior is occasionally observed in fairly simple electronic devices built to illustrate ideas of cybernetics, but is usually viewed as something to avoid.

1952: Alan Turing applies computers to studying biological systems, but uses traditional mathematical models rather than, say, Turing machines.

1952-1953: John von Neumann makes theoretical studies of complicated cellular automata, but does not try looking at simpler cases, or simulating the systems on a computer.

Mid-1950s: Enrico Fermi and collaborators simulate simple systems of nonlinear springs on a computer, but do not notice that simple initial conditions can lead to complicated behavior.

Mid-1950s to mid-1960s: Specific 2D cellular automata are used for image processing; a few rules showing slightly complex behavior are noticed, but are considered of purely recreational interest.

Late 1950s: Computer simulations of iterated maps are done, but concentrate mostly on repetitive behavior. (See page 921.)

Late 1950s: Ideas from dynamical systems theory begin to be applied to systems equivalent to 1D cellular automata, but details of specific behavior are not studied except in trivial cases.

Late 1950s: Idealized neural networks are simulated on digital computers, but the somewhat complicated behavior seen is considered mainly a distraction from the phenomena of interest, and is not investigated. (See page 1104.)

Late 1950s: Berni Alder and Thomas Wainwright do computer simulations of dynamics of hard sphere idealized molecules, but concentrate on large-scale features that do not show complexity. (See page 1003.)

1956-1959: Solomon Golomb simulates nonlinear feedback shift registers - some with rules close to rule 30 - but studies mainly their repetition periods not their detailed complex behavior. (See page 1093.)

1960, 1967: Stanislaw Ulam and collaborators simulate systems close to 2D cellular automata, and note the appearance of complicated patterns (see above).

1961: Edward Fredkin simulates the 2D analog of rule 90 and notes features that amount to nesting (see above).

Early 1960s: Students at MIT try running many small computer programs, and in some cases visualizing their output. They discover various examples (such as "munching foos") that produce nested behavior (see page 871), but do not go further.

1962: Marvin Minsky and others study many simple Turing machines, but do not go far enough to discover the complex behavior shown on page 81.

1963: Edward Lorenz simulates a differential equation that shows complex behavior (see page 973), but concentrates on its lack of periodicity and sensitive dependence on initial conditions.

Mid-1960s: Simulations of random Boolean networks are done (see page 938), but concentrate on simple average properties.

1970: John Conway introduces the Game of Life 2D cellular automaton (see above).

1971: Michael Paterson considers a class of simple 2D Turing machines that he calls worms and that exhibit complicated behavior (see page 932).

1973: I look at some 2D cellular automata, but force the rules to have properties that prevent complex behavior (see page 864).

Mid-1970s: Benoit Mandelbrot develops the idea of fractals (see page 936), and emphasizes the importance of computer graphics in studying complex forms.

Mid-1970s: Tommaso Toffoli simulates all 4096 2D cellular automata of the simplest type, but studies mainly just their stabilization from random initial conditions.

Late 1970s: Douglas Hofstadter studies a recursive sequence with complicated behavior (see page 909), but does not take it far enough to conclude much.

1979: Benoit Mandelbrot discovers the Mandelbrot set (see page 936) but concentrates on its nested structure, not its overall complexity.

1981: I begin to study 1D cellular automata, and generate a small picture analogous to the one of rule 30 on page 27, but fail to study it.

1984: I make a detailed study of rule 30, and begin to understand the significance of it and systems like it.
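
For concreteness, rule 30 updates each cell of a row of black and white cells from the previous colors of the cell and its two immediate neighbors, with the eight possible neighborhoods mapped to new colors by the successive bits of the number 30 (binary 00011110), which is where the rule's name comes from. A minimal Python sketch (illustrative only; it uses cyclic boundary conditions rather than the infinite white background of the pictures in the book, and the function name rule30 is ours):

    def rule30(width=79, steps=35):
        # Illustrative sketch; not code from the book.
        # Evolve rule 30 from a single black cell in the center.
        cells = [0] * width
        cells[width // 2] = 1
        for _ in range(steps):
            print("".join("#" if c else " " for c in cells))
            # The new value is the bit of 30 indexed by the three-cell
            # neighborhood read as a binary number (left, center, right).
            cells = [(30 >> (4 * cells[(i - 1) % width]
                             + 2 * cells[i]
                             + cells[(i + 1) % width])) & 1
                     for i in range(width)]

    rule30()

Running this shows the hallmark of rule 30: regular diagonal stripes along one edge of the pattern, but a seemingly random texture through the rest.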


Stephen Wolfram, A New Kind of Science (Wolfram Media, 2002), page 878.
© 2002, Stephen Wolfram, LLC