Search NKS | Online
531 - 540 of 630 for Random
For as I will discuss in Chapter 12, it seems far from inconceivable that some of the extraterrestrial radio and other signals that we pick up and assume to be random noise could in fact be meaningful messages—but just encoded in a way that can be recognized only by higher forms of perception and analysis than those we have so far applied to them.
… respects they seem far more random than patterns produced by systems like rule 110 that we already know are universal.
• 1700s and 1800s: The digits of π and other transcendental numbers are seen to exhibit apparent randomness (see page 136), but the idea of thinking about this randomness as coming from the process of calculation does not arise.
• 1800s: The distribution of primes is studied extensively—but mostly its regularities, rather than its irregularities, are considered. … (See page 972.)
• 1880s: John Venn and others note the apparent randomness of the digits of π, but somehow take it for granted.
• 1906: Axel Thue studies simple substitution systems (see page 893) and finds behavior that seems complicated—though it turns out to be nested.
• 1910s: Gaston Julia and others study iterated maps, but concentrate on properties amenable to simple description.
• 1920: Moses Schönfinkel introduces combinators (see page 1121) but considers mostly cases specifically constructed to correspond to ordinary logical functions.
• 1921: Emil Post looks at a simple tag system (see page 894) whose behavior is difficult to predict, but, failing to prove anything about it, goes on to other problems.
• 1920s: The Ising model is introduced, but only statistics of configurations, and not any dynamics, are studied.
• 1931: Kurt Gödel establishes Gödel's Theorem (see page 782), but the constructions he uses are so complicated that he and others assume that simple systems can never exhibit similar phenomena.
• Mid-1930s: Alan Turing, Alonzo Church, Emil Post, etc. introduce various models of computation, but use them in constructing proofs, and do not investigate the actual behavior of simple examples.
• 1930s: The 3n + 1 problem (see page 904) is posed, and unpredictable behavior is found, but the main focus is on proving a simple result about it.
• Late 1940s and 1950s: Pseudorandom number generators are developed (see page 974), but are viewed as tricks whose behavior has no particular scientific significance.
• Late 1940s and early 1950s: Complex behavior is occasionally observed in fairly simple electronic devices built to illustrate ideas of cybernetics, but is usually viewed as something to avoid.
• 1952: Alan Turing applies computers to studying biological systems, but uses traditional mathematical models rather than, say, Turing machines.
• 1952-1953: John von Neumann makes theoretical studies of complicated cellular automata, but does not try looking at simpler cases, or simulating the systems on a computer.
• Mid-1950s: Enrico Fermi and collaborators simulate simple systems of nonlinear springs on a computer, but do not notice that simple initial conditions can lead to complicated behavior.
• Mid-1950s to mid-1960s: Specific 2D cellular automata are used for image processing; a few rules showing slightly complex behavior are noticed, but are considered of purely recreational interest.
• Late 1950s: Computer simulations of iterated maps are done, but concentrate mostly on repetitive behavior. … They discover various examples (such as "munching foos") that produce nested behavior (see page 871), but do not go further.
• 1962: Marvin Minsky and others study many simple Turing machines, but do not go far enough to discover the complex behavior shown on page 81.
• 1963: Edward Lorenz simulates a differential equation that shows complex behavior (see page 971), but concentrates on its lack of periodicity and sensitive dependence on initial conditions.
• Mid-1960s: Simulations of random Boolean networks are done (see page 936), but concentrate on simple average properties.
• 1970: John Conway introduces the Game of Life 2D cellular automaton (see above).
• 1971: Michael Paterson considers a class of simple 2D Turing machines that he calls worms and that exhibit complicated behavior (see page 930).
• 1973: I look at some 2D cellular automata, but force the rules to have properties that prevent complex behavior (see page 864).
• Mid-1970s: Benoit Mandelbrot develops the idea of fractals (see page 934), and emphasizes the importance of computer graphics in studying complex forms.
• Mid-1970s: Tommaso Toffoli simulates all 4096 2D cellular automata of the simplest type, but studies mainly just their stabilization from random initial conditions.
• Late 1970s: Douglas Hofstadter studies a recursive sequence with complicated behavior (see page 907), but does not take it far enough to conclude much.
• 1979: Benoit Mandelbrot discovers the Mandelbrot set (see page 934) but concentrates on its nested structure, not its overall complexity.
• 1981: I begin to study 1D cellular automata, and generate a small picture analogous to the one of rule 30 on page 27, but fail to study it.
• 1984: I make a detailed study of rule 30, and begin to understand the significance of it and systems like it.
As discussed on page 139, such digit sequences often seem locally very random.
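As a concrete illustration (a sketch of mine, not from the book, using the square root of 2 as a representative example of such a number): generating a few hundred digits and tallying them shows the roughly uniform digit frequencies one would expect of a locally random sequence.

    # Sketch: digit frequencies in the decimal expansion of sqrt(2).
    from decimal import Decimal, getcontext
    from collections import Counter

    getcontext().prec = 520                       # enough precision for ~500 digits
    digits = str(Decimal(2).sqrt()).replace(".", "")[:500]

    counts = Counter(digits)
    for d in "0123456789":
        print(d, counts[d])                       # each digit occurs roughly 50 times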
If one chooses an n-variable Boolean function at random out of the 2^(2^n) possibilities, it is typical that regardless of depth a formula involving at least 2^n/n operations will be needed to represent it.
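A small arithmetic sketch (mine, not part of the original text) of how these two quantities grow with n: the count of distinct n-variable Boolean functions, 2^(2^n), explodes far faster than the 2^n/n operation count that a typical function requires.

    # Sketch: 2^(2^n) distinct functions versus the 2^n/n formula-size bound.
    for n in range(1, 7):
        functions = 2 ** (2 ** n)     # number of distinct n-variable Boolean functions
        bound = 2 ** n / n            # operations typically needed in a formula
        print(n, functions, round(bound, 1))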
History [of Central Limit Theorem]
That averages of random numbers follow bell-shaped distributions was known in the late 1600s.
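As a quick numerical illustration (a sketch of mine, not part of the original note): averaging batches of uniform random numbers, which individually are flat-distributed, produces sample means whose histogram is already close to a bell shape.

    # Sketch: averages of uniform random numbers follow a bell-shaped distribution.
    import random

    def sample_mean(k):
        return sum(random.random() for _ in range(k)) / k

    means = [sample_mean(30) for _ in range(100000)]

    # Crude text histogram: counts peak near 0.5 and fall off roughly symmetrically.
    bins = [0] * 20
    for m in means:
        bins[min(int(m * 20), 19)] += 1
    for i, count in enumerate(bins):
        print(f"{i / 20:.2f} {'#' * (count // 1000)}")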
Quantum effects
Over the years, many suggested effects have been thought to be characteristic of quantum systems:
• Basic quantization (1913): mechanical properties of particles in effectively bounded systems are discrete;
• Wave-particle duality (1923): objects like electrons and photons can be described as either waves or particles;
• Spin (1925): particles can have intrinsic angular momentum even if they are of zero size;
• Non-commuting measurements (1926): one can get different results doing measurements in different orders;
• Complex amplitudes (1926): processes are described by complex probability amplitudes;
• Probabilism (1926): outcomes are random, though probabilities for them can be computed;
• Amplitude superposition (1926): there is a linear superposition principle for probability amplitudes;
• State superposition (1926): quantum systems can occur in superpositions of measurable states;
• Exclusion principle (1926): amplitudes cancel for fermions like electrons to go in the same state;
• Interference (1927): probability amplitudes for particles can interfere, potentially destructively;
• Uncertainty principle (1927): quantities like position and momentum have related measurement uncertainties;
• Hilbert space (1927): states of systems are represented by vectors of amplitudes rather than individual variables;
• Field quantization (1927): only discrete numbers of any particular kind of particle can in effect ever exist;
• Quantum tunnelling (1928): particles have amplitudes to go where no classical motion would take them;
• Virtual particles (1932): particles can occur for short times without their usual energy-momentum relation;
• Spinors (1930s): fermions show rotational invariance under SU(2) rather than SO(3);
• Entanglement (1935): separated parts of a system often inevitably behave in irreducibly correlated ways;
• Quantum logic (1936): relations between events do not follow ordinary laws of logic;
• Path integrals (1941): probabilities for behavior are obtained by summing contributions from many paths;
• Imaginary time (1947): statistical mechanics is like quantum mechanics in imaginary time;
• Vacuum fluctuations (1948): there are continual random field fluctuations even in the vacuum;
• Aharonov–Bohm effect (1959): magnetic fields can affect particles even in regions where they have zero strength;
• Bell's inequalities (1964): correlations between events can be larger than in any ordinary probabilistic system (a numerical sketch follows this list);
• Anomalies (1969): virtual particles can have effects that violate the original symmetries of a system;
• Delayed choice experiments (1978): whether particle or wave features are seen can be determined after an event;
• Quantum computing (1980s): there is the potential for fundamental parallelism in computations.
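To make the Bell-inequality item above concrete, here is a small numerical sketch of mine (it is not part of the original list, and it assumes the standard singlet-state correlation E(a, b) = -cos(a - b)): the CHSH combination of four such correlations reaches 2*sqrt(2), whereas no ordinary probabilistic (local hidden-variable) model can exceed 2.

    # Sketch: CHSH value for a spin singlet, assuming E(a, b) = -cos(a - b).
    from math import cos, pi, sqrt

    def E(a, b):
        return -cos(a - b)          # correlation of the two measurement outcomes

    a1, a2 = 0, pi / 2              # measurement angles on one side
    b1, b2 = pi / 4, 3 * pi / 4     # measurement angles on the other side

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S), 2 * sqrt(2))      # |S| = 2*sqrt(2) ~ 2.83, above the classical bound 2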
… And probabilistic effects can arise from any of the mechanisms for randomness discussed in Chapter 7.
And indeed, it seems that the basic themes of repetition, nesting, randomness and localized structures that we already saw in specific cellular automata in the previous chapter are actually very general, and in fact represent the dominant themes in the behavior of a vast range of different systems.
Indeed, for a very large fraction of randomly chosen partial differential equations, what one finds is that after just a small amount of time, the gray level one gets either becomes infinitely large or starts to vary infinitely quickly in space or time.
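To see the kind of behavior being described, here is a generic sketch of mine (the equation below is a standard textbook example, not one of the equations shown in the book): an explicit finite-difference run of u_t = u_xx + u^2 from small random initial values, in which the solution becomes effectively infinite after a short time.

    # Sketch: a simple nonlinear PDE, u_t = u_xx + u^2, typically blows up quickly.
    import random

    n, dx, dt = 100, 0.1, 0.004                   # grid points, space step, time step
    u = [random.uniform(0.0, 1.0) for _ in range(n)]

    for step in range(2000):
        lap = [(u[i - 1] - 2 * u[i] + u[(i + 1) % n]) / dx ** 2 for i in range(n)]
        u = [u[i] + dt * (lap[i] + u[i] ** 2) for i in range(n)]
        m = max(abs(x) for x in u)
        if step % 100 == 0:
            print(step, m)                        # the maximum value grows without bound
        if m > 1e6:
            print("effective blow-up at step", step)
            break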
The results shown come from 500 steps of evolution starting from random initial conditions.
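The excerpt does not reproduce the rules in the picture, but the setup it describes can be sketched (my illustration, using rule 30 as a representative elementary cellular automaton) by running 500 steps from random initial conditions on a cyclic row of cells.

    # Sketch: 500 steps of rule 30 from random initial conditions on a cyclic row.
    import random

    rule = 30
    table = {(a, b, c): (rule >> (4 * a + 2 * b + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}

    width = 80
    row = [random.randint(0, 1) for _ in range(width)]

    for _ in range(500):
        row = [table[(row[i - 1], row[i], row[(i + 1) % width])] for i in range(width)]

    print("".join("#" if cell else "." for cell in row))   # final row after 500 steps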