Universal computation is perhaps the single most important idea of the past hundred years: it is what has made computers and most of modern technology possible.
One might have thought that every different kind of computation would require a different kind of computer: an adding machine for addition, a multiplying machine for multiplication, and so on. But the crucial idea that emerged in the 1930s is that there can be a single universal computer that can simply be programmed to do any possible computation. In other words, one can have a single piece of hardware, and just change the software (or the input) to get different computations done. Many possible kinds of universal computers eventually became known, although just how common they are was not understood until Wolfram's recent work for A New Kind of Science.
Turing machines were invented by Alan Turing in 1936, and were the first clear example of a system that could be a universal computer. (The proof of Gödel's Theorem in 1931 implicitly used a similar idea; Alonzo Church's "lambda calculus" was more explicit.) Turing constructed (on paper) a particular Turing machine that he showed could be programmed to emulate any other Turing machine.
Turing's universal Turing machine was of great theoretical importance. But his actual machine was complicated and, in modern terms, hacky. Today, almost any course on theoretical computer science will discuss abstract Turing machines and use them to prove theorems. But such a course is unlikely to show many explicit Turing machines. And practical computers are certainly not designed to work anything like Turing machines in their details.
In the 1950s and 1960s, it became almost a sport among a number of researchers to try to find the simplest possible universal Turing machine. With some effort, it was established that no machine with only 2 states and 2 colors could be universal. The best result was found by Marvin Minsky in the early 1960s: a Turing machine with 7 states and 4 colors, so that its rule specifies what to do in 7 × 4 = 28 different cases.
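To make the state-and-color rule format concrete, here is a minimal sketch in Python. The rule table shown is a small hypothetical 2-state, 2-color example invented for illustration, not Minsky's actual machine; the point is only the shape of the rule: one case per state-color pair, each specifying what to write, which way to move the head, and which state to enter next.

```python
# Minimal sketch of the standard Turing machine rule format. A machine
# with s states and c colors has s * c rule cases (7 * 4 = 28 for
# Minsky's machine). Each case maps (state, color read) to
# (color to write, head movement, next state). This table is a small
# hypothetical example, not any machine discussed in the text.
rule = {
    (0, 0): (1, +1, 1),
    (0, 1): (1, -1, 0),
    (1, 0): (1, -1, 0),
    (1, 1): (0, +1, 1),
}

def run(steps):
    tape = {}                 # sparse tape; unwritten cells hold color 0
    head, state = 0, 0
    for _ in range(steps):
        color = tape.get(head, 0)
        write, move, state = rule[(state, color)]
        tape[head] = write
        head += move
    return tape

print(run(20))
```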
For many years, only a small amount of rather fragmented work was done on small Turing machines. And Minsky's machine was never convincingly improved upon. (Though machines were constructed that involved different tradeoffs between numbers of states and colors, and required different kinds of initial conditions to show universality.)
But the main problem was that nobody could see any particularly good reason--other than curiosity--to study small Turing machines.
In the early 1980s, Stephen Wolfram started studying systems called cellular automata that, like Turing machines, correspond to simple programs. (Roughly, while Turing machines can be thought of as minimal idealizations of computers that process one piece of information at a time, cellular automata can be thought of as minimal idealizations of parallel computers that process a whole array of pieces of information at a time.)
At first, Wolfram assumed that cellular automata with simple rules would always have to have simple behavior. But doing computer experiments he found that this was not true, and that instead even some of the very simplest possible cellular automata could produce extremely complex behavior. Building on this remarkable result, he then looked at many kinds of systems in nature and argued that they generate complexity using the same essential mechanism as cellular automata.
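To give a flavor of the experiments involved, here is a minimal sketch in Python (the helper name ca_evolve is our own, chosen for illustration). It evolves an elementary 2-color, nearest-neighbor cellular automaton from a single black cell, using Wolfram's standard rule-number encoding; running it with rule 110, which is discussed below, prints the kind of complex pattern at issue.

```python
# Evolve an elementary (2-color, nearest-neighbor) cellular automaton
# from a single black cell. In Wolfram's standard encoding, bit k of the
# rule number gives the new cell value for the neighborhood whose three
# cells form the binary number k.

def ca_evolve(rule_number, width=63, steps=31):
    rule = [(rule_number >> k) & 1 for k in range(8)]
    row = [0] * width
    row[width // 2] = 1                      # single black cell
    for _ in range(steps):
        print("".join(".#"[c] for c in row))
        row = [rule[(row[(i - 1) % width] << 2)
                    | (row[i] << 1)
                    | row[(i + 1) % width]]
               for i in range(width)]

ca_evolve(110)    # a very simple rule with extremely complex behavior
```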
By the 1990s, Wolfram had expanded his investigations to the general study of simple programs--the general "exploration of the computational universe". And from studying many different types of simple programs, Wolfram formulated his Principle of Computational Equivalence, which roughly states that different programs that show complex behavior can typically be characterized as performing computations of equivalent sophistication. One of the implications of this principle is that simple programs that do not show obviously simple behavior should be computation universal.
Wolfram picked out the "rule 110" cellular automaton as the first case for which to prove universality. And with help from one of his research assistants, this was done--establishing a remarkably low threshold for universality. Building on this result, Wolfram then found a 2-state, 5-color Turing machine which is universal, and this is now the simplest known universal Turing machine.
Before Wolfram's work, it had generally been assumed that universality was a rare property of systems--one that, in a sense, had to be specially engineered. One of the implications of Wolfram's work is that computation universality is in fact ubiquitous, and should therefore be occurring in many systems in nature and elsewhere. One consequence is that--through what Wolfram calls computational irreducibility--this fundamentally limits what we can predict about such systems, and what traditional theoretical science can do with them. Another consequence is that it should be much easier to find systems in nature--say in nanotechnology--that can act as computers.
Wolfram's A New Kind of Science, published in 2002, had the immediate consequence of showing that finding minimal examples of computation universality was not just a curiosity, but was something connected to core questions about science, nature, and the foundations of mathematics and computer science.
In an effort to find the boundary between simple non-universal behavior and universality, Wolfram studied, among other systems, simple Turing machines. He showed that there is a 2-state, 5-color Turing machine that is universal. He investigated all 3-state, 2-color Turing machines, and found their behavior too simple to be likely to support universality. He then investigated the 2,985,984 possible 2-state, 3-color Turing machines, and found 14 cases essentially equivalent to the Turing machine that is the subject of this prize, whose behavior seemed complex enough that it could support universality.
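As an aside, the count of 2,985,984 follows directly from the rule format sketched earlier: a quick check, assuming the standard convention that each rule case independently specifies a written color, a head direction, and a next state.

```python
# Why there are 2,985,984 two-state, three-color machines: the rule has
# 2 * 3 = 6 state-color cases, and each case independently specifies a
# written color (3 choices), a head direction (2), and a next state (2).
states, colors = 2, 3
print((colors * 2 * states) ** (states * colors))   # 12**6 = 2985984
```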
However, upon investigation, he discovered that a large part of this Turing machine's behavior was actually simple to analyze, being equivalent to the rule 60 cellular automaton that shows only regular nested or fractal behavior. But a small residue remained. And it is this residue that may be able to support universal computation.
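For comparison with the complex behavior above, running the earlier ca_evolve sketch with rule 60 shows the purely nested pattern referred to here:

```python
# Rule 60: each new cell is (left neighbor XOR center), which produces
# only a regular nested, fractal pattern from a single black cell.
ca_evolve(60)
```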
Wolfram considers this Turing machine just on the edge between simplicity and complexity. It has some major simplifying features, but also some complex behavior. Will it turn out to be universal? Or will it turn out that every aspect of its behavior is just determined by some simple function from number theory? The purpose of this prize is to encourage finding that out.
More broadly, the prize is intended to encourage research that will help answer foundational questions about the structure of the computational universe. The subject matter of A New Kind of Science is, as the title of Wolfram's book suggests, not something that fits into existing areas of science. This prize is an attempt to provide support outside the normal channels of scientific funding. We particularly hope that the prize will stimulate young researchers--whether "professional" or "amateur"--to pursue this exciting direction.
Wolfram often likens the exploration of the computational universe to the early days of fields like chemistry. There are many chemicals to study. Some will reveal important theoretical concepts. Others will be useful. And some will just be curiosities. Where will the 2,3 Turing machine fit in? We're keen to find out.