[Cellular automaton] rule emulations
The network below shows which quiescent symmetric elementary rules can emulate which with blocks of length 8 or less.
Typical rough characterizations of mathematical results—independent of their detailed history—might then end up being something like the following:
• lemma: short theorem appearing at an intermediate stage in a proof
• corollary: theorem connected by a short proof to an existing theorem
• easy theorem: theorem having a short proof
• difficult theorem: theorem having a long proof
• elegant theorem: theorem whose statement is short and somewhat unique
• interesting theorem (see page 817): theorem that cannot readily be deduced from earlier theorems, but is well connected
• boring theorem: theorem for which there are many others very much like it
• useful theorem: theorem that leads to many new ones
• powerful theorem: theorem that substantially reduces the lengths of proofs needed for many others
• surprising theorem: theorem that appears in an otherwise sparse part of the network of theorems
• deep theorem: theorem that connects components of the network of theorems that are otherwise far away
• important theorem: theorem that allows a broad new area of the network to be reached.
ElementaryRule works by converting num into a base 2 digit sequence, padding with zeros on the left so as to make a list of length 8.
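The same conversion can be sketched in Python (an illustrative translation, not the Wolfram Language original; the function name is hypothetical):

```python
# Convert a rule number into its base 2 digit sequence, padded with
# zeros on the left to make a list of length 8 -- the Python analog
# of what ElementaryRule does.
def elementary_rule(num):
    digits = []
    for _ in range(8):
        digits.append(num % 2)
        num //= 2
    return digits[::-1]  # most significant digit first

print(elementary_rule(30))  # [0, 0, 0, 1, 1, 1, 1, 0]
```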
And by using rules such as s[x___, 1, 0, y___] → {s[x, 0, 1, 0, y], Length[s[x]]} one can keep track of the positions at which substitutions are made. (StringReplace replaces all occurrences of a given substring, not just the first one, so cannot be used directly as an alternative to having a flat function.)
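The idea of recording the position of each substitution can be sketched in Python (a hypothetical helper for illustration, replacing only the first occurrence, as a sequential substitution system requires):

```python
# Apply the substitution 1,0 -> 0,1,0 at the first matching position,
# returning the new sequence together with the position of the match.
def substitute_once(seq):
    for i in range(len(seq) - 1):
        if seq[i:i + 2] == [1, 0]:
            return seq[:i] + [0, 1, 0] + seq[i + 2:], i
    return seq, None  # no match: sequence unchanged

seq, pos = substitute_once([1, 1, 0, 1])
print(seq, pos)  # [1, 0, 1, 0, 1] 1
```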
The dimension of a pattern can be computed by looking at how the number of grid squares that have any gray in them varies with the length a of the edge of each grid square.
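A minimal box-counting sketch of this procedure in Python (the helper name and example data are hypothetical; the dimension estimate is log N(a)/log(1/a) as a shrinks):

```python
# Count grid squares of edge length a that contain any point of the
# pattern; the growth of this count as a shrinks gives the dimension.
def box_count(points, a):
    # Each point is binned into the grid square that contains it.
    return len({(int(x // a), int(y // a)) for x, y in points})

# Example: points along the diagonal of the unit square (a 1D set).
points = [(i / 1000, i / 1000) for i in range(1000)]
print(box_count(points, 0.1))   # 10
print(box_count(points, 0.01))  # 100
# log(10)/log(10) = log(100)/log(100) = 1: the estimated dimension is 1.
```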
The necessary transformation is the so-called Lorentz transformation
{t, x} → {t - v x/c^2, x - v t}/Sqrt[1 - v^2/c^2]
And from this the time dilation factor 1/Sqrt[1 - v^2/c^2] shown on page 524 follows, as well as the length contraction factor Sqrt[1 - v^2/c^2].
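A quick numeric check of the transformation in Python (an illustrative translation with units where c = 1; the function name is hypothetical):

```python
from math import sqrt

# Lorentz transformation {t, x} -> {t - v x/c^2, x - v t}/Sqrt[1 - v^2/c^2]
def lorentz(t, x, v, c=1.0):
    g = 1.0 / sqrt(1 - v**2 / c**2)   # the time dilation factor
    return (g * (t - v * x / c**2), g * (x - v * t))

# At v = 0.6 c: dilation factor 1/Sqrt[1 - 0.36] = 1.25,
# length contraction factor Sqrt[1 - 0.36] = 0.8.
t2, x2 = lorentz(1.0, 0.0, 0.6)
print(t2, x2)  # ≈ 1.25, -0.75
```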
With a rule given in this form, a single step in the evolution of the Turing machine can be implemented with the function
TMStep[rule_List, {s_, a_List, n_}] /; (1 ≤ n ≤ Length[a]) := Apply[{#1, ReplacePart[a, #2, n], n + #3} &, Replace[{s, a〚n〛}, rule]]
The evolution for many steps can then be obtained using
TMEvolveList[rule_, init_List, t_Integer] := NestList[TMStep[rule, #]&, init, t]
An alternative approach is to represent the complete state of the Turing machine by MapAt[{s, #}&, list, n] , and then to use
TMStep[rule_, c_] := Replace[c, {a___, x_, h_List, y_, b___} → Apply[{{a, x, #2, {#1, y}, b}, {a, {#1, x}, #2, y, b}}〚#3〛 &, h /. rule]]
The result of t steps of evolution from a blank tape can also be obtained from (see also page 1143)
s = 1; a[_] = 0; n = 0;
Do[{s, a[n], d} = {s, a[n]} /. rule; n += d, {t}]
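The procedural version above translates directly into Python (an illustrative sketch with hypothetical names; a dictionary with default 0 stands in for the unbounded blank tape):

```python
from collections import defaultdict

# A rule maps (state, tape color) to (new state, new color, head offset),
# mirroring {s, a[n]} /. rule in the Do-loop version above.
def tm_evolve(rule, t):
    s, tape, n = 1, defaultdict(int), 0
    history = []
    for _ in range(t):
        s, tape[n], d = rule[(s, tape[n])]
        n += d
        history.append((s, dict(tape), n))
    return history

# An arbitrary 2-state, 2-color example rule:
rule = {(1, 0): (2, 1, 1), (1, 1): (1, 0, -1),
        (2, 0): (1, 1, -1), (2, 1): (2, 1, 1)}
print(tm_evolve(rule, 3)[-1])
```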
But at least for all the initial conditions up to length 28, the rule eventually just leads to behavior that repeats with a period of 1, 2, 6, 10, 28 or 40.
Ulam systems
Having formulated the system around 1960, Stanislaw Ulam and collaborators (see page 877) in 1967 simulated 120 steps of the process shown below, with black cells after t steps occurring at positions
Map[First, First[Nest[UStep[p[q[r[#1], #2]] &, {{1, 0}, {0, 1}, {-1, 0}, {0, -1}}, #] &, ({#, #} &)[{{{0, 0}, {0, 0}}}], t]]]
UStep[f_, os_, {a_, b_}] := {Join[a, #], #} &[f[Flatten[ Outer[{#1 + #2, #1} &, Map[First, b], os, 1], 1], a]]
r[c_] := Map[First, Select[Split[Sort[c], First[#1] == First[#2] &], Length[#] == 1 &]]
q[c_, a_] := Select[c, Apply[And, Map[Function[u, qq[#1, u, a]], a]] &]
p[c_] := Select[c, Apply[And, Map[Function[u, pp[#1, u]], c]] &]
pp[{x_, u_}, {y_, v_}] := Max[Abs[x - y]] > 1 || u == v
qq[{x_, u_}, {y_, v_}, a_] := x == y || Max[Abs[x - y]] > 1 || u == y || First[Cases[a, {u, z_} → z]] == y
These rules are fairly complicated, and involve more history than ordinary cellular automata.
But it is straightforward to define versions of entropy that take account of probabilities—and indeed the closest analog to the usual entropy in physics or information theory is obtained by taking the probabilities p[i] for the k^n blocks of length n (assuming k colors), then constructing
-Limit[Sum[p[i] Log[k, p[i]], {i, k^n}]/n, n → ∞]
I have tended to call this quantity measure entropy, though in other contexts, it is often just called entropy or information, and is sometimes called information dimension.
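An empirical estimate of this quantity can be sketched in Python (a hypothetical helper for illustration: block probabilities are estimated from a sample sequence, and the limit n → ∞ can only be approximated by taking n as large as the data allows):

```python
from math import log

# Estimate -Sum[p[i] Log[k, p[i]]]/n from the empirical probabilities
# of the length-n blocks occurring in seq (with k possible colors).
def measure_entropy(seq, k, n):
    counts = {}
    for i in range(len(seq) - n + 1):
        block = tuple(seq[i:i + n])
        counts[block] = counts.get(block, 0) + 1
    total = sum(counts.values())
    return -sum(c / total * log(c / total, k)
                for c in counts.values()) / n

# A constant sequence gives 0; an alternating one gives about 1/2 for
# blocks of length 2 (and tends to 0 as n grows, reflecting its order).
print(measure_entropy([0, 1] * 50, 2, 2))  # ≈ 0.5
```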