141 - 150 of 213 for Block
Thus for example unlike in 1D there is no guarantee in 2D that among repeating configurations of a particular period there is necessarily one that consists just of a repetitive array of fixed blocks.
It turns out that any rule for blocks of black and white cells can be represented as some combination of just a single type of operation—for example a so-called Nand function of the kind often used in digital electronics.
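As a minimal illustrative sketch (our own, not taken from the book's text), the standard Boolean operations can each be written in terms of Nand alone, which is the kind of combination the statement refers to:

not[a_] := Nand[a, a]
and[a_, b_] := Nand[Nand[a, b], Nand[a, b]]
or[a_, b_] := Nand[Nand[a, a], Nand[b, b]]

(* check against the built-in operations on all truth assignments *)
Table[{and[a, b] === And[a, b], or[a, b] === Or[a, b], not[a] === Not[a]}, {a, {True, False}}, {b, {True, False}}]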
If one starts say with 5 black squares, then after a certain number of steps the cellular automaton will produce a block of exactly 5 × 5 = 25 black squares.
Specifying an operator f (taken in general to have n arguments with k possible values) by giving the rule number u for f[p, q, …], the rule number for an expression with variables vars can be obtained from
With[{m = Length[vars]}, FromDigits[Block[{f = Reverse[IntegerDigits[u, k, k^n]]〚FromDigits[{##}, k] + 1〛 &}, Apply[Function[Evaluate[vars], expr], Reverse[Array[IntegerDigits[# - 1, k, m] &, k^m]], {1}]], k]]
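As a hedged usage sketch (the wrapper name blockRuleNumber and the specific values are our own illustration, not from the book), the code above can be packaged as a function and checked on a simple case: with k = 2, n = 2 and u = 7 (the rule number of Nand), the expression f[f[p, q], f[p, q]] computes And, whose rule number should come out as 8.

blockRuleNumber[u_, k_, n_, vars_, expr_] := With[{m = Length[vars]}, FromDigits[Block[{f = Reverse[IntegerDigits[u, k, k^n]][[FromDigits[{##}, k] + 1]] &}, Apply[Function[Evaluate[vars], expr], Reverse[Array[IntegerDigits[# - 1, k, m] &, k^m]], {1}]], k]]

blockRuleNumber[7, 2, 2, {p, q}, f[f[p, q], f[p, q]]]   (* expected: 8 *)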
In many fields outside of statistics, however, the idea persisted even to the 1990s that block frequencies (or flat frequency spectra) were somehow the only ultimate tests for randomness. In 1909 Émile Borel had formulated the notion of normal numbers (see page 912) whose infinite digit sequences contain all blocks with equal frequency. … Starting in the late 1940s the development of information theory began to suggest connections between randomness and inability to compress data, but emphasis on p Log[p] measures of information content (see page 1071) reinforced the idea that block frequencies are the only real criterion for randomness.
But it is straightforward to define versions of entropy that take account of probabilities—and indeed the closest analog to the usual entropy in physics or information theory is obtained by taking the probabilities p[i] for the k^n blocks of length n (assuming k colors), then constructing
-Limit[Sum[p[i] Log[k, p[i]], {i, k^n}]/n, n → ∞]
I have tended to call this quantity measure entropy, though in other contexts, it is often just called entropy or information, and is sometimes called information dimension. … An example of a generalization is the quantity given for blocks of size n by
h[q_, n_] := Log[k, Sum[p[i]^q, {i, k^n}]]/(n(q - 1))
where q = 0 yields set entropy, the limit q → 1 measure entropy, and q = 2 so-called correlation entropy.
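As a rough self-contained sketch (the names and the empirical approach are our own, not from the book), both the set entropy and the measure entropy mentioned above can be estimated from the frequencies of overlapping length-n blocks in a finite sequence:

blockProbs[list_, n_] := N[Tally[Partition[list, n, 1]][[All, 2]]/(Length[list] - n + 1)]
setEntropy[list_, n_, k_] := Log[k, Length[blockProbs[list, n]]]/n
measureEntropy[list_, n_, k_] := With[{p = blockProbs[list, n]}, -Total[p Log[k, p]]/n]

seq = RandomInteger[1, 10^5];
{setEntropy[seq, 4, 2], measureEntropy[seq, 4, 2]}   (* both should be close to 1 for a random binary sequence *)

Note that these are finite-sample estimates; the definitions above involve exact block probabilities and the limit n → ∞.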
The sequence {1, 2, 2, 1, 1, 2, …} defined by the property list == Map[Length, Split[list]] was suggested as a mathematical puzzle by William Kolakoski in 1965 and is equivalent to
Join[{1, 2}, Map[First, CTEvolveList[{{1}, {2}}, {2}, t]]]
It is known that this sequence does not repeat, contains no more than two identical consecutive blocks, and has at least very close to equal numbers of 1's and 2's.
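CTEvolveList here is the cyclic tag system evolution function defined elsewhere in the book. As a self-contained alternative sketch (the name kolakoski and this direct construction are our own), the sequence can also be generated by reading its own entries as successive run lengths, with the run values alternating between 1 and 2:

kolakoski[n_] := Module[{s = {1, 2, 2}, i = 3, sym = 1}, While[Length[s] < n, s = Join[s, ConstantArray[sym, s[[i]]]]; sym = 3 - sym; i++]; Take[s, n]]

(* the run lengths reproduce the sequence itself, except possibly in the final, truncated run *)
With[{s = kolakoski[1000]}, Take[Map[Length, Split[s]], 100] === Take[s, 100]]   (* expected: True *)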
With all cells 0 on one step, and a block of nonzero cells on the next step, the periods are for example: {1}: 21; {1, 1}: 3n - 8; {1, 0, 1}: 666; {1, 1, 1}: 3n - 8; {1, 0, 0, 1}: irregular (< 24n; peaks at 6j + 1); {1, 0, 0, 1, 0, 1}: irregular (≲ 2^n; 857727 for n = 2^6; 13705406 for n = 100).
But for example in the rule (a) picture on page 463 there is in effect a block of solid that persists in the middle—so that no ordinary diffusion behavior is seen.
Entropy estimates
Entropies h[n] computed from blocks of size n always decrease with n; the quantity n h[n] is always concave (negative second difference) with respect to n.
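As a hedged numerical illustration (our own sketch, with the estimator hEst and the Thue-Morse test sequence chosen by us), these two properties can be checked on empirical block entropies:

hEst[list_, n_, k_] := With[{p = N[Tally[Partition[list, n, 1]][[All, 2]]/(Length[list] - n + 1)]}, -Total[p Log[k, p]]/n]
tm = Nest[Join[#, 1 - #] &, {0}, 16];   (* a long nested (non-random) test sequence *)
Table[hEst[tm, n, 2], {n, 1, 6}]                       (* should be non-increasing in n *)
Differences[Table[n hEst[tm, n, 2], {n, 1, 8}], 2]     (* second differences of n h[n]; should be ≤ 0 up to estimation error *)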