Search NKS | Online
41 - 50 of 95 for Log
Maximal block compression
If one has data that consists of a long sequence of blocks, each of length b, and each independently chosen with probability p[i] to be of type i, then as argued by Claude Shannon in the late 1940s, it turns out that the minimum number of base 2 bits needed on average to represent each block in such a sequence is h = -Sum[p[i] Log[2, p[i]], {i, 2^b}]. … With this assumption one then finds that maximal compression occurs if a block of probability p[i] is represented by a codeword of length -Log[2, p[i]].
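As an illustration of Shannon's bound (a Python sketch, not from NKS; the block probabilities here are made up), when every p[i] is an exact power of 1/2, codewords of length -Log[2, p[i]] achieve the entropy h exactly:

```python
from math import log2

# Hypothetical block probabilities (powers of 1/2, so -log2(p) is an integer)
p = [1/2, 1/4, 1/8, 1/8]

# Shannon entropy: minimum average bits per block
h = -sum(pi * log2(pi) for pi in p)

# Average length of a code assigning each block a codeword of length -log2(p)
avg_len = sum(pi * -log2(pi) for pi in p)

print(h, avg_len)  # both 1.75
```

For probabilities that are not powers of 1/2, codeword lengths must be rounded up, and the average length exceeds h slightly.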
Applying BitReverseOrder to this matrix yields a matrix which has an essentially nested form, and for size n = 2^s can be obtained from
Nest[With[{c = BitReverseOrder[Range[0, Length[#] - 1]/Length[#]]}, Flatten2D[MapIndexed[#1 {{1, 1}, {1, -1} (-1)^c〚Last[#2]〛} &, #, {2}]]] &, {{1}}, s]
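BitReverseOrder itself is defined elsewhere in NKS; a Python sketch of what it does (send element i to the position given by reversing the bits of i, for a list of length 2^s) might be:

```python
def bit_reverse_order(data):
    """Reorder a length-2^s list by bit-reversed index."""
    n = len(data)
    s = n.bit_length() - 1  # assumes n is a power of 2
    rev = [int(format(i, f'0{s}b')[::-1], 2) for i in range(n)] if s else [0]
    return [data[r] for r in rev]

print(bit_reverse_order(list(range(8))))  # [0, 4, 2, 6, 1, 5, 3, 7]
```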
Using this structure, one obtains the so-called fast Fourier transform which operates in n Log[n] steps and is given by
With[{n = Length[data]}, Fold[Flatten[Map[With[{k = Length[#]/2}, {{1, 1}, {1, -1}} . {Take[#, k], Drop[#, k] (-1)^(Range[0, k - 1]/k)}] &, Partition[##]]] &, BitReverseOrder[data], 2^Range[Log[2, n]]]/√n]
(See also page 1080.)
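A Python sketch of the same idea (an iterative radix-2 transform over bit-reversed data; this is not the NKS code, and it uses the common e^(-2πijk/n) convention without the 1/√n normalization), checked against the direct n^2 definition:

```python
import cmath

def fft(data):
    """Iterative radix-2 FFT; length of data must be a power of 2."""
    n = len(data)
    s = n.bit_length() - 1
    # Start from bit-reversed order, then combine blocks of size 2, 4, ..., n
    rev = [int(format(i, f'0{s}b')[::-1], 2) for i in range(n)] if s else [0]
    a = [complex(data[r]) for r in rev]
    size = 2
    while size <= n:
        for start in range(0, n, size):
            for j in range(size // 2):
                w = cmath.exp(-2j * cmath.pi * j / size)
                u, v = a[start + j], w * a[start + j + size // 2]
                a[start + j], a[start + j + size // 2] = u + v, u - v
        size *= 2
    return a

def dft(data):
    """Direct O(n^2) transform, for comparison."""
    n = len(data)
    return [sum(data[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

out = fft([1, 2, 3, 4, 5, 6, 7, 8])
ref = dft([1, 2, 3, 4, 5, 6, 7, 8])
```

The outer loop runs Log[2, n] times over n elements, giving the n Log[n] operation count quoted above.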
In the first 200 billion digits of π, the frequencies of 0 through 9 differ from 20 billion by
{30841, -85289, 136978, 69393, -78309, -82947, -118485, -32406, 291044, -130820}
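Since the ten digit counts must total exactly 200 billion, these deviations from 20 billion necessarily sum to zero, which gives a quick consistency check on the list (plain arithmetic in Python):

```python
dev = [30841, -85289, 136978, 69393, -78309,
       -82947, -118485, -32406, 291044, -130820]
print(sum(dev))  # 0
```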
An early approximation to π was
4 Sum[(-1)^k/(2k + 1), {k, 0, m}]
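This is the Gregory-Leibniz series, which converges very slowly; a Python sketch showing that the error after m terms is roughly 1/m:

```python
import math

def pi_leibniz(m):
    # 4 Sum[(-1)^k/(2k + 1), {k, 0, m}]
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(m + 1))

print(abs(pi_leibniz(1000) - math.pi))  # on the order of 1/1000
```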
30 digits were obtained with
2 Apply[Times, 2/Rest[NestList[Sqrt[2 + #]&, 0, m]]]
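This is essentially Viète's nested-radical product; a Python sketch of the same computation (machine floating point, so it bottoms out near 15 digits rather than 30):

```python
from math import sqrt, pi

def pi_viete(m):
    # 2 Apply[Times, 2/Rest[NestList[Sqrt[2 + #] &, 0, m]]]
    prod, x = 2.0, 0.0
    for _ in range(m):
        x = sqrt(2 + x)  # successive nested radicals Sqrt[2], Sqrt[2 + Sqrt[2]], ...
        prod *= 2 / x
    return prod

print(abs(pi_viete(30) - pi))  # near machine precision
```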
An efficient way to compute π to n digits of precision is
(#〚2〛^2/#〚3〛) &[NestWhile[Apply[Function[{a, b, c, d}, {(a + b)/2, Sqrt[a b], c - d (a - b)^2, 2 d}], #] &, {1, 1/Sqrt[N[2, n]], 1/4, 1/4}, #〚1〛 ≠ #〚2〛 &]]
This requires about Log[2, n] steps, or a total of roughly n Log[n]^2 operations (see page 1134).
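This is the Gauss-Legendre (arithmetic-geometric mean) iteration, which roughly doubles the number of correct digits at each step; a machine-precision Python sketch of the same update rule:

```python
from math import sqrt, pi

def pi_agm(steps):
    # {a, b, c, d} -> {(a + b)/2, Sqrt[a b], c - d (a - b)^2, 2 d}
    a, b, c, d = 1.0, 1 / sqrt(2), 0.25, 0.25
    for _ in range(steps):
        a, b, c, d = (a + b) / 2, sqrt(a * b), c - d * (a - b) ** 2, 2 * d
    return b ** 2 / c  # the final estimate #[[2]]^2/#[[3]]

print(abs(pi_agm(5) - pi))  # tiny after only a handful of steps
```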
These numbers can also be obtained as the coefficients of x^n in the series expansion of x ∂ₓ Log[ζ[m, x]], with the so-called zeta function, which is always a rational function of x, given by
ζ[m_, x_] := 1/Det[IdentityMatrix[Length[m]] - m x]
and corresponds to the product over all cycles of 1/(1 - x^n).
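The coefficient of x^n in x ∂ₓ Log[ζ[m, x]] is Tr[MatrixPower[m, n]], the number of closed paths of length n in the network with adjacency matrix m. A numerical Python check of the equivalent identity Sum[Tr[m^n] x^n/n] == -Log[Det[I - m x]], using a small made-up matrix:

```python
from math import log

# Hypothetical 2x2 adjacency matrix: two nodes joined in a single 2-cycle
m = [[0, 1], [1, 0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace_power(m, n):
    """Tr[m^n] = number of closed paths of length n."""
    p = [[1, 0], [0, 1]]
    for _ in range(n):
        p = matmul(p, m)
    return p[0][0] + p[1][1]

x = 0.1  # small enough for the series to converge quickly
series = sum(trace_power(m, n) * x ** n / n for n in range(1, 60))
det = (1 - m[0][0] * x) * (1 - m[1][1] * x) - m[0][1] * m[1][0] * x * x
print(series, -log(det))  # both ≈ -Log[1 - x^2]
```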
The encoded version of a purely repetitive sequence of length n has a length that grows like Log[n]. The encoded version of a purely nested sequence grows like Log[n]^2. … An example is the sequence Table[Append[Table[0, {r}], 1], {r, s}] whose encoded version grows like √n Log[n].
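The general phenomenon (repetitive input compresses to almost nothing, random-looking input hardly at all) can be glimpsed with any pointer-based compressor; a rough Python illustration using zlib, which is not the encoding analyzed in NKS:

```python
import os, zlib

n = 100000
repetitive = b"0" * n         # purely repetitive sequence
random_bytes = os.urandom(n)  # incompressible with overwhelming probability

print(len(zlib.compress(repetitive)))    # a few hundred bytes at most
print(len(zlib.compress(random_bytes)))  # close to n
```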
For large n, the average number of distinct cycles in all such networks is Sqrt[π/2] Log[n], and the average length of these cycles is Sqrt[π/8 n].
The so-called measure entropy is given as discussed on page 959 by the limit of -Sum[p[i] Log[k, p[i]], {i, k^b}]/b where the p[i] are the probabilities for the blocks.
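A Python sketch (my own, not from NKS) estimating this quantity from observed block frequencies in an explicit sequence with k = 2 colors: for the periodic sequence 0101… only two of the k^b possible blocks ever occur, so the estimate is 1/b rather than the maximal 1.

```python
from math import log
from collections import Counter

def measure_entropy(seq, b, k=2):
    """-Sum[p[i] Log[k, p[i]]]/b over observed cyclic blocks of length b."""
    blocks = [tuple(seq[(j + i) % len(seq)] for i in range(b))
              for j in range(len(seq))]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum(c / total * log(c / total, k) for c in counts.values()) / b

periodic = [0, 1] * 50
print(measure_entropy(periodic, 4))  # 0.25: two blocks, each with p = 1/2
```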
In the limit of an infinite number of steps one gets a fractal known as a Sierpiński pattern (see page 934), with fractal dimension Log[2, 3] ≃ 1.59 (see page 933).
In general, to find the color of a cell after t steps of rule 188 or rule 60 evolution takes about Log[2, t] steps.
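For rule 60 this shortcut comes from additivity: starting from a single black cell, the cell at offset x after t steps is Binomial[t, x] mod 2, which by Lucas's theorem just tests whether the bits of x form a subset of the bits of t, taking about Log[2, t] bit operations. A Python check of this against direct evolution (my own sketch):

```python
def rule60_cell(t, x):
    # Binomial[t, x] mod 2 == 1 iff the bits of x are a subset of the bits of t
    return 1 if 0 <= x <= t and (x & ~t) == 0 else 0

def rule60_row(t, width):
    """Evolve rule 60 (new cell = left neighbor XOR self) for t steps."""
    row = [0] * width
    row[0] = 1  # single black cell at the left edge
    for _ in range(t):
        row = [(row[i - 1] if i else 0) ^ row[i] for i in range(width)]
    return row

t = 20
print(rule60_row(t, 32) == [rule60_cell(t, x) for x in range(32)])  # True
```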
It is known how to evaluate π (see page 912) and all standard elementary functions to n-digit precision using about Log[n] m[n] operations. … Any iterative procedure (such as FindRoot) that yields a constant multiple more digits at each step will take about Log[n] steps to get n digits. Roots of polynomials can thus almost always be found with NSolve in about Log[n] m[n] operations.
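The "constant multiple more digits per step" behavior can be seen with Newton's method, where the number of correct digits roughly doubles at each iteration (a Python sketch for Sqrt[2]):

```python
from math import sqrt

x = 1.0
errors = []
for _ in range(6):
    x = (x + 2 / x) / 2  # Newton step for x^2 == 2
    errors.append(abs(x - sqrt(2)))

print(errors)  # each error is roughly the square of the previous one
```

Getting n digits therefore takes about Log[n] such steps, each costing one multiplication-precision operation m[n].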