1 - 8 of 8 for Compile
Here TMCompile creates a program segment for each element of the Turing machine rule, and TMToRM resolves addresses and links the segments together.
TMToRM[rules_] := Module[{segs, adrs}, segs = Map[TMCompile, rules]; adrs = Thread[Map[First, rules] -> Drop[FoldList[Plus, 1, Map[Length, segs]], -1]]; MapIndexed[(# /. {dr[r_, n_] :> d[r, n + First[#2]], dm[r_, z_] :> d[r, z /. adrs]})&, Flatten[segs]]]
TMCompile[_ -> z:{_, _, 1}] := f[z, {1, 2}]
TMCompile[_ -> z:{_, _, -1}] := f[z, {2, 1}]
f[{s_, a_, _}, {ra_, rb_}] := Flatten[{i[3], dr[ra, -1], dr[3, 3], i[ra], i[ra], dr[3, -2], If[a == 1, i[ra], {}], i[3], dr[rb, 5], i[rb], dr[3, -1], dr[rb, 1], dm[rb, {s, 0}], dr[rb, -6], i[rb], dr[3, -1], dr[rb, 1], dm[rb, {s, 1}]}]
A blank initial tape for the Turing machine corresponds to initial conditions {1, {0, 0, 0}} for the register machine.
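The address-resolution step in TMToRM is a standard two-pass assembly technique: compile each rule to a segment whose jumps name their target rule symbolically, then rewrite those names as absolute addresses once every segment length is known. A minimal Python sketch of the same idea (compile_segment and link are hypothetical names, and the instruction set is a stand-in, not Wolfram's code):

```python
# Two-pass mini-assembler in the style of TMToRM (illustrative only).

def compile_segment(rule_key):
    # Stand-in for TMCompile: a tiny segment ending in a symbolic jump.
    return [("inc",), ("jump", rule_key)]

def link(rules):
    segs = [compile_segment(k) for k in rules]
    # Pass 1: starting address of each segment (1-based, like
    # FoldList[Plus, 1, Map[Length, segs]] in the Mathematica version).
    starts, addr = {}, 1
    for k, seg in zip(rules, segs):
        starts[k] = addr
        addr += len(seg)
    # Pass 2: rewrite symbolic jump targets as absolute addresses
    # (the role of the z /. adrs replacement).
    program = []
    for seg in segs:
        for instr in seg:
            if instr[0] == "jump":
                program.append(("jump", starts[instr[1]]))
            else:
                program.append(instr)
    return program
```

The two passes are necessary because a segment can jump forward to a rule whose address is not known until all earlier segments have been measured.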
Decompilers
Trying to reverse engineer source code in a programming language like C from machine code output by compilers involves in effect trying to deduce higher-level purposes from lower-level computational steps.
There are two basic ways that languages can operate: compiled or interpreted. In a compiled language like C or Fortran, the source code of the program must always first be translated by a compiler program into object code that essentially consists of machine instructions. Once compiled, a program can be executed any number of times.
For equations of the form
∂_tt u[t, x] == ∂_xx u[t, x] + f[u[t, x]]
one can set up a simple finite difference method by taking f in the form of a pure function and creating from it a kernel with space step dx and time step dt:
PDEKernel[f_, {dx_, dt_}] := Compile[{a, b, c, d}, Evaluate[(2 b - d) + ((a + c - 2 b)/dx^2 + f[b]) dt^2]]
Iteration for n steps is then performed by
PDEEvolveList[ker_, {u0_, u1_}, n_] := Map[First, NestList[PDEStep[ker, #]&, {u0, u1}, n]]
PDEStep[ker_, {u1_, u2_}] := {u2, Apply[ker, Transpose[ {RotateLeft[u2], u2, RotateRight[u2], u1}], {1}]}
With this approach an approximation to the top example on page 165 can be obtained from
PDEEvolveList[PDEKernel[(1 - #^2)(1 + #)&, {.1, .05}], Transpose[Table[{1, 1} N[Exp[-x^2]], {x, -20, 20, .1}]], 400]
For both this example and the middle one the results converge rapidly as dx decreases.
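The same explicit scheme is easy to reproduce outside Mathematica. Below is a NumPy sketch (pde_step and pde_evolve are my names, not from the text) of the kernel (2 b - d) + ((a + c - 2 b)/dx^2 + f[b]) dt^2 with periodic boundaries, mirroring the PDEKernel/PDEStep pair above:

```python
import numpy as np

def pde_step(f, u_prev, u_curr, dx, dt):
    # np.roll(u, -1) and np.roll(u, 1) play the role of RotateLeft and
    # RotateRight, giving the periodic second difference.
    lap = (np.roll(u_curr, -1) + np.roll(u_curr, 1) - 2 * u_curr) / dx**2
    return 2 * u_curr - u_prev + (lap + f(u_curr)) * dt**2

def pde_evolve(f, u0, u1, n, dx, dt):
    # Returns the n + 1 successive states, as PDEEvolveList does.
    prev, curr = np.asarray(u0, float), np.asarray(u1, float)
    out = [prev]
    for _ in range(n):
        prev, curr = curr, pde_step(f, prev, curr, dx, dt)
        out.append(prev)
    return out
```

With f[u] = 0 this reduces to the ordinary wave equation; a spatially constant initial condition then stays constant, which is a quick sanity check on the stencil.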
For efficiency in early versions of Mathematica, explicit rule lists in the second definition can be preprocessed using Dispatch[rules], and functions in the third definition using Compile[{{x, _Integer, 1}}, body].
In some compilers searches are occasionally done for optimal sequences of instructions to implement particular simple functions.
A few deviations from this approach were tried—notably in Lisp and APL—but by the 1970s, following the development of automated compiler generators such as yacc, so-called Backus–Naur context-free specifications for computer languages had become quite standard.
In some earlier versions of Mathematica a considerably faster version of the program can be created by using the definition
CAStep = Compile[{{rule, _Integer, 1}, {a, _Integer, 1}}, rule〚8 - (RotateLeft[a] + 2 (a + 2 RotateRight[a]))〛]
In addition, in Mathematica 4 and above, one can use
CAStep[rule_, a_] := rule〚8 - ListConvolve[{1, 2, 4}, a, 2]〛
or directly in terms of the rule number num
Sign[BitAnd[2^ListConvolve[{1, 2, 4}, a, 2], num]]
(In versions of Mathematica subsequent to the release of this book the built-in CellularAutomaton function can be used, as discussed on page 867 .)
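The convolution formulation translates directly to other array languages: the neighborhood code 4·left + 2·center + right selects one bit of the rule number, as in Sign[BitAnd[2^code, num]]. A NumPy sketch (ca_step is a hypothetical name, with np.roll standing in for ListConvolve):

```python
import numpy as np

def ca_step(num, a):
    # Neighborhood code 4*left + 2*center + right, periodic boundaries;
    # bit `code` of the rule number num gives the new cell value.
    code = 4 * np.roll(a, 1) + 2 * a + np.roll(a, -1)
    return (num >> code) & 1
```

For example, rule 90 (new cell = left XOR right) applied to a single black cell yields the two flanking cells black, the start of the familiar nested pattern.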