Search NKS | Online

101 - 110 of 283 for Function
Smooth iterated maps In the main text, all the functions used as mappings consist of linear pieces, usually joined together discontinuously. But the same basic phenomena seen with such mappings also occur when smooth functions are used. … (An important result discovered by Mitchell Feigenbaum in 1975 is that this basic setup is universal to all smooth maps whose functions have a single hump.)
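The canonical smooth single-hump map of this kind is the logistic map. A minimal Python sketch (not from the book) that iterates x -> a x (1 - x):

```python
def logistic_orbit(a, x0, n):
    # Iterate the smooth single-hump map x -> a x (1 - x) for n steps,
    # the classic example covered by Feigenbaum's universality result.
    orbit = [x0]
    for _ in range(n):
        orbit.append(a * orbit[-1] * (1 - orbit[-1]))
    return orbit
```

For a = 3.2 the orbit settles onto a period-2 cycle; raising a through the period-doubling cascade (accumulating near a ≈ 3.57) produces the universal scaling Feigenbaum identified.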
Implementation [of my PDEs] All the numerical solutions shown were found using the NDSolve function built into Mathematica. … For equations of the form ∂tt u[t, x] == ∂xx u[t, x] + f[u[t, x]] one can set up a simple finite difference method by taking f in the form of a pure function and creating from it a kernel with space step dx and time step dt: PDEKernel[f_, {dx_, dt_}] := Compile[{a, b, c, d}, Evaluate[(2 b - d) + ((a + c - 2 b)/dx^2 + f[b]) dt^2]] Iteration for n steps is then performed by PDEEvolveList[ker_, {u0_, u1_}, n_] := Map[First, NestList[PDEStep[ker, #] &, {u0, u1}, n]] PDEStep[ker_, {u1_, u2_}] := {u2, Apply[ker, Transpose[{RotateLeft[u2], u2, RotateRight[u2], u1}], {1}]} With this approach an approximation to the top example on page 165 can be obtained from PDEEvolveList[PDEKernel[(1 - #^2)(1 + #) &, {.1, .05}], Transpose[Table[{1, 1} N[Exp[-x^2]], {x, -20, 20, .1}]], 400] For both this example and the middle one the results converge rapidly as dx decreases. … The energy function (see above) is at least roughly conserved, but it seems quite likely that the "shocks" visible are merely a consequence of the discretization procedure used.
In the late 1800s and early 1900s issues about the foundations of mathematics (see note below) led to the formal definition of so-called recursive functions. But almost without exception the emphasis was on studying what such functions could in principle do, not on looking at the actual behavior of particular ones.
In most of them fairly standard mathematical functions are used, but in unusual combinations.
But for there to be computational reducibility this formula needs to be simple and easy to evaluate—as it is if it consists just of a few standard mathematical functions (see note above; page 1098).
Properties [of logical primitives] Page 813 lists theorems satisfied by each function. {0, 1, 6, 7, 8, 9, 14, 15} are commutative (orderless) so that a ∘ b = b ∘ a, while {0, 6, 8, 9, 10, 12, 14, 15} are associative (flat), so that a ∘ (b ∘ c) = (a ∘ b) ∘ c.
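Both lists can be checked exhaustively over the 16 two-input Boolean functions. A short Python sketch (not from the book), using the numbering in which the value of function n on inputs (a, b) is bit 2a + b of n:

```python
def boolean_fn(n):
    # Two-input Boolean function number n (0-15):
    # its value on inputs (a, b) is bit 2a + b of n.
    return lambda a, b: (n >> (2 * a + b)) & 1

def commutative(f):
    return all(f(a, b) == f(b, a) for a in (0, 1) for b in (0, 1))

def associative(f):
    return all(f(f(a, b), c) == f(a, f(b, c))
               for a in (0, 1) for b in (0, 1) for c in (0, 1))

commutative_fns = [n for n in range(16) if commutative(boolean_fn(n))]
associative_fns = [n for n in range(16) if associative(boolean_fn(n))]
```

Running this reproduces exactly the two sets quoted above.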
Common framework [for cellular automaton rules] The Mathematica built-in function CellularAutomaton discussed on page 867 handles general and totalistic rules in the same framework by using ListConvolve[w, a, r + 1] and taking the weights w to be respectively k^Table[i - 1, {i, 2r + 1}] and Table[1, {2r + 1}] .
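The weighted-convolution indexing for general rules can be mimicked directly; a Python sketch (an assumption-laden illustration, not the built-in function itself) for a k-color, range-r rule, where the base-k weights play the role of k^Table[i - 1, {i, 2r + 1}]:

```python
def ca_step(num, k, r, a):
    # One step of the general k-color, range-r cellular automaton number num,
    # on a cyclic list a. Each cell's neighborhood is read as a base-k number
    # (leftmost neighbor most significant), which selects a digit of num --
    # the same indexing the ListConvolve weights produce.
    n = len(a)
    m = 2 * r + 1
    digits = [(num // k**v) % k for v in range(k**m)]
    return [digits[sum(a[(i - r + j) % n] * k**(m - 1 - j) for j in range(m))]
            for i in range(n)]
```

For a totalistic rule the weights would instead all be 1, so that only the neighborhood total indexes the digit table. As a check, elementary rule 30 (k = 2, r = 1) applied to a single 1 yields the expected 1, 1, 1 block.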
In organisms with a total of only a few thousand nerve cells, each individual cell typically has definite connections and a definite function. But in humans with perhaps 100 billion nerve cells, the physical connections seem quite haphazard, and most nerve cells probably develop their function as a result of building up weights associated with their actual pattern of behavior, either spontaneous or in response to external stimuli.
For typically space and time are both just represented by abstract symbolic variables, and the formal process of solving equations as a function of position in space and as a function of time is essentially identical.
These are related to the autocorrelation function according to Fourier[list]^2 == Fourier[ListConvolve[list, list, {1, 1}]]/Sqrt[Length[list]] (See also page 1074.)
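The identity can be verified numerically. A small Python sketch (not from the book) implementing Mathematica's default Fourier convention and the cyclic self-convolution by direct summation:

```python
import cmath
import math

def fourier(u):
    # Mathematica's default convention:
    # F_s = (1/Sqrt[n]) Sum_r u_r Exp[2 Pi I r s / n]  (0-based indices)
    n = len(u)
    return [sum(u[r] * cmath.exp(2j * cmath.pi * r * s / n) for r in range(n))
            / math.sqrt(n) for s in range(n)]

def self_convolve(u):
    # Cyclic self-convolution, as in ListConvolve[list, list, {1, 1}]:
    # c_j = Sum_s u_s u_{(j - s) mod n}
    n = len(u)
    return [sum(u[s] * u[(j - s) % n] for s in range(n)) for j in range(n)]
```

For any real list, fourier(u)[s]**2 agrees with fourier(self_convolve(u))[s] / sqrt(n) up to floating-point error.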