Notes

Chapter 12: The Principle of Computational Equivalence

Section 6: Computational Irreducibility


[Computation of] mathematical functions

The number of bit operations needed to add two n-digit numbers is of order n. The number of operations m[n] needed to multiply them increases just slightly more rapidly than n (see page 1093). (Even if one can do operations on all digits in parallel it still takes of order n steps in a system like a cellular automaton for the effects of different digits to mix together—though see also page 1149.) The number of operations to evaluate Mod[a, b] is of order n if a has n digits and b is small.
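
As a concrete illustration of m[n] growing only slightly faster than n, here is a minimal Wolfram Language sketch (positive integers only; the helper name karatsuba is mine) of Karatsuba-style multiplication, which replaces four half-size products by three and so costs about n^Log[2, 3], roughly n^1.58, digit operations. It is offered only as an example, not necessarily the scheme referred to on page 1093, and real bignum implementations use still faster methods.

karatsuba[x_Integer, y_Integer] := x y /; x < 10 || y < 10    (* base case: one factor is small *)
karatsuba[x_Integer, y_Integer] :=
 Module[{m = Quotient[Max[IntegerLength[x], IntegerLength[y]], 2], x1, x0, y1, y0, a, b, c},
  {x1, x0} = QuotientRemainder[x, 10^m];        (* split each number into high and low halves *)
  {y1, y0} = QuotientRemainder[y, 10^m];
  a = karatsuba[x1, y1];                        (* product of high parts *)
  b = karatsuba[x0, y0];                        (* product of low parts *)
  c = karatsuba[x1 + x0, y1 + y0] - a - b;      (* cross terms from a single extra product *)
  a*10^(2 m) + c*10^m + b]

karatsuba[123456789, 987654321] == 123456789*987654321    (* True *)
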
Many standard continuous mathematical functions just increase or decrease smoothly at large x (see page 917). The main issue in evaluating those that exhibit regular oscillations at large x is to find their oscillation period with sufficient precision. Thus for example if x is an integer with n digits then evaluating Sin[x] or FractionalPart[x c] requires respectively finding π or c to n-digit precision.
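
To see the precision issue concretely, the following sketch (the integer x, the precisions tried, and the helper name sinReduced are my choices, not from the note) reduces the argument of Sin modulo 2 Pi at various working precisions; since x here has 51 digits, about 50 digits of precision are consumed by the reduction itself.

x = 10^50 + 12345;                                  (* an integer with 51 digits *)
sinReduced[p_] := Sin[Mod[N[x, p], N[2 Pi, p]]]     (* reduce modulo 2 Pi using p-digit values *)

Precision[sinReduced[70]]      (* roughly 20 digits survive the reduction *)
Precision[sinReduced[55]]      (* only a handful of digits survive *)
N[Sin[x], 20]                  (* for comparison: the kernel raises its working precision internally *)
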
It is known how to evaluate π (see page 912) and all standard elementary functions to n-digit precision using about Log[n] m[n] operations. (This can be done by repeatedly making use of functional relations such as Exp[2x] == Exp[x]^2 which express f[2x] as a polynomial in f[x]; such an approach is known to work for elementary, elliptic, modular and other functions associated with ArithmeticGeometricMean and for example DedekindEta.)
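
The following is a minimal sketch (my illustration, with the hypothetical name expByDoubling; it is not the full Log[n] m[n] algorithm alluded to in the note) of how a relation like Exp[2x] == Exp[x]^2 can be exploited: halve the argument until it is tiny, evaluate a short series there, then square the result back up, at the cost of roughly one multiplication per doubling step.

expByDoubling[x_?NumericQ, terms_: 10] :=
 Module[{k = Max[0, Ceiling[Log[2, Abs[x]]] + 10], y, series},
  y = x/2^k;                              (* after k halvings the argument is tiny *)
  series = Sum[y^j/j!, {j, 0, terms}];    (* short truncated Taylor series for Exp[y] *)
  Nest[#^2 &, series, k]]                 (* square k times, using Exp[2x] == Exp[x]^2 *)

expByDoubling[N[3/2, 50]] - Exp[N[3/2, 50]]    (* residual well below 10^-30 *)
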
Known methods for high-precision evaluation of special functions—usually based in the end on series representations—typically require of order n^(1/s) m[n] operations, where s is often 2 or 3. (Examples of more difficult cases include HypergeometricPFQ[a, b, 1] and StieltjesGamma[k], where logarithmic series can require an exponential number of terms. Evaluation of BernoulliB[x] is also difficult.) Any iterative procedure (such as FindRoot) that yields a constant multiple more digits at each step will take about Log[n] steps to get n digits. Roots of polynomials can thus almost always be found with NSolve in about Log[n] m[n] operations.
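
For example, Newton's iteration for a square root roughly doubles the number of correct digits at each step, so the step count grows only like Log[2, n]. A small sketch (an illustration of the digit-doubling behavior only, not of FindRoot's internals; the name newtonSqrtSteps is mine):

newtonSqrtSteps[a_, n_] :=                        (* count Newton steps to get n digits of Sqrt[a] *)
 Module[{prec = n + 20, iterates},
  iterates = NestWhileList[(# + N[a, prec]/#)/2 &, N[1, prec],
    Abs[#^2 - a] > 10^(-n) &];                    (* iterate until n digits are correct *)
  Length[iterates] - 1]

{newtonSqrtSteps[2, 100], newtonSqrtSteps[2, 1000]}    (* roughly 8 and 11 steps: about Log[2, n] *)
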
If one evaluates NIntegrate or NDSolve by effectively fitting functions to order s polynomials the difficulty of getting results with n-digit precision typically increases like 2^(n/s). An adaptive algorithm such as Romberg integration reduces this to about 2^Sqrt[n].
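
To illustrate the 2^(n/s) behavior with a fixed-order rule (a sketch of my own, using composite Simpson's rule rather than NIntegrate's actual methods): the error falls off like h^4, so each additional digit costs a fixed multiplicative increase in the number of sample points, and the point count therefore grows exponentially with n.

simpson[f_, a_, b_, k_] :=                        (* composite Simpson's rule: k panels, 2k+1 points *)
 With[{h = (b - a)/(2 k)},
  h/3 (f[a] + f[b] + 4 Sum[f[a + (2 j - 1) h], {j, 1, k}] + 2 Sum[f[a + 2 j h], {j, 1, k - 1}])]

exact = N[Sqrt[Pi] Erf[1]/2, 20];                 (* Integrate[Exp[-x^2], {x, 0, 1}] *)
Table[{2 k + 1, Abs[simpson[Exp[-#^2] &, 0., 1., k] - exact]}, {k, {4, 16, 64, 256}}]
(* quadrupling the number of points gains only about 2 to 3 digits *)
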
The best-known algorithms for evaluating Zeta[1/2 + I x] (see page 918) to fixed precision take roughly Sqrt[x] operations—or 2^(n/2) operations if x is an n-digit integer. (The evaluation is based on the Riemann–Siegel formula, which involves sums of about Sqrt[x] cosines.)
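
The leading part of that formula is easy to write down as a sketch (the name rsMainSum, the choice t = 10^6 and the omission of all correction terms are mine): about Sqrt[t/(2 Pi)] cosines reproduce the built-in RiemannSiegelZ[t] == Exp[I RiemannSiegelTheta[t]] Zeta[1/2 + I t] up to a correction that falls off like (t/(2 Pi))^(-1/4).

rsMainSum[t_?NumericQ] :=                          (* Riemann-Siegel main sum, no correction terms *)
 With[{m = Floor[Sqrt[t/(2 Pi)]], theta = RiemannSiegelTheta[t]},
  2 Sum[Cos[theta - t Log[k]]/Sqrt[k], {k, 1, m}]]

With[{t = N[10^6, 30]},
 {Floor[Sqrt[t/(2 Pi)]],                           (* about 400 cosine terms suffice here *)
  rsMainSum[t], RiemannSiegelZ[t]}]                (* they agree up to a correction of order (t/(2 Pi))^(-1/4) *)
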
Unlike for continuous mathematical functions, known algorithms for number theoretical functions such as FactorInteger[x] or MoebiusMu[x] typically seem to require a number of operations that grows faster with the number of digits n in x than any power of n (see page 1090).
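
The simplest case makes the growth explicit (a sketch with my own helper name smallestFactor; FactorInteger itself uses far better, though still super-polynomial, methods): trial division may need about Sqrt[x], i.e. roughly 10^(n/2), candidate divisors, exponential in the digit count n.

smallestFactor[x_Integer?Positive] :=              (* trial division up to Sqrt[x] *)
 Module[{d = 2},
  While[d d <= x && Mod[x, d] != 0, d++];
  If[d d > x, x, d]]                               (* x itself is returned when it is prime *)

smallestFactor[2^59 - 1]                           (* 179951, found only after about 180000 trial divisions *)
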



From Stephen Wolfram: A New Kind of Science