Having talked a bit about historical context, let’s now turn to what the things we’ve discussed here mean for the future of mathematics—both in theory and in practice.
At a theoretical level we’ve characterized the story of mathematics as being the story of a particular way of exploring the ruliad. And from this we might think that in some sense the ultimate limit of mathematics would be to just deal with the ruliad as a whole. But observers like us—at least doing mathematics the way we normally do it—simply can’t do that. And in fact, with the limitations we have as mathematical observers we can inevitably sample only tiny slices of the ruliad.
But as we’ve discussed, it is exactly this that leads us to experience the kinds of “general laws of mathematics” that we’ve talked about. And it is from these laws that we get a picture of the “large-scale structure of mathematics”—that turns out to be in many ways similar to the picture of the large-scale structure of our physical universe that we get from physics.
As we’ve discussed, what corresponds to the coherent structure of physical space is the possibility of doing mathematics in terms of high-level concepts—without always having to drop down to the “atomic” level. Effective uniformity of metamathematical space then leads to the idea of “pure metamathematical motion”, and in effect the possibility of translating at a high level between different areas of mathematics. And what this suggests is that in some sense “all high-level areas of mathematics” should ultimately be connected by “high-level dualities”—some of which have already been seen, but many of which remain to be discovered.
Thinking about metamathematics in physicalized terms also suggests another phenomenon: essentially an analog of gravity for metamathematics. As we discussed earlier, in direct analogy to the way that “larger densities of activity” in the spatial hypergraph for physics lead to a deflection in geodesic paths in physical space, so also larger “entailment density” in metamathematical space will lead to deflection in geodesic paths in metamathematical space. And when the entailment density gets sufficiently high, it presumably becomes inevitable that these paths will all converge, leading to what one might think of as a “metamathematical singularity”.
In the spacetime case, the typical analog would be a singularity: a place where all geodesics have finite length, or where, in effect, “time stops”. In our view of metamathematics, this corresponds to a situation where “all proofs are finite”—or, in other words, where everything is decidable, and there is no more “fundamental difficulty” left.
Absent other effects we might imagine that in the physical universe the effects of gravity would eventually lead everything to collapse into black holes. And the analog in metamathematics would be that everything in mathematics would “collapse” into decidable theories. But among the effects not accounted for is continued expansion—or in effect the creation of new physical or metamathematical space, formed in a sense by underlying raw computational processes.
What will observers like us make of this, though? In statistical mechanics an observer who does coarse graining might perceive the “heat death of the universe”. But at a molecular level there is all sorts of detailed motion that reflects a continued irreducible process of computation. And inevitably there will be an infinite collection of possible “slices of reducibility” to be found in this—just not necessarily ones that align with any of our current capabilities as observers.
What does this mean for mathematics? Conceivably it suggests that there’s only so much that can fundamentally be discovered in “high-level mathematics” without in effect “expanding our scope as observers”—or, in essence, changing our definition of what it is we humans mean by doing mathematics.
But underneath all this is still raw computation—and the ruliad. And this we know goes on forever, in effect continually generating “irreducible surprises”. But how should we study “raw computation”?
In essence we want to do unfettered exploration of the computational universe, of the kind I did in A New Kind of Science, and that we now call the science of ruliology. It’s something we can view as more abstract and more fundamental than mathematics—and indeed, as we’ve argued, it’s for example what’s underneath not only mathematics but also physics.
Ruliology is a rich intellectual activity, important for example as a source of models for many processes in nature and elsewhere. But it’s one where computational irreducibility and undecidability are seen at almost every turn—and it’s not one where we can readily expect “general laws” accessible to observers like us, of the kind we’ve seen in physics, and now see in mathematics.
We’ve argued that, with its foundation in the ruliad, mathematics is ultimately based on structures at a lower level than axiom systems. But given their familiarity from the history of mathematics, it’s convenient to use axiom systems—as we have done here—as a kind of “intermediate-scale metamodel” for mathematics.
But what is the “workflow” for using axiom systems? One possibility in effect inspired by ruliology is just to systematically construct the entailment cone for an axiom system, progressively generating all possible theorems that the axiom system implies. But while doing this is of great theoretical interest, it typically isn’t something that will in practice reach much in the way of (currently) familiar mathematical results.
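To get a flavor of what this involves, here is a minimal sketch in the Wolfram Language (using an invented toy string-rewrite system, not a real axiom system) in which each “theorem” is a string reachable from the “axiom” by some sequence of substitutions:

```wolfram
(* Toy analog of building out an entailment cone: enumerate, level by
   level, every string reachable from the "axiom" "A" under the
   (invented) rewrite rules *)
rules = {"A" -> "AB", "B" -> "A"};
step[strs_] := Union[strs, Flatten[StringReplaceList[#, rules] & /@ strs]];
NestList[step, {"A"}, 3]
```

Even in this tiny case the set of reachable “theorems” grows rapidly with each level, which is one reason exhaustive generation rarely reaches familiar mathematics.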
But let’s say one’s thinking about a particular result. A proof of this would correspond to a path within the entailment cone. And the idea of automated theorem proving is to systematically find such a path—which, with a variety of tricks, can usually be done vastly more efficiently than just by enumerating everything in the entailment cone. In practice, though, despite half a century of history, automated theorem proving has seen very little use in mainstream mathematics. Of course it doesn’t help that in typical mathematical work a proof is seen as part of the high-level exposition of ideas—but automated proofs tend to operate at the level of “axiomatic machine code” without any connection to human-level narrative.
But what if one doesn’t already know the result one’s trying to prove? Part of the intuition that comes from A New Kind of Science is that there can be “interesting results” that are still simple enough that they can conceivably be found by some kind of explicit search—and then verified by automated theorem proving. But so far as I know, only one significant unexpected result has ever been found in this way with automated theorem proving: my 2000 result on the simplest axiom system for Boolean algebra.
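For example, here is a sketch of how one can verify such a result in the Wolfram Language (assuming a version recent enough to include FindEquationalProof and AxiomaticTheory), asking for a proof of the commutativity of Nand from that single axiom:

```wolfram
(* Derive commutativity of Nand (written CenterDot) from the single
   Wolfram axiom for Boolean algebra *)
proof = FindEquationalProof[
   ForAll[{p, q}, CenterDot[p, q] == CenterDot[q, p]],
   AxiomaticTheory["WolframAxioms"]];
proof["ProofGraph"]  (* the proof, as a graph of interlinked lemmas *)
```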
And the fact is that when it comes to using computers for mathematics, the overwhelming fraction of the time they’re used not to construct proofs, but instead to do “forward computations” and “get results” (yes, often with Mathematica). Of course, within those forward computations, there are many operations—like Reduce, SatisfiableQ, PrimeQ, etc.—that essentially work by internally finding proofs, but their output is “just results” not “why-it’s-true explanations”. (FindEquationalProof—as its name suggests—is a case where an actual proof is generated.)
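A few simple examples of this style, in which each function returns a definite result while any proof-like reasoning stays internal:

```wolfram
SatisfiableQ[(p || q) && !p && !q]   (* False: a refutation is found internally, but only the verdict is returned *)
Reduce[x^2 == 4 && x > 0, x, Reals]  (* x == 2 *)
PrimeQ[2^31 - 1]                     (* True *)
```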
Whether one’s thinking in terms of axioms and proofs, or just in terms of “getting results”, one’s ultimately always dealing with computation. But the key question is how that computation is “packaged”. Is one dealing with arbitrary, raw, low-level constructs, or with something higher level and more “humanized”?
As we’ve discussed, at the lowest level, everything can be represented in terms of the ruliad. But when we do both mathematics and physics what we’re perceiving is not the raw ruliad, but rather just certain high-level features of it. But how should these be represented? Ultimately we need a language that we humans understand, that captures the particular features of the underlying raw computation that we’re interested in.
From our computational point of view, mathematical notation can be thought of as a rough attempt at this. But the most complete and systematic effort in this direction is the one I’ve worked towards for the past several decades: what’s now the full-scale computational language that is the Wolfram Language (and Mathematica).
Ultimately the Wolfram Language can represent any computation. But the point is to make it easy to represent the computations that people care about: to capture the high-level constructs (whether they’re polynomials, geometrical objects or chemicals) that are part of modern human thinking.
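For instance, each of these lines (some assuming a sufficiently recent version of the language) operates directly on a high-level construct of just this kind:

```wolfram
Factor[x^4 - 1]                (* polynomials: (-1 + x) (1 + x) (1 + x^2) *)
Volume[Dodecahedron[]]         (* geometrical objects *)
GroupOrder[SymmetricGroup[4]]  (* finite groups: 24 *)
Molecule["ethanol"]            (* chemicals *)
```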
The process of language design (on which, yes, I’ve spent immense amounts of time) is a curious mixture of art and science, that requires both drilling down to the essence of things, and creatively devising ways to make those things accessible and cognitively convenient for humans. At some level it’s a bit like deciding on words as they might appear in a human language—but it’s something more structured and demanding.
And it’s our best way of representing “high-level” mathematics: mathematics not at the axiomatic (or below) “machine code” level, but at the level at which human mathematicians typically think about it.
We’ve definitely not “finished the job”, though. Wolfram Language currently has around 7000 built-in primitive constructs, of which at least a couple of thousand can be considered “primarily mathematical”. But while the language has long contained constructs for algebraic numbers, random walks and finite groups, it doesn’t (yet) have built-in constructs for algebraic topology or K-theory. In recent years we’ve been slowly adding more kinds of pure-mathematical constructs—but to reach the frontiers of modern human mathematics might require a thousand or so more. And to be useful, all of them will have to be carefully and coherently designed.
The great power of the Wolfram Language comes not only from being able to represent things computationally, but also from being able to compute with things, and get results. And it’s one thing to be able to represent some pure mathematical construct—but quite another to be able to broadly compute with it.
The Wolfram Language in a sense emphasizes the “forward computation” workflow. Another workflow that’s achieved some popularity in recent years is the proof assistant one—in which one states a result and then, as a human, tries to fill in the steps to create a proof of it, with the computer verifying that the steps correctly fit together. If the steps are low level, then what one has is something like typical automated theorem proving—though now attempted with human effort rather than done automatically.
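To give the flavor of this workflow, here is a trivial sketch in Lean (one popular proof assistant); the human supplies the steps, and the computer checks that each one is valid and that together they close the goal:

```lean
-- The human supplies the steps; Lean verifies that they fit together
example (a b c : Nat) (h1 : a = b) (h2 : b = c) : a = c := by
  rw [h1]    -- step 1: use h1 to rewrite the goal a = c into b = c
  exact h2   -- step 2: the rewritten goal is exactly h2
```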
In principle one can build up to much higher-level “steps” in a modular way. But now the problem is essentially the same as in computational language design: to create primitives that are both precise enough to be immediately handled computationally, and “cognitively convenient” enough to be usefully understood by humans. And realistically, once one’s done the design (which, after decades of working on such things, I can say is hard), there’s likely to be much more “leverage” to be had by letting the computer just do computations than by expending human effort (even with computer assistance) to put together proofs.
One might think that a proof would be important in making sure one’s got the right answer. But as we’ve discussed, that’s a complicated concept when one’s dealing with human-level mathematics. If we go to a full axiomatic level it’s very typical that there will be all sorts of pedantic conditions involved. Do we have the “right answer” if underneath we assume that 1/0 = 0? Or does this not matter at the “fluid dynamics” level of human mathematics?
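Proof assistants have to commit to an answer here. Lean’s mathlib, for instance, makes division total by defining x / 0 to be 0, so the following is a theorem rather than an error:

```lean
import Mathlib

-- In mathlib, division by zero is defined to give zero, so this is
-- provable by the library lemma div_zero
example : (1 : ℝ) / 0 = 0 := div_zero 1
```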
One of the great things about computational language is that—at least if it’s written well—it provides a clear and succinct specification of things, just like a good “human proof” is supposed to. But computational language has the great advantage that it can be run to create new results—rather than just being used to check something.
It’s worth mentioning that there’s another potential workflow beyond “compute a result” and “find a proof”. It’s “here’s an object or a set of constraints for creating one; now find interesting facts about this”. Type into Wolfram|Alpha something like sin^4(x) (and, yes, there’s “natural math understanding” needed to translate something like this to precise Wolfram Language). There’s nothing obvious to “compute” here. But instead what Wolfram|Alpha does is to “say interesting things” about this—like what its maximum or its integral over a period is.
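In Wolfram Language terms, the kinds of facts it reports correspond to computations like:

```wolfram
MaxValue[Sin[x]^4, x]              (* maximum value: 1 *)
Integrate[Sin[x]^4, {x, 0, 2 Pi}]  (* integral over a period: 3 Pi/4 *)
```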
In principle this is a bit like exploring the entailment cone—but with the crucial additional piece of picking out which entailments will be “interesting to humans”. (And implementationally it’s a very deeply constrained exploration.)
It’s interesting to compare these various workflows with what one can call experimental mathematics. Sometimes this term is basically just applied to studying explicit examples of known mathematical results. But the much more powerful concept is to imagine discovering new mathematical results by “doing experiments”.
Usually these experiments are not done at the level of axioms, but rather at a considerably higher level (e.g. with things specified using the primitives of Wolfram Language). But the typical pattern is to enumerate a large number of cases and to see what happens—with the most exciting result being the discovery of some unexpected phenomenon, regularity or irregularity.
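Here is a minimal sketch of that pattern in the Wolfram Language; the “interestingness” criterion (how poorly the evolution compresses) is just an invented heuristic for flagging complex behavior:

```wolfram
(* Enumerate all 256 elementary cellular automaton rules and flag those
   whose evolution from a single black cell compresses poorly, a crude
   stand-in for "shows unexpected complexity" *)
complexity[r_] := StringLength[Compress[CellularAutomaton[r, {{1}, 0}, 200]]];
Select[Range[0, 255], complexity[#] > 0.8 complexity[30] &]
```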
This type of approach is in a sense much more general than mathematics: it can be applied to anything computational, or anything described by rules. And indeed it is the core methodology of ruliology, which uses it to explore the computational universe—and the ruliad.
One can think of the typical approach in pure mathematics as representing a gradual expansion of the entailment fabric, with humans checking (perhaps with a computer) statements they consider adding. Experimental mathematics effectively strikes out in some “direction” in metamathematical space, potentially jumping far away from the entailment fabric currently within the purview of some mathematical observer.
And one feature of this—very common in ruliology—is that one may run into undecidability. The “nearby” entailment fabric of the mathematical observer is in a sense “filled in enough” that it doesn’t typically have infinite proof paths of the kind associated with undecidability. But something reached by experimental mathematics has no such guarantee.
What’s good, of course, is that experimental mathematics can discover phenomena that are “far away” from existing mathematics. But (as in automated theorem proving) there isn’t necessarily any human-accessible “narrative explanation” (and if there’s undecidability there may be no “finite explanation” at all).
So how does this all relate to our whole discussion of new ideas about the foundations of mathematics? In the past we might have thought that mathematics must ultimately progress just by working out more and more consequences of particular axioms. But what we’ve argued is that there’s a fundamental infrastructure even far below axiom systems—whose low-level exploration is the subject of ruliology. But the thing we call mathematics is really something higher level.
Axiom systems are some kind of intermediate modeling layer—a kind of “assembly language” that can be used as a wrapper above the “raw ruliad”. In the end, we’ve argued, the details of this language won’t matter for typical things we call mathematics. But in a sense the situation is very much like in practical computing: we want an “assembly language” that makes it easiest to do the typical high-level things we want. In practical computing that’s often achieved with RISC instruction sets. In mathematics we typically imagine using axiom systems like ZFC. But—as reverse mathematics has tended to indicate—there are probably much more accessible axiom systems that could be used to reach the mathematics we want. (And ultimately even ZFC is limited in what it can reach.)
But if we could find such a “RISC” axiom system for mathematics, it has the potential to make more extensive exploration of the entailment cone practical. It’s also conceivable—though not guaranteed—that such a system could be “designed” to be more readily understood by humans. But in the end, actual human-level mathematics will typically operate at a level far above it.
And now the question is whether the “physicalized general laws of mathematics” that we’ve discussed can be used to make conclusions directly about human-level mathematics. We’ve identified a few features—like the very possibility of high-level mathematics, and the expectation of extensive dualities between mathematical fields. And we know that basic commonalities in structural features can be captured by things like category theory. But the question is what kinds of deeper general features can be found, and used.
In physics our everyday experience immediately makes us think about “large-scale features” far above the level of atoms of space. In mathematics our typical experience so far has been at a lower level. So now the challenge is to think more globally, more metamathematically and, in effect, more like in physics.
In the end, though, what we call mathematics is what mathematical observers perceive. So if we ask about the future of mathematics we must also ask about the future of mathematical observers.
If one looks at the history of physics there was already much to understand just on the basis of what we humans could “observe” with our unaided senses. But gradually, as more kinds of detectors became available—from microscopes to telescopes to amplifiers and so on—the domain of the physical observer expanded, and the perceived laws of physics with it. And today, as the practical computational capability of observers increases, we can expect that we’ll gradually see new kinds of physical laws (say, associated with molecular motion or other features of systems hitherto dismissed as “just random”).
As we’ve discussed above, we can see our characteristics as physical observers as being associated with “experiencing” the ruliad from one particular “vantage point” in rulial space (just as we “experience” physical space from one particular vantage point in physical space). Putative “aliens” might experience the ruliad from a different vantage point in rulial space—leading them to have laws of physics utterly incoherent with our own. But as our technology and ways of thinking progress, we can expect that we’ll gradually be able to expand our “presence” in rulial space (just as we do with spacecraft and telescopes in physical space). And so we’ll be able to “experience” different laws of physics.
We can expect the story to be very similar for mathematics. We have “experienced” mathematics from a certain vantage point in the ruliad. Putative aliens might experience it from another point, and build their own “paramathematics” utterly incoherent with our mathematics. The “natural evolution” of our mathematics corresponds to a gradual expansion in the entailment fabric, and in a sense a gradual spreading in rulial space. Experimental mathematics has the potential to launch a kind of “metamathematical space probe” which can discover quite different mathematics. At first, though, this will tend to be a piece of “raw ruliology”. But, if pursued, it potentially points the way to a kind of “colonization of rulial space” that will gradually expand the domain of the mathematical observer.
The physicalized general laws of mathematics we’ve discussed here are based on features of current mathematical observers (which in turn are highly based on current physical observers). What these laws would be like with “enhanced” mathematical observers we don’t yet know.
Mathematics as it is today is a great example of the “humanization of raw computation”. Two other examples are theoretical physics and computational language. And in all cases there is the potential to gradually expand our scope as observers. It’ll no doubt be a mixture of technology and methods along with expanded cognitive frameworks and understanding. We can use ruliology—or experimental mathematics—to “jump out” into the raw ruliad. But most of what we’ll see is “non-humanized” computational irreducibility.
But perhaps somewhere there’ll be another slice of computational reducibility: a different “island” on which “alien” general laws can be built. But for now we exist on our current “island” of reducibility. And on this island we see the particular kinds of general laws that we’ve discussed. We saw them first in physics. But there we discovered that they could emerge quite generically from a lower-level computational structure—and ultimately from the very general structure that we call the ruliad. And now, as we’ve discussed here, we realize that the thing we call mathematics is actually based on exactly the same foundations—with the result that it should show the same kinds of general laws.
It’s a rather different view of mathematics—and its foundations—than we’ve been able to form before. But the deep connection with physics that we’ve discussed allows us to now have a physicalized view of metamathematics, which informs both what mathematics really is now, and what the future can hold for the remarkable pursuit that we call mathematics.