Wolfram Summer School

Alumni

John DeBrota

Class of 2012

Bio

John DeBrota is currently pursuing a bachelor’s degree in physics and mathematics at Indiana University. He enjoys learning about physics, mathematics, cognitive science, and complexity, and, in his free time, playing music and climbing. For the last two years, he has been a student researcher in computational chemistry. He first became interested in Mathematica after attending the Wolfram Technology Conference in 2009 and, more recently, in NKS after reading A New Kind of Science.

Project: Universal Assemblers in Multiway Systems

One of the biggest goals of nanotechnology is to develop universal molecular assemblers that can manipulate chemical reactions on an atomic scale. Motivated by the possibility of such machines, I investigated the conditions necessary for a multiway system to generate every string from any string.

A multiway system (also called a string rewriting system or a semi-Thue system) is a string together with a set of axioms that define certain substring transformations. At each step, every legal transformation is applied separately, each producing its own daughter string, and each daughter string is evolved independently in the next step. This continues for arbitrarily many steps or until no legal transformations remain. The picture below visualizes a multiway system starting from the green string:

Rule: {"A" -> "AB", "AB" -> "B", "B" -> "A"}
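The original work was done in Mathematica, but a single evolution step of such a system is easy to sketch. Here is a minimal Python version, using the rule above and the (arbitrarily chosen) starting string "A":

```python
def multiway_step(strings, rules):
    """Apply every rule at every matching position in every string,
    keeping each result as a separate daughter string."""
    daughters = set()
    for s in strings:
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:
                daughters.add(s[:i] + rhs + s[i + len(lhs):])
                i = s.find(lhs, i + 1)
    return daughters

rules = [("A", "AB"), ("AB", "B"), ("B", "A")]

# Evolve a few generations from "A", accumulating every string reached.
reached, frontier = {"A"}, {"A"}
for _ in range(4):
    frontier = multiway_step(frontier, rules)
    reached |= frontier
```

Running this, the string "BA" appears within four steps (via "A" -> "AB" -> "ABB" -> "ABA" -> "BA"), which is the transformation discussed below.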

For example, to show that "A" goes to "BA", it suffices to observe that the red string "BA" appears in the image. It turns out to be relatively easy to find all the complete rules for a two-letter alphabet, given a number of axioms per rule and a maximum string length per axiom, because a necessary and sufficient condition for completeness is that {"A" -> "AA", "A" -> "B", "AA" -> "A", "B" -> "A"} are all provable string transformations. Among the complete rules, I wanted to see whether some rules were inherently more efficient at proving strings, so I compared their average proof lengths across every string transformation up to a certain length. For the complete rules in a two-letter alphabet with three axioms per rule and axioms up to two letters long, the line plot below shows the average number of steps each rule required to prove each transformation and notes the optimal rules:
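The completeness test described above can be sketched as a bounded breadth-first search of the multiway graph. This Python version is illustrative only: `max_len` and `max_steps` are assumed cutoffs (the original work does not specify its search bounds, and unbounded derivability is undecidable in general):

```python
def provable(src, dst, rules, max_len=8, max_steps=50):
    """Breadth-first search of the multiway graph to test whether dst
    is derivable from src. Strings longer than max_len are pruned and
    the search stops after max_steps generations (assumed cutoffs)."""
    frontier, seen = {src}, {src}
    for _ in range(max_steps):
        nxt = set()
        for s in frontier:
            for lhs, rhs in rules:
                i = s.find(lhs)
                while i != -1:
                    t = s[:i] + rhs + s[i + len(lhs):]
                    if t == dst:
                        return True
                    if len(t) <= max_len and t not in seen:
                        seen.add(t)
                        nxt.add(t)
                    i = s.find(lhs, i + 1)
        if not nxt:
            return False
        frontier = nxt
    return False

# The four lemmas whose provability is necessary and sufficient
# for completeness on the two-letter alphabet.
LEMMAS = [("A", "AA"), ("A", "B"), ("AA", "A"), ("B", "A")]

def is_complete(rules):
    """A rule is complete iff every lemma transformation is provable."""
    return all(provable(a, b, rules) for a, b in LEMMAS)

example = [("A", "AB"), ("AB", "B"), ("B", "A")]
```

The example rule from the picture proves all four lemmas (e.g. "AA" -> "AAB" -> "AB" -> "B" -> "A"), so `is_complete(example)` returns `True`; a rule like {"A" -> "B"} alone cannot prove "B" -> "A" and fails the test.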

Thus the complete rules, that is, rules that can be used to construct any string from any other string, are easily condensed out of any particular rule space with an enumerable set of necessary and sufficient lemmas. I have also found that each complete rule in the two-letter alphabet belongs to a set of two or four rules related by inverting the direction of implication or flipping every letter to the other, forming something resembling an algebra relating the rules. Among the complete rules, there appear to be optimal rules that are fundamentally faster at proving string transformations. The existence of optimal rules implies that, for universal assemblers, it will very likely be possible to determine the most efficient set of rules that a hypothetical developer would want his or her machines to have.
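The two- and four-member rule families mentioned above can be generated mechanically. A small Python sketch, assuming each rule is represented as a list of (left, right) axiom pairs:

```python
def flip(s):
    """Swap every A for a B and vice versa."""
    return s.translate(str.maketrans("AB", "BA"))

def symmetry_family(rules):
    """Return the set of rules related to `rules` by inverting the
    direction of implication, flipping every letter, or both.
    Each rule is normalized to a sorted tuple of axiom pairs so that
    coinciding variants collapse to one member."""
    norm = lambda pairs: tuple(sorted(pairs))
    return {
        norm(rules),                                 # original
        norm((r, l) for l, r in rules),              # inverted
        norm((flip(l), flip(r)) for l, r in rules),  # flipped
        norm((flip(r), flip(l)) for l, r in rules),  # both
    }
```

For the example rule {"A" -> "AB", "AB" -> "B", "B" -> "A"} all four variants are distinct, while for a letter-symmetric rule like {"A" -> "B"} the inverted and flipped variants coincide and the family collapses to two members.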

In continuing this study, I would like to develop a rationale for why optimal rules should exist and a method for enumerating them that does not require brute force. I then want to generalize my results to alphabets of arbitrary size and investigate the relationship between vertex growth and completeness in the implication trees.

Favorite Four-Color, Four-State Turing Machine

Rule 1107926433466231544254084