A Unique Educational & Career Opportunity with Stephen Wolfram

A unique opportunity to do original research at the frontiers of science, the Wolfram Science Summer School teaches about 40 students from a diverse range of scientific backgrounds how to use Stephen Wolfram's A New Kind of Science and the Wolfram Language to implement projects in their fields of interest. Most of these students are advanced undergraduates and early graduate students, but applicants in other circumstances are also considered. We are looking for students who want to enhance their careers with Wolfram Science and the Wolfram Language.

Class of 2012

Vladimir Feinberg

Bio [2012]

Vladimir is a rising senior at the Harker School in California's Bay Area. The school's advanced technical facilities and course offerings have allowed him to explore areas of mathematics, programming, and physics. Ever since he was introduced to Mathematica as a tool in his freshman-year math class, Vladimir has been interested in computational science and has conducted his own mathematical investigations using Mathematica. Beyond his continuing work with Mathematica, including contributions to the Wolfram Demonstrations Project, Vladimir enjoys coding in C++ and Java, exploring deep mathematical concepts, playing club volleyball, and practicing martial arts.

Project Title

2-State, 2-Color Turing Machines with Memory

Project

The behavior of all 2-state, 2-color (2,2) Turing machines (TMs), as defined in Stephen Wolfram's A New Kind of Science, starting from simple initial conditions can be described as either Class 1 or Class 2. To investigate their potential for intrinsic randomness, some impetus from the environment may be required. Simply starting a Turing machine with a randomly selected tape does not demonstrate innate complexity; however, it may show an ability to sustain such randomness, a rare characteristic among 2,2 TMs. Another approach is to make memory part of the rule: the TM takes some of the information needed to update its tape from a prior step in its evolution. There are several ways of implementing this:

  1. The size of the memory. This controls how far back the TM looks for information when making its next update. Since a TM has a single active cell, the memory size strongly affects the light cone of information that influences each update. Let a memory-size-n TM look back n steps of its evolution; size 0 is then the standard TM with no memory.
  2. The use of memory together with the current state. The information the TM carries from before is the current cell's prior color; in addition, it has the current cell's color and its own current state. This allows several types of rules that use memory. In this project, the prior color and current state will be used, with some potential to extend the type of memory protocol investigated. Queries for the prior color are location-dependent: consider a memory-1 TM whose head is currently at relative position +5*. Its prior color is then the color that the cell at relative position +5 had at the prior step. (A minimal code sketch of this update scheme follows this list.)
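
The memory protocol described above can be made concrete with a short Wolfram Language sketch. The code below is only illustrative and is not part of the project: it assumes the rule is supplied explicitly as an association from {state, current color, prior color} to {new state, new color, head offset}, handles only memory size 1, starts from a blank tape, and uses a cyclic tape of fixed length to keep the bookkeeping simple. The name memoryTMEvolve is hypothetical, not a built-in function.

  memoryTMEvolve[rule_Association, steps_Integer, tapeLength_Integer] :=
   Module[{tape, prevTape, state, pos, history, key, newState, newColor, offset},
    tape = ConstantArray[0, tapeLength];        (* current tape, initially all blank *)
    prevTape = tape;                            (* tape as it was one step earlier: the memory *)
    state = 1;                                  (* initial head state *)
    pos = Ceiling[tapeLength/2];                (* head starts near the middle of the tape *)
    history = {tape};
    Do[
     key = {state, tape[[pos]], prevTape[[pos]]};  (* prior color is read at the head's current cell *)
     {newState, newColor, offset} = rule[key];
     prevTape = tape;                           (* slide the one-step memory window forward *)
     tape[[pos]] = newColor;
     state = newState;
     pos = Mod[pos + offset, tapeLength, 1];    (* wrap around on the cyclic tape *)
     AppendTo[history, tape],
     {steps}];
    history]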

This project's goals are therefore to find the most complex behavior arising from the simplest possible conditions in any of the memory-based TM variations enumerated above, to identify the light cone of information transfer in such machines, and potentially to program such machines and interpret their input and output so as to extend their computational potential.

*Note: Relative head position refers to how many cells to the right of the first cell to be read the cell in question lies. (In TMs with memory, the tapes held in memory are not the first to be read, even though they may be physically placed before the first tape that is read.)
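
As a usage sketch, the function above can be run with an arbitrarily chosen memory-1 rule (purely illustrative, not one of the machines studied in the project) and the resulting tape history viewed as an array of rows, one per step:

  exampleRule = Association[
    {1, 0, 0} -> {2, 1, 1},  {1, 0, 1} -> {1, 1, -1},
    {1, 1, 0} -> {2, 0, -1}, {1, 1, 1} -> {1, 0, 1},
    {2, 0, 0} -> {1, 1, -1}, {2, 0, 1} -> {2, 0, 1},
    {2, 1, 0} -> {1, 0, 1},  {2, 1, 1} -> {2, 1, -1}];

  ArrayPlot[memoryTMEvolve[exampleRule, 200, 401]]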

Project-related Demonstrations

Turing Machines with Memory

Favorite Four-Color, Four-State Turing Machine

Rule 578870483740283859310689