Error-correcting codes
In many information transmission and storage applications one needs to be able to recover data even if some errors are introduced into it. The standard way to do this is to set up an error-correcting code in which blocks of m original data elements are represented by codewords of length n that in effect include some redundant elements. Then, somewhat in analogy to retrieving closest memories, one can take a sequence of length n that one receives and find the codeword that differs from it in the fewest elements. If the codewords are chosen so that every pair differs in at least r elements (or equivalently, has so-called Hamming distance at least r), then errors in up to Floor[(r - 1)/2] elements can be corrected, and finding suitable codewords is like finding packings of spheres in n-dimensional space. In practice it is common to use so-called linear codes, which can be analyzed using algebraic methods, and for which the spheres are arranged in a repetitive array. An example is the Hamming codes with n = 2^s - 1, m = n - s, r = 3, invented by Marcel Golay in 1949 and Richard Hamming in 1950. These can be implemented by first defining
PM[s_] := IntegerDigits[Range[2^s - 1], 2, s]
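For example, PM[3] gives the binary digit sequences of the positions 1 through 7, and should evaluate to {{0, 0, 1}, {0, 1, 0}, {0, 1, 1}, {1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}}.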
Blocks of data of length m can then be encoded with
Join[data, Mod[data . Select[PM[s], Count[#, 1] > 1 &], 2]]
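As a small illustration, taking s = 3 and a sample data block {1, 0, 1, 1} (chosen here just for the example), one can evaluate

With[{s = 3, data = {1, 0, 1, 1}}, (* append check elements from parity patterns with more than one 1 *) Join[data, Mod[data . Select[PM[s], Count[#, 1] > 1 &], 2]]]

which should give the codeword {1, 0, 1, 1, 0, 1, 0}, in which the last s = 3 elements are the redundant check elements.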
A block of length n containing at most one error can in turn be decoded with
Drop[(If[# == 0, data, MapAt[1 - # &, data, #]] &)[FromDigits[Mod[data . PM[s], 2], 2]], -s]
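Continuing the example, if the third element of the codeword above gets flipped, then with the received block as input

With[{s = 3, data = {1, 0, 0, 1, 0, 1, 0}}, (* codeword from above with an error at position 3 *) Drop[(If[# == 0, data, MapAt[1 - # &, data, #]] &)[FromDigits[Mod[data . PM[s], 2], 2]], -s]]

should recover the original block {1, 0, 1, 1}; the quantity FromDigits[Mod[data . PM[s], 2], 2] here comes out as 3, directly giving the position of the error.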
A number of families of linear codes are known, together with a few nonlinear ones. But they all tend to be based on rather special mathematical structures that seem unlikely to occur in any system like the brain.