SOME HISTORICAL NOTES
From: Stephen Wolfram, A New Kind of Science, Notes for Chapter 10: Processes of Perception and Analysis
Section: Visual Perception
Page 1076
History [of visual perception]. Ever since antiquity the visual arts have yielded practical schemes and sometimes also fairly abstract frameworks for determining what features of images will have what impact. In fact, even in prehistoric times it seems to have been known, for example, that edges are often sufficient to communicate visual forms, as in the pictures below.
Visual perception has been used for centuries as an example in philosophical discussions about the nature of experience. Traditional mathematical methods began to be applied to it in the second half of the 1800s, particularly through the development of psychophysics. Studies of visual illusions around the end of the 1800s raised many questions that were not readily amenable to numerical measurement or traditional mathematical analysis, and this led in part to the Gestalt approach to psychology which attempted to formulate various global principles of visual perception.
In the 1940s and 1950s, the idea emerged that visual images might be processed using arrays of simple elements. At a largely theoretical level, this led to the perceptron model of the visual system as a network of idealized neurons. And at a practical level it also led to many systems for image processing (see below), based essentially on simple cellular automata (see page 930). Such systems were widely used by the end of the 1960s, especially in aerial reconnaissance and biomedical applications.
Attempts to characterize human abilities to perceive texture appear to have started in earnest with the work of Bela Julesz around 1962. At first it was thought that the visual system might be sensitive only to the overall autocorrelation of an image, given by the probability that randomly selected points have the same color. But within a few years it became clear that images could be constructed - notably with systems equivalent to additive cellular automata (see below) - that had the same autocorrelations but looked completely different. Julesz then suggested that discrimination between textures might be based on the presence of "textons", loosely defined as localized regions like those shown below with some set of distinct geometrical or topological properties.
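As a rough illustration of the autocorrelation notion mentioned above, the following is a minimal Wolfram Language sketch; the function name, the restriction to a 0/1 array, and the wrap-around boundary are assumptions made here for illustration, not definitions from the book. It estimates the probability that two cells separated by a given offset have the same color.

    (* probability that two cells of a 0/1 array separated by a given offset agree;
       illustrative helper, not from the book; RotateLeft wraps around at the edges *)
    sameColorProbability[array_, offset_] :=
     With[{shifted = RotateLeft[array, offset]},
      N[1 - Mean[Flatten[Abs[array - shifted]]]]]

    (* example: a random binary image, offset of one cell to the right *)
    sameColorProbability[RandomInteger[1, {100, 100}], {0, 1}]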
In the 1970s, two approaches to vision developed. One was largely an outgrowth of work in artificial intelligence, and concentrated mostly on trying to use traditional mathematics to characterize fairly high-level perception of objects and their geometrical properties. The other, emphasized particularly by David Marr, concentrated on lower-level processes, mostly based on simple models of the responses of single nerve cells, and very often effectively applying ListConvolve with simple kernels, as in the pictures below.
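For concreteness, here is a minimal sketch of the kind of ListConvolve operation referred to above; the particular kernel (a simple discrete Laplacian) and the random binary test image are illustrative choices, not ones specified in the book.

    kernel = {{0, 1, 0}, {1, -4, 1}, {0, 1, 0}};  (* simple discrete Laplacian kernel *)
    data = RandomInteger[1, {40, 40}];            (* random binary image as a stand-in *)
    ArrayPlot[ListConvolve[kernel, data]]         (* linear local edge-like response *)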
In the 1980s, approaches based on neural networks capable of learning became popular, and attempts were made in the context of computational neuroscience to create models combining higher- and lower-level aspects of visual perception.
The basic idea that early stages of visual perception involve extraction of local features has been fairly clear since the 1950s, and researchers from a variety of fields have invented and reinvented implementations of this idea many times. But mainly through a desire to use traditional mathematics, these implementations have tended to be implicitly restricted to using elements with various linearity properties - typically leading to rather unconvincing results. My model is closer to what is often done in practical image processing, and apparently to how actual nerve cells work, and in effect assumes highly nonlinear elements.
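As a minimal sketch of the distinction being drawn, one can take a linear convolution like the one above and follow it with a threshold, so that each element reports only the presence or absence of a local feature rather than a graded linear response; the kernel, threshold value, and UnitStep form are assumptions for illustration, not the model defined in the book.

    (* nonlinear local feature extraction: convolve, then threshold *)
    nonlinearFeature[data_, kernel_, t_] := UnitStep[ListConvolve[kernel, data] - t]

    ArrayPlot[nonlinearFeature[RandomInteger[1, {40, 40}],
      {{0, 1, 0}, {1, -4, 1}, {0, 1, 0}}, 1]]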