On Mon, 08 Feb 2016 12:57:00 -0800, Stuart McKelvie wrote:
Dear Tipsters,

I like D. O. Hebb's distinction between sensation and
perception as a way of distinguishing bottom-up and
top-down processing.

One thing to keep in mind is that the "bottom-up" versus
"top-down" distinction originates in computer science
or what we now refer to as artificial intelligence.  Alan
Turing proposed the bottom-up approach in a 1947
lecture and 1948 paper as operating in a simple
neural network model (the first real neural network
model, Rosenblatt and perceptrons notwithstanding).
Hebb's (1949) "Organization of Behavior" does not
mention Turing's work, but what it describes is
sensation as Stuart describes below.

Chris Green can confirm if Hebb's distinction is in
fact a classical distinction going back to the 19th
century.  Today, the distinction is fuzzy given that
we know that stimuli can activate knowledge structures
and schemas, so it becomes a single process of
sensation to cognition, with perception as an intermediate
step for simple stimuli.  But I could be wrong.

Hebb defines sensation as activity in the sense organ
and corresponding sensory receiving areas of the brain.
You can easily illustrate this with a diagram, say for the
visual system.

More importantly, processing stimulus information leads
the system to learn recurring patterns of stimulation, which
over time enables the neural network to recognize symbols.
This would be a purely bottom-up system and can be described
mathematically -- it implies that such a neural network can be
implemented in biological as well as nonbiological systems.
For more background on these points see the following
reference:
Copeland, B. J., & Proudfoot, D. (1996). On Alan Turing's
anticipation of connectionism. Synthese, 108(3), 361-377.
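To make the bottom-up idea concrete, here is a minimal sketch in Python of Hebbian-style pattern learning -- my own modern illustration, not Turing's or Hebb's exact formulation. Repeated exposure to a pattern strengthens connections between co-active units, and the trained network then settles back to the stored pattern even from a degraded input:

```python
# Minimal Hebbian pattern learner (illustrative sketch only).
# Patterns are vectors of +1/-1 unit activations.

def hebbian_train(patterns):
    """Build a weight matrix with Hebb's rule: co-active units
    get a positive connection, anti-correlated units a negative one."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, probe, steps=5):
    """Settle the network from a (possibly noisy) probe by repeatedly
    updating each unit toward the sign of its weighted input."""
    state = list(probe)
    n = len(state)
    for _ in range(steps):
        for i in range(n):
            total = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if total >= 0 else -1
    return state

pattern = [1, -1, 1, -1, 1, -1]
w = hebbian_train([pattern])
noisy = [1, -1, -1, -1, 1, -1]   # one unit flipped
print(recall(w, noisy))          # settles back to [1, -1, 1, -1, 1, -1]
```

Nothing here requires biological hardware, which is the point of the Copeland & Proudfoot argument: the same learning rule runs in neurons or in code.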

For a superficial statement on Turing's contribution, see the
following entry on AlanTuring.net:
http://www.alanturing.net/turing_archive/pages/reference%20articles/what_is_AI/What%20is%20AI09.html
See also:
www.cs.virginia.edu/~robins/Alan_Turing%27s_Forgotten_Ideas.pdf

The Turing 1948 paper is titled "Intelligent machinery".

Perception is then what occurs when this information is sent
on to other parts of the brain and interpreted in the light of context
and past experience (top-down processing).

In Turing's framework, a list or a representation is stored in memory
and in top-down processing, activating this representation guides
processing.  In a mixed system, bottom-up processing matches
its output to the representation as confirmation that its processing
was correct.  The interaction of bottom-up and top-down processing
is somewhat redundant, because once the input network learns to
recognize a pattern, it should function stably if the stimulus doesn't
vary.  Building in top-down processing allows one to deal with
such variability where the stored representation is a prototypical
representation and has a category containing variations of the
prototype.
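A toy version of this prototype matching might look like the following sketch (the feature vectors and tolerance are hypothetical, chosen only for illustration): bottom-up output is compared against stored prototypes, and a category is confirmed only if the input is a close enough variation on some prototype.

```python
# Hypothetical sketch of top-down confirmation by prototype matching.
# Stimuli and prototypes are binary feature vectors.

def hamming(a, b):
    """Number of features on which two vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def match_to_prototype(output, prototypes, tolerance=1):
    """Return the category whose prototype the bottom-up output varies
    least from, or None when nothing is close enough (no confirmation)."""
    best_name, best_dist = None, None
    for name, proto in prototypes.items():
        d = hamming(output, proto)
        if best_dist is None or d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= tolerance else None

prototypes = {"A": [1, 1, 0, 1], "B": [0, 0, 1, 1]}
print(match_to_prototype([1, 1, 1, 1], prototypes))  # "A" (one feature off)
print(match_to_prototype([0, 1, 1, 0], prototypes))  # None (too far from both)
```

The tolerance parameter is what lets one stored prototype stand for a whole category of variations, which is the work that pure bottom-up recognition cannot do on its own.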

Gibson, I believe, would argue that an internal representation is not
necessary because all of the information is present in the stimulus:
a trained neural network extracts the information and could execute
an action based on this output.  In a perceptual system this might
work, but even Neisser had to accept that mental representations exist
in an ecological approach to cognition because they are necessary for
other mental operations.

Re: Annette's question of human use of template models of pattern
recognition:  take a look at Selfridge & Neisser (1960) Pattern
Recognition by Machine, Scientific American where they present
Pandemonium, a perceptron-style neural network pattern recognition
system.  They discuss the template model and its problems and
why it fails at character recognition.  However, Hubel and Wiesel
feature detectors can be thought of as microtemplates (e.g., lines
of different orientation, curves, etc.) which are combined in the
visual neural network involved in bottom-up processing.  To take it
one step further, Biederman's geons can be thought of as abstract
templates which are used as components to form objects that
ultimately are confirmed by matching to existing representations
of objects in the world.  Geons, being simple geometric forms, can be
built into the nervous system (like feature detectors) because they
represent the basic geometry of the real world and define the shape
of objects therein; alternatively, they can be derived by training the
neural network on objects (but this would take much longer than activating
built-in geons, so built-in geons would confer an evolutionary advantage).
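In the loose spirit of Pandemonium -- and this is my own illustration, not Selfridge & Neisser's implementation -- the microtemplate idea can be sketched as feature "demons" that detect strokes and letter demons that "shout" in proportion to how well the detected features match their expected set (the stroke inventory below is hypothetical):

```python
# Pandemonium-style sketch: feature detectors as microtemplates,
# combined to recognize whole characters.  Feature names are invented
# for illustration.

# Hypothetical stroke features for a few block letters.
LETTER_FEATURES = {
    "A": {"left_diag", "right_diag", "horizontal_bar"},
    "H": {"left_vertical", "right_vertical", "horizontal_bar"},
    "L": {"left_vertical", "bottom_bar"},
}

def recognize(detected_features):
    """Decision demon: pick the letter whose expected features best
    match what the feature demons detected."""
    def shout(letter):
        expected = LETTER_FEATURES[letter]
        hits = len(expected & detected_features)
        misses = len(expected - detected_features)
        return hits - misses
    return max(LETTER_FEATURES, key=shout)

print(recognize({"left_vertical", "right_vertical", "horizontal_bar"}))  # "H"
```

Swap the stroke features for Biederman-style geons and the same combination scheme sketches object recognition rather than character recognition.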

-Mike Palij
New York University
[email protected]

