Moshe wrote:
>
> Michael wrote:
> > First: definitions.  I am presuming that by 'amodal' you mean a
> > representation without a conventional 'live' input channel from
> > external reality.  Is that correct?  Perhaps you could give us a
> > working definition.
> Well, there is a somewhat fuzzy boundary here. I would say that a modal
> representation is one that uses the same format and obeys the
> same rules as
> some sensory input.  Moreover, the mapping should be approximately
> one-to-one: that is, it should be possible to represent most of the
> "possible sensory inputs", and most things that can be represented should
> correspond to a possible sensory input.

I think I understand your intuitive idea here, but I'm not yet happy with
your definition...

First, about the 1-to-1 part.  Obviously, the mapping between "retinal
stimulation patterns" and "cognitively meaningful renditions of visual
scenes" isn't anywhere near 1-to-1.  But I think the 1-to-1-ness is lost
very early in the vision pipeline.  Probably it's lost along the optic
nerve!!

I'm not sure what "approximately 1-to-1" means.  Does it mean that almost
all sensory patterns are mapped into only one mental pattern?  Or does it
mean that on average each sensory pattern is only mapped into a handful of
mental patterns?  Either way I'm not sure I believe it...
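Just to pin down the second reading -- that on average each mental pattern is the image of only a handful of sensory patterns -- here is a toy sketch. Everything in it (the dict-as-mapping encoding, the threshold of 2.0) is an illustrative assumption, not anything from Moshe's definition:

```python
from collections import defaultdict

def preimage_sizes(mapping):
    """Count how many sensory patterns map onto each mental pattern."""
    counts = defaultdict(int)
    for sensory, mental in mapping.items():
        counts[mental] += 1
    return counts

def approx_one_to_one(mapping, max_avg=2.0):
    """One reading of 'approximately 1-to-1': on average, each mental
    pattern is the image of only a handful of sensory patterns.
    The max_avg cutoff is arbitrary, chosen just for illustration."""
    counts = preimage_sizes(mapping)
    avg = sum(counts.values()) / len(counts)
    return avg <= max_avg

# toy example: four sensory patterns collapse onto three mental patterns
mapping = {"s1": "m1", "s2": "m1", "s3": "m2", "s4": "m3"}
print(approx_one_to_one(mapping))  # average preimage size is 4/3
```

Of course, the real retina-to-cortex mapping would fail any such test badly, which is the point of the objection above.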

Next, it seems to me there are two ways to interpret the idea of
"corresponding to a possible sensory input":

-- corresponding to the actual sensory input to the system
-- corresponding to our cultural model of the world supplying the sensory
input to the system

For instance, the mapping from 2D retinal images to 2D sections of cortical
sheet is highly "correspondent to sensory input" -- in the sense that these
2D sections are "modal" representations of the retinal images.

But when a 3D sketch is produced, one is moving to a less modal
representation, in that the sensory input is 2D not 3D.  On the other hand,
one is moving to a representation that corresponds *better* to the external
world as we conventionally model it (as a 3D world).

There may really be two questions here:

1) What is the value of a specialized {knowledge representation + associated
dynamic} package that closely reflects the structure of *the data coming in
through a given sensory modality* ?  We may call this
*modality-correspondence*.

2) What is the value of a specialized {knowledge representation + associated
dynamic} package that closely reflects the structure of *the physical world*
(or more properly, of a particular relatively-low-dimensional projection of
the insanely-high-dimensional physical world, such as the set of objects
visible under the humanly perceivable subset of the spectrum) ?  We may call
this *world-correspondence.*

In the human visual system we see subsystems that are
modality-correspondent, AND subsystems that are world-correspondent.

In the tactile system, we see world-correspondence and
modality-correspondence merged together, with different brain regions
corresponding to different parts of the physical body...

In the acoustic system, similar to the visual system, there is a difference
between modality-correspondent subsystems and world-correspondent subsystems --
but a smaller difference.  Part of what makes the visual system so big &
complex is the need to transition from sense data to modality-correspondent
subsystems to world-correspondent subsystems and finally to the more
abstractly cognitive subsystems of the mind.  More layers than in other
human modalities.

Now, getting back to the 1-to-1 bit, *one* way to formalize what we mean by
"X corresponds to Y" is to say that there are a lot of common patterns
between X and Y.  If we have a retinal pattern X, and a cortical sheet
pattern Y corresponding to X, then there are a lot of common patterns
between X and Y, much more so than between X and an abstract cognitive
correlate of X.  So if we calculated what I call the "structure" of X [the
fuzzy set of all patterns in X], and also the structure of Y, then

(structure(X)) intersect (structure(Y))

will be large.

I'm not sure this is entirely satisfactory as a formalization of your notion
of correspondence though; I'll think about it some more...
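To make the structure-intersection idea concrete, here is a toy sketch in which "structure" is just the set of all substrings of length >= 2 -- a stand-in for "the fuzzy set of all patterns in X", which is of course a much richer notion. I've normalized the overlap (Jaccard-style) so scores are comparable across pattern sets of different sizes; the formula above is just the raw intersection size. All the strings and names are hypothetical:

```python
def structure(x, min_len=2):
    """Toy 'structure' of a string: the set of all substrings of length
    >= min_len, standing in for the fuzzy set of all patterns in X."""
    return {x[i:j] for i in range(len(x))
                   for j in range(i + min_len, len(x) + 1)}

def correspondence(x, y):
    """Normalized overlap of the two pattern sets:
    |structure(x) & structure(y)| / |structure(x) | structure(y)|."""
    sx, sy = structure(x), structure(y)
    return len(sx & sy) / len(sx | sy)

retinal  = "abcabcabd"   # stand-in for a retinal pattern X
cortical = "abcabcxbd"   # a near-copy: many shared patterns
abstract = "zzqqppzzqq"  # an 'abstract cognitive correlate': none shared

# the cortical-sheet pattern shares far more structure with the retinal
# pattern than the abstract correlate does
print(correspondence(retinal, cortical) > correspondence(retinal, abstract))
```

On this toy measure the retinal/cortical pair scores high and the retinal/abstract pair scores zero, which matches the intuition -- though whether substring-counting captures "pattern" in the intended sense is exactly what remains to be worked out.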

Anyway, however one defines correspondence, one definitely can distinguish
modality-correspondence from world-correspondence...

> Ben later wrote:
> > I think it's worthwhile to experiment both with
> > -- vision processing using generic cognitive mechanisms
> > -- specialized vision processing algorithms, feeding their output into
> > generic cog. mechanisms, and with parameters tunable by cognitive
> > schemata
> So what representation would the vision processes produce?  I don't think
> that nodes+links are very good for reasoning about vision and space...

I have a fair bit to say about this but I'm tired, so I'll address this in
the morning ;)

However, I'll say now that neurons are basically nodes and synapses are
basically links, so it's clear that a node-link data structure in itself
isn't way off...
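In the crudest terms, the neuron/synapse analogy looks something like this -- a minimal sketch of a node-link structure with weighted links playing the role of synapses. The class and names are mine, purely illustrative:

```python
class Node:
    """A node standing in (very loosely) for a neuron."""
    def __init__(self, name):
        self.name = name
        self.links = {}  # target Node -> weight, loosely a synapse

    def link(self, other, weight=1.0):
        """Attach a weighted link to another node."""
        self.links[other] = weight

# hypothetical example: an edge-detector node feeding a contour node
a, b = Node("edge-detector"), Node("contour")
a.link(b, weight=0.8)
print(a.links[b])  # 0.8
```

Whether such a structure is *good* for reasoning about vision and space is a separate question from whether it can represent the connectivity at all -- which is the only claim being made here.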

-- Ben
