Hi,

Thank you all for the thought-provoking responses :-).  To respond to some
specific points:

Michael wrote:
> First: definitions.  I am presuming that by 'amodal' you mean a
> representation without a conventional 'live' input channel from
> external reality.  Is that correct?  Perhaps you could give us a
> working definition.
Well, there is a somewhat fuzzy boundary here. I would say that a modal
representation is one that uses the same format and obeys the same rules as
some sensory input.  Moreover, the mapping should be approximately
one-to-one: that is, it should be possible to represent most of the
"possible sensory inputs", and most things that can be represented should
correspond to a possible sensory input.  Of course, this can be somewhat
imperfect when related to the real world (e.g., M.C. Escher-type perceptual
impossibilities).  I could add more, but I think this is a reasonable
start.  Any critiques?
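
To make the definition a bit more concrete, here is a toy Python sketch (everything in it, the class names, the scene, and the relations, is invented for illustration and taken from no one's system): a "modal" image representation keeps the format of the sensory input and reads relations off it by the input's own rules, while an "amodal" store is just a bag of propositions with no one-to-one mapping to possible inputs.

# Toy contrast between a "modal" and an "amodal" representation of the same
# scene, following the working definition above.  All names are invented.

from dataclasses import dataclass, field
from typing import List, Tuple

# Modal: a 2-D intensity grid in the same format as the (idealised) sensory
# input, so the mapping to possible retinal images is roughly one-to-one.
@dataclass
class ModalImage:
    pixels: List[List[float]]          # rows x cols of intensities in [0, 1]

    def brighter_than(self, x1, y1, x2, y2) -> bool:
        # Relations are read off the representation by the same rules that
        # govern the input itself.
        return self.pixels[y1][x1] > self.pixels[y2][x2]

# Amodal: symbolic propositions with no commitment to any input channel;
# many representable propositions correspond to no possible sensory input.
@dataclass
class AmodalScene:
    facts: List[Tuple[str, str, str]] = field(default_factory=list)

    def holds(self, relation, a, b) -> bool:
        return (relation, a, b) in self.facts

# The same scene, two ways:
modal = ModalImage(pixels=[[0.9, 0.1],
                           [0.1, 0.1]])
amodal = AmodalScene(facts=[("brighter-than", "patch-A", "patch-B")])

print(modal.brighter_than(0, 0, 1, 0))                      # True, computed from the grid
print(amodal.holds("brighter-than", "patch-A", "patch-B"))  # True, merely looked up
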

Michael also wrote:
> To the question: does an AI need a visual-cortex-analog in order to
> reason spatially?
> The answer is: You betcha fer sure!

However, Tony wrote:
> My own view is that a specialised visual-cortex analog is not required.
> ...
> The key challenge is the semantics of vision. It is the same problem as
> voice recognition. Visemes/phonemes have different meaning dependent on
> context. Determining the context requires the whole mind and its
> general reasoning capabilities.
I think we may be talking at cross-purposes: it is fairly clear that the
visual cortex does a lot more than pre-processing (in fact, more than 50% of
the human neocortex is responsible for vision-related processing).  When you
say "the whole mind and its general reasoning capabilities", do you mean
something distinct from vision that uses its own amodal representation scheme
(e.g., formal logic), or something that relies on the visual cortex (or the
other senses, for that matter) to structure its representations?

Tony concluded:
> So I don't see any reason why spatial processing has to be any
> different to say, genome processing. The real challenge is in the
> pre-processing and for this you have to bring all your real world
> knowledge to bear. 
The reason I think it is different is that many abstract problems can be
analogically mapped onto problems involving spatial relationships to aid in
finding a solution.  I doubt that the same could be said of genomes.  (Also,
on a practical note, spatial knowledge is needed to understand everyday human
speech.)
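
Here is the kind of mapping I have in mind, as a toy Python sketch (the facts
and names are made up, and it assumes the constraints form a simple chain
given in order): an abstract "taller than" problem is laid out as positions on
a mental number line, and a conclusion that was never stated falls out of a
numeric comparison.

# Toy example of mapping an abstract (non-spatial) problem onto a spatial one.
# Facts and names are invented; assumes a simple chain of constraints.

facts = [("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Dave")]  # (taller, shorter)

# Build the chain left-to-right: the tallest person comes first.
order = [facts[0][0]]
for taller, shorter in facts:
    if shorter not in order:
        order.append(shorter)

position = {name: -i for i, name in enumerate(order)}  # spatial coordinate

def taller_than(a, b):
    # The transitive inference is now just a comparison of coordinates.
    return position[a] > position[b]

print(taller_than("Alice", "Dave"))  # True, though never stated explicitly
print(taller_than("Dave", "Bob"))    # False
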

Ben wrote:
> Things like edge detection and the analysis of moving objects clearly
> CAN be done by generic inference, but there's a lot to be said for
> having specific inference parameter settings and inference control
> methods for these problems.
I am more interested in the question of whether non-generic code is needed
for higher-level spatial reasoning than in edge detection and the like:
blind people clearly don't need edge detection, but I still doubt that their
spatial reasoning is amodal and generic...
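
(For completeness, since Ben raised it: the low-level case is easy to make
concrete.  The following is a textbook-style Python sketch, not code from
anyone's system; "edge detection" here is just generic 2-D convolution with
one specific parameter setting, a Sobel-style kernel.)

# Generic convolution plus one specific parameter setting = edge detection.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            out[y][x] = sum(image[y + i][x + j] * kernel[i][j]
                            for i in range(kh) for j in range(kw))
    return out

# The "specific parameter setting": a horizontal-gradient (Sobel) kernel.
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

# A 5x5 image with a vertical edge down the middle.
image = [[0, 0, 1, 1, 1]] * 5

for row in convolve2d(image, sobel_x):
    print(row)   # non-zero values mark where the intensity changes
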

Ben later wrote:
> I think it's worthwhile to experiment both with
> -- vision processing using generic cognitive mechanisms
> -- specialized vision processing algorithms, feeding their output into
> generic cog. mechanisms, and with parameters tunable by cognitive
> schemata
So what representation would the vision processes produce?  I don't think
that nodes+links are very good for reasoning about vision and space...
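
Here is a toy contrast of what I mean (the object names and functions are
invented for illustration): with nodes+links, only relations that were
explicitly asserted can be retrieved, whereas with a simple coordinate map
the relations are derived from positions by the rules of space itself.

# Nodes+links: only asserted relations exist.
links = {("left-of", "cup", "plate")}

def linked(relation, a, b):
    return (relation, a, b) in links

# Coordinate map: relations are computed from positions on demand.
coords = {"cup": (0.0, 0.0), "plate": (1.0, 0.0), "fork": (2.0, 0.0)}

def left_of(a, b):
    return coords[a][0] < coords[b][0]

def between(a, b, c):
    return min(coords[b][0], coords[c][0]) < coords[a][0] < max(coords[b][0], coords[c][0])

print(linked("left-of", "cup", "fork"))   # False: never asserted as a link
print(left_of("cup", "fork"))             # True:  falls out of the geometry
print(between("plate", "cup", "fork"))    # True:  no link for this at all
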

Pei wrote:
> I won't try to implement vision alone, but will do it together with
> motion, in very simple form, of course. To me, the coordination of the
> two is much more important than the quality of each of them in
> isolation.  It means that vision should be studied in the context of
> robotics.
This is a good point.  To be precise, all of the places where I said
"spatial reasoning" should be replaced with "spatio-temporal reasoning".
However, I would note that this kind of "robotics" should not require
embodiment in the real world.
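
For example, something as minimal as the following toy grid world (entirely
invented for illustration) already forces vision and motion to be
coordinated, with no real-world embodiment at all: the agent's only input is
a small "visual" patch, and its only sensible actions are the ones that
patch says are open.

import random

WORLD = ["#####",
         "#...#",
         "#.#.#",
         "#...#",
         "#####"]            # '#' = wall, '.' = open floor

MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def see(r, c):
    # A 3x3 "visual" patch centred on the agent: the only input it gets.
    return [WORLD[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def step(r, c, action):
    # Motion: apply the action unless the agent would walk into a wall.
    dr, dc = MOVES[action]
    nr, nc = r + dr, c + dc
    return (nr, nc) if WORLD[nr][nc] == "." else (r, c)

# A trivial policy that already couples the two: look first, then only pick
# actions the current percept says are open.
r, c = 1, 1
for t in range(5):
    patch = see(r, c)
    open_moves = [a for a, (dr, dc) in MOVES.items()
                  if patch[(dr + 1) * 3 + (dc + 1)] == "."]
    r, c = step(r, c, random.choice(open_moves))
    print(t, (r, c))
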

Moshe

