Hi,

I mostly agree with Ben. As for the research approach, I'd rather depend on
general cognition at the very beginning, just to see how far we can go.
Then, in a second stage, special mechanisms can be introduced for efficiency
considerations.

I won't try to implement vision alone, but will do it together with motion,
in a very simple form, of course. To me, the coordination of the two is much
more important than the quality of each of them in isolation.  This means
that vision should be studied in the context of robotics.

As for Moshe's original questions, to me "spatial reasoning" is special only
in that many of the "concepts" it works on are mainly defined by visual
sensation/perception, as well as by their coordination with motor action.  I
haven't felt the need to introduce special inference rules for spatial
reasoning (though they may be good for efficiency).  Anyway, I don't like
the current "spatial reasoning" research in AI (for example, see
http://www.cs.albany.edu/~amit/tutijcai.html). It depends too much on human
ontological knowledge.

Pei

----- Original Message -----
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, October 30, 2002 7:44 AM
Subject: RE: [agi] Spatial Reasoning: Modal or Amodal?


>
> Hi all,
>
> First, an interesting essay on this topic (with a terrible color scheme),
> by Keith Hoyes, is at:
>
> http://www.incaresearch.com/chapters/draft12.htm
>
> He argues that 3D simulation (sometimes abstract, but even then, done by
> analogy to the visual world) is a key cognitive tool in the human mind.
>
> My own take is as follows.
>
> 1)
> I think it's perfectly *possible* for an AGI to handle vision processing
> adeptly, purely as a special case of general cognition.  General knowledge
> representation schemes can be used to represent visual data, and general
> cognitive mechanisms can be used to analyze visual data.  As usual, though,
> one has a question of efficiency.  How inefficient (space- and time-wise)
> would it be for a system to take this course?  In this case, I guess the
> relative space inefficiency involved in taking a very general approach is
> a not-too-huge constant, but the relative time inefficiency could be quite
> large.
>
> 2)
> It's clear that vision is an area where significant efficiency
> improvements (hence significant intelligence improvements, given fixed
> resources) can be obtained by introducing some specialized methods.  For
> instance, there's a lot of mileage to be gotten by Fourier analysis in
> various forms (windowed Fourier transforms, wavelets, or whatever).
> Things like edge detection and the analysis of moving objects clearly CAN
> be done by generic inference, but there's a lot to be said for having
> specific inference parameter settings and inference control methods for
> these problems.
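The kind of specialized pre-processing mentioned above can be sketched very compactly: edge detection, for example, is essentially convolution with a gradient kernel. The following pure-Python sketch is only an illustrative toy (the image, kernel, and function names are invented for this example, not part of any system discussed in the thread):

```python
# Minimal sketch of specialized visual pre-processing: detecting a vertical
# edge with a simple horizontal-gradient kernel (Prewitt-like, 1x3).

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation on nested lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0
            for j in range(kh):
                for i in range(kw):
                    acc += image[y + j][x + i] * kernel[j][i]
            row.append(acc)
        out.append(row)
    return out

# A toy image with a vertical step edge: dark on the left, bright on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

kernel = [[-1, 0, 1]]  # responds where intensity changes left-to-right

edges = convolve2d(image, kernel)  # peaks at the 0 -> 9 transition
```

A generic inference engine could in principle learn such a kernel, but hard-coding it (or exposing it as a tunable parameter) is exactly the kind of efficiency shortcut being discussed.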
>
> A superintelligent AGI system that could modify its own code, but was not
> given any vision-specific wiring up front, might well end up creating
> specialized schemata for vision processing and installing them within
> itself.
>
> 3)
> Human vision is great but not perfect.  The particular algorithms involved
> in human vision are natural for human wetware, not necessarily for digital
> hardware.  So to vision processing we may also apply the maxim: the ideal
> algorithms/datastructures for an AGI running on nonhuman hardware may not
> be the human-brain algorithms/datastructures.
>
>
> 4)
> I agree with Keith that 3D visual simulation is a valuable cognitive tool,
> but the example of people blind from birth is sobering.  I more strongly
> feel that "physical world simulation" is a valuable cognitive tool, and 3D
> visual simulation is one useful type of physical world simulation.
>
>
> I think it's worthwhile to experiment both with
>
> -- vision processing using generic cognitive mechanisms
>
> -- specialized vision processing algorithms, feeding their output into
> generic cog. mechanisms, and with parameters tunable by cognitive schemata
>
> My inclination would be to start with a bug's-eye type of vision.  Big
> pixels, black and white only, and an "experiential learning" context so
> that vision processes can be tuned based on their interactions with
> goal-oriented action and cognition processes.
>
> -- Ben
>
>
>
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:owner-agi@;v2.listbox.com]On
> > Behalf Of Tony Lofthouse
> > Sent: Wednesday, October 30, 2002 4:48 AM
> > To: [EMAIL PROTECTED]
> > Subject: RE: [agi] Spatial Reasoning: Modal or Amodal?
> >
> >
> > Hi Moshe,
> >
> > A great starter topic. To avoid boring you all with my bio I'll just
> > point you at http://www.realai.net.
> >
> > >Some key questions arise:
> > >1) Even if spatial reasoning is amodal, it seems very "tightly bound"
> > >to the visual modality, and even blind people still have visual
> > >cortexes. Would an AGI need a visual-cortex-analog in order to handle
> > >spatial reasoning, or could it be efficiently represented in an amodal
> > >format?
> >
> > My own view is that a specialised visual-cortex analog is not required.
> > Much of the human visual cortex is devoted to what I term
> > pre-processing. The current evolved solution is nature's way of
> > identifying 'important' aspects of the visual stream. Simplifying
> > massively, visemes (lines/edges/etc.), motion and colour are identified.
> > Any computer vision researcher can testify to how difficult it is to
> > build a real-world model from this pre-processing.
> >
> > The key challenge is the semantics of vision. It is the same problem as
> > voice recognition. Visemes/phonemes have different meanings dependent on
> > context. Determining the context requires the whole mind and its general
> > reasoning capabilities.
> >
> > >2) How does spatial reasoning interact with other forms of thought?
> > >For example, I personally structure almost all of my mental life in
> > >terms of space and visualization: I honestly cannot imagine how I would
> > >be able to think without "seeing things in my head"...
> >
> > Much like you, Moshe, I'm also a totally visual person. This occurs on
> > two levels: visual input and visual thinking. I much prefer my learning
> > to be visually based; aural-based learning for me is very difficult. The
> > same goes for thinking: for me it's very much about pictures and spaces.
> > So I think there are two aspects to consider here: vision representation
> > (what I term the pre-processing) and manipulation of the representation
> > within the mind.
> >
> > Now it is possible that the vision sense representation has a different
> > form to 'visual thinking representation' but I think this unlikely due
> > to the duplication aspect. There is a good deal of research that shows
> > real world objects are represented as 3-dimensional regions of neurones
> > within the brain. For the purpose of this discussion I am referring to
> > mammals which possess stereoscopic vision.
> >
> > I have never understood why computer vision scientists have not focused
> > more on stereo vision. It is a much simpler problem to reconstruct a
> > 3-dimensional world from two stereoscopic images than from one image.
> > This does appear to be changing now though.
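The simplification stereo gives can be made concrete: for a rectified stereo pair, the standard pinhole relation Z = f * B / d recovers depth from disparity directly, with no single-image scene understanding needed. The numbers below are made up purely for illustration, and the function name is invented for this sketch:

```python
# Depth from a rectified stereo pair: Z = f * B / d, where
#   f = focal length in pixels, B = baseline between the cameras (metres),
#   d = disparity (pixel offset) of a matched point between the two images.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point seen 40 px apart by cameras 0.1 m apart, with an 800 px focal
# length, lies 2.0 m away.
z = depth_from_disparity(800.0, 0.1, 40.0)
```

The hard part in practice is finding the matched point (the correspondence problem), but once a match exists, the geometry is a one-line formula.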
> >
> > So back to the question; How does spatial reasoning interact with other
> > forms of thought?
> >
> > At a simplistic level you could argue that there is no problem. You can
> > use a 3-dimensional coordinate system to represent locations in real-
> > world space and manipulate these vectors just the same as any other
> > numbers.
> >
> > Anna Wierzbicka's work on universal semantic primes identifies several
> > spatial primitives: under, above, where. These could easily be
> > represented in the above 3-D coordinate system:
> >
> > Above: y1 > y2
> > Below: y1 < y2
> > Where: x, y, z
> >
> > You could easily add 'near' and 'far' to this list on a similar basis.
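The coordinate encoding of these primes sketched above can be written out directly. This is a toy sketch only; the predicate names, the tuple representation, and the 'near' threshold are all invented for illustration:

```python
import math

# Spatial primes as predicates over (x, y, z) points, with y as height.

def above(a, b):
    return a[1] > b[1]

def below(a, b):
    return a[1] < b[1]

def near(a, b, threshold=1.0):
    # 'near'/'far' need a scale, supplied here as an arbitrary threshold.
    return math.dist(a, b) <= threshold

def far(a, b, threshold=1.0):
    return math.dist(a, b) > threshold

# 'where' is simply the coordinate itself:
cup = (0.0, 1.0, 0.0)
table = (0.0, 0.5, 0.0)
# above(cup, table) holds, and with the default threshold so does near(cup, table).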
> >
> > Yet this seems very unsatisfying to me. Reducing the visual experience
> > to a stream of numbers seems to lose the richness of the
> > interconnectedness of the real world. Maybe this is just an illusion.
> >
> > Clearly a stream of 3d coordinates can be used to create a very
> > convincing representation of the real world. You only have to look at
> > the state of the art in computer games or cinema to see this.
> >
> > So I don't see any reason why spatial processing has to be any different
> > from, say, genome processing. The real challenge is in the
> > pre-processing, and for this you have to bring all your real-world
> > knowledge to bear.
> >
> > The human visual system is massively interconnected, in fact, there are
> > significantly more feedback connections than feed forward, on the order
> > of 1000. So you could say that to process a visual scene, for each step
> > forward there are 1000 steps back. Clearly in a serial process this
> > would get you nowhere fast. But of course in the parallel brain those
> > 1000 steps back reinforce certain paths which eventually lead to a
> > stable representation.
> >
> > So in summary, I don't see a need for a specialised form of thinking for
> > spatial reasoning. The challenge is in the pre-processing, and this is
> > where the breakthrough needs to come. Once you have a representation,
> > it is relatively straightforward to carry out spatial reasoning.
> >
> > Comments?
> >
> > T
> >
> >
> > -------
> > To unsubscribe, change your address, or temporarily deactivate
> > your subscription,
> > please go to http://v2.listbox.com/member/
> >
>

