From: "J. Storrs Hall, PhD." <[EMAIL PROTECTED]>
(4) wasting time on "symbol grounding." (this wouldn't be a problem for a
10-year, $50M project, but the question, as put to us, was for 5 years.) A
computer has direct access to enough domains of discourse (such as basic
math) that there's no need to try to (a) simulate the physical world and
then (b) reduplicate a few billion years of evolution working out an
appropriate sensory and motor interface.

These questions were thought-provoking. Here is what comes to my mind.
Symbol grounding is a philosophically tinted argument with a very
important idea embedded in it: that the development of intelligence
necessarily requires direct sensory contact with the world. However,
one can imagine "special environments" where this contact would be much
easier to establish than in the real world. An intelligent computer
monitoring network traffic could be an example of this, and I would
consider it a genuinely intelligent system.

But when one asks for AGI, one is talking about a machine that presents
intelligent behavior that we humans can effortlessly recognize (and
perhaps even "talk" to). For us to communicate with such a machine, it
must necessarily have capabilities similar to those of our own minds
(otherwise we wouldn't understand what it says, or vice versa).
Thus we can try to help evolution along by providing that computer with
artificially designed sensory and motor interfaces, but these
interfaces will have to be used by the system to develop, by itself,
its own perceptual system. And that requires that the system be in
direct interaction with its environment. In other words (and summarizing
a lot), an AGI that can communicate with us will only succeed if it
follows the same kind of developmental path as a child. We cannot think
in terms of "knowledge acquisition", "spoon-fed knowledge bases such as
Cyc", "inference algorithms", or "logical reasoning". We will only
succeed if we think about "educating" that system.

Sergio Navega.



----- Original Message ----- From: "J. Storrs Hall, PhD." <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Monday, September 25, 2006 3:05 PM
Subject: Re: [agi] Failure scenarios


99% of AI projects failed to become AGI simply by being diverted into
applications. The problem is that it is MUCH easier to write a program to do
X than it is to write a system that can learn to do X without having been
told about X to begin with, for any X.

Assuming we can stay focussed on AGI, the main failure modes are probably:

(1) insufficient horsepower (echoing Ben) -- I gave up on one design in the
90s for that reason, which I may pick up again now that I have 1000x the
horses to play with...

(2) not enough depth and generality in the design. A typical case would be
adopting one specific logic or modelling method that limits the domain of
discourse the system could even conceivably think about.

(3) TOO MUCH depth and generality in the design. This is what happened to
Eurisko. If your system ever looks like it's biting itself in the back, it's
working in too big a space.

(4) wasting time on "symbol grounding." (this wouldn't be a problem for a
10-year, $50M project, but the question, as put to us, was for 5 years.) A
computer has direct access to enough domains of discourse (such as basic
math) that there's no need to try to (a) simulate the physical world and
then (b) reduplicate a few billion years of evolution working out an
appropriate sensory and motor interface.

But the failure mode that EVERY attempted AGI has hit to date is:

(0) Wind-up toy. They didn't really have a general learning capacity, so they
learned to the edges of their built-in potential and stopped. Classic AI
example: AM.


On Monday 25 September 2006 08:52, Joshua Fox wrote:
...
To those doing AGI development: If, at the end of the development stage of
your project  -- say, after approximately five years -- you find that it
has failed technically to the point that it is not salvageable, what do you
think is most likely to have caused it?...


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
