(4) wasting time on "symbol grounding." (This wouldn't be a problem for a
10-year, $50M project, but the question put to us was about a 5-year effort.)
A computer has direct access to enough domains of discourse (such as basic
math) that there's no need to try to (a) simulate the physical world and then
(b) reduplicate a few billion years of evolution working out an appropriate
sensory and motor interface.

My own view is that symbol grounding is not a waste of time ... but
**exclusive reliance** on symbol grounding is a waste of time.
Novamente combines the grounding of symbols in simulated embodied
experience with the ingestion of information from existing databases.
I believe this sort of combination is optimal, rather than relying
purely on data sources with no attention to embodied experience....

But the failure mode that EVERY attempted AGI has hit to date is:

(0) Wind-up toy. It didn't really have a general learning capacity, so it
learned to the edges of its built-in potential and stopped. Classic AI
example: AM (Lenat's Automated Mathematician).

This is a tricky point....  For example, it is obvious that Novamente
has a general learning capacity in the trivial sense that, given
enough computational resources, it can learn anything....  But the
same could also be said about a lot of much simpler AI systems.  So
the real question is how learning ability scales, in terms of the
amount of computational horsepower needed to solve problems of a given
complexity...

My own view is that all serious learning algorithms are inevitably
going to scale exponentially -- so the whole art of AGI design is in
figuring out appropriate tricks for making the exponent and the
constant outside the exponential function "not too large" for problem
classes of practical import...
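To make the point above concrete, here is a toy sketch assuming a cost model of the form cost(n) = c * b**n for a problem of complexity n. The function names and all numbers are illustrative assumptions, not anything from Novamente; the sketch only shows how much the tractable problem size moves when the base b and constant c shrink, even though both algorithms scale exponentially.

```python
# Toy illustration: under exponential scaling cost(n) = c * b**n,
# the "art" is shrinking the base b and constant c, which moves the
# boundary of practically solvable problems. Numbers are arbitrary.

def learning_cost(n, base, constant):
    """Assumed computational cost for a problem of complexity n."""
    return constant * base ** n

def max_tractable_complexity(budget, base, constant):
    """Largest complexity n whose cost fits within the compute budget."""
    n = 0
    while learning_cost(n + 1, base, constant) <= budget:
        n += 1
    return n

budget = 1e12  # arbitrary compute budget

# A naive algorithm: large base and constant.
naive = max_tractable_complexity(budget, base=10, constant=1000)

# Same exponential form, but with a smaller base and constant.
clever = max_tractable_complexity(budget, base=2, constant=10)

# Both are exponential, yet the second handles far larger problems
# within the same budget (9 vs. 36 under these illustrative numbers).
print(naive, clever)
```

The qualitative takeaway holds for any choice of numbers: reducing the base of the exponential buys far more tractable complexity than any constant-factor speedup.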

-- Ben G

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
