Mike Tintner writes:

Let's call it the Neo-Maze Test.

I think this type of test is pretty interesting; the objection,
if any, is whether the capabilities of such a robot really
approach what we would want to call general intelligence.

For example, moving from the simple maze to navigating
an office building demands new reasoning abilities, such as
understanding how to work an elevator.  If that ability
is programmed explicitly, it is suspect; if it is somehow
learned, that is certainly more interesting.
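To make the "programmed explicitly" side of that contrast concrete:
the simple-maze stage falls to a few lines of standard search code,
which is exactly why solving it explicitly tells us little.  A
minimal sketch using breadth-first search over a grid (the maze
layout and function name here are made up for illustration):

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze; '#' cells are walls.

    Returns the shortest path as a list of (row, col) cells,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # maps each visited cell to its predecessor
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk back through predecessors to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#'
                    and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

maze = ["S..#",
        ".#.#",
        "...G"]
path = solve_maze(maze, (0, 0), (2, 3))
```

This is precisely the kind of canned solution the test would have to
rule out; the interesting version of the robot acquires the navigation
competence itself rather than shipping with it.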

In some ways this idea resembles another easy-to-define
approach to AGI with tangible intermediate goals: recapitulating
phylogeny.  Start with a fruitfly simulator and build a "brain"
capable of passing the Turing-fruitfly test (not so much fooling
the other fruitflies as being able to flourish in the
fruitfly's world by controlling a fruitfly's body).  Then move
on to the Turing-mouse test, the Turing-dog test, and
the Turing-monkey test.

The reason such an approach is distasteful to most AGI
researchers is the sense that it puts a lot of work into
things that seem completely unrelated to the core
task -- and even once you have a simulated monkey, are
you really very close to AGI?

I don't know the answer.


-----
This list is sponsored by AGIRI: http://www.agiri.org/email