I don't think this is all that crazy an idea.  A reasonable
number of people think that intelligence is essentially about
"game playing" in some sense; I happen to be one of them.

I actually used to play The Legend of Zelda many years back.
Not a bad game, from what I remember.  However, I'm not convinced
that it's the best game for this purpose: if I remember
correctly, there were quite a few things in the game that had
meaning to me as a player because they related to things in
the external world.  I'm talking about the different sorts of
objects etc. that you could pick up and use.  Thus, as a player,
I had a reasonable amount of background knowledge and
understanding of what various objects were for and what their
properties were likely to be, based on my knowledge of the real
world.  An AGI wouldn't have this, and so playing the game would
be a lot harder.

Perhaps then PacMan would be a better game?  When you walk into
a wall, that hurts (pain); when you eat a "dot" (?), that's
pleasure; eating a cherry is lots of pleasure; and running into
a ghost and losing a life is lots of pain.  With a little
experimentation the AGI would be able to quickly figure all this
out without needing background knowledge of the real world to
start with.
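To make the idea concrete, here's a minimal sketch of that reward mapping in Python.  The event names and reward magnitudes are my own illustrative assumptions, not taken from any real PacMan emulator:

```python
# Hypothetical mapping from trapped game events to scalar
# reinforcement signals, as described above.  Magnitudes are
# arbitrary; only their relative ordering matters.
REWARDS = {
    "hit_wall": -1.0,     # mild pain
    "eat_dot": +1.0,      # mild pleasure
    "eat_cherry": +10.0,  # lots of pleasure
    "hit_ghost": -50.0,   # lots of pain (losing a life)
}

def reward_for(event: str) -> float:
    """Return the scalar reward for a game event (0.0 if unknown)."""
    return REWARDS.get(event, 0.0)
```

An RL agent exploring the game would experience these values as its only feedback, with no real-world background knowledge required.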

My other point is that an AGI has to be a General Intelligence.
So being able to play just PacMan isn't really enough; what we
would really need is a huge collection of games like this that
exercise the AI's brain in all sorts of slightly different ways,
with different types of simple learning problems.  We need
somebody to build a collection of simple games with a common
simple API.  A standard AGI test bed of sorts.
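A minimal sketch of what such a common API might look like (the class and method names here are my own invention, not an existing standard), together with a toy game implementing it:

```python
from abc import ABC, abstractmethod

class Game(ABC):
    """Hypothetical common interface every game in the test bed
    would implement, so one agent can be run against all of them."""

    @abstractmethod
    def reset(self) -> list:
        """Start a new episode and return the initial observation."""

    @abstractmethod
    def step(self, action: int) -> tuple:
        """Apply an action; return (observation, reward, done)."""

class DotWorld(Game):
    """Toy example game: move right along a line of dots, eating them."""

    def __init__(self, length: int = 3):
        self.length = length
        self.pos = 0

    def reset(self) -> list:
        self.pos = 0
        return [self.pos]

    def step(self, action: int) -> tuple:
        # action 1 = move right (eat a dot); anything else = stay put
        if action == 1 and self.pos < self.length:
            self.pos += 1
            reward = 1.0
        else:
            reward = 0.0
        return [self.pos], reward, self.pos == self.length
```

The point of the shared interface is that the same learning agent can be dropped into hundreds of such games unchanged, which is exactly what a general-intelligence test bed needs.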

(In case those with a theoretical bent are wondering: yes, I'm
very much an RL, AIXI-model-of-intelligence kind of guy; in
fact, it's my PhD area.)

Cheers
Shane


Alan Grimes wrote:
In 1986 Nintendo released a game called The Legend of Zelda.
It remained on the top-10 list for the next five years.

So why do I mention this totally irrelevant game on this list?

Well, it's become apparent that I am well suited for a niche in
the list ecology that is responsible for throwing out a semi-crazy
idea and provoking useful discussion. This aims to be such a posting.

The basic problem with a baby AI mind is that you want to give it some
interactive environment that is heavy on feedback but doesn't require it
to understand abstract relationships right off the bat. A game such as
Dragon Warrior would not be good at all because it relies heavily on
textual clues.
A game such as The Legend of Zelda, however, is excellent because you
hardly have to be literate at all to begin playing it. There may be a
game that better matches this criterion, but let's stick with this one.

The game's ROM was only 160k, and the NES is easily emulated on a PC. As
there are open-source interpreters available, it should be feasible to
adapt one to serve an AI's needs.

One would need to hack the ROM a bit to lay down traps for certain
events, such as bumping into something, but that shouldn't be too
terribly hard.
The idea is then to take all the I/O plus hacks and map them onto your
AI's simulated spinal cord.

If Link bumps into something, the event is trapped and sent to the AI's
mind, and thus it learns... (It would also correlate this experience
with the audio and visual feedback.)
The output would be the directional buttons, A, B, [select], and [start].
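A rough sketch of that trap-and-forward plumbing, assuming the hacked ROM can call back into the host when an event fires.  The event names, signal values, and the `receive` method are all illustrative assumptions on my part:

```python
from enum import IntEnum

class Button(IntEnum):
    """The NES controller outputs mentioned above."""
    UP = 0
    DOWN = 1
    LEFT = 2
    RIGHT = 3
    A = 4
    B = 5
    SELECT = 6
    START = 7

# Hypothetical signal strengths for trapped in-game events.
SIGNALS = {
    "bump": -1.0,
    "take_damage": -5.0,
    "pick_up_item": +2.0,
}

def on_event(event: str, mind) -> None:
    """Forward a trapped game event to the AI's 'spinal cord'.

    `mind` is any object with a receive(float) method; the hacked
    ROM would invoke this hook whenever a trapped event fires."""
    mind.receive(SIGNALS.get(event, 0.0))
```

The agent's action space is then just the eight `Button` values, which keeps the interface between emulator and mind very small.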

This approach is rather limiting, as it doesn't give the AI any
real-world capabilities, but it would serve quite well for demonstration
purposes.
The AI would need to demonstrate basic planning skills (i.e., you should
restore your health and pick up some potions before attempting a big
level), as well as navigation using the map system.
My godforsaken development machine (if it ever works) should be well
suited to this type of experiment.
Currently I am planning an AI based on an architecture that I call
"Mind-2". It is an attempt at high-level brain emulation: it will not
use neurons, but rather vectors and registers, to achieve functional
equivalence to the apparent CAM organization of the brain.

This Mind-2 architecture is not a strong AI, but it should be no less
general than the human brain. I've shifted my focus to it because it
doesn't require nearly as deep an understanding of the function of the
brain as a strong AI would. The mere fact that we have no AI at present
makes it a useful project.
A Mind-2 architecture for Link can be greatly simplified compared to the
complexity required for dealing with the real world; its organization
can be a small fraction of the size of a real-world intelligence.
