Regarding testing grounds for AGI: personally, I feel that ordinary computer games could provide an excellent proving ground for the early stages of AGI, and games especially constructed for the purpose could be even better. Computer games are usually designed to encourage the player towards creativity and exploration. Take a simple platform game, for example: at every new stage, new graphics and monsters are introduced, and by and large the player undergoes a continuous self-training that lasts throughout the whole game. Game developers carefully distribute rewards and challenges to make this learning process as smooth as possible.
But I would also like to point out that any given proving ground for the first stages of AGI could be misused if AGI designers smuggle specialized code into their systems. So if there is to be a competition for first-generation AGI, there would have to be some referee that evaluates how much domain-specific knowledge has been encoded into any given system. For the late development stages of AGI, where we basically have virtual human minds, we could use problems so hard that specialized code could no longer help the AGI system. But I guess that by that time we would basically have solved the problem of AGI already, and competitions where AGI systems compete at writing essays on some subject could only be used to polish an already outlined solution to AGI.

I am a fan of Novamente, but, for example, when I watched the movie where they trained an AGI dog, I was left wondering which parts of its cognition were specialized. The human teacher used natural language to talk to the dog. Did the dog understand any of it, and if so, was there a special language module involved? Also, training a dog is quite open-ended, and it is difficult to assess what counts as progress. This shows just how difficult it is to demonstrate AGI. Any demonstration of AGI would have to come with a list of which cognitive aspects are coded and which are learnt. Only then can you judge whether it is impressive or not.

Also, because we need firm rules about what can be pre-programmed and what must be learnt, it is easier if we use a world with fairly simple mechanics. What I would basically like to see is an AGI learning to play a certain computer game, starting by learning the fundamentals and then playing it to the end. Take an old videogame classic like The Legend of Zelda: http://www.zelda.com/universe/game/zelda/
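To make the "only the reward channel is hard-wired" setup concrete, here is a minimal sketch in Python. Every name in it is my own invention for illustration (the toy environment, the simple value-learning rule), not Novamente's actual design or any real game interface: the point is only that the agent receives raw pixels and opaque action IDs, and the single piece of pre-programmed domain knowledge is the mapping from in-game events to a scalar reward.

```python
# Hypothetical sketch: the only pre-programmed domain knowledge is the
# reward channel. The agent sees raw pixel values and opaque action IDs,
# and must learn every game concept (objects, walls, keys, ...) itself.
import random

class GameEnvironment:
    """Stand-in for an emulated game: exposes raw pixels, opaque actions,
    and a scalar reward (e.g. losing health = punishment, finding a key =
    reward). A real setup would wrap an actual game emulator."""
    def __init__(self):
        self.steps = 0

    def observe(self):
        # An 8x8 "screen" of raw pixel values -- no labels, no object info.
        return tuple(random.randrange(4) for _ in range(64))

    def act(self, action_id):
        # Returns reward only; this event-to-reward mapping is the single
        # piece of hard-wired domain knowledge we allow the designers.
        self.steps += 1
        return 1.0 if action_id == self.steps % 4 else 0.0

class ProtoAgent:
    """Starts with proto-knowledge only: it associates observations with
    action values learned purely from the reward signal."""
    def __init__(self, n_actions=4, lr=0.1):
        self.n_actions = n_actions
        self.lr = lr
        self.values = {}  # (observation, action) -> learned value

    def choose(self, obs):
        if random.random() < 0.1:  # occasional exploration
            return random.randrange(self.n_actions)
        return max(range(self.n_actions),
                   key=lambda a: self.values.get((obs, a), 0.0))

    def learn(self, obs, action, reward):
        key = (obs, action)
        old = self.values.get(key, 0.0)
        self.values[key] = old + self.lr * (reward - old)

env = GameEnvironment()
agent = ProtoAgent()
for _ in range(1000):
    obs = env.observe()
    action = agent.choose(obs)
    reward = env.act(action)
    agent.learn(obs, action, reward)
```

A referee could then audit the `GameEnvironment`/`ProtoAgent` boundary directly: anything on the agent's side of the interface beyond the reward signal would count as illegal domain-specific knowledge.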
I know a lot of you would say that this is a far too simplistic world for training an AGI, but not if you prohibit ANY pre-programmed knowledge. You only allow the AGI system to start with a proto-knowledge representation, and you basically hard-wire the in-game rewards and punishments to the goals of the AGI. The AGI system would then have to learn basic concepts such as:

- "objects moving around on the screen"
- "which graphics correspond to yourself"
- "walls that limit where you can go"
- "keys that open doors"
- "the concept of arriving at a new screen when walking off the edge of one"
- "how screens relate to each other"
- "teleportation" (the flute, for anyone who remembers)

If the AGI system can then learn to play the game to the end and slay "Ganon" based only on proto-knowledge, then maybe we have something interesting going on. Such an AGI could perhaps be compared to a rodent running in a maze, even if the motor and vision systems are more complicated. Then we would be ready to increase the complexity of the computer game: adding communication with other characters, more complex concepts and puzzles, more dimensions, richer motor control, and so on. Basically, I would like to see Novamente and similar AGI systems play some goal-oriented computer game, since AGI in itself needs to be goal-oriented.

/R

2007/10/20, Benjamin Goertzel <[EMAIL PROTECTED]>:
>
> > I largely agree. It's worth pointing out that Carnot published
> > "Reflections on the Motive Power of Fire" and established the science
> > of thermodynamics more than a century after the first working steam
> > engines were built.
> >
> > That said, I opine that an intuitive grasp of some of the important
> > elements in what will ultimately become the science of intelligence is
> > likely to be very useful to those inventing AGI.
>
> Yeah, most certainly....
> However, an intuitive grasp -- and even a well-fleshed-out qualitative
> theory supplemented by heuristic back-of-the-envelope calculations and
> prototype results -- is very different from a defensible, rigorous theory
> that can stand up to the assaults of intelligent detractors....
>
> I didn't start seriously trying to design & implement AGI until I felt I
> had a solid intuitive grasp of all related issues. But I did make a
> conscious choice to devote more effort to utilizing my intuitive grasp to
> try to design and create AGI, rather than to creating better general AI
> theories.... Both are worthy pursuits, and both are difficult. I actually
> enjoy theory better. But my sense is that the heyday of AGI theorizing is
> gonna come after AGI experimentation has progressed a good bit further
> than it has today...
>
> -- Ben G
>
> ------------------------------
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&
