Ben wrote:
>>>
The ability to cope with narrow, closed, deterministic environments in an
isolated way is VERY DIFFERENT from the ability to cope with a more
open-ended, indeterminate environment like the one humans live in

<<< 

These narrow, closed, deterministic domains are *subsets* of what AGI is
intended to do and what humans can do. Chess can be learned by young
children.  


>>>
Not everything that is a necessary capability of a completed human-level,
roughly human-like AGI, is a sensible "first step" toward a human-level,
roughly human-like AGI

<<< 

This is surely true. But suppose someone wants to develop a car. Doesn't it
make sense to first develop and test its essential parts before putting
everything together and taking it onto the road?

I think chess is a good testing area because in the domain of chess there
are far too many situations to consider them all. This is a very typical and
very important problem of human environments as well. On the other hand
there are patterns in chess which can be learned and which make life less
complex. This is a second analogy to human environments. Therefore the
domain of chess is not so different: it contains an important subset of
typical problems for human-level AI.
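The "too many situations" point can be made concrete with a quick
back-of-envelope calculation in the style of Shannon's classic game-tree
estimate. The figures used below (roughly 35 legal moves per position and
roughly 80 plies per game) are conventional rough assumptions, not exact
values:

```python
import math

# ASSUMED rough figures, following Shannon's 1950 estimate:
branching_factor = 35    # average number of legal moves per position
game_length_plies = 80   # average game length in half-moves

# The number of distinct game continuations grows like b^d,
# so its base-10 exponent is d * log10(b).
game_tree_exponent = game_length_plies * math.log10(branching_factor)

print(f"roughly 10^{game_tree_exponent:.1f} possible chess games")
```

The result is on the order of 10^123 possible games, vastly more than could
ever be enumerated, which is why a chess-playing system is forced to learn
patterns rather than consider every situation.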

And if you want to solve the complex problem of building AGI, then you
cannot avoid the task of solving every single one of its subproblems.

If your system sees no patterns in chess, then I would doubt whether it is
really suitable for AGI.

 

>>>
I'm not saying that making a system that's able to learn chess is a **bad**
idea.   I am saying that I suspect it's not the best path to AGI.

<<< 

Ok.



>>>
I'm slightly more attracted to the General Gameplaying (GGP) Competition
than to a narrow-focus on chess

http://games.stanford.edu/

but not so much to that either...

I look at it this way.  I have a basic understanding of how a roughly
human-like AGI mind (with virtual embodiment and language facility) might
progress from the preschool level up through the university level, by
analogy to human cognitive development.

On the other hand, I do not have a very good understanding at all of how a
radically non-human-like AGI mind would progress from "learn to play chess"
level to the university level, or to the level of GGP, or robust
mathematical theorem-proving, etc.  If you have a good understanding of this
I'd love to hear it.

<<< 

Ok. I do not say that your approach is wrong. In fact I think it is very
interesting and ambitious. But just as you think that my approach is not the
best one, I think that your approach is not the best one. The discussion
could probably be endless. And you have probably already invested too much
effort in your approach to really consider changing it. I hope you are
right, because I would be very happy to see the first AGI soon, regardless
of who builds it and regardless of which concept is used.

-Matthias

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com
