> Within the domain of chess there is everything to know about chess. 
> So if it comes up to be a good chess player learning chess from playing
> chess must be sufficient. Thus, an AGI which is not able to enhance its 
> abilities in chess from playing chess alone is no AGI.  

I'm jumping into this conversation a little late, but I think chess is
something that should be avoided in the context of AGI. I have three reasons
for this, but as far as I can see, it has primarily been the first of these
that has been discussed.

1. Intelligence is too easy to fake in chess
2. Chess is too hard to learn from scratch
3. Chess is tainted with the failed ambitions of early GOFAI.

The success of Deep Blue, and of more modern chess programs like Fritz
and Rybka, with their hand-coded search algorithms and heuristics,
demonstrates how easy it is to fake real intelligence. I'm not a good chess
player, but it wouldn't be too hard for me to implement a search algorithm
over a simple heuristic evaluation function and end up with a chess system
that can outplay me. In contrast, the seemingly less "intelligent" problem
of walking is very hard to get right by hand-coding my own knowledge (the
best gaits are almost always discovered by machine learning). That is, in
chess it is too easy to fake intelligence by hand-coding your own knowledge
into the heuristics. Less structured problems, in which expert knowledge is
of little assistance, are a better challenge for early AGI systems.
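To make the hand-coding point concrete, here is the pattern I mean sketched
in a few lines: depth-limited negamax over a hand-coded evaluation. A full
chess engine would run long, so I use Nim as a toy stand-in; all names are
mine and illustrative, and the nim-sum heuristic is exactly the sort of
injected expert knowledge (Bouton's theorem) that lets a shallow search
look "intelligent":

```python
def moves(heaps):
    """All legal moves: take 1..h sticks from heap i."""
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            yield (i, take)

def apply_move(heaps, move):
    i, take = move
    new = list(heaps)
    new[i] -= take
    return tuple(new)

def evaluate(heaps):
    # Hand-coded expert knowledge: a nonzero nim-sum means the side
    # to move is winning (Bouton's theorem). This single line does the
    # work a learner would otherwise have to discover.
    nimsum = 0
    for h in heaps:
        nimsum ^= h
    return 1 if nimsum else -1

def negamax(heaps, depth):
    if all(h == 0 for h in heaps):
        return -1  # no moves left: the side to move has lost (normal play)
    if depth == 0:
        return evaluate(heaps)  # fall back on the expert heuristic
    return max(-negamax(apply_move(heaps, m), depth - 1) for m in moves(heaps))

def best_move(heaps, depth=3):
    return max(moves(heaps),
               key=lambda m: -negamax(apply_move(heaps, m), depth - 1))
```

With the expert heuristic at the cutoff, even a depth-3 search finds the
winning move from heaps (3, 4, 5), taking two sticks from the first heap,
without anything that deserves to be called understanding.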

Chess is also too difficult a problem to learn in a "general" way from zero
knowledge. The difficulty of chess in the general game playing competitions
confirms this (last time I heard from one of the teams, even though the best
systems do pretty well on simpler games, in chess they can't do much more
than simply play legal moves). When playing chess, we draw on knowledge
of space and time and concepts like "control" and "domination". We quickly
realize by ourselves that it is good to control the centre of the board,
that the queen is often worth defending, and that even though you can win
with just two pieces it is generally bad to lose pieces. But a real AGI
would have to discover concepts like "centre" and "more powerful" by itself
(centre is a difficult concept to express if you only know about 64 squares
and which ones are next to each other). The chess board itself is too large,
the moves are too complicated, and the rewards come far too late to
expect a system to automatically discover how to play good chess with no
prior knowledge of simpler games or of the larger world. I suspect that the
complexity of the problem is such that a system learning chess without
prior knowledge would discover quirky rules that provide local maxima: for
example, it might unintentionally learn to sacrifice many of its own pieces,
because doing so makes the search space smaller and lets the system
think more moves ahead (rather than, say, developing heuristics to simply
disregard some of its pieces).
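To make the "centre" point concrete: the information is recoverable in
principle from the bare representation, which is precisely why a learner
would have to discover it rather than be told. As a toy illustration of my
own (not from the discussion above), a system that knows only the 64
squares and which ones are next to each other could derive centrality as
minimal graph eccentricity under king-move adjacency:

```python
from collections import deque

def neighbours(sq):
    """King-move adjacency: the only spatial knowledge is which of the
    64 squares are next to each other."""
    r, c = divmod(sq, 8)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < 8 and 0 <= c + dc < 8:
                yield (r + dr) * 8 + (c + dc)

def eccentricity(sq):
    """BFS distance from sq to the farthest square on the board."""
    dist = {sq: 0}
    queue = deque([sq])
    while queue:
        cur = queue.popleft()
        for nb in neighbours(cur):
            if nb not in dist:
                dist[nb] = dist[cur] + 1
                queue.append(nb)
    return max(dist.values())

# The "centre" falls out as the squares of minimal eccentricity.
centre = [sq for sq in range(64)
          if eccentricity(sq) == min(eccentricity(s) for s in range(64))]
```

The four squares that minimise eccentricity turn out to be d4, e4, d5, and
e5, but nothing in the raw representation labels them as special: a learner
has to stumble on a computation like this by itself.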

And finally, while I did not personally experience the early days of AI,
there seems to be an implication in some of the early literature that "if
only we could create a chess-playing robot, then we'll have solved the
problem of AI". I think AI has moved on from this simple attitude, and even
mentioning chess sounds, at least to my mind, like forgetting all the
mistakes and lessons of the past. Even with a good argument for
resurrecting chess, and an explanation of the game's past failure to
generate real progress in strong AI, I still suspect that mentioning chess
is bad marketing for a young field.

-Ben




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com