On Thu, 2007-07-05 at 09:47 +0100, Jacques Basaldúa wrote:
> Don Dailey wrote:
> 
> >I have posted before about the evils of trying to extract 
> >knowledge from human games. I don't think it is very effective 
> >compared to generating that knowledge from computer games for 
> >several reasons.


> I would agree if we could have quality games played by computers.
> In 19x19, of course. But computer moves are so vulgar; when one
> is a tesuji, it is because a search has found a killer move, not
> because the program has good style. Killer moves only apply to
> that exact position (or to a local subgame whose limits are not
> trivial to determine). There is not much to learn from killer
> moves. What programs need to learn is style, and from programs
> you only learn bad habits.


I'm really advocating self-generated knowledge, where the computer itself
has an active part in the process rather than being a passive observer
trying to learn by example only.  I see very little value in trying to
extract knowledge from a collection of games this way.
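To make the distinction concrete, here is a minimal sketch of what "an active part in the process" might look like: the program generates its own games and reinforces its move weights from the outcomes, instead of imitating a static collection. All names here (`self_play_update`, `play_game`, the weight scheme) are hypothetical and purely illustrative, not any particular program's method.

```python
def self_play_update(weights, play_game, learning_rate=0.01):
    """Play one self-play game and adjust move weights from the result.

    `play_game` is an assumed callable that plays a full game using the
    current weights and returns (moves, winner), where `moves` is a list
    of (player, move) pairs.  The program's own games are the data.
    """
    moves, winner = play_game(weights)
    for player, move in moves:
        # Reinforce the winner's moves, penalize the loser's.
        delta = learning_rate if player == winner else -learning_rate
        weights[move] = weights.get(move, 0.0) + delta
    return weights
```

The point of the sketch is only that the feedback loop runs through the learner's own play, so the learner controls which positions it gets feedback on.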

What you can expect to learn from human games is pretty limited.  A
chain is only as strong as its weakest link, and computers have too many
weak links that must be fixed first.  Learning sophisticated and
profound concepts from high-dan players by looking at their games is
shooting for the moon.

A master friend of mine, who was a good chess teacher, once told me that
if you want to learn to play chess well, it is far more important to
have a good teacher than a strong player.  His conclusion was that
teaching skill is the overwhelming consideration, and that his own
strength mattered relatively little as long as he was stronger than
you.  (Since then, I've come to believe that even a weaker player can
teach, as long as he has something you don't have - we see that with
athletic coaches who "teach" despite the fact that their "student" is
far better than they are.)

With machine learning, it's all about "teaching skill" but the teacher
is some kind of learning algorithm.   A passive set of games is not a
teacher.  The quality of the games is just not very important at the
current levels of Go play compared to the "teaching algorithm."    You
can almost (but not quite) ignore this factor.    That being said,  if
you can generate useful data by using computers instead, you have far
more control over the consistency of the data and all the variables that
are important.    

For instance, in a set of master games, what feedback do I have about
each move other than that it was chosen?  How do I get the master's
opinion of the moves played and not played?  The learning signal is
almost non-existent: it is generally assumed that move X is good
because the master chose it, and the others are bad because he didn't.
Because of this, among other things, I don't have much faith in
computers learning from huge sets of games in this way.  The computer
has too passive a role in this kind of learning.  There is no two-way
interaction between master and student.
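That nearly non-existent signal can be spelled out in a few lines.  The sketch below (hypothetical names, abstract position/move encodings) shows the only labeling a game collection supports: the chosen move gets a 1, every other legal move gets a 0, with no grading in between.

```python
def labeled_examples(positions):
    """Yield (position, move, label) triples from a game collection.

    `positions` is assumed to be an iterable of
    (position, chosen_move, legal_moves) tuples extracted from
    master games.
    """
    for position, chosen_move, legal_moves in positions:
        for move in legal_moves:
            # The only feedback available: did the master choose it?
            yield position, move, 1 if move == chosen_move else 0
```

Note that a non-chosen move gets label 0 even if the master would have called it equally good; the data cannot say otherwise.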

Far better, if you want to involve human players, is some kind of
human-assisted learning where games are played and learning takes place
through trial and error and direct interaction with the "teacher."  But
this isn't very practical for machine learning, which likes thousands of
examples to work from.


- Don




> Jacques.
> 
> _______________________________________________
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
