One way to figure out how good your static evaluator is: have it do a
one-ply search, evaluate each resulting position, and display the top 20
or so evaluations on a go board.  Ask a strong player to go through a pro
game, showing your evaluations at each move.  He can tell you pretty
quickly how bad your evaluator is.
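
A minimal Python sketch of that one-ply ranking, with legal_moves(), play()
and evaluate() as placeholders for whatever primitives your own engine
provides (assumed names, not any particular program's API):

    # Rank every legal move by evaluating the position it leads to, one ply
    # deep with no further search, and keep the top n for display.
    def top_moves(board, to_move, n=20):
        scored = []
        for move in legal_moves(board, to_move):             # placeholder engine hook
            child = play(board, move, to_move)               # copy the board and play the move
            scored.append((evaluate(child, to_move), move))  # static evaluation only
        scored.sort(key=lambda sm: sm[0], reverse=True)      # best first for the side to move
        return scored[:n]

    # Print the ranked list so a strong player can check it against a pro game.
    def show_top_moves(board, to_move):
        for rank, (score, move) in enumerate(top_moves(board, to_move), 1):
            print("%2d. %s  eval=%+.3f" % (rank, move, score))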

If your evaluator produces an estimate of the score (like Many Faces of Go),
you can compare it to Many Faces.  The Game Score Graph function just shows
the static evaluation with no search at each position in a game.
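
The same kind of check can be scripted by replaying a game record and logging
the raw static evaluation after each move; a sketch under the same placeholder
hooks as above (an illustration of the idea, not Many Faces' actual code):

    # Replay a recorded game and collect the static evaluation (no search)
    # after every move, giving a score trace to set beside a reference graph.
    def score_trace(empty_board, moves):
        board, colour, trace = empty_board, "black", []
        for move in moves:
            board = play(board, move, colour)        # placeholder engine hook
            trace.append(evaluate(board, "black"))   # score from Black's point of view
            colour = "white" if colour == "black" else "black"
        return trace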

David

> -----Original Message-----
> From: computer-go-boun...@computer-go.org [mailto:computer-go-
> boun...@computer-go.org] On Behalf Of George Dahl
> Sent: Tuesday, February 17, 2009 11:23 AM
> To: computer-go
> Subject: Re: [computer-go] static evaluators for tree search
> 
> You're right of course.  We have a (relatively fast) move pruning
> algorithm that can order moves such that about 95% of the time, when
> looking at pro games, the pro move will be in the first 50 in the
> ordering.  About 70% of the time the expert move will be in the top
> 10.  So a few simple tricks like this shouldn't be too hard to
> incorporate.
> 
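
A top-k count behind figures like those takes only a few lines; a sketch, with
rank_moves(position) as a stand-in for the move-ordering algorithm (not shown
in the thread) and positions as a list of (position, pro_move) pairs taken
from pro games:

    # Fraction of pro positions whose actual move appears in the first k
    # moves of the ordering, for each k of interest.
    def top_k_accuracy(positions, ks=(10, 50)):
        hits = dict((k, 0) for k in ks)
        for position, pro_move in positions:
            ordering = rank_moves(position)          # stand-in for the move-ordering algorithm
            for k in ks:
                if pro_move in ordering[:k]:
                    hits[k] += 1
        return dict((k, hits[k] / float(len(positions))) for k in ks)
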
> However, the main purpose of making a really *simple* alpha-beta
> searching bot is to compare the performance of static evaluators.  It
> is very hard for me to figure out how good a given evaluator is (if
> anyone has suggestions for this please let me know) without seeing it
> incorporated into a bot and looking at the bot's performance.  There
> is a complicated trade-off between the accuracy of the evaluator and
> how fast it is.  We plan on looking at how well our evaluators
> predict the winner or territory outcome for pro games,
> but in the end, what does that really tell us?  There is no way we are
> going to ever be able to make a fast evaluator using our methods that
> perfectly predicts these things.
> 
> So I have two competing motivations here.  First, I want to show that
> the evaluators I make are good somehow.  Second, I want to build a
> strong bot.
> 
> - George
> 
> On Tue, Feb 17, 2009 at 2:04 PM,  <dave.de...@planet.nl> wrote:
> > A simple alpha-beta searcher will only get a few plies deep on 19x19, so
> > it won't be very useful (unless your static evaluation function is so
> > good that it doesn't really need an alpha-beta searcher).
> >
> > Dave
> > ________________________________
> > From: computer-go-boun...@computer-go.org on behalf of George Dahl
> > Sent: Tue 17-2-2009 18:27
> > To: computer-go
> > Subject: [computer-go] static evaluators for tree search
> >
> > At the moment I (and another member of my group) are doing research on
> > applying machine learning to constructing a static evaluator for Go
> > positions (generally by predicting the final ownership of each point
> > on the board and then using this to estimate a probability of
> > winning).  We are looking for someone who might be willing to help us
> > build a decent tree search bot that can have its static evaluator
> > easily swapped out so we can create systems that actually play over
> > GTP.  As much as we try to find quantitative measures for how well our
> > static evaluators work, the only real test is to build them into a
> > bot.
> >
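
For reference, one simple way to map a per-point ownership prediction to a
win-probability estimate is to sum the expected ownership into a score margin
and squash it; this is a sketch of that mapping only, not necessarily what
this evaluator does:

    import math

    # ownership[p] is the predicted probability that point p ends up as Black's.
    # Sum into an expected margin for Black, subtract komi, and squash with a
    # logistic; the scale constant is an arbitrary illustrative choice.
    def win_probability(ownership, komi=7.5, scale=10.0):
        margin = sum(2.0 * p - 1.0 for p in ownership.values()) - komi
        return 1.0 / (1.0 + math.exp(-margin / scale))
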
> > Also, if anyone knows of an open source simple tree search bot
> > (perhaps alpha-beta or something almost chess-like) for Go, we might
> > be able to modify it ourselves.
> >
> > The expertise of my colleague and me is in machine learning, not in
> > tree search (although if worst comes to worst I will write my own
> > simple alpha-beta searcher).  We would be eager to work together with
> > someone on this list to try and create a competitive bot.  We might at
> > some point create a randomized evaluator that returns win or loss
> > nondeterministically for a position instead of a deterministic score,
> > so an ideal collaborator would also have some experience with
> > implementing monte carlo tree search (we could replace playouts with
> > our evaluator to some extent perhaps).  But more important is
> > traditional, chess-like searching algorithms.
> >
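
Swapping the playout for one call to a stochastic win/loss evaluator drops
straight into a standard UCT loop; a minimal sketch, where
evaluate_winner(position) is the assumed nondeterministic evaluator
(returning "black" or "white") and legal_moves(), play() and to_move() are
placeholder engine hooks:

    import math
    import random

    # A node remembers which colour played the move that created it, so a
    # sampled result can be credited to the right side during backpropagation.
    class Node:
        def __init__(self, position, move=None, parent=None, just_moved=None):
            self.position = position
            self.move = move
            self.parent = parent
            self.just_moved = just_moved
            self.children = []
            self.untried = list(legal_moves(position, to_move(position)))  # placeholder hooks
            self.wins = 0.0
            self.visits = 0

        def ucb_child(self, c=1.4):
            return max(self.children,
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(self.visits) / ch.visits))

    def mcts(root_position, iterations=10000):
        root = Node(root_position)
        for _ in range(iterations):
            node = root
            # Selection: descend while the node is fully expanded.
            while not node.untried and node.children:
                node = node.ucb_child()
            # Expansion: add one untried child.
            if node.untried:
                move = node.untried.pop(random.randrange(len(node.untried)))
                colour = to_move(node.position)
                node.children.append(Node(play(node.position, move, colour),
                                          move=move, parent=node, just_moved=colour))
                node = node.children[-1]
            # "Playout": a single stochastic evaluation instead of a random game.
            winner = evaluate_winner(node.position)
            # Backpropagation: credit the win to the side that just moved.
            while node is not None:
                node.visits += 1
                if node.just_moved == winner:
                    node.wins += 1
                node = node.parent
        return max(root.children, key=lambda ch: ch.visits).move
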
> > If anyone is interested in working with us on this, please let me
> > know!  We have a complete prototype static evaluator that is producing
> > sane board ownership maps, but we will hopefully have many even better
> > ones soon.
> >
> > - George

_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
