[computer-go] Re: static evaluators for tree search

2009-02-17 Thread Dave Dyer

While your goal is laudable, I'm afraid there is no such thing
as a simple tree search with a plug-in evaluator for Go.  The
problem is that the move generator has to be very disciplined,
and the evaluator typically requires elaborate and expensive-to-maintain
data structures.  It all tends to be very convoluted and full of
feedback loops; the actual alpha-beta search is trivial by comparison.

If you look at GnuGo or some other available program, I'm pretty sure
you'll find a line of code where the evaluator is called, and you could
replace it, but you'll find it's connected to a pile of spaghetti.



Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread George Dahl
I am aware such a decoupled program might not exist, but I don't see
why one can't be created.  When you say the move generator has to be
"very disciplined", what do you mean?  Do you mean that the evaluator
might be used during move ordering somehow, and that generating the
nodes to expand is tightly coupled with the static evaluator?
- George



[computer-go] Re: static evaluators for tree search

2009-02-17 Thread Dave Dyer

 Do you mean that the evaluator might be used during move ordering
 somehow, and that generating the nodes to expand is tightly coupled
 with the static evaluator?

That's the general idea.  

No search program can afford to use a fan-out factor of 361.  The information
about what to cut has to come from somewhere.
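
To put rough numbers on that (an illustrative Python sketch; the
"top 10" generator is hypothetical):

# Node counts for full-width search on 19x19 versus a move generator
# that keeps only the 10 most promising candidates at each node.
for depth in (2, 3, 4):
    full = 361 ** depth
    pruned = 10 ** depth
    print(f"depth {depth}: full-width ~{full:.1e} nodes, top-10 {pruned}")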




[computer-go] Re: static evaluators for tree search

2009-02-17 Thread Dave Dyer

This is old and incomplete, but it is still a starting point you might
find useful: http://www.andromeda.com/people/ddyer/go/global-eval.html

General observations (from a weak player's point of view):  

Go is played on a knife edge between life and death.  The only
evaluation that matters is "is this stone alive?", and there are no
known proxies that will not fall short a significant amount of the
time.  If you fall short once or twice in a game against a competent
player, you will lose.

General strategic considerations will play you false every time.

-- Notwithstanding the above, improving general considerations
will improve play, but not much.  It's all about the minutiae of
the situation.



Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread George Dahl
I really don't like the idea of scoring an evaluator by how far the
pro's move falls from the top of its ranked move list.  This is
worthless if we ever want to surpass humans (not a concern now, but it
is in principle), and we have no reason to believe a move isn't strong
just because a pro didn't pick it.  Perhaps another pro would pick a
different move in the same situation.  If we had a pro who ranked all
the legal moves, or at least the top 10 or so, that would be one
thing, but those data are almost never available.  Or we could have
two pros watch a game, each giving what they thought was the best move
at each point, and score an evaluator only on the positions where the
pros agreed (although this would bias scoring towards things like
forced moves).  Also, there are bad moves and then there are even
worse moves (like filling the eyes of a powerful living group, killing
it).  If an evaluator makes catastrophic evaluations sometimes and
plays a perfect pro move other times, it could still be much worse on
balance (if we can't tell when it is blundering versus being
brilliant).

I think it would be much more informative to compare evaluator A and
evaluator B in the following way.  Make a bot that searches to a fixed
depth d and then calls a static evaluator (maybe this depth is 1 or 2
or something small).  Try to determine the strength of a bot using A
and a bot using B as accurately as possible against a variety of
opponents.  The better evaluator is defined to be the one that results
in the stronger bot.
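
For concreteness, here is a minimal sketch of such a harness in Python
(the Board interface and the evaluators are hypothetical placeholders,
not any particular program's API):

from typing import Callable, Protocol, Sequence

class Board(Protocol):
    # Hypothetical minimal interface; any Go implementation with these
    # methods would fit.  Passes and game-end handling are omitted.
    def legal_moves(self) -> Sequence[object]: ...
    def play(self, move) -> "Board": ...   # returns the successor position
    def side_to_move(self) -> int: ...     # +1 for black, -1 for white

Evaluator = Callable[[Board], float]       # score from black's viewpoint

def negamax(board: Board, depth: int, evaluate: Evaluator) -> float:
    # Fixed-depth, full-width search; the static evaluator is a plug-in.
    if depth == 0:
        return board.side_to_move() * evaluate(board)
    return max(-negamax(board.play(m), depth - 1, evaluate)
               for m in board.legal_moves())

def best_move(board: Board, depth: int, evaluate: Evaluator):
    # The move a depth-d bot built around the given evaluator plays.
    return max(board.legal_moves(),
               key=lambda m: -negamax(board.play(m), depth - 1, evaluate))

Evaluators A and B are then compared by playing best_move(..., evaluate=A)
against best_move(..., evaluate=B) over many games.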

Obviously this method introduces a whole host of new problems (even
finding the strength of a running bot is non-trivial), but at least
it attempts to measure what we would eventually care about --- playing
strength.  Of course we also care about how fast the static evaluators
are, because we might be able to search more nodes with a faster
evaluator, but for measuring the quality of the evaluations
themselves, I can't at the moment think of a better way.

One of the problems with my suggestion is that maybe the evaluators
are better at evaluating positions beyond a certain number of moves
and that if we could just get to that depth before calling them, they
would be much more accurate.  Or maybe changes to how searching works
can compensate for weaknesses in evaluators or emphasize strengths.
Really one would want the strongest possible bot built around a single
evaluator versus the strongest possible bot built around another
evaluator, but this is clearly impossible to achieve.

I guess another question is: what would you need to see a static
evaluator do to be so convinced it was useful that you then built a
bot around it?  Would it need to win games all by itself with one-ply
lookahead?

- George




Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread Michael Williams

George Dahl wrote:

I guess another question is: what would you need to see a static
evaluator do to be so convinced it was useful that you then built a
bot around it?  Would it need to win games all by itself with one-ply
lookahead?


Here is one way to look at it: since a search tends to seek out errors
in the evaluation, you must minimize the maximum error that can ever
occur.  If you do not, then a very fast computer will search many
nodes in the tree, find a place where your evaluator makes a big
error, and use that error in its search decisions.  So how can one
test the error that your evaluator has?


As you suggested, you can use the evaluator in a game engine and see
how it performs.  But that performance is also a function of how big
your search is, not only of how good your evaluator is.  Another
option would be to generate positions and test the accuracy of the
evaluator directly.  For this, you would need two things:

1) An authority on how strong each position is for black
2) A source of applicable positions to test

For the authority, I would use MoGo.  You can make it almost arbitrarily strong 
by just waiting longer for it to arrive at an answer (and using more memory, 
etc).
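
The measurement itself is then simple.  A sketch, where oracle_winrate
stands in for whatever authority you trust (e.g. a long MoGo run) and
both functions are assumed to return a black-win probability in [0, 1]:

def evaluator_error(evaluate, oracle_winrate, positions):
    # Mean and, more importantly, maximum absolute error over the test
    # set -- the max matters because search seeks out the worst errors.
    errors = [abs(evaluate(p) - oracle_winrate(p)) for p in positions]
    return sum(errors) / len(errors), max(errors)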

As for the source of applicable positions, that's a bit harder, IMO.
My first thought was to use random positions, since you don't want any
bias, but that will probably result in the evaluation of the position
being very near 0.5 much of the time.  I would still try this method,
since the lack of bias makes it so attractive.  Another option is to
take random positions from actual games and generate a search tree
from each position, then select a few random nodes from that tree to
use as test positions.  But now we are back to being coupled to the
search parameters -- the same thing we were trying to avoid by not
writing a game engine.




Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread Michael Williams



Here are some follow-up ideas I had on how to get meaningful test positions...

The most important positions to evaluate accurately are those near the
root of the search tree (because they will be seen first and may
direct the further progression of the search).  So to generate a
tier-N test position, take a position from an actual game and make N
random moves.  In tuning your evaluator, tier-N positions are more
important to get correct ("get correct" meaning minimize the error)
than tier-(N+1) positions.
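
In sketch form (game_position is assumed to come from your game
records and to support play(); random_legal_move is assumed to pick a
uniformly random legal move):

def tier_n_position(game_position, n, random_legal_move):
    # Play N random legal moves from a real-game position.  Smaller N
    # means closer to the search root, so errors on low-tier positions
    # should be weighted more heavily when tuning the evaluator.
    pos = game_position
    for _ in range(n):
        pos = pos.play(random_legal_move(pos))
    return pos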



Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread Gunnar Farnebäck

Dave Dyer wrote:
 If you look at GnuGo or some other available program, I'm pretty sure
 you'll find a line of code where the evaluator is called, and you could
 replace it, but you'll find it's connected to a pile of spaghetti.

That would have to be some other available program. GNU Go doesn't
have a static position evaluator and consequently no line of code
calling it.

/Gunnar


Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread Jason House


Why would initial research require a full bot molded around the new  
static evaluator? It's far simpler to leave the necessary machinery in  
place and add in the new static evaluator. That may mean extra work  
such as running two evaluation methods and reusing a prior move  
orderer/generator, but who cares?


There's a lot of power in quick-and-dirty evaluations.  If the changes
for the new static evaluation reduce bot strength, it's time to go
back to the drawing board.  It's only once you create a strength
improvement that the steps you mention really matter.
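
One cheap version of that side-by-side arrangement is a wrapper that
mixes the old and new evaluations (a hypothetical sketch; with
weight=1.0 you test the new evaluator alone while the old move
ordering machinery stays in place):

def blended_eval(old_eval, new_eval, weight=0.5):
    # Both evaluators are functions of the position returning a score
    # on the same scale.  The wrapper has the same signature, so it
    # drops into the existing engine unchanged.
    def evaluate(board):
        return (1 - weight) * old_eval(board) + weight * new_eval(board)
    return evaluate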



Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread Darren Cook
 I think it would be much more informative to compare evaluator A and
 evaluator B in the following way.  Make a bot that searches to a fixed
 depth d and then calls a static evaluator (maybe this depth is 1 or 2
 or something small).  Try to determine the strength of a bot using A
 and a bot using B as accurately as possible against a variety of
 opponents.  The better evaluator is defined to be the one that results
 in the stronger bot.

If you do this I'd suggest also including Monte Carlo as one of your
static evaluators.  You want a score, but Monte Carlo usually returns
information like "17 black wins, 3 white wins".  However, you can
instead sum ownership over the terminal positions: if A1 is owned by
black 15 times and by white 5 times, count it as a point for black; on
exactly equal ownership, count the point for neither side.
(Alternatively, just sum the black and white scores of each terminal
position.)
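
A sketch of that ownership-summing evaluator (hypothetical names:
random_playout is assumed to play the position out with uniform random
moves and return the final board, whose ownership() maps each point to
+1 for black, -1 for white, or 0):

def ownership_eval(board, random_playout, n_playouts=30):
    # Accumulate net black ownership per point over the playouts.
    net = {}
    for _ in range(n_playouts):
        final = random_playout(board)
        for point, owner in final.ownership().items():
            net[point] = net.get(point, 0) + owner
    # One point for whichever side owns a point more often; exact ties
    # count for neither side.  The result is a score for black.
    return sum((c > 0) - (c < 0) for c in net.values())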

You could have two or three versions using different numbers of
playouts (with the resulting trade-off that more playouts mean fewer
nodes visited in the global search); I suspect 20-50 playouts will be
optimal.

My hunch is that the Monte Carlo version will always outperform any
static evaluation, given the same overall time (*).  But it would be
interesting to know.

Darren

*: Or run the experiment giving the static evaluation four times the
clock time, on the assumption there is more potential for optimization
in complex code.

-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)


Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread George Dahl
Really?  You think that doing 20-50 uniform random playouts and
estimating the win probability, when used as a leaf-node evaluator in
tree search, will outperform anything else that uses the same amount
of time?  I must not understand you.  What do you mean by "static
evaluator"?  When I use the term, I mean something quite general:
basically any function of the board state that doesn't do explicit
lookahead.
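
In code terms, that definition is just a function type (a sketch; the
Board type and the score convention are assumptions):

from typing import Callable

# A static evaluator: any function of the position alone, no lookahead.
StaticEvaluator = Callable[["Board"], float]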

I guess 20-50 optimized playouts take so little time that they would
just be much faster than the sort of evaluators I might make, which
could use hundreds of thousands of floating-point multiplies.  But
towards that costly end of the regime, I think doing thousands of
random playouts would be really bad (although then maybe you could
just expand more nodes in the tree and do fewer playouts instead).  I
am looking for higher-quality, more costly evaluators than 50 MC
playouts.  But that is a good point: I should compare against those
evaluators, since if my evaluator is going to take more time it had
better show some other advantage.

- George




Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread Darren Cook
 Really?  You think that doing 20-50 uniform random playouts and
 estimating the win probability, when used as a leaf-node evaluator in
 tree search, will outperform anything else that uses the same amount
 of time?

Same amount of clock time for the whole game.  E.g. if playing 20
random playouts to the end of the game takes N ms, and your static
evaluation takes 5*N ms, then the playout version can explore 5 times
as many global nodes.  But the main advantage, I feel, is that
playouts play the game out.

Here is an observation that I think I wrote here many years ago: Many
Faces (version 9 or 10) would give a much more accurate analysis of
the score of a game if you put it on level 1 and had it play the game
out against itself, which took about a second (even with all the
overhead of the moves being shown on the board), than if you pressed
F9 to get the static evaluation.

As already mentioned, life/death analysis is so hard that it just has
to be played out.  But also, the value of having sente can
realistically only be discovered by playing out.  This really stumped
me when I was trying to statically analyze endgame positions (i.e.
trying to extend the "Mathematical Go" endgame ideas back a bit
further into the game).

Darren



-- 
Darren Cook, Software Researcher/Developer
http://dcook.org/mlsn/ (English-Japanese-German-Chinese-Arabic
open source dictionary/semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)


Re: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread Mark Boon
On Tue, Feb 17, 2009 at 8:35 PM, George Dahl george.d...@gmail.com wrote:
 Really?  You think that doing 20-50 uniform random playouts and
 estimating the win probability, when used as a leaf-node evaluator in
 tree search, will outperform anything else that uses the same amount
 of time?

You'll probably find a variety of opinions on that.  I think you can
make a static evaluator that will give you a better estimate of the
win probability than 50 uniform random playouts.

But... (and there are a few big "buts") implementing uniform random
playouts is a lot less work.  On top of that, with some prudent
exploration, you rarely spend 50 playouts all in one place.  This is
something definitely powerful in MCTS programs: they can reject
unpromising lines of play at relatively little cost.
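
That cheap rejection of unpromising lines comes from the selection
rule; here is a minimal UCB1 sketch (illustrative only, not any
particular program's code):

import math

def ucb1_select(children, total_visits, c=1.4):
    # children: list of (wins, visits) pairs, each with visits >= 1.
    # A move with a poor win rate quickly stops receiving playouts:
    # its win-rate term stays low and the exploration bonus grows
    # only logarithmically with the parent's visit count.
    def score(child):
        wins, visits = child
        return wins / visits + c * math.sqrt(math.log(total_visits) / visits)
    return max(children, key=score)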

Mark


RE: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread David Fotland
It is very clear that non-uniform random playouts are a far better
evaluator than any reasonable static evaluation, given the same amount
of time.  Many people (including myself) spent decades creating static
evaluations, using many techniques, and the best ones ended up with
programs of similar strength.  The strongest MCTS programs are far
stronger.

I put 25 years of effort into a static evaluation, replaced it with an
MCTS system that I worked on for 6 months, and got about 5 stones of
strength improvement.

My MCTS playouts are not uniformly random, so I can't say how uniform
playouts would perform, but why build a system with uniform random
playouts when it is so easy to do better?

David




RE: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread David Fotland
Many Faces of Go has a static position evaluator, but it's not
spaghetti :)  It makes many passes over the board, building up
higher-level features from lower-level ones, and it does local
lookahead as part of feature evaluation, so it has a lot of code and
is fairly slow.
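
Not the actual Many Faces code, of course, but the general shape of
such a layered evaluator might look like this sketch (the pass
functions are hypothetical placeholders supplied by the caller):

def multi_pass_evaluate(board, passes, final_score):
    # passes: ordered pass functions, e.g. find_strings,
    # join_into_groups, estimate_life, estimate_territory (hypothetical
    # names).  Each pass sees the board plus everything computed by the
    # earlier passes, building higher-level features from lower-level
    # ones; a life/death pass may run local searches internally.
    features = {}
    for p in passes:
        features[p.__name__] = p(board, features)
    return final_score(features)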

David




RE: [computer-go] Re: static evaluators for tree search

2009-02-17 Thread David Fotland
Many Faces uses information from the static evaluator to order and
prune moves during move generation.  For example, if the evaluation
finds a big unsettled group, the move generator will favor eye-making
or escaping moves for that group.
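
In sketch form (hypothetical names throughout; the evaluation object
is assumed to expose the unsettled groups it found, and the move
helpers are supplied by the caller):

BIG_GROUP_SIZE = 6  # hypothetical threshold for a "big" group

def generate_moves(board, evaluation, all_candidate_moves,
                   eye_moves, escape_moves):
    # Promote eye-making and escaping moves for big unsettled groups
    # to the front of the candidate list; everything else keeps its
    # previous ordering.
    urgent = []
    for group in evaluation.unsettled_groups:
        if group.size >= BIG_GROUP_SIZE:
            urgent += eye_moves(group) + escape_moves(group)
    rest = [m for m in all_candidate_moves(board) if m not in urgent]
    return urgent + rest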

David

