RE: [computer-go] FooBar

2007-12-24 Thread David Fotland
Hi Don,

I never heard of this technique before.  Are there any more you can share?

ManyFaces12 uses:

-Iterative deepening, with hash table
-Zero-window search (beta is alpha+1; re-search with a full window when it
fails high)
-Null move (reduce depth by one, only try null when beta is not infinite,
only one null move allowed per variation)
-Try null first, then the best move from the previous iteration (from the
hash table), then moves in the order suggested by the move generator.  The
move generator gives a pretty good ordering since it estimates the number of
points gained by the move.
-Full ply extension when there is one forced move
-Fractional ply extensions
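
The zero-window idea in the list above can be sketched as a small Principal
Variation Search over a toy game tree.  This is only an illustration of the
"beta is alpha+1, re-search on fail high" scheme, not ManyFaces code; the
node names and tree are invented.

```python
# Minimal PVS sketch: first child gets the full window, the rest get a
# zero window (alpha, alpha+1) and are re-searched only on fail high.
# Toy tree and leaf scores are invented for illustration.

def pvs(node, depth, alpha, beta, evaluate, children):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    best = float("-inf")
    for i, child in enumerate(kids):
        if i == 0:
            score = -pvs(child, depth - 1, -beta, -alpha, evaluate, children)
        else:
            # zero-window probe: beta is alpha + 1
            score = -pvs(child, depth - 1, -(alpha + 1), -alpha,
                         evaluate, children)
            if alpha < score < beta:
                # probe failed high: re-search with the full window
                score = -pvs(child, depth - 1, -beta, -score,
                             evaluate, children)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff
    return best

# Tiny example tree; leaves hold scores for the side to move there.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf = {"a1": 3, "a2": 5, "b1": -4, "b2": 7}

if __name__ == "__main__":
    v = pvs("root", 2, float("-inf"), float("inf"),
            lambda n: leaf.get(n, 0), lambda n: tree.get(n, []))
    print(v)  # -> 3
```

With good move ordering the probe on the second root child fails low and the
full-window re-search is never needed, which is where the speedup comes from.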

I plan to add:
-2 killer moves, then history heuristic
-This pruning idea from Don.
-Internal iterative deepening
-Some kind of iterative widening
-narrow search window based on result of previous iteration
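
The killer-move and history-heuristic ordering planned above might look
something like the following sketch.  Everything here (class name, two
killers per ply, depth-squared history bonus) is an assumption drawn from
common practice, not ManyFaces code.

```python
from collections import defaultdict

class MoveOrderer:
    """Hypothetical sketch: 2 killer moves per ply, then history scores."""

    def __init__(self, max_ply=64):
        self.killers = [[None, None] for _ in range(max_ply)]
        self.history = defaultdict(int)  # move -> cumulative score

    def record_cutoff(self, move, ply, depth):
        """Call when `move` caused a beta cutoff at `ply`."""
        if move != self.killers[ply][0]:
            self.killers[ply][1] = self.killers[ply][0]
            self.killers[ply][0] = move
        self.history[move] += depth * depth  # deeper cutoffs count more

    def order(self, moves, ply):
        """Killers first, then by descending history score."""
        def key(m):
            if m == self.killers[ply][0]:
                return (0, 0)
            if m == self.killers[ply][1]:
                return (1, 0)
            return (2, -self.history[m])
        return sorted(moves, key=key)

if __name__ == "__main__":
    mo = MoveOrderer()
    mo.record_cutoff("d4", 3, depth=5)
    mo.record_cutoff("c3", 3, depth=2)
    print(mo.order(["a1", "c3", "d4"], 3))  # -> ['c3', 'd4', 'a1']
```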

Today I search 20 to 40 moves at the root, then 10 moves per ply during the
main search, then a tapering quiescence search with no maximum depth.
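
The width schedule described above could be written down as a small function.
The root and main-search numbers mirror the text; the taper rate in
quiescence is my own guess, since the text only says it tapers with no
maximum depth.

```python
# Sketch of a tapering width schedule: wide at the root, 10 moves per
# ply in the main search, narrowing in quiescence.  Taper rate invented.

def search_width(ply, main_depth):
    if ply == 0:
        return 30               # root: 20-40 moves; 30 as a midpoint
    if ply < main_depth:
        return 10               # main search: 10 moves per ply
    # quiescence: taper toward a handful of forcing moves, never below 2
    return max(2, 10 - (ply - main_depth) * 2)
```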

I find developing search code less satisfying than working on the evaluation
function.  If I add knowledge to the evaluation function I know the program
is stronger.  When I make a change to the search parameters I can only test
to see what happened.  It's a much more trial-and-error process.

Still, I have to do it because global search makes the program much, much
stronger.

David


-Original Message-
[EMAIL PROTECTED] On Behalf Of Don Dailey


The various versions I'm testing are selective.  I use a technique
similar to one used in modern chess programs: aborting the search
at a given node after only a few child nodes have been tried, when
the expectation remains below alpha.

There are many potential tricks that can speed up the search greatly
and I've barely scratched the surface, but I have done a couple of
simple basic things that appear to be working very well.



___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/


Re: [computer-go] FooBar

2007-12-24 Thread Don Dailey
David,

There are many variations of this technique, but what they all have in
common is the assumption that once you have searched some number of
child moves (perhaps 2 or 3) then (with good move ordering) there is a
much reduced chance that one of the remaining moves will be
useful.  Often the remaining moves are searched with reduced
depth.  Also, most programs have constraints on this (don't reduce
checks, out-of-check moves, attacking moves, etc.)  Some programs limit
the total number of reductions in a given line.
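
A minimal version of the reduction scheme described here, sketched as a
depth function.  The thresholds and the notion of "tactical" moves are
illustrative assumptions; real programs tune these heavily.

```python
# Late-move-reduction sketch: after the first few (well-ordered) moves,
# search the rest one ply shallower; never reduce tactical moves.
# All constants are invented for illustration.

FULL_DEPTH_MOVES = 3   # children always searched at full depth
REDUCTION = 1          # extra plies removed from late quiet moves

def reduced_depth(depth, move_index, is_tactical):
    """Depth for the child search under a simple reduction scheme.
    Tactical moves (checks, captures, ataris) are never reduced."""
    if move_index < FULL_DEPTH_MOVES or is_tactical or depth <= 2:
        return depth - 1
    return depth - 1 - REDUCTION
```

A fail-high on a reduced search is normally followed by a re-search at full
depth, which keeps the scheme safe at the cost of occasional extra work.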

This is sometimes called history reductions (not a very accurate
description in general) because some programs use the history heuristic
and prune moves using this same basic technique but  based on having low
history scores.  Some programs only reduce a few ply from end
nodes.   

The implementation details vary significantly from program to program,
along with the prejudices and superstitions that we programmers carry
over to our programs.

But this is considered one of the major recent advances in computer
chess - just about every chess program jumped in strength by 50-100
ELO all of a sudden when this started to be used.

In my alpha/beta searcher a very simple minded version is actually
working very well - it will take weeks though to test and refine
this.  The only thing I am doing now is:

   At any given node ...
     if less than 4 ply remaining
       if the move is not a capture
         if at least 5 moves have been tried
           do not search the subtree

This probably wouldn't work in Chess without better conditions.  And
it can probably be improved a lot for GO too; the next thing I will do is
(at the very least) include (not prune) atari moves.
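
The rule above, with the planned atari exception folded in, translates
directly into a small predicate.  The function name and arguments are
invented; a real engine would pass its own move and position types.

```python
# Direct sketch of the pruning rule described in the text.

def prune_subtree(ply_remaining, is_capture, moves_tried, is_atari=False):
    """Return True when the subtree under this move should be skipped."""
    return (ply_remaining < 4
            and not is_capture
            and not is_atari        # planned exception: keep atari moves
            and moves_tried >= 5)
```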

I also have a nice set of patterns inherited from Lazarus.  I set the
strongest version of Lazarus to do deep searches and kept ELO ratings of
the 20-point patterns (5x5 with the corners cut off).  This produced a
nice set of patterns, and I use them mostly for finding silly moves.
I will probably add the condition that all strong patterns should be
played no matter what.

Right now I use weak patterns to veto moves from the tree.  If a
move is below a given rating I don't even try it if I am in the last 4
ply of the search.  All of the patterns fitting this criterion represent
moves any player would consider pretty weak 99% of the time, such as
moving into the corner point when nothing is happening.  The most
common pattern pruned is moving to an edge when nothing is nearby.
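
The veto idea amounts to a move filter near the leaves.  In this sketch the
threshold, the 4-ply window, and the rating values are all invented numbers
standing in for whatever Lazarus's pattern ELO data actually produced.

```python
# Pattern-veto sketch: near the leaves, moves whose pattern rating
# falls below a threshold are never even tried.

VETO_THRESHOLD = 1200   # assumed pattern rating below which a move is vetoed
VETO_PLY = 4            # veto only applies in the last few plies

def candidate_moves(moves, pattern_rating, ply_remaining):
    """Filter out weakly-rated moves when close to the leaves."""
    if ply_remaining > VETO_PLY:
        return list(moves)
    return [m for m in moves if pattern_rating(m) >= VETO_THRESHOLD]
```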

What I really want to find is a more general way of identifying  weak
moves. What seems to be the case in GO is that you can be a lot more
liberal about pruning moves without hurting the search.   Amazingly,  at
the same depth, using the veto patterns along with selectivity
simultaneously made the program stronger and faster.Normally you
don't get both at the same time because you are dealing with
trade-offs. In this case, I think the veto patterns probably
strengthen the program because of the noisy evaluation function.  I am
basically making it impossible (at low depth levels) to play a bad
move.  For instance, moving to a1 in the opening, although very
unlikely, is possible with an MC evaluation function, but when I use the
patterns it is impossible.

I have more comments in response to yours:

David Fotland wrote:
 Hi Don,

 I never heard of this technique before.  Are there any more you can share?
   
Since you are using hash tables, I assume you are aware of ETC
(Enhanced Transposition Cutoffs)?

For anyone unaware -  it works like this:

1.  If a hash table entry does not exist for the current position,
2.  do a quick hash probe of the child positions, in the hope that one
of them already yields a cutoff.

At interior nodes it's cheap and it will provide a cutoff sometimes when
a direct cutoff is not in the table.   If it does not provide a cutoff
it might be used to narrow the alpha beta window.
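
A hedged sketch of the ETC probe described here.  The table layout (a dict
mapping position to a score and bound type) and the helper name are
assumptions for illustration, not any particular engine's code.

```python
# ETC sketch: before searching a node's children, probe the
# transposition table for each child; a stored child bound can give an
# immediate cutoff at the parent.

def etc_probe(children, tt, beta):
    """Return a cutoff score if some child's stored bound already
    refutes this node, else None.  tt maps position -> (score, bound)."""
    for child in children:
        entry = tt.get(child)
        if entry is None:
            continue
        score, bound = entry
        # The child's score is from the opponent's view, so negate it.
        # An exact or upper bound on the child is a lower bound for us.
        if bound in ("exact", "upper") and -score >= beta:
            return -score
    return None
```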

 ManyFaces12 uses:

 -Iterative deepening, with hash table
   
My experimental alpha/beta searcher doesn't even have hash tables
yet. I have to think about how to implement this because scores are
probabilistic.   Even worse, they can be based on varying numbers of
play-outs since I stop play-outs early when it appears that the score
will not shrink the alpha beta window.

I will probably first try a version that pretends none of this is an issue.

Do you have parity issues in ManyFaces?  This is a problem in GO
with an MC evaluation function.  An odd-ply search seems to return a
substantially higher score than an even-ply search, and it seems like
you cannot compare an odd-ply to an even-ply search.  This affects the
hash table implementation significantly because you cannot take a cutoff
based on scores of a different parity (at least not without a
slop margin).
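
One possible shape for the slop-margin workaround mentioned above: accept a
stored bound from the opposite search parity only when it clears the window
by an extra margin.  The margin value and function name are guesses, not a
scheme either program actually uses.

```python
# Parity-aware hash cutoff sketch for an MC evaluation, where odd- and
# even-ply scores are not directly comparable.

PARITY_MARGIN = 0.05   # assumed slack between odd/even-ply MC scores

def hash_cutoff_ok(stored_score, stored_parity, current_parity, beta):
    """Accept a stored lower bound as a cutoff, demanding extra margin
    when the stored search had the opposite parity."""
    if stored_parity == current_parity:
        return stored_score >= beta
    return stored_score >= beta + PARITY_MARGIN
```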

I'm currently trying to find a workable scheme to deal with this.  It's
an even worse problem with MC evaluation because of the nature of