On Thu, Jul 2, 2009 at 7:20 PM, Peter Drake <[email protected]> wrote:

> Okay, so I've finally built a transposition table. There is a big pool of
> nodes, allocated at startup. To find a node, I:
>
> 1) Play the move on the board and get the new Zobrist hash (incorporating
> the simple ko point and color to play).
> 2) Take this modulo the length of the table (currently 128K nodes) to find
> the first place to probe.


I assume you mean exactly 131072, a power of two?   Then the modulo is a bit
masking operation.   The classic algorithm is to use a prime number table
size, and I think the idea of using a power-of-two table size and bit
masking started in the old days, when division was horribly expensive.
Just about every game program I'm aware of uses a power-of-two table size
with a simple mask to calculate the first address to probe.
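To make that concrete, here is a throwaway sketch of the equivalence (the names are made up, not anyone's actual engine code):

```python
SIZE = 131072  # 128K entries, 2**17


def first_probe(zobrist_hash):
    # For a power-of-two table size, masking and modulo pick the same
    # slot, but the mask needs no division.
    return zobrist_hash & (SIZE - 1)


# The two forms agree for any non-negative hash value:
for h in (0, 12345, 0x9E3779B97F4A7C15):
    assert first_probe(h) == h % SIZE
```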

> 3) Linearly probe (i.e., try the next location, then the next one, etc.)
> until either I find a node with the correct Zobrist hash or hit the limit on
> the number of probes (1024 seems to work okay).

> 4) In situations where I'm trying to allocate a node, if I didn't find one
> that matched, I overwrite the first "old" node I encountered. An old node is
> one that wasn't touched on the previous turn (as indicated by a timestamp).
> If I didn't find an old node, the allocation fails.


This is the best way, I think.   It's silly to traverse the whole table and
clear it before each move if you can spare a few extra bits; even 2 or 3
bits for the timestamp are enough.   Of course, needing those extra bits is
the only downside, since you want to keep the records as small as
reasonably possible.   You have the added advantage that you can still use
nodes from previous searches - but of course you would update the timestamp
whenever you did so.
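Putting steps 3 and 4 together with the timestamp trick, a rough sketch might look like this (the names, the probe limit, and the exact "old" test are illustrative guesses, not Orego's actual code):

```python
SIZE = 131072        # power-of-two table size (128K nodes)
PROBE_LIMIT = 1024   # give up after this many probes


class Node:
    __slots__ = ("hash", "timestamp")

    def __init__(self):
        self.hash = None             # None marks a never-used slot
        self.timestamp = -(10 ** 9)  # fresh slots always count as "old"


table = [Node() for _ in range(SIZE)]
turn = 0  # incremented once per real (non-playout) move


def find_or_allocate(zobrist_hash):
    """Find the node for this hash, or overwrite the first old node."""
    index = zobrist_hash & (SIZE - 1)
    stale = None
    for _ in range(PROBE_LIMIT):
        node = table[index]
        if node.hash == zobrist_hash:     # hit: refresh and return it
            node.timestamp = turn
            return node
        # "Old" here means not touched this turn or the previous one.
        if stale is None and node.timestamp < turn - 1:
            stale = node
        index = (index + 1) & (SIZE - 1)  # linear probe, wrapping around
    if stale is not None:                 # miss: recycle the first old node
        stale.hash = zobrist_hash
        stale.timestamp = turn
        return stale
    return None                           # allocation fails
```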


>
>
> This works all right, but I'm bothered by the tension between a low probe
> limit (speeding up unsuccessful searches) and a high one (making more
> complete use of the table). Also, the table tends to fill up.
>
> Questions for others:
>
> a) Are you using this timestamp trick, or are you traversing your DAG to
> mark which nodes are still relevant every time you accept a (real,
> non-playout) move? I realize that almost none of the nodes touched last turn
> are still relevant, but traversing the entire table seems expensive.


In Lazarus I did not use a transposition table, just a tree of nodes.
However, I did go to some trouble between moves to do something - I forget
exactly what - but I remember the principal idea was to save the important
parts of the tree from one search to the next.

Also, I did not use standard garbage collection: I created a pool of nodes
at startup and managed it myself - no memory allocations, no free().
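A preallocated pool with a manual free list might look like the following (all names here are made up for illustration; this is not Lazarus's actual code):

```python
class PoolNode:
    __slots__ = ("value", "next_free")

    def __init__(self):
        self.value = None      # payload (e.g., search statistics)
        self.next_free = None  # link used only while on the free list


class NodePool:
    """Fixed pool allocated once at startup; no per-node malloc/free."""

    def __init__(self, capacity):
        self.nodes = [PoolNode() for _ in range(capacity)]
        # Thread the free list through the whole pool.
        for i in range(capacity - 1):
            self.nodes[i].next_free = self.nodes[i + 1]
        self.free_head = self.nodes[0]

    def allocate(self):
        node = self.free_head
        if node is not None:
            self.free_head = node.next_free
        return node  # None when the pool is exhausted

    def release(self, node):
        node.value = None
        node.next_free = self.free_head
        self.free_head = node
```

Released nodes go back on the front of the free list, so reuse is O(1) with no allocator involvement at all.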

>
>
> b) Is there some clever way to abandon an unsuccessful search earlier? In a
> hash table that wasn't full or close to it, I'd go until I found a null
> (i.e., a node that had never been used, as opposed to a node that is old or
> marked for deletion). Unfortunately, Orego fills up the table of 128k nodes
> in a matter of seconds.
>
> c) Is there some useful way of "rehashing" to get rid of the old nodes?
> Unfortunately, I can't fit a second copy of the table in memory...


I'll be looking forward to the responses, as I don't know how others do
this.   But I know the good programs all have some kind of scheme to reuse
nodes that are not proving particularly relevant to the current search.

- Don


>
>
> Thanks,
>
> Peter Drake
> http://www.lclark.edu/~drake/
>
>
>
> _______________________________________________
> computer-go mailing list
> [email protected]
> http://www.computer-go.org/mailman/listinfo/computer-go/
>
