a) I don't use timestamps, and I don't have a DAG. I just reuse nodes that have few visits or that are old (I store each node's depth from the root).
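A minimal sketch of the recycling policy described above. This is my own illustration, not the actual code: the class names, the visit threshold, and the "shallower than the root" test are all assumptions. The idea is that once a real move is accepted, any node whose stored depth is less than the new root's depth is unreachable and safe to reuse, as is any node with too few visits to matter.

```java
// Hypothetical node pool that recycles rarely-visited or stale nodes.
// Names and thresholds are illustrative, not from any real engine.
class Node {
    long zobrist;      // position hash
    int visits;        // playouts through this node
    int depthFromRoot; // set when the node is created or relinked
}

class NodePool {
    private final Node[] pool;

    NodePool(int size) {
        pool = new Node[size];
        for (int i = 0; i < size; i++) pool[i] = new Node();
    }

    // Reclaim a node that is rarely visited or now behind the root:
    // after a real move, a node shallower than the new root can never
    // be reached again, so it is safe to overwrite.
    Node reclaim(int rootDepth, int visitThreshold) {
        for (Node n : pool) {
            if (n.visits < visitThreshold || n.depthFromRoot < rootDepth) {
                n.visits = 0;
                n.depthFromRoot = rootDepth;
                return n;
            }
        }
        return null; // pool exhausted, allocation fails
    }
}
```

Note that this scans linearly for a victim; a real engine would keep a rotating cursor or free list so reclamation stays cheap.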
b) By "unsuccessful search," I assume you mean the linear search in the probe. I don't do a probe search, so I can't help you there. I have a strong aversion to doing big linear searches in performance-sensitive code :)

How big are your nodes? If you have 2 GB, they are about 15 KB each. That seems a little large.

David

> -----Original Message-----
> From: computer-go-boun...@computer-go.org [mailto:computer-go-
> boun...@computer-go.org] On Behalf Of Peter Drake
> Sent: Thursday, July 02, 2009 4:21 PM
> To: Computer Go
> Subject: [computer-go] Hash tables
>
> Okay, so I've finally built a transposition table. There is a big pool
> of nodes, allocated at startup. To find a node, I:
>
> 1) Play the move on the board and get the new Zobrist hash
> (incorporating the simple ko point and the color to play).
> 2) Take this modulo the length of the table (currently 128K nodes) to
> find the first place to probe.
> 3) Linearly probe (i.e., try the next location, then the next one,
> etc.) until I either find a node with the correct Zobrist hash or hit
> the limit on the number of probes (1024 seems to work okay).
> 4) When trying to allocate a node, if I didn't find one that matched,
> I overwrite the first "old" node I encountered. An old node is one
> that wasn't touched on the previous turn (as indicated by a
> timestamp). If I didn't find an old node, the allocation fails.
>
> This works all right, but I'm bothered by the tension between a low
> probe limit (speeding up unsuccessful searches) and a high one (making
> more complete use of the table). Also, the table tends to fill up.
>
> Questions for others:
>
> a) Are you using this timestamp trick, or are you traversing your DAG
> to mark which nodes are still relevant every time you accept a (real,
> non-playout) move? I realize that almost none of the nodes touched
> last turn are still relevant, but traversing the entire table seems
> expensive.
> b) Is there some clever way to abandon an unsuccessful search earlier?
> In a hash table that wasn't full or close to it, I'd go until I found
> a null (i.e., a node that had never been used, as opposed to a node
> that is old or marked for deletion). Unfortunately, Orego fills up the
> table of 128K nodes in a matter of seconds.
>
> c) Is there some useful way of "rehashing" to get rid of the old
> nodes? Unfortunately, I can't fit a second copy of the table in
> memory...
>
> Thanks,
>
> Peter Drake
> http://www.lclark.edu/~drake/
>
> _______________________________________________
> computer-go mailing list
> computer-go@computer-go.org
> http://www.computer-go.org/mailman/listinfo/computer-go/
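For reference, the probing and allocation scheme Peter describes in steps 1–4 can be sketched roughly as follows. This is a minimal illustration under my own assumptions, not Orego's actual code: the class and field names, the use of hash 0 as an "empty slot" sentinel, and the exact aging test are all mine. It also shows the early-out Peter mentions in question b: a never-used slot ends an unsuccessful search immediately, which only helps while the table still has empty slots.

```java
// Hypothetical open-addressing transposition table with linear probing
// and timestamp-based aging. Illustrative only; names are assumptions.
class TranspositionTable {
    static final int SIZE = 1 << 17;      // 128K slots
    static final int PROBE_LIMIT = 1024;  // give up after this many probes

    long[] hashes = new long[SIZE];       // Zobrist hash per slot (0 = never used)
    int[] timestamps = new int[SIZE];     // turn on which the slot was last touched
    int currentTurn = 1;

    // Find the slot holding `hash`, or -1 if not found within the limit.
    int find(long hash) {
        int start = (int) Math.floorMod(hash, (long) SIZE);
        for (int i = 0; i < PROBE_LIMIT; i++) {
            int slot = (start + i) % SIZE;
            if (hashes[slot] == hash) return slot;
            if (hashes[slot] == 0) return -1; // never-used slot: search ends early
        }
        return -1;
    }

    // Allocate a slot for `hash`: reuse a matching slot if present,
    // otherwise overwrite the first empty or "old" slot encountered.
    // Returns -1 if no reusable slot is found within the probe limit.
    int allocate(long hash) {
        int start = (int) Math.floorMod(hash, (long) SIZE);
        int victim = -1;
        for (int i = 0; i < PROBE_LIMIT; i++) {
            int slot = (start + i) % SIZE;
            if (hashes[slot] == hash) return slot;           // transposition found
            if (hashes[slot] == 0) { victim = slot; break; } // empty slot
            if (victim == -1 && timestamps[slot] < currentTurn - 1)
                victim = slot; // "old": not touched this turn or the previous one
        }
        if (victim == -1) return -1; // allocation fails
        hashes[victim] = hash;
        timestamps[victim] = currentTurn;
        return victim;
    }
}
```

One caveat of this sketch: using 0 as the empty sentinel means a position whose Zobrist hash is genuinely 0 cannot be stored; a real implementation would carry a separate occupancy flag.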