Petr Baudis wrote:
> On Mon, Mar 10, 2008 at 06:57:07PM -0400, Don Dailey wrote:
>
>> I think you may still have a bug. You should get well over 1700 with
>> 110,000 playouts, even if they are light playouts.
>>
>
> Hmmm... That is going to be some tough debugging I suspect.
>
I'm working on a new program, starting from scratch. When it gets far
enough along I will compare my numbers (and ELO) to yours.
>
>>> I'm pretty sure my code is fairly well debugged now, but of course
>>> there may still be bugs lurking; when I first put my bots on CGOS
>>> they were awfully bug-ridden (and about 800 ELO worse ;-). What
>>> ELO rating did pure UCT bots get historically, and with how many playouts?
>>>
>>>
>> FatMan does 20k playouts and has heavy play-outs, very similar to the
>> first paper where MoGo described its play-out strategy - basically
>> playing a random out-of-atari move or a local move that fits one of
>> their patterns. It is rated 1800 on CGOS. The tree expansion policy
>> for nodes is based on the parent count, not the child itself. So
>> once the parent has 100 play-outs, children are expanded regardless of
>> the number of games they have seen. (Near the end of the game in 9x9
>> go some moves could be tried a few times before being expanded.)
>>
>
> Oh, interesting! I must have misread the libEGO code, which seems to use
> similar thresholds.
>
> What is the justification for using the parent playout count instead of
> the node playout count itself?
>
>
I don't know if it makes much difference how this is done, and I don't
know how everybody else is doing it. I allocate all the children at
once, so I don't have to store a pointer to each child - just one
pointer to the first child and a counter so that I know how many
children there are. In the middle of the game I'm probably expanding
a node on about every other playout, on average.
I preallocate all the memory for the nodes when the program starts
instead of using malloc and free, and I assume most others are doing
something similar.
Here is my basic data structure:
typedef struct ntag
{
    int          wins;
    int          games;
    int          child_count;   // zero until parent games > 100
    struct ntag *children;      // a pointer to a place in the big "node pool"
    mv_t         mv;            // move that got us here
} node_t;
When the child nodes are allocated, they are done all at once with
this code - where cc is the number of fully legal child nodes:

    // reserve space for cc entries
    n->children    = &(pool[pool_count]);
    n->child_count = cc;
    pool_count    += cc;

Overflow checking is done outside of the search routine.
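
Put together, the whole thing is roughly the following. This is only a
simplified sketch of the idea, not my actual code - the pool size and the
expand() signature are made up just to show how the pieces fit:

    #define POOL_SIZE (10 * 1000 * 1000)   // sized up front, to taste

    static node_t pool[POOL_SIZE];          // one big preallocated array of nodes
    static int    pool_count = 0;           // next free slot in the pool

    // Expand a node by allocating all of its children at once, given the
    // list of fully legal moves in this position.  This only happens once
    // the parent node has seen more than 100 playouts (games).
    static void expand(node_t *n, const mv_t *moves, int cc)
    {
        n->children    = &pool[pool_count];   // reserve cc consecutive entries
        n->child_count = cc;
        pool_count    += cc;                  // overflow is checked by the caller

        for (int i = 0; i < cc; i++) {
            n->children[i].wins        = 0;
            n->children[i].games       = 0;
            n->children[i].child_count = 0;
            n->children[i].children    = NULL;
            n->children[i].mv          = moves[i];
        }
    }

The search just calls something like this the first time it descends into
a node whose games count has passed 100 and whose children have not been
allocated yet - the children themselves may have seen few or no games,
which is the point you asked about.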
There are almost certainly better schemes; I just used what seemed to me
the easiest and quickest to implement without working too hard at it.
Some programs hash each position, and the tree is more abstract: no
pointers, just positions leading to other positions via Zobrist hash keys
in a hash table.
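Roughly, I imagine that looks something like this (a sketch only, with
made-up names - a fixed-size table indexed by the position's Zobrist hash,
with linear probing):

    typedef struct
    {
        unsigned long long key;    // Zobrist hash of the position; 0 means empty slot
        int                wins;
        int                games;
    } entry_t;

    #define TABLE_SIZE (1 << 22)   // a power of two so the hash can be masked

    static entry_t table[TABLE_SIZE];

    // Find (or create) the statistics entry for a position.
    static entry_t *lookup(unsigned long long key)
    {
        int i = (int)(key & (TABLE_SIZE - 1));
        while (table[i].key != 0 && table[i].key != key)
            i = (i + 1) & (TABLE_SIZE - 1);     // linear probing
        table[i].key = key;                     // claim the slot if it was empty
        return &table[i];
    }

The tree is then implicit: from a position you play each legal move, update
the Zobrist key incrementally, and look the child up. Transpositions are
shared for free, but you have to decide what to do when the table fills up.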
My scheme probably wastes a lot of space on nodes that are left
unvisited at the leaves of the tree. But I don't waste much on storing
pointers since I keep them in an array.
What is the state of the art on this? How is everyone else doing it?
- Don