On Thu, 15 Nov 2007, Petr Baudis wrote:
> This looks like a good technique I should implement too. What "big"
> values are popular? I'm thinking size*size/3, but maybe that is too
> conservative?
If there is a capture of more than 1 stone during the random games, then
  count the number of white and black stones on the board.
  If there are more than twice as many stones of one color, then
    score the current board position.
    If this is consistent with the winner of the stone counting, then
      abort the current simulation.
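The stone-counting test in the steps above could be sketched as follows. This is a minimal illustration, not code from any particular engine: the board representation (a flat array of cells) and the function name `mercy_check` are assumptions, and the "more than twice as many" threshold is the one stated above.

```c
#define EMPTY 0
#define BLACK 1
#define WHITE 2

/* Count stones of each color; if one side has more than twice as many
 * stones as the other, return that color as the presumed winner,
 * otherwise return EMPTY to signal that the playout should continue.
 * (Board layout and names are assumptions for illustration only.) */
static int mercy_check(const char *board, int size)
{
    int black = 0, white = 0;
    for (int i = 0; i < size * size; i++) {
        if (board[i] == BLACK)
            black++;
        else if (board[i] == WHITE)
            white++;
    }
    if (black > 2 * white)
        return BLACK;
    if (white > 2 * black)
        return WHITE;
    return EMPTY;
}
```

As described above, this would only be called after a capture of more than one stone, and the simulation is aborted only if a static score of the position agrees with the stone count.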
  pure MC   10k   1050 ELO   -> myCtest-10k
            50k   1350
  AMAF      10k   1450       -> myCtest-10k-AMAF
            50k   1450
  UCT       10k   1300       -> myCtest-10k-UCT
            50k   1550
All algorithms above use "basic" playouts: no Go knowledge, except
the "don't-fill-your-1-pt-eye" rule.
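The one-point-eye rule mentioned above could be sketched like this. Again the board representation and the name `is_own_eye` are assumptions; this is the simple orthogonal-neighbour version common in basic playout policies, which deliberately ignores diagonals (and thus false eyes).

```c
#define EMPTY 0
#define BLACK 1
#define WHITE 2

/* A point is treated as a one-point eye for `color` when every
 * orthogonal neighbour is either a stone of that color or off the
 * board (the edge counts as a friendly wall). Random playouts skip
 * moves on such points. Diagonal checks for false eyes are omitted,
 * as in many basic playout policies. Names are illustrative only. */
static int is_own_eye(const char *board, int size, int x, int y, int color)
{
    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    for (int i = 0; i < 4; i++) {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx < 0 || nx >= size || ny < 0 || ny >= size)
            continue; /* off-board neighbour: treat as friendly */
        if (board[ny * size + nx] != color)
            return 0; /* neighbour is empty or enemy: not an eye */
    }
    return 1;
}
```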
I will put up the bots above for reference again.
> Thanks a lot! I'm doing that now, and while the ranks are not yet
> stable, they are all only slightly above 1050 already. :-( (Even the
> variants with extra domain-specific knowledge.) I guess I still have
> some bugs there.
I recommend temporarily removing all knowledge and debugging the
basic structure first.
> By the way, when I make a significant change to the algorithm (even a
> bugfix that will significantly improve it), is it considered better
> practice to have the changed bot play from scratch, or to introduce
> the change into existing bots and see how the rating changes?
If I discover a bug before the bot has played ~20 games, I usually keep
the name.
Christoph
_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/