Okay, I added a few more timings (playouts / second, very rough):
Plug-and-Go refbot: 14700
CRef bot (-O3): 12500
Gongo 1
Java bot: 6500
CRef bot (no optimization): 5882
Note that Gongo and Plug-and-Go are using different board data
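For anyone who wants to compare figures on their own machine, a harness along these lines is enough to measure playouts/second; `runPlayout()` below is a placeholder stub (a fixed amount of RNG work, roughly one move draw per point of a 19x19 game), not the code of any bot listed above.

```java
import java.util.Random;

public class PlayoutBench {
    // Placeholder playout: just burns a fixed amount of RNG work so the
    // harness has something to time. Swap in a real playout routine here.
    public static int runPlayout(Random rng) {
        int sum = 0;
        for (int i = 0; i < 400; i++) sum += rng.nextInt(361);
        return sum;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int playouts = 100_000;
        long start = System.nanoTime();
        for (int i = 0; i < playouts; i++) runPlayout(rng);
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%.0f playouts/second%n", playouts / seconds);
    }
}
```

Run it with and without `-server` to see the difference Mark asks about below.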
On Mon, Dec 14, 2009 at 10:37:24PM -0800, Peter Drake wrote:
It's easy to get confused -- different researchers use the terms
slightly differently.
They both gather data on moves other than a move made from the current board configuration. I would say that AMAF stores statistics on every
Hi!
On Mon, Dec 14, 2009 at 07:46:54PM +0100, Rémi Coulom wrote:
Petr Baudis wrote:
How do you (e.g. CrazyStone) solve the issue? Or do you perform
explicit unpruning by sorting the nodes instead of biasing them?
I bias them too. I use move probability, not move gamma, which normalizes
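For reference, the probability-vs-gamma distinction comes down to normalizing the raw gammas over the set of legal moves, along these lines (a sketch; the method name is mine):

```java
public class Gammas {
    // Normalize raw move gammas into a probability distribution over
    // the legal moves: p_i = gamma_i / sum_j gamma_j. Unlike a raw
    // gamma, a probability depends on what the other legal moves are.
    public static double[] toProbabilities(double[] gammas) {
        double total = 0.0;
        for (double g : gammas) total += g;
        double[] p = new double[gammas.length];
        for (int i = 0; i < gammas.length; i++) p[i] = gammas[i] / total;
        return p;
    }
}
```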
Petr Baudis wrote:
I wonder now, do you use a separate set of gammas for simulations and node
biasing? I've found that the performance seems very bad if I don't
include some time-expensive features, since the gammas are then very
off; I will probably simply generate two gamma sets, but
I took AMAF to be the process of considering all the moves regardless of
when they were played in the sequence (although a slight discount for later
in the sequence seems to help a little), whereas RAVE is using an
undefined method to favour some nodes over others prior to expanding
them. The reason (as far
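One common way to realize the favouring RAVE does is to blend a node's Monte Carlo value with its AMAF value using a weight that shrinks as real playouts accumulate. The formula below is one such schedule, a sketch only; the `equivalence` tuning constant and the method name are mine, not taken from any bot in this thread.

```java
public class Rave {
    // Blended value: (1 - beta) * mcValue + beta * amafValue, where
    // beta -> 1 when the node has few real playouts and -> 0 as mcRuns
    // grows. "equivalence" controls how fast AMAF evidence is discounted.
    public static double raveValue(double mcValue, int mcRuns,
                                   double amafValue, int amafRuns,
                                   double equivalence) {
        if (amafRuns == 0) return mcValue;
        double beta = amafRuns
                / (amafRuns + mcRuns + mcRuns * (double) amafRuns / equivalence);
        return (1 - beta) * mcValue + beta * amafValue;
    }
}
```

With no real playouts the node is ranked purely by its AMAF value, which is what lets RAVE favour some children before they are ever expanded.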
The relative values look about right. But I remember getting much
higher numbers. Did you run the Java versions with or without the
-server parameter?
Mark
On Mon, Dec 14, 2009 at 11:00 PM, Brian Slesinsky br...@slesinsky.org wrote: