Looking at the lightest playout version of my bitmap-based 9x9 program
(source code somewhere in the archives), I spent an estimated 2% of the
time generating random numbers, so 40% seems to indicate something is not
right, such as re-initializing the generator all the time.
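The re-initialization bug mentioned above can be sketched like this (a minimal illustration; the function names and playout shape are mine, not from the original program):

```python
import random

# Anti-pattern: constructing (and thereby seeding) a fresh generator for
# every playout. Seeding is comparatively expensive, and with a coarse
# seed it can even replay identical "random" playouts.
def playout_reseeding(seed):
    rng = random.Random(seed)      # re-initialized on every call
    return [rng.randrange(81) for _ in range(10)]

# Better: create the generator once and reuse it across all playouts.
rng = random.Random(12345)

def playout_shared():
    return [rng.randrange(81) for _ in range(10)]

# With the same seed, the reseeding version repeats itself exactly,
# while the shared generator keeps advancing between playouts:
assert playout_reseeding(42) == playout_reseeding(42)
assert playout_shared() != playout_shared()
```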
The execution time of
Awesome Joshua! I agree with the others. Start open sourcing it right away.
That's what I did with my Go bot that I started writing in a language I
didn't know. And people (well, two ;)) just decided to help out.
As for features: well, I'd be happy if you just reimplemented CGOS and it
were
What elements did you like about CGOS and what do you wish for?
I've begun writing a new version from scratch that isn't Tcl-based,
with the aim of future use, being open source, and being open to public
commits.
A simple JSON interface that enables people to do automated checks for
Elo rating,
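For illustration, such a JSON rating query might return something like the following (purely a hypothetical sketch; every field name here is my own assumption, not the actual interface):

```json
{
  "player": "MyBot-1.2",
  "games": 1543,
  "elo": 1875,
  "elo_stderr": 24
}
```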
For profiling I use callgrind.
AFAIK it is the most accurate, as it simulates a processor and counts
cycles etc.
As others pointed out: my playout code is somewhat lightweight. In that
40% version it only checked whether an intersection is empty. I added a
super-ko check, which gave a 10% hit on the number of
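A super-ko check like the one mentioned is typically done with Zobrist hashing: keep a set of whole-board position hashes and reject any move that recreates one. A minimal sketch (captures are omitted here; a real engine would also XOR out captured stones):

```python
import random

SIZE = 9
random.seed(0)
# One random 64-bit key per (intersection, color); empty points contribute nothing.
ZOBRIST = [[random.getrandbits(64) for _ in range(2)] for _ in range(SIZE * SIZE)]

def play(hash_, point, color):
    """Position hash after toggling a stone of `color` on `point`.
    XOR is its own inverse, so removing a stone uses the same operation."""
    return hash_ ^ ZOBRIST[point][color]

seen = {0}          # hashes of all positions so far; 0 = empty board
h = 0
h = play(h, 40, 0)  # black plays the center point
assert h not in seen    # new position: legal under positional super-ko
seen.add(h)

# Recreating an earlier whole-board position is detected in O(1):
# white's stone is placed and then XORed back out, restoring hash h.
repeat = play(play(h, 50, 1), 50, 1)
assert repeat == h and repeat in seen
```

The per-move cost is one XOR and one hash-set lookup, which matches the observation that the check is cheap but still measurably reduces playouts per second.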
The complex formula at the end is for a lower confidence bound of a
Bernoulli distribution with independent trials (AKA biased coin flip) and
no prior knowledge. At a leaf of your search tree, that is the most correct
distribution. Higher up in a search tree, I'm not so sure that's the
correct
On Mon, Mar 30, 2015 at 4:09 PM, Petr Baudis pa...@ucw.cz wrote:
The strongest programs often use RAVE or LGRF or something like that,
with or without the UCB for tree exploration.
Huh, are there any strong programs that got LGRF to work?
Erik
On Mon, Mar 30, 2015 at 09:11:52AM -0400, Jason House wrote:
The complex formula at the end is for a lower confidence bound of a
Bernoulli distribution with independent trials (AKA biased coin flip) and
no prior knowledge. At a leaf of your search tree, that is the most correct
distribution.
Hi,
When performing a Monte Carlo search, we end up with a number of wins
and a number of losses for a position on the board.
What is now the proven methodology for comparing these values?
I tried the method described here:
http://www.evanmiller.org/how-not-to-sort-by-average-rating.html
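The method described in that article is the Wilson score lower bound. A minimal sketch for win/loss counts (the function name is mine):

```python
from math import sqrt

def wilson_lower_bound(wins, losses, z=1.96):
    """Lower bound of the Wilson score interval for a Bernoulli win
    probability; z = 1.96 gives roughly 95% confidence."""
    n = wins + losses
    if n == 0:
        return 0.0
    p = wins / n
    centre = p + z * z / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / (1 + z * z / n)

# A move with fewer trials gets a more pessimistic bound than one with
# the same win rate but more evidence:
assert wilson_lower_bound(5, 5) < wilson_lower_bound(50, 50)
assert 0.0 <= wilson_lower_bound(50, 50) < 0.5
```

Ranking moves by this bound prefers well-sampled moves over lucky streaks, which is the point of the article: a 5/10 record scores much lower than 50/100 despite the identical win rate.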
On Mon, Mar 30, 2015 at 04:17:13PM +0200, Erik van der Werf wrote:
On Mon, Mar 30, 2015 at 4:09 PM, Petr Baudis pa...@ucw.cz wrote:
The strongest programs often use RAVE or LGRF or something like that,
with or without the UCB for tree exploration.
Huh, are there any strong programs