Jason,
I'm at a loss to explain your poor results. Some ideas:
1. Are you scoring the end of the game correctly? What is your
exact algorithm? Does your scoring properly and accurately take komi
into account? Are you sure you are getting the sign correct? It
seems that any bug here could very likely hurt your results in a big
way.
AnchorMan always considers the score from BLACK point of view:
1. Count black stones.
2. Count space owned by black.
3. Count white stones.
4. Count space owned by white.
    score = bst + bsp - wst - wsp, i.e. bst + bsp - (wst + wsp)
You are still not done - you must take komi into account, which is an
integer in AnchorMan. If komi is 7.5 we use 7.
The logic: if score > integer_komi then black wins; otherwise white wins.
Make your program show the board at the end of each playout, print
the WINNER, and manually check many examples to see that it is correct.
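To make the steps above concrete, here is a minimal sketch in Python. The board encoding and names are mine, not AnchorMan's actual code:

```python
KOMI = 7  # AnchorMan truncates komi to an integer: 7.5 becomes 7

def score_black(board):
    """Score a finished board from BLACK's point of view.

    `board` maps each point to 'B'/'W' for stones and 'b'/'w' for
    empty points owned by black/white (a toy encoding chosen just
    for this sketch).
    """
    bst = sum(1 for v in board.values() if v == 'B')  # black stones
    bsp = sum(1 for v in board.values() if v == 'b')  # black space
    wst = sum(1 for v in board.values() if v == 'W')  # white stones
    wsp = sum(1 for v in board.values() if v == 'w')  # white space
    return bst + bsp - (wst + wsp)

def black_wins(board):
    # Getting the sign or the comparison wrong here flips every result.
    return score_black(board) > KOMI
```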
2. Are there any silly errors with your random number generator?
I use the Mersenne Twister too, which, as you said, is perfectly fine.
But how do you use it to generate a random list entry?
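For what it's worth, the two classic mistakes here are an off-by-one with an inclusive-range API and modulo bias from a raw 32-bit output. A sketch in Python (whose random module happens to use the Mersenne Twister itself); the move list is toy data:

```python
import random  # CPython's random module is Mersenne Twister based

moves = ['A1', 'B2', 'C3', 'D4']  # candidate empty points (toy data)

# Correct: an unbiased index in [0, len(moves) - 1].
idx = random.randrange(len(moves))
move = moves[idx]

# Two classic bugs to check for:
#   random.randint(0, len(moves)) is INCLUSIVE of len(moves), so it
#   can index past the end of the list (or skew the distribution if
#   the overflow is clamped instead of rejected);
#   with a raw 32-bit generator, rng() % len(moves) has modulo bias
#   unless len(moves) divides 2**32 evenly.
```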
3. Are you clearing all the arrays for calculating the statistics
before each new move is generated?
4. Do you tally results using integers or floating point? Do you
do the proper casting to get a floating point value?
AnchorMan considers a win 1 and a loss -1 and tallies them up, along
with a count of how many times each was encountered.
With ALL-MOVES-AS-FIRST, one game can produce MANY samples, so you must
increment the tally multiple times in a single game. Could you be
doing this wrong?
This is wrong:
    wins / number_of_simulations
Instead, count every data point as if it were a new simulation.
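Here is a sketch of an AMAF tally that counts every data point as its own sample, dividing each move's total by that move's own sample count rather than by the number of playouts. The names and structure are mine, not AnchorMan's:

```python
from collections import defaultdict

# Per-move AMAF statistics; clear these before generating each new move.
tally = defaultdict(int)  # running sum of +1 (win) / -1 (loss) per move
count = defaultdict(int)  # how many samples each move has received

def record_playout(our_moves, we_won):
    """All-moves-as-first: one playout contributes one sample for
    EVERY move we played in it, not one sample for the whole game."""
    result = 1 if we_won else -1
    for move in set(our_moves):  # each move sampled once per playout
        tally[move] += result
        count[move] += 1

def value(move):
    # Divide by this move's own sample count, not the number of
    # playouts, and force floating point: integer division would
    # silently truncate the result in languages like C.
    return tally[move] / float(count[move])
```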
Somewhere in your program I am sure there is a head-slapping error you
will find and when you do you will scream out loud! Don't give up.
- - Don
Jason House wrote:
>
>
> On 9/28/07, *Christoph Birk* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
> wrote:
>
>
> On Sep 28, 2007, at 4:28 AM, Jason House wrote:
> > On 9/28/07, Jason House <[EMAIL PROTECTED]
> <mailto:[EMAIL PROTECTED]>> wrote:
> > Since there's obviously some kind of major performance gap, for now
> > I'll aim to align with Anchor_1k. From there, I hope it'll be
> > easier to diagnose what's going wrong.
> >
> > Correction: I meant to say GenAnchor_1k
>
> Hi Jason,
>
> Looking at the performance of hb-amaf-1k I suspect you have some
> serious bug(s)
>
>
> I agree. I'm actually quite shocked at just how serious. I have unit
> tests for a lot of the core logic (that run at start up, always ensuring
> accuracy), and have changed a lot of other logic to conform to what
> others have done... Those changes even tripped various unit tests and
> asserts (which then had to change). Before doing all this AMAF testing,
> I would have sworn that I had a reasonable implementation... Maybe not
> the best one, but certainly reasonable. Now I know that it's not the
> case. I think I've pretty much ruled out problems with the eye rule,
> suicide handling (I don't allow it), selection of random empty legal
> non-eye-filling moves, how the end of the game is detected, and how
> games are scored. I'm starting to run out of ideas. I'm thinking of
> doing a million random numbers and seeing if they look uniform, or going
> back to scour the core logic (that passes all the unit tests).
>
>
>
> in our code. A simple MC program without ANY heuristics except not
> playing inside
> your own 1-pt eyes should get about 1050 ELO (10k simulations, e.g.
> myCtest-10k).
>
>
> Other configurations of my bot have achieved that rating with 1-ply
> logic (example: housebot-633-UCB is 1050 ELO). I haven't tracked the
> number of playouts (since it's variable), but if they're as fast as the
> amaf variant, that'd be about 20k simulations per move near the start of
> the game. Without really knowing what other bots did (such as more than
> 1-ply), I thought I had a reasonable implementation.
>
>
> I appreciate everyone who's put up simple bots and explained what they
> did. That has really helped with comparisons.
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> computer-go mailing list
> [email protected]
> http://www.computer-go.org/mailman/listinfo/computer-go/