Hi All

I've been lurking on the list for a while now and have a question I'd
like to ask, but first I'd like to take this opportunity to introduce
myself and my computer Go program, Oakfoam.

I am an engineering student at the University of Stellenbosch, South
Africa. I have been developing my computer Go program for the past six
months or so, mostly for my own enjoyment; however, I plan to base my
final-year project on it next year.

Some details about Oakfoam:
- UCT algorithm (surprise, surprise ;))
- RAVE
- MoGo-style 3x3 patterns
- Open source under the BSD license
- Almost everything is adjustable at runtime via parameters
- Recently achieved a 1700 Elo rating on CGOS 9x9
- Repo at http://bitbucket.org/francoisvn/oakfoam/
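
Since UCT+RAVE comes up again below, here is a minimal sketch of how a
RAVE-blended node value is typically computed (this is not Oakfoam's
actual code; the equivalence parameter k and its value are assumptions
that would normally be tuned):

```python
import math

def rave_beta(n, k=1000):
    # Weighting schedule: beta starts near 1 and decays toward 0
    # as the real visit count n grows. k is an assumed constant.
    return math.sqrt(k / (3 * n + k))

def node_value(wins, n, rave_wins, n_rave, parent_n, c_uct=0.0):
    # Blend the Monte Carlo mean with the AMAF/RAVE mean.
    q = wins / n if n else 0.0
    q_rave = rave_wins / n_rave if n_rave else 0.0
    beta = rave_beta(n)
    value = (1 - beta) * q + beta * q_rave
    # Optional UCT exploration term (often set to 0 when RAVE is used).
    if c_uct and n:
        value += c_uct * math.sqrt(math.log(parent_n) / n)
    return value
```

With few real visits the RAVE estimate dominates, which is what gives
RAVE its early guidance before the Monte Carlo statistics are reliable.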

I have been working almost exclusively on 9x9. I should also mention
that most of my parameters have not been tuned, so when I get around to
that I should gain some more "free" strength. However, my program seems
roughly comparable to others using UCT+RAVE, so I'm satisfied for now.
For reference, I need about 100k playouts with RAVE to reach a 50% win
rate against GNU Go 3.8 at level 10. Does this seem in order?

I have recently been working on Elo features, as in Rémi Coulom's
paper, "Computing Elo Ratings of Move Patterns in the Game of Go."
Using the MM tool, I have trained some features, and they seem to
correspond more or less with the gammas in the paper (one notable
exception: my self-atari gamma is much closer to 1). I also ran a test
on a separate collection of games and plotted a cumulative distribution
comparing the gamma-ranked move list against the move actually played,
as in the paper. Some points on my graph: top 1: 27%, top 5: 58%, top
10: 68%. These are slightly weaker than the paper's results, but since
I only used 3x3 patterns this is to be expected. At this stage
everything seemed to be in order.
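
For the curious, the top-k statistic I computed above can be sketched
like this (a simplified illustration, not my actual test harness; the
data layout is an assumption):

```python
def topk_hit_rates(positions, ks=(1, 5, 10)):
    # positions: list of (move_gammas, played_move) pairs, where
    # move_gammas maps each legal move to the product of its feature
    # gammas, and played_move is the move from the game record.
    hits = {k: 0 for k in ks}
    for gammas, played in positions:
        # Rank legal moves by descending gamma product.
        ranked = sorted(gammas, key=gammas.get, reverse=True)
        for k in ks:
            if played in ranked[:k]:
                hits[k] += 1
    n = len(positions)
    return {k: hits[k] / n for k in ks}
```

The cumulative curve in the paper is just this hit rate plotted for
every rank k.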

The next step is obviously to apply these features to the playouts. I
am currently testing my program with the Elo features in the playouts,
but unfortunately the preliminary results don't look good. The paper
reports an increase from a 38.2% to a 68.2% win rate, which is
obviously quite substantial. However, my program seems to be weaker
when using the features. Results are still coming in, so things could
technically improve, but a large improvement has essentially been ruled
out already.
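
For reference, by "applying the features to the playouts" I mean
sampling each playout move with probability proportional to its gamma
product, as in Coulom's paper. A minimal sketch (again, an illustration
rather than my actual code; the data layout is an assumption):

```python
import random

def sample_playout_move(move_gammas, rng=random):
    # Roulette-wheel selection: each legal move is chosen with
    # probability gamma(move) / sum of all gammas.
    total = sum(move_gammas.values())
    r = rng.random() * total
    acc = 0.0
    for move, gamma in move_gammas.items():
        acc += gamma
        if acc >= r:
            return move
    return move  # fallback in case of floating-point rounding
```

If my gammas were badly off I would expect this sampling to distort the
playouts, which is why I'm asking how to verify them.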

So my questions are: does anyone know where I might have gone wrong?
Is there a better way to verify that my feature gammas are OK?

Sorry for the long email, but I had a lot to say :) Any help is appreciated.
--
Francois van Niekerk
Email: [email protected] | Twitter: @francoisvn
Cell: +2784 0350 214 | Website: http://leafcloud.com
_______________________________________________
Computer-go mailing list
[email protected]
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
