To be clear, what I was talking about was building an opening book as
part of the game-generation process that produces training data for
the neural network. This makes sure you don't generate the same game
over and over again.
A few more things about my Spanish checkers experiment from a few
Building an opening book is a good idea. I do it too.
By the way, if anybody is interested, I have put a small 9x9 opening book
online:
https://www.crazy-sensei.com/book/go_9x9/
Evaluation is +1 for a win, -1 for a loss, for a komi of 7. It may not be
very good, because evaluations were done by my
For checkers, I used a naive implementation of UCT as my opening book
(the "playout" being the actual game where the engine is thinking). So
towards the end of the opening book there is always a position where
it will try a random move, but in the long run good opening moves will
be explored more.
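A minimal sketch of that naive UCT opening book (hypothetical code, not Rémi's actual implementation): each book node keeps visit and win counts, unexplored moves are tried at random, and the result of each finished self-play game is backed up along the book line that started it.

```python
import math
import random

class BookNode:
    """One opening-book position: visits plus wins for the player who moved into it."""
    def __init__(self):
        self.children = {}   # move -> BookNode
        self.visits = 0
        self.wins = 0.0

def select_opening(node, legal_moves, c=1.4):
    """Choose the next book move by UCB1; unexplored moves are tried first at random."""
    untried = [m for m in legal_moves if m not in node.children]
    if untried:
        return random.choice(untried)
    log_n = math.log(node.visits)
    def ucb(move):
        child = node.children[move]
        return child.wins / child.visits + c * math.sqrt(log_n / child.visits)
    return max(legal_moves, key=ucb)

def update_line(root, line, black_result):
    """Back a finished game's result (+1 Black win, -1 White win) up the book line."""
    node = root
    node.visits += 1
    mover_is_black = True
    for move in line:
        child = node.children.setdefault(move, BookNode())
        child.visits += 1
        if (black_result > 0) == mover_is_black:
            child.wins += 1.0
        node = child
        mover_is_black = not mover_is_black
```

Each self-play game follows `select_opening` until it reaches a fresh move, the engine plays out the rest, and `update_line` feeds the result back — which is what keeps the generator from producing the same game over and over.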
This is a report after my first day of training my Ataxx network:
https://www.game-ai-forum.org/viewtopic.php?f=24&t=693
Ataxx is played on a 7x7 board. The rules are different, but I expect 7x7
Go would produce similar results. 2k self-play games are more than enough
to produce a huge strength
I would be surprised if my model ever lost to GNU Go on 9x9. It's a lot
stronger than Fuego, which already stomps GNU Go. It would be a waste of
time to test it vs. GNU Go or even MCTS bots. I only plan on running tests
vs. current best models to see how it does against the state of the art 9x9
Thanks again for your thoughts and experiences Rémi and Igor.
I'm still puzzled by what is making training slower for me than Rémi (although
I wouldn't be surprised if Igor's results were faster when matched for
hardware, model size, strength etc-- see below). Certainly komi sounds like it
I trained using David Wu's code on 9x9 only, and it became superhuman
after a few months.
I'm not sure if anyone's interested, but I can release my network to the
world. It's around the strength of KataGo, but only on 9x9. I could do a
final test before releasing it into the wild.
Yes, using komi would help a lot. Still, I feel that something else must be
wrong, because winning 100% of the games as Black without komi should be
very easy on 7x7.
I have not written anything about what I did with Crazy Stone. But my
experiments and ideas were really very similar to what David
Hi Rémi,
Thanks for your comments! I am not using any komi and had not given much
thought to it. Although, I suppose that by having Black win most games, I'm
depriving the network of its only learning signal. I will have to try with an
appropriately set komi next...
Hi,
Thanks for sharing your experiments.
Your match results are strange. Did you use a komi? You should use a komi
of 9:
https://senseis.xmp.net/?7x7
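To make the komi point concrete, the win/loss label flips for any game Black wins by less than the komi. A tiny illustrative helper (not code from anyone's engine; scoring a drawn result as a White win here is an assumption, not a rule):

```python
def result_for_black(black_points, white_points, komi):
    """+1 if Black still wins once komi is added to White's score, else -1.
    A drawn score (possible with integer komi) counts as a White win here."""
    return 1 if black_points > white_points + komi else -1

# A 29-20 game on 7x7 is a Black win at komi 7, but not at komi 9.
print(result_for_black(29, 20, 7))   # -> 1
print(result_for_black(29, 20, 9))   # -> -1
```

With no komi at all, nearly every label is "+1 for Black", which is exactly the missing-learning-signal problem discussed in this thread.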
The final strength of your network looks surprisingly weak. When I started
to develop the Zero version of Crazy Stone, I spent a lot of time
Hi All,
I wanted to share an update to a post I wrote last year about using the AlphaGo
Zero algorithm on small boards (7x7). I trained for approximately 2 months on a
single desktop PC with 2 GPU cards.
In the article I was getting mediocre performance from the networks. Now, I've
found that