A move generator that always plays its first choice, and that can win games
against Fuego?
That smells like a possible game changer (pardon the pun).
Surely programmers will take this workhorse and put it before the MC cart.
Stefan
___
Computer-go
When I had an opportunity to talk to Yann LeCun about a month ago, I asked
him if anybody had used convolutional neural networks to play go and he
wasn't aware of any efforts in that direction.
There was work using neural networks in the mid 1990s, when I first
started with computer go. I
Álvaro, this is exactly something that I have been thinking about as
well (the last part about MC+NN and feedback between the two). It
seems like the authors of that paper are also thinking about
something similar.
I currently have the very basics of an implementation as well but
performance is
Hi!
On Mon, Dec 15, 2014 at 08:53:45AM +0900, Hiroshi Yamashita wrote:
This paper looks very cool.
Teaching Deep Convolutional Neural Networks to Play Go
http://arxiv.org/pdf/1412.3409v1.pdf
Their move prediction achieved a 91% winrate against GNU Go and 14%
against Fuego on 19x19.
That's
Thanks for posting this Hiroshi!
Nice to see this neural network revival. It is mostly old ideas, and it is
not really surprising to me, but with modern compute power everyone can now
see that it works really well. BTW for some related work (not cited),
people might be interested to read up on
Chris Maddison also produced very good (in fact much better) results using
a deep convolutional network during his internship at Google. Currently
waiting for publication approval; I will post the paper once it passes.
Aja
On Mon, Dec 15, 2014 at 2:59 PM, Erik van der Werf
I tested Aya's move prediction strength.
Prediction rate is 38.8% (first choice is same as pro's move)
against GNU Go 3.7.10 Level 10
        winrate  games
19x19   0.059    607
13x13   0.170    545
9x9     0.141    1020
I was a bit surprised that there is no big difference from 9x9 to 19x19.
But 6%
So I read this kind of study with some skepticism. My guess is that the
large-scale pattern systems in use by leading programs are already pretty
good for their purpose (i.e., progressive bias).
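Progressive bias, as used in the large-scale pattern systems mentioned above, folds a move prior (from patterns or a network) into the tree-selection value and lets it fade as real playout statistics accumulate. A minimal sketch with a simplified UCT form; the constants and the exact decay schedule here are hypothetical, not taken from any particular program:

```python
import math

def select_value(q, visits, parent_visits, prior, c_uct=1.4, c_bias=1.0):
    """UCT score with a progressive-bias term (hypothetical constants).

    q             -- mean playout result for this child
    visits        -- visit count of this child
    parent_visits -- visit count of the parent node
    prior         -- move probability from patterns or a neural net
    """
    exploration = c_uct * math.sqrt(math.log(parent_visits) / (visits + 1))
    # The bias term injects the prior, but decays as real statistics
    # accumulate, so the search eventually trusts its own results.
    bias = c_bias * prior / (visits + 1)
    return q + exploration + bias

# A strong prior dominates early, when the move has few visits...
early = select_value(q=0.5, visits=1, parent_visits=10, prior=0.9)
# ...but contributes almost nothing once the move is well sampled.
late = select_value(q=0.5, visits=1000, parent_visits=10000, prior=0.9)
```

The point of the decay is that a prior only needs to be good enough to order the first few visits; after that, playout statistics take over.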
Rampant personal and unverified speculation follows...
On Mon, Dec 15, 2014 at 02:57:32PM -0500, Brian Sheppard wrote:
I found the 14% win rate against Fuego potentially impressive, but I
didn't get a sense of Fuego's effort level in those games, e.g., Elo
ratings. MCTS actually doesn't play particularly well until a sufficient
investment
You don't need a neural net to predict pro moves at this level.
My measurement metric was slightly different, I counted how far down the
list of moves the pro move appeared, so matching the pro move scored
as 100% and being tenth on a list of 100 moves scored 90%.
Combining simple metrics such
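The rank-based metric described above could be computed roughly as follows; the exact formula is a guess reconstructed from the two examples given (matching the pro move scores 100%, tenth on a list of 100 scores about 90%):

```python
def rank_score(predicted_moves, pro_move):
    """Score a move-prediction list by how far down the pro move sits.

    Matching the pro move scores 100%, and each step down a list of N
    moves costs 1/N of the score. This formula is a reconstruction from
    the examples in the thread, not a verified definition.
    """
    n = len(predicted_moves)
    rank = predicted_moves.index(pro_move) + 1  # 1-indexed position
    return 100.0 * (n - rank + 1) / n

moves = ["move%d" % i for i in range(1, 101)]  # 100 hypothetical candidates
top = rank_score(moves, "move1")     # top of the list -> 100.0
tenth = rank_score(moves, "move10")  # tenth of 100 -> 91.0, close to the
                                     # "90%" quoted in the thread
```

Unlike plain top-1 prediction rate (e.g. Aya's 38.8% above), this metric gives partial credit for near misses, which matters when the predictor is used to order moves rather than to pick one.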
On 12/15/2014 01:39 PM, Dave Dyer wrote:
You don't need a neural net to predict pro moves at this level.
My measurement metric was slightly different, I counted how far down the
list of moves the pro move appeared, so matching the pro move scored
as 100% and being tenth on a list of 100 moves
2014-12-15 21:31 GMT+00:00 Petr Baudis pa...@ucw.cz:
Still, strong play makes sense for a strong predictor. I believe I
can also beat GNUGo 90% of the time in blitz settings without doing pretty
much *any* conscious sequence reading. So I would expect a module that's
supposed to mirror my
On Mon, Dec 15, 2014 at 11:03:35PM +, Aja Huang wrote:
2014-12-15 21:31 GMT+00:00 Petr Baudis pa...@ucw.cz:
Still, strong play makes sense for a strong predictor. I believe I
can also beat GNUGo 90% of the time in blitz settings without doing pretty
much *any* conscious sequence
Finally, I am not a fan of NN in the MCTS architecture. The NN
architecture imposes a high CPU burden (e.g., compared to decision trees),
and this study didn't produce such a breakthrough in accuracy that I would
give away performance.
Is it really such a burden? Supporting the move
2014-12-15 23:29 GMT+00:00 Petr Baudis pa...@ucw.cz:
Huh, aren't you?
I just played two quick games against GnuGoBot39 where I tried very hard not
to read anything at all, and had no trouble winning. (Well, one of my
groups had some trouble but mindless clicking saved it anyway.)
That well
RE: MC + NN feedback:
One area I'm particularly interested in is using NN to apply knowledge
from the tree during the playout. I expect that NNs will have
difficulty learning strong tactical play, but a combination of a
pre-trained network with re-training based on the MCTS results might
be able
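One speculative way to close the MC+NN feedback loop described above, not spelled out anywhere in the thread, is to turn MCTS visit counts into soft retraining targets for the move-prediction network:

```python
def mcts_policy_target(visit_counts, temperature=1.0):
    """Turn MCTS visit counts into a soft target distribution for
    re-training the move-prediction network.

    This is a speculative sketch of the MC+NN feedback idea: moves the
    search visited more often get proportionally more probability mass.
    The temperature parameter (hypothetical) sharpens or flattens the
    target.
    """
    powered = [v ** (1.0 / temperature) for v in visit_counts]
    z = sum(powered)
    return [p / z for p in powered]

# Moves searched 120, 30 and 10 times yield a target favoring the first:
target = mcts_policy_target([120, 30, 10])
```

Training the network toward such targets with a cross-entropy loss would, in principle, let the tactical knowledge discovered by the search flow back into the prior.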
Is it really such a burden?
Well, I have to place my bets on some things and not on others.
It seems to me that the costs of a NN must be higher than those of a system
based on decision trees. The convolutional NN has a very large parameter
space if my
reading of the paper is correct. Specifically,
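To put rough numbers on the parameter-space point, here is a back-of-the-envelope count for a hypothetical convolutional stack; the paper's actual channel counts and kernel sizes may differ, so these figures are for illustration only:

```python
def conv_params(in_ch, out_ch, k):
    """Trainable parameters in one conv layer: weights plus one bias
    per output filter."""
    return out_ch * in_ch * k * k + out_ch

# Hypothetical stack: 8 input feature planes into a 5x5 layer of 64
# filters, followed by six 3x3 layers of 64 filters each. These layer
# sizes are illustrative, not the paper's.
layers = [(8, 64, 5)] + [(64, 64, 3)] * 6
total = sum(conv_params(i, o, k) for i, o, k in layers)
# total comes to 234,432 parameters for this sketch
```

Since a convolution applies its weights at every board point, the per-evaluation cost is roughly the parameter count times the number of board intersections, which is indeed orders of magnitude more arithmetic than walking a decision tree.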
Correct me if I am wrong, but I believe that the CrazyStone approach of
team-of-features can be cast in terms of a shallow neural network. The
inputs are matched patterns on the board and other local information on
atari, previous moves, ko situation, and such. Remi alluded as much on this
list
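That reading of the team-of-features model, a single linear layer over binary feature matches followed by a softmax over candidate moves, can be sketched directly; the feature set and weights below are made up for illustration:

```python
import math

def move_probabilities(feature_matrix, weights):
    """Softmax over moves of a linear 'team of features' score.

    feature_matrix[m] is the list of binary features matched by move m
    (patterns, atari flags, distance to the previous move, ...). A
    single linear layer plus softmax is the 'shallow neural network'
    reading of the team-of-features model.
    """
    scores = [sum(w * f for w, f in zip(weights, feats))
              for feats in feature_matrix]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Two candidate moves, three hypothetical binary features each:
features = [[1, 0, 1],   # move A matches features 0 and 2
            [0, 1, 0]]   # move B matches feature 1
probs = move_probabilities(features, weights=[0.8, -0.2, 1.1])
```

In this view the deep convolutional approach differs mainly in learning its own feature detectors in the hidden layers instead of taking hand-built pattern matches as input.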