Hi Oliver
Reinforcement learning is different from unsupervised learning. We used
reinforcement learning to train the Atari games. Also we published a more
recent paper (www.nature.com/articles/nature14236) that applied the same
network to 50 different Atari games (achieving human level in around
Can you say anything about whether you think their approach to unsupervised
learning could be applied to networks similar to those you trained? Any
practical or theoretical constraints we should be aware of?
On Monday, 16 March 2015, Aja Huang ajahu...@gmail.com wrote:
Hello Oliver,
@computer-go.org
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play
Go
But, I imagine this is more fuss than it is worth; the NN will be
integrated into MCTS search, and I think the strong programs already
have ways to generate ko threat candidates.
Darren
Do they? What would that look like? Playing two moves in a row for the same side?
I thought the programs naively
On 2014-12-19 15:25, Hiroshi Yamashita wrote:
Ko fighting is weak. A ko threat is simply a good pattern move.
I suppose you could train on a subset of data: only positions where
there was a ko-illegal move on the board. Then you could learn ko
threats. And then use this alternative NN when meeting a
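A minimal sketch of that subset-and-switch idea, assuming each training example already carries a boolean ko flag (e.g. derived from a ko feature plane); all field and function names below are illustrative, not from the paper:

```python
# Hypothetical sketch: split the training positions into those with an
# active ko (a ko-illegal point exists on the board) and the rest, so a
# second network can specialise in ko threats. Each example is assumed
# to be a dict with a precomputed boolean "ko" flag.

def split_by_ko(examples):
    """Partition training examples into (ko_positions, other_positions)."""
    ko, other = [], []
    for ex in examples:
        (ko if ex["ko"] else other).append(ex)
    return ko, other

def pick_network(position, ko_net, main_net):
    """At play time, route to the ko-specialised net only when a ko is
    active in the current position."""
    return ko_net if position["ko"] else main_net
```

The routing function is where the "use this alternative NN" step would plug into an engine.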
Hi,
I am just trying to reproduce the data from page 7 with all features
disabled. I do not reach the accuracy (I stay below 20%).
Now I wonder about a short statement in the paper that I did not really
understand:
On page 4 top right they state In our experience using the rectifier
function
Hi!
On Wed, Dec 31, 2014 at 11:16:57AM +0100, Detlef Schmicker wrote:
I am just trying to reproduce the data from page 7 with all features
disabled. I do not reach the accuracy (I stay below 20%).
Now I wonder about a short statement in the paper that I did not really
understand:
On page 4
On 31.12.2014 at 14:05, Petr Baudis wrote:
Hi!
On Wed, Dec 31, 2014 at 11:16:57AM +0100, Detlef Schmicker wrote:
I am just trying to reproduce the data from page 7 with all features
disabled. I do not reach the accuracy (I stay below 20%).
Now I wonder about a short statement in the
I would very much appreciate an open source implementation of this.
Or rather, I'd prefer to spend my time using one to do interesting things
rather than building one. I do plan to open source my implementation if
I have to make one and can bring myself to build it from scratch...
I started
Hi!
On Fri, Dec 19, 2014 at 10:50:30AM +0900, Hiroshi Yamashita wrote:
One question: Is there a place where I can find sgf
Paper author Christopher Clark kindly sent me the sgf files and let me share them on the mailing list.
That's great, thanks for negotiating that. :-)
This is a copy of sgf.
That's pretty good looking for a pure predictor. Considering it has no
specific knowledge about semeais, ladders, or ko threat situations...
Switching out the pattern matcher (not the whole move generator) in an
existing MC program should be pretty straightforward. Even if the NN is a
lot slower
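Such a swap might look roughly like the following sketch, where the pattern table and the DCNN sit behind the same priors interface so the rest of the engine is untouched; all class and method names are assumptions, not from any real engine:

```python
# Sketch: both the classic pattern policy and the DCNN expose the same
# "priors" interface, returning a probability per legal move, so an MC
# program can swap one for the other without other changes.

class PatternPrior:
    """Classic pattern-based move weighting (table lookup, very fast)."""
    def __init__(self, table):
        self.table = table
    def priors(self, board):
        weights = {mv: self.table.get(mv, 1.0) for mv in board["legal"]}
        total = sum(weights.values())
        return {mv: w / total for mv, w in weights.items()}

class DcnnPrior:
    """DCNN move prediction behind the same interface. A forward pass is
    far slower than a table lookup, so a real engine would batch
    positions or evaluate them asynchronously."""
    def __init__(self, net):
        self.net = net
    def priors(self, board):
        probs = self.net(board)                 # move -> probability
        legal = {mv: probs.get(mv, 0.0) for mv in board["legal"]}
        total = sum(legal.values()) or 1.0
        return {mv: p / total for mv, p in legal.items()}
```

Renormalising over legal moves handles the network putting some mass on illegal points.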
Hi,
The predictor is white. It really does just play shapes, but evidently
that's often enough, at least against weaker opponents.
I saw some games, and my impressions are:
The DCNN sees the board widely.
Even without previous-move info, the DCNN can answer the opponent's move.
It knows corner life and death well.
play is very helpful in that regard.
-Original Message-
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
Hiroshi Yamashita
Sent: Friday, December 19, 2014 10:25 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural
I put two commented games on
http://webdocs.cs.ualberta.ca/~mmueller/fuego/Convolutional-Neural-Network.html
Enjoy!
Martin
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
On Sun Dec 14 23:53:45 UTC 2014, Hiroshi Yamashita wrote:
Teaching Deep Convolutional Neural Networks to Play Go
http://arxiv.org/pdf/1412.3409v1.pdf
Wow, this somewhat resembles what I was hoping to do! But now I
should look for some other avenue :-) But
I'm surprised it's only published on
Hi,
One question: Is there a place where I can find sgf
Paper author, Christopher Clark kindly sent me sgf and let me share on ML.
This is a copy of sgf.
http://www.yss-aya.com/dcnn_games_20141218.tar.gz
His notes are as follows.
...@bd.mbn.or.jp
To: computer-go@computer-go.org
Subject: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go
Hi,
This paper looks very cool.
Teaching Deep Convolutional Neural Networks to Play Go
http://arxiv.org/pdf/1412.3409v1.pdf
Their move prediction got a 91% winrate against GNU Go and 14% against Fuego on 19x19.
Hi Ingo,
One question: Is there a place where I can find sgf
I could not find any. I also want to see the sgf files.
Hiroshi Yamashita
Of
René van de Veerdonk
Sent: Monday, December 15, 2014 11:47 PM
To: computer-go
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play
Go
Correct me if I am wrong, but I believe that the CrazyStone approach of
team-of-features can be cast in terms of a shallow neural
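That correspondence is easy to make concrete: scoring each candidate move by the product of its feature weights (equivalently, the sum of log-weights) and normalising is exactly a one-layer linear model with a softmax output. A toy NumPy sketch, with illustrative values only:

```python
import numpy as np

def team_of_features(F, log_w):
    """Move probabilities from a 'team of features'.
    F: (n_moves, n_features) 0/1 matrix marking which features fire
       for each candidate move;
    log_w: per-feature log-weights.
    The product of weights per move is the sum of log-weights, so this
    is a linear layer followed by a softmax -- a shallow neural net."""
    logits = F @ log_w
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()
```

Training the log-weights by gradient ascent on the log-likelihood of pro moves then recovers a CrazyStone-style fit.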
Hi Aja,
That being said, Hiroshi, are you sure there was no problem in your
experiment? 6% winning rate against GnuGo on 19x19 seems too low for a
predictor of 38.8% accuracy. And yes, in the paper we will show a game that
I tried without resign, but the result is similar.
winrate games
: Tuesday, December 16, 2014 10:23 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play
Go
Hi Brian,
I understand your points, but deep convolutional neural networks are very
powerful in the sense that they can represent very
A move generator that always plays its first choice, and that can win games
against Fuego?
That smells like a possible game changer (pardon the pun).
Surely programmers will take this workhorse and put it before the MC cart.
Stefan
When I had an opportunity to talk to Yann LeCun about a month ago, I asked
him if anybody had used convolutional neural networks to play go and he
wasn't aware of any efforts in that direction.
There was work using neural networks in the mid 1990s, when I first
started with computer go. I
Álvaro, this is exactly something that I have been thinking about as
well (the last part about MC+NN and feedback between the two). It
seems like the authors of that paper are also thinking about
something similar.
I currently have the very basics of an implementation as well but
performance is
Hi!
On Mon, Dec 15, 2014 at 08:53:45AM +0900, Hiroshi Yamashita wrote:
This paper looks very cool.
Teaching Deep Convolutional Neural Networks to Play Go
http://arxiv.org/pdf/1412.3409v1.pdf
Their move prediction got a 91% winrate against GNU Go and 14%
against Fuego on 19x19.
That's
Thanks for posting this Hiroshi!
Nice to see this neural network revival. It is mostly old ideas, and it is
not really surprising to me, but with modern compute power everyone can now
see that it works really well. BTW for some related work (not cited),
people might be interested to read up on
Chris Maddison also produced very good (in fact much better) results using
a deep convolutional network during his internship at Google. It is currently
awaiting publication approval; I will post the paper once it is approved.
Aja
On Mon, Dec 15, 2014 at 2:59 PM, Erik van der Werf
I tested Aya's move prediction strength.
The prediction rate is 38.8% (first choice is the same as the pro's move).
Against GNU Go 3.7.10 Level 10:

        winrate  games
19x19   0.059     607
13x13   0.170     545
9x9     0.141    1020

I was a bit surprised there is no big difference from 9x9 to 19x19.
But 6%
performance.
-Original Message-
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
Hiroshi Yamashita
Sent: Monday, December 15, 2014 10:27 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play
Go
I
On Mon, Dec 15, 2014 at 02:57:32PM -0500, Brian Sheppard wrote:
I found the 14% win rate against Fuego potentially impressive, but I
didn't get a sense for Fuego's effort level in those games. E.g., Elo
ratings. MCTS actually doesn't play particularly well until a sufficient
investment
You don't need a neural net to predict pro moves at this level.
My measurement metric was slightly different: I counted how far down the
list of moves the pro move appeared, so matching the pro move scored
as 100% and being tenth on a list of 100 moves scored 90%.
Combining simple metrics such
On 12/15/2014 01:39 PM, Dave Dyer wrote:
You don't need a neural net to predict pro moves at this level.
My measurement metric was slightly different: I counted how far down the
list of moves the pro move appeared, so matching the pro move scored
as 100% and being tenth on a list of 100 moves
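One plausible formalisation of that rank-based metric, as a short sketch; the exact scoring convention is a guess, chosen so a first-choice match scores 1.0 and the tenth move on a 100-move list scores about 0.9:

```python
def rank_score(ranked_moves, pro_move):
    """Score a prediction by how far down the ranked candidate list the
    professional's move appears. With a 0-based index, the first choice
    scores 1.0 and the tenth move on a 100-move list scores 0.91 --
    roughly the 90% figure described above. This is one guess at the
    convention; the original metric may differ slightly."""
    idx = ranked_moves.index(pro_move)   # 0 if the pro move is first
    return 1.0 - idx / len(ranked_moves)
```

Averaging this over a test set gives a smoother accuracy number than top-1 match rate alone.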
2014-12-15 21:31 GMT+00:00 Petr Baudis pa...@ucw.cz:
Still, strong play makes sense for a strong predictor. I believe I
can also beat GNUGo 90% of the time in blitz settings without doing pretty
much *any* conscious sequence reading. So I would expect a module that's
supposed to mirror my
On Mon, Dec 15, 2014 at 11:03:35PM +, Aja Huang wrote:
2014-12-15 21:31 GMT+00:00 Petr Baudis pa...@ucw.cz:
Still, strong play makes sense for a strong predictor. I believe I
can also beat GNUGo 90% of the time in blitz settings without doing pretty
much *any* conscious sequence
Finally, I am not a fan of NN in the MCTS architecture. The NN
architecture imposes a high CPU burden (e.g., compared to decision trees),
and this study didn't produce such a breakthrough in accuracy that I would
give away performance.
Is it really such a burden? Supporting the move
2014-12-15 23:29 GMT+00:00 Petr Baudis pa...@ucw.cz:
Huh, aren't you?
I just played two quick games against GnuGoBot39, where I tried very hard not
to read anything at all, and had no trouble winning.
groups had some trouble but mindless clicking saved it anyway.)
That well
RE: MC + NN feedback:
One area I'm particularly interested in is using NN to apply knowledge
from the tree during the playout. I expect that NNs will have
difficulty learning strong tactical play, but a combination of a
pre-trained network with re-training based on the MCTS results might
be able
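That pre-train-then-retrain loop can be sketched with a toy linear-softmax policy nudged toward MCTS visit counts; everything below is illustrative, not any real engine's training code:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def retrain_step(w, F, visit_counts, lr=0.5):
    """One gradient step pulling a toy policy toward the MCTS visit
    distribution for a single position.
    w: (n_features,) weights; F: (n_moves, n_features) move features;
    visit_counts: (n_moves,) visit counts from search.
    Minimises cross-entropy between the policy softmax(F @ w) and the
    normalised visit counts (the standard softmax-CE gradient)."""
    target = visit_counts / visit_counts.sum()
    p = softmax(F @ w)
    grad = F.T @ (p - target)   # d(cross-entropy)/dw
    return w - lr * grad
```

Starting `w` from a net pre-trained on pro games and iterating this against fresh search results is one way the pre-trained network and MCTS could feed back into each other.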
Sent: Monday, December 15, 2014 6:37 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play
Go
Finally, I am not a fan of NN in the MCTS architecture. The NN architecture
imposes a high CPU burden (e.g., compared to decision trees
suitable than another.
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of Stefan Kaitschick
Sent: Monday, December 15, 2014 6:37 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural Networks
to Play Go
Hi,
This paper looks very cool.
Teaching Deep Convolutional Neural Networks to Play Go
http://arxiv.org/pdf/1412.3409v1.pdf
Their move prediction got a 91% winrate against GNU Go and 14%
against Fuego on 19x19.
Regards,
Hiroshi Yamashita