Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Stefan Kaitschick
A move generator that always plays its first choice, and that can win games
against Fuego?
That smells like a possible game changer (pardon the pun).
Surely, programmers will take this workhorse and put it before the MC cart.

Stefan

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Darren Cook
 When I had an opportunity to talk to Yann LeCun about a month ago, I asked
 him if anybody had used convolutional neural networks to play go and he
 wasn't aware of any efforts in that direction. 

There was work using neural networks in the mid 1990s, when I first
started with computer go. I think the problem, at that time, came down
to this: if you used just a few features the quality was terrible, but if
you used more interesting inputs the training times increased
exponentially, so much so that it became utterly impractical.

I suppose this might be another idea, like Monte Carlo, that just needed
enough computing power to become practical for go; it'll be
interesting to see how their attempts to scale it turn out. I've added
the paper to my Christmas reading list :-)

Darren

 Teaching Deep Convolutional Neural Networks to Play Go
 http://arxiv.org/pdf/1412.3409v1.pdf

 Their move prediction got a 91% winrate against GNU Go and 14%
 against Fuego on 19x19.




-- 
Darren Cook, Software Researcher/Developer
My new book: Data Push Apps with HTML5 SSE
Published by O'Reilly: (ask me for a discount code!)
  http://shop.oreilly.com/product/0636920030928.do
Also on Amazon and at all good booksellers!

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Mikael Simberg

Álvaro, this is exactly something that I have been thinking about as
well (the last part about MC+NN and feedback between the two). It
seems like the authors of that paper are also thinking about
something similar.

I currently have the very basics of an implementation as well, but
performance is poor and I don't have access to any (reasonable) GPUs,
only a very slow CPU. It would be nice to try to figure something out
together if you're interested.

Regards, Mikael Simberg


On Mon, Dec 15, 2014, at 02:54, Álvaro Begué wrote:

 When I had an opportunity to talk to Yann LeCun about a month ago, I
 asked him if anybody had used convolutional neural networks to play go
 and he wasn't aware of any efforts in that direction. This is
 precisely what I had in mind. Thank you very much for the link,
 Hiroshi!

 I have been learning about neural networks recently. I have a basic
 implementation (e.g., no convolutional layers yet), and I am working
 hard on getting GPU acceleration.

 One of the things I want to do with NNs is almost exactly what's
 described in this paper (but including number of liberties and chain
 length as inputs). I also want to try to use a similar NN to
 initialize win/loss counts in new nodes in the tree in an MCTS program.
 One last thing I want to experiment with is providing the results of
 MC simulations as inputs to the network to improve its performance. I
 think there is potential for MC to help NNs and vice versa.

 Does anyone else in this list have an interest in applying these
 techniques to computer go?


 Álvaro.

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Petr Baudis
  Hi!

On Mon, Dec 15, 2014 at 08:53:45AM +0900, Hiroshi Yamashita wrote:
 This paper looks very cool.
 
 Teaching Deep Convolutional Neural Networks to Play Go
 http://arxiv.org/pdf/1412.3409v1.pdf
 
 Their move prediction got a 91% winrate against GNU Go and 14%
 against Fuego on 19x19.

  That's awesome!  I was hoping to spend the evenings in the first
quarter of 2015 playing with exactly this; it seems I've been far from
the only one with this idea - but also that the prediction task really
is as easy as I suspected.  :-)

  Thanks a lot for pointing this out.  It's a pity they didn't make
their predictor open source - I don't look forward to implementing
reflectional preservation.
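
For anyone tempted to roll their own: the eight board symmetries are easy to
generate with numpy, for example to average a network's predictions over all
rotations and reflections at inference time. A minimal sketch, assuming a
hypothetical predict() that maps a (19, 19) board array to a (19, 19)
probability map:

    import numpy as np

    def symmetric_predict(predict, board):
        """Average a move-probability map over the 8 dihedral symmetries.

        Each prediction is mapped back to the original orientation before
        averaging, so the result is invariant to rotation and reflection.
        """
        total = np.zeros(board.shape, dtype=float)
        for k in range(4):
            rotated = np.rot90(board, k)
            # undo the rotation on the prediction
            total += np.rot90(predict(rotated), -k)
            flipped = np.fliplr(rotated)
            # undo the reflection first, then the rotation
            total += np.rot90(np.fliplr(predict(flipped)), -k)
        return total / 8.0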

Petr Baudis

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Erik van der Werf
Thanks for posting this Hiroshi!

Nice to see this neural network revival. It is mostly old ideas, and it is
not really surprising to me, but with modern compute power everyone can now
see that it works really well. BTW for some related work (not cited),
people might be interested to read up on the 90s work of Stoutamire,
Enderton, Schraudolph and Enzenberger.

Comparing results to old publications is a bit tricky. For example, the
things I did in 2001/2002 are reported to achieve around 25% prediction
accuracy, which at the time seemed good but is now considered unimpressive.
However, in hindsight, an important reason for that number was time
pressure and lack of compute power, which is not really related to anything
fundamental. Nowadays, using nearly the same training mechanism but with
more data and more capacity to learn (i.e., a bigger network), I also get
prediction rates on pro games of around 40%. In case you're interested, this
paper by Thomas Wolf, http://arxiv.org/pdf/1108.4220.pdf, has a figure with
more recent results (the latest version of Steenvreter is still a little bit
better though).

Another problem with comparing results is the difficulty of obtaining
independent test data. I don't think that was done optimally in this case.
The problem is that, especially for amateur games, there are a lot of
people memorizing and repeating the popular sequences. Also, if you're not
careful, it is quite easy to get duplicate games in your dataset (I've had
cases where one game was annotated in Chinese and the other (duplicate) in
English, or where the board was simply rotated). My solution was to always
test on games from the most recent pro tournaments, which I was certain
could not yet be in the training database. However, even that may not be
perfect, because pros also play popular joseki, which means there will at
least be lots of duplicate opening positions.
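
One cheap safeguard (my own sketch, not what Erik describes) is to hash each
game's final position in a canonical orientation, so rotated, mirrored, or
re-annotated copies of the same game collapse to the same key:

    import numpy as np

    def canonical_key(board):
        """Symmetry-invariant key for a position.

        board is a (19, 19) int array (0 empty, 1 black, 2 white).  The key
        is the lexicographically smallest byte string over the 8 dihedral
        symmetries, so rotated or mirrored copies map to the same key.
        """
        variants = []
        for k in range(4):
            rotated = np.rot90(board, k)
            variants.append(rotated.tobytes())
            variants.append(np.fliplr(rotated).tobytes())
        return min(variants)

    def deduplicate(games):
        """Keep one game per canonical final position (a crude filter)."""
        seen, unique = set(), []
        for final_board, record in games:
            key = canonical_key(final_board)
            if key not in seen:
                seen.add(key)
                unique.append((final_board, record))
        return unique

It will not catch duplicated opening positions, of course, which is the
harder part of the problem described above.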

I'm not surprised these systems now work very well as stand-alone players
against weak opponents. Some years ago David and Thore's move predictors
managed to beat me once in a 9-stone handicap game, which indicates that
their system, too, was already stronger than GNU Go. Further, the version of
Steenvreter in my Android app at its lowest level is mostly just a move
predictor, yet it still wins well over 80% of its games.

In my experience, when the strength difference is big and the game is
even, it is usually enough for the strong player to play only good shape
moves. The move predictors only break down in complex tactical situations
where some form of look-ahead is critical and the typical shape-related
proverbs give the wrong answers.

Erik


Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Aja Huang
Chris Maddison also produced very good (in fact much better) results using
a deep convolutional network during his internship at Google. We are
currently waiting for publication approval; I will post the paper once it
is approved.

Aja


Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Hiroshi Yamashita

I tested Aya's move prediction strength.

Prediction rate is 38.8% (the first choice matches the pro's move)

against GNU Go 3.7.10 Level 10

         winrate  games
19x19      0.059    607
13x13      0.170    545
9x9        0.141   1020

I was a bit surprised there is no big difference from 9x9 to 19x19.
But 6% on 19x19 is still low; the paper's 91% winrate is really high.
It must understand whole-board life and death.
I'd like to see their SGFs vs GNU Go and Fuego.

Aya's prediction includes a local string-capture search,
so this result may include some look-ahead.
Aya uses this move prediction in UCT; the playouts use another prediction.
Aya gets 50% against GNU Go with 300 playouts on 19x19 and 100 on 9x9.
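
For anyone who wants to run the same measurement, the prediction rate here is
just top-1 accuracy over professional game records. A minimal sketch, assuming
a hypothetical predict_best_move() and an iterator over (board, to_play,
pro_move) positions (this is not Aya's interface):

    def prediction_rate(predict_best_move, games):
        """Fraction of positions where the first choice equals the pro move.

        predict_best_move(board, to_play) returns a single (row, col) move;
        games yields lists of (board, to_play, pro_move) tuples.
        """
        hits = total = 0
        for game in games:
            for board, to_play, pro_move in game:
                if predict_best_move(board, to_play) == pro_move:
                    hits += 1
                total += 1
        return hits / total if total else 0.0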

Regards,
Hiroshi Yamashita


Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Brian Sheppard
So I read this kind of study with some skepticism. My guess is that the 
large-scale pattern systems in use by leading programs are already pretty 
good for their purpose (i.e., progressive bias).

Rampant personal and unverified speculation follows...
--
I found the 14% win rate against Fuego potentially impressive, but I didn't
get a sense of Fuego's effort level in those games (e.g., Elo ratings). MCTS
actually doesn't play particularly well until a sufficient investment is made.

I am not sure what to think about winning 91% against Gnu Go. Gnu Go makes a
lot of moves based on rules, so it replays games. I found that many of
Pebbles' games against Gnu Go were move-for-move repeats of previous games, so
much so that I had to randomize Pebbles if I wanted to use Gnu Go for
calibrating parameters. My guess is that the 91% rate is substantially
attributable to the way that Gnu Go's rule set interacts with the positions
that the NN likes. This could be a measure of strength, but not necessarily.

My impression is that the progressive bias systems in MCTS programs should 
prioritize interesting moves to search. A good progressive bias system might 
have a high move prediction rate, but that will be a side-effect of tuning it 
for its intended purpose. E.g., it is important to search a lot of bad moves 
because you need to know for *certain* that they are bad.

Similarly, it is my impression that a good progressive bias engine does not
have to be a strong stand-alone player. Strong play implies a degree of
tactical pattern matching that is not necessary when the system's
responsibility is to prioritize moves. Tactical accuracy should be delegated to
the search engine. The theoretical prediction is that MCTS search will be
(asymptotically) a better judge of tactical results.
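
For readers less familiar with the term: "progressive bias" here means adding a
predictor-driven bonus to the tree-selection formula that fades as a move
accumulates visits, so the search looks at interesting moves early but remains
free to refute them later. A rough sketch of one common form (the constants and
field names are illustrative, not from any particular program):

    import math

    def select_child(node, c_uct=1.0, c_bias=2.0):
        """UCT child selection with a progressive-bias term.

        Each child carries wins, visits, and a prior in [0, 1] from the move
        predictor; the prior's influence decays as 1/(visits + 1), so with
        enough simulations the search result dominates the static prediction.
        """
        log_parent = math.log(node.visits + 1)
        best, best_score = None, -float("inf")
        for child in node.children:
            mean = child.wins / (child.visits + 1e-9)
            exploration = c_uct * math.sqrt(log_parent / (child.visits + 1))
            bias = c_bias * child.prior / (child.visits + 1)  # progressive bias
            score = mean + exploration + bias
            if score > best_score:
                best, best_score = child, score
        return best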

Finally, I am not a fan of NN in the MCTS architecture. The NN architecture 
imposes a high CPU burden (e.g., compared to decision trees), and this study 
didn't produce such a breakthrough in accuracy that I would give away 
performance.



Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Petr Baudis
On Mon, Dec 15, 2014 at 02:57:32PM -0500, Brian Sheppard wrote:
 I found the 14% win rate against Fuego potentially impressive, but I
 didn't get a sense of Fuego's effort level in those games (e.g., Elo
 ratings). MCTS actually doesn't play particularly well until a sufficient
 investment is made.

  Generally I'd expect Fuego in the described hardware configurations
and time settings to be in the 2k-1d KGS range.

 I am not sure what to think about winning 91% against Gnu Go. Gnu Go makes a
 lot of moves based on rules, so it replays games. I found that many of
 Pebbles' games against Gnu Go were move-for-move repeats of previous games, so
 much so that I had to randomize Pebbles if I wanted to use Gnu Go for
 calibrating parameters. My guess is that the 91% rate is substantially
 attributable to the way that Gnu Go's rule set interacts with the positions
 that the NN likes. This could be a measure of strength, but not necessarily.

  That's an excellent point!

 My impression is that the progressive bias systems in MCTS programs should 
 prioritize interesting moves to search. A good progressive bias system might 
 have a high move prediction rate, but that will be a side-effect of tuning it 
 for its intended purpose. E.g., it is important to search a lot of bad moves 
 because you need to know for *certain* that they are bad.

  That sounds a bit backwards; it's enough to find a single good move,
and you don't need to confirm that all other moves are worse.  Of course
sometimes this collapses to the same problem, but not nearly all the
time.

 Similarly, it is my impression that a good progressive bias engine does
 not have to be a strong stand-alone player. Strong play implies a degree of
 tactical pattern matching that is not necessary when the system's
 responsibility is to prioritize moves. Tactical accuracy should be delegated
 to the search engine. The theoretical prediction is that MCTS search will be
 (asymptotically) a better judge of tactical results.

  I don't think anyone would *aim* to make the move predictor as strong
as possible; it's just that everyone is surprised that it happens to be so
strong.  :-)

  Still, strong play makes sense for a strong predictor.  I believe I
can also beat GNUGo 90% of the time in blitz settings without doing pretty
much *any* conscious sequence reading.  So I would expect a module that's
supposed to mirror my intuition to do the same.

 Finally, I am not a fan of NN in the MCTS architecture. The NN architecture 
 imposes a high CPU burden (e.g., compared to decision trees), and this study 
 didn't produce such a breakthrough in accuracy that I would give away 
 performance.

  ...so maybe it is MCTS that has to go!  We could be in for more
surprises.  Don't be emotionally attached to your groups.

-- 
Petr Baudis
If you do not work on an important problem, it's unlikely
you'll do important work.  -- R. Hamming
http://www.cs.virginia.edu/~robins/YouAndYourResearch.html

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Dave Dyer

You don't need a neural net to predict pro moves at this level. 

My measurement metric was slightly different: I counted how far down the
list of moves the pro move appeared, so matching the pro move scored
100% and being tenth on a list of 100 moves scored 90%.
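
In other words, the per-position score is one minus the (normalised) rank of
the pro move in the engine's ordered candidate list. A minimal sketch of that
metric, under my reading of the description:

    def rank_score(ranked_moves, pro_move):
        """Score one position by where the pro move sits in the ranking.

        An exact match scores 1.0 and the score falls off linearly with rank
        (tenth on a list of 100 candidates scores roughly 0.9); a pro move
        missing from the list scores 0.
        """
        if pro_move not in ranked_moves:
            return 0.0
        rank = ranked_moves.index(pro_move)   # 0 for an exact match
        return 1.0 - rank / len(ranked_moves)

    def average_score(positions):
        """Average the per-position scores over (ranked_moves, pro_move) pairs."""
        scores = [rank_score(moves, pro) for moves, pro in positions]
        return sum(scores) / len(scores) if scores else 0.0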

Combining simple metrics such as 3x3 neighborhood, position on the board,
and proximity to previous play, you can easily get to an average score
of 85%, without producing noticeably good play, at least without a search 
to back it up.

http://real-me.net/ddyer/go/global-eval.html


Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Christoph Birk

On 12/15/2014 01:39 PM, Dave Dyer wrote:


You don't need a neural net to predict pro moves at this level.

My measurement metric was slightly different, I counted how far down the
list of moves the pro move appeared, so matching the pro move scored
as 100% and being tenth on a list of 100 moves scored 90%.


There is a huge difference between matching a pro move and having
it at #10 on a list of 100.


Combining simple metrics such as 3x3 neighborhood, position on the board,
and proximity to previous play, you can easily get to an average score
of 85%, without producing noticeably good play, at least without a search
to back it up.


85% is basically meaningless; I am sure even a mid-kyu player
can put the pro move in the top 15% of 100 moves.

Christoph





Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Aja Huang
2014-12-15 21:31 GMT+00:00 Petr Baudis pa...@ucw.cz:

   Still, strong play makes sense for a strong predictor.  I believe I
 can also beat GNUGo 90% of the time in blitz settings without doing pretty
 much *any* conscious sequence reading.  So I would expect a module that's
 supposed to mirror my intuition to do the same.


I'm very surprised you are so confident in beating GnuGo over 90% of the time
in *blitz settings*. There are even people complaining that they couldn't beat
GnuGo:

http://www.lifein19x19.com/forum/viewtopic.php?f=18&t=170


  Finally, I am not a fan of NN in the MCTS architecture. The NN
 architecture imposes a high CPU burden (e.g., compared to decision trees),
 and this study didn't produce such a breakthrough in accuracy that I would
 give away performance.

   ...so maybe it is MCTS that has to go!  We could be in for more
 surprises.  Don't be emotionally attached to your groups.


Fair enough. :)

Aja

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Petr Baudis
On Mon, Dec 15, 2014 at 11:03:35PM +, Aja Huang wrote:
 2014-12-15 21:31 GMT+00:00 Petr Baudis pa...@ucw.cz:
 
    Still, strong play makes sense for a strong predictor.  I believe I
  can also beat GNUGo 90% of the time in blitz settings without doing pretty
  much *any* conscious sequence reading.  So I would expect a module that's
  supposed to mirror my intuition to do the same.
 
 
  I'm very surprised you are so confident in beating GnuGo over 90% of the
  time in *blitz settings*. There are even people complaining that they
  couldn't beat GnuGo:
 
  http://www.lifein19x19.com/forum/viewtopic.php?f=18&t=170

  Huh, aren't you?

  I just played two quick games against GnuGoBot39 where I tried very hard
not to read anything at all, and had no trouble winning.  (Well, one of my
groups had some trouble, but mindless clicking saved it anyway.)

Petr Baudis

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Stefan Kaitschick

 Finally, I am not a fan of NN in the MCTS architecture. The NN
 architecture imposes a high CPU burden (e.g., compared to decision trees),
 and this study didn't produce such a breakthrough in accuracy that I would
 give away performance.


  Is it really such a burden? Supporting the move generator with the NN
result high up in the decision tree can't be that expensive.
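
A sketch of what "high up in the tree" can mean in practice: evaluate the
network once when a node is expanded and cache the resulting priors, so the
per-simulation cost is a dictionary lookup rather than a forward pass. The
policy_net interface below is hypothetical:

    class Node:
        """A search-tree node whose children get NN priors at expansion time."""

        def __init__(self, board, to_play):
            self.board, self.to_play = board, to_play
            self.children = {}   # move -> Node
            self.priors = None   # move -> probability, filled on expansion
            self.wins = 0
            self.visits = 0

        def expand(self, policy_net, legal_moves):
            # One forward pass per *node*, not per playout: the cost is paid
            # only when the search first reaches this position, then reused.
            if self.priors is None:
                self.priors = policy_net.predict(self.board, self.to_play,
                                                 legal_moves)

With, say, a few thousand node expansions per move, the network is consulted
orders of magnitude less often than the playout policy.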

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Aja Huang
2014-12-15 23:29 GMT+00:00 Petr Baudis pa...@ucw.cz:

   Huh, aren't you?

   I just played quick two games GnuGoBot39 where I tried very hard not
 to read anything at all, and had no trouble winning.  (Well, one of my
 groups had some trouble but mindless clicking saved it anyway.)


That explains it well: your level is far beyond GnuGo, probably at least 3k on
KGS.

That being said, Hiroshi, are you sure there was no problem in your
experiment? A 6% winning rate against GnuGo on 19x19 seems too low for a
predictor with 38.8% accuracy. And yes, in the paper we will show a game in
which the neural network beat Fuego (or Pachi) at 100k playouts per move.

Aja

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Mark Wagner
RE: MC + NN feedback:
One area I'm particularly interested in is using NNs to apply knowledge
from the tree during the playout. I expect that NNs will have
difficulty learning strong tactical play, but a combination of a
pre-trained network with re-training based on the MCTS results might
be able to apply the knowledge gained in MCTS during the playouts to
correctly resolve life-and-death situations, semeai, and maybe ko fights.
Does anyone else have interest in this?
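
One concrete way to phrase that feedback loop (purely a sketch of the idea,
not something from the paper): treat the MCTS visit distribution at a
searched position as a training target and nudge the move predictor towards
it. With a simple softmax-over-linear-features predictor the update is just a
cross-entropy gradient step:

    import numpy as np

    def retrain_step(weights, feature_matrix, visit_counts, learning_rate=0.01):
        """One gradient step pulling a softmax-linear predictor towards MCTS visits.

        feature_matrix is (num_moves, num_features) for the candidate moves at
        one searched position, visit_counts the MCTS visit count of each move.
        The target is the normalised visit distribution; the loss is the
        cross-entropy between target and predicted move probabilities.
        """
        target = visit_counts / visit_counts.sum()
        scores = feature_matrix @ weights
        scores -= scores.max()                       # numerical stability
        probs = np.exp(scores) / np.exp(scores).sum()
        # d(cross-entropy)/d(scores) = probs - target; the chain rule through
        # the linear layer gives the feature-weighted gradient.
        grad = feature_matrix.T @ (probs - target)
        return weights - learning_rate * grad

The same idea carries over to a deep network by replacing the linear layer
with backpropagation through the net.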

-Mark


Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread Brian Sheppard
 Is it really such a burden?

Well, I have to place my bets on some things and not on others.

It seems to me that the costs of an NN must be higher than those of a system
based on decision trees. The convolutional NN has a very large parameter space
if my reading of the paper is correct. Specifically, it can represent all
patterns translated and rotated and matched against all points in parallel.

To me, that seems like a good way to mimic the visual cortex, but an
inefficient way to match patterns on a Go board.

So my bet is on decision trees. The published research on NNs will help me to
understand the opportunities much better, and I have every expectation that the
performance of decision trees should be >= NN in every way. E.g., faster, more
accurate, easier and faster to tune.

I recognize that my approach is full of challenges. E.g., an NN would
automatically infer soft qualities such as "wall" and "influence" that would
have to be provided to a DT as inputs. No free lunch, but again, this is about
betting that one technology is (overall) more suitable than another.


Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-15 Thread René van de Veerdonk
Correct me if I am wrong, but I believe that the CrazyStone team-of-features
approach can be cast in terms of a shallow neural network. The
inputs are matched patterns on the board and other local information on
atari, previous moves, the ko situation, and such. Remi alluded to this on
this list sometime after his paper was published.
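
In that reading, the team of features is essentially a one-layer network: each
candidate move gets a score that is a weighted sum of its (binary) features,
turned into a probability with a softmax. A minimal sketch of that view, with
the feature extraction left abstract:

    import numpy as np

    def move_probabilities(weights, feature_matrix):
        """Shallow (one-layer) move predictor.

        feature_matrix is (num_candidate_moves, num_features) with 0/1 entries
        (pattern matched, stone in atari, distance to the previous move, ...);
        weights is a (num_features,) vector.  A softmax over the weighted sums
        gives a probability for each candidate move, much like a
        team-of-features / Bradley-Terry style ranker.
        """
        scores = feature_matrix @ weights
        scores -= scores.max()               # numerical stability
        exp_scores = np.exp(scores)
        return exp_scores / exp_scores.sum()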

Without having studied the Deep Learning papers in detail, it seems that
these are the types of smart features that could be learned by a Deep
Neural Net in the first few layers if the input is restricted to just the
raw board, but that could equally well be provided as domain-specific
features in order to improve computational efficiency (and perhaps enforce
correctness).

These approaches may not be all that far apart, other than in the depth of the
net and the domain-specific knowledge used directly. Remi recently
mentioned that the patterns in more recent versions of CrazyStone
also number in the millions. I think the prediction rates for these two
approaches are also pretty close; compare the Deep Learning result to the
other recent study, by a German group, cited in the Deep Learning paper.

The bigger questions to me are related to engine architecture. Are you
going to use this as an input to a search? Or are you going to use it
directly to play? If the former, it had better be reasonably fast. The
latter approach can be far slower, but requires the predictions to be of
much higher quality. And the biggest question: how can you make these two
approaches interact efficiently?

René
