Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional Neural Networks

2015-01-09 Thread Álvaro Begué
Yes, it's 0.15 seconds for 128 positions.
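
For concreteness, that amortized cost is roughly

    0.15 s / 128 positions ≈ 1.2 ms per position

assuming all 128 forward passes are evaluated together as one batch; evaluating a
single position on its own would typically take longer than 1.2 ms, since batching
is what amortizes the per-call overhead.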

A minibatch is a small set of samples that is used to compute an
approximation to the gradient before you take a step of gradient descent. I
think it's not simply called a "batch" because "batch training" refers to
computing the full gradient with all the samples before you take a step of
gradient descent. "Minibatch" is standard terminology in the NN community.
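
As a generic sketch of the difference (placeholder names, not the training code
from the paper):

    import numpy as np

    def batch_gradient_descent(w, X, y, grad, lr=0.01, steps=100):
        # "Batch" training: every update uses the gradient over the FULL training set.
        for _ in range(steps):
            w -= lr * grad(w, X, y)
        return w

    def minibatch_sgd(w, X, y, grad, lr=0.01, epochs=10, batch_size=128):
        # Minibatch training: each update uses a small random subset of samples
        # (here 128, which is also where the "128 positions" in the timing comes from).
        n = X.shape[0]
        for _ in range(epochs):
            order = np.random.permutation(n)
            for start in range(0, n, batch_size):
                idx = order[start:start + batch_size]
                w -= lr * grad(w, X[idx], y[idx])
        return w

Here w is the parameter array, grad(w, X, y) returns the average loss gradient over
the given samples, and (X, y) is the training set.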

Álvaro.





On Fri, Jan 9, 2015 at 6:04 PM, Darren Cook wrote:

> Aja wrote:
> >> I hope you enjoy our work. Comments and questions are welcome.
>
> I've just been catching up on the last few weeks of the list, and the papers. Very
> interesting :-)
>
> I think Hiroshi's questions got missed?
>
> Hiroshi Yamashita asked on 2014-12-20:
> > I have three questions.
> >
> > I don't understand minibatch. Does the CNN need 0.15 sec for one position, or
> > 0.15 sec for 128 positions?
>
> I also wasn't sure what "minibatch" meant. Why not just say "batch"?
>
> > Is "KGS rank" set to 9 dan when it plays against Fuego?
>
> For me, the improvement from just using a subset of the training data
> was one of the most surprising results.
>
> Darren
>

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional Neural Networks

2015-01-09 Thread Kahn Jonas

> > Is "KGS rank" set to 9 dan when it plays against Fuego?
>
> For me, the improvement from just using a subset of the training data
> was one of the most surprising results.


As far as I can tell, they use ALL the training data; that's the point.
They distinguish the games by dan rank, and the CNN must then place less
confidence in a 1 dan game than in a 9 dan game when predicting a 9 dan
game, but the information is still used in some way. The correlation will
be nonzero, and it will depend on the situation, too. The CNN sees that.
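
One common way to give the network that information (this is just a sketch of the
general technique, not necessarily the exact encoding used in the paper) is to add
constant-valued input planes encoding the rank:

    import numpy as np

    NUM_RANK_PLANES = 9  # KGS 1 dan .. 9 dan

    def add_rank_planes(board_planes, rank):
        # board_planes: (C, 19, 19) array of board feature planes.
        # rank: integer 1..9, the KGS dan rank attached to the game.
        rank_planes = np.zeros((NUM_RANK_PLANES, 19, 19), dtype=board_planes.dtype)
        rank_planes[rank - 1] = 1.0  # one-hot plane marking this rank
        return np.concatenate([board_planes, rank_planes], axis=0)

Setting rank = 9 at play time then amounts to asking the net "what would a 9 dan
play here?", which is what the Fuego question above is about.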

Jonas

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2015-01-09 Thread Darren Cook
On 2014-12-19 15:25, Hiroshi Yamashita wrote:
> Ko fighting is weak. A ko threat is simply a good pattern move.

I suppose you could train on a subset of data: only positions where
there was a ko-illegal move on the board. Then you could learn ko
threats. And then use this alternative NN when meeting a ko-illegal
position in a game.
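
A filter along these lines would be enough to build that subset; ko_point is a
hypothetical attribute of whatever board representation is in use (the point
currently banned by the simple-ko rule, or None):

    def ko_positions(positions):
        # Keep only positions in which a ko capture is currently banned.
        for position in positions:
            if position.ko_point is not None:
                yield position

The same test (position.ko_point is not None) would also decide when to switch to
the alternative NN during a game.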

But, I imagine this is more fuss than it is worth; the NN will be
integrated into MCTS search, and I think the strong programs already
have ways to generate ko threat candidates.

Darren

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional Neural Networks

2015-01-09 Thread Darren Cook
Aja wrote:
>> I hope you enjoy our work. Comments and questions are welcome.

I've just been catching up on the last few weeks of the list, and the papers. Very
interesting :-)

I think Hiroshi's questions got missed?

Hiroshi Yamashita asked on 2014-12-20:
> I have three questions.
> 
> I don't understand minibatch. Does the CNN need 0.15 sec for one position, or
> 0.15 sec for 128 positions?

I also wasn't sure what "minibatch" meant. Why not just say "batch"?

> Is "KGS rank" set to 9 dan when it plays against Fuego?

For me, the improvement from just using a subset of the training data
was one of the most surprising results.

Darren


-- 
Darren Cook, Software Researcher/Developer
My new book: Data Push Apps with HTML5 SSE
Published by O'Reilly: (ask me for a discount code!)
  http://shop.oreilly.com/product/0636920030928.do
Also on Amazon and at all good booksellers!

Re: [Computer-go] Evaluation function through Deep Convolutional Neural Networks

2015-01-09 Thread Darren Cook
> The discussion on move evaluation via CNNs got me wondering: has anyone
> tried to make an evaluation function with CNNs?

My first thought was that a human can find good moves with a glance at a
board position, but even the best pros need both counting and search to
work out the score. So NNs are good for move candidate generation, and
MCTS is good for scoring?
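
In code, that split might look something like the sketch below: flat Monte Carlo
rather than a full tree search, just to show the division of labour.
cnn_move_probs and random_playout are placeholders for whatever net and playout
policy one has.

    def select_move(position, cnn_move_probs, random_playout,
                    playouts_per_move=100, top_k=5):
        # 1. Candidate generation: the k moves the CNN rates highest.
        probs = cnn_move_probs(position)          # dict: move -> probability
        candidates = sorted(probs, key=probs.get, reverse=True)[:top_k]

        # 2. Scoring: estimate each candidate's winrate with playouts.
        #    random_playout() returns 1 if the side that just moved wins, else 0.
        best_move, best_winrate = None, -1.0
        for move in candidates:
            wins = sum(random_playout(position.play(move))
                       for _ in range(playouts_per_move))
            winrate = wins / playouts_per_move
            if winrate > best_winrate:
                best_move, best_winrate = move, winrate
        return best_move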

Darren


-- 
Darren Cook, Software Researcher/Developer
My new book: Data Push Apps with HTML5 SSE
Published by O'Reilly: (ask me for a discount code!)
  http://shop.oreilly.com/product/0636920030928.do
Also on Amazon and at all good booksellers!

Re: [Computer-go] Testing on different board sizes

2015-01-09 Thread Andreas Persson
A follow-up question about the testing: how should I interpret the resulting HTML
file from the analyze command?

I alternate colors between the engines; GNU Go is Black in the first game. Both
GNU Go and my engine have a section in the HTML file with "Black wins x%". Is
that the percentage of wins each engine scored when playing as Black, or the
percentage of wins for GNU Go?

Regards,
Andreas

On 8 January 2015, Petr Baudis wrote:
> Hi!
> 
> On Thu, Jan 08, 2015 at 09:16:55AM +, Andreas Persson wrote:
> > I am running some test builds against GNU Go; to save time I have decided
> > to play them on a 13x13 board. Do you guys also test on smaller board sizes
> > to save time, or are these tests worthless for 19x19?
> > 
> The behavior and the optimal parameter settings (including which features are
> enabled or disabled) are very different on 9x9 vs. 19x19; at least that's my
> experience with Pachi.
> 
> In between, it's obviously a spectrum. I think 15x15 is a better
> compromise than 13x13 and I use that board size for benchmarks, but that
> opinion isn't terribly well grounded scientifically. (I think I did some
> comparisons long ago, but I have forgotten the details. Also, some
> professionals have advocated 15x15, and in my playing experience it is much
> closer to 19x19 strategically while the games are only a little slower
> than on 13x13.)
> 
> Anyhow, make sure you do not make your games too blitz even on
> a smaller board; very fast settings favor parameters that give good
> behavior on the initial choice of moves, while longer time settings favor
> parameters that give good asymptotic behavior, and again there's
> a difference (often I find that ad hoc heuristics I implement help
> the blitz case but do not help, or even hamper, the longer time settings).
> There is no good solution, but from time to time I do one-off tests with
> considerably longer time settings than usual.
> 
> -- 
> Petr Baudis
> If you do not work on an important problem, it's unlikely
> you'll do important work. -- R. Hamming
> http://www.cs.virginia.edu/~robins/YouAndYourResearch.html
>