Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-31 Thread Detlef Schmicker

Hi,

I am just trying to reproduce the data from page 7 with all features 
disabled. I do not reach their reported accuracy (I stay below 20%).


Now I wonder about a short statement in the paper that I did not really 
understand:
On page 4, top right, they state: "In our experience using the rectifier 
function was slightly more effective than using the tanh function."


Where do they put these functions in? I use caffe, and as far as I 
understand it, I would have to add extra layers to get a function like 
this. Does this mean that before every layer there should be a tanh or 
rectifier layer?


I would be glad to share my sources if somebody is trying the same,

Detlef

On 15.12.2014 at 00:53, Hiroshi Yamashita wrote:

Hi,

This paper looks very cool.

Teaching Deep Convolutional Neural Networks to Play Go
http://arxiv.org/pdf/1412.3409v1.pdf

Their move prediction got a 91% winrate against GNU Go and 14%
against Fuego on 19x19.

Regards,
Hiroshi Yamashita


Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-31 Thread Petr Baudis
  Hi!

On Wed, Dec 31, 2014 at 11:16:57AM +0100, Detlef Schmicker wrote:
> I am just trying to reproduce the data from page 7 with all features
> disabled. I do not reach their reported accuracy (I stay below 20%).
>
> Now I wonder about a short statement in the paper that I did not really
> understand:
> On page 4, top right, they state: "In our experience using the
> rectifier function was slightly more effective than using the tanh
> function."
>
> Where do they put these functions in? I use caffe, and as far as I
> understand it, I would have to add extra layers to get a function
> like this. Does this mean that before every layer there should be a tanh
> or rectifier layer?

  I think this is talking about the non-linear transfer function.
Basically, each neuron's output is y = f(w·x) for weight vector w, input
vector x, and transfer function f.  Traditionally, f is a sigmoid (the
logistic function 1/(1+e^-x)), but tanh is also popular, and with deep
learning, rectifier-like functions are very popular IIRC because
they allow much better propagation of error to the deep layers.
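
To make this concrete, here is a minimal numpy sketch of one such layer
(the sizes and names below are made up for the example, not taken from
the paper):

import numpy as np

# Three common transfer functions f:
def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))   # classic sigmoid, maps to (0, 1)

def tanh(z):
    return np.tanh(z)                 # sigmoid-shaped, maps to (-1, 1)

def rectifier(z):
    return np.maximum(0.0, z)         # ReLU: max(0, z)

# One layer: y = f(W x) for weight matrix W and input vector x.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))       # made-up sizes: 8 inputs -> 4 outputs
x = rng.standard_normal(8)

for f in (logistic, tanh, rectifier):
    print(f.__name__, f(W @ x))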

> I would be glad to share my sources if somebody is trying the same,

  I hope to be able to start dedicating time to this at the end of
January (when I'll be moving to Japan for three months! I'll be glad to
meet up with fellow Go developers some time, and see you at the UEC if
it's in 2015 too :-).

  I would very much appreciate an open source implementation of this
- or rather, I'd rather spend my time using one to do interesting things
than building one. I do plan to open source my own implementation if
I have to make one and can bring myself to build one from scratch...

-- 
Petr Baudis
If you do not work on an important problem, it's unlikely
you'll do important work.  -- R. Hamming
http://www.cs.virginia.edu/~robins/YouAndYourResearch.html

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-31 Thread Detlef Schmicker


On 31.12.2014 at 14:05, Petr Baudis wrote:

> Hi!
>
> On Wed, Dec 31, 2014 at 11:16:57AM +0100, Detlef Schmicker wrote:
>> I am just trying to reproduce the data from page 7 with all features
>> disabled. I do not reach their reported accuracy (I stay below 20%).
>>
>> Now I wonder about a short statement in the paper that I did not really
>> understand:
>> On page 4, top right, they state: "In our experience using the
>> rectifier function was slightly more effective than using the tanh
>> function."
>>
>> Where do they put these functions in? I use caffe, and as far as I
>> understand it, I would have to add extra layers to get a function
>> like this. Does this mean that before every layer there should be a tanh
>> or rectifier layer?
>
> I think this is talking about the non-linear transfer function.
> Basically, each neuron's output is y = f(w·x) for weight vector w, input
> vector x, and transfer function f.  Traditionally, f is a sigmoid (the
> logistic function 1/(1+e^-x)), but tanh is also popular, and with deep
> learning, rectifier-like functions are very popular IIRC because
> they allow much better propagation of error to the deep layers.
Thanks a lot. I was struggling with the "traditionally", and expected 
the transfer function to be built into the standard convolutional layers 
in caffe. That seems not to be the case, so I have now added layers for 
f(x), and I reach 50% accuracy on a small dataset (285000 positions). Of 
course this data set is too small (so the number is overestimated), but 
I only had 15% on it before introducing f(x) :)
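
(For anyone wondering why the extra layers make such a difference: without
a non-linearity between them, stacked linear layers collapse into a single
linear map, so depth buys nothing. A tiny numpy sketch, with made-up sizes,
to illustrate:

import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((16, 8))   # made-up weights: 8 inputs -> 16 hidden
W2 = rng.standard_normal((4, 16))   # 16 hidden -> 4 outputs
x = rng.standard_normal(8)

# Two stacked layers with no transfer function in between...
y_linear = W2 @ (W1 @ x)
# ...are exactly equivalent to one layer with weights W2 @ W1:
assert np.allclose(y_linear, (W2 @ W1) @ x)

# With a rectifier in between, the network is no longer linear:
y_relu = W2 @ np.maximum(0.0, W1 @ x)
assert not np.allclose(y_linear, y_relu)
)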

>> I would be glad to share my sources if somebody is trying the same,
>
> I hope to be able to start dedicating time to this at the end of
> January (when I'll be moving to Japan for three months! I'll be glad to
> meet up with fellow Go developers some time, and see you at the UEC if
> it's in 2015 too :-).
>
> I would very much appreciate an open source implementation of this
> - or rather, I'd rather spend my time using one to do interesting things
> than building one. I do plan to open source my own implementation if
> I have to make one and can bring myself to build one from scratch...

oakfoam is open source anyway. My caffe-based implementation is 
available in my branch. My branch is not as clean as Francois's, 
and we have not merged for quite a time :(


At the moment the CNN part is at a very early stage; you have to produce 
the database with separate scripts...

But I would be happy to assist!

Detlef

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-31 Thread Hugh Perkins
> I would very much appreciate an open source implementation of this
> - or rather, I'd rather spend my time using one to do interesting things
> than building one. I do plan to open source my own implementation if
> I have to make one and can bring myself to build one from scratch...

I started building a convolutional network library for OpenCL at
https://github.com/hughperkins/ClConvolve/
- tanh, relu, linear activations
- OpenCL
- fully connected and convolutional layers

OpenCL you might see as good or bad, depending on your point of view.  It's
certainly unusual: caffe uses CUDA I believe, as does Theano, and so
on.  OpenCL has the advantage of being an open standard, and you can run it
on many devices, e.g. the integrated GPUs of Intel Ivy Bridge CPUs.

I intend to implement 'pizza-slice' symmetry, or maybe
'kaleidoscope' symmetry is a better name.  Either way, the 4-way symmetry
for the weights w: vertically, horizontally, and across both diagonals.
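
As a rough numpy sketch of the idea (just an illustration, not how
ClConvolve implements it): those four reflections generate all eight
symmetries of the square, so averaging each kernel over that full group
gives weights invariant under every one of them.

import numpy as np

def symmetrize(kernel):
    # Average a square 2-D kernel over all 8 symmetries of the square
    # (4 rotations, each with and without a reflection), so the result
    # is unchanged by any rotation or reflection.
    rotations = [np.rot90(kernel, i) for i in range(4)]
    reflections = [np.fliplr(r) for r in rotations]
    return np.mean(rotations + reflections, axis=0)

k = np.arange(9.0).reshape(3, 3)     # toy 3x3 kernel
s = symmetrize(k)
assert np.allclose(s, np.rot90(s))   # rotation-invariant
assert np.allclose(s, np.fliplr(s))  # reflection-invariant
assert np.allclose(s, s.T)           # diagonal-invariant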

It's currently a work in progress.  It can get 83% accuracy on MNIST using
a single convolutional layer and no other layers at all.  The fully-connected
layer also seems to be working.  Forward prop and backward prop both run on
the GPU for convolutional layers.  Fully-connected layers are still 100% on
the CPU, but you would only have one such layer, right, so not a high
priority?  I'm currently building test cases to ensure that multiple, deep
topologies work correctly.

Hugh

[Computer-go] Reminder: Handicap 29 Prize

2014-12-31 Thread Ingo Althöfer
Hi all,

back in 1998, Martin Mueller had beaten the traditional
(non-Monte-Carlo) Many Faces of Go despite giving
28 handicap stones on the 19x19 board.

The first bot that achieves the same (before the end of
the year 2020) will get 1,000 Euro from my
pocket. For details see:

http://www.althofer.de/handicap-29-prize.html

The prize money will also be given for a handicap-29 win
against the 12-kyu level of the current Many Faces of Go (version 12).

Cheers, Ingo.

PS. I know that the task is hard. But in view of the current
CNN development it may have lost some of its difficulty.


Re: [Computer-go] Reminder: Handicap 29 Prize

2014-12-31 Thread Ingo Althöfer
Sorry for the typo.

> back in 1998, Martin Mueller had beaten the traditional
> (non-Monte-Carlo) Many Faces of Go despite giving
> 28 handicap stones on the 19x19 board.

Of course, Martin won at handicap 29.

Ingo.