Hi Aja

We've just submitted our paper to ICLR. We made the draft available at
http://www.cs.toronto.edu/~cmaddis/pubs/deepgo.pdf

I hope you enjoy our work. Comments and questions are welcome.

I did not look at the Go content, on which I'm no expert.
But for the network training, you might be interested in these articles:
"Riemannian metrics for neural networks I and II" by Yann Ollivier:
http://www.yann-ollivier.org/rech/publs/gradnn.pdf
http://www.yann-ollivier.org/rech/publs/pcnn.pdf

He defines invariant metrics on the parameters that are much cheaper to
compute than the natural gradient, and usually obtains (very very) much
faster convergence, in a much more robust way, since the method does not
depend on the parametrisation or on the activation function.
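To make the cost difference concrete: the full natural gradient preconditions the loss gradient by the inverse Fisher information matrix, which is O(d^3) to invert, while a diagonal approximation of the Fisher matrix is O(d) per step. Here is a minimal, hedged sketch of a diagonal-Fisher update for logistic regression; this is only an illustrative approximation of the idea, not Ollivier's exact quasi-diagonal construction, and all names here (`diag_natural_gradient_step`, the toy data) are made up for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def diag_natural_gradient_step(w, X, y, lr=0.1, eps=1e-8):
    """One update of w using the gradient rescaled by diag(Fisher)^-1.

    Illustrative sketch only: the full natural gradient would use the
    inverse of the complete Fisher matrix X^T diag(p(1-p)) X / n; keeping
    only its diagonal makes each step linear in the parameter count.
    """
    p = sigmoid(X @ w)                      # predicted probabilities
    grad = X.T @ (p - y) / len(y)           # log-loss gradient
    # Diagonal of the Fisher information: sum_i p_i(1-p_i) x_{ij}^2 / n
    fisher_diag = (X ** 2).T @ (p * (1 - p)) / len(y)
    return w - lr * grad / (fisher_diag + eps)

# Toy data: Bernoulli labels from a known weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (sigmoid(X @ true_w) > rng.uniform(size=200)).astype(float)

w = np.zeros(3)
for _ in range(300):
    w = diag_natural_gradient_step(w, X, y)
```

The division by the per-parameter Fisher term is what gives the (approximate) invariance to rescaling individual parameters that the full natural gradient provides exactly.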

Jonas
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
