Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-16 Thread Brian Sheppard
My impression is that each feature gets a single weight in Crazy Stone. The team-of-features aspect arises because a single point can match several patterns, so you need a model to assign credit among them when tuning. The paper that I remember used fixed receptive fields to define the patterns. (E.g.,
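A minimal sketch of the team-of-features idea described above, in the style of Coulom's Minorization-Maximization model: each feature carries one strength ("gamma"), a move's strength is the product of the gammas of the features it matches, and move probabilities come from normalizing over all candidates. All names and values here are illustrative assumptions, not Crazy Stone's actual features or weights.

```python
from math import prod

# One learned strength ("gamma") per feature -- a single weight each,
# as described above. Values are made up for illustration.
gammas = {
    "pattern_3x3_hane": 2.9,
    "capture": 30.0,
    "near_last_move": 4.5,
}

def team_strength(features):
    """A move's strength is the product of the gammas of every
    feature it matches -- the 'team' of features for that point."""
    return prod(gammas[f] for f in features)

def move_probability(candidate_features, all_moves_features):
    """Generalized Bradley-Terry: the candidate's team strength
    normalized over the team strengths of all legal moves. Tuning
    the gammas against this likelihood is what forces the model to
    assign credit among co-occurring features."""
    total = sum(team_strength(fs) for fs in all_moves_features)
    return team_strength(candidate_features) / total
```

For example, a point matching both the hypothetical "capture" and "near_last_move" features gets strength 30.0 * 4.5, so co-occurring features share credit multiplicatively rather than each being tuned in isolation.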

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-16 Thread Hiroshi Yamashita
Hi Aja, That being said, Hiroshi, are you sure there was no problem in your experiment? A 6% winning rate against GnuGo on 19x19 seems too low for a predictor of 38.8% accuracy. And yes, in the paper we will show a game. I also tried without resign, but the result is similar. winrate games

Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play Go

2014-12-16 Thread Brian Sheppard
I am very much looking forward to your paper. I do see the CNN research as a new idea with great potential. Linear models like what MM is using are ... far less powerful than CNN My mathematical objection is that this cannot be. The no free lunch theorem applies, and besides, both

[Computer-go] Archive?

2014-12-16 Thread Ingo Althöfer
Hello, it is fantastic that mails from the list are distributed again - thanks to Petr and to anybody else who helped with this. One question: is it somehow ensured that the mails will be properly archived? At least they are not shown in the old archive list:

Re: [Computer-go] Archive?

2014-12-16 Thread Janzert
On 12/16/2014 1:03 PM, Ingo Althöfer wrote: Hello, it is fantastic that mails from the list are distributed again - thanks to Petr and to anybody else who helped with this. One question: is it somehow ensured that the mails will be properly archived? At least they are not shown in the old