Thanks for sharing the games, Rémi!
On Thu, May 7, 2020 at 6:27 AM Rémi Coulom wrote:
> In this game, Crazy Stone won using a typical Monte Carlo trick:
> http://www.yss-aya.com/cgos/viewer.cgi?9x9/SGF/2020/05/07/997390.sgf
> On move 27, it sacrificed a stone. According to Crazy Stone, the game
I wonder if this behavior could be avoided by giving a small incentive to
win by the most points (or the most material in chess), similar to the
technique David Wu mentioned for KataGo a few days ago. The problem right
now is that the AI has literally no reason to think that winning with more
points is any better than winning by the smallest possible margin.
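
To make that concrete, here is a rough sketch of the kind of utility I have
in mind (plain Python; the function name, the tanh shaping, and the constants
are all mine for illustration, not KataGo's actual code). The win/loss term
still dominates, and the margin bonus is small and bounded:

    import math

    def game_utility(won, score_margin, bonus_weight=0.1, scale=20.0):
        """Utility of a finished game, with a small incentive to win big.

        won          -- True if we won the game
        score_margin -- final score difference in points (positive = ours)
        bonus_weight -- weight of the margin bonus relative to the win
        scale        -- number of points at which the bonus saturates

        The +/-1 win/loss term dominates; tanh keeps the margin bonus
        bounded, so the engine never trades a safe win for a bigger one.
        """
        win_term = 1.0 if won else -1.0
        return win_term + bonus_weight * math.tanh(score_margin / scale)

With these numbers a 0.5-point win is worth about 1.0025 and a 30-point win
about 1.09, so bigger wins are preferred, but only slightly.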
On Thu, Oct 26, 2017 at 2:02 PM, Gian-Carlo Pascutto wrote:
> On 26-10-17 15:55, Roel van Engelen wrote:
> > @Gian-Carlo Pascutto
> >
> > Since training uses a ridiculous amount of computing power, I wonder
> > if it would be useful to make certain changes for future research,
> >
My guess is that they want to distribute the playing of millions of self-play
games; the learning itself would then be comparatively fast. Is that right?
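
Something like the sketch below is what I imagine (the endpoint URL and the
engine.play_one_game interface are made up for the example): the volunteers'
machines do the expensive search and inference, and only the finished game
records cross the network to the central learner.

    import json
    import urllib.request

    SERVER = "http://example.invalid/selfplay"  # placeholder endpoint

    def selfplay_worker(engine, num_games):
        """Play games locally; upload only the records for central training.

        The expensive part (tree search plus network inference for every
        move) runs on the volunteer's machine. The server just pools the
        game records and performs the gradient updates itself.
        """
        for _ in range(num_games):
            record = engine.play_one_game()  # e.g. moves + visit counts
            payload = json.dumps(record).encode("utf-8")
            req = urllib.request.Request(
                SERVER, data=payload,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)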
On Wed, Oct 25, 2017 at 11:57 AM, Xavier Combelle wrote:
> Is there some way to distribute learning of a neural network
Also (if I'm understanding the paper correctly), 20 blocks ~= 40 layers,
because each "block" contains two convolution layers (a sketch follows the
quote):
> Each residual block applies the following modules sequentially to its input:
> (1) A convolution of 256 filters of kernel size 3×3 with stride 1
> (2) Batch normalization
> (3) A rectifier non-linearity
> (4) A convolution of 256 filters of kernel size 3×3 with stride 1
> (5) Batch normalization
> (6) A skip connection that adds the input to the block
> (7) A rectifier non-linearity
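
In PyTorch terms, one such block would look roughly like the following (my
sketch of the paper's description, not DeepMind's code). The two Conv2d
layers are why each block counts as two layers:

    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualBlock(nn.Module):
        """One residual block as described above: two 3x3 convolutions,
        each followed by batch normalization, with a skip connection
        that adds the block's input back before the final rectifier."""

        def __init__(self, channels=256):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, stride=1,
                                   padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, stride=1,
                                   padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x):
            out = F.relu(self.bn1(self.conv1(x)))  # steps (1)-(3)
            out = self.bn2(self.conv2(out))        # steps (4)-(5)
            return F.relu(out + x)                 # steps (6)-(7)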
The same schedule is on a new Google site (it appears to use China's time
zone). It also says that there will be a livestream:
http://events.google.com/alphago2017/index.html
On May 19, 2017 07:26, "Hiroshi Yamashita" wrote:
> Is it Japanese time zone?
> I think it is Japanese time.