From https://twitter.com/Miles_Brundage:

https://arxiv.org/abs/1710.07535
https://arxiv.org/abs/1709.01041


On Mon, Oct 23, 2017 at 10:39 AM, Darren Cook <dar...@dcook.org> wrote:

> > The source of AlphaGo Zero is really of zero interest (pun intended).
>
> The source code is the first-hand account of how it works, whereas an
> academic paper is a second-hand account. So, definitely not of zero use.
>
> > So yes, the database of 29M self-play games would be immensely more
> > valuable. (Probably the last 5M or so would be fine, too.) I prefer the
> > games over the network: with the games it's easier to train a smaller
> > network that gives better results on PCs that don't have 4 TPUs in them.
>
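For concreteness, training a smaller network from the games could look
something like the sketch below. It is only a sketch: the file names, the
number of input planes, and the layer sizes are placeholders, and a
competitive policy network would be much deeper.

# Sketch: supervised training of a small policy network from game records,
# assuming the games have already been converted to numpy arrays.
# File names and array shapes are made up for illustration.
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

positions = np.load("positions.npy")   # (N, 19, 19, planes) board features
moves = np.load("moves.npy")           # (N, 362) one-hot move (361 points + pass)

net = Sequential([
    Conv2D(64, 3, padding="same", activation="relu",
           input_shape=positions.shape[1:]),
    Conv2D(64, 3, padding="same", activation="relu"),
    Flatten(),
    Dense(362, activation="softmax"),
])
net.compile(optimizer="adam", loss="categorical_crossentropy",
            metrics=["accuracy"])
net.fit(positions, moves, batch_size=256, epochs=2, validation_split=0.05)
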
> Does anyone know of research/code on reducing the size/complexity of deep
> learning networks? I think it should be possible to reduce either the
> number of layers or the size of each layer with only a small drop in
> accuracy, but it seems like the two fully-connected networks at the top
> would then need retraining.
>
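One concrete route to a smaller net is the prune-and-retrain loop from the
compression literature: zero out the smallest weights, then fine-tune so the
surviving weights compensate. Below is a rough sketch only; the model and
data file names are placeholders, and real pipelines keep the pruned weights
clamped at zero during retraining, which this naive version does not.

# Sketch: magnitude pruning of an existing Keras model, then a short
# fine-tuning pass. File names are hypothetical.
import numpy as np
from keras.models import load_model

model = load_model("policy_net.h5")      # hypothetical trained network
positions = np.load("positions.npy")     # hypothetical training data
moves = np.load("moves.npy")

for layer in model.layers:
    weights = layer.get_weights()
    if not weights:
        continue                         # layer has no trainable weights
    kernel = weights[0]
    cutoff = np.percentile(np.abs(kernel), 80)   # drop the smallest 80%
    weights[0] = kernel * (np.abs(kernel) >= cutoff)
    layer.set_weights(weights)

# Brief retraining so the remaining weights adapt to the pruning.
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(positions, moves, batch_size=256, epochs=1)
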
> However, this article shows compression results beyond what I thought
> would be possible, even on very deep image networks:
>
> https://www.oreilly.com/ideas/compressing-and-regularizing-deep-neural-networks
>
> BTW, I notice the author's PhD thesis has just been published. Might have
> to add it to my reading list: http://stanford.edu/~songhan/
>
> Darren
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
