Re: [Computer-go] mit-develops-algorithm-to-accelerate-neural-networks-by-200x

2019-03-25 Thread Chaz G.
Concur w/ Brian. While the authors present genuine contributions, meta-learning doesn't apply well to zero-style architectures. I didn't get a lot from the article; the arXiv link for the work is https://arxiv.org/abs/1812.00332. Best, -Chaz On Sun, Mar 24, 2019 at 4:17 PM Brian Lee wrote:

Re: [Computer-go] Efficient Parameter Tuning Software

2019-01-14 Thread Chaz G.
Hi Simon, Thanks for sharing. In my opinion, apart from discretizing the search space, the N-Tuple system takes a very intuitive approach to hyper-parameter optimization. The GitHub repo README notes you're working on an extended version to handle continuous parameters; what's your general
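
For concreteness, here is a minimal Python sketch of the discretization idea: each continuous parameter is snapped onto a small grid so the tuner has a finite search space, and a simple epsilon-greedy bandit stands in for the N-Tuple bandit itself. All names, grid values, and the toy fitness function are illustrative assumptions, not anything from Simon's repo.

import itertools
import random

# Hypothetical objective: noisy playing strength of an engine
# under a given (learning rate, temperature) setting.
def noisy_fitness(params):
    lr, temp = params
    return -(lr - 0.02) ** 2 - (temp - 1.5) ** 2 + random.gauss(0, 0.05)

# Discretize each continuous parameter onto a small grid,
# since the N-Tuple system requires a finite search space.
learning_rates = [0.005, 0.01, 0.02, 0.04]
temperatures = [0.5, 1.0, 1.5, 2.0]
grid = list(itertools.product(learning_rates, temperatures))

# Keep a running mean per grid point; re-sample the best-looking
# point greedily, with occasional random exploration.
stats = {p: (0, 0.0) for p in grid}  # point -> (visits, mean fitness)
for _ in range(500):
    if random.random() < 0.1:
        p = random.choice(grid)
    else:
        # Unvisited points score infinity, so each gets tried once first.
        p = max(grid, key=lambda q: stats[q][1] if stats[q][0] else float("inf"))
    n, mean = stats[p]
    stats[p] = (n + 1, mean + (noisy_fitness(p) - mean) / (n + 1))

best = max(grid, key=lambda q: stats[q][1])
print("best setting:", best, "estimated fitness:", stats[best][1])

The finer the grid, the closer this gets to the continuous problem, at the cost of a combinatorial blow-up in points to sample, which is presumably what the continuous extension is meant to avoid.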

Re: [Computer-go] Significance of resignation in AGZ

2017-12-03 Thread Chaz G.
Hi Brian, Thanks for sharing your genuinely interesting result. One question, though: why would you train on a non-"zero" program? Do you think your program, as a result of your rules, would perform better than zero, or is imitating the best-known algorithm inconvenient for your purposes? Best,

Re: [Computer-go] CPU vs GPU

2016-03-02 Thread Chaz G.
Rémi, Nvidia launched the K20 GPU in late 2012. Since then, GPUs and their convolution algorithms have improved considerably, while CPU performance has been relatively stagnant. I would expect about a 10x improvement with 2016 hardware. When it comes to training, it's the difference between
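
For a rough sense of that gap, here is a minimal timing sketch, written with PyTorch purely as an assumption of convenience (not Rémi's setup); the layer shape is an illustrative Go-network-like block on a 19x19 board.

import time
import torch

def time_conv(device, iters=50):
    # One conv layer roughly shaped like a block of a Go network:
    # 256 input planes -> 256 filters, 3x3, on a 19x19 board.
    conv = torch.nn.Conv2d(256, 256, 3, padding=1).to(device)
    x = torch.randn(32, 256, 19, 19, device=device)
    with torch.no_grad():
        for _ in range(5):  # warm-up: lazy init, cudnn autotuning
            conv(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            conv(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

cpu_ms = time_conv("cpu") * 1e3
print(f"CPU: {cpu_ms:.1f} ms/batch")
if torch.cuda.is_available():
    gpu_ms = time_conv("cuda") * 1e3
    print(f"GPU: {gpu_ms:.1f} ms/batch ({cpu_ms / gpu_ms:.0f}x faster)")

The cuda synchronize calls matter: GPU kernels launch asynchronously, so timing without them measures launch overhead rather than the convolution itself.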