I find that I get a nice bounce (about 2%) in accuracy by reducing the
learning rate by a factor of ten once accuracy on the test set stops
improving. I also found it pretty easy to reach 45% with small networks, as
long as the bottom layer included some 5x5 filters.
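
In case it's useful, here is roughly what that schedule looks like as plain
Python; the patience value (how many evaluations to wait before cutting the
rate) is illustrative, not something I've tuned:

    # Drop the learning rate 10x whenever test-set accuracy stops
    # improving. The patience value is illustrative.
    def make_lr_schedule(initial_lr, factor=0.1, patience=2):
        state = {"lr": initial_lr, "best_acc": 0.0, "stale": 0}

        def update(test_acc):
            if test_acc > state["best_acc"]:
                state["best_acc"] = test_acc
                state["stale"] = 0
            else:
                state["stale"] += 1
                if state["stale"] >= patience:
                    state["lr"] *= factor
                    state["stale"] = 0
            return state["lr"]

        return update

    # After each evaluation pass: lr = update(current_test_accuracy)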


David


From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Brian Lee
Sent: Tuesday, August 23, 2016 7:00 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Converging to 57%


I've been working on my own AlphaGo replication (code on github: 
https://github.com/brilee/MuGo), and I've found it reasonably easy to hit a 
45% prediction rate with basic features (stone locations, liberty counts, and 
turns since the last move) and a relatively small network (6 intermediate 
layers with 32 filters each), using Adam with a 10e-4 learning rate. This 
took ~2 hours on a GTX 960.
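
For the curious, the network is roughly the following shape. This is a
Keras-style sketch, not the actual MuGo code (that's in the repo); the number
of input feature planes and the 3x3 kernel size are assumptions for
illustration, since I didn't spell them out above:

    from tensorflow import keras
    from tensorflow.keras import layers

    # Six 32-filter conv layers on 19x19 input planes, then a 1x1
    # conv + softmax over the 361 board points. The 12 input planes
    # and 3x3 kernels are assumptions, not MuGo's exact values.
    model = keras.Sequential(
        [layers.Conv2D(32, 3, padding="same", activation="relu",
                       input_shape=(19, 19, 12))]
        + [layers.Conv2D(32, 3, padding="same", activation="relu")
           for _ in range(5)]
        + [layers.Conv2D(1, 1, padding="same"),
           layers.Flatten(),
           layers.Softmax()]
    )

    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=10e-4),  # as above
        loss="categorical_crossentropy",   # one-hot next-move targets
        metrics=["accuracy"],              # move-prediction rate
    )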


As others have mentioned, accuracy shoots up sharply at the start, followed by 
extremely slow but steady improvement over time. So I'll experiment with 
fleshing out more features, increasing the size of the network, and training 
for longer.


Brian
