Hi Hugh,

On Fri, Dec 26, 2014 at 9:49 AM, Hugh Perkins <hughperk...@gmail.com> wrote:

> Estimated total number of parameters:
> approx = 12 layers * 128 filters * 128 previous feature maps * 3 * 3
> filter size
> = 1.8 million
>
> But you say 2.3 million.  It's close, so it seems the feature maps are
> fully connected to the lower-level feature maps, but I'm not sure where
> the extra 500,000 parameters come from?
>

You may have forgotten to include the position-dependent biases. This is
how I computed the number of parameters:

1st layer:          5*5*36*128     =   115,200
11 middle layers:   3*3*128*128*11 = 1,622,016
final layer:        3*3*128*2      =     2,304
12 layers' biases:  128*19*19*12   =   554,496
output biases:      2*19*19        =       722
total:                               2,294,738
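
The same arithmetic in code form, as a small sketch (the layer shapes are
my reading of the architecture above, not our training code):

input_planes = 36   # input feature planes
filters      = 128  # feature maps per hidden layer
board        = 19   # board size
out_planes   = 2    # output planes

first_layer = 5 * 5 * input_planes * filters   # 115,200
middle      = 3 * 3 * filters * filters * 11   # 1,622,016 (11 middle layers)
final_layer = 3 * 3 * filters * out_planes     # 2,304
hidden_bias = filters * board * board * 12     # 554,496 (12 layers of biases)
output_bias = out_planes * board * board       # 722

print(first_layer + middle + final_layer + hidden_bias + output_bias)
# 2294738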


> 2. Symmetry
>
> Aja, you say in section 5.1 that adding symmetry does not change the
> accuracy, neither raising nor lowering it.  Since adding symmetry
> presumably reduces the number of weights, and therefore increases
> learning speed, why did you decide not to implement symmetry?


We were doing exploratory work that optimized playing performance, not
training time, so we don't know how symmetry affects training time. In
terms of performance it seems not to have an effect.
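
For illustration, here is a minimal sketch of one way to add symmetry at
evaluation time, by averaging the network's output over the 8 rotations
and reflections of the board. The policy function `net` is hypothetical
(mapping input planes of shape (C, 19, 19) to per-point probabilities of
shape (19, 19)); this shows the idea only, not our actual setup:

import numpy as np

def symmetric_eval(net, planes):
    """Average a policy net's output over the 8 board symmetries."""
    preds = []
    for k in range(4):
        # evaluate the rotated position, then rotate the answer back
        rotated = np.rot90(planes, k, axes=(-2, -1))
        preds.append(np.rot90(net(rotated), -k, axes=(-2, -1)))
        # same for the mirror image of each rotation: un-flip, then un-rotate
        mirrored = np.flip(rotated, axis=-1)
        out = np.flip(net(mirrored), axis=-1)
        preds.append(np.rot90(out, -k, axes=(-2, -1)))
    return np.mean(preds, axis=0)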

Aja
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
