[Computer-go] Fwd: Representing Komi for neural network
But then, the komi won't really participate in the hierarchical representation we are hoping the network will build, which we presumably hope is the key to obtaining human-comparable results? Well... it seems that Hinton, in his dropout paper http://arxiv.org/pdf/1207.0580.pdf , gets kind-of-ok results with 'permutation-invariant' networks, basically consisting of 2-3 fully-connected (fc) layers. So maybe a bunch of conv layers feeding into 2-3 fc layers, with the non-image inputs (such as komi) going into the fc layers too, is reasonable. Perhaps what we want is a compromise between convnets and fcs though? I.e., either take an fc layer and make it a bit more sparse, and/or take an fc layer and randomly tie sets of weights together?

___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
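A minimal numpy sketch of the architecture described above: conv-layer features are flattened and the komi scalar is appended before the 2-3 fc layers. All sizes (8 input planes, 32 feature maps, 256 hidden units) and the random untrained weights are hypothetical, just to show where the non-image input enters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical output of a (not shown, untrained) conv trunk over a
# 19x19 board: 32 feature maps, still spatial.
conv_out = rng.standard_normal((32, 19, 19))
komi = 7.5

# Flatten the conv features and append komi (and any other non-image
# inputs) as extra scalars feeding the fully-connected part.
fc_input = np.concatenate([conv_out.ravel(), [komi]])

# Two fc layers with random placeholder weights, mirroring the
# 2-3 fc-layer setup discussed in the message.
w1 = rng.standard_normal((256, fc_input.size)) * 0.01
h = np.maximum(0.0, w1 @ fc_input)            # ReLU hidden layer
w2 = rng.standard_normal((362, 256)) * 0.01   # 361 board points + pass
logits = w2 @ h
```

The point is only that komi bypasses the convolutional stack entirely and joins at the fc stage, which is why it cannot participate in the spatial hierarchy.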
[Computer-go] Fwd: Representing Komi for neural network
> Perhaps what we want is a compromise between convnets and fcs though? I.e., either take an fc layer and make it a bit more sparse, and/or take an fc layer and randomly tie sets of weights together?

Maybe something like: each filter consists of, e.g., 16 weights, which are assigned randomly over all input-output pairs, such that each pair is assigned to exactly one of these shared weights, and then somehow:
- either just fix the sharing assignment, a little like how echo state networks fix many of their weights, to keep the number of learnable parameters down,
- or have some way of optimizing the filters to learn the most useful sharing assignments, e.g.:
  - randomly modify them, genetic-type algorithm, or
  - some kind of Dirichlet-process-type sampling? :-P
  - something else?
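The fixed-assignment variant from the first bullet can be sketched in a few lines of numpy. The layer sizes and the 16-weight "filter" are the hypothetical numbers from the message; the assignment is drawn once and frozen (echo-state style), so only the 16 shared weights would be learnable.

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out, n_shared = 64, 32, 16  # hypothetical layer sizes

# Fix a random sharing assignment: each (output, input) pair maps to
# exactly one of n_shared learnable weights. The assignment is frozen;
# only `shared` would be updated during training.
assignment = rng.integers(0, n_shared, size=(n_out, n_in))
shared = rng.standard_normal(n_shared)

# Expand the 16 shared weights into a full dense matrix by lookup,
# then run an ordinary fc forward pass.
W = shared[assignment]   # shape (n_out, n_in), at most 16 distinct values
x = rng.standard_normal(n_in)
y = W @ x
```

For training, the gradient with respect to shared weight k would just be the sum of the ordinary dense-layer gradients over all pairs assigned to k, so backprop stays cheap.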