It would be very interesting to see what these Go-playing neural networks dream about [1]. Admittedly, this would not explain any specific move the AI makes - but it might reveal some interesting patterns encoded in the NN and might even give some insight into "how the NN thinks".

Put differently: select a single neuron and find a board pattern such that the activation of this neuron is maximal. With some luck you might be able to assign meaning to that individual neuron, or to whole layers of the network (much as the first layers of image-recognition networks essentially detect edges).
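For concreteness, here is a minimal sketch of that idea as gradient ascent on the input ("activation maximization"), written against PyTorch. The network `net`, the chosen `layer`, and the three input planes are placeholders for whatever the actual Go model provides - this is an illustration of the technique, not a recipe for any particular engine.

```python
import torch

def maximize_neuron(net, layer, channel, steps=200, lr=0.05):
    """Gradient ascent on a synthetic input to maximize one channel's activation."""
    # Start from a random "soft" board encoding: batch x planes x 19 x 19.
    # Three input planes are an assumption; real Go nets use many more feature planes.
    board = torch.randn(1, 3, 19, 19, requires_grad=True)

    captured = {}
    def hook(_module, _inputs, output):
        captured["act"] = output          # grab the layer's feature maps on each forward pass
    handle = layer.register_forward_hook(hook)

    opt = torch.optim.Adam([board], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        net(board)                                   # forward pass fills captured["act"]
        loss = -captured["act"][0, channel].mean()   # negative mean activation of the chosen channel
        loss.backward()                              # gradients flow back to the input board
        opt.step()

    handle.remove()
    return board.detach()                            # the "dreamed" board pattern
```

The resulting input can then be inspected (e.g. thresholded back into stone placements) to guess what board feature that neuron responds to.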


~ Ben

[1] http://googleresearch.blogspot.de/2015/06/inceptionism-going-deeper-into-neural.html

On 30.03.2016 22:23, Jim O'Flaherty wrote:
I agree, "cannot" is too strong. But, values close enough to
"extremely difficult as to be unlikely" is why I used it.

On Mar 30, 2016 11:12 AM, "Robert Jasiek" <jas...@snafu.de> wrote:

On 30.03.2016 16:58, Jim O'Flaherty wrote:

My own study says that we cannot, top-down, include "English
explanations" of how ANNs (Artificial Neural Networks, of which
DCNNs are just one type) arrive at conclusions.

"cannot" is a strong word. I would use it only if it were proven
mathematically.

--
robert jasiek
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go

