I.D. This is probably not what he means, but it comes from what I know.

The dot product is x*a + y*b + z*c, and so on for each component.

x, y, z are the weights on the synapses/connections (and these are just 1 or 0). You 
can imagine them being 1 when the pixel exists in the picture stored in the 
neural network, and 0 when it's not.

And a, b, c would be the image coming in. (This is where it might differ 
from what Sean is doing.) It's a match when you get a larger sum. If you had 
a separate synapse for white and black, it would just be a pick-max of 
which neuron was the nearest match.
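A minimal sketch of my reading of this (not necessarily what Sean is doing): each neuron holds a binary weight vector, the incoming image is scored with a dot product, and the match is the neuron with the largest sum. The 4-pixel patterns here are hypothetical.

```python
def dot(weights, image):
    # weights: list of 0/1 synapse values; image: incoming pixel values
    return sum(w * p for w, p in zip(weights, image))

def best_match(neurons, image):
    # pick-max: the nearest neuron is the one with the largest dot product
    return max(range(len(neurons)), key=lambda i: dot(neurons[i], image))

# hypothetical 4-pixel stored patterns, one per neuron
neurons = [[1, 0, 1, 0],
           [0, 1, 0, 1]]
image = [1, 0, 1, 0]
print(best_match(neurons, image))  # -> 0 (first neuron matches exactly)
```

With a separate synapse for white and black you'd just double the vector length, one half scoring the on-pixels and the other the off-pixels, and the pick-max stays the same.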

The problem is, you'd have to compute a dot product for every single neuron in 
the net, and you get a square cost. I don't know what Sean is doing to get rid of the 
square cost, but that's what he says he's got.
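To make the square cost concrete, a small sketch (my framing, not from Sean): matching each of N incoming images against each of N neurons takes N*N dot products, each over D pixels, so the work grows quadratically in the size of the net.

```python
def matching_cost(num_neurons, num_inputs, pixels_per_image):
    # one dot product (D multiply-adds) per (input, neuron) pair
    dot_products = num_inputs * num_neurons
    multiply_adds = dot_products * pixels_per_image
    return dot_products, multiply_adds

# doubling the net size quadruples the number of dot products
print(matching_cost(100, 100, 16))   # -> (10000, 160000)
print(matching_cost(200, 200, 16))   # -> (40000, 640000)
```

Whatever trick removes the square cost would have to avoid scoring every pair, e.g. by some indexing or hashing of the patterns, but the post doesn't say how.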


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T894f73971549b2ee-M810891dc9284ea741a6a078b
Delivery options: https://agi.topicbox.com/groups/agi/subscription
