Why? They are just graphs. In fact they are Christmasy! Too simple for you? 
What do you want!? Something you can't understand!? The images do lack some of 
the things I talk about, but you can see how it all works.

One more thing:
I didn't exactly say it in my little book, but: biases in ANNs make certain 
nodes more easily activated, whereas in my research a node is only activated if 
its children are activated, and/or if it was recently activated, and/or if 
related nodes are activated. Reward also makes some nodes more likely to be 
said, and it lasts a lot longer than energization (and can be modified for 
high-level nodes). SO: other than that, there's no reason a node should get 
boosted automatically. Other people's code and my own do use a global weight 
per layer, so I guess this is wrong; the idea is that some layers (or better, 
nodes) have more statistics and should get more weight when mixing predictions 
from multiple nodes about which letter (or word) most likely comes next. 
Perhaps it is a combined, averaged weight over the same layer saying 'this 
layer seems to have enough statistics'. But if it learns online, then that is 
adaptive biases, not fixed biases.
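The two rules described above can be sketched in Python. Everything here (the `Node` class, the energization/reward fields, the 0.5 threshold, and the count-proportional mixing weights) is an illustrative assumption, not the actual code from the book:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical structure: a node with child parts and related nodes.
    children: list = field(default_factory=list)  # child Node references
    related: list = field(default_factory=list)   # associated Node references
    energization: float = 0.0  # short-lived activation trace
    reward: float = 0.0        # long-lived boost, decays much more slowly

    def is_activated(self, threshold: float = 0.5) -> bool:
        # A node fires only if its children fired, it fired recently,
        # or a related node fired -- never "automatically".
        children_on = bool(self.children) and all(
            c.is_activated(threshold) for c in self.children)
        recently_on = self.energization > threshold
        related_on = any(r.is_activated(threshold) for r in self.related)
        return children_on or recently_on or related_on

    def step(self, energy_decay: float = 0.5, reward_decay: float = 0.99) -> None:
        # Per tick: energization fades fast, reward lingers far longer.
        self.energization *= energy_decay
        self.reward *= reward_decay

def mix_predictions(dists: list, counts: list) -> dict:
    # Adaptive "bias": weight each node's next-letter distribution by how
    # many statistics (observations) back it up, so nodes with more data
    # get more say in the mixed prediction.
    total = sum(counts)
    mixed: dict = {}
    for dist, count in zip(dists, counts):
        w = count / total
        for letter, p in dist.items():
            mixed[letter] = mixed.get(letter, 0.0) + w * p
    return mixed
```

Because the mixing weights come from observation counts that grow as the system learns online, they behave as adaptive rather than fixed biases.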
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T43ab26814eaa1bdd-M04ca0561b42847e60c1cb68b