It seems the best setup is being able to regenerate most of the data back losslessly (diverse 
data) at the smallest compressed memory size (OUR GOAL); other setups are not 
optimal. So if 90% of the text on Earth were all Ts, it would be easy to compress, 
but the system shouldn't get trapped on overly common data. Also, if you overfit too 
losslessly, you can see the difference: remove a node that memorized one sample and you 
lose 1 output, but remove a generally important node and you lose 5000 regenerated 
outputs. That tells you which nodes are safe to Dropout. This is like a text data 
cleansing stage: you remove words that are too rare, unwanted, or unrelated, like 
grommet, Mars, sewers.
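The node-importance idea above can be sketched as counting, for each unit, how many regenerated outputs would be lost if that unit were dropped. This is a toy illustration under my own assumptions (the node names and dependency sets are hypothetical, and real ablation would re-run the model rather than use precomputed sets):

```python
# Toy "network": each node is mapped to the set of generated outputs
# that depend on it. In a real model you would ablate a unit and
# count which regenerated outputs change; fixed dependency sets
# stand in for that here (assumption).
node_outputs = {
    "overfit_node": {"out_1"},                          # memorized one sample
    "general_node": {f"out_{i}" for i in range(5000)},  # reused everywhere
    "rare_word_node": {"out_2"},                        # e.g. "grommet"
}

def importance(node):
    """Number of outputs lost if this node is dropped."""
    return len(node_outputs[node])

# Drop the nodes whose removal costs the fewest outputs
# (a Dropout-style pruning of overfit / too-rare units).
to_drop = [n for n in node_outputs if importance(n) <= 1]
```

Here "general_node" survives (removing it costs 5000 outputs) while the overfit and rare-word nodes are cheap to prune, matching the 1-vs-5000 contrast in the text.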
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tc1f2c133ae3e4762-M6bd7cfc428f8f23ddf904220
Delivery options: https://agi.topicbox.com/groups/agi/subscription