https://community.singularitynet.io/t/pre-release-building-on-gpt-2s-successors-blender-and-pplm/2958/3

Notice by the end I solidify the concept that 1) more data improves prediction, 
2) new data (exploring) does even more, and 3) favorite data (exploiting) does 
even more! And we evolve/update our filters, unlike Blender/PPLM, which don't 
evolve/update theirs. The same concept is used in RL for learning to walk, but 
it's more powerful if done for text/vision!
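A minimal sketch of the explore/exploit idea applied to picking training data — this is just an epsilon-greedy toy (all names and the scoring scheme are my own illustration, not anything from Blender or PPLM): with some small probability grab new data (explore), otherwise train on the "favorite" data that helped prediction most so far (exploit).

```python
import random

def choose_batch(new_pools, favorites, epsilon=0.3, rng=None):
    """Epsilon-greedy data selection (toy illustration).

    new_pools: list of unexplored data sources.
    favorites: dict mapping a known data source -> past prediction gain.
    With probability epsilon, explore a new pool; otherwise exploit the
    favorite (highest-gain) known data.
    """
    rng = rng or random.Random(0)  # fixed seed so the sketch is reproducible
    if rng.random() < epsilon:
        return rng.choice(new_pools)            # explore: new data
    return max(favorites, key=favorites.get)    # exploit: favorite data
```

Here `favorites` standing for "data that improved prediction before" is the hedged analogue of point 3 above; a real system would update those scores as its filters evolve.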

To make Blender/PPLM more AGI-like, you force it to talk about food/breeding 
(survival) most of the time; the forced topic then leaks in the embedding 
space to related nodes. It's just generalization to past memories to help 
prediction, like what's done in GPT-2, BUT it must save/update checkpoints! 
These desires/forcings, like in Blender/PPLM, drive (as they call it) the 
model's prediction/attention.
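The "forcing leaks to related nodes" idea can be sketched as biasing next-token logits by similarity to a topic embedding — note this is a much simpler stand-in than PPLM's actual gradient-based attribute steering, and every array here is made up for illustration:

```python
import numpy as np

def steer_logits(logits, token_embs, topic_emb, strength=5.0):
    """Toy topic steering (NOT real PPLM): boost tokens whose embeddings
    are similar to a 'topic' embedding (e.g. food), so nearby/related
    tokens in the embedding space get higher probability too.

    logits:     (V,) raw next-token scores.
    token_embs: (V, D) one embedding per vocabulary token.
    topic_emb:  (D,) embedding of the forced topic.
    """
    sims = token_embs @ topic_emb / (
        np.linalg.norm(token_embs, axis=1) * np.linalg.norm(topic_emb) + 1e-9
    )
    return logits + strength * sims  # related tokens rise together

# Illustrative use: token 0 matches the topic, so steering flips the argmax.
logits = np.array([0.0, 1.0, 1.0])
steered = steer_logits(logits, np.eye(3), np.array([1.0, 0.0, 0.0]))
```

Because the boost is proportional to embedding similarity, tokens merely *near* the forced topic also get lifted — that's the "leaking to related nodes" in miniature.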
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T3cd584667cb2384b-M81a617d555dbcb43a4ac5d0e