Again:
I have realized a very large next step for Blender/PPLM. I want to keep this
short here but still fully detailed. So you know how GPT-2 matches the
context prompt against many past experiences/memories, right? It generalizes/
translates the sentence, and may decide bank = river, not TD Bank.
Well, this is one of the things that helps it a lot. Now, you know how humans
are born with low-level rewards for food and mates, right? Through semantic
relation, those nodes leak/update reward to similar nodes like farming/cash/
homes/cars/science. Then the agent starts talking/driving all day about money,
not just food. It specializes/evolves its goal/domain. Why? Because it is
collecting/generating new data from specific sources/questions/context prompts
so that it can answer the original root question, of course.

It takes the installed question that wants an outcome, e.g. "I will stop
ageing by _", and does what I said above: it "recognizes the context prompt to
many past experiences/memories", except it permanently translates into a
narrower domain to create a "checkpoint". So when recognizing a Hard Problem
context prompt/question we taught it/installed, like "I will stop ageing
by _", it jumps into a new translation/view and creates a new question/goal:
"I will create AGI by _".
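The reward-leaking idea above can be sketched in a few lines. This is a hedged toy model, assuming a fixed table of similarities to "food"; nothing here is learned, the numbers just show how an innate reward could spill over and pick out a new favorite topic.

```python
# Hedged sketch of "reward leaking": the innate reward on "food" spills
# over to semantically similar nodes, so the agent's dominant topic
# drifts. Similarities are invented numbers, not learned embeddings.
innate_reward = {"food": 1.0, "mates": 1.0}

# hypothetical similarity of each node to "food"
similarity_to_food = {
    "farming": 0.8, "cash": 0.7, "homes": 0.5, "cars": 0.4, "science": 0.3,
}

leak_rate = 0.9
reward = dict(innate_reward)
for node, sim in similarity_to_food.items():
    # each related node inherits a fraction of the food reward
    reward[node] = leak_rate * sim * innate_reward["food"]

# after leaking, the agent ranks topics by reward and specializes there
favorite = max(similarity_to_food, key=lambda n: reward[n])
print(favorite)  # "farming" receives the most leaked reward here
```

Iterating the same leak from "farming" outward would give the multi-hop drift described above (food -> farming -> cash -> ...), each hop narrowing the domain.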
It's semantics: it's gathering related predictions from similar memories, the
same thing, except it is picking specific semantic paths and updating them,
just like RL. RL for text (prediction is the objective).
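A minimal sketch of "RL for text (prediction is the objective)", under the assumption that each candidate sub-goal translation is a weighted semantic path: a path's weight is bumped whenever its prediction keeps paying off, so the winning translation becomes the new standing goal/checkpoint. The paths and the update rule are illustrative, not taken from any real model.

```python
# Illustrative RL-style update over semantic paths: reward = whether the
# chosen continuation proved correct/useful, which scales its weight.
paths = {  # context -> candidate continuations with weights
    "I will stop ageing by _": {"better medicine": 1.0, "creating AGI": 1.0},
}

def update(context, chosen, was_correct, lr=0.5):
    # multiplicative bump for a correct prediction, decay otherwise
    w = paths[context]
    w[chosen] += lr * (1.0 if was_correct else -0.5) * w[chosen]

# suppose "creating AGI" keeps paying off as the sub-goal translation
for _ in range(3):
    update("I will stop ageing by _", "creating AGI", True)

weights = paths["I will stop ageing by _"]
best = max(weights, key=weights.get)
print(best)  # the reinforced path becomes the new goal/checkpoint
```

The point of the sketch is only the loop structure: prediction success acts as the reward signal, and repeated success makes one semantic path the permanent "checkpoint" translation.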
------------------------------------------
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T3cd584667cb2384b-M5683380a94edd0fe189229a6
Delivery options: https://agi.topicbox.com/groups/agi/subscription