I was saying recently that the code may simply still be working on the last 
prompt, "looking like it is thinking" about some prompt when really it is just 
taking a long time on the same one, the way we circle the kitchen for 30 
minutes! Sometimes we aren't sure of the prediction and don't recognize enough 
of the prompt, so instead of making the actual prediction we predict in order 
to collect more data, and return to the prompt later. Or maybe we do predict 
quickly, circle the domain, and come back to it more confident, so we can 
predict with higher accuracy and finally be done with the prompt for good.

So let's see, the possibilities for AGI-like time-taking behavior are: using 
the motor system to look around memories and collect domain-specific data; 
simply taking a long time; circling the domain while always instantly 
predicting away like a machine gun; or using memories as code, i.e. a neuron 
that says "say cat 5 times, then say its first letter, then say that letter 
and 10 others in a row gdhekfnksf, then say this thing and the first word you 
began with, cat gdhekfnksf, then say it backwards," and see what it predicts 
now. So..... I think we do circle the domain, and I think the memories we 
learn are just things we predict after, but in a way that makes us run around 
our brain intelligently, making us repeat things, recall, bind, etc. For 
example, a memory saying to grab 5 memories, take their first letters, and 
collect data about the word they make is implemented by simple pattern-finder 
mechanisms like GPT's, but when you run that memory it makes the system do the 
things I said (collect 5 memories, then etc.), which is a useful course of 
action when you think about it.
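The "memories as code" idea above can be sketched as a toy interpreter. This is purely illustrative: the operation names (`say_n_times`, `first_letter`, etc.) and the `run_memory` function are my assumptions, not any real system's API; the point is just that a stored memory can act like a small program that makes the system repeat, recall, and bind.

```python
# A minimal sketch of "memories as code": a stored memory is a
# sequence of simple operations, and replaying it steers behavior.
# All names here are hypothetical, chosen only to mirror the example
# in the text (say cat 5 times, take its first letter, and so on).

def run_memory(memory, start_word):
    """Replay a stored memory as a tiny program over a working buffer."""
    buffer = []
    for op, arg in memory:
        if op == "say_n_times":      # e.g. say "cat" 5 times
            buffer.extend([start_word] * arg)
        elif op == "first_letter":   # take the first letter of the word
            buffer.append(start_word[0])
        elif op == "say_backwards":  # say the starting word backwards
            buffer.append(start_word[::-1])
        elif op == "recall_first":   # bind back to the word we began with
            buffer.append(start_word)
    return buffer

# A memory like the one described: say cat 5 times, then its first
# letter, then the word backwards, then the word we began with.
memory = [
    ("say_n_times", 5),
    ("first_letter", None),
    ("say_backwards", None),
    ("recall_first", None),
]
print(run_memory(memory, "cat"))
```

Running the memory produces the whole behavioral sequence from one trigger word, which is the "useful course of action" the pattern finder would learn to replay.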

In short, I think something like GPT has the potential to be taken further, 
using its Positive and Negative Reward memories (which are themselves handled 
by simple pattern mechanisms) to act human-like.

**I think the prediction is what matters for context, and the human-level 
behavior handling that prediction comes from the reward memory nodes that say 
what their prompt is, and then things like "no, stop, go collect more data". 
This would result in smarter predictions, because a lot of our looping on the 
kitchen floor comes from being unsure and trying to collect specific data on 
the domain. So to make it collect more domain-specific data, you can teach it 
to do this using rewarded nodes, for example.**
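The rewarded "no, stop, go collect more data" node can be sketched as a confidence-gated loop. Everything here is a stand-in assumption: `predict` is a toy predictor whose confidence grows with gathered context (a real GPT would supply this), and the threshold and growth numbers are arbitrary. The sketch only shows the control flow: predict, and if not sure enough, gather more domain-specific data and try again.

```python
# A hedged sketch of a rewarded "collect more data" node: if the
# predictor's confidence is below a threshold, gather another piece of
# domain-specific context before committing to an answer. The
# predictor and its confidence numbers are toy stand-ins, not GPT.

def predict(prompt, context):
    """Toy predictor: confidence grows as more context is gathered."""
    confidence = min(1.0, 0.2 + 0.2 * len(context))
    return f"answer({prompt})", confidence

def answer_with_data_collection(prompt, threshold=0.8, max_loops=10):
    context = []
    answer, confidence = predict(prompt, context)
    for _ in range(max_loops):
        if confidence >= threshold:  # sure enough: stop circling
            return answer, confidence
        # The reward node fires ("no, stop, go collect more data"):
        # gather more domain-specific context instead of committing.
        context.append(f"data_about({prompt})")
        answer, confidence = predict(prompt, context)
    return answer, confidence

ans, conf = answer_with_data_collection("why is the sky blue")
```

The loop is the "circling the kitchen floor" behavior: the system keeps orbiting the domain, collecting data, until the reward node lets the prediction through.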
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfaee106f616454ab-Mb0ee0ced83ad62cdbffd2474
Delivery options: https://agi.topicbox.com/groups/agi/subscription