Son of man, omg! Impressive!

"We show that language modeling improves continuously as we increase the size 
of the retrieval database, at least up to 2 trillion tokens – 175 full 
lifetimes of continuous reading."

If they had listened to our ideas, they would have learnt this a long time ago.

Note that storing it in the neural network is far faster and more efficient; 
you only look back at the dataset if you really want to be God about it, 
because obviously no one stores the whole memory after reading something 
just once.

BTW my algorithm already does this, and so have others, years ago. Oh, I see 
they do a few more things: they seem to point to a paragraph topic location, 
then search there for similar matches, then blend those predictions with the 
neural predictions.

"a nearest-neighbor search is performed which returns similar sequences found 
in the training database, and their continuation. These sequences help predict 
the continuation of the input text."

That's right: you get the match and you get the next word. It can't do 
anything without a matched memory.
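And the "get the match, get the next word" part in its barest form: a toy 
n-gram memory (my own illustration, not anything from the paper) that stores 
every context it has read together with the token that followed it. With no 
matched context it has nothing to predict.

```python
from collections import defaultdict

def build_memory(tokens, n=3):
    """Store every length-n context seen so far with the token that followed it."""
    memory = defaultdict(list)
    for i in range(len(tokens) - n):
        memory[tuple(tokens[i:i + n])].append(tokens[i + n])
    return memory

def predict_next(memory, context, n=3):
    """Match the last n tokens against memory; no matched memory, no prediction."""
    matches = memory.get(tuple(context[-n:]), [])
    return max(set(matches), key=matches.count) if matches else None

tokens = "the cat sat on the mat and the cat sat on the rug".split()
memory = build_memory(tokens)
print(predict_next(memory, "and the cat sat".split()))   # -> 'on'
```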