Sifting through your blob of text above, here are your actual questions:

"learning through human agents, that is your own conscious brains. The 
compressor improvements aren’t communicated compressor to compressor"

"Your compressors are isolated hard-coded hyper-AI agents, not AGI."

It doesn't take much to feed its predictions (the rest of a sentence, or images) back into its storage tree / hierarchy web. Once we do that, the stored predictions will govern what it predicts in the future. I won't say how, because it's too powerful an idea; we are really close now. For example, I predict 'AI is smarter than me' (and so this makes me forget my cryonics research, and I am now interested in AGI), and so now I talk to myself about AGI issues, speak AGI into Google, and eat up data about ML. It's not that I search Google; it's that I can't get AGI off my lips, and it coughs out everywhere I go, including Google. My predictions return AGI data to me, whether I generate them to myself or into Google's search bar.

My AI and OpenAI's DALL-E are not hard-coded in the sense that they are not general purpose; they are, and soon they will be making up their own rules by, e.g., linking thoughts together with motor actions. It is not hard to see that DALL-E is very general purpose, and it can get so general purpose that it starts storing and using pieces of memories to do very rare prediction solving.
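To make the feedback idea concrete, here is a minimal, hypothetical sketch (my own toy construction, not anyone's actual system): a bigram-style predictor whose predictions are written back into its own memory, so that whatever it once predicted becomes more likely to be predicted again.

```python
from collections import Counter, defaultdict

class FeedbackPredictor:
    """Toy next-word predictor that stores its own predictions back
    into memory, so past outputs bias future outputs (a sketch of
    the 'feed predictions back into storage' loop described above)."""

    def __init__(self):
        # memory maps a context word -> counts of words seen after it
        self.memory = defaultdict(Counter)

    def observe(self, tokens):
        """Learn from real data: count each (prev, next) word pair."""
        for prev, nxt in zip(tokens, tokens[1:]):
            self.memory[prev][nxt] += 1

    def predict(self, context):
        """Predict the most frequent continuation, then store that
        prediction itself, reinforcing it for next time."""
        counts = self.memory[context]
        if not counts:
            return None
        prediction = counts.most_common(1)[0][0]
        self.memory[context][prediction] += 1  # feedback step
        return prediction

p = FeedbackPredictor()
p.observe(["ai", "is", "smart", "ai", "is", "general"])
# "is" was followed once by "smart" and once by "general"; the first
# prediction breaks the tie, then the feedback step entrenches it.
first = p.predict("is")
second = p.predict("is")
print(first, second)
```

The point of the sketch is the single feedback line: without it the predictor stays a fixed function of its training data; with it, each prediction reshapes the memory that future predictions are drawn from.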
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5d9be03603e8ba20-M5aa0bb5e79a9f4e9923db313
Delivery options: https://agi.topicbox.com/groups/agi/subscription
