Take a look below. From what I can see, LSTMs, GANs, Transformers, even my own 
AI, all benefit less and less from each additional batch of data they digest, 
until more data barely improves their predictions at all. Adding more data has 
a limit.

The curve looks like exponential saturation: fast at first, then flattening. 
It has the same shape every time, for every AI algorithm.* Is that really so, 
though? Is there any AI with a linear curve?*

*The curve below starts off learning fast, then slows. It doesn't matter much 
how fast it slows; it just slows, until at some point feeding it more data is 
essentially zero help. That's why I ask: is there a linear curve, one that at 
least stays linear for a much longer time?*

https://www.google.com/search?q=ai+accuracy+curve&tbm=isch&sxsrf=ALeKk024gYUWS-rEG_r8M61pIw7I7Eus4A:1602459054540&source=lnms&sa=X&ved=0ahUKEwiBzNmV2a3sAhWHVN8KHUn-BYwQ_AUICygB&biw=1280&bih=923&dpr=1#imgrc=Oc3V9suxud6fVM
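The diminishing-returns shape I'm describing can be sketched numerically. Here's a minimal toy in Python, assuming accuracy follows a power-law fit toward a ceiling (a common empirical shape for such curves); the ceiling and the two constants are made-up illustrative values, not measurements from any real model:

```python
# Hypothetical saturating learning curve: accuracy climbs toward a
# ceiling as the dataset grows, following an assumed power-law fit.
CEILING = 0.95    # best accuracy the model can reach (assumed)
A, B = 0.5, 0.35  # error scale and decay exponent (assumed)

def accuracy(n: int) -> float:
    """Accuracy after training on n examples: CEILING - A * n**-B."""
    return CEILING - A * n ** -B

def marginal_gain(n: int) -> float:
    """Accuracy improvement from doubling the dataset from n to 2n."""
    return accuracy(2 * n) - accuracy(n)

if __name__ == "__main__":
    # Each doubling of the data buys a smaller improvement than the last.
    for n in (1_000, 10_000, 100_000, 1_000_000):
        print(f"n={n:>9,}  acc={accuracy(n):.4f}  "
              f"gain from doubling={marginal_gain(n):.5f}")
```

Running it shows the point: the gain from each doubling keeps shrinking, so no matter how much data you add, accuracy creeps toward the ceiling but the per-example payoff heads to zero. A truly linear curve would need `marginal_gain` to stay constant instead.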

I had once thought that the more data you feed a true AGI, the more each piece 
can relate to every other piece, so the number of relationships would grow 
exponentially and its accuracy would shoot the other way: it would get 
exponentially more accurate the more data you feed it. But no? Why is this 
idea of mine false? If I discover why, I'll update you all.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfce80000509c1fb3-M6d1bd9e15bf514a1abee78f1