Oh, I get it now. Say we feed an AI lots of data and it learns the semantics of phrases. The common 100-word phrases are solved quite fast, but accuracy soon flattens out and seemingly halts because of all the rare strings. Most of our data is made of sensible 100-word strings, but there ARE rare 100-word strings that look like nonsense, and we don't know what to do with them. You might never see such an odd 100-word phrase in human writing, but in data like 3D particle features you will run into such rare cases. There is a way to learn patterns in these rare, odd prompts, because they do have patterns; they aren't completely random. But at some point the truly random prompts are too numerous and simply can't be solved, and that is a wall our AI algorithms still haven't pushed past.

So a finite system, even built at the highest technology of immortality, can't live forever on its own: it only handles the common cases, there are many rare cases, and that is why accuracy follows a diminishing-returns curve.
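The flattening curve can be sketched in a few lines. This is my own toy model, not anything from the post: I assume "phrases" follow a Zipf-like frequency distribution, and I measure coverage as "has this test phrase appeared in training?" The head is covered almost immediately, then coverage crawls because the long tail keeps producing unseen items.

```python
import random

random.seed(0)

VOCAB = 100_000                                      # distinct phrases (assumed)
weights = [1 / (rank + 1) for rank in range(VOCAB)]  # Zipfian frequencies

def sample(n):
    """Draw n phrases according to the Zipf-like frequencies."""
    return random.choices(range(VOCAB), weights=weights, k=n)

test = sample(10_000)
seen = set()
accs = []
for extra in [1_000, 10_000, 100_000, 1_000_000]:
    seen.update(sample(extra))                       # grow the training set
    accs.append(sum(t in seen for t in test) / len(test))
    print(f"train +{extra:>9,}  coverage = {accs[-1]:.3f}")
```

Each tenfold increase in training data buys a smaller and smaller gain, and coverage never reaches 1.0: the de-exponential curve the post describes.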
You can't avoid all hazards either: to detect a gamma ray burst you must first be moved by it, so some damage always occurs. As for adding more agents, i.e. scale, that adds more data, but our finite system here is already at the highest technology state, and even if we added more memory to hold more agents' data it wouldn't help us. Adding more agents serves a different purpose: it makes 2 of these systems, then 4, 8, 16. As long as the time to clone is less than a system's life expectancy, the lineage can be immortal, in theory.
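That last condition can be checked with a small simulation. The model and numbers below are my assumptions, not the poster's: each system clones once per interval, and in that same interval it dies with probability CLONE_TIME / LIFETIME. When CLONE_TIME < LIFETIME each system expects more than one successor, which makes the population a supercritical branching process, so a lineage survives forever with nonzero probability even though every individual system is mortal.

```python
import random

random.seed(1)

CLONE_TIME = 1.0   # time between clones (assumed units)
LIFETIME = 3.0     # mean lifetime of one system (assumed)

def lineage_survives(steps=20, cap=2_000):
    """Simulate one lineage for `steps` clone intervals; True if anyone remains."""
    p_live = 1 - CLONE_TIME / LIFETIME   # chance a system outlives one interval
    population = 1
    for _ in range(steps):
        alive = sum(random.random() < p_live for _ in range(population))
        population = min(2 * alive, cap)  # each survivor plus its new clone
        if population == 0:
            return False                  # lineage went extinct
    return True

runs = 500
survival_rate = sum(lineage_survives() for _ in range(runs)) / runs
print(f"lineage survival rate ≈ {survival_rate:.2f}")
```

With these numbers the lineage survives roughly half the time; shrink CLONE_TIME relative to LIFETIME and the survival probability climbs toward 1, which is the sense in which replication buys immortality "in theory" while any single system cannot have it.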
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tfce80000509c1fb3-M6d9ed6be07c0d693eee190ab
Delivery options: https://agi.topicbox.com/groups/agi/subscription
