If you have an empty brain with just the question "I will cure cancer by", it 
may just answer "getting rid of it". It needs more data to know why that 
won't work. Perhaps you say to place cells closer together on discs to increase 
storage, but in reality this makes the cells overheat and collapse. It needs 
more data not only to know its boundaries in reality, but also to answer the 
question at all. Through this giant behemoth of data, it can find a path. But 
now the question is: just how much data do you need before you can answer "I 
will cure cancer by"? The answer is reality: does the solution work? You can 
also check that accuracy is no longer improving much as you feed it more data, 
but even that check is still tied back to reality.
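That "accuracy isn't improving much anymore" check can be sketched as code. This is just an illustration, not anyone's actual method: the `has_plateaued` helper and the accuracy numbers below are hypothetical, assuming you have measured accuracy after each increment of training data.

```python
# Sketch: detect when accuracy stops improving as more data is added.
# The accuracy values below are made up for illustration.

def has_plateaued(accuracies, window=3, min_gain=0.005):
    """Return True if each of the last `window` data increments
    improved accuracy by less than `min_gain`."""
    if len(accuracies) < window + 1:
        return False  # not enough measurements to judge yet
    recent = accuracies[-(window + 1):]
    gains = [b - a for a, b in zip(recent, recent[1:])]
    return all(g < min_gain for g in gains)

# Accuracy after training on 1x, 2x, 3x, ... units of data (illustrative):
curve = [0.60, 0.72, 0.79, 0.82, 0.823, 0.825, 0.826]

print(has_plateaued(curve))      # tail gains are tiny -> True
print(has_plateaued(curve[:4]))  # still improving early on -> False
```

Note that a plateau only says learning has saturated on the data you have; as the post says, whether the answer is actually correct still has to be checked against reality.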

Strange how this goes against me myself... I know a lot and use that to find my 
way through the maze of AGI; I become confident/satisfied as I evolve towards 
the end outcome. But the confidence threshold/criteria I have is based on the 
data I know. If I were 2 years old and said "cure cancer by telling it to go 
away", I would think that sounds correct. My criteria are based on what I know; 
they don't tell me whether my criteria themselves are correct. Or can they? 
(Knowing that I know enough is itself based on what I know.) Otherwise I need 
to check reality, or compare against our accuracy. Yet I seem to be able to 
know whether I am there or not. Right now, I KNOW I'm missing data. How is that 
possible?

It's as if I see walls stopping me from saying I see a path. I could say 
anything right now, and I know it won't solve AGI.

Must think on this.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb02616c3d13f47b7-M21d5cf22da564ca50a563a0c