That's exactly what I thought, doddy.

Hmm, this is one thing YKY made that is really not my type of explanation; it 
doesn't make sense to me in any way.

You mention truth values and the fuzzy "don't know" or "kinda yes/no" answers, 
but AGI only works on data prediction. Truth and trust are only prediction, and 
"saying" something is only what the system stores and what it predicts or does 
in the real world. What tells an AGI that X is true, or even kinda true and 
kinda false!? All that happens in the universe is that some things are seen 
more often, some translate (cat=dog), and some predict themselves (if it has 
already seen lots of cats, it will probably predict it will see a cat again), 
etc. Patterns are all we have that is "true". Data cleaning and annotation are 
only useful when you can't make the AGI as good as you and want to help it in 
the areas where you are better, so you tell it some things and you clean some 
things, e.g. stripping CAPS and stopwords. This annotation/cleaning is only for 
less-than-AGI systems.
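
(A minimal sketch in Python of what I mean, purely illustrative: "truth" as 
nothing but observed frequency, plus the kind of cleaning I mentioned. The 
corpus, the tiny stopword list, and the bigram counting here are my own assumed 
examples, not anyone's actual system.)

    # Toy sketch: clean the text (lowercase, drop stopwords), count which word
    # follows which, then "predict" the most frequently seen continuation.
    from collections import Counter, defaultdict

    STOPWORDS = {"the", "a", "an", "is", "and"}  # assumed tiny stopword list

    corpus = "The cat chased the dog and the cat chased a mouse"

    # Data cleaning: strip CAPS (lowercase) and stopwords.
    tokens = [w for w in corpus.lower().split() if w not in STOPWORDS]

    # Count how often each word follows each other word (bigram counts).
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

    def predict(word):
        """Return the most frequently observed next word, or None if unseen."""
        if word not in follows:
            return None  # no pattern seen, so nothing we can call "true"
        return follows[word].most_common(1)[0][0]

    print(predict("cat"))    # -> 'chased' (seen twice, so it wins)
    print(predict("mouse"))  # -> None (never seen a continuation)

The point of the sketch: nothing in it ever marks a statement "true", it just 
keeps counts of what was seen, and prediction falls out of the counts.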

I remember reading that Ben uses truth values, or something like them, as an 
extra aid to prediction, but if that's the case I disagree, because the human 
brain doesn't use them and works, well, like I said above.

We have more serious problems with our AGI team anyway... there's no AGI Guide, 
but I'm working on one that explains all of AGI in baby English: simple, easy, 
and soft, but FULL. ;p And quick.