>
>
> But as a human, asking Wen out on a date, I don't really know what
> "Wen likes cats" ever really meant. It neither prevents me from talking
> to Wen, nor from telling my best buddy that "...well, I know, for
> instance, that she likes cats..."
>
>
yes, exactly...
The NLP statement "Wen likes cats" is vague in the same way as the
Novamente or NARS relationship

EvaluationLink
    likes
    ListLink
        Wen
        cats

is vague.... The vagueness passes straight from NLP into the internal KR,
which is how it should be.
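To make the structure above concrete, here is a minimal Python sketch of
that relationship as nested nodes and links; the Atom/Node/Link classes
are illustrative stand-ins, not the actual Novamente or NARS API:

# Minimal sketch of the hypergraph structure above. The class names
# mirror the link types; everything here is illustrative, not real API.

class Atom:
    def __init__(self, name=None, out=()):
        self.name = name      # label for nodes ("likes", "Wen", ...)
        self.out = list(out)  # outgoing set for links
    def __repr__(self):
        if self.name is not None:
            return self.name
        args = ", ".join(map(repr, self.out))
        return f"{type(self).__name__}({args})"

class Node(Atom): pass
class ListLink(Atom): pass
class EvaluationLink(Atom): pass

# "Wen likes cats", left exactly as vague as the English sentence:
wen_likes_cats = EvaluationLink(out=[
    Node("likes"),
    ListLink(out=[Node("Wen"), Node("cats")]),
])

print(wen_likes_cats)   # EvaluationLink(likes, ListLink(Wen, cats))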
And that same vagueness may be there if the relationship is learned via
inference based on experience, rather than acquired by natural language.
I.e., if the above relationship is inferred, it may just mean that
" {the relationship between Wen and cats} shares many relationships with
other person/object relationships that have been categorized as 'liking'
before"
In this case, the system can figure out that "Wen likes cats" without ever
actually making explicit what this means. All it knows is that, whatever
it means, it's the same thing that was meant in other circumstances where
"liking" was used as a label.
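As a toy illustration of that inference pattern, the sketch below labels
the Wen/cats relationship by its feature overlap with relationships
previously labeled "liking"; the feature names and the threshold are
invented purely for illustration:

# Label a new person/object relationship by analogy: if it shares
# enough observed sub-relationships with pairs previously labeled
# "liking", it inherits the label, without "liking" ever being
# explicitly defined. All features and the threshold are made up.

def jaccard(a, b):
    """Overlap between two feature sets."""
    return len(a & b) / len(a | b)

# Sub-relationships observed in pairs already labeled "liking":
liking_examples = [
    {"seeks_out", "smiles_at", "talks_about", "spends_time_with"},
    {"seeks_out", "feeds", "spends_time_with", "photographs"},
]

# What the system has observed about Wen and cats:
wen_and_cats = {"seeks_out", "feeds", "smiles_at", "spends_time_with"}

score = sum(jaccard(wen_and_cats, ex) for ex in liking_examples) / len(liking_examples)

if score > 0.5:  # arbitrary threshold
    print(f"Infer: 'Wen likes cats' (mean similarity {score:.2f})")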
So, vagueness can not only be imported into an AI system from natural
language, but also propagated around the AI system via inference.
This is NOT one of the trickier things about building probabilistic AGI,
it's really kind of elementary...
-- Ben G