On 3/4/08, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>
> >> But the question is whether the internal knowledge representation of
the AGI needs to allow ambiguities, or should we use an ambiguity-free
representation.  It seems that the latter choice is better.
>
> An excellent point.  But what if the representation is natural language
with pointers to the specific intended meaning of any words that are
possibly ambiguous?  That would seem to be the best of both worlds.

Yes, that's the very strategy I had in mind.  I'm not sure if there are
better ways.
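
A sense-pointer representation like the one described could be sketched as
follows (a minimal illustration; the Token structure and the WordNet-style
sense keys are my own hypothetical choices, not a fixed proposal):

```python
# Sketch: natural-language surface text where each possibly-ambiguous
# token carries an optional pointer to its intended sense.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Token:
    surface: str                 # the word as written
    sense: Optional[str] = None  # pointer to the intended meaning,
                                 # e.g. a WordNet-style sense key

sentence = [
    Token("the"),
    Token("bank", sense="bank.n.01"),          # financial institution
    Token("raised"),
    Token("interest", sense="interest.n.04"),  # money paid on loans
    Token("rates"),
]

# The surface form stays ordinary natural language ...
surface_text = " ".join(t.surface for t in sentence)
# ... while the sense pointers keep the representation unambiguous.
```

This keeps the "best of both worlds" property: readers (and the AGI) see
plain NL, while the ambiguity is resolved out of band.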

The problem here is that the "decompression" algorithm seems to be very
complex.  The algorithm to compute the combination of two concepts A and B
goes like this:

1.  Generate random sentences containing A and B.
2.  Test these sentences to see whether they "make sense" (to "make sense"
means to be supported by, or consistent with, other facts/rules).

Such an algorithm may be very time-consuming, which may explain why
*reading* NL is a slow task for humans: we need to find abductive
explanations for the texts.
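
The two steps above could be sketched roughly like this (a toy illustration;
the templates, the fact base, and the trivial membership-based consistency
test are all stand-ins I invented for the example):

```python
# Sketch of the generate-and-test "decompression" step for combining
# two concepts A and B.

def generate_candidates(a, b, templates):
    # Step 1: generate candidate sentences containing both A and B,
    # trying both argument orders for each template.
    for t in templates:
        yield t.format(a=a, b=b)
        yield t.format(a=b, b=a)

def makes_sense(sentence, facts):
    # Step 2: a stand-in consistency test.  Here a sentence "makes
    # sense" simply if it appears among the known facts; a real system
    # would check support/consistency via inference.
    return sentence in facts

templates = ["{a} can {b}", "{a} is a kind of {b}"]
facts = {"bird can fly", "penguin is a kind of bird"}

combos = [s for s in generate_candidates("bird", "fly", templates)
          if makes_sense(s, facts)]
```

Note that the candidate space grows with the number of templates and
argument orderings, which is one concrete way the cost shows up.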

YKY

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com
