Matt,
I confess, I'm not sure I understand your response. It seems to be a variant of
the critique made by three people early in this thread, based on the misleading
example query in my original post. Those folks noted that an analysis of
linguistic surface features (i.e., the word "fomlepung" would not "sound right"
to an English-speaking query recipient) could account for the "feeling of not
knowing." And they were right, for queries of that type (i.e., queries
containing foreign, slang, or uncommon words).
I apologized for that first example and provided an improved query (one that has
valid English syntax and uses common English words -- so it will pass linguistic
surface feature analysis). To wit: "Which team won the 1924 World Series?"
Cheers,
Brad
Matt Mahoney wrote:
This is not a hard problem. A model for data compression has the task of
predicting the next bit in a string of unknown origin. If the string is an
encoding of natural language text, then modeling is an AI problem. If the model
doesn't know, then it assigns a probability of about 1/2 to each of 0 and 1.
Probabilities can be easily detected from outside the model, regardless of the
intelligence level of the model.
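[A sketch of the idea above, for concreteness: the names and the 0.9 threshold are my own illustrative choices, not anything Matt specified. A predictor that assigns probability p to the next bit being 1 is maximally uncertain at p = 1/2, and an outside observer can detect that directly from p, e.g. via the binary entropy.]

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a Bernoulli(p) bit prediction."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def model_knows(p, confident_p=0.9):
    """Flag 'knowing' when the prediction is further from 1/2 than
    a (hypothetical) confidence threshold, judged by entropy alone.
    This requires no access to the model's internals -- only p."""
    return binary_entropy(p) < binary_entropy(confident_p)

print(model_knows(0.99))  # True: a confident prediction
print(model_knows(0.5))   # False: "doesn't know" -- entropy is 1 bit
```

The point of the sketch is that the "feeling of not knowing" is externally observable: whoever receives the probabilities can compute the entropy without knowing how intelligent the model is.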
-- Matt Mahoney, [EMAIL PROTECTED]
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com