Take all the Wiki information, store it along with your program, then measure that combined size, and subject it to a large battery of tests on the information inside it. It may answer in different words, as long as it is "correct".
Then whoever makes the smallest data set and program wins. The loss of exact wording is less important than the efficient storage of concepts and the ability to use them.
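Roughly, the scoring I have in mind looks like this (just a sketch in Python; the file paths, the Q&A pairs, and the "judge" function are placeholders I'm assuming, not anything official):

    import os

    def benchmark_score(program_path, data_path, qa_pairs, answer_fn, judge_fn):
        # Total footprint to minimize: the program plus whatever data it keeps.
        total_size = os.path.getsize(program_path) + os.path.getsize(data_path)
        # Quiz the system on the stored information; paraphrased answers are fine
        # as long as the judge accepts them as "correct".
        correct = sum(1 for question, reference in qa_pairs
                      if judge_fn(answer_fn(question), reference))
        accuracy = correct / len(qa_pairs) if qa_pairs else 0.0
        # Smallest footprint that still passes the quiz wins.
        return total_size, accuracy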
Compression by itself is merely squishing the data. There is no intelligence in that, merely some neat efficiency algorithms.
James
boris <[EMAIL PROTECTED]> wrote:
It's been said that we have to go after lossless compression because there's no way to objectively measure the quality of lossy compression. That makes sense only in the context of dumb, indiscriminate transforms conventionally used for compression. If compression is produced by pattern recognition, we can quantify lossless compression of individual patterns, which is a perfectly objective criterion for selectively *losing* insufficiently compressed patterns.

To make Hutter's prize meaningful it must be awarded for compression of the *best* patterns, rather than of the whole data set. And, of course, linguistic/semantic data is a lousy place to start: it's already been heavily compressed by "algorithms" unknown to any autonomous system. An uncompressed movie would be a far, far better data sample.

Also, the real criterion of intelligence is prediction, which is a *projected* compression of future data. The difference is that current compression is time-symmetrical, while prediction obviously isn't.
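As a rough illustration of that "selective losing" idea (a Python sketch under my own assumptions: the patterns are byte strings already extracted by some recognizer, and the min_ratio threshold is made up, not a specification):

    import zlib

    def keep_best_patterns(patterns, min_ratio=1.5):
        # Score each candidate pattern by how well it compresses losslessly;
        # that per-pattern ratio is the objective criterion.
        scored = []
        for pattern in patterns:
            raw = len(pattern)
            compressed = len(zlib.compress(pattern))
            scored.append((pattern, raw / compressed))
        # The lossy step: discard patterns that don't compress well enough,
        # and rank the rest with the best-compressing patterns first.
        return [(p, r) for p, r in sorted(scored, key=lambda t: t[1], reverse=True)
                if r >= min_ratio]

The point is only that the keep/drop decision is objective, not that zlib is the right pattern compressor.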
Thank You
James Ratcliff
http://falazar.com
