On 8/20/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> The argument for lossy vs. lossless compression as a test for AI seems to be
> motivated by the fact that humans use lossy compression to store memory, and
> cannot do lossless compression at all.  The reason is that lossless
> compression requires the ability to do deterministic computation.  Lossy
> compression does not.  So this distinction is not important for machines.
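
A minimal illustration of the lossless/lossy distinction above (my own toy sketch, not from the thread): lossless compression is an exact, deterministic round trip, while a lossy transform maps many inputs to the same output and so cannot be inverted by any decoder.

```python
# Toy illustration: lossless vs. lossy, using zlib as the lossless coder.
import zlib

text = b"The quick brown fox jumps over the lazy dog."

# Lossless: decompress(compress(x)) == x, bit for bit.  Encoder and
# decoder must perform identical deterministic computations.
packed = zlib.compress(text)
assert zlib.decompress(packed) == text

# "Lossy": discard case information.  Many distinct inputs now map to
# one output, so the original cannot be recovered with certainty.
lossy = text.lower()
assert lossy != text  # the case information is gone
```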

No; the main argument is that lossy compression permits much more
sophisticated and much more powerful compression algorithms, achieving
much higher compression ratios.  Also, lossless compression is already
nearly as good as it can be.  Statistical methods will probably
outperform intelligent methods on lossless compression, especially if
the size of the compressor is included.
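
To make the statistical-methods point concrete, a toy sketch (my own example, not from the thread): even a simple order-2 context model assigns far fewer bits per character to repetitive text than a context-free order-0 model, and context modeling of this kind is the engine behind PPM-style lossless compressors.

```python
# Compare empirical entropy (bits/char) of an order-0 model vs. an
# order-2 context model on repetitive text.
import math
from collections import Counter, defaultdict

text = ("the cat sat on the mat. the cat ate. "
        "the dog sat on the log. the dog ate the cat food. ") * 4

def order0_bits(s):
    # Entropy of characters modeled independently: -sum p(c) log2 p(c).
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def order2_bits(s):
    # Entropy of each character given its two preceding characters.
    ctx = defaultdict(Counter)
    for i in range(2, len(s)):
        ctx[s[i - 2:i]][s[i]] += 1
    total_bits = 0.0
    total_chars = 0
    for counts in ctx.values():
        n = sum(counts.values())
        total_chars += n
        total_bits += -sum(c * math.log2(c / n) for c in counts.values())
    return total_bits / total_chars

print("order-0:", order0_bits(text), "bits/char")
print("order-2:", order2_bits(text), "bits/char")  # much lower
```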

> The proof that an ideal language model implies passing the Turing test
> requires a lossless model.  A lossy model has only partial knowledge of the
> distribution of strings in natural language dialogs.  Without full
> knowledge, it is not possible to duplicate the same distribution of
> equivalent representations of the same idea, allowing such expressions to be
> recognized as not human, even if the compression is ideal.
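
The distribution-matching argument above can be made concrete with a toy model (made-up numbers, not from the thread): if a judge knew the true distribution over equivalent phrasings, a log-likelihood-ratio test over repeated samples would eventually flag any model whose distribution differs, since the ratio grows in proportion to the KL divergence between the two distributions.

```python
# Toy likelihood-ratio detector: samples from a mismatched "model"
# distribution accumulate positive evidence against being "human".
import math
import random

random.seed(0)

phrasings = ["yes", "yeah", "yep", "sure"]   # hypothetical paraphrases
human = [0.40, 0.30, 0.20, 0.10]             # hypothetical "true" distribution
model = [0.70, 0.10, 0.10, 0.10]             # hypothetical lossy-model distribution

# Expected log-ratio per sample is KL(model || human) > 0 whenever the
# distributions differ, so the total drifts upward without bound.
kl = sum(m * math.log(m / h) for m, h in zip(model, human))

llr = 0.0
for _ in range(1000):
    w = random.choices(phrasings, weights=model, k=1)[0]
    i = phrasings.index(w)
    llr += math.log(model[i] / human[i])

print("KL divergence per sample:", kl)
print("log-likelihood ratio after 1000 samples:", llr)
```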

By this argument, no human can pass the Turing test, since none of us
have the same distributions, either.  Or perhaps just one human can
pass it.  Presumably Turing.

You will never, never, never, never recreate the same exact language
model in a computer as resides in any particular human.  Losslessness
is relevant only when you need to recreate it exactly, and you can't,
so it's irrelevant.

