On 8/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:
The argument for lossy vs. lossless compression as a test for AI seems to be
motivated by the fact that humans use lossy compression to store memory, and
cannot do lossless compression at all. The reason is that lossless
compression requires

On 8/20/06, Matt Mahoney [EMAIL PROTECTED] wrote:
Uncompressed video would be the absolute worst type of test data.
Uncompressed video is about 10^8 to 10^9 bits per second. The human brain
has a long-term learning rate of around 10 bits per second. So all the rest
is noise. How are you
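The ratio implied by those numbers can be checked with a quick back-of-envelope sketch (the frame size, bit depth, and frame rate below are illustrative assumptions, not figures from the thread):

```python
# Back-of-envelope check of the bitrates quoted above.
# Frame parameters are assumptions chosen to land in the 10^8-10^9 range.
width, height, bits_per_pixel, fps = 640, 480, 24, 30
video_bits_per_sec = width * height * bits_per_pixel * fps  # ~2.2e8 bits/s

learning_rate = 10  # bits/s, the long-term human learning rate cited above
noise_fraction = 1 - learning_rate / video_bits_per_sec

print(f"{video_bits_per_sec:.1e} bits/s of raw video")
print(f"{noise_fraction:.8f} of it exceeds the claimed learning rate")
```

Even at modest standard-definition parameters, all but roughly one part in 10^7 of the stream would be noise relative to that learning rate.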

As I stated earlier, the fact that there is normal variation in human language
models makes it easier for a machine to pass the Turing test. However, a
machine with a lossless model will still outperform one with a lossy model
because the lossless model has more knowledge.
I agree it is

However, a machine with a lossless model will still outperform one with a
lossy model because the lossless model has more knowledge.
PKZip has a lossless model. Are you claiming that it has more knowledge?
More data/information *might* be arguable but certainly not knowledge -- and
PKZip

----- Original Message -----
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, August 25, 2006 5:58:02 PM
Subject: Re: [agi] Lossy vs. lossless compression
However, a machine with a lossless model will still outperform one with a
lossy model because the lossless model has

Matt Mahoney wrote:
DEL has a lossy model, and nothing compresses smaller. Is it smarter
than PKZip?
Let me state one more time why a lossless model has more knowledge.
If x and x' have the same meaning to a lossy compressor (they
compress to identical codes), then the lossy model only knows
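The argument breaks off here, but the distinction it draws can be sketched in a few lines (the quantizer below is a hypothetical stand-in for a lossy compressor; zlib stands in for a lossless one):

```python
# Hypothetical illustration, not code from the thread: a "lossy compressor"
# that quantizes values to the nearest multiple of 10, versus a lossless
# compressor (zlib) whose codes can be decompressed back to the exact input.
import zlib

def lossy_code(x: int) -> int:
    # Quantization: many distinct inputs map to one code,
    # so the model cannot tell them apart afterward.
    return round(x / 10)

x, x_prime = 42, 44
assert lossy_code(x) == lossy_code(x_prime)  # x vs. x' is forgotten

# The lossless model keeps the distinction: different inputs get
# different codes, and decompression recovers the original exactly.
cx, cx_prime = zlib.compress(b"42"), zlib.compress(b"44")
assert cx != cx_prime
assert zlib.decompress(cx) == b"42"
```

The point of the truncated sentence, then, is that a lossy model treating x and x' as the same code has strictly less knowledge: it can no longer answer any question whose answer depends on which of the two actually occurred.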