Matt Mahoney wrote:
DEL has a lossy model, and nothing compresses smaller. Is it smarter
than PKZip?
Let me state one more time why a lossless model has more knowledge.
If x and x' have the same meaning to a lossy compressor (they
compress to identical codes), then the lossy model only knows
p(x)+p(x'). A lossless model knows p(x) and p(x') separately. You can
argue that if x and x' are not distinguishable then this extra
knowledge is not important. But all text strings are distinguishable
to humans.
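A toy sketch of the point, in Python (my own illustration, with made-up
probabilities; the strings and numbers are hypothetical):

```python
# Two distinct strings that a human can tell apart, with hypothetical
# probabilities under some source distribution.
p = {"the cat sat": 0.6, "the cat sat.": 0.4}

# A lossy compressor maps both strings to the same code (here: 0),
# treating them as having "the same meaning".
code = {"the cat sat": 0, "the cat sat.": 0}

def p_lossless(x):
    # A lossless model retains each string's individual probability.
    return p[x]

# The lossy model can only carry the total probability mass of each
# equivalence class of strings that share a code.
p_class = {0: sum(prob for s, prob in p.items() if code[s] == 0)}

def p_lossy(x):
    # Only p(x) + p(x') is recoverable, never p(x) on its own.
    return p_class[code[x]]

print(p_lossless("the cat sat"))  # 0.6
print(p_lossy("the cat sat"))     # 1.0 -- the sum, which cannot be split
```

The lossy model has collapsed two distinguishable strings into one bucket,
discarding exactly the knowledge of how probability divides between them.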
Suppose I give you a USB drive that contains a lossless model of the
entire universe excluding the USB drive - a bitwise copy of all quark
positions and field strengths.
(Because deep in your heart, you know that underneath the atoms,
underneath the quarks, at the uttermost bottom of reality, are tiny
little XML files...)
Let's say that you've got the entire database, and a Python interpreter
that can process it at any finite speed you care to specify.
Now write a program that looks at those endless fields of numbers, and
says how many fingers I'm holding up behind my back.
Looks like you'll have to compress that data first.
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence