On 8/12/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> First, the compression problem is not in NP.  The general problem of encoding a string as the smallest program that outputs it is undecidable.

But as I said, it becomes an NP problem once there is an upper limit on decompression time.
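(To spell it out: the unbounded problem asks for K(s) = min{ |p| : U(p) = s }, the length of the shortest program that prints s on a universal machine U, and K is uncomputable, by reduction from the halting problem. But fix a decompressor D and a step budget t, and the question "is there an archive a with |a| <= k such that D(a) outputs s within t steps?" has an obvious certificate, namely a itself, checkable by simply running D for t steps. With a polynomial step budget, that is exactly the NP pattern.)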

> Second, given a model, compression is the same as prediction.  A model is a function that maps any string s to an estimated probability p(s).

However, in this case we are not given such a function, nor is there any need to create one - lots of compression algorithms work fine without one.
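For concreteness, the equivalence in the quoted claim is the arithmetic-coding identity: a model p can code s in about -log2 p(s) bits, and conversely any compressor's length function implicitly defines a model. A minimal sketch in Python, using a toy adaptive order-0 character model (the model and names here are mine, purely illustrative):

    import math
    from collections import Counter

    def code_length_bits(s, alphabet_size=256):
        # Ideal arithmetic-coding cost under a sequential model:
        # each byte costs -log2 p(byte | everything seen so far).
        # Laplace (add-one) smoothing keeps every probability nonzero.
        seen = Counter()
        total = 0
        bits = 0.0
        for b in s.encode('latin-1', 'replace'):
            p = (seen[b] + 1) / (total + alphabet_size)
            bits += -math.log2(p)
            seen[b] += 1
            total += 1
        return bits

    print(code_length_bits("roses are red, roses are red"))

The better the model predicts the next byte, the shorter the code - prediction and compression are two views of the same quantity.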

> Third, given a fixed test set, it would be trivial to write a decompressor that memorizes it verbatim and compresses it to 0 bytes, if we did not include the size of the decompressor in the contest.

Of course - that's why I said there would be a problem with allowing decompression to rely on a local knowledge base.
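(For anyone who hasn't seen the rules: the score is, roughly, total size = |compressed file| + |self-extracting decompressor|, so a memorizing decompressor just moves the bytes from one term to the other and gains nothing. That same accounting is what a "free" local knowledge base would break: its bytes would sit outside both terms while still doing the work of storing the test data.)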

> It is easy to dismiss compression as unrelated to AGI.

Yes, it was quite easy :)

> How do you test if a machine with only text I/O knows that roses are red?

Very easily...

> Suppose it sees "red roses", then later "roses are" and predicts "red".

Sure, but that only shows it knows that the string "roses are" is followed by the string "red", not that it knows roses are red. Testing the latter requires showing it pictures of flowers of different colors and seeing whether it can use the red color as a cue to pick out the roses, or telling it to draw a picture of a rose and seeing if it fills in the color correctly.
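To make the distinction concrete, here is a minimal sketch of the sort of predictor that passes the quoted test - a toy word-trigram counter in Python (the training sentence and names are mine). Everything it "knows" is which token tends to follow which pair of tokens:

    from collections import Counter, defaultdict

    def train(text):
        # Count which word follows each two-word context.
        # Pure string statistics: nothing here is grounded in
        # colors, flowers, or anything outside the text.
        follows = defaultdict(Counter)
        w = text.lower().split()
        for a, b, c in zip(w, w[1:], w[2:]):
            follows[(a, b)][c] += 1
        return follows

    def predict(follows, a, b):
        best = follows[(a, b)].most_common(1)
        return best[0][0] if best else None

    model = train("roses are red and violets are blue and roses are red")
    print(predict(model, "roses", "are"))   # -> 'red'

It predicts "red" after "roses are" perfectly well, and it manifestly knows nothing about roses or redness - which is exactly the gap the picture test is meant to expose.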

A machine with only text I/O of course automatically fails, so you know even without running it that it cannot know that roses are red. See, told you the testing process would be easy :)
