Hutter's only assumption about AIXI is that the environment can be simulated by 
a Turing machine.

With regard to forgetting, I think it plays a minor role in language modeling 
compared to vision and hearing.  To model those, you need to understand what 
the brain filters out.  Lossy compression formats like JPEG and MP3 exploit 
this by discarding what cannot be seen or heard.  However, text doesn't work 
this way.  How much can you discard from a text file before it differs 
noticeably?
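To make that concrete, here is a minimal sketch (the function name and sample values are invented for illustration) of the crudest lossy step — zeroing the low-order bits of each byte. On pixel data the error is a few levels out of 255 and hard to see; on ASCII text the very same operation changes the letters:

```python
def drop_low_bits(data: bytes, n: int = 2) -> bytes:
    """Zero the n least-significant bits of each byte (a crude lossy step)."""
    mask = 0xFF & ~((1 << n) - 1)
    return bytes(b & mask for b in data)

# 8-bit grayscale samples: per-pixel error is at most 3 out of 255 (~1%).
pixels = bytes([17, 130, 201, 254])
print(list(drop_low_bits(pixels)))   # [16, 128, 200, 252]

# The same operation on text corrupts it immediately.
print(drop_low_bits(b"lossy"))       # b'llppx' -- no longer readable
```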
 
-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Pei Wang <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Saturday, August 12, 2006 8:53:40 PM
Subject: Re: [agi] Marcus Hutter's lossless compression of human knowledge prize

Matt,

So you mean we should leave forgetting out of the picture, just
because we don't know how to objectively measure it?

Though objectivity is indeed desirable for almost all measurements, it
is not the only requirement for a good measure of intelligence.
Someone can objectively measure the wrong property of a system.

I haven't been convinced why "lossless compression" can be taken as an
indicator of intelligence, except that it is objective and easy to
check. You wrote on your website that "Hutter [21,22] proved that
finding the optimal behavior of a rational agent is equivalent to
compressing its observations.", but his proof is under certain
assumptions about the agent and its environment. Do these assumptions
hold for the human mind or AGI in general?

Pei


On 8/12/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> "Forgetting" is an important function in human intelligence because the 
> storage capacity of the brain is finite.  This is a form of lossy 
> compression, discarding the least important information.  Unfortunately, 
> lossy compression cannot be evaluated objectively.  We can compare an image 
> compressed with JPEG with an equal-sized image compressed by discarding the 
> low-order bits of each pixel, and judge the JPEG image to be of higher 
> quality.  JPEG uses a better model of the human visual system by discarding 
> the same information that the human visual perception process does.  It is 
> more intelligent.  Lossy image compression is a valid but subjective 
> evaluation of models of human vision.  There is no objective algorithm to 
> test for image quality.  It has to be done by humans.
>
> A lossless image compression contest would not measure intelligence because 
> you are modeling the physics of light and matter, not something that comes 
> from humans.  Also, the vast majority of information in a raw image is 
> useless noise, which is not compressible.  A good model of the compressible 
> parts would have only a small effect.  It is better to discard the noise.
>
> We are a long way from understanding vision.  Standing (1973) measured 
> subjects' ability to memorize 10,000 pictures, viewed for 5 seconds each; in 
> a recognition test 2 days later, subjects were shown pictures and asked 
> whether they were in the earlier set, which they answered correctly much of 
> the time [1].  You could 
> achieve the same result if you compressed each picture to about 30 bits and 
> compared Hamming distances.  This is a long term learning rate of 6 bits per 
> second for images, or 2 x 10^9 bits over a lifetime, assuming we don't forget 
> anything after 2 days.  Likewise, Landauer [2] estimated human long term 
> memory at 10^9 bits based on rates of learning and forgetting.  It is also 
> about how much information you can absorb as speech or writing in a lifetime 
> assuming 150 words per minute at 1 bpc entropy.  It seems that the long term 
> learning rate of the brain is independent of the medium.  This is why I chose 
> 1 GB of text for the benchmark.
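The 30-bit figure and the Hamming-distance scheme above are the author's; the code below is only an illustrative reconstruction of that back-of-envelope argument, using random 30-bit signatures in place of real picture hashes:

```python
import random

random.seed(0)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

# Hash each of 10,000 "pictures" to a random 30-bit signature.
seen = [random.getrandbits(30) for _ in range(10_000)]

# A repeated picture matches its stored signature at distance 0, while an
# unrelated 30-bit code differs in ~15 bits on average, so old pictures
# are recognized reliably.
probe = seen[1234]
print(min(hamming(probe, s) for s in seen))  # 0

# The implied learning rate: 10,000 pictures x 30 bits / (10,000 x 5 s)
print(10_000 * 30 / (10_000 * 5), "bits per second")  # 6.0
```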
>
> Text compression measures intelligence because it models information that 
> comes from the human brain, not an external source.  Also, there is very 
> little noise in text.  If a paragraph can be rephrased in 1000 different ways 
> without changing its meaning, that adds only about 10 more bits to encode which 
> representation was chosen.  That is why lossless compression makes sense.
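The "10 more bits" figure is just the information needed to select one of 1000 equally likely rephrasings:

```python
import math

# Choosing one of 1000 equiprobable alternatives costs log2(1000) bits.
print(round(math.log2(1000), 2))  # 9.97
```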
>
> [1] Standing, L. (1973), "Learning 10,000 Pictures", Quarterly Journal of 
> Experimental Psychology (25) pp. 207-222.
>
> [2] Landauer, Tom (1986), "How much do people remember?  Some estimates of 
> the quantity of learned information in long term memory", Cognitive Science 
> (10) pp. 477-493.
>
>  -- Matt Mahoney, [EMAIL PROTECTED]
>
> ----- Original Message ----
> From: Pei Wang <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Saturday, August 12, 2006 4:03:55 PM
> Subject: Re: [agi] Marcus Hutter's lossless compression of human knowledge 
> prize
>
> Matt,
>
> To summarize and generalize data, and to use the summary to predict the
> future, is no doubt at the core of intelligence. However, I would not call
> this process "compression", because the result is not lossless; that
> is, there is information loss.
>
> It is not only because human brains are "noisy analog devices",
> but also because the future is different from the past, and the mind
> works under resource restrictions. Only when certain information is
> temporarily or permanently ignored (forgotten) can the system
> use its knowledge efficiently.
>
> For this reason, I'd make a conjecture that is the opposite of Hutter's: a
> necessary condition for a system to be intelligent is that it can
> forget.
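A toy sketch of this conjecture (class name and the importance scores are invented for illustration): a fixed-capacity memory that can only keep learning by forgetting its least important items.

```python
import heapq

class ForgetfulMemory:
    """Bounded store that evicts the lowest-importance fact when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []  # min-heap of (importance, fact)

    def learn(self, importance: float, fact: str):
        heapq.heappush(self.items, (importance, fact))
        if len(self.items) > self.capacity:
            heapq.heappop(self.items)  # forget the least important fact

    def recall(self):
        return sorted(self.items, reverse=True)

m = ForgetfulMemory(capacity=2)
m.learn(0.9, "fire burns")
m.learn(0.1, "lunch was soup")
m.learn(0.5, "the door sticks")
print([fact for _, fact in m.recall()])  # ['fire burns', 'the door sticks']
```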
>
> Of course it is not a sufficient condition for a system to be intelligent. ;-)
>
> Pei
>
>
>
>
>
> -------
> To unsubscribe, change your address, or temporarily deactivate your 
> subscription,
> please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
>
