In showing that compression implies AI, I first make the simplifying assumption that everyone shares the same language model. Then I relax that assumption and argue that this makes it easier for a machine to pass the Turing test.
But I see your point. I argued that a lossless model knows everything that a lossy model does, plus more, because the lossless model knows p(x) and p(x') separately, while a lossy model knows only p(x) + p(x'). However, I missed that the lossy model knows that x and x' are equivalent, while the lossless model does not. Still, I think a lossless model can reasonably derive this information by observing that p(x, x') is approximately equal to p(x) or p(x'). In other words, knowing both x and x' tells you no more than knowing x or x' alone, i.e. CDM(x, x') ~ 0.5. I think this is a reasonable way to model lossy behavior in humans.

-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message -----
From: Philip Goetz <[EMAIL PROTECTED]>
To: [email protected]
Sent: Sunday, August 27, 2006 9:23:25 PM
Subject: Re: [agi] Lossy *&* lossless compression

On 8/25/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> As I stated earlier, the fact that there is normal variation in human
> language models makes it easier for a machine to pass the Turing test.
> However, a machine with a lossless model will still outperform one with a
> lossy model because the lossless model has more knowledge.

That would be true only if there were one correct language model, AND you knew what it was. Besides which, every human has a lossy model. It seems to me that by your argument, a machine with a lossless model would "out-perform" a human, and thus /fail/ the Turing test.

- Phil

-------
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
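The equivalence test in the reply (CDM(x, x') ~ 0.5 when x and x' carry the same information) can be sketched with an off-the-shelf compressor standing in for the language model. This is my illustration, not code from the thread: zlib is only a crude proxy for a real model, and the helper names `C` and `cdm` are mine. CDM here is the compression-based dissimilarity measure C(xy) / (C(x) + C(y)), which sits near 0.5 for near-duplicates and near 1.0 for unrelated data.

```python
import os
import zlib

def C(s: bytes) -> int:
    """Approximate the information content of s by its zlib-compressed size."""
    return len(zlib.compress(s, 9))

def cdm(x: bytes, y: bytes) -> float:
    """Compression-based dissimilarity: C(xy) / (C(x) + C(y)).
    Near 0.5 when x and y are (near-)equivalent, near 1.0 when unrelated."""
    return C(x + y) / (C(x) + C(y))

# "Equivalent" strings: the same text repeated so the compressor has
# enough data to exploit the redundancy. Identical strings are the
# extreme case; a close paraphrase would behave similarly.
x = b"the quick brown fox jumps over the lazy dog. " * 50
x_prime = x

print(cdm(x, x_prime))  # near 0.5: knowing both adds almost nothing

# Unrelated data: incompressible random bytes, so the pair is about
# twice as hard to compress as either member alone.
y = os.urandom(2000)
z = os.urandom(2000)
print(cdm(y, z))        # near 1.0
```

In Matt's terms: a lossless model that notices p(x, x') ≈ p(x) ≈ p(x') (equivalently, CDM near 0.5) can recover the equivalence relation that a lossy model gets by construction.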
