On 8/13/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
Whether or not a compressor implements its model as a predictor is irrelevant.  Modeling the entire input at once is mathematically equivalent to predicting successive symbols.  Even if you think you are not modeling, you are.  If you design a code so that s is coded in n bits, you are implicitly assigning p(s) = 2^-n.  Any compression or decompression algorithm can be expressed in terms of prediction.
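
For concreteness, the equivalence Matt describes looks roughly like this in code (a quick sketch of my own, not something from his post): an adaptive order-0 byte model that charges -log2 p bits for each predicted symbol produces exactly the code length an ideal arithmetic coder would get from the same model, i.e. coding s in n bits is the same act as assigning p(s) = 2^-n.

    import math

    # Adaptive order-0 byte model (illustrative sketch only).  Charging
    # -log2 p bits for each predicted byte gives the total an ideal
    # arithmetic coder would produce with this model.
    def predictive_code_length(data):
        counts = [1] * 256         # Laplace-smoothed count for each byte value
        total = 256
        bits = 0.0
        for b in data:
            p = counts[b] / total  # the model's prediction for the next byte
            bits += -math.log2(p)  # ideal code length for that byte
            counts[b] += 1         # update the model after seeing the byte
            total += 1
        return bits

    msg = b"abracadabra abracadabra"
    print(predictive_code_length(msg), "model bits vs", len(msg) * 8, "raw bits")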

You can take the view that there is this implicit mathematical equivalence if you wish, but that doesn't change the fact that typical compression programs don't actually predict anything.
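
For contrast, here is roughly what a typical dictionary coder does (again a toy sketch of my own, not modeled on any particular program): it searches a sliding window for the longest earlier match and emits copy instructions. Nothing in it explicitly predicts a next symbol or assigns a probability, even if an equivalent predictive model exists on paper.

    # Toy LZ77-style coder (illustrative only): emits (offset, length,
    # next_literal) copy instructions found by brute-force search of a
    # sliding window.  No prediction or probability appears anywhere.
    def lz77_tokens(data, window=4096):
        i, out = 0, []
        while i < len(data):
            best_len, best_off = 0, 0
            for j in range(max(0, i - window), i):
                k = 0
                while i + k < len(data) and j + k < i and data[j + k] == data[i + k]:
                    k += 1
                if k > best_len:
                    best_len, best_off = k, i - j
            lit = data[i + best_len] if i + best_len < len(data) else None
            out.append((best_off, best_len, lit))
            i += best_len + 1
        return out

    print(lz77_tokens(b"abracadabra abracadabra"))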

Also, Turing would disagree with your definition of AI.  The Turing test does not require vision or the ability to draw.

The Turing test is known to be nowhere near as sound as was believed in Turing's day; we now know the human tendency to anthropomorphize is strong enough that Eliza-class chatbots have been taken for human. Basically, language is an ultra-low-bandwidth medium, so much so that an awful lot of assumption has to be made to make it work. Adding visual elements would make things much faster and more accurate, because you're not desperately trying to strain meaning from very small quantities of data.

But while it is gratuitously difficult, probing vision through a text-only channel is possible, so yes, passing a properly administered Turing test _does_ require vision and the ability to draw. You'd want to pose questions like... let's see...

"Consider a 3 inch solid sphere of red glass. Embedded at the center is a 1 inch solid sphere of blue glass. Shine a white light through it onto a white sheet of paper. What appears on the paper?"

Basically you need to ask the sort of questions that a blind, paralyzed human would need a still-functioning visual cortex to answer.
