--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:

> As Loosemore has argued, compression is a poor AGI test in general, as shown
> by
> the fact that humans are generally intelligent but are poor compressors!
> Some AGIs
> may be great compressors, others not.

Well, it is true that people are poor at compression (because brains are
analog), but they are good at learning and prediction.  You don't really
need to compress, just measure how well your AGI predicts using a fixed
training set.  The compressed size of x_1...x_n is equivalent to
Sum_i log(1/p(x_i)) bits.
There are other measures, such as the number of guesses per symbol, but
experimentally, compression correlates well with more direct measures such as
word error rate in language models for speech recognition.
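The equivalence between prediction and compression can be sketched as follows.  This is a toy illustration, not any particular compressor: an adaptive order-0 model assigns each symbol a probability from counts seen so far, and the ideal code length for the sequence is the sum of log(1/p) over the symbols.

```python
import math

def code_length_bits(sequence):
    """Ideal compressed size, in bits, of a sequence under a simple
    adaptive order-0 model: each symbol costs log2(1/p) bits, where p
    is estimated from counts seen so far (with add-one smoothing over
    an alphabet assumed known in advance)."""
    alphabet = sorted(set(sequence))
    k = len(alphabet)
    counts = {}
    total = 0
    bits = 0.0
    for sym in sequence:
        p = (counts.get(sym, 0) + 1) / (total + k)  # predicted probability
        bits += math.log2(1.0 / p)                  # cost of this symbol
        counts[sym] = counts.get(sym, 0) + 1
        total += 1
    return bits

print(code_length_bits("the cat sat on the mat"))
```

A better predictor assigns higher p to the symbols that actually occur, so the same sum shrinks; that is why compressed size can stand in for prediction accuracy on a fixed training set.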

I think it is not building the knowledge base that takes so long, but refining
the learning algorithm.  The Large Text Compression Benchmark simulates 20
years of language learning in a few minutes or hours, which lets you run a
lot of experiments.

> For instance, if someone built a robotic dog that was as good as a real dog
> at perception,
> cognition and action, I would consider that a big step toward powerful AGI.
> But dogs really
> suck at compression.  (Yeah, their brains may carry out compression
> operations internally.
> But, if you give a dog an explicit compression problem to solve, it will not
> give a very
> useful or impressive answer...)

How will you develop and test the vision and hearing systems?  Does your
system extract the right features?  If so, your reconstructed inputs should be
perceptually indistinguishable (at least to a real dog).
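One crude, measurable proxy for "perceptually indistinguishable" is reconstruction error on the raw signal.  A minimal sketch (the SNR threshold here is hypothetical, and numeric SNR is only a stand-in for a real perceptual test):

```python
import math

def reconstruction_snr_db(original, reconstructed):
    """Signal-to-noise ratio in dB between an input signal and its
    reconstruction from the extracted features.  Higher SNR means the
    reconstruction is closer to the original; infinite SNR means they
    are identical."""
    assert len(original) == len(reconstructed)
    signal = sum(x * x for x in original)
    noise = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    if noise == 0.0:
        return float("inf")
    return 10.0 * math.log10(signal / noise)

# Hypothetical check: a sine wave reconstructed with a small offset.
clean = [math.sin(0.1 * t) for t in range(1000)]
approx = [x + 0.01 for x in clean]
print(reconstruction_snr_db(clean, approx))
```

If the features capture everything perceptually relevant, reconstruction error should be dominated by components the listener or viewer cannot detect anyway.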


-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email