I still don't really follow this entire line of argument.

You've given two main, and quite different, proposals here:
A.

 An AGI residing in your PC should be able to do the same 
tasks as a human assistant, at least as fast and 
as accurately.

B.

I proposed text compression and video compression as tests.  For text, 
the AGI must be able to losslessly compress 1 GB of text with no initial 
training
A seems to match the general description many people give of an AGI, and I 
concur that a system meeting it would definitely qualify as an AGI.
What is B?  You propose it as a test, but it seems that
1.  the test is merely a lossless compression of data back and forth, and
2.  it is something that would make A better or faster, or give it a smaller 
KB, but is not itself an AGI, or an AGI test.

A GOOD AGI should be able to store data efficiently, but conversely, does a 
program that stores data (compresses well) qualify as an AGI?
I would think this is a very limited AGI, and maybe just a helper application 
for a KB instead.
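To make point 1 concrete: the "back and forth" property is just that a 
lossless codec must invert exactly. A minimal sketch using a standard 
compressor (zlib here, purely as an illustration; the actual proposal is 
about far stronger, model-based compressors):

```python
import zlib

# Point 1's "back and forth": a lossless codec must invert exactly.
original = b"An AGI residing in your PC should do the same tasks." * 100
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original        # lossless round trip
print(len(original), len(compressed))  # repetitive text compresses well
```

A zip-style program passes this round-trip check trivially, which is exactly 
why the check alone says nothing about intelligence; the proposed test hinges 
on *how small* the compressed output can be made.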
I would say first we have to have a functional AGI according to A, and if it 
requires a Google data center to house the knowledge, then so be it.
Later, a great upgrade to a working AGI would be to fit it on a PC, and then 
down to an iPod, but that would not seem a reasonable first requirement for 
an AGI.

James Ratcliff



David Clark <[EMAIL PROTECTED]> wrote: ----- Original Message ----- 
From: "Matt Mahoney" 
To: 
Sent: Wednesday, April 18, 2007 3:43 PM
Subject: Re: Goals of AGI (was Re: [agi] AGI interests)


I wasn't aware when I posted, in response to your initial email, that you were
proposing a *test* to determine if a program was actually an AGI.  This
test, I presume, would be passable only by an intelligent mind, but wouldn't
be that mind's only ability.  Someone else has proposed on this list, and I
agree with them, that any concrete (unchangeable) test is insufficient,
because a program (AGI) can be created just to pass that test.  Other
features that anyone would expect of a real AGI could be missing, and the
program could still pass the test.  (Example: IBM's chess computer, if your
test had been beating a human chess master.)

> What is your definition of "understanding"?  I know what it means in
> people, but what does it mean in a computer?  If you accept Turing's
> definition of AI, then you have to accept the equivalence of passing the
> Turing test with computing a probability distribution.

Turing's test is obviously not sufficient for AGI.  Why would an AGI waste
its time learning to lie, miscompute numbers, simulate a forgetful memory,
etc., just to pass a test?  Why would the creators of an AGI spend time and
money to re-create the worst aspects of being human?

I use a simple metaphor for *understanding*.  If information were X/Y pairs
of numbers plotted on a graph, the Y intercept and slope of the resulting
line would be the *understanding*.  If all you know are the points, you can
only accurately respond with information you have memorized.  If you know
the intercept and slope of the line, then the information that can be
produced is infinite.  Whether the formula is *taught* by humans or
calculated from the experience of the AGI, its usefulness is the same.  (The
usefulness would actually depend on the fit of the line to the function in
the real world, but my point stands.)  I therefore don't believe that this
kind of *understanding* requires experiential learning or embodiment.  (Both
experiential learning and embodiment would be nice to have, but are not
necessary to get an AGI.)

For the people who want to gather huge amounts of data, or input huge
amounts of *facts*, I would ask: what is the method of deducing
*understanding* from the data points?  Isn't a pile of words just a pile of
words, unless the relationship "formula" of the "lines" can be discerned?

Is it easier to input a Y intercept and slope, or to input a huge number of
data points and have the computer somehow calculate the resulting line?  If
the points actually lie on a straight line, a computer program can easily
calculate the slope and Y intercept, but most relationships are not that
simple.
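For the easy (straight-line) case, the calculation is just a least-squares 
fit. A minimal sketch of the metaphor, where four memorized points collapse 
into two numbers that then answer queries about points never seen:

```python
def fit_line(points):
    """Least-squares fit: return (slope, intercept) for (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Four memorized points along y = 2x + 1
points = [(0, 1), (1, 3), (2, 5), (3, 7)]
slope, intercept = fit_line(points)

# The "understanding" (slope, intercept) now answers a query about
# an x value that appears nowhere in the data.
print(slope * 100 + intercept)  # prints 201.0
```

The hard part, as the paragraph above says, is that real-world relationships 
are rarely linear, so the "formula" is far harder to discern than this.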

> A common argument against compression as a test for AI is that humans
> don't compress like a zip program.  Compression requires a *deterministic*
> model.  A compressor codes string x using a code of length log 1/p(x)
> bits.  The decompressor must also compute p(x) exactly to invert the
> code.  Humans can't do this because they use noisy neurons to compute
> p(x) that varies a bit each time.

Any test that requires the AGI to jump through hoops that no human can pass
is a poor test.  The idea isn't to make the potential AGI fail, but to
recognize when something approximating human-level intelligence has been
achieved.  A test so hard that obviously intelligent and useful programs
fail it wouldn't have much value.

The bigger question is whether text and video compression is a worthwhile
attribute of an emerging AGI.  If the AGI has unlimited resources
(unlikely), whether those resources are human teachers or hardware, then any
useful ability would be fine.  However, do you think that the effort
required in the short term to make an AGI as proficient at compression as
you wish would be the best use of its relatively scarce resources?

-- David Clark


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;



_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
