--- On Thu, 10/16/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> If some folks want to believe that self-modifying AGI is not possible,
> that's OK with me.  Lots of folks believed human flight was not possible
> also, etc. etc. ... and there were even attempts at mathematical/theoretical
> proofs of this.  Fortunately the Wright Brothers spent their time building
> planes rather than laboriously poking holes in the intuitively
> obviously-wrong supposed-impossibility-proofs of what they were doing...

I have heard this analogy before. OTOH, there are also people working on 
polynomial-time solutions to NP-complete problems, or on recursive data 
compression, because it would be *so cool* to prove the naysayers wrong.

First, I don't claim that RSI is impossible. In my paper I give a trivial 
example of a self-rewriting program that achieves greater intelligence, which I 
define as goal achievement within time bounds. A nontrivial example of 
self-improvement would be my own CMR design, where the peers in a global brain 
redistribute their knowledge so it can be stored more efficiently, resulting in 
specialization.
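To make the definition concrete, here is a hypothetical sketch (my own 
illustration, not the construction from the paper): if intelligence is goal 
achievement within time bounds, then a program that rewrites its own code into 
a faster equivalent has, by that definition, made itself more intelligent.

```python
# "Intelligence" = achieving the goal (here, computing sum 0..n-1)
# within a time bound. The slow version takes O(n) steps; the rewrite
# achieves the same goal in O(1) steps, so by the time-bound definition
# it is a self-improvement. Illustration only.

slow_src = """
def solve(n):
    total = 0
    for i in range(n):   # O(n) steps to reach the goal
        total += i
    return total
"""

fast_src = """
def solve(n):
    return n * (n - 1) // 2   # same goal in O(1) steps (closed form)
"""

def run(src, n):
    # Execute a source string and call its solve() function.
    ns = {}
    exec(src, ns)
    return ns["solve"](n)

def self_improve(src):
    # The "self-rewriting" step: detect the loop and substitute
    # the closed-form version of the same computation.
    return fast_src if "for i in range" in src else src

new_src = self_improve(slow_src)
assert run(slow_src, 1000) == run(new_src, 1000) == 499500
```

The point of the triviality is that the rewrite is fixed in advance; nothing 
here generates an open-ended sequence of ever-better rewrites.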

I don't claim that Ben's OpenCog design is flawed or that it could not produce 
a "smarter than human" artificial scientist. I do claim that this step would 
not launch a singularity. You cannot produce a seed AI.

The intuitively obvious -- but wrong -- counterargument goes like this: if we 
can produce an AI with an IQ of 200, then it could produce an AI with an IQ of 
400, and so on. It is wrong because:

1. It is meaningless to talk of an IQ above 200 because there is no test for it.

2. It is meaningless to talk of intelligence for AI because its distribution of 
skills will not match the human distribution unless you deliberately cripple 
it. My calculator has an IQ of 10^6, depending on which test I give it.

3. Even if you mean "superior to humans in every way", you need an objective 
intelligence test expressed in the form of goal achievement, such as 
compression ratio, dollars earned, or number of descendants.

4. By any measure in (3), collective humanity is far more intelligent than the 
artificial scientist, and was essential in its production. This is not 
self-improvement. It is a collective with an IQ of (say) 10^12 producing a 
machine with an IQ of 200. That step won't get you to 400 any faster than just 
hiring more people. But it *is* self-improvement of the global brain, in that 
you are going from 10^12 to 10^12 + 200. It is just not as fast as you expected.

You depend on the global brain a lot more than you think. Google makes everyone 
smarter, including Kurzweil's chatbot Ramona. If you don't believe me about 
(4), try going back 100 years and building your AGI, or just disconnect the 
internet and lock yourself in a room until it is built. My paper on RSI 
explains more formally why RSI in isolation fails, or at least why it does not 
improve faster than O(log t).
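A back-of-the-envelope sketch of why the growth rate matters (the numbers and 
the "doubling" model are my own assumptions, not taken from the paper): a 
bound of O(log t) diverges rapidly from the naive intuition that each 
generation of AI doubles the previous one's capability.

```python
# Compare two hypothetical growth curves as a function of effort t:
#  - bounded_gain: the claimed O(log t) ceiling for RSI in isolation
#  - naive_gain:   "capability doubles per order of magnitude of effort",
#                  i.e. exponential in log10(t)
# Assumed constants are illustrative only.
import math

def bounded_gain(t):
    return math.log(t)            # natural log: grows without bound, but slowly

def naive_gain(t):
    return 2.0 ** math.log10(t)   # one doubling per factor-of-10 of effort

for t in (10, 10**3, 10**6, 10**9):
    print(f"t={t:>10}  bounded={bounded_gain(t):7.2f}  naive={naive_gain(t):8.2f}")
```

By t = 10^9 the naive curve is over twenty times the bounded one, which is why 
the distinction between "improves" and "improves fast enough for a singularity" 
does real work in the argument.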

-- Matt Mahoney, [EMAIL PROTECTED]



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com
