The difficulty of rigorously defining practical intelligence doesn't tell
you ANYTHING about the possibility of RSI ... it just tells you something
about the possibility of rigorously proving useful theorems about RSI ...

More importantly, you haven't dealt with my counterargument that the posited
AGI that is "qualitatively intellectually superior to humans in every way"
would

a) be able to clone itself N times for large N

b) have the full knowledge-base and infrastructure of human society at its
disposal

Surely these facts will help it to self-improve far more quickly than would
otherwise be the case...

I'm not thinking about this so abstractly, really.  I'm thinking,
qualitatively, that

1-- The members of this list, collectively, could solve algorithmic problems
that a team of one million people with IQ 100 would not be able to solve in
a feasible period of time

2-- an AGI that was created by, say, the members of this list, would be
architected based on **our** algorithms

3-- so, if we could create an AGI that was qualitatively intellectually
superior to **us** (even if only moderately so), this AGI (or a team of
such) could probably solve algorithmic problems that one million of **us**
would not be able to solve in a feasible period of time

4-- thus, this AGI we created would be able to create another AGI that was
qualitatively much smarter than **it**

5-- etc. (a toy sketch of this recursion follows below)
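
Here is that sketch (Python; the improvement factor k is a pure
assumption on my part -- the point is only the shape of the growth, not
the particular numbers):

# Toy model of steps 1-5 above. ASSUMPTION: each generation can build
# a successor whose problem-solving capability is k times its own, for
# some k > 1. This does not argue that such a k exists; it only shows
# that if it does, capability compounds exponentially.

def capability(generation, k=1.5, base=1.0):
    """Capability of the nth-generation AGI under the toy model."""
    return base * (k ** generation)

for n in range(6):
    print("generation %d: capability %.2f" % (n, capability(n)))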

Apparently you are pushing back against step 3 in the above argument,
because you're saying that the infrastructure of human society would not be
sufficient to allow the AGI to make these algorithmic breakthroughs, even if
it had the brainpower in principle.

But I just don't believe it....  I strongly suspect that dramatic
improvements in intelligence could be achieved via mathematical insights
into algorithm design, hardware design, quantum computing, etc. ...
insights that a smarter mind would be able to come up with...

BTW, it's true that if I went back in time 1000 years I could not regenerate
modern society, or make all that much impact.  But if I went back in time
together with a million of my clones, we could dramatically accelerate the
progress of medieval society toward modernity, for sure....

-- Ben G

On Thu, Oct 16, 2008 at 9:00 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Thu, 10/16/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > If some folks want to believe that self-modifying AGI is not possible,
> > that's OK with me.  Lots of folks believed human flight was not possible
> > also, etc. etc. ... and there were even attempts at
> > mathematical/theoretical
> > proofs of this.  Fortunately the Wright Brothers spent their time
> > building
> > planes rather than laboriously poking holes in the intuitively
> > obviously-wrong
> > supposed-impossibility-proofs of what they were doing...
>
> I have heard this analogy before. OTOH there are people working on
> polynomial time solutions to NP-complete problems, or recursive data
> compression, because it would be *so cool* to prove the naysayers wrong.
>
> First, I don't claim that RSI is impossible. In my paper I give a trivial
> example of a self-rewriting program that achieves greater intelligence,
> which I define as goal achievement within time bounds. A nontrivial example
> of self-improvement would be my own CMR design, where the peers in a global
> brain redistribute their knowledge so it can be stored more efficiently,
> resulting in specialization.
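>
> To give a feel for what "greater intelligence" means under that
> definition, here is a throwaway self-rewriting sketch (illustrative
> only, not the example from the paper; run it as a standalone script):
>
> # On the first run the program computes an expensive answer, then
> # rewrites its own source so that ANSWER holds the result as a
> # literal; every later run achieves the same goal in far less time,
> # i.e. greater intelligence under goal-achievement-within-time-bounds.
> import sys
>
> ANSWER = None  # rewritten to a literal after the first run
>
> def expensive_goal():
>     return sum(i * i for i in range(10**6))
>
> if ANSWER is None:
>     result = expensive_goal()
>     with open(sys.argv[0]) as f:
>         src = f.read()
>     src = src.replace("ANSWER = None", "ANSWER = %d" % result, 1)
>     with open(sys.argv[0], "w") as f:
>         f.write(src)
> else:
>     result = ANSWER
>
> print(result)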
>
> I don't claim that Ben's OpenCog design is flawed or that it could not
> produce a "smarter than human" artificial scientist. I do claim that this
> step would not launch a singularity. You cannot produce a seed AI.
>
> The intuitively obvious -- but wrong -- counterargument goes like this: if
> we can produce an AI with an IQ of 200, then it could produce an AI with an
> IQ of 400, and so on. It is wrong because:
>
> 1. It is meaningless to talk of an IQ above 200 because there is no test
> for it.
>
> 2. It is meaningless to talk of a single intelligence score for an AI
> because its distribution of skills will not be the same as the distribution
> in humans unless you deliberately cripple it. My calculator has an IQ of
> 10^6, depending on what test I give it.
>
> 3. Even if you mean "superior to humans in every way", you need an
> objective intelligence test expressed in the form of goal achievement, such
> as compression ratio, dollars earned, or number of descendants (one way to
> score such a test is sketched after this list).
>
> 4. By any measure in (3), collective humanity is far more intelligent than
> the artificial scientist, and was essential in its production. This is not
> self-improvement. It is a collective with an IQ of (say) 10^12 producing a
> machine with an IQ of 200. That step won't get you to 400 any faster than
> just hiring more people. But it *is* self-improvement of the global brain,
> in that you are going from 10^12 to 10^12 + 200. It is just not as fast as
> you expected.
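>
> To make the compression version of point 3 concrete, here is a throwaway
> sketch of how such a test could be scored (my labels, purely illustrative;
> in practice the corpus would be a real benchmark like enwik8 rather than a
> stand-in string):
>
> # Sketch of an objective intelligence test "expressed in the form of
> # goal achievement": score an agent by the compression ratio it
> # achieves on a fixed corpus. Higher score = better goal achievement.
> import zlib
>
> def compression_score(compress, corpus):
>     """Return original size / compressed size (higher is better)."""
>     return float(len(corpus)) / len(compress(corpus))
>
> corpus = b"the quick brown fox jumps over the lazy dog " * 1000
> print(compression_score(lambda data: zlib.compress(data, 9), corpus))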
>
> You depend on the global brain a lot more than you think. Google makes
> everyone smarter, including Kurzweil's chatbot Ramona. If you don't believe me
> about (4), try going back 100 years in time and building your AGI, or just
> disconnect the internet and lock yourself in a room until it is built. My
> paper on RSI explains more formally why RSI in isolation fails, or at least
> does not improve faster than O(log t).
>
> -- Matt Mahoney, [EMAIL PROTECTED]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson


