Matt wrote, in reply to me:

> > An AI twice as smart as any human could figure
> > out how to use the resources at his disposal to
> > help him create an AI 3 times as smart as any
> > human.  These AI's will not be brains in vats.
> > They will have resources at their disposal.
>
> It depends on what you mean by "twice as smart". Do you mean twice as many
> brain cells? Twice as much memory? Twice as fast? Twice as much knowledge?
> Able to score 200 on an adult IQ test (if such a thing existed)?
>
> Unless you tell me otherwise, I have to assume that it means "able to do
> what 2 people can do" (or 3 or 10, the exact number isn't important). In
> that case, I have to argue it is the global brain that is creating the AI
> with a very tiny bit of help from the parent AI. You would get the same
> result by hiring more people.



Whatever ...

You are, IMO, just distracting attention from the main point by making
odd definitions...

No, of course my colloquial phrase "twice as smart" does not mean "as
smart as two people put together". That is not the accepted
interpretation of that colloquialism, and you know it!

To make my statement clearer, one approach is to forget about
quantifying intelligence for the moment...

Let's talk about qualitative differences in intelligence.  Do you agree that
a dog is qualitatively much more intelligent than a roach, and a human is
qualitatively much more intelligent than a dog?

In this sense I could replace

> An AI twice as smart as any human could figure
> out how to use the resources at his disposal to
> help him create an AI 3 times as smart as any
> human.  These AI's will not be brains in vats.
> They will have resources at their disposal.

with

****
An AI that is qualitatively much smarter than any human
could figure out how to use the resources at its disposal
to help it create an AI that is qualitatively much smarter
than itself.

These AIs will not be brains in vats.
They will have resources at their disposal.
****

On the other hand, if you insist on a mathematical definition of
intelligence, we could take, say, the intelligence of a system to be
the total prediction difficulty of the set S of sequences that the
system can predict during a period of time of length T. We can define
prediction difficulty as Shane Legg does in his PhD thesis, and then
average this over various time-lengths T, using some appropriate
weighting function.

(I'm not positing the above as an ideal definition of intelligence...
just throwing one definition out there. My conceptual point is quite
independent of the specific definition of intelligence you choose.)
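
To make that a bit more concrete, here is one way it could be written
down. This is just a sketch in my own ad hoc notation, not Legg's: D
stands for whatever prediction-difficulty measure you prefer (e.g.
Legg's), and w is the weighting function just mentioned. First the
intelligence of a system A at a single time-scale T:

    I_T(A) = \sum_{s \in S_T(A)} D(s)

where S_T(A) is the set of sequences that A can predict within a period
of length T. Then average over time-scales:

    I(A) = \sum_T w(T) \, I_T(A),   with \sum_T w(T) = 1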

Using this sort of definition, my statement is surely
true, though it would take work to prove it.

Using this sort of definition, a system A2 that is twice as smart as
system A1, if allowed to interact with an appropriate environment
vastly more complex than either of the systems, would surely be
capable of modifying itself into a system A3 that is twice as smart
as A2.

This seems extremely obvious, and I don't want to spend time right now
proving it formally. No doubt writing out the proof would reveal
various mathematical conditions on the theorem statement...
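
As a first pass, the claim to be proved might read as follows, in the
notation above (the environment condition is my guess at the sort of
hypothesis a proof would require):

    Conjecture: if I(A2) >= 2 I(A1), and A2 is allowed to interact
    with an environment E whose total prediction difficulty vastly
    exceeds I(A2), then A2 can modify itself into a system A3 with
    I(A3) >= 2 I(A2).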

-- Ben G


