Brian, thanks for your response, and Dr. Hall, thanks for your post as well.
I will get around to responding to this as soon as time permits. I am
interested in what Michael Anissimov or Michael Wilson has to say.

On 12/4/06, Brian Atkins <[EMAIL PROTECTED]> wrote:

I think this is an interesting, important, and very incomplete subject area,
so thanks for posting this. Some thoughts below.

J. Storrs Hall, PhD. wrote:
>
> Runaway recursive self-improvement
>
>
>> Moore's Law, underneath, is driven by humans.  Replace human
>> intelligence with superhuman intelligence, and the speed of computer
>> improvement will change as well.  Thinking Moore's Law will remain
>> constant even after AIs are introduced to design new chips is like
>> saying that the growth of tool complexity will remain constant even
>> after Homo sapiens displaces older hominid species.  Not so.  We are
>> playing with fundamentally different stuff.
>
> I don't think so. The singulatarians tend to have this mental model of a
> superintelligence that is essentially an analogy of the difference between
> an animal and a human. My model is different. I think there's a level of
> universality, like a Turing machine for computation. The huge difference
> between us and animals is that we're universal and they're not, like the
> difference between an 8080 and an abacus. "superhuman" intelligence will
> be faster but not fundamentally different (in a sense), like the
> difference between an 8080 and an Opteron.
>
> That said, certainly Moore's law will speed up given fast AI. But having
> one human-equivalent AI is not going to make any more difference than
> having one more engineer. Having a thousand-times-human AI won't get you
> more than having 1000 engineers. Only when you can substantially augment
> the total brainpower working on the problem will you begin to see
> significant effects.

Putting aside the speed differential, which you accept but dismiss as
unimportant for RSI, isn't there a bigger issue you're skipping regarding the
other differences between an Opteron-level PC and an 8080-era box? For
example, there are large differences in the addressable memory amounts. This
might for instance mean that whereas a very good example of a human can study
and become a true expert in perhaps a handful of fields, an SI may be able to
be a true expert in many more fields simultaneously, and to a more exhaustive
degree than a human. Will this lead to the SI making more breakthroughs per
given amount of runtime? Does it multiply with the speed differential?

Also, what is really the difference between an Einstein/Feynman brain and
someone with an 80 IQ? It doesn't appear that E/F's brains simply run
slightly faster, or likewise that they simply know more facts. There's
something else, isn't there? Call it a slightly better architecture, or maybe
only certain brain parts are a bit better, but this would seem to be a fourth
issue to consider besides the previously raised points of speed, "memory
capacity", and "universality". I'm sure we can come up with other things too.

(Btw, the preferred spelling is "singularitarian"; it gets the most Google
hits by far from what I can tell. Also btw, the term arguably now refers more
specifically to someone who wants to work on accelerating the singularity, so
you probably can't group in here every single person who simply believes a
singularity is possible or coming.)

>
>> If modest differences in size, brain structure, and
>> self-reprogrammability make the difference between chimps and humans
>> capable of advanced technological activity, then fundamental
>> differences in these qualities between humans and AIs will lead to a
>> much larger gulf, right away.
>
> Actually Neanderthals had brains bigger than ours by 10%, and we blew them
> off the face of the earth. They had virtually no innovation in 100,000
> years; we went from paleolithic to nanotech in 30,000. I'll bet we were
> universal and they weren't.
>
> Virtually every "advantage" in Elie's list is wrong. The key is to realize
> that we do all these things, just more slowly than we imagine machines
> being able to do them:
>
>> Our source code is not reprogrammable.
>
> We are extremely programmable. The vast majority of skills we use
> day-to-day are learned. If you watched me tie a sheepshank knot a few
> times, you would most likely then be able to tie one yourself.
>
> Note by the way that having to "recompile" new knowledge is a big security
> advantage for the human architecture, as compared with downloading
> blackbox code and running it sight unseen...

This is missing the point entirely, isn't it? Learning skills is using your
existing physical brain design, not modifying its overall or even localized
architecture or modifying "what makes it work". When source code is
mentioned, we're talking a level lower down.

Can you cause your brain to temporarily shut down your visual cortex and
other associated visual parts, reallocating them to expand your working
memory to four times its current size, in order to help you consciously
juggle the bits you need to solve a particularly tough problem? No. Can you
reorganize how certain memories are stored so they would have 5 times as much
fidelity, or losslessly capture certain sensory inputs for a given period of
time? No. Your mind architecture, its resource allocations, etc. are very
fixed.

>
>> We cannot automate the
>> execution of boring cognitive tasks.
>
> Actually we do this all the time, automatically; we call it forming
> habits. Think of the ease with which you drive or type compared with
> having to pay attention to each individual action.
>
> By the way, being able to get bored is a crucial part of the
> self-monitoring process that makes us discovery machines as well as mere
> performance machines.

I think the original point is talking more about things along the lines of
cognitive tasks that are relatively demanding, and probably much more
"interactive" than performing some fixed task like driving between two fixed
points that you've gotten used to doing.

For instance, let's say I want to design a new microprocessor. As part of
that process I may need to design a multitude of different circuits, test
them, and then integrate them. To humans, this is not a task that can run on
autopilot. If I'm a human working on this project, I've got to sit there and
stare at the screen and think about how to do this. Maybe there are some
repetitive elements in my job I can chunk and automatically spit out, but
overall this job is at a complexity level which requires my continuing
attention, although not 100% of my brainpower.

What if, though, I find doing this job kinda boring after a while and wish I
could split off a largish chunk of my cognitive resources to chew away on it
at a somewhat slower speed, unconsciously, in the background, and get the
work done while the conscious part of my mind surfs the web? Humans can't do
this, but an AGI likely could. In fact, we're sitting here trying to invent
AGIs to do a lot of these tasks for us so we can do more of the things we
enjoy consciously.
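
(As a toy analogy only, here is roughly what that split looks like in
Python's standard threading module; the "conscious" main thread hands the
tedious chunk to a background worker and stays free. Purely illustrative,
not a claim about how an AGI would actually be built.)

import threading
import queue

results = queue.Queue()

def background_design_task(task_name):
    # Stand-in for the boring circuit work: grind through it at a
    # "slower, unconscious" pace and report back when finished.
    results.put(task_name + ": done")

# The "conscious" thread delegates the tedious chunk...
worker = threading.Thread(target=background_design_task,
                          args=("adder circuit",))
worker.start()

# ...and is free to go surf the web in the meantime.
print("conscious thread: off doing something more interesting")

worker.join()         # later, collect what the background chunk produced
print(results.get())  # "adder circuit: done"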

>
>> We cannot blend together
>> autonomic and deliberative thought processes.
>
> We do nothing but, I'd say. See above.

Not exactly the same, no.

>
>> We cannot reprogram the
>> process whereby concepts are abstracted from sensory information.
>
> Some sensory processes are more opaque than others, but by the time we
> extract concepts it's almost entirely reprogrammable. Indeed every time we
> learn a new concept, we're reprogramming our perceptual systems to
> recognize something they didn't before; something that happens on average
> 10-100 times a day for each of us.

There are fairly fixed architectures in the brain set up to allow, for
instance, recognizing objects, faces, etc. You can read articles in the
popular news lately, for instance about people who lack the particular
face-recognition hardware and suffer from "face blindness". They can't
reprogram their minds around this problem. If we met some aliens from Alpha
Centauri who had totally different "faces" with their own style of emotional
expressions, you would never ever be able to deal with them on a level as
good as the level their own evolved mindware allows.

So no, I don't agree with your answer here - you're again jumping up at least
one level from what the original point is aimed at.

>
>> We are not Bayesian.
>
> We *invented* Bayesian. We can each learn the (simple) math and use it if
> we want to, i.e. reprogram ourselves.

Consciously. Slowly. But the 90% chunk of your mind that is operating below
your conscious level? The original comment, I guess, is aimed again at the
low-level workings.
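
(For the record, the conscious-level version really is just a couple of
lines of arithmetic. A quick Python sketch, with made-up numbers purely for
illustration; the point stands that the other 90% of the mind doesn't get
updated this way.)

# Bayes' rule by hand: P(H|E) = P(E|H) * P(H) / P(E).
p_h = 0.01            # prior belief in the hypothesis
p_e_given_h = 0.9     # chance of the evidence if the hypothesis is true
p_e_given_not_h = 0.05

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 3))  # 0.154 -- the updated belief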

>
>> We cannot integrate new hardware.
>
> On the contrary, we are THE tool-using animal; we've been augmenting
> ourselves with technologies ranging from clothing to weapons to writing as
> long as we've been human.

I guess the key word here is integrate. If you wanted to expand your brain's
current working memory from, say, 6 chunked simultaneous items to, say, 100,
how are you going to provide that using current technology, much less fully
integrate it into your mind in a totally transparent way?

Now sure, you can get into fancy stuff like nanobots interfacing directly
with neurons, etc. Maybe, but most likely, in order to truly transcend the
current human mind limitations, you would have to completely abandon the meat
and become something AGI-esque yourself. As long as the meat is in the
picture it will impose fixed architectural limitations which will prevent
fully integrating certain mental enhancements.

>
>> When we 'learn'
>> new things, our brain structure barely changes.
>
> It changes enough to know the thing we learned!

Again, not quite aimed at the original point, which is about not being able
to rearchitect the mind significantly around important new information. As an
example, if you learn a second spoken language after you have already grown
up, you can learn the words and "learn" how to make them sound, but it's
never as good as if it were your first language. Your brain has limitations
on its flexibility when it comes to some core items.

>
>> We cannot instantly
>> share memories.
>
> We have this skill called "speech" that allows us, alone of the animals,
> to do just that, mod only a difference in speed to what we imagine for
> machines...

Again, aiming off the bullseye. Attempting to explain to someone the
particular clouds you saw yesterday, the particular colors of the sunrise,
etc., you of course cannot transfer the full information to them. A second
example would be skills, which could easily be shared among AGIs but cannot
be shared between humans.

>
>> We cannot internally reassign computing power to
>> specific cognitive modules.
>
> In fact this happens automatically; a major shift takes a week or so under
> the force of constant practice of whatever it is we need the extra
> horsepower for.

I kind of addressed this above. Perhaps in some extremely limited ways humans
can bring to bear a shifting amount of mental resources, but generally, no,
they cannot reassign large chunks of neurons from particular cognitive
modules to other tasks. Your visual cortex cannot shift on the fly to enable
another 8 bits of working memory, or to quickly load in and store an extra
few gigabytes of data relating to a problem.

>
>> We cannot run ourselves on silicon, whose
>> transistors switching speeds are millions of times that of neurons.
>
> Personally I intend to, assuming they're still using silicon by the time I
> get around to it, maybe ca. 2030. But again that's just a speed
> difference, to begin with, anyway. Other benefits, like not dying, come
> later.

Nevertheless it is a current difference, and by your own timeline the AGI(s)
will have this advantage for quite some time before humans may be able to use
it.

>
>> Brian, Josh is saying that no matter how many ops/sec or
>> self-reprogramming or cognitive complexity or smartness an AI engineer
>> has, it will always be the same as a human engineer.  This is a much
>> stronger claim than saying that AI will undergo a soft rather than
>> hard takeoff.
>
> You have to be careful about that word "same". What I am saying is that
> there is a level of intelligence universality such that the important
> difference is speed. It is the same sense in which an 8080 is the "same"
> as Blue Gene/L -- they are both Turing universal (given access to an
> infinite tape).

Except they don't have such access, and instead have widely differing
ceilings.

> Indeed an 8080 will always be the same, in that sense, as any
> supercomputer (even a quantum one; we're talking about computability, not
> tractability).
>
> Is there a practical difference? Of course. In fact, it's a bit subtle to
> understand where the fact of universality has any practical impact. What
> it means for Turing universality is that either machine can run any
> program the other one can (either translated or interpreted).
>
> What it means for humans vis-à-vis hyperhuman AIs is that we will always
> be their equal IF UPLOADED ONTO A FAST ENOUGH PROCESSOR (with enough
> memory, etc., etc.)

Or conversely, that SI AIs will be greatly unequal to unuploaded/unhacked
humans.
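
(To pin down how little universality by itself buys, here's a toy
interpreter for a made-up three-instruction machine, in Python. Any host
that can run this loop can run any program written for the little machine,
which is all Turing equivalence guarantees; a slower host with less memory
just does so more slowly, with lower ceilings. Illustrative only.)

# A miniature register machine. Universality means this loop can run
# any program written for the machine; it says nothing about speed.
def run(program, regs):
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]
        if op == "inc":        # regs[a] += 1
            regs[a] += 1
            pc += 1
        elif op == "dec":      # regs[a] -= 1
            regs[a] -= 1
            pc += 1
        elif op == "jnz":      # jump to b if regs[a] != 0
            pc = b if regs[a] != 0 else pc + 1
    return regs

# Example program: move r1 into r0, i.e. compute 2 + 3.
prog = [("dec", "r1", 0), ("inc", "r0", 0), ("jnz", "r1", 0)]
print(run(prog, {"r0": 2, "r1": 3}))  # {'r0': 5, 'r1': 0}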

>
> The key architectural insight here is that humans learn by building new
> modules and integrating them seamlessly into the set of modules we already
> have. Think about reading -- a (laboriously) learned skill, but one which
> we perform quite effortlessly, and indeed faster than the genetically
> endowed skill of hearing speech. That's why I laugh whenever I hear the
> term "codic cortex" -- any good human programmer already has a module that
> does exactly what's needed, and there's no hint that AI is going to come
> even close until it nails the general intelligence problem. Automatic
> programming has a 50-year history, and is pretty much the subfield that's
> made the LEAST progress in that time.
>
> Compare that with chess, where the learned chess module of a human is
> about equal to a supercomputer with specialized hardware, but where the
> problem is simple enough that we know how to program the supercomputer.

I'm not really sure what this segment of text is getting at. Are you claiming
that humans can always laboriously create mind modules that will allow them
to perform feats equal to an AGI's? Yet as you point out, current PCs using
_human-written_ modules can beat the very best humans. When we add more
capability to the PC, and the AGI running on it has full rein to optimize the
module, do you think this setup will still be approachable by human chess
players? Or will the AGI setup at that point be a fundamentally better player
that will always be able to easily outmatch any human, because it can
implement and improvise strategies impossible for a human to contain or
chunk?

...

>
> Hard take-off is a fantasy sponsored by certain people and organizations
> who stand to profit by people's being concerned. There's no credible
> evidence that such a thing is even possible, much less likely. I've
> studied AI at the postgraduate level for 40 years; believe me, there are
> lots of major disagreements in the field and there are people who will
> listen to any reasonable idea. NO ONE with a serious research background
> in AI subscribes to the hard take-off idea.
>

Well, unfortunately for AGI researchers, this item is an existential risk in
many scenarios. That puts a certain onus on them, IMO, to provide proof of a
very high level of safety. Physicists can do this fairly well when it comes
to determining whether the supercollider-flavor-of-the-decade will create an
Earth-destroying particle or not. I'm afraid that without something quite a
bit stronger than the handwaving quoted above, I must indeed rationally
remain a bit concerned.

Our organization, for one, would certainly welcome research proposals to
really try and work on this issue, and see what the limitations really might
be. Currently all I see is a very large and rapidly growing, very insecure
network of rapidly improving computers out there, ripe for the picking by the
first smart-enough AGI. What comes after that, I think, is currently
unprovable. We do know, of course, that relatively small tweaks in brain
design led from apes to us, a rather large difference in capabilities that
did not apparently require too many more atoms in the skull. Your handwaving
regarding universality aside, this also worries me.

...

>
> IQ tests measure *something* that has highly significant correlations to
> criminality, academic success, earning potential, and so forth. Like any
> scientific property, this was originally based on some intuitive notions;
> a century of research has refined them; I expect they'll be refined
> further in the future.
>
> On the other hand, the notion of what a "superintelligence" will or won't
> be able to do appears to be based entirely on unfounded speculation and
> some very shaky analogies. The actual experience we have with entities of
> arguably greater than human intelligence is with organizations like
> corporations. I've seen no substantive arguments from the "superAI takes
> over" side in support of any model that I find remotely more plausible.

Your analogy rests solely on your own model of a fairly interchangeable X
number of humans = one AGI of given computing resources.

It absolutely breaks if the AGI instead can think thoughts that no human or
organized human group can plausibly come up with. That would be a qualitative
difference in capabilities, which the humans could only match by creating an
equally powerful AGI to use as their "tool", or by upgrading themselves past
the point where they could still claim to be human.

Much of your argumentation seems to rely on groups of AGIs forming,
interacting with society for the long term, etc., but it seems to completely
dismiss the idea of an initial singleton grabbing significant computing
resources and then going further. The problem I have with this is that the
"story" you want to put across relies on multiple things all going just right
in order for the overall story to turn out just so. That is highly unlikely,
of course. I feel it is safer to try to work out what the real maximum limits
might be in terms of possible events, and ask (and prove) what exactly
prevents those from happening instead.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

