J. Storrs Hall wrote:
On Monday 04 December 2006 07:55, Brian Atkins wrote:

Also, what is really the difference between an Einstein/Feynman brain, and someone with an 80 IQ?

I think there's very likely a significant structural difference and the IQ80 one is *not* learning universal in my sense.

So there is some group of humans you would say don't pass your learning universal test. Now, of the group that does pass, how big is that group roughly? The majority of humans? (IQ 100 and above) Whatever the size of that group, do you claim that any of these learning universalists would be capable of coming up with Einstein-class (or take your pick) ideas if they had been in his shoes during his lifetime? In other words, if they had access to his experiences, education, etc.

I would say no. I'm not saying that Einstein is the sole human who could have come up with his ideas, but I am saying that it's unlikely someone with an IQ of 110 would be able to do so even if given every help. I would say there are yet more differences in human minds, beyond your learning universal idea, which separate us and which make the difference, for example, between an IQ of 110 and 140.


For instance, let's say I want to design a new microprocessor. As part of that process I may need to design a multitude of different circuits, test them, and then integrate them. To humans, this is not a task that can run on autopilot.... What if, though, I find doing this job kinda boring after a while and wish I could split off a largish chunk of my cognitive resources to chew away on it at a somewhat slower speed, unconsciously, in the background, and get the work done while the conscious part of my mind surfs the web? Humans can't do this, but an AGI likely could.

At any given level, a mind will have some tasks that require all its attention and resources. If the task is simple enough that it can be done with a fraction of the resources (e.g. driving), we learn to turn it into a habit / skill and do it more or less subconsciously. An AI might do that faster, but we're assuming it could do lots of things faster. On the other hand, it would still have to pay attention to tasks that require all its resources.

This isn't completely addressing my particular scenario. Let's say we have a roughly human-level AGI, and it has to work on a semi-repetitive design task, the kind of thing a human is forced to stare at a monitor for, yet which doesn't take their full absolute maximum brainpower. The AGI should theoretically be able to divide its resources in such a way that the design task can be done unconsciously in the background, while it uses whatever resources remain to do other stuff at the same time.

The point being that although this task takes only part of the human's max abilities, by their nature humans can't split it off, automate it, or otherwise escape letting some brain cycles go to "waste". The human mind is too monolithic in such cases, which go beyond simple habits yet are below max output.
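To make the kind of split I have in mind concrete, here's a toy Python sketch. Everything in it (the evaluate_circuit function, the numbers, the thread-pool split) is just my own illustration of the idea, not a claim about how a real AGI would be architected:

# Toy illustration only: a "mind" farms out a repetitive design-search task
# to a background worker while the foreground stays free for other work.
import concurrent.futures
import time

def evaluate_circuit(candidate: int) -> float:
    """Stand-in for a slow, semi-repetitive design evaluation."""
    time.sleep(0.01)                 # pretend this costs real effort
    return -abs(candidate - 42)      # higher is better; 42 is the "best" design

def background_design_search(candidates):
    """Chew through candidates unconsciously, at a somewhat slower pace."""
    return max(candidates, key=evaluate_circuit)

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(background_design_search, range(100))
    # Meanwhile the "conscious" part is free to do something else.
    print("foreground: surfing the web...")
    print("background found design:", future.result())

The point is only that a software mind could, in principle, hand a semi-repetitive chunk of work to a slower background process and keep the "conscious" part free, which is exactly what we can't do.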


Again, aiming off the bullseye. Attempting to explain to someone about the particular clouds you saw yesterday, the particular colors of the sunrise, etc., you can of course not transfer the full information to them. A second example would be with skills, which could easily be shared among AGIs but cannot be shared between humans.

Actually the ability to copy skills is the key item, imho, that separates humans from the previous smart animals. It made us a memetic substrate. In terms of the animal kingdom, we do it very, very well. I'm sure that AIs will be able to do so as well, but it's probably not quite as simple as copying a subroutine library from one computer to another.

The reason is learning. If you keep the simple-copy semantics, no learning happens when skills are transferred. In humans, a learning step is forced, contributing to the memetic evolution of the skill.

IMO, AGIs plausibly could transfer full, complete skills, including whatever learning is part of them. It's all computer bits sitting somewhere, and those bits should be transferable and then integrable on the other end.

If so, this is something far more powerful, new, and distinct than a newbie tennis player watching a pro and trying over a period of years to learn how to serve that well, or a math student trying to learn calculus. Even aside from the dramatic difference in time scale, humans can never transfer their skills fully and exactly, in anything like a lossless fashion.
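To illustrate what I mean by skills being "computer bits sitting somewhere", here's a toy sketch; the Skill class and its fields are made up purely for illustration, and a real AGI would obviously still have to integrate the copied skill with whatever it already knows:

# Toy sketch of "skill copying as bit copying": one agent's learned
# parameters are serialized and loaded verbatim by another, with no
# re-learning step forced on the receiving end.
import pickle
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    parameters: dict = field(default_factory=dict)   # whatever was learned

# Agent A learned this the slow way...
serve = Skill("tennis_serve", {"toss_height": 2.3, "racket_angle": 17.0})

# ...then transfers it losslessly.
blob = pickle.dumps(serve)      # the skill really is just bits
copied = pickle.loads(blob)     # Agent B now holds the identical skill
assert copied == serve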


Currently all I see is a very large and rapidly growing, very insecure network of rapidly improving computers out there, ripe for the picking by the first smart enough AGI.

A major architectural feature of both the brain and existing supercomputers is that the majority of the structure/cost is in the communications fabric, not the processors themselves. A botnet using residential internet connections would be immensely hobbled. (It would be different, of course, if it took over someone's Blue Gene...)

I haven't examined the figures closely lately, but my guess would be that no matter which particular bottleneck you want to focus on, that bottleneck is already large enough to allow for interesting things, and of course these things are all improving quickly. I think the aggregate backbone capacity of the internet has been doubling faster than the Moore's Law rate, but I don't have the figures handy to back that up.
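Just to show why the doubling rate matters, here's the back-of-envelope arithmetic; the doubling times below are illustrative assumptions, not the actual figures I just said I don't have handy:

# Back-of-envelope only: how much a faster doubling time compounds over a decade.
moore_doubling_months = 18          # assumed Moore's Law pace
backbone_doubling_months = 12       # assumed (faster) backbone-capacity pace

years = 10
moore_growth = 2 ** (years * 12 / moore_doubling_months)        # ~100x
backbone_growth = 2 ** (years * 12 / backbone_doubling_months)  # ~1024x
print(f"{moore_growth:.0f}x vs {backbone_growth:.0f}x over {years} years")

Even a modestly faster doubling time compounds into an order-of-magnitude gap over a decade.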


What comes after that I think is unprovable currently. We do know of course that relatively small tweaks in brain design led from apes to us, a rather large difference in capabilities that did not apparently require too many more atoms in the skull. Your handwaving regarding universality aside, this also worries me.

There we will just have to disagree. I see one quantum jump in intelligence from Neanderthals to us. Existing AI programs are on the Neanderthal side of it. We AGIers are hoping to figure out what the Good Trick is and copy it, making our computers as innovative and adaptable as we are, relative to whatever processing power they may have. I don't see any reason to assume there is another Good Trick waiting for them after that (and it doesn't seem likely they'll need it!).

Minsky claims you could do an AI on a 486. I think he's wrong: the thing that makes us have to have brains with the computing power of a $30M supercomputer is learning, which is computationally expensive. So I don't think an AI will be able to improve itself all that much faster than a human can until it has substantially more processing power than we do. That's at least a few decades out.

Mmm hmm. Yes, well, we do currently have a disagreement here. I, on the other hand, find it very unlikely that humans, who have just barely peeked over the hill of real smarts, will turn out to have anything like the best setup for learning, creating, and so on. I don't see a lot of convincing evidence that your viewpoint is likely correct, but perhaps this will be in your book? In the meantime, because of the potential existential risk, I must err more towards caution, along the lines of the arguments Bostrom presents here:

http://www.nickbostrom.com/astronomical/waste.html


Much of your argumentation seems to rely on groups of AGIs forming, interacting with society for the long term, etc., but it seems to completely dismiss the idea of an initial singleton grabbing significant computing resources and then going further. The problem I have with this is that the "story" you want to put across relies on multiple things that would have to go just right in order for the overall story to turn out just so. This is highly unlikely of course.

It would be if it required everybody working on AI to conform to some specific plan or set of constraints, because no such thing will happen. The proper course for any of us is to see what we can do *in spite* of the fact that there will be a lot of (corporate, military) AIs out there that are NOT built according to our recipe. The obvious thing to start with is the social and moral constructs that humans have developed under essentially the same constraints. Morality works: that's why it could evolve.


While I sort of agree with the idea of trying to get out in front of others who may not be playing nice with their AGI designs (AGI arms race anyone?), I don't see how this gets back to answering the original discussion point you snipped: why "SuperAI takes over" isn't possible at all. Yes, you've got your particular ideas on how to keep your one single AGI nice and friendly. But as you point out above, there could be one that arrives before yours, or in your "AGI society" story there will be a lot of competing AGIs. Why again can't one of them silently root a bunch of boxes and rapidly outgrow the others?
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/
