I'm on the road, so I'll have to give short shrift to this, but I'll try to 
hit a few high points:

On Monday 04 December 2006 07:55, Brian Atkins wrote:

> Putting aside the speed differential, which you accept but dismiss as
> unimportant for RSI, isn't there a bigger issue you're skipping regarding
> the other differences between an Opteron-level PC and an 8080-era box? For
> example, there are large differences in the addressable memory amounts. ...
> Does it multiply with the speed differential?

I don't think the speed differential is unimportant -- but you need to 
separate it out to do certain kinds of theoretical analysis, the same way 
you need to consider Turing completeness in a computer.

ANY computer has a finite amount of memory and is thus an FSA, not a Turing 
machine. To talk about Turing-completeness you have to assume that it will 
read and write to an (unbounded) outboard memory. Once you've abstracted the 
memory away like that, what remains between, say, an 8080 and an Opteron is 
a constant factor of speed.
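
To put a rough number on that constant factor, here's a back-of-envelope 
sketch in Python. The clock rates and instructions-per-cycle figures are 
guesses for illustration, not measurements:

    # Rough, illustrative numbers only -- real throughput depends on
    # memory, IPC, and workload, but the point is the ratio is a constant.
    HZ_8080    = 2.0e6      # ~2 MHz 8080 (assumed)
    HZ_OPTERON = 2.4e9      # ~2.4 GHz Opteron (assumed)
    IPC_8080    = 0.1       # crude guess: ~10 cycles per instruction
    IPC_OPTERON = 2.0       # crude guess: superscalar core

    ops_8080    = HZ_8080 * IPC_8080        # instructions per second
    ops_opteron = HZ_OPTERON * IPC_OPTERON

    print("constant factor: about %.0fx" % (ops_opteron / ops_8080))
    # Whatever the workload, the same program just finishes that many
    # times sooner -- it computes nothing the 8080 (plus unbounded
    # outboard storage) could not compute, given enough time.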

> Also, what is really the difference between an Einstein/Feynman brain, and 
> someone with an 80 IQ?

I think there's very likely a significant structural difference, and the 
IQ-80 one is *not* learning-universal in my sense.

> For instance, let's say I want to design a new microprocessor. As part of
> that process I may need to design a multitude of different circuits, test
> them, and then integrate them. To humans, this is not a task that can run
> on autopilot....
> What if though I find doing this job kinda boring after a while and wish I
> could split off a largish chunk of my cognitive resources to chew away on
> it at a somewhat slower speed unconsciously in the background and get the
> work done while the conscious part of my mind surfs the web? Humans can't
> do this, but an AGI likely could.

At any given level, a mind will have some tasks that require all its attention 
and resources. If a task is simple enough that it can be done with a fraction 
of the resources (e.g. driving), we learn to turn it into a habit / skill and 
do it more or less subconsciously. An AI might do that faster, but we're 
assuming it could do lots of things faster. On the other hand, it would still 
have to pay attention to tasks that require all its resources.
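
A toy way to put the point, with entirely made-up numbers: model attention 
as a fixed budget, where only tasks well under the budget can be habitized 
into the background. The capacity and task costs below are hypothetical:

    # Toy model: a mind has a fixed capacity; a task can run in the
    # background only if it leaves most of the budget free for
    # foreground work. All numbers are invented for illustration.
    CAPACITY = 100

    tasks = {
        "driving (habitized skill)":    15,
        "circuit design (novel, hard)": 95,
    }

    for name, cost in tasks.items():
        if cost < CAPACITY * 0.5:
            print("%s: can run subconsciously in background" % name)
        else:
            print("%s: demands (nearly) full attention" % name)
    # An AI at any fixed capability level faces the same tradeoff:
    # tasks at *its* frontier still eat its whole budget.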

> Again, aiming off the bullseye. Attempting to explain to someone about the
> particular clouds you saw yesterday, the particular colors of the sunrise,
> etc. you can of course not transfer the full information to them. A second
> example would be with skills, which could easily be shared among AGIs but
> cannot be shared between humans.

Actually, the ability to copy skills is, imho, the key item that separates 
humans from the previous smart animals. It made us a memetic substrate. In 
terms of the animal kingdom, we do it very, very well. I'm sure that AIs 
will be able to as well, but it's probably not quite as simple as copying a 
subroutine library from one computer to another.

The reason is learning. If you keep simple-copy semantics, no learning 
happens when skills are transferred. In humans, a learning step is forced, 
which contributes to the memetic evolution of the skill.
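
Here's a sketch of that distinction in Python, with a toy "skill" 
represented as a parameter list; copy_transfer and learned_transfer are 
hypothetical names for illustration, not anyone's actual proposal:

    import random

    # A "skill" as a list of parameters (toy stand-in).
    teacher_skill = [0.8, 0.3, 0.5]

    def copy_transfer(skill):
        # AI-style transfer: bit-for-bit copy. No learning step, so
        # no variation, and nothing for selection to act on.
        return list(skill)

    def learned_transfer(skill, noise=0.1):
        # Human-style transfer: the student reconstructs the skill
        # from demonstrations. The forced learning step is lossy --
        # which is exactly what feeds memetic evolution.
        return [p + random.gauss(0, noise) for p in skill]

    print(copy_transfer(teacher_skill))     # identical
    print(learned_transfer(teacher_skill))  # a variant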

> > Compare that with chess, where the learned chess module of a human is
> > about equal to a supercomputer with specialized hardware, but where the
> > problem is simple enough that we know how to program the supercomputer.
>
> I'm not really sure what this segment of text is getting at. Are you
> claiming that humans can always laboriously create mind modules that will
> allow them to perform feats equal to an AGI?

No, of course not, when the AI is running on vastly superior hardware. All I 
was saying is that we seem to be able to compile our skills into a form that 
runs about as efficiently, on the hardware we do have, as the innate skills 
do (modulo cases where the actual neurons are specialized).

> Currently all I see is a very large and rapidly growing very insecure
> network of rapidly improving computers out there ripe for the picking by
> the first smart enough AGI.

A major architectural feature of both the brain and existing supercomputers is 
that the majority of the structure/cost is in the communications fabric, not 
the processors themselves. A botnet using residential internet connections 
would be immensely hobbled. (It would be different, of course, if it took 
over someone's Blue Gene...)
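
Some back-of-envelope arithmetic on why the botnet is hobbled. The link 
figures below are rough 2006-era assumptions, not measurements:

    # Crude per-node comparison of communications fabric.
    # Residential broadband, ca. 2006 (assumed):
    botnet_link_bps  = 256e3    # ~256 kbit/s uplink per node
    botnet_latency_s = 50e-3    # ~50 ms between nodes

    # Supercomputer interconnect (assumed, order of magnitude):
    super_link_bps  = 10e9      # ~10 Gbit/s per node
    super_latency_s = 5e-6      # ~5 microseconds

    print("bandwidth ratio: ~%.0fx" % (super_link_bps / botnet_link_bps))
    print("latency ratio:   ~%.0fx" % (botnet_latency_s / super_latency_s))
    # Roughly 40,000x less bandwidth and 10,000x more latency per link:
    # for tightly coupled, brain-like workloads, the botnet's raw CPU
    # count buys very little.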

> What comes after that I think is unprovable currently. We do know of
> course that relatively small tweaks in brain design led from apes to us, a
> rather large difference in capabilities that did not apparently require
> too many more atoms in the skull. Your handwaving regarding universality
> aside, this also worries me.

There we will just have to disagree. I see one quantum jump in intelligence 
from Neanderthals to us. Existing AI programs are on the Neanderthal side of 
it. We AGIers are hoping to figure out what the Good Trick is and copy it, 
making our computers as innovative and adaptable as we are, relative to 
whatever processing power they may have.

I don't see any reason to assume there is a whole nother Good Trick waiting 
for them after that (and it doesn't seem likely they'll need it!).

Minsky claims you could do an AI on a 486. I think he's wrong: the thing that 
makes us have to have brains with the computing power of a $30M supercomputer 
is learning, which is computationally expensive. So I don't think an AI will 
be able to improve itself all that much faster than a human can until it has 
substantially more processing power than we do. That's at least a few decades 
out.
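
For scale, a rough Python calculation using commonly cited (and much 
disputed) brain-compute estimates; every number here is an assumption:

    # Back-of-envelope only; brain-equivalent estimates vary by orders
    # of magnitude in the literature, so treat these as placeholders.
    brain_ops      = 1e16   # assumed ops/sec for brain-equivalent learning
    pc_486_ops     = 5e7    # assumed ~50 MIPS for a fast 486
    super_2006_ops = 3e14   # assumed ~300 TFLOPS-class machine (~$30M)

    print("486 shortfall:          %.0e x" % (brain_ops / pc_486_ops))
    print("2006 supercomputer gap: %.0f x" % (brain_ops / super_2006_ops))
    # On these assumptions a 486 is ~8 orders of magnitude short, and
    # even a top 2006 machine is still 1-2 orders shy -- hence the claim
    # that really fast self-improvement waits on more hardware.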

> Much of your argumentation seems to rely on groups of AGIs forming,
> interacting with society for the long term, etc., but it seems to
> completely dismiss the idea of an initial singleton grabbing significant
> computing resources and then going further. The problem I have with this
> is that this "story" you want to put across relies on multiple things
> that would have to go just right in order for the overall story to turn
> out just so. This is highly unlikely of course.

It would be if it required everybody working on AI to conform to some specific 
plan or set of constraints, because no such thing will happen. The proper 
course for any of us is to see what we can do *in spite* of the fact that 
there will be a lot of (corporate, military) AIs out there that are NOT built 
according to our recipe. The obvious thing to start with is the social and 
moral constructs that humans have developed under essentially the same 
constraints. Morality works: that's why it could evolve.

--Josh
