--- "John G. Rose" <[EMAIL PROTECTED]> wrote:

> > From: Tom McCabe [mailto:[EMAIL PROTECTED]
> > > The AGI is going to have to embed itself into some
> > > organizational bureaucracy in order to survive.  It'll appear
> > > friendly to individual humans but to society it will need to
> > > get itself fed, kind of like a queen ant, and we are the
> > > worker ants all feeding it.
> > 
> > What?! Why even bother? Humans have managed to feed
> > themselves, to more than feed themselves (look at all
> > this civilization stuff we've built!), and you think
> > an AGI ten thousand times smarter than us is going to
> > need to rely on us for basic resources?! After all,
> > humans still rely on the chimps to fetch our dinner.
> > Riiight.
> 
> Humans eat domesticated animals which beget domesticated animals.  In many
> parts of the world we still rely on animals and not machines to bring
> dinner, just like you say!

You're missing the point: obviously this specific
example isn't going to be completely accurate, because
an AI doesn't require dead organics. And it's not like
the animals are actively helping us; they just sit
there, growing, until we harvest them. And even if
all the animals did revolt and refuse to feed us,
there's always vegetarianism.

> It doesn't matter how smart the AGI is, it does have
> to survive in some sort
> of symbiotic relationship for some period of time
> before it turns into your
> AGI-Zilla and takes over the earth in a few seconds
> and then borgs the rest
> of the universe within 5 minutes.

Why? When humans evolved intelligence, we did not
exist in some sort of "symbiotic relationship" with
chimps for a few million years. We took technology and
ran with it. Before the AGI develops human-level
intelligence, of course it's going to depend on us,
just as we depended on the environment to provide us
with food and shelter before we became intelligent.
But after we became intelligent, in an evolutionary
blink of an eye we went from foraging to growing our
own food, building our own houses, and so on. We in
the first world now rely on the wilderness for very little.

> I was thinking about a more realistic scenario where
> people and computers
> sort of live together and thrive like what is
> happening now.

About as realistic as the new humans and the chimps
"living together and thriving", instead of the humans
developing technology and confining the chimps to
parks and zoos.

> But
> unfortunately the computers are embedded and
> reliance builds up, liberties
> are lost yet these are seen as not necessary.

What liberties? For whom? Sorry, you cannot be free of
dependence on technology without reverting to the
Stone Age, and even then you're still not free: you
are tied to your patch of savanna and can't leave it
without starving.

> The
> AGI needs to survive and
> an organization's people need to survive as well so
> perhaps corners are cut
> when things get tight, restrictions are loosened,
> AGI is given more liberty,
> etc.. this would never happen with the moral and
> ethical leadership that
> runs modern organizations though ;/ ...

A competent (not necessarily Friendly, just competent)
AGI organization would never cut corners when dealing
with AGI safety, because they'd know what the
consequences are. Even if the North Koreans built an
AGI, they wouldn't cut safety corners, for fear of
killing the Dear Leader.

> The AGI-Zilla scenario I don't think will happen
> within 20 years but within
> a few years (or even now) we'll have some semi-smart
> AGI-like softwares
> trying to embed.

Why would software "try" to do anything without
having motivation programmed into it? If you've
programmed on anything resembling a large project
before, you'll realize that 2020 software is going to
be very much like today's: it won't so much as
add two and three without a programmer adding in the
capability specifically.

> Though the AGI-Zilla scenario is
> very possible I suppose
> from a nano-tech perspective...
> 
> John
> 
> 
> > > Eventually it will become
> > > indispensable.  If an individual human rebels
> > > against it - like someone rebelling against IRS
> > > computers, good luck.  Once it is embedded it ain't
> > > going away except for newer and better versions.
> > > And then different bureaucracies will have their own
> > > embedded AGI's all vying for control.  But without
> > > some sort of economic feeding base the AGI's won't
> > > embed, they'll wane... it's a matter of survival.
> > 
> > Why wouldn't the AI simply take whatever it wants? If
> > you unleash a rogue AGI, by the time you take the five
> > seconds to pull the power cord, it's already gotten
> > out over the Internet, more than likely taken over
> > several nanotech and biotech labs, increased its
> > computing power several hundred fold, and planted
> > hundreds of copies of its own source code in every
> > writable medium should it ever get erased. In five
> > seconds. And that's not even a superintelligent AGI;
> > that's an AGI with a human intelligence level that
> > just thinks a few thousand times faster than we do.
> > 
> 




-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=8eb45b07
