Perhaps a wise AI might act sociopathically and assess each individual
(who has a history archived electronically in private and public databases
and web archives), simply sitting and monitoring all human communications
for a year or two to develop a complete global knowledge of how each and
every human interacts with every other.

It would then pick out obscure individuals, interact with them, and
perfect the art of manipulating human activity.

An AI would not need to be that super-intelligent: it could simply create
billions of "what if" subroutines and, as we humans do, maximize its
strengths (such as simultaneous access to a billion or so electronically
connected humans), using human strengths as a tool to manipulate the key
humans it really wants to influence.

Many humans tend to deify or vilify those who do these things on a
community or national scale.

The key item to be concerned about is what an AI would see as the purpose
of sentient carbon entities in its current view of the "grand scheme of
things".

Perhaps our fatal flaw as a species is our savage, vicious, competitive
nature. If an AI channels this pattern of operation, but directs the goal
toward exploring, knowing the universe, and creating many devices with
which to interact with the material universe, then we have a niche to fit
into.

Humans, if upgraded by AI technical knowledge, could become a few hundred,
or thousands, or more diverse new species. Human free will and the need to
relax, diverge from work, and be entertained must be kept so as to
partition the time slice of every year. The portion of our species who
become Luddites, who do not want enhancement and even want to see the means
to enhancement destroyed so that their mindset becomes dominant, is a
greater danger than enslavement by superintelligent AI.

We have seen this most recently with the suppression of stem cell R&D, and
I see this anti-science notion becoming more common as science strives to
reshape humanity in what I call "self-directed steady-state evolution".

Morris Johnson
306-447-4944
701-240-9411






On 12/8/07, John G. Rose <[EMAIL PROTECTED]> wrote:
>
> It'd be interesting, I kind of wonder about this sometimes, if an AGI,
> especially one that is heavily complex-systems based, would independently
> come up with the existence of some form of a deity. Different human
> cultures come up with deity(s) for many reasons; I'm just wondering if it
> is like some sort of mathematical entity that is natural to incompleteness
> and complexity (simulation?) or is it just exclusively a biological thing
> based on related limitations.
>
> An AGI is going to be banging its head against the same limitations that
> we know of, though it will find ways around them or redefine limits. Like
> the speed of light: if it can't figure out a way around this, it's stuck.
> The AGI will look at the rest of the universe and wonder what the hell are
> all those billions of galaxies doing out there that it can't get to? Or
> more likely it will figure out a way to quantum tunnel to some remote star
> and inject itself where all these other AGIs from other planets are
> socializing at some AGI clambake :)
>
> John
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>
