It seems to me that a lot of the us-against-them-or-it flavor of this
conversation is based on the assumption that both machine AI and human
consciousness are fixed, static qualities/entities/factors.

Let me put forth another scenario, which is that AI does not become them,
but rather joins in a symbiotic partnership with wetware intelligences (us)
to become something else.  I think this is a lot more likely than the
scenario that pure computer processing achieves consciousness.

Symbiosis is the basis for just about every major evolutionary advance in
the history of life.  Is it that hard to believe that a silicon/carbon
symbiosis might constitute the next punctuated advance in evolution?

Not that we're thinking small here.

C. David Noziglia
Object Sciences Corporation
6359 Walker Lane, Alexandria, VA
(703) 253-1095

    "What is true and what is not? Only God knows. And, maybe, America."
                                  Dr. Khaled M. Batarfi, Special to Arab
News

    "Just because something is obvious doesn't mean it's true."
                 ---  Esmerelda Weatherwax, witch of Lancre


----- Original Message -----
From: "Brad Wyble" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, March 03, 2003 5:47 PM
Subject: Re: [agi] Playing with fire


>
> > Extra credit:
> > I've just read the Crichton novel PREY. Totally transparent movie-script,
> > but a perfect textbook on how to screw up really badly. Basically the
> > formula is 'let the military finance it'. The general public will see this
> > inevitable movie and we will be drawn towards the moral battle we are
> > creating.
> >
> > In early times it was the 'tribe over the hill' we feared. Communication
> > has killed that. Now we have the 'tribe from another planet' and the
> > 'tribe from the future' to fear, and our fears play out just as powerfully
> > as at any time in our history.
>
> Note: I'm not arguing for or against AI here, just bringing to light some
> personal observations.
>
>
> This particular situation is different from the others you describe (tribe
> over the hill). To accept the dangers of AI, one must first swallow racial
> pride and admit that we are not the top dogs in the universe. Few people
> are willing to do this, even among well-educated, science-minded engineers.
> I just tested this topic on my group of internet friends in a private forum
> with 20-some people. I was unable to convince a single person that this
> danger is real with a day's worth of intensive back-and-forth discussion.
> They assumed the typical "we can just control it" mentality that has always
> been prevalent. Notice that even in gloomy bad-AI stories such as
> Terminator and The Matrix, the humans always win in the end. This is what
> the mainstream will believe because they want to believe it.
>
> In other words, I don't think the public is going to care one iota about
> the dangers of AI. They'd prefer to focus their energy on banning truly
> harmless technologies, such as cloning. People fear clones because, as far
> as they are concerned, clones are people too, so we're dealing with an
> equal, and can lose. But AIs are just "machines"; they can be
> "out-smarted" or "out-evolved" as far as the average person is concerned.
>
> The upside is that AI researchers won't have to fight to keep their
> research legal.
>
> The downside of this is that we're more likely to destroy ourselves.
>
>
> -Brad
>
> -------
> To unsubscribe, change your address, or temporarily deactivate your
> subscription,
> please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
>

