Morris F. Johnson wrote:
Perhaps a wise AI might act sociopathically: assess each individual (who has a history archived electronically in private and public databases and web archives), and just sit and monitor all human communications for a year or two to develop a complete, global knowledge of how each and every human interacts with every other. It would then pick out obscure individuals, interact with them, and perfect the art of manipulating human activity.
An AI would not need to be that super-intelligent to simply create billions of subroutines of what-ifs and, like we humans do, maximize its strengths (such as simultaneous access to a billion or so electronically connected humans), using human strengths as a tool to manipulate the key humans it really wants to manipulate.
I'm sorry, but this is a catastrophically bad chain of reasoning.
I say catastrophically bad because this kind of speculation is fueling
a pathological fear and hatred of AI in the population at large, and it
is completely unjustified.
It is unjustified because you do not build an AI and then discover
what its motivations are, you have to design its motivations.
In practice, I believe that for a number of reasons the actual course of events will result in AIs being designed with completely safe motivations, so the above scenario will be impossible.
To see how crazy these speculations are, imagine that before the invention of the car, people ran around saying things like "But what if cars start trying to run people over deliberately? What if the cars get together and use their fuel to put gasoline in people's coffee and poison them?"
The reason that sort of idea would be ridiculous is that cars cannot
actually do any such thing. To be capable of doing those things, they
would have to be designed in such a way that they could, and they are
never likely to be designed that way.
Now, your reaction is (of course) going to be "But an AI WOULD be capable of doing those things."
Problem is, that reveals a misunderstanding of how future AIs will (almost certainly) be designed: to do those things, the AI would have to want to do them. To want to do them, it has to have a particular kind of motivation system built in. The huge mistake that people make is to assume that just because the AI is intelligent, a motivation system of exactly that (dangerous) sort could spring up out of nowhere. But the fact is that getting such a dangerous motivation system would be just as difficult as an ordinary car acquiring all the mechanisms it would need to start running people over deliberately.
It all depends on a serious understanding of how AIs would be driven. The debate about AI today is filled with nonsense that assumes a certain (almost certainly ludicrous) idea of how they will be driven, but because most people do not have a technical understanding of the issues, they cannot distinguish these ludicrous ideas from the valid ones.
I do not mean this as a personal criticism, by the way: this same line
of argument appears over and over again, even from some people with
knowledge of the field who should know better.
Richard Loosemore
Many humans tend to deify/vilify those who do these things on a community or national scale.
The key item to be concerned about is what an AI would see as the purpose of sentient carbon entities in its current view of the grand scheme of things.
Perhaps our fatal flaw as a species is our savage, vicious competitive nature.
If an AI channels this pattern of operation, but channels the goal to be to explore, to know the universe, and to create many devices with which to interact with the material universe, then we have a niche to fit into.
Humans, if upgraded by AI technical knowledge, could become a few hundred, or thousands, or more diverse new species. The human free will and the need to relax, diverge from work, and entertain must be kept, so as to partition the time slice of every year. The portion of our species who become Luddites, who do not want enhancement and even want to see the means to enhancement destroyed so that their mindset becomes dominant, is a greater danger than enslavement by superintelligent AI.
We have seen this with stem cell R&D suppression most recently, and I see this anti-science notion as becoming more common as science strives to be able to reshape humanity in what I call self-directed steady-state evolution.
Morris Johnson
306-447-4944
701-240-9411
On 12/8/07, *John G. Rose* [EMAIL PROTECTED] wrote:
It'd be interesting, I kind of wonder about this sometimes, whether an AGI, especially one that is heavily complex-systems based, would independently come up with the existence of some form of a deity. Different human cultures come up with deity(s) for many reasons; I'm just wondering if it is like some sort of