Hi Stephen,

On Sun, 1 Dec 2002, Stephen Reed wrote:

> As one of the research groups in this forum, or elsewhere, begins to
> evidence AGI, then its management will have to decide how the US military
> will use it.  So the issue you raise is a general one.

First, I have no objection to current Cyc technology being used
to fight terrorism, because neither Cyc nor any other current
system is anywhere close to achieving real intelligence. Of
course, I do want to see any technology applied consistently
with the U.S. Constitution.

However, when technology does develop real intelligence, then
I think that resistance to military applications is necessary
and in fact presents the best opportunity for educating the
public to the dangers of machine intelligence. I will explain.

In my view, intelligent behavior cannot be described by any set
of rules explicitly written down by any programmers. Rather, the
behavior must be learned. Of course, the learning behavior will
be defined by rules written down by programmers, but that is
different from the learned behavior. By analogy, the DNA for
human brains is a set of rules for a learning architecture, but
not a set of rules for language or other intelligent behaviors,
which must be learned.

Learning is reinforced by a set of values, generally called
emotions in humans. Some behaviors are positively reinforced and
others are negatively reinforced. Human emotional values are
mostly for self-interest, although not all. This is nicely
described in Steven Pinker's How the Mind Works.

When we develop intelligent machines, the key to human safety
will be that their behaviors are positively reinforced by
human happiness and negatively reinforced by human unhappiness.
Of course there are lots of conflicts among humans. So machines
will learn intelligent behaviors for resolving those conflicts
equitably, just as legal systems require judges who can render
intelligent judgements. The best model is the love of a mother
for her children: she balances the interests of all her children
and focuses her energy where it is needed most.
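
The idea above, that a learner's policy is shaped entirely by what its
reward signal reinforces rather than by hand-written behavior rules, can
be sketched in a toy example. Everything here is invented for
illustration (the action names, the "happiness" rewards, the update
rule); no real system is being described:

```python
import random

random.seed(0)

# Hypothetical actions and the hidden "human happiness" reward each
# tends to produce. The agent never sees this table directly; it only
# observes noisy reward samples, one per action taken.
true_happiness = {"help": 1.0, "ignore": 0.0, "harm": -1.0}

values = {a: 0.0 for a in true_happiness}  # learned action values
alpha = 0.1  # learning rate

for step in range(1000):
    # Pure random exploration; a real learner would trade off
    # exploration against exploiting what it has learned so far.
    action = random.choice(list(values))
    reward = true_happiness[action] + random.gauss(0, 0.1)
    # Reinforcement update: nudge the stored value toward the
    # observed reward. No behavior rule is ever written down;
    # the preference for "help" emerges from the reward signal.
    values[action] += alpha * (reward - values[action])

best = max(values, key=values.get)
print(best)
```

The point of the sketch is the one made in the text: swap in a reward
table focused on corporate profit or on battlefield outcomes and the
same learning rules produce a very different learned behavior.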

The greatest danger in the development of intelligent machines
is that they will be built by corporations with learning values
focused narrowly on corporate profits (this corresponds very
closely with current applications of machine learning to financial
investing). Or they will be built by militaries with learning
values focused on killing enemies and preserving lives of
friendly soldiers.

It is important to generate public resistance before wealthy
organizations build intelligent machines with learning values
focused on narrow interests, rather than the happiness of all
humans. Military applications provide an opportunity to make
a clear analogy with nuclear, chemical and especially biological
weapons, where the public and responsible leaders already
understand the importance of banning such technologies.

There will eventually be a terrific political battle over the
values of intelligent machines. Powerful corporations will
want machines that serve their narrow interests, and national
security will motivate many to argue for unrestricted military
applications. On the other hand, democracy, education and the
free flow of information are increasing (although there are
certainly challenges). Hopefully as the technology matures, a
"Ralph Nader" of machine intelligence will raise the general
public awareness.

Cheers,
Bill
----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI  53706
[EMAIL PROTECTED]  608-263-4427  fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html
