Hi Daniel,

On Wed, 12 Feb 2003, Daniel Colonnese wrote:

> Bill Hibbard wrote:
>
> >We better not make them in our own image. We can make
> >them with whatever reinforcement values we like, rather
> >than the ones we humans were born with. Hence my often
> >repeated suggestion that they reinforce behaviors
> >according to human happiness.
>
> Hey, I'm a relatively new subscriber.  I know that we are talking about
> how to make an AI system friendly, but has anyone considered the
> opposite -- building AI weapons?
>
> In the US anyway, a significant portion of AI research funding comes
> from the military, and this current administration is significantly
> increasing the budget for military R&D.
>
> There are all kinds of possibilities for intelligent attacks on
> communication networks, scenario modeling, evil robot armies, etc.
>
> The question I'm trying to raise is not necessarily how we build AI
> weapons, but rather how we get the gov't to give up lots of money for
> AGI research.  Evil Computing is the most logical answer.

Governments around the world have agreed, by treaty, to give
up nuclear, biological, and chemical weapons. This hopefully
creates a precedent for giving up AGI weapons. I discuss this
in the "Gods of War" section of my book.

Current "smart weapons" can actually reduce accidental
civilian casualties by hitting what they're aimed at,
and the military is gung ho for this technology. But
for real intelligence human friendliness is imperative
and weapons are out.

In any case, we will need a public movement for safe AGI
similar to the movements for safe autos, safe food and
safe household products.

Cheers,
Bill
----------------------------------------------------------
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI  53706
[EMAIL PROTECTED]  608-263-4427  fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html
