On Wed, 12 Feb 2003, Brad Wyble wrote:

>
> I can't imagine the military would be interested in AGI, by its very
> definition.  The military would want specialized AI's, constructed
> around a specific purpose and under their strict control.  An AGI goes
> against everything the military wants from its weapons and agents.  They
> train soldiers for a long time specifically to beat the GI out of them (har
> har, no pun intended) so that they behave in a predictable manner in a
> dangerous situation.

> And while I'm not entirely optimistic about the practicality of
> building ethics into AI's, I think we should certainly try, and that
> rules military funding right out.

As an employee at Cycorp, a DARPA sub-contractor, and as project manager
for Cycorp's Terrorism Knowledge Base participation in the DARPA GENOA II
program, I believe that the military would be very interested in AGI
*because* of its definition.  A hierarchical military AGI would bottom out
in weapon systems, but its General aspect facilitates coordination at the
battlespace level, involving forces from all services and allies.

With the growing military acceptance of Effects-Based Operations (EBO),
national objectives assigned to our military can be accomplished by means
other than putting metal on a target.  EBO is implemented with Bayesian
nets, which I imagine will be in the toolbox of any AGI group posting here.
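To make the Bayesian-net connection concrete, here is a minimal sketch of
effects-based reasoning as a tiny action -> effect -> objective chain.  The
node names and probabilities are purely hypothetical illustrations of the
idea, not drawn from any actual EBO or Cyc system:

```python
# Hypothetical three-node Bayesian net: strike -> comms_down -> objective.
# All probabilities below are made-up values for illustration only.

P_COMMS_DOWN_GIVEN_STRIKE = {True: 0.8, False: 0.1}   # P(effect | action)
P_OBJECTIVE_GIVEN_COMMS = {True: 0.7, False: 0.2}     # P(objective | effect)

def p_objective_given_strike(strike):
    """P(objective | strike), marginalizing over the intermediate effect."""
    total = 0.0
    for comms_down in (True, False):
        p_effect = (P_COMMS_DOWN_GIVEN_STRIKE[strike]
                    if comms_down
                    else 1.0 - P_COMMS_DOWN_GIVEN_STRIKE[strike])
        total += p_effect * P_OBJECTIVE_GIVEN_COMMS[comms_down]
    return total

# Comparing actions by their predicted effect on the objective:
# p_objective_given_strike(True)  -> 0.60
# p_objective_given_strike(False) -> 0.25
```

The point of the sketch is that an EBO-style planner compares candidate
actions by their downstream probability of achieving the objective, rather
than by direct target damage.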

In opposition to the military aspect of your second point, I am very
comfortable with building ethics into an AI, and at the moment I subscribe
to the "Friendly AI" principles, which I believe can be straightforwardly
expressed in the Cyc symbolic logic vocabulary and whose causal goal
structure can be the basis of future active, self-improving Cyc behavior.
Furthermore, I believe that our culture entrusts the military with awesome
destructive power because of civilian oversight, legal constraints, and the
ethical structure developed and taught to military personnel of all ranks.
For operational ethics, I would certainly accept the teachings of our
military academies and staff schools, and I can provide web site links for
anyone interested.

Civilian oversight is already a reality for my AI work.  For example, the
GENOA II program is funded by the DARPA Information Awareness Office, whose
activities will be subject to congressional scrutiny and possible
termination if the current funding bill becomes law.

I believe that as evidence of AGI (e.g., software that can learn from
reading) becomes widely known: (1) the military will provide abundant
funding, possibly in excess of what commercial firms could do without a
consortium; and (2) public outcry will ensure that military AGI development
has civilian and academic oversight.

-Steve

-- 
===========================================================
Stephen L. Reed                  phone:  512.342.4036
Cycorp, Suite 100                  fax:  512.342.4040
3721 Executive Center Drive      email:  [EMAIL PROTECTED]
Austin, TX 78731                   web:  http://www.cyc.com
         download OpenCyc at http://www.opencyc.org
===========================================================
