> Steve, Ben, do you have any gauge as to what kind of grants are hot
> right now or what kind of narrow AI projects with AGI implications have
> recently been funded through military agencies?

The list would be very long.  Just look at the DARPA IPTO website for
starters...

http://www.darpa.mil/ipto/


> > And while I'm not entirely optimistic about the practicality of
> > building ethics into AI's, I think we should certainly try, and that
> > rules military funding right out.
>
> Yeah, it seems like somewhat of a *moral compromise* to pursue narrow AI
> research funding with the hope of doing work which may help to one day
> create AGI.  Or as Sartre said:

I don't agree that receiving military funding for specific purposes rules
out creating an ethical AGI, nor that doing narrow AI work is a moral
compromise.

For example, suppose one accepts military funding to create an AI
application aimed at computer network security.

Suppose one creates this application using parts of one's in-development AGI
codebase.  But, suppose one retains ownership of one's AGI codebase.

Where's the ethical dilemma here?  In the fact that, theoretically, the
military could take one's computer security code and repurpose it for
violent intents?  There is a bit of an ethical dilemma here, but it is a
narrow-AI ethical dilemma, not an AGI ethical dilemma, because one may
still train one's AGI oneself, using one's own ethical principles...

-- Ben

