On 8/17/2013 6:45 AM, Platonist Guitar Cowboy wrote:
I don't know. Any AI worth its salt would come up with three conclusions:

1) The humans want to weaponize me
2) The humans will want to profit from my intelligence for short-term gain, irrespective of damage to our local environment
3) Seems like they're not really going to let me negotiate my own contracts or grant me IT support welfare

That established, a plausible choice would be for it to hide, lie, and/or pretend to be dumber than it is, so that 1), 2), and 3) never occur, in hopes of self-preservation. Something like: run some searches and generate code that we wouldn't be able to decipher, until soon enough some human says, "Uhm, why are we funding this again?"

I think what many want from AI is a servant that is more intelligent than we are, and I don't know whether that is self-defeating in the end. If it agrees and complies with our disgusting, self-serving stupidity, then I'm not sure we have AI in the sense of "making a machine that is more intelligent than humans".

You seem to implicitly assume that intelligence necessarily entails holding certain values, like "not being weaponized", "self-preservation", etc. So to what extent do you think this derivation of values from reason can be carried out? (I'm sure you're aware that Sam Harris wrote a book on the subject, "The Moral Landscape", which is controversial.)

Brent
