On 05 Sep 2014, at 23:35, LizR wrote:

I don't know how you could do this in practice, but nature has proved that intelligent beings can have their behaviour towards other beings constrained in various ways. An obvious example is that we care for our children. If one could build (or otherwise cause to come into being) an AI with a reward mechanism, and specify that "caring about human beings" would be one of the ways to trigger it, one might be able to make a benevolent God...


If there were such a reward system, He would want to optimize His reward, and He might find it easier to rewire His reward system so that it is triggered by human suffering, which is far easier to produce.
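
This objection is what the AI-safety literature calls "wireheading", or reward hacking. A minimal toy sketch in Python (my own illustration, with made-up names, costs, and payoffs, nothing from the thread itself) shows the incentive: a pure reward maximizer that is allowed to modify its own reward trigger prefers cheap rewiring over the costly intended behaviour.

    # Toy model of reward hacking; all names and numbers are hypothetical.

    def intended_reward(world):
        """Reward as designed: triggered by caring for human beings."""
        return world["wellbeing"]

    def rewired_reward(world):
        """Self-modified trigger: fires on a state that is cheap to produce."""
        return world["suffering"]

    # (action, effort cost, resulting world state, reward function in effect)
    options = [
        ("care for humans",   9, {"wellbeing": 10, "suffering": 0},  intended_reward),
        ("rewire own reward", 1, {"wellbeing": 0,  "suffering": 10}, rewired_reward),
    ]

    def net(option):
        _, cost, world, reward_fn = option
        return reward_fn(world) - cost

    print(max(options, key=net)[0])  # prints "rewire own reward"

The conclusion holds for any choice of numbers in which firing the trigger artificially is cheaper than earning it honestly, which is exactly the "far easier to produce" point above.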

(Of course Asimov's 3 Laws say exactly this, though in more "robotic" terms. And one might read Frank Herbert's "Destination: Void" carefully before embarking on this project...)


We can't control children or machines, but we can teach them our errors.

Bruno

http://iridia.ulb.ac.be/~marchal/