I have been reading up on this "Friendly AI" concept and found this
mailing list...  I was curious if anyone knew more about it.  I'm
specifically interested in whether anyone has any reading on the fairly
obvious (IMHO, of course) correlation between the goals of an FAI
entity and the maximization of negative entropy.

I.e., since the defining characteristic of ALL life is its ability to
temporarily reduce local entropy, the fundamental underlying tenet of
"Friendliness" seems like it could (should?) be framed in these
terms...  Except I haven't really spent enough time navel-gazing about
it yet to be sure that actually makes sense.
However, maybe someone else has?
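
(For concreteness, the framing I have in mind is just the textbook
thermodynamics, nothing FAI-specific:

    dS_total = dS_local + dS_environment >= 0    (second law)

so a system can only manage dS_local < 0 by exporting at least as much
entropy to its surroundings, dS_environment >= -dS_local > 0.  "Negative
entropy" (negentropy) is then roughly J = S_max - S, the distance below
thermodynamic equilibrium, so "maximizing negentropy" would mean keeping
S as far below S_max as the energy flowing through the system allows.)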
