Mason wrote:
I have been reading up on this "Friendly AI" concept and found this
mailing list...  I was curious whether anyone knew more about it.  I'm
specifically interested in whether anyone has any reading material on
the fairly obvious (IMHO, of course) connection between the goals of an
FAI entity and the maximization of negative entropy.

That is: since the defining characteristic of all life is its
ability to temporarily reduce local entropy, the fundamental
underlying tenet of "Friendliness" seems like it could (should?) be
framed in these terms...  Except I haven't really spent enough time
navel-gazing about it yet to be sure whether that actually makes sense.
However, maybe someone else has?
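
(In crude symbols, and this is just my own notation to pin down the
thermodynamic claim: a living system can lower its own entropy only by
exporting at least as much entropy to its surroundings, so the second
law still holds overall. Something like:

    \Delta S_{\mathrm{local}} < 0
    \quad\text{is permitted only when}\quad
    \Delta S_{\mathrm{local}} + \Delta S_{\mathrm{surroundings}} \ge 0

which is why the reduction can only ever be local and temporary.)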

What I can tell you is that the "friendliness" idea has always been about finding ways to make sure an AI would be as helpful as possible to us humans. But the way people talk about the issue is heavily colored by their interpretations of what the structure of an AI would actually be like.

So, in other words, we all agree on the general idea (yes, we should ensure that any AI is going to be friendly and stay friendly to us), but from that point on the details get mired in people's idiosyncratic ideas about how to choose the motivations and goals of an AI.

In particular, I have a profound disagreement with what I consider the naive approach of the folks associated with SIAI. Their approach is predicated on AI systems being driven in very crude ways, and I see this as too simplistic to actually work at all.


Richard Loosemore
