Well, the term Friendliness as introduced by Eliezer Yudkowsky
is supposed to roughly mean "beneficialness to humans."

What you are talking about is quite a different thing: an AI whose
top-level goal is "minimizing entropy."

As it happens, I think that is a poorly formulated goal, and if I
were going to postulate an information-theoretic goal for an AGI
I'd go with something like "maximizing the amount of static
and dynamic pattern in the universe."  That requires entropy to be
far from its maximum, but it isn't the same as minimizing entropy.
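
As a toy illustration (my own, not part of any formal definition of
"pattern"): treating byte-level Shannon entropy as a rough "disorder"
measure and zlib compressibility as a very crude stand-in for exploitable
pattern, a short Python sketch suggests that both the zero-entropy extreme
and the maximum-entropy extreme are pattern-poor, while non-trivial
structure lives in between:

    # Toy sketch: byte-histogram entropy vs. zlib compressibility.
    import math
    import os
    import zlib
    from collections import Counter
    from itertools import count

    def shannon_entropy(data: bytes) -> float:
        """Entropy of the byte frequency distribution, in bits per byte."""
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

    def describe(label: str, data: bytes) -> None:
        compressed = zlib.compress(data, 9)
        print(f"{label:10s} entropy={shannon_entropy(data):4.2f} bits/byte  "
              f"compressed {len(data)} -> {len(compressed)} bytes")

    def champernowne(n: int) -> bytes:
        """Deterministic but non-repetitive digit stream: 1234567891011..."""
        out = bytearray()
        for i in count(1):
            out += str(i).encode()
            if len(out) >= n:
                return bytes(out[:n])

    N = 100_000
    describe("constant", b"A" * N)           # zero entropy, almost nothing to describe
    describe("random", os.urandom(N))        # maximal entropy, no regularity to exploit
    describe("structured", champernowne(N))  # in between: real, non-trivial regularity

The constant string compresses to almost nothing, the random one barely
compresses at all, and only the structured one carries substantial
regularity worth describing: a crude version of "far from maximum
entropy, but not minimal."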

But anyway that sort of goal is quite different from the goal
of beneficialness to humans, as it's nowhere near clear that
benefiting humans is the best way to either minimize entropy
OR maximize the amount of static/dynamic pattern in the universe...
or fulfill any other abstract, information-theoretic goal...

-- Ben G

On 5/25/07, Mason <[EMAIL PROTECTED]> wrote:

I have been reading up on this "Friendly AI" concept and found this
mailing list...  I was curious if anyone knew more about it.  I'm
specifically interested in whether anyone has any reading on the fairly
obvious (IMHO, of course) correlation between the goals of an FAI
entity and the maximization of negative entropy.

That is, since the defining characteristic of ALL life in general is its
ability to temporarily reduce local entropy, the fundamental
underlying tenet of "Friendliness" seems like it could (should?) be
framed in these terms...  Except I haven't really spent enough time
navel gazing about it yet to be sure if that actually makes sense.
However, maybe someone else has?
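
As a back-of-the-envelope sketch (my own, with made-up numbers): a warm
"organism" at 310 K dumping 100 J of heat into 300 K surroundings lowers
its own entropy but raises the total, which is the sense in which life
temporarily reduces local entropy without violating the second law:

    # Entropy bookkeeping for heat Q flowing from a warm system to cooler
    # surroundings; all numbers are illustrative assumptions.
    Q = 100.0      # joules of heat exported by the organism
    T_sys = 310.0  # organism temperature, kelvin
    T_env = 300.0  # environment temperature, kelvin

    dS_sys = -Q / T_sys     # local entropy drops by about 0.32 J/K
    dS_env = +Q / T_env     # surroundings gain about 0.33 J/K
    print(dS_sys + dS_env)  # net change ~ +0.011 J/K: total entropy still rises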

