On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote:

> I think we're getting terms mixed up here. By "values", do you mean
> the "ends", the ultimate moral objectives that the AGI has, things
> that the AGI thinks are good across all possible situations?

No, sorry.  By "values", I mean something similar to preferences, but
in a functional sense that is both innate and comprehensive -- that
abstract component of an agent which provides the reference standard
for its subjective model of "the way things should be."  Most of this
model has been encoded by evolution, some by circumstances over the
organism's lifetime, and some by recent circumstance.  All of it
drives the organism's actions vis-a-vis its environment as the
organism attempts to minimize the perceived difference signal.

As a description of system dynamics, this is more fundamental than "goals."
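
To make the difference-signal framing concrete, here's a rough Python
sketch of what I mean.  It's purely illustrative -- the names
(ValuesDrivenAgent, step_size) and the toy vector arithmetic are made
up, not a claim about any particular architecture.  The innate values
serve as the reference standard, and action just nudges the perceived
environment toward that standard:

# Illustrative sketch only: "values" as an innate reference standard,
# with action minimizing the perceived difference signal.
class ValuesDrivenAgent:
    def __init__(self, values, step_size=0.1):
        # The innate reference standard: "the way things should be."
        self.values = values
        self.step_size = step_size

    def perceive(self, environment):
        # Subjective model of the environment (taken at face value here).
        return environment

    def act(self, environment):
        # Act so as to reduce the perceived difference between the
        # reference standard and the modeled environment.
        perceived = self.perceive(environment)
        difference = [v - p for v, p in zip(self.values, perceived)]
        return [p + self.step_size * d for p, d in zip(perceived, difference)]

# The agent nudges the world state toward its reference standard.
agent = ValuesDrivenAgent(values=[1.0, 0.0])
world = [0.0, 1.0]
for _ in range(5):
    world = agent.act(world)
print(world)  # moves toward [1.0, 0.0]

Note there is no prediction of outcomes anywhere in that loop; the
reference standard alone does the driving.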


> That's what I've been meaning by "supergoals". A goal isn't an
> "expected outcome" in the sense that it's what the AGI thinks will
> happen; it's what the AGI wants to happen, the target of the
> optimization.

I would suggest that a goal in that sense does in fact necessarily
entail prediction, whereas my concept of values is what the system
effectively "wants", subject always to updates.

[snipped the rest, which was either agreement or based on the term
(not terminal) confusion above.]

<Sokath, his eyes uncovered!>

- Jef
(I'm really not a trekkie.)
