On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote:

--- Jef Allbright <[EMAIL PROTECTED]> wrote:

> On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
>
> > I think we're getting terms mixed up here. By "values", do you mean
> > the "ends", the ultimate moral objectives that the AGI has, things
> > that the AGI thinks are good across all possible situations?
>
> No, sorry.  By "values", I mean something similar to preferences, but
> in a functional sense that is both innate and comprehensive -- that
> abstract component of an agent which provides the reference standard
> for its subjective model of "the way things should be."  Most of this
> model has been encoded as a result of evolution.  Some has been
> encoded by circumstances during the organism's lifetime, and some has
> been encoded by recent circumstance.  All drive the organism's actions
> vis-a-vis its environment, attempting to minimize the perceived
> difference signal.
>
> As a description of system dynamics this is more fundamental than
> "goals."

Er, we are talking about AGIs here, not evolved
organisms, right?

More generally, in my opinion, adaptive systems.


So what exactly is the difference between "values" and
"supergoals"?

Well, "exactly" is especially challenging since one of my points all
along has been that meaning is dependent on context and it's not
practical to transfer sufficient context through this limited
bandwidth medium.  I also thought I provided a pretty good description
earlier.

To summarize a few key differences:

*  Values are more fundamental than goals because future goals are
derived from present values.

*  Values operate in the here/now.  Goals entail prediction of a future state.

*  Subjectively, values are undeniable, while goals are negotiable.

*  The concept of a "supergoal" is ultimately incoherent, because a
supergoal depends on an inaccessible "supercontext" for its meaning.
This confusion is similar to saying a person's ultimate goal is
personal survival, or pleasure, or any such.  Such teleological
assertions don't belong in more effective models of reality.  We're
back to my earlier statement that adaptive agents are not fitness
maximizers, but adaptation executors.
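
To make the distinction concrete, here is a rough Python sketch of the
picture I have in mind -- purely illustrative, with made-up names
(Agent, value_standard, derive_goal), not a design for any actual
system:

from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = Dict[str, float]
Action = Callable[[State], State]

@dataclass
class Agent:
    # The value standard: an innate, comprehensive scoring of "the way
    # things should be."  It is not a target state; it is always in force.
    value_standard: Callable[[State], float]
    actions: List[Action] = field(default_factory=list)

    def difference_signal(self, perceived: State) -> float:
        # How far the perceived state diverges from the standard.
        return -self.value_standard(perceived)

    def step(self, perceived: State) -> State:
        # Act so as to minimize the perceived difference signal, here by
        # a one-step greedy choice over the available actions.
        best = min(self.actions,
                   key=lambda act: self.difference_signal(act(perceived)),
                   default=lambda s: s)
        return best(perceived)

# A "goal" in this picture is derivative: a predicted future state that
# the present value standard happens to rate highly.
def derive_goal(agent: Agent, candidate_futures: List[State]) -> State:
    return max(candidate_futures, key=agent.value_standard)

Note that step() never "completes": the value standard stays in force
in the here/now, while a goal is only a derived prediction that the
present standard happens to favor.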

You could try to resolve this by saying "supergoal" means (my concept
of) "values", but that would be both inconsistent and contradictory:
your "supergoal" would not share any of the "goal-ish" attributes
described above, and rather than acting as a super-target it would act
as the infra-ground of the system's behavior.
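
Or, to put the category mismatch in the same illustrative terms (again,
hypothetical types only, not anyone's actual design):

from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]

@dataclass
class Goal:
    predicted_state: State               # goals entail prediction of a future state
    satisfied: Callable[[State], bool]   # they can be met, missed, or renegotiated
    negotiable: bool = True

# The value standard has no predicted state, no satisfaction test, and
# no endpoint.  It grounds behavior in the here/now rather than pointing
# at a future target, so forcing it into the Goal type above -- calling
# it a "supergoal" -- leaves every goal-ish field meaningless.
ValueStandard = Callable[[State], float]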

<Temba, his arms wide.>

- Jef
