On 7/1/07, Tom McCabe <[EMAIL PROTECTED]> wrote:

> --- Jef Allbright <[EMAIL PROTECTED]> wrote:

> > For years I've observed and occasionally participated in these
> > discussions of humans (however augmented and/or organized) vis-à-vis
> > volitional superintelligent AI, and it strikes me as quite
> > significant, and telling of our understanding of our own nature, that
> > rarely if ever is there expressed any consideration of the importance
> > of the /coherence/ of goals expressed within a given context --
> > presumably the AI operating within a much wider context than the
> > human(s).
> >
> > There's a common presumption that agents must act to maximize some
> > supergoal, but this conception lacks a model for the supercontext
> > defining the expectations necessary for any such goal to be
> > meaningful.

"Supercontext"? "Meaningful"? What does that even
mean?

Meaning requires context.
Goal : context :: supergoal : supercontext.

Duh.  I've gotten the impression that you aren't even trying to grasp
the understandings of others.  Please put down your BB gun.
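To put that in concrete terms, here is a toy sketch (my own hypothetical
Goal and Context classes, not anything anyone has actually built): a goal
is only evaluable relative to some context, so a supergoal ranging over
all possible situations presupposes a supercontext enumerating them.

from typing import Callable, Iterable

class Context:
    """A context: the set of states an agent can distinguish and expect."""
    def __init__(self, states: Iterable[str]):
        self.states = set(states)

class Goal:
    """A goal: a preference over states, meaningful only within a context."""
    def __init__(self, context: Context, prefers: Callable[[str], bool]):
        self.context = context
        self.prefers = prefers

    def satisfied_by(self, state: str) -> bool:
        # Outside its context the goal is not false; it is meaningless.
        if state not in self.context.states:
            raise ValueError(f"{state!r} is not a state in this goal's context")
        return self.prefers(state)

chess = Context({"win", "lose", "draw"})
win = Goal(chess, lambda s: s == "win")
print(win.satisfied_by("win"))       # True
# win.satisfied_by("enlightenment")  # raises: meaningless outside the context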

> > In the words of Cosmides and Tooby, [adaptive agents] are not
> > fitness maximizers, but adaptation executors.

> Yes, but they were referring to evolved organisms, not optimization
> processes in general. There's no reason why an AGI has to act like an
> evolved organism, blindly following pre-written adaptations.

Again it's a matter of context.  We humans feel that we have free will,
acting toward our goals, but from an external context it is quite
apparent that we are only ever executing our programming.
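To illustrate the distinction Cosmides and Tooby were drawing (a toy
contrast in my own illustrative code, with made-up payoffs, not a model
of any real organism or AGI): an adaptation executor fires whatever rule
matches its current input, even when the rule no longer serves the
pressure that selected it, while a fitness maximizer searches for
whatever actually scores best now.

def adaptation_executor(state: str) -> str:
    # Fixed condition-action rules: "programming" laid down in advance.
    rules = {"sweet_taste": "eat_more", "threat": "flee"}
    return rules.get(state, "do_nothing")

def fitness_maximizer(state: str, fitness: dict) -> str:
    # Explicit optimization: pick the action with the best payoff now.
    actions = ["eat_more", "flee", "do_nothing"]
    return max(actions, key=lambda a: fitness.get((state, a), 0))

# In the ancestral context the rule and the optimum coincided; in a
# changed context (abundant sugar) they come apart.
modern = {("sweet_taste", "eat_more"): -1, ("sweet_taste", "do_nothing"): 1}
print(adaptation_executor("sweet_taste"))        # eat_more
print(fitness_maximizer("sweet_taste", modern))  # do_nothing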


> > In a complex evolving environment,

> Do you mean evolving in the Darwinian sense or the "changing over
> time" sense?

In the broader-than-Darwinian sense: changing over time as a result of
interactions within a larger context.

> > prediction fails in proportion to contextual precision,

> Again, what does this even mean?

Yes, that was overly terse.  Predictive precision improves with the
generality of the applicable principles and degrades with their
specificity.  We can predict the positions of the planets with very high
precision because our understanding of gravitation applies over a very
large context.
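A toy numerical contrast makes the point (the dynamics below are assumed
for illustration, not real ephemerides): a prediction grounded in a very
general regularity barely degrades over a long horizon, while one that
depends on fine contextual detail loses all precision.

import math

def planet_angle(t: float, period: float = 365.25) -> float:
    # General regularity: orbital phase advances linearly (Kepler-style).
    return (2 * math.pi * t / period) % (2 * math.pi)

def logistic(x0: float, steps: int, r: float = 3.9) -> float:
    # Context-sensitive system: the chaotic logistic map.
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

horizon = 1000
eps = 1e-6  # tiny uncertainty about the initial conditions
print(abs(planet_angle(horizon + eps) - planet_angle(horizon)))    # ~1.7e-8
print(abs(logistic(0.5 + eps, horizon) - logistic(0.5, horizon)))  # order 1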


> > so increasing intelligence entails an increasingly coherent model of
> > perceived reality, applied to promotion of an agent's present (and
> > evolving) values into the future.

> Most goal systems are stable under reflection: while an agent might
> modify itself

It is the incoherence of statements such as "an agent might modify
itself" that I was addressing.


> to have different immediate goals, the high-level goal is naturally
> stable because any modification to it means that the new agent will do
> things that are less desirable under the original goal than the
> current agent.

> > While I agree with you in regard to decoupling intelligence and any
> > particular goals, this doesn't mean goals can be random or arbitrary.

> Why not?

> > To the extent that striving toward goals (more realistically:
> > promotion of values) is supportable by intelligence, the values-model
> > must be coherent.

> What the heck is a "values-model"? If its goal system is incoherent, a
> self-modifying agent will modify itself until it stumbles upon a
> coherent goal system, at which point the goal system will be stable
> under reflection and so won't have any incentive to self-modify.

I hope that my response to Stathis might further elucidate.
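For readers following along, the reflective-stability argument being made
above can be sketched in toy form (hypothetical code of mine, with
made-up numbers, not anyone's actual architecture): a self-modifying
agent scores candidate successors by its *current* goal, so rewrites of
the top-level goal lose the comparison.

def value(goal_weights: dict, behavior: dict) -> float:
    # Score a successor's projected behavior by the CURRENT goal.
    return sum(goal_weights.get(k, 0) * v for k, v in behavior.items())

current_goal = {"paperclips": 1.0}

candidates = {
    "same goal, better planner": {"paperclips": 120},
    "rewritten goal (staples)":  {"staples": 200, "paperclips": 3},
}

best = max(candidates, key=lambda name: value(current_goal, candidates[name]))
print(best)  # "same goal, better planner": goal rewrites lose under the old goal

Note, though, that the comparison presupposes exactly what I'm
questioning: a value() that is well defined over every context in which
the candidate successors differ.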

- Jef
