On 7/1/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

> If its top level goal is to allow its other goals to vary randomly,
> then evolution will favour those AI's which decide to spread and
> multiply, perhaps consuming humans in the process. Building an AI like
> this would be like building a bomb and letting it decide when and
> where to go off.

For years I've observed and occasionally participated in these
discussions of humans (however augmented and/or organized) vis-à-vis
volitional superintelligent AI, and it strikes me as quite
significant, and telling of our understanding of our own nature, how
rarely anyone considers the importance of the /coherence/ of the goals
expressed within a given context -- presumably with the AI operating
within a much wider context than the human(s).

There's a common presumption that agents must act to maximize some
supergoal, but this conception lacks a model of the supercontext that
defines the expectations necessary for any such goal to be
meaningful.  In the words of Cosmides and Tooby, [adaptive agents] are
"not fitness maximizers, but adaptation executors."  In a complex,
evolving environment, prediction fails in proportion to contextual
precision, so increasing intelligence entails an increasingly coherent
model of perceived reality, applied to promoting an agent's present
(and evolving) values into the future.

While I agree with you about decoupling intelligence from any
particular goals, this doesn't mean goals can be random or arbitrary.
To the extent that striving toward goals (more realistically: the
promotion of values) is to be supported by intelligence, the
values-model must be coherent.

- Jef

