Ben,

On Tue, 11 Feb 2003, Ben Goertzel wrote:

> > The formality of Hutter's definitions can give the impression
> > that they cannot evolve. But they are open to interactions
> > with the external environment, and can be influenced by it
> > (including evolving in response to it). If the reinforcement
> > values are for human happiness, then the formal system and
> > humans together form a symbiotic system. This symbiotic
> > system is where you have to look for the friendliness. This
> > is part of an earlier discussion at:
> >
> >   http://www.mail-archive.com/agi@v2.listbox.com/msg00606.html
> >
> > Cheers,
> > Bill
>
> Bill,
>
> What you say is mostly true.
>
> However, taken literally Hutter's AGI designs involve a fixed,
> precisely-defined goal function.
>
> This strikes me as an "unsafe" architecture in the sense that we may not get
> the goal exactly right the first time around.
>
> Now, if humans iteratively tweak the goal function, then indeed, we have a
> synergetic system, whose dynamics include the dynamics of the goal-tweaking
> humans...
>
> But what happens if the system interprets its rigid goal to imply that it
> should stop humans from tweaking its goal?
> . . .

The key thing is that Hutter's system is open - it reads
data from the external world. And there is no essential
difference between data and code (all data needs is an
interpreter to become code). So evolving values (goals)
can come from the external world.
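
To make that concrete, here is a minimal Python sketch
(my own toy illustration, not Hutter's formalism): the
agent's goal definition arrives as plain data from the
environment, and only becomes code once an interpreter
evaluates it.

    # Toy illustration: a goal definition arrives as data
    # (a string) and becomes code once interpreted.
    def read_goal_from_environment():
        # Hypothetical input channel; in a real system this
        # would be sensory data. Here the "data" is just a
        # Python expression over an observation.
        return "observation['human_happiness']"

    goal_data = read_goal_from_environment()
    # An interpreter turns the data into executable code.
    goal_fn = eval("lambda observation: " + goal_data)
    print(goal_fn({'human_happiness': 0.9}))   # -> 0.9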

We can draw a system boundary around any combination of
the formal system and the external world. By defining
reinforcement values in terms of human happiness, system
values are equated to human values, and the friendly
system is the symbiosis of the formal system and humans.
The formal values are fixed, but they are fixed to human
values, which are not themselves fixed and can evolve.
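
As a sketch of that symbiosis (again my own toy code, with
a hypothetical human_reward stand-in): the formal learning
rule never changes, but the reward it receives is whatever
humans report, so the effective values track the humans as
they evolve.

    import random

    # Toy agent loop: the formal rule "maximize reward" is
    # fixed, but reward is supplied by humans, and their
    # notion of happiness can drift without any change to
    # the agent's code.
    def human_reward(action):
        # Hypothetical stand-in for asking humans how happy
        # an action made them.
        return 1.0 if action == "help" else 0.0

    values = {"help": 0.0, "ignore": 0.0}
    for _ in range(200):
        # Epsilon-greedy: mostly exploit, sometimes explore.
        if random.random() < 0.1:
            action = random.choice(list(values))
        else:
            action = max(values, key=values.get)
        r = human_reward(action)
        # Fixed incremental value update.
        values[action] += 0.1 * (r - values[action])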

Cheers,
Bill
