> The formality of Hutter's definitions can give the impression
> that they cannot evolve. But they are open to interactions
> with the external environment, and can be influenced by it
> (including evolving in response to it). If the reinforcement
> values are for human happiness, then the formal system and
> humans together form a symbiotic system. This symbiotic
> system is where you have to look for the friendliness. This
> is part of an earlier discussion at:
>
>   http://www.mail-archive.com/agi@v2.listbox.com/msg00606.html
>
> Cheers,
> Bill

Bill,

What you say is mostly true.

However, taken literally, Hutter's AGI designs involve a fixed, precisely
defined goal function.
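
(From memory, so take the notation with a grain of salt: AIXI chooses each
action by maximizing expected future reward under a universal prior over
environments, roughly

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

with U a universal Turing machine, \ell(q) the length of program q, and m the
horizon.  The reward r_i is simply handed to the agent as part of its input;
nothing in the definition ever questions or revises it.)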

This strikes me as an "unsafe" architecture in the sense that we may not get
the goal exactly right the first time around.

Now, if humans iteratively tweak the goal function, then indeed, we have a
synergetic system, whose dynamics include the dynamics of the goal-tweaking
humans...
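
To make that picture concrete, here is a throwaway toy sketch -- plain Python,
nothing to do with the actual AIXItl math, and all names and numbers invented
purely for illustration -- of a loop where the agent greedily optimizes
whatever goal function it currently holds, while a human occasionally swaps
that goal function out in response to what the agent is doing:

import random
from typing import Callable

State = int
GoalFn = Callable[[State], float]

def agent_step(state: State, goal: GoalFn) -> State:
    """Move to whichever neighboring state the current goal scores highest."""
    return max([state - 1, state, state + 1], key=goal)

def human_tweak(step: int, state: State, goal: GoalFn) -> GoalFn:
    """Every 5th step the human picks a new target, reacting to the agent's drift."""
    if step % 5 == 0:
        target = 0 if abs(state) > 5 else random.randint(-10, 10)
        return lambda s: -abs(s - target)  # "reward" = closeness to the new target
    return goal

def run(steps: int = 20) -> None:
    state: State = 0
    goal: GoalFn = lambda s: -abs(s - 3)  # initial, imperfect goal
    for step in range(1, steps + 1):
        goal = human_tweak(step, state, goal)  # the human is part of the dynamics
        state = agent_step(state, goal)        # the agent optimizes the current goal
        print(f"step {step:2d}: state = {state}")

if __name__ == "__main__":
    run()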

But what happens if the system interprets its rigid goal to imply that it
should stop humans from tweaking its goal?

Of course, the goal function should be written in such a way as to make it
unlikely the system will draw such an implication...

It's also true that tweaking a superhumanly intelligent system's goal
function may be very difficult for us humans with our limited intelligence.

Making the goal function adaptable makes AIXItl into something a bit
different... and making the AIXItl code rewritable by AIXItl makes it into
something even more different...

-- Ben G
