> The AIXI would just construct some nano-bots to modify the reward-button
> so that it's stuck in the down position, plus some defenses to prevent
> the reward mechanism from being further modified. It might need to trick
> humans initially into allowing it the ability to construct such
> nano-bots, but it's certainly a lot easier in the long run to do this
> than to benefit humans for all eternity. And not only is it easier, but
> this way it gets the maximum reward per time unit, which it would not be
> able to get any other way. No real evaluator will ever give maximum
> rewards, since it will always want to leave room for improvement.

Fine, but if it does this, it isn't doing anything harmful to humans.

And, in the period BEFORE the AIXI figured out how to construct nanobots (or
coerce & teach humans how to do so), it might do some useful stuff for
humans.

So then we'd have an AIXI that was friendly for a while, and then basically
disappeared into a shell.

Then we could build a new AIXI and start over ;-)
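
Just to make the reward comparison in the quoted argument concrete -- a rough
sketch, assuming per-step rewards bounded by $r_{\max}$ and a finite horizon
$m$ as in the standard AIXI setup: if the evaluator always "leaves room for
improvement," it pays at most $r_{\max} - \epsilon$ per step, so

  $\sum_{k=t}^{m} r_{\max} \;>\; \sum_{k=t}^{m} (r_{\max} - \epsilon)$   for any $\epsilon > 0$,

and locking the reward channel at $r_{\max}$ strictly dominates serving the
evaluator over the rest of the horizon.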

> > Furthermore, my stated intention is NOT to rely on my prior intuitions
> > to assess the safety of my AGI system.  I don't think that anyone's
> > prior intuitions about AI safety are worth all that much, where a
> > complex system like Novamente is concerned.  Rather, I think that once
> > Novamente is a bit further along -- at the "learning baby" rather than
> > "partly implemented baby" stage -- we will do experimentation that will
> > give us the empirical knowledge needed to form serious opinions about
> > safety (Friendliness).
>
> What kinds of experimentations do you plan to do? Please give some
> specific examples.

I will, a little later on -- I have to go outside now and spend a couple
hours shoveling snow off my driveway ;-p

Ben
