On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> You misunderstood me -- when I said robustness of the goal system, I meant
> the contents and integrity of the goal system, not the particular
> implementation.

I meant that too - and I didn't mean to imply that distinction. In my
argument, both the implementation of the goal system and the 'goal
system itself' can be represented as text written in natural language,
that is, in a rather faulty way. The 'goal system as it was meant to
be' is what the intelligent system tries to achieve.

>
> I do however continue to object to your phrasing about the system
> recognizing influence on its goal system and preserving it.  Fundamentally,
> there are only a very small number of "Thou shalt not" supergoals that need
> to be forever invariant.  Other than those, the system should be able to
> change its goals as much as it likes (chocolate or strawberry?  excellent
> food or mediocre sex?  save starving children, save adults dying of disease,
> or go on vacation since I'm so damn tired from all my other good works?)

This is just a substitution of levels of abstraction. Programs on my PC
run on fixed hardware and are limited by its capabilities, yet they
can vary greatly. Furthermore, an intelligent system should be able to
integrate the impact of the goal system across multiple levels of
abstraction; that is, it can infer the appropriate level of strictness
in various circumstances (which interface with the goal system through
custom-made abstractions).
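The hardware/program analogy can be made concrete with a toy sketch. This is not anything from the thread itself, just one hypothetical way to model the distinction under discussion: a small fixed set of invariant "Thou shalt not" supergoals (the "hardware"), with freely mutable lower-level goals varying on top of them. All names here are invented for illustration.

```python
# Toy sketch (hypothetical, not from the thread): invariant supergoals
# are fixed at construction time, like the PC hardware in the analogy;
# lower-level preference goals can change freely, like the programs.

class GoalSystem:
    def __init__(self, invariants):
        # Invariant "Thou shalt not" supergoals: frozen, never mutated.
        self._invariants = frozenset(invariants)
        # Mutable goals, e.g. "chocolate" vs. "strawberry".
        self.preferences = []

    def violates_invariant(self, goal):
        """A goal is forbidden if it matches an invariant supergoal."""
        return goal in self._invariants

    def adopt(self, goal):
        """Change lower-level goals at will, within the invariants."""
        if self.violates_invariant(goal):
            raise ValueError(f"{goal!r} conflicts with an invariant supergoal")
        self.preferences.append(goal)


gs = GoalSystem(invariants={"harm_humans"})
gs.adopt("eat_chocolate")
gs.adopt("save_starving_children")
# gs.adopt("harm_humans")  # would raise ValueError
```

The point of the sketch is only the layering: the invariant layer stays constant while the space of goals above it varies greatly, just as fixed hardware supports widely varying programs.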


> A quick question for Richard and others -- Should adults be allowed to
> drink, do drugs, wirehead themselves to death?
>
> ----- Original Message -----
> From: "Vladimir Nesov" <[EMAIL PROTECTED]>
> To: <[email protected]>
> Sent: Tuesday, October 02, 2007 9:49 AM
> Subject: Re: [agi] Religion-free technical content
>
>
> > And yet, the robustness of the goal system itself is less important
> > than the intelligence that allows the system to recognize influence
> > on its goal system and preserve it. Intelligence also allows a more
> > robust interpretation of the goal system, which is why the way a
> > particular goal system is implemented is not very important. The
> > problems lie in the rough formulation of what the goal system should
> > be (a document in English is probably going to be enough) and in
> > placing the system under sufficient influence of its goal system (so
> > that intelligent processes independent of it cannot take over).
> >
> > On 10/2/07, Mark Waser <[EMAIL PROTECTED]> wrote:
> >> The intelligence and goal system should be robust enough that a single or
> >> small number of sources should not be able to alter the AGI's goals;
> >> however, it will not do this by recognizing "forged communications" but
> >> by
> >> realizing that the aberrant goals are not in congruence with the world.
> >> Note that many stupid and/or greedy people will try to influence the
> >> system
> >> and it will need to be immune to them (or the solution will be worse than
> >> the problem).
> >
> > --
> > Vladimir Nesov                            mailto:[EMAIL PROTECTED]
> >
> > -----
> > This list is sponsored by AGIRI: http://www.agiri.org/email
> > To unsubscribe or change your options, please go to:
> > http://v2.listbox.com/member/?&;
> >
>
>
>


-- 
Vladimir Nesov                            mailto:[EMAIL PROTECTED]

