On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Kaj Sotala wrote:
>  > Alright. But previously, you said that Omohundro's paper, which to me
>  > seemed to be a general analysis of the behavior of *any* minds with
>  > (more or less) explicit goals, looked like it was based on a
>  > 'goal-stack' motivation system. (I believe this has also been the
>  > basis of your critique for e.g. some SIAI articles about
>  > friendliness.) If built-in goals *can* be constructed into
>  > motivational system AGIs, then why do you seem to assume that AGIs
>  > with built-in goals are goal-stack ones?
>
>
> I seem to have caused lots of confusion earlier on in the discussion, so
>  let me backtrack and try to summarize the structure of my argument.
>
>  1)  Conventional AI does not have a concept of a "Motivational-Emotional
>  System" (MES), the way that I use that term, so when I criticised
>  Omohundro's paper for referring only to a "Goal Stack" control system, I
>  was really saying no more than that he was assuming that the AI was
>  driven by the system that all conventional AIs are supposed to have.
>  These two ways of controlling an AI are two radically different designs.
[...]
>  So now:  does that clarify the specific question you asked above?

Yes and no. :-) My main question is with part 1 of your argument - you
are saying that Omohundro's paper assumed the AI to have a certain
sort of control system. This is the part which confuses me, since I
didn't see the paper make *any* mention of how the AI should be
built. It only assumes that the AI has some sort of goals, and nothing
more.

I'll list all of the drives Omohundro mentions, along with my
interpretation of them and why they require nothing more than existing
goals. Please correct me where our interpretations differ. (It is true
that the impact of many of these drives could be reduced by
constructing an architecture which restricts them, and as such they
are not /unavoidable/ - however, it seems reasonable to assume that
they will emerge by default in any AI with goals, unless specifically
counteracted. Also, the more they are restricted, the less effective
the AI will be.)

Drive 1: AIs will want to self-improve
This one seems fairly straightforward: indeed, for humans
self-improvement seems to be an essential part of achieving pretty
much *any* goal you are not immediately capable of achieving. If you
don't know how to do something needed to achieve your goal, you
practice, and when you practice, you're improving yourself. Likewise,
improving yourself will quickly become a subgoal of *any* major
goal.
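As a toy illustration of the point (my own sketch, not anything from
Omohundro's paper - the skill names, threshold and proficiency numbers
are all invented), here is a trivial planner in which self-improvement
falls out as an instrumental subgoal of any goal the agent isn't yet
capable of achieving:

```python
# Toy planner: "improve yourself" emerges as a subgoal whenever the
# agent's current proficiency falls short of what the goal requires.

def plan(goal, skills):
    """Return a plan for `goal`, given a dict of skill -> proficiency (0..1)."""
    required = goal["requires"]  # the skill this goal depends on
    if skills.get(required, 0.0) >= goal["threshold"]:
        return [("act", goal["name"])]
    # Not yet capable: self-improvement is pushed as an instrumental subgoal.
    return [("improve", required), ("act", goal["name"])]

goal = {"name": "win_tournament", "requires": "chess", "threshold": 0.8}
print(plan(goal, {"chess": 0.2}))
# [('improve', 'chess'), ('act', 'win_tournament')]
```

Note that nothing here mentions self-improvement as a built-in goal;
it appears purely as a means to the end.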

Drive 2: AIs will want to be rational
This is basically just a special case of drive #1: rational agents
accomplish their goals better than irrational ones, and attempts at
self-improvement can be outright harmful if you're irrational in the
way that you try to improve yourself. If you're trying to modify
yourself to better achieve your goals, then you need to make clear to
yourself what your goals are. The most effective method for this is to
model your goals as a utility function and then modify yourself to
better carry out the goals thus specified.
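To make the "model your goals as a utility function" step concrete,
here's a minimal sketch (the outcome names and probabilities are my
own invented example, not from the paper) of rationality cashed out as
expected-utility maximization:

```python
# Rationality as choosing the action with the highest expected utility
# under an explicit utility function, rather than by habit or impulse.

def expected_utility(action, utility, outcomes):
    """Sum of utility(outcome) weighted by its probability under `action`."""
    return sum(p * utility(o) for o, p in outcomes[action].items())

utility = {"goal_achieved": 1.0, "nothing": 0.0}.get
outcomes = {
    "careful_plan":   {"goal_achieved": 0.9, "nothing": 0.1},
    "act_on_impulse": {"goal_achieved": 0.4, "nothing": 0.6},
}
best = max(outcomes, key=lambda a: expected_utility(a, utility, outcomes))
print(best)  # careful_plan
```

Once the goals are made explicit like this, it also becomes possible
to check whether a proposed self-modification serves them - which is
where drive #2 supports drive #1.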

Drive 3: AIs will want to preserve their utility functions
Since the utility function constructed was a model of the AI's goals,
this drive is equivalent to saying "AIs will want to preserve their
goals" (or at least the goals that are judged as the most important
ones). The reasoning for this should be obvious - if a goal is removed
from the AI's motivational system, the AI won't work to achieve the
goal anymore, which is bad from the point of view of an AI that
currently does want the goal to be achieved.
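The point can be made concrete with a toy calculation (the worlds and
utility functions are invented purely for illustration): the AI judges
any proposed change to its utility function using its *current*
utility function, so the change comes out worse:

```python
# Why an AI resists changes to its own goals: the successor would
# maximize the new utility function, but the present agent scores the
# resulting world with its current one.

def evaluate_modification(current_u, new_u, future_worlds):
    chosen = max(future_worlds, key=new_u)  # what the successor would bring about
    return current_u(chosen)                # valued by the present agent

worlds = ["paperclips_made", "stamps_made"]
current_u = lambda w: 1.0 if w == "paperclips_made" else 0.0
new_u     = lambda w: 1.0 if w == "stamps_made" else 0.0

keep   = evaluate_modification(current_u, current_u, worlds)  # 1.0
change = evaluate_modification(current_u, new_u, worlds)      # 0.0
print(keep > change)  # True: keeping the current goals scores higher
```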

Drive 4: AIs try to prevent counterfeit utility
This is an extension of drive #2: if there are things in the
environment that hijack existing motivation systems to make the AI do
things not relevant for its goals, then it will attempt to modify its
motivation systems to avoid those vulnerabilities.
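A crude sketch of what such a vulnerability check might look like
(entirely my own invention, just to pin down the idea): the agent
distrusts anything that raises its utility *signal* without changing
the part of the world the signal is supposed to measure.

```python
# "Counterfeit utility" detector: the reward signal went up, but the
# world state it is supposed to track did not change - a wireheading
# symptom the AI would want to patch out of its motivation system.

def is_counterfeit(world_before, world_after, signal_before, signal_after):
    return signal_after > signal_before and world_after == world_before

world = {"paperclips": 5}
# A stimulus spoofs the reward signal but leaves the world untouched:
print(is_counterfeit(world, dict(world), 0.5, 0.9))  # True -> patch it
```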

Drive 5: AIs will be self-protective
This is a special case of #3.

Drive 6: AIs will want to acquire resources and use them efficiently
More resources will help in achieving most goals. Moreover, even if
you had already achieved all of your goals, more resources would help
you make sure that your success couldn't be thwarted as easily.
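A one-line toy model of why this holds regardless of what the goal
actually is (the formula p = r/(r+1) is invented purely for
illustration; nothing in the paper commits to it):

```python
# Resources as a convergent instrumental subgoal: under this toy model,
# more resources monotonically raise the chance that any given goal is
# achieved - so acquiring them helps almost no matter what the goal is.

def success_probability(resources):
    return resources / (resources + 1.0)

for r in [1, 10, 100]:
    print(r, round(success_probability(r), 3))
```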



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/