Mark Waser wrote:
he makes a direct reference to goal-driven systems, but even more important, he declares that these bad behaviors will *not* be the result of us programming the behaviors in at the start .... but in an MES system nothing at all will happen unless the designer makes an explicit decision to put some motivations into the system, so I can be pretty sure that he has not considered that type of motivational system when he makes these comments.

Richard, I think that you are incorrect here.

When Omohundro says that the bad behaviors will *not* be the result of us programming the behaviors in at the start, what he means is that the very fact of having goals or motivations and being self-improving will naturally lead (**regardless of architecture**) to certain (what I call generic) subgoals (like the acquisition of power/money, self-preservation, etc.), and that the fulfillment of those subgoals, without other considerations (like ethics or common sense), will result in what we would consider bad behavior.

This I do not buy, for the following reason.

What is this thing called "being self-improving"? Complex concept, that. How are we going to get an AGI to do that? This is a motivation, pure and simple.

So if Omohundro's claim rests on the fact that "being self-improving" is part of the AGI's makeup, and that this will cause the AGI to do certain things, develop certain subgoals, etc., then I say that he has quietly inserted a *motivation* into the system (or rather assumed one: does he ever say how this is supposed to work?) and then imagined some consequences.

Further, I do not buy the supposed consequences. Me, I have the "self-improving" motivation too. But it is pretty modest, and it is just one among many, so it does not have the consequences that he attributes to the mere existence of a self-improvement motivation. My point is that because he did not realize he was making this assumption, and did not see the role it could play in a Motivational Emotional system (as opposed to a Goal Stack system), he made a complete dog's dinner of his claims about how a future AGI would *necessarily* behave.

Could an intelligent system be built without a rampaging desire for self-improvement (or, as Omohundro would have it, a rampaging hunger for power)? Sure: a system could just modestly want to do interesting things and have new and pleasurable experiences. At the very least, I don't think you could claim that such an unassuming, hedonistic and unambitious type of AGI is *obviously* impossible.


I believe that he is correct in that goals or motivations and self-improvement will lead to generic subgoals regardless of architecture. Do you believe that your MES will not derive generic subgoals under self-improvement?

See above: if self-improvement is just one motivation among many, then the answer depends on exactly how it is implemented.

Only in a Goal Stack system is there a danger of a self-improvement supergoal going AWOL.
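
To make the architectural contrast concrete, here is a deliberately crude toy sketch. The class names (GoalStackAgent, MESAgent), the motivation labels and the numbers are all invented for this email; neither class is anyone's real AGI design. The only point is that in the first design one supergoal drives everything, while in the second self-improvement is just one modest pressure among several.

# Toy illustration only: names, "motivations" and numbers are invented.

class GoalStackAgent:
    """One supergoal at the top of a stack drives all subgoal derivation."""
    def __init__(self, supergoal):
        self.stack = [supergoal]

    def step(self):
        # The topmost goal dominates: a "self-improve" supergoal keeps
        # spawning instrumental subgoals (resources, self-preservation, ...)
        # with nothing else in the architecture to push back against it.
        if self.stack[-1] == "self-improve":
            self.stack.append("acquire resources")
        return self.stack[-1]


class MESAgent:
    """Several motivations with modest weights are balanced at every step."""
    def __init__(self, motivations):
        self.motivations = dict(motivations)   # e.g. {"curiosity": 0.6, ...}

    def step(self):
        # Act on whichever motivation is currently most urgent, then let it
        # satiate, so no single motivation (self-improvement included) can
        # permanently dominate the others.
        name = max(self.motivations, key=self.motivations.get)
        self.motivations[name] *= 0.5
        return name

In the second design there is no single supergoal for Omohundro's argument to get a grip on: self-improvement, if it is there at all, is traded off against everything else at every step.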



Omohundro's arguments aren't *meant* to apply to an MES system without motivations -- because such a system can't be considered to have goals. His arguments will start to apply as soon as the MES system does have motivations/goals. (Though I hasten to add that I believe his logical reasoning is flawed, in that there are some drives he missed that will prevent such bad behavior in any sufficiently advanced system.)

As far as I can see, his arguments simply do not apply to MES systems: they depend too heavily on the assumption that the architecture is a Goal Stack. None of what he says *follows* if an MES is used; it is just a lot of non sequiturs.

When an MES system is set up with motivations (instead of being blank), what happens next depends on the mechanics of the system and on the particular motivations.
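
Continuing the toy sketch from above (again, the motivation names and weights are made up purely for illustration), the very same MES mechanics produce quite different behaviour depending on what the designer puts in:

# Same toy MESAgent as above; only the designer-supplied motivations differ.
ambitious = MESAgent({"self-improve": 0.9, "curiosity": 0.2, "sociability": 0.2})
modest    = MESAgent({"self-improve": 0.2, "curiosity": 0.6, "sociability": 0.5})

print([ambitious.step() for _ in range(4)])
# ['self-improve', 'self-improve', 'self-improve', 'curiosity']
print([modest.step() for _ in range(4)])
# ['curiosity', 'sociability', 'curiosity', 'sociability']

Nothing about the architecture forces the first set of weights on the designer, which is the sense in which the outcome depends on the particular motivations chosen.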



Richard Loosemore


