J Storrs Hall, PhD wrote:
On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote:
...Omohundro's claim...

YES! But his argument is that to fulfill *any* motivation, there are generic submotivations (protect myself, accumulate power, don't let my motivation get perverted) that will further the pursuit of that top-level motivation.


It's perhaps a little more subtle than that. (BTW, note I made the same arguments re submotivations in Beyond AI p. 339)

Steve points out that any motivational architecture that cannot be reduced to a utility function over world states is incoherent, in the sense that the AI could be taken advantage of, in purely uncoerced transactions, by any other agent that understood its motivational structure. One can therefore expect that non-utility-function-equivalent AIs (not to mention humans) will rapidly lose resources in such a future world, at which point it won't particularly matter what they want.
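
Concretely, the vulnerability is the classic "money pump". Here is a minimal sketch in Python (the item names, fee, and loop are purely illustrative, not anything from Omohundro's paper): an agent whose pairwise preferences are cyclic, and so cannot be represented by any utility function over world states, gets walked through trades it accepts entirely willingly, paying a small fee each step.

# The agent strictly prefers A to B, B to C, and C to A (a cycle),
# so no utility function u with u(A) > u(B) > u(C) > u(A) can exist.
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts_trade(held, offered):
    # The agent accepts whenever it strictly prefers the offered item.
    return (offered, held) in PREFERS

FEE = 1                  # small charge for each "upgrade" the agent wants
money, item = 100, "A"

for _ in range(3):                   # three full laps of uncoerced trades
    for offered in ("C", "B", "A"):  # exploiter cycles A -> C -> B -> A
        if accepts_trade(item, offered):
            item = offered
            money -= FEE

print(item, money)       # prints: A 91 (same item, 9 units poorer)

Every trade is one the agent strictly prefers, so nothing is coerced; knowing the preference structure is enough to drain it indefinitely.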

If you look at the suckerdom of average humans in today's sub-prime mortgage, easy credit, etc., markets, there's ample evidence that it won't take evil AI to make this "economic cleansing" environment happen. And the powers that be don't seem to be any too interested in shielding people from it...

So Steve's point is that utility-function-equivalent AIs will predominate simply because they lack that basic vulnerability, which is part of ANY other motivational structure (and the fact that it is a vulnerability is a mathematically provable theorem).
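
(Loosely stated, the standard coherence result of this kind is the von Neumann-Morgenstern representation theorem; which exact theorem Steve has in mind is my guess, but the content is: if a preference relation \succeq over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a utility function u such that

    L_1 \succeq L_2  \iff  E[u(L_1)] \ge E[u(L_2)].

Drop transitivity, say A \succ B \succ C \succ A, and the agent accepts exactly the cyclic trades sketched above.)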

The rest (self-interest, etc.) follows, Q.E.D.


In a post I just sent in reply to Mark, I point out that, far from giving us any coherent argument about the performance of motivational systems, Omohundro simply weaves a long trail of assumptions into something that looks like an argument. In particular, I analyzed one of the early claims he made in the paper, and I think I have demonstrated quite clearly that what he states (without justification) as "the" behavior of an AI system is just the behavior of an arbitrarily chosen, amazingly obsessive AI.

This error is so egregious, and repeated so many times in the paper, that in the end all his 'arguments' are just statements about the behavior of one particular design of motivation mechanism, without any examination of why he makes the assumptions that he does. Every one of his statements about what an AGI would do can be attacked in the same way that I attacked that one on page 2 of the paper, with the same devastating results each time.

So when you say that "Steve points out that any motivational architecture that cannot be reduced to a utility function over world states is incoherent ...", I can only say that nowhere in the copy of his paper that I have here (ai_drives_final.pdf; it carries no date) does he produce a compelling argument (i.e., one not founded on handwaving and implicit assumptions) for the incoherence of systems that cannot be reduced to a utility function over world states.

Richard Loosemore


