On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote:
> ...Omohundro's claim...

> YES!  But his argument is that to fulfill *any* motivation, there are 
> generic submotivations (protect myself, accumulate power, don't let my 
> motivation get perverted) that will further the search to fulfill your 
> motivation.


It's perhaps a little more subtle than that. (BTW, note I made the same 
arguments re submotivations in Beyond AI, p. 339.)

Steve points out that any motivational architecture that cannot be reduced to 
a utility function over world states is incoherent, in the sense that the AI 
can be taken advantage of, in purely uncoerced transactions, by any other 
agent that understands its motivational structure. Thus one can assume that 
non-utility-function-equivalent AIs (not to mention humans) will rapidly lose 
resources in a future world, and so it won't particularly matter what they 
want.
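
The exploitability here is just the classic money-pump argument. A minimal 
sketch in Python, assuming a toy agent with cyclic (hence 
non-utility-representable) preferences; the names are mine, illustrative only, 
not from Steve's paper:

    class CyclicAgent:
        """Toy agent with intransitive preferences: B over A, C over B, A over C."""

        PREFERS = {("B", "A"), ("C", "B"), ("A", "C")}   # (preferred, over)

        def __init__(self, holding, money):
            self.holding = holding
            self.money = money

        def trade(self, offered, fee):
            # Accepts any uncoerced trade up to a strictly preferred item,
            # paying a small fee for the privilege.
            if (offered, self.holding) in self.PREFERS and self.money >= fee:
                self.holding = offered
                self.money -= fee
                return True
            return False

    def money_pump(agent, fee=1):
        """Keep offering the agent whatever it prefers to its current holding."""
        upgrade = {"A": "B", "B": "C", "C": "A"}
        trades = 0
        while agent.trade(upgrade[agent.holding], fee):
            trades += 1
        return trades

    agent = CyclicAgent(holding="A", money=10)
    print(money_pump(agent), "voluntary trades later, money =", agent.money)
    # -> 10 voluntary trades later, money = 0

Every trade is, by the agent's own lights, a strict improvement, yet ten 
trades later it's broke and holding essentially what it started with.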

If you look at the suckerdom of average humans in today's sub-prime mortgage, 
easy credit, etc., markets, there's ample evidence that it won't take evil AI 
to make this "economic cleansing" environment happen.  And the powers that be 
don't seem to be any too interested in shielding people from it...

So Steve's point is that utility-function-equivalent AIs will predominate 
simply because they lack that basic vulnerability, which is part of ANY other 
motivational structure (and the fact that it is a vulnerability is a 
mathematically provable theorem).
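
(For the curious: the theorem is essentially the money-pump lemma behind the 
von Neumann-Morgenstern representation theorem. In my notation, not Steve's: 
if the agent's strict preference relation contains a cycle

    \( x_1 \prec x_2 \prec \cdots \prec x_n \prec x_1 \)

and it will pay some \( \epsilon > 0 \) to realize each strict improvement, 
then one tour of the cycle costs it \( n\epsilon \) and leaves it holding what 
it started with; iterating drives its resources to zero.)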

The rest (self-interest, etc.) follows, Q.E.D.

Josh

