> How does an AI know what people's goals are? How does an AI know that
> the methods it adopts in order to find out aren't already acting contrary
> to someone's goals? 

Those are intelligence problems, not Friendliness problems per se.  My 
contention is that an AI is only dangerous if its intelligence *exceeds* its 
Friendliness.  If it is too stupid to be Friendly, it shouldn't be able to 
hurt us.  If it *is* able to hurt us anyway (the paperclip/smiley-face 
scenarios), then we've done something incredibly stupid ourselves (which is, 
again, an intelligence problem :-)

> What counts as a reasonable or rational personal goal?

Anything declared by an entity that isn't visibly stupid or covered by one of 
the explicit exclusions:
  a) things that you do SOLELY for personal enjoyment, money, power, or other 
similar things (generally generic subgoals) that people compete for
  b) things that you do SOLELY because *you* believe that it is in *their* 
self-interest that you do so
  c) things that you do SOLELY because of what you believe that God wants

> My point was going to be that your concept of Friendliness is
> radically underspecified.

OK.  What other specifications do you feel are necessary?

> When you project how this would work, you are presupposing a lot of
> cognitive complexity which is more or less universal among human beings,
> but which is not a necessary feature of an AI designed from scratch, and
> yet this social compact is supposed to work in a society that includes AIs.

Yes, I *am* presupposing a lot of intelligence but, as I said above, my 
contention is that an AI is only dangerous if its intelligence *exceeds* its 
Friendliness.

The clear potential exception/problem arises if/when an AGI is sufficiently 
different from us that it takes more intelligence to figure out what we want 
than it does to destroy us (yet another reformulation of the 
paperclip/smiley-face problem).

> The most obvious anthropomorphism is when you talk about "reasonable personal
> goals". Define "reasonableness"! 

Hopefully I did this adequately above.  Please point out any remaining 
anthropomorphisms that you see.

> If "reasonableness" is *not* so utterly relative, then you need to say
> more about what it means.

:-)  Hopefully I have.

Thank you for the great questions.  I hope that you don't mind that I have also 
posted my answers to the AGI list along with your questions (because they were 
so good).

        Mark

