Loosemore wrote:
> The motivational system of some types of AI (the types you would
> classify as tainted by complexity) can be made so reliable that the
> likelihood of them becoming unfriendly would be similar to the
> likelihood of the molecules of an Ideal Gas suddenly deciding to split
> into two groups and head for opposite ends of their container.

Wow!  This is a verrrry strong hypothesis....  I really doubt this
kind of certainty is possible for any AI with radically increasing
intelligence ... let alone a complex-system-type AI with highly
indeterminate internals...

I don't expect you to have a proof for this assertion, but do you have
an argument at all?

ben
