> From: Jim Bromer [mailto:[email protected]]
> 
> You cannot expect an AI program that is designed to use human-like thought
> processes to be free of negative 'personality' traits. The problem is that a
> person not only learns from various sources but he is also capable of
> designing his own learning programs because human beings are, to some
> extent, self-directed. The belief that superior intellect (whatever you want to
> call it) is going to prevent future AGI from being noxious is not realistic.
> 
> 

Suppose you create an intelligent agent that is totally unoffending: the Totally 
Unoffending Agent (TUA), which never offends. That is probably impossible, so 
when it does offend it must compensate the offended party, giving out a 
resource, money, or whatever. It will therefore need to acquire that resource 
somehow, since once in a while it will offend. If it doesn't acquire the 
resource, its stock will be depleted and it will owe the resource to an 
increasing number of offended agents :)
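The point can be sketched as a toy simulation (the offense rate, compensation cost, and income rate here are made-up parameters, not anything proposed above):

```python
import random

def simulate_tua(steps, income, compensation, p_offend, seed=0):
    """Toy model of the TUA: an agent that occasionally offends must pay
    compensation from its resource balance; when the balance runs dry,
    unpaid claims pile up as debt owed to offended agents."""
    rng = random.Random(seed)
    balance = 0.0
    unpaid_claims = 0  # offended agents still owed compensation
    for _ in range(steps):
        balance += income            # resource acquired this step
        if rng.random() < p_offend:  # once in a while it offends
            if balance >= compensation:
                balance -= compensation  # it can pay: balance shrinks
            else:
                unpaid_claims += 1       # it can't pay: debt grows
    return balance, unpaid_claims
```

If the income rate is below the expected compensation outflow (p_offend times the compensation cost), the balance is eventually depleted and the count of unpaid claims grows without bound, which is the depletion scenario described above.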

Systems, organizations, people, governments, companies, AIs, and AGIs can all be 
considered agents or (for lack of a better term) super-agents existing in 
resource-limited environments full of other agents. They must operate with a 
consideration of their own survival. 

John




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
