To prevent this from degenerating into yet another standard political
argument about which verbal principles the existing complex systems called
humans should think about in order to channel their existing emotions and
morality, may I suggest these questions as departure points for discussion
of AI morality?
1) Supposing you are forced to admit that you don't *know* what factors
are responsible for altruism in humans, can you come up with a theory that
offers at least the possibility of transferring them over anyway?
2) Let's say there's a remote possibility the human programmers are not
infallible. Imagining a given type of possible cognitive or moral error
by the programmers, and the subsequent perturbations of the AI, what kind
of architecture would be needed for an AI goal system to conceive of,
define, notice, and correct that class of mistake?
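One way to make question 2 concrete is a toy sketch of a goal system that treats its own goal content as a fallible hypothesis about what the programmers meant, rather than as ground truth. This is purely illustrative, not a proposed architecture; every name and the "proxy divergence" mistake class below are invented for the example.

```python
# Toy sketch (an assumption-laden illustration, not a real architecture):
# a goal system that models classes of possible programmer mistakes and
# suspends its stated goal when it suspects one has occurred.

class CorrigibleGoalSystem:
    def __init__(self, stated_goal, error_models):
        # stated_goal: what the programmers actually wrote down
        self.stated_goal = stated_goal
        # error_models: hypotheses about classes of programmer mistakes,
        # each a predicate that inspects evidence for that mistake class
        self.error_models = error_models
        self.suspected_errors = []

    def observe(self, evidence):
        # "Notice": test each hypothesized mistake class against evidence.
        for name, detector in self.error_models.items():
            if detector(self.stated_goal, evidence):
                self.suspected_errors.append(name)

    def effective_goal(self):
        # "Correct": if a mistake class is suspected, defer to the
        # programmers instead of optimizing the possibly-wrong goal.
        if self.suspected_errors:
            return "defer_to_programmers"
        return self.stated_goal


# One hypothetical mistake class: the stated goal was a proxy measure
# ("maximize smiles") and evidence shows it diverging from what the
# programmers actually approve of.
def proxy_divergence(goal, evidence):
    return evidence.get("proxy_score", 0) > 0 and evidence.get("approval", 0) < 0

gs = CorrigibleGoalSystem("maximize_smiles",
                          {"proxy_divergence": proxy_divergence})
gs.observe({"proxy_score": 10, "approval": -5})
print(gs.effective_goal())  # the stated goal is suspended
```

The hard part the question points at is, of course, everything this sketch assumes away: where the error models come from, and how the system can represent mistake classes its programmers did not anticipate.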
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence