Philip wrote:

>This raises the issue of whether one should even try to build in
>ethics right from the start of the evolution of AGIs when they will
>not be very smart compared to humans.
Oh; I think one should try. This is the fate of the universe we're
talking about here. And keep in mind that the thing we call "morality"
didn't just pop up in humans out of nowhere; it was built on a whole
lot of complexity that was already there to begin with: the cognitive
modules responsible for modelling other agents, the prerequisites for
Machiavellian intelligence, modules which began to repress preexisting
proprietary tendencies in favor of more altruistic behaviors that
promote inclusive fitness, and so on. For an AGI designer, this means
that planning for ethical behavior starts early. You can't design a
fully functional, human-equivalent general intelligence and then try
to add in ethical behavior at the last minute.

>I think ethics only come in where an intelligent entity can identify
>'otherness' in the environment and needs that are not its own.
>Ethics are then rules that guide the formulation of the intelligent
>entity's behaviour in a way that optimises for not only the
>intelligent entity's needs but also the needs of the otherness.

Wait; you're writing again with the implicit assumption that the AI
will naturally tend to become selfish. In the mind of a being where
distinctions between "self" and "other" completely break down in the
context of morality, why do we need to make a specific point of
teaching the AGI to identify "otherness"?

>Building in ethics at this point I think then involves developing
>some notion of what the goals of the otherness might be and hence
>what its/their needs might be.

Yep. Or start the AGI off from the beginning modelling everything,
including its own code, as "the state of the world" rather than as
"self" and "other". I'm basically trying to say that AGI should be
egoless, but without the stigma that egolessness in humans sometimes
carries. (There's a toy sketch of what I mean further down.)

>The final step is for the AGI to modify its own actions to take
>empathetic/sympathetic account of the needs of the otherness.

Not just its own actions; its entire cognitive architecture and goal
system.

>Developing a sense of the needs of the 'otherness' could be done by
>self-referencing analogy - "my needs are ........, so the needs of
>the otherness might be similar" - or by observing the behaviour of
>the other and trying to pattern it to infer the goal structure that
>the otherness is pursuing, and from there to infer what the
>otherness's needs might be.

This is the human way of doing things, and it works out okay because
humans are all essentially identical when you compare us to the vast
space of all physically possible sentiences. With AGI, there's a
larger scope of empathy required, so assessing the needs of others
could easily be based on inference entirely independent of a
self-referencing analogy.

>I don't think that the sophisticated adequacy of the ethical
>judgments of early AGIs is the key issue - put bluntly, their
>judgements might be pretty limited and a bit pathetic. But I think
>what matters is that the ethical system is present in the evolving
>AGIs right from the start, so that it is not treated as a bolt-on
>later and so that AGIs grow as ethical beings right from the start.

Yep!

>Furthermore I think that it would be beneficial for AGI developers to
>have to think about the ethical system question from a practical
>point of view right from the start - it's too important to be treated
>in any way as an afterthought.

Right. It's pretty clear we agree on this.
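To make that a little more concrete, here's a rough toy sketch in
Python of what I mean by a goal system with no privileged "self": the
AGI's own state is just one more entry in the world model, and another
agent's needs are guessed at from observed behaviour. Every name here
(WorldState, Agent, infer_needs, and so on) is made up purely for
illustration - this is a cartoon of the idea, not a proposal for an
actual architecture.

# Toy sketch of an "egoless" goal system.  The AGI's own state is just
# one more entry in the world model, and utility is computed over every
# agent's needs with no special weight on "self".
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Agent:
    name: str
    needs: Dict[str, float]          # e.g. {"rest": 0.5, "food": 0.5}

@dataclass
class WorldState:
    agents: List[Agent] = field(default_factory=list)   # the AGI is in here too

def utility(world: WorldState) -> float:
    # No self/other distinction: sum the satisfaction of every agent's
    # needs, the AGI's own entry included, all on equal footing.
    return sum(level for a in world.agents for level in a.needs.values())

def infer_needs(observed_choices: List[str]) -> Dict[str, float]:
    # Crude stand-in for inferring a goal structure from behaviour:
    # count how often each option was chosen and normalise.
    counts: Dict[str, float] = {}
    for choice in observed_choices:
        counts[choice] = counts.get(choice, 0.0) + 1.0
    total = sum(counts.values()) or 1.0
    return {k: v / total for k, v in counts.items()}

if __name__ == "__main__":
    philip = Agent("Philip", infer_needs(["rest", "rest", "food", "food"]))
    agi = Agent("AGI", {"electricity": 0.5})
    print(utility(WorldState([philip, agi])))   # 1.5

The only point of the cartoon is that nothing in utility() knows or
cares which entry is "me".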
>There might even be a benefit to trying to develop an ethical system
>for the earliest possible AGIs - and that is that it forces everyone
>to strip the concept of an ethical system down to its absolute basics
>so that it can be made part of a not very intelligent system. That
>will probably be helpful in getting the clarity we need for any
>robust ethical system (provided we also think about the upgrade path
>issues and any evolutionary dead ends we might need to avoid).

Right, although of course we'd rather teach the AI how to think about
upgrade path issues and evolutionary dead ends on its own than do all
of that thinking ourselves.

So, what are your thoughts on creating an AI devoid of humanlike ego
from the start?

Michael Anissimov
