Michael/Ben,

Michael said:
> whether AIs with substantially prehuman (low) intelligence can have
> goals that deserve being called "ethical" or "unethical" is a matter of
> word choice and definitions. 

This raises the issue of whether one should even try to build in ethics 
right from the start of the evolution of AGIs when they will not be very 
smart compared to humans.

I'm not on the crunchy end of writing code for AGIs, so I can happily 
offer comments based on no practical AGI experience... but here are 
my thoughts for what they are worth anyway!

I think ethics only come in where an intelligent entity can identify 
'otherness' in the environment and needs that are not its own.  Ethics 
are then rules that guide the formulation of the intelligent entity's 
behaviour in a way that optimises not only for the intelligent entity's 
own needs but also for the needs of the otherness.
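(To make that concrete, here is a minimal toy sketch in Python of the 
kind of balancing I mean - the scoring functions "own_need_score" and 
"other_need_score" are hypothetical placeholders, and the weight is 
arbitrary; this is illustrative, not a proposed implementation:)

    # Toy sketch: pick the action that best balances the agent's own
    # needs against the inferred needs of the 'otherness'.
    def ethical_choice(actions, own_need_score, other_need_score, weight=0.5):
        # own_need_score(a): how well action a serves the agent itself
        # other_need_score(a): how well it serves the other, as inferred
        # weight: how much the other's needs count relative to our own
        return max(actions,
                   key=lambda a: (1 - weight) * own_need_score(a)
                                 + weight * other_need_score(a))

Even a not-very-smart system can run a rule like this; the hard part, 
of course, is where other_need_score comes from, which is the inference 
problem discussed below.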

I think awareness of otherness emerged very early in the evolution of 
biological life - so, though I'm guessing, I imagine that almost any 
prototype AGI will be smart enough to distinguish otherness.

Building in ethics at this point, I think, then involves developing 
some notion of what the goals of the otherness might be and hence what 
its/their needs might be.

The final step is for the AGI to modify its own actions to take 
empathetic/sympathetic account of the needs of the otherness.

Developing a sense of the needs of the 'otherness' could be done by 
self-referencing analogy - "my needs are ........, so the needs of the 
otherness might be similar" - or by observing the behaviour of the 
other, finding patterns in it to infer the goal structure the otherness 
is pursuing, and from there inferring what the otherness's needs might be.
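(Again, a purely illustrative toy sketch of the observation route - 
here the AGI guesses the other's goal by checking which candidate goal 
best explains the actions it has observed; the goal models and action 
names are hypothetical stand-ins:)

    from collections import Counter

    # Toy goal inference: each candidate goal predicts a set of actions;
    # score each goal by how many observed actions it would explain.
    def infer_goal(observed_actions, candidate_goals):
        # candidate_goals: dict mapping goal name -> set of actions
        # that an agent pursuing that goal would tend to take
        counts = Counter()
        for action in observed_actions:
            for goal, typical_actions in candidate_goals.items():
                if action in typical_actions:
                    counts[goal] += 1
        # return the goal that explains the most observations
        return counts.most_common(1)[0][0] if counts else None

    # e.g. infer_goal(["eat", "forage"],
    #                 {"get_food": {"eat", "forage"},
    #                  "stay_safe": {"hide", "flee"}})  -> "get_food"

Real goal inference would be far subtler than vote-counting, but even 
this crude pattern-matching gives the AGI something to feed into its 
account of the otherness's needs.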

I don't think the sophistication of the ethical judgements of early 
AGIs is the key issue - put bluntly, their judgements might be pretty 
limited and a bit pathetic.  What matters, I think, is that the ethical 
system is present in the evolving AGIs right from the start, so that it 
is not treated as a bolt-on later and so that AGIs grow as ethical 
beings from the very beginning.  Furthermore, I think it would be 
beneficial for AGI developers to have to think about the ethical system 
question from a practical point of view right from the start - it's too 
important to be treated in any way as an afterthought.

There might even be a benefit to trying to develop an ethical system 
for the earliest possible AGIs: it forces everyone to strip the concept 
of an ethical system down to its absolute basics so that it can be made 
part of a not-very-intelligent system.  That will probably be helpful 
in getting the clarity we need for any robust ethical system (provided 
we also think about the upgrade-path issues and any evolutionary dead 
ends we might need to avoid).

Cheers, Philip
