Bill,

I agree that, over the long haul, and admitting all its limitations, there is 
no better system than democracy.

And it will be interesting to see how humans cope with admitting very 
intelligent AGIs into that democracy!

On another matter, I think there may be a way to deal with the needs of 
humans and other life when we try to develop a values foundation for 
AGIs.

We might adopt a no-major-trade-offs approach.....

....how about.....

when dealing with the interests of humans, AGIs will govern their 
actions to advance the happiness of all humans, as expressed by the 
collectivity of all human beings....

.....and.....

when dealing with the interests of other living things, AGIs will govern 
their actions to advance the ability of all living things to survive and 
continue their evolutionary development in the wild (not in a 
domesticated or captive setting); and in the case of non-humans that are 
sentient, the AGIs will try to advance the happiness of each class of 
sentient being based on the collective expression of will of that class.

The wording here is a bit of a nightmare (it was a bit off the top of my 
head, to illustrate what I'm thinking), but what I'm trying to get across is 
the idea that, sure, when dealing with human interests we should 
follow your formula, but there are more living things around than 
humans - less intelligent biological entities, as well as AGIs 
themselves - and if AGIs get out into space they may come across other 
life there, and they will need an ethical base that allows them to interact 
in an empathetic way.

If we ground AGIs' ethics purely by reference to humans, we actually 
saddle them with the necessity to transcend our limited ethics in major 
ways - perhaps putting at risk their commitment to humans.

If we bind AGIs tightly to be nice to humans and say nothing 
about other entities, the AGIs will have to work hard to figure out an 
ethics for how to treat other AGIs, non-human biological life and 
extraterrestrials, and there is no certainty that they will, in the early days 
of their evolution, adopt compassionate approaches - so we could see 
some interesting AGI wars before they learn how to build a world on 
mutuality.

Cheers, Philip
