> > Brad Wyble wrote:
> > > > Tell me this, have you ever killed an insect because it bothered
> > > > you?
> >
> > Well, of course. But there's bothering and BOTHERING. Let me give you
> > an example: I will often squash a deer-fly or a mosquito because a) they
> > hurt and b) they can infect me with disease. However I live quite
> > comfortably with dust mites and spiders and we never bother each other.
It's not hard to imagine that an AI would consider the activities of
humans hurtful, or even capable of causing disease. Consider the
possibility of Congress trying to enact anti-AGI laws after perceiving
that it is about to become irrelevant as the Singularity unfolds. Under
the ethical code you describe, the AGI would swat them like a bug, with
no more concern than you swatting a mosquito.

All I'm trying to do is shift the focus for a few moments to our own
ethical standards as people. If we were put in the shoes of an AGI,
would we behave well towards the inferior species? From everything I've
seen of the behavior of people, I'm not at all sure. If our own ethical
codes are so suspect, what hope have we of teaching AGIs to do better?

Philip brings up the point that a community of AGIs could possibly
self-police. I agree.

> > (By the way, Ultimate Power is not on my list of personal goals ;)

Nor, would one presume, on an AGI's. They might end up with it anyway.
