Brad,

Maybe what you said below is the key to friendly AGI....

> I don't think any human alive has the moral and ethical underpinnings
> to allow them to resist the corruption of absolute power in the long
> run.  We are all kept in check by our lack of power, the competition
> of our fellow humans, the laws of society, and the instructions of our
> peers.  Remove a human from that support framework and you will have a
> human that will warp and shift over time.  We are designed to exist in
> a social framework, and our fragile ethical code cannot function
> properly in a vacuum. 

If we create a *community* of AGIs with ethics-oriented architectures and ethical training, then *they* might stand a chance of policing themselves.

The situation is analogous to how we try (so far with not enough success, but with improving odds) to protect non-human species.

Humans are the biggest threat to non-human species (well demonstrated), but there are more and more efforts being made by humans to stop that and to give other species a chance to survive and continue evolving.

I think that we need to structure and train AGIs knowing that the same scenario could play out between them and us as has happened between us and less powerful life - but we have the advantages that:
-   we've seen where WE went wrong
-   we can shape the deep ethical structure of AGIs from the start with
    this meta issue in mind.

Cheers, Philip