Eliezer,

> That's because your view of this problem has automatically factored
> out all the common variables.  All GM cars fail when dropped off a
> cliff.  All GM cars fail when crashed at 120 mph.  All GM cars fail on
> the moon, in space, underwater, in a five-dimensional universe.  All
> GM cars are, under certain circumstances, inferior to telecommuting. 

Good point.

Although not all failures will be of this sort, so the group strategy is still 
useful for at least a subset of the failure cases.

Seems to me, then, that safety lies in a combination of all our best safety 
factors:

-   designing all AGIs to be as effectively friendly as possible - as if
    we had a one-shot chance of getting it right and couldn't afford the
    risk of failure; and AS WELL

-   developing quite a few different types of AGI architecture so that
    the risk of sharing the same class of critical error is reduced; and
    AS WELL 

-   having a society of AGIs with multiple instances of each different
    type - each uniquely trained - so that the systems are less alike
    and their failures are less tightly correlated (a toy sketch of this
    follows below).

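To put a rough number on that third point: here is a toy Monte Carlo sketch 
in Python (my own illustration - the failure probabilities are made-up 
assumptions, not estimates). It shows that adding diverse designs only 
squeezes out the independent failures, while the common-mode failure rate 
sets a floor that no amount of redundancy can lower - which is exactly 
Eliezer's point about the shared variables.

    import random

    # Toy model: hypothetical numbers, purely illustrative.
    P_INDEPENDENT = 0.05   # chance any one design fails for its own reasons
    P_COMMON_MODE = 0.01   # chance of a flaw shared by *every* design
    TRIALS = 100_000

    def p_all_fail(n_designs):
        """Estimate P(every design fails) for an ensemble of n_designs."""
        failures = 0
        for _ in range(TRIALS):
            if random.random() < P_COMMON_MODE:
                # a common-mode flaw takes out every design at once
                failures += 1
            elif all(random.random() < P_INDEPENDENT
                     for _ in range(n_designs)):
                # otherwise total failure requires each design to fail
                # independently
                failures += 1
        return failures / TRIALS

    for n in (1, 2, 5):
        print(n, "designs: P(all fail) ~", round(p_all_fail(n), 4))

With these made-up numbers, P(all fail) drops from about 0.06 with one 
design toward the 0.01 common-mode floor as designs are added - the 
group strategy helps with the independent subset of failures and with 
nothing else.
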
Cheers, Philip
