> For there to be another attractor F', it would of necessity have to be 
> an attractor that is not desirable to us, since you said there is only 
> one stable attractor for us that has the desired characteristics. 

Uh, no.  I am not claiming that there is ONLY one unique attractor (that has 
the desired characteristics).  I am merely saying that there is AT LEAST one 
describable, reachable, stable attractor that has the characteristics that we 
want.  (Note:  I've clarified a previous statement by adding the ONLY and AT 
LEAST and the parenthetical expression "that has the desired characteristics".)

> That's a better way of putting it. Conflicts will be possible, but 
> they'll always be resolved via exchange of information rather than bullets.

Yes, exactly.

> You've said elsewhere that the constraints on how it deals with 
> non-friendlies are rather minimal, so while it might be 
> empathic/empathetic, it will still have no qualms about kicking ass and 
> inflicting pain where necessary.

I really don't like the particular qualifier "rather minimal".  I would argue 
(and will later attempt to prove) that the constraints are still actually as 
close to Friendly as rationally possible because that is the most rational way 
to move non-Friendlies to a Friendly status (which is a major Friendliness goal 
that I'll be getting to shortly).  The Friendly will indeed "have no qualms 
about kicking ass and inflicting pain where necessary" but the "where necessary" 
clause is critically important since a Friendly shouldn't resort to this (even 
for Unfriendlies) until it is truly necessary.

> I think you're fudging a bit here. If we are only likely to occupy the 
> circumstance space with probability less than 1, then the intentional 
> destruction of the human race is not 'most certainly ruled out': it is 
> with very high probability less than 1 ruled out. I'm not trying to say 
> it's likely; only that it's possible. I make this point to distinguish 
> your approach from other approaches that purport to make absolute 
> guarantees about certain things (as in some ethical systems where 
> certain things are *always* wrong, regardless of context or circumstance).

Um.  I think that we're in violent agreement.  I'm not quite sure where you 
think I'm fudging.

> And we are not yet f-beings in general, since our current location in 
> state space is so far from F. Or do you believe that some (many?) of us 
> are close to F?

We are strongly tending towards f-hood (with some of us closer than others).

> I don't think it's inflammatory or a case of garbage in to contemplate 
> that all of humanity could be wrong. For much of our history, there have 
> been things that *every single human was wrong about*. This is merely 
> the assertion that we can't make guarantees about what vastly superior 
> f-beings will find to be the case. We may one day outgrow our attachment 
> to meatspace, and we may be wrong in our belief that everything 
> essential can be preserved in meatspace, but we might not be at that 
> point yet when the AI has to make the decision.

Why would the AI *have* to make the decision?  It shouldn't be for its own 
convenience.  The only circumstance that I could think of where the AI should 
make such a decision *for us* over our objections is if we would be destroyed 
otherwise (but there was no way for it to convince us of this fact before the 
destruction was inevitable).

> Yes, when you talk about Friendliness as that distant attractor, it 
> starts to sound an awful lot like "enlightenment", where self-interest 
> is one aspect of that enlightenment, and friendly behavior is another 
> aspect.

Argh!  I would argue that Friendliness is *not* that distant.  Can't you see 
how the attractor that I'm describing is both self-interested and Friendly 
because **ultimately they are the same thing**?  (OK, so maybe that *IS* 
enlightenment. :-)

> Thanks for the detailed response.

Your contributions are *very* helpful.  Thank *you* for taking the time.

        Mark

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com