Wei Dai wrote:

> Ok, I see. I think I agree with this. I was confused by your phrase "Hofstadterian superrationality" because if I recall correctly, Hofstadter suggested that one should always cooperate in one-shot PD, whereas you're saying to cooperate only if you have sufficient evidence that the other side is running the same decision algorithm as you are.
Similarity in this case may be (formally) emergent, in the sense that most or all plausible initial conditions for a bootstrapping superintelligence - even extremely exotic conditions like the birth of a Friendly AI - exhibit convergence to decision processes that are correlated with each other with respect to the one-shot PD. If you have sufficient evidence that the other entity is a "superintelligence", that alone may be sufficient correlation.
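The "cooperate only given sufficient evidence of correlation" policy can be made concrete with a small expected-utility calculation. A minimal sketch, assuming standard illustrative PD payoffs (T=5, R=3, P=1, S=0) and the simplifying assumption that an uncorrelated opponent defects - none of these specifics are from the original post:

```python
# One-shot Prisoner's Dilemma payoffs for the row player:
# (my_move, their_move) -> my payoff, with T > R > P > S.
PAYOFF = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def expected_payoff(my_move: str, p_correlated: float) -> float:
    """Expected payoff of my_move, given probability p_correlated that
    the other agent runs the same decision algorithm and so mirrors my
    move. If uncorrelated, the other agent is assumed to defect (the
    classical game-theoretic answer for one-shot PD)."""
    mirrored = PAYOFF[(my_move, my_move)]
    uncorrelated = PAYOFF[(my_move, "D")]
    return p_correlated * mirrored + (1 - p_correlated) * uncorrelated

def decide(p_correlated: float) -> str:
    """Cooperate iff the evidence of algorithmic correlation is strong
    enough that cooperating has higher expected payoff than defecting."""
    if expected_payoff("C", p_correlated) > expected_payoff("D", p_correlated):
        return "C"
    return "D"
```

With these numbers the agent cooperates once the estimated correlation exceeds 1/3 (since EV(C) = 3p and EV(D) = 1): `decide(0.9)` returns `"C"` while `decide(0.1)` returns `"D"`. The threshold depends entirely on the payoff values and the model of the uncorrelated opponent, which is the sense in which the policy differs from Hofstadter's unconditional cooperation.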

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


