Bill Hibbard wrote:
The real flaw in the AIXI discussion was Eliezer's statement:

Lee Corbin can work out his entire policy in step (2), before step
(3) occurs, knowing that his synchronized other self - whichever one
he is - is doing the same.
He was assuming that a human could know that another mind
would behave identically. Of course they cannot; they can
only estimate another mind's intentions based on observations.
I specified playing against your own clone. Under that condition the identity is, in fact, perfect. It is not knowably perfect, but a Bayesian naturalistic reasoner can estimate an extremely high degree of correlation and take actions based on that estimate.
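To make the point concrete, here is a minimal sketch of how such a reasoner might act on an estimated correlation in a one-shot Prisoner's Dilemma against a near-copy. The payoff values and the simple mirror-with-probability-p correlation model are my own illustrative assumptions, not anything specified in the thread:

```python
# Hypothetical sketch: expected utility of each move against a near-copy,
# where `correlation` is the estimated probability that the other player
# makes the same choice you do. Payoff numbers are illustrative assumptions.

# Standard PD payoff matrix: PAYOFF[(my_move, their_move)]
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, the other defects
    ("D", "C"): 5,  # I defect, the other cooperates
    ("D", "D"): 1,  # mutual defection
}

def expected_utility(my_move: str, correlation: float) -> float:
    """Expected payoff if the other player mirrors my move with
    probability `correlation` and plays the opposite move otherwise."""
    other_move = "D" if my_move == "C" else "C"
    return (correlation * PAYOFF[(my_move, my_move)]
            + (1 - correlation) * PAYOFF[(my_move, other_move)])

# With near-perfect estimated correlation (a clone), cooperating wins:
assert expected_utility("C", 0.99) > expected_utility("D", 0.99)
# With no estimated correlation (an independent opponent), defecting wins:
assert expected_utility("D", 0.5) > expected_utility("C", 0.5)
```

The estimate never has to be certainty: as long as the estimated correlation is high enough, cooperating has the higher expected payoff, which is all the argument requires.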

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
