On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote:
> Ben Goertzel wrote:
> . . .
> >> Lee Corbin can work out his entire policy in step (2), before step
> >> (3) occurs, knowing that his synchronized other self - whichever one
> >> he is - is doing the same.
> >
> > OK -- now, if AIXItl were starting out with the right program, it could
> > do this too, because the program could reason "that other AIXItl is
> > gonna do the same thing as me, so based on this knowledge, what should
> > I do...."
>
> It *could* do this but it *doesn't* do this. Its control process is such
> that it follows an iterative trajectory through chaos which is forbidden
> to arrive at a truthful solution, though it may converge to a stable
> attractor.
> . . .
This is the heart of the fallacy. Neither a human nor an AIXI can know
"that his synchronized other self - whichever one he is - is doing the
same". All a human or an AIXI can know is its observations. They can
estimate but not know the intentions of other minds.

Bill
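The distinction Bill draws - *assuming* the other copy acts identically
versus *estimating* its action from observations - can be made concrete
with a toy sketch. This is illustrative only (hypothetical agents I've
named here, not AIXI or AIXItl): a "symmetric" agent hard-codes the
unverifiable premise that its twin chooses as it does, while an
"empirical" agent can only form a probability estimate from the history
it has actually observed.

```python
# Toy sketch (illustrative, not AIXI/AIXItl): contrast an agent that
# assumes its twin acts identically with one that can only estimate
# the twin's action from past observations.
from collections import Counter

def symmetric_agent():
    # Hard-codes the unverifiable premise: "my twin does what I do."
    # Under that premise mutual cooperation beats mutual defection,
    # so it cooperates.
    return "cooperate"

def empirical_agent(observed_twin_actions, threshold=0.5):
    # Knows only its observations: estimates P(twin cooperates) from
    # the observed history and acts on that estimate.
    if not observed_twin_actions:
        return "defect"  # no evidence either way
    counts = Counter(observed_twin_actions)
    p_coop = counts["cooperate"] / len(observed_twin_actions)
    return "cooperate" if p_coop >= threshold else "defect"

print(symmetric_agent())                                      # cooperate
print(empirical_agent(["cooperate", "cooperate", "defect"]))  # cooperate
print(empirical_agent(["defect", "defect", "defect"]))        # defect
```

The symmetric agent's output never depends on evidence, which is exactly
the knowledge claim under dispute; the empirical agent's output is only
ever an estimate, and can be wrong about what the other mind will do.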