Hmmm.... My friend, I think you've pretty much convinced me with this last batch of arguments. Or, actually, I'm not sure if it was your excellently clear arguments or the fact that I finally got a quiet 15 minutes to really think about it (the three kids, who have all been out sick from school with the flu all week, are all finally in bed ;)
Your arguments are a long way from a rigorous proof, and I can't rule out that there might be a hole in them, but in this last e-mail you were explicit enough to convince me that what you're saying makes logical sense. I'm going to try to paraphrase your argument; let's see if we're somewhere in the neighborhood of harmony...

Basically: you've got these two clones playing a cooperative game, and each one, at each turn, is controlled by a certain program. Each clone chooses his "current operating program" by searching the space of all programs of length < L that finish running in < T timesteps, and finding the one that, based on his study of prior gameplay, is expected to give him the highest chance of winning. But each guy takes on the order of T*2^L timesteps to perform this search (roughly 2^L candidate programs, each simulated for up to T steps).

So your basic point is that, because these clones are acting by simulating programs that finish running in < T timesteps, they're not going to be able to simulate each other very accurately. Whereas a pair of clones each possessing a more flexible control algorithm could perform better in the game, because if a more flexible player wants to simulate his opponent, he can choose to devote nearly ALL his thinking-time in between moves to simulating his opponent. These more flexible players are not constrained to a rigid control algorithm that divides up their time into little bits, simulating a huge number of fast programs. AIXItl does not have the flexibility to say "Well, this time interval, I'm going to keep my operating program the same, and instead of using my time seeking a new operating program, I'm going to spend most of it trying to simulate my opponent, or trying to study my opponent."

HOWEVER... it's still quite possible that the AIXItl clones can predict each other, isn't it? If one of them keeps running the same operating program for a while, then the other one should be able to learn an operating program that responds appropriately to that operating program.
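Just to make the combinatorics of that search concrete, here's a toy Python sketch of the kind of bounded, brute-force program search I'm paraphrasing. To be clear, this is purely illustrative and not the real AIXItl machinery: the bit-string "interpreter", the match-the-history scoring rule, and the names run_program / score_against_history are all stand-ins I've made up for the sake of the example.

```python
# Toy sketch (NOT AIXI-tl itself): brute-force search over all bit-string
# "programs" of length < L, each simulated for at most T steps, picking the
# one that scores best against recorded gameplay history.
from itertools import product

L = 8   # max program length in bits (tiny, for illustration)
T = 16  # max simulation timesteps per program

def run_program(bits, history, max_steps):
    """Toy 'interpreter': treat the bits as a cyclic move table and
    replay it for at most max_steps steps (capped by history length)."""
    if not bits:
        return []
    return [bits[t % len(bits)] for t in range(min(max_steps, len(history)))]

def score_against_history(bits, history):
    """Toy scoring rule: how often the program's move matches the
    recorded move at each past turn."""
    moves = run_program(bits, history, T)
    return sum(m == h for m, h in zip(moves, history))

def select_operating_program(history):
    """Search all programs of length < L: about 2**L candidates,
    each simulated for <= T steps, so ~T * 2**L work per turn."""
    best, best_score = None, -1
    for length in range(1, L):
        for bits in product((0, 1), repeat=length):
            s = score_against_history(bits, history)
            if s > best_score:
                best, best_score = bits, s
    return best, best_score

history = [0, 1, 0, 1, 0, 1]  # recorded gameplay bits (made up)
prog, score = select_operating_program(history)
```

The point of the sketch is just the loop structure: the time budget is eaten by enumerating ~2^L fast programs rather than by deeply simulating any one opponent.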
But I can see that for some cooperative games, it might be unlikely for one of them to keep running the same operating program for a while... they could just keep shifting from program to program in response to each other.

> If AIXI-tl needs general intelligence but fails to develop
> general intelligence to solve the complex cooperation problem, while
> humans starting out with general intelligence do solve the problem, then
> AIXI-tl has been broken.

Well, we have different definitions of "broken" in this context, but that's not a point worth arguing about.

> But we aren't *talking* about whether AIXI-tl has a mindlike operating
> program. We're talking about whether the physically realizable challenge,
> which definitely breaks the formalism, also breaks AIXI-tl in practice.
> That's what I originally stated, that's what you originally said you
> didn't believe, and that's all I'm trying to demonstrate.

Yes, you would seem to have successfully shown (logically and intuitively, though not mathematically) that AIXItls can be dumber in their interactions with other AIXItls than humans are in their analogous interactions with other humans. I don't think you should describe this as "breaking the formalism," because the formalism is about how a single AIXItl solves a fixed goal function, not about how groups of AIXItls interact. But it's certainly an interesting result. I hope that, even if you don't take the time to prove it rigorously, you'll write it up in a brief, coherent essay, so that others not on this list can appreciate it... Funky stuff!! ;-)

-- Ben G