RE: Belief Statements
Here's how I look at the question of whether a bit string, if accidentally implemented as part of another program, would be conscious.

First, it's a little confusing what we mean by a bit string. Is this the program of the computer? A snapshot of its state? Can a program or a snapshot be conscious? Suppose that instead of talking about a bit string, we consider the actual sequence of states that the computer goes through. Then we could ask: if this sequence of states matched the sequence of states that was part of a conscious program, but in this case they happened accidentally as part of some other program, would they nevertheless create a consciousness?

Second, even with this definition, it's an unreasonable question. That is, given what we know about the complexity of consciousness, it doesn't make sense that a computer could accidentally run a program that matched the run of a conscious simulation for a long enough period to correspond to a perceptible moment of consciousness. The brain has something like 10^12 neurons and 10^15 synapses, and they'd probably have to be simulated at microsecond resolution (if not a million times finer) to get a simulation that was at all accurate. This means that there would probably be something like 10^23 bits of information in a simulation of a tenth of a second of a human brain, if you capture all of the connectivity and timing information. There's no way that you could accidentally match a 10^23 bit pattern in this universe. Even if every sub-atomic particle in the observable universe were a computer, you'd be hard pressed to match even a 300 bit pattern by accident. The additional difficulty of accidentally matching a brain pattern is so much greater that our minds can't even conceive of how impossible it is.

Third, even though it will never happen in our universe, if we believe in the multiverse then we have to admit that it will happen by accident, somewhere.
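Hal's claim about the 300 bit pattern can be sanity-checked with back-of-the-envelope arithmetic. The figures below are my own assumed order-of-magnitude estimates (particle count, Planck-rate operations, age of the universe), chosen to be deliberately generous:

```python
import math

# Order-of-magnitude estimates (assumptions, deliberately generous):
particles = 1e80        # sub-atomic particles in the observable universe
ops_per_second = 1e43   # one pattern-attempt per Planck time, per particle
age_seconds = 4.3e17    # roughly the age of the universe

trials = particles * ops_per_second * age_seconds
bits_reachable = math.log2(trials)   # largest pattern size matchable by luck

print(f"total trials ~ 2^{bits_reachable:.0f}")   # a few hundred bits' worth
```

Even with these absurdly generous assumptions the accident budget is a few hundred bits, so Hal's 300-bit figure is in the right regime, and either way it is fantastically short of the ~10^23 bits estimated for the brain-state pattern.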
So we might still want to answer the question of whether this accidental instantiation of the computation is conscious. I would approach this from the Schmidhuber perspective that all programs exist and run, in a Platonic sense, and this creates all computable universes. Some programs create universes like ours, which have conscious entities. Other programs create random universes, which may, through sheer outlandish luck, instantiate patterns which match those of conscious entities. All consciousnesses exist in this model, and as Bruno emphasizes, from the inside there is no way to know which program instantiated you. In fact this may not even be a meaningful question. But what are meaningful to ask, in the Schmidhuber sense, are two things. First, what is the measure of your consciousness: how likely are you to exist? And second, among all of the instantiations of your consciousness in all the universes, how much of your measure does each one contribute? This, then, is how I would approach the question. Not, is this accidental instantiation conscious; but rather, how much measure do such accidental instantiations contribute, compared to non-accidental ones like those we see in the universe around us? I suggest that the answer is that accidental instantiations only contribute an infinitesimal amount, compared to the contributions of universes like ours. Our universe appears to have extremely simple physical laws and initial conditions. Yet it formed complex matter and chemistry which allowed life to evolve and consciousness to develop. Maybe we got some lucky breaks; the universe doesn't seem particularly fecund as far as we can tell, but conscious life did happen. The odds against it were not, as in the case of accidental instantiation, an exponential of an astronomical number. This means that the contribution to a consciousness from a lawful universe like the one we observe is almost infinitely greater than the contribution from accidental instantiations. 
Therefore, I would suggest that the answer to the question of whether an accidental instantiation is conscious is simply this: it doesn't matter. Even if it is conscious, its contribution to the measure of that conscious experience is so small as to be completely negligible. Hal Finney
RE: Belief Statements
On 28 Jan 2005 Brent Meeker wrote: I'm not sure I understand the computational hypothesis - and I certainly don't *believe* it.

So you don't believe that even in principle a digital computer can be conscious? I think the challenge to this is going to come not from theoretical considerations, but from practical developments in AI in the coming decades. There will come a point where insisting that a computer is not conscious will be no more plausible than insisting that you alone are conscious.

>(1) This sequence of binary digits has a special organisation, which can be
>understood as conforming to certain rules and relationships in a particular
>programming language;
>
>(2) Implementing the binary sequence on a digital computer results in a
>simulated world with inhabitants who are self-aware.
>
>You can stipulate that (1) must be true for (2) to be true, but it does not
>thereby follow that any conscious being in the physical world must be able
>to understand the details of (1) in order for (2) to be true.

Sure.

>For example,
>suppose the computer language were devised by a long extinct civilization,
>and no-one alive now is able to understand it: should that make any
>difference to the simulation "from inside"?

A good question. Another is: given any bitstring and a certain world, is there a language in which that bitstring simulates that world? Yes. This is the basic idea I am getting at. I don't see any way around it.

>Similarly, if the entire
>computation occurs by chance in the course of another computation - a
>spreadsheet, a cryptography cracking program on the planet Zork, distributed
>throughout a computer network in tiny pieces as in the Egan story - how can
>the conscious beings "inside" possibly know this?

This would seem to be contrary to (1) supra - the tiny pieces no longer have "a special organisation".

No: they always have a special organisation, given the appropriate language, as per your point above.
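Stathis's point - that for any bitstring and any world there is some language under which the one simulates the other - can be made concrete with a toy construction. The XOR scheme and the names below are my own illustration, not anything from the thread:

```python
def make_language(bitstring: bytes, world: bytes) -> bytes:
    """Build an interpretation key under which `bitstring` decodes to
    `world`. All the structure lives in the key (the "language"),
    not in the bits themselves (one-time-pad style)."""
    assert len(bitstring) == len(world)
    return bytes(a ^ b for a, b in zip(bitstring, world))

def interpret(bitstring: bytes, language: bytes) -> bytes:
    """Read the bitstring through the given language."""
    return bytes(a ^ b for a, b in zip(bitstring, language))

# Arbitrary "spreadsheet" bits, standing in for any bitstring at all:
noise = bytes([0x8F, 0x12, 0xA0, 0x33, 0x5E])
lang = make_language(noise, b"hello")
```

The standard objection is visible immediately: the "language" is exactly as complex as the world it decodes, so the bitstring itself does no work - which is one way of framing the dispute running through this thread.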
--Stathis Papaioannou
Re: Belief Statements
From: [EMAIL PROTECTED] ("Hal Finney") To: everything-list@eskimo.com Subject: Re: Belief Statements Date: Thu, 27 Jan 2005 12:16:24 -0800 (PST)

It is true that there are some physical systems for which we can predict the future state without calculating all intermediate states. Periodic systems fall into this category if we can figure out analytically what the period is. But there are other systems where this is thought to be impossible; for example, chaotic systems. Chaotic systems are ones whose future behavior is sensitively dependent on the current state. Making even an infinitesimal change to the current state will cause massive changes in the future. I don't think it would be possible with any computational model to predict the state of a chaotic system far in the future without computing intermediate states.

My guess is that consciousness as we know it is inherently chaotic. It seems like small changes to our beliefs and knowledge can lead to large changes in behavior. So often we experience being torn between alternate courses of action, where the tiniest change could tip us from one choice to the other.

Neural behavior is inherently chaotic as well. Neurons are believed to sum the recent activity levels on their synapses, and when this exceeds a threshold, the neuron suddenly and catastrophically fires a nerve impulse. It then goes through a refractory period (about 1 millisecond) in which it is unable to fire again until it has "rested" and regathered its strength, at which point it goes back to summing its inputs. If we plotted the net input strength to the neuron, it would be an irregular line with lots of little jags and bumps, and whenever it manages to exceed a certain level, there is a sudden firing. Probably we would often see the stimulation level approach the threshold and fall back, not quite reaching it, until it is just reached and another nerve impulse is fired.
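The sensitive dependence Hal describes is exactly what the textbook chaotic maps exhibit. A minimal sketch, with the logistic map standing in for the neural dynamics purely as an illustration:

```python
def diverge(x0, eps, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x) from two starting
    points differing by eps, returning the largest gap observed."""
    x, y, worst = x0, x0 + eps, 0.0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
        worst = max(worst, abs(x - y))
    return worst

# A perturbation of one part in 10^12 stays invisible for a few steps,
# but the gap roughly doubles each iteration and soon becomes order 1:
early = diverge(0.3, 1e-12, 8)
late = diverge(0.3, 1e-12, 100)
```

No shortcut formula recovers the late-time state from the initial one; to know where the trajectory ends up, you iterate - which is Hal's point about having to compute the intermediate states.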
This kind of sensitive dependence on initial conditions is a recipe for mathematical chaos. Of course, this is not a rigorous proof, and it is conceivable that consciousness is not in fact chaotic even though it subjectively seems so, and even though its substrate (the brain's neural net) is. Nevertheless it would be almost unbelievably bizarre to imagine that you could calculate the mental state of an 80 year old man, with all the memories of a lifetime, without actually calculating the experiences that led to those memories.

In Egan's story, the computer is supposed to calculate his conscious experience of the 10th second first, then the 9th second, and so on. Suppose in the first (subjective) second he stutters on saying the number "one", out of nervousness. Then the memory of that stutter will be present as he recites all the other numbers. Perhaps he will enunciate them more carefully in order to compensate. So when the system calculates that 10th second, it has to know what happened during the first second. Those events will be latent in his memories during the 10th second, and may influence his behavior. His conscious reactions to earlier events are in his memory at later times. So I don't see how it could possibly work to calculate the 10th second first.

Two other minor points: in Egan's story, this experiment was not being done on "dust"; it was done on an ordinary computer. It was the result of this experiment - that there was no subjective awareness of the time scrambling - which was supposed to lend credence to the dust hypothesis. Second, quantum computers cannot efficiently solve NP-complete problems, or at least they are not known to be able to. It's possible that ordinary computers can solve NP-complete problems; no one has ever proven that they can't (this is the famous P = NP problem of computer science).
And if it turns out that ordinary computers can handle them efficiently, then of course quantum computers will be able to as well, since they are a superset of ordinary computers. But if it turns out that P != NP and ordinary computers can't solve NP-complete problems efficiently, there is no evidence that the situation will be different for quantum computers. Hal Finney
Re: Belief Statements
Stathis Papaioannou wrote: For example, if I am running an AI program on my computer and a particular bitstring is associated with the simulated being noting, "I think, therefore I am", then should not the same bitstring arising by chance in the course of, say, a spreadsheet calculation give rise to the same moment of consciousness - regardless of whether the spreadsheet user or anyone other than the simulated being himself is or can be aware of this?

Only if you believe it's the bitstring itself which is mapped to a particular conscious experience, rather than the causal pattern enacted by the AI program's computation that led it to produce that bitstring. So if you believe in "psychophysical laws" (to use a term I have seen some philosophers use), it depends on how these laws map facts about the physical world to facts about first-person experience.

Jesse
RE: Belief Statements
Brent Meeker wrote:

>For example, if I am running an AI program on my computer and a particular
>bitstring is associated with the simulated being noting, "I think, therefore
>I am", then should not the same bitstring arising by chance in the course
>of, say, a spreadsheet calculation give rise to the same moment of
>consciousness - regardless of whether the spreadsheet user or anyone other
>than the simulated being himself is or can be aware of this?

I think not. Consciousness is a narrative the brain constructs to form memories. It has a context. It is consciousness *of* something. A bitstring in a spreadsheet has a different context (unless the spreadsheet is a simulation of some "world") and isn't fulfilling the function of consciousness.

So, how long a bitstring do you need to create a context? You could change the argument a little and consider the entire simulation of a world complete with conscious inhabitants; it would still only amount to a very long sequence of 1's and 0's running on a digital computer. If you believe in the computational hypothesis of mind, you believe two things about this computer program:

(1) This sequence of binary digits has a special organisation, which can be understood as conforming to certain rules and relationships in a particular programming language;

(2) Implementing the binary sequence on a digital computer results in a simulated world with inhabitants who are self-aware.

You can stipulate that (1) must be true for (2) to be true, but it does not thereby follow that any conscious being in the physical world must be able to understand the details of (1) in order for (2) to be true. For example, suppose the computer language were devised by a long extinct civilization, and no-one alive now is able to understand it: should that make any difference to the simulation "from inside"?
Similarly, if the entire computation occurs by chance in the course of another computation - a spreadsheet, a cryptography cracking program on the planet Zork, distributed throughout a computer network in tiny pieces as in the Egan story - how can the conscious beings "inside" possibly know this? --Stathis Papaioannou
Re: Belief Statements
On 28 Jan 2005 Bruno Marchal wrote:

At 22:19 27/01/05 +1100, Stathis Papaioannou wrote: For example, if I am running an AI program on my computer and a particular bitstring is associated with the simulated being noting, "I think, therefore I am", then should not the same bitstring arising by chance in the course of, say, a spreadsheet calculation give rise to the same moment of consciousness - regardless of whether the spreadsheet user or anyone other than the simulated being himself is or can be aware of this?

But from the point of view of the simulated being himself, he cannot have the slightest clue about which executions he is supported by. He is dispersed in 2^aleph0 computational histories and he can only bet on his most probable consistent extensions. You always talk as if the mind-body relation were one-one, when with comp, although you can still attach a mind to a ["piece of relative object" appearing in your most probable histories], the mind of "the piece of relative object" cannot attach an object to itself, only an infinity of such objects. With comp the mind-body relation is one-one in the body -> mind direction, and one-many in the mind -> body direction. It is counter-intuitive, but no less so than QM without collapse (Everett, Deutsch).

Bruno, I don't see where you think I disagree with you. I agree that a particular simulated mind may have multiple physical implementations, and that it is in general impossible for the mind to know which implementation it is supported by. I make the further point that it is not necessary, in general, for any conscious being at the level of the physical implementation to be aware that the implementation is being run, in order for the simulated being to be conscious.

--Stathis Papaioannou
The chemistry of COMBINATORS
Hi,

For those who are interested in the comp hypothesis, it is hardly a luxury to dig a little bit into computer science. If only to go toward explicit definitions of notions like computation, computational history, consistent extensions, models, etc. I open this thread for the very long term.

One of the jewels of computer science is the theory of combinators. They were discovered and presented in a talk by Moses Schoenfinkel (from Moscow) in 1920, and rediscovered independently by Haskell Curry (USA) in 1930. Church rediscovered them too, under the form of closed lambda expressions, for which he would postulate his famous "Church thesis": the closed lambda expressions are enough to define all computable functions (from N to N, where N = the set of positive integers). There is no "Schoenfinkel thesis" nor any "Curry thesis", as opposed to "Church thesis". Indeed the goal of both Schoenfinkel and Curry was to "rebuild" an alternative to the whole of mathematics. One of Schoenfinkel's motivations was to eliminate all variables. Curry's motivation was to find the most elementary finitary operations rich enough to (re)build mathematics, and this preferably without formal sets, but only a finite set of primitive operations.

Actually Combinatory Logic can "easily" be shown rich enough to represent the partial recursive functions, so the combinators give a nice and pleasant computer programming language. (And indeed LISP and the functional programming languages are all descendants or cousins of the combinators/lambda calculus.) But at some fundamental level combinatory logic is much more than a programming language: it is really a possible road to tackle the problem of the nature of mathematics, and with comp: the nature of reality. Also, combinatory logic is very fine grained, and this will enable us to introduce important nuances at a very cheap price.
Here is a short description of combinatory logic (beware: in the preceding post I made a typo):

STATIC:
1) K is a molecule (called the "kestrel" in Smullyan's terminology)
2) S is a molecule (the "starling")
3) if x and y are molecules then (x y) is a molecule.

From this you can easily enumerate all possible molecules: K, S, (K K), (K S), (S K), (S S), ((K K) K), ((K S) K) ...

DYNAMICS: (X and Y stand for any molecules)
1) ((K X) Y) = X (law of the kestrel)
2) (((S X) Y) Z) = ((X Z) (Y Z)) (law of the starling)

1) means that for any molecule X, the molecule (K X) is stable and does not evolve (except perhaps through the evolution of X). I will say that a molecule of the shape (K X) is a charged kestrel. Now if (K X) comes to interact with some other molecule Y, giving ((K X) Y), you get an explosion leaving as the result of the reaction just the molecule X.

So for example:
K is stable
(K K) is stable
(K (K K)) is stable
((K K) K) is unstable; indeed it matches law 1), with X = K and Y = K, so the reaction is triggered, giving K.
((K (K K)) (K K)) gives (K K), ok?

Well, the price of having a conceptually very simple syntax (static) is that the notation can very quickly become a little cumbersome. The tradition is to drop the left parentheses, abbreviating (((a b) c) d) by abcd. The laws become:

KXY = X
SXYZ = XZ(YZ)

The examples become:
K is stable
KK is stable
K(KK) is stable
KKK is unstable and "decays" into K
and finally K(KK)(KK) gives (KK), ok?

What does S(KK)(KK) give? Solution: it remains S(KK)(KK). It is stable because S needs "three" molecules to trigger its dynamic. So S(KK)(KK)(KK) gives KK(KK)(KK(KK)), just as SKKK gives KK(KK), which is still unstable and gives K.

Exercises (taken from the course "My First Everything Theory", primary school, Year 2127 :)

Evaluate:
(SS)KKK = ?
KKK(SS) = ?
(KK)(KK)(KK) = ?
(KKK)(KKK)(KKK) = ?
Evaluate: K KK KKK K KK KKK K KK

A little more advanced exercise: is there a molecule, let us call it I, having the following dynamic (X refers to any molecule):

IX = X

So a solution is some molecule made up from K and S which, applied to any molecule, gives as the result of the reaction that very molecule, unchanged. For example KXS is not a solution: although it gives X, it is not of the shape (molecule X).

Of course you can learn a lot by searching "combinators" or "lambda calculus" on the net. Two samples:

For those who know, here is a paper on Kolmogorov Complexity viewed through the combinators. It can be used as a quick introduction to combinators. Kolmogorov Complexity in Combinatory Logic, John Tromp, http://homepages.cwi.nl/~tromp/cl/CL.pdf

And here is a much more technical paper on some advanced stuff translating an amazing idea of Girard, the geometry of interaction (GOI), in terms of combinators: http://www.mathnet.or.kr/papers/Pennsy/Haghverdi/main7.pdf (Needs Category Theory.)

Soon I will give the solution of the exercises. I will give you "my second everything theory (Year 2127)". Then a third ... I let you meditate on the following "philosophical" question "does Ke
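The STATIC/DYNAMICS rules above are easy to mechanise, which makes the exercises checkable. Here is a minimal sketch in Python; the tuple representation and the function names are my own choices, not anything from Bruno's post:

```python
K, S = "K", "S"   # the two primitive molecules

def app(*ts):
    """Left-associative application: app(S, K, K) means ((S K) K)."""
    t = ts[0]
    for u in ts[1:]:
        t = (t, u)
    return t

def step(t):
    """Perform one leftmost reduction step; return (term, changed)."""
    if isinstance(t, str):
        return t, False
    f, x = t
    if isinstance(f, tuple) and f[0] == K:           # ((K X) Y) -> X
        return f[1], True
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == S:
        X, Y, Z = f[0][1], f[1], x                   # (((S X) Y) Z) -> ((X Z) (Y Z))
        return ((X, Z), (Y, Z)), True
    nf, changed = step(f)                            # otherwise reduce inside
    if changed:
        return (nf, x), True
    nx, changed = step(x)
    return ((f, nx), True) if changed else (t, False)

def normalize(t, limit=1000):
    """Reduce until stable; give up after `limit` steps, since some
    molecules never stabilise."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            break
    return t

def show(t):
    """Print with the usual convention of dropping left parentheses."""
    if isinstance(t, str):
        return t
    f, x = t
    return show(f) + (show(x) if isinstance(x, str) else "(" + show(x) + ")")
```

For instance, show(normalize(app(K, (K, K), (K, K)))) reproduces the K(KK)(KK) example from the post, and S(KK)(KK) indeed stays put; the exercises can be checked the same way.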
Re: Belief Statements
At 22:19 27/01/05 +1100, Stathis Papaioannou wrote: For example, if I am running an AI program on my computer and a particular bitstring is associated with the simulated being noting, "I think, therefore I am", then should not the same bitstring arising by chance in the course of, say, a spreadsheet calculation give rise to the same moment of consciousness - regardless of whether the spreadsheet user or anyone other than the simulated being himself is or can be aware of this?

But from the point of view of the simulated being himself, he cannot have the slightest clue about which executions he is supported by. He is dispersed in 2^aleph0 computational histories and he can only bet on his most probable consistent extensions. You always talk as if the mind-body relation were one-one, when with comp, although you can still attach a mind to a ["piece of relative object" appearing in your most probable histories], the mind of "the piece of relative object" cannot attach an object to itself, only an infinity of such objects. With comp the mind-body relation is one-one in the body -> mind direction, and one-many in the mind -> body direction. It is counter-intuitive, but no less so than QM without collapse (Everett, Deutsch).

Bruno http://iridia.ulb.ac.be/~marchal/
Re: Belief Statements
At 08:38 26/01/05 -0500, Tianran Chen wrote:

Hal Finney wrote: I had a problem with the demonstration in Permutation City. They claimed to chop up a simulated consciousness timewise, and then to run the pieces backwards: first the 10th second, then the 9th second, then the 8th, and so on. And of course the consciousness being simulated was not aware of the chopping. The problem is that you can't calculate the 10th second without calculating the 9th second first. That's a fundamental property of our laws of physics and I suspect of consciousness as we know it. This means that what they actually did was to initially calculate seconds 1, 2, 3... in order, then to re-run them in the order 10, 9, 8. And of course the consciousness wasn't aware of the re-runs. But it's not clear that from this you can draw Egan's strong conclusions about "dust". It's possible that the initial, sequential run was necessary for the consciousness to exist.

I doubt this is the case.

But the sequential run, actually the infinity of sequential runs, exist(s) like any runs of any partial recursive functions exist in any of those representations allowed by the arithmetical relations. From "inside" an observer cannot distinguish real, virtual or just arithmetical realities (it is a theorem with the comp hyp and a reasonable definition of observation).

First of all, I don't think you should call it a "law" at all, since such a property is derived purely from the interpretations we have made so far about our world. Although these interpretations (QM, relativity, superstrings, etc.) are in favor now, they are logically no more "valid" than Newton's physics at his time (or even now). If we call this time dependency a "defect", then we (still) do not know whether it is a defect of the theories we favored, or a defect of the world we are in now, or a defect of our reasoning ability, or even a resultant defect induced from some other defect of our world.
An infinite (or at least very large) number of theories can be developed based on a finite number of observed facts, just as an infinite number of curves can pass through a finite number of common points. However, we have principles like Occam's Razor to choose between them. How do we know that some other theory may not suffer from this defect?

Sure. That is why it is better to build a TOE from introspection than from observation. Then you can make it communicable in case you show it is the output of a machine belonging to a class of natural introspectors. After that you can still compare with the facts. A case is made with the natural introspector played by the "Lobian" machine (cf url below).

Second, even with the physics we use nowadays, there are still simple problems that can be calculated NOT IN ORDER. For instance, the displacement of a single pendulum at any time can be calculated regardless of its history. Put more formally, there exist Turing machines that can calculate it in a constant number of steps (with regard to time). More generally, dynamical systems and complex systems are the only things that have a "history". However, many dynamical systems can be translated (however messily) into simple systems of equations that can be solved in constant time by some Turing machine. Take a gas for example: the position of each molecule is no doubt a hard problem that can only be expressed as a dynamical system. However, if we talk about the gas at a higher level, in terms of volume, pressure, and temperature, then most problems can be expressed as simple systems that can be calculated in constant time. Finally, our physical world may be such that some problems cannot be solved in constant time. This has been talked about quite thoroughly in the discussion about super-Turing computation. I don't have much to add to that.
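Chen's pendulum example can be sketched concretely: in the small-angle regime the closed form jumps straight to the state at any time t, while a generic integrator must grind through every intermediate state. The amplitude, frequency, and step count below are my own illustrative choices:

```python
import math

def pendulum_direct(t, amplitude=0.1, omega=2.0):
    """Small-angle pendulum: the closed form theta(t) = A*cos(omega*t)
    gives the displacement at any time with no history computed."""
    return amplitude * math.cos(omega * t)

def pendulum_stepped(t, amplitude=0.1, omega=2.0, n=200_000):
    """The same system integrated step by step (semi-implicit Euler):
    to know theta at time t, every intermediate state is visited."""
    dt = t / n
    theta, vel = amplitude, 0.0
    for _ in range(n):
        vel -= (omega ** 2) * theta * dt
        theta += vel * dt
    return theta
```

Both routes agree to good accuracy, but only the first computes "out of order" - which is the distinction between systems with and without an irreducible history.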
Actually the comp hyp, once you distinguish first and third person points of view, makes part of reality not Turing-emulable at all, at least a priori (it is a consequence of the universal dovetailer argument). The apparent computability of physical laws must be explained, and this without invoking any magical selector of a substantial reality (universe).

Conclusion: A world can be simulated IN or OUT OF ORDER, depending on the physics to be simulated, the world the simulator is in, and the design of the simulator (which is related to the level of intelligence of the designer in this particular case).

It is not relevant. The ORDER of such a simulation is defined from inside, by the simulated people. From outside you need less than the block-arithmetical reality.

Bruno http://iridia.ulb.ac.be/~marchal/
Re: Belief Statements
On 27 Jan 2005 Tianran Chen wrote:

Hal Finney wrote: I had a problem with the demonstration in Permutation City. They claimed to chop up a simulated consciousness timewise, and then to run the pieces backwards: first the 10th second, then the 9th second, then the 8th, and so on. And of course the consciousness being simulated was not aware of the chopping. The problem is that you can't calculate the 10th second without calculating the 9th second first. That's a fundamental property of our laws of physics and I suspect of consciousness as we know it. This means that what they actually did was to initially calculate seconds 1, 2, 3... in order, then to re-run them in the order 10, 9, 8. And of course the consciousness wasn't aware of the re-runs. But it's not clear that from this you can draw Egan's strong conclusions about "dust". It's possible that the initial, sequential run was necessary for the consciousness to exist.

I doubt this is the case. . . .

Second, even with the physics we use nowadays, there are still simple problems that can be calculated NOT IN ORDER. For instance, the displacement of a single pendulum at any time can be calculated regardless of its history. Put more formally, there exist Turing machines that can calculate it in a constant number of steps (with regard to time). More generally, dynamical systems and complex systems are the only things that have a "history". However, many dynamical systems can be translated (however messily) into simple systems of equations that can be solved in constant time by some Turing machine. Take a gas for example: the position of each molecule is no doubt a hard problem that can only be expressed as a dynamical system. However, if we talk about the gas at a higher level, in terms of volume, pressure, and temperature, then most problems can be expressed as simple systems that can be calculated in constant time. Finally, our physical world may be such that some problems cannot be solved in constant time.
This has been talked about quite thoroughly in the discussion about super-Turing computation. I don't have much to add to that. Conclusion: A world can be simulated IN or OUT OF ORDER, depending on the physics to be simulated, the world the simulator is in, and the design of the simulator (which is related to the level of intelligence of the designer in this particular case).

These considerations are valid if we are discussing how one would actually go about simulating a world or mind in OUR universe, so that we could interact with it, or at least eavesdrop. But Egan's point in the novel, as I understood it, was that the requisite computations would occur (and occur necessarily) in noise, perfectly hidden from us. Trying to "find" the computations in the noise would be at least as difficult as building a conventional computer and programming it from scratch, so in this sense the claim that there is this hidden information there at all is meaningless. However, if we allow that a computation can give rise to consciousness, why should that consciousness be contingent on our ("we" being conscious beings at the computer's level) ability to observe and recognise it as such?

For example, if I am running an AI program on my computer and a particular bitstring is associated with the simulated being noting, "I think, therefore I am", then should not the same bitstring arising by chance in the course of, say, a spreadsheet calculation give rise to the same moment of consciousness - regardless of whether the spreadsheet user or anyone other than the simulated being himself is or can be aware of this?

--Stathis Papaioannou