At 15:33 -0700 13/07/2002, Hal Finney wrote:

>Googling on "causality" led to a brief web page that summarized
>2000 years of philosophical thought in a few paragraphs,
>http://www.helsinki.fi/~mqsalo/documents/causalit.htm. Generally there
>seem to be two schools of thought; one is that causality is an artifact
>of our minds' efforts to understand and interpret the world; the other
>is that causality has objective reality.
With comp you get both at once. Causality can be seen as a law of mind.
Eventually and fundamentally, causal computations rest on arithmetical
laws, like: (3 divides 17) entails ((3 = 1) or (3 = 17)), so that
(3 does not divide 17), etc.

>The real mystery is why our physical laws are time symmetric.

But what are physical laws? Where do those laws come from?

>Functionalists adopt essentially the view that Bruno calls COMP,
>that consciousness is a computational phenomenon and any system which
>implements the corresponding computation will be conscious. This is
>the foundation for ideas of "uploading", that is, brain simulation by
>computer. Functionalists believe such simulations are conscious in the
>same way that the brain was, provided that the simulation is performing
>the same computation that the brain did.

OK. Note that functionalists presuppose that the level is high (the
biological brain level, for example). Although in the thought experiment
I also presuppose a high level, I explicitly show that the reasoning
works at any level (once one exists). Even if the only way to duplicate
me consists in solving the DeWitt-Wheeler equation of the "entire
quantum cosmic universe", the conclusion prevails. This is because, for
example, the UD will generate solutions of the DeWitt-Wheeler equation.
Comp entails that the universe is not computable (not generated by a
program) except if "I am the universe". This case is a degenerate form
of comp with a non-comp look.

>This then raises the question of whether a particular physical system
>(like a computer) is implementing a particular computation (which is
>an abstract program, perhaps expressed in some computing language).
>Surprisingly, this is a difficult and yet unsolved question in philosophy.

Surprisingly?

>No one has ever come up with a widely accepted, hard and fast rule or
>procedure to decide whether a physical system is implementing a given
>computation.
There is an interesting result, known as Rice's Theorem, which shows
that there is no algorithm capable of deciding whether a given program
computes the factorial function (or any other particular function!).
If you find that hard to believe, tell me whether the following program
computes the factorial:

  Begin <is-it-fact? var = n>
    Given n, search for a formal proof of the twin primes conjecture
    in Peano Arithmetic; once you get it, compute and output n*(n-1)*...1
  End <is-it-fact?>

This proves nothing, of course. But the general result is a direct
consequence of universality + the self-referential abilities of
machines. Little programs can behave in "uncomputable" (in the sense of
unpredictable) ways. With comp there are two forms of indeterminacy:
1) an immediate first-person indeterminacy due to self-duplicability,
and 2) a third-person unpredictability in the long run. There is
evidence that this long-run unpredictability is related to deterministic
chaos. Both are important in the rise of consciousness + physical laws.

>Broadly speaking, the basic approach is to set up a mapping or
>correspondence between aspects of the physical system and aspects
>of the abstract computation.

Don't forget to set up a mapping between the abstract computation and
the personal experience of the person being simulated.

>This works well in this case, but there are two problems. The first is
>that it does not capture the causal nature of the program. A computer
>which was not actually computing, but just playing back a recorded pattern
>of charge distributions, would successfully satisfy the mapping. I'll say
>more about this in a moment. The second problem is that by making the
>mapping more elaborate, one can show that simple, even trivial systems
>are implementing complex computations; essentially you do all the work
>in the mapping. It's hard to set up a concrete rule for how complex
>the mapping can be, and I don't believe anyone has succeeded in this yet.
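Hal's second problem, that an elaborate enough mapping lets a trivial
system "implement" any computation, can be made concrete with a toy
sketch. (This construction is my own illustration, not anything from
Hal's post: the "physical system" is a bare clock that counts ticks, and
the mapping is simply read off from the computation's own state history.)

```python
# A trivial "physical system": a clock whose state at time t is just t.
def clock(t):
    return t

# Any abstract computation we like, e.g. squaring the naturals;
# its state history (trace) is what a genuine implementation would produce.
trace = [n * n for n in range(10)]

# The ad-hoc mapping: pair each physical state with the computational
# state the computation "should" be in at that moment.  All the real
# work is done here, in the mapping, not in the clock.
mapping = {clock(t): trace[t] for t in range(10)}

# Under this mapping, the clock "implements" the squaring computation:
implemented = [mapping[clock(t)] for t in range(10)]
assert implemented == trace
```

The sketch shows why a complexity bound on the mapping is needed: the
dictionary encodes the entire computation, so the clock contributes
nothing, yet the correspondence criterion is formally satisfied.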
Observe that comp doesn't have that problem. Consciousness and physical
reality appear where anticipations remain stable, almost by definition.
In that sense comp generalizes the anthropic principle, making it a
consistent-machine-thropic principle.

>But the more relevant problem for now is the issue of causality.
>Most functionalist philosophers believe that for a physical system to be
>truly implementing a computation, it must include all of the causality
>of that computation and not just "passively" reproduce the patterns.

And here Maudlin's 1989 argument, or my 1988 Toulouse Movie-Graph paper
(see "movie" in the archive), shows that such counterfactualness cannot
be captured in any singular physical exemplification. Here also, the
"everything idea" is hardly avoidable.

>When we apply this rule to the functionalist model of consciousness, it
>works out like this: if a system has the right causality, it is conscious;
>if it is only passive, it is not conscious.

I could swallow this sentence. The "right causality" is not so different
from the relative self-referential correctness of the hopefully
consistent machine.

>This elevates causality to having a functional role in the physical world!
>It makes the difference between a system having a mind and just being an
>elaborate tape recorder. It means that we cannot view causality as just
>an artifact of our perceptions, it must be a true element of reality.

Yes. A true element of the reality of mind. A fundamental principle
operating at more than one level. It is the "material singular" aspect
of reality which is an artifact of the mind.

[...] I agree very much with your post where you say:

>One of the criticisms of no-collapse theories has always bothered me.
>Some people say that the mystery of non-determinism in conventional
>interpretations of QM is replaced by an equally baffling mystery in
>no-collapse theories: why do I end up in *this* branch rather than
>some other?
>The critics understand, I presume, that observers end up in all branches
>asking themselves this question. But they still seem to think there is
>some mystery about the process, something that needs to be explained.
>
>I don't see what the mystery is. We have often considered thought
>experiments involving duplicating machines, which show the same
>phenomenon. A man walks into a duplicating machine and two men walk out,
>each having equal claim to being the original. Both ask this question:
>"Why did I end up as *this* person instead of the other one?"

Note that when you say:

>Traditionally, no-collapse models were also criticized on two other
>grounds. (I will ignore the complaint that the theory is unparsimonious
>in its creation of multiple universes, because IMO this is more than
>made up for by the simplicity of the theory. Algorithmic complexity
>theory teaches us that it is the size of the program that counts, not
>the size of the data, and that is the measure we have used on this list.)

I also ignore that complaint, but not exactly for the same reason.
Although the size of the programs needs to be taken into account, it is
the degree of "self-splittability" of the programs which plays the most
important role in making our histories relatively stable. This follows
from the fact that I must also consider the reconstitutions made by the
UD of my current comp state, generated slowly (through very long
computations) by arbitrarily big programs. Those arbitrarily big
programs are infinitely more numerous than little programs, for obvious
reasons. The UD cannot be made non-redundant. Consciousness stabilizes
when little programs generate vast data/programs with a high degree of
"self-splittability", generating big numbers of (relatively) similar
histories.

Bruno
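[For readers unfamiliar with the UD invoked above: it is Bruno's
Universal Dovetailer, which runs every program of a universal machine
bit by bit, interleaved, so that no non-halting program blocks the
others. A minimal sketch, with toy "programs" represented as Python
generators rather than an enumeration of all machine programs; the names
`dovetail`, `program_a`, and `program_b` are illustrative, not from the
post.]

```python
from itertools import count

def program_a():
    # A toy program that halts after three steps.
    for step in range(3):
        yield ("A", step)

def program_b():
    # A toy program that never halts.
    for step in count():
        yield ("B", step)

def dovetail(program_factories, total_steps):
    """Dovetailing: in phase k, run one step of each of the first k
    programs.  Every program eventually receives arbitrarily many
    steps, even though some programs never halt."""
    gens = [f() for f in program_factories]
    trace = []
    phase = 1
    while len(trace) < total_steps:
        for g in gens[:phase]:
            try:
                trace.append(next(g))
            except StopIteration:
                pass  # this program has halted; skip it from now on
            if len(trace) >= total_steps:
                break
        phase = min(phase + 1, len(gens))
    return trace

trace = dovetail([program_a, program_b], 8)
```

Note the design point this makes concrete: the dovetailer is inherently
redundant ("The UD cannot be made non-redundant"), since it blindly
interleaves all programs, including the arbitrarily big ones that slowly
regenerate states already produced by small ones.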