Tim May wrote:

> On Wednesday, September 4, 2002, at 10:08 AM, Hal Finney wrote:
>
> > There are a few objections which I am aware of which have been raised
> > against the MWI.  The first is its lack of parsimony in terms of
> > creating a vast number of universes.  We gain some simplification in
> > the QM formalism but at this seemingly huge expense.  The second is
> > its untestability, although some people have claimed otherwise.
>
> The latter is the more important.  If, for example, the plurality of
> worlds are out of communication with each other, forever and always,
> then it means nothing to assert that they "actually" exist.

Worlds are never absolutely out of touch.  There is a small degree of
interference which is always present between worlds in the MWI.  Of
course this is infinitesimal in practice and can't be observed between
macroscopically distinct worlds.  But there is a continuum from totally
disjoint worlds down to worlds which differ only microscopically, where
interference is easily observed, as in an electron double-slit
experiment.  As our technology improves, we will be able to detect
smaller and more subtle degrees of interference, corresponding to
worlds which are more and more macroscopically distinct.  Assuming the
MWI is in some sense "correct", it will become increasingly difficult
to draw the line and say that, for example, small objects can be in
distinct superpositions but large objects cannot.

> In weaker forms of the MWI, where it's the early states of the Big
> Bang (for example) which are splitting off into N universes, De Witt
> and others have speculated (as early as around 1970) that we may
> _possibly_ see some evidence consistent with the EWG interpretation
> but NOT consistent with other interpretations.

I'm not familiar with the details of this.  But I know that much of
the impetus for increased acceptance of MWI models comes from the
cosmologists.

> > And the third is that it retains what we might call the problem of
> > measure, that is, explaining why we seem to occupy branches with a
> > high measure or amplitude, without just adding that as an extra
> > assumption.
>
> What's the problem here?
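Before going on, it may help to say what "measure" means concretely:
in the standard formalism, a branch with amplitude a is assigned
probability |a|^2 (the Born rule).  A minimal sketch in plain Python;
the three branch amplitudes are made up purely for illustration:

```python
import math
import random

# Three "branches" with made-up complex amplitudes (illustration only).
amplitudes = [0.6 + 0.0j, 0.48 + 0.36j, 0.5 - 0.14j]

# Normalize the state vector so the squared magnitudes sum to 1.
norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
amplitudes = [a / norm for a in amplitudes]

# Born rule: the measure of each branch is |amplitude|^2.  Note that
# the first two branches get equal measure despite different phases.
measure = [abs(a) ** 2 for a in amplitudes]
assert abs(sum(measure) - 1.0) < 1e-12

# Copenhagen reading: measurement collapses the state, selecting a
# single outcome with these probabilities.
random.seed(1)
outcome = random.choices(range(3), weights=measure)[0]

# MWI reading: all three branches persist.  The open question is why
# an observer's experienced frequencies should track |amplitude|^2
# once the collapse axiom is dropped.
```

The sampling step in the middle is exactly what the collapse axiom
licenses; drop that axiom and the |amplitude|^2 weighting has to come
from somewhere else.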
> I find it utterly plausible that we would be in a universe where
> matter exists, where stars exist, where entropy gradients exist,
> etc., and NOT in a universe where the physical constants or
> structure of the universe makes life more difficult or impossible
> (or where the densities and entropy gradients mean that evolution of
> complex structures might take 100 billion years, or more, instead of
> the billion or so years it apparently took).

The problem is more formal: if we abandon measurement as a special
feature of the physics, there is no longer an axiom that says that
probability is proportional to amplitude squared.

> > By the metrics we typically use for universe complexity, basically
> > the number of axioms or the size of a program to specify the
> > universe, the MWI is in fact simpler and therefore more probable
> > than the traditional interpretation.
>
> This I'm not convinced of at all.  I don't find that the Copenhagen
> (aka "Shut up and calculate") Interpretation requires any more
> axioms.  So long as we don't try to understand what is "really"
> happening, it's a very simple system.

The conventional formulation, as described for example at
http://www.wikipedia.com/wiki/Mathematical_formulation_of_quantum_mechanics,
has special axioms related to measurement.  Systems evolve according
to the Schrodinger equation except when they are being measured, at
which point we get wave function collapse.  MWI rejects the axioms
related to observables and collapse.  On the page above we can
eliminate axiom 3 and probably axiom 2 as well.  You are left with
nothing but Hilbert space and the Schrodinger equation.  It's a
simpler set of axioms.

> And, putting in a plug for modal/topos logic, the essence of nearly
> every interpretation, whether MWI or Copenhagen or even Newtonian,
> is that observers at time t are faced with unknowable and branching
> futures.
> (In classical systems, these arise from limited amounts of
> information available to observers and, importantly, in limited
> positional information.  Even a perfectly classical billiard ball
> example is unpredictable beyond a few seconds or maybe tens of
> seconds, because the positions and sets of forces (turbulence in the
> air currents around the balls, even gravitational and static
> electricity effects, etc.) are only known to, say, 20 decimal places
> (if that, of course).  Because the "actual" positions, masses,
> sphericities, static charges, etc. are perhaps defined by 40-digit
> or even 200-digit numbers, the Laplacian dream of a sufficiently
> powerful mind being able to know the future is dashed.)

This is true in practice, but I think there is still a significant
difference between deterministic systems like classical physics or
the MWI, and inherently non-deterministic systems like the
conventional Copenhagen interpretation of QM.  In the latter you have
the difficult philosophical problem of explaining where all the
information comes from.  This is what led Schmidhuber to suggest that
quantum randomness is generated by a pseudo-random number generator;
otherwise you can't simulate the universe on a computer, because
computers can't create true random bits.  The Copenhagen
interpretation is fundamentally non-computable.

Hal Finney
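P.S.  Both points can be made concrete in a few lines of Python.  The
chaotic logistic map below is only a stand-in for Tim's billiard
table, and the seeded generator illustrates the point about
pseudo-randomness; nothing here is meant as more than a sketch:

```python
import random

# 1. Classical unpredictability: two trajectories of the chaotic
#    logistic map whose starting points differ only in the 15th
#    decimal place -- roughly the spirit of positions "only known to,
#    say, 20 decimal places".
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-15
steps = 0
while abs(x - y) < 0.01 and steps < 1000:
    x, y = logistic(x), logistic(y)
    steps += 1
# The gap roughly doubles each iteration, so the two trajectories
# disagree macroscopically after a few dozen steps.

# 2. A pseudo-random generator is deterministic: the same seed
#    reproduces exactly the same "random" bits, so a simulated
#    universe driven by one contains no information beyond its seed.
gen_a = random.Random(42)
gen_b = random.Random(42)
assert ([gen_a.getrandbits(1) for _ in range(32)] ==
        [gen_b.getrandbits(1) for _ in range(32)])
```

A genuinely non-deterministic collapse, by contrast, would have to
inject fresh bits at every measurement, which is exactly the
information problem with the Copenhagen picture.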