On 10/11/2013 4:09 AM, Pierz wrote:
On Friday, October 11, 2013 12:25:45 PM UTC+11, Brent wrote:
So there are infinitely many identical universes preceding a measurement.
Are these universes distinct from one another?
They aren't 'distinct'. The hypothesis is that every universe branch contains an
*uncountable* infinity of fungible (identical and interchangeable) universes. While this
seems extravagant, it actually kind of makes more sense than the idea of a universe
"splitting" into two (where did the second universe come from?). Instead, uncountable
infinities of universes are differentiated from one another. Quantum interference
patterns arise because of the possibility of universes merging back into one another again.
Do they divide into two infinite subsets on a binary measurement, or do infinitely
many come into existence in order that some branch-counting measure comes out in the
right proportion? Do you not see any problems with assigning a measure to countably
infinite subsets (are there more even numbers than square numbers?).
The former. Deutsch goes into the problem of countably infinite sets in great detail and
shows how this is *not* a problem for these uncountable infinities (as Russell points
out), whereas it may be a problem for Bruno's computations - a point I've tried to
argue with Bruno, but he bamboozles my sophomoric maths with his replies. To me it seems
you can't count the computations that go through a state, because for every program f that
computes a certain function, there is also some program f1 that computes the same function
such that f1 = f + 1 - 1 (f padded with a step that does nothing). But maybe that can be
solved by counting only the programs with the least number of steps (?).
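The padding worry above can be made concrete with a small sketch (my own illustration, not Bruno's or Deutsch's formalism): wrapping a program in "add 1 then subtract 1" no-ops yields a new, syntactically distinct program that computes the same function, so a naive count of "programs computing f" never settles on a unique number. Counting only minimal-step programs, as suggested, would at least pick a canonical representative.

```python
# Sketch of the counting problem: for any program computing a function,
# padding it with do-nothing steps yields a *different* program that
# computes the *same* function, so there are infinitely many of them.

def make_padded(f, n_pads):
    """Wrap f in n_pads layers of 'add 1 then subtract 1' no-ops."""
    def padded(x):
        for _ in range(n_pads):
            x = x + 1
            x = x - 1  # net effect: none
        return f(x)
    return padded

square = lambda x: x * x

# One syntactically distinct variant per pad count, all extensionally
# equal to `square`:
variants = [make_padded(square, k) for k in range(5)]
assert all(v(7) == 49 for v in variants)

# Counting only minimal-step programs would single out the k = 0 variant.
```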
And why should we prefer this model to simply saying the Born rule derives from a
Bayesian epistemic view of QM, as argued by, for example, Chris Fuchs?
I don't know about Chris Fuchs, although isn't that just Copenhagen?
No, it's an interpretation of QM as personal probabilities, i.e. quantum Bayesianism. It
reifies information, not quantum states; cf. http://arxiv.org/pdf/1207.2141.pdf or
http://arxiv.org/pdf/1301.3274.pdf. It might be compatible with Bruno's ideas, where
Copenhagen certainly isn't.
It's clear that one would need strong reasons to favour MWI with its crazy proliferation
of entities, which at first blush seems to run against Occam's razor. However Deutsch
makes a damn good fist of explaining why we in fact have those reasons. For instance,
when a quantum computer calculates a function based on a superposition of states, MWI
can explain where these calculations are occurring - in other universes. The computer is
exploiting the possibility of massive parallelism inherent in that infinity of
universes. It is entirely unclear how these calculations occur in the standard
interpretation. MWI also solves the problem of what happens to non-realized measurement
states once a system decoheres. And of course it gets around the intractable
difficulties of non-computable wave "collapse". So it's a case of choose your poison:
infinite universes or conceptual incoherence. I'll take the former, even though in some
ways I'd "like" the universe (or the multiverse) better if it wasn't that way.
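The puzzle about "non-realized measurement states" can be shown in a few lines (a toy illustration of my own, not Deutsch's argument): a register of n qubits in uniform superposition carries amplitude on all 2^n basis states at once, yet a measurement returns just one of them, and the standard interpretation must say what became of the rest.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3
state = np.full(2**n, 1 / np.sqrt(2**n))   # uniform superposition over 8 states
probs = np.abs(state) ** 2                  # Born-rule probabilities
assert np.isclose(probs.sum(), 1.0)

outcome = rng.choice(2**n, p=probs)         # measurement: one outcome survives
print(f"measured |{outcome:03b}>")          # the other seven are "non-realized"
```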
If you just read this list you have the impression that MWI is the consensus "true"
interpretation of QM; but it's still controversial (as are all other interpretations). I
highly recommend reading Scott Aaronson's arXiv:1108.1791v3 "Why Philosophers Should Care
About Computational Complexity". Section 8 is his discussion of Deutsch's argument based
on computation. He gives several reasons why Deutsch's argument, if not actually wrong,
may not mean what people think it means. Here's the concluding part:
One can sharpen the point as follows: if one took the parallel-universes explanation of how a
quantum computer works too seriously (as many popular writers do!), then it would be natural to
make further inferences about quantum computing that are flat-out wrong. For example:

"Using only a thousand quantum bits (or qubits), a quantum computer could store 2^1000
classical bits."

This is true only for a bizarre definition of the word "store"! The fundamental problem is that,
when you measure a quantum computer's state, you see only one of the possible outcomes; the
rest disappear. Indeed, a celebrated result called Holevo's Theorem says that, using n qubits,
there is no way to store more than n classical bits so that the bits can be reliably retrieved
later. In other words: for at least one natural definition of "information-carrying capacity,"
qubits have exactly the same capacity as bits.

To take another example:

"Unlike a classical computer, which can only factor numbers by trying the divisors one
by one, a quantum computer could try all possible divisors in parallel."

If quantum computers can harness vast numbers of parallel worlds, then the above seems like a
reasonable guess as to how Shor's algorithm works. But it's not how it works at all. Notice that,
if Shor's algorithm did work that way, then it could be used not only for factoring, but
also for the much larger task of solving NP-complete problems in polynomial time. (As mentioned
in footnote 12, the factoring problem is strongly believed not to be NP-complete.) But contrary to
a common misconception, quantum computers are neither known nor believed to be able to solve
NP-complete problems efficiently. As usual, the fundamental problem is that measuring reveals
just a single random outcome |x>. To get around that problem, and ensure that the right outcome
is observed with high probability, a quantum algorithm needs to generate an interference pattern,
in which the computational paths leading to a given wrong outcome cancel each other out, while
the paths leading to a given right outcome reinforce each other. This is a delicate requirement,
and as far as anyone knows, it can only be achieved for a few problems, most of which (like the
factoring problem) have special structure arising from algebra or number theory.

A Many-Worlder might retort: "sure, I agree that quantum computing involves harnessing
parallel universes in subtle and non-obvious ways, but it's still harnessing parallel universes!" But
even here, there's a fascinating irony. Suppose we choose to think of a quantum algorithm in terms
of parallel universes. Then to put it crudely, not only must many universes interfere to give a large
final amplitude to the right answer; they must also, by interfering, lose their identities as parallel
universes! In other words, to whatever extent a collection of universes is useful for quantum
computation, to that extent it is arguable whether we ought to call them "parallel universes"
at all (as opposed to parts of one exponentially-large, self-interfering, quantum-mechanical blob).
Conversely, to whatever extent the universes have unambiguously separate identities, to that extent
they're now "decohered" and out of causal contact with each other. Thus we can explain the
outputs of any future computations by invoking only one of the universes, and treating the others
as unrealized hypotheticals.

To clarify, I don't regard either of the above objections to Deutsch's argument as decisive; I
am unsure what I think about the matter. My purpose, in setting out the objections, was simply
to illustrate the potential of quantum computing theory to inform debates about the Many-Worlds
Interpretation.
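The cancellation Aaronson describes shows up already in the simplest possible case (a toy of my own, not from his paper): apply a Hadamard gate twice to |0>. After the first H, the "paths" to |0> and |1> have equal amplitude; after the second H, the two paths into |1> carry opposite signs and cancel exactly, while the paths into |0> reinforce.

```python
import numpy as np

# Hadamard gate: maps |0> to (|0>+|1>)/sqrt(2) and |1> to (|0>-|1>)/sqrt(2)
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])
after_one = H @ ket0        # [0.707, 0.707]: both outcomes carry amplitude
after_two = H @ after_one   # [1, 0]: the two paths into |1> cancelled

assert np.allclose(after_two, [1.0, 0.0])
```

Arranging this kind of cancellation so that it singles out the answer to a useful problem is exactly the "delicate requirement" the quote refers to.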
Max Born was my great grandfather. I wonder what he would have made of Everett if he'd
been a bit younger. When he died in 1970, it was still probably too out there for him to
have seriously considered.
A lineage to be proud of! My grandfather was Isaac Newton, Isaac Newton Hart, a Texas
farmer and rancher. :-) Everett proposed the relative state interpretation in 1957, just
three years after Born got the Nobel prize. But it wasn't very popular until years later,
so Born may not have heard of it.
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
To post to this group, send email to email@example.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.