> Date: Mon, 22 Feb 2010 22:58:20 -0800
> Subject: Re: Many-worlds vs. Many-Minds
> From: charlesrobertgood...@gmail.com
> To: everything-list@googlegroups.com
> On Feb 23, 7:13 pm, Jesse Mazer <laserma...@hotmail.com> wrote:
> > Having read the book a while ago, my memory is that Price offered this
> > idea as a conceptual argument for how one *might* explain things using the
> > EPR experiment, but I don't think he ever would have said that this idea
> > makes delayed-choice and EPR "trivial" to explain--to really explain them,
> > you'd have to provide a quantitative theory showing the precise connection
> > between these ideas about causality and the results of those experiments,
> > and Price didn't have one.
> it's true that Price is not a physicist, so he has to rely on others
> to provide a theoretical underpinning, or not as the case may be. If I
> understand him correctly, and I'm quite willing to admit that I may
> have failed to do so, his suggestion is that the physical theory
> already exists, and is the standard formulation of quantum mechanics.
> My use of "trivial" was only intended to indicate that using his
> approach allows these phenomena to be explained using standard quantum
> theory, while most other explanations require some sort of extra input
> - faster than light signalling, and so on. (I believe the MWI doesn't
> require any extras to explain EPR?)

No, he's not claiming that the time-invariance of QM explains the EPR results
by itself; what he's proposing is that one might come up with a
hidden-variables explanation where the variables associated with one
particle at its creation can be influenced by the measurement that will be
performed on its entangled twin at a later time. If you followed his analogy
involving the "Ypiarians" on p. 213 (which can be viewed on Google Books),
the idea is that the perfect correlation between the answers when both
Ypiarian twins are asked the same question would involve a theory where each
twin did have a predetermined answer to each question (hidden variables),
but each twin would have a sort of precognitive knowledge of what question
the other twin would be asked, so that he could make sure his predetermined
answer to *that* question matched his brother's. This would get
around the traditional objections to hidden-variables theories which assign
an objective reality to all variables (not just the ones that are measured),
namely Bell's theorem, which proves that local hidden variables theories
can't explain the statistics seen in QM (see the discussion of how assuming
that the Ypiarian twins have *all* answers matched leads to the conclusion
that when asked different questions they should give the same answer at
least 1/3 of the time, and yet the police find that when asked different
questions the twins only give the same answer 1/4 of the time). But as Price
points out, the proof of Bell's theorem includes the implicit assumption
that the hidden variables assigned to the particles when they are created at
a common location can't be influenced retrocausally by later measurements on
the particles, which implies that the only way to guarantee they always give
the same answer when asked the same question is to say that they both had
the same predetermined answers to all three possible questions. With
retrocausation you might explain why they always give the same answer when
asked the same question *without* assuming that they had the same
predetermined answers to all three questions, since they would only have to
coordinate their answers for the one question they knew in advance they'd be
asked, so when asked different questions they might give the same answer
less than 1/3 of the time.
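Incidentally, the 1/3 bound is easy to verify by brute force: one can enumerate all eight ways of predetermining yes/no answers to the three questions and confirm that no assignment lets the twins agree on fewer than 1/3 of the different-question pairs. A minimal sketch (my own, not anything from Price's book):

```python
from itertools import product

# The three possible different-question pairings.
pairs = [(0, 1), (0, 2), (1, 2)]

# All 8 ways of predetermining yes/no answers to the three questions,
# and the fraction of different-question pairs on which they agree.
min_match = min(
    sum(ans[i] == ans[j] for i, j in pairs) / 3
    for ans in product([0, 1], repeat=3)
)
print(min_match)  # no assignment agrees on fewer than 1/3 of the pairs
```

Since any local hidden-variables theory is just a probabilistic mixture of these eight assignments, the same-answer rate for different questions can never drop below 1/3, whereas the twins are observed to match only 1/4 of the time.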

Anyway, since orthodox QM does not contain any hidden variables at all, it's
not the type of theory he's looking for. See the paper by Price at
http://www.usyd.edu.au/time/price/preprints/QT7.pdf where he says on p. 5
"So much, at least in barest outline, for the case for investigating HV
models which abandon IA. (I have developed this case at much greater length
elsewhere [10].) However, much remains unclear, not only about the technical
possibilities for such models, but also about their physical significance,
and their relation to other approaches to the conceptual foundations of QM".
He does offer a simple toy model which is based on simply assuming that
measurements affect the hidden variables associated with particles in the
past, but the model gives no details on how this works (presumably a
complete hidden variables model would not rely on an absolute division
between measuring devices and quantum systems, but would be able to treat
the measuring device itself as just a large collection of particles which
interact according to the same rules that govern all particle
interactions--Bohm's interpretation of QM does something like this, for
example). About this toy model, he says (on p.8): "Second, people will
object to details of the model. For example, they may object that the model
does not tell us how the system knows about future measurements. To this
kind of objection two replies are appropriate, I think. Firstly, we should
acknowledge that as it stands, the model provides no mechanism for the
influence of the future on the present. It simply has the status of a
primitive law-like fact. But every physical theory relies eventually on
facts of this kind, so the fact that our model does so is not a damning
objection. Secondly, and perhaps more importantly, we should observe that
this objection misses the point of the model. The real purpose of the model
is to establish an existence claim, the claim that there are consistent HV
models for QM which are based on relinquishing IA. We do not need an elegant
model to prove such a claim— any model will do."

You might also take a look at the criticism of Price's lack of a detailed
theory in section 7 of this paper (which starts on p. 18 of the pdf):

> When you mention "hidden variables," I assume you mean that particles
> are in a definite state at a given time, rather than "undecided until
> measured"? If so, then I believe that is (supposedly) an outcome of
> Price's approach, assuming I've understood him correctly. I don't
> think there is a problem explaining why you can't send information
> back in time. Surely to obtain a useful back-in-time signal from the
> system would require some form of amplification that would also have
> to operate backwards in time? But Price is only suggesting that time-
> symmetry is significant within a given quantum interaction; it can't
> be amplified.

What exactly do you mean by "amplification"? When we send information from
the past to the future, do you think this involves some notion of
amplification, and if so what does it mean in this case? I'm also not clear
what you mean by "Price is only suggesting that time-symmetry is significant
within a given quantum interaction"--what does "significant" mean here,
given that the laws of physics governing *all* processes, even large
collections of interacting particles, are completely time-symmetric? Is the
time-symmetry somehow "insignificant" for these more complex processes
involving many particles, and if so why?

> > Another thing to keep in mind is that Newtonian laws dealing with things
> > like gravity and elastic collisions are time-symmetric too, as are Maxwell's
> > laws of classical electromagnetism, but you don't see anything analogous to
> > the EPR experiment or the delayed choice experiment in classical physics
> > (including relativity without quantum theory). So merely pointing to the
> > time-symmetry of QM doesn't in itself explain much about these phenomena.
> I wouldn't say that Price is "merely" pointing to the time-symmetry.
> He is suggesting that, given the time symmetry that most physicists
> agree exists, then there are certain outcomes we should expect in
> systems where the information content is very limited, i.e. to quantum
> states, and that these seem to match those we observe (e.g. Bell's
> inequality, etc).

Does he use a phrase along the lines of "information content is very
limited", and if so where? I don't really understand what this means either.
We can imagine a purely classical world consisting of small particles which
collide elastically; would you say that interactions involving small numbers
of such classical particles (perhaps only two) would qualify as situations
where information content is limited?
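For what it's worth, the time-symmetry of such classical collisions is easy to check directly. In this sketch (illustrative masses and velocities of my choosing), negating the outgoing velocities and colliding again recovers the negated initial state, i.e. the reversed movie also obeys the law:

```python
import math

def collide(m1, v1, m2, v2):
    """1D elastic collision: outgoing velocities from momentum and
    kinetic-energy conservation."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

u1, u2 = collide(1.0, 3.0, 2.0, -1.0)
# Time reversal: run the collision on the negated outgoing velocities...
r1, r2 = collide(1.0, -u1, 2.0, -u2)
# ...and the negated result is the original initial state.
assert math.isclose(-r1, 3.0) and math.isclose(-r2, -1.0)
```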

And I don't think he actually says we should "expect" things like the
correlations seen in EPR experiments, just that they may be easier to
understand by invoking retrocausation. If you disagree, can you point to
where in the book he says it?

> Having read any number of very complex attempts to
> explain why there is an "arrow of time" given the apparent
> indifference of most physical laws, this seems to me to be a line of
> enquiry that is at least worth pursuing.

Isn't his explanation for the arrow of time the same as most other
physicists, namely the idea that it should be explained in terms of the low
entropy of the Big Bang?

On Tue, Feb 23, 2010 at 8:47 PM, Charles <charlesrobertgood...@gmail.com> wrote:

> On Feb 23, 7:13 pm, Jesse Mazer <laserma...@hotmail.com> wrote:
> > Such a theory would also have to explain why we are not able to use
> > things like the delayed choice quantum eraser to actually send information
> > backwards in time--for example, we can't look at the screen behind a
> > double-slit and determine whether the which-path information for the
> > particles that hit the screen will in the future be erased or preserved
> > (see the last two paragraphs of the section 'The experiment' at
> > http://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser#The_experi...
> > for a summary of why this doesn't work).
> Further to my earlier comment about this, I would like to add that
> this experiment appears to illustrate the time-symmetry principle.
> That is to say, whatever is ultimately done in a quantum system will
> affect its past behaviour, so long as the intervening quantum states
> have been preserved.
But do you understand why observing the pattern of "signal photons" on the
screen behind the double-slit doesn't tell you whether their entangled twins
(the 'idlers') will have their which-path information erased in the future?
Regardless of what happens to the idlers, the *total* pattern of signal
photons never shows an interference pattern; we can only recover an
interference pattern by looking at the subset of signal photons whose
corresponding idlers went to a particular which-path erasing detector.
Nature seems to go to an awful lot of trouble to make sure we can't use
these sorts of experiments to actually gain information about the future.
What's more, the results of this experiment can be explained just fine in
interpretations of QM that don't involve retrocausation (at least no more
than time-symmetric classical theories like Newtonian gravity), such as
Bohmian mechanics.
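That conspiracy can be sketched numerically: in the usual idealization (my parametrization, with hypothetical eraser detectors D1 and D2, not taken from the Wikipedia article), the two eraser-conditioned subsets show fringes shifted by half a period, so their sum carries no interference at all:

```python
import math

def subset_D1(phi):
    # signal photons whose idler reached eraser detector D1: idealized fringes
    return 0.5 * (1 + math.cos(phi))

def subset_D2(phi):
    # the complementary subset: the same fringes, shifted by half a period
    return 0.5 * (1 - math.cos(phi))

# Across the screen, the unconditioned total shows no fringes anywhere.
for phi in [0.0, 0.7, 1.9, 3.1]:
    total = subset_D1(phi) + subset_D2(phi)
    assert math.isclose(total, 1.0)  # flat: interference exists only in the subsets
```

So to see any fringes you need the idler detection record, which is only available *after* the idlers are measured; nothing on the screen alone leaks information about the future choice.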

