Charles wrote:
On Feb 25, 6:41 am, Jesse Mazer <> wrote:
Yes, this is the mainstream point of view, not unique to Price. It's
generally thought that the reason we see an arrow of time at the macroscopic
level--including the arrow of time inherent in the fact that we can look at
records in the present and gain knowledge of past events, but we can't do
the same for future events--is ultimately explained by the low-entropy
boundary condition at the Big Bang. In a deterministic universe, information
about the future actually would be implicit in the total distribution of
matter/energy in the present, but the problem would be that the relation
between future events and present information about them would be a
one-to-many relationship--you'd basically have to know the precise position
and velocity of every single particle in the past light cone of a future
event in order to reconstruct what that future event would actually be.
Because of the entropy gradient, with past events and present records you
can have a one-to-one relationship (or at least a one-to-few relationship),
where localized collections of particles can function as records of past events.

Yes, I agree that is the mainstream view, as you say - it was a side
issue: people seem to regularly try to extract a "local" reason
for the arrow of time using, for example, causal dynamical triangulation
(or whatever it's called), which in my opinion is unnecessary.

The problem I was alluding to had to do with the fact that Price is arguing
for "retrocausation" not just in the broad sense of any arbitrary
time-symmetric theory, where the entire distribution of particles in the
past light cone of some future event can be said to contain information
about that future event (and thus to 'anticipate' it in a sense), but in a
more narrow one-to-one sense. He's saying that the hidden-variables states
of just *two* entangled particles will depend in a lawlike way on the future
measurements performed on these particles. If these variables weren't
hidden--if you could actually know the hidden-variables states of particles
before they were measured--then you could use them to know in advance what
measurement was going to be performed in the future. And the experimenters
could base their decision on what experiment to perform on the outcome of
some complicated future event involving many particles (say, a horse race!),
so in a sense you can even have a many-to-one relationship between future
events and present "records" of these events in the form of hidden-variables
states for individual pairs of particles.

This is the sticking point for me. I can't see how an experimenter can
measure a future influence on a quantum system in any direct way. I
mentioned amplification because normal measurements amplify the
signal, and a past-directed signal would need to be similarly
amplified (but presumably in a retrocausal manner), but that isn't the
fundamental problem. The fundamental problem is that to detect a
future influence, you need to measure the state of what is, in your
time sense, the photon you are generating. (Taking photons as a simple
example.) Suppose you arrange something like one of Price's polariser
experiments. You will set up your apparatus to emit a photon, and at a
later date arrange for it to pass through a polariser, orienting the
polariser horizontally if you want to send a '1' bit to your earlier
self, and vertically for a '0'. The problem is, although the polariser
may affect the state of the photon before it arrives (in our time
frame), the emitting device will *also* affect it. The photon's wave
function will be constrained at both ends of its path. It isn't at all
clear to me how we could arrange this system so that we can "read" any
retrocausal influence by "measuring" the photon's earlier state. The
idea doesn't seem to make sense, because we *have* to place a past
boundary condition on the photon, simply because we and our apparatus
are on the entropy gradient. We can't generate photons that are
unaware of the generating apparatus, and hence have a wave
function with only a future constraint, and then somehow detect those
photons' past states in order to read their future states. But without
the ability to detect a past influence, we can't do any future
signalling.

But isn't the EPR experiment a way of avoiding a past constraint? The past constraint is just that the net angular momentum is zero, so there is no constraint on the polarization of either photon. When one is measured, it can be thought of as sending a message back to the origin and forward to the other photon so as to produce the QM correlation. So the amplification takes place on the other particle, in the forward direction. Of course, you can't send a signal via a correlation. Here's a good discussion of this and some other retrocausation ideas by William Wharton:



Which is precisely what we find happens in practice: we
get "unexpected" results in experiments "as though" the photon "knew"
what measurement we'd ultimately choose to make - or as though the
photon's state, while traversing the experiment, was affected by both
the emitter and the detector.
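
The no-signalling point above — that the EPR correlation tracks the
relative polariser angle while each photon's local statistics stay 50/50
whatever the distant setting — can be sketched with the standard QM
probabilities for the entangled state (|HH> + |VV>)/sqrt(2) (the
function name here is mine, for illustration):

```python
import math

def joint_probs(a, b):
    """Outcome probabilities for polarization-entangled photons in the
    state (|HH> + |VV>)/sqrt(2), with polarisers at angles a and b.
    Returns (P++, P+-, P-+, P--), where '+' means 'passed'."""
    c = 0.5 * math.cos(a - b) ** 2
    s = 0.5 * math.sin(a - b) ** 2
    return c, s, s, c

a = math.pi / 6
for b in (0.0, math.pi / 8, math.pi / 4):
    pp, pm, mp, mm = joint_probs(a, b)
    # The *correlation* depends on the relative angle (the "message back
    # to the origin" picture), but the *local* marginal at station 1 is
    # P(+) = pp + pm = 0.5 for every remote setting b -- so nothing
    # about b can be read off locally, and no signal can be sent.
    assert abs((pp + pm) - 0.5) < 1e-12
```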

So it seems to me that the idea that this view fails because it should
allow signalling from the future falls down at the first hurdle,
namely how one could make such measurements, even in principle. (The
fact that we're also dealing with quantum systems that are disturbed
by "normally time-directed" measurements, never mind past-directed
ones (whatever that would mean in practice), may be an additional
complication.)


You received this message because you are subscribed to the Google Groups 
"Everything List" group.