On Tue, Jan 7, 2014 at 4:35 PM, LizR <lizj...@gmail.com> wrote:
> On 8 January 2014 08:59, Jesse Mazer <laserma...@gmail.com> wrote:
>> Well, most physicists already agree physics is time-symmetric (well,
>> CPT-symmetric, but the implications are the same for Bell's inequality and
> Yes, they do, but it doesn't appear to be taken into account when
> discussing Bell's inequality.
>> but I don't see how this alone can explain violations of the Bell
> No, you need to work out the consequences mathematically, and I dare say
> that is quite difficult. This is simply a *logical* demonstration that
> Bell's inequality can be violated while retaining locality and realism,
> which is otherwise impossible.
As I said in another comment, if you allow information about the state of
complex systems like detectors to flow back in time as well as forwards,
it's not clear that this really counts as preserving locality. Consider
what would happen if we could send ordinary messages backwards in time as
well as forwards. Suppose I have a friend 2 light years away and a relay
station midway between us, 1 light year away from each of us. Then on Jan 1
2014 I can send a message at the speed of light and backwards in time to
the relay station, it can receive it on Jan 1 2013 and then forward a copy
of the message at the speed of light but forward in time to my friend, and
my friend can get that message on Jan 1 2014, the same time I sent it.
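To make the dates concrete, here's the arithmetic as a tiny Python sketch (the `arrival_time` helper is just mine, for illustration):

```python
# Toy arithmetic for the relay thought experiment: distances in light-years,
# times in years. A "backwards-in-time" light-speed signal arrives earlier
# by exactly the distance it travels; an ordinary one arrives later.

def arrival_time(t_sent, distance_ly, backwards=False):
    """Time a light-speed message arrives after covering `distance_ly`."""
    return t_sent - distance_ly if backwards else t_sent + distance_ly

t_send = 2014.0  # I send the message on Jan 1 2014
t_relay = arrival_time(t_send, 1.0, backwards=True)     # relay, 1 ly away
t_friend = arrival_time(t_relay, 1.0, backwards=False)  # friend, 1 ly further

print(t_relay)   # 2013.0 -- the relay receives it a year before I sent it
print(t_friend)  # 2014.0 -- my friend, 2 ly away, gets it the moment I sent it
```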
Now it might not be possible to send such explicit signals in an
interpretation of QM like the one Huw Price sketches, featuring quantum
signals moving backwards in time from measurements to particle emissions to
condition the hidden variables they are emitted with (which interestingly,
is *not* how Cramer's transactional interpretation works, his
interpretation actually doesn't feature any hidden values for properties of
a particle that aren't measured). The signals might be "hidden" in the
sense that we couldn't use them for communication, like how in Bohm's
interpretation there is an instantaneous "pilot wave" that coordinates the
behavior of distant particles but it can't be used for instantaneous
communication. Still, Bohm's interpretation is understood to contain
nonlocal causal influences in terms of the objective description of what's
going on "behind the scenes", and one might be able to make a similar
argument about hidden nonlocality in an interpretation like the one Huw
Price imagines, through something analogous to the "relay" argument.
Unfortunately, as far as I know, no one has actually managed to find a
detailed interpretation of QM that works in the way Huw Price sketches.
>> To explain Bell inequality violations using a time-symmetric theory like
>> the one sketched out by Huw Price, you need to assume hidden variables (the
>> particles have predetermined spin states along all axes the experimenters
>> might choose to measure),
> Yes, hence it retains realism. The variables are only "hidden" in the
> sense that they can't be measured half way through the experiment - e.g. by
> measuring the state of photons while in flight - because any interference
> with the experiment would destroy the correlations between the measuring
> apparatus and the emitter.
But Huw Price's explanation involves "hidden variables" in the traditional
sense of definite values for properties that are *never* measured, like the
spin of a particle on axis #1 on a trial where the experimenter chooses to
measure the spin on a different, non-commuting axis #2 (see Price's paper
at http://prce.hu/w/preprints/QT7.pdf for example).
Remember, whenever the experimenters happen to choose the same axis to
measure for their respective particles, they always find opposite spins.
The naive reaction to this is that there's nothing spookily nonlocal about
this because the particles might have been *created* with predetermined
opposite spins on all possible axes, and if they carried these spin values
with them to the points where they were measured, then this is enough to
guarantee that if the experimenters measure along the same axis they'll get
opposite results. But Bell showed that when you analyze what this naive
explanation would mean for the statistics of trials where the experimenters
choose to measure *different* axes, the statistics would have to obey
certain inequalities, but these inequalities are violated in QM. So, Bell
showed that the naive local hidden variables explanation for the
predictably opposite results when they choose the *same* axis doesn't work.
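Mermin's version of this argument makes the numbers easy to check by brute force: with three axes 120 degrees apart, any assignment of predetermined, perfectly anticorrelated spins forces opposite results on at least 1/3 of the different-axis trials, while QM predicts cos^2(60 degrees) = 1/4. A short Python enumeration (purely illustrative):

```python
from itertools import product

# Each particle pair carries predetermined spins (+1/-1) on three axes,
# anticorrelated axis by axis: if particle 1 carries the triple (a, b, c),
# particle 2 carries (-a, -b, -c). For every possible hidden-variable triple,
# compute P(opposite results | the two experimenters pick DIFFERENT axes),
# taking all 6 ordered unequal setting pairs as equally likely.

def opposite_fraction(triple):
    """P(opposite results | different settings) for one hidden triple."""
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    # results are v_i and -v_j, so they are opposite exactly when v_i == v_j
    hits = sum(1 for i, j in pairs if triple[i] == triple[j])
    return hits / len(pairs)

fractions = [opposite_fraction(t) for t in product([+1, -1], repeat=3)]
print(min(fractions))  # 1/3 -- the best any hidden-variable assignment can do
print(0.25)            # the quantum prediction, which violates that bound
```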
Price's idea is to try to save this sort of local explanation where the
particles are assigned predetermined spins "at birth" and just carry these
values with them to the experimenter. He suggests that when the emitter
"assigns" the predetermined spins on all the different axes, it takes into
account which spins are actually going to be measured by the experimenter
on that trial (even if they don't make the choice until after the particles
have left the emitter), and so the statistical pattern of these hidden spin
values would look different on the set of trials where the experimenters
both measured on the same axis than the trials where they measured
different ones. Bell's derivation of the inequality assumed that the
assignment of hidden variables was statistically independent of the later
choices of the experimenters, so Price is exploiting a loophole in that
independence assumption.
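To illustrate how the loophole works, here is a toy Python sketch (the `emit` function is my own invention, not any real model of Price's) in which the emitter is simply allowed to condition the hidden outcomes on the settings the experimenters will choose. It trivially reproduces the singlet statistics while each particle "locally" carries its predetermined value:

```python
import math, random

random.seed(0)

def emit(setting_a, setting_b):
    """Toy 'retrocausal' emitter: it may condition the hidden values on the
    settings (in degrees) the experimenters WILL choose -- Price's loophole.
    Returns the predetermined outcomes (+1/-1) each particle carries."""
    a = random.choice([+1, -1])
    theta = math.radians(abs(setting_a - setting_b))
    # singlet statistics: opposite results with probability cos^2(theta/2)
    b = -a if random.random() < math.cos(theta / 2) ** 2 else a
    return a, b

# Same setting -> theta = 0 -> always opposite, as required.
assert all(sum(emit(0.0, 0.0)) == 0 for _ in range(1000))

# Settings 120 degrees apart -> opposite about 25% of the time, matching QM
# and beating the >= 1/3 bound that setting-independent hidden variables obey.
trials = [emit(0.0, 120.0) for _ in range(100_000)]
frac_opposite = sum(a == -b for a, b in trials) / len(trials)
print(round(frac_opposite, 3))  # close to 0.25
```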
>> *and* you must further assume that the particle emitter that creates the
>> particles can "predict" what axes the experimenters will choose to measure
>> on each trial,
> That's what time symmetry means. There is no "prediction" involved in the
> sense you mean - the state of the measuring apparatus affects the photons,
> just as the emitter does. (This can of course be extended to a multiverse,
> with the measuring apparatus simultaneously in various states which create
> a superposition of emitters. But that isn't necessary.)
But in this theory the state of the particles at the moment they are
emitted depends in a very specific and localized way on the later choices
of the macroscopic experimenters--if some observer outside the universe
could simultaneously know all the hidden variables of the particles without
disturbing them, they could say something like "hmm, I see the spins aren't
opposite on the x axis, that must mean the experimenters aren't going to
choose to both measure the x axis". As I said, this sort of highly
localized "record" of a future event is definitely not something that time
symmetry alone would lead you to expect.
>> so that the statistics of what combinations of hidden variables get
>> created will depend on the experimenters' later choices. For example, on
>> trials where they are both going to measure along the x-axis the emitter
>> will always create particles that have opposite spins along the x-axis,
>> whereas on trials where the experimenters both measure on some other axis,
>> or where they each choose different axes to measure, the emitter can create
>> particle pairs that don't have opposite spins on the x-axis. Is this the
>> type of solution you're thinking of?
> Yes, that sounds about right. The particles' states throughout the
> experiment are influenced by the measurement settings as well as by the
> emitter that creates them. From that it follows logically that information
> about particle A's measurement setting is available to particle B at the
> point of its measurement, and vice versa. (assuming the physics is local
> and realistic - the particles have definite states throughout).
>> If so, it seems like this goes well beyond time-symmetry, since
>> time-symmetry doesn't normally allow for systems to contain localized
>> "records" of events in the future the way that they can for events in the
>> past (which presumably could be explained in terms of the thermodynamic
>> arrow of time caused by the universe having a low-entropy past boundary
>> condition but not a low-entropy future boundary condition).
> I'm afraid you've missed the point here, and then gone on to tie yourself
> in knots. There is no thermodynamics or "sensitive dependence on initial
> conditions" at the level of the individual photons.
The sensitive dependence on initial conditions referred to the choices of
the macroscopic experimenters, not to the photons. For example, if the
experimenters were on two planets far apart so the entangled photons took
weeks or years to reach them, they could each choose their detector setting
based on the weather on the day of detection. Weather does exhibit
sensitive dependence on initial conditions on timescales of less than a
week, so any microscopic perturbation to the conditions on either planet at
the time the emitter sends the photons could change the later detector
settings in an unpredictable way. But if Huw Price's idea is correct, that
implies that my imaginary observer outside the universe with knowledge of
hidden variables could predict something about the weather on each planet
weeks or years later just by looking at the variables of the particles at
the moment they were created.
Just because we assume time-symmetric fundamental laws, that doesn't mean
that localized records of future events should be just as easy to find as
localized records of past ones. The key is the boundary conditions. Suppose
we want to run a simulation of an isolated system with some known
combination of particles and known total energy; we could do this by
starting from some initial conditions, and simulating forward from there
for some set amount of time T. If the initial conditions are chosen
randomly, it's overwhelmingly likely that the initial conditions will be at
maximum entropy or very close to it, and the system will remain at maximum
entropy or very close to it through the time interval T, so the system will
exhibit no macroscopic arrow of time (and no localized "records").
But if we do this over and over again for a sufficiently astronomical
number of trials, choosing the initial conditions randomly each time, there
will be some tiny fraction of trials where the initial conditions happen to
be at or below some low entropy S. And if the dynamical laws are
time-symmetric, there will be an equally tiny fraction of trials where the
*final* conditions after the simulation has run for the full time T happen
to be at or below that same low entropy S. Imposing a past low-entropy
boundary condition would be equivalent to just looking at the subset of
trials with initial entropy <= S, and throwing out the much larger number
of trials where this didn't occur. Likewise imposing a future low-entropy
boundary condition would be equivalent to just looking at the subset of
trials with final entropy <= S, and throwing out all the rest. Within each
subset it'd be overwhelmingly likely that the entropy "increases"
continuously in the direction of time pointing away from the low-entropy
boundary condition--for a low-entropy initial condition this would
naturally mean entropy continually increasing with time, but for a
low-entropy final condition this would mean entropy continually decreasing
until the final time T. If the simulation was complex enough to contain
configurations of particles that could act like localized "records", then
the behavior of records in these two subsets would also be symmetric--I
assume that in the subset with low-entropy initial condition you'd see
localized records of the past but not the future, so by time-symmetry, for
the subset with low-entropy final condition you'd see localized records of
the future but not the past.
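You can see this symmetry in a toy reversible system. The sketch below (Python, purely illustrative: free-streaming particles on a periodic interval, with entropy meaning the Shannon entropy of a coarse-grained occupancy) starts from a low-entropy initial condition, watches the entropy rise, then flips all velocities to manufacture a history whose *final* condition is the low-entropy one:

```python
import math, random

random.seed(1)
N, BINS, T = 20_000, 20, 50.0

def coarse_entropy(xs):
    """Shannon entropy of the coarse-grained occupancy of BINS cells."""
    n = len(xs)
    counts = [0] * BINS
    for x in xs:
        counts[min(int(x * BINS), BINS - 1)] += 1  # clamp guards the x=1.0 edge
    return -sum(c / n * math.log(c / n) for c in counts if c)

def evolve(xs, vs, t):
    """Free streaming on the periodic unit interval: deterministic and
    time-reversible (negate the velocities to retrace the motion)."""
    return [(x + u * t) % 1.0 for x, u in zip(xs, vs)]

# Low-entropy initial condition: every particle crammed into the leftmost
# 5% of the box, with random velocities.
x0 = [random.uniform(0.0, 0.05) for _ in range(N)]
v = [random.gauss(0.0, 1.0) for _ in range(N)]

xT = evolve(x0, v, T)                    # run forward for time T
print(round(coarse_entropy(x0), 2))      # low -- everyone in one corner
print(round(coarse_entropy(xT), 2))      # near log(BINS) ~ 3.0 -- spread out

# Reverse all velocities and run for T again: the same laws retrace the
# history, ending in the low-entropy state -- a history whose *final*
# condition is low entropy, exactly as in the subset argument above.
x_back = evolve(xT, [-u for u in v], T)
print(round(coarse_entropy(x_back), 2))  # low again
```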
The point is, in *neither* subset would you expect to see a mix of
localized records of both the past and the future. To find a subset of all
the randomly-generated histories which contained such a mix, you'd have to
impose much more stringent boundary conditions than just low entropy on one
end of time or the other. I don't even think it would be enough to impose
the more symmetric boundary condition that you only keep trials where both
the initial conditions and the final conditions at time T have entropy <=
S. Although I don't have a mathematical proof, I'd be willing to bet that
in this case what you'd typically get is the entropy increasing steadily
from time 0 to time T/2, and then decreasing steadily from T/2 to T, with
each half looking no different from the cases where only one "end" of time
had a low-entropy boundary condition.
If that's right, then to get a situation where you have localized records
of both past and future coexisting at the same time, I'd guess you'd have
to apply some very contrived boundary conditions that went well beyond just
specifying the entropy at different times. In which case this would go well
beyond anything that "time symmetry" alone should lead one to expect.
> Entropy is a statistical, high level outcome from a lot of low-level
> time-reversible processes. Price assumes realism, that the photons have a
> real state, with spins and so on, throughout the experiment. Time symmetry
> simply says that this state is influenced by boundary conditions *at
> either end of its path* - by the settings of the measurement apparatus it
> encounters, *and* by the state of the emitter.
That's not quite how the term "time symmetry" is normally used by
physicists, and I don't think Price means to alter the standard meaning of
the term. Time symmetry is defined mathematically as a property of the
equations that define the laws of physics, but here's how I think of it
conceptually. If we have time-symmetric laws, we can picture a movie of any
system obeying these laws that shows the maximum physical detail
possible--in classical physics this would mean the precise position and
velocity of every bit of matter at each moment, in quantum physics I
suppose it would be a movie of a precise quantum state vector in Hilbert
space. If you then play that movie backwards, time symmetry means that
the reversed movie's dynamics will also be found to obey precisely the same
laws, so if you are simply given such a movie without being told whether
it's being run backwards or forwards, there's no way to determine it one
way or the other.
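Here's that "movie" test as a toy Python sketch, using a frictionless harmonic oscillator and the time-reversible velocity-Verlet integrator (my choice of example, nothing to do with Price specifically):

```python
# Record a "movie" (the exact state at every frame) of a frictionless
# harmonic oscillator, then check that the movie played backwards, with
# velocities negated, satisfies the very same update rule.

DT = 0.01

def step(x, v):
    """One velocity-Verlet step for the oscillator x'' = -x (unit mass)."""
    a = -x                      # acceleration at the current position
    x_new = x + v * DT + 0.5 * a * DT * DT
    a_new = -x_new              # acceleration at the new position
    v_new = v + 0.5 * (a + a_new) * DT
    return x_new, v_new

movie = [(1.0, 0.0)]            # start at rest, displaced by 1
for _ in range(1000):
    movie.append(step(*movie[-1]))

# Reverse the frames and flip the velocities: the backwards movie.
reversed_movie = [(x, -v) for x, v in reversed(movie)]

# Every consecutive pair of backwards frames obeys the same law `step`, so
# nothing in the movie itself tells you which way time is running.
errs = [abs(step(x, v)[0] - xn) + abs(step(x, v)[1] - vn)
        for (x, v), (xn, vn) in zip(reversed_movie, reversed_movie[1:])]
print(max(errs))  # at the level of floating-point roundoff
```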
This conceptualization deals only with dynamics, whereas Price does talk
about boundary conditions as well, but I think his point there is that if
constraints on boundary conditions are generated in a lawlike way, and
these laws are themselves time-symmetric, then there should be no
fundamental asymmetry in what types of boundary conditions are possible (so
in the "space" of possible universes generated by the laws, ones going from
low entropy to high should be no more common than ones going from high to
low, similar to my thought-experiment about subsets of huge numbers of
randomly-generated simulated histories).
You received this message because you are subscribed to the Google Groups
"Everything List" group.
Visit this group at http://groups.google.com/group/everything-list.