On 20/04/2016 7:05 am, Jesse Mazer wrote:
On Tue, Apr 19, 2016 at 12:06 AM, Bruce Kellett
<[email protected]> wrote:
On 19/04/2016 10:23 am, Jesse Mazer wrote:
On Mon, Apr 18, 2016 at 3:45 AM, Bruce Kellett
<[email protected]> wrote:
The local mathematical rule in this case, say for observer A,
is that measurement on his own local particle will give
either |+> or |->, with equal probability. It does not matter
how many copies you generate, the statistics remain the same.
I am not sure whether your multiple copies refer to
independent repeats of the experiment, or simply multiple
copies of the observer with the result he actually obtained.
The set of outcomes on the past light cone for this observer
is irrelevant for the single measurement that we are
considering. Taking such copies can be local, but the utility
remains to be demonstrated.
Sorry if I was unclear, I thought we were on the same page about
the notion of "copies". The copies in my toy model are supposed
to represent the idea in the many-worlds that there are multiple
equally-real versions of a single system at a single location at
a single time, including human experimenters, and that in any
quantum experiment some versions will record one result and
others will record a different one. So the copies represent
different parallel versions of a simulated observer, and just as
in the MWI, some copies see one result and other copies see a
different result for any *single* experiment (and each copy
retains a memory, so different copies remember different
sequences of past results as well). And as in the MWI, these
copies would be unaware of one another--just imagine several
simulations of the same experimenter at the same time running in
parallel, with different variations on what results the
simulation feeds to them.
I have a couple of questions. Firstly, does the ensemble generated
in this way differ in any significant respect from the one
generated if the same Alice and Bob perform their (random
orientation) measurements a large number of times?
If the probability of them selecting each possible detector setting on
this single measurement is the same as the frequency with which they
would select each detector setting on a large number of trials, then
the statistics of results will also be the same.
And secondly, what exactly are they performing their measurements
on? On random unpolarized particles? Or always on one of the
particles of an entangled singlet pair?
Within the context of the simulation, they are measuring the two
members of an entangled pair. But the computer doesn't use any
*actual* input from real-world instruments measuring entangled
particle pairs, all computations and inputs are classical ones.
In the latter case, one would assume that we have to keep track of
which Alice result comes from the same pair as which Bob result.
In other words, the ensemble is identical to the one generated by
many runs of the same experiment, on entangled pairs, by the same
observers.
That's true, the point here is just that you can generate these
statistics using what I would define to be a "local" set of rules (see
the bottom of this message for a discussion of what I understand
'local' rules to mean), and each copy has the *experience* of making
only a single measurement and getting a single reported measurement
from the other experimenter.
............
But it is absolutely crucial that the relevant pairing information be
retained. In other words, we have to know which Alice measurement
corresponds to the Bob measurement on that particular entangled
pair. If that pairing information is lost, or not available, then your
toy model is not simulating the EPR setup, and so is useless.
In this case, what is being simulated is only a *single* entangled
pair, not multiple entangled pairs. Alice measures her member of the
simulated pair at a single moment and some copies get one result and
some copies get a different result at that moment, and likewise Bob
measures his member of the simulated pair at a position and time with
a spacelike separation from Alice's measurement, and some of his
copies at that position get one result and some get a different result.
--the computer simulating Alice has to assign the number of copies
that see each possible result without any foreknowledge of what
happened with Bob, and vice versa. If Bob is scheduled to transmit
his result to Alice at a particular time, then the computer
simulating Bob actually sends a package of messages from the
different copies of Bob, this message traveling to the computer
simulating Alice at the speed of light. When the computer simulating
Alice receives the package of messages, it has to match messages from
copies of Bob to copies of Alice in a one-to-one way,
Aye, there's the rub, as Shakespeare might say. The "computer has to
match messages from copies of Bob to copies of Alice....". What on
earth does that mean? It must mean, if your simulation is to bear
any relationship to EPR experiments, that you match each
Alice result (|+> or |->) to the corresponding Bob result (|+> or |->)
that he got from the particle that was entangled with the particle
that gave Alice her result.
Yes, the matching is done in such a way as to match the statistics on
EPR-like experiments which violate some Bell inequality (though not
necessarily the precise experiment envisioned in the original EPR
paper). For example, one Bell-inequality-violating quantum experiment
would involve Alice and Bob each choosing from one of three detector
angles, with the result that when they choose the same angle they are
guaranteed to get opposite results with probability 1, whereas when
they measure different angles they only have a 1/4 probability of
getting opposite results (the corresponding Bell inequality says that
in any local realist theory, if they get opposite results with
probability 1 when they use the same setting, the probability of
getting opposite results on different settings must be greater than or
equal to 1/3--if anyone's interested, I did a little derivation of
this in a post at http://physics.stackexchange.com/a/140883/59406 ).
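(For reference, a standard way to get those numbers: for the spin
singlet with detector angles $a$ and $b$, QM predicts

    $P_{\mathrm{opp}}(a,b) = \cos^2\!\left(\frac{a-b}{2}\right)$

so with the three settings at 0°, 120° and 240°, equal settings give
$\cos^2(0) = 1$ and unequal settings give $\cos^2(60^\circ) = 1/4$,
which is below the local-realist bound of 1/3.)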
So in the simulation, let's say we have 360 copies of Alice and Bob
each, and 120 copies of Alice used each of the three settings 1,2,3,
likewise with Bob. Of the 120 copies of Alice who used setting 1, 60
got the result + and 60 got the result -. If we just look at the 60
copies of Alice who used setting 1 and got result +, then when the
collection of messages from copies of Bob arrives at the computer
simulating the copies of Alice, it will assign 20 of these copies of
Alice to get the message "Bob used setting 1 and got result -", 15 of
them to get the message "Bob used setting 2 and got result +", 5 of
them to get the message "Bob used setting 2 and got result -", 15 of
them to get the message "Bob used setting 3 and got the result +", and
5 of them to get the message "Bob used setting 3 and got the result
-". So indeed, we find that the Alice-copies who learn that Bob used
the same detector setting as her will always learn that Bob got the
opposite result with probability 1, whereas the Alice-copies who learn
that Bob used a different detector setting will only have a 1/4 chance
of hearing that Bob got the opposite result from their own.
Also, if you sum up *all* copies of Alice who got a message about a
given result from Bob like "Bob used setting 2 and got the result +"
(not just the number within the subset of Alice copies who used
setting 1 and got the result + as in the previous paragraph), you
always find that a total of 60 copies of Alice got a message from Bob
reporting this result, matching the original number in the pool of
copies of Bob who got that particular result.
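To make that bookkeeping easy to check, here is a minimal sketch in
Python (my own illustration of the matching rule just described, using
the same 360-copy numbers; the names are mine, not part of any real
implementation):

from itertools import product
from collections import Counter

SETTINGS = (1, 2, 3)
RESULTS = ('+', '-')

def flip(r):
    return '-' if r == '+' else '+'

def messages_for_cell(a, r):
    # The 60 Bob-messages delivered to the 60 Alice copies who used
    # setting a and got result r: 20 x (same setting, opposite result),
    # plus 15 x (same result) and 5 x (opposite result) for each of the
    # two other settings -- the 20/15/5/15/5 split described above.
    msgs = [(a, flip(r))] * 20
    for b in SETTINGS:
        if b != a:
            msgs += [(b, r)] * 15 + [(b, flip(r))] * 5
    return msgs   # 20 + 2*(15 + 5) = 60 messages

# Property 1: same setting -> always opposite results; different
# settings -> opposite with frequency 1/4 (vs. the >= 1/3 bound).
diff, diff_opp = 0, 0
for a, r in product(SETTINGS, RESULTS):
    for b, s in messages_for_cell(a, r):
        if b == a:
            assert s == flip(r)
        else:
            diff += 1
            diff_opp += (s == flip(r))
print(diff_opp / diff)   # 0.25

# Property 2: summed over all Alice copies, each of Bob's six possible
# (setting, result) reports is received exactly 60 times, matching the
# size of the corresponding pool of Bob copies.
totals = Counter(m for a, r in product(SETTINGS, RESULTS)
                   for m in messages_for_cell(a, r))
assert all(count == 60 for count in totals.values())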
If that is the case, then every |+>|+'>, etc, combination that is
observed from any random sample from this ensemble will agree with
the predictions of quantum mechanics -- your local matching
computer has no work to do because it was all done by the initial
entanglement and the non-local rules of quantum mechanics.
But there are no genuine entangled particles being used here,
it's just a *simulation* of entangled particles being measured by
experimenters that's running on a couple of ordinary classical
computers, computers which communicate data to each other by ordinary
classical means like radio signals that can't travel faster than c
(with the radio antenna being too crude to pick out individual photons
in the incoming wave, so the data transmission system can be
understood in terms of classical EM). Also recall that the physical
separation of the two computers matches the separation Alice and Bob
are supposed to have in the real world--and if it wasn't clear, I
suppose I should add that everything that happens to the copies of
Alice and Bob on both computers is being computed in realtime, for
example, if the copies of Alice are imagined to be full-fledged mind
uploads or other intelligent computer programs, and interactivity is
allowed, you could have a realtime conversation with any given copy
about what she was experiencing, assuming you were at the same
location as the physical computer simulating the copy.
So, the fact that these simulated results were supposed to have come
from an entangled singlet pair has not been used anywhere in your
simulation. It has only ever been used to link the copies of Alice and
Bob; the statistics that they observe come entirely from what you happen
to put in your accumulator for each setting of the relative orientations.
I agree that you can generate the required statistics locally in this
way. In fact, I can do it even more simply by taking a number of urns
and labelling each with a particular relative orientation, say parallel,
antiparallel, 90 degrees, and so on. In the "parallel" urn I place a
number of tokens labelled (A+B-) and an equal number labelled (A-B+). In
the "antiparallel" urn, I place a number of tokens labelled (A+B+), and
an equal number labelled (A-B-). In the "90 degree" urn I place a number
of tokens labelled (A+B+), an equal number labelled (A+B-), an equal
number labelled (A-B+), and finally an equal number labelled (A-B-).
One could go on with separate urns for each relative orientation one
wanted to consider, and fill them with similar tokens in the proportions
required to reproduce the quantum statistics. One is then guaranteed
that if one simulates the EPR experiment by drawing tokens (without
replacement) at random from the appropriate urn, one would reproduce the
required quantum statistics.
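For concreteness, here is a small Python sketch of this urn
construction (my own illustration; the token counts are chosen to match
the singlet proportion cos^2(theta/2) of opposite results, and I draw
with replacement for simplicity, since only the proportions matter):

import math, random

def make_urn(theta_deg, n=1000):
    # Fill the urn for relative orientation theta with ~n tokens:
    # P(opposite results) = cos^2(theta/2), split evenly between the
    # two token types of each kind.
    p_opp = math.cos(math.radians(theta_deg) / 2) ** 2
    n_opp = round(n * p_opp / 2)          # per opposite-result type
    n_same = round(n * (1 - p_opp) / 2)   # per same-result type
    return ([('A+', 'B-')] * n_opp + [('A-', 'B+')] * n_opp +
            [('A+', 'B+')] * n_same + [('A-', 'B-')] * n_same)

urns = {theta: make_urn(theta) for theta in (0, 90, 180)}

# Simulate a run: for each trial, draw a token from the urn for the
# chosen relative orientation; the token fixes both outcomes at once.
random.seed(1)
draws = [random.choice(urns[90]) for _ in range(10000)]
frac_opp = sum(a[1] != b[1] for a, b in draws) / len(draws)
print(frac_opp)   # ~0.5, matching cos^2(45 degrees) = 1/2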
Could one conceivably claim that this setup, which is completely local,
is a simulation of a real series of EPR experiments?
But that is precisely what your toy model does. It has absolutely no
connection with EPR or real experiments. One could generate any
arbitrary set of statistics to satisfy any theory whatsoever by this
method. You have demonstrated absolutely nothing about the locality or
otherwise of EPR.
Bruce