On 19/04/2016 10:23 am, Jesse Mazer wrote:
On Mon, Apr 18, 2016 at 3:45 AM, Bruce Kellett <[email protected] <mailto:[email protected]>> wrote:


    The local mathematical rule in this case, say for observer A, is
    that measurement on his own local particle will give either |+> or
    |->, with equal probability. It does not matter how many copies
    you generate, the statistics remain the same. I am not sure
    whether your multiple copies refer to independent repeats of the
    experiment, or simply multiple copies of the observer with the
    result he actually obtained. The set of outcomes on the past light
    cone for this observer is irrelevant for the single measurement
    that we are considering. Taking such copies can be local, but the
    utility remains to be demonstrated.



Sorry if I was unclear, I thought we were on the same page about the notion of "copies". The copies in my toy model are supposed to represent the idea in the many-worlds interpretation that there are multiple equally-real versions of a single system at a single location at a single time, including human experimenters, and that in any quantum experiment some versions will record one result and others will record a different one. So the copies represent different parallel versions of a simulated observer, and just as in the MWI, some copies see one result and other copies see a different result for any *single* experiment (and each copy retains a memory, so different copies remember different sequences of past results as well). And as in the MWI, these copies would be unaware of one another--just imagine several simulations of the same experimenter running in parallel at the same time, with different variations on what results the simulation feeds to them.

I have a couple of questions. Firstly, does the ensemble generated in this way differ in any significant respect from the one generated if the same Alice and Bob perform their (random orientation) measurements a large number of times? And secondly, what exactly are they performing their measurements on? On random unpolarized particles? Or always on one of the particles of an entangled singlet pair? In the latter case, one would assume that we have to keep track of which Alice result comes from the same pair as which Bob result. In other words, the ensemble is identical to the one generated by many runs of the same experiment, on entangled pairs, by the same observers.

A common topic of discussion on everything-list is the subject of "first-person indeterminacy", which would be expected to result when the pattern of a given physical brain is duplicated (I haven't been following a lot of recent threads, so I don't know if you've already weighed in on this topic before). You could imagine an actual atom-for-atom duplicate of a biological person, but to avoid objections based on the uncertainty principle and the no-cloning theorem, let's instead suppose the person in question is a "mind upload"--a very realistic simulation of a human brain (at the level of synapses or lower) running on a computer, which most on this list would assume to be just as conscious as a biological brain. Suppose the computer is a deterministic classical one, and the simulated brain is in a simulated body in a simulated environment that is closed off from outside input and also evolves deterministically. Then if a copy of the program is made with the same starting conditions and the two copies run in parallel on two different computers, the behavior (and presumably the inner experiences) of the upload should be the same in both.

But say that after the two programs have been running in parallel for a while there is a plan to produce a difference, with a screen inside the simulation flashing blue in one simulation and yellow in the other. When that happens, the behavior and experiences of the two copies of the uploaded brain should diverge somewhat (and probably continue to diverge even more over time, since sensitive dependence on initial conditions--the 'butterfly effect'--very likely applies to brain dynamics). If the upload knows in advance that the experiment will work this way, then before the screen flashes a color it would make sense for him to reason as if it were a probabilistic event, with a 50% chance of the screen showing blue and a 50% chance of it showing yellow. On the other hand, if he knows that 9 copies of the program will be shown slightly different (but distinguishable) shades of blue and only one will be shown a yellow screen, it makes sense for him to reason as if the odds are 9:1 that he will see a blue screen, given that after the experiment is complete there will be 9 variants of him that remember a blue screen and only 1 that remembers a yellow screen. If he has to bet something of value to him on the outcome, he will assume 50/50 odds of seeing blue in the first type of experiment and 90/10 odds in the second.
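To make the betting rule concrete, here is a minimal sketch in Python of the copy-counting arithmetic (the code and names are mine, purely for illustration):

from fractions import Fraction

def subjective_probability(copy_counts):
    # Map {outcome: number of copies that will see it} to {outcome: probability},
    # assuming indifference over which copy one turns out to be.
    total = sum(copy_counts.values())
    return {outcome: Fraction(n, total) for outcome, n in copy_counts.items()}

# The screen experiment above: 9 copies will see blue, 1 will see yellow.
print(subjective_probability({"blue": 9, "yellow": 1}))
# {'blue': Fraction(9, 10), 'yellow': Fraction(1, 10)} -- i.e. 90/10 odds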

According to the many-worlds interpretation we similarly have a huge number of parallel copies diverging from any given initial state of our memories, and so a many-worlds advocate will naturally interpret the apparently probabilistic nature of quantum physics in the same sort of way: as a kind of first-person indeterminacy that does not conflict with the perfectly deterministic evolution of the universal wavefunction (similarly, in my example above, the master program that assigns different colors to the screens in the different parallel simulations of the upload and his environment can be entirely deterministic). See http://www.preposterousuniverse.com/blog/2014/07/24/why-probability-in-quantum-mechanics-is-given-by-the-wave-function-squared/ for a nice discussion of a result showing how, in the many-worlds interpretation, a "principle of indifference" about which branch you're on can be used to derive probabilities that match those given by the Born rule.
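The counting idea behind that argument can be shown in a few lines (this is only an illustration of apportioning copies by |amplitude|^2, not the derivation in the linked post):

import math

def born_weights(amplitudes):
    # Branch weights proportional to |amplitude|^2, normalized to sum to 1;
    # indifference over copies apportioned this way gives Born-rule statistics.
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# A branching event with amplitudes sqrt(0.9) and sqrt(0.1):
print(born_weights([math.sqrt(0.9), math.sqrt(0.1)]))  # [0.9, 0.1]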

So the copies in the toy model are just simulating this aspect of the many-worlds interpretation. Again, to help make the locality more apparent, assume we have several physically distinct computers, each of which is *only* simulating copies of one particular observer at a fixed location in space, and the computers aren't allowed to communicate with each other any faster than should be permitted by the locality assumption.

But are they simulating measurements on one end of an entangled pair? Or measurements on completely separate unpolarized particles?

This could be made even more concrete by putting the different computers at spatial separations that match the separations the experimenters are supposed to have in the simulated universe--if two experimenters, Alice and Bob, are supposed to be 20 light-seconds apart, then we have a computer simulating copies of Alice that's actually 20 light-seconds apart from a computer simulating copies of Bob, and the computers can exchange information via light signals.
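In the simulation, this locality constraint is just a bookkeeping rule on message timestamps; a tiny sketch (my own framing, nothing already implemented):

SEPARATION = 20.0  # light-seconds between the Alice and Bob computers

def earliest_arrival(send_time, separation=SEPARATION):
    # A light-speed signal sent at send_time can be read no earlier than this.
    return send_time + separation

# Bob's results, sent at t = 5 s, cannot influence Alice's computer before t = 25 s.
assert earliest_arrival(5.0) == 25.0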

If Alice is supposed to measure her entangled particle at a particular time, the computer has all the copies of Alice make that measurement at the same time, but some copies get one result and some copies get a different result (as with the blue screen/yellow screen example). Likewise with Bob. And the computer simulating Alice has no information about what detector setting Bob used; likewise, the computer simulating Bob has no information about what detector setting Alice used

But it is absolutely crucial that the relevant pairing information be retained. In other words, we have to know which Alice measurement corresponds to the Bob measurement on /that particular/ entangled pair. If that pairing information is lost, or not available, then your toy model is not simulating the EPR setup, and so is useless.

--the computer simulating Alice has to assign the number of copies that see each possible result without any foreknowledge of what happened with Bob, and vice versa. If Bob is scheduled to transmit his result to Alice at a particular time, then the computer simulating Bob actually sends a package of messages from the different copies of Bob, this package traveling to the computer simulating Alice at the speed of light. When the computer simulating Alice receives the package of messages, it has to match messages from copies of Bob to copies of Alice in a one-to-one way,

Aye, there's the rub, as Shakespeare might say. The "computer has to match messages from copies of Bob to copies of Alice....". What on earth does that mean? If it means, as it must if your simulation is to bear any relationship to EPR experiments, that you match each Alice result (|+> or |->) to the corresponding Bob result (|+> or |->) that he got from the particle that was entangled with the particle that gave Alice her result, then every |+>|+'>, etc., combination observed in any random sample from this ensemble will agree with the predictions of quantum mechanics -- your local matching computer has no work to do, because it was all done by the initial entanglement and the non-local rules of quantum mechanics.

If the matching means something different, or if the information relating each Alice result to the corresponding Bob result on the /same/ entangled pair is lost or ignored, or if the original particles that were measured were just a random selection of unpolarized particles, then your simulation bears no relation to EPR measurement on entangled singlet pairs, so is not worth the time it took you to describe it.

for example, however many copies of Bob transmitted the message "I used detector setting #2 and got result +", the same number of copies of Alice must receive that message.

At the end of this process, each copy of Alice will both have the result of her own measurement, and a message from Bob that tells of the result he got on his measurement. My claim is that given this setup, it's possible to design the rules of the program in a way that ensures that if you select a copy of Alice *at random*, the probability she'll have learned of a given pair of measurement results will match the probabilities predicted by quantum mechanics.
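Here is a sketch of the kind of matching rule I have in mind, assuming the simulated experiment is on entangled singlet pairs (the code and function names are mine, and illustrative only): each computer splits its copies 50/50 between + and - locally, and only when the light-speed package arrives are copies paired so that the joint frequencies equal the singlet probabilities P(a,b) = (1 - a*b*cos(theta_A - theta_B))/4.

import math

def pair_copies(theta_a, theta_b, n_copies=10000):
    # Counts of (alice_result, bob_result) pairs after local matching.
    # Each side has n_copies, split 50/50 between +1 and -1 before any
    # message is exchanged; the matching step only decides which Alice
    # copy gets which Bob message, at the point where the package arrives.
    delta = theta_a - theta_b
    counts = {}
    for a in (+1, -1):
        for b in (+1, -1):
            p = (1 - a * b * math.cos(delta)) / 4  # singlet joint probability
            counts[(a, b)] = round(p * n_copies)
    return counts

counts = pair_copies(0.0, math.pi / 3)
print(counts)
# Each side's marginal stays 50/50, so the pairing uses nothing beyond
# what arrived in the light-speed package:
assert counts[(+1, +1)] + counts[(+1, -1)] == 10000 // 2

The local statistics on each side are fixed before any communication; the correlations appear only in the pairing, which takes place in the overlap of the two light cones.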

See the comments above. If you have set up your simulation so that it actually simulates the EPR experiment, then you do not have to do any work when you select from the final ensemble -- every single Alice-Bob combination will have results that agree with QM, since that is ensured by the entanglement of the original pair. If your pairing of Alice with Bob results does not correspond to their respective results from the same entangled pair, then you will never recover the quantum predictions -- or you might, but you will have to discard an unknown number of unrelated couplings, and in order to recover anything like the quantum correlations, your program will have to be non-local in time (or simulate such) even if not non-local in space.

(Recall my earlier argument that if an observer knows he or she is one of many copies whose experiences will diverge, the subjective probability he or she should assign to experiencing a particular outcome should be the same as the probability that a randomly-selected copy from the whole set will experience that outcome; for example, if 9 copies of me will experience a blue screen and one will experience a yellow one, I should reason as if there is a 90% chance I will experience a blue screen.) This will work despite the fact that the computers doing the simulation of each experimenter are ordinary classical ones with no *actual* entangled particles being used, despite the fact that the quantum experiment being simulated is one whose statistics would violate Bell inequalities, and despite the fact that locality is enforced by the actual separation between the two computers. The reason this doesn't actually violate Bell's theorem (which says no local realistic model can violate Bell inequalities) is that there is a loophole in the theorem: the proof assumes that each measurement yields a single unique result, so if you drop this assumption the theorem no longer applies.
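And to be explicit that the statistics being simulated really are Bell-violating, a quick check of the CHSH quantity at the standard angles, using the singlet correlation E = -cos(theta_A - theta_B) (a worked number, not new physics):

import math

def E(theta_a, theta_b):
    # Singlet-state correlation between the results at the two detector settings.
    return -math.cos(theta_a - theta_b)

a1, a2 = 0.0, math.pi / 2              # Alice's two detector settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two detector settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.83, above the bound of 2 for single-outcome local models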

Anyway, do you disagree that the above is doable--that purely classical computers simulating copies of different observers, with the computers having an arbitrarily large spatial separation, can give these observers subjective probabilities identical to those in the quantum experiment?

The simulation is doable, provided you simulate the actual experimental situation with entangled pairs. But then you will have to build the standard (non-local) quantum correlations into the respective Alice/Bob results. Whatever you try, you will never recover the quantum correlations in a local way.

Bruce
