On Mon, Apr 18, 2016 at 3:45 AM, Bruce Kellett <[email protected]>
wrote:

> On 18/04/2016 5:00 pm, Jesse Mazer wrote:
>
> On Mon, Apr 18, 2016 at 1:37 AM, Bruce Kellett <[email protected]>
> wrote:
>
>> On 18/04/2016 2:53 pm, Jesse Mazer wrote:
>>
>> On Sun, Apr 17, 2016 at 9:19 PM, Bruce Kellett <[email protected]>
>> wrote:
>>
>>> On 18/04/2016 10:11 am, Jesse Mazer wrote:
>>>
>>> On Sun, Apr 17, 2016 at 7:34 PM, Bruce Kellett <[email protected]>
>>> wrote:
>>>
>>>>
>>>> The future light cones of the observers will overlap at a time
>>>> determined by their initial separation, regardless of whether they send
>>>> signals to each other or not.
>>>>
>>>
>>> Of course, I never meant to suggest otherwise. Imagining a central
>>> observer who receives messages about each experiment was just conceptually
>>> simpler than imagining an arbitrary system that is affected in some
>>> unspecified way by each experimenter's results along with every other part
>>> of that system's past light cone. But you certainly don't *need* to use
>>> that particular example.
>>>
>>>
>>> The issue is to find a local explanation of the correlations; appealing
>>> to some arbitrary system that is affected in some unspecified way does
>>> not provide one. My example shows that no exchange of information after
>>> the separate worlds of the two experimenters have fully decohered can
>>> ever explain the quantum correlations.
>>>
>>
>> Why do you think it shows that? Does "explain" mean something more than
>> giving a mathematical model that generates the correct correlations, or is
>> that sufficient?
>>
>>
>> Have you not understood my argument? The specified experiment yields
>> four possible combinations of results: |+>|+'>, |+>|-'>, |->|+'>, and
>> |->|-'>. It is relatively easy to show, either by looking at special cases,
>> or by consideration of a repeated sequence of such experiments, that the
>> probabilities are different for each of the four sets of results. The
>> differences in probability depend only on the relative orientations of the
>> measuring magnets. Conveying this angle information after the experiment
>> has been completed, and each of the measurements has totally decohered,
>> cannot explain these correlations.
>>
>> What is required is an account of how these correlations can arise
>> *before* A and B speak to each other, because once they have their
>> results in hand, it may be weeks before they actually communicate. Rubin's
>> argument (following from Deutsch) does not achieve this.
>>
>
>
> But as I said, you can achieve it if there is no fact of the matter about
> *both* results except in the overlap region of the future light cones of
> both measurements, where a single localized system may be causally
> influenced by both measurements (see below for more on what I mean by this
> if you're unclear).
>
>
>
>>
>>
>>>> This so-called "matching up" is pure fantasy. Who does this matching? If
>>>> the central umpire is to do the matching, he has to have the power to
>>>> eliminate cases that disagree with the quantum prediction. Who has that
>>>> power?
>>>>
>>>
>>>
>>> The laws of physics would do the matching in some well-defined
>>> mathematical way.
>>>
>>>
>>> I agree that the laws of physics will 'prevent' the formation of any
>>> worlds in which the laws of physics are violated. That is not the issue.
>>> The issue is: how do the laws of physics act in order to achieve this. Do
>>> they act locally or non-locally? If they act locally, then you are required
>>> to provide the local mechanism whereby they so act. You are not doing this
>>> at the moment.
>>>
>>
>> Similar to my question above, what do you mean by "mechanism" ? Do you
>> mean something more than simply "mathematical rule that gives you the set
>> of possible outcomes (with associated probabilities or at least probability
>> amplitudes) at each local region of spacetime, given only the set of
>> possible outcomes at regions in the past light cone"?
>>
>>
>> The mathematical rule that gives the differing probabilities for each
>> outcome depending on the relative angle of the magnets is just quantum
>> mechanics. But that is intrinsically non-local
>>
>
> I specified that I was talking about a local mathematical rule--I said the
> rule would give the set of possible outcomes at one location in spacetime
> "given only the set of possible outcomes at regions in the past light
> cone". Did you
> miss that part, or do you disagree that if I mathematically determine the
> state of some region of spacetime using *only* information about the states
> of regions in the past light cone, that is by definition a local theory?
>
>
> The local mathematical rule in this case, say for observer A, is that
> measurement on his own local particle will give either |+> or |->, with
> equal probability. It does not matter how many copies you generate; the
> statistics remain the same. I am not sure whether your multiple copies
> refer to independent repeats of the experiment, or simply multiple copies
> of the observer with the result he actually obtained. The set of outcomes
> on the past light cone for this observer is irrelevant for the single
> measurement that we are considering. Taking such copies can be local, but
> the utility remains to be demonstrated.
>


Sorry if I was unclear, I thought we were on the same page about the notion
of "copies". The copies in my toy model are supposed to represent the idea
in the many-worlds that there are multiple equally-real versions of a
single system at a single location at a single time, including human
experimenters, and that in any quantum experiment some versions will record
one result and others will record a different one. So the copies represent
different parallel versions of a simulated observer, and just as in the
MWI, some copies see one result and other copies see a different result for
any *single* experiment (and each copy retains a memory, so different
copies remember different sequences of past results as well). And as in the
MWI, these copies would be unaware of one another--just imagine several
simulations of the same experimenter at the same time running in parallel,
with different variations on what results the simulation feeds to them.

A common topic of discussion on everything-list is the subject of
"first-person indeterminacy", which would be expected to result when the
pattern of a given physical brain is duplicated (I haven't been following a
lot of recent threads so I don't know if you've already weighed in on this
topic before). You could imagine an actual atom-for-atom duplicate of a
biological person, but to avoid objections based on the uncertainty
principle and no-cloning theorem, let's instead suppose the person in
question is a "mind upload"--a very realistic simulation of a human
brain (at the level of synapses or lower) running on a computer, which most
on this list would assume would be just as conscious as a biological brain.
Suppose the computer is a deterministic classical one, and the simulated
brain is in a simulated body in a simulated environment that is closed off
from outside input and also evolves deterministically. Then if a copy is
made of the program with the same starting conditions and the copies run
in parallel on two different computers, the behavior (and presumably the
inner experiences) of the two uploads should be identical. But say that
after the two
programs have been running in parallel for a while there is a plan to
produce a difference, with a screen inside the simulation flashing blue in
one simulation, yellow in the other simulation. When that happens, the
behavior and experiences of the two copies of the uploaded brain should
diverge somewhat (and probably continue to diverge even more over time,
since sensitive dependence on initial conditions--the 'butterfly
effect'--very likely applies to brain dynamics). If the upload knows in
advance that the experiment will work this way, then before the screen
flashes a color, it would make sense for him to reason as if it's a
probabilistic event, with a 50% chance of the screen showing blue and a 50%
chance of it showing yellow. On the other hand, if he knows that 9 copies
of the program will be shown slightly different (but distinguishable)
shades of blue but only one will be shown a yellow screen, it makes sense
for him to reason as if there are 9:1 odds (a 90% probability) that he will
see a blue screen, given that after the experiment is complete there will be 9
variants of him that remember a blue screen and only 1 that remembers a
yellow screen. If he has to bet something of value to him on the outcome,
he will assume 50/50 odds of seeing blue in the first type of experiment
and 90/10 odds in the second type of experiment.
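
To make the copy-counting rule concrete, here is a minimal Python sketch
(the 9-blue/1-yellow copy counts are taken from the example above; the
variable names and code are purely illustrative, not part of any published
model):

    # Copy-counting rule for subjective probability, as described above.
    from fractions import Fraction

    copies = {"blue": 9, "yellow": 1}  # copies of the upload shown each color
    total = sum(copies.values())

    # Before the screen flashes, the upload should bet as if he were a
    # randomly selected copy, so probabilities are proportional to counts.
    for color, n in copies.items():
        print(f"P(see {color}) = {Fraction(n, total)}")
    # prints P(see blue) = 9/10 and P(see yellow) = 1/10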

According to the many-worlds interpretation we similarly have a huge number
of parallel copies diverging from any given initial state of our memories,
and so a many-worlds advocate will naturally interpret the apparently
probabilistic nature of quantum physics in the same sort of way, as a kind
of first-person indeterminacy that does not conflict with the perfectly
deterministic evolution of the universal wavefunction (similarly in my
above example of first-person indeterminacy, the master program that
assigns different colors to the screens in different parallel simulations
of the upload and his environment can also be entirely deterministic). See
http://www.preposterousuniverse.com/blog/2014/07/24/why-probability-in-quantum-mechanics-is-given-by-the-wave-function-squared/
for a nice discussion of a result showing how, in the many-worlds
interpretation, a "principle of indifference" about which branch you're on
can be used to derive probabilities that match those given by the Born
rule.

So the copies in the toy model are just simulating this aspect of the
many-worlds interpretation. Again, to help make the locality more apparent,
assume we have several physically distinct computers, each of which is
*only* simulating copies of one particular observer at a fixed location in
space, and the computers aren't allowed to communicate with each other any
faster than should be permitted by the locality assumption. This could be
made even more concrete by putting the different computers at spatial
separations that match the separations the experimenters are supposed to
have in the simulated universe--if two experimenters, Alice and Bob, are
supposed to be 20 light-seconds apart, then we have a computer simulating
copies of Alice that is actually 20 light-seconds away from a computer
simulating copies of Bob, and the computers can exchange information via
light signals.

If Alice is supposed to measure her entangled particle at a particular
time, the computer has all the copies of Alice make that measurement at the
same time, but some copies get one result and some copies get a different
result (as with the blue screen/yellow screen example). Likewise with Bob.
And the computer simulating Alice has no information about what detector
setting Bob used; likewise, the computer simulating Bob has no information
about what detector setting Alice used. The computer simulating Alice has
to assign the number of copies that see each possible result without any
foreknowledge of what happened with Bob, and vice versa. If Bob is
scheduled to transmit his result to Alice at a particular time, then the
computer simulating Bob actually sends a package of messages from the
different copies of Bob, this message traveling to the computer simulating
Alice at the speed of light. When the computer simulating Alice receives
the package of messages, it has to match messages from copies of Bob to
copies of Alice in a one-to-one way: for example, however many copies of Bob
transmitted the message "I used detector setting #2 and got result +", the
same number of copies of Alice must receive that message.
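
To make the matching step concrete, here is a hedged Python sketch. The
joint probabilities are the standard quantum predictions for the singlet
state (P(++) = P(--) = (1/2)sin^2(theta/2), where theta is the relative
angle between the two settings); the copy counts and the particular
matching rule are my own illustrative choices, not code from Rubin's paper
or anyone else's model:

    # Toy matching step for the singlet state, using copies of observers.
    import math, random

    N = 100_000                   # copies per observer (must be even)
    theta = math.radians(60)      # relative angle between the two settings

    # Each computer assigns results purely locally: half its copies see +,
    # half see -, with no knowledge of the other computer's setting.
    p_pp = 0.5 * math.sin(theta / 2) ** 2  # P(++) = P(--) for the singlet
    n_pp = round(N * p_pp)                 # Alice-+ copies paired with Bob-+

    # When Bob's message package arrives, pair copies one-to-one so the
    # joint frequencies reproduce the quantum statistics. Both settings
    # are known here, in the overlap of the two future light cones.
    pairs  = [("+", "+")] * n_pp + [("+", "-")] * (N // 2 - n_pp)
    pairs += [("-", "-")] * n_pp + [("-", "+")] * (N // 2 - n_pp)
    random.shuffle(pairs)

    a, b = random.choice(pairs)   # one randomly selected Alice-copy's view
    print(f"random copy of Alice: own result {a}, message from Bob: {b}")
    frac_same = sum(x == y for x, y in pairs) / N
    print(f"fraction of copies agreeing = {frac_same:.3f} "
          f"(QM predicts {math.sin(theta / 2) ** 2:.3f})")

Note that n_pp never exceeds N/2 for any angle, so the matching is always
consistent with the 50/50 split each computer assigned locally before it
knew the other side's setting.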

At the end of this process, each copy of Alice will both have the result of
her own measurement, and a message from Bob that tells of the result he got
on his measurement. My claim is that given this setup, it's possible to
design the rules of the program in a way that ensures that if you select a
copy of Alice *at random*, the probability she'll have learned of a given
pair of measurement results will match the probabilities predicted by
quantum mechanics (recall my earlier argument that if an observer knows he
or she is one of many copies whose experiences will diverge, the subjective
probability he or she should assign to experiencing a particular outcome
should be the same as the probability that a randomly-selected copy from
the whole set will experience that outcome, for example if 9 copies of me
will experience a blue screen and one will experience a yellow, I should
reason as if there is a 90% chance I will experience a blue screen). This
will work despite the fact that the computers doing the simulation of each
experimenter are ordinary classical ones with no *actual* entangled
particles being used, and despite the fact that the quantum experiment
being simulated is one whose statistics would violate Bell inequalities,
and despite the fact that locality is enforced by the actual separation
between the two computers. The reason this doesn't actually violate Bell's
theorem (which says no local realistic model can violate Bell inequalities)
is that there is a loophole in the theorem: the proof of the theorem
assumes that each measurement yields a single unique result, so if you drop
this assumption the theorem no longer applies.
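
And as a sanity check that the statistics being reproduced really are the
Bell-violating ones: the matching rule above yields the correlation
E(theta) = -cos(theta), which violates the CHSH form of the Bell
inequality, |S| <= 2, for the usual choice of settings (again, just my own
illustrative sketch):

    # CHSH check on the correlations produced by the matching rule above.
    import math

    def E(theta):
        p_same = math.sin(theta / 2) ** 2  # fraction of pairs agreeing
        return p_same - (1 - p_same)       # = -cos(theta)

    a1, a2 = 0.0, math.pi / 2              # Alice's two settings
    b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings
    S = (E(abs(a1 - b1)) - E(abs(a1 - b2))
         + E(abs(a2 - b1)) + E(abs(a2 - b2)))
    print(f"|S| = {abs(S):.3f} (local-realist bound is 2)")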

Anyway, do you disagree that the above is doable--that purely classical
computers simulating copies of different observers, with the computers
having an arbitrarily large spatial separation, can give these observers
subjective probabilities identical to those in the quantum experiment?



>
>
> You are claiming to have a local account. But I have not yet seen it.
> Published attempts fail for the reasons given.
>
> Can you actually follow the detailed math of Rubin's argument in a
> step-by-step way, and identify the first step that's an error? Or are you
> just saying that your conceptual argument is sufficient to show that any
> such attempt is impossible, regardless of the details? If you're making an
> impossible-in-principle argument, I think a simple toy model like the one I
> described is sufficient to show your argument must be wrong.
>
>
> The conceptual argument is sufficient to show that Rubin must fail. Your
> toy model makes no impact on my argument.
>


But do you think the conceptual argument is sufficient to show that my
*local* toy model must fail to duplicate the quantum statistics as well? If
not, what specific differentiating feature would you point to that implies
the conceptual argument rules out duplicating quantum statistics locally in
Rubin's model, but does *not* rule it out in my toy model?


>
> Do you dispute that it would be possible to have a purely local and
> algorithmic copy-spawning rule with this property of reproducing the
> statistics of the real-world experiment, even knowing the real-world
> experiment would violate Bell inequalities? Or would you acknowledge this
> could be done but say it's irrelevant to whatever argument makes you
> confident Rubin's paper fails to do something analogous but with more
> generality? Or do you think that even if my approach succeeds at doing what
> I describe above and Rubin's might succeed in an analogous way, any local
> mathematical rule that deals solely with "copies" of systems at each
> location in space, without assigning copies at different locations to any
> common "world", is a failure as a local "mechanism" or "explanation"?
>
>
> Even if your local copy model succeeds in doing what you claim, it cannot
> reproduce the quantum correlations.
>

But what I claim is that it *does* reproduce the correct quantum statistics
for a randomly-selected copy. Do you disagree with this?



>
> Let me reduce this to simple steps:
>
> 1) MWI is an interpretation of QM only. I.e., it reproduces all the
> results of QM without adding any additional structure or dynamics.
> 2) The QM state describing an entangled singlet pair does not refer to, or
> depend on, the separation between the particles.
> 3) The quantum calculation of the joint probabilities depends on the
> relative orientation between the separate measurements on the separated
> particles.
> 4) This quantum calculation is the same for any physical separation, since
> the singlet state itself does not depend on the separation.
> 5) The quantum calculation is, therefore, intrinsically non-local because
> it does not depend on the separation, which can be arbitrarily large.
> 6) Since MWI does not add anything to standard QM, and standard QM gives a
> non-local account of the probabilities we are considering, any MWI account
> must also be intrinsically non-local.
>
> You appear to be disagreeing with step 5 here -- by relying on a
> non-standard notion of locality.
>
>

What do you mean "non-standard notion of locality"? Do you think the
standard notion means anything more than that the state of a given region of
spacetime can only be causally influenced by events in its past light cone?
Nothing in the notion of locality rules out the possibility that the "state
of a given region of spacetime" can include multiple parallel versions of a
person who have no awareness of one another.

Jesse
