On 2/28/2022 7:14 PM, Jesse Mazer wrote:


On Mon, Feb 28, 2022 at 7:39 PM Brent Meeker <meekerbr...@gmail.com> wrote:



    On 2/28/2022 3:39 PM, Jesse Mazer wrote:


    On Mon, Feb 28, 2022 at 6:12 PM Brent Meeker
    <meekerbr...@gmail.com> wrote:



        On 2/28/2022 1:12 PM, Jesse Mazer wrote:
        Superdeterminism goes well beyond Laplacean determinism.
        Determinism is just about the dynamical laws--if you know
        some "initial" state of the universe at time T1, it says you
        can perfectly predict the state at a later time T2 (or an
        earlier time, in a time-symmetric theory). Superdeterminism
        is a constraint on the initial conditions which is meant to
        rule out some broad class of possible worlds that are *not*
        ruled out by the dynamical laws.

        In a deterministic system any given initial condition rules
        out infinitely many futures.



    Yes, the conditional probability P(later conditions B | initial
    conditions A) is 1 for a unique value of B, 0 for every other
    possible value of B. But the dynamical laws themselves don't tell
    you anything about the non-conditional probability P(initial
    conditions A) for different possible choices of A.
    Superdeterminism adds an extra constraint which says P(initial
    conditions A) is 0 for the vast majority of possible initial
    conditions in the phase space, and only nonzero for a tiny
    fraction with some very special characteristics.

    But if the universe is deterministic it had only /*one*/ initial
    condition...so of course it had special characteristics.  Just as
    the winning lottery ticket had a special number on it.


But if you don't know that initial condition, then absent knowledge of some lawlike constraint on initial conditions, I think it makes sense to treat all initial microstates consistent with the historical data you've seen so far as equally likely, in terms of the subjective probability you assign to them. (This sort of assumption is needed in classical statistical mechanics: to make probabilistic predictions about an isolated system, you generally start by assuming that all microstates consistent with your knowledge of the macrostate are equally likely.) So suppose Bell inequalities have been consistently violated in the past, but you believe that's just a consequence of a particular "lucky" set of initial conditions rather than of the dynamical laws or a lawlike constraint on initial conditions. Then if you also believe the dynamical laws are local ones, you should expect the pattern to break down in the future, since among the possible initial microstates consistent with the experimental results you've seen so far, there are many more in which the pattern of Bell inequality violations breaks down and the inequalities are subsequently respected.
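
To put a number on that intuition, here's a toy sketch of my own (not from any particular paper): in a local deterministic hidden-variable model where each pair's hidden variable is drawn independently of the detector settings -- i.e. "generic" initial conditions with no special fine-tuning -- the CHSH quantity stays within the classical bound of 2, whereas QM predicts about 2.83 for the same settings.

import math
import random

def A(setting, lam):
    # Alice's predetermined +/-1 outcome; depends only on her local setting
    # and the shared hidden variable lam.
    return 1 if math.cos(setting - lam) >= 0 else -1

def B(setting, lam):
    # Bob's predetermined outcome, perfectly anticorrelated at equal settings.
    return -A(setting, lam)

def E(a, b, n=100000):
    # Monte Carlo estimate of the correlation <A*B>, with lam drawn
    # independently of the settings (a, b) -- i.e. statistical independence.
    total = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)
        total += A(a, lam) * B(b, lam)
    return total / n

a1, a2 = 0.0, math.pi / 2               # Alice's two possible settings
b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two possible settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("CHSH S for the local model:", round(S, 2))          # comes out near -2.0
print("quantum prediction:", round(-2 * math.sqrt(2), 2))   # about -2.83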


I agree.  And if that happens I guess it will be (weak) support for superdeterminism.




        In quantum theory, superdeterminism is invoked to allow for
        the possibility that the dynamical laws are local realist
        ones (of a single-world kind), so that under "generic"
        initial conditions one would expect statistically to see
        Bell inequalities respected (in contradiction to quantum
        predictions), but superdeterminism constrains the initial
        conditions to a special set

        Then postulating that the initial conditions were in this set
        seems like just another dynamical law; like Born's rule.


    Can you elaborate on the analogy to Born's rule? Born's rule is
    not a constraint on initial states.

    Born's rule for measurement results is not a dynamical law either.


I would say that in the Copenhagen interpretation the experimenter's choice about what to measure is not determined by dynamical laws, but once the state of the detector is set, the interaction between the detector and the quantum system being measured does obey a dynamical law: the system's wavefunction collapses onto one of the eigenstates of whatever variable the detector is set to measure (the projection postulate), with probability given by the square of the prior amplitude on that eigenstate (Born's rule).
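
To spell out which piece is doing what, here's a minimal toy sketch of my own (just the textbook recipe, not tied to any particular interpretation or paper):

import numpy as np

def measure(state, observable, rng=np.random.default_rng()):
    # Eigen-decomposition of the measured observable (assumed Hermitian);
    # the detector setting fixes which observable this is.
    eigvals, eigvecs = np.linalg.eigh(observable)
    # Born's rule: outcome probabilities are the squared amplitudes of the
    # state on the observable's eigenstates.
    amplitudes = eigvecs.conj().T @ state
    probs = np.abs(amplitudes) ** 2
    probs /= probs.sum()                     # guard against rounding error
    k = rng.choice(len(eigvals), p=probs)
    # Projection postulate: the state collapses onto the chosen eigenstate.
    return eigvals[k], eigvecs[:, k]

# Example: spin prepared along +x, detector set to measure along z.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
outcome, post_state = measure(plus_x, sigma_z)
print(outcome, post_state)                   # +1 or -1, each with probability 1/2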

In any case, if you don't consider Born's rule to be any sort of true dynamical law, were you saying it "seems like" a dynamical law in some sense, and that the constraint on initial conditions "seems like" a dynamical law in the same sense?

I'm pointing out it could be imitated by superdeterminism even though it's used as a law in QM.  It's analogous to computer programs; there's really no sharp distinction between program and data.



    Even if we accept in principle the idea of laws that consist of
    constraints on allowable initial conditions, there is also the
    argument that the mathematical formulation of such a constraint
    would have to be incredibly complex in an algorithmic sense,

    Why?  "No hidden variable" isn't very complex.


Are you interpreting superdeterminist theories as ones where there are no hidden variables?

No, I'm saying no hidden variables rules out superdeterminism. Either there are hidden variables that are sensitive to the polarization settings, or the polarization settings are influenced by the hidden variables.  But if there are no hidden variables...no superdeterminism.

Unless superdeterminism is assumed to measurably depart from the predictions of QM, it does require hidden variables--the idea in a Bell test involving spin measurements, for example, is that the particle pair carries hidden variables which predetermine what spins the particles will have along whichever axes the experimenters later choose to measure.
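
By "predetermine" I mean something like the following "instruction set" picture (a schematic toy example of my own, not a model anyone proposes as physically realistic): at the emission event each pair gets stamped with a +/-1 answer for every axis an experimenter might later pick.

import math
import random

AXES = [0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4]   # possible settings

def emit_pair():
    # Shared hidden variable assigned at the source.
    lam = random.uniform(0, 2 * math.pi)
    # Answers for every axis exist already at emission, before either
    # experimenter has chosen what to measure.
    alice = {a: (1 if math.cos(a - lam) >= 0 else -1) for a in AXES}
    bob = {a: -alice[a] for a in AXES}        # perfect anticorrelation
    return alice, bob

alice, bob = emit_pair()
a_choice = random.choice(AXES)                # choices made "later"
b_choice = random.choice(AXES)
print(alice[a_choice], bob[b_choice])         # outcomes were already fixed at the source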


    that it would have to have some built-in "concept" of high-level
    observers and measuring instruments so that the hidden variables
    could be assigned to particle pairs in a way that anticipated the
    fact that the two particles would later be measured by
    instruments in a certain configuration (the orientation of
    Stern-Gerlach devices used to measure each particle's spins, for
    example).

    But in a deterministic system all those things have a common
    cause; their past light cones overlap.


The event of the particle pair being emitted from the source is in the past light cone of each of the measurements, but each experimenter could for example base their decision about what axis to measure on a pseudorandom algorithm that took as its seed some astronomical data from a region that's outside the past light cone of the pair emission,

That's three past light cones that must not overlap to rule out a common cause violation of statistical independence.  That means they need to be more than 14 billion light years apart.

Brent

and also outside the past light cone of the other experimenter making their own decision. And the hidden variables assigned to the particles when they're emitted have to act as though they "anticipate" what measurements will be performed on them, even if the choice of measurements depended on information outside the past light cone of the emission event.
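
Schematically, with made-up numbers standing in for the astronomical data (none of this is meant to represent a real experiment's pipeline), the setting choice could look like this:

import hashlib
import math

AXES = [0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4]

def choose_axis(distant_data, axes):
    # Hash the distant observations into a seed and map it to one of the
    # possible measurement axes; the data are taken from a region outside the
    # relevant past light cones.
    digest = hashlib.sha256(repr(distant_data).encode()).digest()
    return axes[int.from_bytes(digest[:8], "big") % len(axes)]

# Placeholder "observations" from two widely separated patches of sky.
alice_axis = choose_axis([12.07, 19.44, 3.58], AXES)
bob_axis = choose_axis([8.91, 0.23, 27.65], AXES)
print(alice_axis, bob_axis)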


