On Wed, May 13, 2020 at 3:30 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 5/12/2020 10:08 PM, Bruce Kellett wrote:
>
> On Wed, May 13, 2020 at 2:06 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>> > Consequently, the amplitude multiplying any sequence of M zeros and
>> > (N-M) ones, is a^M b^(N-M). Again, differentiating with respect to 'a'
>> > to find the turning point (and the value of 'a' that maximizes this
>> > amplitude), we find
>> >
>> >     |a|^2 = M/N,
>>
>> Maximizing this amplitude, instead of simply counting the number of
>> sequences with M zeroes as a fraction of all sequences (which is
>> independent of a) is effectively assuming |a|^2 is a probability
>> weight.  The "most likely" number of zeroes, the number that occurs most
>> often in the 2^N sequences, is N/2.
>>
>
> I agree that if you simply look for the most likely number of zeros,
> ignoring the amplitudes, then that is N/2. But I do not see that maximising
> the amplitude for any particular value of M is to effectively assume that
> it is a probability.
>
>
> I think it is.  How would you justify ".. the amplitude multiplying any
> sequence of M zeros and (N-M) ones, is a^M b^(N-M)..." except by saying a
> is a probability, so a^M is the probability of M zeroes.  If it's not a
> probability, why should it be multiplied into an expression to be maximized?
>


Trivially, without any assumptions at all. The original state is a|0> +
b|1>. If you carry the coefficients through at each branch, the branch
containing a new |0> carries a weight a, and similarly, the branch
containing a new |1> carries a weight b. One does not have to assume that
these are probabilities to do this -- each repeated trial is a branch
point, so each is another measurement of an instance of the initial state,
and the coefficients are automatically present. I don't see anything
sneaky here.

As to the question of why it should be maximised: well, why not? I am
simply maximising the carried-through coefficients to see whether this has
any bearing on the number M of zeros. The argument for probabilities then
proceeds by analogy with the traditional binomial case. I agree that this
may not count as a derivation of the Born rule for probabilities, but it
is certainly a good explication of it.
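A quick numerical sketch (my own addition, not part of the original
argument) of the claim in the quoted derivation: the branch amplitude
a^M b^(N-M), with b = sqrt(1 - a^2), is maximised at |a|^2 = M/N. The
function names and the grid-search approach are illustrative choices.

```python
import math

def branch_amplitude(a, M, N):
    """Amplitude carried by a sequence with M zeros out of N trials,
    assuming normalisation b = sqrt(1 - a^2)."""
    b = math.sqrt(1.0 - a * a)
    return a**M * b**(N - M)

def argmax_a_squared(M, N, steps=100_000):
    """Grid search over |a|^2 for the value maximising the amplitude."""
    best_a2, best_f = 0.0, -1.0
    for i in range(1, steps):
        a2 = i / steps
        f = branch_amplitude(math.sqrt(a2), M, N)
        if f > best_f:
            best_a2, best_f = a2, f
    return best_a2

M, N = 30, 100
print(argmax_a_squared(M, N))  # close to M/N = 0.3
```

The same result falls out analytically by setting the derivative of
a^M (1 - a^2)^((N-M)/2) to zero: M(1 - a^2) = (N - M) a^2, hence
|a|^2 = M/N.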



> In any case though, I don't see the form of the Born rule as something
> problematic.  It's getting from counting branches to probabilities.
>


I think my issue here is that counting branches is not the right thing to
do, because the branch counts are not in proportion to the coefficients
(which turn out to be probabilities). And counting branches for
probabilities requires the self-location assumption, which is
intrinsically dualist (as David Albert points out).

> Once you assume there is a probability measure, you're pretty much forced
> to the Born rule as the only consistent probability measure.
>

I agree. And that is the argument Everett made in his 1957 paper -- once
you require additivity, the fact that states are normalised screams for
the squared coefficients to be treated as probabilities. The squared
amplitudes obey all the Kolmogorov axioms, after all.
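For the normalisation part of that claim, here is a minimal check (my
sketch, with illustrative names): summing the Born weights over all 2^N
branch sequences gives 1 whenever |a|^2 + |b|^2 = 1, since the sum is
just (|a|^2 + |b|^2)^N by the binomial theorem.

```python
from itertools import product

def total_born_weight(a2, N):
    """Sum of |a|^(2M) |b|^(2(N-M)) over all 2^N zero/one sequences,
    with |b|^2 = 1 - |a|^2; should equal 1 for any N."""
    b2 = 1.0 - a2
    return sum(a2**seq.count(0) * b2**seq.count(1)
               for seq in product((0, 1), repeat=N))

print(total_born_weight(0.3, 10))  # approximately 1.0
```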

Bruce
