Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 5:22 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 10:07 PM, Bruce Kellett wrote:
>
> On Fri, Mar 6, 2020 at 11:33 AM Bruce Kellett wrote:
>
>> On Fri, Mar 6, 2020 at 11:08 AM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 3/5/2020 3:33 PM, Bruce Kellett wrote:
>>>
>>> No, it doesn't. Just think about what each observer sees from within
>>> his branch.
>>>
>>>
>>> It's what an observer has seen when he calculates the statistics after N
>>> trials.  If a>b there will be proportionately more observers who saw more
>>> 0s than those who saw more 1s.  Suppose that a^2=2/3 and b^2=1/3.  Then at
>>> each measurement split there will be two observers who see 0 and one who
>>> sees 1.  So after N trials there will be 3^N observers and most of them
>>> will have seen approximately twice as many 0s as 1s.
>>>
>>
>>
>> From within any branch the observer is unaware of other branches, so he
>> cannot see these weights. His statistics will depend only on the results on
>> his branch. In order for multiple branches to count as probabilities, you
>> have to appeal to some Self-Selection Assumption (SSA) in the 3p sense: you
>> have to consider that the observer self-selects at random from the set of
>> all observers. Then, since there are more branches according to the
>> weights, the probability that the randomly selected observer will see a
>> branch that is multiplied over the ensemble will depend on the number of
>> branches with that exact sequence. But this is not how it works in
>> practice, because each observer can only ever see data within his branch;
>> even if that observer is selected at random from among all observers, he
>> will calculate statistics that are independent of any branch weights.
>>
>> Bruce
>>
>
> To put this another way: if a=sqrt(2/3) and b=sqrt(1/3), then if an
> observer is to conclude, from his data, that 0 is twice as likely as 1, he
> must see approximately twice as many zeros as ones. This cannot be achieved
> by simply multiplying the number of branches on a zero result. Multiplying
> the number of branches does not change the data within each branch,
>
>
> Sure it does.  The observer is twice as likely to add on 0 branches to his
> sequence of observations as to add a 1 branch.  So more observers will see
> an excess of 0s over 1s.
>


The observer does not get to add branches to his sequence at will. Whether
more observers see an excess of zeros or not does not affect what each
individual observer sees.

> so observers will obtain exactly the same statistics as they would for
> a=b=1/sqrt(2). As I have repeatedly said, the data on each (and every)
> branch is independent of the weights or coefficients. This is a trivial
> consequence of having every result occur on every trial. Even if zero has
> weight 0.99, and one has weight 0.01, at each fork there is still one
> branch corresponding to zero, and one branch corresponding to one.
>
>
> That was Everett's original idea.  But if at each trial there are 99 forks
> with |0> and 1 fork with |1>  then there will be many observers who have
> observed only |0>'s after say 20 trials and few or none who will have
> observed only |1>'s .
>

But it is not a question of how many observers see a particular string: the
issue is what each observer sees from his own data. Since this is a
deviation from Everett's relative state idea, you have departed from the
Schrodinger equation, and have not really replaced it with a viable
dynamical equation that will multiply branches in the required way.

> Multiplying the number of zero branches at each fork does not change the
> statistics within individual branches.
>
>
> Yes it does.
>

Think again -- that is just an absurd comment. Every time a zero occurs in
a sequence, another identical sequence is added. That does not change
anything within the sequence. There are, after all, only 2^N possible
binary bit strings of length N.
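
[Editorial note: the two counting notions in dispute here can be stated
exactly. Write k(s) for the number of zeros in an N-bit string s, and H for
the binary entropy function. The fraction of the 2^N distinct strings that
show the exact 2:1 ratio is

  \binom{N}{2N/3} \, 2^{-N} \approx 2^{-N(1 - H(1/3))} \to 0 \quad (N \to \infty),

while counting each string with its branch multiplicity 2^{k(s)} gives
\binom{N}{k} \, 2^{k} \, 3^{-N}, which is the Binomial(N, 2/3) law and
concentrates at k/N = 2/3. The first is the per-string count Bruce appeals
to; the second is the observer count Brent appeals to.]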

> Whatever the observed sequence up to a given trial, the observer is more
> likely to add a |0> to his sequence on the next trial if there are more
> zero branches.
>

As above -- the observer does not get to add anything to his sequence -- it
is data that he is given. Actually, in the 2:1 ratio of zero branches to
one branches, one will end up with 3^N branches in total (since each
duplicated zero could be coded as 2, giving 3 branches to be added at each
fork). And there is a separate observer for each branch. It is what these
observers can infer from their data that is important -- not how many of
them there are.
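
[Editorial illustration: a short Python enumeration of the 2:1 branching
model under discussion (not code from the thread; N = 10 and the 0.1
tolerance are arbitrary choices). Each fork spawns two branches recording 0
and one recording 1, so there are 3^N observers but only 2^N distinct data
strings:

    from collections import Counter
    from itertools import product

    N = 10
    observers = Counter()
    # Three branch labels per fork: two distinct copies of '0', one '1'.
    for branches in product(['0a', '0b', '1'], repeat=N):
        seq = ''.join(b[0] for b in branches)  # the bit string this observer sees
        observers[seq] += 1

    assert len(observers) == 2**N           # distinct data strings
    assert sum(observers.values()) == 3**N  # one observer per branch
    # Fraction of observers whose data has zeros within 0.1 of 2/3:
    obs_frac = sum(c for s, c in observers.items()
                   if abs(s.count('0')/N - 2/3) <= 0.1) / 3**N
    # Fraction of distinct strings with zeros within 0.1 of 1/2:
    str_frac = sum(1 for s in observers
                   if abs(s.count('0')/N - 0.5) <= 0.1) / 2**N
    print(obs_frac, str_frac)

Both fractions are computed from the same 2^N strings; the dispute is over
which count carries probabilistic weight.]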

> And it is the data from within his branch that the physicist must use to
> test the theory. Even if he is selected at random from some population
> where the number of branches is proportional to the weights, he still has
> only the data from within a single branch against which to test the theory.
> Multiplying branches is as irrelevant as imposing branch weights.

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 10:07 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 11:33 AM Bruce Kellett wrote:


On Fri, Mar 6, 2020 at 11:08 AM 'Brent Meeker' via Everything List
<everything-list@googlegroups.com> wrote:

On 3/5/2020 3:33 PM, Bruce Kellett wrote:

No, it doesn't. Just think about what each observer sees from
within his branch.


It's what an observer has seen when he calculates the
statistics after N trials.  If a>b there will be
proportionately more observers who saw more 0s than those who
saw more 1s.  Suppose that a^2=2/3 and b^2=1/3.  Then at each
measurement split there will be two observers who see 0 and
one who sees 1.  So after N trials there will be 3^N observers
and most of them will have seen approximately twice as many 0s
as 1s.



From within any branch the observer is unaware of other branches,
so he cannot see these weights. His statistics will depend only on
the results on his branch. In order for multiple branches to count
as probabilities, you have to appeal to some
Self-Selection Assumption (SSA) in the 3p sense: you have to
consider that the observer self-selects at random from the set of
all observers. Then, since there are more branches according to
the weights, the probability that the randomly selected observer
will see a branch that is multiplied over the ensemble will depend
on the number of branches with that exact sequence. But this is
not how it works in practice, because each observer can only ever
see data within his branch; even if that observer is selected at
random from among all observers, he will calculate statistics that
are independent of any branch weights.

Bruce


To put this another way: if a=sqrt(2/3) and b=sqrt(1/3), then if an 
observer is to conclude, from his data, that 0 is twice as likely as 
1, he must see approximately twice as many zeros as ones. This cannot 
be achieved by simply multiplying the number of branches on a zero 
result. Multiplying the number of branches does not change the data 
within each branch,


Sure it does.  The observer is twice as likely to add on 0 branches to 
his sequence of observations as to add a 1 branch.  So more observers 
will see an excess of 0s over 1s.


so observers will obtain exactly the same statistics as they would for 
a=b=1/sqrt(2). As I have repeatedly said, the data on each (and every) 
branch is independent of the weights or coefficients. This is a 
trivial consequence of having every result occur on every trial. Even 
if zero has weight 0.99, and one has weight 0.01, at each fork there 
is still one branch corresponding to zero, and one branch 
corresponding to one.


That was Everett's original idea.  But if at each trial there are 99 
forks with |0> and 1 fork with |1>  then there will be many observers 
who have observed only |0>'s after say 20 trials and few or none who 
will have observed only |1>'s .


Multiplying the number of zero branches at each fork does not change 
the statistics within individual branches.


Yes it does.  Whatever the observed sequence up to a given trial, the 
observer is more likely to add a |0> to his sequence on the next trial 
if there are more zero branches.


And it is the data from within his branch that the physicist must use 
to test the theory. Even if he is selected at random from some 
population where the number of branches is proportional to the 
weights, he still has only the data from within a single branch 
against which to test the theory. Multiplying branches is as 
irrelevant as imposing branch weights.


That is where I think the attempt to force the Born rule on to Everett 
must inevitably fail -- there is no way that one can arrange fork 
dynamics so that there will always be twice as many zeros as ones 
along each branch (for the case a^2=2/3, b^2=1/3).


Not along each observed sequence.  But there will be many more sequences 
with twice as many zeros as ones than sequences with other proportions.




In the full set of all 2^N branches there will, of course, be branches 
in which this is the case. But that is just because when every 
possible bit string is included, that possibility will also occur. The 
problem is that the proportion of branches for which this is the case 
becomes small as N increases.


But not the proportion of branches which are within a fixed deviation 
from 2:1.  That proportion will increase with N.


I can see that I'm going to have to write a program to produce an 
example for you.
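
[Such a program might look like the following editorial sketch (not Brent's
actual code; the parameters are illustrative). It samples observers
uniformly from the 3^N branches of the 2:1 model, which is equivalent to
drawing 0 with probability 2/3 at each trial, and reports the proportion
whose estimate of P(0) lies within a fixed deviation of 2/3:

    import random

    random.seed(1)
    N, OBSERVERS = 1000, 10_000
    within = 0
    for _ in range(OBSERVERS):
        # One uniformly chosen path through the 3^N branches: each fork
        # offers two 0-branches and one 1-branch.
        zeros = sum(1 for _ in range(N) if random.randrange(3) < 2)
        if abs(zeros / N - 2/3) < 0.05:  # fixed deviation from the 2:1 ratio
            within += 1
    print(within / OBSERVERS)  # ~0.999; the proportion grows toward 1 with N
]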


Brent

Consequently, the majority of observers will conclude that the Born 
rule is disconfirmed. This is not in accordance with observation, so 
Everett fails as a scientific theory -- it cannot account for our 
observation of probabilistic results.


Bruce

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 11:33 AM Bruce Kellett  wrote:

> On Fri, Mar 6, 2020 at 11:08 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/5/2020 3:33 PM, Bruce Kellett wrote:
>>
>> No, it doesn't. Just think about what each observer sees from within his
>> branch.
>>
>>
>> It's what an observer has seen when he calculates the statistics after N
>> trials.  If a>b there will be proportionately more observers who saw more
>> 0s than those who saw more 1s.  Suppose that a^2=2/3 and b^2=1/3.  Then at
>> each measurement split there will be two observers who see 0 and one who
>> sees 1.  So after N trials there will be 3^N observers and most of them
>> will have seen approximately twice as many 0s as 1s.
>>
>
>
> From within any branch the observer is unaware of other branches, so he
> cannot see these weights. His statistics will depend only on the results on
> his branch. In order for multiple branches to count as probabilities, you
> have to appeal to some Self-Selection Assumption (SSA) in the 3p sense: you
> have to consider that the observer self-selects at random from the set of
> all observers. Then, since there are more branches according to the
> weights, the probability that the randomly selected observer will see a
> branch that is multiplied over the ensemble will depend on the number of
> branches with that exact sequence. But this is not how it works in
> practice, because each observer can only ever see data within his branch;
> even if that observer is selected at random from among all observers, he
> will calculate statistics that are independent of any branch weights.
>
> Bruce
>

To put this another way: if a=sqrt(2/3) and b=sqrt(1/3), then if an
observer is to conclude, from his data, that 0 is twice as likely as 1, he
must see approximately twice as many zeros as ones. This cannot be achieved
by simply multiplying the number of branches on a zero result. Multiplying
the number of branches does not change the data within each branch, so
observers will obtain exactly the same statistics as they would for
a=b=1/sqrt(2). As I have repeatedly said, the data on each (and every)
branch is independent of the weights or coefficients. This is a trivial
consequence of having every result occur on every trial. Even if zero has
weight 0.99, and one has weight 0.01, at each fork there is still one
branch corresponding to zero, and one branch corresponding to one.
Multiplying the number of zero branches at each fork does not change the
statistics within individual branches. And it is the data from within his
branch that the physicist must use to test the theory. Even if he is
selected at random from some population where the number of branches is
proportional to the weights, he still has only the data from within a
single branch against which to test the theory. Multiplying branches is as
irrelevant as imposing branch weights.

That is where I think the attempt to force the Born rule on to Everett must
inevitably fail -- there is no way that one can arrange fork dynamics so
that there will always be twice as many zeros as ones along each branch
(for the case a^2=2/3, b^2=1/3).

In the full set of all 2^N branches there will, of course, be branches in
which this is the case. But that is just because when every possible bit
string is included, that possibility will also occur. The problem is that
the proportion of branches for which this is the case becomes small as N
increases. Consequently, the majority of observers will conclude that the
Born rule is disconfirmed. This is not in accordance with observation, so
Everett fails as a scientific theory -- it cannot account for our
observation of probabilistic results.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 11:08 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 3:33 PM, Bruce Kellett wrote:
>
> On Fri, Mar 6, 2020 at 10:18 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/5/2020 2:01 PM, Bruce Kellett wrote:
>>
>> On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 3/5/2020 3:07 AM, Bruce Kellett wrote:
>>>
>>> there is no "weight" that differentiates different branches.
>>>>
>>>>
>>>> Then the Born rule is false, and the whole of QM is false.
>>>>
>>>
>>> No, QM is not false. It is only Everett that is disconfirmed by
>>> experiment.
>>>
>>> Everett + mechanism + Gleason do solve the core of the problem.

>>>
>>> No. As discussed with Brent, the Born rule cannot be derived within the
>>> framework of Everettian QM. Gleason's theorem is useful only if you have a
>>> prior proof of the existence of a probability distribution. And you cannot
>>> achieve that within the Everettian context. Even postulating the Born rule
>>> ad hoc and imposing it by hand does not solve the problems with Everettian
>>> QM.
>>>
>>> What needs to be derived or postulated is a probability measure on
>>> Everett's multiple worlds.  I agree that it can't be derived.  But I don't
>>> see that it can't be postulated that at each split the branches are given a
>>> weight (or a multiplicity) so that over the ensemble of branches the Born
>>> rule is statistically supported, i.e. almost all sequences will satisfy the
>>> Born rule in the limit of long sequences.
>>>
>>
>> Unfortunately, that does not work. Linearity means that any weight that
>> you assign to a particular result remains outside the strings, so data within
>> each string are independent of any such assigned weights. The weights would
>> not, therefore, show up in any experimental results. The weights can only
>> work in a single-world version of the model.
>>
>>
>> True.  But the multiplicity still works.
>>
>
> No, it doesn't. Just think about what each observer sees from within his
> branch.
>
>
> It's what an observer has seen when he calculates the statistics after N
> trials.  If a>b there will be proportionately more observers who saw more
> 0s than those who saw more 1s.  Suppose that a^2=2/3 and b^2=1/3.  Then at
> each measurement split there will be two observers who see 0 and one who
> sees 1.  So after N trials there will be 3^N observers and most of them
> will have seen approximately twice as many 0s as 1s.
>


From within any branch the observer is unaware of other branches, so he
cannot see these weights. His statistics will depend only on the results on
his branch. In order for multiple branches to count as probabilities, you
have to appeal to some Self-Selection Assumption (SSA) in the 3p sense: you
have to consider that the observer self-selects at random from the set of
all observers. Then, since there are more branches according to the
weights, the probability that the randomly selected observer will see a
branch that is multiplied over the ensemble will depend on the number of
branches with that exact sequence. But this is not how it works in
practice, because each observer can only ever see data within his branch;
even if that observer is selected at random from among all observers, he
will calculate statistics that are independent of any branch weights.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 11:14 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 3:44 PM, Bruce Kellett wrote:
>
> OR postulate that the splits are into many copies so that the branch count
>> gives the Born statistics.
>>
>
>
> That has possibilities, but I think it cannot work either. After all, each
> observer just sees a sequence of results -- he is unaware of other branches
> or sequences, so does not know how many branches are the same as his. The
> 1p/3p distinction comes into play again. Any attempt to make multiple
> branches reproduce probabilities necessarily confuses this distinction. You
> have to think in terms of what data an observer actually obtains. Thinking
> about what happens in the "other worlds" is illegitimate.
>
>
> Consider the many copies case as an ensemble and it will reproduce the
>> Born statistics even though it is deterministic.  This is easy to see
>> because every sequence a single observer has seen is the result of a random
>> choice at the split of which path you call "that observer".
>>
>
>
> But the weights do not influence that split, so the observer cannot see
> the weights.
>
>
> Not weights, multiple branches.
>

The observer cannot see multiple branches from within a branch, either.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 3:44 PM, Bruce Kellett wrote:


OR postulate that the splits are into many copies so that the
branch count gives the Born statistics.



That has possibilities, but I think it cannot work either. After all, 
each observer just sees a sequence of results -- he is unaware of 
other branches or sequences, so does not know how many branches are 
the same as his. The 1p/3p distinction comes into play again. Any 
attempt to make multiple branches reproduce probabilities necessarily 
confuses this distinction. You have to think in terms of what data an 
observer actually obtains. Thinking about what happens in the "other 
worlds" is illegitimate.



Consider the many copies case as an ensemble and it will reproduce
the Born statistics even though it is deterministic.  This is easy
to see because every sequence a single observer has seen is the
result of a random choice at the split of which path you call
"that observer".



But the weights do not influence that split, so the observer cannot 
see the weights.


Not weights, multiple branches.

Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 3:33 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 10:18 AM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/5/2020 2:01 PM, Bruce Kellett wrote:

On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List
<everything-list@googlegroups.com> wrote:

On 3/5/2020 3:07 AM, Bruce Kellett wrote:



there is no "weight" that differentiates different
branches.


Then the Born rule is false, and the whole of QM is false.


No, QM is not false. It is only Everett that is disconfirmed
by experiment.

Everett + mechanism + Gleason do solve the core of the
problem.


No. As discussed with Brent, the Born rule cannot be derived
within the framework of Everettian QM. Gleason's theorem is
useful only if you have a prior proof of the existence of a
probability distribution. And you cannot achieve that within
the Everettian context. Even postulating the Born rule ad
hoc and imposing it by hand does not solve the problems with
Everettian QM.


What needs to be derived or postulated is a probability
measure on Everett's multiple worlds.  I agree that it can't
be derived.  But I don't see that it can't be postulated that
at each split the branches are given a weight (or a
multiplicity) so that over the ensemble of branches the Born
rule is statistically supported, i.e. almost all sequences
will satisfy the Born rule in the limit of long sequences.


Unfortunately, that does not work. Linearity means that any
weight that you assign to a particular result remains outside the
strings, so data within each string are independent of any such
assigned weights. The weights would not, therefore, show up in
any experimental results. The weights can only work in a
single-world version of the model.


True.  But the multiplicity still works.


NO, it doesn't. Just think about what each observer sees from within 
his branch.


It's what an observer has seen when he calculates the statistics after N 
trials.  If a>b there will be proportionately more observers who saw 
more 0s than those who saw more 1s.  Suppose that a^2=2/3 and b^2=1/3.  
Then at each measurement split there will be two observers who see 0 and 
one who sees 1.  So after N trials there will be 3^N observers and most 
of them will have seen approximately twice as many 0s as 1s.


Brent



Bruce




Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 10:15 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 1:57 PM, Bruce Kellett wrote:
>
> On Fri, Mar 6, 2020 at 8:08 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/5/2020 2:45 AM, Bruce Kellett wrote:
>>
>>
>> Now sequences with small departures from equal numbers will still give
>> probabilities within the confidence interval of p = 0.5. But this
>> confidence interval also shrinks as 1/sqrt(N) as N increases, so these
>> additional sequences do not contribute a growing number of cases giving p ~
>> 0.5 as N increases. So, again within factors of order unity, the proportion
>> of sequences consistent with p = 0.5 decreases without limit as N
>> increases. So it is not the case that a very large proportion of the binary
>> strings will report p = 0.5. The proportion lying outside the confidence
>> interval of p = 0.5 is not vanishingly small -- it grows with N.
>>
>>
>> I agree with your argument about unequal probabilities, in which all the
>> binomial sequences occur anyway leading to inference of p=0.5.  But in the
>> above paragraph you are wrong about how the probability density
>> function of the observed value changes as N->oo.  For any given interval
>> around the true value, p=0.5, the fraction of observed values within that
>> interval increases as N->oo.  For example in N=100 trials, the proportion
>> of observers who calculate an estimate of p in the interval (0.45, 0.55) is
>> 0.68.  For N=500 it's 0.975.  For N=1000 it's 0.998.
>>
>> Confidence intervals are constructed to include the true value with some
>> fixed probability.  But that interval becomes narrower as 1/sqrt(N).
>> So the proportion lying inside and outside the interval is relatively
>> constant, but the interval gets narrower.
>>
>
>
> I think I am beginning to see why we are disagreeing on this. You are
> using the normal approximation to the binomial distribution for a large
> sequence of trials with some fixed probability of success on each trial. In
> other words, it is as though you consider the 2^N binary strings of length
> N to have been generated by some random process, such as coin tosses or the
> like, with some prior fixed probability value. Each string is then
> constructed as though the random process takes place in a single world, so
> that there is only one outcome for each toss.
>
> Given such an ensemble, the statistics you cite are undoubtedly correct:
> as the length of the string increases, the proportion of each string within
> some interval of the given probability increases -- that is what the normal
> approximation to the binomial gives you. And as N increases, the confidence
> interval shrinks, so the proportion within a confidence interval is
> approximately constant. But note these are the proportions within each
> string as generated with some fixed probability value. If you take an
> ensemble of such strings, then the result is even more apparent, and the
> proportion of strings in which the probability deviates significantly from
> the prior fixed value decreases without limit.
>
> That is all very fine. The problem is that this is not the ensemble of
> strings that I am considering!
>
> The set of all possible bit strings of length N is not generated by some
> random process with some fixed probability. The set is generated entirely
> deterministically, with no mention whatsoever of any probability. Just
> think about where these strings come from. You measure the spin of a
> spin-half particle. The result is 0 in one branch and 1 in the other. Then
> the process is repeated, independently in each branch, so the 1-branch
> splits into a 11-branch and a 10-branch; and the 0-branch splits into a
> 01-branch and a 00-branch. This process goes on for N repetitions,
> generating all possible bit strings of length N in an entirely
> deterministic fashion. The process is illustrated by Sean Carroll on page
> 134 of his book.
>
> Given the nature of the ensemble of bit strings that I am considering, the
> statistical results I quote are correct, and your statistics are completely
> inappropriate. This may be why we have been talking at cross purposes. I
> suspect that Russell has a similar misconception about the nature of the
> bit strings under consideration, since he talked about statistical results
> that could only have been obtained from an ensemble of randomly generated
> strings.
>
>
> Yes, I understand that.  And I understand that you have been talking about
> Everett's original idea in which at each split both results obtain, one in
> each branch...with no attribute of weight or probability or other measure.
> It's just 0 and 1.  Which generates all strings of zeros and ones.  This
> ensemble of sequences has the same statistics as random coin flipping
> sequences, even though it's deterministic.
>


That is, in fact, false. It does not generate the same strings as flipping
a coin

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 10:18 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 2:01 PM, Bruce Kellett wrote:
>
> On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/5/2020 3:07 AM, Bruce Kellett wrote:
>>
>> there is no "weight" that differentiates different branches.
>>>
>>>
>>> Then the Born rule is false, and the whole of QM is false.
>>>
>>
>> No, QM is not false. It is only Everett that is disconfirmed by
>> experiment.
>>
>> Everett + mechanism + Gleason do solve the core of the problem.
>>>
>>
>> No. As discussed with Brent, the Born rule cannot be derived within the
>> framework of Everettian QM. Gleason's theorem is useful only if you have a
>> prior proof of the existence of a probability distribution. And you cannot
>> achieve that within the Everettian context. Even postulating the Born rule
>> ad hoc and imposing it by hand does not solve the problems with Everettian
>> QM.
>>
>> What needs to be derived or postulated is a probability measure on
>> Everett's multiple worlds.  I agree that it can't be derived.  But I don't
>> see that it can't be postulated that at each split the branches are given a
>> weight (or a multiplicity) so that over the ensemble of branches the Born
>> rule is statistically supported, i.e. almost all sequences will satisfy the
>> Born rule in the limit of long sequences.
>>
>
> Unfortunately, that does not work. Linearity means that any weight that
> you assign to a particular result remains outside the strings, so data within
> each string are independent of any such assigned weights. The weights would
> not, therefore, show up in any experimental results. The weights can only
> work in a single-world version of the model.
>
>
> True.  But the multiplicity still works.
>

NO, it doesn't. Just think about what each observer sees from within his
branch.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 2:01 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/5/2020 3:07 AM, Bruce Kellett wrote:



there is no "weight" that differentiates different branches.


Then the Born rule is false, and the whole of QM is false.


No, QM is not false. It is only Everett that is disconfirmed by
experiment.

Everett + mechanism + Gleason do solve the core of the problem.


No. As discussed with Brent, the Born rule cannot be derived
within the framework of Everettian QM. Gleason's theorem is
useful only if you have a prior proof of the existence of a
probability distribution. And you cannot achieve that within the
Everettian context. Even postulating the Born rule ad hoc and
imposing it by hand does not solve the problems with Everettian QM.


What needs to be derived or postulated is a probability measure on
Everett's multiple worlds.  I agree that it can't be derived.  But
I don't see that it can't be postulated that at each split the
branches are given a weight (or a multiplicity) so that over the
ensemble of branches the Born rule is statistically supported,
i.e. almost all sequences will satisfy the Born rule in the limit
of long sequences.


Unfortunately, that does not work. Linearity means that any weight 
that you assign to a particular result remains outside the strings, so 
data within each string are independent of any such assigned weights. 
The weights would not, therefore, show up in any experimental results. 
The weights can only work in a single-world version of the model.


True.  But the multiplicity still works.

Brent



Bruce




Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 1:57 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 8:08 AM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/5/2020 2:45 AM, Bruce Kellett wrote:


Now sequences with small departures from equal numbers will still
give probabilities within the confidence interval of p = 0.5. But
this confidence interval also shrinks as 1/sqrt(N) as N
increases, so these additional sequences do not contribute a
growing number of cases giving p ~ 0.5 as N increases. So, again
within factors of order unity, the proportion of sequences
consistent with p = 0.5 decreases without limit as N increases.
So it is not the case that a very large proportion of the binary
strings will report p = 0.5. The proportion lying outside the
confidence interval of p = 0.5 is not vanishingly small -- it
grows with N.


I agree with your argument about unequal probabilities, in which all the 
all the binomial sequences occur anyway leading to inference of
p=0.5.  But in the above paragraph you are wrong about how the
probability density function of the observed value changes as
N->oo.  For any given interval around the true value, p=0.5, the
fraction of observed values within that interval increases as
N->oo.  For example in N=100 trials, the proportion of observers
who calculate an estimate of p in the interval (0.45 0.55) is
0.68.  For N=500 it's 0.975.  For N=1000 it's 0.998.

Confidence intervals are constructed to include the true value
with some fixed probability.  But that interval becomes narrower
as 1/sqrt(N).
So the proportion lying inside and outside the interval is
relatively constant, but the interval gets narrower.



I think I am beginning to see why we are disagreeing on this. You are 
using the normal approximation to the binomial distribution for a 
large sequence of trials with some fixed probability of success on 
each trial. In other words, it is as though you consider the 2^N 
binary strings of length N to have been generated by some random 
process, such as coin tosses or the like, with some prior fixed 
probability value. Each string is then constructed as though the 
random process takes place in a single world, so that there is only one 
outcome for each toss.


Given such an ensemble, the statistics you cite are undoubtedly 
correct: as the length of the string increases, the proportion of each 
string within some interval of the given probability increases -- that 
is what the normal approximation to the binomial gives you. And as N 
increases, the confidence interval shrinks, so the proportion within a 
confidence interval is approximately constant. But note these are the 
proportions within each string as generated with some fixed 
probability value. If you take an ensemble of such strings, then the 
result is even more apparent, and the proportion of strings in which 
the probability deviates significantly from the prior fixed value 
decreases without limit.


That is all very fine. The problem is that this is not the ensemble of 
strings that I am considering!


The set of all possible bit strings of length N is not generated by 
some random process with some fixed probability. The set is generated 
entirely deterministically, with no mention whatsoever of any 
probability. Just think about where these strings come from. You 
measure the spin of a spin-half particle. The result is 0 in one 
branch and 1 in the other. Then the process is repeated, independently 
in each branch, so the 1-branch splits into a 11-branch and a 
10-branch; and the 0-branch splits into a 01-branch and a 00-branch. 
This process goes on for N repetitions, generating all possible bit 
strings of length N in an entirely deterministic fashion. The process 
is illustrated by Sean Carroll on page 134 of his book.


Given the nature of the ensemble of bit strings that I am considering, 
the statistical results I quote are correct, and your statistics are 
completely inappropriate. This may be why we have been talking at 
cross purposes. I suspect that Russell has a similar misconception 
about the nature of the bit strings under consideration, since he 
talked about statistical results that could only have been obtained 
from an ensemble of randomly generated strings.


Yes, I understand that.  And I understand that you have been talking 
about Everett's original idea in which at each split both results 
obtain, one in each branch...with no attribute of weight or probability 
or other measure.  It's just 0 and 1.  Which generates all strings of 
zeros and ones.  This ensemble of sequences has the same statistics as 
random coin flipping sequences, even though it's deterministic.  But it 
doesn't have the same statistics as flipping an unfair coin, i.e. when 
a=/=b.  So to have a multiple world interpretation that produces 
statistics agreeing with the Born rule one has to either assign weights

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 3:07 AM, Bruce Kellett wrote:
>
> there is no "weight" that differentiates different branches.
>>
>>
>> Then the Born rule is false, and the whole of QM is false.
>>
>
> No, QM is not false. It is only Everett that is disconfirmed by experiment.
>
> Everett + mechanism + Gleason do solve the core of the problem.
>>
>
> No. As discussed with Brent, the Born rule cannot be derived within the
> framework of Everettian QM. Gleason's theorem is useful only if you have a
> prior proof of the existence of a probability distribution. And you cannot
> achieve that within the Everettian context. Even postulating the Born rule
> ad hoc and imposing it by hand does not solve the problems with Everettian
> QM.
>
> What needs to be derived or postulated is a probability measure on
> Everett's multiple worlds.  I agree that it can't be derived.  But I don't
> see that it can't be postulated that at each split the branches are given a
> weight (or a multiplicity) so that over the ensemble of branches the Born
> rule is statistically supported, i.e. almost all sequences will satisfy the
> Born rule in the limit of long sequences.
>

Unfortunately, that does not work. Linearity means that any weight that you
assign to a particular result remains outside the strings, so data within
each string are independent of any such assigned weights. The weights would
not, therefore, show up in any experimental results. The weights can only
work in a single-world version of the model.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 8:08 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 2:45 AM, Bruce Kellett wrote:
>
>
> Now sequences with small departures from equal numbers will still give
> probabilities within the confidence interval of p = 0.5. But this
> confidence interval also shrinks as 1/sqrt(N) as N increases, so these
> additional sequences do not contribute a growing number of cases giving p ~
> 0.5 as N increases. So, again within factors of order unity, the proportion
> of sequences consistent with p = 0.5 decreases without limit as N
> increases. So it is not the case that a very large proportion of the binary
> strings will report p = 0.5. The proportion lying outside the confidence
> interval of p = 0.5 is not vanishingly small -- it grows with N.
>
>
> I agree with your argument about unequal probabilities, in which all the
> binomial sequences occur anyway leading to inference of p=0.5.  But in the
> above paragraph you are wrong about how the probability density
> function of the observed value changes as N->oo.  For any given interval
> around the true value, p=0.5, the fraction of observed values within that
> interval increases as N->oo.  For example in N=100 trials, the proportion
> of observers who calculate an estimate of p in the interval (0.45, 0.55) is
> 0.68.  For N=500 it's 0.975.  For N=1000 it's 0.998.
>
> Confidence intervals are constructed to include the true value with some
> fixed probability.  But that interval becomes narrower as 1/sqrt(N).
> So the proportion lying inside and outside the interval is relatively
> constant, but the interval gets narrower.
>


I think I am beginning to see why we are disagreeing on this. You are using
the normal approximation to the binomial distribution for a large sequence
of trials with some fixed probability of success on each trial. In other
words, it is as though you consider the 2^N binary strings of length N to
have been generated by some random process, such as coin tosses or the
like, with some prior fixed probability value. Each string is then
constructed as though the random process takes place in a single world, so
that there is only one outcome for each toss.

Given such an ensemble, the statistics you cite are undoubtedly correct: as
the length of the string increases, the proportion of each string within
some interval of the given probability increases -- that is what the normal
approximation to the binomial gives you. And as N increases, the confidence
interval shrinks, so the proportion within a confidence interval is
approximately constant. But note these are the proportions within each
string as generated with some fixed probability value. If you take an
ensemble of such strings, then the result is even more apparent, and the
proportion of strings in which the probability deviates significantly from
the prior fixed value decreases without limit.

That is all very fine. The problem is that this is not the ensemble of
strings that I am considering!

The set of all possible bit strings of length N is not generated by some
random process with some fixed probability. The set is generated entirely
deterministically, with no mention whatsoever of any probability. Just
think about where these strings come from. You measure the spin of a
spin-half particle. The result is 0 in one branch and 1 in the other. Then
the process is repeated, independently in each branch, so the 1-branch
splits into a 11-branch and a 10-branch; and the 0-branch splits into a
01-branch and a 00-branch. This process goes on for N repetitions,
generating all possible bit strings of length N in an entirely
deterministic fashion. The process is illustrated by Sean Carroll on page
134 of his book.
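
[Editorial sketch of the deterministic tree just described (N = 16 keeps
the full enumeration tractable; the band widths are illustrative). All 2^N
strings occur, one per branch, with no probabilities anywhere; the fraction
lying within a fixed band around 0.5 grows with N, while the fraction
within a band shrinking like 1/sqrt(N) stays roughly constant -- the two
behaviours at issue between Bruce and Brent:

    from itertools import product
    from math import sqrt

    N = 16
    strings = [''.join(bits) for bits in product('01', repeat=N)]
    assert len(strings) == 2**N  # every outcome occurs; no probabilities enter

    # Fraction of branches within a fixed band around 1/2:
    fixed_band = sum(abs(s.count('0')/N - 0.5) <= 0.05 for s in strings) / 2**N
    # Fraction within a band that shrinks like 1/sqrt(N):
    ci_band = sum(abs(s.count('0')/N - 0.5) <= 1/(2*sqrt(N)) for s in strings) / 2**N
    print(fixed_band, ci_band)  # ~0.20 and ~0.79 at N = 16
]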

Given the nature of the ensemble of bit strings that I am considering, the
statistical results I quote are correct, and your statistics are completely
inappropriate. This may be why we have been talking at cross purposes. I
suspect that Russell has a similar misconception about the nature of the
bit strings under consideration, since he talked about statistical results
that could only have been obtained from an ensemble of randomly generated
strings.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 3:07 AM, Bruce Kellett wrote:



there is no "weight" that differentiates different branches.


Then the Born rule is false, and the whole of QM is false.


No, QM is not false. It is only Everett that is disconfirmed by 
experiment.


Everett + mechanism + Gleason do solve the core of the problem.


No. As discussed with Brent, the Born rule cannot be derived within 
the framework of Everettian QM. Gleason's theorem is useful only if 
you have a prior proof of the existence of a probability distribution. 
And you cannot achieve that within the Everettian context. Even 
postulating the Born rule ad hoc and imposing it by hand does not 
solve the problems with Everettian QM.


What needs to be derived or postulated is a probability measure on 
Everett's multiple worlds.  I agree that it can't be derived.  But I 
don't see that it can't be postulated that at each split the branches 
are given a weight (or a multiplicity) so that over the ensemble of 
branches the Born rule is statistically supported, i.e. almost all 
sequences will satisfy the Born rule in the limit of long sequences.


Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 2:45 AM, Bruce Kellett wrote:


Now sequences with small departures from equal numbers will still give 
probabilities within the confidence interval of p = 0.5. But this 
confidence interval also shrinks as 1/sqrt(N) as N increases, so these 
additional sequences do not contribute a growing number of cases 
giving p ~ 0.5 as N increases. So, again within factors of order 
unity, the proportion of sequences consistent with p = 0.5 decreases 
without limit as N increases. So it is not the case that a very large 
proportion of the binary strings will report p = 0.5. The proportion 
lying outside the confidence interval of p = 0.5 is not vanishingly 
small -- it grows with N.


I agree with your argument about unequal probabilities, in which all the 
binomial sequences occur anyway leading to inference of p=0.5. But in 
the above paragraph you are wrong about how the probability density 
function of the observed value changes as N->oo.  For any given interval 
around the true value, p=0.5, the fraction of observed values within 
that interval increases as N->oo.  For example in N=100 trials, the 
proportion of observers who calculate an estimate of p in the interval 
(0.45, 0.55) is 0.68.  For N=500 it's 0.975.  For N=1000 it's 0.998.


Confidence intervals are constructed to include the true value with some 
fixed probability.  But that interval becomes narrower as 1/sqrt(N).
So the proportion lying inside and outside the interval is relatively 
constant, but the interval gets narrower.
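
[Editorial check of the figures quoted above, computed with the normal
approximation to the binomial that they appear to come from (an
illustration, not code from the thread):

    from math import erf, sqrt

    def phi(z):  # standard normal CDF
        return 0.5 * (1 + erf(z / sqrt(2)))

    for N in (100, 500, 1000):
        sigma = 0.5 / sqrt(N)  # std-deviation of the estimated p when p = 0.5
        print(N, round(2 * phi(0.05 / sigma) - 1, 3))
    # -> 100 0.683, 500 0.975, 1000 0.998
]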


Brent



Re: Horizons protect Church-Turing

2020-03-05 Thread Lawrence Crowell
There seems to be a bit of a gel-mould setting at work. This and 
related ideas are appearing in a number of places. Read my paper on the 
FQXi contest

https://fqxi.org/community/forum/topic/3392

and my final comment has a loose summary of some of this. The MIP* = RE 
result is interesting as well, and I am carving out time next week to 
seriously study it. I think the domain of computation there has a 
connection with Hogarth-Malament spacetimes and the role of epistemic 
horizons; whether as topological obstructions with quantum entanglement 
or as event horizons, these appear to present barriers that protect the 
Church-Turing thesis. 

LC

On Thursday, March 5, 2020 at 5:42:27 AM UTC-6, ronaldheld wrote:
>
> Any comments, especially from Bruno, and the Physicalists?
>



Scientists “film” a quantum measurement (?)

2020-03-05 Thread 'scerir' via Everything List
https://www.su.se/english/research/research-news/scientists-film-a-quantum-measurement-1.487234

https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.124.080401



Re: More on MIP="RE"

2020-03-05 Thread Lawrence Crowell
I am thinking about how this has some duality with Hogarth-Malament 
spacetimes and hypercomputation. If spacetime is a large N-entanglement 
coherence or condensate then these should bear some relationship with each 
other.

LC

On Wednesday, March 4, 2020 at 2:43:37 PM UTC-6, Philip Thrift wrote:
>
> Title should be MIP*=RE.
>
> @philipthrift
>



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 10:05 PM Bruno Marchal  wrote:

> On 5 Mar 2020, at 05:52, Bruce Kellett  wrote:
>
> On Thu, Mar 5, 2020 at 3:23 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/4/2020 7:54 PM, Bruce Kellett wrote:
>>
>> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>>>
>>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List <
>>> everything-list@googlegroups.com> wrote:
>>>
 On 3/4/2020 6:18 PM, Bruce Kellett wrote:


 But one cannot just assume the Born rule in this case -- one has to use
 the data to verify the probabilistic predictions. And the observers on the
 majority of branches will get data that disconfirms the Born rule. (For any
 value of the probability, the proportion of observers who get data
 consistent with this value decreases as N becomes large.)


 No, that's where I was disagreeing with you.  If "consistent with" is
 defined as being within some given fraction, the proportion increases as N
 becomes large.  If the probability of an event is p and q=1-p then the
 proportion of events in N trials within one std-deviation of p approaches
 1/e as N->oo and the width of the one std-deviation range goes down as
 1/sqrt(N).  So the distribution of values over the ensemble of observers
 becomes concentrated near the expected value, i.e. is consistent with that
 value.

>>>
>>>
>>> But what is the expected value? Does that not depend on the inferred
>>> probabilities? The probability p is not a given -- it can only be inferred
>>> from the observed data. And different observers will infer different values
>>> of p. Then certainly, each observer will think that the distribution of
>>> values over the 2^N observers will be concentrated near his inferred value
>>> of p. The trouble is that this is true whatever value of p the
>>> observer infers -- i.e., for whatever branch of the ensemble he is on.
>>>
>>>
>>> Not if the branches are unequally weighted (or numbered), as Carroll
>>> seems to assume, and those weights (or numbers) define the probability of
>>> the branch in accordance with the Born rule.  I'm not arguing that this
>>> doesn't have to be put in "by hand".  I'm arguing it is a way of assigning
>>> measures to the multiple worlds so that even though all the results occur,
>>> almost all observers will find results close to the Born rule, i.e. that
>>> self-locating uncertainty will imply the right statistics.
>>>
>>
>> But the trouble is that Everett assumes that all outcomes occur on every
>> trial. So all the branches occur with certainty -- there is no "weight"
>> that differentiates different branches. That is to assume that the branches
>> occur with the probabilities that they would have in a single-world
>> scenario. To assume that branches have different weights is in direct
>> contradiction to the basic postulates of the many-worlds approach. It is
>> not that one can "put in the weights by hand"; it is that any assignment of
>> such weights contradicts the basis of the interpretation, which is that
>> all branches occur with certainty.
>>
>>
>> All branches occur with certainty so long as their weight>0.  Yes,
>> Everett simply assumed they all occur.  Take a simple branch counting
>> model.  Assume that at each trial a there are a 100 branches and a of them
>> are |0> and b are |1> and the values are independent of the prior values in
>> the sequence.  So long as a and b > 0.1 every value, either |0> or |1> will
>> occur at every branching.  But almost all observers, seeing only one
>> sequence thru the branches, will infer P(0)~|a|^2 and P(1)~|b|^2.
>>
>> Do you really disagree that there is a way to assign weights or
>> probabilities to the sequences that reproduces the same statistics as
>> repeating the N trials many times in one world?  It's no more than saying
>> that one-world is an ergodic process.
>>
>
>
> I am saying that assigning weights or probabilities in Everett, by hand
> according to the Born rule, is incoherent.
>
>
> I think that it is incoherent with a preconception of the notion of
> “world”. There are only consistent histories, and in fact "consistent
> histories supported by a continuum of computations”. You take Everett too
> literally.
>


I thought you were the one who claimed that Everett had essentially solved
all the problems.

But actually, all I need for my proof is that every outcome occurs on every
trial, which is a very slim version of Everett. The proof of the
impossibility of a sensible notion of probability works just as well for
the classical deterministic case, such as your WM-duplication scenario.
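
The premise is easy to state in code. A minimal sketch (my notation), in
which the amplitudes never enter:

    from itertools import product

    def everett_branches(n_trials):
        # If both outcomes occur on every trial, the set of first-person
        # data strings after n_trials is all of {0,1}^N.  The amplitudes
        # a and b appear nowhere in the construction.
        return list(product((0, 1), repeat=n_trials))

    branches = everett_branches(10)
    print(len(branches))   # 2^10 = 1024 observers, one per data string
    # The same 1024 strings result whether a^2 = 0.5 or a^2 = 0.999,
    # which is why the data on a branch carries no trace of the weights.

The same enumeration describes N rounds of WM-duplication, reading 0 and 1
as W and M.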

Bruce


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 9:59 PM Bruno Marchal  wrote:

> On 5 Mar 2020, at 04:54, Bruce Kellett  wrote:
>
> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>>
>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
>>>
>>>
>>> But one cannot just assume the Born rule in this case -- one has to use
>>> the data to verify the probabilistic predictions. And the observers on the
>>> majority of branches will get data that disconfirms the Born rule. (For any
>>> value of the probability, the proportion of observers who get data
>>> consistent with this value decreases as N becomes large.)
>>>
>>>
>>> No, that's where I was disagreeing with you.  If "consistent with" is
>>> defined as being within some given fraction, the proportion increases as N
>>> becomes large.  If the probability of an event is p and q=1-p, then the
>>> proportion of events in N trials within one std-deviation of p approaches
>>> 1/e as N->oo, and the width of the one-std-deviation range goes down as
>>> 1/sqrt(N).  So the distribution of values over the ensemble of observers
>>> becomes concentrated near the expected value, i.e. is consistent with that
>>> value.
>>>
>>
>>
>> But what is the expected value? Does that not depend on the inferred
>> probabilities? The probability p is not a given -- it can only be inferred
>> from the observed data. And different observers will infer different values
>> of p. Then certainly, each observer will think that the distribution of
>> values over the 2^N observers will be concentrated near his inferred value
>> of p. The trouble is that this is true whatever value of p the
>> observer infers -- i.e., for whatever branch of the ensemble he is on.
>>
>>
>> Not if the branches are unequally weighted (or numbered), as Carroll
>> seems to assume, and those weights (or numbers) define the probability of
>> the branch in accordance with the Born rule.  I'm not arguing that this
>> doesn't have to be put in "by hand".  I'm arguing it is a way of assigning
>> measures to the multiple worlds so that even though all the results occur,
>> almost all observers will find results close to the Born rule, i.e. that
>> self-locating uncertainty will imply the right statistics.
>>
>
> But the trouble is that Everett assumes that all outcomes occur on every
> trial. So all the branches occur with certainty —
>
>
> In the 3p view, but then the “self-locating” idea explains that QM
> predicts that the observers obtained do not see the “other branches”
> (“they don’t even feel the split”, as Everett argued correctly).
>


But each individual can test the probability predictions from the
first-person data obtained on his branch. And most will find that the Born
rule is disconfirmed if Everett is true.
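
To put a number on "most": a minimal sketch (the Born value p = 0.9 and the
tolerance eps = 0.05 are my illustrative choices), counting each of the 2^N
equally-real branches exactly once:

    from math import comb

    def fraction_confirming(p_born, n_trials, eps=0.05):
        # Fraction of the 2^n_trials branches whose relative frequency
        # of 0s lies within eps of the Born value, when every branch
        # counts once (no weights).
        good = sum(comb(n_trials, k) for k in range(n_trials + 1)
                   if abs(k / n_trials - p_born) <= eps)
        return good / 2 ** n_trials

    for n in (50, 100, 200):
        print(n, fraction_confirming(0.9, n))
    # the fraction is already tiny at n = 50 and shrinks further with n:
    # under unweighted branch counting, almost every observer's data
    # disconfirms p = 0.9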

there is no "weight" that differentiates different branches.
>
>
> Then the Born rule is false, and the whole of QM is false.
>

No, QM is not false. It is only Everett that is disconfirmed by experiment.

Everett + mechanism + Gleason do solve the core of the problem.
>

No. As discussed with Brent, the Born rule cannot be derived within the
framework of Everettian QM. Gleason's theorem is useful only if you have a
prior proof of the existence of a probability distribution. And you cannot
achieve that within the Everettian context. Even postulating the Born rule
ad hoc and imposing it by hand does not solve the problems with Everettian
QM.

 (Except that we can’t use the universal wave any more, but then we do
> recover it in arithmetic, like it was necessary, so no problem at all,
> except difficult mathematics …).
>
>
>
>
> That is to assume that the branches occur with the probabilities that they
> would have in a single-world scenario. To assume that branches have
> different weights is in direct contradiction to the basic postulates of
> the many-worlds approach.
>
>
> Since the paper by Graham, nobody counts the worlds by distinguishable
> outcomes; one uses Gleason, or Kochen, or some other method to attribute a
> weighting.
>

And that is contradicted by the data.

It is not that one can "put in the weights by hand"; it is that any
> assignment of such weights contradicts the basis of the interpretation,
> which is that all branches occur with certainty.
>
>
>
> They all occur with certainty, but the formalism explains why, from the
> first person perspective, they all occur with relative weighted
> uncertainties.
>


That is false. How many times do I have to prove to you that this does not
work?

Bruce

 There are only “relative states”, some sharable, some non-sharable.
>
> Bruno
>


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruno Marchal

> On 5 Mar 2020, at 05:52, Bruce Kellett  wrote:
> 
> On Thu, Mar 5, 2020 at 3:23 PM 'Brent Meeker' via Everything List 
> wrote:
> On 3/4/2020 7:54 PM, Bruce Kellett wrote:
>> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List 
>> wrote:
>> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List 
>>> wrote:
>>> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
 
 But one cannot just assume the Born rule in this case -- one has to use 
 the data to verify the probabilistic predictions. And the observers on the 
 majority of branches will get data that disconfirms the Born rule. (For 
 any value of the probability, the proportion of observers who get data 
 consistent with this value decreases as N becomes large.)
>>> 
>>> No, that's where I was disagreeing with you.  If "consistent with" is 
>>> defined as being within some given fraction, the proportion increases as N 
>>> becomes large.  If the probability of an event is p and q=1-p, then the 
>>> proportion of events in N trials within one std-deviation of p approaches 
>>> 1/e as N->oo, and the width of the one-std-deviation range goes down as 
>>> 1/sqrt(N).  So the distribution of values over the ensemble of observers 
>>> becomes concentrated near the expected value, i.e. is consistent with that 
>>> value.
>>> 
>>> 
>>> But what is the expected value? Does that not depend on the inferred 
>>> probabilities? The probability p is not a given -- it can only be inferred 
>>> from the observed data. And different observers will infer different values 
>>> of p. Then certainly, each observer will think that the distribution of 
>>> values over the 2^N observers will be concentrated near his inferred value 
>>> of p. The trouble is that this is true whatever value of p the 
>>> observer infers -- i.e., for whatever branch of the ensemble he is on.
>> 
>> Not if the branches are unequally weighted (or numbered), as Carroll seems 
>> to assume, and those weights (or numbers) define the probability of the 
>> branch in accordance with the Born rule.  I'm not arguing that this doesn't 
>> have to be put in "by hand".  I'm arguing it is a way of assigning measures 
>> to the multiple worlds so that even though all the results occur, almost all 
>> observers will find results close to the Born rule, i.e. that self-locating 
>> uncertainty will imply the right statistics.
>> 
>> But the trouble is that Everett assumes that all outcomes occur on every 
>> trial. So all the branches occur with certainty -- there is no "weight" that 
>> differentiates different branches. That is to assume that the branches occur 
>> with the probabilities that they would have in a single-world scenario. To 
>> assume that branches have different weights is in direct contradiction to 
>> the basic postulates of the many-worlds approach. It is not that one can 
>> "put in the weights by hand"; it is that any assignment of such weights 
>> contradicts the basis of the interpretation, which is that all branches 
>> occur with certainty.
> 
> All branches occur with certainty so long as their weight>0.  Yes, Everett 
> simply assumed they all occur.  Take a simple branch counting model.  Assume 
> that at each trial there are 100 branches and a of them are |0> and b are 
> |1> and the values are independent of the prior values in the sequence.  So 
> long as a and b > 0.1 every value, either |0> or |1> will occur at every 
> branching.  But almost all observers, seeing only one sequence thru the 
> branches, will infer P(0)~|a|^2 and P(1)~|b|^2.
> 
> Do you really disagree that there is a way to assign weights or probabilities 
> to the sequences that reproduces the same statistics as repeating the N 
> trials many times in one world?  It's no more than saying that one-world is 
> an ergodic process.
> 
>  
> I am saying that assigning weights or probabilities in Everett, by hand 
> according to the Born rule, is incoherent.

I think that it is incoherent with a preconception of the notion of “world”. 
There are only consistent histories, and in fact "consistent histories 
supported by a continuum of computations”. You take Everett too literally.

Bruno



> 
> Consider a state, |psi> = a|0> + b|1>, and a branch such that the 
> single-world probability by the Born rule is p = 0.001. (Such a branch can 
> trivially be constructed with a^2 = 0.9 and b^2 = 0.1: three successive |1> 
> outcomes then have weight 0.1^3 = 0.001.) Then 
> according to Everett, this branch is one of the 2^N branches that must occur 
> in N repeats of the experiment. But, by construction, the single world 
> probability of this branch is p = 0.001. So if MWI is to reproduce the 
> single-world probabilities, we have with certainty a branch with weight p = 
> 0.001. Now this is not to say that we certainly have a branch with 

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 9:46 PM Bruno Marchal  wrote:

> On 5 Mar 2020, at 01:40, Bruce Kellett  wrote:
>
> On Thu, Mar 5, 2020 at 10:39 AM Stathis Papaioannou 
> wrote:
>
>> On Thu, 5 Mar 2020 at 09:46, Bruce Kellett  wrote:
>>
>>>
>>> The greater problem is that any idea of probability founders when all
>>> outcomes occur for any measurement. Or have you not followed the arguments
>>> I have been making that show this to be the case?
>>>
>>
>> I think it worth noting that to some people it is obvious that if an
>> entity is to be duplicated in two places it should have a 1/2 expectation
>> of finding itself in one or other place while to other people it is obvious
>> that there should be no such expectation.
>>
>
>
> Hence my point that intuition is usually faulty in such cases -- the
> straightforward testing of any intuition with repeated trials shows the
> unreliability of such intuitions.
>
>
> It did not. You were confusing the first person account with the third
> person account.
>

Bullshit. There is no such confusion. You are just using a rhetorical
flourish to avoid facing the real issues.



> QM predicts that all measurement outcomes are obtained, and, by linearity,
> that no observer could have predicted the outcome he obtained, for the same
> reason nobody can predict the outcome in the WM self-duplication
> experience. Those who claim the contrary have to say at some point that the
> Helsinki guy has died, but then Mechanism is refuted.
>


Of course no one can predict the outcome of a quantum spin measurement on a
random spin-half particle. Just as no one can predict his 1p outcome in
WM-duplication. That is the point I have been making -- there is no useful
notion of probability available in either case.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruno Marchal

> On 5 Mar 2020, at 04:54, Bruce Kellett  wrote:
> 
> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List 
> wrote:
> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List 
>> wrote:
>> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
>>> 
>>> But one cannot just assume the Born rule in this case -- one has to use the 
>>> data to verify the probabilistic predictions. And the observers on the 
>>> majority of branches will get data that disconfirms the Born rule. (For any 
>>> value of the probability, the proportion of observers who get data 
>>> consistent with this value decreases as N becomes large.)
>> 
>> No, that's where I was disagreeing with you.  If "consistent with" is 
>> defined as being within some given fraction, the proportion increases as N 
>> becomes large.  If the probability of an event is p and q=1-p, then the 
>> proportion of events in N trials within one std-deviation of p approaches 
>> 1/e as N->oo, and the width of the one-std-deviation range goes down as 
>> 1/sqrt(N).  So the distribution of values over the ensemble of observers 
>> becomes concentrated near the expected value, i.e. is consistent with that 
>> value.
>> 
>> 
>> But what is the expected value? Does that not depend on the inferred 
>> probabilities? The probability p is not a given -- it can only be inferred 
>> from the observed data. And different observers will infer different values 
>> of p. Then certainly, each observer will think that the distribution of 
>> values over the 2^N observers will be concentrated near his inferred value 
>> of p. The trouble is that this is true whatever value of p the observer 
>> infers -- i.e., for whatever branch of the ensemble he is on.
> 
> Not if the branches are unequally weighted (or numbered), as Carroll seems to 
> assume, and those weights (or numbers) define the probability of the branch 
> in accordance with the Born rule.  I'm not arguing that this doesn't have to 
> be put in "by hand".  I'm arguing it is a way of assigning measures to the 
> multiple worlds so that even though all the results occur, almost all 
> observers will find results close to the Born rule, i.e. that self-locating 
> uncertainty will imply the right statistics.
> 
> But the trouble is that Everett assumes that all outcomes occur on every 
> trial. So all the branches occur with certainty —

In the 3p view, but then the “self-locating” idea explains that QM predicts 
that the observers obtained do not see the “other branches” (“they don’t even 
feel the split”, as Everett argued correctly).




> there is no "weight" that differentiates different branches.

Then the Born rule is false, and the whole of QM is false. Everett + mechanism 
+ Gleason do solve the core of the problem. (Except that we can’t use the 
universal wave any more, but then we do recover it in arithmetic, like it was 
necessary, so no problem at all, except difficult mathematics …).




> That is to assume that the branches occur with the probabilities that they 
> would have in a single-world scenario. To assume that branches have different 
> weights is in direct contradiction to the basic postulates of the 
> many-worlds approach.

Since the paper by Graham, nobody counts the worlds by distinguishable 
outcomes; one uses Gleason, or Kochen, or some other method to attribute a weighting. 



> It is not that one can "put in the weights by hand"; it is that any 
> assignment of such weights contradicts the basis of the interpretation, 
> which is that all branches occur with certainty.


They all occur with certainty, but the formalism explains why, from the first 
person perspective, they all occur with relative weighted uncertainties. There 
are only “relative states”, some sharable, some non-sharable. 

Bruno


> 
> Bruce
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 9:39 PM Bruno Marchal  wrote:

> On 5 Mar 2020, at 00:39, Stathis Papaioannou  wrote:
>
>
> I think it worth noting that to some people it is obvious that if an
> entity is to be duplicated in two places it should have a 1/2 expectation
> of finding itself in one or other place while to other people it is obvious
> that there should be no such expectation.
>
>
> It is not just obvious. It is derivable from the simplest definition of
> “first person” and “third person”.
>

This is simply false. It cannot be derived from anything. The truth is that
testing any such notion about the probability by repeating the trial shows
that no single value of the probability is appropriate. Alternatively, for
most 1p observers, any particular theory about the probability will be
disconfirmed. The first person data is the particular bit string recorded
by an individual. From the 3p perspective, there are 2^N different 1p bit
strings after N trials.
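
In symbols (my notation, summarising the count): among those 2^N strings the
number containing exactly k zeros is the binomial coefficient, so the 3p
census of observers who infer \hat{p} = k/N is

    \#\{\hat{p} = k/N\} = \binom{N}{k}, \qquad
    \sum_{k=0}^{N} \binom{N}{k} = 2^N,

and nothing in this count depends on the amplitudes a and b.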

Bruce



> All arguments presented against the 1p-indeterminacy have always been
> refuted, and almost always by pointing to a confusion between first
> person and third person.  The first person is defined by the owner of the
> personal memory taken with them in the box, and the third person is
> described by the personal memory of those outside the box.
>
>
>
>
> This seems to be an immediate judgement on considering the question, with
> attempts at rational justification perhaps following but not being the
> primary determinant of belief. A parallel is Newcomb’s paradox: on learning
> of it some people immediately feel it is obvious you should choose one box
> and others immediately feel you should choose both boxes.
>
>
>
> I think that the Newcomb situation is far more complex, or that the
> self-duplication is far simpler, at least for anyone who admits even a weak
> form of Mechanism. To believe that there is no indeterminacy is like
> believing that all amoebas have telepathic power.
>
> The only reason I can see to refuse the first person indeterminacy is the
> comprehension that it leads to the end of physicalism, which is a
> long-lasting comfortable habit of thought. People tend to hate change of
> paradigm.
>
> Bruno
>



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruno Marchal

> On 5 Mar 2020, at 01:40, Bruce Kellett  wrote:
> 
> On Thu, Mar 5, 2020 at 10:39 AM Stathis Papaioannou wrote:
> On Thu, 5 Mar 2020 at 09:46, Bruce Kellett wrote:
> 
> The greater problem is that any idea of probability founders when all 
> outcomes occur for any measurement. Or have you not followed the arguments I 
> have been making that show this to be the case?
> 
> I think it worth noting that to some people it is obvious that if an entity 
> is to be duplicated in two places it should have a 1/2 expectation of finding 
> itself in one or other place while to other people it is obvious that there 
> should be no such expectation.
> 
> 
> Hence my point that intuition is usually faulty in such cases -- the 
> straightforward testing of any intuition with repeated trials shows the 
> unreliability of such intuitions.

It did not. You were confusing the first person account with the third person 
account. QM predicts that all measurement outcomes are obtained, and, by 
linearity, that no observer could have predicted the outcome he obtained, for 
the same reason nobody can predict the outcome in the WM self-duplication 
experience. Those who claim the contrary have to say at some point that the 
Helsinki guy has died, but then Mechanism is refuted.

Bruno





> 
> This seems to be an immediate judgement on considering the question, with 
> attempts at rational justification perhaps following but not being the 
> primary determinant of belief. A parallel is Newcomb’s paradox: on learning 
> of it some people immediately feel it is obvious you should choose one box 
> and others immediately feel you should choose both boxes.
> 
> 
> Newcomb's 'paradox' seems to be just another illustration of the 
> unreliability of intuition in these situations. Except that Newcomb's paradox 
> relies on the unrealistic assumption of a perfect predictor. No such problems 
> beset the argument against intuition in the case of classical duplication, or 
> the case of binary quantum measurements. (See my simple outline of the 
> arguments in my reply to Russell.)
> 
> Bruce
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 5:26 PM Russell Standish 
wrote:

> On Thu, Mar 05, 2020 at 11:34:55AM +1100, Bruce Kellett wrote:
> > On Thu, Mar 5, 2020 at 10:39 AM Russell Standish 
> wrote:
> >
> > ISTM - probability is all about what an observer observes. Since the
> > observer cannot see all outcomes, an objection based on all outcomes
> > occurring seems moot to me.
> >
> >
> > The fact that the observer cannot see all outcomes is actually central
> to the
> > argument. If, in the person-duplication scenario, the participant naively
> > assumes a probability p = 0.5 for each outcome, such an intuition can
> only be
> > tested by repeating the duplication a number of times and inferring a
> > probability value from the observed outcomes. Since each observer can
> see only
> > the outcomes along his or her particular branch (and, ipso facto, is
> unaware of
> > the outcomes on other branches), as the number of trials N becomes very
> large,
> > only a vanishingly small proportion of observers will confirm their 50/50
> > prediction. This is a trivial calculation involving only the binomial
> > coefficient -- Brent and I discussed this a while ago, and Brent could
> not
> > fault the maths.
>
> But a very large proportion of them (→1 as N→∞) will report being
> within ε (called a confidence interval) of 50% for any given ε>0
> chosen at the outset of the experiment. This is simply the law of
> large numbers theorem. You can't focus on the vanishingly small
> population that lie outside the confidence interval.
>

This is wrong. In the binary situation where both outcomes occur for every
trial, there are 2^N binary sequences for N repetitions of the experiment.
This set of binary sequences exhausts the possibilities, so the same set of
sequences is obtained for any two-component initial state -- regardless of
the amplitudes. You appear to assume that the natural probability in this
situation is p = 0.5 and, what is more, your appeal to the law of large
numbers applies only for single-world probabilities, in which there is only
one outcome on each trial.

In order to infer a probability of p = 0.5, your branch data must have
approximately equal numbers of zeros and ones. The number of branches with
equal numbers of zeros and ones is given by the binomial coefficient. For
large even N = 2M trials, this coefficient is N!/M!*M!. Using the Stirling
approximation to the factorial for large N, this goes as 2^N/sqrt(N)
(within factors of order one). Since there are 2^N sequences, the
proportion with n_0 = n_1 vanishes as 1/sqrt(N) for N large.
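
The Stirling estimate is easy to check numerically. A quick sketch (my
code), comparing the exact proportion with the asymptotic form
sqrt(2/(pi*N)) implied above:

    from math import comb, pi, sqrt

    # Proportion of the 2^N sequences with exactly equal numbers of
    # 0s and 1s, against the asymptotic estimate sqrt(2/(pi*N)).
    for N in (10, 100, 1000, 10000):
        print(N, comb(N, N // 2) / 2 ** N, sqrt(2 / (pi * N)))
    # Both columns agree closely and fall off as 1/sqrt(N).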

Now sequences with small departures from equal numbers will still give
probabilities within the confidence interval of p = 0.5. But this
confidence interval also shrinks as 1/sqrt(N) as N increases, so these
additional sequences do not contribute a growing number of cases giving p ~
0.5 as N increases. So, again within factors of order unity, the proportion
of sequences consistent with p = 0.5 decreases without limit as N
increases. So it is not the case that a very large proportion of the binary
strings will report p = 0.5. The proportion lying outside the confidence
interval of p = 0.5 is not vanishingly small -- it grows with N.


> > The crux of the matter is that all branches are equivalent when both
> outcomes
> > occur on every trial, so all observers will infer that their observed
> relative
> > frequencies reflect the actual probabilities. Since there are observers
> for all
> > possibilities for p in the range [0,1], and not all can be correct, no
> sensible
> > probability value can be assigned to such duplication experiments.
>
> I don't see why not. Faced with a coin toss, I would assume a
> 50/50 chance of seeing heads or tails. Faced with a history of 100
> heads, I might start to investigate the coin for bias, and perhaps by
> Bayesian arguments give the biased coin theory greater weight than the
> theory that I've just experienced a 1 in 2^100 event, but in any case
> it is just statistics, and it is the same whether all oputcomes have
> been realised or not.
>

The trouble with this analogy is that coin tosses are single-world events
-- there is only one outcome for each toss. Consequently, any intuitions
about probabilities based on such comparisons are not relevant to the
Everettian case in which every outcome occurs for every toss. Your
intuition that it is the same whether all outcomes are realised or not is
simply mistaken.

> The problem is even worse in quantum mechanics, where you measure a state
> such
> > as
> >
> >  |psi> = a|0> + b|1>.
> >
> > When both outcomes occur on every trial, the result of a sequence of N
> trials
> > is all possible binary strings of length N, (all 2^N of them). You then
> notice
> > that this set of all possible strings is obtained whatever non-zero
> values of a
> > and b you assume. The assignment of some probability relation to the
> > coefficients is thus seen to be meaningless -- all probabilities occur
>

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruno Marchal

> On 5 Mar 2020, at 00:39, Stathis Papaioannou  wrote:
> 
> 
> 
> On Thu, 5 Mar 2020 at 09:46, Bruce Kellett wrote:
> On Thu, Mar 5, 2020 at 9:31 AM Stathis Papaioannou wrote:
> On Thu, 5 Mar 2020 at 08:54, Bruce Kellett wrote:
> On Wed, Mar 4, 2020 at 11:01 PM Stathis Papaioannou wrote:
> On Fri, 28 Feb 2020 at 08:40, Bruce Kellett wrote:
> On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything List 
> wrote:
> On 2/27/2020 3:45 AM, Bruce Kellett wrote:
>> 
>> That is probably what all this argument is actually about -- the maths show 
>> that there are no probabilities. Because there are no unique probabilities 
>> in the classical duplication case, the concept of probability has been shown 
>> to be inadmissible in the deterministic (Everettian) quantum case. The 
>> appeal by people like Deutsch and Wallace to betting quotients, or quantum 
>> credibility measures, are just ways of forcing a probabilistic 
>> interpretation on to quantum mechanics by hand -- they are not derivations 
>> of probability from within the deterministic theory. There are no 
>> probabilities in the deterministic theory, even from the 1p perspective, 
>> because the data are consistent with any prior assignment of a probability 
>> measure.
> 
> The probability enters from the self-location uncertainty; which in other 
> terms is saying: Assume each branch has the same probability (or some 
> weighting) for you being in that branch.  Then that is the probability that 
> you have observed the sequence of events that define that branch.
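
For what it is worth, here is a minimal sketch of the weighted
self-location rule described just above (whether the weighting can be
justified rather than imposed by hand is the point in dispute; p0 = |a|^2
is my shorthand):

    import random

    def self_located_observer(n_trials, p0):
        # Locate "yourself" on a branch with probability equal to its
        # weight.  For independent splittings this is i.i.d. sampling
        # with P(0) = p0, so Born statistics hold by construction.
        return sum(1 for _ in range(n_trials) if random.random() < p0)

    N = 100_000
    print(self_located_observer(N, 2/3) / N)   # ~0.667, the Born value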
> 
> I think that is Sean Carroll's approach. I am uncertain as to whether this 
> really works or not. The concept of a 'weight' or 'thickness' for each branch 
> is difficult to reconcile with the first-person experience of probability: 
> which is obtained within the branch, so is independent of any overall 
> 'weight'. But that aside, self-locating uncertainty is just another idea 
> imposed on quantum mechanics and, like decision-theoretic ideas, it is 
> without theoretical foundation -- it is just imposed by fiat on a 
> deterministic theory. It makes  probability a subjective notion imposed on a 
> theory that is supposedly objective: there is an objective probability that a 
> radioactive nucleus will decay in a certain time period -- independent of our 
> subjective impressions, or self-location. (I can develop this thought 
> further, if required, but I think it shows Sean's approach to fail.)
> 
> Probability derived from self-locating uncertainty is an idea independent of 
> any particular physics. It is also independent of any theory of 
> consciousness, since we can imagine a non-conscious observer reasoning in the 
> same way. To some people it seems trivially obvious, to others it seems very 
> strange. I don’t know if which group one falls into correlates with any other 
> beliefs or attitudes.
> 
> As I said, self-locating uncertainty is just another idea imposed on the 
> quantum formalism without any real theoretical foundation -- "it is just 
> imposed by fiat on a deterministic theory." If nothing else, this shows that 
> Carroll's claim that Everett is just "plain-vanilla" quantum mechanics, 
> without any additional assumptions, is a load of self-deluded hogwash.
> 
> And as I said, probabilities derived from self-locating uncertainty is, for 
> many people, trivially obvious, just a special case of frequentist inference.
> 
> That is not a particularly solid basis on which to base a scientific theory. 
> The trivially obvious is seldom useful.
> 
> The greater problem is that any idea of probability founders when all 
> outcomes occur for any measurement. Or have you not followed the arguments I 
> have been making that show this to be the case?
> 
> I think it worth noting that to some people it is obvious that if an entity 
> is to be duplicated in two places it should have a 1/2 expectation of finding 
> itself in one or other place while to other people it is obvious that there 
> should be no such expectation.

It is not just obvious. It is derivable from the simplest definition of “first 
person” and “third person”. All arguments presented against the 
1p-indeterminacy have always been refuted, and almost always by pointing to a 
confusion between first person and third person.  The first person is defined 
by the owner of the personal memory taken with them in the box, and the third 
person is described by the personal memory of those outside the box.




> This seems to be an immediate judgement on considering the question, with 
> attempts at rational justification perhaps following but not being the 
> primary determinant of belief. A parallel is Newcomb’s paradox: on learning 
> of it some people immediately feel it is obvious you should choose