On Mon, Feb 17, 2020 at 6:04 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 2/16/2020 9:48 PM, Bruce Kellett wrote:
>
> On Mon, Feb 17, 2020 at 4:13 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 2/16/2020 2:17 PM, Bruce Kellett wrote:
>>
>> That is where the proof given by Kent comes into play. If in the N trials
>> you observe pN zeros and (1-p)N ones, you estimate the probability for zero
>> to be p, within certain confidence limits that depend on the number of
>> trials. Note that this is precisely the 1p perspective, one person taking
>> his actual data and making some estimates. This person then considers that
>> some other person might have obtained r zeros, rather than the pN that he
>> obtained. Applying the binomial theorem, he estimates the probability for
>> this to occur as C(N,r) p^r (1-p)^{N-r}. For r/N significantly different
>> from p, this goes to zero in the limit as N becomes very large, so our
>> original observer believes that he has the
>> correct probability, since the probability of results significantly deviant
>> from his goes to zero as N becomes large.
>>
>>
>> The problem, of course, is that this reasoning applies equally well for
>> all the inhabitants (from their individual first-person perspectives),
>> whatever relative frequency p they see on their branch. All of them
>> conclude that their relative frequencies represent (to a very good
>> approximation) the branch weights. They clearly can't all be right, so
>> either there is no actual probability underlying the events and their
>> calculations are misguided, or the theory itself is incoherent.
>>
>>
>> But exactly the same reasoning applies for any given true value of p.
>> There will be different estimates by different experimenters and they can't
>> all be right.  Each will infer that any proportion other than the one he
>> observed will have zero measure in the limit N->oo.
>>
>
> Exactly right. That is where my example of spin measurements on an ensemble
> of identically prepared spin states comes into play. If all 2^N bit strings are
> realized for one orientation of the S-G magnet, then exactly the same 2^N
> bit strings are realized for every other orientation.
>
>
> ?? Suppose the ensemble is equally prepared in spin-up.  What does it mean
> to say all 2^N bit strings are realized for the S-G oriented left/right?
> We may expect they will be for any number of trials >>N.  But certainly
> not for the S-G oriented up/down.
>

What on earth are you talking about? I might not have been totally specific
in the above summary, but as I said to you in another post, the ensemble is
prepared in x-spin up and then the z-spin is measured (or the other way
round, I don't remember).

> Consequently, the coefficients in the expansion play no role in determining
> the data, and it makes no sense to talk of "the true value of p". There is
> no such true value if all values are realized.
>
>
> In Kent's thought experiment, if you consider the self-location as
>> probabilistic then it's exactly the same as taking a sample of N from an
>> ensemble for which p=0.5 is the true proportion.  I think you prove too
>> much by saying the estimate of any proportion of the other observers has
>> zero measure in the limit, therefore everybody is wrong.
>>
>
> That is a strange thing to say -- I prove too much by showing that the
> whole thing makes no sense?
>
>
> No, you prove too much by showing that everybody is necessarily wrong.
>

But the point is that everyone is wrong when there is no "true" probability!



> If you take a sample of N from an ensemble with true proportion p=0.5?????
> The trouble is that you get the same ensemble even if the true proportion
> is 0.99, or 0.01. or any other value.
>
>
> I don't understand that last remark.  If I take a sample of N from an
> ensemble with true proportion 0.5, then with high probability I get a
> sample with proportion near 0.5.  I don't know what you mean by "you get
> the same ensemble"?
>

You are not thinking about MWI where every result occurs for every
measurement. You are still being blinded by your single-world intuitions.


> If instead you estimate how many other experimenters will get estimates
>> which are consistent with yours by being of high probability in your
>> posterior Bayesian distribution, with high probability you will find that
>> most of them will.
>>
>
> Exactly. Even if you estimate p=0.01, you will dismiss branches with
> approximately equal numbers of zeros and ones as highly unlikely, and you
> expect other experimenters to verify your results.
>
>
> But if you estimate p=0.01 you will find that no one agrees with you even
> approximately.  And the bigger N is, the more singular your result will
> seem.  Every particular sequence of observed 1s and 0s is equally rare, but
> the proportion of sequences with approximately equal numbers of 1s and 0s
> is large.  So
> if you estimate p=0.5 you will have lots of agreeable replications.
>


Rubbish. You are simply not understanding what is going on here. The number
of agreements with your estimate of 0.01 may not be large, but it is not
negligible. The point is that this person estimates that all other
sequences are of low probability. Forget about 0.5 -- that is just a
consequence of the fact that there are only two possibilities on each
trial. It is not a probability.

> If the number of trials N is large, there are N(N-1)/2 branches with
> exactly 2 zeros and N-2 ones. The probability for N/2 zeros and N/2 ones is
> (2/N)^{N/2} (1-2/N)^{N/2} ~ N^{-N/2}, which goes to zero very rapidly for large
> N.
>
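The quoted estimate is easy to check numerically. A minimal Python sketch (the particular N values are chosen here purely for illustration):

```python
# An observer who saw exactly 2 zeros in N trials infers p_hat = 2/N.
# The probability he then assigns to a sequence with N/2 zeros and
# N/2 ones vanishes very rapidly with N (rough order N^(-N/2)).
for n in (10, 20, 40):
    p_hat = 2 / n
    prob = p_hat ** (n // 2) * (1 - p_hat) ** (n // 2)
    print(n, prob)
```

Already at N = 40 the assigned probability is below 10^-26, so such an observer would regard the "balanced" branches as wildly improbable.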
> There is no "intrinsic probability" in your scenario.
>>>
>>>
>>> If there is no probability, what do you expect when you are still in
>>> Helsinki. If you predict that you die, then you reject Mechanism (assumed
>>> here). If you predict P(W) = 1, the copy in Moscow will understand that the
>>> prediction was wrong. If you predict that your history is the expansion
>>> of PI, then only 1/2^N will be confirmed, etc.
>>>
>>
>>
>> I turn the tables on you here, Bruno. You are confusing the 1p and 3p
>> pictures. From each individual's personal perspective, he concludes,
>> according to above argument, that his are the correct probabilities. It is
>> only from the outside, third-person perspective, that we can see that he
>> represents only a small fraction of the total population of 2^N branches.
>>
>>> What is your prediction, if there is no probability? Keep in mind that “W”
>>> and “M” do not refer to self-localisation, but to the first-person
>>> experience. Do you agree that in this case W and M are incompatible?
>>> I just try to understand.
>>>
>>
>> As I said, I make no prediction, since I do not think that the concept of
>> probability can be meaningfully applied in cases of person duplication,
>> such as the WM scenario, or, for that matter, Everettian quantum mechanics.
>>
>>
> I think you are inconsistent in this.  You (correctly) emphasize that
> science is an inference from data.  Everett's QM is an interpretation.  It
> doesn't change the data.  So how does it change the inference?  What
> prediction will be falsified?
>

The interpretation has been shown to be incoherent. Of course the data
obtained in real life doesn't change. But under the assumption that Everett
is true, you get different data for these experiments. And that different
data is inconsistent with experience. So the Everett theory (or
interpretation) is false.



>
>> This is also Adrian Kent's objection to MWI, and it will also nullify any
>>> benefit you might seek to gain from the "frequency operator" -- every
>>> "first person" will get a different eigenvalue in the limit of infinite
>>> trials.
>>>
>>>
>>> That is not correct. If it is the frequency operator which is measured,
>>> it gives the Born Probabilities, at least if the “simple” derivation is
>>> correct.
>>>
>>
>> No, that argument is mistaken, as Kent's general argument in terms of the
>> binomial expansion shows. All 2^N persons will use the frequency operator
>> to conclude that their probabilities are the correct ones. Some will be
>> seriously wrong,
>>
>>
>> But almost all will intersubjectively agree that p is near 0.5.  Scientific
>> theories are based on intersubjective agreement...not personal experiences.
>>
>
>
> But the true probability,
>
>
> You said there was no true probability.  I said that if there is one, in
> an empirical sense, it must be 0.5.
>

No, in the single-world case, the Born rule probabilities are given by the
square of the coefficients (amplitudes). There is no earthly reason why the
probability in the case under consideration should be 0.5. The empirical
probabilities, inferred from frequency counts of the data, can be anything
at all. Get over your obsession with 0.5!
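To make that single-world point concrete, here is a minimal sketch (the amplitudes are chosen arbitrarily for illustration): the Born probability is the squared amplitude, and nothing forces it to be anywhere near 0.5.

```python
import math

# A qubit state a|0> + b|1> with |a|^2 + |b|^2 = 1.
# The Born-rule probabilities are |a|^2 and |b|^2; they can be
# anything in [0, 1] -- here 0.99 and 0.01.
a = math.sqrt(0.99)
b = math.sqrt(0.01)
p_zero = a ** 2
p_one = b ** 2
print(p_zero, p_one)
```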

> as given by the amplitudes, might be very far from 0.5.
>
>
> It might be.  And it might be even if the experiment were Bernoulli
> sampling with a true value of 0.5.  But given Kent's thought experiment (or
> coin flipping), most observers will infer a value near 0.5...and so we may
> expect to be one of them.
>


I don't often agree with Sean Carroll, though on occasion he does get
things right. In his book on Everett, p. 137, he says: "There's an answer
that is tempting but wrong: that we don't know 'which world we will end up
in'. This is wrong because it implicitly relies on a notion of personal
identity that simply isn't applicable in a quantum universe."

He might not have avoided this problem himself, but at least he recognized
that these things go wrong quite easily.


> This sort of hypothesis, that our observations are typical in some sense,
> is commonplace in cosmology, and science would be impossible without it.
>

And that hypothesis is completely without foundation. That is what goes
wrong with many of these discussions. It relies on an implicitly
dualist assumption, or on the existence of an immortal soul that defines
personal identity.

> This shows that the data are independent of the Born amplitudes -- the Born
> rule does not apply, and cannot be simply grafted on.
>
>
>
>> so the frequency operator is not a reliable indicator of probability.
>>
>> Incidentally, the fact that there are more bit strings in the set of all
>> 2^N bit strings with approximately equal numbers of 0 and 1 results is a
>> consequence of the binomial expansion when there are only two possible
>> outcomes, as in the cases we have considered -- it is no more fundamental
>> than that, and does not reflect some 3p-preferred probability.
>>
>> But my question is independent of Everett, so even if Kent is correct for
>>> QM, it remains false for Mechanism. Let us agree first on the simple
>>> Mechanist case, and then come back to Everett.
>>>
>>
>>
>> Kent posed his argument in terms of completely classical simulations, so
>> it is precisely parallel to your WM-duplication scenario. I have applied
>> the argument to Everettian QM because of the parallels between the two:
>> Everett is just like the classical duplication case since it is completely
>> deterministic and every possibility occurs on every trial. The only real
>> difference is that the different outcomes in QM occur on different branches
>> which, by decoherence, cannot interact or be aware of each other. So there
>> is no effective 3p perspective in QM as there is in the WM-duplication.
>> Arguments about the proportion of individuals who see particular sets of
>> outcomes in QM are arguments from the 3p perspective, and it can be argued
>> that in the absence of any possible 3p observer, such arguments are invalid.
>>
>>
>> Then the arguments about every estimate being contrary to other estimates
>> are also invalid.
>>
>
> Why should you think that?
>
>
> *I* don't think it.  But your observation above that the estimations will
> disagree is an argument from a 3p perspective and hence invalid.
>

I do not ever use the 3p perspective in these discussions. You and Bruno
are implicitly relying on a 1p/3p confusion.

> The estimates of probability that the individual observer makes are all
> strictly first person estimates -- they use their own data for their
> estimates, they do not take any third person view of the situation. It is
> the argument that most observers will lie towards the centre of the
> binomial distribution for two results that is an invalid appeal to the
> third person view, and an appeal which leads to manifestly wrong results if
> the true probability is far from 0.5.
>
>
> It's not an argument, it is the premise of the thought experiment.  The
> thought experiment implies the binomial distribution.
>

Binomial distributions do not necessarily have p = 0.5.

> What is invalid is to say each observer will infer a p value from one of
> the unique binomial sequences (a 3p view)
>

That is not a 3p view. It is the view of the person collecting the data. He
infers a binomial distribution with probability given by the observed
frequencies. An entirely 1p view.

> and then deny that most of those observers will observe a value near 0.5
> because it's a 3p view.
>

Brent, I think you need to take a deep breath and rethink what happens with
the data when both outcomes from a sequence of two-possibility trials
occur. I think you are failing to understand that this simply generates all
possible binary strings of length N. Each sequence defines a binomial
probability. But by no means are all of these p = 0.5, or even anything near
that. The 1p view treats all such binary strings equally -- it does not
select out any strings preferentially (as you seem determined to do).
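The point can be made explicit with a short sketch (in Python; N = 8 is just for illustration). Realizing both outcomes on every trial generates all 2^N binary strings, and each branch's observer infers his own relative frequency. Every value of p_hat from 0 to 1 occurs; the clustering of branch counts near 0.5 is just the binomial coefficient C(N, k), not a probability.

```python
from collections import Counter
from itertools import product

N = 8
# All 2^N outcome strings are realized, whatever the amplitudes.
# Each branch's observer estimates p_hat = (number of zeros) / N.
estimates = Counter(s.count(0) / N for s in product((0, 1), repeat=N))
for p_hat in sorted(estimates):
    print(f"p_hat = {p_hat:.3f} on {estimates[p_hat]} of {2**N} branches")
```

The count on each line is C(8, k), so the branch with p_hat = 0.5 is the most numerous, but estimates of 0.0 and 1.0 occur as well, and the amplitudes appear nowhere in the computation.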

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLQN3B57LAvcF9YYdUBxKX5_-s9hvV6r2qjRU981X4ggyw%40mail.gmail.com.
