On Mon, Feb 17, 2020 at 1:27 AM Bruno Marchal <[email protected]> wrote:
> On 14 Feb 2020, at 22:48, Bruce Kellett <[email protected]> wrote:
>
> On Sat, Feb 15, 2020 at 1:35 AM Bruno Marchal <[email protected]> wrote:
>>
>> Just to be clear, are you OK with P(W) = 1/2 in the WM-duplication, when
>> “W” refers to the first person experience?
>>
> No. As I have said before, the H-man has no basis on which to assign any
> probability at all to the possibility that he will see W (or M) tomorrow.
>
> Do you accept the idea that if we offer him (to the two copies, thus) a
> cup of coffee after reconstitution, in both M and W, that he can say in
> Helsinki that, if mechanism is correct, he will drink coffee with
> probability one? What would you say if you were the H-guy?

If all copies are given a cup of coffee, then it is certain that W and M will drink coffee -- by hypothesis.

> The trouble is that probabilities tend to be defined by the limit of
> relative frequencies over a large number of trials.
>
> But one trial is enough to refute P(W) = 1 and P(M) = 1. Or to refute
> P(W & M) = 1, given that W and M are incompatible first person experiences
> (none of the copies will feel to be in two cities at once).

One trial is enough to refute P(W) = 1 if you take the view of the M-man. So what? We are talking about estimating the probability from repeated trials -- there is no other sensible way to estimate empirical probabilities. You can estimate probabilities in the single-world case, say for coin tosses: if you assume that the coin and the tossing method are fair, then the probability for "heads" equals the probability for "tails"; again, by hypothesis. But in the duplication case we do not have this possibility available, so we must estimate the probabilities from the relative frequencies over a number of trials.

> If you perform the WM-duplication N times, there will be 2^N "first
> person experiences"
>
> OK.
>
> and many of them will assign probabilities greatly different from 0.5.
>
> Not at all.
> In the limit most will say that it looks like white noise: an arbitrary
> sequence. We can show that most histories (sequences of W and M) will be
> algorithmically incompressible, and if the copies met, they can see that
> their population is well described by the Pascal triangle (or Newton’s
> binomial).

That is where the proof given by Kent comes into play. If in the N trials you observe pN zeros and (1-p)N ones, you estimate the probability for zero to be p, within certain confidence limits that depend on the number of trials. Note that this is precisely the 1p perspective: one person taking his actual data and making some estimates. This person then considers that some other person might have obtained r zeros, rather than the pN that he obtained. Applying the binomial theorem, he estimates the probability for this to occur as C(N,r) p^r (1-p)^{N-r}. For any relative frequency r/N that deviates significantly from p, this goes to zero in the limit as N becomes very large, so our original observer believes that he has the correct probability, since the probability of results significantly deviant from his goes to zero as N becomes large.

The problem, of course, is that this reasoning applies equally well for all the inhabitants (from their individual first-person perspectives), whatever relative frequency p they see on their branch. All of them conclude that their relative frequencies represent (to a very good approximation) the branch weights. They clearly can't all be right, so either there is no actual probability underlying the events and their calculations are misguided, or the theory itself is incoherent.

> There is no "intrinsic probability" in your scenario.
>
> If there is no probability, what do you expect when you are still in
> Helsinki? If you predict that you die, then you reject Mechanism (assumed
> here). If you predict P(W) = 1, the guy in Moscow will understand that the
> prediction was wrong. If you predict that your history is the development
> of PI, then only 1/2^N will be confirmed, etc.
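As an aside, the counting behind Kent's argument is easy to check for small N. Here is a minimal Python sketch (my own illustration, not from the thread; the choice N = 10 is arbitrary):

```python
from collections import Counter
from itertools import product
from math import comb

# Illustration only: after N WM-duplications there are 2^N
# first-person histories, one per string of W/M outcomes.
N = 10
histories = ["".join(bits) for bits in product("WM", repeat=N)]
assert len(histories) == 2 ** N

# Each copy estimates P(W) as the relative frequency of W in its
# own history -- the 1p estimate discussed above.
estimates = Counter(h.count("W") / N for h in histories)

# The number of histories with k Ws is C(N, k): Pascal's triangle.
for k in range(N + 1):
    assert estimates[k / N] == comb(N, k)

# Histories cluster near 1/2 (C(10, 5) = 252 of the 1024), yet every
# relative frequency from 0 to 1 occurs, so the 2^N copies cannot all
# have read off the "correct" probability from their own data.
print(estimates[0.5], len(histories))
```

For N = 10 the branch counts reproduce row 10 of Pascal's triangle, which is just the "Newton's binomial" description of the population quoted above.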
I turn the tables on you here, Bruno. You are confusing the 1p and 3p pictures. From each individual's personal perspective, he concludes, according to the above argument, that his are the correct probabilities. It is only from the outside, third-person perspective, that we can see that he represents only a small fraction of the total population of 2^N branches.

> What is your prediction, if there is no probability? Keep in mind that
> “W” and “M” do not refer to self-localisation, but to the first person
> experience. Do you agree that in this case W and M are incompatible?
> I just try to understand.

As I said, I make no prediction, since I do not think that the concept of probability can be meaningfully applied in cases of person duplication, such as the WM scenario or, for that matter, Everettian quantum mechanics.

> This is also Adrian Kent's objection to MWI, and it will also nullify any
> benefit you might seek to gain from the "frequency operator" -- every
> "first person" will get a different eigenvalue in the limit of infinite
> trials.
>
> That is not correct. If it is the frequency operator which is measured, it
> gives the Born probabilities, at least if the “simple” derivation is
> correct.

No, that argument is mistaken, as Kent's general argument in terms of the binomial expansion shows. All 2^N persons will use the frequency operator to conclude that their probabilities are the correct ones. Some will be seriously wrong, so the frequency operator is not a reliable indicator of probability. Incidentally, the fact that there are more bit strings in the set of all 2^N bit strings with approximately equal numbers of 0 and 1 results is a consequence of the binomial expansion when there are only two possible outcomes, as in the cases we have considered -- it is no more fundamental than that, and does not reflect some 3p-preferred probability.

> But my question is independent of Everett, so even if Kent is correct for
> QM, it remains false for Mechanism.
> Let us agree first on the simple Mechanist case, and then come back to
> Everett.

Kent posed his argument in terms of completely classical simulations, so it is precisely parallel to your WM-duplication scenario. I have applied the argument to Everettian QM because of the parallels between the two: Everett is just like the classical duplication case, since it is completely deterministic and every possibility occurs on every trial. The only real difference is that the different outcomes in QM occur on different branches which, by decoherence, cannot interact or be aware of each other. So there is no effective 3p perspective in QM as there is in the WM-duplication. Arguments about the proportion of individuals who see particular sets of outcomes in QM are arguments from the 3p perspective, and it can be argued that in the absence of any possible 3p observer, such arguments are invalid.

Bruce

