On Mon, Mar 9, 2020 at 5:29 AM 'Brent Meeker' via Everything List <[email protected]> wrote:
> On 3/8/2020 3:56 AM, Bruce Kellett wrote:
> > On Sun, Mar 8, 2020 at 7:46 PM Russell Standish <[email protected]> wrote:
>> On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
>> > On Sun, Mar 8, 2020 at 5:32 PM Russell Standish <[email protected]> wrote:
>> >
>> > On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
>> >
>> > > That is, in fact, false. It does not generate the same strings as flipping a coin in a single world. Sure, each of the strings in Everett could have been obtained from coin flips -- but then the probability of a sequence of 10,000 heads is very low, whereas in many-worlds you are guaranteed that one observer will obtain this sequence. There is a profound difference between the two cases.
>> >
>> > You have made this statement multiple times, and it appears to be at the heart of our disagreement. I don't see what the profound difference is.
>> >
>> > If I select a subset from the set of all strings of length N, for example all strings with exactly N/3 1s, then I get a quite specific value for the proportion of the whole that match it:
>> >
>> >      / N \
>> >      |   | 2^{-N} = p.
>> >      \N/3/
>> >
>> > Now this number p will also equal the probability of seeing exactly N/3 coins land head up when N coins are tossed.
>> >
>> > What is the profound difference?
>> >
>> > Take a more extreme case. The probability of getting 1000 heads on 1000 coin tosses is 1/2^1000. If you measure the spin components of an ensemble of identical spin-half particles, there will certainly be one observer who sees 1000 spin-up results. That is the difference -- the difference between a probability of 1/2^1000 and a probability of one.
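Russell's counting identity is easy to verify numerically. A minimal Python sketch, using a small illustrative N = 9 (any multiple of 3 would do) and brute-force enumeration to stand in for the set of all length-N strings:

```python
from math import comb

# Russell's identity: the proportion of length-N bit strings containing
# exactly N/3 ones equals the probability of exactly N/3 heads in N
# tosses of a fair coin.
N = 9                 # illustrative choice, divisible by 3
k = N // 3

# 3p picture: enumerate the whole ensemble of 2^N strings and count.
matching = sum(1 for s in range(2 ** N) if bin(s).count("1") == k)
proportion = matching / 2 ** N

# Closed form: C(N, N/3) * 2^{-N} = p.
p = comb(N, k) / 2 ** N

assert proportion == p
print(p)              # 84/512 = 0.1640625
```

The same enumeration with k = N (all heads) gives the 1/2^N figure discussed below.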
>> >
>> > In fact in a recent podcast by Sean Carroll (that has been discussed on the list previously), he makes the statement that this rare event (with probability p = 1/2^1000) certainly occurs. In other words, he is claiming that the probability is both 1/2^1000 and one. That this is a flat contradiction appears to escape him. The difference in probabilities between coin tosses and Everettian measurements couldn't be more stark.
>>
>> That is because you're talking about different things. The rare event that 1 in 2^1000 observers see certainly occurs. In this case certainty does not refer to probability 1, as no probabilities are applicable in that 3p picture. Probabilities in the MWI sense refer to what an observer will see next; it is a 1p concept.
>>
>> And in that 1p context, I do not see any difference in how probabilities are interpreted, nor in their numerical values.
>>
>> Perhaps Carroll is being sloppy. If so, I would think that could be forgiven.
>
> > Yes, I think Carroll's comment was just sloppy. The trouble is that this sort of sloppiness permeates all of these discussions. As you say, probability really has meaning only in the 1p picture. So the guy who sees 1000 spin-ups in the 1000 trials will conclude that the probability of spin-up is very close to one. That is why it makes sense to say that the probability is one. The fact that this one guy sees this is certain in Many-worlds. (This may be another meaning of probability, but an event that is certain to happen is usually referred to as having probability one.)
> >
> > The trouble comes when you use the same term 'probability' to refer to the fact that this guy is just one of the 2^N guys who are generated in this experiment. The fact that he may be in the minority does not alter the fact that he exists, and infers a probability close to one for spin-up.
> > The 3p picture here is to consider that this guy is just chosen at random from a uniform distribution over all 2^N copies at the end of the experiment. And I find it difficult to give any sensible meaning to that idea. No one is selecting anything at random from the 2^N copies because that is not how the copies come about -- it is all completely deterministic.
> >
> > The guy who gets the 1000 spin-ups infers a probability close to one, so he is entitled to think that the probability of getting an approximately even number of ups and downs is very small: eps^500*(1-eps)^500 for eps very close to zero. Similarly, guys who see approximately equal numbers of up and down infer a probability close to 0.5. So they are entitled to conclude that the probability of seeing all spin-up is vanishingly small, namely, 1/2^1000.
> >
> > The main point I have been trying to make is that this is true whatever the ratio of ups to downs is in the data that any individual observes. Everyone concludes that their observed relative frequency is a good indicator of the actual probability, and that other ratios of up:down are extremely unlikely. This is a simple consequence of the fact that probability is, as you say, a 1p notion, and can only be estimated from the actual data that an individual obtains. Since people get different data, they get different estimates of the probability, covering the entire range [0,1]; no 3p notion of probability is available -- probabilities do not make sense in the Everettian case when all outcomes occur.
>
> I think this is wrong. There is both a 3p and a 1p notion of probability. The 1p notion is that I'm ignorant of the probability of getting a 1 or 0, but those are some fixed values, so I can estimate the probability from Bernoulli trials. The 3p notion is that there is a fixed ensemble of sequences produced by a certain fixed branching ratio of 1s and 0s.
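The situation being described -- every branch exists, and each observer infers a probability from their own branch alone -- can be made concrete with a small enumeration. A Python sketch, with N = 10 as an arbitrary illustrative choice:

```python
from itertools import product

N = 10
# Deterministic 3p picture: every length-N outcome string occurs in
# exactly one branch, so the "ensemble" is simply all 2^N strings.
branches = list(product("ud", repeat=N))
assert len(branches) == 2 ** N

# 1p picture: each observer estimates P(up) from their own data alone.
# The estimates cover the whole range [0, 1] in steps of 1/N.
estimates = sorted({b.count("u") / N for b in branches})
print(estimates)      # every value k/N for k = 0, ..., N occurs

# The observer who sees all ups certainly exists among the branches,
assert ("u",) * N in branches
# yet a fair coin produces that same string only with probability 2^-N.
print(2 ** -N)        # 0.0009765625
```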
> If I pick one sequence at random from the ensemble I can estimate that branching ratio. My claim is that these are equivalent. The 1p is the ergodic process model of the 3p.

This depends on the branching ratio of 0s and 1s defining the probability. This is not Everettian QM. This is the basic argument that Kent makes in arXiv:0905.0624.

> > The difference from the deterministic coin tossing situation is that in that case, only one outcome occurs in any trial, so the sequence of N trials generates a single bit string of length N, indicating a particular value of the probability for success on any toss. The situation could not be more different from the case in which all outcomes always occur.
>
> Yes it could be more different. The N->oo 1p and 3p statistics could disagree, but they don't. The expected values are the same, and the std-deviation is the same.

They do disagree for the majority of observers if the branching follows the number of terms in the superposition.

Bruce

-- 
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAFxXSLTxgsG-KAz9XNVq2Ru%3DpE0nvarS5xE9gTj3SUshTVX3Xg%40mail.gmail.com.
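The disputed comparison of 1p and 3p statistics can be spelled out numerically. A Python sketch, under the stated assumptions that the 3p ensemble weights all 2^N branch strings equally (branch counting) while the 1p statistics are Bernoulli trials at a Born probability p; N = 1000 and p = 0.9 are illustrative values:

```python
from math import sqrt

N = 1000
# 3p with equal branch weights (one branch per outcome string): the
# fraction of ups per string has mean 1/2 and std 1/(2*sqrt(N)),
# independently of the quantum amplitudes.
mean_3p, std_3p = 0.5, 1 / (2 * sqrt(N))

# 1p: Bernoulli trials at the Born probability p.
p = 0.9
mean_1p, std_1p = p, sqrt(p * (1 - p) / N)

# For p = 1/2 the two pictures agree; for p != 1/2, equal-weight branch
# counting puts most observers near frequency 1/2, not near p.
print(mean_3p, std_3p)    # 0.5, about 0.0158
print(mean_1p, std_1p)    # 0.9, about 0.0095
```

Whether the two pictures agree therefore hinges entirely on whether the 3p measure over branches is the uniform (counting) one or the Born one.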

