Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Sun, Mar 8, 2020 at 11:54 PM smitra  wrote:

> On 08-03-2020 11:56, Bruce Kellett wrote:
> >
> > Yes, I think Carroll's comment was just sloppy. The trouble is
> > that this sort of sloppiness permeates all of these discussions. As
> > you say, probability really has meaning only in the 1p picture. So the
> > guy who sees 1000 spin-ups in the 1000 trials will conclude that the
> > probability of spin-up is very close to one. That is why it makes
> > sense to say that the probability is one. The fact that this one guy
> > sees this is certain in Many-worlds (This may be another meaning of
> > probability, but an event that is certain to happen is usually
> > referred to as having probability one.).
> >
> > The trouble comes when you use the same term 'probability' to refer to
> > the fact that this guy is just one of the 2^N guys who are generated
> > in this experiment. The fact that he may be in the minority does not
> > alter the fact that he exists, and infers a probability close to one
> > for spin-up. The 3p picture here is to consider that this guy is just
> > chosen at random from a uniform distribution over all 2^N copies at
> > the end of the experiment. And I find it difficult to give any
> > sensible meaning to that idea. No one is selecting anything at random
> > from the 2^N copies because that is not how the copies come about
> > -- it is all completely deterministic.
> >
> > The guy who gets the 1000 spin-ups infers a probability close to one,
> > so he is entitled to think that the probability of getting an
> > approximately even number of ups and downs is very small:
> > eps^1000*(1-eps)^1000 for eps very close to zero. Similarly, guys who
> > see approximately equal numbers of up and down infer a probability
> > close to 0.5. So they are entitled to conclude that the probability of
> > seeing all spin-up is vanishingly small, namely, 1/2^1000.
> >
> > The main point I have been trying to make is that this is true
> > whatever the ratio of ups to downs is in the data that any individual
> > observes. Everyone concludes that their observed relative frequency is
> > a good indicator of the actual probability, and that other ratios of
> > up:down are extremely unlikely. This is a simple consequence of the
> > fact that probability is, as you say, a 1p notion, and can only be
> > estimated from the actual data that an individual obtains. Since
> > people get different data, they get different estimates of the
> > probability, covering the entire range [0,1]; no 3p notion of
> > probability is available -- probabilities do not make sense in the
> > Everettian case when all outcomes occur. This is the basic argument
> > that Kent makes in arxiv:0905.0624.
>
> It's not true that everyone concludes that their observed relative
> frequency is a good indicator of the actual probability. Precisely in
> cases where there is a large deviation of the statistics from the actual
> probability, that deviation will also be visible in the observed data.


You appear to assume that there is an actual probability in these
situations. There is no evidence for that in Everett.

> It's only when you
> consider the case where the statistical fluctuation has affected all the
> data in a self-consistent way that this becomes hidden. But, of course,
> nothing limits that freak observer from doing a few more measurements.
>

I think you are referring to the possibility that sub-sequences of data do
not reflect the overall probability. Yes, but that is always the case. Why
do you think that experimenters at the LHC see so many apparently
significant results that go away with more data? The experimenter does not
know from his data that it is 'freak'. If he does more trials, or repeats
the experiment, the data may converge to some result, or they may not. If
Everett is correct, and there is no true probability, then the fact that
the data appear to converge is just a miracle -- or Everett is wrong. I
think the latter is more likely.

Bruce
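Bruce's branch-counting argument above can be made concrete with a toy enumeration (my own illustrative sketch, not anything from the thread; the function name is invented): when both outcomes occur on every trial, the set of 2^N branches is fixed by combinatorics alone, the amplitudes never enter, and the branch-wise frequency estimates cover the entire range [0,1].

```python
from itertools import product

def branch_estimates(n_trials):
    """Enumerate all 2^N branches of a binary experiment in which both
    outcomes occur on every trial, and return the probability of 'up'
    that each branch observer would infer from their own relative
    frequency.  The amplitudes of the initial state never appear:
    the same 2^N branches arise regardless."""
    return [sum(branch) / n_trials
            for branch in product((0, 1), repeat=n_trials)]

estimates = branch_estimates(10)
print(min(estimates), max(estimates))   # estimates span the full range 0.0 to 1.0
print(sum(1 for e in estimates if e == 0.5), "of", len(estimates),
      "branches infer exactly 0.5")     # 252 of 1024, i.e. C(10,5)/2^10
```

Every inferred probability from 0 to 1 occurs in some branch, which is the sense in which no single "actual probability" is picked out by the branching itself.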


The laws of physics may make it inevitable that there are observers who
> will happen to observe such large statistical deviations that they'll
> draw the wrong conclusions about the laws of physics. That fact is not
> evidence for or against such laws of physics. Experiments can still
> settle the question if the laws of physics are correct. Pointing to
> freak observers is not a good argument, because all these freak
> observers need to do is do more experiments to demonstrate that their
> previous observations are a statistical fluke.
>
> One can then continue to select those observers who'll continue to see
> statistical flukes. But the problem is then that these observers need to
> stop at some point, being satisfied with their observations implying the
> wrong theory. This means that not just the spin experiment, but
> everything else must also have been a statistical fluke in such a way as
> to imply the wrong theory in a consistent way. So, for centuries a large
> number 

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Mon, Mar 9, 2020 at 5:29 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/8/2020 3:56 AM, Bruce Kellett wrote:
>
On Sun, Mar 8, 2020 at 7:46 PM Russell Standish wrote:

>> On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
>> > On Sun, Mar 8, 2020 at 5:32 PM Russell Standish wrote:
>> >
>> >     On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
>> >
>> >     > That is, in fact, false. It does not generate the same strings
>> >     > as flipping a coin in a single world. Sure, each of the strings
>> >     > in Everett could have been obtained from coin flips -- but then
>> >     > the probability of a sequence of 10,000 heads is very low,
>> >     > whereas in many-worlds you are guaranteed that one observer
>> >     > will obtain this sequence. There is a profound difference
>> >     > between the two cases.
>> >
>> >     You have made this statement multiple times, and it appears to be
>> >     at the heart of our disagreement. I don't see what the profound
>> >     difference is.
>> >
>> >     If I select a subset from the set of all strings of length N, for
>> >     example all strings with exactly N/3 1s, then I get a quite
>> >     specific value for the proportion of the whole that match it:
>> >
>> >     binom(N, N/3) * 2^{-N} = p.
>> >
>> >     Now this number p will also equal the probability of seeing
>> >     exactly N/3 coins land head up when N coins are tossed.
>> >
>> >     What is the profound difference?
>> >
>> > Take a more extreme case. The probability of getting 1000 heads on
>> > 1000 coin tosses is 1/2^1000. If you measure the spin components of
>> > an ensemble of identical spin-half particles, there will certainly be
>> > one observer who sees 1000 spin-up results. That is the difference --
>> > the difference between a probability of 1/2^1000 and a probability of
>> > one.
>> >
>> > In fact, in a recent podcast by Sean Carroll (that has been discussed
>> > on the list previously), he makes the statement that this rare event
>> > (with probability p = 1/2^1000) certainly occurs. In other words, he
>> > is claiming that the probability is both 1/2^1000 and one. That this
>> > is a flat contradiction appears to escape him. The difference in
>> > probabilities between coin tosses and Everettian measurements
>> > couldn't be more stark.
>>
>> That is because you're talking about different things. The rare event
>> that 1 in 2^1000 observers see certainly occurs. In this case
>> certainty does not refer to probability 1, as no probabilities are
>> applicable in that 3p picture. Probabilities in the MWI sense refer
>> to what an observer will see next; it is a 1p concept.
>>
>> And in that 1p context, I do not see any difference in how
>> probabilities are interpreted, nor in their numerical values.
>>
>> Perhaps Carroll is being sloppy. If so, I would think that could be
>> forgiven.
>>
>
>
> Yes, I think Carroll's comment was just sloppy. The trouble is that
> this sort of sloppiness permeates all of these discussions. As you say,
> probability really has meaning only in the 1p picture. So the guy who sees
> 1000 spin-ups in the 1000 trials will conclude that the probability of
> spin-up is very close to one. That is why it makes sense to say that the
> probability is one. The fact that this one guy sees this is certain in
> Many-worlds (This may be another meaning of probability, but an event that
> is certain to happen is usually referred to as having probability one.).
>
> The trouble comes when you use the same term 'probability' to refer to the
> fact that this guy is just one of the 2^N guys who are generated in this
> experiment. The fact that he may be in the minority does not alter the fact
> that he exists, and infers a probability close to one for spin-up. The 3p
> picture here is to consider that this guy is just chosen at random from a
> uniform distribution over all 2^N copies at the end of the experiment. And
> I find it difficult to give any sensible meaning to that idea. No one is
> selecting anything at random from the 2^N copies because that is not how
> the copies come about -- it is all completely deterministic.
>
> The guy who gets the 1000 spin-ups infers a probability close to one, so
> he is entitled to think that the probability of getting an approximately
> even number of ups and downs is very small: eps^1000*(1-eps)^1000 for eps
> very close to zero. Similarly, guys who see approximately equal numbers of
> up and down infer a probability close to 0.5. So they are entitled to
> conclude that the probability of seeing all spin-up is vanishingly small,
> namely, 1/2^1000.
>
> The main point I have been trying to make is that this is true whatever
> the ratio of ups to downs is in the data that any individual observes.
> Everyone concludes that their observed relative frequency is a good
> 

Re: Parallel Worlds Probably Exist. Here’s Why

2020-03-08 Thread Alan Grayson


On Sunday, March 8, 2020 at 11:51:11 AM UTC-6, spudb...@aol.com wrote:
>
> You must be a fan of Kurt Godel then? He was big time into a spinning 
> cosmos, and yeah, a 7-sphere. 
>

Not exactly. I have to check out the data. If the universe is spinning, then 
distant galaxies viewed in opposite directions should show reversed 
directions of motion. If instead there is just a drift toward a huge mass, 
which is not observed, that could mean another "nearby" universe in that 
direction. I wonder about Clark's view on this situation. AG 

>
>
> -Original Message-
> From: Alan Grayson
> To: Everything List
> Sent: Sat, Mar 7, 2020 6:29 pm
> Subject: Re: Parallel Worlds Probably Exist. Here’s Why
>
>
>
> On Friday, March 6, 2020 at 4:17:34 PM UTC-7, John Clark wrote:
>
> This video just went online, I thought it was excellent: 
>
> Parallel Worlds Probably Exist. Here’s Why 
> 
>
> John K Clark
>
>
> As you know, I believe our universe is shaped like a hyper-sphere. With 
> this in mind, perhaps the best tentative evidence for other worlds is the 
> fact that distant galaxies are moving in unison in the direction of what is 
> hypothesized as "The Great Attractor". But maybe what we're observing is 
> the rotation of our universe. Rotations are caused by glancing blows, and 
> in this case, the glancing blow might be another universe. AG 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everyth...@googlegroups.com .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/ca27229c-1519-469d-91b2-0e5701add23a%40googlegroups.com.
>



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread 'Brent Meeker' via Everything List



On 3/8/2020 3:56 AM, Bruce Kellett wrote:
On Sun, Mar 8, 2020 at 7:46 PM Russell Standish wrote:

On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
> On Sun, Mar 8, 2020 at 5:32 PM Russell Standish wrote:
>
>     On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
>
>     > That is, in fact, false. It does not generate the same strings as
>     > flipping a coin in a single world. Sure, each of the strings in
>     > Everett could have been obtained from coin flips -- but then the
>     > probability of a sequence of 10,000 heads is very low, whereas in
>     > many-worlds you are guaranteed that one observer will obtain this
>     > sequence. There is a profound difference between the two cases.
>
>     You have made this statement multiple times, and it appears to be at
>     the heart of our disagreement. I don't see what the profound
>     difference is.
>
>     If I select a subset from the set of all strings of length N, for
>     example all strings with exactly N/3 1s, then I get a quite specific
>     value for the proportion of the whole that match it:
>
>     binom(N, N/3) * 2^{-N} = p.
>
>     Now this number p will also equal the probability of seeing exactly
>     N/3 coins land head up when N coins are tossed.
>
>     What is the profound difference?
>
> Take a more extreme case. The probability of getting 1000 heads on 1000
> coin tosses is 1/2^1000. If you measure the spin components of an
> ensemble of identical spin-half particles, there will certainly be one
> observer who sees 1000 spin-up results. That is the difference -- the
> difference between a probability of 1/2^1000 and a probability of one.
>
> In fact, in a recent podcast by Sean Carroll (that has been discussed on
> the list previously), he makes the statement that this rare event (with
> probability p = 1/2^1000) certainly occurs. In other words, he is
> claiming that the probability is both 1/2^1000 and one. That this is a
> flat contradiction appears to escape him. The difference in
> probabilities between coin tosses and Everettian measurements couldn't
> be more stark.

That is because you're talking about different things. The rare event
that 1 in 2^1000 observers see certainly occurs. In this case
certainty does not refer to probability 1, as no probabilities are
applicable in that 3p picture. Probabilities in the MWI sense refer
to what an observer will see next; it is a 1p concept.

And in that 1p context, I do not see any difference in how probabilities
are interpreted, nor in their numerical values.

Perhaps Carroll is being sloppy. If so, I would think that could be
forgiven.



Yes, I think Carroll's comment was just sloppy. The trouble is 
that this sort of sloppiness permeates all of these discussions. As 
you say, probability really has meaning only in the 1p picture. So the 
guy who sees 1000 spin-ups in the 1000 trials will conclude that the 
probability of spin-up is very close to one. That is why it makes 
sense to say that the probability is one. The fact that this one guy 
sees this is certain in Many-worlds (This may be another meaning of 
probability, but an event that is certain to happen is usually 
referred to as having probability one.).


The trouble comes when you use the same term 'probability' to refer to 
the fact that this guy is just one of the 2^N guys who are generated 
in this experiment. The fact that he may be in the minority does not 
alter the fact that he exists, and infers a probability close to one 
for spin-up. The 3p picture here is to consider that this guy is just 
chosen at random from a uniform distribution over all 2^N copies at 
the end of the experiment. And I find it difficult to give any 
sensible meaning to that idea. No one is selecting anything at random 
from the 2^N copies because that is not how the copies come about 
-- it is all completely deterministic.


The guy who gets the 1000 spin-ups infers a probability close to one, 
so he is entitled to think that the probability of getting an 
approximately even number of ups and downs is very small: 
eps^1000*(1-eps)^1000 for eps very close to zero. Similarly, guys who 
see approximately equal numbers of up and down infer a probability 
close to 0.5. So they are entitled to conclude that the probability of 
seeing all spin-up is vanishingly small, namely, 1/2^1000.


The main point I have been trying to make is that this is true 
whatever the ratio of ups to downs is in the data that any individual 
observes. Everyone concludes that their observed relative frequency is 
a good 

Re: The Fermi Paradox

2020-03-08 Thread spudboy100 via Everything List
The cosmos is of course bigger (allegedly) than the Hubble Volume we have so 
far detected. So, I will just kick the can down the road and suspect that we 
may be the first to emerge round these parts. The Milky Way and the Local 
Group. Sounds like a band, huh? My continuing attitude is that while you guys 
get deep in the weeds of mathematics, physics, and cosmology, I simply sort for 
what might be useful to help our species thrive. Quantum computing seems to be 
one of these. 


-Original Message-
From: John Clark 
To: everything-list@googlegroups.com
Sent: Sat, Mar 7, 2020 3:13 pm
Subject: Re: The Fermi Paradox



On Sat, Mar 7, 2020 at 12:30 PM Lawrence Crowell 
 wrote:


after the so-called Hadean period of mass bombardment, life emerged within a few 
hundred million years. Given that time periods tend to telescope the earlier you 
go in geological history, this is fairly quick. 

Given that we have only one example to work with there is no way of knowing if 
that is typical or not. Life could have evolved freakishly quickly on Earth 
because in at least one way we know the example is not typical: not only did it 
eventually produce life, it eventually produced intelligent life. And 
bacteria-only planets must far outnumber amoeba planets, and amoeba planets 
must far outnumber worm planets, and worm planets must far outnumber monkey planets, and 
monkey planets must far outnumber planets with beings who make radio 
telescopes. I think the most obvious explanation for the Fermi Paradox is 
probably the correct one, we're the first, after all somebody has to be.
 John K Clark



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread 'Brent Meeker' via Everything List



On 3/8/2020 12:08 AM, Bruce Kellett wrote:
On Sun, Mar 8, 2020 at 6:14 PM Russell Standish wrote:


On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> On Thu, Mar 5, 2020 at 5:26 PM Russell Standish wrote:
>
>     But a very large proportion of them (→1 as N→∞) will report
being
>     within ε (called a confidence interval) of 50% for any given ε>0
>     chosen at the outset of the experiment. This is simply the
law of
>     large numbers theorem. You can't focus on the vanishingly small
>     population that lie outside the confidence interval.
>
>
> This is wrong.

Them's fighting words. Prove it!


I have, in other posts and below.

> In the binary situation where both outcomes occur for every
> trial, there are 2^N binary sequences for N repetitions of the
experiment. This
> set of binary sequences exhausts the possibilities, so the same
sequence is
> obtained for any two-component initial state -- regardless of
the amplitudes.

> You appear to assume that the natural probability in this
situation is p = 0.5
> and, what is more, your appeal to the law of large numbers
applies only for
> single-world probabilities, in which there is only one outcome
on each trial.

I didn't mention probability once in the above paragraph, not even
implicitly. I used the term "proportion". That the proportion will be
equal to the probability in a single universe case is a frequentist
assumption, and should be uncontroversial, but goes beyond what I
stated above.


Sure. But the proportion of the 2^N sequences that exhibit any 
particular p value (proportion of 1's) decreases with N.


> In order to infer a probability of p = 0.5, your branch data
must have
> approximately equal numbers of zeros and ones. The number of
branches with
> equal numbers of zeros and ones is given by the binomial
coefficient. For large
> even N = 2M trials, this coefficient is N!/(M! M!). Using the Stirling
> approximation to the factorial for large N, this goes as
2^N/sqrt(N) (within
> factors of order one). Since there are 2^N sequences, the
proportion with n_0 =
> n_1 vanishes as 1/sqrt(N) for N large.

I wasn't talking about that. I was talking about the proportion of
sequences whose ratio of 0 bits to 1 bits lie within ε of 0.5, rather
than the proportion of sequences that have exactly equal 0 or 1
bits. That proportion grows as sqrt N.



No, it falls as 1/sqrt(N). Remember, the confidence interval depends 
on the standard deviation, and that falls as 1/sqrt(N). Consequently, 
the deviations from equal numbers of zeros and ones that keep p within 
the CI of 0.5 must decline as N becomes large.



> Now sequences with small departures from equal numbers will
still give
> probabilities within the confidence interval of p = 0.5. But
this confidence
> interval also shrinks as 1/sqrt(N) as N increases, so these
additional
> sequences do not contribute a growing number of cases giving p ~
0.5 as N
> increases.

The confidence interval ε is fixed.


No, it is not. The width of, say the 95% CI, decreases with N since 
the standard deviation falls as 1/sqrt(N).


Right.  But that's just a different way of saying the density of results 
concentrates around the expected value.  The CI is constructed to contain 
a certain fraction of results, but its width contracts as 1/sqrt(N).  Or 
if you take a fixed deviation interval around the expected value, e.g. 
0.333 ± 0.01, then the proportion within that interval goes to 1 as N->oo.


Brent
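The two scalings under dispute in this exchange can be checked with exact binomial counts (a sketch I am adding for illustration; the function names are my own): the fraction of length-N bit strings with exactly equal counts of zeros and ones falls off like 1/sqrt(N), while the fraction within any fixed ε of 50% climbs toward 1.

```python
from math import comb, sqrt

def fraction_exactly_half(n):
    """Fraction of the 2^n binary sequences with exactly n/2 ones (n even)."""
    return comb(n, n // 2) / 2 ** n

def fraction_within(n, eps):
    """Fraction of the 2^n sequences whose proportion of ones lies
    within eps of 0.5 -- a fixed interval, not a shrinking one."""
    return sum(comb(n, k) for k in range(n + 1)
               if abs(k / n - 0.5) <= eps) / 2 ** n

for n in (100, 400, 1600):
    # The exact-equality fraction falls like sqrt(2/(pi*n)), so the
    # rescaled value below stays near sqrt(2/pi) ~ 0.798, while the
    # fixed-eps fraction climbs toward 1 (law of large numbers).
    print(n, fraction_exactly_half(n) * sqrt(n), fraction_within(n, 0.05))
```

Both statements in the thread are therefore compatible: the fraction at *exactly* 0.5 shrinks as 1/sqrt(N), but the fraction within any fixed ε of 0.5 goes to 1.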



Re: Parallel Worlds Probably Exist. Here’s Why

2020-03-08 Thread spudboy100 via Everything List
You must be a fan of Kurt Godel then? He was big time into a spinning cosmos, 
and yeah, a 7-sphere. 


-Original Message-
From: Alan Grayson 
To: Everything List 
Sent: Sat, Mar 7, 2020 6:29 pm
Subject: Re: Parallel Worlds Probably Exist. Here’s Why



On Friday, March 6, 2020 at 4:17:34 PM UTC-7, John Clark wrote:
This video just went online, I thought it was excellent: 
Parallel Worlds Probably Exist. Here’s Why

John K Clark

As you know, I believe our universe is shaped like a hyper-sphere. With this in 
mind, perhaps the best tentative evidence for other worlds is the fact that 
distant galaxies are moving in unison in the direction of what is hypothesized 
as "The Great Attractor". But maybe what we're observing is the rotation of our 
universe. Rotations are caused by glancing blows, and in this case, the 
glancing blow might be another universe. AG


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread smitra

On 08-03-2020 11:56, Bruce Kellett wrote:

On Sun, Mar 8, 2020 at 7:46 PM Russell Standish wrote:

On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:

On Sun, Mar 8, 2020 at 5:32 PM Russell Standish wrote:

On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:

That is, in fact, false. It does not generate the same strings as
flipping a coin in a single world. Sure, each of the strings in Everett
could have been obtained from coin flips -- but then the probability of
a sequence of 10,000 heads is very low, whereas in many-worlds you are
guaranteed that one observer will obtain this sequence. There is a
profound difference between the two cases.

You have made this statement multiple times, and it appears to be at
the heart of our disagreement. I don't see what the profound
difference is.

If I select a subset from the set of all strings of length N, for
example all strings with exactly N/3 1s, then I get a quite specific
value for the proportion of the whole that match it:

binom(N, N/3) * 2^{-N} = p.

Now this number p will also equal the probability of seeing exactly
N/3 coins land head up when N coins are tossed.

What is the profound difference?

Take a more extreme case. The probability of getting 1000 heads on
1000 coin tosses is 1/2^1000. If you measure the spin components of an
ensemble of identical spin-half particles, there will certainly be one
observer who sees 1000 spin-up results. That is the difference -- the
difference between a probability of 1/2^1000 and a probability of one.

In fact, in a recent podcast by Sean Carroll (that has been discussed
on the list previously), he makes the statement that this rare event
(with probability p = 1/2^1000) certainly occurs. In other words, he is
claiming that the probability is both 1/2^1000 and one. That this is a
flat contradiction appears to escape him. The difference in
probabilities between coin tosses and Everettian measurements couldn't
be more stark.

That is because you're talking about different things. The rare event
that 1 in 2^1000 observers see certainly occurs. In this case
certainty does not refer to probability 1, as no probabilities are
applicable in that 3p picture. Probabilities in the MWI sense refer
to what an observer will see next; it is a 1p concept.

And in that 1p context, I do not see any difference in how
probabilities are interpreted, nor in their numerical values.

Perhaps Carroll is being sloppy. If so, I would think that could be
forgiven.


Yes, I think Carroll's comment was just sloppy. The trouble is
that this sort of sloppiness permeates all of these discussions. As
you say, probability really has meaning only in the 1p picture. So the
guy who sees 1000 spin-ups in the 1000 trials will conclude that the
probability of spin-up is very close to one. That is why it makes
sense to say that the probability is one. The fact that this one guy
sees this is certain in Many-worlds (This may be another meaning of
probability, but an event that is certain to happen is usually
referred to as having probability one.).

The trouble comes when you use the same term 'probability' to refer to
the fact that this guy is just one of the 2^N guys who are generated
in this experiment. The fact that he may be in the minority does not
alter the fact that he exists, and infers a probability close to one
for spin-up. The 3p picture here is to consider that this guy is just
chosen at random from a uniform distribution over all 2^N copies at
the end of the experiment. And I find it difficult to give any
sensible meaning to that idea. No one is selecting anything at random
from the 2^N copies because that is not how the copies come about
-- it is all completely deterministic.

The guy who gets the 1000 spin-ups infers a probability close to one,
so he is entitled to think that the probability of getting an
approximately even number of ups and downs is very small:
eps^1000*(1-eps)^1000 for eps very close to zero. Similarly, guys who
see approximately equal numbers of up and down infer a probability
close to 0.5. So they are entitled to conclude that the probability of
seeing all spin-up is vanishingly small, namely, 1/2^1000.

The main point I have been trying to make is that this is true
whatever the ratio of ups to downs is in the data that any individual
observes. Everyone concludes that their observed relative frequency is
a good indicator of the actual probability, and that other ratios of
up:down are extremely unlikely. This is a simple consequence of the
fact that probability is, as you say, a 1p notion, and can only be
estimated from the actual data that an individual obtains. Since
people get different data, they get different estimates of the
probability, covering the entire range [0,1]; no 3p notion of
probability is available -- probabilities do not make sense in the
Everettian case when all outcomes occur. This is the basic argument
that Kent makes in arXiv:0905.0624.

Re: Why physics has become fantasy fiction

2020-03-08 Thread Philip Thrift


On Sunday, March 8, 2020 at 5:32:15 AM UTC-5, Philip Thrift wrote:
>
>
>
> On Saturday, March 7, 2020 at 10:39:09 PM UTC-6, Brent wrote:
>>
>>
>>
>> On 3/7/2020 8:17 PM, Bruce Kellett wrote:
>>
>> On Sun, Mar 8, 2020 at 3:10 PM 'Brent Meeker' via Everything List <
>> everyth...@googlegroups.com> wrote:
>>
>>> On 3/7/2020 7:38 PM, Alan Grayson wrote:
>>>
>>>
>>> I think the Transactional Interpretation has additional problems, such 
>>> as forward (or backward?) in time signaling. 
>>>
>>>
>>> Ruth Kastner has tried to fix that by postulating a possibility space 
>>> where the offer wave elicits the answer wave; so it's not in spacetime.
>>>
>>
>>
>> Possibility space sounds very much like magical space, where anything you 
>> want to happen can happen.
>>
>>
>> No, it's like the wave function where possibilities are encoded with 
>> amplitudes.  The essential idea isn't the possibility space, it's the idea 
>> that "events" which are interactions that transfer energy really happen.  
>> This takes the place of decoherence and collapse of the wave function.
>>
>> Brent
>>
>
>
> The nature of the probability space is that it is based on complex numbers 
> q from the unit circle group |q| = 1 rather than real numbers p in [0,1].
>
> Rather than an event weighted by a real number, an event is a set of 
> counterevents - each c.e. weighted by a complex number (which are then 
> summed and the norm is taken) to get the real number weight of the event.
>
> @philipthrift
>

Actually there's a list of references that's been collected:

http://physics.bu.edu/~youssef/quantum/quantum_refs.html

@philipthrift 
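The rule described above (each event is a set of counterevents with complex weights, which are summed before the norm is taken) can be sketched in a few lines. This is my own illustrative reading, with "norm" taken as the squared modulus and the counterevent weights invented purely for the example:

```python
import cmath

def event_weight(counterevent_amplitudes):
    """Sum the complex counterevent weights, then take the squared
    modulus of the sum -- one reading of 'summed and the norm is taken'."""
    return abs(sum(counterevent_amplitudes)) ** 2

# Two counterevents of equal magnitude: in phase they reinforce,
# in opposite phase they cancel entirely -- behaviour that no
# real-valued, non-negative weighting of events can reproduce.
a = 1 / cmath.sqrt(2)
print(event_weight([a, a]))    # in phase: reinforcement
print(event_weight([a, -a]))   # opposite phase: complete cancellation, 0.0
```

The cancellation case is the point: summing complex weights before taking the real-valued norm allows interference between counterevents, which is what distinguishes this scheme from ordinary real probabilities in [0,1].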



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Sun, Mar 8, 2020 at 7:59 PM Russell Standish 
wrote:

> On Sun, Mar 08, 2020 at 07:08:25PM +1100, Bruce Kellett wrote:
> > On Sun, Mar 8, 2020 at 6:14 PM Russell Standish 
> wrote:
> >
> > On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> > > On Thu, Mar 5, 2020 at 5:26 PM Russell Standish <
> li...@hpcoders.com.au>
> > wrote:
> > >
> > > But a very large proportion of them (→1 as N→∞) will report
> being
> > > within ε (called a confidence interval) of 50% for any given
> ε>0
> > > chosen at the outset of the experiment. This is simply the law
> of
> > > large numbers theorem. You can't focus on the vanishingly small
> > > population that lie outside the confidence interval.
> > >
> > >
> > > This is wrong.
> >
> > Them's fighting words. Prove it!
> >
> >
> > I have, in other posts and below.
>
> You didn't do it below, that's why I said prove it. What you wrote
> below had little bearing on what I wrote.
>

I outlined the proof in my reply to your other post tonight. Besides, the
proof is in Kent's paper arXiv:0905.0624.

> > In the binary situation where both outcomes occur for every
> > > trial, there are 2^N binary sequences for N repetitions of the
> > experiment. This
> > > set of binary sequences exhausts the possibilities, so the same
> sequence
> > is
> > > obtained for any two-component initial state -- regardless of the
> > amplitudes.
> >
> > > You appear to assume that the natural probability in this
> situation is p
> > = 0.5
> > > and, what is more, your appeal to the law of large numbers applies
> only
> > for
> > > single-world probabilities, in which there is only one outcome on
> each
> > trial.
> >
> > I didn't mention probability once in the above paragraph, not even
> > implicitly. I used the term "proportion". That the proportion will be
> > equal to the probability in a single universe case is a frequentist
> > assumption, and should be uncontroversial, but goes beyond what I
> > stated above.
> >
> >
> > Sure. But the proportion of the 2^N sequences that exhibit any
> particular p
> > value (proportion of 1's) decreases with N.
> >
>
> So what?

You claim that the proportion reporting p ~ 0.5 goes to one as N → ∞.
That is manifestly false. The absolute number increases with the number of
trials, but the proportion of the 2^N copies at the end of the N trials
decreases as 1/sqrt(N).
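The two quantities in dispute here can be checked numerically. A minimal Python sketch (my own illustration, assuming all 2^N sequences are counted with equal weight) computes both the proportion of sequences with exactly equal counts of 0s and 1s, and the proportion whose ratio of 1s lies within a fixed eps of 0.5:

```python
import math

def frac_exact(N):
    # Proportion of the 2^N binary strings with exactly N/2 ones.
    return math.comb(N, N // 2) / 2**N

def frac_within(N, eps=0.01):
    # Proportion whose fraction of ones lies within eps of 0.5.
    lo = math.ceil(N * (0.5 - eps))
    hi = math.floor(N * (0.5 + eps))
    return sum(math.comb(N, k) for k in range(lo, hi + 1)) / 2**N

for N in (100, 1000, 10000):
    print(N, frac_exact(N), frac_within(N))
```

The exact-equality proportion shrinks like 1/sqrt(N), while the fixed-eps proportion grows toward 1, which is why the two sides of this exchange keep talking past each other: they are tracking different quantities.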


> > In order to infer a probability of p = 0.5, your branch data must
> have
> > > approximately equal numbers of zeros and ones. The number of
> branches
> > with
> > > equal numbers of zeros and ones is given by the binomial
> coefficient. For
> > large
> > > even N = 2M trials, this coefficient is N!/M!*M!. Using the
> Stirling
> > > approximation to the factorial for large N, this goes as
> 2^N/sqrt(N)
> > (within
> > > factors of order one). Since there are 2^N sequences, the
> proportion with
> > n_0 =
> > > n_1 vanishes as 1/sqrt(N) for N large.
>


This is the nub of the proof you asked for.
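The Stirling estimate quoted above is easy to verify directly (a sketch; math.comb gives the exact central binomial coefficient, and the asymptotic form sqrt(2/(pi*N)) is the standard expansion):

```python
import math

# Compare the exact proportion C(N, N/2) / 2^N with the Stirling-based
# asymptotic sqrt(2 / (pi * N)); the ratio approaches 1 as N grows.
for N in (10, 100, 1000, 10000):
    exact = math.comb(N, N // 2) / 2**N
    asympt = math.sqrt(2 / (math.pi * N))
    print(N, exact, asympt, exact / asympt)
```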

> I wasn't talking about that. I was talking about the proportion of
> > sequences whose ratio of 0 bits to 1 bits lie within ε of 0.5, rather
> > than the proportion of sequences that have exactly equal 0 or 1
> > bits. That proportion grows as sqrt N.
> >
> >
> >
> > No, it falls as 1/sqrt(N). Remember, the confidence interval depends on
> > the standard deviation, and that falls as 1/sqrt(N). Consequently,
> > deviations from equal numbers of zeros and ones, for p to remain within
> > the CI of 0.5, must decline as N becomes large.
> >
>
> The value ε defined above is fixed at the outset. It is independent of
> N. Maybe I incorrectly called it a confidence interval, although it is
> surely related.
>

Calling it a confidence interval certainly threw me. If it is meant to be a
fixed interval independent of N, then OK. But that is not a useful concept.
For any fixed interval around p = 0.5, relative frequencies at the limits
of that interval will eventually estimate p values for which the CI does
not include 0.5. So they can no longer infer that the probability is 0.5.
Since the CI decreases with N, the proportion of the total who infer any
particular value for the probability decreases with N.
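The shrinkage being appealed to here is the usual frequentist one: the 95% confidence half-width for an estimated proportion falls as 1/sqrt(N). A quick sketch, assuming the normal approximation with p = 0.5:

```python
import math

# Approximate 95% confidence half-width for an estimated proportion:
# 1.96 * sqrt(p * (1 - p) / N), which falls as 1/sqrt(N).
p = 0.5
for N in (100, 10000, 1000000):
    half_width = 1.96 * math.sqrt(p * (1 - p) / N)
    print(N, round(half_width, 6))
```

So a fixed interval eps and the N-dependent confidence interval are genuinely different objects, which is the point of contention in this exchange.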



> The number of bitstrings having a ratio of 0 to 1 within ε of 0.5
> grows as √N.
>
> IIRC, a confidence interval is the interval of a fixed proportion, ie we
> can be 95% confident that strings will have a ratio between 49.5% and
> 51.5%. That interval (49.5% to 51.5%) will shrink as 1/√N for fixed
> confidence level (95%).
>
> >
> >
> > > Now sequences with small departures from equal numbers will still
> give
> > > probabilities within the confidence interval of p = 0.5. But this
> > confidence
> > > interval also shrinks as 1/sqrt(N) as N increases, so these
> additional
> 

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Sun, Mar 8, 2020 at 7:46 PM Russell Standish 
wrote:

> On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
> > On Sun, Mar 8, 2020 at 5:32 PM Russell Standish 
> wrote:
> >
> > On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
> >
> > > That is, in fact, false. It does not generate the same strings as
> > flipping a
> > > coin in single world. Sure, each of the strings in Everett could
> have
> > been
> > > obtained from coin flips -- but then the probability of a sequence
> of
> > 10,000
> > > heads is very low, whereas in many-worlds you are guaranteed that
> one
> > observer
> > > will obtain this sequence. There is a profound difference between
> the two
> > > cases.
> >
> > You have made this statement multiple times, and it appears to be at
> > the heart of our disagreement. I don't see what the profound
> > difference is.
> >
> > If I select a subset from the set of all strings of length N, for
> example
> > all strings with exactly N/3 1s, then I get a quite specific value
> for the
> > proportion of the whole that match it:
> >
> > C(N, N/3) * 2^{-N} = p.
> >
> > Now this number p will also equal the probability of seeing exactly
> > N/3 coins land head up when N coins are tossed.
> >
> > What is the profound difference?
> >
> >
> >
> > Take a more extreme case. The probability of getting 1000 heads on 1000
> coin
> > tosses is 1/2^1000.
> > If you measure the spin components of an ensemble of identical spin-half
> > particles, there will certainly be one observer who sees 1000 spin-up
> results.
> > That is the difference -- the difference between probability of 1/2^1000
> and a
> > probability of one.
> >
> > In fact in a recent podcast by Sean Carroll (that has been discussed on
> the
> > list previously), he makes the statement that this rare event (with
> probability
> > p = 1/2^1000) certainly occurs. In other words, he is claiming  that the
> > probability is both 1/2^1000 and one. That this is a flat contradiction
> appears
> > to escape him. The difference in probabilities between coin tosses and
> > Everettian measurements couldn't be more stark.
>
> That is because you're talking about different things. The rare event
> that 1 in 2^1000 observers see certainly occurs. In this case
> certainty does not refer to probability 1, as no probabilities are
> applicable in that 3p picture. Probabilities in the MWI sense refers
> to what an observer will see next, it is a 1p concept.
>
> And that 1p context, I do not see any difference in how probabilities
> are interpreted, nor in their numerical values.
>
> Perhaps Carroll is being sloppy. If so, I would think that could be
> forgiven.
>


Yes, I think Carroll's comment was just sloppy. The trouble is that
this sort of sloppiness permeates all of these discussions. As you say,
probability really has meaning only in the 1p picture. So the guy who sees
1000 spin-ups in the 1000 trials will conclude that the probability of
spin-up is very close to one. That is why it makes sense to say that the
probability is one. The fact that this one guy sees this is certain in
Many-worlds (This may be another meaning of probability, but an event that
is certain to happen is usually referred to as having probability one.).

The trouble comes when you use the same term 'probability' to refer to the
fact that this guy is just one of the 2^N guys who are generated in this
experiment. The fact that he may be in the minority does not alter the fact
that he exists, and infers a probability close to one for spin-up. The 3p
picture here is to consider that this guy is just chosen at random from a
uniform distribution over all 2^N copies at the end of the experiment. And
I find it difficult to give any sensible meaning to that idea. No one is
selecting anything at random from the 2^N copies, because that is not how
the copies come about -- it is all completely deterministic.

The guy who gets the 1000 spin-ups infers a probability close to one, so he
is entitled to think that the probability of getting an approximately even
number of ups and downs is very small: of order eps^500*(1-eps)^500 for eps
very close to zero. Similarly, guys who see approximately equal numbers of
up and down infer a probability close to 0.5. So they are entitled to
conclude that the probability of seeing all spin-up is vanishingly small,
namely, 1/2^1000.
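A rough numeric sketch of the magnitudes each observer assigns to the other's data, working in log10 to avoid floating-point underflow (eps = 10^-3 is an illustrative choice of mine, not a value from the thread):

```python
import math

N = 1000
# log10 probability of seeing all 1000 spin-ups when p(up) = 0.5.
log10_all_up = -N * math.log10(2)

# log10 probability of an exactly even split when p(down) = eps.
eps = 1e-3
log10_even_split = (math.log10(math.comb(N, N // 2))
                    + (N // 2) * math.log10(eps)
                    + (N // 2) * math.log10(1 - eps))

print(log10_all_up)      # about -301
print(log10_even_split)  # about -1200
```

Each observer deems the other's data astronomically improbable, which is exactly the symmetry the paragraph above describes.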

The main point I have been trying to make is that this is true whatever the
ratio of ups to downs is in the data that any individual observes. Everyone
concludes that their observed relative frequency is a good indicator of the
actual probability, and that other ratios of up:down are extremely
unlikely. This is a simple consequence of the fact that probability is, as
you say, a 1p notion, and can only be estimated from the actual data that
an individual obtains. Since people get different data, they get different
estimates of the probability, covering the entire range [0,1]; no 3p notion
of probability is available.

Re: Why physics has become fantasy fiction

2020-03-08 Thread Philip Thrift


On Saturday, March 7, 2020 at 10:39:09 PM UTC-6, Brent wrote:
>
>
>
> On 3/7/2020 8:17 PM, Bruce Kellett wrote:
>
> On Sun, Mar 8, 2020 at 3:10 PM 'Brent Meeker' via Everything List <
> everyth...@googlegroups.com > wrote:
>
>> On 3/7/2020 7:38 PM, Alan Grayson wrote:
>>
>>
>> I think the Transactional Interpretation has additional problems, such as 
>> forward (or backward?) in time signaling. 
>>
>>
>> Ruth Kastner has tried to fix that by postulating a possibility space 
>> where the offer wave elicits the answer wave; so it's not in spacetime.
>>
>
>
> Possibility space sounds very much like magical space, where anything you 
> want to happen can happen.
>
>
> No, it's like the wave function where possibilities are encoded with 
> amplitudes.  The essential idea isn't the possibility space, it's the idea 
> that "events" which are interactions that transfer energy really happen.  
> This takes the place of decoherence and collapse of the wave function.
>
> Brent
>


The nature of the probability space is that it is based on complex numbers 
q from the unit circle group |q| = 1 rather than real numbers p in [0,1].

Rather than an event being weighted by a real number, an event is a set of 
counterevents, each counterevent weighted by a complex number; these are 
then summed and the norm taken to get the real-number weight of the event.

@philipthrift



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Russell Standish
On Sun, Mar 08, 2020 at 07:08:25PM +1100, Bruce Kellett wrote:
> On Sun, Mar 8, 2020 at 6:14 PM Russell Standish  wrote:
> 
> On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> > On Thu, Mar 5, 2020 at 5:26 PM Russell Standish 
> wrote:
> >
> >     But a very large proportion of them (→1 as N→∞) will report being
> >     within ε (called a confidence interval) of 50% for any given ε>0
> >     chosen at the outset of the experiment. This is simply the law of
> >     large numbers theorem. You can't focus on the vanishingly small
> >     population that lie outside the confidence interval.
> >
> >
> > This is wrong.
> 
> Them's fighting words. Prove it!
> 
> 
> I have, in other posts and below.

You didn't do it below, that's why I said prove it. What you wrote
below had little bearing on what I wrote.

> 
> 
> > In the binary situation where both outcomes occur for every
> > trial, there are 2^N binary sequences for N repetitions of the
> experiment. This
> > set of binary sequences exhausts the possibilities, so the same sequence
> is
> > obtained for any two-component initial state -- regardless of the
> amplitudes.
> 
> > You appear to assume that the natural probability in this situation is p
> = 0.5
> > and, what is more, your appeal to the law of large numbers applies only
> for
> > single-world probabilities, in which there is only one outcome on each
> trial.
> 
> I didn't mention probability once in the above paragraph, not even
> implicitly. I used the term "proportion". That the proportion will be
> equal to the probability in a single universe case is a frequentist
> assumption, and should be uncontroversial, but goes beyond what I
> stated above.
> 
> 
> Sure. But the proportion of the 2^N sequences that exhibit any particular p
> value (proportion of 1's) decreases with N.
> 

So what?

> 
> > In order to infer a probability of p = 0.5, your branch data must have
> > approximately equal numbers of zeros and ones. The number of branches
> with
> > equal numbers of zeros and ones is given by the binomial coefficient. 
> For
> large
> > even N = 2M trials, this coefficient is N!/M!*M!. Using the Stirling
> > approximation to the factorial for large N, this goes as 2^N/sqrt(N)
> (within
> > factors of order one). Since there are 2^N sequences, the proportion 
> with
> n_0 =
> > n_1 vanishes as 1/sqrt(N) for N large. 
> 
> I wasn't talking about that. I was talking about the proportion of
> sequences whose ratio of 0 bits to 1 bits lie within ε of 0.5, rather
> than the proportion of sequences that have exactly equal 0 or 1
> bits. That proportion grows as sqrt N.
> 
> 
> 
> No, it falls as 1/sqrt(N). Remember, the confidence interval depends on the
> standard deviation, and that falls as 1/sqrt(N). Consequently, deviations from
> equal numbers of zeros and ones, for p to remain within the CI of 0.5, must
> decline as N becomes large.
>

The value ε defined above is fixed at the outset. It is independent of
N. Maybe I incorrectly called it a confidence interval, although it is
surely related. 

The number of bitstrings having a ratio of 0 to 1 within ε of 0.5
grows as √N.

IIRC, a confidence interval is the interval of a fixed proportion, ie we can be 
95% confident that strings will have a ratio between 49.5% and 51.5%. That 
interval (49.5% to 51.5%) will shrink as 1/√N for fixed confidence level 
(95%). 

> 
> 
> > Now sequences with small departures from equal numbers will still give
> > probabilities within the confidence interval of p = 0.5. But this
> confidence
> > interval also shrinks as 1/sqrt(N) as N increases, so these additional
> > sequences do not contribute a growing number of cases giving p ~ 0.5 as 
> N
> > increases.
> 
> The confidence interval ε is fixed.
> 
> 
> No, it is not. The width of, say the 95% CI, decreases with N since the
> standard deviation falls as 1/sqrt(N).

Which only demonstrates my point. An increasing number of strings will
lie in the fixed interval ε. I apologise if I used the term "confidence
interval" in a nonstandard way.


-- 


Dr Russell StandishPhone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
  http://www.hpcoders.com.au




Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Russell Standish
On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
> On Sun, Mar 8, 2020 at 5:32 PM Russell Standish  wrote:
> 
> On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
> 
> > That is, in fact, false. It does not generate the same strings as
> flipping a
> > coin in single world. Sure, each of the strings in Everett could have
> been
> > obtained from coin flips -- but then the probability of a sequence of
> 10,000
> > heads is very low, whereas in many-worlds you are guaranteed that one
> observer
> > will obtain this sequence. There is a profound difference between the 
> two
> > cases.
> 
> You have made this statement multiple times, and it appears to be at
> the heart of our disagreement. I don't see what the profound
> difference is.
> 
> If I select a subset from the set of all strings of length N, for example
> all strings with exactly N/3 1s, then I get a quite specific value for the
> proportion of the whole that match it:
> 
> C(N, N/3) * 2^{-N} = p.
> 
> Now this number p will also equal the probability of seeing exactly
> N/3 coins land head up when N coins are tossed.
> 
> What is the profound difference?
> 
> 
> 
> Take a more extreme case. The probability of getting 1000 heads on 1000 coin
> tosses is 1/2^1000.
> If you measure the spin components of an ensemble of identical spin-half
> particles, there will certainly be one observer who sees 1000 spin-up results.
> That is the difference -- the difference between probability of 1/2^1000 and a
> probability of one.
> 
> In fact in a recent podcast by Sean Carroll (that has been discussed on the
> list previously), he makes the statement that this rare event (with 
> probability
> p = 1/2^1000) certainly occurs. In other words, he is claiming  that the
> probability is both 1/2^1000 and one. That this is a flat contradiction 
> appears
> to escape him. The difference in probabilities between coin tosses and
> Everettian measurements couldn't be more stark.

That is because you're talking about different things. The rare event
that 1 in 2^1000 observers see certainly occurs. In this case
certainty does not refer to probability 1, as no probabilities are
applicable in that 3p picture. Probabilities in the MWI sense refers
to what an observer will see next, it is a 1p concept.

And that 1p context, I do not see any difference in how probabilities
are interpreted, nor in their numerical values.

Perhaps Carroll is being sloppy. If so, I would think that could be forgiven.


-- 


Dr Russell StandishPhone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
  http://www.hpcoders.com.au




Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Sun, Mar 8, 2020 at 6:14 PM Russell Standish 
wrote:

> On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> > On Thu, Mar 5, 2020 at 5:26 PM Russell Standish 
> wrote:
> >
> > But a very large proportion of them (→1 as N→∞) will report being
> > within ε (called a confidence interval) of 50% for any given ε>0
> > chosen at the outset of the experiment. This is simply the law of
> > large numbers theorem. You can't focus on the vanishingly small
> > population that lie outside the confidence interval.
> >
> >
> > This is wrong.
>
> Them's fighting words. Prove it!
>

I have, in other posts and below.

> In the binary situation where both outcomes occur for every
> > trial, there are 2^N binary sequences for N repetitions of the
> experiment. This
> > set of binary sequences exhausts the possibilities, so the same sequence
> is
> > obtained for any two-component initial state -- regardless of the
> amplitudes.
>
> > You appear to assume that the natural probability in this situation is p
> = 0.5
> > and, what is more, your appeal to the law of large numbers applies only
> for
> > single-world probabilities, in which there is only one outcome on each
> trial.
>
> I didn't mention probability once in the above paragraph, not even
> implicitly. I used the term "proportion". That the proportion will be
> equal to the probability in a single universe case is a frequentist
> assumption, and should be uncontroversial, but goes beyond what I
> stated above.
>

Sure. But the proportion of the 2^N sequences that exhibit any particular p
value (proportion of 1's) decreases with N.

> In order to infer a probability of p = 0.5, your branch data must have
> > approximately equal numbers of zeros and ones. The number of branches
> with
> > equal numbers of zeros and ones is given by the binomial coefficient.
> For large
> > even N = 2M trials, this coefficient is N!/M!*M!. Using the Stirling
> > approximation to the factorial for large N, this goes as 2^N/sqrt(N)
> (within
> > factors of order one). Since there are 2^N sequences, the proportion
> with n_0 =
> > n_1 vanishes as 1/sqrt(N) for N large.
>
> I wasn't talking about that. I was talking about the proportion of
> sequences whose ratio of 0 bits to 1 bits lie within ε of 0.5, rather
> than the proportion of sequences that have exactly equal 0 or 1
> bits. That proportion grows as sqrt N.
>


No, it falls as 1/sqrt(N). Remember, the confidence interval depends on the
standard deviation, and that falls as 1/sqrt(N). Consequently, deviations
from equal numbers of zeros and ones, for p to remain within the CI of 0.5,
must decline as N becomes large.


> Now sequences with small departures from equal numbers will still give
> > probabilities within the confidence interval of p = 0.5. But this
> confidence
> > interval also shrinks as 1/sqrt(N) as N increases, so these additional
> > sequences do not contribute a growing number of cases giving p ~ 0.5 as N
> > increases.
>
> The confidence interval ε is fixed.
>

No, it is not. The width of, say the 95% CI, decreases with N since the
standard deviation falls as 1/sqrt(N).

Bruce
