On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> On Thu, Mar 5, 2020 at 5:26 PM Russell Standish <[email protected]> wrote:
> 
> 
>     But a very large proportion of them (→1 as N→∞) will report being
>     within ε (called a confidence interval) of 50% for any given ε>0
>     chosen at the outset of the experiment. This is simply the law of
>     large numbers. You can't focus on the vanishingly small
>     population that lies outside the confidence interval.
> 
> 
> This is wrong.

Them's fighting words. Prove it!

> In the binary situation where both outcomes occur for every trial,
> there are 2^N binary sequences for N repetitions of the experiment.
> This set of binary sequences exhausts the possibilities, so the same
> set is obtained for any two-component initial state -- regardless of
> the amplitudes.

> You appear to assume that the natural probability in this situation is p = 0.5
> and, what is more, your appeal to the law of large numbers applies only for
> single-world probabilities, in which there is only one outcome on each trial.

I didn't mention probability once in the above paragraph, not even
implicitly. I used the term "proportion". That the proportion will be
equal to the probability in the single-universe case is a frequentist
assumption, and should be uncontroversial, but it goes beyond what I
stated above.
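As a single-universe sanity check of that frequentist point, here is a
small Python simulation (my own illustration, not part of the argument;
the seed and run length are arbitrary) showing the observed proportion
of 1s tracking the underlying 50/50 probability:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

N = 100_000
ones = sum(random.randint(0, 1) for _ in range(N))

# In a single-world run of a fair binary source, the observed
# proportion of 1s lands close to the probability 0.5.
print(ones / N)
```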

> 
> In order to infer a probability of p = 0.5, your branch data must have
> approximately equal numbers of zeros and ones. The number of branches
> with equal numbers of zeros and ones is given by the binomial
> coefficient. For large even N = 2M trials, this coefficient is
> N!/(M! M!). Using the Stirling approximation to the factorial for
> large N, this goes as 2^N/sqrt(N) (within factors of order one). Since
> there are 2^N sequences, the proportion with n_0 = n_1 vanishes as
> 1/sqrt(N) for N large.

I wasn't talking about that. I was talking about the proportion of
sequences whose fraction of 1 bits lies within ε of 0.5, rather than
the proportion of sequences that have exactly equal numbers of 0 and 1
bits. That proportion tends to 1 as N → ∞; the number of such sequences
exceeds the number with exactly equal counts by a factor of order
sqrt(N).
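A quick numerical check of the distinction, using exact binomial sums
(my own illustration; ε = 0.05 is an arbitrary choice): the proportion
of length-N bit strings with exactly equal counts shrinks as 1/sqrt(N),
while the proportion within ε of 0.5 climbs towards 1.

```python
from math import ceil, comb, floor

def prop_exact(N):
    """Proportion of the 2^N bit strings with exactly N/2 ones (N even)."""
    return comb(N, N // 2) / 2**N

def prop_within(N, eps):
    """Proportion of the 2^N bit strings whose fraction of ones lies
    within eps of 0.5."""
    lo = ceil((0.5 - eps) * N)
    hi = floor((0.5 + eps) * N)
    return sum(comb(N, k) for k in range(lo, hi + 1)) / 2**N

# Quadrupling N roughly halves the exactly-equal proportion (the
# 1/sqrt(N) law), while the within-eps proportion tends to 1.
for N in (100, 400, 1600):
    print(N, prop_exact(N), prop_within(N, 0.05))
```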


> 
> Now sequences with small departures from equal numbers will still give
> probabilities within the confidence interval of p = 0.5. But this confidence
> interval also shrinks as 1/sqrt(N) as N increases, so these additional
> sequences do not contribute a growing number of cases giving p ~ 0.5 as N
> increases.

The confidence interval ε is fixed: it is chosen before the experiment
and does not shrink as N increases.
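The difference this makes can be computed directly (my own check, using
exact binomial sums; the shrinking interval's half-width 0.5/sqrt(N) is
an arbitrary choice for illustration): with a fixed ε the covered
proportion grows towards 1, whereas an interval shrinking as 1/sqrt(N)
covers a roughly constant proportion.

```python
from math import ceil, comb, floor, sqrt

def prop_in(N, half_width):
    """Proportion of the 2^N bit strings whose fraction of ones is
    within half_width of 0.5."""
    lo = ceil((0.5 - half_width) * N)
    hi = floor((0.5 + half_width) * N)
    return sum(comb(N, k) for k in range(lo, hi + 1)) / 2**N

for N in (100, 400, 1600):
    fixed = prop_in(N, 0.05)               # fixed confidence interval
    shrinking = prop_in(N, 0.5 / sqrt(N))  # interval shrinking ~ 1/sqrt(N)
    print(N, round(fixed, 4), round(shrinking, 4))
```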

> So, again within factors of order unity, the proportion of sequences
> consistent with p = 0.5 decreases without limit as N increases. So it
> is not the case that a very large proportion of the binary strings
> will report p = 0.5. The proportion lying outside the confidence
> interval of p = 0.5 is not vanishingly small -- it grows with N.
> 
> 
> 
>     > The crux of the matter is that all branches are equivalent when
>     > both outcomes occur on every trial, so all observers will infer
>     > that their observed relative frequencies reflect the actual
>     > probabilities. Since there are observers for all possibilities
>     > for p in the range [0,1], and not all can be correct, no sensible
>     > probability value can be assigned to such duplication experiments.
> 
>     I don't see why not. Faced with a coin toss, I would assume a
>     50/50 chance of seeing heads or tails. Faced with a history of 100
>     heads, I might start to investigate the coin for bias, and perhaps
>     by Bayesian arguments give the biased-coin theory greater weight
>     than the theory that I've just experienced a 1 in 2^100 event, but
>     in any case it is just statistics, and it is the same whether all
>     outcomes have been realised or not.
> 
> 
> The trouble with this analogy is that coin tosses are single-world
> events -- there is only one outcome for each toss. Consequently, any
> intuitions about probabilities based on such comparisons are not
> relevant to the Everettian case in which every outcome occurs for
> every toss. Your intuition that it is the same whether all outcomes
> are realised or not is simply mistaken.
> 
> 
>     > The problem is even worse in quantum mechanics, where you
>     > measure a state such as
>     >
>     >      |psi> = a|0> + b|1>.
>     >
>     > When both outcomes occur on every trial, the result of a
>     > sequence of N trials is all possible binary strings of length N
>     > (all 2^N of them). You then notice that this set of all possible
>     > strings is obtained whatever non-zero values of a and b you
>     > assume. The assignment of some probability relation to the
>     > coefficients is thus seen to be meaningless -- all probabilities
>     > occur equally for any non-zero choices of a and b.
>     >
> 
>     For the outcome of any particular binary string, sure. But if we
>     classify the outcome strings -- say ones with a recognisable
>     pattern, or ones that, when replayed through a CD player,
>     reproduce the sounds of Beethoven's ninth -- we find that the
>     overwhelming majority are simply gobbledegook: random data.
> 
> 
> Sure. Out of all possible binary strings of length N, most will
> resemble random noise. Though if N is large enough, all the works of
> Shakespeare will be encoded, in order, and an increasingly large
> number of times as N -> oo. I do not see that this is in any way
> relevant to the issues at hand.
> 
> 
>     And the overwhelming majority of those will
>     have a roughly equal number of 0s and 1s.
> 
> 
> Now that is simply false, as shown above.
> 
> 
>     For each of these categories, there will be a definite probability
>     value, and not all will be 2^-N. For instance, with Beethoven's
>     ninth, the tenor having a cold in the 4th movement doesn't render
>     the music not the ninth. So there will be a set of bitstrings that
>     are recognisably the ninth symphony, and a quite definite
>     probability value.
> 
> 
> 
> There will be a definite number of such strings encoding something
> close to Beethoven's ninth. And they will also all have similar
> proportions of zeros and ones, and thus represent similar
> probabilities. But again, this is not relevant to the underlying
> issue.
> 
> 
> 
>     >     You may counter that the assumption that an observer cannot see all
>     >     outcomes is an extra thing "put in by hand", and you would be right,
>     >     of course. It is not part of the Schroedinger equation. But I would
>     >     strongly suspect that this assumption will be a natural outcome of a
>     >     proper theory of consciousness, if/when we have one. Indeed, I
>     >     highlight it in my book with the name "PROJECTION postulate".
>     >
>     >     This is, of course, at the heart of the 1p/3p distinction - and of
>     >     course the classic taunts and misunderstandings between BM and JC
>     >     (1p-3p confusion).
>     >
>     >
>     > I know that it is a factor of the 1p/3p distinction. My
>     > complaint has frequently been that advocates of the "p = 0.5 is
>     > obvious" school are often guilty of this confusion.
>     >
>     >
>     >     Incidentally, I've started reading Colin Hales's "Revolution
>     >     of Scientific Structure", by a fellow Melburnian and member
>     >     of this list. The interesting proposition is that Colin is
>     >     proposing we're on the verge of a Kuhnian paradigm shift in
>     >     relation to the role of the observer in science, and that
>     >     this sort of misunderstanding is a classic symptom of such a
>     >     shift.
>     >
>     >
>     >
>     > Elimination of the observer from physics was one of the prime
>     > motivations for Everett's 'relative state' idea, given that
>     > 'measurement' and 'the observer' play central roles in variants
>     > of the 'Copenhagen' interpretation.
>     >
> 
>     Yes - but not everyone is pure Everett, even if they're many
>     worlds. I have often argued publicly that the observer needs to be
>     front and centre in ensemble theories. It is also true of Bruno's
>     computationalism - the observer is front and centre, and is
>     characterised as a computation. Maybe it's so, maybe it ain't, but
>     at least the idea gets us out of the morass that the science of
>     consciousness is in.
> 
> 
> This may well be the case. But I have been concerned primarily with
> the possibility of developing some useful notion of probability in
> Everettian quantum mechanics, when every possible outcome occurs (in
> different branches) on every trial. This is relevant to Bruno's
> WM-duplication scenario, but probably not to your plenum consisting of
> every possible bit string -- I only consider all possible bit strings
> of length N in N repetitions of the experiment, which is far fewer
> than all possible bit strings of any length. Consciousness studies are
> outside my brief, and I follow standard physics practice in
> eliminating consideration of the role of the observer -- everything is
> just quantum mechanics in this approach.
> 
> Bruce
> 
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to [email protected].
> To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAFxXSLSJvyZ1ud1KtD6%3DFJm9we%3Dx38dnRZRr4o-Mpmse%2BpZg7g%40mail.gmail.com.

-- 

----------------------------------------------------------------------------
Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders     [email protected]
                      http://www.hpcoders.com.au
----------------------------------------------------------------------------

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20200308071420.GA2903%40zen.
