On 5/3/2010 11:08 AM, Jesse Mazer wrote:



On Sat, May 1, 2010 at 8:26 PM, Rex Allen <rexallen...@gmail.com> wrote:

    On Sat, May 1, 2010 at 7:37 PM, Brent Meeker <meeke...@dslextreme.com> wrote:
    >
    > Sure we can, because part of the meaning of "random", the very thing
    > that lost us the information, includes each square having the same
    > measure for being one of the numbers.  If, for example, we said let
    > all the "1"s come first - in which case we can't hit any "not-1"s,
    > that would be inconsistent with saying we didn't have any information.

    We have two things here.  Random.  And infinite.

    Three things actually.  My random aim.  An infinite row of squares.
    And each square's randomly assigned number lying between 1 and 6.

    If, due to the nature of infinity, there are the same number of 1's
    and not-1's, then I'd expect the probability of hitting a 1 to be
    50-50.

    But, there are also the same number of 1's and even numbers.

    And the same number of evens and odds.

    And the same number of 1's and 2's.

    And the same number of 2's and not-2's.

    AND...I have the *random* aim of the dart that I'm throwing at the
    row.  So it's not a question of saying which number is likely to be
    next in a sequence.  Rather, the question is which number am I likely
    to hit on this infinite row of squares.

    So, I think we have zero information on which to base our probability
    calculation, because of the counting issues introduced by the infinity
    combined with the lack of pattern.  There is no usable information.



Mathematicians do apparently have a well-defined notion of the "frequency" of different possible finite sequences (including one-digit sequences) in an infinite digit sequence. For example, see the article at http://www.lbl.gov/Science-Articles/Archive/pi-random.html which talks about attempts by mathematicians to prove that the digit sequence of pi has a property called "normality", which means that any n-digit sequence should appear with the same frequency as every other n-digit sequence (so in base 2, it would imply that the 2-digit sequences 00, 01, 10 and 11 all appear equally frequently in the infinite sequence):


'Describing the normality property, Bailey explains that "in the familiar base 10 decimal number system, any single digit of a normal number occurs one tenth of the time, any two-digit combination occurs one one-hundredth of the time, and so on. It's like throwing a fair, ten-sided die forever and counting how often each side or combination of sides appears."'

'Pi certainly seems to behave this way. In the first six billion decimal places of pi, each of the digits from 0 through 9 shows up about six hundred million times. Yet such results, conceivably accidental, do not prove normality even in base 10, much less normality in other number bases.'

'In fact, not a single naturally occurring math constant has been proved normal in even one number base, to the chagrin of mathematicians. While many constants are believed to be normal -- including pi, the square root of 2, and the natural logarithm of 2, often written "log(2)" -- there are no proofs.'


So while it hasn't been proved, it sounds like it's at least a well-defined notion (and the article discusses some approaches to proving it which show some promise). Perhaps it means that if you look at the frequencies of different n-digit sequences in the first N digits of a number, the frequencies all approach equality in the limit as N goes to infinity. It would presumably be possible to find infinite sequences that *aren't* "normal" in this sense, like .011011011011...
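
To make that frequency notion concrete, here is a minimal sketch in Python (it assumes the mpmath library is installed; the number of digits examined and the block length are arbitrary choices of mine, not anything from the article) that tallies how often single digits and two-digit blocks occur in a finite prefix of pi's decimal expansion:

    # Rough check of the "equal frequency" idea on a finite prefix of pi.
    # Sketch only: assumes mpmath is installed; N is an arbitrary choice.
    from collections import Counter
    from mpmath import mp

    N = 100_000                               # decimal digits of pi to examine
    mp.dps = N + 10                           # working precision, with a margin
    digits = str(mp.pi).replace('.', '')[:N]  # "31415926535..." as a plain string

    # Single-digit frequencies: a normal number would have each near 1/10.
    singles = Counter(digits)
    for d in sorted(singles):
        print(d, singles[d] / N)

    # Overlapping two-digit blocks: each of the 100 possible blocks near 1/100.
    pairs = Counter(digits[i:i+2] for i in range(N - 1))
    print(min(pairs.values()) / (N - 1), max(pairs.values()) / (N - 1))

Of course this only shows that the frequencies look roughly equal out to N digits; as the article says, it proves nothing about the limit.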

(Meanwhile, note that the naive idea of just picking a digit randomly from the entire infinite sequence, with all digits equally likely, doesn't actually make sense, because you can't have a uniform probability distribution on a countably infinite set of numbers. It would lead to paradoxes along the lines of the two-envelope paradox discussed at http://consc.net/papers/envelope.html, except that in this variant you'd be given one of two envelopes, which you find to contain N dollars, where N was chosen at random from the infinite set of natural numbers 1, 2, 3, ... using a uniform probability distribution, so each natural number was equally likely. Then, if you have the choice to exchange it for another sealed envelope chosen in the same way, you should always bet that the second envelope contains more money, with probability 1, since there are an infinite number of possible Ns larger than the one you got and only a finite number of Ns smaller. The paradox is that this argument would seem to work even before you have opened the first envelope and seen the specific value of N inside, so you're saying that there's a probability 1 that one of two identical featureless sealed envelopes has more money in it than the other!)

There's a solution to the two-envelope paradox that uses a distribution over the infinite range of possible values, so it applies to the above form as well. You start by assuming arbitrary distribution functions and then show that if the density for the scale factor is taken to be uniform on (0, inf) the paradox goes away. This is realistic, since in any actual realization you would have some idea of an upper bound on the distribution, and if you opened an envelope with this amount you wouldn't swap - the paradoxical symmetry depends on the assumption of an unbounded range. Here's the solution. It is for a generalized form of the two-envelope puzzle in which the larger amount is r times as big as the smaller amount. In the end the solution is independent of the value of r, so it also applies to the form you cite above.

Without loss of generality, we can describe our prior density functions for the amounts in the two envelopes in terms of a density function fo(x), the ratio r of the larger amount to the smaller, and a scale factor k. Let L be the event that the envelope with the larger amount is picked and S the event that the envelope with the smaller amount is picked. Then our prior density functions for the amount m in the envelope are:

    For the smaller amount our prior is:   f(m|S,k) = k fo(km)
    and for the larger amount:             f(m|L,k) = (k/r) fo(km/r)

Our uncertainty about the scale factor, k, is described by a density g(k). So

    f(m|S) = INT k fo(km) g(k) dk        (where INT denotes the integral from zero to infinity)

    f(m|L) = INT (k/r) fo(km/r) g(k) dk

Now, in the first equation, make a change of variable in the integral, y = km:

    f(m|S) = INT (y/m) fo(y) g(y/m) dy/m = (1/m^2) INT y fo(y) g(y/m) dy

and in the second, change the variable of integration to x = km/r:

    f(m|L) = INT (x/m) fo(x) g(rx/m) (r/m) dx = (r/m^2) INT x fo(x) g(rx/m) dx

Now if we assume no prior knowledge of the scale of the amounts, we will take g(k) to be a flat (improper) density and the two integrals will be equal; whence

    f(m|L)/f(m|S) = r
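
If you want to check that ratio without redoing the change of variables by hand, here is a quick symbolic sketch (it assumes Python with sympy; fo(x) = exp(-x) is just an arbitrary concrete choice of the shape function, and the flat g(k) is simply dropped from the integrands):

    # Symbolic sanity check that f(m|L)/f(m|S) = r when g(k) is flat.
    # Sketch only: assumes sympy; fo(x) = exp(-x) is an arbitrary choice.
    import sympy as sp

    k, m, r = sp.symbols('k m r', positive=True)
    fo = lambda x: sp.exp(-x)

    f_S = sp.integrate(k * fo(k * m), (k, 0, sp.oo))            # f(m|S) with g(k) = 1
    f_L = sp.integrate((k / r) * fo(k * m / r), (k, 0, sp.oo))  # f(m|L) with g(k) = 1

    print(sp.simplify(f_S))        # 1/m**2
    print(sp.simplify(f_L))        # r/m**2
    print(sp.simplify(f_L / f_S))  # r, independent of m

Any other choice of fo for which the integrals converge gives the same ratio, which is just the change-of-variables argument above.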

But, by Bayes' theorem,

    f(m|L) = P(L|m) f(m)/P(L)

so
    P(L|m) = f(m|L) P(L)/ f(m) = f(m|L) P(L)/[f(m|L) P(L) + f(m|S) P(S)]

using P(L) = P(S), i.e. equal prior probability of selecting the larger or the smaller

    P(L|m) = f(m|L)/[f(m|L) + f(m|S)]

Then, dividing numerator and denominator by f(m|S) and using f(m|L)/f(m|S) = r,

    P(L|m) = r/[r + 1]

and     P(S|m) = 1 - P(L|m) = 1/[r + 1]

So the expected value of switching is

    <switch> = P(L|m) m/r + P(S|m) rm
             = [r/(r+1)] m/r + [1/(r+1)] rm
             = m

which is the same as not switching and keeping the amount m found in the first envelope; so there is no paradox. Note that if (as would be the case in a real instance) we do suppose we know something about the scale of the amounts, i.e. our prior g(k) is not actually flat, then we will expect a gain from switching if we see an amount m that is toward the low end of our prior, and we will not expect a gain if the amount we see is high. We do not have the paradox of wanting to switch even before we see the amount in the first envelope selected.
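
Here is a small Monte Carlo illustration of that last point (a sketch only, not part of the derivation: Python, with the smaller amount drawn uniformly from (0, 100) standing in for a proper, bounded prior, and r = 2; all the numbers are illustrative choices of mine):

    # Monte Carlo: with a proper, bounded prior, blindly switching gains nothing,
    # but switching only when the observed amount is low does gain.
    # The uniform(0, 100) prior, r = 2, and the threshold are illustrative only.
    import random

    random.seed(0)
    r, trials = 2, 200_000
    gain_blind = 0.0       # always switch, regardless of the amount seen
    gain_low = 0.0         # switch only when the seen amount is below 100

    for _ in range(trials):
        small = random.uniform(0, 100)
        amounts = [small, r * small]
        i = random.randrange(2)            # which envelope we happened to pick
        seen, other = amounts[i], amounts[1 - i]
        gain_blind += other - seen
        if seen < 100:                     # low relative to the largest possible amount, 200
            gain_low += other - seen

    print("always switch, average gain: ", gain_blind / trials)   # ~ 0
    print("switch when low, average gain:", gain_low / trials)    # clearly positive

which matches the point that with a non-flat prior you expect a gain from switching only when the amount you see is toward the low end.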


If this notion of considering the frequency of different finite sequences in an infinite sequence is a well-defined one, perhaps something similar could also be applied to an infinite spacetime and the frequency of Boltzmann brains vs. ordinary observers, although the mathematical definition would presumably be more tricky. You could consider finite-sized chunks of spacetime, or finite-sized spin networks or something in quantum gravity, and then look at the relative frequency of all the ones of a given "size" large enough to contain macroscopic observers. Suppose you knew the frequency F1 of "chunks" that appeared to be part of the early history of a baby universe, with entropy proceeding from lower on one end to higher on the other end, vs. the frequency F2 of "chunks" that seem to be part of a de Sitter space with high entropy on both ends. Then, if you could also estimate the average number N1 of ordinary observers found in a chunk of the first type and the average number N2 of Boltzmann brains spontaneously arising in a chunk of the second type, and F1*N1 was much greater than F2*N2, you'd have a justification for saying that a typical observer is much more likely to be an ordinary one than a Boltzmann brain.
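
Just to spell out that final comparison, the bookkeeping is only a product and a ratio; here is a toy sketch in Python with completely made-up numbers (none of these values come from physics or from the argument above):

    # Toy weighting of ordinary observers vs. Boltzmann brains.
    # F1, N1, F2, N2 are hypothetical placeholders, purely for illustration.
    F1, N1 = 1e-30, 1e10   # frequency of "baby universe" chunks, ordinary observers per chunk
    F2, N2 = 1e-10, 1e-20  # frequency of de Sitter chunks, Boltzmann brains per chunk

    weight_ordinary = F1 * N1
    weight_boltzmann = F2 * N2
    print("odds ordinary : Boltzmann =", weight_ordinary / weight_boltzmann)

If the physics handed you those four numbers, the typicality claim would just amount to this ratio being large.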

Jesse

Right. You hope to find some relative frequency of generation based on the physics and that gives you a probability measure. You can't use the infinite cardinality as a measure.

Brent
