On Mon, Jan 31, 2011 at 03:29:52PM -0800, Travis Garrett wrote:
> Hi Russell,
>    No problem at all - I myself confess to having skimmed papers in
> the past, perhaps even in the last 5 minutes...  That I took a bit of
> umbrage just shows that I haven't yet transcended into a being of pure
> thought :-)
>   Let me address your 3rd paragraph first.  Consider the statements:
> "3 is a prime number" and "4 is a prime number".  Both of these are
> well formed (as opposed to, say, "=3==prime4!=!"), but the first is
> true and the second is false.  To be slightly pedantic, I would count
> over the first statement (that is, in the process of counting all
> information structures) and not the second.  Note that the first
> statement can be rephrased in an infinite number of different ways,
> "2+1 is a prime number", "the square root of 9 is not composite" and
> so forth.  However, we should not count over all of these
> individually, but rather just the invariant information that is
> preserved from translation to translation (This is the meta-lesson
> borrowed from Faddeev and Popov).
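[To make the invariance concrete, here is a toy sketch of my own (not from your paper): several superficially different encodings of "3 is a prime number" all reduce to the same truth value, which is the only thing preserved across translations.]

```python
from math import isqrt

def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def is_composite(n):
    return n > 1 and not is_prime(n)

# Three translations of the same statement:
claims = [
    is_prime(3),                 # "3 is a prime number"
    is_prime(2 + 1),             # "2+1 is a prime number"
    not is_composite(isqrt(9)),  # "the square root of 9 is not composite"
]
print(all(claims))  # -> True: one invariant truth value survives translation

# The false statement has no such invariant core:
print(is_prime(4))  # -> False
```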
>   Consider then "4 is a prime number" - which we can perhaps rephrase
> as "the square root of 16 is a prime number".  In this case we are now
> carefully translating a false statement - but as it is false there is
> no longer any invariant core that must be preserved - it would be fine
> to also say "the square root of 17 is a prime number" or any other
> random nonsense...  "There is no there there", so to speak.  The same
> goes for all of the completely random sequences - there seems to be a
> huge number of them at first, but none of them actually encode
> anything nontrivial.  When I choose to only count over the nontrivial
> structures - that which is invariant upon translation - they all
> disappear in a puff of smoke.  Or rather (being a bit more careful),
> there really never was anything there in the first place: the
> appearance that the random structures carry a lot of information (due
> to their incompressibility) was always an illusion.
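[If it helps, that illusion can be exhibited directly with a general-purpose compressor -- a rough sketch only, since compressed size is just a crude proxy for information content:]

```python
import random
import zlib

random.seed(0)
n = 10_000
# A redundant, translatable structure vs. a scrambled sequence:
structured = b"3 is a prime number. " * 500
scrambled = bytes(random.randrange(256) for _ in range(n))

print(len(zlib.compress(structured)))  # small: the invariant core is tiny
print(len(zlib.compress(scrambled)))   # ~n: incompressible, yet encodes nothing
```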
>    Thus, when I propose only counting over the gauge invariant stuff,
> it is not that I am skipping over "a bunch of other stuff" because "I
> don't want to deal with it right now" - I really am only counting over
> the real stuff.  Let me give an example that I thought about including
> in the paper.  Say ETs show up one day - the solution to the Fermi
> paradox is just that they like to take long naps.  As a present they
> offer us the choice of 2 USB drives.  USB A) contains a large number
> of mathematical theorems - some that we have derived, others that we
> haven't (perhaps including an amazing solution of the Collatz
> conjecture).  For concreteness say that all the theorems are less than
> N bits long as the USB drive has some finite capacity.  In contrast,
> USB B) contains all possible statements that are N bits long or less.
> One should therefore choose B) because it has everything on A), plus a
> lot more stuff!  But of course by "filling in the gaps" we have not
> only not added any more information, but have also erased the
> information that was on A): the entire content of B) can be
> compactified to the program: "print all sequences N bits long or
> less".
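[Your USB example can be sketched in a few lines, which is exactly the point: the whole of drive B collapses to a program shorter than almost any single theorem on drive A.]

```python
from itertools import product

def all_strings(n_bits):
    """Enumerate every binary string of length <= n_bits."""
    for length in range(n_bits + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# The "contents of USB B" for N = 3: 2^0 + 2^1 + 2^2 + 2^3 = 15 strings,
# all produced by this few-line program -- so the drive's apparent
# information content is bounded by the length of the program itself.
contents = list(all_strings(3))
print(len(contents))  # -> 15
```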
>   The nontrivial information thus forms a sparse subset of all
> sequences.  The sparseness can be seen through combinatorics.  Take
> some very complex nontrivial structure which is composed of many
> interacting parts: say, a long mathematical theorem, or a biological
> creature like a frog.  Go in and corrupt one of the many interacting
> parts - now the whole thing doesn't work.  Go and randomly change
> something else instead, and again the structure no longer works: there
> are many more ways to be wrong than to be right (with complete
> randomness emerging in the limit of everything being scrambled).
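[A toy version of this combinatorial argument, using syntactic validity of an arithmetic expression as a crude stand-in for "the structure works":]

```python
import random

expression = "(1 + 2) * (3 + 4) - 5"   # a small structure of interacting parts
alphabet = "0123456789+-*/() "

def still_works(expr):
    """Crude proxy for 'the structure works': does it still evaluate?"""
    try:
        eval(expr)
        return True
    except Exception:
        return False

random.seed(0)
trials = 1000
broken = 0
for _ in range(trials):
    # Corrupt one randomly chosen character of the structure.
    i = random.randrange(len(expression))
    mutant = expression[:i] + random.choice(alphabet) + expression[i + 1:]
    if not still_works(mutant):
        broken += 1

# Many more ways to be wrong than right: a large fraction of
# single-character corruptions destroy the structure outright (and even
# the "working" mutants mostly compute something else).
print(broken, "of", trials, "mutants broken")
```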
>   Note that it is a bit more subtle than this however - for instance
> in the case of the frog, small changes in its genotype (and thus in
> its phenotype) can slightly increase or decrease its fitness (depending
> on the environment).  There is thus still a degree of randomness
> remaining, as there must be for entities created through iterative
> trial and error: the boundary between the sparse subset of nontrivial
> structures and the rest of sequence space is therefore somewhat
> blurry.  However, even if we add a very fat "blurry buffer zone" the
> nontrivial structures still comprise a tiny subset of statement space
> - although they dominate the counting after a gauge choice is made
> (which removes the redundant and random).
>   Does that make sense?

This is, by and large, Tegmark's proposal, which he calls MUH
(Mathematical Universe Hypothesis).

Note that this proposal is somewhat ill-defined. Which mathematical
statements are in or out of your proposal? Any of the bizarre zoo of
mathematical objects that might take a mathematician's fancy, including
any arbitrary finite set of axioms I might dream up and their enumerable
theorems? Or are the whole numbers somehow privileged (Kronecker's "God
made the integers; all else is the work of man")? If so, why? And
would all possible alien intelligences agree?

The answer is important, because it changes the ultimate measure you
get. We know from Goedel that no finite axiomatic system can capture
all properties of the whole numbers. Given any finite axiomatic
system, we can always find an independent sentence to extend it by,
but we can equally extend the system by that sentence's negation, so
there must be 2^\aleph_0 axiomatic systems (although the number of
finite systems must still be countable), and an uncountable number of
theorems in total. Given that theorems are embeddable in the space of
descriptions (or infinite binary strings if you prefer), this entails
an isomorphism between the space of theorems and the space of
descriptions.
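For what it's worth, the branching step can be written out: Goedel
gives, for any consistent finitely axiomatised theory T containing
arithmetic, a sentence G undecidable in T, so both extensions

```latex
T_0 = T \cup \{G\}, \qquad T_1 = T \cup \{\lnot G\}
```

are consistent, and each again has an undecidable sentence. Iterating,
every infinite binary sequence picks out a distinct chain of theories,
giving 2^\aleph_0 of them, while the finitely axiomatised theories
stay countable.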

By contrast, if we fix the space to being say all theorems of Robinson
Arithmetic, these are enumerable (as are theorems of all finite
axiomatic systems). Furthermore, all Turing machine programs can be
represented in RA, so one could equally as well talk about these. This
set has the advantage of being closed to diagonalisation. As Bruno
Marchal showed in his UDA, the number of histories passing through an
observer's current observer moment is uncountable, and dense in the
space of all descriptions, so in a sense it makes little difference,
ontologically speaking, whether one starts with Robinson Arithmetic,
the set of histories traced out by the universal dovetailer (UD*), or
the set of all infinite length descriptions ({0,1}*). This accords
with your frog example above: observers will sample from all
descriptions, including the nonsense ones.
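A minimal sketch of the dovetailing idea, with Python generators
standing in for programs (this illustrates the interleaving schedule
only, not Bruno's actual UD):

```python
from itertools import count, islice

def program(k):
    """Toy stand-in for the k-th program: an endless trace of steps."""
    for step in count():
        yield (k, step)

def dovetail():
    """Interleave execution of all programs: at stage n, start program n
    and advance every program started so far by one step, so each
    program's each step is eventually reached."""
    running = []
    for n in count():
        running.append(program(n))
        for proc in running:
            yield next(proc)

# The first few steps of the dovetailed trace:
trace = list(islice(dovetail(), 10))
print(trace)  # -> [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), ...]
```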

Now, your OCH is the assertion that to each theorem, there will be an
observer capable of observing that theorem, and given the ontological
assumption that observers are theorems, observers will observe other
observers, perhaps even be self-observing. I suspect self-observation
is a little different from this concept, though I could be wrong.

How might you turn that into a sampling measure though? It is not
enough to say that there is a one-to-one correspondence (or even
many-to-one). We can say there is a one-to-one correspondence between
the whole numbers and the squares, but by most measures, the squares
will be less dense than the integers. If you get the measure wrong,
you will end up with White Rabbit problems, as there will be many more
theorems of white rabbits with pocket watches than ones of regular worlds.
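The squares-versus-integers point can be made concrete: a bijection
(n <-> n^2) exists, yet natural density, one candidate sampling
measure, still separates the two sets (a standard illustration, not
specific to your OCH):

```python
from math import isqrt

def square_density(n):
    """Fraction of integers in [1, n] that are perfect squares."""
    return isqrt(n) / n

# Both sets are countably infinite, but the density of the squares
# vanishes as n grows:
for n in (100, 10_000, 1_000_000):
    print(n, square_density(n))  # -> 0.1, then 0.01, then 0.001
```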



Prof Russell Standish                  Phone 0425 253119 (mobile)
UNSW SYDNEY 2052                         hpco...@hpcoders.com.au
Australia                                http://www.hpcoders.com.au

You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.