Hi Russell,

   No problem at all - I myself confess to having skimmed papers in
the past, perhaps even in the last 5 minutes...  That I took a bit of
umbrage just shows that I haven't yet transcended into a being of pure
thought :-)

  Let me address your 3rd paragraph first.  Consider the statements:
"3 is a prime number" and "4 is a prime number".  Both of these are
well formed (as opposed to, say, "=3==prime4!=!"), but the first is
true and the second is false.  To be slightly pedantic, I would count
over the first statement (that is, in the process of counting all
information structures) and not the second.  Note that the first
statement can be rephrased in an infinite number of different ways:
"2+1 is a prime number", "the square root of 9 is not composite", and
so forth.  However, we should not count over all of these
individually, but rather just over the invariant information that is
preserved from translation to translation (this is the meta-lesson
borrowed from Faddeev and Popov).
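
  To make that "invariant core" a little more concrete, here is a
tiny Python sketch (my own illustration, not something from the paper
- the helper functions are made up for the occasion): several surface
phrasings of the same statement all reduce to one and the same fact,
and that single fact is the only thing I would count over.

import math

def is_prime(n: int) -> bool:
    # trial division - plenty for a toy example
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def is_composite(n: int) -> bool:
    return n > 1 and not is_prime(n)

phrasings = [
    is_prime(3),                      # "3 is a prime number"
    is_prime(2 + 1),                  # "2+1 is a prime number"
    not is_composite(math.isqrt(9)),  # "sqrt of 9 is not composite"
]
print(all(phrasings))  # True - every translation preserves the same content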

  Consider then "4 is a prime number" - which we can perhaps rephrase
as "the square root of 16 is a prime number".  In this case we are now
carefully translating a false statement - but as it is false there is
no longer any invariant core that must be preserved - it would be fine
to also say "the square root of 17 is a prime number" or any other
random nonsense...  "There is no there there", so to speak.  The same
goes for all of the completely random sequences - there seems to be a
huge number of them at first, but none of them actually encode
anything nontrivial.  When I choose to only count over the nontrivial
structures - that which is invariant upon translation - they all
disappear in a puff of smoke.  Or rather (being a bit more careful),
there really never was anything there in the first place: the
appearance that the random structures carry a lot of information (due
to their incompressibility) was always an illusion.

   Thus, when I propose only counting over the gauge invariant stuff,
it is not that I am skipping over "a bunch of other stuff" because "I
don't want to deal with it right now" - I really am only counting over
the real stuff.  Let me give an example that I thought about including
in the paper.  Say ETs show up one day - the solution to the Fermi
paradox is just that they like to take long naps.  As a present they
offer us the choice of 2 USB drives.  USB A) contains a large number
of mathematical theorems - some that we have derived, others that we
haven't (perhaps including an amazing solution of the Collatz
conjecture).  For concreteness, say that all the theorems are less
than N bits long, as the USB drive has some finite capacity.  In
contrast,
USB B) contains all possible statements that are N bits long or less.
It would seem that one should choose B), since it has everything on
A) plus a lot more stuff!  But of course by "filling in the gaps" we
have not
only not added any more information, but have also erased the
information that was on A): the entire content of B) can be
compactified to the program: "print all sequences N bits long or
less".

  The nontrivial information thus forms a sparse subset of all
sequences.  The sparseness can be seen through combinatorics.  Take
some very complex nontrivial structure which is composed of many
interacting parts: say, a long mathematical theorem, or a biological
creature like a frog.  Go in and corrupt one of the many interacting
parts - now the whole thing doesn't work.  Go and randomly change
something else instead, and again the structure no longer works: there
are many more ways to be wrong than to be right (with complete
randomness emerging in the limit of everything being scrambled).
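
  To put a toy number on "many more ways to be wrong than to be
right" (back-of-the-envelope figures of my own, purely illustrative):

# A structure with k interacting parts, each of which can be set m ways
# but which only "works" in one configuration - a deliberately crude model.
k, m = 100, 4              # hypothetical: 100 parts, 4 settings each
total = m ** k             # all possible variants: about 1.6e60
working = 1                # the single working configuration
print(working / total)     # about 6.2e-61 - working structures are sparse

  Even granting many working configurations rather than one barely
changes the conclusion: the sparseness is exponential in the number
of interacting parts.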

  Note that it is a bit more subtle than this, however - for instance
in the case of the frog, small changes in its genotype (and thus in
its phenotype) can slightly increase or decrease its fitness (depending
on the environment).  There is thus still a degree of randomness
remaining, as there must be for entities created through iterative
trial and error: the boundary between the sparse subset of nontrivial
structures and the rest of sequence space is therefore somewhat
blurry.  However, even if we add a very fat "blurry buffer zone", the
nontrivial structures still comprise a tiny subset of statement space
- although they dominate the counting after a gauge choice is made
(which removes the redundant and the random).

  Does that make sense?


>
> Sorry about that, but its a sad fact of life that if I don't get the
> general gist of a paper by the time the introduction is over, or get
> it wrong, I am unlikely to delve into the technical details unless a)
> I'm especially interested (as in I need the results for something I'm
> doing), or b) I'm reviewing the paper.
>
> I guess I don't see why there's a problem to solve in why we observe
> ourselves as being observers. It kind of follows as a truism. However,
> there is a problem of why we observe ourselves at all, as opposed to
> disorganised random information (the white rabbit problem) or simple
> uninteresting information (the occam catastrophe problem).
>
> I'm not sure you really address either of the latter two issues - you
> seem to be assuming away white rabbits in restricting yourself to
> "gauge invariant" information (which I assume can be formalised as the
> set of programs of a universal machine). I would be interested to know
> if your proposal could address the occam catastrophe issue though.
>
