Hi Travis,

Thank you for this excellent post! I only have one more question about the gauge invariance idea. Is it different in kind from the use of equivalence classes in the following sense?

Say we have a collection of observers, each with some class (not a set, unless we allow for hypersets) of possible observations. These observations can be considered 'communicable' if, for any pair of observers, there are elements x and y of their respective equivalence classes X and Y that would generate the same effect from some third observer's point of view, i.e. that third observer has an equivalence class Z containing an element z that "carries" the same functional relationship as x and y. In effect we define observers in terms of equivalence classes of information structures. Thus I (my identity) could be defined as an invariant of sorts over the transformations in the class of all those experiences that contain or carry some information about what it is like for Stephen Paul King to be having some experience or another, including the experiences of having an email discussion with something equivalent to the invariant of sorts that is "what it is like for Travis Garrett to be having some experience or another..." in your class.

From what I can tell this setup would filter out any transient white rabbits and also cull the possible observables down to only that subclass which can be communicated among some collection of observers. It seems equivalent to your idea, but I am not sure. My thought here is inspired by the idea of diffeomorphism invariance in general relativity. I look forward to your discussion of the idea of "absorption".



-----Original Message----- From: Travis Garrett
Sent: Friday, February 18, 2011 6:44 PM
To: Stephen Paul King
Subject: Re: Observers Class Hypothesis

Hi Stephen,

  Sorry for the slow reply, I have been working on various things and
also catching up on the many conversations (and naming conventions) on
this board.  And thanks for your interest!  -- I think I have
discovered a giant "low hanging fruit", which had previously gone
unnoticed since it is rather nonintuitive in nature (in addition to
being a subject that many smart people shy away from thinking about).

 Ok, let me address the Faddeev-Popov, "gauge-invariant information"
issue first.  I'll start with the final conclusion reduced to its most
basic essence, and give more concrete examples later.  First, note
that any one "structure" can have many different "descriptions".  Thus,
when counting among different structures, it is crucial to choose only
one description per structure, as including redundant descriptions
will spoil the calculation.  In other words, one only counts over the
gauge-invariant information structures.

 A very important lemma to this is that all of the random noise is
also removed when the redundant descriptions are cut, as the random
noise doesn't encode any invariant structure.  Thus, for instance, I
agree with COMP, but I disagree that white rabbits are therefore a
problem...  The vast majority of the output of a universal dovetailer
(which I call A in my paper) is random noise which doesn't actually
describe anything (despite "optical illusions" to the contrary...) and
can therefore be zapped, leaving the union of nontrivial, invariant
structures in U (which I then argue is dominated by the observer class
O due to combinatorics).

 Phrasing it differently, if "anything goes" then there is actually
nothing there!   This is despite the "optical illusion" of there being
a vast number of different possibilities as afforded by the
"nonrestrictive policy" of "anything goes".  One thus needs
*constraints* so that "only some things go  and not others" in order
to generate nontrivial structures -- such as the constraints
introduced by existing within our physical universe (i.e. a complex,
nontrivial mathematical structure).

  Ok, time for examples.  I'll start with one so simple that it is
borderline silly...  Say professor X is tracking a species of squirrel
which comes in two populations: a short haired and a long haired
version (let's say the long hair version stems from a dominant
allele).  In one square kilometer of forest he counts 202 short haired
squirrels and 277 long haired.  2 years later, after 2 colder than
average winters, he sends out his 3 grad students to count the
populations again.  They all write down that there are 184 short
haired squirrels, but student 1 writes that there are 298 long haired
squirrels, student 2 writes down that there are 296 group B squirrels,
and student 3 records the existence of 301 shaggy squirrels.  In a
hurry prof X gathers the data, and seeing in the notes that the group
B and shaggy populations both have long hair, he adds them all up for
a total of 895 long-haired squirrels vs 184 short-haired -- a huge
change instead of a mild selection!  He rushes off his groundbreaking
paper to Science...  Anyways, one could concoct a more realistic
example (perhaps using more abstract labeling), but the main point
holds -- it doesn't matter which description is used (long-haired,
group B, or shaggy) but it does need to be consistent to avoid over-
counting and getting the wrong answer...
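The bookkeeping can be sketched in a few lines of Python (the counts are taken from the example above; the canonical-label mapping and variable names are my own illustrative choices):

```python
# Toy sketch of prof X's over-counting error: three synonymous labels
# describe the same long-haired population, so summing across labels
# counts the same squirrels three times.

reports = [
    ("short haired", 184),
    ("long haired", 298),   # student 1
    ("group B", 296),       # student 2 -- same population, new label
    ("shaggy", 301),        # student 3 -- same population, new label
]

# Map every synonymous description to a single canonical one
canonical = {"short haired": "short", "long haired": "long",
             "group B": "long", "shaggy": "long"}

# Wrong: treat each description as a distinct population and sum them
naive_long = sum(n for label, n in reports if canonical[label] == "long")

# Right: one description per structure -- keep a single tally per
# canonical label (here, simply the first one encountered)
seen = {}
for label, n in reports:
    seen.setdefault(canonical[label], n)

print(naive_long, seen["long"], seen["short"])
```

Summing over redundant descriptions gives the spurious 895; counting one description per structure gives a mild shift, as it should.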

  A silly example, but things become much, much more subtle when
considering different mathematical structures which can have various
mathematical descriptions!   Consider the case in general relativity.
Here the structure of spacetime can take different forms - you can
have flat, empty Minkowski space, the "dimpled" spacetime resulting
from a single star, or a double helix of dimples due to 2 stars
orbiting each other, or a wormhole geometry for a black hole and so
forth...  Each one of these spacetime structures can then be described
by many different coordinate systems and associated metrics.  For
instance, flat space can be mapped out by a Cartesian coordinate
system, or cylindrical coordinates, or spherical coordinates, or an
infinite number of alternatives, most of which completely obscure the
simple Minkowski spacetime structure.  Likewise for the black hole -
one can use Schwarzschild, or Eddington-Finkelstein, or Kruskal-
Szekeres or an infinite number of other variations...  Thus, consider
a complex metric which describes some highly warped spacetime geometry
as expressed in an intricate coordinate system.  It is natural to
wonder what components of the metric are directly informing on the
exotic shape of the spacetime, and which parts are just artifacts of
the peculiar coordinate system that has been chosen.  The answer is:
it's not clear in general!  Relativists thus depend heavily on scalar
invariants (the Ricci scalar, the Kretschmann scalar, and so forth) which
are the same in all coordinate systems for a given geometry, and on
asymptotic quantities defined at large distances where the spacetime
is nearly flat (one can get the total mass, spin, and charge this way)
in order to understand complex spacetime structures.
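To make the coordinate-dependence concrete, here is the same flat geometry in two charts (a sketch in geometric units, signature -+++ assumed):

```latex
% Flat Minkowski spacetime in Cartesian and in spherical coordinates:
ds^2 = -dt^2 + dx^2 + dy^2 + dz^2
ds^2 = -dt^2 + dr^2 + r^2\, d\theta^2 + r^2 \sin^2\!\theta\, d\phi^2
% The metric components differ (g_{\theta\theta} = r^2 looks "curved"),
% yet every curvature invariant agrees: R = 0 and the Kretschmann
% scalar K = R_{abcd} R^{abcd} = 0 in both charts.  For the
% Schwarzschild geometry, K = 48 M^2 / r^6 in any coordinate system.
```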

 Let's move on to QFT.  Say we have the function f(a,x,y) = e^(-ax^2),
and we want to integrate this over all values of x and y, thus
producing Z(a), which is just a function of a (we will then want to
compute things like d(log(Z))/da ...).  Ok, fine, we have:

Z(a) = int_{-infinity}^{+infinity} int_{-infinity}^{+infinity} e^(-ax^2) dx dy

But the final result is infinite: Z(a) = (pi/a)^(1/2) * infinity =
infinity, so at first it looks like we can't work with this theory...
But of course this is also a bit of a silly example -- the original
function f(a,x,y) never depended on y: y is a redundant variable, and
this redundancy is responsible for the failure of the integration to
produce a sensible result.  The solution is obvious - just get rid of
y and integrate f(a,x) over all values of x.  This is essentially what
is going on in the Faddeev-Popov case, where f (i.e. the Lagrangian)
depends on gauge fields (as in electromagnetism, or the strong nuclear
force), and f is a constant for some variations of the gauge fields.
However in this case the extraction of the "redundant y variable" is
much more subtle than in the above example - it is this subtlety which
necessitates all of the intricate computational machinery as described
in the wikipedia article.
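The redundant-variable picture can be checked numerically (a minimal sketch using a crude midpoint sum; the function and helper names are mine):

```python
import math

def f(a, x, y):
    # the integrand never depends on y: y is a redundant (gauge) variable
    return math.exp(-a * x * x)

def integrate_2d(a, L, n=2000):
    # crude midpoint Riemann sum of f over the square [-L, L] x [-L, L];
    # since f is independent of y, the y integral just contributes 2L
    h = 2 * L / n
    x_sum = sum(f(a, -L + (i + 0.5) * h, 0.0) for i in range(n)) * h
    return x_sum * (2 * L)

a = 1.0
exact_x = math.sqrt(math.pi / a)   # the well-defined x-only integral

# widening the redundant y direction just scales the answer up: the
# full 2d integral diverges as L -> infinity ...
Z_10 = integrate_2d(a, 10.0)
Z_20 = integrate_2d(a, 20.0)

# ... while dropping y leaves a finite, sensible result, (pi/a)^(1/2)
Z_x_only = Z_10 / 20.0

print(Z_x_only, exact_x, Z_20 / Z_10)
```

Doubling the y range doubles Z, which is the numerical signature of the divergence; factoring out (or "gauge-fixing") the y direction recovers the finite answer.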

So yeah, to get back to your question on the nature of Faddeev-Popov
ghosts, I would go with:

1) a computational tool that does not have a “real” physical
expression since they violate some key requirements of “reality”.
This is certainly the consensus view, at least.  The rich nature of
these mathematical tools is in some sense a reflection of the real
structures which they assist in the analysis of...  That said, it is
fun to wonder if some more complex "theory of everything" could make
them real (string theory does not, as far as I can tell...) - perhaps
they could be useful in a star-ship engine in a science fiction
story...
 So, right, one should only consider the real, nontrivial,
description-invariant structures when counting among all forms of
information.  Let's apply this to the white rabbit problem that I have
read about on this board (my name for these is "reality
discontinuities"...).  First note that white rabbits are not
completely forbidden in our universe, but rather just exceedingly
unlikely - not just exponentially suppressed, but rather doubly-
exponentially suppressed.  The easiest one to think about is the
probability that all the air molecules in a room could spontaneously
shift to the left half of the room.  The probability for this will be
about 1/2^N, where N is the number of air molecules - itself something
like 10^27 (in the case that the density is low enough that the mean
free path is larger than the dimensions of the room, this should be an
accurate estimate...).  This should be the general pattern for
macroscopic violations of the 2nd law and macroscopic quantum-
tunneling - the probability will be something like 1/(10^(10^N)) for
some 2-digit N - i.e. fantastically unlikely, if not completely
impossible.
 But then, if one can simulate our universe in a computer, then one
could also simulate what I call an "if-then" type universe -
essentially a second program that runs the universe simulation and
watches it, and "if" certain conditions are met it "then" pauses the
simulation and goes in and changes things, perhaps inserting a "white
rabbit" of some sort - thus making the white rabbits quite common in
that particular "if-then" program instead of incredibly unlikely.
Furthermore there will be an incredible number of different "if-then"
type programs with different "if" and "then" conditions, as opposed to
just one "unmolested" universe...
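A toy sketch of such an "if-then" wrapper (all names are mine, with the "universe" reduced to a trivial number stream); note that the predicate and the patch are completely free parameters, which is exactly the point about their arbitrariness:

```python
# Toy "if-then" wrapper: run a base "universe", watch it, and splice in
# a "white rabbit" whenever an arbitrary condition fires.  Both the
# condition and the intervention are contentless in themselves.

def base_universe(steps):
    # stand-in for an unmolested universe simulation
    for t in range(steps):
        yield t

def if_then_universe(steps, condition, intervention):
    # a second program that supervises the first and edits its output
    for state in base_universe(steps):
        if condition(state):
            yield intervention(state)   # pause, modify, resume
        else:
            yield state

# Any "if" and any "then" will do -- e.g.:
rabbit_run = list(if_then_universe(
    10,
    condition=lambda s: s % 3 == 0,         # arbitrary trigger
    intervention=lambda s: "white rabbit",  # arbitrary insertion
))

print(rabbit_run)
```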

 I don't think this is a problem however!  All of these "if-then"
universes are essentially an optical illusion - there actually isn't
any nontrivial structure there.  Note for instance that one is free to
pick any "if" condition and any "then" condition - they are both
completely arbitrary and contentless.  I believe at one point Bruno
said that if a physical universe was capable of doing an infinite
number of computations then it necessarily would run a universal
dovetailer and thus spend the vast majority of its time simulating
these random "if-then" simulations and thus causing a white rabbit
problem.  I disagree with this!  In a physical universe where ever
more powerful computers are being built (which may be the case in our
universe) the vast majority of the time they would be programmed to
work on ever more complex nontrivial problems, with very rare
excursions out into the "random wilderness".  The "if-then" type
programs could thus only be a problem in Platonia...

 And I don't think they are a problem there either!  Like I have
mentioned earlier, the apparent plethora of white rabbit type
universes is essentially an "optical illusion", and the Platonic
ensemble is itself composed of nonrandom, nontrivial structures...
Let me copy and paste an example I gave in a previous post:

"Say ETs show up one day - the solution to the Fermi paradox is just
that they like to take long naps.  As a present they offer us the
choice of 2 USB drives.  USB A) contains a large number of
mathematical theorems - some that we have derived, others that we
haven't (perhaps including an amazing solution of the Collatz
conjecture).  For concreteness say that all the theorems are less
than N bits long as the USB drive has some finite capacity.  In
contrast, USB B) contains all possible statements that are N bits
long or less. One should therefore choose B) because it has
everything on A), plus a lot more stuff!  But of course by "filling
in the gaps" we have not only not added any more information, but
have also erased the information that was on A): the entire content
of B) can be compactified to the program: "print all sequences N bits
long or less". "

I am saying that the universal dovetailer is the dynamical equivalent
of USB drive B)!  By filling in all the gaps between the nontrivial
structures with random noise we end up with no structure whatsoever
-- as evidenced by how trivial it is to generate the content of drive
B: "print all sequences N bits long or less".  The same is true for a
universal dovetailer - it is just slightly obscured as it is now
dynamical, but the end result is the same.

Ok, this post is getting long, so I'll talk about absorption later,
but perhaps it won't be too surprising if I say that it needs to be
nonrandom in character: i.e. photons can bounce off of a fig and hit
both a monkey and a rock but only one is absorbing information in a
nontrivial fashion -- the other is merely being warmed slightly...

You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.