On 22 Feb 2011, at 07:58, Russell Standish wrote:

On Fri, Feb 18, 2011 at 03:46:45PM -0800, Travis Garrett wrote:
Hi Stephen,

  Sorry for the slow reply, I have been working on various things and
also catching up on the many conversations (and naming conventions) on
this board.  And thanks for your interest!  -- I think I have
discovered a giant "low hanging fruit", which had previously gone
unnoticed since it is rather nonintuitive in nature (in addition to
being a subject that many smart people shy away from thinking
about...).

 Ok, let me address the Faddeev-Popov, "gauge-invariant information"
issue first. I'll start with the final conclusion reduced to its most
basic essence, and give more concrete examples later.  First, note
that any one "structure" can have many different "descriptions". When
counting over different structures, it is therefore crucial to choose only
one description per structure, as including redundant descriptions
will spoil the calculation.  In other words, one only counts over the
gauge-invariant information structures.
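
A toy sketch of this counting convention (the canonical_form function
below is a hypothetical stand-in for whatever gauge-fixing / normal
form one chooses, purely for illustration):

    # Count structures, not descriptions: group redundant descriptions
    # into equivalence classes and keep one representative per class.
    from collections import defaultdict

    def count_structures(descriptions, canonical_form):
        classes = defaultdict(list)
        for d in descriptions:
            classes[canonical_form(d)].append(d)  # redundant copies collapse here
        return len(classes)                       # one count per invariant structure

    # Example: strings differing only by leading zeros describe the same integer.
    print(count_structures(["7", "07", "007", "12", "012"], canonical_form=int))  # -> 2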

This is essentially what one does in the derivation of the
Solomonoff-Levin distribution, aka "Universal Prior". That is, fix a
universal prefix Turing machine, considering only the inputs on which it halts. Then all
input programs generating the same output are considered
equivalent. The universal prior for a given output is given by summing
over the equivalence class of inputs giving that output, weighted
exponentially by the length of the unique prefix.
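
In symbols, the prior of an output x is m(x) = sum over programs p with
U(p) = x of 2^(-|p|). Here is a toy sketch of that summation; the run()
function is a hypothetical stand-in for a universal prefix machine
(returning None on strings that are not minimal halting programs), not
any real implementation:

    from collections import defaultdict
    from itertools import product

    def universal_prior(run, max_len=12):
        m = defaultdict(float)
        for n in range(1, max_len + 1):
            for bits in product("01", repeat=n):
                p = "".join(bits)
                out = run(p)               # None unless p is a minimal halting program
                if out is not None:
                    m[out] += 2.0 ** (-n)  # sum over the class of inputs with U(p) = out
        return dict(m)

Because run() rejects proper extensions of valid programs, the sum is
prefix-free and (by the Kraft inequality) stays below 1, as in the
Solomonoff-Levin construction; max_len just truncates the enumeration.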

This result (which dates from the early 70s) gives rise to the various
Occam's razor theorems that have been published since. My own modest
contribution was to note that any classifier function taking bit
strings as input and mapping them to a discrete set (whether integers,
or meanings, matters not) in a prefix way (the meaning of the string,
once decided, does not change on reading more bits) will work. Turing
machines are not strictly needed, and one expects observers to behave
this way, so an Occam's razor theorem will apply to each and every
observer, even if the observers do not agree on the relative
complexities of their worlds.
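
For concreteness, a minimal sketch of the prefix property being
described; the particular rule (commit on the first three bits) is an
arbitrary illustration, not anything a real observer does:

    def classify(bits):
        """Return (meaning, prefix_length), or None if not yet committed."""
        if len(bits) < 3:
            return None          # still reading; no meaning assigned yet
        return bits[:3], 3       # meaning fixed by the first 3 bits; later bits never change it

    # Since each meaning is decided by a finite committing prefix, the weight
    # 2**(-prefix_length) attached to that meaning is well defined, and the
    # Occam's razor argument applies to this classifier just as to a Turing machine.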

However, this only suffices to eliminate what Bruno would call "3rd
person white rabbits". There are still 1st person white rabbits that
arise through the failure of induction problem. I will explain my
solution to that issue further down.


 A very important lemma to this is that all of the random noise is
also removed when the redundant descriptions are cut, as the random
noise doesn't encode any invariant structure.  Thus, for instance, I
agree with COMP, but I disagree that white rabbits are therefore a
problem...  The vast majority of the output of a universal dovetailer
(which I call A in my paper) is random noise which doesn't actually
describe anything (despite "optical illusions" to the contrary...) and
can therefore be zapped, leaving the union of nontrivial, invariant
structures in U (which I then argue is dominated by the observer class
O due to combinatorics).
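
One way to see why the noise drops out is the standard counting bound
(a generic fact about descriptions, not a calculation from the paper):
there are fewer than 2^(n-k) descriptions shorter than n-k bits, so at
most that many n-bit strings can be compressed by more than k bits,
i.e. a fraction below 2^(-k). Almost every raw output string is its
own shortest description, and so contributes no separate structure
once redundant descriptions are identified. For example:

    # There are only 2**0 + ... + 2**(n-k-1) = 2**(n-k) - 1 descriptions shorter
    # than n-k bits, so at most that many n-bit strings compress by more than k bits.
    def compressible_fraction_bound(n, k):
        return (2 ** (n - k) - 1) / 2 ** n

    print(compressible_fraction_bound(1000, 20))   # ~9.5e-07: under one string in a million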

It is important to remember that random noise events are not white rabbits. A
nice physicsy example of the distinction is to consider a room full of
air. The random motion of the molecules is not a white rabbit; that is
just normal thermal noise. All of the molecules congregating in one
small corner of the room, however, so that an observer sitting in the room
ends up suffocating, is a white rabbit. One could say that white
rabbits are extremely low-entropy states that happen by chance, which
is the key to understanding why they're never observed. To be low
entropy, the state must have significance to the observer, as well as
being of low probability. Otherwise, any arbitrary configuration will
have low entropy.
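
As a rough worked number (the figures below are assumptions, just to
set the scale): for N independent molecules and a corner occupying a
fraction f of the room, the chance of finding every molecule there at
once is f^N.

    import math

    N = 1e25    # assumed order-of-magnitude molecule count for a room of air
    f = 0.5     # the "corner" taken, generously, to be half the room
    log10_prob = N * math.log10(f)   # log10 of f**N (f**N itself would underflow)
    print(log10_prob)                # ~ -3.0e24, i.e. probability ~ 10**(-3*10**24)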

When observing data, it is important that observers are relatively
insensitive to error. It does not help to fail to recognise a lion on the
African savannah just because it is partially obscured by a
tree. Computers used to be terrible at just this sort of problem - you
needed the exact key to extract a record from a database - but now various
sorts of fuzzy techniques, particularly ones inspired by the neural
structure of the brain, mean computers are much better at dealing
with noisy data. With this observation, it becomes clear that the
myriad of nearby histories that differ only in a few bits are not
recognised as different from the original observation. These are not
white rabbits. It requires many bits to make a white rabbit, and this,
as you eloquently point out, is doubly exponentially suppressed.
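
A small illustration of that point (the numbers are arbitrary): the
histories within Hamming distance k of a given n-bit observation are
all lumped together with the original by a noise-tolerant observer,
and a genuine white rabbit has to lie far outside that ball.

    from math import comb

    def hamming_ball_fraction(n, k):
        # fraction of all n-bit histories within Hamming distance k of a given one
        return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

    print(hamming_ball_fraction(1000, 5))    # ~8e-289: tiny, yet every history in it is
                                             # perceptually identical to the original
    print(hamming_ball_fraction(1000, 500))  # ~0.5: one must flip on the order of half
                                             # the bits before most histories are reached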

Bruno will probably still comment that this does not dispose of all
the 1st person white rabbits, but I fail to see what other ones could exist.


You might be on the right track. Assuming an 'energetical' or thermodynamical universe, isotropic, linear at bottom, and sufficiently symmetrical, such a form of white rabbit elimination can work for a collectivity of interacting observers. That would eliminate the first person plural WRs. But it assumes a lot about the physical part, which should itself be extracted from all computations, and there we still don't know whether a notion of a normal world emerges at all. Meaning that we have not yet successfully hunted down the third person WRs.

First person white rabbits crop up because, although a long-lasting gentle white rabbit does consume *many* bits, it nevertheless happens easily in the relative way, as dreams confirm, and they are easily built from our relative computational states in UD* (at all levels); we can exclude them only on a priori grounds (by UDA). Due to its peculiar dumbness, the UD generates them all. Their "cost" is relatively high in deep computational histories, but the first person cannot know that, and below her substitution level she might just as well jump onto an infinity of aberrant stories.

Neurophysiology makes the problem even more complex, because it seems the brain already extracts information from noise, so we can easily see lions where there are none. We have to explain why the UD does not make them even more frequent from the point of view of the first person. Their high cost in the first person plural situation (the physical) will not be lifted automatically for the first person points of view. But I don't exclude that OCCAM can get rid of them. UDA just shows that this would ultimately be equivalent to a derivation of the physical laws, including the isotropy condition, geometrical homogeneity, linearity and symmetries, from the digital structure and its digital observers (keeping in mind that this defines only a flux of consciousness which differentiates on the limit: the first person is distributed on the limit of the "UD work").

The derivation of physics from addition and multiplication should be equivalent to the elimination of the first person plural white rabbits. If Bp & Dp (& p) gives the right logic of observation, it will remain hard to eliminate the 3-WR properly. The measure one has to be extended to the whole probability calculus, and even if we extract the quantum calculus, we have to get the right corresponding part on the qualia to handle the 1-rabbit. Interviewing the universal machine is probably not the shortest way to figure out the reason for the quanta, but I think it might be the only way to handle the qualia, and so to handle the (pure, singular) first person WRs.

The quantum shadow of the bodies also appears in pure number theory, with the Riemann zeta function and with the positive integer partition function (where even gravity seems to emerge), but if we extract the body without the whole theology, we might eliminate the person for more than another millennium.

The advantage of the Löbian interview is that we keep track of the difference between the internal views, and so we keep track of the qualia/quanta distinction, without eliminating the (first) person at all. Practically, the first person white rabbits are also those which might play some role "near death", and intermediate real dreams are not excluded. Computer science promises many jumps, gaps, and surprises. With comp and the interview, we are only at the beginning of the beginning, I'm afraid. It is fortunate that Platonists are patient :)

I hope I was not too unclear.


Bruno

http://iridia.ulb.ac.be/~marchal/


