At 16:54 05/11/03 -0500, Jesse Mazer wrote:

Hal Finney wrote:

One correction, in the descriptions below I should have said multiverse for all of them instead of universe. The distinction between the SSA and the SSSA is not multiverse vs universe, it is observers vs observer- moments. I'll send out an updated copy when I get some more links and/or corrections and new definitions.

Hal

> SSA - The Self-Sampling Assumption, which says that you should consider
> yourself as a randomly sampled observer from among all observers in the
> multiverse.
>
> SSSA - The Strong Self-Sampling Assumption, which says that you should
> consider this particular observer-moment you are experiencing as being
> randomly sampled from among all observer-moments in the universe.
>
> ASSA - The Absolute Self-Sampling Assumption, which says that you should
> consider your next observer-moment to be randomly sampled from among all
> observer-moments in the universe.
>
> RSSA - The Relative Self-Sampling Assumption, which says that you should
> consider your next observer-moment to be randomly sampled from among all
> observer-moments which come immediately after your current observer-moment
> and belong to the same observer.

In your definition of the ASSA, why do you define it in terms of your next observer moment? Wouldn't it be possible to have a version of the SSA where you consider your *current* observer moment to be randomly sampled from the set of all observer-moments, but you use something like the RSSA to guess what your next observer moment is likely to be like?
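The difference between the two sampling rules can be made concrete with a minimal toy sketch. Everything here (the observer names, the (observer, time) encoding of observer-moments) is an invented illustration, not anything from the thread:

```python
import random

# Toy model: an observer-moment is an (observer_id, time) pair.
# Two observers, three moments each -- purely illustrative.
moments = [("alice", t) for t in range(3)] + [("bob", t) for t in range(3)]

def assa_next():
    """ASSA: the 'next' moment is drawn from ALL observer-moments."""
    return random.choice(moments)

def rssa_next(current):
    """RSSA: the next moment is drawn only from the moments that come
    immediately after the current one AND belong to the same observer."""
    observer, t = current
    successors = [m for m in moments if m[0] == observer and m[1] == t + 1]
    return random.choice(successors) if successors else None  # cul-de-sac

current = ("alice", 0)
print(assa_next())         # could be any of the six moments
print(rssa_next(current))  # always ("alice", 1)
```

Under this toy encoding the question in the paragraph above amounts to: which of these two sampling functions should govern expectations about the next moment, and can the ASSA-style absolute measure coexist with RSSA-style prediction?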

Also, what about a weighted version of the ASSA? I believe other animals are conscious and thus would qualify as observers/observer-moments, which would suggest I am extraordinarily lucky to find myself as an observer-moment of what seems like the most intelligent species on the planet...but could there be an element of the anthropic principle here? Perhaps some kind of theory of consciousness would assign something like a "mental complexity" to different observer-moments, and the self-sampling assumption could be biased in favor of more complex minds.

Likewise, one might use a graded version of the RSSA to deal with "degrees of similarity", instead of having it be a simple either-or question whether a future observer-moment "belongs to the same observer", as in your definition. There could be some small probability that my next observer-moment will be of a completely different person, but in most cases it would be more likely that my next observer-moment would be basically similar to my current one. But one might also have to take into account the absolute measure on all observer-moments that I suggest above, so that if there is a very low absolute probability of a brain that can support a future observer-moment which is very similar to my current one (because, say, I am standing at ground zero of a nuclear explosion), then the relative probability of my next observer-moment being completely different would be higher. Again, one would need something like a theory of consciousness to quantify things like "degrees of similarity" and the details of how the tradeoff between relative probability and absolute probability would work.
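The tradeoff described above can be sketched as a product of two weights: a (hypothetical) similarity score and a (hypothetical) absolute measure for each candidate next moment. All numbers and names below are invented for illustration:

```python
# Graded RSSA sketch: P(next moment) proportional to
#   similarity(current, candidate) * absolute_measure(candidate).
# Candidates and numbers are illustrative assumptions only.
candidates = [
    # (description, similarity to current moment, absolute measure)
    ("near-identical continuation", 0.99, 1e-12),  # e.g. at ground zero
    ("completely different person", 0.01, 1.0),
]

def graded_rssa(cands):
    """Normalize similarity * measure into relative probabilities."""
    weights = [sim * measure for _, sim, measure in cands]
    total = sum(weights)
    return {desc: w / total for (desc, _, _), w in zip(cands, weights)}

probs = graded_rssa(candidates)
# When the absolute measure of similar continuations is tiny, the
# "completely different" next moment dominates despite low similarity.
```

This reproduces the ground-zero intuition in the paragraph: a near-perfect similarity score cannot compensate for a vanishing absolute measure.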

In my opinion, and if I understand Jesse Mazer properly, he is right. Now, with the comp hypothesis you have (obviously) constraints coming from computer science (itself related to number theory, including the universal one not depending on any particular implementation). A theory of consciousness which suits well both the traditional thought experiments (self-duplicability) and self-referential discourse can be extracted from what a machine can, in general, correctly bet on its possible consistent computational extensions. That move corresponds to comp-immortality: we just don't take into account the cul-de-sac worlds (which correspond to the worlds with no more accessible worlds in the Kripke semantics of the logic of self-reference). It is the move going from the logic of machine provability to the logic of machine "provability & consistency", or the move from []p to a *new* box defined by []p & -[]-p. From this (when p is restricted to the DU-accessible propositions, the \Sigma_1 propositions for the logicians), you get a quantum logic, from which you get, I think, the similarity relations you are searching for. (This is because from the yes/no quantum logic you can derive an angle of PI/2 radians, and from that angle you can derive all the angles, well, if THAT quantum logic behaves sufficiently well, and that's not yet clear at this point.) Of course at this point things are rather technical.
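The cul-de-sac move can be illustrated with a toy Kripke frame (the frame, world names, and valuation below are my own invented example, not from the thesis): at a world with no accessible worlds, []p holds vacuously while <>p (i.e. -[]-p) fails, so the new box []p & -[]-p is exactly what filters out the dead ends:

```python
# Toy Kripke frame: "dead_end" is a cul-de-sac world (no accessible worlds).
worlds = {"w0", "w1", "dead_end"}
access = {"w0": {"w1", "dead_end"}, "w1": {"w1"}, "dead_end": set()}
valuation = {"w0": True, "w1": True, "dead_end": True}  # p true everywhere

def box_p(w):
    """[]p: p holds in every world accessible from w (vacuously true
    at a cul-de-sac world, since there is nothing to check)."""
    return all(valuation[v] for v in access[w])

def diamond_p(w):
    """<>p, i.e. -[]-p: p holds in SOME world accessible from w."""
    return any(valuation[v] for v in access[w])

def new_box_p(w):
    """The 'provability & consistency' box: []p & -[]-p."""
    return box_p(w) and diamond_p(w)

print(box_p("dead_end"))      # True  (vacuously)
print(new_box_p("dead_end"))  # False (no accessible world at all)
```

So the new box agrees with []p everywhere except at cul-de-sac worlds, which is the sense in which the move "doesn't take them into account".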

Just to make a link with what Hal Finney said: I have indeed provided an argument showing that if we (I) are machines, then physics comes from computer science; but I have also provided the more technically involved arithmetical translation of that argument in the language of a "mean" self-referentially consistent universal Turing machine (from which I extract that special "quantum logic"). We can also drop "Turing" by the use of Church's Thesis (CT).

Hope that helps (to talk like Matt).

Bruno

PS A colleague of mine *did* find an error in my thesis, where I say that the more natural first candidate for the modelisation of the first person (the move from []p to []p & p) collapses into elementary logic when p is restricted to the \Sigma_1 sentences. I had indeed forgotten that this last move forces a weakening of the substitution rule. This makes the S4Grz1 logic also a non-trivial candidate for the quantum logic, and, because this one is a Brouwerian solipsistic logic, it is still open whether physics could rely on a solipsistic psychology, which I hope not! I realise also that we should perhaps need an acronym for the first-person/third-person distinction (which corresponds to the subjective/objective distinction in Everett's papers, is akin to the (though vague) Tegmark frog/bird distinction, and is absent in Schmidhuber's approach). I propose "the F/T distinction". Read more in the beginning of the UDA.