On 13 June 2005, at 15:39, Jesse Mazer wrote:
> Bruno Marchal wrote:
>> To Jesse: You apparently completely separate the probability of x and
>> x' from the similarity of x and x'. I am not sure that makes sense to
>> me. In particular, how could x and x' be similar if x', but not x,
>> involves a 'white rabbit event'?
> It's not completely separable, but I'd think that "similarity" would
> mostly be a function of memories, personality, etc. Even if I
> experience something very weird, I can still have basically the same
> mind. For example, a hoaxer could create a realistic animatronic
> talking white rabbit, and temporarily I might experience an
> observer-moment identical to what I'd experience if I saw a genuine
> white talking rabbit, so the "similarity" between my current
> experience and what I'd experience in a white-rabbit universe would be
> the same as the "similarity" between my current experience and what
> I'd experience in a universe where someone creates a realistic hoax. I
> don't think the first-person probabilities of experiencing hoaxes are
> somehow kept lower than what you'd expect from a third-person
> perspective, do you?
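A minimal Python sketch of the framework under discussion, assuming,
purely for illustration, that the first-person probability P(x -> y) is
proportional to similarity(x, y) times the third-person measure of y;
all names and numbers below are hypothetical:

    # Minimal sketch: first-person transition probability as
    # similarity-weighted measure. The similarity values and measures
    # are made-up illustrative numbers.

    def transition_probability(x, candidates, similarity, measure):
        """P(x -> y) proportional to similarity(x, y) * measure(y)."""
        weights = {y: similarity(x, y) * measure(y) for y in candidates}
        total = sum(weights.values())
        return {y: w / total for y, w in weights.items()}

    # A hoax-rabbit successor preserves the observer's mind-state, so its
    # similarity to x equals the lawful successor's; only the third-person
    # measure of the two continuations differs.
    similarity = lambda x, y: 1.0
    measure = {"lawful": 0.98, "hoax_rabbit": 0.02}.get

    print(transition_probability("x", ["lawful", "hoax_rabbit"],
                                 similarity, measure))
    # {'lawful': 0.98, 'hoax_rabbit': 0.02}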
Perhaps I misunderstood you, but it seems to me that if you ask me to
compute P(x -> y) (your notation), the similarity could, and even
should, change the predicted result. In particular, if the rabbit has
been generated by a genuine hoaxer, I would predict that the white
rabbit stays in y; and if the hoaxer is not genuine, then I would still
consider x and x' very dissimilar. What do you think? This follows
*also* from a relativisation of Hal Finney's theory based on
Kolmogorov complexity: a stable white rabbit is expensive in
informational resources. No?
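A toy numerical sketch of this cost argument, assuming a
Solomonoff-style 2^-length prior over programs; the per-moment bit
overhead is an invented illustrative figure, not something from the
thread:

    # Toy sketch of the cost argument, assuming a Solomonoff-style
    # 2^-length prior over programs. RABBIT_BITS, the extra description
    # length needed per moment to keep overriding the laws, is an
    # invented illustrative number.

    def prior_weight(extra_bits: int) -> float:
        """Measure penalty of a program carrying extra_bits beyond the laws."""
        return 2.0 ** -extra_bits

    RABBIT_BITS = 50  # assumed per-moment overhead of sustaining the anomaly

    for n in (1, 5, 20):
        # Ratio of the rabbit history's measure to the lawful history's:
        # 2^-(laws + k*n) / 2^-laws = 2^-(k*n), independent of the laws' length.
        ratio = prior_weight(RABBIT_BITS * n)
        print(f"rabbit stable for {n:2d} moments: relative measure = {ratio:.3e}")

The relative measure falls off exponentially with the number of moments
the rabbit persists, which is the sense in which a stable white rabbit
is 'expensive'.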
Bruno
http://iridia.ulb.ac.be/~marchal/