On 11 Aug 2017, at 13:40, Bruce Kellett wrote:
On 11/08/2017 7:13 pm, Bruno Marchal wrote:
On 11 Aug 2017, at 02:11, Bruce Kellett wrote:
On 11/08/2017 9:45 am, Stathis Papaioannou wrote:
"What will I see tomorrow?" is meaningful and does not contain
any false propositions. Humans who are fully aware that there
will be multiple copies understand the question and can use it
consistently, and as I have tried to demonstrate even animals
have an instinctive understanding of it. Probabilities can be
consistently calculated using the assumption that I will
experience being one and only one of the multiple future copies,
and these probabilities can be used to plan for the future and to
run successful business ventures. If you still insist it is
gibberish, that calls into question your usage of the word
"gibberish".
Not everyone will be successful in this scenario. No matter how
many duplication cycles are gone through, there will always be
one individual at the end who has not received any reward at all
(he has never seen Washington :-)). This is the problem of
"monster sequences" that is so troublesome for understanding
probability in Everett QM.
? You might elaborate. It looks like the white rabbit problem.
It has nothing to do with the white rabbit problem. In the
duplication model, each iteration gives W and M, each with unit
probability.
In the third person view, OK. That is part of the statement of the
problem, which concerns the first person points of view.
This is a trivial consequence of the fact that there is a person
created in W and in M every time, so we know in advance that these
occur necessarily.
OK. (assuming mechanism)
So after N iterations of the duplication (each person is re-duplicated
on each iteration), there are 2^N sequences.
Indeed.
One of these sequences will be N occurrences of M, and one will be N
occurrences of W. So the prediction of the person with N occurrences
of M, based on induction from his past experiences, will be M, with
p = 1.
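A toy enumeration makes the bookkeeping explicit (this is only the
third person count, in Python, nothing more):

from itertools import product

# Enumerate the 2^N third-person histories after N duplication cycles.
N = 4
histories = list(product("WM", repeat=N))
print(len(histories))                                # 2^N = 16
print(sum(1 for h in histories if set(h) == {"M"}))  # exactly one all-M history

However many cycles are run, that single all-M history is always
there: the individual who has never seen Washington.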
Not if the person is rational and understands mechanism. Even if you
have thrown a fair coin 1000 times and got heads every time, the
probability of getting heads on the next throw is still 1/2.
If the iteration is continued, most of the copies will confirm that
p = 1 was wrong, and, by definition of the first person and of
mechanism, we have to take their feelings into account.
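To make my coin point concrete (a small sketch, not part of the
protocol itself):

import random

# Among fair-coin sequences that begin with a run of k heads, the next
# flip is still heads only about half the time: the past run carries
# no weight.
k, trials = 8, 200000
runs = heads_next = 0
for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(k + 1)]
    if all(flips[:k]):       # condition on a run of k heads
        runs += 1
        heads_next += flips[k]
print(heads_next / runs)     # close to 0.5

The same holds for the copies: whatever their particular past history,
the next duplication splits them again.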
Similarly, for the person with N occurrences of W, his prediction
will be p(W) = 1.
Similarly wrong.
People from other sequences predict W or M with varying
probabilities. Very few actually predict p(W) = p(M) = 1/2.
They are incompressible in the limit, and that behaviour shows up
quickly. 1/2 is provably the best bet, due to the provable
incompressibility of the vast majority of sequences.
In the duplication scenario, the third person view enables one to
put a natural measure over these sequences -- just by counting the
number of sequences with particular relative frequencies. The low
measure (probability) sequences are those known as "monster
sequences" in Everett QM, and they can be seen to be of small
measure in the classical duplication scenario.
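Made concrete (this is only the counting; it derives nothing):

from math import comb

# Counting measure over the 2^N sequences: the class with k occurrences
# of W has weight C(N, k) / 2^N, i.e. row N of Pascal's triangle,
# normalised.
N = 100
w = [comb(N, k) / 2**N for k in range(N + 1)]
print(w[0], w[N])      # the two extreme 'monster' sequences: ~8e-31 each
print(sum(w[40:61]))   # relative frequency within 0.1 of 1/2: ~0.96

The monster sequences are still there, but they carry a vanishing
share of the count.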
Very good; so you did get the point. That was not apparent from above.
The problem in QM is that no external observer is possible.
That is the problem with Copenhagen QM.
A probabilistic interpretation then becomes problematic because we
cannot count over all the sequences: we only have the one sequence
that we actually observe, and we can have no way of knowing whether
or not what we have observed is a "monster sequence". This gives
rise to the question as to whether observation can ever be a
reliable guide for determining the underlying probabilities -- how
can we use any sequence of observed results as a test of some
theory? The sequence we have observed might, for all we know, be
some 'monster sequence' of very low probability.
Yes, that is science.
We cannot prove there is a reality, and no experiment can prove
anything about that possible (or not) reality.
But we have beliefs, some more solid than others, and when we do
experiments, either our beliefs are confirmed, and we learn nothing,
or our beliefs are refuted, and we learn something. If a very solid
belief is refuted, we learn a lot.
The problem is usually circumvented by assuming a probabilistic
model from the start, but that is imposed from the outside and does
not arise from the theory itself.
Until Everett realized that the probabilities were *first person
(plural)*, and relative. But he uses digital mechanism, which
"aggravates" the situation, in the sense that now we have to extract
the universal wave from a sum over all computations, or explain why
the classical aberrant dreams are rare, and, a question: are they
rare in the near-death first person experience?
Deutsch and Wallace get around the problem in this way -- they
assume at the outset that small amplitudes correspond to small
probabilities, so monster sequences are assumed to be very unlikely,
and observed frequencies are assumed to converge towards the true
underlying probabilities. But then, this convergence is not uniform,
or even necessarily monotonic: the best one can say is that observed
frequencies tend to converge only in probability to the true
probabilities.
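A toy run shows what "convergence in probability" looks like in
practice (it assumes precisely the contested premise, namely that
outcomes behave like i.i.d. Bernoulli trials with the Born weight p as
parameter):

import random

# Running relative frequency of an outcome with weight p: it drifts
# toward p, but not monotonically, and any single run can still be
# atypical.
random.seed(1)
p, count = 0.3, 0
for n in range(1, 10001):
    count += random.random() < p
    if n in (10, 100, 1000, 10000):
        print(n, count / n)

Nothing in the run itself certifies that it is not a monster sequence.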
Probably so :)
Hence there is circularity inherent in any such approach to
probability in Everett QM, where every outcome occurs with
probability equal to one. Deutsch and Wallace do not avoid this
circularity in their attempts to derive the Born Rule.
OK, but they try. But even if they succeed, they still miss the first
person, which is not even invoked, and so they miss that the only way
to get both the physical and the psychological perspective right is to
study the "observer's" ability to introspect itself, and what,
mathematically, it can prove about itself and its consistent or sound
continuations.
Everett just decided that the physical laws apply to the
physicists.
Gödel did the same in math, somehow, with the birth of
metamathematics: the mathematical study of mathematical theories, and
of the relation between truth and proofs in general.
In that approach, we can distinguish many ways a universal machinery
can look at itself, and isolate from inside the unique measure which
has to exist, if digital mechanism is assumed.
The problem of Deutsch and Wallace and many physicists is that they
take the brain-mind identity thesis for granted; even Everett did it
explicitly. With mechanism, this is corrected, and, to be short, we
need to use the theological structure imposed by the difference
between the provable and the true.
I see the problem with mechanism (indeed that is the result of the
UDA: there is a measure problem on first person experiences), but in
Everett the problem is solved by Feynman phase randomization,
itself justifiable from Gleason's theorem. Then the math of self-
reference shows that, very possibly, Gleason's theorem will solve
the classical case too, given that we find the quantum logics at
the places needed.
Everett does not solve the measure problem, or give any non-circular
account of probability in QM: Feynman phase randomization is a
possible solution to white rabbits, but it has nothing to do with
the origin of probabilities.
OK.
You make me think that John Clark is right. The digital mechanist
self-duplication explains where the probabilities come from. Below our
substitution level, an infinity of universal numbers compete ("the
multiverse"); above our substitution level, a finite number of
universal numbers compete (cosmos, earth, colleagues, FORTRAN IV,
bacteria, family, ...).
Gleason's theorem does not avoid the circularity problem either. All
that Gleason's theorem demonstrates is that, for spaces of dimension
greater than two, any viable probabilistic interpretation has to
accord with the Born Rule. But that does not demonstrate that one
can actually have a probabilistic interpretation in the many worlds
case.
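For reference, the content of the theorem as I understand it: for a
Hilbert space of dimension at least three, any assignment mu of
non-negative numbers to projections with mu(I) = 1 and
mu(P1 + P2 + ...) = mu(P1) + mu(P2) + ... for mutually orthogonal Pi
must have the form mu(P) = Tr(rho P) for some density operator rho.
That constrains the form of any probability measure over outcomes; it
does not, by itself, establish that the branching picture supplies
one.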
With mechanism, you might be right. It is a probability only up to
some renormalization, but in the big internal picture of any universal
machinery (in the Post, Turing, Kleene, Church sense), it is a measure
of plausibility, and the limit of all possible renormalizations of
arithmetic is not between 0 and 1, but between 0 and infinity.
Zurek is quite dismissive of Gleason's theorem because, as he says,
it assumes the additivity of probabilities, rather than deriving
this result from within the theory.
Of course, with mechanism, you get them from the boolean structure,
which is perfectly well determined by Pascal's triangle of numbers.
I mean in the ultra-simple case of the iterated self-duplication. It
is equivalent to a Random Oracle. In that case the computable
sequences are among the white rabbit/monster sequences.
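To be concrete about the triangle (a simple check of the counting,
nothing deep, in Python):

from math import comb

# Row N of Pascal's triangle, normalised by 2^N, is the counting
# measure on the frequency classes of the 2^N histories; additivity is
# automatic, because disjoint classes of histories just add their
# counts.
N = 10
row = [comb(N, k) for k in range(N + 1)]
assert sum(row) == 2**N              # normalisation
a = sum(row[:4]) / 2**N              # weight of 'fewer than 4 W'
b = sum(row[4:]) / 2**N              # weight of 'at least 4 W'
assert abs(a + b - 1.0) < 1e-12      # additivity over disjoint classes
print(row)

This is the sense in which the boolean counting already carries the
additivity that Zurek worries about.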
In the "real" case, in front of the sigma_1 reality, I prefer to
tackle the problem by the arithmetical indexicalization of the person.
Gödel, Löb and eventually the summing-up theorem of Solovay paved the
way, and give a tool to formulate and partially solve the problem, and
up to now, the evidence are this might work, and that if this does not
work, well, we will learn something.
You have to show that results in QM give a model that satisfies
probability axioms, such as those of Kolmogorov --
Let me first get QM. Let us all see if the universal machine gets QM.
one can't just assume from the start that these axioms apply.
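(For definiteness, the axioms in question: P(A) >= 0 for every event
A, P(Omega) = 1, and P(A1 ∪ A2 ∪ ...) = P(A1) + P(A2) + ... for
pairwise disjoint events Ai.)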
You are right. But it is very difficult. Sometimes I think that a
"proof" in arithmetic of P = 1/2 will require the Riemann Hypothesis.
The reason is that the infinities of the prime numbers might encode
the complete complexity of the relation between addition and
multiplication, and reflect the fact that addition+multiplication is
already Turing universal. Then the Riemann zeros would give the
spectrum of a universal quantum system. And the probabilities would be
the usual boolean ones in the outer picture, and the standard
epistemic ones in the many first person, inner, pictures.
This is one of the main strengths of Zurek's 'envariance' approach,
based as it is on the symmetries of entanglement -- he does not
have to assume a measure (probability) or probability axioms; he
derives them from entanglement.
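Roughly, as I understand Zurek's argument, the equal-amplitude step
goes like this: for |psi> = (|0>_S|0>_E + |1>_S|1>_E)/sqrt(2), a swap
of the two outcomes on the system S can be undone by a swap performed
on the environment E alone. Since an operation confined to E cannot
change anything locally knowable about S, the swap on S cannot have
changed the probabilities either, so the two outcomes must be
equiprobable. Unequal amplitudes are then handled by fine-graining the
environment into equal-amplitude pieces, which yields the Born
weights.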
I have heard about it. It seems very interesting.
Are you defending John Clark? That would be nice! He has convinced
nobody for years, and some help might be handy.
I think that John does have a point -- the prediction of
probabilities different from unity is possible only in a third
person overview of the situation.
?
I do not see the relation with John Clark's idea that there is no
first person indeterminacy. The indeterminacy is that, if I am asked
where I will feel myself to be after the pushing, the correct
(assuming mechanism) answer is "I don't know".
The prediction p(M) = p(W) = 1 is all that the set up actually
allows one to conclude prior to the duplication.
That contradicts what you say above. What you say here is correct
from a third person description made by someone not doing the
experiment, or by someone doing the experiment and still describing
the outcome in the third person way, like John C. did once. The guy in
Washington can say "I am in Washington and Moscow", but then he does
not answer the question, which is about the city seen, not the city
seen plus the city imagined by hoping that the doppelgangers have been
well reconstituted too.
Are you telling us that P(W) ≠ P(M) ≠ 1/2? What do *you* expect
when pushing the button in Helsinki?
I expect to die, to be 'cut', according to the protocol. The guys in
W and M are two new persons, and neither was around in H to make any
prediction whatsoever.
Fair enough.
You think the digital mechanism thesis is wrong.
Personally I do not argue about true or false. My point is only that
it is testable, and that it fits well with the observations until now.
Incidentally, this provides a rationalist conception of a notion of
(universal) person playing the role of the main building block in the
appearances. It might help people to learn to listen to themselves,
and to favor the spiritual over the material, perhaps, for a change.
My meta-meta-goal is just to illustrate that it is about time for
theology to come back to the academy of science, and to allow people
to doubt, and be skeptical, in the fundamental field.
The intuition of the mystic is right: we are at the center of the
universe, we are the builders of the realities. But "we" is taken in
the large sense of "universal Turing machine" or "universal number".
Well, the Löbian numbers, or Gödel-Löbian numbers.
The definition is in Gödel's 1931 paper, i.e. in Davis's Dover book
"The Undecidable", from page 17 to 22. It is a simple sequence of 46
definitions starting from division and arriving at the non-computable,
but semi-computable, Beweisbar: the predicate whose propositional
logic Solovay later showed is given by G and G*.
God created the Natural Numbers, and looking at them, he said "good".
Then God told the Natural Numbers "add yourselves", and looking at
that, he said "good".
Then God told the Natural Numbers "multiply yourselves", and looking
at the result, he said ... "oops".
Bruno
Bruce
http://iridia.ulb.ac.be/~marchal/