>I'm not sure I do understand it. Do you think the "measure of comp
>indeterminacy" is relevant in decision theory? Before you answer that
>question, please read the book _Foundations of Causal Decision Theory_
>that I recommended earlier so you can understand what it is that I'm
>asking. If the answer is yes, then please explain how it can be used in
>decision theory. If the answer is no, then I believe the "lemma" is
OK, I will try to read Joyce's book as soon as possible. In general I am quite
skeptical about the use of the notion of "causality". I also do not understand
your posts in which you argue for a relationship between the search for a TOE
and decision theory.
> > Perhaps the trouble is that you are not really aware of the mind-body
>I'm aware of *a* mind-body problem. I'm not sure if it's the same one you
>have in mind. The one I have in mind is this: how do I derive a
>probability distribution for the (absolute) SSA from a third-person
>description of the multiverse?
The mind-body problem I am talking about is the one formulated by Descartes
(and by Hindu philosophers long before him). It is really the problem
of linking private (first-person) sensations to third-person communicable
phenomena: how does a grey brain produce the sensation of color, as someone put it.
> > This is where the list splits in two. Those aware of a measure problem
>> and the other ... It is linked to our all debate between RSSA and ASSA
>> (Relative Self-Sampling Assumption/Absolute SSA).
>I guess we're going in circles a little bit, except I now have a better
>understanding of how decision theory would work given the absolute SSA. (I
>still have to write down my thoughts on this matter.) But I still do not
>see how it could work with the RSSA.
Remember that I have argued that the RSSA is not an option but follows from the
comp hypothesis in cognitive science.
> > Because we want to understand the nature of reality and where does it comes
>I think we can do that without first-person indeterminancy.
Not with comp, in my opinion. We *face* first-person indeterminacy; we just
cannot throw it away.
> > Consciousness is part of the data we must explain,
>That's still a hard problem (i.e. we have no theory of consciousness) but
>I don't see how first-person indeterminancy helps.
It is not intended to help us. It is part of the computationalist formulation
of the problem.
>I think that's fairly easy, at least conceptually. It's "just" a matter of
>showing that almost all observer-moments are experiencing or
>remembering lawful phenomena. Again, I think that can be done without
> > With comp the laws emerge from the relation between numbers, but as
>> seen from inside, so that I don't see how to avoid the modal distinctions in
>> a search toward a toe. I think you fail to appreciate the "proof" character
>> of the uda.
>What axioms are you assuming for the proof?
Those I have encapsulated in the label "comp". Precisely, it consists in:
1) accepting a minimal amount of arithmetical realism, i.e. the truth of
elementary statements of arithmetic does not depend on me or us ...
2) the Church Thesis (also called the Church-Turing Thesis, the Post Law,
etc.), i.e. all universal machines are equivalent with respect to their
simulation abilities (abstracting from the duration of those simulations).
3) the existence of a level of description of my body (whatever it is) such
that my first-person experience remains invariant under a functional
substitution made at that level.
(Note that the Arithmetical UDA makes it possible to eliminate "3)" above.)
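Since the Church Thesis is doing real work in "2)", a toy Python illustration of
the equivalence claim may help. Here Python's `exec` plays the role of a
universal machine that, given the *description* of another machine, simulates
it; the names `universal` and `square_src` are mine, not standard terminology:

```python
def universal(program, arg):
    """A 'universal machine': takes the description of another machine
    (here, Python source text defining a function f) and simulates
    running that machine on the given input."""
    env = {}
    exec(program, env)      # load the described machine
    return env["f"](arg)    # run it on the input

square_src = "def f(x):\n    return x * x"
print(universal(square_src, 7))   # the universal machine simulates 'square' -> 49
```

Any other universal machine could simulate this one in turn; only the running
time of the simulation differs, which is exactly what "2)" abstracts from.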
>> I'm not sure you are not a sound machine especially when proving
>> things on numbers. In the worst case take my "restriction" as a
>> simplifying assumption. But I don't think there is any restriction here.
>> The amazing fact is that the sound machine has (through the Z logics) an
>> amazingly large non-monotonical layer. Remember that "inconsistency" has
> > been shown consistent in Peano Arithmetic, ZF theory, etc. Sound machines,
>> sound at the basic level I interrogate them, can be consistently
>> once entangled to deep computational histories. I am taking the full nuance
>> given by the second incompleteness theorem into account here.
>Unfortunately I don't understand much of what you say in this paragraph,
>especially these two sentences:
>> Remember that "inconsistency" has
>> been shown consistent in Peano Arithmetic, ZF theory, etc. Sound machines,
>> sound at the basic level I interrogate them, can be consistently
>> once entangled to deep computational histories.
>The first sentence obviously refers to some result from metamathematics,
>but you have to remember that I'm still learning about it and be more
>explicit (at least use a standard name to refer to the result so I can
>look it up).
I was referring to Gödel's second incompleteness theorem: a consistent
machine cannot prove its own consistency. This means that if you add the
inconsistency as a new axiom, the machine will not derive a contradiction
(because if the machine could derive a contradiction from her own
inconsistency, she would thereby prove her consistency by reductio ad
absurdum). So a consistent machine does not become inconsistent when she
asserts her own inconsistency.
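The reductio can be written compactly; writing Con(T) for the arithmetized
consistency statement of a consistent theory T such as PA or ZF:

```latex
% Gödel's second incompleteness theorem:
\mathrm{Con}(T) \;\Rightarrow\; T \nvdash \mathrm{Con}(T)

% Suppose T + \neg\mathrm{Con}(T) were inconsistent:
T + \neg\mathrm{Con}(T) \vdash \bot
\;\Rightarrow\; T \vdash \neg\neg\mathrm{Con}(T)
\;\Rightarrow\; T \vdash \mathrm{Con}(T)

% which contradicts the theorem.  Hence:
\mathrm{Con}(T) \;\Rightarrow\; \mathrm{Con}\bigl(T + \neg\mathrm{Con}(T)\bigr)
```

This is the precise sense in which "inconsistency" has been shown consistent
with Peano Arithmetic, ZF, etc.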
>I have no idea what the second sentence means.
OK. It is not clear and not really relevant.
> > You would be really unsound, through my basic use of the term, only if you
>> were able to give a finite proof of a false proposition, not just pretend
>> you could find it.
>Of course I can give a finite proof of a false proposition. I just have to
>use an unsound logic to do the proof. If you restrict us to using sound
>logics, then nobody can give a finite proof of a false proposition, so if
>that's your definition of sound machine, I'm not sure what the point of
>qualifying "machine" with "sound" is.
Of course I restrict us, and the machines I interview, to sound logics. Why
should I interview unsound machines? It would be like a historian working on
a biography of Napoleon interviewing a madman in an asylum who pretends
to be Napoleon. I limit my interviews to sound machines for the same reason
I would stop reading papers by someone once I realized he was systematically
using an unsound theory. Except for clinical cases, I have never found
anyone using an unsound logic.
> > Because with comp the relation between our first person inspired decisions
>> and third person (or first person plural) realities remains to be explained.
>> The comp indeterminacy, by the invariance lemma, pertains on the whole
>> of UD* (the computationalist form of everything).
>> Of course you can forget comp, postulate a reality (defined by what you
>> see and expect to see) and take your decisions. But we are looking for
>> a toe, not a recipe for life. You don't need quantum gravity for everyday
>> decision do you? But you can imagine quantum gravity being related to the
>> search of a TOE, ok? Well, what I try to say is that if you take seriously
>> the hypothesis that our private experience are invariant for functional
>> substitution at some level, then the ultimate explanation of quantum gravity
>> is accessible by UTMS pure introspection, and that the toe is a mixture
>> of machine's machine psychology (G), and machine psychology (G*).
>> The advantage of my way is that it gives an explanation for the origin
>> of physical laws and at the same time of physical sensations. It
>>has no direct
>> use in decision theory, except perhaps by predicting new phenomena, perhaps
>> exploitable, like any new theory.
>> No doubt I make simplifications here and there, but what remains is
>> very complex and unknown. I would be glad if someone find a flaw or some
>> implicit hypothesis I use unconsciously ...
>I would be happy to try to find some flaw or implicit hypothesis in your
>argument, but I'm still waiting for the English paper you've promised us.
>:( Until then I'm just trying to understand your explicit assumptions and
Fair enough. It is discussions like this one that raise the probability
that I will write that paper.
>I don't have to explain how I "keep being in the same computation" because
>I don't know or claim that. I'm not sure that's even a meaningful
It seems to me you claim it in your next sentence, here:
>All I do claim is that for any given computation, if I am in
>that computation, I care about the future version of me in that
>computation, and I can causally affect its future (and only its future).
>In other words, the causal influence of my actions stay in the same
The whole point of the UDA thought experiment consists in showing that
expressions like "I am in that computation" are not well defined. The UDA
also shows that we have many futures ("future", by the way, is a
first-person construct: there is no notion of future in any "block-reality"
approach).
The fact that "I can causally affect its future" is not clear at all,
and any clearer version of it should be justified.
Let me give you a simple example.
Suppose you decide to drink a cup of coffee.
You will prepare that cup of coffee hoping this will causally affect "its"
(yours!) future, in such a way that you have the first-person experience
of drinking that cup of coffee.
But the UD, because it is shallow, will generate an infinite number of
computations in which you experience drinking a cup of tea (if not a
white rabbit), and this even though you have the same experience of the
past, which includes your preparing that cup of coffee.
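The dovetailing idea behind the UD can be sketched in a few lines. This is
only a toy diagonal schedule, not the actual UD (which enumerates and runs all
programs); the names `dovetail`, `machine` and `step` are mine:

```python
def dovetail(stages):
    """Toy universal dovetailer: at stage n, run each of the first n+1
    machines for one more step, yielding (machine, step) events.
    No machine is ever starved, even though each one may run forever."""
    for stage in range(stages):
        for machine in range(stage + 1):
            yield (machine, stage - machine)

events = list(dovetail(4))
# After 4 stages, machine 0 has been given steps 0..3, machine 1 steps
# 0..2, and so on; in the limit every machine gets every step, which is
# how the UD generates all computations, tea-drinking variants included.
```

The diagonal trick is what lets a single sequential process interleave
infinitely many (possibly non-halting) computations.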
The "invariance lemma" prevents any "easy" use of a Kolmogorov-complexity
style notion for dismissing those abnormal stories.
The comp indeterminacy hints at transforming that problem into the search
for a measure, and at showing that the relatively abnormal consistent
extensions/stories are rare.
This is not unlike Feynman's integration over paths in quantum mechanics.