Title: Re: being inside a universe

Wei Dai <[EMAIL PROTECTED]> wrote:

Here's a new question for you, Bruno. What interpretation of probability
theory do you subscribe to? I've been saying that the meaning of
probabilities comes from decision theory, and specifically that a probability
only has meaning if it is actually relevant to making a decision.

I think we discussed this before (when I told you that this position
is not unlike the position of my (ex-)boss at IRIDIA). But probabilities never
really appeared in that context; credibilities appeared instead, somewhat akin
to Dempster-Shafer evidence theory. You can look at Philippe Smets' webpage
at http://iridia.ulb.ac.be/~psmets/ for links.
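For readers unfamiliar with it, Dempster-Shafer theory combines bodies of evidence with Dempster's rule rather than by Bayesian conditioning, assigning "credibility" mass to sets of hypotheses (including the whole frame, which represents ignorance). A minimal sketch; the two-hypothesis frame and all the masses are invented purely for illustration:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments,
    given as dicts mapping frozenset hypotheses to masses summing to 1."""
    raw, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass falling on the empty set
    # renormalize away the conflicting mass
    return {k: v / (1.0 - conflict) for k, v in raw.items()}

# Hypothetical frame of discernment {rain, sun} with two evidence sources
rain, sun = frozenset({"rain"}), frozenset({"sun"})
both = rain | sun                      # the whole frame = total ignorance
m1 = {rain: 0.6, both: 0.4}            # source 1 leans toward "rain"
m2 = {sun: 0.3, both: 0.7}             # source 2 weakly supports "sun"
m = combine(m1, m2)
```

Note how mass can sit on the ignorance set `both` without being split between the singletons, which is exactly what a single probability measure cannot express.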

So far no one
has posted a disagreement with that philosophy, but perhaps we don't all
agree. Would you like to clarify your position on this issue?

I think I do not disagree with that philosophy. Now in my approach
the only axiom I really need for the machine's rational beliefs about
probability is that [P(x) = 1] entails not [P(x) = 0]. (P(x) is
the probability of event x.) This
corresponds to the modal formula []A -> <>A. That is: if the event A
is necessary then it must be possible. This formula
is called D (for deontic: it is also basic in the modal approach to
obligation and permission). If D were not a theorem of the Z logic, I would
have stopped the machine interview and would be much more doubtful about comp.
It is an open problem how to treat more general notions of probability
in the language of a consistent machine. With the introduction of the self-
duplication experiment, probabilities become very counterintuitive. That's
why I give only a partial axiomatization and just listen to what a consistent
machine can say about that while remaining consistent.
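The correspondence mentioned above can be checked on toy Kripke models: []A -> <>A is valid exactly on serial frames, i.e. frames where every world can see at least one world. A minimal sketch, with frames and valuations invented for illustration:

```python
from itertools import product

def box(w, R, V):                # []A: A holds at every world w can see
    return all(V[u] for u in R[w])

def dia(w, R, V):                # <>A: A holds at some world w can see
    return any(V[u] for u in R[w])

def D_holds(w, R, V):            # the D axiom  []A -> <>A  at world w
    return (not box(w, R, V)) or dia(w, R, V)

serial = {1: [2], 2: [2]}        # every world has a successor
dead_end = {1: [], 2: [1]}       # world 1 has no successor

# D is valid on the serial frame, whatever the valuation of A...
for v1, v2 in product([True, False], repeat=2):
    assert all(D_holds(w, serial, {1: v1, 2: v2}) for w in serial)

# ...but fails at the dead end, where []A is vacuously true and <>A false
assert not D_holds(1, dead_end, {1: True, 2: True})
```

The dead-end world is where []A becomes vacuously true while <>A fails; seriality is precisely what rules such worlds out, matching the reading "probability one entails probability non-zero".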

On Wed, Jul 17, 2002 at 04:13:54PM +0200, Bruno Marchal wrote:
> The mind-body problem is hard to formulate purely formally because it
> searches for a link between the somewhat formal body and the non-formal mind.

I think you can formalize the problem, or at least an aspect of it, in the
language of decision theory. So perhaps you can come back to this question
after reading Joyce's book.

OK. But honestly my feeling is that "decision" is a higher-level psychological concept.

> Come on, I'm sure you see what I mean.  (Of course "functional substitution"
> is an interesting concept by itself. It would be just a slight exaggeration
> to say that the lambda calculus and even category theory have been invented
> for making that concept precise.) In the UDA frame, once the level of
> digital substitution has been chosen, a substitution is functional if it
> preserves the counterfactual input/output relations of the thing which
> is substituted.

You claimed that the concept of causality is problematic. So how do you
define "counterfactual input/output relations" without
reference to causality?

Imo, concepts like "causality", "free-will", "decision", etc. are fundamental
and very interesting *high-level psychological* notions.
The "counterfactual input/output relations" are semantically defined by UD*,
the running of all computations. But the "running" is itself defined
in arithmetic, or in the minimal set of inference rules needed to formalize the
notion of computation. This is partially confirmed by the formal similarity
between the Z logic and the Lewis/Hardegree/Stalnaker quantum-like approach to
the notion of relevance. It is also linked to the non-monotonic aspect of quantum
logic (cf. Hardegree). But the only low-level notion of causality in the comp
approach is the classical material implication: A is a cause of B if A is false
or B is true. From this a pyramid of causality notions can (and must) evolve.
I'm not sure this can make sense if you have no idea of the psycho/physics
comp reversal.
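That low-level reading of causality can be made concrete by tabulating the material implication; the function name here is mine, for illustration only:

```python
def implies(a, b):
    """Low-level 'causality' as classical material implication:
    A -> B is true exactly when A is false or B is true."""
    return (not a) or b

# Full truth table of the material implication
table = {(a, b): implies(a, b) for a in (True, False) for b in (True, False)}
```

The rows with a false antecedent all come out true (a false A "causes" everything), which is exactly why this notion of cause is so weak and why a pyramid of richer causality notions has to be built on top of it.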

> I have written more on this list than I will ever be able to write in
> a paper. I have begun at least four papers; I don't know if I will
> finish them. "Our field" overlaps too many disciplines. Either the
> papers grow too much, or they become relatively incomprehensible.
> Perhaps I should write a book instead. I don't know.
> I must think about that. Advice is welcome.

Sorry, I'm not an academic and have no idea how things work in that world.

Academia is like any other human society. It works like a happy family in the
best cases and like a mafia in the worst (but fortunately rarer, I hope) cases.

I just want you to write your ideas down in a comprehensive form that I
can understand.

I guess we are not on the same wavelength. It would really help me to write
that comprehensive form if I knew what your problem with the UDA is, unless
for some reason you reject the *hypothesis*. I would really
appreciate it if you could tell me at which step of the UDA you stop(*).
The comprehensive paper must begin with the UDA;
the AUDA without an understanding of the UDA is just a formal game.
I am open to the idea that something is wrong or unclear, or imprecise.
I mean, once you understand the difference between the 1-person and the 3-person,
once you understand the 1-comp indeterminacy, and once you understand that
if we are machines we cannot be aware of any delays in UD-computations (invariance
lemma), then it follows(**) that our futures are determined by all computations
going through, and relative to, our actual (1-actual) computational states.
At first this looks like a refutation of comp, because empirically we have good
reasons to believe in negative probability amplitudes, which intuitively
do not seem to appear through comp; but then, taking into account
Gödelian incompleteness, the translation of the UDA into the consistent
machine's language shows that the probability matter is more subtle than that.
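The counterintuitive first-person statistics of iterated self-duplication can be illustrated with a toy simulation; the W/M room labels and all parameters are invented for the sketch, and the pseudo-random generator merely stands in for the subject's inability to predict which continuation it will find itself in:

```python
import random

def first_person_history(steps, rng):
    """One first-person history through `steps` iterated self-duplications:
    at each step the subject is copied into two rooms, labelled W and M
    (hypothetical labels, after the usual duplication thought experiment).
    From the inside, each continuation looks like an unbiased binary event."""
    return "".join(rng.choice("WM") for _ in range(steps))

# Sample many first-person histories: almost all look random,
# with a W-frequency near 1/2 in the long run
histories = [first_person_history(100, random.Random(seed)) for seed in range(500)]
mean_w = sum(h.count("W") for h in histories) / (100 * 500)
```

Almost all of the possible histories are incompressible, so from the inside the successive outcomes look like fair coin tosses, even though the third-person description (both copies always exist) is completely deterministic.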

(*) If *anyone* finds a flaw or even an imprecision I would be grateful
if you let me know (link: http://www.escribe.com/science/theory/m3044.html ).

(**) Easily, with some Occam's Razor. Less easily without Occam: you need the
Movie Graph Argument if you don't accept Occam.
