On Wed, Jan 02, 2002 at 05:36:19PM +0100, Marchal wrote:
> But I would insist that it is preferable to understand the UDA before
> the AUDA. Your last conversation with Hall Finney makes me
> suspect that you *do* have a "problem" with the self-duplication thought
> experiment and/or the 1-3 person pov distinction.
> UDA is much more simple than AUDA (except for the professional logician).
I have a problem with trying to quantify 1-indeterminacy. I'm not sure it's
a useful exercise, useful in the sense that a decision theory will involve it.
> To be honest I have not understood your answers to Hal Finney's last posts
> (I agree with Hal's remark though).
It's probably because I haven't explained my current overall position.
I'll try to do that now. Do you agree with the following?
1. All computational facts exist. In other words all statements of the
form "the output of GTM x converges to y" or "the output of GTM x doesn't
converge" have objective truth values.
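To make point 1 concrete, here is a toy illustration (mine, not part of the original argument): "the output of GTM x converges to y" can only ever be observed up to a finite horizon, since a generalized Turing machine may revise its output forever. The machines below are invented stand-ins, modeled as Python generators yielding successive outputs.

```python
def last_output_after(machine, steps):
    """Run `machine` for `steps` yields and return its latest output."""
    out = None
    for i, value in enumerate(machine):
        if i >= steps:
            break
        out = value
    return out

def adder():
    # A machine whose output settles immediately: it "converges to 2".
    while True:
        yield 1 + 1

def flipper():
    # A machine whose output never settles: it does not converge.
    bit = 0
    while True:
        yield bit
        bit ^= 1

print(last_output_after(adder(), 100))    # 2 at every horizon
```

The fact that `adder` converges and `flipper` doesn't is objectively true either way; what the sketch shows is that a finite observer can only accumulate evidence for convergence, never certify it.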
2. Any statement that has a truth value is equivalent to a computational
statement. For example "1+1=2" is equivalent to "The output of x is '2'"
where x is a GTM for computing 1+1. "I will win the lottery tomorrow with
m-measure at least 1/2 of my current m-measure, where m is defined as ..."
is equivalent to "The output of x converges to 'true'" where x is a GTM
that simulates all universes in parallel, while keeping track of an upper
bound on m(me-now) and a lower bound on m(me winning the lottery
tomorrow), and outputting 'true' when the ratio between the two drops below 2.
3. Caring about anything is equivalent to caring about computational
facts. That is, your goals are equivalent to goals of the form "I want the
output of GTM x to converge to y". This goal makes sense if you are part
of the computation of x and can influence its history. The problem of
consciousness, or the mind-body problem, then becomes: how do I tell which
computations I am a part of?
4. There is no objective standard about what goals one should have, or how
much weight one should put on each goal.
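The parallel-simulation machine in point 2 might be sketched as follows (a hypothetical gloss of mine: the measure m and the universe simulations are abstracted away into bare sequences of bounds, and all names and numbers are invented). The machine refines an upper bound on m(me-now) and a lower bound on m(me winning the lottery tomorrow), and outputs 'true' once the lower bound is certified to be at least half the upper bound:

```python
def ratio_machine(upper_bound_steps, lower_bound_steps, threshold=2.0):
    """
    upper_bound_steps: successive upper bounds on m(me-now).
    lower_bound_steps: successive lower bounds on m(winning).
    Yields the running verdict after each refinement. Since the bounds
    only ever tighten, once 'true' is yielded the output has converged.
    """
    upper = float('inf')
    lower = 0.0
    for u, l in zip(upper_bound_steps, lower_bound_steps):
        upper = min(upper, u)   # upper bounds can only shrink
        lower = max(lower, l)   # lower bounds can only grow
        if lower > 0 and upper / lower < threshold:
            yield 'true'        # m(winning) >= 1/2 * m(me-now) certified
        else:
            yield None          # undecided so far; keep simulating

# Example run with made-up bound sequences:
verdicts = list(ratio_machine([1.0, 0.9, 0.8], [0.1, 0.3, 0.5]))
# verdicts == [None, None, 'true']
```

Note the asymmetry the sketch makes visible: if the statement is true, the machine eventually outputs 'true' and its output converges, but if it is false the machine may simply simulate forever without ever settling, which is exactly why the truth value is objective even though it is only semi-decidable from the inside.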
From this position it appears that there is no need or room for an