On Wed, Apr 17, 2002 at 08:36:29PM -0700, H J Ruhl wrote:
> I am interested because currently I find it impossible to support the
> concept of a decision.
I was also having the problem of figuring out how to make sense of the
concept of a decision. My current philosophy is that you can have
preferences about what happens in a number of universes, where each
universe is defined by a complete mathematical description (for example an
algorithm with no inputs for computing that universe). So you could say "I
wish this event would occur in the universe computed by algorithm A, and
that event would occur in the universe computed by algorithm B." Whether
or not those events actually do occur is mathematically determined, but if
you are inside those universes, parts of their histories computationally
or logically depend on your actions. In that case you're in principle
unable to compute your own choices from the description of the universe,
and you also can't compute any events that depend on your choices. That
leaves you free to say "If I do X the following will occur in universes A
and B" even if it is actually mathematically impossible for you to do X in
universes A and B. You can then make whatever choice best satisfies your
preferences. Decision theory is then about how to determine which choice
that is.
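That choice procedure can be sketched as a toy program (this is my own
illustration, not part of any formal theory; the actions, events, and
utilities are all made up):

```python
# Toy sketch: an agent that cannot compute its own choice from the
# universe's description, but can still reason "if I do X, the
# following will occur in universes A and B" and pick the action
# whose predicted consequences it prefers.

# Hypothetical counterfactual model: what each action would bring
# about in two universes A and B.
consequences = {
    "X": {"A": "event_1", "B": "event_2"},
    "Y": {"A": "event_3", "B": "event_2"},
}

# Made-up preferences over events, expressed as utilities.
utility = {"event_1": 2, "event_2": 1, "event_3": 0}

def choose(consequences, utility):
    """Return the action whose predicted consequences, summed across
    universes, best satisfy the agent's preferences."""
    return max(
        consequences,
        key=lambda a: sum(utility[e] for e in consequences[a].values()),
    )

print(choose(consequences, utility))  # "X" (total utility 3 vs. 1)
```

The point of the sketch is only that the agent ranks counterfactual
consequences rather than computing its actual (mathematically
determined) behavior.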
That's the normative approach. The positive approach is the following.
Look at the parts of the multiverse that we can observe or simulate.
How can we explain or predict the behavior of intelligent beings in the
observable/simulatable multiverse? One way is to present a model of
decision theory and show that most intelligent beings we observed or
simulated follow the model. We can also justify the model by showing that
if those beings did not behave the way the model says they should, we
would not have been able to observe or simulate them (for example because
they would have been evolutionarily unsuccessful).