Rolf Nelson wrote:
> Wei, your examples are convincing, although other decision models have
> similar problems. If your two examples were the only problems that
> UDASSA had, I would have few qualms about adopting it over the other
> decision models I've seen. Note that even if you adopt a decision
> model, you still in practice (as a human being) can keep an all-
> purpose "escape hatch" where you can go against your formal model if
> there are edge cases where you dislike its results.

For me, this line of thought started with the question "what does 
probability mean if everything exists?" (Actually, before that I had thought 
about "what does probability mean if brain copying is possible?") I've 
entertained many different possible answers. I looked at decision theories 
not because I was looking for a decision procedure to adopt, but because that 
is one way probability is interpreted and justified. I'm actually more 
interested in the philosophical issues than in the practical ones.

Besides, if you program a decision procedure into an AI, it had better be 
flawless because there may be no "escape hatches".

> In other words, I would prioritize "UDASSA doesn't yet make many
> falsifiable predictions" and "We don't see a total ordering of points
> in spacetime, so UDASSA probably doesn't run on a typical Turing
> Machine" as larger problems. But sure, if UDASSA can be improved to
> solve the morality edge-cases that you gave, I'm all for the
> improvements.

I consider UD+ASSA to be a theory of how people reason, or how they ought to 
reason, and as such, it does make falsifiable predictions. In fact, as I 
showed in several examples, the predictions have been falsified.

Regarding your comment "We don't see a total ordering of points in spacetime, 
so UDASSA probably doesn't run on a typical Turing Machine": I don't follow 
your reasoning as to why UD+ASSA plus a typical TM implies that we should see 
a total ordering of points in spacetime. Isn't it possible that such an 
ordering exists internal to the TM's program, but is not visible to the 
people inside the universe that the TM simulates?
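The point that a simulator's internal ordering need not be visible from inside can be made concrete with a toy sketch (my illustration, not from the original post). Here a program advances a cellular automaton by updating cells one at a time in a fixed total order, yet because each new cell is computed from a frozen snapshot of the old state, any update order yields the identical next state, so inhabitants of the simulation could never detect the order:

```python
def life_step(grid):
    """Advance a toroidal Game of Life grid by one synchronous tick.

    The loop imposes a total order on cell updates (row-major), but
    every new cell is computed from the *old* snapshot, so any
    iteration order produces the same next state.
    """
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):          # the simulator's "hidden" total order
        for c in range(cols):
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            new[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
    return new

# A glider pattern: it evolves identically no matter how the simulator
# orders its cell updates, so the ordering is invisible "from inside".
glider = [[0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [1, 1, 1, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
print(life_step(glider))
```

This is only an analogy, of course: the physics being simulated is order-invariant, so the total order exists in the simulating program without leaving any trace in the simulated world.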

> As far as our observations of the Universe, I don't quite follow: how
> can you go from "in terms of morality, probability is imperfect" to
> "there's no such thing as probability, therefore there's no measure
> problem?"

My reasoning goes like this:

1. We need to reinterpret probability, from "subjective degree of belief" to 
"how much do I care about something" in order to fix counterintuitive 
implications of decision theory.
2. Once we do that, we no longer seem to have a solution to the "measure 
problem."
3. Let's look closer at the nature of the problem. It seems to consist of 
two parts:
(A) Why am I living in an apparently lawful universe?
(B) Why should I expect the future to continue to be lawful?
4. I think (B) is the easier question, and I answered it in a previous post 
in this thread. (A) is more problematic, but my tentative answer is that, as 
Brent Meeker stated it, "among all possible worlds, somebody has to live in 
law-like ones; so it might as well be us."
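To illustrate step 1 (a toy sketch of my own, not from the post): whether the weights over possible worlds are read as "subjective probabilities" or as "how much I care about each world", the decision calculation itself is unchanged; only its justification differs. All numbers and names below are hypothetical:

```python
# Hypothetical weights per possible world: read them either as beliefs
# ("degree of belief I'm in this world") or as caring measures
# ("how much I care about what happens in this world").
worlds = {"lawful": 0.9, "chaotic": 0.1}

# Hypothetical utility of each action in each world.
utility = {
    "plan_ahead":   {"lawful": 10, "chaotic": 0},
    "act_randomly": {"lawful": 2,  "chaotic": 1},
}

def best_action(weights, utility):
    # Maximize the weight-averaged utility. The formula is identical
    # under either reading of `weights`; the reinterpretation changes
    # the philosophy, not the arithmetic.
    return max(utility,
               key=lambda a: sum(w * utility[a][s]
                                 for s, w in weights.items()))

print(best_action(worlds, utility))  # -> plan_ahead (0.9*10 > 0.9*2 + 0.1*1)
```

The reinterpretation matters in the edge cases discussed earlier in the thread (copying, simulation), where "degree of belief" and "degree of caring" can come apart.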

I'm out of time today, and will respond to your other post tomorrow.

You received this message because you are subscribed to the Google Groups 
"Everything List" group.