On Tuesday, January 14, 2003, at 02:27 PM, Russell Standish wrote:

Dear Tim,
	Since you joined the list relatively recently, you're
unlikely to have come across a couple of examples in decision theory I
mentioned back in 1999
(http://www.escribe.com/science/theory/m781.html), namely with respect
to superannuation (pension insurance) and euthanasia. This has more to
do with QTI than MWI, but since many of us think an expectation of
subjective immortality is a corollary of the MWI, this is relevant.

OK, I found it. Thanks. Rather than read through the comments others made about your article, I'll comment on it as I found it:

"However, I thought I'd report on how QTI has influenced me on a couple
of items recently. The first was in making a decision whether to stay
with my present superannuation scheme (which provides for a pension
for life, based on one's salary at retirement) or to have the funds
invested in conventional shares and bonds, a possibly risky
strategy. With QTI, the life-long pension sounds like a good deal, and
is what I eventually chose, for that reason. However, I fully realise
that with QTI I am likely to outlive the pension fund, or that
inflation will erode its value so badly, that perhaps the decision is
not so clear-cut."

By now you know my main perspective on this: the only world in which your choices matter is the world you are in.

But even with your logic above, why not go for the gusto, grab the bull by the horns, and simply play quantum Russian roulette? The payoff (in the worlds you happen to survive in!) is better than any fuddy-duddy pension plan.

How, on your theory, do you expect to outlive your pension fund? Am I missing some assumption in this thread from a few years ago, namely that you, the actual you who is corresponding with me right here in this branch, somehow expect to live to an unusually old age?

What does your wife think of your plans?

As for investing, I invest exclusively in high tech companies. These have been severely beaten down in the 2000-present period, but my portfolio is still ahead of where it was in early 1998, way ahead of where it was in 1995, and vastly ahead of where it was in 1986 when I retired. And I only used good old reasoning about likely technologies, not any notions of killing myself if my investments failed to reach a certain return, etc.

I don't mean to disparage the ideas as weird, but they are.


"The second issue is in relation to euthanasia. I used to be in favour
of this, on the basis that I grew up on a farm, and understood the
phrase "putting it out of its misery". However, the process of
attempting to kill someone is only likely to increase their suffering
in those worlds where they survive. So now I'm against euthanasia, at
least until someone can convince me they can control the outcome of
the "merciful death" well enough to ensure that the patient is almost
always in a better world because of it."

I know of some pretty foolproof ways of committing suicide, ones with a 99.99999% likelihood of working. Jumping off a high enough place almost always works. Jumping out of an airplane at 40,000 feet with an explosive collar set to detonate at 1000 feet, oh, and a rope attached to the plane to break the jumper's neck: that's four different approaches, any one of which is overwhelmingly likely to work. (And if one doubts this, but others play the odds the usual way, arrange a betting pool with a payoff sufficient to make one vastly wealthy and able to buy the best pain-killing drugs in the 1 in 10^30 of worlds where one survives the noose, the freezing cold and oxygen deprivation, the explosive collar, and the impact with the ground.)
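For what it's worth, that 1 in 10^30 figure is just independent failure modes multiplying out. A minimal sketch in Python, where the per-method survival odds are pure assumptions on my part:

# Combined odds of surviving several independent methods at once.
# The per-method survival probabilities are illustrative assumptions,
# not measured figures.
methods = {
    "noose": 1e-7,                      # assumed chance the neck-break fails
    "cold and oxygen deprivation": 1e-7,
    "explosive collar": 1e-8,
    "impact with the ground": 1e-8,
}

p_survive_all = 1.0
for name, p in methods.items():
    p_survive_all *= p

print(f"Chance of surviving all {len(methods)} methods: {p_survive_all:.0e}")
# With these assumed figures: 1e-30, i.e. 1 in 10^30.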

Or pay a firing squad to do the job...shades of Tegmark's machine gunner. Except make sure a dozen rifles are used.

More easily, there are various combinations of pills which will do the job reliably. A friend of mine in college did it with cyanide from the chemistry lab...he didn't have a chance once he'd taken his dose. (Not that this compares with the 10^-30 chance, or whatever, that one can arrange as above. So combine this with a large bottle of Tylenol and, for good measure, be doing the backstroke about a mile offshore when the process is started...)

But using the logic about "increasing their suffering," have you also considered another implication of MWI: not only are they sick and may wish or need to be put down, but MWI guarantees all manner of _EVEN WORSE_ developments? Some worlds where the cancer runs really wild, leading to excruciating pain, tumors growing out of eyeballs, psychotic episodes where the sufferer kills his family, and, basically, every bad thing imaginable. There must be worlds where this is happening even now.

So, even using your logic, leaving a sick person to linger because the dangers of an unsuccessful "merciful death" are likely to be even worse is not itself an obvious conclusion. Which is why the usual decision theory process gives about the right answer without any appeal to many-worlds theories.
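To spell out what I mean by "the usual decision theory process": a back-of-the-envelope expected-utility comparison, sketched below in Python. Every probability and utility figure is an illustrative assumption, chosen only to show the mechanics.

# Ordinary single-world expected-utility comparison. All numbers are
# illustrative assumptions, chosen only to show the mechanics.
intervene = [(0.999, -1.0),   # quick, merciful death: ends the suffering
             (0.001, -50.0)]  # botched attempt: suffering increased

do_nothing = [(0.7, -20.0),   # prolonged decline
              (0.3, -60.0)]   # the "even worse" developments

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

print("E[U | intervene]  =", expected_utility(intervene))   # -1.049
print("E[U | do nothing] =", expected_utility(do_nothing))  # -32.0
# With these assumed numbers, intervening wins -- no branch-counting needed.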

(More notes: I've also known unsuccessful suicides and have read about others. They are usually not "worse off" in any way that matters for our exercise. They wake up in a hospital room, or the bleeding stops, or whatever. Only occasionally do suicides fail in ways that leave the person worse off. And in those cases, they can always try again, or have a hired killer waiting to finish the job.)

The argument wrt superannuation is that standard decision theory
should lead one to the same decision as the QTI case (a bit like life
insurance, except one is betting that one won't die rather than that
one will) - however, in such a case why would anyone choose lump-sum
investment options over a lifetime pension?
Because they think they can grow the sum more than the pension fund managers can. And since they can always take the grown amount and _buy_ an annuity later, if and when they get tired of investing, they lose nothing by trying.

(I hope you considered this benefit of the lump-sum payout. If you are at all bright, which you clearly are, you can grow the lump sum at least as well as government/bureaucrat pension fund managers typically do. At some point you can then buy a larger annuity than the one you picked in the conservative option. And if you die sooner than you expect, you at least had the use of that lump sum for things of importance to you.)
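As a concrete illustration of that flexibility, a sketch in Python. The starting sum, growth rates, and annuity rate are all made-up figures, not a claim about any actual fund:

# Lump sum vs. lifetime pension, with made-up figures. The point: the
# lump sum keeps the annuity option open, so one loses nothing by trying
# to out-grow the fund managers.
lump_sum = 500_000.0     # hypothetical payout
self_growth = 0.08       # assumed annual return investing it oneself
manager_growth = 0.05    # assumed return the pension managers achieve
annuity_rate = 0.06      # assumed annual pension per dollar of annuity
years = 10

self_managed = lump_sum * (1 + self_growth) ** years
fund_managed = lump_sum * (1 + manager_growth) ** years

print(f"Self-managed after {years} years: ${self_managed:,.0f}")
print(f"Fund-managed after {years} years: ${fund_managed:,.0f}")
print(f"Annuity one could then buy: ${self_managed * annuity_rate:,.0f}/yr"
      f" vs ${fund_managed * annuity_rate:,.0f}/yr")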


No one that I could recall came up with a convincing argument against
the euthanasia issue - it would seem that performing euthanasia on
someone is actually condemning them to an eternity of even greater
misery than if you'd just left things alone - quite contrary to
what one expects in a single-universe notion of reality.

I don't see this at all, even setting aside the issue of my own focus on the branch *I* am in.

--Tim May
