RE: ASSA and Many-Worlds

2007-01-24 Thread Stathis Papaioannou


 Jason Resch writes:

 Stathis Papaioannou wrote:
  Jason Resch writes:
 
  Let's say being spared is neutral while being tortured is obviously bad, even if you are tortured for only a few minutes. Also, assume the intensity of the torture and the quality of life on being spared are the same in the duplication and coin-toss situations.
 
  What if I change the example and say you will be duplicated a million times, and only one of the copies will be tortured? From a selfish point of view, you can almost certainly expect to find yourself one of the copies that will be spared, and I think you would be crazy to choose the coin flip. The equivalence of the coin flip and duplication examples (when the probabilities are equal) is why we cannot distinguish between the MWI and CI of QM. It makes no difference to me whether the world splits into two and one copy of me is tortured if I toss the coin, or whether there is only one version of me with a 50% chance of being tortured.
 
 
 In the case you laid out you give two choices:
 
 A) The replicator
 B) The coin flip
 
 Case A results in 999,999 neutral lifetimes' worth of observer moments
 and 1 lifetime of excruciating, torture-filled observer moments.  Net
 outcome among all branched universes: -1
 
 Case B results in half of one's future observer moments remembering
 torture and half remembering being spared.  Net outcome among all
 branched universes: -0.5
 
 Therefore it's still best to take case B, the coin flip.
 
 What makes the result seem so unintuitive is the concept of a lifetime
 of observer moments whose net result is neutral.  That means that
 through all the ups and downs in that life, if one could see it all
 laid out before them, they would realize the person had so many
 negative events in their life that they might as well never have been
 born.  With this consideration, it becomes more apparent that the
 999,999 extra neutral lives offer no real advantage in being lived out,
 nor does the spared life in the coin flip need to be figured in.  All
 that should be considered in this case is that with replication all
 universes will have someone who is tortured, while in the coin flip
 only half will.
 
 Most people consider their life to be a positive thing, and few would
 say they wouldn't mind if they had never been born.  For most people,
 if it came down to a million lifetimes against one person's torture,
 the replication would be a better choice than the coin flip.
 
 Here the replication is only the optimal choice for neutral lifetimes.
  If a lifetime is very positive, the 999,999 good lives outweigh the
 one tortured.  If the spared lifetimes were very negative, the 999,999
 lifetimes would only add to the negative observer moments created
 through the torture, and again the coin flip is best.

and, in a follow-up correction:

My apologies to those on this list; this is how I should have worded
my conclusion:
 
Positive spared lives = Take replication
Neutral spared lives = Take coin flip
Negative spared lives = Take coin flip
 
This is an analysis from an altruistic viewpoint, i.e. which choice will
increase the net happiness in the world. What I am asking is the selfish
question: what should I do to avoid being tortured? If I choose the
replication it won't worry me, from a selfish point of view, that one person
will definitely be tortured, because I am unlikely to be that person. Indeed,
after the replication it won't affect me if *all* the other copies are
tortured, because despite sharing the same psychology up to the point of
replication, I am not going to experience their pain.
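
For concreteness, here is a minimal sketch of the two bookkeepings (my own
illustration, not from the original posts, with assumed utilities of 0 for a
spared life and -1 for a tortured one, and the coin flip treated as a single
world with a 50% chance, to match Jason's figures):

TORTURE, SPARED = -1.0, 0.0
N_COPIES = 1_000_000

# Altruistic bookkeeping: net happiness summed over everyone who exists.
net_replication = (N_COPIES - 1) * SPARED + TORTURE   # -1.0: one copy is tortured
net_coin_flip = 0.5 * SPARED + 0.5 * TORTURE          # -0.5: expected, single world

# Selfish bookkeeping: my expected experience, treating "which copy I end
# up as" as uniform over the copies.
my_expected_replication = (TORTURE + (N_COPIES - 1) * SPARED) / N_COPIES  # -0.000001
my_expected_coin_flip = 0.5 * TORTURE + 0.5 * SPARED                      # -0.5

print(net_replication, net_coin_flip)                  # replication looks worse
print(my_expected_replication, my_expected_coin_flip)  # replication looks far better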

If some multiverse theory happens to be true, then by your way of arguing we
should all be extremely anxious all the time, because at every moment terrible
things are definitely happening to some copy of us. For example, we should
constantly be worrying that we will be struck by lightning, because we *will*
be struck by lightning. But normally we don't worry about this, because being
struck by lightning in 1/million actual worlds is subjectively equivalent to
being struck by lightning in a single world with probability 1/million.
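
(A toy simulation of that subjective equivalence, my own illustration rather
than anything from the thread: sampling which of a million copies I am, one of
whom is struck, gives the same subjective frequency as a single-world strike
with probability 1/million.)

import random

N_WORLDS = 1_000_000
TRIALS = 1_000_000

# Many-worlds reading: exactly one of N_WORLDS copies is struck; "I" turn out
# to be a uniformly chosen copy.
struck_mwi = sum(random.randrange(N_WORLDS) == 0 for _ in range(TRIALS))

# Single-world reading: one of me, struck with probability 1/N_WORLDS.
struck_single = sum(random.random() < 1 / N_WORLDS for _ in range(TRIALS))

print(struck_mwi / TRIALS, struck_single / TRIALS)  # both come out near 1e-6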

Stathis Papaioannou



Re: Rép : The Meaning of Life

2007-01-24 Thread Bruno Marchal


On 23 January 2007, at 06:17, Stathis Papaioannou wrote:


 Searle's theory is that consciousness is a result of actual brain
 activity, not Turing emulable.

 No... True: Searle's theory is that consciousness is a result
 of brain activity, but nowhere does Searle pretend that the brain is not
 Turing emulable. He just implicitly assumes there is a notion of
 actuality that no simulation can render, but he does not address the
 question of emulability. Also, Searle is known for confusing levels of
 description (this I can make much more precise with the Fi and Wi, or
 with the very important difference between computability (emulability)
 and provability).

 Searle seems to accept that CT implies the brain is Turing emulable, 
 but he
 does not believe that such an emulation would capture consciousness any
 more than a simulation of a thunderstorm will make you wet. Thus, a 
 computer
 that could pass the Turing Test would be a zombie.


Yes. It confirms my point. And Searle is coherent: he has to refer to a
notion of the physically real for his non-computationalism to proceed.
He may be right. Still, his naturalistic explanation of consciousness
seems rather ad hoc.
But all I am saying is that IF comp is correct, we have to abandon
physicalism.


 Searle is not a computationalist - does not believe in strong AI - but 
 he does
 believe in weak AI. Penrose does not believe in weak AI either.

Yes. In that way Searle is not even wrong.

snip: see my preceding post to you


 If there are more arbitrary sequences than third person computations, 
 how
 does it follow that arbitrary sequences are not computations?


I guess I am missing something (or did you misstate your point?). Is it not
obvious that if there are more arbitrary sequences than third person
computations, then some (indeed most) arbitrary sequences are not
computations?

Let us define what a computable infinite sequence is. A sequence is
computable if there is a program (a machine) which generates
specifically the elements of that sequence in the right order, and
nothing else. The set of programs is enumerable, but by Cantor's theorem
the set of *all* sequences is not enumerable. So the set of computable
sequences is almost negligible compared to the arbitrary ones.
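
(A finite-prefix illustration of Cantor's diagonal argument, my own sketch
rather than Bruno's: whatever enumeration of 0/1 sequences you are handed, the
diagonal sequence differs from the i-th listed sequence at position i, so it
cannot appear anywhere in the enumeration.)

def diagonal(enumeration, n):
    # First n digits of a sequence absent from `enumeration`, where
    # enumeration(i, j) gives digit j of the i-th listed sequence.
    return [1 - enumeration(i, i) for i in range(n)]

# Example enumeration: sequence i is the binary expansion of i, padded with 0s.
def enum(i, j):
    bits = bin(i)[2:]
    return int(bits[j]) if j < len(bits) else 0

print(diagonal(enum, 10))  # differs from sequences 0..9 at positions 0..9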

Does this mean there is no program capable of generating a non-computable
sequence?

Not at all. A universal dovetailer generates all the infinite
sequences: the computable ones (that is, those nameable by a special-purpose,
specific program) and the non-computable ones (how? by
generating them all).
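
(A minimal Python sketch of the dovetailing trick itself, my own toy
illustration and not Bruno's UD: the "programs" here are just an arbitrary
computable family indexed by i, whereas a genuine universal dovetailer
interleaves the executions of *all* programs of a universal machine.)

from itertools import count, islice

# Toy "program" number i: endlessly emit the binary digits of i.
def program(i):
    digits = [int(d) for d in bin(i)[2:]]
    while True:
        for d in digits:
            yield d

def dovetail():
    # Run programs 0, 1, 2, ... interleaved: each stage starts one new program
    # and advances every program already started by one step, so every program
    # receives unboundedly many steps in the limit.
    running = []
    for stage in count():
        running.append(program(stage))
        for idx, prog in enumerate(running):
            yield (idx, next(prog))

for idx, output in islice(dovetail(), 15):
    print("program", idx, "emitted", output)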

I will give another example of the same subtlety. One day a computer
scientist told me that it was impossible to write a program of n bits
capable of generating an incompressible finite sequence or string of
length m, with m far greater than n. I challenged him.
Of course, what is true is that there is no program of n bits capable of
generating that m-bit incompressible string, AND ONLY, SPECIFICALLY,
THAT STRING.
But it is really easy to write a little program capable of generating
that incompressible string by letting it generate ALL strings: the
program COUNT is enough.
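
(A minimal sketch of such a COUNT program, my own illustration of the idea:
the program below is only a few lines long, yet its output eventually contains
every n-bit string, the incompressible ones included; what it cannot do is
single one of them out.)

from itertools import count, product

def count_program():
    # Enumerate every finite binary string in length-then-lexicographic order.
    for length in count(0):
        for bits in product("01", repeat=length):
            yield "".join(bits)

gen = count_program()
print([next(gen) for _ in range(15)])  # '', '0', '1', '00', '01', '10', '11', ...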

I think this *is* the main line of the *everything* list, or a 
miniature version of it if you want.

Now, when you run the UD, as long as you keep the discourse in the third
person mode, everything remains enumerable, even in the limit.
But from the first person point of view, a priori the uncountable
stories, indeed generated by the UD, take precedence over the computable
ones: thus the continua of white rabbits. This results from the first
person's inability to locate herself within UD*. Somehow the first person
belongs to 2^aleph_zero histories at the start.

A similar explosion of stories appears in quantum mechanics, except
that there the physicist has an easy answer: white rabbits and Potter
universes are eliminated through phase randomization (apparently).

I am not satisfied by this answer if only because my motivation is to 
understand where that quantum comes from.

Is complex randomization of histories the only way to force normal 
nature into the shorter path?

Well, my point is that if we take comp seriously, we have to justify
the absence of white rabbits from computer science. In case too many white
rabbits remain, comp would be false, and this would be an argument in
favor of materialism. But when you interview a universal machine on this
question, you realize at least that the question is far from being
settled.



Hope you don't mind if I continue commenting on your post tomorrow,


Bruno




http://iridia.ulb.ac.be/~marchal/



Re: ASSA and Many-Worlds

2007-01-24 Thread Johnathan Corgan

Stathis Papaioannou wrote:

 If some multiverse theory happens to be true, then by your way of arguing we
 should all be extremely anxious all the time, because at every moment terrible
 things are definitely happening to some copy of us. For example, we should
 constantly be worrying that we will be struck by lightning, because we *will*
 be struck by lightning.

If MWI is true, *and* there isn't a lowest quantum of probability/measure
of the kind Brent Meeker speculates about, there is an interesting
corollary to the quantum theory of immortality.

While one branch always exists which continues our consciousness
forward, we are also constantly shedding branches in which the most
brutal and horrific things happen to us and result in our death.  Their
measure is extremely small, so from a subjective probability
perspective, we don't worry about them.

I'd speculate that there are far more logically possible ways to
experience an agonizing, lingering death than to live.  Some have a
relatively high measure, like getting hit by a car or getting lung
cancer (if you're a smoker), so we take steps to avoid these (though
they still happen in some branch).  Others, like having all our
particles spontaneously quantum tunnel into the heart of a burning
furnace, are so low in measure that we can blissfully ignore the
possibility.  Yet if MWI is true, there is some branch where this has
just happened to us (modulo Brent's probability quantum).

If there are many more ways to die than to live, even of low individual
measure, I wonder how the integral of the measure across all of them
comes out.

-Johnathan
