On 10/09/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> > No, it is not necessary to destroy the original. If you do destroy the
> > original you have a 100% chance of ending up as the copy, while if you
> > don't you have a 50% chance of ending up as the copy. It's like
> > probability if the MWI of QM is correct.
>
> No, you are thinking in the present, where there can be only one copy of
> a brain. When technology for uploading exists, you have a 100% chance of
> becoming the original and a 100% chance of becoming the copy.
It's the same in no collapse interpretations of quantum mechanics. There is
a 100% chance that a copy of you will see the atom decay and a 100% chance
that a copy of you will not see the atom decay. However, experiment shows
that there is only a 50% chance of seeing the atom decay, because the
multiple copies of you don't share their experiences. The MWI gives the
same probabilistic results as the CI for any observer.

> > > So if your brain is a Turing machine in language L1 and the program
> > > is recompiled to run in language L2, then the consciousness
> > > transfers? But if the two machines implement the same function but
> > > the process of writing the second program is not specified, then the
> > > consciousness does not transfer because it is undecidable in general
> > > to determine if two programs are equivalent?
> >
> > It depends on what you mean by "implements the same function". A black
> > box that emulates the behaviour of a neuron and can be used to replace
> > neurons one by one, as per Hans Moravec, will result in no alteration
> > to consciousness (as shown in David Chalmers' "fading qualia" paper:
> > http://consc.net/papers/qualia.html), so total replacement by these
> > black boxes will result in no change to consciousness. It doesn't
> > matter what is inside the black box, as long as it is functionally
> > equivalent to the biological tissue. On the other hand...
>
> I mean "implements the same function" in that identical inputs result in
> identical outputs. I don't insist on a 1-1 mapping of machine states as
> Chalmers does.

I doubt it makes a difference, though. Chalmers' argument works for
identical outputs given identical inputs.

> Also, Chalmers argues that a machine copy of your brain must be
> conscious. But he has the same instinct to believe in consciousness as
> everyone else. My claim is broader: that either a machine can be
> conscious or that consciousness does not exist.

I think Chalmers' claim is that either a machine can be conscious or else
some sort of weird substance dualism is the case. I'm not sure I understand
what you mean when you say consciousness does not exist. Even if it's just
an epiphenomenon, nothing but what it feels like to process certain kinds
of information, there is a sense in which it exists. Otherwise it's like
saying multiplication doesn't exist because it's just repeated addition.

--
Stathis Papaioannou
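
A minimal sketch of the branch-counting point above, assuming (as a
simplification) that each measurement splits every observer into two
equally weighted copies, one that sees the decay and one that does not.
The names and numbers below are illustrative, not from the thread.

from itertools import product

TRIALS = 12  # number of successive decay measurements

# Each observer history is one tuple of outcomes (True = "saw the decay").
# After TRIALS measurements there are 2**TRIALS non-communicating copies.
histories = list(product([True, False], repeat=TRIALS))

# A given copy only has access to its own history, so the frequency it
# measures is the fraction of decays along that single history.
measured = [sum(h) / TRIALS for h in histories]

# Averaged over all equally weighted copies the measured frequency is 0.5,
# and the overwhelming majority of copies measure something close to 0.5,
# which is the statistic a collapse interpretation predicts for a single
# observer.
print(sum(measured) / len(measured))                             # 0.5
print(sum(0.25 <= f <= 0.75 for f in measured) / len(measured))  # ~0.96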
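
For the sense of "implements the same function" used above (identical
inputs produce identical outputs, with no 1-1 mapping of machine states),
here is a small illustrative sketch. The function names are hypothetical,
chosen only to show two programs that agree on every input while passing
through very different internal states.

def factorial_iterative(n: int) -> int:
    # Internal state: a running product updated in a loop.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n: int) -> int:
    # Internal state: a stack of pending multiplications.
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# We can spot-check input/output equivalence on a finite sample, and for
# this particular pair we could also prove equivalence by hand. What is
# undecidable is the general case: no algorithm can decide, for arbitrary
# programs, whether they agree on every input (such a decider would also
# solve the halting problem).
assert all(factorial_iterative(n) == factorial_recursive(n) for n in range(20))

That is the distinction the quoted question turns on: equivalence can hold,
and sometimes be proven, for particular pairs of programs, even though it
cannot be decided mechanically in general.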
