Bruno Marchal wrote:
> On 29 Nov 2011, at 18:44, benjayk wrote:
>> Bruno Marchal wrote:
>>>> I only say that I do not have a perspective of being a computer.
>>> If you can add and multiply, or if you can play the Conway game of
>>> life, then you can understand that you are at least a computer.
>> So, then I am computer or something more capable than a computer? I  
>> have no
>> doubt that this is true.
> OK. And comp assumes that we are not more than a computer, concerning  
> our abilities to think, etc. This is what is captured in a quasi  
> operational way by the "yes doctor" thought experiment. Most people  
> understand that they can survive with an artificial heart, for  
> example, and with comp, the brain is not a privileged organ with  
> respect to such a possible substitution.
If YES doctor means we are just an immaterial abstract computer, then there
is nothing to deduce (our experience is already related only to
computations, since we are defined by them).
But if YES doctor just means our bodies work *like* a computer (and thus the
substitution works, and we already know that this is the case to some
extent), then none of the steps works, because they assume we work exactly,
100%, like an abstract computer. In actuality we can, e.g., never be sure that
teleportation, duplication, etc. work as intended, because actual computers
are not totally reliable; they are actually quantum objects, and not purely
digital in an abstract sense (I argue this in more detail below).
In other words, you are assuming an abstraction of a computer in the
argument, which is already the conclusion.
The steps rely on the substitution being "perfect", which it can never be.

I'm probably making it too complicated, because I can't seem to point out the
simple fallacy. That's why I'm continuing to give examples of why either YES
doctor does not mean what you need it to mean (we are exactly, and only, and
always an abstract digital computer) or why you can't assume that the
reasoning works.
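As an aside on Bruno's Game of Life remark above: the rules of Life are indeed a plain finite computation that anyone who can count can carry out by hand, which is all his "you are at least a computer" point needs. A minimal sketch (my own illustration, not from Bruno's papers) of one generation:

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one generation.

    `live` is a set of (x, y) coordinates of live cells on an
    unbounded grid; everything here is finite counting.
    """
    # Tally how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Running `life_step` twice on the blinker returns the original pattern, confirming the period-2 oscillation.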

Bruno Marchal wrote:
>> Bruno Marchal wrote:
>>>> When I look
>>>> at myself, I see (in the center of my attention) a biological being,
>>>> not a
>>>> computer.
>>> Biological being are computers. If you feel to be more than a
>>> computer, then tell me what.
>> Biological beings are not computers. Obviously a biological being it  
>> is not
>> a computer in the sense of physical computer.
> I don't understand this. A bacteria is a physical being (in the sense  
> that it has a physical body) and is a computer in the sense that its  
> genetic regulatory system can emulate a universal machine.
Usually "computer" means a programmable machine, not "something that can
emulate a universal machine". It seems you are so hooked on the abstract
perspective of a computer scientist that you don't even see the possibility
of the distinction between an abstract computer and an actual computer.

Bruno Marchal wrote:
>> It is also not an abstract
>> digital computer (even according to COMP it isn't) since a  
>> biological being
>> is physical and "spiritual" (meaning related to subjective conscious
>> experience beyond physicality and computability).
> But all universal machine have a link with something beyond  
> physicality and computability. Truth about computability is beyond the  
> computable. So your point is not valid.
Yes, but then the whole argument does not work, because it deals with
something that even according to your conclusion can't be purely
computational (actual computers), so you can't assume they work as they
should. COMP just means we work enough like computers to make a
substitution possible (we say YES to a *functionally* correct substitution);
it does not mean that there is any substitution that works perfectly.

Bruno Marchal wrote:
>> Neither
>> can they be derived from it.
> Physicality can be derived. And has to be derived (by UDA). Both  
> quanta and qualia. Only the "geography" cannot be derived, but the  
> physical laws can. You might elaborate why you think they can't.
Frankly I don't believe in absolute physical laws, so we can't derive them.
They are just locally valid approximate rules, like "swans are white".

Bruno Marchal wrote:
>> And no, there is no need for any evidence for some non-turing emulable
>> infinity in the brain. We just need non-turing emulable finite stuff  
>> in the
>> brain, and that's already there.
> I thought you were immaterialist. What is that finite stuff which is  
> non Turing emulable?
Matter. It is a form of consciousness that is finite in terms of apparent
size and apparent information content but still not computable, because the
qualia of matter itself cannot be substituted.
I don't believe in primitive matter, but I believe in stuff as a sensation
of stuffiness.

Bruno Marchal wrote:
> I really try to understand. Sometimes it seems you argue against comp,  
> and sometimes it seems you argue against the proof that comp entails  
> the Platonist reversal (to be short).
Well, actually I am arguing against both, but what is relevant to your
argument is just that the COMP assumption either does not mean what you need
it to mean for the argument, or is simply equal to the conclusion.

Bruno Marchal wrote:
>> and we can just assume something can be substituted by an emulation  
>> if we
>> show that it can be.
> This is not true. We might doubt it to be true and make a Pascal like  
> sort of bet.
OK, but from a bet grounded on pure faith that somehow something will work
we cannot derive anything. The COMP assumption does not state that we survive
due to the abstract computations involved. Actually it doesn't even state
that we survive due to computational activity, but even if we add that
assumption, this computational activity need not be reducible to the abstract
computations involved.

Bruno Marchal wrote:
>> That seems quite unlikely, since already very simple objects like a  
>> stone
>> can't be emulated.
> The notion of stone is no more well defined in the comp theory. Either  
> you mean the "stuff" of the stone. Then comp makes it non Turing  
> emulable, because that "apparent stuff" is emerging from an infinity  
> of computations. So you are right. Or you mean by stone what we can do  
> with a stone (a functional stone), and this will depend on the  
> functionality that you ascribe to the stone.
I mean the actual apparent stuffy stone. And since everything that could be
substituted is stuffy, we can't assume it can be emulated.

Bruno Marchal wrote:
>> Honestly I am quite stupid to discuss with someone that just chooses  
>> to
>> plainly ignore everything that doesn't fit into his own preconceived  
>> notions
>> of what someone that's criticizing is saying.
> Just tell me where in the proof you have a problem.
See above and below.

Bruno Marchal wrote:
> You just seems acting  
> like "knowing that comp is false".
I don't "know" it is false, but it seems unlikely to be true practically, and
it seems nonsensical to say we work exactly like abstract digital machines;
even your own conclusion says that this can't be true (though this is not
even the COMP assumption).

Bruno Marchal wrote:
> In that case, it is up to you to explain why you believe or know that  
> comp is false, and I might change my mind (stopping being agnostic).
We can just doubt it. I don't think any belief can be known to be true, or
to be false. In this sense I am agnostic on all beliefs. But practically, we
are of course not agnostic on every belief, and in this case I just mean
that there are severe reasons to doubt COMP.

Bruno Marchal wrote:
>> It is quite strange to say over and over again that I haven't  
>> studied your
>> arguments (I have, though obviously I can't understand all the  
>> details,
>> given how complicated they are),
> UDA is rather simple to understand. I have never met people who does  
> not understand UDA1-7 among the scientific academical.
> Some academics pretends it is wrong, but they have never accepted a  
> public or even private discussion. And then they are "literary"  
> continental philosophers with a tradition of disliking science. Above  
> all, they do not present any arguments.
It is indeed not hard to understand.
Again, there is no single specific flaw in the argument, because all the
steps rely on an abstraction of how a computer works, which needn't be true.
Nevertheless, below I address more specific examples of the flaw in the
argument.

Bruno Marchal wrote:
>> while you don't even bother to remember the
>> most fundamental premise of my argumentation (non-materialism). It  
>> is like I
>> was saying to you: "Oh it seems to me you just presuppose that we are
>> material computers, that's why your argument works".
>> Your argument may work against materialism (I am not sure, I don't  
>> take
>> materialism seriously anyway - frankly materialism is a joke, since
>> materialist are not even capable to say what matter is supposed to  
>> be), but
>> you don't take into account any of the alternatives that can be  
>> taken more
>> seriously (any sort of non-materialism).
> On the contrary, I have always insisted that we agree on that  
> immaterialism. My point is only that "mechanism implies  
> immaterialism", and in a constructive way so that by looking on the  
> way matter behaves in our neighborhood we might refute mechanism.
> I don't understand why you dislike the idea that some theory implies  
> an idea that you appreciate.
What I dislike is not immaterialism, but the idea that it can be derived
from the COMP assumption, and the idea that the immaterial reality is
fundamentally computational.

Bruno Marchal wrote:
>> It seems very much you presuppose a purely material or computational
>> ontology.
> I don't understand this. I show that both consciousness and matter are  
> partially non-computational, once we bet that we are Turing emulable.
Yes, but they supposedly emerge from a computational ontology. That is, in
my mind, a fatal metaphysical (and practical) error.
Honestly I don't even know what that is supposed to mean, given that the way
it is supposed to emerge from the computations is itself non-computational
(the "inner view"), which really means it does not emerge only from the
computations (it also emerges from the non-computational "inner view").
Consciousness supposedly emerges from the self-reference of numbers, but the
very concept of self-reference needs the existence of a self (= consciousness).
Without a self, no self-reference.
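For concreteness, the self-reference that computation by itself provides is the purely syntactic kind, as in a quine: a program whose output is its own source text, built by nothing but string substitution (a minimal Python sketch of my own, not taken from Bruno's papers):

```python
# A classic two-line quine: the program's whole text is reconstructed
# from its own data, with no "self" anywhere -- %r inserts the repr of
# src, and %% becomes a literal %.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

The printed output reproduces the two code lines exactly. Whether such a syntactic loop amounts to a self in the experiential sense is precisely what is in question here.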

Bruno Marchal wrote:
>> Bruno Marchal wrote:
>>>> Bruno Marchal wrote:
>>>>>> We can only say YES if we assume there is no self-referential loop
>>>>>> between
>>>>>> my instantiation and my environment (my instantiation influences
>>>>>> what world
>>>>>> I am in, the world I am in influences my instantiation, etc...).
>>>>> Why? Such loops obviously exist (statistically), and the relative
>>>>> proportion statistics remains unchanged, when doing the  
>>>>> substitution
>>>>> at the right level. If such loop plays a role in consciousness, you
>>>>> have to enlarge the digital "generalized" brain. Or comp is wrong,
>>>>> 'course.
>>>> I think it is self-refuting if we not already take the conclusion  
>>>> for
>>>> granted (saying YES only based on the faith we are already purely
>>>> digital).
>>>> Imagine substituting our whole generalized brain (let's say the
>>>> milky way).
>>>> Then you cannot have access to the fact that the whole milky way was
>>>> substituted,
>>> In the reasoning we use the fact that you are told in advance. That
>>> you cannot see the difference is the comp assumption.
>> Ah, OK. If you can't notice you are being substituted the very  
>> statement
>> that you are being substituted is meaningless.
> Why? I can say yes to the doctor, and tell him that it seems that the  
> artificial brain is 100% OK, because I don't notice the difference,  
> and then he can show me a scan of my skull, and I can see the  
> evidences for the artificial brain. So I can believe that I have  
> perfectly survived with that digital brain.
But then your experience didn't remain *totally* invariant. If you don't
suppose it remains *absolutely* invariant, then you can't assume it does in
any of the thought experiments in the steps of your proof. For example, you
could change during the teleportation.

Bruno Marchal wrote:
>> If we have that faith,
>> we believe in abitrary mysterious occurences.
> We believe just that the brain is a sort of machine, and that we can  
> substitute for a functionnally equivalent machine. This cannot be  
> proved and so it asks for some faith. But it might not need to be  
> blind faith. You can read books on brain, neuro-histology, and make  
> your own opinion, including about the subst level. The reasoning uses  
> only the possibility of this in theory. It is theoretical reasoning in  
> a theoretical frame.
But functionally equivalent does not mean *totally* equivalent, and the
latter is required for the steps to work.

Bruno Marchal wrote:
>> Unfortunately then we could as well base
>> the argument on "1+1=3" or "there are pink unicorn in my room even  
>> though I
>> don't notice them", so it's worthless.
> This does not follow. We do have biological evidence that the brain is  
> a Turing emulable entity. It is deducible from other independent  
> hypothesis (like the idea that QM is (even just approximately)  
> correct, for example).
> You don't seem to realize, a bit like Craig, that to define a non-comp  
> object, you need to do some hard work.
We have no biological evidence whatsoever that the brain is Turing emulable.
We have evidence that it has similarities with a computer, but that is not
evidence that it is equivalent to one, just as a bird is not equal to a
plane by virtue of being similar to it.
I can easily define a non-comp object: it is the qualia of seeing or feeling
or hearing an object. You are right that we can't give a precise definition,
but it is quite possible that not everything real can be defined precisely.
Certainly reality is real, and we can't define it.

Bruno Marchal wrote:
>> Bruno Marchal wrote:
>>>> Or we just *believe* we are being substituted (for whatever reason)
>>>> and say
>>>> YES to that, without any evidence we actually are being substituted,
>>>> but
>>>> then we are not saying YES to an actual substitution but to the
>>>> conclusion
>>>> (I am just a digital machine that is already equal to the
>>>> substitution).
>>> Please just study the proof and tell me what you don't understand. I
>>> don't see the relevance of the paragraph above, nor can I see what  
>>> you
>>> are arguing about.
>> I studied your proof. Of course your proof works if you assume the
>> conclusion at the start
> In that case the proof does not work, of course. I don't put the  
> conclusion in the hypothesis, or show me where. Show me the precise  
> line which makes you feeling so.
You take "functionally correct digital substitution" to mean "a correct
substitution assuming we are only abstract digital computers". Otherwise you
can never assume at any step that a teleportation or duplication actually
works in the sense that the teleportation didn't change anything except
adding a belief. Again, COMP does not assume that *nothing* changes (then
we couldn't even speak of a substitution having happened); it just assumes
we survive *relatively* unchanged.

Bruno Marchal wrote:
>> or assume something nonsensical (like saying YES to
>> a substitution that doesn't subjectively happen).
> The whole point of comp, is that we survive without any subjective  
> change to such a substitution, done by other people (so that witness  
> can attest it).
But then our experience did change, since we experience a witness attesting
it, which we didn't before. Of course you mean "relatively" unchanged, but
this is not enough for the argument to work, since there you rely on us
remaining completely unchanged.

Bruno Marchal wrote:
>> or your proof doesn't work (because actually the patient will notice  
>> he has
>> been substituted, that is, he didn't survive a substitution, but a  
>> change of
>> himself - if he survives).
> He might notice it for reason which are non relevant in the reasoning.  
> He might notice it because he got a disk with a software making him  
> able to uploading himself on the net, or doing classical  
> teleportation, or living 1000 years, etc.
But if he noticed this his subjective experience did change, and we can't
assume in any of the steps that it doesn't (except for an added belief).

Bruno Marchal wrote:
>> Apparently you are
>> dogmatically insisting that everyone that criticizes your argument  
>> doesn't
>> understand it and is wrong, and therefore you don't actually have to  
>> inspect
>> what they are saying.
> On the contrary, I answer all objection of all kind. I do not impose  
> any view. But if the proof is not valid, you have to say at which line  
> it becomes invalid.
OK, I didn't do it until now, because the same error is repeated over and
over again in the argument. The main problem already starts at step 1: the
line "if we identify an individual with its (hopefully consistent) set of
beliefs, the experience adds only a new belief (I did arrive in Helsinki)"
in step 1 is not valid, because we can't assume that nothing else changes.
Betting on the correct substitution level just means that we survive
subjectively "unchanged". But this does not mean we merely add a new belief.
I survive "unchanged" when I take a shower, but that does not mean I only
added the new belief "I took a shower"; a lot of other things also changed.
If you assume that we identify only with our beliefs, and that a functionally
correct substitution means that *only our beliefs* changed, then you have to
make that explicit in the assumption. It is not accurate that the COMP
assumption is equivalent to step 1.

It is even more obvious in step 3: "The description encoded at Brussels
after the reading-cutting process is just the description of a state of some
Turing machine, given that we assume comp. So its description can be
duplicated, and the experiencer can be reconstituted simultaneously at two
different places, for example Washington and Moscow." This assumes we work
precisely like an abstract Turing machine, but we can't be sure that a
physical computer works exactly like one (indeed, empirically it doesn't).
COMP only says we survive a functionally correct substitution with a digital
computer. It does not say "a digital computer that works like our abstraction
of how a physical computer works".
I am not surprised that many materialists won't make that argument, because
they want the universe to follow precise computational laws, which would
mean that there can't be any difference between the working of a digital
computer and its (correct) abstraction. But the fact that most materialists
have incoherent beliefs does not make the argument valid.

The same argument can be made against step 7, where you also assume that
only the abstract computations matter.

Bruno Marchal wrote:
>> This just works as long as
>> the neurons can make enough new connections to fill the similarity  
>> gap.
>> Bruno Marchal wrote:
>>>> This would make COMP work in a quite special case scenario, but
>>>> wrong in
>>>> general.
>>> It is hard to follow you.
>> I am not saying anything very complicated.
> You seem to oscillate between "comp is nonsense" and "there is  
> something wrong with the reasoning".
> You need to be able to conceive that comp might be true to just follow  
> the reasoning.
I can conceive that a substitution might work, but I can hardly conceive
that only the *abstract computations* matter for what happens; and this is
not required by COMP. It says only that we can have a functionally correct
digital substitution.
If we were to substitute a brain with a digital brain, I think it is
unavoidable that we would discover that this digital brain can reflect on
itself in a manner that necessitates that the computer be interpreted by
something beyond the computer, making the conclusion that we are just
related to computations wrong.


Sent from the Everything List mailing list archive at