On 02 Dec 2011, at 19:08, benjayk wrote:

Bruno Marchal wrote:

On 29 Nov 2011, at 18:44, benjayk wrote:

Bruno Marchal wrote:

I only say that I do not have a perspective of being a computer.

If you can add and multiply, or if you can play the Conway game of
life, then you can understand that you are at least a computer.
So, then am I a computer, or something more capable than a computer? I
have no
doubt that this is true.
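As an illustrative aside (my own sketch, not part of the original exchange): the Conway game of life mentioned above is a purely mechanical rule, so anyone who can apply it is, in that sense, at least a computer. One generation can be computed like this:

```python
from collections import Counter

def life_step(cells):
    """cells: set of (x, y) live coordinates; returns the next generation."""
    # Count live neighbours of every cell adjacent to some live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step iff it has 3 live neighbours,
    # or 2 live neighbours and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# A horizontal "blinker" becomes vertical, then horizontal again (period 2).
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(blinker) == {(1, -1), (1, 0), (1, 1)})   # True
print(life_step(life_step(blinker)) == blinker)          # True
```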

OK. And comp assumes that we are not more than a computer, concerning
our abilities to think, etc. This is what is captured in a quasi
operational way by the "yes doctor" thought experiment. Most people
understand that they can survive with an artificial heart, for
example, and with comp, the brain is not a privileged organ with
respect to such a possible substitution.
If YES doctor means we are just an immaterial abstract computer then there
is nothing to deduce (our experience already is only related to
computations, since we are defined by them).

The "YES doctor" assumption does not refer to an abstract computer. That is handled at step 8 of the UD Argument.

But if YES doctor just means our bodies work *like* a computer (and thus the
substitution works, and we already know that this is the case to some
extent) then none of the steps work, because they assume we work exactly 100%
like an abstract computer.

By the assumption on the level of digital substitution, we are 100% preserved, in the relative local way, through a physical computer running a digital encoding supposed to be done at the right level.

In actuality we can, e.g., never be sure that
teleportation, duplication, etc. work as intended, because actual computers
are not totally reliable,

Yes, but that is not relevant for the reasoning, which uses ideal default hypotheses (skilful doctor, level adequacy, ...).

and are actually quantum objects,

This is not part of the assumption.

and not purely
digital in an abstract sense (I argue in a more detailed way below).

OK. See below.

In other words, you are assuming an abstraction of a computer in the
argument, which is already the conclusion.

Not at all. The assumption does not refer to an abstract or immaterial machine.

The steps rely on the substitution being "perfect", which it will never be.

That would contradict the digital and correct level assumption.

I'm probably making it too complicated, because I can't seem to point out the simple fallacy. That's why I'm continuing to give examples of why either YES doctor does not mean what you need it to mean (that we are exactly, and only, and
always an abstract digital computer) or why you can't assume that the
reasoning works.

Abstract computers enter at step 8. Up to step seven all computing machinery is supposed to be as concrete/physical as the computer you are looking at right now.

Bruno Marchal wrote:

Bruno Marchal wrote:

When I look
at myself, I see (in the center of my attention) a biological being,
not a computer.

Biological beings are computers. If you feel to be more than a
computer, then tell me what.
Biological beings are not computers. Obviously a biological being
is not
a computer in the sense of a physical computer.

I don't understand this. A bacterium is a physical being (in the sense
that it has a physical body) and is a computer in the sense that its
genetic regulatory system can emulate a universal machine.
Usually computer means a programmable machine, not "something that can emulate
a universal machine".

That can be proved to be equivalent.

It seems you are so hooked on the abstract perspective
of a computer scientist, that you don't even see the possibility of the
distinction abstract computer / actual computer.

UDA 1-7 uses actual computers. Step 8 treats the immateriality/abstraction point.

Bruno Marchal wrote:

It is also not an abstract
digital computer (even according to COMP it isn't) since a
biological being
is physical and "spiritual" (meaning related to subjective conscious
experience beyond physicality and computability).

But all universal machines have a link with something beyond
physicality and computability. Truth about computability is beyond the
computable. So your point is not valid.
Yes, but then the whole argument does not work, because it deals with
something that even according to your conclusion can't be purely
computational (actual computers),

The stuff the computer is made of is shown to be not entirely computational. That is why we never use an exact copy of the physical machine, but only a digital description made at some level.

so you can't assume it works as it
should.

And so I can do that. You seem to have missed the key notion of substitution level.

COMP just means we work enough like computers to make a
substitution possible (we say YES to a *functionally* correct substitution);
it does not mean that there is any substitution that works perfectly.

By definition of a digital machine, if I am copied at the right substitution level, it will work perfectly.

Bruno Marchal wrote:

can they be derived from it.

Physicality can be derived. And has to be derived (by UDA). Both
quanta and qualia. Only the "geography" cannot be derived, but the
physical laws can. You might elaborate why you think they can't.
Frankly I don't believe in absolute physical laws, so we can't derive them.

You cannot use this new assumption to invalidate a reasoning which does not depend on it. The reasoning ends up by showing some absolute physical laws exist. They are invariant for all Turing universal observers, but that's part of the conclusion and is not used in the reasoning.

They are just locally valid approximate rules, like "swans are white".

No. That's contingent geography, like finding oneself in W or in M. Eventually comp explains that difference and makes it clear. Usual physics cannot do that and must use a universal inductive inference.

Bruno Marchal wrote:

And no, there is no need for any evidence for some non-Turing-emulable
infinity in the brain. We just need non-Turing-emulable finite stuff
in the
brain, and that's already there.

I thought you were immaterialist. What is that finite stuff which is
non Turing emulable?
Matter. It is a form of consciousness that is finite in terms of apparent size and apparent information content but still not computable, because the
qualia of matter itself cannot be substituted.

This does not make sense to me.

I don't believe in primitive matter, but I believe in stuff as a sensation
of stuffiness.

I can agree with this, and it is somehow made precise in the conclusion. Not in the assumption.

Bruno Marchal wrote:

I really try to understand. Sometimes it seems you argue against comp,
and sometimes it seems you argue against the proof that comp entails
the Platonist reversal (to be short).
Well, actually I am arguing against both, but relevant to your argument is just that the COMP assumption does not mean what you need it to mean for the
argument, or is just equal to the conclusion.

You are changing the assumption by introducing a notion of abstract computer and by missing the notion of substitution level.

Bruno Marchal wrote:

and we can just assume something can be substituted by an emulation
if we
show that it can be.

This is not true. We might doubt it to be true and make a Pascal like
sort of bet.
OK, but from a bet grounded on pure faith that somehow something will work
we cannot derive anything.

That argument would make all theoretical reasoning non valid.

The COMP assumption does not state that we survive
due to the abstract computations involved.


Actually it doesn't even state
that we survive due to computational activity, but even if we add that
assumption this computational activity need not be reducible to the abstract
computations involved.

Up to step seven, you are right. The reduction to abstract or immaterial computation is the subject matter of the 8th step. The 8th step is only sketched in sane04. A more detailed account has been given on this list.

Bruno Marchal wrote:

That seems quite unlikely, since already very simple objects like a
stone can't be emulated.

The notion of stone is no longer well defined in the comp theory. Either
you mean the "stuff" of the stone. Then comp makes it non Turing
emulable, because that "apparent stuff" is emerging from an infinity
of computations. So you are right. Or you mean by stone what we can do
with a stone (a functional stone), and this will depend on the
functionality that you ascribe to the stone.
I mean the actual apparent stuffy stone. And since everything that could be
substituted is stuffy, we can't assume it can be substituted.


Bruno Marchal wrote:

Honestly I am quite stupid to discuss with someone who just chooses to
plainly ignore everything that doesn't fit into his own preconceived notion
of what someone who's criticizing him is saying.

Just tell me where in the proof you have a problem.
See above and below.

You were changing the assumption.

Bruno Marchal wrote:
You just seem to be acting
like "knowing that comp is false".
I don't "know" it is false, but it seems unlikely to be true practically,

Practicality has no relevance. Some practicalness is used for pedagogical purposes, like some neuro-hypotheses, but these are eliminated at step seven, where only a concrete physical UD is used instead.

it seems nonsensical to say we work exactly like abstract digital machines, and even your own conclusion says that this can't be true (but this is not
even the COMP assumption).

This is ambiguous. Not sure what you are saying.

Bruno Marchal wrote:

In that case, it is up to you to explain why you believe or know that
comp is false, and I might change my mind (stopping being agnostic).
We can just doubt it. I don't think any belief can be known to be true, or
to be false.

A belief can be refuted, and thus shown false.

In this sense I am agnostic on all beliefs. But practically, we
are of course not agnostic on every belief, and in this case I just mean
that there are severe reasons to doubt COMP.

I agree, and the whole UDA singles out that fact.

Bruno Marchal wrote:

It is quite strange to say over and over again that I haven't
studied your
arguments (I have, though obviously I can't understand all the details,
given how complicated they are),

UDA is rather simple to understand. I have never met people who do
not understand UDA1-7 among scientific academics.
Some academics pretend it is wrong, but they have never accepted a
public or even private discussion. And then there are the "literary"
continental philosophers with a tradition of disliking science. Above
all, they do not present any arguments.
It is indeed not hard to understand.
Again, there is no specific flaw in the argument, because all steps rely on
an abstraction of how a computer works,

They rely on the definition of a digital computer. The digitalness allows exact simulation (emulation).

which needn't be true. Nevertheless,
below I address more specific examples of the flaw in the argument.

Bruno Marchal wrote:

while you don't even bother to remember the
most fundamental premise of my argumentation (non-materialism). It
is like I
was saying to you: "Oh it seems to me you just presuppose that we are
material computers, that's why your argument works".
Your argument may work against materialism (I am not sure; I don't take
materialism seriously anyway - frankly materialism is a joke, since
materialists are not even capable of saying what matter is supposed to
be), but
you don't take into account any of the alternatives that can be
taken more
seriously (any sort of non-materialism).

On the contrary, I have always insisted that we agree on that
immaterialism. My point is only that "mechanism implies
immaterialism", and in a constructive way so that by looking on the
way matter behaves in our neighborhood we might refute mechanism.
I don't understand why you dislike the idea that some theory implies
an idea that you appreciate.
What I dislike is not immaterialism, but the idea that this can be derived
from the COMP assumption, and the idea that the immaterial reality is purely computational.

The immateriality is derived in the 8th step of UDA.
The fact that you dislike an argument does not make it invalid.
Then, the immaterial reality is shown to be mainly non computational (with some nuance I will not explain here).

Bruno Marchal wrote:

It seems very much you presuppose a purely material or computational ontology.

I don't understand this. I show that both consciousness and matter are
partially non-computational, once we bet that we are Turing emulable.
Yes, but they supposedly emerge from a computational ontology.

The ontology is basically not computational. Arithmetical truth is not a computable notion. A Löbian machine can prove this.

That is, in
my mind, a fatal metaphysical (and practical) error.
Honestly I don't even know what that should mean, given that the way it is supposed to emerge from the computations is itself non-computational
("inner view"), which really means it does not only emerge from the
computations (it also emerges from the non-computational "inner view").

It emerges from infinities of computations (still "concrete" up to step seven), in a way which is made precise (but not computable) through the first person indeterminacy and its invariance for delays, real-virtual shifts, etc.
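The first person indeterminacy at work here can be given a toy illustration (my own sketch, not part of the argument itself): iterate the W/M self-duplication and enumerate the resulting first-person histories. The process is entirely mechanical, yet from the inside almost all histories look like random coin tosses with W-frequency near 1/2:

```python
# Toy sketch of iterated W/M self-duplication. After n duplications there
# are 2**n first-person histories; most of them have a W-frequency near
# 1/2, even though the third-person process is fully deterministic.
from itertools import product

n = 12
histories = list(product("WM", repeat=n))        # all 2**n continuations
near_half = sum(1 for h in histories
                if abs(h.count("W") / n - 0.5) <= 0.25)

print(len(histories))              # 4096
print(near_half)                   # 3938: over 96% of histories
```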

Consciousness supposedly emerges from self-reference of numbers, but the very concept of self-reference needs the existence of self (=consciousness).
Without self, no self-reference.

The discovery of the Löbian machine and of arithmetical self-reference contradicts this. To equate self and consciousness is not warranted, and theoretical computer science disentangles this by distinguishing the inner self and the third person self. I can explain more on this if you want. The third person self is defined by diagonalization, and plays a key role in computer science mathematics. The first person self can be (re)defined by the Theaetetus notion of the knower, but that is not needed to understand the reversal between physics and machine theory.
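The third person self defined by diagonalization can be made concrete with a quine, a program that obtains its own complete description without any presupposed "self" (an illustrative sketch of the diagonal construction in Python, not Bruno's formalism):

```python
# A quine via the diagonal construction: the program holds a description
# s of itself and applies s to s, so it refers to its own full source
# code with no homunculus and no external observer.
s = 's = %r\nprint(s %% s)'
print(s % s)   # prints exactly the two lines of this program's source
```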

Bruno Marchal wrote:

Bruno Marchal wrote:

Bruno Marchal wrote:

We can only say YES if we assume there is no self-referential loop between
my instantiation and my environment (my instantiation influences
what world
I am in, the world I am in influences my instantiation, etc...).

Why? Such loops obviously exist (statistically), and the relative
proportion statistics remains unchanged when doing the substitution
at the right level. If such a loop plays a role in consciousness, you have to enlarge the digital "generalized" brain. Or comp is wrong.
I think it is self-refuting if we do not already take the conclusion for
granted (saying YES only based on the faith that we are already purely computational).
Imagine substituting our whole generalized brain (let's say the
milky way).
Then you cannot have access to the fact that the whole milky way was
substituted.

In the reasoning we use the fact that you are told in advance. That
you cannot see the difference is the comp assumption.
Ah, OK. If you can't notice you are being substituted, the very notion
that you are being substituted is meaningless.

Why? I can say yes to the doctor, and tell him that it seems that the
artificial brain is 100% OK, because I don't notice the difference,
and then he can show me a scan of my skull, and I can see the
evidences for the artificial brain. So I can believe that I have
perfectly survived with that digital brain.
But then your experience didn't remain *totally* invariant. If you don't suppose it remains *absolutely* invariant, then you can't assume it does in any of the thought experiments in the steps of your proof. For
example, you could change during the teleportation.

Up to step seven the experience remains relatively invariant, like in any hospital operation. The change occurs at the reawakening of the patient, or when a teleported person opens the reconstitution box.

Bruno Marchal wrote:

If we have that faith,
we believe in arbitrary mysterious occurrences.

We believe just that the brain is a sort of machine, and that we can
substitute it with a functionally equivalent machine. This cannot be
proved and so it asks for some faith. But it might not need to be
blind faith. You can read books on brain, neuro-histology, and make
your own opinion, including about the substitution level. The reasoning uses
only the possibility of this in theory. It is theoretical reasoning in
a theoretical frame.
But functionally equivalent does not mean *totally* equivalent, and the
latter is required for the steps to work.

There is no notion of total equivalence used. Only of subjective equivalence (first person invariance), modulo the changing conditions of reawakening, reconstitution in different environment, etc.

Bruno Marchal wrote:

Unfortunately then we could as well base
the argument on "1+1=3" or "there are pink unicorns in my room even
though I
don't notice them", so it's worthless.

This does not follow. We do have biological evidence that the brain is
a Turing emulable entity. It is deducible from other independent
hypotheses (like the idea that QM is (even just approximately)
correct, for example).
You don't seem to realize, a bit like Craig, that to define a non-comp
object, you need to do some hard work.
We have no biological evidence whatsoever that the brain is Turing emulable.

This is simply not true. There is much evidence, including the fact that we don't know any physical laws which are not Turing emulable, with exceptions like the non-intelligible collapse of the wave function, or some theoretical physical phenomena built by diagonalization but unknown in nature. But this is not relevant for the validity of the reasoning, which assumes Turing emulability at some description level at the start.

We have evidence that it has similarities with a computer, but that is
not evidence that it is equivalent to it.

At the substitution level, and below, it is, by definition.

Just like a bird is not equal to a
plane due to being similar.

An artificial heart is not equivalent to a biological heart. We need only functional equivalence at some level.

I can easily define a non-comp object: It is the qualia of seeing or feeling
or hearing an object.

This is begging the question, or missing the point that some inner view will already be non-computational in the comp theory.

But you are right that we can't give a precise definition;
it is quite possible that not everything real can be defined precisely.
Certainly reality is real, and we can't define it.


Bruno Marchal wrote:

Bruno Marchal wrote:

Or we just *believe* we are being substituted (for whatever reason)
and say
YES to that, without any evidence we actually are being substituted;
then we are not saying YES to an actual substitution but to the belief
(I am just a digital machine that is already equal to the abstract computer).

Please just study the proof and tell me what you don't understand. I
don't see the relevance of the paragraph above, nor can I see what you
are arguing about.
I studied your proof. Of course your proof works if you assume the
conclusion at the start

In that case the proof does not work, of course. I don't put the
conclusion in the hypothesis, or show me where. Show me the precise
line which makes you feel so.
You take "functionally correct digital substitution" to mean "a correct
substitution assuming we are only abstract digital computers".

You keep saying this, but "abstraction" is not part of the hypothesis. If I have been unclear on this, just tell me where, and I will make correction. But I don't see where.

Otherwise you
can never assume at any step that a teleportation or duplication actually works in the sense that the teleportation didn't change anything except adding a belief. Again, COMP does not assume that *nothing* changes (as then we couldn't even speak of a substitution having happened); it just assumes
we survive *relatively* unchanged.

Yes. I use "nothing" for the state of the reconstituted person just before the person makes some self-localization experiment. The new belief or beliefs are of the style "oh, I am in W, and it is raining here, etc.".

Bruno Marchal wrote:

or assume something nonsensical (like saying YES to
a substitution that doesn't subjectively happen).

The whole point of comp is that we survive without any subjective
change to such a substitution, done by other people (so that witnesses
can attest to it).
But then our experience did change, since we experience a witness attesting
it, which we didn't before. Of course you mean "relatively" unchanged,

Yes. That is obvious in the frame of the thought experiment.

But this is not enough for the argument to work, since there you rely on us
remaining completely unchanged.

Yes, modulo the circumstances of the experience. If you survive at the hospital with an artificial brain, you will have all the differences due to remembering having undergone that experience. Those changes are not relevant for the validity of the reasoning. Indeed, those changes play a key role for assessing the relative invariance of the first person experience, for delays, etc.

Bruno Marchal wrote:

or your proof doesn't work (because actually the patient will notice
he has
been substituted, that is, he didn't survive a substitution, but a
change of
himself - if he survives).

He might notice it for reasons which are not relevant in the reasoning.
He might notice it because he got a disk with software making him
able to upload himself onto the net, or to do classical
teleportation, or to live 1000 years, etc.
But if he noticed this his subjective experience did change, and we can't assume in any of the steps that it doesn't (except for an added belief).

This just trivially says that we are changed by any experience, even drinking a cup of coffee. But if the substitution is done at the right level, the changes will only be of that type, and are not relevant for the issue. On the contrary, they are important for the rest of the reasoning. If we use a notion of survival implying no change at all, then we could survive only by freezing time. That looks much more like dying.

Bruno Marchal wrote:

Apparently you are
dogmatically insisting that everyone who criticizes your argument doesn't
understand it and is wrong, and therefore you don't actually have to address
what they are saying.

On the contrary, I answer all objections of all kinds. I do not impose
any view. But if the proof is not valid, you have to say at which line
it becomes invalid.
OK, I didn't do it until now, because the same error is repeated over and over again in the argument. The main problem already starts at step 1: The line "if we identify an individual with its (hopefully consistent) set of beliefs, the experience adds only a new belief (I did arrive in Helsinki)" in step 1 is not valid, because we can't assume that nothing else changes.

I have just answered this above.

Betting on the correct substitution level does just mean that we survive subjectively "unchanged". But this does not mean we just add a new belief. I survive "unchanged" when I take a shower, but it does not mean I only added
the new belief "I took a shower"; a lot of other things also changed.

Of course. What is the relevance of this for the reasoning? I could have put "except for a cloud of beliefs", but then some will ask what a cloud of beliefs is. Being too precise makes the reasoning unnecessarily harder and longer. You are supposed to fill some gaps. In some presentations I just add "exercise" to invite the reader to fill all the gaps. I have made longer presentations, including on this list, but eventually the UDA in 8 short steps is the one people grasp most easily.

If you assume that we only identify with our beliefs, and that a functionally correct substitution means that *only our beliefs* changed, then you have to
make that explicit in the assumption. It is not accurate that the COMP
assumption is equivalent to step 1.

I think my answer above settles this misunderstanding. Ask for more explanation if you still have a problem with step one.

It is even more obvious in step 3: "The description
encoded at Brussels after the reading-cutting process is just the
description of a state of some
Turing machine, giving that we assume comp. So its description can be
duplicated, and the
experiencer can be reconstituted simultaneously at two different
places, for example
Washington and Moscow". This assumes we work precisely like an abstract
Turing machine,

Like a concrete Turing machine. As I said, the abstract Turing machine enters only at step 8. And the "precisely" is handled by the fact that we are copied at the right substitution level, or below.

but we can't be sure that a physical computer works exactly
like it (indeed, empirically it doesn't).

The working way of the physical computer is not relevant, once there is a level of substitution, which is the comp assumption.

COMP does only say we survive a
functionally correct substitution with a digital computer.

Yes. Exactly.

But it does not
say "a digital computer that works like our abstraction of how a physical
computer works".

That abstraction is precisely handled by step 8.
The UDA1-7 proves only the existence of first person indeterminacy, the non locality, and the reversal in the case of robust universe (which run a physical concrete UD forever). Then step 8 shows that the need of the concrete UD and concrete universe can be eliminated.

I am not surprised that many materialists won't make that argument, because they want the universe to follow precise computational laws, which would mean that there can't be any difference between the working of a digital
computer and its (correct) abstraction. But that most materialists have
incoherent beliefs does not make the argument valid.

The same argument could be made in step 7, where you also assume that just
the abstract computations matter.

Precisely not. Or show me where.

Bruno Marchal wrote:

This just works as long as
the neurons can make enough new connections to fill the similarity

Bruno Marchal wrote:

This would make COMP work in a quite special case scenario, but
wrong in

It is hard to follow you.
I am not saying anything very complicated.

You seem to oscillate between "comp is nonsense" and "there is
something wrong with the reasoning".
You need to be able to conceive that comp might be true to just follow
the reasoning.
I can conceive that a substitution might work,

Nice. That is comp. So you can conceive it to be true.

but I can hardly conceive
that just the *abstract computations* matter for what happens,

OK. That's the point of step 8.

but this is
not required in COMP.

Indeed. It would put the conclusion of the reasoning in the assumption, which I do not, though you seem to think so without providing a reference that I did.

It says only that we can have a functionally correct
digital substitution.

Yes, indeed, at some description level.

If we would substitute a brain with a digital brain, I think it is
unavoidable we would discover that this digital brain can reflect on itself in a manner that necessitates that the computer is interpreted by something
beyond the computer, making the conclusion that we are just related to
computations wrong.

We are of course related to anything making sense of what is a computation, and what is an implementation. Up to step seven we can rely on a physical universe to proceed in the reasoning, and step 8 shows it to be a red herring, so that we have to rely on some part of arithmetical truth (in which computations make sense). I use arithmetical truth for simplicity; any first order logical specification of any universal system would work similarly.

To sum up, your main misunderstanding is that:
1) Exact computational states, or slightly changed ones after the recovering, are not relevant for the issue, and this is made clear at step seven, given that the robust universe running the concrete UD just goes through all those computational states, in all histories. The relevant points are only the first person indeterminacy, and its many invariances under some third person changes.
2) The immateriality of computations is not assumed in any step, and is the conclusion of the eighth step.

Don't hesitate to ask for any further precision,



You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.