On 4/08/2016 1:04 am, Bruno Marchal wrote:
On 03 Aug 2016, at 07:16, Bruce Kellett wrote:
On 3/08/2016 2:55 am, Bruno Marchal wrote:
On 02 Aug 2016, at 14:40, Bruce Kellett wrote:
On 2/08/2016 3:07 am, Bruno Marchal wrote:
On 01 Aug 2016, at 09:04, Bruce Kellett wrote:

Consider ordinary consequences of introspection: I can be conscious of several unrelated things at once. I can be driving my car, conscious of the road and traffic conditions (and responding to them appropriately), while at the same time carrying on an intelligent conversation with my wife, thinking about what I will make for dinner, and, in the back of my mind thinking about a philosophical email exchange. These, and many other things, can be present to my conscious mind at the same time. I can bring any one of these things to the forefront of my mind at will, but processing of the separate streams goes on all the time.

Given this, it is quite easy to imagine that a subset of these simultaneous streams of consciousness might be associated with myself in a different body -- in a different place at a different time. I would be aware of things happening to the other body in real time in my own consciousness -- because they would, in fact, be happening to me.

If you dissociate consciousness from an actual single brain, then these things are quite conceivable.

Dissociating consciousness from any actual single brain is what the UDA explains in detail. Then the math shows that this dissociation runs even deeper, as your 1p consciousness is associated with the infinitely many relative and faithful (at the correct substitution level or below) states in the (sigma_1) arithmetical relations.

Duplication experiments would then be a real test of the hypothesis that consciousness could be separated from the physical brain. If the duplicates are essentially separate conscious beings, unaware of the thoughts and happenings of the other, then consciousness is tied to a particular physical brain (or brain substitute).

Not at all. It might look like that at that stage, but what you say does not follow from computationalism. The same consciousness, present at both places before the door is opened, *only* differentiates when the copies get the different bit of information, W or M.

However, if consciousness is actually an abstract computation that is tied to a physical brain only in a statistical sense, then we should expect that the single consciousness could inhabit several bodies simultaneously.

It is irrelevant to decide how many consciousnesses or first persons there are. We need only listen to those which have differentiated to extract the statistics.

The point that I am trying to make here is that a person's consciousness at any moment can consist of many independent threads. From this I speculate that some of these separate threads could actually be associated with separate physical bodies. In other words, it is conceivable that a duplication experiment would not result in two separate consciousnesses, but a single consciousness in separate bodies. If this is so, the fact that the separate bodies receive different inputs does not necessarily mean that they differentiate into separate conscious beings, any more than the fact that I receive different inputs from moment to moment means that I dissociate into multiple consciousnesses.

It seems that the only reason that one might expect that the different inputs experienced by the separate duplicates would lead to a differentiation of the consciousnesses -- i.e., two separate and distinct conscious beings -- is that one is implicitly making the physicalist assumption that a single consciousness is necessarily associated with a single body, such that separate physical bodies necessarily have separate consciousnesses.

I suggest that for step 3 to go through, you need to demonstrate that computationalism requires that a single consciousness cannot inhabit two or more separate physical bodies: without such a demonstration you cannot conclude that W&M is not a possible outcome that the duplicated person could experience. You must demonstrate that different inputs lead to a differentiation of the consciousnesses in the duplication case, while not so differentiating the consciousness of a single person. The required demonstration must be based on the assumptions of computationalism alone; you cannot rely on physics that is not yet in evidence.

In other words, start from your basic assumptions:
(1) The "yes doctor" hypothesis;
(2) The Church-Turing thesis; and
(3) Arithmetical realism;

(3) is redundant. There is no (2) without (3).

Yes there is. Arithmetical realism, as you use the term, is different from the ability to calculate.

No. I define arithmetical realism by the belief in elementary arithmetic.

I don't "believe in" elementary arithmetic -- I use it to do calculations.

I have even redefined an Arithmetical Realist as someone who does not complain to the director of his children's school when they learn arithmetic. There is no "metaphysical assumption" here. I use arithmetic as all theoretical physicists use it.

You use it to define an ontology. Your metaphysics is evident.

You can believe that 2+2=4 is true without committing to the actual existence of entities corresponding to '2', '4', etc.

Do you have a problem with predicate logic? I guess not, as you would have said so before.

I use the common inference rule:  P(n) ⟹ ∃x P(x),
so from s(0) + s(0) = s(s(0)), I can derive ∃x (x + s(0) = s(s(0))).
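[As an aside, the ∃-introduction step invoked here can be checked mechanically. A minimal sketch in Lean 4, using numeral notation in place of s(0) and s(s(0)) — the witness x = 1 plays the role of s(0):]

```lean
-- ∃-introduction: from a proof of P(n), conclude ∃ x, P(x).
-- Here P(x) is "x + 1 = 2", and the witness is x = 1;
-- `rfl` closes the goal because 1 + 1 reduces to 2 by computation.
example : ∃ x : Nat, x + 1 = 2 := ⟨1, rfl⟩
```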

There is no ontology inherent in arithmetic, so the results of arithmetic are not efficient causes of anything at all.

and demonstrate that consciousness is limited to a single physical brain. Not that consciousness can be associated with a physical brain; but that the one consciousness cannot inhabit two identical, but physically separated brains.

?

Computationalism refutes that claim immediately. Take the WM-duplication experience, perhaps in the virtual case, so as to make the reconstitution boxes as numerically identical as the copies of the body (at the relevant digital level). Or just suppose the atoms in the reconstitution box do not distinguish the first-person experiences. In such a case, after the guy pushes the button in Helsinki, he will find himself with one consciousness, emulated in two places at once. So one consciousness inhabits two physically separated brains, and as I explained in my preceding posts, the understanding of this is part of the understanding of the FPI (step 3) and the sequel. Eventually, one consciousness is emulated in infinitely many different numerical relations in arithmetic, and the bodily appearances will emerge from that.

You asked me something impossible, contradicting comp immediately, and which would be a problem for the sequel of the reasoning. It is a bit weird.

It is a bit weird that you do not understand the point I am making. What I ask is entirely reasonable.

I just proved in my last post to you that it is impossible.

If it is impossible then your "proof" is useless.


You use the assumption that the duplicated consciousnesses automatically differentiate when receiving different inputs.

It is not an assumption.

Of course it is an assumption. You have not derived it from anything previously in evidence.

Up to step 3, I use only the notion of first person, and it is defined by the content of the diary that the person doing the duplication transported with him/herself. Once the copies open the door, the diaries differentiate. One diary contains H-W, the other contains H-M.
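[The diary bookkeeping described here is mechanical enough to sketch. A minimal illustrative toy in Python — not Bruno's protocol, just the point that identical copies of the diary differentiate only when each copy appends the bit it receives on opening the door:]

```python
# The first-person diary is duplicated along with the body; before the
# door opens, the two copies are identical.
helsinki_diary = ["H"]  # diary content written in Helsinki

# Reconstitution produces two independent copies of the same diary.
washington = list(helsinki_diary)
moscow = list(helsinki_diary)

# Opening the door delivers the differentiating bit of information.
washington.append("W")
moscow.append("M")

print(washington)  # ['H', 'W']
print(moscow)      # ['H', 'M']
```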

The one consciousness could well be aware of two different diaries simultaneously, just as it is aware of two cities simultaneously. Diaries add nothing to the argument -- despite your seeming reliance on them.

I ask you to justify that assumption without appealing to physicalist notions such as mind-body identity.

Well, I show that computationalism (like Everett, by the way) violates the mind-brain identity thesis. So I don't use it at all, except in the "yes doctor" sense of the comp hypothesis.

"Computationalism" as you use it here is the endpoint of the argument -- it cannot be used to justify intermediate steps in the argument. The "yes doctor" hypothesis does not refute the mind-brain identity thesis, since you are replacing a physical brain by an entirely equivalent physical "brain" (viz., an emulation on a physical computer).

No physical assumptions are made, except local ones for pedagogical purposes, and they will be eliminated later. Indeed, that is the main object of the proof (+ some use of Occam, as always in applied science).

If it is only pedagogical, then you can eliminate this local assumption. That is why I ask you to prove that the physically separated consciousnesses diverge directly from your stated assumptions of YD, Church, and arithmetical realism. If you cannot do this, your argument collapses, and computationalism is false.

I think I am beginning to see what you mean when you say that everything you say assumes computationalism. By 'computationalism' you do not mean the three basic assumptions listed above -- rather, you mean the end point of the argument, including the arithmetic-physics reversal.

Of course not.
Please let us go step by step. Tell me if you are OK with Clark's answer to question 1, and what you think about question 2. Then we can proceed.

I have not kept any record of your exchanges with John Clark so I have no idea what you are talking about.

Your argument then goes something like this: we have assumed computationalism is true, namely that the endpoint of the argument for my (Bruno's) theory is true. From this it follows that all the steps taken to reach that conclusion must also be valid/true, so one cannot criticize the conclusion by undermining any of the intermediate steps because they are true by assumption.

Please read the argument.

I have -- that is what it says.

That is a neat trick if you can get away with it, but all it means is that your arguments are irreducibly and irredeemably circular. /Reductio ad absurdum/ (assume the conclusion and from that deduce a contradiction) is not the only way that one can show a purported proof to be invalid: all that it takes for the whole edifice to collapse is that one shows that just one step in the proof is invalid.

As I understand the structure of your argument, you claim that the UDA -- in arithmetic -- involves an infinity of computations that pass through your conscious state.

UD ≠ UDA.

That does not answer the point I have made.

You then want to use this to show that physics can be derived from arithmetic by looking at the statistics of all these computations and selecting out a consistent set -- which would, it is claimed, correspond to the physics we observe. An essential ingredient of this final phase of the argument is FPI, which is why the early steps of your deductive argument aim to establish the FPI from more elementary considerations.

But you have not succeeded in doing this because an essential element of the FPI in step 3 is that consciousnesses in separate bodies differentiate on different inputs.

See above.

See above.

The only way in which this could happen is if consciousness is localized to a particular physical body.

Why ?

You tell me. You are the one who claims that this differentiation occurs.

(You acknowledge the truth of this when you say that for one consciousness to inhabit more than one body would require telepathy or "spooky action at a distance".) But the physics required to establish this is not available until you have recovered physics from arithmetic at step 8.

Physics is no more required than a bit of biology, but not at the primary level.

Biology is reducible to physics.

You cannot call upon the results of physics to establish that physics is derivative and not fundamental

Why not? The whole enterprise would be senseless if I did not believe in some physical reality. Computationalism would be senseless.

Finally, something we can agree on. Computationalism is senseless.

But what is not assumed is that primary physical assumptions are necessary. Any computation in arithmetic would work as well, but would be unpedagogical.

Get over your concerns with whether the physical is primary. You are claiming to derive physics from arithmetic, so you cannot use physics in your derivation.


(unless via a /reductio/ argument, but step 3 is not a /reductio/)

Indeed. Just tell me if you are OK with John Clark's answer, and then if you agree with the principle exposed in question 2.

I have no idea what you are talking about.

As John Clark seems uninterested in the reasoning, and failed to answer my "QUESTION 1", I take the opportunity to ask you, given that you seem to misunderstand the FPI.

You are told, in the WM duplication protocol, that both copies will have a cup of coffee after the reconstitution. Are you OK that P("experience of drinking coffee") = 1? (assuming digital mechanism and, of course, all default hypotheses). Do you think the guy in Helsinki was wrong when he said, in Helsinki, that he expected to drink some coffee soon?

What possible relevance has that to the points that I am making?

It is a sub-step helping to get step 3, and thus the local FPI. The global FPI, which is needed for the reversal, will be given in step 7.

I know. That is why deriving the FPI in step 3 is crucial to your argument. But step 3, as presented, assumes the results of step 7, so the argument is invalidly circular.

Bruce

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
Visit this group at https://groups.google.com/group/everything-list.