On 02 Aug 2016, at 14:40, Bruce Kellett wrote:
On 2/08/2016 3:07 am, Bruno Marchal wrote:
On 01 Aug 2016, at 09:04, Bruce Kellett wrote:
Consider ordinary consequences of introspection: I can be
conscious of several unrelated things at once. I can be driving my
car, conscious of the road and traffic conditions (and responding
to them appropriately), while at the same time carrying on an
intelligent conversation with my wife, thinking about what I will
make for dinner, and, in the back of my mind, thinking about a
philosophical email exchange. These, and many other things, can be
present to my conscious mind at the same time. I can bring any one
of these things to the forefront of my mind at will, but
processing of the separate streams goes on all the time.
Given this, it is quite easy to imagine that a subset of these
simultaneous streams of consciousness might be associated with
myself in a different body -- in a different place at a different
time. I would be aware of things happening to the other body in
real time in my own consciousness -- because they would, in fact,
be happening to me.
If you dissociate consciousness from an actual single brain, then
these things are quite conceivable.
Dissociating consciousness from any actual single brain is what the
UDA explains in detail. Then the math shows that this dissociation
runs even deeper, as your 1p consciousness is associated with the
infinitely many relative and faithful states (at the correct
substitution level or below) in the (sigma_1) arithmetical relations.
Duplication experiments would then be a real test of the
hypothesis that consciousness could be separated from the physical
brain. If the duplicates are essentially separate conscious
beings, unaware of the thoughts and happenings of the other, then
consciousness is tied to a particular physical brain (or brain
substitute).
Not at all. It might look like that at that stage, but what you say
does not follow from computationalism. The same consciousness, present
at both places before the doors are opened, differentiates *only* when
the copies get the different bit of information, W or M.
However, if consciousness is actually an abstract computation that
is tied to a physical brain only in a statistical sense, then we
should expect that the single consciousness could inhabit several
bodies simultaneously.
It is irrelevant to decide how many consciousnesses or first persons
there are. We need only listen to those that have differentiated in
order to extract the statistics.
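The extraction of statistics from differentiated continuations can be illustrated with a small simulation. This is a sketch of my own, not part of the protocol: the function name and the uniform sampling of continuations are assumptions. In an iterated W/M self-duplication, almost every first-person history shows a W-frequency near 1/2, and that frequency is the statistics a surviving copy would report, however many consciousnesses we say there "really" are.

```python
import random

def iterated_duplication(n_steps: int, n_histories: int, seed: int = 0) -> float:
    """Sample first-person histories of an iterated W/M self-duplication.

    At each step a copy is reconstituted in Washington ('W') or Moscow
    ('M'); we follow one continuation at random, which is all a first
    person can do. Returns the average fraction of 'W' bits across the
    sampled histories.
    """
    rng = random.Random(seed)
    fractions = []
    for _ in range(n_histories):
        history = [rng.choice("WM") for _ in range(n_steps)]
        fractions.append(history.count("W") / n_steps)
    return sum(fractions) / n_histories
```

Running `iterated_duplication(1000, 200)` returns a value close to 0.5: the first-person statistics are well defined even though no continuation was singled out in advance.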
The point that I am trying to make here is that a person's
consciousness at any moment can consist of many independent threads.
From this I speculate that some of these separate threads could
actually be associated with separate physical bodies. In other
words, it is conceivable that a duplication experiment would not
result in two separate consciousnesses, but a single consciousness
in separate bodies. If this is so, the fact that the separate bodies
receive different inputs does not necessarily mean that they
differentiate into separate conscious beings, any more than the fact
that I receive different inputs from moment to moment means that I
dissociate into multiple consciousnesses.
It seems that the only reason that one might expect that the
different inputs experienced by the separate duplicates would lead
to a differentiation of the consciousnesses -- i.e., two separate
and distinct conscious beings -- is that one is implicitly making
the physicalist assumption that a single consciousness is
necessarily associated with a single body, such that separate
physical bodies necessarily have separate consciousnesses.
I suggest that for step 3 to go through, you need to demonstrate
that computationalism requires that a single consciousness cannot
inhabit two or more separate physical bodies: without such a
demonstration you cannot conclude that W&M is not a possible outcome
that the duplicated person could experience. You must demonstrate
that different inputs lead to a differentiation of the
consciousnesses in the duplication case, while not so
differentiating the consciousness of a single person. The required
demonstration must be based on the assumptions of computationalism
alone; you cannot rely on physics that is not yet in evidence.
In other words, start from your basic assumptions:
(1) The "yes doctor" hypothesis;
(2) The Church-Turing thesis; and
(3) Arithmetical realism;
(3) is redundant. There is no (2) without (3).
and demonstrate that consciousness is limited to a single physical
brain. Not that consciousness can be associated with a physical
brain; but that the one consciousness cannot inhabit two identical,
but physically separated brains.
?
Computationalism refutes that claim immediately. Take the WM-
duplication experience, perhaps the virtual case, so as to make the
reconstitution boxes as numerically identical as the copies of the
body (at the relevant digital level). Or just suppose that the atoms
in the reconstitution boxes do not distinguish the first-person
experiences. In such a case, after the guy pushes the button in
Helsinki, he will find himself with one consciousness, emulated in two
places at once. So one consciousness inhabits two physically separated
brains, and, as I explained to you in my preceding posts,
understanding this is part of understanding the FPI (step 3) and the
sequel. Eventually, one consciousness is emulated in infinitely many
different numerical relations in arithmetic, and the appearances of
bodies will emerge from that.
You asked me for something impossible, which contradicts comp
immediately and would be a problem for the sequel of the reasoning. It
is a bit weird.
As John Clark seems uninterested in the reasoning, and has failed to
answer my "QUESTION 1", I take the opportunity to ask it of you, given
that you seem to misunderstand the FPI.
You are told, in the WM duplication protocol, that both copies will
have a cup of coffee after the reconstitution. Do you agree that
P("experience of drinking coffee") = 1 (assuming digital mechanism
and, of course, all default hypotheses)? Do you think the guy in
Helsinki was wrong when he said, in Helsinki, that he expected to
drink some coffee soon?
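The question above reduces to counting over continuations. The sketch below is my own illustration, assuming the uniform measure on continuations that step 3 uses by default; `p_experience` and the `continuations` dictionary are hypothetical helpers, not anything from the UDA texts.

```python
from fractions import Fraction

# Both continuations of the Helsinki guy, with what each experiences.
# Per the stated protocol, both copies get a cup of coffee.
continuations = {
    "W": {"city": "W", "drink": "coffee"},
    "M": {"city": "M", "drink": "coffee"},
}

def p_experience(continuations: dict, key: str, value: str) -> Fraction:
    """First-person probability of an experience: the fraction of
    continuations in which it occurs, counted uniformly."""
    hits = sum(1 for c in continuations.values() if c[key] == value)
    return Fraction(hits, len(continuations))
```

Here `p_experience(continuations, "drink", "coffee")` is 1, since every continuation drinks coffee, while `p_experience(continuations, "city", "W")` is 1/2: the same counting that makes the coffee certain makes the city indeterminate, which is the FPI.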
Bruno
Bruce
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,
send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
http://iridia.ulb.ac.be/~marchal/