On 3/08/2016 2:55 am, Bruno Marchal wrote:
On 02 Aug 2016, at 14:40, Bruce Kellett wrote:
On 2/08/2016 3:07 am, Bruno Marchal wrote:
On 01 Aug 2016, at 09:04, Bruce Kellett wrote:
Consider ordinary consequences of introspection: I can be conscious
of several unrelated things at once. I can be driving my car,
conscious of the road and traffic conditions (and responding to
them appropriately), while at the same time carrying on an
intelligent conversation with my wife, thinking about what I will
make for dinner, and, in the back of my mind, thinking about a
philosophical email exchange. These, and many other things, can be
present to my conscious mind at the same time. I can bring any one
of these things to the forefront of my mind at will, but processing
of the separate streams goes on all the time.
Given this, it is quite easy to imagine that a subset of these
simultaneous streams of consciousness might be associated with
myself in a different body -- in a different place at a different
time. I would be aware of things happening to the other body in
real time in my own consciousness -- because they would, in fact,
be happening to me.
If you dissociate consciousness from an actual single brain, then
these things are quite conceivable.
Dissociating consciousness from any actual single brain is what UDA
explains in detail. Then the math shows that this dissociation runs
even deeper, as your 1p consciousness is associated with the
infinitely many relative and faithful (at the correct substitution
level or below) states in the (sigma_1) arithmetical relations.
Duplication experiments would then be a real test of the hypothesis
that consciousness could be separated from the physical brain. If
the duplicates are essentially separate conscious beings, unaware
of the thoughts and happenings of the other, then consciousness is
tied to a particular physical brain (or brain substitute).
Not at all, though it might look like that at that stage; what you
say does not follow from computationalism. The same consciousness is
present at both places before the door is open, and is *only*
differentiated when the copies get the different bit of information,
W or M.
However, if consciousness is actually an abstract computation that
is tied to a physical brain only in a statistical sense, then we
should expect that the single consciousness could inhabit several
bodies simultaneously.
It is irrelevant to decide how many consciousnesses or first persons
there are. We need only listen to those who have differentiated in
order to extract the statistics.
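[The statistics appealed to here can be made concrete with a small enumeration. The sketch below is not part of the original exchange, and the encoding of the protocol is my own assumption: it lists every first-person history after n iterated W/M duplications and counts how many of the 2^n differentiated observers recorded W roughly half the time.]

```python
from itertools import product

def duplication_histories(n):
    """All 2**n first-person histories after n iterated W/M duplications.

    Each history is a tuple of bits: 1 = this copy found itself in
    Washington at that step, 0 = it found itself in Moscow.
    """
    return list(product((0, 1), repeat=n))

def fraction_near_half(n, tol=0.1):
    """Fraction of the 2**n differentiated observers whose recorded
    W-frequency lies within `tol` of 1/2."""
    histories = duplication_histories(n)
    near = [h for h in histories if abs(sum(h) / n - 0.5) <= tol]
    return len(near) / len(histories)

if __name__ == "__main__":
    # After 10 duplications there are 1024 distinct first-person histories;
    # 672 of them (C(10,4)+C(10,5)+C(10,6)) saw W between 40% and 60% of
    # the time, and that fraction tends to 1 as n grows.
    print(fraction_near_half(10))  # 0.65625
```

[On this toy model, "listening to those who have differentiated" just means tallying the recorded histories; the majority report W-frequencies near 1/2.]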
The point that I am trying to make here is that a person's
consciousness at any moment can consist of many independent threads.
From this I speculate that some of these separate threads could
actually be associated with separate physical bodies. In other words,
it is conceivable that a duplication experiment would not result in
two separate consciousnesses, but a single consciousness in separate
bodies. If this is so, the fact that the separate bodies receive
different inputs does not necessarily mean that they differentiate
into separate conscious beings, any more than the fact that I receive
different inputs from moment to moment means that I dissociate into
multiple consciousnesses.
It seems that the only reason that one might expect that the
different inputs experienced by the separate duplicates would lead to
a differentiation of the consciousnesses -- i.e., two separate and
distinct conscious beings -- is that one is implicitly making the
physicalist assumption that a single consciousness is necessarily
associated with a single body, such that separate physical bodies
necessarily have separate consciousnesses.
I suggest that for step 3 to go through, you need to demonstrate that
computationalism requires that a single consciousness cannot inhabit
two or more separate physical bodies: without such a demonstration
you cannot conclude that W&M is not a possible outcome that the
duplicated person could experience. You must demonstrate that
different inputs lead to a differentiation of the consciousnesses in
the duplication case, while not so differentiating the consciousness
of a single person. The required demonstration must be based on the
assumptions of computationalism alone; you cannot rely on physics
that is not yet in evidence.
In other words, start from your basic assumptions:
(1) The "yes doctor" hypothesis;
(2) The Church-Turing thesis; and
(3) Arithmetical realism;
(3) is redundant. There is no (2) without (3).
Yes there is. Arithmetical realism, as you use the term, is different
from the ability to calculate. You can believe that 2+2=4 is true
without committing to the actual existence of entities corresponding to
'2', '4', etc.
and demonstrate that consciousness is limited to a single physical
brain. Not that consciousness can be associated with a physical
brain; but that the one consciousness cannot inhabit two identical,
but physically separated brains.
?
Computationalism refutes that claim immediately. Take the
WM-duplication experience, perhaps in the virtual case, so as to make
the reconstitution boxes as numerically identical as the copies of
the body (at the relevant digital level). Or just suppose that the
atoms in the reconstitution box do not distinguish the first-person
experiences. In such a case, after the guy pushes the button in
Helsinki, he will find himself with one consciousness, emulated in two
places at once. So one consciousness inhabits two physically separated
brains, and, as I explained to you in my preceding posts, the
understanding of this is part of the understanding of the FPI (step 3)
and the sequel. Eventually, one consciousness is emulated in
infinitely many different numerical relations in arithmetic, and the
appearances of bodies will emerge from that.
You asked me for something impossible, which contradicts comp
immediately and would be a problem for the sequel of the reasoning. It
is a bit weird.
It is a bit weird that you do not understand the point I am making. What
I ask is entirely reasonable. You use the assumption that the duplicated
consciousnesses automatically differentiate when receiving different
inputs. I ask you to justify that assumption without appealing to
physicalist notions such as mind-body identity.
I think I am beginning to see what you mean when you say that everything
you say assumes computationalism. By 'computationalism' you do not mean
the three basic assumptions listed above -- rather, you mean the end
point of the argument, including the arithmetic-physics reversal. Your
argument then goes something like this: we have assumed computationalism
is true, namely that the endpoint of the argument for my (Bruno's)
theory is true. From this it follows that all the steps taken to reach
that conclusion must also be valid/true, so one cannot criticize the
conclusion by undermining any of the intermediate steps because they are
true by assumption.
That is a neat trick if you can get away with it, but all it means is
that your arguments are irreducibly and irredeemably circular. /Reductio
ad absurdum/ (assume the conclusion and from that deduce a
contradiction) is not the only way that one can show a purported proof
to be invalid: all that it takes for the whole edifice to collapse is
that one shows that just one step in the proof is invalid.
As I understand the structure of your argument, you claim that the UDA
-- in arithmetic -- involves an infinity of computations that pass
through your conscious state. You then want to use this to show that
physics can be derived from arithmetic by looking at the statistics of
all these computations and selecting out a consistent set -- which
would, it is claimed, correspond to the physics we observe. An essential
ingredient of this final phase of the argument is FPI, which is why the
early steps of your deductive argument aim to establish the FPI from
more elementary considerations.
But you have not succeeded in doing this because an essential element of
the FPI in step 3 is that consciousnesses in separate bodies
differentiate on different inputs. The only way in which this could
happen is if consciousness is localized to a particular physical body.
(You acknowledge the truth of this when you say that for one
consciousness to inhabit more than one body would require telepathy or
"spooky action at a distance".) But the physics required to establish
this is not available until you have recovered physics from arithmetic
at step 8. You cannot call upon the results of physics to establish that
physics is derivative and not fundamental (unless via a /reductio/
argument, but step 3 is not a /reductio/).
As John Clark seems uninterested in the reasoning, and failed to
answer my "QUESTION 1", I take the opportunity to ask you, given that
you seem to misunderstand the FPI.
You are told, in the WM-duplication protocol, that both copies will
have a cup of coffee after the reconstitution. Do you agree that
P("experience of drinking coffee") = 1 (assuming digital mechanism
and, of course, all default hypotheses)? Do you think the guy in
Helsinki was wrong when he said, in Helsinki, that he expected to
drink some coffee soon?
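[For what it is worth, this question can be encoded in a few lines. The sketch below is my own, not part of the exchange, and it assumes the uniform first-person weighting over the two differentiated continuations that the FPI argument uses: since both continuations include the coffee, the coffee prediction gets probability 1 even though each city gets only 1/2.]

```python
# The two first-person continuations after the WM duplication,
# as described in the protocol: each copy gets a city and a coffee.
continuations = [
    {"city": "W", "coffee": True},
    {"city": "M", "coffee": True},
]

def p(event):
    """First-person probability of `event`: the fraction of differentiated
    continuations satisfying it, weighted uniformly (the FPI assumption)."""
    return sum(1 for c in continuations if event(c)) / len(continuations)

print(p(lambda c: c["coffee"]))        # 1.0 -- coffee is certain
print(p(lambda c: c["city"] == "W"))   # 0.5 -- the city is indeterminate
```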
What possible relevance has that to the points that I am making?
Bruce
--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.