On 10/7/2011 7:11 PM, Stathis Papaioannou wrote:
On Tue, Oct 4, 2011 at 3:02 AM, Bruno Marchal<marc...@ulb.ac.be> wrote:
Nevertheless, you talk about swapping your brain for a suitably
designed computer and consciousness surviving teleportation and
pauses/restarts of the computer.
As a starting point, these ideas
assume the physical supervenience thesis.
It does not. At the start it is neutral on this. A computationalist
practitioner (one who knows the UDA, for example) can associate his
consciousness with all the computations going through his state, and believe
that he will survive locally on the normal computations (the usual "physical
reality") only because all the pieces of matter used by the doctors share his
normal histories and emulate the right computation at the right level. But
consciousness is not thereby attributed to some physical happening; it is
attributed to the infinitely many arithmetical relations defining his
possible and most probable histories.
Only in step 8 is physical supervenience assumed, and only to obtain the
reductio ad absurdum.
There is no [consciousness] evolving in [time and space]. There is only
[consciousness of time and space], "evolving" (from the internal indexical
perspective) but relying on, and associated with, infinities of arithmetical
relations (in the 3-view).
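The "all the computations going through his state" above refers to the
Universal Dovetailer (UD), which generates and interleaves the execution of
every program so that no single non-halting program blocks the rest. A
minimal sketch of that interleaving scheme — the step-counting "programs"
here are stand-ins of my own, purely for illustration:

```python
def dovetail(num_programs, rounds):
    """Interleave execution: in round n, step programs 0..n once each,
    so every program eventually receives every one of its steps."""
    counters = [0] * num_programs
    trace = []  # record of (program_index, step_number) pairs, in order run
    for n in range(rounds):
        for i in range(min(n + 1, num_programs)):
            counters[i] += 1          # one step of program i
            trace.append((i, counters[i]))
    return trace

trace = dovetail(num_programs=3, rounds=4)
# program 0 starts first, but every program index appears as rounds grow
```

As the number of rounds grows without bound, every step of every program is
eventually executed, which is why the UD "goes through" every computational
state infinitely often.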
The progression surely must be to start by assuming that your mind is
generated as a result of brain activity, rather than by an immaterial
soul. You then consider whether you would retain consciousness if you
accepted a computerised brain.
There might be two different choices here. One would be a kind of artificial
neuron, or bundle of neurons, physically placed in your head and designed
with the same connectivity as your natural neurons. The other would be a
transceiver that sends the afferent signals intended for your brain to a
computer outside your body, which performs some calculation emulating your
brain and then sends the result back to the efferent nerve connections.
Within the multiverse being instantiated by the UD, these might correspond
to very different states of computation, even though they are the same as
far as your input/output is concerned.
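The distinction being drawn — identical input/output behaviour, yet
different intermediate computational states — can be made concrete with a
toy sketch. Both functions below are invented examples of mine, not anything
from the thread; they stand in for the "artificial neuron" and the "remote
emulation" respectively:

```python
def local_neuron(x):
    # "artificial neuron in the head": computes the response in one step
    return 2 * x + 1

def remote_emulation(x):
    # "transceiver + external computer": same input/output mapping, but
    # a different sequence of intermediate states is traversed
    partial = x + x       # distinct internal step
    partial = partial + 1
    return partial

# Externally indistinguishable over any input...
assert all(local_neuron(x) == remote_emulation(x) for x in range(100))
```

From the outside (the input/output relation) the two are the same function;
as computations traced state by state — the level at which the UD
individuates them — they are different objects.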
If you decide yes, you accept computationalism, and if you accept
computationalism you can show that physical supervenience is problematic.
You then adjust your theory: either keep computationalism and drop physical
supervenience, or drop computationalism altogether. This is the sequence in
which most people would think about it.