On 21 Feb 2012, at 18:05, meekerdb wrote:
On 2/21/2012 8:32 AM, Terren Suydam wrote:
Bruno and others,
Here's a thought experiment that for me casts doubt on the notion that
consciousness requires 1p indeterminacy.
Imagine that we have scanned my friend Mary so that we have a
functional description of her brain (down to some substitution level
that we are betting on). We run the scan in a simulated classical
physics. The simulation is completely closed, which is to say,
deterministic. In other words, we can run the simulation a million
times for a million years each time, and the states of all of them will
be identical. Now, when we run the simulation, we can ask her (within
the context of the simulation), "Are you conscious, Mary? Are you aware
of your thoughts?" She replies yes.
Next, we tweak the simulation in the following way. We plug in a
source of quantum randomness (random numbers from a quantum random
number generator) into a simulated water fountain. Now, the simulation
is no longer deterministic. A million runs of the simulation will
result in a million different computational states after a million
years. We ask the same questions of Mary and she replies "yes".
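The contrast between the two scenarios can be sketched as a toy program. This is only an illustrative sketch, not anything from the thread: a hash update stands in for the "functional description" of Mary's brain, and `os.urandom` stands in for the quantum random number generator feeding the water fountain. A closed run reaches the identical final state every time; a noisy run diverges.

```python
import hashlib
import os

def step(state: bytes, noise: bytes = b"") -> bytes:
    # One deterministic update of the simulated "brain" state.
    # The optional noise stands in for the quantum water fountain.
    return hashlib.sha256(state + noise).digest()

def run(initial: bytes, steps: int, noisy: bool = False) -> bytes:
    state = initial
    for _ in range(steps):
        noise = os.urandom(8) if noisy else b""
        state = step(state, noise)
    return state

# Closed (deterministic) simulation: every run ends in the same state.
a = run(b"mary", 1000)
b = run(b"mary", 1000)
assert a == b

# With a random source plugged in, independent runs diverge
# (with overwhelming probability).
c = run(b"mary", 1000, noisy=True)
d = run(b"mary", 1000, noisy=True)
assert c != d
```

The point of the sketch is only that the divergence comes entirely from the injected randomness; the update rule itself is unchanged between the two scenarios.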
In the deterministic scenario, Mary's computational state is traced an
infinite number of times in the UD*, but only because of the infinite
number of ways a particular computational state can be instantiated in
the UD* (different levels, UDs implementing other UDs recursively,
iteration along the reals, etc.). It's a stretch, however, to say that
there is 1p indeterminacy, because her computational state as
implemented in the simulation is deterministic.
In the second scenario, her computational state is traced in the UD*
and it is clear there is 1p indeterminacy, as the splitting entailed
by the quantum random number generator "brings Mary along", so to speak.
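The UD ("universal dovetailer") invoked here is an algorithm that interleaves the execution of all programs, giving each one more step per phase so that no program is ever starved. A minimal sketch, using toy counter "programs" in place of an enumeration of all Turing machines (all names are illustrative):

```python
def dovetail(phases: int):
    """Interleave execution of programs 0..phases-1.

    Program i is modeled as a generator counting in steps of i+1;
    a real UD would enumerate and step all possible programs.
    """
    def program(i):
        x = 0
        while True:
            x += i + 1
            yield (i, x)

    progs = []
    trace = []
    for phase in range(phases):
        progs.append(program(phase))  # introduce one new program per phase
        for p in progs:               # advance every live program one step
            trace.append(next(p))
    # No program is abandoned: each gets unboundedly many steps as
    # the number of phases grows, so every reachable state is generated.
    return trace

trace = dovetail(4)
```

Run long enough, such a dovetailing generates every computational state of every program, which is why Mary's state is "traced" in the UD* in both scenarios.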
So if Mary is not conscious in the deterministic scenario, she is a
zombie. The only way to be consistent with this conclusion is to
insist that the substitution level must be at the quantum level.
If OTOH she is conscious, then consciousness does not require 1p indeterminacy.
But is it really either-or? Isn't it likely there are different
kinds and degrees of consciousness? I'm not clear on what Bruno's
theory says about this. On the one hand he says all Lobian machines
are (equally?) conscious, but then he says it depends on the program
they are executing.
Imagine that I am duplicated in W and M. I would say that the guy in M
and the guy in W are equally conscious, and that both are me, although
they will feel very different and have different contents of consciousness.
In that sense I would say that all Löbian machines are equally
conscious. Of course the Löbian humans have very different experiences
than the jumping spider, and even more different ones than Peano Arithmetic.
As I said in another post today, I am not sure why Terren thinks that
the first person indeterminacy is needed for consciousness. First
person indeterminacy is implied by the self-multiplication (in the UD,
say), as a consequence of comp, but is not presented as something
needed for the existence of consciousness. Mary is conscious in both
scenarios. But comp implies, as Quentin said, that she cannot escape
the indeterminacy of her many continuations in the UD. It is hoped
that the QM indeterminacy is just a reflection of the comp
indeterminacy, so that QM confirms comp. The Everett multiplication of
populations of machines in QM would also be an empirical reason to
assert that comp does not lead to solipsism (which I would take as a
refutation of comp, if that happens to be the case). The appearance of
a quantum logic in the material hypostases is a reassuring step in
that direction.
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to email@example.com.