Sorry, I answered a paragraph too quickly. You raised a key question
which is at the crux of the mind-body problem, and its comp
On 09 Feb 2012, at 10:49, Bruno Marchal wrote:
On 07 Feb 2012, at 23:05, Craig Weinberg wrote:
I think that the 1p-sense that the machine has is unrelated to
It is related to an infinity of 3p local representations.
What makes it anything other than that?
That "nothing" is not correct.
A better answer is computer science and truth.
I might say comp, by definition. But I guess you are arguing against
comp, so I have to explain more.
But for this you have to be able to assume comp, if only temporarily.
With comp, we are duplicable. I can be "cut" in Brussels, and pasted
in two places, W and M, say. In that simple local case, we get
two 3p local representations of 3-me (my body at the right comp
level): one in W and one in M.
The one in M will observe his environment, and conclude that he feels,
subjectively, to be in M, and not to be in W. (And similarly for the
one in W). OK?
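The protocol just described can be toy-modeled in a few lines. This is only an illustrative sketch, and all the names in it (`duplicate`, `first_person_report`) are hypothetical, not from any real library: one 3p description is copied to two cities, and each reconstituted copy makes a different 1p observation.

```python
# Toy model of the comp duplication protocol: cut one 3p description,
# paste a copy in each city, and let each copy report its 1p view.
# All names here are illustrative only.

def duplicate(description, cities):
    """Return one 3p local representation per city."""
    return [{"body": description, "city": city} for city in cities]

def first_person_report(copy):
    """Each reconstituted copy looks around and says where it feels to be."""
    return f"I am in {copy['city']}, and not elsewhere."

copies = duplicate("3-me (body at the right comp level)", ["W", "M"])
# The 3p view contains both copies; each 1p report mentions only one city.
for c in copies:
    print(first_person_report(c))
```

The point of the sketch is that the full 3p list (`copies`) contains both bodies, while each individual report is, from the inside, about one city only.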
Now, even without using comp, nor even strong AI, but just the much
weaker behavioral-comp (which allows zombies and accepts that machines
can at least imitate human behavior), you should be able to
understand the explanation that the zombie in M (say) will give to
your question, which is that computer science will make one
machine (betting on comp, and surviving or pretending to survive) that
knows the difference between the objective collection of the 3-me in M
and the 3-me in W, and what she personally feels when looking where she is.
This is a point which, I think, has already been made by Gunderson:
men, or machines, once individuated, are individuals, and this
entails a natural asymmetry between your body and the bodies of
others. For example, you can see the back of anybody else's neck
more directly than your own. That kind of obvious truth
holds for the machine as for the man. A machine can understand that
an objective description of all the existing 3ps will not allow the
selection of one particular 1p. So what can make that selection? The machine can
understand the zombie machine in M, who will just say that she looked
around and recognized M, making her understand the difference between
her 1p and the "objective" 3p.
Formally, this will be the difference between the Gödelian Bp, which
asserts only that the machine (conceived as a 3p body, code, or
number) believes (asserts) p, and (Bp & p), which asserts that the
machine believes p and that "God agrees" (so to speak), I mean that p is true.
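That formal difference can also be toy-modeled: Bp is membership in the machine's set of asserted propositions, while Bp & p additionally requires the proposition to hold in fact, where "fact" is fixed from outside the machine. A minimal sketch, with all names hypothetical:

```python
# Toy separation of Bp (assertion) from Bp & p (assertion plus truth).
# Propositions are plain strings; `facts` plays the external "God agrees"
# role, which the machine itself cannot inspect from the inside.

beliefs = {"I am in M", "1 + 1 = 2"}   # what the machine asserts: Bp
facts   = {"I am in W", "1 + 1 = 2"}   # what is actually the case: p

def B(p):
    """Bp: the machine asserts p."""
    return p in beliefs

def knows(p):
    """Bp & p: the machine asserts p, and p is true."""
    return B(p) and p in facts

print(B("I am in M"))      # asserted...
print(knows("I am in M"))  # ...but false in fact, so not knowledge
print(knows("1 + 1 = 2"))  # asserted and true
```

The sketch only illustrates the logical shape: the machine can enumerate `beliefs`, but nothing inside the machine distinguishes a true belief from a false one; only the conjunction with the external `facts` does.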
You are saying that all the 3ps together cannot create the sense. I am
saying that we can interview the individuated machines, and that for
them a sort of miracle occurs: they know perfectly well the difference
between themselves and the others. And they can make that difference
relative to their probable computations.
In that context, you can describe a "free-will" choice as a form of
self-killing: for example, by duplicating you in W and M, but
annihilating you in W, or in M, according to your prior will; or by
deciding not to reconstitute yourself in some place. A free choice is
a form of premeditated suicide. A local pruning of possibilities.
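The pruning idea can be sketched the same way as the duplication: copy into W and M, then annihilate every continuation that does not match a decision fixed in advance. Again, all names are illustrative, not part of any real library:

```python
# Toy "premeditated pruning": duplicate into two cities, then keep only
# the continuation chosen by the prior will.
# All names here are illustrative only.

def duplicate(cities):
    """One 3p continuation per city."""
    return [{"city": c} for c in cities]

def prune(copies, will):
    """Annihilate every copy that does not match the decision made beforehand."""
    return [c for c in copies if c["city"] == will]

survivors = prune(duplicate(["W", "M"]), will="M")
print([c["city"] for c in survivors])
```

The "choice" here is entirely in the prior `will` argument: after the pruning, only one continuation remains, which is the sense in which a free choice acts as a local pruning of possibilities.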
The distinction between the 1p and the collection of 3ps will be natural from
the machine's points of view, and is indeed a difference of points of view.
You received this message because you are subscribed to the Google Groups
"Everything List" group.