On 29 Oct 2013, at 21:41, Jason Resch wrote:
On Tue, Oct 29, 2013 at 2:06 PM, meekerdb <meeke...@verizon.net> wrote:
On 10/29/2013 8:19 AM, Jason Resch wrote:
Perhaps it is simpler to think about first person indeterminacy
like this (it requires some familiarity with programming,
but I will try to elaborate on those details):
Imagine there is a conscious AI inside a virtual environment (a simulated world running as a process on some computer).
Inside that virtual environment is a ball, which the AI is looking
at, and next to the ball is a note which reads:
"At noon (when the virtual sun is directly overhead) the protocol
will begin. In the protocol, the process containing this
simulation will fork (split in two). After the fork, the color of
the ball will change to red in the parent process and to blue in
the child process (forking duplicates a process into two identical
copies, one called the parent and the other the child). One second
after the color of the ball is set, another fork will happen. This
will happen 8 times, leading to 256 processes, after which the
simulation will end."
It is 11:59 in the simulation. What can the AI expect to see during
the next 1 minute and 8 seconds?
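[For concreteness, here is a minimal C sketch of the protocol the note describes, assuming a POSIX system; the color encoding, timing, and output are illustrative choices, not part of the original post:

    /* Each fork() doubles the number of processes; after 8 rounds there
     * are 2^8 = 256 copies, each remembering a different history of
     * ball colors. Error handling is omitted since this is a sketch. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char history[9] = {0};                  /* one color letter per round */
        for (int round = 0; round < 8; round++) {
            pid_t pid = fork();                 /* duplicate the whole simulation */
            history[round] = (pid != 0) ? 'R'   /* parent: ball turns red  */
                                        : 'B';  /* child:  ball turns blue */
            sleep(1);                           /* "one second after the color is set" */
        }
        printf("this copy saw: %s\n", history); /* 256 distinct histories */
        return 0;
    }

Each of the 256 surviving processes prints a different 8-letter history, which is the point of the thought experiment: no single copy ever sees more than one of them.]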
I don't see that as any different.
It is similar, but it never hurts to look at the same problem from
different angles. What is a little more evident in this case is
that of the 256 possible memories of the AI about to meet its doom,
none contain the memory of seeing all 256 possibilities, and in fact,
the majority of them see the ball change color back and forth at
random. Only 2 see it stay all red or all blue for the last 8
seconds. None of them can predict, from the view inside the
simulation, whether the ball will stay the same color or change
after the next fork occurs.
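[One way to check the counting: of the 2^8 = 256 possible 8-round color histories, exactly 2 are constant and most flip back and forth. A small tally, my illustration rather than anything from the post:

    /* Bit k of h encodes the ball's color in round k; count how many of
     * the 256 histories contain each possible number of color changes. */
    #include <stdio.h>

    int main(void) {
        int tally[8] = {0};              /* tally[k] = histories with k changes */
        for (int h = 0; h < 256; h++) {
            int changes = 0;
            for (int k = 0; k < 7; k++)
                if (((h >> k) & 1) != ((h >> (k + 1)) & 1))
                    changes++;
            tally[changes]++;
        }
        for (int k = 0; k < 8; k++)
            printf("%3d histories with %d color change(s)\n", tally[k], k);
        return 0;
    }

It prints "2 histories with 0 color change(s)" (all red and all blue), matching the count above, with the bulk of the histories clustered around 3 or 4 changes, i.e. looking random.]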
The problem is still what is the referent of "the AI". As John
Clark points out "the AI" is ambiguous when there are duplicates.
Personal identity is less of an issue in this case, because it
concerns the AI or anything/anyone else inside the simulation who
might also be viewing the ball. In this way, it is slightly more
analogous to MWI since it is the environment which is duplicated,
not just the person, and so the apparently random changing of the ball
color is also something that can be agreed upon by the group of
observers within the simulation.
Sometimes Bruno talks about "the universal person" who is merely
embodied as particular persons. So on that view it would be right
to say *the* universal person sees Washington and Moscow.
But not "at the same time" or as "an integrated experience", so the
appearance of randomness still arises from the first person point of view.
But then that's contrary to identifying a person by their memories.
My view is that "a person" is just a useful model when there is no
duplication, and that's true whether the duplication is via Everett
or Bruno's teleporter.
What model should be used in a world with duplication, fission
machines, mind uploading, split brains, biological clones, amnesia,
etc.? Or does personhood no longer make sense at all in the face of
such possibilities? Personally, I believe no theory that aims to attach persons to one
psychological or physiological continuity can be successful.
I agree. The notion of personal identity is distracting with respect
to UDA. All that is needed is the belief in mundane, perfect
survival with an artificial brain, or equivalently, the invariance
of consciousness (1p) under some 3p-self transformation.
Of course, there *are* interesting connections with the notion of
personal identity, but that is another topic. Note that with computer
science, the 3-self has a completely standard definition (using the Dx
= "xx" method), and the 1-self, if you agree to define it by the
"(self)-knower", looks completely like the unnameable inner God of
the mystics and the ancients, and formally it obeys the most classical
theory of knowledge, adding some (hard to interpret) precision to it.
Here there is something admittedly subtle (which took me 30 years of
mathematical logic to figure out).
The 3-self notion is definable. Basically it is the plan of the
machine, its "Gödel number", the i of the phi_i.
That some machines have self-referential abilities comes from the
existence of solutions to equations like phi_x(y, z) = T(x, y, z). The
amoeba is a solution of phi_x() = x. (I can come back to this.)
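[A concrete solution of phi_x() = x is a "quine": a program whose output is its own source code. The classic C example below (an illustration of mine, not Bruno's) is built by exactly the Dx = "xx" move: the string s is a description of the program, and the program applies that description to a quotation of itself. Here 10 and 34 are the ASCII codes for newline and the double quote; comments are omitted because the output must match the source character for character:

    #include<stdio.h>
    char*s="#include<stdio.h>%cchar*s=%c%s%c;%cint main(void){printf(s,10,34,s,34,10,10);return 0;}%c";
    int main(void){printf(s,10,34,s,34,10,10);return 0;}

Running it reproduces these three lines exactly: the machine carries its own plan and applies it to itself.]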
But the 1-self notion is infinitely more subtle. It happens that by
applying Theaetetus' definition of knowledge (which is coherent
with most analyses of the "dream argument") to the provability
predicate of the Löbian machines, we get, offered by the machine
itself, a logic of knowledge, where the first person (the knower) is
not definable by the machine itself, and the (rich, Löbian) machine
can *know* that.
That 'first person' is not a machine, in the eye of the machine.
Constructively. The first person can contradict all formal
descriptions of itself. Yet, it is a machine, or better, an infinity
of machines, in the eye of God (here: arithmetical truth).
And it is testable, if you agree that by adding the "Dt", or "<>t",
nuance, we get the type of probability (and "certainty") needed for
solving the "measure on consistent extensions realized in the UD (or
realized in Robinson Arithmetic)" problem.
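[To spell out the notation, as a gloss of mine on the standard moves rather than Bruno's exact words: write Bp for the machine's provability predicate, "the machine proves p". The 3-self corresponds to Bp alone. The Theaetetus definition turns the prover into a knower by setting Kp = Bp & p, which obeys a classical logic of knowledge (in fact S4Grz, an extension of S4) yet is provably not definable inside the machine, as said above. The "Dt" nuance is consistency, Dt = <>t = ~B~t, which the machine cannot prove of itself (Gödel's second incompleteness theorem); reading "certainty of p" as Bp & Dt rather than Bp alone is what yields the probability-like logic referred to here.]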
So you are right, I think. No theory that aims to attach persons to
one psychological or physiological continuity can be successful.
Computationalism provides an explanation of why it has to be like that
for machines (living above the sigma_1 threshold) when they search for
their first person identity. Then, when sufficiently self-honest, with
enough introspective power (already attained by PA, ZF, ...), the
machine is confronted with ... *its* unnameable notion of truth. (As you can