I suggested doing this on FOAR (I used HAL from 2001). It simply makes it
easier to visualise if you forget about biological creatures. Assuming
comp, an AI is exactly equivalent to a human person, so anything you can do
to an AI could (in theory) be done to a human by a teleporter, or by
MWI-style splitting.

What should the AI expect to see? It should expect to see the ball turn red
and remain red. There are *copies* of it which see the ball go blue at
various points...

However, this answer doesn't assume comp. According to comp it doesn't
know what "it" will see; or, to be more exact, it knows that "it" will see
all combinations, but by that time it will no longer be an "it" but a
"them". Technically, in this case, we know which ones are the copies and
which ones aren't; however, comp says that the AI will experience becoming
many AIs, with varied experiences.
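
For concreteness, here is a rough sketch of the protocol from Jason's
message quoted below, written in Python and assuming a POSIX system where
os.fork() is available. The function and variable names are mine, not part
of his description:

import os
import time

def run_protocol(rounds=8):
    """Sketch of the forking protocol: at each round the process forks,
    the parent's ball is set to red and the child's to blue, and a second
    later the next fork happens.  After `rounds` rounds there are
    2**rounds processes, each remembering its own colour history."""
    history = []                                      # what *this* process saw
    for _ in range(rounds):
        pid = os.fork()                               # duplicate the whole simulation
        history.append("red" if pid > 0 else "blue")  # parent: red, child: blue
        time.sleep(1)                                 # one second between forks
    return history

if __name__ == "__main__":
    # 2**8 = 256 processes each end up printing their own 8-colour history.
    print(os.getpid(), run_protocol())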

In any case, although one copy is the original, that doesn't really help,
because an AI, by its nature, is probably being constantly swapped into
different parts of computer memory (or stored on disc), with parts of it
being copied, other parts erased, and so on. Comp says none of this matters
- that its experiences are, at a fundamental level, exactly like ours.

So. What's wrong with this picture, if anything?



On 30 October 2013 09:41, Jason Resch <jasonre...@gmail.com> wrote:

>
>
>
> On Tue, Oct 29, 2013 at 2:06 PM, meekerdb <meeke...@verizon.net> wrote:
>
>>  On 10/29/2013 8:19 AM, Jason Resch wrote:
>>
>> Chris,
>>
>>  Perhaps it is simpler to think about first person indeterminacy like
>> this (it requires some familiarity with programming, but I will try to
>> elaborate those details):
>>
>>  Imagine there is a conscious AI inside a virtual environment (an open
>> field).
>> Inside that virtual environment is a ball, which the AI is looking at, and
>> next to the ball is a note which reads:
>>
>> "At noon (when the virtual sun is directly overhead) the protocol will
>> begin.  In the protocol, the process containing this simulation will fork
>> (split in two), after the fork, the color of the ball will change to red
>> for the parent process and it will change to blue in the child process
>> (forking duplicates a process into two identical copies, with one called
>> the parent and the other the child). A second after the color of the ball
>> is set, another fork will happen.  This will happen 8 times leading to 256
>> processes, after which the simulation will end."
>>
>> It is 11:59 in the simulation. What can the AI expect to see during the
>> next 1 minute and 8 seconds?
>>
>>
>> I don't see that as any different.
>>
>
> It is similar, but it never hurts to look at the same problem from
> different angles.  What is a little more evident in this case is that, of
> the 256 possible memories of the AI about to meet its doom, none contain
> the memory of seeing all 256 possibilities, and in fact the majority of
> them see the ball change color back and forth at random.  Only 2 see it
> stay all red or all blue for the last 8 seconds. None of them can predict,
> from the view inside the simulation, whether the ball will stay the same
> color or change after the next fork occurs.
>
>
>>  The problem is still: what is the referent of "the AI"?  As John Clark
>> points out, "the AI" is ambiguous when there are duplicates.
>>
>
> Personal identity is less of an issue in this case, because it concerns
> the AI or anything/anyone else inside the simulation who might also be
> viewing the ball.  In this way, it is slightly more analogous to MWI, since
> it is the environment which is duplicated, not just the person, and so
> the apparently random changing of the ball's color is also something that
> can be agreed upon by the group of observers within the simulation.
>
>
>>   Sometimes Bruno talks about "the universal person" who is merely
>> embodied as particular persons.  So on that view it would be right to say
>> *the* universal person sees Washington and Moscow.
>>
>
> But not "at the same time" or as "an integrated experience", so the
> appearance of randomness still arises from the first person perspective(s).
>
>
>> But then that's contrary to identifying a person by their memories.  My
>> view is that "a person" is just a useful model, when there is no
>> duplication - and that's true whether the duplication is via Everett or
>> Bruno's teleporter.
>>
>>
> What model should be used in a world with duplication, fission machines,
> mind uploading, split brains, biological clones, amnesia, etc.? Or does
> personhood no longer make sense at all in the face of such situations?
>
> Personally, I believe no theory that aims to attach persons to a single
> psychological or physiological continuity can be successful.
>
> Jason
>
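
To put numbers on Jason's counting above, here is a quick enumeration of the
possible colour histories. Again just a sketch; the red/blue labels and the
8 forks are the only things taken from the protocol:

from itertools import product

histories = list(product(["red", "blue"], repeat=8))     # all 2**8 = 256 possible memories
monochrome = [h for h in histories if len(set(h)) == 1]  # stayed one colour for all 8 forks

print(len(histories))    # 256
print(len(monochrome))   # 2 (all red, or all blue)
# Each history records only one of the 256 alternatives, so no copy
# remembers having seen them all.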

