On 30 Oct 2013, at 06:35, Jason Resch wrote:




On Tue, Oct 29, 2013 at 3:12 PM, LizR <lizj...@gmail.com> wrote:
I suggested doing this on FOAR (I used HAL from 2001). It simply makes it easier to visualise if you forget about biological creatures. Assuming comp, an AI is exactly equivalent to a human person, so anything you can do to an AI could be done (in theory) to a human by a teleporter, or would happen to a human via MWI-style splitting.

What should the AI expect to see? It should expect to see the ball turn red and remain red.

Should it expect (expect as in place a high probability on) that? Only 1 of the 256 copies actually sees that happen. It is far more likely to see an incompressible pattern.
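
As a quick check of the numbers (a purely illustrative sketch, not from the original post):

    # After 8 binary forks there are 2**8 branches, and exactly one of
    # them is the branch that sees red at every step.
    branches = 2 ** 8
    print(branches, 1 / branches)   # 256 0.00390625 -- a 1-in-256 chance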

There are copies of it which see the ball go blue at various points...

However this answer doesn't assume comp.

The existence of a conscious AI implicitly assumes comp.

It assumes only "strong AI". Strong AI might be true, yet comp false. It is not because machines can think that only machines can think; maybe angels and gods can think too. Of course, if comp is true, some machine can think, and thus strong AI is true.



(at least for some types of observers; you could still, like Craig, argue that computers cannot support *your* experience, only some limited class of experience).

Well, the puppet's experience, I presume.

We know that a Turing universal puppet can cut its links with the manipulator :)




According to comp it doesn't know what "it" will see, or to be more exact it knows that "it" will see all combinations, but by that time it will no longer be an "it" but a "them". Technically - in this case - we know which ones are the copies and which ones aren't - however comp says that the AI will experience becoming many AIs, with varied experiences.


====================

I think we can all agree on this (LizR, Bruno, Clark, Chris, myself, etc.):

If the AI (or all of them) went through two tests, test A and test B:

A) The test described above, where the simulation process forks 8 times, 256 copies are created, and each sees a different pattern of the ball changing color.

B) A test where the AI is not duplicated, but instead a random number generator (controlled entirely outside the simulation) determines, 8 times, whether the ball changes to red or blue with 50% probability.

Then the AI (or AIs) could not say whether test A occurred first or test B occurred first.

===================

If you agree with this, that is sufficient to reach the main point of step 3, which is that the two tests are subjectively indistinguishable. Expecting the ball to change color at random (test B), and being iteratively duplicated so that all possibilities are seen in different instances (test A), are absolutely indistinguishable from any point of view that exists inside the simulation. No one inside the simulation can determine whether test A or test B was happening. It is a very simple point, and I don't think anyone here would argue that an observer within the simulation could distinguish between the two cases.
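
Here is a minimal sketch of why the statistics match (illustrative Python; the variable names are mine, not part of the thought experiment). In test A every one of the 256 color histories occurs on exactly one branch, while in test B each history is drawn with probability 1/256, so a single observer's record is uniformly distributed either way:

    import random
    from collections import Counter
    from itertools import product

    # Test A: one branch per 8-step color history -- uniform by branch counting.
    test_a = Counter(product("RB", repeat=8))

    # Test B: a genuine external RNG picks red or blue 8 times; sample many runs.
    runs = 256_000
    test_b = Counter(tuple(random.choice("RB") for _ in range(8))
                     for _ in range(runs))

    # Every history appears once in test A; each appears ~runs/256 times in
    # test B. No observed history is evidence for one test over the other.
    print(len(test_a), max(test_a.values()))             # 256 1
    print(len(test_b), round(sum(test_b.values()) / 256))  # 256 1000

Whichever of the 256 histories an inside observer ends up with, it is exactly as probable under test A as under test B.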

If you disagree, that is, if you maintain that an entity within the simulation could distinguish test A from test B (could guess which test was happening with greater than 50% probability), then please state how that can be done. Otherwise, you understand the point of step 3 sufficiently to move on, and there is no more need to argue about pronouns, personal identity, which "you" you happen to be, etc.

If anyone, rather than providing an argument for how the AI or AIs (or any observer or entity within the simulation) could distinguish these cases, continues to argue about pronouns, personal identity, etc., then I think the only conclusion that remains is that such a person has little or no interest in advancing their own or anyone else's understanding and is simply being a troll.

The point is crystal clear and indisputable in this situation; it does not matter how the AI is programmed: there is no way for any entity in the simulation to distinguish an inherently random process (test B) from a wholly deterministic one (test A). If you think you know a way, then please tell us how. If you see no way, then you accept step 3, which is that the appearance of subjective indeterminacy can arise in an objectively deterministic process.

(Note the above is not aimed at any person in particular. If anyone can show where the reasoning is wrong, please do so.)

Indeed.






In any case, although one copy is the original, that doesn't really help, because an AI, by its nature, is probably being constantly swapped into different parts of computer memory (or stored on disc); parts of it are being copied, other parts erased, and so on. Comp says none of this matters - that its experiences are at a fundamental level exactly like ours.

So. What's wrong with this picture, if anything?

What do you mean by one copy is the original? How can you distinguish an original from a copy?

I don't think Liz did that.

Note that someone disbelieving comp, drugged in Helsinki and then forced to undergo the WM-duplication, will wake up in both cities, each claiming to be the real original one, perhaps even disbelieving they have been teleported. They might not recognize themselves in the copy, as that is not necessarily easy (especially if your face is quite asymmetrical, as in the novel "Despair" by Nabokov).

Bruno




Jason




On 30 October 2013 09:41, Jason Resch <jasonre...@gmail.com> wrote:



On Tue, Oct 29, 2013 at 2:06 PM, meekerdb <meeke...@verizon.net> wrote:
On 10/29/2013 8:19 AM, Jason Resch wrote:
Chris,

Perhaps it is simpler to think about first person indeterminacy like this (it requires some familiarity with programming, but I will try to elaborate on those details):

Imagine there is a conscious AI inside a virtual environment (an open field). Inside that virtual environment is a ball, which the AI is looking at, and next to the ball is a note which reads: "At noon (when the virtual sun is directly overhead) the protocol will begin. In the protocol, the process containing this simulation will fork (split in two). After the fork, the color of the ball will change to red in the parent process and to blue in the child process (forking duplicates a process into two identical copies, one called the parent and the other the child). A second after the color of the ball is set, another fork will happen. This will happen 8 times, leading to 256 processes, after which the simulation will end." It is 11:59 in the simulation. What can the AI expect to see during the next 1 minute and 8 seconds?
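
For the programming-minded, a minimal sketch of that protocol (assuming a POSIX system, since os.fork is Unix-only; the names are mine, not part of the thought experiment):

    import os

    N_FORKS = 8
    history = []  # the sequence of ball colors this process has seen

    for _ in range(N_FORKS):
        pid = os.fork()
        # After fork() there are two identical processes; only the return
        # value differs: the parent gets the child's pid, the child gets 0.
        history.append("red" if pid != 0 else "blue")

    # 2**8 = 256 processes reach this line, each with a distinct history.
    print(os.getpid(), history)

Every one of the 256 possible red/blue histories is realized in exactly one process, yet each process remembers only its own.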

I don't see that as any different.

It is similar, but it never hurts to look at the same problem from different angles. What is a little more evident in this case is that of the 256 possible memories of the AI about to meet its doom, none contains the memory of seeing all 256 possibilities, and in fact the majority of them see the ball change color back and forth at random. Only 2 see it stay all red or all blue for the last 8 seconds. None of them can predict, from the view inside the simulation, whether the ball will stay the same color or change after the next fork occurs.
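
A quick enumeration bears this out (an illustrative sketch; the counts follow directly from the 2^8 branch structure):

    from itertools import product

    histories = list(product("RB", repeat=8))   # all 256 color histories
    constant  = [h for h in histories if len(set(h)) == 1]
    busy      = [h for h in histories
                 if sum(a != b for a, b in zip(h, h[1:])) >= 3]

    print(len(histories))  # 256
    print(len(constant))   # 2 -- all red or all blue
    print(len(busy))       # 198 -- most flip color on at least 3 of 7 steps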

The problem is still what is the referent of "the AI". As John Clark points out, "the AI" is ambiguous when there are duplicates.

Personal identity is less of an issue in this case, because the prediction concerns the AI as well as anything/anyone else inside the simulation who might also be viewing the ball. In this way it is slightly more analogous to MWI, since it is the environment that is duplicated, not just the person, and so the apparently random changing of the ball's color is something that can be agreed upon by the group of observers within the simulation.

Sometimes Bruno talks about "the universal person" who is merely embodied as particular persons. So on that view it would be right to say *the* universal person sees Washington and Moscow.

But not "at the same time" or as "an integrated experience", so the appearance of randomness still arises from the first person perspective(s).

But then that's contrary to identifying a person by their memories. My view is that "a person" is just a useful model when there is no duplication - and that's true whether the duplication is via Everett or Bruno's teleporter.


What model should be used in a world with duplication, fission machines, mind uploading, split brains, biological clones, amnesia, etc.? Or does personhood no longer make sense at all in the face of such situations?

Personally I believe no theory that aims to attach persons to one psychological or physiological continuity can be successful.

Jason






http://iridia.ulb.ac.be/~marchal/



