Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett

On 6/08/2016 2:36 am, Bruno Marchal wrote:

On 05 Aug 2016, at 14:11, Bruce Kellett wrote:

The difficulty is with your assumption that differentiation into two 
persons is inevitable.


It is not an assumption. With the protocol and the hypothesis, the 
diaries have differentiated.


Diaries are not people.


The first persons are approximated/associated with their personal diaries.


I am not defined by any diary I might keep -- that is merely an 
irrelevant adjunct.


Everett uses a similar theory of mind, and indeed most accounts of 
QM-without-collapse use digital mechanism, more or less implicitly.



Accusations of bad faith are not required.


Sorry for the accusation of bad faith, but I hope now we can move on 
to step 4. I mean, come back to the original definition of first person 
discourse.


The notions of first person and third person have long been defined, 
and you persist in talking as if it were possible that the first 
person experience does not bifurcate or differentiate. When we assume 
comp, we admit that the only way to know is to ask the copies and 
consult their opinions and experiences, and then simple elementary 
logic shows that they all differentiate.


I suggested doing the experiment and determining the answer empirically. 
Logic can only tell us what follows from certain premises, and your 
premises do not entail differentiation in the described circumstances.


We admit P=1 in the simple teleportation case; the differentiation is 
then a simple consequence of the fact that the robot in W sees W, 
believes he is in W, and, as he is in W, knows that he is in W (with 
the ancient notion of knowledge: true belief). The same for the robot 
in M. They are both right; they have just differentiated. They both 
confirm "W v M" and refute "W & M", as, by computationalism, the 
W-machine has been made independent of the M-machine.


Again, you merely assume differentiation, you do not prove its necessity.

The W-machine has no first person clue whether the M-machine even 
exists, and vice versa (unless you bring in telepathy, etc.).


I don't need telepathy to unify the various streams of my consciousness 
-- to know that I am the person driving the car, talking to my wife, 
etc., at a given moment. Nor is telepathy needed if one person is in 
two places at once.


You can't invalidate a reasoning by changing, within the reasoning, the 
definitions that have been given in that reasoning. The 
differentiations are obvious. In the n-iterated case, the 
differentiations are given by the 2^n sequences of W and M.
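The 2^n count in the n-iterated case is just the number of W/M strings of 
length n. A minimal sketch (an illustrative aside, not part of the 
original post):

```python
from itertools import product

def iterated_histories(n):
    """All first-person histories (diary contents) after n successive
    W/M duplications: one string of W's and M's per surviving copy."""
    return ["".join(bits) for bits in product("WM", repeat=n)]

print(iterated_histories(2))       # ['WW', 'WM', 'MW', 'MM']
print(len(iterated_histories(4)))  # 2^4 = 16 distinct diaries
```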


You continue to assume what you are required to prove.

Keep well in mind that I am not arguing for or against 
computationalism. I assume it, and study the consequences.


There is little sense in studying the consequences of an inconsistent 
theory: you have to defend computationalism against the charge that it 
is not well-established.




Later, I can explain that the "P=1" of 'UDA step one' belongs to the 
machine's G*\G type of true but non-justifiable propositions, which 
can explain the uneasiness. "P=1" requires a strong axiom, and indeed 
both CT and YD are strong axioms in "cognitive science/computer 
science/theology".


So derive the necessity of differentiation from these axioms.

Bruce


Computationalism could be the most insane theology except for all the 
others. I don't know if comp is true or not, but I am pretty sure that 
IF digital mechanism is true, then the "correct theology" will be 
closer to Plato than to Aristotle.


--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett

On 6/08/2016 12:58 am, Bruno Marchal wrote:

On 05 Aug 2016, at 13:23, Bruce Kellett wrote


He writes in the diaries what he sees: it is just a matter of the 
protocol whether he writes the name of the city in which each diary 
is located in that particular diary, or if he writes in both diaries 
what he sees in total, in which case he writes W in both diaries. 
It need be no different from my seeing one thing with my right eye 
and writing that down with my right hand, and seeing something 
different with my left eye and writing that down with my left hand, 
or writing down both things with both hands. (This is not a 
split-brain experiment.)


All the things that you bring up could easily happen without any 
differentiation into two separate consciousnesses. You might find the 
non-locality of the unified experience a little surprising, but that 
is only because you are not used to the concept of non-locality.


I say again, even though it seems obvious to you that the 
differentiation must occur,


It is just trivial, by the definition of first person experience.


That is not a suitable answer -- there is only one person experiencing 
both cities.


If you were right, and using the definition I provided at the start, 
we would have a situation where a guy is in Moscow and writes in his 
diary "Washington". But then he did not survive sanely, and if that is 
the case, P becomes ≠ 1 at step 1.


Your phrasing of this is wrong. There is no such thing as "a guy in 
Moscow" -- there is a guy who is in both places simultaneously. If there 
are diaries in both W and M, and one person writing in these diaries, it 
is not inconsistent to write W in the M diary and vice versa -- maybe 
not what was intended, but since it is just one person writing in 
diaries, what is written is not incorrect.


that is just a failure of imagination on your part. Try to put 
yourself in the situation in which some of the many strands of your 
conscious thoughts relate to bodies in different cities. There is no 
logical impossibility in this. You seem to accept that a single mind 
can be associated with more than one body: "We can associate a mind 
to a body, but the mind itself (the 1p) can be (and must be) 
associated with many different bodies, in the physical universe and 
later in arithmetic." (quoted from your comment above.) Hold on to 
this notion, and consider the possibility that there is no 
differentiation into separate conscious persons in such a case (the 
1p is singular -- there is only ever just one person).


I love the idea, but it is not relevant for the problem of prediction.


There is no problem of prediction -- there is only a question as to 
whether differentiation necessarily occurs.



And I am not sure it makes sense, even legally.


Why should it make sense legally? Legal systems were not drawn up to 
take account of person duplicating machines.


If the W-man commits a murder in W, then, with your bizarre theory, we 
can put the M-man in prison. Your non-locality assumption is a bit 
frightening. Some will say, we are all that type of human, but not 
this type, etc. If you consider the W-man and the M-man as the same 
person, then every living creature on this earth is the same person, 
and 'to eat' becomes equivalent to 'to be eaten'.


Such bizarre consequences do not follow from what I have said -- not all 
people are the result of digital duplication experiments.


Why not, eventually, but this has no relevance at all in the reasoning, 
where we assume digital mechanism, so that the M-man and the W-man 
would not be aware of each other's existence in a protocol where they 
were not told the protocol.


That doesn't matter -- they would know that they were one person, 
experiencing two cities at once.


And the duplications give a simple distinction between the 1p and the 
3p, and we can see, in very simple simulations, that all copies feel 
1p-separate from the others, in the protocol described.


You have still not proved this, or given any cogent reason as to why it 
should be the case. You suffer from what, in the philosophy of science, 
is known as the problem of unconsidered alternatives. You simply have 
not considered non-differentiation as a relevant possibility in your 
theory/model. Now that this alternative has been raised, you have to 
give reasons against it, or revise your original thesis.


I hope you understand well that we assume computationalism, with an 
open mind that the theory might lead to a contradiction, in which case 
we would learn a lot. But up to now, we get only (quantum?) weirdness.


You are very keen to assume computationalism, i.e., that your theory is 
at least internally consistent. But I have raised a relevant 
consideration that counts against the coherence of your theory. You have 
not yet given any substantial argument for your assumption that 
differentiation into separate persons is inevitable in the circumstances 
described -- lots of assertions, but no arguments.



Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-05 Thread John Clark
On Fri, Aug 5, 2016 at 3:32 AM, Bruno Marchal  wrote:


>>> Assigning probabilities about what "YOU" will see next is not ambiguous
>>> as long as "YOU" duplicating machines are not around.

>>> So, you are OK that the guy in Helsinki writes P("drinking coffee") = 1.

>> The guy in Helsinki? NO!!! Bruno Marchal said "The question is not about
>> duplication".

> The question 2 was not about duplication,

If duplication was not involved then why on god's green earth were you
talking about the goddamn *HELSINKI MAN*?!


> but the question 1 was, and you said that P("drinking coffee") was equal
> to one.

P can always be equal to 1, it depends on what P means, and if P has no
meaning, if for example too many unspecified personal pronouns are used,
then P has no value at all, not even zero. In the first case BOTH the
Moscow man and the Washington man got the coffee, so the identity of the
mysterious Mr. You does not need to be specified, and so P had both a
meaning and a value.

If one gets the coffee and one does not, what is the probability (P) that
"*YOU*" will get the coffee? Is it 1? No. Is it 1/2? No. Is it 0? No, P has
no value at all because P is gibberish.

 John K Clark





Computationalism

2016-08-05 Thread John Clark
On Fri, Aug 5, 2016 at 12:36 PM, Bruno Marchal  wrote:

> Keep well in mind that I am not arguing for or against computationalism. I
> assume it, and study the consequences.

No, you're assuming at the very start that Computationalism is false and
then going on from there. Computationalism means that every subjective
experience about you can be duplicated by computations made with a physical
system. Not almost everything, EVERYTHING. But then you say:

"*Nothing can duplicate a first person view from its first person point of
view, with or **without computationalism**.*"

Computationalism says intelligent behavior can be duplicated by
computations performed by a physical system, and Darwin's Theory demands
that consciousness is a byproduct of intelligence, so your statement
contradicts both the meaning of Computationalism and Evolution.

 John K Clark



Re: Holiday Exercise

2016-08-05 Thread Brent Meeker



On 8/5/2016 4:23 AM, Bruce Kellett wrote:
We can associate a mind to a body, but the mind itself (the 1p) can 
be (and must be) associated with many different bodies, in the 
physical universe and later in arithmetic.


You seem to accept my point -- there is still only one mind even after 
different data are fed to the copies: one mind in two bodies in this 
case (a one-many relationship).


Which is empirically supported by neurological studies that indicate the 
brain consists of "modules", each of which has "a mind of its own." If 
we think about engineering an autonomous being, it becomes obvious that 
this is a good architecture: decision making should be hierarchical, 
with only a few decisions requiring system-wide consideration. With RF 
communication this autonomous being could easily "be" in both Moscow and 
Washington.


Brent



Re: Holiday Exercise

2016-08-05 Thread Brent Meeker



On 8/5/2016 1:15 AM, Bruno Marchal wrote:


On 05 Aug 2016, at 06:27, Brent Meeker wrote:




On 8/4/2016 7:40 PM, Stathis Papaioannou wrote:



On 5 August 2016 at 04:01, Brent Meeker wrote:




On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:

The problem with (3) is a general problem with multiverses.  A
single, infinite universe is an example of a multiverse theory,
since there will be infinite copies of everything and every
possible variation of everything, including your brain and your
mind.


That implicitly assumes a digital universe, yet the theory that
suggests it, quantum mechanics, is based on continua; which is
why I don't take "the multiverse" too seriously.


It appears that our brains are finite state machines. Each neuron 
can either be "on" or "off", there are a finite number of neurons, 
so a finite number of possible brain states, and a finite number of 
possible mental states. This is analogous to a digital computer:


Not necessarily. A digital computer also requires that time be 
digitized so that its registers run synchronously. Otherwise "the 
state" is ill defined.  The finite speed of light means that 
spatially separated regions cannot be synchronous. Even if neurons 
were only ON or OFF, which they aren't, they have frequency 
modulation, they are not synchronous.


A synchronous digital machine can emulate an asynchronous digital 
machine, and that is all that is needed for the reasoning.
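The emulation claim can be sketched as follows: a synchronous machine 
recovers asynchrony by choosing, at each global tick, which component 
fires, so a well-defined global state exists at every tick (a toy model; 
the two-component network here is hypothetical):

```python
import random

def run_async_via_sync(components, steps, seed=0):
    """Emulate an asynchronous network on a synchronous machine: each
    global tick, a scheduler picks one component and applies its local
    update rule to the global state snapshot."""
    rng = random.Random(seed)
    state = [c["init"] for c in components]
    for _ in range(steps):
        i = rng.randrange(len(components))         # nondeterministic firing order
        state[i] = components[i]["update"](state)  # one local update per tick
    return state

# Two toy 'neurons': one toggles itself, the other copies its neighbour.
components = [
    {"init": 0, "update": lambda s: 1 - s[0]},
    {"init": 0, "update": lambda s: s[0]},
]
print(run_async_via_sync(components, steps=10))
```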


True, but only going to a level far below a "state of consciousness" so 
that in this finer level of emulation there are no longer identifiable 
states of consciousness.  Rather "states" are coming into being and 
fading away, with various overlaps.


Brent



Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 14:11, Bruce Kellett wrote:



On 5/08/2016 9:30 pm, Bruno Marchal wrote:

On 05 Aug 2016, at 00:31, Bruce Kellett wrote:

On 5/08/2016 1:28 am, Brent Meeker wrote:

On 8/4/2016 2:51 AM, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:
However, we are being asked to consider two conscious states where the
conscious state differs by at least one bit - the W/M bit. Clearly, by
the YD assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of the
single bit difference.

By that reasoning, no consciousness survives through time.

Not at all! Both conscious states survive through time by  
assumption (YD).


Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single  
consciousness through time. We get different input data all the  
time but we do not differentiate according to that data.


I could perhaps expand on that response. On duplication, two  
identical consciousnesses are created, and by the identity of  
indiscernibles, they form just a single consciousness. Then data  
is input. It seems to me that there is no reason why this should  
lead the initial consciousness to differentiate, or split into  
two. In normal life we get inputs from many sources  
simultaneously -- we see complex scenes, smell the air, feel  
impacts on our body, and hear many sounds from the environment.  
None of this leads our consciousness to disintegrate. Indeed,  
our evolutionary experience has made us adept at coping with  
these multifarious inputs and sorting through them very  
efficiently to concentrate on what is most important, while  
keeping other inputs at an appropriate level in our minds.


I have previously mentioned our ability to multitask in complex  
ways: while I am driving my car, I am aware of the car, the  
road, other traffic and so on; while, at the same time, I can be  
talking to my wife; thinking about what to cook for dinner; and  
reflecting on philosophical issues that are important to me. And  
this is by no means an exhaustive list of our ability to  
multitask -- to run many separate conscious modules within the  
one unified consciousness.


Given that this experience is common to us all, it is not in the  
least bit difficult to think that the adding of yet another  
stream of inputs via a separate body will not change the basic  
structure of our consciousness -- we will just take this  
additional data and process in the way we already process  
multiple data inputs and streams of consciousness. This would  
seem, indeed, to be the default understanding of the  
consequences of person duplication. One would have to add some  
further constraints in order for it to be clear that the  
separate bodies would necessarily have differentiated conscious  
streams. No such additional constraints are currently in evidence.


Not empirically proven constraints, but current physics strongly  
suggests that the duplicates would almost immediately, in the  
decoherence time for a brain, differentiate; i.e. the  
consciousness is not separate from the physics.  It's only "not  
in evidence" if you're trying to derive the physics from the  
consciousness.


Of course,  that is what I was trying to get people to see: the  
additional constraint that is necessary for differentiation is  
essentially a mind-brain identity thesis.


Not really. To get differentiation, you need only different  
memories or different first person reports, not different brains.


The differentiation we are talking about is into two separate  
persons who do not share a consciousness. You need the  
differentiation before you get two first person reports: one  
consciousness could store several different memories.


What you say is very weird. If there is no differentiation of the  
first person experience, then how could the diary in W contain W,  
and the diary in M contain M?


I explained that in the previous post. It is not in the least  
mysterious -- no different from seeing different things with each  
eye and recording what is seen with different hands. There might be  
two experiences, but that does not need two persons.


You lost me with your last post, as it seems to conflict  
immediately with step 1 and "step 0", the definition of (weak)  
computationalism used in the UD Argument.


I don't see any conflict with ordinary teleportation, with or  
without a delay. There is no duplication in those cases, so ordinary  
considerations apply. Of course, if there is a delay that the  
teletransported person has no way of knowing about, then he will not  
know about that delay -- so what?


I presume by "step 0" you mean YD + CT. There is no problem with  
these assumptions; it is just that you 

Re: Holiday Exercise

2016-08-05 Thread Platonist Guitar Cowboy
On Fri, Aug 5, 2016 at 4:17 PM, Bruno Marchal  wrote:

>
> On 05 Aug 2016, at 15:01, Bruce Kellett wrote:
>
> On 5/08/2016 10:11 pm, Bruce Kellett wrote:
>>
>>> On 5/08/2016 9:30 pm, Bruno Marchal wrote:
>>>
>>> Just tell me if you are OK with question 1. The Helsinki guy is told
 that BOTH copies will have a hot drink after the reconstitutions, in both
 Moscow and Washington. Do you agree that the Helsinki guy (a believer in
 computationalism) will believe that he can expect, in Helsinki, with
 probability, or credibility, or plausibility ONE (resp maximal) to have
 some hot drink after pushing the button in Helsinki?

>>>
>>> As I said, the H-guy can expect to drink two cups of coffee.
>>>
>>
>> Once again, some amplification of this answer is perhaps in order. I
>> cannot answer your question with a Yes/No as you wish because the question
>> is basically dishonest -- of the form of "Have you stopped beating your
>> wife yet?". The question contains an implicit assumption that the
>> differentiation takes place.
>>
>
>
> Not at all. Question 1 is neutral on this, but if you prefer I split
> question 1 into two different questions.
>
> Question 1a.
> The H-guy is told that the coffee is offered *in* the reconstitution
> boxes, and that it has the same taste. Put differently, we ensure that
> the differentiation has not yet occurred.
> Question 1a is then: assuming he is a coffee addict who wants to drink
> coffee as soon as possible, should he be worried, knowing the protocol
> says the coffee is offered, or can he argue that he is not worried, and
> that if comp is true and everything goes well, P("drinking coffee") = 1?
>
> Question 1b
> Same question, but now, the coffee is offered after the opening of the
>
> Since it is this differentiation that is in question, the question is
>> disingenuous: it can only be answered as I have done above.
>>
>
> Oh nice! The Helsinki guy, as a coffee addict, is very pleased you tell
> him that he will drink two cups of coffee.
>

If this kind of connection can be made, then you play right into the hands
of the people who accuse you or your work of being "anything goes". And I
say this because I believe your work has some merit to it, when you're not
trying to shove it down people's throats a la "WHAT IS YOUR THEOLOGY?" in
the setting of a public list.

The kind of pushiness of late, the tactics of flooding the list with posts
where you set the discussion forcibly, and explicitly demanding that your
questions be answered, seem to paint a picture where you abandon your own
convictions: modesty, avoidance of blasphemy, use of linguistic games where
only you can set the frame, argument from authority, etc. I liked the old
Bruno from 2015 better, who didn't need to resort to these things to make a
point.

Particularly the cheap way of trying to ensnare people into discussing your
research interests. So obvious and so out of character, it makes one wonder
as to your general welfare. Play nice, folks! Take care of yourselves. PGC



Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 13:23, Bruce Kellett wrote:


On 5/08/2016 6:12 pm, Bruno Marchal wrote:

On 05 Aug 2016, at 04:13, Bruce Kellett wrote:

On 5/08/2016 3:41 am, Bruno Marchal wrote:

On 04 Aug 2016, at 04:37, Bruce Kellett wrote:

On 4/08/2016 1:04 am, Bruno Marchal wrote:

On 03 Aug 2016, at 07:16, Bruce Kellett wrote:


You use the assumption that the duplicated consciousnesses  
automatically differentiate when receiving different inputs.


It is not an assumption.


Of course it is an assumption. You have not derived it from  
anything previously in evidence.


See my answer to Brent. It is just obvious that the first person  
experience differentiates when it gets different experiences,  
leading to different memories. We *assume* computationalism. How  
could the diaries not differentiate? What you say does not make  
any sense.


I have been at pains to argue (in several different ways) that the  
differentiation of consciousness is not automatic. It is very easy  
to conceive of a situation in which a single consciousness  
continues in two bodies, with the streams of consciousness arising  
from both easily identifiable, but still unified in the  
consciousness of a single person. (I copy below my recent argument  
for this in a post replying to Russell.) So the differentiation  
you require is not necessary or automatic -- it has to be  
justified separately because it is not "just obvious".


Your recent expansion of the argument of step 3 in discussions  
with John Clark does not alter the situation in any way -- you  
still just assert that the differentiation takes place on the  
receipt of different input data.


I had thought that the argument for such differentiation of  
consciousness in different physical bodies was a consequence of  
some mind-brain identity thesis. But I am no longer sure that even  
that is sufficient -- the differentiation clearly requires  
separate bodies/brains (separate input data streams), but separate  
bodies are not sufficient for differentiation, as I have shown.


That was shown and explained before and is not contested here.


I thought I was contesting it.


Please read the posts.
That is why I introduced a painting in question 2.


That still just gives differentiation on different data inputs -- it  
changes nothing.



But let us first see if you agree with question 1.

Do you agree that if the H-guy is told that a hot drink will be  
offered to both reconstitutions, in W and in M, he is entitled to  
expect a hot drink with probability one (assuming computationalism  
and the default hypothesis)?


I do not assume computationalism, I am questioning its validity.


Do you agree that P(X) = 1 in Helsinki, if X will occur in both cities?


I think that it is entirely possible that the H-guy will, after the  
duplication, experience drinking two coffees.


What is required is a much stronger additional assumption, namely  
an association between minds and brains such that a mind can  
occupy only one brain.


Not at all. We can say that one mind occupies both brains in the  
WM-duplication, before the opening of the doors, assuming the  
reconstitution boxes identical. The mind-brain identity fails right  
at step 3.


Mind-brain identity need not fail: what fails in my interpretation  
of duplication is the one-to-one correspondence of one mind with one  
body. One needs something stronger than mind-brain identity to  
justify the differentiation on different data inputs, because we can  
have one-many and many-one mind-body relationships.


We can associate a mind to a body, but the mind itself (the 1p) can  
be (and must be) associated with many different bodies, in the  
physical universe and later in arithmetic.


You seem to accept my point -- there is still only one mind even  
after different data are fed to the copies: one mind in two bodies  
in this case (a one-many relationship).


(Whether a single brain can host only one mind is a separate  
matter, involving one's attitude to the results of split brain  
studies and the psychological issues surrounding multiple  
personalities/minds.) In other words, the differentiation  
assumption is an additional assumption that does not appear to  
follow from either physicalism or YD+CT.


It follows from very elementary computer science, and in our case  
it follows necessarily, as the 1p is identified, in this setting,  
with the content of the personal diary, which obviously  
differentiates on the self-localization result made by the  
reconstitutions.


I think the diaries are just confusing you. The copy in M can write  
M in the diary in Moscow, and the copy in W write W in the diary in  
Washington. That is not necessarily different from me writing M in  
one diary with my left hand while writing W in a separate diary with  
my right hand. No differentiation into two separate persons is  
necessary in either case. There is no "self-localization" if there  
is only ever one consciousness -- the person 

Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 15:01, Bruce Kellett wrote:


On 5/08/2016 10:11 pm, Bruce Kellett wrote:

On 5/08/2016 9:30 pm, Bruno Marchal wrote:

Just tell me if you are OK with question 1. The Helsinki guy is  
told that BOTH copies will have a hot drink after the  
reconstitutions, in both Moscow and Washington. Do you agree that  
the Helsinki guy (a believer in computationalism) will believe  
that he can expect, in Helsinki, with probability, or credibility,  
or plausibility ONE (resp maximal) to have some hot drink after  
pushing the button in Helsinki?


As I said, the H-guy can expect to drink two cups of coffee.


Once again, some amplification of this answer is perhaps in  
order. I cannot answer your question with a Yes/No as you wish,  
because the question is basically dishonest -- of the form of "Have  
you stopped beating your wife yet?". The question contains an  
implicit assumption that the differentiation takes place.



Not at all. Question 1 is neutral on this, but if you prefer I split  
question 1 into two different questions.


Question 1a.
The H-guy is told that the coffee is offered *in* the reconstitution  
boxes, and that it has the same taste. Put differently, we ensure  
that the differentiation has not yet occurred.
Question 1a is then: assuming he is a coffee addict who wants to  
drink coffee as soon as possible, should he be worried, knowing the  
protocol says the coffee is offered, or can he argue that he is not  
worried, and that if comp is true and everything goes well,  
P("drinking coffee") = 1?


Question 1b
Same question, but now, the coffee is offered after the opening of the  
doors.






Since it is this differentiation that is in question, the question  
is disingenuous: it can only be answered as I have done above.


Oh nice! The Helsinki guy, as a coffee addict, is very pleased you tell  
him that he will drink two cups of coffee.


Now, I hope, you agree that 'drinking two cups of coffee' entails  
'drinking coffee', and in this case the coffee-addicted Helsinki guy  
has less reason to worry about lacking coffee. You do answer  
P("drinking coffee") = 1.


So, just to be clear, and a bit more generally: do you agree with  
Principle 1:


Principle 1: if a first person event x is guaranteed to happen to  
*all* its immediate (transportation-like) copies, then, before the  
copy, the person can expect x to happen with the same probability it  
would have if there were only one copy.


OK? (We *assume* computationalism. We have agreed already that it  
entails P(x) = 1 if x is guaranteed to be presented to the guy with  
the artificial brain, or to the classically teleported person.)
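[Editor's note: read operationally, Principle 1 says the pre-duplication credence is the fraction of immediate copies whose experience includes the event, which is 1 when the event is guaranteed for all of them. A minimal sketch of that bookkeeping, as an illustration only; the names and experience sets are invented, not part of the protocol:

```python
def credence(copies, event):
    """Pre-duplication credence in an event: the fraction of
    immediate copies whose first-person experiences include it."""
    return sum(event in experiences for experiences in copies) / len(copies)

# Both reconstitutions are offered coffee; only the self-localization differs.
copies = [
    {"sees Washington", "drinks coffee"},
    {"sees Moscow", "drinks coffee"},
]

print(credence(copies, "drinks coffee"))    # 1.0: guaranteed for all copies
print(credence(copies, "sees Washington"))  # 0.5: the step-3 indeterminacy
```

The same function gives 1 for the guaranteed hot drink and 1/2 for each city, which is the contrast Principle 1 is meant to capture.]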


Bruno







Bruce

--
You received this message because you are subscribed to the Google  
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett

On 5/08/2016 10:11 pm, Bruce Kellett wrote:

On 5/08/2016 9:30 pm, Bruno Marchal wrote:

Just tell me if you are OK with question 1. The Helsinki guy is told 
that BOTH copies will have a hot drink after the reconstitutions, in 
both Moscow and Washington. Do you agree that the Helsinki guy (a 
believer in computationalism) will believe that he can expect, in 
Helsinki, with probability, or credibility, or plausibility ONE (resp 
maximal) to have some hot drink after pushing the button in Helsinki?


As I said, the H-guy can expect to drink two cups of coffee.


Once again, some amplification of this answer is perhaps in order. I 
cannot answer your question with a Yes/No as you wish because the 
question is basically dishonest -- of the form of "Have you stopped 
beating your wife yet?". The question contains an implicit assumption 
that the differentiation takes place. Since it is this differentiation 
that is in question, the question is disingenuous: it can only be 
answered as I have done above.


Bruce



Re: If you win the lottery, don't expect to live the rest of your life as a millionaire

2016-08-05 Thread spudboy100 via Everything List
This bet is akin to believing that there are super civilizations in the galaxy, 
but we don't know they exist. Could be, but, meh!



-Original Message-
From: John Clark 
To: everything-list 
Sent: Thu, Aug 4, 2016 2:02 pm
Subject: Re: If you win the lottery, don't expect to live the rest of your life 
as a millionaire



A few years ago on this list I made a modest proposal, it's a low tech way to 
test the Many Worlds interpretation of Quantum Mechanics and as a bonus it'll 
make you rich too. First you buy one Powerball lottery ticket, the drawing of 
the winning number is on Saturday at 11pm, now make a simple machine that will 
pull the trigger on a .44 Magnum revolver aimed at your head at exactly 
11:00:01pm UNLESS yours is the winning ticket. Your subjective experience can 
only be that at 11:00:01pm despite 80 million to one odds stacked against you a 
miracle occurs and the gun does not go off and you're rich beyond the dreams of 
avarice. Of course for every universe you're rich in there are 80 million in 
which your friends watch your head explode, but that's a minor point, your 
consciousness no longer exists in any of those worlds so you never have to see 
the mess, it's their problem not yours.

Actually I like Many Worlds and think it may very well be right, but I wouldn't 
bet my life on it.

  John K Clark











Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett


On 5/08/2016 9:30 pm, Bruno Marchal wrote:

On 05 Aug 2016, at 00:31, Bruce Kellett wrote:

On 5/08/2016 1:28 am, Brent Meeker wrote:

On 8/4/2016 2:51 AM, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:
However, we are being asked to consider two conscious states where the
conscious state differs by at least one bit - the W/M bit. Clearly, by
the YD assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of the
single bit difference.

By that reasoning, no consciousness survives through time.

Not at all! Both conscious states survive through time by 
assumption (YD).


Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single 
consciousness through time. We get different input data all the 
time but we do not differentiate according to that data.


I could perhaps expand on that response. On duplication, two 
identical consciousnesses are created, and by the identity of 
indiscernibles, they form just a single consciousness. Then data is 
input. It seems to me that there is no reason why this should lead 
the initial consciousness to differentiate, or split into two. In 
normal life we get inputs from many sources simultaneously -- we 
see complex scenes, smell the air, feel impacts on our body, and 
hear many sounds from the environment. None of this leads our 
consciousness to disintegrate. Indeed, our evolutionary experience 
has made us adept at coping with these multifarious inputs and 
sorting through them very efficiently to concentrate on what is 
most important, while keeping other inputs at an appropriate level 
in our minds.


I have previously mentioned our ability to multitask in complex 
ways: while I am driving my car, I am aware of the car, the road, 
other traffic and so on; while, at the same time, I can be talking 
to my wife; thinking about what to cook for dinner; and reflecting 
on philosophical issues that are important to me. And this is by no 
means an exhaustive list of our ability to multitask -- to run many 
separate conscious modules within the one unified consciousness.


Given that this experience is common to us all, it is not in the 
least bit difficult to think that the adding of yet another stream 
of inputs via a separate body will not change the basic structure 
of our consciousness -- we will just take this additional data and 
process in the way we already process multiple data inputs and 
streams of consciousness. This would seem, indeed, to be the 
default understanding of the consequences of person duplication. 
One would have to add some further constraints in order for it to 
be clear that the separate bodies would necessarily have 
differentiated conscious streams. No such additional constraints 
are currently in evidence.


Not empirically proven constraints, but current physics strongly 
suggests that the duplicates would almost immediately, in the 
decoherence time for a brain, differentiate; i.e. the consciousness 
is not separate from the physics.  It's only "not in evidence" if 
you're trying to derive the physics from the consciousness.


Of course,  that is what I was trying to get people to see: the 
additional constraint that is necessary for differentiation is 
essentially a mind-brain identity thesis.


Not really. To get differentiation, you need only different memories 
or different first person report, not different brain.


The differentiation we are talking about is into two separate persons 
who do not share a consciousness. You need the differentiation before 
you get two first person reports: one consciousness could store several 
different memories.


What you say is very weird. If there is no differentiation of the 
first person experience, then how could the diary in W contain W, and 
the diary in M contain M?


I explained that in the previous post. It is not in the least mysterious 
-- no different from seeing different things with each eye and recording 
what is seen with different hands. There might be two experiences, but 
that does not need two persons.


You lost me with your last post, as they seem to conflict immediately 
with step 1 and "step 0", the definition of (weak) computationalism 
used in the UD Argument.


I don't see any conflict with ordinary teleportation, with or without a 
delay. There is no duplication in those cases, so ordinary 
considerations apply. Of course, if there is a delay that the 
teletransported person has no way of knowing about, then he will not 
know about that delay -- so what?


I presume by "step 0" you mean YD + CT. There is no problem with these 
assumptions; it is just that you appear not to be able to prove the 
differentiation at step 3 from these assumptions.



And my suspicion is that 

Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 00:31, Bruce Kellett wrote:


On 5/08/2016 1:28 am, Brent Meeker wrote:

On 8/4/2016 2:51 AM, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:
However, we are being asked to consider two conscious states where the
conscious state differs by at least one bit - the W/M bit. Clearly, by
the YD assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of the
single bit difference.

By that reasoning, no consciousness survives through time.

Not at all! Both conscious states survive through time by  
assumption (YD).


Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single  
consciousness through time. We get different input data all the  
time but we do not differentiate according to that data.


I could perhaps expand on that response. On duplication, two  
identical consciousnesses are created, and by the identity of  
indiscernibles, they form just a single consciousness. Then data  
is input. It seems to me that there is no reason why this should  
lead the initial consciousness to differentiate, or split into  
two. In normal life we get inputs from many sources simultaneously  
-- we see complex scenes, smell the air, feel impacts on our body,  
and hear many sounds from the environment. None of this leads our  
consciousness to disintegrate. Indeed, our evolutionary experience  
has made us adept at coping with these multifarious inputs and  
sorting through them very efficiently to concentrate on what is  
most important, while keeping other inputs at an appropriate level  
in our minds.


I have previously mentioned our ability to multitask in complex  
ways: while I am driving my car, I am aware of the car, the road,  
other traffic and so on; while, at the same time, I can be talking  
to my wife; thinking about what to cook for dinner; and reflecting  
on philosophical issues that are important to me. And this is by  
no means an exhaustive list of our ability to multitask -- to run  
many separate conscious modules within the one unified  
consciousness.


Given that this experience is common to us all, it is not in the  
least bit difficult to think that the adding of yet another stream  
of inputs via a separate body will not change the basic structure  
of our consciousness -- we will just take this additional data and  
process in the way we already process multiple data inputs and  
streams of consciousness. This would seem, indeed, to be the  
default understanding of the consequences of person duplication.  
One would have to add some further constraints in order for it to  
be clear that the separate bodies would necessarily have  
differentiated conscious streams. No such additional constraints  
are currently in evidence.


Not empirically proven constraints, but current physics strongly  
suggests that the duplicates would almost immediately, in the  
decoherence time for a brain, differentiate; i.e. the consciousness  
is not separate from the physics.  It's only "not in evidence" if  
you're trying to derive the physics from the consciousness.


Of course,  that is what I was trying to get people to see: the  
additional constraint that is necessary for differentiation is  
essentially a mind-brain identity thesis.


Not really. To get differentiation, you need only different memories  
or different first person report, not different brain.


What you say is very weird. If there is no differentiation of the  
first person experience, then how could the diary in W contain W, and  
the diary in M contain M? You lost me with your last post, as they  
seem to conflict immediately with step 1 and "step 0", the definition  
of (weak) computationalism used in the UD Argument.





And my suspicion is that the mind-brain identity thesis plays havoc  
with the rest of Bruno's argument.


The identity thesis is refuted in the computationalist frame. That  
might be a problem for the materialist, who will need at that stage to  
assume a small physical universe without a UD running in it forever, and  
without too many Boltzmann Brains, a move which is shown not to work  
later.


Just tell me if you are OK with question 1. The Helsinki guy is told  
that BOTH copies will have a hot drink after the reconstitutions, in  
both Moscow and Washington. Do you agree that the Helsinki guy (a  
believer in computationalism) will believe that he can expect, in  
Helsinki, with probability, or credibility, or plausibility ONE (resp  
maximal) to have some hot drink after pushing the button in Helsinki?


We need to decompose step 3 in sub-steps, so that we can see if there  
is a real disagreement, and in that case where and which one, or if it  
is just pseudo-philosophy or bad 

Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett

On 5/08/2016 6:12 pm, Bruno Marchal wrote:

On 05 Aug 2016, at 04:13, Bruce Kellett wrote:

On 5/08/2016 3:41 am, Bruno Marchal wrote:

On 04 Aug 2016, at 04:37, Bruce Kellett wrote:

On 4/08/2016 1:04 am, Bruno Marchal wrote:

On 03 Aug 2016, at 07:16, Bruce Kellett wrote:
You use the assumption that the duplicated consciousnesses 
automatically differentiate when receiving different inputs.


It is not an assumption.


Of course it is an assumption. You have not derived it from 
anything previously in evidence.


See my answer to Brent. It is just obvious that the first person 
experience differentiates when it gets different experiences, leading 
to different memories. We *assume* computationalism. How could the 
diaries not differentiate? What you say does not make any sense.


I have been at pains to argue (in several different ways) that the 
differentiation of consciousness is not automatic. It is very easy to 
conceive of a situation in which a single consciousness continues in 
two bodies, with the streams of consciousness arising from both 
easily identifiable, but still unified in the consciousness of a 
single person. (I copy below my recent argument for this in a post 
replying to Russell.) So the differentiation you require is not 
necessary or automatic -- it has to be justified separately because 
it is not "just obvious".


Your recent expansion of the argument of step 3 in discussions with 
John Clark does not alter the situation in any way -- you still just 
assert that the differentiation takes place on the receipt of 
different input data.


I had thought that the argument for such differentiation of 
consciousness in different physical bodies was a consequence of some 
mind-brain identity thesis. But I am no longer sure that even that is 
sufficient -- the differentiation clearly requires separate 
bodies/brains (separate input data streams), but separate bodies are 
not sufficient for differentiation, as I have shown.


That was shown and explained before and is not contested here.


I thought I was contesting it.


Please read the posts.
That is why I introduce a painting in question 2.


That still just gives differentiation on different data inputs -- it 
changes nothing.



But let us first see if you agree with question 1.

Do you agree that if the H-guy is told that a hot drink will be 
offered to both reconstitutions, in W and in M, he is entitled to expect 
a hot drink with probability one (assuming computationalism and the 
default hypotheses)?


I do not assume computationalism, I am questioning its validity.


Do you agree that P(X) = 1 in Helsinki, if X will occur in both cities?


I think that it is entirely possible that the H-guy will, after the 
duplication, experience drinking two coffees.


What is required is a much stronger additional assumption, namely an 
association between minds and brains such that a mind can occupy only 
one brain.


Not at all. We can say that one mind occupies both brains in the 
WM-duplication, before the opening of the doors, assuming the 
reconstitution boxes are identical. The mind-brain identity fails right at 
step 3.


Mind-brain identity need not fail: what fails in my interpretation of 
duplication is the one-to-one correspondence of one mind with one body. 
One needs something stronger than mind-brain identity to justify the 
differentiation on different data inputs because we can have one-many 
and many-one mind-body relationships.


We can associate a mind to a body, but the mind itself (the 1p) can be 
(and must be) associated with many different bodies, in the physical 
universe and later in arithmetic.


You seem to accept my point -- there is still only one mind even after 
different data are fed to the copies: one mind in two bodies in this 
case (a one-many relationship).


(Whether a single brain can host only one mind is a separate matter, 
involving one's attitude to the results of split brain studies and 
the psychological issues surrounding multiple personalities/minds.) 
In other words, the differentiation assumption is an additional 
assumption that does not appear to follow from either physicalism or 
YD+CT.


It follows from very elementary computer science, and in our case, it 
follows necessarily, as the 1p is identified, in this setting, with the 
content of the personal diary, which obviously differentiates on the 
self-localization result made by the reconstitutions.


I think the diaries are just confusing you. The copy in M can write M in 
the diary in Moscow, and the copy in W can write W in the diary in 
Washington. That is not necessarily different from me writing M in one 
diary with my left hand while writing W in a separate diary with my 
right hand. No differentiation into two separate persons is necessary in 
either case. There is no "self-localization" if there is only ever one 
consciousness -- the person experiences both W and M simultaneously.


As I have further pointed out, one cannot just make this an 

Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 06:27, Brent Meeker wrote:




On 8/4/2016 7:40 PM, Stathis Papaioannou wrote:



On 5 August 2016 at 04:01, Brent Meeker  wrote:


On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:
The problem with (3) is a general problem with multiverses.  A  
single, infinite universe is an example of a multiverse theory,  
since there will be infinite copies of everything and every  
possible variation of everything, including your brain and your  
mind.


That implicitly assumes a digital universe, yet the theory that  
suggests it, quantum mechanics, is based on continua; which is why  
I don't take "the multiverse" too seriously.


It appears that our brains are finite state machines. Each neuron  
can either be "on" or "off", there are a finite number of neurons,  
so a finite number of possible brain states, and a finite number of  
possible mental states. This is analogous to a digital computer:


Not necessarily. A digital computer also requires that time be  
digitized so that its registers run synchronously.  Otherwise "the  
state" is ill defined.  The finite speed of light means that  
spatially separated regions cannot be synchronous.  Even if neurons  
were only ON or OFF, which they aren't, they have frequency  
modulation, they are not synchronous.


A synchronous digital machine can emulate an asynchronous digital  
machine, and that is all that is needed for the reasoning.
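[Editor's note: the emulation claim is standard computer science: a synchronous stepper advances one component per tick, and every asynchronous run corresponds to some such schedule. A toy sketch, my own illustration rather than anything from the thread:

```python
def step(state, which):
    """Synchronously advance one component; the other components are
    left untouched, which is exactly what asynchrony means here."""
    new = list(state)
    new[which] ^= 1  # each component just toggles a bit
    return tuple(new)

# One possible asynchronous schedule of two components, emulated
# deterministically, one component per synchronous tick.
state = (0, 0)
for which in [0, 1, 1, 0]:
    state = step(state, which)

print(state)  # (0, 0): each component toggled twice
```

Enumerating all schedules instead of fixing one covers every asynchronous behaviour, which is the sense in which synchronous emulation suffices for the reasoning.]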


Bruno





even if you postulate that electric circuit variables are  
continuous, transistors can only be on or off. If the number of  
possible mental states is finite, then in an infinite universe,  
whether continuous or discrete, mental states will repeat.
We live in an orderly world with consistent physical laws. It  
seems to me that you are suggesting that if everything possible  
existed then we would not live in such an orderly world,


Unless the worlds were separated in some way, which current  
physical theories provide - but which is not explicable if you  
divorce conscious thoughts from physics.


The worlds are physically separated - there can be no communication  
between separate worlds in the multiverse and none between  
sufficiently widely separated copies of subsets of the world in an  
infinite single universe. But the separate copies are connected  
insofar as they share memories and sense of identity, even if there  
is no causal connection between them.


Of course "copy" implies a shared past in which there was an  
"original"; they have a cause in common.


Brent



http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 04:13, Bruce Kellett wrote:


On 5/08/2016 3:41 am, Bruno Marchal wrote:

On 04 Aug 2016, at 04:37, Bruce Kellett wrote:

On 4/08/2016 1:04 am, Bruno Marchal wrote:

On 03 Aug 2016, at 07:16, Bruce Kellett wrote:



You use the assumption that the duplicated consciousnesses  
automatically differentiate when receiving different inputs.


It is not an assumption.


Of course it is an assumption. You have not derived it from  
anything previously in evidence.


See my answer to Brent. It is just obvious that the first person  
experience differentiates when it gets different experiences, leading  
to different memories. We *assume* computationalism. How could the  
diaries not differentiate? What you say does not make any sense.


I have been at pains to argue (in several different ways) that the  
differentiation of consciousness is not automatic. It is very easy  
to conceive of a situation in which a single consciousness continues  
in two bodies, with the streams of consciousness arising from both  
easily identifiable, but still unified in the consciousness of a  
single person. (I copy below my recent argument for this in a post  
replying to Russell.) So the differentiation you require is not  
necessary or automatic -- it has to be justified separately because  
it is not "just obvious".


Your recent expansion of the argument of step 3 in discussions with  
John Clark does not alter the situation in any way -- you still just  
assert that the differentiation takes place on the receipt of  
different input data.


I had thought that the argument for such differentiation of  
consciousness in different physical bodies was a consequence of some  
mind-brain identity thesis. But I am no longer sure that even that  
is sufficient -- the differentiation clearly requires separate  
bodies/brains (separate input data streams), but separate bodies are  
not sufficient for differentiation, as I have shown.


That was shown and explained before and is not contested here. Please  
read the posts.
That is why I introduce a painting in question 2. But let us first see  
if you agree with question 1.



Do you agree that if the H-guy is told that a hot drink will be  
offered to both reconstitutions, in W and in M, he is entitled to expect  
a hot drink with probability one (assuming computationalism and the  
default hypotheses)?


Do you agree that P(X) = 1 in Helsinki, if X will occur in both cities?




What is required is a much stronger additional assumption, namely an  
association between minds and brains such that a mind can occupy  
only one brain.


Not at all. We can say that one mind occupies both brains in the  
WM-duplication, before the opening of the doors, assuming the  
reconstitution boxes are identical. The mind-brain identity fails right at  
step 3. We can associate a mind to a body, but the mind itself (the  
1p) can be (and must be) associated with many different bodies, in the  
physical universe and later in arithmetic.





(Whether a single brain can host only one mind is a separate matter,  
involving one's attitude to the results of split brain studies and  
the psychological issues surrounding multiple personalities/minds.)  
In other words, the differentiation assumption is an additional  
assumption that does not appear to follow from either physicalism or  
YD+CT.


It follows from very elementary computer science, and in our case, it  
follows necessarily, as the 1p is identified, in this setting, with the  
content of the personal diary, which obviously differentiates on the  
self-localization result made by the reconstitutions.
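[Editor's note: the "elementary computer science" invoked here can be made concrete with a toy sketch of my own; it only shows that the recorded diary contents diverge after different inputs, while whether diverging diaries amount to two first persons is precisely what is in dispute in this thread:

```python
from copy import deepcopy

class Diarist:
    """Toy machine whose first-person discourse is identified
    with the content of its personal diary."""
    def __init__(self):
        self.diary = ["Helsinki: pushed the button"]

    def self_localize(self, city):
        self.diary.append(f"I open the door and see {city}")

original = Diarist()
w_guy = deepcopy(original)   # identical states at reconstitution...
m_guy = deepcopy(original)
w_guy.self_localize("Washington")  # ...until each records its own
m_guy.self_localize("Moscow")      # self-localization result

print(w_guy.diary == m_guy.diary)  # False: the diary contents have diverged
```

The copies share their pre-duplication entry but disagree from the self-localization entry onward.]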






As I have further pointed out, one cannot just make this an  
additional assumption to YD+CT because it is clearly an empirical  
matter: until we have a working person duplicator, we cannot know  
whether differentiation is automatic or not. Science is, after all,  
empirical, not just a matter of definitions.


Once you agree with P(Mars) = 1 in a simple classical teleportation  
experience (step 1), then how could the diary not differentiate when  
the reconstituted guy writes the result of the self-localization?


No empirical test needs to be done, as the differentiation is obvious:  
one copy experiences the city of Moscow, as his diary confirms, and  
the other experiences the city of Washington, as his diary confirms  
too. If they did not differentiate, what would they write in the diary?


Bruno






Bruce

Here is part of my discussion with Russell:

[BK]I could perhaps expand on that response. On duplication, two  
identical consciousnesses are created, and by the identity of  
indiscernibles, they form just a single consciousness. Then data is  
input. It seems to me that there is no reason why this should lead  
the initial consciousness to differentiate, or split into two. In  
normal life we get inputs from many sources simultaneously -- we see  
complex scenes, smell the air, feel impacts on our body, and hear  
many sounds 

Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 04 Aug 2016, at 19:53, John Clark wrote:



On Thu, Aug 4, 2016 at 12:15 PM, Marchal  wrote:

> The question is not about duplication.

OK.


And that part is still OK. Assigning probabilities about what  
"YOU" will see next is not ambiguous as long as "YOU" duplicating  
machines are not around.


> So, you are OK that the guy in Helsinki writes P("drinking  
coffee") = 1.


The guy in Helsinki? NO!!! Bruno Marchal said "The question  
is not about duplication" but the guy in Helsinki is just about to  
walk into a YOU duplicating machine, so John Clark will not  
assign any probability of any sort about the one and only one thing  
that will happen to "YOU". It's just plain dumb.


Nope, question one was about duplication. Only question 2 was not. You  
did say that P("drinking coffee") = 1 for the Helsinki guy.

Just to be sure, I quote your answer to question one:


On Tue, Aug 2, 2016 at 12:55 PM, Bruno Marchal   
wrote:


> both copies will have a cup of coffee after the  
reconstitution. Are you OK that P("experience of drinking coffee") =  
1?


Yes, and in this case it doesn't matter if Bruno Marchal says P is  
the probability John Clark will drink the coffee or says P is the  
probability "you" will drink the coffee, there is no ambiguity  
either way. However if the Moscow man got the coffee but the  
Washington man did not then there would be a 100% probability that  
John Clark will get the coffee and also a 100% probability that John  
Clark will not get the coffee, just as I would assign a 100%  
probability that tomorrow tomatoes will be red and I would also  
assign a 100% probability that tomorrow tomatoes will be green.



Like I just said: QED, unless you explicitly change your mind on  
question 1. But then say it, and we come back to question 1.


Bruno








​> ​Now, the guy in Helsinki is told that we have put a painting  
by Van Gogh in one of the reconstitution box, and a painting by  
Monet in the other reconstitution box.​ ​


​Let's see if John Clark can guess what's coming. After "YOU" have  
been duplicated by a YOU duplicating machine what is the probability  
that "YOU" will blah blah blah. What on earth made Bruno Marchal  
think that substituting a painting for a cup of coffee would make  
things less ambiguous?


​> ​The key point here, is that we don't tell you which  
reconstitution box contains which painting. ​[...]


​Why is that the key point? Suppose we​ ​change the experiment  
and this time before the experiment we tell "YOU" which box contains  
which painting, we tell "YOU" that the red box on the left contains  
the Van Gogh​ ​and the blue box on the right contains the Monet ,  
and we tell "YOU" that after "YOU" are duplicated by the YOU  
duplicating machine "YOU" will be in both boxes. Does that  
information help in the slightest way in determining what one and  
only one painting "YOU" will see after "YOU" ​are​  
duplicated? ​ ​It's just plain​ ​dumb.


> P("being uncertain about which city is behind the door")

P is equal to whose uncertainty? After the experiment is over, how do we determine what the true value of P turned out to be? To find out that value we need to ask "YOU" what "YOU" saw after "YOU" walked into the YOU duplicating machine and opened one and only one door. But who exactly do we ask? We can't ask the Helsinki man, as he's no longer around. Oh, I know, we ask "YOU".


> OK?

No, it's not OK; it's about as far from OK as things get.

> Can we move to step 4?

Just as soon as Bruno Marchal explains what one and only one thing "YOU" refers to in a world with "YOU" duplicating machines.


John K Clark





--
You received this message because you are subscribed to the Google  
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


http://iridia.ulb.ac.be/~marchal/





Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 04 Aug 2016, at 19:53, John Clark wrote:



On Thu, Aug 4, 2016 at 12:15 PM, Marchal  wrote:

> The question is not about duplication.

OK.


And that part is still OK. Assigning probabilities about what "YOU" will see next is not ambiguous as long as "YOU" duplicating machines are not around.


> So, you are OK that the guy in Helsinki writes P("drinking coffee") = 1.


The guy in Helsinki? NO!!! Bruno Marchal said "The question is not about duplication".



Question 2 was not about duplication, but question 1 was, and you said that P("drinking coffee") was equal to one.


You have already contradicted your recent post, in which you said that question 1, which was clearly about duplication, admits a positive answer.


QED.

Bruno







but the guy in Helsinki is just about to walk into a YOU duplicating machine, so John Clark will not assign any probability of any sort about the one and only one thing that will happen to "YOU". It's just plain dumb.



> Now, the guy in Helsinki is told that we have put a painting by Van Gogh in one of the reconstitution boxes, and a painting by Monet in the other.


Let's see if John Clark can guess what's coming. After "YOU" have been duplicated by a YOU duplicating machine, what is the probability that "YOU" will blah blah blah. What on earth made Bruno Marchal think that substituting a painting for a cup of coffee would make things less ambiguous?


> The key point here is that we don't tell you which reconstitution box contains which painting. [...]


Why is that the key point? Suppose we change the experiment, and this time, before the experiment, we tell "YOU" which box contains which painting: we tell "YOU" that the red box on the left contains the Van Gogh and the blue box on the right contains the Monet, and we tell "YOU" that after "YOU" are duplicated by the YOU duplicating machine, "YOU" will be in both boxes. Does that information help in the slightest way in determining what one and only one painting "YOU" will see after "YOU" are duplicated? It's just plain dumb.


> P("being uncertain about which city is behind the door")

P is equal to whose uncertainty? After the experiment is over, how do we determine what the true value of P turned out to be? To find out that value we need to ask "YOU" what "YOU" saw after "YOU" walked into the YOU duplicating machine and opened one and only one door. But who exactly do we ask? We can't ask the Helsinki man, as he's no longer around. Oh, I know, we ask "YOU".
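The verification step being disputed here ("who exactly do we ask?") can be made concrete with a toy model: run the duplication protocol, then interrogate each continuation separately. A minimal illustrative sketch, where the diary-as-list representation, function name, and report strings are my own assumptions rather than anything specified in the thread:

```python
import copy

def duplicate_and_reconstitute(helsinki_diary):
    """Toy model of the protocol: the Helsinki diary is copied into
    two independent continuations, one reconstituted in Washington
    and one in Moscow, and each appends what it sees on opening
    its one and only one door."""
    washington = copy.deepcopy(helsinki_diary)
    washington.append("I see Washington")
    moscow = copy.deepcopy(helsinki_diary)
    moscow.append("I see Moscow")
    return washington, moscow

w, m = duplicate_and_reconstitute(["I am in Helsinki"])
# Asked separately, each continuation confirms "W or M"
# and refutes "W and M":
for diary in (w, m):
    sees_w = "I see Washington" in diary
    sees_m = "I see Moscow" in diary
    assert sees_w or sees_m
    assert not (sees_w and sees_m)
```

Both continuations share the Helsinki entry, so the model captures exactly the point in dispute: "YOU" picks out one interview subject before the copy and two afterwards, and each gives a different first-person answer.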


> OK?

No, it's not OK; it's about as far from OK as things get.

> Can we move to step 4?

Just as soon as Bruno Marchal explains what one and only one thing "YOU" refers to in a world with "YOU" duplicating machines.


John K Clark







http://iridia.ulb.ac.be/~marchal/


