Re: Holiday Exercise

2016-08-15 Thread John Clark
On Mon, Aug 15, 2016 at 10:22 AM, Bruno Marchal  wrote:

> Answer, or re-answer to question 1.


John Clark doesn't even remember what question 1 was, and doubts it is
worth looking up because of a suspicion it involves personal pronouns and
rugs for sweeping sloppy thinking under.

 John K Clark

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Holiday Exercise

2016-08-15 Thread Bruno Marchal


On 14 Aug 2016, at 18:26, John Clark wrote:



On Sun, Aug 14, 2016 at 11:45 AM, Bruno Marchal   
wrote:


>> I saw only an apple, my friend saw only a banana, what is THE one
and only fruit that BOTH I AND my friend saw? Silly question.


> I agree so much with you on this! Although the question is not silly
when asked of the Helsinki guy before his duplication, as he will
live, assuming computationalism, two different first-person
continuations.


And here we go again with the "his" and "he" pronoun circus. John
Clark will rephrase what Bruno Marchal said above, but without using a
"he"-colored personal pronoun rug to sweep fuzzy thinking under:


The question of which one and only one city the Helsinki man will see
after the Helsinki man is duplicated is not silly when asked of the
Helsinki man before the Helsinki man's duplication, as after the
Helsinki man's duplication the Helsinki man will live in 2 different
cities.


John Clark disagrees; John Clark thinks that is a very, very silly
question.


> No diary contains a description of the first person (conscious or
not) experience


Then there is no reason to ever refer to that stupid diary again.


> "I (here and now) saw directly, after opening the door []


Please specify which door the above refers to, the one in Moscow or
the one in Washington.


> I recall the main tip that you seem to forget all the time: to get
the first person records,


Please specify exactly whose first-person records should be gotten.

> listen to your copies.

Please specify exactly who should be doing the listening.




Answer, or re-answer to question 1. Up to now you have been
inconsistent, making it impossible to progress.


Bruno







Re: Holiday Exercise

2016-08-14 Thread John Clark
On Sun, Aug 14, 2016 at 11:45 AM, Bruno Marchal  wrote:

>> I saw only an apple, my friend saw only a banana, what is *THE* one and
>> only fruit that *BOTH* I *AND* my friend saw? Silly question.
>
>
> I agree so much with you on this! Although the question is not silly
> when asked of the Helsinki guy before his duplication, as he will live,
> assuming computationalism, two different first-person continuations.
>

And here we go again with the "his" and "he" pronoun circus. John Clark
will rephrase what Bruno Marchal said above, but without using a
"he"-colored personal pronoun rug to sweep fuzzy thinking under:

*The question of which one and only one city the Helsinki man will see
after the Helsinki man is duplicated is not silly when asked of the
Helsinki man before the Helsinki man's duplication, as after the
Helsinki man's duplication the Helsinki man will live in 2 different
cities.*

John Clark disagrees; John Clark thinks that is a very, very silly
question.


> No diary contains a description of the first person (conscious or not)
> experience

Then there is no reason to ever refer to that stupid diary again.



> "I (here and now) saw directly, after opening the door []

Please specify which door the above refers to, the one in Moscow or the
one in Washington.


> I recall the main tip that you seem to forget all the time: to get the
> first person records,

Please specify exactly whose first-person records should be gotten.


> listen to your copies.

Please specify exactly who should be doing the listening.

 John K Clark













Re: Holiday Exercise

2016-08-14 Thread Bruno Marchal


On 12 Aug 2016, at 18:40, John Clark wrote:

On Fri, Aug 12, 2016 at 10:42 AM, Bruno Marchal   
wrote:


"The" first person experience you will live is "the" experience that
both copies witness when I interview them both


I saw only an apple, my friend saw only a banana, what is THE one
and only fruit that BOTH I AND my friend saw? Silly question.



I agree so much with you on this!

Although the question is not silly when asked of the Helsinki guy
before his duplication, as he will live, assuming computationalism,
two different first-person continuations.


Each of them is consistent with being the Helsinki guy, but they are
first-person inconsistent when taken together, as in this precise
protocol computationalism makes the two copies into independent
individual persons.
(No diary contains a description of the first person (conscious or
not) experience:
 "I (here and now) saw directly, after opening the door, the two
cities".)


As both copies are genuine survivors, we listen to them both, and both
told us that they got a non-predictable bit of information: W or M
among {W, M}. With principles 1 and 2, it follows that the guy in
Helsinki was in a first-person indeterminacy situation, and this
precise protocol suggests P(W) = P(M) = 1/2. Indeed, iterating the
duplication leads naturally to Pascal's triangle and the Gaussian
curve as the number of duplications grows.
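The iterated-duplication statistics can be checked with a short sketch (illustrative code, not from the original post; the function name is invented): counting the 2^n copies by how many W's appear in their diaries reproduces the rows of Pascal's triangle, whose normalized shape approaches the Gaussian curve as n grows.

```python
from math import comb

def diary_histogram(n):
    """After n W/M duplications there are 2**n copies; return, for
    k = 0..n, how many copies have exactly k W's in their diary.
    These counts are row n of Pascal's triangle."""
    return [comb(n, k) for k in range(n + 1)]

for n in range(5):
    print(n, diary_histogram(n))
# each row sums to 2**n (every duplication doubles the copies), and
# for large n the normalized row approaches the Gaussian shape
```

With P(W) = P(M) = 1/2, the relative frequency of W's in the diaries concentrates around 1/2, which is the sense in which the Gaussian curve appears in the iterated protocol.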


I recall the main tip that you seem to forget all the time: to get the  
first person records, listen to your copies.


Bruno









John K Clark





http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-12 Thread John Clark
On Fri, Aug 12, 2016 at 10:42 AM, Bruno Marchal  wrote:

> "The" first person experience you will live is "the" experience that
> both copies witness when I interview them both


I saw only an apple, my friend saw only a banana, what is *THE* one and
only fruit that *BOTH* I *AND* my friend saw? Silly question.

John K Clark



Re: Holiday Exercise

2016-08-12 Thread Bruno Marchal


On 11 Aug 2016, at 20:18, John Clark wrote:



On Thu, Aug 11, 2016 at 10:55 AM, Bruno Marchal   
wrote:


>> in general it's not true that they will be perceiving different
things; if you were the identical copy, in a symmetrical environment
and facing your original, the two of you would see identical things,
and if your position were instantaneously exchanged with the original
there would be no change in your consciousness or in that of the
original; neither of you could even tell an exchange had occurred. So
in that situation how could it make sense to talk of "two different
consciousnesses" when there is clearly no difference between them?


> And this shows that you agree (implicitly at least) that in the
step 3 case, the two identical bodies interact with quite different
environments, and their consciousnesses bifurcate/differentiate.


Well of course I agree with that! One is conscious of Moscow and
the other is conscious of Washington; how could anyone say that's
not a difference, how could anyone say they're still identical?


> The "identical copies"

They were once identical, but after seeing different cities they
are now nonidentical copies. And yet both are the Helsinki man. And
that is why establishing a personal sense of self can only come from
remembering the past and never from trying to predict the future.


> could not predict the first person result of the differentiation.


Nobody and nothing will ever be able to predict "THE first
person result of the differentiation"



And given that we survive such duplication (assuming computationalism),
everything you say in this post confirms the unavoidable existence of
the First Person Indeterminacy.


"The" first person experience you will live is "the" experience that
both copies witness when I interview them both, on their personal
experience, using my mobile phone, after their reconstitution. QED.


Bruno




because after differentiation "THE first person result of the
differentiation" is pure triple-distilled, extra-virgin, 100%
gibberish. Nobody will ever be able to predict "sjhfhzbawhfd" either.




 John K Clark




http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-11 Thread John Clark
On Thu, Aug 11, 2016 at 10:55 AM, Bruno Marchal  wrote:

>> in general it's not true that they will be perceiving different
>> things; if you were the identical copy, in a symmetrical environment and
>> facing your original the two of you would see identical things, and if your
>> position were instantaneously exchanged with the original there would be no
>> change in your consciousness or in that of the original; neither of you
>> could even tell an exchange had occurred. So in that situation how could it
>> make sense to talk of "two different consciousnesses" when there is clearly
>> no difference between them?
>
>
> And this shows that you agree (implicitly at least) that in the step 3
> case, the two identical bodies interact with quite different environments,
> and their consciousnesses bifurcate/differentiate.
>

Well of course I agree with that! One is conscious of Moscow and the other
is conscious of Washington; how could anyone say that's not a difference,
how could anyone say they're still identical?

> The "identical copies"

They were once identical, but after seeing different cities they are now
nonidentical copies. And yet both are the Helsinki man. And that is why
establishing a personal sense of self can only come from remembering the
past and never from trying to predict the future.

> could not predict the first person result of the differentiation.

Nobody and nothing will ever be able to predict "*THE* first person
result of the differentiation" because after differentiation "*THE* first
person result of the differentiation" is pure triple-distilled, extra-virgin,
100% gibberish. Nobody will ever be able to predict "sjhfhzbawhfd" either.

 John K Clark



Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-11 Thread Bruno Marchal


On 09 Aug 2016, at 19:32, John Clark wrote:

On Sun, Aug 7, 2016 at 10:40 AM, Bruno Marchal   
wrote:


> a nine-year-old child gets the point

And I might get your point if I had the mentality of a nine-year-old
child, or of something similar, like an ancient Greek.



Feeling superior? That might be the root of your difficulties here.

Bruno





 John K Clark​





http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-11 Thread Bruno Marchal


On 09 Aug 2016, at 23:30, John Clark wrote:

On Mon, Aug 8, 2016 at 8:51 PM, Brent Meeker   
wrote:


> I think the default assumption is that consciousness supervenes
on the brain, so two different brains will realize two different
consciousnesses because they are at different locations and
perceiving different things.


But in general it's not true that they will be perceiving different
things; if you were the identical copy, in a symmetrical environment
and facing your original, the two of you would see identical things,
and if your position were instantaneously exchanged with the original
there would be no change in your consciousness or in that of the
original; neither of you could even tell an exchange had occurred. So
in that situation how could it make sense to talk of "two different
consciousnesses" when there is clearly no difference between them?



And this shows that you agree (implicitly at least) that in the step 3
case, the two identical bodies interact with quite different
environments, and their consciousnesses bifurcate/differentiate.


The "identical copies" could not predict the first person result of
the differentiation. That follows from questions 1 and 2, or from
questions 1a and 1b plus question 2, to take into account a remark by
Bruce.


Bruno








John K Clark







http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-11 Thread Bruno Marchal


On 09 Aug 2016, at 07:28, Bruce Kellett wrote:


On 9/08/2016 3:14 pm, Brent Meeker wrote:

On 8/8/2016 7:03 PM, Bruce Kellett wrote:

On 9/08/2016 10:51 am, Brent Meeker wrote:

On 8/8/2016 4:32 PM, Bruce Kellett wrote:

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added axiom/ 
assumption that the duplicated brain/bodies must have separate  
consciousnesses at least as soon as they have different  
perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that  
consciousness is a product of physical processes in the brain -  
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of
course that is what we might expect -- two brains ==> two
consciousnesses. But that says nothing about the case of two
identical brains -- is there one consciousness or two? The
default assumption around here appears to be that the identity
of indiscernibles means that there is only one conscious
being. The question is then how this consciousness evolves as
inputs change.


I think the default assumption is that consciousness supervenes  
on the brain, so two different brains will realize two different  
consciousnesses because they are at different locations and  
perceiving different things.


That is fine if they started off different, and were never  
identical -- identical in all details, not just sharing single  
"observer moments", even if such can be well-defined.


I would speculate that it would be just like having two  
autonomous Mars rovers that "wake up" at different points on the  
surface.  They may have the same computers and sensors and  
programs, but their data and memories will immediately start to  
diverge.  They won't be "completely" different, as identical  
twins aren't completely different.  They may even occasionally  
think the same thoughts.  But relativity tells us there's no  
sense to saying they think them at the same time.


But Mars  rovers are not conscious -- or are they?

I don't think this does much to invalidate Bruno's argument.   
He just wants to show that the physical is derivative, not that  
it's irrelevant.


I disagree. I think it is crucial for Bruno's argument. He  
cannot derive the differentiation of consciousness in this  
duplication case from the YD+CT starting point, so where does it  
come from?


In his theory, the physics and the consciousness must both
derive from the infinite threads of computation by the UD. I'm
just making the point that he does need to derive the physics,
specifically the finite speed of communication, in order to show
that the duplication results in two different consciousnesses.


The finite speed of communication is a problem only if  
consciousness is localized to the physical brain -- if it is a non- 
local computation, this might not be an issue.


It seems to me an experimental matter -- until we have
duplicated a conscious being, we will not know whether the
consciousnesses differentiate on different inputs or not.


Suppose there is an RF link between them so they can share
computation, memory, sensor data,...  Then we'd be inclined to  
say that they could be a single consciousness.  But now suppose  
they are moved light-years apart.  They could still share  
computation, memory, etc.  But intelligent action on the scale of  
an autonomous rover would have to be based on the local resources  
of a single rover.  So they would have to effectively  
"differentiate".  It wouldn't be some kind of axiomatic,  
mathematically provable differentiation - rather a practical,  
observable one.


Yes, that makes sense. But the rovers are not conscious.


Why not?  Suppose they are.  If you would say "yes to the doctor"  
then you must believe that AI is possible.


I have no reason to suppose that AI is not possible. But the Mars  
rovers are unlikely to be sufficiently complex/self referential to  
be conscious. Do they have an inner narrative?


And if they were placed at different points on the surface of  
Mars, they would have to start with at least some different data  
-- viz., their location on the surface relative to earth. The  
general issue I am raising is that consciousness could be non- 
local, in which case separated duplicates would not need any form  
of subluminal physical communication in order to remain a single  
conscious being.


You seem to be agreeing that this is, at bottom, an empirical  
matter. If we do the experiment and duplicate a conscious being,  
then separate the duplicates, we could ask one whether or not it  
was still aware of its duplicate. If the answer is "No", then we  
know that consciousness is localized to a particular physical  
body. If the answer is "Yes", then we know that consciousness is  
non-local, even though it might still supervene on the physical  
bodies.


I don't think 

Re: Holiday Exercise

2016-08-11 Thread Bruno Marchal


On 09 Aug 2016, at 02:57, Stathis Papaioannou wrote:




On 9 August 2016 at 03:52, Brent Meeker  wrote:


On 8/8/2016 6:18 AM, Stathis Papaioannou wrote:



On Monday, 8 August 2016, Brent Meeker  wrote:


On 8/7/2016 11:20 AM, Bruno Marchal wrote:
Not necessarily. A digital computer also requires that time be
digitized so that its registers run synchronously; otherwise "the
state" is ill-defined. The finite speed of light means that
spatially separated regions cannot be synchronous. Even if neurons
were only ON or OFF, which they aren't -- they have frequency
modulation -- they are not synchronous.


A synchronous digital machine can emulate an asynchronous digital
machine, and that is all that is needed for the reasoning.
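A minimal sketch of that emulation claim (illustrative code, not from the thread; the function name and the event format are invented): time-stamp the events of the asynchronous components and let one synchronous loop process them in timestamp order, so the global state is well defined at every tick.

```python
import heapq

def run_async_on_sync_clock(events):
    """Emulate asynchronous components on a synchronous machine.
    events: list of (timestamp, component, payload) tuples.  A single
    deterministic loop handles them in timestamp order, one per tick,
    so 'the state' is well defined after every step."""
    heapq.heapify(events)
    log = []
    while events:
        _, component, payload = heapq.heappop(events)
        log.append((component, payload))
    return log

# the two components fire "out of order" in the input list, but the
# synchronous loop sees A (t=1.0) before B (t=3.0)
trace = run_async_on_sync_clock([(3.0, "B", "fire"), (1.0, "A", "fire")])
```

The design choice mirrors how discrete-event simulators work: asynchrony is reduced to a deterministic serial schedule, which is why the synchronous/asynchronous distinction does not matter for the reasoning.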


If the time variable is continuous, i.e. can't be digitized, I  
don't think you are correct.


If time is continuous, you would need infinite precision to exactly  
define the timing of a neuron's excitation, so you are right, that  
would not be digitisable. Practically, however, brains would have  
to have a non-zero engineering tolerance, or they would be too  
unstable. The gravitational attraction of a passing ant would  
slightly change the timing of neural activity, leading to a change  
in mental state and behaviour.


I agree that brains must be essentially classical computers, but no  
necessarily digital.  The question arose as to what was contained in  
an Observer Moment and whether, in an infinite universe there would  
necessarily be infinitely many exact instances of the same OM.


Even in a continuum, there would be brain states and mental states  
that are effectively identical to an arbitrary level of precision.  
We maintain a sense of continuity of identity despite sometimes even  
gross changes to our brain. At some threshold there will be a  
perceptible change, but the threshold is not infinitesimal.


But having a continuous variable doesn't imply instability.
First, the passing ant is also instantiated infinitely many times.
Second, if a small cause has only a proportionately small effect,
then there is no "instability"; more likely the dynamics diverge as
in deterministic chaos. But in any case it would allow an aleph-1
order infinity of OMs which would differ by infinitesimal amounts.


But I also question the coherence of this idea.  As discussed (at  
great length) by Bruno and JKC, two or more identical brains must  
instantiate the same experience, i.e. the same OM.  So if there are  
only a finite number of possible brain-states and universes are made  
of OMs, then there can only be a finite number of finite universes.


A human brain can probably only have a finite number of thoughts,  
being of finite size, but a turing machine is not so limited.




Turing machines, combinators, programs, numbers, ... are finite
entities. The universal Turing machine is a number/finite code. During
any computation, the Turing machine looks only at a finite portion of
its tape, but that portion can be as great as needed. Similarly,
humans can see only a finite number of things at a time, and when they
have a brain memory overflow, they use a wall, or paper, or magnetic
tape. It is the essence of a machine to be a finite entity---I would
say, and it is the essence of a digital machine to be a finite entity
admitting a finite description (one that we can put into a number and
store on a disk).
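The point that a digital machine is a finite description operating on an unbounded but only finitely-used tape can be made concrete (a toy sketch; the machine, its rules, and its state names are invented for illustration):

```python
from collections import defaultdict

# The entire machine is this finite transition table -- a finite code
# that could be numbered and stored on a disk, as the post says.
RULES = {  # (state, symbol) -> (symbol_to_write, head_move, next_state)
    ("s", 0): (1, +1, "s"),     # flip a 0 to 1 and move right
    ("s", 1): (1, +1, "halt"),  # on the first 1, halt
}

def run(rules, tape_bits, state="s", head=0, max_steps=100):
    # the tape is unbounded (unvisited cells default to 0), but only
    # the finitely many visited cells are ever materialized
    tape = defaultdict(int, enumerate(tape_bits))
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return state, sorted(tape.items())

state, cells = run(RULES, [0, 0, 1])  # visits only three cells
```

However long the input, the machine itself never grows: only the portion of tape it touches does, which is the sense in which a digital machine is a finite entity with a finite description.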


Bruno







--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-09 Thread John Clark
On Mon, Aug 8, 2016 at 8:51 PM, Brent Meeker  wrote:

> I think the default assumption is that consciousness supervenes on the
> brain, so two different brains will realize two different consciousnesses
> because they are at different locations and perceiving different things.


But in general it's not true that they will be perceiving different
things; if you were the identical copy, in a symmetrical environment and
facing your original the two of you would see identical things, and if your
position were instantaneously exchanged with the original there would be no
change in your consciousness or in that of the original; neither of you
could even tell an exchange had occurred. So in that situation how could it
make sense to talk of "two different consciousnesses" when there is clearly
no difference between them?

 John K Clark



Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-09 Thread John Clark
On Sun, Aug 7, 2016 at 10:40 AM, Bruno Marchal  wrote:

> a nine-year-old child gets the point


And I might get your point if I had the mentality of a nine-year-old
child, or of something similar, like an ancient Greek.

 John K Clark​



Re: Holiday Exercise

2016-08-09 Thread Bruno Marchal


On 09 Aug 2016, at 01:03, Russell Standish wrote:


On Mon, Aug 08, 2016 at 09:06:20PM +1000, Bruce Kellett wrote:

On 8/08/2016 8:38 pm, Russell Standish wrote:

On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:

However, still no justification has been given for the assumption
that the duplicated consciousness differentiates on different
inputs. And consciousness is what computationalism is supposed to  
be

giving an account of.


Obviously different inputs does not entail the differentiation of
consciousness.


In duplication there is still only one consciousness: and as you
say, different inputs do not entail the differentiation of a single
consciousness (associated with a single brain/body). So why would it
be different if the body were also duplicated?


However computational supervenience does imply the
opposite: differentiated consciousness entails a difference in
inputs.


There is no difficulty in understanding that differentiated
consciousness entails different persons, who may or may not
experience different inputs, but I doubt that differentiation of
consciousness necessarily entails different inputs - two people can
experience the same stimuli.


This directly contradicts computational supervenience. I'm pretty sure
that if you read the fine print, you'll find that computational
supervenience is part of the YD assumption, although that fact is
often glossed over. I vaguely recall challenging Bruno on this a
couple of years ago.



The computationalist assumption is, a priori, physical-and-computational
supervenience; step 8 then shows that the "physical", in this case, is
not usable if not derived from arithmetic. That means that physics is a
branch of universal number theology (itself derivable from Peano
Arithmetic, or meta-derivable from Robinson Arithmetic).
And that theology gets testable through its physical part, which we can
compare with nature.


I think our disagreement was on a subtler question concerning the very
sense of supervenience. But it is premature to discuss this, I think,
with respect to the current thread.


Bruno








In the W/M experiment we are asked to suppose that the
duplicated persons do, in fact, notice that they've been  
teleported to
a different city, and recognise where they've been teleported
to.


There is no difficulty in accepting that there is consciousness of
two cities, but is that one consciousness, or two? You beg the
question by referring to plural 'persons'.



Two, because each consciousness is aware of a different city. They
each answer the question "Which city am I in?" in a different way,
i.e. it is a difference that makes a difference.


--


Dr Russell StandishPhone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellowhpco...@hpcoders.com.au
Economics, Kingston University http://www.hpcoders.com.au




http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-09 Thread Bruno Marchal


On 08 Aug 2016, at 19:52, Brent Meeker wrote:




On 8/8/2016 6:18 AM, Stathis Papaioannou wrote:



On Monday, 8 August 2016, Brent Meeker  wrote:


On 8/7/2016 11:20 AM, Bruno Marchal wrote:
Not necessarily. A digital computer also requires that time be  
digitized so that its registers run synchronously.  Otherwise "the  
state" is ill defined.  The finite speed of light means that  
spatially separated regions cannot be synchronous.  Even if neurons  
were only ON or OFF, which they aren't, they have frequency  
modulation, they are not synchronous.


Synchronous digital machines can emulate asynchronous digital
machines, and that is all that is needed for the reasoning.
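Bruno's point here is the standard one behind discrete-event simulation: a single sequential, synchronous loop can replay arbitrarily timed asynchronous events in time order. A minimal sketch (the neuron labels and timestamps are purely illustrative):

```python
import heapq

def run_async_on_sync_clock(events):
    """One synchronous loop emulating asynchronous components: each
    event carries its own (real-valued) timestamp, and the single
    sequential scheduler simply processes them in time order."""
    queue = list(events)          # (time, component, action) tuples
    heapq.heapify(queue)
    log = []
    while queue:
        t, component, action = heapq.heappop(queue)
        log.append(f"{t:.3f}: {component} {action}")
    return log

# Components firing at arbitrary, unsynchronized times:
events = [(0.103, "neuron-A", "fires"),
          (0.100, "neuron-B", "fires"),
          (0.250, "neuron-A", "fires")]
for line in run_async_on_sync_clock(events):
    print(line)
```

The emulation is exact as long as the timestamps themselves are finitely describable, which connects to Brent's objection about continuous time below.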


If the time variable is continuous, i.e. can't be digitized, I  
don't think you are correct.


If time is continuous, you would need infinite precision to exactly  
define the timing of a neuron's excitation, so you are right, that  
would not be digitisable. Practically, however, brains would have  
to have a non-zero engineering tolerance, or they would be too  
unstable. The gravitational attraction of a passing ant would  
slightly change the timing of neural activity, leading to a change  
in mental state and behaviour.


I agree that brains must be essentially classical computers, but not
necessarily digital.  The question arose as to what was contained in  
an Observer Moment and whether, in an infinite universe there would  
necessarily be infinitely many exact instances of the same OM.  But  
having a continuous variable doesn't imply instability.   First, the  
passing ant is also instantiated infinitely many times.  Second, if  
a small cause has only a proportionately small effect then there is  
no "instability", more likely the dynamics diverge as in  
deterministic chaos.  But in any case it would allow an aleph-1  
order infinity of  OMs which would differ by infinitesimal amounts.


But I also question the coherence of this idea.  As discussed (at  
great length) by Bruno and JKC, two or more identical brains must  
instantiate the same experience, i.e. the same OM.


"OM" is ambiguous. I guess you mean the same 1-OM. I say this, because  
many people can easily confuse a "OM" with a computational state, when  
in the computationalist frame. But the whole goal here is to show that  
a "1-OM" is associated, from the 1-OMs' view, to an infinity of 3p-OM.






So if there are only a finite number of possible brain-states and  
universes are made of OMs, then there can only be a finite number of  
finite universes.


There can only be a finite number of finite universes, if there are
universes. But with the FPI, universes are emerging patterns coming
from a statistics on all the infinite computations of all finite
universal machines/numbers, so physical reality is somehow constrained
to have infinite, continuous, and non-computable components, if only
the random oracle generated by iterated self-duplication, seen from
the first person views.
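Bruno's "random oracle generated by iterated self-duplication" can be made concrete with a toy sketch (my illustration, not his formalism): after n W/M duplications every length-n binary history exists in the third-person view, while a typical single first-person history reads like a sequence of fair coin flips.

```python
from itertools import product

def all_histories(n):
    # Third-person view: after n W/M duplications, every binary
    # history exists (W and M as the two symbols) -- 2**n in all.
    return ["".join(bits) for bits in product("WM", repeat=n)]

histories = all_histories(3)
print(len(histories))   # 8
print(histories[:4])    # ['WWW', 'WWM', 'WMW', 'WMM']
# From the first-person view, which single history "I" experience
# looks like an unpredictable bitstring: a random oracle.
```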


Theoretical computer science makes sense of all this, thanks to the  
incompleteness phenomenon, and the intensional nuances it brings for  
knowledge and observation, as opposed to relative belief or proof.


Bruno






Brent








Re: Holiday Exercise

2016-08-09 Thread Bruno Marchal


On 09 Aug 2016, at 01:37, Bruce Kellett wrote:


On 9/08/2016 9:05 am, Russell Standish wrote:

On Mon, Aug 08, 2016 at 10:01:49AM -0700, Brent Meeker wrote:

I think Russell is just saying we take it as an added
axiom/assumption that the duplicated brain/bodies must have separate
consciousnesses at least as soon as they have different perceptions.
This is exactly what you would predict from supposing that
consciousness is a product of physical processes in the brain -
something that is supported by lots and lots of evidence.

I don't think this does much to invalidate Bruno's argument.  He
just wants to show that the physical is derivative, not that it's
irrelevant.

Brent

Physicality in the thought experiment seems like a red herring to
me. We can just as easily consider running the duplicated
consciousnesses in virtual reality simulators of the two cities.


You would still have to build into your simulation whether or not  
the consciousness differentiates -- begging the question yet again.


Of course, not for the same reason. Once the programs are duplicated
virtually in different virtual environments, very elementary computer
science justifies why their memories and first person records diverge.


No need to invoke consciousness at this stage, which is a subtle
concept in need of explanation itself. But of course, computationalism
is the doctrine that we can *associate* some consciousness, or first
person experience, with the first person accounts made by machines and
humans, duplicated or not.


Maybe you are just arguing against computationalism, but then we have
changed the topic without saying so. I am agnostic; that is indeed
what motivates me to test it, or to test some version of it. The
interest in computationalism stems from its testable character.


You have not answered many questions I asked you, which were supposed
to show that the differentiation of consciousness is unavoidable, once
we assume computationalism and are OK that our children marry people
having a digital brain.


Bruno





Bruce








Re: Holiday Exercise

2016-08-09 Thread Bruno Marchal


On 09 Aug 2016, at 01:15, Bruce Kellett wrote:


On 9/08/2016 12:39 am, Bruno Marchal wrote:

On 08 Aug 2016, at 01:26, Bruce Kellett wrote:
On 8/08/2016 1:30 am, Bruno Marchal wrote:


But in step 3, I am very careful not to use the notion of
"consciousness", and instead use a simple 3p notion of first person.
Usually many relate it to consciousness and assume that when the guy
says "I see Moscow", he is conscious, but that is not needed to get
the reversal.


Maybe that is the basis of the problem. In step 3 you seem to be  
claiming nothing that could not be achieved by a non-conscious  
machine:


Yes.

take a machine that can take photographs and compare the resulting  
images with a data base of images of certain cities. When a match  
is found, the machine outputs the corresponding name of the city  
from the data base. Send one such machine to Washington and an  
identical machine to Moscow. They will fulfill your requirements,  
the W-machine will output W and the M-machine will output M.


This is what you now seem to be describing. But that is not FPI.


How could the machine predict the result of the match? Give me the  
algorithm used by that machine.


The machine program knows the protocol -- it knows that one copy  
will be transported to M and one to W. The machines are already  
physically different (different locations if nothing else), so it is  
a matter of a coin toss as to which goes where. The machines do not,  
however, share a consciousness, so this does not answer what will  
happen with a conscious being.


You forget that we assume comp. So we are machines ourselves, and so,
from the first person point of view, it is indeed like tossing a coin,
and that's the FPI.


Consciousness is treated later. For the reversal, only the notion of  
knowledge and/or first person is enough.





Otherwise your prediction is no different from predicting the  
outcome of a coin toss. Think of one machine, it will be unaware of  
the other, if it knows that it will go to either W or M on the  
result of a coin toss... prediction, 50/50. (But if the machine  
doesn't have the protocol programmed in, it will simply answer:  
"What?")



You make my point. Just apply computationalism.





The "P" in the acronym stands for "person", and if the "person" is  
not conscious, it is a zombie and any output you get has no  
bearing on what will happen to conscious persons.


The problem is a problem of prediction of future first person  
account.


That is a problem only if you have a person -- a conscious being.


Not at all. You forget we *assume* computationalism. Without the
reconstitution in W, we have already agreed that P(M) = 1, and vice
versa, so that the guy's consciousness is linked to his first person
experience in the usual way. So all you need is to assume that when
you are teleported to Mars, seeing Mars is a (personal) confirmation
of your survival. Your consciousness and identity remain invariant by
definition of computationalism. We agreed that both copies are genuine
survivors of the duplication experience, and computationalism does not
make them able to share consciousness "here-and-now". They share only
the (non-transitive) personal identity, that is, the memory of who
they are (here: the guy who was in Helsinki and pushed the button).



Bruno






Bruce

The zombie machines will probably not be aware of each other, but  
from that you cannot conclude that the conscious persons will not  
be aware of each other, or that consciousness necessarily  
differentiates on different inputs.


Well, you need the inputs to be sufficiently different (like seeing W,
resp. M) so that the machine can take notice of the difference and
write distinct outcomes in the diary, of course.


Bruno









Re: Holiday Exercise

2016-08-09 Thread Bruce Kellett

On 9/08/2016 4:08 pm, Brent Meeker wrote:


On 8/8/2016 10:28 PM, Bruce Kellett wrote:

You seem to be agreeing that this is, at bottom, an empirical 
matter. If we do the experiment and duplicate a conscious being, 
then separate the duplicates, we could ask one whether or not it 
was still aware of its duplicate. If the answer is "No", then we 
know that consciousness is localized to a particular physical body. 
If the answer is "Yes", then we know that consciousness is 
non-local, even though it might still supervene on the physical 
bodies. 


I don't think that's logically impossible, but it would imply FTL 
signaling and hence be inconsistent with current physics. It can't 
just be QM entanglement, because to share computation, i.e. to make a 
difference at X due to a perception at Y, requires signal transmission.


Signal transmission or awareness? Non-locality does not entail FTL 
signalling -- that makes it local. 


?? Faster than light, spacelike, signalling is what is conventionally 
called "non-local", as in "non-local hidden variable".


That is one interpretation of non-locality, but that does not apply to 
EPR correlations, for instance; the non-locality there is intrinsic, 
there is no signalling /per se/.


Bruce



Re: Holiday Exercise

2016-08-09 Thread Brent Meeker



On 8/8/2016 10:28 PM, Bruce Kellett wrote:


Yes, that makes sense. But the rovers are not conscious. 


Why not?  Suppose they are.  If you would say "yes to the doctor" 
then you must believe that AI is possible.


I have no reason to suppose that AI is not possible. But the Mars 
rovers are unlikely to be sufficiently complex/self referential to be 
conscious. Do they have an inner narrative?


I wrote "autonomous rover" to indicate it had AI, without committing to 
whether that implied consciousness.  But it's interesting that you ask 
whether it has an inner narrative.  I think that our inner narrative is 
a way of summarizing for memory what we think is significant so that we 
later learn from it by recalling it in similar circumstances.  If I were 
designing a Mar Rover to be truly autonomous over a period of years, I 
would provide it some kind of episodic memory like that as part of it's 
learning algorithms.
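Brent's suggested episodic memory could look something like the following minimal sketch (the class name, the salience threshold, and the tag-based recall rule are my assumptions, not an actual rover design):

```python
import time

class EpisodicMemory:
    """Minimal episodic-memory log for a hypothetical autonomous rover:
    store only events judged significant, recall them by shared context."""

    def __init__(self, salience_threshold=0.5):
        self.threshold = salience_threshold
        self.episodes = []  # list of (timestamp, tags, description)

    def record(self, tags, description, salience):
        # Summarize for memory only what seems significant.
        if salience >= self.threshold:
            self.episodes.append((time.time(), frozenset(tags), description))

    def recall(self, tags):
        # Recall past episodes sharing context with the current situation.
        tags = frozenset(tags)
        return [d for _, t, d in self.episodes if t & tags]

mem = EpisodicMemory()
mem.record({"dust-storm", "slope"}, "wheels slipped on dusty slope", 0.9)
mem.record({"flat-terrain"}, "routine traverse", 0.1)  # below threshold
print(mem.recall({"slope"}))  # -> ['wheels slipped on dusty slope']
```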


Brent



Re: Holiday Exercise

2016-08-09 Thread Brent Meeker



On 8/8/2016 10:28 PM, Bruce Kellett wrote:
You seem to be agreeing that this is, at bottom, an empirical 
matter. If we do the experiment and duplicate a conscious being, 
then separate the duplicates, we could ask one whether or not it was 
still aware of its duplicate. If the answer is "No", then we know 
that consciousness is localized to a particular physical body. If 
the answer is "Yes", then we know that consciousness is non-local, 
even though it might still supervene on the physical bodies. 


I don't think that's logically impossible, but it would imply FTL 
signaling and hence be inconsistent with current physics. It can't 
just be QM entanglement, because to share computation, i.e. to make a 
difference at X due to a perception at Y, requires signal transmission.


Signal transmission or awareness? Non-locality does not entail FTL 
signalling -- that makes it local. 


?? Faster than light, spacelike, signalling is what is conventionally 
called "non-local", as in "non-local hidden variable".


Brent



Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 3:14 pm, Brent Meeker wrote:

On 8/8/2016 7:03 PM, Bruce Kellett wrote:

On 9/08/2016 10:51 am, Brent Meeker wrote:

On 8/8/2016 4:32 PM, Bruce Kellett wrote:

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added 
axiom/assumption that the duplicated brain/bodies must have 
separate consciousnesses at least as soon as they have different 
perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of 
course that is what we might expect -- two brains ==> two 
consciousnesses. But that says nothing about the case of two 
identical brains -- is there one or two consciousnesses? The 
default assumption around here appears to be that the identity of 
indiscernibles will mean that there is only one conscious being. 
The question is then how this consciousness evolves as inputs change?


I think the default assumption is that consciousness supervenes on 
the brain, so two different brains will realize two different 
consciousnesses because they are at different locations and 
perceiving different things.


That is fine if they started off different, and were never identical 
-- identical in all details, not just sharing single "observer 
moments", even if such can be well-defined.


I would speculate that it would be just like having two autonomous 
Mars rovers that "wake up" at different points on the surface.  They 
may have the same computers and sensors and programs, but their data 
and memories will immediately start to diverge.  They won't be 
"completely" different, as identical twins aren't completely 
different.  They may even occasionally think the same thoughts.  But 
relativity tells us there's no sense to saying they think them at 
the same time.


But Mars  rovers are not conscious -- or are they?

I don't think this does much to invalidate Bruno's argument.  He 
just wants to show that the physical is derivative, not that it's 
irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot 
derive the differentiation of consciousness in this duplication 
case from the YD+CT starting point, so where does it come from? 


In his theory, the physics and the consciousness must both derive 
from the infinite threads of computation by the UD. I'm just making 
the point that he does need to derive the physics, specifically the 
finite speed of communication, in order to show that the duplication 
results in two different consciousnesses.


The finite speed of communication is a problem only if consciousness 
is localized to the physical brain -- if it is a non-local 
computation, this might not be an issue.


It seems to me an experimental matter -- until we have duplicated a 
conscious being, we will not know whether the consciousnesses 
differentiate on different inputs or not. 


Suppose there is an RF link between them so they can share 
computation, memory, sensor data,...  Then we'd be inclined to say 
that they could be a single consciousness.  But now suppose they are 
moved light-years apart.  They could still share computation, 
memory, etc.  But intelligent action on the scale of an autonomous 
rover would have to be based on the local resources of a single 
rover.  So they would have to effectively "differentiate".  It 
wouldn't be some kind of axiomatic, mathematically provable 
differentiation - rather a practical, observable one.


Yes, that makes sense. But the rovers are not conscious. 


Why not?  Suppose they are.  If you would say "yes to the doctor" then 
you must believe that AI is possible.


I have no reason to suppose that AI is not possible. But the Mars rovers 
are unlikely to be sufficiently complex/self referential to be 
conscious. Do they have an inner narrative?


And if they were placed at different points on the surface of Mars, 
they would have to start with at least some different data -- viz., 
their location on the surface relative to earth. The general issue I 
am raising is that consciousness could be non-local, in which case 
separated duplicates would not need any form of subluminal physical 
communication in order to remain a single conscious being.


You seem to be agreeing that this is, at bottom, an empirical matter. 
If we do the experiment and duplicate a conscious being, then 
separate the duplicates, we could ask one whether or not it was still 
aware of its duplicate. If the answer is "No", then we know that 
consciousness is localized to a particular physical body. If the 
answer is "Yes", then we know that consciousness is non-local, even 
though it might still supervene on the physical bodies. 


I don't think that's logically impossible, but it would imply FTL 
signaling and hence be inconsistent with current physics.  

Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 7:03 PM, Bruce Kellett wrote:

On 9/08/2016 10:51 am, Brent Meeker wrote:

On 8/8/2016 4:32 PM, Bruce Kellett wrote:

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added 
axiom/assumption that the duplicated brain/bodies must have 
separate consciousnesses at least as soon as they have different 
perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of 
course that is what we might expect -- two brains ==> two 
consciousnesses. But that says nothing about the case of two 
identical brains -- is there one or two consciousnesses? The default 
assumption around here appears to be that the identity of 
indiscernibles will mean that there is only one conscious being. The 
question is then how this consciousness evolves as inputs change?


I think the default assumption is that consciousness supervenes on 
the brain, so two different brains will realize two different 
consciousnesses because they are at different locations and 
perceiving different things.


That is fine if they started off different, and were never identical 
-- identical in all details, not just sharing single "observer 
moments", even if such can be well-defined.


I would speculate that it would be just like having two autonomous 
Mars rovers that "wake up" at different points on the surface.  They 
may have the same computers and sensors and programs, but their data 
and memories will immediately start to diverge.  They won't be 
"completely" different, as identical twins aren't completely 
different.  They may even occasionally think the same thoughts.  But 
relativity tells us there's no sense to saying they think them at the 
same time.


But Mars  rovers are not conscious -- or are they?

I don't think this does much to invalidate Bruno's argument.  He 
just wants to show that the physical is derivative, not that it's 
irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot 
derive the differentiation of consciousness in this duplication case 
from the YD+CT starting point, so where does it come from? 


In his theory, the physics and the consciousness must both derive 
from the infinite threads of computation by the UD.  I'm just making 
the point that he does need to derive the physics, specifically the 
finite speed of communication, in order to show that the duplication 
results in two different consciousnesses.


The finite speed of communication is a problem only if consciousness 
is localized to the physical brain -- if it is a non-local 
computation, this might not be an issue.


It seems to me an experimental matter -- until we have duplicated a 
conscious being, we will not know whether the consciousnesses 
differentiate on different inputs or not. 


Suppose there is an RF link between them so they can share 
computation, memory, sensor data,...  Then we'd be inclined to say 
that they could be a single consciousness.  But now suppose they are 
moved light-years apart.  They could still share computation, memory, 
etc.  But intelligent action on the scale of an autonomous rover 
would have to be based on the local resources of a single rover.  So 
they would have to effectively "differentiate".  It wouldn't be some 
kind of axiomatic, mathematically provable differentiation - rather a 
practical, observable one.


Yes, that makes sense. But the rovers are not conscious. 


Why not?  Suppose they are.  If you would say "yes to the doctor" then 
you must believe that AI is possible.


And if they were placed at different points on the surface of Mars, 
they would have to start with at least some different data -- viz., 
their location on the surface relative to earth. The general issue I 
am raising is that consciousness could be non-local, in which case 
separated duplicates would not need any form of subluminal physical 
communication in order to remain a single conscious being.


You seem to be agreeing that this is, at bottom, an empirical matter. 
If we do the experiment and duplicate a conscious being, then separate 
the duplicates, we could ask one whether or not it was still aware of 
its duplicate. If the answer is "No", then we know that consciousness 
is localized to a particular physical body. If the answer is "Yes", 
then we know that consciousness is non-local, even though it might 
still supervene on the physical bodies. 


I don't think that's logically impossible, but it would imply FTL 
signaling and hence be inconsistent with current physics.  It can't just 
be QM entanglement, because to share computation, i.e. to make a 
difference at X due to a perception at Y, requires signal transmission.


The latter possibility seems the more likely if consciousness is, at 
root, 

Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 10:51 am, Brent Meeker wrote:

On 8/8/2016 4:32 PM, Bruce Kellett wrote:

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added 
axiom/assumption that the duplicated brain/bodies must have separate 
consciousnesses at least as soon as they have different perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of 
course that is what we might expect -- two brains ==> two 
consciousnesses. But that says nothing about the case of two 
identical brains -- is there one or two consciousnesses? The default 
assumption around here appears to be that the identity of 
indiscernibles will mean that there is only one conscious being. The 
question is then how this consciousness evolves as inputs change?


I think the default assumption is that consciousness supervenes on the 
brain, so two different brains will realize two different 
consciousnesses because they are at different locations and perceiving 
different things.


That is fine if they started off different, and were never identical -- 
identical in all details, not just sharing single "observer moments", 
even if such can be well-defined.


I would speculate that it would be just like having two autonomous 
Mars rovers that "wake up" at different points on the surface.  They 
may have the same computers and sensors and programs, but their data 
and memories will immediately start to diverge.  They won't be 
"completely" different, as identical twins aren't completely 
different.  They may even occasionally think the same thoughts.  But 
relativity tells us there's no sense to saying they think them at the 
same time.


But Mars  rovers are not conscious -- or are they?

I don't think this does much to invalidate Bruno's argument.  He 
just wants to show that the physical is derivative, not that it's 
irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot 
derive the differentiation of consciousness in this duplication case 
from the YD+CT starting point, so where does it come from? 


In his theory, the physics and the consciousness must both derive 
from the infinite threads of computation by the UD.  I'm just making 
the point that he does need to derive the physics, specifically the 
finite speed of communication, in order to show that the duplication 
results in two different consciousnesses.


The finite speed of communication is a problem only if consciousness is 
localized to the physical brain -- if it is a non-local computation, 
this might not be an issue.


It seems to me an experimental matter -- until we have duplicated a 
conscious being, we will not know whether the consciousnesses 
differentiate on different inputs or not. 


Suppose there is an RF link between them so they can share 
computation, memory, sensor data,...  Then we'd be inclined to say 
that they could be a single consciousness.  But now suppose they are 
moved light-years apart.  They could still share computation, memory, 
etc.  But intelligent action on the scale of an autonomous rover would 
have to be based on the local resources of a single rover.  So they 
would have to effectively "differentiate".  It wouldn't be some kind 
of axiomatic, mathematically provable differentiation - rather a 
practical, observable one.


Yes, that makes sense. But the rovers are not conscious. And if they 
were placed at different points on the surface of Mars, they would have 
to start with at least some different data -- viz., their location on 
the surface relative to earth. The general issue I am raising is that 
consciousness could be non-local, in which case separated duplicates 
would not need any form of subluminal physical communication in order to 
remain a single conscious being.


You seem to be agreeing that this is, at bottom, an empirical matter. If 
we do the experiment and duplicate a conscious being, then separate the 
duplicates, we could ask one whether or not it was still aware of its 
duplicate. If the answer is "No", then we know that consciousness is 
localized to a particular physical body. If the answer is "Yes", then we 
know that consciousness is non-local, even though it might still 
supervene on the physical bodies. The latter possibility seems the more 
likely if consciousness is, at root, non-physical, so that the physical 
is an epiphenomenon of consciousness.


Bruce

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
Re: Holiday Exercise

2016-08-08 Thread Stathis Papaioannou
On 9 August 2016 at 03:52, Brent Meeker  wrote:

>
>
> On 8/8/2016 6:18 AM, Stathis Papaioannou wrote:
>
>
>
> On Monday, 8 August 2016, Brent Meeker  wrote:
>
>>
>>
>> On 8/7/2016 11:20 AM, Bruno Marchal wrote:
>>
>>> Not necessarily. A digital computer also requires that time be digitized
>>> so that its registers run synchronously.  Otherwise "the state" is ill
>>> defined.  The finite speed of light means that spatially separated regions
>>> cannot be synchronous.  Even if neurons were only ON or OFF, which they
>>> aren't, they have frequency modulation, they are not synchronous.

>>>
>>> Synchronous digital machine can emulate asynchronous digital machine,
>>> and that is all that is needed for the reasoning.
>>>
>>
>> If the time variable is continuous, i.e. can't be digitized, I don't
>> think you are correct.
>>
>
> If time is continuous, you would need infinite precision to exactly define
> the timing of a neuron's excitation, so you are right, that would not be
> digitisable. Practically, however, brains would have to have a non-zero
> engineering tolerance, or they would be too unstable. The gravitational
> attraction of a passing ant would slightly change the timing of neural
> activity, leading to a change in mental state and behaviour.
>
>
> I agree that brains must be essentially classical computers, but not
> necessarily digital.  The question arose as to what was contained in an
> Observer Moment and whether, in an infinite universe there would
> necessarily be infinitely many exact instances of the same OM.
>

Even in a continuum, there would be brain states and mental states that are
effectively identical to an arbitrary level of precision. We maintain a
sense of continuity of identity despite sometimes even gross changes to our
brain. At some threshold there will be a perceptible change, but the
threshold is not infinitesimal.


>   But having a continuous variable doesn't imply instability.   First, the
> passing ant is also instantiated infinitely many times.  Second, if a small
> cause has only a proportionately small effect then there is no
> "instability", more likely the dynamics diverge as in deterministic chaos.
> But in any case it would allow an aleph-1 order infinity of  OMs which
> would differ by infinitesimal amounts.
>
> But I also question the coherence of this idea.  As discussed (at great
> length) by Bruno and JKC, two or more identical brains must instantiate the
> same experience, i.e. the same OM.  So if there are only a finite number of
> possible brain-states and universes are made of OMs, then there can only be
> a finite number of finite universes.
>

A human brain can probably only have a finite number of thoughts, being of
finite size, but a turing machine is not so limited.

-- 
Stathis Papaioannou



Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 4:32 PM, Bruce Kellett wrote:

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added 
axiom/assumption that the duplicated brain/bodies must have separate 
consciousnesses at least as soon as they have different perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of course 
that is what we might expect -- two brains ==> two consciousnesses. 
But that says nothing about the case of two identical brains -- is 
there one or two consciousnesses? The default assumption around here 
appears to be that the identity of indiscernibles will mean that there 
is only one conscious being. The question is then how this 
consciousness evolves as inputs change.


I think the default assumption is that consciousness supervenes on the 
brain, so two different brains will realize two different 
consciousnesses because they are at different locations and perceiving 
different things.  I would speculate that it would be just like having 
two autonomous Mars rovers that "wake up" at different points on the 
surface.  They may have the same computers and sensors and programs, but 
their data and memories will immediately start to diverge.  They won't 
be "completely" different, as identical twins aren't completely 
different.  They may even occasionally think the same thoughts.  But 
relativity tells us there's no sense to saying they think them at the 
same time.




I don't think this does much to invalidate Bruno's argument.  He just 
wants to show that the physical is derivative, not that it's irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot 
derive the differentiation of consciousness in this duplication case 
from the YD+CT starting point, so where does it come from? 


In his theory, the physics and the consciousness must both derive 
from the infinite threads of computation by the UD.  I'm just making the 
point that he does need to derive the physics, specifically the finite 
speed of communication in order to show that the duplication results in 
two different consciousnesses.


It seems to me an experimental matter -- until we have duplicated a 
conscious being, we will not know whether the consciousnesses 
differentiate on different inputs or not. 


Suppose there is an RF link between them so they can share computation, 
memory, sensor data,...  Then we'd be inclined to say that they could be 
a single consciousness.  But now suppose they are moved light-years 
apart.  They could still share computation, memory, etc.  But 
intelligent action on the scale of an autonomous rover would have to be 
based on the local resources of a single rover.  So they would have to 
effectively "differentiate".  It wouldn't be some kind of axiomatic, 
mathematically provable differentiation - rather a practical, observable 
one.


Brent

It seems far from obvious to me, one way or the other. I can think of 
no general principles that would give a definitive answer here. 
Physics alone does not seem to be enough. Any attempted derivation 
from physics seems just to beg the question.


Bruce





Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 9:05 am, Russell Standish wrote:

On Mon, Aug 08, 2016 at 10:01:49AM -0700, Brent Meeker wrote:

I think Russell is just saying we take it as an added
axiom/assumption that the duplicated brain/bodies must have separate
consciousnesses at least as soon as they have different perceptions.
This is exactly what you would predict from supposing that
consciousness is a product of physical processes in the brain -
something that is supported by lots and lots of evidence.

I don't think this does much to invalidate Bruno's argument.  He
just wants to show that the physical is derivative, not that it's
irrelevant.

Brent

Physicality in the thought experiment seems like a red herring to
me. We can just as easily consider running the duplicated
consciousnesses in virtual reality simulators of the two cities.


You would still have to build into your simulation whether or not the 
consciousness differentiates -- begging the question yet again.


Bruce



Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 9:03 am, Russell Standish wrote:

On Mon, Aug 08, 2016 at 09:06:20PM +1000, Bruce Kellett wrote:

On 8/08/2016 8:38 pm, Russell Standish wrote:

On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:

However, still no justification has been given for the assumption
that the duplicated consciousness differentiates on different
inputs. And consciousness is what computationalism is supposed to be
giving an account of.


Obviously different inputs does not entail the differentiation of
consciousness.

In duplication there is still only one consciousness: and as you
say, different inputs do not entail the differentiation of a single
consciousness (associated with a single brain/body). So why would it
be different if the body were also duplicated?


However computational supervenience does imply the
opposite: differentiated consciousness entails a difference in
inputs.

There is no difficulty in understanding that differentiated
consciousness entails different persons, who may or may not
experience different inputs, but I doubt that differentiation of
consciousness necessarily entails different inputs - two people can
experience the same stimuli.

This directly contradicts computational supervenience. I'm pretty sure
that if you read the fine print, you'll find that computational
supervenience is part of the YD assumption, although that fact is
often glossed over. I vaguely recall challenging Bruno on this a
couple of years ago.


In the W/M experiment we are asked to suppose that the
duplicated persons do, in fact, notice that they've been teleported to
a different city, and recognise where they've been teleported to.

There is no difficulty in accepting that there is consciousness of
two cities, but is that one consciousness, or two? You beg the
question by referring to plural 'persons'.

Two, because each consciousness is aware of different cities. They
each answer the question "Which city am I in?" in a different way, i.e.
it is a difference that makes a difference.


That still begs the question.

Bruce



Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added axiom/assumption 
that the duplicated brain/bodies must have separate consciousnesses at 
least as soon as they have different perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of course 
that is what we might expect -- two brains ==> two consciousnesses. But 
that says nothing about the case of two identical brains -- is there one 
or two consciousnesses? The default assumption around here appears to be 
that the identity of indiscernibles will mean that there is only one 
conscious being. The question is then how this consciousness evolves as 
inputs change.


I don't think this does much to invalidate Bruno's argument.  He just 
wants to show that the physical is derivative, not that it's irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot derive 
the differentiation of consciousness in this duplication case from the 
YD+CT starting point, so where does it come from? It seems to me an 
experimental matter -- until we have duplicated a conscious being, we 
will not know whether the consciousnesses differentiate on different 
inputs or not. It seems far from obvious to me, one way or the other. I 
can think of no general principles that would give a definitive answer 
here. Physics alone does not seem to be enough. Any attempted derivation 
from physics seems just to beg the question.


Bruce



Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 4:05 PM, Russell Standish wrote:

On Mon, Aug 08, 2016 at 10:01:49AM -0700, Brent Meeker wrote:

I think Russell is just saying we take it as an added
axiom/assumption that the duplicated brain/bodies must have separate
consciousnesses at least as soon as they have different perceptions.
This is exactly what you would predict from supposing that
consciousness is a product of physical processes in the brain -
something that is supported by lots and lots of evidence.

I don't think this does much to invalidate Bruno's argument.  He
just wants to show that the physical is derivative, not that it's
irrelevant.

Brent

Physicality in the thought experiment seems like a red herring to
me. We can just as easily consider running the duplicated
consciousnesses in virtual reality simulators of the two cities.


What if they are linked as one simulator by RF or by the internet? The 
physicality is being used to assert that there is not one consciousness 
supervening on different brains/computers/simulators. I think that's 
true, but it's because I think you are right that supervenience is 
implicit in YD.  But if consciousness is generated by a kind of statmech 
of UD computations then one can't rely on this implicit supervenience 
before have derived spacetime and the finite speed of communication.


Brent







Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 12:39 am, Bruno Marchal wrote:

On 08 Aug 2016, at 01:26, Bruce Kellett wrote:
On 8/08/2016 1:30 am, Bruno Marchal wrote:


But in step 3, I am very careful not to use the notion of 
"consciousness", and instead use a simple 3p notion of first person. 
Usually many relate it to consciousness and assume that when the 
guy says "I see Moscow", they are conscious, but that is not needed 
to get the reversal.


Maybe that is the basis of the problem. In step 3 you seem to be 
claiming nothing that could not be achieved by a non-conscious machine:


Yes.

take a machine that can take photographs and compare the resulting 
images with a data base of images of certain cities. When a match is 
found, the machine outputs the corresponding name of the city from 
the data base. Send one such machine to Washington and an identical 
machine to Moscow. They will fulfill your requirements, the W-machine 
will output W and the M-machine will output M.


This is what you now seem to be describing. But that is not FPI.
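The non-conscious matching machine described above might look like this (a toy sketch; the "photographs" are stand-in feature tuples and the database entries are hypothetical, since no real vision system is specified in the thread):

```python
# Toy photograph-matching machine: compare an input "image"
# (a stand-in feature tuple) against a database of city images
# and output the name of the closest match.

CITY_DATABASE = {
    "Washington": (0.9, 0.1, 0.3),
    "Moscow":     (0.2, 0.8, 0.7),
}

def identify_city(photo):
    # Nearest match by squared distance over the feature tuple.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CITY_DATABASE, key=lambda city: dist(CITY_DATABASE[city], photo))

print(identify_city((0.85, 0.15, 0.25)))  # Washington-like input
print(identify_city((0.25, 0.75, 0.65)))  # Moscow-like input
```

Note that such a machine only maps an input to a label after the fact; nothing in it predicts which label it will output before being sent.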


How could the machine predict the result of the match? Give me the 
algorithm used by that machine.


The machine program knows the protocol -- it knows that one copy will be 
transported to M and one to W. The machines are already physically 
different (different locations if nothing else), so it is a matter of a 
coin toss as to which goes where. The machines do not, however, share a 
consciousness, so this does not answer what will happen with a conscious 
being. Otherwise your prediction is no different from predicting the 
outcome of a coin toss. Think of one machine: it will be unaware of the 
other; if it knows that it will go to either W or M on the result of a 
coin toss, the prediction is 50/50. (But if the machine doesn't have the 
protocol programmed in, it will simply answer: "What?")
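The coin-toss reading of the prediction problem can be illustrated numerically (a toy simulation under the stated 50/50 protocol, not a formal rendering of FPI):

```python
import random

# Repeat the duplication protocol many times; in each trial a single
# copy's diary records either W or M with equal chance.
random.seed(0)                  # fixed seed for reproducibility
trials = 10_000
w_count = sum(random.choice("WM") == "W" for _ in range(trials))
print(w_count / trials)         # close to 0.5
```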


The "P" in the acronym stands for "person", and if the "person" is 
not conscious, it is a zombie and any output you get has no bearing 
on what will happen to conscious persons.


The problem is a problem of prediction of future first person account.


That is a problem only if you have a person -- a conscious being.

Bruce

The zombie machines will probably not be aware of each other, but 
from that you cannot conclude that the conscious persons will not be 
aware of each other, or that consciousness necessarily differentiates 
on different inputs.


Well, you need the inputs to be sufficiently different (like seeing W, resp. 
M) so that the machine can take notice of the difference, and write 
distinct outcomes in the diary, of course.


Bruno




Re: Holiday Exercise

2016-08-08 Thread Russell Standish
On Mon, Aug 08, 2016 at 10:01:49AM -0700, Brent Meeker wrote:
> I think Russell is just saying we take it as an added
> axiom/assumption that the duplicated brain/bodies must have separate
> consciousnesses at least as soon as they have different perceptions.
> This is exactly what you would predict from supposing that
> consciousness is a product of physical processes in the brain -
> something that is supported by lots and lots of evidence.
> 
> I don't think this does much to invalidate Bruno's argument.  He
> just wants to show that the physical is derivative, not that it's
> irrelevant.
> 
> Brent

Physicality in the thought experiment seems like a red herring to
me. We can just as easily consider running the duplicated
consciousnesses in virtual reality simulators of the two cities.

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: Holiday Exercise

2016-08-08 Thread Russell Standish
On Mon, Aug 08, 2016 at 09:06:20PM +1000, Bruce Kellett wrote:
> On 8/08/2016 8:38 pm, Russell Standish wrote:
> >On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:
> >>However, still no justification has been given for the assumption
> >>that the duplicated consciousness differentiates on different
> >>inputs. And consciousness is what computationalism is supposed to be
> >>giving an account of.
> >>
> >Obviously different inputs does not entail the differentiation of
> >consciousness.
> 
> In duplication there is still only one consciousness: and as you
> say, different inputs do not entail the differentiation of a single
> consciousness (associated with a single brain/body). So why would it
> be different if the body were also duplicated?
> 
> >However computational supervenience does imply the
> >opposite: differentiated consciousness entails a difference in
> >inputs.
> 
> There is no difficulty in understanding that differentiated
> consciousness entails different persons, who may or may not
> experience different inputs, but I doubt that differentiation of
> consciousness necessarily entails different inputs - two people can
> experience the same stimuli.

This directly contradicts computational supervenience. I'm pretty sure
that if you read the fine print, you'll find that computational
supervenience is part of the YD assumption, although that fact is
often glossed over. I vaguely recall challenging Bruno on this a
couple of years ago.

> 
> >In the W/M experiment we are asked to suppose that the
> >duplicated persons do, in fact, notice that they've been teleported to
> >a different city, and recognise where they've been teleported to.
> 
> There is no difficulty in accepting that there is consciousness of
> two cities, but is that one consciousness, or two? You beg the
> question by referring to plural 'persons'.
> 

Two, because each consciousness is aware of different cities. They
each answer the question "Which city am I in?" in a different way, i.e.
it is a difference that makes a difference.


-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 7:27 AM, Bruno Marchal wrote:


On 07 Aug 2016, at 22:32, Brent Meeker wrote:




On 8/7/2016 7:27 AM, Bruno Marchal wrote:


So I suggest that instead of starting with the hypothesis that 
consciousness is a computation,


Please, I insist that consciousness is NOT a computation. 
Consciousness is an 1p notion, and you cannot identify it with *any* 
3p.


But then you must say "No." to the doctor, because what he proposes 
is a 3p equivalent substitute for your brain.


On the contrary, once we say "yes" to the doctor, we can know that we 
are not the brain or the body; we own them. We borrow their relative 
appearances, not to be conscious, but to manifest our first person 
experiences relatively to the (probable) others in the normal histories.


Without the brain and body and physics, what experience would we have?

Brent



Re: Holiday Exercise

2016-08-08 Thread Brent Meeker
I think Russell is just saying we take it as an added axiom/assumption 
that the duplicated brain/bodies must have separate consciousnesses at 
least as soon as they have different perceptions.  This is exactly what 
you would predict from supposing that consciousness is a product of 
physical processes in the brain - something that is supported by lots 
and lots of evidence.


I don't think this does much to invalidate Bruno's argument.  He just 
wants to show that the physical is derivative, not that it's irrelevant.


Brent


On 8/8/2016 4:06 AM, Bruce Kellett wrote:

On 8/08/2016 8:38 pm, Russell Standish wrote:

On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:

However, still no justification has been given for the assumption
that the duplicated consciousness differentiates on different
inputs. And consciousness is what computationalism is supposed to be
giving an account of.


Obviously different inputs does not entail the differentiation of
consciousness.


In duplication there is still only one consciousness: and as you say, 
different inputs do not entail the differentiation of a single 
consciousness (associated with a single brain/body). So why would it 
be different if the body were also duplicated?



However computational supervenience does imply the
opposite: differentiated consciousness entails a difference in
inputs.


There is no difficulty in understanding that differentiated 
consciousness entails different persons, who may or may not experience 
different inputs, but I doubt that differentiation of consciousness 
necessarily entails different inputs - two people can experience the 
same stimuli.



In the W/M experiment we are asked to suppose that the
duplicated persons do, in fact, notice that they've been teleported to
a different city, and recognise where they've been teleported to.


There is no difficulty in accepting that there is consciousness of two 
cities, but is that one consciousness, or two? You beg the question by 
referring to plural 'persons'.


Bruce


I.e., W/M is a difference that makes a difference.

Cheers







Re: Holiday Exercise

2016-08-08 Thread Bruno Marchal


On 08 Aug 2016, at 01:26, Bruce Kellett wrote:


On 8/08/2016 1:30 am, Bruno Marchal wrote:


But in step 3, I am very careful not to use the notion of  
"consciousness", and instead use a simple 3p notion of first person.  
Usually many relate it to consciousness and assume that when the  
guy says "I see Moscow", they are conscious, but that is not needed  
to get the reversal.


Maybe that is the basis of the problem. In step 3 you seem to be  
claiming nothing that could not be achieved by a non-conscious  
machine:


Yes.





take a machine that can take photographs and compare the resulting  
images with a data base of images of certain cities. When a match is  
found, the machine outputs the corresponding name of the city from  
the data base. Send one such machine to Washington and an identical  
machine to Moscow. They will fulfill your requirements, the W- 
machine will output W and the M-machine will output M.


This is what you now seem to be describing. But that is not FPI.


How could the machine predict the result of the match? Give me the  
algorithm used by that machine.





The "P" in the acronym stands for "person", and if the "person" is  
not conscious, it is a zombie and any output you get has no bearing  
on what will happen to conscious persons.


The problem is a problem of prediction of future first person account.






The zombie machines will probably not be aware of each other, but  
from that you cannot conclude that the conscious persons will not be  
aware of each other, or that consciousness necessarily  
differentiates on different inputs.


Well, you need the inputs to be sufficiently different (like seeing W, resp.  
M) so that the machine can take notice of the difference, and write  
distinct outcomes in the diary, of course.


Bruno







Bruce



http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-08 Thread Bruno Marchal


On 07 Aug 2016, at 23:14, Brent Meeker wrote:




On 8/7/2016 11:20 AM, Bruno Marchal wrote:
Not necessarily. A digital computer also requires that time be  
digitized so that its registers run synchronously.  Otherwise "the  
state" is ill defined.  The finite speed of light means that  
spatially separated regions cannot be synchronous.  Even if  
neurons were only ON or OFF, which they aren't, they have  
frequency modulation, they are not synchronous.


Synchronous digital machine can emulate asynchronous digital  
machine, and that is all that is needed for the reasoning.


If the time variable is continuous, i.e. can't be digitized, I don't  
think you are correct.



Nothing in physics needs to be digital for the computationalist  
hypothesis to be true. In fact, the FPI suggests that the physical must  
have continuous parts, although I have some doubt it could be space or  
time.


Then, if the brain exploits a continuum which is not FPI- 
recoverable, we are outside the scope of the computationalist  
theory. Keep in mind that it is my working hypothesis.


Now, as Stathis just said, if the brain exploits the continuum,  
evolution, mind, and many other things get harder to explain. Biology  
illustrates that nature exploits a lot of redundancy, which would be  
impossible if every decimal had to be exact in the continuous relations.


Bruno





Brent



http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-08 Thread Bruno Marchal


On 07 Aug 2016, at 22:32, Brent Meeker wrote:




On 8/7/2016 7:27 AM, Bruno Marchal wrote:


So I suggest that instead of starting with the hypothesis that  
consciousness is a computation,


Please, I insist that consciousness is NOT a computation.  
Consciousness is a 1p notion, and you cannot identify it with  
*any* 3p.


But then you must say "No" to the doctor, because what he proposes  
is a 3p-equivalent substitute for your brain.


On the contrary, once we say "yes" to the doctor, we can know that we  
are not the brain or the body: we own them. We borrow their relative  
appearances, not to be conscious, but to manifest our first person  
experiences relatively to the (probable) others in the normal histories.


Bruno





http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-08 Thread Stathis Papaioannou
On Monday, 8 August 2016, Brent Meeker  wrote:

>
>
> On 8/7/2016 11:20 AM, Bruno Marchal wrote:
>
>> Not necessarily. A digital computer also requires that time be digitized
>>> so that its registers run synchronously.  Otherwise "the state" is ill
>>> defined.  The finite speed of light means that spatially separated regions
>>> cannot be synchronous.  Even if neurons were only ON or OFF, which they
>>> aren't, they have frequency modulation, they are not synchronous.
>>>
>>
>> A synchronous digital machine can emulate an asynchronous digital machine, and
>> that is all that is needed for the reasoning.
>>
>
> If the time variable is continuous, i.e. can't be digitized, I don't think
> you are correct.
>

If time is continuous, you would need infinite precision to exactly define
the timing of a neuron's excitation, so you are right, that would not be
digitisable. Practically, however, brains would have to have a non-zero
engineering tolerance, or they would be too unstable. The gravitational
attraction of a passing ant would slightly change the timing of neural
activity, leading to a change in mental state and behaviour.
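[Editorial aside: Stathis's tolerance point can be sketched numerically. This is my illustration, not his; the tolerance value and function name are arbitrary assumptions. Once timings are only distinguishable up to a non-zero tolerance, sub-tolerance perturbations (the passing ant) leave the digitised state unchanged.]

```python
TOLERANCE = 1e-3  # seconds; an assumed engineering tolerance of the "brain"

def digitise(spike_times, tol=TOLERANCE):
    """Map continuous spike times to discrete tolerance bins: only
    finitely many distinguishable states remain over any finite window."""
    return tuple(round(t / tol) for t in spike_times)

exact = [0.1049, 0.2301, 0.3786]
perturbed = [t + 1e-7 for t in exact]  # perturbation far below tolerance

print(digitise(exact) == digitise(perturbed))  # True: same digital state
```

With infinite precision the two timing lists differ; with any non-zero tolerance they define the same state, which is why engineering tolerance makes the brain digitisable in the relevant sense.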


-- 
Stathis Papaioannou



Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 8/08/2016 8:38 pm, Russell Standish wrote:

On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:

However, still no justification has been given for the assumption
that the duplicated consciousness differentiates on different
inputs. And consciousness is what computationalism is supposed to be
giving an account of.


Obviously different inputs do not entail the differentiation of
consciousness.


In duplication there is still only one consciousness: and as you say, 
different inputs do not entail the differentiation of a single 
consciousness (associated with a single brain/body). So why would it be 
different if the body were also duplicated?



However computational supervenience does imply the
converse: differentiated consciousness entails a difference in
inputs.


There is no difficulty in understanding that differentiated 
consciousness entails different persons, who may or may not experience 
different inputs, but I doubt that differentiation of consciousness 
necessarily entails different inputs - two people can experience the 
same stimuli.



In the W/M experiment we are asked to suppose that the
duplicated persons do, in fact, notice that they've been teleported to
a different city, and recognise where they've been teleported to.


There is no difficulty in accepting that there is consciousness of two 
cities, but is that one consciousness, or two? You beg the question by 
referring to plural 'persons'.


Bruce


I.e., W/M is a difference that makes a difference.

Cheers





Re: Holiday Exercise

2016-08-08 Thread Russell Standish
On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:
> 
> However, still no justification has been given for the assumption
> that the duplicated consciousness differentiates on different
> inputs. And consciousness is what computationalism is supposed to be
> giving an account of.
> 

Obviously different inputs do not entail the differentiation of
consciousness. However, computational supervenience does imply the
converse: differentiated consciousness entails a difference in
inputs. In the W/M experiment we are asked to suppose that the
duplicated persons do, in fact, notice that they've been teleported to
a different city, and recognise where they've been teleported to.

I.e., W/M is a difference that makes a difference.

Cheers

-- 


Dr Russell Standish                Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow    hpco...@hpcoders.com.au
Economics, Kingston University     http://www.hpcoders.com.au




Re: Holiday Exercise

2016-08-07 Thread Bruce Kellett

On 8/08/2016 1:30 am, Bruno Marchal wrote:


But in step 3, I am very careful not to use the notion of 
"consciousness", and instead use a simple 3p notion of first person. 
Usually many relate it to consciousness and assume that when the 
guy says "I see Moscow", he is conscious, but that is not needed to 
get the reversal.


Maybe that is the basis of the problem. In step 3 you seem to be 
claiming nothing that could not be achieved by a non-conscious machine: 
take a machine that can take photographs and compare the resulting 
images with a data base of images of certain cities. When a match is 
found, the machine outputs the corresponding name of the city from the 
data base. Send one such machine to Washington and an identical machine 
to Moscow. They will fulfill your requirements, the W-machine will 
output W and the M-machine will output M.
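[Editorial aside: Bruce's non-conscious recogniser can be sketched in a few lines. This is my illustration; the database contents and the `recognise` function are invented placeholders for a real image-matching pipeline.]

```python
# Toy stand-in for image matching: each city's "image" in the database is
# reduced to a fingerprint string, and recognition is mere substring lookup.
CITY_DB = {
    "Washington": "obelisk",   # hypothetical fingerprint of the W images
    "Moscow": "kremlin",       # hypothetical fingerprint of the M images
}

def recognise(image):
    """Output the name of the first city whose fingerprint matches."""
    for city, fingerprint in CITY_DB.items():
        if fingerprint in image:
            return city
    return "unknown"

print(recognise("photo with kremlin skyline"))  # Moscow
```

The point of the sketch is exactly Bruce's: nothing in this lookup requires consciousness, so its W/M outputs by themselves settle nothing about first-person experience.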


This is what you are now seeming to describe. But that is not FPI. The 
"P" in the acronym stands for "person", and if the "person" is not 
conscious, it is a zombie and any output you get has no bearing on what 
will happen to conscious persons.


The zombie machines will probably not be aware of each other, but from 
that you cannot conclude that the conscious persons will not be aware of 
each other, or that consciousness necessarily differentiates on 
different inputs.


Bruce



Re: Holiday Exercise

2016-08-07 Thread Brent Meeker



On 8/7/2016 11:20 AM, Bruno Marchal wrote:
Not necessarily. A digital computer also requires that time be 
digitized so that its registers run synchronously.  Otherwise "the 
state" is ill defined.  The finite speed of light means that 
spatially separated regions cannot be synchronous.  Even if neurons 
were only ON or OFF, which they aren't, they have frequency 
modulation, they are not synchronous.


A synchronous digital machine can emulate an asynchronous digital machine, 
and that is all that is needed for the reasoning.


If the time variable is continuous, i.e. can't be digitized, I don't 
think you are correct.


Brent



Re: Holiday Exercise

2016-08-07 Thread Brent Meeker



On 8/7/2016 7:27 AM, Bruno Marchal wrote:


So I suggest that instead of starting with the hypothesis that 
consciousness is a computation,


Please, I insist that consciousness is NOT a computation. 
Consciousness is a 1p notion, and you cannot identify it with *any* 3p.


But then you must say "No" to the doctor, because what he proposes is 
a 3p-equivalent substitute for your brain.


Brent



Re: Holiday Exercise

2016-08-07 Thread Bruno Marchal


On 07 Aug 2016, at 15:06, Stathis Papaioannou wrote:




On Friday, 5 August 2016, Bruno Marchal  wrote:

On 05 Aug 2016, at 06:27, Brent Meeker wrote:




On 8/4/2016 7:40 PM, Stathis Papaioannou wrote:



On 5 August 2016 at 04:01, Brent Meeker   
wrote:



On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:
The problem with (3) is a general problem with multiverses.  A  
single, infinite universe is an example of a multiverse theory,  
since there will be infinite copies of everything and every  
possible variation of everything, including your brain and your  
mind.


That implicitly assumes a digital universe, yet the theory that  
suggests it, quantum mechanics, is based on continua; which is why  
I don't take "the multiverse" too seriously.


It appears that our brains are finite state machines. Each neuron  
can either be "on" or "off", there are a finite number of neurons,  
so a finite number of possible brain states, and a finite number  
of possible mental states. This is analogous to a digital computer:


Not necessarily. A digital computer also requires that time be  
digitized so that its registers run synchronously.  Otherwise "the  
state" is ill defined.  The finite speed of light means that  
spatially separated regions cannot be synchronous.  Even if neurons  
were only ON or OFF, which they aren't, they have frequency  
modulation, they are not synchronous.


A synchronous digital machine can emulate an asynchronous digital  
machine, and that is all that is needed for the reasoning.


Bruno





even if you postulate that electric circuit variables are  
continuous, transistors can only be on or off. If the number of  
possible mental states is finite, then in an infinite universe,  
whether continuous or discrete, mental states will repeat.
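[Editorial aside: the repetition claim is just the pigeonhole principle. A toy sketch, mine rather than Stathis's; `N_STATES` and the generator are arbitrary stand-ins: any sequence drawn from finitely many states must revisit one within N_STATES + 1 steps.]

```python
import itertools

N_STATES = 8  # stand-in for the (vastly larger) finite number of brain states

def first_repeat(states):
    """Return (first_index, later_index) of the first state that recurs."""
    seen = {}
    for step, s in enumerate(states):
        if s in seen:
            return seen[s], step
        seen[s] = step
    return None  # only possible for a finite, repetition-free sequence

# a deterministic "universe" emitting one state per step, forever
universe = (pow(3, n, N_STATES) for n in itertools.count())
print(first_repeat(universe))  # (0, 2): the state at step 0 recurs at step 2
```

By the pigeonhole principle `first_repeat` is guaranteed to return within `N_STATES + 1` steps of any infinite sequence, which is the sense in which finitely many mental states must repeat in an infinite universe.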
We live in an orderly world with consistent physical laws. It  
seems to me that you are suggesting that if everything possible  
existed then we would not live in such an orderly world,


Unless the worlds were separated in some way, which current  
physical theories provide - but which is not explicable if you  
divorce conscious thoughts from physics.


The worlds are physically separated - there can be no  
communication between separate worlds in the multiverse and none  
between sufficiently widely separated copies of subsets of the  
world in an infinite single universe. But the separate copies are  
connected insofar as they share memories and sense of identity,  
even if there is no causal connection between them.


Of course "copy" implies a shared past in which there was an  
"original", they have a cause in common.


Brent


A copy can be prepared using the original as template but it can  
also be prepared by exhaustively enumerating every possible variant  
of an entity,


Like in the sigma_1 arithmetic, or the UD. OK.




in which case there is no causal link.



Absolutely.

That shows that machines can share a past, or better: a memory of the  
past, without any causal link. That happens an infinity of times in the  
(sigma_1) arithmetic. Of course, at step 3 we have reason to relate  
the memories to the physical history, but that leads to the difficulty  
in step seven.


That is part of the measure problem, and eventually physical causality  
has to be an emergent pattern. It works at the limit of the FPI. This  
is not directly 3p-describable, as the FPI abstracts from all number-of- 
steps delays on all computations, and technically we get only first  
person singular and first person plural notions.


We cannot use physical causality as the selector of computations,  
for precisely the reason you give: we can share memories of the past, and goals  
for the future, without any causal link. That is the case for all  
"Maury-effect" programs, which are large programs starting from a big  
input describing your current computational state, and leading to  
white-rabbit dreams or white noise. We have to show how whatever has  
emerged below our substitution level manages to keep the white rabbits  
away.
Self-reference indicates a quantization which promises the needed  
"anti-white-rabbits", and the minimization of aberrance by phase  
randomization.


Bruno




--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/




Re: Holiday Exercise

2016-08-07 Thread PGC


On Sunday, August 7, 2016 at 4:27:56 PM UTC+2, Bruno Marchal wrote:
>
>
> On 06 Aug 2016, at 20:00, Brent Meeker wrote:
>
>
> C. An UD will realize all possible computation, and hence the totality of 
> reality.
>
>
>
> Brent, please reread the UDA. 
>

Perhaps but perhaps you should reread it. 

Or rewrite it to be more communicable or... dare I say the word "fun". As 
in how Smullyan presented Gödel to a wider audience. There's a voice in 
your head right now that says "Smullyan has no real contribution"; but 
understand that Smullyan was able to relate to people without pushing his 
buddies into "doing their homework" on a public list. He was able to relate 
to children and make his classrooms laugh. 

Ok, the fact that you have to refer people to "go do their homework" so 
often (with Telmo a few weeks ago, with Russell a few weeks ago, with John 
every day, with Brent today) can reflect that the teacher is so far beyond 
his students that they should be less lazy and catch up... but 
pedagogically this is medieval stuff with catholic overtones. We are 
further than that in pedagogical terms today: this state of affairs could 
just as well reflect that the homework problem, or the resources it 
presupposes, are not clear or accessible to anybody interested, and that the 
teacher may be doing a bad job sorting his teaching material or organizing 
his presentation, perhaps because he uses every free second to convert John 
Clark and Bruce.

Brent shows good faith in exposing his particular summary understanding and 
your medieval pedagogical approach tells him to "do the work" and possibly 
soon to confess why he isn't doing said work. We may agree on possibility 
of comp but as teacher I prefer softer approaches, which is why you and I 
have our differences regarding communicability problem and its approaches 
and why I stay mute on most matters here: my school practices its own 
interpretation and doesn't need public advertising. Plato is not only alive 
in academic sense: he allows me to make comfortable living, as people are 
not used to that distance, perspective, respect, and decency. Also, it 
mixes well with musical pedagogy.

Are we so addicted to the format of posting fast informal messages, that we 
lose sight of the major communicability problem that education faces, 
particularly in insecure political and anxiety ridden times where every 
message gets oversimplified in the insecure fetish to render everything 
secure and clear? I thought you and this list are immune to this media 
trend, but instead you post more, expecting it to clarify more. This is 
naive in a way I know you to be familiar with, so I remain astonished with 
your answers and pedagogical moves of late. If we don't have faith in 
ourselves we will always project negativity onto others instead of 
focussing on and enlarging the potential of new beginning. PGC



Re: Holiday Exercise

2016-08-07 Thread Bruno Marchal


On 07 Aug 2016, at 01:24, Bruce Kellett wrote:


On 7/08/2016 9:00 am, Russell Standish wrote:

On Thu, Aug 04, 2016 at 07:51:22PM +1000, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

Methinks you are unnecessarily assuming transitivity again.

No, I was just referring to the continuation of a single
consciousness through time. We get different input data all the
time but we do not differentiate according to that data.

I could perhaps expand on that response. On duplication, two
identical consciousnesses are created, and by the identity of
indiscernibles, they form just a single consciousness. Then data is
input. It seems to me that there is no reason why this should lead
the initial consciousness to differentiate, or split into two. In
normal life we get inputs from many sources simultaneously -- we see
complex scenes, smell the air, feel impacts on our body, and hear
many sounds from the environment. None of this leads our
consciousness to disintegrate. Indeed, our evolutionary experience
has made us adept at coping with these multifarious inputs and
sorting through them very efficiently to concentrate on what is most
important, while keeping other inputs at an appropriate level in our
minds.


You appear to be conflating differentiation with disintegration. Yes
my consciousness now differs from what it was one second ago, but by
the YD I'm supposed to have survived from one second to the next.


Poetic use of language.


Hmm, not sure about that.






We're not supposed to ask what does "survive" mean. Actually it is a
very interesting question, but nobody, least of all Bruno, has
anything remotely resembling an answer.


With Parfit, it seems plausible that "survival" is what matters:  
"persons" are secondary. This certainly needs more analysis.


But in the W/M experiment, both W and M can say they survived from  
the

ancestor H, but W did not survive from M, nor vice-versa. All three
are different from each other - they have differentiated.


'Survivor of' is not a transitive relationship, identity is.


Same conflation again. Personal identity is NOT the same as identity.




But survival is akin to continuation as in Nozick's theory. If the  
end result of this is that two new "persons" are created as  
"survivors/continuers" of the original, then maybe we can resolve  
some of the difficulties.


However, still no justification has been given for the assumption  
that the duplicated consciousness differentiates on different  
inputs. And consciousness is what computationalism is supposed to be  
giving an account of.



Yes, but consciousness will be the easy part. The point is that we have also  
to give an account of matter, without assuming matter. I show that the  
mind-body problem is a priori twice as difficult for a  
computationalist, because he must explain not just consciousness and  
mind, but also matter and its apparent persistence.


But in step 3, I am very careful not to use the notion of  
"consciousness", and instead use a simple 3p notion of first person.  
Usually many relate it to consciousness and assume that when the  
guy says "I see Moscow", he is conscious, but that is not needed to  
get the reversal.


Bruno






Bruce



http://iridia.ulb.ac.be/~marchal/





Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-07 Thread Bruno Marchal


On 06 Aug 2016, at 20:35, John Clark wrote:

On Sat, Aug 6, 2016 at 9:17 AM, Bruno Marchal   
wrote:


​> ​It was the question 2 which does not involve duplication.
Question 2:  if I am sure at time t that at time q, q > t, I will be  
uncertain of the outcome of some experience x, then I am uncertain  
about the outcome of that experience at time t.


​As I explained in my previous post that is not universally true,  
it depends on if you have forgotten something or not. ​When i was  
in the fourth grade I had to learn the state capitals and knew with  
certainty what the capital of Wyoming was, and I was not only  
certain I was undoubtedly correct too. Today I could look it up but  
right now, although much time has passed, I am uncertain what the  
capital of Wyoming is.


​> ​You have answered both questions positively in your posts of  
the 02 August and 03 August respectively.


​Yes that's true I did, and both questions involved either no  
duplication or identical environments after duplication so, as I  
also explained in my previous post, personal pronouns and the  
identity of the mysterious Mr. You is not important. And I might add  
if the environments are identical then although there are 2 brains  
there is only one individual because thinking is what brains do and  
the structure of the 2 brains are identical and the 2 environmental  
inputs to the 2 brains are identical so what the 2 brains are doing  
is also identical. ​



​> ​Then I have shown that the step 3 FPI is a direct consequence  
of answering "yes" to the questions 1 and 2.


​Not if YOU walk into a YOU duplicating machine and one YOU goes to  
Moscow and the other YOU goes to Washington! Then talking about THE  
FPI and the probabilities of what YOU will see after duplication is  
just ridiculous.




On the contrary. Once you have a bit of empathy with yourself, you  
listen to whatever the copies can say, and a nine-year-old child gets  
the point when doing that.


The rest is playing with words and ad hominem boring distractions.


Bruno







And don't start the bit about the duplicating machine being  
equivalent to one of Everett's branching worlds because it's not.  
With Everett the identity of the personal pronoun "YOU" is crystal  
clear and can always be uniquely and unambiguously defined:


YOU is the one and only chunk of matter in the observable universe  
that behaves in a Brunomarchalian way.


But if duplicating machines are around then "YOU" has no definition  
and talking about THE FPI as if there were only one is just silly.


​> ​do you see why it entails the FPI?

​What I don't see is how THE FPI can exist at all in a world with  
person duplicating machines because the "P" in FPI stands for  
"person" and the person has been duplicated, YOU have been  
duplicated, all of YOU has been duplicated. All. I think your  
confusion stems entirely from something you said a few posts ago,  
something you've used as an unnamed hidden axiom from day one at the  
very start of your "proof":


 "Nothing can duplicate a first person view from its first person  
point of view​"


If that's true then computationalism​ is false, but you can't use  
an assumption that ​computationalism​ is false to prove that  
computationalism​ is false.











 John K Clark




http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-07 Thread Bruno Marchal


On 06 Aug 2016, at 20:00, Brent Meeker wrote:




On 8/6/2016 4:58 AM, Bruno Marchal wrote:
 If we think about engineering an autonomous being it becomes  
obvious that this is a good architecture.   Decision making should  
be hierarchical with only a few requiring system-wide  
consideration.  With RF communication this autonomous being could  
easily "be"in both Moscow and Washington.


I agree. Of course this does not change the step 3 conclusion, if  
that needs saying. There is just no RF communication available,


But that's where you're taking for granted physics which, later,  
you're going to conclude is to be inferred from statistics on  
computations and is otiose.


But in this thread, we are not "later". I guess you allude to step  
seven. You cannot use step seven to confuse people on step 3.








It's like saying A, B, C, D, ...entail Z,  but Z  shows there's no  
reason believe A.



This we will discuss when we arrive at step 7.

(And frankly, where is the problem? It happens that a conclusion  
leads to discharging some hypotheses. But here you seem also to confuse  
the assumption that there is a physical reality at the metalevel with  
the assumption that there is a primary physical reality, at the actual level  
of a metaphysical theory.)


Again, we are at step 3 only, which is just the first person  
indeterminacy (imagined in a physical implementation of the protocols).


To solve the mind-body problem, we must suppose there is a mind, and  
there is a body, before reducing one to the other, or vice versa, or  
deriving both from something else.


I understand that the conclusion can seem startling, so we can  
come back often to older steps in the reasoning; but still, if you do not  
find something invalid in the step_0 to step_3 reasoning itself, you cannot  
invoke step seven to claim that something is invalid.


I guess you do accept step 3, and are just worried that it will be misused  
later. But that must be discussed later.







I think you've agreed that physics is necessary to our world,  
whether primary or not.



I would say it is the main result: physics is necessary for the  
universal machine, from its 1p view, because physics for it is a  
consequence of being a machine.


But if physics is shown necessary in arithmetic, physics is no longer  
primary. It is explained by what the numbers observe, and what is  
observable.





So I suggest that instead of starting with the hypothesis that  
consciousness is a computation,


Please, I insist that consciousness is NOT a computation.  
Consciousness is a 1p notion, and you cannot identify it with *any*  
3p. I prefer to use knowledge, for which incompleteness makes the  
classical definition work (saving the 1p from *any* 3p- 
reductionism).







let's leave questions of consciousness to the end and start with  
Tegmark 2.0, "Physics is computation".  Then I take Bruno's version  
to be:


A. The totality of reality consists of all possible computable  
universes and histories



It is just ultra-elementary arithmetic (Robinson Arithmetic).


Nothing else is assumed, except, here at the meta-level,  
computationalism (the belief that I can survive with a physical digital  
transplant, plus the Church thesis).


The existence of all computations is a metatheorem of RA, and a theorem  
of PA.


The TOE does not assume anything more.




B. Mathematics is real.



Nope. I just assume that 0 + x = x, ...

I do not do philosophy of that type.
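For reference, one standard axiomatization of Robinson Arithmetic (Q) is the following (presentations vary, and the "0 + x = x" quoted above belongs to one such variant):

```latex
\begin{align*}
& S(x) \neq 0 \\
& S(x) = S(y) \rightarrow x = y \\
& x \neq 0 \rightarrow \exists y\, (x = S(y)) \\
& x + 0 = x \\
& x + S(y) = S(x + y) \\
& x \cdot 0 = 0 \\
& x \cdot S(y) = (x \cdot y) + x
\end{align*}
```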




C. An UD will realize all possible computation, and hence the  
totality of reality.



Brent, please reread the UDA. The UD, and thus elementary (sigma_1)  
arithmetical truth, realizes all possible computations, but the  
realities must be recovered from the self-referential measure problem.  
We get an intuitionist logic for the first person, and a quantum logic  
for the 1p-plural, as needed. And incompleteness provides the  
separation between what the machine can justify and what is true, and  
this for each different point-of-view notion.


I translate the mind-body problem into an arithmetical body problem  
for the universal machine, and let you know what the universal machine  
has already told us.


The UD does not realize the totality of reality; it realizes only the  
base 3p domain of the 1p indeterminacy, which gives consciousness and  
physics, but refers to a highly non-computable reality.


It is like the Skolem paradox: elementary arithmetic, or elementary  
combinator algebra, seen from inside, is big, bigger than arithmetic,  
even bigger than analysis, and, with comp, plausibly bigger than  
mathematics.






D. The world of our experience is a thread, or threads, of the UD  
computation that, according to some measure, have statistical  
coherence and hence realize a world with the regularity that we  
interpret as "the laws of physics".



On the contrary, the physical reality is defined by what is observable  
(from the sigma_1 true sentences (the leaves of the UD) in all the

Re: Holiday Exercise

2016-08-07 Thread Stathis Papaioannou
On Friday, 5 August 2016, Bruno Marchal  wrote:

>
> On 05 Aug 2016, at 06:27, Brent Meeker wrote:
>
>
>
> On 8/4/2016 7:40 PM, Stathis Papaioannou wrote:
>
>
>
> On 5 August 2016 at 04:01, Brent Meeker  > wrote:
>
>>
>>
>> On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:
>>
>> The problem with (3) is a general problem with multiverses.  A single,
>> infinite universe is an example of a multiverse theory, since there will be
>> infinite copies of everything and every possible variation of everything,
>> including your brain and your mind.
>>
>>
>> That implicitly assumes a digital universe, yet the theory that suggests
>> it, quantum mechanics, is based on continua; which is why I don't take "the
>> multiverse" too seriously.
>>
>
> It appears that our brains are finite state machines. Each neuron can
> either be "on" or "off", there are a finite number of neurons, so a finite
> number of possible brain states, and a finite number of possible mental
> states. This is analogous to a digital computer:
>
>
> Not necessarily. A digital computer also requires that time be digitized
> so that its registers run synchronously.  Otherwise "the state" is ill
> defined.  The finite speed of light means that spatially separated regions
> cannot be synchronous.  Even if neurons were only ON or OFF, which they
> aren't, they have frequency modulation, they are not synchronous.
>
>
> A synchronous digital machine can emulate an asynchronous digital machine, and
> that is all that is needed for the reasoning.
>
> Bruno
>
>
>
>
> even if you postulate that electric circuit variables are continuous,
> transistors can only be on or off. If the number of possible mental states
> is finite, then in an infinite universe, whether continuous or discrete,
> mental states will repeat.
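The pigeonhole point being made here can be checked in a few lines. This is an illustrative sketch only, assuming a "brain" is modeled as a deterministic map on a finite state set (the map below is a made-up toy dynamics, not a neural model):

```python
# Pigeonhole sketch: any trajectory of a deterministic map on a finite
# state set must revisit a state once it has run longer than the number
# of states, after which the states repeat forever.
def trajectory(step, state, n_steps):
    seen = {}
    for t in range(n_steps):
        if state in seen:
            return t, seen[state]   # first repeat: (time, earlier time)
        seen[state] = t
        state = step(state)
    return None

# 16 possible states, so repetition is forced within 17 steps.
step = lambda s: (3 * s + 1) % 16
print(trajectory(step, 0, 17))
# → (8, 0)
```

With 2**N states for N binary "neurons" the bound is astronomically larger, but the same argument forces repetition in any infinite run.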
>
>> We live in an orderly world with consistent physical laws. It seems to me
>> that you are suggesting that if everything possible existed then we would
>> not live in such an orderly world,
>>
>>
>> Unless the worlds were separated in some way, which current physical
>> theories provide - but which is not explicable if you divorce conscious
>> thoughts from physics.
>>
>
> The worlds are physically separated - there can be no communication
> between separate worlds in the multiverse and none between sufficiently
> widely separated copies of subsets of the world in an infinite single
> universe. But the separate copies are connected insofar as they share
> memories and sense of identity, even if there is no causal connection
> between them.
>
>
> Of course "copy" implies a shared past in which there was an "original",
> they have a cause in common.
>
> Brent
>
> A copy can be prepared using the original as template but it can also be
prepared by exhaustively enumerating every possible variant of an entity,
in which case there is no causal link.


-- 
Stathis Papaioannou

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Holiday Exercise

2016-08-06 Thread Bruce Kellett

On 7/08/2016 9:00 am, Russell Standish wrote:

On Thu, Aug 04, 2016 at 07:51:22PM +1000, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

Methinks you are unnecessarily assuming transitivity again.

No, I was just referring to the continuation of a single
consciousness through time. We get different input data all the
time but we do not differentiate according to that data.

I could perhaps expand on that response. On duplication, two
identical consciousnesses are created, and by the identity of
indiscernibles, they form just a single consciousness. Then data is
input. It seems to me that there is no reason why this should lead
the initial consciousness to differentiate, or split into two. In
normal life we get inputs from many sources simultaneously -- we see
complex scenes, smell the air, feel impacts on our body, and hear
many sounds from the environment. None of this leads our
consciousness to disintegrate. Indeed, our evolutionary experience
has made us adept at coping with these multifarious inputs and
sorting through them very efficiently to concentrate on what is most
important, while keeping other inputs at an appropriate level in our
minds.


You appear to be conflating differentiation with disintegration. Yes
my consciousness now differs from what it was one second ago, but by
the YD I'm supposed to have survived from one second to the next.


Poetic use of language.


We're not supposed to ask what does "survive" mean. Actually it is a
very interesting question, but nobody, least of all Bruno, has
anything remotely resembling an answer.


With Parfit, it seems plausible that "survival" is what matters: 
"persons" are secondary. This certainly needs more analysis.



But in the W/M experiment, both W and M can say they survived from the
ancestor H, but W did not survive from M, nor vice versa. All three
are different from each other - they have differentiated.


'Survivor of' is not a transitive relationship, identity is. But 
survival is akin to continuation as in Nozick's theory. If the end 
result of this is that two new "persons" are created as 
"survivors/continuers" of the original, then maybe we can resolve some 
of the difficulties.


However, still no justification has been given for the assumption that 
the duplicated consciousness differentiates on different inputs. And 
consciousness is what computationalism is supposed to be giving an 
account of.


Bruce



Re: Holiday Exercise

2016-08-06 Thread Russell Standish
On Thu, Aug 04, 2016 at 07:51:22PM +1000, Bruce Kellett wrote:
> On 4/08/2016 6:00 pm, Bruce Kellett wrote:
> >On 4/08/2016 5:52 pm, Russell Standish wrote:
> >>Methinks you are unnecessarily assuming transitivity again.
> >
> >No, I was just referring to the continuation of a single
> >consciousness through time. We get different input data all the
> >time but we do not differentiate according to that data.
> 
> I could perhaps expand on that response. On duplication, two
> identical consciousnesses are created, and by the identity of
> indiscernibles, they form just a single consciousness. Then data is
> input. It seems to me that there is no reason why this should lead
> the initial consciousness to differentiate, or split into two. In
> normal life we get inputs from many sources simultaneously -- we see
> complex scenes, smell the air, feel impacts on our body, and hear
> many sounds from the environment. None of this leads our
> consciousness to disintegrate. Indeed, our evolutionary experience
> has made us adept at coping with these multifarious inputs and
> sorting through them very efficiently to concentrate on what is most
> important, while keeping other inputs at an appropriate level in our
> minds.
> 

You appear to be conflating differentiation with disintegration. Yes
my consciousness now differs from what it was one second ago, but by
the YD I'm supposed to have survived from one second to the next.

We're not supposed to ask what does "survive" mean. Actually it is a
very interesting question, but nobody, least of all Bruno, has
anything remotely resembling an answer.

But in the W/M experiment, both W and M can say they survived from the
ancestor H, but W did not survive from M, nor vice versa. All three
are different from each other - they have differentiated.


-- 


Dr Russell StandishPhone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellowhpco...@hpcoders.com.au
Economics, Kingston University http://www.hpcoders.com.au




Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-06 Thread John Clark
On Sat, Aug 6, 2016 at 9:17 AM, Bruno Marchal  wrote:

​> ​
> It was the question 2 which does not involve duplication.
> *Question 2*:  if I am sure at time t that at time q, q > t, I will be
> uncertain of the outcome of some experience x, then I am uncertain about
> the outcome of that experience at time t.
>

​As I explained in my previous post that is not universally true, it
depends on whether you have forgotten something or not. When I was in the
fourth grade I had to learn the state capitals and knew with certainty what
the capital of Wyoming was, and I was not only certain I was undoubtedly
correct too. Today I could look it up but right now, although much time has
passed, I am uncertain what the capital of Wyoming is.

​> ​
> You have answered both questions positively in your posts of the 02 August
> and 03 August respectively.
>

​Yes that's true I did, and both questions involved either no duplication
or identical environments after duplication so, as I also explained in my
previous post, personal pronouns and the identity of the mysterious Mr. You
are not important. And I might add, if the environments are identical then
although there are 2 brains there is only one individual, because thinking
is what brains do, and the structure of the 2 brains is identical and the 2
environmental inputs to the 2 brains are identical, so what the 2 brains are
doing is also identical.
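Clark's claim here is just determinism. A minimal sketch, assuming a brain can be modeled as a pure function of its current state and input (the `brain` function below is a made-up toy dynamics, not a neural model):

```python
# Two identical deterministic "brains" fed identical input streams
# compute identical successor states at every step.
def brain(state, percept):
    # toy deterministic update rule standing in for brain dynamics
    return hash((state, percept)) % 10**6

s1 = s2 = 0                                  # two identical copies
for percept in ["coffee", "city", "sky"]:    # identical environments
    s1 = brain(s1, percept)
    s2 = brain(s2, percept)
print(s1 == s2)  # → True: same structure + same input = same computation
```

The two states can only diverge once the input streams differ, which is the point at issue in the duplication protocol.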



​> ​
> Then I have shown that the step 3 FPI is a direct consequence of answering
> "yes" to the questions 1 and 2.
>

​Not if YOU walk into a YOU duplicating machine and one YOU goes to Moscow
and the other YOU goes to Washington! Then talking about *THE* FPI and the
probabilities of what YOU will see after duplication is just ridiculous.
And don't start the bit about the duplicating machine being equivalent to
one of Everett's branching worlds because it's not. With Everett the
identity of the personal pronoun "YOU" is crystal clear and can always
be uniquely and unambiguously defined:

YOU is the one and only chunk of matter in the observable universe that
behaves in a Brunomarchalian way.

But if duplicating machines are around then "YOU" has no definition and
talking about *THE *FPI as if there were only one is just silly.


> ​> ​
> do you see why it entails the FPI?
>

​What I don't see is how *THE* FPI can exist at all in a world with person
duplicating machines because the "P" in FPI stands for "person" and the
person has been duplicated, *YOU* have been duplicated, all of YOU has
been duplicated. All. I think your confusion stems entirely from something
you said a few posts ago, something you've used as a unnamed hidden axiom
from day one at the very start of your "proof":

"Nothing can duplicate a first person view from its first person point of
view"

If that's true then computationalism is false, but you can't use an
assumption that computationalism is false to prove that computationalism
is false.

 John K Clark



Re: Holiday Exercise

2016-08-06 Thread Brent Meeker



On 8/6/2016 4:58 AM, Bruno Marchal wrote:
 If we think about engineering an autonomous being it becomes obvious 
that this is a good architecture.   Decision making should be 
hierarchical with only a few requiring system-wide consideration.  
With RF communication this autonomous being could easily "be" in both 
Moscow and Washington.


I agree. Of course this does not change the step 3 conclusion if that 
is needed to say. There is just no RF communication available,


But that's where you're taking for granted physics which, later, you're 
going to conclude is to be inferred from statistics on computations and 
is otiose.  It's like saying A, B, C, D, ... entail Z, but Z shows 
there's no reason to believe A.


I think you've agreed that physics is necessary to our world, whether 
primary or not.  So I suggest that instead of starting with the 
hypothesis that consciousness is a computation, let's leave questions of 
consciousness to the end and start with Tegmark 2.0, "Physics is 
computation".  Then I take Bruno's version to be:


A. The totality of reality consists of all possible computable universes 
and histories

B. Mathematics is real.
C. An UD will realize all possible computation, and hence the totality 
of reality.
D. The world of our experience is a thread, or threads, of the UD 
computation that, according to some measure, have statistical coherence 
and hence realize a world with the regularity that we interpret as "the 
laws of physics".


If that argument is accepted, then even if consciousness is an 
epiphenomenon of physics, it will still be a some computational entity 
in the universe picked out by our "laws of physics".  From there Bruno 
may well argue that the threads of consciousness are certainly 
epistemologically prior and we should regard them as fundamental and 
"the laws of physics" are inferences (which change with new data).


Brent






Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-06 Thread Bruno Marchal


On 06 Aug 2016, at 03:43, John Clark wrote:

On Fri, Aug 5, 2016 at 3:32 AM, Bruno Marchal wrote:


>>>> Assigning probabilities about what "YOU" will see next
>>>> is not ambiguous as long as "YOU" duplicating machines are not around.


>>> So, you are OK that the guy in Helsinki writes
>>> P("drinking coffee") = 1.


>> The guy in Helsinki? NO!!! Bruno Marchal said
>> "The question is not about duplication"


> The question 2 was not about duplication,

If duplication was not involved then why on god's green earth
were you talking about the goddamn HELSINKI MAN?!



Do you read the posts?

Question 1: in the duplication protocol, if event X is presented to  
both copies (like getting coffee), P(X) = 1. You have agreed on this  
last week. I quote your post (of 02 augustus):


<<
> both copies will have a cup of coffee after the
> reconstitution. Are you OK that P("experience of drinking coffee") = 1?


Yes, and in this case it doesn't matter if Bruno Marchal says P is
the probability John Clark will drink the coffee or says P is the
probability "you" will drink the coffee, there is no ambiguity
either way. However if the Moscow man got the coffee but the
Washington man did not then there would be a 100% probability that
John Clark will get the coffee and also a 100% probability that John
Clark will not get the coffee, just as I would assign a 100%
probability that tomorrow tomatoes will be red and I would also
assign a 100% probability that tomorrow tomatoes will be green.


>>

It was the question 2 which does not involve duplication.

Question 2:  if I am sure at time t that at time q, q > t, I will be  
uncertain of the outcome of some experience x, then I am uncertain  
about the outcome of that experience at time t.


You have answered both questions positively in your posts of the 02  
August and 03 August respectively.


Then I have shown that the step 3 FPI is a direct consequence of  
answering "yes" to the questions 1 and 2.

But your reply to that was not referring to the question correctly.

So, do you still agree with yourself on those two questions, and if  
yes, do you see why it entails the FPI?
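The step-3 indeterminacy being argued about has a simple frequency reading under iterated duplication. A sketch, assuming each of the 2**n W/M diary histories is counted with equal weight (that weighting is an assumption of the protocol, not a derived fact):

```python
# Enumerate all diary histories after n self-duplications and count how
# many record a roughly even mix of W's and M's.
from itertools import product

n = 10
histories = list(product("WM", repeat=n))          # all 2**n diaries
balanced = sum(1 for h in histories
               if abs(h.count("W") - h.count("M")) <= 2)
print(round(balanced / len(histories), 3))
# → 0.656: most histories sit near the 50/50 frequency
```

This is only the counting fact behind the "P = 1/2" talk; whether that counting is the right measure on first-person experiences is exactly what the thread disputes.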


Bruno




> but the question 1 was, and you said that P("drinking
> coffee") was equal to one.


P can always be equal to 1, it depends on what P means, and if P
has no meaning, if for example too many unspecified personal
pronouns are used, then P has no value at all, not even zero. In the
first case BOTH the Moscow man and the Washington man got the coffee
so the identity of the mysterious Mr. You does not need to be
specified and so P had both a meaning and a value.


If one gets the coffee and one does not, what is the probability
(P) that "YOU" will get the coffee? Is it 1? No. Is it 1/2? No. Is
it 0? No, P has no value at all because P is gibberish.


 John K Clark








http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-06 Thread Bruno Marchal


On 05 Aug 2016, at 20:20, Brent Meeker wrote:




On 8/5/2016 1:15 AM, Bruno Marchal wrote:


On 05 Aug 2016, at 06:27, Brent Meeker wrote:




On 8/4/2016 7:40 PM, Stathis Papaioannou wrote:



On 5 August 2016 at 04:01, Brent Meeker   
wrote:



On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:
The problem with (3) is a general problem with multiverses.  A  
single, infinite universe is an example of a multiverse theory,  
since there will be infinite copies of everything and every  
possible variation of everything, including your brain and your  
mind.


That implicitly assumes a digital universe, yet the theory that  
suggests it, quantum mechanics, is based on continua; which is  
why I don't take "the multiverse" too seriously.


It appears that our brains are finite state machines. Each neuron  
can either be "on" or "off", there are a finite number of  
neurons, so a finite number of possible brain states, and a  
finite number of possible mental states. This is analogous to a  
digital computer:


Not necessarily. A digital computer also requires that time be  
digitized so that its registers run synchronously.  
Otherwise "the state" is ill defined.  The finite speed of light  
means that spatially separated regions cannot be 
synchronous.  Even if neurons were only ON or OFF, which they  
aren't, they have frequency modulation, they are not synchronous.


A synchronous digital machine can emulate an asynchronous digital  
machine, and that is all that is needed for the reasoning.
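The emulation claim is standard. A minimal sketch, assuming the asynchronous components can be modeled as generators advancing at their own rates (the `neuron` model below is a made-up toy, not a biological claim):

```python
# A single synchronous global clock advances every component one step
# per tick, deterministically reproducing the behavior of components
# that "fire" at unsynchronized rates.
def neuron(period, label):
    t = 0
    while True:
        t += 1
        yield label if t % period == 0 else None  # fires every `period` ticks

def synchronous_emulation(components, ticks):
    events = []
    for _ in range(ticks):          # one global synchronous clock
        for c in components:        # advance every component one step
            e = next(c)
            if e:
                events.append(e)
    return events

print(synchronous_emulation([neuron(2, "A"), neuron(3, "B")], 6))
# → ['A', 'B', 'A', 'A', 'B']
```

The interleaving is deterministic, which is why asynchrony by itself does not block a digital (synchronous) emulation at some level.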


True, but only going to a level far below a "state of consciousness"  
so that in this finer level of emulation there are no longer  
identifiable states of consciousness.


There is no algorithm to identify the semantics of a program in general  
(Rice's theorem), but this is not a problem. The domain of the global  
indeterminacy is necessarily non-computable. That is why people will  
in general bet on as low a substitution level as possible. In  
practice, they will take the one they can afford ...
The state of consciousness need not be identified by a third  
person: only by the first person itself, and that is "automatic", in  
virtue of computationalism, and only this is used in the reasoning.



Rather "states" are coming into being and fading away, with various  
overlaps.


OK. No problem. Eventually, the first person is associated with an  
infinity of 3p states in arithmetic, and consciousness (first person  
histories) emerges from the statistical interference of those  
computations. The math shows that we recover a quantization allowing a  
quantum logic, the symmetries at the bottom, linearity and 1p plural,  
etc. It is an open problem whether the Hamiltonians are physical or  
geographical, but note that this is an open problem also in String  
Theory, and physics does not even seem to have the tools to define  
physics and geography differently. For this alone, computationalism is  
quite interesting.



Bruno






Brent



http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-06 Thread Bruno Marchal


On 06 Aug 2016, at 07:19, Bruce Kellett wrote:


On 6/08/2016 12:58 am, Bruno Marchal wrote:

On 05 Aug 2016, at 13:23, Bruce Kellett wrote


He writes in the diaries what he sees: it is just a matter of the  
protocol whether he writes the name of the city in which each diary is  
located in that particular diary, or whether he writes in both diaries  
what he sees in total, in which case he writes W in both diaries. It  
need be no different from my seeing one thing with my right eye and  
writing that down with my right hand, and seeing something different  
with my left eye and writing that down with my left hand, or writing  
down both things with both hands. (This is not a split-brain experiment.)


All the things that you bring up could easily happen without any  
differentiation into two separate consciousnesses. You might find  
the non-locality of the unified experience a little surprising,  
but that is only because you are not used to the concept of non- 
locality.


I say again, even though it seems obvious to you that the  
differentiation must occur,


It is just trivial, by the definition of first person experience.


That is not a suitable answer -- there is only one person  
experiencing both cities.


Maybe. That is off topic, and depends on what you mean by person.  
If you are right here, and we keep computationalism, then there is  
only one person in the arithmetical reality. You and I are the same  
person, in that case. Maybe, but useless for getting the point that if  
comp is correct, physics is given by a self-referential modality.






If you were right, and using the definition I provided at the  
start, we would have a situation where a guy is in Moscow, and  
writes in his diary "Washington". But then he did not survive  
sanely, and if that is the case, P becomes ≠ 1 at step 1.


Your phrasing of this is wrong. There is no such thing as "a guy in  
Moscow"


Of course there is. Like there is a guy on Mars after the simple  
teleportation. Maybe study the whole proof, as you are making remarks  
that are irrelevant with respect to what is proved.






-- there is a guy who is in both places simultaneously. If there are  
diaries in both W and M, and one person writing in these diaries, it  
is not inconsistent to write W in the M diary and vice versa --  
maybe not what was intended,


With respect to what is proved, indeed.



but since it is just one person writing in diaries, what is written  
is not incorrect.


About as correct as my being the president of the United States.






that is just a failure of imagination on your part. Try to put  
yourself in the situation in which some of the many strands of  
your conscious thoughts relate to bodies in different cities.  
There is no logical impossibility in this. You seem to accept that  
a single mind can be associated with more than one body: "We can  
associate a mind to a body, but the mind itself (the 1p) can be  
(and must be) associated with many different bodies, in the  
physical universe and later in arithmetic." (quoted from your  
comment above.) Hold on to this notion, and consider the  
possibility that there is no differentiation into separate  
conscious persons in such a case (the 1p is singular -- there is  
only ever just one person).


I love the idea, but it is not relevant for the problem of  
prediction.


There is no problem of prediction -- there is only a question as to  
whether differentiation necessarily occurs.



Read the argument. It is only about prediction. You can't change that, or  
you are just off topic.








And I am not sure it makes sense, even legally.


Why should it make sense legally? Legal systems were not drawn up to  
take account of person duplicating machines.


If the W-man commits a murder in W, with your bizarre theory, we  
can put the M-man in prison. Your non-locality assumption  
is a bit frightening. Some will say we are all that type of human,  
but not this type, etc. If you consider the W-man and the  
M-man as the same person, then every living creature on this earth  
is the same person, and 'to eat' becomes equivalent to 'to be  
eaten'.


Such bizarre consequences do not follow from what I have said -- not  
all people are the result of digital duplication experiments.


Why not, eventually; but this has no relevance at all in the  
reasoning, where we assume digital mechanism, so that the M and W  
man would not be aware of each other's existence in a protocol where  
they would not know the protocol.


That doesn't matter -- they would know that they were one person,  
experiencing two cities at once.


 And the duplications give a simple distinction between the 1p and  
3p, and we can see, in very simple simulations, that all copies  
feel 1p-separate from the others, in the protocol described.


You have still not proved this,


Read the posts, or the papers, or the books.





or given any cogent reason as to why it should be 

Re: Holiday Exercise

2016-08-06 Thread Bruno Marchal


On 05 Aug 2016, at 20:35, Brent Meeker wrote:




On 8/5/2016 4:23 AM, Bruce Kellett wrote:
We can associate a mind to a body, but the mind itself (the 1p)  
can be (and must be) associated with many different bodies, in the  
physical universe and later in arithmetic.


You seem to accept my point -- there is still only one mind even  
after different data are fed to the copies: one mind in two bodies  
in this case (a one-many relationship).


Which is empirically supported by neurological studies that indicate  
the brain consists of "modules", each of which has "a mind of its  
own."


Yes. Incidentally that is well illustrated in reports of experiences  
with dissociative drugs (ketamine, salvia, etc.).




 If we think about engineering an autonomous being it becomes  
obvious that this is a good architecture.   Decision making should  
be hierarchical with only a few requiring system-wide  
consideration.  With RF communication this autonomous being could  
easily "be"in both Moscow and Washington.


I agree. Of course this does not change the step 3 conclusion, if that  
needs to be said. There is just no RF communication available, and the  
differentiation of the first person experience follows from the fact  
that computationalism and the rightness of the substitution level  
make the two copies one and entire in different cities.
Some people add complexity by talking about person and/or  
consciousness, instead of the local immediate first person experience,  
which is what the indeterminacy is evaluated on.


I guess you are OK with this.

Bruno






Brent



http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett

On 6/08/2016 2:36 am, Bruno Marchal wrote:

On 05 Aug 2016, at 14:11, Bruce Kellett wrote:

The difficulty is with your assumption that differentiation into two 
persons is inevitable.


It is not an assumption. With the protocol and the hypothesis, the 
diaries have differentiated.


Diaries are not people.


The first persons are approximated by, and associated with, their personal diaries.


I am not defined by any diary I might keep -- that is merely an 
irrelevant adjunct.


Everett uses a similar theory of mind, and indeed most accounts of 
QM-without-collapse use digital mechanism, more or less implicitly.



Accusations of bad faith are not required.


Sorry for the accusation of bad faith, but I hope now we can move on to 
step 4. I mean, come back to the original definition of first person 
discourse.


The notions of first person and third person have long been defined, 
and you were persisting in talking as if it were possible 
that the first person experience does not bifurcate, differentiate. 
When we assume comp, the only way to know is to ask the copies, or 
to consult their opinions and experiences, and then simple elementary 
logic shows that they all differentiate.


I suggested doing the experiment and determining the answer empirically. 
Logic can only tell us what follows from certain premises, and your 
premises do not entail differentiation in the described circumstances.


We admit P=1 in the simple teleportation case; then the 
differentiation is a simple consequence: the robot in W sees W, 
believes he is in W, and as he is in W, he knows that he is in W (with 
the ancient notion of knowledge: true belief). The same for the robot in 
M. They are both right, they have just differentiated. They both 
confirm "W v M", and refute "W & M", as, by computationalism, the 
W-machine has been made independent of the M-machine.


Again, you merely assume differentiation, you do not prove its necessity.

The W-machine has no first person clue whether the M-machine even exists, 
and vice versa. (Or you bring in telepathy, etc.)


I don't need telepathy to unify the various streams of my consciousness 
-- to know that I am the person driving the car, talking to my wife, 
etc., at a given moment. Neither is telepathy needed if one person is in 
two places at once.


You can't invalidate a reasoning by changing, in the reasoning, the 
definitions which have been given in the reasoning. The differentiations 
are obvious. In the n-iterated case, the differentiations are given by 
the 2^n sequences of W and M.
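
As an aside, the 2^n count is easy to make concrete. The sketch below is purely illustrative (the function name `histories` is mine, not from the thread): it enumerates the diary histories produced by n iterated W/M duplications.

```python
from itertools import product

def histories(n):
    """All 2^n first-person diary histories after n iterated W/M duplications."""
    return ["".join(seq) for seq in product("WM", repeat=n)]

hs = histories(3)
print(len(hs))  # 8 = 2^3 pairwise-different diaries
print(hs)       # ['WWW', 'WWM', 'WMW', 'WMM', 'MWW', 'MWM', 'MMW', 'MMM']
```

Each history is distinct, which is the sense in which the copies' records are said to differentiate.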


You continue to assume what you are required to prove.

Keep well in mind that I am not arguing for or against 
computationalism. I assume it, and study the consequences.


There is little sense in studying the consequences of an inconsistent 
theory: you have to defend computationalism against the charge that it 
is not well-established.




Later, I can explain that the "P=1" of 'UDA step one' belongs to the 
machine's G*\G type of true but non-justifiable proposition, which  
can explain the uneasiness. "P=1" requires a strong axiom, and indeed 
both CT and YD are strong axioms in "cognitive science/computer 
science/theology".


So derive the necessity of differentiation from these axioms.

Bruce


Computationalism could be the most insane theology except for all the 
others. I don't know if comp is true or not, but I am pretty sure that 
IF digital mechanism is true, then the "correct theology" will be 
closer to Plato than to Aristotle.




Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett

On 6/08/2016 12:58 am, Bruno Marchal wrote:

On 05 Aug 2016, at 13:23, Bruce Kellett wrote


He writes in the diaries what he sees: it is just a matter of the 
protocol whether he writes the name of the city in which each diary 
is located in that particular diary, or if he writes in both diaries 
what he sees in total, in which case he writes W in both diaries. 
It need be no different from my seeing one thing with my right eye 
and writing that down with my right hand, and seeing something 
different with my left eye and writing that down with my left hand, 
or writing down both things with both hands. (This is not a 
split-brain experiment.)


All the things that you bring up could easily happen without any 
differentiation into two separate consciousnesses. You might find the 
non-locality of the unified experience a little surprising, but that 
is only because you are not used to the concept of non-locality.


I say again, even though it seems obvious to you that the 
differentiation must occur,


It is just trivial, by the definition of first person experience.


That is not a suitable answer -- there is only one person experiencing 
both cities.


If you were right, and using the definition I provided at the start, 
we would have a situation where a guy is in Moscow and writes in his 
diary "Washington". But then he did not survive sanely, and if that is 
the case, P becomes ≠ 1 at step 1.


Your phrasing of this is wrong. There is no such thing as "a guy in 
Moscow" -- there is a guy who is in both places simultaneously. If there 
are diaries in both W and M, and one person writing in these diaries, it 
is not inconsistent to write W in the M diary and vice versa -- maybe 
not what was intended, but since it is just one person writing in 
diaries, what is written is not incorrect.


that is just a failure of imagination on your part. Try to put 
yourself in the situation in which some of the many strands of your 
conscious thoughts relate to bodies in different cities. There is no 
logical impossibility in this. You seem to accept that a single mind 
can be associated with more than one body: "We can associate a mind  
to a body, but the mind itself (the 1p) can be (and must be) 
associated with many different bodies, in the physical universe and 
later in arithmetic." (quoted from your comment above.) Hold on to 
this notion, and consider the possibility that there is no 
differentiation into separate conscious persons in such a case (the 
1p is singular -- there is only ever just one person).


I love the idea, but it is not relevant for the problem of prediction.


There is no problem of prediction -- there is only a question as to 
whether differentiation necessarily occurs.



And I am not sure it makes sense, even legally.


Why should it make sense legally? Legal systems were not drawn up to 
take account of person duplicating machines.


If the W-man commits a murder in W, with your bizarre theory, we can 
put the M-man in prison. Your non-locality assumption is a bit 
frightening. Some will say we are all that type of human, but not 
this type, etc. If you consider the W-man and the M-man as the same 
person, then every living creature on this earth is the same person, 
and 'to eat' becomes equivalent to 'to be eaten'.


Such bizarre consequences do not follow from what I have said -- not all 
people are the result of digital duplication experiments.


Why not, eventually; but this has no relevance at all in the reasoning, 
where we assume digital mechanism, so that the M and W man would not 
be aware of each other's existence in a protocol where they would not 
know the protocol.


That doesn't matter -- they would know that they were one person, 
experiencing two cities at once.


And the duplication gives a simple distinction between the 1p and 
3p, and we can see, in a very simple simulation, that all copies feel 
1p-separate from the others, in the protocol described.
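
A minimal version of such a simulation might look like this (an illustrative sketch only; `duplicate` and the use of a list as the "diary" are my own stand-ins for the first-person record described in the protocol):

```python
from copy import deepcopy

def duplicate(diary):
    """Duplicate a diary state, then let each copy record its own city."""
    w, m = deepcopy(diary), deepcopy(diary)  # identical up to reconstitution
    w.append("I see Washington")             # differentiation happens here
    m.append("I see Moscow")
    return w, m

w, m = duplicate(["Pushed the button in Helsinki"])
print(w[0] == m[0])  # True: shared past
print(w == m)        # False: the records have differentiated
```

The shared first entry and the divergent last entries capture the sense in which each copy has the same past but a 1p-separate present.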


You have still not proved this, or given any cogent reason as to why it 
should be the case. You suffer from what, in the philosophy of science, 
is known as the problem of unconsidered alternatives. You simply have 
not considered non-differentiation as a relevant possibility in your 
theory/model. Now that this alternative has been raised, you have to 
give reasons against it, or revise your original thesis.


I hope you understand well that we assume computationalism, with an 
open mind that the theory might lead to a contradiction, in which case 
we would learn a lot. But up to now, we get only (quantum?) weirdness.


You are very keen to assume computationalism, i.e., that your theory is 
at least internally consistent. But I have raised a relevant 
consideration that counts against the coherence of your theory. You have 
not yet given any substantial argument for your assumption that 
differentiation into separate persons is inevitable in the circumstances 
described -- lots of assertions, but no arguments.



Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-05 Thread John Clark
On Fri, Aug 5, 2016 at 3:32 AM, Bruno Marchal  wrote:


>>> Assigning probabilities about what "YOU" will see next is not ambiguous
>>> as long as "YOU" duplicating machine are not around.
>>
>>> So, you are OK that the guy in Helsinki write P("drinking coffee") = 1.
>>
>> The guy in Helsinki? NO!!! Bruno Marchal said "The question is not about
>> duplication"
>
> The question 2 was not about duplication,

If duplication was not involved then why on god's green earth were you
talking about the goddamn *HELSINKI MAN*?!


> but the question 1 was, and you said that P("drinking coffee") was equal
> to one.
>

P can always be equal to 1, it depends on what P means, and if P has no
meaning, if for example too many unspecified personal pronouns are used,
then P has no value at all, not even zero. In the first case
BOTH the Moscow man and the Washington man got the coffee so the identity
of the mysterious Mr. You does not need to be specified and so P had both a
meaning and a value.

If one gets the coffee and one does not, what is the probability (P) that
"*YOU*" will get the coffee? Is it 1? No. Is it 1/2? No. Is it 0? No, P has
no value at all because P is gibberish.

 John K Clark





Re: Holiday Exercise

2016-08-05 Thread Brent Meeker



On 8/5/2016 4:23 AM, Bruce Kellett wrote:
We can associate a mind to a body, but the mind itself (the 1p) can 
be (and must be) associated with many different bodies, in the 
physical universe and later in arithmetic.


You seem to accept my point -- there is still only one mind even after 
different data are fed to the copies: one mind in two bodies in this 
case (a one-many relationship).


Which is empirically supported by neurological studies that indicate the 
brain consists of "modules", each of which has "a mind of its own". If 
we think about engineering an autonomous being it becomes obvious that 
this is a good architecture. Decision making should be hierarchical, 
with only a few decisions requiring system-wide consideration. With RF 
communication this autonomous being could easily "be" in both Moscow and 
Washington.


Brent



Re: Holiday Exercise

2016-08-05 Thread Brent Meeker



On 8/5/2016 1:15 AM, Bruno Marchal wrote:


On 05 Aug 2016, at 06:27, Brent Meeker wrote:




On 8/4/2016 7:40 PM, Stathis Papaioannou wrote:



On 5 August 2016 at 04:01, Brent Meeker wrote:




On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:

The problem with (3) is a general problem with multiverses.  A
single, infinite universe is an example of a multiverse theory,
since there will be infinite copies of everything and every
possible variation of everything, including your brain and your
mind.


That implicitly assumes a digital universe, yet the theory that
suggests it, quantum mechanics, is based on continua; which is
why I don't take "the multiverse" too seriously.


It appears that our brains are finite state machines. Each neuron 
can either be "on" or "off", there are a finite number of neurons, 
so a finite number of possible brain states, and a finite number of 
possible mental states. This is analogous to a digital computer:


Not necessarily. A digital computer also requires that time be 
digitized so that its registers run synchronously. Otherwise "the 
state" is ill defined. The finite speed of light means that 
spatially separated regions cannot be synchronous. Even if neurons 
were only ON or OFF, which they aren't, they have frequency 
modulation; they are not synchronous.


A synchronous digital machine can emulate an asynchronous digital 
machine, and that is all that is needed for the reasoning.


True, but only going to a level far below a "state of consciousness" so 
that in this finer level of emulation there are no longer identifiable 
states of consciousness.  Rather "states" are coming into being and 
fading away, with various overlaps.


Brent



Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 14:11, Bruce Kellett wrote:



On 5/08/2016 9:30 pm, Bruno Marchal wrote:

On 05 Aug 2016, at 00:31, Bruce Kellett wrote:

On 5/08/2016 1:28 am, Brent Meeker wrote:

On 8/4/2016 2:51 AM, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:
However, we are being asked to consider two conscious states where the
conscious state differs by at least one bit - the W/M bit. Clearly, by
the YD assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of the
single bit difference.

By that reasoning, no consciousness survives through time.

Not at all! Both conscious states survive through time by  
assumption (YD).


Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single  
consciousness through time. We get different input data all the  
time but we do not differentiate according to that data.


I could perhaps expand on that response. On duplication, two  
identical consciousnesses are created, and by the identity of  
indiscernibles, they form just a single consciousness. Then data  
is input. It seems to me that there is no reason why this should  
lead the initial consciousness to differentiate, or split into  
two. In normal life we get inputs from many sources  
simultaneously -- we see complex scenes, smell the air, feel  
impacts on our body, and hear many sounds from the environment.  
None of this leads our consciousness to disintegrate. Indeed,  
our evolutionary experience has made us adept at coping with  
these multifarious inputs and sorting through them very  
efficiently to concentrate on what is most important, while  
keeping other inputs at an appropriate level in our minds.


I have previously mentioned our ability to multitask in complex  
ways: while I am driving my car, I am aware of the car, the  
road, other traffic and so on; while, at the same time, I can be  
talking to my wife; thinking about what to cook for dinner; and  
reflecting on philosophical issues that are important to me. And  
this is by no means an exhaustive list of our ability to  
multitask -- to run many separate conscious modules within the  
one unified consciousness.


Given that this experience is common to us all, it is not in the  
least bit difficult to think that the adding of yet another  
stream of inputs via a separate body will not change the basic  
structure of our consciousness -- we will just take this  
additional data and process in the way we already process  
multiple data inputs and streams of consciousness. This would  
seem, indeed, to be the default understanding of the  
consequences of person duplication. One would have to add some  
further constraints in order for it to be clear that the  
separate bodies would necessarily have differentiated conscious  
streams. No such additional constraints are currently in evidence.


Not empirically proven constraints, but current physics strongly  
suggests that the duplicates would almost immediately, in the  
decoherence time for a brain, differentiate; i.e. the  
consciousness is not separate from the physics.  It's only "not  
in evidence" if your trying to derive the physics from the  
consciousness.


Of course,  that is what I was trying to get people to see: the  
additional constraint that is necessary for differentiation is  
essentially a mind-brain identity thesis.


Not really. To get differentiation, you need only different  
memories or different first person report, not different brain.


The differentiation we are talking about is into two separate  
persons who do not share a consciousness. You need the  
differentiation before you get two first person reports: one  
consciousness could store several different memories.


What you say is very weird. If there is no differentiation of the  
first person experience, then how could the diary in W contain W,  
and the diary in M contain M?


I explained that in the previous post. It is not in the least  
mysterious -- no different from seeing different things with each  
eye and recording what is seen with different hands. There might be  
two experiences, but that does not need two persons.


You lost me with your last post, as they seem to conflict  
immediately with step 1 and "step 0", the definition of (weak)  
computationalism used in the UD Argument.


I don't see any conflict with ordinary teleportation, with or  
without a delay. There is no duplication in those cases, so ordinary  
considerations apply. Of course, if there is a delay that the  
teletransported person has no way of knowing about, then he will not  
know about that delay -- so what?


I presume by "step 0" you mean YD + CT. There is no problem with  
these assumptions; it is just that you 

Re: Holiday Exercise

2016-08-05 Thread Platonist Guitar Cowboy
On Fri, Aug 5, 2016 at 4:17 PM, Bruno Marchal  wrote:

>
> On 05 Aug 2016, at 15:01, Bruce Kellett wrote:
>
> On 5/08/2016 10:11 pm, Bruce Kellett wrote:
>>
>>> On 5/08/2016 9:30 pm, Bruno Marchal wrote:
>>>
>>> Just tell me if you are OK with question 1. The Helsinki guy is told
>>> that BOTH copies will have a hot drink after the reconstitutions, in both
>>> Moscow and Washington. Do you agree that the Helsinki guy (a believer in
>>> computationalism) will believe that he can expect, in Helsinki, with
>>> probability, or credibility, or plausibility ONE (resp maximal) to have
>>> some hot drink after pushing the button in Helsinki?

>>>
>>> As I said, the H-guy can expect to drink two cups of coffee.
>>>
>>
>> Once again, some amplification of this answer is perhaps in order. I
>> cannot answer your question with a Yes/No as you wish because the question
>> is basically dishonest -- of the form of "Have you stopped beating your
>> wife yet?". The question contains an implicit assumption that the
>> differentiation takes place.
>>
>
>
> Not at all. Question 1 is neutral on this, but if you prefer I split
> question 1 into two different questions.
>
> Question 1a.
> The H-guy is told that the coffee is offered *in* the reconstitution
> boxes, and that it has the same taste. Put it differently, we ensure that
> the differentiation has not yet occurred.
> And question 1a is the same: assuming he is a coffee addict, and that
> he wants to drink coffee as soon as possible, should he be worried,
> knowing the protocol says the coffee is offered, or can he argue that
> he is not worried, and that if comp is true and everything goes well,
> P("drinking coffee") = 1?
>
> Question 1b
> Same question, but now, the coffee is offered after the opening of the
> doors.
>
>
>
>
>
> Since it is this differentiation that is in question, the question is
>> disingenuous: it can only be answered as I have done above.
>>
>
> Oh nice! The Helsinki guy, as a coffee addict, is very pleased you tell
> him that he will drink two cups of coffee.
>

If this kind of connection can be made, then you play right into the hands
of the people who accuse you or your work of being "anything goes". And I
say this because I believe your work has some merit to it, when you're not
trying to shove it down people's throats a la "WHAT IS YOUR THEOLOGY?" in
the setting of a public list.

The kind of pushiness of late, tactics of flooding the list with posts
where you set discussion forcibly, and explicitly demanding your questions
to be answered seem to paint a picture where you abandon your own
convictions: modesty, avoidance of blasphemy, use of linguistic games where
only you can set the frame, argument from authority etc. I liked the old
Bruno from 2015 better who didn't need to resort to these things to make a
point.

Particularly the cheap way of trying to ensnare people into discussing your
research interests. So obvious and so out of character, it makes one wonder
as to your general welfare. Play nice, folks! Take care of yourselves. PGC



Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 13:23, Bruce Kellett wrote:


On 5/08/2016 6:12 pm, Bruno Marchal wrote:

On 05 Aug 2016, at 04:13, Bruce Kellett wrote:

On 5/08/2016 3:41 am, Bruno Marchal wrote:

On 04 Aug 2016, at 04:37, Bruce Kellett wrote:

On 4/08/2016 1:04 am, Bruno Marchal wrote:

On 03 Aug 2016, at 07:16, Bruce Kellett wrote:


You use the assumption that the duplicated consciousnesses  
automatically differentiate when receiving different inputs.


It is not an assumption.


Of course it is an assumption. You have not derived it from  
anything previously in evidence.


See my answer to Brent. It is just obvious that the first person  
experience differentiates when it gets different experiences,  
leading to different memories. We *assume* computationalism. How  
could the diaries not differentiate? What you say does not make  
any sense.


I have been at pains to argue (in several different ways) that the  
differentiation of consciousness is not automatic. It is very easy  
to conceive of a situation in which a single consciousness  
continues in two bodies, with the streams of consciousness arising  
from both easily identifiable, but still unified in the  
consciousness of a single person. (I copy below my recent argument  
for this in a post replying to Russell.) So the differentiation  
you require is not necessary or automatic -- it has to be  
justified separately because it is not "just obvious".


Your recent expansion of the argument of step 3 in discussions  
with John Clark does not alter the situation in any way -- you  
still just assert that the differentiation takes place on the  
receipt of different input data.


I had thought that the argument for such differentiation of  
consciousness in different physical bodies was a consequence of  
some mind-brain identity thesis. But I am no longer sure that even  
that is sufficient -- the differentiation clearly requires  
separate bodies/brains (separate input data streams), but separate  
bodies are not sufficient for differentiation, as I have shown.


That was shown and explained before and is not contested here.


I thought I was contesting it.


Please read the posts.
That is why I introduce a painting in question 2.


That still just gives differentiation on different data inputs -- it  
changes nothing.



But let us first see if you agree with question 1.

Do you agree that if the H-guy is told that a hot drink will be  
offered to both reconstitutions, in W and in M, he is entitled to  
expect a hot drink with probability one (assuming computationalism  
and the default hypothesis)?


I do not assume computationalism, I am questioning its validity.


Do you agree that P(X) = 1 in Helsinki, if X will occur in both cities?


I think that it is entirely possible that the H-guy will, after the  
duplication, experience drinking two coffees.


What is required is a much stronger additional assumption, namely  
an association between minds and brains such that a mind can  
occupy only one brain.


Not at all. We can say that one mind occupies both brains in the  
WM-duplication, before the opening of the doors, assuming the  
reconstitution boxes are identical. The mind-brain identity fails right  
at step 3.


Mind-brain identity need not fail: what fails in my interpretation  
of duplication is the one-to-one correspondence of one mind with one  
body. One needs something stronger than mind-brain identity to  
justify the differentiation on different data inputs, because we can  
have one-many and many-one mind-body relationships.


We can associate a mind to a body, but the mind itself (the 1p) can  
be (and must be) associated with many different bodies, in the  
physical universe and later in arithmetic.


You seem to accept my point -- there is still only one mind even  
after different data are fed to the copies: one mind in two bodies  
in this case (a one-many relationship).


(Whether a single brain can host only one mind is a separate  
matter, involving one's attitude to the results of split brain  
studies and the psychological issues surrounding multiple  
personalities/minds.) In other words, the differentiation  
assumption is an additional assumption that does not appear to  
follow from either physicalism or YD+CT.


It follows from very elementary computer science, and in our case,  
it follows necessarily, as the 1p is identified, in this setting,  
with the content of the personal diary, which obviously  
differentiates on the self-localization result made by the  
reconstitutions.


I think the diaries are just confusing you. The copy in M can write  
M in the diary in Moscow, and the copy in W write W in the diary in  
Washington. That is not necessarily different from me writing M in  
one diary with my left hand while writing W in a separate diary with  
my right hand. No differentiation into two separate persons is  
necessary in either case. There is no "self-localization" if there  
is only ever one consciousness -- the person 

Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 15:01, Bruce Kellett wrote:


On 5/08/2016 10:11 pm, Bruce Kellett wrote:

On 5/08/2016 9:30 pm, Bruno Marchal wrote:

Just tell me if you are OK with question 1. The Helsinki guy is  
told that BOTH copies will have a hot drink after the  
reconstitutions, in both Moscow and Washington. Do you agree that  
the Helsinki guy (a believer in computationalism) will believe  
that he can expect, in Helsinki, with probability, or credibility,  
or plausibility ONE (resp maximal) to have some hot drink after  
pushing the button in Helsinki?


As I said, the H-guy can expect to drink two cups of coffee.


Once again, some amplification of this answer is perhaps in  
order. I cannot answer your question with a Yes/No as you wish  
because the question is basically dishonest -- of the form of "Have  
you stopped beating your wife yet?". The question contains an  
implicit assumption that the differentiation takes place.



Not at all. Question 1 is neutral on this, but if you prefer I split  
question 1 into two different questions.


Question 1a.
The H-guy is told that the coffee is offered *in* the reconstitution  
boxes, and that it has the same taste. Put it differently, we ensure  
that the differentiation has not yet occurred.
And question 1a is the same: assuming he is a coffee addict, and  
that he wants to drink coffee as soon as possible, should he be worried,  
knowing the protocol says the coffee is offered, or can he argue  
that he is not worried, and that if comp is true and everything goes  
well, P("drinking coffee") = 1?


Question 1b
Same question, but now, the coffee is offered after the opening of the  
doors.






Since it is this differentiation that is in question, the question  
is disingenuous: it can only be answered as I have done above.


Oh nice! The Helsinki guy, as a coffee addict, is very pleased you tell  
him that he will drink two cups of coffee.


Now, I hope, you agree that 'drinking two cups of coffee' entails  
'drinking coffee', and in this case, the coffee-addicted Helsinki guy  
has less reason to worry about lacking coffee. You do answer  
P("drinking coffee") = 1.


So, just to be clear, and a bit more general: do you agree with the  
Principle 1:


Principle 1: if a first person event x is guaranteed to happen to  
*all* its immediate (transportation-like) copies, then, before the  
copy, the person can expect x to happen with the same probability it  
would have if there were only one copy.


OK? (We *assume* computationalism. We have agreed already that it  
entails P(x) = 1 if x is guaranteed to be presented to the guy with  
the artificial brain, or to the teleported (classically) person.)
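
Principle 1 amounts to a one-line computation. The following is a hypothetical formalization of my own, not from the thread: whatever weights the continuations carry, an event that holds in every continuation gets expectation 1.

```python
from fractions import Fraction

def expectation(branches):
    """branches: (weight, event_holds) pairs whose weights sum to 1."""
    return sum(w for w, holds in branches if holds)

# Coffee offered in both W and M -> P = 1, independent of the weights.
both = [(Fraction(1, 2), True), (Fraction(1, 2), True)]
# Coffee offered only in W -> P = 1/2 under equal weights.
one = [(Fraction(1, 2), True), (Fraction(1, 2), False)]
print(expectation(both))  # 1
print(expectation(one))   # 1/2
```

The first case is Principle 1; the second is the ordinary W/M indeterminacy of step 3.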


Bruno







Bruce



http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett

On 5/08/2016 10:11 pm, Bruce Kellett wrote:

On 5/08/2016 9:30 pm, Bruno Marchal wrote:

Just tell me if you are OK with question 1. The Helsinki guy is told 
that BOTH copies will have a hot drink after the reconstitutions, in 
both Moscow and Washington. Do you agree that the Helsinki guy (a 
believer in computationalism) will believe that he can expect, in 
Helsinki, with probability, or credibility, or plausibility ONE (resp 
maximal) to have some hot drink after pushing the button in Helsinki?


As I said, the H-guy can expect to drink two cups of coffee.


Once again, some amplification of this answer is perhaps in order. I 
cannot answer your question with a Yes/No as you wish because the 
question is basically dishonest -- of the form of "Have you stopped 
beating your wife yet?". The question contains an implicit assumption 
that the differentiation takes place. Since it is this differentiation 
that is in question, the question is disingenuous: it can only be 
answered as I have done above.


Bruce



Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett


On 5/08/2016 9:30 pm, Bruno Marchal wrote:

On 05 Aug 2016, at 00:31, Bruce Kellett wrote:

On 5/08/2016 1:28 am, Brent Meeker wrote:

On 8/4/2016 2:51 AM, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:
However, we are being asked to consider two conscious states 
where the
conscious state differs by at least one bit - the W/M bit. 
Clearly, by the YD

assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of the
single bit difference.

By that reasoning, no consciousness survives through time.

Not at all! Both conscious states survive through time by 
assumption (YD).


Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single 
consciousness through time. We get different input data all the 
time but we do not differentiate according to that data.


I could perhaps expand on that response. On duplication, two 
identical consciousnesses are created, and by the identity of 
indiscernibles, they form just a single consciousness. Then data is 
input. It seems to me that there is no reason why this should lead 
the initial consciousness to differentiate, or split into two. In 
normal life we get inputs from many sources simultaneously -- we 
see complex scenes, smell the air, feel impacts on our body, and 
hear many sounds from the environment. None of this leads our 
consciousness to disintegrate. Indeed, our evolutionary experience 
has made us adept at coping with these multifarious inputs and 
sorting through them very efficiently to concentrate on what is 
most important, while keeping other inputs at an appropriate level 
in our minds.


I have previously mentioned our ability to multitask in complex 
ways: while I am driving my car, I am aware of the car, the road, 
other traffic and so on; while, at the same time, I can be talking 
to my wife; thinking about what to cook for dinner; and reflecting 
on philosophical issues that are important to me. And this is by no 
means an exhaustive list of our ability to multitask -- to run many 
separate conscious modules within the one unified consciousness.


Given that this experience is common to us all, it is not in the 
least bit difficult to think that the adding of yet another stream 
of inputs via a separate body will not change the basic structure 
of our consciousness -- we will just take this additional data and 
process in the way we already process multiple data inputs and 
streams of consciousness. This would seem, indeed, to be the 
default understanding of the consequences of person duplication. 
One would have to add some further constraints in order for it to 
be clear that the separate bodies would necessarily have 
differentiated conscious streams. No such additional constraints 
are currently in evidence.


Not empirically proven constraints, but current physics strongly 
suggests that the duplicates would almost immediately, in the 
decoherence time for a brain, differentiate; i.e. the consciousness 
is not separate from the physics.  It's only "not in evidence" if 
you're trying to derive the physics from the consciousness.


Of course,  that is what I was trying to get people to see: the 
additional constraint that is necessary for differentiation is 
essentially a mind-brain identity thesis.


Not really. To get differentiation, you need only different memories 
or different first person report, not different brain.


The differentiation we are talking about is into two separate persons 
who do not share a consciousness. You need the differentiation before 
you get two first person reports: one consciousness could store several 
different memories.


What you say is very weird. If there is no differentiation of the 
first person experience, then how could the diary in W contain W, and 
the diary in M contain M?


I explained that in the previous post. It is not in the least mysterious 
-- no different from seeing different things with each eye and recording 
what is seen with different hands. There might be two experiences, but 
that does not need two persons.


You lost me with your last post, as they seem to conflict immediately 
with step 1 and "step 0", the definition of (weak) computationalism 
used in the UD Argument.


I don't see any conflict with ordinary teleportation, with or without a 
delay. There is no duplication in those cases, so ordinary 
considerations apply. Of course, if there is a delay that the 
teletransported person has no way of knowing about, then he will not 
know about that delay -- so what?


I presume by "step 0" you mean YD + CT. There is no problem with these 
assumptions; it is just that you appear not to be able to prove the 
differentiation at step 3 from these assumptions.



And my suspicion is that the mind-brain identity thesis plays havoc 
with the rest of Bruno's argument.

Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 00:31, Bruce Kellett wrote:


On 5/08/2016 1:28 am, Brent Meeker wrote:

On 8/4/2016 2:51 AM, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:
However, we are being asked to consider two conscious states  
where the
conscious state differs by at least one bit - the W/M bit.  
Clearly, by the YD

assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of  
the

single bit difference.

By that reasoning, no consciousness survives through time.

Not at all! Both conscious states survive through time by  
assumption (YD).


Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single  
consciousness through time. We get different input data all the  
time but we do not differentiate according to that data.


I could perhaps expand on that response. On duplication, two  
identical consciousnesses are created, and by the identity of  
indiscernibles, they form just a single consciousness. Then data  
is input. It seems to me that there is no reason why this should  
lead the initial consciousness to differentiate, or split into  
two. In normal life we get inputs from many sources simultaneously  
-- we see complex scenes, smell the air, feel impacts on our body,  
and hear many sounds from the environment. None of this leads our  
consciousness to disintegrate. Indeed, our evolutionary experience  
has made us adept at coping with these multifarious inputs and  
sorting through them very efficiently to concentrate on what is  
most important, while keeping other inputs at an appropriate level  
in our minds.


I have previously mentioned our ability to multitask in complex  
ways: while I am driving my car, I am aware of the car, the road,  
other traffic and so on; while, at the same time, I can be talking  
to my wife; thinking about what to cook for dinner; and reflecting  
on philosophical issues that are important to me. And this is by  
no means an exhaustive list of our ability to multitask -- to run  
many separate conscious modules within the one unified  
consciousness.


Given that this experience is common to us all, it is not in the  
least bit difficult to think that the adding of yet another stream  
of inputs via a separate body will not change the basic structure  
of our consciousness -- we will just take this additional data and  
process in the way we already process multiple data inputs and  
streams of consciousness. This would seem, indeed, to be the  
default understanding of the consequences of person duplication.  
One would have to add some further constraints in order for it to  
be clear that the separate bodies would necessarily have  
differentiated conscious streams. No such additional constraints  
are currently in evidence.


Not empirically proven constraints, but current physics strongly  
suggests that the duplicates would almost immediately, in the  
decoherence time for a brain, differentiate; i.e. the consciousness  
is not separate from the physics.  It's only "not in evidence" if  
you're trying to derive the physics from the consciousness.


Of course,  that is what I was trying to get people to see: the  
additional constraint that is necessary for differentiation is  
essentially a mind-brain identity thesis.


Not really. To get differentiation, you need only different memories  
or different first person report, not different brain.


What you say is very weird. If there is no differentiation of the  
first person experience, then how could the diary in W contain W, and  
the diary in M contain M? You lost me with your last post, as they  
seem to conflict immediately with step 1 and "step 0", the definition  
of (weak) computationalism used in the UD Argument.





And my suspicion is that the mind-brain identity thesis plays havoc  
with the rest of Bruno's argument.


The identity thesis is refuted in the computationalist frame. That  
might be a problem for materialists, who will need at that stage to  
assume a small physical universe without a UD running in it forever, and  
without too many Boltzmann Brains, a move which is shown not to work  
later.


Just tell me if you are OK with question 1. The Helsinki guy is told  
that BOTH copies will have a hot drink after the reconstitutions, in  
both Moscow and Washington. Do you agree that the Helsinki guy (a  
believer in computationalism) will believe that he can expect, in  
Helsinki, with probability, or credibility, or plausibility ONE (resp  
maximal) to have some hot drink after pushing the button in Helsinki?


We need to decompose step 3 in sub-steps, so that we can see if there  
is a real disagreement, and in that case where and which one, or if it  
is just pseudo-philosophy or bad 

Re: Holiday Exercise

2016-08-05 Thread Bruce Kellett

On 5/08/2016 6:12 pm, Bruno Marchal wrote:

On 05 Aug 2016, at 04:13, Bruce Kellett wrote:

On 5/08/2016 3:41 am, Bruno Marchal wrote:

On 04 Aug 2016, at 04:37, Bruce Kellett wrote:

On 4/08/2016 1:04 am, Bruno Marchal wrote:

On 03 Aug 2016, at 07:16, Bruce Kellett wrote:
You use the assumption that the duplicated consciousnesses 
automatically differentiate when receiving different inputs.


It is not an assumption.


Of course it is an assumption. You have not derived it from 
anything previously in evidence.


See my answer to Brent. It is just obvious that the first person 
experience differentiates when it gets different experiences, leading 
to different memories. We *assume* computationalism. How could the 
diaries not differentiate? What you say does not make any sense.


I have been at pains to argue (in several different ways) that the 
differentiation of consciousness is not automatic. It is very easy to 
conceive of a situation in which a single consciousness continues in 
two bodies, with the streams of consciousness arising from both 
easily identifiable, but still unified in the consciousness of a 
single person. (I copy below my recent argument for this in a post 
replying to Russell.) So the differentiation you require is not 
necessary or automatic -- it has to be justified separately because 
it is not "just obvious".


Your recent expansion of the argument of step 3 in discussions with 
John Clark does not alter the situation in any way -- you still just 
assert that the differentiation takes place on the receipt of 
different input data.


I had thought that the argument for such differentiation of 
consciousness in different physical bodies was a consequence of some 
mind-brain identity thesis. But I am no longer sure that even that is 
sufficient -- the differentiation clearly requires separate 
bodies/brains (separate input data streams), but separate bodies are 
not sufficient for differentiation, as I have shown.


That was shown and explained before and is not contested here.


I thought I was contesting it.


Please read the posts.
That is why I introduce a painting in question 2.


That still just gives differentiation on different data inputs -- it 
changes nothing.



But let us first see if you agree with question 1.

Do you agree that if the H-guy is told that a hot drink will be 
offered to both reconstitutions in W and in M, he is entitled to expect 
a hot drink with probability one (assuming computationalism and the 
default hypothesis)?


I do not assume computationalism, I am questioning its validity.


Do you agree that P(X) = 1 in Helsinki, if X will occur in both cities?


I think that it is entirely possible that the H-guy will, after the 
duplication, experience drinking two coffees.


What is required is a much stronger additional assumption, namely an 
association between minds and brains such that a mind can occupy only 
one brain.


Not at all. We can say that one mind occupies both brains in the 
WM-duplication, before the opening of the doors, assuming the 
reconstitution boxes identical. The mind-brain identity fails right at 
step 3.


Mind-brain identity need not fail: what fails in my interpretation of 
duplication is the one-to-one correspondence of one mind with one body. 
One needs something stronger than mind-brain identity to justify the 
differentiation on different data inputs, because we can have one-many 
and many-one mind-body relationships.


We can associate a mind to a body, but the mind itself (the 1p) can be 
(and must be) associated with many different bodies, in the physical 
universe and later in arithmetic.


You seem to accept my point -- there is still only one mind even after 
different data are fed to the copies: one mind in two bodies in this 
case (a one-many relationship).


(Whether a single brain can host only one mind is a separate matter, 
involving one's attitude to the results of split brain studies and 
the psychological issues surrounding multiple personalities/minds.) 
In other words, the differentiation assumption is an additional 
assumption that does not appear to follow from either physicalism or 
YD+CT.


It follows from very elementary computer science, and in our case, it 
follows necessarily, as the 1p is identified, in this setting, with the 
content of the personal diary, which obviously differentiates on the 
self-localization result made by the reconstitutions.


I think the diaries are just confusing you. The copy in M can write M in 
the diary in Moscow, and the copy in W write W in the diary in 
Washington. That is not necessarily different from me writing M in one 
diary with my left hand while writing W in a separate diary with my 
right hand. No differentiation into two separate persons is necessary in 
either case. There is no "self-localization" if there is only ever one 
consciousness -- the person experiences both W and M simultaneously.


As I have further pointed out, one cannot just make this an 
additional assumption to YD+CT, because it is clearly an empirical matter.

Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 06:27, Brent Meeker wrote:




On 8/4/2016 7:40 PM, Stathis Papaioannou wrote:



On 5 August 2016 at 04:01, Brent Meeker  wrote:


On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:
The problem with (3) is a general problem with multiverses.  A  
single, infinite universe is an example of a multiverse theory,  
since there will be infinite copies of everything and every  
possible variation of everything, including your brain and your  
mind.


That implicitly assumes a digital universe, yet the theory that  
suggests it, quantum mechanics, is based on continua; which is why  
I don't take "the multiverse" too seriously.


It appears that our brains are finite state machines. Each neuron  
can either be "on" or "off", there are a finite number of neurons,  
so a finite number of possible brain states, and a finite number of  
possible mental states. This is analogous to a digital computer:


Not necessarily. A digital computer also requires that time be  
digitized so that its registers run synchronously.  Otherwise "the  
state" is ill defined.  The finite speed of light means that  
spatially separated regions cannot be synchronous.  Even if neurons  
were only ON or OFF (which they aren't; they also use frequency  
modulation), they are not synchronous.


A synchronous digital machine can emulate an asynchronous digital  
machine, and that is all that is needed for the reasoning.


Bruno





even if you postulate that electric circuit variables are  
continuous, transistors can only be on or off. If the number of  
possible mental states is finite, then in an infinite universe,  
whether continuous or discrete, mental states will repeat.
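The repetition claim is just the pigeonhole principle; a toy version follows (the neuron count is an absurdly small invented value, used only to make the arithmetic visible):

```python
import random

n_neurons = 3                    # toy value; a real brain has ~10^11 neurons
n_states = 2 ** n_neurons        # each neuron "on" or "off" => finite states

# Any sequence of more than n_states brain-states must revisit one of them
# (pigeonhole), so in an unbounded universe or history, states repeat.
rng = random.Random(0)
sequence = [rng.randrange(n_states) for _ in range(n_states + 1)]
print(len(set(sequence)) < len(sequence))  # True: at least one state repeats
```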
We live in an orderly world with consistent physical laws. It  
seems to me that you are suggesting that if everything possible  
existed then we would not live in such an orderly world,


Unless the worlds were separated in some way, which current  
physical theories provide - but which is not explicable if you  
divorce conscious thoughts from physics.


The worlds are physically separated - there can be no communication  
between separate worlds in the multiverse and none between  
sufficiently widely separated copies of subsets of the world in an  
infinite single universe. But the separate copies are connected  
insofar as they share memories and sense of identity, even if there  
is no causal connection between them.


Of course "copy" implies a shared past in which there was an  
"original", they have a cause in common.


Brent






Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 05 Aug 2016, at 04:13, Bruce Kellett wrote:


On 5/08/2016 3:41 am, Bruno Marchal wrote:

On 04 Aug 2016, at 04:37, Bruce Kellett wrote:

On 4/08/2016 1:04 am, Bruno Marchal wrote:

On 03 Aug 2016, at 07:16, Bruce Kellett wrote:



You use the assumption that the duplicated consciousnesses  
automatically differentiate when receiving different inputs.


It is not an assumption.


Of course it is an assumption. You have not derived it from  
anything previously in evidence.


See my answer to Brent. It is just obvious that the first person  
experience differentiates when it gets different experiences, leading  
to different memories. We *assume* computationalism. How could the  
diaries not differentiate? What you say does not make any sense.


I have been at pains to argue (in several different ways) that the  
differentiation of consciousness is not automatic. It is very easy  
to conceive of a situation in which a single consciousness continues  
in two bodies, with the streams of consciousness arising from both  
easily identifiable, but still unified in the consciousness of a  
single person. (I copy below my recent argument for this in a post  
replying to Russell.) So the differentiation you require is not  
necessary or automatic -- it has to be justified separately because  
it is not "just obvious".


Your recent expansion of the argument of step 3 in discussions with  
John Clark does not alter the situation in any way -- you still just  
assert that the differentiation takes place on the receipt of  
different input data.


I had thought that the argument for such differentiation of  
consciousness in different physical bodies was a consequence of some  
mind-brain identity thesis. But I am no longer sure that even that  
is sufficient -- the differentiation clearly requires separate  
bodies/brains (separate input data streams), but separate bodies are  
not sufficient for differentiation, as I have shown.


That was shown and explained before and is not contested here. Please  
read the posts.
That is why I introduce a painting in question 2. But let us first see  
if you agree with question 1.



Do you agree that if the H-guy is told that a hot drink will be  
offered to both reconstitutions in W and in M, he is entitled to expect  
a hot drink with probability one (assuming computationalism and the  
default hypothesis)?


Do you agree that P(X) = 1 in Helsinki, if X will occur in both cities?




What is required is a much stronger additional assumption, namely an  
association between minds and brains such that a mind can occupy  
only one brain.


Not at all. We can say that one mind occupies both brains in the WM- 
duplication, before the opening of the doors, assuming the  
reconstitution boxes identical. The mind-brain identity fails right at  
step 3. We can associate a mind to a body, but the mind itself (the  
1p) can be (and must be) associated with many different bodies, in the  
physical universe and later in arithmetic.





(Whether a single brain can host only one mind is a separate matter,  
involving one's attitude to the results of split brain studies and  
the psychological issues surrounding multiple personalities/minds.)  
In other words, the differentiation assumption is an additional  
assumption that does not appear to follow from either physicalism or  
YD+CT.


It follows from very elementary computer science, and in our case, it  
follows necessarily, as the 1p is identified, in this setting, with the  
content of the personal diary, which obviously differentiates on the  
self-localization result made by the reconstitutions.






As I have further pointed out, one cannot just make this an  
additional assumption to YD+CT because it is clearly an empirical  
matter: until we have a working person duplicator, we cannot know  
whether differentiation is automatic or not. Science is, after all,  
empirical, not just a matter of definitions.


Once you agree with P(Mars) = 1 in a simple classical teleportation  
experience (step 1), then how could the diary not differentiate when  
the reconstituted guy writes the result of the self-localization?


No empirical test needs to be done, as the differentiation is obvious:  
one copy experiences the city of Moscow, as his diary confirms, and  
the other experiences the city of Washington, as his diary confirms  
too. If they did not differentiate, what would they write in the diary?
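The diary argument can be put as a toy computation (a sketch under the thread's assumptions; the dictionary representation of a "state" is invented purely for illustration):

```python
def reconstitute(state, city):
    """Run the same prior state in a given city; the copy then appends
    the result of its self-localization to its own diary."""
    diary = list(state["diary"])      # both copies start from the same record
    diary.append(city)                # result of opening the door
    return {"diary": diary, "city": city}

helsinki = {"diary": ["Helsinki"], "city": "Helsinki"}
moscow = reconstitute(helsinki, "Moscow")
washington = reconstitute(helsinki, "Washington")

print(moscow["diary"])       # ['Helsinki', 'Moscow']
print(washington["diary"])   # ['Helsinki', 'Washington']
# The records differ as soon as the self-localization is written down:
# the two first-person histories have differentiated.
```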


Bruno






Bruce

Here is part of my discussion with Russell:

[BK]I could perhaps expand on that response. On duplication, two  
identical consciousnesses are created, and by the identity of  
indiscernibles, they form just a single consciousness. Then data is  
input. It seems to me that there is no reason why this should lead  
the initial consciousness to differentiate, or split into two. In  
normal life we get inputs from many sources simultaneously -- we see  
complex scenes, smell the air, feel impacts on our body, and hear  
many sounds from the environment.

Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 04 Aug 2016, at 19:53, John Clark wrote:



On Thu, Aug 4, 2016 at 12:15 PM, Marchal  wrote:

​> ​The question is not about duplication.

​OK.​


​And that part is still OK. Assigning probabilities about what  
"YOU" will see next is not ambiguous as long as "YOU" duplicating  
machine are not around.


​> ​So, you are OK that the guy in Helsinki write P("drinking  
coffee") = 1.


​The guy in Helsinki​?​ NO!!! Bruno Marchal said  "The question  
is not about duplication" but the guy in Helsinki is just about to  
walk into a YOU ​duplicating machine​,​ so John Clark ​will  
not assign any probability of any sort​ about ​the​ one and  
only one thing ​that ​will happen to "​YOU​"​.​ ​It's  
just ​plain ​dumb.


Nope, question one was about duplication. Only question 2 was not. You  
did say that P("drinking coffee") = 1 for the Helsinki guy.

Just to be sure, I quote your answer to question one:


On Tue, Aug 2, 2016 at 12:55 PM, Bruno Marchal   
wrote:


​> ​both copies will have a cup of coffee after the  
reconstitution. Are you OK that P("experience of drinking coffee") =  
1?


​Yes, and in this case it doesn't matter if Bruno Marchal says P is  
the probability John Clark will drink the coffee or says P is the  
probability ​ ​"you" will drink the coffee, there is no ambiguity  
either way. However if the Moscow man got the coffee but the  
Washington man did not then there would be a 100% probability that  
John Clark will get the coffee and also a 100% probability that John  
Clark will not get the coffee, just as I would assign a 100%  
probability that tomorrow tomatoes will be red and I would also  
assign a 100% probability that tomorrow tomatoes will be​ green.



Like I just said: QED, unless you explicitly change your mind on  
question 1. But then say it, and we come back to question 1.


Bruno








​> ​Now, the guy in Helsinki is told that we have put a painting  
by Van Gogh in one of the reconstitution box, and a painting by  
Monet in the other reconstitution box.​ ​


​Let's see if John Clark can guess what's coming. After "YOU" have  
been duplicated by a YOU duplicating machine what is the probability  
that "YOU" will blah blah blah. What on earth made Bruno Marchal  
think that substituting a painting for a cup of coffee would make  
things less ambiguous?


​> ​The key point here, is that we don't tell you which  
reconstitution box contains which painting. ​[...]


​Why is that the key point? Suppose we​ ​change the experiment  
and this time before the experiment we tell "YOU" which box contains  
which painting, we tell "YOU" that the red box on the left contains  
the Van Gogh​ ​and the blue box on the right contains the Monet ,  
and we tell "YOU" that after "YOU" are duplicated by the YOU  
duplicating machine "YOU" will be in both boxes. Does that  
information help in the slightest way in determining what one and  
only one painting "YOU" will see after "YOU" ​are​  
duplicated? ​ ​It's just plain​ ​dumb.


​>​ P("being uncertain about which city is behind the door")

​P is equal to whose uncertainty? After the experiment is​ ​ 
over how do we determine what the true value of P turned out to be?  
To find out that value we need to ask "YOU" what "YOU" saw after  
"YOU" walked into the YOU duplicating machine and opened one and  
only one door. But who exactly do we ask? We can't ask the Helsinki  
man as he's no longer around, oh I know, we ask "YOU".


​> ​OK?

​No it's not OK, it's about as far from OK as things get.​

​> ​Can we move to step 4?

​Just as soon as Bruno Marchal explains what one and only one thing  
"YOU" refers to in a world with "YOU" duplicating machines.


John K Clark ​










Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-05 Thread Bruno Marchal


On 04 Aug 2016, at 19:53, John Clark wrote:



On Thu, Aug 4, 2016 at 12:15 PM, Marchal  wrote:

​> ​The question is not about duplication.

​OK.​


​And that part is still OK. Assigning probabilities about what  
"YOU" will see next is not ambiguous as long as "YOU" duplicating  
machine are not around.


​> ​So, you are OK that the guy in Helsinki write P("drinking  
coffee") = 1.


​The guy in Helsinki​?​ NO!!! Bruno Marchal said  "The question  
is not about duplication"



Question 2 was not about duplication, but question 1 was, and  
you said that P("drinking coffee") was equal to one.


You already contradict your recent post, where you said that question  
1, which was clearly about duplication, admits a positive answer.


QED.

Bruno







but the guy in Helsinki is just about to walk into a YOU ​ 
duplicating machine​,​ so John Clark ​will not assign any  
probability of any sort​ about ​the​ one and only one thing ​ 
that ​will happen to "​YOU​"​.​ ​It's just ​plain ​ 
dumb.



​> ​Now, the guy in Helsinki is told that we have put a painting  
by Van Gogh in one of the reconstitution box, and a painting by  
Monet in the other reconstitution box.​ ​


​Let's see if John Clark can guess what's coming. After "YOU" have  
been duplicated by a YOU duplicating machine what is the probability  
that "YOU" will blah blah blah. What on earth made Bruno Marchal  
think that substituting a painting for a cup of coffee would make  
things less ambiguous?


​> ​The key point here, is that we don't tell you which  
reconstitution box contains which painting. ​[...]


​Why is that the key point? Suppose we​ ​change the experiment  
and this time before the experiment we tell "YOU" which box contains  
which painting, we tell "YOU" that the red box on the left contains  
the Van Gogh​ ​and the blue box on the right contains the Monet ,  
and we tell "YOU" that after "YOU" are duplicated by the YOU  
duplicating machine "YOU" will be in both boxes. Does that  
information help in the slightest way in determining what one and  
only one painting "YOU" will see after "YOU" ​are​  
duplicated? ​ ​It's just plain​ ​dumb.


​>​ P("being uncertain about which city is behind the door")

​P is equal to whose uncertainty? After the experiment is​ ​ 
over how do we determine what the true value of P turned out to be?  
To find out that value we need to ask "YOU" what "YOU" saw after  
"YOU" walked into the YOU duplicating machine and opened one and  
only one door. But who exactly do we ask? We can't ask the Helsinki  
man as he's no longer around, oh I know, we ask "YOU".


​> ​OK?

No, it's not OK; it's about as far from OK as things get.

​> ​Can we move to step 4?

Just as soon as Bruno Marchal explains what one and only one thing  
"YOU" refers to in a world with "YOU" duplicating machines.


John K Clark ​





--
You received this message because you are subscribed to the Google  
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-04 Thread Brent Meeker



On 8/4/2016 7:40 PM, Stathis Papaioannou wrote:



On 5 August 2016 at 04:01, Brent Meeker wrote:




On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:

The problem with (3) is a general problem with multiverses.  A
single, infinite universe is an example of a multiverse theory,
since there will be infinite copies of everything and every
possible variation of everything, including your brain and your
mind.


That implicitly assumes a digital universe, yet the theory that
suggests it, quantum mechanics, is based on continua; which is why
I don't take "the multiverse" too seriously.


It appears that our brains are finite state machines. Each neuron can 
either be "on" or "off", there are a finite number of neurons, so a 
finite number of possible brain states, and a finite number of 
possible mental states. This is analogous to a digital computer:


Not necessarily. A digital computer also requires that time be digitized 
so that its registers run synchronously.  Otherwise "the state" is ill 
defined.  The finite speed of light means that spatially separated 
regions cannot be synchronous.  Even if neurons were only ON or OFF, 
which they aren't (they have frequency modulation), they are not synchronous.


even if you postulate that electric circuit variables are continuous, 
transistors can only be on or off. If the number of possible mental 
states is finite, then in an infinite universe, whether continuous or 
discrete, mental states will repeat.



We live in an orderly world with consistent physical laws. It
seems to me that you are suggesting that if everything possible
existed then we would not live in such an orderly world,


Unless the worlds were separated in some way, which current
physical theories provide - but which is not explicable if you
divorce conscious thoughts from physics.


The worlds are physically separated - there can be no communication 
between separate worlds in the multiverse and none between 
sufficiently widely separated copies of subsets of the world in an 
infinite single universe. But the separate copies are connected 
insofar as they share memories and sense of identity, even if there is 
no causal connection between them.


Of course "copy" implies a shared past in which there was an "original", 
they have a cause in common.


Brent



Re: Holiday Exercise

2016-08-04 Thread Stathis Papaioannou
On 5 August 2016 at 04:01, Brent Meeker  wrote:

>
>
> On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:
>
> The problem with (3) is a general problem with multiverses.  A single,
> infinite universe is an example of a multiverse theory, since there will be
> infinite copies of everything and every possible variation of everything,
> including your brain and your mind.
>
>
> That implicitly assumes a digital universe, yet the theory that suggests
> it, quantum mechanics, is based on continua; which is why I don't take "the
> multiverse" too seriously.
>

It appears that our brains are finite state machines. Each neuron can
either be "on" or "off", there are a finite number of neurons, so a finite
number of possible brain states, and a finite number of possible mental
states. This is analogous to a digital computer: even if you postulate that
electric circuit variables are continuous, transistors can only be on or
off. If the number of possible mental states is finite, then in an infinite
universe, whether continuous or discrete, mental states will repeat.
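[Editor's note: the finiteness point here is essentially the pigeonhole principle — a system with at most N distinct states, run forever, must revisit a state within N + 1 steps, whatever its inputs. A minimal sketch with an invented 8-state toy "brain" (the transition rule is arbitrary, chosen only for illustration):]

```python
def first_revisit(transition, state0, inputs, n_states):
    """Return the step index at which the trajectory first revisits a
    state.  By the pigeonhole principle, a system with n_states states
    must do so within n_states + 1 steps, whatever the input stream."""
    seen = {state0}
    state = state0
    for step, x in enumerate(inputs, start=1):
        state = transition(state, x)
        if state in seen:
            return step
        seen.add(state)
    return None  # unreachable for an unbounded input stream

# Toy "brain": 3 binary neurons -> 8 states; each input bit shifts them.
N = 8
step_fn = lambda s, x: (2 * s + x) % N
inputs = (i % 2 for i in range(10**6))  # stand-in for an endless stream
revisit_at = first_revisit(step_fn, 1, inputs, N)  # guaranteed <= N + 1
```

Brent's objection in this thread — that without synchronized time "the state" of a brain is ill defined — targets the premise the sketch takes for granted, namely that there is a well-defined global state at each step.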

> We live in an orderly world with consistent physical laws. It seems to me
> that you are suggesting that if everything possible existed then we would
> not live in such an orderly world,
>
>
> Unless the worlds were separated in some way, which current physical
> theories provide - but which is not explicable if you divorce conscious
> thoughts from physics.
>

The worlds are physically separated - there can be no communication between
separate worlds in the multiverse and none between sufficiently widely
separated copies of subsets of the world in an infinite single universe.
But the separate copies are connected insofar as they share memories and
sense of identity, even if there is no causal connection between them.


-- 
Stathis Papaioannou



Re: Holiday Exercise

2016-08-04 Thread Bruce Kellett

On 5/08/2016 3:41 am, Bruno Marchal wrote:

On 04 Aug 2016, at 04:37, Bruce Kellett wrote:

On 4/08/2016 1:04 am, Bruno Marchal wrote:

On 03 Aug 2016, at 07:16, Bruce Kellett wrote:


You use the assumption that the duplicated consciousnesses 
automatically differentiate when receiving different inputs.


It is not an assumption.


Of course it is an assumption. You have not derived it from anything 
previously in evidence.


See my answer to Brent. It is just obvious that the first person 
experience differentiates when it gets different experiences, leading to 
different memories. We *assume* computationalism. How could the diaries 
not differentiate? What you say does not make any sense.


I have been at pains to argue (in several different ways) that the 
differentiation of consciousness is not automatic. It is very easy to 
conceive of a situation in which a single consciousness continues in two 
bodies, with the streams of consciousness arising from both easily 
identifiable, but still unified in the consciousness of a single person. 
(I copy below my recent argument for this in a post replying to 
Russell.) So the differentiation you require is not necessary or 
automatic -- it has to be justified separately because it is not "just 
obvious".


Your recent expansion of the argument of step 3 in discussions with John 
Clark does not alter the situation in any way -- you still just assert 
that the differentiation takes place on the receipt of different input data.


I had thought that the argument for such differentiation of 
consciousness in different physical bodies was a consequence of some 
mind-brain identity thesis. But I am no longer sure that even that is 
sufficient -- the differentiation clearly requires separate 
bodies/brains (separate input data streams), but separate bodies are not 
sufficient for differentiation, as I have shown. What is required is a 
much stronger additional assumption, namely an association between minds 
and brains such that a mind can occupy only one brain. (Whether a single 
brain can host only one mind is a separate matter, involving one's 
attitude to the results of split brain studies and the psychological 
issues surrounding multiple personalities/minds.) In other words, the 
differentiation assumption is an additional assumption that does not 
appear to follow from either physicalism or YD+CT.


As I have further pointed out, one cannot just make this an additional 
assumption to YD+CT because it is clearly an empirical matter: until we 
have a working person duplicator, we cannot know whether differentiation 
is automatic or not. Science is, after all, empirical, not just a matter 
of definitions.


Bruce

Here is part of my discussion with Russell:

[BK]I could perhaps expand on that response. On duplication, two 
identical consciousnesses are created, and by the identity of 
indiscernibles, they form just a single consciousness. Then data is 
input. It seems to me that there is no reason why this should lead the 
initial consciousness to differentiate, or split into two. In normal 
life we get inputs from many sources simultaneously -- we see complex 
scenes, smell the air, feel impacts on our body, and hear many sounds 
from the environment. None of this leads our consciousness to 
disintegrate. Indeed, our evolutionary experience has made us adept at 
coping with these multifarious inputs and sorting through them very 
efficiently to concentrate on what is most important, while keeping 
other inputs at an appropriate level in our minds.


[BK]I have previously mentioned our ability to multitask in complex 
ways: while I am driving my car, I am aware of the car, the road, other 
traffic and so on; while, at the same time, I can be talking to my wife; 
thinking about what to cook for dinner; and reflecting on philosophical 
issues that are important to me. And this is by no means an exhaustive 
list of our ability to multitask -- to run many separate conscious 
modules within the one unified consciousness.


[BK]Given that this experience is common to us all, it is not in the 
least bit difficult to think that the adding of yet another stream of 
inputs via a separate body will not change the basic structure of our 
consciousness -- we will just take this additional data and process in 
the way we already process multiple data inputs and streams of 
consciousness. This would seem, indeed, to be the default understanding 
of the consequences of person duplication. One would have to add some 
further constraints in order for it to be clear that the separate bodies 
would necessarily have differentiated conscious streams. No such 
additional constraints are currently in evidence.
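[Editor's sketch of the purely mechanical reading of the disputed claim: two deterministic processes with identical program and initial state stay bit-identical while their inputs agree, and diverge at the first differing input. Whether this mechanical divergence is the same thing as a differentiation of *consciousness* is exactly what Bruce contests above, so the code illustrates the claim, not a resolution; the update rule and names are invented.]

```python
def run_diary(inputs, state=0):
    """A deterministic copy: same code, same initial state; the 'diary'
    is just the record of inputs this copy has received so far."""
    diary = []
    for bit in inputs:
        state = (2 * state + bit) % 997  # arbitrary deterministic update
        diary.append(bit)
    return state, diary

# Both copies are bit-identical until the W/M bit arrives.
state_w, diary_w = run_diary([0, 1, 1])  # copy that receives the 'W' bit
state_m, diary_m = run_diary([0, 1, 0])  # copy that receives the 'M' bit
# Identical prefixes, divergent final entry: the diaries now differ,
# which is all that 'differentiation' means on the mechanical reading.
```

On Bruce's view, nothing in this sketch rules out a single consciousness reading both diaries at once; the divergence of records is necessary for differentiation, but its sufficiency is the point at issue.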



PS. Please keep your personal comments and insults to yourself.

You are inventing this. Step 3 does not use step 7. Please follow the 
thread or avoid trolling the discussion. Read my exchange with Clark, 
I just give him a new proof of the FPI.


Bruce, I have to say that you 

Re: Holiday Exercise

2016-08-04 Thread Bruce Kellett

On 5/08/2016 1:28 am, Brent Meeker wrote:

On 8/4/2016 2:51 AM, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:
However, we are being asked to consider two conscious states 
where the
conscious state differs by at least one bit - the W/M bit. 
Clearly, by the YD

assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of the
single bit difference.

By that reasoning, no consciousness survives through time.

Not at all! Both conscious states survive through time by 
assumption (YD).


Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single 
consciousness through time. We get different input data all the time 
but we do not differentiate according to that data.


I could perhaps expand on that response. On duplication, two 
identical consciousnesses are created, and by the identity of 
indiscernibles, they form just a single consciousness. Then data is 
input. It seems to me that there is no reason why this should lead 
the initial consciousness to differentiate, or split into two. In 
normal life we get inputs from many sources simultaneously -- we see 
complex scenes, smell the air, feel impacts on our body, and hear 
many sounds from the environment. None of this leads our 
consciousness to disintegrate. Indeed, our evolutionary experience 
has made us adept at coping with these multifarious inputs and 
sorting through them very efficiently to concentrate on what is most 
important, while keeping other inputs at an appropriate level in our 
minds.


I have previously mentioned our ability to multitask in complex ways: 
while I am driving my car, I am aware of the car, the road, other 
traffic and so on; while, at the same time, I can be talking to my 
wife; thinking about what to cook for dinner; and reflecting on 
philosophical issues that are important to me. And this is by no 
means an exhaustive list of our ability to multitask -- to run many 
separate conscious modules within the one unified consciousness.


Given that this experience is common to us all, it is not in the 
least bit difficult to think that the adding of yet another stream of 
inputs via a separate body will not change the basic structure of our 
consciousness -- we will just take this additional data and process 
in the way we already process multiple data inputs and streams of 
consciousness. This would seem, indeed, to be the default 
understanding of the consequences of person duplication. One would 
have to add some further constraints in order for it to be clear that 
the separate bodies would necessarily have differentiated conscious 
streams. No such additional constraints are currently in evidence.


Not empirically proven constraints, but current physics strongly 
suggests that the duplicates would almost immediately, in the 
decoherence time for a brain, differentiate; i.e. the consciousness is 
not separate from the physics.  It's only "not in evidence" if your 
trying to derive the physics from the consciousness.


Of course,  that is what I was trying to get people to see: the 
additional constraint that is necessary for differentiation is 
essentially a mind-brain identity thesis. And my suspicion is that the 
mind-brain identity thesis plays havoc with the rest of Bruno's argument.


Bruce



Re: Holiday Exercise

2016-08-04 Thread Brent Meeker



On 8/4/2016 2:57 AM, Stathis Papaioannou wrote:



On 4 August 2016 at 11:16, Brent Meeker wrote:




On 8/3/2016 5:55 PM, Stathis Papaioannou wrote:



On 3 August 2016 at 16:02, Brent Meeker wrote:



On 8/2/2016 10:19 PM, Stathis Papaioannou wrote:



On Wednesday, 3 August 2016, Brent Meeker wrote:



On 8/2/2016 3:29 PM, Stathis Papaioannou wrote:



On Wednesday, 3 August 2016, Brent Meeker wrote:



On 8/2/2016 6:15 AM, Stathis Papaioannou wrote:

It's not that it can't, but rather that it
doesn't, and if it does then that would require
some extra physical explanation, a radio link
between brains or something.


That's what I mean by illegitimately appealing to
physics while claiming that physics must be derived
from computation of consciousness.


 Whatever theory we propose must be consistent with
observation.


But, "if it does then *that would require some extra
physical explanation*, a radio link between brains or
something." Is not an observation, it's an assumption
that all information transfer must be physical.


There is no convincing evidence for telepathic
communication, so a theory that predicts it should occur
would have to explain why we don't observe it.


Yes, and physical theories of consciousness do that quite
well.  But computationalist theories of consciousness can't
invoke the physics they're trying to derive.


Bruno, I believe, proposes that his theory accounts for the
universe that we observe.


ISTM his argument is of the form:

1) Consciousness is instantiated by certain computation.
2) All possible computation is realized by a UDA that exists
because arithmetic is true.
3) Then the conscious thoughts that constitute our experience of a
physical world are among those instantiated by the UDA and the
physical world need not be anything more than threads of those
computations that exhibit the consistent patterns which we explain
as an external reality.

The problem I have with this is that "arithmetic is true" doesn't
make anything, much less a UDA, exist. And the conclusion (3) just
brings in Everett's measure problem amplified to the nth degree. 
It explains too much as "existing" and doesn't assign
probabilities to anything.  So far as I can tell Bruno is just
relying on 1-3 as a "proof" that the physics we observe MUST BE
derived from the UDA.


The problem with (3) is a general problem with multiverses.  A single, 
infinite universe is an example of a multiverse theory, since there 
will be infinite copies of everything and every possible variation of 
everything, including your brain and your mind.


That implicitly assumes a digital universe, yet the theory that suggests 
it, quantum mechanics, is based on continua; which is why I don't take 
"the multiverse" too seriously.


We live in an orderly world with consistent physical laws. It seems to 
me that you are suggesting that if everything possible existed then we 
would not live in such an orderly world,


Unless the worlds were separated in some way, which current physical 
theories provide - but which is not explicable if you divorce conscious 
thoughts from physics.


Brent

and we would not be able to have coherent thoughts. So the fact that 
we do have coherent thoughts implies that multiverses cannot exist, 
and we must live in a finite universe. That seems a lot to conclude 
from the mere fact that you are able to think.



--
Stathis Papaioannou




Re: Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-04 Thread John Clark
On Thu, Aug 4, 2016 at 12:15 PM, Marchal  wrote:
>
> ​> ​
>> The question is not about duplication.
>
>
> ​OK.​
>
>
> And that part is still OK. Assigning probabilities about what "YOU" will
see next is not ambiguous as long as YOU duplicating machines are not
around.

> So, you are OK that the guy in Helsinki writes P("drinking coffee") = 1.

The guy in Helsinki? NO!!! Bruno Marchal said "The question is not about
duplication" but the guy in Helsinki is just about to walk into a *YOU*
duplicating machine, so John Clark will not assign any probability of any
sort about the one and only one thing that will happen to "*YOU*".

It's just plain dumb.


> Now, the guy in Helsinki is told that we have put a painting by Van Gogh
> in one of the reconstitution boxes, and a painting by Monet in the other
> reconstitution box.

Let's see if John Clark can guess what's coming. After "*YOU*" have been
duplicated by a *YOU* duplicating machine, what is the probability that
"*YOU*" will blah blah blah. What on earth made Bruno Marchal think that
substituting a painting for a cup of coffee would make things less
ambiguous?

> The key point here is that we don't tell you which reconstitution box
> contains which painting. [...]

Why is that the key point? Suppose we change the experiment and this time,
before the experiment, we tell "*YOU*" which box contains which painting:
we tell "*YOU*" that the red box on the left contains the Van Gogh and the
blue box on the right contains the Monet, and we tell "*YOU*" that after
"*YOU*" are duplicated by the *YOU* duplicating machine "*YOU*" will be in
both boxes. Does that information help in the slightest way in determining
what one and only one painting "*YOU*" will see after "*YOU*" are
duplicated? It's just plain dumb.

> P("being uncertain about which city is behind the door")

P is equal to whose uncertainty? After the experiment is over, how do we
determine what the true value of P turned out to be? To find out that
value we need to ask "*YOU*" what "*YOU*" saw after "*YOU*" walked into
the *YOU* duplicating machine and opened one and only one door. But who
exactly do we ask? We can't ask the Helsinki man as he's no longer around;
oh I know, we ask "*YOU*".


> OK?

No, it's not OK; it's about as far from OK as things get.

> Can we move to step 4?

Just as soon as Bruno Marchal explains what one and only one thing "*YOU*"
refers to in a world with "*YOU*" duplicating machines.

John K Clark ​



Re: Holiday Exercise

2016-08-04 Thread Bruno Marchal


On 04 Aug 2016, at 04:37, Bruce Kellett wrote:


On 4/08/2016 1:04 am, Bruno Marchal wrote:

On 03 Aug 2016, at 07:16, Bruce Kellett wrote:

On 3/08/2016 2:55 am, Bruno Marchal wrote:

On 02 Aug 2016, at 14:40, Bruce Kellett wrote:

On 2/08/2016 3:07 am, Bruno Marchal wrote:

On 01 Aug 2016, at 09:04, Bruce Kellett wrote:

Consider ordinary consequences of introspection: I can be  
conscious of several unrelated things at once. I can be  
driving my car, conscious of the road and traffic conditions  
(and responding to them appropriately), while at the same time  
carrying on an intelligent conversation with my wife, thinking  
about what I will make for dinner, and, in the back of my mind  
thinking about a philosophical email exchange. These, and many  
other things, can be present to my conscious mind at the same  
time. I can bring any one of these things to the forefront of  
my mind at will, but processing of the separate streams goes  
on all the time.


Given this, it is quite easy to imagine that a subset of these  
simultaneous streams of consciousness might be associated with  
myself in a different body -- in a different place at a  
different time. I would be aware of things happening to the  
other body in real time in my own consciousness -- because  
they would, in fact, be happening to me.


If you dissociate consciousness from an actual single brain,  
then these things are quite conceivable.


Dissociating consciousness from any actual single brain is what  
UDA explains in detail. Then the math shows that this  
dissociation run even deeper, as your 1p consciousness is  
associated with the infinitely many relative and faithful (at  
the correct substitution level or below) state in the (sigma_1)  
arithmetical relations.


Duplication experiments would then be a real test of the  
hypothesis that consciousness could be separated from the  
physical brain. If the duplicates are essentially separate  
conscious beings, unaware of the thoughts and happenings of  
the other, then consciousness is tied to a particular physical  
brain (or brain substitute).


Not at all, but it might look like that at that stage, but what  
you say does not follow from computationalism. The same  
consciousness present at both place before the door is open  
*only* differentiated when they get the different bit of  
information W or M.


However, if consciousness is actually an abstract computation  
that is tied to a physical brain only in a statistical sense,  
then we should expect that the single consciousness could  
inhabit several bodies simultaneously.


It is irrelevant to decide how many consciousnesses or first  
persons there are. We need only to listen to those which have  
differentiated to extract the statistics.


The point that I am trying to make here is that a person's  
consciousness at any moment can consist of many independent  
threads. From this I speculate that some of these separate  
threads could actually be associated with separate physical  
bodies. In other words, it is conceivable that a duplication  
experiment would not result in two separate consciousnesses, but  
a single consciousness in separate bodies. If this is so, the  
fact that the separate bodies receive different inputs does not  
necessarily mean that they differentiate into separate conscious  
beings, any more than the fact that I receive different inputs  
from moment to moment means that I dissociate into multiple  
consciousnesses.


It seems that the only reason that one might expect that the  
different inputs experienced by the separate duplicates would  
lead to a differentiation of the consciousnesses -- i.e., two  
separate and distinct conscious beings -- is that one is  
implicitly making the physicalist assumption that a single  
consciousness is necessarily associated with a single body, such  
that separate physical bodies necessarily have separate  
consciousnesses.


I suggest that for step 3 to go through, you need to demonstrate  
that computationalism requires that a single consciousness  
cannot inhabit two or more separate physical bodies: without  
such a demonstration you cannot conclude that W is not a  
possible outcome that the duplicated person could experience.  
You must demonstrate that different inputs lead to a  
differentiation of the consciousnesses in the duplication case,  
while not so differentiating the consciousness of a single  
person. The required demonstration must be based on the  
assumptions of computationalism alone, you cannot rely on  
physics that is not yet in evidence.


In other words, start from your basic assumptions:
(1) The "yes doctor" hypothesis;
(2) The Church-Turing thesis; and
(3) Arithmetical realism;


(3) is redundant. There is no (2) without (3).


Yes there is. Arithmetical realism, as you use the term, is  
different from the ability to calculate.


No. I define arithmetical realism by the belief in elementary  
arithmetic.

Re: Holiday Exercise

2016-08-04 Thread Bruno Marchal


On 04 Aug 2016, at 03:16, Brent Meeker wrote:




On 8/3/2016 5:55 PM, Stathis Papaioannou wrote:



On 3 August 2016 at 16:02, Brent Meeker  wrote:


On 8/2/2016 10:19 PM, Stathis Papaioannou wrote:



On Wednesday, 3 August 2016, Brent Meeker   
wrote:



On 8/2/2016 3:29 PM, Stathis Papaioannou wrote:



On Wednesday, 3 August 2016, Brent Meeker   
wrote:



On 8/2/2016 6:15 AM, Stathis Papaioannou wrote:
It's not that it can't, but rather that it doesn't, and if it  
does then that would require some extra physical explanation, a  
radio link between brains or something.


That's what I mean by illegitimately appealing to physics while  
claiming that physics must be derived from computation of  
consciousness.


 Whatever theory we propose must be consistent with observation.


But, "if it does then that would require some extra physical  
explanation, a radio link between brains or something." Is not an  
observation, it's an assumption that all information transfer must  
be physical.


There is no convincing evidence for telepathic communication, so a  
theory that predicts it should occur would have to explain why we  
don't observe it.


Yes, and physical theories of consciousness do that quite well.   
But computationalist theories of consciousness can't invoke the  
physics they're trying to derive.


Bruno, I believe, proposes that his theory accounts for the  
universe that we observe.


ISTM his argument is of the form:

1) Consciousness is instantiated by certain computation.
2) All possible computation is realized by a UDA that exists because  
arithmetic is true.
3) Then the conscious thoughts that constitute our experience of a  
physical world are among those instantiated by the UDA and the  
physical world need not be anything more than threads of those  
computations that exhibit the consistent patterns which we explain  
as an external reality.



Not correct. By the (step 7) global FPI on the (sigma-1) arithmetic,  
we have to extract physics from the statistics on *all* computations  
(universal number in activity).


That is the key, making mechanism testable. It is not a theory of  
mine, it is a problem in a well accepted theory. It works on machines  
+ oracles too.






The problem I have with this is that "arithmetic is true" doesn't  
make anything, much less a UDA, exist.


UDA is my Argument.

Use UD for the Universal Dovetailer program, and UD* for its  
execution, in a physical universe (step 7), or in the sigma_1  
arithmetic (step 8).
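[Editor's note: for readers unfamiliar with the term, dovetailing is the standard trick for "running all programs at once" without letting any non-halting program starve the rest — in round k, admit the k-th program and advance every admitted program by one step. Here is a toy sketch with Python generators standing in for programs; the real UD enumerates all Turing-machine computations, so this shows only the interleaving, and all names are invented:]

```python
def dovetail(program_makers, rounds):
    """In round k (1-based), admit program k and run every admitted
    program for one more step.  Returns the interleaved execution
    trace as a list of program indices."""
    trace, running = [], []
    for k in range(1, rounds + 1):
        if k <= len(program_makers):
            running.append(program_makers[k - 1]())  # admit program k
        for i, gen in enumerate(running):
            try:
                next(gen)        # one step of program i
                trace.append(i)
            except StopIteration:
                pass             # program i has halted; skip it
    return trace

def looper():                    # a program that never halts
    while True:
        yield

def halter(n):                   # a program that halts after n steps
    def prog():
        for _ in range(n):
            yield
    return prog

trace = dovetail([looper, halter(2), halter(1)], rounds=5)
# Every program, halting or not, gets unboundedly many steps: the
# non-halting looper keeps running while the halters finish and drop out.
```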


Step 8 is not the proof that arithmetic generates and runs UD* (which is  
an easy exercise in introductory books in the domain), but is the  
explanation of why assuming the physical cannot help with the mind-body  
problem once we assume digital mechanism.


I only show that with digital mechanism, we can take the mind-body  
problem back from under the rug.


 It is not yet solved, although the math apparently solves the  
propositional part of the problem, we could say. UDA is not a theory;  
it is the enunciation of a problem, which we tend to abstract from  
since we have handed the theological science over to politics, roughly  
speaking.


If we had evidence that the logic of the observable is boolean,  
and not quantum, then classical computationalism would be in trouble.






  And the conclusion (3) just brings in Everett's measure problem  
amplified to the nth degree.


Exactly. But the math shows already that the "explosion of  
1p-realities" is pretty well confined.





It explains too much as "existing" and doesn't assign probabilities  
to anything.  So far as I can tell Bruno is just relying on 1-3 as a  
"proof" that the physics we observe MUST BE derived from the UDA.


Yes, but that was a lightning bolt in my young days. The work is what has  
been done by Gödel, Löb, Solovay, ... and also Post, Turing, Church,  
and many others, and which has made it possible to see the shape of the  
(neopythagorean) solution (that has been 30 years of work). UDA is the  
enunciation of the problem, and AUDA is the translation of the problem  
into the language of the Löbian numbers, and what they can already  
answer, or more aptly, what we can already listen to.


I am just serious on the mind-body problem, in the mechanist frame.

Only devoted super-bigot Christian fundamentalist atheists have a  
problem, and perhaps even only those molesting children in secret  
(that is the difference between the temple and the church: the church  
molests the kids publicly, the temple molests the kids secretly).


Then it is obvious that when you see how hard it is for people to get  
that the dangers of cannabis are lies (even though those lies are only  
75 years old, and rather gross), you can imagine that, with the 'Glass  
of Milk', it will take time to recover from the 1500 years of lies,  
superstition, wishful thinking, etc.
As things are going, it looks like we try to prepare 

Re: Holiday Exercise

2016-08-04 Thread Platonist Guitar Cowboy
On Thu, Aug 4, 2016 at 5:15 PM, Bruno Marchal  wrote:

>
> On 03 Aug 2016, at 21:01, Brent Meeker wrote:
>
>
>>
>> On 8/3/2016 7:26 AM, Bruno Marchal wrote:
>>
>>>
>>> On 02 Aug 2016, at 20:52, Brent Meeker wrote:
>>>
>>>

 On 8/2/2016 6:15 AM, Stathis Papaioannou wrote:

> It's not that it can't, but rather that it doesn't, and if it does
> then that would require some extra physical explanation, a radio link
> between brains or something.
>

 That's what I mean by illegitimately appealing to physics while
 claiming that physics must be derived from computation of consciousness.

>>>
>>> Are you OK with Clark's answer to question 1?
>>> What about question 2?
>>>
>>
>> I've given up following your exchanges with Clark.  They seem to be about
>> semantics.
>>
>
> It is enough to interpret "gibberish" by "I have no more argument".
>
>
You can't fault people for not reading "gibberish" which you continue to
entertain daily, as worthy of replies and the full attention of the list,
for years. If it is gibberish, why the years of daily replies in my inbox?

Exchanges of friendship? I'm not sure friends talk to each other that way
for years, where it remains unclear from both sides whether there even is a
real difference between all the vain orgies of righteous hair-splitting. To
prove one's stature and erudition? Regardless of the content, the guy
entertaining gibberish, telling people to interpret it as "I have no
argument" so that he consequently "has the real argument", doesn't paint
the best picture. Blatant assumption of superiority in a context of
supposedly rational exchange. Decidedly un-mystical and dogmatic from both
of you. Both are using each other and the list for more self-promotion than
discussion, and should count themselves lucky to have each other, as both
obviously fulfill each other's need.

You could thank each other for providing yourselves with raison d'etre.
That would be funny: a photo of you both having a beer instead of all the
disguised rancour.

For those that hope for some political or theological reference/orientation
that comp may provide, look right here: this is merely human same old.
Establishing and keeping a medieval pecking order, enforcing it with old
tactics, fetishizing status and names => THIS is the theory of personal
identity and the "scientific reasoning" it has always produced that Bruno
points toward. Somebody indeed bring us a teleportation device.

If anything the broader tendency here tarnishes possible points from any
side as closer to advertising, forcing opinions with linguistic games etc.
Welcome to the internet. PGC

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Step 4? (Re: QUESTION 2 (Re: Holiday Exercise

2016-08-04 Thread Bruno Marchal


On 03 Aug 2016, at 19:20, John Clark wrote:

On Tue, Aug 2, 2016 at 2:56 PM, Bruno Marchal   
wrote:


> The question is not about duplication.

OK.

> Do you agree that if today someone is "sure" that tomorrow (or at any  
precise later time) he will be uncertain of the outcome of a certain  
experience, then he can say, today, that he is uncertain about that  
future outcome.


Sure, he can say whatever he wants, because being sure and being  
correct are two entirely different things.


> For example, if I promise myself to buy a lottery ticket next  
week, I am pretty sure now that next week I will be unsure of  
winning something


I've been known to break promises to myself before. If I didn't  
buy the ticket I'd be absolutely certain I won't win the lottery next  
week; if I do buy the ticket I'd be almost certain I won't win next  
week. I'll have to wait till next week to find out if, in addition to  
being certain, I was also correct. And because you said right at the  
start that people-duplicating machines are not involved this time,  
personal pronouns can be used without ambiguity.


> or not with that ticket, so I consider myself to be uncertain  
right now about winning or not winning the lottery next week. So I  
repeat, the principle questioned here says that if, at t_0,  
P("I will be uncertain of the outcome of some experience at t_1") = 1,  
then the outcome of the experience at t_1 is uncertain at t_0.


You can be certain and wrong, and uncertain and correct. I will  
say that if I don't know fact X tomorrow but I do know fact X now,  
then sometime between today and tomorrow part of my memory must have  
been erased. It's called "forgetting". But I haven't forgotten  
you said "The question is not about duplication", and that means "I"-  
duplicating machines are not involved, and that is the only reason  
it wasn't gibberish when you said "I will be uncertain of the...".



OK.

So, you are OK that the guy in Helsinki writes P("drinking coffee") =  
1. (Question 1.)


I notice also that you did not mention that the coffee should taste  
exactly the same; I could have just proposed a hot drink, and we would  
still have P("drinking a hot drink") = 1. All right?


And you agree (question 2) that if I am pretty sure that tomorrow I will  
have an experience with a random/uncertain result (lottery, quantum  
lottery, whatever), I can say that I am already uncertain today about  
the result of that experience, assuming I keep my promise to myself to  
do the experience, of course (buying the lottery ticket, measuring that  
spin, etc.).


Good!

Now I will prove, assuming computationalism (alias the digital mechanist  
hypothesis in cognitive science), that there is a first person  
indeterminacy in a modified step 3 protocol. Then I will  
explain that the modification does not change the uncertainty, and  
thus prove step 3.



Now, the guy in Helsinki is told that we have put a painting by Van  
Gogh in one of the reconstitution boxes, and a painting by Monet in the  
other. The key point here is that we don't tell him which  
reconstitution box contains which painting. After the  
reconstitutions, the doors will remain closed for some short time,  
which I call delta-t_1, so that t_0 is when the guy is in Helsinki,  
and delta-t_2 is the interval of time during which the reconstitutions  
are done, simultaneously (say) in Washington and in Moscow.


The guy in Helsinki reasons like this: by the question 1 principle,  
P("seeing a painting") = 1, given that there will be a painting in  
both reconstitution boxes. Now, by digital mechanism, both copies will  
see different paintings, given that they have been reconstituted in  
different boxes containing different paintings. But the difference  
between the paintings differentiates the first person experience of  
each copy, and they know that. Both will see a specific painting,  
like a Monet or a Van Gogh, and both will conclude that, by seeing the  
painting, they have already differentiated, so that the city behind the  
door is already determined. But as we have not told the guy in  
Helsinki where the paintings have been placed, the differentiation is  
not enough for them to deduce with certainty which city is behind the  
door. The guy in Helsinki has thus just proved that P("being uncertain  
about which city is behind the door") = 1, in the same sense as the  
question 1 principle (if X occurs at both places then P(X) = 1).
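
Not part of the thread, but the bookkeeping of this painting protocol can be sketched in a few lines of Python. The city names and painter names come from the post above; everything else (the dictionary encoding of the secret assignment) is an illustrative assumption:

```python
from itertools import permutations

# Sketch of the step-3 painting protocol: paintings are secretly
# assigned to the two reconstitution boxes, so a copy who sees a
# painting still cannot deduce which city it is in.
cities = ("Washington", "Moscow")
paintings = ("Van Gogh", "Monet")

# From the copies' point of view, both secret assignments are possible.
assignments = [dict(zip(cities, p)) for p in permutations(paintings)]

for seen in paintings:
    # P("seeing a painting") = 1: every copy sees some painting.
    # But each painting is consistent with both cities, so
    # P("being uncertain about the city") = 1 as well.
    consistent_cities = {c for a in assignments for c in cities if a[c] == seen}
    assert consistent_cities == set(cities)

print("every copy sees a painting, yet the city stays undetermined")
```

The assertion holds for both paintings: seeing a Monet (or a Van Gogh) marks the differentiation without lifting the uncertainty about the city.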


The guy in Helsinki expects (with P = 1, modulo the assumption and default  
hypotheses) to get a cup of coffee, to see a painting, and to live  
through an interval of time during which he will be aware that the  
differentiation has occurred, despite not knowing which city is behind  
the doors. By the question 2 principle, he is already uncertain about  
the outcome of the opening of the door tomorrow. The delta-t_2 interval  
of uncertainty is lifted to the day before.


Now, obviously, 

Re: Holiday Exercise

2016-08-04 Thread Brent Meeker



On 8/4/2016 2:51 AM, Bruce Kellett wrote:

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:
However, we are being asked to consider two conscious states where the
conscious state differs by at least one bit - the W/M bit. Clearly, by the YD
assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of the
single bit difference.

By that reasoning, no consciousness survives through time.

Not at all! Both conscious states survive through time by assumption 
(YD).


Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single 
consciousness through time. We get different input data all the time 
but we do not differentiate according to that data.


I could perhaps expand on that response. On duplication, two identical 
consciousnesses are created, and by the identity of indiscernibles, 
they form just a single consciousness. Then data is input. It seems to 
me that there is no reason why this should lead the initial 
consciousness to differentiate, or split into two. In normal life we 
get inputs from many sources simultaneously -- we see complex scenes, 
smell the air, feel impacts on our body, and hear many sounds from the 
environment. None of this leads our consciousness to disintegrate. 
Indeed, our evolutionary experience has made us adept at coping with 
these multifarious inputs and sorting through them very efficiently to 
concentrate on what is most important, while keeping other inputs at 
an appropriate level in our minds.


I have previously mentioned our ability to multitask in complex ways: 
while I am driving my car, I am aware of the car, the road, other 
traffic and so on; while, at the same time, I can be talking to my 
wife; thinking about what to cook for dinner; and reflecting on 
philosophical issues that are important to me. And this is by no means 
an exhaustive list of our ability to multitask -- to run many separate 
conscious modules within the one unified consciousness.


Given that this experience is common to us all, it is not in the least 
bit difficult to think that the adding of yet another stream of inputs 
via a separate body will not change the basic structure of our 
consciousness -- we will just take this additional data and process in 
the way we already process multiple data inputs and streams of 
consciousness. This would seem, indeed, to be the default 
understanding of the consequences of person duplication. One would 
have to add some further constraints in order for it to be clear that 
the separate bodies would necessarily have differentiated conscious 
streams. No such additional constraints are currently in evidence.


Not empirically proven constraints, but current physics strongly 
suggests that the duplicates would almost immediately, within the 
decoherence time for a brain, differentiate; i.e. the consciousness is 
not separate from the physics.  It's only "not in evidence" if you're 
trying to derive the physics from the consciousness.


Brent




Bruce





Re: Holiday Exercise

2016-08-04 Thread Bruno Marchal


On 03 Aug 2016, at 21:01, Brent Meeker wrote:




On 8/3/2016 7:26 AM, Bruno Marchal wrote:


On 02 Aug 2016, at 20:52, Brent Meeker wrote:




On 8/2/2016 6:15 AM, Stathis Papaioannou wrote:
It's not that it can't, but rather that it doesn't, and if it  
does then that would require some extra physical explanation, a  
radio link between brains or something.


That's what I mean by illegitimately appealing to physics while  
claiming that physics must be derived from computation of  
consciousness.


Are you OK with Clark's answer to question 1?
What about question 2?


I've given up following your exchanges with Clark.  They seem to be  
about semantics.


It is enough to interpret "gibberish" by "I have no more argument".





 I don't see any problem with your arguments there,


OK.



except recognizing that it is an assumption that duplicated brains  
will have separate consciousness.



I don't use the term consciousness, except in replies to those who do.



I need only that the duplicated memories (diaries) get the notice or  
the imprint of the memory of seeing Moscow (respectively Washington).


And that follows trivially from the digital mechanist assumption DM,  
or from P=1 (conditioned on DM and the default hypotheses). "P=1"  
itself will be shown to belong to G* minus G. It needs an act of faith.


It is not relevant, at this stage, to identify or not "I see Moscow"  
in the first person sense, with "I am conscious that I see Moscow".


To get the reversal, the differentiation of the first person diaries  
is quite enough. OK?






That's probably true, based on a physicalist model.  Whether it is  
probable on a computationalist model is less clear.


You survive intact, with your eyes intact, and the first person  
discourse is defined by the memory of the outcome of the first person  
self-localization. They do both see different cities. Assuming DM, it  
is plainly obvious that the first person discourses (and consciousnesses)  
differentiate. If each copy repeats the experience 10 times, the many  
first person discourses available will differentiate into 2^10  
discourses, from the guy with the (conscious) first person experience  
WW...W to the guy with the (conscious) first person experience  
MM...M, and the 1022 (conscious) first person experiences in between.
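
The counting in the previous paragraph can be checked mechanically. A small Python sketch, under the assumption (mine, not stated in the post) that each of the 10 iterated duplications appends one letter, W or M, to the diary:

```python
from itertools import product

# Each of the 10 iterated duplications appends one letter, W or M,
# to a diary, so a complete first-person history is a 10-letter string.
histories = ["".join(h) for h in product("WM", repeat=10)]

assert len(histories) == 2 ** 10            # 1024 distinct discourses
extremes = {"W" * 10, "M" * 10}             # the all-W and all-M guys
mixed = [h for h in histories if h not in extremes]
assert len(mixed) == 1022                   # the experiences in between

print(len(histories), "discourses,", len(mixed), "strictly mixed")
```

So 2^10 = 1024 histories, of which all but the two constant ones, 1022, lie "in between", matching the figure in the post.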


It is irrelevant whether we talk about an emulation of all this in  
arithmetic or in some physical reality. We need only to assume some  
stable relative number relations, notably the measurements that  
physicists make and extrapolate through physical assumptions usually  
expressed in mathematical formulas.


I use physics in the way Turing made his machine look physical, unlike  
Church's lambda calculus, or Robinson arithmetic.










Your argument is not valid here. We can make local assumptions, and  
then eliminate them later, which is done in this case in step 7  
(with or without step 8).


You can't make a "local" assumption in step 3, use it to argue for  
step 7, and then use the truth of step 7 as evidence for the assumption  
in step 3.  That's circular.



I don't do that at all.

I assume a physical reality, to make things easier. I do not assume it  
to be primary, and the contradiction that we will obtain is only with  
the idea that such a physical reality is primary and that it is the  
selector of the (conscious) histories.


There is no problem at all with physics. I would not have taken so  
much time to show that DM is empirically testable if I did not believe  
in a physical reality, and in its importance in the search for truth.


The problem is only with physicalism (and with weak materialism).

Let us discuss step seven when everybody agrees on step 3.

Oh! I see other posts by you. I comment on them here.


No, because the physical assumption is eliminated at step 7.


But I don't think that step is correct.  As I've argued several  
times I don't think there can be consciousness without a physical  
context.


And as I replied, you are correct. But that is not relevant to the  
understanding of the reversal imposed by DM.



The point is that the physical context does not need to be primary for  
having consciousness.


The physical context is, so to speak, one half historico-geographical  
and one half theological, where theological means, here, the unknown  
first person result of the infinitely many universal numbers which  
compete (in elementary arithmetic) to bring your most probable  
continuations.
It is the many-computations interpretation of arithmetic made by the  
universal numbers in arithmetic. That is testable, and the MWI of QM  
confirms it, intuitively and formally.


You need to understand that the models (the intended realities/meanings,  
in the logician's sense of model) of Robinson arithmetic realize or  
emulate all computations. That is the part I was asked to  
suppress in my French thesis, as it is too well known, and  
trivial, once we assume the 

Re: Holiday Exercise

2016-08-04 Thread Stathis Papaioannou
On 4 August 2016 at 11:16, Brent Meeker  wrote:

>
>
> On 8/3/2016 5:55 PM, Stathis Papaioannou wrote:
>
>
>
> On 3 August 2016 at 16:02, Brent Meeker  wrote:
>
>>
>>
>> On 8/2/2016 10:19 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wednesday, 3 August 2016, Brent Meeker  wrote:
>>
>>>
>>>
>>> On 8/2/2016 3:29 PM, Stathis Papaioannou wrote:
>>>
>>>
>>>
>>> On Wednesday, 3 August 2016, Brent Meeker  wrote:
>>>


 On 8/2/2016 6:15 AM, Stathis Papaioannou wrote:

> It's not that it can't, but rather that it doesn't, and if it does
> then that would require some extra physical explanation, a radio link
> between brains or something.
>

 That's what I mean by illegitimately appealing to physics while
 claiming that physics must be derived from computation of consciousness.

>>>
>>>  Whatever theory we propose must be consistent with observation.
>>>
>>>
>>> But, "if it does then* that would require some extra physical
>>> explanation*, a radio link between brains or something." Is not an
>>> observation, it's an assumption that all information transfer must be
>>> physical.
>>>
>>
>> There is no convincing evidence for telepathic communication, so a
>> theory that predicts it should occur would have to explain why we don't
>> observe it.
>>
>>
>> Yes, and physical theories of consciousness do that quite well.  But
>> computationalist theories of consciousness can't invoke the physics they're
>> trying to derive.
>>
>
> Bruno, I believe, proposes that his theory accounts for the universe that
> we observe.
>
>
> ISTM his argument is of the form:
>
> 1) Consciousness is instantiated by certain computation.
> 2) All possible computation is realized by a UDA that exists because
> arithmetic is true.
> 3) Then the conscious thoughts that constitute our experience of a
> physical world are among those instantiated by the UDA and the physical
> world need not be anything more than threads of those computations that
> exhibit the consistent patterns which we explain as an external reality.
>
> The problem I have with this is that "arithmetic is true" doesn't make
> anything, much less a UDA, exist.  And the conclusion (3) just brings in
> Everett's measure problem amplified to the nth degree.  It explains too
> much as "existing" and doesn't assign probabilities to anything.  So far as
> I can tell Bruno is just relying on 1-3 as a "proof" that the physics we
> observe MUST BE derived from the UDA.
>

The problem with (3) is a general problem with multiverses.  A single,
infinite universe is an example of a multiverse theory, since there will be
infinite copies of everything and every possible variation of everything,
including your brain and your mind. We live in an orderly world with
consistent physical laws. It seems to me that you are suggesting that if
everything possible existed then we would not live in such an orderly
world, and we would not be able to have coherent thoughts. So the fact that
we do have coherent thoughts implies that multiverses cannot exist, and we
must live in a finite universe. That seems a lot to conclude from the mere
fact that you are able to think.


-- 
Stathis Papaioannou



Re: Holiday Exercise

2016-08-04 Thread Bruce Kellett

On 4/08/2016 6:00 pm, Bruce Kellett wrote:

On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:

However, we are being asked to consider two conscious states where the
conscious state differs by at least one bit - the W/M bit. Clearly, 
by the YD

assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of the
single bit difference.

By that reasoning, no consciousness survives through time.

Not at all! Both conscious states survive through time by assumption 
(YD).


Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single consciousness 
through time. We get different input data all the time but we do not 
differentiate according to that data.


I could perhaps expand on that response. On duplication, two identical 
consciousnesses are created, and by the identity of indiscernibles, they 
form just a single consciousness. Then data is input. It seems to me 
that there is no reason why this should lead the initial consciousness 
to differentiate, or split into two. In normal life we get inputs from 
many sources simultaneously -- we see complex scenes, smell the air, 
feel impacts on our body, and hear many sounds from the environment. 
None of this leads our consciousness to disintegrate. Indeed, our 
evolutionary experience has made us adept at coping with these 
multifarious inputs and sorting through them very efficiently to 
concentrate on what is most important, while keeping other inputs at an 
appropriate level in our minds.


I have previously mentioned our ability to multitask in complex ways: 
while I am driving my car, I am aware of the car, the road, other 
traffic and so on; while, at the same time, I can be talking to my wife; 
thinking about what to cook for dinner; and reflecting on philosophical 
issues that are important to me. And this is by no means an exhaustive 
list of our ability to multitask -- to run many separate conscious 
modules within the one unified consciousness.


Given that this experience is common to us all, it is not in the least 
bit difficult to think that the adding of yet another stream of inputs 
via a separate body will not change the basic structure of our 
consciousness -- we will just take this additional data and process in 
the way we already process multiple data inputs and streams of 
consciousness. This would seem, indeed, to be the default understanding 
of the consequences of person duplication. One would have to add some 
further constraints in order for it to be clear that the separate bodies 
would necessarily have differentiated conscious streams. No such 
additional constraints are currently in evidence.


Bruce



Re: Holiday Exercise

2016-08-04 Thread Bruce Kellett



On 4/08/2016 5:52 pm, Russell Standish wrote:

On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:

On 3/08/2016 12:01 pm, Russell Standish wrote:

However, we are being asked to consider two conscious states where the
conscious state differs by at least one bit - the W/M bit. Clearly, by the YD
assumption, both states are survivor states from the original
conscious state, but are not the same consciousness because of the
single bit difference.

By that reasoning, no consciousness survives through time.


Not at all! Both conscious states survive through time by assumption (YD).

Methinks you are unnecessarily assuming transitivity again.


No, I was just referring to the continuation of a single consciousness 
through time. We get different input data all the time but we do not 
differentiate according to that data.


Bruce



Re: Holiday Exercise

2016-08-04 Thread Russell Standish
On Wed, Aug 03, 2016 at 04:27:21PM +1000, Bruce Kellett wrote:
> On 3/08/2016 12:01 pm, Russell Standish wrote:
> >
> >However, we are being asked to consider two conscious states where the
> >conscious state differs by at least one bit - the W/M bit. Clearly, by the YD
> >assumption, both states are survivor states from the original
> >conscious state, but are not the same consciousness because of the
> >single bit difference.
> 
> By that reasoning, no consciousness survives through time.
> 

Not at all! Both conscious states survive through time by assumption (YD).

Methinks you are unnecessarily assuming transitivity again.

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: Holiday Exercise

2016-08-03 Thread Bruce Kellett

On 4/08/2016 1:04 am, Bruno Marchal wrote:

On 03 Aug 2016, at 07:16, Bruce Kellett wrote:

On 3/08/2016 2:55 am, Bruno Marchal wrote:

On 02 Aug 2016, at 14:40, Bruce Kellett wrote:

On 2/08/2016 3:07 am, Bruno Marchal wrote:

On 01 Aug 2016, at 09:04, Bruce Kellett wrote:

Consider ordinary consequences of introspection: I can be 
conscious of several unrelated things at once. I can be driving 
my car, conscious of the road and traffic conditions (and 
responding to them appropriately), while at the same time 
carrying on an intelligent conversation with my wife, thinking 
about what I will make for dinner, and, in the back of my mind 
thinking about a philosophical email exchange. These, and many 
other things, can be present to my conscious mind at the same 
time. I can bring any one of these things to the forefront of my 
mind at will, but processing of the separate streams goes on all 
the time.


Given this, it is quite easy to imagine that a subset of these 
simultaneous streams of consciousness might be associated with 
myself in a different body -- in a different place at a different 
time. I would be aware of things happening to the other body in 
real time in my own consciousness -- because they would, in fact, 
be happening to me.


If you dissociate consciousness from an actual single brain, then 
these things are quite conceivable.


Dissociating consciousness from any actual single brain is what 
UDA explains in detail. Then the math shows that this dissociation 
runs even deeper, as your 1p consciousness is associated with the 
infinitely many relative and faithful (at the correct substitution 
level or below) states in the (sigma_1) arithmetical relations.


Duplication experiments would then be a real test of the 
hypothesis that consciousness could be separated from the 
physical brain. If the duplicates are essentially separate 
conscious beings, unaware of the thoughts and happenings of the 
other, then consciousness is tied to a particular physical brain 
(or brain substitute).


Not at all, though it might look like that at that stage; what 
you say does not follow from computationalism. The same consciousness, 
present at both places before the door is open, is *only* 
differentiated when the copies get the different bits of information W or M.


However, if consciousness is actually an abstract computation 
that is tied to a physical brain only in a statistical sense, 
then we should expect that the single consciousness could inhabit 
several bodies simultaneously.


It is irrelevant to decide how many consciousnesses or first persons 
there are. We need only to listen to those which have 
differentiated to extract the statistics.


The point that I am trying to make here is that a person's 
consciousness at any moment can consist of many independent 
threads. From this I speculate that some of these separate threads 
could actually be associated with separate physical bodies. In 
other words, it is conceivable that a duplication experiment would 
not result in two separate consciousnesses, but a single 
consciousness in separate bodies. If this is so, the fact that the 
separate bodies receive different inputs does not necessarily mean 
that they differentiate into separate conscious beings, any more 
than the fact that I receive different inputs from moment to moment 
means that I dissociate into multiple consciousnesses.


It seems that the only reason that one might expect that the 
different inputs experienced by the separate duplicates would lead 
to a differentiation of the consciousnesses -- i.e., two separate 
and distinct conscious beings -- is that one is implicitly making 
the physicalist assumption that a single consciousness is 
necessarily associated with a single body, such that separate 
physical bodies necessarily have separate consciousnesses.


I suggest that for step 3 to go through, you need to demonstrate 
that computationalism requires that a single consciousness cannot 
inhabit two or more separate physical bodies: without such a 
demonstration you cannot conclude that W is not a possible 
outcome that the duplicated person could experience. You must 
demonstrate that different inputs lead to a differentiation of the 
consciousnesses in the duplication case, while not so 
differentiating the consciousness of a single person. The required 
demonstration must be based on the assumptions of computationalism 
alone, you cannot rely on physics that is not yet in evidence.


In other words, start from your basic assumptions:
(1) The "yes doctor" hypothesis;
(2) The Church-Turing thesis; and
(3) Arithmetical realism;


(3) is redundant. There is no (2) without (3).


Yes there is. Arithmetical realism, as you use the term, is different 
from the ability to calculate.


No. I define arithmetical realism by the belief in elementary arithmetic.


I don't "believe in" elementary arithmetic -- I use it to do calculations.

I have even redefined an Arithmetical Realist by someone who 

Re: Holiday Exercise

2016-08-03 Thread Brent Meeker



On 8/3/2016 5:55 PM, Stathis Papaioannou wrote:



On 3 August 2016 at 16:02, Brent Meeker > wrote:




On 8/2/2016 10:19 PM, Stathis Papaioannou wrote:



On Wednesday, 3 August 2016, Brent Meeker 
wrote:



On 8/2/2016 3:29 PM, Stathis Papaioannou wrote:



On Wednesday, 3 August 2016, Brent Meeker
 wrote:



On 8/2/2016 6:15 AM, Stathis Papaioannou wrote:

It's not that it can't, but rather that it doesn't,
and if it does then that would require some extra
physical explanation, a radio link between brains or
something.


That's what I mean by illegitimately appealing to
physics while claiming that physics must be derived from
computation of consciousness.


 Whatever theory we propose must be consistent with observation.


But "if it does then that would require some extra physical
explanation, a radio link between brains or something" is
not an observation; it's an assumption that all information
transfer must be physical.


There is no convincing evidence for telepathic communication, so
a theory that predicts it should occur would have to explain why
we don't observe it.


Yes, and physical theories of consciousness do that quite well. 
But computationalist theories of consciousness can't invoke the 
physics they're trying to derive.


Bruno, I believe, proposes that his theory accounts for the universe 
that we observe.


ISTM his argument is of the form:

1) Consciousness is instantiated by certain computation.
2) All possible computation is realized by a UDA that exists because 
arithmetic is true.
3) Then the conscious thoughts that constitute our experience of a 
physical world are among those instantiated by the UDA and the physical 
world need not be anything more than threads of those computations that 
exhibit the consistent patterns which we explain as an external reality.
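A side note on (2): the Universal Dovetailer behind Brent's "UDA" is a concrete, runnable idea -- interleave the execution of every program so that no non-halting program blocks the rest. A minimal sketch, with toy generators standing in for a genuine enumeration of all programs:

```python
import itertools

def program(i):
    """Toy stand-in for the i-th program: an endless computation that
    yields its successive states (here just the pair (i, step))."""
    for step in itertools.count():
        yield (i, step)

def dovetail():
    """Universal dovetailing: on round n, advance each of programs
    0..n-1 by one step, so every step of every program is eventually
    executed even though no program ever halts."""
    running = {}
    for n in itertools.count(1):
        for i in range(n):
            if i not in running:
                running[i] = program(i)
            yield next(running[i])

# First ten scheduled (program, step) pairs:
first = list(itertools.islice(dovetail(), 10))
```

The diagonal schedule guarantees that for every program i and every step s, the pair (i, s) is eventually reached -- which is all the "realizes all possible computation" claim needs at the level of mechanism.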


The problem I have with this is that "arithmetic is true" doesn't make 
anything, much less a UDA, exist.  And the conclusion (3) just brings in 
Everett's measure problem amplified to the nth degree.  It explains too 
much as "existing" and doesn't assign probabilities to anything.  So far 
as I can tell Bruno is just relying on 1-3 as a "proof" that the physics 
we observe MUST BE derived from the UDA.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


Re: Holiday Exercise

2016-08-03 Thread Stathis Papaioannou
On 3 August 2016 at 16:02, Brent Meeker  wrote:

>
>
> On 8/2/2016 10:19 PM, Stathis Papaioannou wrote:
>
>
>
> On Wednesday, 3 August 2016, Brent Meeker  wrote:
>
>>
>>
>> On 8/2/2016 3:29 PM, Stathis Papaioannou wrote:
>>
>>
>>
>> On Wednesday, 3 August 2016, Brent Meeker  wrote:
>>
>>>
>>>
>>> On 8/2/2016 6:15 AM, Stathis Papaioannou wrote:
>>>
 It's not that it can't, but rather that it doesn't, and if it does then
 that would require some extra physical explanation, a radio link between
 brains or something.

>>>
>>> That's what I mean by illegitimately appealing to physics while claiming
>>> that physics must be derived from computation of consciousness.
>>>
>>
>>  Whatever theory we propose must be consistent with observation.
>>
>>
>> But "if it does then that would require some extra physical
>> explanation, a radio link between brains or something" is not an
>> observation; it's an assumption that all information transfer must be
>> physical.
>>
>
> There is no convincing evidence for telepathic communication, so a theory
> that predicts it should occur would have to explain why we don't observe it.
>
>
> Yes, and physical theories of consciousness do that quite well.  But
> computationalist theories of consciousness can't invoke the physics they're
> trying to derive.
>

Bruno, I believe, proposes that his theory accounts for the universe that
we observe.

-- 
Stathis Papaioannou



Re: Holiday Exercise

2016-08-03 Thread John Clark
On Wed, Aug 3, 2016 at 3:01 PM, Brent Meeker  wrote:

>
> I've given up following your exchanges with Clark.  They seem to be about
> semantics.


I certainly hope so; semantics is the branch of logic concerned with meaning.


  John K Clark



Re: Holiday Exercise

2016-08-03 Thread Brent Meeker



On 8/3/2016 7:43 AM, Bruno Marchal wrote:


On 02 Aug 2016, at 22:41, Brent Meeker wrote:




On 8/2/2016 11:05 AM, Bruno Marchal wrote:
 But the argument seems somewhat circular since you assume that the 
different physical processes associated with location make the 
thoughts different.



Yes, it is more pedagogical, but the "physical" used here is not 
assumed to be primary, and the reasoning will just show that if we 
do survive with a physical digital brain, then the physical is 
reduced to a (fundamental and important) aspect of the "theology of 
number", or, if you prefer, of the mathematics of universal machine 
self-reference. 


But if you must assume the physical in order for that argument to be 
valid, it seems that physical is as "primary" as anything else in the 
ontology.


No, because the physical assumption is eliminated at step 7.


But I don't think that step is correct.  As I've argued several times I 
don't think there can be consciousness without a physical context.


Brent



Re: Holiday Exercise

2016-08-03 Thread Brent Meeker



On 8/3/2016 7:26 AM, Bruno Marchal wrote:


On 02 Aug 2016, at 20:52, Brent Meeker wrote:




On 8/2/2016 6:15 AM, Stathis Papaioannou wrote:
It's not that it can't, but rather that it doesn't, and if it does 
then that would require some extra physical explanation, a radio 
link between brains or something.


That's what I mean by illegitimately appealing to physics while 
claiming that physics must be derived from computation of consciousness.


Are you OK with Clark's answer to question 1?
What about question 2?


I've given up following your exchanges with Clark.  They seem to be 
about semantics.  I don't see any problem with your arguments there, 
except recognizing that it is an assumption that duplicated brains will 
have separate consciousness.  That's probably true, based on a 
physicalist model.  Whether it is probable on a computationalist model 
is less clear.




Your argument is not valid here. We can make local assumptions, and 
then eliminate them later, which is done in this case in step 7 (with 
or without step 8).


You can't make a "local" assumption in step 3, use it to argue for step 
7, and then use the truth of step 7 as evidence for the assumption in 
step 3. That's circular.


Brent



Let us not go too quickly. The "Holiday exercise" is about step 3 
only. It does not presuppose a primary physical reality -- only a 
physical reality, which is later shown as possibly existing, but not 
primary. But that is for later. Just tell me if you are OK with 
question 1, and the principle exposed in question 2.


Bruno






Brent



http://iridia.ulb.ac.be/~marchal/







Re: QUESTION 2 (Re: Holiday Exercise

2016-08-03 Thread John Clark
On Tue, Aug 2, 2016 at 2:56 PM, Bruno Marchal  wrote:

> The question is not about duplication.


OK.


> Do you agree that if today, someone is "sure" that tomorrow (or any
> precise time later) he will be uncertain of an outcome of a certain
> experience, then he can say, today, that he is uncertain about that future
> outcome.
>

Sure, he can say whatever he wants because being sure and being correct
are two entirely different things.

> For example, if I promise myself to buy a lottery ticket next week. I am
> pretty sure now that next week I will be unsure winning something
>

I've been known to break promises to myself before. If I didn't buy the
ticket I'd be absolutely certain I won't win the lottery next week; if I do
buy the ticket I'd be almost certain I won't win next week. I'll have to
wait till next week to find out if, in addition to being certain, I was also
correct.
And because you said right at the start that people-duplicating machines
are not involved this time, personal pronouns can be used without
ambiguity.


> or not with that ticket, so I consider myself to be uncertain right now
> about winning or not the lottery next week.
> So I repeat, the principle questioned here says that if at t_0
> P("I will be uncertain of the outcome of some experience at t_1") = 1
> then
> The outcome of the experience at t_1 is uncertain at t_0.
>

You can be certain and wrong, and uncertain and correct. I will say that
if I don't know fact X tomorrow but I do know fact X now, then sometime
between today and tomorrow part of my memory must have been erased. It's
called "forgetting". But I haven't forgotten you said "The question is not
about duplication", and that means "I"-duplicating machines are not
involved, and that is the only reason it wasn't gibberish when you said "I
will be uncertain of the...".
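One hedged way to formalize the quoted principle -- assuming, beyond anything stated in the thread, that the t_0 credence is the weighted average of the possible t_1 credences -- is that a mixture of non-degenerate probability assignments is itself non-degenerate. A toy sketch (the lottery numbers are illustrative only):

```python
from fractions import Fraction

def mix(t1_credences, weights):
    """The t_0 credence over outcomes, taken as the weighted average of
    the credences the agent might hold at t_1 (a reflection-style step
    assumed here, not argued for in the thread)."""
    outcomes = t1_credences[0].keys()
    return {o: sum(w * c[o] for w, c in zip(weights, t1_credences))
            for o in outcomes}

def uncertain(credence):
    """'Uncertain' here means no outcome is assigned probability 1."""
    return all(p < 1 for p in credence.values())

# P("uncertain at t_1") = 1: every credence the agent might hold at t_1
# about the lottery is non-degenerate.
t1_credences = [
    {'win': Fraction(1, 10**6), 'lose': 1 - Fraction(1, 10**6)},
    {'win': Fraction(1, 2),     'lose': Fraction(1, 2)},
]
t0 = mix(t1_credences, [Fraction(1, 2), Fraction(1, 2)])
# A mixture of non-degenerate credences is non-degenerate, so the
# outcome is already uncertain at t_0.
```

Whether this reflection-style assumption is the right reading of the disputed principle is, of course, exactly what the exchange is about.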

 John K Clark



Re: Holiday Exercise

2016-08-03 Thread Platonist Guitar Cowboy
On Wed, Aug 3, 2016 at 4:40 PM, Bruno Marchal  wrote:

>
> On 02 Aug 2016, at 22:39, Brent Meeker wrote:
>
>
>>
>> On 8/2/2016 11:05 AM, Bruno Marchal wrote:
>>
>>> Not at all. The existence of the computations is an elementary
>>> metatheorem about Robinson Arithmetic, and already a theorem of Peano
>>> Arithmetic (still less that what is needed to enunciate Church Thesis).
>>>
>>
>> I don't think you can get existence from axioms.  Otherwise I could take
>> "unicorns exist" as an axiom and they would.
>>
>
> You can, and if you derive the physical laws from the existence of
> unicorns, then it might be an interesting theory. But more people believe
> that 2+2 = 4 than in the existence of unicorns, and no interesting results
> have ever been obtained from the existence of unicorns.
>
>
ROTFL. Not bad, Bruno.

Thank goodness "more people believe" but "no interesting results" is a
matter of taste, definitions, ultimately what you call theology. 5 Million
people think differently and have purchased "The Last Unicorn" by Peter
Beagle. Not bad for a publication of any kind.

It features the character King Haggard as a withdrawn, misanthropic, and
miserable king who cares for no one, not even his adopted son Prince Lír.
His loneliness and misery are only alleviated by the sight of unicorns.

That IS interesting in that it features a character who withdraws from the
world, only accepting the parts that fit his personal mysticism of
unicorns. The rest, he rejects, for example if they answer his questions in
a fashion that he dislikes. Of course, one day he lets in the girl/unicorn
that will be his downfall, thinking in his isolated madness that she can be
controlled enough from the distance/defenses of his safe tower. I wonder
why he still let her into the castle in the first place, even if his
loneliness is obvious.

I don't believe in some kind of pissing contest between science,
mathematics, theology, and fiction because I enjoy them all. To the rest of
you: JUST ANSWER THE QUESTION SO WE CAN PROCEED. THANKS! WOULD ANYBODY DENY
THE PROGRESS IN MOVING FORWARD? SO WHY NOT JUST DO IT? ANSWER THE
QUESTIONS! PGC



Re: R: Re: Holiday Exercise

2016-08-03 Thread Bruno Marchal


On 03 Aug 2016, at 08:50, Bruce Kellett wrote:


On 3/08/2016 4:37 pm, 'scerir' via Everything List wrote:
The suggestion that the one consciousness could inhabit more than one
physical body does not predict telepathy -- it could merely indicate that
consciousness is not localized to a single physical body, that it is
non-local, for instance. Or, indeed, that physics is not fundamental but
derivative on consciousness.

Bruce

This reminds me of Schroedinger: "The doctrine of identity can claim that
it is clinched by the empirical fact that consciousness is never
experienced in the plural, only in the singular. Not only has none of us
ever experienced more than one consciousness, but there is no trace of
circumstantial evidence of this ever happening anywhere in the world"
(much more in "What is Life?")


I don't think Schrödinger was considering person-duplicating machines.
He is using the normal transitive understanding of identity, which is
also under question in the duplication protocol.


Sure, like  in Everett's formulation of QM, also.

Bruno






Bruce



http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-03 Thread Bruno Marchal


On 03 Aug 2016, at 07:28, Bruce Kellett wrote:


On 3/08/2016 3:19 pm, Stathis Papaioannou wrote:
On Wednesday, 3 August 2016, Brent Meeker   
wrote:

On 8/2/2016 3:29 PM, Stathis Papaioannou wrote:
On Wednesday, 3 August 2016, Brent Meeker   
wrote:



On 8/2/2016 6:15 AM, Stathis Papaioannou wrote:
It's not that it can't, but rather that it doesn't, and if it does  
then that would require some extra physical explanation, a radio  
link between brains or something.


That's what I mean by illegitimately appealing to physics while  
claiming that physics must be derived from computation of  
consciousness.


 Whatever theory we propose must be consistent with observation.


But, "if it does then that would require some extra physical  
explanation, a radio link between brains or something." Is not an  
observation, it's an assumption that all information transfer must  
be physical.


There is no convincing evidence for telepathic communication, so a  
theory that predicts it should occur would have to explain why we  
don't observe it.


The suggestion that the one consciousness could inhabit more than  
one physical body does not predict telepathy -- it could merely  
indicate that consciousness is not localized to a single physical  
body, that it is non-local, for instance. Or, indeed, that physics  
is not fundamental but derivative on consciousness.


You cannot rule out these possibilities on the available evidence  
(since we do not have person duplicating machines at the moment). As  
I said, this is probably a question that can only be answered  
empirically once we have actually duplicated people. Random  
theorizing is just not going to cut it.


Sure. But once we assume computationalism, the use of computer
science is no longer random theorizing.


Bruno






Bruce



http://iridia.ulb.ac.be/~marchal/




