Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 4:08 pm, Brent Meeker wrote:


On 8/8/2016 10:28 PM, Bruce Kellett wrote:

You seem to be agreeing that this is, at bottom, an empirical 
matter. If we do the experiment and duplicate a conscious being, 
then separate the duplicates, we could ask one whether or not it 
was still aware of its duplicate. If the answer is "No", then we 
know that consciousness is localized to a particular physical body. 
If the answer is "Yes", then we know that consciousness is 
non-local, even though it might still supervene on the physical 
bodies. 


I don't think that's logically impossible, but it would imply FTL 
signaling and hence be inconsistent with current physics. It can't 
just be QM entanglement, because to share computation, i.e. to make a 
difference at X due to a perception at Y, requires signal transmission.


Signal transmission or awareness? Non-locality does not entail FTL 
signalling -- signalling would make it local. 


?? Faster than light, spacelike, signalling is what is conventionally 
called "non-local", as in "non-local hidden variable".


That is one interpretation of non-locality, but that does not apply to 
EPR correlations, for instance; the non-locality there is intrinsic, 
there is no signalling /per se/.


Bruce



Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 10:28 PM, Bruce Kellett wrote:


Yes, that makes sense. But the rovers are not conscious. 


Why not?  Suppose they are.  If you would say "yes to the doctor" 
then you must believe that AI is possible.


I have no reason to suppose that AI is not possible. But the Mars 
rovers are unlikely to be sufficiently complex/self-referential to be 
conscious. Do they have an inner narrative?


I wrote "autonomous rover" to indicate it had AI, without committing to 
whether that implied consciousness. But it's interesting that you ask 
whether it has an inner narrative. I think that our inner narrative is 
a way of summarizing for memory what we think is significant, so that we 
can later learn from it by recalling it in similar circumstances. If I were 
designing a Mars rover to be truly autonomous over a period of years, I 
would provide it with some kind of episodic memory like that as part of its 
learning algorithms.
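
A minimal sketch, in Python, of what such an episodic memory might look 
like: significant summaries are stored together with the situation in 
which they occurred, and recalled later by similarity. The significance 
threshold, the vector encoding of a "situation", and all names here are 
illustrative assumptions, not a description of any actual rover software.

    import math

    class EpisodicMemory:
        """Keep summaries of significant episodes; recall the most
        similar one later, as a crude inner narrative for learning."""

        def __init__(self, significance_threshold=0.5):
            self.threshold = significance_threshold
            self.episodes = []  # (situation_vector, summary) pairs

        def record(self, situation, summary, significance):
            # Only episodes judged significant are kept, mimicking a
            # narrative that filters what is worth remembering.
            if significance >= self.threshold:
                self.episodes.append((situation, summary))

        def recall(self, situation):
            # Return the summary of the most similar past episode, if any.
            if not self.episodes:
                return None
            nearest = min(self.episodes,
                          key=lambda ep: math.dist(ep[0], situation))
            return nearest[1]

    mem = EpisodicMemory()
    mem.record((1.0, 2.0), "wheel slipped on loose sand", significance=0.9)
    print(mem.recall((1.1, 2.1)))   # -> wheel slipped on loose sand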


Brent



Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 10:28 PM, Bruce Kellett wrote:
You seem to be agreeing that this is, at bottom, an empirical 
matter. If we do the experiment and duplicate a conscious being, 
then separate the duplicates, we could ask one whether or not it was 
still aware of its duplicate. If the answer is "No", then we know 
that consciousness is localized to a particular physical body. If 
the answer is "Yes", then we know that consciousness is non-local, 
even though it might still supervene on the physical bodies. 


I don't think that's logically impossible, but it would imply FTL 
signaling and hence be inconsistent with current physics. It can't 
just be QM entanglement, because to share computation, i.e. to make a 
difference at X due to a perception at Y, requires signal transmission.


Signal transmission or awareness? Non-locality does not entail FTL 
signalling -- signalling would make it local. 


?? Faster than light, spacelike, signalling is what is conventionally 
called "non-local", as in "non-local hidden variable".


Brent



Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 3:14 pm, Brent Meeker wrote:

On 8/8/2016 7:03 PM, Bruce Kellett wrote:

On 9/08/2016 10:51 am, Brent Meeker wrote:

On 8/8/2016 4:32 PM, Bruce Kellett wrote:

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added 
axiom/assumption that the duplicated brain/bodies must have 
separate consciousnesses at least as soon as they have different 
perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of 
course that is what we might expect -- two brains ==> two 
consciousnesses. But that says nothing about the case of two 
identical brains -- is there one or two consciousnesses? The 
default assumption around here appears to be that the identity of 
indiscernibles will mean that there is only one conscious being. 
The question is then how this consciousness evolves as inputs change?


I think the default assumption is that consciousness supervenes on 
the brain, so two different brains will realize two different 
consciousnesses because they are at different locations and 
perceiving different things.


That is fine if they started off different, and were never identical 
-- identical in all details, not just sharing single "observer 
moments", even if such can be well-defined.


I would speculate that it would be just like having two autonomous 
Mars rovers that "wake up" at different points on the surface.  They 
may have the same computers and sensors and programs, but their data 
and memories will immediately start to diverge.  They won't be 
"completely" different, as identical twins aren't completely 
different.  They may even occasionally think the same thoughts.  But 
relativity tells us there's no sense to saying they think them at 
the same time.


But Mars rovers are not conscious -- or are they?

I don't think this does much to invalidate Bruno's argument.  He 
just wants to show that the physical is derivative, not that it's 
irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot 
derive the differentiation of consciousness in this duplication 
case from the YD+CT starting point, so where does it come from? 


In his theory, the physics and the consciousness must both derive 
from the infinite threads of computation by the UD. I'm just making 
the point that he does need to derive the physics, specifically the 
finite speed of communication, in order to show that the duplication 
results in two different consciousnesses.


The finite speed of communication is a problem only if consciousness 
is localized to the physical brain -- if it is a non-local 
computation, this might not be an issue.


It seems to me an experimental matter -- until we have duplicated a 
conscious being, we will not know whether the consciousnesses 
differentiate on different inputs or not. 


Suppose there is an RF link between them so they can share 
computation, memory, sensor data,...  Then we'd be inclined to say 
that they could be a single consciousness.  But now suppose they are 
moved light-years apart.  They could still share computation, 
memory, etc.  But intelligent action on the scale of an autonomous 
rover would have to be based on the local resources of a single 
rover.  So they would have to effectively "differentiate".  It 
wouldn't be some kind of axiomatic, mathematically provable 
differentiation - rather a practical, observable one.


Yes, that makes sense. But the rovers are not conscious. 


Why not?  Suppose they are.  If you would say "yes to the doctor" then 
you must believe that AI is possible.


I have no reason to suppose that AI is not possible. But the Mars rovers 
are unlikely to be sufficiently complex/self-referential to be 
conscious. Do they have an inner narrative?


And if they were placed at different points on the surface of Mars, 
they would have to start with at least some different data -- viz., 
their location on the surface relative to earth. The general issue I 
am raising is that consciousness could be non-local, in which case 
separated duplicates would not need any form of subluminal physical 
communication in order to remain a single conscious being.


You seem to be agreeing that this is, at bottom, an empirical matter. 
If we do the experiment and duplicate a conscious being, then 
separate the duplicates, we could ask one whether or not it was still 
aware of its duplicate. If the answer is "No", then we know that 
consciousness is localized to a particular physical body. If the 
answer is "Yes", then we know that consciousness is non-local, even 
though it might still supervene on the physical bodies. 


I don't think that's logically impossible, but it would imply FTL 
signaling and hence be inconsistent with current physics. It can't just 
be QM entanglement, because to share computation, i.e. to make a difference 
at X due to a perception at Y, requires signal transmission.

Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 7:03 PM, Bruce Kellett wrote:

On 9/08/2016 10:51 am, Brent Meeker wrote:

On 8/8/2016 4:32 PM, Bruce Kellett wrote:

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added 
axiom/assumption that the duplicated brain/bodies must have 
separate consciousnesses at least as soon as they have different 
perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of 
course that is what we might expect -- two brains ==> two 
consciousnesses. But that says nothing about the case of two 
identical brains -- is there one or two consciousnesses? The default 
assumption around here appears to be that the identity of 
indiscernibles will mean that there is only one conscious being. The 
question is then how this consciousness evolves as inputs change?


I think the default assumption is that consciousness supervenes on 
the brain, so two different brains will realize two different 
consciousnesses because they are at different locations and 
perceiving different things.


That is fine if they started off different, and were never identical 
-- identical in all details, not just sharing single "observer 
moments", even if such can be well-defined.


I would speculate that it would be just like having two autonomous 
Mars rovers that "wake up" at different points on the surface.  They 
may have the same computers and sensors and programs, but their data 
and memories will immediately start to diverge.  They won't be 
"completely" different, as identical twins aren't completely 
different.  They may even occasionally think the same thoughts.  But 
relativity tells us there's no sense to saying they think them at the 
same time.


But Mars rovers are not conscious -- or are they?

I don't think this does much to invalidate Bruno's argument.  He 
just wants to show that the physical is derivative, not that it's 
irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot 
derive the differentiation of consciousness in this duplication case 
from the YD+CT starting point, so where does it come from? 


In his theory, the physics and the consciousness must both derive 
from the infinite threads of computation by the UD. I'm just making 
the point that he does need to derive the physics, specifically the 
finite speed of communication, in order to show that the duplication 
results in two different consciousnesses.


The finite speed of communication is a problem only if consciousness 
is localized to the physical brain -- if it is a non-local 
computation, this might not be an issue.


It seems to me an experimental matter -- until we have duplicated a 
conscious being, we will not know whether the consciousnesses 
differentiate on different inputs or not. 


Suppose there is an RF link between them so they can share 
computation, memory, sensor data,...  Then we'd be inclined to say 
that they could be a single consciousness.  But now suppose they are 
moved light-years apart.  They could still share computation, memory, 
etc.  But intelligent action on the scale of an autonomous rover 
would have to be based on the local resources of a single rover.  So 
they would have to effectively "differentiate".  It wouldn't be some 
kind of axiomatic, mathematically provable differentiation - rather a 
practical, observable one.


Yes, that makes sense. But the rovers are not conscious. 


Why not?  Suppose they are.  If you would say "yes to the doctor" then 
you must believe that AI is possible.


And if they were placed at different points on the surface of Mars, 
they would have to start with at least some different data -- viz., 
their location on the surface relative to earth. The general issue I 
am raising is that consciousness could be non-local, in which case 
separated duplicates would not need any form of subluminal physical 
communication in order to remain a single conscious being.


You seem to be agreeing that this is, at bottom, an empirical matter. 
If we do the experiment and duplicate a conscious being, then separate 
the duplicates, we could ask one whether or not it was still aware of 
its duplicate. If the answer is "No", then we know that consciousness 
is localized to a particular physical body. If the answer is "Yes", 
then we know that consciousness is non-local, even though it might 
still supervene on the physical bodies. 


I don't think that's logically impossible, but it would imply FTL 
signaling and hence be inconsistent with current physics. It can't just 
be QM entanglement, because to share computation, i.e. to make a difference 
at X due to a perception at Y, requires signal transmission.


The latter possibility seems the more likely if consciousness is, at 
root, non-physical, so that the physical is an epiphenomenon of 
consciousness.

Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 10:51 am, Brent Meeker wrote:

On 8/8/2016 4:32 PM, Bruce Kellett wrote:

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added 
axiom/assumption that the duplicated brain/bodies must have separate 
consciousnesses at least as soon as they have different perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of 
course that is what we might expect -- two brains ==> two 
consciousnesses. But that says nothing about the case of two 
identical brains -- is there one or two consciousnesses? The default 
assumption around here appears to be that the identity of 
indiscernibles will mean that there is only one conscious being. The 
question is then how this consciousness evolves as inputs change?


I think the default assumption is that consciousness supervenes on the 
brain, so two different brains will realize two different 
consciousnesses because they are at different locations and perceiving 
different things.


That is fine if they started off different, and were never identical -- 
identical in all details, not just sharing single "observer moments", 
even if such can be well-defined.


I would speculate that it would be just like having two autonomous 
Mars rovers that "wake up" at different points on the surface.  They 
may have the same computers and sensors and programs, but their data 
and memories will immediately start to diverge.  They won't be 
"completely" different, as identical twins aren't completely 
different.  They may even occasionally think the same thoughts.  But 
relativity tells us there's no sense to saying they think them at the 
same time.


But Mars rovers are not conscious -- or are they?

I don't think this does much to invalidate Bruno's argument.  He 
just wants to show that the physical is derivative, not that it's 
irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot 
derive the differentiation of consciousness in this duplication case 
from the YD+CT starting point, so where does it come from? 


In his theory, the physics and the consciousness must both derive 
from the infinite threads of computation by the UD. I'm just making 
the point that he does need to derive the physics, specifically the 
finite speed of communication, in order to show that the duplication 
results in two different consciousnesses.


The finite speed of communication is a problem only if consciousness is 
localized to the physical brain -- if it is a non-local computation, 
this might not be an issue.


It seems to me an experimental matter -- until we have duplicated a 
conscious being, we will not know whether the consciousnesses 
differentiate on different inputs or not. 


Suppose there is an RF link between them so they can share 
computation, memory, sensor data,...  Then we'd be inclined to say 
that they could be a single consciousness.  But now suppose they are 
moved light-years apart.  They could still share computation, memory, 
etc.  But intelligent action on the scale of an autonomous rover would 
have to be based on the local resources of a single rover.  So they 
would have to effectively "differentiate".  It wouldn't be some kind 
of axiomatic, mathematically provable differentiation - rather a 
practical, observable one.


Yes, that makes sense. But the rovers are not conscious. And if they 
were placed at different points on the surface of Mars, they would have 
to start with at least some different data -- viz., their location on 
the surface relative to earth. The general issue I am raising is that 
consciousness could be non-local, in which case separated duplicates 
would not need any form of subluminal physical communication in order to 
remain a single conscious being.


You seem to be agreeing that this is, at bottom, an empirical matter. If 
we do the experiment and duplicate a conscious being, then separate the 
duplicates, we could ask one whether or not it was still aware of its 
duplicate. If the answer is "No", then we know that consciousness is 
localized to a particular physical body. If the answer is "Yes", then we 
know that consciousness is non-local, even though it might still 
supervene on the physical bodies. The latter possibility seems the more 
likely if consciousness is, at root, non-physical, so that the physical 
is an epiphenomenon of consciousness.


Bruce


Re: Holiday Exercise

2016-08-08 Thread Stathis Papaioannou
On 9 August 2016 at 03:52, Brent Meeker  wrote:

>
>
> On 8/8/2016 6:18 AM, Stathis Papaioannou wrote:
>
>
>
> On Monday, 8 August 2016, Brent Meeker  wrote:
>
>>
>>
>> On 8/7/2016 11:20 AM, Bruno Marchal wrote:
>>
>>> Not necessarily. A digital computer also requires that time be digitized
>>> so that its registers run synchronously.  Otherwise "the state" is ill
>>> defined.  The finite speed of light means that spatially separated regions
>>> cannot be synchronous.  Even if neurons were only ON or OFF, which they
>>> aren't, they have frequency modulation, they are not synchronous.

>>>
>>> A synchronous digital machine can emulate an asynchronous digital machine,
>>> and that is all that is needed for the reasoning.
>>>
>>
>> If the time variable is continuous, i.e. can't be digitized, I don't
>> think you are correct.
>>
>
> If time is continuous, you would need infinite precision to exactly define
> the timing of a neuron's excitation, so you are right, that would not be
> digitisable. Practically, however, brains would have to have a non-zero
> engineering tolerance, or they would be too unstable. The gravitational
> attraction of a passing ant would slightly change the timing of neural
> activity, leading to a change in mental state and behaviour.
>
>
> I agree that brains must be essentially classical computers, but not
> necessarily digital.  The question arose as to what was contained in an
> Observer Moment and whether, in an infinite universe there would
> necessarily be infinitely many exact instances of the same OM.
>

Even in a continuum, there would be brain states and mental states that are
effectively identical to an arbitrary level of precision. We maintain a
sense of continuity of identity despite sometimes even gross changes to our
brain. At some threshold there will be a perceptible change, but the
threshold is not infinitesimal.
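
A toy illustration of that point, in Python: if spike timings only matter
to within some engineering tolerance, a continuous time variable collapses
onto a discrete grid, and only finitely many states are distinguishable
over a bounded interval. The tolerance value here is purely hypothetical.

    TOLERANCE = 1e-3  # seconds; a hypothetical timing tolerance

    def digitize(t, tol=TOLERANCE):
        # Map a continuous event time onto a grid of width tol; timings
        # well inside the same cell become the same digital state.
        return round(t / tol)

    print(digitize(0.1231), digitize(0.1234))  # 123 123 -- effectively identical
    print(digitize(0.1254))                    # 125 -- a perceptible difference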


>   But having a continuous variable doesn't imply instability.   First, the
> passing ant is also instantiated infinitely many times.  Second, if a small
> cause has only a proportionately small effect then there is no
> "instability", more likely the dynamics diverge as in deterministic chaos.
> But in any case it would allow an aleph-1 order infinity of OMs which
> would differ by infinitesimal amounts.
>
> But I also question the coherence of this idea.  As discussed (at great
> length) by Bruno and JKC, two or more identical brains must instantiate the
> same experience, i.e. the same OM.  So if there are only a finite number of
> possible brain-states and universes are made of OMs, then there can only be
> a finite number of finite universes.
>

A human brain can probably only have a finite number of thoughts, being of
finite size, but a Turing machine is not so limited.

-- 
Stathis Papaioannou



Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 4:32 PM, Bruce Kellett wrote:

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added 
axiom/assumption that the duplicated brain/bodies must have separate 
consciousnesses at least as soon as they have different perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of course 
that is what we might expect -- two brains ==> two consciousnesses. 
But that says nothing about the case of two identical brains -- is 
there one or two consciousnesses? The default assumption around here 
appears to be that the identity of indiscernibles will mean that there 
is only one conscious being. The question is then how this 
consciousness evolves as inputs change?


I think the default assumption is that consciousness supervenes on the 
brain, so two different brains will realize two different 
consciousnesses because they are at different locations and perceiving 
different things.  I would speculate that it would be just like having 
two autonomous Mars rovers that "wake up" at different points on the 
surface.  They may have the same computers and sensors and programs, but 
their data and memories will immediately start to diverge.  They won't 
be "completely" different, as identical twins aren't completely 
different.  They may even occasionally think the same thoughts.  But 
relativity tells us there's no sense to saying they think them at the 
same time.




I don't think this does much to invalidate Bruno's argument.  He just 
wants to show that the physical is derivative, not that it's irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot 
derive the differentiation of consciousness in this duplication case 
from the YD+CT starting point, so where does it come from? 


In his theory, the physics and the consciousness must both derive 
from the infinite threads of computation by the UD. I'm just making the 
point that he does need to derive the physics, specifically the finite 
speed of communication, in order to show that the duplication results in 
two different consciousnesses.


It seems to me an experimental matter -- until we have duplicated a 
conscious being, we will not know whether the consciousnesses 
differentiate on different inputs or not. 


Suppose there is an RF link between them so they can share computation, 
memory, sensor data,...  Then we'd be inclined to say that they could be 
a single consciousness.  But now suppose they are moved light-years 
apart.  They could still share computation, memory, etc.  But 
intelligent action on the scale of an autonomous rover would have to be 
based on the local resources of a single rover.  So they would have to 
effectively "differentiate".  It wouldn't be some kind of axiomatic, 
mathematically provable differentiation - rather a practical, observable 
one.


Brent

It seems far from obvious to me, one way or the other. I can think of 
no general principles that would give a definitive answer here. 
Physics alone does not seem to be enough. Any attempted derivation 
from physics seems just to beg the question.


Bruce





Re: "We spent a long time trying to convince ourselves this wasn’t real"

2016-08-08 Thread John Clark
On Sun, Aug 7, 2016 spudboy100 via Everything List <
everything-list@googlegroups.com> wrote:

> ​
> What do you feel would be the reaction of our species if magically, it
> gets determined that it is indeed Dyson builders?
>

It would be a very odd star system for Dyson builders to have evolved in.
The star has 1.43 times the mass of our sun, and a star's lifetime is
inversely proportional to the cube of its mass, so that star's lifetime
would only be about a third of the sun's. The sun and the Earth are about
the same age, 4 and a half billion years, and in another half a billion
years the sun will be too hot for life on Earth. So the sun can provide
about a 5 billion year window for intelligent life to evolve. On Earth it
took 4 billion years to go from chemicals to worms and another half a
billion years to go from worms to present day people.
The star you're talking about would only have a 1.6 billion year window for
life, and that doesn't seem like enough time for Evolution to turn chemicals
into Dyson builders. If you were on the Earth when the sun was only 1.6
billion years old you'd have to wait another 2.4 billion years to see the
first worm.
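
A quick check of that arithmetic, assuming the inverse-cube lifetime
scaling and the roughly 5 billion year solar window quoted above:

    SUN_WINDOW_GYR = 5.0   # ~5 Gyr window for intelligent life around the sun
    MASS_RATIO = 1.43      # the star's mass in solar masses

    # Lifetime scales as M**-3, so the window shrinks by the same factor:
    star_window = SUN_WINDOW_GYR * MASS_RATIO ** -3
    print(round(star_window, 1), "Gyr")   # -> 1.7 Gyr, roughly the 1.6 quoted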

We only have one example to look at, so maybe life on Earth evolved
unusually slowly, but I think it much more likely that life on Earth evolved
unusually rapidly, because Earth not only produced life, it produced
intelligent life. And however common intelligence is in the universe,
bacteria must be even more common.

 John K Clark



Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 9:05 am, Russell Standish wrote:

On Mon, Aug 08, 2016 at 10:01:49AM -0700, Brent Meeker wrote:

I think Russell is just saying we take it as an added
axiom/assumption that the duplicated brain/bodies must have separate
consciousnesses at least as soon as they have different perceptions.
This is exactly what you would predict from supposing that
consciousness is a product of physical processes in the brain -
something that is supported by lots and lots of evidence.

I don't think this does much to invalidate Bruno's argument.  He
just wants to show that the physical is derivative, not that it's
irrelevant.

Brent

Physicality in the thought experiment seems like a red herring to
me. We can just as easily consider running the duplicated
consciousnesses in virtual reality simulators of the two cities.


You would still have to build into your simulation whether or not the 
consciousness differentiates -- begging the question yet again.


Bruce



Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 9:03 am, Russell Standish wrote:

On Mon, Aug 08, 2016 at 09:06:20PM +1000, Bruce Kellett wrote:

On 8/08/2016 8:38 pm, Russell Standish wrote:

On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:

However, still no justification has been given for the assumption
that the duplicated consciousness differentiates on different
inputs. And consciousness is what computationalism is supposed to be
giving an account of.


Obviously different inputs does not entail the differentiation of
consciousness.

In duplication there is still only one consciousness: and as you
say, different inputs do not entail the differentiation of a single
consciousness (associated with a single brain/body). So why would it
be different if the body were also duplicated?


However computational supervenience does imply the
opposite: differentiated consciousness entails a difference in
inputs.

There is no difficulty in understanding that differentiated
consciousness entails different persons, who may or may not
experience different inputs, but I doubt that differentiation of
consciousness necessarily entails different inputs - two people can
experience the same stimuli.

This directly contradicts computational supervenience. I'm pretty sure
that if you read the fine print, you'll find that computational
supervenience is part of the YD assumption, although that fact is
often glossed over. I vaguely recall challenging Bruno on this a
couple of years ago.


In the W/M experiment we are asked to suppose that the
duplicated persons do, in fact, notice that they've been teleported to
a different city, and recognise where they've been teleported to.

There is no difficulty in accepting that there is consciousness of
two cities, but is that one consciousness, or two? You beg the
question by referring to plural 'persons'.

Two, because each consciousness is aware of different cities. They
each answer the question "Which city am I in?" in a different way, i.e.
it is a difference that makes a difference.


That still begs the question.

Bruce



Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 3:01 am, Brent Meeker wrote:
I think Russell is just saying we take it as an added axiom/assumption 
that the duplicated brain/bodies must have separate consciousnesses at 
least as soon as they have different perceptions


If that is what you have to do, why not admit it openly?

This is exactly what you would predict from supposing that 
consciousness is a product of physical processes in the brain - 
something that is supported by lots and lots of evidence.


Yes, if consciousness supervenes on the physical brain, then of course 
that is what we might expect -- two brains ==> two consciousnesses. But 
that says nothing about the case of two identical brains -- is there one 
or two consciousnesses? The default assumption around here appears to be 
that the identity of indiscernibles will mean that there is only one 
conscious being. The question is then how this consciousness evolves as 
inputs change?


I don't think this does much to invalidate Bruno's argument.  He just 
wants to show that the physical is derivative, not that it's irrelevant.


I disagree. I think it is crucial for Bruno's argument. He cannot derive 
the differentiation of consciousness in this duplication case from the 
YD+CT starting point, so where does it come from? It seems to me an 
experimental matter -- until we have duplicated a conscious being, we 
will not know whether the consciousnesses differentiate on different 
inputs or not. It seems far from obvious to me, one way or the other. I 
can think of no general principles that would give a definitive answer 
here. Physics alone does not seem to be enough. Any attempted derivation 
from physics seems just to beg the question.


Bruce



Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 4:05 PM, Russell Standish wrote:

On Mon, Aug 08, 2016 at 10:01:49AM -0700, Brent Meeker wrote:

I think Russell is just saying we take it as an added
axiom/assumption that the duplicated brain/bodies must have separate
consciousnesses at least as soon as they have different perceptions.
This is exactly what you would predict from supposing that
consciousness is a product of physical processes in the brain -
something that is supported by lots and lots of evidence.

I don't think this does much to invalidate Bruno's argument.  He
just wants to show that the physical is derivative, not that it's
irrelevant.

Brent

Physicality in the thought experiment seems like a red herring to
me. We can just as easily consider running the duplicated
consciousnesses in virtual reality simulators of the two cities.


What if they are linked as one simulator by RF or by the internet? The 
physicality is being used to assert that there is not one consciousness 
supervening on different brains/computers/simulators. I think that's 
true, but it's because I think you are right that supervenience is 
implicit in YD.  But if consciousness is generated by a kind of statmech 
of UD computations then one can't rely on this implicit supervenience 
before having derived spacetime and the finite speed of communication.


Brent







Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 9/08/2016 12:39 am, Bruno Marchal wrote:

On 08 Aug 2016, at 01:26, Bruce Kellett wrote:
On 8/08/2016 1:30 am, Bruno Marchal wrote:


But in step 3, I am very careful not to use the notion of 
"consciousness", and instead a simple 3p notion of first person. 
Usually many relate it to consciousness and assume that when the 
guy says "I see Moscow", they are conscious, but that is not needed 
to get the reversal.


Maybe that is the basis of the problem. In step 3 you seem to be 
claiming nothing that could not be achieved by a non-conscious machine:


Yes.

take a machine that can take photographs and compare the resulting 
images with a data base of images of certain cities. When a match is 
found, the machine outputs the corresponding name of the city from 
the data base. Send one such machine to Washington and an identical 
machine to Moscow. They will fulfill your requirements, the W-machine 
will output W and the M-machine will output M.


This is what you now seem to be describing. But that is not FPI.


How could the machine predict the result of the match? Give me the 
algorithm used by that machine.


The machine program knows the protocol -- it knows that one copy will be 
transported to M and one to W. The machines are already physically 
different (different locations if nothing else), so it is a matter of a 
coin toss as to which goes where. The machines do not, however, share a 
consciousness, so this does not answer what will happen with a conscious 
being. Otherwise your prediction is no different from predicting the 
outcome of a coin toss. Think of one machine: it will be unaware of the 
other; if it knows that it will go to either W or M on the result of a 
coin toss... prediction, 50/50. (But if the machine doesn't have the 
protocol programmed in, it will simply answer: "What?")
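
For concreteness, a minimal sketch in Python of the sort of non-conscious 
machine being discussed: an exact-match lookup against an image database, 
plus the coin-toss prediction that the protocol forces. The database 
entries and all names are illustrative assumptions only.

    CITY_DATABASE = {
        "kremlin_skyline.jpg": "Moscow",
        "capitol_dome.jpg": "Washington",
    }

    def identify_city(photo):
        # After transport: compare the photograph with the database and
        # output the matching city's name, if any.
        return CITY_DATABASE.get(photo, "unknown")

    def predict_destination():
        # Before duplication: the protocol sends one copy to each city,
        # so the best available output is a 50/50 prediction.
        return {"Moscow": 0.5, "Washington": 0.5}

    print(predict_destination())                 # {'Moscow': 0.5, 'Washington': 0.5}
    print(identify_city("kremlin_skyline.jpg"))  # Moscow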


The "P" in the acronym stands for "person", and if the "person" is 
not conscious, it is a zombie and any output you get has no bearing 
on what will happen to conscious persons.


The problem is a problem of prediction of future first person account.


That is a problem only if you have a person -- a conscious being.

Bruce

The zombie machines will probably not be aware of each other, but 
from that you cannot conclude that the conscious persons will not be 
aware of each other, or that consciousness necessarily differentiates 
on different inputs.


Well, you need the inputs to be different enough (like seeing W vs. 
M) so that the machine can take notice of the difference and write 
distinct outcomes in the diary, of course.


Bruno




Re: Holiday Exercise

2016-08-08 Thread Russell Standish
On Mon, Aug 08, 2016 at 10:01:49AM -0700, Brent Meeker wrote:
> I think Russell is just saying we take it as an added
> axiom/assumption that the duplicated brain/bodies must have separate
> consciousnesses at least as soon as they have different perceptions.
> This is exactly what you would predict from supposing that
> consciousness is a product of physical processes in the brain -
> something that is supported by lots and lots of evidence.
> 
> I don't think this does much to invalidate Bruno's argument.  He
> just wants to show that the physical is derivative, not that it's
> irrelevant.
> 
> Brent

Physicality in the thought experiment seems like a red herring to
me. We can just as easily consider running the duplicated
consciousnesses in virtual reality simulators of the two cities.

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: Holiday Exercise

2016-08-08 Thread Russell Standish
On Mon, Aug 08, 2016 at 09:06:20PM +1000, Bruce Kellett wrote:
> On 8/08/2016 8:38 pm, Russell Standish wrote:
> >On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:
> >>However, still no justification has been given for the assumption
> >>that the duplicated consciousness differentiates on different
> >>inputs. And consciousness is what computationalism is supposed to be
> >>giving an account of.
> >>
> >Obviously different inputs does not entail the differentiation of
> >consciousness.
> 
> In duplication there is still only one consciousness: and as you
> say, different inputs do not entail the differentiation of a single
> consciousness (associated with a single brain/body). So why would it
> be different if the body were also duplicated?
> 
> >However computational supervenience does imply the
> >opposite: differentiated consciousness entails a difference in
> >inputs.
> 
> There is no difficulty in understanding that differentiated
> consciousness entails different persons, who may or may not
> experience different inputs, but I doubt that differentiation of
> consciousness necessarily entails different inputs - two people can
> experience the same stimuli.

This directly contradicts computational supervenience. I'm pretty sure
that if you read the fine print, you'll find that computational
supervenience is part of the YD assumption, although that fact is
often glossed over. I vaguely recall challenging Bruno on this a
couple of years ago.

> 
> >In the W/M experiment we are asked to suppose that the
> >duplicated persons do, in fact, notice that they've been teleported to
> >a different city, and recognise where they've been teleported to.
> 
> There is no difficulty in accepting that there is consciousness of
> two cities, but is that one consciousness, or two? You beg the
> question by referring to plural 'persons'.
> 

Two, because each consciousness is aware of different cities. They
each answer the question "Which city am I in?" in a different way, i.e.
it is a difference that makes a difference.


-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au




Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 7:27 AM, Bruno Marchal wrote:


On 07 Aug 2016, at 22:32, Brent Meeker wrote:




On 8/7/2016 7:27 AM, Bruno Marchal wrote:


So I suggest that instead of starting with the hypothesis that 
consciousness is a computation,


Please, I insist that consciousness is NOT a computation. 
Consciousness is a 1p notion, and you cannot identify it with *any* 
3p.


But then you must say "No." to the doctor, because what he proposes 
is a 3p equivalent substitute for your brain.


On the contrary, once we say "yes" to the doctor, we can know that we 
are not the brain or the body; we own them. We borrow their relative 
appearances, not to be conscious, but to manifest our first person 
experiences relatively to the (probable) others in the normal histories.


Without the brain and body and physics, what experience would we have?

Brent



Re: Holiday Exercise

2016-08-08 Thread Brent Meeker



On 8/8/2016 6:18 AM, Stathis Papaioannou wrote:



On Monday, 8 August 2016, Brent Meeker wrote:




On 8/7/2016 11:20 AM, Bruno Marchal wrote:

Not necessarily. A digital computer also requires that
time be digitized so that its registers run synchronously.
Otherwise "the state" is ill defined.  The finite speed of
light means that spatially separated regions cannot be
synchronous.  Even if neurons were only ON or OFF, which
they aren't, they have frequency modulation, they are not
synchronous.


A synchronous digital machine can emulate an asynchronous digital
machine, and that is all that is needed for the reasoning.


If the time variable is continuous, i.e. can't be digitized, I
don't think you are correct.


If time is continuous, you would need infinite precision to exactly 
define the timing of a neuron's excitation, so you are right, that 
would not be digitisable. Practically, however, brains would have to 
have a non-zero engineering tolerance, or they would be too unstable. 
The gravitational attraction of a passing ant would slightly change 
the timing of neural activity, leading to a change in mental state and 
behaviour.


I agree that brains must be essentially classical computers, but not 
necessarily digital.  The question arose as to what was contained in an 
Observer Moment and whether, in an infinite universe there would 
necessarily be infinitely many exact instances of the same OM.  But 
having a continuous variable doesn't imply instability.   First, the 
passing ant is also instantiated infinitely many times.  Second, if a 
small cause has only a proportionately small effect then there is no 
"instability", more likely the dynamics diverge as in deterministic 
chaos.  But in any case it would allow an aleph-1 order infinity of OMs 
which would differ by infinitesimal amounts.


But I also question the coherence of this idea.  As discussed (at great 
length) by Bruno and JKC, two or more identical brains must instantiate 
the same experience, i.e. the same OM.  So if there are only a finite 
number of possible brain-states and universes are made of OMs, then 
there can only be a finite number of finite universes.


Brent



Re: That stupid diary

2016-08-08 Thread John Clark
On Sun, Aug 7, 2016 Telmo Menezes  wrote:


>> Is this really that difficult to comprehend? If computationalism is true
>> then the machine will be able to make 2 copies that are identical to
>> each other in every way and will remain identical until the outside
>> environment or perhaps random quantum variations changes one but not
>> the other.
>
> I agree and never argued the opposite.

I'm glad to hear we agree on that; then we both disagree with Bruno,
because Bruno said:

*"Nothing can duplicate a first person view from its first person point of
view, with or without computationalism."*

I think that is the key to Bruno's confusion; that and trying to establish
personal identity by looking from the present to the future rather than by
remembering the past from the present.



>> Bruno asks "before the duplication what is the probability that "YOU"
>> will be inside the house looking out?". That is not a question, that is
>> gibberish, because Bruno isn't asking about what will happen to Telmo
>> Menezes; in a world with personal pronoun duplicating machines Bruno
>> wants to know about the one and only one thing that will happen to
>> YOU. And that's just silly.
>
> We discussed this before. The MWI introduces the same problem.


And as I've explained several times MWI does NOT have the same problem.
Before *you* perform the 2 slit experiment it would NOT be gibberish to ask
*you* "After the experiment what do *you* expect to see?", because both
before and after the experiment the meaning of the personal pronoun "*you*"
is crystal clear, unique and unambiguous: "*you*" is the only chunk of
matter in the observable universe that behaves in a Telmomenezesian way.
And because things are stated so clearly, after it's all over we can check
and see if the prediction *you* made about *you* turned out to be correct
or not; it might have been right and it might have been wrong but it wasn't
gibberish.

It's entirely different with a duplicating machine: "What one and only one
city will *you* see after *you* are duplicated?" is just words with a
question mark at the end and is not a question, because after "*you*" is
duplicated there would be 2 chunks of matter that behave in a
Telmomenezesian way.



> If I am about to open Schrödinger's cat box, then one branch of me will
> see a live cat and another one a dead one.

OK, and personal pronouns cause no confusion because with Schrödinger's
cat box there is only one "me" per multiverse branch; a situation that is
not true with duplication machines. And I might add that if "me" is
defined as the person having this thought right now then "me" will have no
future at all and has had no past.


> By your reasoning, the probability of Telmo Menezes seeing a dead cat
> is 1,

Correct. And by my reasoning the probability of Telmo Menezes seeing a
live cat is also 1, always assuming that MWI is correct and we don't know
for a fact that it is.

> but from the first person perspective of any of the branches it is 1/2.

Yes, but you almost make that sound like a contradiction. All the above
means is that if the experiment is performed many times, about half the
time "*you*", that is to say the only chunk of matter in the observable
universe that behaves in a Telmomenezesian way, will see a live cat, and
about half the time the only chunk of matter in the observable universe
that behaves in a Telmomenezesian way will see a dead cat.

> Bruno's argument only moves this to a scenario where both copies can
> coexist in the same branch, which can lead to some social awkwardness
> but does not fundamentally change the first person / third person
> distinction

One thing does change when both copies coexist in the same observable
universe: "What will you see next?" changes from a meaningful question
into a meaningless sequence of words with a question mark at the end.

> Duplicating a first person view is the same as doing nothing. 1=1.

No, something has changed. Before the duplication only one chunk of matter
in the observable universe behaves in a Telmomenezesian way, but after the
duplication there are two chunks of matter in the observable universe that
behave in a Telmomenezesian way.



> If you are facing your clone, the content of your respective experiences
> is already different. Do you disagree?

Yes, I disagree. Your clone is also facing its clone, so you both change
in exactly the same way, so both chunks of matter still behave in the same
identical way.


>> Until that divergence there is no Moscow man or Washington man, there is
>> still only the Helsinki man regardless of how many bodies are around.
>
> True.

Then if the definition of the Helsinki man is the man who is c

Re: Holiday Exercise

2016-08-08 Thread Brent Meeker
I think Russell is just saying we take it as an added axiom/assumption 
that the duplicated brain/bodies must have separate consciousnesses at 
least as soon as they have different perceptions.  This is exactly what 
you would predict from supposing that consciousness is a product of 
physical processes in the brain - something that is supported by lots 
and lots of evidence.


I don't think this does much to invalidate Bruno's argument.  He just 
wants to show that the physical is derivative, not that it's irrelevant.


Brent


On 8/8/2016 4:06 AM, Bruce Kellett wrote:

On 8/08/2016 8:38 pm, Russell Standish wrote:

On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:

However, still no justification has been given for the assumption
that the duplicated consciousness differentiates on different
inputs. And consciousness is what computationalism is supposed to be
giving an account of.


Obviously different inputs does not entail the differentiation of
consciousness.


In duplication there is still only one consciousness: and as you say, 
different inputs do not entail the differentiation of a single 
consciousness (associated with a single brain/body). So why would it 
be different if the body were also duplicated?



However computational supervenience does imply the
opposite: differentiated consciousness entails a difference in
inputs.


There is no difficulty in understanding that differentiated 
consciousness entails different persons, who may or may not experience 
different inputs, but I doubt that differentiation of consciousness 
necessarily entails different inputs - two people can experience the 
same stimuli.



In the W/M experiment we are asked to suppose that the
duplicated persons do, in fact, notice that they've been teleported to
a different city, and recognise where they've been teleported to.


There is no difficulty in accepting that there is consciousness of two 
cities, but is that one consciousness, or two? You beg the question by 
referring to plural 'persons'.


Bruce


I.e., W/M is a difference that makes a difference.

Cheers







Re: Holiday Exercise

2016-08-08 Thread Bruno Marchal


On 08 Aug 2016, at 01:26, Bruce Kellett wrote:


On 8/08/2016 1:30 am, Bruno Marchal wrote:


But in step 3, I am very careful not to use the notion of  
"consciousness", and instead a simple 3p notion of first person.  
Usually many relate it to consciousness and assume that when the  
guy says "I see Moscow", they are conscious, but that is not needed  
to get the reversal.


Maybe that is the basis of the problem. In step 3 you seem to be  
claiming nothing that could not be achieved by a non-conscious  
machine:


Yes.





take a machine that can take photographs and compare the resulting  
images with a data base of images of certain cities. When a match is  
found, the machine outputs the corresponding name of the city from  
the data base. Send one such machine to Washington and an identical  
machine to Moscow. They will fulfill your requirements: the  
W-machine will output W and the M-machine will output M.


This is what you now seem to be describing. But that is not FPI.


How could the machine predict the result of the match? Give me the  
algorithm used by that machine.
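
For concreteness, a machine of the kind Bruce describes might be  
sketched as follows (a minimal, hypothetical sketch in Python; the  
average-hash scheme, the file names, and the CITY_DB table are  
assumptions made for illustration, not anything specified in the thread):

    # Compare a new photograph against a small database of reference
    # city images, and output the label of the closest match.
    from PIL import Image

    def ahash(path, size=8):
        """Average hash: threshold a downscaled greyscale image on its mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return tuple(p > mean for p in pixels)

    CITY_DB = {"washington.jpg": "W", "moscow.jpg": "M"}   # reference photos

    def identify(photo_path):
        """Return the label of the nearest reference image (Hamming distance)."""
        h = ahash(photo_path)
        def dist(ref):
            return sum(a != b for a, b in zip(h, ahash(ref)))
        return CITY_DB[min(CITY_DB, key=dist)]

Note that such a routine only reports a label after the comparison has  
been made; nothing in it predicts in advance which city it will find  
itself reporting.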





The "P" in the acronym stands for "person", and if the "person" is  
not conscious, it is a zombie and any output you get has no bearing  
on what will happen to conscious persons.


The problem is one of predicting the future first person account.






The zombie machines will probably not be aware of each other, but  
from that you cannot conclude that the conscious persons will not be  
aware of each other, or that consciousness necessarily  
differentiates on different inputs.


Well, you need the inputs to be sufficiently different (like seeing W,  
resp. M) so that the machine can take notice of the difference and  
write a distinct outcome in the diary, of course.


Bruno







Bruce



http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-08 Thread Bruno Marchal


On 07 Aug 2016, at 23:14, Brent Meeker wrote:




On 8/7/2016 11:20 AM, Bruno Marchal wrote:
Not necessarily. A digital computer also requires that time be  
digitized so that its registers run synchronously.  Otherwise "the  
state" is ill defined.  The finite speed of light means that  
spatially separated regions cannot be synchronous.  Even if  
neurons were only ON or OFF, which they aren't, they have  
frequency modulation; they are not synchronous.


A synchronous digital machine can emulate an asynchronous digital  
machine, and that is all that is needed for the reasoning.


If the time variable is continuous, i.e. can't be digitized, I don't  
think you are correct.



Nothing in physics needs to be digital for the computationalist  
hypothesis to be true. In fact, the FPI suggests that the physical must  
have continuous parts, although I have some doubt that it could be  
space or time.


Then, if the brain exploits a continuum which would not be  
FPI-recoverable, we are outside the scope of the computationalist  
theory. Keep in mind that it is my working hypothesis.


Now, as Stathis just said, if the brain exploits the continuum, then  
evolution, mind, and many other things get harder to explain. Biology  
illustrates that nature exploits a lot of redundancy, which would be  
impossible if we needed every decimal to be exact in the continuous  
relations.
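
As an aside, the claim above that a synchronous machine can emulate an  
asynchronous one can be illustrated with a toy event loop: a single  
sequential, deterministic program replays timestamped events in order  
(a minimal sketch in Python; the (time, label) event format is an  
assumption made for illustration):

    import heapq

    def run(events):
        """Replay asynchronous, timestamped events with one synchronous loop."""
        queue = list(events)
        heapq.heapify(queue)          # order events by timestamp
        trace = []
        while queue:
            t, label = heapq.heappop(queue)
            trace.append((t, label))  # one deterministic step per event
        return trace

    # Two 'neurons' firing at unrelated, non-synchronous times:
    print(run([(0.3, "A fires"), (0.1, "B fires"), (0.25, "A fires")]))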


Bruno





Brent

--
You received this message because you are subscribed to the Google  
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it,  
send an email to everything-list+unsubscr...@googlegroups.com.

To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-08 Thread Bruno Marchal


On 07 Aug 2016, at 22:32, Brent Meeker wrote:




On 8/7/2016 7:27 AM, Bruno Marchal wrote:


So I suggest that instead of starting with the hypothesis that  
consciousness is a computation,


Please, I insist that consciousness is NOT a computation.  
Consciousness is a 1p notion, and you cannot identify it with  
*any* 3p.


But then you must say "No." to the doctor, because what he proposes  
is a 3p equivalent substitute for your brain.


On the contrary, once we say "yes" to the doctor, we can know that we  
are not the brain or the body; we own them. We borrow their relative  
appearances, not to be conscious, but to manifest our first person  
experiences relative to the (probable) others in the normal histories.


Bruno





http://iridia.ulb.ac.be/~marchal/





Re: Holiday Exercise

2016-08-08 Thread Stathis Papaioannou
On Monday, 8 August 2016, Brent Meeker  wrote:

>
>
> On 8/7/2016 11:20 AM, Bruno Marchal wrote:
>
>>> Not necessarily. A digital computer also requires that time be digitized
>>> so that its registers run synchronously.  Otherwise "the state" is ill
>>> defined.  The finite speed of light means that spatially separated regions
>>> cannot be synchronous.  Even if neurons were only ON or OFF, which they
>>> aren't, they have frequency modulation; they are not synchronous.
>>>
>>
>> A synchronous digital machine can emulate an asynchronous digital machine,
>> and that is all that is needed for the reasoning.
>>
>
> If the time variable is continuous, i.e. can't be digitized, I don't think
> you are correct.
>

If time is continuous, you would need infinite precision to exactly define
the timing of a neuron's excitation, so you are right, that would not be
digitisable. Practically, however, brains would have to have a non-zero
engineering tolerance, or they would be too unstable. The gravitational
attraction of a passing ant would slightly change the timing of neural
activity, leading to a change in mental state and behaviour.
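
For a rough sense of the scale involved, here is a back-of-the-envelope
sketch (the ant's mass of about 3 mg and a distance of 1 m are assumed
figures):

    # Gravitational acceleration from a passing ant: tiny but non-zero,
    # so infinitely precise neural timing would indeed be perturbed by it.
    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    m_ant = 3e-6     # assumed ant mass: ~3 mg, in kg
    r = 1.0          # assumed distance: 1 m
    a = G * m_ant / r**2
    print(f"{a:.1e} m/s^2")   # prints ~2.0e-16 m/s^2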


-- 
Stathis Papaioannou



Re: Holiday Exercise

2016-08-08 Thread Bruce Kellett

On 8/08/2016 8:38 pm, Russell Standish wrote:

On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:

However, still no justification has been given for the assumption
that the duplicated consciousness differentiates on different
inputs. And consciousness is what computationalism is supposed to be
giving an account of.


Obviously different inputs do not entail the differentiation of
consciousness.


In duplication there is still only one consciousness: and as you say, 
different inputs do not entail the differentiation of a single 
consciousness (associated with a single brain/body). So why would it be 
different if the body were also duplicated?



However computational supervenience does imply the
opposite: differentiated consciousness entails a difference in
inputs.


There is no difficulty in understanding that differentiated 
consciousness entails different persons, who may or may not experience 
different inputs, but I doubt that differentiation of consciousness 
necessarily entails different inputs - two people can experience the 
same stimuli.



In the W/M experiment we are asked to suppose that the
duplicated persons do, in fact, notice that they've been teleported to
a different city, and recognise where they've been teleported to.


There is no difficulty in accepting that there is consciousness of two 
cities, but is that one consciousness, or two? You beg the question by 
referring to plural 'persons'.


Bruce


I.e., W/M is a difference that makes a difference.

Cheers





Re: Holiday Exercise

2016-08-08 Thread Russell Standish
On Sun, Aug 07, 2016 at 09:24:31AM +1000, Bruce Kellett wrote:
> 
> However, still no justification has been given for the assumption
> that the duplicated consciousness differentiates on different
> inputs. And consciousness is what computationalism is supposed to be
> giving an account of.
> 

Obviously different inputs do not entail the differentiation of
consciousness. However computational supervenience does imply the
opposite: differentiated consciousness entails a difference in
inputs. In the W/M experiment we are asked to suppose that the
duplicated persons do, in fact, notice that they've been teleported to
a different city, and recognise where they've been teleported to.

I.e., W/M is a difference that makes a difference.

Cheers

-- 


Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Senior Research Fellow        hpco...@hpcoders.com.au
Economics, Kingston University         http://www.hpcoders.com.au

