Re: David Eagleman on CHOICE

2011-10-08 Thread Bruno Marchal


On 08 Oct 2011, at 04:11, Stathis Papaioannou wrote:

On Tue, Oct 4, 2011 at 3:02 AM, Bruno Marchal marc...@ulb.ac.be wrote:



Nevertheless, you talk about swapping your brain for a suitably
designed computer and consciousness surviving teleportation and
pauses/restarts of the computer.


Yes.




As a starting point, these ideas
assume the physical supervenience thesis.


It does not. At the start it is neutral on this. A computationalist practitioner (knowing UDA, for example) can associate his consciousness with all the computations going through his state, and believe that he will survive locally on the normal computations (the usual physical reality) only because all the pieces of matter used by the doctors share his normal histories, and emulate the right computation at the right level. But consciousness is not thereby attributed to some physical happening; it is attributed to the infinitely many arithmetical relations defining his possible and most probable histories.

Only in step 8 is physical supervenience assumed, and only to get the reductio ad absurdum.

There is no [consciousness] evolving in [time and space]. There is only [consciousness of time and space], evolving (from the internal indexical perspective), but relying on, and associated with, infinities of arithmetical relations (in the 3-view).


The progression surely must be to start by assuming that your mind is
generated as a result of brain activity, rather than an immaterial
soul.


Why?
Given the number of Aristotelians, it is wise to let them interpret it in that way.


But I don't do it. I am neutral, agnostic. There is no need to assume a primitively real physical universe at the start. I assume "yes doctor" and the Church thesis.
Saying "yes doctor" does not imply that we believe in a primitively material doctor, nor in primitively material brains. We need only stable patterns, up to the level we bet on.
We need a sufficiently deterministic neighborhood we can trust, but it does not matter where that trust comes from (a physical world, a wavy multiverse, the numbers, ...).






You then consider whether you would accept a computerised brain
and retain consciousness. If you decide yes, you accept
computationalism, and if you accept computationalism you can show that
physical supervenience is problematic.


Yes. But I bring up physical supervenience, including the assumption of a primary physical universe, explicitly only in step 7 (with some role), and eliminate it (assuming it again, but for the reductio ad absurdum) in step 8.





You then adjust your theory to
keep computationalism and drop physical supervenience or drop
computationalism altogether. This is the sequence in which most people
would think about it.


Hmm..., comp admits exactly the same definition throughout all of UDA. In some presentations I make it explicit that science has not found any evidence for a primitive material reality. The founders of QM doubted this too, for physical reasons. Physicists never use such a hypothesis, except as a tool in everyday life, like each of us. It is an obvious extrapolation from how animals conceive their neighborhoods. I thought, naively, that all scientists had known since Plato that physicalism and the existence of a *primary* physical universe are hypotheses, and that it is just hard to decide on this before some reasonable progress is made on the mind-body problem.
I was naive; it took me time to understand that for some scientists such primitive physical existence was an unquestionable taboo. In "it from bit", Wheeler did cast some doubt, though. Tegmark and Schmidhuber were close, but they dismiss the first person, for which comp illustrates a key role.


But you are right, most people will look at it in that sequence. Most Aristotelians confuse mechanism and materialism. And mechanism is often used to eliminate the notion of soul from the materialist view. But digital mechanism and weak materialism don't fit well together. It defies Occam. And digital mechanism shows that machines have a quite reasonable notion of soul.


Bruno






--
Stathis Papaioannou

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.




http://iridia.ulb.ac.be/~marchal/






Re: David Eagleman on CHOICE

2011-10-07 Thread Stathis Papaioannou
On Tue, Oct 4, 2011 at 3:02 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 Nevertheless, you talk about swapping your brain for a suitably
 designed computer and consciousness surviving teleportation and
 pauses/restarts of the computer.

 Yes.



 As a starting point, these ideas
 assume the physical supervenience thesis.

 It does not. At the start it is neutral on this. A computationalist
 practitioner (knowing UDA, for example) can associate his consciousness with
 all the computations going through its state, and believe that he will
 survive locally on the normal computations (the usual physical reality)
 only because all the pieces of matter used by the doctors share his normal
 histories, and emulate the right computation on the right level. But the
 consciousness is not attributed to some physical happening hereby, it is
 attributed to the infinitely many arithmetical relations defining his
 possible and most probable histories.
 Only in step 8 is the physical supervenience assumed, but only to get the
 reductio ad absurdum.

 There is no [consciousness] evolving in [time and space]. There is only
 [consciousness of time and space], evolving (from the internal indexical
 perspective), but relying and associated on infinities of arithmetical
 relations (in the 3-view).

The progression surely must be to start by assuming that your mind is
generated as a result of brain activity, rather than an immaterial
soul. You then consider whether you would accept a computerised brain
and retain consciousness. If you decide yes, you accept
computationalism, and if you accept computationalism you can show that
physical supervenience is problematic. You then adjust your theory to
keep computationalism and drop physical supervenience or drop
computationalism altogether. This is the sequence in which most people
would think about it.


-- 
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-10-07 Thread meekerdb

On 10/7/2011 7:11 PM, Stathis Papaioannou wrote:

On Tue, Oct 4, 2011 at 3:02 AM, Bruno Marchal marc...@ulb.ac.be wrote:


Nevertheless, you talk about swapping your brain for a suitably
designed computer and consciousness surviving teleportation and
pauses/restarts of the computer.

Yes.




As a starting point, these ideas
assume the physical supervenience thesis.

It does not. At the start it is neutral on this. A computationalist
practitioner (knowing UDA, for example) can associate his consciousness with
all the computations going through its state, and believe that he will
survive locally on the normal computations (the usual physical reality)
only because all the pieces of matter used by the doctors share his normal
histories, and emulate the right computation on the right level. But the
consciousness is not attributed to some physical happening hereby, it is
attributed to the infinitely many arithmetical relations defining his
possible and most probable histories.
Only in step 8 is the physical supervenience assumed, but only to get the
reductio ad absurdum.

There is no [consciousness] evolving in [time and space]. There is only
[consciousness of time and space], evolving (from the internal indexical
perspective), but relying and associated on infinities of arithmetical
relations (in the 3-view).

The progression surely must be to start by assuming that your mind is
generated as a result of brain activity, rather than an immaterial
soul. You then consider whether you would accept a computerised brain
and retain consciousness.


There might be two different choices here.  One would be a kind of artificial neuron, or bundle of neurons, that would be physically placed in your head and designed with the same connectivity as your natural neurons.  The other would be a transceiver that would send the afferent signals intended for your brain out to a computer outside your body, which would do some calculation emulating your brain and then send the result back to the efferent nerve connections.  Within the multiverse that is being instantiated by the UD these might correspond to very different states of computation even though they are the same so far as your input/output is concerned.
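Brent's distinction can be made concrete with a toy sketch (all names and dynamics here are invented for illustration; this is not a model of real neurons): two implementations with identical input/output behaviour whose internal computational state histories nonetheless differ.

```python
def threshold(x):
    """Shared input/output specification: fire iff the summed input reaches 2."""
    return 1 if x >= 2 else 0

class InlineNeuron:
    """'Artificial neuron in the head': computes the sum in one local step."""
    def __init__(self):
        self.trace = []                       # record of internal states
    def step(self, a, b):
        s = a + b
        self.trace.append(('sum', s))
        return threshold(s)

class RemoteEmulation:
    """'Transceiver' model: ship inputs out, emulate in finer steps, ship back."""
    def __init__(self):
        self.trace = []
    def step(self, a, b):
        self.trace.append(('send', (a, b)))   # afferent signals sent out
        s = 0
        for x in (a, b):                      # remote emulation, more states
            s += x
            self.trace.append(('acc', s))
        out = threshold(s)
        self.trace.append(('recv', out))      # efferent result sent back
        return out

inline, remote = InlineNeuron(), RemoteEmulation()
inputs = [(0, 1), (1, 1), (2, 1)]
out1 = [inline.step(a, b) for a, b in inputs]
out2 = [remote.step(a, b) for a, b in inputs]
assert out1 == out2                  # indistinguishable at the I/O level
assert inline.trace != remote.trace  # but very different state histories
```

At the behavioural level the two are interchangeable; the traces show they pass through different computational states, which is exactly the difference Brent suggests might matter inside the UD.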


Brent


If you decide yes, you accept
computationalism, and if you accept computationalism you can show that
physical supervenience is problematic. You then adjust your theory to
keep computationalism and drop physical supervenience or drop
computationalism altogether. This is the sequence in which most people
would think about it.







Re: David Eagleman on CHOICE

2011-10-04 Thread Bruno Marchal


On 04 Oct 2011, at 02:27, smi...@zonnet.nl wrote:

Ok, so this is where I would disagree. It only seems that to define a computation you need to look at the time evolution, because a snapshot doesn't contain enough information about the dynamics of the system. But here one considers all of the enormous amount of information stored in the brain, and that is a mistake, as we are only ever aware of a small fraction of this information.


So, the OM has to be defined as some properly coarse-grained picture of the full information content of the entire brain. In the MWI picture, the full brain-environment state is a state of the form:


Sum over i of |brain_i> |environment_i>

where all the |brain_i> define the same macrostate. This state also contains the information about how the brain has computed the output from the input, so it is a valid computational state. If you were to observe exactly which of the many microstates the brain is in, then you would lose this information. But no human can ever observe this information in another brain (obviously it wouldn't fit in his brain).


So, the simplistic picture of some machine being in a precisely defined bit state is misleading. That would only be accessible to a superobserver who has much more memory than that machine. The machine's subjective world should be thought of as a set of parallel worlds, each having a slightly different information content entangled with the environment.
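For readability, the same state in LaTeX (a notational transcription of Saibal's formula above; the ket brackets tend to get lost in plain-text email):

```latex
|\Psi\rangle \,=\, \sum_i |\mathrm{brain}_i\rangle \otimes |\mathrm{environment}_i\rangle
```

with all the $|\mathrm{brain}_i\rangle$ belonging to the same macrostate, as stated above.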


I agree. Even without QM, with just DM, once we get the many-dreams interpretation of arithmetic (to be short).


Bruno






Saibal

Citeren meekerdb meeke...@verizon.net:

My point is not that a snapshot brain (or computer) state lacks  
content, but that if it is an emulation of a brain (or a real  
brain) the snapshot cannot be an observer moment or a thought.  The  
latter must have much longer duration and overlap one another in  
time.  I think there has been a casual, but wrong, implicit  
identification of the discrete states of a Turing machine emulating  
a brain with some rather loosely defined observer moments.   
That's why I thought Eagleman's talk was interesting.


Brent

On 10/3/2011 8:01 AM, smi...@zonnet.nl wrote:
I can't answer for Brent, but my take on this is that what matters is whether the state of the system at any time represents a computation being performed. So, this whole duration requirement is not necessary; a snapshot of the system contains information about what program is being run. So, it is a mistake to think that OMs lack content and are therefore not computational states.


Saibal

Citeren Stathis Papaioannou stath...@gmail.com:

On Mon, Oct 3, 2011 at 9:47 AM, meekerdb meeke...@verizon.net  
wrote:



But this doesn't change the argument that, to the extent that the physics allows it, the machine states may be arbitrarily divided. It then becomes a matter of definition whether we say the conscious states can also be arbitrarily divided. If stream of consciousness A-B-C supervenes on machine state a-b-c where A-B, B-C, A-B-C, but not A, B or C alone are of sufficient duration to count as consciousness should we say the observer moments are A-B, B-C and A-B-C, or should we say that the observer moments are A, B, C? I think it's simpler to say that the atomic observer moments are A, B, C even though individually they lack content.




I think we've discussed this before.  If you define them as A, B, C then the lack of content means they don't have inherent order; whereas AB, BC, CD, ... do have inherent order because they overlap.  I don't think this affects the argument except to note that OMs are not the same as computational states.


Do you think that if you insert pauses between a, b and c so that
there is no overlap you create a zombie?


--
Stathis Papaioannou






Re: David Eagleman on CHOICE

2011-10-04 Thread Bruno Marchal


On 03 Oct 2011, at 19:12, meekerdb wrote:


On 10/3/2011 9:38 AM, Bruno Marchal wrote:


On 03 Oct 2011, at 00:47, meekerdb wrote:


On 10/2/2011 7:13 AM, Stathis Papaioannou wrote:
On Sun, Oct 2, 2011 at 3:01 AM, meekerdb meeke...@verizon.net wrote:


It's a strange, almost paradoxical result but I think observer moments can be sub-conscious. If we say the minimum duration of a conscious moment is 100ms then 99ms and the remaining 1ms of this can occur at different times, perhaps billions of years of real time apart, perhaps simultaneously or in the reverse order. You would have the experience provided only that the full 100ms even if broken up into infinitesimal intervals occurs somewhere, sometime.



That sounds like a temporal homunculus.  :-)

Note that on a nanosecond scale there is no state of the brain. Relativity applies to brains too, and so the time order of events on opposite sides of your head is only defined to within about a nanosecond. The brain is limited for technical reasons, relativity being the least of them.


Sure.  Action potentials are only a few hundred meters/sec.


It isn't possible to stop it for a microsecond and restart it at exactly the same state. With a computer you can do this although you are limited to discrete digital states: you can't save the state as logic circuits are transitioning from 1 to 0.


But you can do it, and in fact it's implicit in a Turing machine, i.e. an abstract computation.  So I'm wondering what consequences this has for Bruno's idea that you are a bundle of computations that are passing through your current state?


Some care has to be taken with the wording. With the computational supervenience thesis, you are not "a bundle of computations passing through your current state"; you (the 1-you) are a person, with referential and self-referential means,


I thought you were trying to explain what a person is in terms of arithmetic and computations.  Now you seem to be invoking person as a separate entity.


I am not sure I understand you. In both UDA and AUDA I define a notion of person. In UDA I use the notion of a personal diary or memory being annihilated and reconstituted, and in AUDA I use the theory of machine self-reference. This relates that "separate entity" to arithmetic, even if the relations are less trivial than assuming some link between mind and the instantiation of a computation.







and that 1-you only supervenes on that bundle of computations. Your actions and decisions, through the computational state of the self-referential programs, can "select" among quite different bundles of computations.


You put "select" in scare quotes.  So are you saying that you select (via free will?) which bundles of computations you supervene on?  Or which are your most probable continuation?


Both. You choose between being duplicated in Washington/Moscow or Sydney/Beijing. Does that choice influence your future? If you choose Sydney/Beijing, you will still find yourself in Sydney or in Beijing, but this you cannot influence.
Of course a sort of God could see all that happened in your brain, and determine your choice, but that God is not available to you, and your choice remains a free choice, in the compatibilist approach to free will.






You are a living conscious person with partial free will and taxes, and gravitational constraints, and things like that; apparently, you can memorize them, do planning, scheduling, etc. As UMs knowing we are UMs (like any LUMs), we know we can change ourselves; it is part of our first-personhood.






The computational states are sharp, discrete things.  The brain states are fuzzy, distributed things.


Brain states are computational states. Just take a Turing machine emulating a brain (at the right level).


A crisp computational state can represent a fuzzy brain state, and can also belong to a fuzzy set of crisp states, which is relevant for the 1-p statistics.


Fuzzy Turing machines are Turing emulable, like quantum computers are Turing emulable too, despite the extravagant relative slowdown that we can suspect.


Yes, I understand that.  But brain states are not states of  
consciousness, i.e. thoughts or observer moments.


I think that I will abandon the notion of OMs, at least for a while. It is quite misleading in the context of the comp-supervenience thesis. I thought that I could use it by distinguishing 3-OMs (computational states) and 1-OMs (the subjectivity of someone going through those states). But the subjectivity is related to the whole set of arithmetical neighborhoods which makes that state an element of many computations.
I think that I have to dig deeper into the semantics of the X1* logics (the true (driven by G*) logic of Bp & Dt & p), to see if some sense can be retrieved for Bostrom's (first person) OMs.


Bruno




Brent


Re: David Eagleman on CHOICE

2011-10-03 Thread Stathis Papaioannou
On Mon, Oct 3, 2011 at 9:47 AM, meekerdb meeke...@verizon.net wrote:

 But this doesn't
 change the argument that, to the extent that the physics allows it,
 the machine states may be arbitrarily divided. It then becomes a
 matter of definition whether we say the conscious states can also be
 arbitrarily divided. If stream of consciousness A-B-C supervenes on
 machine state a-b-c where A-B, B-C, A-B-C, but not A, B or C alone are
 of sufficient duration to count as consciousness should we say the
 observer moments are A-B, B-C and A-B-C, or should we say that the
 observer moments are A, B, C? I think it's simpler to say that the
 atomic observer moments are A, B, C even though individually they lack
 content.



 I think we've discussed this before.  If you define them as A, B, C then the lack of content means they don't have inherent order; whereas AB, BC, CD, ... do have inherent order because they overlap.  I don't think this affects the argument except to note that OMs are not the same as computational states.

Do you think that if you insert pauses between a, b and c so that
there is no overlap you create a zombie?
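As a purely mechanical illustration of Stathis's question (a toy sketch with invented names; it says nothing about consciousness itself): for a discrete machine, inserting pauses between state transitions leaves the sequence of computational states a, b, c, ... exactly as it was.

```python
import time

def fib_step(state):
    """One discrete transition of a toy machine stepping through Fibonacci pairs."""
    x, y = state
    return (y, x + y)

def run(state, steps, pause=0.0):
    """Run the machine, optionally halting it between successive states."""
    trace = [state]
    for _ in range(steps):
        if pause:
            time.sleep(pause)   # machine frozen between states
        state = fib_step(state)
        trace.append(state)
    return trace

continuous = run((0, 1), 5)
interrupted = run((0, 1), 5, pause=0.01)
assert continuous == interrupted   # pauses leave the state sequence unchanged
```

At the level of the abstract computation the two runs are the same object; whatever one says about the consciousness supervening on them has to be argued on other grounds, which is exactly the point in dispute.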


-- 
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-10-03 Thread smitra
I can't answer for Brent, but my take on this is that what matters is whether the state of the system at any time represents a computation being performed. So, this whole duration requirement is not necessary; a snapshot of the system contains information about what program is being run. So, it is a mistake to think that OMs lack content and are therefore not computational states.


Saibal

Citeren Stathis Papaioannou stath...@gmail.com:


On Mon, Oct 3, 2011 at 9:47 AM, meekerdb meeke...@verizon.net wrote:


But this doesn't
change the argument that, to the extent that the physics allows it,
the machine states may be arbitrarily divided. It then becomes a
matter of definition whether we say the conscious states can also be
arbitrarily divided. If stream of consciousness A-B-C supervenes on
machine state a-b-c where A-B, B-C, A-B-C, but not A, B or C alone are
of sufficient duration to count as consciousness should we say the
observer moments are A-B, B-C and A-B-C, or should we say that the
observer moments are A, B, C? I think it's simpler to say that the
atomic observer moments are A, B, C even though individually they lack
content.




I think we've discussed this before.  If you define them as A, B, C then the lack of content means they don't have inherent order; whereas AB, BC, CD, ... do have inherent order because they overlap.  I don't think this affects the argument except to note that OMs are not the same as computational states.


Do you think that if you insert pauses between a, b and c so that
there is no overlap you create a zombie?


--
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-10-03 Thread Bruno Marchal


On 02 Oct 2011, at 16:21, Stathis Papaioannou wrote:

On Sun, Oct 2, 2011 at 4:16 AM, Bruno Marchal marc...@ulb.ac.be  
wrote:


It's a strange, almost paradoxical result but I think observer moments can be sub-conscious. If we say the minimum duration of a conscious moment is 100ms then 99ms and the remaining 1ms of this can occur at different times, perhaps billions of years of real time apart, perhaps simultaneously or in the reverse order. You would have the experience provided only that the full 100ms even if broken up into infinitesimal intervals occurs somewhere, sometime.



I think that you are crossing the limit of your pedagogical use of the physical supervenience thesis. You might be led to a direct contradiction, which might lead to a new proof of its inconsistency.
Consciousness cannot be associated with any particular implementation (physical or not) of a computation. It is related to an infinity of computations, structured by the self (or possible self-reference).


Nevertheless, you talk about swapping your brain for a suitably
designed computer and consciousness surviving teleportation and
pauses/restarts of the computer.


Yes.




As a starting point, these ideas
assume the physical supervenience thesis.


It does not. At the start it is neutral on this. A computationalist practitioner (knowing UDA, for example) can associate his consciousness with all the computations going through his state, and believe that he will survive locally on the normal computations (the usual physical reality) only because all the pieces of matter used by the doctors share his normal histories, and emulate the right computation at the right level. But consciousness is not thereby attributed to some physical happening; it is attributed to the infinitely many arithmetical relations defining his possible and most probable histories.
Only in step 8 is physical supervenience assumed, and only to get the reductio ad absurdum.


There is no [consciousness] evolving in [time and space]. There is only [consciousness of time and space], evolving (from the internal indexical perspective), but relying on, and associated with, infinities of arithmetical relations (in the 3-view).


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: David Eagleman on CHOICE

2011-10-03 Thread Bruno Marchal


On 03 Oct 2011, at 00:47, meekerdb wrote:


On 10/2/2011 7:13 AM, Stathis Papaioannou wrote:
On Sun, Oct 2, 2011 at 3:01 AM, meekerdb meeke...@verizon.net wrote:


It's a strange, almost paradoxical result but I think observer moments can be sub-conscious. If we say the minimum duration of a conscious moment is 100ms then 99ms and the remaining 1ms of this can occur at different times, perhaps billions of years of real time apart, perhaps simultaneously or in the reverse order. You would have the experience provided only that the full 100ms even if broken up into infinitesimal intervals occurs somewhere, sometime.



That sounds like a temporal homunculus.  :-)

Note that on a nanosecond scale there is no state of the brain. Relativity applies to brains too, and so the time order of events on opposite sides of your head is only defined to within about a nanosecond. The brain is limited for technical reasons, relativity being the least of them.


Sure.  Action potentials are only few hundred meters/sec.


It isn't possible to stop it for a microsecond and restart it at exactly the same state. With a computer you can do this although you are limited to discrete digital states: you can't save the state as logic circuits are transitioning from 1 to 0.


But you can do it, and in fact it's implicit in a Turing machine,  
i.e. an abstract computation.  So I'm wondering what consequences  
this has for Bruno's idea that you are a bundle of computations  
that are passing through your current state?


Some care has to be taken with the wording. With the computational supervenience thesis, you are not "a bundle of computations passing through your current state"; you (the 1-you) are a person, with referential and self-referential means, and that 1-you only supervenes on that bundle of computations. Your actions and decisions, through the computational state of the self-referential programs, can "select" among quite different bundles of computations. You are a living conscious person with partial free will and taxes, and gravitational constraints, and things like that; apparently, you can memorize them, do planning, scheduling, etc. As UMs knowing we are UMs (like any LUMs), we know we can change ourselves; it is part of our first-personhood.






The computational states are sharp, discrete things.  The brain states are fuzzy, distributed things.


Brain states are computational states. Just take a Turing machine emulating a brain (at the right level).


A crisp computational state can represent a fuzzy brain state, and can also belong to a fuzzy set of crisp states, which is relevant for the 1-p statistics.


Fuzzy Turing machines are Turing emulable, like quantum computers are Turing emulable too, despite the extravagant relative slowdown that we can suspect.


Bruno






But this doesn't change the argument that, to the extent that the physics allows it, the machine states may be arbitrarily divided. It then becomes a matter of definition whether we say the conscious states can also be arbitrarily divided. If stream of consciousness A-B-C supervenes on machine state a-b-c where A-B, B-C, A-B-C, but not A, B or C alone are of sufficient duration to count as consciousness should we say the observer moments are A-B, B-C and A-B-C, or should we say that the observer moments are A, B, C? I think it's simpler to say that the atomic observer moments are A, B, C even though individually they lack content.




I think we've discussed this before.  If you define them as A, B, C then the lack of content means they don't have inherent order; whereas AB, BC, CD, ... do have inherent order because they overlap.  I don't think this affects the argument except to note that OMs are not the same as computational states.


Brent





http://iridia.ulb.ac.be/~marchal/






Re: David Eagleman on CHOICE

2011-10-03 Thread meekerdb

On 10/3/2011 4:48 AM, Stathis Papaioannou wrote:

On Mon, Oct 3, 2011 at 9:47 AM, meekerdb meeke...@verizon.net wrote:


But this doesn't
change the argument that, to the extent that the physics allows it,
the machine states may be arbitrarily divided. It then becomes a
matter of definition whether we say the conscious states can also be
arbitrarily divided. If stream of consciousness A-B-C supervenes on
machine state a-b-c where A-B, B-C, A-B-C, but not A, B or C alone are
of sufficient duration to count as consciousness, should we say the
observer moments are A-B, B-C and A-B-C, or should we say that the
observer moments are A, B, C? I think it's simpler to say that the
atomic observer moments are A, B, C even though individually they lack
content.



I think we've discussed this before.  If you define them as A, B, C then the
lack of content means they don't have inherent order; whereas AB, BC,
CD,... do have inherent order because they overlap.  I don't think this
affects the argument except to note that OMs are not the same as
computational states.

Do you think that if you insert pauses between a, b and c so that
there is no overlap you create a zombie?


I have trouble thinking how you would create those pauses.  As a classical device a brain 
or a computer cannot just be stopped and restarted.  You have to save all the variable 
values *and* their first derivatives.  The abstraction of what the computer (or brain) 
does as a Turing computation ignores the derivatives and just considers a sequence of 
discrete states.  In the real computer the CPU clock provides the physical connection 
between successive states.  In the brain it's a lot of distributed action potentials and 
chemical diffusion in parallel.  Of course a computer can emulate what the brain or the 
simpler computer is doing by simulating all the rates-of-change and intermediate states at 
some finer level of time and space resolution.  You could create pauses in that level of 
emulation.  But those states don't correspond to Observer Moments - something in 
consciousness.  In Bruno's Washington/Moscow thought experiments it seems obvious to me 
that he would lose some period of consciousness in being transported; e.g. at least 80ms 
according to Eagleman.  So if you teleported every 80ms, you would prevent consciousness.  
You wouldn't create a zombie though, just an unconscious person.
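Brent's point about needing the variable values *and* their first derivatives can be sketched with a toy continuous simulation (hypothetical code, not from the thread): to pause and resume an Euler-integrated oscillator, the checkpoint must include the rate v, not just the value x. The discrete Turing-style abstraction hides this because the state tuple already carries everything.

```python
# Toy illustration (hypothetical): checkpointing a continuous simulation
# requires both the variable values and their first derivatives (rates).

def step(x, v, dt=0.001, k=1.0):
    """One Euler step of a unit-mass harmonic oscillator: x'' = -k x."""
    return x + v * dt, v - k * x * dt

# Run 1000 steps without interruption.
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = step(x, v)

# Run again, but "pause" halfway: save (x, v), discard everything, restore.
x2, v2 = 1.0, 0.0
for _ in range(500):
    x2, v2 = step(x2, v2)
checkpoint = (x2, v2)          # must include v (the derivative), not just x
x2, v2 = checkpoint            # restart from the saved state
for _ in range(500):
    x2, v2 = step(x2, v2)

assert (x, v) == (x2, v2)      # paused and unpaused runs are identical
```

Saving x alone would restart the oscillator with the wrong velocity and the trajectories would diverge, which is the sense in which a classical dynamical system cannot "just be stopped and restarted".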


Brent




Re: David Eagleman on CHOICE

2011-10-03 Thread meekerdb

On 10/3/2011 9:38 AM, Bruno Marchal wrote:


On 03 Oct 2011, at 00:47, meekerdb wrote:


On 10/2/2011 7:13 AM, Stathis Papaioannou wrote:

On Sun, Oct 2, 2011 at 3:01 AM, meekerdb meeke...@verizon.net wrote:


It's a strange, almost paradoxical result but I think observer moments
can be sub-conscious. If we say the minimum duration of a conscious
moment is 100ms then 99ms and the remaining 1ms of this can occur at
different times, perhaps billions of years of real time apart, perhaps
simultaneously or in the reverse order. You would have the experience
provided only that the full 100ms even if broken up into infinitesimal
intervals occurs somewhere, sometime.



That sounds like a temporal homunculus.  :-)

Note that on a nanosecond scale there is no state of the brain.
 Relativity applies to brains too, and so the time order of events on
opposite sides of your head is only defined to within about a nanosecond.

The brain is limited for technical reasons, relativity being the least
of them.


Sure.  Action potentials travel at only a few hundred meters/sec.


It isn't possible to stop it for a microsecond and restart it
at exactly the same state. With a computer you can do this although
you are limited to discrete digital states: you can't save the state
as logic circuits are transitioning from 1 to 0.


But you can do it, and in fact it's implicit in a Turing machine, i.e. an abstract 
computation.  So I'm wondering what consequences this has for Bruno's idea that you 
are a bundle of computations that are passing through your current state?


Some care has to be taken with the wording. With the computational supervenience thesis, 
you are not a bundle of computations that are passing through your current state; 
you (the 1-you) are a person, with referential and self-referential means 


I thought you were trying to explain what a person is in terms of arithmetic and 
computations.  Now you seem to be invoking person as a separate entity.


and that 1-you only supervenes on that bundle of computations. Your actions and 
decisions, through the computational state of the self-referential programs, can 
select among quite different bundles of computations. 


You put select in scare quotes.  So are you saying that you select (via free will?) 
which bundles of computations you supervene on?  or which are your most probable 
continuation?


You are a living conscious person with partial free will and taxes, and gravitational 
constraints, and things like that; apparently, you can memorize them, do planning, 
scheduling, etc. As UMs knowing we are UMs (like any LUMs), we know we can change 
ourselves; it is part of our first personhood.






 The computational states are sharp, discrete things.  The brain states are fuzzy, 
distributed things.


Brain states are computational states. Just take a Turing machine emulating a brain (at 
the right level).


A crisp computational state can represent a fuzzy brain state, and also can belong to a 
fuzzy set of crisp states, which is relevant for the 1-p statistics.


Fuzzy Turing machines are Turing emulable, just as quantum computers are Turing emulable too, 
despite the extravagant relative slowdown that we can suspect.


Yes, I understand that.  But brain states are not states of consciousness, i.e. 
thoughts or observer moments.


Brent




Re: David Eagleman on CHOICE

2011-10-03 Thread meekerdb
My point is not that a snapshot brain (or computer) state lacks content, but that if it is 
an emulation of a brain (or a real brain) the snapshot cannot be an observer moment or a 
thought.  The latter must have much longer duration and overlap one another in time.  I 
think there has been a casual, but wrong, implicit identification of the discrete states 
of a Turing machine emulating a brain with some rather loosely defined observer 
moments.  That's why I thought Eagleman's talk was interesting.


Brent

On 10/3/2011 8:01 AM, smi...@zonnet.nl wrote:
I can't answer for Brent, but my take on this is that what matters is whether the state 
of the system at any time represents a computation being performed. So this whole duration 
requirement is not necessary: a snapshot of the system contains information about what 
program is being run. So it is a mistake to think that OMs lack content and are 
therefore not computational states.


Saibal

Quoting Stathis Papaioannou stath...@gmail.com:


On Mon, Oct 3, 2011 at 9:47 AM, meekerdb meeke...@verizon.net wrote:


But this doesn't
change the argument that, to the extent that the physics allows it,
the machine states may be arbitrarily divided. It then becomes a
matter of definition whether we say the conscious states can also be
arbitrarily divided. If stream of consciousness A-B-C supervenes on
machine state a-b-c where A-B, B-C, A-B-C, but not A, B or C alone are
of sufficient duration to count as consciousness, should we say the
observer moments are A-B, B-C and A-B-C, or should we say that the
observer moments are A, B, C? I think it's simpler to say that the
atomic observer moments are A, B, C even though individually they lack
content.




I think we've discussed this before.  If you define them as A, B, C then the
lack of content means they don't have inherent order; whereas AB, BC,
CD,... do have inherent order because they overlap.  I don't think this
affects the argument except to note that OMs are not the same as
computational states.


Do you think that if you insert pauses between a, b and c so that
there is no overlap you create a zombie?


--
Stathis Papaioannou












Re: David Eagleman on CHOICE

2011-10-03 Thread Stathis Papaioannou
On Tue, Oct 4, 2011 at 3:58 AM, meekerdb meeke...@verizon.net wrote:
 On 10/3/2011 4:48 AM, Stathis Papaioannou wrote:

 On Mon, Oct 3, 2011 at 9:47 AM, meekerdb meeke...@verizon.net wrote:

 But this doesn't
 change the argument that, to the extent that the physics allows it,
 the machine states may be arbitrarily divided. It then becomes a
 matter of definition whether we say the conscious states can also be
 arbitrarily divided. If stream of consciousness A-B-C supervenes on
 machine state a-b-c where A-B, B-C, A-B-C, but not A, B or C alone are
 of sufficient duration to count as consciousness should we say the
 observer moments are A-B, B-C and A-B-C, or should we say that the
 observer moments are A, B, C? I think it's simpler to say that the
 atomic observer moments are A, B, C even though individually they lack
 content.


 I think we've discussed this before.  If you define them as A, B, C then
 the lack of content means they don't have inherent order; whereas AB, BC,
 CD,... do have inherent order because they overlap.  I don't think this
 affects the argument except to note that OMs are not the same as
 computational states.

 Do you think that if you insert pauses between a, b and c so that
 there is no overlap you create a zombie?


 I have trouble thinking how you would create those pauses.  As a classical
 device a brain or a computer cannot just be stopped and restarted.  You have
 to save all the variable values *and* their first derivatives.  The
 abstraction of what the computer (or brain) does as a Turing computation
 ignores the derivatives and just considers a sequence of discrete states.
  In the real computer the CPU clock provides the physical connection between
 successive states.  In the brain it's a lot of distributed action potentials
 and chemical diffusion in parallel.  Of course a computer can emulate what
 the brain or the simpler computer is doing by simulating all the
 rates-of-change and intermediate states at some finer level of time and
 space resolution.  You could create pauses in that level of emulation.  But
 those states don't correspond to Observer Moments - something in
 consciousness.  In Bruno's Washington/Moscow thought experiments it seems
 obvious to me that he would lose some period of consciousness in being
 transported; e.g. at least 80ms according to Eagleman.  So if you teleported
 every 80ms, you would prevent consciousness.  You wouldn't create a zombie
 though, just an unconscious person.

Computers are turned on and off all the time, saving their last state
to disc and taking up where they left off in the computation. Smart
phones with solid state drives do this very quickly. There is no
reason why a person with an artificial brain couldn't turn on and off
every 80ms. If the off interval were short enough an external observer
would not notice anything unusual. Would he be a zombie, behaving
normally but lacking consciousness?
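Stathis's checkpoint-and-resume scenario can be sketched as a toy deterministic state machine (hypothetical code, not a brain model): pausing it at arbitrary points, saving only its state, and restarting produces an output trace indistinguishable to an external observer from an uninterrupted run.

```python
# Sketch (hypothetical): a deterministic state machine run continuously
# vs. with "pauses" (checkpoint/restore) every few steps produces the
# same sequence of observable outputs.

def machine(state):
    """One step of an arbitrary deterministic update; emits an output."""
    new_state = (state * 31 + 7) % 1009
    return new_state, new_state % 10   # (next state, observable output)

def run(initial, steps, pause_every=None):
    state, outputs = initial, []
    for i in range(steps):
        if pause_every and i % pause_every == 0:
            saved = state        # power off: only the state survives
            state = saved        # power on: restore and continue
        state, out = machine(state)
        outputs.append(out)
    return outputs

# An external observer sees the same behaviour either way.
assert run(1, 100) == run(1, 100, pause_every=3)
```

The sketch takes no stand on whether anything is experienced during the gaps; it only shows that third-person behaviour is invariant under such pauses, which is exactly what makes the zombie question non-trivial.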


-- 
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-10-02 Thread Stathis Papaioannou
On Sun, Oct 2, 2011 at 3:01 AM, meekerdb meeke...@verizon.net wrote:

 It's a strange, almost paradoxical result but I think observer moments
 can be sub-conscious. If we say the minimum duration of a conscious
 moment is 100ms then 99ms and the remaining 1ms of this can occur at
 different times, perhaps billions of years of real time apart, perhaps
 simultaneously or in the reverse order. You would have the experience
 provided only that the full 100ms even if broken up into infinitesimal
 intervals occurs somewhere, sometime.


 That sounds like a temporal homunculus.  :-)

 Note that on a nanosecond scale there is no state of the brain.
  Relativity applies to brains too, and so the time order of events on
 opposite sides of your head is only defined to within about a nanosecond.

The brain is limited for technical reasons, relativity being the least
of them. It isn't possible to stop it for a microsecond and restart it
at exactly the same state. With a computer you can do this although
you are limited to discrete digital states: you can't save the state
as logic circuits are transitioning from 1 to 0. But this doesn't
change the argument that, to the extent that the physics allows it,
the machine states may be arbitrarily divided. It then becomes a
matter of definition whether we say the conscious states can also be
arbitrarily divided. If stream of consciousness A-B-C supervenes on
machine state a-b-c where A-B, B-C, A-B-C, but not A, B or C alone are
of sufficient duration to count as consciousness, should we say the
observer moments are A-B, B-C and A-B-C, or should we say that the
observer moments are A, B, C? I think it's simpler to say that the
atomic observer moments are A, B, C even though individually they lack
content.
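The claim that the machine states "may be arbitrarily divided" can be illustrated with a toy trace (hypothetical code): the same sequence of states supports disjoint groupings (A, B, C) and overlapping ones (A-B, B-C) equally well, so which grouping counts as an "observer moment" is definitional.

```python
# Toy illustration (hypothetical): one trace of machine states can be
# carved into "observer moments" in several equally valid ways.
trace = list("abcdef")  # stand-in for machine states a, b, c, ...

def group(states, size):
    """Disjoint groupings of a given size."""
    return [states[i:i + size] for i in range(0, len(states), size)]

# Atomic division: each state its own moment (A, B, C, ...).
assert group(trace, 1) == [['a'], ['b'], ['c'], ['d'], ['e'], ['f']]

# Coarser division into pairs.
assert group(trace, 2) == [['a', 'b'], ['c', 'd'], ['e', 'f']]

# Overlapping windows (A-B, B-C, ...): consecutive windows share a
# state, so they carry an inherent order that atomic states lack.
windows = [trace[i:i + 2] for i in range(len(trace) - 1)]
assert windows[:2] == [['a', 'b'], ['b', 'c']]
```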


-- 
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-10-02 Thread Stathis Papaioannou
On Sun, Oct 2, 2011 at 4:16 AM, Bruno Marchal marc...@ulb.ac.be wrote:

 It's a strange, almost paradoxical result but I think observer moments
 can be sub-conscious. If we say the minimum duration of a conscious
 moment is 100ms then 99ms and the remaining 1ms of this can occur at
 different times, perhaps billions of years of real time apart, perhaps
 simultaneously or in the reverse order. You would have the experience
 provided only that the full 100ms even if broken up into infinitesimal
 intervals occurs somewhere, sometime.


 I think that you are crossing the limit of your pedagogical use of the
 physical supervenience thesis. You might be led to a direct contradiction,
 which might lead to a new proof of its inconsistency.
 Consciousness cannot be associated with any particular implementation
 (physical or not) of a computation. It is related to an infinity of
 computations, structured by the self (or possible self-reference).

Nevertheless, you talk about swapping your brain for a suitably
designed computer and consciousness surviving teleportation and
pauses/restarts of the computer. As a starting point, these ideas
assume the physical supervenience thesis.


-- 
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-10-02 Thread meekerdb

On 10/2/2011 7:13 AM, Stathis Papaioannou wrote:

On Sun, Oct 2, 2011 at 3:01 AM, meekerdb meeke...@verizon.net wrote:


It's a strange, almost paradoxical result but I think observer moments
can be sub-conscious. If we say the minimum duration of a conscious
moment is 100ms then 99ms and the remaining 1ms of this can occur at
different times, perhaps billions of years of real time apart, perhaps
simultaneously or in the reverse order. You would have the experience
provided only that the full 100ms even if broken up into infinitesimal
intervals occurs somewhere, sometime.



That sounds like a temporal homunculus.  :-)

Note that on a nanosecond scale there is no state of the brain.
  Relativity applies to brains too, and so the time order of events on
opposite sides of your head is only defined to within about a nanosecond.

The brain is limited for technical reasons, relativity being the least
of them.


Sure.  Action potentials travel at only a few hundred meters/sec.


It isn't possible to stop it for a microsecond and restart it
at exactly the same state. With a computer you can do this although
you are limited to discrete digital states: you can't save the state
as logic circuits are transitioning from 1 to 0.


But you can do it, and in fact it's implicit in a Turing machine, i.e. an abstract 
computation.  So I'm wondering what consequences this has for Bruno's idea that you are 
a bundle of computations that are passing through your current state?  The computational 
states are sharp, discrete things.  The brain states are fuzzy, distributed things.



But this doesn't
change the argument that, to the extent that the physics allows it,
the machine states may be arbitrarily divided. It then becomes a
matter of definition whether we say the conscious states can also be
arbitrarily divided. If stream of consciousness A-B-C supervenes on
machine state a-b-c where A-B, B-C, A-B-C, but not A, B or C alone are
of sufficient duration to count as consciousness, should we say the
observer moments are A-B, B-C and A-B-C, or should we say that the
observer moments are A, B, C? I think it's simpler to say that the
atomic observer moments are A, B, C even though individually they lack
content.




I think we've discussed this before.  If you define them as A, B, C then the lack of 
content means they don't have inherent order; whereas AB, BC, CD,... do have inherent 
order because they overlap.  I don't think this affects the argument except to note that 
OMs are not the same as computational states.


Brent




Re: David Eagleman on CHOICE

2011-10-01 Thread Stathis Papaioannou
On Fri, Sep 30, 2011 at 12:26 AM, Jason Resch jasonre...@gmail.com wrote:


 On Sep 29, 2011, at 8:12 AM, Stathis Papaioannou stath...@gmail.com wrote:

 On Wed, Sep 28, 2011 at 8:55 AM, Jason Resch jasonre...@gmail.com wrote:

 If it takes the brain 100 ms to compute a moment of awareness, then you
 can
 know you were not created 1 microsecond ago.

 Suppose your brain paused for 1 us every 99 ms. To an external
 observer you would be functioning normally; do you think you would be
 a philosophical zombie? We can change the thought experiment to make
 the pauses and the duration of consciousness between the pauses
 arbitrarily long, effectively cutting up consciousness however we
 want, even if a conscious moment is smeared out over time.


 I think you missed what I was attempting to say.

 I agree that it would function normally with the introduction of pauses.
  Let's say the brain was uploaded and on a computer.  The scheduler would do
 a context switch to let another process run.  This would not affect the
 brain or create a zombie.  We could even pause the brain, send it over the
 wire to another computer and execute it there, without a problem.

 What I think would be problematic is starting a brain simulation without any
 prior computational history.  I think it might take some minimum amount of
 time (computation) before that brain could be aware of anything.

It's a strange, almost paradoxical result but I think observer moments
can be sub-conscious. If we say the minimum duration of a conscious
moment is 100ms then 99ms and the remaining 1ms of this can occur at
different times, perhaps billions of years of real time apart, perhaps
simultaneously or in the reverse order. You would have the experience
provided only that the full 100ms even if broken up into infinitesimal
intervals occurs somewhere, sometime.


-- 
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-10-01 Thread meekerdb

On 10/1/2011 2:36 AM, Stathis Papaioannou wrote:

On Fri, Sep 30, 2011 at 12:26 AM, Jason Resch jasonre...@gmail.com wrote:


On Sep 29, 2011, at 8:12 AM, Stathis Papaioannou stath...@gmail.com wrote:


On Wed, Sep 28, 2011 at 8:55 AM, Jason Resch jasonre...@gmail.com wrote:


If it takes the brain 100 ms to compute a moment of awareness, then you
can
know you were not created 1 microsecond ago.

Suppose your brain paused for 1 us every 99 ms. To an external
observer you would be functioning normally; do you think you would be
a philosophical zombie? We can change the thought experiment to make
the pauses and the duration of consciousness between the pauses
arbitrarily long, effectively cutting up consciousness however we
want, even if a conscious moment is smeared out over time.


I think you missed what I was attempting to say.

I agree that it would function normally with the introduction of pauses.
  Let's say the brain was uploaded and on a computer.  The scheduler would do
a context switch to let another process run.  This would not affect the
brain or create a zombie.  We could even pause the brain, send it over the
wire to another computer and execute it there, without a problem.

What I think would be problematic is starting a brain simulation without any
prior computational history.  I think it might take some minimum amount of
time (computation) before that brain could be aware of anything.

It's a strange, almost paradoxical result but I think observer moments
can be sub-conscious. If we say the minimum duration of a conscious
moment is 100ms then 99ms and the remaining 1ms of this can occur at
different times, perhaps billions of years of real time apart, perhaps
simultaneously or in the reverse order. You would have the experience
provided only that the full 100ms even if broken up into infinitesimal
intervals occurs somewhere, sometime.



That sounds like a temporal homunculus.  :-)

Note that on a nanosecond scale there is no state of the brain.  Relativity applies to 
brains too, and so the time order of events on opposite sides of your head is only defined 
to within about a nanosecond.


Brent




Re: David Eagleman on CHOICE

2011-10-01 Thread Bruno Marchal


On 01 Oct 2011, at 11:36, Stathis Papaioannou wrote:

On Fri, Sep 30, 2011 at 12:26 AM, Jason Resch jasonre...@gmail.com  
wrote:



On Sep 29, 2011, at 8:12 AM, Stathis Papaioannou  
stath...@gmail.com wrote:


On Wed, Sep 28, 2011 at 8:55 AM, Jason Resch  
jasonre...@gmail.com wrote:


If it takes the brain 100 ms to compute a moment of awareness,  
then you

can
know you were not created 1 microsecond ago.


Suppose your brain paused for 1 us every 99 ms. To an external
observer you would be functioning normally; do you think you would  
be

a philosophical zombie? We can change the thought experiment to make
the pauses and the duration of consciousness between the pauses
arbitrarily long, effectively cutting up consciousness however we
want, even if a conscious moment is smeared out over time.



I think you missed what I was attempting to say.

I agree that it would function normally with the introduction of  
pauses.
 Let's say the brain was uploaded and on a computer.  The scheduler  
would do
a context switch to let another process run.  This would not affect  
the
brain or create a zombie.  We could even pause the brain, send it  
over the

wire to another computer and execute it there, without a problem.

What I think would be problematic is starting a brain simulation  
without any
prior computational history.  I think it might take some minimum  
amount of

time (computation) before that brain could be aware of anything.


It's a strange, almost paradoxical result but I think observer moments
can be sub-conscious. If we say the minimum duration of a conscious
moment is 100ms then 99ms and the remaining 1ms of this can occur at
different times, perhaps billions of years of real time apart, perhaps
simultaneously or in the reverse order. You would have the experience
provided only that the full 100ms even if broken up into infinitesimal
intervals occurs somewhere, sometime.



I think that you are crossing the limit of your pedagogical use of the  
physical supervenience thesis. You might be led to a direct  
contradiction, which might lead to a new proof of its inconsistency.
Consciousness cannot be associated with any particular implementation  
(physical or not) of a computation. It is related to an infinity of  
computations, structured by the self (or possible self-reference).


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: David Eagleman on CHOICE

2011-09-30 Thread Bruno Marchal


On 29 Sep 2011, at 21:28, meekerdb wrote:


On 9/29/2011 11:23 AM, Bruno Marchal wrote:


On 29 Sep 2011, at 19:24, meekerdb wrote:


On 9/29/2011 6:12 AM, Stathis Papaioannou wrote:
On Wed, Sep 28, 2011 at 8:55 AM, Jason Resch jasonre...@gmail.com wrote:


If it takes the brain 100 ms to compute a moment of awareness,  
then you can

know you were not created 1 microsecond ago.

Suppose your brain paused for 1 us every 99 ms. To an external
observer you would be functioning normally; do you think you  
would be
a philosophical zombie? We can change the thought experiment to  
make

the pauses and the duration of consciousness between the pauses
arbitrarily long, effectively cutting up consciousness however we
want, even if a conscious moment is smeared out over time.


That's true, regarding the brain as a classical computer or as an  
abstract computation.  But those are the points in question.  I  
doubt that it is true regarding the brain as the quantum object it  
is.  It's not clear to me what it would mean in the QM case;  
freezing the wave function?


Use the quantum Zeno effect. Observe its state repetitively. You  
will project it again and again onto its original state. That is one  
method.


That requires constructing an observable that has brain states as  
its eigenstates.  Such an observable is a quasi-classical  
interaction that entangles the state with the environment via  
decoherence.  So whether consciousness would survive this, is  
already equivalent to the question of whether you should say 'yes'  
to the doctor who proposes to replace your brain with a classical  
computation.


That makes my point. Note I was not serious about using that Quantum  
Zeno effect for freezing an object like a brain.





Or, second method, emulate the quantum object evolution on a  
classical computer, and freeze the classical computer.


Does the classical computer obey the 323 principle?


Assuming comp, consciousness supervenes on the abstract relationship,  
not on any particular instantiation/emulation.





I think such computers don't exist (except in Platonia).


But assuming comp, Earth (non-platonia) is an illusion of numbers  
living in Platonia.
So, if you want to preserve both materialism and digital mechanism,  
you need to have a real classical computer in which physically inactive  
parts play a physically active role in a computation. That  
seems nonsensical to me, and if it is sensical, that would be a reason  
to refuse an artificial digital brain, which by definition preserves  
consciousness by saving what is relevant for the computation (at some  
digital level) to be processed. Negating the 323 principle for  
classical computers introduces some kind of magic into the mind-brain  
relationship.








The UD emulates also the quantum computations.


Yes that's another formulation of the same proposition.  But I  
wonder how it emulates the non-interaction experiments.  The  
conventional computation assumes true randomness.


In QM-without-collapse, true randomness is a comp first-person  
indeterminacy effect. The UD emulates all non-interaction experiments  
by emulating the global observer+physical-devices quantum  
multiplication effects.
If you come back with collapse or true randomness, then quantum  
computation is no longer emulable by a classical machine, and you can  
indeed say no to the doctor when he proposes a classical digital  
artificial brain. But then you have to admit that we are no longer  
Turing emulable. This is just saying that comp, digital mechanism, is  
false.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: David Eagleman on CHOICE

2011-09-29 Thread Stathis Papaioannou
On Wed, Sep 28, 2011 at 8:55 AM, Jason Resch jasonre...@gmail.com wrote:

 If it takes the brain 100 ms to compute a moment of awareness, then you can
 know you were not created 1 microsecond ago.

Suppose your brain paused for 1 us every 99 ms. To an external
observer you would be functioning normally; do you think you would be
a philosophical zombie? We can change the thought experiment to make
the pauses and the duration of consciousness between the pauses
arbitrarily long, effectively cutting up consciousness however we
want, even if a conscious moment is smeared out over time.


-- 
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-09-29 Thread Jason Resch



On Sep 29, 2011, at 8:12 AM, Stathis Papaioannou stath...@gmail.com  
wrote:


On Wed, Sep 28, 2011 at 8:55 AM, Jason Resch jasonre...@gmail.com  
wrote:


If it takes the brain 100 ms to compute a moment of awareness, then  
you can

know you were not created 1 microsecond ago.


Suppose your brain paused for 1 us every 99 ms. To an external
observer you would be functioning normally; do you think you would be
a philosophical zombie? We can change the thought experiment to make
the pauses and the duration of consciousness between the pauses
arbitrarily long, effectively cutting up consciousness however we
want, even if a conscious moment is smeared out over time.



I think you missed what I was attempting to say.

I agree that it would function normally with the introduction of  
pauses.  Let's say the brain was uploaded and running on a computer.  The  
scheduler would do a context switch to let another process run.  This  
would not affect the brain or create a zombie.  We could even pause  
the brain, send it over the wire to another computer and execute it  
there, without a problem.


What I think would be problematic is starting a brain simulation  
without any prior computational history.  I think it might take some  
minimum amount of time (computation) before that brain could be aware  
of anything.
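Jason's pause-and-migrate scenario can be sketched in a few lines: checkpoint the simulation state, move only the bytes, and resume elsewhere with an identical trajectory (a toy illustration; the `step` function is an arbitrary stand-in for a brain simulator):

```python
import pickle

def step(state):
    """One deterministic tick of a toy simulated brain."""
    return {"t": state["t"] + 1, "x": (state["x"] * 31 + 7) % 1_000_003}

# Run for a while on "machine A".
state = {"t": 0, "x": 42}
for _ in range(5_000):
    state = step(state)

# Freeze and "send over the wire": the running process stops, only bytes travel.
wire = pickle.dumps(state)

# Resume on "machine B" from the transmitted snapshot.
resumed = pickle.loads(wire)
for _ in range(5_000):
    resumed = step(resumed)

# The result is identical to never having paused at all.
reference = {"t": 0, "x": 42}
for _ in range(10_000):
    reference = step(reference)
assert resumed == reference
```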


Jason










Re: David Eagleman on CHOICE

2011-09-29 Thread meekerdb

On 9/29/2011 6:12 AM, Stathis Papaioannou wrote:

On Wed, Sep 28, 2011 at 8:55 AM, Jason Reschjasonre...@gmail.com  wrote:


If it takes the brain 100 ms to compute a moment of awareness, then you can
know you were not created 1 microsecond ago.

Suppose your brain paused for 1 us every 99 ms. To an external
observer you would be functioning normally; do you think you would be
a philosophical zombie? We can change the thought experiment to make
the pauses and the duration of consciousness between the pauses
arbitrarily long, effectively cutting up consciousness however we
want, even if a conscious moment is smeared out over time.


That's true, regarding the brain as a classical computer or as an abstract computation.  
But those are the points in question.  I doubt that it is true regarding the brain as the 
quantum object it is.  It's not clear to me what it would mean in the QM case; freezing 
the wave function?


Brent




Re: David Eagleman on CHOICE

2011-09-29 Thread Bruno Marchal


On 29 Sep 2011, at 19:24, meekerdb wrote:


On 9/29/2011 6:12 AM, Stathis Papaioannou wrote:
On Wed, Sep 28, 2011 at 8:55 AM, Jason Reschjasonre...@gmail.com   
wrote:


If it takes the brain 100 ms to compute a moment of awareness,  
then you can

know you were not created 1 microsecond ago.

Suppose your brain paused for 1 us every 99 ms. To an external
observer you would be functioning normally; do you think you would be
a philosophical zombie? We can change the thought experiment to make
the pauses and the duration of consciousness between the pauses
arbitrarily long, effectively cutting up consciousness however we
want, even if a conscious moment is smeared out over time.


That's true, regarding the brain as a classical computer or as an  
abstract computation.  But those are the points in question.  I  
doubt that it is true regarding the brain as the quantum object it  
is.  It's not clear to me what it would mean in the QM case;  
freezing the wave function?


Use the quantum Zeno effect: observe its state repeatedly, and you will  
project it again and again onto its original state. That is one method.
Or, second method, emulate the quantum object's evolution on a classical  
computer, and freeze the classical computer.
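Bruno's first method can be illustrated numerically. For a two-level system whose free evolution would flip the state, the textbook quantum Zeno survival probability [cos²(ωt/N)]^N tends to 1 as the number N of projective measurements grows (a sketch under standard idealized assumptions, not a simulation of a brain):

```python
import math

def survival_probability(omega_t, n_measurements):
    """Probability a two-level system is still found in its initial state.

    Between measurements the amplitude to remain is cos(omega_t / N),
    and each of the N projective measurements resets the state, so the
    joint survival probability is cos^2(omega_t / N) ** N.
    """
    n = n_measurements
    return math.cos(omega_t / n) ** (2 * n)

omega_t = math.pi / 2  # evolution that would fully flip the state
print(survival_probability(omega_t, 1))     # unwatched: ~0, the state flips
print(survival_probability(omega_t, 1000))  # watched often: ~0.998, frozen
```

In the limit of continuous observation the state never leaves its initial eigenstate, which is what "freezing" means here.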


The UD also emulates the quantum computations.

Bruno

http://iridia.ulb.ac.be/~marchal/






Re: David Eagleman on CHOICE

2011-09-29 Thread meekerdb

On 9/29/2011 11:23 AM, Bruno Marchal wrote:


On 29 Sep 2011, at 19:24, meekerdb wrote:


On 9/29/2011 6:12 AM, Stathis Papaioannou wrote:

On Wed, Sep 28, 2011 at 8:55 AM, Jason Reschjasonre...@gmail.com  wrote:


If it takes the brain 100 ms to compute a moment of awareness, then you can
know you were not created 1 microsecond ago.

Suppose your brain paused for 1 us every 99 ms. To an external
observer you would be functioning normally; do you think you would be
a philosophical zombie? We can change the thought experiment to make
the pauses and the duration of consciousness between the pauses
arbitrarily long, effectively cutting up consciousness however we
want, even if a conscious moment is smeared out over time.


That's true, regarding the brain as a classical computer or as an abstract 
computation.  But those are the points in question.  I doubt that it is true regarding 
the brain as the quantum object it is.  It's not clear to me what it would mean in the 
QM case; freezing the wave function?


Use the quantum Zeno effect: observe its state repeatedly, and you will project it again 
and again onto its original state. That is one method.


That requires constructing an observable that has brain states as its eigenstates.  Such 
an observable is a quasi-classical interaction that entangles the state with the 
environment via decoherence.  So whether consciousness would survive this, is already 
equivalent to the question of whether you should say 'yes' to the doctor who proposes to 
replace your brain with a classical computation.


Or, second method, emulate the quantum object evolution on a classical computer, and 
freeze the classical computer.


Does the classical computer obey the 323 principle?   I think such computers don't exist 
(except in Platonia).




The UD also emulates the quantum computations.


Yes that's another formulation of the same proposition.  But I wonder how it emulates the 
non-interaction experiments.  The conventional computation assumes true randomness.


Brent




Re: David Eagleman on CHOICE

2011-09-28 Thread Craig Weinberg
On Sep 25, 5:45 pm, meekerdb meeke...@verizon.net wrote:
 An interesting talk relevant to what constitutes an observer moment.

 http://www.youtube.com/watch?v=0VQ1KI_Jh1QNR=1

 Brent

Very cool, thanks for posting. Of course, I think that his
observations are entirely consistent with my hypothesis. Our native
perception is a large scale view of many lesser scale sensorimotive
experiences. The subordinate (from our subjective point of view)
phenomena are higher frequency so that the top level awareness exists
through a low frequency synthesis or summary of them. I don't think
that this occurs localized only to a special, homunculus-like region
of the brain, so that it is not a literal summarizing computation, but
rather all of the relevant regions of the brain are actively
participating on a number of frequency ranges, just like what we do as
individuals every day can be summarized by looking at the behavior of
an entire population over a longer period of time.

What he is reaching for at the end I think is that energy is in fact a
subjective experience, and it is through the sensorimotive capacity to
signify and sequence its experience, that the inference of time
arises. He is still thinking in terms of there being an actual
objective 'now' which our perception lags behind due to computation,
but that is not the case. We are not watching the pixels on the screen
change, or the screen refreshes, we are watching the images through
the screen as a whole, and that happens on a greater scale of time
relative to the pixels. It's not just a computational latency, it's a
measure of sensorimotive intensity: significance.

The qualities he mentions:

Brightness
Size
Numerosity
Motion
Looming
Sequence complexity
Number of events
Temporal frequency
Stimulus visibility

These are the indicators of subjective significance in visual terms.
They are examples of experiences with a high volume of sensorimotive
intensity. You can look at it as computational latency to process
heavier flows of data with more consequences as more neurons are
excited, but that is only if you compare the experience to an
inanimate object or break the perception down into its constituent
isolated components. These obscure the universal principle at work
because the example we are using is this massive human sized
experience, sort of like trying to find out how carbonation bubbles
work by looking at a giant, beach-ball-sized bubble. Our trillion-neuron
psyche is so huge that it warps and distorts and drifts slowly through
the air, distracting us from the intrinsic coherence and closure of
the bubble.  It wobbles and stretches, but it still does what the
champagne bubble does most of the time - maintains a coherent inertial
frame of perception, a frame from which 'time' arises, not one that
keeps up with any kind of external 'time'.

Craig




Re: David Eagleman on CHOICE

2011-09-27 Thread Jason Resch



On Sep 26, 2011, at 6:31 AM, Stathis Papaioannou stath...@gmail.com  
wrote:


On Mon, Sep 26, 2011 at 7:45 AM, meekerdb meeke...@verizon.net  
wrote:
An interesting talk relevant to what constitutes an observer  
moment.


http://www.youtube.com/watch?v=0VQ1KI_Jh1QNR=1


Even if the experience is smeared out over time


I think it is clear with mechanism that this is the case.  Imagine an  
AI with a single CPU.  Here it is obvious that its state extends  
through the dimension of time.  With the parallel processing of the  
brain it is less, but still much greater than a Planck time.



and has a complex
relationship to real world events it could still be the case that it
can be cut up arbitrarily.


Perhaps arbitrarily in the sense of distinct observer moments, but I  
don't think so about time.



There is no way I can be sure the world was
not created a microsecond ago


Consider how many CPU cycles are required for the AI to become aware.   
Even if you think it becomes conscious as soon as the first  
instruction is executed, the instruction takes some amount of time to  
complete.


If it takes the brain 100 ms to compute a moment of awareness, then  
you can know you were not created 1 microsecond ago.


Jason


and there is no way I can be sure there
isn't a million year gap between subjective seconds.


--
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-09-27 Thread meekerdb

On 9/27/2011 3:55 PM, Jason Resch wrote:



On Sep 26, 2011, at 6:31 AM, Stathis Papaioannou stath...@gmail.com wrote:


On Mon, Sep 26, 2011 at 7:45 AM, meekerdb meeke...@verizon.net wrote:

An interesting talk relevant to what constitutes an observer moment.

http://www.youtube.com/watch?v=0VQ1KI_Jh1QNR=1


Even if the experience is smeared out over time


I think it is clear with mechanism that this is the case.  Imagine an AI with a single 
CPU.  Here it is obvious that its state extends through the dimension of time.  With 
the parallel processing of the brain it is less, but still much greater than a Planck time.


Even assuming signals at c the brain extends about a nanosecond in time, some 34 orders of 
magnitude longer than the Planck time.
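The figure is easy to check (a back-of-the-envelope sketch; the ~0.15 m head size is an assumed value, and the Planck time is ~5.39e-44 s): the light-crossing time is about half a nanosecond, and a nanosecond exceeds the Planck time by roughly 34 orders of magnitude:

```python
import math

c = 3.0e8               # speed of light, m/s
brain_size = 0.15       # rough head diameter, m (assumed figure)
planck_time = 5.39e-44  # s

light_crossing = brain_size / c          # time for a signal at c to cross the brain
orders = math.log10(1e-9 / planck_time)  # nanosecond vs. Planck time

print(light_crossing)  # ~5e-10 s
print(orders)          # ~34.3
```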


But doesn't this create problems for Bruno's argument, which assumes states are timeless, 
instant-like things in Platonia with no overlap?  Should we identify 
observer moments with bundles of UD computations going thru the same state, but also with 
extensions of those computations forward and backward over some number of states?  But 
they are not the same forward and backward.  Or do we require that the substitution 
level be pushed down to time slices short compared to a nanosecond, so that an observer 
moment will be a whole set of states extending over a short time?  In which case the 
sequence of states will pick out a much smaller set of UD computations that went thru all 
those states.


Brent




and has a complex
relationship to real world events it could still be the case that it
can be cut up arbitrarily.


Perhaps arbitrarily in the sense of distinct observer moments, but I don't think so 
about time.



There is no way I can be sure the world was
not created a microsecond ago


Consider how many CPU cycles are required for the AI to become aware.  Even if you think 
it becomes conscious as soon as the first instruction is executed, the instruction takes 
some amount of time to complete.


If it takes the brain 100 ms to compute a moment of awareness, then you can know you 
were not created 1 microsecond ago.


Jason


and there is no way I can be sure there
isn't a million year gap between subjective seconds.


--
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-09-27 Thread smitra
My opinion is that quantum mechanics is essential to define an OM, 
despite the OM being in the classical domain. The computational state of an 
AI is not the precise physical state of the system that generates the 
AI, it is some coarse grained picture of it. So, if you have a 
classical computer, then the bits that are zero or one only become 
visible when you average over the microstates.


Even then, the observer does not appear at the level of the bits: you 
need to extract the information that is present in the bits, and there 
must be a huge redundancy there too. What we are aware of are patterns 
in the information that enters our brain, but the same pattern we're 
aware of can be realized in an astronomically large number of ways.
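Saibal's coarse-graining point, that a bit only becomes visible as an average over many redundant microstates, can be pictured with a toy model (all numbers made up for illustration):

```python
import random

random.seed(0)

def microstates_for(bit, n=1000, noise=0.1):
    """Realize one logical bit as n noisy microscopic degrees of freedom."""
    return [bit if random.random() > noise else 1 - bit for _ in range(n)]

def read_bit(micro):
    """The bit only exists at the coarse-grained level: a majority vote."""
    return 1 if sum(micro) > len(micro) / 2 else 0

micro = microstates_for(1)
assert read_bit(micro) == 1  # bit recovered despite ~10% flipped sites

# Perturb a further 20% of sites at random: the same pattern survives,
# so astronomically many distinct microstates realize the same bit.
for i in random.sample(range(len(micro)), 200):
    micro[i] = 1 - micro[i]
assert read_bit(micro) == 1
```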


Therefore, if you are aware of something right now, the exact quantum 
state that describes this is, in general, an entangled state which 
contains the correlations within the patterns that you are aware of and 
the information present in the environment that is mapped to those 
patterns.


This state defines the program your brain is running, at least as far 
as rendering the patterns you are aware of.



Saibal








Quoting meekerdb meeke...@verizon.net:


On 9/27/2011 3:55 PM, Jason Resch wrote:



On Sep 26, 2011, at 6:31 AM, Stathis Papaioannou stath...@gmail.com wrote:


On Mon, Sep 26, 2011 at 7:45 AM, meekerdb meeke...@verizon.net wrote:

An interesting talk relevant to what constitutes an observer moment.

http://www.youtube.com/watch?v=0VQ1KI_Jh1QNR=1


Even if the experience is smeared out over time


I think it is clear with mechanism that this is the case.  Imagine 
an AI with a single CPU.  Here it is obvious that its state extends 
through the dimension of time.  With the parallel processing of the 
brain it is less, but still much greater than a Planck time.


Even assuming signals at c the brain extends about a nanosecond in 
time, some 34 orders of magnitude longer than the Planck time.


But doesn't this create problems for Bruno's argument, which assumes 
states are timeless, instant-like things in Platonia with no overlap?  
Should we identify observer moments with bundles of 
UD computations going thru the same state, but also with extensions 
of those computations forward and backward over some number of 
states?  But they are not the same forward and backward.  Or do we 
require that the substitution level be pushed down to time slices 
short compared to a nanosecond, so that an observer moment will be a 
whole set of states extending over a short time?  In which case the 
sequence of states will pick out a much smaller set of UD 
computations that went thru all those states.


Brent




and has a complex
relationship to real world events it could still be the case that it
can be cut up arbitrarily.


Perhaps arbitrarily in the sense of distinct observer moments, but I 
don't think so about time.



There is no way I can be sure the world was
not created a microsecond ago


Consider how many CPU cycles are required for the AI to become 
aware.  Even if you think it becomes conscious as soon as the first 
instruction is executed, the instruction takes some amount of time 
to complete.


If it takes the brain 100 ms to compute a moment of awareness, then 
you can know you were not created 1 microsecond ago.


Jason


and there is no way I can be sure there
isn't a million year gap between subjective seconds.


--
Stathis Papaioannou




Re: David Eagleman on CHOICE

2011-09-26 Thread Stathis Papaioannou
On Mon, Sep 26, 2011 at 7:45 AM, meekerdb meeke...@verizon.net wrote:
 An interesting talk relevant to what constitutes an observer moment.

 http://www.youtube.com/watch?v=0VQ1KI_Jh1QNR=1

Even if the experience is smeared out over time and has a complex
relationship to real world events it could still be the case that it
can be cut up arbitrarily. There is no way I can be sure the world was
not created a microsecond ago and there is no way I can be sure there
isn't a million year gap between subjective seconds.


-- 
Stathis Papaioannou




David Eagleman on CHOICE

2011-09-25 Thread meekerdb

An interesting talk relevant to what constitutes an observer moment.

http://www.youtube.com/watch?v=0VQ1KI_Jh1QNR=1

Brent
