Re: problem of size '10

2010-03-16 Thread Stathis Papaioannou
On 16 March 2010 05:51, Brent Meeker meeke...@dslextreme.com wrote:

 The hypothesis is that it would have some effect, not necessarily that you
 would feel a little pain.  Maybe the effect is that a certain thought comes
 into your consciousness, "I could have been really hurt if ..."

Even if you were unaware that there had been a near miss?

In any case, do you agree that if the counterfactuals due to unused
brain pathways have an effect on consciousness, then so should the
counterfactuals due to events out in the world?


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-16 Thread Brent Meeker

On 3/16/2010 4:35 AM, Stathis Papaioannou wrote:

On 16 March 2010 05:51, Brent Meeker meeke...@dslextreme.com wrote:

   

The hypothesis is that it would have some effect, not necessarily that you
would feel a little pain.  Maybe the effect is that a certain thought comes
into your consciousness, "I could have been really hurt if ..."
 

Even if you were unaware that there had been a near miss?
   


Yes; but you became aware that there could have been an accident.


In any case, do you agree that if the counterfactuals due to unused
brain pathways have an effect on consciousness, then so should the
counterfactuals due to events out in the world?


   
I would guess they would have an effect only in proportion to the 
probability of them affecting one's brain state (e.g. perceptions).


As I see it, if you take MWI seriously (which I usually don't) then 
there are no counterfactuals - they're all factuals just in 
different branches.   And hence it is inconsistent to argue that 
removing unused neurons can't make any difference.


Brent




Re: problem of size '10

2010-03-15 Thread Stathis Papaioannou
On 15 March 2010 07:28, Brent Meeker meeke...@dslextreme.com wrote:

 I don't think that's so clear.  Everett's relative state interpretation
 implies consciousness is not unitary but continually splits, just as the
 states of other quantum systems do.  So while these counterfactual states
 (realized in the multiple worlds) may be significant for instantiating
 consciousness, I don't think it would follow that the consciousnesses thus
 instantiated would be aware of the splitting, i.e. decoherence.  So if you
 are subject to a probabilistic event which would cause a change in your
 consciousness if it eventuated, there would be a change in your consciousness
 *in another branch of the multiple worlds*.  If your brain were constructed
 so there was no such chance (or it had a much lower probability), what would be
 the difference?  Maybe you would have faded qualia: e.g. if you were color
 blind, you wouldn't be aware of colors because there's zero probability of
 sensing them, and your consciousness would be slightly diminished by this
 because you aren't conscious of things being not-red or not-blue.

I'm still not clear on what you mean. If I almost have an accident
which could have left me in terrible pain, should I feel something in
this world as a result of the near miss? Surely I would if the
counterfactuals have an effect on consciousness.


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-15 Thread Brent Meeker

On 3/15/2010 5:37 AM, Stathis Papaioannou wrote:

On 15 March 2010 07:28, Brent Meekermeeke...@dslextreme.com  wrote:

   

I don't think that's so clear.  Everett's relative state interpretation
implies consciousness is not unitary but continually splits, just as the
states of other quantum systems do.  So while these counterfactual states
(realized in the multiple worlds) may be significant for instantiating
consciousness, I don't think it would follow that the consciousnesses thus
instantiated would be aware of the splitting, i.e. decoherence.  So if you
are subject to a probabilistic event which would cause a change in your
consciousness if it eventuated, there would be a change in your consciousness
*in another branch of the multiple worlds*.  If your brain were constructed
so there was no such chance (or it had a much lower probability), what would be
the difference?  Maybe you would have faded qualia: e.g. if you were color
blind, you wouldn't be aware of colors because there's zero probability of
sensing them, and your consciousness would be slightly diminished by this
because you aren't conscious of things being not-red or not-blue.
 

I'm still not clear on what you mean. If I almost have an accident
which could have left me in terrible pain, should I feel something in
this world as a result of the near miss? Surely I would if the
counterfactuals have an effect on consciousness.


The hypothesis is that it would have some effect, not necessarily that 
you would feel a little pain.  Maybe the effect is that a certain 
thought comes into your consciousness, "I could have been really hurt 
if ..."


Brent




Re: problem of size '10

2010-03-14 Thread Stathis Papaioannou
On 14 March 2010 08:43, Brent Meeker meeke...@dslextreme.com wrote:

(BTW the formatting for your last few posts looks odd when I read them
with Gmail. Would it be possible to revert to plain text?)

[Stathis]
 Does that matter here? I thought the argument was that if system A is
 capable of behaviour that system B is not capable of, then A has
 different/greater consciousness than B even when we consider the case
 where A and B are performing the same activity. A and B could be
 identical except that given a particular tricky question Q, A has
 access to a plugin module A' that will allow it to work out the
 answer, while B does not. For all inputs other than Q, A and B behave
 identically. Now I agree that A is more *intelligent* than B, if
 intelligence is the ability to solve problems, since A can solve one
 more problem than B. Intelligence involves potential, like specifying
 a car's top speed, so the counterfactuals here are relevant. But to
 say that A and B differ in their consciousness even when they have
 inputs other than Q (and therefore go through the same internal state
 changes),

[Brent]
 But they don't.  If A has more possible states then, per QM, it, with some
 probability, goes through them too.

Are you suggesting that consciousness is affected by some kind of
interference effect between the possible states? If that is so, it
should be affected not only by the possible states of the brain, which
is not so easy to change, but also by the possible inputs. In other
words if you are subjected to a probabilistic event which would cause
a change in your consciousness if it eventuated, there would be a
change in your consciousness even if it did not eventuate. This is an
experiment that can easily be done - that is done by everyone many
times a day - and it does not support the theory that counterfactuals
affect consciousness.

[Stathis]
 Our consciousness is instantiated by a machine that interacts with its
 environment and has a complex, but consistent, response to
 environmental stimuli. This allows one conscious entity to observe
 another conscious entity, and postulate that it is conscious. If
 consciousnesses were instantiated all around us by random processes
 (or even by nothing at all) they would not be of the sort that can be
 observed at the level of the substrate of their implementation, which
 is why they are not observed. So yes, it's all compatible with our
 physical observations.

[Brent]
 I'm not clear on what you mean by "it" in "it's all compatible with our
 physical observations."  You mean that everything, including rocks, is
 conscious but we can't recognize them as such because their consciousness is
 so different?  Or maybe it's not different, but their interaction with the
 world is too different?

Saying "any object is conscious if you look at it the right way" is just
another way of saying that consciousness is not a physical property of
the object: the rock won't be rendered unconscious if we blow it up
since the relevant computations could just as easily be ascribed to
the blown up atoms. So what we're talking about is Platonic
implementations of consciousness, and those we can't interact with. We
can only interact with the sort of consciousness that exhibits
intelligent behaviour, generated by brains and perhaps computers.
Superficially this seems to solve the empirical problem, albeit at the
cost of extra metaphysical baggage. However, it doesn't solve the
scientific problem because there is then the question of how we
know that our own consciousness is one of those specially privileged
to be generated in the physical world and not in Platonia? We don't;
and in fact if it is possible that consciousness can be generated in
Platonia there is no basis for postulating an ontologically separate
real world at all - it could all be a virtual reality generated in
Platonia. But there is then the problem of how we find ourselves, as
you say, in a nomologically consistent universe. What we need is a
derivation of the observed physical laws from the principle "all
possible computations are necessarily implemented". That would be
impressive.


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-14 Thread Brent Meeker

On 3/14/2010 5:10 AM, Stathis Papaioannou wrote:

On 14 March 2010 08:43, Brent Meeker meeke...@dslextreme.com wrote:

(BTW the formatting for your last few posts looks odd when I read them
with Gmail. Would it be possible to revert to plain text?)

[Stathis]
   

Does that matter here? I thought the argument was that if system A is
capable of behaviour that system B is not capable of, then A has
different/greater consciousness than B even when we consider the case
where A and B are performing the same activity. A and B could be
identical except that given a particular tricky question Q, A has
access to a plugin module A' that will allow it to work out the
answer, while B does not. For all inputs other than Q, A and B behave
identically. Now I agree that A is more *intelligent* than B, if
intelligence is the ability to solve problems, since A can solve one
more problem than B. Intelligence involves potential, like specifying
a car's top speed, so the counterfactuals here are relevant. But to
say that A and B differ in their consciousness even when they have
inputs other than Q (and therefore go through the same internal state
changes),
 

[Brent]
   

But they don't.  If A has more possible states then, per QM, it, with some
probability, goes through them too.
 

Are you suggesting that consciousness is affected by some kind of
interference effect between the possible states? If that is so, it
should be affected not only by the possible states of the brain, which
is not so easy to change, but also by the possible inputs. In other
words if you are subjected to a probabilistic event which would cause
a change in your consciousness if it eventuated, there would be a
change in your consciousness even if it did not eventuate. This is an
experiment that can easily be done - that is done by everyone many
times a day - and it does not support the theory that counterfactuals
affect consciousness.
   


I don't think that's so clear.  Everett's relative state interpretation 
implies consciousness is not unitary but continually splits, just as 
the states of other quantum systems do.  So while these counterfactual 
states (realized in the multiple worlds) may be significant for 
instantiating consciousness, I don't think it would follow that the 
consciousnesses thus instantiated would be aware of the splitting, i.e. 
decoherence.  So if you are subject to a probabilistic event which would 
cause a change in your consciousness if it eventuated, there would be a 
change in your consciousness *in another branch of the multiple 
worlds*.  If your brain were constructed so there was no such chance (or 
it had a much lower probability), what would be the difference?  Maybe you 
would have faded qualia: e.g. if you were color blind, you wouldn't be aware 
of colors because there's zero probability of sensing them, and your 
consciousness would be slightly diminished by this because you aren't 
conscious of things being not-red or not-blue.


Brent


[Stathis]
   

Our consciousness is instantiated by a machine that interacts with its
environment and has a complex, but consistent, response to
environmental stimuli. This allows one conscious entity to observe
another conscious entity, and postulate that it is conscious. If
consciousnesses were instantiated all around us by random processes
(or even by nothing at all) they would not be of the sort that can be
observed at the level of the substrate of their implementation, which
is why they are not observed. So yes, it's all compatible with our
physical observations.
 

[Brent]
   

I'm not clear on what you mean by "it" in "it's all compatible with our
physical observations."  You mean that everything, including rocks, is
conscious but we can't recognize them as such because their consciousness is
so different?  Or maybe it's not different, but their interaction with the
world is too different?
 

Saying "any object is conscious if you look at it the right way" is just
another way of saying that consciousness is not a physical property of
the object: the rock won't be rendered unconscious if we blow it up
since the relevant computations could just as easily be ascribed to
the blown up atoms. So what we're talking about is Platonic
implementations of consciousness, and those we can't interact with. We
can only interact with the sort of consciousness that exhibits
intelligent behaviour, generated by brains and perhaps computers.
Superficially this seems to solve the empirical problem, albeit at the
cost of extra metaphysical baggage. However, it doesn't solve the
scientific problem because there is then the question of how we
know that our own consciousness is one of those specially privileged
to be generated in the physical world and not in Platonia? We don't;
and in fact if it is possible that consciousness can be generated in
Platonia there is no basis for postulating an ontologically separate
real world at all - it could all be a virtual reality generated in
Platonia. But there is 

Re: problem of size '10

2010-03-13 Thread Stathis Papaioannou
On 12 March 2010 11:59, Brent Meeker meeke...@dslextreme.com wrote:

 The pathways are all intact and can spring into action if the person
 wakes up. There is a continuum from everything being there and ready
 to use immediately, to all there but parts of the system dormant, to
 not there at all but could be added if the person has extensive
 surgery.

 That would be a classical change and different from a MWI possibility.

Does that matter here? I thought the argument was that if system A is
capable of behaviour that system B is not capable of, then A has
different/greater consciousness than B even when we consider the case
where A and B are performing the same activity. A and B could be
identical except that given a particular tricky question Q, A has
access to a plugin module A' that will allow it to work out the
answer, while B does not. For all inputs other than Q, A and B behave
identically. Now I agree that A is more *intelligent* than B, if
intelligence is the ability to solve problems, since A can solve one
more problem than B. Intelligence involves potential, like specifying
a car's top speed, so the counterfactuals here are relevant. But to
say that A and B differ in their consciousness even when they have
inputs other than Q (and therefore go through the same internal state
changes), on the grounds that A can discriminate between more possible
inputs, seems incredible. It would mean that the consciousness of A
when it was doing non-Q processing would be affected by what happens
to A': if it was destroyed, if it was disconnected, if the special
adapter needed to connect it was lost so that it couldn't be used. We
could do the experiment: A would describe changes in its experiences
as changes were made to A' or its connection to A'.
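
(An illustrative sketch, not from the thread: the following Python fragment
mirrors the A/B setup above. The names system_a, system_b, plugin_a_prime and
TRICKY_QUESTION are hypothetical. For every input other than Q the two
functions execute exactly the same instructions; they differ only in what
they *could* do if Q ever arrived.)

    # Hypothetical sketch: A and B are identical except that A can consult
    # a plugin module A' for the one tricky question Q.
    TRICKY_QUESTION = "Q"

    def plugin_a_prime(question):
        # Stand-in for the extra module A'; only ever reached on input Q.
        return "worked-out answer to " + question

    def common_processing(question):
        # The processing shared by A and B for every ordinary input.
        return "standard answer to " + question

    def system_a(question):
        if question == TRICKY_QUESTION:
            return plugin_a_prime(question)
        return common_processing(question)

    def system_b(question):
        # No access to A': behaves identically to A on every input except Q.
        return common_processing(question)

    # For any input other than Q, A and B trace through the same states:
    assert system_a("what is 2+2?") == system_b("what is 2+2?")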

 It's not incompatible with any physical observation to say that
 consciousness is instantiated by just a recorded sequence.

 Is it incompatible with any physical observation to say that consciousness
 is instantiated by a rock?  The only consciousness we have observation of is
 our own 1st person.  It's not plausible that it's a recording, though in
 some sense it may be logically possible.

Our consciousness is instantiated by a machine that interacts with its
environment and has a complex, but consistent, response to
environmental stimuli. This allows one conscious entity to observe
another conscious entity, and postulate that it is conscious. If
consciousnesses were instantiated all around us by random processes
(or even by nothing at all) they would not be of the sort that can be
observed at the level of the substrate of their implementation, which
is why they are not observed. So yes, it's all compatible with our
physical observations.


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-12 Thread Bruno Marchal


On 11 Mar 2010, at 20:38, Brent Meeker wrote:


On 3/11/2010 10:16 AM, Bruno Marchal wrote:



On 11 Mar 2010, at 17:57, Brent Meeker wrote:


On 3/11/2010 1:59 AM, Bruno Marchal wrote:



I don't see how we could use Tononi's paper to provide a physical  
or a computational role to an inactive device in the actual  
supervenience of an actual computation currently not using that  
device.


I'm not sure I understand that question.  It seems to turn on what  
is meant by "using that device".  Is my brain using a neuron that  
isn't firing?  I'd say yes, it is part of the system and its not  
firing is significant.


Two old guys A and B decide to buy a car each. They bought  
identical cars, and paid the same price.
But B's car has a defect: above 90 mi/h the engine explodes. But  
both A and B will peacefully enjoy driving their cars for the rest  
of their lives. They were old, and never went faster than 60 mi/h  
until they died. Would you say that A's car was driving but that  
B's car was only partially driving?


If I'm a multiple-worlder I'd say B's car is driving with a lower  
probability than A's.


Why? The QM many worlds entails that he is old in the normal worlds,  
and he will keep going less than 60mi/h there too. Only in Harry- 
Potter worlds would energy push him beyond that limit, due to an  
accumulation of quantum incidents.









What about a brain with clever neurons? For example, neuron N24  
anticipates that it will be useless for the next ten minutes, which  
gives it the time to take a coffee break and to talk with some glial  
cell friends. Then after ten minutes it comes back and does its job  
very well. Would that brain be less conscious? It did not miss any  
messages.


Same answer.


But this only confirms that you put some magic in the presence of  
matter. If matter plays that role, then by comp we just need to  
actively emulate those inactive pieces of matter, which, by  
definition, would not be inactive then.




If inactive pieces are needed, what about inactive software subroutines?  
Then I have to ask the doctor whether the program he will put in my brain  
is evaluated in the lazy way, or strictly, or by value. Again, by  
definition of comp, this is a matter of finding the right level; then  
any implementation will do, any universal system will do. And the UDA  
consequences follow.
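
(An illustrative sketch, not from the thread: a minimal Python fragment for
the lazy/strict distinction mentioned above, with hypothetical names of my
own. Under strict, call-by-value evaluation the unused subroutine runs anyway;
under a lazy, call-by-need style it is never evaluated at all, which is the
sense in which an inactive "soft subroutine" may simply never exist during
the run.)

    # Strict (call-by-value): arguments are computed even if never used.
    def strict_choose(condition, used_value, unused_value):
        return used_value if condition else unused_value

    # Lazy (call-by-need, emulated with thunks): the unused branch never runs.
    def lazy_choose(condition, used_thunk, unused_thunk):
        return used_thunk() if condition else unused_thunk()

    def expensive_unused_subroutine():
        print("this subroutine actually ran")
        return 42

    # Strict: expensive_unused_subroutine() executes, though its result is discarded.
    strict_choose(True, 1, expensive_unused_subroutine())

    # Lazy: the thunk for the unused branch is never called.
    lazy_choose(True, lambda: 1, expensive_unused_subroutine)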


To prevent the contagion of the immateriality of the person to its  
environment, you can only introduce actual infinities in the local  
working of consciousness. But then you can no more say yes to the  
digitalist surgeon based on the comp assumption. This is like making a  
current theory more complex to avoid a simpler theory.


Your move looks like the move of a superstitious boss who wants all his  
employees present every day, even when they have nothing to do.  
Molecular biology shows that in the cells, the proteins which have no  
function are quickly destroyed so that their atoms are recycled, and  
they are called by need, and reconstituted only when they are useful.







The significance of the neuron (firing or not firing) is  
computational. If for the precise computation C the neuron n is not  
used in the interval of time (t1, t2), you may replace it by a  
functionally equivalent machine for the working in that time  
interval.

There is no problem


Well, there's not *that* problem.


? The point is that if you accept that the non-active parts can be removed,  
then the movie graph explains how your immateriality extends to a sheaf  
of computational histories (that is, really, true-and-provable number  
relations) going through you.
It is like Darwin: it gives a realm (numbers, combinators, ... choose  
your favorite base) in which we can explain how the laws of physics  
appeared and evolved: not in a space-time, but in a logical space  
(that each Löbian number can discover in its head).


And the G/G* separation extends on the quanta (SGrz1, X1, Z1) giving  
the qualia (S4Grz1, X1*, Z1*).






I tend to work at a more general, or abstract level, and I think  
that consciousness needs some amount of self-reflection, two  
universal machines in front of each other, at least. If Mars Rover  
can add and multiply it may have the consciousness of Robinson  
Arithmetic. If Mars Rover believes in enough arithmetical induction  
rules, it can quickly be trivially Löbian. But its consciousness  
will develop when it genuinely and privately identifies itself with  
its unnameable first person (Bp & p), using Bp for public science  
and opinions. It will build a memorable and unique self-experience.


To be clear, Mars Rover may still be largely behind the fruit fly  
in matters of consciousness. The fruit fly seems capable of  
appreciating wine, for example. Mars Rover is still too much an  
infant; it wants only to satisfy its mother company, not yet itself.


But it also doesn't conceive of itself and its mother company -  
only its mission.


Our universal machines are brainwashed at their 

Re: problem of size '10

2010-03-12 Thread Brent Meeker

On 3/12/2010 6:03 AM, Bruno Marchal wrote:


On 11 Mar 2010, at 20:38, Brent Meeker wrote:


On 3/11/2010 10:16 AM, Bruno Marchal wrote:


On 11 Mar 2010, at 17:57, Brent Meeker wrote:


On 3/11/2010 1:59 AM, Bruno Marchal wrote:



I don't see how we could use Tononi's paper to provide a physical
or a computational role to an inactive device in the actual
supervenience of an actual computation currently not using that
device.


I'm not sure I understand that question. It seems to turn on what is
meant by "using that device". Is my brain using a neuron that isn't
firing? I'd say yes, it is part of the system and its not firing is
significant.


Two old guys A and B decide to buy a car each. They bought
identical cars, and paid the same price.
But B's car has a defect: above 90 mi/h the engine explodes. But both
A and B will peacefully enjoy driving their cars for the rest of their
lives. They were old, and never went faster than 60 mi/h until they
died. Would you say that A's car was driving but that B's car was only
partially driving?


If I'm a multiple-worlder I'd say B's car is driving with a lower
probability than A's.


Why? The QM many worlds entails that he is old in the normal worlds, and
he will keep going less than 60mi/h there too.


In some worlds his car is a Toyota.


Only in Harry-Potter
worlds would energy push him beyond that limit, due to an accumulation
of quantum incidents.








What about a brain with clever neurons? For example, neuron N24
anticipates that it will be useless for the next ten minutes, which
gives it the time to take a coffee break and to talk with some glial
cell friends. Then after ten minutes it comes back and does its job
very well. Would that brain be less conscious? It did not miss any
messages.


Same answer.


But this only confirms that you put some magic in the presence of
matter. If matter plays that role, then by comp we just need to
actively emulate those inactive pieces of matter, which, by definition,
would not be inactive then.



If inactive pieces are needed, what about inactive software subroutines? Then
I have to ask the doctor whether the program he will put in my brain
is evaluated in the lazy way, or strictly, or by value. Again, by
definition of comp, this is a matter of finding the right level; then
any implementation will do, any universal system will do. And the UDA
consequences follow.

To prevent the contagion of the immateriality of the person to its
environment, you can only introduce actual infinities in the local
working of consciousness.


QM does introduce infinities since it assumes real-valued probabilities.


But then you can no more say yes to the
digitalist surgeon based on the comp assumption.


Only if the digitalist surgeon has a magically classical digital brain 
at his disposal...or if I insist on probability 1 success.



This is like making a
current theory more complex to avoid a simpler theory.

Your move looks like the move of a superstitious boss who wants all his
employees present every day, even when they have nothing to do. Molecular
biology shows that in the cells, the proteins which have no function
are quickly destroyed so that their atoms are recycled, and they are
called by need, and reconstituted only when they are useful.


I'm just taking seriously the Everett interpretation.  Since we don't 
know what consciousness is, we can as well suppose it supervenes on the 
ray in Hilbert space as on the projection to our classical subspace.  I 
haven't added anything to the ontology.











The significance of the neuron (firing or not firing) is
computational. If for the precise computation C the neuron n is not
used in the interval of time (t1, t2), you may replace it by a
functionally equivalent machine for the working in that time interval.
There is no problem


Well, there's not *that* problem.


? The point is that if you accept that the non-active parts can be removed,
then the movie graph explains how your immateriality extends to a sheaf
of computational histories (that is, really, true-and-provable number
relations) going through you.
It is like Darwin: it gives a realm (numbers, combinators, ... choose
your favorite base) in which we can explain how the laws of physics
appeared and evolved: not in a space-time, but in a logical space (that
each Löbian number can discover in its head).



I'll be more impressed when we can explain why *this* law rather than 
*that* law evolved and why there are laws (intersubjective agreements) 
at all.




And the G/G* separation extends on the quanta (SGrz1, X1, Z1) giving the
qualia (S4Grz1, X1*, Z1*).





I tend to work at a more general, or abstract level, and I think that
consciousness needs some amount of self-reflection, two universal
machines in front of each other, at least. If Mars Rover can add and
multiply it may have the consciousness of Robinson Arithmetic. If
Mars Rover believes in enough arithmetical induction rules, it can
quickly be trivially Löbian. But its consciousness will 

Re: problem of size '10

2010-03-12 Thread Bruno Marchal


On 12 Mar 2010, at 19:31, Brent Meeker wrote:





Why? The QM many worlds entails that he is old in the normal  
worlds, and

he will keep going less than 60mi/h there too.


In some worlds his car is a Toyota.


But he is old. He will not go faster than 60mi/h in the normal worlds.




To prevent the contagion of the immateriality of the person to its
environment, you can only introduce actual infinities in the local
working of consciousness.


QM does introduce infinities since it assumes real-valued  
probabilities.


I said: in the local working of consciousness, not in the working of  
matter, where comp justifies the appearance of actual infinities. If  
you use QM in consciousness, you have to use an analog, non-Turing-emulable  
piece of quantum mechanism to block the immateriality  
contagion.






But then you can no more say yes to the
digitalist surgeon based on the comp assumption.


Only if the digitalist surgeon has a magically classical digital  
brain at his disposal...or if I insist on probability 1 success.


What does that change in the argument?





This is like making a
current theory more complex to avoid a simpler theory.

Your move looks like the move of a superstitious boss who wants all his
employees present every day, even when they have nothing to do. Molecular
biology shows that in the cells, the proteins which have no function
are quickly destroyed so that their atoms are recycled, and they are
called by need, and reconstituted only when they are useful.


I'm just taking seriously the Everett interpretation.  Since we  
don't know what consciousness is,



I think we know very well what consciousness is. Even more so when sick.  
We cannot define it, but that is different. We cannot define matter  
either.




we can as well suppose it supervenes on the ray in Hilbert space as  
on the projection to our classical subspace.  I haven't added  
anything to the ontology.


I don't see any problem with this, unless you are using all the  
decimals of the real or complex numbers in that ray, but then we are no  
longer working in the digital mechanist theory.







? The point is that if you accept that the non-active parts can be removed,
then the movie graph explains how your immateriality extends to a sheaf
of computational histories (that is, really, true-and-provable number
relations) going through you.
It is like Darwin: it gives a realm (numbers, combinators, ... choose
your favorite base) in which we can explain how the laws of physics
appeared and evolved: not in a space-time, but in a logical space (that
each Löbian number can discover in its head).



I'll be more impressed when we can explain why *this* law rather  
than *that* law evolved and why there are laws (intersubjective  
agreements) at all.



I don't understand. This is exactly what comp (+ the usual classical  
definitions of belief and knowledge) provides.
UDA already gives the general shape, and those laws are derivable  
from all the variants of self-reference in the manner of AUDA (as UDA  
makes obligatory).








And the G/G* separation extends on the quanta (SGrz1, X1, Z1)  
giving the

qualia (S4Grz1, X1*, Z1*).


And this is unique with comp.



Most probably. In any case, neither the body of the fruit fly, nor  
the

body of Mars Rover can think, because Bodies don't think. Persons,
intellects or souls, can think. Bodies are projections of their mind on
their distribution in the universal dovetailing (or the tiny  
equivalent

arithmetical Sigma_1 truth).


I think that means inferred components of their model of the world  
- with which I would agree.


Not "their". *We* are doing the reasoning. If it were "their", butterflies  
would have problems finding flowers!








If your theory assumes a physical primary substance, it is up to you  
to

explain its role in consciousness.


Its role in consciousness is to realize the processes that are  
consciousness.  Of course that leaves open the question of which  
processes do that - to which Tononi has given a possible answer.


A comp subtheory. Matter does not play any role in Tononi. He perhaps  
takes it for granted because he is not aware that it cannot exist with comp,  
but, fortunately for him, he does not use it at all, except in his  
three concluding lines on Mary, where he makes a mistake already well  
treated by Hofstadter and Dennett (and in my own publications).

Tononi does not address the comp mind-body problem at all.






But MGA forces that move to invoke
actual infinities and non turing emulable aspects of the  
(generalized)

brain.


It forces me to invoke a non-Turing-emulable world; but I think any  
finite part can still be Turing-emulable to a given fidelity < 1.


? Comp implies the worlds are not Turing emulable.  Even a nanocube of  
vacuum is not Turing emulable (with comp, but with QM too). I don't  
see your point.





But I'm not here to be an advocate for primary matter (Peter Jones  
does that well enough).  I neither accept nor 

Re: problem of size '10

2010-03-11 Thread Bruno Marchal


On 11 Mar 2010, at 02:10, Brent Meeker wrote:


Here's an interesting theory of consciousness in which  
counterfactuals would make a difference.


The fact that the counterfactuals make a difference is the essence of  
comp and of the comp supervenience thesis. But that is the reason why  
neither the movie nor the boolean graph *is* conscious. What is  
conscious is the person, and by comp the person is an abstract  
immaterial being that you can locally associate with the boolean  
graph/brain (and even with the movie, quasi-conventionally, in the case  
you decide to project the last frame of the movie on the boolean  
graph, to trigger it relatively stably in your story). But the  
consciousness of that person is then related, if only in the relative  
way, to all the computational histories going through it, from its point  
of view (more exactly, from the 3-p views of its 1-p views).





http://ntp.neuroscience.wisc.edu/faculty/fac-art/tononiconsciousness.pdf


Quite consistent with AUDA, but I have often explained the consistency  
of machine theology (AUDA) with Hobson's theory of dreams and  
neurophysiological approaches. There is an implicit use of the Galois  
connection theories/models, or equations/surfaces, and the idea that qualia  
are the shape of experience is natural with the first person, who lives at  
the intersection of belief and truth (Bp & p).


This is coherent for example with his analysis of the problem of  
Mary (correct with respect to its implicit comp):


Tononi wrote


Being and describing
According to the IIT, a full description of the set of
informational relationships generated by a complex at a
given time should say all there is to say about the experience
it is having at that time: nothing else needs to be added.
Nevertheless, the IIT also implies that to be conscious—say
to have a vivid experience of pure red—one needs to be a
complex of high Φ; there is no other way. Obviously,
although a full description can provide understanding of
what experience is and how it can be generated, it cannot
substitute for it: being is not describing. This point should
be uncontroversial, but it is worth mentioning because of a
well-known argument against a scientific explanation of
consciousness, best exemplified by a thought experiment
involving Mary, a neuroscientist in the 23rd century (Jackson,
1986). Mary knows everything about the brain processes
responsible for color vision, but has lived her whole
life in a black-and-white room and has never seen any
color. The argument goes that, despite her complete
knowledge of color vision, Mary does not know what it is
like to experience a color: it follows that there is some
knowledge about conscious experience that cannot be
deduced from knowledge about brain processes. The argument
loses its strength the moment one realizes that
consciousness is a way of being rather than a way of knowing.
According to the IIT, being implies “knowing” from the
inside, in the sense of generating information about one’s
previous state. Describing, instead, implies “knowing” from
the outside.



But he makes a common mistake by concluding:


This conclusion is in no way surprising: just
consider that though we understand quite well how energy
is generated by atomic fission, unless atomic fission occurs,
no energy is generated—no amount of description will
substitute.


Which is obviously incorrect. If you emulate the couple made of the  
genuine integrated cortical system + the atomic fission, there will be  
a conscious (and relatively correct) observation of energy generation.
If you want, he is correct from the inside, but if his own (comp-based)  
theory is correct, there is a view from outside of the couple observer/ 
atomic-fission. He is not aware of the comp first-person (plural or  
not) indeterminacy.


From this, he misses the mind-body problem, but this does not change  
the interest of his proposal as a theory of human consciousness. It  
is even coherent with my suggestion that a brain is either two  
universal machines in front of each other, or two brains in front of  
each other (making a brain 2 or 4 or 8 or 16 or 32 ... universal  
machines).


At some point he needs time, he says, but here the UD-time, or even  
the successor function on the natural numbers, suits perfectly.


I have no problem with his notion of graded consciousness, being an  
experiencer of lucidity in sleep, and an amateur of altered states of  
consciousness, amnesia, etc. This does not really change the 0-or-1  
nature of being conscious or not: the abstract third person being the  
fixed point of p -> Bp. Löb's formula (B(Bp -> p) -> Bp) makes such a fixed  
point true and provable. (On the contrary, the first person lives at  
the intersection of p and Bp (p & Bp) and has no fixed point.)
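
(For readability, here are the provability-logic formulas referenced above,
set in LaTeX; this rendering is my reconstruction of the garbled ASCII,
assuming the standard reading of B as the provability box, D as its dual
diamond, and t as the propositional constant "true".)

    \[ \Box(\Box p \rightarrow p) \rightarrow \Box p \qquad \text{(L\"ob's formula)} \]
    \[ p, \quad \Box p, \quad \Box p \land p, \quad \Box p \land \Diamond t, \quad \Box p \land p \land \Diamond t \]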


The p, Bp, Bp & p, Bp & Dt, Bp & p & Dt do not describe partial  
states of consciousness, but different states of consciousness (like  
observing, feeling, proving, knowing, etc.). The modalities  

Re: problem of size '10

2010-03-11 Thread Stathis Papaioannou
On 11 March 2010 13:57, Jack Mallah jackmal...@yahoo.com wrote:
 --- On Mon, 3/8/10, Stathis Papaioannou stath...@gmail.com wrote:
 In the original fading qualia thought experiment the artificial neurons 
 could be considered black boxes, the consciousness status of which is 
 unknown. The conclusion is that if the artificial neurons lack 
 consciousness, then the brain would be partly zombified, which is absurd.

 That's not the argument Chalmers made, and indeed he couldn't have, since he 
 believes zombies are possible; he instead talks about fading qualia.

 If you start out believing that computer zombies are NOT possible, the 
 original thought experiment is moot; you already believe the conclusion.   
 His argument is aimed at dualists, who are NOT computationalists to start out.

 Since partial consciousness is possible, which he didn't take into account, 
 his argument _fails_; a dualist who does believe zombies are possible should 
 have no problem believing that partial zombies are.  So dualists don't have 
 to be computationalists after all.

A partial zombie is very different from a full zombie! The thing about
zombies is, although you can't tell if someone is a zombie, you know
with absolute certainty that *you* aren't a zombie (assuming that you
aren't a zombie, of course; if you are a zombie then you don't know
anything at all except in a mindless zombie way). If your visual
qualia were to fade but your behaviour remain unchanged, then that is
equivalent to partial zombification, and partial zombification is
absurd, impossible or meaningless (take your pick). The argument is
simply this: if zombie vision in an otherwise intact person is
possible, then I could have zombie vision right now. I behave as if I
can see normally, since I am typing this email, but that is consistent
with zombie vision. I am also absolutely convinced that I can see
normally, but that is also consistent with zombie vision. So it seems
that zombie vision is neither subjectively nor objectively different
from normal vision, which means it is not different from normal vision
in any way that matters. You might still say zombie vision is still
different in some metaphysical sense, a category neither objective nor
subjective, but now you are in the supernatural domain.

 I think this holds *whatever* is in the black boxes: computers, biological 
 tissue, a demon pulling strings or nothing.

 Partial consciousness is possible and again ruins any such argument.  If you 
 don't believe to start out that consciousness can be based on whatever 
 (e.g. nothing), you don't have any reason to accept the conclusion.

It goes against the grain of functionalism to assume that
consciousness is due primarily to a physical process. The primary idea
is that if the black box replicates the function of a component in a
system, then any mental states that the system has will also be
replicated. Normally this is taken as implying a kind of materialism
since the black box won't be able to replicate behaviour of brain
components unless it contains a complex physical mechanism, but if
miraculously it could - if the black box were empty but the remaining
brain tissue behaves normally anyway - then the consciousness of the
system would remain intact. If a chunk were taken out of the CPU in a
computer but, miraculously, the remaining parts of the CPU behaved
exactly the same as if nothing had happened, then that magical CPU is
just as good as an intact one, and the computations it performs just
as valid.

 whatever is going on inside the putative zombie's head, if it reproduces the 
 I/O behaviour of a human, it will have the mind of a human.

 That is behaviorism, not computationalism, and I certainly don't believe it.  
 I wouldn't say that a computer that uses a huge lookup table algorithm would 
 be conscious.

Well, functionalism reduces to a type of behaviourism. Functionalism
is OK with replacing components of a system with functionally
identical analogues, regardless of the internal processes of the
functional analogues. If the internal processes don't matter then it
should not matter if a replaced neuron, for example, is driven by a
lookup table. In fact, a practical computational model of a neuron
would probably contain lookup tables as a matter of course, and it
would seem absurd to claim that its consciousness is inversely
proportional to the number of such devices used.
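
(An illustrative sketch, not from the thread: a small Python fragment of the
point above, with hypothetical names of my own. A neuron model whose
activation function is served from a precomputed lookup table gives the same
input/output behaviour, to the table's resolution, as one that computes the
function directly.)

    import math

    def sigmoid(x):
        # Directly computed activation function.
        return 1.0 / (1.0 + math.exp(-x))

    # Precompute a lookup table over a grid of input values in [-10, 10].
    STEP = 0.001
    TABLE = {round(i * STEP, 3): sigmoid(i * STEP) for i in range(-10000, 10001)}

    def sigmoid_lut(x):
        # Table-driven activation: same I/O behaviour up to the grid resolution.
        key = round(max(-10.0, min(10.0, x)), 3)
        return TABLE[key]

    # Functionally interchangeable for the purposes of the neuron's output:
    assert abs(sigmoid(1.234) - sigmoid_lut(1.234)) < 1e-3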

 The requirement that a computer be able to handle the counterfactuals in 
 order to be conscious seems to have been brought in to make 
 computationalists feel better about computationalism.

 Not at all.  It was always part of the notion of computation.  Would you buy 
 a PC that only plays a movie?  It must handle all possible inputs in a 
 reliable manner.

But if I did buy a PC that only did addition, for example, I don't see
how it would make sense to say that it isn't really doing that
computation, that it's not real addition. It might not qualify as a
computer, 

Re: problem of size '10

2010-03-11 Thread Brent Meeker

On 3/11/2010 1:59 AM, Bruno Marchal wrote:


I don't see how we could use Tononi's paper to provide a physical or a 
computational role to an inactive device in the actual supervenience of 
an actual computation currently not using that device.


I'm not sure I understand that question.  It seems to turn on what is 
meant by "using that device".  Is my brain using a neuron that isn't 
firing?  I'd say yes, it is part of the system and its not firing is 
significant.


I see Tononi's theory as providing a kind of answer to questions like, 
"Is a Mars Rover conscious and if so, what is it conscious of?  Is it 
more or less conscious than a fruit fly?"


Brent

If you have an idea, please elaborate. In my opinion this a priori 
integrates well with the comp consequences. Thanks for that interesting 
reference on a reasonable neurophysiological (and, with a high 
substitution level, comp) account of consciousness and qualia, 
probably consistent with AUDA, but not aware, like many, of the 
conceptual reversal that the basic assumption imposes.


Bruno
http://iridia.ulb.ac.be/~marchal/





Re: problem of size '10

2010-03-11 Thread Brent Meeker

On 3/11/2010 4:51 AM, Stathis Papaioannou wrote:

On 11 March 2010 13:57, Jack Mallah jackmal...@yahoo.com wrote:
   

--- On Mon, 3/8/10, Stathis Papaioannou stath...@gmail.com wrote:
 

In the original fading qualia thought experiment the artificial neurons could 
be considered black boxes, the consciousness status of which is unknown. The 
conclusion is that if the artificial neurons lack consciousness, then the brain 
would be partly zombified, which is absurd.
   

That's not the argument Chalmers made, and indeed he couldn't have, since he 
believes zombies are possible; he instead talks about fading qualia.

If you start out believing that computer zombies are NOT possible, the original 
thought experiment is moot; you already believe the conclusion.   His argument 
is aimed at dualists, who are NOT computationalists to start out.

Since partial consciousness is possible, which he didn't take into account, his 
argument _fails_; a dualist who does believe zombies are possible should have 
no problem believing that partial zombies are.  So dualists don't have to be 
computationalists after all.
 

A partial zombie is very different from a full zombie! The thing about
zombies is, although you can't tell if someone is a zombie, you know
with absolute certainty that *you* aren't a zombie (assuming that you
aren't a zombie, of course; if you are a zombie then you don't know
anything at all except in a mindless zombie way). If your visual
qualia were to fade but your behaviour remain unchanged, then that is
equivalent to partial zombification, and partial zombification is
absurd, impossible or meaningless (take your pick). The argument is
simply this: if zombie vision in an otherwise intact person is
possible, then I could have zombie vision right now. I behave as if I
can see normally, since I am typing this email, but that is consistent
with zombie vision. I am also absolutely convinced that I can see
normally, but that is also consistent with zombie vision. So it seems
that zombie vision is neither subjectively nor objectively different
from normal vision, which means it is not different from normal vision
in any way that matters. You might still say zombie vision is still
different in some metaphysical sense, a category neither objective nor
subjective, but now you are in the supernatural domain.

   

I think this holds *whatever* is in the black boxes: computers, biological 
tissue, a demon pulling strings or nothing.
   

Partial consciousness is possible and again ruins any such argument.  If you don't believe to start 
out that consciousness can be based on whatever (e.g. nothing), you don't 
have any reason to accept the conclusion.
 

It goes against the grain of functionalism to assume that
consciousness is due primarily to a physical process. The primary idea
is that if the black box replicates the function of a component in a
system, then any mental states that the system has will also be
replicated. Normally this is taken as implying a kind of materialism
since the black box won't be able to replicate behaviour of brain
components unless it contains a complex physical mechanism, but if
miraculously it could - if the black box were empty but the remaining
brain tissue behaves normally anyway - then the consciousness of the
system would remain intact. If a chunk were taken out of the CPU in a
computer but, miraculously, the remaining parts of the CPU behaved
exactly the same as if nothing had happened, then that magical CPU is
just as good as an intact one, and the computations it performs just
as valid.

   

whatever is going on inside the putative zombie's head, if it reproduces the 
I/O behaviour of a human, it will have the mind of a human.
   

That is behaviorism, not computationalism, and I certainly don't believe it.  I 
wouldn't say that a computer that uses a huge lookup table algorithm would be 
conscious.
 

Well, functionalism reduces to a type of behaviourism. Functionalism
is OK with replacing components of a system with functionally
identical analogues, regardless of the internal processes of the
functional analogues. If the internal processes don't matter then it
should not matter if a replaced neuron, for example, is driven by a
lookup table. In fact, a practical computational model of a neuron
would probably contain lookup tables as a matter of course, and it
would seem absurd to claim that its consciousness is inversely
proportional to the number of such devices used.

   

The requirement that a computer be able to handle the counterfactuals in order 
to be conscious seems to have been brought in to make computationalists feel 
better about computationalism.
   

Not at all.  It was always part of the notion of computation.  Would you buy a 
PC that only plays a movie?  It must handle all possible inputs in a reliable 
manner.
 

But if I did buy a PC that only did addition, for example, I don't see
how it would make sense to say that 

Re: problem of size '10

2010-03-11 Thread Bruno Marchal


On 11 Mar 2010, at 17:57, Brent Meeker wrote:


On 3/11/2010 1:59 AM, Bruno Marchal wrote:



I don't see how we could use Tononi's paper to provide a physical  
or a computational role to an inactive device in the actual  
supervenience of an actual computation currently not using that  
device.


I'm not sure I understand that question.  It seems to turn on what  
is meant by "using that device".  Is my brain using a neuron that  
isn't firing?  I'd say yes, it is part of the system and its not  
firing is significant.


Two old guys A and B decide to buy a car each. They bought  
identical cars, and paid the same price.
But B's car has a defect: above 90 mi/h the engine explodes. But both A  
and B will peacefully enjoy driving their cars for the rest of their  
lives. They were old, and never went faster than 60 mi/h until they died.  
Would you say that A's car was driving but that B's car was only  
partially driving?


What about a brain with clever neurons? For example, neuron N24  
anticipates that it will be useless for the next ten minutes, which  
gives it the time to take a coffee break and to talk with some glial  
cell friends. Then after ten minutes it comes back and does its job  
very well. Would that brain be less conscious? It did not miss any  
messages.


The significance of the neuron (firing or not firing) is  
computational. If for the precise computation C the neuron n is not  
used in the interval of time (t1, t2), you may replace it by a  
functionally equivalent machine for the working in that time interval.
There is no problem when you make consciousness supervene on the  
relevant abstract computations, that is, on the existence of some relations  
between some numbers (given that I have chosen as base elementary  
arithmetic, which is Turing universal).


To attach consciousness to physical activity + the abstract  
counterfactuals is useless. It introduces more difficulty than it  
solves. With comp that needed physical activity has to be Turing  
emulable itself: if not, it means you make consciousness depend on  
something not Turing emulable, and you cannot say yes to the doctor  
qua computatio.










I see Tononi's theory as providing a kind of answer to questions  
like, "Is a Mars Rover conscious and if so, what is it conscious  
of?  Is it more or less conscious than a fruit fly?"



I tend to work at a more general, or abstract level, and I think that  
consciousness needs some amount of self-reflection, two universal  
machines in front of each other, at least. If Mars Rover can add and  
multiply it may have the consciousness of Robinson Arithmetic. If Mars  
Rover believes in enough arithmetical induction rules, it can quickly  
be trivially Löbian. But its consciousness will develop when it  
genuinely and privately identifies itself with its unnameable first  
person (Bp & p), using Bp for public science and opinions. It will  
build a memorable and unique self-experience.


To be clear, Mars Rover may still be largely behind the fruit fly in  
matters of consciousness. The fruit fly seems capable of appreciating  
wine, for example. Mars Rover is still too much an infant; it wants  
only to satisfy its mother company, not yet itself.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: problem of size '10

2010-03-11 Thread Brent Meeker

On 3/11/2010 10:16 AM, Bruno Marchal wrote:


On 11 Mar 2010, at 17:57, Brent Meeker wrote:


On 3/11/2010 1:59 AM, Bruno Marchal wrote:



I don't see how we could use Tononi's paper to provide a physical or 
a computational role to an inactive device in the actual supervenience 
of an actual computation currently not using that device.


I'm not sure I understand that question.  It seems to turn on what is 
meant by "using that device".  Is my brain using a neuron that isn't 
firing?  I'd say yes, it is part of the system and its not firing is 
significant.


Two old guys A and B decide to buy a car each. They bought 
identical cars, and paid the same price.
But B's car has a defect: above 90 mi/h the engine explodes. But both A 
and B will peacefully enjoy driving their cars for the rest of their 
lives. They were old, and never went faster than 60 mi/h until they died. 
Would you say that A's car was driving but that B's car was only 
partially driving?


If I'm a multiple-worlder I'd say B's car is driving with a lower 
probability than A's.




What about a brain with clever neurons? For example, neuron N24 
anticipates that it will be useless for the next ten minutes, which 
gives it the time to take a coffee break and to talk with some glial 
cell friends. Then after ten minutes it comes back and does its job 
very well. Would that brain be less conscious? It did not miss any 
messages.


Same answer.



The significance of the neuron (firing or not firing) is 
computational. If for the precise computation C the neuron n is not 
used in the interval of time (t1, t2), you may replace it by a 
functionally equivalent machine for the working in that time interval.
There is no problem 


Well, there's not *that* problem.


when you make consciousness supervene on the relevant abstract 
computations, that is, on the existence of some relations between some 
numbers (given that I have chosen as base elementary arithmetic, which 
is Turing universal).


To attach consciousness to physical activity + the abstract 
counterfactuals is useless. It introduces more difficulties than it 
solves. With comp, that needed physical activity has to be Turing 
emulable itself: if not, it means you make consciousness depend on 
something not Turing emulable, and you cannot say yes to the doctor 
qua computatio.
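
For concreteness, here is a minimal Python sketch of that (t1, t2) 
replacement (a toy model with hypothetical names, not taken from any of 
the papers discussed): on a run in which component n is never consulted 
during the window, swapping its implementation inside that window, even 
for a broken one, leaves the run's trace unchanged.

def neuron_n(x):
    return x + 1

def broken_n(x):                      # would behave differently *if* it were called
    raise RuntimeError("not functionally equivalent")

def run(t1, t2, replacement=None):
    trace, state = [], 0
    for t in range(10):
        n = replacement if (replacement and t1 <= t < t2) else neuron_n
        if t1 <= t < t2:
            state = state * 2         # on this run, n is simply not consulted in (t1, t2)
        else:
            state = n(state)
        trace.append(state)
    return trace

# identical trace whether or not n is swapped out during the unused window
assert run(3, 6) == run(3, 6, replacement=broken_n)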










I see Tononi's theory as providing a kind of answer to questions 
like: Is a Mars Rover conscious, and if so, what is it conscious of?  
Is it more or less conscious than a fruit fly?



I tend to work at a more general, or abstract, level, and I think that 
consciousness needs some amount of self-reflection, two universal 
machines in front of each other, at least. If Mars Rover can add and 
multiply, it may have the consciousness of Robinson Arithmetic. If Mars 
Rover believes in enough arithmetical induction rules, it can quickly 
be trivially Löbian. But its consciousness will develop when it 
identifies genuinely and privately itself with its unnameable first 
person (Bp & p), using Bp for public science and opinions. It will 
build a memorable and unique self-experience.


To be clear, Mars Rover may still be largely behind the fruit fly in 
matters of consciousness. The fruit fly seems capable of appreciating 
wine, for example. Mars Rover is still too much an infant: it wants 
only to satisfy its mother company, not yet itself.


But it also doesn't conceive of itself and its mother company - only 
its mission.  I think the interesting point is that the two may have 
incommensurable consciousness; they may be conscious of different 
things in different ways.


Brent



Bruno

http://iridia.ulb.ac.be/~marchal/








Re: problem of size '10

2010-03-11 Thread Stathis Papaioannou
On 12 March 2010 04:17, Brent Meeker meeke...@dslextreme.com wrote:

[Stathis]
 We can do a thought experiment. A brain is rigged to explode unless it
 goes down one particular pathway. Does it change the computation being
 implemented if it is given the right input so that it does go down
 that pathway? Does it change the consciousness? Is it different to a
 brain that lacks the connections to begin with so that it does not
 explode but simply stops working unless it is provided with the right
 input? What do you lose if you say both brains have exactly the same
 conscious experience as a normal brain which goes down that pathway?

[Brent]
 You might have diminished consciousness.  If you identify consciousness with
 a computation, as in a digital computer, then any specific computation will
 leave some components unused.  But 0's are as much a part of the computation
 as 1's.  So just because the same causal chain of gates or neurons is used
 it is not the same computation unless it is relative to the same possible
 computations.  Or at least that's one way to look at it.  It's not magic,
 it's just that computation and consciousness may be holistic properties of a
 system.

When a brain is not being consciously used at all, because the person
is in dreamless sleep, the counterfactuals are all still there; they
just don't have any effect. As the person is waking up their
consciousness for the first second might be very limited, while again
the counterfactual behaviour is still there. A common sense conclusion
would be that only that part of the system which is being used
contributes to consciousness. What reason is there to reject this
conclusion?


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-11 Thread Brent Meeker

On 3/11/2010 2:34 PM, Stathis Papaioannou wrote:

On 12 March 2010 04:17, Brent Meekermeeke...@dslextreme.com  wrote:

[Stathis]
   

We can do a thought experiment. A brain is rigged to explode unless it
goes down one particular pathway. Does it change the computation being
implemented if it is given the right input so that it does go down
that pathway? Does it change the consciousness? Is it different to a
brain that lacks the connections to begin with so that it does not
explode but simply stops working unless it is provided with the right
input? What do you lose if you say both brains have exactly the same
conscious experience as a normal brain which goes down that pathway?
 

[Brent]
   

You might have diminished consciousness.  If you identify consciousness with
a computation, as in a digital computer, then any specific computation will
leave some components unused.  But 0's are as much a part of the computation
as 1's.  So just because the same causal chain of gates or neurons is used
it is not the same computation unless it is relative to the same possible
computations.  Or at least that's one way to look at it.  It's not magic,
it's just that computation and consciousness may be holistic properties of a
system.
 

When a brain is not being consciously used at all, because the person
is in dreamless sleep, the counterfactuals are all still there;


Hmmm.  Are they?  Suppose instead of being asleep the person is 
anesthetized and cooled so there is no activity at all in the 
brain...it's inert.  Are counterfactuals still there?  From an 
information processing standpoint, counterfactuals only exist because 
they are alternate possibilities.  Possibility though is too vague to 
base a theory on.  Suppose it is refined by saying the counterfactuals 
are defined by the probabilities of quantum mechanics.  I think this is 
what Jack is getting at when he appeals to physical laws.




they
just don't have any effect. As the person is waking up their
consciousness for the first second might be very limited, while again
the counterfactual behaviour is still there. A common sense conclusion
would be that only that part of the system which is being used
contributes to consciousness. What reason is there to reject this
conclusion?

   


Because it leads to the MGA, where consciousness is instantiated by just 
a recorded sequence.  Bruno uses this to argue that the consciousness 
must be associated with the abstract counterfactuals which are part of 
the computation.  But that raises the problem that arbitrarily many 
abstract computations exist which include that same part.  Bruno makes a 
virtue of this by saying that is why there are multiple worlds in QM 
(although it seems to allow many more worlds than QM would).  But if 
we're going to appeal to things happening in the multiple worlds we can 
maintain the counterfactuals without going to Platonia.


Brent




Re: problem of size '10

2010-03-11 Thread Stathis Papaioannou
On 12 March 2010 10:46, Brent Meeker meeke...@dslextreme.com wrote:

[Stathis]
 When a brain is not being consciously used at all, because the person
 is in dreamless sleep, the counterfactuals are all still there;

[Brent]
 Hmmm.  Are they?  Suppose instead of being asleep the person is anesthetized
 and cooled so there is no activity at all in the brain...it's inert.  Are
 counterfactuals still there?  From an information processing standpoint,
 counterfactuals only exist because they are alternate possibilities.
 Possibility though is too vague to base a theory on.  Suppose it is
 refined by saying the counterfactuals are defined by the probabilities of
 quantum mechanics.  I think this is what Jack is getting at when he appeals
 to physical laws.

The pathways are all intact and can spring into action if the person
wakes up. There is a continuum from everything being there and ready
to use immediately, to all there but parts of the system dormant, to
not there at all but could be added if the person has extensive
surgery. And a firm plan to do the surgery would be different again
from the mere availability of surgery. You would need an entire theory
of how the probability of the counterfactual behaviour occurring would
affect consciousness.

[Stathis]
 they
 just don't have any effect. As the person is waking up their
 consciousness for the first second might be very limited, while again
 the counterfactual behaviour is still there. A common sense conclusion
 would be that only that part of the system which is being used
 contributes to consciousness. What reason is there to reject this
 conclusion?

[Brent]
 Because it leads to the MGA, where consciousness is instantiated by just a
 recorded sequence.  Bruno uses this to argue that the consciousness must be
 associated with the abstract counterfactuals which are part of the
 computation.  But that raises the problem that arbitrarily many abstract
 computations exist which include that same part.  Bruno makes a virtue of
 this by saying that is why there are multiple worlds in QM (although it
 seems to allow many more worlds than QM would).  But if we're going to
 appeal to things happening in the multiple worlds we can maintain the
 counterfactuals without going to Platonia.

It's not incompatible with any physical observation to say that
consciousness is instantiated by just a recorded sequence.


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-11 Thread Brent Meeker

On 3/11/2010 4:35 PM, Stathis Papaioannou wrote:

On 12 March 2010 10:46, Brent Meekermeeke...@dslextreme.com  wrote:

[Stathis]

When a brain is not being consciously used at all, because the person
is in dreamless sleep, the counterfactuals are all still there;


[Brent]

Hmmm.  Are they?  Suppose instead of being asleep the person is anesthetized
and cooled so there is no activity at all in the brain...it's inert.  Are
counterfactuals still there?  From an information processing standpoint,
counterfactuals only exist because they are alternate possibilities.
Possibility though is too vague to base a theory on.  Suppose it is
refined by saying the counterfactuals are defined by the probabilities of
quantum mechanics.  I think this is what Jack is getting at when he appeals
to physical laws.


The pathways are all intact and can spring into action if the person
wakes up. There is a continuum from everything being there and ready
to use immediately, to all there but parts of the system dormant, to
not there at all but could be added if the person has extensive
surgery.


That would be a classical change and different from a MWI possibility.


And a firm plan to do the surgery would be different again
from the mere availability of surgery. You would need an entire theory
of how the probability of the counterfactual behaviour occurring would
affect consciousness.


It doesn't occur - that's why it's counterfactual.  We have a theory 
about the probability of counterfactuals occurring, i.e. QM.  MWI has the 
effect of making the counterfactuals available as explanations (at 
least metaphysically).


Tononi's theory is one that relates this to consciousness.  Bruno's 
theory has the same requirement, except the counterfactuals are in the 
abstract computation.





[Stathis]

they
just don't have any effect. As the person is waking up their
consciousness for the first second might be very limited, while again
the counterfactual behaviour is still there. A common sense conclusion
would be that only that part of the system which is being used
contributes to consciousness. What reason is there to reject this
conclusion?


[Brent]

Because it leads to the MGA, where consciousness is instantiated by just a
recorded sequence.  Bruno uses this to argue that the consciousness must be
associated with the abstract counterfactuals which are part of the
computation.  But that raises the problem that arbitrarily many abstract
computations exist which include that same part.  Bruno makes a virtue of
this by saying that is why there are multiple worlds in QM (although it
seems to allow many more worlds than QM would).  But if we're going to
appeal to things happening in the multiple worlds we can maintain the
counterfactuals without going to Platonia.


It's not incompatible with any physical observation to say that
consciousness is instantiated by just a recorded sequence.


Is it incompatible with any physical observation to say that 
consciousness is instantiated by a rock?  The only consciousness we have 
observation of is our own 1st person.  It's not plausible that it's a 
recording, though in some sense it may be logically possible.


Brent




Re: problem of size '10

2010-03-10 Thread Jack Mallah
--- On Mon, 3/8/10, Stathis Papaioannou stath...@gmail.com wrote:
 In the original fading qualia thought experiment the artificial neurons could 
 be considered black boxes, the consciousness status of which is unknown. The 
 conclusion is that if the artificial neurons lack consciousness, then the 
 brain would be partly zombified, which is absurd.

That's not the argument Chalmers made, and indeed he couldn't have, since he 
believes zombies are possible; he instead talks about fading qualia.

If you start out believing that computer zombies are NOT possible, the original 
thought experiment is moot; you already believe the conclusion.   His argument 
is aimed at dualists, who are NOT computationalists to start out.

Since partial consciousness is possible, which he didn't take into account, his 
argument _fails_; a dualist who does believe zombies are possible should have 
no problem believing that partial zombies are.  So dualists don't have to be 
computationalists after all.

 I think this holds *whatever* is in the black boxes: computers, biological 
 tissue, a demon pulling strings or nothing.

Partial consciousness is possible and again ruins any such argument.  If you 
don't believe to start out that consciousness can be based on whatever (e.g. 
nothing), you don't have any reason to accept the conclusion.

 whatever is going on inside the putative zombie's head, if it reproduces the 
 I/O behaviour of a human, it will have the mind of a human.

That is behaviorism, not computationalism, and I certainly don't believe it.  I 
wouldn't say that a computer that uses a huge lookup table algorithm would be 
conscious.

 The requirement that a computer be able to handle the counterfactuals in 
 order to be conscious seems to have been brought in to make computationalists 
 feel better about computationalism.

Not at all.  It was always part of the notion of computation.  Would you buy a 
PC that only plays a movie?  It must handle all possible inputs in a reliable 
manner.
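
A minimal Python sketch of the contrast (a toy example only, not from 
anyone's paper): a device that computes handles every input, while a 
"movie" merely replays the outputs of one factual run, whatever input 
it is actually given.

def adder(inputs):                       # computes: handles all possible inputs
    return [a + b for a, b in inputs]

recorded_run = adder([(1, 1), (2, 3)])   # a recording of one particular run

def movie(inputs):                       # replays: ignores its inputs entirely
    return list(recorded_run)

print(adder([(1, 1), (2, 3)]))   # [2, 5]
print(movie([(1, 1), (2, 3)]))   # [2, 5]  agrees on the recorded input...
print(adder([(4, 4), (0, 7)]))   # [8, 7]
print(movie([(4, 4), (0, 7)]))   # [2, 5]  ...but fails every counterfactual input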

 Brains are all probabilistic in that disaster could at any point befall them 
 causing them to deviate widely from normal behaviour

It is not a problem, it just seems like one at first glance.  Such cases 
include input to the formal system; for some inputs, the device halts or acts 
differently.  Hence my talk of derailable computations in my MCI paper.

 or else prevent them from deviating at all from a rigidly determined pathway

If that were done, that would change what computation is being implemented.  
Depending on how it was done, it might or might not affect consciousness.  We 
can't do such an experiment.

--- On Tue, 3/9/10, Stathis Papaioannou stath...@gmail.com wrote:
 Suppose box A contains a probabilistic mechanism that displays the right I/O 
 behaviour 99% of the time. Would the consciousness of the system be perfectly 
 normal until the box misbehaved ... ?

I'd expect it to be.  As above, I'd treat it as a box with input.

Now, as far as we know, there really is no such thing as true randomness.  It's 
all down to initial conditions (which are certainly to be treated as input) or 
to quantum splitting (which is again deterministic).  I don't believe in true 
randomness.

However, if true randomness is possible, then you'd have the same problem with 
Platonia.  In addition to having all of the deterministic Turing machines, 
you'd have all of the probabilistic Turing machines.  It is not an issue that 
bears on physicalism.




  




Re: problem of size '10

2010-03-10 Thread Brent Meeker

On 3/10/2010 6:57 PM, Jack Mallah wrote:

--- On Mon, 3/8/10, Stathis Papaioannoustath...@gmail.com  wrote:
   

In the original fading qualia thought experiment the artificial neurons could 
be considered black boxes, the consciousness status of which is unknown. The 
conclusion is that if the artificial neurons lack consciousness, then the brain 
would be partly zombified, which is absurd.
 

That's not the argument Chalmers made, and indeed he couldn't have, since he 
believes zombies are possible; he instead talks about fading qualia.

If you start out believing that computer zombies are NOT possible, the original 
thought experiment is moot; you already believe the conclusion.   His argument 
is aimed at dualists, who are NOT computationalists to start out.

Since partial consciousness is possible, which he didn't take into account, his 
argument _fails_; a dualist who does believe zombies are possible should have 
no problem believing that partial zombies are.  So dualists don't have to be 
computationalists after all.

   

I think this holds *whatever* is in the black boxes: computers, biological 
tissue, a demon pulling strings or nothing.
 

Partial consciousness is possible and again ruins any such argument.  If you don't believe to start 
out that consciousness can be based on whatever (e.g. nothing), you don't 
have any reason to accept the conclusion.

   

whatever is going on inside the putative zombie's head, if it reproduces the 
I/O behaviour of a human, it will have the mind of a human.
 

That is behaviorism, not computationalism, and I certainly don't believe it.  I 
wouldn't say that a computer that uses a huge lookup table algorithm would be 
conscious.
   



Whatever consciousness is, it's almost certainly a system-level 
property.  We're not going to find a neuron or even a small group of 
neurons that is conscious.  If it's a system-level property, then the 
system will include parts that aren't doing anything at any given time - 
yet the very fact that they aren't will be part of the implementation of 
consciousness in that system.


Brent




Re: problem of size '10

2010-03-10 Thread Quentin Anciaux
HI,

2010/3/11 Jack Mallah jackmal...@yahoo.com

 --- On Mon, 3/8/10, Stathis Papaioannou stath...@gmail.com wrote:
  In the original fading qualia thought experiment the artificial neurons
 could be considered black boxes, the consciousness status of which is
 unknown. The conclusion is that if the artificial neurons lack
 consciousness, then the brain would be partly zombified, which is absurd.

 That's not the argument Chalmers made, and indeed he couldn't have, since
 he believes zombies are possible; he instead talks about fading qualia.

 If you start out believing that computer zombies are NOT possible, the
 original thought experiment is moot; you already believe the conclusion.
 His argument is aimed at dualists, who are NOT computationalists to start
 out.

 Since partial consciousness is possible,


Well, you say so... but what is it exactly?



 which he didn't take into account, his argument _fails_; a dualist who does
 believe zombies are possible should have no problem believing that partial
 zombies are.  So dualists don't have to be computationalists after all.

  I think this holds *whatever* is in the black boxes: computers,
 biological tissue, a demon pulling strings or nothing.

 Partial consciousness is possible


Saying it again doesn't render it true or meaningful.


 and again ruins any such argument.  If you don't believe to start out that
 consciousness can be based on whatever (e.g. nothing), you don't have
 any reason to accept the conclusion.

  whatever is going on inside the putative zombie's head, if it reproduces
 the I/O behaviour of a human, it will have the mind of a human.

 That is behaviorism, not computationalism, and I certainly don't believe
 it.  I wouldn't say that a computer that uses a huge lookup table algorithm
 would be conscious.

  The requirement that a computer be able to handle the counterfactuals in
 order to be conscious seems to have been brought in to make
 computationalists feel better about computationalism.

 Not at all.  It was always part of the notion of computation.  Would you
 buy a PC that only plays a movie?  It must handle all possible inputs in a
 reliable manner.

 I wouldn't... but if it plays a movie, it does perform a computation, I
wouldn't buy a *general purpose* computer which does only one computation...
because it would obviously not be a general purpose computer.


  Brains are all probabilistic in that disaster could at any point befall
 them causing them to deviate widely from normal behaviour

 It is not a problem, it just seems like one at first glance.  Such cases
 include input to the formal system; for some inputs, the device halts or
 acts differently.  Hence my talk of derailable computations in my MCI
 paper.

  or else prevent them from deviating at all from a rigidly determined
 pathway

 If that were done, that would change what computation is being implemented.
  Depending on how it was done, it might or might not affect consciousness.
  We can't do such an experiment.

 --- On Tue, 3/9/10, Stathis Papaioannou stath...@gmail.com wrote:
  Suppose box A contains a probabilistic mechanism that displays the right
 I/O behaviour 99% of the time. Would the consciousness of the system be
 perfectly normal until the box misbehaved ... ?

 I'd expect it to be.  As above, I'd treat it as a box with input.

 Now, as far as we know, there really is no such thing as true randomness.
  It's all down to initial conditions (which are certainly to be treated as
 input) or to quantum splitting (which is again deterministic).  I don't
 believe in true randomness.

 However, if true randomness is possible, then you'd have the same problem
 with Platonia.  In addition to having all of the deterministic Turing
 machines, you'd have all of the probabilistic Turing machines.  It is not an
 issue that bears on physicalism.










-- 
All those moments will be lost in time, like tears in rain.




Re: problem of size '10

2010-03-09 Thread Bruno Marchal


On 08 Mar 2010, at 06:46, Jack Mallah wrote:


--- On Tue, 3/2/10, David Nyman david.ny...@gmail.com wrote:
computationalist theory of mind would amount to the claim that  
consciousness supervenes only on realisations capable of  
instantiating this complete range of underlying physical activity  
(i.e. factual + counterfactual) in virtue of relevant physical laws.


Right (assuming physicalism).  Of course, implementing only part of  
the range of a computation that leads to consciousness might lead to  
the same consciousness, if it is the right part.


What do you mean by right part?





In the case of a mechanism with the appropriate arrangements for  
counterfactuals - i.e. one that in principle at least could be re- 
run in such a way as to elicit the counterfactual activity - the  
question of whether the relevant physical law is causal, or  
merely inferred, would appear to be incidental.


Causality is needed to define implementation of a computation  
because otherwise we only have correlations.  Correlations could be  
coincidental or due to a common cause (such as the running of a  
movie).


Is it physical causality? Or computational causality, which needs only  
a universal mathematical base (like arithmetic, combinators, ...)? I am  
trying to understand your physical versus platonic.






--- On Fri, 3/5/10, Stathis Papaioannou stath...@gmail.com wrote:
If the inputs to the remaining brain tissue are the same as they  
would have been normally then effectively you have replaced the  
missing parts with a magical processor, and I would say that the  
thought experiment shows that the consciousness must be replicated  
in this magical processor.


No, that's wrong. Having the right inputs could be due to luck  
(which is conceptually the cleanest way), or it could be due to pre- 
recording data from a previous simulation.  The only consciousness  
present is the partial one in the remaining brain.


I have no clue what you mean by partial consciousness. Suppose a  
brain is implementing a computation to which some pain can be  
associated for the person owning that brain. Suppose that neuron B is  
never used, and remains inactive, during that computation. Eliminating  
neuron B does not change the physical activity, but would change  
the counterfactuals. Would such an elimination of an inactive neuron  
alleviate the pain? But then the person would change their behavior  
(taking a less powerful painkiller, for example), and this despite the  
brain implementing the same computation? Continuing in that direction,  
we could build a partial zombie. Partial consciousness does not make  
sense to me.





Computationalism doesn't necessarily mean only digital  
computations, and it can include super-Turing machines that perform  
infinite steps in finite time.  The main characteristic of  
computationalism is its identification of consciousness with systems  
that causally solve initial-value math problems given the right  
mapping from system to formal states.


That is weird. Can you give a reference?



I should also note that if you _can't_ make a partial quantum brain,  
you probably don't have to worry about the things my argument is  
designed to attack, either, such as substituting _part_ of the brain  
with a movie (with no change in the rest) and invoking the 'fading  
qualia' argument.


Like in the movie graph? Look at MGA3 thread of last year.

Bruno, do you have the link?  I searched the list archive but the  
only references to fading qualia I could find are to the argument I  
mentioned, in which a brain is progressively substituted for by a  
movie, as Bishop does to attack computationalism.  It _is_ different  
than Chalmers, who substitutes components that _do_ have the right  
counterfactuals - Chalmers' argument is a defense of  
computationalism (albeit from a dualist point of view), not an  
attack on it.


Search the threads MGA1, MGA2 and MGA3. I don't use the expression  
fading qualia. I ask whether consciousness disappears or not. It is an  
old argument, already in my 1988 papers and earlier talks.


All of the 'fading qualia' arguments fail, for the reason I  
discussed in my PB paper: consciousness could be partial, not  
faded.  I am sure that yours is no different in that regard.


You are the one saying there is something wrong, so you are the one who  
should be sure about this and cite the passage you have refuted. What  
do you mean by partial consciousness? In what sense would this defeat  
the movie graph? You don't give any clues.



If consciousness supervenes on the physical realization of a  
computation, including the inactive part, it means you attach  
consciousness to an unknown physical phenomenon. It is a magical  
move which blurs the difficulty.


There is no new physics or magic involved in taking laws and  
counterfactuals into account, obviously.  So you seem to be just  
talking nonsense.


If consciousness supervenes on laws, which laws? The (physical)  

Re: problem of size '10

2010-03-09 Thread Stathis Papaioannou
On 9 March 2010 09:06, Jack Mallah jackmal...@yahoo.com wrote:

 If consciousness supervenes on the physical realization of a computation, 
 including the inactive part, it means you attach consciousness on an unknown 
 physical phenomenon. It is a magical move which blurs the difficulty.

 There is no new physics or magic involved in taking laws and counterfactuals 
 into account, obviously.  So you seem to be just talking nonsense.

 The only charitable interpretation of what you are saying that I can think of 
 is that, like Jesse Mazer, you don't think that details of situations that 
 don't occur could have any effect on consciousness.  Did you follow the 
 'Factual Implications Conjecture' (FIC)?  I do find it basically plausible, 
 and it's no problem for physicalism.

 For example, suppose we have a pair of black boxes, A and B.  The external 
 functioning of each box is simple: it takes a single bit as input, and as 
 output it gives a single bit which has the same value as the input bit.  So 
 they are trivial gates.  We can insert them into our computer with no 
 problem.  Suppose that in the actual run, A comes into play, while B does not.

 The thing about these boxes is, while their input-output relations are 
 simple, inside are very complex Rube Goldberg devices.  If you study 
 schematics of these devices, it would be very hard to predict their 
 functioning without actually doing the experiments.

 Now, if box A were to function differently, the physical activity in our 
 computer would have been different.  But there is a chain of causality that 
 makes it work.  If you reject the idea that such a system could play a role 
 in consciousness, I would characterize that as a variant of the well-known 
 Chinese Room argument.  I don't agree that it's a problem.

 It's harder to believe that the way in which box B functions could matter.  
 Since it didn't come into play, perhaps no one knows what it would have 
 done.  That's why I agree that the FIC is plausible.  However, in principle, 
 there would be no 'magic' involved even if the functioning of B did matter.  
 It's a part of the overall system, and the overall system implements the 
 computation.

But the consciousness of the system would be the same *whatever* the
mechanism inside box A, wouldn't it? Suppose box A contains a
probabilistic mechanism that displays the right I/O behaviour 99% of
the time. Would the consciousness of the system be perfectly normal
until the box misbehaved, or would the consciousness of the system be
(somehow) 1% diminished even while the box was functioning
appropriately? The latter idea seems to me to invoke magic, as if the
system knows there is a dodgy box in there even if there is no
evidence of it.
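
For concreteness, a minimal Python sketch of the two boxes (toy code 
with hypothetical names, not from anyone's paper): externally each is a 
trivial gate whose output bit equals its input bit, internally they 
differ, only box A is exercised on the factual run, and the 99% box 
from the question above is included as well.

import random

def box_a(bit):
    # stand-in for a complicated Rube Goldberg mechanism whose net I/O
    # effect is just the identity on one bit
    x = bit
    for _ in range(6):
        x = 1 - x                     # six inversions: back to the input bit
    return x

def box_b(bit):
    # never called on the factual run; its internals could be anything
    return bit

def box_a_probabilistic(bit):
    # the 99% variant from above: right I/O behaviour only 99% of the time
    return bit if random.random() < 0.99 else 1 - bit

def actual_run(input_bit):
    # on the factual run only box A comes into play; box B sits unused
    return box_a(input_bit)

print(actual_run(0), actual_run(1))   # 0 1 -- same I/O as a bare wire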


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-08 Thread Stathis Papaioannou
On 8 March 2010 16:46, Jack Mallah jackmal...@yahoo.com wrote:

 --- On Fri, 3/5/10, Stathis Papaioannou stath...@gmail.com wrote:
 If the inputs to the remaining brain tissue are the same as they would have 
 been normally then effectively you have replaced the missing parts with a 
 magical processor, and I would say that the thought experiment shows that 
 the consciousness must be replicated in this magical processor.

 No, that's wrong. Having the right inputs could be due to luck (which is 
 conceptually the cleanest way), or it could be due to pre-recording data from 
 a previous simulation.  The only consciousness present is the partial one in 
 the remaining brain.

In the original fading qualia thought experiment the artificial
neurons could be considered black boxes, the consciousness status of
which is unknown. The conclusion is that if the artificial neurons
lack consciousness, then the brain would be partly zombified, which is
absurd. I think this holds *whatever* is in the black boxes:
computers, biological tissue, a demon pulling strings or nothing. It
is of course extremely unlikely that the rest of the brain would
behave normally if the artificial neurons were in fact empty boxes,
but if it did, then intelligent behaviour would be normal and
consciousness would also be normal. Even if the brain were completely
removed and, miraculously, the empty-headed body carried on normally,
passing the Turing test and so on, then it would be conscious. This is
simply another way of saying that philosophical zombies are
impossible: whatever is going on inside the putative zombie's head, if
it reproduces the I/O behaviour of a human, it will have the mind of a
human.

The requirement that a computer be able to handle the counterfactuals
in order to be conscious seems to have been brought in to make
computationalists feel better about computationalism. Certainly, a
computer that behaves randomly or rigidly follows one pathway is not a
very useful computer, but why should that render any computations it
does correctly perform invalid or, if still valid as computations,
incapable of giving rise to consciousness? Brains are all
probabilistic in that disaster could at any point befall them causing
them to deviate widely from normal behaviour or else prevent them from
deviating at all from a rigidly determined pathway, and I don't see in
either case how their consciousness could possibly be affected as a
result.

 computationalism is only a subset of functionalism.

 I used to think so but the terms don't quite mean what they sound like they 
 should. It's a common misconception that functionalism means 
 computationalism generalized to include analog and noncomputable systems.

 Functionalism as philosophers use it focuses on input and output.  It holds 
 that any system which behaves the same in terms of i/o and which acts the 
 same in terms of memory effects has the same consciousness.  There are 
 different ways to make this more precise, and I believe that computationalism 
 is one way, but it is not the only way.  For example, some functionalists 
 would claim that a 'swampman' who spontaneously formed in a swamp due to 
 random thermal motion of atoms, but who is physically identical to a human 
 and coincidentally speaks perfect English, would not be conscious because he 
 didn't have the right inputs.  I obviously reject that; 'swampman' would be a 
 normal human.

 Computationalism doesn't necessarily mean only digital computations, and it 
 can include super-Turing machines that perform infinite steps in finite time. 
  The main characteristic of computationalism is its identification of 
 consciousness with systems that causally solve initial-value math problems 
 given the right mapping from system to formal states.

It's perhaps just a matter of definition but I would have thought the
requirement for a hypercomputer was not compatible with
computationalism, but potentially could still come under
functionalism.


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-08 Thread Bruno Marchal


On 08 Mar 2010, at 10:08, Stathis Papaioannou wrote:


It's perhaps just a matter of definition but I would have thought the
requirement for a hypercomputer was not compatible with
computationalism, but potentially could still come under
functionalism.


Putnam(*) is responsible for introducing functionalism, and he defines  
it explicitly in terms of emulability by Turing machines.
The only difference with computationalism is that computationalism  
explicitly refers to the (unknown) level of substitution, something  
which remains implicit in Putnam's paper.


Now, even if UDA is simpler with computationalism (or comp + oracle), AUDA,  
and thus machine theology, works for a vast set of weakenings of  
computationalism (from machines with oracles to abstract, highly  
non-effective notions of probability defined in terms of subsets of  
models of theories, as in Solovay's paper).



(*) PUTNAM H., 1960, Minds and Machines, in Dimensions of Mind: A  
Symposium, Sidney Hook (Ed.), New York University Press, New York.  
Reprinted in Anderson A. R. (Ed.), 1964.


ANDERSON A.R. (Ed.), 1964, Minds and Machines, Prentice Hall Inc., New  
Jersey.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: problem of size '10

2010-03-07 Thread Bruno Marchal


On 06 Mar 2010, at 23:54, Brent Meeker wrote:


On 3/6/2010 5:41 AM, Bruno Marchal wrote:


On 06 Mar 2010, at 03:02, Brent Meeker wrote:


On 3/5/2010 11:58 AM, Bruno Marchal wrote:



In this list I have already well explained the seven steps of UDA, and
one difficulty remains in step 8, which is the difference between
a computation and a description of a computation. Due to the static
character of Platonia, some believe it is the same thing, but it is
not, and this is hard to explain. That hardness is reflected in the
AUDA: the 'translation' of UDA in arithmetic. The subtlety is that,
again, the existence of a computation is true if and only if the
existence of a description of the computation is true, but that is true
at the level of G*, and not at the G level, so that such an equivalence
is not directly available, and it does not allow us to confuse a
computation (a mathematical relation among numbers) and a
description of a computation (a number).


This mixing of existence and true in the context of a logic confuses
me. I understand you take a Platonic view of arithmetic so that all
propositions of arithmetic are either true or false, even though most
of them are not provable (from any given finite axioms), so
true=/=provable.



A computation is not true or false. Only a proposition can be true or
false. But the existence of a computation is a proposition.

I was talking about the existence of a computation. This can be true or
false. Let c be a description of a computation.

The following can be true or false:

c describes a computation


That's true ex hypothesi.


OK.





or c is a computation

If I interpret c as a definite description, i.e. name, that's true.



OK.




Otherwise it's false.



Indeed.





or c is the Gödel number of a computation


I suppose that depends on the form used in the description c.  If  
the Godel numbering scheme is defined and then c is described as  
being a certain number in that scheme it's true.


Yes.




 Otherwise it's false.



Ex(x = c & c describes a computation) == the computation c exists.

OK?

To say that something exists, is the same as saying that an existential
proposition is true.




But what does it mean to say a computation is true at one level and
not another? Does it mean provable? Or is there some other meaning
of true relative to a logic?



There is only one meaning of true, in this arithmetical (digital)  
frame.


Here by computation I meant a finite computation (to make things
easier). To be a (description of a) finite computation is a decidable
predicate. You can decide in a finite time if c is a computation or  
not.


So if a particular computation c exists,


So I should think of c in the above sentence as a description -  
distinct from the computation itself.  If I informally refer to  
computing the largest prime less than 100, is that an example of c  
or is it an equivalence class of many different c's.


It is very difficult to explain this, especially by mail. We have the  
same difficulty with numbers, and actually with all mathematical  
notions, both when we explain a notion to a machine and when we explain  
it to a human.
For example, the number 4 is different from "4", "four", "2+2", etc.  
But to talk about the number 4 I have to use a description, and then,  
automatically, we introduce an ambiguity. With numbers, we manage  
that difficulty very well, by use and practice. But when we talk  
about the conceptual difference between a notion and its  
representation, it is harder to explain, given that we have to go from  
a description to a description of that description, and hope the  
reader will abstract from the first description, which in this  
context necessitates some familiarity or training.
In particular, if I say "let c be a computation", I am referring to the  
computation itself (an abstract immaterial relation between numbers,  
different from any representation of it). It is certainly not an  
equivalence class of descriptions. In this case c refers to the real  
thing.
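
A toy Python illustration of the notion/representation distinction (for  
whatever it is worth; Python's int object can only stand in, imperfectly,  
for the abstract number):

n = 4                                  # the number (as far as the language can represent it)
descriptions = ["4", "four", "2+2"]    # three descriptions of that number
print(n == descriptions[0])            # False: the number is not the numeral
print(n == eval(descriptions[2]))      # True: evaluating the description "2+2"
                                       # recovers the number it describes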








PA can prove that fact, and
reciprocally, if PA proves that fact then the computation c exists.
PA, or any sound Löbian machine.

Let us write k for the proposition c exists. What I just said can be
written c -> Bc, and Bc -> c, i.e. c <-> Bc.


What happened to k?


I should have written: k -> Bk, and Bk -> k, i.e. k <-> Bk.
Or, I should have directly committed the language trick: let c be the  
proposition "c exists".







I remind you that G is the complete logic of provability, PROVABLE by
the machine; and G* is the complete logic of provability, TRUE for the
machine. As you notice, PROVABLE is different from TRUE, and those two
logics are different. Given that we restrict ourselves to correct machines,
we have that G is strictly included in G*.

What I said is that G* proves c <-> Bc (so the existence of a
computation is equivalent to the provability of the existence of a
computation).

But G does not prove c <-> Bc.

Re: problem of size '10

2010-03-07 Thread Jack Mallah
--- On Tue, 3/2/10, David Nyman david.ny...@gmail.com wrote:
 computationalist theory of mind would amount to the claim that consciousness 
 supervenes only on realisations capable of instantiating this complete range 
 of underlying physical activity (i.e. factual + counterfactual) in virtue of 
 relevant physical laws.

Right (assuming physicalism).  Of course, implementing only part of the range 
of a computation that leads to consciousness might lead to the same 
consciousness, if it is the right part.

 In the case of a mechanism with the appropriate arrangements for 
 counterfactuals - i.e. one that in principle at least could be re-run in 
 such a way as to elicit the counterfactual activity - the question of whether 
 the relevant physical law is causal, or merely inferred, would appear to be 
 incidental.

Causality is needed to define implementation of a computation because otherwise 
we only have correlations.  Correlations could be coincidental or due to a 
common cause (such as the running of a movie).

--- On Fri, 3/5/10, Stathis Papaioannou stath...@gmail.com wrote:
 If the inputs to the remaining brain tissue are the same as they would have 
 been normally then effectively you have replaced the missing parts with a 
 magical processor, and I would say that the thought experiment shows that the 
 consciousness must be replicated in this magical processor. 

No, that's wrong. Having the right inputs could be due to luck (which is 
conceptually the cleanest way), or it could be due to pre-recording data from a 
previous simulation.  The only consciousness present is the partial one in the 
remaining brain.

 computationalism is only a subset of functionalism.

I used to think so but the terms don't quite mean what they sound like they 
should. It's a common misconception that functionalism means 
computationalism generalized to include analog and noncomputable systems.

Functionalism as philosophers use it focuses on input and output.  It holds 
that any system which behaves the same in terms of i/o and which acts the same 
in terms of memory effects has the same consciousness.  There are different 
ways to make this more precise, and I believe that computationalism is one way, 
but it is not the only way.  For example, some functionalists would claim that 
a 'swampman' who spontaneously formed in a swamp due to random thermal motion 
of atoms, but who is physically identical to a human and coincidentally speaks 
perfect English, would not be conscious because he didn't have the right 
inputs.  I obviously reject that; 'swampman' would be a normal human.

Computationalism doesn't necessarily mean only digital computations, and it 
can include super-Turing machines that perform infinite steps in finite time.  
The main characteristic of computationalism is its identification of 
consciousness with systems that causally solve initial-value math problems 
given the right mapping from system to formal states.

--- On Fri, 3/5/10, Charles charlesrobertgood...@gmail.com wrote:
 The only fundamental difficulty I can see with this is if the brain actually 
 uses quantum computation, as suggested by some evidence that photosynthesis 
 does (quoted by Bruno in another thread) - in which case it might be 
 impossible, even in principle, to reproduce the activity of the rest of the 
 brain (I'm not sure whether it would, but it seems a lot more likely).

It seems very unlikely that the brain uses QC for neural processes, which are 
based on electrical and chemical signals which decohere rapidly.  Also, I 
wouldn't make too much of the hype about photosynthesis using it - that seems 
an exaggeration; you can't make a general purpose quantum computer just by 
having some waves interfere.  Protein folding might use it in a sense but again 
nothing that could be used for a real QC.

But, that aside, even a quantum computer could be made partial.  I think that 
due to the no-signalling condition, the partial QC's interaction with the other 
part amounts to some combination of unitary operations which can be perfomed on 
the partial QC, and entanglement-induced decoherence.  You would still have to 
have something entangled with the partial QC but it wouldn't have to perform 
the computations associated with the missing parts if you perform the right 
operations on the remaining parts and know when to entangle or recohere things, 
I think.

In any case, a normal classical computer could simulate a QC - which should be 
good enough for a computationalist - and you could make the simulation partial 
in the normal way.
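
As a minimal illustration of that kind of classical simulation (a sketch 
assuming numpy; the gates below are the standard ones, and nothing here 
is specific to brains): the full state of a 2-qubit circuit is just a 
length-4 array of amplitudes, and gates are unitary matrices acting on it.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                    # start in |00>
state = np.kron(H, I) @ state                     # Hadamard on the first qubit
state = CNOT @ state                              # entangle the two qubits
print(np.round(state, 3))                         # [0.707 0. 0. 0.707] -- a Bell state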

I should also note that if you _can't_ make a partial quantum brain, you 
probably don't have to worry about the things my argument is designed to 
attack, either, such as substituting _part_ of the brain with a movie (with no 
change in the rest) and invoking the 'fading qualia' argument.




  


Re: problem of size '10

2010-03-06 Thread Bruno Marchal


On 06 Mar 2010, at 03:02, Brent Meeker wrote:


On 3/5/2010 11:58 AM, Bruno Marchal wrote:



In this list I have already well explained the seven steps of UDA,  
and one difficulty remains in step 8, which is the difference  
between a computation and a description of a computation. Due to the  
static character of Platonia, some believe it is the same thing,  
but it is not, and this is hard to explain. That hardness is  
reflected in the AUDA: the 'translation' of UDA in arithmetic. The  
subtlety is that, again, the existence of a computation is true if  
and only if the existence of a description of the computation is true,  
but that is true at the level of G*, and not at the G level, so  
that such an equivalence is not directly available, and it does not  
allow us to confuse a computation (a mathematical relation among  
numbers) and a description of a computation (a number).


This mixing of existence and true in the context of a logic confuses  
me.  I understand you take a Platonic view of arithmetic so that all  
propositions of arithmetic are either true or false, even though  
most of them are not provable (from any given finite axioms), so  
true=/=provable.



A computation is not true or false. Only a proposition can be true or  
false. But the existence of a computation is a proposition.


I was talking about the existence of a computation. This can be true  
or false. Let c be a description of a computation.


The following can be true or false:

c describes a computation or c is a computation or c is the Gödel  
number of a computation


Ex(x = c & c describes a computation) == the computation c exists.

OK?

To say that something exists, is the same as saying that an  
existential proposition is true.




But what does it mean to say a computation is true at one level and  
not another?  Does it mean provable?  Or is there some other  
meaning of true relative to a logic?



There is only one meaning of true, in this arithmetical (digital) frame.

Here by computation I meant a finite computation (to make things  
easier). To be a (description of a) finite computation is a decidable  
predicate. You can decide in a finite time if c is a computation or not.
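
To illustrate the decidability claim with a toy encoding (a hypothetical  
sketch, not the formal setting of AUDA): take a "description of a finite  
computation" to be a small register-machine program together with the  
list of configurations it is claimed to pass through; checking that the  
list really is a halting computation of that program is a finite,  
mechanical verification.

def step(program, conf):
    pc, regs = conf
    op, *args = program[pc]
    if op == "inc":
        r = args[0]
        return (pc + 1, {**regs, r: regs[r] + 1})
    if op == "halt":
        return None
    raise ValueError("unknown instruction")

def is_finite_computation(c):
    program, trace = c
    for i in range(len(trace) - 1):
        if step(program, trace[i]) != trace[i + 1]:    # each step must follow from the last
            return False
    return step(program, trace[-1]) is None            # and the final configuration halts

prog = [("inc", "x"), ("inc", "x"), ("halt",)]
trace = [(0, {"x": 0}), (1, {"x": 1}), (2, {"x": 2})]
print(is_finite_computation((prog, trace)))            # True, decided in finite time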


So if a particular computation c exists, PA can prove that fact, and  
reciprocally, if PA proves that fact then the computation c exists.

PA, or any sound Löbian machine.

Let us write k for the proposition c exists. What I just said can be  
written c -> Bc, and Bc -> c, i.e. c <-> Bc.


I remind you that G is the complete logic of provability, PROVABLE by  
the machine; and G* is the complete logic of provability, TRUE for the  
machine. As you notice, PROVABLE is different from TRUE, and those two  
logics are different. Given that we restrict ourselves to correct  
machines, we have that G is strictly included in G*.


What I said is that G* proves c <-> Bc (so the existence of a  
computation is equivalent to the provability of the existence of a  
computation).


But G does not prove c <-> Bc. G does prove c -> Bc (the existence  
of a computation entails the provability of the existence of a  
computation), but G does not prove Bc -> c.  G does not prove that the  
provability of the existence of a computation entails the existence of  
that computation.


c <-> Bc belongs to the corona G* minus G. It is true, but not  
provable by the machine.


OK?

Once we fix a Löbian machine, we keep the same notion of truth (the first  
hypostasis), and the same arithmetical propositions will be the provable  
ones. But according to the point of view chosen (the other hypostases),  
they obey different logics.
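
In summary (a sketch of the standard facts being appealed to, with p  
ranging over Sigma_1 sentences such as "the computation c exists", and  
B read as provability by the machine):

\begin{align*}
& p \rightarrow Bp && \text{provable by the machine (provable $\Sigma_1$-completeness), hence in } G\\
& Bp \rightarrow p && \text{true for a sound machine but not provable by it, hence in } G^{*}\setminus G\\
& p \leftrightarrow Bp && \text{therefore in } G^{*} \text{ but not in } G
\end{align*}

(For a false Sigma_1 sentence p, Bp -> p is just ~Bp, a consistency  
statement which no consistent machine can prove, by Gödel's second  
incompleteness theorem; that is why the middle line escapes G.)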


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: problem of size '10

2010-03-06 Thread Brent Meeker

On 3/6/2010 5:41 AM, Bruno Marchal wrote:


On 06 Mar 2010, at 03:02, Brent Meeker wrote:


On 3/5/2010 11:58 AM, Bruno Marchal wrote:



In this list I have already well explained the seven steps of UDA, and
one difficulty remains in step 8, which is the difference between
a computation and a description of a computation. Due to the static
character of Platonia, some believe it is the same thing, but it is
not, and this is hard to explain. That hardness is reflected in the
AUDA: the 'translation' of UDA in arithmetic. The subtlety is that,
again, the existence of a computation is true if and only if the
existence of a description of the computation is true, but that is true
at the level of G*, and not at the G level, so that such an equivalence
is not directly available, and it does not allow us to confuse a
computation (a mathematical relation among numbers) and a
description of a computation (a number).


This mixing of existence and true in the context of a logic confuses
me. I understand you take a Platonic view of arithmetic so that all
propositions of arithmetic are either true or false, even though most
of them are not provable (from any given finite axioms), so
true=/=provable.



A computation is not true or false. Only a proposition can be true or
false. But the existence of a computation is a proposition.

I was talking about the existence of a computation. This can be true or
false. Let c be a description of a computation.

The following can be true or false:

c describes a computation


That's true ex hypothesi.

or c is a computation

If I interpret c as a definite description, i.e. name, that's true. 
Otherwise it's false.



or c is the Gödel number of a computation


I suppose that depends on the form used in the description c.  If the 
Godel numbering scheme is defined and then c is described as being a 
certain number in that scheme it's true.  Otherwise it's false.




Ex(x = c & c describes a computation) == the computation c exists.

OK?

To say that something exists, is the same as saying that an existential
proposition is true.




But what does it mean to say a computation is true at one level and
not another? Does it mean provable? Or is there some other meaning
of true relative to a logic?



There is only one meaning of true, in this arithmetical (digital) frame.

Here by computation I meant a finite computation (to make things
easier). To be a (description of a) finite computation is a decidable
predicate. You can decide in a finite time if c is a computation or not.

So if a particular computation c exists,


So I should think of c in the above sentence as a description - distinct 
from the computation itself.  If I informally refer to computing the 
largest prime less than 100, is that an example of c or is it an 
equivalence class of many different c's.



PA can prove that fact, and
reciprocally, if PA proves that fact then the computation c exists.
PA, or any sound Löbian machine.

Let us write k for the proposition c exists. What I just said can be
written c -> Bc, and Bc -> c, i.e. c <-> Bc.


What happened to k?



I recall you that G is the complete logic of provability, PROVABLE by
the machine; and G* is the complete logic of provability, TRUE for the
machine. As you notice PROVABLE is different from TRUE, and those two
logics are different. Given that we restrict ourself on correct machine,
we have that G is strictly included in G*.

What I said is that G* proves c <-> Bc (so the existence of a
computation is equivalent to the provability of the existence of a
computation).

But G does not prove c <-> Bc. G does prove c -> Bc (the existence of a
computation entails the provability of the existence of a computation),



Certainly for finite computations since you can just perform the 
computation to prove it exists.



but G does not prove Bc -> c. G does not prove that the provability of
the existence of a computation entails the existence of that computation.


So in G, (Bc & ~c) does not lead to a contradiction.  Can you give a 
simple example of such a c in arithmetic?


Brent




c <-> Bc belongs to the corona G* minus G. It is true, but not
provable by the machine.

OK?
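
In the notation of this exchange (B is the provability box of the machine, and
c is read as the Sigma_1 arithmetical sentence asserting that the given finite
computation exists -- that reading is assumed here, following the earlier
remark that being a finite computation is a decidable predicate), the relations
just stated can be collected in one place:

\[
G \vdash c \rightarrow Bc, \qquad
G \nvdash Bc \rightarrow c, \qquad
G^{*} \vdash c \leftrightarrow Bc,
\]

so the equivalence \(c \leftrightarrow Bc\) lies in the corona
\(G^{*} \setminus G\): true for the machine, but not provable by it.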

Once we fix a Löbian machine, we keep the same notion of truth (the first
hypostasis), and the same arithmetical propositions will be the provable
ones. But according to the point of view chosen (the other hypostases),
they obey different logics.

Bruno

http://iridia.ulb.ac.be/~marchal/








Re: problem of size '10

2010-03-05 Thread Bruno Marchal


On 04 Mar 2010, at 22:59, Jack Mallah wrote:


Bruno, I hope you feel better.


Thanks.



My quarrel with you is nothing personal.


Why would I think so?
Now I am warned.




--- Bruno Marchal marc...@ulb.ac.be wrote:

Jack Mallah wrote:
Bruno, you don't have to assume any 'prescience'; you just have to  
assume that counterfactuals count.  No one but you considers that  
'prescience' or any kind of problem.


This would lead to fading qualia in the case of progressive  
substitution from the Boolean Graph to the movie graph.


I thought you said you don't use the 'fading qualia' argument (see  
below), which in any case is invalid as my partial brain paper  
shows.  So, you are wrong.



It is a different fading qualia argument, older than and different from  
Chalmers'. It is explained in my PhD thesis and an earlier article, but  
also in MGA3 on this list, and in a paper not yet submitted. Do you  
agree with the definition I give of the first person and third person  
in teleportation arguments? I mean I have no clue what you are missing.


You confuse MGA and Maudlin's argument. If consciousness supervenes on  
the physical realization of a computation, including the inactive  
part, it means you attach consciousness to an unknown physical  
phenomenon.
It is a magical move which blurs the difficulty. Either the physical  
counterfactualness is Turing emulable, or not. If it is, we can  
emulate it at some level, and you will have to make consciousness  
supervene on something not Turing emulable to keep the physical  
supervenience.






gradually replace the components of the computer (which have the  
standard counterfactual (if-then) functioning) with components  
that only play out a pre-recorded script or which behave  
correctly by luck.


You could then invoke the 'fading qualia' argument (qualia could  
plausibly not vanish either suddenly or by gradually fading as  
the replacement proceeds) to argue that this makes no difference  
to the consciousness.  My partial brain paper shows that the  
'fading qualia' argument is invalid.


I am not using the 'fading qualia' argument.


Then someone else on the list must have brought it up at some  
point.  In any case, it was the only interesting argument in favor  
of your position, which was not trivially obviously invalid.  My  
PB paper shows that it is invalid though.


?


What do you mean by '?'?




You may cite the paper then, and say where things go wrong. I provide  
a deductive argument. It is a proof, if you prefer. It is not easy,  
but most who take the time to study it have little problem with  
the first seven steps, and eventually ask precise questions about the  
8th one, which needs some understanding of what a computation is, in  
the mathematical sense of the term. The key consists in understanding  
the difference that exists, even in Platonia, between a 'genuine'  
computation and a mere description of a computation.






I guess by 'physical supervenience' you mean supervenience on  
physical activity only.


Not at all. In the comp theory, it means supervenience on the  
physical realization of a computation.


So, it includes supervenience on the counterfactuals?



But physical is taken in the agnostic sense. It is whatever is  
(Turing) universal and stable enough in my neighborhood so that I can  
bet my immaterial self and its immaterial (mathematical) computation or  
processing will go through a functional substitution.
Eventually, that physical realization is shown to be a sum over an  
infinity of computations realized in elementary arithmetic.





 If so, the movie obviously doesn't have the right counterfactuals,



Of course. Glad you agree that the movie has no private experience.  
Most who want to block the UD argument pretend that the movie is  
conscious (but this leads to other absurdities).






so your MGA fails.


On the contrary, that was the point. It was a reductio ad absurdum. If  
consciousness supervenes, in real time and place, on a physical  
activity realizing a computation, and this qua computatio, then  
consciousness supervenes on the movie (MGA2). But this is indeed  
absurd, and so consciousness does not supervene on the physical  
activity realizing the computation, but on the computation itself (and  
then on all computations, by first person indeterminacy). This also solves  
Maudlin's difficulty, given that Maudlin finds it weird that  
consciousness supervenience needs the presence of physically inactive  
entities.







I see nothing nontrivial in your arguments.


Nice! You agree with the argument then. Or what?





  Computationalism assumes supervenience on both physical activity  
and physical laws (aka counterfactuals).


? You evacuate the computation?


I have no idea what you mean by that.  Computations are implemented  
based on both activity and counterfactuals, which is the same as  
saying they supervene on both.


Then you have to provide a physical definition of what are 

Re: problem of size '10

2010-03-05 Thread Charles
On Mar 5, 8:43 am, Jack Mallah jackmal...@yahoo.com wrote:

 and in any case is a thought experiment.

The term seems particularly appropriate in this case!

Charles




Re: problem of size '10

2010-03-05 Thread Charles
 --- On Wed, 3/3/10, Stathis Papaioannou stath...@gmail.com wrote:

 I'm not sure if you overlooked it but the key condition in my paper is that 
 the inputs to the remaining brain are identical to what they would have been 
 if the whole brain were present.  Thus, the neural activity in the partial 
 brain is by definition identical to what would have occurred in the 
 corresponding part of a whole brain.  It is of course grossly implausible 
 that this could be done in practice for a real biological brain (for one 
 thing, you'd pretty much have to know in advance the microscopic details of 
 everything that would have gone on in the removed part of the brain, or else 
 guess and get incredibly lucky), but it presents no difficulties in principle 
 for a digital simulation,

The only fundamental difficulty I can see with this is if the brain
actually uses quantum computation, as suggested by some evidence that
photosynthesis does (quoted by Bruno in another thread) - in which
case it might be impossible, even in principle, to reproduce the
activity of the rest of the brain (I'm not sure whether it would be,
but it seems a lot more likely).

Charles




Re: problem of size '10

2010-03-05 Thread Brent Meeker

On 3/5/2010 11:58 AM, Bruno Marchal wrote:


In this list I have already well explained the seven steps of UDA, and 
one difficulty remains in step 8, which is the difference between 
a computation and a description of a computation. Due to the static 
character of Platonia, some believe it is the same thing, but it is 
not, and this is hard to explain. That hardness is reflected in the 
AUDA: the 'translation' of UDA in arithmetic. The subtlety is that, 
again, the existence of a computation is true if and only if the 
existence of a description of the computation is true, but that is true 
at the level of G*, and not at the level of G, so that such an equivalence 
is not directly available, and it does not allow one to confuse a 
computation (a mathematical relation among numbers) and a description 
of a computation (a number).


This mixing of existence and truth in the context of a logic confuses 
me.  I understand you take a Platonic view of arithmetic, so that all 
propositions of arithmetic are either true or false, even though most of 
them are not provable (from any given finite set of axioms), so 
true =/= provable.  But what does it mean to say a computation is true at 
one level and not another?  Does it mean provable?  Or is there some 
other meaning of true relative to a logic?


Brent




Re: problem of size '10

2010-03-05 Thread Brent Meeker

On 3/5/2010 1:29 PM, Charles wrote:

--- On Wed, 3/3/10, Stathis Papaioannoustath...@gmail.com  wrote:

I'm not sure if you overlooked it but the key condition in my paper is that the 
inputs to the remaining brain are identical to what they would have been if the 
whole brain were present.  Thus, the neural activity in the partial brain is by 
definition identical to what would have occurred in the corresponding part of a 
whole brain.  It is of course grossly implausible that this could be done in 
practice for a real biological brain (for one thing, you'd pretty much have to 
know in advance the microscopic details of everything that would have gone on 
in the removed part of the brain, or else guess and get incredibly lucky), but 
it presents no difficulties in principle for a digital simulation,
 

The only fundamental difficulty I can see with this is if the brain
actually uses quantum computation, as suggested by some evidence that
photosynthesis does (quoted by Bruno in another thread) - in which
case it might be impossible, even in principle, to reproduce the
activity of the rest of the brain (I'm not sure whether it would be,
but it seems a lot more likely).

Charles

   
That would keep you from cloning the state of the brain, but it should 
still be possible to reproduce the functionality.  So it would be like 
replacing part of your brain with that same part from some other time; 
you'd lose memories, or have them scrambled, but it wouldn't affect 
whether or not you had qualia.


Brent




Re: problem of size '10

2010-03-04 Thread Jack Mallah
--- On Wed, 3/3/10, Stathis Papaioannou stath...@gmail.com wrote:
 Jack Mallah jackmal...@yahoo.com wrote:
  For partial replacement scenarios, where part of a brain has 
  counterfactuals and the rest doesn't, see my partial brain paper: 
  http://cogprints.org/6321/
 
 I've finally come around to reading this paper. You may or may not be aware 
 that there is a condition called Anton's syndrome in which some patients who 
 are blind as a result of a lesion to their occipital cortex are unaware that 
 they are blind. It is not a matter of denial: the patients honestly believe 
 they have normal vision, and confabulate when asked to describe things placed 
 in front of them. They are deluded about their qualia, in other words.

Interesting, Stathis. I hadn't heard of that before. Despite the superficial 
similarity, though, it's very different from the partial brains I consider in 
the paper.

 similarly in your paper where you consider a gradual removal of brain tissue. 
 It would have to be very specific surgery to produce the sort of delusional 
 state you describe.

I'm not sure if you overlooked it but the key condition in my paper is that the 
inputs to the remaining brain are identical to what they would have been if the 
whole brain were present.  Thus, the neural activity in the partial brain is by 
definition identical to what would have occurred in the corresponding part of a 
whole brain.  It is of course grossly implausible that this could be done in 
practice for a real biological brain (for one thing, you'd pretty much have to 
know in advance the microscopic details of everything that would have gone on 
in the removed part of the brain, or else guess and get incredibly lucky), but 
it presents no difficulties in principle for a digital simulation, and in any 
case is a thought experiment.





  




Re: problem of size '10

2010-03-04 Thread Jack Mallah
Bruno, I hope you feel better.  My quarrel with you is nothing personal.

--- Bruno Marchal marc...@ulb.ac.be wrote:
 Jack Mallah wrote:
  Bruno, you don't have to assume any 'prescience'; you just have to assume 
  that counterfactuals count.  No one but you considers that 'prescience' or 
  any kind of problem.
 
 This would lead to fading qualia in the case of progressive substitution from 
 the Boolean Graph to the movie graph.

I thought you said you don't use the 'fading qualia' argument (see below), 
which in any case is invalid as my partial brain paper shows.  So, you are 
wrong.

  gradually replace the components of the computer (which have the standard 
  counterfactual (if-then) functioning) with components that only play out 
  a pre-recorded script or which behave correctly by luck.
  
  You could then invoke the 'fading qualia' argument (qualia could 
  plausibly not vanish either suddenly or by gradually fading as the 
  replacement proceeds) to argue that this makes no difference to the 
  consciousness.  My partial brain paper shows that the 'fading qualia' 
  argument is invalid.
  
  I am not using the 'fading qualia' argument.
  
  Then someone else on the list must have brought it up at some point.  In 
  any case, it was the only interesting argument in favor of your position, 
  which was not trivially obviously invalid.  My PB paper shows that it is 
  invalid though.
 
 ?

What do you mean by '?'?

  I guess by 'physical supervenience' you mean supervenience on physical 
  activity only.
 
 Not at all. In the comp theory, it means supervenience on the physical 
 realization of a computation.

So, it includes supervenience on the counterfactuals?  If so, the movie 
obviously doesn't have the right counterfactuals, so your MGA fails.  I see 
nothing nontrivial in your arguments.

   Computationalism assumes supervenience on both physical activity and 
 physical laws (aka counterfactuals).
 
 ? You evacuate the computation?

I have no idea what you mean by that.  Computations are implemented based on 
both activity and counterfactuals, which is the same as saying they supervene 
on both.

 Consciousness does not arise from the movie, because the movie has the wrong 
 physical laws.  There is nothing about that that has anything to do with 
 'prescience'.
 
 This is not computationalism.

Of course it is.  Any mainstream computationalist agrees that the right 
counterfactuals (aka the right 'physical' laws) are needed.  Certainly Chalmers 
would agree.  What else would you call this position?

(I should note that when I say 'physical laws' it might instead be Platonic 
laws, if Platonic stuff exists in the right way.  I say 'physical' for short.  
I am agnostic on whether Platonic stuff exists in a strong enough sense.  In 
any case I maintain that it *could* be physical, as far as we know.)

  Bruno, try to read what I write instead of putting in your own meanings to 
  my words.
 
 I try politely to make sense of what you say by interpreting your 
 terms favorably.

There is no polite way to say this: c'est merde - it's crap.  You tried to twist my words 
towards your position.  Don't.

 Show the error, then.

I have already done so (for MGA): You claim that taking counterfactuals into 
account amounts to assuming 'prescience' and is thus implausible, but that's 
NOT true. Using counterfactuals/laws is how computation is defined.

Your repeated claims that the error has not been pointed out are standard 
crackpot behavior.

 It helps to be agnostic on primitive matter before trying to understand the 
 reasoning.

In that case I should be the perfect candidate, being that I am agnostic on 
Platonism.  Your arguments don't sway me because they don't make any sense.

Remember, I came to this list because like many others here I thought up the 
'everything that exists mathematically exists in the same way we do' idea by 
myself, and only found out online that others had thought of it too.  So I'm 
not prejudiced against it.  I just don't know if it's true, and I think it's 
important not to jump to conclusions.  Your 'work' has had no effect on my 
views on that.




  




Re: problem of size '10

2010-03-04 Thread Stathis Papaioannou
On 5 March 2010 06:43, Jack Mallah jackmal...@yahoo.com wrote:

 similarly in your paper where you consider a gradual removal of brain 
 tissue. It would have to be very specific surgery to produce the sort of 
 delusional state you describe.

 I'm not sure if you overlooked it but the key condition in my paper is that 
 the inputs to the remaining brain are identical to what they would have been 
 if the whole brain were present.  Thus, the neural activity in the partial 
 brain is by definition identical to what would have occurred in the 
 corresponding part of a whole brain.  It is of course grossly implausible 
 that this could be done in practice for a real biological brain (for one 
 thing, you'd pretty much have to know in advance the microscopic details of 
 everything that would have gone on in the removed part of the brain, or else 
 guess and get incredibly lucky), but it presents no difficulties in principle 
 for a digital simulation, and in any case is a thought experiment.

If the inputs to the remaining brain tissue are the same as they would
have been normally then effectively you have replaced the missing
parts with a magical processor, and I would say that the thought
experiment shows that the consciousness must be replicated in this
magical processor. Functionalism is sometimes used interchangeably
with computationalism, but computationalism is only a subset of
functionalism. It could be, for example, that the brain is not
computable because it uses exotic physics of the sort postulated by
Penrose. We would then fail in our efforts to make a computer that
behaves like a human. However, we could succeed if we used
non-computational components. If we replace a neuron with a demon that
reproduces its I/O behaviour, the behaviour of the whole brain will be
unchanged and its consciousness will also be unchanged. Functionalism
is saved, even if computationalism is lost.

The main problem I have with fading qualia is that it would lead to
the possibility of partial zombies. If partial zombies are possible,
then I might be a partial zombie now and not know it. I may, for
example, have zombie vision: I believe I can see, I can correctly
describe everything I look at, but in fact I am completely lacking in
visual perception. What am I missing out on? I am apparently not
missing out on anything. The zombie vision is just as good, in every
objective and subjective sense, as normal vision. So the objection to
the fading qualia is either that the qualia won't fade, or that if they do
fade they will be replaced by zombie qualia that are indistinguishable
from normal qualia and which we may as well call normal qualia.


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-03 Thread Bruno Marchal


On 02 Mar 2010, at 20:33, Jack Mallah wrote:

I finally figured out what was happening to my emails: the spam  
filter got overly aggressive and it was sending some of the list  
posts to the spam folder, but letting others into the inbox.  The  
post I'm replying to now was one that was hidden that way.


--- On Sun, 2/14/10, Bruno Marchal marc...@ulb.ac.be wrote:

Jack Mallah wrote:
What is false is your statement that The only way to escape the  
conclusion would be to attribute consciousness to a movie of a  
computation.  So your argument is not valid.


OK. I was talking in a context which is missing. You can also  
conclude that the neurons have prescience, for example. The point is  
that if you assume the physical supervenience thesis, you have to  
abandon comp and/or to introduce magical (non Turing emulable)  
property in matter.


That is false. Bruno, you don't have to assume any 'prescience'; you  
just have to assume that counterfactuals count.  No one but you  
considers that 'prescience' or any kind of problem.



This would lead to fading qualia in the case of progressive  
substitution from the Boolean Graph to the movie graph.







gradually replace the components of the computer (which have the  
standard counterfactual (if-then) functioning) with components  
that only play out a pre-recorded script or which behave correctly  
by luck.


You could then invoke the 'fading qualia' argument (qualia could  
plausibly not vanish either suddenly or by gradually fading as the  
replacement proceeds) to argue that this makes no difference to  
the consciousness.  My partial brain paper shows that the 'fading  
qualia' argument is invalid.


I am not using the 'fading qualia' argument.


Then someone else on the list must have brought it up at some  
point.  In any case, it was the only interesting argument in favor  
of your position, which was not trivially obviously invalid.  My PB  
paper shows that it is invalid though.


?





I think there was also a claim that counterfactual sensitivity  
amounts to 'prescience' but that makes no sense and I'm pretty  
sure that no one (even those who accept the rest of your  
arguments) agrees with you on that.


It is a reasoning by reductio ad absurdum. If you agree (with  
any computationalist) that we cannot attribute prescience to the  
neurons, then the physical activity of the brain is the same as the  
physical activity of the movie, so that physical supervenience +  
comp entails that consciousness supervenes on the movie (and  
this is absurd, mainly because the movie does not compute anything).


I guess by 'physical supervenience' you mean supervenience on  
physical activity only.



Not at all. In the comp theory, it means supervenience on the physical  
realization of a computation. MGA shows that physical supervenience entails  
comp supervenience. No universal machine can know which is its most  
probable computation, and it can know that, below that level, the  
appearances come from all of them.




 That is not what computationalism assumes. Computationalism assumes  
supervenience on both physical activity and physical laws (aka  
counterfactuals).


? You evacuate the computation?



 There is no secret about that.  Consciousness does not arise from  
the movie, because the movie has the wrong physical laws.  There is  
nothing about that that has anything to do with 'prescience'.


This is not computationalism.





Now, there is a school of thought that says that physical laws don't  
exist per se, and are merely descriptions of what is already in the  
physical activity.  A computationalist physicalist obviously rejects  
that view.


Counterfactual behaviors are properties of the overall system and  
are mathematically defined.


But that is the point: the counterfactuals are in the math.
Not in the physical activity.


Bruno, try to read what I write instead of putting in your own  
meanings to my words.


I try politely to make sense of what you say by interpreting your  
terms favorably.





A physical system has mathematically describable properties.  Among  
these are the physical activity and also the counterfactuals.  There  
is no distinction to make on that basis.  That is what I was  
saying.  That has nothing whatsoever to do with Platonism.


machine ... its next personal state has to be recovered from the  
statistics on the possible relative continuations.


No, nyet, non, and hell no.  That is merely your view, which I  
obviously reject and which has nothing to recommend it - especially  
NOT computationalism, your erroneous claims to the contrary.




Show the error, then.

But I think you have not even read the step zero (of UDA) correctly.

To explain comp I assume consensual reality. Comp is really the thesis  
that I survive with a digital PHYSICAL brain. But we don't assume that  
PHYSICAL is primitive, and indeed the reasoning shows that Comp  
entails that the mind body problem is transformed into a 

Re: problem of size '10

2010-03-03 Thread Stathis Papaioannou
On 12 February 2010 03:14, Jack Mallah jackmal...@yahoo.com wrote:

 That's not true.  For partial replacement scenarios, where part of a brain 
 has counterfactuals and the rest doesn't, see my partial brain paper: 
 http://cogprints.org/6321/

I've finally come around to reading this paper. You may or may not be
aware that there is a condition called Anton's syndrome in which some
patients who are blind as a result of a lesion to their occipital
cortex are unaware that they are blind. It is not a matter of denial:
the patients honestly believe they have normal vision, and confabulate
when asked to describe things placed in front of them. They are
deluded about their qualia, in other words. It is a type of organic
delusional disorder called anosognosia, an inability to recognise an
obvious functional deficit in oneself.

This is interesting, but I don't think it damages the fading qualia
argument. For a start, the syndrome does not occur in most patients
who have such cortical lesions, and it is very rare in patients whose
lesion is downstream in the visual pathway, such as in the eye or the
optic nerve. It is not a routine response to the loss of perception,
but rather a specific delusional disorder called an anosognosia, where
the patient's reality testing is impaired and he does not recognise a
functional deficit obvious to everyone else. More to the point, the
fading qualia argument requires that there be no functional change as
a result of the neural replacement, and in Anton's syndrome there is a
gross functional change, since the patient is blind. A patient who is
cognitively intact would immediately notice that something was awry,
and even if he was hallucinating rather than blind, he would notice
that there was a discrepancy between what he thinks he sees and what
his other faculties tell him is really there. There would thus be an
immediate change in consciousness, and similarly in your paper where
you consider a gradual removal of brain tissue. It would have to be
very specific surgery to produce the sort of delusional state you
describe.


-- 
Stathis Papaioannou




Re: problem of size '10

2010-03-02 Thread Jack Mallah
I finally figured out what was happening to my emails: the spam filter got 
overly aggressive and it was sending some of the list posts to the spam folder, 
but letting others into the inbox.  The post I'm replying to now was one that 
was hidden that way.

--- On Sun, 2/14/10, Bruno Marchal marc...@ulb.ac.be wrote:
  Jack Mallah wrote:
  What is false is your statement that The only way to escape the conclusion 
  would be to attribute consciousness to a movie of a computation.  So your 
  argument is not valid.
 
 OK. I was talking in a context which is missing. You can also conclude that 
 the neurons have prescience, for example. The point is that if you assume the 
 physical supervenience thesis, you have to abandon comp and/or to introduce 
 magical (non Turing emulable) property in matter.

That is false. Bruno, you don't have to assume any 'prescience'; you just have 
to assume that counterfactuals count.  No one but you considers that 
'prescience' or any kind of problem.

  gradually replace the components of the computer (which have the standard 
  counterfactual (if-then) functioning) with components that only play out a 
  pre-recorded script or which behave correctly by luck.
 
  You could then invoke the 'fading qualia' argument (qualia could plausibly 
  not vanish either suddenly or by gradually fading as the replacement 
  proceeds) to argue that this makes no difference to the consciousness.  My 
  partial brain paper shows that the 'fading qualia' argument is invalid.
 
 I am not using the 'fading qualia' argument.

Then someone else on the list must have brought it up at some point.  In any 
case, it was the only interesting argument in favor of your position, which was 
not trivially obviously invalid.  My PB paper shows that it is invalid though.

  I think there was also a claim that counterfactual sensitivity amounts to 
  'prescience' but that makes no sense and I'm pretty sure that no one (even 
  those who accept the rest of your arguments) agrees with you on that.
 
 It is a reasoning by reductio ad absurdum. If you agree (with any 
 computationalist) that we cannot attribute prescience to the neurons, then 
 the physical activity of the brain is the same as the physical activity of 
 the movie, so that physical supervenience + comp entails that 
 consciousness supervenes on the movie (and this is absurd, mainly because the 
 movie does not compute anything).

I guess by 'physical supervenience' you mean supervenience on physical activity 
only.  That is not what computationalism assumes. Computationalism assumes 
supervenience on both physical activity and physical laws (aka 
counterfactuals).  There is no secret about that.  Consciousness does not arise 
from the movie, because the movie has the wrong physical laws.  There is 
nothing about that that has anything to do with 'prescience'.

Now, there is a school of thought that says that physical laws don't exist per 
se, and are merely descriptions of what is already in the physical activity.  A 
computationalist physicalist obviously rejects that view.

  Counterfactual behaviors are properties of the overall system and are 
  mathematically defined.
 
 But that is the point: the counterfactuals are in the math.
 Not in the physical activity.

Bruno, try to read what I write instead of putting in your own meanings to my 
words.

A physical system has mathematically describable properties.  Among these are 
the physical activity and also the counterfactuals.  There is no distinction to 
make on that basis.  That is what I was saying.  That has nothing whatsoever to 
do with Platonism.

 machine ... its next personal state has to be recovered from the statistics 
 on the possible relative continuations.

No, nyet, non, and hell no.  That is merely your view, which I obviously reject 
and which has nothing to recommend it - especially NOT computationalism, your 
erroneous claims to the contrary.




  




Re: problem of size '10

2010-03-02 Thread David Nyman
2010/3/2 Jack Mallah jackmal...@yahoo.com:

 I guess by 'physical supervenience' you mean supervenience on physical 
 activity only.  That is not what computationalism assumes. Computationalism 
 assumes supervenience on both physical activity and physical laws (aka 
 counterfactuals).  There is no secret about that.  Consciousness does not 
 arise from the movie, because the movie has the wrong physical laws.  There 
 is nothing about that that has anything to do with 'prescience'.

Just so that I can be sure I've understood what you're saying here:

The physical laws you refer to above would be deemed to mediate
whatever physical activity is required to realise any (and all)
logically possible execution paths implicit in the relevant
computations.  And if so, the computationalist theory of mind would
amount to the claim that consciousness supervenes only on realisations
capable of instantiating this complete range of underlying physical
activity (i.e. factual + counterfactual) in virtue of relevant
physical laws.  IOW, there would always be a fully efficacious
physical mechanism - of some kind - underlying the computational one.

Under this interpretation, the idea would be that the absence of such
physical arrangements for realising counterfactual execution paths
would disqualify a mechanism as being efficacious in producing
consciousness.  I'm not entirely clear, however, why you say:

 Now, there is a school of thought that says that physical laws don't exist 
 per se, and are merely descriptions of what is already in the physical 
 activity.  A computationalist physicalist obviously rejects that view.

In the case of a mechanism with the appropriate arrangements for
counterfactuals - i.e. one that in principle at least could be
re-run in such a way as to elicit the counterfactual activity - the
question of whether the relevant physical law is causal, or merely
inferred, would appear to be incidental.  Is the metaphysical status
of physical law deemed in some way to be relevant to consciousness?

David

 I finally figured out what was happening to my emails: the spam filter got 
 overly agressive and it was sending some of the list posts to the spam 
 folder, but letting others into the inbox.  The post I'm replying to now was 
 one that was hidden that way.

 --- On Sun, 2/14/10, Bruno Marchal marc...@ulb.ac.be wrote:
  Jack Mallah wrote:
  What is false is your statement that The only way to escape the 
  conclusion would be to attribute consciousness to a movie of a 
  computation.  So your argument is not valid.

 OK. I was talking in a context which is missing. You can also conclude that 
 the neurons have prescience, for example. The point is that if you assume 
 the physical supervenience thesis, you have to abandon comp and/or to 
 introduce magical (non Turing emulable) property in matter.

 That is false. Bruno, you don't have to assume any 'prescience'; you just 
 have to assume that counterfactuals count.  No one but you considers that 
 'prescience' or any kind of problem.

  gradually replace the components of the computer (which have the standard 
  counterfactual (if-then) functioning) with components that only play out a 
  pre-recorded script or which behave correctly by luck.

  You could then invoke the 'fading qualia' argument (qualia could plausibly 
  not vanish either suddenly or by gradually fading as the replacement 
  proceeds) to argue that this makes no difference to the consciousness.  My 
  partial brain paper shows that the 'fading qualia' argument is invalid.

 I am not using the 'fading qualia' argument.

 Then someone else on the list must have brought it up at some point.  In any 
 case, it was the only interesting argument in favor of your position, which 
 was not trivially obviously invalid.  My PB paper shows that it is invalid 
 though.

  I think there was also a claim that counterfactual sensitivity amounts to 
  'prescience' but that makes no sense and I'm pretty sure that no one (even 
  those who accept the rest of your arguments) agrees with you on that.

 It is a reasoning by reductio ad absurdum. If you agree (with any 
 computationalist) that we cannot attribute prescience to the neurons, then 
 the physical activity of the brain is the same as the physical activity of 
 the movie, so that physical supervenience + comp entails that 
 consciousness supervenes on the movie (and this is absurd, mainly because 
 the movie does not compute anything).

 I guess by 'physical supervenience' you mean supervenience on physical 
 activity only.  That is not what computationalism assumes. Computationalism 
 assumes supervenience on both physical activity and physical laws (aka 
 counterfactuals).  There is no secret about that.  Consciousness does not 
 arise from the movie, because the movie has the wrong physical laws.  There 
 is nothing about that that has anything to do with 'prescience'.

 Now, there is a school of thought that says that physical 

RE: problem of size '10

2010-02-26 Thread Jesse Mazer
 From: stath...@gmail.com
 Date: Tue, 23 Feb 2010 20:23:55 +1100
 Subject: Re: problem of size '10
 To: everything-list@googlegroups.com

 On 23 February 2010 04:45, Jesse Mazer laserma...@hotmail.com wrote:

  It seems that these thought experiments inevitably lead to considering a
  digital simulation of the brain in a virtual environment. This is
  usually brushed over as an inessential aspect, but I'm coming to the
  opinion that it is essential. Once you have encapsulated the whole
  thought experiment in a closed virtual environment in a digital computer
  you have the paradox of the rock that computes everything. How do we know
  what is being computed in this virtual environment? Ordinarily the
  answer to this is that we wrote the program and so we provide the
  interpretation of the calculation *in this world*. But it seems that in
  these thought experiments we are implicitly supposing that the
  simulation is inherently providing its own interpretation. Maybe so;
  but I see no reason to have confidence that this inherent interpretation
  is either unique or has anything to do with the interpretation we
  intended. I suspect that this simulated consciousness is only
  consciousness *in our external interpretation*.
 
  Brent
 
  In that case, aren't you saying that there is no objective answer to whether
  a particular physical process counts as an implementation of a given
  computation, and that absolutely any process can be seen as implementing any
  computation if outside observers choose to interpret it that way? That's
  basically the conclusion Chalmers was trying to avoid in his Does a Rock
  Implement Every Finite-State Automaton paper
  at http://consc.net/papers/rock.html which discussed the implementation
  problem. One possible answer to this problem is that implementations *are*
  totally subjective, but this would seem to rule out the possibility of there
  ever being any sort of objective measure on computations (unless you imagine
  some privileged observers who are themselves *not* identified with
  computations and whose interpretations are the only ones that 'count') which
  makes it hard to solve things like the white rabbit problem that's been
  discussed often on this list.
  Jesse

 It seems to me that perhaps the main reason for assuming that
 counterfactual behaviour in the brain is needed for consciousness is
 that otherwise any physical system implements any computation, or
 equivalently every computation is implemented independently of any
 physical reality that may or may not exist, and this would be a
 terrible conclusion for materialists.


Well, this is the conclusion I'm trying to avoid with my idea about defining
causal structure in terms of logical implications between propositions about
events and the laws governing them. I think this idea could avoid the
conclusion that any physical system implements any computation, but also
avoid the conclusion that implementations of computations need to be defined
in terms of counterfactuals. Were you reading the discussion I was having
about this with Jack? If not, the old post where I first brought up the idea
to Bruno is at
http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html and
a more recent post from my discussion with Jack where I give a simple
illustration of how it's supposed to work is at
http://www.mail-archive.com/everything-list@googlegroups.com/msg18335.html

If you see any major problems with this idea, let me know!




Re: problem of size '10

2010-02-24 Thread Jack Mallah
Last post didn't show up in email.  Seems random.

--- On Tue, 2/23/10, Jesse Mazer laserma...@gmail.com wrote:
 -even if there was a one-to-one relationship between distinct computations 
 and distinct observer-moments with distinct qualia, very similar computations 
 could produce very similar qualia,

Sure. So you want to know if there are different (though similar in certain 
ways) computations that would produce _identical_ consciousness?  I'd say yes, 
and see below.  Some cases are obvious - e.g. simulating a brain + other 
stuff and varying the other stuff, which does change the computation.

I think though that you are trying to get at something a little more subtle, so 
I'll go further.  In my MCI paper (arxiv.org/abs/0709.0544), I note that 

One computation may simulate some other computation and give rise to conscious 
experience only because it does so. In this case it would be unjustified double 
counting to allow the implementations of both computations to contribute to the 
measure. This problem is easily avoided by only considering computations which 
give rise to consciousness in a way that is not due merely to simulation of 
some other conscious computation.
Such a computation is a fundamental conscious computation (FCC).

So what you really want to know is whether different FCCs could give rise to 
the same consciousness.  Again I would say yes.

 you're not really saying that the Earth computation *taken as a whole* is 
 associated with multiple qualia. It's as if we associated distinct qualia 
 with distinct sets-

Again I think you are trying to get at FCCs.  So now you want to know if a 
single FCC can give rise to multiple observers.  That one is a bit harder but I 
suspect it could.

 Well, the idea is that to determine what causal structures are contained in a 
 given universe (whether a physical universe or a computation), we adopt the 
 self-imposed rule that we *only* look at a set of propositions concerning 
 events that actually occurred

 Aside from that though, the counterfactuals you mention are of a very limited 
 kind, just involving negations of propositions about events that actually 
 occurred. Perhaps I'm misunderstanding, but I thought that the way you (and 
 Chalmers) wanted to define implementations of computations using 
 counterfactuals involved a far richer set of counterfactuals about detailed 
 alternate histories of what could have occurred if the inputs were different.

Yes - computations are defined using a full spectrum of counterfactual 
behaviors.  I would certainly not change that definition as it is the simplest 
way to describe the dynamics of the system.

However, I think there could be some common ground between what you want to do 
and my approach.  As I wrote in the MCI paper (p. 21), 

... if a computer is built that ‘derails’ for the wrong input, that does not 
mean the computer does not implement any computations. It is true that it will 
not implement the same CSSA as it would if it did not suffer from the 
derailment issue, but it will still implement some CSSA which is related to the 
normal one. This new CSSA may be sufficient to give rise to consciousness.

Now, I think your approach is equivalent to the following conjecture:

Factual Implications Conjecture (FIC): If different computations have the same 
logical implication relationships among states (and conjuctions of states) that 
actually occur in the actual run, then they give rise to the same type of 
consciousness regardless of their dynamics for other (counterfactual) 
situations.

I'm not sure the FIC holds in all cases but it does seem plausible at least for 
many cases.
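
To make the phrase 'logical implication relationships among states (and
conjunctions of states) that actually occur' concrete, here is a small Python
sketch (the encoding is invented here for illustration, not taken from either
paper). It treats each candidate dynamics as an axiom constraining a one-step
history (a, b, c) and asks, for the actually-occurring states, which sets of
premises force which conclusions; the two toy rules from the AND example
discussed later in this thread turn out to have different implication
fingerprints, so the FIC does not equate them:

    from itertools import product

    # Each dynamics is an 'axiom' constraining one step of history (a, b, c):
    axiom_and  = lambda a, b, c: c == (a and b)   # c(t+1) = a(t) AND b(t)
    axiom_wire = lambda a, b, c: c == a           # c(t+1) = a(t)

    def implies(axiom, premises, conclusion):
        # True if every (a, b, c) satisfying the axiom and all premises
        # also satisfies the conclusion ('axioms + propositions imply ...').
        models = [m for m in product([False, True], repeat=3)
                  if axiom(*m) and all(p(*m) for p in premises)]
        return all(conclusion(*m) for m in models)

    A = lambda a, b, c: a   # proposition: a(t) occurred (is true)
    B = lambda a, b, c: b   # proposition: b(t) occurred
    C = lambda a, b, c: c   # proposition: c(t+1) occurred

    # Implication fingerprint over the actually-occurring states (all true here):
    for name, axiom in [("AND gate", axiom_and), ("bare wire", axiom_wire)]:
        print(name,
              implies(axiom, [A, B], C),  # axiom + A + B imply C? (True for both)
              implies(axiom, [A], C))     # axiom + A alone imply C?
                                          # (False for the AND gate, True for the wire)

The two rules agree on the actual run, but they differ on whether A alone
suffices for C, so their implication fingerprints differ.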





  




Re: problem of size '10

2010-02-23 Thread Stathis Papaioannou
On 23 February 2010 04:45, Jesse Mazer laserma...@hotmail.com wrote:

 It seems that these thought experiments inevitably lead to considering a
 digital simulation of the brain in a virtual environment. This is
 usually brushed over as an inessential aspect, but I'm coming to the
 opinion that it is essential. Once you have encapsulated the whole
 thought experiment in a closed virtual environment in a digital computer
 you have the paradox of the rock that computes everything. How do we know
 what is being computed in this virtual environment? Ordinarily the
 answer to this is that we wrote the program and so we provide the
 interpretation of the calculation *in this world*. But it seems that in
 these thought experiments we are implicitly supposing that the
 simulation is inherently providing its own interpretation. Maybe so;
 but I see no reason to have confidence that this inherent interpretation
 is either unique or has anything to do with the interpretation we
 intended. I suspect that this simulated consciousness is only
 consciousness *in our external interpretation*.

 Brent

 In that case, aren't you saying that there is no objective answer to whether
 a particular physical process counts as an implementation of a given
 computation, and that absolutely any process can be seen as implementing any
 computation if outside observers choose to interpret it that way? That's
 basically the conclusion Chalmers was trying to avoid in his Does a Rock
 Implement Every Finite-State Automaton paper
 at http://consc.net/papers/rock.html which discussed the implementation
 problem. One possible answer to this problem is that implementations *are*
 totally subjective, but this would seem to rule out the possibility of there
 ever being any sort of objective measure on computations (unless you imagine
 some privileged observers who are themselves *not* identified with
 computations and whose interpretations are the only ones that 'count') which
 makes it hard to solve things like the white rabbit problem that's been
 discussed often on this list.
 Jesse

It seems to me that perhaps the main reason for assuming that
counterfactual behaviour in the brain is needed for consciousness is
that otherwise any physical system implements any computation, or
equivalently every computation is implemented independently of any
physical reality that may or may not exist, and this would be a
terrible conclusion for materialists.


-- 
Stathis Papaioannou




RE: problem of size '10

2010-02-23 Thread Jack Mallah
My last post worked (I got it in my email).  I'll repost one later and then 
post on the measure thread - though it's still a very busy time for me so maybe 
not today.

--- On Mon, 2/22/10, Jesse Mazer laserma...@hotmail.com wrote:
 OK, so you're suggesting there may not be a one-to-one relationship between 
 distinct observer-moments in the sense of distinct qualia, and distinct 
 computations defined in terms of counterfactuals? Distinct computations might 
 be associated with identical qualia, in other words?

Sure.  Otherwise, there'd be little point in trying to simulate someone, if any 
detail could change everything.

 What about the reverse--might a single computation be associated with 
 multiple distinct observer-moments with different qualia?

Certainly. For example, a sufficiently detailed simulation of the Earth would 
be associated with an entire population of observers.

 You say Suppose that a(t),b(t),and c(t) are all true, but that's not enough 
 information--the notion of causal structure I was describing involved not 
 just the truth or falsity of propositions, but also the logical relationships 
 between these propositions given the axioms of the system.

OK, I see what you're saying, Jesse.  I don't think it's a good solution though.

First, you are implicitly including a lot of counterfactual information 
already, which is the reason it works at all.  "B implies A" is logically 
equivalent to "Not A implies Not B".  I'll use ~ for Not, -> for 
implies, and the axiom context is assumed.  A, B are Boolean variables / bits. 
 So if you say

A -> B
B -> A

that's the same as saying

A -> B
~A -> ~B

which is the same as saying B = A.  Your way is just a clumsy way to provide 
some of the counterfactual information, which is often most concisely expressed 
as equations.  So if you think you have escaped counterfactuals, I disagree.
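
The stated equivalence is easy to verify by brute force over the four Boolean
valuations; a small illustrative check in Python:

    from itertools import product

    implies = lambda p, q: (not p) or q

    for A, B in product([False, True], repeat=2):
        lhs = implies(A, B) and implies(B, A)          # A -> B together with B -> A
        rhs = implies(A, B) and implies(not A, not B)  # A -> B together with ~A -> ~B
        assert lhs == rhs == (A == B)                  # both say exactly B = A

    print("equivalent on all four valuations")

Both pairs of implications hold in exactly the valuations where B = A, which is
the sense in which writing the implications is just another way of writing the
equation.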

The next problem is that for a larger number of bits, you won't express the 
full dynamics of the system.  For example with 10 bits, there are more possible 
combinations than your system will have statements.  I guess you see that as a 
feature rather than a bug - after all, it's what allows you to ignore inert 
machinery.  I don't like it but perhaps that's a matter of taste.

Now, that may work OK for bits, but it really seems to lose a lot for more 
general systems.  For example, suppose A,B,C are trits, or perhaps qubits, or 
real numbers such as positions.  Your logical implications remain limited to 
Boolean statements.  Do you really want to disregard so much of the system's 
dynamics?  I see no reason to do so when using counterfactuals in the usual way 
works just fine.  I consider any initial value problem to be a computation, 
including those that use differential equations.




  




RE: problem of size '10

2010-02-23 Thread Jesse Mazer
Aaargh, I see from looking at my last message to Jack Mallah at
http://www.mail-archive.com/everything-list@googlegroups.com/msg18314.html that
hotmail completely ignored the paragraph breaks I put in between the
numbered items on the list of propositions, making the lists extremely hard
to read. I'll try resending from my gmail account and hopefully it'll work
better!

 Date: Mon, 22 Feb 2010 11:41:38 -0800
 From: jackmal...@yahoo.com
 Subject: RE: problem of size '10
 To: everything-list@googlegroups.com

 Jesse, how do you access the everything list? I ask because I have not
received my own posts in my inbox, nor have others such as Bruno replied. I
use yahoo email. I may need to use a different method to prevent my posts
from getting lost. They do seem to show up on Google groups though. There
was never a problem until recently, so I'll see if this one works.


I just get the messages in my email--if you want to give a link to one of
the emails that didn't show up in your inbox, either from google groups or
from
http://www.mail-archive.com/everything-list@googlegroups.com/maillist.html ,
then I can check if that email showed up in my own inbox, since I haven't
deleted any of the everything-list emails for a few days.



 --- On Mon, 2/22/10, Jesse Mazer laserma...@hotmail.com wrote:
  Hi Jack, to me the idea that counterfactuals would be essential to
defining what counts as an implementation has always seemed
counterintuitive for reasons separate from the Olympia or movie-graph
argument. The thought-experiment I'd like to consider is one where some
device is implanted in my brain that passively monitors the activity of a
large group of neurons, and only if it finds them firing in some precise
prespecified sequence does it activate and stimulate my brain in some way,
causing a change in brain activity; otherwise it remains causally inert
  According to the counterfactual definition of implementations, would the
mere presence of this device change my qualia from what they'd be if it
wasn't present, even if the neurons required to activate it never actually
fire in the correct sequence and the device remains completely inert? That
would seem to divorce qualia from behavior in a pretty significant way...

 The link between qualia and computations is, of course, hard to know
anything about. But it seems to me quite likely that qualia would be
insensitive to the sort of changes in computations that you are talking
about. Such modified computations could give rise to the same (or nearly the
same) set of qualia for the 'inert device' runs as unmodified ones would
have. I am not saying that this must always be the case, since if you take
it too far you could run into Maudlin-type problems, but in many cases it
would make sense.



OK, so you're suggesting there may not be a one-to-one relationship between
distinct observer-moments in the sense of distinct qualia, and distinct
computations defined in terms of counterfactuals? Distinct computations
might be associated with identical qualia, in other words? What about the
reverse--might a single computation be associated with multiple distinct
observer-moments with different qualia?



  If you have time, perhaps you could take a look at my post
 
http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html
  where I discussed a vague idea for how one might define isomorphic
causal structures that could be used to address the implementation
problem, in a way that wouldn't depend on counterfactuals at all

 You do need counterfactuals to define implementations.

 Consider the computation c(t+1) = a(t) AND b(t), where a,b,c, are bits.
Suppose that a(t),b(t),and c(t) are all true. Without counterfactuals, how
would you distinguish the above from another computation such as c(t+1) =
a(t)?

 Even worse, suppose that c(t+1) is true no matter what. a(t) and b(t)
happen to be true. Is the above computation implemented?


You say "Suppose that a(t), b(t), and c(t) are all true", but that's not
enough information--the notion of causal structure I was describing involved
not just the truth or falsity of propositions, but also the logical
relationships between these propositions given the axioms of the system. For
example, if we are looking at three propositions A, B, and C in the context
of an axiomatic system, we can ask whether or not the axioms (which might
represent the laws of physics, or the internal rules of a turing machine)
along with propositions A and B (which could represent specific physical
facts such as initial conditions, or facts about particular cells on the
turing machine's tape at a particular time) can together be used to prove C,
or whether they are insufficient to prove C. The causal structure for a
given set of propositions could then be defined in terms of all possible
combinations of logical implications for those propositions, like this:

1. Axioms + A imply B: true or false?
2. Axioms + A imply C: true or false?
3. Axioms + B imply

Re: problem of size '10

2010-02-23 Thread Jesse Mazer
On Tue, Feb 23, 2010 at 10:40 AM, Jack Mallah jackmal...@yahoo.com wrote:

 My last post worked (I got it in my email).  I'll repost one later and then
 post on the measure thread - though it's still a very busy time for me so
 maybe not today.

 --- On Mon, 2/22/10, Jesse Mazer laserma...@hotmail.com wrote:
  OK, so you're suggesting there may not be a one-to-one relationship
 between distinct observer-moments in the sense of distinct qualia, and
 distinct computations defined in terms of counterfactuals? Distinct
 computations might be associated with identical qualia, in other words?

 Sure.  Otherwise, there'd be little point in trying to simulate someone, if
 any detail could change everything.



If by change everything you mean radical differences in the qualia, that
wasn't really what I was suggesting--even if there was a one-to-one
relationship between distinct computations and distinct observer-moments
with distinct qualia, very similar computations could produce very similar
qualia, so if you produced a good enough simulation of anyone's brain then
the simulation's experience could be nearly identical to that of the
original brain (and of course the simulation's experience would start to
diverge from the original brain's anyway as they'd receive different sensory
input)




  What about the reverse--might a single computation be associated with
 multiple distinct observer-moments with different qualia?

 Certainly. For example, a sufficiently detailed simulation of the Earth
 would be associated with an entire population of observers.


But isn't that just breaking up the computation into various
sub-computations and saying that each sub-computation has distinct
experiences? In this case you're not really saying that the Earth
computation *taken as a whole* is associated with multiple qualia. It's as
if we associated distinct qualia with distinct sets--the set {{}, {{}}}
might be associated with different qualia than the set {} which is contained
within it, but that's not the same as saying that the set {{}, {{}}} is
*itself* associated with multiple distinct qualia.




  You say Suppose that a(t),b(t),and c(t) are all true, but that's not
 enough information--the notion of causal structure I was describing involved
 not just the truth or falsity of propositions, but also the logical
 relationships between these propositions given the axioms of the system.

 OK, I see what you're saying, Jesse.  I don't think it's a good solution
 though.

 First, you are implicitly including a lot of counterfactual information
 already, which is the reason it works at all.  "B implies A" is logically
 equivalent to "Not A implies Not B".  I'll use ~ for "Not", -- for
 "implies", and the axiom context is assumed.  A, B are Boolean variables /
 bits.  So if you say

 A -- B
 B -- A

 that's the same as saying

 A -- B
 ~A -- ~B

 which is the same as saying B = A.  Your way is just a clumsy way to
 provide some of the counterfactual information, which is often most
 concisely expressed as equations.  So if you think you have escaped
 counterfactuals, I disagree.
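
A quick way to see the equivalence Jack points out here is to brute-force the truth table. The sketch below is purely illustrative (Python, with names of my own choosing), reading --> as material implication on Booleans:

from itertools import product

implies = lambda p, q: (not p) or q   # material implication

for A, B in product((False, True), repeat=2):
    pair1 = implies(A, B) and implies(B, A)          # A --> B  and  B --> A
    pair2 = implies(A, B) and implies(not A, not B)  # A --> B  and  ~A --> ~B
    assert pair1 == pair2 == (A == B)                # both pairs reduce to B = A

The second conjunct in each pair carries exactly the contrapositive, i.e. counterfactual, information being discussed.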



Well, the idea is that to determine what causal structures are contained in
a given universe (whether a physical universe or a computation), we adopt
the self-imposed rule that we *only* look at a set of propositions
concerning events that actually occurred in that universe, not at other
propositions concerning events that didn't occur in that universe. Then the
only causal structures contained in this universe are the ones that can be
found in the logical relations between this restricted set of propositions.

Aside from that though, the counterfactuals you mention are of a very
limited kind, just involving negations of propositions about events that
actually occurred. Perhaps I'm misunderstanding, but I thought that the way
you (and Chalmers) wanted to define implementations of computations using
counterfactuals involved a far richer set of counterfactuals about
detailed alternate histories of what could have occurred if the inputs were
different. For example, if the computation were a simulation of my brain
receiving sensory input from the external world, and it so happened that
this sensory input involved me seeing my desk and computer in front of me,
then your type of proposed solution to the implementation problem would
require considering how the simulation would have responded if it had
instead been fed some very different sensory input such as the sudden
appearance of a miniature dragon flying out of my computer monitor. In your
proposal, two computers running identical brain simulations and being fed
identical sensory inputs can only be considered implementations of the same
computation if it's true that they both *would* respond the same way to
totally different inputs, in other words. Is this understanding correct or
have I got it wrong?




 The next problem is that for a larger number of bits, you won't express the
 full dynamics of the system.  

RE: problem of size '10

2010-02-22 Thread Jesse Mazer



 Date: Sat, 13 Feb 2010 10:48:28 -0800
 From: jackmal...@yahoo.com
 Subject: Re: problem of size '10
 To: everything-list@googlegroups.com
 
 --- On Fri, 2/12/10, Bruno Marchal marc...@ulb.ac.be wrote:
  Jack Mallah wrote:
  --- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be
MGA is more general (and older).
The only way to escape the conclusion would be to attribute 
consciousness to a movie of a computation
  
   That's not true.  For partial replacement scenarios, where part of a 
   brain has counterfactuals and the rest doesn't, see my partial brain 
   paper: http://cogprints.org/6321/
 
  It is not a question of true or false, but of presenting a valid or non 
  valid deduction.
 
 What is false is your statement that The only way to escape the conclusion 
 would be to attribute consciousness to a movie of a computation.  So your 
 argument is not valid.
 
  I don't see anything in your comment or links which prevents the 
  conclusions being reached from the assumptions. If you think so, tell me 
  at which step, and provide a justification.
 
 Bruno, I don't intend to be drawn into a detailed discussion of your 
 arguments at this time.  The key idea though is that a movie could replace a 
 computer brain.  The strongest argument for that is that you could gradually 
 replace the components of the computer (which have the standard 
 counterfactual (if-then) functioning) with components that only play out a 
 pre-recorded script or which behave correctly by luck.  You could then invoke 
 the 'fading qualia' argument (qualia could plausibly not vanish either 
 suddenly or by gradually fading as the replacement proceeds) to argue that 
 this makes no difference to the consciousness.  My partial brain paper shows 
 that the 'fading qualia' argument is invalid.



Hi Jack, to me the idea that counterfactuals would be essential to defining 
what counts as an implementation has always seemed counterintuitive for 
reasons separate from the Olympia or movie-graph argument. The 
thought-experiment I'd like to consider is one where some device is implanted 
in my brain that passively monitors the activity of a large group of neurons, 
and only if it finds them firing in some precise prespecified sequence does it 
activate and stimulate my brain in some way, causing a change in brain 
activity; otherwise it remains causally inert (I suppose because of the 
butterfly effect, the mere presence of the device would eventually affect my 
brain activity, but we can imagine replacing the device with a subroutine in a 
deterministic program simulating my brain in a deterministic virtual 
environment, with the subroutine only being activated and influencing the 
simulation if certain simulated neurons fire in a precise sequence).

According to the counterfactual definition of implementations, would the mere presence of 
this device change my qualia from what they'd be if it wasn't present, even if 
the neurons required to activate it never actually fire in the correct sequence 
and the device remains completely inert? That would seem to divorce qualia from 
behavior in a pretty significant way...
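
To make the "causally inert device" scenario concrete, here is a toy sketch (Python; the rule, the trigger pattern and all names are my own invented stand-ins, not anything from the papers under discussion). A deterministic update rule plays the role of the simulated brain, and an optional monitor intervenes only if one prespecified firing pattern ever occurs; on histories where that pattern never occurs, the trace with the monitor installed is identical, step for step, to the trace without it, even though the two systems differ counterfactually.

TRIGGER = (1, 1, 1, 1, 1, 0)   # hypothetical prespecified firing pattern

def step(state):
    # arbitrary deterministic rule: each bit becomes the XOR of its two neighbours
    n = len(state)
    return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

def run(initial, steps, monitor_installed):
    state, trace = initial, [initial]
    for _ in range(steps):
        state = step(state)
        if monitor_installed and state == TRIGGER:
            state = tuple(1 - b for b in state)   # the monitor's intervention
        trace.append(state)
    return trace

initial = (0, 1, 1, 0, 1, 0)
with_device = run(initial, 50, monitor_installed=True)
without_device = run(initial, 50, monitor_installed=False)

if TRIGGER in without_device:
    print("the trigger occurred on this history, so the device actually intervened")
else:
    assert with_device == without_device   # present but causally inert
    print("device installed but never activated: the actual traces are identical")

Whether the mere presence of the never-taken branch makes any difference to the qualia associated with the inert runs is exactly the question at issue.
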
If you have time, perhaps you could take a look at my post at 
http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html 
where I discussed a vague idea for how one might define isomorphic causal 
structures that could be used to address the implementation problem, in a way 
that wouldn't depend on counterfactuals at all (there was some additional 
discussion in the followup posts on that thread, linked at the bottom of that 
mail-archive.com page). The basic idea was to treat the physical world as a 
formal axiomatic system, the axioms being laws of physics and initial 
conditions, the theorems being statements about physical events at later points 
in spacetime; then causal structure could be defined in terms of the patterns 
of logical relations between theorems, like given the axioms along with 
theorems A and B, we can derive theorem C. Since all theorems concern events 
that actually did happen, counterfactuals would not be involved, but we could 
still perhaps avoid the type of problem Chalmers discussed where a rock can be 
viewed as implementing any possible computation. If you do have time to look 
over the idea and you see some obvious problems with it, let me know...
Jesse 




Re: problem of size '10

2010-02-22 Thread Brent Meeker

Jesse Mazer wrote:



 Date: Sat, 13 Feb 2010 10:48:28 -0800
 From: jackmal...@yahoo.com
 Subject: Re: problem of size '10
 To: everything-list@googlegroups.com

 --- On Fri, 2/12/10, Bruno Marchal marc...@ulb.ac.be wrote:
  Jack Mallah wrote:
  --- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be
MGA is more general (and older).
The only way to escape the conclusion would be to attribute 
consciousness to a movie of a computation

  
   That's not true.  For partial replacement scenarios, where part 
of a brain has counterfactuals and the rest doesn't, see my partial 
brain paper: http://cogprints.org/6321/

 
  It is not a question of true or false, but of presenting a valid 
or non valid deduction.


 What is false is your statement that The only way to escape the 
conclusion would be to attribute consciousness to a movie of a 
computation.  So your argument is not valid.


  I don't see anything in your comment or links which prevents the 
conclusions being reached from the assumptions. If you think so, 
tell me at which step, and provide a justification.


 Bruno, I don't intend to be drawn into a detailed discussion of your 
arguments at this time.  The key idea though is that a movie could 
replace a computer brain.  The strongest argument for that is that you 
could gradually replace the components of the computer (which have the 
standard counterfactual (if-then) functioning) with components that 
only play out a pre-recorded script or which behave correctly by 
luck.  You could then invoke the 'fading qualia' argument (qualia 
could plausibly not vanish either suddenly or by gradually fading as 
the replacement proceeds) to argue that this makes no difference to 
the consciousness.  My partial brain paper shows that the 'fading 
qualia' argument is invalid.




Hi Jack, to me the idea that counterfactuals would be essential to 
defining what counts as an implementation has always seemed 
counterintuitive for reasons separate from the Olympia or movie-graph 
argument. The thought-experiment I'd like to consider is one where 
some device is implanted in my brain that passively monitors the 
activity of a large group of neurons, and only if it finds them firing 
in some precise prespecified sequence does it activate and stimulate 
my brain in some way, causing a change in brain activity; otherwise it 
remains causally inert (I suppose because of the butterfly effect, the 
mere presence of the device would eventually affect my brain activity, 
but we can imagine replacing the device with a subroutine in a 
deterministic program simulating my brain in a deterministic virtual 
environment, with the subroutine only being activated and influencing 
the simulation if certain simulated neurons fire in a precise sequence).


It seems that these thought experiments inevitably lead to considering a 
digital simulation of the brain in a virtual environment.  This is 
usually brushed over as an inessential aspect, but I'm coming to the 
opinion that it is essential.  Once you have encapsulated the whole 
thought experiment in a closed virtual environment in a digital computer 
you have the paradox of the rock that computes everything.  How do we know 
what is being computed in this virtual environment? Ordinarily the 
answer to this is that we wrote the program and so we provide the 
interpretation of the calculation *in this world*.  But it seems that in 
these thought experiments we are implicitly supposing that the 
simulation is inherently providing its own interpretation.  Maybe so; 
but I see no reason to have confidence that this inherent interpretation 
is either unique or has anything to do with the interpretation we 
intended.  I suspect that this simulated consciousness is only 
consciousness *in our external interpretation*.


Brent

According to the counterfactual definition of implementations, would 
the mere presence of this device change my qualia from what they'd be 
if it wasn't present, even if the neurons required to activate it 
never actually fire in the correct sequence and the device remains 
completely inert? That would seem to divorce qualia from behavior in a 
pretty significant way...


If you have time, perhaps you could take a look at my post 
at http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html 
where I discussed a vague idea for how one might define isomorphic 
causal structures that could be used to address the implementation 
problem, in a way that wouldn't depend on counterfactuals at all 
(there was some additional discussion in the followup posts on that 
thread, linked at the bottom of that mail-archive.com page). The basic 
idea was to treat the physical world as a formal axiomatic system, the 
axioms being laws of physics and initial conditions, the theorems 
being statements about physical events at later points in spacetime; 
then causal structure could be defined in terms of the patterns of 
logical relations between theorems, like

RE: problem of size '10

2010-02-22 Thread Jesse Mazer



 Date: Mon, 22 Feb 2010 08:42:17 -0800
 From: meeke...@dslextreme.com
 To: everything-list@googlegroups.com
 Subject: Re: problem of size '10
 
 Jesse Mazer wrote:
 
 
   Date: Sat, 13 Feb 2010 10:48:28 -0800
   From: jackmal...@yahoo.com
   Subject: Re: problem of size '10
   To: everything-list@googlegroups.com
  
   --- On Fri, 2/12/10, Bruno Marchal marc...@ulb.ac.be wrote:
Jack Mallah wrote:
--- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be
  MGA is more general (and older).
  The only way to escape the conclusion would be to attribute 
  consciousness to a movie of a computation

 That's not true.  For partial replacement scenarios, where part 
  of a brain has counterfactuals and the rest doesn't, see my partial 
  brain paper: http://cogprints.org/6321/
   
It is not a question of true or false, but of presenting a valid 
  or non valid deduction.
  
   What is false is your statement that The only way to escape the 
  conclusion would be to attribute consciousness to a movie of a 
  computation.  So your argument is not valid.
  
I don't see anything in your comment or links which prevents the 
  conclusions being reached from the assumptions. If you think so, 
  tell me at which step, and provide a justification.
  
   Bruno, I don't intend to be drawn into a detailed discussion of your 
  arguments at this time.  The key idea though is that a movie could 
  replace a computer brain.  The strongest argument for that is that you 
  could gradually replace the components of the computer (which have the 
  standard counterfactual (if-then) functioning) with components that 
  only play out a pre-recorded script or which behave correctly by 
  luck.  You could then invoke the 'fading qualia' argument (qualia 
  could plausibly not vanish either suddenly or by gradually fading as 
  the replacement proceeds) to argue that this makes no difference to 
  the consciousness.  My partial brain paper shows that the 'fading 
  qualia' argument is invalid.
 
 
 
  Hi Jack, to me the idea that counterfactuals would be essential to 
  defining what counts as an implementation has always seemed 
  counterintuitive for reasons separate from the Olympia or movie-graph 
  argument. The thought-experiment I'd like to consider is one where 
  some device is implanted in my brain that passively monitors the 
  activity of a large group of neurons, and only if it finds them firing 
  in some precise prespecified sequence does it activate and stimulate 
  my brain in some way, causing a change in brain activity; otherwise it 
  remains causally inert (I suppose because of the butterfly effect, the 
  mere presence of the device would eventually affect my brain activity, 
  but we can imagine replacing the device with a subroutine in a 
  deterministic program simulating my brain in a deterministic virtual 
  environment, with the subroutine only being activated and influencing 
  the simulation if certain simulated neurons fire in a precise sequence).
 
 It seems that these thought experiments inevitably lead to considering a 
 digital simulation of the brain in a virtual environment.  This is 
 usually brushed over as an inessential aspect, but I'm coming to the 
 opinion that it is essential.  Once you have encapsulated the whole 
 thought experiment in a closed virtual environment in a digital computer 
 you have the paradox of the rock that computes everything.  How do we know 
 what is being computed in this virtual environment? Ordinarily the 
 answer to this is that we wrote the program and so we provide the 
 interpretation of the calculation *in this world*.  But it seems that in 
 these thought experiments we are implicitly supposing that the 
 simulation is inherently providing its own interpretation.  Maybe so; 
 but I see no reason to have confidence that this inherent interpretation 
 is either unique or has anything to do with the interpretation we 
 intended.  I suspect that this simulated consciousness is only 
 consciousness *in our external interpretation*.
 
 Brent

In that case, aren't you saying that there is no objective answer to whether a 
particular physical process counts as an implementation of a given 
computation, and that absolutely any process can be seen as implementing any 
computation if outside observers choose to interpret it that way? That's 
basically the conclusion Chalmers was trying to avoid in his "Does a Rock 
Implement Every Finite-State Automaton?" paper at 
http://consc.net/papers/rock.html which discussed the implementation problem. 
One possible answer to this problem is that implementations *are* totally 
subjective, but this would seem to rule out the possibility of there ever being 
any sort of objective measure on computations (unless you imagine some 
privileged observers who are themselves *not* identified with computations and 
whose interpretations are the only ones that 'count') which makes it hard to 
solve things like

RE: problem of size '10

2010-02-22 Thread Jack Mallah
Jesse, how do you access the everything list?  I ask because I have not 
received my own posts in my inbox, nor have others such as Bruno replied.  I 
use yahoo email.  I may need to use a different method to prevent my posts from 
getting lost.  They do seem to show up on Google groups though.  There was 
never a problem until recently, so I'll see if this one works.

--- On Mon, 2/22/10, Jesse Mazer laserma...@hotmail.com wrote:
 Hi Jack, to me the idea that counterfactuals would be essential to defining 
 what counts as an implementation has always seemed counterintuitive for 
 reasons separate from the Olympia or movie-graph argument. The 
 thought-experiment I'd like to consider is one where some device is implanted 
 in my brain that passively monitors the activity of a large group of neurons, 
 and only if it finds them firing in some precise prespecified sequence does 
 it activate and stimulate my brain in some way, causing a change in brain 
 activity; otherwise it remains causally inert
 According to the counterfactual definition of implementations, would the mere 
 presence of this device change my qualia from what they'd be if it wasn't 
 present, even if the neurons required to activate it never actually fire in 
 the correct sequence and the device remains completely inert? That would seem 
 to divorce qualia from behavior in a pretty significant way...

The link between qualia and computations is, of course, hard to know anything 
about.  But it seems to me quite likely that qualia would be insensitive to the 
sort of changes in computations that you are talking about.  Such modified 
computations could give rise to the same (or nearly the same) set of qualia for 
the 'inert device' runs as unmodified ones would have.  I am not saying that 
this must always be the case, since if you take it too far you could run into 
Maudlin-type problems, but in many cases it would make sense.

 If you have time, perhaps you could take a look at my post
 http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html
 where I discussed a vague idea for how one might define isomorphic causal 
 structures that could be used to address the implementation problem, in a 
 way that wouldn't depend on counterfactuals at all

You do need counterfactuals to define implementations.

Consider the computation c(t+1) = a(t) AND b(t), where a,b,c, are bits.  
Suppose that a(t),b(t),and c(t) are all true.  Without counterfactuals, how 
would you distinguish the above from another computation such as c(t+1) = a(t)?

Even worse, suppose that c(t+1) is true no matter what.  a(t) and b(t) happen 
to be true.  Is the above computation implemented?

This gets even worse when you allow time-dependent mappings, which make a lot 
of intuitive sense in many practical cases.  Now c=1 can mean "c is true at 
time t+1", but so can c=0 under a different mapping.

All of these problems go away when you require correct counterfactual behavior.

You might wonder about time dependent mappings.  If a(t)=1, b(t)=1, and c(t+1) 
= 0, can that implement the computation, considering a,b as true and c=0 as c 
is true?  Only if c(t+1) _would have been 1_ (thus, c is false) if a(t) or 
b(t) had been zero.

Clearly, due to the various and time-dependent mappings, there are a lot of 
computations that end up equivalent.  But the point is that real distinctions 
remain.  No matter what mappings you choose, as long as counterfactual 
behaviors are required, there is NO mapping that would make "a AND b" 
equivalent to "a XOR b".  If you drop the counterfactual requirement, that is 
no longer the case.
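
A small brute-force sketch of both points (Python; AND, FIRST, XOR and the per-bit relabelings below are my own toy stand-ins for the much more general mappings discussed in the MCI paper, so this only illustrates the flavour of the argument):

from itertools import product

def AND(a, b): return a & b
def FIRST(a, b): return a        # the rival reading c(t+1) = a(t)
def XOR(a, b): return a ^ b

# (1) The single actual run a=1, b=1, c=1 is consistent with both readings,
#     so the run by itself cannot distinguish them.
assert AND(1, 1) == FIRST(1, 1) == 1

# (2) The full counterfactual tables do distinguish them.
table = lambda f: {inp: f(*inp) for inp in product((0, 1), repeat=2)}
assert table(AND) != table(FIRST)

# (3) Even allowing simple relabelings (negate a, b, or c, and/or swap the two
#     inputs), no relabeling turns the AND table into the XOR table.
def relabel(tab, neg_a, neg_b, neg_c, swap):
    out = {}
    for (a, b), c in tab.items():
        a2, b2 = (b, a) if swap else (a, b)
        out[(a2 ^ neg_a, b2 ^ neg_b)] = c ^ neg_c
    return out

assert all(relabel(table(AND), na, nb, nc, sw) != table(XOR)
           for na, nb, nc, sw in product((0, 1), repeat=4))

Dropping the counterfactual rows and keeping only the actual run is what collapses these distinctions.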

--- On Mon, 2/22/10, Brent Meeker meeke...@dslextreme.com wrote:
 It seems that these thought experiments inevitably lead to considering a 
 digital simulation of the brain in a virtual environment.  This is usually 
 brushed over as an inessential aspect, but I'm coming to the opinion that it 
 is essential.

It's not essential, just convenient for thought experiments.

 Once you have encapsulated the whole thought experiment in a closed virtual 
 environment in a digital computer you have the paradox of the rock that 
 computes everything.

No. Input/output is not the solution for that; restrictions on mappings are.  
See my MCI paper:  http://arxiv.org/abs/0709.0544




  




Re: problem of size '10

2010-02-22 Thread Brent Meeker

Jesse Mazer wrote:



 Date: Mon, 22 Feb 2010 08:42:17 -0800
 From: meeke...@dslextreme.com
 To: everything-list@googlegroups.com
 Subject: Re: problem of size '10

 Jesse Mazer wrote:
 
 
   Date: Sat, 13 Feb 2010 10:48:28 -0800
   From: jackmal...@yahoo.com
   Subject: Re: problem of size '10
   To: everything-list@googlegroups.com
  
   --- On Fri, 2/12/10, Bruno Marchal marc...@ulb.ac.be wrote:
Jack Mallah wrote:
--- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be
  MGA is more general (and older).
  The only way to escape the conclusion would be to attribute
  consciousness to a movie of a computation

 That's not true. For partial replacement scenarios, where part
  of a brain has counterfactuals and the rest doesn't, see my partial
  brain paper: http://cogprints.org/6321/
   
It is not a question of true or false, but of presenting a valid
  or non valid deduction.
  
   What is false is your statement that The only way to escape the
  conclusion would be to attribute consciousness to a movie of a
  computation. So your argument is not valid.
  
I don't see anything in your comment or links which prevents the
  conclusions being reached from the assumptions. If you think so,
  tell me at which step, and provide a justification.
  
   Bruno, I don't intend to be drawn into a detailed discussion of your
  arguments at this time. The key idea though is that a movie could
  replace a computer brain. The strongest argument for that is that you
  could gradually replace the components of the computer (which have the
  standard counterfactual (if-then) functioning) with components that
  only play out a pre-recorded script or which behave correctly by
  luck. You could then invoke the 'fading qualia' argument (qualia
  could plausibly not vanish either suddenly or by gradually fading as
  the replacement proceeds) to argue that this makes no difference to
  the consciousness. My partial brain paper shows that the 'fading
  qualia' argument is invalid.
 
 
 
  Hi Jack, to me the idea that counterfactuals would be essential to
  defining what counts as an implementation has always seemed
  counterintuitive for reasons separate from the Olympia or movie-graph
  argument. The thought-experiment I'd like to consider is one where
  some device is implanted in my brain that passively monitors the
  activity of a large group of neurons, and only if it finds them firing
  in some precise prespecified sequence does it activate and stimulate
  my brain in some way, causing a change in brain activity; otherwise it
  remains causally inert (I suppose because of the butterfly effect, the
  mere presence of the device would eventually affect my brain activity,
  but we can imagine replacing the device with a subroutine in a
  deterministic program simulating my brain in a deterministic virtual
  environment, with the subroutine only being activated and influencing
  the simulation if certain simulated neurons fire in a precise sequence).


 It seems that these thought experiments inevitably lead to considering a
 digital simulation of the brain in a virtual environment. This is
 usually brushed over as an inessential aspect, but I'm coming to the
 opinion that it is essential. Once you have encapsulated the whole
 thought experiment in a closed virtual environment in a digital computer
 you have the paradox of the rock that computes everything. How do we know
 what is being computed in this virtual environment? Ordinarily the
 answer to this is that we wrote the program and so we provide the
 interpretation of the calculation *in this world*. But it seems that in
 these thought experiments we are implicitly supposing that the
 simulation is inherently providing its own interpretation. Maybe so;
 but I see no reason to have confidence that this inherent interpretation
 is either unique or has anything to do with the interpretation we
 intended. I suspect that this simulated consciousness is only
 consciousness *in our external interpretation*.

 Brent

In that case, aren't you saying that there is no objective answer to 
whether a particular physical process counts as an implementation of 
a given computation, and that absolutely any process can be seen as 
implementing any computation if outside observers choose to interpret 
it that way? That's basically the conclusion Chalmers was trying to 
avoid in his "Does a Rock Implement Every Finite-State Automaton?" 
paper at http://consc.net/papers/rock.html which discussed the 
implementation problem. One possible answer to this problem is that 
implementations *are* totally subjective, but this would seem to rule 
out the possibility of there ever being any sort of objective measure 
on computations (unless you imagine some privileged observers who are 
themselves *not* identified with computations and whose 
interpretations are the only ones that 'count') which makes it hard to 
solve things like the white rabbit problem

RE: problem of size '10

2010-02-22 Thread Jesse Mazer



 Date: Mon, 22 Feb 2010 11:41:38 -0800
 From: jackmal...@yahoo.com
 Subject: RE: problem of size '10
 To: everything-list@googlegroups.com
 
 Jesse, how do you access the everything list?  I ask because I have not 
 received my own posts in my inbox, nor have others such as Bruno replied.  I 
 use yahoo email.  I may need to use a different method to prevent my posts 
 from getting lost.  They do seem to show up on Google groups though.  There 
 was never a problem until recently, so I'll see if this one works.
I just get the messages in my email--if you want to give a link to one of the 
emails that didn't show up in your inbox, either from google groups or from 
http://www.mail-archive.com/everything-list@googlegroups.com/maillist.html , 
then I can check if that email showed up in my own inbox, since I haven't 
deleted any of the everything-list emails for a few days.

 
 --- On Mon, 2/22/10, Jesse Mazer laserma...@hotmail.com wrote:
  Hi Jack, to me the idea that counterfactuals would be essential to defining 
  what counts as an implementation has always seemed counterintuitive for 
  reasons separate from the Olympia or movie-graph argument. The 
  thought-experiment I'd like to consider is one where some device is 
  implanted in my brain that passively monitors the activity of a large group 
  of neurons, and only if it finds them firing in some precise prespecified 
  sequence does it activate and stimulate my brain in some way, causing a 
  change in brain activity; otherwise it remains causally inert
  According to the counterfactual definition of implementations, would the 
  mere presence of this device change my qualia from what they'd be if it 
  wasn't present, even if the neurons required to activate it never actually 
  fire in the correct sequence and the device remains completely inert? That 
  would seem to divorce qualia from behavior in a pretty significant way...
 
 The link between qualia and computations is, of course, hard to know anything 
 about.  But it seems to me quite likely that qualia would be insensitive to 
 the sort of changes in computations that you are talking about.  Such 
 modified computations could give rise to the same (or nearly the same) set of 
 qualia for the 'inert device' runs as unmodified ones would have.  I am not 
 saying that this must always be the case, since if you take it too far you 
 could run into Maudlin-type problems, but in many cases it would make sense.

OK, so you're suggesting there may not be a one-to-one relationship between 
distinct observer-moments in the sense of distinct qualia, and distinct 
computations defined in terms of counterfactuals? Distinct computations might 
be associated with identical qualia, in other words? What about the 
reverse--might a single computation be associated with multiple distinct 
observer-moments with different qualia?
 
  If you have time, perhaps you could take a look at my post
  http://www.mail-archive.com/everything-list@googlegroups.com/msg16244.html
  where I discussed a vague idea for how one might define isomorphic causal 
  structures that could be used to address the implementation problem, in a 
  way that wouldn't depend on counterfactuals at all
 
 You do need counterfactuals to define implementations.
 
 Consider the computation c(t+1) = a(t) AND b(t), where a,b,c, are bits.  
 Suppose that a(t),b(t),and c(t) are all true.  Without counterfactuals, how 
 would you distinguish the above from another computation such as c(t+1) = 
 a(t)?
 
 Even worse, suppose that c(t+1) is true no matter what.  a(t) and b(t) happen 
 to be true.  Is the above computation implemented?

You say "Suppose that a(t), b(t), and c(t) are all true", but that's not enough 
information--the notion of causal structure I was describing involved not just 
the truth or falsity of propositions, but also the logical relationships 
between these propositions given the axioms of the system. For example, if we 
are looking at three propositions A, B, and C in the context of an axiomatic 
system, we can ask whether or not the axioms (which might represent the laws of 
physics, or the internal rules of a turing machine) along with propositions A 
and B (which could represent specific physical facts such as initial 
conditions, or facts about particular cells on the turing machine's tape at a 
particular time) can together be used to prove C, or whether they are 
insufficient to prove C. The causal structure for a given set of propositions 
could then be defined in terms of all possible combinations of logical 
implications for those propositions, like this:
1. Axioms + A imply B: true or false?
2. Axioms + A imply C: true or false?
3. Axioms + B imply A: true or false?
4. Axioms + B imply C: true or false?
5. Axioms + C imply A: true or false?
6. Axioms + C imply B: true or false?
7. Axioms + A + B imply C: true or false?
8. Axioms + A + C imply B: true or false?
9. Axioms + B + C imply A: true or false?
For example, one
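
A minimal sketch of the kind of bookkeeping this suggests, under toy assumptions of my own (meant only to illustrate the flavour of the definition, not to reproduce the proposal itself): take a single "axiom", the law z = x AND y, let A, B, C be propositions about an actual history in which x = y = z = 1, and read "Axioms + premises imply conclusion" semantically, as "the conclusion holds in every assignment satisfying the law and the premises". Enumerating the nine questions above then yields a pattern of answers that plays the role of the causal-structure fingerprint:

from itertools import product, combinations

law = lambda x, y, z: z == (x and y)    # the toy "axioms"
props = {"A": lambda x, y, z: x == 1,   # facts about the actual history x=y=z=1
         "B": lambda x, y, z: y == 1,
         "C": lambda x, y, z: z == 1}

def implied(premises, conclusion):
    # does the conclusion hold in every model of the law that satisfies the premises?
    models = [s for s in product((0, 1), repeat=3)
              if law(*s) and all(props[p](*s) for p in premises)]
    return all(props[conclusion](*s) for s in models)

for r in (1, 2):
    for premises in combinations("ABC", r):
        for conclusion in sorted(set("ABC") - set(premises)):
            print("Axioms + %s imply %s: %s"
                  % (" + ".join(premises), conclusion, implied(premises, conclusion)))

For this little system only the implications running from C alone, or from any two of the propositions, come out true, which is one way of saying that C stands to A and B as the output of an AND gate stands to its inputs.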

RE: problem of size '10

2010-02-17 Thread Jack Mallah
--- On Mon, 2/15/10, Stephen P. King stephe...@charter.net wrote:
 On reading the first page of your paper a thought occurred to me. What 
 actually happens in the case of progressive Alzheimer's disease is a bit 
 different from the idea that I get from the discussion.

Hi Stephen.  Certainly, Alzheimer's disease is not the same as the kind of 
partial brains that I talk about in my paper, which maintain the same inputs as 
they would have within a full normal brain.

 Are you really considering "something" that I can realistically map to my own 
 1st person experience or could it be merely some abstract idea.

That brings in the 'hard problem' discussion, which has been brought up on this 
list recently and which I have also been thinking about recently.  I won't 
attempt to answer it right now.  I will say that ALL approaches (eliminativism, 
reductionism, epiphenomenal dualism, interactionist dualism, and idealism) seem 
to have severe problems.  'None of the above' is no better as the list seems 
exhaustive.  In any case, if my work sheds light on only some of the approaches 
that is still progress.

BTW, I replied to Bruno and the reply appeared on Google groups but I don't 
think I got a copy in my email so I am putting a copy of what I posted here:

--- On Fri, 2/12/10, Bruno Marchal marc...@ulb.ac.be wrote:
 Jack Mallah wrote:
 --- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be
   MGA is more general (and older).
   The only way to escape the conclusion would be to attribute consciousness 
   to a movie of a computation
 
  That's not true.  For partial replacement scenarios, where part of a brain 
  has counterfactuals and the rest doesn't, see my partial brain paper: 
  http://cogprints.org/6321/

 It is not a question of true or false, but of presenting a valid or non valid 
 deduction.

What is false is your statement that The only way to escape the conclusion 
would be to attribute consciousness to a movie of a computation.  So your 
argument is not valid.

 I don't see anything in your comment or links which prevents the conclusions 
 being reached from the assumptions. If you think so, tell me at which 
 step, and provide a justification.

Bruno, I don't intend to be drawn into a detailed discussion of your arguments 
at this time.  The key idea though is that a movie could replace a computer 
brain.  The strongest argument for that is that you could gradually replace the 
components of the computer (which have the standard counterfactual (if-then) 
functioning) with components that only play out a pre-recorded script or which 
behave correctly by luck.  You could then invoke the 'fading qualia' argument 
(qualia could plausibly not vanish either suddenly or by gradually fading as 
the replacement proceeds) to argue that this makes no difference to the 
consciousness.  My partial brain paper shows that the 'fading qualia' argument 
is invalid.

I think there was also a claim that counterfactual sensitivity amounts to 
'prescience' but that makes no sense and I'm pretty sure that no one (even 
those who accept the rest of your arguments) agrees with you on that.  
Counterfactual behaviors are properties of the overall system and are 
mathematically defined.

 Jack Mallah wrote:
  It could be physicalist or platonist - mathematical systems can implement 
  computations if they exist in a strong enough (Platonic) sense.  I am 
  agnostic on Platonism.
 
 This contradicts your definition of computationalism given in your papers.
 I quote your glossary: Computationalism:  The philosophical belief that 
 consciousness arises as a result of implementation of computations by 
 physical systems. 

It's true that I didn't mention Platonism in that glossary entry (in the MCI 
paper), which was an oversight, but not a big deal given that the paper was 
aimed at physicists.  The paper has plenty of jobs to do already, and 
championing the possibility of the Everything Hypothesis was not the focus.

On p. 14 of the MCI paper I wrote A computation can be implemented by a 
physical system which shares appropriate features with it, or (in an analogous 
way) by another computation.  If a computation exists in a Platonic sense, 
then it could implement other computations.

On p. 46 of the paper I briefly discussed the All-Universes Hypothesis.  That 
should leave no doubt as to my position.




  




RE: problem of size '10

2010-02-14 Thread Stephen P. King
Hi Jack,

 

On reading the first page of your paper a thought occurred to
me. What actually happens in the case of progressive Alzheimer's disease is
a bit different from the idea that I get from the discussion. It could be
that there is a problem with the unstated premise that consciousness is a
quantity/quality that can be increased or decreased, like the volume of the
music that I'm listening to as I write this. I have a family member that
cares for the elderly and there is a consistent pattern of phenomena
associated with the degradation of the brain that does not resemble anything
like that which is considered as consciousness.  This question equally
applies to D. Chalmers. Are you really considering "something" that I can 
realistically map to my own 1st person experience or could it be merely some
abstract idea. 

I remember the joke about spherical cows, could this be
happening here? Seriously! 

 

Onward!

 

Stephen

 

 

 

From: everything-list@googlegroups.com
[mailto:everything-l...@googlegroups.com] On Behalf Of Bruno Marchal
Sent: Friday, February 12, 2010 11:39 AM
To: everything-list@googlegroups.com
Subject: Re: problem of size '10

 

On 11 Feb 2010, at 17:14, Jack Mallah wrote:





--- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be wrote:



A little thin brain would produce a zombie?


Even if size affects measure, a zombie is not a brain with low measure; it's
a brain with zero measure.  So the answer is obviously no - it would not be
a zombie.  Stop abusing the language.

We know that small terms in the wavefunction have low measure.  I would not
call these terms 'zombies'.  Many small terms together can equal or exceed
the measure of big terms.




MGA is more general (and older). The only way to escape the conclusion would
be to attribute consciousness to a movie of a computation


That's not true.  For partial replacement scenarios, where part of a brain
has counterfactuals and the rest doesn't, see my partial brain paper:
http://cogprints.org/6321/

 

 

 

It is not a question of true or false, but of presenting a valid or non
valid deduction. I don't see anything in your comment or links which
 prevents the conclusions being reached from the assumptions. If you think 
so, tell me at which step, and provide a justification.

 

You may read the archives for recent presentations, or my papers, and
eventually point to something you don't understand.

 

snip




Re: problem of size '10

2010-02-13 Thread Jack Mallah
--- On Fri, 2/12/10, Bruno Marchal marc...@ulb.ac.be wrote:
 Jack Mallah wrote:
 --- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be
   MGA is more general (and older).
   The only way to escape the conclusion would be to attribute consciousness 
   to a movie of a computation
 
  That's not true.  For partial replacement scenarios, where part of a brain 
  has counterfactuals and the rest doesn't, see my partial brain paper: 
  http://cogprints.org/6321/

 It is not a question of true or false, but of presenting a valid or non valid 
 deduction.

What is false is your statement that The only way to escape the conclusion 
would be to attribute consciousness to a movie of a computation.  So your 
argument is not valid.

 I don't see anything in your comment or links which prevents the conclusions 
 being reached from the assumptions. If you think so, tell me at which 
 step, and provide a justification.

Bruno, I don't intend to be drawn into a detailed discussion of your arguments 
at this time.  The key idea though is that a movie could replace a computer 
brain.  The strongest argument for that is that you could gradually replace the 
components of the computer (which have the standard counterfactual (if-then) 
functioning) with components that only play out a pre-recorded script or which 
behave correctly by luck.  You could then invoke the 'fading qualia' argument 
(qualia could plausibly not vanish either suddenly or by gradually fading as 
the replacement proceeds) to argue that this makes no difference to the 
consciousness.  My partial brain paper shows that the 'fading qualia' argument 
is invalid.

I think there was also a claim that counterfactual sensitivity amounts to 
'prescience' but that makes no sense and I'm pretty sure that no one (even 
those who accept the rest of your arguments) agrees with you on that.  
Counterfactual behaviors are properties of the overall system and are 
mathematically defined.

 Jack Mallah wrote:
  It could be physicalist or platonist - mathematical systems can implement 
  computations if they exist in a strong enough (Platonic) sense.  I am 
  agnostic on Platonism.
 
 This contradicts your definition of computationalism given in your papers.
 I quote your glossary: Computationalism:  The philosophical belief that 
 consciousness arises as a result of implementation of computations by 
 physical systems. 

It's true that I didn't mention Platonism in that glossary entry (in the MCI 
paper), which was an oversight, but not a big deal given that the paper was 
aimed at physicists.  The paper has plenty of jobs to do already, and 
championing the possibility of the Everything Hypothesis was not the focus.

On p. 14 of the MCI paper I wrote A computation can be implemented by a 
physical system which shares appropriate features with it, or (in an analogous 
way) by another computation.  If a computation exists in a Platonic sense, 
then it could implement other computations.

On p. 46 of the paper I briefly discussed the All-Universes Hypothesis.  That 
should leave no doubt as to my position.




  




Re: problem of size '10

2010-02-13 Thread Bruno Marchal


On 13 Feb 2010, at 19:48, Jack Mallah wrote:


--- On Fri, 2/12/10, Bruno Marchal marc...@ulb.ac.be wrote:

Jack Mallah wrote:
--- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be

MGA is more general (and older).
The only way to escape the conclusion would be to attribute  
consciousness to a movie of a computation


That's not true.  For partial replacement scenarios, where part of  
a brain has counterfactuals and the rest doesn't, see my partial  
brain paper: http://cogprints.org/6321/


It is not a question of true or false, but of presenting a valid or  
non valid deduction.


What is false is your statement that The only way to escape the  
conclusion would be to attribute consciousness to a movie of a  
computation.  So your argument is not valid.


OK. I was talking in a context which is missing. You could also conclude
that the neurons have prescience, for example. The point is that if you
assume the physical supervenience thesis, you have to abandon comp and/or
introduce a magical (non Turing emulable) property in matter.








I don't see anything in your comment or links which prevents the  
conclusions being reached from the assumptions. If you think so,
tell me at which step, and provide a justification.


Bruno, I don't intend to be drawn into a detailed discussion of your  
arguments at this time.  The key idea though is that a movie could  
replace a computer brain.


What do you mean? I guess you mean: COMP + phys. supervenience entails  
that a movie can replace a computer brain.




The strongest argument for that is that you could gradually replace  
the components of the computer (which have the standard  
counterfactual (if-then) functioning) with components that only play  
out a pre-recorded script or which behave correctly by luck.


OK.



You could then invoke the 'fading qualia' argument (qualia could  
plausibly not vanish either suddenly or by gradually fading as the  
replacement proceeds) to argue that this makes no difference to the  
consciousness.  My partial brain paper shows that the 'fading  
qualia' argument is invalid.


I may be OK with your point, but I am not using the 'fading qualia'  
argument. Just the physical supervenience (to show it absurd by  
'reductio').





I think there was also a claim that counterfactual sensitivity  
amounts to 'prescience' but that makes no sense and I'm pretty sure  
that no one (even those who accept the rest of your arguments)  
agrees with you on that.


It is a reasoning by reductio ad absurdum. If you agree (with any
computationalist) that we cannot attribute prescience to the neurons,
then the physical activity of the brain is the same as the physical
activity of the movie, so that physical supervenience + comp entails
that the consciousness supervenes on the movie (and this is absurd,
mainly because the movie does not compute anything).





Counterfactual behaviors are properties of the overall system and  
are mathematically defined.


But that is the point: the counterfactuals are in the math. Not in the  
physical activity. That is why comp forces the computational  
supervenience. But then the appearance of the physical world(s) will
eventually need to be recovered from the mathematical computation.







Jack Mallah wrote:
It could be physicalist or platonist - mathematical systems can  
implement computations if they exist in a strong enough (Platonic)  
sense.  I am agnostic on Platonism.


This contradicts your definition of computationalism given in your  
papers.
I quote your glossary: Computationalism:  The philosophical  
belief that consciousness arises as a result of implementation of  
computations by physical systems. 


It's true that I didn't mention Platonism in that glossary entry (in  
the MCI paper), which was an oversight, but not a big deal given  
that the paper was aimed at physicists.  The paper has plenty of  
jobs to do already, and championing the possibility of the  
Everything Hypothesis was not the focus.


I don't follow you. The point is not the everything hypothesis, just  
that the movie graph makes consciousness supervene on the computations  
(in their usual mathematical meaning) and not on the physical  
activity, which has to be redefined from the structures of the many  
computations.






On p. 14 of the MCI paper I wrote A computation can be  
implemented by a physical system which shares appropriate features  
with it, or (in an analogous way) by another computation.  If a  
computation exists in a Platonic sense, then it could implement  
other computations.
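
For what it is worth, a toy illustration (my own sketch, not taken from the MCI paper) of one computation implementing another: the Python evaluation of run() is one computation, and the little register program it interprets is a second computation implemented by it.

    def run(program, x):
        """Interpret a tiny instruction set ('inc', 'dec', 'halt') on input x."""
        pc, reg = 0, x
        while True:
            op = program[pc]
            if op == "inc":
                reg += 1
            elif op == "dec":
                reg -= 1
            elif op == "halt":
                return reg
            pc += 1

    add_two = ["inc", "inc", "halt"]
    assert run(add_two, 5) == 7  # the interpreted computation maps 5 to 7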


But here you don't distinguish the first and third person points of  
view. No universal (mathematical) machine can know which computations  
bear it, and its next personal state has to be recovered from the  
statistics on the possible relative  continuations.


Bruno Marchal
http://iridia.ulb.ac.be/~marchal/




Re: problem of size '10

2010-02-12 Thread Bruno Marchal

On 11 Feb 2010, at 17:14, Jack Mallah wrote:


--- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be wrote:

A little thin brain would produce a zombie?


Even if size affects measure, a zombie is not a brain with low  
measure; it's a brain with zero measure.  So the answer is obviously  
no - it would not be a zombie.  Stop abusing the language.


We know that small terms in the wavefunction have low measure.  I  
would not call these terms 'zombies'.  Many small terms together can  
equal or exceed the measure of big terms.


MGA is more general (and older). The only way to escape the  
conclusion would be to attribute consciousness to a movie of a  
computation


That's not true.  For partial replacement scenarios, where part of a  
brain has counterfactuals and the rest doesn't, see my partial brain  
paper: http://cogprints.org/6321/








It is not a question of true or false, but of presenting a valid or  
invalid deduction. I don't see anything in your comment or links  
which prevents the conclusions from being reached from the assumptions.  
If you think so, tell me at which step, and provide a justification.


You may read the archives for the recent new presentation, or my papers,  
and then point to anything you don't understand.








 What you call computationalism is a form of physicalist  
computationalism.


Not true.  It could be physicalist or platonist - mathematical  
systems can implement computations if they exist in a strong enough  
(Platonic) sense.  I am agnostic on Platonism.





This contradicts your definition of computationalism given in your  
papers. I no longer have a clue about your assumptions, nor about what  
you mean by computation.


I quote your glossary: Computationalism: The philosophical belief  
that consciousness arises as a result of implementation of  
computations by physical systems. (my emphasis)




http://iridia.ulb.ac.be/~marchal/






Re: problem of size '10

2010-02-11 Thread Bruno Marchal


On 11 Feb 2010, at 06:46, Jack Mallah wrote:

It's been a very busy week. I will reply to the measure thread  
(which is actually more important) but that could be in a few days.


--- On Thu, 1/28/10, Jason Resch jasonre...@gmail.com wrote:
What about if half of your neurons were 1/2 their normal size, and  
the other half were twice their normal size?  How would this be  
predicted to affect your measure?


If it had any effect - and as I said, I don't think it would in a QM  
universe - I guess it would decrease the measure of part of your  
brain and increase that of the other part.  That may sound weird but  
it's certainly possible for one part of a parallel computation to  
have more measure than the rest, which can be done by duplicating  
only that part of the brain.  See my paper on partial brains:


http://cogprints.org/6321/

--- On Thu, 1/28/10, Stathis Papaioannou stath...@gmail.com wrote:
Do you think that simply doubling up the size of electronic  
components (much easier to do than making brains bigger) would  
double measure?


The effect should be the same for brains or electronics.

You could then flick the switch and alternate between two separate  
but parallel circuits or one circuit. Would flicking the switch  
cause a doubling/halving of measure?


If the circuits don't interact, then it is two separate  
implementations, and measure would double.  If they do interact, we  
are back to 'big components' which as I said could go either way.


Would it be tantamount to killing one of the consciousnesses every  
time you did it?


Basically.  Killing usually implies an irreversible process;  
otherwise, someone is liable to come along and flick the switch  
back, so it's more like knocking someone out.  If the measure is  
halved and then you break the switch so it can't go back, that would  
be, yes.


--- On Thu, 1/28/10, Bruno Marchal marc...@ulb.ac.be wrote:

Does the size of the components affect the computation?


Other than measure, the implemented computation would be the same,  
at least for the cases that matter.



So, the behavior would not change, but the consciousness would be  
different? A little thin brain would produce a zombie?






I don't assume the quantum stuff. It is what I want to understand.  
I gave an argument showing that if we assume computationalism, then  
we have to derive physics from (classical) computer science


Of course I know about your argument. It's false.



I guess you mean invalid. What is invalid in the reasoning?  Have you  
followed last year's new exposition of MGA (the Movie Graph Argument)  
on this list? I eventually understood that we don't need to use the  
counterfactual analysis à la Maudlin. MGA is more general (and older).  
The only way to escape the conclusion would be to attribute  
consciousness to a movie of a computation, but this forces one to confuse  
a computation (a relation between numbers, or combinators) with a  
description of a computation (like a Gödel number of a (finite) piece  
of a computation). Those things are related, but different.
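
A small sketch of that distinction (mine, purely illustrative): the first call below carries out a computation, step by step, while the string on the last line is only a description of that computation; it can be stored or projected like a movie, but by itself it computes nothing.

    def collatz_run(n):
        """Carry out the computation: iterate the Collatz map, recording each state."""
        states = [n]
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            states.append(n)
        return states

    trace = collatz_run(6)     # the computation passes through 6, 3, 10, 5, 16, 8, 4, 2, 1
    description = repr(trace)  # a description of it: just a string, like a film of the run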






You wrote convincing posts on the implementation problem. I  
thought, and still think, that you understood that there is no  
obvious way to attribute a computation to a physical process. With  
strict criteria we get nothing, with weak criteria even a rock  
thinks.


The implementation problem is: Given a physical or mathematical  
system, does it implement a given computation?  As you say, if the  
answer is always yes - as it is on a naive definition of  
implementation - then computationalism can not work.
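
A toy version of why the naive definition is too weak (my own sketch, not Mallah's CSSA criterion): any sequence of distinct physical states can be paired off with the state trace of any computation for a single run, so on that criterion even a rock 'implements' it; what the pairing leaves out is the counterfactual structure, i.e. what the rock would have done on other inputs.

    # Arbitrary successive microstates of a rock, and the trace of a running-sum computation.
    rock_states = ["s0", "s1", "s2", "s3"]
    adder_trace = [(0, 0), (1, 1), (2, 3), (3, 6)]

    # The naive criterion: just pair the states off; one run, no counterfactuals.
    naive_mapping = dict(zip(rock_states, adder_trace))
    print(naive_mapping)  # {'s0': (0, 0), 's1': (1, 1), 's2': (2, 3), 's3': (3, 6)}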


This was an important problem - which I presented a solution for in  
my '07 MCI paper:


http://arxiv.org/abs/0709.0544



What you call computationalism is a form of physicalist  
computationalism. The Movie Graph Argument shows it is inconsistent  
with yes doctor + Church's thesis, and yes doctor follows from the  
physicalist computationalism assumption.
And as you know, physicalist computationalism raises the  
implementation problem.






So I now consider it a solved problem, using my CSSA framework.  The  
solution presented there does need a bit of refinement and I plan to  
write up a separate paper to present it more clearly and hopefully  
get some attention for it, but the main ideas are there.


But that's only half the story.  There is still the measure problem:  
Given that a system does implement some set of computations, what is  
the measure for each?  Without the answer to that, you can't predict  
what a typical observer would see.  This problem remains unsolved  
(though I do have proposals in the paper) and relates to the problem  
of size.



The measure is determined, relative to the universal machine, by the  
set of the maximal consistent extensions of its beliefs. The  
Gödel-Löb-Solovay logics of self-reference provide the math, and  
explain how the coupling between consciousness and realities emerges  
from the numbers.
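
For reference (a standard fact about the logic being named, separate from how it is put to work here): GL, the Gödel-Löb provability logic whose arithmetical completeness Solovay proved, is axiomatized by all propositional tautologies together with

    K:    \Box(p \to q) \to (\Box p \to \Box q)
    Löb:  \Box(\Box p \to p) \to \Box p

closed under modus ponens and necessitation, with \Box read as formal provability (e.g. in Peano arithmetic).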


-- Bruno Marchal



Re: problem of size '10

2010-02-11 Thread Jack Mallah
--- On Thu, 2/11/10, Bruno Marchal marc...@ulb.ac.be wrote:
 A little thin brain would produce a zombie?

Even if size affects measure, a zombie is not a brain with low measure; it's a 
brain with zero measure.  So the answer is obviously no - it would not be a 
zombie.  Stop abusing the language.

We know that small terms in the wavefunction have low measure.  I would not 
call these terms 'zombies'.  Many small terms together can equal or exceed the 
measure of big terms.
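
A worked example of that last point, assuming Born-rule weights (measure proportional to squared amplitude) and leaving the amplitudes unnormalized for simplicity: a single branch of amplitude 0.3 carries weight

    0.3^2 = 0.09,

while ten mutually orthogonal branches of amplitude 0.1 each carry

    10 \times 0.1^2 = 0.10 > 0.09,

so the ten small terms together outweigh the one big term.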

 MGA is more general (and older). The only way to escape the conclusion would 
 be to attribute consciousness to a movie of a computation

That's not true.  For partial replacement scenarios, where part of a brain has 
counterfactuals and the rest doesn't, see my partial brain paper: 
http://cogprints.org/6321/

 What you call computationalism is a form of physicalist computationalism.

Not true.  It could be physicalist or platonist - mathematical systems can 
implement computations if they exist in a strong enough (Platonic) sense.  I am 
agnostic on Platonism.

 The measure is determined, relative to the universal machine, by the set of 
 the maximal consistent extensions of its beliefs.

Also not true.  That's just your idea for how it should be done, which stems 
from your false beliefs in QTI.




  




Re: problem of size '10

2010-02-10 Thread Jack Mallah
It's been a very busy week. I will reply to the measure thread (which is 
actually more important) but that could be in a few days.

--- On Thu, 1/28/10, Jason Resch jasonre...@gmail.com wrote:
 What about if half of your neurons were 1/2 their normal size, and the other 
 half were twice their normal size?  How would this be predicted to affect 
 your measure?

If it had any effect - and as I said, I don't think it would in a QM universe - 
I guess it would decrease the measure of part of your brain and increase that 
of the other part.  That may sound weird but it's certainly possible for one 
part of a parallel computation to have more measure than the rest, which can be 
done by duplicating only that part of the brain.  See my paper on partial 
brains:

http://cogprints.org/6321/

--- On Thu, 1/28/10, Stathis Papaioannou stath...@gmail.com wrote:
 Do you think that simply doubling up the size of electronic components (much 
 easier to do than making brains bigger) would double measure?

The effect should be the same for brains or electronics.

 You could then flick the switch and alternate between two separate but 
 parallel circuits or one circuit. Would flicking the switch cause a 
 doubling/halving of measure? 

If the circuits don't interact, then it is two separate implementations, and 
measure would double.  If they do interact, we are back to 'big components' 
which as I said could go either way.

 Would it be tantamount to killing one of the consciousnesses every time you 
 did it?

Basically.  Killing usually implies an irreversible process; otherwise, someone 
is liable to come along and flick the switch back, so it's more like knocking 
someone out.  If the measure is halved and then you break the switch so it 
can't go back, that would be, yes.

--- On Thu, 1/28/10, Bruno Marchal marc...@ulb.ac.be wrote:
 Does the size of the components affect the computation?

Other than measure, the implemented computation would be the same, at least for 
the cases that matter.

 I don't assume the quantum stuff. It is what I want to understand. I gave an 
 argument showing that if we assume computationalism, then we have to derive 
 physics from (classical) computer science

Of course I know about your argument. It's false.

 You wrote convincing posts on the implementation problem. I thought, and 
 still think, that you understood that there is no obvious way to attribute a 
 computation to a physical process. With strict criteria we get nothing, with 
 weak criteria even a rock thinks.

The implementation problem is: Given a physical or mathematical system, does it 
implement a given computation?  As you say, if the answer is always yes - as 
it is on a naive definition of implementation - then computationalism can not 
work.

This was an important problem - which I presented a solution for in my '07 MCI 
paper:

http://arxiv.org/abs/0709.0544

So I now consider it a solved problem, using my CSSA framework.  The solution 
presented there does need a bit of refinement and I plan to write up a separate 
paper to present it more clearly and hopefully get some attention for it, but 
the main ideas are there.

But that's only half the story.  There is still the measure problem: Given that 
a system does implement some set of computations, what is the measure for each? 
 Without the answer to that, you can't predict what a typical observer would 
see.  This problem remains unsolved (though I do have proposals in the paper) 
and relates to the problem of size.




  




Re: problem of size '10

2010-01-28 Thread Stathis Papaioannou
On 28 January 2010 12:46, Jack Mallah jackmal...@yahoo.com wrote:
 I'm replying to this bit separately since Bruno touched on a different issue 
 than the others have.  My reply to the main measure again '10 thread will 
 follow under the original title.

 --- On Wed, 1/27/10, Bruno Marchal marc...@ulb.ac.be wrote:
 I would also not say yes to a computationalist doctor, because my 
 consciousness will be related to the diameter of the simulated neurons, or 
 to the redundancy of the gates, etc.  (and this even though the behavior remains 
 unaffected). This also entails the existence of zombies. If the neurons are 
 very thin, my absolute measure can be made quasi-null, even though my 
 behavior again remains unaffected.

 This relates to what I call the 'problem of size', namely: Does the size of 
 the components affect the measure?  The answer is not obvious.

 My belief is that, given that it is all made of quantum stuff, the size will 
 not matter - because the set of quantum variables involved actually doesn't 
 change if you leave some of them out of the computer - they are still 
 parameters of the overall system.

 But there is an important and obvious way in which size does matter - the 
 size of the amplitude of the wavefunction, the square of which is 
 proportional to measure according to the Born Rule.

 I would say that if we really had a classical world and made a computer out 
 of classical water waves, the measure might be proportional to the square 
 of the amplitude of those waves.  I don't know - I have different proposals 
 for how the actual Born Rule comes about, and depending on how it works, it 
 could come out either way.

 I don't think there is any experimental evidence that size matters.  But some 
 might disagree.  If they do, there are a few points they could make:

 - Maybe big brains have more measure.  This could help explain why we are men 
 and not mice.

 - Maybe in the future, people will upload their brains into micro-electronic 
 systems.  If those have small measure, it could explain the Doomsday 
 argument - if the future people have low measure, it makes sense that we are 
 not in that era.

 - Maybe neural pathways that receive more reinforcement get bigger and give 
 rise to more measure.  This could result in increased effective probability 
 to observe more coincidences in your life than would be expected by chance.  
 Now, coincidences often are noticed by us and we tend to think there are 
 many.  I think this has more to do with psychology than physics - but who 
 knows?

Do you think that simply doubling up the size of electronic components
(much easier to do than making brains bigger) would double measure?
For example, you could make the copper tracks on a circuit board twice
as thick, put two transistors in parallel rather than one, double the
surface area as well as the separation of the plates in the
capacitors, and so on. It would take a bit of design effort, but you
could make a circuit where every component was doubled up and
connected by a wire bridge with a switch, and all the switches
controlled by one master switch. You could then flick the switch and
alternate between two separate but parallel circuits or one circuit.
Would flicking the switch cause a doubling/halving of measure? Would
it be tantamount to killing one of the consciousnesses every time you
did it?
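
(On the capacitor detail: for an ideal parallel-plate capacitor, C = \varepsilon A / d, so doubling both the plate area A and the plate separation d leaves the capacitance, and hence the circuit's electrical behaviour, unchanged even though the component gets physically bigger.)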


-- 
Stathis Papaioannou




Re: problem of size '10

2010-01-28 Thread Bruno Marchal


On 28 Jan 2010, at 02:46, Jack Mallah wrote:

I'm replying to this bit separately since Bruno touched on a  
different issue than the others have.  My reply to the main measure  
again '10 thread will follow under the original title.


--- On Wed, 1/27/10, Bruno Marchal marc...@ulb.ac.be wrote:
I would also not say yes to a computationalist doctor, because my  
consciousness will be related to the diameter of the simulated  
neurons, or to the redundancy of the gates, etc.  (and this even though  
the behavior remains unaffected). This also entails the existence  
of zombies. If the neurons are very thin, my absolute measure can  
be made quasi-null, even though my behavior again remains unaffected.


This relates to what I call the 'problem of size', namely: Does the  
size of the components affect the measure?  The answer is not obvious.



Does the size of the components affect the computation?




My belief is that, given that it is all made of quantum stuff, the  
size will not matter - because the set of quantum variables involved  
actually doesn't change if you leave some of them out of the  
computer - they are still parameters of the overall system.


I don't assume the quantum stuff. It is what I want to understand. I  
gave an argument showing that if we assume computationalism, then we  
have to derive physics from (classical) computer science if we want to  
not annihilate the chance to progress on the consciousness/reality  
riddle.






But there is an important and obvious way in which size does matter  
- the size of the amplitude of the wavefunction, the square of which  
is proportional to measure according to the Born Rule.


I would say that if we really had a classical world and made a  
computer out of classical water waves, the measure might be  
proportional to the square of the amplitude of those waves.  I don't  
know - I have different proposals for how the actual Born Rule comes  
about, and depending on how it works, it could come out either way.


I don't think there is any experimental evidence that size matters.   
But some might disagree.  If they do, there are a few points they  
could make:


- Maybe big brains have more measure.  This could help explain why  
we are men and not mice.


- Maybe in the future, people will upload their brains into micro- 
electronic systems.  If those have small measure, it could explain  
the Doomsday argument - if the future people have low measure, it  
makes sense that we are not in that era.


- Maybe neural pathways that receive more reinforcement get bigger  
and give rise to more measure.  This could result in increased  
effective probability to observe more coincidences in your life  
than would be expected by chance.  Now, coincidences often are  
noticed by us and we tend to think there are many.  I think this has  
more to do with psychology than physics - but who knows?







You wrote convincing posts on the implementation problem. I thought,  
and still think, that you understood that there is no obvious way to  
attribute a computation to a physical process. With strict criteria we  
get nothing, with weak criteria even a rock thinks.
But comp, at least as I understand it, attributes consciousness to  
computations, in the digital sense made precise by mathematicians. In  
that case we get a theory of mind: indeed, the theory of what the  
universal machines (computers, interpreters, ...) are able to prove,  
and to bet on, about themselves and anything else. We just don't try to  
ascribe consciousness to anything material or observable. It is a  
mathematical phenomenon which appears when a universal machine  
observes itself. It accelerates with two universal machines in front  
of each other, and admits innumerable n-couplings.


Digital Mechanism makes Kant right, I think, in that time and space  
belong to the category of mind, with mind being arithmetic as viewed  
from the average universal machine inside. It is idealism, but it is  
realism too, with respect to the elementary arithmetical truth,  
including computer science and the computer's computer science. And so, it  
is precise and testable, thanks to the hard work of Gödel, Löb and  
many others.


To understand this you have to be agnostic, not just about the  
Creator, but also about the Creation. I hope you are not religious  
about materialism or physicalism.


Arithmetic, through comp, determines a (very vast and intricate) web  
of relative consistent (n-person) histories, and it is an open problem  
whether that web coheres enough to determine a unique (in which sense?)  
physical reality.  It is probably simpler to (re)define physics as what is  
invariant for the universal computable base.  Computer scientists say  
machine independent.  No doubt we share deep computations, which are  
made stable below our substitution level by multiplications on the  
reals, or some ring. Comp is Church's thesis, mainly, plus the idea that  
we are Turing emulable.



Bruno Marchal



Re: problem of size '10

2010-01-27 Thread Jack Mallah
I'm replying to this bit separately since Bruno touched on a different issue 
than the others have.  My reply to the main measure again '10 thread will 
follow under the original title.

--- On Wed, 1/27/10, Bruno Marchal marc...@ulb.ac.be wrote:
 I would also not say yes to a computationalist doctor, because my 
 consciousness will be related to the diameter of the simulated neurons, or to 
 the redundancy of the gates, etc.  (and this even though the behavior remains 
 unaffected). This also entails the existence of zombies. If the neurons are 
 very thin, my absolute measure can be made quasi-null, even though my behavior 
 again remains unaffected.

This relates to what I call the 'problem of size', namely: Does the size of the 
components affect the measure?  The answer is not obvious.

My belief is that, given that it is all made of quantum stuff, the size will 
not matter - because the set of quantum variables involved actually doesn't 
change if you leave some of them out of the computer - they are still 
parameters of the overall system.

But there is an important and obvious way in which size does matter - the size 
of the amplitude of the wavefunction, the square of which is proportional to 
measure according to the Born Rule.
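
Spelled out, the standard statement of the rule being invoked: if the state is decomposed over orthonormal branches as

    |\psi\rangle = \sum_i c_i |\phi_i\rangle,

then the measure (probability weight) attached to branch i is proportional to |c_i|^2, the squared amplitude.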

I would say that if we really had a classical world and made a computer out of 
classical water waves, the measure might be proportional to the square of the 
amplitude of those waves.  I don't know - I have different proposals for how 
the actual Born Rule comes about, and depending on how it works, it could come 
out either way.

I don't think there is any experimental evidence that size matters.  But some 
might disagree.  If they do, there are a few points they could make:

- Maybe big brains have more measure.  This could help explain why we are men 
and not mice.

- Maybe in the future, people will upload their brains into micro-electronic 
systems.  If those have small measure, it could explain the Doomsday argument 
- if the future people have low measure, it makes sense that we are not in that 
era.

- Maybe neural pathways that receive more reinforcement get bigger and give 
rise to more measure.  This could result in increased effective probability to 
observe more coincidences in your life than would be expected by chance.  Now, 
coincidences often are noticed by us and we tend to think there are many.  I 
think this has more to do with psychology than physics - but who knows?




  




Re: problem of size '10

2010-01-27 Thread Jason Resch
On Wed, Jan 27, 2010 at 7:46 PM, Jack Mallah jackmal...@yahoo.com wrote:

 I'm replying to this bit separately since Bruno touched on a different
 issue than the others have.  My reply to the main measure again '10 thread
 will follow under the original title.

 --- On Wed, 1/27/10, Bruno Marchal marc...@ulb.ac.be wrote:
  I would also not say yes to a computationalist doctor, because my
 consciousness will be related to the diameter of the simulated neurons, or
 to the redundancy of the gates, etc.  (and this even though the behavior remains
 unaffected). This also entails the existence of zombies. If the neurons are
 very thin, my absolute measure can be made quasi-null, even though my
 behavior again remains unaffected.


What about if half of your neurons were 1/2 their normal size, and the other
half were twice their normal size?  How would this be predicted to affect
your measure?

What about beings who have higher resolution senses, and thus a greater
possibility for variation in senses due to the higher number of possible
states?



 This relates to what I call the 'problem of size', namely: Does the size of
 the components affect the measure?  The answer is not obvious.


It is an interesting question I hadn't considered.  Given the relative state
interpretation, how large is the system really?  Is it bounded by one's
skull, one's nerve cells, one's light cone?


 My belief is that, given that it is all made of quantum stuff, the size
 will not matter - because the set of quantum variables involved actually
 doesn't change if you leave some of them out of the computer - they are
 still parameters of the overall system.

 But there is an important and obvious way in which size does matter - the
 size of the amplitude of the wavefunction, the square of which is
 proportional to measure according to the Born Rule.

 I would say that if we really had a classical world and made a computer out
 of classical water waves, the measure might be proportional to the square
 of the amplitude of those waves.  I don't know - I have different proposals
 for how the actual Born Rule comes about, and depending on how it works, it
 could come out either way.

 I don't think there is any experimental evidence that size matters.  But
 some might disagree.  If they do, there are a few points they could make:

 - Maybe big brains have more measure.  This could help explain why we are
 men and not mice.


But the mice are mice, and would admit as much if you asked one and it could
respond.  We're also not whales.



 - Maybe in the future, people will upload their brains into
 micro-electronic systems.  If those have small measure, it could explain the
 Doomsday argument - if the future people have low measure, it makes sense
 that we are not in that era.


Maybe we are already in that era.  Also given we would be effectively
immortal, in the long run, the experiences of uploaded minds should greatly
outweigh organic ones, if they engage in game-worlds for leisure.



 - Maybe neural pathways that receive more reinforcement get bigger and give
 rise to more measure.  This could result in increased effective probability
 to observe more coincidences in your life than would be expected by chance.
  Now, coincidences often are noticed by us and we tend to think there are
 many.  I think this has more to do with psychology than physics - but who
 knows?



