Re: Jack's partial brain paper

2010-03-17 Thread Stathis Papaioannou
On 17 March 2010 05:29, Brent Meeker meeke...@dslextreme.com wrote:

 I think this is a dubious argument based on our lack of understanding of
 qualia.  Presumably one has many thoughts that do not result in any overt
 action.  So if I lost a few neurons (which I do continuously) it might mean
 that there are some thoughts I don't have or some associations I don't make,
 so eventually I may fade to the level of consciousness of my dog.  Is my
 dog a partial zombie?

It's certainly possible that qualia can fade without the subject
noticing, either because the change is slow and gradual or because the
change fortuitously causes a cognitive deficit as well. But this is not
what the fading qualia argument is about. The argument requires
consideration of a brain change which would cause an unequivocal
change in consciousness, such as a removal of the subject's occipital
lobes. If this happened, the subject would go completely blind: he
would be unable to describe anything placed in front of his eyes, and
he would report that he could not see anything at all. That's what it
means to go blind. But now consider the case where the occipital lobes
are replaced with a black box that reproduces the I/O behaviour of the
occipital lobes, but which is postulated to lack visual qualia. The
rest of the subject's brain is intact and is forced to behave exactly
as it would if the change had not been made, since it is receiving
normal inputs from the black box. So the subject will correctly
describe anything placed in front of him, and he will report that
everything looks perfectly normal. More than that, he will have an
appropriate emotional response to what he sees, be able to paint it or
write poetry about it, make a working model of it from an image he
retains in his mind: whatever he would normally do if he saw
something. And yet, he would be a partial zombie: he would behave
exactly as if he had normal visual qualia while completely lacking
visual qualia. Now it is part of the definition of a full zombie that
it doesn't understand that it is blind, since a requirement for
zombiehood is that it doesn't understand anything at all, it just
behaves as if it does. But if the idea of qualia is meaningful at all,
you would think that a sudden drastic change like going blind should
produce some realisation in a cognitively intact subject; otherwise
how do we know that we aren't blind now, and what reason would we have
to prefer normal vision to zombie vision? The conclusion is that it
isn't possible to make a device that replicates brain function but
lacks qualia: either it is not possible to make such a device at all
because the brain is not computable, or if such a device could be made
(even a magical one) then it would necessarily reproduce the qualia as
well.
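
To make "reproduces the I/O behaviour" concrete, here is a minimal
sketch (toy names and toy signals, nothing anatomical is being modelled):
two modules with different internal realizations but an identical
input/output mapping, and a downstream consumer whose behaviour therefore
cannot depend on which module it is wired to.

```python
def biological_module(signal):
    """Original component: maps its input to a downstream output."""
    return signal.upper()

def black_box_module(signal):
    """Replacement with a different internal realization (postulated to
    lack qualia) but exactly the same input/output mapping."""
    return "".join(chr(ord(c) - 32) if "a" <= c <= "z" else c for c in signal)

def rest_of_brain(module, signal):
    """Downstream processing sees only the module's outputs, so its
    behaviour is fixed entirely by the module's I/O mapping."""
    return "I report seeing " + module(signal)

for s in ["red ball", "green cup", "blue pyramid"]:
    assert rest_of_brain(biological_module, s) == rest_of_brain(black_box_module, s)
print("the rest of the system cannot tell the two modules apart")
```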

 I think the question of whether there could be a philosophical zombie is ill
 posed because we don't know what is responsible for qualia.  I speculate
 that they are tags of importance or value that get attached to perceptions
 so that they are stored in short term memory.  Then, because evolution
 cannot redesign things, the same tags are used for internal thoughts that
 seem important enough to put in memory.  If this is the case then it might
 be possible to design a robot which used a different method of evaluating
 experience for storage and it would not have qualia like humans - but would
 it have some other kind of qualia?  Since we don't know what qualia are in a
 third person sense there seems to be no way to answer that.


-- 
Stathis Papaioannou




Re: Jack's partial brain paper

2010-03-17 Thread Stathis Papaioannou
On 17 March 2010 06:09, John Mikes jami...@gmail.com wrote:
 Stathis,

 I feel we are riding the human restrictive imaging in a complex nature.
 While I DO feel completely comfortable to say that there is a neuron through
 which connectivity is established to a next segment in our mental
 complexity, and if that neuron dies, the connectivity to that particular
 quale broke - on 2nd thought the diversity and multiplicity we do experience
 in nature (known domains and presumed for the still unknown ones) provides
 hope for more than one connecting link to ALL reducing the exclusivity of
 that particular neuron.

 Nature's complexity, however, shows redundancy and 'multiple emergency
 brakes' for the features, according to their 'importance', that may be
 beyond our present grasp.

 In human logic (engineering/physical thinking as well) we think in THE
 way how things occur.   O N E   is enough. (This is the basis of our
 one-track causality-thinking as well: we find in our (known) model ONE most
 valued initiating factor and satisfy ourselves with that one, as THE Cause
 while from 'beyond our model' there may be multiple factors contributing to
 the effect assigned to that ONE in-model factor. This is the reason why our
 knowledge is almost, sometimes even paradoxical and ambiguous).

 Stathis asked:

 Are you prepared to say that it is possible
 there is a single subatomic particle in your brain which makes the
 difference between consciousness and zombiehood?

 I am prepared to say that we may do that, i.e. assign such differences to
 a figmentous 'particle' - in what we may be no more right than in other
 'presumed' mental explanations based on tissue/energy/bio science of the
 brain.
 Am I far out to compare a 'zombie' to a binary computer in 'basic' while the
 more advanced (still!) 'partial zombie' variants come in the advanced AI
 versions? It still does not commute with the wholeness of mentality, but
 follows certain leads beyond the strictly mechanistically  prefabricated
 machine connectivities. We still program within our known domains.
 We still cannot exceed our limited (model-view) knowledge base.

The question (which has got a bit lost in the discussion) is whether
it is possible to make an artificial brain component which exactly
reproduces the behaviour of the biological component, so that if it
replaces the biological component the surrounding tissue cannot tell
that it is an impostor, but which lacks consciousness. If it is
possible, then it would either be possible to create a partial zombie
who is blind, deaf, aphasic etc. but behaves normally, or perhaps an
abrupt full zombie when one vital component (which would have to be an
indivisible part of a neuron) was changed. This does not assume any
scientific theory about brain function: we can imagine that the
artificial component is created and installed by God. Is it possible
to make such a component, or is it a logical impossibility, such that
even God could not do it?


-- 
Stathis Papaioannou




Re: Jack's partial brain paper

2010-03-17 Thread HZ
I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but
behaves as if it does, what makes us not zombies? How do we know we
are not? But more importantly, are there known cases of zombies?
Perhaps a silly question, because it might be just a thought
experiment, but if so I wonder what evidence one is so freely
speaking from, especially when connected to cognition, about which we
now (should) know more. The questions seem related, because either we
don't know whether we are zombies or one can solve the problem of
zombie identification. I guess I'm new to the zombieness business.

But leaving the zombie definition and identification apart, I think
current science would/should see no difference between consciousness
and cognition: the former is an emergent property of the latter, and
just as there are levels of cognition there are levels of
consciousness. Between the human being and other animals there is a
wide gradation of levels; it is not that other animals lack
'qualia'. Perhaps there is an upper level defined by computational
limits, such that once that limit is reached one just remains there,
but consciousness seems to depend on the complexity of the brain
(size, convolutions or whatever provides the full power) and is not
disconnected from cognition. In this view, only damaging the
cognitive capacities of a person would damage its 'qualia', and its
'qualia' could not be damaged except by damaging the brain, which
would likewise damage the cognitive capabilities. In other words,
there seems to be no cognition/consciousness duality as long as there
is no brain/mind one. The use of the term 'qualia' here looks like a
remake of the mind/body problem.


On Wed, Mar 17, 2010 at 11:34 AM, Stathis Papaioannou
stath...@gmail.com wrote:
 On 17 March 2010 05:29, Brent Meeker meeke...@dslextreme.com wrote:

 I think this is a dubious argument based on our lack of understanding of
 qualia.  Presumably one has many thoughts that do not result in any overt
 action.  So if I lost a few neurons (which I do continuously) it might mean
 that there are some thoughts I don't have or some associations I don't make,
 so eventually I may fade to the level of consciousness of my dog.  Is my
 dog a partial zombie?

 It's certainly possible that qualia can fade without the subject
 noticing, either because the change is slow and gradual or because the
 change fortuitously causes a cognitive deficit as well. But this not
 what the fading qualia argument is about. The argument requires
 consideration of a brain change which would cause an unequivocal
 change in consciousness, such as a removal of the subject's occipital
 lobes. If this happened, the subject would go completely blind: he
 would be unable to describe anything placed in front of his eyes, and
 he would report that he could not see anything at all. That's what it
 means to go blind. But now consider the case where the occipital lobes
 are replaced with a black box that reproduces the I/O behaviour of the
 occipital lobes, but which is postulated to lack visual qualia. The
 rest of the subject's brain is intact and is forced to behave exactly
 as it would if the change had not been made, since it is receiving
 normal inputs from the black box. So the subject will correctly
 describe anything placed in front of him, and he will report that
 everything looks perfectly normal. More than that, he will have an
 appropriate emotional response to what he sees, be able to paint it or
 write poetry about it, make a working model of it from an image he
 retains in his mind: whatever he would normally do if he saw
 something. And yet, he would be a partial zombie: he would behave
 exactly as if he had normal visual qualia while completely lacking
 visual qualia. Now it is part of the definition of a full zombie that
 it doesn't understand that it is blind, since a requirement for
 zombiehood is that it doesn't understand anything at all, it just
 behaves as if it does. But if the idea of qualia is meaningful at all,
 you would think that a sudden drastic change like going blind should
 produce some realisation in a cognitively intact subject; otherwise
 how do we know that we aren't blind now, and what reason would we have
 to prefer normal vision to zombie vision? The conclusion is that it
 isn't possible to make a device that replicates brain function but
 lacks qualia: either it is not possible to make such a device at all
 because the brain is not computable, or if such a device could be made
 (even a magical one) then it would necessarily reproduce the qualia as
 well.

 I think the question of whether there could be a philosophical zombie is ill
 posed because we don't know what is responsible for qualia.  I speculate
 that they are tags of importance or value that get attached to perceptions
 so that they are stored in short term memory.  Then, because evolution
 cannot redesign things, the same 

Re: Free will: Wrong entry.

2010-03-17 Thread m.a.

  - Original Message - 
  From: Bruno Marchal 
  To: everything-list@googlegroups.com 
  Sent: Tuesday, March 16, 2010 2:29 PM
  Subject: Re: Free will: Wrong entry.





Or, are you saying here that choices made by the (3rd 
person) UD tend to be influenced by one's life-history to the extent of (often) 
providing the very alternatives that the (1st) person would have chosen? 


  Exactly. Except I would not say that the UD, or arithmetic, makes choices.
But the first person does, and can realize her consistent choice. Our
consciousness is related to the normal histories which make us (the lobian
numbers) have a relative partial self-control with respect to our most
probable universal history. That can be reflected in notions like
responsibility, remorse, conscience, well-founded feelings of guilt, badly
founded feelings of guilt, etc.
   In other terms, free will is more related to deterministic chaos, or Gödelian
self-reference, than to the abrupt indeterminacy provided by the 'matter' of
comp or the 'matter' of quantum mechanics.

  But is there a deliberate feedback (of any kind) between first person and UD? 
How does the UD identify and favor our normal histories? How do the lobian 
numbers affect the UD? (I think you've answered these questions before but not 
in ways that are clear to me. Please give it one last try.) m.a.





  Bruno




  http://iridia.ulb.ac.be/~marchal/










Re: Jack's partial brain paper

2010-03-17 Thread Stathis Papaioannou
On 17 March 2010 23:47, HZ hzen...@gmail.com wrote:
 I'm quite confused about the state of zombieness. If the requirement
 for zombiehood is that it doesn't understand anything at all but it
 behaves as if it does what makes us not zombies? How do we not we are
 not? But more importantly, are there known cases of zombies? Perhaps a
 silly question because it might be just a thought experiment but if
 so, I wonder on what evidence one is so freely speaking about,
 specially when connected to cognition for which we now (should) know
 more. The questions seem related because either we don't know whether
 we are zombies or one can solve the problem of zombie identification.
 I guess I'm new in the zombieness business.

*I* know with absolute certainty that I am not a zombie, but I don't
know if anyone else is. It is just a philosophical idea: there are no
known cases of zombies, and we could never know if there are. Some
philosophers of mind, such as Daniel Dennett (who has said that they
are an embarrassment to philosophy) don't believe that zombies are
even conceptually possible. This attitude goes along with an
epiphenomenal view of consciousness as a necessary side-effect of
intelligent behaviour.

The fading qualia argument we have been discussing is due to David Chalmers:

http://cogprints.org/318/0/qualia.html

It purports to show that functionally equivalent zombie brain
components are impossible. Chalmers, unlike Dennett, still believes
that zombies are conceptually possible, although he thinks they are
probably physically impossible.

 But leaving the zombie definition and identification apart, I think
 current science would/should see no difference between consciousness
 and cognition, the former is an emergent property of the latter, and
 just as there are levels of cognition there are levels of
 consciousness. Between the human being and other animals there is a
 wide gradation of levels, it is not that any other animal lacks of
 'qualia'. Perhaps there is an upper level defined by computational
 limits and as such once reached that limit one just remains there, but
 consciousness seems to depend on the complexity of the brain (size,
 convolutions or whatever provides the full power) but not disconnected
 to cognition. In this view only damaging the cognitive capacities of a
 person would damage its 'qualia', while its 'qualia' could not get
 damaged but by damaging the brain which will likewise damage the
 cognitive capabilities. In other words, there seems to be no
 cognition/consciousness duality as long as there is no brain/mind one.
 The use of the term 'qualia' here looks like a remake of the mind/body
 problem.

Yes, that's what it is: the mind-brain problem.


-- 
Stathis Papaioannou




Re: Jack's partial brain paper

2010-03-17 Thread Bruno Marchal


On 17 Mar 2010, at 13:47, HZ wrote:


I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does what makes us not zombies? How do we not we are
not? But more importantly, are there known cases of zombies? Perhaps a
silly question because it might be just a thought experiment but if
so, I wonder on what evidence one is so freely speaking about,
specially when connected to cognition for which we now (should) know
more. The questions seem related because either we don't know whether
we are zombies or one can solve the problem of zombie identification.
I guess I'm new in the zombieness business.




I know I am conscious, and I can doubt every content of my
consciousness except this one: that I am conscious.

I cannot prove that I am conscious to others.

Dolls and sculptures are, with respect to what they represent, if
human in appearance, a sort of zombie.
Tomorrow, we may be able to put in a museum an artificial machine
imitating a sleeping human, in such a way that we might be confused
and believe it is a dreaming human being ...


The notion of zombie makes sense (logical sense). Its existence may
depend on the choice of theory.
With the axiom of comp, a counterfactually correct relation between
numbers defines the channel through which consciousness flows
(selecting the consistent extensions). So with comp we could argue
that as far as we are bodies, we are zombies, but from our first
person perspective we never are.




But leaving the zombie definition and identification apart, I think
current science would/should see no difference between consciousness
and cognition, the former is an emergent property of the latter,



I would have said the contrary:

consciousness -> sensibility -> emotion -> cognition -> language ->
recognition -> self-consciousness -> ...


(and: number -> universal number -> consciousness -> ...)

Something like that follows, I argue, from the assumption that we are
Turing emulable at some (necessarily unknown) level of description.



and
just as there are levels of cognition there are levels of
consciousness. Between the human being and other animals there is a
wide gradation of levels, it is not that any other animal lacks of
'qualia'. Perhaps there is an upper level defined by computational
limits and as such once reached that limit one just remains there, but
consciousness seems to depend on the complexity of the brain (size,
convolutions or whatever provides the full power) but not disconnected
to cognition. In this view only damaging the cognitive capacities of a
person would damage its 'qualia', while its 'qualia' could not get
damaged but by damaging the brain which will likewise damage the
cognitive capabilities. In other words, there seems to be no
cognition/consciousness duality as long as there is no brain/mind one.
The use of the term 'qualia' here looks like a remake of the mind/body
problem.



Qualia are the part of the mind consisting in directly
apprehensible subjective experience. Typical examples are pain, seeing
red, smelling, feeling something, ... They are roughly the non-transitive
part of cognition.


The question here is not the question of the existence of degrees of
consciousness, but of whether consciousness could vary in the presence
of a non-causal perturbation during a particular run of a brain or a
machine.


If Big Blue wins a chess tournament without having used register
344, no doubt Big Blue would still have won in case register 344 had
been broken. Some people seem to believe that if Big Blue was
conscious in the first case, it could lose consciousness in the
second case. I don't think this is tenable when we assume that we are
Turing emulable.
The reason is that consciousness is not ascribable to any particular
implementation, but only to an abstract but precise infinity of
computations, already 'realized' in elementary arithmetic.


Bruno





On Wed, Mar 17, 2010 at 11:34 AM, Stathis Papaioannou
stath...@gmail.com wrote:

On 17 March 2010 05:29, Brent Meeker meeke...@dslextreme.com wrote:

I think this is a dubious argument based on our lack of  
understanding of
qualia.  Presumably one has many thoughts that do not result in  
any overt
action.  So if I lost a few neurons (which I do continuously) it  
might mean
that there are some thoughts I don't have or some associations I  
don't make,
so eventually I may fade to the level of consciousness of my  
dog.  Is my

dog a partial zombie?


It's certainly possible that qualia can fade without the subject
noticing, either because the change is slow and gradual or because  
the

change fortuitously causes a cognitive deficit as well. But this not
what the fading qualia argument is about. The argument requires
consideration of a brain change which would cause an unequivocal
change in consciousness, such as a 

Re: Free will: Wrong entry.

2010-03-17 Thread Bruno Marchal


On 17 Mar 2010, at 14:06, m.a. wrote:

But is there a deliberate feedback (of any kind) between first  
person and UD?


No. The UD can be seen as a set of elementary arithmetical truths,
realizing, through their many proofs, the many computations. It is the
least block-universe for the mindscape. (Assuming comp.)





How does the UD identify and favor our normal histories?


Excellent question. This is the reason why we are hunting white
rabbits and white noise. This is why we have to extract the structure of
matter and time from a sum over an infinity of computations (those below
or even aside our level and sphere of definition). If we show that such
a sum does not normalize, then we refute comp.




How do the lobian numbers affect the UD. (I think you've answered  
these questions before but not in ways that are clear to me. Please  
give it one last try.)m.a.





Löbian machines survive only in their consistent extensions. It is the
couple Löbian-machine/its-realities which emerges from inside UD*
(the execution of the UD, or that part of arithmetic).


The free will of a lobian number is defined with respect to its most
probable realities. It can affect such realities, and be affected by
them. But no lobian number/machine/entity/soul (if you think of its
first person view) can affect the UD, for the same reason we cannot
affect elementary arithmetic (or the physical laws, for a
physicalist).


Look at UD* (the infinite run of the UD), or arithmetic, as the block  
universe of the mindscape. Matter is a projective view of arithmetic,  
when viewed by universal numbers from inside it. Normality is ensured  
by relative self-multiplication, making us both very rare in the  
absolute, and very numerous in the relative. Like with Everett, except  
we start from the numbers, and show how to derive the wave, not just
the collapse.
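
As I understand it, the UD dovetails on all programs of a universal
machine; here is a minimal sketch of that interleaving pattern in Python.
The "programs" below are just a hypothetical countable family of Python
generators (my stand-in, not anything from Bruno's papers), but the point
survives: by running ever more programs for ever more steps, every step of
every program is reached after finitely many stages, even though no
program ever terminates, which is what lets UD* be treated as a fixed,
block-like totality rather than a process unfolding in time.

```python
from itertools import count

def program(i):
    """Toy stand-in for the i-th program: an endless computation that
    simply yields its own index and successive step numbers."""
    for step in count():
        yield (i, step)

def dovetail():
    """Dovetailing: at stage n, start one new program and advance every
    program started so far by one step, so each (program, step) pair is
    eventually reached."""
    running = []
    for n in count(1):
        running.append(program(len(running)))
        for g in running:
            yield next(g)

if __name__ == "__main__":
    dv = dovetail()
    for _ in range(15):   # first 15 dovetailed steps: (0,0), (0,1), (1,0), ...
        print(next(dv))
```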


I just explain that if we take comp seriously, the mind-body problem
leads to a mathematical body problem.


Bruno







http://iridia.ulb.ac.be/~marchal/






Re: Jack's partial brain paper

2010-03-17 Thread Brent Meeker

On 3/17/2010 3:34 AM, Stathis Papaioannou wrote:

On 17 March 2010 05:29, Brent Meekermeeke...@dslextreme.com  wrote:

   

I think this is a dubious argument based on our lack of understanding of
qualia.  Presumably one has many thoughts that do not result in any overt
action.  So if I lost a few neurons (which I do continuously) it might mean
that there are some thoughts I don't have or some associations I don't make,
so eventually I may fade to the level of consciousness of my dog.  Is my
dog a partial zombie?
 

It's certainly possible that qualia can fade without the subject
noticing, either because the change is slow and gradual or because the
change fortuitously causes a cognitive deficit as well. But this not
what the fading qualia argument is about. The argument requires
consideration of a brain change which would cause an unequivocal
change in consciousness, such as a removal of the subject's occipital
lobes. If this happened, the subject would go completely blind: he
would be unable to describe anything placed in front of his eyes, and
he would report that he could not see anything at all. That's what it
means to go blind. But now consider the case where the occipital lobes
are replaced with a black box that reproduces the I/O behaviour of the
occipital lobes, but which is postulated to lack visual qualia. The
rest of the subject's brain is intact and is forced to behave exactly
as it would if the change had not been made, since it is receiving
normal inputs from the black box. So the subject will correctly
describe anything placed in front of him, and he will report that
everything looks perfectly normal. More than that, he will have an
appropriate emotional response to what he sees, be able to paint it or
write poetry about it, make a working model of it from an image he
retains in his mind: whatever he would normally do if he saw
something. And yet, he would be a partial zombie: he would behave
exactly as if he had normal visual qualia while completely lacking
visual qualia. Now it is part of the definition of a full zombie that
it doesn't understand that it is blind, since a requirement for
zombiehood is that it doesn't understand anything at all, it just
behaves as if it does. But if the idea of qualia is meaningful at all,
you would think that a sudden drastic change like going blind should
produce some realisation in a cognitively intact subject; otherwise
how do we know that we aren't blind now, and what reason would we have
to prefer normal vision to zombie vision? The conclusion is that it
isn't possible to make a device that replicates brain function but
lacks qualia: either it is not possible to make such a device at all
because the brain is not computable, or if such a device could be made
(even a magical one) then it would necessarily reproduce the qualia as
well.
   


I generally agree with the above.  Maybe I misunderstood the question; 
but I was considering the possibility of having a continuum of lesser 
qualia AND corresponding lesser behavior.


However, I think there is something in the above that creates the "just a 
recording" problem.  It's the hypothesis that the black box reproduces 
the I/O behavior.  This implies the black box realizes a function, not a 
recording.  But then the argument slips over to replacing the black box 
with a recording which just happens to produce the same I/O, and we're 
led to the absurdity that a recording is conscious.  But what step of the 
argument should we reject?  The plausible possibility is the 
different response to counterfactuals that the functional box and the 
recording realize.  That would seem like magic - a different response 
depending on all the things that don't happen - except that in the MWI of QM 
all those counterfactuals are available to make a difference.
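
A minimal sketch of that distinction, with made-up names (nothing here
models a brain): both boxes give identical outputs on the inputs that
actually occurred, but only the functional box also answers a
counterfactual input that the recording never saw.

```python
class FunctionalBox:
    """Realizes a function: computes an output for any input, including
    inputs that never actually occur (the counterfactuals)."""
    def respond(self, stimulus):
        return "signal(" + stimulus + ")"

class RecordingBox:
    """Replays a fixed trace: it only works while the actual inputs
    happen to follow the recorded run."""
    def __init__(self, recorded_run):
        self.trace = iter(recorded_run)
    def respond(self, stimulus):
        recorded_stimulus, recorded_output = next(self.trace)
        if stimulus != recorded_stimulus:
            raise RuntimeError("input departed from the recorded run")
        return recorded_output

actual_inputs = ["red ball", "green cup"]          # the run that happened
box = FunctionalBox()
trace = [(s, box.respond(s)) for s in actual_inputs]

# On the actual history the two are indistinguishable ...
replay = RecordingBox(trace)
assert [replay.respond(s) for s in actual_inputs] == [o for _, o in trace]

# ... but only the function responds to a counterfactual input.
print(box.respond("blue pyramid"))
try:
    RecordingBox(trace).respond("blue pyramid")
except RuntimeError as err:
    print("recording fails on the counterfactual:", err)
```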


Brent

   

I think the question of whether there could be a philosophical zombie is ill
posed because we don't know what is responsible for qualia.  I speculate
that they are tags of importance or value that get attached to perceptions
so that they are stored in short term memory.  Then, because evolution
cannot redesign things, the same tags are used for internal thoughts that
seem important enough to put in memory.  If this is the case then it might
be possible to design a robot which used a different method of evaluating
experience for storage and it would not have qualia like humans - but would
it have some other kind of qualia?  Since we don't know what qualia are in a
third person sense there seems to be no way to answer that.
 


   





Re: Jack's partial brain paper

2010-03-17 Thread Brent Meeker

On 3/17/2010 5:47 AM, HZ wrote:

I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does what makes us not zombies? How do we not we are
not? But more importantly, are there known cases of zombies? Perhaps a
silly question because it might be just a thought experiment but if
so, I wonder on what evidence one is so freely speaking about,
specially when connected to cognition for which we now (should) know
more. The questions seem related because either we don't know whether
we are zombies or one can solve the problem of zombie identification.
I guess I'm new in the zombieness business.
   


For me the question of zombieness seems meaningful if I put it in the 
form of creating an artificially intelligent being, as opposed to 
replacing the components of a brain by functionally identical elements.  
Julian Jaynes has a theory of the evolutionary development of 
consciousness as an internalization of hearing speech.  He supposes that 
early humans did not hear an inner narrative as we do but only heard 
external sounds and the speech of others, and that due to some biogenetic 
changes this became internalized, so that we heard the instructions of 
parents in our heads even when they weren't present.  Then we came to 
hear ourselves in our head too, i.e. became conscious.


I don't know if this is true - it sounded like nonsense when I first 
heard of it - but after reading Jaynes I was impressed by the arguments 
he could muster for it.  But if it's true it would mean that I could 
create an artificially intelligent being who, for example, did not 
process verbal thoughts through the same module used for hearing, and then 
this being would not have the same qualia corresponding to hearing 
oneself in one's head.  It might very well have some different qualia.  
But since we don't know what qualia are in a third person sense, it's 
impossible to make sense of having qualia that are different from those we 
know.


As I understand Bruno's theory, he identifies qualia with certain kinds 
of computation; a third person characterization.  But I'm not sure what 
kind or whether I could say that my artificially intelligent being had them.


Brent



But leaving the zombie definition and identification apart, I think
current science would/should see no difference between consciousness
and cognition, the former is an emergent property of the latter, and
just as there are levels of cognition there are levels of
consciousness. Between the human being and other animals there is a
wide gradation of levels, it is not that any other animal lacks of
'qualia'. Perhaps there is an upper level defined by computational
limits and as such once reached that limit one just remains there, but
consciousness seems to depend on the complexity of the brain (size,
convolutions or whatever provides the full power) but not disconnected
to cognition. In this view only damaging the cognitive capacities of a
person would damage its 'qualia', while its 'qualia' could not get
damaged but by damaging the brain which will likewise damage the
cognitive capabilities. In other words, there seems to be no
cognition/consciousness duality as long as there is no brain/mind one.
The use of the term 'qualia' here looks like a remake of the mind/body
problem.


On Wed, Mar 17, 2010 at 11:34 AM, Stathis Papaioannou
stath...@gmail.com  wrote:
   

On 17 March 2010 05:29, Brent Meekermeeke...@dslextreme.com  wrote:

 

I think this is a dubious argument based on our lack of understanding of
qualia.  Presumably one has many thoughts that do not result in any overt
action.  So if I lost a few neurons (which I do continuously) it might mean
that there are some thoughts I don't have or some associations I don't make,
so eventually I may fade to the level of consciousness of my dog.  Is my
dog a partial zombie?
   

It's certainly possible that qualia can fade without the subject
noticing, either because the change is slow and gradual or because the
change fortuitously causes a cognitive deficit as well. But this not
what the fading qualia argument is about. The argument requires
consideration of a brain change which would cause an unequivocal
change in consciousness, such as a removal of the subject's occipital
lobes. If this happened, the subject would go completely blind: he
would be unable to describe anything placed in front of his eyes, and
he would report that he could not see anything at all. That's what it
means to go blind. But now consider the case where the occipital lobes
are replaced with a black box that reproduces the I/O behaviour of the
occipital lobes, but which is postulated to lack visual qualia. The
rest of the subject's brain is intact and is forced to behave exactly
as it would if the change had not been made, since it is receiving
normal inputs from the black box. So the subject will correctly
describe anything placed in front of him, and he 

Re: Jack's partial brain paper

2010-03-17 Thread Brent Meeker

On 3/17/2010 10:01 AM, Bruno Marchal wrote:


On 17 Mar 2010, at 13:47, HZ wrote:


I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does what makes us not zombies? How do we not we are
not? But more importantly, are there known cases of zombies? Perhaps a
silly question because it might be just a thought experiment but if
so, I wonder on what evidence one is so freely speaking about,
specially when connected to cognition for which we now (should) know
more. The questions seem related because either we don't know whether
we are zombies or one can solve the problem of zombie identification.
I guess I'm new in the zombieness business.




I know I am conscious, and I can doubt all content of my 
consciousness, except this one, that I am conscious.

I cannot prove that I am conscious, neither to some others.

Dolls and sculptures are, with respect to what they represent, if 
human in appearance sort of zombie.
Tomorrow, we may be able to put in a museum an artificial machine 
imitating a humans which is sleeping, in a way that we may be confused 
and believe it is a dreaming human being ...


The notion of zombie makes sense (logical sense). Its existence may 
depend on the choice of theory.
With the axiom of comp, a counterfactually correct relation between 
numbers define the channel through which consciousness flows (select 
the consistent extensions). So with comp we could argue that as far as 
we are bodies, we are zombies, but from our first person perspective 
we never are.




But leaving the zombie definition and identification apart, I think
current science would/should see no difference between consciousness
and cognition, the former is an emergent property of the latter,



I would have said the contrary:

consciousness - sensibility - emotion - cognition - language - 
recognition - self-consciousness - ...


(and: number - universal number - consciousness - ...)

Something like that, follows, I argue, from the assumption that we are 
Turing emulable at some (necessarily unknown) level of description.



and
just as there are levels of cognition there are levels of
consciousness. Between the human being and other animals there is a
wide gradation of levels, it is not that any other animal lacks of
'qualia'. Perhaps there is an upper level defined by computational
limits and as such once reached that limit one just remains there, but
consciousness seems to depend on the complexity of the brain (size,
convolutions or whatever provides the full power) but not disconnected
to cognition. In this view only damaging the cognitive capacities of a
person would damage its 'qualia', while its 'qualia' could not get
damaged but by damaging the brain which will likewise damage the
cognitive capabilities. In other words, there seems to be no
cognition/consciousness duality as long as there is no brain/mind one.
The use of the term 'qualia' here looks like a remake of the mind/body
problem.



Qualia is the part of the mind consisting in the directly 
apprehensible subjective experience. Typical examples are pain, seeing 
red, smell, feeling something, ... It is roughly the non transitive 
part of cognition.


The question here is not the question of the existence of degrees of 
consciousness, but the existence of a link between a possible 
variation of consciousness in presence of non causal perturbation 
during a particular run of a brain or a machine.


If big blue wins a chess tournament without having used the register 
344, no doubt big blue would have win in case the register 344 would 
have been broken. 


Not with probability 1.0, because given QM the game might have gone (and 
in other worlds did go) differently and required register 344.


Some people seems to believe that if big blue was conscious in the 
first case, it could loose consciousness in the second case. I don't 
think this is tenable when we assume that we are Turing emulable.


But the world is only Turing emulable if it is deterministic and it's 
only deterministic if everything happens as in MWI QM.


Brent

The reason is that consciousness is not ascribable to any particular 
implementation, but only to an abstract but precise infinity of 
computations, already 'realized' in elementary arithmetic.


Bruno





On Wed, Mar 17, 2010 at 11:34 AM, Stathis Papaioannou
stath...@gmail.com wrote:

On 17 March 2010 05:29, Brent Meeker meeke...@dslextreme.com wrote:

I think this is a dubious argument based on our lack of 
understanding of
qualia.  Presumably one has many thoughts that do not result in any 
overt
action.  So if I lost a few neurons (which I do continuously) it 
might mean
that there are some thoughts I don't have or some associations I 
don't make,
so eventually I may fade to the level of consciousness of my 
dog.  Is my

dog a partial zombie?


It's certainly possible that qualia can fade without the subject
noticing, either 

Re: Jack's partial brain paper

2010-03-17 Thread John Mikes
Brent:
why do you believe IN *QUALIA*? They are just as much human assumptions (in our
belief system) as *VALUE* (or, for that matter: taking seriously your
short (long?) term memories).
A *ZOMBIE* is the subject of a thought experiment in our humanly
aggrandizing anthropocentric boasting. A dog?
With the incredible complexity we must assume for (mental) brain(function)
it is almost ridiculous to speak about partial brains - especially in the
same breath where we assume what the loss of 1 (one) or even of an
infinitesimally small part of ONE neuron may do. How about the non-neuronal
ingredients, like prions, a little structural change of which may cause (I
would rather say: 'indicate') mad cow disease.
John


On 3/16/10, Brent Meeker meeke...@dslextreme.com wrote:

  On 3/16/2010 6:03 AM, Stathis Papaioannou wrote:

 On 16 March 2010 20:29, russell standish li...@hpcoders.com.au 
 li...@hpcoders.com.au wrote:


 I've been following the thread on Jack's partial brains paper,
 although I've been too busy to comment. I did get a moment to read the
 paper this evening, and I was abruptly stopped by a comment on page 2:

 On the second hypothesis [Sudden Disappearing Qualia], the
 replacement of a single neuron could be responsible for the vanishing
 of an entire field of conscious experience. This seems antecedently
 implausible, if not entirely bizarre.

 Why? Why isn't it like the straw that broke the camel's back? When
 pulling apart a network, link by link, there will be a link removed
 that causes the network to go from being almost fully connected to
 being disconnected. It need not be the same link each time, it will
 depend on the order in which the links are removed.

 I made a similar criticism against David Parfitt's Napoleon thought
 experiment a couple of years ago on this list - I understand that
 fading qualia is a popular intuition, but it just seems wrong to
 me. Can anyone give me a convincing reason why the suddenly
 disappearing qualia notion is absurd?


 Fading qualia would result in a partial zombie, and that concept is
 self-contradictory. It means I could be a partial zombie now,
 completely blind since waking up this morning, but behaving normally
 and unaware that anything unusual had happened. The implications of
 this is that zombie vision is just as good as normal vision in every
 objective and subjective way, so we may as well say that it is the
 same as normal vision. In other words, the qualia can't fade and leave
 the behaviour of the brain unchanged.



 I think this is a dubious argument based on our lack of understanding of
 qualia.  Presumably one has many thoughts that do not result in any overt
 action.  So if I lost a few neurons (which I do continuously) it might mean
 that there are some thoughts I don't have or some associations I don't make,
 so eventually I may fade to the level of consciousness of my dog.  Is my
 dog a partial zombie?

 I think the question of whether there could be a philosophical zombie is
 ill posed because we don't know what is responsible for qualia.  I speculate
 that they are tags of importance or value that get attached to perceptions
 so that they are stored in short term memory.  Then, because evolution
 cannot redesign things, the same tags are used for internal thoughts that
 seem important enough to put in memory.  If this is the case then it might
 be possible to design a robot which used a different method of evaluating
 experience for storage and it would not have qualia like humans - but would
 it have some other kind of qualia?  Since we don't know what qualia are in a
 third person sense there seems to be no way to answer that.

 Brent

 Chalmers thinks partial zombies are absurd but does not believe that
 full zombies are prima facie absurd. Accepting this, it would seem to
 be possible that one could suddenly transition from fully conscious to
 fully zombified without going through an intermediate stage. For
 example, this could happen with the swapping of one neuron. However,
 it wouldn't be the neuron that causes the change, it would be an
 infinitesimally small part of the neuron. This is because the neuron
 itself, like the brain, could be replaced with functionally identical
 components. For the same reason that qualia can't fade for the whole
 brain, qualia can't fade for the neuron. So the qualia would have to
 suddenly disappear with the swapping of one single indivisible
 component of the neuron. Are you prepared to say that it is possible
 there is a single subatomic particle in your brain which makes the
 difference between consciousness and zombiehood?






Re: Jack's partial brain paper

2010-03-17 Thread Brent Meeker

On 3/17/2010 11:39 AM, John Mikes wrote:

Brent:
why do you believe IN *QUALIA?* they are just as human assumptions 
(in our belief system) as* VALUE*  (or, for that matter: to take 
seriously your short (long?) term memories).


I don't  believe *IN* anything.  They are just something that 
/occurred/ to me.


A* ZOMBIE* is the subject of a thought experiment in our humanly 
aggrandizing anthropocentric boasting.


How is it *boasting* to consider a thought experiment?


A dog?


/Is that a question?/  Yes, I have a *dog* (three of them, in fact).

With the incredible complexity we must assume for (mental) 
brain(function) it is almost ridiculous to speak about partial 
brains - especially in the same breath where we assume what the loss 
of 1 (one) or even of an infinitesimally small part of ONE neuron may do.


I didn't assume what it would do.  I noted that I'm /*losing*/ them all 
the time (/faster when I have a whiskey/).


Brent

How about the non-neuronal ingredients, like prions, a little 
structural change of which may cause (I would rather say: 'indicate') 
mad cow disease.

John

On 3/16/10, *Brent Meeker* meeke...@dslextreme.com wrote:


On 3/16/2010 6:03 AM, Stathis Papaioannou wrote:

On 16 March 2010 20:29, russell standish li...@hpcoders.com.au wrote:
   

I've been following the thread on Jack's partial brains paper,
although I've been too busy to comment. I did get a moment to read the
paper this evening, and I was abruptly stopped by a comment on page 2:

On the second hypothesis [Sudden Disappearing Qualia], the
replacement of a single neuron could be responsible for the vanishing
of an entire field of conscious experience. This seems antecedently
implausible, if not entirely bizarre.

Why? Why isn't it like the straw that broke the camel's back? When
pulling apart a network, link by link, there will be a link removed
that causes the network to go from being almost fully connected to
being disconnected. It need not be the same link each time, it will
depend on the order in which the links are removed.

I made a similar criticism against David Parfitt's Napoleon thought
experiment a couple of years ago on this list - I understand that
fading qualia is a popular intuition, but it just seems wrong to
me. Can anyone give me a convincing reason why the suddenly
disappearing qualia notion is absurd?
 

Fading qualia would result in a partial zombie, and that concept is
self-contradictory. It means I could be a partial zombie now,
completely blind since waking up this morning, but behaving normally
and unaware that anything unusual had happened. The implications of
this is that zombie vision is just as good as normal vision in every
objective and subjective way, so we may as well say that it is the
same as normal vision. In other words, the qualia can't fade and leave
the behaviour of the brain unchanged.
   


I think this is a dubious argument based on our lack of
understanding of qualia.  Presumably one has many thoughts that do
not result in any overt action.  So if I lost a few neurons (which
I do continuously) it might mean that there are some thoughts I
don't have or some associations I don't make, so eventually I may
fade to the level of consciousness of my dog.  Is my dog a
partial zombie?

I think the question of whether there could be a philosophical
zombie is ill posed because we don't know what is responsible for
qualia.  I speculate that they are tags of importance or value
that get attached to perceptions so that they are stored in short
term memory.  Then, because evolution cannot redesign things, the
same tags are used for internal thoughts that seem important
enough to put in memory.  If this is the case then it might be
possible to design a robot which used a different method of
evaluating experience for storage and it would not have qualia
like humans - but would it have some other kind of qualia?  Since
we don't know what qualia are in a third person sense there seems
to be no way to answer that.

Brent


Chalmers thinks partial zombies are absurd but does not believe that
full zombies are prima facie absurd. Accepting this, it would seem to
be possible that one could suddenly transition from fully conscious to
fully zombified without going through an intermediate stage. For
example, this could happen with the swapping of one neuron. However,
it wouldn't be the neuron that causes the change, it would be an
infinitesimally small part of the neuron. This is because the neuron
itself, like the brain, could be replaced with functionally identical
components. For the same reason that qualia can't fade for the whole
brain, qualia can't fade for the neuron. So the qualia would have 

Re: Jack's partial brain paper

2010-03-17 Thread L.W. Sterritt

Hi Gentlemen,

I start out with the bias that the brain, as a neural network with ~
10^11 neurons, given the exogenous and endogenous inputs presented to
it, continuously computes our perception of the world around us.
Some neuroscientists suggest that each neuron in the brain is
separated by only a few synapses from every other neuron.  No nerve
impulse ever encounters a dead end in the brain.  The same bits (and
pieces) of information may be processed simultaneously in multiple
brain sites. This is a massively parallel architecture, and even without
a thorough understanding of qualia, it is difficult (for me) to
understand how the loss of a few neurons here and there would affect
qualia. Without redundancy, we could not recover from the minor brain
insults, such as the common ischemia, that we accumulate.
Operationally, the brain's neurons make the most significant
connections with only certain specific neurons, but there are parallel
circuits.  With the recent introduction of very high resolution MRI, a
lot of damage is observed in all brains as we age.  This has posed a
problem for clinicians and neuroscientists: what is a normal brain?
One of the midwestern medical centers has undertaken a project with a
few thousand apparently healthy individuals, with no history of mental
health issues, in an effort to learn how much damage can exist in the
brain while we would still consider it normal.
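
A toy illustration of both points (short synaptic path lengths, and
robustness to scattered loss), using a generic sparse random graph as a
very loose stand-in for a neural network; the node count, average degree
and the networkx-based script are illustrative assumptions only.

```python
import random
import networkx as nx

def largest_component(G):
    """Subgraph on the largest connected component."""
    nodes = max(nx.connected_components(G), key=len)
    return G.subgraph(nodes).copy()

random.seed(1)
# 2000 nodes with average degree ~10: every node ends up only a few
# hops from every other, even though each connects to very few directly.
G = nx.erdos_renyi_graph(n=2000, p=10 / 2000, seed=1)
core = largest_component(G)
print("before damage: %d nodes, mean path length %.2f"
      % (core.number_of_nodes(), nx.average_shortest_path_length(core)))

# "Minor insults": delete 5% of the nodes at random; the surviving
# giant component keeps nearly the same short path lengths.
lost = random.sample(list(G.nodes()), k=G.number_of_nodes() // 20)
G.remove_nodes_from(lost)
core = largest_component(G)
print("after damage:  %d nodes, mean path length %.2f"
      % (core.number_of_nodes(), nx.average_shortest_path_length(core)))
```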


Astronauts in orbit have commented on observing bright flashes,
which are thought to be cosmic rays / high energy protons ripping
through the brain, optic nerve and retina.  Does this change the
astronauts' qualia?  Not as far as we know.  However, with very long
exposure, such as the proposed trip to Mars, there is a concern that
the astronauts would arrive brain dead - apparently something
different from being a zombie.  Just as an aside, it has been
commented that with billions of circuits operating in (feedback?)
loops, it is impossible to have entirely rational thoughts, or purely
emotional reactions - another subject for another time.


William




On Mar 17, 2010, at 11:12 AM, Brent Meeker wrote:


On 3/17/2010 10:01 AM, Bruno Marchal wrote:



On 17 Mar 2010, at 13:47, HZ wrote:


I'm quite confused about the state of zombieness. If the requirement
for zombiehood is that it doesn't understand anything at all but it
behaves as if it does what makes us not zombies? How do we not we  
are
not? But more importantly, are there known cases of zombies?  
Perhaps a

silly question because it might be just a thought experiment but if
so, I wonder on what evidence one is so freely speaking about,
specially when connected to cognition for which we now (should) know
more. The questions seem related because either we don't know  
whether
we are zombies or one can solve the problem of zombie  
identification.

I guess I'm new in the zombieness business.




I know I am conscious, and I can doubt all content of my  
consciousness, except this one, that I am conscious.

I cannot prove that I am conscious, neither to some others.

Dolls and sculptures are, with respect to what they represent, if  
human in appearance sort of zombie.
Tomorrow, we may be able to put in a museum an artificial machine  
imitating a humans which is sleeping, in a way that we may be  
confused and believe it is a dreaming human being ...


The notion of zombie makes sense (logical sense). Its existence may  
depend on the choice of theory.
With the axiom of comp, a counterfactually correct relation between  
numbers define the channel through which consciousness flows  
(select the consistent extensions). So with comp we could argue  
that as far as we are bodies, we are zombies, but from our first  
person perspective we never are.




But leaving the zombie definition and identification apart, I think
current science would/should see no difference between consciousness
and cognition, the former is an emergent property of the latter,



I would have said the contrary:

consciousness - sensibility - emotion - cognition - language -  
recognition - self-consciousness - ...


(and: number - universal number - consciousness - ...)

Something like that, follows, I argue, from the assumption that we  
are Turing emulable at some (necessarily unknown) level of  
description.



and
just as there are levels of cognition there are levels of
consciousness. Between the human being and other animals there is a
wide gradation of levels, it is not that any other animal lacks of
'qualia'. Perhaps there is an upper level defined by computational
limits and as such once reached that limit one just remains there,  
but

consciousness seems to depend on the complexity of the brain (size,
convolutions or whatever provides the full power) but not  
disconnected
to cognition. In this view only damaging the cognitive capacities  
of a

person would damage its 'qualia', while its 'qualia' could not get
damaged 

RE: Zombies (was: Jack's partial brain paper)

2010-03-17 Thread Stephen P. King
Hi Bruno and Fellow Listers,

 

 

   As I have been following this conversation a question
occurred to me: how is a Zombie (as defined by Chalmers et al.) any
different, functionally, from the notion of other persons (dogs, etc.) that a
Solipsist might have? They seem equivalent, both behaving exactly as a “real
person” would yet having no consciousness or 1-p reality of their own. What
am I missing here?

 

Onward!

 

Stephen P. King

 

 

 

From: everything-list@googlegroups.com
[mailto:everything-l...@googlegroups.com] On Behalf Of Bruno Marchal
Sent: Wednesday, March 17, 2010 1:45 AM
To: everything-list@googlegroups.com
Subject: Re: Jack's partial brain paper

 

 

On 16 Mar 2010, at 19:29, Brent Meeker wrote:





On 3/16/2010 6:03 AM, Stathis Papaioannou wrote: 

On 16 March 2010 20:29, russell standish li...@hpcoders.com.au wrote:
  

I've been following the thread on Jack's partial brains paper,
although I've been too busy to comment. I did get a moment to read the
paper this evening, and I was abruptly stopped by a comment on page 2:
 
On the second hypothesis [Sudden Disappearing Qualia], the
replacement of a single neuron could be responsible for the vanishing
of an entire field of conscious experience. This seems antecedently
implausible, if not entirely bizarre.
 
Why? Why isn't it like the straw that broke the camel's back? When
pulling apart a network, link by link, there will be a link removed
that causes the network to go from being almost fully connected to
being disconnected. It need not be the same link each time, it will
depend on the order in which the links are removed.
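
A minimal sketch of this "last straw" point (my illustration, not from
the paper; the graph size, edge probability and library are arbitrary
choices): delete links from a connected random network in a random order
and report which deletion first disconnects it.

import random
import networkx as nx

# Start from a connected random graph, then delete links one at a time in a
# random order until the network first becomes disconnected. There is always
# a single "last straw" deletion, and which link plays that role depends
# entirely on the removal order.
G = nx.erdos_renyi_graph(50, 0.2)          # illustrative 50-node network
while not nx.is_connected(G):
    G = nx.erdos_renyi_graph(50, 0.2)      # regenerate until connected

edges = list(G.edges())
random.shuffle(edges)                      # arbitrary removal order
for i, edge in enumerate(edges, start=1):
    G.remove_edge(*edge)
    if not nx.is_connected(G):
        print("deletion %d of edge %s disconnected the network" % (i, edge))
        break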
 
I made a similar criticism of Derek Parfit's Napoleon thought
experiment a couple of years ago on this list - I understand that
fading qualia is a popular intuition, but it just seems wrong to
me. Can anyone give me a convincing reason why the suddenly
disappearing qualia notion is absurd?


Fading qualia would result in a partial zombie, and that concept is
self-contradictory. It means I could be a partial zombie now,
completely blind since waking up this morning, but behaving normally
and unaware that anything unusual had happened. The implication of
this is that zombie vision is just as good as normal vision in every
objective and subjective way, so we may as well say that it is the
same as normal vision. In other words, the qualia can't fade and leave
the behaviour of the brain unchanged.
  


I think this is a dubious argument based on our lack of understanding of
qualia.  Presumably one has many thoughts that do not result in any overt
action.  So if I lost a few neurons (which I do continuously) it might mean
that there are some thoughts I don't have or some associations I don't make,
so eventually I may fade to the level of consciousness of my dog.  Is my
dog a partial zombie? 

 

A priori the dog is not a zombie at all. It may be like us after taking some
strong psychoactive substance, disabling it intellectually. If enough
neurons are disabled, it may lose Löbianity, but not yet necessarily
consciousness. If even more neurons are disabled, it will lose the ability
to manifest its consciousness relative to you, and it will be senseless to
attribute consciousness to it, but from its own perspective it will be
another dog or another universal machine in Platonia.

 






I think the question of whether there could be a philosophical zombie is ill
posed because we don't know what is responsible for qualia.  I speculate
that they are tags of importance or value that get attached to perceptions
so that they are stored in short term memory.  Then, because evolution
cannot redesign things, the same tags are used for internal thoughts that
seem important enough to put in memory.  If this is the case then it might
be possible to design a robot which used a different method of evaluating
experience for storage and it would not have qualia like humans - but would
it have some other kind of qualia?  Since we don't know what qualia are in a
third person sense there seems to be no way to answer that.

 

 

If the robot can reason logically and believes in the induction axioms, it
will be Löbian, and the 8 arithmetical hypostases will necessarily apply. In
that case, if you find Theaetetus' theory of knowledge plausible, then it is
plausible that it has personhood, and its qualia are described by S4Grz1,
X1* and Z1*, whatever means of storage are used.
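
(For readers new to the term, here is a minimal statement of the Löb
property being assumed, in modal notation where \Box reads "the machine
proves"; this is the standard textbook formulation, not a full definition
of Löbianity:

\Box(\Box p \rightarrow p) \rightarrow \Box p

i.e. if the machine proves "if p is provable then p", then it proves p.)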

 

Bruno

 

 

http://iridia.ulb.ac.be/~marchal/

 

 

 

Re: Zombies (was: Jack's partial brain paper)

2010-03-17 Thread Stathis Papaioannou
On 18 March 2010 06:32, Stephen P. King stephe...@charter.net wrote:

    As I have been following this conversation a question
 occurred to me: how is a Zombie (as defined by Chalmers et al.) any
 different functionally from the notion of other persons (dogs, etc.) that a
 Solipsist might have? They seem equivalent, both behaving exactly as a “real
 person” would, yet having no consciousness or 1-p reality of their own. What
 am I missing here?

The problem of zombies is a version of the problem of other minds.


-- 
Stathis Papaioannou




Re: Jack's partial brain paper

2010-03-17 Thread Stathis Papaioannou
On 18 March 2010 04:34, Brent Meeker meeke...@dslextreme.com wrote:

 However I think there is something in the above that creates the "just a
 recording" problem.  It's the hypothesis that the black box reproduces the
 I/O behavior.  This implies the black box realizes a function, not a
 recording.  But then the argument slips over to replacing the black box with
 a recording which just happens to produce the same I/O, and we're led to the
 absurdity that a recording is conscious.  But what step of the argument
 should we reject?  The plausible possibility is that it is the different
 response to counterfactuals that the functional box and the recording
 realize.  That would seem like magic - a different response depending on all
 the things that don't happen - except that in the MWI of QM all those
 counterfactuals are available to make a difference.

I think that was Jack's problem with the fading qualia argument: it
would imply that a recording or random process could be conscious,
which is a no-no. He therefore contrives to explain how fading qualia
(with identical behaviour) could in fact happen. But I don't buy it: I
still think the idea of the partial zombie is incoherent.

If a chunk were removed from my computer's CPU and replaced with a
black box which accidentally reproduces the I/O behaviour of the
missing part, the computer would function perfectly normally. We would
not say that it isn't really running Windows and Firefox. Why do we
say this about consciousness?
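
A toy sketch of the contrast (my illustration, not anything from Jack's
paper or the earlier posts; the squaring function and the three inputs are
arbitrary): a box that realizes the function and a box that merely replays
a recording of one run agree on every input that actually occurred, and
come apart only on the counterfactual inputs Brent mentions.

# Toy contrast between a functional box and a recording of one particular run.
def functional_box(x):
    return x * x                      # realizes the input/output function

actual_inputs = [2, 5, 7]             # the inputs that happened to occur
recording = {x: functional_box(x) for x in actual_inputs}

def recorded_box(x):
    return recording[x]               # merely replays the stored run

# Indistinguishable on the history that actually happened ...
assert all(functional_box(x) == recorded_box(x) for x in actual_inputs)

# ... but only the functional box has an answer for an input that never occurred.
try:
    recorded_box(3)
except KeyError:
    print("the recording has no response to a counterfactual input")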


-- 
Stathis Papaioannou




Re: Jack's partial brain paper

2010-03-17 Thread Brent Meeker

On 3/17/2010 9:28 PM, Stathis Papaioannou wrote:

On 18 March 2010 04:34, Brent Meeker meeke...@dslextreme.com wrote:

   

However I think there is something in the above that creates the "just a
recording" problem.  It's the hypothesis that the black box reproduces the
I/O behavior.  This implies the black box realizes a function, not a
recording.  But then the argument slips over to replacing the black box with
a recording which just happens to produce the same I/O, and we're led to the
absurdity that a recording is conscious.  But what step of the argument
should we reject?  The plausible possibility is that it is the different
response to counterfactuals that the functional box and the recording
realize.  That would seem like magic - a different response depending on all
the things that don't happen - except that in the MWI of QM all those
counterfactuals are available to make a difference.
 

I think that was Jack's problem with the fading qualia argument: it
would imply that a recording or random process could be conscious,
which is a no-no. He therefore contrives to explain how fading qualia
(with identical behaviour) could in fact happen. But I don't buy it: I
still think the idea of the partial zombie is incoherent.

If a chunk were removed from my computer's CPU and replaced with a
black box which accidentally reproduces the I/O behaviour of the
missing part, the computer would function perfectly normally. We would
not say that it isn't really running Windows and Firefox. Why do we
say this about consciousness?
   


Is it coherent to say a black box accidentally reproduces the I/O?  It
is over some relatively small number of I/Os, but over a number and range
large enough to sustain human behavior - that seems very doubtful.
One would be tempted to say the black box was obeying a natural law.
It would be the same as the problem of induction.  How do we know
natural laws are consistent?  Because we define them to be so.
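
A back-of-the-envelope way to see why (the binary-choice model and the
numbers here are my assumptions, nothing more): if each exchange allowed
only two equally likely outputs, a box answering at random matches all n of
them with probability 2^-n, which is already negligible long before n
reaches anything like the number of exchanges needed to sustain human
behavior.

# Probability that a randomly answering box matches the required output on
# every one of n exchanges, assuming just 2 equally likely outputs each time.
for n in (10, 100, 1000):
    print(n, 2.0 ** -n)
# prints roughly 1e-3, 8e-31 and 9e-302 respectively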


Brent
