Hi Gentlemen,

I start out with the bias that the brain, as a neural network with ~10^11 neurons, given the exogenous and endogenous inputs presented to it, continuously computes our perception of the world around us. Some neuroscientists suggest that each neuron in the brain is separated by only a few synapses from every other neuron. No nerve impulse ever encounters a dead end in the brain, and the same bits (and pieces) of information may be processed simultaneously at multiple brain sites. This is a massively parallel architecture, and even without a thorough understanding of qualia, it is difficult (for me) to understand how the loss of a few neurons here and there would affect qualia. Without redundancy, we could not recover from the minor brain insults, such as common ischemia, that we accumulate. Operationally, the brain's neurons make their most significant connections with only certain specific neurons, but there are parallel circuits.

With the recent introduction of very high resolution MRI, a lot of damage is observed in all brains as we age. This has posed a problem for clinicians and neuroscientists: what is a normal brain? One of the midwestern medical centers has undertaken a project with a few thousand apparently healthy individuals, with no history of mental health issues, in an effort to learn how much damage can exist in a brain that we would still consider "normal."
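To make the redundancy point concrete, here is a minimal sketch in Python (a toy averaging "network" with invented numbers, not a model of real cortex): when information is spread across many units in parallel, knocking out a few of them barely moves the readout.

import random

# Toy "network": 1000 units, each holding a noisy copy of one signal.
# The readout is the average, a stand-in for massively parallel,
# redundant coding. (Illustrative only; real neurons are far richer.)
random.seed(1)
SIGNAL = 0.7
units = [SIGNAL + random.gauss(0, 0.05) for _ in range(1000)]

def readout(active_units):
    return sum(active_units) / len(active_units)

healthy = readout(units)

# "Minor brain insult": silently lose 2% of the units at random.
survivors = random.sample(units, int(len(units) * 0.98))
damaged = readout(survivors)

print(f"healthy readout:          {healthy:.4f}")
print(f"after losing 2% of units: {damaged:.4f}")
# The readout is nearly unchanged: the signal was never stored in any
# single unit, so scattered losses leave the computed percept intact.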

Astronauts in orbit have commented on observing bright flashes, which are thought to be cosmic rays / high-energy protons ripping through the brain, optic nerve, and retina. Does this change the astronauts' qualia? Not as far as we know. However, with very long exposure, such as on the proposed trip to Mars, there is a concern that the astronauts would arrive "brain dead", apparently something different from being a zombie. Just as an aside, it has been commented that with billions of circuits operating in (feedback?) loops, it is impossible to have entirely rational thoughts or purely emotional reactions, but that is another subject for another time.

William




On Mar 17, 2010, at 11:12 AM, Brent Meeker wrote:

On 3/17/2010 10:01 AM, Bruno Marchal wrote:


On 17 Mar 2010, at 13:47, HZ wrote:

I'm quite confused about the state of zombieness. If the requirement for zombiehood is that it doesn't understand anything at all but behaves as if it does, what makes us not zombies? How do we know we are not? But more importantly, are there known cases of zombies? Perhaps a silly question, because it might be just a thought experiment; but if so, I wonder what evidence one is so freely speaking from, especially when it is connected to cognition, about which we now (should) know more. The questions seem related, because either we don't know whether we are zombies, or one can solve the problem of zombie identification. I guess I'm new in the zombieness business.



I know I am conscious, and I can doubt all content of my consciousness except this one: that I am conscious.
I cannot prove that I am conscious, nor can I prove it to others.

Dolls and sculptures are, with respect to what they represent, a sort of zombie if they are human in appearance. Tomorrow we may be able to put in a museum an artificial machine imitating a sleeping human, in such a way that we might be confused and believe it is a dreaming human being ...

The notion of zombie makes sense (logical sense). Its existence may depend on the choice of theory. With the axiom of comp, a counterfactually correct relation between numbers defines the channel through which consciousness flows (selects the consistent extensions). So with comp we could argue that, insofar as we are bodies, we are zombies, but that from our first-person perspective we never are.


But leaving the zombie definition and identification aside, I think current science would/should see no difference between consciousness and cognition, the former being an emergent property of the latter,


I would have said the contrary:

consciousness -> sensibility -> emotion -> cognition -> language -> recognition -> self-consciousness -> ...

(and: number -> universal number -> consciousness -> ...)

Something like that follows, I argue, from the assumption that we are Turing emulable at some (necessarily unknown) level of description.

and just as there are levels of cognition there are levels of consciousness. Between the human being and the other animals there is a wide gradation of levels; it is not that any other animal lacks 'qualia'. Perhaps there is an upper level defined by computational limits, such that once that limit is reached one just remains there; but consciousness seems to depend on the complexity of the brain (size, convolutions, or whatever provides the full power) while not being disconnected from cognition. In this view, only damaging the cognitive capacities of a person would damage its 'qualia'; the 'qualia' could not be damaged except by damaging the brain, which would likewise damage the cognitive capabilities. In other words, there seems to be no cognition/consciousness duality as long as there is no brain/mind one. The use of the term 'qualia' here looks like a remake of the mind/body problem.


Qualia are the part of the mind consisting in directly apprehensible subjective experience. Typical examples are pain, seeing red, smelling, feeling something, ... They are, roughly, the non-transitive part of cognition.

The question here is not the existence of degrees of consciousness, but the existence of a link between a possible variation of consciousness and the presence of a non-causal perturbation during a particular run of a brain or a machine.

If Deep Blue wins a chess tournament without ever having used register 344, no doubt it would have won just the same had register 344 been broken.

Not with probability 1.0, because given QM the game might have gone differently (and in other worlds did go differently) and required register 344.

Some people seem to believe that if Deep Blue was conscious in the first case, it could lose consciousness in the second case. I don't think this is tenable when we assume that we are Turing emulable.
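The unused-register point can be made mechanical with a toy sketch (hypothetical Python, nothing like Deep Blue's real architecture): trace every register access during a run, and the trace, like the result, is identical whether or not an untouched register is broken.

class ToyMachine:
    # A toy register machine that logs every register access it performs.
    def __init__(self, broken=frozenset()):
        self.regs = [0] * 512
        self.broken = broken          # registers that would fault if touched
        self.trace = []               # every (op, register, value) executed

    def write(self, r, v):
        if r in self.broken:
            raise RuntimeError(f"register {r} is dead")
        self.regs[r] = v
        self.trace.append(("write", r, v))

    def run(self):
        # A computation that happens to use only registers 0..9.
        for r in range(10):
            self.write(r, r * r)
        return sum(self.regs[:10])

good = ToyMachine()
damaged = ToyMachine(broken=frozenset({344}))

assert good.run() == damaged.run()    # same outcome...
assert good.trace == damaged.trace    # ...and the very same activity
print("register 344 was never consulted, so breaking it changes nothing")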

But the world is only Turing emulable if it is deterministic, and it's only deterministic if "everything" happens, as in MWI QM.

Brent

The reason is that consciousness is not ascribable to any particular implementation, but only to an abstract but precise infinity of computations, already 'realized' in elementary arithmetic.

Bruno




On Wed, Mar 17, 2010 at 11:34 AM, Stathis Papaioannou
<stath...@gmail.com> wrote:
On 17 March 2010 05:29, Brent Meeker <meeke...@dslextreme.com> wrote:

I think this is a dubious argument based on our lack of understanding of qualia. Presumably one has many thoughts that do not result in any overt action. So if I lost a few neurons (which I do continuously) it might mean that there are some thoughts I don't have or some associations I don't make, so eventually I may "fade" to the level of consciousness of my dog. Is my
dog a "partial zombie"?

It's certainly possible that qualia can fade without the subject noticing, either because the change is slow and gradual or because the change fortuitously causes a cognitive deficit as well. But this is not what the fading qualia argument is about. The argument requires consideration of a brain change which would cause an unequivocal change in consciousness, such as removal of the subject's occipital lobes. If this happened, the subject would go completely blind: he would be unable to describe anything placed in front of his eyes, and he would report that he could not see anything at all. That's what it means to go blind.

But now consider the case where the occipital lobes are replaced with a black box that reproduces the I/O behaviour of the occipital lobes, but which is postulated to lack visual qualia. The rest of the subject's brain is intact and is forced to behave exactly as it would if the change had not been made, since it is receiving normal inputs from the black box. So the subject will correctly describe anything placed in front of him, and he will report that everything looks perfectly normal. More than that, he will have an appropriate emotional response to what he sees, be able to paint it or write poetry about it, make a working model of it from an image he retains in his mind: whatever he would normally do if he saw something. And yet, he would be a partial zombie: he would behave exactly as if he had normal visual qualia while completely lacking visual qualia.

Now, it is part of the definition of a full zombie that it doesn't understand that it is blind, since a requirement for zombiehood is that it doesn't understand anything at all; it just behaves as if it does. But if the idea of qualia is meaningful at all, you would think that a sudden drastic change like going blind should produce some realisation in a cognitively intact subject; otherwise how do we know that we aren't blind now, and what reason would we have to prefer normal vision to zombie vision? The conclusion is that it isn't possible to make a device that replicates brain function but lacks qualia: either it is not possible to make such a device at all, because the brain is not computable, or, if such a device could be made (even a magical one), then it would necessarily reproduce the qualia as well.
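The functional-equivalence step of the argument has a direct software analogue (a hedged sketch; the module names are invented for illustration): if a replacement reproduces the original module's I/O exactly, nothing downstream can ever tell the difference.

def occipital_lobes(retinal_input):
    # Stand-in for the biological visual module.
    return f"percept({retinal_input})"

class BlackBox:
    # A recorded input->output table: reproduces the I/O behaviour of
    # occipital_lobes by construction, whatever is (or is not) going
    # on inside.
    def __init__(self, table):
        self.table = table
    def __call__(self, retinal_input):
        return self.table[retinal_input]

def rest_of_brain(visual_module, scene):
    percept = visual_module(scene)
    return f"I see {percept} and it looks perfectly normal."

scenes = ["a red apple", "a sunset", "this sentence"]
box = BlackBox({s: occipital_lobes(s) for s in scenes})

for scene in scenes:
    assert rest_of_brain(occipital_lobes, scene) == rest_of_brain(box, scene)
print("every downstream report is identical: the swap is behaviourally invisible")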

I think the question of whether there could be a philosophical zombie is ill posed, because we don't know what is responsible for qualia. I speculate that they are tags of importance or value that get attached to perceptions so that they are stored in short-term memory. Then, because evolution cannot redesign things, the same tags are used for internal thoughts that seem important enough to put in memory. If this is the case, then it might be possible to design a robot which used a different method of evaluating experience for storage, and it would not have qualia like humans; but would it have some other kind of qualia? Since we don't know what qualia are in a third-person sense, there seems to be no way to answer that.
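For what the tagging speculation might look like in code, here is a hedged sketch (names, thresholds, and percept fields all invented): two systems with the same inputs but different "importance tags" end up remembering different worlds.

from collections import deque

def value_tagger(percept):
    # Evolution-style tag: emotional weight decides what gets stored.
    return percept["valence"] > 0.5

def novelty_tagger(percept):
    # The robot's different policy: store whatever is surprising.
    return percept["novelty"] > 0.5

def experience(stream, tagger, capacity=5):
    memory = deque(maxlen=capacity)   # short-term memory
    for percept in stream:
        if tagger(percept):           # the "tag" marking importance
            memory.append(percept["label"])
    return list(memory)

stream = [
    {"label": "pain in hand", "valence": 0.9, "novelty": 0.2},
    {"label": "grey wall",    "valence": 0.1, "novelty": 0.1},
    {"label": "odd flicker",  "valence": 0.2, "novelty": 0.9},
]

print("human-style memory:", experience(stream, value_tagger))
print("robot-style memory:", experience(stream, novelty_tagger))
# Same stream, different tagging policy, different remembered contents,
# which is why "does the robot have some other kind of qualia?" is so
# hard to answer in the third person.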


--
Stathis Papaioannou



http://iridia.ulb.ac.be/~marchal/




