On 16 Mar 2010, at 19:29, Brent Meeker wrote:
On 3/16/2010 6:03 AM, Stathis Papaioannou wrote:
On 16 March 2010 20:29, russell standish <li...@hpcoders.com.au>
I've been following the thread on Jack's partial brains paper,
although I've been too busy to comment. I did get a moment to read
the paper this evening, and I was abruptly stopped by a comment:
"On the second hypothesis [Sudden Disappearing Qualia], the
replacement of a single neuron could be responsible for the
vanishing of an entire field of conscious experience. This seems
antecedently implausible, if not entirely bizarre."
Why? Why isn't it like the straw that broke the camel's back? When
pulling apart a network, link by link, there will be a link removed
that causes the network to go from being almost fully connected to
being disconnected. It need not be the same link each time; it will
depend on the order in which the links are removed.
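Russell's camel's-back point can be sketched in code (a toy illustration of mine, not anything from the paper): strip links one at a time from a small ring network and watch for the single removal that first disconnects it. Which link plays that role depends entirely on the removal order.

```python
# Toy sketch (my illustration): find the one edge removal that first
# disconnects a network when edges are deleted in random order.
import random
from collections import defaultdict

def is_connected(nodes, edges):
    """Breadth-first search connectivity check."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = {next(iter(nodes))}
    frontier = list(seen)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u] - seen:
                seen.add(v)
                nxt.append(v)
        frontier = nxt
    return seen == nodes

def critical_edge(nodes, edges, seed=0):
    """Remove edges in a random order; return the first edge whose
    removal leaves the network disconnected."""
    rng = random.Random(seed)
    remaining = list(edges)
    rng.shuffle(remaining)
    while remaining:
        edge = remaining.pop()
        if not is_connected(nodes, remaining):
            return edge
    return None

# A 6-node ring: removing any one link leaves a connected path,
# so it is always the *second* removal that disconnects it -- but
# which particular link that is varies with the shuffle order.
nodes = range(6)
edges = [(i, (i + 1) % 6) for i in range(6)]
print(critical_edge(nodes, edges))
```

Different seeds pick out different "last straws", which is exactly the point: there is always a sudden transition, but no particular link is intrinsically the critical one.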
I made a similar criticism against Derek Parfit's Napoleon thought
experiment a couple of years ago on this list - I understand that
fading qualia is a popular intuition, but it just seems wrong to
me. Can anyone give me a convincing reason why the suddenly
disappearing qualia notion is absurd?
Fading qualia would result in a partial zombie, and that concept is
self-contradictory. It means I could be a partial zombie now,
completely blind since waking up this morning, but behaving normally
and unaware that anything unusual had happened. The implication of
this is that zombie vision is just as good as normal vision in every
objective and subjective way, so we may as well say that it is the
same as normal vision. In other words, the qualia can't fade while
the behaviour of the brain remains unchanged.
I think this is a dubious argument based on our lack of
understanding of qualia. Presumably one has many thoughts that do
not result in any overt action. So if I lost a few neurons (which I
do continuously) it might mean that there are some thoughts I don't
have or some associations I don't make, so eventually I may "fade"
to the level of consciousness of my dog. Is my dog a "partial zombie"?
A priori the dog is not a zombie at all. It may be like us after
taking some strong psychoactive substance that disables it
intellectually. If enough neurons are disabled, it may lose Löbianity,
but not yet necessarily consciousness. If even more neurons are
disabled, it will lose the ability to manifest its consciousness
relative to you, and it will be senseless to attribute consciousness
to it, but from its own perspective it will be "another dog"
or "another universal machine" in Platonia.
I think the question of whether there could be a philosophical
zombie is ill posed because we don't know what is responsible for
qualia. I speculate that they are tags of importance or value that
get attached to perceptions so that they are stored in short term
memory. Then, because evolution cannot redesign things, the same
tags are used for internal thoughts that seem important enough to
put in memory. If this is the case then it might be possible to
design a robot which used a different method of evaluating
experience for storage and it would not have qualia like humans -
but would it have some other kind of qualia? Since we don't know
what qualia are in a third person sense, there seems to be no way to tell.
If the robot can reason logically and believes in the induction
axioms, it will be Löbian, and the 8 arithmetical hypostases will
necessarily apply. In that case, if you find Theaetetus' theory of
knowledge plausible, then it is plausible that it has a personhood,
and its qualia are described by S4Grz1, X1* and Z1*, whatever
means of storage are used.
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com.