On 11 March 2010 13:57, Jack Mallah <jackmal...@yahoo.com> wrote:
> --- On Mon, 3/8/10, Stathis Papaioannou <stath...@gmail.com> wrote:
>> In the original fading qualia thought experiment the artificial neurons 
>> could be considered black boxes, the consciousness status of which is 
>> unknown. The conclusion is that if the artificial neurons lack 
>> consciousness, then the brain would be partly zombified, which is absurd.
>
> That's not the argument Chalmers made, and indeed he couldn't have, since he 
> believes zombies are possible; he instead talks about fading qualia.
>
> If you start out believing that computer zombies are NOT possible, the 
> original thought experiment is moot; you already believe the conclusion.   
> His argument is aimed at dualists, who are NOT computationalists to start out.
>
> Since partial consciousness is possible, which he didn't take into account, 
> his argument _fails_; a dualist who does believe zombies are possible should 
> have no problem believing that partial zombies are.  So dualists don't have 
> to be computationalists after all.

A partial zombie is very different from a full zombie! The thing about
zombies is, although you can't tell if someone is a zombie, you know
with absolute certainty that *you* aren't a zombie (assuming that you
aren't a zombie, of course; if you are a zombie then you don't "know"
anything at all except in a mindless zombie way). If your visual
qualia were to fade but your behaviour remain unchanged, then that is
equivalent to partial zombification, and partial zombification is
absurd, impossible or meaningless (take your pick). The argument is
simply this: if zombie vision in an otherwise intact person is
possible, then I could have zombie vision right now. I behave as if I
can see normally, since I am typing this email, but that is consistent
with zombie vision. I am also absolutely convinced that I can see
normally, but that is also consistent with zombie vision. So it seems
that zombie vision is neither subjectively nor objectively different
from normal vision, which means it is not different from normal vision
in any way that matters. You might still say that zombie vision differs
in some metaphysical sense, a category neither objective nor
subjective, but then you are in the supernatural domain.

>> I think this holds *whatever* is in the black boxes: computers, biological 
>> tissue, a demon pulling strings or nothing.
>
> Partial consciousness is possible and again ruins any such argument.  If you 
> don't believe to start out that consciousness can be based on "whatever" 
> (e.g. "nothing"), you don't have any reason to accept the conclusion.

It goes against the grain of functionalism to assume that
consciousness is due primarily to a particular physical process. The core idea
is that if the black box replicates the function of a component in a
system, then any mental states that the system has will also be
replicated. Normally this is taken as implying a kind of materialism
since the black box won't be able to replicate the behaviour of brain
components unless it contains a complex physical mechanism. But if,
miraculously, it could - if the black box were empty but the remaining
brain tissue behaved normally anyway - then the consciousness of the
system would remain intact. If a chunk were taken out of the CPU in a
computer but, miraculously, the remaining parts of the CPU behaved
exactly the same as if nothing had happened, then that magical CPU
would be just as good as an intact one, and the computations it
performs just as valid.

>> whatever is going on inside the putative zombie's head, if it reproduces the 
>> I/O behaviour of a human, it will have the mind of a human.
>
> That is behaviorism, not computationalism, and I certainly don't believe it.  
> I wouldn't say that a computer that uses a huge lookup table algorithm would 
> be conscious.

Well, functionalism reduces to a type of behaviourism. Functionalism
is OK with replacing components of a system with functionally
identical analogues, regardless of the internal processes of the
functional analogues. If the internal processes don't matter then it
should not matter if a replaced neuron, for example, is driven by a
lookup table. In fact, a practical computational model of a neuron
would probably contain lookup tables as a matter of course, and it
would seem absurd to claim that the resulting consciousness is
inversely proportional to the number of such devices used.
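
To make that concrete, here is a rough sketch (Python, purely
illustrative - the grid size, weights and inputs are made up) of a
neuron model whose activation function is read from a precomputed
lookup table rather than computed directly. From the outside the two
versions are functionally interchangeable, up to the table's
resolution:

import math

def make_tanh_table(lo=-5.0, hi=5.0, steps=10001):
    # Precompute tanh on a fixed grid, the sort of table a practical
    # neuron simulation might use for speed.
    step = (hi - lo) / (steps - 1)
    table = [math.tanh(lo + i * step) for i in range(steps)]
    def lookup(x):
        # Clamp to the grid and round to the nearest precomputed point.
        i = round((min(max(x, lo), hi) - lo) / step)
        return table[int(i)]
    return lookup

tanh_lut = make_tanh_table()

def neuron_direct(inputs, weights, bias):
    # Activation computed directly.
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

def neuron_lut(inputs, weights, bias):
    # Same I/O behaviour, but the nonlinearity comes from the table.
    return tanh_lut(sum(w * x for w, x in zip(weights, inputs)) + bias)

print(neuron_direct([0.3, -0.7, 1.2], [0.5, 0.1, -0.4], 0.05))
print(neuron_lut([0.3, -0.7, 1.2], [0.5, 0.1, -0.4], 0.05))

The functionalist point is just that nothing about the neuron's role
in the wider network depends on which of the two versions is used.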

>> The requirement that a computer be able to handle the counterfactuals in 
>> order to be conscious seems to have been brought in to make 
>> computationalists feel better about computationalism.
>
> Not at all.  It was always part of the notion of computation.  Would you buy 
> a PC that only plays a movie?  It must handle all possible inputs in a 
> reliable manner.

But if I did buy a PC that only did addition, for example, I don't see
how it would make sense to say that it isn't really doing that
computation, that it's not real addition. It might not qualify as a
"computer", but the one type of computation it does is still correct
and valid. But if there is some primitive consciousness associated
with addition, the idea is that the crippled computer would have it to
a lesser degree than a second, general-purpose computer that
goes through exactly the same sequence of physical states, simply
because the second machine might potentially do other computations as
well. It is this idea that seems to me to be ad hoc, but also quite
unnecessary: nothing is lost by dropping it.
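
As a toy illustration of what I mean (Python again, with made-up
numbers), here is a general-purpose adder alongside a "crippled"
device hard-wired for a single pair of inputs. On that one input they
step through exactly the same internal states; they differ only in
what they would do with inputs that never actually arrive:

def ripple_add(a, b, bits=8):
    # General-purpose bitwise adder; returns the result plus the trace
    # of (carry, sum_bit) states it passes through.
    trace, carry, result = [], 0, 0
    for i in range(bits):
        x, y = (a >> i) & 1, (b >> i) & 1
        s = x ^ y ^ carry
        carry = (x & y) | (carry & (x ^ y))
        result |= s << i
        trace.append((carry, s))
    return result, trace

def crippled_add(a, b, bits=8):
    # Only handles the single case it was built for; any other
    # (counterfactual) input derails it.
    if (a, b) != (42, 27):
        raise RuntimeError("no circuitry for this input")
    return ripple_add(a, b, bits)  # identical internal steps on that input

assert ripple_add(42, 27) == crippled_add(42, 27)  # same result, same state sequence

The question is whether whatever primitive consciousness is associated
with the addition could differ between the two runs when the actual
sequence of states is identical.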

>> Brains are all probabilistic in that disaster could at any point befall them 
>> causing them to deviate widely from normal behaviour
>
> It is not a problem, it just seems like one at first glance.  Such cases 
> include input to the formal system; for some inputs, the device halts or acts 
> differently.  Hence my talk of "derailable computations" in my MCI paper.
>
>> or else prevent them from deviating at all from a rigidly determined pathway
>
> If that were done, that would change what computation is being implemented.  
> Depending on how it was done, it might or might not affect consciousness.  We 
> can't do such an experiment.

We can do a thought experiment. A brain is rigged to explode unless it
goes down one particular pathway. Does it change the computation being
implemented if it is given the right input so that it does go down
that pathway? Does it change the consciousness? Is it different to a
brain that lacks the connections to begin with so that it does not
explode but simply stops working unless it is provided with the right
input? What do you lose if you say both brains have exactly the same
conscious experience as a normal brain which goes down that pathway?
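
A toy version of the comparison, with the obvious caveat that real
brains are not three-line Python functions: three processes that do
exactly the same thing on the one input they actually receive, and
differ only in branches never taken:

def normal_brain(path):
    # Handles any pathway.
    return [step.upper() for step in path]

def rigged_brain(path):
    # Explodes on any deviation from the intended pathway.
    if path != ["a", "b", "c"]:
        raise RuntimeError("boom")
    return [step.upper() for step in path]

def disconnected_brain(path):
    # Lacks the connections for other pathways: it simply stops working.
    if path != ["a", "b", "c"]:
        return None
    return [step.upper() for step in path]

right_input = ["a", "b", "c"]
assert normal_brain(right_input) == rigged_brain(right_input) == disconnected_brain(right_input)

On the input they actually get, the three are step-for-step
indistinguishable; the difference lies entirely in the counterfactuals.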



-- 
Stathis Papaioannou

