Hal wrote:

> We had some discussion of Maudlin's paper on the everything-list in 1999.
> I summarized the paper at http://www.escribe.com/science/theory/m898.html
> .
> Subsequent discussion under the thread title "implementation" followed
> up; I will point to my posting at
> http://www.escribe.com/science/theory/m962.html regarding Bruno's version
> of Maudlin's result.

Thanks for those links.

> I suggested a flaw in Maudlin's argument at
> http://www.escribe.com/science/theory/m1010.html with followup
> http://www.escribe.com/science/theory/m1015.html .
> In a nutshell, my point was that Maudlin fails to show that physical
> supervenience (that is, the principle that whether a system is
> conscious or not depends solely on the physical activity of the system)
> is inconsistent with computationalism.  What he does show is that you
> can change the computation implemented by a system without altering it
> physically (by some definition).  But his desired conclusion does not
> follow logically, because it is possible that the new computation is
> also conscious.

So the system instantiates two different computations, when all things are
considered. The first instantiation is when the counterfactuals are enabled
(block removed) and the second instantiation is when the counterfactuals are
disabled (block added). Because there are two different computations, we
can't conclude that the second instantiation does not lead to a phenomenal
state of consciousness. But would you agree, though, that there does not
appear to be sufficient physical activity taking place in the second
instantiation to sustain phenomenal awareness? After all, Maudlin went to a
lot of trouble to construct a lazy machine! To carry out the second
computation, all that needs to happen is that the armature travel from left
to right emptying or filling troughs (or, as in my summary, triggering tape
locations). It is supposed to be transparently obvious from the lack of
activity that if it is conscious then it can't be as a result of physical
activity. Now you maintain it is conscious, so wherein lies the physical
activity?

> (In fact, I argued that the new computation is very plausibly conscious,
> but that doesn't even matter, because it is sufficient to consider that
> it might be, in order to see that Maudlin's argument doesn't go through.
> To repair his argument it would be necessary to prove that the altered
> computation is unconscious.)
> You can follow the thread and date index links off the messages above
> to see much more discussion of the issue of implementation.

OK, I'm making my way through those. Apologies to the list if the points I
raise have been covered previously.

Brian Scurfield
