On 12/14/2011 2:09 PM, Joseph Knight wrote:



On Wed, Dec 14, 2011 at 1:51 PM, meekerdb <meeke...@verizon.net> wrote:

    On 12/14/2011 10:40 AM, Joseph Knight wrote:


    On Tue, Dec 13, 2011 at 11:32 PM, Kim Jones <kimjo...@ozemail.com.au> wrote:

        Any chance someone might précis for me/us dummies out here in maybe 3 sentences what Tim Maudlin's argument is? Nothing too heavy - just a quick refresher.

    I'll try, but with a few more than 3 sentences. Suppose the consciousness of a machine can be said to supervene on the running of some program X. Instead, we can have the machine run a constant program Y that gives the same output as X for one given input. In other words, it cannot "handle" counterfactual inputs, because it is just a constant program that does the same thing no matter what. Surely such a machine is not conscious. It would be like deciding "I will answer A B D B D D C A C..." on the Chemistry test I am about to run off and take: even if I happened to get them all correct, I wouldn't really know Chemistry, right?
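    To make the X/Y distinction concrete, here is a toy Python sketch (my own illustration, not anything from Maudlin's paper; program_X, program_Y and the recorded input are names made up for the example):

        # X does a genuine computation that depends on its input.
        def program_X(inp):
            return sum(ord(c) for c in inp) % 7

        # Record what X does on one particular input...
        RECORDED_INPUT = "H2O"
        RECORDED_OUTPUT = program_X(RECORDED_INPUT)

        # ...and let Y simply replay that answer, whatever it is asked.
        def program_Y(inp):
            return RECORDED_OUTPUT

        # Y agrees with X on the recorded input but fails the counterfactuals:
        assert program_Y(RECORDED_INPUT) == program_X(RECORDED_INPUT)
        print(program_X("CH4"), program_Y("CH4"))  # 2 5 -- they differ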

    But I think Russell has reasonably questioned this.  You say X wouldn't know chemistry.  But that's a matter of intelligence, not necessarily consciousness.  We already know that computers can be intelligent, and there's nothing mysterious about intelligence "supervening" on machines.  Intelligence includes returning appropriate outputs for many different inputs.  But does consciousness?


I was really just using my Chemistry test as an imperfect analogy for the machine running Y being conscious (or not), so it doesn't affect the rest of the argument. But I see your point. Would you argue that a constant program (giving the same output no matter the input) can be conscious in principle?

I don't think something can be conscious in the human sense unless it is intelligent. The question is whether something can be intelligent without being conscious. I incline to think not, but I'm not sure. I think the interesting point is that there tends to be an unjustified slip from consciousness to intelligence in some arguments. In particular the "323" argument implicitly assumes that not-intelligent => not-conscious.

Brent

Maudlin assumes that such a program cannot be conscious because, in his words, "it would make a mockery of the computational theory of mind." I am agnostic. In my opinion the Filmed Graph argument is more convincing than Maudlin's, because with Maudlin one can still fall back to the position "consciousness can in principle supervene on a constant program".

(For those interested, here is the article itself <http://www.finney.org/%7Ehal/maudlin.pdf>)


    Brent



    So consciousness doesn't supervene on Y. But Maudlin (basically) shows that you can just add some additional parts to the machine that handle the counterfactuals as needed. These extra parts don't actually do anything, but their "presence" means the machine now could exactly emulate program X, i.e., is conscious. So a computationalist is forced to assert that the machine's consciousness supervenes on the presence of these extra parts, which in fact perform no computations at all.
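    Continuing the toy sketch above (again my own illustration, not Maudlin's actual construction; it reuses the made-up names from before): the "extra part" is just the branch that would handle the counterfactuals, and on the one input that actually occurs it is never exercised.

        def program_Y_with_inert_parts(inp):
            if inp == RECORDED_INPUT:
                return RECORDED_OUTPUT  # the only path that ever runs in fact
            # Never reached on the recorded input, yet its mere presence is what
            # restores counterfactual correctness:
            return program_X(inp)

    On the recorded input the added branch changes nothing about what the machine actually does; it only changes what it would have done.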

    I think what Russell said about this earlier -- that in a multiverse the extra parts are doing things, so consciousness then appears at the scale of the multiverse -- is fascinating. But I am out of time. Hope this helped. I would recommend reading the original paper for the details.



