On 5/24/2023 10:41 AM, Stathis Papaioannou wrote:


On Thu, 25 May 2023 at 02:14, Brent Meeker <meekerbr...@gmail.com> wrote:



    On 5/24/2023 12:19 AM, Stathis Papaioannou wrote:


    On Wed, 24 May 2023 at 15:37, Jason Resch <jasonre...@gmail.com>
    wrote:



        On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou
        <stath...@gmail.com> wrote:



            On Wed, 24 May 2023 at 04:03, Jason Resch
            <jasonre...@gmail.com> wrote:



                On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou
                <stath...@gmail.com> wrote:



                    On Tue, 23 May 2023 at 21:09, Jason Resch
                    <jasonre...@gmail.com> wrote:

                        As I see this thread, Terren and Stathis are
                        both talking past each other. Please either
                        of you correct me if I am wrong, but in an
                        effort to clarify and perhaps resolve this
                        situation:

                        I believe Stathis is saying that a functional
                        substitution having the same fine-grained
                        causal organization *would* have the same
                        phenomenology, the same experience, and the
                        same qualia as the brain with the same
                        fine-grained causal organization.

                        Therefore, there is no disagreement between
                        your positions with regard to symbol
                        grounding, mappings, etc.

                        When you both discuss the problem of
                        symbology, bits, etc., I believe this is
                        partly responsible for why you are both
                        talking past each other, because there are
                        many levels involved in brains (and
                        computational systems). I believe you were
                        discussing completely different levels of
                        the hierarchical organization.

                        There are high-level parts of minds, such as
                        ideas, thoughts, feelings, qualia, etc., and
                        there are low-level parts, be they neurons,
                        neurotransmitters, atoms, quantum fields, and
                        laws of physics as in human brains, or
                        circuits, logic gates, bits, and instructions
                        as in computers.

                        I think when Terren mentions a "symbol for
                        the smell of grandmother's kitchen" (GMK) the
                        trouble is we are crossing a myriad of
                        levels. The quale or idea or memory of the
                        smell of GMK is a very high-level feature of
                        a mind. When Terren asks for or discusses a
                        symbol for it, a complete answer/description
                        for it can only be supplied in terms of a
                        vast amount of information concerning low
                        level structures, be they patterns of neuron
                        firings, or patterns of bits being processed.
                        When we consider things down at this low
                        level, however, we lose all context for what
                        the meaning, idea, and quale are or where or
                        how they come in. We cannot see or find the
                        idea of GMK in any neuron, no more than we
                        can see or find it in any neuron.

                        Of course it should then seem deeply
                        mysterious, if not impossible, how we get
                        "it" (GMK or otherwise) from "bit", but to
                        me this is no greater a leap than how we get
                        "it" from a bunch of cells squirting ions
                        back and forth. Trying to understand a
                        smartphone by looking at the flows of
                        electrons is a similar kind of problem: it
                        would seem just as difficult or impossible to
                        explain and understand the high-level
                        features and complexity in terms of the
                        low-level simplicity.

                        This is why it's crucial to bear in mind, and
                        explicitly discuss, the level one is
                        operating on when one discusses symbols,
                        substrates, or qualia. In summary, I think a
                        chief reason you have been talking past each
                        other is that you are each operating at
                        different assumed levels.

                        Please correct me if you believe I am
                        mistaken, and know that I offer my
                        perspective only in the hope it might help
                        the conversation.


                    I think you’ve captured my position. But in
                    addition I think replicating the fine-grained
                    causal organisation is not necessary in order to
                    replicate higher level phenomena such as GMK. By
                    extension of Chalmers’ substitution experiment,


                Note that Chalmers's argument is based on assuming
                the functional substitution occurs at a certain level
                of fine-grainedness. If you lose this step and look
                at only the top-most input-output of the mind as a
                black box, then you can no longer distinguish a rock
                from a dreaming person, nor a calculator computing
                2+3 from a human computing 2+3, and one also runs
                into the Blockhead "lookup table" argument against
                functionalism.


            Yes, those are perhaps problems with functionalism. But a
            major point in Chalmers' argument is that if qualia were
            substrate-specific (and hence functionalism false), it
            would be possible to make a partial zombie, or an entity
            whose consciousness and behaviour diverged from the point
            at which the substitution was made. And this argument
            works not just by replacing the neurons with silicon
            chips, but by replacing any part of the human with
            anything that reproduces the interactions with the
            remaining parts.



        How deeply do you have to go when you consider or define
        those "other parts" though? That seems to be a critical but
        unstated assumption, and something that depends on how finely
        grained you consider the relevant/important parts of a brain
        to be.

        For reference, this is what Chalmers says:


        "In this paper I defend this view. Specifically, I defend a
        principle of organizational invariance, holding that
        experience is invariant across systems with the same
        fine-grained functional organization. More precisely, the
        principle states that given any system that has conscious
        experiences, then any system that has the same functional
        organization at a fine enough grain will have qualitatively
        identical conscious experiences. A full specification of a
        system's fine-grained functional organization will fully
        determine any conscious experiences that arise."
        https://consc.net/papers/qualia.html

        By substituting a coarse-grained functional organization for
        a fine-grained one, you change the functional definition and
        can no longer guarantee identical experiences, nor identical
        behaviors, in all possible situations. They are no longer
        "functional isomorphs" as Chalmers's argument requires.

        By substituting a recording of a computation for a
        computation, you replace a conscious mind with a tape
        recording of the prior behavior of a conscious mind. This is
        what happens in the Blockhead thought experiment. The result
        is something that passes a Turing test, but which is itself
        not conscious (though creating such a recording requires
        prior invocation of a conscious mind or extraordinary luck).


    The replaced part must of course be functionally identical,
    otherwise both the behaviour and the qualia could change. But
    this does not mean that it must replicate the functional
    organisation at a particular scale. If a volume of brain tissue
    is removed, then in order to guarantee identical behaviour the
    replacement part must interact at the cut surfaces with the
    surrounding tissue in the same way as the original. It is at
    these surfaces that the interactions must be sufficiently
    fine-grained, but what goes on inside the volume doesn't matter:
    it could be a conventional simulation of neurons, or it could be
    a giant lookup table. Also, the volume could be any size, and
    could comprise an arbitrarily large proportion of the subject.

    Doesn't it need to be able to change in order to have memory and
    to learn?


Yes, I meant change from what the original parts would do. If you get a neural implant you would want it to leave your brain functioning as it was originally, which means all the remaining neurons firing in the same way as they were originally. This would guarantee that your consciousness would also continue as it was originally.
Except I couldn't learn anything or form any new memories, at least not if they depended on the implant. Right?

Brent
