On 5/15/2019 11:18 AM, Bruno Marchal wrote:

On 13 May 2019, at 21:28, 'Brent Meeker' via Everything List <[email protected] <mailto:[email protected]>> wrote:



On 5/13/2019 4:50 AM, Bruno Marchal wrote:

On 10 May 2019, at 17:36, Jason Resch <[email protected] <mailto:[email protected]>> wrote:



On Fri, May 10, 2019 at 1:02 AM 'Brent Meeker' via Everything List <[email protected] <mailto:[email protected]>> wrote:



    On 5/9/2019 7:58 PM, Jason Resch wrote:


    On Thu, May 9, 2019 at 7:47 PM Bruce Kellett
    <[email protected] <mailto:[email protected]>> wrote:

        On Fri, May 10, 2019 at 10:18 AM Jason Resch
        <[email protected] <mailto:[email protected]>> wrote:

            On Thu, May 9, 2019 at 7:02 PM Bruce Kellett
            <[email protected] <mailto:[email protected]>>
            wrote:

                On Fri, May 10, 2019 at 9:36 AM Jason Resch
                <[email protected]
                <mailto:[email protected]>> wrote:


                    On Thu, May 9, 2019 at 3:09 PM 'Brent Meeker'
                    via Everything List
                    <[email protected]
                    <mailto:[email protected]>> wrote:


                        Would it make a difference if they compute
                        the same function?


                    Not from the perspective of the function. If
                    the computation is truly the same, there is no
                    way the software can determine its hardware.

                        If so  then you might as well say it would
                        make a difference if they were run on
                        different hardware.


                    From the outside it might seem different. E.g.
                    instead of silicon some other element, foreign
                    to the chemistry of this universe, might make
                    for a more appropriate substrate.


                But the computations that comprise a conscious
                mind also, ipso facto, comprise the whole universe.


            I don't see how this follows. Is the computer on your
            desk the whole universe?  Is it not able to run an
            isolated computation which is not affected by what
            other parts of the universe are doing?


        The computer on my desk is not conscious!


    Maybe. I'm not sure we can conclude anything so easily.  But
    in any case it can illustrate the point that a computation
    need not be identical with the whole of the universe that
    contains it.

                So if the computations are the same, the
                consciousness, AND THE UNIVERSE in which it resides,
                are the same. There can, therefore, be no
                "outside" from which the consciousnesses and
                universes are different.


            Couldn't what we take to be the physical universe be a
            simulation run on a computer within a very different
            universe?  Clearly then the outside and inside view
            would be very different.


        But the theory is that the physical universe is a
        statistical construct over all computations running
        through your conscious self.


    You're jumping ahead to the final result of the computation,
    and continue to jump back and forth between different
    levels/definitions of universe.  To clarify, let me enumerate
    stages of the argument such that we can be clear which one we
    are speaking of:

    1. Your brain can be replaced with a functionally equivalent
    physical component which implements its functions digitally
    (here we change nothing about our assumption of what the
    physical universe is)

    But what are its functions?  Do they include quantum level
    entanglements? Dissipation of heat in erasure of information? 
    Does it have the ability to perceive and act in the world?


I don't know. This is a matter you would need to discuss with your doctor and take on some level of faith, perhaps from user reviews of others who have taken the same leap of faith before you.  I think Bruno has a result that this necessarily requires some act of faith, regardless of how far neuroscience advances.


    2. Following from #1, your consciousness can supervene on an
    appropriately programmed digital computer

    To what accuracy over what domain?  Does it matter whether the
    accuracy is 99% or 10%?


Let's say functional equivalence at 100%; the open question is how much of the low level to capture.  At the highest level you might have a lookup table, with nothing below it the same (this was Ned Block's "Blockhead" argument against functionalism--he missed the notion of a substitution level).  At a lower level you might simulate the neurons, again 100% accurately, but miss some computational step that is important for your consciousness, and so on.  For example, the steps your brain goes through when I ask you to add 2 and 3 are very different, and result in very different conscious states, than when I ask a pocket calculator to do the same.  If I substituted the part of your brain that does arithmetic with a pocket calculator, this would alter your conscious perception, even if it left you outwardly, functionally identical.
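
The lookup-table contrast above can be sketched in a few lines of Python.  This is only an illustration (the function names are invented here, not from anyone in the thread): two procedures with the same input-output mapping whose internal steps are entirely different.

```python
# Two extensionally equivalent ways to add single-digit numbers:
# same input-output behavior, very different computational steps.

# "Blockhead" style: a precomputed lookup table -- no arithmetic
# happens at answer time, only rote recall.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_table(a: int, b: int) -> int:
    """Answer by table lookup; nothing below this level resembles addition."""
    return ADD_TABLE[(a, b)]

def add_by_counting(a: int, b: int) -> int:
    """Answer by iterated increment, as a naive 'mental' procedure might."""
    result = a
    for _ in range(b):
        result += 1
    return result

# Extensionally identical over the shared domain...
assert all(add_by_table(a, b) == add_by_counting(a, b)
           for a in range(10) for b in range(10))
# ...yet the intermediate states (one table hit vs. b increments) differ,
# which is exactly the distinction the substitution-level question turns on.
```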

Exactly.

OK. I agree too, with Jason. (I have not written the text above, but it makes sense).



So how do you know it wouldn't do it without conscious perception at all, i.e. alter it to nothing?  And in fact isn't that what learning the multiplication table does: it eliminates computation for single-digit numbers.  So that's the point of my question.  What does "functional equivalence" really mean?

Good question. I would avoid functional equivalence at this level. The “computational equivalence” is conceptually simpler, although provably highly non-constructive: two computations are equivalent if the first person experience is the same.

But the first person experience is by definition only experienced by one person, so there can never be a judgement that two different first-person experiences are the same.

By definition of the substitution level, all computations “below” that level are equivalent, and the consciousness flux will be multiplied by the consistent extensions differentiating “continuously”, with the topology given by the semantics of the relevant modes of self (in this case []p & <>t & p).




Does it just mean "no noticeable difference in behavior", i.e. third person equivalence?


Usually “functional equivalence” is an extensional 3p notion in computer science, and then you have a zoo of weaker syntactical equivalences. I said a few words about this in the combinators thread. Pure functional equivalence amounts to having the same input-output,

Input and output depend on some well defined boundary between the parts of the computation.  A condition I think is lacking in this case.

even if one is run by a quantum computer using a quantum algorithm and the other is run by a Babbage machine. But with reference to the substitution level, computational equivalence requires not only computing the right local function, but also computing it in the right way (above the substitution level). Now, when you have that right loop in place, it will not matter if you compute it in this or that way, in a physical reality, or in the arithmetical reality, or in the fortranical reality, or in a (rich) combinatory algebra, etc.
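
The claim that the carrier stops mattering once the computation above the substitution level is fixed can also be sketched concretely.  In this toy example (all names invented for illustration), the same step sequence is executed on two different "substrates" that encode numbers in entirely different ways, yet the result is identical:

```python
# A toy illustration: fix the computation (the sequence of steps) and
# vary only the substrate that carries it out.

def run(program, substrate):
    """Execute the same step sequence on any substrate exposing zero/succ/value."""
    state = substrate.zero()
    for op in program:
        if op == "succ":
            state = substrate.succ(state)
    return substrate.value(state)

class IntSubstrate:
    """Numbers carried as native machine integers."""
    def zero(self): return 0
    def succ(self, n): return n + 1
    def value(self, n): return n

class TupleSubstrate:
    """Numbers carried as nested tuples -- a unary, 'alien' encoding."""
    def zero(self): return ()
    def succ(self, n): return (n,)
    def value(self, n):
        count = 0
        while n:             # unwrap the nesting to read off the number
            n, count = n[0], count + 1
        return count

# The computation, fixed "above the substitution level":
program = ["succ"] * 5

# The same program yields the same result on either substrate.
assert run(program, IntSubstrate()) == run(program, TupleSubstrate()) == 5
```

The design point: `run` never inspects how the substrate represents its states, so any carrier implementing the same interface is interchangeable, which is the intuition behind substrate independence here.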



  But then it seems the theory talks about "preserving consciousness", a first-person...what? perception that I'm me?  What perception could you have that told you your consciousness had changed or been lost?

It is the difference between going out of the hospital, feeling alive and well, or dying (whatever that means for the 1p). Of course, in “reality” you will have intermediates, like feeling alive but not so well, …, needing many post-operative adjustments, discovering new consciousness states, pleasant or unpleasant, etc.

No.  That doesn't work because the feeling is the computation. There isn't some separate feeling of the computation.


Accident happens, like the guy who said, when asked if he was happy with his artificial brain: “- I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic  I am completely happy with my …tclic (cf Britannia Hospital).

All third person, observable behavior.  Can you prove he's not completely happy?



Please note that the provable ethic of mechanism consists in the right to say “no” to the doctor, at least for adults.

An interesting qualification!  Do you think children are not conscious?

Brent
