On Thu, 25 May 2023 at 11:48, Jason Resch <jasonre...@gmail.com> wrote:

>
>
> On Wed, May 24, 2023, 9:32 PM Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 06:46, Jason Resch <jasonre...@gmail.com> wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023 at 12:20 PM Stathis Papaioannou <stath...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, 24 May 2023 at 21:56, Jason Resch <jasonre...@gmail.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, May 24, 2023, 3:20 AM Stathis Papaioannou <stath...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, 24 May 2023 at 15:37, Jason Resch <jasonre...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, May 24, 2023, 1:15 AM Stathis Papaioannou <
>>>>>>> stath...@gmail.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, 24 May 2023 at 04:03, Jason Resch <jasonre...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, May 23, 2023 at 7:15 AM Stathis Papaioannou <
>>>>>>>>> stath...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, 23 May 2023 at 21:09, Jason Resch <jasonre...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> As I see this thread, Terren and Stathis are both talking past
>>>>>>>>>>> each other. Please, either of you, correct me if I am wrong, but
>>>>>>>>>>> in an effort to clarify and perhaps resolve this situation:
>>>>>>>>>>>
>>>>>>>>>>> I believe Stathis is saying the functional substitution having
>>>>>>>>>>> the same fine-grained causal organization *would* have the same
>>>>>>>>>>> phenomenology, the same experience, and the same qualia as the 
>>>>>>>>>>> brain with
>>>>>>>>>>> the same fine-grained causal organization.
>>>>>>>>>>>
>>>>>>>>>>> Therefore, there is no disagreement between your positions with
>>>>>>>>>>> regard to symbol grounding, mappings, etc.
>>>>>>>>>>>
>>>>>>>>>>> When you both discuss the problem of symbology, or bits, etc., I
>>>>>>>>>>> believe this is partly why you are talking past each other: there
>>>>>>>>>>> are many levels involved in brains (and computational systems),
>>>>>>>>>>> and I believe you were discussing completely different levels of
>>>>>>>>>>> the hierarchical organization.
>>>>>>>>>>>
>>>>>>>>>>> There are high-level parts of minds, such as ideas, thoughts,
>>>>>>>>>>> feelings, qualia, etc., and there are low-level parts, be they
>>>>>>>>>>> neurons, neurotransmitters, atoms, quantum fields, and laws of
>>>>>>>>>>> physics as in human brains, or circuits, logic gates, bits, and
>>>>>>>>>>> instructions as in computers.
>>>>>>>>>>>
>>>>>>>>>>> I think when Terren mentions a "symbol for the smell of
>>>>>>>>>>> grandmother's kitchen" (GMK) the trouble is we are crossing a 
>>>>>>>>>>> myriad of
>>>>>>>>>>> levels. The quale or idea or memory of the smell of GMK is a very
>>>>>>>>>>> high-level feature of a mind. When Terren asks for or discusses a 
>>>>>>>>>>> symbol
>>>>>>>>>>> for it, a complete answer/description for it can only be supplied 
>>>>>>>>>>> in terms
>>>>>>>>>>> of a vast amount of information concerning low level structures, be 
>>>>>>>>>>> they
>>>>>>>>>>> patterns of neuron firings, or patterns of bits being processed. 
>>>>>>>>>>> When we
>>>>>>>>>>> consider things down at this low level, however, we lose all
>>>>>>>>>>> context for what the meaning, idea, and quale are or where or how
>>>>>>>>>>> they come in. We cannot see or find the idea of GMK in any
>>>>>>>>>>> neuron, any more than we can see or find it in any bit.
>>>>>>>>>>>
>>>>>>>>>>> Of course it should then seem deeply mysterious, if not
>>>>>>>>>>> impossible, how we get "it" (GMK or otherwise) from "bit", but to
>>>>>>>>>>> me this is no greater a leap than how we get "it" from a bunch of
>>>>>>>>>>> cells squirting ions back and forth. Trying to understand a
>>>>>>>>>>> smartphone by looking at the flows of electrons is a similar kind
>>>>>>>>>>> of problem: it would seem just as difficult or impossible to
>>>>>>>>>>> explain and understand the high-level features and complexity
>>>>>>>>>>> from the low-level simplicity.
>>>>>>>>>>>
>>>>>>>>>>> This is why it's crucial to bear in mind and explicitly discuss
>>>>>>>>>>> the level one is operating at when one discusses symbols,
>>>>>>>>>>> substrates, or qualia. In summary, I think a chief reason you
>>>>>>>>>>> have been talking past each other is that you are each operating
>>>>>>>>>>> at different assumed levels.
>>>>>>>>>>>
>>>>>>>>>>> Please correct me if you believe I am mistaken, and know that I
>>>>>>>>>>> offer my perspective only in the hope it might help the
>>>>>>>>>>> conversation.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I think you’ve captured my position. But in addition I think
>>>>>>>>>> replicating the fine-grained causal organisation is not necessary in 
>>>>>>>>>> order
>>>>>>>>>> to replicate higher level phenomena such as GMK. By extension of 
>>>>>>>>>> Chalmers’
>>>>>>>>>> substitution experiment,
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Note that Chalmers's argument is based on assuming the functional
>>>>>>>>> substitution occurs at a certain level of fine-grainedness. If you
>>>>>>>>> lose this step, and look only at the top-most input-output of the
>>>>>>>>> mind as a black box, then you can no longer distinguish a rock from
>>>>>>>>> a dreaming person, nor a calculator computing 2+3 from a human
>>>>>>>>> computing 2+3, and one also runs into the Blockhead "lookup table"
>>>>>>>>> argument against functionalism.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Yes, those are perhaps problems with functionalism. But a major
>>>>>>>> point in Chalmers' argument is that if qualia were
>>>>>>>> substrate-specific (and hence functionalism false) it would be
>>>>>>>> possible to make a partial zombie, or an entity whose consciousness
>>>>>>>> and behaviour diverged from the point at which the substitution was
>>>>>>>> made. And this argument works not just by replacing the neurons with
>>>>>>>> silicon chips, but by replacing any part of the human with anything
>>>>>>>> that reproduces the interactions with the remaining parts.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> How deeply do you have to go when you consider or define those
>>>>>>> "other parts" though? That seems to be a critical but unstated 
>>>>>>> assumption,
>>>>>>> and something that depends on how finely grained you consider the
>>>>>>> relevant/important parts of a brain to be.
>>>>>>>
>>>>>>> For reference, this is what Chalmers says:
>>>>>>>
>>>>>>>
>>>>>>> "In this paper I defend this view. Specifically, I defend a
>>>>>>> principle of organizational invariance, holding that experience is
>>>>>>> invariant across systems with the same fine-grained functional
>>>>>>> organization. More precisely, the principle states that given any system
>>>>>>> that has conscious experiences, then any system that has the same
>>>>>>> functional organization at a fine enough grain will have qualitatively
>>>>>>> identical conscious experiences. A full specification of a system's
>>>>>>> fine-grained functional organization will fully determine any conscious
>>>>>>> experiences that arise."
>>>>>>> https://consc.net/papers/qualia.html
>>>>>>>
>>>>>>> By swapping out the fine-grained functional organization for a
>>>>>>> coarse-grained one, you change the functional definition and can no
>>>>>>> longer guarantee identical experiences, nor identical behaviors in
>>>>>>> all possible situations. They're no longer "functional isomorphs" as
>>>>>>> Chalmers's argument requires.
>>>>>>>
>>>>>>> By substituting a recording of a computation for a computation, you
>>>>>>> replace a conscious mind with a tape recording of the prior behavior of 
>>>>>>> a
>>>>>>> conscious mind. This is what happens in the Blockhead thought 
>>>>>>> experiment.
>>>>>>> The result is something that passes a Turing test, but which is itself 
>>>>>>> not
>>>>>>> conscious (though creating such a recording requires prior invocation 
>>>>>>> of a
>>>>>>> conscious mind or extraordinary luck).
>>>>>>>
>>>>>>
>>>>>> The replaced part must of course be functionally identical; otherwise
>>>>>> both the behaviour and the qualia could change. But this does not mean
>>>>>> that it must replicate the functional organisation at a particular
>>>>>> scale. If a volume of brain tissue is removed, in order to guarantee
>>>>>> identical behaviour the replacement part must interact at the cut
>>>>>> surfaces of the surrounding tissue in the same way as the original. It
>>>>>> is at these surfaces that the interactions must be sufficiently
>>>>>> fine-grained, but what goes on inside the volume doesn't matter: it
>>>>>> could be a conventional simulation of neurons, or it could be a giant
>>>>>> lookup table. Also, the volume could be any size, and could comprise
>>>>>> an arbitrarily large proportion of the subject.
>>>>>>
>>>>>>
>>>>>
>>>>> Can I ask you what you believe would happen to the consciousness of
>>>>> the individual if you replaced the right hemisphere of the brain with a
>>>>> black box that interfaced identically with the left hemisphere, but
>>>>> internal to this black box is nothing but a random number generator,
>>>>> and it is only by fantastic luck that the output of the RNG happens to
>>>>> have caused its interfacing with the left hemisphere to remain
>>>>> unchanged?
>>>>>
>>>>
>>>> I was going to propose just that next: nothing, the consciousness would
>>>> continue.
>>>>
>>>
>>> An RNG has a different functional description from any conventional
>>> mind, though. It seems to me you may be operating within a more
>>> physicalist notion of consciousness than a functionalist one, in that you
>>> seem to be putting more weight on the existence of a particular physical
>>> state being reached, regardless of how it got there. In my view (as a
>>> functionalist), being in a particular physical state is not sufficient.
>>> It also matters how one reached that particular state. An RNG and a human
>>> can both output the string "I am conscious", but in my view only one of
>>> them is.
>>>
>>
>> An RNG would be a bad design choice because it would be extremely
>> unreliable. However, as a thought experiment, it could work. If the visual
>> cortex were removed and replaced with an RNG which for five minutes
>> replicated the interactions with the remaining brain, the subject would
>> behave as if they had normal vision and report that they had normal vision,
>> then after five minutes behave as if they were blind and report that they
>> were blind. It is perhaps contrary to intuition that the subject would
>> really have visual experiences in that five-minute period, but I don't
>> think there is any other plausible explanation.
>>
>
> I think they would be a visual zombie in that five-minute period, though
> as described they would not be able to report any difference.
>
> I think if one's entire brain were replaced by an RNG, they would be a
> total zombie who would fool us into thinking they were conscious and we
> would not notice a difference. So by extension a brain partially replaced
> by an RNG would be a partial zombie that fooled the other parts of the
> brain into thinking nothing was amiss.
>
>
>>
>>
>>>>> After answering that, let me ask what you think would happen to the
>>>>> consciousness of the individual if we replaced all but one neuron in
>>>>> the brain with this RNG-driven black box that continues to stimulate
>>>>> this sole remaining neuron in exactly the same way as the rest of the
>>>>> brain would have?
>>>>>
>>>>
>>>> The consciousness would continue. And then we could get rid of the
>>>> neuron and the consciousness would continue. So we end up with the same
>>>> result as the rock implementing all computations and hence all
>>>> consciousnesses,
>>>>
>>>
>>> Rocks don't implement all computations. I am aware some philosophers
>>> have said as much, but they achieve this trick by mapping successive
>>> states of a computation onto the time-ordered states of the rock. I
>>> don't think any computer scientist accepts this as valid. The
>>> transitions of the rock's states lack the counterfactual relations which
>>> are necessary for computation. If you were to map states S_1 through
>>> S_5000 of a rock to a program computing Pi, looking at state S_6000 of
>>> the rock won't provide you any meaningful information about what the
>>> next digit of Pi happens to be.
>>>
>>
>> Yes, so it can't be used as a computer that interacts with its
>> environment and provides useful results. But we could say that the
>> computation is still in there hidden, in the way every possible sculpture
>> is hidden inside a block of marble.
>>
>
> I am not so sure. All the work is offloaded to the one doing the
> interpretation; none of the relations are inherent in the state
> transitions. If you change one of the preceding states, it does not alter
> the flow of the computation in the expected way, and the period of the
> rock's state transitions (its Poincaré recurrence time) bears no relation
> to the period of the purported computation being executed.
>

All the work is offloaded onto the one doing the interpretation, but what
if we consider a virtual environment with no outside input?
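
For concreteness, that "work offloaded onto the interpretation" point can be
put in code. Below is a minimal, purely illustrative sketch (the pi generator
is Gibbons' unbounded spigot algorithm; the "rock" and the "interpretation"
map are toy stand-ins made up for this example, not anything anyone here has
proposed):

import random

def pi_digits():
    """Genuine computation: Gibbons' unbounded spigot algorithm for pi.
    Each state is derived from the previous one, so perturbing an early
    state would propagate and change every digit that follows."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# "Rock": an arbitrary sequence of states with no pi-related dynamics at all.
random.seed(0)
rock_states = [random.getrandbits(32) for _ in range(6000)]

# The post-hoc interpretation labels rock state i with the i-th digit of pi.
# Note it can only be built by running the genuine computation first (or by
# extraordinary luck); all the work is in the mapping, none in the rock.
interpretation = {i: d for i, d in zip(range(5000), pi_digits())}

print(len(rock_states), rock_states[0])       # arbitrary states, unrelated to pi
print([interpretation[i] for i in range(8)])  # -> [3, 1, 4, 1, 5, 9, 2, 6]

# The asymmetry: rock state 6000 carries no information about the next digit
# (the map was only ever defined up to 5000, by computing pi separately), and
# changing rock state 100 leaves every later "digit" untouched, whereas
# changing an intermediate state of pi_digits() would corrupt everything
# downstream. That difference is the missing counterfactual structure.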

-- 
Stathis Papaioannou
