On Thu, 25 May 2023 at 21:28, Jason Resch <jasonre...@gmail.com> wrote:

>
>
> On Thu, May 25, 2023, 12:30 AM Stathis Papaioannou <stath...@gmail.com>
> wrote:
>
>>
>>
>> On Thu, 25 May 2023 at 13:59, Jason Resch <jasonre...@gmail.com> wrote:
>>
>>>
>>>
>>> On Wed, May 24, 2023, 9:56 PM Stathis Papaioannou <stath...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Thu, 25 May 2023 at 11:48, Jason Resch <jasonre...@gmail.com> wrote:
>>>>
>>>>> An RNG would be a bad design choice because it would be extremely
>>>>> unreliable. However, as a thought experiment, it could work. If the visual
>>>>> cortex were removed and replaced with an RNG which for five minutes
>>>>> replicated the interactions with the remaining brain, the subject would
>>>>> behave as if they had normal vision and report that they had normal 
>>>>> vision,
>>>>> then after five minutes behave as if they were blind and report that they
>>>>> were blind. It is perhaps contrary to intuition that the subject would
>>>>> really have visual experiences in that five minute period, but I don't
>>>>> think there is any other plausible explanation.
>>>>>
>>>>
>>>>> I think they would be a visual zombie in that five minute period,
>>>>> though as described they would not be able to report any difference.
>>>>>
>>>>> I think if one's entire brain were replaced by an RNG, they would be a
>>>>> total zombie who would fool us into thinking they were conscious and we
>>>>> would not notice a difference. So by extension a brain partially replaced
>>>>> by an RNG would be a partial zombie that fooled the other parts of the
>>>>> brain into thinking nothing was amiss.
>>>>>
>>>>
>>>> I think the concept of a partial zombie makes consciousness nonsensical.
>>>>
>>>
>>> It borders on the nonsensical, but between the two bad alternatives I
>>> find the idea of an RNG instantiating human consciousness somewhat less
>>> sensical than the idea of partial zombies.
>>>
>>
>> If consciousness persists no matter what the brain is replaced with, as
>> long as the output remains the same, this is consistent with the idea that
>> consciousness does not reside in a particular substance (even a magical
>> substance) or in a particular process.
>>
>
> Yes, but this is a somewhat crude 1960s version of functionalism, which,
> as I described and as you recognized, is vulnerable to all kinds of
> attacks. Modern functionalism is about more than high-level inputs and
> outputs, and includes causal organization and implementation details at
> some level (the functional substitution level).
>
> Don't read too deeply into the mathematical definition of a function as
> simply inputs and outputs; think of it more in terms of what a mind does
> rather than what a mind is. This is the thinking that led to functionalism
> and an acceptance of multiple realizability.
>
>
>
>> This is a strange idea, but it is akin to the existence of platonic
>> objects. The number three can be implemented by arranging three objects in
>> a row, but it does not depend on those three objects unless it is being
>> used for a particular purpose, such as three beads on an abacus.
>>
>
> Bubble sort and merge sort both compute the same thing and both have the
> same inputs and outputs, but they are different mathematical objects, with
> different behaviors, steps, subroutines and runtime efficiency.
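>
> (A minimal C sketch of the contrast, purely for illustration; these are
> ordinary textbook versions of the two algorithms, not code from this
> discussion. Both map the same input array to the same sorted output, yet
> the internal steps they perform are entirely different.)
>
> #include <string.h>  // for memcpy
>
> // Repeatedly swaps adjacent out-of-order elements: O(n^2) comparisons.
> void bubble_sort(int *a, int n) {
>     for (int i = 0; i < n - 1; i++)
>         for (int j = 0; j < n - 1 - i; j++)
>             if (a[j] > a[j + 1]) {
>                 int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
>             }
> }
>
> // Recursively sorts each half, then merges: O(n log n) comparisons.
> void merge_sort(int *a, int n) {
>     if (n < 2) return;
>     int mid = n / 2, buf[n], i = 0, j = mid, k = 0;
>     merge_sort(a, mid);
>     merge_sort(a + mid, n - mid);
>     while (i < mid && j < n) buf[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
>     while (i < mid) buf[k++] = a[i++];
>     while (j < n) buf[k++] = a[j++];
>     memcpy(a, buf, n * sizeof(int));
> }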
>
>
>
>>
>>> How would I know that I am not a visual zombie now, or a visual zombie
>>>> every Tuesday, Thursday and Saturday?
>>>>
>>>
>>> Here, we have to be careful what we mean by "I". Our own brains have
>>> various spheres of consciousness, as demonstrated by the Wada test: we can
>>> shut down one hemisphere of the brain and lose some awareness and
>>> functionality, such as the ability to form words, and yet remain
>>> conscious. I think being a partial zombie would be like that, having one's
>>> sphere of awareness shrink.
>>>
>>
>> But the subject's sphere of awareness would not shrink in the thought
>> experiment,
>>
>
> Have you ever wondered what delineates the mind from its environment? Why
> it is that you are not aware of my thoughts but you see me as an object
> that only affects your senses, even though we could represent the whole
> earth as one big functional system?
>
> I don't have a good answer to this question, but it seems it might be a
> factor here. The randomly generated outputs from the RNG would seem like
> environmental noise/sensation coming from the outside, rather than a
> recursively linked and connected loop of processing, as would exist in a
> genuinely functioning brain of two hemispheres.
>
>
>> since by assumption their behaviour stays the same, while if their sphere
>> of awareness shrank they would notice that something was different and
>> say so.
>>
>
> But here (almost by magic), the RNG outputs have forced the physical
> behavior of the remaining hemisphere to remain the same while fundamentally
> altering the definition of the computation that underlies the mind.
>
> If this does not alter the consciousness, if neurons don't need to
> interact in a computationally meaningful way with other neurons, then in
> principle all we need is one neuron to fire once, and this can stand for
> all possible consciousness invoked by all possible minds.
>
> Arnold Zuboff has written a thought experiment to this effect.
>
> I think it leads to a kind of absurdity. Why write books or emails when
> every possible combination of letters is already inherent in the alphabet?
> We just had to write the alphabet down once and we could call it a day. Or:
> combinations, patterns, and interrelations *are* important and meaningful,
> in ways that isolated instances of letters (or neurons) are not.
>
>
>>
>>> What is the advantage of having "real" visual experiences if they make
>>>> no objective difference and no subjective difference either?
>>>>
>>>
>>> The advantage of real computations (which imply having real
>>> awareness/experiences) is that real computations are more reliable than
>>> RNGs for producing intelligent behavioral responses.
>>>
>>
>> Yes, so an RNG would be a bad design choice. But the point remains that
>> if the output of the system remains the same, the consciousness remains the
>> same, regardless of how the system functions.
>>
>
> If you don't care about how the system functions and care only about
> outputs, then I think you are operating within an older, and largely
> abandoned, version of functionalism.
>
> Consider: an electron has the same outputs as a dreaming brain locked
> inside a skull: none.
>
> But if a theory cannot acknowledge a difference in consciousness between
> an electron and a dreaming brain inside a skull, then the theory is (in my
> opinion) operationally useless.
>
>
>> The reasonable-sounding belief that consciousness somehow resides in the
>> brain, in particular biochemical reactions, or even in electronic circuits
>> simulating the brain, is wrong.
>>
>
> Right, I fully accept multiple realizability.
>

> But it does not follow from the ability to multiply realize functions with
> different substrates that the internal details of a function's
> implementation can be ignored and we can focus only on the output of a
> function.
>
> I don't know to what degree you are familiar with programming or computer
> code, but I would like you to consider these two functions for a moment:
>
> int sum1(int a, int b) {
>     return a + b;
> }
>
> int sum2(int a, int b) {
>     runBrainSimulation();  // advances a brain emulation; no effect on the return value
>     return a + b;
> }
>
> Here we have two functions defined, sum1() and sum2(). Both take in two
> integers as inputs. Both return one integer as an output. Both return the
> mathematical sum of the two inputs. In terms of their high-level functional
> definition they are identical, and we can abstract away the internal
> implementation details.
>
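> For instance, a hypothetical call to each (just to make the interface-level
> equivalence concrete):
>
> int x = sum1(2, 3);  // returns 5
> int y = sum2(2, 3);  // also returns 5, after running the brain simulation
>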
> But what these two functions compute is very different. The function
> sum2(), before computing and returning the sum, continues the computation
> of an emulation of a human brain by invoking another function,
> "runBrainSimulation()". This function advances the simulation of an
> uploaded human brain by five subjective minutes. But this simulation
> function itself has no outputs, and it has no effect on what sum2() returns.
>
> Given this, are you still of the opinion that the only thing that matters
> in a mind is its high-level outputs, or does this example reveal that
> sometimes the implementation details of a function are relevant and bear on
> the states of consciousness that the function realizes?
>

In your example with the two functions, there is a conscious process which
is separate from the outputs. The analogous case in Chalmers’ experiment is
that the visual qualia are altered by the replacement process and the
subject notices, but he continues to say that everything is fine, because
the inputs to his language centres etc. are the same. But what part of the
brain does the noticing, the trying to speak, the experience of horror at
helplessly observing oneself say that everything is fine? There isn’t a
special part of the brain that runs conscious subroutines disconnected from
the outputs.

--
Stathis Papaioannou
