On Tue, 23 May 2023 at 14:40, Jesse Mazer <laserma...@gmail.com> wrote:

>
>
> On Mon, May 22, 2023 at 11:37 PM Terren Suydam <terren.suy...@gmail.com>
> wrote:
>
>>
>>
>> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <stath...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, 23 May 2023 at 10:48, Terren Suydam <terren.suy...@gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <stath...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, 23 May 2023 at 10:03, Terren Suydam <terren.suy...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> it is true that my brain has been trained on a large amount of data -
>>>>>> data that contains intelligence outside of my own. But when I 
>>>>>> introspect, I
>>>>>> notice that my understanding of things is ultimately rooted/grounded in 
>>>>>> my
>>>>>> phenomenal experience. Ultimately, everything we know, we know either by
>>>>>> our experience, or by analogy to experiences we've had. This is in
>>>>>> opposition to how LLMs train on data, which is strictly about how
>>>>>> words/symbols relate to one another.
>>>>>>
>>>>>
>>>>> The functionalist position is that phenomenal experience supervenes on
>>>>> behaviour, such that if the behaviour is replicated (same output for same
>>>>> input) the phenomenal experience will also be replicated. This is what
>>>>> philosophers like Searle (and many laypeople) can’t stomach.
>>>>>
>>>>
>>>> I think the kind of phenomenal supervenience you're talking about is
>>>> typically asserted for behavior at the level of the neuron, not the level
>>>> of the whole agent. Is that what you're saying? That ChatGPT must be
>>>> having a phenomenal experience if it talks like a human? If so, that is
>>>> stretching the explanatory domain of functionalism past its breaking point.
>>>>
>>>
>>> The best justification for functionalism is David Chalmers' "Fading
>>> Qualia" argument. The paper considers replacing neurons with functionally
>>> equivalent silicon chips, but it could be generalised to replacing any part
>>> of the brain with a functionally equivalent black box, or even the whole
>>> brain or the whole person.
>>>
>>
>> You're saying that an algorithm that provably does not have experiences
>> of rabbits and lollipops - but can still talk about them in a way that's
>> indistinguishable from a human - essentially has the same phenomenology as
>> a human talking about rabbits and lollipops. That's just absurd on its
>> face. You're essentially hand-waving away the grounding problem. Is that
>> your position? That symbols don't need to be grounded in any sort of
>> phenomenal experience?
>>
>> Terren
>>
>
> Are you talking here about Chalmers' thought experiment in which each
> neuron is replaced by a functional duplicate, or about an algorithm like
> ChatGPT that has no detailed resemblance to the structure of a human
> being's brain? I think in the former case the argument for identical
> experience is very strong, though note that Chalmers is not really a
> functionalist: he postulates "psychophysical laws" which map physical
> patterns to experiences, and uses the replacement argument to argue that
> such laws would have the property of "functional invariance".
>
> If you are just talking about ChatGPT-style programs, I would agree with
> you: a system trained only on the high-level symbols of human language (as
> opposed to symbols representing neural impulses or other low-level events
> at the microscopic level) is not likely to have experiences anything like
> those of a human being using the same symbols. If Stathis' black box
> argument is meant to suggest otherwise, I don't see the logic, since a
> ChatGPT-style program would not replicate the detailed output of a
> composite group of neurons, or even the exact verbal output of a specific
> person, so there is no equivalent to the gradual replacement of parts of a
> real human. If we are just talking about qualitatively behaving in a
> "human-like" way, without replicating the behavior of a specific person or
> a sub-component of a person such as a group of neurons in their brain,
> Chalmers' thought experiment doesn't apply. And even in that qualitative
> sense, count me as very skeptical that an LLM trained only on human
> writing will ever pass any really rigorous Turing test.
>

Chalmers considers replacing individual neurons and then extending this to
groups of neurons with silicon chips. My variation on this is to replace
any part of a human with a black box that replicates the interactions of
that part with the surrounding tissue. This preserves the behaviour of the
human and also the consciousness; otherwise, the argument goes, we could
make a partial zombie, which is absurd. We could extend the replacement to
any arbitrarily large proportion of the human, say all but a few cells on
the tip of his nose, and the argument still holds. Once those cells are
replaced, the entire human is replaced, and his consciousness remains
unchanged. It is not necessary that inside the black box there is anything
resembling or even simulating human physiological processes: that would be
one way to do it, but a completely different method would work as long as
the I/O behaviour of the human was preserved. If techniques analogous to
LLMs could be used to train AIs on human movements instead of words, for
example, it might be possible to perfectly replicate human behaviour, and
from the above argument, the resulting robot should also have human-like
consciousness. And if that is the case, I don't see why a more limited
system such as ChatGPT should not have a more limited form of
consciousness.
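
To make "functionally equivalent" concrete in programming terms, here is a
minimal sketch of the black box idea (my own illustrative analogy with
invented class and function names, not anything from Chalmers' paper): two
components with identical input/output behaviour are interchangeable from
the point of view of everything that interacts with them, whatever goes on
inside the box.

# Minimal sketch (illustrative analogy only): two "black boxes" with
# identical I/O behaviour are indistinguishable to the system around them,
# regardless of their internals. All names here are invented for this example.

from typing import Protocol


class NeuralComponent(Protocol):
    """Anything that maps an input signal to an output signal."""

    def respond(self, stimulus: float) -> float:
        ...


class BiologicalPart:
    """Stand-in for the original tissue: some fixed stimulus-response rule."""

    def respond(self, stimulus: float) -> float:
        return 2.0 * stimulus + 1.0


class BlackBoxReplacement:
    """A replacement with different internals but the same I/O mapping."""

    def respond(self, stimulus: float) -> float:
        # Computed a different way, yet the output matches for every input.
        return sum([stimulus, stimulus, 1.0])


def surrounding_tissue(part: NeuralComponent, stimuli: list[float]) -> list[float]:
    """The 'rest of the brain' only ever sees the outputs, never the internals."""
    return [part.respond(s) for s in stimuli]


if __name__ == "__main__":
    stimuli = [0.0, 0.5, 1.0, -2.0]
    original = surrounding_tissue(BiologicalPart(), stimuli)
    replaced = surrounding_tissue(BlackBoxReplacement(), stimuli)
    # Identical behaviour at the interface, so the surrounding system
    # cannot tell the replacement from the original.
    assert original == replaced
    print(original)

The surrounding code has no way to tell the two implementations apart; that
interface-level indistinguishability is all the replacement argument relies
on.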


-- 
Stathis Papaioannou
