On Tue, 23 May 2023 at 13:37, Terren Suydam <[email protected]> wrote:
> On Mon, May 22, 2023 at 11:13 PM Stathis Papaioannou <[email protected]> wrote:
>
>> On Tue, 23 May 2023 at 10:48, Terren Suydam <[email protected]> wrote:
>>
>>> On Mon, May 22, 2023 at 8:42 PM Stathis Papaioannou <[email protected]> wrote:
>>>
>>>> On Tue, 23 May 2023 at 10:03, Terren Suydam <[email protected]> wrote:
>>>>
>>>>> it is true that my brain has been trained on a large amount of data -
>>>>> data that contains intelligence outside of my own. But when I introspect,
>>>>> I notice that my understanding of things is ultimately rooted/grounded in
>>>>> my phenomenal experience. Ultimately, everything we know, we know either
>>>>> by our experience, or by analogy to experiences we've had. This is in
>>>>> opposition to how LLMs train on data, which is strictly about how
>>>>> words/symbols relate to one another.
>>>>
>>>> The functionalist position is that phenomenal experience supervenes on
>>>> behaviour, such that if the behaviour is replicated (same output for same
>>>> input) the phenomenal experience will also be replicated. This is what
>>>> philosophers like Searle (and many laypeople) can't stomach.
>>>
>>> I think the kind of phenomenal supervenience you're talking about is
>>> typically asserted for behavior at the level of the neuron, not the level
>>> of the whole agent. Is that what you're saying? That chatGPT must be
>>> having a phenomenal experience if it talks like a human? If so, that is
>>> stretching the explanatory domain of functionalism past its breaking point.
>>
>> The best justification for functionalism is David Chalmers' "Fading
>> Qualia" argument. The paper considers replacing neurons with functionally
>> equivalent silicon chips, but it could be generalised to replacing any part
>> of the brain with a functionally equivalent black box, the whole brain, the
>> whole person.
> You're saying that an algorithm that provably does not have experiences of
> rabbits and lollipops - but can still talk about them in a way that's
> indistinguishable from a human - essentially has the same phenomenology as
> a human talking about rabbits and lollipops. That's just absurd on its
> face. You're essentially hand-waving away the grounding problem. Is that
> your position? That symbols don't need to be grounded in any sort of
> phenomenal experience?

It's not just talking about them in a way that is indistinguishable from a
human: in order to have human-like consciousness, the entire I/O behaviour
of the human would need to be replicated. But in principle, I don't see why
an LLM could not have some other type of phenomenal experience. And I don't
think the grounding problem is a problem: I was never grounded in anything;
I just grew up associating one symbol with another symbol. It's symbols all
the way down.

--
Stathis Papaioannou

--
You received this message because you are subscribed to the Google Groups
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/everything-list/CAH%3D2ypXViwvq0TnbJXnPt7VVDoy8zASJyZeq-O3ZpOpMSx6cwg%40mail.gmail.com.

