I probably didn't pay enough attention to the serialization thread a while 
back, but to me recursion is hard to distinguish from an unrolling of that 
recursion.
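
A toy version of what I mean (everything below is a made-up stand-in, not 
anything from the earlier thread):

def fib_rec(n):
    # recursion: the call stack carries the intermediate state
    return n if n < 2 else fib_rec(n - 1) + fib_rec(n - 2)

def fib_unrolled(n):
    # the same computation serialized into a flat loop
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# from the outside, the two are indistinguishable
assert all(fib_rec(n) == fib_unrolled(n) for n in range(15))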
________________________________
From: Friam <[email protected]> on behalf of glen <[email protected]>
Sent: Tuesday, January 17, 2023 2:21 PM
To: [email protected] <[email protected]>
Subject: Re: [FRIAM] NickC channels DaveW

Being a too-literal person, who never gets the joke, I have to say that these 
simple scalings, combinatorial or not, don't capture the interconnectionist 
point being made in the pain article. The absolute numbers of elements 
(neurons, synapses, signaling molecules, etc.) flatten it all out. But 
_ganglion_, that's a different thing. What we're looking for are loops and 
"integratory" structures. I think that's where we can start to find a scaling 
for smartness.

In that context, my guess is the heart is closer to ChatGPT in its smartness 
than either of those is to the human gut. But structure-based assessments like 
these merely complement behavior-based assessments. We could quantify the 
number of *jobs* done by the thing. The heart has fewer jobs to do than the 
gut. And the gut has fewer jobs to do than the dog. Etc. Of course, the lines 
between jobs aren't all that crisp, especially as the complexity of the thing 
grows. Behaviors in complex things are composable and polymorphic. In spite of 
our imagining what ChatGPT is doing, it's really only doing 1 thing: choosing 
the most likely next token given the previous tokens. You *might* be able to 
serialize your dog and suggest she's really just choosing the most likely next 
behavior given the previous behaviors. But my guess is dog owners perceive (or 
impute) that dogs resolve contradictions that arise in parallel. (chase the 
ball? chew the bone? continue chewing the bone until you get to the ball?) 
Contradiction resolution is evidence of more than 1 task. You could gussy up 
the model by providing a single interface to an ensemble of models. Then it 
might look more like a dog, depending on the algorithm(s) used to resolve 
contradictions between models. But to get closer to dog-complexity, you'd have 
to wire the models together so that they could contradict each other but still 
feed off each other in some way. A model that changes its mind midway through 
its response would be good. I haven't had a dog in a long time. But I seem to 
remember they were easy to redirect, despite the old saying "like a dog with a 
bone".

On 1/17/23 12:51, Prof David West wrote:
> Apropos of nothing:
>
> The human heart has roughly 40,000 neurons and the human gut around 0.1 
> billion neurons (sensory neurons, neurotransmitters, ganglia, and motor 
> neurons).
>
> So the human gut is about 1/5 as smart as Marcus's dog??
>
> davew
>
>
> On Tue, Jan 17, 2023, at 1:08 PM, Marcus Daniels wrote:
>> Dogs have about 500 million neurons in their cortex.  Neurons have
>> about 7,000 synaptic connections, so I think my dog is a lot smarter
>> than a billion parameter LLM.  :-)
>>
>> Sent from my iPhone
>>
>>> On Jan 17, 2023, at 11:35 AM, glen <[email protected]> wrote:
>>>
>>> 
>>> 1) "I asked Chat GPT to write a song in the style of Nick Cave and this is 
>>> what it produced. What do you think?"
>>> https://www.theredhandfiles.com/chat-gpt-what-do-you-think/
>>>
>>> 2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
>>> https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#
>>>
>>> Taken separately, (1) and (2) are each interesting, if seemingly 
>>> orthogonal. But what twines them, I think, is the concept of "mutual 
>>> information". I read (2) before I read (1) because, for some bizarre 
>>> reason, my day job involves trying to understand pain mechanisms. And (2) 
>>> speaks directly (if only implicitly) to things like IIT. If you read (1) 
>>> first, it's difficult to avoid snapping quickly into NickC's canal. Despite 
>>> NickT's objection to an inner life, it seems clear that the nuance we see 
>>> on the surface, at least longitudinally, *needs* an inner life. You simply 
>>> can't get good stuff out of an entirely flat/transparent/reactive/Markovian 
>>> object.
>>>
>>> However, what NickC misses is that LLMs *have* some intertwined mutual 
>>> information within them. Similar to asking whether an insect experiences 
>>> pain, we can ask whether an X-billion-parameter LLM experiences something 
>>> like "suffering". My guess is the answer is "yes". It may not be a good 
>>> analog to what we call "suffering", though ... maybe "friction"? ... maybe 
>>> "release"? My sense is that when you engage a LLM (embedded in a larger 
>>> construct that handles the prompts and live learning, of course) in such a 
>>> way that it assembles a response that nobody else has evoked, it might get 
>>> something akin to a tingle ... or like the relief you feel when scratching 
>>> an itch ... of course it would be primordial because the self-attention in 
>>> such a system is hopelessly disabled compared to the rich self-attention 
>>> loops we have in our meaty bodies. But it just *might* be there in some 
>>> primitive sense.
>>>
>>> As always, agnosticism is the only rational stance. And I won't trust the 
>>> songs written by LLMs until I see a few of them commit suicide, overdose, 
>>> or punch a TMZ cameraman in the face.

--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
