I think you're too specific by calling out the prefrontal cortex areas. So my answer would be "no" or "not quite, but close enough". To the gist of your question, I'd say "yes", because you used "contain" rather than "is/are". It's reasonable to model our CNS as a collection of things in the same category as LLMs. My own guess would be that we have a plurality of large X modelers: some for modeling language(s), some for hitting little balls with batlike sticks, some for motorcycle riding, types of mathematics, types of whatever. Then I suspect there are mechanisms for integrating or cross-pollinating those, perhaps including error-correcting interactions (where your ball-hitting modeler disagrees with your ceramic-throwing modeler).
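A minimal sketch of that idea, with all names (`DomainModeler`, `cross_check`, the tolerance value) purely illustrative and not any actual cognitive architecture: a handful of domain modelers make predictions on a shared input, and a cross-checking step flags pairs whose predictions diverge — the "error-correcting interaction" signal.

```python
# Illustrative sketch only: a "plurality of modelers" whose outputs are
# cross-checked, flagging disagreements as error-correction signals.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class DomainModeler:
    name: str
    predict: Callable[[float], float]  # maps a shared input to a prediction


def cross_check(modelers: List[DomainModeler], x: float,
                tolerance: float = 0.1) -> List[Tuple[str, str]]:
    """Compare every pair of modelers on input x; return the pairs
    whose predictions disagree by more than `tolerance`."""
    conflicts = []
    for i, a in enumerate(modelers):
        for b in modelers[i + 1:]:
            if abs(a.predict(x) - b.predict(x)) > tolerance:
                conflicts.append((a.name, b.name))
    return conflicts


# Two modelers that roughly agree, one that diverges:
ball_hitting = DomainModeler("ball-hitting", lambda x: 2.0 * x)
ceramics     = DomainModeler("ceramics",     lambda x: 2.0 * x + 0.05)
language     = DomainModeler("language",     lambda x: 3.0 * x)

disagreements = cross_check([ball_hitting, ceramics, language], x=1.0)
# disagreements names the pairs needing reconciliation
```

In this toy setup the ball-hitting and ceramics modelers stay within tolerance of each other, while the language modeler conflicts with both, so some integrating mechanism would have to reconcile those two pairs.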

Strong "monists" might argue that there's a singular "general intelligence" 
(GI) lurking about somewhere in there that does *all* the modeling, and that the particular domains are 
simply decorated applications of the GI. But I doubt that, given how robust we are to lesions in 
various parts (and how some people well endowed in one area can seem so impoverished in another). 
Perhaps GIs do exist in some true polymaths. But I doubt them in ordinary people/organisms.

On 2/7/23 10:35, Jochen Fromm wrote:
I was just wondering if our prefrontal cortex areas in the brain contain a 
large language model too - but each of them trained on slightly different 
datasets. Similar enough to understand each other, but different enough so that 
everyone has a unique experience and point of view o_O

-J.


-------- Original message --------
From: Marcus Daniels <[email protected]>
Date: 2/6/23 9:39 PM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: Re: [FRIAM] Datasets as Experience

It depends if it is given boundaries between the datasets.   Is it learning one 
distribution or two?

*From:* Friam <[email protected]> *On Behalf Of *Jochen Fromm
*Sent:* Sunday, February 5, 2023 4:38 AM
*To:* The Friday Morning Applied Complexity Coffee Group <[email protected]>
*Subject:* [FRIAM] Datasets as Experience

Would a CV of a large language model contain all the datasets it has seen? As 
adaptive agents of our selfish genes we are all trained on slightly different 
datasets. A Spanish speaker is a person trained on a Spanish dataset. An 
Italian speaker is trained on an Italian dataset, etc. Speakers of different 
languages are trained on different datasets, therefore the same sentence is 
easy for a native speaker but impossible to understand for those who do not 
know the language.

Do all large language models need to be trained on the same datasets? Or could many large 
language models be combined into a society of mind, as Marvin Minsky describes in his 
book "The Society of Mind"? Now that they are able to understand language, it 
seems possible that one large language model could reply to questions from another. 
And we would even be able to understand the conversations.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
