Oh, Google has already created a "mixture of experts" architecture. Interesting: https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html

The amount of data they use to train and implement large language models is mind-boggling. I am curious what Google and OpenAI will present this year. -J.
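(For what it's worth, the core routing idea behind a sparse mixture-of-experts layer like the one in that post can be sketched in a few lines of Python. This is only a toy illustration with made-up names and random weights, not how GLaM is actually implemented: a gating function picks a small subset of "expert" sub-networks per token, so only those experts are evaluated.)

    # Minimal sparse MoE routing sketch (illustrative only; names are invented).
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    class SparseMoELayer:
        def __init__(self, n_experts, d_model, top_k=2, seed=0):
            rng = np.random.default_rng(seed)
            self.gate = rng.normal(size=(d_model, n_experts))           # router weights
            self.experts = [rng.normal(size=(d_model, d_model)) * 0.02  # one tiny "expert" each
                            for _ in range(n_experts)]
            self.top_k = top_k

        def __call__(self, token):
            scores = softmax(token @ self.gate)        # gating probabilities per expert
            chosen = np.argsort(scores)[-self.top_k:]  # route to the top-k experts only
            out = np.zeros_like(token)
            for i in chosen:                           # weighted sum of the chosen experts
                out += scores[i] * (token @ self.experts[i])
            return out

    layer = SparseMoELayer(n_experts=8, d_model=16)
    print(layer(np.ones(16)).shape)  # (16,) -- only 2 of the 8 experts were evaluated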
-------- Original message --------
From: Jochen Fromm <[email protected]>
Date: 2/5/23 1:38 PM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group <[email protected]>
Subject: [FRIAM] Datasets as Experience

Would a CV of a large language model contain all the datasets it has seen? As adaptive agents of our selfish genes, we are all trained on slightly different datasets. A Spanish speaker is a person trained on a Spanish dataset, an Italian speaker is one trained on an Italian dataset, and so on. Speakers of different languages are trained on different datasets; therefore the same sentence is easy for a native speaker but impossible to understand for those who do not know the language.

Do all large language models need to be trained on the same datasets? Or could many large language models be combined into a society of mind, as Marvin Minsky describes in his book "The Society of Mind"? Now that they are able to understand language, it seems possible that one large language model could reply to questions from another. And we would even be able to understand the conversations. -J.
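(A toy sketch of that last idea: two models answering each other in turn. `generate` below is a hypothetical placeholder for whatever completion API the models would actually expose, not a real library call.)

    # "Society of minds" toy sketch: two language models replying to each other.
    def generate(model_name, prompt):
        # placeholder: call the model's completion endpoint here
        return f"[{model_name}'s reply to: {prompt!r}]"

    def converse(model_a, model_b, opening, turns=4):
        message, history = opening, []
        speakers = [model_a, model_b]
        for t in range(turns):
            speaker = speakers[t % 2]           # alternate which model replies
            message = generate(speaker, message)
            history.append((speaker, message))  # keep a human-readable transcript
        return history

    for speaker, text in converse("model-A", "model-B", "What is a dataset?"):
        print(speaker, ":", text)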