Re: [FRIAM] Language Model Understanding

2023-10-13 Thread Roger Critchlow
Did anyone else notice

   Geoffrey Hinton https://www.youtube.com/watch?v=qrvK_KuIeJk

and

Peter Norvig
https://www.noemamag.com/artificial-general-intelligence-is-already-here/

among everything else that's been going on?

Oh, and YouTube just rudely informed me that my ad blocker is no longer
welcome when I looked up the Hinton video.  Goodbye, monetized videos.

-- rec --
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-12 Thread Marcus Daniels
A nihilist can still have preferences.  

> On Oct 12, 2023, at 10:21 AM, glen  wrote:
> 
> I think that's an ideological stance, not a brute fact. The use of the term 
> "better" is nothing but an "ought", which is difficult to derive from an "is".
> 
>> On 10/9/23 10:07, Marcus Daniels wrote:
>> We are better off if the ones that carry demonstrably false claims are 
>> proportionately devalued.
> 
> --
> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
> 
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-12 Thread glen

I think that's an ideological stance, not a brute fact. The use of the term "better" is nothing but 
an "ought", which is difficult to derive from an "is".

On 10/9/23 10:07, Marcus Daniels wrote:

 We are better off if the ones that carry demonstrably false claims are 
proportionately devalued.



--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-09 Thread Marcus Daniels
That LLMs are probabilistic may get the user of an LLM into some trouble
here.

https://en.wikipedia.org/wiki/Biblical_and_Quranic_narratives
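
As a minimal sketch of that point: probabilistic decoding means the same
prompt can yield different answers on different runs. The scores, names, and
temperature below are made up for illustration, not any real model's logits
or API.

    import math, random

    # Toy next-token sampler: softmax-with-temperature over made-up scores.
    # Same prompt, different runs, different answers -- the kind of
    # variation that can get an LLM user into trouble on factual questions.
    def sample(scores, temperature=1.0):
        weights = [math.exp(s / temperature) for s in scores.values()]
        return random.choices(list(scores), weights=weights)[0]

    scores = {"Moses": 2.0, "Noah": 1.6, "Abraham": 0.5}  # hypothetical logits
    print([sample(scores, temperature=0.9) for _ in range(5)])
    # e.g. ['Moses', 'Moses', 'Noah', 'Moses', 'Noah']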

From: Friam  On Behalf Of Frank Wimberly
Sent: Monday, October 9, 2023 12:45 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Language Model Understanding

"Truth" is well defined.  It is the set of propositions which assert that what 
is the case is the case.  Determining whether a given proposition is true may 
be difficult.
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Oct 9, 2023, 10:23 AM glen <geprope...@gmail.com> wrote:
OK, I agree, mostly. But "truth" is no more well-defined than any other 
specific grounding style. E.g. the insistence that there is truth in fiction. 
There is affective truth in MAGA, just like there's truth in whatever 
justification Hamas might give for its reaction to the bloodshed of the Israeli 
settlements. But such truths are so abstracted, they can be [a|mis]used at will 
and the narrative spin used to whip up the adherents provides any glue needed 
to make it seem as true as it needs to seem to spur the adherents to action.

It's a bad analogy from, say, Hamas to shut up and calculate. But it can be 
made. It's fun watching intra-science tribe members pick at each other for 
their sloppiness in communicating science. E.g. Sabine's take on transitioning. 
Whatever. If a tribe polices itself, then their trustworthiness is much higher 
... for me, at least. I'm glad the Republicans are in a civil war. It's 
evidence they may recover as a party. If people stop telling me I'm wrong, then 
I'm most likely very wrong. As long as I've still got people telling me I'm 
wrong, then I'm at least somewhere near not-wrong.


On 10/9/23 08:24, Marcus Daniels wrote:
> I mean there are some categories that are disjoint or mostly disjoint.   
> Similarly, the grounding is not total.   I agree that value systems like MAGA 
> have power, but they don't have truth.  There is no truth.  All there is, is 
> power, which is my point.  QM and demagoguery are both tools, with different 
> contexts for use.
>
>> On Oct 9, 2023, at 7:48 AM, glen <geprope...@gmail.com> wrote:
>>
>> Hm. Even with the caveat of "generally", I think this complementarity 
>> argument fails because all the various categories are not disjoint. And it's 
>> the (somewhat) lack(ing) of grounding/binding that allows the mixing of the 
>> modes. I'd tried to point this out by using "computation", the idea that 
>> human innovation might be more universal than microbial innovation. It's not 
>> really that the values *lack* grounding. It's that their grounding is 
>> complicated, perhaps iterative? maybe heterarchical? IDK, but certainly not 
>> lacking any grounding.
>>
>> An abstracted value system like that of the 09A OR MAGA cults may have 
>> *more* power, more chances to hook and unhook because it gives the donner 
>> and doffer of that value system more opportunities to do the donning and 
>> doffing at whatever arbitrary points they choose, to lazily benefit 
>> themselves without having to handle any unintended/unconsidered entailments.
>>
>>> On 10/8/23 18:18, Marcus Daniels wrote:
>>> This doesn't make them more valuable because they lack grounding.
>>
>>> On 10/8/23 13:21, Marcus Daniels wrote:
>>> Generally attaching to one value system means not attaching to another 
>>> value system.   For example, adopting the value of tolerance logically is 
>>> at odds with policing intolerance, e.g., one Jewish neighbor remarked this 
>>> morning he drove past a home with a Hamas flag on it and was scared.   
>>> (Reducing that fear by removing the flag would be reducing tolerance.)
>>> It seems to me that ideas that work have power and things that don’t work 
>>> don’t have power.

--
glen

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-09 Thread Frank Wimberly
"Truth" is well defined.  It is the set of propositions which assert that
what is the case is the case.  Determining whether a given proposition is
true may be difficult.

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM

On Mon, Oct 9, 2023, 10:23 AM glen  wrote:

> OK, I agree, mostly. But "truth" is no more well-defined than any other
> specific grounding style. E.g. the insistence that there is truth in
> fiction. There is affective truth in MAGA, just like there's truth in
> whatever justification Hamas might give for its reaction to the bloodshed
> of the Israeli settlements. But such truths are so abstracted, they can be
> [a|mis]used at will and the narrative spin used to whip up the adherents
> provides any glue needed to make it seem as true as it needs to seem to
> spur the adherents to action.
>
> It's a bad analogy from, say, Hamas to shut up and calculate. But it can
> be made. It's fun watching intra-science tribe members pick at each other
> for their sloppiness in communicating science. E.g. Sabine's take on
> transitioning. Whatever. If a tribe polices itself, then their
> trustworthiness is much higher ... for me, at least. I'm glad the
> Republicans are in a civil war. It's evidence they may recover as a party.
> If people stop telling me I'm wrong, then I'm most likely very wrong. As
> long as I've still got people telling me I'm wrong, then I'm at least
> somewhere near not-wrong.
>
>
> On 10/9/23 08:24, Marcus Daniels wrote:
> > I mean there are some categories that are disjoint or mostly disjoint.
>  Similarly, the grounding is not total.   I agree that value systems like
> MAGA have power, but they don't have truth.  There is no truth.  All there
> is, is power, which is my point.  QM and demagoguery are both tools, with
> different contexts for use.
> >
> >> On Oct 9, 2023, at 7:48 AM, glen  wrote:
> >>
> >> Hm. Even with the caveat of "generally", I think this complementarity
> argument fails because all the various categories are not disjoint. And
> it's the (somewhat) lack(ing) of grounding/binding that allows the mixing
> of the modes. I'd tried to point this out by using "computation", the idea
> that human innovation might be more universal than microbial innovation.
> It's not really that the values *lack* grounding. It's that their grounding
> is complicated, perhaps iterative? maybe heterarchical? IDK, but certainly
> not lacking any grounding.
> >>
> >> An abstracted value system like that of the 09A OR MAGA cults may have
> *more* power, more chances to hook and unhook because it gives the donner
> and doffer of that value system more opportunities to do the donning and
> doffing at whatever arbitrary points they choose, to lazily benefit
> themselves without having to handle any unintended/unconsidered entailments.
> >>
> >>> On 10/8/23 18:18, Marcus Daniels wrote:
> >>> This doesn't make them more valuable because they lack grounding.
> >>
> >>> On 10/8/23 13:21, Marcus Daniels wrote:
> >>> Generally attaching to one value system means not attaching to another
> value system.   For example, adopting the value of tolerance logically is
> at odds with policing intolerance, e.g., one Jewish neighbor remarked this
> morning he drove past a home with a Hamas flag on it and was scared.
>  (Reducing that fear by removing the flag would be reducing tolerance.)
> >>> It seems to me that ideas that work have power and things that don’t
> work don’t have power.
>
> --
> glen
>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-09 Thread Steve Smith


On 10/9/23 9:24 AM, Marcus Daniels wrote:

I mean there are some categories that are disjoint or mostly disjoint.   
Similarly, the grounding is not total.   I agree that value systems like MAGA 
have power, but they don't have truth.  There is no truth.  All there is, is 
power, which is my point.  QM and demagoguery are both tools, with different 
contexts for use.


Which is why "speaking truth to power" is considered a vain conceit by 
some?  Or by "some" are we speaking only of the cynical, who seem to be 
close cousins to the nihilist?


BTW... I ran this by my "new bar friend" ChatGPT and they set me 
straight...  there are plenty of other reasons to take this position 
besides mere cynicism and nihilism...  


One of my more profound weaknesses seems to be conflating skepticism 
with cynicism.




-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-09 Thread Marcus Daniels
People can interact with the world and find groundings for symbols using their 
senses.   Some of their semantic bindings may be learned from works of fiction. 
 Some of their bindings may be copied from other people that did the same.  
Other bindings may be from planting things in dirt or playing on a jungle gym.  
The people that learn things about using dirt in New Mexico may learn 
different things than people who use dirt in Washington.   The more indirect 
the source of the bindings, the more suspicious it is.   I wouldn't agree that 
all these bindings add value to the world.  We are better off if the ones that 
carry demonstrably false claims are proportionately devalued.   

-Original Message-
From: Friam  On Behalf Of glen
Sent: Monday, October 9, 2023 9:23 AM
To: friam@redfish.com
Subject: Re: [FRIAM] Language Model Understanding

OK, I agree, mostly. But "truth" is no more well-defined than any other 
specific grounding style. E.g. the insistence that there is truth in fiction. 
There is affective truth in MAGA, just like there's truth in whatever 
justification Hamas might give for its reaction to the bloodshed of the Israeli 
settlements. But such truths are so abstracted, they can be [a|mis]used at will 
and the narrative spin used to whip up the adherents provides any glue needed 
to make it seem as true as it needs to seem to spur the adherents to action.

It's a bad analogy from, say, Hamas to shut up and calculate. But it can be 
made. It's fun watching intra-science tribe members pick at each other for 
their sloppiness in communicating science. E.g. Sabine's take on transitioning. 
Whatever. If a tribe polices itself, then their trustworthiness is much higher 
... for me, at least. I'm glad the Republicans are in a civil war. It's 
evidence they may recover as a party. If people stop telling me I'm wrong, then 
I'm most likely very wrong. As long as I've still got people telling me I'm 
wrong, then I'm at least somewhere near not-wrong.


On 10/9/23 08:24, Marcus Daniels wrote:
> I mean there are some categories that are disjoint or mostly disjoint.   
> Similarly, the grounding is not total.   I agree that value systems like MAGA 
> have power, but they don't have truth.  There is no truth.  All there is, is 
> power, which is my point.  QM and demagoguery are both tools, with different 
> contexts for use.
> 
>> On Oct 9, 2023, at 7:48 AM, glen  wrote:
>>
>> Hm. Even with the caveat of "generally", I think this complementarity 
>> argument fails because all the various categories are not disjoint. And it's 
>> the (somewhat) lack(ing) of grounding/binding that allows the mixing of the 
>> modes. I'd tried to point this out by using "computation", the idea that 
>> human innovation might be more universal than microbial innovation. It's not 
>> really that the values *lack* grounding. It's that their grounding is 
>> complicated, perhaps iterative? maybe heterarchical? IDK, but certainly not 
>> lacking any grounding.
>>
>> An abstracted value system like that of the 09A OR MAGA cults may have 
>> *more* power, more chances to hook and unhook because it gives the donner 
>> and doffer of that value system more opportunities to do the donning and 
>> doffing at whatever arbitrary points they choose, to lazily benefit 
>> themselves without having to handle any unintended/unconsidered entailments.
>>
>>> On 10/8/23 18:18, Marcus Daniels wrote:
>>> This doesn't make them more valuable because they lack grounding.
>>
>>> On 10/8/23 13:21, Marcus Daniels wrote:
>>> Generally attaching to one value system means not attaching to another 
>>> value system.   For example, adopting the value of tolerance logically is 
>>> at odds with policing intolerance, e.g., one Jewish neighbor remarked this 
>>> morning he drove past a home with a Hamas flag on it and was scared.   
>>> (Reducing that fear by removing the flag would be reducing tolerance.)
>>> It seems to me that ideas that work have power and things that don’t work 
>>> don’t have power.

-- 
glen

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-09 Thread glen

OK, I agree, mostly. But "truth" is no more well-defined than any other 
specific grounding style. E.g. the insistence that there is truth in fiction. There is 
affective truth in MAGA, just like there's truth in whatever justification Hamas might 
give for its reaction to the bloodshed of the Israeli settlements. But such truths are so 
abstracted, they can be [a|mis]used at will and the narrative spin used to whip up the 
adherents provides any glue needed to make it seem as true as it needs to seem to spur 
the adherents to action.

It's a bad analogy from, say, Hamas to shut up and calculate. But it can be 
made. It's fun watching intra-science tribe members pick at each other for 
their sloppiness in communicating science. E.g. Sabine's take on transitioning. 
Whatever. If a tribe polices itself, then their trustworthiness is much higher 
... for me, at least. I'm glad the Republicans are in a civil war. It's 
evidence they may recover as a party. If people stop telling me I'm wrong, then 
I'm most likely very wrong. As long as I've still got people telling me I'm 
wrong, then I'm at least somewhere near not-wrong.


On 10/9/23 08:24, Marcus Daniels wrote:

I mean there are some categories that are disjoint or mostly disjoint.   
Similarly, the grounding is not total.   I agree that value systems like MAGA 
have power, but they don't have truth.  There is no truth.  All there is, is 
power, which is my point.  QM and demagoguery are both tools, with different 
contexts for use.


On Oct 9, 2023, at 7:48 AM, glen  wrote:

Hm. Even with the caveat of "generally", I think this complementarity argument fails 
because all the various categories are not disjoint. And it's the (somewhat) lack(ing) of 
grounding/binding that allows the mixing of the modes. I'd tried to point this out by using 
"computation", the idea that human innovation might be more universal than microbial 
innovation. It's not really that the values *lack* grounding. It's that their grounding is 
complicated, perhaps iterative? maybe heterarchical? IDK, but certainly not lacking any grounding.

An abstracted value system like that of the 09A OR MAGA cults may have *more* 
power, more chances to hook and unhook because it gives the donner and doffer 
of that value system more opportunities to do the donning and doffing at 
whatever arbitrary points they choose, to lazily benefit themselves without 
having to handle any unintended/unconsidered entailments.


On 10/8/23 18:18, Marcus Daniels wrote:
This doesn't make them more valuable because they lack grounding.



On 10/8/23 13:21, Marcus Daniels wrote:
Generally attaching to one value system means not attaching to another value 
system.   For example, adopting the value of tolerance logically is at odds 
with policing intolerance, e.g., one Jewish neighbor remarked this morning he 
drove past a home with a Hamas flag on it and was scared.   (Reducing that fear 
by removing the flag would be reducing tolerance.)
It seems to me that ideas that work have power and things that don’t work don’t 
have power.


--
glen

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-09 Thread Marcus Daniels
I mean there are some categories that are disjoint or mostly disjoint.   
Similarly, the grounding is not total.   I agree that value systems like MAGA 
have power, but they don't have truth.  There is no truth.  All there is, is 
power, which is my point.  QM and demagoguery are both tools, with different 
contexts for use.

> On Oct 9, 2023, at 7:48 AM, glen  wrote:
> 
> Hm. Even with the caveat of "generally", I think this complementarity 
> argument fails because all the various categories are not disjoint. And it's 
> the (somewhat) lack(ing) of grounding/binding that allows the mixing of the 
> modes. I'd tried to point this out by using "computation", the idea that 
> human innovation might be more universal than microbial innovation. It's not 
> really that the values *lack* grounding. It's that their grounding is 
> complicated, perhaps iterative? maybe heterarchical? IDK, but certainly not 
> lacking any grounding.
> 
> An abstracted value system like that of the 09A OR MAGA cults may have *more* 
> power, more chances to hook and unhook because it gives the donner and doffer 
> of that value system more opportunities to do the donning and doffing at 
> whatever arbitrary points they choose, to lazily benefit themselves without 
> having to handle any unintended/unconsidered entailments.
> 
>> On 10/8/23 18:18, Marcus Daniels wrote:
>> This doesn't make them more valuable because they lack grounding.
> 
>> On 10/8/23 13:21, Marcus Daniels wrote:
>> Generally attaching to one value system means not attaching to another value 
>> system.   For example, adopting the value of tolerance logically is at odds 
>> with policing intolerance, e.g., one Jewish neighbor remarked this morning 
>> he drove past a home with a Hamas flag on it and was scared.   (Reducing 
>> that fear by removing the flag would be reducing tolerance.)
>> It seems to me that ideas that work have power and things that don’t work 
>> don’t have power.
> 
> --
> glen
> 
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-09 Thread glen

Hm. Even with the caveat of "generally", I think this complementarity argument fails 
because all the various categories are not disjoint. And it's the (somewhat) lack(ing) of 
grounding/binding that allows the mixing of the modes. I'd tried to point this out by using 
"computation", the idea that human innovation might be more universal than microbial 
innovation. It's not really that the values *lack* grounding. It's that their grounding is 
complicated, perhaps iterative? maybe heterarchical? IDK, but certainly not lacking any grounding.

An abstracted value system like that of the 09A OR MAGA cults may have *more* 
power, more chances to hook and unhook because it gives the donner and doffer 
of that value system more opportunities to do the donning and doffing at 
whatever arbitrary points they choose, to lazily benefit themselves without 
having to handle any unintended/unconsidered entailments.

On 10/8/23 18:18, Marcus Daniels wrote:

This doesn't make them more valuable because they lack grounding.


On 10/8/23 13:21, Marcus Daniels wrote:

Generally attaching to one value system means not attaching to another value 
system.   For example, adopting the value of tolerance logically is at odds 
with policing intolerance, e.g., one Jewish neighbor remarked this morning he 
drove past a home with a Hamas flag on it and was scared.   (Reducing that fear 
by removing the flag would be reducing tolerance.)

It seems to me that ideas that work have power and things that don’t work don’t 
have power.


--
glen

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-08 Thread Marcus Daniels

Value systems are the entailments of arbitrary preferences.  Some preferences 
have had longer to churn than others and so have had more time to be elaborated 
and more people reflecting on them.  This doesn't make them more valuable 
because they lack grounding.  Such value systems often have more adherents, 
though.  If these people could be freed, they could ponder the entailments of 
other arbitrary preferences.  Maybe this collective activity can all be written 
down and some general patterns captured by an LLM or similar.  I find the 
entire exercise a lot of effort for no clear purpose, and a bike ride more 
satisfying.

On Oct 8, 2023, at 5:26 PM, Steve Smith  wrote:




Hope: "doing the right thing, no matter how you think it will turn out"


On 10/8/23 2:21 PM, Marcus Daniels wrote:
I would center nihilism not on devaluation but on deconstruction.  If value is 
about a set of lived experiences, is it a detached reflection on something 
special that was discovered, or more because of membership in that ecology – 
because of an investment?
Deep membership makes it harder to talk objectively about discoveries and to 
move on to making other discoveries.  Generally attaching to one value system 
means not attaching to another value system.   For example, adopting the value 
of tolerance logically is at odds with policing intolerance, e.g., one Jewish 
neighbor remarked this morning he drove past a home with a Hamas flag on it and 
was scared.   (Reducing that fear by removing the flag would be reducing 
tolerance.)

It seems to me that ideas that work have power and things that don’t work don’t 
have power.   It is reasonable to be skeptical about investment in ideas that 
may require years or decades of investment and not result in a return.  
Unpacking an idea to a Silicon Valley venture capitalist as to why there could 
be a return is deconstruction not devaluation.

I see these folks on MSNBC talking about the nihilism of DJT or Bannon and I 
don’t really get it.   Democracy isn’t a value system; it is a way to avoid 
(violent) conflict in a pluralistic society.   There’s a case to be made for 
the merits of such societies, and a case to be made for limiting violence.  
They should make those cases, not just throw out a placeholder word like 
nihilism to judge the people who don’t value those types of societies.   
(Apparently many people in North America.)

Marcus

From: Friam <friam-boun...@redfish.com> on behalf of David Eric Smith <desm...@santafe.edu>
Date: Sunday, October 8, 2023 at 10:40 AM
To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
Subject: Re: [FRIAM] Language Model Understanding
Zang!  I had not made that connection.

Hmmh.  What do I think?  I seem to have viewed uninterpreted models much as I 
view a hammer or a shovel; as a tool in the world for doing a certain job (in 
this case, a job of perceiving the world in valid ways), neither here nor there 
w.r.t. questions of nihilism.

Where then would I try to center nihilism?  (It being a topic I haven’t spent 
a lot of time on historically).  Maybe if I had to choose one phrase, it would 
be “the devaluation of values”.   Brown uses it in her book, but I think it is 
old and standardized.  Don’t know if it came from Nietzsche or Weber, or is 
much older than either of them.

How would I orient to try to address such questions, that I do not want to go 
into support of bombs?  I think my current cast of mind is that there is a 
large class of “discovered thing”, meaning that they are not willed into 
existence, but are brought into existence (if there is even any bringing) 
through lived experience, and “noticed” after the fact.  Or maybe given in the 
ineffable from the start, and noticed along the way (all of Descartes’s cogito, 
the rest of “the self”, and much else).  It seems to me that there is room for 
aesthetics to be given much more and better attention than perhaps it has had 
in philosophy (or whoever is in charge of this question).  Aesthetics certainly 
not being the only domain from which discovered things can originate, but 
useful in that we can recognize it as a source, but not have the impulse to 
conflate it with dogma, as many other notions of “belief” tend to drift into.

Hmm.

Eric



On Oct 8, 2023, at 12:30 PM, Marcus Daniels <mar...@snoutfarm.com> wrote:

Eric writes:

“Bears on how many things make up the machinery of nihilism, extending well 
beyond relations I recognized as part of an integration, though mentioning many 
things I rail against in daily life.”

The willingness of physicists to use uninterpreted models, e.g., quantum 
mechanics, seems like nihilism to me.  I don’t mean that in a judgmental way, 
nor do I mean it in an admiring way.  On the other hand, there are many people, 
I reckon most people, that provide their beliefs as both explanations and 
justifications.  

Re: [FRIAM] Language Model Understanding

2023-10-08 Thread Steve Smith


   Hope: "/doing the right thing, no matter how you think it will turn
   out/"


On 10/8/23 2:21 PM, Marcus Daniels wrote:


I would center nihilism not on devaluation but on deconstruction. If 
value is about a set of lived experiences, is it a detached reflection 
on something special that was discovered, or more because of 
membership in that ecology – because of an investment?
Deep membership makes it harder to talk objectively about discoveries 
and to move on to making other discoveries. Generally attaching to one 
value system means not attaching to another value system.   For 
example, adopting the value of tolerance logically is at odds with 
policing intolerance, e.g., one Jewish neighbor remarked this morning 
he drove past a home with a Hamas flag on it and was scared. (Reducing 
that fear by removing the flag would be reducing tolerance.)


It seems to me that ideas that work have power and things that don’t 
work don’t have power.   It is reasonable to be skeptical about 
investment in ideas that may require years or decades of investment 
and not result in a return. Unpacking an idea to a Silicon Valley 
venture capitalist as to why there could be a return is deconstruction 
not devaluation.


I see these folks on MSNBC talking about the nihilism of DJT or Bannon 
and I don’t really get it.   Democracy isn’t a value system; it is a 
way to avoid (violent) conflict in a pluralistic society.   There’s a 
case to be made for the merits of such societies, and a case to be 
made for limiting violence.  They should make those cases, not just 
throw out a placeholder word like nihilism to judge the people who 
don’t value those types of societies.   (Apparently many people in 
North America.)


Marcus

From: Friam  on behalf of David Eric Smith 
Date: Sunday, October 8, 2023 at 10:40 AM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Language Model Understanding

Zang!  I had not made that connection.

Hmmh.  What do I think?  I seem to have viewed uninterpreted models 
much as I view a hammer or a shovel; as a tool in the world for doing 
a certain job (in this case, a job of perceiving the world in valid 
ways), neither here nor there w.r.t. questions of nihilism.


Where then would I try to center nihilism?  (It being a topic I 
haven’t spent a lot of time on historically).  Maybe if I had to 
choose one phrase, it would be “the devaluation of values”.   Brown 
uses it in her book, but I think it is old and standardized.  Don’t 
know if it came from Nietzsche or Weber, or is much older than either 
of them.


How would I orient to try to address such questions, that I do not 
want to go into support of bombs?  I think my current cast of mind is 
that there is a large class of “discovered thing”, meaning that they 
are not willed into existence, but are brought into existence (if 
there is even any bringing) through lived experience, and “noticed” 
after the fact.  Or maybe given in the ineffable from the start, and 
noticed along the way (all of Descartes’s cogito, the rest of “the 
self”, and much else).  It seems to me that there is room for 
aesthetics to be given much more and better attention than perhaps it 
has had in philosophy (or whoever is in charge of this question). 
 Aesthetics certainly not being the only domain from which discovered 
things can originate, but useful in that we can recognize it as a 
source, but not have the impulse to conflate it with dogma, as many 
other notions of “belief” tend to drift into.


Hmm.

Eric



On Oct 8, 2023, at 12:30 PM, Marcus Daniels 
wrote:

Eric writes:

“Bears on how many things make up the machinery of nihilism,
extending well beyond relations I recognized as part of an
integration, though mentioning many things I rail against in daily
life.”


The willingness of physicists to use uninterpreted models, e.g.,
quantum mechanics, seems like nihilism to me.  I don’t mean that
in a judgmental way, nor do I mean it in an admiring way.  On the
other hand, there are many people, I reckon most people, that
provide their beliefs as both explanations and justifications.  
Bombs follow soon after.


Marcus


Re: [FRIAM] Language Model Understanding

2023-10-08 Thread Marcus Daniels
I would center nihilism not on devaluation but on deconstruction.  If value is 
about a set of lived experiences, is it a detached reflection on something 
special that was discovered, or more because of membership in that ecology – 
because of an investment?
Deep membership makes it harder to talk objectively about discoveries and to 
move on to making other discoveries.  Generally attaching to one value system 
means not attaching to another value system.   For example, adopting the value 
of tolerance logically is at odds with policing intolerance, e.g., one Jewish 
neighbor remarked this morning he drove past a home with a Hamas flag on it and 
was scared.   (Reducing that fear by removing the flag would be reducing 
tolerance.)

It seems to me that ideas that work have power and things that don’t work don’t 
have power.   It is reasonable to be skeptical about investment in ideas that 
may require years or decades of investment and not result in a return.  
Unpacking an idea to a Silicon Valley venture capitalist as to why there could 
be a return is deconstruction not devaluation.

I see these folks on MSNBC talking about the nihilism of DJT or Bannon and I 
don’t really get it.   Democracy isn’t a value system; it is a way to avoid 
(violent) conflict in a pluralistic society.   There’s a case to be made for 
the merits of such societies, and a case to be made for limiting violence.  
They should make those cases, not just throw out a placeholder word like 
nihilism to judge the people who don’t value those types of societies.   
(Apparently many people in North America.)

Marcus

From: Friam  on behalf of David Eric Smith 

Date: Sunday, October 8, 2023 at 10:40 AM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Language Model Understanding
Zang!  I had not made that connection.

Hmmh.  What do I think?  I seem to have viewed uninterpreted models much as I 
view a hammer or a shovel; as a tool in the world for doing a certain job (in 
this case, a job of perceiving the world in valid ways), neither here nor there 
w.r.t. questions of nihilism.

Where then would I try to center nihilism?  (It being a topic I haven’t spent 
a lot of time on historically).  Maybe if I had to choose one phrase, it would 
be “the devaluation of values”.   Brown uses it in her book, but I think it is 
old and standardized.  Don’t know if it came from Nietzsche or Weber, or is 
much older than either of them.

How would I orient to try to address such questions, that I do not want to go 
into support of bombs?  I think my current cast of mind is that there is a 
large class of “discovered thing”, meaning that they are not willed into 
existence, but are brought into existence (if there is even any bringing) 
through lived experience, and “noticed” after the fact.  Or maybe given in the 
ineffable from the start, and noticed along the way (all of Descartes’s cogito, 
the rest of “the self”, and much else).  It seems to me that there is room for 
aesthetics to be given much more and better attention than perhaps it has had 
in philosophy (or whoever is in charge of this question).  Aesthetics certainly 
not being the only domain from which discovered things can originate, but 
useful in that we can recognize it as a source, but not have the impulse to 
conflate it with dogma, as many other notions of “belief” tend to drift into.

Hmm.

Eric



On Oct 8, 2023, at 12:30 PM, Marcus Daniels  wrote:

Eric writes:

“Bears on how many things make up the machinery of nihilism, extending well 
beyond relations I recognized as part of an integration, though mentioning many 
things I rail against in daily life.”

The willingness of physicists to use uninterpreted models, e.g., quantum 
mechanics, seems like nihilism to me.  I don’t mean that in a judgmental way, 
nor do I mean it in an admiring way.  On the other hand, there are many people, 
I reckon most people, that provide their beliefs as both explanations and 
justifications.   Bombs follow soon after.


Marcus
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/

Re: [FRIAM] Language Model Understanding

2023-10-08 Thread Roger Critchlow
>>> LLMs can generate written content, from news
>>> articles to summaries and even creative pieces. Journalists can use them to
>>> draft stories faster and get ideas flowing.*
>>>
>>> *2. **Fact-Checking**: They're handy for fact-checking, helping you
>>> quickly verify information by scanning a vast database of knowledge.*
>>>
>>> *3. **Summarization**: LLMs can take long, complex documents and create
>>> concise summaries, saving you time on research.*
>>>
>>> *4. **Translation**: They can also translate text between languages,
>>> which can be incredibly useful for international reporting.*
>>>
>>> *5. **Personalization**: Some news websites use LLMs to tailor content
>>> to individual readers, providing a more personalized news experience.*
>>>
>>> *6. **Social Media Analysis**: LLMs can sift through social media data,
>>> helping journalists understand public sentiment and trends.*
>>>
>>> ***Potential Concerns***
>>>
>>> *While LLMs are incredibly powerful, there are some concerns to keep in
>>> mind. They might unintentionally spread misinformation if not used
>>> carefully, and they can't replace human judgment or critical thinking.*
>>>
>>> ***Conclusion***
>>>
>>> *In a nutshell, modern language models (LLMs) are like digital writing
>>> assistants that can help journalists write better, research faster, and
>>> adapt to the changing landscape of digital journalism. While they're
>>> powerful tools, it's important to use them responsibly, keeping in mind
>>> their strengths and limitations, as well as the ethical considerations of
>>> automated content creation in journalism.*
>>>
>>> Can you elaborate on some of the technical jargon?
>>>
>>> *Certainly! Here are some common technical jargon related to Large
>>> Language Models (LLMs) explained in journalist-friendly terms:*
>>>
>>> *1. **Pretraining**: This is the initial phase where the LLM learns from
>>> a massive amount of text data on the internet. Think of it as the model
>>> absorbing a vast library of information to understand language better.*
>>>
>>> *2. **Fine-tuning**: After pretraining, LLMs undergo fine-tuning to
>>> specialize them for specific tasks. It's like giving the model extra
>>> training in a particular subject, like finance or healthcare, to make it
>>> more useful for specialized reporting.*
>>>
>>> *3. **Token**: In LLMs, text is broken down into smaller units called
>>> tokens. A token can be as short as one character or as long as one word.
>>> Knowing the number of tokens is important because it affects how much text
>>> the model can handle at once.*
>>>
>>> *4. **Inference**: This term refers to the process of using a trained
>>> LLM to generate text or make predictions. Journalists use inference to
>>> create content, answer questions, or analyze data.*
>>>
>>> *5. **Prompt**: A prompt is the input you provide to an LLM to get a
>>> response. It's like giving the model a question or a starting point for it
>>> to generate text. Journalists often craft prompts to get specific
>>> information or insights.*
>>>
>>> *6. **Bias**: LLMs can sometimes produce biased or unfair content
>>> because they've learned from biased internet data. Journalists should be
>>> aware of this and critically evaluate the model's outputs for any biases.*
>>>
>>> *7. **Algorithm**: This is a fancy word for a set of rules or
>>> instructions that an LLM follows to perform tasks. Algorithms guide how the
>>> model generates text and makes decisions.*
>>>
>>> *8. **NLP (Natural Language Processing)**: NLP is the field of study
>>> that focuses on teaching computers to understand and generate human
>>> language. LLMs are a product of NLP research.*
>>>
>>> *9. **ML (Machine Learning)**: LLMs are a type of machine learning
>>> model. Machine learning is a way to make computers learn from data and
>>> improve their performance over time.*
>>>
>>> *10. **Training Data**: This is the text data used to teach an LLM. It's
>>> like the model's textbooks, and the quality and diversity of this data
>>> influence how well the LLM performs.*
>>>
>>> *11. **Bias Mitigation**: This is the process of reducing bias in LLMs.
>>> It's important in journalism to ensure that the information provided by the
>>> model is fair and unbiased.*

Re: [FRIAM] Language Model Understanding

2023-10-08 Thread Roger Critchlow
>> their strengths and limitations, as well as the ethical considerations of
>> automated content creation in journalism.*
>>
>> Can you elaborate on some of the technical jargon?
>>
>> *Certainly! Here are some common technical jargon related to Large
>> Language Models (LLMs) explained in journalist-friendly terms:*
>>
>> *1. **Pretraining**: This is the initial phase where the LLM learns from
>> a massive amount of text data on the internet. Think of it as the model
>> absorbing a vast library of information to understand language better.*
>>
>> *2. **Fine-tuning**: After pretraining, LLMs undergo fine-tuning to
>> specialize them for specific tasks. It's like giving the model extra
>> training in a particular subject, like finance or healthcare, to make it
>> more useful for specialized reporting.*
>>
>> *3. **Token**: In LLMs, text is broken down into smaller units called
>> tokens. A token can be as short as one character or as long as one word.
>> Knowing the number of tokens is important because it affects how much text
>> the model can handle at once.*
>>
>> *4. **Inference**: This term refers to the process of using a trained LLM
>> to generate text or make predictions. Journalists use inference to create
>> content, answer questions, or analyze data.*
>>
>> *5. **Prompt**: A prompt is the input you provide to an LLM to get a
>> response. It's like giving the model a question or a starting point for it
>> to generate text. Journalists often craft prompts to get specific
>> information or insights.*
>>
>> *6. **Bias**: LLMs can sometimes produce biased or unfair content because
>> they've learned from biased internet data. Journalists should be aware of
>> this and critically evaluate the model's outputs for any biases.*
>>
>> *7. **Algorithm**: This is a fancy word for a set of rules or
>> instructions that an LLM follows to perform tasks. Algorithms guide how the
>> model generates text and makes decisions.*
>>
>> *8. **NLP (Natural Language Processing)**: NLP is the field of study that
>> focuses on teaching computers to understand and generate human language.
>> LLMs are a product of NLP research.*
>>
>> *9. **ML (Machine Learning)**: LLMs are a type of machine learning model.
>> Machine learning is a way to make computers learn from data and improve
>> their performance over time.*
>>
>> *10. **Training Data**: This is the text data used to teach an LLM. It's
>> like the model's textbooks, and the quality and diversity of this data
>> influence how well the LLM performs.*
>>
>> *11. **Bias Mitigation**: This is the process of reducing bias in LLMs.
>> It's important in journalism to ensure that the information provided by the
>> model is fair and unbiased.*
>>
>> *12. **Ethical AI**: This refers to the practice of using LLMs
>> responsibly and ethically. Journalists should consider the ethical
>> implications of using these tools, especially when it comes to issues like
>> privacy and misinformation.*
>>
>> *13. **API (Application Programming Interface)**: Some LLMs offer APIs
>> that allow journalists to interact with the model through code, enabling
>> custom applications and integrations.*
>>
>> *14. **Fine-Tuned Models**: These are LLMs that have been trained for
>> specific tasks, like summarization, translation, or question-answering.
>> They are more specialized and can be particularly useful for journalism.*
>>
>> *Understanding these terms can help journalists effectively leverage LLMs
>> in their work while also being mindful of their limitations and ethical
>> considerations.*
>>
>>
>> - Steve
>>
>> On 10/7/23 5:13 PM, Tom Johnson wrote:
>>
>> Thanks. Those are impressive numbers.
>> Tom
>>
>> On Sat, Oct 7, 2023 at 4:11 PM Marcus Daniels 
>> wrote:
>>
>>> The “large” refers to the number of parameters used.  A smaller large
>>> language model – a deep neural net -- starts at about 3 billion parameters, but
>>> larger ones like Claude 2 (the latest large language model of the company
>>> that wrote the paper Steve mentioned) have more than 130 billion
>>> parameters.   Amazingly, it is possible using (rooms of) GPUs and other
>>> accelerators to optimize in a space of this size.   The billions of
>>> parameters come from the vocabulary size – the number of tokens that need
>>> to be discriminated, the many layers of transformers that are needed to
>>> capture the complexity of human and non-human languages (like DNA), and the
>
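
For a rough sense of where those billions come from, here is a back-of-envelope
sketch under the textbook decomposition (an embedding table of vocab × width
weights, plus roughly 12 × width² weights per transformer layer); the
configurations below are illustrative guesses, not any particular model's
published numbers:

    # Back-of-envelope transformer parameter count.
    # Assumes embedding table = vocab_size * d_model, and ~12 * d_model^2
    # per layer (4*d^2 attention projections + 8*d^2 MLP); ignores biases,
    # layer norms, and the output head, which are small by comparison.
    def approx_params(vocab_size: int, d_model: int, n_layers: int) -> int:
        embedding = vocab_size * d_model
        per_layer = 12 * d_model ** 2
        return embedding + n_layers * per_layer

    # Hypothetical configurations in the ranges Marcus mentions:
    print(f"{approx_params(50_000, 4_096, 32) / 1e9:.1f}B")    # ~6.6B
    print(f"{approx_params(50_000, 12_288, 96) / 1e9:.1f}B")   # ~174.6B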

Re: [FRIAM] Language Model Understanding

2023-10-08 Thread David Eric Smith
>>> strengths and limitations, as well as the ethical considerations of 
>>> automated content creation in journalism.
>>> Can you elaborate on some of the technical jargon?
>>> 
>>> Certainly! Here are some common technical jargon related to Large Language 
>>> Models (LLMs) explained in journalist-friendly terms:
>>> 
>>> 1. **Pretraining**: This is the initial phase where the LLM learns from a 
>>> massive amount of text data on the internet. Think of it as the model 
>>> absorbing a vast library of information to understand language better.
>>> 
>>> 2. **Fine-tuning**: After pretraining, LLMs undergo fine-tuning to 
>>> specialize them for specific tasks. It's like giving the model extra 
>>> training in a particular subject, like finance or healthcare, to make it 
>>> more useful for specialized reporting.
>>> 
>>> 3. **Token**: In LLMs, text is broken down into smaller units called 
>>> tokens. A token can be as short as one character or as long as one word. 
>>> Knowing the number of tokens is important because it affects how much text 
>>> the model can handle at once.
>>> 
>>> 4. **Inference**: This term refers to the process of using a trained LLM to 
>>> generate text or make predictions. Journalists use inference to create 
>>> content, answer questions, or analyze data.
>>> 
>>> 5. **Prompt**: A prompt is the input you provide to an LLM to get a 
>>> response. It's like giving the model a question or a starting point for it 
>>> to generate text. Journalists often craft prompts to get specific 
>>> information or insights.
>>> 
>>> 6. **Bias**: LLMs can sometimes produce biased or unfair content because 
>>> they've learned from biased internet data. Journalists should be aware of 
>>> this and critically evaluate the model's outputs for any biases.
>>> 
>>> 7. **Algorithm**: This is a fancy word for a set of rules or instructions 
>>> that an LLM follows to perform tasks. Algorithms guide how the model 
>>> generates text and makes decisions.
>>> 
>>> 8. **NLP (Natural Language Processing)**: NLP is the field of study that 
>>> focuses on teaching computers to understand and generate human language. 
>>> LLMs are a product of NLP research.
>>> 
>>> 9. **ML (Machine Learning)**: LLMs are a type of machine learning model. 
>>> Machine learning is a way to make computers learn from data and improve 
>>> their performance over time.
>>> 
>>> 10. **Training Data**: This is the text data used to teach an LLM. It's 
>>> like the model's textbooks, and the quality and diversity of this data 
>>> influence how well the LLM performs.
>>> 
>>> 11. **Bias Mitigation**: This is the process of reducing bias in LLMs. It's 
>>> important in journalism to ensure that the information provided by the 
>>> model is fair and unbiased.
>>> 
>>> 12. **Ethical AI**: This refers to the practice of using LLMs responsibly 
>>> and ethically. Journalists should consider the ethical implications of 
>>> using these tools, especially when it comes to issues like privacy and 
>>> misinformation.
>>> 
>>> 13. **API (Application Programming Interface)**: Some LLMs offer APIs that 
>>> allow journalists to interact with the model through code, enabling custom 
>>> applications and integrations.
>>> 
>>> 14. **Fine-Tuned Models**: These are LLMs that have been trained for 
>>> specific tasks, like summarization, translation, or question-answering. 
>>> They are more specialized and can be particularly useful for journalism.
>>> 
>>> Understanding these terms can help journalists effectively leverage LLMs in 
>>> their work while also being mindful of their limitations and ethical 
>>> considerations.
>>> 
>>> 
>>> 
>>> - Steve
>>> 
>>> On 10/7/23 5:13 PM, Tom Johnson wrote:
>>> 
>>>> Thanks. Those are impressive numbers.
>>>> Tom
>>>> 
>>>> On Sat, Oct 7, 2023 at 4:11 PM Marcus Daniels <mar...@snoutfarm.com> wrote:
>>>>> The “large” refers to the number of parameters used.  A smaller large 
>>>>> language model – a deep neural net -- starts at about 3 billion parameters, 
>>>>> but larger ones like Claude 2 (the latest large language model of the 
>>>>> company that wrote the paper Steve mentioned) have more than 130 billion parameters.

Re: [FRIAM] Language Model Understanding

2023-10-08 Thread David Eric Smith
Zang!  I had not made that connection.

Hmmh.  What do I think?  I seem to have viewed uninterpreted models much as I 
view a hammer or a shovel; as a tool in the world for doing a certain job (in 
this case, a job of perceiving the world in valid ways), neither here nor there 
w.r.t. questions of nihilism.

Where then would I try to center nihilism?  (It being a topic I haven’t spent 
a lot of time on historically).  Maybe if I had to choose one phrase, it would 
be “the devaluation of values”.   Brown uses it in her book, but I think it is 
old and standardized.  Don’t know if it came from Nietzsche or Weber, or is 
much older than either of them.  

How would I orient to try to address such questions, that I do not want to go 
into support of bombs?  I think my current cast of mind is that there is a 
large class of “discovered thing”, meaning that they are not willed into 
existence, but are brought into existence (if there is even any bringing) 
through lived experience, and “noticed” after the fact.  Or maybe given in the 
ineffable from the start, and noticed along the way (all of Descartes’s cogito, 
the rest of “the self”, and much else).  It seems to me that there is room for 
aesthetics to be given much more and better attention than perhaps it has had 
in philosophy (or whoever is in charge of this question).  Aesthetics is 
certainly not the only domain from which discovered things can originate, but 
it is useful in that we can recognize it as a source without the impulse to 
conflate it with dogma, as many other notions of “belief” tend to drift into.

Hmm.

Eric


> On Oct 8, 2023, at 12:30 PM, Marcus Daniels  wrote:
> 
> Eric writes:
> 
> “Bears on how many things make up the machinery of nihilism, extending well 
> beyond relations I recognized as part of an integration, though mentioning 
> many things I rail against in daily life.”
> 
> The willingness of physicists to use uninterpreted models, e.g., quantum 
> mechanics, seems like nihilism to me.  I don’t mean that in a judgmental way, 
> nor do I mean it in an admiring way.  On the other hand, there are many 
> people, I reckon most people, that provide their beliefs as both explanations 
> and justifications.   Bombs follow soon after.
> 
> Marcus
> -. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
> FRIAM Applied Complexity Group listserv
> Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
> https://bit.ly/virtualfriam
> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:  5/2017 thru present 
> https://redfish.com/pipermail/friam_redfish.com/
>  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-08 Thread Marcus Daniels
Eric writes:

“Bears on how many things make up the machinery of nihilism, extending well 
beyond relations I recognized as part of an integration, though mentioning many 
things I rail against in daily life.”

The willingness of physicists to use uninterpreted models, e.g., quantum 
mechanics, seems like nihilism to me.  I don’t mean that in a judgmental way, 
nor do I mean it in an admiring way.  On the other hand, there are many people, 
I reckon most people, that provide their beliefs as both explanations and 
justifications.   Bombs follow soon after.

Marcus
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-07 Thread Steve Smith
onse. It's like giving the model a question or a starting
   point for it to generate text. Journalists often craft prompts to
   get specific information or insights.

   6. **Bias**: LLMs can sometimes produce biased or unfair content
   because they've learned from biased internet data. Journalists
   should be aware of this and critically evaluate the model's outputs
   for any biases.

   7. **Algorithm**: This is a fancy word for a set of rules or
   instructions that an LLM follows to perform tasks. Algorithms guide
   how the model generates text and makes decisions.

   8. **NLP (Natural Language Processing)**: NLP is the field of
   study that focuses on teaching computers to understand and generate
   human language. LLMs are a product of NLP research.

   9. **ML (Machine Learning)**: LLMs are a type of machine learning
   model. Machine learning is a way to make computers learn from data
   and improve their performance over time.

   10. **Training Data**: This is the text data used to teach an LLM.
   It's like the model's textbooks, and the quality and diversity of
   this data influence how well the LLM performs.

   11. **Bias Mitigation**: This is the process of reducing bias in
   LLMs. It's important in journalism to ensure that the information
   provided by the model is fair and unbiased.

   12. **Ethical AI**: This refers to the practice of using LLMs
   responsibly and ethically. Journalists should consider the ethical
   implications of using these tools, especially when it comes to
   issues like privacy and misinformation.

   13. **API (Application Programming Interface)**: Some LLMs offer
   APIs that allow journalists to interact with the model through code,
   enabling custom applications and integrations (a small sketch of
   such a call follows this list).

   14. **Fine-Tuned Models**: These are LLMs that have been trained
   for specific tasks, like summarization, translation, or
   question-answering. They are more specialized and can be
   particularly useful for journalism.

   Understanding these terms can help journalists effectively
   leverage LLMs in their work while also being mindful of their
   limitations and ethical considerations.
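
For the journalists curious what item 13 looks like in practice, here is a
minimal Python sketch of calling a hosted LLM over HTTP.  The endpoint URL,
model name, and response shape below are illustrative assumptions in the
general style of current chat APIs, not any particular vendor's documented
interface:

    import os
    import requests  # third-party HTTP client: pip install requests

    # Hypothetical endpoint and model name, for illustration only.
    API_URL = "https://api.example.com/v1/chat/completions"
    API_KEY = os.environ["LLM_API_KEY"]  # keep credentials out of source

    def ask_llm(prompt: str) -> str:
        """Send a prompt to the model and return its text reply."""
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "example-model",
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    print(ask_llm("Summarize this school-board agenda in three bullets: ..."))

Swap in a real provider's URL, auth header, and payload from its
documentation, and the rest of such a newsroom script stays the same.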


- Steve

On 10/7/23 5:13 PM, Tom Johnson wrote:


Thanks. Those are impressive numbers.
Tom

On Sat, Oct 7, 2023 at 4:11 PM Marcus Daniels  
wrote:


The “large” refers to the number of parameters used.  A smaller
large language model – a deep neural net -- starts at about 3 billion
parameters, but larger ones like Claude 2 (the latest large
language model of the company that wrote the paper Steve
mentioned) have more than 130 billion parameters.   Amazingly, it
is possible using (rooms of) GPUs and other accelerators to
optimize in a space of this size.   The billions of
parameters come from the vocabulary size – the number of tokens
that need to be discriminated, the many layers of transformers
that are needed to capture the complexity of human and non-human
languages (like DNA), and the context window size – how many
paragraphs or pages the model is trained on at a time.   A small
language model might be suitable for understanding the geometries
of chemicals, say.

*From:* Friam  *On Behalf Of *Tom Johnson
*Sent:* Saturday, October 7, 2023 2:38 PM
*To:* The Friday Morning Applied Complexity Coffee Group

*Subject:* Re: [FRIAM] Language Model Understanding

Thanks for passing this along, Steve. I wish, however, the authors
of this short piece would have included a definition of, in their
usage, "Large Language Models" and "Small Language Models." 
Perhaps I can find those in the larger paper.

Tom

On Sat, Oct 7, 2023 at 12:34 PM Steve Smith  wrote:

This popular-press article came through my Google News feed
recently which I thought might be useful to the
Journalists/English-Majors on the list to help understand how
LLMs work, etc.   When I read it in detail (forwarded from my
TS (TinyScreenPhone) on my LS (Large Screen Laptop)) I found
it a bit more detailed and technical than I'd expected, but
nevertheless rewarding and possibly offering some traction to
Journalism/English majors as well as those with a larger
investment in the CS/Math implied.

Decomposing Language Models into Understandable Components

<https://www.anthropic.com/index/decomposing-language-models-into-understandable-components>

and the (more) technical paper behind the article


https://transformer-circuits.pub/2023/monosemantic-features/index.html

<https://transformer-circuits.pub/2023/monosemantic-features/index.html>

Despite having sent a few dogs into vaguely similar scuffles
in my careen(r):

Faceted Ontologies for Pre Incident Indicator Analysis

Re: [FRIAM] Language Model Understanding

2023-10-07 Thread Tom Johnson
Thanks. Those are impressive numbers.
Tom

On Sat, Oct 7, 2023 at 4:11 PM Marcus Daniels  wrote:

> The “large” refers to the number of parameters used.  A smaller large
> language model – a deep neural net -- starts at about 3 billion parameters, but
> larger ones like Claude 2 (the latest large language model of the company
> that wrote the paper Steve mentioned) have more than 130 billion
> parameters.   Amazingly, it is possible using (rooms of) GPUs and other
> accelerators to optimize in a space of this size.   The billions of
> parameters come from the vocabulary size – the number of tokens that need
> to be discriminated, the many layers of transformers that are needed to
> capture the complexity of human and non-human languages (like DNA), and the
> context window size – how many paragraphs or pages the model is trained on
> at a time.   A small language model might be suitable for understanding the
> geometries of chemicals, say.
>
>
>
> *From:* Friam  *On Behalf Of *Tom Johnson
> *Sent:* Saturday, October 7, 2023 2:38 PM
> *To:* The Friday Morning Applied Complexity Coffee Group <
> friam@redfish.com>
> *Subject:* Re: [FRIAM] Language Model Understanding
>
>
>
> Thanks for passing this along, Steve. I wish, however, the authors of this
> short piece would have included a definition of, in their usage, "Large
> Language Models" and "Small Language Models."  Perhaps I can find those in
> the larger paper.
>
> Tom
>
>
>
> On Sat, Oct 7, 2023 at 12:34 PM Steve Smith  wrote:
>
> This popular-press article came through my Google News feed recently which
> I thought might be useful to the Journalists/English-Majors on the list to
> help understand how LLMs work, etc.   When I read it in detail (forwarded
> from my TS (TinyScreenPhone) on my LS (Large Screen Laptop)) I found it a
> bit more detailed and technical than I'd expected, but nevertheless
> rewarding and possibly offering some traction to Journalism/English majors
> as well as those with a larger investment in the CS/Math implied.
>
> Decomposing Language Models into Understandable Components
>
> <https://www.anthropic.com/index/decomposing-language-models-into-understandable-components>
>
> and the (more) technical paper behind the article
>
> https://transformer-circuits.pub/2023/monosemantic-features/index.html
>
> Despite having sent a few dogs into vaguely similar scuffles in my
> careen(r):
>
> Faceted Ontologies for Pre Incident Indicator Analysis
> <https://apps.dtic.mil/sti/tr/pdf/ADA588086.pdf>
> SpindleViz <https://www.ehu.eus/ccwintco/uploads/c/c6/HAIS2010_925.pdf>
> ...
>
> ... I admit to finding this both intriguing and well over my head on
> casual inspection...  the (metaphorical?) keywords that drew me in  most
> strongly included *Superposition* and *Thought Vectors*, though they are
> (nod to Glen) probably riddled (heaped, overflowing, bursting, bloated ...
> )  with excess meaning.
>
> https://gabgoh.github.io/ThoughtVectors/
>
> This leads me (surprise!) to an open ended discursive series of thoughts
> probably better left for a separate posting (probably rendered in a
> semasiographic language like Heptapod B
> <https://en.wikipedia.org/wiki/Heptapod_languages#Orthography>).
>
> 
>
> - Steve
>
> -. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
> FRIAM Applied Complexity Group listserv
> Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom
> https://bit.ly/virtualfriam
> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:  5/2017 thru present
> https://redfish.com/pipermail/friam_redfish.com/
>   1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
>
> -. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
> FRIAM Applied Complexity Group listserv
> Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom
> https://bit.ly/virtualfriam
> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:  5/2017 thru present
> https://redfish.com/pipermail/friam_redfish.com/
>   1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-07 Thread Marcus Daniels
The “large” refers to the number of parameters used.  A smaller large language 
model – a deep neural net -- starts at about 3 billion parameters, but larger ones 
like Claude 2 (the latest large language model of the company that wrote the 
paper Steve mentioned) have more than 130 billion parameters.   Amazingly, it 
is possible using (rooms of) GPUs and other accelerators to optimize in a 
space of this size.   The billions of parameters come from the vocabulary size 
– the number of tokens that need to be discriminated, the many layers of 
transformers that are needed to capture the complexity of human and non-human 
languages (like DNA), and the context window size – how many paragraphs or 
pages the model is trained on at a time.   A small language model might be 
suitable for understanding the geometries of chemicals, say.
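
To make those counts concrete, here is a back-of-envelope Python sketch for a
plain decoder-only transformer.  The widths and depths are made-up,
illustrative values, not any specific model's, and the context window shows up
in the weight count only through learned positional embeddings (mostly it
costs compute and memory rather than parameters):

    # Rough parameter count for a vanilla decoder-only transformer.
    vocab_size = 50_000   # distinct tokens to discriminate
    d_model    = 4_096    # width of each embedding vector
    n_layers   = 32       # stacked transformer blocks
    n_ctx      = 4_096    # context window (learned positions, if used)

    embeddings = vocab_size * d_model       # token embedding table
    positions  = n_ctx * d_model            # learned positional embeddings
    per_layer  = 12 * d_model ** 2          # ~4 d^2 attention + ~8 d^2 MLP
    total      = embeddings + positions + n_layers * per_layer

    print(f"{total / 1e9:.1f} B parameters")  # ~6.7 B with these numbers

Scaling d_model and n_layers up is what carries such a model from billions
into hundreds of billions of parameters.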

From: Friam  On Behalf Of Tom Johnson
Sent: Saturday, October 7, 2023 2:38 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Language Model Understanding

Thanks for passing this along, Steve. I wish, however, the authors of this 
short piece would have included a definition of, in their usage, "Large 
Language Models" and "Small Language Models."  Perhaps I can find those in the 
larger paper.
Tom

On Sat, Oct 7, 2023 at 12:34 PM Steve Smith <sasm...@swcp.com> wrote:

This popular-press article came through my Google News feed recently which I 
thought might be useful to the Journalists/English-Majors on the list to help 
understand how LLMs work, etc.   When I read it in detail (forwarded from my TS 
(TinyScreenPhone) on my LS (Large Screen Laptop)) I found it a bit more 
detailed and technical than I'd expected, but nevertheless rewarding and 
possibly offering some traction to Journalism/English majors as well as those 
with a larger investment in the CS/Math implied.

Decomposing Language Models into Understandable Components
<https://www.anthropic.com/index/decomposing-language-models-into-understandable-components>

and the (more) technical paper behind the article

https://transformer-circuits.pub/2023/monosemantic-features/index.html
Despite having sent a few dogs into vaguely similar scuffles in my careen(r):
Faceted Ontologies for Pre Incident Indicator Analysis 
<https://apps.dtic.mil/sti/tr/pdf/ADA588086.pdf>
SpindleViz <https://www.ehu.eus/ccwintco/uploads/c/c6/HAIS2010_925.pdf>
...

... I admit to finding this both intriguing and well over my head on casual 
inspection...  the (metaphorical?) keywords that drew me in  most strongly 
included Superposition and Thought Vectors, though they are (nod to Glen) 
probably riddled (heaped, overflowing, bursting, bloated ... )  with excess 
meaning.

https://gabgoh.github.io/ThoughtVectors/

This leads me (surprise!) to an open ended discursive series of thoughts 
probably better left for a separate posting (probably rendered in a 
semasiographic language like Heptapod 
B <https://en.wikipedia.org/wiki/Heptapod_languages#Orthography>).



- Steve
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Language Model Understanding

2023-10-07 Thread Tom Johnson
Thanks for passing this along, Steve. I wish, however, the authors of this
short piece would have included a definition of, in their usage, "Large
Language Models" and "Small Language Models."  Perhaps I can find those in
the larger paper.
Tom

On Sat, Oct 7, 2023 at 12:34 PM Steve Smith  wrote:

> This popular-press article came through my Google News feed recently which
> I thought might be useful to the Journalists/English-Majors on the list to
> help understand how LLMs work, etc.   When I read it in detail (forwarded
> from my TS (TinyScreenPhone) on my LS (Large Screen Laptop)) I found it a
> bit more detailed and technical than I'd expected, but nevertheless
> rewarding and possibly offering some traction to Journalism/English majors
> as well as those with a larger investment in the CS/Math implied.
>
> Decomposing Language Models into Understandable Components
>
> <https://www.anthropic.com/index/decomposing-language-models-into-understandable-components>
>
> and the (more) technical paper behind the article
>
> https://transformer-circuits.pub/2023/monosemantic-features/index.html
>
> Despite having sent a few dogs into vaguely similar scuffles in my
> careen(r):
>
> Faceted Ontologies for Pre Incident Indicator Analysis
> <https://apps.dtic.mil/sti/tr/pdf/ADA588086.pdf>
> SpindleViz <https://www.ehu.eus/ccwintco/uploads/c/c6/HAIS2010_925.pdf>
> ...
>
> ... I admit to finding this both intriguing and well over my head on
> casual inspection...  the (metaphorical?) keywords that drew me in  most
> strongly included *Superposition* and *Thought Vectors*, though they are
> (nod to Glen) probably riddled (heaped, overflowing, bursting, bloated ...
> )  with excess meaning.
>
> https://gabgoh.github.io/ThoughtVectors/
>
> This leads me (surprise!) to an open ended discursive series of thoughts
> probably better left for a separate posting (probably rendered in a
> semasiographic language like Heptapod B
> <https://en.wikipedia.org/wiki/Heptapod_languages#Orthography>).
>
> 
>
> - Steve
> -. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
> FRIAM Applied Complexity Group listserv
> Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom
> https://bit.ly/virtualfriam
> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:  5/2017 thru present
> https://redfish.com/pipermail/friam_redfish.com/
>   1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


[FRIAM] Language Model Understanding

2023-10-07 Thread Steve Smith
This popular-press article came through my Google News feed recently 
which I thought might be useful to the Journalists/English-Majors on the 
list to help understand how LLMs work, etc.   When I read it in detail 
(forwarded from my TS (TinyScreenPhone) on my LS (Large Screen Laptop)) 
I found it a bit more detailed and technical than I'd expected, but 
nevertheless rewarding and possibly offering some traction to 
Journalism/English majors as well as those with a larger investment in 
the CS/Math implied.


   Decomposing Language Models into Understandable Components
   <https://www.anthropic.com/index/decomposing-language-models-into-understandable-components>


and the (more) technical paper behind the article

   https://transformer-circuits.pub/2023/monosemantic-features/index.html
   

Despite having sent a few dogs into vaguely similar scuffles in my 
careen(r):


   Faceted Ontologies for Pre Incident Indicator Analysis
   <https://apps.dtic.mil/sti/tr/pdf/ADA588086.pdf>
   SpindleViz <https://www.ehu.eus/ccwintco/uploads/c/c6/HAIS2010_925.pdf>
   ...

... I admit to finding this both intriguing and well over my head on 
casual inspection...  the (metaphorical?) keywords that drew me in  most 
strongly included /Superposition/ and /Thought Vectors/, though they are 
(nod to Glen) probably riddled (heaped, overflowing, bursting, bloated 
... )  with excess meaning.


https://gabgoh.github.io/ThoughtVectors/
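
The core move in the transformer-circuits paper, as I read it: record a
model's MLP activations, then train a sparse autoencoder (dictionary
learning) so each activation vector decomposes into a few, hopefully
monosemantic, features.  A toy PyTorch sketch of that setup, with placeholder
sizes and coefficients and random data standing in for recorded activations:

    import torch
    import torch.nn as nn

    d_act, n_feat = 512, 4096  # activation width, dictionary size (made up)

    class SparseAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Linear(d_act, n_feat)
            self.dec = nn.Linear(n_feat, d_act)

        def forward(self, x):
            f = torch.relu(self.enc(x))  # sparse, non-negative features
            return self.dec(f), f

    model = SparseAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    l1_coeff = 1e-3  # pressure toward sparsity in the features

    acts = torch.randn(1024, d_act)  # stand-in for harvested activations
    recon, feats = model(acts)
    loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad()
    loss.backward()  # one illustrative training step
    opt.step()

In the real thing this runs over activations harvested from many tokens, and
the payoff is inspecting which inputs switch each learned feature on.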

This leads me (surprise!) to an open ended discursive series of thoughts 
probably better left for a separate posting (probably rendered in a 
semasiographic language like Heptapod B 
<https://en.wikipedia.org/wiki/Heptapod_languages#Orthography>).




- Steve
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/