Oops, found the roundtable discussion appendix and missed Wendy Brown's
actual Tanner lecture:

Politics and Knowledge in Nihilistic Times:  Thinking with Max Weber -
  https://www.youtube.com/watch?v=nG52tEGghTA - “Politics”
  https://www.youtube.com/watch?v=qXm_0DxoS_k - “Knowledge”

-- rec --


On Sun, Oct 8, 2023 at 1:27 PM Roger Critchlow <r...@elf.org> wrote:

> The book sounds intriguing, but it's not in my lending library.
>
> Happily, there are lots of Tanner Lecture videos available online:
>     https://www.youtube.com/watch?v=yI-MCqeCILs Wendy Brown from 2019
>     https://www.youtube.com/watch?v=NJprCvmrpoY Kim Stanley Robinson from
> March 2023
>
> -- rec --
>
>
> On Sun, Oct 8, 2023 at 11:56 AM David Eric Smith <desm...@santafe.edu>
> wrote:
>
>> It’s a good point/counterpoint, Pieter.
>>
>> I don’t want to stand against using new tools to relieve a scarcity
>> constraint, and surely education has been a severe one.
>>
>> I have what appears to be an enormous suspicion of a bait and switch, in
>> which suddenly we find ourselves in a world where simultaneously:
>>
>> 1. Sal Khan and Daphne Koller own everything, and nobody else owns
>> anything; and
>>
>> 2. People have no skills (say I, as I use the scary-good google translate
>> to take enormous pressure, though less shame, off me for not learning a
>> language I should have learned by now).
>>
>> Or, as a popular bumper sticker in Santa Fe used to say post-2016: Where
>> are we going?  And what am I doing in this handbasket?
>>
>> I should mention, on this point, that part of my cast of mind was shaped
>> by an article I read some time ago in some education chronicle, about how
>> the command-and-control interface to the computer was taking up more and
>> more space in classrooms, and kids were starting to show severe systematic
>> deficits in particular areas.  They could command anything they wanted into
>> existence on the computer, and failed to learn that the rest of the world
>> of people doesn’t work that way.  Negotiating skills were mentioned in the
>> article.  My description of a dystopian non-education was meant to suggest
>> that for several students to have to hear exactly the same thing would
>> require them to come to terms with the fact that it will fit them
>> differently, and they should become used to handling that too.
>>
>> It is interesting that I could never use a chatbot as a first stage of
>> writing something, as the optimistic chatbots suggest everybody will do,
>> and which I can believe for some people will be a good fit.  For me, I
>> can’t even take looking at a page of my own writing when I am in a tangle,
>> as a start to getting out of it.  All that pre-formed text is in the way of
>> whatever part of my brain tries to bring into focus what I should do next.
>> I know that chatbots and writing is not your point about education, and
>> don’t mean to derail those points; the chatbots-and-journalism theme was
>> one of the things from the original thread that put me onto the education
>> sidetrack.  What (at least in my imagination) they have in common is that
>> there are certain mental operations that I only do when I have “space”, and
>> many of these tools seem designed to make sure nobody ever has to leave any
>> “space” unfilled by some external stimulus, ever again.  I worry about it
>> as a development-weakening prosthetic, though I understand that the
>> argument for that would need to be made case by case, versus the likely
>> (and I believe you, in many cases demonstrated) economies and advantages of
>> personalization.
>>
>> Fortunately, I am neither designing anything nor voting on anything just
>> now, so there is no danger I will hold anybody else back who is trying to
>> make something work.
>>
>> Eric
>>
>>
>> On Oct 8, 2023, at 7:45 AM, Pieter Steenekamp <piet...@randcontrols.co.za>
>> wrote:
>>
>> I'd like to reflect on how LLMs and AI could disrupt education.
>>
>> My paradigm is not to think in terms of the current educational model,
>> but to throw out all preconceived ideas and to design it from scratch.
>> There are obviously zillions of ways to do it and a good model will need to
>> evolve, but the following are merely two examples to trigger new and novel
>> approaches, one from Sal Khan and one from my own experience.
>>
>> 1 Sal Khan's approach of using AI to personalize education is based on the
>> idea that every student can benefit from a customized learning path that
>> adapts to their needs and preferences. He argues that AI can help scale the
>> benefits of one-to-one tutoring, which has been shown to improve students'
>> performance significantly. He also believes that AI can enhance the role of
>> teachers by providing them with useful feedback, insights, and assistance.
>> He envisions a future where every student has access to an AI-powered
>> personal tutor and every teacher has an AI teaching assistant.
>>
>> Sal Khan is the founder and CEO of Khan Academy, a nonprofit education
>> platform that offers free online courses, videos, and exercises for
>> learners of all ages. He has been a pioneer in using technology to
>> democratize education and make it accessible to anyone, anywhere. He has
>> also been exploring the potential of AI to transform education for the
>> better. In his 2023 TED talk¹, he demonstrated Khanmigo, an AI-powered
>> guide that can help students with various subjects, skills, and tasks.
>> Khanmigo can detect students' mistakes and misconceptions, provide
>> effective feedback and explanations, encourage Socratic dialogue and
>> debate, and assist with writing and storytelling. Khanmigo also acts as a
>> teaching assistant for teachers, helping them with lesson planning,
>> grading, and progress reports.
>>
>> Sal Khan's approach of using AI to personalize education is based on his
>> vision of creating a more humanistic and learner-centered education system.
>> He believes that AI can empower students to learn at their own pace,
>> explore their interests, and develop their creativity. He also believes
>> that AI can support teachers to focus more on the social and emotional
>> aspects of teaching, such as building relationships, fostering curiosity,
>> and inspiring passion. He hopes that by using AI to personalize education,
>> we can create a more equitable and inclusive learning environment for
>> everyone.
>>
>> Source: Conversation with Bing, 2023/10/08
>> (1) https://bing.com/search?q=Sal+Khan+AI+education
>> (2) Sal Khan’s 2023 TED Talk: AI in the classroom can transform education.
>> https://blog.khanacademy.org/sal-khans-2023-ted-talk-ai-in-the-classroom-can-transform-education/
>> (3) Sal Khan: How AI could save (not destroy) education | TED Talk.
>> https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education/c
>> (4) The Future Of AI And Education: Insights From Sal Khan.
>> https://www.openbusinesscouncil.org/the-future-of-ai-and-education-insights-from-sal-khan-in-conversation-with-bill-gates
>> (5) https://www.ted.com/podcasts/rethinking-with-adam-grant/khan-academy-founder-sal-khan-on-ai-and-the-future-of-education-transcript
>> (6) https://podcasts.apple.com/us/podcast/how-ai-could-save-not-destroy-education-sal-khan/id160904630?i=1000611325934
>> (7) en.wikipedia.org. https://en.wikipedia.org/wiki/Sal_Khan
>>
>> 2 In South Africa, we are currently facing a significant education
>> crisis. While there are some excellent private schools, they come with a
>> high price tag. On the other hand, the state of public schools is a
>> complete disaster.
>>
>> During the COVID-19 lockdowns, I had the opportunity to work with
>> children ranging from 6 to 16 years old, and I decided to adopt a radically
>> different educational approach. This approach was inspired by Sugata
>> Mitra's belief that teachers may not be necessary if the learning
>> environment is well-managed (as elaborated below).
>>
>> As part of our informal schooling setup, we incorporated a daily "speech"
>> session at 11 am. This session was not limited to just speaking; the
>> children were also required to submit handwritten copies of their speeches
>> and present them to the group. Following each child's presentation, we
>> engaged in a Q&A and discussion session.
>>
>> Even though we didn't have access to ChatGPT at that time, if I were to
>> recreate this experience today, I would encourage the children to utilize
>> resources like ChatGPT and other freely available AI tools on the internet.
>> This approach aligns with preparing them for the demands of today's
>> workplace.
>>
>> Children possess an innate desire to learn, seek challenges, and take
>> pride in their accomplishments. Providing them with maximum freedom within
>> defined boundaries is an excellent approach to help them flourish.
>>
>> Reference: info on Sugata Mitra from the internet
>> The School in the Cloud is a concept developed by Sugata Mitra, an
>> educational researcher and TED Prize winner. He proposes that children can
>> learn anything by themselves, with the help of the Internet and their
>> peers, without the need for formal teaching or supervision. He calls this
>> method Self-Organized Learning Environments (SOLE)¹.
>>
>> Mitra conducted several experiments to test his idea, such as the "Hole
>> in the Wall" project, where he installed computers in slums and villages in
>> India and observed how children learned to use them without any guidance¹.
>> He also created learning labs in different locations, where children could
>> access online resources and mentors, called the "Granny Cloud", who
>> supported their curiosity and exploration².
>>
>> Mitra's vision is to transform the way children learn, by giving them
>> more autonomy, creativity and collaboration. He believes that this approach
>> can prepare them for the future, where memorization of facts is less
>> important than critical thinking and problem-solving. He also invites
>> anyone, anywhere, to join his global experiment in self-organized learning,
>> by creating their own SOLEs and sharing their discoveries².
>>
>> The School in the Cloud is also the subject of a documentary by Jerry
>> Rothwell, which follows Mitra's three-year experiment and the impact of his
>> work on children and educators around the world³. You can also read more
>> about Mitra's ideas and experiences in his book "The School in the Cloud:
>> The Emerging Future of Learning”⁴.
>>
>> Source: Conversation with Bing, 2023/10/08
>> (1) Sugata Mitra: Build a School in the Cloud | TED Talk.
>> https://www.ted.com/talks/sugata_mitra_build_a_school_in_the_cloud
>> (2) Sugata Mitra creates a School in the Cloud | Prize-winning wishes | TED ...
>> https://www.ted.com/participate/ted-prize/prize-winning-wishes/school-in-the-cloud
>> (3) The School in the Cloud - United Nations Western Europe.
>> https://unric.org/en/the-school-in-the-cloud/
>> (4) The School in the Cloud by Sugata Mitra (ebook) - eBooks.com.
>> https://www.ebooks.com/en-us/book/209765777/the-school-in-the-cloud/sugata-mitra/
>> (5) en.wikipedia.org. https://en.wikipedia.org/wiki/Sugata_Mitra
>>
>> On Sun, 8 Oct 2023 at 11:20, David Eric Smith <desm...@santafe.edu>
>> wrote:
>>
>>> In reading about the role of LLMs in personalization, I just realized
>>> how to further destroy education.  As in many of these things, one starts
>>> with an insight that is basically good, and then hands it to an SV company
>>> to be turned into something profoundly nihilistic.
>>>
>>> The lecturer sits at “the front” of wherever this happens.  The body of
>>> students sit in something that makes them manifestly a group.  But what the
>>> lecturer says isn’t what they hear.  The lecturer speaks into a computer,
>>> and the students all wear headsets (probably full VR immersions).  An
>>> intervening LLM, which has been trained on exchanges with each student and
>>> with the lecturer, replaces whatever the lecturer says with a
>>> “personalized” version of the “content” which is then fed to each student.
>>>
>>> No two people will have heard quite the same version of anything.  I
>>> guess, if they ever wanted to discuss the material with each other, another
>>> LLM could be recruited to stand between them to convert what either of them
>>> said into a different stream for the other to hear.  That can be an
>>> additional service for being “in the class”, which one can purchase for a
>>> fee, like the fee to get on an airplane earlier in the line than some other
>>> passenger who paid a lower fee.
>>>
>>> The only thing I find weird is that I haven’t already heard this
>>> suggested 1000 times even in the popular press.  It seems like the first
>>> idea that would occur to any SV engineer, or (though probably requiring a
>>> cognitive exoskeleton) any university administrator.
>>>
>>> Eric
>>>
>>> Oh yeah, p.s.:  The insight that wasn’t evil is that this is the
>>> difference between lectures and books or other broadcast media.  In a real
>>> lecture, in the old world where people spent time with each other, the
>>> teacher gets to know the students enough to learn what they can use, and
>>> renders the material with empathy for that.  I’ll leave the reader to put
>>> together which parts of the resulting exchange get left out of my LLM
>>> implementation.
>>>
>>> p.p.s.  I should mention, has any of you read this book?
>>>
>>> Nihilistic Times — Wendy Brown
>>> https://www.hup.harvard.edu/catalog.php?isbn=9780674279384
>>>
>>> I got to spend a little time in the orbit of this woman last year
>>> (didn’t really “meet”, since I was just a quiet sitter-in in a group’s
>>> weekly discussion), and she seems to me like just a fine mind.  The book is
>>> a chautauqua of sorts, not aspiring to a destination per se.  But it is a
>>> good combination of empathy, analytic clarity, and erudition.  Bears on how
>>> many things make up the machinery of nihilism,
>>> extending well beyond relations I recognized as part of an integration,
>>> though mentioning many things I rail against in daily life.
>>>
>>>
>>>
>>>
>>> On Oct 7, 2023, at 8:36 PM, Steve Smith <sasm...@swcp.com> wrote:
>>>
>>> Tom -
>>>
>>> This is probably a little less relevant to the business of journalism
>>> than you deserve but this is a pretty good (IMO) fleshing out of some of
>>> the technical details of (L)LMs which might fill in some blanks:
>>>
>>> https://developers.google.com/machine-learning/resources/intro-llms
>>>
>>> There must be (by now) some (yet) more relevant resources for
>>> Journalists (for example), not only to de-Jargonize a little but also to
>>> make it (yet more) relevant to your domain.
>>>
>>> Here, straight from the *Horse's Mouth* (GPT 3.5) itself (I leave it
>>> to you to tell us how well these answers work for you?).  It is probably
>>> worth engaging with chatGPT yourself and asking more and more focused
>>> questions...   a good self-study in "prompt engineering"!
>>>
>>> http://chat.openai.com
>>>
>>> ***
>>>
>>> *Introduction to Modern Language Models for Veteran Journalists*
>>>
>>> *In today's digital age, the field of journalism is undergoing a
>>> significant transformation, largely driven by advancements in technology.
>>> One key innovation that has had a profound impact on the industry is the
>>> development of large language models, often abbreviated as LLMs. While you
>>> may not be a tech expert, understanding the basics of LLMs can be
>>> invaluable for journalists. So, let's break it down in simple terms.*
>>>
>>> ***What Are Language Models?***
>>>
>>> *At its core, a language model is like a super-smart digital writer.
>>> It's a computer program trained to understand and generate human language.
>>> Think of it as a tool that helps computers understand and use words,
>>> sentences, and paragraphs just like humans do.*
>>>
>>> ***The Power of Large Language Models (LLMs)***
>>>
>>> *Now, let's talk about Large Language Models (LLMs). These are the big
>>> players in the world of language models. They're like language
>>> supercomputers because they're trained on massive amounts of text from the
>>> internet. This means they've learned a lot about how people write and talk.*
>>>
>>> ***How LLMs Help Journalists***
>>>
>>> *1. **Content Generation**: LLMs can generate written content, from news
>>> articles to summaries and even creative pieces. Journalists can use them to
>>> draft stories faster and get ideas flowing.*
>>>
>>> *2. **Fact-Checking**: They're handy for fact-checking, helping you
>>> quickly verify information by scanning a vast database of knowledge.*
>>>
>>> *3. **Summarization**: LLMs can take long, complex documents and create
>>> concise summaries, saving you time on research (a minimal code sketch
>>> follows after this list).*
>>>
>>> *4. **Translation**: They can also translate text between languages,
>>> which can be incredibly useful for international reporting.*
>>>
>>> *5. **Personalization**: Some news websites use LLMs to tailor content
>>> to individual readers, providing a more personalized news experience.*
>>>
>>> *6. **Social Media Analysis**: LLMs can sift through social media data,
>>> helping journalists understand public sentiment and trends.*
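>>>
>>> As a concrete illustration of what such a call can look like in practice,
>>> here is a minimal summarization sketch.  It assumes the openai Python
>>> package's 2023-era (pre-1.0) interface; the model name, prompt, and
>>> document text are placeholders, and a real newsroom workflow would add
>>> error handling and editorial review of the output.
>>>
>>> # Minimal sketch: asking a chat model to summarize a document.
>>> import os
>>> import openai
>>>
>>> openai.api_key = os.environ["OPENAI_API_KEY"]  # keep keys out of source code
>>>
>>> document = "...full text of a long report or transcript..."
>>> resp = openai.ChatCompletion.create(
>>>     model="gpt-3.5-turbo",
>>>     messages=[{"role": "user",
>>>                "content": "Summarize this in three bullet points:\n" + document}],
>>> )
>>> print(resp["choices"][0]["message"]["content"])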
>>>
>>> ***Potential Concerns***
>>>
>>> *While LLMs are incredibly powerful, there are some concerns to keep in
>>> mind. They might unintentionally spread misinformation if not used
>>> carefully, and they can't replace human judgment or critical thinking.*
>>>
>>> ***Conclusion***
>>>
>>> *In a nutshell, large language models (LLMs) are like digital writing
>>> assistants that can help journalists write better, research faster, and
>>> adapt to the changing landscape of digital journalism. While they're
>>> powerful tools, it's important to use them responsibly, keeping in mind
>>> their strengths and limitations, as well as the ethical considerations of
>>> automated content creation in journalism.*
>>>
>>> Can you elaborate on some of the technical jargon?
>>>
>>> *Certainly! Here is some common technical jargon related to Large
>>> Language Models (LLMs) explained in journalist-friendly terms:*
>>>
>>> *1. **Pretraining**: This is the initial phase where the LLM learns from
>>> a massive amount of text data on the internet. Think of it as the model
>>> absorbing a vast library of information to understand language better.*
>>>
>>> *2. **Fine-tuning**: After pretraining, LLMs undergo fine-tuning to
>>> specialize them for specific tasks. It's like giving the model extra
>>> training in a particular subject, like finance or healthcare, to make it
>>> more useful for specialized reporting.*
>>>
>>> *3. **Token**: In LLMs, text is broken down into smaller units called
>>> tokens. A token can be as short as one character or as long as one word.
>>> Knowing the number of tokens is important because it affects how much text
>>> the model can handle at once (a short token-counting sketch follows after
>>> this list).*
>>>
>>> *4. **Inference**: This term refers to the process of using a trained
>>> LLM to generate text or make predictions. Journalists use inference to
>>> create content, answer questions, or analyze data.*
>>>
>>> *5. **Prompt**: A prompt is the input you provide to an LLM to get a
>>> response. It's like giving the model a question or a starting point for it
>>> to generate text. Journalists often craft prompts to get specific
>>> information or insights.*
>>>
>>> *6. **Bias**: LLMs can sometimes produce biased or unfair content
>>> because they've learned from biased internet data. Journalists should be
>>> aware of this and critically evaluate the model's outputs for any biases.*
>>>
>>> *7. **Algorithm**: This is a fancy word for a set of rules or
>>> instructions that an LLM follows to perform tasks. Algorithms guide how the
>>> model generates text and makes decisions.*
>>>
>>> *8. **NLP (Natural Language Processing)**: NLP is the field of study
>>> that focuses on teaching computers to understand and generate human
>>> language. LLMs are a product of NLP research.*
>>>
>>> *9. **ML (Machine Learning)**: LLMs are a type of machine learning
>>> model. Machine learning is a way to make computers learn from data and
>>> improve their performance over time.*
>>>
>>> *10. **Training Data**: This is the text data used to teach an LLM. It's
>>> like the model's textbooks, and the quality and diversity of this data
>>> influence how well the LLM performs.*
>>>
>>> *11. **Bias Mitigation**: This is the process of reducing bias in LLMs.
>>> It's important in journalism to ensure that the information provided by the
>>> model is fair and unbiased.*
>>>
>>> *12. **Ethical AI**: This refers to the practice of using LLMs
>>> responsibly and ethically. Journalists should consider the ethical
>>> implications of using these tools, especially when it comes to issues like
>>> privacy and misinformation.*
>>>
>>> *13. **API (Application Programming Interface)**: Some LLMs offer APIs
>>> that allow journalists to interact with the model through code, enabling
>>> custom applications and integrations.*
>>>
>>> *14. **Fine-Tuned Models**: These are LLMs that have been trained for
>>> specific tasks, like summarization, translation, or question-answering.
>>> They are more specialized and can be particularly useful for journalism.*
>>>
>>> *Understanding these terms can help journalists effectively leverage
>>> LLMs in their work while also being mindful of their limitations and
>>> ethical considerations.*
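>>>
>>> To make the Token and Prompt entries above a bit more concrete, here is a
>>> minimal token-counting sketch.  It assumes the tiktoken package (the
>>> tokenizer behind several OpenAI models); other models tokenize
>>> differently, so the counts are illustrative rather than universal.
>>>
>>> # Minimal sketch: counting the tokens in a prompt (pip install tiktoken).
>>> import tiktoken
>>>
>>> enc = tiktoken.get_encoding("cl100k_base")
>>> prompt = "Summarize the attached city council minutes in three bullet points."
>>> tokens = enc.encode(prompt)
>>>
>>> print(len(prompt.split()), "words ->", len(tokens), "tokens")
>>> # A model's context window is measured in these tokens, which is why very
>>> # long documents may need to be split or summarized in stages.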
>>>
>>>
>>> - Steve
>>>
>>> On 10/7/23 5:13 PM, Tom Johnson wrote:
>>>
>>> Thanks. Those are impressive numbers.
>>> Tom
>>>
>>> On Sat, Oct 7, 2023 at 4:11 PM Marcus Daniels <mar...@snoutfarm.com>
>>> wrote:
>>>
>>>> The “large” refers to the number of parameters used.  A smaller large
>>>> language model – a deep neural net -- starts at about 3 billion parameters, but
>>>> larger ones like Claude 2 (the latest large language model of the company
>>>> that wrote the paper Steve mentioned) have more than 130 billion
>>>> parameters.   Amazingly, it is possible using (rooms of) GPUs and other
>>>> accelerators to optimize in a space of this size.   The billions of
>>>> parameters come from the vocabulary size – the number of tokens that need
>>>> to be discriminated, the many layers of transformers that are needed to
>>>> capture the complexity of human and non-human languages (like DNA), and the
>>>> context window size – how many paragraphs or pages the model is trained on
>>>> at a time.   A small language model might be suitable for understanding the
>>>> geometries of chemicals, say.
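>>>>
>>>> As a rough back-of-envelope sketch of where those billions come from, the
>>>> count below assumes a plain GPT-style transformer; the vocabulary and
>>>> layer sizes are made up for illustration and are not the actual
>>>> configuration of Claude 2 or any other particular model.
>>>>
>>>> # Approximate parameter count for a GPT-style transformer (illustrative only).
>>>> def transformer_params(vocab_size, d_model, n_layers):
>>>>     embedding = vocab_size * d_model      # token embedding table
>>>>     attention = 4 * d_model * d_model     # Q, K, V, and output projections
>>>>     mlp = 8 * d_model * d_model           # feed-forward block with ~4x expansion
>>>>     return embedding + n_layers * (attention + mlp)
>>>>
>>>> # A "smaller" large model vs. a much larger one:
>>>> print(transformer_params(50_000, 4096, 16))    # ~3.4 billion
>>>> print(transformer_params(100_000, 12288, 80))  # ~146 billion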
>>>>
>>>>
>>>> *From:* Friam <friam-boun...@redfish.com> *On Behalf Of *Tom Johnson
>>>> *Sent:* Saturday, October 7, 2023 2:38 PM
>>>> *To:* The Friday Morning Applied Complexity Coffee Group <
>>>> friam@redfish.com>
>>>> *Subject:* Re: [FRIAM] Language Model Understanding
>>>>
>>>>
>>>> Thanks for passing this along, Steve. I wish, however, the authors of
>>>> this short piece would have included a definition of, in their usage,
>>>> "Large Language Models" and "Small Language Models."  Perhaps I can find
>>>> those in the larger paper.
>>>>
>>>> Tom
>>>>
>>>>
>>>> On Sat, Oct 7, 2023 at 12:34 PM Steve Smith <sasm...@swcp.com> wrote:
>>>>
>>>> This popular-press article came through my Google News feed recently
>>>> which I thought might be useful to the Journalists/English-Majors on the
>>>> list to help understand how LLMs work, etc.   When I read it in detail
>>>> (forwarded from my TS (TinyScreenPhone) on my LS (Large Screen Laptop)) I
>>>> found it a bit more detailed and technical than I'd expected, but
>>>> nevertheless rewarding and possibly offering some traction to
>>>> Journalism/English majors as well as those with a larger investment in the
>>>> CS/Math implied.
>>>>
>>>> Decomposing Language Models into Understandable Components
>>>>
>>>> https://www.anthropic.com/index/decomposing-language-models-into-understandable-components
>>>>
>>>> and the (more) technical paper behind the article
>>>>
>>>> https://transformer-circuits.pub/2023/monosemantic-features/index.html
>>>>
>>>>
>>>> Despite having sent a few dogs into vaguely similar scuffles in my
>>>> careen(r):
>>>>
>>>> Faceted Ontologies for Pre Incident Indicator Analysis
>>>> https://apps.dtic.mil/sti/tr/pdf/ADA588086.pdf
>>>> SpindleViz
>>>> https://www.ehu.eus/ccwintco/uploads/c/c6/HAIS2010_925.pdf
>>>> ...
>>>>
>>>> ... I admit to finding this both intriguing and well over my head on
>>>> casual inspection...  the (metaphorical?) keywords that drew me in  most
>>>> strongly included *Superposition* and *Thought Vectors*, though they
>>>> are (nod to Glen) probably riddled (heaped, overflowing, bursting, bloated
>>>> ... )  with excess meaning.
>>>>
>>>> https://gabgoh.github.io/ThoughtVectors/
>>>>
>>>> This leads me (surprise!) to an open ended discursive series of
>>>> thoughts probably better left for a separate posting (probably rendered in
>>>> a semasiographic language like Heptapod B
>>>> <https://en.wikipedia.org/wiki/Heptapod_languages#Orthography>).
>>>>
>>>> <must... stop... now... >
>>>>
>>>> - Steve
>>>>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
