Hello all,

Just a quick reminder that our showcase on *AI and Communities* will
begin in just over 30 minutes. You can watch it at
https://www.youtube.com/live/qW5IQJv84HY. More information about the
talks is included in the email below.

Best,
Kinneret

On Mon, Feb 23, 2026 at 3:23 PM Kinneret Gordon <[email protected]>
wrote:

> Hi everyone,
>
> The February 2026 Research Showcase will be live-streamed this Wednesday,
> February 25, at 9:30 AM PT / 17:30 UTC. Find your local time here
> <https://zonestamp.toolforge.org/1772040600>. Our theme this month is *AI
> and Communities*.
>
> *We invite you to watch via the YouTube stream:
> https://www.youtube.com/live/qW5IQJv84HY.* As always, you can join the
> conversation in the YouTube chat as soon as the showcase goes live.
>
> This month, we will have two presentations:
>
> *LLMs in Wikipedia: Investigating How LLMs Impact Participation in
> Knowledge Communities*
> By *Moyan Zhou (University of Minnesota)*
>
> Large
> language models (LLMs) are reshaping knowledge production as community
> members increasingly incorporate them into their contribution workflows.
> However, participating in knowledge communities involves more than just
> contributing content; it is also a deeply social process shaped by
> members' level of expertise. While communities must carefully consider
> appropriate and responsible LLM integration, the absence of concrete norms
> has left individual editors to experiment and navigate LLM use on their
> own. Understanding how LLMs influence community participation across
> expertise levels is therefore critical in shaping future norms and
> supporting effective adoption. To address this gap, we investigated
> Wikipedia, one of the largest knowledge production communities, to
> understand participation in three dimensions: 1) how LLMs influence the
> ways editors gather knowledge, 2) how editors leverage strategies to align
> LLM outputs with community norms, and 3) how other editors in the community
> respond to LLM-assisted contributions. Through interviews with 16 Wikipedia
> editors with different levels of expertise who had used LLMs for their
> edits, we revealed a participation gap mediated by expertise in adopting
> LLMs in knowledge contributions across knowledge gathering, alignment with
> community norms, and peer responses. Based on these findings, we challenge
> existing models of novice editors' involvement and propose design
> implications for LLMs that support community engagement, highlighting
> opportunities for LLMs to sustain mentorship, knowledge transmission, and
> legitimacy building through scaffolding and feedback, process
> documentation, and LLM disclosure by good-faith editors.
>
> *AI Didn't Start the Fire: Examining the Stack Exchange Moderator and
> Contributor Strike*
> By *Yiwei Wu (University of Texas at Austin)*
>
> Online communities and their host
> platforms are mutually dependent yet conflict-prone. When platform policies
> clash with community values, communities have resisted through strikes,
> blackouts, and even migration to other platforms. Through such collective
> actions, communities have sometimes won concessions, but these have
> frequently proved to be temporary. Although previous research has
> investigated strike events and migration chains, the processes by which
> community-platform conflict unfolds remain obscure. How do
> community-platform relationships deteriorate? How do communities organize
> collective action? How do the participants proceed in the aftermath? We
> investigate a conflict between the Stack Exchange platform and its
> community that occurred in 2023 around an emergency arising from the
> release of large
> language models (LLMs). Based on a qualitative thematic analysis of 2,070
> messages from Meta Stack Exchange and 14 interviews with community members,
> we reveal how the 2023 conflict was preceded by a long-term deterioration
> in the community-platform relationship, driven in particular by the
> platform's disregard for the community's highly valued participatory role
> in governance. Moreover, the platform's policy response to LLMs aggravated
> the community's sense of crisis, triggering strike mobilization. We analyze
> how the mobilization was coordinated through a tiered leadership and
> communication structure, as well as how community members pivoted in the
> aftermath. Building on recent theoretical scholarship in social computing,
> we use Hirschman's exit, voice, and loyalty framework to theorize the
> challenges of community-platform relations evinced in our data. Finally, we
> recommend ways that platforms and communities can institute durable and
> effective participatory governance.
>
> Looking forward to seeing many of you,
> Kinneret
> --
>
> Kinneret Gordon
>
> Lead Research Community Officer
>
> Wikimedia Foundation <https://wikimediafoundation.org/>
>
> *Learn more about Wikimedia Research <https://research.wikimedia.org/>*
>
_______________________________________________
Wikimedia-l mailing list -- [email protected], guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines and 
https://meta.wikimedia.org/wiki/Wikimedia-l
Public archives at 
https://lists.wikimedia.org/hyperkitty/list/[email protected]/message/VFUP7KLYK7K3Q3EBKNKPF3JD5KLIP4RY/
To unsubscribe send an email to [email protected]