* Lars Noodén via libreplanet-discuss <libreplanet-discuss@libreplanet.org> 
[2025-07-10 21:39]:
> On 7/7/25 04:17, Akira Urushibata wrote:
> > Maybe I'm not looking hard enough.  If anybody can point out to good
> > examples of AI usage in free software development, I'd like to know.
> 
> Hi, Akira,
> 
> Yes, LLM-based output has been affecting free software development for a
> while.  Daniel Stenberg of cURL fame has written about its impact several
> times with clear examples.  Here are two posts about it:
> 
> "The I in LLM stands for intelligence"
>  https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-for-intelligence/
> 
> "Curl takes action against time-wasting AI bug reports"
>  https://www.theregister.com/2025/05/07/curl_ai_bug_reports/

Certainly, I acknowledge the report from the curl team. But let us not
generalize: I cannot confirm your statement that "LLM-based output has
been affecting free software development for a while".

Why generalize?

It is not the output itself that is "affecting" the curl team in their
development, but irresponsible reporters.

That is an isolated case; there is absolutely no need for this
generalization. It is not proven.

Text-generating systems can produce garbage, but that depends on the
human, not the LLM, so the blame should fall on uneducated LLM
operators, not the LLM itself.

> Now if you are looking for /positive/ examples of the effects of AI slop in
> free software projects, I think your hunch is correct and there may not be
> any.

I was thinking of using this word:

The noun gibberish has 1 sense (no senses from tagged texts)
1. gibberish, gibber -- (unintelligible talking)

But instead, let me give you some straightforward examples disproving
your statement that there may not be any positive examples of new
technologies like large language models (LLMs) and the like:

The Impact of Generative AI on Collaborative Open-Source Software
Development: Evidence from GitHub Copilot (arXiv:2410.02091)
https://arxiv.org/abs/2410.02091

The Impact of Generative AI on Open-Source Community Engagement, by
Karthik Babu Nattamai Kannan and Narayan Ramasubbu (SSRN)
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5106441

New Study Shows Open Source AI Is Catalyst for Economic Growth
https://about.fb.com/news/2025/05/new-study-shows-open-source-ai-catalyst-economic-growth/

Open Source AI: Democratizing the Future of Artificial Intelligence,
by Kiplangat Korir (Medium)
https://medium.com/@kiplangatkorir/open-source-ai-democratizing-the-future-of-artificial-intelligence-7deaa45bd6da

Impact of Open-Source AI on Communities
https://www.arsturn.com/blog/analyzing-the-community-impact-of-open-source-ai-projects

Benefits and Applications of Open Source AI (Moesif Blog)
https://www.moesif.com/blog/technical/api-development/Open-Source-AI/

> Going back to your question about the beast in Redmond, which although
> proprietary, it shows that LLMs get in the way of development, at best. One
> can conclude that because competition there between departments and even
> employees is so cutthroat that if there had been even the slightest
> advantage from using LLM slop oneself, then it would be found all over the
> place.

It is found all over the place. You have a very subjective opinion;
fine, you may have one, but does it really matter?

Studies have shown that LLM-assisted developers often write code
faster, with fewer errors, and spend less time on boilerplate.

LLMs are widely used in development tasks like code generation,
documentation, refactoring, and testing.

However, they can get in the way if:

- Used improperly (e.g., blindly trusting outputs).

- Replacing deep thinking or careful architecture.

LLMs can help or hinder, depending on usage. So your blanket claim is
subjective and overgeneralized.

> "Because competition... is so cutthroat... if there had been even the
> slightest advantage... then it would be found all over the place."

This argument is speculative and based on assumptions.

Yes, Microsoft (the "beast in Redmond") is known for a competitive
culture, but that by itself does not prove your conclusion.

I get where you're coming from, but I’ve got to push back — I run an
LLM locally on my own machine, completely offline, and it’s been a
game-changer for me. The idea that LLMs "get in the way" just doesn’t
hold up in practice. And let’s not forget, even Microsoft — the
so-called beast in Redmond — has been actively contributing to the
field with solid free models like Phi, which are compact,
high-performing, and openly available. Just because you don’t see
widespread internal adoption doesn’t mean they’re not being used or
useful; plenty goes on behind the scenes, and the value is there if
you actually try it.

Reference:

microsoft/phi-4 · Hugging Face:
https://huggingface.co/microsoft/phi-4
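
For anyone curious, here is a minimal sketch of what "running an LLM
locally" can look like with that very model, using the Hugging Face
transformers library in Python; the prompt, hardware settings, and
generation length are my own assumptions, not a fixed recipe:

    # Minimal sketch: load microsoft/phi-4 locally and ask it a coding
    # question. Requires the transformers and torch packages; dtype and
    # device placement are chosen automatically.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/phi-4"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype="auto"
    )

    # Example prompt: ask for help breaking a task into smaller steps.
    prompt = "List the steps to add a unit test to an autotools-based C project."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Once the weights are downloaded, nothing leaves the machine, which is
the whole point of running it offline.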

> However, because LLM slop output does not help, you now see the
> creation of mandates there for the remaining employees to force them
> to try to (or pretend to) use LLM slop:
> 
> "Microsoft Makes AI Usage Mandatory for Employees as Performance Reviews
> Face Overhaul"
>  
> https://medium.com/@hamza_83953/microsoft-makes-ai-usage-mandatory-for-employees-as-performance-reviews-face-overhaul-8f8cc020f637

And? That is their policy, and you are free to discuss it. They have
their reasons for it. If you work there and you don't like it, feel
free to resign. It is a private business.

> I've only ever done limited scripting myself, albeit over a longer
> period.

Thanks for sharing that — but if I may be honest, how can someone
who's only done limited scripting, even over a longer period, draw
such broad conclusions with confidence? That doesn’t exactly make you
an expert, and it’s important to acknowledge the limits of experience
when making sweeping claims.

> Over that time I gather that while typing in working code is a
> challenge, understanding and defining the problem(s) to be solved
> and then breaking them down into small units is much harder and so
> is documenting the code as you go along.

Bingo — you nailed it with that one. Defining the problem clearly,
breaking it down into manageable parts, and documenting it properly
are way harder (and more important) than just typing out working
code. That part I fully agree with — well said.

> Those two aspects are nothing which plausible sentence generators
> and plagiarism engines can help with, even if their
> attribution-stripped output does seem to compile.

I’d disagree there — from my own hands-on experience running an LLM
locally, it’s *precisely* those earlier stages — breaking down
problems, brainstorming approaches, even drafting clean documentation
— where the tool shines most. It’s not just about generating code that
compiles; it’s a thinking companion that helps structure ideas, catch
oversights, and speed up problem-solving. Calling it a "plausible
sentence generator" really undersells what these models can do when
used well.

> Currently LLM coding is a scam which is being perpetrated because so
> few understand ICT and, because of the influence of Redmond against
> education, that number is declining.

Ah, good to know you've uncovered the grand LLM coding scam — all that
with only limited scripting experience under your belt! Impressive
detective work. I guess those of us actually using LLMs productively
must have missed the memo from Redmond about dismantling education
while secretly boosting our workflows.

> Software freedom has been slowing that decline but Software Freedom
> promoting groups need to team up with educational projects and eject
> the productivity killing and knowledge killing microsofters or at
> least keep them at bay so as to turn that around and /increase/ the
> numbers of those who understand and can use ICT.

Right — because nothing says “promoting education and freedom” like
gatekeeping who gets to contribute. Good thing you’re here to save the
day from the productivity-killing hordes, even though some of us are
just quietly getting things done — using whatever tools actually work,
including those made by the so-called “microsofters.”

Personally, I’ve disliked Bill Gates for about 25 years now — mainly
because of the shady tactics he used to monopolize computing and crush
competition back in the day. But hey, credit where it’s due: Microsoft
today is putting out solid contributions like the Phi models and
actually supporting the open-source ecosystem in a meaningful
way. Just check https://huggingface.co/microsoft — it’s a prime
example of how far things have shifted.

Jean Louis


_______________________________________________
libreplanet-discuss mailing list
libreplanet-discuss@libreplanet.org
https://lists.libreplanet.org/mailman/listinfo/libreplanet-discuss
