Hi,

On 2/19/26 21:21, Andrea Pappacoda wrote:
> [...]
> but for the consequences that the creation of the technology itself has on free software, society, and the environment.
That's the key: the goal of free software isn't to create the largest possible collection of gratis software, but to build something that strengthens the agency and autonomy of users.
There is a massive difference between experienced programmers and novices in how they use AI tools, and there is a difference between ongoing maintenance and drive-by contributions.
As experienced developers doing ongoing maintenance, we tend to evaluate AI tools from that perspective alone, and from that perspective AI can easily take the place of a junior programmer performing tasks under guidance. The main criticism[1] here is that these are project resources not spent on onboarding new contributors: the AI agent will not learn anything from the exchange.
For the long term health of the free software ecosystem, we also need an accessible entry point for novice developers.
This means leaving some simple onboarding tasks that could probably be solved by an AI agent for a human to do: the value is not in the code created, but in the knowledge transfer.
This also means limiting the complexity of software projects, and finding a good balance between duplicated boilerplate and deeply nested dependencies, where code is reused so heavily that modifying a single line can affect hundreds of call sites.
A good example of that in Debian is debhelper: it massively reduces boilerplate in a lot of packages, but maintaining debhelper itself is significantly more complex than maintaining any individual package that uses it. Do we set the desired skill level for new applicants at

1. "can use debhelper to package a simple application",
2. "can build packages without debhelper (even if they never do)", or
3. "can maintain debhelper"?

I think it is obvious that option 1 is not sustainable for Debian, because these developers would not be able to work autonomously; they would always depend on the debhelper maintainers. Option 2 is nonsensical, because we would be teaching skills that are never used, and option 3 is too steep a barrier.
AI use presents us (and the commercial software world as well) with a similar problem: there is a massive skill gap between "gets some results" and "consistently and sustainably delivers results". Bridging that gap essentially requires starting from scratch, yet it is necessary to achieve independence from the operators of the AI service, and this gap is disrupting the pipeline of new entrants.
Any AI policy we come up with needs to solve this onboarding problem. We neither want to discourage people by rejecting their contributions, nor do we want to expend mentoring resources on people who do not want to be mentored.
Whether some of the experienced people in Debian have had positive or negative experiences with some AI tool is largely irrelevant, because for the long term health of the free software ecosystem it does not matter all that much if they use AI tools or not. I don't understand why anyone would pay an external service to take over the fun aspects of working on free software, leaving them with the tedious parts like reviews, but that's also not something we need to regulate.
However, I think that accepting AI-assisted drive-by contributions is actually harmful, because the best-case outcome is that a trivial problem got solved without actually onboarding a new contributor, and the worst-case outcome is that the new contributor is just proxying between an AI and the maintainer.
In addition, I believe accepting AI-assisted contributions will discourage contributors that do not have the financial means to access AI services.
Simon

[1] apart from diverting resources from sanitation, medicine, education, wine, public order, irrigation, roads, the fresh water system, and public health