Hi Simon,

On 20/02/26 at 02:45 +0900, Simon Richter wrote:
> Hi,
> 
> On 2/19/26 21:21, Andrea Pappacoda wrote:
> 
> [...]
> > but for the consequences that the creation of the technology itself has
> > on free software, society, and the environment.
> 
> That's the key: the goal of free software isn't to create the largest
> possible collection of gratis software, but to build something that
> strengthens the agency and autonomy of users.
> 
> There is a massive difference between experienced programmers and novices in
> how they use AI tools, and there is a difference between ongoing maintenance
> and drive-by contributions.
> 
> As experienced developers doing ongoing maintenance, we're only looking at
> experienced developers doing ongoing maintenance, and AI can easily take the
> place of a junior programmer that can perform tasks under guidance. The main
> criticism[1] here is that these are project resources not spent on
> onboarding new contributors: the AI agent will not learn anything from the
> exchange.
> 
> For the long term health of the free software ecosystem, we also need an
> accessible entry point for novice developers.
> 
> This means leaving some simple onboarding tasks that could probably be
> solved by an AI agent for a human to do: the value is not in the code
> created, but in the knowledge transfer.
> 
> This also means limiting the complexity of software projects, and finding a
> good balance between boilerplate and deeply nested dependencies that reuse
> so much code that modifying a single line can affect hundreds of different
> uses.

In the above, I think you assume that AI tools would leave us without
enough simple tasks suitable for new contributors. Unfortunately, I
think we have enough work to do, and few enough new contributors, to
avoid that issue for the foreseeable future.

Also, I think this assumes that AI assistance will mainly help with
easy tasks. That is not necessarily true: it could make harder tasks
more accessible, for example by easing refactorings that reduce the
complexity or technical debt of our software, or by making a project
more contributor-friendly through improved test suites and documentation.

It could also help new contributors get up to speed, using AI to improve
their understanding of a codebase. For example, there are
(experimental?) services like Google Code Wiki that take a git
repository and generate documentation for it. See e.g. the generated doc
for Apache Kafka: https://codewiki.google/github.com/apache/kafka .
Maybe it's slop today (what about in a year?). Maybe it's already better
than some of our current documentation.

> AI use presents us (and the commercial software world as well) with a
> similar problem: there is a massive skill gap between "gets some results"
> and "consistently and sustainably delivers results", bridging that gap
> essentially requires starting from scratch, but is required to achieve
> independence from the operators of the AI service, and this gap is
> disrupting the pipeline of new entrants.
> 
> Any AI policy we come up with needs to solve this onboarding problem. We
> neither want to discourage people by rejecting their contributions, nor do
> we want to expend mentoring resources on people who do not want to be
> mentored.

You might find it interesting to read https://arxiv.org/abs/2601.20245
(disclaimer: authors paid by Anthropic). It's a study that evaluates
how much time developers took to complete a given task, and also how
much understanding they gained about the technology they used.
A takeaway is that there are very different ways to interact with AI,
which produce very different results both in terms of speed and of
understanding (see figure 11 in particular).

> I don't understand why anyone would
> pay an external service to take over the fun aspects of working on free
> software, leaving them with the tedious parts like reviews, but that's also
> not something we need to regulate.

I think it depends on what people find fun. Some people are more
interested in the mental struggle faced when solving hard,
specific, technical problems. Others are more interested in building
and contemplating large and useful "castles", and less in the detailed
lego-brick assembly. AI-assistance makes building castles easier (which
is great for people who like to contemplate the castles they built), but
risks killing some of the lego-brick-assembly fun by making it trivial.

I would argue that in the context of Debian, we have lots of castles to
build or improve that are not particularly exciting in terms of
technical problems, but are super-useful and important for our end goals.

Also, code reviews could be AI-assisted as well to make them less
tedious, or contributors could use AI to review their code before
submission (similarly to how we recommend running lintian locally before
submitting packages to mentors.debian.net).
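
For instance, such a pre-submission check might look like the
following (a sketch only; the .changes filename is a hypothetical
placeholder, and the flags shown are standard lintian options):

```shell
# Run lintian locally on the built .changes file before uploading,
# so reviewers see fewer mechanical issues.
# --info adds explanations for each tag; --pedantic enables the
# strictest class of checks.
# (mypackage_1.2-1_amd64.changes is a placeholder filename.)
lintian --info --pedantic ../mypackage_1.2-1_amd64.changes
```

An AI-assisted review pass could play a similar role: a cheap local
filter that catches routine problems before a human reviewer's time
is spent.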

> In addition, I believe accepting AI-assisted contributions will discourage
> contributors that do not have the financial means to access AI services.

That's true. Currently it's not really an issue because there are
sufficient actors willing to provide access to their services for free,
with quotas sufficient for typical volunteer work. But this might
change.

Lucas
