On 15/01/26 8:16 pm, Lucas Nussbaum wrote:
> On 15/01/26 at 00:49 +0100, Matthias Geiger wrote:
>> IMO it's high time we had a GR banning so-called AI (read LLMs) at the
>> very least for project contributions, and I would suggest just ignoring
>> mail that looks like it was written by a machine in the meantime.
>
> On the other hand, since this discussion and GR have not happened yet, I
> think it's unfair to assume that the result would be the rejection of
> AI-assisted contributions. I suspect things would be a bit more nuanced.
>
> Two data points:
>
> 1/ Back in June I volunteered to explore whether we could get AI
> companies to sponsor Debian with LLM access (for Debian development).
> This has gone nowhere so far, unfortunately, but several DDs contacted
> me privately (for example during DebConf) to express interest. So
> clearly some DDs think that AI-assisted contributions are worth
> exploring in the context of Debian.
>
> 2/ Most of my personal Free Software contributions over the last year
> have been AI-assisted to some degree. Usually that means I let the agent
> write an initial version of the code, review it, and then either rework
> it manually, prompt the agent to rework it until it matches my
> expectations, or a mix of both. When they were contributions to projects
> where I don't consider myself the maintainer, I think I was always clear
> that the contributions were AI-assisted.
> My feeling is that contributing this way let me produce more, and in a
> more interesting way, because I could focus on what the code achieves
> rather than on not-so-interesting implementation details. It also makes
> addressing boring bugs in legacy code a bit more fun.
> So if Debian had a policy banning AI-assisted contributions, I would
> probably just decide to contribute elsewhere.
>
> I fully recognize that the AI world is far from perfect and raises many
> questions. There's some moderate hope with open-weight models and free
> frontends. Things are moving very fast, so it's hard to predict what
> will happen over the next months.
>
> In the end it's again a question of where we draw the line:
> - Should we distinguish between AI-assisted and AI-generated
>   contributions?
> - What about AI-assisted contributions to upstream projects?
> - If it's about non-free tools, what about bug reports generated by
>   services such as Coverity Scan?
> - If it's about environmental impact, what about large-scale QA checks
>   and CI?
>
> Lucas

I think one thing people often forget is that you cannot hold an LLM accountable when something breaks. If a human makes a mistake in the code and someone points it out, that person can recall the steps they took, discuss it with others, and fix the bug.

LLMs, by contrast, very often fall into hallucination loops.

Over time, we will end up in a situation where humans cannot understand the code written by LLMs; then one day an LLM breaks something and falls into a hallucination loop. The result? A critical broken piece of software that is hard for anybody to fix.

I agree with Matthias. Debian isn't a corporation that needs to roll out LLM usage across the organisation to keep shareholders happy. We're a volunteer organisation.

--
Regards,

Aryan Karamtoth,
Debian Maintainer

Homepage: https://spaceports.in
Matrix: @SpaciousCoder78:matrix.org
XMPP: [email protected]

GPG Fingerprint: 7A7D 9308 2BD1 9BAF A83B 7E34 FE90 07B8 ED64 0421
