Hi Team,

So far, my findings are that AI is:

- Helping people with disabilities to explore and understand the codebase, and to some extent write code, without straining their eyes or bodies as much as they previously had to
- Helping remote engineers by reducing dependence on people with knowledge across the oceans in different time zones
- Helping women who are struggling to balance work and home, by speeding up their learning and saving time
- Reducing the possibility that only a small group of people hold expertise while others struggle to catch up
These are just a few example scenarios. Overall, AI is helping to improve inclusiveness in the community, so I believe we shouldn't ban AI PRs. That said, I agree it is easy to get low-quality PRs if people rely entirely on AI without reviewing or understanding their code. As others mentioned, review policies/guidelines and CI tooling should be good enough to catch such issues.

Thanks,
Shailaja

> On Mar 3, 2026, at 12:35 AM, Ekaterina Dimitrova <[email protected]> wrote:
>
> I see a lot of good discussion, lots of good ideas, and at the same time
> reasonable concerns.
>
> I would say we can experiment, and we can revert or prohibit things which
> turn out not to work, via guidelines/policies/tooling?
>
> Behind every PR and review there is a person. It is fair to say that in
> both cases the person needs to confirm the output before submitting. If
> there is a suspicion that this is not happening, then we should talk.
> Mistakes happen even without AI in the picture, but I trust this community
> to catch things on time and use its best judgement. That is why we always
> have two committers involved on a ticket. And we should keep evolving our
> automation - CI, test coverage, static analysis, etc.
>
> I personally think it is good for the project to embrace new opportunities
> and experiment with AI tooling, but cautiously. We live in an interesting
> world where everyone is still figuring out how and where they can use AI
> tooling, with no overly strict guidelines yet. There is a lot of room for
> experimentation, as long as we as a community keep each other accountable,
> do not regress the quality bar we have set for ourselves with so much
> effort over the years, and adhere to the ASF policies.
>
> Something interesting I was looking into, just as an example of what
> others do already: llvm.org/docs/AIToolPolicy.html
> Is it time for our own policy, to begin with?
> I am not sure... :-)
>
> Best regards,
> Ekaterina
>
> On Sat, Feb 21, 2026 at 4:49 Mick <[email protected]> wrote:
>>
>> > I do not trust AI, inherently. It might guide you at best. Sometimes
>> > even that is not true. I believe that by introducing these prompts /
>> > context files we will make it nonsensical less often.
>>
>> Personally, I have zero interest in engaging with PRs from AI. I'm not
>> here for the workload, I'm here for the people.
>>
>> I can see AI is gaining traction for writing ephemeral software, but
>> that's not us. Infrastructure software is a different category, and the
>> trust our users place in us to write the code their databases run in
>> production has to remain a human relationship, I believe. I think this
>> applies to all ASF projects as well (the Community over Code ethos).
>>
>> Our rule is that three people need to be involved in a contribution,
>> with at least two being committers. Common sense can reduce this to two
>> people, one of them a committer, on trivial parts of the code. If AI
>> writes a PR and the first human then presents it as their own work,
>> taking ownership for having read and understood the code in the PR, then
>> I'm fine being a reviewer (and so long as all the legalities are ok).
>> The Cassandra codebase is never going to be ephemeral – if the reviewer
>> has to review code line by line, then I expect the author to have done
>> so too. Contributions are not point-in-time solutions; they are
>> additions to a codebase that needs to be maintained and to evolve.
>> Longevity is more important to us than customisability (both are
>> important), and mistaking the two is an architectural dead end.
>>
>> Like the dependabot PRs, I would rather see those contributions be
>> claimed and authored by a human (rather than a bot or AI account), with
>> commit messages adding extra context where due.
>> I don't believe this slows us down in taking advantage of AI, especially
>> with the right docs in place, like AGENT.md and SpecKit; it just keeps
>> the human interaction in the project a first-class tenet. Beyond the
>> aspect of fostering trust, I think this is paramount because the biggest
>> risk isn't in failing to accept all the right answers fast enough, but
>> in failing to see right answers to wrong questions (e.g. longevity
>> before customisability).
>>
>> Without a doubt, our lives are becoming orders of magnitude more
>> productive by taking advantage of AI, especially in areas of assist and
>> debugging. Those areas are particularly safe because they remain within
>> our closed loops, avoiding the proliferation of slop that can easily
>> overwhelm us. I can also see we're going to be forced to use these tools
>> in defence against the slop that's coming our way.
>>
>> Triaging, debugging, and reporting on docs, tickets, PRs, CI, and bugs –
>> yes please! But with a focus on bringing people together.
>>
>> p.s. the emdashes are mine, thanks.
