Thank you for putting together such a thoughtful and timely proposal. I fully support establishing clear, pragmatic guidelines for AI-assisted contributions to Kvrocks.
Your proposal strikes the right balance: it encourages the productive use of modern tools while protecting the project's review capacity, code quality, and legal compliance. I'm +1 on moving forward with this proposal.

On Mon, 1 Dec 2025 at 14:50, Twice <[email protected]> wrote:
>
> Hi all,
>
> LLM-based "vibe coding" is becoming more common, and we are already
> seeing AI-assisted patches in the wild. To keep the project healthy
> while still benefiting from these tools, I'd like to propose a
> lightweight guideline for AI-assisted contributions to Kvrocks.
>
> ________________________________
>
> Goals
>
> Protect reviewer bandwidth
> Kvrocks is a small community with limited review capacity. We want to
> avoid being flooded by low-quality or hard-to-review AI-generated PRs.
>
> Use AI as a benefit, not a burden
> We should be able to use LLMs to speed up development and improve
> quality, while avoiding the confusion, noise, and legal risks they can
> introduce.
>
> Stay aligned with ASF policy
> All AI-generated content must comply with the ASF Generative Tooling
> Guidance and ASF licensing policies.
>
> ________________________________
>
> What is not allowed
>
> Fully LLM-generated PRs that the author does not understand are not
> acceptable.
>
> Typical signs of such PRs:
>
> The author cannot explain what the change does or why it is correct.
>
> The author cannot discuss the impact on Kvrocks' behavior.
>
> The patch ignores existing project conventions or design choices, and
> the author cannot justify the deviations.
>
> In these cases, it is usually better to:
>
> Open a high-quality issue describing the problem, expected behavior,
> and context.
>
> Discuss possible solutions with the community before attempting a patch.
>
> ________________________________
>
> What is allowed (and encouraged)
>
> Using AI tools as an assistant is welcome, as long as:
>
> The human author remains responsible
>
> You understand the change you are submitting.
> You are able to answer reviewers' questions.
>
> You are willing to revise or rewrite AI-generated parts during review.
>
> You understand the relevant code
>
> You have read the "How to Contribute" documentation.
>
> You have at least a basic understanding of the parts of the codebase
> you are touching.
>
> You are transparent about AI usage
>
> In the PR description, briefly mention if AI tools were used and for
> what (e.g. tests, docs, initial draft of implementation).
>
> If there are small pieces of code that you don't fully understand,
> call them out explicitly and ask for help.
>
> Example comment in a PR:
>
> "Lines 120–160 in foo.cc were suggested by an LLM. I understand the
> overall logic but I'm not 100% sure about edge cases around
> replication. Feedback is welcome."
>
> ________________________________
>
> ASF Generative Tooling Guidance
>
> All AI-assisted content must follow ASF rules, including:
>
> No incompatible licensing terms introduced by the tool.
>
> No hidden or undisclosed third-party code.
>
> When in doubt, contributors should review the ASF Generative Tooling
> Guidance and 3rd-party licensing policy, and ask the community or ASF
> Legal through normal channels.
>
> We may recommend adding simple provenance markers (e.g. Generated-by:
> <Tool> in commit messages), but this can be discussed further.
>
> ________________________________
>
> For reviewers
>
> Reviewers are encouraged to:
>
> Ask whether AI tools were used if the patch looks heavily machine-generated.
>
> Check that the author understands the change; if not, suggest
> converting it into an issue or closing the PR.
>
> Prioritize PRs with clear problem statements, tests, and explanations
> over opaque AI dumps.
>
> Flag suspected ASF policy or licensing issues for PMC / ASF Legal follow-up.
>
> ________________________________
>
> Evolution of this policy
>
> LLMs and ASF guidance are evolving quickly.
> This proposal is
> intentionally simple and should be revisited when:
>
> ASF updates its Generative Tooling Guidance, or
>
> New tools significantly change how we write or review code.
>
> In the long term, if tools become strong enough (for example, to
> reliably assist with review in a compliant way), we can relax or
> adjust parts of this policy.
>
> ________________________________
>
> Next steps
>
> I propose we:
>
> Discuss this on the list and refine the text if needed.
>
> Once there is rough consensus, add a short "AI-assisted contributions"
> section to "How to Contribute" on the website.
>
> Comments and suggestions are very welcome.
>
> Best,
> Twice

--
Best Regards,
- Hulk Lin
