Hi!

Based on the results of the discussion, I've opened a PR:
https://github.com/apache/cloudberry/pull/1740

Yes, it was generated using various models: Qwen-3, Claude 4.6, GPT 5.5.

The AI_POLICY.md is partly generated, partly written by me. AGENTS.md is
fully generated.

Please review it )

On Tue, Apr 7, 2026 at 5:30 AM Max Yang <[email protected]> wrote:

> Thanks for raising this topic — it's timely and important.
>
> I fully agree that AI-generated code is already part of our daily
> workflow, and pretending otherwise doesn't help anyone. Rather than
> discouraging AI usage, I think we should embrace it with clear
> guidelines. Here are my thoughts:
>
> 1. AI is a productivity tool, not a replacement for accountability
>
>    Just like IDEs, linters, or Stack Overflow before it — AI is a
>    tool. The developer who submits the code is still fully responsible
>    for its correctness, security, and compliance. This should be the
>    cornerstone of any policy we adopt.
>
> 2. We need a developer-friendly AI policy
>
>    I agree that the ASF rules are hard to parse for most contributors.
>    I'd suggest we create a concise, practical document (similar to
>    ClickHouse's AI_POLICY.md) that covers:
>
>   - Disclosure: Contributors should indicate when AI tools were used
> in substantial code generation (e.g., a simple tag in the PR
> description).
>   - Review standard: AI-generated code must meet the exact same
> review bar as human-written code — no shortcuts.
>   - Licensing awareness: Contributors must ensure AI-generated code
> doesn't introduce license-incompatible snippets. This is
> especially critical for ASF projects.
>   - No AI-generated code in security-sensitive areas without extra
> scrutiny: Crypto, authentication, access control, etc. deserve
> additional human review.
>   - Testing requirement: AI-generated code should come with
> corresponding tests. If the AI wrote the code, the human should
> at least write (or carefully verify) the tests.
> 3. Practical suggestions
>
>   - Add a checkbox in our PR template: "This PR contains AI-assisted
> code generation: Yes / No"
>   - Create a short AI_POLICY.md in our repo — written in plain
> language, not legalese
>   - Periodically share best practices on how to use AI agents
> effectively within our project (prompt engineering tips, common
> pitfalls, etc.)
>
> The goal should not be to create barriers, but to make it easy for
> contributors to do the right thing. A clear, simple policy actually
> encourages responsible AI usage rather than pushing it underground.
>
> What do others think?
>
> Best regards, Max Yang
>
>
> On Mon, Apr 6, 2026 at 4:54 PM Leonid Borchuk <[email protected]>
> wrote:
>
> > Hi
> >
> > I recently read an article by the ClickHouse team about using AI to
> > develop database kernel/infrastructure code
> > https://clickhouse.com/blog/agentic-coding and realized that
> > everything said there applies to our project too.
> >
> > Really, we can't deny that AI-generated code is here. We see it in
> > PRs submitted by developers involved in the project. We use AI agents
> > ourselves to write texts/code/responses and to be more productive.
> >
> > So, my question is: since this is a new reality, perhaps it would be
> > useful for all project contributors to know the correct way to use
> > AI agents.
> >
> > Something like the ClickHouse AI policy:
> > https://github.com/ClickHouse/ClickHouse/blob/master/AI_POLICY.md
> >
> > We also have a set of ASF rules:
> > https://www.apache.org/legal/generative-tooling.html. But they are
> > difficult for a simple developer like me to understand (they contain
> > a lot of legal terms). We could say the same things in a more
> > developer-friendly manner and so avoid embarrassment. It is simpler
> > to say that I am not using AI agents than to work out whether it is
> > legal and approved by the community or not.
> >
> > WBW, Leonid
> >
>