I think it would help to split this into two layers:
- One layer is the human-facing contributor guideline: the submitting human
is accountable and must understand the change, and the project should align
with ASF guidance.
- The other layer is contributions coming from AI-controlled accounts. That
feels like a different discussion around identity, accountability,
provenance, licensing, and what kind of contribution harness the project
wants to allow.

My suggestion would be to keep the current guideline lightweight, and start
a separate discussion for agent-operated contribution harnesses. One
possible implementation direction could be an `AGENTS.md` style manifest or
similar harness metadata, so the expectations are explicit when
contributions are produced through agent-operated workflows. That seems
easier to reason about than folding everything into `CONTRIBUTING.md`.
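
To make the manifest idea concrete, here is a rough sketch of what such
harness metadata might look like. This is purely illustrative: no such
schema exists today, and every field name below is hypothetical. The point
is only the kind of information (agent identity, accountable human,
provenance, licensing attestation) such a file could carry:

```yaml
# Hypothetical agent-manifest sketch (illustrative only; not a proposed schema).
agent:
  name: example-review-bot                  # hypothetical agent account name
  operator: "Jane Doe <jane@example.org>"   # the accountable human operator
  tooling: example-model-v1                 # model/tooling used, for provenance
contribution:
  human_reviewed: true      # operator has reviewed and understands the change
  license_attestation: true # operator attests the output can be licensed under ALv2
  disclosure: "Drafted with AI assistance; reviewed end-to-end by the operator."
```

Something along these lines would keep the accountability question answerable
per contribution without bloating the general guideline.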

-ej

On Thu, Mar 12, 2026 at 12:03 PM Dmitri Bourlatchkov <[email protected]>
wrote:

> Hi EJ,
>
> Thanks for starting this discussion! I left some comments on the PR.
>
> I think the bigger question is how to deal with contributions from GH
> accounts controlled by AI rather than humans.
>
> Cheers,
> Dmitri.
>
> On Tue, Mar 3, 2026 at 1:45 PM EJ Wang <[email protected]>
> wrote:
>
> > Hi Polaris community,
> >
> > I would like to start a discussion around how Polaris should approach
> > AI-generated or AI-assisted contributions.
> >
> > Recently, Apache Iceberg merged a change that explicitly documents
> > expectations around AI-assisted contributions:
> > https://github.com/apache/iceberg/pull/15213/changes
> >
> > As AI tools become more widely used in software development, contributors
> > may rely on them in different ways - from drafting small code snippets to
> > helping structure larger changes. Rather than focusing on how these tools
> > are categorized, it may be more important to clarify contributor
> > responsibility.
> >
> > If Polaris were to define guidance in this area, I believe the core
> > principles should emphasize accountability:
> >
> >    1. The human contributor submitting a PR remains fully responsible
> >       for the change, including correctness, design soundness, licensing
> >       compliance, and long-term maintainability.
> >    2. The PR author should understand the core ideas behind the
> >       implementation end-to-end, and be able to justify the design and
> >       code during review.
> >    3. The contributor must be able to explain trade-offs, constraints,
> >       and architectural decisions reflected in the change.
> >    4. Transparency around AI usage may be considered, but responsibility
> >       should not shift away from the human author.
> >
> > In other words, regardless of how a change is produced, the
> > accountability and authorship reside with the individual submitting it.
> > AI systems should not be treated as autonomous contributors.
> >
> > Questions for discussion:
> >
> >    - Should Polaris explicitly define guidance around AI-generated
> >      contributions?
> >    - Do we want to require or encourage disclosure?
> >    - Are there ASF-level positions we should align with?
> >    - Should any such policy live in CONTRIBUTING.md?
> >
> > Given Polaris is building foundational infrastructure, setting
> > expectations early may help maintain high review standards while
> > adapting to evolving development workflows.
> >
> > Looking forward to thoughts from the community.
> >
> > Best,
> >
> > -ej
> >
>
