Thanks for doing this Nic (and Gang). I agree it would be valuable to
have this stated in the docs.
I'd also second Adam: anecdotally, having AGENTS.md (and other .md files)
appears to yield better PRs.

Another thing we could consider (and maybe that's out of scope) is
providing https://llmstxt.org/ formatted documentation. This would
potentially improve user experience for those who use LLMs to work with
Arrow implementations downstream.
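For illustration, a minimal llms.txt following the llmstxt.org convention (an H1 title, a blockquote summary, then H2 sections of link lists) might look roughly like the sketch below. The section names, paths, and descriptions here are hypothetical placeholders, not actual Arrow docs URLs:

```markdown
# Apache Arrow

> Apache Arrow defines a language-independent columnar memory format for
> flat and nested data, with libraries in many languages.

## Docs

- [Columnar format](https://example.org/arrow/format.md): hypothetical link
  to the columnar format specification
- [C++ API](https://example.org/arrow/cpp.md): hypothetical link to the
  C++ library reference

## Optional

- [Cookbook](https://example.org/arrow/cookbook.md): hypothetical link to
  longer-form examples, safe to skip in small context windows
```

The real file would of course point at the actual published docs; the spec also suggests the "Optional" section for material an LLM can skip when context is tight.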

From llmstxt.org:
> Large language models increasingly rely on website information, but face
a critical limitation: context windows are too small to handle most
websites in their entirety. Converting complex HTML pages with navigation,
ads, and JavaScript into LLM-friendly plain text is both difficult and
imprecise.
>
> While websites serve both human readers and LLMs, the latter benefit from
more concise, expert-level information gathered in a single, accessible
location. This is particularly important for use cases like development
environments, where LLMs need quick access to programming documentation and
APIs.

Rok

On Mon, Jan 19, 2026 at 6:49 PM R Tyler Croy <[email protected]> wrote:

> (replies inline)
>
> On Sunday, January 18th, 2026 at 7:43 PM, Gang Wu <[email protected]>
> wrote:
>
> > - Submitters should review all lines of generated code before creating
> > the PR, to understand every detail just as if they had written the code
> > themselves.
> > - AI tools are notorious for generating overly verbose comments,
> > unnecessary test cases, fixing test failures with wrong approaches, etc.
> > Make sure these are checked and fixed.
> > - Reviewers are humans, so please break down large PRs into smaller
> > ones to make reviewers' lives easier and get PRs reviewed promptly.
>
>
> Like others I think Nic's draft is a good one, I would like to offer some
> thoughts as a maintainer (delta-rs) which has received increased
> AI-assisted pull requests over the past six months.
>
>
> I would strongly encourage moving the "PR may be closed without further
> review" statement to the very beginning of the policy.  I would also
> encourage using labels like "ai-assisted" to signal to other
> contributors who may or may not wish to engage in reviewing potential slop.
>
> We have had repeated attempts at contributions by some folks who simply do
> not understand their generated code and when asked for clarification, have
> the LLM generate more incorrect commentary.  It's very Dunning-Kruger and
> leads to lots of frustration all around.
>
> Like most policies, it's important to speak to those who are acting in
> good faith, but don't rely on everybody following the rules; come up
> with an agreed-upon way to handle those who don't.
>
>
> Either way I think it's good to ship! :)
>
>
>
> Cheers
