On Tue, Aug 5, 2025 at 4:16 PM Guillaume Nodet <gno...@apache.org> wrote:

> On Tue, Aug 5, 2025 at 15:19, Otavio Rodolfo Piske <angusyo...@gmail.com>
> wrote:
>
> > "I think it's a good idea, my only concern is giving too much trust to AI
> > agents."
> >
> > +1.
> >
> > I think we can possibly include a reference to this in the instructions. I
> > know certain AI coding agents tend to try to be more autonomous by default
> > (i.e., by authoring commits, for instance) and I think we could include
> > instructions specifically prohibiting that.
> >
>
> I disagree.  The fact that the agent did the commit or raised the PR does
> not mean anything wrt the underlying patch quality.  I often ask an agent
> to commit and raise a PR after multiple iterations (AI and/or manual), and
> the fact that the agent did the commit (it can easily write better commit
> messages) or raised the PR is completely irrelevant to the quality of the
> patch.  Committing / raising a PR does not imply merging to master.


I am not talking about the quality of the generated code. Ensuring quality
is the committer's responsibility.

Personally, I never let an AI agent commit code on my behalf and I make
sure to review and commit the changes myself.

What I am trying to bring to the discussion on this particular point is
whether we would like to suggest limiting the autonomy of AI agents to
ensure the people using them stay in the loop.

More specifically: does letting an AI agent operate autonomously in the
code base, whether driven by a committer such as yourself or by an external
contributor, pose any risk (legal, security, etc.) to the project?

Obviously, the AI instructions can only be advisory, and motivated
individuals can still override them, but if there is a risk, we can at
least provide guidance for everyone else.
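
To make this concrete, such a section of the instructions file could be as
small as the sketch below (purely hypothetical; both the file name and the
wording would be up for discussion):

```
# INSTRUCTIONS.ai (hypothetical sketch)

## Agent autonomy
- Do not create commits, push branches, or open pull requests on behalf
  of the user; present changes as a diff for human review instead.
- The human contributor remains responsible for reviewing, committing,
  and signing off on all generated code.
- Follow the ASF Generative Tooling guidance:
  https://www.apache.org/legal/generative-tooling.html
```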




> On a related note, we have a lot of commits that break the build which are
> authored by humans.  I'd much rather improve that!
>

>
> >
> > I also think it goes in line w/ the ASF instructions in the sense that the
> > contributor is the ultimate person responsible for the origins of the code.
>
>
> Right.  But if he is responsible for the code, let him be responsible and
> not add extra burden.
>
>
> >
> >
> > On Tue, Aug 5, 2025 at 12:03 PM Andrea Cosentino <anco...@gmail.com>
> > wrote:
> >
> > > Absolutely, that's what I meant. Don't blindly trust them.
> > >
> > > On Tue, Aug 5, 2025 at 12:01, Guillaume Nodet <gno...@apache.org>
> > > wrote:
> > >
> > > > Can that be solved by asking for PR reviews?
> > > >
> > > > On Tue, Aug 5, 2025 at 11:37, Andrea Cosentino <anco...@gmail.com>
> > > > wrote:
> > > >
> > > > > I think it's a good idea; my only concern is giving too much trust to
> > > > > AI agents.
> > > > >
> > > > > They still need to be supervised.
> > > > >
> > > > > On Tue, Aug 5, 2025 at 10:40, Otavio Rodolfo Piske
> > > > > <angusyo...@gmail.com> wrote:
> > > > >
> > > > > > Hello,
> > > > > >
> > > > > > I'd like to bring to discussion the idea of adding a dedicated AI
> > > > > > instructions file (e.g., INSTRUCTIONS.ai) to the Apache Camel
> > > > > > repositories (core and other sub-projects).
> > > > > >
> > > > > > The purpose of these files would be to define how AI-powered coding
> > > > > > agents and tools should behave when generating code for this
> > > > > > project. I believe this would be beneficial for a few key reasons:
> > > > > >
> > > > > >    1.
> > > > > >
> > > > > >    *Enforce ASF Generative Tooling Guidelines:* It would help us
> > > > formally
> > > > > >    adopt and point to the standards defined by the ASF on the use
> > of
> > > > > >    generative AI, ensuring all contributions are compliant.
> > > > > >    -
> > > > > >
> > > > > >       Apache Software Foundation Legal - Generative Tooling
> > > > > >       <https://www.apache.org/legal/generative-tooling.html>
> > > > > >       2.
> > > > > >
> > > > > >    *Maintain Project Coding Standards:* We can use it to instruct
> > AI
> > > > > tools
> > > > > >    on Camel's specific coding patterns, conventions, and
> > > architectural
> > > > > >    principles. This will help maintain the consistency and
> quality
> > of
> > > > the
> > > > > >    codebase.
> > > > > >    3.
> > > > > >
> > > > > >    *Define Clear Guardrails:* It allows us to establish a
> > reasonable
> > > > set
> > > > > of
> > > > > >    rules and constraints for generated code, promoting security,
> > > > > > reliability,
> > > > > >    and adherence to best practices from the start.
> > > > > >
> > > > > > This is becoming a standard practice in other major open-source
> > > > > > projects. For example, the Linux kernel community is already
> > > > > > discussing and defining similar guidelines to ensure AI-assisted
> > > > > > contributions are constructive.
> > > > > >
> > > > > > - Linux Kernel Mailing List Discussion
> > > > > >   <https://lore.kernel.org/all/20250725175358.1989323-1-sas...@kernel.org/>
> > > > > >
> > > > > > I believe that taking this proactive step will help us harness the
> > > > > > benefits of AI tooling while safeguarding the integrity of the
> > > > > > project.
> > > > > >
> > > > > > I'd like to open a discussion on this. What are your thoughts? Are
> > > > > > there other ASF projects that have defined such instructions and
> > > > > > that we could base our guidelines on?
> > > > > >
> > > > > >
> > > > > > Kind regards,
> > > > > > --
> > > > > > Otavio R. Piske
> > > > > > http://orpiske.net
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > ------------------------
> > > > Guillaume Nodet
> > > >
> > >
> >
> >
> > --
> > Otavio R. Piske
> > http://orpiske.net
> >
>
>
> --
> ------------------------
> Guillaume Nodet
>


-- 
Otavio R. Piske
http://orpiske.net
