And... if we get dozens of PRs generated by AI, who is going to review them?
I suppose that in many cases a third-party review may be harder than making the PR itself: the author of a PR is usually focused on the narrow issue or feature, while the reviewer often has to step into the author's shoes to understand the reasons for a change.
Will the above instructions help with that as well?

Best,
Łukasz

On 8/5/25 18:21, Andrea Cosentino wrote:
On Tue, Aug 5, 2025 at 4:17 PM Guillaume Nodet <gno...@apache.org>
wrote:

On Tue, Aug 5, 2025 at 3:19 PM Otavio Rodolfo Piske <angusyo...@gmail.com>
wrote:

"I think it's a good idea, my only concern is giving too much trust to AI
agents."

+1.

I think we can possibly include a reference to this in the instructions.
I
know certain AI coding agents tend to try to be more autonomous by
default
(i.e.: by authoring commits, for instance) and I think we could include
instructions specifically prohibiting that.


I disagree. The fact that the agent did the commit or raised the PR says
nothing about the underlying patch quality. I often ask an agent to commit
and raise a PR after multiple iterations (AI and/or manual), and the fact
that the agent did the commit (it can easily write better commit messages)
or raised the PR is completely irrelevant to the quality of the patch.
Committing or raising a PR does not imply merging to master.

On a related note, we have a lot of commits that break the build which are
authored by humans. I'd much rather improve that!


Indeed, the important part is always passing through a PR and review
process.

And yes, it's true that we sometimes commit code that breaks the build, so
we should improve on that side too.





I also think it goes in line with the ASF instructions, in the sense that
the contributor is the person ultimately responsible for the origins of
the code.


Right. But if they are responsible for the code, let them be responsible
and not add extra burden.




On Tue, Aug 5, 2025 at 12:03 PM Andrea Cosentino <anco...@gmail.com> wrote:

Absolutely, that's what I meant. Don't blindly trust them.

On Tue, Aug 5, 2025 at 12:01 PM Guillaume Nodet <gno...@apache.org>
wrote:

Can that be solved by asking for PR reviews?

On Tue, Aug 5, 2025 at 11:37 AM Andrea Cosentino <anco...@gmail.com>
wrote:

I think it's a good idea, my only concern is giving too much trust to AI
agents.

They still need to be supervised.

On Tue, Aug 5, 2025 at 10:40 AM Otavio Rodolfo Piske <angusyo...@gmail.com>
wrote:

Hello,

I'd like to bring to discussion that we add a dedicated AI instructions
file (e.g., INSTRUCTIONS.ai) to the Apache Camel repository (core and
other sub-projects).

The purpose of these files would be to define how AI-powered coding agents
and tools should behave when generating code for this project. I believe
this would be beneficial for a few key reasons:

    1. *Enforce ASF Generative Tooling Guidelines:* It would help us
       formally adopt and point to the standards defined by the ASF on the
       use of generative AI, ensuring all contributions are compliant.
       - Apache Software Foundation Legal - Generative Tooling
         <https://www.apache.org/legal/generative-tooling.html>

    2. *Maintain Project Coding Standards:* We can use it to instruct AI
       tools on Camel's specific coding patterns, conventions, and
       architectural principles. This will help maintain the consistency
       and quality of the codebase.

    3. *Define Clear Guardrails:* It allows us to establish a reasonable
       set of rules and constraints for generated code, promoting
       security, reliability, and adherence to best practices from the
       start.
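
To make the three points above concrete, here is a minimal sketch of what
such a file could contain. This assumes a markdown-style instructions
file; the name INSTRUCTIONS.ai comes from the proposal above, but every
individual rule below is illustrative only, not a proposed final text:

    # INSTRUCTIONS.ai (illustrative draft, not an agreed format)
    # Guidance for AI coding agents and tools working on this repository.

    ## Legal and provenance
    - Follow the ASF Generative Tooling guidance:
      https://www.apache.org/legal/generative-tooling.html
    - The human contributor remains responsible for the origin and
      licensing of any generated code.

    ## Coding standards
    - Follow the existing code style and conventions of the surrounding
      module; do not reformat unrelated code.
    - Build and test the affected modules before proposing a change.

    ## Guardrails
    - All changes must pass through a pull request and human review
      before merging; never merge autonomously.
    - Do not add new dependencies without calling them out explicitly.

Whether agents may author commits or open PRs themselves is debated later
in this thread, so the sketch deliberately constrains only the merge step.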

This is becoming a standard practice in other major open-source projects.
For example, the Linux kernel community is already discussing and defining
similar guidelines to ensure AI-assisted contributions are constructive.

    - Linux Kernel Mailing List Discussion
      <https://lore.kernel.org/all/20250725175358.1989323-1-sas...@kernel.org/>

I believe that taking this proactive step will help us harness the
benefits of AI tooling while safeguarding the integrity of the project.

I'd like to open a discussion on this. What are your thoughts? Are there
any other projects in the ASF that have defined these instructions and
that we could base our guidelines on?


Kind regards,
--
Otavio R. Piske
http://orpiske.net




--
------------------------
Guillaume Nodet




--
Otavio R. Piske
http://orpiske.net



--
------------------------
Guillaume Nodet


