👍 Great call, James

Suggested basic starting AI policy points -

- Agree on the highest-level framework FIRST (3-5 points): the things most
everyone agrees are critical, and implement them.
- Create a sub-group to bring framework details piecemeal for an up/down
vote from the voting community until a full policy exists, adding items one
by one.
- Target 4-6 months for an initial complete policy. Brevity, practicality,
and enforceability are more important than 100% coverage; the early items
become models for full coverage that is supportable and maintainable.
- Set a review and update period (6 months, 9 months, or annually),
whatever the group thinks is actually supportable.
- Treat it as a long-term, calendared project, if the community supports it.
- Raw changes in the tech may sometimes force acceleration, so make sure
flexibility is built into the AI governance plan and system.

Suggested minimum starting guidelines / policy framework.  If widely
supported, hold an up-or-down vote to implement THIS method and the 3-5
agreed starting guidelines (practical and enforceable), and set supportable
timelines for completing the policy one addition at a time.

1) Disallow more than one active PR under consideration.
2) *Document the specific AI agent or model used, along with the extent
(all or part) of the coding suggestion integrated into the project.*
3) Summary and *Intent:* Describe, for a non-coding person, what the code
does and the intent. If an AI suggestion was used, describe what it was and
why it was used. Examples: total solution, cleanup of your code,
alternative to your code, etc. This is useful for auditing, suitability
analysis, and Apache compliance.
4) *AI coding and systems MUST allow for human oversight, intervention,
and determination to prevent or minimize harm. AI should assist, but never
displace, human responsibility.*
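
The disclosure items in (2) and (3) could be captured with a short PR
description template. This is only an illustrative sketch of one possible
format, not an agreed or official Fineract template; the section names and
field values are hypothetical:

```
## Summary and Intent
<What the change does, in plain language, and why.>

## AI Assistance Disclosure
- Tool / model used: <none, or the specific agent/model, e.g. "Foo Copilot v2">
- Extent integrated: <all / part / none of the AI suggestion>
- How it was used: <total solution / cleanup of my code / alternative to my
  code / tests only>
```

A template like this keeps the disclosure lightweight for contributors
while giving reviewers and auditors a consistent place to look.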

*Paul Christison*
*Core Banking / Lending Compliance*


On Sat, Dec 13, 2025 at 3:08 AM James Dailey <[email protected]> wrote:

> Hi Devs
>
> I’d like to propose we put together some guidance and policies within our
> Fineract community for AI assisted and AI generated code.  Maybe this needs
> a vote at some point but for now I think it’s useful to discuss.  Later I
> think it belongs at the level of coding and process guides.
>
> This is an active area of discussion within the broader open source
> movement and within each open source foundation's projects.  AI tooling is
> evolving much faster than copyright law and best practices can be written,
> so our discussion isn't a final destination.
>
> First it must be said, all contributors sign an ICLA (individual
> contributor license agreement), which is a legal document. It is required
> for Committers.  This states that, irrespective of the manner in which the
> PR was created (any auto-completing IDE or whatever), the Contributor “owns
> it”.  This is at the core of the current ASF policies for AI. Please review
> https://www.apache.org/legal/generative-tooling.html#approved-tools-list
>
> I’d like to suggest that we encourage good use of AI tools.  For example,
> other projects at the ASF are reportedly using AI tools for test coverage.
> For us, this requires some Fineract and financial domain knowledge.
>
> We also have heard of examples of projects being bombarded by new “vibe
> coded” PRs.  One way to deal with that is to disallow more than one active
> PR under consideration.
>
> See also https://vibe-coding-manifesto.com/ for other considerations.
>
> Thoughts?
>
> Thanks
>
>

-- 
Paul