It sure is interesting watching this space evolve.

I wanted to share a related recent experience of mine that gives me both concern and hope, and has led me to a further recommendation for contributors. First, please review this tiny PR in the repo for the fineract.apache.org website:

https://github.com/apache/fineract-site/pull/43

There's more conversation than code change in that PR, which makes it easy to characterize as some form of incompetence, be it human or AI. The initial description is mostly incorrect: the font file is in fact found (using the provided grep test!), and the PR deletes an .xcf source file that is useful for future edits to the derived .png file. The responses from @Nitinkamlesh mostly didn't make sense, and they dropped comms altogether when I asked direct questions about AI.

I'm concerned that this was done without transparency. Had they opened with "I'm an AI; here's how and why I'm doing this, and here's how to reach the human operator," the PR would have been much easier and faster to resolve, and it would have engendered trust rather than destroyed it.

The part that gives me hope is that I didn't use any new or fancy AI-detector tool, and I didn't need to. I think we can double down on fundamentals to immunize ourselves against future malicious or incompetent behavior.

To my previous suggestions in Re: Ai assisted Dev on Apache Fineract <https://lists.apache.org/thread/q1fnzbodv5rbxjogmnxktpwvbb4qjp54>, I'd add a general recommendation/reminder to all contributors: *Be transparent*. Share your environment, tooling, and experiences. Ask for help as you scour docs, code, PRs, and issues; check with actual users; chat; email; write spikes; run builds and tests; write new tests; and do all of that both with and without AI. These are foundational computer-science and FOSS-community competencies we should all seek to continually improve.
