I read this discussion on AI bots opening PRs, and indeed it's an issue, and will probably become a major one in the next few months. But again, if the generated code, even from an AI bot, performs well, benefits the organisation, and solves the issue, then I don't see why we should discourage such cases. Also, yes, achieving this would need proper fine-tuning of the LLM, training it to write proper comments, PRs, branch names, etc. And yes, sometimes an adversary might misuse such a bot; that is what we need to prevent. Again, opinions may vary, and yeah, I would always prefer to write my own code lol rather than use a bot to make PRs. I prefer getting my hands dirty.
On Tuesday, 16 December 2025 at 17:07:26 UTC+5:30 [email protected] wrote:

> The problem of AI-generated code is happening all across open source.
> One case that I am close to is https://github.com/paradigmxyz/reth. What I
> think they are doing is just adding very strict CI for everything, and
> again the reviewers have to put in so much effort.
>
> On Wednesday, December 10, 2025 at 4:28:08 AM UTC+5:30 Oscar wrote:
>
>> On Mon, 8 Dec 2025 at 23:15, Francesco Bonazzi <[email protected]> wrote:
>> >
>> > I fear that AI bots will start opening PRs soon (or maybe they are
>> > already doing it). AI can impersonate human conversation pretty well.
>> > The purpose of such bots is to use human feedback just to collect data.
>>
>> I am actually getting emails roughly once a week right now from AI
>> companies offering to pay me to review AI-generated PRs, but I have not
>> replied to any of them.
>>
>> I don't think that we are seeing AI bots, though. It is just humans
>> using AI tools, sometimes in a reasonable way but more often badly.
>>
>> We absolutely need to have a policy about this that insists that use
>> of AI to write code must be disclosed. The policy should clearly
>> state that it is not acceptable to submit AI-generated code that you
>> do not understand yourself, and it should explain why this is bad and
>> what you should do instead.
>>
>> Regardless of whether the policy is enforceable, I think people need to
>> see a clear statement of what a reasonable way of going about things
>> looks like. Honestly, I don't blame people for thinking that having an
>> AI just write all the code is the modern way, given all the hype around
>> this.
>>
>> Right now the majority of PRs opened are from people who have used
>> some AI tool to write the code. They have trusted the code in
>> deference to the AI's seemingly superior capabilities and knowledge
>> and just launched it into a PR.
>>
>> The end result is that most PRs now are unchecked LLM output.
>> It is a waste of time to review these as long as the author thinks that
>> submitting unchecked LLM output is reasonable, because any review
>> comments are just typed into the LLM, and the LLM even writes their
>> comments in reply.
>>
>> If we were talking about this in the context of software developers
>> working together in a company, then I think there could be all
>> sorts of ways of managing this. In the context of an open source
>> project, having loads of people appearing from nowhere and spewing LLM
>> output into PRs is unmanageable.
>>
>> --
>> Oscar

--
You received this message because you are subscribed to the Google Groups "sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion visit https://groups.google.com/d/msgid/sympy/b3da2038-17bb-4453-8d96-35aa7cacc05en%40googlegroups.com.
