I am a user of sympy only (actually of sympy.physics.mechanics), too old
and too ignorant to contribute, but I follow the discussions.
I had wondered before why anybody would push a PR he/she did not write
him/herself - and might not even understand - but Jason told me people are
so eager to get into GSoC, and for that they need at least one PR merged.
I can understand people like Oscar: they are willing to teach others to
improve, but they are surely not interested in conversing with some LLM, a
non-person.

My concern is this: if key members and reviewers get too frustrated with AI
and reduce their work, sympy will suffer.
So I think reviewers should be very strict, even erring on the "wrong"
side: if a PR looks like it was created by AI, close it!

But, as I said above, this is just the opinion of an interested old user.

Peter

Oscar wrote on Wednesday, February 4, 2026 at 17:42:04 UTC+1:

> An article yesterday in the register talking about AI spam PRs on GitHub:
> https://www.theregister.com/2026/02/03/github_kill_switch_pull_requests_ai/
>
> GitHub are apparently looking into whether anything can be done to improve 
> this:
> https://github.com/orgs/community/discussions/185387
>
> The article quotes someone summarizing the problems. I agree with all
> of these points:
>
> - Review trust model is broken: reviewers can no longer assume authors
> understand or wrote the code they submit.
> - AI-generated PRs can look structurally "fine" but be logically
> wrong, unsafe, or interact with systems the reviewer doesn't fully
> know.
> - Line-by-line review is still mandatory for shipped code, but does
> not scale with large AI-assisted or agentic PRs.
> - Maintainers are uncomfortable approving PRs they don't fully
> understand, yet AI makes it easy to submit large changes without deep
> understanding.
> - Increased cognitive load: reviewers must now evaluate both the code
> and whether the author understands it.
> - Review burden is higher than pre-AI, not lower.
>
> The article quotes someone saying
> """
> I'm generally happy to help curious people in issues and guide them
> towards contributions/solutions in the spirit of social coding," he
> wrote. "But when there is no widespread lack of disclosure of LLM use
> and increasingly automated use – it basically turns people like myself
> into unknowing AI prompters. That's insane, and is leading to a huge
> erosion of social trust.
> """
> That's basically how I feel about the situation, although I would go
> further. Reviewing these PRs is not merely like being an AI prompter,
> because the human relaying the AI's output behaves effectively like a
> broken AI. You would get better, more trustworthy results much more
> quickly if you were prompting the AI directly yourself.
>
> --
> Oscar
>
