I am also a previous GSoC contributor (2025) and a new reviewer, so my 
opinion does not carry the same weight as that of some of the other people 
who have commented in this thread, but I want to share my thoughts 
nonetheless. I fully agree with the points Oscar mentioned, and with 
Peter's opinion too. Why should good reviewers waste their time, energy, 
and mental bandwidth on a suspicious or totally AI-slop PR, especially 
when the people submitting it have not put in even a fraction of the same 
effort? 

Newer contributors (especially those aiming for GSoC) should understand 
that their AI nonsense is not going to get them anywhere. The situation 
will only improve if reviewing becomes strict (some might even say a bit 
harsh). To be clear: I am suggesting that total AI-slop PRs be closed 
without any explanation. 

Review of normal PRs (entirely human-written, or AI-assisted but 
sufficiently sensible) can continue as it always has. 

On another note: maybe we should consider removing the patch requirement 
(at least one open or merged PR) from GSoC and select candidates based on 
solid proposals alone. Someone who can actually write a good proposal, 
documenting their design and implementation details with links to the 
relevant parts of the code, says a lot more than someone who got an easy 
PR merged. A merged PR does not necessarily imply a good candidate, while 
someone with a good proposal but no merged PRs could still be solid enough 
to see a project through. The notion among GSoC aspirants that "if I have 
a lot of contributions, I have a better chance of getting selected" is the 
crux of this flood of PRs. 

Sorry if some sentences are long and confusing; I didn't put this through 
an LLM :)

On Wednesday, 4 February 2026 at 22:58:57 UTC+5:30 [email protected] 
wrote:

> I am only a user of sympy (actually of sympy.physics.mechanics), too old 
> and too ignorant to contribute, but I follow the discussions.
> I had wondered before why anybody would push a PR they did not write 
> themselves (and might not even understand), but Jason told me people are 
> so eager to get into GSoC that they need at least one PR merged.
> I can understand people like Oscar: they are willing to teach others to 
> improve, but surely are not interested in conversing with some LLM, a 
> non-person. 
>
> My concern is this: if key members and reviewers get too frustrated with 
> AI and reduce their work, sympy will suffer.
> So I think reviewers should be very strict, even erring on the "wrong" 
> side: if a PR looks like it was created by AI, close it!
>
> But, as I said above, just the opinion of an interested old user,
>
> Peter
>
> Oscar wrote on Wednesday, 4 February 2026 at 17:42:04 UTC+1:
>
>> An article yesterday in The Register discusses AI spam PRs on GitHub: 
>>
>> https://www.theregister.com/2026/02/03/github_kill_switch_pull_requests_ai/ 
>>
>> GitHub are apparently looking into whether anything can be done to 
>> improve this: 
>> https://github.com/orgs/community/discussions/185387 
>>
>> The article quotes someone summarizing the problems. I agree with all 
>> of these points: 
>>
>> - Review trust model is broken: reviewers can no longer assume authors 
>> understand or wrote the code they submit. 
>> - AI-generated PRs can look structurally "fine" but be logically 
>> wrong, unsafe, or interact with systems the reviewer doesn't fully 
>> know. 
>> - Line-by-line review is still mandatory for shipped code, but does 
>> not scale with large AI-assisted or agentic PRs. 
>> - Maintainers are uncomfortable approving PRs they don't fully 
>> understand, yet AI makes it easy to submit large changes without deep 
>> understanding. 
>> - Increased cognitive load: reviewers must now evaluate both the code 
>> and whether the author understands it. 
>> - Review burden is higher than pre-AI, not lower. 
>>
>> The article quotes someone saying 
>> """ 
>> I'm generally happy to help curious people in issues and guide them 
>> towards contributions/solutions in the spirit of social coding," he 
>> wrote. "But when there is widespread lack of disclosure of LLM use 
>> and increasingly automated use – it basically turns people like myself 
>> into unknowing AI prompters. That's insane, and is leading to a huge 
>> erosion of social trust. 
>> """ 
>> That's basically how I feel about the situation, although I would go 
>> further. Reviewing these PRs is not like being an AI prompter because 
>> the human using the AI behaves effectively like a broken AI. You would 
>> get better, more trustworthy results much more quickly if you were 
>> prompting the AI directly yourself. 
>>
>> -- 
>> Oscar 
>>
>
