GSoC is holding a live discussion about managing AI PRs and spam in the
program. Maybe someone can attend:

We hope you can join us tomorrow, Thursday, March 5, for a community group
discussion on Managing AI Spam and Influx of Applicants.

Time: 16:00-17:00 UTC
Meet link: meet.google.com/yep-puzf-uba

I'll be leading the discussion, but I'm looking to all of you for your insights
(and the problems) so we can try to find some solutions together as a
community.

We will be taking notes on the discussion and will share them with this
group list shortly after the meeting.

Best,
Stephanie Taylor
GSoC Program Lead

Jason
moorepants.info
+01 530-601-9791


On Mon, Mar 2, 2026 at 5:27 AM Sangyub Lee <[email protected]> wrote:

> > What solution do you propose to that problem?
> > Right now we are getting too many PRs on a zero trust basis and at the
> > same time it has become really hard to build trust.
>
> I'm not sure this is very relevant, but I draw on a few insights from
> distributed computing about the effectiveness of 2/3 consensus.
> A 2/3 quorum is the optimal threshold for preventing Byzantine actors
> from contaminating trust, and it is relevant here because junior
> developers and AI can be modeled as Byzantine actors: junior developers
> are not usually malicious, but their behavior can be arbitrary and
> error-prone. A 2/3 quorum is also minimal in the sense that it protects
> you from malicious decisions while still keeping you productive, e.g.
> decisions are not halted by a small number of people who disagree with
> everything.
>
> I'd also note that this may be oversimplified modeling; there are plenty
> of papers with more sophisticated models. But the conclusion I draw is
> that in order to keep an organization stable, you now need 2 mentors to
> supervise 1 mentee. It also means that if such a ratio is what structures
> converge to, a lot of open source communities may eventually converge to
> such a senior-dominated structure.
>
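For context on the quoted argument: the 2/3 figure comes from the classic
Byzantine fault-tolerance bound, which says that with n participants, of
whom f may behave arbitrarily, agreement is only guaranteed when
n >= 3f + 1, with decision quorums of 2f + 1 so that any two quorums
overlap in at least one honest participant. A minimal sketch (the function
names are illustrative, not from any particular library):

```python
# Classic Byzantine fault-tolerance bound: with n participants, agreement
# tolerates at most f arbitrary ("Byzantine") faults when n >= 3f + 1.
# Decision quorums of size 2f + 1 then guarantee that any two quorums
# intersect in at least one honest participant.

def max_byzantine_faults(n: int) -> int:
    """Largest f satisfying n >= 3f + 1."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Standard BFT quorum: 2f + 1 votes out of n participants."""
    return 2 * max_byzantine_faults(n) + 1

for n in (4, 7, 10, 13):
    f = max_byzantine_faults(n)
    q = quorum_size(n)
    print(f"n={n}: tolerates f={f} faulty, quorum={q} (~{q/n:.0%})")
```

The quorum fraction approaches 2/3 from above as n grows. Note that the
2-mentors-per-1-mentee ratio in the message above is the author's informal
extrapolation from this bound, not a direct consequence of the theorem.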
> On Thursday, February 5, 2026 at 10:32:53 PM UTC+1 Oscar wrote:
>
>> On Thu, 5 Feb 2026 at 09:46, Peter Stahlecker
>> <[email protected]> wrote:
>> >
>> > I feel the PRs for sympy (and likely for other packages, too) serve a
>> dual purpose:
>> >
>> > 1. make sympy even better
>> > 2. When the reviewers discuss with submitters, the (mostly young)
>> submitters learn something.
>> >
>> > Point 1 could be taken over by AI at some point, but then Point 2 would
>> be dead.
>> > At least in my view point 2 is a VERY important one, and should not be
>> given up lightly.
>>
>> I was thinking about this a bit. If you look at the most recent PRs
>> then almost half of them are from one person and all of their PRs have
>> been merged by me (except one that got closed):
>> https://github.com/sympy/sympy/pulls?q=is%3Apr+is%3Amerged
>>
>> That person is a new contributor and it says in the AI disclosure that
>> he is using AI but that he checks and edits the code afterwards. I
>> actually can't tell that he is using AI because he is using it in a
>> reasonable way: the end result is still what you would have ended up
>> with if the code was written by someone who knows what they are doing
>> and is not using AI. Note that he does not use LLMs to write comments
>> or messages so the communication is all human.
>>
>> Those pull requests are mostly for a part of the codebase that I don't
>> know very well so I'm not really able to verify that everything is
>> correct (I could with more effort but don't want to). There are no
>> maintainers who know it well but I am happy to review and merge those
>> PRs based largely on trust that he knows what he is doing and is
>> improving things. You can see the beginning of that trust developing
>> through the human communication here:
>> https://github.com/sympy/sympy/pull/28994
>>
>> Over time through human communication and PRs the trust and mutual
>> understanding grows and we end up at a point where it becomes easy to
>> review and merge the PRs.
>>
>> The problems with a lot of the other PRs are that:
>>
>> - It is hard to build trust when there is no actual human
>> communication (people using LLMs to write messages).
>> - Many of those people are demonstrably less experienced and so cannot
>> really be trusted in the competency sense either.
>> - They are often trying to make changes in parts of the codebase that
>> might typically be considered "more advanced" where you actually
>> need the author to have a higher level of competency/trust (the AI
>> made them over-confident, without AI they would have looked at the
>> code and decided not to mess with it).
>> - Before AI they would have been able to earn some trust just based on
>> the fact that they figured out how to produce seemingly working code
>> and make the tests pass but that is now meaningless.
>>
>> Basically we can't just merge all PRs on trust but we also can't
>> review all PRs on a zero trust basis (that is far too much work).
>> Right now we are getting too many PRs on a zero trust basis and at the
>> same time it has become really hard to build trust.
>>
>> I think I am realising now how damaging the LLM messages are. Not
>> having actual human-to-human communication is a massive barrier to
>> trust building.
>>
>> --
>> Oscar
>>

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/sympy/CAP7f1AiKdOGsSqeoDztoj7KmsTyKS8wQZEwFzH3jQ2doNnw%3DMQ%40mail.gmail.com.
