On Wed, Jan 07, 2026 at 04:58:16PM -0500, Steven Rostedt wrote:
> On Wed, 7 Jan 2026 21:15:17 +0000
> Lorenzo Stoakes <[email protected]> wrote:
> >
> > I would simply argue that LLMs are not another tool on the basis of the
> > drastic negative impact it's had in very many areas, for which you need only
> > take a cursory glance at the world to observe.
> >
> > Thinking LLMs are 'just another tool' is to say effectively that the kernel
> > is immune from this. Which seems to me a silly position.
>
> But has this started to become a real problem with the kernel today?
It's becoming a problem. And as I said to Linus I seriously worry about
what news coverage of the kernel's stance on these kinds of series will do.

> > >
> > > Let's look at it another way: What we all *want* for the kernel is
> > > simplicity. Simple rules, simple documentation, simple code. The
> > > simplest way to deal with the LLM onslaught is to pray that our existing
> > > rules will suffice.
> >
> > I'm not sure we really have rules quite as clearly as you say, as
> > subsystems differ greatly in what they do.
> >
> > For one mm merges patches unless adverse review is received. Which means a
> > sudden influx of LLM series is likely to lead to real problems. Not all
> > subsystems are alike like this.
>
> But has this happened yet?

You're doing the 'repeat for emphasis' thing here which I respect as a
useful literary tool :) but addressed above.

> >
> > One rule that seems consistent is that arbitrary dismissal of series is
> > seriously frowned upon.
>
> If it is AI slop coming in, you can say, "unless you can prove to me that
> you understand this series and there's nothing wrong with it, I'm rejecting
> it"
>
> If the series looks good then what's the issue. But if it's AI slop and
> it's obvious the person behind the code doesn't understand what they are
> submitting, that could even be rationale for sending that person to your
> /dev/null folder.

Right, sure, but I feel this sits outside of current norms, I made a case
for it in my reply to Dan [0].

[0]: https://lore.kernel.org/ksummit/[email protected]/

>
> > > > The document claims otherwise.
> > >
> > > For now, I think the existing rules are holding. We have the luxury of
> >
> > We're noticing a lot more LLM slop than we used to. It is becoming more and
> > more of an issue.
>
> Are you noticing this in submissions?

Yes.
>
> >
> > Secondly, as I said in my MS thread and maybe even in a previous version of
> > this one (can't remember) - I fear that once it becomes public that we are
> > open to LLM patches, the floodgates will open.
>
> This document is not about addressing anything that we fear will happen. It
> is only to state our current view of how things work today.
>
> If the floodgates do open and we get inundated with AI slop, then we can
> most definitely update this document to have a bit more teeth.
>
> But one thing I learned about my decade on the TAB, is don't worry about
> things you are afraid might happen, just make sure you address what is
> currently happening. Especially when it's easy to update the rules.

I mean why are we even writing the document at all in that case :) why did
this discussion come up at the Maintainers Summit, etc.

I think it's sensible to establish a clear policy on how we deal with this
_ahead of time_. And as I said to Linus (and previously in discussions on
this) I fear the press reporting 'linux kernel welcomes AI submissions,
sees it like any other tool'. So the tail could wag the dog here.

And is it really problematic to simply underline that that doesn't mean we
are ok with the unique ability of LLMs to allow submissions end-to-end in
bulk?

Again I'll send an incremental change showing what I actually want to
change here. Maybe that'll clarify my intent.

>
> >
> > The kernel has a thorny reputation of people pushing back, which probably
> > plays some role in holding that off.
> >
> > And it's not like I'm asking for much, I'm not asking you to rewrite the
> > document, or take an entirely different approach, I'm just saying that we
> > should highlight that:
> >
> > 1. LLMs _allow you to send patches end-to-end without expertise_.
>
> Why does this need to be added to the document. I think we should only be
> addressing how we handle tool generated content.
Because of maintainer/review asymmetry and this being a uniquely new
situation which attacks that.

> >
> > 2. As a result, even though the community (rightly) strongly disapproves of
> >    blanket dismissals of series, if we suspect AI slop [I think it's useful
> >    to actually use that term], maintainers can reject it out of hand.
> >
> > Point 2 is absolutely a new thing in my view.
>
> I don't believe that is necessary. I reject patches outright all the time.
> Especially checkpatch "fixes" on code that is already in the tree. I just
> say: "checkpatch is for patches, not accepted content. If it's not a real
> bug, don't use checkpatch."

I find it interesting that both examples given here are of trivially
rejectable things that nobody would object to. Again see my reply to Dan
for an argument as to why I feel this is different.

>
> If the AI code is decent, why reject it? If it's slop, then yeah, you have
> a lot of reasons to reject it.

Because it takes time to review it to determine that it's decent, even if
it might be obvious it's entirely AI-generated in the first place?

>
> > >
> > > treating LLMs like any other tool. That could change any day because
> > > some new tool comes along that's better at spamming patches at us. I
> > > think that's the point you're trying to make is that the dam might break
> > > any day and we should be prepared for it.
> > >
> > > Is that what it boils down to?
> >
> > I feel I've answered that above.
> > >
> > > >> +As with all contributions, individual maintainers have discretion to
> > > >> +choose how they handle the contribution. For example, they might:
> > > >> +
> > > >> + - Treat it just like any other contribution.
> > > >> + - Reject it outright.
> > >
> > > > This is really not correct, it's simply not acceptable in the community
> > > > to reject series outright without justification. Yes perhaps people do
> > > > that, but it's really not something that's accepted.
>
> >
> > > I'm not quite sure how this gives maintainers a new ability to reject
> > > things without justification, or encourages them to reject
> > > tool-generated code in a new way.
> > >
> > > Let's say something generated by "checkpatch.pl --fix" that's trying to
> > > patch arch/x86/foo.c lands in my inbox. I personally think it's OK for
> > > me as a maintainer to say: "No thanks, checkpatch has burned me too many
> > > times in foo.c and I don't trust its output there." To me, that's
> > > rejecting it outright.
> > >
> > > Could you explain a bit how this might encourage bad maintainer behavior?
> >
> > I really don't understand your question or why you're formulating this to
> > be about bad maintainer behaviour?
> >
> > It's generally frowned upon in the kernel to outright reject series without
> > technical justification. I really don't see how you can say that is not the
> > case?
>
> If it's AI slop, then I'm sure you could easily find lots of technical
> justifications for rejecting it. Why do we need to explicitly state it
> here?

Aha! Now you've honed in on _exactly_ the problem. To find the technical
justification, you'd need to read through the series, and with the
asymmetry of maintainer/submitter resource this is an issue.

> >
> > LLM generated series won't be a trivial checkpatch.pl --fix change, you've
> > given a trivially identifiable case that you could absolutely justify.
>
> Is it trivial just because it's checkpatch? I gave another example above
> too. But if AI slop is coming in, I'm sure there's lots of reasons to
> reject it.

I mean come on Steve :) yes it is trivial.

Apologies, but I didn't pick up on the other example above?

>
> Are you saying that if there's good AI code coming in (I wouldn't call it
> slop then) that you want to outright reject it too?

No, I'm saying that maintainers should be able to reserve the right to do
so in order not to be overwhelmed.

> >
> > Again, I'm not really asking for much here.
> > As a maintainer I am (very) concerned about the asymmetry between what
> > can be submitted vs. review resource.
> >
> > And to me being able to reference this document and to say 'sorry this
> > appears to be AI slop so we can't accept it' would be really useful.
>
> Then why not come up with a list of reasons AI slop is bad and make a
> boilerplate and send that every time. Basically states that if you submit
> AI code, the burden is on the submitter to prove that they understand the
> code. Or would you like that explicitly stated in this document? Something
> like:
>
>  - If you submit any type of tool generated code, then it is the
>    responsibility of the submitter to prove to the maintainer that they
>    understand the code that they are submitting. Otherwise the maintainer
>    may simply reject the changes outright.
>
> ?

I mean of course I wholeheartedly agree with that. But to some degree we
already have that:

+ - Ask the submitter to explain in more detail about the contribution
+   so that the maintainer can feel comfortable that the submitter fully
+   understands how the code works.

I think it'd be most useful to actually show what change I'd like in a
diff, which I'll send in a little while. It's more about emphasis than
really radically changing anything in the document.

>
> >
> > Referencing a document that tries very hard to say 'NOP' isn't quite so
> > useful.
>
> I don't think this document's goal was to be a pointer to show people why
> you are rejecting AI submissions. This is just a guideline to how tool
> generated code should be submitted.

It might not be the goal, but it establishes kernel policy even if it seems
the desire is to say 'NOP', and would be useful for maintainers on the
ground. If nobody references kernel policy in how they do things then what
is the use of kernel policy?

>
> It's about how things work today. It's not about how things will work going
> forward with AI submissions. That document is for another day.
I feel I've addressed this above, but we're already mentioning things that
pertain to possible AI slop. I don't think the position here can both be
'well we already address this with existing rules' and 'we have no need to
address this at all' at the same time.

And shouldn't we perhaps take a defensive position to make it abundantly
clear that we won't tolerate this _ahead of time_?

I obviously take Linus's point that many slop producers couldn't care less
about norms or documentation. But between the impact on press reporting,
the _general sense_ of what the kernel will tolerate, and those who _will_
think they're abiding by the norms, I believe it will actually have a
practical impact.

>
> -- Steve

Cheers, Lorenzo

