On Thu, Jan 08, 2026 at 09:01:09AM -0500, Michael S. Tsirkin wrote:
> On Thu, Jan 08, 2026 at 08:17:09AM -0500, James Bottomley wrote:
> > > +you are expected to understand and to be able to defend everything you
> > > +submit. If you are unable to do so, maintainers may choose to reject your
> > > +series outright.
> >
> > And I thing the addition would apply to any tool used to generate a
> > patch set whether AI or not.
>
> Exactly. I saw my share of "fix checkpatch warning" slop. This is no
> different.
I'm a maintainer too and have seen this kind of thing, as well as many
variations on the theme of 'bad series'. An analogous exercise might be
to ask anybody working in education how these tools differ from all the
others students have used previously.

Checkpatch fixes and the like are relatively easy to identify and can
only ever be trivial changes, which can reasonably be dismissed. LLMs,
by contrast, can generate entirely novel series that can't be dismissed
so easily, though the sudden appearance of a new contributor with
completely new code can at least be identified.

At any rate, even if you feel this is exactly the same, then surely you
cannot object to the suggested changes in [0], which in your view would
amount to the same kind of dismissal you might give a checkpatch --fix
series?

The suggested change gives the maintainer latitude to dismiss a series
out of hand should the pattern be obvious, or to deploy the nuclear
weapon against slop: asking somebody to explain their series (an
LLM-generated explanation should be fairly easy to spot in that case
too...).

My motivation here is the asymmetry between maintainer resources and
patch influx, which is already at critical levels in at least some
areas of mm. An uptick would be a big problem right now.

Thanks, Lorenzo

[0]: https://lore.kernel.org/ksummit/[email protected]/

>
> --
> MST
>

Cheers, Lorenzo

