On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> [ Coming back to this after a week of trying to clean up the disaster
> that is my inbox after the merge window ]
>
> On Sun, 3 May 2026 at 04:35, Willy Tarreau <[email protected]> wrote:
> >
> > The use of automated tools to find bugs in random locations of the kernel
> > induces a raise of security reports even if most of them should just be
> > reported as regular bugs. This patch is an attempt at drawing a line
> > between what qualifies as a security bug and what does not, hoping to
> > improve the situation and ease decision on the reporter's side.
>
> I actually think we may want to go further than this.
>
> I think we should simply make it a rule that "a 'security' bug that is
> found by AI is public".
>
> Now, I may be influenced by that "my inbox is a disaster during the
> merge window" thing, but I do think this is pretty fundamental: if
> somebody finds a bug with more or less standard AI tools (ie we're not
> talking magical special hardware and nation-state level efforts), then
> that bug pretty much by definition IS NOT SECRET.
After the past 2 weeks, and the past 2 months, I am going to violently
agree with you here.

We've seen so many "duplicate" bug reports it's not funny. All of the
modern LLMs are feeding their output back into the model for future
runs, which makes the data totally public. Even if not, the output is
being monitored by external companies at the very least.

> So why should we consider it special and have it be on the security
> list?

I don't think we should anymore. Yes, having a full reproducer in
public is not good, but the general "this is a bug" reports we should
start redirecting to public lists more. That's the only way we are
going to handle this influx, as our "normal" bug workflow works very
well, especially when a report comes with a fix, which these LLM tools
can provide very easily.

So if this could be reworded somehow to reflect that, maybe? But the
"what is and is not a security bug" definition is a good thing overall.
We need a solid definition of our threat model, if for no other reason
than to keep me from having to write over and over, "Once a driver is
bound to the kernel, we trust the hardware"...

thanks,

greg k-h

