On Wed, May 06, 2026 at 06:02:15PM +0200, Willy Tarreau wrote:
> Hi Linus,
> 
> On Wed, May 06, 2026 at 08:46:07AM -0700, Linus Torvalds wrote:
> > [ Coming back to this after a week of trying to clean up the disaster
> > that is my inbox after the merge window ]
> > 
> > On Sun, 3 May 2026 at 04:35, Willy Tarreau <[email protected]> wrote:
> > >
> > > The use of automated tools to find bugs in random locations of the kernel
> > > causes a rise in security reports, even though most of them should just be
> > > reported as regular bugs. This patch is an attempt at drawing a line
> > > between what qualifies as a security bug and what does not, hoping to
> > > improve the situation and ease the decision on the reporter's side.
> > 
> > I actually think we may want to go further than this.
> > 
> > I think we should simply make it a rule that "a 'security' bug that is
> > found by AI is public".
> 
> This would definitely help us a lot on [email protected], but...
> 
> > Now, I may be influenced by that "my inbox is a disaster during the
> > merge window" thing, but I do think this is pretty fundamental: if
> > somebody finds a bug with more or less standard AI tools (ie we're not
> > talking magical special hardware and nation-state level efforts), then
> > that bug pretty much by definition IS NOT SECRET.
> 
> I think it's only 99.9% true. I mean, I've used such tools myself to
> find bugs that were not found otherwise and I know that:
>   - interactions with the tools count a lot
>   - luck counts even more

Perhaps also note that publicly including a reproducer for a crash is
fine, while including a full-blown exploit is not.

So perhaps that can serve as a guide: if they went and put in the effort
of making a full exploit (with or without LLM aid), keep it on the
security list; otherwise, do the public thing.

And yes, I realize this too might be a very thin/short rope.
