On Thu, May 07, 2026 at 06:18:27AM +0200, Willy Tarreau wrote:
> Another point is that for many vulns there are two types of adversaries:
> - criminals
> - script kiddies
>
> The former must be assumed to also have discovered the same vuln, possibly
> earlier, and to be actively exploiting it. The latter however, is just
> going to use whatever published exploit to say "look mum, I'm root".
> Public reports containing too many details will speed up usability for
> this group and that's not good for users.
>
> And we *know* that some reports contain working PoC that need very little
> modification. Passing them through [email protected] for triaging feels
> safer than directing them to public lists with no early validation.
>
> So in short, I think that:
> - AI reports should be considered public, but not necessarily well known
>   yet
> - AI reports often contain repros that shouldn't be posted publicly
So, I think a targeted repro that exposes just the initial bug is in most
cases useful and shouldn't be held back. Full-blown exploits, on the other
hand, should definitely be kept off the public list. Most of the time it
still takes skill to get from the former to the latter, although I suppose
with LLMs this gap is shrinking too.

> - AI reports wording can be intimidating to developers not used to
>   receiving these things
>
> -> the security team should remain the first filtering layer for this
>    for new reporters even if it means continuing to see some noise.
> I think that instead it's the 3rd patch about the threat model that
> should help us receive less noise by explaining what is not a
> vulnerability.
>
> I can rework that part a bit to reflect this.

Yes, I think that covers my earlier point well. And yes, AI babble should
be sanitized, both for brevity and to strip the parts explaining how to do
the rest of the exploit :-)

