On 3/20/26 02:28, Clemens Lang wrote:
Hello,

Trac was again unusable today because of massive amounts of dynamic requests 
from AI crawlers to the /query and /timeline endpoints (both of which are 
disallowed by robots.txt, btw).
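For reference, a robots.txt stanza covering those two endpoints would look something like this (a sketch; the actual file on the server may differ):

```
User-agent: *
Disallow: /query
Disallow: /timeline
```

Well-behaved crawlers honor these rules; the crawlers described here evidently do not.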

Most of those masquerade as Chrome browsers between versions 133 and 138, so I’ve 
now adjusted our filter rules to deny requests claiming Chrome < 140. The current 
version is Chrome 144, and Chrome typically auto-updates.
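The idea behind such a filter can be sketched in a few lines of Python (a minimal illustration, not the actual filter rules in use; the regex and the cutoff of 140 are assumptions based on the description above):

```python
import re

# Match the major version claimed in a Chrome User-Agent string,
# e.g. "Chrome/135.0.0.0" -> 135.
CHROME_RE = re.compile(r"Chrome/(\d+)\.")
MIN_CHROME_MAJOR = 140  # real Chrome auto-updates, so old majors are suspect

def should_block(user_agent: str) -> bool:
    """Return True if the UA claims an implausibly old Chrome version."""
    m = CHROME_RE.search(user_agent)
    if m is None:
        return False  # not claiming Chrome; leave it to other rules
    return int(m.group(1)) < MIN_CHROME_MAJOR

print(should_block("Mozilla/5.0 (X11; Linux x86_64) Chrome/135.0.0.0 Safari/537.36"))
print(should_block("Mozilla/5.0 (X11; Linux x86_64) Chrome/144.0.0.0 Safari/537.36"))
```

In practice this kind of check would live in the web server or a reverse proxy rather than in application code; note that the cutoff needs periodic bumping as Chrome releases advance.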

[...]


Thanks for the information. I run another Trac instance (for the Armed Bear Common Lisp implementation) that suffers from the same overzealous clanker attention. For the past month I have hard-wired "429 Too Many Requests" responses, which unfortunately hasn't measurably decreased the traffic. I guess the Trac URI structure was a rich enough source of information that it will be forever embedded in scrapes of the pre-AI web. I didn't run statistics on the User-Agent population to try to distinguish "legitimate" traffic, so thanks for the data point.
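Since Trac runs as a WSGI application, a hard-wired 429 can be sketched as a tiny WSGI wrapper (an illustration under assumptions, not my actual configuration; the paths are the endpoints mentioned above):

```python
# Minimal WSGI sketch: answer the heavily crawled endpoints with
# 429 Too Many Requests and pass nothing through. In a real setup this
# would wrap the Trac application or live in the front-end server.
BLOCKED_PREFIXES = ("/query", "/timeline")

def app(environ, start_response):
    if environ.get("PATH_INFO", "").startswith(BLOCKED_PREFIXES):
        start_response("429 Too Many Requests", [("Retry-After", "3600")])
        return [b"Too Many Requests\n"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"OK\n"]
```

As noted, scrapers that ignore robots.txt tend to ignore Retry-After as well, which matches the observation that the 429s haven't measurably reduced traffic.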


--
"A screaming comes across the sky. It has happened before but there is
nothing to compare to it now."