Performance is still poor, and crawler traffic patterns tend to
defeat caches at every level, so I've regretfully had to
experiment with robots.txt to mitigate performance problems.
The /s/ solver endpoint remains expensive, but commit
8d6a50ff2a44 (www: use a dedicated limiter for blob solver, 2024-03-11)
seems to have helped significantly.
All the multi-message endpoints (/[Tt]*) are of course expensive
and always have been. git blob access over a SATA 2 SSD isn't
especially fast, and HTML rendering is quite expensive in Perl.
Keeping multiple zlib contexts around for HTTP gzip also hurts
memory usage, so we want to minimize how long clients hold onto
longer-lived allocations.
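As a rough, self-contained illustration (not public-inbox's actual
code), here's what a per-client gzip deflate context looks like with
Compress::Raw::Zlib; each of these objects pins zlib's window and
buffers in memory until the client goes away, so slow or lingering
crawler connections add up:

  use strict;
  use warnings;
  use Compress::Raw::Zlib; # exports WANT_GZIP, Z_OK, Z_SYNC_FLUSH, ...

  # one deflate context per connected client; zlib keeps its LZ77
  # window and internal buffers allocated until this object is freed
  my ($zout, $err) = Compress::Raw::Zlib::Deflate->new(
      -WindowBits => WANT_GZIP, # gzip header/trailer (16 + MAX_WBITS)
      -AppendOutput => 1,
  );
  die "deflateInit failed: $err" unless $zout;

  my $buf = '';
  $zout->deflate("<html>...rendered message...</html>\n", $buf) == Z_OK
      or die 'deflate failed';
  $zout->flush($buf, Z_SYNC_FLUSH); # $buf now holds gzipped bytes for the socket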
Anyway, this robots.txt is what I've been experimenting with,
and (after a few days for bots to pick it up) it seems to have
cut load on my system significantly, so I can actually work on
the performance problems[1] that show up.
==> robots.txt <==
User-Agent: *
Disallow: /*/s/
Disallow: /*/T/
Disallow: /*/t/
Disallow: /*/t.atom
Disallow: /*/t.mbox.gz
Allow: /
I also disallow git-archive snapshots for cgit || WwwCoderepo:
Disallow: /*/snapshot/*
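To sanity-check which paths those wildcard rules cover, here's a
tiny throwaway script (hypothetical, not part of public-inbox) that
mimics the '*'-as-wildcard matching major crawlers use; the sample
paths are made up for illustration:

  use strict;
  use warnings;

  # translate each Disallow pattern into an anchored regex,
  # treating '*' as "any sequence of characters"
  my @disallow = qw(/*/s/ /*/T/ /*/t/ /*/t.atom /*/t.mbox.gz /*/snapshot/*);
  my @re = map {
      my $pat = quotemeta;
      $pat =~ s/\\\*/.*/g;
      qr/\A$pat/;
  } @disallow;

  for my $path (qw(/meta/8d6a50ff2a44/s/ /meta/some-msgid/T/ /meta/new.html)) {
      my $blocked = grep { $path =~ $_ } @re;
      printf "%-30s %s\n", $path, $blocked ? 'disallowed' : 'allowed';
  }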
[1] I'm testing a glibc patch which hopefully reduces fragmentation.
I've gotten rid of many of the Disallow: entries temporarily
since