If the concern is bots/scrapers/crawlers (which are increasingly AI-powered
these days), then have you considered "tar pit" software, such as Nepenthes
or Iocaine?

https://arstechnica.com/tech-policy/2025/01/ai-haters-build-tarpits-to-trap-and-trick-ai-scrapers-that-ignore-robots-txt/
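
For a rough sense of how these work: a tar pit serves an endless maze of
slowly generated pages so that a crawler which ignores robots.txt wastes
its time (and crawl budget) there instead of hammering the real site. A
minimal sketch of the idea in Python follows; this is not the actual
Nepenthes/Iocaine code, and the words, delays, and port number are just
illustrative:

    # Sketch of the "tar pit" idea: drip out gibberish pages full of
    # links to more tar pit pages, so misbehaving crawlers get stuck.
    import random
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    WORDS = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta"]

    class TarpitHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>\n")
            # Slow drip: pause between chunks to tie up the crawler's
            # connection. Real users are never routed here.
            for _ in range(20):
                word = random.choice(WORDS)
                link = f'<p><a href="/{word}-{random.randint(0, 10**6)}">{word}</a></p>\n'
                self.wfile.write(link.encode())
                self.wfile.flush()
                time.sleep(2)
            self.wfile.write(b"</body></html>\n")

        def log_message(self, *args):
            pass  # keep the demo quiet

    if __name__ == "__main__":
        # Only known-bad traffic should be routed here, e.g. via a path
        # that robots.txt disallows or a reverse-proxy rule.
        HTTPServer(("", 8080), TarpitHandler).serve_forever()

The real tools go further (Markov-generated text, metrics, optional
poisoning of the scraped data), but the trap-and-stall mechanism is the
same.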

Cloudflare also now offers a product that traps misbehaving crawlers,
called AI Labyrinth:

https://blog.cloudflare.com/ai-labyrinth/

-- 
Jason Liu


On Mon, Mar 16, 2026 at 2:30 AM Joshua Root <[email protected]> wrote:

> On 16/3/2026 09:32, Fred Wright wrote:
> >
> > This approach to bot-blocking is pretty lame.  There's nothing to stop
> > the bots from using User-Agent strings from current browsers.  What
> > then, require everyone to use lynx?  IP-based filtering would be more
> > robust and less user-unfriendly.
> Blocking the IP ranges that the scraping comes from would block most of
> the internet, and almost certainly block more legitimate users than we
> do now. UA blocking is not a great solution but it works (by which I
> mean, the site is usable, which it would not be otherwise). The only
> real alternative is to deploy something like Anubis, which would also
> block legitimate robots.txt-respecting scrapers.
>
> - Josh
>
