Hello Squid Developers. I'm a software engineer.
My team uses Squid with an ICAP server. We have noticed that Squid leaks file descriptors and memory when the (reqmod) ICAP server replies with an HTTP "403 Forbidden" to an HTTP CONNECT request.

Here is a step-by-step description of the problematic scenario:

- An HTTP client connects to Squid and sends a CONNECT request (for example, "curl -q -x http://127.0.0.1:3128 https://example.com");
- Squid sends the CONNECT request to the (reqmod) ICAP server;
- the ICAP server sends back a "403 Forbidden" HTTP response;
- Squid sends the "403 Forbidden" HTTP response to the HTTP client (in the example above, curl reports "Received HTTP code 403 from proxy after CONNECT");
- Squid writes a message like "kick abandoning <....>" to cache.log;
- Squid does not close the file descriptor used for the HTTP client connection.

Those file descriptors and their associated memory do pile up. For instance, after 200,000 forbidden requests, Squid (built from git master) has ~200,000 open descriptors and consumes ~4 GB of RAM. On a production deployment with 1000+ users it takes less than a day for Squid to exhaust all available RAM.

It seems that the same problem was previously reported here: http://www.squid-cache.org/mail-archive/squid-users/201301/0096.html

The "kick abandoning <....>" message comes from ConnStateData::kick() in client_side.cc. Closing clientConnection right after the "debugs(<....>abandoning<....>)" call fixes the leak.

Is it OK to always close() clientConnection when this "abandoning" case happens? Are there any known scenarios where that close() would be inappropriate? If close() at "abandoning" time is wrong, could you please advise on a better/proper fix?
_______________________________________________ squid-dev mailing list squid-dev@lists.squid-cache.org http://lists.squid-cache.org/listinfo/squid-dev