Hey Alexey,


What version of Squid are you using?

Can you provide a setup example for reproduction?

I can write the relevant ICAP service; however, I am missing a pcap file to 
understand the ICAP sessions.

If you can supply a pcap of a couple (2-3 or more) ICAP connections, I can try 
to see what happens at the connection level.


From my experience there is a big difference between holding the ICAP session 
open and closing it after each request.

The reason for this is that, like HTTP/1.0, ICAP is "blocking" (I don't 
remember the exact word; Alex might remember).

Therefore, if the proxy handles 800 requests per second, it's better for the 
setup to open a new connection per request to match the load.

It will cost memory and CPU in the short term, but in the long term the 
clients' requests will block less, and it will probably consume less than the 
ICAP connections' memory leak.
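If you want to experiment with this, the relevant squid.conf knob is sketched 
below (the service name and ICAP URI are placeholders; adjust them to your 
setup):

```
# Hypothetical reqmod service; replace the URI with your ICAP server.
icap_enable on
icap_service svc_req reqmod_precache icap://127.0.0.1:1344/reqmod
adaptation_access svc_req allow all

# Open a new ICAP connection per transaction instead of reusing
# persistent ICAP connections.
icap_persistent_connections off
```

Comparing pcaps with this directive on and off should show whether the leak is 
tied to connection reuse.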


Waiting,

Eliezer


----

Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1...@gmail.com


From: squid-dev <squid-dev-boun...@lists.squid-cache.org> On Behalf Of Alexey 
Sergin
Sent: Thursday, December 10, 2020 10:33 PM
To: squid-dev@lists.squid-cache.org
Subject: [squid-dev] File descriptor leak at ICAP reqmod rewrites of CONNECT 
requests


Hello Squid Developers.

I'm a software engineer.

My team uses Squid with an ICAP server. We have noticed that Squid leaks file 
descriptors and memory when the (reqmod) ICAP server replies with an http "403 
Forbidden" to an http CONNECT request.

Here is a step-by-step description of the problematic scenario:
- An http client connects to Squid and sends a CONNECT request (for example, 
"curl -q -x http://127.0.0.1:3128 https://example.com");
- Squid sends the CONNECT request to the (reqmod) ICAP server;
- The ICAP server sends back a "403 Forbidden" http response;
- Squid sends the "403 Forbidden" http response to the http client (in the 
example above, curl reports "Received HTTP code 403 from proxy after CONNECT");
- Squid writes to cache.log a message like "kick abandoning <....>";
- Squid does not close the file descriptor used for the http client connection.

Those file descriptors and associated memory do pile up. For instance, after 
200,000 forbidden requests, Squid (built from git master) has ~200,000 open 
descriptors and consumes ~4 GB of RAM. On a production deployment with 1000+ 
users, it takes less than a day for Squid to eat up all available RAM.

It seems that the same problem was previously reported here: 
http://www.squid-cache.org/mail-archive/squid-users/201301/0096.html

The message "kick abandoning <....>" comes from ConnStateData::kick() in 
client_side.cc. Closing clientConnection right after 
"debugs(<....>abandoning<....>)" fixes the leak.
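For concreteness, the change I am describing is sketched below (pseudocode, 
untested; the actual debugs() text is elided as above):

```
// ConnStateData::kick() in client_side.cc, sketch of the proposed fix:
debugs(...);                 // the existing "kick abandoning <....>" message
clientConnection->close();   // proposed addition: release the client fd
return;
```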

Is it OK to always close() clientConnection when this "abandoning" situation 
happens? Are there any known scenarios where this close() would be 
inappropriate?

Could you please give me some advice on a better/proper fix, if close() at 
"abandoning" time is wrong?

_______________________________________________
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev
