Re: [squid-users] file descriptors leak

2015-12-15 Thread André Janna

On 28/11/2015 22:46, André Janna wrote:
I took another network trace, this time at both the Squid and Windows
client ends.


cache.log:
2015/11/27 11:30:55.610 kid1| SECURITY ALERT: Host header forgery 
detected on local=177.43.198.106:443 remote=192.168.64.4:61802 FD 5465 
flags=33 (local IP does not match any domain IP)


--
network trace at Squid side

client connects
11:30:55.604870 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [S], 
seq 1701554341, win 8192, options [mss 1460,nop,wscale 
8,nop,nop,sackOK], length 0
11:30:55.604992 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags 
[S.], seq 3125704417, ack 1701554342, win 29200, options [mss 
1460,nop,nop,sackOK,nop,wscale 7], length 0
11:30:55.605766 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
ack 1, win 256, length 0


client sends SSL hello
11:30:55.606242 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags 
[P.], seq 1:198, ack 1, win 256, length 197
11:30:55.606306 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 198, win 237, length 0


client OS sends TCP keep-alive packets
11:31:05.607191 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 197:198, ack 1, win 256, length 1
11:31:05.607231 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 198, win 237, options [nop,nop,sack 1 {197:198}], length 0
11:31:15.608966 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 197:198, ack 1, win 256, length 1
11:31:15.609005 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 198, win 237, options [nop,nop,sack 1 {197:198}], length 0
11:31:25.614527 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 197:198, ack 1, win 256, length 1
11:31:25.614589 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 198, win 237, options [nop,nop,sack 1 {197:198}], length 0


client sends FIN
11:31:29.384280 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags 
[F.], seq 198, ack 1, win 256, length 0
11:31:29.421787 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 199, win 237, length 0


client OS sends TCP keep-alive packets
11:31:39.417426 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 198:199, ack 1, win 256, length 1
11:31:39.417489 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 199, win 237, options [nop,nop,sack 1 {198:199}], length 0
11:31:49.425366 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 198:199, ack 1, win 256, length 1
11:31:49.425443 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 199, win 237, options [nop,nop,sack 1 {198:199}], length 0
11:31:59.426153 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 198:199, ack 1, win 256, length 1
11:31:59.426233 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 199, win 237, options [nop,nop,sack 1 {198:199}], length 0
 ... it continues this way until I powered off the Windows client
after three hours ...



--
network trace at Windows client side

client connects
11:30:34.894242 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [S], 
seq 1701554341, win 8192, options [mss 1460,nop,wscale 
8,nop,nop,sackOK], length 0
11:30:34.898234 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags 
[S.], seq 3125704417, ack 1701554342, win 29200, options [mss 
1460,nop,nop,sackOK,nop,wscale 7], length 0
11:30:34.898298 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
ack 1, win 256, length 0


client sends SSL hello
11:30:34.898712 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags 
[P.], seq 1:198, ack 1, win 256, length 197
11:30:34.899479 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 198, win 237, length 0


client OS sends TCP keep-alive packets
11:30:44.899271 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 197:198, ack 1, win 256, length 1
11:30:44.899986 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 198, win 237, options [nop,nop,sack 1 {197:198}], length 0
11:30:54.900495 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 197:198, ack 1, win 256, length 1
11:30:54.901323 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 198, win 237, options [nop,nop,sack 1 {197:198}], length 0
11:31:04.905731 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 197:198, ack 1, win 256, length 1
11:31:04.906560 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 198, win 237, options [nop,nop,sack 1 {197:198}], length 0


client sends FIN
11:31:08.675299 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags 
[F.], seq 198, ack 1, win 256, length 0
11:31:08.713746 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 199, win 237, length 0


client OS sends TCP keep-alive packets
11:31:18.708086 IP 192.168.64.4.61802 > 177.43.198.106.443: Flags [.], 
seq 198:199, ack 1, win 256, length 1
11:31:18.708917 IP 177.43.198.106.443 > 192.168.64.4.61802: Flags [.], 
ack 199, win 237, options [nop,nop,sack 1 {198:199}], length 0
11:31:28.715600 IP 

Re: [squid-users] file descriptors leak

2015-11-28 Thread André Janna

Quoting Amos Jeffries:


So, the first place to look is not Squid, I think. But why did at least
6 of those ACK packets not make it back to the client? That needs
resolving first, to ensure that the TCP level is operating correctly.

Only then, if the problem remains, look at Squid: the use of port 443
suggests the crypto process is possibly waiting for something and
not closing the port on a 0-byte read(2) operation.



I took another network trace, this time at both the Squid and Windows client ends.


Re: [squid-users] file descriptors leak

2015-11-26 Thread Amos Jeffries
On 27/11/2015 7:36 a.m., André Janna wrote:
> 
> On 24/11/2015 00:54, Amos Jeffries wrote:
>> FYI: unless you have a specific need for 3.5 you should be fine with
>> the 3.4 squid3 package that is available for Jessie from Debian
>> backports. The alternative is going the other way and upgrading right
>> to the latest 3.5 snapshot (and/or 4.0 snapshot) to see if it is one
>> of the CONNECT or TLS issues we have fixed recently. 
> I'm using version 3.5 because 3.4 doesn't have ssl::server_name acl.
> Debian package is not built with openssl because of licensing issues so
> I rebuilt Debian testing 3.5 source package on Debian Jessie.
> This Squid installation is in production now and cannot be easily
> migrated. But I'll perform another installation for testing in the near
> future.
> 
>> Neither. So it is time to move away from lsof and start using packet
>> capture to get a full-body packet trace to find out what exact packets
>> are happening on at least one affected TCP connection.
>>
>> If possible identifying one of these connections from its SYN onwards
>> would be great, but if not then a 20min period of activity on an
>> existing one might still show more hints.
>>
> I did a test using a Windows laptop client with IP address 192.168.64.4,
> connected via wifi.
> I browsed a few https sites until triggering Squid "local IP does not
> match any domain IP" error.
> This error appeared when I was trying to open the Yahoo home page. The
> browser redirected to https://br.yahoo.com/?p=us but the page remained blank.
> Please note that this error appears randomly: opening the same site in
> another browser tab succeeded.
> 
> cache.log:
> 2015/11/26 13:54:45.471 kid1| SECURITY ALERT: Host header forgery
> detected on local=206.190.56.191:443 remote=192.168.64.4:58887 FD 17244
> flags=33 (local IP does not match any domain IP)
> 
> After a couple of minutes this connection disappeared from the Windows
> netstat output. Afterward I powered off the Windows laptop.
> 
> Tcpdump on Squid box:
> 13:54:45.410907 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [S],
> seq 1831867, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK],
> length 0
> 13:54:45.411000 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [S.],
> seq 3695298276, ack 1831868, win 29200, options [mss
> 1460,nop,nop,sackOK,nop,wscale 7], length 0


client 192.168.64.4:58887 connects.

> 13:54:45.411630 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.],
> ack 1, win 256, length 0
> 13:54:45.412490 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [P.],
> seq 1:185, ack 1, win 256, length 184
> 13:54:45.412573 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.],
> ack 185, win 237, length 0

client sends 184 bytes of data.

> 13:54:55.439709 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.],
> seq 184:185, ack 1, win 256, length 1
> 13:54:55.439761 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.],
> ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
> 13:55:05.439965 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.],
> seq 184:185, ack 1, win 256, length 1
> 13:55:05.440022 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.],
> ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
> 13:55:15.445667 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.],
> seq 184:185, ack 1, win 256, length 1
> 13:55:15.445737 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.],
> ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
> 13:55:25.447281 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.],
> seq 184:185, ack 1, win 256, length 1
> 13:55:25.447351 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.],
> ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
> 13:55:35.494936 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.],
> seq 184:185, ack 1, win 256, length 1
> 13:55:35.495005 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.],
> ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
> 13:55:45.491694 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.],
> seq 184:185, ack 1, win 256, length 1
> 13:55:45.491761 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.],
> ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
> 13:55:55.492158 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.],
> seq 184:185, ack 1, win 256, length 1
> 13:55:55.492208 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.],
> ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0

client sends 1 byte of data, in 7 separate packets.

The recipient sends an ACK each and every time, but the client just
keeps repeating itself.

> 14:01:58.242748 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [F.],
> seq 185, ack 1, win 256, length 0
> 14:01:58.279916 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.],
> ack 186, win 237, length 0

... until the client finally sends a FIN.

> 
> Netstat output on Squid box:
> # date; netstat -tno | grep 192.168.64.4
> Thu Nov 26 

Re: [squid-users] file descriptors leak

2015-11-26 Thread Yuri Voinov



On 27.11.15 0:36, André Janna wrote:
>
> On 24/11/2015 00:54, Amos Jeffries wrote:
>> FYI: unless you have a specific need for 3.5 you should be fine with
the 3.4 squid3 package that is available for Jessie from Debian
backports. The alternative is going the other way and upgrading right to
the latest 3.5 snapshot (and/or 4.0 snapshot) to see if it is one of the
CONNECT or TLS issues we have fixed recently.
> I'm using version 3.5 because 3.4 doesn't have ssl::server_name acl.
> Debian package is not built with openssl because of licensing issues
so I rebuilt Debian testing 3.5 source package on Debian Jessie.
> This Squid installation is in production now and cannot be easily
migrated. But I'll perform another installation for testing in the near
future.
>
>> Neither. So it is time to move away from lsof and start using packet
>> capture to get a full-body packet trace to find out what exact packets
>> are happening on at least one affected TCP connection.
>>
>> If possible identifying one of these connections from its SYN onwards
>> would be great, but if not then a 20min period of activity on an
>> existing one might still show more hints.
>>
> I did a test using a Windows laptop client with IP address
192.168.64.4, connected via wifi.
> I browsed a few https sites until triggering Squid "local IP does not
match any domain IP" error.
> This error appeared when I was trying to open the Yahoo home page. The
browser redirected to https://br.yahoo.com/?p=us but the page remained blank.
> Please note that this error appears randomly: opening the same site in
another browser tab succeeded.
>
> cache.log:
> 2015/11/26 13:54:45.471 kid1| SECURITY ALERT: Host header forgery
detected on local=206.190.56.191:443 remote=192.168.64.4:58887 FD 17244
flags=33 (local IP does not match any domain IP)
This is so commonplace that the Squid wiki has had an article about it
for a long time:

http://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery
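The idea behind that check can be sketched roughly as follows (the function name and shape here are mine, not Squid's actual code): the original destination IP of the intercepted connection is compared against the addresses the proxy's own resolver returns for the client's Host/SNI name. With CDNs rotating DNS answers quickly, client and proxy can legitimately get different answers, which is why the alert fires at random.

```python
import socket

def host_matches_dest_ip(host: str, dest_ip: str) -> bool:
    """Does dest_ip appear among the addresses that *our* resolver
    returns for the client's claimed Host/SNI name?"""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolvable name: treat as a mismatch
    return dest_ip in {info[4][0] for info in infos}

# e.g. host_matches_dest_ip("br.yahoo.com", "206.190.56.191")
```

If the client's resolver handed it an IP that our resolver no longer returns for the same name, this comes back False and the connection is flagged as forged.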


Re: [squid-users] file descriptors leak

2015-11-26 Thread André Janna


On 24/11/2015 00:54, Amos Jeffries wrote:
FYI: unless you have a specific need for 3.5 you should be fine with 
the 3.4 squid3 package that is available for Jessie from Debian 
backports. The alternative is going the other way and upgrading right 
to the latest 3.5 snapshot (and/or 4.0 snapshot) to see if it is one 
of the CONNECT or TLS issues we have fixed recently. 

I'm using version 3.5 because 3.4 doesn't have ssl::server_name acl.
Debian package is not built with openssl because of licensing issues so 
I rebuilt Debian testing 3.5 source package on Debian Jessie.
This Squid installation is in production now and cannot be easily 
migrated. But I'll perform another installation for testing in the near 
future.



Neither. So it is time to move away from lsof and start using packet
capture to get a full-body packet trace to find out what exact packets
are happening on at least one affected TCP connection.

If possible identifying one of these connections from its SYN onwards
would be great, but if not then a 20min period of activity on an
existing one might still show more hints.

I did a test using a Windows laptop client with IP address 192.168.64.4, 
connected via wifi.
I browsed a few https sites until triggering Squid "local IP does not 
match any domain IP" error.
This error appeared when I was trying to open the Yahoo home page. The
browser redirected to https://br.yahoo.com/?p=us but the page remained blank.
Please note that this error appears randomly: opening the same site in 
another browser tab succeeded.


cache.log:
2015/11/26 13:54:45.471 kid1| SECURITY ALERT: Host header forgery 
detected on local=206.190.56.191:443 remote=192.168.64.4:58887 FD 17244 
flags=33 (local IP does not match any domain IP)


After a couple of minutes this connection disappeared from the Windows
netstat output. Afterward I powered off the Windows laptop.


Tcpdump on Squid box:
13:54:45.410907 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [S], 
seq 1831867, win 8192, options [mss 1460,nop,wscale 8,nop,nop,sackOK], 
length 0
13:54:45.411000 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [S.], 
seq 3695298276, ack 1831868, win 29200, options [mss 
1460,nop,nop,sackOK,nop,wscale 7], length 0
13:54:45.411630 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.], 
ack 1, win 256, length 0
13:54:45.412490 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [P.], 
seq 1:185, ack 1, win 256, length 184
13:54:45.412573 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.], 
ack 185, win 237, length 0
13:54:55.439709 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.], 
seq 184:185, ack 1, win 256, length 1
13:54:55.439761 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.], 
ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
13:55:05.439965 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.], 
seq 184:185, ack 1, win 256, length 1
13:55:05.440022 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.], 
ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
13:55:15.445667 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.], 
seq 184:185, ack 1, win 256, length 1
13:55:15.445737 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.], 
ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
13:55:25.447281 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.], 
seq 184:185, ack 1, win 256, length 1
13:55:25.447351 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.], 
ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
13:55:35.494936 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.], 
seq 184:185, ack 1, win 256, length 1
13:55:35.495005 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.], 
ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
13:55:45.491694 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.], 
seq 184:185, ack 1, win 256, length 1
13:55:45.491761 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.], 
ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
13:55:55.492158 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [.], 
seq 184:185, ack 1, win 256, length 1
13:55:55.492208 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.], 
ack 185, win 237, options [nop,nop,sack 1 {184:185}], length 0
14:01:58.242748 IP 192.168.64.4.58887 > 206.190.56.191.443: Flags [F.], 
seq 185, ack 1, win 256, length 0
14:01:58.279916 IP 206.190.56.191.443 > 192.168.64.4.58887: Flags [.], 
ack 186, win 237, length 0


Netstat output on Squid box:
# date; netstat -tno | grep 192.168.64.4
Thu Nov 26 13:59:40 BRST 2015
tcp6   1  0 172.16.10.22:3126   192.168.64.4:58887 
CLOSE_WAIT  off (0.00/0/0)


And after 2 hours and a half netstat output is still the same:
# date; netstat -tno | grep 192.168.64.4
Thu Nov 26 16:32:37 BRST 2015
tcp6   1  0 172.16.10.22:3126   192.168.64.4:58887 
CLOSE_WAIT  off (0.00/0/0)


Squid is still using the file descriptor.
# date; lsof -n | grep 192.168.64.4
Thu Nov 26 16:33:10 BRST 2015
squid  

Re: [squid-users] file descriptors leak

2015-11-25 Thread Eliezer Croitoru
Just as a side note: you should know that tcpdump on a busy server needs
a bigger buffer size to prevent it dropping captured packets.
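For example (the interface, filter and buffer size here are illustrative; tcpdump's -B takes KiB and -s 0 captures full packets):

```shell
# Capture full packets with a 16 MiB kernel buffer so a busy
# Squid box does not drop frames mid-trace:
tcpdump -i eth2 -B 16384 -s 0 -w squid-fd-leak.pcap \
        'host 192.168.64.4 and port 443'
```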


Eliezer

On 24/11/2015 04:54, Amos Jeffries wrote:

If possible identifying one of these connections from its SYN onwards
would be great, but if not then a 20min period of activity on an
existing one might still show more hints.

Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] file descriptors leak

2015-11-23 Thread Amos Jeffries
On 24/11/2015 7:45 a.m., André Janna wrote:
> 
> On 22/11/2015 16:25, Eliezer Croitoru wrote:
>> Hey Andre,
>>
>> There are a couple of things to this picture.
>> It's not only Squid that is to "blame".
>> It depends on what your OS tcp stack settings are.
>> To verify a couple of things you can use the netstat tool.
>> Run the command "netstat -nto" to see the status of the timers.
>> You can then see how long a new connection will stay in the
>> established state.
>> It might be the squid settings, but if the client is not there it could
>> be because of some tcp tunable kernel settings.
> 
> Hi Eliezer and Amos,
> my kernel is a regular Debian Jessie kernel using the following tcp values.
> tcp_keepalive_time: 7200
> tcp_keepalive_intvl: 25
> tcp_keepalive_probes: 9
> tcp_retries1: 3
> tcp_retries2: 15
> tcp_fin_timeout: 60
> So in my understanding the longest timeout is set to 2 hours and a few
> minutes for keepalive connections.

Okay. It is not always the kernel on your Squid machine. I've seen one
mobile network where the Ethernet<->radio modem was treating the radio
link being alive as TCP keep-alive needing to stay alive. So just having
the phones connected to the network would keep everything active.

IIRC the only fix for that scenario is reducing Squid's client_lifetime
value.


FYI: unless you have a specific need for 3.5 you should be fine with the
3.4 squid3 package that is available for Jessie from Debian backports.
The alternative is going the other way and upgrading right to the latest
3.5 snapshot (and/or 4.0 snapshot) to see if it is one of the CONNECT or
TLS issues we have fixed recently.

> 
> Today I monitored file descriptors 23 and 24 on my box during 5 hours
> and lsof always showed:
> squid  6574   proxy   23u IPv6 5320944 
> 0t0TCP 172.16.10.22:3126->192.168.90.35:34571 (CLOSE_WAIT)
> squid  6574   proxy   24u IPv6 5327276 
> 0t0TCP 172.16.10.22:3126->192.168.89.236:49435 (ESTABLISHED)
> while netstat always showed:
> tcp6   1  0 172.16.10.22:3126 192.168.90.35:34571
> CLOSE_WAIT  6574/(squid-1)   off (0.00/0/0)
> tcp6   0  0 172.16.10.22:3126 192.168.89.236:49435   
> ESTABLISHED 6574/(squid-1)   off (0.00/0/0)
> 
> The "off" flag in netstat output tells that for these sockets keepalive
> and retransmission timers are disabled.

Oooh. That should mean a 30 sec timeout and then RST. Not even a whole
minute of idle time.

> Right now netstat shows 15,568 connections on squid port 3126 and only
> 107 have timer set to a value other than "off".
> 
> I read that connections that are in CLOSE_WAIT state don't have any tcp
> timeout, it's Squid that must close the socket.

Squid closes the socket/FD as soon as it receives the FIN or RST that
began the CLOSE_WAIT state, unless it was Squid's own close that began it.

> 
>  About the connections in ESTABLISHED state, I monitored the connection
> to mobile device 192.168.89.236 using "tcpdump -i eth2 -n host
> 192.168.89.236" during 2 hours and a half.
> Tcpdump didn't record any packet and netstat is still displaying:
> tcp6   1  0 172.16.10.22:3126 192.168.90.35:34571
> CLOSE_WAIT  6574/(squid-1)   off (0.00/0/0)
> tcp6   0  0 172.16.10.22:3126 192.168.89.236:49435   
> ESTABLISHED 6574/(squid-1)   off (0.00/0/0)
> 
> So unfortunately I still don't understand why Squid or the kernel don't
> close these sockets.

Neither. So it is time to move away from lsof and start using packet
capture to get a full-body packet trace to find out what exact packets
are happening on at least one affected TCP connection.

If possible identifying one of these connections from its SYN onwards
would be great, but if not then a 20min period of activity on an
existing one might still show more hints.

Amos


Re: [squid-users] file descriptors leak

2015-11-23 Thread André Janna


On 22/11/2015 16:25, Eliezer Croitoru wrote:

Hey Andre,

There are a couple of things to this picture.
It's not only Squid that is to "blame".
It depends on what your OS tcp stack settings are.
To verify a couple of things you can use the netstat tool.
Run the command "netstat -nto" to see the status of the timers.
You can then see how long a new connection will stay in the
established state.
It might be the squid settings, but if the client is not there it could
be because of some tcp tunable kernel settings.


Hi Eliezer and Amos,
my kernel is a regular Debian Jessie kernel using the following tcp values.
tcp_keepalive_time: 7200
tcp_keepalive_intvl: 25
tcp_keepalive_probes: 9
tcp_retries1: 3
tcp_retries2: 15
tcp_fin_timeout: 60
So in my understanding the longest timeout is set to 2 hours and a few 
minutes for keepalive connections.
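That estimate checks out, but only for sockets where the application actually enabled SO_KEEPALIVE (the netstat output later in this thread shows "off", i.e. it did not). A quick sanity check of the arithmetic:

```python
# Worst-case lifetime of an idle TCP connection *with keepalive
# enabled*, using the sysctl values quoted above:
tcp_keepalive_time = 7200    # seconds idle before the first probe
tcp_keepalive_intvl = 25     # seconds between unanswered probes
tcp_keepalive_probes = 9     # unanswered probes before the kernel resets

worst_case_s = tcp_keepalive_time + tcp_keepalive_probes * tcp_keepalive_intvl
print(worst_case_s, worst_case_s / 60)  # 7425 seconds, ~124 minutes
```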


Today I monitored file descriptors 23 and 24 on my box during 5 hours 
and lsof always showed:
squid  6574   proxy   23u IPv6 5320944  
0t0TCP 172.16.10.22:3126->192.168.90.35:34571 (CLOSE_WAIT)
squid  6574   proxy   24u IPv6 5327276  
0t0TCP 172.16.10.22:3126->192.168.89.236:49435 (ESTABLISHED)

while netstat always showed:
tcp6   1  0 172.16.10.22:3126 192.168.90.35:34571 
CLOSE_WAIT  6574/(squid-1)   off (0.00/0/0)
tcp6   0  0 172.16.10.22:3126 192.168.89.236:49435
ESTABLISHED 6574/(squid-1)   off (0.00/0/0)


The "off" flag in netstat output tells that for these sockets keepalive 
and retransmission timers are disabled.
Right now netstat shows 15,568 connections on squid port 3126 and only 
107 have timer set to a value other than "off".
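A quick way to get that breakdown (a sketch; with plain `netstat -tno` output the timer keyword is the 7th whitespace-separated field, and the sample lines here stand in for live output):

```shell
# Count connections to the intercept port, grouped by timer state:
sample='tcp6 1 0 172.16.10.22:3126 192.168.90.35:34571 CLOSE_WAIT off (0.00/0/0)
tcp6 0 0 172.16.10.22:3126 192.168.89.236:49435 ESTABLISHED off (0.00/0/0)
tcp6 0 0 172.16.10.22:3126 192.168.64.4:61802 ESTABLISHED keepalive (25.31/0/0)'
printf '%s\n' "$sample" | awk '$4 ~ /:3126$/ {print $7}' | sort | uniq -c

# Against a live box:
#   netstat -tno | awk '$4 ~ /:3126$/ {print $7}' | sort | uniq -c
```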


I read that connections in the CLOSE_WAIT state don't have any TCP
timeout; it's Squid that must close the socket.
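That matches the BSD socket semantics: the peer's FIN is delivered to the application as a 0-byte read, and the socket then stays in CLOSE_WAIT until the application itself calls close(); the kernel never times it out. A minimal sketch (an AF_UNIX socketpair is used for portability, so there is no literal CLOSE_WAIT state, but the read-of-zero behaviour is the same):

```python
import socket

a, b = socket.socketpair()
b.close()               # the "peer" closes: its FIN is queued for us
data = a.recv(1024)     # a 0-byte read is how the FIN reaches the app
assert data == b''      # this is the read(2) == 0 case
# For a TCP socket, `a` would now sit in CLOSE_WAIT indefinitely
# until the application itself does:
a.close()
```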


 About the connections in ESTABLISHED state, I monitored the connection 
to mobile device 192.168.89.236 using "tcpdump -i eth2 -n host 
192.168.89.236" during 2 hours and a half.

Tcpdump didn't record any packet and netstat is still displaying:
tcp6   1  0 172.16.10.22:3126 192.168.90.35:34571 
CLOSE_WAIT  6574/(squid-1)   off (0.00/0/0)
tcp6   0  0 172.16.10.22:3126 192.168.89.236:49435
ESTABLISHED 6574/(squid-1)   off (0.00/0/0)


So unfortunately I still don't understand why neither Squid nor the
kernel closes these sockets.



Regards,
  André



Re: [squid-users] file descriptors leak

2015-11-22 Thread Amos Jeffries
On 23/11/2015 7:25 a.m., Eliezer Croitoru wrote:
> Hey Andre,
> 
> There are a couple of things to this picture.
> It's not only Squid that is to blame.
> It depends on what your OS TCP stack settings are.
> To verify a couple of things you can use the netstat tool.
> Run the command "netstat -nto" to see the status of the timers.
> You can then see how long a new connection will stay in the established
> state.
> It might be the Squid settings, but if the client is not there it could
> be because of some tunable TCP kernel settings.

Eliezer is right. The TCP layer itself should be terminating the
connection within a short time (30 sec default) after the client's last
packet. Even if you use the TCP-level keep-alive feature, that works by
ensuring small packets go back and forth between the Squid device and
the user device to keep the router state alive.

Something is making the TCP stack itself think the client device is
still connected *and active* on the network.

Amos



Re: [squid-users] file descriptors leak

2015-11-22 Thread André Janna

 Citando Amos Jeffries :


CONNECT requests with tunnels can be particularly long lived, mobiles
and their applications stay active for weeks on end with few outward
signs of what is happening inside the encrypted tunnel. The only way to
be sure the connection is finished with is when one of the client or
server remote endpoints closes it.

* The port 55815 connection _was_ closed sometime within 15 minutes of
the lsof being run.

* The port 52288 connection is still being used.

Given the timespan between those messages and the lsof, it is also
possible they were closed and reopened in between. If you have a lot of
ports in active use, then re-use of closed ones becomes more likely.
Though I suspect it is just persistence doing what it is designed to do.
You will need a more detailed trace of the entire time period to know.


11 more hours have passed since my last "lsof", and because it's Sunday I'm
sure that no device has been connected to the 192.168.x.x network for at
least 15 hours.
Right now lsof output is the same as 11 hours before:

squid  32490  proxy  12u  IPv6  4065613  0t0  TCP 172.16.10.22:3126->192.168.93.113:55815 (CLOSE_WAIT)
squid  32490  proxy  14u  IPv6  4097822  0t0  TCP 172.16.10.22:3126->192.168.90.207:52288 (ESTABLISHED)
...

Squid is still using file descriptors 12 and 14 (and a lot of others) for
the same connections as yesterday, although the mobile devices they were
connected to have not been online on our network for at least 15 hours.
Is this by design?
Is raising the file descriptor limit the only solution?
The maximum number of file descriptors in my installation is set to 65535.
Is there any drawback to increasing this number by, let's say, a factor of
ten?

Regards,
  André


Re: [squid-users] file descriptors leak

2015-11-22 Thread Eliezer Croitoru

Hey Andre,

There are a couple of things to this picture.
It's not only Squid that is to blame.
It depends on what your OS TCP stack settings are.
To verify a couple of things you can use the netstat tool.
Run the command "netstat -nto" to see the status of the timers.
You can then see how long a new connection will stay in the established
state.
It might be the Squid settings, but if the client is not there it could
be because of some tunable TCP kernel settings.


In any case a Linux machine can handle a very, very high number of open
connections, idle or not. So if you are starting to run out of them, try
raising the limit by 50% or even 100%, and monitor netstat on your
machine for abnormally long-lived connections.


All The Bests,
Eliezer

On 22/11/2015 19:18, André Janna wrote:

Update: Squid released the file descriptors after about 24 hours, I suppose
because client_lifetime expired, which is set to its default value of 1 day.

Regards,
   André




Re: [squid-users] file descriptors leak

2015-11-22 Thread André Janna

 Citando André Janna:


Squid is still using file descriptors 12 and 14 (and a lot of others)
for the same connections as yesterday, although the mobile devices it
was connected to have not been online in our network for at least 15
hours.
 


Update: Squid released the file descriptors after about 24 hours, I suppose
because client_lifetime expired, which is set to its default value of 1 day.
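If waiting a full day for client_lifetime to reclaim stuck descriptors is too long, the default can be lowered in squid.conf. A sketch only: the 8-hour figure is an arbitrary example, and setting it too low will cut genuinely long-lived client connections:

```
# squid.conf: reclaim idle/stuck client connections sooner than the 1-day default
client_lifetime 8 hours
```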

Regards,
  André


Re: [squid-users] file descriptors leak

2015-11-21 Thread Amos Jeffries
On 22/11/2015 4:10 p.m., André Janna wrote:
> I'm running Squid 3.5.10 on Debian Jessie and after some hours of execution
> it runs out of file descriptors.
> Squid is listening on port 3125, 3126 and 3127.
> Port 3126 is used for intercepting, via iptables redirect, https
> connections mostly from mobile devices like smartphones. On this port is
> active ssl-bump but I'm not decrypting https traffic, only "peek" to get
> https server host name.
> Port 3125 is used for intercepting http connections of the same mobile
> devices whose https traffic is intercepted on port 3126.
> Port 3127 is used for clients configured to use a proxy.
> Leaked file descriptors are all related to connection on port 3126 (https
> intercept ssl-bump).
> A sample output of lsof command gives:
> 
> squid  32490  proxy  12u  IPv6  4065613  0t0  TCP 172.16.10.22:3126->192.168.93.113:55815 (CLOSE_WAIT)
> squid  32490  proxy  14u  IPv6  4097822  0t0  TCP 172.16.10.22:3126->192.168.90.207:52288 (ESTABLISHED)
> ...
> 
> where 172.16.10.22 is an IP address of my Squid installation and
> 192.168.x.x are mobile devices.
> It seems that this condition is triggered by the "local IP does not match any
> domain IP" error logged by Squid in cache.log, but I'm not sure whether all
> stuck connections are caused by this kind of error.
> For the 2 connections of the sample above the related cache.log errors are:
> 
> 2015/11/21 12:57:51.229 kid1| SECURITY ALERT: Host header forgery
> detected on local=23.0.163.57:443 remote=192.168.93.113:55815 FD 12
> flags=33 (local IP does not match any domain IP)
> 2015/11/21 13:59:44.230 kid1| SECURITY ALERT: Host header forgery
> detected on local=198.144.127.162:443 remote=192.168.90.207:52288 FD 14
> flags=33 (local IP does not match any domain IP)
> 
> The "lsof" sample output was taken more than 10 hours after Squid logged these
> errors, and it shows that Squid is still holding the connections open, using a
> lot of file descriptors.

In HTTP, connections stay open and get used for many requests.

You are, however, assuming that the connection actually contains HTTP.
There is no guarantee of that without bumping to decrypt and parsing
the content inside. There are a number of non-HTTP/1.1 protocols that
use port 443 to bypass proxy and firewall security.

CONNECT requests with tunnels can be particularly long lived, mobiles
and their applications stay active for weeks on end with few outward
signs of what is happening inside the encrypted tunnel. The only way to
be sure the connection is finished with is when one of the client or
server remote endpoints closes it.

* The port 55815 connection _was_ closed sometime within 15 minutes of
the lsof being run.

* The port 52288 connection is still being used.

Given the timespan between those messages and the lsof, it is also
possible they were closed and reopened in between. If you have a lot of
ports in active use, then re-use of closed ones becomes more likely.
Though I suspect it is just persistence doing what it is designed to do.
You will need a more detailed trace of the entire time period to know.


PS. Don't confuse file descriptors with ports. There is an absolute
maximum of 64K ports per IP on each device. But socket FDs can reach
millions; if you run out of FDs, just configure more to be allowed.

Amos



RE: [squid-users] File Descriptors

2010-07-05 Thread Mellem, Dan
Did you set the limit before you compiled it? The upper limit is set at compile 
time. I ran into this problem myself.

-Dan


-Original Message-
From:   Superted666 [mailto:ruckafe...@gmail.com]
Sent:   Mon 7/5/2010 3:33 PM
To: squid-users@squid-cache.org
Cc: 
Subject:[squid-users] File Descriptors


Hello,

Got an odd problem with file descriptors that I'm hoping you could help me out
with.

Background

I'm running CentOS 5.5 and squid 3.0 Stable 5.
The system is configured with 4096 file descriptors with the following : 

/etc/security/limits.conf
*               -       nofile          4096
/etc/sysctl.conf
fs.file-max = 4096

Also /etc/init.d/squid has ulimit -HSn 4096 at the start.

Problem

Running "ulimit -n" on the box does indeed show 4096 descriptors, but Squid
states it is using 1024 despite all of the above. I noticed this because
I'm starting to get warnings in the logs about file descriptors...

Any help greatly appreciated.

Thanks

Ed

-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/File-Descriptors-tp2278923p2278923.html
Sent from the Squid - Users mailing list archive at Nabble.com.






Re: [squid-users] File Descriptors

2010-07-05 Thread Ivan .
I used this howto, and it did not require a recompile:

http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/

cheers
Ivan

On Tue, Jul 6, 2010 at 1:43 PM, Mellem, Dan dan.mel...@pomona.k12.ca.us wrote:

 Did you set the limit before you compiled it? The upper limit is set at 
 compile time. I ran into this problem myself.

 -Dan


 -Original Message-
 From:   Superted666 [mailto:ruckafe...@gmail.com]
 Sent:   Mon 7/5/2010 3:33 PM
 To:     squid-users@squid-cache.org
 Cc:
 Subject:        [squid-users] File Descriptors


 Hello,

 Got a odd problem with file descriptors im hoping you guys could help me out
 with?

 Background

 I'm running CentOS 5.5 and squid 3.0 Stable 5.
 The system is configured with 4096 file descriptors with the following :

 /etc/security/limits.conf
 *                -       nofile          4096
 /etc/sysctl.conf
 fs.file-max = 4096

 Also /etc/init.d/squid has ulimit -HSn 4096 at the start.

 Problem

 Running a ulimit -n on the box does indeed show 4096 connectors but squid
 states it is using 1024 despite what is said above. I noticed this because
 im starting to get warnings in the logs about file descriptors...

 Any help greatly appreciated.

 Thanks

 Ed

 Ed
 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/File-Descriptors-tp2278923p2278923.html
 Sent from the Squid - Users mailing list archive at Nabble.com.






Re: [squid-users] File Descriptors

2010-07-05 Thread balkrishna
Mr. Ed,

Changing the FD value in limits.conf and including ulimit -HSn
4096 in the squid daemon script does not change the default Squid FD limit.

This needs to be done at compile time.
Recompile your Squid, running ulimit -HSn 4096 after configuring and before
make install. That will work.

Regards,

Bal


 Hello,

 Got a odd problem with file descriptors im hoping you guys could help me
 out
 with?

 Background

 I'm running CentOS 5.5 and squid 3.0 Stable 5.
 The system is configured with 4096 file descriptors with the following :

 /etc/security/limits.conf
 *               -       nofile          4096
 /etc/sysctl.conf
 fs.file-max = 4096

 Also /etc/init.d/squid has ulimit -HSn 4096 at the start.

 Problem

 Running a ulimit -n on the box does indeed show 4096 connectors but squid
 states it is using 1024 despite what is said above. I noticed this because
 im starting to get warnings in the logs about file descriptors...

 Any help greatly appreciated.

 Thanks

 Ed

 Ed
 --
 View this message in context:
 http://squid-web-proxy-cache.1019090.n4.nabble.com/File-Descriptors-tp2278923p2278923.html
 Sent from the Squid - Users mailing list archive at Nabble.com.





Re: [squid-users] File descriptors issue

2009-12-16 Thread Amos Jeffries
On Wed, 16 Dec 2009 18:03:53 +0100, Solaris Treize
solaristre...@gmail.com wrote:
 Hello,
 I'm running squid-3.0.STABLE18 that I've compiled with the options
 --with-filedescriptors=32768
 
 I have added to /etc/security/limits.conf the following lines on my
 Redhat host :
 squid   hard    nofile  32768
 squid   soft    nofile  32768
 
 As squid user, ulimit -n says :
 32768
 
 But when I start squid, I got this :
 2009/12/16 17:36:49| With 1024 file descriptors available
 2009/12/16 17:38:31| client_side.cc(2834) WARNING! Your cache is
 running out of filedescriptors
 2009/12/16 17:43:31| client_side.cc(2834) WARNING! Your cache is
 running out of filedescriptors
 2009/12/16 17:44:31| client_side.cc(2834) WARNING! Your cache is
 running out of filedescriptors
 
 Could you please help me ?

http://wiki.squid-cache.org/SquidFaq/TroubleShooting#Running_out_of_filedescriptors

Note the OS-specific actions that need to be done before configuring and
compiling.

Amos


Re: [squid-users] file descriptors set to 1024 instead of user-defined one

2008-06-24 Thread Henrik Nordstrom

Tue 2008-06-24 at 23:42 +0600, Azhar H. Chowdhury wrote:
 Hi, I am running squid 3.0 stable 6 on Fedora Core 6. When I run squid from
 the command prompt, it sets the file descriptor value to 131072, which is what
 I defined at compile time. But running it from rc.local (I just put in a line
 with the path of the squid executable) sets the file descriptor value to 1024.

Make sure that ulimit gets set properly in the script starting Squid.

  ulimit -HSn 131072
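The reason the ulimit must live in the startup script itself is that FD limits are per-process and inherited: a child only ever sees whatever limit the launching shell had. A minimal demonstration of that inheritance (512 is an arbitrary illustrative value):

```shell
# A child process inherits the soft FD limit of the shell that launches it,
# which is why the limit must be raised in the same script that starts Squid.
limit_seen=$(sh -c 'ulimit -Sn 512; sh -c "ulimit -n"')
echo "child sees: $limit_seen"   # prints "child sees: 512"
```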

Regards
Henrik



Re: [squid-users] File Descriptors causing an issue in OpenBSD

2007-08-10 Thread Tek Bahadur Limbu
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, 10 Aug 2007 01:17:21 +0530
Preetish [EMAIL PROTECTED] wrote:

  Odd.. are you sure you are really running the new binary, and that the
  ulimit setting is done correctly in the start script?
 
 #Squid startup/shutdown
 
 if [ -z "$1" ] ; then
         echo -n "Syntax is: $0 start|stop"
         exit
 fi
 
 if [ "$1" != start -a "$1" != stop ]; then
         echo -n "Wrong command"
         exit
 fi
 
 if [ -x /usr/local/sbin/squid ]; then
         if [ "$1" = 'start' ] ; then
                 echo -n 'Running Squid: '; ulimit -HSn 8192;
                 /usr/local/sbin/squid
         else
                 echo -n 'Killing Squid: '; /usr/local/sbin/squid -k shutdown
         fi
 else
         echo -n 'Squid not found'
 fi
 
 
 What do you get when you issue the following 2 commands:
  limits
 No command limit.
  and
 
  ulimit -n
 
 1024

Hi Preetish,

That shows that you have only 1024 file descriptors available on your system. 
In my FreeBSD machines, I usually don't have to adjust file descriptors because 
the defaults are more than I need (7000 - 14000). 


 
  kern.maxfiles
  kern.maxfilesperproc
 
 I did
 sysctl -w kern.maxfiles=8192
 sysctl -w kern.maxfilesperproc=8192 --- this gives an error

I guess you don't have the kern.maxfilesperproc variable.

What do you have for your kern.maxusers variable?

If nothing helps, you may have to re-compile your kernel with the following 
added parameter:

option   MAXFILES=8192

But still, I think that there are other ways to increase your file descriptors 
besides re-compiling your kernel.

You can ask for help in the openbsd mailing list regarding your problem.

 
 Then I even made changes to the options in /etc/login.conf
 {{
 default:\
 :path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/bin /usr/local/bin:\
 :umask=022:\
 :datasize-max=512M:\
 :datasize-cur=512M:\
 :maxproc-max=512:\
 :maxproc-cur=64:\
 :openfiles-cur=8192:\
 :stacksize-cur=4M:\
 :localcipher=blowfish,6:\
 :ypcipher=old:\
 :tc=auth-defaults:\
 :tc=auth-ftp-defaults:
 }}
 
 and
 
 {{
 daemon:\
 :ignorenologin:\
 :datasize=infinity:\
 :maxproc=infinity:\
 :openfiles-cur=8192:\
 :stacksize-cur=8M:\
 :localcipher=blowfish,8:\
 :tc=default:
 }}
 
 and after doing all these changes I uninstalled Squid completely, with
 all its files and everything. Then I recompiled it and installed it
 again... but damn, it gave me the same number of file descriptors. So
 now I have reduced the cache to 10 GB. I found a Squid definitive
 guide where it said to recompile the kernel after editing the kernel
 configuration file.

Reducing just the size of your cache may not help you much with your
file descriptor limit.

 
 
 Squid Object Cache: Version 2.6.STABLE13
 Start Time: Thu, 09 Aug 2007 19:09:36 GMT
 Current Time:   Thu, 09 Aug 2007 19:11:13 GMT
 Connection information for squid:
 Number of clients accessing cache:  321
 Number of HTTP requests received:   2649
 Number of ICP messages received:0
 Number of ICP messages sent:0
 Number of queued ICP replies:   0
 Request failure ratio:   0.00
 Average HTTP requests per minute since start:   1638.4
 Average ICP messages per minute since start:0.0
 Select loop called: 34876 times, 2.782 ms avg
 Cache information for squid:
 Request Hit Ratios: 5min: 15.1%, 60min: 15.1%
 Byte Hit Ratios:5min: 29.4%, 60min: 29.4%
 Request Memory Hit Ratios:  5min: 9.7%, 60min: 9.7%
 Request Disk Hit Ratios:5min: 44.4%, 60min: 44.4%
 Storage Swap size:  23806 KB
 Storage Mem size:   2516 KB
 Mean Object Size:   7.57 KB
 Requests given to unlinkd:  0
 Median Service Times (seconds)  5 min60 min:
 HTTP Requests (All):   0.68577  0.68577
 Cache Misses:  1.24267  1.24267
 Cache Hits:0.00179  0.00179
 Near Hits: 0.68577  0.68577
 Not-Modified Replies:  0.00091  0.00091
 DNS Lookups:   0.00190  0.00190
 ICP Queries:   0.0  0.0


From your data above, your service response times, which are under 1.5 seconds,
are good figures for a satellite link. Before, it was 15 seconds! Considering
that, your proxy server is much faster now. But since your data above is only 2
minutes old, you have to monitor it regularly for a longer period of time.

Starting with a cache_dir size of 10GB is a good start. You can later increase
its size based upon your needs and demands.


Thanking you...



 
 
 :(((
 
 Preetish
 


- -- 

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.


Re: [squid-users] File Descriptors causing an issue in OpenBSD

2007-08-10 Thread Preetish
Hi All,

  Recompiling the kernel with MAXFILES=8192 worked. I even had
to add the line :openfiles-max=infinity:\ to /etc/login.conf
in the daemon section. Well, now the number of file descriptors
has increased and even the internet speed is good (I'll know
better by tomorrow). I have kept my cache at 10 GB right now. Thanks to
everyone :)

Cheers
Preetish


Re: [squid-users] File Descriptors causing an issue in OpenBSD

2007-08-09 Thread Henrik Nordstrom
On tor, 2007-08-09 at 17:00 +0530, Preetish wrote:
 Hi Everybody
 
 I have recompiled Squid the way I saw in one of the howtos. This is what I did:
 
 1)I uninstalled Squid
 2)
 #ulimit -HSn 8192
 #then recompiled squid with --with-maxfd=8192
 then in my starting squid script i have added ulimit -HSn 8192

Sounds right. Actually the ulimit when compiling isn't needed when you
use the configure option.

 But still it shows the same number of file descriptors
 File descriptor usage for squid:
 Maximum number of file descriptors:   1024

Odd.. are you sure you are really running the new binary, and that the
ulimit setting is done correctly in the start script?

To verify the binary run /path/to/sbin/squid -v

 There is something fishy about it because my cache is only 1.1G, and
 moreover there is a file squid.core in my /etc/squid and I do not
 understand its purpose.

The squid.core is a coredump from a fatal error. You can remove it.

 I searched for it online but still I didn't
 understand it. Is my squidclient giving me stale results? I had even
 cleaned the cache before reinstalling Squid. Is there some different
 way to increase the file descriptors in OpenBSD? Kindly help.

What you did should work from what I can tell.

Regards
Henrik




Re: [squid-users] File Descriptors causing an issue in OpenBSD

2007-08-09 Thread Tek Bahadur Limbu

Preetish wrote:

Hi Everybody

I have recompiled Squid the way I saw in one of the howtos. This is what I did:

1)I uninstalled Squid
2)
#ulimit -HSn 8192
#then recompiled squid with --with-maxfd=8192
then in my starting squid script i have added ulimit -HSn 8192

But still it shows the same number of file descriptors
File descriptor usage for squid:
Maximum number of file descriptors:   1024
Largest file desc currently in use:939
Number of file desc currently in use:  929
Files queued for open:   1
Available number of file descriptors:   94
Reserved number of file descriptors:   100
Store Disk files open:  19
IO loop method: kqueue

There is something fishy about it because my cache is only 1.1G, and
moreover there is a file squid.core in my /etc/squid and I do not
understand its purpose. I searched for it online but still I didn't
understand it. Is my squidclient giving me stale results? I had even
cleaned the cache before reinstalling Squid. Is there some different
way to increase the file descriptors in OpenBSD? Kindly help.


Hi Preetish,

On a Linux box, that should have worked right away. I assume it
should also work on BSD boxes. By the way, as Henrik mentioned, did
you verify the binary by running /path/to/sbin/squid -v?


What do you get when you issue the following 2 commands:

limits

and

ulimit -n

On your OpenBSD machine, I was wondering why your file descriptor limit is
only 1024 in the first place.


On BSD systems, I think increasing the following sysctl tunables might
help in general for a busy machine:

kern.maxfiles
kern.maxfilesperproc

Set those values to, say, 8192 or higher, and save them in either your
/boot/loader.conf or /etc/sysctl.conf so they survive a reboot.
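On FreeBSD, the persistent form of that advice would look roughly like this (a sketch with illustrative values; OpenBSD, as seen elsewhere in this thread, may lack kern.maxfilesperproc and instead needs the kernel MAXFILES option):

```
# /etc/sysctl.conf (FreeBSD) -- illustrative values, adjust for your load
kern.maxfiles=8192
kern.maxfilesperproc=8192
```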



Hope it helps.

Thanking you...



Regards
Preetish






--

With best regards and good wishes,

Yours sincerely,

Tek Bahadur Limbu

(TAG/TDG Group)
Jwl Systems Department

Worldlink Communications Pvt. Ltd.

Jawalakhel, Nepal

http://www.wlink.com.np


Re: [squid-users] File Descriptors causing an issue in OpenBSD

2007-08-09 Thread Preetish
 Odd.. are you sure you are really running the new binary, and that the
 ulimit setting is done correctly in the start script?

#Squid startup/shutdown

if [ -z "$1" ] ; then
        echo -n "Syntax is: $0 start|stop"
        exit
fi

if [ "$1" != start -a "$1" != stop ]; then
        echo -n "Wrong command"
        exit
fi

if [ -x /usr/local/sbin/squid ]; then
        if [ "$1" = 'start' ] ; then
                echo -n 'Running Squid: '; ulimit -HSn 8192;
                /usr/local/sbin/squid
        else
                echo -n 'Killing Squid: '; /usr/local/sbin/squid -k shutdown
        fi
else
        echo -n 'Squid not found'
fi


 What do you get when you issue the following 2 commands:
 limits
No command limit.
 and

 ulimit -n

1024

 kern.maxfiles
 kern.maxfilesperproc

I did
sysctl -w kern.maxfiles=8192
sysctl -w kern.maxfilesperproc=8192 --- this gives an error

Then I even made changes to the options in /etc/login.conf
{{
default:\
:path=/usr/bin /bin /usr/sbin /sbin /usr/X11R6/bin /usr/local/bin:\
:umask=022:\
:datasize-max=512M:\
:datasize-cur=512M:\
:maxproc-max=512:\
:maxproc-cur=64:\
:openfiles-cur=8192:\
:stacksize-cur=4M:\
:localcipher=blowfish,6:\
:ypcipher=old:\
:tc=auth-defaults:\
:tc=auth-ftp-defaults:
}}

and

{{
daemon:\
:ignorenologin:\
:datasize=infinity:\
:maxproc=infinity:\
:openfiles-cur=8192:\
:stacksize-cur=8M:\
:localcipher=blowfish,8:\
:tc=default:
}}

and after doing all these changes I uninstalled Squid completely, with
all its files and everything. Then I recompiled it and installed it
again... but damn, it gave me the same number of file descriptors. So
now I have reduced the cache to 10 GB. I found a Squid definitive
guide where it said to recompile the kernel after editing the kernel
configuration file.


Squid Object Cache: Version 2.6.STABLE13
Start Time: Thu, 09 Aug 2007 19:09:36 GMT
Current Time:   Thu, 09 Aug 2007 19:11:13 GMT
Connection information for squid:
Number of clients accessing cache:  321
Number of HTTP requests received:   2649
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   1638.4
Average ICP messages per minute since start:0.0
Select loop called: 34876 times, 2.782 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 15.1%, 60min: 15.1%
Byte Hit Ratios:5min: 29.4%, 60min: 29.4%
Request Memory Hit Ratios:  5min: 9.7%, 60min: 9.7%
Request Disk Hit Ratios:5min: 44.4%, 60min: 44.4%
Storage Swap size:  23806 KB
Storage Mem size:   2516 KB
Mean Object Size:   7.57 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min60 min:
HTTP Requests (All):   0.68577  0.68577
Cache Misses:  1.24267  1.24267
Cache Hits:0.00179  0.00179
Near Hits: 0.68577  0.68577
Not-Modified Replies:  0.00091  0.00091
DNS Lookups:   0.00190  0.00190
ICP Queries:   0.0  0.0


:(((

Preetish


Re: [squid-users] File Descriptors

2007-02-03 Thread Henrik Nordstrom
Fri 2007-02-02 at 14:25 -0200, Michel Santos wrote:

 When Squid sees it's short of filedescriptors it stops accepting
  new requests, focusing on finishing what it has already accepted.
 
 isn't this conflicting with what you said before?

No.

 Does Squid recover or does it need to be restarted?

It depends on the reason for the file descriptor shortage.

If the shortage is due to Squid using very many filedescriptors then no
action need to be taken (except perhaps increase the amount of
filedescriptors available to Squid to avoid the problem in future).
Squid automatically adjusts to the per process limit and hitting the
system wide limit if it's lower than the per-process limit.

If the shortage is due to some other process causing the systems as a
whole to temporarily run short of filedescriptors or related resources
then you need to restart Squid after fixing the problem as Squid has got
fooled in this situation into thinking that your system can not support
a reasonable amount of active connections.

Regards
Henrik




Re: [squid-users] File Descriptors

2007-02-02 Thread Henrik Nordstrom
Thu 2007-02-01 at 20:01 -0600, Matt wrote:
 What does Squid do or act like when it's out of file descriptors?

When Squid sees it's short of filedescriptors it stops accepting new
requests, focusing on finishing what it has already accepted.

And long before there is a shortage it disables the use of persistent
connections to limit the pressure on concurrent filedescriptors.

 If cachemgr says it still has some left could it still really be out?

If you get to cachemgr then it's not out of filedescriptors, at least
not right then...

Regards
Henrik




Re: [squid-users] File Descriptors

2007-02-02 Thread Henrik Nordstrom
Fri 2007-02-02 at 10:54 +0800, Adrian Chadd wrote:

 If your system or process FD limits are lower than what Squid believes it
 to be, then yup. It'll get unhappy.

Only temporarily. It automatically adjusts fd usage to what the system
can sustain when hitting the limit (see fdAdjustReserved)

But this also causes problems if there is a temporary system-wide
shortage of filedescriptors due to other processes opening too many
files. Once Squid has detected a filedescriptor limitation it won't go
above the number of filedescriptor it used at that time, and you need to
restart Squid to recover after fixing the cause to the system wide
filedescriptor shortage.

Regards
Henrik




Re: [squid-users] File Descriptors

2007-02-02 Thread Michel Santos

Henrik Nordstrom said in his last message:
 Fri 2007-02-02 at 10:54 +0800, Adrian Chadd wrote:

 If your system or process FD limits are lower than what Squid believes
 it
 to be, then yup. It'll get unhappy.

 Only temporarily. It automatically adjusts fd usage to what the system
 can sustain when hitting the limit (see fdAdjustReserved)

 But this also causes problems if there is a temporary system-wide
 shortage of filedescriptors due to other processes opening too many
 files. Once Squid has detected a filedescriptor limitation it won't go
 above the number of filedescriptor it used at that time, and you need to
 restart Squid to recover after fixing the cause to the system wide
 filedescriptor shortage.



In a former message you said:

 When Squid sees it's short of filedescriptors it stops accepting
 new requests, focusing on finishing what it has already accepted.

Isn't this conflicting with what you said before?

Does Squid recover or does it need to be restarted?

Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] File Descriptors

2007-02-01 Thread Adrian Chadd
On Thu, Feb 01, 2007, Matt wrote:
 What does Squid do or act like when it's out of file descriptors?  If
 cachemgr says it still has some left, could it still really be out?

If your system or process FD limits are lower than what Squid believes it
to be, then yup. It'll get unhappy.

(It generally won't be the process FD limits being lower than Squid's, as
Squid checks what the process FD limits are on startup. But any other
type of system- or user-wide FD limit could mess things up.)



Adrian



RE: [squid-users] FILE DESCRIPTORS

2006-02-23 Thread Gregori Parker

My /etc/init.d/squid ...I'm doing this already

#!/bin/bash
echo "1024 32768" > /proc/sys/net/ipv4/ip_local_port_range
echo 1024 > /proc/sys/net/ipv4/tcp_max_syn_backlog
SQUID=/usr/local/squid/sbin/squid

# increase file descriptor limits
echo 8192 > /proc/sys/fs/file-max
ulimit -HSn 8192

case "$1" in

start)
   $SQUID -s
   echo 'Squid started'
   ;;

stop)
   $SQUID -k shutdown
   echo 'Squid stopped'
   ;;

esac



From: kabindra shrestha [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 22, 2006 7:28 PM
To: Gregori Parker
Subject: Re: [squid-users] FILE DESCRIPTORS

You have to run the same command, ulimit -HSn 8192, before starting Squid. It
is working fine on my server.

---

I've done everything I have read about to increase file descriptors on 
my caching box, and now I just rebuilt a fresh clean squid.  Before I
ran configure, I did ulimit -HSn 8192, and I noticed that while
configuring it said Checking File Descriptors... 8192.  I even
double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
everything was good, even ran a ulimit -n right before starting squid
and saw 8192!  So I start her up, and in cache.log I see...

2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
x86_64-unknown-linux-gnu...
2006/02/22 19:05:08| Process ID 3657
2006/02/22 19:05:08| With 1024 file descriptors available

Arggghh.

Can anyone help me out?  This is on Fedora Core 4 64-bit

Thanks, sigh - Gregori




Re: [squid-users] FILE DESCRIPTORS

2006-02-23 Thread Mark Elsen
 Sorry to be pounding the list lately, but I'm about to lose it with
 these file descriptors...

 I've done everything I have read about to increase file descriptors on
 my caching box, and now I just rebuilt a fresh clean squid.  Before I
 ran configure, I did ulimit -HSn 8192, and I noticed that while
 configuring it said Checking File Descriptors... 8192.  I even
 double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
 everything was good, even ran a ulimit -n right before starting squid
 and saw 8192!  So I start her up, and in cache.log I see...

 2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
 x86_64-unknown-linux-gnu...
 2006/02/22 19:05:08| Process ID 3657
 2006/02/22 19:05:08| With 1024 file descriptors available


To make sure that this is not bogus w.r.t. the real available amount of
FDs: do you still get warnings in cache.log about FD shortage when
reaching the 1024 (bogusly reported?) limit?

The reason I ask is that I have been playing with the FAQ guidelines too,
ultimately getting the same (stuck?) result as you did.

M.


RE: [squid-users] FILE DESCRIPTORS

2006-02-23 Thread Gix, Lilian (CI/OSR)
Hello,


I have always had some problems with file descriptors.
One day, I changed my Linux version to Debian (I don't know if that's the
reason) and I updated Squid. Since then, I have a configuration file under:
/etc/default/squid
On this file, there is:

#
# /etc/default/squid	Configuration settings for the Squid proxy server.
#

# Max. number of filedescriptors to use.  You can increase this on a busy
# cache to a maximum of (currently) 4096 filedescriptors.  Default is 1024.
SQUID_MAXFD=4096

I don't know if this can help

Gix Lilian


-Original Message-
From: Mark Elsen [mailto:[EMAIL PROTECTED] 
Sent: Donnerstag, 23. Februar 2006 09:26
To: Gregori Parker
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] FILE DESCRIPTORS

 Sorry to be pounding the list lately, but I'm about to lose it with
 these file descriptors...

 I've done everything I have read about to increase file descriptors on
 my caching box, and now I just rebuilt a fresh clean squid.  Before I
 ran configure, I did ulimit -HSn 8192, and I noticed that while
 configuring it said Checking File Descriptors... 8192.  I even
 double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
 everything was good, even ran a ulimit -n right before starting
squid
 and saw 8192!  So I start her up, and in cache.log I see...

 2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
 x86_64-unknown-linux-gnu...
 2006/02/22 19:05:08| Process ID 3657
 2006/02/22 19:05:08| With 1024 file descriptors available


To make sure that this is not bogus w.r.t. the real available amount of
FD's : do you still get warnings in cache.log about FD-shortage when
reaching the 1024 (bogus-reported ?) limit.

The reason I ask is, that I have been playing with the FAQ guidelines
too,
ultimately getting the same result (stuck?) as you did.

M.



Re: [squid-users] FILE DESCRIPTORS

2006-02-23 Thread Squidrunner Support Team
Before compiling squid, try changing the value of the __FD_SETSIZE in
the /usr/include/bits/typesizes.h file as

#define __FD_SETSIZE 8192

and then in the shell prompt

ulimit -HSn 8192

and try

echo "1024 32768" > /proc/sys/net/ipv4/ip_local_port_range

then compile and install squid

This should help you out with the file descriptor problem.
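The current values of the tunables mentioned above can be inspected without changing anything (Linux paths; the exact location of the glibc bits/ headers varies between distributions):

```shell
# Inspect the kernel's ephemeral port range and glibc's compiled-in fd-set size.
cat /proc/sys/net/ipv4/ip_local_port_range
grep -R '__FD_SETSIZE' /usr/include/bits/ 2>/dev/null | head -n 3
```

Checking before editing makes it easy to confirm afterwards that the new values actually took effect.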

Thanks,
-Squid Runner Support

On 2/23/06, Gix, Lilian (CI/OSR) * [EMAIL PROTECTED] wrote:
 Hello,


 I have always had some problems with file descriptors.
 One day, I changed my Linux version to Debian (I don't know if that's the
 reason) and I updated Squid. Since then, I have a configuration file under:
 /etc/default/squid
 On this file, there is:

 #
 # /etc/default/squid	Configuration settings for the Squid proxy server.
 #

 # Max. number of filedescriptors to use.  You can increase this on a busy
 # cache to a maximum of (currently) 4096 filedescriptors.  Default is 1024.
 SQUID_MAXFD=4096

 I don't know if this can help

 Gix Lilian


 -Original Message-
 From: Mark Elsen [mailto:[EMAIL PROTECTED]
 Sent: Donnerstag, 23. Februar 2006 09:26
 To: Gregori Parker
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] FILE DESCRIPTORS

  Sorry to be pounding the list lately, but I'm about to lose it with
  these file descriptors...
 
  I've done everything I have read about to increase file descriptors on
  my caching box, and now I just rebuilt a fresh clean squid.  Before I
  ran configure, I did ulimit -HSn 8192, and I noticed that while
  configuring it said Checking File Descriptors... 8192.  I even
  double-checked autoconf.h and saw #define SQUID_MAXFD 8192.  I thought
  everything was good, even ran a ulimit -n right before starting
 squid
  and saw 8192!  So I start her up, and in cache.log I see...
 
  2006/02/22 19:05:08| Starting Squid Cache version 2.5.STABLE12 for
  x86_64-unknown-linux-gnu...
  2006/02/22 19:05:08| Process ID 3657
  2006/02/22 19:05:08| With 1024 file descriptors available
 

 To make sure that this is not bogus w.r.t. the real available amount of
 FD's : do you still get warnings in cache.log about FD-shortage when
 reaching the 1024 (bogus-reported ?) limit.

 The reason I ask is, that I have been playing with the FAQ guidelines
 too,
 ultimately getting the same result (stuck?) as you did.

 M.




Re: [squid-users] File descriptors

2005-08-05 Thread Henrik Nordstrom

On Fri, 1 Jul 2005, Sam Reynolds wrote:


What is the maximum number of File descriptors that you can have?


Depends on your OS.

The default is 1024.


Is there a correlation between the number of File Descriptors and RAM?


Yes, but even more so between file descriptors and CPU usage.


At what point does increasing the number of File Descriptors become a problem?


Mainly when CPU starts becoming a bottleneck.

I am at 8192 File Descriptors, but am still running out, so I am moving 
to 16,000 next.  I just need to know the ramifications of what I am 
doing.


Quite likely you have some other problem, with the filedescriptor shortage 
only being a side effect. 8K filedescriptors is more than plenty for 
any Squid I have seen.


How many clients do you have on this proxy?

Try

  half_closed_clients off
  quick_abort_min 0
  quick_abort_max 0
  pipeline_prefetch off

And what cache_dir type are you using, on how many drives?

Is there any swap activity on your server?

Regards
Henrik

Re: [squid-users] File descriptors

2005-08-05 Thread Awie
  I am at 8192 File Descriptors, but am still running out, so I am moving
  to 16,000 next. I just need to know the ramifications of what I am
  doing.

 Quite likely you have some other problem with the filedescriptor problem
 only being a side effect. 8K filedescriptors is well more than plenty for
 any Squid I have seen.

 How many clients do you have on this proxy?

 Try

half_closed_clients off
quick_abort_min 0
quick_abort_max 0
pipeline_prefetch off


I had same experience last year. Henrik suggestion is correct.

One of our users was infected by NIMDA and consumed all filedescriptors. After
editing squid.conf to set half_closed_clients off, Squid ran normally.

Thx & Rgds,

Awie




Re: [squid-users] file descriptors - urgent request

2003-11-03 Thread Tom Lahti
There is, and it's in the FAQ.  Check 
closer: http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.4

At 10:55 PM 11/2/2003, [EMAIL PROTECTED] wrote:

Mark

FAQ already checked. I just wondered (grasping at straws) if there was some
way of doing it.
Thanks anyway!

Jeff
-- =
   Tom Lahti
   Tx3 Online Services
   (888)4-TX3-SVC (489-3782)
   http://www.tx3.net/
-- =


Re: [squid-users] file descriptors - urgent request

2003-11-03 Thread Tom Lahti
Sorry, I just noticed your question actually said "without a 
recompile".  In that case, the answer is no: you must recompile to raise 
the file descriptor limit in Squid.

At 10:55 PM 11/2/2003, [EMAIL PROTECTED] wrote:

Mark

FAQ already checked. I just wondered (grasping at straws) if there was some
way of doing it.
Thanks anyway!

Jeff
-- =
   Tom Lahti
   Tx3 Online Services
   (888)4-TX3-SVC (489-3782)
   http://www.tx3.net/
-- =


Re: [squid-users] file descriptors - urgent request

2003-11-03 Thread jeff . richards

Thanks for the feedback, guys!

Jeff

--
Jeff Richards
Technical Consultant
Unix Enterprise Services
[EMAIL PROTECTED]
Tel: +61 2 6219 8125



   
   
On 03/11/2003 18:22, Tom Lahti wrote:

Sorry, I just noticed your question actually said "without a
recompile".  In that case, the answer is no: you must recompile to raise
the file descriptor limit in Squid.

At 10:55 PM 11/2/2003, [EMAIL PROTECTED] wrote:

Mark

FAQ already checked. I just wondered (grasping at straws) if there was
some
way of doing it.

Thanks anyway!

Jeff

-- =
Tom Lahti
Tx3 Online Services

(888)4-TX3-SVC (489-3782)
http://www.tx3.net/
-- =








Important:  This e-mail is intended for the use of the addressee and may contain 
information that is confidential, commercially valuable or subject to legal or 
parliamentary privilege.  If you are not the intended recipient you are notified that 
any review, re-transmission, disclosure, use or dissemination of this communication is 
strictly prohibited by several Commonwealth Acts of Parliament.  If you have received 
this communication in error please notify the sender immediately and delete all copies 
of this transmission together with any attachments.



Re: [squid-users] file descriptors - urgent request

2003-11-02 Thread Marc Elsen


[EMAIL PROTECTED] wrote:
 
 If anyone can give me a definitive response in the next two hours, I would
 be extremely grateful.
 
 I believe that Squid needs a recompile, with a new system fd limit in
 place, in order to increase its fd limit(?) Is there any way of increasing
 the fd limit for Squid without a re-compile? System limit has already been

 No, check the  SQUID FAQ on this issue (filedescriptors).

 M.

 raised - just need to get Squid raised as well.
 
 This is Solaris 2.8 with Squid 2.4.STABLE7.
 
 TIA
 



Re: [squid-users] file descriptors - urgent request

2003-11-02 Thread jeff . richards

Mark

FAQ already checked. I just wondered (grasping at straws) if there was some
way of doing it.

Thanks anyway!

Jeff

--
Jeff Richards
Technical Consultant
Unix Enterprise Services
[EMAIL PROTECTED]
Tel: +61 2 6219 8125



   
   
On 03/11/2003 17:48, Marc Elsen wrote:


[EMAIL PROTECTED] wrote:

 If anyone can give me a definitive response in the next two hours, I
would
 be extremely grateful.

 I believe that Squid needs a recompile, with a new system fd limit in
 place, in order to increase its fd limit(?) Is there any way of
 increasing the fd limit for Squid without a re-compile? System limit
 has already been

 No, check the  SQUID FAQ on this issue (filedescriptors).

 M.

 raised - just need to get Squid raised as well.

 This is Solaris 2.8 with Squid 2.4.STABLE7.

 TIA












Re: [squid-users] File Descriptors

2003-08-20 Thread Henrik Nordstrom
On Tuesday 19 August 2003 22.22, [EMAIL PROTECTED] wrote:

 runtime information page it shows squid seeing 1024 file
 descriptors available.

 Do I need to recompile squid to fix this?

Most likely yes.

The ulimit in effect when you compile Squid is built into the Squid binary, 
and that binary cannot support more filedescriptors than the ulimit was 
at build time.

Note: There is a CPU penalty if the limit is set way too high.

-- 
Donations welcome if you consider my Free Squid support helpful.
https://www.paypal.com/xclick/business=hno%40squid-cache.org

If you need commercial Squid support or cost effective Squid or
firewall appliances please refer to MARA Systems AB, Sweden
http://www.marasystems.com/, [EMAIL PROTECTED]


Re: [squid-users] File Descriptors

2003-08-19 Thread Adam Aube
Now I am having errors show up in my log saying I am running out of file
descriptors.

Using ulimit (or its equivalent), set the hard and soft limits both 
before compiling and before running Squid.

This has been recently discussed on the list (today, I think), and 
also several times in the archives. A quick check of the archives 
would have gotten you a faster answer.

Adam