Re: [squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-13 Thread Leonardo
On Wed, Oct 12, 2011 at 3:09 AM, Amos Jeffries squ...@treenet.co.nz wrote:

 FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.

 So what is taking up all that space?
  2GB+ objects in the cache screwing with the actual size calculation?
  logs?
  swap.state too big?
  core dumps?
  other applications?

What's puzzling is that there appears to be plenty of free space:

squid:/var/cache# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1  65G   41G   22G  66% /
tmpfs 1.7G 0  1.7G   0% /lib/init/rw
udev   10M  652K  9.4M   7% /dev
tmpfs 1.7G 0  1.7G   0% /dev/shm

Is it possible that the disk runs out of free space, and df just gives
me the wrong output?
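
To double-check actual usage I could also run something along these lines
(the paths are just my first guesses: the cache dir, the logs, and
wherever Sarg writes its reports):

# rough per-directory totals, to see what is actually eating the disk
du -sh /var/cache /var/log 2>/dev/null
# size of the swap.state files themselves
ls -lh /var/cache/swap.state*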

There is no other app on the machine except the Squirm processes and
Sarg.  I had Sarg generate reports every 5 minutes for 6 weeks, and it
ran fine.  Now it runs only every hour, for safety.


  Now Squid is running and serving requests, albeit
 without caching.  However, I keep seeing the same error:
 client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
 filedescriptors

 What is the reason of this since I'm not using caching at all?

 Cache only uses one FD. Client connection uses one, server connection uses
 one. Each helper uses at least one. Your Squid seems to be thinking it only
 has 1024 to share between all those connections. Squid can handle this, but
 it has to do so by slowing down the incoming traffic a lot and possibly
 dropping some client connections.

I increased the ulimit to 65536 for Squid as suggested by Wilson, and it
worked fine (without caching).  I re-enabled caching and after a while
Squid crashed with the same error: FATAL: storeDirOpenTmpSwapLog:
Failed to open swap log.  Now I'm back to running Squid without caching.
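
For what it's worth, something like this should show whether the new
limit was actually picked up (assuming a single Squid process and a
reasonably recent kernel; the grep patterns are only approximate):

# limit as seen by the running Squid process
grep -i 'open files' /proc/$(pidof squid | awk '{print $1}')/limits
# limit as reported by Squid itself
squidclient mgr:info | grep -i 'file descriptor'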

The questions I'm asking myself are:
1) Why did this issue with FDs only appear after several months?
2) What is taking up all this space, if there's apparently plenty of space on disk?

Thanks for your tips.  If there are other tests I can try please don't
hesitate to post your suggestions.


Leonardo


RE: [squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-13 Thread Jenny Lee

 Date: Thu, 13 Oct 2011 10:59:09 +0200
 From: leonardodiserpierodavi...@gmail.com
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] Recurrent crashes and warnings: Your cache is 
 running out of filedescriptors
 
 On Wed, Oct 12, 2011 at 3:09 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 
  FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.
 
  So what is taking up all that space?
  2GB+ objects in the cache screwing with the actual size calculation?
  logs?
  swap.state too big?
  core dumps?
  other applications?
 
 What's puzzling is that there appears to be plenty of free space:
 
 squid:/var/cache# df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/sda1 65G 41G 22G 66% /
 tmpfs 1.7G 0 1.7G 0% /lib/init/rw
 udev 10M 652K 9.4M 7% /dev
 tmpfs 1.7G 0 1.7G 0% /dev/shm
 
 Is it possible that the disk runs out of free space, and df just gives
 me the wrong output?
 
Perhaps you are running out of inodes?
 
df -i should give you what you are looking for.
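
Something like this should show it, and a quick find can then point at
the directory holding all those files (I'm only guessing that the Sarg
output lives somewhere under /var):

# inode usage per filesystem
df -i /
# number of entries under each top-level directory in /var
for d in /var/*; do printf '%s\t' "$d"; find "$d" -xdev | wc -l; done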
 
Jenny 

RE: [squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-13 Thread Jenny Lee

  Perhaps you are running out of inodes?
 
  df -i should give you what you are looking for.


 Well done.  df indeed reports that I am out of inodes (100% used).
 I've seen that a Sarg daily report contains about 170,000 files.  I
 have started tar-gzipping them.

 Thank you very much Jenny.


 Leonardo
 

Glad this is solved.  Actually, you could increase the inode max (I think it was 
double/triple of the /proc/sys/fs/file-max setting).
 
However, 170,000 files in a single directory on a mechanical drive will make things 
awfully slow.
 
Also, ext4 is preferable since deletes are done in the background. Our tests on 
an SSD with ext3 took 9 mins to delete 1 million files. It was about 7 secs on 
ext4.
 
Whenever we need to deal with a high number of files (sometimes to the tune of 
100 million), we move them to an SSD with ext4 and perform the operations there. 
And yes, that moving part is also very painful unless the files were already 
tarred :)
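
Just as an illustration, something like this turns one day of reports
into a single archive (the directory name is made up, adjust to wherever
Sarg writes):

# pack one report directory into one file, then drop the originals
tar czf sarg-2011-10-12.tar.gz sarg-2011-10-12/ && rm -rf sarg-2011-10-12/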
 
Let me give you an example. Processing 1 million files in a single directory 
(read, write, split into directories, archive):
 
HDD: 6 days
SSD: 4 hours
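
The "split into directories" step is basically bucketing by filename so
that no single directory ends up with millions of entries. A purely
illustrative sketch (bash, two-character prefix buckets):

#!/bin/bash
# bucket files by the first two characters of their name
for f in *; do
  [ -f "$f" ] || continue            # skip directories, including the buckets
  d=${f:0:2}
  mkdir -p -- "$d" && mv -- "$f" "$d/"
done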
 
Jenny 
  

[squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-11 Thread Leonardo
Hi all,

I'm running a transparent Squid proxy on Debian Linux 5.0.5,
configured as a bridge.  The proxy serves a few thousand users
daily.  It uses Squirm for URL rewriting and (for the past 6 weeks) Sarg for
generating reports.  I compiled it from source.
This is the output of squid -v:
Squid Cache: Version 3.1.7
configure options:  '--enable-linux-netfilter' '--enable-wccp'
'--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
'--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
--enable-ltdl-convenience
I set squid.conf to allocate 10 GB of disk cache:
cache_dir ufs /var/cache 10000 16 256


Everything worked fine for almost one year, but now suddenly I keep
having problems.


Recently Squid crashed and I had to delete swap.state.


Now I keep seeing this warning message on cache.log and on console:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
filedescriptors

At the OS level, /proc/sys/fs/file-max reports 314446.
squidclient mgr:info reports 1024 as the max number of file descriptors.
I've tried both setting SQUID_MAXFD=4096 in /etc/default/squid and
max_filedescriptors 4096 in squid.conf, but neither was successful.  Do
I really have to recompile Squid to increase the max number of FDs?


Today Squid crashed again, and when I tried to relaunch it, it gave this output:

2011/10/11 11:18:29| Process ID 28264
2011/10/11 11:18:29| With 1024 file descriptors available
2011/10/11 11:18:29| Initializing IP Cache...
2011/10/11 11:18:29| DNS Socket created at [::], FD 5
2011/10/11 11:18:29| DNS Socket created at 0.0.0.0, FD 6
(...)
2011/10/11 11:18:29| helperOpenServers: Starting 40/40 'squirm' processes
2011/10/11 11:18:39| Unlinkd pipe opened on FD 91
2011/10/11 11:18:39| Store logging disabled
2011/10/11 11:18:39| Swap maxSize 10240000 + 262144 KB, estimated 807857 objects
2011/10/11 11:18:39| Target number of buckets: 40392
2011/10/11 11:18:39| Using 65536 Store buckets
2011/10/11 11:18:39| Max Mem  size: 262144 KB
2011/10/11 11:18:39| Max Swap size: 10240000 KB
2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on device
FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.

I therefore deactivated the cache and reran Squid.  It showed a long
list of errors of this type:
IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 10: (2) No such file or
directory
and then started.  Now Squid is running and serving requests, albeit
without caching.  However, I keep seeing the same error:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
filedescriptors

What is the reason for this, since I'm not using caching at all?


Thanks a lot if you can shed some light on this.
Best regards,


Leonardo


Re: [squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-11 Thread Wilson Hernandez
I had this problem in the past and created the following script 
to start Squid:


#!/bin/sh -e
#

echo Starting squid...

ulimit -HSn 65536
sleep 1
/usr/local/squid/sbin/squid

echo Done..

That fixed the problem, and it hasn't happened since.

Hope that helps.

On 10/11/2011 9:07 AM, Leonardo wrote:

Hi all,

I'm running a transparent Squid proxy on a Linux Debian 5.0.5,
configured as a bridge.  The proxy serves a few thousands of users
daily.  It uses Squirm for URL rewriting, and (since 6 weeks) sarg for
generating reports.  I compiled it from source.
This is the output of squid -v:
Squid Cache: Version 3.1.7
configure options:  '--enable-linux-netfilter' '--enable-wccp'
'--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
'--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
--enable-ltdl-convenience
I set squid.conf to allocate 10Gb of disk cache:
cache_dir ufs /var/cache 10000 16 256


Everything worked fine for almost one year, but now suddenly I keep
having problems.


Recently Squid crashed and I had to delete swap.state.


Now I keep seeing this warning message on cache.log and on console:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
filedescriptors

At OS level, /proc/sys/fs/file-max reports 314446.
squidclient mgr:info reports 1024 as the max number of file descriptors.
I've tried both to set SQUID_MAXFD=4096 on etc/default/squid and
max_filedescriptors 4096 on squid.conf but neither was successful.  Do
I really have to recompile Squid to increase the max number of FDs?


Today Squid crashed again, and when I tried to relaunch it it gave this output:

2011/10/11 11:18:29| Process ID 28264
2011/10/11 11:18:29| With 1024 file descriptors available
2011/10/11 11:18:29| Initializing IP Cache...
2011/10/11 11:18:29| DNS Socket created at [::], FD 5
2011/10/11 11:18:29| DNS Socket created at 0.0.0.0, FD 6
(...)
2011/10/11 11:18:29| helperOpenServers: Starting 40/40 'squirm' processes
2011/10/11 11:18:39| Unlinkd pipe opened on FD 91
2011/10/11 11:18:39| Store logging disabled
2011/10/11 11:18:39| Swap maxSize 10240000 + 262144 KB, estimated 807857 objects
2011/10/11 11:18:39| Target number of buckets: 40392
2011/10/11 11:18:39| Using 65536 Store buckets
2011/10/11 11:18:39| Max Mem  size: 262144 KB
2011/10/11 11:18:39| Max Swap size: 10240000 KB
2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on device
FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.

I therefore deactivated the cache and rerun Squid.  It showed a long
list of errors of this type:
IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 10: (2) No such file or
directory
and then started.  Now Squid is running and serving requests, albeit
without caching.  However, I keep seeing the same error:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out of
filedescriptors

What is the reason of this since I'm not using caching at all?


Thanks a lot if you can shed some light on this.
Best regards,


Leonardo


Re: [squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-11 Thread Fred B

- Leonardo leonardodiserpierodavi...@gmail.com wrote:

 Hi all,
 
 I'm running a transparent Squid proxy on a Linux Debian 5.0.5,
 configured as a bridge.  The proxy serves a few thousands of users
 daily.  It uses Squirm for URL rewriting, and (since 6 weeks) sarg
 for
 generating reports.  I compiled it from source.
 This is the output of squid -v:
 Squid Cache: Version 3.1.7
 configure options:  '--enable-linux-netfilter' '--enable-wccp'
 '--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
 '--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
 'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
 --enable-ltdl-convenience
 I set squid.conf to allocate 10Gb of disk cache:
 cache_dir ufs /var/cache 10000 16 256
 
 
 Everything worked fine for almost one year, but now suddenly I keep
 having problems.
 
 
 Recently Squid crashed and I had to delete swap.state.
 
 
 Now I keep seeing this warning message on cache.log and on console:
 client_side.cc(2977) okToAccept: WARNING! Your cache is running out
 of
 filedescriptors
 
 At OS level, /proc/sys/fs/file-max reports 314446.
 squidclient mgr:info reports 1024 as the max number of file
 descriptors.
 I've tried both to set SQUID_MAXFD=4096 on etc/default/squid and
 max_filedescriptors 4096 on squid.conf but neither was successful. 
 Do
 I really have to recompile Squid to increase the max number of FDs?
 
 
 Today Squid crashed again, and when I tried to relaunch it it gave
 this output:
 
 2011/10/11 11:18:29| Process ID 28264
 2011/10/11 11:18:29| With 1024 file descriptors available
 2011/10/11 11:18:29| Initializing IP Cache...
 2011/10/11 11:18:29| DNS Socket created at [::], FD 5
 2011/10/11 11:18:29| DNS Socket created at 0.0.0.0, FD 6
 (...)
 2011/10/11 11:18:29| helperOpenServers: Starting 40/40 'squirm'
 processes
 2011/10/11 11:18:39| Unlinkd pipe opened on FD 91
 2011/10/11 11:18:39| Store logging disabled
 2011/10/11 11:18:39| Swap maxSize 10240000 + 262144 KB, estimated 807857 objects
 2011/10/11 11:18:39| Target number of buckets: 40392
 2011/10/11 11:18:39| Using 65536 Store buckets
 2011/10/11 11:18:39| Max Mem  size: 262144 KB
 2011/10/11 11:18:39| Max Swap size: 10240000 KB
 2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on
 device
 FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.
 
 I therefore deactivated the cache and rerun Squid.  It showed a long
 list of errors of this type:
 IpIntercept.cc(137) NetfilterInterception:  NF
 getsockopt(SO_ORIGINAL_DST) failed on FD 10: (2) No such file or
 directory
 and then started.  Now Squid is running and serving requests, albeit
 without caching.  However, I keep seeing the same error:
 client_side.cc(2977) okToAccept: WARNING! Your cache is running out
 of
 filedescriptors
 
 What is the reason of this since I'm not using caching at all?
 
 
 Thanks a lot if you can shed some light on this.
 Best regards,
 
 
 Leonardo



Hi,

Regarding 2011/10/11 11:18:29| With 1024 file descriptors available:

Before compilation, run

ulimit -n 65536

and rebuild with '--with-filedescriptors=65536'.

If that's not enough, add ulimit -n 48000 in /etc/init.d/squid.
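
Roughly like this, reusing the main options from your squid -v output
(trim or extend to match your actual build; only the last option is new):

ulimit -n 65536
./configure --enable-linux-netfilter --enable-wccp \
    --prefix=/usr --localstatedir=/var --libexecdir=/lib/squid \
    --srcdir=. --datadir=/share/squid --sysconfdir=/etc/squid \
    'CPPFLAGS=-I../libltdl' \
    --with-filedescriptors=65536
make && make install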

Regarding 2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on device:

Could you post the result of df -h /var?



Re: [squid-users] Recurrent crashes and warnings: Your cache is running out of filedescriptors

2011-10-11 Thread Amos Jeffries

On Tue, 11 Oct 2011 15:07:16 +0200, Leonardo wrote:

Hi all,

I'm running a transparent Squid proxy on a Linux Debian 5.0.5,
configured as a bridge.  The proxy serves a few thousands of users
daily.  It uses Squirm for URL rewriting, and (since 6 weeks) sarg 
for

generating reports.  I compiled it from source.
This is the output of squid -v:
Squid Cache: Version 3.1.7
configure options:  '--enable-linux-netfilter' '--enable-wccp'
'--prefix=/usr' '--localstatedir=/var' '--libexecdir=/lib/squid'
'--srcdir=.' '--datadir=/share/squid' '--sysconfdir=/etc/squid'
'CPPFLAGS=-I../libltdl' --with-squid=/root/squid-3.1.7
--enable-ltdl-convenience
I set squid.conf to allocate 10Gb of disk cache:
cache_dir ufs /var/cache 10000 16 256



Please try 3.1.15. There were some FD problems solved since .7




Now I keep seeing this warning message on cache.log and on console:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out 
of

filedescriptors

At OS level, /proc/sys/fs/file-max reports 314446.
squidclient mgr:info reports 1024 as the max number of file 
descriptors.

I've tried both to set SQUID_MAXFD=4096 on etc/default/squid and
max_filedescriptors 4096 on squid.conf but neither was successful.  
Do

I really have to recompile Squid to increase the max number of FDs?



You need to run ulimit to raise the per-process limit before starting 
Squid.

Squid runs with the lower of this set of limits:
 OS /proc
 ulimit -n
 ./configure --with-filedescriptors=N (default 1024)
 squid.conf max_filedescriptors   (default 0, 'unlimited')
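
Each of those can be checked quickly, for example (grep patterns are
approximate):

cat /proc/sys/fs/file-max         # OS-wide limit
ulimit -n                         # limit of the shell that starts Squid
squid -v | grep filedescriptors   # shows --with-filedescriptors, if it was used
squidclient mgr:info | grep -i 'file descriptor'   # what the running Squid got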




Today Squid crashed again, and when I tried to relaunch it it gave
this output:

2011/10/11 11:18:29| Process ID 28264
2011/10/11 11:18:29| With 1024 file descriptors available
2011/10/11 11:18:29| Initializing IP Cache...
2011/10/11 11:18:29| DNS Socket created at [::], FD 5
2011/10/11 11:18:29| DNS Socket created at 0.0.0.0, FD 6
(...)
2011/10/11 11:18:29| helperOpenServers: Starting 40/40 'squirm' 
processes

2011/10/11 11:18:39| Unlinkd pipe opened on FD 91
2011/10/11 11:18:39| Store logging disabled
2011/10/11 11:18:39| Swap maxSize 10240000 + 262144 KB, estimated 807857 objects
2011/10/11 11:18:39| Target number of buckets: 40392
2011/10/11 11:18:39| Using 65536 Store buckets
2011/10/11 11:18:39| Max Mem  size: 262144 KB
2011/10/11 11:18:39| Max Swap size: 10240000 KB
2011/10/11 11:18:39| /var/cache/swap.state.new: (28) No space left on 
device

FATAL: storeDirOpenTmpSwapLog: Failed to open swap log.


So what is taking up all that space?
 2GB+ objects in the cache screwing with the actual size calculation?
 logs?
 swap.state too big?
 core dumps?
 other applications?



I therefore deactivated the cache and rerun Squid.  It showed a long
list of errors of this type:
IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 10: (2) No such file or
directory
and then started.


*then* started?  This error appears when a client connects. Squid has 
to already be started accepting connections for it to occur.



 Now Squid is running and serving requests, albeit
without caching.  However, I keep seeing the same error:
client_side.cc(2977) okToAccept: WARNING! Your cache is running out 
of

filedescriptors

What is the reason of this since I'm not using caching at all?


Cache only uses one FD. Client connection uses one, server connection 
uses one. Each helper uses at least one. Your Squid seems to be thinking 
it only has 1024 to share between all those connections. Squid can 
handle this, but it has to do so by slowing down the incoming traffic a 
lot and possibly dropping some client connections.
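
To put rough numbers on it: 40 squirm helpers plus logs, DNS and
listening sockets use somewhere around 50 FDs before any traffic, and
each active client then needs one client-side FD plus usually one
server-side FD. So roughly (1024 - 50) / 2, i.e. a bit under 500
concurrent client connections, is enough to hit the limit, which a few
thousand daily users can easily reach at peak times.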



Amos