I'm also seeing 17 "listening" file descriptors on Kannel's administration port;
one is registered for each of the running bearerbox threads. This may be a bug (or
feature?) in lsof, which displays the same entry in the process file descriptor
table once for each thread, even though pthreads on Linux share the file descriptor
table. I can also list over a thousand open file descriptors for bearerbox alone,
again for the same reason. Try filtering by the process id instead of the process
name and you'll get a much more reasonable count (in my case, about 70 open file
descriptors).
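To see the per-process (rather than per-thread) view without lsof at all, you can read the kernel's own descriptor table. A minimal sketch, Linux-specific and assuming /proc is mounted; /proc/<pid>/fd lists the shared descriptor table exactly once, so it avoids the per-thread duplication described above:

```python
import os

def count_open_fds(pid="self"):
    # /proc/<pid>/fd holds one symlink per open descriptor of the
    # process; threads share this table, so nothing is double-counted.
    return len(os.listdir(f"/proc/{pid}/fd"))

# Count this interpreter's own open descriptors as a demonstration;
# substitute a real pid (e.g. bearerbox's) to check a daemon.
print(count_open_fds())
```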
--
Oded Arbel
m-Wise mobile solutions
[EMAIL PROTECTED]
+972-9-9581711
+972-67-340014
::..
These download files are in Microsoft Word 6.0 format. After unzipping, these files
can be viewed in any text editor, including all versions of Microsoft Word, WordPad,
and Microsoft Word Viewer.
-- From www.Microsoft.com
-----Original Message-----
From: Steve Rapaport [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 24, 2002 9:42 PM
To: Stipe Tolj
Cc: [EMAIL PROTECTED]
Subject: Re: Bug: Status stops working
Just a progress report on the "status" bug:
Stipe guesses a possible overuse of file descriptors,
possibly due to HTTP 1.1 keepalives. Seems reasonable
to me.
What I've found so far:
1. Our requests for status are performed using "readfile",
which claims to use HTTP 1.0, hence no "keepalive" from that side.
2. The "/sbin/lsof" program can report the number of open file
descriptors for each program or process. I just tried it now
and it reports 17 open listening TCP descriptors for bearerbox on port 13100,
which does seem excessive (I'm sure we're not running 17
status report requests). But it's not enough to cause a problem yet,
and we are having no problems with status yet.
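On point 1, the HTTP 1.0 claim is easy to check in isolation: an HTTP/1.0 exchange defaults to closing the connection after each response, so no descriptor should linger. A minimal sketch using Python's standard library against a throwaway local server (the server name, port, and handler here are illustrative, not part of Kannel):

```python
import http.client
import http.server
import threading

# Throwaway local server; BaseHTTPRequestHandler defaults to HTTP/1.0,
# i.e. no keep-alive unless explicitly negotiated.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.read())
# will_close is True for an HTTP/1.0 response without keep-alive:
# the connection (and its descriptor) is released after each request.
print("connection will close:", resp.will_close)
server.shutdown()
```

If descriptors were leaking on keepalive connections, the suspect would be the HTTP 1.1 side, not a client behaving like this one.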
By the way, we're getting a report of 2375 open descriptors for
bearerbox alone! The number doesn't seem to fluctuate much
within a few minutes. This really seems like a lot to me, but perhaps
someone else knows better. I suspect that a lot of descriptors are
not being closed.
No, I'm not using today's CVS build; I'm using one from March. Perhaps
a new build would help; Stipe says that there have been a couple of
patches recently.
Steve
[root@web1 admin]# lsof -c bearer | grep 13100
bearerbox 3975 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 3976 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29720 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29721 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29722 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29723 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29724 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29725 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29726 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29727 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29728 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29729 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29730 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29731 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29734 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29735 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)
bearerbox 29736 kannel 7u IPv4 2076138 TCP *:13100 (LISTEN)