I did some captures during a peak this morning, so I have some lsof and
netstat data.

It seems to me that most of the file descriptors used by tomcat are HTTP
connections:

thomas@localhost ~/ads3/tbo11h12 $ cat lsof | wc -l
17772
thomas@localhost ~/ads3/tbo11h12 $ cat lsof | grep TCP | wc -l
13966

(Note that the application also sends requests to external servers via HTTP.)
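
To get a rough split between the two in that capture, something like the
following should work (this assumes lsof was run with -n -P, i.e. numeric
hosts and ports, and that nginx talks to the connector on port 8080; https
traffic to port 443 would need its own pattern):

# nginx-facing connections: Tomcat's side of the connector on port 8080
grep TCP lsof | grep -c ':8080->'

# outbound calls to external http servers (remote port 80)
grep TCP lsof | grep -c -- '->.*:80 '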


Regarding netstat, I wrote a small script to aggregate the connections under
human-readable names. If my script is right, the connections between nginx
and tomcat are as follows (a sketch of the grouping is shown after the
numbers):

tomcat => nginx SYN_RECV 127
tomcat => nginx ESTABLISHED 1650
tomcat => nginx CLOSE_WAIT 8381
tomcat => nginx TIME_WAIT 65

nginx => tomcat SYN_SENT 20119
nginx => tomcat ESTABLISHED 4692
nginx => tomcat TIME_WAIT 122
nginx => tomcat FIN_WAIT2 488
nginx => tomcat FIN_WAIT1 13
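
The script itself does little more than group the netstat output by which
side owns port 8080 and by TCP state; a minimal sketch of the idea (assuming
the connector address 127.0.0.1:8080 from the config quoted below, and the
usual netstat -tan column layout) would be:

netstat -tan | awk '
  # local address is 127.0.0.1:8080 -> socket belongs to Tomcat
  $4 ~ /127\.0\.0\.1:8080$/ { tomcat[$6]++ }
  # foreign address is 127.0.0.1:8080 -> socket belongs to nginx
  $5 ~ /127\.0\.0\.1:8080$/ { nginx[$6]++ }
  END {
    for (s in tomcat) print "tomcat => nginx", s, tomcat[s]
    for (s in nginx)  print "nginx => tomcat", s, nginx[s]
  }'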

Concerning the other response about the system-wide maximum number of open
files, I am not sure this is where our issue lies. The peak itself seems to
be a symptom of an issue: tomcat's fd count stays around 1000 almost all the
time, except when a peak occurs, in which case it can go up to 10000 or
sometimes more.
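
To correlate the ramp-up with the logs I could sample the fd count over
time, for example with a loop like this (the pgrep pattern for the Tomcat
bootstrap class is an assumption and may need adjusting; reading
/proc/PID/fd requires running as the tomcat user or root):

# assumes a single Tomcat JVM on the box
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
while true; do
  echo "$(date '+%H:%M:%S') $(ls /proc/$PID/fd | wc -l)"
  sleep 10
done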

Thomas



2015-04-20 15:41 GMT+02:00 Rainer Jung <rainer.j...@kippdata.de>:

> On 20.04.2015 at 14:11, Thomas Boniface wrote:
>
>> Hi,
>>
>> I have tried to find help regarding an issue we experience with our
>> platform, leading to random file descriptor peaks. This happens more often
>> under heavy load but can also happen during low-traffic periods.
>>
>> Our application uses the Servlet 3.0 async features and an async
>> connector. We noticed that a lot of issues regarding the asynchronous
>> features were fixed between our production version and the latest stable
>> build, so we decided to give it a try to see if it would improve things or
>> at least give clues about what can cause the issue. Unfortunately it did
>> neither.
>>
>> The file descriptor peaks and application blocking happen frequently with
>> this version, whereas they only happened rarely on the previous version
>> (tomcat7 7.0.28-4).
>>
>> Tomcat is behind an nginx server. The tomcat connector used is configured
>> as follows:
>>
>> We use an NIO connector:
>> <Connector port="8080"
>>        protocol="org.apache.coyote.http11.Http11NioProtocol"
>>        selectorTimeout="1000"
>>        maxThreads="200"
>>        maxHttpHeaderSize="16384"
>>        address="127.0.0.1"
>>        redirectPort="8443"/>
>>
>> In the catalina logs I can see some "Broken pipe" messages that were not
>> appearing with the previous version.
>>
>> I compared thread dumps from the server with both the new and "old"
>> versions of tomcat, and both look similar from my standpoint.
>>
>> My explanation may not be very clear, but I hope this gives an idea of
>> what we are experiencing. Any pointer would be welcome.
>>
>
> If the peaks last long enough and your platform has the tools available,
> you can use lsof to look at what those FDs are - or on Linux look at
> "ls -l /proc/PID/fd/*" (where PID is the Tomcat process ID) - or on Solaris
> use the pfiles command.
>
> If the result is what is expected, namely that by far most FDs come from
> network connections on port 8080, then you can check via "netstat" which
> connection state those are in.
>
> If most are in ESTABLISHED state, then you/we need to further break down
> the strategy.
>
> Regards,
>
> Rainer
