Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-26 Thread Christopher Schultz
java    165217 root  110w  REG    8,6 25506176  1835053 /apps/apache-tomcat/logs/fms.log
java    165217 root  111w  REG    8,6   919042  1835019 /apps/apache-tomcat/logs/obd.log
java    165217 root  112u sock    0,9  0t0 16823821 protocol: TCPv6
java    165217 root  113u sock    0,9  0t0 16823822 protocol: TCPv6
java    165217 root  114u sock    0,9  0t0 16823633 protocol: TCPv6
java    165217 root  115u sock    0,9  0t0 16823527 protocol: TCPv6
java    165217 root  116u sock    0,9  0t0 16823489 protocol: TCPv6
java    165217 root  117u sock    0,9  0t0 16823490 protocol: TCPv6
java    165217 root  118u sock    0,9  0t0 16823491 protocol: TCPv6

..

Thanks

-Original Message-
From: André Warnier (tomcat/perl) 
Sent: Tuesday, May 25, 2021 12:42 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Hi.
The point is to try to figure out what these thousands of apparently "TCPv6" sockets belonging to the tomcat process actually are, so that we can maybe begin to look at where they may be coming from.
The trouble is, the lsof output so far did not really tell us what these "sock" things might be.

But there may be a clue here :
https://serverfault.com/questions/1000338/in-lsof-output-what-are-those-sock-lines 


(about when things run in a container).
Is that your case ?
And if so, could you run the lsof command in the container, as they 
suggest ?
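For example, if this does run under Docker (the container name below is only a placeholder, and it is assumed here that the JVM is PID 1 inside the container), the fd count can be re-checked from inside the container's namespaces, even without lsof installed in the image :

# docker exec <container> ls /proc/1/fd | wc -l
# docker exec <container> lsof -p 1 | grep "protocol: TCPv6" | wc -l    (only if lsof exists in the image)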


And the point of forcing a tomcat/JVM GC was this :
When you restart tomcat (actually the JVM which runs tomcat), the OS will clean up *all* the file descriptors belonging to that process, including the "legitimate" ones shown by netstat, and the "unknown" ones shown in addition by lsof.
Doing a GC, without stopping the JVM, would clean up *only* such sockets/fd that are held by objects which are discarded, but still sit on the heap awaiting a GC to really destroy them.  If your heap is very large, it may otherwise take a long while before such a GC happens, and such sockets could accumulate.
One way to trigger a GC is through JMX, but it takes a bit of additional setup to make that work. That's why I was asking if you had some method to do that.
(see : https://code.google.com/archive/p/jmxsh/)
But let's look at the lsof part first.
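(As a quick alternative, and only if a full JDK happens to be installed next to that JRE : this is just a sketch, where <tomcat-pid> is whatever PID ps shows for the tomcat JVM.

# jcmd <tomcat-pid> GC.run
# jmap -histo:live <tomcat-pid> > /dev/null    (the :live option also forces a full GC)

Then re-run lsof -p <tomcat-pid> | grep "protocol: TCPv6" | wc -l to see whether the count drops.)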



On 24.05.2021 16:09, Yeggy Javadi wrote:

Hi,
I restarted tomcat, so the PID has changed to 143152; I do not know how to trigger a tomcat GC, so I just restarted it to reset the lsof count to 0.

Please see outputs below:

# ps -ef | grep tomcat
root  143152   1  0 May22 ?    00:26:44 
/usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties 
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager 
-d64 -server -Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 
-Dignore.endorsed.dirs= -classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar 
-Dcatalina.base=/usr/local/apache-tomcat 
-Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp 
org.apache.catalina.startup.Bootstrap start
root  153962  153912  0 10:00 pts/1    00:00:00 grep --color=auto 
tomcat


# lsof -p 143152 | wc -l
41043

# lsof -p 143152 | grep "protocol: TCPv6"| wc -l
40487

# netstat -p -a -n --inet6 | grep 143152
tcp6   0  0 :::8443 :::*
LISTEN  143152/java
tcp6   0  0 :::443  :::*
LISTEN  143152/java
tcp6   0  0 127.0.0.1:8005  :::*
LISTEN  143152/java
tcp6   0  0 :::1099 :::*
LISTEN  143152/java
tcp6   0  0 :::80   :::*
LISTEN  143152/java
tcp6   0  0 :::36081    :::*
LISTEN  143152/java
tcp6   0  0 10.4.3.55:60736 10.4.3.55:9300  
ESTABLISHED 143152/java
tcp6   0  0 10.4.3.55:60732 10.4.3.55:9300  
ESTABLISHED 143152/java
tcp6   0  0 10.4.3.55:60728 10.4.3.55:9300  
ESTABLISHED 143152/java
tcp6   0  0 10.4.3.55:80    10.197.255.10:55446 
ESTABLISHED 143152/java
tcp6   1  0 10.4.3.55:55958 10.4.3.55:11576 
CLOSE_WAIT  143152/java
tcp6   0  0 10.4.3.55:53682 172.22.21.48:443
ESTABLISHED 143152/java
tcp6   0  0 127.0.0.1:48622 127.0.0.1:5432  
ESTABLISHED 143152/java
tcp6   0  0 10.4.3.55:60748 10.4.3.55:9300  
ESTABLISHED 143152/java
tcp6   1  0 10.4.3.55:55956 1

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-26 Thread tomcat/perl
java    165217 root   93u IPv6   16786133  0t0  TCP 127.0.0.1:mxi (LISTEN)

java    165217 root   94u sock    0,9  0t0 16963075 
protocol: TCPv6
java    165217 root   96u sock    0,9  0t0 16824649 
protocol: TCPv6
java    165217 root   97u IPv6   17051898  0t0  TCP 
10.4.3.55:60204->172.22.22.192:https (ESTABLISHED)

java    165217 root   98u sock    0,9  0t0 16824673 
protocol: TCPv6
java    165217 root   99u sock    0,9  0t0 16824711 
protocol: TCPv6
java    165217 root  100u sock    0,9  0t0 16893245 
protocol: TCPv6
java    165217 root  101u sock    0,9  0t0 16824385 
protocol: TCPv6
java    165217 root  102u sock    0,9  0t0 16824264 
protocol: TCPv6
java    165217 root  103u sock    0,9  0t0 16824278 
protocol: TCPv6
java    165217 root  104u sock    0,9  0t0 16824279 
protocol: TCPv6
java    165217 root  105u sock    0,9  0t0 16824063 
protocol: TCPv6
java    165217 root  106u sock    0,9  0t0 16823982 
protocol: TCPv6
java    165217 root  107u sock    0,9  0t0 16823959 
protocol: TCPv6
java    165217 root  108r  CHR    1,8  0t0 1041 
/dev/random
java    165217 root  109u sock    0,9  0t0 16778709 
protocol: TCPv6
java    165217 root  110w  REG    8,6 25506176  1835053 
/apps/apache-tomcat/logs/fms.log
java    165217 root  111w  REG    8,6   919042  1835019 
/apps/apache-tomcat/logs/obd.log

java    165217 root  112u sock    0,9  0t0 16823821 
protocol: TCPv6
java    165217 root  113u sock    0,9  0t0 16823822 
protocol: TCPv6
java    165217 root  114u sock    0,9  0t0 16823633 
protocol: TCPv6
java    165217 root  115u sock    0,9  0t0 16823527 
protocol: TCPv6
java    165217 root  116u sock    0,9  0t0 16823489 
protocol: TCPv6
java    165217 root  117u sock    0,9  0t0 16823490 
protocol: TCPv6
java    165217 root  118u sock    0,9  0t0 16823491 
protocol: TCPv6
..

Thanks

-Original Message-
From: André Warnier (tomcat/perl) 
Sent: Tuesday, May 25, 2021 12:42 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Hi.
The point is to try to figure out what these thousands of apparently "TCPv6" 
sockets
belonging to the tomcat process actually are, so that we can maybe begin to 
look at where
they may be coming from.
The trouble is, the lsof output so far did not really tell us what these "sock" 
things
might be.

But there may be a clue here :
https://serverfault.com/questions/1000338/in-lsof-output-what-are-those-sock-lines
(about when things run in a container).
Is that your case ?
And if so, could you run the lsof command in the container, as they suggest ?

And the point of forcing a tomcat/JVM GC was this :
When you restart tomcat (actually the JVM which runs tomcat), the OS will clean 
up *all*
the file descriptors belonging to that process, including the "legitimate" ones 
shown by
netstat, and the "unknown" ones shown in addition by lsof.
Doing a GC, without stopping the JVM, would clean up *only* such sockets/fd 
that are held
by objects which are discarded, but still sit on the heap awaiting a GC to 
really destroy
them.  If your heap is very large, it may otherwise take a long while before 
such a GC
happens, and such sockets could accumulate.
One way to trigger a GC is through JMX, but it takes a bit of additional setup 
to make
that work. That's why I was asking if you had some method to do that.
(see : https://code.google.com/archive/p/jmxsh/)
But let's look at the lsof part first.



On 24.05.2021 16:09, Yeggy Javadi wrote:

Hi,
I restarted tomcat, so the PID has changed to 143152; I do not know how to trigger a tomcat GC, so I just restarted it to reset the lsof count to 0.

Please see outputs below:

# ps -ef | grep tomcat
root  143152   1  0 May22 ?    00:26:44 /usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties 
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 -server 
-Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m -Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= 
-classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar 
-Dcatalina.base=/usr/local/apache-tomcat -Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp org.apache.catalina.startup.Bootstrap start

root  153962  153912  0 10:00 pts/1    00:00:00 grep --color=auto tomcat

# lsof -p 143152 | wc -l
41043

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-26 Thread tomcat/perl
 
java165217 root  105u sock0,9  0t0 16824063 protocol: TCPv6
java165217 root  106u sock0,9  0t0 16823982 
protocol: TCPv6
java165217 root  107u sock0,9  0t0 16823959 
protocol: TCPv6
java165217 root  108r  CHR1,8  0t0 1041 
/dev/random
java165217 root  109u sock0,9  0t0 16778709 
protocol: TCPv6
java165217 root  110w  REG8,6 25506176  1835053 
/apps/apache-tomcat/logs/fms.log
java165217 root  111w  REG8,6   919042  1835019 
/apps/apache-tomcat/logs/obd.log
java165217 root  112u sock0,9  0t0 16823821 
protocol: TCPv6
java165217 root  113u sock0,9  0t0 16823822 
protocol: TCPv6
java165217 root  114u sock0,9  0t0 16823633 
protocol: TCPv6
java165217 root  115u sock0,9  0t0 16823527 
protocol: TCPv6
java165217 root  116u sock0,9  0t0 16823489 
protocol: TCPv6
java165217 root  117u sock0,9  0t0 16823490 
protocol: TCPv6
java165217 root  118u sock0,9  0t0 16823491 
protocol: TCPv6
..

Thanks

-Original Message-
From: André Warnier (tomcat/perl) 
Sent: Tuesday, May 25, 2021 12:42 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Hi.
The point is to try to figure out what these thousands of apparently "TCPv6" 
sockets
belonging to the tomcat process actually are, so that we can maybe begin to 
look at where
they may be coming from.
The trouble is, the lsof output so far did not really tell us what these "sock" 
things
might be.

But there may be a clue here :
https://serverfault.com/questions/1000338/in-lsof-output-what-are-those-sock-lines
(about when things run in a container).
Is that your case ?
And if so, could you run the lsof command in the container, as they suggest ?

And the point of forcing a tomcat/JVM GC was this :
When you restart tomcat (actually the JVM which runs tomcat), the OS will clean 
up *all*
the file descriptors belonging to that process, including the "legitimate" ones 
shown by
netstat, and the "unknown" ones shown in addition by lsof.
Doing a GC, without stopping the JVM, would clean up *only* such sockets/fd 
that are held
by objects which are discarded, but still sit on the heap awaiting a GC to 
really destroy
them.  If your heap is very large, it may otherwise take a long while before 
such a GC
happens, and such sockets could accumulate.
One way to trigger a GC is through JMX, but it takes a bit of additional setup 
to make
that work. That's why I was asking if you had some method to do that.
(see : https://code.google.com/archive/p/jmxsh/)
But let's look at the lsof part first.



On 24.05.2021 16:09, Yeggy Javadi wrote:

Hi,
I restarted tomcat, so the PID has changed to 143152; I do not know how to trigger a tomcat GC, so I just restarted it to reset the lsof count to 0.
Please see outputs below:

# ps -ef | grep tomcat
root  143152   1  0 May22 ?00:26:44 /usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 -server 
-Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 
-Dignore.endorsed.dirs= -classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar
 -Dcatalina.base=/usr/local/apache-tomcat 
-Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp 
org.apache.catalina.startup.Bootstrap start
root  153962  153912  0 10:00 pts/100:00:00 grep --color=auto tomcat

# lsof -p 143152 | wc -l
41043

# lsof -p 143152 | grep "protocol: TCPv6"| wc -l
40487

# netstat -p -a -n --inet6 | grep 143152
tcp6   0  0 :::8443 :::*LISTEN  
143152/java
tcp6   0  0 :::443  :::*LISTEN  
143152/java
tcp6   0  0 127.0.0.1:8005  :::*LISTEN  
143152/java
tcp6   0  0 :::1099 :::*LISTEN  
143152/java
tcp6   0  0 :::80   :::*LISTEN  
143152/java
tcp6   0  0 :::36081:::*LISTEN  
143152/java
tcp6   0  0 10.4.3.55:60736 10.4.3.55:9300  ESTABLISHED 
143152/java
tcp6   0  0 10.4.3.55:60732 10.4.3.55:9300  ESTABLISHED 
143152/java
tcp6   0  0 10.4.3.55:60728 10.4.3.55:9300  ESTABLISHED 
143152/java
tcp6   0  0 10.4.3.55:8010.197.255.10:55446 ESTABLISHED 
143152/java
tcp6

RE: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-25 Thread Yeggy Javadi
-INF/lib/spring-jms-3.2.17.RELEASE.jar
java165217 root  mem   REG8,264021   160837 
/usr/local/apache-tomcat-8.5.59/webapps/ROOT/WEB-INF/lib/lucene-join-4.10.4.jar
java165217 root  mem   REG8,2   714194   160828 
/usr/local/apache-tomcat-8.5.59/webapps/ROOT/WEB-INF/lib/javassist-3.18.1-GA.jar
java165217 root  mem   REG8,2   279193   160876 
/usr/local/apache-tomcat-8.5.59/webapps/ROOT/WEB-INF/lib/commons-lang-2.5.jar
java165217 root  mem   REG8,275311   160826 
/usr/local/apache-tomcat-8.5.59/webapps/ROOT/WEB-INF/lib/hibernate-commons-annotations-4.0.4.Final.jar
java165217 root  mem   REG8,2   436303   160745 
/usr/local/apache-tomcat-8.5.59/webapps/ROOT/WEB-INF/lib/avro-1.7.7.jar

java165217 root   73w FIFO   0,13  0t0 16786125 pipe
java165217 root   74u  a_inode   0,14010970 
[eventpoll]
java165217 root   75r FIFO   0,13  0t0 16786126 pipe
java165217 root   76w FIFO   0,13  0t0 16786126 pipe
java165217 root   77u  a_inode   0,14010970 
[eventpoll]
java165217 root   78u sock0,9  0t0 16961335 
protocol: TCPv6
java165217 root   79r FIFO   0,13  0t0 16786127 pipe
java165217 root   80w FIFO   0,13  0t0 16786127 pipe
java165217 root   81u  a_inode   0,14010970 
[eventpoll]
java165217 root   82r FIFO   0,13  0t0 16786128 pipe
java165217 root   83w FIFO   0,13  0t0 16786128 pipe
java165217 root   84u  a_inode   0,14010970 
[eventpoll]
java165217 root   86r FIFO   0,13  0t0 16786129 pipe
java165217 root   87w FIFO   0,13  0t0 16786129 pipe
java165217 root   88u  a_inode   0,14010970 
[eventpoll]
java165217 root   89r FIFO   0,13  0t0 16786130 pipe
java165217 root   90w FIFO   0,13  0t0 16786130 pipe
java165217 root   91u  a_inode   0,14010970 
[eventpoll]
java165217 root   93u IPv6   16786133  0t0  TCP 
127.0.0.1:mxi (LISTEN)
java165217 root   94u sock0,9  0t0 16963075 
protocol: TCPv6
java165217 root   96u sock0,9  0t0 16824649 
protocol: TCPv6
java165217 root   97u IPv6   17051898  0t0  TCP 
10.4.3.55:60204->172.22.22.192:https (ESTABLISHED)
java165217 root   98u sock0,9  0t0 16824673 
protocol: TCPv6
java165217 root   99u sock0,9  0t0 16824711 
protocol: TCPv6
java165217 root  100u sock0,9  0t0 16893245 
protocol: TCPv6
java165217 root  101u sock0,9  0t0 16824385 
protocol: TCPv6
java165217 root  102u sock0,9  0t0 16824264 
protocol: TCPv6
java165217 root  103u sock0,9  0t0 16824278 
protocol: TCPv6
java165217 root  104u sock0,9  0t0 16824279 
protocol: TCPv6
java165217 root  105u sock0,9  0t0 16824063 
protocol: TCPv6
java165217 root  106u sock0,9  0t0 16823982 
protocol: TCPv6
java165217 root  107u sock0,9  0t0 16823959 
protocol: TCPv6
java165217 root  108r  CHR1,8  0t0 1041 
/dev/random
java165217 root  109u sock0,9  0t0 16778709 
protocol: TCPv6
java165217 root  110w  REG8,6 25506176  1835053 
/apps/apache-tomcat/logs/fms.log
java165217 root  111w  REG8,6   919042  1835019 
/apps/apache-tomcat/logs/obd.log
java165217 root  112u sock0,9  0t0 16823821 
protocol: TCPv6
java165217 root  113u sock0,9  0t0 16823822 
protocol: TCPv6
java165217 root  114u sock0,9  0t0 16823633 
protocol: TCPv6
java165217 root  115u sock0,9  0t0 16823527 
protocol: TCPv6
java165217 root  116u sock0,9  0t0 16823489 
protocol: TCPv6
java165217 root  117u sock0,9  0t0 16823490 
protocol: TCPv6
java165217 root  118u sock0,9  0t0 16823491 
protocol: TCPv6
..

Thanks

-Original Message-
From: André Warnier (tomcat/perl)  
Sent: Tuesday, May 25, 2021 12:42 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Hi.
The point is to try to figure out what these thousands of apparently "TCPv6" 
sockets 
belonging to the tomcat process actually are, so that we can maybe begin to 
look at where 
they 

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-25 Thread tomcat/perl
java143152 root *817u  IPv6 15611323  0t0  TCP Yeggy-F8-FMSVA:36081->Yeggy-F8-FMSVA:40318 (ESTABLISHED)
java143152 root *818u  IPv6 15602584  0t0  TCP 
Yeggy-F8-FMSVA:http->10.197.255.10:55039 (ESTABLISHED)

Thanks

-Original Message-----
From: André Warnier (tomcat/perl) 
Sent: Sunday, May 23, 2021 6:30 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Hi.

I have no idea what lines like this are :

java130244 root   78u sock0,9  0t0 12154374 
protocol: TCPv6


There are obviously too many of them, for them to match with the sockets listed 
by netstat.

The ones which, in the lsof output, have "TYPE" set to "IPv6" seem to correspond 
to the
ones marked as "LISTEN" in the netstat output.
But the ones with TYPE="sock" and NAME="protocol: TCPv6" are a mystery to me.

Could you redo a netstat as follows :
# netstat -p -a -n --inet6 | grep 130244

and can you also try this form of lsof :
# lsof -a -p 130244 -T s -i6

And finally (after copying the result of the above) : do you know how to 
trigger a GC
(Garbage Collection) in your tomcat JVM ?
(the point is to see if when a GC happens, these things disappear).
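A simple way to watch the effect (just a sketch, using the PID from your ps output) :

# watch -n 10 'ls /proc/130244/fd | wc -l'

If the count drops sharply right after a full GC, those "sock" entries are fds held by discarded objects that were waiting for a GC.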


On 22.05.2021 18:03, Yeggy Javadi wrote:

Here it is:

# netstat -p -a  --tcp | grep 130244
tcp6   0  0 [::]:pcsync-https   [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:https  [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:37537  [::]:*  LISTEN  
130244/java
tcp6   0  0 localhost.localdoma:mxi [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:8009   [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:rmiregistry[::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:http   [::]:*  LISTEN  
130244/java
tcp6  86  0 Yeggy-F8-FMSVA:39680172.22.22.192:https CLOSE_WAIT  
130244/java
tcp6   0  1 Yeggy-F8-FMSVA:5361810.12.3.78:httpsSYN_SENT
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54772Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42664 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54782Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54766Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42662 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54778Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54788Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54770Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54790Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54776Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54786Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54780Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:45736 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54768Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54784Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42660 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  1 Yeggy-F8-FMSVA:4292210.12.3.77:httpsSYN_SENT
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:35794172.22.22.192:https ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54774Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:45734 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:41016 localhost.localdo:vrace ESTABLISHED 
130244/java


# lsof -p 130244
COMMANDPID USER   FD  TYPE DEVICE SIZE/OFF NODE NAME
java130244 root  cwd   DIR8,2 4096   157664 
/usr/local/freestor/bin
java130244 root  rtd   DIR8,3 40962 /
java130244 root  txt   REG8,2 8712 8913 
/usr/local/jdk/jre1.8.0_271/bin/java
java130244 root  mem   REG8,2   498864 9007 
/usr/local/jdk/jre1.8.0_271/lib/amd64/libfontmanager.so
java130244 root  mem   REG8,239176 9006 
/usr/local/jdk/jre1.8.0_271/lib/amd64/libawt_headless.so
java130244 root  mem   REG8,2   759184 8996 
/usr/local/jdk/jre1.8.0_271/lib/amd64/libawt.so
java130244 root  mem   REG8,2  3559360 9139 
/usr/local/jdk/jre1.8.0_271/lib/resources.jar
java130244 root  mem   REG8,299680   133076

RE: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-24 Thread Yeggy Javadi
 0t0  TCP 
localhost.localdomain:52312->localhost.localdomain:postgres (ESTABLISHED)
java143152 root  215u  IPv6 13822102  0t0  TCP 
localhost.localdomain:48620->localhost.localdomain:postgres (ESTABLISHED)
java143152 root  216u  IPv6 13823064  0t0  TCP 
localhost.localdomain:48624->localhost.localdomain:postgres (ESTABLISHED)
java143152 root  217u  IPv6 13821107  0t0  TCP 
localhost.localdomain:48622->localhost.localdomain:postgres (ESTABLISHED)
java143152 root  269u  IPv6 13824001  0t0  TCP 
localhost.localdomain:46976->localhost.localdomain:vrace (ESTABLISHED)
java143152 root  270u  IPv6 13787907  0t0  TCP 
Yeggy-F8-FMSVA:60726->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  271u  IPv6 13787908  0t0  TCP 
Yeggy-F8-FMSVA:60728->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  272u  IPv6 13787909  0t0  TCP 
Yeggy-F8-FMSVA:60730->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  273u  IPv6 13787910  0t0  TCP 
Yeggy-F8-FMSVA:60732->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  274u  IPv6 13787911  0t0  TCP 
Yeggy-F8-FMSVA:60734->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  275u  IPv6 13787912  0t0  TCP 
Yeggy-F8-FMSVA:60736->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  276u  IPv6 13787913  0t0  TCP 
Yeggy-F8-FMSVA:60738->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  277u  IPv6 13787914  0t0  TCP 
Yeggy-F8-FMSVA:60740->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  278u  IPv6 13787915  0t0  TCP 
Yeggy-F8-FMSVA:60742->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  279u  IPv6 13787916  0t0  TCP 
Yeggy-F8-FMSVA:60744->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  280u  IPv6 13787917  0t0  TCP 
Yeggy-F8-FMSVA:60746->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  281u  IPv6 13787918  0t0  TCP 
Yeggy-F8-FMSVA:60748->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  282u  IPv6 13787919  0t0  TCP 
Yeggy-F8-FMSVA:60750->Yeggy-F8-FMSVA:vrace (ESTABLISHED)
java143152 root  293u  IPv6 13821122  0t0  TCP *:rmiregistry (LISTEN)
java143152 root  294u  IPv6 13816318  0t0  TCP *:36081 (LISTEN)
java143152 root  304u  IPv6 13787923  0t0  TCP *:pcsync-https (LISTEN)
java143152 root *740u  IPv6 15607907  0t0  TCP 
Yeggy-F8-FMSVA:53366->172.22.21.48:https (ESTABLISHED)
java143152 root *876u  IPv6 15608779  0t0  TCP 
Yeggy-F8-FMSVA:40328->172.22.21.47:https (ESTABLISHED)
java143152 root *881u  IPv6 15614275  0t0  TCP 
Yeggy-F8-FMSVA:59970->172.22.22.192:https (ESTABLISHED)
java143152 root *882u  IPv6 15558639  0t0  TCP 
Yeggy-F8-FMSVA:53526->172.22.21.48:https (ESTABLISHED)
java143152 root *067u  IPv6 15602583  0t0  TCP 
Yeggy-F8-FMSVA:http->10.197.255.10:55037 (ESTABLISHED)
java143152 root *070u  IPv6 15602586  0t0  TCP 
Yeggy-F8-FMSVA:40318->Yeggy-F8-FMSVA:36081 (ESTABLISHED)
java143152 root *073u  IPv6 15602582  0t0  TCP 
Yeggy-F8-FMSVA:http->10.197.255.10:55036 (ESTABLISHED)
java143152 root *080u  IPv6 15512947  0t0  TCP 
Yeggy-F8-FMSVA:55956->Yeggy-F8-FMSVA:11576 (CLOSE_WAIT)
java143152 root *081u  IPv6 15512948  0t0  TCP 
Yeggy-F8-FMSVA:55958->Yeggy-F8-FMSVA:11576 (CLOSE_WAIT)
java143152 root *817u  IPv6 15611323  0t0  TCP 
Yeggy-F8-FMSVA:36081->Yeggy-F8-FMSVA:40318 (ESTABLISHED)
java143152 root *818u  IPv6 15602584  0t0  TCP 
Yeggy-F8-FMSVA:http->10.197.255.10:55039 (ESTABLISHED)

Thanks

-Original Message-
From: André Warnier (tomcat/perl)  
Sent: Sunday, May 23, 2021 6:30 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Hi.

I have no idea what lines like this are :
> java130244 root   78u sock0,9  0t0 12154374 
> protocol: TCPv6

There are obviously too many of them, for them to match with the sockets listed 
by netstat.

The ones which, in the lsof output, have "TYPE" set to "IPv6" seem to correspond 
to the 
ones marked as "LISTEN" in the netstat output.
But the ones with TYPE="sock" and NAME="protocol: TCPv6" are a mystery to me.

Could you redo a netstat as follows :
# netstat -p -a -n --inet6 | grep 130244

and can you also try this form of lsof :
# lsof -a -p 130244 -T s -i6

And finally (after copying the result of the above) : do you know how to 
trigger a GC 
(Garbage Collection) in your tomcat JVM ?
(the point is to see if when a GC happens, these things disappear).


On 22.05.2021 18:03, Yeggy Javadi wrote:
> Here it is:
> 
> # netstat -p -a  --tcp | grep 130244
> tcp6   0  0 [::]:pcsync-https   [::]:*  LISTEN
>   130244/java
> tcp6   0  0 [::]:https  [::]:*  LISTEN
>   130244/java
> tcp6  

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-23 Thread tomcat/perl
30244 root  115u sock0,9  0t0 11987351 
protocol: TCPv6
java130244 root  116u sock0,9  0t0 11987352 
protocol: TCPv6
java130244 root  117u sock0,9  0t0 11987353 
protocol: TCPv6
java130244 root  118u sock0,9  0t0 11987354 
protocol: TCPv6
java130244 root  119u sock0,9  0t0 11987355 
protocol: TCPv6
java130244 root  120u sock0,9  0t0 11987356 
protocol: TCPv6
java130244 root  121u sock0,9  0t0 11987357 
protocol: TCPv6
java130244 root  122u sock0,9  0t0 11987358 
protocol: TCPv6
java130244 root  123u sock0,9  0t0 11987359 
protocol: TCPv6
java130244 root  124u sock0,9  0t0 11987360 
protocol: TCPv6
...
Keep seeing the same line

-Original Message-
From: André Warnier (tomcat/perl) 
Sent: Saturday, May 22, 2021 10:22 AM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Mmm. Nothing very special in that netstat output.
The sockets seen there look quite normal for tomcat, and there are not a lot.
What about the IPv4 sockets ? (remove the -6 in your netstat command)

Looks like lsof is counting things which are not IPv6 TCP sockets belonging to 
the tomcat JVM process.
Where is the difference between the lsof count (19948) and the netstat count 
(25) ?
(genuine question, I'm puzzled too)

Can you give an example of the "lsof -p 130244" output lines ?
(not all 19948 please, but enough to see some variety)
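A quick way to show the variety without pasting everything (untested sketch; $5 is the TYPE column in lsof's default output) :

# lsof -p 130244 | awk '{print $5}' | sort | uniq -c | sort -rn | head

That should show how the ~20000 lines split up between "sock", "IPv6", "REG", "FIFO", etc.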

On 21.05.2021 16:11, Yeggy Javadi wrote:

Hi,
Here it is:
# lsof -p 130244 | grep "protocol: TCPv6"| wc -l
19948

# netstat -p -a -6 --tcp | grep 130244
tcp6   0  0 [::]:pcsync-https   [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:https  [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:37537  [::]:*  LISTEN  
130244/java
tcp6   0  0 localhost.localdoma:mxi [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:8009   [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:rmiregistry[::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:http   [::]:*  LISTEN  
130244/java
tcp6  86  0 Yeggy-F8-FMSVA:39680172.22.22.192:https CLOSE_WAIT  
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54772Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42664 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54782Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54766Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42662 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54778Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:46966 localhost.localdo:11753 ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54788Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54770Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:64799 ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54790Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54776Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54786Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54780Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:45736 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54768Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54784Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42660 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54774Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:45734 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:64798 ESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:41016 localhost.localdo:vrace ESTABLISHED 
130244/java
Thanks
-Original Message-
From: Noelette Stout 
Sent: Friday, May 21, 2021 8:28 AM
To: Tomcat Users List 
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

ss has all the same options as netstat

On Fri, May 21, 2021 at 3:51 AM André Warnier (tomcat/perl)

wrote:


Sorry to top-post, but it's getting cluttered down there.

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-23 Thread Felix Schumacher
   0,13  0t0 11968848 pipe
>> java130244 root   83w FIFO   0,13  0t0 11968848 pipe
>> java130244 root   84u  a_inode   0,14010970 
>> [eventpoll]
>> java130244 root   86r FIFO   0,13  0t0 11968849 pipe
>> java130244 root   87w FIFO   0,13  0t0 11968849 pipe
>> java130244 root   88u  a_inode   0,14010970 
>> [eventpoll]
>> java130244 root   89r FIFO   0,13  0t0 11968850 pipe
>> java130244 root   90w FIFO   0,13  0t0 11968850 pipe
>> java130244 root   91u  a_inode   0,14010970 
>> [eventpoll]
>> java130244 root   93u IPv6   11968853  0t0  TCP 
>> localhost.localdomain:mxi (LISTEN)
>> java130244 root   94u IPv6   12298916  0t0  TCP 
>> localhost.localdomain:45734->localhost.localdomain:postgres (ESTABLISHED)
>> java130244 root   95u sock0,9  0t0 12017983 
>> protocol: TCPv6
>> java130244 root   96u sock0,9  0t0 11983086 
>> protocol: TCPv6
>> java130244 root   97u sock0,9  0t0 11983087 
>> protocol: TCPv6
>> java130244 root   98u sock0,9  0t0 11983088 
>> protocol: TCPv6
>> java130244 root   99u sock0,9  0t0 11983089 
>> protocol: TCPv6
>> java130244 root  100u sock0,9  0t0 11983090 
>> protocol: TCPv6
>> java130244 root  101u sock0,9  0t0 11983091 
>> protocol: TCPv6
>> java130244 root  102u sock0,9  0t0 11983092 
>> protocol: TCPv6
>> java130244 root  103u sock0,9  0t0 11969375 
>> protocol: TCPv6
>> java130244 root  104u sock        0,9      0t0 11987339 
>> protocol: TCPv6
>> java130244 root  105u sock0,9  0t0 11987347 
>> protocol: TCPv6
>> java130244 root  106u sock0,9  0t0 11969384 
>> protocol: TCPv6
>> java130244 root  107u sock0,9  0t0 11987348 
>> protocol: TCPv6
>> java130244 root  108r  CHR1,8  0t0 1041 
>> /dev/random
>> java130244 root  109w  REG8,6 19536380  1835053 
>> /apps/apache-tomcat/logs/fms.log
>> java130244 root  110u sock0,9  0t0 11979291 
>> protocol: TCPv6
>> java130244 root  111u sock0,9  0t0 11979292 
>> protocol: TCPv6
>> java130244 root  112w  REG8,6   835408  1835019 
>> /apps/apache-tomcat/logs/obd.log
>> java130244 root  113u sock0,9  0t0 11987349 
>> protocol: TCPv6
>> java130244 root  114u sock0,9  0t0 11987350 
>> protocol: TCPv6
>> java130244 root  115u sock0,9  0t0 11987351 
>> protocol: TCPv6
>> java130244 root  116u sock0,9  0t0 11987352 
>> protocol: TCPv6
>> java130244 root  117u sock0,9  0t0 11987353 
>> protocol: TCPv6
>> java130244 root  118u sock0,9  0t0 11987354 
>> protocol: TCPv6
>> java130244 root  119u sock0,9  0t0 11987355 
>> protocol: TCPv6
>> java130244 root  120u sock0,9  0t0 11987356 
>> protocol: TCPv6
>> java130244 root  121u sock0,9  0t0 11987357 
>> protocol: TCPv6
>> java130244 root  122u sock0,9  0t0 11987358 
>> protocol: TCPv6
>> java130244 root  123u sock0,9  0t0 11987359 
>> protocol: TCPv6
>> java130244 root  124u sock0,9  0t0 11987360 
>> protocol: TCPv6
>> ...
>> Keep seeing the same line
>
> How many of those lines are there? (lsof -p ... | wc -l)
>
> Are you sure there are more than before? Maybe your ulimits were higher
> on your older system and are now
> bothering you?
>
> What is the setting of ulimit for Tomcat's process? (cat /proc/<pid>/limits)
>
> Felix
>
>
>> -Original Message-
>> From: André Warnier (tomcat/perl)  
>> Sent: Saturday, May 22, 2021 10:22 AM
>> To: users@tomcat.apache.org
>> Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8
>>
>> Mmm. Nothing very special in that netstat output.
>

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-23 Thread Felix Schumacher
6
> java130244 root  102u sock0,9  0t0 11983092 
> protocol: TCPv6
> java130244 root  103u sock0,9  0t0 11969375 
> protocol: TCPv6
> java130244 root  104u sock0,9  0t0 11987339 
> protocol: TCPv6
> java130244 root  105u sock0,9  0t0 11987347 
> protocol: TCPv6
> java130244 root  106u sock0,9  0t0 11969384 
> protocol: TCPv6
> java130244 root  107u sock0,9  0t0 11987348 
> protocol: TCPv6
> java130244 root  108r  CHR1,8  0t0 1041 
> /dev/random
> java130244 root  109w  REG8,6 19536380  1835053 
> /apps/apache-tomcat/logs/fms.log
> java130244 root  110u sock0,9  0t0 11979291 
> protocol: TCPv6
> java130244 root  111u sock0,9  0t0 11979292 
> protocol: TCPv6
> java130244 root  112w  REG8,6   835408  1835019 
> /apps/apache-tomcat/logs/obd.log
> java130244 root  113u sock0,9  0t0 11987349 
> protocol: TCPv6
> java130244 root  114u sock0,9  0t0 11987350 
> protocol: TCPv6
> java130244 root  115u sock0,9  0t0 11987351 
> protocol: TCPv6
> java130244 root  116u sock0,9  0t0 11987352 
> protocol: TCPv6
> java130244 root  117u sock0,9  0t0 11987353 
> protocol: TCPv6
> java130244 root  118u sock0,9  0t0 11987354 
> protocol: TCPv6
> java130244 root  119u sock0,9  0t0 11987355 
> protocol: TCPv6
> java130244 root  120u sock0,9  0t0 11987356 
> protocol: TCPv6
> java130244 root  121u sock0,9  0t0 11987357 
> protocol: TCPv6
> java130244 root  122u sock0,9  0t0 11987358 
> protocol: TCPv6
> java130244 root  123u sock0,9  0t0 11987359 
> protocol: TCPv6
> java130244 root  124u sock0,9  0t0 11987360 
> protocol: TCPv6
> ...
> Keep seeing the same line


How many of those lines are there? (lsof -p ... | wc -l)

Are you sure there are more than before? Maybe your ulimits were higher
on your older system and are now
bothering you?

What is the setting of ulimit for Tomcat's process? (cat /proc/<pid>/limits)
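For example (sketch only, with the PID from the earlier mails) :

# grep 'open files' /proc/130244/limits
# lsof -p 130244 | wc -l

Comparing the two numbers shows how close the process actually is to its "Max open files" limit.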

Felix


>
> -Original Message-
> From: André Warnier (tomcat/perl)  
> Sent: Saturday, May 22, 2021 10:22 AM
> To: users@tomcat.apache.org
> Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8
>
> Mmm. Nothing very special in that netstat output.
> The sockets seen there look quite normal for tomcat, and there are not a lot.
> What about the IPv4 sockets ? (remove the -6 in your netstat command)
>
> Looks like lsof is counting things which are not IPv6 TCP sockets belonging 
> to the tomcat JVM process.
> Where is the difference between the lsof count (19948) and the netstat count 
> (25) ?
> (genuine question, I'm puzzled too)
>
> Can you give an example of the "lsof -p 130244" output lines ?
> (not all 19948 please, but enough to see some variety)
>
> On 21.05.2021 16:11, Yeggy Javadi wrote:
>> Hi,
>> Here it is:
>> # lsof -p 130244 | grep "protocol: TCPv6"| wc -l
>> 19948
>>
>> # netstat -p -a -6 --tcp | grep 130244
>> tcp6   0  0 [::]:pcsync-https   [::]:*  LISTEN   
>>130244/java
>> tcp6   0  0 [::]:https  [::]:*  LISTEN   
>>130244/java
>> tcp6   0  0 [::]:37537  [::]:*  LISTEN   
>>130244/java
>> tcp6   0  0 localhost.localdoma:mxi [::]:*  LISTEN   
>>130244/java
>> tcp6   0  0 [::]:8009   [::]:*  LISTEN   
>>130244/java
>> tcp6   0  0 [::]:rmiregistry[::]:*  LISTEN   
>>130244/java
>> tcp6   0  0 [::]:http   [::]:*  LISTEN   
>>130244/java
>> tcp6  86  0 Yeggy-F8-FMSVA:39680172.22.22.192:https 
>> CLOSE_WAIT  130244/java
>> tcp6   0  0 Yeggy-F8-FMSVA:54772Yeggy-F8-FMSVA:vrace
>> ESTABLISHED 130244/java
>> tcp6   0  0 localhost.localdo:42664 localhost.loca:postgres 
>> ESTABLISHED 130244/java
>> tcp6   0  0 Yeggy-F8-FMSVA:54782Yeggy-F8-FMSVA:vrace
>> ESTABLISHED 130244/java
>> tcp6   0  0 Yeggy-F8-FMSVA:54766Yeggy-F8-FMSVA:vrace

RE: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-22 Thread Yeggy Javadi
Pv6
...
Keep seeing the same line

-Original Message-
From: André Warnier (tomcat/perl)  
Sent: Saturday, May 22, 2021 10:22 AM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Mmm. Nothing very special in that netstat output.
The sockets seen there look quite normal for tomcat, and there are not a lot.
What about the IPv4 sockets ? (remove the -6 in your netstat command)

Looks like lsof is counting things which are not IPv6 TCP sockets belonging to 
the tomcat JVM process.
Where is the difference between the lsof count (19948) and the netstat count 
(25) ?
(genuine question, I'm puzzled too)

Can you give an example of the "lsof -p 130244" output lines ?
(not all 19948 please, but enough to see some variety)

On 21.05.2021 16:11, Yeggy Javadi wrote:
> Hi,
> Here it is:
> # lsof -p 130244 | grep "protocol: TCPv6"| wc -l
> 19948
> 
> # netstat -p -a -6 --tcp | grep 130244
> tcp6   0  0 [::]:pcsync-https   [::]:*  LISTEN
>   130244/java
> tcp6   0  0 [::]:https  [::]:*  LISTEN
>   130244/java
> tcp6   0  0 [::]:37537  [::]:*  LISTEN
>   130244/java
> tcp6   0  0 localhost.localdoma:mxi [::]:*  LISTEN
>   130244/java
> tcp6   0  0 [::]:8009   [::]:*  LISTEN
>   130244/java
> tcp6   0  0 [::]:rmiregistry[::]:*  LISTEN
>   130244/java
> tcp6   0  0 [::]:http   [::]:*  LISTEN
>   130244/java
> tcp6  86  0 Yeggy-F8-FMSVA:39680172.22.22.192:https 
> CLOSE_WAIT  130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54772Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 localhost.localdo:42664 localhost.loca:postgres 
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54782Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54766Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 localhost.localdo:42662 localhost.loca:postgres 
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54778Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 localhost.localdo:46966 localhost.localdo:11753 
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54788Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54770Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:64799 
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54790Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54776Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54786Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54780Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 localhost.localdo:45736 localhost.loca:postgres 
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54768Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54784Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 localhost.localdo:42660 localhost.loca:postgres 
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:54774Yeggy-F8-FMSVA:vrace
> ESTABLISHED 130244/java
> tcp6   0  0 localhost.localdo:45734 localhost.loca:postgres 
> ESTABLISHED 130244/java
> tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:64798 
> ESTABLISHED 130244/java
> tcp6   0  0 localhost.localdo:41016 localhost.localdo:vrace 
> ESTABLISHED 130244/java
> Thanks
> -Original Message-
> From: Noelette Stout 
> Sent: Friday, May 21, 2021 8:28 AM
> To: Tomcat Users List 
> Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8
> 
> ss has all the same options as netstat
> 
> On Fri, May 21, 2021 at 3:51 AM André Warnier (tomcat/perl) 
> 
> wrote:
> 
>> Sorry to top-post, but it's getting cluttered down there..
>>
>> The next thing that you may want to do :
>>
>>> netstat -p -a -6 --tcp
>>
>> That is an alternative list of sockets, which also shows the "tcp state"
>> of the sockets.
>> To get only the ones of the tomcat JVM PID, filter with grep based on
>> the last column.
>> The type of thing you are looking for is the column which should show 
>> "LISTEN", or "ESTABLISHED" or "CLOSE_WAIT"

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-22 Thread tomcat/perl

Mmm. Nothing very special in that netstat output.
The sockets seen there look quite normal for tomcat, and there are not a lot.
What about the IPv4 sockets ? (remove the -6 in your netstat command)

Looks like lsof is counting things which are not IPv6 TCP sockets belonging to the tomcat 
JVM process.

Where is the difference between the lsof count (19948) and the netstat count 
(25) ?
(genuine question, I'm puzzled too)

Can you give an example of the "lsof -p 130244" output lines ?
(not all 19948 please, but enough to see some variety)

On 21.05.2021 16:11, Yeggy Javadi wrote:

Hi,
Here it is:
# lsof -p 130244 | grep "protocol: TCPv6"| wc -l
19948

# netstat -p -a -6 --tcp | grep 130244
tcp6   0  0 [::]:pcsync-https   [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:https  [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:37537  [::]:*  LISTEN  
130244/java
tcp6   0  0 localhost.localdoma:mxi [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:8009   [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:rmiregistry[::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:http   [::]:*  LISTEN  
130244/java
tcp6  86  0 Yeggy-F8-FMSVA:39680172.22.22.192:https CLOSE_WAIT  
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54772Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42664 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54782Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54766Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42662 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54778Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:46966 localhost.localdo:11753 ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54788Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54770Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:64799 ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54790Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54776Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54786Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54780Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:45736 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54768Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54784Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42660 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54774Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:45734 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:64798 ESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:41016 localhost.localdo:vrace ESTABLISHED 
130244/java
Thanks
-Original Message-
From: Noelette Stout 
Sent: Friday, May 21, 2021 8:28 AM
To: Tomcat Users List 
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

ss has all the same options as netstat

On Fri, May 21, 2021 at 3:51 AM André Warnier (tomcat/perl) 
wrote:


Sorry to top-post, but it's getting cluttered down there..

The next thing that you may want to do :

   > netstat -p -a -6 --tcp

That is an alternative list of sockets, which also shows the "tcp state"
of the sockets.
To get only the ones of the tomcat JVM PID, filter with grep based on
the last column.
The type of thing you are looking for is the column which should show
"LISTEN", or "ESTABLISHED" or "CLOSE_WAIT" etc..

The options above :
-p : show PID and program
-a : show all sockets states
-6 : only inet v6
--tcp : only TCP sockets

"netstat" may not be on your system by default, and you may need to
install it.
An alternative is "ss", but I don't know the options.


On 21.05.2021 02:14, Yeggy Javadi wrote:

Hi,
Yes; that is what I get and as you can see among 8028 open files,
7474

are for TCPv6 sockets:

java130244 root 7805u sock0,9  0t0 12294251

protocol: TCPv6


# ps -ef | grep tomcat
root  130244   1  1 11:01 ?00:06:20

/usr/local/jre/bin/java
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.
properties
-Djava.util.logging.manager=org.apac

RE: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-21 Thread Yeggy Javadi
Hi,
Here it is:
# lsof -p 130244 | grep "protocol: TCPv6"| wc -l
19948

# netstat -p -a -6 --tcp | grep 130244
tcp6   0  0 [::]:pcsync-https   [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:https  [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:37537  [::]:*  LISTEN  
130244/java
tcp6   0  0 localhost.localdoma:mxi [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:8009   [::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:rmiregistry[::]:*  LISTEN  
130244/java
tcp6   0  0 [::]:http   [::]:*  LISTEN  
130244/java
tcp6  86  0 Yeggy-F8-FMSVA:39680172.22.22.192:https CLOSE_WAIT  
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54772Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42664 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54782Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54766Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42662 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54778Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:46966 localhost.localdo:11753 ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54788Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54770Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:64799 ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54790Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54776Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54786Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54780Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:45736 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54768Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54784Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:42660 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:54774Yeggy-F8-FMSVA:vraceESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:45734 localhost.loca:postgres ESTABLISHED 
130244/java
tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:64798 ESTABLISHED 
130244/java
tcp6   0  0 localhost.localdo:41016 localhost.localdo:vrace ESTABLISHED 
130244/java
Thanks
-Original Message-
From: Noelette Stout  
Sent: Friday, May 21, 2021 8:28 AM
To: Tomcat Users List 
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

ss has all the same options as netstat

On Fri, May 21, 2021 at 3:51 AM André Warnier (tomcat/perl) 
wrote:

> Sorry to top-post, but it's getting cluttered down there..
>
> The next thing that you may want to do :
>
>   > netstat -p -a -6 --tcp
>
> That is an alternative list of sockets, which also shows the "tcp state"
> of the sockets.
> To get only the ones of the tomcat JVM PID, filter with grep based on
> the last column.
> The type of thing you are looking for is the column which should show 
> "LISTEN", or "ESTABLISHED" or "CLOSE_WAIT" etc..
>
> The options above :
> -p : show PID and program
> -a : show all sockets states
> -6 : only inet v6
> --tcp : only TCP sockets
>
> "netstat" may not be on your system by default, and you may need to 
> install it.
> An alternative is "ss", but I don't know the options.
>
>
> On 21.05.2021 02:14, Yeggy Javadi wrote:
> > Hi,
> > Yes; that is what I get and as you can see among 8028 open files, 
> > 7474
> are for TCPv6 sockets:
> > java130244 root 7805u sock0,9  0t0 12294251
> protocol: TCPv6
> >
> > # ps -ef | grep tomcat
> > root  130244   1  1 11:01 ?00:06:20
> /usr/local/jre/bin/java
> -Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.
> properties 
> -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 
> -server -Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m
> -Djdk.tls.ephemeralDHKeySize=2048
> -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
> -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
> -Dignore.endorsed.dirs= -classpath
> /usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bi
&

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-21 Thread Noelette Stout
ss has all the same options as netstat

On Fri, May 21, 2021 at 3:51 AM André Warnier (tomcat/perl) 
wrote:

> Sorry to top-post, but it's getting cluttered down there..
>
> The next thing that you may want to do :
>
>   > netstat -p -a -6 --tcp
>
> That is an alternative list of sockets, which also shows the "tcp state"
> of the sockets.
> To get only the ones of the tomcat JVM PID, filter with grep based on the
> last column.
> The type of thing you are looking for is the column which should show
> "LISTEN", or
> "ESTABLISHED" or "CLOSE_WAIT" etc..
>
> The options above :
> -p : show PID and program
> -a : show all sockets states
> -6 : only inet v6
> --tcp : only TCP sockets
>
> "netstat" may not be on your system by default, and you may need to
> install it.
> An alternative is "ss", but I don't know the options.
>
>
> On 21.05.2021 02:14, Yeggy Javadi wrote:
> > Hi,
> > Yes; that is what I get and as you can see among 8028 open files, 7474
> are for TCPv6 sockets:
> > java130244 root 7805u sock0,9  0t0 12294251
> protocol: TCPv6
> >
> > # ps -ef | grep tomcat
> > root  130244   1  1 11:01 ?00:06:20
> /usr/local/jre/bin/java
> -Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties
> -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64
> -server -Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m
> -Djdk.tls.ephemeralDHKeySize=2048
> -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
> -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
> -Dignore.endorsed.dirs= -classpath
> /usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar
> -Dcatalina.base=/usr/local/apache-tomcat
> -Dcatalina.home=/usr/local/apache-tomcat
> -Djava.io.tmpdir=/usr/local/apache-tomcat/temp
> org.apache.catalina.startup.Bootstrap start
> > root  132566  132492  0 20:10 pts/100:00:00 grep --color=auto
> tomcat
> >
> > #lsof -p 130244 | wc -l
> > 8028
> >
> > #lsof -p 130244 | grep "protocol: TCPv6"| wc -l
> > 7474
> >
> > Thanks
> >
> > -Original Message-
> > From: André Warnier (tomcat/perl) 
> > Sent: Thursday, May 20, 2021 4:19 PM
> > To: users@tomcat.apache.org
> > Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8
> >
> > Hi.
> > According to the list below, you have 2 java (JVM) processes running on
> your system.
> > One (PID = 130244) is the JVM which runs tomcat.  This is visible when
> you look at the whole command-line.
> >
> > The other (PID = 130516) runs ElasticSearch, which I believe is not
> relevant here.
> >
> > So you should run lsof with the "-p 130244" option, to show only the
> files opened by the tomcat JVM.
> > To show the current Tomcat JVM PID, do e.g. : ps -ef | grep
> "apache-tomcat"
> > (or anything else unique in that line)
> >
> >
> > On 20.05.2021 21:00, Yeggy Javadi wrote:
> >> Hi Chris,
> >> Please indicate how to limit lsof or netstat to *just the JVM* process.
> >>
> >> Below is the list of running processes on my server:
> >> UID  PIDPPID  C STIME TTY  TIME CMD
> >> root   1   0  0 May07 ?00:00:14
> /usr/lib/systemd/systemd --switched-root --system --deserialize 17
> >> root   2   0  0 May07 ?00:00:00 [kthreadd]
> >> root   3   2  0 May07 ?00:00:00 [rcu_gp]
> >> root   4   2  0 May07 ?00:00:00 [rcu_par_gp]
> >> root   6   2  0 May07 ?00:00:00
> [kworker/0:0H-kblockd]
> >> root   8   2  0 May07 ?00:00:00 [mm_percpu_wq]
> >> root   9   2  0 May07 ?00:00:00 [ksoftirqd/0]
> >> root  10   2  0 May07 ?00:02:18 [rcu_sched]
> >> root  11   2  0 May07 ?00:00:00 [migration/0]
> >> root  12   2  0 May07 ?00:00:00 [watchdog/0]
> >> root  13   2  0 May07 ?00:00:00 [cpuhp/0]
> >> root  14   2  0 May07 ?00:00:00 [cpuhp/1]
> >> root  15   2  0 May07 ?00:00:00 [watchdog/1]
> >> root  16   2  0 May07 ?00:00:00 [migration/1]
> >> root  17   2  0 May07 ?00:00:00 [ksoftirqd/1]
> >> root  19   2  0 May07 ?00:00:00
> [kworker/1:0H-kblockd]
> >> root  20   2  0 May07 ?00:

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-21 Thread tomcat/perl

Sorry to top-post, but it's getting cluttered down there..

The next thing that you may want to do :

 > netstat -p -a -6 --tcp

That is an alternative list of sockets, which also shows the "tcp state" of the 
sockets.
To get only the ones of the tomcat JVM PID, filter with grep based on the last 
column.
The type of thing you are looking for is the column which should show "LISTEN", or 
"ESTABLISHED" or "CLOSE_WAIT" etc..


The options above :
-p : show PID and program
-a : show all sockets states
-6 : only inet v6
--tcp : only TCP sockets

"netstat" may not be on your system by default, and you may need to install it.
An alternative is "ss", but I don't know the options.
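(For the record, the rough ss equivalent of the netstat line above would be something like this, untested sketch :

# ss -a -t -6 -p | grep 'pid=130244'

-a all sockets, -t TCP only, -6 IPv6 only, -p show the owning process.)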


On 21.05.2021 02:14, Yeggy Javadi wrote:

Hi,
Yes; that is what I get and as you can see among 8028 open files, 7474 are for 
TCPv6 sockets:
java130244 root 7805u sock0,9  0t0 12294251 
protocol: TCPv6

# ps -ef | grep tomcat
root  130244   1  1 11:01 ?00:06:20 /usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 -server 
-Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 
-Dignore.endorsed.dirs= -classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar
 -Dcatalina.base=/usr/local/apache-tomcat 
-Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp 
org.apache.catalina.startup.Bootstrap start
root  132566  132492  0 20:10 pts/100:00:00 grep --color=auto tomcat

#lsof -p 130244 | wc -l
8028

#lsof -p 130244 | grep "protocol: TCPv6"| wc -l
7474

Thanks

-Original Message-
From: André Warnier (tomcat/perl) 
Sent: Thursday, May 20, 2021 4:19 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Hi.
According to the list below, you have 2 java (JVM) processes running on your 
system.
One (PID = 130244) is the JVM which runs tomcat.  This is visible when you look 
at the whole command-line.

The other (PID = 130516) runs ElasticSearch, which I believe is not relevant 
here.

So you should run lsof with the "-p 130244" option, to show only the files 
opened by the tomcat JVM.
To show the current Tomcat JVM PID, do e.g. : ps -ef | grep "apache-tomcat"
(or anything else unique in that line)
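Something like this should also print just the PID directly, assuming pgrep is available :

# pgrep -f org.apache.catalina.startup.Bootstrap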


On 20.05.2021 21:00, Yeggy Javadi wrote:

Hi Chris,
Please indicate how to limit lsof or netstat to *just the JVM* process.

Below is the list of running processes on my server:
UID  PIDPPID  C STIME TTY  TIME CMD
root   1   0  0 May07 ?00:00:14 /usr/lib/systemd/systemd 
--switched-root --system --deserialize 17
root   2   0  0 May07 ?00:00:00 [kthreadd]
root   3   2  0 May07 ?00:00:00 [rcu_gp]
root   4   2  0 May07 ?00:00:00 [rcu_par_gp]
root   6   2  0 May07 ?00:00:00 [kworker/0:0H-kblockd]
root   8   2  0 May07 ?00:00:00 [mm_percpu_wq]
root   9   2  0 May07 ?00:00:00 [ksoftirqd/0]
root  10   2  0 May07 ?00:02:18 [rcu_sched]
root  11   2  0 May07 ?00:00:00 [migration/0]
root  12   2  0 May07 ?00:00:00 [watchdog/0]
root  13   2  0 May07 ?00:00:00 [cpuhp/0]
root  14   2  0 May07 ?00:00:00 [cpuhp/1]
root  15   2  0 May07 ?00:00:00 [watchdog/1]
root  16   2  0 May07 ?00:00:00 [migration/1]
root  17   2  0 May07 ?00:00:00 [ksoftirqd/1]
root  19   2  0 May07 ?00:00:00 [kworker/1:0H-kblockd]
root  20   2  0 May07 ?00:00:00 [cpuhp/2]
root  21   2  0 May07 ?00:00:00 [watchdog/2]
root  22   2  0 May07 ?00:00:00 [migration/2]
root  23   2  0 May07 ?00:00:00 [ksoftirqd/2]
root  25   2  0 May07 ?00:00:00 [kworker/2:0H-kblockd]
root  26   2  0 May07 ?00:00:00 [cpuhp/3]
root  27   2  0 May07 ?00:00:00 [watchdog/3]
root  28   2  0 May07 ?00:00:00 [migration/3]
root  29   2  0 May07 ?00:00:00 [ksoftirqd/3]
root  31   2  0 May07 ?00:00:00 [kworker/3:0H-kblockd]
root  32   2  0 May07 ?00:00:00 [cpuhp/4]
root  33   2  0 May07 ?00:00:00 [watchdog/4]
root  34   2  0 May07 ?00:00:00 [migration/4]
root  35   2  0 May07 ?00:00:00 [ksoftirqd/4]
root  37   2  0 May07 ?00:00:00 [kworker/4:0H-kblockd]
ro

RE: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-20 Thread Yeggy Javadi
Hi,
Yes; that is what I get and as you can see among 8028 open files, 7474 are for 
TCPv6 sockets:
java130244 root 7805u sock0,9  0t0 12294251 
protocol: TCPv6

# ps -ef | grep tomcat
root  130244   1  1 11:01 ?00:06:20 /usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 -server 
-Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 
-Dignore.endorsed.dirs= -classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar
 -Dcatalina.base=/usr/local/apache-tomcat 
-Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp 
org.apache.catalina.startup.Bootstrap start
root  132566  132492  0 20:10 pts/100:00:00 grep --color=auto tomcat

#lsof -p 130244 | wc -l
8028

#lsof -p 130244 | grep "protocol: TCPv6"| wc -l
7474

Thanks

-Original Message-
From: André Warnier (tomcat/perl)  
Sent: Thursday, May 20, 2021 4:19 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Hi.
According to the list below, you have 2 java (JVM) processes running on your 
system.
One (PID = 130244) is the JVM which runs tomcat.  This is visible when you look 
at the whole command-line.

The other (PID = 130516) runs ElasticSearch, which I believe is not relevant 
here.

So you should run lsof with the "-p 130244" option, to show only the files 
opened by the tomcat JVM.
To show the current Tomcat JVM PID, do e.g. : ps -ef | grep "apache-tomcat"
(or anything else unique in that line)


On 20.05.2021 21:00, Yeggy Javadi wrote:
> Hi Chris,
> Please indicate how to show lsof or netstat to *just the JVM* process.
> 
> Below is the list of running processes on my server:
> UID  PIDPPID  C STIME TTY  TIME CMD
> root   1   0  0 May07 ?00:00:14 /usr/lib/systemd/systemd 
> --switched-root --system --deserialize 17
> root   2   0  0 May07 ?00:00:00 [kthreadd]
> root   3   2  0 May07 ?00:00:00 [rcu_gp]
> root   4   2  0 May07 ?00:00:00 [rcu_par_gp]
> root   6   2  0 May07 ?00:00:00 [kworker/0:0H-kblockd]
> root   8   2  0 May07 ?00:00:00 [mm_percpu_wq]
> root   9   2  0 May07 ?00:00:00 [ksoftirqd/0]
> root  10   2  0 May07 ?00:02:18 [rcu_sched]
> root  11   2  0 May07 ?00:00:00 [migration/0]
> root  12   2  0 May07 ?00:00:00 [watchdog/0]
> root  13   2  0 May07 ?00:00:00 [cpuhp/0]
> root  14   2  0 May07 ?00:00:00 [cpuhp/1]
> root  15   2  0 May07 ?00:00:00 [watchdog/1]
> root  16   2  0 May07 ?00:00:00 [migration/1]
> root  17   2  0 May07 ?00:00:00 [ksoftirqd/1]
> root  19   2  0 May07 ?00:00:00 [kworker/1:0H-kblockd]
> root  20   2  0 May07 ?00:00:00 [cpuhp/2]
> root  21   2  0 May07 ?00:00:00 [watchdog/2]
> root  22   2  0 May07 ?00:00:00 [migration/2]
> root  23   2  0 May07 ?00:00:00 [ksoftirqd/2]
> root  25   2  0 May07 ?00:00:00 [kworker/2:0H-kblockd]
> root  26   2  0 May07 ?00:00:00 [cpuhp/3]
> root  27   2  0 May07 ?00:00:00 [watchdog/3]
> root  28   2  0 May07 ?00:00:00 [migration/3]
> root  29   2  0 May07 ?00:00:00 [ksoftirqd/3]
> root  31   2  0 May07 ?00:00:00 [kworker/3:0H-kblockd]
> root  32   2  0 May07 ?00:00:00 [cpuhp/4]
> root  33   2  0 May07 ?00:00:00 [watchdog/4]
> root  34   2  0 May07 ?00:00:00 [migration/4]
> root  35   2  0 May07 ?00:00:00 [ksoftirqd/4]
> root  37   2  0 May07 ?00:00:00 [kworker/4:0H-kblockd]
> root  38   2  0 May07 ?00:00:00 [cpuhp/5]
> root  39   2  0 May07 ?00:00:00 [watchdog/5]
> root  40   2  0 May07 ?00:00:00 [migration/5]
> root  41   2  0 May07 ?00:00:00 [ksoftirqd/5]
> root  43   2  0 May07 ?00:00:00 [kworker/5:0H-kblockd]
> root  44   2  0 May07 ?00:00:00 [cpuhp/6]
> root  45   2  0 May07 ?00:00:00 [watchdog/6]
> root  46   2  0 May07 ?00:00:00 [migration/6]
> root  47   2  0 May07 ?  

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-20 Thread tomcat/perl
00:00 ps -ef

Thanks
-Original Message-
From: Christopher Schultz 
Sent: Thursday, May 20, 2021 12:49 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Yeggy,

On 5/19/21 19:37, Yeggy Javadi wrote:

Hi Chris,
Thanks for your email. Below are replies to your questions:

1. Did you upgrade anything recently (like Java VM)?
[YJ] To support Linux 8, only Postgres was upgraded from version 9.3 to 9.6.

2. What is error you are seeing? A full stack trace would be helpful.
[YJ] The application error can occur in any place with the (Too many open 
files) error when the limit of open files for tomcat has been reached (262144 in my 
environment), for example, I can get a connectivity error when pulling info 
from a server as below :

   [InventoryPullerTask - 10.1.6.25] ERROR FSSDataCollectorService -
Error : The XML configuration file failed to be retrieved for server
10.1.6.25. Check server connectivity.Error retrieving IPStorConfg for
server=10.1.6.25 error=java.io.FileNotFoundException:
/usr/local/apache-tomcat-8.5.59/webapps/ROOT/WEB-INF/lib/spring-orm-3.
2.10.RELEASE.jar (Too many open files) restatus=-1 output=

3. What is your <Connector> configuration?
[YJ] It is as below in server.xml:

  
...
  

  
  


You have lots of settings on the HTTP connector but not the HTTPS one.
Is that intentional? Which one are you using? Or both?

Are you using AJP? If not, disable the connector entirely. Definitely never 
enable it on 0.0.0.0 unless you really know what you are doing.


4. Are you counting all the open files for a single process or all of the 
sub-processes which represent the threads of the main process?
Different kernel versions count things differently.
[YJ] I am just getting the process ID of tomcat and count open files
for that process


Ok.


5. Running lsof, netstat, etc. can you see if some large number of those 
sockets are bound to any specific port (listen or connect)?
[YJ] Here is the netstat output:
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address   Foreign Address State
tcp0  0 Yeggy-F8-FMSVA:ssh  10.12.3.33:55236ESTABLISHED
tcp0 64 Yeggy-F8-FMSVA:ssh  10.197.255.10:60378 ESTABLISHED
tcp0  0 localhost.loca:postgres localhost.localdo:36846 ESTABLISHED


That's not helpful. I want to see what the JVM process is holding, not what the 
whole OS is handling.

My guess is that lsof is going to be a better bet for inspecting the process. But if you get 
"too many open files" being thrown inside the JVM and you don't see a 1/4 million lines 
of "lsof" output on your process, then something else is going on. But remember to limit 
your lsof to *just the JVM* process.
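A quick way to see which descriptor types dominate is to group the lsof output by its 
TYPE column (a sketch, assuming the default lsof layout where TYPE is the fifth field) :

# lsof -p 130244 | awk '{print $5}' | sort | uniq -c | sort -rn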

-chris


-Original Message-
From: Christopher Schultz 
Sent: Monday, May 17, 2021 5:18 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Yeggy,

On 5/17/21 15:17, Yeggy Javadi wrote:

Hi,
I am seeing too many TCP sockets open by tomcat on Oracle Linux 8:
java51798 root 6601u sock0,9  0t0 3959342 protocol: 
TCPv6

I was running the same application on Linux 7.6 with the same
apache-tomcat version without seeing this issue. I saw that RedHat
has removed apache-tomcat from their Linux 8 distribution. Please help
with troubleshooting and indicating if there are any changes in Linux
8 that can explain this.

Configuration:
apache-tomcat-8.5.59
OEL Linux 8 kernel 4.18.0-193.28.1.el8_2.x86_64 Web-based application
using Postgres and elastic search databases


# ps -ef | grep tomcat
root   51798   1  1 11:16 ?00:06:01 /usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 -server 
-Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 
-Dignore.endorsed.dirs= -classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar
 -Dcatalina.base=/usr/local/apache-tomcat 
-Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp 
org.apache.catalina.startup.Bootstrap start
root   54122   54037  0 19:43 pts/100:00:00 grep --color=auto tomcat


# cat /proc/51798/limits
Limit Soft Limit   Hard Limit   Units
Max cpu time  unlimitedunlimitedseconds
Max file size unlimitedunlimitedbytes
Max data size unlimitedunlimitedbytes
Max stack size8388608  unlimitedbytes
Max core file sizeunlimitedunlimitedbytes
Max resident set  unlimited

RE: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-20 Thread Yeggy Javadi
-256-cert-...@openssh.com,rsa-sha2-512,rsa-sha2-512-cert-...@openssh.com,ssh-rsa,ssh-rsa-cert-...@openssh.com
 
-oCASignatureAlgorithms=ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,rsa-sha2-256,rsa-sha2-512,ssh-rsa
root  122905   2  0 May19 ?00:00:27 [kworker/7:0-events]
root  126037   2  0 05:30 ?00:00:00 [kworker/1:2-events]
root  128108   2  0 08:40 ?00:00:00 
[kworker/4:1-events_power_efficient]
root  128360   2  0 09:10 ?00:00:00 [kworker/4:2-cgroup_destroy]
root  128430   2  0 09:30 ?00:00:00 [kworker/6:0-mm_percpu_wq]
root  128738   2  0 11:00 ?00:00:00 [kworker/3:3-mm_percpu_wq]
root  128761   2  0 11:00 ?00:00:00 [kworker/2:1-mm_percpu_wq]
root  130244   1  1 11:01 ?00:03:50 /usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 -server 
-Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 
-Dignore.endorsed.dirs= -classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar
 -Dcatalina.base=/usr/local/apache-tomcat 
-Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp 
org.apache.catalina.startup.Bootstrap start
elastic+  130516   1  1 11:01 ?00:03:26 /bin/java -Xms8g -Xmx8g 
-Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
-XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 
-Delasticsearch -Des.foreground=yes -Des.path.home=/usr/share/elasticsearch -cp 
:/usr/share/elasticsearch/lib/elasticsearch-1.7.6.jar:/usr/share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/*
 -Des.pidfile=/var/run/elasticsearch/elasticsearch.pid 
-Des.default.path.home=/usr/share/elasticsearch 
-Des.default.path.logs=/var/log/elasticsearch 
-Des.default.path.data=/var/lib/elasticsearch 
-Des.default.config=/etc/elasticsearch/elasticsearch.yml 
-Des.default.path.conf=/etc/elasticsearch 
org.elasticsearch.bootstrap.Elasticsearch
postgres  130548   1  0 11:01 ?00:00:00 /usr/pgsql-9.6/bin/postgres 
-D /DB/postgres/data
postgres  130561  130548  0 11:01 ?00:00:00 postgres: logger process
postgres  130563  130548  0 11:01 ?00:00:00 postgres: checkpointer 
process
postgres  130564  130548  0 11:01 ?00:00:00 postgres: writer process
postgres  130565  130548  0 11:01 ?00:00:00 postgres: wal writer process
postgres  130566  130548  0 11:01 ?00:00:00 postgres: autovacuum 
launcher process
postgres  130567  130548  0 11:01 ?00:00:00 postgres: stats collector 
process
root  130570   1  0 11:01 ?00:00:01 /usr/local/freestor/bin/fmsd
postgres  130675  130548  0 11:01 ?00:00:03 postgres: postgres freestor 
127.0.0.1(42660) idle
postgres  130676  130548  0 11:01 ?00:00:04 postgres: postgres freestor 
127.0.0.1(42662) idle
postgres  130677  130548  0 11:01 ?00:00:03 postgres: postgres freestor 
127.0.0.1(42664) idle
root  131206   2  0 13:00 ?00:00:00 [kworker/7:2]
root  131269   2  0 13:20 ?00:00:00 [kworker/6:2]
root  131347   2  0 13:49 ?00:00:00 [kworker/5:1-mm_percpu_wq]
root  131364   2  0 13:56 ?00:00:00 
[kworker/u16:2-events_unbound]
root  131401   2  0 14:05 ?00:00:00 
[kworker/u16:1-events_unbound]
root  131438   2  0 14:20 ?00:00:00 [kworker/2:2]
root  131465   2  0 14:30 ?00:00:00 [kworker/0:2-events]
root  131490   2  0 14:40 ?00:00:00 [kworker/3:0]
root  131491   2  0 14:40 ?00:00:00 [kworker/0:1-events]
root  131515   2  0 14:49 ?00:00:00 
[kworker/1:0-events_freezable]
root  131520   2  0 14:50 ?00:00:00 [kworker/5:0]
root  131525   14261  0 14:52 ?00:00:00 sshd: root [priv]
root  131529  131525  0 14:52 ?00:00:00 sshd: root@pts/1
root  131530  131529  0 14:52 pts/100:00:00 -bash
root  131600  131530  0 14:57 pts/100:00:00 ps -ef

Thanks
-Original Message-
From: Christopher Schultz  
Sent: Thursday, May 20, 2021 12:49 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Yeggy,

On 5/19/21 19:37, Yeggy Javadi wrote:
> Hi Chris,
> Thanks for your email. Below are replies to your questions:
> 
> 1. Did you upgrade anything recently (like Java VM)?
> [YJ] To support Linux 8, only Postgres was upgraded from version 9.3 to 9.6.
> 
> 2. What is error you are seeing? A full stack trace would be helpful.
> [YJ] The app

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-20 Thread Christopher Schultz

Yeggy,

On 5/19/21 19:37, Yeggy Javadi wrote:

Hi Chris,
Thanks for your email. Below are replies to your questions:

1. Did you upgrade anything recently (like Java VM)?
[YJ] To support Linux 8, only Postgres was upgraded from version 9.3 to 9.6.

2. What is error you are seeing? A full stack trace would be helpful.
[YJ] The application error can occur in any place with the (Too many open 
files) error when the limit of open files for tomcat has been reached (262144 in my 
environment), for example, I can get a connectivity error when pulling info 
from a server as below :

  [InventoryPullerTask - 10.1.6.25] ERROR FSSDataCollectorService - Error : The 
XML configuration file failed to be retrieved for server 10.1.6.25. Check 
server connectivity.Error retrieving IPStorConfg for server=10.1.6.25 
error=java.io.FileNotFoundException: 
/usr/local/apache-tomcat-8.5.59/webapps/ROOT/WEB-INF/lib/spring-orm-3.2.10.RELEASE.jar
 (Too many open files) restatus=-1 output=

3. What is your <Connector> configuration?
[YJ] It is as below in server.xml:

 
...
 

 
 


You have lots of settings on the HTTP connector but not the HTTPS one. 
Is that intentional? Which one are you using? Or both?


Are you using AJP? If not, disable the connector entirely. Definitely 
never enable it on 0.0.0.0 unless you really know what you are doing.



4. Are you counting all the open files for a single process or all of the 
sub-processes which represent the threads of the main process?
Different kernel versions count things differently.
[YJ] I am just getting the process ID of tomcat and count open files for that 
process


Ok.


5. Running lsof, netstat, etc. can you see if some large number of those 
sockets are bound to any specific port (listen or connect)?
[YJ] Here is the netstat output:
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address   Foreign Address State
tcp0  0 Yeggy-F8-FMSVA:ssh  10.12.3.33:55236ESTABLISHED
tcp0 64 Yeggy-F8-FMSVA:ssh  10.197.255.10:60378 ESTABLISHED
tcp0  0 localhost.loca:postgres localhost.localdo:36846 ESTABLISHED


That's not helpful. I want to see what the JVM process is holding, not 
what the whole OS is handling.


My guess is that lsof is going to be a better bet for inspecting the 
process. But if you get "too many open files" being thrown inside the 
JVM and you don't see a 1/4 million lines of "lsof" output on your 
process, then something else is going on. But remember to limit your 
lsof to *just the JVM* process.


-chris


-Original Message-
From: Christopher Schultz 
Sent: Monday, May 17, 2021 5:18 PM
To: users@tomcat.apache.org
Subject: Re: apache-tomcat-8.5.59 too many open files on Linux 8

Yeggy,

On 5/17/21 15:17, Yeggy Javadi wrote:

Hi,
I am seeing too many TCP sockets open by tomcat on Oracle Linux 8:
java51798 root 6601u sock0,9  0t0 3959342 protocol: 
TCPv6

I was running the same application on Linux 7.6 with the same
apache-tomcat version without seeing this issue. I saw that RedHat has
removed apache-tomcat from their Linux 8 distribution. Please help with
troubleshooting and indicating if there are any changes in Linux 8
that can explain this.

Configuration:
apache-tomcat-8.5.59
OEL Linux 8 kernel 4.18.0-193.28.1.el8_2.x86_64 Web-based application
using Postgres and elastic search databases


# ps -ef | grep tomcat
root   51798   1  1 11:16 ?00:06:01 /usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 -server 
-Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 
-Dignore.endorsed.dirs= -classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar
 -Dcatalina.base=/usr/local/apache-tomcat 
-Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp 
org.apache.catalina.startup.Bootstrap start
root   54122   54037  0 19:43 pts/100:00:00 grep --color=auto tomcat


# cat /proc/51798/limits
Limit Soft Limit   Hard Limit   Units
Max cpu time  unlimitedunlimitedseconds
Max file size unlimitedunlimitedbytes
Max data size unlimitedunlimitedbytes
Max stack size8388608  unlimitedbytes
Max core file sizeunlimitedunlimitedbytes
Max resident set  unlimitedunlimitedbytes
Max processes 256518   256518   processes
Max open files262144   262144   fil

RE: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-19 Thread Yeggy Javadi
Hi Chris,
Thanks for your email. Below are replies to your questions:

1. Did you upgrade anything recently (like Java VM)?
[YJ] To support Linux 8, only Postgres was upgraded from version 9.3 to 9.6.

2. What is error you are seeing? A full stack trace would be helpful.
[YJ] The application error can occur in any place with the (Too many open 
files) error when the limit of open files for tomcat has been reached (262144 in my 
environment), for example, I can get a connectivity error when pulling info 
from a server as below :

 [InventoryPullerTask - 10.1.6.25] ERROR FSSDataCollectorService - Error : The 
XML configuration file failed to be retrieved for server 10.1.6.25. Check 
server connectivity.Error retrieving IPStorConfg for server=10.1.6.25 
error=java.io.FileNotFoundException: 
/usr/local/apache-tomcat-8.5.59/webapps/ROOT/WEB-INF/lib/spring-orm-3.2.10.RELEASE.jar
 (Too many open files) restatus=-1 output=

3. What is your <Connector> configuration?
[YJ] It is as below in server.xml:


...






4. Are you counting all the open files for a single process or all of the 
sub-processes which represent the threads of the main process? 
Different kernel versions count things differently.
[YJ] I am just getting the process ID of tomcat and count open files for that 
process

5. Running lsof, netstat, etc. can you see if some large number of those 
sockets are bound to any specific port (listen or connect)?
[YJ] Here is the netstat output:
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address   Foreign Address State
tcp0  0 Yeggy-F8-FMSVA:ssh  10.12.3.33:55236ESTABLISHED
tcp0 64 Yeggy-F8-FMSVA:ssh  10.197.255.10:60378 ESTABLISHED
tcp0  0 localhost.loca:postgres localhost.localdo:36846 ESTABLISHED
tcp0  0 localhost.loca:postgres localhost.localdo:36850 ESTABLISHED
tcp0  0 localhost.localdo:11753 localhost.localdo:51302 ESTABLISHED
tcp0  0 localhost.loca:postgres localhost.localdo:36844 ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48922ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48964ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48968ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:60817 TIME_WAIT
tcp6   0  0 Yeggy-F8-FMSVA:48968Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48974ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:60771 TIME_WAIT
tcp6   0  0 Yeggy-F8-FMSVA:48934Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48936Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48954Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48970Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48932Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48938ESTABLISHED
tcp6   0  0 localhost.localdo:51302 localhost.localdo:11753 ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48956Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48928ESTABLISHED
tcp6   0  0 localhost.localdo:36844 localhost.loca:postgres ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48930Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 localhost.localdo:35202 localhost.localdo:vrace ESTABLISHED
tcp6   0  0 localhost.localdo:36850 localhost.loca:postgres ESTABLISHED
tcp6   0  0 localhost.localdo:vrace localhost.localdo:35202 ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48966ESTABLISHED
tcp6   0  0 localhost.localdo:51298 localhost.localdo:11753 TIME_WAIT
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48954ESTABLISHED
tcp6   0   1045 Yeggy-F8-FMSVA:54246172.22.22.192:https ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48970ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48918Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48972ESTABLISHED
tcp6   0  0 localhost.localdo:36846 localhost.loca:postgres ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48960Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48960ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48974Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:http 10.197.255.10:60714 TIME_WAIT
tcp6   0  0 Yeggy-F8-FMSVA:48924Yeggy-F8-FMSVA:vraceESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48924ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:vraceYeggy-F8-FMSVA:48924ESTABLISHED
tcp6   0  0 Yeggy-F8-FMSVA:48972Yeggy-F8

Re: apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-17 Thread Christopher Schultz

Yeggy,

On 5/17/21 15:17, Yeggy Javadi wrote:

Hi,
I am seeing too many TCP sockets open by tomcat on Oracle Linux 8:
java51798 root 6601u sock0,9  0t0 3959342 protocol: 
TCPv6

I was running the same application on Linux 7.6 with the same apache-tomcat 
version without seeing this issue. I saw that RedHat has removed apache-tomcat 
from their Linux 8 distribution. Please help with troubleshooting and 
indicating if there are any changes in Linux 8 that can explain this.

Configuration:
apache-tomcat-8.5.59
OEL Linux 8 kernel 4.18.0-193.28.1.el8_2.x86_64
Web-based application using Postgres and elastic search databases


# ps -ef | grep tomcat
root   51798   1  1 11:16 ?00:06:01 /usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 -server 
-Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 
-Dignore.endorsed.dirs= -classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar
 -Dcatalina.base=/usr/local/apache-tomcat 
-Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp 
org.apache.catalina.startup.Bootstrap start
root   54122   54037  0 19:43 pts/100:00:00 grep --color=auto tomcat


# cat /proc/51798/limits
Limit Soft Limit   Hard Limit   Units
Max cpu time  unlimitedunlimitedseconds
Max file size unlimitedunlimitedbytes
Max data size unlimitedunlimitedbytes
Max stack size8388608  unlimitedbytes
Max core file sizeunlimitedunlimitedbytes
Max resident set  unlimitedunlimitedbytes
Max processes 256518   256518   processes
Max open files262144   262144   files
Max locked memory 6553665536bytes
Max address space unlimitedunlimitedbytes
Max file locksunlimitedunlimitedlocks
Max pending signals   256518   256518   signals
Max msgqueue size 819200   819200   bytes
Max nice priority 00
Max realtime priority 00
Max realtime timeout  unlimitedunlimitedus


Too many open files:
# lsof -p 51798 | wc -l
6679


Most of open files are TCP sockets:
java51798 root 6601u sock0,9  0t0 3959342 protocol: 
TCPv6


# lsof -p 51798 | grep "protocol: TCPv6" | wc -l
6413


Some potentially dumb questions:

1. Did you upgrade anything recently (like Java VM)?

2. What is error you are seeing? A full stack trace would be helpful.

3. What is your <Connector> configuration?

4. Are you counting all the open files for a single process or all of 
the sub-processes which represent the threads of the main process? 
Different kernel versions count things differently.


5. Running lsof, netstat, etc. can you see if some large number of those 
sockets are bound to any specific port (listen or connect)?
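One way to check that (a sketch, assuming the net-tools netstat, where column 5 is the 
foreign address and column 7 the PID/Program name; 51798 is the tomcat PID from the ps 
output above) :

# netstat -antp | awk '$7 ~ /^51798\// {print $5}' | sort | uniq -c | sort -rn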


-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



apache-tomcat-8.5.59 too many open files on Linux 8

2021-05-17 Thread Yeggy Javadi
Hi,
I am seeing too many TCP sockets open by tomcat on Oracle Linux 8:
java51798 root 6601u sock0,9  0t0 3959342 protocol: 
TCPv6

I was running the same application on Linux 7.6 with the same apache-tomcat 
version without seeing this issue. I saw that RedHat has removed apache-tomcat 
from their Linux 8 distribution. Please help with troubleshooting and 
indicating if there are any changes in Linux 8 that can explain this.

Configuration:
apache-tomcat-8.5.59
OEL Linux 8 kernel 4.18.0-193.28.1.el8_2.x86_64
Web-based application using Postgres and elastic search databases


# ps -ef | grep tomcat
root   51798   1  1 11:16 ?00:06:01 /usr/local/jre/bin/java 
-Djava.util.logging.config.file=/usr/local/apache-tomcat/conf/logging.properties
 -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -d64 -server 
-Xms1800m -Xmx8192m -XX:MaxMetaspaceSize=1800m 
-Djdk.tls.ephemeralDHKeySize=2048 
-Djava.protocol.handler.pkgs=org.apache.catalina.webresources 
-Dorg.apache.catalina.security.SecurityListener.UMASK=0027 
-Dignore.endorsed.dirs= -classpath 
/usr/local/apache-tomcat/bin/bootstrap.jar:/usr/local/apache-tomcat/bin/tomcat-juli.jar
 -Dcatalina.base=/usr/local/apache-tomcat 
-Dcatalina.home=/usr/local/apache-tomcat 
-Djava.io.tmpdir=/usr/local/apache-tomcat/temp 
org.apache.catalina.startup.Bootstrap start
root   54122   54037  0 19:43 pts/100:00:00 grep --color=auto tomcat


# cat /proc/51798/limits
Limit Soft Limit   Hard Limit   Units
Max cpu time  unlimitedunlimitedseconds
Max file size unlimitedunlimitedbytes
Max data size unlimitedunlimitedbytes
Max stack size8388608  unlimitedbytes
Max core file sizeunlimitedunlimitedbytes
Max resident set  unlimitedunlimitedbytes
Max processes 256518   256518   processes
Max open files262144   262144   files
Max locked memory 6553665536bytes
Max address space unlimitedunlimitedbytes
Max file locksunlimitedunlimitedlocks
Max pending signals   256518   256518   signals
Max msgqueue size 819200   819200   bytes
Max nice priority 00
Max realtime priority 00
Max realtime timeout  unlimitedunlimitedus


Too many open files:
# lsof -p 51798 | wc -l
6679


Most of open files are TCP sockets:
java51798 root 6601u sock0,9  0t0 3959342 protocol: 
TCPv6


# lsof -p 51798 | grep "protocol: TCPv6" | wc -l
6413


Thanks



Re: HTTP rejections and too many open files exception on tomcat

2013-02-18 Thread André Warnier

Radha Krishnan D wrote:

Hi,

We are running into too many open issues  exception on  our production systems 
for the last


With a lot of guesswork ..

Google for ulimit ?
Get a bigger server ?
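As a rough sketch (the right place to raise the limit depends on how tomcat is started), 
the usual checks look like this :

# ulimit -n                                          (current soft limit of the shell)
# ulimit -Hn                                         (current hard limit)
# cat /proc/<tomcat pid>/limits | grep "open files"  (what the running JVM actually got)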



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: HTTP rejections and too many open files exception on tomcat

2013-02-18 Thread Radha Krishnan D
Hi

Am sorry, clicked submit by mistake

The below is the actual problem


We are running into too many open files  exception on  our production tomcat 
server for the last 2-3 days.

We have max threads as 150 and acceptCount as 100 and at one point of time all 
the 250 threads are busy processing requests  (due to application related 
issues)

At peak times, when all the threads are busy, new requests are rejected by 
tomcat.

Nearly 40 mins to 1 hr after the rejections begin, there are too many open 
files exception coming on the tomcat server..

The number of file descriptors is 16384 and we are getting 50-60 requests per 
sec.

What can be the root cause here?  After rejecting a request, is the socket not 
closed immediately, resulting in a pile-up of open files, or is there some 
other reason ?

Can any one help us in this regard ?


Thanks,
D.Radhakrishnan


-Original Message-
From: André Warnier [mailto:a...@ice-sa.com]
Sent: 18 February 2013 22:32
To: Tomcat Users List
Subject: Re: HTTP rejections and too many open files exception on tomcat

Radha Krishnan D wrote:
 Hi,

 We are running into too many open issues  exception on  our production 
 systems for the last

With a lot of guesswork ..

Google for ulimit ?
Get a bigger server ?



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: HTTP rejections and too many open files exception on tomcat

2013-02-18 Thread André Warnier

Radha Krishnan D wrote:

Hi

Am sorry, clicked submit by mistake

The below is the actual problem


We are running into too many open files  exception on  our production tomcat 
server for the last 2-3 days.

We have max threads as 150 and acceptCount as 100 and at one point of time all 
the 250 threads are busy processing requests  (due to application related 
issues)

At peak times, when all the threads are busy, new requests are rejected by 
tomcat.

Nearly 40 mins to 1 hr after the rejections begin, there are too many open 
files exception coming on the tomcat server..

The number of file descriptors is 16384 and we are getting 50-60 requests per 
sec.

What can be the root cause here.  After rejecting a request, is the socket not 
closed immediately and this results in pile up of open files or is there some 
other reason ?

Can any one help us in this regard ?




platform/OS version ?
Java version ?
Tomcat version (x.y.z) ?

The number of file descriptors is 16384.. : where does that number come from ?
And are they sockets or other things ?

After rejecting a request, is the socket not closed immediately and this results in pile 
up of open files or is there some other reason ?


There are some states in TCP/IP that a connection goes through even after a close() has 
been issued by the server side.  There are some delays there, and maybe that is a source 
for your problem.  Google for TCP state diagram.

Use netstat and lsof (under Unix/Linux) to find out more.
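For example, a quick per-state count (a sketch, assuming the net-tools netstat, where 
the state is the sixth column) :

# netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn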

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: HTTP rejections and too many open files exception on tomcat

2013-02-18 Thread André Warnier

André Warnier wrote:

Radha Krishnan D wrote:

Hi

Am sorry, clicked submit by mistake

The below is the actual problem


We are running into too many open files  exception on  our production 
tomcat server for the last 2-3 days.


We have max threads as 150 and acceptCount as 100 and at one point of 
time all the 250 threads are busy processing requests  (due to 
application related issues)


At peak times, when all the threads are busy, new requests are 
rejected by tomcat.


Nearly 40 mins to 1 hr after the rejections begin, there are too many 
open files exception coming on the tomcat server..


The number of file descriptors is 16384 and we are getting 50-60 
requests per sec.


What can be the root cause here.  After rejecting a request, is the 
socket not closed immediately and this results in pile up of open 
files or is there some other reason ?


Can any one help us in this regard ?




platform/OS version ?
Java version ?
Tomcat version (x.y.z) ?

The number of file descriptors is 16384.. : where does that number 
come from ?

And are they sockets or other things ?

After rejecting a request, is the socket not closed immediately and 
this results in pile up of open files or is there some other reason ?


There are some states in TCP/IP, that a connection goes through even 
after a close() has been issued by the server side.  There are some 
delays there, and maybe that is a source for your problem.  Google for 
TCP state diagram.

Use netstat and lsof (under Unix/Linux) to find out more.



See also this
http://stackoverflow.com/questions/34588/how-do-i-change-the-number-of-open-files-limit-in-linux

The point is : your system(s) may really be too small to handle the load that comes to 
them; or your application may be too slow to respond; or you may have KeepAlive set too 
high for your own good, or any of the parameters here that can be used to modulate what 
happens could be wrong for your case : 
https://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Standard_Implementation


But all these parameters are to some extent interdependent, and before tuning any of 
them, you would need to have (and provide us with) a better picture of what really 
happens.  If you start playing around with them without really understanding what is 
happening and what you are doing, you'll probably make the situation worse, not better.




-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: HTTP rejections and too many open files exception on tomcat

2013-02-18 Thread Pid
On 18/02/2013 17:05, Radha Krishnan D wrote:
 Hi
 
 Am sorry, clicked submit by mistake
 
 The below is the actual problem
 
 
 We are running into too many open files  exception on  our production tomcat 
 server for the last 2-3 days.
 
 We have max threads as 150 and acceptCount as 100 and at one point of time 
 all the 250 threads are busy processing requests  (due to application related 
 issues)

You only have 150 threads, not 250.  150 is not a large number and is in
fact lower than the default.

At 50-60 requests per second, you must serve each request within 0.3s to
avoid queuing requests (ie, using the acceptCount queue).  You should
avoid queueing requests.


 At peak times, when all the threads are busy, new requests are rejected by 
 tomcat.

Expected.


 Nearly 40 mins to 1 hr after the rejections begin, there are too many open 
 files exception coming on the tomcat server..

Maybe this is the real peak.


 The number of file descriptors is 16384 and we are getting 50-60 requests per 
 sec.
 
 What can be the root cause here.  

Too many possible options, not enough info.


 After rejecting a request, is the socket not closed immediately and this 
 results in pile up of open files or is there some other reason ?

 Can any one help us in this regard ?

Please post the Connector configuration from server.xml.  You may
obscure/remove passwords if present.


p

 Thanks,
 D.Radhakrishnan
 
 
 -Original Message-
 From: André Warnier [mailto:a...@ice-sa.com]
 Sent: 18 February 2013 22:32
 To: Tomcat Users List
 Subject: Re: HTTP rejections and too many open files exception on tomcat
 
 Radha Krishnan D wrote:
 Hi,

 We are running into too many open issues  exception on  our production 
 systems for the last
 
 With a lot of guesswork ..
 
 Google for ulimit ?
 Get a bigger server ?
 
 
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 


-- 

[key:62590808]

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: HTTP rejections and too many open files exception on tomcat

2013-02-18 Thread Mark Thomas

On 18/02/2013 21:14, Pid wrote:

On 18/02/2013 17:05, Radha Krishnan D wrote:

Hi

Am sorry, clicked submit by mistake

The below is the actual problem


We are running into too many open files  exception on  our production tomcat 
server for the last 2-3 days.

We have max threads as 150 and acceptCount as 100 and at one point of time all 
the 250 threads are busy processing requests  (due to application related 
issues)


You only have 150 threads, not 250.  150 is not a large number and is in
fact lower than the default.

At 50-60 requests per second, you must serve each request within 0.3s


That number (0.3) is around an order of magnitude too small.
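Roughly, using the numbers quoted in this thread: 150 threads serving 50-60 requests per 
second gives a budget of about 150/60 ~ 2.5 s to 150/50 = 3 s per request before the 
acceptCount queue starts to fill, which is about ten times the 0.3 s figure above.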

Mark


to
avoid queuing requests (ie, using the acceptCount queue).  You should
avoid queueing requests.



At peak times, when all the threads are busy, new requests are rejected by 
tomcat.


Expected.



Nearly 40 mins to 1 hr after the rejections begin, there are too many open 
files exception coming on the tomcat server..


Maybe this is the real peak.



The number of file descriptors is 16384 and we are getting 50-60 requests per 
sec.

What can be the root cause here.


Too many possible options, not enough info.



After rejecting a request, is the socket not closed immediately and this 
results in pile up of open files or is there some other reason ?

Can any one help us in this regard ?


Please post the Connector configuration from server.xml.  You may
obscure/remove passwords if present.


p


Thanks,
D.Radhakrishnan


-Original Message-
From: André Warnier [mailto:a...@ice-sa.com]
Sent: 18 February 2013 22:32
To: Tomcat Users List
Subject: Re: HTTP rejections and too many open files exception on tomcat

Radha Krishnan D wrote:

Hi,

We are running into too many open issues  exception on  our production systems 
for the last


With a lot of guesswork ..

Google for ulimit ?
Get a bigger server ?



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org







-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Tomcat 7 NIO Socket accept failed - Too many open files

2012-05-02 Thread David Wall
I am running Tomcat 7.0.26 on Linux and we received a lot of the following 
exceptions during load testing:


May 2, 2012 3:04:03 AM org.apache.tomcat.util.net.NioEndpoint$Acceptor run
SEVERE: Socket accept failed
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
at 
org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:784)

at java.lang.Thread.run(Thread.java:662)

Is there something I can tune to remove this as a problem?  My NIO+SSL 
connector is configured like this:


<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol" SSLEnabled="true"
           maxThreads="800" scheme="https" secure="true"
           acceptCount="200" connectionTimeout="4000" acceptorThreadCount="2"
           keystoreFile="keys/tomcatkeys" keystorePass="VALUEREMOVED"
           clientAuth="false" sslProtocol="TLS" />

During the test, we had created about 1,800 concurrent sessions, though 
I think many of those were still active because the exceptions kept the 
users' transactions from completing and prevented their sessions from ending 
normally.


Thanks,
David


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 7 NIO Socket accept failed - Too many open files

2012-05-02 Thread Pid *
On 2 May 2012, at 20:19, David Wall d.w...@computer.org wrote:

 I am running Tomcat 7.0.26 on Linux and we received a lot of the following 
 exceptions during load testing:

 May 2, 2012 3:04:03 AM org.apache.tomcat.util.net.NioEndpoint$Acceptor run
 SEVERE: Socket accept failed
 java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at 
 sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
at 
 org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:784)
at java.lang.Thread.run(Thread.java:662)

 Is there something I can tune to remove this as a problem?

It's an OS issue: google 'ulimit'.


p


 My NIO+SSL connector is configured like this:

 <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol" SSLEnabled="true"
            maxThreads="800" scheme="https" secure="true" acceptCount="200"
            connectionTimeout="4000" acceptorThreadCount="2"
            keystoreFile="keys/tomcatkeys" keystorePass="VALUEREMOVED"
            clientAuth="false" sslProtocol="TLS" />

 During the test, we had created about 1,800 concurrent sessions, though I 
 think many of those were still active because the exceptions kept the users' 
 transactions from completing and prevented their sessions from ending normally.

 Thanks,
 David


 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 7 NIO Socket accept failed - Too many open files

2012-05-02 Thread David Wall



On 5/2/2012 12:34 PM, Pid * wrote:

It's an OS issue: google 'ulimit'.


p


Yes, I am familiar with ulimit -Sn (it's 1024), but I suspect this could 
be a Tomcat issue somehow opening too many files and/or not releasing 
them.  I had never seen this issue before we upgraded from Tomcat 5.5 
(all using BIO) to Tomcat 7.0 (all using NIO).  We run on lots of 
servers, and none have shown this error before (and they are all Linux 
servers all set to 1024 for open files).  But we will give it a try by 
setting to a higher number.


The reason we suspect it's Tomcat is that we're getting other 
exceptions, too, those that indicate our session/request objects are not 
valid when our JSPs are running (and of course work fine when the loads 
are normal, but start to fail when we push lots of concurrent requests 
at Tomcat).


David

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Tomcat 7 NIO Socket accept failed - Too many open files

2012-05-02 Thread Filip Hanik (mailing lists)
Ok, lsof -p pid (IIRC) should do the trick; it will list all the handles open 
for that process, and you can deduce where the problem stems from.

 -Original Message-
 From: David Wall [mailto:d.w...@computer.org]
 Sent: Wednesday, May 02, 2012 2:48 PM
 To: users@tomcat.apache.org
 Subject: Re: Tomcat 7 NIO Socket accept failed - Too many open files
 
 
 
 On 5/2/2012 12:34 PM, Pid * wrote:
  It's an OS issue: google 'ulimit'.
 
 
  p
 
 Yes, I am familiar with ulimit -Sn (it's 1024), but I suspect this could
 be a Tomcat issue somehow opening too many files and/or not releasing
 them.  I had never seen this issue before we upgraded from Tomcat 5.5
 (all using BIO) to Tomcat 7.0 (all using NIO).  We run on lots of
 servers, and none have shown this error before (and they are all Linux
 servers all set to 1024 for open files).  But we will give it a try by
 setting to a higher number.
 
 The reason we suspect it's Tomcat is that we're getting other
 exceptions, too, those that indicate our session/request objects are not
 valid when our JSPs are running (and of course work fine when the loads
 are normal, but start to fail when we push lots of concurrent requests
 at Tomcat).
 
 David
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 7 NIO Socket accept failed - Too many open files

2012-05-02 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Filip,

On 5/2/12 5:40 PM, Filip Hanik (mailing lists) wrote:
 Ok, lsof -p pid (IIRC) should do the trick, it will tell all the 
 handles open for that process, and you can deduce where the
 problem stems from

+1

If you have maxThreads=800 then you're already most of the way to
1024 without even counting things like stdin/stdout/stderr, all the
files the JVM keeps open for various reasons, etc.
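If the limit does need to go up, one common approach (a sketch, assuming Tomcat is 
launched from a shell or init script rather than systemd) is to raise it in that script 
before catalina.sh runs, picking a value comfortably above maxThreads plus the JVM's own 
files, e.g. :

ulimit -n 8192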

- -chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-27 Thread Pid *
On 27 Jan 2012, at 05:32, gnath gautam_exquis...@yahoo.com wrote:

 Hello Chris,


 After seeing the initial connection pool issue, I started searching online 
 for help and I found this article :
 http://vigilbose.blogspot.com/2009/03/apache-commons-dbcp-and-tomcat-jdbc.html
 so, I thought maybe tomcat's jar would bring some improvement. By the way, 
 we had commons-dbcp-1.3.jar. Do you recommend upgrading to a newer commons dbcp 
 jar instead of using tomcat-jdbc.jar?

Tomcat ships with a DBCP implementation of its own.

How and where are you defining the database?


p




 Just because we are running tomcat-6.0.35, it did not come with 
 tomcat-jdbc.jar, so we downloaded the 1.1.1 version of the jar and dropped it in 
 WEB-INF/lib and started using it.


 I agree what you are saying about leaking the connection and will plan to set 
 the logAbandoned flag as you suggested.


 However, I was about to file a new issue but would like to describe it here as 
 well. So we have 2 servers running tomcat (same code, same configuration). 
 After we replaced DBCP with tomcat-jdbc.jar and set the 'removeAbandoned' flag to true, 
 one of the servers is doing great (of course I agree that the pool is cleaning up 
 the mess), but we saw one new issue on the second server. It hasn't been 
 releasing the connections and has been growing slowly but consistently. So I collected 
 a thread dump and I saw a deadlock :

 Found one Java-level deadlock:
 =
 catalina-exec-1:
   waiting to lock monitor 0x5d7944b8 (object 0x0005bd522568, a 
 com.mysql.jdbc.Connection),
   which is held by [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]
 [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]:
   waiting to lock monitor 0x5dcdea28 (object 0x0005bd659ce8, a 
 com.mysql.jdbc.ResultSet),
   which is held by catalina-exec-1

 Java stack information for the threads listed above:
 ===
 catalina-exec-1:
 at com.mysql.jdbc.Connection.getCharsetConverter(Connection.java:3177)
 - waiting to lock 0x0005bd522568 (a com.mysql.jdbc.Connection)
 at com.mysql.jdbc.Field.getStringFromBytes(Field.java:583)
 at com.mysql.jdbc.Field.getName(Field.java:487)
 at com.mysql.jdbc.ResultSet.buildIndexMapping(ResultSet.java:593)
 at com.mysql.jdbc.ResultSet.findColumn(ResultSet.java:926)
 - locked 0x0005bd659ce8 (a com.mysql.jdbc.ResultSet)
 at com.mysql.jdbc.ResultSet.getInt(ResultSet.java:2401)

 [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]:
 at com.mysql.jdbc.ResultSet.close(ResultSet.java:736)
 - waiting to lock 0x0005bd659ce8 (a com.mysql.jdbc.ResultSet)
 at com.mysql.jdbc.Statement.realClose(Statement.java:1606)
 - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
 - locked 0x0005bd5e81c0 (a 
 com.mysql.jdbc.ServerPreparedStatement)
 at 
 com.mysql.jdbc.PreparedStatement.realClose(PreparedStatement.java:1703)
 at 
 com.mysql.jdbc.ServerPreparedStatement.realClose(ServerPreparedStatement.java:901)
 - locked 0x0005bd525ba0 (a java.lang.Object)
 - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
 - locked 0x0005bd5e81c0 (a 
 com.mysql.jdbc.ServerPreparedStatement)
 at 
 com.mysql.jdbc.Connection.closeAllOpenStatements(Connection.java:2126)
 at com.mysql.jdbc.Connection.realClose(Connection.java:4422)
 at com.mysql.jdbc.Connection.close(Connection.java:2098)
 - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
 at 
 org.apache.tomcat.jdbc.pool.PooledConnection.disconnect(PooledConnection.java:320)



 Please help us on this. Could it be a problem with tomcat-jdbc.jar?

 Thanks
 -G


 
 From: Christopher Schultz ch...@christopherschultz.net
 To: Tomcat Users List users@tomcat.apache.org
 Sent: Thursday, January 26, 2012 9:41 AM
 Subject: Re: Tomcat 6.0.35-SocketException: Too many open files  issue with

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 G,

 On 1/25/12 11:53 PM, gnath wrote:
 As you have suggested, i started collecting the thread dumps

 Thread dumps will set you free. Well, not really. Instead, they will
 tell you where your webapp is breaking, which usually means more work
 for you. But at least the hard part is done: finding out what's breaking.

 when it happened again and we saw some kind of DBCP Connection pool
 issues leading to 'Too Many open files' issue.

 That will definitely do it.

 So we decided to replace the commons DBCP with tomcat-jdbc.jar
 (with same configuration properties).

 Why?

 After this change, it seemed for few hours but started seeing in
 the logs where the Connection Pool jar could not give any
 connections and seems to be all the connections are busy. So we
 went ahead and added a configuration property
 'removeAbandoned=true' in our Datasource configuration.

 I would go back to DBCP

Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-27 Thread gnath
We defined our data sources in a Spring configuration file. We did not have any 
DB configuration defined on the server.

Thanks
-G




 From: Pid * p...@pidster.com
To: Tomcat Users List users@tomcat.apache.org 
Sent: Friday, January 27, 2012 1:03 AM
Subject: Re: Tomcat 6.0.35-SocketException: Too many open files issue with
 
On 27 Jan 2012, at 05:32, gnath gautam_exquis...@yahoo.com wrote:

 Hello Chris,


 After seeing the initial connection pool issue, i started searching online 
 for help and i found this article :
 http://vigilbose.blogspot.com/2009/03/apache-commons-dbcp-and-tomcat-jdbc.html
 so, i thought may be tomcat's jar would bring some improvement. by the way, 
 we had commons-dbcp-1.3.jar. Do you recommend upgrading to newer commons dbcp 
 jar instead of using tomcat-jdbc.jar.

Tomcat ships with a DBCP implementation of its own.

How and where are you defining the database?


p




 Just because we are running tomcat-6.0.35, it did not come with 
 tomcat-jdbc.jar, so we downloaded the 1.1.1 version of the jar and dropped it in 
 WEB-INF/lib and started using it.


 I agree what you are saying about leaking the connection and will plan to set 
 the logAbandoned flag as you suggested.


 However, i was about to file a new issue but would like to describe here as 
 well. So we have 2 servers running tomcat (same code, same configuration). 
 After we replaced tomcat-jdbc.jar and added 'removeAbandoned' flag to true, 
 one of the servers is doing great (of course I agree that the pool is cleaning up 
 the mess), but we saw one new issue on the second server. it hasn't been 
 releasing the connections and was consistently growing slowly. So i collected 
 thread dump and i saw a deadlock :

 Found one Java-level deadlock:
 =
 catalina-exec-1:
   waiting to lock monitor 0x5d7944b8 (object 0x0005bd522568, a 
com.mysql.jdbc.Connection),
   which is held by [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]
 [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]:
   waiting to lock monitor 0x5dcdea28 (object 0x0005bd659ce8, a 
com.mysql.jdbc.ResultSet),
   which is held by catalina-exec-1

 Java stack information for the threads listed above:
 ===
 catalina-exec-1:
         at com.mysql.jdbc.Connection.getCharsetConverter(Connection.java:3177)
         - waiting to lock 0x0005bd522568 (a com.mysql.jdbc.Connection)
         at com.mysql.jdbc.Field.getStringFromBytes(Field.java:583)
         at com.mysql.jdbc.Field.getName(Field.java:487)
         at com.mysql.jdbc.ResultSet.buildIndexMapping(ResultSet.java:593)
         at com.mysql.jdbc.ResultSet.findColumn(ResultSet.java:926)
         - locked 0x0005bd659ce8 (a com.mysql.jdbc.ResultSet)
         at com.mysql.jdbc.ResultSet.getInt(ResultSet.java:2401)

 [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]:
         at com.mysql.jdbc.ResultSet.close(ResultSet.java:736)
         - waiting to lock 0x0005bd659ce8 (a com.mysql.jdbc.ResultSet)
         at com.mysql.jdbc.Statement.realClose(Statement.java:1606)
         - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
         - locked 0x0005bd5e81c0 (a 
com.mysql.jdbc.ServerPreparedStatement)
         at 
com.mysql.jdbc.PreparedStatement.realClose(PreparedStatement.java:1703)
         at 
com.mysql.jdbc.ServerPreparedStatement.realClose(ServerPreparedStatement.java:901)
         - locked 0x0005bd525ba0 (a java.lang.Object)
         - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
         - locked 0x0005bd5e81c0 (a 
com.mysql.jdbc.ServerPreparedStatement)
         at 
com.mysql.jdbc.Connection.closeAllOpenStatements(Connection.java:2126)
         at com.mysql.jdbc.Connection.realClose(Connection.java:4422)
         at com.mysql.jdbc.Connection.close(Connection.java:2098)
         - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
         at 
org.apache.tomcat.jdbc.pool.PooledConnection.disconnect(PooledConnection.java:320)



 Please help us on this. Could it be a problem with tomcat-jdbc.jar?

 Thanks
 -G


 
 From: Christopher Schultz ch...@christopherschultz.net
 To: Tomcat Users List users@tomcat.apache.org
 Sent: Thursday, January 26, 2012 9:41 AM
 Subject: Re: Tomcat 6.0.35-SocketException: Too many open files  issue with


 G,

 On 1/25/12 11:53 PM, gnath wrote:
 As you have suggested, i started collecting the thread dumps

 Thread dumps will set you free. Well, not really. Instead, they will
 tell you where your webapp is breaking, which usually means more work
 for you. But at least the hard part is done: finding out what's breaking.

 when it happened again and we saw some kind of DBCP Connection pool
 issues leading to 'Too Many open files' issue.

 That will definitely do it.

 So we decided to replace the commons DBCP with tomcat-jdbc.jar
 (with same

Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-27 Thread Pid
This is top-posting, for anyone who's watching and doesn't know.


p

On 27/01/2012 14:26, gnath wrote:
 We defined our data sources in the Spring configuration file. We did not have any 
 DB configuration defined on the server.

 
  From: Pid * p...@pidster.com
 To: Tomcat Users List users@tomcat.apache.org 
 Sent: Friday, January 27, 2012 1:03 AM
 Subject: Re: Tomcat 6.0.35-SocketException: Too many open files issue with
  
 On 27 Jan 2012, at 05:32, gnath gautam_exquis...@yahoo.com wrote:
 
 Hello Chris,


 After seeing the initial connection pool issue, i started searching online 
 for help and i found this article :
 http://vigilbose.blogspot.com/2009/03/apache-commons-dbcp-and-tomcat-jdbc.html
 so, i thought may be tomcat's jar would bring some improvement. by the way, 
 we had commons-dbcp-1.3.jar. Do you recommend upgrading to newer commons 
 dbcp jar instead of using tomcat-jdbc.jar.
 
 Tomcat ships with a DBCP implementation of its own.
 
 How and where are you defining the database?
 
 
 p
 
 
 

 Just because we are running tomcat-6.0.35, it did not come with 
 tomcat-jdbc.jar, so we downloaded the 1.1.1 version or jar and dropped in 
 WEB-INF/lib and started using it.


 I agree what you are saying about leaking the connection and will plan to 
 set the logAbandoned flag as you suggested.


 However, i was about to file a new issue but would like to describe here as 
 well. So we have 2 servers running tomcat (same code, same configuration). 
 After we replaced tomcat-jdbc.jar and added 'removeAbandoned' flag to true, 
 one of the servers is doing great (ofcourse i agree that pool is cleaning up 
 the mess), but we saw one new issue on the second server. it hasn't been 
 releasing the connections and was consistently growing slowly. So i 
 collected thread dump and i saw a deadlock :

 Found one Java-level deadlock:
 =
 catalina-exec-1:
waiting to lock monitor 0x5d7944b8 (object 0x0005bd522568, a 
 com.mysql.jdbc.Connection),
which is held by [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]
 [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]:
waiting to lock monitor 0x5dcdea28 (object 0x0005bd659ce8, a 
 com.mysql.jdbc.ResultSet),
which is held by catalina-exec-1

 Java stack information for the threads listed above:
 ===
 catalina-exec-1:
  at 
 com.mysql.jdbc.Connection.getCharsetConverter(Connection.java:3177)
  - waiting to lock 0x0005bd522568 (a com.mysql.jdbc.Connection)
  at com.mysql.jdbc.Field.getStringFromBytes(Field.java:583)
  at com.mysql.jdbc.Field.getName(Field.java:487)
  at com.mysql.jdbc.ResultSet.buildIndexMapping(ResultSet.java:593)
  at com.mysql.jdbc.ResultSet.findColumn(ResultSet.java:926)
  - locked 0x0005bd659ce8 (a com.mysql.jdbc.ResultSet)
  at com.mysql.jdbc.ResultSet.getInt(ResultSet.java:2401)

 [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]:
  at com.mysql.jdbc.ResultSet.close(ResultSet.java:736)
  - waiting to lock 0x0005bd659ce8 (a com.mysql.jdbc.ResultSet)
  at com.mysql.jdbc.Statement.realClose(Statement.java:1606)
  - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
  - locked 0x0005bd5e81c0 (a 
 com.mysql.jdbc.ServerPreparedStatement)
  at 
 com.mysql.jdbc.PreparedStatement.realClose(PreparedStatement.java:1703)
  at 
 com.mysql.jdbc.ServerPreparedStatement.realClose(ServerPreparedStatement.java:901)
  - locked 0x0005bd525ba0 (a java.lang.Object)
  - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
  - locked 0x0005bd5e81c0 (a 
 com.mysql.jdbc.ServerPreparedStatement)
  at 
 com.mysql.jdbc.Connection.closeAllOpenStatements(Connection.java:2126)
  at com.mysql.jdbc.Connection.realClose(Connection.java:4422)
  at com.mysql.jdbc.Connection.close(Connection.java:2098)
  - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
  at 
 org.apache.tomcat.jdbc.pool.PooledConnection.disconnect(PooledConnection.java:320)



 Please help us on this. Could it be a problem with tomcat-jdbc.jar?

 Thanks
 -G


 
 From: Christopher Schultz ch...@christopherschultz.net
 To: Tomcat Users List users@tomcat.apache.org
 Sent: Thursday, January 26, 2012 9:41 AM
 Subject: Re: Tomcat 6.0.35-SocketException: Too many open files  issue with

 G,
 
 On 1/25/12 11:53 PM, gnath wrote:
 As you have suggested, i started collecting the thread dumps
 
 Thread dumps will set you free. Well, not really. Instead, they will
 tell you where your webapp is breaking, which usually means more work
 for you. But at least the hard part is done: finding out what's breaking.
 
 when it happened again and we saw some kind of DBCP Connection pool
 issues leading to 'Too Many open files' issue

Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-26 Thread Pid
On 26/01/2012 04:53, gnath wrote:
 Hi Chris, 
 
 Thanks a lot for looking into this and giving answers for all my questions. 
 Sorry, i could not get chance to reply in time. As you have suggested, i 
 started collecting the thread dumps when it happened again and we saw some 
 kind of DBCP Connection pool issues leading to 'Too Many open files' issue. 
 So we decided to replace the commons DBCP with tomcat-jdbc.jar (with same 
 configuration properties). After this change, it seemed for few hours but 
 started seeing in the logs where the Connection Pool jar could not give any 
 connections and seems to be all the connections are busy. So we went ahead 
 and added a configuration property 'removeAbandoned=true' in our Datasource 
 configuration. 
 
 
 We are still watching the performance and the server behavior after these 
 changes. 
 Will keep you posted on how things will turn out or if i see any further 
 issues. 
 
 
 thank you once again, I really appreciate your help.
 
 Thanks

This sounds increasingly like your application isn't returning
connections to the pool properly.  Switching pool implementation won't
help if this is the case.

You should carefully examine the code where the database is used to
ensure that DB resources are returned to the pool in a finally block,
after use.
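
For illustration, a minimal sketch of that pattern with made-up names (the DataSource and the SQL are placeholders, not taken from the poster's application):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class LeakFreeQuery {
    // Returns pooled JDBC resources in a finally block so the pool is never starved.
    static int countRows(DataSource ds, String table) throws SQLException {
        Connection con = null;
        Statement stmt = null;
        ResultSet rs = null;
        try {
            con = ds.getConnection();
            stmt = con.createStatement();
            rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table); // placeholder query
            return rs.next() ? rs.getInt(1) : 0;
        } finally {
            // Close in reverse order of acquisition; closing the Connection returns it to the pool.
            if (rs != null)   { try { rs.close();   } catch (SQLException ignore) { } }
            if (stmt != null) { try { stmt.close(); } catch (SQLException ignore) { } }
            if (con != null)  { try { con.close();  } catch (SQLException ignore) { } }
        }
    }
}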

Chris's question regarding 'what has changed' is still relevant.


p

 
  From: Christopher Schultz ch...@christopherschultz.net
 To: Tomcat Users List users@tomcat.apache.org 
 Sent: Monday, January 23, 2012 7:51 AM
 Subject: Re: Tomcat 6.0.35-SocketException: Too many open files  issue with
  
 G,
 
 On 1/22/12 6:18 PM, gnath wrote:
 We have 2 connectors (one for http and another for https) using
 the tomcatThreadPool. I have the connectionTimeout=2 for
 http connector.  However i was told that our https connector might
 not be used by the app as our loadbalancer is handling all the
 https traffic and just sending them to http connector.
 
 You might want to disable that HTTPS connector, but it's probably not
 hurting you at all in this case -- just a bit of wasted resources. If
 you are sharing a thread pool then there is no negative impact on the
 number of threads and/or open files that you have to deal with, here.
 
 the ulimit settings were increased from default 1024 to 4096 by
 our admin. not sure how he did that, but i see the count as 4096
 when i do ulimit -a.
 
 Well, if your admin says it's right, I suppose it's right.
 
 for ulimit -n i see its 'unlimited'.
 
 That's good.
 
 for cat /proc/PID/limits, i get the following response:
 
 Limit                     Soft Limit   Hard Limit   Units
 Max cpu time              unlimited    unlimited    seconds
 Max file size             unlimited    unlimited    bytes
 Max data size             unlimited    unlimited    bytes
 Max stack size            10485760     unlimited    bytes
 Max core file size        0            unlimited    bytes
 Max resident set          unlimited    unlimited    bytes
 Max processes             unlimited    unlimited    processes
 Max open files            4096         4096         files
 Max locked memory         32768        32768        bytes
 Max address space         unlimited    unlimited    bytes
 Max file locks            unlimited    unlimited    locks
 Max pending signals       202752       202752       signals
 Max msgqueue size         819200       819200       bytes
 Max nice priority         0            0
 Max realtime priority     0            0
 
 Those all look good to me.
 
 This morning Tomcat hung again but this time it dint say 'too many 
 open files' in logs but i only see this below in catalina.out:
 
 org.apache.tomcat.util.http.Parameters processParameters INFO:
 Invalid chunk starting at byte [0] and ending at byte [0] with a
 value of [null] ignored Jorg.apache.tomcat.util.http.Parameters
 processParameters INFO: Invalid chunk starting at byte [0] and
 ending at byte [0] with a value of [null] ignored
 
 Hmm...
 
 When it hung(java process is still up), i ran few commands like
 lsof by PID and couple others.
 
 Next time, take a thread dump as well. The fact that Tomcat hung up
 without an OS problem (like Too Many Open Files) is probably not good.
 If this happens again with an apparent hang with no stack traces in
 the logs, take a thread dump and post it back here under a different
 subject.
 
 here is what i got:
 
 lsof -p PID| wc -l 1342
 
 lsof | wc -l 4520
 
 lsof -u USER| wc -l 1953
 
 Hmm I wonder if you are hitting a *user* or even *system* limit of
 some kind (though a *NIX system with a hard limit of ~4500 file
 descriptors seems entirely unreasonable). I also wonder how many
 /processes/ and/or /threads/ you have running at once.
 
 After i kill java process the lsof for pid returned obviously to
 zero
 
 Of course.
 
 Is there any chance that the tomcat is ignoring the ulimit

Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-26 Thread Christopher Schultz

G,

On 1/25/12 11:53 PM, gnath wrote:
 As you have suggested, i started collecting the thread dumps

Thread dumps will set you free. Well, not really. Instead, they will
tell you where your webapp is breaking, which usually means more work
for you. But at least the hard part is done: finding out what's breaking.

 when it happened again and we saw some kind of DBCP Connection pool
 issues leading to 'Too Many open files' issue.

That will definitely do it.

 So we decided to replace the commons DBCP with tomcat-jdbc.jar
 (with same configuration properties).

Why?

 After this change, it seemed for few hours but started seeing in
 the logs where the Connection Pool jar could not give any 
 connections and seems to be all the connections are busy. So we
 went ahead and added a configuration property
 'removeAbandoned=true' in our Datasource configuration.

I would go back to DBCP unless you think you need to switch for some
reason.

I suspect you are leaking database connections and don't have a
suitable timeout for removal of lost database connections (or maybe
didn't have that set up in the first place).

You really need to enable logAbandoned so you can find out where
your connection leaks are, and fix them. In development, set
maxActive=1 and leave it there, forever. Also, set
logAbandoned=true and always run like that in development. Running
like that in production isn't a bad idea, either.
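
As a sketch, those settings expressed through tomcat-jdbc's programmatic PoolProperties API look roughly like this (URL, driver and credentials are placeholders; in this thread the equivalent properties would be set on the Spring-defined datasource bean instead):

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolSettingsSketch {
    static DataSource devDataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:mysql://localhost:3306/appdb");   // placeholder URL
        p.setDriverClassName("com.mysql.jdbc.Driver");   // MySQL driver, as used in this thread
        p.setUsername("appuser");                        // placeholder
        p.setPassword("secret");                         // placeholder
        p.setMaxActive(1);                // development: a single leaked connection shows up immediately
        p.setRemoveAbandoned(true);       // reclaim connections that are never returned
        p.setRemoveAbandonedTimeout(60);  // seconds a connection may be held before it is reclaimed
        p.setLogAbandoned(true);          // log the stack trace of the code that abandoned it
        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}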

 We are still watching the performance and the server behavior
 after these changes. Will keep you posted on how things will turn
 out or if i see any further issues.

I suspect you are still leaking connections, but your pool is now
silently cleaning-up after the mess your webapp is making. Instrument
your pool. Fix your leaks.

-chris



Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-26 Thread gnath
Hello Chris, 


After seeing the initial connection pool issue, I started searching online for 
help and found this article:
http://vigilbose.blogspot.com/2009/03/apache-commons-dbcp-and-tomcat-jdbc.html
so I thought maybe Tomcat's jar would bring some improvement. By the way, we 
had commons-dbcp-1.3.jar. Do you recommend upgrading to a newer commons-dbcp jar 
instead of using tomcat-jdbc.jar?

Since we are running tomcat-6.0.35, which did not come with tomcat-jdbc.jar, 
we downloaded the 1.1.1 version of the jar, dropped it in WEB-INF/lib and 
started using it. 


I agree with what you are saying about leaking connections and will plan to set 
the logAbandoned flag as you suggested. 


However, I was about to file a new issue but would like to describe it here as 
well. We have 2 servers running Tomcat (same code, same configuration). 
After we replaced tomcat-jdbc.jar and set the 'removeAbandoned' flag to true, one 
of the servers is doing great (of course, I agree that the pool is cleaning up the 
mess), but we saw a new issue on the second server: it hasn't been releasing 
connections, and the connection count has been growing slowly but steadily. So I 
collected a thread dump and saw a deadlock:

Found one Java-level deadlock:
=
catalina-exec-1:
  waiting to lock monitor 0x5d7944b8 (object 0x0005bd522568, a 
com.mysql.jdbc.Connection),
  which is held by [Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]
[Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]:
  waiting to lock monitor 0x5dcdea28 (object 0x0005bd659ce8, a 
com.mysql.jdbc.ResultSet),
  which is held by catalina-exec-1

Java stack information for the threads listed above:
===
catalina-exec-1:
    at com.mysql.jdbc.Connection.getCharsetConverter(Connection.java:3177)
    - waiting to lock 0x0005bd522568 (a com.mysql.jdbc.Connection)
    at com.mysql.jdbc.Field.getStringFromBytes(Field.java:583)
    at com.mysql.jdbc.Field.getName(Field.java:487)
    at com.mysql.jdbc.ResultSet.buildIndexMapping(ResultSet.java:593)
    at com.mysql.jdbc.ResultSet.findColumn(ResultSet.java:926)
    - locked 0x0005bd659ce8 (a com.mysql.jdbc.ResultSet)
    at com.mysql.jdbc.ResultSet.getInt(ResultSet.java:2401)

[Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]:
    at com.mysql.jdbc.ResultSet.close(ResultSet.java:736)
    - waiting to lock 0x0005bd659ce8 (a com.mysql.jdbc.ResultSet)
    at com.mysql.jdbc.Statement.realClose(Statement.java:1606)
    - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
    - locked 0x0005bd5e81c0 (a com.mysql.jdbc.ServerPreparedStatement)
    at 
com.mysql.jdbc.PreparedStatement.realClose(PreparedStatement.java:1703)
    at 
com.mysql.jdbc.ServerPreparedStatement.realClose(ServerPreparedStatement.java:901)
    - locked 0x0005bd525ba0 (a java.lang.Object)
    - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
    - locked 0x0005bd5e81c0 (a com.mysql.jdbc.ServerPreparedStatement)
    at 
com.mysql.jdbc.Connection.closeAllOpenStatements(Connection.java:2126)
    at com.mysql.jdbc.Connection.realClose(Connection.java:4422)
    at com.mysql.jdbc.Connection.close(Connection.java:2098)
    - locked 0x0005bd522568 (a com.mysql.jdbc.Connection)
    at 
org.apache.tomcat.jdbc.pool.PooledConnection.disconnect(PooledConnection.java:320)



Please help us on this. Could it be a problem with tomcat-jdbc.jar?

Thanks
-G



 From: Christopher Schultz ch...@christopherschultz.net
To: Tomcat Users List users@tomcat.apache.org 
Sent: Thursday, January 26, 2012 9:41 AM
Subject: Re: Tomcat 6.0.35-SocketException: Too many open files  issue with
 

G,

On 1/25/12 11:53 PM, gnath wrote:
 As you have suggested, i started collecting the thread dumps

Thread dumps will set you free. Well, not really. Instead, they will
tell you where your webapp is breaking, which usually means more work
for you. But at least the hard part is done: finding out what's breaking.

 when it happened again and we saw some kind of DBCP Connection pool
 issues leading to 'Too Many open files' issue.

That will definitely do it.

 So we decided to replace the commons DBCP with tomcat-jdbc.jar
 (with same configuration properties).

Why?

 After this change, it seemed for few hours but started seeing in
 the logs where the Connection Pool jar could not give any 
 connections and seems to be all the connections are busy. So we
 went ahead and added a configuration property
 'removeAbandoned=true' in our Datasource configuration.

I would go back to DBCP unless you think you need to switch for some
reason.

I suspect you are leaking database connections and don't have a
suitable timeout for removal of lost database connections (or maybe
didn't have that set up in the first place).

You really need

Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-25 Thread gnath
Hi Chris, 

Thanks a lot for looking into this and answering all my questions. 
Sorry, I could not get a chance to reply in time. As you suggested, I 
started collecting thread dumps when it happened again, and we saw some kind 
of DBCP connection pool issue leading to the 'Too Many open files' problem. So we 
decided to replace the commons DBCP jar with tomcat-jdbc.jar (with the same 
configuration properties). After this change, things seemed fine for a few hours, 
but then we started seeing messages in the logs that the connection pool could not 
hand out any connections and that all connections appeared to be busy. So we went 
ahead and added the configuration property 'removeAbandoned=true' to our 
DataSource configuration. 


We are still watching the performance and the server behavior after these 
changes. 
Will keep you posted on how things turn out or if I see any further 
issues. 


thank you once again, I really appreciate your help.

Thanks
-G




 From: Christopher Schultz ch...@christopherschultz.net
To: Tomcat Users List users@tomcat.apache.org 
Sent: Monday, January 23, 2012 7:51 AM
Subject: Re: Tomcat 6.0.35-SocketException: Too many open files  issue with
 

G,

On 1/22/12 6:18 PM, gnath wrote:
 We have 2 connectors (one for http and another for https) using
 the tomcatThreadPool. I have the connectionTimeout=2 for
 http connector.  However i was told that our https connector might
 not be used by the app as our loadbalancer is handling all the
 https traffic and just sending them to http connector.

You might want to disable that HTTPS connector, but it's probably not
hurting you at all in this case -- just a bit of wasted resources. If
you are sharing a thread pool then there is no negative impact on the
number of threads and/or open files that you have to deal with, here.

 the ulimit settings were increased from default 1024 to 4096 by
 our admin. not sure how he did that, but i see the count as 4096
 when i do ulimit -a.

Well, if your admin says it's right, I suppose it's right.

 for ulimit -n i see its 'unlimited'.

That's good.

 for cat /proc/PID/limits, i get the following response:
 
 Limit                     Soft Limit   Hard Limit   Units
 Max cpu time              unlimited    unlimited    seconds
 Max file size             unlimited    unlimited    bytes
 Max data size             unlimited    unlimited    bytes
 Max stack size            10485760     unlimited    bytes
 Max core file size        0            unlimited    bytes
 Max resident set          unlimited    unlimited    bytes
 Max processes             unlimited    unlimited    processes
 Max open files            4096         4096         files
 Max locked memory         32768        32768        bytes
 Max address space         unlimited    unlimited    bytes
 Max file locks            unlimited    unlimited    locks
 Max pending signals       202752       202752       signals
 Max msgqueue size         819200       819200       bytes
 Max nice priority         0            0
 Max realtime priority     0            0

Those all look good to me.

 This morning Tomcat hung again but this time it dint say 'too many 
 open files' in logs but i only see this below in catalina.out:
 
 org.apache.tomcat.util.http.Parameters processParameters INFO:
 Invalid chunk starting at byte [0] and ending at byte [0] with a
 value of [null] ignored Jorg.apache.tomcat.util.http.Parameters
 processParameters INFO: Invalid chunk starting at byte [0] and
 ending at byte [0] with a value of [null] ignored

Hmm...

 When it hung(java process is still up), i ran few commands like
 lsof by PID and couple others.

Next time, take a thread dump as well. The fact that Tomcat hung up
without an OS problem (like Too Many Open Files) is probably not good.
If this happens again with an apparent hang with no stack traces in
the logs, take a thread dump and post it back here under a different
subject.

 here is what i got:
 
 lsof -p PID| wc -l 1342
 
 lsof | wc -l 4520
 
 lsof -u USER| wc -l 1953

Hmm I wonder if you are hitting a *user* or even *system* limit of
some kind (though a *NIX system with a hard limit of ~4500 file
descriptors seems entirely unreasonable). I also wonder how many
/processes/ and/or /threads/ you have running at once.

 After i kill java process the lsof for pid returned obviously to
 zero

Of course.

 Is there any chance that the tomcat is ignoring the ulimit?

Those limits are not self-imposed: the OS imposes those limits. Tomcat
doesn't even know it's own ulimit (of any kind), so it will simply
consume whatever resources you have configured it to use, and if it
hits a limit, the JVM will experience some kind of OS-related error.

 , some people on web were saying something about setting this in
 catalina.sh.

Setting what? ulimit? I'd do it in setenv.sh because that's a more

RE: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-23 Thread Purvis Robert (NHS CONNECTING FOR HEALTH)
Sorry to possibly state the obvious, but are there perhaps files that are not 
being closed?

This can often happen if code goes into a catch, and a file is not closed. 
Best to have a finally block which checks if a file is open, and closes it.
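
A minimal sketch of that pattern (the path and method name are placeholders):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SafeFileRead {
    // Closes the stream in a finally block even when reading throws,
    // so the file descriptor is always released.
    static long countBytes(String path) throws IOException {
        InputStream in = null;
        try {
            in = new FileInputStream(path);
            long count = 0;
            while (in.read() != -1) {
                count++;
            }
            return count;
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException ignore) { }
            }
        }
    }
}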


Robert Purvis

-Original Message-
From: gnath [mailto:gautam_exquis...@yahoo.com]
Sent: 22 January 2012 08:01
To: users@tomcat.apache.org
Subject: Tomcat 6.0.35-SocketException: Too many open files issue with

Hello,

We have been seeing SocketException: Too many open files in production 
environment(Linux OS running Tomcat 6.0.35 with sun's JDK 1.6.30) every day and 
requires a restart of Tomcat. When this happened for the first time, we 
searched online and found people suggesting to increase the file descriptors 
size and we increased to 4096. But still the problem persists.  We have the 
Orion App Server also running on the same machine but usually during the day 
when we check the open file descriptor by command: ls -l /proc/PID/fd, its 
always less than 1000 combined  for both Orion and Tomcat.


Here is the exception we see pouring in the logs once it starts: This requires 
us to kill java process and restart tomcat. Our Tomcat configuration 
maxThreadCount is 500 with minSpareThreads=50 in server.xml


SEVERE: Socket accept failed
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
at java.net.ServerSocket.implAccept(ServerSocket.java:462)
at java.net.ServerSocket.accept(ServerSocket.java:430)
at 
org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
at 
org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
at java.lang.Thread.run(Thread.java:662)

ulimit -a gives for the user where Tomcat is running.


open files  (-n) 4096


Please let me know what could be the issue here and how can i resolve this 'Too 
many open files' issue.

Thanks
-G






Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-23 Thread gnath
Hello Robert, 


We are talking about an app which has grown over the years and has now been 
migrated to Tomcat. Most of the main code has been migrated to use Spring JDBC 
and Spring 3.0 MVC. Would you recommend some way that I can spot the issue by 
inspecting the open file descriptors once the server hangs or crashes, so that 
I can check the code for what you are describing? Please do let me know.


All, please help if you can give me any hints based on the information I 
provided below.


Thanks
-G




 From: Purvis Robert (NHS CONNECTING FOR HEALTH) robert.pur...@nhs.net
To: Tomcat Users List users@tomcat.apache.org; gnath 
gautam_exquis...@yahoo.com 
Sent: Monday, January 23, 2012 1:32 AM
Subject: RE: Tomcat 6.0.35-SocketException: Too many open files  issue with
 
Sorry to possibly state the obvious, but are there perhaps files that are not 
being closed?

This can often happen if code goes into a catch, and a file is not closed. 
Best to have a finally block which checks if a file is open, and closes it.


Robert Purvis

-Original Message-
From: gnath [mailto:gautam_exquis...@yahoo.com]
Sent: 22 January 2012 08:01
To: users@tomcat.apache.org
Subject: Tomcat 6.0.35-SocketException: Too many open files issue with

Hello,

We have been seeing SocketException: Too many open files in production 
environment(Linux OS running Tomcat 6.0.35 with sun's JDK 1.6.30) every day and 
requires a restart of Tomcat. When this happened for the first time, we 
searched online and found people suggesting to increase the file descriptors 
size and we increased to 4096. But still the problem persists.  We have the 
Orion App Server also running on the same machine but usually during the day 
when we check the open file descriptor by command: ls -l /proc/PID/fd, its 
always less than 1000 combined  for both Orion and Tomcat.


Here is the exception we see pouring in the logs once it starts: This requires 
us to kill java process and restart tomcat. Our Tomcat configuration 
maxThreadCount is 500 with minSpareThreads=50 in server.xml


SEVERE: Socket accept failed
java.net.SocketException: Too many open files
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
        at java.net.ServerSocket.implAccept(ServerSocket.java:462)
        at java.net.ServerSocket.accept(ServerSocket.java:430)
        at 
org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
        at 
org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
        at java.lang.Thread.run(Thread.java:662)

ulimit -a gives for the user where Tomcat is running.


open files                      (-n) 4096


Please let me know what could be the issue here and how can i resolve this 'Too 
many open files' issue.

Thanks
-G






Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-23 Thread Daniel Mikusa
On Mon, 2012-01-23 at 05:33 -0800, gnath wrote:
 Hello Robert, 
 
 
 We are talking about an app which has been grown since years and now migrated 
 to Tomcat. Most of the main code is migrated to use spring jdbc and spring 
 3.0 mvc. 
 Would you recommend some way that i can spot the issue from inspecting the 
 open file descriptors once the server hangs/crashes so that i can check the 
 code to see what you are saying? Please do let me know.

Try running FindBugs on your code.  In most cases, it will detect when
you do not correctly close a file.

Dan


 
 
 All, please help if you can give me any hints from the information i provided 
 down below.
 
 
 Thanks
 -G
 
 
 
 
  From: Purvis Robert (NHS CONNECTING FOR HEALTH) robert.pur...@nhs.net
 To: Tomcat Users List users@tomcat.apache.org; gnath 
 gautam_exquis...@yahoo.com 
 Sent: Monday, January 23, 2012 1:32 AM
 Subject: RE: Tomcat 6.0.35-SocketException: Too many open files  issue with
  
 Sorry to possibly state the obvious, but are there perhaps files that are not 
 being closed?
 
 This can often happen if code goes into a catch, and a file is not closed. 
 Best to have a finally block which checks if a file is open, and closes it.
 
 
 Robert Purvis
 
 -Original Message-
 From: gnath [mailto:gautam_exquis...@yahoo.com]
 Sent: 22 January 2012 08:01
 To: users@tomcat.apache.org
 Subject: Tomcat 6.0.35-SocketException: Too many open files issue with
 
 Hello,
 
 We have been seeing SocketException: Too many open files in production 
 environment(Linux OS running Tomcat 6.0.35 with sun's JDK 1.6.30) every day 
 and requires a restart of Tomcat. When this happened for the first time, we 
 searched online and found people suggesting to increase the file descriptors 
 size and we increased to 4096. But still the problem persists.  We have the 
 Orion App Server also running on the same machine but usually during the day 
 when we check the open file descriptor by command: ls -l /proc/PID/fd, its 
 always less than 1000 combined  for both Orion and Tomcat.
 
 
 Here is the exception we see pouring in the logs once it starts: This 
 requires us to kill java process and restart tomcat. Our Tomcat configuration 
 maxThreadCount is 500 with minSpareThreads=50 in server.xml
 
 
 SEVERE: Socket accept failed
 java.net.SocketException: Too many open files
 at java.net.PlainSocketImpl.socketAccept(Native Method)
 at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
 at java.net.ServerSocket.implAccept(ServerSocket.java:462)
 at java.net.ServerSocket.accept(ServerSocket.java:430)
 at 
 org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
 at 
 org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
 at java.lang.Thread.run(Thread.java:662)
 
 ulimit -a gives for the user where Tomcat is running.
 
 
 open files  (-n) 4096
 
 
 Please let me know what could be the issue here and how can i resolve this 
 'Too many open files' issue.
 
 Thanks
 -G
 
 
 


Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-23 Thread Christopher Schultz

G,

On 1/22/12 6:18 PM, gnath wrote:
 We have 2 connectors (one for http and another for https) using
 the tomcatThreadPool. I have the connectionTimeout=2 for
 http connector.  However i was told that our https connector might
 not be used by the app as our loadbalancer is handling all the
 https traffic and just sending them to http connector.

You might want to disable that HTTPS connector, but it's probably not
hurting you at all in this case -- just a bit of wasted resources. If
you are sharing a thread pool then there is no negative impact on the
number of threads and/or open files that you have to deal with, here.

 the ulimit settings were increased from default 1024 to 4096 by
 our admin. not sure how he did that, but i see the count as 4096
 when i do ulimit -a.

Well, if your admin says it's right, I suppose it's right.

 for ulimit -n i see its 'unlimited'.

That's good.

 for cat /proc/PID/limits, i get the following response:
 
 Limit                     Soft Limit   Hard Limit   Units
 Max cpu time              unlimited    unlimited    seconds
 Max file size             unlimited    unlimited    bytes
 Max data size             unlimited    unlimited    bytes
 Max stack size            10485760     unlimited    bytes
 Max core file size        0            unlimited    bytes
 Max resident set          unlimited    unlimited    bytes
 Max processes             unlimited    unlimited    processes
 Max open files            4096         4096         files
 Max locked memory         32768        32768        bytes
 Max address space         unlimited    unlimited    bytes
 Max file locks            unlimited    unlimited    locks
 Max pending signals       202752       202752       signals
 Max msgqueue size         819200       819200       bytes
 Max nice priority         0            0
 Max realtime priority     0            0

Those all look good to me.

 This morning Tomcat hung again but this time it dint say 'too many 
 open files' in logs but i only see this below in catalina.out:
 
 org.apache.tomcat.util.http.Parameters processParameters INFO:
 Invalid chunk starting at byte [0] and ending at byte [0] with a
 value of [null] ignored Jorg.apache.tomcat.util.http.Parameters
 processParameters INFO: Invalid chunk starting at byte [0] and
 ending at byte [0] with a value of [null] ignored

Hmm...

 When it hung(java process is still up), i ran few commands like
 lsof by PID and couple others.

Next time, take a thread dump as well. The fact that Tomcat hung up
without an OS problem (like Too Many Open Files) is probably not good.
If this happens again with an apparent hang with no stack traces in
the logs, take a thread dump and post it back here under a different
subject.
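
For reference, two common ways to capture a thread dump from a running Tomcat JVM (replace PID with the java process id):

kill -3 PID                # SIGQUIT; the dump is written to the JVM's stdout, i.e. catalina.out
jstack PID > threads.txt   # requires a JDK; writes the dump to a file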

 here is what i got:
 
 lsof -p PID| wc -l 1342
 
 lsof | wc -l 4520
 
 lsof -u USER| wc -l 1953

Hmm I wonder if you are hitting a *user* or even *system* limit of
some kind (though a *NIX system with a hard limit of ~4500 file
descriptors seems entirely unreasonable). I also wonder how many
/processes/ and/or /threads/ you have running at once.

 After i kill java process the lsof for pid returned obviously to
 zero

Of course.

 Is there any chance that the tomcat is ignoring the ulimit?

Those limits are not self-imposed: the OS imposes those limits. Tomcat
doesn't even know it's own ulimit (of any kind), so it will simply
consume whatever resources you have configured it to use, and if it
hits a limit, the JVM will experience some kind of OS-related error.

 , some people on web were saying something about setting this in
 catalina.sh.

Setting what? ulimit? I'd do it in setenv.sh because that's a more
appropriate place for that kind of thing. I'm also interested in what
the Internet has to say about what setting(s) to use.
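
For example, something along these lines in CATALINA_BASE/bin/setenv.sh (the value 8192 is arbitrary, and a non-root user can only raise the soft limit up to the configured hard limit):

# $CATALINA_BASE/bin/setenv.sh -- sourced by catalina.sh if it exists
ulimit -n 8192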

 Please help with my ongoing issue.. its getting very hard to
 monitor the logs every minute and restarting whenever it hangs with
 these kind of issues. I very much appreciate your help in this.

Did this just start happening recently? Perhaps with an upgrade of
some component?

If you think this might actually be related to the number of file
handles being used by your thread pool, you might want to reduce the
maximum number of threads for that thread pool: a slightly less
responsive site is better than one that goes down all the time because
of hard resource limits.
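
As a sketch, capping the shared pool in server.xml might look like this (the values are illustrative only; the executor name matches the tomcatThreadPool mentioned earlier in the thread):

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="200" minSpareThreads="25"/>

<Connector port="8080" protocol="HTTP/1.1"
           executor="tomcatThreadPool"
           connectionTimeout="20000"
           redirectPort="8443"/>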

-chris



Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-22 Thread gnath
Hello, 

We have been seeing SocketException: Too many open files in our production 
environment (Linux OS running Tomcat 6.0.35 with Sun's JDK 1.6.30) every day, 
and it requires a restart of Tomcat. When this happened for the first time, we 
searched online and found people suggesting to increase the file descriptor 
limit, so we increased it to 4096. But the problem still persists. We also have 
the Orion App Server running on the same machine, but usually during the day, 
when we check the open file descriptors with the command ls -l /proc/PID/fd, 
the count is always less than 1000 combined for both Orion and Tomcat.
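
One way to keep an eye on that count over time, for example (PID is the Tomcat java process id; the interval is arbitrary):

watch -n 60 'ls /proc/PID/fd | wc -l'   # re-check the per-process fd count every minute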


Here is the exception we see pouring into the logs once it starts; it requires 
us to kill the java process and restart Tomcat. Our Tomcat configuration 
maxThreadCount is 500 with minSpareThreads=50 in server.xml.


SEVERE: Socket accept failed
java.net.SocketException: Too many open files
    at java.net.PlainSocketImpl.socketAccept(Native Method)
    at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
    at java.net.ServerSocket.implAccept(ServerSocket.java:462)
    at java.net.ServerSocket.accept(ServerSocket.java:430)
    at 
org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
    at 
org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
    at java.lang.Thread.run(Thread.java:662)

ulimit -a gives for the user where Tomcat is running.


open files  (-n) 4096


Please let me know what could be the issue here and how can i resolve this 'Too 
many open files' issue.

Thanks
-G

Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-22 Thread Christopher Schultz

G,

On 1/22/12 3:01 AM, gnath wrote:
 We have been seeing SocketException: Too many open files in 
 production environment(Linux OS running Tomcat 6.0.35 with sun's
 JDK 1.6.30) every day and requires a restart of Tomcat. When this
 happened for the first time, we searched online and found people
 suggesting to increase the file descriptors size and we increased
 to 4096. But still the problem persists. We have the Orion App
 Server also running on the same machine but usually during the day
 when we check the open file descriptor by command: ls -l
 /proc/PID/fd, its always less than 1000 combined for both Orion and
 Tomcat.
 
 Here is the exception we see pouring in the logs once it starts: 
 This requires us to kill java process and restart tomcat. Our
 Tomcat configuration maxThreadCount is 500 with minSpareThreads=50
 in server.xml

How many connectors do you have? If you have more than one connector
with 500 threads, then you can have more threads than maybe you are
expecting.

 SEVERE: Socket accept failed
 java.net.SocketException: Too many open files
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
     at java.net.ServerSocket.implAccept(ServerSocket.java:462)
     at java.net.ServerSocket.accept(ServerSocket.java:430)
     at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
     at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
     at java.lang.Thread.run(Thread.java:662)
 
 ulimit -a gives for the user where Tomcat is running.
 
 open files  (-n) 4096

How did you set the ulimit for this user? Did you do it in a login
script or something, or just at the command-line at some point?

How about (-u) max user processes or threads-per-process or anything
like that?

Sometimes the 'Too many files open' message is not entirely accurate.

What does 'cat /proc/PID/limits' show you?

-chris



Re: Tomcat 6.0.35-SocketException: Too many open files issue with

2012-01-22 Thread gnath
Thanks chris for looking into this.

Here are answers for the questions you asked.

We have 2 connectors (one for http and another for https) using the 
tomcatThreadPool. I have connectionTimeout=2 for the http connector. 
However, I was told that our https connector might not be used by the app, as our 
load balancer is handling all the https traffic and just sending it to the http 
connector.

The ulimit settings were increased from the default 1024 to 4096 by our admin. I am 
not sure how he did that, but I see the count as 4096 when I do ulimit -a.

For ulimit -n I see it's 'unlimited'.

for cat /proc/PID/limits, i get the following response:

Limit Soft Limit   Hard Limit   Units 
Max cpu time  unlimited    unlimited    seconds   
Max file size unlimited    unlimited    bytes 
Max data size unlimited    unlimited    bytes 
Max stack size    10485760 unlimited    bytes 
Max core file size    0    unlimited    bytes 
Max resident set  unlimited    unlimited    bytes 
Max processes unlimited    unlimited    processes 
Max open files    4096 4096 files 
Max locked memory 32768    32768    bytes 
Max address space unlimited    unlimited    bytes 
Max file locks    unlimited    unlimited    locks 
Max pending signals   202752   202752   signals   
Max msgqueue size 819200   819200   bytes 
Max nice priority 0    0    
Max realtime priority 0    0 



This morning Tomcat hung again, but this time it didn't say 'too many open files' 
in the logs; I only see the following in catalina.out:

org.apache.tomcat.util.http.Parameters processParameters
INFO: Invalid chunk starting at byte [0] and ending at byte [0] with a value of 
[null] ignored
Jorg.apache.tomcat.util.http.Parameters processParameters
INFO: Invalid chunk starting at byte [0] and ending at byte [0] with a value of 
[null] ignored

When it hung (the java process was still up), I ran a few commands like lsof by 
PID and a couple of others. Here is what I got:

lsof -p PID| wc -l
1342

lsof | wc -l
4520

lsof -u USER| wc -l
1953

After I kill the java process, the lsof count for the PID obviously returns to zero.


Is there any chance that Tomcat is ignoring the ulimit? Some people on the web 
were saying something about setting this in catalina.sh.

Please help with my ongoing issue; it's getting very hard to monitor the logs 
every minute and restart whenever it hangs with these kinds of issues. I very 
much appreciate your help in this.

Thanks
-G




 From: Christopher Schultz ch...@christopherschultz.net
To: Tomcat Users List users@tomcat.apache.org 
Sent: Sunday, January 22, 2012 11:20 AM
Subject: Re: Tomcat 6.0.35-SocketException: Too many open files  issue with
 

G,

On 1/22/12 3:01 AM, gnath wrote:
 We have been seeing SocketException: Too many open files in 
 production environment(Linux OS running Tomcat 6.0.35 with sun's
 JDK 1.6.30) every day and requires a restart of Tomcat. When this
 happened for the first time, we searched online and found people
 suggesting to increase the file descriptors size and we increased
 to 4096. But still the problem persists. We have the Orion App
 Server also running on the same machine but usually during the day
 when we check the open file descriptor by command: ls -l
 /proc/PID/fd, its always less than 1000 combined for both Orion and
 Tomcat.
 
 Here is the exception we see pouring in the logs once it starts: 
 This requires us to kill java process and restart tomcat. Our
 Tomcat configuration maxThreadCount is 500 with minSpareThreads=50
 in server.xml

How many connectors do you have? If you have more than one connector
with 500 threads, then you can have more threads than maybe you are
expecting.

 SEVERE: Socket accept failed
 java.net.SocketException: Too many open files
     at java.net.PlainSocketImpl.socketAccept(Native Method)
     at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
     at java.net.ServerSocket.implAccept(ServerSocket.java:462)
     at java.net.ServerSocket.accept(ServerSocket.java:430)
     at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
     at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
     at java.lang.Thread.run(Thread.java:662)
 
 ulimit -a gives for the user where Tomcat is running.
 
 open files                      (-n) 4096

How did you set the ulimit for this user? Did you do it in a login
script or something, or just at the command-line at some point

Too many open files error

2011-08-24 Thread Campbell, Lance
Tomcat 6.0.32
Java 1.6.27
Apache 2.0
RedHat 6.x 64 bit
/proc/sys/fs/file-max = 3233344

We experienced an issue where we were getting the error 'too many open files' 
in Tomcat. The server manager increased the open-files limit to the value above, 
but the error kept coming back even after rebooting the server. Is there a max 
number of connections that Tomcat should run within based on the above specs?

The servlet in question that was being hit returned an XML document after doing 
a series of database queries.

Thanks,

Lance


Re: Too many open files error

2011-08-24 Thread Francis GALIEGUE
On Wed, Aug 24, 2011 at 17:21, Campbell, Lance la...@illinois.edu wrote:
 Tomcat 6.0.32
 Java 1.6.27
 Apache 2.0
 RedHat 6.x 64 bit
 /proc/sys/fs/file-max = 3233344

 We experienced an issue where we were getting the error too many open files 
 in tomcat.  The server manager increase the amount of open files to the 
 above.  But the error kept coming back even after rebooting the server.  Is 
 there a max number of connections that tomcat should run within based on the 
 above specs?

 The servlet in question that was being hit returned an XML document after 
 doing a series of database queries.


file-max is not what you want to modify. It's the user's limit: RLIMIT_NOFILE.

Look in /etc/security/limits.d. You'll need to restart Tomcat.

-- 
Francis Galiegue
ONE2TEAM
Ingénieur système
Mob : +33 (0) 683 877 875
Tel : +33 (0) 178 945 552
f...@one2team.com
40 avenue Raymond Poincaré
75116 Paris




RE: Too many open files error

2011-08-24 Thread Campbell, Lance
The file /etc/security/limits.d is empty.  What would be an example of 
something you would expect to see in there that would relate to changing the 
RLIMIT_NOFILE value? 

Thanks,  

From: Francis GALIEGUE [f...@one2team.com]
Sent: Wednesday, August 24, 2011 10:24 AM
To: Tomcat Users List
Subject: Re: Too many open files error

On Wed, Aug 24, 2011 at 17:21, Campbell, Lance la...@illinois.edu wrote:
 Tomcat 6.0.32
 Java 1.6.27
 Apache 2.0
 RedHat 6.x 64 bit
 /proc/sys/fs/file-max = 3233344

 We experienced an issue where we were getting the error too many open files 
 in tomcat.  The server manager increase the amount of open files to the 
 above.  But the error kept coming back even after rebooting the server.  Is 
 there a max number of connections that tomcat should run within based on the 
 above specs?

 The servlet in question that was being hit returned an XML document after 
 doing a series of database queries.


file-max is not what you want to modify. It's the user's limit: RLIMIT_NOFILE.

Look in /etc/security/limits.d. You'll need to restart Tomcat.

--
Francis Galiegue
ONE2TEAM
Ingénieur système
Mob : +33 (0) 683 877 875
Tel : +33 (0) 178 945 552
f...@one2team.com
40 avenue Raymond Poincaré
75116 Paris




Re: Too many open files error

2011-08-24 Thread Francis GALIEGUE
On Wed, Aug 24, 2011 at 17:33, Campbell, Lance la...@illinois.edu wrote:
 The file /etc/security/limits.d is empty.  What would be an example of 
 something you would expect to see in there that would relate to changing the 
 RLIMIT_NOFILE value?


It's a directory, not a file. Create a file named tomcat in it (or
whatever name you want) and put in these two lines:

tomcat  soft    nofile  16384
tomcat  hard    nofile  16384

(if the user running Tomcat is indeed called tomcat)

If you want to see the current limit, as root, run:

su tomcat -c 'ulimit -n'

-- 
Francis Galiegue
ONE2TEAM
Ingénieur système
Mob : +33 (0) 683 877 875
Tel : +33 (0) 178 945 552
f...@one2team.com
40 avenue Raymond Poincaré
75116 Paris




Re: Too many open files error

2011-08-24 Thread Christopher Schultz

Lance,

On 8/24/2011 11:21 AM, Campbell, Lance wrote:
 Tomcat 6.0.32 Java 1.6.27 Apache 2.0 RedHat 6.x 64 bit 
 /proc/sys/fs/file-max = 3233344
 
 We experienced an issue where we were getting the error too many 
 open files in tomcat.  The server manager increase the amount of 
 open files to the above.  But the error kept coming back even
 after rebooting the server.  Is there a max number of connections
 that tomcat should run within based on the above specs?
 
 The servlet in question that was being hit returned an XML
 document after doing a series of database queries.

You may find that the problem isn't the number of on-disk files but
the number of file descriptors, which might actually have different
meanings on your system.

It's also possible that the JVM is giving you a spurious message about
too many files when the problem is really the number of /threads/ --
I've seen that in the past, too.
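
Two quick checks that separate the two (PID is the Tomcat java process id):

ls /proc/PID/fd | wc -l   # open file descriptors held by the process
ps -o nlwp= -p PID        # number of threads (lightweight processes) in the process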

Can you post the exact stack trace that you got along with this error?
Also, how about the output of ulimit -a for the user that actually
runs Tomcat? Finally, what do your Connector elements look like in
conf/server.xml?

-chris



too many open files issue in tomcat

2011-06-27 Thread dasari.rao
Hi,



I am facing the issue of 'too many open files' in Tomcat and am not
able to process any requests further. Has somebody faced the same problem,
and what is the cause and solution?



This issue is creating a lot of problems on the production systems; if
somebody has already solved this issue, please provide us the solution.



Regards

Dayakar




RE: too many open files issue in tomcat

2011-06-27 Thread Guy Katz
I think you will get better help by providing the following:
 -Which tomcat version are you using?
-Which OS are you deploying on?
-What is your memory setting for tomcat (if explicitly set)?
-What's your file descriptor configuration in the OS (if explicitly
set)?
-does the problem arrive alongside an out of memory error?
-are there any 'IO heavy' processes that run on your deployment machine
along with tomcat (DB, etc)?

-Original Message-
From: dasari@wipro.com [mailto:dasari@wipro.com] 
Sent: Monday, June 27, 2011 12:54 PM
To: users@tomcat.apache.org
Subject: too many open files issue in tomcat

Hi,

 

I am facing the issue of too many open files in the tomcat and not
able to process any request further. Did somebody faced the same problem
and what is the problem and solution for the same.

 

This issue is creating lot of problem on the production systems and if
somebody has already solved this issue, then provide us the solution.

 

Regards

Dayakar





RE: too many open files issue in tomcat

2011-06-27 Thread dasari.rao
Hi,

Tomcat version is 6.0.29
OS is RHEL5.5-1
-Xms256m -Xmx768m
No explicit file descriptor configuration; it's the OS default.
We have not observed an out of memory error, but we have sometimes seen socket
problems.
Only Tomcat is running on the server, but it communicates with a DB on
another server.

Regards
Dayakar
-Original Message-
From: Guy Katz [mailto:gk...@allot.com]
Sent: Monday, June 27, 2011 3:38 PM
To: users@tomcat.apache.org
Subject: RE: too many open files issue in tomcat

I think you will get better help by providing the following:
 -Which tomcat version are you using?
-Which OS are you deploying on?
-What is your memory setting for tomcat (if explicitly set)?
-What's your file descriptor configuration in the OS (if explicitly
set)?
-does the problem arrive alongside an out of memory error?
-are there any 'IO heavy' processes that run on your deployment machine
along with tomcat (DB, etc)?

-Original Message-
From: dasari@wipro.com [mailto:dasari@wipro.com]
Sent: Monday, June 27, 2011 12:54 PM
To: users@tomcat.apache.org
Subject: too many open files issue in tomcat

Hi,



I am facing the issue of too many open files in the tomcat and not
able to process any request further. Did somebody faced the same problem
and what is the problem and solution for the same.



This issue is creating lot of problem on the production systems and if
somebody has already solved this issue, then provide us the solution.



Regards

Dayakar



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: too many open files issue in tomcat

2011-06-27 Thread Mark Thomas
On 27/06/2011 11:21, dasari@wipro.com wrote:
 Hi,
 
 Tomcat version is 6.0.29
 OS is RHEL5.5-1
 -Xms256m -Xmx768m
 No explicit file descriptor configuration its default of OS
 Not observed the out of memory error but some time found the socket
 problem
 Only tomcat is running on the server but communicates with the DB on the
 other server.

Search the archives / look in the FAQ (I think this is in there).

Mark

 
 Regards
 Dayakar
 -Original Message-
 From: Guy Katz [mailto:gk...@allot.com] 
 Sent: Monday, June 27, 2011 3:38 PM
 To: users@tomcat.apache.org
 Subject: RE: too many open files issue in tomcat
 
 I think you will get better help by providing the following:
  -Which tomcat version are you using?
 -Which OS are you deploying on?
 -What is your memory setting for tomcat (if explicitly set)?
 -What's your file descriptor configuration in the OS (if explicitly
 set)?
 -does the problem arrive alongside an out of memory error?
 -are there any 'IO heavy' processes that run on your deployment machine
 along with tomcat (DB, etc)?
 
 -Original Message-
 From: dasari@wipro.com [mailto:dasari@wipro.com] 
 Sent: Monday, June 27, 2011 12:54 PM
 To: users@tomcat.apache.org
 Subject: too many open files issue in tomcat
 
 Hi,
 
  
 
 I am facing the issue of too many open files in the tomcat and not
 able to process any request further. Did somebody faced the same problem
 and what is the problem and solution for the same.
 
  
 
 This issue is creating lot of problem on the production systems and if
 somebody has already solved this issue, then provide us the solution.
 
  
 
 Regards
 
 Dayakar
 
 
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 
 
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 




-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: too many open files issue in tomcat

2011-06-27 Thread Jason Viloria
On Mon, Jun 27, 2011 at 12:30 PM, Mark Thomas ma...@apache.org wrote:

 On 27/06/2011 11:21, dasari@wipro.com wrote:
  Hi,
 
  Tomcat version is 6.0.29
  OS is RHEL5.5-1
  -Xms256m -Xmx768m
  No explicit file descriptor configuration its default of OS
  Not observed the out of memory error but some time found the socket
  problem
  Only tomcat is running on the server but communicates with the DB on the
  other server.

 Search the archives / look in the FAQ (I think this in in there).


Just speaking from what I have experienced: most of the time I have
encountered this problem, it was because of a bottleneck somewhere in the
code. A Tomcat thread needs to talk to a remote system via SOAP and the
remote system takes too long to time out, so Tomcat threads sit around
holding file descriptors. Another common cause is a DB bottleneck, again
leaving Tomcat threads tied up for too long, so they are not freed fast
enough to serve further clients.  You should also check how many connections
are coming into your system, using whatever network monitoring tool you can
put on it; it could simply be that you are reaching the fd limit and need to
raise your ulimits. Hope this helps.

/Jason
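
To illustrate the timeout point above: a minimal, hypothetical sketch (the
host, port and path are placeholders, not taken from this thread) of an
outbound call that sets explicit connect and read timeouts and releases the
stream in a finally block, so a slow remote system cannot pin a file
descriptor indefinitely.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RemoteCallWithTimeouts {
        public static void callBackend() throws IOException {
            // Placeholder endpoint -- substitute the real SOAP/remote URL.
            URL url = new URL("http://backend.example.com:8080/service");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(5000); // give up if the host is unreachable
            conn.setReadTimeout(10000);   // give up if the host stops responding
            InputStream in = null;
            try {
                in = conn.getInputStream();
                byte[] buffer = new byte[8192];
                while (in.read(buffer) != -1) {
                    // consume (or process) the response
                }
            } finally {
                if (in != null) {
                    in.close();    // releases the underlying socket descriptor
                }
                conn.disconnect(); // drop the connection instead of pooling it
            }
        }
    }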


Too many open files

2010-05-25 Thread privat

Hello,

I have a problem with my little CORBA servlet.
I always get this error:

java.net.SocketException: Too many open files
sun.nio.ch.Net.socket0(Native Method)
sun.nio.ch.Net.socket(Net.java:97)
sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)

sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
java.nio.channels.SocketChannel.open(SocketChannel.java:105)
java.nio.channels.SocketChannel.open(SocketChannel.java:145)

com.sun.corba.se.impl.transport.DefaultSocketFactoryImpl.createSocket(DefaultSocketFactoryImpl.java:60)
com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.<init>(SocketOrChannelConnectionImpl.java:188)

com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.<init>(SocketOrChannelConnectionImpl.java:218)

com.sun.corba.se.impl.transport.SocketOrChannelContactInfoImpl.createConnection(SocketOrChannelContactInfoImpl.java:101)

com.sun.corba.se.impl.protocol.CorbaClientRequestDispatcherImpl.beginRequest(CorbaClientRequestDispatcherImpl.java:152)

com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.request(CorbaClientDelegateImpl.java:118)

com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.is_a(CorbaClientDelegateImpl.java:211)
org.omg.CORBA.portable.ObjectImpl._is_a(ObjectImpl.java:112)

I use: Tomcat 6.0.26 on Linux JVM: 1.6.0_20-b02

Thanks for your help.

br,

Markus 



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Too many open files

2010-05-25 Thread André Warnier

pri...@samea.de wrote:

Hello,

I have a problem with my little CORBA-Servlet.
Allways I get this error:

java.net.SocketException: Too many open files
sun.nio.ch.Net.socket0(Native Method)
sun.nio.ch.Net.socket(Net.java:97)
sun.nio.ch.SocketChannelImpl.init(SocketChannelImpl.java:84)
sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37) 


java.nio.channels.SocketChannel.open(SocketChannel.java:105)
java.nio.channels.SocketChannel.open(SocketChannel.java:145)
com.sun.corba.se.impl.transport.DefaultSocketFactoryImpl.createSocket(DefaultSocketFactoryImpl.java:60) 

com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:188) 

com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:218) 

com.sun.corba.se.impl.transport.SocketOrChannelContactInfoImpl.createConnection(SocketOrChannelContactInfoImpl.java:101) 

com.sun.corba.se.impl.protocol.CorbaClientRequestDispatcherImpl.beginRequest(CorbaClientRequestDispatcherImpl.java:152) 

com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.request(CorbaClientDelegateImpl.java:118) 

com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.is_a(CorbaClientDelegateImpl.java:211) 


org.omg.CORBA.portable.ObjectImpl._is_a(ObjectImpl.java:112)

I use: Tomcat 6.0.26 on Linux JVM: 1.6.0_20-b02

At first sight, it doesn't look as if this is a Tomcat issue.  It seems 
more of an issue within your servlet.

Is this servlet opening its own connection to something else?
If yes, then you are probably forgetting to close this connection when you 
are done with it, and the connections accumulate until the OS tells your 
process that it has too many open sockets at the same time.


Doing a netstat -an would probably provide more information.

If you are under Unix/Linux, you can also try lsof, but only studying 
the options is already quite a challenge.
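
Concretely, the pattern described above looks like the following minimal
sketch (Java 6 style try/finally; the host and port are placeholders, not
taken from the original post): close the outbound channel in a finally block
so the descriptor is released even when the request fails.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.channels.SocketChannel;

    public class BackendClient {
        public static void talkToBackend() throws IOException {
            SocketChannel channel = SocketChannel.open();
            try {
                channel.connect(new InetSocketAddress("backend.example.com", 1050));
                // ... write the request and read the reply here ...
            } finally {
                channel.close(); // runs even if connect() or the I/O above throws,
                                 // so the file descriptor is always released
            }
        }
    }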


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Too many open files

2010-05-25 Thread Pid
On 25/05/2010 10:26, pri...@samea.de wrote:
 Hello,
 
 I have a problem with my little CORBA-Servlet.
 Allways I get this error:
 
 java.net.SocketException: Too many open files
 sun.nio.ch.Net.socket0(Native Method)
 sun.nio.ch.Net.socket(Net.java:97)
 sun.nio.ch.SocketChannelImpl.init(SocketChannelImpl.java:84)
 
 sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
 
 java.nio.channels.SocketChannel.open(SocketChannel.java:105)
 java.nio.channels.SocketChannel.open(SocketChannel.java:145)
 
 com.sun.corba.se.impl.transport.DefaultSocketFactoryImpl.createSocket(DefaultSocketFactoryImpl.java:60)
 
 com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:188)
 
 
 com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:218)
 
 
 com.sun.corba.se.impl.transport.SocketOrChannelContactInfoImpl.createConnection(SocketOrChannelContactInfoImpl.java:101)
 
 
 com.sun.corba.se.impl.protocol.CorbaClientRequestDispatcherImpl.beginRequest(CorbaClientRequestDispatcherImpl.java:152)
 
 
 com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.request(CorbaClientDelegateImpl.java:118)
 
 
 com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.is_a(CorbaClientDelegateImpl.java:211)
 
 org.omg.CORBA.portable.ObjectImpl._is_a(ObjectImpl.java:112)
 
 I use: Tomcat 6.0.26 on Linux JVM: 1.6.0_20-b02
 
 Thanks for your help.

Please start a completely new email, rather than replying to an existing
one and editing the subject and body - which is called thread-hijacking.


p

 br,
 
 Markus
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 






RE: too many open files

2010-05-25 Thread privat
Hello,

I had a look at my servlet, but it closes the connection after each request.

Furthermore, it crashes when opening the nameserver.

Thanks for your help.

br,

Markus


pri...@samea.de wrote:
 Hello,
 
 I have a problem with my little CORBA-Servlet.
 Allways I get this error:
 
 java.net.SocketException: Too many open files
 sun.nio.ch.Net.socket0(Native Method)
 sun.nio.ch.Net.socket(Net.java:97)
 sun.nio.ch.SocketChannelImpl.init(SocketChannelImpl.java:84)
 
 sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
  
 
 java.nio.channels.SocketChannel.open(SocketChannel.java:105)
 java.nio.channels.SocketChannel.open(SocketChannel.java:145)
 
 com.sun.corba.se.impl.transport.DefaultSocketFactoryImpl.createSocket(DefaultSocketFactoryImpl.java:60)
  
 
 com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:188)
  
 
 
 com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:218)
  
 
 
 com.sun.corba.se.impl.transport.SocketOrChannelContactInfoImpl.createConnection(SocketOrChannelContactInfoImpl.java:101)
  
 
 
 com.sun.corba.se.impl.protocol.CorbaClientRequestDispatcherImpl.beginRequest(CorbaClientRequestDispatcherImpl.java:152)
  
 
 
 com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.request(CorbaClientDelegateImpl.java:118)
  
 
 
 com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.is_a(CorbaClientDelegateImpl.java:211)
  
 
 org.omg.CORBA.portable.ObjectImpl._is_a(ObjectImpl.java:112)
 
 I use: Tomcat 6.0.26 on Linux JVM: 1.6.0_20-b02
 
At first sight, it doesn't look as if this is a Tomcat issue.  It seems 
more of an issue within your servlet.
Is this servlet opening its own connection to something else?
If yes, then you probably forget to close this connection when you are 
done with it, and they accumulate until the OS tells your process that 
it has too many open sockets at the same time.

Doing a netstat -an would probably provide more information.

If you are under Unix/Linux, you can also try lsof, but only studying 
the options is already quite a challenge.

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


Re: too many open files

2010-05-25 Thread Pid
On 25/05/2010 11:38, pri...@samea.de wrote:
 Hello,
 
 I had a look at my servlet, but it closes the connection after
 doing a request.

Does it close the connection if the request throws an error?


p

 Further it crashes by opening the nameserver.
 
 Thanks for your help.
 
 br,
 
 Markus
 
 
 pri...@samea.de wrote:
 Hello,

 I have a problem with my little CORBA-Servlet.
 Allways I get this error:

 java.net.SocketException: Too many open files
 sun.nio.ch.Net.socket0(Native Method)
 sun.nio.ch.Net.socket(Net.java:97)
 sun.nio.ch.SocketChannelImpl.init(SocketChannelImpl.java:84)
 
 sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
  

 java.nio.channels.SocketChannel.open(SocketChannel.java:105)
 java.nio.channels.SocketChannel.open(SocketChannel.java:145)
 
 com.sun.corba.se.impl.transport.DefaultSocketFactoryImpl.createSocket(DefaultSocketFactoryImpl.java:60)
  

 com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:188)
  

 
 com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:218)
  

 
 com.sun.corba.se.impl.transport.SocketOrChannelContactInfoImpl.createConnection(SocketOrChannelContactInfoImpl.java:101)
  

 
 com.sun.corba.se.impl.protocol.CorbaClientRequestDispatcherImpl.beginRequest(CorbaClientRequestDispatcherImpl.java:152)
  

 
 com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.request(CorbaClientDelegateImpl.java:118)
  

 
 com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.is_a(CorbaClientDelegateImpl.java:211)
  

 org.omg.CORBA.portable.ObjectImpl._is_a(ObjectImpl.java:112)

 I use: Tomcat 6.0.26 on Linux JVM: 1.6.0_20-b02

 At first sight, it doesn't look as if this is a Tomcat issue.  It seems 
 more of an issue within your servlet.
 Is this servlet opening its own connection to something else?
 If yes, then you probably forget to close this connection when you are 
 done with it, and they accumulate until the OS tells your process that 
 it has too many open sockets at the same time.

 Doing a netstat -an would probably provide more information.

 If you are under Unix/Linux, you can also try lsof, but only studying 
 the options is already quite a challenge.

 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 






RE: too many open files

2010-05-25 Thread Caldarale, Charles R
 From: Pid [mailto:p...@pidster.com]
 Subject: Re: too many open files
 
  I had a look at my servlet, but it closes the connection after
  doing a request.
 
 Does it close the connection if the request throws an error?

And as previously suggested, use netstat to find out if the excess files really 
are sockets.

 - Chuck




-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: Too many open files

2010-05-25 Thread Rainer Jung

On 25.05.2010 11:54, André Warnier wrote:

pri...@samea.de wrote:

Hello,

I have a problem with my little CORBA-Servlet.
Allways I get this error:

java.net.SocketException: Too many open files
sun.nio.ch.Net.socket0(Native Method)
sun.nio.ch.Net.socket(Net.java:97)
sun.nio.ch.SocketChannelImpl.init(SocketChannelImpl.java:84)
sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)

java.nio.channels.SocketChannel.open(SocketChannel.java:105)
java.nio.channels.SocketChannel.open(SocketChannel.java:145)
com.sun.corba.se.impl.transport.DefaultSocketFactoryImpl.createSocket(DefaultSocketFactoryImpl.java:60)

com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:188)

com.sun.corba.se.impl.transport.SocketOrChannelConnectionImpl.init(SocketOrChannelConnectionImpl.java:218)

com.sun.corba.se.impl.transport.SocketOrChannelContactInfoImpl.createConnection(SocketOrChannelContactInfoImpl.java:101)

com.sun.corba.se.impl.protocol.CorbaClientRequestDispatcherImpl.beginRequest(CorbaClientRequestDispatcherImpl.java:152)

com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.request(CorbaClientDelegateImpl.java:118)

com.sun.corba.se.impl.protocol.CorbaClientDelegateImpl.is_a(CorbaClientDelegateImpl.java:211)

org.omg.CORBA.portable.ObjectImpl._is_a(ObjectImpl.java:112)

I use: Tomcat 6.0.26 on Linux JVM: 1.6.0_20-b02


At first sight, it doesn't look as if this is a Tomcat issue. It seems
more of an issue within your servlet.
Is this servlet opening its own connection to something else?
If yes, then you probably forget to close this connection when you are
done with it, and they accumulate until the OS tells your process that
it has too many open sockets at the same time.

Doing a netstat -an would probably provide more information.

If you are under Unix/Linux, you can also try lsof, but only studying
the options is already quite a challenge.


On Linux you can also get the most important file descriptor info by 
looking at the proc filesystem. If your process has process id (PID) 
XYZ, then do:

   ls -l /proc/XYZ/fd

It will list all open file descriptors, so you can find out why there 
are so many open ones (are they files, sockets, ...). If they are 
sockets, you can list the association between the sockets and the PID 
(=XYZ) using

   netstat -anp

Finally, if you think you only need a few more descriptors, have a look 
at the ulimit command (man ulimit). There are hard and soft limits. You 
can look at the active limits with "ulimit -a" and "ulimit -H -a". The 
one you are interested in is "open files". You might be able to set a 
higher limit with "ulimit -n".


Regards,

Rainer
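
The same information is available from inside the JVM, which can help when
you cannot easily run lsof as the Tomcat user. A small, hedged sketch that
lists /proc/self/fd and prints what each descriptor points at; it is
Linux-only and needs a Java 7+ JDK, i.e. newer than the JVMs discussed in
this thread:

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ListOwnFds {
        public static void main(String[] args) throws IOException {
            Path fdDir = Paths.get("/proc/self/fd"); // this process's descriptors
            try (DirectoryStream<Path> fds = Files.newDirectoryStream(fdDir)) {
                for (Path fd : fds) {
                    try {
                        // Each entry is a symlink to a file path,
                        // "socket:[inode]", "pipe:[inode]", etc.
                        System.out.println(fd.getFileName() + " -> "
                                + Files.readSymbolicLink(fd));
                    } catch (IOException gone) {
                        System.out.println(fd.getFileName() + " -> (already closed)");
                    }
                }
            }
        }
    }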

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: java.net.SocketException: Too many open files

2009-06-04 Thread Lawrence Lamprecht
Greetings All,

I had the exact same issue a while ago; my resolution was to raise the maximum 
number of files the OS allowed to be opened, which was what limited the Tomcat 
application.

After I increased the max open files setting, all was resolved.

Kind regards / Met vriendelijke groet,
Lawrence Lamprecht

-Original Message-
From: Caldarale, Charles R [mailto:chuck.caldar...@unisys.com] 
Sent: Wednesday, June 03, 2009 10:20 PM
To: Tomcat Users List
Subject: RE: java.net.SocketException: Too many open files

 From: John Cartwright [mailto:john.c.cartwri...@noaa.gov]
 Subject: Re: java.net.SocketException: Too many open files
 
 I'll look into modifying the memory parameters.

That would be a complete waste of time.  The heap size has nothing to do with 
the problem you're seeing, and the discussion cited by Martin is full of 
erroneous information.  Please ignore it.

 - Chuck




-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



java.net.SocketException: Too many open files

2009-06-03 Thread John . C . Cartwright
Hello All,

something strange happened to our tomcat 5.5 instance running on RHEL Linux.

Suddenly we began getting the exception listed below written out to the
catalina.out log.  It was repeated approx 4 million times w/in a couple
of hours until it filled up the file system and hung tomcat.  This is a
pretty standard setup.  Requests come into apache and then a ProxyPass
forwards onto tomcat. Apache has a directive like the following for each
context:

ProxyPass /gdsg   ajp://localhost:8009/gdsg


Can someone please help me to understand what might cause such an
exception?  Is there a good way to see the files tomcat has open?  I'm
not sure that a simple lsof -u tomcat will show them all.

Thanks so much for the help!

--john

Jun 1, 2009 6:19:07 AM org.apache.jk.common.ChannelSocket acceptConnections
WARNING: Exception executing accept
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
at java.net.ServerSocket.implAccept(ServerSocket.java:450)
at java.net.ServerSocket.accept(ServerSocket.java:421)
at org.apache.jk.common.ChannelSocket.accept(ChannelSocket.java:312)
at
org.apache.jk.common.ChannelSocket.acceptConnections(ChannelSocket.java:666)
at
org.apache.jk.common.ChannelSocket$SocketAcceptor.runIt(ChannelSocket.java:876)
at
org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
at java.lang.Thread.run(Thread.java:595)

 

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: java.net.SocketException: Too many open files

2009-06-03 Thread Peter Crowther
 From: john.c.cartwri...@noaa.gov [mailto:john.c.cartwri...@noaa.gov]
 Can someone please help to to understand what might cause such an
 exception?

File descriptor exhaustion - the process has run out of fds.  Any i/o could use 
a file descriptor, whether that's socket to httpd, socket to database or access 
to a file.  Naively, I'd expect lsof to show them - what makes you think it 
isn't?

If you're lucky, you merely need to find the piece of code that's leaking 
resources and fix it - which I accept isn't always the easiest of jobs.  If 
you're *un*lucky, it's load related and you've just plain run out.  I'll leave 
the UNIX specialists to suggest ways of increasing the number of fds per 
process, but there have been some recent threads on here.

- Peter
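
For a quick check of how close the process is to its limit, the JVM can also
report descriptor counts itself. A hedged sketch using
com.sun.management.UnixOperatingSystemMXBean, which is specific to the
Sun/Oracle JDKs used in this thread rather than part of the standard API:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdHeadroom {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                // Descriptors held by this JVM versus the per-process limit (ulimit -n)
                System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                        + " of max " + unix.getMaxFileDescriptorCount());
            } else {
                System.out.println("Descriptor counts not exposed on this platform/JVM.");
            }
        }
    }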

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: java.net.SocketException: Too many open files

2009-06-03 Thread Martin Gainty
 




 Date: Wed, 3 Jun 2009 09:04:33 -0600
 From: john.c.cartwri...@noaa.gov
 Subject: java.net.SocketException: Too many open files
 To: users@tomcat.apache.org
 
 Hello All,
 
 something strange happened to our tomcat 5.5 instance running on RHEL Linux.
 
 Suddenly we began getting the exception listed below written out to the
 catalina.out log.  It was repeated approx 4 million times w/in a couple
 of hours until it filled up the file system and hung tomcat.  This is a
 pretty standard setup.  Requests come into apache and then a ProxyPass
 forwards onto tomcat. Apache has a directive like the following for each
 context:
 
 ProxyPass /gdsg   ajp://localhost:8009/gdsg
 
 
 Can someone please help to to understand what might cause such an
 exception?  Is there a good way to see the files tomcat has open?  I'm
 not sure that a simple lsof -u tomcat will show them all.
 
 Thanks so much for the help!
 
 --john
 
 Jun 1, 2009 6:19:07 AM org.apache.jk.common.ChannelSocket acceptConnections
 WARNING: Exception executing accept
 java.net.SocketException: Too many open files
 at java.net.PlainSocketImpl.socketAccept(Native Method)
 at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
 at java.net.ServerSocket.implAccept(ServerSocket.java:450)
 at java.net.ServerSocket.accept(ServerSocket.java:421)
 at org.apache.jk.common.ChannelSocket.accept(ChannelSocket.java:312)
 at
 org.apache.jk.common.ChannelSocket.acceptConnections(ChannelSocket.java:666)
 at
 org.apache.jk.common.ChannelSocket$SocketAcceptor.runIt(ChannelSocket.java:876)
 at
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
 at java.lang.Thread.run(Thread.java:595)
 
  
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 


Re: java.net.SocketException: Too many open files

2009-06-03 Thread John Cartwright




  

Date: Wed, 3 Jun 2009 09:04:33 -0600
From: john.c.cartwri...@noaa.gov
Subject: java.net.SocketException: Too many open files
To: users@tomcat.apache.org

Hello All,

something strange happened to our tomcat 5.5 instance running on RHEL Linux.

Suddenly we began getting the exception listed below written out to the
catalina.out log.  It was repeated approx 4 million times w/in a couple
of hours until it filled up the file system and hung tomcat.  This is a
pretty standard setup.  Requests come into apache and then a ProxyPass
forwards onto tomcat. Apache has a directive like the following for each
context:

ProxyPass /gdsg   ajp://localhost:8009/gdsg


Can someone please help to to understand what might cause such an
exception?  Is there a good way to see the files tomcat has open?  I'm
not sure that a simple lsof -u tomcat will show them all.

Thanks so much for the help!

--john

Jun 1, 2009 6:19:07 AM org.apache.jk.common.ChannelSocket acceptConnections
WARNING: Exception executing accept
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
at java.net.ServerSocket.implAccept(ServerSocket.java:450)
at java.net.ServerSocket.accept(ServerSocket.java:421)
at org.apache.jk.common.ChannelSocket.accept(ChannelSocket.java:312)
at
org.apache.jk.common.ChannelSocket.acceptConnections(ChannelSocket.java:666)
at
org.apache.jk.common.ChannelSocket$SocketAcceptor.runIt(ChannelSocket.java:876)
at
org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
at java.lang.Thread.run(Thread.java:595)

 


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org






-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: java.net.SocketException: Too many open files

2009-06-03 Thread Caldarale, Charles R
 From: John Cartwright [mailto:john.c.cartwri...@noaa.gov]
 Subject: Re: java.net.SocketException: Too many open files
 
 I'll look into modifying the memory parameters.

That would be a complete waste of time.  The heap size has nothing to do with 
the problem you're seeing, and the discussion cited by Martin is full of 
erroneous information.  Please ignore it.

 - Chuck




-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: java.net.SocketException: Too many open files

2009-06-03 Thread John Cartwright
Thanks for your reply Peter.  Initially I was assuming that lsof was not 
showing me files on disk that were being opened and read by servlets.  
However, I've been unable to reproduce that in a more controlled setting.


Since this system has been running for weeks w/o any modification, 
something must be leaking.  I'll keep monitoring and see if I can gather 
more information.


--john


Peter Crowther wrote:

From: john.c.cartwri...@noaa.gov [mailto:john.c.cartwri...@noaa.gov]
Can someone please help to to understand what might cause such an
exception?



File descriptor exhaustion - the process has run out of fds.  Any i/o could use 
a file descriptor, whether that's socket to httpd, socket to database or access 
to a file.  Naively, I'd expect lsof to show them - what makes you think it 
isn't?

If you're lucky, you merely need to find the piece of code that's leaking 
resources and fix it - which I accept isn't always the easiest of jobs.  If 
you're *un*lucky, it's load related and you've just plain run out.  I'll leave 
the UNIX specialists to suggest ways of increasing the number of fds per 
process, but there have been some recent threads on here.

- Peter

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org

  


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: java.net.SocketException: Too many open files

2009-06-03 Thread Martin Gainty

glad to hear that helped

you may want to consider implementing SSL on your connectors to protect your TC 
resources
http://www.mbaworld.com/docs/ssl-howto.html

Please let us know if we can provide any assistance to your efforts
Martin Gainty 




 Date: Wed, 3 Jun 2009 14:15:43 -0600
 From: john.c.cartwri...@noaa.gov
 Subject: Re: java.net.SocketException: Too many open files
 To: users@tomcat.apache.org
 
 Thanks for your suggestions Martin, I'll look into modifying the memory 
 parameters.  Strange thing is that this has been running for weeks w/o 
 any changes in the configuration or contexts.
 
 --john
 
 
 Martin Gainty wrote:
  Here is the code
  void acceptConnections() {
  if( log.isDebugEnabled() )
  log.debug("Accepting ajp connections on " + port);
  while( running ) {
  try{
  MsgContext ep=createMsgContext(packetSize);
  ep.setSource(this);
  ep.setWorkerEnv( wEnv );
  this.accept(ep);
 
  if( !running ) break;
  
   // Since this is a long-running connection, we don't care about 
  the small GC
   //next line gac's
  SocketConnection ajpConn=
  new SocketConnection(this, ep);
  tp.runIt( ajpConn );
  }catch(Exception ex) {
  if (running) log.warn("Exception executing accept", ex);
  }
  }
  }
 
  where org.apache.jk.core.JkHandler.java contains the createMsgContext
public MsgContext createMsgContext() {
  return new MsgContext(8*1024);
  }
 
  which calls org.apache.jk.core.MsgContext
  public MsgContext(int bsize) {
  try {
  c2b = new C2BConverter("iso-8859-1");
  } catch(IOException iex) {
  log.warn("Can't happen", iex);
  }
  jkIS = new JkInputStream(this, bsize);
  }
 
  which calls org.apache.jk.common.JkInputStream
  public JkInputStream(MsgContext context, int bsize) {
  mc = context;
  bodyMsg = new MsgAjp(bsize);
  outputMsg = new MsgAjp(bsize);
  }
 
  //which calls org.apache.jk.common.MsgAjp
private byte buf[];
  // The current read or write position in the buffer
  private int pos;
  /**
   * This actually means different things depending on whether the
   * packet is read or write.  For read, it's the length of the
   * payload (excluding the header).  For write, it's the length of
   * the packet as a whole (counting the header).  Oh, well.
   */
  private int len; 
  /*** The maximum packet size*/
  private int bufsize;
 
  /*** Constructor that takes a buffer size*/
  public MsgAjp(int bsize) {
  if(bsize < 8*1024) {
  bsize = 8*1024;
  }
  bufsize = bsize;
  buf = new byte[bsize];
  }
  3 possibilities:
  1) a misconfiguration of the AJP connector
 
  2) you're running out of heap; 
  tweaking the JAVA_OPT min-heap and max-heap parameters would help
  a 2004 post http://forums.sun.com/thread.jspa?messageID=10055131 says it 
  best
 
  The default stack size is 256K on UNIX and on 32-bit Windows operating 
  systems. To set Stack
  size
 
  java -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8  -Xms512m 
  -Xmx512m
 
  1. Setting the New generation heap size
   -XX:NewSize
   Use
  this option to set the New generation Java heap size. Set this value to
  a multiple of 1024

Re: NIO Connector: Too many open files [SOLVED]

2009-05-23 Thread Christopher Schultz

Filip,

On 5/22/2009 10:14 AM, Filip Hanik - Dev Lists wrote:
 [The NIO connector in 6.0.18] simply never called close on the FD when it was 
 done

I tried the tomcat-coyote.jar from Tomcat 6.0.20 and everything worked
quite well. Thanks a lot for catching this problem and bringing it to my
attention.

-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-22 Thread Christopher Schultz

Rainer,

On 5/21/2009 12:21 PM, Rainer Jung wrote:
 2 remarks about all your stress testing efforts:
 
 A) TIME_WAIT
 
 When not doing HTTP Keep-Alive, under high load the size of the TCP hash
 table and the effectiveness of the system to lookp up TCP connections
 can limit the throughput you can reach. More precisely, depending on the
 exact way of connection shutdown, you get TIME_WAIT states for the
 finished connections (without HTTP Keep Alive it could be one such
 connection per request). Most systems get slow once the number of those
 connections reaches something around 3.

That's fine, but the TIME_WAIT connections shouldn't be counted against the
process's file limit, should they? At that point, the process has released
the connection and the OS is babysitting it through the final stages of
TCP shutdown.

I understand that, with keepalive disabled, performance will kind of
suck. But, I shouldn't be running out of file descriptors.

 E.g. if you are doing 2000 requests per second without HTTP Keep Alive
 and the combination of web server and stress test tool leads to
 TIME_WAITs, after 15 seconds your table size might reach a critical size.

Meaning that the kernel can't keep up, or the NIO connector can't keep
up? I suspect the latter, because the other tests under the same
conditions at least complete... the NIO one appears not to have a
chance. Now, I'm running 6 tests and the NIO test is the 5th one, so
it's possible that it's just poorly positioned in my test battery. But,
since I've observed this failure at essentially the same place each
time, I suspect the NIO connector itself is at fault.

 Not using HTTP Keep Alive will very likely limit quickly the achievable
 throughput when going up in concurrency.

I'm willing to accept that, but 40 max connections should not be
resulting in hundreds of left-open file descriptors.

-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-22 Thread Christopher Schultz

Rainer,

On 5/21/2009 12:13 PM, Rainer Jung wrote:
 On 21.05.2009 17:55, Christopher Schultz wrote:
 All,

 I've been testing the performance of various Tomcat configurations
 against Apache httpd and my serious tests are not completing for the NIO
 connector because the server is running out of files:

 May 20, 2009 2:35:55 AM org.apache.tomcat.util.net.NioEndpoint$Acceptor run
 SEVERE: Socket accept failed
 java.io.IOException: Too many open files
 at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
 at 
 sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
 at 
 org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
 at java.lang.Thread.run(Thread.java:619)
 A bit of background for those who haven't followed the Apache httpd vs
 Tomcat static content performance thread:

 I'm running Tomcat 6.0.18 using tcnative 1.1.16. Apache httpd is not
 being used for this test, so the client is contacting Tomcat directly
 from localhost.

 $ uname -a
 Linux chadis 2.6.14-gentoo-r5 #2 PREEMPT Sat Dec 17 16:30:55 EST 2005
 i686 AMD Athlon(tm) XP 1700+ AuthenticAMD GNU/Linux

 $ java -version
 java version 1.6.0_13
 Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
 Java HotSpot(TM) Client VM (build 11.3-b02, mixed mode, sharing)

 $ ulimit -n (fds per process limit)
 1024

 1GiB RAM on the machine, here are the heap details /after/  the tests
 are run:

 $ jmap -heap 1430
 Attaching to process ID 1430, please wait...
 Debugger attached successfully.
 Client compiler detected.
 JVM version is 11.3-b02

 using thread-local object allocation.
 Mark Sweep Compact GC

 Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize  = 67108864 (64.0MB)
NewSize  = 1048576 (1.0MB)
MaxNewSize   = 4294901760 (4095.9375MB)
OldSize  = 4194304 (4.0MB)
NewRatio = 12
SurvivorRatio= 8
PermSize = 12582912 (12.0MB)
MaxPermSize  = 67108864 (64.0MB)

 Heap Usage:
 New Generation (Eden + 1 Survivor Space):
capacity = 2228224 (2.125MB)
used = 612888 (0.5844955444335938MB)
free = 1615336 (1.5405044555664062MB)
27.505672679227942% used
 Eden Space:
capacity = 2031616 (1.9375MB)
used = 612888 (0.5844955444335938MB)
free = 1418728 (1.3530044555664062MB)
30.167511970766128% used
 From Space:
capacity = 196608 (0.1875MB)
used = 0 (0.0MB)
free = 196608 (0.1875MB)
0.0% used
 To Space:
capacity = 196608 (0.1875MB)
used = 0 (0.0MB)
free = 196608 (0.1875MB)
0.0% used
 tenured generation:
capacity = 28311552 (27.0MB)
used = 20464784 (19.516738891601562MB)
free = 7846768 (7.4832611083984375MB)
72.28421811704283% used
 Perm Generation:
capacity = 12582912 (12.0MB)
used = 8834304 (8.425048828125MB)
free = 3748608 (3.574951171875MB)
70.208740234375% used

 Here are my Connector configurations:

 <!-- Regular non-APR Coyote Connector -->
 <Connector port="8001"
            protocol="org.apache.coyote.http11.Http11Protocol"
            connectionTimeout="2"
            server="Coyote1.1non-APR"
 />

 <!-- APR Connector -->
 <Connector port="8002"
            protocol="org.apache.coyote.http11.Http11AprProtocol"
            useSendfile="true"
            connectionTimeout="2"
            server="Coyote1.1APR"
 />

 <!-- APR without sendfile -->
 <Connector port="8003"
            protocol="org.apache.coyote.http11.Http11AprProtocol"
            useSendfile="false"
            connectionTimeout="2"
            server="Coyote1.1APRw/osendfile"
 />

 <!-- NIO Connector -->
 <Connector port="8004"
            protocol="org.apache.coyote.http11.Http11NioProtocol"
            useSendfile="true"
            connectionTimeout="2"
            server="Coyote1.1NIO"
 />

 <!-- NIO without sendfile -->
 <Connector port="8005"
            protocol="org.apache.coyote.http11.Http11NioProtocol"
            useSendfile="false"
            connectionTimeout="2"
            server="Coyote1.1NIOw/osendfile"
 />

 All connectors are configured at once, so I should have a maximum of 40
 threads in each pool. The command I ran to benchmark each connector was
 (for example):

 /usr/sbin/ab -c 40 -t 480 -n 1000 http://localhost:8004/4kiB.bin

 This runs ApacheBench for 8 minutes with 40 client threads requesting a
 4k file over and over again. This particular test succeeded, but there
 are 14 more tests, each using a file twice the size of the previous
 test. After the 128k file test, every single test fails after that.

 The last test I ran (with only 1 thread instead of 40), the NIO
 connector died in the same way, but the NIO connector without

Re: NIO Connector: Too many open files

2009-05-22 Thread Rainer Jung
Hi Chris,

On 22.05.2009 14:14, Christopher Schultz wrote:
 Rainer,
 
 On 5/21/2009 12:21 PM, Rainer Jung wrote:
 2 remarks about all your stress testing efforts:
 
 A) TIME_WAIT
 
 When not doing HTTP Keep-Alive, under high load the size of the TCP hash
 table and the effectiveness of the system to lookp up TCP connections
 can limit the throughput you can reach. More precisely, depending on the
 excat way of connection shutdown, you get TIME_WAIT states for the
 finished connections (without HTTP Keep Alive it could be one such
 connection per request). Most systems get slow, once the number of those
 connections reaches somthing arounf 3.
 
 That's fine, but the TIME_WAIT connections should be counted against the
 process's file limit, should it? At that point, the process has released
 the connection and the OS is babysitting it through the final stages of
 TCP shutdown.

Those connections will *not* be counted against process file
descriptors. They only exist as an entry in a TCP connection table. They
are no longer associated with the process. It's more of a TCP house
cleaning thing.

 I understand that, with keepalive disabled, performance will kind of
 suck. But, I shouldn't be running out of file descriptors.

Not out of FDs, but if the number of TIME_WAITs gets huge (check via
netstat during the run), your TCP throughput will drop and will be
restricted by the size of the connection hash.

 E.g. if you are doing 2000 requests per second without HTTP Keep Alive
 and the combination of web server and stress test tool leads to
 TIME_WAITs, after 15 seconds your table size might reach a critical size.
 
 Meaning that the kernel can't keep up, or the NIO connector can't keep
 up? I suspect the latter, because the other tests under the same
 conditions at least complete... the NIO one appears not to have a
 chance. Now, I'm running 6 tests and the NIO test is the 5th one, so
 it's possible that it's just poorly positioned in my test batter. But,
 since I've observed this failure at essentially the same place each
 time, I suspect the NIO connector itself is at fault.

I'm talking about a very general TCP thing. I'm not saying you actually
ran into it, but I'm saying that it makes sense to check the number of
TIME_WAITs via netstat during the test. If it gets very big, then the
TCP implementation will limit your throughput and most likely will
become the first bottleneck you hit. Again: I'm not saying that already
happened, but you should check, whether you run into this while doing
the test.

 Not using HTTP Keep Alive will very likely limit quickly the achievable
 throughput when going up in concurrency.
 
 I'm willing to accept that, but 40 max connections should not be
 resulting in hundreds of left-open file descriptors.

The file descriptors thing is totally independent. I hijacked the thread :)

Regards,

Rainer

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-22 Thread Rainer Jung
Hi Chris,

On 22.05.2009 14:29, Christopher Schultz wrote:
 $ jmap -heap 1430
 Attaching to process ID 1430, please wait...
 Debugger attached successfully.
 Client compiler detected.
 JVM version is 11.3-b02

 using thread-local object allocation.
 Mark Sweep Compact GC

 Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize  = 67108864 (64.0MB)
NewSize  = 1048576 (1.0MB)
MaxNewSize   = 4294901760 (4095.9375MB)
OldSize  = 4194304 (4.0MB)
NewRatio = 12
SurvivorRatio= 8
PermSize = 12582912 (12.0MB)
MaxPermSize  = 67108864 (64.0MB)

 Heap Usage:
 New Generation (Eden + 1 Survivor Space):
capacity = 2228224 (2.125MB)
used = 612888 (0.5844955444335938MB)
free = 1615336 (1.5405044555664062MB)
27.505672679227942% used
 Eden Space:
capacity = 2031616 (1.9375MB)
used = 612888 (0.5844955444335938MB)
free = 1418728 (1.3530044555664062MB)
30.167511970766128% used
  From Space:
capacity = 196608 (0.1875MB)
used = 0 (0.0MB)
free = 196608 (0.1875MB)
0.0% used
 To Space:
capacity = 196608 (0.1875MB)
used = 0 (0.0MB)
free = 196608 (0.1875MB)
0.0% used
 tenured generation:
capacity = 28311552 (27.0MB)
used = 20464784 (19.516738891601562MB)
free = 7846768 (7.4832611083984375MB)
72.28421811704283% used
 Perm Generation:
capacity = 12582912 (12.0MB)
used = 8834304 (8.425048828125MB)
free = 3748608 (3.574951171875MB)
70.208740234375% used

 Here are my Connector configurations:

  <!-- Regular non-APR Coyote Connector -->
  <Connector port="8001"
             protocol="org.apache.coyote.http11.Http11Protocol"
             connectionTimeout="2"
             server="Coyote1.1non-APR"
  />

  <!-- APR Connector -->
  <Connector port="8002"
             protocol="org.apache.coyote.http11.Http11AprProtocol"
             useSendfile="true"
             connectionTimeout="2"
             server="Coyote1.1APR"
  />

  <!-- APR without sendfile -->
  <Connector port="8003"
             protocol="org.apache.coyote.http11.Http11AprProtocol"
             useSendfile="false"
             connectionTimeout="2"
             server="Coyote1.1APRw/osendfile"
  />

  <!-- NIO Connector -->
  <Connector port="8004"
             protocol="org.apache.coyote.http11.Http11NioProtocol"
             useSendfile="true"
             connectionTimeout="2"
             server="Coyote1.1NIO"
  />

  <!-- APR without sendfile -->
  <Connector port="8005"
             protocol="org.apache.coyote.http11.Http11NioProtocol"
             useSendfile="false"
             connectionTimeout="2"
             server="Coyote1.1NIOw/osendfile"
  />

 All connectors are configured at once, so I should have a maximum of 40
 threads in each pool. The command I ran to benchmark each connector was
 (for example):

 /usr/sbin/ab -c 40 -t 480 -n 1000 http://localhost:8004/4kiB.bin

 This runs ApacheBench for 8 minutes with 40 client threads requesting a
 4k file over and over again. This particular test succeeded, but there
 are 14 more tests, each using a file twice the size of the previous
 test. After the 128k file test, every single test fails after that.

 The last test I ran (with only 1 thread instead of 40), the NIO
 connector died in the same way, but the NIO connector without sendfile
 enabled appeared to work properly. This time (40 threads), neither of
 the connectors worker properly, the NIO connector failing to complete
 any tests after the 128kb test and the NIO-sendfile connector failed to
 complete /all/ of the tests (automatically run immediately following the
 NIO tests).

 No OOMEs were encountered: only the exception shown above (no more
 files). On my previous tests, lsof reported that only one of my files
 was still open by the process. After this most recent test, it appears
 that 954 of my static files are still open by the process (and the test
 ended over 24 hours ago).

 The initial set of tests (c=1) seemed to recover, while this second set
 of tests (c=40) has not.

 My knee-jerk reaction is that the most number of files that should ever
 be open is 40: one per request processing thread. Something, somewhere
 is causing these file descriptors to stay open.

 Unfortunately, I don't have any GC information for the time period
 covering the test.

 I still have the JVM running, so I can probably inspect it for certain
 things if anyone has any questions. Unfortunately, I can't run any new
 JSP files (out of files!) and it looks like I can't connect using
 jconsole (probably because the JVM can't open a new socket).

 I'd love some suggestions at to what's going on, here.
 Maybe looking at the directory /proc/PID/fd (sorry, PID) will give more
 info (at least if the FDs are still in 

Re: NIO Connector: Too many open files

2009-05-22 Thread Christopher Schultz

Filip,

On 5/21/2009 12:34 PM, Filip Hanik - Dev Lists wrote:
 hi Christopher, generally, ulimit -n 1024 is too low for any kind of web
 server.

Fair enough, but I'm not putting an unreasonable load on my server with
ulimit -n 1024 and 40 concurrent connections, am I?

 And there was also a file descriptor leak in the NIO connector, fixed in
 http://svn.apache.org/viewvc?rev=734454view=rev
 
 this is when Tomcat NIO serves up static content.

Yup, I'm only serving up static content (using sendfile in the first
test that failed to complete, and NOT using sendfile in the second one
which completed with c=1 but failed on c=40).

Is tcnative trunk stable enough to just try using that?

The commit comment says NIO fixes... nothing more specific? ;)

Thanks,
-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-22 Thread Christopher Schultz
Rainer,

On 5/22/2009 8:55 AM, Rainer Jung wrote:
 You could run a JSP including a call to System.gc();

Right. The JVM is out of file descriptors. I cannot deploy a new JSP
onto the server without restarting it. ;) I mentioned this in my initial
post.

 First of all this really looks bad and interesting. So we would benefit
 from understanding what's going on.

Filip indicates that there is a resource leak fixed in the NIO
connector. I'll have to build that and upgrade my TC installation to see
if it fixes this problem.

 Are there chances you can try again and see, whether those additional
 FDs for the static content go way if ou trigger a major GC, either via
 such a JSP, or via the JConsole or any other way you like?

I've tried everything I can think of. jconsole couldn't connect (still
out of file descriptors!) which I also already mentioned. I gave up a
few minutes ago and stopped the server in preparation for upgrading the
connectors themselves.

I'll reply once I have some more information.

Thanks,
-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-22 Thread Filip Hanik - Dev Lists

Christopher Schultz wrote:


Filip,

On 5/21/2009 12:34 PM, Filip Hanik - Dev Lists wrote:
  

hi Christopher, generally, ulimit -n 1024 is too low for any kind of web
server.



Fair enough, but I'm not putting an unreasonable load on my server with
ulimit -n 1024 and 40 concurrent connections, am I?

  

And there was also a file descriptor leak in the NIO connector, fixed in
http://svn.apache.org/viewvc?rev=734454view=rev

this is when Tomcat NIO serves up static content.



Yup, I'm only serving up static content (using sendfile in the first
test that failed to complete, and NOT using sendfile in the second one
which completed with c=1 but failed on c=40).

Is tcnative trunk stable enough to just try using that?

The commit comment says NIO fixes... nothing more specific? ;)
  


if you look at the commit, there are changelog changes in there. one of 
them being a FD leak with static content.

It simply never called close on the FD when it was done
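
Roughly, the missing pattern is just a close in a finally block; a generic
sketch of sending a static file over a channel (not the actual Tomcat source;
class and method names are made up):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class StaticFileSender {
    // Copy a file to the target channel and always release the descriptor,
    // even if transferTo() throws part-way through.
    static void send(File file, WritableByteChannel out) throws IOException {
        FileInputStream fis = new FileInputStream(file);
        try {
            FileChannel in = fis.getChannel();
            long pos = 0;
            long size = in.size();
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, out);
            }
        } finally {
            fis.close();  // closing the stream also closes its channel/FD
        }
    }
}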

Filip


Thanks,
- -chris
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkoWofMACgkQ9CaO5/Lv0PBGnACgjDwmquIeUv1FfeCYKQHEx2HZ
s9AAn2UYoLK2J3exkh1Q0h8Wt8A0rJSa
=8tQI
-END PGP SIGNATURE-

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


  



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-22 Thread Rainer Jung
On 22.05.2009 15:46, Christopher Schultz wrote:
 Rainer,
 
 On 5/22/2009 8:55 AM, Rainer Jung wrote:
 You could run a JSP including a call to System.gc();
 
 Right. The JVM is out of file descriptors. I cannot deploy a new JSP
 onto the server without restarting it. ;) I mentioned this in my initial
 post.

Yes, of course. In that case you are pretty much restricted to the
outside tools you already used. So it would make sense to deploy and
precompile/call the new JSP once before starting the next run, so you at
least have a chance of calling it. Of course the connection that the http
connector needed for the request also needs an FD, but I had the
impression from your last post, that a few of those FDs are being closed
after some time. Maybe enough for calling the JSP.

Regards,

Rainer

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-22 Thread Christopher Schultz
Filip,

On 5/22/2009 10:14 AM, Filip Hanik - Dev Lists wrote:
 if you look at the commit, there are changelog changes in there. one of
 them being a FD leak with static content.

Gotcha. I was just looking at the commit comment itself.

 It simply never called close on the FD when it was done

Oops. Does this affect the connector regardless of the setting for
sendFile?

Also, I asked if using the trunk tcnative was the solution... of course
it's not: the bug was in the Java code. Should I get the tomcat-native
package and compile /that/ Java code, or should I get Tomcat 6.0.x trunk
and compile /that/?

I think I'd prefer to change as little as possible to resume my tests,
so if the connector itself is completely contained within the
tomcat-native package, and there won't be any incompatibilities with
6.0.18, I'd prefer to do that. In that case, do I just replace
lib/tomcat-coyote.jar after I build?

Thanks,
-chris

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-22 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Rainer,

On 5/22/2009 10:32 AM, Rainer Jung wrote:
 On 22.05.2009 15:46, Christopher Schultz wrote:
 Rainer,

 On 5/22/2009 8:55 AM, Rainer Jung wrote:
 You could run a JSP including a call to System.gc();
 Right. The JVM is out of file descriptors. I cannot deploy a new JSP
 onto the server without restarting it. ;) I mentioned this in my initial
 post.
 
 Yes, of course. In that case you are pretty much restricted to the
 outside tools you already used. So it would make sense to deploy and
 precompile/call the new JSP once before starting the next run, so you at
 least have a chance of calling it.

Yeah, I had one of those deployed already. Unfortunately, one of the
things that JSP does is call InetAddress.getLocalHost() which fails
because ... I was out of file descriptors. It's amazing how much stuff
stops working once you hit that wall and nothing will be released ;)

 Of course the connection that the http
 connector needed for the request also needs an FD, but I had the
 impression from your last post, that a few of those FDs are being closed
 after some time. Maybe enough for calling the JSP.

Yeah, I could make an HTTP connection to the server, but I think that's
because the connector already had an fd allocated for the listen.
Actually, I'm not sure why some things worked (like making new requests)
but others did not (such as doing a localhost lookup).

- -chris
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkoW+bIACgkQ9CaO5/Lv0PB2UQCeNTDx5CBnaU3F44sdchJZ29gZ
OUEAoJVlwqaRoig1HLi4fBra2QL5JyhU
=9czx
-END PGP SIGNATURE-

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-22 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Rainer,

On 5/22/2009 8:48 AM, Rainer Jung wrote:
 The file descriptor thing is totally independent. I hijacked the thread :)

Yeah, I know. Filip independently mentioned that there's a fd leak in
the NIO implementation included in 6.0.18. I have yet to test the fix,
but I'm hopeful.

Thanks,
- -chris
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkoW+fsACgkQ9CaO5/Lv0PBGBQCgnlcDb+h+LZ1/qggGuyLq+P77
gvwAoKy3tPXilE+4poxrhl9wRLE7XQaK
=PmtD
-END PGP SIGNATURE-

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



NIO Connector: Too many open files

2009-05-21 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

All,

I've been testing the performance of various Tomcat configurations
against Apache httpd and my serious tests are not completing for the NIO
connector because the server is running out of files:

 May 20, 2009 2:35:55 AM org.apache.tomcat.util.net.NioEndpoint$Acceptor run
 SEVERE: Socket accept failed
 java.io.IOException: Too many open files
 at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
 at 
 sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
 at 
 org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
 at java.lang.Thread.run(Thread.java:619)

A bit of background for those who haven't followed the Apache httpd vs
Tomcat static content performance thread:

I'm running Tomcat 6.0.18 using tcnative 1.1.16. Apache httpd is not
being used for this test, so the client is contacting Tomcat directly
from localhost.

$ uname -a
Linux chadis 2.6.14-gentoo-r5 #2 PREEMPT Sat Dec 17 16:30:55 EST 2005
i686 AMD Athlon(tm) XP 1700+ AuthenticAMD GNU/Linux

$ java -version
java version 1.6.0_13
Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
Java HotSpot(TM) Client VM (build 11.3-b02, mixed mode, sharing)

$ ulimit -n (fds per process limit)
1024

1GiB RAM on the machine, here are the heap details /after/  the tests
are run:

$ jmap -heap 1430
Attaching to process ID 1430, please wait...
Debugger attached successfully.
Client compiler detected.
JVM version is 11.3-b02

using thread-local object allocation.
Mark Sweep Compact GC

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize  = 67108864 (64.0MB)
   NewSize  = 1048576 (1.0MB)
   MaxNewSize   = 4294901760 (4095.9375MB)
   OldSize  = 4194304 (4.0MB)
   NewRatio = 12
   SurvivorRatio= 8
   PermSize = 12582912 (12.0MB)
   MaxPermSize  = 67108864 (64.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 2228224 (2.125MB)
   used = 612888 (0.5844955444335938MB)
   free = 1615336 (1.5405044555664062MB)
   27.505672679227942% used
Eden Space:
   capacity = 2031616 (1.9375MB)
   used = 612888 (0.5844955444335938MB)
   free = 1418728 (1.3530044555664062MB)
   30.167511970766128% used
- From Space:
   capacity = 196608 (0.1875MB)
   used = 0 (0.0MB)
   free = 196608 (0.1875MB)
   0.0% used
To Space:
   capacity = 196608 (0.1875MB)
   used = 0 (0.0MB)
   free = 196608 (0.1875MB)
   0.0% used
tenured generation:
   capacity = 28311552 (27.0MB)
   used = 20464784 (19.516738891601562MB)
   free = 7846768 (7.4832611083984375MB)
   72.28421811704283% used
Perm Generation:
   capacity = 12582912 (12.0MB)
   used = 8834304 (8.425048828125MB)
   free = 3748608 (3.574951171875MB)
   70.208740234375% used

Here are my Connector configurations:

<!-- Regular non-APR Coyote Connector -->
<Connector port="8001"
           protocol="org.apache.coyote.http11.Http11Protocol"
           connectionTimeout="20000"
           server="Coyote1.1non-APR"
           />

<!-- APR Connector -->
<Connector port="8002"
           protocol="org.apache.coyote.http11.Http11AprProtocol"
           useSendfile="true"
           connectionTimeout="20000"
           server="Coyote1.1APR"
           />

<!-- APR without sendfile -->
<Connector port="8003"
           protocol="org.apache.coyote.http11.Http11AprProtocol"
           useSendfile="false"
           connectionTimeout="20000"
           server="Coyote1.1APRw/osendfile"
           />

<!-- NIO Connector -->
<Connector port="8004"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           useSendfile="true"
           connectionTimeout="20000"
           server="Coyote1.1NIO"
           />

<!-- NIO without sendfile -->
<Connector port="8005"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           useSendfile="false"
           connectionTimeout="20000"
           server="Coyote1.1NIOw/osendfile"
           />

All connectors are configured at once, so I should have a maximum of 40
threads in each pool. The command I ran to benchmark each connector was
(for example):

/usr/sbin/ab -c 40 -t 480 -n 1000 http://localhost:8004/4kiB.bin

This runs ApacheBench for 8 minutes with 40 client threads requesting a
4k file over and over again. This particular test succeeded, but there
are 14 more tests, each using a file twice the size of the previous
test. After the 128k file test, every single test fails after that.

The last test I ran (with only 1 thread instead of 40), the NIO
connector died in the same way, but the NIO connector without sendfile
enabled appeared to work properly. This time (40 threads), neither of
the connectors worked properly, the NIO connector failing to complete
any tests after the 128kb test and the NIO-sendfile

Re: NIO Connector: Too many open files

2009-05-21 Thread Rainer Jung
On 21.05.2009 17:55, Christopher Schultz wrote:
 All,
 
 I've been testing the performance of various Tomcat configurations
 against Apache httpd and my serious tests are not completing for the NIO
 connector because the server is running out of files:
 
 May 20, 2009 2:35:55 AM org.apache.tomcat.util.net.NioEndpoint$Acceptor run
 SEVERE: Socket accept failed
 java.io.IOException: Too many open files
 at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
 at 
 sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
 at 
 org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
 at java.lang.Thread.run(Thread.java:619)
 
 A bit of background for those who haven't followed the Apache httpd vs
 Tomcat static content performance thread:
 
 I'm running Tomcat 6.0.18 using tcnative 1.1.16. Apache httpd is not
 being used for this test, so the client is contacting Tomcat directly
 from localhost.
 
 $ uname -a
 Linux chadis 2.6.14-gentoo-r5 #2 PREEMPT Sat Dec 17 16:30:55 EST 2005
 i686 AMD Athlon(tm) XP 1700+ AuthenticAMD GNU/Linux
 
 $ java -version
 java version 1.6.0_13
 Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
 Java HotSpot(TM) Client VM (build 11.3-b02, mixed mode, sharing)
 
 $ ulimit -n (fds per process limit)
 1024
 
 1GiB RAM on the machine, here are the heap details /after/  the tests
 are run:
 
 $ jmap -heap 1430
 Attaching to process ID 1430, please wait...
 Debugger attached successfully.
 Client compiler detected.
 JVM version is 11.3-b02
 
 using thread-local object allocation.
 Mark Sweep Compact GC
 
 Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize  = 67108864 (64.0MB)
NewSize  = 1048576 (1.0MB)
MaxNewSize   = 4294901760 (4095.9375MB)
OldSize  = 4194304 (4.0MB)
NewRatio = 12
SurvivorRatio= 8
PermSize = 12582912 (12.0MB)
MaxPermSize  = 67108864 (64.0MB)
 
 Heap Usage:
 New Generation (Eden + 1 Survivor Space):
capacity = 2228224 (2.125MB)
used = 612888 (0.5844955444335938MB)
free = 1615336 (1.5405044555664062MB)
27.505672679227942% used
 Eden Space:
capacity = 2031616 (1.9375MB)
used = 612888 (0.5844955444335938MB)
free = 1418728 (1.3530044555664062MB)
30.167511970766128% used
 - From Space:
capacity = 196608 (0.1875MB)
used = 0 (0.0MB)
free = 196608 (0.1875MB)
0.0% used
 To Space:
capacity = 196608 (0.1875MB)
used = 0 (0.0MB)
free = 196608 (0.1875MB)
0.0% used
 tenured generation:
capacity = 28311552 (27.0MB)
used = 20464784 (19.516738891601562MB)
free = 7846768 (7.4832611083984375MB)
72.28421811704283% used
 Perm Generation:
capacity = 12582912 (12.0MB)
used = 8834304 (8.425048828125MB)
free = 3748608 (3.574951171875MB)
70.208740234375% used
 
 Here are my Connector configurations:
 
 <!-- Regular non-APR Coyote Connector -->
 <Connector port="8001"
            protocol="org.apache.coyote.http11.Http11Protocol"
            connectionTimeout="20000"
            server="Coyote1.1non-APR"
            />

 <!-- APR Connector -->
 <Connector port="8002"
            protocol="org.apache.coyote.http11.Http11AprProtocol"
            useSendfile="true"
            connectionTimeout="20000"
            server="Coyote1.1APR"
            />

 <!-- APR without sendfile -->
 <Connector port="8003"
            protocol="org.apache.coyote.http11.Http11AprProtocol"
            useSendfile="false"
            connectionTimeout="20000"
            server="Coyote1.1APRw/osendfile"
            />

 <!-- NIO Connector -->
 <Connector port="8004"
            protocol="org.apache.coyote.http11.Http11NioProtocol"
            useSendfile="true"
            connectionTimeout="20000"
            server="Coyote1.1NIO"
            />

 <!-- NIO without sendfile -->
 <Connector port="8005"
            protocol="org.apache.coyote.http11.Http11NioProtocol"
            useSendfile="false"
            connectionTimeout="20000"
            server="Coyote1.1NIOw/osendfile"
            />
 
 All connectors are configured at once, so I should have a maximum of 40
 threads in each pool. The command I ran to benchmark each connector was
 (for example):
 
 /usr/sbin/ab -c 40 -t 480 -n 1000 http://localhost:8004/4kiB.bin
 
 This runs ApacheBench for 8 minutes with 40 client threads requesting a
 4k file over and over again. This particular test succeeded, but there
 are 14 more tests, each using a file twice the size of the previous
 test. After the 128k file test, every single test fails after that.
 
 The last test I ran (with only 1 thread instead of 40), the NIO
 connector died in the same way, but the NIO connector without sendfile
 enabled appeared to work properly. This time (40 threads), neither

Re: NIO Connector: Too many open files

2009-05-21 Thread Rainer Jung
2 remarks about all your stress testing efforts:

A) TIME_WAIT

When not doing HTTP Keep-Alive, under high load the size of the TCP hash
table and the effectiveness of the system to look up TCP connections
can limit the throughput you can reach. More precisely, depending on the
exact way of connection shutdown, you get TIME_WAIT states for the
finished connections (without HTTP Keep-Alive it could be one such
connection per request). Most systems get slow, once the number of those
connections reaches something around 30000.

E.g. if you are doing 2000 requests per second without HTTP Keep Alive
and the combination of web server and stress test tool leads to
TIME_WAITs, after 15 seconds your table size might reach a critical size.

The amount of time a system waits before it destroys TIME_WAIT
connections varies. Solaris 4 minutes, but tunable down to 5 seconds,
Linux 1 minute (I think) and still not tunable (a free pizza to the
first one who can show me how to tune the time interval it takes for a
TCP connection to be moved out of TIME_WAIT), on Windows I think also 1
minute but tunable.

Not using HTTP Keep Alive will very likely limit quickly the achievable
throughput when going up in concurrency.
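
One way to watch this during a run is to count socket states straight from
/proc/net/tcp (Linux-specific; state 06 is TIME_WAIT, 08 is CLOSE_WAIT).
A small sketch, class name made up:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

// Counts sockets in TIME_WAIT (state 06) and CLOSE_WAIT (state 08) by
// parsing the Linux /proc/net/tcp and /proc/net/tcp6 tables.
public class TcpStateCount {
    public static void main(String[] args) throws IOException {
        int timeWait = 0;
        int closeWait = 0;
        for (String table : new String[] { "/proc/net/tcp", "/proc/net/tcp6" }) {
            if (!new File(table).exists()) {
                continue;  // e.g. no IPv6 table on this box
            }
            BufferedReader in = new BufferedReader(new FileReader(table));
            try {
                in.readLine();  // skip the header line
                String line;
                while ((line = in.readLine()) != null) {
                    String[] f = line.trim().split("\\s+");
                    if (f.length > 3 && "06".equals(f[3])) {
                        timeWait++;
                    } else if (f.length > 3 && "08".equals(f[3])) {
                        closeWait++;
                    }
                }
            } finally {
                in.close();
            }
        }
        System.out.println("TIME_WAIT=" + timeWait + "  CLOSE_WAIT=" + closeWait);
    }
}

(netstat -tan piped through grep -c TIME_WAIT gives the same number from the
shell.)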

B) Resource usage

You are doing a first interesting analysis, namely the base throughput
you can reach using concurrency one. Now the throughput we can reach
using a given setup always depends on the first bottleneck we hit.

For big files it might be the network bandwidth. For small files it
might be CPU or maybe I/Os per second we can do on the interface.

So it is also interesting to compare the resource usage. Some resources
are harder to measure (like memory), but the one easy to measure
resource is CPU. So you get some more info, when also measuring CPU per
request. Think of it as maximum speed and gas efficiency.
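
One crude in-JVM way to get a per-request CPU number is a servlet filter
around the request (it only sees CPU burned on the request's own worker
thread, so measuring the whole process from the OS side is usually the better
figure); a sketch, class name made up:

import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Logs the CPU nanoseconds the current worker thread spends on each request.
public class CpuPerRequestFilter implements Filter {
    private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();

    public void init(FilterConfig config) { }
    public void destroy() { }

    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        long start = threads.isCurrentThreadCpuTimeSupported()
                ? threads.getCurrentThreadCpuTime() : -1;
        try {
            chain.doFilter(req, resp);
        } finally {
            if (start >= 0) {
                long cpuNanos = threads.getCurrentThreadCpuTime() - start;
                System.out.println("request cpu = " + cpuNanos + " ns");
            }
        }
    }
}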

Regards,

Rainer

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: NIO Connector: Too many open files

2009-05-21 Thread Filip Hanik - Dev Lists
hi Christopher, generally, ulimit -n 1024 is too low for any kind of web 
server.

And there was also a file descriptor leak in the NIO connector, fixed in
http://svn.apache.org/viewvc?rev=734454&view=rev

this is when Tomcat NIO serves up static content.

Filip


Christopher Schultz wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

All,

I've been testing the performance of various Tomcat configurations
against Apache httpd and my serious tests are not completing for the NIO
connector because the server is running out of files:

  

May 20, 2009 2:35:55 AM org.apache.tomcat.util.net.NioEndpoint$Acceptor run
SEVERE: Socket accept failed
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
at 
org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1198)
at java.lang.Thread.run(Thread.java:619)



A bit of background for those who haven't followed the Apache httpd vs
Tomcat static content performance thread:

I'm running Tomcat 6.0.18 using tcnative 1.1.16. Apache httpd is not
being used for this test, so the client is contacting Tomcat directly
from localhost.

$ uname -a
Linux chadis 2.6.14-gentoo-r5 #2 PREEMPT Sat Dec 17 16:30:55 EST 2005
i686 AMD Athlon(tm) XP 1700+ AuthenticAMD GNU/Linux

$ java -version
java version 1.6.0_13
Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
Java HotSpot(TM) Client VM (build 11.3-b02, mixed mode, sharing)

$ ulimit -n (fds per process limit)
1024

1GiB RAM on the machine, here are the heap details /after/  the tests
are run:

$ jmap -heap 1430
Attaching to process ID 1430, please wait...
Debugger attached successfully.
Client compiler detected.
JVM version is 11.3-b02

using thread-local object allocation.
Mark Sweep Compact GC

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize  = 67108864 (64.0MB)
   NewSize  = 1048576 (1.0MB)
   MaxNewSize   = 4294901760 (4095.9375MB)
   OldSize  = 4194304 (4.0MB)
   NewRatio = 12
   SurvivorRatio= 8
   PermSize = 12582912 (12.0MB)
   MaxPermSize  = 67108864 (64.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 2228224 (2.125MB)
   used = 612888 (0.5844955444335938MB)
   free = 1615336 (1.5405044555664062MB)
   27.505672679227942% used
Eden Space:
   capacity = 2031616 (1.9375MB)
   used = 612888 (0.5844955444335938MB)
   free = 1418728 (1.3530044555664062MB)
   30.167511970766128% used
- From Space:
   capacity = 196608 (0.1875MB)
   used = 0 (0.0MB)
   free = 196608 (0.1875MB)
   0.0% used
To Space:
   capacity = 196608 (0.1875MB)
   used = 0 (0.0MB)
   free = 196608 (0.1875MB)
   0.0% used
tenured generation:
   capacity = 28311552 (27.0MB)
   used = 20464784 (19.516738891601562MB)
   free = 7846768 (7.4832611083984375MB)
   72.28421811704283% used
Perm Generation:
   capacity = 12582912 (12.0MB)
   used = 8834304 (8.425048828125MB)
   free = 3748608 (3.574951171875MB)
   70.208740234375% used

Here are my Connector configurations:

<!-- Regular non-APR Coyote Connector -->
<Connector port="8001"
           protocol="org.apache.coyote.http11.Http11Protocol"
           connectionTimeout="20000"
           server="Coyote1.1non-APR"
           />

<!-- APR Connector -->
<Connector port="8002"
           protocol="org.apache.coyote.http11.Http11AprProtocol"
           useSendfile="true"
           connectionTimeout="20000"
           server="Coyote1.1APR"
           />

<!-- APR without sendfile -->
<Connector port="8003"
           protocol="org.apache.coyote.http11.Http11AprProtocol"
           useSendfile="false"
           connectionTimeout="20000"
           server="Coyote1.1APRw/osendfile"
           />

<!-- NIO Connector -->
<Connector port="8004"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           useSendfile="true"
           connectionTimeout="20000"
           server="Coyote1.1NIO"
           />

<!-- NIO without sendfile -->
<Connector port="8005"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           useSendfile="false"
           connectionTimeout="20000"
           server="Coyote1.1NIOw/osendfile"
           />

All connectors are configured at once, so I should have a maximum of 40
threads in each pool. The command I ran to benchmark each connector was
(for example):

/usr/sbin/ab -c 40 -t 480 -n 1000 http://localhost:8004/4kiB.bin

This runs ApacheBench for 8 minutes with 40 client threads requesting a
4k file over and over again. This particular test succeeded, but there
are 14 more tests, each using a file twice the size of the previous
test. After the 128k file test, every single test fails after that.

The last test I ran (with only 1

Re: Too many open files

2008-10-10 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Mohit,

Mohit Anchlia wrote:
 So I tried all the options. I also changed the code to use connection
 pooling with only 2 connections but still there are bunch of
 CLOSE_WAITS. As soon as I stop tomcat all of them go away. I am not
 able to figure out why there are so many CLOSE_WAITS hanging around
 when I just have 2 connections in my pool.

These are mostly HTTP connections to localhost, right? Maybe you are
using connection timeout options that are too long for your quick
transactions. CLOSE_WAIT is a normal TCP state, but if these pile up on
top of each other because of long (minutes?) timeouts then you can
easily run out of file handles (socket ~= file handle, which is why you
are getting the too many open files error).

Consider setting some of these timeout options on HttpClient (if such
options exist) or researching the defaults for these options. Also, make
sure you are cleaning up after your connections appropriately (properly
catching IOExceptions, closing connections in finally blocks, etc.). If
connections are closing unexpectedly, they may be sitting in CLOSE_WAIT
longer than necessary.
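
For example, assuming Commons HttpClient 3.x is the client in use, the pattern
looks roughly like this (the URL and the timeout values are just placeholders):

import java.io.IOException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.GetMethod;

public class PooledClientExample {
    public static void main(String[] args) throws IOException {
        MultiThreadedHttpConnectionManager mgr =
                new MultiThreadedHttpConnectionManager();
        mgr.getParams().setConnectionTimeout(5000);  // ms to establish a connection
        mgr.getParams().setSoTimeout(10000);         // ms to wait for data
        HttpClient client = new HttpClient(mgr);

        GetMethod get = new GetMethod("http://localhost:8080/some/page");
        try {
            client.executeMethod(get);
            get.getResponseBodyAsString();  // drain the body so the connection
                                            // can be reused or closed cleanly
        } finally {
            get.releaseConnection();        // always return it to the pool
        }

        mgr.closeIdleConnections(30000);    // reap connections idle for 30s
    }
}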

Hope that helps,
- -chris
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkjvqzkACgkQ9CaO5/Lv0PA8dgCdHtFiD0gquai4yEBXOKdZFOrm
bsEAoKOXMxo+u5I1EW2MQPuWvLJGhEYe
=m2kr
-END PGP SIGNATURE-

-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Too many open files

2008-10-10 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Johnny,

Johnny Kewl wrote:
 PS: I see you have a Apache in front...
 Try this... set up the 8080 connector if you haven't already got it working
 in TC... and go into TC direct... then check file handles
 If no problem, you know its Apache or the JK connector...
 ... will at least point you in right direction...

Oh, I hadn't even considered that. Are you using Apache httpd +
mod_proxy_http? If so, then the connections might be the ones from
Apache httpd to Tomcat, as Johnny suggests. In that case, you'll want to
inspect your httpd configuration for socket timeout configuration options.

Also, you might be able to remove Apache httpd from the mix entirely.
Are you sure you need to front Tomcat with Apache?

- -chris

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkjvq7YACgkQ9CaO5/Lv0PAcbwCfTs6rmtPRXkGzqxQe3WvOMJiJ
PkkAnRHrOl6QuoqOipqcyoCw3eZbcUjh
=qWJz
-END PGP SIGNATURE-

-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Too many open files

2008-10-10 Thread Mohit Anchlia
I am using tomcat to apache (for load balancing)

On Fri, Oct 10, 2008 at 12:23 PM, Christopher Schultz
[EMAIL PROTECTED] wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Johnny,

 Johnny Kewl wrote:
 PS: I see you have a Apache in front...
 Try this... set up the 8080 connector if you haven't already got it working
 in TC... and go into TC direct... then check file handles
 If no problem, you know its Apache or the JK connector...
 ... will at least point you in right direction...

 Oh, I hadn't even considered that. Are you using Apache httpd +
 mod_proxy_http? If so, then the connections might be the ones from
 Apache httpd to Tomcat, as Johnny suggests. In that case, you'll want to
 inspect your httpd configuration for socket timeout configuration options.

 Also, you might be able to remove Apache httpd from the mix entirely.
 Are you sure you need to front Tomcat with Apache?

 - -chris

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.9 (MingW32)
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

 iEYEARECAAYFAkjvq7YACgkQ9CaO5/Lv0PAcbwCfTs6rmtPRXkGzqxQe3WvOMJiJ
 PkkAnRHrOl6QuoqOipqcyoCw3eZbcUjh
 =qWJz
 -END PGP SIGNATURE-

 -
 To start a new topic, e-mail: users@tomcat.apache.org
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Too many open files

2008-10-08 Thread Konstantin Kolinko
2008/10/8 Mohit Anchlia [EMAIL PROTECTED]:
 I can see you can't wait to hear the debate. Anyhow, I am using
 HttpClient from apache commons and I do have .getReleaseConnection().


See comment #22 here:
https://issues.apache.org/bugzilla/show_bug.cgi?id=28727#c22
and the message thread that it refers, [1]:

[1] http://www.mail-archive.com/[EMAIL PROTECTED]/msg04338.html

I do not know, if that applies to your case. That message thread ([1]) is
of December 2003, thus I do not know whether it is still applicable.

Just my 0.02 EUR.

Best regards,
Konstantin Kolinko

-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Too many open files

2008-10-08 Thread Mohit Anchlia
So I tried all the options. I also changed the code to use connection
pooling with only 2 connections but still there are bunch of
CLOSE_WAITS. As soon as I stop tomcat all of them go away. I am not
able to figure out why there are so many CLOSE_WAITS hanging around
when I just have 2 connections in my pool.

On Wed, Oct 8, 2008 at 6:14 AM, Konstantin Kolinko
[EMAIL PROTECTED] wrote:
 2008/10/8 Mohit Anchlia [EMAIL PROTECTED]:
 I can see you can't wait to hear the debate. Anyhow, I am using
 HttpClient from apache commons and I do have .getReleaseConnection().


 See comment #22 here:
 https://issues.apache.org/bugzilla/show_bug.cgi?id=28727#c22
 and the message thread that it refers, [1]:

 [1] http://www.mail-archive.com/[EMAIL PROTECTED]/msg04338.html

 I do not know, if that applies to your case. That message thread ([1]) is
 of December 2003, thus I do not know whether it is still applicable.

 Just my 0.02 EUR.

 Best regards,
 Konstantin Kolinko

 -
 To start a new topic, e-mail: users@tomcat.apache.org
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]



-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Too many open files

2008-10-08 Thread Serge Fonville
I'm still fairly new to Tomcat, but I hope I am able to help a little. On
one end, any connection object should be nulled so the garbage collector can
pick it up.
On the other end, in server.xml define an executor and refer it in the
connector, that way, you can specify the idle time and the maximum amount of
threads you want.

Hope this helps...

Serge

On Wed, Oct 8, 2008 at 11:24 PM, Mohit Anchlia [EMAIL PROTECTED]wrote:

 So I tried all the options. I also changed the code to use connection
 pooling with only 2 connections but still there are bunch of
 CLOSE_WAITS. As soon as I stop tomcat all of them go away. I am not
 able to figure out why there are so many CLOSE_WAITS hanging around
 when I just have 2 connections in my pool.

 On Wed, Oct 8, 2008 at 6:14 AM, Konstantin Kolinko
 [EMAIL PROTECTED] wrote:
  2008/10/8 Mohit Anchlia [EMAIL PROTECTED]:
  I can see you can't wait to hear the debate. Anyhow, I am using
  HttpClient from apache commons and I do have .getReleaseConnection().
 
 
  See comment #22 here:
  https://issues.apache.org/bugzilla/show_bug.cgi?id=28727#c22
  and the message thread that it refers, [1]:
 
  [1]
 http://www.mail-archive.com/[EMAIL PROTECTED]/msg04338.html
 
  I do not know, if that applies to your case. That message thread ([1]) is
  of December 2003, thus I do not know whether it is still applicable.
 
  Just my 0.02 EUR.
 
  Best regards,
  Konstantin Kolinko
 
  -
  To start a new topic, e-mail: users@tomcat.apache.org
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 

 -
 To start a new topic, e-mail: users@tomcat.apache.org
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




Re: Too many open files

2008-10-08 Thread Mohit Anchlia
I don't know how that will help in lowering down the CLOSE_WAITS from
tomcat to apache. I'll look at it though.

On Wed, Oct 8, 2008 at 3:07 PM, Serge Fonville [EMAIL PROTECTED] wrote:
 I'm still fairly new to Tomcat, but I hope I am able to help a little. On
 one end, any connection object should be nulled so the garbage collector can
 pick it up.
 On the other end, in server.xml define an executor and refer it in the
 connector, that way, you can specify the idle time and the maximum amount of
 threads you want.

 Hope this helps...

 Serge

 On Wed, Oct 8, 2008 at 11:24 PM, Mohit Anchlia [EMAIL PROTECTED]wrote:

 So I tried all the options. I also changed the code to use connection
 pooling with only 2 connections but still there are bunch of
 CLOSE_WAITS. As soon as I stop tomcat all of them go away. I am not
 able to figure out why there are so many CLOSE_WAITS hanging around
 when I just have 2 connections in my pool.

 On Wed, Oct 8, 2008 at 6:14 AM, Konstantin Kolinko
 [EMAIL PROTECTED] wrote:
  2008/10/8 Mohit Anchlia [EMAIL PROTECTED]:
  I can see you can't wait to hear the debate. Anyhow, I am using
  HttpClient from apache commons and I do have .getReleaseConnection().
 
 
  See comment #22 here:
  https://issues.apache.org/bugzilla/show_bug.cgi?id=28727#c22
  and the message thread that it refers, [1]:
 
  [1]
 http://www.mail-archive.com/[EMAIL PROTECTED]/msg04338.html
 
  I do not know, if that applies to your case. That message thread ([1]) is
  of December 2003, thus I do not know whether it is still applicable.
 
  Just my 0.02 EUR.
 
  Best regards,
  Konstantin Kolinko
 
  -
  To start a new topic, e-mail: users@tomcat.apache.org
  To unsubscribe, e-mail: [EMAIL PROTECTED]
  For additional commands, e-mail: [EMAIL PROTECTED]
 
 

 -
 To start a new topic, e-mail: users@tomcat.apache.org
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]




-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Too many open files

2008-10-07 Thread Mohit Anchlia
Tomcat throws too many open files and when I do lsof I get bunch of:

java14130 root  935u  IPv4 30842592   TCP
localhost:41971-localhost:http (CLOSE_WAIT)
java14130 root  937u  IPv4 30841213   TCP
efeitws3.ptctax.intuit.com:41161-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  938u  IPv4 30841214   TCP
localhost:41162-localhost:http (CLOSE_WAIT)
java14130 root  939u  IPv4 30841220   TCP
localhost:41165-localhost:http (CLOSE_WAIT)
java14130 root  940u  IPv4 30842516   TCP
localhost:41927-localhost:http (CLOSE_WAIT)
java14130 root  941u  IPv4 30841226   TCP
localhost:41168-localhost:http (CLOSE_WAIT)
java14130 root  943u  IPv4 30841899   TCP
efeitws3.ptctax.intuit.com:41566-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  944u  IPv4 30841694   TCP
efeitws3.ptctax.intuit.com:41453-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  945u  IPv4 30841695   TCP
localhost:41454-localhost:http (CLOSE_WAIT)
java14130 root  946u  IPv4 30841900   TCP
localhost:41567-localhost:http (CLOSE_WAIT)
java14130 root  948u  IPv4 30842415   TCP
efeitws3.ptctax.intuit.com:41864-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  949u  IPv4 30842416   TCP
localhost:41865-localhost:http (CLOSE_WAIT)
java14130 root  950u  IPv4 30842419   TCP
localhost:41867-localhost:http (CLOSE_WAIT)
java14130 root  952u  IPv4 30850596   TCP
localhost:42484-localhost:http (CLOSE_WAIT)
java14130 root  953u  IPv4 30842760   TCP
localhost:42058-localhost:http (CLOSE_WAIT)
java14130 root  954u  IPv4 30842596   TCP
localhost:41974-localhost:http (CLOSE_WAIT)
java14130 root  956u  IPv4 30842093   TCP
localhost:41676-localhost:http (CLOSE_WAIT)
java14130 root  957u  IPv4 30842195   TCP
efeitws3.ptctax.intuit.com:41737-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  958u  IPv4 30841730   TCP
localhost:41467-localhost:http (CLOSE_WAIT)
java14130 root  959u  IPv4 30841737   TCP
localhost:41472-localhost:http (CLOSE_WAIT)
java14130 root  960u  IPv4 30842196   TCP
localhost:41738-localhost:http (CLOSE_WAIT)
java14130 root  961u  IPv4 30842528   TCP
localhost:41933-localhost:http (CLOSE_WAIT)
java14130 root  962u  IPv4 30842363   TCP
localhost:41836-localhost:http (CLOSE_WAIT)
java14130 root  964u  IPv4 30842365   TCP
efeitws3.ptctax.intuit.com:41837-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  965u  IPv4 30842366   TCP
localhost:41838-localhost:http (CLOSE_WAIT)
java14130 root  966u  IPv4 30842367   TCP
localhost:41839-localhost:http (CLOSE_WAIT)
java14130 root  967u  IPv4 30842371   TCP
localhost:41841-localhost:http (CLOSE_WAIT)
java14130 root  968u  IPv4 30842465   TCP
localhost:41895-localhost:http (CLOSE_WAIT)
java14130 root  969u  IPv4 30848501   TCP
localhost:42415-localhost:http (CLOSE_WAIT)
java14130 root  970u  IPv4 30842533   TCP
localhost:41936-localhost:http (CLOSE_WAIT)
java14130 root  971u  IPv4 30842468   TCP
localhost:41898-localhost:http (CLOSE_WAIT)
java14130 root  972u  IPv4 30842534   TCP
localhost:41937-localhost:http (CLOSE_WAIT)
java14130 root  973u  IPv4 30842765   TCP
localhost:42062-localhost:http (CLOSE_WAIT)
java14130 root  974u  IPv4 30842472   TCP
localhost:41901-localhost:http (CLOSE_WAIT)
java14130 root  975u  IPv4 30842122   TCP
localhost:41694-localhost:http (CLOSE_WAIT)
java14130 root  976u  IPv4 30842123   TCP
localhost:41695-localhost:http (CLOSE_WAIT)
java14130 root  977u  IPv4 30843217   TCP
localhost:42188-localhost:http (CLOSE_WAIT)
java14130 root  978u  IPv4 30842125   TCP
localhost:41696-localhost:http (CLOSE_WAIT)
java14130 root  979u  IPv4 30842126   TCP
efeitws3.ptctax.intuit.com:41697-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  981u  IPv4 30842128   TCP
efeitws3.ptctax.intuit.com:41698-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  982u  IPv4 30842129   TCP
localhost:41699-localhost:http (CLOSE_WAIT)
java14130 root  983u  IPv4 30888558   TCP
localhost:43218-localhost:http (CLOSE_WAIT)
java14130 root  984u  IPv4 30842617   TCP
localhost:41986-localhost:http (CLOSE_WAIT)
java14130 root  985u  IPv4 30842618   TCP
efeitws3.ptctax.intuit.com:41987-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  986u  IPv4 30844067   TCP
efeitws3.ptctax.intuit.com:42260-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  987u  IPv4 30850599   TCP
efeitws3.ptctax.intuit.com:42486-10.10.81.94

Re: Too many open files

2008-10-07 Thread Johnny Kewl


- Original Message - 
From: Mohit Anchlia [EMAIL PROTECTED]

To: Tomcat Users List users@tomcat.apache.org
Sent: Tuesday, October 07, 2008 11:44 PM
Subject: Too many open files



Tomcat throws too many open files and when I do lsof I get bunch of:

java14130 root  935u  IPv4 30842592   TCP
localhost:41971-localhost:http (CLOSE_WAIT)
java14130 root  937u  IPv4 30841213   TCP
efeitws3.ptctax.intuit.com:41161-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  938u  IPv4 30841214   TCP
localhost:41162-localhost:http (CLOSE_WAIT)
java14130 root  939u  IPv4 30841220   TCP
localhost:41165-localhost:http (CLOSE_WAIT)
java14130 root  940u  IPv4 30842516   TCP
localhost:41927-localhost:http (CLOSE_WAIT)
java14130 root  941u  IPv4 30841226   TCP
localhost:41168-localhost:http (CLOSE_WAIT)
java14130 root  943u  IPv4 30841899   TCP
efeitws3.ptctax.intuit.com:41566-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  944u  IPv4 30841694   TCP
efeitws3.ptctax.intuit.com:41453-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  945u  IPv4 30841695   TCP
localhost:41454-localhost:http (CLOSE_WAIT)
java14130 root  946u  IPv4 30841900   TCP
localhost:41567-localhost:http (CLOSE_WAIT)
java14130 root  948u  IPv4 30842415   TCP
efeitws3.ptctax.intuit.com:41864-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  949u  IPv4 30842416   TCP
localhost:41865-localhost:http (CLOSE_WAIT)
java14130 root  950u  IPv4 30842419   TCP
localhost:41867-localhost:http (CLOSE_WAIT)
java14130 root  952u  IPv4 30850596   TCP
localhost:42484-localhost:http (CLOSE_WAIT)
java14130 root  953u  IPv4 30842760   TCP
localhost:42058-localhost:http (CLOSE_WAIT)
java14130 root  954u  IPv4 30842596   TCP
localhost:41974-localhost:http (CLOSE_WAIT)
java14130 root  956u  IPv4 30842093   TCP
localhost:41676-localhost:http (CLOSE_WAIT)
java14130 root  957u  IPv4 30842195   TCP
efeitws3.ptctax.intuit.com:41737-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  958u  IPv4 30841730   TCP
localhost:41467-localhost:http (CLOSE_WAIT)
java14130 root  959u  IPv4 30841737   TCP
localhost:41472-localhost:http (CLOSE_WAIT)
java14130 root  960u  IPv4 30842196   TCP
localhost:41738-localhost:http (CLOSE_WAIT)
java14130 root  961u  IPv4 30842528   TCP
localhost:41933-localhost:http (CLOSE_WAIT)
java14130 root  962u  IPv4 30842363   TCP
localhost:41836-localhost:http (CLOSE_WAIT)
java14130 root  964u  IPv4 30842365   TCP
efeitws3.ptctax.intuit.com:41837-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  965u  IPv4 30842366   TCP
localhost:41838-localhost:http (CLOSE_WAIT)
java14130 root  966u  IPv4 30842367   TCP
localhost:41839-localhost:http (CLOSE_WAIT)
java14130 root  967u  IPv4 30842371   TCP
localhost:41841-localhost:http (CLOSE_WAIT)
java14130 root  968u  IPv4 30842465   TCP
localhost:41895-localhost:http (CLOSE_WAIT)
java14130 root  969u  IPv4 30848501   TCP
localhost:42415-localhost:http (CLOSE_WAIT)
java14130 root  970u  IPv4 30842533   TCP
localhost:41936-localhost:http (CLOSE_WAIT)
java14130 root  971u  IPv4 30842468   TCP
localhost:41898-localhost:http (CLOSE_WAIT)
java14130 root  972u  IPv4 30842534   TCP
localhost:41937-localhost:http (CLOSE_WAIT)
java14130 root  973u  IPv4 30842765   TCP
localhost:42062-localhost:http (CLOSE_WAIT)
java14130 root  974u  IPv4 30842472   TCP
localhost:41901-localhost:http (CLOSE_WAIT)
java14130 root  975u  IPv4 30842122   TCP
localhost:41694-localhost:http (CLOSE_WAIT)
java14130 root  976u  IPv4 30842123   TCP
localhost:41695-localhost:http (CLOSE_WAIT)
java14130 root  977u  IPv4 30843217   TCP
localhost:42188-localhost:http (CLOSE_WAIT)
java14130 root  978u  IPv4 30842125   TCP
localhost:41696-localhost:http (CLOSE_WAIT)
java14130 root  979u  IPv4 30842126   TCP
efeitws3.ptctax.intuit.com:41697-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  981u  IPv4 30842128   TCP
efeitws3.ptctax.intuit.com:41698-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  982u  IPv4 30842129   TCP
localhost:41699-localhost:http (CLOSE_WAIT)
java14130 root  983u  IPv4 30888558   TCP
localhost:43218-localhost:http (CLOSE_WAIT)
java14130 root  984u  IPv4 30842617   TCP
localhost:41986-localhost:http (CLOSE_WAIT)
java14130 root  985u  IPv4 30842618   TCP
efeitws3.ptctax.intuit.com:41987-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  986u  IPv4 30844067

Re: Too many open files

2008-10-07 Thread Mohit Anchlia
I can see you can't wait to hear the debate. Anyhow, I am using
HttpClient from apache commons and I do have .getReleaseConnection().

On Tue, Oct 7, 2008 at 4:56 PM, Johnny Kewl [EMAIL PROTECTED] wrote:

 - Original Message - From: Mohit Anchlia [EMAIL PROTECTED]
 To: Tomcat Users List users@tomcat.apache.org
 Sent: Tuesday, October 07, 2008 11:44 PM
 Subject: Too many open files


 Tomcat throws too many open files and when I do lsof I get bunch of:

 java14130 root  935u  IPv4 30842592   TCP
 localhost:41971-localhost:http (CLOSE_WAIT)
 java14130 root  937u  IPv4 30841213   TCP
 efeitws3.ptctax.intuit.com:41161-10.10.81.94:webcache (CLOSE_WAIT)
 java14130 root  938u  IPv4 30841214   TCP
 localhost:41162-localhost:http (CLOSE_WAIT)
 java14130 root  939u  IPv4 30841220   TCP
 localhost:41165-localhost:http (CLOSE_WAIT)
 java14130 root  940u  IPv4 30842516   TCP
 localhost:41927-localhost:http (CLOSE_WAIT)
 java14130 root  941u  IPv4 30841226   TCP
 localhost:41168-localhost:http (CLOSE_WAIT)
 java14130 root  943u  IPv4 30841899   TCP
 efeitws3.ptctax.intuit.com:41566-10.10.81.94:webcache (CLOSE_WAIT)
 java14130 root  944u  IPv4 30841694   TCP
 efeitws3.ptctax.intuit.com:41453-10.10.81.94:webcache (CLOSE_WAIT)
 java14130 root  945u  IPv4 30841695   TCP
 localhost:41454-localhost:http (CLOSE_WAIT)
 java14130 root  946u  IPv4 30841900   TCP
 localhost:41567-localhost:http (CLOSE_WAIT)
 java14130 root  948u  IPv4 30842415   TCP
 efeitws3.ptctax.intuit.com:41864-10.10.81.94:webcache (CLOSE_WAIT)
 java14130 root  949u  IPv4 30842416   TCP
 localhost:41865-localhost:http (CLOSE_WAIT)
 java14130 root  950u  IPv4 30842419   TCP
 localhost:41867-localhost:http (CLOSE_WAIT)
 java14130 root  952u  IPv4 30850596   TCP
 localhost:42484-localhost:http (CLOSE_WAIT)
 java14130 root  953u  IPv4 30842760   TCP
 localhost:42058-localhost:http (CLOSE_WAIT)
 java14130 root  954u  IPv4 30842596   TCP
 localhost:41974-localhost:http (CLOSE_WAIT)
 java14130 root  956u  IPv4 30842093   TCP
 localhost:41676-localhost:http (CLOSE_WAIT)
 java14130 root  957u  IPv4 30842195   TCP
 efeitws3.ptctax.intuit.com:41737-10.10.81.94:webcache (CLOSE_WAIT)
 java14130 root  958u  IPv4 30841730   TCP
 localhost:41467-localhost:http (CLOSE_WAIT)
 java14130 root  959u  IPv4 30841737   TCP
 localhost:41472-localhost:http (CLOSE_WAIT)
 java14130 root  960u  IPv4 30842196   TCP
 localhost:41738-localhost:http (CLOSE_WAIT)
 java14130 root  961u  IPv4 30842528   TCP
 localhost:41933-localhost:http (CLOSE_WAIT)
 java14130 root  962u  IPv4 30842363   TCP
 localhost:41836-localhost:http (CLOSE_WAIT)
 java14130 root  964u  IPv4 30842365   TCP
 efeitws3.ptctax.intuit.com:41837-10.10.81.94:webcache (CLOSE_WAIT)
 java14130 root  965u  IPv4 30842366   TCP
 localhost:41838-localhost:http (CLOSE_WAIT)
 java14130 root  966u  IPv4 30842367   TCP
 localhost:41839-localhost:http (CLOSE_WAIT)
 java14130 root  967u  IPv4 30842371   TCP
 localhost:41841-localhost:http (CLOSE_WAIT)
 java14130 root  968u  IPv4 30842465   TCP
 localhost:41895-localhost:http (CLOSE_WAIT)
 java14130 root  969u  IPv4 30848501   TCP
 localhost:42415-localhost:http (CLOSE_WAIT)
 java14130 root  970u  IPv4 30842533   TCP
 localhost:41936-localhost:http (CLOSE_WAIT)
 java14130 root  971u  IPv4 30842468   TCP
 localhost:41898-localhost:http (CLOSE_WAIT)
 java14130 root  972u  IPv4 30842534   TCP
 localhost:41937-localhost:http (CLOSE_WAIT)
 java14130 root  973u  IPv4 30842765   TCP
 localhost:42062-localhost:http (CLOSE_WAIT)
 java14130 root  974u  IPv4 30842472   TCP
 localhost:41901-localhost:http (CLOSE_WAIT)
 java14130 root  975u  IPv4 30842122   TCP
 localhost:41694-localhost:http (CLOSE_WAIT)
 java14130 root  976u  IPv4 30842123   TCP
 localhost:41695-localhost:http (CLOSE_WAIT)
 java14130 root  977u  IPv4 30843217   TCP
 localhost:42188-localhost:http (CLOSE_WAIT)
 java14130 root  978u  IPv4 30842125   TCP
 localhost:41696-localhost:http (CLOSE_WAIT)
 java14130 root  979u  IPv4 30842126   TCP
 efeitws3.ptctax.intuit.com:41697-10.10.81.94:webcache (CLOSE_WAIT)
 java14130 root  981u  IPv4 30842128   TCP
 efeitws3.ptctax.intuit.com:41698-10.10.81.94:webcache (CLOSE_WAIT)
 java14130 root  982u  IPv4 30842129   TCP
 localhost:41699-localhost:http (CLOSE_WAIT)
 java14130 root  983u  IPv4 30888558   TCP
 localhost:43218

Re: Too many open files

2008-10-07 Thread Johnny Kewl


- Original Message - 
From: Johnny Kewl [EMAIL PROTECTED]

To: Tomcat Users List users@tomcat.apache.org
Sent: Wednesday, October 08, 2008 1:56 AM
Subject: Re: Too many open files




- Original Message - 
From: Mohit Anchlia [EMAIL PROTECTED]

To: Tomcat Users List users@tomcat.apache.org
Sent: Tuesday, October 07, 2008 11:44 PM
Subject: Too many open files



Tomcat throws too many open files and when I do lsof I get bunch of:

java14130 root  935u  IPv4 30842592   TCP
localhost:41971-localhost:http (CLOSE_WAIT)
java14130 root  937u  IPv4 30841213   TCP
efeitws3.ptctax.intuit.com:41161-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  938u  IPv4 30841214   TCP
localhost:41162-localhost:http (CLOSE_WAIT)
java14130 root  939u  IPv4 30841220   TCP
localhost:41165-localhost:http (CLOSE_WAIT)
java14130 root  940u  IPv4 30842516   TCP
localhost:41927-localhost:http (CLOSE_WAIT)
java14130 root  941u  IPv4 30841226   TCP
localhost:41168-localhost:http (CLOSE_WAIT)
java14130 root  943u  IPv4 30841899   TCP
efeitws3.ptctax.intuit.com:41566-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  944u  IPv4 30841694   TCP
efeitws3.ptctax.intuit.com:41453-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  945u  IPv4 30841695   TCP
localhost:41454-localhost:http (CLOSE_WAIT)
java14130 root  946u  IPv4 30841900   TCP
localhost:41567-localhost:http (CLOSE_WAIT)
java14130 root  948u  IPv4 30842415   TCP
efeitws3.ptctax.intuit.com:41864-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  949u  IPv4 30842416   TCP
localhost:41865-localhost:http (CLOSE_WAIT)
java14130 root  950u  IPv4 30842419   TCP
localhost:41867-localhost:http (CLOSE_WAIT)
java14130 root  952u  IPv4 30850596   TCP
localhost:42484-localhost:http (CLOSE_WAIT)
java14130 root  953u  IPv4 30842760   TCP
localhost:42058-localhost:http (CLOSE_WAIT)
java14130 root  954u  IPv4 30842596   TCP
localhost:41974-localhost:http (CLOSE_WAIT)
java14130 root  956u  IPv4 30842093   TCP
localhost:41676-localhost:http (CLOSE_WAIT)
java14130 root  957u  IPv4 30842195   TCP
efeitws3.ptctax.intuit.com:41737-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  958u  IPv4 30841730   TCP
localhost:41467-localhost:http (CLOSE_WAIT)
java14130 root  959u  IPv4 30841737   TCP
localhost:41472-localhost:http (CLOSE_WAIT)
java14130 root  960u  IPv4 30842196   TCP
localhost:41738-localhost:http (CLOSE_WAIT)
java14130 root  961u  IPv4 30842528   TCP
localhost:41933-localhost:http (CLOSE_WAIT)
java14130 root  962u  IPv4 30842363   TCP
localhost:41836-localhost:http (CLOSE_WAIT)
java14130 root  964u  IPv4 30842365   TCP
efeitws3.ptctax.intuit.com:41837-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  965u  IPv4 30842366   TCP
localhost:41838-localhost:http (CLOSE_WAIT)
java14130 root  966u  IPv4 30842367   TCP
localhost:41839-localhost:http (CLOSE_WAIT)
java14130 root  967u  IPv4 30842371   TCP
localhost:41841-localhost:http (CLOSE_WAIT)
java14130 root  968u  IPv4 30842465   TCP
localhost:41895-localhost:http (CLOSE_WAIT)
java14130 root  969u  IPv4 30848501   TCP
localhost:42415-localhost:http (CLOSE_WAIT)
java14130 root  970u  IPv4 30842533   TCP
localhost:41936-localhost:http (CLOSE_WAIT)
java14130 root  971u  IPv4 30842468   TCP
localhost:41898-localhost:http (CLOSE_WAIT)
java14130 root  972u  IPv4 30842534   TCP
localhost:41937-localhost:http (CLOSE_WAIT)
java14130 root  973u  IPv4 30842765   TCP
localhost:42062-localhost:http (CLOSE_WAIT)
java14130 root  974u  IPv4 30842472   TCP
localhost:41901-localhost:http (CLOSE_WAIT)
java14130 root  975u  IPv4 30842122   TCP
localhost:41694-localhost:http (CLOSE_WAIT)
java14130 root  976u  IPv4 30842123   TCP
localhost:41695-localhost:http (CLOSE_WAIT)
java14130 root  977u  IPv4 30843217   TCP
localhost:42188-localhost:http (CLOSE_WAIT)
java14130 root  978u  IPv4 30842125   TCP
localhost:41696-localhost:http (CLOSE_WAIT)
java14130 root  979u  IPv4 30842126   TCP
efeitws3.ptctax.intuit.com:41697-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  981u  IPv4 30842128   TCP
efeitws3.ptctax.intuit.com:41698-10.10.81.94:webcache (CLOSE_WAIT)
java14130 root  982u  IPv4 30842129   TCP
localhost:41699-localhost:http (CLOSE_WAIT)
java14130 root  983u  IPv4 30888558   TCP
localhost:43218-localhost:http (CLOSE_WAIT)
java14130 root  984u  IPv4 30842617   TCP
localhost:41986

Re: Too many open files

2008-10-07 Thread Johnny Kewl


- Original Message - 
From: Mohit Anchlia [EMAIL PROTECTED]

To: Tomcat Users List users@tomcat.apache.org
Sent: Wednesday, October 08, 2008 2:11 AM
Subject: Re: Too many open files



I can see you can't wait to hear the debate. Anyhow, I am using
HttpClient from apache commons and I do have .getReleaseConnection().


The brain surgeons are arriving... 


... don't know if that's the cause but it's possible.
Set the instance to null...
HttpClient = null;

when you're done with it... that may do the trick...

Have fun...

---
HARBOR : http://www.kewlstuff.co.za/index.htm
The most powerful application server on earth.
The only real POJO Application Server.
See it in Action : http://www.kewlstuff.co.za/cd_tut_swf/whatisejb1.htm
---
If you cant pay in gold... get lost...



-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Too many open files exception under heavy load - need help!

2008-01-25 Thread Rainer Traut

Tobias Schulz-Hess schrieb:

For Linux, this can be done dynamically by launching (from the OS



prompt):




 echo 16384 > /proc/sys/fs/file-max

When I do
~# cat /proc/sys/fs/file-max
203065


This setting is a kernel limit.


This tells me, that (at least this specific setting) is already
sufficient...


You most likely hit shell limits.

What user runs your tomcat server?

When you have found out, go to
/etc/security/limits.conf
and adjust parameters like this (or according to your needs):

tomcat   soft    nofile  90000
tomcat   hard    nofile  90000

tomcat   soft    nproc   8192
tomcat   hard    nproc   8192


You can check these limits after relogin with your tomcat user with 
'ulimit -a'.
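
You can also read the effective limit from inside the running JVM; a sketch
relying on the com.sun.management extension (Sun/Oracle JVMs on Unix only,
class name made up):

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

// Prints the JVM's current and maximum file descriptor counts.
public class FdLimitCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            System.out.println("open fds = " + unix.getOpenFileDescriptorCount()
                    + " / max fds = " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("FD counts not exposed on this JVM/OS");
        }
    }
}

The same OpenFileDescriptorCount / MaxFileDescriptorCount attributes are
visible in JConsole under java.lang:type=OperatingSystem.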


Rainer

-
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


