Re: Too many open files exception under heavy load - need help!

2008-01-25 Thread Tobias Schulz-Hess
Hi Rainer,

Rainer Jung wrote:
 Hi,

 1) How many fds does the process have? So is the question "why can't
 we use all those 4096 configured fds", or is it "where do those 4096
 fds used by my process come from"?
The latter. We can actually see that all 4096 fds are in use (mostly by
connections on port 8080 in CLOSE_WAIT state...).
Well, we're pretty sure the fds actually are the connections from the
HTTP connector of Tomcat. The connector is set to handle 200 connections
simultaneously. So the question is: why aren't those connections closed?...
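
(For reference, the connector is configured roughly like this in server.xml;
maxThreads="200" matches the 200 simultaneous connections mentioned above,
while the other attribute values shown here are illustrative, not a verbatim
copy of our file:)

<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           connectionTimeout="20000"
           maxKeepAliveRequests="100"
           disableUploadTimeout="false" />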



 2) CLOSE_WAIT means the remote side closed the connection and the
 local side didn't close it yet. What's your remote side with respect to
 TCP? Is it browsers, or a load balancer or stuff like that?
We have NGINX as a proxy in front of the Tomcats (on another server). So
requests from the Internet arrive at NGINX and are then forwarded to the
Tomcat(s).
By now we're pretty happy with NGINX, since it is really fast and has a
low footprint, but it could well be that it does not play well with Tomcat.

We have these problems on our live servers, so the application that
actually initiates the connection is a browser.


 3) Are you using keep alive (not implying that's the cause of your
 problems, but keep alive makes the connection life cycle much more
 complicated from the container point of view).
As far as I understand NGINX, we only use keep-alive requests for the
communication between the client and NGINX. The communication between NGINX
and Tomcat has no settings for keep-alive, so I assume: no.

This is the relevant part of the NGINX configuration:

location / {
    proxy_pass             http://verwandt_de;
    proxy_redirect         off;

    proxy_set_header       Host             $host;
    proxy_set_header       X-Real-IP        $remote_addr;
    proxy_set_header       X-Forwarded-For  $proxy_add_x_forwarded_for;

    client_max_body_size   10m;
    client_body_temp_path  /var/nginx/client_body_temp;

    proxy_buffering        off;
    proxy_store            off;

    proxy_connect_timeout  30;
    proxy_send_timeout     80;
    proxy_read_timeout     80;
}

So, any suggestions? Or should I rather take this topic to an NGINX mailing
list?

Kind regards,

Tobias.


 Regards,
 Rainer



Too many open files exception under heavy load - need help!

2008-01-24 Thread Tobias Schulz-Hess
Hi there,

we use the current Tomcat 6.0 on 2 machines. The hardware is brand new and
really fast. We get lots of traffic, which is usually handled well by the
Tomcats; the load on those machines is between 1 and 6 when we have lots of
traffic.
The machines run Debian 4.1/64 as the OS.

However, sometimes (especially if we have lots of traffic) we get the following 
exception:
INFO   | jvm 1| 2008/01/23 15:28:18 | java.net.SocketException: Too many open files
INFO   | jvm 1| 2008/01/23 15:28:18 |   at java.net.PlainSocketImpl.socketAccept(Native Method)
INFO   | jvm 1| 2008/01/23 15:28:18 |   at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
INFO   | jvm 1| 2008/01/23 15:28:18 |   at java.net.ServerSocket.implAccept(ServerSocket.java:453)
INFO   | jvm 1| 2008/01/23 15:28:18 |   at java.net.ServerSocket.accept(ServerSocket.java:421)
INFO   | jvm 1| 2008/01/23 15:28:18 |   at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
INFO   | jvm 1| 2008/01/23 15:28:18 |   at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
INFO   | jvm 1| 2008/01/23 15:28:18 |   at java.lang.Thread.run(Thread.java:619)

We have already raised the ulimit from 1024 (the default) to 4096 (thereby
proving: yes, I have used Google and read almost everything about that
exception).
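
(In case it is relevant: one common way to raise the limit permanently,
assuming Tomcat runs as a dedicated user called tomcat, is via
/etc/security/limits.conf; whether it actually takes effect depends on how
the JVM is started. The user name below is an assumption, not necessarily
our setup:)

# /etc/security/limits.conf
tomcat   soft   nofile   4096
tomcat   hard   nofile   4096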

We also looked at the open files, and 95% of them are connections from or to
the Tomcat port 8080. (The other 5% are open JARs, connections to memcached
and MySQL, and SSL sockets.)

Most of the connections to port 8080 are in the CLOSE_WAIT state.
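
(For anyone who wants to reproduce the check, something along these lines
works on Linux; <pid> stands for the Tomcat process id:)

ls /proc/<pid>/fd | wc -l                               # total fds held by the JVM
lsof -p <pid> | grep TCP                                # which of them are TCP sockets
netstat -tanp | grep ':8080' | grep CLOSE_WAIT | wc -l  # sockets stuck in CLOSE_WAIT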

I have the strong feeling that something (Tomcat, the JVM, whatever) relies on
the JVM garbage collection to close those open connections. However, when we
have heavy load, the garbage collection is suspended and then the connections
pile up. But this is just a guess.
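
(To illustrate what I mean by relying on GC: a socket that is never closed
explicitly only gives back its fd once the object is finalized. This is just a
generic sketch, not code from Tomcat or from our application:)

import java.io.IOException;
import java.net.Socket;

public class CloseExample {

    // If handle() returns without calling close(), the socket's fd stays open
    // until the Socket object is garbage-collected and finalized. Under heavy
    // load that can be much later, so fds pile up (e.g. in CLOSE_WAIT).
    void handle(Socket socket) throws IOException {
        try {
            socket.getOutputStream().write("HTTP/1.0 200 OK\r\n\r\n".getBytes());
        } finally {
            socket.close(); // releases the fd immediately instead of waiting for GC
        }
    }
}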

How can this problem be solved?

Thank you and kind regards,

Tobias.

---
Tobias Schulz-Hess
 
ICS - Internet Consumer Services GmbH
Mittelweg 162
20148 Hamburg
 
Tel:    +49 (0) 40 238 49 141
Fax:    +49 (0) 40 415 457 14
E-Mail: [EMAIL PROTECTED]
Web:    www.internetconsumerservices.com

Projects
www.dealjaeger.de 
www.verwandt.de

ICS Internet Consumer Services GmbH
Managing Directors: Dipl.-Kfm. Daniel Grözinger, Dipl.-Kfm. Sven Schmidt
Commercial Register: Amtsgericht Hamburg HRB 95149