TCP connection vs Tomcat threads vs File Descriptors - please help

2015-10-17 Thread vicky
Hi All,
Can someone please help me understand how TCP connections are related
to the number of file descriptors and the number of threads configured
on a machine?

Setup details: OS - CentOS 6, Tomcat 7, Java 7.

Recently I faced a problem where my application had 20k+ TCP connections
in TIME_WAIT state, which choked my application, even though the numbers
of threads and file descriptors in use were well below their thresholds.

1) Is there a limit on how many TCP connections a machine can open?
If yes, how do I tune it?

2) My understanding was that every TCP connection opens one file
descriptor, but in my current situation only 900 FDs were in use while
the TIME_WAIT connections numbered 20k+. How do I make sense of this?
How are the two related?

3) If I configure 600 threads in server.xml for my HTTP connector and
the machine has an 8-core CPU, does that mean my Tomcat will serve
600 x 8 (CPU cores) = 4800 threads simultaneously?

Re: TCP connection vs Tomcat threads vs File Descriptors - please help

2015-10-17 Thread Rainer Jung

On 17.10.2015 at 08:27, vicky wrote:

> Hi All,
> Can someone please help me understand how TCP connections are related
> to the number of file descriptors and the number of threads configured
> on a machine?
> Setup details: OS - CentOS 6, Tomcat 7, Java 7.
> Recently I faced a problem where my application had 20k+ TCP
> connections in TIME_WAIT state, which choked my application, even
> though the numbers of threads and file descriptors in use were well
> below their thresholds.
> 1) Is there a limit on how many TCP connections a machine can open?
> If yes, how do I tune it?
> 2) My understanding was that every TCP connection opens one file
> descriptor, but in my current situation only 900 FDs were in use while
> the TIME_WAIT connections numbered 20k+. How do I make sense of this?
> How are the two related?
> 3) If I configure 600 threads in server.xml for my HTTP connector and
> the machine has an 8-core CPU, does that mean my Tomcat will serve
> 600 x 8 (CPU cores) = 4800 threads simultaneously?


Let me give you an incomplete answer:

A TCP connection in state TIME_WAIT no longer exists from the point
of view of the application/Tomcat/Java etc. So it does not need any
application resources like threads.
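
You can see this yourself by comparing what the kernel tracks with what
the process holds. A rough sketch, assuming a single Tomcat process that
pgrep can find via its Bootstrap main class (adjust the pattern if your
setup differs):

  # count sockets per TCP state; TIME_WAIT sockets belong to no process
  netstat -tan | awk 'NR>2 {print $6}' | sort | uniq -c

  # count file descriptors actually held by the Tomcat JVM
  ls /proc/$(pgrep -f org.apache.catalina.startup.Bootstrap)/fd | wc -l

The first number can be 20k+ while the second stays around 900, which is
exactly what you observed: TIME_WAIT entries live only in the kernel's
connection table, not in your process's FD table.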


To understand TIME_WAIT, you should look for "TCP state diagram" in your 
favorite search engine or grab a copy of Stevens' TCP/IP Illustrated. 
You will find a picture like this:


http://www.cs.northwestern.edu/~agupta/cs340/project2/TCPIP_State_Transition_Diagram.pdf

(page 2)

There you will see that an ESTABLISHED connection can only enter the 
TIME_WAIT state on the side of the connection that first started the 
connection shutdown by sending a FIN packet. And that side will always 
go through the TIME_WAIT state.


The default time a connection sits in TIME_WAIT on Linux seems to be 60, 
sometimes 120, seconds. So the total number of connections in that state 
is proportional to the number of connections per second that the local 
node starts closing.


Example: Assume you handle 100 new connections per second and all of 
them are closed by the local node first. That means that within 60 
seconds, 6000 connections will pile up in state TIME_WAIT.


In addition, removing TIME_WAIT connections from the OS list is not done 
continuously but at regular intervals, e.g. every 5 seconds. So the 
real numbers can be slightly higher.


Why are TIME_WAIT states bad? They don't need app resources, so why 
care? Because they increase the list of TCP connection states the OS has 
to manage, and a huge number of such TIME_WAIT connections - a few tens 
of thousands - can make the IP stack slower.


The TIME_WAIT duration is not configurable on Linux, only on some other 
Unixes. See the discussion at:


http://comments.gmane.org/gmane.linux.network/244411

For a long time you simply had to live with this, and the only things 
you could do were:

- checking whether you could force more connections to be closed by the 
remote side first


- reducing the number of connections per second by increasing connection 
reuse, i.e. keeping connections around for a longer time instead of 
constantly creating new ones (see the connector sketch below).


Both options would increase the need for app resources though, because 
the longer lifetime of established connections would often increase the 
number of threads needed.
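
As a sketch of the second option for Tomcat: the connector attributes
below are real Tomcat 7 settings, but the values are illustrative only
and would need tuning for your actual load:

  <!-- server.xml: let clients reuse connections instead of opening
       and closing a fresh one per request -->
  <Connector port="8080" protocol="HTTP/1.1"
             maxThreads="600"
             maxKeepAliveRequests="1000"
             keepAliveTimeout="30000"
             connectionTimeout="20000" />

Note that with the default blocking (BIO) connector every kept-alive
connection occupies a thread while it is open - which is exactly the
resource trade-off mentioned above - whereas the NIO connector can park
idle keep-alive connections without a dedicated thread.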


Now some people recommend using net.ipv4.tcp_tw_reuse, but that tunable 
seems to apply only to outgoing connections. Others suggest using 
net.ipv4.tcp_tw_recycle, but that one seems to cause problems if clients 
sit behind a NAT device.
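
If you want to experiment with the safer of the two, a minimal sketch
(runtime setting; add the same line to /etc/sysctl.conf to persist it):

  # allow reuse of TIME_WAIT sockets for new outgoing connections only;
  # leave net.ipv4.tcp_tw_recycle off - it breaks clients behind NAT
  sysctl -w net.ipv4.tcp_tw_reuse=1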


See:

http://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux.html

Other people suggest tuning 
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait:


http://www.lognormal.com/blog/2012/09/27/linux-tcpip-tuning/

It could be that this tunable is replaced by 
nf_conntrack_tcp_timeout_time_wait in newer kernels.
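
A sketch for that tunable - it only has an effect if the netfilter
connection-tracking module is loaded at all, and the exact sysctl name
depends on your kernel (CentOS 6 may still expose the older
ip_conntrack_* form):

  # shorten how long conntrack keeps state for TIME_WAIT entries
  sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=30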


Regards,

Rainer

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: TCP connection vs Tomcat threads vs File Descriptors - please help

2015-10-17 Thread vicky
Thank you so much, Rainer, for sparing the time and answering my query. Vicky

