I believe that the explanation given below by Guido is incorrect and misleading, as it seems to confuse CLOSE_WAIT with TIME_WAIT.
See: TCP State Transition Diagram (RFC 793)

CLOSE-WAIT represents waiting for a connection termination request from the 
local user.

TIME-WAIT represents waiting for enough time to pass to be sure the remote TCP received the acknowledgment of its connection termination request.

Thus, CLOSE_WAIT is a /normal/ state of a TCP connection. There is no timeout for it that can be set by any TCP parameter. Basically it means: the remote client has closed its side of the connection, and the local OS is waiting for the local application to close its side as well. The local OS will wait, for an indefinite amount of time, until that happens (or until the process that holds the connection open exits). And in this case, the process holding the connection open is the JVM running Tomcat (which, by definition, never exits until you shut Tomcat down).
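
To see how the server's sockets are distributed across TCP states, you can tally the last column of netstat-style output. A minimal sketch, using a canned snapshot as input so the pipeline itself is clear; on a live server you would feed it `netstat -ant` (or `ss -tan`) instead:

```shell
# Canned snapshot of `netstat -ant`-style output (addresses are made
# up), so the counting pipeline below has deterministic input.
sample='tcp 0 0 10.0.0.1:8080 10.0.0.2:51000 CLOSE_WAIT
tcp 0 0 10.0.0.1:8080 10.0.0.2:51001 CLOSE_WAIT
tcp 0 0 10.0.0.1:8080 10.0.0.2:51002 ESTABLISHED
tcp 0 0 10.0.0.1:8080 10.0.0.2:51003 TIME_WAIT'

# Tally connections per TCP state (the state is the last column);
# here CLOSE_WAIT comes out on top with a count of 2.
printf '%s\n' "$sample" | awk '{print $NF}' | sort | uniq -c | sort -rn
```

The same `awk '{print $NF}' | sort | uniq -c | sort -rn` pipeline applied to live netstat output gives you a quick per-state census of the box.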

Many connections in the CLOSE_WAIT state usually mean that the application running under Tomcat is not closing its sockets properly.
(This can happen in some "devious" ways that are not easy to diagnose immediately.)
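
To confirm which process actually owns the lingering sockets (and that it is indeed the Tomcat JVM rather than something else on the box), `lsof` can filter on TCP state. A sketch; the `-sTCP:CLOSE_WAIT` state filter needs a reasonably recent lsof:

```shell
# Attribute the lingering sockets to their owning process; if they all
# belong to one JVM, the webapp (or a library it uses) is the leaker.
# -nP skips DNS and port-name lookups so the listing is fast.
lsof -nP -iTCP -sTCP:CLOSE_WAIT || true  # lsof exits non-zero when nothing matches

# Once you know the Tomcat pid, narrow it down:
#   lsof -nP -iTCP -sTCP:CLOSE_WAIT -p <tomcat-pid>
```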

Try the following: when you notice a high number of connections in the CLOSE_WAIT state, force the JVM running Tomcat to do a major garbage collection.
(I do this using jmxsh, but there are several other ways to do it.)
Then check how many CLOSE_WAIT connections are still there.
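
For that experiment, one alternative to jmxsh is `jcmd`, which ships with the JDK. A sketch, assuming the Tomcat JVM is visible in `jps -l` output under a Bootstrap/Catalina class name (adjust the pattern for your setup):

```shell
# Find the Tomcat JVM's pid via jps (the class-name pattern is an
# assumption -- check what `jps -l` prints on your machine).
TOMCAT_PID=$(jps -l | awk '/Bootstrap|catalina/ {print $1; exit}')

# Ask that JVM to run a full GC. Unreachable-but-unclosed socket
# objects get finalized and their file descriptors closed as a side
# effect, which is what makes the before/after comparison meaningful:
# if the CLOSE_WAIT count drops sharply, the webapp was leaking them.
jcmd "$TOMCAT_PID" GC.run

# Re-count the lingering sockets afterwards:
ss -tan state close-wait | tail -n +2 | wc -l
```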


On 11.05.2017 11:03, Adhavan Mathiyalagan wrote:
Thanks Guido !

On Thu, May 11, 2017 at 12:02 PM, Jäkel, Guido <g.jae...@dnb.de> wrote:

Dear Adhavan,

I think this is quite normal: the browser clients "in front" will
reuse connections (using keep-alive at the TCP level), but an in-between load
balancer may not work this way, or may not be configured to, and will open a new
connection for each request to the backend.

Then you'll see a lot of sockets going through the TCP connection-teardown
states between the load balancer and the backend server. Recall that in TCP,
even for a "well closed" connection, the port is held for some time to handle
late (duplicate) packets. Think of a duplicated, delayed RST packet:
it must not kill the next connection on that port.

Because this situation is very unlikely, or even impossible, on a local area
network, you may adjust your server's TCP stack settings to use much
shorter protection times (on the order of seconds), and tune other
parameters as well. On Linux, you can also widen the range of ephemeral
ports used for outgoing connections.
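
On Linux, that tuning could look like the sketch below. The exact values are illustrative assumptions, not recommendations, and the commands need root:

```shell
# Shorten how long orphaned connections linger in FIN-WAIT-2
sysctl -w net.ipv4.tcp_fin_timeout=15
# Allow reuse of TIME_WAIT sockets for new outgoing connections
# (reasonable on a LAN, where delayed duplicate segments are unlikely)
sysctl -w net.ipv4.tcp_tw_reuse=1
# Widen the ephemeral port range used for outgoing connections
sysctl -w net.ipv4.ip_local_port_range="15000 64000"
```

To make the settings survive a reboot, put the same keys in /etc/sysctl.conf (or a file under /etc/sysctl.d/).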

BTW: if you have a dedicated stateful packet-inspecting firewall between
your LB and the server, you also have to take a look at it.


That said, one more cent about the protocol between the LB and Tomcat:
I don’t know about plain HTTP, but if you use AJP (with mod_jk) you can
configure it to keep and reuse connections to the Tomcat backend(s).
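
With mod_jk, connection reuse toward the backend is controlled in workers.properties. A sketch; the worker name, host, and timeout values are illustrative assumptions, and the pool timeout should be kept in step with the connectionTimeout/keepAliveTimeout of Tomcat's AJP connector:

```properties
worker.list=worker1
worker.worker1.type=ajp13
worker.worker1.host=tomcat-backend.example.com
worker.worker1.port=8009
# Keep idle backend connections pooled instead of opening one per
# request; close them only after 600 s of inactivity.
worker.worker1.connection_pool_timeout=600
worker.worker1.socket_keepalive=true
```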

Guido

-----Original Message-----
From: Adhavan Mathiyalagan [mailto:adhav....@gmail.com]
Sent: Wednesday, May 10, 2017 6:32 PM
To: Tomcat Users List
Subject: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD

Team,

Tomcat version : 8.0.18

Apache HTTPD version : 2.2


There are a lot of CLOSE_WAIT connections being created at the
application (Tomcat) when traffic is routed through the Apache HTTPD
load balancer to the application running in the Tomcat container. This leads
to slowness on the port where the application is running, and eventually the
application is not accessible through that particular port.

When traffic reaches the application port directly, without HTTPD
(the load balancer), no CLOSE_WAIT connections are created and the
application handles the load seamlessly.

Thanks in advance for the support.

Regards,
Adhavan.M




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
