Re: CLOSE_WAIT connections when OutOfMemoryError is thrown
On 04/09/18 03:20, Antonio Rafael Rodrigues wrote:
> Hi,
> In my REST API, every time a request generates an OutOfMemoryError the
> client doesn't get a response from the server and hangs forever. If I kill
> the client, I can see with lsof that a CLOSE_WAIT connection remains, and
> it goes away only if I restart the Spring application.
> I can reproduce it easily with a plain servlet:
>
> public class CustomServlet extends HttpServlet {
>
>     @Override
>     protected void doGet(HttpServletRequest req, HttpServletResponse resp)
>             throws ServletException, IOException {
>         System.out.println("Test");
>         throw new OutOfMemoryError();
>     }
> }
>
> Any request to this servlet hangs; if the client gives up, a CLOSE_WAIT
> connection remains.
>
> I'd like to know if there is some way to get over it

No. Once an OOME occurs, the JVM is in a potentially unknown / unstable
state and the only safe thing to do is to shut it down and restart it.
There are some OOMEs that are potentially recoverable, but reliably
determining whether that is the case is tricky.

> and if it is a bug.

No.

Mark
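Mark's "shut it down and restart it" advice is usually automated rather than done by hand. A minimal sketch of one common approach, using standard HotSpot flags (the setenv.sh location is the usual Tomcat convention; the dump path is illustrative, neither comes from this thread):

```shell
# $CATALINA_BASE/bin/setenv.sh -- sourced by catalina.sh at startup.
# -XX:+ExitOnOutOfMemoryError (JDK 8u92+) makes the JVM exit on the
# first OOME instead of limping on with half-broken request threads;
# -XX:+HeapDumpOnOutOfMemoryError keeps a dump for post-mortem analysis.
CATALINA_OPTS="${CATALINA_OPTS:-} \
  -XX:+ExitOnOutOfMemoryError \
  -XX:+HeapDumpOnOutOfMemoryError \
  -XX:HeapDumpPath=/var/log/tomcat"
```

Paired with a supervisor that restarts the process (for example systemd's `Restart=on-failure`), this turns a hung, half-dead JVM into a brief outage instead of a pile of CLOSE_WAIT sockets.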
RE: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
> From: Adhavan Mathiyalagan [mailto:adhav@gmail.com]
> Subject: Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD

What part of "do not top-post" do you not understand?

> The Application port is configured in the catalina.properties file
> HTTP_PORT=8030
> JVM_ROUTE=dl360x3805.8030

Those are not tags that mean anything to Tomcat. If your application is
using port 8030 on its own, it's your application's responsibility to
clean up after itself properly.

 - Chuck
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi,

The application port is configured in the catalina.properties file:

# String cache configuration.
tomcat.util.buf.StringCache.byte.enabled=true
#tomcat.util.buf.StringCache.char.enabled=true
#tomcat.util.buf.StringCache.trainThreshold=50
#tomcat.util.buf.StringCache.cacheSize=5000
SHUTDOWN_PORT=-1
HTTP_PORT=8030
JVM_ROUTE=dl360x3805.8030

With regard to the HTTPD configuration, the balancer members are
configured in another file (balancer.conf) which is included in
httpd.conf:

Include /etc/httpd/conf/balancer.conf

BalancerMember http://dl360x3806:8030/custcare_cmax/view/services retry=60 route=dl360x3806.8030

Regards,
Adhavan.M

On Thu, May 11, 2017 at 9:06 PM, Christopher Schultz <
ch...@christopherschultz.net> wrote:

> Adhavan,
>
> On 5/11/17 11:32 AM, Adhavan Mathiyalagan wrote:
> > 8030 is the port where the application is running.
>
> Port 8030 appears nowhere in your configuration. Not in server.xml
> (where you used ${HTTP_PORT}, which could plausibly be 8030) and not
> in httpd.conf -- where you specify all port numbers for mod_proxy_http
> and none of them were port 8030.
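For readers puzzled by HTTP_PORT: entries in catalina.properties are loaded as Java system properties, and server.xml supports ${...} property substitution, which is presumably how the ${HTTP_PORT} that Chris mentions resolves to 8030. A sketch of what such a Connector line would look like (a reconstruction; the actual server.xml is not shown intact anywhere in this thread):

```xml
<!-- server.xml (sketch): ${HTTP_PORT} is substituted at startup with
     the value from catalina.properties, i.e. 8030 in this setup -->
<Connector port="${HTTP_PORT}" protocol="HTTP/1.1"
           redirectPort="8443" />
```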
> > - -chris
> >
> > On Thu, May 11, 2017 at 8:53 PM, André Warnier (tomcat) wrote:
> >
> >> On 11.05.2017 16:57, Adhavan Mathiyalagan wrote:
> >>
> >>> [Tomcat and httpd configuration snipped; it is quoted in full in
> >>> the original message at the end of this thread.]
> >>
> >> Hi. Your netstat screenshot showed the CLOSE_WAIT connections on
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
On 11.05.2017 17:32, Adhavan Mathiyalagan wrote:
> Hi,
> 8030 is the port where the application is running.

/What/ application ? Is that a stand-alone application ?
For Tomcat, I cannot say, because it is not clear below what value
${HTTP_PORT} has. But from your front-end balancer, it looks like it is
forwarding to a series of ports, none of which are 8030.
And please stop top-posting.

> Regards,
> Adhavan.M
>
> [Tomcat and httpd configuration snipped; it is quoted in full in the
> original message at the end of this thread.]
>
>> Hi.
>> Your netstat screenshot showed the CLOSE_WAIT connections on port
>> 8030, like :
>>
>> tcp 509 0 :::10.61.137.49:8030 :::10.61.137.47:60903 CLOSE_WAIT
>>
>> But I do not see any mention of port 8030 in your configs above.
>> So what is listening there ? ("netstat --tcp -aopn" would show this)
>
> [earlier quoted exchange with Christopher Schultz snipped]
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Adhavan,

On 5/11/17 11:32 AM, Adhavan Mathiyalagan wrote:
> 8030 is the port where the application is running.

Port 8030 appears nowhere in your configuration. Not in server.xml
(where you used ${HTTP_PORT}, which could plausibly be 8030) and not
in httpd.conf -- where you specify all port numbers for mod_proxy_http
and none of them were port 8030.

- -chris

> On Thu, May 11, 2017 at 8:53 PM, André Warnier (tomcat) wrote:
>
>> On 11.05.2017 16:57, Adhavan Mathiyalagan wrote:
>>
>>> [Tomcat and httpd configuration snipped; it is quoted in full in
>>> the original message at the end of this thread.]
>>
>> Your netstat screenshot showed the CLOSE_WAIT connections on
>> port 8030, like :
>>
>> tcp 509 0 :::10.61.137.49:8030 :::10.61.137.47:60903 CLOSE_WAIT
>>
>> But I do not see any mention of port 8030 in your configs above.
>> So what is listening there ? ("netstat --tcp -aopn" would show
>> this)
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Adhavan,

On 5/11/17 10:57 AM, Adhavan Mathiyalagan wrote:
> *Tomcat Configuration*
>
> HTTP/1.1 and APR
>
> <Connector ... connectionTimeout="2"
>            redirectPort="8443" maxHttpHeaderSize="8192" />

Okay, so you have a number of defaults taking effect, including:

  maxConnections="8192"
  maxKeepAliveRequests="100"
  maxThreads="200"

... and you have no <Executor> configured, so the default executor will
be used, which performs no reduction of threads when they are idle.

> *HTTPD Configuration*
>
> Timeout 60
> KeepAlive Off

Wow, really?

> MaxKeepAliveRequests 100
> KeepAliveTimeout 15

That's ... confusing. Why do you disable KeepAlive, then configure
certain aspects of KeepAlive?

> StartServers 256
> MinSpareServers 100
> MaxSpareServers 500
> ServerLimit 2000
> MaxClients 2000
> MaxRequestsPerChild 4000
>
> StartServers 4
> MaxClients 300
> MinSpareThreads 25
> MaxSpareThreads 75
> ThreadsPerChild 25
> MaxRequestsPerChild 0

Which MPM is actually in use, prefork or worker? For prefork, each of
your httpd instances can generate 2000 simultaneous connections to
Tomcat. For the worker MPM, each of your httpd instances can generate
300 simultaneous connections to Tomcat.

Tomcat is configured to handle a maximum of 200 simultaneous requests,
so you are very likely to have a situation where your web server is
handling far more load than Tomcat can. If you have a fairly high
percentage of web-server-only requests, then this is probably okay. But
if the overwhelming majority of requests to the web server need to be
proxied over to your application server, then you are going to have
problems. The problem gets worse if you have more than one httpd
instance.

> ServerName *
> Timeout 300

You have conflicting settings for "Timeout". You may want to review
those.
> ProxyPreserveHost On
> ProxyRequests Off
>
> <Proxy balancer://wsiservices>
> BalancerMember http://dl360x3799:8011/admx_ecms/view/services retry=60 status=+H route=dl360x3799.8011
> BalancerMember http://dl360x3799:8012/admx_ecms/view/services retry=60 status=+H route=dl360x3799.8012
> ProxySet stickysession=JSESSIONID
> ProxySet lbmethod=byrequests
> </Proxy>

Odd to have only "hot standbys" in a cluster.

> SetEnv force-proxy-request-1.0 1

This is a horrible idea for performance, mostly because it disables
HTTP KeepAlives. Why are you doing this?

> SetEnv proxy-nokeepalive 1

This is a horrible idea for performance. Why are you disabling
KeepAlive for proxied requests?

- -chris
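A sketch of the direction Chris is pointing at (my wording, not his): make the keepalive settings self-consistent and let mod_proxy reuse its backend connections. The directives are standard Apache httpd 2.2 ones; the numbers are illustrative only, not values from this thread:

```apache
# httpd.conf (sketch) -- self-consistent keepalive configuration
KeepAlive On                 # was Off, yet MaxKeepAliveRequests etc. were set
MaxKeepAliveRequests 100
KeepAliveTimeout 5

# Drop these two lines: they force HTTP/1.0 and disable keepalive on
# proxied connections, so every proxied request pays a fresh TCP
# handshake to Tomcat.
# SetEnv force-proxy-request-1.0 1
# SetEnv proxy-nokeepalive 1
```

Sizing also matters here: with prefork's ServerLimit 2000 against Tomcat's default maxThreads="200", httpd can open roughly ten times more connections than Tomcat has threads to serve, which is exactly the kind of mismatch that leaves connections queued or abandoned.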
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi,

8030 is the port where the application is running.

Regards,
Adhavan.M

On Thu, May 11, 2017 at 8:53 PM, André Warnier (tomcat) wrote:

> On 11.05.2017 16:57, Adhavan Mathiyalagan wrote:
>
>> [Tomcat and httpd configuration snipped; it is quoted in full in the
>> original message at the end of this thread.]
>
> Hi.
> Your netstat screenshot showed the CLOSE_WAIT connections on port
> 8030, like :
>
> tcp 509 0 :::10.61.137.49:8030 :::10.61.137.47:60903 CLOSE_WAIT
>
> But I do not see any mention of port 8030 in your configs above. So
> what is listening there ?
> ("netstat --tcp -aopn" would show this)
>
> On Thu, May 11, 2017 at 7:20 PM, Christopher Schultz wrote:
>
>> Adhavan,
>>
>> On 5/11/17 9:30 AM, Adhavan Mathiyalagan wrote:
>>> The connections in the CLOSE_WAIT are owned by the Application
>>> /Tomcat process.
>>
>> Okay. Can you please post your configuration on both httpd and Tomcat
>> sides? If it's not clear from your configuration, please tell us which
>> type of connector you are using (e.g. AJP/HTTP and BIO/NIO/APR).
>>
>> - -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
On 11.05.2017 16:57, Adhavan Mathiyalagan wrote:
> Hi Chris,
>
> [Tomcat and httpd configuration snipped; it is quoted in full in the
> original message at the end of this thread.]

Hi.
Your netstat screenshot showed the CLOSE_WAIT connections on port 8030,
like :

tcp 509 0 :::10.61.137.49:8030 :::10.61.137.47:60903 CLOSE_WAIT

But I do not see any mention of port 8030 in your configs above. So
what is listening there ? ("netstat --tcp -aopn" would show this)

> On Thu, May 11, 2017 at 7:20 PM, Christopher Schultz wrote:
>
>> Adhavan,
>>
>> On 5/11/17 9:30 AM, Adhavan Mathiyalagan wrote:
>>> The connections in the CLOSE_WAIT are owned by the Application
>>> /Tomcat process.
>>
>> Okay. Can you please post your configuration on both httpd and Tomcat
>> sides? If it's not clear from your configuration, please tell us which
>> type of connector you are using (e.g. AJP/HTTP and BIO/NIO/APR).
>>
>> - -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi Chris, *Tomcat Configuration* HTTP/1.1 and APR ${catalina.base}/conf/web.xml *HTTPD Configuration* ServerTokens OS ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 60 KeepAlive Off MaxKeepAliveRequests 100 KeepAliveTimeout 15 StartServers256 MinSpareServers100 MaxSpareServers500 ServerLimit2000 MaxClients2000 MaxRequestsPerChild 4000 StartServers 4 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 ServerName * Timeout 300 ProxyPreserveHost On ProxyRequests Off BalancerMember http://dl360x3799:8011/admx_ecms/view/services retry=60 status=+H route=dl360x3799.8011 BalancerMember http://dl360x3799:8012/admx_ecms/view/services retry=60 status=+H route=dl360x3799.8012 ProxySet stickysession=JSESSIONID ProxySet lbmethod=byrequests ProxyPass /custcare_cmax/view/services balancer://wsiservices ProxyPassReverse /custcare_cmax/view/services balancer://wsiservices ProxyPass /admx_ecms/view/services balancer://wsiservices ProxyPassReverse /admx_ecms/view/services balancer://wsiservices BalancerMember http://dl360x3806:8035/custcare_cmax/services/ws_cma3 retry=60 route=dl360x3806.8035 BalancerMember http://dl360x3806:8036/custcare_cmax/services/ws_cma3 retry=60 route=dl360x3806.8036 ProxySet stickysession=JSESSIONID ProxySet lbmethod=byrequests ProxyPass /custcare_cmax/services/ws_cma3 balancer://wsiinstances ProxyPassReverse /custcare_cmax/services/ws_cma3 balancer://wsiinstances ProxyPass /admx_ecms/services/ws_cma3 balancer://wsiinstances ProxyPassReverse /admx_ecms/services/ws_cma3 balancer://wsiinstances BalancerMember http://dl360x3799:8011/admx_ecms retry=60 status=+H route=dl360x3799.8011 BalancerMember http://dl360x3799:8012/admx_ecms retry=60 status=+H route=dl360x3799.8012 ProxySet stickysession=JSESSIONID ProxySet lbmethod=byrequests ProxyPass /admx_ecms balancer://admxcluster ProxyPassReverse /admx_ecms balancer://admxcluster BalancerMember http://dl360x3799:8021/custcare_cmax retry=60 status=+H 
route=dl360x3799.8021 BalancerMember http://dl360x3799:8022/custcare_cmax retry=60 status=+H route=dl360x3799.8022 BalancerMember http://dl360x3806:8035/custcare_cmax retry=60 route=dl360x3806.8035 BalancerMember http://dl360x3806:8036/custcare_cmax retry=60 route=dl360x3806.8036 ProxySet stickysession=JSESSIONID ProxySet lbmethod=byrequests ProxyPass /custcare_cmax balancer://cmaxcluster ProxyPassReverse /custcare_cmax balancer://cmaxcluster BalancerMember http://dl360x3805:8089/mx route=dl360x3806.8089 ProxySet stickysession=JSESSIONID ProxySet lbmethod=byrequests ProxyPass /mx balancer://mxcluster ProxyPassReverse /mx balancer://mxcluster SetHandler balancer-manager SetHandler server-status ExtendedStatus On TraceEnable Off SetEnv force-proxy-request-1.0 1 SetEnv proxy-nokeepalive 1 On Thu, May 11, 2017 at 7:20 PM, Christopher Schultz < ch...@christopherschultz.net> wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > Adhavan, > > On 5/11/17 9:30 AM, Adhavan Mathiyalagan wrote: > > The connections in the CLOSE_WAIT are owned by the Application > > /Tomcat process. > > Okay. Can you please post your configuration on both httpd and Tomcat > sides? If it's not clear from your configuration, please tell us which > type of connector you are using (e.g. AJP/HTTP and BIO/NIO/APR). 
> - -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi Chris,

The netstat O/P below for the CLOSE_WAIT connections:

tcp 509 0 :::10.61.137.49:8030 :::10.61.137.47:60903 CLOSE_WAIT
tcp 491 0 :::10.61.137.49:8030 :::10.61.137.47:24856 CLOSE_WAIT
tcp 360 0 :::10.61.137.49:8030 :::10.61.137.47:12328 CLOSE_WAIT
tcp 511 0 :::10.61.137.49:8030 :::10.61.137.47:24710 CLOSE_WAIT
tcp 479 0 :::10.61.137.49:8030 :::10.61.137.47:33175 CLOSE_WAIT
tcp 361 0 :::10.61.137.49:8030 :::10.61.137.47:58084 CLOSE_WAIT
tcp 531 0 :::10.61.137.49:8030 :::10.61.137.47:42030 CLOSE_WAIT
tcp 971 0 :::10.61.137.49:8030 :::10.61.137.47:17692 CLOSE_WAIT
tcp 361 0 :::10.61.137.49:8030 :::10.61.137.47:60303 CLOSE_WAIT

10.61.137.49 -> Application IP
10.61.137.47 -> Load balancer IP

Regards,
Adhavan.M

On Thu, May 11, 2017 at 7:06 PM, André Warnier (tomcat) wrote:
> On 11.05.2017 15:30, Adhavan Mathiyalagan wrote:
>> Hi Chris,
>>
>> The connections in the CLOSE_WAIT are owned by the Application /Tomcat
>> process.
>
> Can you provide an example output of the "netstat" command that shows such
> connections ? (not all, just some)
> (copy and paste it right here)
> ->
>
>> Regards,
>> Adhavan.M
>>
>> On Thu, May 11, 2017 at 6:53 PM, Christopher Schultz <
>> ch...@christopherschultz.net> wrote:
>>
>>> Adhavan,
>>>
>>> On 5/10/17 12:32 PM, Adhavan Mathiyalagan wrote:
>>>> Team,
>>>>
>>>> Tomcat version : 8.0.18
>>>> Apache HTTPD version : 2.2
>>>>
>>>> There are a lot of CLOSE_WAIT connections being created at the
>>>> Application (tomcat) when the traffic is routed through the Apache
>>>> HTTPD load balancer to the Application running over tomcat container.
>>>> This leads to slowness of the port where the Application is running
>>>> and eventually the application is not accessible through that
>>>> particular PORT.
>>>
>>> Please clarify: are the connections in the CLOSE_WAIT state owned by the
>>> httpd process or the Tomcat process?
>>>
>>>> In case of the traffic directly reaching the Application PORT without
>>>> HTTPD (Load balancer) there is no CLOSE_WAIT connections created and
>>>> application can handle the load seamlessly.
>>>
>>> - -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Dear André,

Oops - yes, I confused both. FIN ;)

Guido

On 11.05.2017 13:37, André Warnier (tomcat) wrote:
> I believe that the explanation given below by Guido is incorrect and
> misleading, as it seems to confuse CLOSE_WAIT with TIME_WAIT.
> See: TCP/IP State Transition Diagram (RFC 793)
>
> CLOSE-WAIT represents waiting for a connection termination request from the
> local user.
>
> TIME-WAIT represents waiting for enough time to pass to be sure the remote
> TCP received the acknowledgment of its connection termination request.
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Adhavan,

On 5/11/17 9:30 AM, Adhavan Mathiyalagan wrote:
> The connections in the CLOSE_WAIT are owned by the Application
> /Tomcat process.

Okay. Can you please post your configuration on both httpd and Tomcat
sides? If it's not clear from your configuration, please tell us which
type of connector you are using (e.g. AJP/HTTP and BIO/NIO/APR).

- -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
On 11.05.2017 15:30, Adhavan Mathiyalagan wrote:
> Hi Chris,
>
> The connections in the CLOSE_WAIT are owned by the Application /Tomcat
> process.

Can you provide an example output of the "netstat" command that shows such
connections ? (not all, just some)
(copy and paste it right here)
->

> Regards,
> Adhavan.M
>
> On Thu, May 11, 2017 at 6:53 PM, Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>
>> Adhavan,
>>
>> On 5/10/17 12:32 PM, Adhavan Mathiyalagan wrote:
>>> Team,
>>>
>>> Tomcat version : 8.0.18
>>> Apache HTTPD version : 2.2
>>>
>>> There are a lot of CLOSE_WAIT connections being created at the
>>> Application (tomcat) when the traffic is routed through the Apache
>>> HTTPD load balancer to the Application running over tomcat container.
>>> This leads to slowness of the port where the Application is running
>>> and eventually the application is not accessible through that
>>> particular PORT.
>>
>> Please clarify: are the connections in the CLOSE_WAIT state owned by the
>> httpd process or the Tomcat process?
>>
>>> In case of the traffic directly reaching the Application PORT without
>>> HTTPD (Load balancer) there is no CLOSE_WAIT connections created and
>>> application can handle the load seamlessly.
>>
>> - -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Hi Chris,

The connections in the CLOSE_WAIT are owned by the Application /Tomcat
process.

Regards,
Adhavan.M

On Thu, May 11, 2017 at 6:53 PM, Christopher Schultz <
ch...@christopherschultz.net> wrote:
> Adhavan,
>
> On 5/10/17 12:32 PM, Adhavan Mathiyalagan wrote:
>> Team,
>>
>> Tomcat version : 8.0.18
>>
>> Apache HTTPD version : 2.2
>>
>> There are a lot of CLOSE_WAIT connections being created at the
>> Application (tomcat) when the traffic is routed through the Apache
>> HTTPD load balancer to the Application running over tomcat
>> container. This leads to slowness of the port where the
>> Application is running and eventually the application is not
>> accessible through that particular PORT.
>
> Please clarify: are the connections in the CLOSE_WAIT state owned by the
> httpd process or the Tomcat process?
>
>> In case of the traffic directly reaching the Application PORT
>> without HTTPD (Load balancer) there is no CLOSE_WAIT connections
>> created and application can handle the load seamlessly.
>
> - -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Adhavan,

On 5/10/17 12:32 PM, Adhavan Mathiyalagan wrote:
> Team,
>
> Tomcat version : 8.0.18
>
> Apache HTTPD version : 2.2
>
> There are a lot of CLOSE_WAIT connections being created at the
> Application (tomcat) when the traffic is routed through the Apache
> HTTPD load balancer to the Application running over tomcat
> container. This leads to slowness of the port where the
> Application is running and eventually the application is not
> accessible through that particular PORT.

Please clarify: are the connections in the CLOSE_WAIT state owned by the
httpd process or the Tomcat process?

> In case of the traffic directly reaching the Application PORT
> without HTTPD (Load balancer) there is no CLOSE_WAIT connections
> created and application can handle the load seamlessly.

- -chris
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
I believe that the explanation given below by Guido is incorrect and
misleading, as it seems to confuse CLOSE_WAIT with TIME_WAIT.
See: TCP/IP State Transition Diagram (RFC 793)

CLOSE-WAIT represents waiting for a connection termination request from the
local user.

TIME-WAIT represents waiting for enough time to pass to be sure the remote
TCP received the acknowledgment of its connection termination request.

Thus, CLOSE_WAIT is a /normal/ state of a TCP/IP connection. There is no
timeout for it that can be set by any TCP/IP parameter. Basically it means:
the remote client has closed this connection, and the local OS is waiting
for the local application to also close its side of the connection. And the
local OS is going to wait - for an indefinite amount of time - until that
happens (or until the process which still has this connection open exits).
And in this case, the process which has this connection open is the JVM
which runs Tomcat (which by definition never exits, until you terminate
Tomcat).

Many connections in the CLOSE_WAIT state mean, in most cases, that the
application running under Tomcat is not closing its sockets properly.
(This can happen in some "devious" ways, not easy to immediately diagnose.)

Try the following: when you notice a high number of connections in the
CLOSE_WAIT state, force the JVM which runs Tomcat to do a major garbage
collection. (I do this using jmxsh, but there are several other ways to do
this.) Then check how many CLOSE_WAIT connections are still there.

On 11.05.2017 11:03, Adhavan Mathiyalagan wrote:
> Thanks Guido !
>
> On Thu, May 11, 2017 at 12:02 PM, Jäkel, Guido wrote:
>> Dear Adhavan,
>>
>> I think this is quite normal, because the browser clients "in front"
>> will reuse connections (using keep-alive at TCP level) but an in-between
>> load balancer may not work or be configured this way and will use a new
>> connection for each request to the backend.
>>
>> Then, you'll see a lot of sockets in the TCP/IP closedown workflow
>> between the load balancer and the backend server. Please recall that in
>> TCP/IP the port, even for a "well closed" connection, will be held for
>> some time to handle late (duplicate) packets. Think about a duplicated,
>> delayed RST packet - this should not close the next connection to this
>> port.
>>
>> Because this situation is very unlikely or even impossible on a local
>> area network, you may adjust the TCP stack settings of your server to
>> use much lower protection times (in the magnitude of seconds) and also
>> adjust others. And on Linux, you may also expand the range of ports used
>> for connections.
>>
>> BTW: If you have a dedicated stateful packet inspecting firewall between
>> your LB and the server, you also have to take a look at this.
>>
>> Said that, one more cent about the protocol between the LB and the
>> Tomcat: I don't know about HTTP, but if you use AJP (with mod_jk) you
>> may configure it to keep and reuse connections to the Tomcat backend(s).
>>
>> Guido
>>
>>> -----Original Message-----
>>> From: Adhavan Mathiyalagan [mailto:adhav@gmail.com]
>>> Sent: Wednesday, May 10, 2017 6:32 PM
>>> To: Tomcat Users List
>>> Subject: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
>>>
>>> Team,
>>>
>>> Tomcat version : 8.0.18
>>>
>>> Apache HTTPD version : 2.2
>>>
>>> There are a lot of CLOSE_WAIT connections being created at the
>>> Application (tomcat) when the traffic is routed through the Apache
>>> HTTPD load balancer to the Application running over tomcat container.
>>> This leads to slowness of the port where the Application is running
>>> and eventually the application is not accessible through that
>>> particular PORT.
>>>
>>> In case of the traffic directly reaching the Application PORT without
>>> HTTPD (Load balancer) there is no CLOSE_WAIT connections created and
>>> application can handle the load seamlessly.
>>>
>>> Thanks in advance for the support.
>>>
>>> Regards,
>>> Adhavan.M
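André's point above - that a CLOSE_WAIT socket belongs to the local process and stays that way until that process closes its side - can be demonstrated with a small, self-contained Java sketch (class and variable names are illustrative; it shows only the application-level side of the close, not the kernel's TCP state):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseWaitDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        try (ServerSocket server = new ServerSocket(0)) {   // ephemeral port
            Socket client = new Socket("localhost", server.getLocalPort());
            Socket accepted = server.accept();

            client.close();     // peer sends FIN; the kernel moves 'accepted' to CLOSE_WAIT
            Thread.sleep(200);  // give the FIN time to arrive

            // The peer's close does NOT close our side: the application must do it.
            System.out.println("closed after peer FIN: " + accepted.isClosed());

            accepted.close();   // local close -> LAST_ACK -> CLOSED
            System.out.println("closed after local close: " + accepted.isClosed());
        }
    }
}
```

While the sketch sleeps, `netstat`/`ss` on the same machine would show the accepted socket in CLOSE_WAIT; it only leaves that state at the explicit `accepted.close()`.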
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Thanks Guido !

On Thu, May 11, 2017 at 12:02 PM, Jäkel, Guido wrote:
> Dear Adhavan,
>
> I think this is quite normal, because the browser clients "in front" will
> reuse connections (using keep-alive at TCP level) but an in-between load
> balancer may not work or be configured this way and will use a new
> connection for each request to the backend.
>
> Then, you'll see a lot of sockets in the TCP/IP closedown workflow between
> the load balancer and the backend server. Please recall that in TCP/IP the
> port, even for a "well closed" connection, will be held for some time to
> handle late (duplicate) packets. Think about a duplicated, delayed RST
> packet - this should not close the next connection to this port.
>
> Because this situation is very unlikely or even impossible on a local area
> network, you may adjust the TCP stack settings of your server to use much
> lower protection times (in the magnitude of seconds) and also adjust
> others. And on Linux, you may also expand the range of ports used for
> connections.
>
> BTW: If you have a dedicated stateful packet inspecting firewall between
> your LB and the server, you also have to take a look at this.
>
> Said that, one more cent about the protocol between the LB and the Tomcat:
> I don't know about HTTP, but if you use AJP (with mod_jk) you may configure
> it to keep and reuse connections to the Tomcat backend(s).
>
> Guido
>
>> -----Original Message-----
>> From: Adhavan Mathiyalagan [mailto:adhav@gmail.com]
>> Sent: Wednesday, May 10, 2017 6:32 PM
>> To: Tomcat Users List
>> Subject: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
>>
>> Team,
>>
>> Tomcat version : 8.0.18
>>
>> Apache HTTPD version : 2.2
>>
>> There are a lot of CLOSE_WAIT connections being created at the
>> Application (tomcat) when the traffic is routed through the Apache HTTPD
>> load balancer to the Application running over tomcat container. This
>> leads to slowness of the port where the Application is running and
>> eventually the application is not accessible through that particular
>> PORT.
>>
>> In case of the traffic directly reaching the Application PORT without
>> HTTPD (Load balancer) there is no CLOSE_WAIT connections created and
>> application can handle the load seamlessly.
>>
>> Thanks in advance for the support.
>>
>> Regards,
>> Adhavan.M
RE: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
Dear Adhavan,

I think this is quite normal, because the browser clients "in front" will
reuse connections (using keep-alive at TCP level) but an in-between load
balancer may not work or be configured this way and will use a new
connection for each request to the backend.

Then, you'll see a lot of sockets in the TCP/IP closedown workflow between
the load balancer and the backend server. Please recall that in TCP/IP the
port, even for a "well closed" connection, will be held for some time to
handle late (duplicate) packets. Think about a duplicated, delayed RST
packet - this should not close the next connection to this port.

Because this situation is very unlikely or even impossible on a local area
network, you may adjust the TCP stack settings of your server to use much
lower protection times (in the magnitude of seconds) and also adjust
others. And on Linux, you may also expand the range of ports used for
connections.

BTW: If you have a dedicated stateful packet inspecting firewall between
your LB and the server, you also have to take a look at this.

Said that, one more cent about the protocol between the LB and the Tomcat:
I don't know about HTTP, but if you use AJP (with mod_jk) you may configure
it to keep and reuse connections to the Tomcat backend(s).

Guido

>-----Original Message-----
>From: Adhavan Mathiyalagan [mailto:adhav@gmail.com]
>Sent: Wednesday, May 10, 2017 6:32 PM
>To: Tomcat Users List
>Subject: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
>
>Team,
>
>Tomcat version : 8.0.18
>
>Apache HTTPD version : 2.2
>
>There are a lot of CLOSE_WAIT connections being created at the
>Application (tomcat) when the traffic is routed through the Apache HTTPD
>load balancer to the Application running over tomcat container. This leads
>to slowness of the port where the Application is running and eventually the
>application is not accessible through that particular PORT.
>
>In case of the traffic directly reaching the Application PORT without HTTPD
>(Load balancer) there is no CLOSE_WAIT connections created and application
>can handle the load seamlessly.
>
>Thanks in advance for the support.
>
>Regards,
>Adhavan.M
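For the AJP/mod_jk option Guido mentions, connection reuse to the Tomcat backend is configured per worker in workers.properties. A minimal sketch, with illustrative worker name, host, port and timeout values (the real ones depend on your deployment):

```properties
# workers.properties (illustrative values)
worker.list=tomcat1
worker.tomcat1.type=ajp13
worker.tomcat1.host=backend-host
worker.tomcat1.port=8009
# keep idle backend connections alive instead of opening one per request
worker.tomcat1.socket_keepalive=true
# drop pooled connections idle longer than this many seconds;
# pair it with a matching connectionTimeout on the Tomcat AJP connector
worker.tomcat1.connection_pool_timeout=600
```

With pooled, kept-alive AJP connections the LB no longer opens and closes a backend socket per request, which removes most of the close/teardown churn between the two tiers.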
Re: CLOSE_WAIT between Application (Tomcat) and Apache HTTPD
On 10/05/17 17:32, Adhavan Mathiyalagan wrote:
> Team,
>
> Tomcat version : 8.0.18

That is over two years old. Have you considered updating?

> Apache HTTPD version : 2.2
>
> There are a lot of CLOSE_WAIT connections being created at the
> Application (tomcat) when the traffic is routed through the Apache HTTPD
> load balancer to the Application running over tomcat container. This
> leads to slowness of the port where the Application is running and
> eventually the application is not accessible through that particular
> PORT.
>
> In case of the traffic directly reaching the Application PORT without
> HTTPD (Load balancer) there is no CLOSE_WAIT connections created and
> application can handle the load seamlessly.
>
> Thanks in advance for the support.

Relevant configuration settings please.

Mark
Re: Close_wait state
On 18.02.2016 16:50, Elias, Michael wrote:
> Hi - We are running tomcat version 7.0.50. Starting 2 days ago our
> application stopped responding to requests. Our investigation showed us
> that we are not closing connections. We see that after 300 TCP sessions
> in CLOSE_WAIT state for the Tomcat PID, our app stops responding.
> Restarting the app clears the state.
>
> We took tcpdumps between our web layer and our tomcat layer. What we see
> in a successful connection is: after the response, Tomcat sends a FIN,
> web ACKs, then web sends a FIN and Tomcat ACKs... connection closes.
>
> In a bad connection, Tomcat does not send its FIN after the response;
> after 3 minutes, the web sends a FIN and Tomcat ACKs. The connection goes
> into CLOSE_WAIT and stays in that state until restart of Tomcat.
>
> Any help would be greatly appreciated.

I have a question, and a story to this:

Question: what happens to your connections in CLOSE_WAIT, if you force
Tomcat (or rather, its JVM) to do a GC (garbage collection)?
(There are probably different ways to do that, but I know only one and it
is lengthy to set up. Maybe someone has a quick suggestion?)

Story: One case in the past in which I had a similar issue was with a
webapp which:
- created an object which itself created a TCP connection to some external
  process
- used that object (its methods) to access that connection
- and when the time came to close this connection, it just "forgot" the
  object, and left it to the JVM to clean up when it destroyed the object

And the JVM ended up with hundreds of connections in the CLOSE_WAIT state,
up to a point (under Linux) where the entire TCP stack became unresponsive.

My interpretation of what happened then is: because in Java garbage
collection is asynchronous with the rest and only happens when needed, this
unreferenced object could stay on the heap for quite a while. (As a matter
of fact, the more comfortable the heap, the longer it stays.)

And because the JVM, to create a socket, ultimately uses some native code
and some underlying native socket structure, this underlying OS-level
socket also remained, in its CLOSE_WAIT state, long after the original Java
object and the wrapped connection had ceased to be used by the webapp.

A GC cleared that, because it finally eliminates and destroys unreferenced
objects, and their linked native structures at the same time, which has the
effect of finally closing the connection properly. So a GC magically
deleted these hundreds of CLOSE_WAIT connections.

Maybe your case is similar?

The proper solution of course is to make sure that the webapp properly
closes the underlying connection before it drops the object that
encapsulates it. An improper and temporary (but in the meantime working)
solution for me - because we had no access to the bad code - was to write a
script which ran regularly and forced the Tomcat JVM to do a GC.
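The proper solution described above - closing the connection explicitly instead of leaving it to the garbage collector - is exactly what try-with-resources gives you. A minimal sketch; the `BackendConnection` wrapper and all names are hypothetical, not taken from the webapp in the story, and the local `ServerSocket` merely stands in for the external process:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

/** Hypothetical wrapper around a TCP connection to an external process. */
class BackendConnection implements AutoCloseable {
    private final Socket socket;

    BackendConnection(String host, int port) throws IOException {
        this.socket = new Socket(host, port);
    }

    boolean isOpen() {
        return !socket.isClosed();
    }

    @Override
    public void close() throws IOException {
        socket.close();  // deterministic close: never rely on GC/finalization for this
    }
}

public class TryWithResourcesDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {  // stands in for the external process
            BackendConnection ref;
            try (BackendConnection conn =
                     new BackendConnection("localhost", server.getLocalPort())) {
                ref = conn;
                System.out.println("open inside try: " + conn.isOpen());
            }  // close() runs here, even on exceptions -> no lingering CLOSE_WAIT
            System.out.println("open after try: " + ref.isOpen());
        }
    }
}
```

Because `close()` runs when the block exits on any path, the native file descriptor is released immediately rather than whenever a future GC happens to finalize the forgotten object.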
RE: close_wait in Tomcat 7.0.63
Hi Team,

Please let me know if any solution is present. This issue is hampering the
application up time. Please let me know if any other details are required.

Best Regards,
Prashant Kaujalgi

-----Original Message-----
From: Prashant Kaujalgi [mailto:prashant.kauja...@e-nxt.com]
Sent: Thursday, January 07, 2016 12:28 PM
To: 'Tomcat Users List'
Subject: RE: close_wait in Tomcat 7.0.63

Hi Konstantin,

Thanks for the prompt reply. Server.xml contains the following resource.

We have an Apache web server (2.2) which sends requests to Tomcat, and when
Tomcat communicates with the database it creates CLOSE_WAIT connections.

Best Regards,
Prashant Kaujalgi

-----Original Message-----
From: Konstantin Kolinko [mailto:knst.koli...@gmail.com]
Sent: Thursday, January 07, 2016 12:15 PM
To: Tomcat Users List
Subject: Re: close_wait in Tomcat 7.0.63

2016-01-07 9:36 GMT+03:00 Prashant Kaujalgi <prashant.kauja...@e-nxt.com>:
> Dear Team,
>
> First of all, I want to apologize if there is a well-known fix to my
> problem.
>
> Environment:
>
> OS: Windows Server 2008
> Tomcat application server: Apache Tomcat 7.0.63
> Web server: Apache 2.2
> JRE build: jdk1.6.0_23
> Connection pooling: Tomcat JDBC connection pooling (tomcat-jdbc.jar)
>
> We have a web-based application hosted on Tomcat 7. We are facing a
> CLOSE_WAIT issue between Tomcat and the database server. After a certain
> time period the CLOSE_WAIT count increases and reaches the threshold
> (maxActive="500"), after which Tomcat is unable to create a new thread
> and we have to restart the service.
>
> Our observation is that Oracle closes the connection and Tomcat is not
> able to close the same connection, hence resulting in CLOSE_WAIT.
>
> Below is a sample netstat from when CLOSE_WAIT was present. The
> application server is 192.168.15.109, with the 219 database server
> listening on port 1527.
>
> TCP 192.168.xx.xx:51588 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51621 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51622 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51623 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51632 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51647 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51648 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51658 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51659 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51691 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51699 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51705 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51706 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51722 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51724 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51725 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51744 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51805 172.16.xx.xx:1527 CLOSE_WAIT 3812
> TCP 192.168.xx.xx:51807 172.16.xx.xx:1527 CLOSE_WAIT 3812

What is your actual configuration?

Also, https://bz.apache.org/bugzilla/show_bug.cgi?id=58610#c2

Disclaimer & Privilege Notice: This e-Mail may contain proprietary,
privileged and confidential information and is sent for the intended
recipient(s) only. If, by an addressing or transmission error, this mail
has been misdirected to you, you are requested to notify us immediately by
return email message and delete this mail and its attachments. You are also
hereby notified that any use, any form of reproduction, dissemination,
copying, disclosure, modification, distribution and/or publication of this
e-mail message, contents or its attachment(s) other than by its intended
recipient(s) is strictly prohibited. Any opinions expressed in this email
are those of the individual and may not necessarily represent those of
e-Nxt Financials Ltd. Before opening attachment(s), please scan for
viruses.
Re: close_wait in Tomcat 7.0.63
2016-01-07 9:36 GMT+03:00 Prashant Kaujalgi:
> Dear Team,
>
> First of all, I want to apologize if there is a well-known fix to my problem.
>
> Environment:
> OS: Windows Server 2008
> Tomcat application server: Apache Tomcat 7.0.63
> Web server: Apache 2.2
> JRE build: jdk1.6.0_23
> Connection pooling: Tomcat JDBC connection pooling (tomcat-jdbc.jar)
>
> We have a web-based application hosted on Tomcat 7. We are facing a CLOSE_WAIT issue between Tomcat and the database server. After a certain time period the CLOSE_WAIT count increases and reaches the threshold (maxActive="500"), after which Tomcat was unable to create a new thread and we have to restart the service.
>
> Our observation is that Oracle closes the connection and Tomcat is not able to close that same connection, hence the CLOSE_WAIT.
>
> Below is a sample netstat taken while the CLOSE_WAIT sockets were present. The application server is 192.168.15.109, with the .219 database server listening on port 1527.
>
> TCP  192.168.xx.xx:51588  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51621  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51622  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51623  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51632  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51647  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51648  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51658  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51659  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51691  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51699  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51705  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51706  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51722  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51724  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51725  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51744  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51805  172.16.xx.xx:1527  CLOSE_WAIT  3812
> TCP  192.168.xx.xx:51807  172.16.xx.xx:1527  CLOSE_WAIT  3812

What is your actual configuration?

Also, https://bz.apache.org/bugzilla/show_bug.cgi?id=58610#c2

- To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org For additional commands, e-mail: users-h...@tomcat.apache.org
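A common mitigation when the database silently drops pooled connections is to let tomcat-jdbc validate connections before handing them out and evict dead ones, which closes the half-open sockets instead of leaving them in CLOSE_WAIT. A hedged sketch of the relevant Resource attributes (the attribute names come from the tomcat-jdbc documentation, but the JNDI name, query, and timeout values here are illustrative and not taken from this thread):

```xml
<!-- context.xml / server.xml resource; values illustrative -->
<Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          maxActive="500"
          testOnBorrow="true"
          validationQuery="SELECT 1 FROM DUAL"
          validationInterval="30000"
          testWhileIdle="true"
          timeBetweenEvictionRunsMillis="30000"
          minEvictableIdleTimeMillis="60000"
          removeAbandoned="true"
          removeAbandonedTimeout="120"/>
```

With testOnBorrow plus a validationQuery, a connection the database has already closed fails validation and is discarded (and closed) rather than handed to the application.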
RE: close_wait in Tomcat 7.0.63
Hi Konstantin,

Thanks for the prompt reply. server.xml contains the following resource. We have an Apache web server (2.2) which sends requests to Tomcat, and when Tomcat communicates with the database it creates the CLOSE_WAIT sockets.

Best Regards,
Prashant Kaujalgi

-----Original Message-----
From: Konstantin Kolinko [mailto:knst.koli...@gmail.com]
Sent: Thursday, January 07, 2016 12:15 PM
To: Tomcat Users List
Subject: Re: close_wait in Tomcat 7.0.63

2016-01-07 9:36 GMT+03:00 Prashant Kaujalgi <prashant.kauja...@e-nxt.com>:
> [...]

What is your actual configuration?

Also, https://bz.apache.org/bugzilla/show_bug.cgi?id=58610#c2

Disclaimer & Privilege Notice: This e-Mail may contain proprietary, privileged and confidential information and is sent for the intended recipient(s) only. If, by an addressing or transmission error, this mail has been misdirected to you, you are requested to notify us immediately by return email message and delete this mail and its attachments. You are also hereby notified that any use, any form of reproduction, dissemination, copying, disclosure, modification, distribution and/or publication of this e-mail message, contents or its attachment(s) other than by its intended recipient(s) is strictly prohibited. Any opinions expressed in this email are those of the individual and may not necessarily represent those of e-Nxt Financials Ltd. Before opening attachment(s), please scan for viruses.
Re: CLOSE_WAIT Connection Issue
Hi,

1] We have one project in webapps which will hold the connection for 45 seconds.
2] I executed 5000 curl requests against the above project.
3] Then, from the client side from which I executed curl, I killed all the curl processes. So, on the server, all ESTABLISHED connections become CLOSE_WAIT in netstat:

tcp    0   0 10.168.43.69:8080  115.113.7.178:1197   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:1965   CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:1709   CLOSE_WAIT  10761/java
tcp    0   0 10.168.43.69:8080  115.113.7.178:64429  CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:64941  CLOSE_WAIT  10761/java
tcp    0   0 10.168.43.69:8080  115.113.7.178:64685  CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:4268   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:4780   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:5036   CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:2220   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:2476   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:2732   CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:2988   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:3244   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:3500   CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:3756   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:4012   CLOSE_WAIT  10761/java
tcp    0   0 10.168.43.69:8080  115.113.7.178:1196   CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:1452   CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:1708   CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:1964   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:64428  CLOSE_WAIT  10761/java
tcp    0   0 10.168.43.69:8080  115.113.7.178:64684  CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:64940  CLOSE_WAIT  10761/java
tcp  126   0 10.168.43.69:8080  10.168.86.11:55709   CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:5039   CLOSE_WAIT  10761/java
tcp    1   0 10.168.43.69:8080  115.113.7.178:4783   CLOSE_WAIT  10761/java
tcp  294   0 10.168.43.69:8080  115.113.7.178:4271   CLOSE_WAIT  10761/java

On Fri, Nov 18, 2011 at 1:22 PM, Pid * <p...@pidster.com> wrote:
> On 18 Nov 2011, at 07:34, Chandrakant Solanki <solanki.chandrak...@gmail.com> wrote:
>> Hi All,
>>
>> I am using apache-tomcat 6.0.26 and below is my server.xml:
>>
>> [...]
>>
>> I executed around 5000 curl requests and after that I killed all my curl processes. So all ESTABLISHED connections became CLOSE_WAIT.
>
> You have described some TCP states. Can you state what the problem is please?
>
> p
>
>> Is any configuration missing, or am I doing something wrong? Please help me out.
>>
>> --
>> Regards,
>> Chandrakant Solanki
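The effect of step 3] can be measured directly on the server. A small sketch (the pid 10761 is taken from the netstat output above; substitute your own, and note that netstat's -p flag generally needs root to resolve other users' processes):

```shell
# Count CLOSE_WAIT sockets attributed to the Tomcat JVM (pid 10761 above).
# Prints 0 when there are none, or when netstat is unavailable.
netstat -pan 2>/dev/null | awk '/CLOSE_WAIT/ && /10761\/java/ {n++} END {print n+0}'
```

Watching this number while killing the curl processes shows the ESTABLISHED-to-CLOSE_WAIT transition described above.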
Re: CLOSE_WAIT Connection Issue
On 18 Nov 2011, at 08:49, Chandrakant Solanki <solanki.chandrak...@gmail.com> wrote:
> Hi,
>
> 1] We have one project in webapps which will hold the connection for 45 seconds.
> 2] I executed 5000 curl requests against the above project.
> 3] Then, from the client side from which I executed curl, I killed all the curl processes. So, on the server, all ESTABLISHED connections become CLOSE_WAIT in netstat.

Another clear description of what you're seeing, thanks - but what is the problem? What do you expect or want to happen?

p

> [...]
Re: CLOSE_WAIT Connection Issue
> 3] Then, from the client side from which I executed curl, I killed all the curl processes. So, on the server, all ESTABLISHED connections become CLOSE_WAIT in netstat.

I'd imagine kill -KILL or kill -TERM is preventing proper socket teardown. The server is expecting ACKs from the clients that are apparently not being sent. That's certainly expected behavior for the KILL signal. I might expect curl to handle the TERM signal gracefully by tearing down the connections before exiting, but I've never tried.

M
Re: CLOSE_WAIT Connection Issue
Chandrakant,

This is a bit OT from your original question:

On 11/18/11 2:34 AM, Chandrakant Solanki wrote:
> <Connector port="8080"
>            protocol="org.apache.coyote.http11.Http11NioProtocol"
>            maxThreads="5000" minSpareThreads="100" maxSpareThreads="300"
>
> <Connector port="8443"
>            protocol="org.apache.coyote.http11.Http11Protocol"
>            maxThreads="1" minSpareThreads="100" maxSpareThreads="300"

minSpareThreads and maxSpareThreads are not documented attributes in http://tomcat.apache.org/tomcat-6.0-doc/config/http.html (except for one curious mention of them under the description for useExecutor). Why are you using them?

I think you probably want to be using an Executor (http://tomcat.apache.org/tomcat-6.0-doc/config/executor.html) which /does/ support similar configuration (and will share threads between these two connectors, which will probably be nice).

-chris
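In server.xml, the Executor approach described above would look roughly like this. The element and attribute names come from the Tomcat 6 configuration reference; the pool name and sizes here are illustrative, not taken from this thread:

```xml
<!-- Shared thread pool; name and sizes are illustrative -->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="500" minSpareThreads="25" maxIdleTime="60000"/>

<!-- Both connectors reference the executor instead of declaring their
     own (undocumented) spare-thread settings -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           executor="tomcatThreadPool" connectionTimeout="20000"
           redirectPort="8443"/>
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
           executor="tomcatThreadPool" SSLEnabled="true"
           scheme="https" secure="true"/>
```

The executor attribute on each Connector makes both draw from the one shared pool.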
Re: CLOSE_WAIT Connection Issue
On 18 Nov 2011, at 07:34, Chandrakant Solanki <solanki.chandrak...@gmail.com> wrote:
> Hi All,
>
> I am using apache-tomcat 6.0.26 and below is my server.xml:
>
> <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
>            redirectPort="8443" maxKeepAliveRequests="1" maxThreads="5000"
>            minSpareThreads="100" maxSpareThreads="300" processCache="500"
>            acceptorThreadCount="1" enableLookups="false"
>            disableUploadTimeout="false" connectionUploadTimeout="24"
>            compression="on" compressionMinSize="2048"
>            noCompressionUserAgents="gozilla, traviata"
>            compressableMimeType="text/html,text/xml" acceptCount="50"
>            connectionTimeout="6" />
>
> <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
>            maxThreads="1" minSpareThreads="100" maxSpareThreads="300"
>            processCache="500" acceptorThreadCount="1" enableLookups="false"
>            disableUploadTimeout="false" connectionUploadTimeout="24"
>            compression="on" connectionTimeout="6" compressionMinSize="2048"
>            noCompressionUserAgents="gozilla, traviata"
>            compressableMimeType="text/html,text/xml" acceptCount="50"
>            scheme="https" secure="true" address="X.X.X.X" allowTrace="false"
>            SSLEnabled="true" SSLCertificateFile=
>            SSLCertificateKeyFile=... clientAuth="false"
>            sslProtocol="TLSv1" maxKeepAliveRequests="1"/>
>
> I executed around 5000 curl requests and after that I killed all my curl processes. So all ESTABLISHED connections became CLOSE_WAIT.

You have described some TCP states. Can you state what the problem is please?

p

> Is any configuration missing, or am I doing something wrong? Please help me out.
>
> --
> Regards,
> Chandrakant Solanki
Re: CLOSE_WAIT
On 3 February 2011 11:35, Pid <p...@pidster.com> wrote:
> What factor caused so many people to hijack this thread?

Using a mail client such as Gmail, which performs its own threading and doesn't respect or even show the thread ID.

(And, Andre, you're right, I was confusing the two states - my bad)

- Peter
Re: CLOSE_WAIT
Peter Crowther schrieb am 03.02.2011 um 11:47 (+0000):
> On 3 February 2011 11:35, Pid <p...@pidster.com> wrote:
>> What factor caused so many people to hijack this thread?
>
> Using a mail client such as Gmail, which performs its own threading and doesn't respect or even show the thread ID.

Or something Produced By Microsoft Exchange V6.5 (thread hijacker #1), or Yahoo Mail (TH2), or iPhone (TH3). It's not simply to blame on technology, I guess. :-)

--
Michael Ludwig
Re: CLOSE_WAIT
On 2 February 2011 10:24, Bw57899 <bw57...@gmail.com> wrote:
> I installed an application in Apache Tomcat (6.0.29) in the dev environment on Solaris 10 with no issue. But after the move to production, there are always about 50 ~ 100 CLOSE_WAIT connections on port 1521. The application needs to connect to an Oracle database which is on another server. So what can I do to check the problem?

CLOSE_WAIT is normal behaviour - after a TCP socket is closed, it's tombstoned for a period so that the TCP stack knows what to do with incoming datagrams that might be late. Why do you think this is a problem? Except that you might be opening and closing a lot of connections to Oracle?

- Peter
Re: CLOSE_WAIT
Peter Crowther wrote:
> On 2 February 2011 10:24, Bw57899 <bw57...@gmail.com> wrote:
>> [...]
>
> CLOSE_WAIT is normal behaviour - after a TCP socket is closed, it's tombstoned for a period so that the TCP stack knows what to do with incoming datagrams that might be late.

Peter, I do not think that this is true, and I believe that you are confusing this with the TIME_WAIT state. See for example http://support.microsoft.com/kb/137984 :

"CLOSE_WAIT: A socket application has been terminated, but Netstat reports the socket in a CLOSE_WAIT state. This could indicate that the client properly closed the connection (FIN has been sent), but the server still has its socket open. This could be the result of one instance (among all threads or processes) of the socket not being closed."

I had/have a case like that with a third-party Tomcat application. It typically goes like this:

The webapp creates an object C which, among other things, makes a TCP connection to another server. The webapp then uses this object's methods to send/receive data from the other server. At the end of this exchange, the webapp sends a command to the external server to tell it "I'm done". The external server then closes its end of the connection.

Now the webapp, by way of "closing" the connection, sets the object C = null. For the webapp, this means that the connection object C is now effectively closed. But in fact, the object C still exists somewhere on the heap, and it still holds on to its underlying (OS-level) socket, which has never been closed from the Tomcat server side. The underlying TCP connection is in the CLOSE_WAIT state, because the socket has never been closed on the Tomcat side, and it remains dangling.

It only disappears when the Tomcat JVM does a GC and the object C is really discarded. That finally closes the underlying TCP socket, and the state progresses to LAST_ACK and finally CLOSED and gone.

An easy way to verify whether this is the case for the OP is to force Tomcat to do a GC and see if these CLOSE_WAIT connections then disappear. If it is the case, then I would advise the OP to check his webapp, to see whether it does the same kind of thing as described above.

One problem that I have seen happen with this is that as the number of CLOSE_WAIT sockets increases (to a few hundred), the whole server becomes unable to handle further TCP connections of any kind, being in practice paralysed. I suppose that there must exist some inherent limit to the maximum number of sockets which a system (or a process) can have open at any one time.
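The dangling-object scenario described above can be reproduced in a few lines. This is a hedged, self-contained sketch (the "object C" and the external server are stand-ins, not code from the application in question): the remote end closes its side, the local reference is dropped without close(), and the local socket stays open - exactly the CLOSE_WAIT condition - until a GC finalizes it.

```java
import java.net.ServerSocket;
import java.net.Socket;

public class CloseWaitLeakDemo {
    static Socket objectC; // stands in for the webapp's "object C"

    public static void main(String[] args) throws Exception {
        try (ServerSocket external = new ServerSocket(0)) {   // the "external server"
            objectC = new Socket("localhost", external.getLocalPort());
            Socket serverSide = external.accept();
            serverSide.close(); // remote end sends its FIN: our socket enters CLOSE_WAIT

            Socket s = objectC;
            objectC = null;     // the webapp "closes" by dropping the reference...
            // ...but the OS-level socket was never closed, so it lingers in
            // CLOSE_WAIT until the garbage collector finalizes it.
            System.out.println("locally closed: " + s.isClosed()); // prints "locally closed: false"
            s.close();          // cleanup so the demo itself does not leak
        }
    }
}
```

Running netstat while the program sleeps just before the final close() would show the 127.0.0.1 socket in CLOSE_WAIT, mirroring the lsof/netstat output quoted throughout this thread.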
RE: CLOSE_WAIT and what to do about it
From: André Warnier [mailto:a...@ice-sa.com]

>     public void close() throws SomeException {
>         putEndRequest();
>         flush();
>         socket = null;
>     }
>
> flush() being another function which reads the socket until there's nothing left to read, and throws away the result. socket is a property of the object created by this class, obtained somewhere else from a java.net.Socket object. Looking at that code above, it is obvious that socket is open until it is set to null, without previously doing a socket.close(). I don't know Java well enough to know if this alone could cause that socket to linger until the GC, but I rather suspect so.

Nice piece of detective work, André! Yes, that code's broken - the socket is no longer referenced but was never closed, so it will stay open until a GC tidies it up. $deity only knows what the original developer was thinking when they wrote that.

- Peter
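For comparison, a corrected wrapper would close the socket before dropping the reference. A minimal runnable sketch, assuming nothing about the real class beyond what is quoted (the putEndRequest()/flush() steps are omitted, and the class name is hypothetical):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical wrapper illustrating the fix: close() must call
// socket.close() before the reference is dropped.
class SocketWrapper {
    private Socket socket;

    SocketWrapper(Socket socket) { this.socket = socket; }

    public void close() throws IOException {
        // putEndRequest() and flush() from the quoted snippet would run here.
        if (socket != null) {
            try {
                socket.close();  // releases the OS-level socket immediately
            } finally {
                socket = null;   // only now is it safe to drop the reference
            }
        }
    }
}

public class FixedCloseDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Socket s = new Socket("localhost", server.getLocalPort());
            new SocketWrapper(s).close();
            System.out.println("closed: " + s.isClosed()); // prints "closed: true"
        }
    }
}
```

With the close() in place the TCP teardown completes immediately (LAST_ACK, then CLOSED) instead of waiting for a GC.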
RE: CLOSE_WAIT and what to do about it
From: André Warnier [mailto:a...@ice-sa.com]
Subject: Re: CLOSE_WAIT and what to do about it

> If these sockets disappear during a GC, then it must mean that they are still being referenced by some abandoned objects sitting on the heap, which have not yet been reclaimed by the GC. Which probably means that the objects in question have gone out of scope before the socket they used was properly close()'d.

Your analysis looks reasonable to me. There are some analysis tools that will examine a live heap (or a dump thereof) and find the reachable and unreachable objects; jhat is a free one that comes with JDK 6:

http://java.sun.com/javase/6/webnotes/trouble/TSG-VM/html/tooldescr.html#gblfj

- Chuck

THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY MATERIAL and is thus for use only by the intended recipient. If you received this in error, please contact the sender and delete the e-mail and its attachments from all computers.
Re: CLOSE_WAIT and what to do about it
Caldarale, Charles R wrote:
>> From: André Warnier [mailto:a...@ice-sa.com]
>> Subject: Re: CLOSE_WAIT and what to do about it
>>
>> Relatedly, does there exist any way to force a given JVM process to do a full GC interactively, but from a Linux command-line?
>
> Found a command line tool that will do what you want: http://code.google.com/p/jmxsh/
>
> I've used it to trigger a GC in Tomcat via the following steps.
>
> 1) Start Tomcat with the following options:
>    -Dcom.sun.management.jmxremote.port=<port>
>    -Dcom.sun.management.jmxremote.authenticate=false
>    -Dcom.sun.management.jmxremote.ssl=false
>    (You can, of course, set the authentication and SSL options as needed.)
>
> 2) Start jmxsh from the directory its jar is in with this: java -jar jmxsh*.jar
>
> 3) Enter the following commands (but not the bracketed bits):
>    jmx_connect -h localhost -p <port>
>    [blank line to enter browse mode]
>    5  [selects java.lang]
>    1  [selects the Memory mbean]
>    5  [performs a GC]
>
> The doc for jmxsh indicates the above steps should be scriptable, but I haven't tried that. It is likely that you could use jmx_connect with a different kind of service and avoid opening up an RMI port; if I figure that out, I'll let you know.

Hi.

Thanks a million for providing the above info. That jmxsh program is really useful. I don't really know what I'm doing here, but I can at least more or less figure out what happens.

To recall, my original issue is that I have some Java applications (among which a Tomcat webapp and a couple of stand-alone Java daemon-like programs) which apparently leave an ever-increasing number of sockets lingering in a CLOSE_WAIT state. And I was wondering if it was possible, as one test, to force the JVM running these applications to perform a GC, right now, from the outside. Well, it is. Following is a trace of a session with jmxsh, with one of these applications.

Initial socket situation:

r...@arthur:/home/star/xml# netstat -pan | grep CLOSE
tcp6   0  0 :::127.0.0.1:48267  :::127.0.0.1:11002  CLOSE_WAIT  7618/java
tcp6  12  0 :::127.0.0.1:36936  :::127.0.0.1:11002  CLOSE_WAIT  7816/java
tcp6  12  0 :::127.0.0.1:50322  :::127.0.0.1:11002  CLOSE_WAIT  7816/java

r...@arthur:/home/star/xml# ps -ef | grep 7618
root  7618  1  1 14:32 pts/3  00:00:15  ./java -server -Dcom.sun.management.jmxremote.port=11201 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Xms64M -Xmx64M -Dpgm=STARWeb -jar /home//web4/java/xyz.jar -c /home/star/web4/config -p 11101

The above is the process which I am going to stress, in the sense of communicating with it, which has the result of having it open a TCP connection to another server listening on port 11002, then closing this socket (in principle), and this multiple times. (As you see, the program was started with the jmxremote options allowing later communication with jmxsh.)

Now some interactions with the application pid=7618 ...

Situation later on:

r...@arthur:/home/star/xml# netstat -pan | grep CLOSE
tcp6   0  0 :::127.0.0.1:55798  :::127.0.0.1:11002  CLOSE_WAIT  7618/java
tcp6   0  0 :::127.0.0.1:57029  :::127.0.0.1:11002  CLOSE_WAIT  7618/java
tcp6   0  0 :::127.0.0.1:48267  :::127.0.0.1:11002  CLOSE_WAIT  7618/java
tcp6   0  0 :::127.0.0.1:56781  :::127.0.0.1:11002  CLOSE_WAIT  7618/java
tcp6  12  0 :::127.0.0.1:36936  :::127.0.0.1:11002  CLOSE_WAIT  7816/java
tcp6  12  0 :::127.0.0.1:58341  :::127.0.0.1:11002  CLOSE_WAIT  7816/java
tcp6   0  0 :::127.0.0.1:32972  :::127.0.0.1:11002  CLOSE_WAIT  7618/java
tcp6  12  0 :::127.0.0.1:50322  :::127.0.0.1:11002  CLOSE_WAIT  7816/java

So this application indeed left a number of sockets in the CLOSE_WAIT state. Now triggering a GC with jmxsh:

a...@arthur:~$ java -jar bin/jmxsh-R4.jar
jmxsh v1.0, Tue Jan 22 17:23:12 GMT+01:00 2008
Type 'help' for help. Give the option '-?' to any command for usage help.
Starting up in shell mode.
% jmx_connect -h localhost -p 11201
Connected to service:jmx:rmi:///jndi/rmi://localhost:11201/jmxrmi.
%
Entering browse mode.

Available Domains:
 1. java.util.logging
 2. JMImplementation
 3. java.lang

SERVER: service:jmx:rmi:///jndi/rmi://localhost:11201/jmxrmi
Select a domain: 3

Available MBeans:
 1. java.lang:type=Compilation
 2. java.lang:type=MemoryManager,name=CodeCacheManager
 3. java.lang:type=GarbageCollector,name=Copy
 4. java.lang:type=MemoryPool,name=Eden Space
 5. java.lang:type=Runtime
 6. java.lang:type=ClassLoading
 7. java.lang:type=MemoryPool,name=Survivor Space
 8. java.lang:type
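The same Memory MBean operation that the jmxsh session above triggers interactively can also be invoked from plain Java through the standard JMX API. A sketch against the local platform MBeanServer so that it runs as-is; for a remote JVM started with the jmxremote options, you would instead obtain the MBeanServerConnection from JMXConnectorFactory.connect() using the service:jmx:rmi URL shown in the trace:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ForceGc {
    public static void main(String[] args) throws Exception {
        // Local platform MBeanServer; a remote JVM would be reached via
        // JMXConnectorFactory.connect(new JMXServiceURL(
        //     "service:jmx:rmi:///jndi/rmi://host:11201/jmxrmi")).
        MBeanServerConnection mbsc = ManagementFactory.getPlatformMBeanServer();

        // java.lang:type=Memory is the MBean selected in the jmxsh session;
        // its "gc" operation requests a full collection.
        mbsc.invoke(new ObjectName("java.lang:type=Memory"), "gc", null, null);
        System.out.println("gc invoked");
    }
}
```

This is the programmatic equivalent of the "5 / 1 / 5" menu selections in the jmxsh browse mode.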
Re: CLOSE_WAIT and what to do about it
Caldarale, Charles R wrote:
>> From: André Warnier [mailto:a...@ice-sa.com]
>> Subject: Re: CLOSE_WAIT and what to do about it
>>
>> If these sockets disappear during a GC, then it must mean that they are still being referenced by some abandoned objects sitting on the heap, which have not yet been reclaimed by the GC. Which probably means that the objects in question have gone out of scope before the socket they used was properly close()'d.
>
> Your analysis looks reasonable to me. There are some analysis tools that will examine a live heap (or a dump thereof) and find the reachable and unreachable objects; jhat is a free one that comes with JDK 6:
> http://java.sun.com/javase/6/webnotes/trouble/TSG-VM/html/tooldescr.html#gblfj

All right, I have done that too. I generated a heap dump using:

    jmap -heap:format=b <pid>

That gave me a file heap.bin of some 4.5 MB. I then used the jhat program to open it. jhat launches itself by default as a webserver on port 7000, which you can access using a normal browser.

That's where my problem starts, however, because being a mere Java fiddler I don't really know what I am looking at, or what to look for. I did a lot of guesswork anyway and, using my knowledge of the application more than the links, I came upon the name of a class that looks like it is responsible for opening/closing the sockets that remain in CLOSE_WAIT. I found the following function in the class:

    public void close() throws SomeException {
        putEndRequest();
        flush();
        socket = null;
    }

flush() being another function which reads the socket until there's nothing left to read, and throws away the result. socket is a property of the object created by this class, obtained somewhere else from a java.net.Socket object.

Looking at the code above, it is obvious that socket is open until it is set to null, without previously doing a socket.close(). I don't know Java well enough to know if this alone could cause that socket to linger until the GC, but I rather suspect so.

How does a Java expert look at that?
RE: CLOSE_WAIT and what to do about it
From: André Warnier [mailto:a...@ice-sa.com]
Subject: Re: CLOSE_WAIT and what to do about it

> Looking at the code above, it is obvious that socket is open until it is set to null, without previously doing a socket.close(). I don't know Java well enough to know if this alone could cause that socket to linger until the GC, but I rather suspect so.

For not being that familiar with Java, you've done an admirable job of tracking this down. What you've found certainly looks like the cause of the problem; the class you encountered appears to be a wrapper for a plain java.net.Socket, and whoever wrote it simply missed putting in a socket.close() call. Perhaps this was originally developed on an older JVM with more frequent non-generational garbage collection, so the problem wasn't noticed then.

- Chuck
Re: CLOSE_WAIT and what to do about it
Caldarale, Charles R wrote:
> [...]
>
> For not being that familiar with Java, you've done an admirable job of tracking this down. What you've found certainly looks like the cause of the problem; the class you encountered appears to be a wrapper for a plain java.net.Socket, and whoever wrote it simply missed putting in a socket.close() call.

I was standing on the shoulders of giants. Thanks for the help.
RE: CLOSE_WAIT and what to do about it
From: André Warnier [mailto:a...@ice-sa.com]
Subject: Re: CLOSE_WAIT and what to do about it

> Relatedly, does there exist any way to force a given JVM process to do a full GC interactively, but from a Linux command-line?

Found a command line tool that will do what you want: http://code.google.com/p/jmxsh/

I've used it to trigger a GC in Tomcat via the following steps.

1) Start Tomcat with the following options:
   -Dcom.sun.management.jmxremote.port=<port>
   -Dcom.sun.management.jmxremote.authenticate=false
   -Dcom.sun.management.jmxremote.ssl=false
   (You can, of course, set the authentication and SSL options as needed.)

2) Start jmxsh from the directory its jar is in with this: java -jar jmxsh*.jar

3) Enter the following commands (but not the bracketed bits):
   jmx_connect -h localhost -p <port>
   [blank line to enter browse mode]
   5  [selects java.lang]
   1  [selects the Memory mbean]
   5  [performs a GC]

The doc for jmxsh indicates the above steps should be scriptable, but I haven't tried that. It is likely that you could use jmx_connect with a different kind of service and avoid opening up an RMI port; if I figure that out, I'll let you know.

- Chuck
Re: CLOSE_WAIT and what to do about it
Skimmed quickly through your post there while working, so forgive me if this is irrelevant. CLOSE_WAIT is a state where the connection has been closed on the tcp/ip level, but the application (in this case java) has not closed the socket descriptor yet. As a coincidence we just fixed this very same issue in our application, which uses the httpclient library. There is a known issue with the httpclient library where sockets are not closed after the connection ends (issue or feature, you be the judge); we worked around this by explicitly calling a close ourselves. If httpclient is used, that could be the culprit. See http://www.nabble.com/tcp-connections-left-with-CLOSE_WAIT-td13757202.html for a better description Rgds, Taylan André Warnier wrote: Hi. As a follow-up on another thread originally entitled apache/tomcat communication issues (502 response), I'd like to pursue the CLOSE_WAIT subject. Sorry if this post is a bit long, but I want to make sure that I provide all the necessary information. Like the original poster, I am seeing on my systems a fair number of sockets apparently stuck for a long time in the CLOSE_WAIT state. (Sometimes several hundreds of them). They seem to predominantly concern Tomcat and other java processes, but as Alan pointed out previously and I confirm, my perspective is slanted, because we use a lot of common java programs and webapps on our servers, and the ones mostly affected talk to each other and come from the same vendor. Unfortunately also, I do not have the sources of these programs/webapps available, and will not get them, and I can't do without these programs. It has been previously established that a socket in a long-lingering CLOSE_WAIT state is due to one or the other side of a TCP connection not properly closing its side of the connection when it is done with it. I also surmise (without having definite proof of this) that this is essentially bad, as it ties up resources that could otherwise be freed. 
I have also been told or discovered that, our servers being Linux Debian servers, programs such as ps, netstat and lsof can help in determining precisely how many such lingering sockets there are, and who the culprit processes are (to some extent). In our case, we know which are the programs involved, because we know which ones open a listening socket and on what fixed port, and we also know which are the other processes talking to them. But, as mentioned previously, we do not have the source of these programs and will not get them, but cannot practically do without them for now. We do, however, have full root control of the Linux servers where these programs are running. So my question is: considering the situation above, is there something I can do locally to free these lingering CLOSE_WAIT sockets, and under which conditions? (I must admit that I am a bit lost among the myriad options of lsof.) For example, suppose I start with a netstat -pan command and I see the display below (sorry for the line-wrapping). I see a number of sockets in the CLOSE_WAIT state, and for those I have a process-id, which I can associate with a particular process. For example, I see this line: tcp6 12 0 :::127.0.0.1:41764 :::127.0.0.1:11002 CLOSE_WAIT 29649/java which tells me that there is a local process 29649/java, with a local socket port 41764 in the CLOSE_WAIT state, related to another socket on port 11002 on the same host. On the other hand, I see this line: tcp 0 0 127.0.0.1:11002 127.0.0.1:41764 FIN_WAIT2 - which shows a local socket on port 11002, related to this other local socket port 41764, with no process-id/program displayed. What does that tell me? I also know that the process-id 29649 corresponds to a local java process, of the daemon variety, multi-threaded. 
That program talks to another known server program, written in C, of which instances are started on an ad-hoc basis by inetd, and which listens on port 11002 (in fact it is inetd that does, and it passes this socket on to the process it forks; I understand that). (The link with Tomcat is that I also frequently see the same situation, where the process owning the CLOSE_WAIT socket is Tomcat, more specifically one webapp running inside it. It's just that in this particular snapshot it isn't.) What it looks like to me in this case is that at some point one of the threads of process #29649 opened a client socket on port #41764 to the local inetd port #11002; that inetd then started the underlying server process (the C program); that the underlying C program then at some point exited; but that process #29649 never closed its side of the connection with port #11002. Can I somehow detect this condition, and force the offending thread of process #29649 to close that socket (or just force this thread to exit)? I realise this may be a complex question, and that the answers may be
RE: CLOSE_WAIT and what to do about it
From: André Warnier [mailto:a...@ice-sa.com] It has been previously established that a socket in a long-lingering CLOSE_WAIT state is due to one or the other side of a TCP connection not properly closing its side of the connection when it is done with it. I also surmise (without having definite proof of this) that this is essentially bad, as it ties up resources that could otherwise be freed. At the very least it'll tie up a kernel data structure for the socket itself. I don't know modern Linux kernels well enough to know how buffers are allocated, but I suspect you won't be wasting much memory on buffers, as they'll be allocated on demand. You're probably talking tens to low hundreds of bytes for each one of these. You will also be consuming resources in whichever program is not closing the sockets correctly. So my question is: considering the situation above, is there something I can do locally to free these lingering CLOSE_WAIT sockets, and under which conditions? For example, I see this line: tcp6 12 0 :::127.0.0.1:41764 :::127.0.0.1:11002 CLOSE_WAIT 29649/java which tells me that there is a local process 29649/java, with a local socket port 41764 in the CLOSE_WAIT state, related to another socket on port 11002 on the same host. On the other hand, I see this line: tcp 0 0 127.0.0.1:11002 127.0.0.1:41764 FIN_WAIT2 - which shows a local socket on port 11002, related to this other local socket port 41764, with no process-id/program displayed. What does that tell me? The process that was on port 11002 closed its end of the socket and sent a FIN. Process 29649 hasn't closed its end of the socket yet. I also know that the process-id 29649 corresponds to a local java process, of the daemon variety, multi-threaded. 
That program talks to another known server program, written in C, of which instances are started on an ad-hoc basis by inetd, and which listens on port 11002 (in fact it is inetd that does, and it passes this socket on to the process it forks; I understand that). The local Java process may have a resource leak. It appears not to have closed the socket it was using to communicate with the server. A possible reason for the lack of a PID on port 11002 is that the socket was handed across from inetd to the C daemon - not sure about this. What it looks like to me in this case is that at some point one of the threads of process #29649 opened a client socket on port #41764 to the local inetd port #11002; that inetd then started the underlying server process (the C program); that the underlying C program then at some point exited; but that process #29649 never closed its side of the connection with port #11002. Agree. Can I somehow detect this condition, and force the offending thread of process #29649 to close that socket (or just force this thread to exit)? Threads are flows of control. Threads do not reference objects other than from their stack and any thread-local storage - and there are plenty of other places that can hold onto objects! The socket may well be referenced from an object on the heap (not the stack) that's ultimately referenced by a static variable in a class, for example, in which case zapping a thread may well do nothing. You need to find out what, if anything, is holding onto the socket. If you have some way of forcing that Java process to collect garbage, you should do so. It's possible for sockets that haven't been close()d to hang around, unreferenced but not yet garbage collected. A full GC would collect any of these, finalizing them as it does and hence closing the socket. If a full GC doesn't close the socket, some other object is still referencing it. 
If a full GC doesn't clear the problem, you may need to go in with some memory-tracing tool and find out what's holding onto the socket. It's a long, long time since I had to do this in Java, so I have no idea of the appropriate tools - my brain's telling me Son of Strike, which is for the .Net CLR and *definitely* wrong! Does that help? Or is it clear as mud? - Peter
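Peter's point - "if a full GC doesn't collect it, some other object is still referencing it" - can be sketched with weak references. This is a hypothetical demo using plain Objects rather than real sockets (the static field plays the role of whatever heap object might be pinning the leaked socket):

```java
import java.lang.ref.WeakReference;

public class ReachabilityDemo {
    // Simulates a static field that keeps an object strongly reachable,
    // the way some cache or registry might be pinning a leaked socket.
    static Object staticHolder;

    public static void main(String[] args) throws InterruptedException {
        Object unpinned = new Object();
        staticHolder = new Object();

        // Weak references report whether the GC has reclaimed the referent.
        WeakReference<Object> weakUnpinned = new WeakReference<>(unpinned);
        WeakReference<Object> weakPinned = new WeakReference<>(staticHolder);

        unpinned = null; // drop the only strong reference

        // Request full collections until the unpinned object is reclaimed.
        for (int i = 0; i < 50 && weakUnpinned.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }

        System.out.println("unpinned collected: " + (weakUnpinned.get() == null));
        System.out.println("pinned collected: " + (weakPinned.get() == null));
    }
}
```

The unpinned object goes away after a full GC (for a leaked-but-unreferenced Socket, finalization would close it at that point); the one held by the static field survives every collection, which is the case where a heap-dump or memory-tracing tool is needed to find the holder.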
Re: CLOSE_WAIT and what to do about it
Peter Crowther wrote: [...] Does that help? Or is it clear as mud? For no-java-expert-me, it is indeed of the hazy category. But it helps a lot, in the sense of adding a +3 in the column "get back to the vendor and ask them to fix their code". ;-) Thanks.
Re: CLOSE_WAIT and what to do about it
Peter Crowther wrote: [...] If you have some way of forcing that Java process to collect garbage, you should do so. It's possible for sockets that haven't been close()d to hang around, unreferenced but not yet garbage collected. A full GC would collect any of these, finalizing them as it does and hence closing the socket. If a full GC doesn't close the socket, some other object is still referencing it. Hopping on that idea, and still considering the "try something from the outside, without modifying the code" kind of view: This process is started as a daemon, with a java command-line. Is it possible to add some arguments to that command-line to induce the JVM to do a GC more often? (I don't think that in this case it would have a very negative impact on performance.) It currently starts without any -D switches at all on the command-line, basically: path/to/java/java -jar theapp.jar The same question for the related Tomcat webapp (which I suspect of having the same issue). But in that case I do have to be a bit more careful regarding the performance impact, although this webapp is pretty much all that is running in this Tomcat. And that Tomcat (on some of our systems) starts under jsvc, and I don't really know where to set the parameters for that one under Linux. Relatedly, does there exist any way to force a given JVM process to do a full GC interactively, but from a Linux command-line? I have full access to these systems, but usually only in SSH console mode, and I don't know if there is any kind of graphical GUI installed or accessible on them. Basically, I'd like to see if triggering a GC reduces this number of lingering sockets.
RE: CLOSE_WAIT and what to do about it
From: André Warnier [mailto:a...@ice-sa.com] This process is started as a daemon, with a java command-line. Is it possible to add some arguments to that command-line to induce the JVM to do a GC more often? http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html - I don't think so, although the RMI option under Explicit Garbage Collection might work. The same question for the related Tomcat webapp (which I suspect of having the same issue). But in that case I do have to be a bit more careful regarding the performance impact, although this webapp is pretty much all that is running in this Tomcat. That one's easy. Add another webapp with one page. When the page is requested, call System.gc(). Job done! Relatedly, does there exist any way to force a given JVM process to do a full GC interactively, but from a Linux command-line? I'm not aware of one, but I'm not an expert. I await the experts' comments with interest! - Peter
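Peter's "one page that calls System.gc()" suggestion would normally be a tiny servlet; as a self-contained sketch of the same idea, here is a variant using only the JDK's built-in com.sun.net.httpserver (the path and messages are made up for illustration). It runs one request against itself to show the round trip:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;

public class GcEndpoint {
    public static void main(String[] args) throws Exception {
        // Port 0 = pick any free ephemeral port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/gc", exchange -> {
            System.gc(); // the whole point of the "page"
            byte[] body = "gc requested\n".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // Self-test: request the page once, like curl would.
        int port = server.getAddress().getPort();
        URL url = new URL("http://127.0.0.1:" + port + "/gc");
        try (InputStream in = url.openStream()) {
            byte[] buf = new byte[256];
            int n = in.read(buf);
            System.out.print(new String(buf, 0, n, "UTF-8"));
        }
        server.stop(0);
    }
}
```

In a Tomcat deployment the handler body would live in a servlet's doGet() instead; either way, `curl http://host:port/gc` from an SSH session gives the command-line GC trigger André was after (bearing in mind System.gc() is a request, not a guarantee).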
RE: CLOSE_WAIT and what to do about it
From: André Warnier [mailto:a...@ice-sa.com] Subject: Re: CLOSE_WAIT and what to do about it Relatedly, does there exist any way to force a given JVM process to do a full GC interactively, but from a Linux command-line? I haven't found one yet, but there are numerous command-line monitoring utilities included with the JDK that display all sorts of GC information, using the same connection mechanism as JConsole. Since JConsole can force a GC in a JVM it's monitoring, doing it from the command line is feasible. Might have to do a little coding... - Chuck
Re: CLOSE_WAIT and what to do about it
Hi André, I didn't fully read all responses, so I hope I don't repeat too much (or worse, contradict statements contained in other replies). On 08.04.2009 12:32, André Warnier wrote: Like the original poster, I am seeing on my systems a fair number of sockets apparently stuck for a long time in the CLOSE_WAIT state. (Sometimes several hundreds of them). They seem to predominantly concern Tomcat and other java processes, but as Alan pointed out previously and I confirm, my perspective is slanted, because we use a lot of common java programs and webapps on our servers, and the ones mostly affected talk to each other and come from the same vendor. Unfortunately also, I do not have the sources of these programs/webapps available, and will not get them, and I can't do without these programs. It has been previously established that a socket in a long-lingering CLOSE_WAIT state is due to one or the other side of a TCP connection not properly closing its side of the connection when it is done with it. CLOSE_WAIT means the other side shut down the connection. TCP connections are allowed to stay for an arbitrary time in the half-closed state. In general, a TCP connection can be used in a duplex way. But assume one end has finished communication (sending data). Then it can already close its side of the connection. The classic TCP state diagram is contained in the fundamental book of Stevens, and can be seen e.g. at http://www.cse.iitb.ac.in/perfnet/cs456/tcp-state-diag.pdf As you can see, CLOSE_WAIT on one end always implies FIN_WAIT2 on the other end (except when, between the two ends, there's yet another component that interferes with the communication, like maybe a firewall). In the special situation where both ends of the communication are on the same system, one finds each connection twice, once from the point of view of each side of the connection. It is always important to think about which end one is looking at when interpreting the two lines. 
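The half-close Rainer describes is easy to reproduce on loopback. In this minimal sketch, one end closes (sending a FIN), which puts the other end's socket into CLOSE_WAIT until the application closes it too; the application-visible symptom is that read() returns end-of-stream:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class HalfCloseDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(0)) {
            Socket client = new Socket("127.0.0.1", listener.getLocalPort());
            Socket serverSide = listener.accept();

            // Client closes: it sends a FIN and enters FIN_WAIT,
            // while serverSide's connection enters CLOSE_WAIT.
            client.close();

            // The server only notices when it uses the connection:
            // read() returns -1 (EOF) once the peer's FIN has arrived.
            int r = serverSide.getInputStream().read();
            System.out.println("read after peer close: " + r);

            // Until this close(), netstat would keep showing CLOSE_WAIT.
            serverSide.close();
            System.out.println("server side closed: " + serverSide.isClosed());
        }
    }
}
```

Running `netstat -pan` between the client.close() and serverSide.close() calls would show the CLOSE_WAIT / FIN_WAIT2 pair from the original post; and as Rainer notes below, an application that never touches the connection again never sees the EOF and can sit in CLOSE_WAIT indefinitely.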
I also surmise (without having definite proof of this) that this is essentially bad, as it ties up resources that could otherwise be freed. I have also been told or discovered that, our servers being Linux Debian servers, programs such as ps, netstat and lsof can help in determining precisely how many such lingering sockets there are, and who the culprit processes are (to some extent). True. In our case, we know which are the programs involved, because we know which ones open a listening socket and on what fixed port, and we also know which are the other processes talking to them. But, as mentioned previously, we do not have the source of these programs and will not get them, but cannot practically do without them for now. But we do have full root control of the Linux servers where these programs are running. The details may depend on the protocols used, and sometimes you can get information about timeouts you can set in the application, like idle timeouts for persistent connections. So my question is: considering the situation above, is there something I can do locally to free these lingering CLOSE_WAIT sockets, and under which conditions? (I must admit that I am a bit lost among the myriad options of lsof.) I would say no, if you can't change the application and the developer of it didn't provide any configuration options. CLOSE_WAIT, from the point of view of TCP, is a legitimate state without any built-in timeout. For example, suppose I start with a netstat -pan command and I see the display below (sorry for the line-wrapping). I see a number of sockets in the CLOSE_WAIT state, and for those I have a process-id, which I can associate with a particular process. For example, I see this line: tcp6 12 0 :::127.0.0.1:41764 :::127.0.0.1:11002 CLOSE_WAIT 29649/java which tells me that there is a local process 29649/java, with a local socket port 41764 in the CLOSE_WAIT state, related to another socket on port 11002 on the same host. 
On the other hand, I see this line: tcp 0 0 127.0.0.1:11002 127.0.0.1:41764 FIN_WAIT2 - which shows a local socket on port 11002, related to this other local socket port 41764, with no process-id/program displayed. What does that tell me? My interpretation (not 100% sure): I'm not sure what your OS shows in netstat after closing the local side of a connection, more precisely whether the pid is still shown or is removed. Depending on this answer, either we have a simple one-sided shutdown, or even a process exit. In both cases the process holding port 41764 (pid 29649) didn't have any reason to use the established connection in the meantime, so it didn't realise that the connection is only half-open. As soon as it tried to use it, it should/would detect that and most likely (if programmed correctly) close it. I also know that the process-id 29649 corresponds to a local java process, of the daemon variety, multi-threaded. That program talks to another known server