Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-12 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Isaac,

On 3/11/14, 6:56 PM, Isaac Gonzalez wrote:
 
 
 -Original Message- From: Christopher Schultz
 [mailto:ch...@christopherschultz.net] Sent: Friday, March 07, 2014
 8:18 AM To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk
 connections after server runs for a couple of days
 
 
 
 
 On 3/6/14, 7:39 AM, Daniel Mikusa wrote:
 On Mar 5, 2014, at 4:51 PM, Isaac Gonzalez
 igonza...@autoreturn.com wrote:
 
 
 
 -Original Message- From: Daniel Mikusa 
 [mailto:dmik...@gopivotal.com] Sent: Tuesday, March 04, 2014 
 12:42 PM To: Tomcat Users List Subject: Re: tomcat 6 refuses
 mod_jk connections after server runs for a couple of days
 
 On Mar 4, 2014, at 1:55 PM, Isaac Gonzalez
 igonza...@autoreturn.com wrote:
 
 Dan,
 
  From: Daniel Mikusa
  [dmik...@gopivotal.com] Sent: Tuesday, March 04, 2014 6:20
 AM To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk
  connections after server runs for a couple of days
 
 On Mar 4, 2014, at 6:32 AM, Rainer Jung
 rainer.j...@kippdata.de wrote:
 
 On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher (and Konstantin), attached are a couple of
 thread dumps from when we experienced the issue again
 today. I also noticed we get this message right before
 the problem occurs:

 Feb 27, 2014 12:47:15 PM
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError:
 unable to create new native thread) executing
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea,
 terminating thread
 
 Is it a 32-bit system? You have 2GB of heap plus Perm plus
 native memory needed by the process plus thread stacks. It is
 not unlikely that you ran out of address space for a 32-bit
 process.
 
 The only fixes would then be:
 
 - switch to a 64-bit system
 
 - reduce the heap if the app can work with less
 
 - improve performance or eliminate bottlenecks so that the
 app works with fewer threads
 
 - limit your connector thread pool size. That will still
 mean that if requests begin to queue because of performance
 problems, the web server can't create additional
 connections, but you won't get into an irregular situation
 like the one you are experiencing now. In that case you would
 need to configure a low idle timeout for the connections on
 the JK and TC side.
 
 It may also be possible to lower the thread stack size with
 the -Xss option.
 
 OK, so we are on 64-bit Linux with a 1024k stack size in the
 64-bit VM... would lowering it to 64k be too low? What sort of
 repercussions would we run into? Very helpful information, by
 the way.
 
 It depends on your apps, so you'll need to test and see.  If
 you go too low, you'll get StackOverflow exceptions.  If you
 see those, just gradually increase until they go away.
 
 Dan
 
 
 
 -Isaac
 
 
 http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_oom
 
 Might buy you some room for a few additional threads.
 
 Dan
 
 
 Regards,
 
 Rainer
 
 
 -
 To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
 For additional commands, e-mail: users-h...@tomcat.apache.org
 
 
 OK, so the problem just happened again. Dan, can you
 elaborate on how to limit the connector thread pool size? I am
 also going to lower the thread stack size as you recommended.
 It seems like this problem creeps up when we have a hiccup in
 connectivity at our data center. Perhaps I also need to lower
 the idle timeout between Tomcat and mod_jk some more. There is
 also a firewall between the two, by the way, so I can
 configure a timeout there as well. We aren't experiencing too
 many idle disconnects there.
 
 See maxConnections / maxThreads on the Connector tag.
 
 http://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Standard_Implementation
 
 or Executor if you're using an executor.
 
 http://tomcat.apache.org/tomcat-7.0-doc/config/executor.html
 
 ... and you definitely *should* be using a manually-configured
 Executor.
 
 - -chris
 
 Chris, why should I be using an executor, since we only have
 users on the single 8009 AJP connector on each Tomcat instance?
 I am the only one that uses the 8080 connector, for
 troubleshooting and monitoring purposes. Is it mainly to help
 recycle unused threads?

RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-11 Thread Isaac Gonzalez


-Original Message-
From: Christopher Schultz [mailto:ch...@christopherschultz.net] 
Sent: Friday, March 07, 2014 8:18 AM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days




On 3/6/14, 7:39 AM, Daniel Mikusa wrote:
 On Mar 5, 2014, at 4:51 PM, Isaac Gonzalez igonza...@autoreturn.com 
 wrote:
 
 
 
 -Original Message- From: Daniel Mikusa 
 [mailto:dmik...@gopivotal.com] Sent: Tuesday, March 04, 2014
 12:42 PM To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk 
 connections after server runs for a couple of days
 
 On Mar 4, 2014, at 1:55 PM, Isaac Gonzalez igonza...@autoreturn.com 
 wrote:
 
 Dan,
 
  From: Daniel Mikusa 
 [dmik...@gopivotal.com] Sent: Tuesday, March 04, 2014 6:20 AM
 To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk 
 connections after server runs for a couple of days
 
 On Mar 4, 2014, at 6:32 AM, Rainer Jung rainer.j...@kippdata.de 
 wrote:
 
 On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher (and Konstantin), attached are a couple of thread 
 dumps from when we experienced the issue again today.
 I also noticed we get this message right before the problem
 occurs: Feb 27, 2014 12:47:15 PM
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable
 run SEVERE: Caught exception (java.lang.OutOfMemoryError:
 unable to create new native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea,
  terminating thread
 
 Is it a 32-bit system? You have 2GB of heap plus Perm plus native 
 memory needed by the process plus thread stacks. It is not unlikely 
 that you ran out of address space for a 32-bit process.
 
 The only fixes would then be:
 
 - switch to a 64 bit system
 
 - reduce heap if the app can work with less
 
 - improve performance or eliminate bottlenecks so that the app 
 works with fewer threads
 
 - limit your connector thread pool size. That will still mean that 
 if requests begin to queue because of performance problems, the web 
 server can't create additional connections, but you won't get into 
 an irregular situation like the one you are experiencing now. In 
 that case you would need to configure a low idle timeout for the 
 connections on the JK and TC side.
 
 It may also be possible to lower the thread stack size with the -Xss 
 option.
 
 OK, so we are on 64-bit Linux with a 1024k stack size in the 64-bit 
 VM... would lowering it to 64k be too low? What sort of 
 repercussions would we run into? Very helpful information, by the way.
 
 It depends on your apps, so you'll need to test and see.  If you go 
 too low, you'll get StackOverflow exceptions.  If you see those, just 
 gradually increase until they go away.
 
 Dan
 
 
 
 -Isaac
 
 
 http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_oom
 
 Might buy you some room for a few additional threads.
 
 Dan
 
 
 Regards,
 
 Rainer
 
 
 
 
 OK, so the problem just happened again. Dan, can you elaborate on 
 how to limit the connector thread pool size? I am also going to 
 lower the thread stack size as you recommended. It seems like this 
 problem creeps up when we have a hiccup in connectivity at our data 
 center. Perhaps I also need to lower the idle timeout between 
 Tomcat and mod_jk some more. There is also a firewall between the 
 two, by the way, so I can configure a timeout there as well. We 
 aren't experiencing too many idle disconnects there.
 
 See maxConnections / maxThreads on the Connector tag.
 
 http://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Standard_Implementation
 
 or Executor if you're using an executor.
 
 http://tomcat.apache.org/tomcat-7.0-doc/config/executor.html

... and you definitely *should* be using a manually-configured Executor.

- -chris

Chris, why should I be using an executor, since we only have users on the 
single 8009 AJP connector on each Tomcat instance? I am the only one that 
uses the 8080 connector, for troubleshooting and monitoring purposes. Is 
it mainly to help recycle unused threads?

-Isaac


Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-07 Thread Christopher Schultz



On 3/6/14, 7:39 AM, Daniel Mikusa wrote:
 On Mar 5, 2014, at 4:51 PM, Isaac Gonzalez
 igonza...@autoreturn.com wrote:
 
 
 
 -Original Message- From: Daniel Mikusa
 [mailto:dmik...@gopivotal.com] Sent: Tuesday, March 04, 2014
 12:42 PM To: Tomcat Users List Subject: Re: tomcat 6 refuses
 mod_jk connections after server runs for a couple of days
 
 On Mar 4, 2014, at 1:55 PM, Isaac Gonzalez
 igonza...@autoreturn.com wrote:
 
 Dan,
 
  From: Daniel Mikusa
 [dmik...@gopivotal.com] Sent: Tuesday, March 04, 2014 6:20 AM 
 To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk
 connections after server runs for a couple of days
 
 On Mar 4, 2014, at 6:32 AM, Rainer Jung
 rainer.j...@kippdata.de wrote:
 
 On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher (and Konstantin), attached are a couple of
 thread dumps from when we experienced the issue again today.
 I also noticed we get this message right before the problem
 occurs: Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable
 run SEVERE: Caught exception (java.lang.OutOfMemoryError:
 unable to create new native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea,
  terminating thread
 
 Is it a 32-bit system? You have 2GB of heap plus Perm plus
 native memory needed by the process plus thread stacks. It is
 not unlikely that you ran out of address space for a 32-bit
 process.
 
 The only fixes would then be:
 
 - switch to a 64 bit system
 
 - reduce heap if the app can work with less
 
 - improve performance or eliminate bottlenecks so that the
 app works with fewer threads
 
 - limit your connector thread pool size. That will still mean
 that if requests begin to queue because of performance
 problems, the web server can't create additional connections,
 but you won't get into an irregular situation like the one you
 are experiencing now. In that case you would need to configure
 a low idle timeout for the connections on the JK and TC side.
 
 It may also be possible to lower the thread stack size with the
 -Xss option.
 
 OK, so we are on 64-bit Linux with a 1024k stack size in the
 64-bit VM... would lowering it to 64k be too low? What sort of
 repercussions would we run into? Very helpful information, by
 the way.
 
 It depends on your apps, so you'll need to test and see.  If you
 go too low, you'll get StackOverflow exceptions.  If you see
 those, just gradually increase until they go away.
 
 Dan
 
 
 
 -Isaac
 
 
 http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_oom
 
 Might buy you some room for a few additional threads.
 
 Dan
 
 
 Regards,
 
 Rainer
 
 
 
 
 OK, so the problem just happened again. Dan, can you
 elaborate on how to limit the connector thread pool size? I am
 also going to lower the thread stack size as you recommended.
 It seems like this problem creeps up when we have a hiccup in
 connectivity at our data center. Perhaps I also need to lower
 the idle timeout between Tomcat and mod_jk some more. There is
 also a firewall between the two, by the way, so I can
 configure a timeout there as well. We aren't experiencing too
 many idle disconnects there.
 
 See maxConnections / maxThreads on the Connector tag.
 
 http://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Standard_Implementation

  or Executor if you’re using an executor.
 
 http://tomcat.apache.org/tomcat-7.0-doc/config/executor.html

... and you definitely *should* be using a manually-configured Executor.

- -chris
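A manually-configured Executor, as Chris suggests, might look roughly like the following server.xml sketch. The pool name and the sizes are illustrative, not taken from this thread:

```xml
<!-- Shared thread pool: maxThreads caps the total number of request
     threads, and idle threads beyond minSpareThreads are reclaimed
     after maxIdleTime (milliseconds), which is what recycles unused
     threads. -->
<Executor name="tomcatThreadPool"
          namePrefix="catalina-exec-"
          maxThreads="200"
          minSpareThreads="10"
          maxIdleTime="60000"/>

<!-- AJP connector on 8009 borrowing its threads from the executor. -->
<Connector port="8009"
           protocol="AJP/1.3"
           executor="tomcatThreadPool"
           connectionTimeout="60000"/>
```

The Executor's maxIdleTime gives explicit control over reclaiming idle threads, which is presumably part of why a manually-configured executor is recommended over each connector's built-in pool.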

RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-07 Thread Isaac Gonzalez


-Original Message-
From: Christopher Schultz [mailto:ch...@christopherschultz.net] 
Sent: Friday, March 07, 2014 8:18 AM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days




On 3/6/14, 7:39 AM, Daniel Mikusa wrote:
 On Mar 5, 2014, at 4:51 PM, Isaac Gonzalez igonza...@autoreturn.com 
 wrote:
 
 
 
 -Original Message- From: Daniel Mikusa 
 [mailto:dmik...@gopivotal.com] Sent: Tuesday, March 04, 2014
 12:42 PM To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk 
 connections after server runs for a couple of days
 
 On Mar 4, 2014, at 1:55 PM, Isaac Gonzalez igonza...@autoreturn.com 
 wrote:
 
 Dan,
 
  From: Daniel Mikusa 
 [dmik...@gopivotal.com] Sent: Tuesday, March 04, 2014 6:20 AM
 To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk 
 connections after server runs for a couple of days
 
 On Mar 4, 2014, at 6:32 AM, Rainer Jung rainer.j...@kippdata.de 
 wrote:
 
 On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher (and Konstantin), attached are a couple of thread 
 dumps from when we experienced the issue again today.
 I also noticed we get this message right before the problem
 occurs: Feb 27, 2014 12:47:15 PM
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable
 run SEVERE: Caught exception (java.lang.OutOfMemoryError:
 unable to create new native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea,
  terminating thread
 
 Is it a 32-bit system? You have 2GB of heap plus Perm plus native 
 memory needed by the process plus thread stacks. It is not unlikely 
 that you ran out of address space for a 32-bit process.
 
 The only fixes would then be:
 
 - switch to a 64 bit system
 
 - reduce heap if the app can work with less
 
 - improve performance or eliminate bottlenecks so that the app 
 works with fewer threads
 
 - limit your connector thread pool size. That will still mean that 
 if requests begin to queue because of performance problems, the web 
 server can't create additional connections, but you won't get into 
 an irregular situation like the one you are experiencing now. In 
 that case you would need to configure a low idle timeout for the 
 connections on the JK and TC side.
 
 It may also be possible to lower the thread stack size with the -Xss 
 option.
 
 OK, so we are on 64-bit Linux with a 1024k stack size in the 64-bit 
 VM... would lowering it to 64k be too low? What sort of 
 repercussions would we run into? Very helpful information, by the way.
 
 It depends on your apps, so you'll need to test and see.  If you go 
 too low, you'll get StackOverflow exceptions.  If you see those, just 
 gradually increase until they go away.
 
 Dan
 
 
 
 -Isaac
 
 
 http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_oom
 
 Might buy you some room for a few additional threads.
 
 Dan
 
 
 Regards,
 
 Rainer
 
 
 
 
 OK, so the problem just happened again. Dan, can you elaborate on 
 how to limit the connector thread pool size? I am also going to 
 lower the thread stack size as you recommended. It seems like this 
 problem creeps up when we have a hiccup in connectivity at our data 
 center. Perhaps I also need to lower the idle timeout between 
 Tomcat and mod_jk some more. There is also a firewall between the 
 two, by the way, so I can configure a timeout there as well. We 
 aren't experiencing too many idle disconnects there.
 
 See maxConnections / maxThreads on the Connector tag.
 
 http://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Standard_Implementation

  or Executor if you’re using an executor.
 
 http://tomcat.apache.org/tomcat-7.0-doc/config/executor.html

... and you definitely *should* be using a manually-configured Executor.

- -chris

OK, so moving to a manually-configured executor and setting the thread stack 
size smaller than the 1024k default sounds like it should definitely help out 
a bit.

-Isaac



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org

Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-06 Thread Daniel Mikusa
On Mar 5, 2014, at 4:51 PM, Isaac Gonzalez igonza...@autoreturn.com wrote:

 
 
 -Original Message-
 From: Daniel Mikusa [mailto:dmik...@gopivotal.com] 
 Sent: Tuesday, March 04, 2014 12:42 PM
 To: Tomcat Users List
 Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a 
 couple of days
 
 On Mar 4, 2014, at 1:55 PM, Isaac Gonzalez igonza...@autoreturn.com wrote:
 
 Dan,
 
 
 From: Daniel Mikusa [dmik...@gopivotal.com]
 Sent: Tuesday, March 04, 2014 6:20 AM
 To: Tomcat Users List
 Subject: Re: tomcat 6 refuses mod_jk connections after server runs for 
 a couple of days
 
 On Mar 4, 2014, at 6:32 AM, Rainer Jung rainer.j...@kippdata.de wrote:
 
 On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher (and Konstantin), attached are a couple of thread dumps 
 from when we experienced the issue again today. I also noticed we get 
 this message right before the problem occurs:
 Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to 
 create new native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea, 
 terminating thread
 
 Is it a 32-bit system? You have 2GB of heap plus Perm plus native 
 memory needed by the process plus thread stacks. It is not unlikely 
 that you ran out of address space for a 32-bit process.
 
 The only fixes would then be:
 
 - switch to a 64 bit system
 
 - reduce heap if the app can work with less
 
 - improve performance or eliminate bottlenecks so that the app works 
 with fewer threads
 
 - limit your connector thread pool size. That will still mean that if 
 requests begin to queue because of performance problems, the web 
 server can't create additional connections, but you won't get into an 
 irregular situation like the one you are experiencing now. In that 
 case you would need to configure a low idle timeout for the 
 connections on the JK and TC side.
 
 It may also be possible to lower the thread stack size with the -Xss option.
 
 OK, so we are on 64-bit Linux with a 1024k stack size in the 64-bit 
 VM... would lowering it to 64k be too low? What sort of repercussions 
 would we run into? Very helpful information, by the way.
 
 It depends on your apps, so you'll need to test and see.  If you go too low, 
 you'll get StackOverflow exceptions.  If you see those, just gradually 
 increase until they go away.
 
 Dan
 
 
 
 -Isaac
 
 
 http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_
 oom
 
 Might buy you some room for a few additional threads.
 
 Dan
 
 
 Regards,
 
 Rainer
 
 
 
 
 OK, so the problem just happened again. Dan, can you elaborate on how 
 to limit the connector thread pool size? I am also going to lower the 
 thread stack size as you recommended. It seems like this problem 
 creeps up when we have a hiccup in connectivity at our data center. 
 Perhaps I also need to lower the idle timeout between Tomcat and 
 mod_jk some more. There is also a firewall between the two, by the 
 way, so I can configure a timeout there as well. We aren't 
 experiencing too many idle disconnects there.

See maxConnections / maxThreads on the Connector tag.

  
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html#Standard_Implementation

or Executor if you’re using an executor.

  http://tomcat.apache.org/tomcat-7.0-doc/config/executor.html

Dan
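Concretely, the attributes Dan points at sit on the AJP Connector in server.xml. A rough sketch (the values are illustrative; note the links above are the Tomcat 7 docs, and maxConnections is not documented for the Tomcat 6 connectors, so on Tomcat 6 maxThreads is the practical cap):

```xml
<!-- Cap the request-processing pool and close idle AJP connections
     after 60 seconds so a network hiccup cannot pin threads
     indefinitely. -->
<Connector port="8009"
           protocol="AJP/1.3"
           maxThreads="150"
           connectionTimeout="60000"/>
```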

 
 -Isaac
 



RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-05 Thread Isaac Gonzalez


-Original Message-
From: Daniel Mikusa [mailto:dmik...@gopivotal.com] 
Sent: Tuesday, March 04, 2014 12:42 PM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days

On Mar 4, 2014, at 1:55 PM, Isaac Gonzalez igonza...@autoreturn.com wrote:

 Dan,
 
 
 From: Daniel Mikusa [dmik...@gopivotal.com]
 Sent: Tuesday, March 04, 2014 6:20 AM
 To: Tomcat Users List
 Subject: Re: tomcat 6 refuses mod_jk connections after server runs for 
 a couple of days
 
 On Mar 4, 2014, at 6:32 AM, Rainer Jung rainer.j...@kippdata.de wrote:
 
 On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher (and Konstantin), attached are a couple of thread dumps 
 from when we experienced the issue again today. I also noticed we get 
 this message right before the problem occurs:
 Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to 
 create new native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea, 
 terminating thread
 
 Is it a 32-bit system? You have 2GB of heap plus Perm plus native 
 memory needed by the process plus thread stacks. It is not unlikely 
 that you ran out of address space for a 32-bit process.
 
 The only fixes would then be:
 
 - switch to a 64 bit system
 
 - reduce heap if the app can work with less
 
 - improve performance or eliminate bottlenecks so that the app works 
 with fewer threads
 
 - limit your connector thread pool size. That will still mean that if 
 requests begin to queue because of performance problems, the web 
 server can't create additional connections, but you won't get into an 
 irregular situation like the one you are experiencing now. In that 
 case you would need to configure a low idle timeout for the 
 connections on the JK and TC side.
 
 It may also be possible to lower the thread stack size with the -Xss option.
 
 OK, so we are on 64-bit Linux with a 1024k stack size in the 64-bit 
 VM... would lowering it to 64k be too low? What sort of repercussions 
 would we run into? Very helpful information, by the way.

It depends on your apps, so you'll need to test and see.  If you go too low, 
you'll get StackOverflow exceptions.  If you see those, just gradually increase 
until they go away.

Dan


 
 -Isaac
 
  
 http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_
 oom
 
 Might buy you some room for a few additional threads.
 
 Dan
 
 
 Regards,
 
 Rainer
 
 


OK, so the problem just happened again. Dan, can you elaborate on how to 
limit the connector thread pool size? I am also going to lower the thread 
stack size as you recommended. It seems like this problem creeps up when we 
have a hiccup in connectivity at our data center. Perhaps I also need to 
lower the idle timeout between Tomcat and mod_jk some more. There is also a 
firewall between the two, by the way, so I can configure a timeout there as 
well. We aren't experiencing too many idle disconnects there.

-Isaac




Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-04 Thread Rainer Jung
On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher (and Konstantin), attached are a couple of thread dumps 
 from when we experienced the issue again today. I also noticed we get 
 this message right before the problem occurs:
 Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to create new 
 native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea, terminating thread

Is it a 32-bit system? You have 2GB of heap plus Perm plus native memory
needed by the process plus thread stacks. It is not unlikely that you ran
out of address space for a 32-bit process.

The only fixes would then be:

- switch to a 64 bit system

- reduce heap if the app can work with less

- improve performance or eliminate bottlenecks so that the app works
with fewer threads

- limit your connector thread pool size. That will still mean that if
requests begin to queue because of performance problems, the web server
can't create additional connections, but you won't get into an irregular
situation like the one you are experiencing now. In that case you would
need to configure a low idle timeout for the connections on the JK and
TC side.
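Configuring a low idle timeout on both sides, as described above, might look roughly like this on the mod_jk side (the worker name and values are illustrative, not from this thread):

```properties
# workers.properties: recycle pooled AJP connections that have been
# idle for more than 60 seconds (connection_pool_timeout is in seconds).
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009
worker.tomcat1.connection_pool_timeout=60
```

The matching Tomcat-side setting would be connectionTimeout="60000" (milliseconds) on the AJP Connector, so both ends agree on when an idle connection may be dropped.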

Regards,

Rainer





Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-04 Thread Daniel Mikusa
On Mar 4, 2014, at 6:32 AM, Rainer Jung rainer.j...@kippdata.de wrote:

 On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher (and Konstantin), attached are a couple of thread dumps 
 from when we experienced the issue again today. I also noticed we get 
 this message right before the problem occurs:
 Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to create new 
 native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea, terminating thread
 
 Is it a 32-bit system? You have 2GB of heap plus Perm plus native memory
 needed by the process plus thread stacks. It is not unlikely that you ran
 out of address space for a 32-bit process.
 
 The only fixes would then be:
 
 - switch to a 64 bit system
 
 - reduce heap if the app can work with less
 
 - improve performance or eliminate bottlenecks so that the app works
 with fewer threads
 
 - limit your connector thread pool size. That will still mean that if
 requests begin to queue because of performance problems, the web server
 can't create additional connections, but you won't get into an irregular
 situation like the one you are experiencing now. In that case you would
 need to configure a low idle timeout for the connections on the JK and
 TC side.

It may also be possible to lower the thread stack size with the -Xss option.

  http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_oom

Might buy you some room for a few additional threads.

Dan
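Applied to Tomcat, the -Xss flag would typically go into CATALINA_OPTS, e.g. via bin/setenv.sh; a sketch, where the 256k figure is an illustrative starting point rather than a recommendation:

```shell
# bin/setenv.sh: shrink the default per-thread stack size so each
# native thread costs less memory. Too small a value surfaces as
# StackOverflowError under load, so test before settling on a size.
CATALINA_OPTS="$CATALINA_OPTS -Xss256k"
export CATALINA_OPTS
```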

 
 Regards,
 
 Rainer
 
 



RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-04 Thread Isaac Gonzalez
Rainer,


From: Rainer Jung [rainer.j...@kippdata.de]
Sent: Tuesday, March 04, 2014 3:32 AM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days

On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher (and Konstantin), attached are a couple of thread dumps 
 from when we experienced the issue again today. I also noticed we get 
 this message right before the problem occurs:
 Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to create new 
 native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea, terminating thread

Is it a 32-bit system? You have 2GB of heap plus Perm plus native memory
needed by the process plus thread stacks. It is not unlikely that you ran
out of address space for a 32-bit process.

 No we are running on 64 bit


The only fixes would then be:

- switch to a 64 bit system

- reduce heap if the app can work with less
  I'd like to keep the heap the same...most of our apps need it.

- improve performance or eliminate bottlenecks so that the app works
with fewer threads

 Can you give an example of such a bottleneck? I.e., open, unfinished
connections from Tomcat to the backend database?

- limit your connector thread pool size. That will still mean that if
requests begin to queue because of performance problems, the web server
can't create additional connections, but you won't get into an irregular
situation like the one you are experiencing now. In that case you would
need to configure a low idle timeout for the connections on the JK and
TC side.

  I'm not sure I want to do this because that would cause hiccups on 
the client UI and not allow them to connect. It would keep active ones open I 
imagine. I already have a 5 minute idle timeout on JK and TC...Guess I need to 
lower it down to like a minute or so... Just wondering if that would be too low.
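As a sketch of Rainer's suggestion, the idle timeouts can be kept in step on both sides (the property and attribute names are real mod_jk/Tomcat settings; the 60-second value below is only an example, not a recommendation):

```
# workers.properties (mod_jk side): drop idle backend connections after 60s
worker.basic.connection_pool_timeout=60

# server.xml (Tomcat side): the AJP connector's matching timeout is in ms
<Connector port="8009" protocol="AJP/1.3" connectionTimeout="60000" />
```

The mod_jk documentation recommends that `connection_pool_timeout` (seconds) and the connector's `connectionTimeout` (milliseconds) agree, so that both ends give up on an idle connection at the same time.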

-Isaac

Regards,

Rainer





RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-04 Thread Isaac Gonzalez
Dan,


From: Daniel Mikusa [dmik...@gopivotal.com]
Sent: Tuesday, March 04, 2014 6:20 AM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days

On Mar 4, 2014, at 6:32 AM, Rainer Jung rainer.j...@kippdata.de wrote:

 On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher(and Konstantin), attached is a couple of thread dumps of when 
 we experienced the issue again today. I also noticed we get this message 
 right before the problem occurs:
 Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to create new 
 native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea, terminating thread

 Is it a 32Bit system? You have 2GB of heap plus Perm plus native memory
 needed by the process plus thread stacks. It is not unlikely that you ran out
 of memory address space for a 32 bit process.

 The only fixes would then be:

 - switch to a 64 bit system

 - reduce heap if the app can work with less

 - improve performance or eliminate bottlenecks so that the app works
 with fewer threads

 - limit your connector thread pool size. That will still mean that if
 requests begin to queue because of performance problems, the web server
 can't create additional connections, but you won't get into the irregular
 situation you are experiencing now. In that case you would need to
 configure a low idle timeout for the connections on the JK and TC side.

It may also be possible to lower the thread stack size with the -Xss option.

Ok so we are on 64-bit Linux with the 1024k default in the 64-bit VM...would 
lowering it to 64k be a bit too low? What sort of repercussions would we run into?
Very helpful information by the way.

-Isaac

  http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_oom

Might buy you some room for a few additional threads.

Dan


 Regards,

 Rainer








Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-04 Thread Daniel Mikusa
On Mar 4, 2014, at 1:55 PM, Isaac Gonzalez igonza...@autoreturn.com wrote:

 Dan,
 
 
 From: Daniel Mikusa [dmik...@gopivotal.com]
 Sent: Tuesday, March 04, 2014 6:20 AM
 To: Tomcat Users List
 Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a 
 couple of days
 
 On Mar 4, 2014, at 6:32 AM, Rainer Jung rainer.j...@kippdata.de wrote:
 
 On 27.02.2014 23:06, Isaac Gonzalez wrote:
 Hi Christopher(and Konstantin), attached is a couple of thread dumps of 
 when we experienced the issue again today. I also noticed we get this 
 message right before the problem occurs:
 Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to create new 
 native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea, terminating 
 thread
 
 Is it a 32Bit system? You have 2GB of heap plus Perm plus native memory
 needed by the process plus thread stacks. It is not unlikely that you ran out
 of memory address space for a 32 bit process.
 
 The only fixes would then be:
 
 - switch to a 64 bit system
 
 - reduce heap if the app can work with less
 
 - improve performance or eliminate bottlenecks so that the app works
 with fewer threads
 
 - limit your connector thread pool size. That will still mean that if
 requests begin to queue because of performance problems, the web server
 can't create additional connections, but you won't get into the irregular
 situation you are experiencing now. In that case you would need to
 configure a low idle timeout for the connections on the JK and TC side.
 
 It may also be possible to lower the thread stack size with the -Xss option.
 
 Ok so we are on 64-bit Linux with the 1024k default in the 64-bit VM...would 
 lowering it to 64k be a bit too low? What sort of repercussions would we run into?
 Very helpful information by the way.

It depends on your apps, so you'll need to test and see.  If you go too low, 
you'll get StackOverflowErrors.  If you see those, just gradually increase the 
stack size until they go away.
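To put numbers on the trade-off: with 500 threads (the pool size mentioned earlier in the thread), shrinking the default 1024k stack to, say, 256k — a less aggressive cut than 64k — frees a substantial amount of address space. A sketch, with the 256k figure as an assumption:

```shell
# Address space freed by lowering -Xss from 1024k to 256k across 500 threads.
DEFAULT_KB=1024
NEW_KB=256        # assumed value; 64k is very likely too small for most webapps
THREADS=500
echo "$(( (DEFAULT_KB - NEW_KB) * THREADS / 1024 )) MB freed"

# The flag itself would go in the Tomcat init script or setenv.sh, e.g.:
#   CATALINA_OPTS="$CATALINA_OPTS -Xss256k"
```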

Dan


 
 -Isaac
 
  http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#threads_oom
 
 Might buy you some room for a few additional threads.
 
 Dan
 
 
 Regards,
 
 Rainer
 
 
 





Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-03-03 Thread Christopher Schultz

Isaac,

On 3/1/14, 12:41 AM, Isaac Gonzalez wrote:
  From: Christopher Schultz
 [ch...@christopherschultz.net] Sent: Friday, February 28, 2014
 11:40 AM To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk
 connections after server runs for a couple of days
 
 pipe size               (512 bytes, -p) 8
 POSIX message queues        (bytes, -q) 819200
 real-time priority              (-r) 0
 stack size              (kbytes, -s) 10240
 cpu time               (seconds, -t) unlimited
 max user processes              (-u) 1024
 
 You might want to increase this number. How many processes is
 tomcat running outside of the JVM? This is likely to be the
 limit you are hitting.
 
 Tomcat is only running about 7 processes total, one for each 
 JVM...but nothing else...unless I need to look beyond ps... Don't 
 think this is it...but you never know

Some *NIXs count individual threads as processes. You'll have to check
in your own environment.
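On Linux, per-process thread counts (which on CentOS 6 count against the max-user-processes/`nproc` ulimit) can be checked directly; a sketch:

```shell
# NLWP = number of light-weight processes (threads) in a process.
# Summed across a user's processes, this is what the
# "max user processes (-u) 1024" ulimit actually constrains.
ps -o nlwp= -p $$                 # threads in the current shell (at least 1)
# ps -u tomcat -o nlwp= | awk '{s+=$1} END {print s}'   # total for user tomcat
```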

-chris




Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-28 Thread Christopher Schultz

Konstantin,

On 2/27/14, 5:40 PM, Konstantin Kolinko wrote:
 2014-02-28 2:06 GMT+04:00 Isaac Gonzalez
 igonza...@autoreturn.com:
 Hi Christopher(and Konstantin), attached is a couple of thread
 dumps of when we experienced the issue again today. I also
 noticed we get this message right before the problem occurs: Feb
 27, 2014 12:47:15 PM
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run 
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to
 create new native thread) executing
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea,
 terminating thread
 
 That explains why a connection cannot be accepted.
 
 I wonder are you hitting an ulimit limit, or there is just not
 enough free memory to allocate stack area for a new thread (which
 size is set by -Xss option to java executable).
 
 Your thread dumps contain 149 threads each.

While it does explain why one (Tomcat) server would become
unresponsive, it doesn't really explain why the entire cluster would
become unresponsive.

Isaac, are you using sticky-sessions or anything like that, or does
your load-balancing mod_jk configuration choose arbitrarily between a
backend server? You initially gave an abridged configuration, so I
can't tell.

 After the split, did both Tomcats appear to lock-up 
 simultaneously, or did only one of them have a problem and the 
 other one stayed up?
 
 Isaac: They all appear to lock-up simultaneously, if users try to 
 access that JK mount point.
 
 [...]
 
 Isaac: I am not load-balancing the tomcat servers...I only have 
 one...I do load balance the apache front end servers via dns 
 round-robin

Oh, you only have a single back-end server? Well, then that's why they
all go down at once, so you seem to have found your problem: the
server itself is going down because you don't have enough resources to
keep it up.

-chris




Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-28 Thread Christopher Schultz

Isaac,

On 2/27/14, 6:23 PM, Isaac Gonzalez wrote:
 
 -Original Message- From: Konstantin Kolinko
 [mailto:knst.koli...@gmail.com] Sent: Thursday, February 27, 2014
 2:40 PM To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk
 connections after server runs for a couple of days
 
 2014-02-28 2:06 GMT+04:00 Isaac Gonzalez
 igonza...@autoreturn.com:
 Hi Christopher(and Konstantin), attached is a couple of thread
 dumps of when we experienced the issue again today. I also
 noticed we get this message right before the problem occurs: Feb
 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run 
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to
 create new native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea,
 terminating thread
 
 That explains why a connection cannot be accepted.
 
 I wonder are you hitting an ulimit limit, or there is just not
 enough free memory to allocate stack area for a new thread (which
 size is set by -Xss option to java executable).
 
 I thought of the ulimit settings and increased it to the upward
 limit allowed at the end of last weekend:
 [root@server ~]# su tomcat
 [tomcat@server root]$ ulimit -a
 core file size          (blocks, -c) 0
 data seg size           (kbytes, -d) unlimited
 scheduling priority             (-e) 0
 file size               (blocks, -f) unlimited
 pending signals                 (-i) 62835
 max locked memory       (kbytes, -l) 64
 max memory size         (kbytes, -m) unlimited
 open files                      (-n) 65535

Open-files can sometimes be a problem. This setting looks just fine to
me, though.

 pipe size               (512 bytes, -p) 8
 POSIX message queues        (bytes, -q) 819200
 real-time priority              (-r) 0
 stack size              (kbytes, -s) 10240
 cpu time               (seconds, -t) unlimited
 max user processes              (-u) 1024

You might want to increase this number. How many processes is tomcat
running outside of the JVM? This is likely to be the limit you are
hitting.
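If the max-user-processes ceiling is indeed the culprit, it can be raised for the tomcat account via the PAM limits mechanism; a sketch with an arbitrary example value of 4096:

```
# /etc/security/limits.d/tomcat.conf  (example values, adjust to your load)
tomcat  soft  nproc  4096
tomcat  hard  nproc  4096
```

On CentOS 6 the change takes effect for new login sessions of the tomcat user, so the Tomcat instances would need to be restarted afterwards.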

-chris




RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-28 Thread Isaac Gonzalez
Christopher


From: Christopher Schultz [ch...@christopherschultz.net]
Sent: Friday, February 28, 2014 11:38 AM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days


Konstantin,

On 2/27/14, 5:40 PM, Konstantin Kolinko wrote:
 2014-02-28 2:06 GMT+04:00 Isaac Gonzalez
 igonza...@autoreturn.com:
 Hi Christopher(and Konstantin), attached is a couple of thread
 dumps of when we experienced the issue again today. I also
 noticed we get this message right before the problem occurs: Feb
 27, 2014 12:47:15 PM
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to
 create new native thread) executing
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea,
 terminating thread

 That explains why a connection cannot be accepted.

 I wonder are you hitting an ulimit limit, or there is just not
 enough free memory to allocate stack area for a new thread (which
 size is set by -Xss option to java executable).

 Your thread dumps contain 149 threads each.

While it does explain why one (Tomcat) server would become
unresponsive, it doesn't really explain why the entire cluster would
become unresponsive.

 Isaac, are you using sticky-sessions or anything like that, or does
your load-balancing mod_jk configuration choose arbitrarily between a
backend server? You initially gave an abridged configuration, so I
can't tell.
  
   As you indicate below, I am not clustering. There is only one 
backend tomcat.

 After the split, did both Tomcats appear to lock-up
 simultaneously, or did only one of them have a problem and the
 other one stayed up?

 Isaac: They all appear to lock-up simultaneously, if users try to
 access that JK mount point.

 [...]

 Isaac: I am not load-balancing the tomcat servers...I only have
 one...I do load balance the apache front end servers via dns
 round-robin

Oh, you only have a single back-end server? Well, then that's why they
all go down at once, so you seem to have found your problem: the
server itself is going down because you don't have enough resources to
keep it up.

  Indeed I have! Seems like I underallocated server memory...the machine 
had only 8 gigs of RAM with 7 tomcat instances running, each with a maximum 
heap size of up to 2 gigs, plus OS stuff running, RabbitMQ, and other 
things. I am wondering though if something else could be the underlying root 
cause of this issue, or whether I was simply under-allocating memory...such as 
connections not being closed, either by the client mod_jk connector, or the db 
connector...We'll see in the next few days I guess
thanks again Chris, you and Konstantin pointed me to the issue...

  -Isaac


-chris




RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-28 Thread Isaac Gonzalez
Christopher,


From: Christopher Schultz [ch...@christopherschultz.net]
Sent: Friday, February 28, 2014 11:40 AM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days


Isaac,

On 2/27/14, 6:23 PM, Isaac Gonzalez wrote:

 -Original Message- From: Konstantin Kolinko
 [mailto:knst.koli...@gmail.com] Sent: Thursday, February 27, 2014
 2:40 PM To: Tomcat Users List Subject: Re: tomcat 6 refuses mod_jk
 connections after server runs for a couple of days

 2014-02-28 2:06 GMT+04:00 Isaac Gonzalez
 igonza...@autoreturn.com:
 Hi Christopher(and Konstantin), attached is a couple of thread
 dumps of when we experienced the issue again today. I also
 noticed we get this message right before the problem occurs: Feb
 27, 2014 12:47:15 PM
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to
 create new native thread) executing
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea,
 terminating thread

 That explains why a connection cannot be accepted.

 I wonder are you hitting an ulimit limit, or there is just not
 enough free memory to allocate stack area for a new thread (which
 size is set by -Xss option to java executable).

 I thought of the ulimit settings and increased it to the upward
 limit allowed at the end of last weekend:
 [root@server ~]# su tomcat
 [tomcat@server root]$ ulimit -a
 core file size          (blocks, -c) 0
 data seg size           (kbytes, -d) unlimited
 scheduling priority             (-e) 0
 file size               (blocks, -f) unlimited
 pending signals                 (-i) 62835
 max locked memory       (kbytes, -l) 64
 max memory size         (kbytes, -m) unlimited
 open files                      (-n) 65535

Open-files can sometimes be a problem. This setting looks just fine to
me, though.

 pipe size               (512 bytes, -p) 8
 POSIX message queues        (bytes, -q) 819200
 real-time priority              (-r) 0
 stack size              (kbytes, -s) 10240
 cpu time               (seconds, -t) unlimited
 max user processes              (-u) 1024

You might want to increase this number. How many processes is tomcat
running outside of the JVM? This is likely to be the limit you are
hitting.

   Tomcat is only running about 7 processes total, one for each 
JVM...but nothing else...unless I need to look beyond ps...
   Don't think this is it...but you never know
  
   -Isaac

-chris




Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-27 Thread Konstantin Kolinko
2014-02-28 2:06 GMT+04:00 Isaac Gonzalez igonza...@autoreturn.com:
 Hi Christopher(and Konstantin), attached is a couple of thread dumps of when 
 we experienced the issue again today. I also noticed we get this message 
 right before the problem occurs:
 Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to create new 
 native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea, terminating thread

That explains why a connection cannot be accepted.

I wonder whether you are hitting an ulimit,
or whether there is just not enough free memory to allocate stack area for a
new thread (whose size is set by the -Xss option to the java executable).

Your thread dumps contain 149 threads each.

Best regards,
Konstantin Kolinko




RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-27 Thread Isaac Gonzalez

-Original Message-
From: Konstantin Kolinko [mailto:knst.koli...@gmail.com] 
Sent: Thursday, February 27, 2014 2:40 PM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days

2014-02-28 2:06 GMT+04:00 Isaac Gonzalez igonza...@autoreturn.com:
 Hi Christopher(and Konstantin), attached is a couple of thread dumps of when 
 we experienced the issue again today. I also noticed we get this message 
 right before the problem occurs:
 Feb 27, 2014 12:47:15 PM 
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
 SEVERE: Caught exception (java.lang.OutOfMemoryError: unable to create 
 new native thread) executing 
 org.apache.jk.common.ChannelSocket$SocketAcceptor@177ddea, terminating 
 thread

That explains why a connection cannot be accepted.

I wonder are you hitting an ulimit limit, or there is just not enough free 
memory to allocate stack area for a new thread (which size is set by -Xss 
option to java executable).

I thought of the ulimit settings and increased it to the upward limit 
allowed at the end of last weekend:
[root@server ~]# su tomcat
[tomcat@server root]$ ulimit -a
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 62835
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 65535
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 10240
cpu time   (seconds, -t) unlimited
max user processes  (-u) 1024
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

Here are the options I am passing to Java in my init 
script (I'm not using the -Xss option):
-Dbuild.compiler.emacs=true -XX:MaxPermSize=256M -Xmx2048m 
-Duser.timezone=America/Los_Angeles -Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.port=8089 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false -Xdebug 
-Xrunjdwp:transport=dt_socket,address=8003,server=y,suspend=n

Your thread dumps contain 149 threads each.

So I am not maxing out threads...seems like each tomcat node is running out of 
memory at the same time

Best regards,
Konstantin Kolinko




RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-25 Thread Isaac Gonzalez
Hi Christopher, thanks so much for your replies.
 I am responding with inline comments below

From: Christopher Schultz [ch...@christopherschultz.net]
Sent: Monday, February 24, 2014 9:56 PM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days


Isaac,

On 2/24/14, 2:27 PM, Isaac Gonzalez wrote:
 Hello all,

 I'm running tomcat 6.0.32 on Cent OS 6 with 2 front end apache load
 balancers with a firewall in between the tomcat and load balancers
 using mod_jk  v. 1.2.37 under apache 2.2.10 to connect the backend
 tomcat. I have had this running ok for a few years but our user
 traffic has increased significantly. A few months ago, the tomcat
 server seemed to refuse or not accept any new connections from
 either load balancer and required a restart on the tomcat end, even
 though I could easily connect to tomcat on port 8080(manager). I
 can intermittently telnet to port 8009, but am sometimes denied as well,
 both inside and outside the firewall.

 I proceeded to split the tomcats up into their own instances,
 hoping when this issue recurred that it would only affect a
 particular tomcat app. It also gave our developers the ability to
 patch a single tomcat app without downing all of our apps.

 Unfortunately, this issue has recurred several times and I have
 spent most of my days researching and digging for hope of someone
 with a similar experience that may have resolved it. Last Friday
 the problem was so bad, I had to completely restart the tomcat
 server(reboot it).

 So far I am at a loss...I have installed psi-probe on all tomcat
 instances to give me more in depth analysis to tomcat threads and
 related server metadata when the problem is occuring. I have made a
 few modifications to workers.properties, in particular decreasing
 the connection timeout on both the mod_jk side and the tomcat ajp connector
 from 10 minutes to 5 minutes, and added the ping timeout and socket timeout.
 I also increased my apache prefork MPM client connections to 500 on
 each load balancer. Below are my relevant configs...any suggestions
 to help remedy this would be appreciated... I have also increased threads
 from 200 to 500 on all tomcat instances.

I'd be interested to see a thread dump on a stuck Tomcat to see what
it's doing. If it happens again, please take a thread dump (or, better
yet, 3 or so maybe 5-10 seconds apart) and post them back to the list.
http://wiki.apache.org/tomcat/HowTo#How_do_I_obtain_a_thread_dump_of_my_running_webapp_.3F
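The dumps Chris asks for can be scripted. This is a sketch that assumes the JDK's jstack is on the PATH (with `kill -3` as the fallback the wiki page describes, whose output lands in catalina.out); `TOMCAT_PID` is a placeholder you would point at the real process:

```shell
# Collect 3 thread dumps a few seconds apart for later comparison.
TOMCAT_PID=${TOMCAT_PID:-$$}   # placeholder: set this to the real Tomcat PID
for i in 1 2 3; do
  jstack "$TOMCAT_PID" > "threaddump-$i.txt" 2>/dev/null \
    || echo "jstack unavailable; try: kill -3 $TOMCAT_PID" > "threaddump-$i.txt"
  sleep 1                      # use ~10s between dumps in practice
done
ls threaddump-*.txt
```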

Isaac: Ok, I will submit one...PSI Probe shows them all but I have to click on 
each one at a time...

Does restarting the Tomcat instance fix everything, or do you have to
also bounce httpd? What happens if you bounce only httpd?



Isaac: Restarting the Tomcat instance fixes it. Bouncing httpd has no effect.

After the split, did both Tomcats appear to lock-up simultaneously,
or did only one of them have a problem and the other one stayed up?



Isaac: They all appear to lock-up simultaneously, if users try to access that 
JK mount point.

Do the lock-ups appear to be related to anything you can observe, such
as particularly high-load, etc.?

I have seen the lock-up appear when we had some network latency and other 
network issues affecting all externally-facing traffic at this datacenter. I 
have also seen it happen when there are database connectivity issues within 
the applications. Other times I have just seen it appear under what was 
possibly a high load.

 Workers.properties:

 worker.list=jkstatus,server1,server2,server3,server4,server5,server6,server7,server8


worker.jkstatus.type=status

 # Let's define some defaults
 worker.basic.port=8009
 worker.basic.type=ajp13
 worker.basic.socket_keepalive=True
 worker.basic.connection_pool_timeout=300
 worker.basic.ping_timeout=1000
 worker.basic.ping_mode=A
 worker.basic.socket_timeout=10

 worker.lb1.distance=0
 worker.lb1.reference=worker.basic

 worker.server1.host=server1hostname
 worker.server1.reference=worker.lb1
 worker.server2.host=server2hostname
 worker.server2.reference=worker.lb1
 worker.server3.host=server3hostname
 worker.server3.reference=worker.lb1
 worker.server4.host=server4hostname
 worker.server4.reference=worker.lb1
 worker.server5.host=server5hostname
 worker.server5.reference=worker.lb1
 worker.server6.host=server6hostname
 worker.server6.reference=worker.lb1
 worker.server7.host=server7hostname
 worker.server7.reference=worker.lb1
 worker.server8.host=server7hostname
 worker.server8.reference=worker.lb1

You didn't show any JkMounts in your httpd.conf file. What worker are
you using? It sounded like you were load-balancing the servers, but
your lb1 worker does not have any balance_workers setting so it
doesn't look like it's going to work.



Isaac: I am not load-balancing the tomcat servers...I only have one...I do 
load balance the apache front end servers via dns round-robin
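For reference, if real load balancing across several Tomcats were wanted here, the lb worker Chris mentions would need type=lb and a balance_workers list, plus a JkMount; a minimal sketch with hypothetical worker and path names:

```
# workers.properties
worker.list=lb1,jkstatus
worker.lb1.type=lb
worker.lb1.balance_workers=server1,server2

# httpd.conf
JkMount /myapp/* lb1
```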

Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-25 Thread Konstantin Kolinko
2014-02-24 23:27 GMT+04:00 Isaac Gonzalez igonza...@autoreturn.com:
 Hello all,

 I'm running tomcat 6.0.32

Can you upgrade to 6.0.39 or 7.0.52?

 on Cent OS 6 with 2 front end apache load balancers with a firewall in 
 between the tomcat and load balancers

A firewall between Apache HTTPD Server and Apache Tomcat?

Sometimes a firewall may drop a TCP connection without properly
terminating it. So your Tomcat might still think that it has 500 AJP
connections open and refuse new ones.

There have been several discussions on such issues over the years.

An old thread,
http://marc.info/?t=12181860762r=1w=2

Best regards,
Konstantin Kolinko




RE: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-25 Thread Isaac Gonzalez
Hi Konstantin,

I can try to upgrade to tomcat 6.0.39 or tomcat 7...It should be a simple 
enough upgrade...would possibly help out a bit.

I have worker.basic.socket_keepalive=True set, so according to the tomcat
connector documentation this should help with the firewall dropping open
connections. When I have this problem and check the AJP threads in PSI-Probe
for each tomcat instance, the count is always well under the maximum
threads. Perhaps I should be looking at AJP sessions instead? I'm not sure
there is a way to set a maximum or minimum for those.
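One way to cross-check what PSI-Probe shows is to count what the OS itself sees. A small sketch, assuming Linux net-tools `netstat -ant` output (local address in column 4, state in column 6) and AJP on port 8009; the sample input lines are made up for illustration:

```shell
# count_ajp: count ESTABLISHED connections to local port 8009 from
# `netstat -ant`-style output on stdin. If this number sits near the
# connector's maxThreads while httpd reports few busy workers, the
# firewall may be leaving connections half-open.
count_ajp() {
  awk '$4 ~ /:8009$/ && $6 == "ESTABLISHED" { n++ } END { print n+0 }'
}

# Live usage would be: netstat -ant | count_ajp
# Example with canned input:
count_ajp <<'EOF'
tcp 0 0 10.0.0.5:8009 10.0.0.7:41234 ESTABLISHED
tcp 0 0 10.0.0.5:8080 10.0.0.9:51000 ESTABLISHED
tcp 0 0 10.0.0.5:8009 10.0.0.7:41236 CLOSE_WAIT
EOF
```

Comparing this count on the Tomcat side with the same count taken on each httpd box can reveal connections that one end has already forgotten about.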

-Isaac

-Original Message-
From: Konstantin Kolinko [mailto:knst.koli...@gmail.com] 
Sent: Tuesday, February 25, 2014 3:33 PM
To: Tomcat Users List
Subject: Re: tomcat 6 refuses mod_jk connections after server runs for a couple 
of days

2014-02-24 23:27 GMT+04:00 Isaac Gonzalez igonza...@autoreturn.com:
 Hello all,

 I'm running tomcat 6.0.32

Can you upgrade to 6.0.39 or 7.0.52?

 on Cent OS 6 with 2 front end apache load balancers with a firewall in 
 between the tomcat and load balancers

A firewall between Apache HTTPD Server and Apache Tomcat?

Sometimes a firewall may drop a TCP connection without properly terminating it. 
So your Tomcat might still think that it has 500 AJP connections open and 
refuse new ones.

There have been several discussions on such issues over the years.

An old thread,
http://marc.info/?t=12181860762&r=1&w=2

Best regards,
Konstantin Kolinko

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



Re: tomcat 6 refuses mod_jk connections after server runs for a couple of days

2014-02-24 Thread Christopher Schultz
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Isaac,

On 2/24/14, 2:27 PM, Isaac Gonzalez wrote:
 Hello all,
 
 I'm running tomcat 6.0.32 on Cent OS 6 with 2 front-end apache load
 balancers, with a firewall in between the tomcat and the load
 balancers, using mod_jk v1.2.37 under apache 2.2.10 to connect to the
 backend tomcat. I have had this running OK for a few years, but our
 user traffic has increased significantly. A few months ago, the tomcat
 server seemed to refuse or not accept any new connections from
 either load balancer and required a restart on the tomcat end, even
 though I could easily connect to tomcat on port 8080 (manager). I
 can intermittently telnet to port 8009, but am sometimes denied as
 well, both inside and outside the firewall.
 
 I proceeded to split the tomcats up into their own instances,
 hoping when this issue recurred that it would only affect a
 particular tomcat app. It also gave our developers the ability to
 patch a single tomcat app without downing all of our apps.
 
 Unfortunately, this issue has recurred several times and I have
 spent most of my days researching and digging for hope of someone
 with a similar experience that may have resolved it. Last Friday
 the problem was so bad, I had to completely restart the tomcat
 server(reboot it).
 
 So far I am at a loss...I have installed psi-probe on all tomcat
 instances to give me more in depth analysis to tomcat threads and
 related server metadata when the problem is occurring. I have made a
 few modifications to workers.properties, in particular decreasing the
 connection timeout (on both the workers and the tomcat AJP connector)
 from 10 minutes to 5 minutes, and adding the ping timeout and socket
 timeout. I also increased my apache prefork MPM client connections to
 500 on each load balancer, and increased threads from 200 to 500 on
 all tomcat instances. Below are my relevant configs; any suggestions
 to help remedy this would be appreciated.

I'd be interested to see a thread dump from a stuck Tomcat to see what
it's doing. If it happens again, please take a thread dump (or, better
yet, 3 or so, maybe 5-10 seconds apart) and post them back to the list.
http://wiki.apache.org/tomcat/HowTo#How_do_I_obtain_a_thread_dump_of_my_running_webapp_.3F
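Taking several dumps a few seconds apart can be scripted. A minimal sketch, assuming jstack is on the PATH for the user running the JVM; when it is not, the fallback SIGQUIT (kill -3) makes the JVM print the dump to its own stdout, which for Tomcat usually ends up in catalina.out:

```shell
# dump_threads PID [COUNT] [DELAY]: take COUNT thread dumps of the JVM
# with the given PID, DELAY seconds apart, into threaddump-N.txt files.
dump_threads() {
  pid=$1; count=${2:-3}; delay=${3:-5}
  i=1
  while [ "$i" -le "$count" ]; do
    if command -v jstack >/dev/null 2>&1; then
      jstack "$pid" > "threaddump-$i.txt"
    else
      # SIGQUIT: the dump goes to the JVM's stdout (catalina.out)
      kill -3 "$pid"
    fi
    i=$((i + 1))
    if [ "$i" -le "$count" ]; then sleep "$delay"; fi
  done
}

# Usage (hypothetical PID): dump_threads 12345 3 5
```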

Does restarting the Tomcat instance fix everything, or do you have to
also bounce httpd? What happens if you bounce only httpd?

After the split, did both Tomcats appear to lock up simultaneously,
or did only one of them have a problem while the other stayed up?

Do the lock-ups appear to be related to anything you can observe, such
as particularly high load, etc.?

 Workers.properties:
 
 worker.list=jkstatus,server1,server2,server3,server4,server5,server6,server7,server8
 
 worker.jkstatus.type=status
 
 # Let's define some defaults
 worker.basic.port=8009
 worker.basic.type=ajp13
 worker.basic.socket_keepalive=True
 worker.basic.connection_pool_timeout=300
 worker.basic.ping_timeout=1000
 worker.basic.ping_mode=A
 worker.basic.socket_timeout=10
 
 worker.lb1.distance=0
 worker.lb1.reference=worker.basic
 
 worker.server1.host=server1hostname
 worker.server1.reference=worker.lb1
 worker.server2.host=server2hostname
 worker.server2.reference=worker.lb1
 worker.server3.host=server3hostname
 worker.server3.reference=worker.lb1
 worker.server4.host=server4hostname
 worker.server4.reference=worker.lb1
 worker.server5.host=server5hostname
 worker.server5.reference=worker.lb1
 worker.server6.host=server6hostname
 worker.server6.reference=worker.lb1
 worker.server7.host=server7hostname
 worker.server7.reference=worker.lb1
 worker.server8.host=server7hostname
 worker.server8.reference=worker.lb1

You didn't show any JkMounts in your httpd.conf file. What worker are
you using? It sounded like you were load-balancing the servers, but
your lb1 worker does not have any balance_workers setting so it
doesn't look like it's going to work.
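For reference, a mod_jk load-balancer setup normally names its members via balance_workers, roughly like the following sketch (worker names and hostnames here are placeholders, not taken from the posted config):

```properties
# Only the lb worker (plus jkstatus) goes in worker.list;
# the member workers are listed in balance_workers.
worker.list=jkstatus,lb1
worker.jkstatus.type=status

worker.lb1.type=lb
worker.lb1.balance_workers=server1,server2

worker.server1.type=ajp13
worker.server1.host=server1hostname
worker.server1.port=8009

worker.server2.type=ajp13
worker.server2.host=server2hostname
worker.server2.port=8009
```

If no balancing is intended, plain ajp13 workers referenced directly from JkMount lines work on their own, without any lb worker at all.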

 httpd.conf:
 
 KeepAlive Off
 MaxKeepAliveRequests 100
 KeepAliveTimeout 15
 
 # prefork MPM
 # StartServers: number of server processes to start
 # MinSpareServers: minimum number of server processes which are kept spare
 # MaxSpareServers: maximum number of server processes which are kept spare
 # ServerLimit: maximum value for MaxClients for the lifetime of the server
 # MaxClients: maximum number of server processes allowed to start
 # MaxRequestsPerChild: maximum number of requests a server process serves
 <IfModule prefork.c>
 StartServers         8
 MinSpareServers      5
 MaxSpareServers     20
 ServerLimit        500
 MaxClients         500
 MaxRequestsPerChild 5000
 </IfModule>

It would be good to see your Jk* settings as well.

 Tomcat server.xml:
 
 <!-- Define an AJP 1.3 Connector on port 8009 -->
 <Connector port="8009" address="x.x.x.x" protocol="AJP/1.3"
            redirectPort="8443" connectionTimeout="30" maxThreads="500" />
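A side note on units: the mod_jk documentation pairs the worker's connection_pool_timeout (in seconds) with the AJP connector's connectionTimeout (in milliseconds), so a worker setting of 300 seconds would correspond to a connector sketch like this (hypothetical values, matching the 300 used in the workers.properties above):

```xml
<!-- connectionTimeout is in milliseconds: 300000 ms = 300 s,
     matching worker.basic.connection_pool_timeout=300 -->
<Connector port="8009" protocol="AJP/1.3"
           redirectPort="8443"
           connectionTimeout="300000" maxThreads="500" />
```

By that reading, the posted connectionTimeout="30" would mean 30 milliseconds, which may be a typo for 300000.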

Why do you bother having a connectionTimeout on an AJP connection? httpd
should only send a request to you once the request line has been
received by the client, so