RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-22 Thread Pantvaidya, Vishwajit
 -Original Message-
 From: Rainer Jung [mailto:rainer.j...@kippdata.de]
 Sent: Friday, May 22, 2009 2:53 AM
 To: Tomcat Users List
 Subject: Re: Running out of tomcat threads - why many threads in
 RUNNABLEstage even with no activity
 
 My point is: persistent connections are good, but connections which are
 idle for a long time are not as good, so close them after some idle
 time, like e.g. 10 minutes. Of course this means you need to create new
 ones once your load goes up again, but that's not a big problem.

[Pantvaidya, Vishwajit] Why are connections that sit idle for a long time not 
good? I thought idle threads take only a little memory and CPU. Are there any 
other reasons?

Thanks a lot Rainer, Chuck, Chris, Andre, Pid, Martin and everyone else I 
missed. I spent quite some time yesterday chewing on everything I gathered in 
the last few days' interactions and the conflicting behavior we are seeing in 
our systems - that led to the following conclusions and action plan:

Behavior observed in different production systems:
a. medium-to-large thread count whether a firewall exists or not
b. % of runnable threads is much higher where a firewall sits between httpd and tomcat
c. at least 1 server where a firewall exists has run out of threads
d. at least 1 server where no firewall exists has run out of threads

Conclusions:
1. In general, runnable threads should not be a problem, unless they correspond 
to dropped connections. Since on our servers that have a firewall between httpd 
and tomcat, runnable connections are not being used for new requests and tomcat 
keeps creating new threads (leading to #b/#c above), those threads could 
correspond to:
i. connections dropped by the firewall, or
ii. hanging tomcat threads, as the httpd recycle timeout disconnected the 
connection from that side (and there was no connectionTimeout in server.xml so 
that tomcat could do the same), or
iii. a combination of i and ii
2. Runnable threads on servers where no firewall exists (and we do not see the 
server running out of threads) should not be a point of concern, as they do not 
correspond to dropped connections, as seen from the netstat output at the end of 
this email. So #a above could be ignored.
3. Observation #d above is puzzling and currently I have no answer for it.

Action:
- check both sides by using netstat -anop (Apache side and the Tomcat side 
without connectionTimeout, so you can see the problem in the original form). 
See whether the number of AJP connections in the various TCP states differs 
much between the netstat output on the Apache and on the Tomcat system.
- Bring workers.properties settings in line with Apache recommendations:
- Worker...cachesize=10 - set to 1
- Worker...cache_timeout=600 - remove
- Worker...recycle_timeout=300 - remove
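As a sketch of where the action items above would leave the worker configuration - assuming a worker named `worker1` (the name is a placeholder) on jk 1.2.x with prefork httpd - workers.properties would look roughly like:

```properties
# Placeholder worker name - substitute your own.
worker.worker1.type=ajp13
worker.worker1.socket_keepalive=1
# Prefork httpd handles one request at a time per child, so one AJP
# connection per child is enough; jk 1.2.15 does not autodetect this
# for prefork, hence the explicit setting.
worker.worker1.cachesize=1
# cache_timeout and recycle_timeout deliberately removed for the test.
```

On jk 1.2.28+ the deprecated cachesize/cache_timeout/recycle_timeout directives are superseded, so after the planned upgrade this block would shrink further.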


Netstat outputs: connector running on 21005, no firewall between httpd and tomcat

Httpd Side:

Proto Recv-Q Send-Q Local Address        Foreign Address     State        PID/Program name  Timer
tcp        0      0 129.41.29.241:53777  129.41.29.48:21005  ESTABLISHED  -                 keepalive (2869.65/0/0)
tcp        0      0 129.41.29.241:53943  129.41.29.48:21005  ESTABLISHED  -                 keepalive (3341.39/0/0)
tcp        0      0 129.41.29.241:49950  129.41.29.48:21005  ESTABLISHED  -                 keepalive (6701.51/0/0)
tcp        0      0 129.41.29.241:49927  129.41.29.48:21005  ESTABLISHED  -                 keepalive (6240.25/0/0)
tcp        0      0 129.41.29.241:49926  129.41.29.48:21005  ESTABLISHED  -                 keepalive (6239.47/0/0)
tcp        0      0 129.41.29.241:49971  129.41.29.48:21005  ESTABLISHED  -                 keepalive (6931.40/0/0)
tcp        0      0 129.41.29.241:49868  129.41.29.48:21005  ESTABLISHED  -                 keepalive (5743.83/0/0)
tcp        0      0 129.41.29.241:49865  129.41.29.48:21005  ESTABLISHED  -                 keepalive (5741.65/0/0)
tcp        0      0 129.41.29.241:49867  129.41.29.48:21005  ESTABLISHED  -                 keepalive (5743.16/0/0)
tcp        0      0 129.41.29.241:49901  129.41.29.48:21005  ESTABLISHED  -                 keepalive (5906.92/0/0)
tcp        0      0 129.41.29.241:49795  129.41.29.48:21005  ESTABLISHED  -                 keepalive (4659.11/0/0)
tcp        0      0 129.41.29.241:49558  129.41.29.48:21005  ESTABLISHED  -                 keepalive (1705.06/0/0)
tcp        0      0 129.41.29.241:50796  129.41.29.48:21005  ESTABLISHED  -                 keepalive (4551.79/0/0)
tcp        0      0 129.41.29.241:50784  129.41.29.48:21005  ESTABLISHED  -                 keepalive (4539.53/0/0)
tcp        0      0 129.41.29.241:50711  129.41.29.48:21005  ESTABLISHED  -                 keepalive

RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-22 Thread Pantvaidya, Vishwajit
 -Original Message-
 From: Rainer Jung [mailto:rainer.j...@kippdata.de]
 Sent: Friday, May 22, 2009 12:39 PM
 To: Tomcat Users List
 Subject: Re: Running out of tomcat threads - why many threads in
 RUNNABLEstage even with no activity
  [Pantvaidya, Vishwajit] Why are connections idle for a long time not
 good? I thought threads when idle take only a little memory and cpu. Are
 there any other reasons?
 
 Because you might want to monitor connections in order to learn how many
 threads you need for your load and how things grow or shrink over time.
 If you keep connections open for an infinite amount of time, you'll only
 monitor the biggest need since restart, which is often not very
 interesting, because it is often artificial (triggered by some
 performance slowness, you might have created a very large number of
 connections during a short time).

[Pantvaidya, Vishwajit] Good reason - I think that after some immediate testing 
to diagnose the out-of-threads issues, I will ultimately use timeouts.


  d. at least 1 server where no firewall exists has run out of threads
 
 Concurrency = Load * ResponseTime
 
 Concurrency: number of requests being processed in parallel
 Load: Number of Requests per Second being handled
 ResponseTime: Average Response time in seconds.
 
 So in case you have a performance problem and for a given load your
 response time goes up by a factor of ten, the number of connections will
 also go up by a factor of ten. That's most often the reason for d) and
 was the reason why we asked for thread dumps.

[Pantvaidya, Vishwajit] Again a good explanation that makes a lot of sense - I do 
seem to remember we had performance problems on that machine. I will keep this in 
mind, monitor threads, and take dumps if the out-of-threads condition recurs on 
that server.


  - Bring workers.properties settings in line with Apache recommendations:
  - Worker...cachesize=10 - set to 1
 
 Or, when using Apache, remove this entirely. Rely on the defaults for
 that one.

[Pantvaidya, Vishwajit] Sure will do - once we migrate to jk 1.2.28.


 
  - Worker...cache_timeout=600 - remove
  - Worker...recycle_timeout=300 - remove
 
 Hmmm.

[Pantvaidya, Vishwajit] Considering the excellent reasons you have given above, 
I will ultimately retain timeouts. But for testing the firewall issues, I need to 
roll back connectionTimeout in server.xml, and to keep my settings consistent I 
need to roll back the above timeouts also.

Again thanks - I think I have reasonable explanations for most of the issues and 
conflicting observations. This thread may be quiet for some time while I do more 
testing as per the actions mentioned - I will get back with results and final 
conclusions later.


-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-21 Thread Pantvaidya, Vishwajit

 
 1) If you want to analyze your original problem, you need to get back to
 the original situation, i.e. without connectionTimeout. It doesn't make
 much sense to guess about the original problem by looking at something
 very different.

[Pantvaidya, Vishwajit] Yes - I have already initiated that. Expect to be able 
to test the system without connectionTimeout tonight.


 2) The output of netstat and the content of a thread dump change in
 time. If you want to understand the exact relations between the two
 netstat outputs and a thread dump, you need to ensure to produce those
 three things as close in time as possible. I'm talking about seconds not
 milliseconds.

[Pantvaidya, Vishwajit] Yes, I had taken the netstats and the threads dumps 
pretty close together. But I will try and append a timestamp next time I take 
the output.


 3) I think I already indicated that you do not want to look at entries
 in TIME_WAIT state. This state is special and not related to any threads

[Pantvaidya, Vishwajit] My netstat output had FIN_WAIT and CLOSE_WAIT, but not 
TIME_WAIT. Did some reading on TCP states, and it seems to me that the next time 
I do a netstat (without the connectionTimeout in server.xml), I can ignore 
connections in all these wait states, as they just indicate connections in 
different stages of closing.


 4) Firewall idle connection drop: First read
 
 http://tomcat.apache.org/connectors-
 doc/generic_howto/timeouts.html#Firewall%20Connection%20Dropping
 
 carefully and try to understand.
 
 Any mod_jk attribute that takes a boolean value will accept 1, true,
 True, t or T as true, and 0, false, False, f or F as false (and maybe
 even more).

[Pantvaidya, Vishwajit] Our workers.properties file has most of the recommended settings:
Worker...type=ajp13
Worker...cachesize=10
Worker...cache_timeout=600
Worker...socket_keepalive=1
Worker...recycle_timeout=300

We are not setting connectionpoolsize and minsize - but from the timeouts doc 
that should be okay, as JK auto-adjusts to the httpd pool size. So I think the 
only thing left is to remove connectionTimeout and test.

Your link recommends connection timeouts and the JkOption +DisableReuse as a 
final option - I think that would remove persistent connections on both the 
httpd and tomcat sides. For us, the connectionTimeout alone worked. And my 
netstat output showing 11 connections on the httpd side and only 2 on the tomcat 
side means our httpd connections are persistent while the tomcat ones are not, 
right? So I am thinking the performance downside should be smaller than if I had 
also set +DisableReuse?
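For reference, the fallback the timeouts how-to describes - giving up AJP connection reuse entirely - is a single mod_jk directive in the Apache configuration. A sketch, to be used only if the firewall tests confirm silently dropped connections:

```apache
# httpd.conf (with mod_jk loaded): close the AJP connection after every
# request. Sidesteps firewall idle-drop entirely, at the cost of a new
# connection per request - usually a last resort.
JkOptions +DisableReuse
```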


 5) Matching port numbers
 
 Usually the port numbers should match. The non-matching of the port
 numbers could indicate that there is a firewall in between, although
 most firewall systems will be transparent to the ports (yes, I know
 there are many variations). Since the port numbers are very close I
 would guess that the reason for not matching is that the netstats were
 done a couple of seconds or more apart, and your connections are only
 used for a very short time, so we have lots of new-connection activity.

[Pantvaidya, Vishwajit] Yes - I confirmed with the admins that there is a 
firewall and I will work with them to understand that side more. Our connection 
timeouts are on the order of 10 minutes - so I am not sure why the port numbers 
don't match - I will try to find out whether the firewall has different port 
ranges configured for the httpd and tomcat sides.


 6) TCP states
 
 LISTEN on the Tomcat side corresponds to the one TP thread, that does a
 socket accept.
 
 ESTABLISHED: both sides still want to use this connection. On the Tomcat
 side it shows up as socketRead0().
 
 CLOSE_WAIT: the other side has closed the connection, the local side
 hasn't yet. E.g. if Tomcat closes the connection because of
 connectionTimeout, but Apache doesn't have a configured idle timeout and
 didn't yet try to reuse the half-closed connection, the connection will
 be shown as CLOSE_WAIT on the httpd side. If Apache closed the
 connection, but Tomcat hasn't noticed yet, it will be CLOSE_WAIT at the
 Tomcat end. In this case it could be also socketRead0() in the thread
 dump.
 
 FIN_WAIT2: most likely the other end of CLOSE_WAIT.

[Pantvaidya, Vishwajit] This is interesting, because all the connections in the 
netstat output on the httpd and tomcat sides which involved the connector port 
21065 (in either the local or foreign address) were in WAIT states. But I was 
seeing one RUNNABLE thread in socketAccept in the thread console. Anyway, I will 
redo this whole thing with the connectionTimeouts removed and make sure I take 
the netstats and thread dumps in close succession.

 
 7) mod_jk update
 
 Before you start to fix your mod_jk configuration, go to your ops people
 and tell them that they are using a very bad mod_jk version and they
 have to update. The right version to update to is 1.2.28. It makes no
 sense at all to try to fix this with your old version. Solve your
 problem in the right way, by setting many more attributes on the JK side
 than simply the connectionTimeout.

RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-21 Thread Pantvaidya, Vishwajit


 -Original Message-
 From: Christopher Schultz [mailto:ch...@christopherschultz.net]
 Sent: Thursday, May 21, 2009 10:05 AM
 To: Tomcat Users List
 Subject: Re: Running out of tomcat threads - why many threads in
 RUNNABLEstage even with no activity
 
 
 Vishwajit,
 
 On 5/20/2009 3:01 PM, Pantvaidya, Vishwajit wrote:
  [Pantvaidya, Vishwajit] Ok so RUNNABLE i.e. persistent threads should
  not be an issue. The only reason why I thought that was an issue was
  that I was observing that none of the RUNNABLE connections were
  being used to serve new requests, only the WAITING ones were -
  and I do know for sure that the RUNNABLE threads were not servicing
  any existing requests as I was the only one using the system then.
 
 It seems pretty clear that this is what your problem is. See if you can
 follow the order of events described below:
 
 1. Tomcat and Apache httpd are started. httpd makes one or more
(persistent) AJP connections to Tomcat and holds them open (duh).
Each connection from httpd to Tomcat puts a Java thread in the RUNNABLE
state (though actually blocked on a socket read, so it's not really
runnable)
 
 2. Some requests are received by httpd and sent over the AJP connections
to Tomcat (or not ... it really doesn't matter)
 
 3. Time passes, your recycle_timeout (300s) or cache_timeout (600s)
expires
 
 4. A new request comes in to httpd destined for Tomcat. mod_jk dutifully
follows your instructions for closing the connections expired in #3
above (note that Tomcat has no idea that the connection has been
closed, and so those threads remain in the RUNNABLE state, not
connected to anything, lost forever)
 
 5. A new connection (or multiple new connections... not sure exactly
how mod_jk's connection expiration-and-reconnect logic is done)
is made to Tomcat, which allocates a new thread (or threads)
which is/are in the RUNNABLE state
 
 Rinse, repeat, your server chokes to death when it runs out of threads.
 
 The above description accounts for your loss of 4 threads at a time:
 your web browser requests the initial page followed by 3 other assets
 (image, css, whatever). Each one of them hits step #4 above, causing a
 new AJP connection to be created, with the old one still hanging around
 on the Tomcat side just wasting a thread and memory.
 
 By setting connectionTimeout on the AJP Connector, you are /doing what
 you should have done in the first place, which is match mod_jk
 cache_timeout with Connector connectionTimeout/. This allows the threads
 on the Tomcat side to expire just like those on the httpd side. They
 should expire at (virtually) the same time and everything works as
 expected.
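A sketch of the matched pair described above - the values are illustrative (10 minutes on both sides), the connector port is the Tomcat default rather than the one from this thread, and `worker1` is a placeholder worker name. Note the units differ: milliseconds in server.xml, seconds in workers.properties.

```xml
<!-- server.xml: AJP connector; connectionTimeout is in milliseconds -->
<Connector port="8009" protocol="AJP/1.3"
           connectionTimeout="600000" />
```

```properties
# workers.properties: cache_timeout is in seconds
worker.worker1.cache_timeout=600
```

With both set to the same effective interval, each side drops an idle connection at roughly the same moment, so neither end is left reading from a socket the other has abandoned.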

[Pantvaidya, Vishwajit] Thanks Chris - all this makes a lot of sense. However I 
am not seeing the same problem (tomcat running out of threads) on other servers 
which are running exactly the same configuration, except that in those cases 
there is no firewall separating the web server and tomcat. Here are the RUNNABLE 
figures for 3 different tomcat servers running the same config:

1. Firewall between httpd and tomcat - 120 threads, 112 runnable (93%)
2. No firewall between httpd and tomcat - 40 threads, 11 runnable (27%)
3. No firewall between httpd and tomcat - 48 threads, 2 runnable (4%)

Leads me to believe there is some firewall related mischief happening with #1.


 This problem is compounded by your initial configuration, which created
 10 connections from httpd to Tomcat for every (prefork) httpd process,
 resulting in 9 useless AJP connections per httpd process. I
 suspect that you were expiring 10 connections at a time instead of just
 one, meaning that you were running out of threads 10 times faster than
 you otherwise would.

[Pantvaidya, Vishwajit] I did not notice connections expiring in multiples of 10, 
but I will keep an eye out for this. However, from the cachesize explanation at 
http://tomcat.apache.org/connectors-doc/reference/workers.html#Deprecated%20Worker%20Directives
 I get the impression that this value imposes an upper limit - meaning it may 
not necessarily create 10 tomcat/jk connections for an httpd child process.


 Suggestions:
 1. Tell your ops guys we know what we're talking about
 2. Upgrade mod_jk
 3. Set connection_pool_size=1, or, better yet, remove the config
altogether and let mod_jk determine its own value
 4. Remove all timeouts unless you know that you have a misbehaving
firewall. If you do, enable cping/cpong (the strategy preferred
by at least one author of mod_jk)
 
 - -chris

[Pantvaidya, Vishwajit] I will set
- cachesize=1 (the doc says jk will auto-set this value only for worker-mpm, and 
we use httpd 2.0 prefork)
- remove cache and recycle timeouts

But before all this, I will retest after removing connectionTimeout in 
server.xml - just to test if there are firewall caused issues as mentioned 
above.



RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-21 Thread Pantvaidya, Vishwajit
 -Original Message-
 From: Rainer Jung [mailto:rainer.j...@kippdata.de]
 Sent: Thursday, May 21, 2009 3:37 PM
 To: Tomcat Users List
 Subject: Re: Running out of tomcat threads - why many threads in
 RUNNABLEstage even with no activity
 
 On 22.05.2009 00:19, Pantvaidya, Vishwajit wrote:
  [Pantvaidya, Vishwajit] I will set
  - cachesize=1 (doc says jk will autoset this value only for worker-mpm
 and we use httpd 2.0 prefork)
 
 You don't have to: JK will discover this number for the Apache web
 server automatically and set the pool size to this value.

[Pantvaidya, Vishwajit] Does what you say hold true for jk 1.2.15 also? Because 
I saw that for the 1.2.15 cachesize directive, 
http://tomcat.apache.org/connectors-doc/reference/workers.html#Deprecated%20Worker%20Directives
 says that JK will discover the number of threads per child process on an Apache 
2 web server with worker-mpm and set its default value to match the 
ThreadsPerChild Apache directive. Since we use the prefork MPM, I assumed we 
need to set cachesize.

  -  remove cache and recycle timeouts
 
 Chris and I are not of the same opinion here. You can choose :)
 
[Pantvaidya, Vishwajit] I think that may be only because my adding the 
connectionTimeout led you to believe that I wanted non-persistent connections. 
Now that I know persistent connections are better, I am trying to roll back 
connectionTimeout - and then I guess you will agree with Chris that I need to 
roll back the recycle_timeout etc. in the workers file on the httpd side also?





RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-20 Thread Pantvaidya, Vishwajit

  RUNNABLE and WAITING are thread states in the JVM. They don't relate in
  general to states inside Tomcat. In this special situation they do.
 
  The states you observe are both completely normal in themselves. One
  (the stack you abbreviate with RUNNABLE) is handling a persistent
  connection between web server and Tomcat which could send more requests,
  but at the moment no request is being processed, the other (you
  abbreviate with WAITING) is available to be associated with a new
  connection that might come in some time in the future.
 
 
 [Pantvaidya, Vishwajit] Thanks Rainer. The RUNNABLE thread - is it a
 connection between Tomcat and the webserver, or between Tomcat and AJP? Is it
 still RUNNABLE and not WAITING because the servlet has not explicitly
 closed the connection yet (something like
 HttpServletResponse.getOutputStream().close())?
 
 
 [Pantvaidya, Vishwajit] My problem is that tomcat is running out of
 threads (maxthreadcount=200). My analysis of the issue is:
 - thread count is exceeded because of a slow buildup of RUNNABLE threads
 (and not because the number of simultaneous http requests at some point
 exceeded the max thread count)
 - most/all newly created TP-Processor threads are in RUNNABLE state and
 remain RUNNABLE - never go back to WAITING state (waiting for thread pool)
 - in such case, I find that tomcat spawns new threads when a new request
 comes in
 - this continues and finally tomcat runs out of threads
 - Setting connectionTimeout in server.xml seems to have resolved the issue
 - but I am wondering if that was just a workaround, i.e. whether so many
 threads remaining RUNNABLE indicates a flaw in our webapp, i.e. it not doing
 whatever's necessary to close them and return them to the WAITING state.
 

[Pantvaidya, Vishwajit] After setting connectionTimeout in tomcat's server.xml, 
the number of open threads is now consistently under 10 and most of them are now 
in the WAITING state. So it looks like connectionTimeout also destroys idle 
threads. But I am still wondering - why should I have to set connectionTimeout 
to prevent tomcat running out of threads? I certainly don't mind the 
TP-Processor threads hanging around as long as they are in the WAITING state.
1. Is it expected behavior that most tomcat threads are in the RUNNABLE state?
2. If not, does it indicate a problem in the app or in the tomcat configuration?
My thinking is that the answer to #1 is no, and to #2 that it is an app problem. 
But I just wanted to confirm and find out what people out there think.





RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-20 Thread Pantvaidya, Vishwajit
 
  [Pantvaidya, Vishwajit] My problem is that tomcat is running out of
 threads (maxthreadcount=200). My analysis of the issue is:
  - thread count is exceeded because of a slow buildup of RUNNABLE
 threads (and not because the number of simultaneous http requests at some
 point exceeded the max thread count)
 
 I don't believe this reason. I would say the thread count is exceeded
 because you allow a much higher concurrency on the web server layer.
 
[Pantvaidya, Vishwajit] Is there a tool you can recommend for monitoring/logging 
the http requests, so that I have figures to back up my analysis?


  - most/all newly created TP-Processor threads are in RUNNABLE state and
 remain RUNNABLE - never go back to WAITING state (waiting for thread pool)
 
 So you are using persistent connections. There's no *problem* with that
 per se. If you are uncomfortable with it, configure the timeouts in the
 Tomcat connector *and* mod_jk.
 
[Pantvaidya, Vishwajit] Ok so RUNNABLE i.e. persistent threads should not be an 
issue. The only reason why I thought that was an issue was that I was observing 
that none of the RUNNABLE connections were being used to serve new 
requests, only the WAITING ones were - and I do know for sure that the RUNNABLE 
threads were not servicing any existing requests, as I was the only one using 
the system then.

  - in such case, I find that tomcat spawns new threads when a new request
 comes in
 
 request - connection
 
  - this continues and finally tomcat runs out of threads
 
 That's too simple; usually the new requests should be handled by
 existing Apache processes that already have a connection to Tomcat and
 will not create a new one.
 
[Pantvaidya, Vishwajit] In my case the existing persistent connections are not 
servicing any new requests.




RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-20 Thread Pantvaidya, Vishwajit
 -Original Message-
 From: Rainer Jung [mailto:rainer.j...@kippdata.de]
 Sent: Wednesday, May 20, 2009 11:53 AM
 To: Tomcat Users List
 Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE
 stage even with no activity
 
 On 20.05.2009 19:47, Caldarale, Charles R wrote:
  From: Caldarale, Charles R [mailto:chuck.caldar...@unisys.com]
  Subject: RE: Running out of tomcat threads - why many threads
  inRUNNABLEstage even with no activity
 
  - Setting connectionTimeout in server.xml seems to have resolved
  the issue
  Only because you're throwing away what appears to be a usable
  connection that's designed to be persistent.
 
  Do you have something between Tomcat and httpd that could be silently
 closing connections?  (Some badly behaved firewalls are known to do this.)
 That would make the existing AJP connections useless, without notifying
 the Tomcat threads that the connection is no longer there.  Setting the
 timeout would allow those connections to be discarded and new ones
 created.
 
 That's a good point. You should check both sides by using netstat -an.
 The Apache side and the Tomcat side (without connectionTimeout, so you
 can see the problem in the original form). See whether the number of AJP
 connections in the various TCP states differs much between the netstat
 output on the Apache and on the Tomcat system.
 

[Pantvaidya, Vishwajit] Ok, will do this.
To complicate things - we are seeing these out-of-thread problems on only one of 
our production servers - so I need to figure out whether there are any 
differences in firewall settings between the 2 servers.

Finally, is it possible that some bad code in the app could be hanging onto 
those RUNNABLE connections, which is why tomcat is not releasing them? Or if 
that were the case, would the stack trace of that thread show the code that was 
hanging onto it? In my case, all RUNNABLE connections show the stack trace:

TP-Processor4 - Thread t...@29
   java.lang.Thread.State: RUNNABLE
     at java.net.PlainSocketImpl.socketAccept(Native Method)...
     at org.apache.jk.common.ChannelSocket.accept(ChannelSocket.java:293)...
     at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
     at java.lang.Thread.run(Thread.java:595)




RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-20 Thread Pantvaidya, Vishwajit
 
  Finally, is it possible that some bad code in the app could
  be hanging onto those RUNNABLE connections which is why tomcat
  is not releasing them?
 
 Once more: NO, NO, NO!  The threads you see in a RUNNABLE state are
 perfectly normal and expected.  Go do the netstat that Rainer suggested
 and let us know what you see.  Stop fixating on the thread state.
 
  - Chuck
 
 

[Pantvaidya, Vishwajit] Ok will do Chuck - thanks a lot for persisting with me 
through this issue.





RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-20 Thread Pantvaidya, Vishwajit
 -Original Message-
 From: Rainer Jung [mailto:rainer.j...@kippdata.de]
 Sent: Wednesday, May 20, 2009 11:53 AM
 To: Tomcat Users List
 Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE
 stage even with no activity
 
 On 20.05.2009 19:47, Caldarale, Charles R wrote:
  From: Caldarale, Charles R [mailto:chuck.caldar...@unisys.com]
  Subject: RE: Running out of tomcat threads - why many threads
  inRUNNABLEstage even with no activity
 
  - Setting connectionTimeout in server.xml seems to have resolved
  the issue
  Only because you're throwing away what appears to be a usable
  connection that's designed to be persistent.
 
  Do you have something between Tomcat and httpd that could be silently
 closing connections?  (Some badly behaved firewalls are known to do this.)
 That would make the existing AJP connections useless, without notifying
 the Tomcat threads that the connection is no longer there.  Setting the
 timeout would allow those connections to be discarded and new ones
 created.
 
 That's a good point. You should check both sides by using netstat -an.
 The Apache side and the Tomcat side (without connectionTimeout, so you
 can see the problem in the original form). See whether the number of AJP
 connections in the various TCP states differs much between the netstat
 output on the Apache and on the Tomcat system.
 
[Pantvaidya, Vishwajit] My tomcat connector port is 21065. Netstat shows the 
following output under Active Internet Connections:

On httpd machine

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address        Foreign Address        State
tcp        0      0 0.0.0.0:25           0.0.0.0:*              LISTEN
tcp        1      0 129.41.29.243:43237  172.27.127.201:21065   CLOSE_WAIT
tcp        1      0 129.41.29.243:43244  172.27.127.201:21065   CLOSE_WAIT
tcp        1      0 129.41.29.243:43245  172.27.127.201:21065   CLOSE_WAIT
tcp        1      0 129.41.29.243:43225  172.27.127.201:21065   CLOSE_WAIT
tcp        1      0 129.41.29.243:43227  172.27.127.201:21065   CLOSE_WAIT
tcp        0      0 129.41.29.243:43239  172.27.127.202:21069   ESTABLISHED
tcp        0      0 129.41.29.243:43238  172.27.127.202:21069   ESTABLISHED
tcp        0      0 129.41.29.243:43243  172.27.127.202:21069   ESTABLISHED
tcp        0      0 129.41.29.243:43242  172.27.127.202:21069   ESTABLISHED
tcp        0      0 129.41.29.243:43241  172.27.127.202:21069   ESTABLISHED
tcp        0      0 129.41.29.243:43240  172.27.127.202:21069   ESTABLISHED
tcp        0      0 129.41.29.243:43247  172.27.127.202:21069   ESTABLISHED
tcp        0      0 129.41.29.243:43246  172.27.127.202:21069   ESTABLISHED
tcp        0      0 129.41.29.243:43219  172.27.127.202:21069   TIME_WAIT
tcp        0      0 129.41.29.243:43209  172.27.127.202:21069   ESTABLISHED
tcp        0      0 129.41.29.243:43208  172.27.127.202:21069   TIME_WAIT


On tomcat machine

Proto Recv-Q Send-Q Local Address            Foreign Address          State
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43216   FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43217   FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43218   FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43211   TIME_WAIT
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43212   FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43213   FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43214   FIN_WAIT2
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43215   TIME_WAIT
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43204   TIME_WAIT
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43205   TIME_WAIT
tcp        0      0 :::172.27.127.201:21065  :::129.41.29.243:43206   TIME_WAIT


- My thread dump shows 8 TP-Processor threads - but this output has 11 connections.
- Why do the 11 connections in the httpd output show port 21069 in the foreign 
address? Or are those not the right connections to be looking at?
- Currently I do have connectionTimeout set in server.xml. I will need to wait 
until tonight to reset that.




RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-20 Thread Pantvaidya, Vishwajit
 The fact that *none* of the ports match would suggest (but not prove) that
 someone in the middle is closing the connections, and not telling either
 end about it.
 
 Do the netstat -anop again; it should be more interesting.
 
  - Chuck
 

[Pantvaidya, Vishwajit] Tomcat server port 11065, connector port 21065

On Httpd Side:

Proto Recv-Q Send-Q Local Address        Foreign Address        State       PID/Program name  Timer
...
tcp        0      0 0.0.0.0:25           0.0.0.0:*              LISTEN      -                 off (0.00/0/0)
tcp        1      0 129.41.29.243:44003  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7194.80/0/0)
tcp        1      0 129.41.29.243:44002  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7194.43/0/0)
tcp        1      0 129.41.29.243:44001  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7192.26/0/0)
tcp        1      0 129.41.29.243:44000  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7189.64/0/0)
tcp        1      0 129.41.29.243:43990  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7016.23/0/0)
tcp        1      0 129.41.29.243:43999  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7189.30/0/0)
tcp        1      0 129.41.29.243:43998  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7186.76/0/0)
tcp        1      0 129.41.29.243:43996  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7183.86/0/0)
tcp        1      0 129.41.29.243:43994  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7174.09/0/0)
tcp        1      0 129.41.29.243:43993  172.27.127.201:21065   CLOSE_WAIT  -                 keepalive (7164.63/0/0)
...


On Tomcat side:

(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address   Foreign Address 
State   PID/Program nameTimer
...
tcp0  0 :::21065:::*
LISTEN  6988/java   off (0.00/0/0)
tcp0  0 :::127.0.0.1:11065  :::*
LISTEN  6988/java   off (0.00/0/0)
tcp0  0 :::172.27.127.201:21065 :::129.41.29.243:43992  
FIN_WAIT2   -   timewait (56.71/0/0)
tcp0  0 :::172.27.127.201:21065 :::129.41.29.243:43991  
FIN_WAIT2   -   timewait (56.24/0/0)
...






RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-20 Thread Pantvaidya, Vishwajit
  The fact that *none* of the ports match would suggest (but not prove)
 that
  someone in the middle is closing the connections, and not telling either
  end about it.
 
  Do the netstat -anop again; it should be more interesting.
 
   - Chuck
 
 
 [Pantvaidya, Vishwajit] Tomcat server port 11065, connector port 21065
 
 On Httpd Side:
 
 Proto Recv-Q Send-Q Local Address   Foreign Address
 State   PID/Program nameTimer
 ...
 tcp0  0 0.0.0.0:25  0.0.0.0:*
 LISTEN  -   off (0.00/0/0)
 tcp1  0 129.41.29.243:44003 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7194.80/0/0)
 tcp1  0 129.41.29.243:44002 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7194.43/0/0)
 tcp1  0 129.41.29.243:44001 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7192.26/0/0)
 tcp1  0 129.41.29.243:44000 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7189.64/0/0)
 tcp1  0 129.41.29.243:43990 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7016.23/0/0)
 tcp1  0 129.41.29.243:43999 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7189.30/0/0)
 tcp1  0 129.41.29.243:43998 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7186.76/0/0)
 tcp1  0 129.41.29.243:43996 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7183.86/0/0)
 tcp1  0 129.41.29.243:43994 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7174.09/0/0)
 tcp1  0 129.41.29.243:43993 172.27.127.201:21065
 CLOSE_WAIT  -   keepalive (7164.63/0/0)
 ...
 
 
 On Tomcat side:
 
 (Not all processes could be identified, non-owned process info
  will not be shown, you would have to be root to see it all.)
 Active Internet connections (servers and established)
 Proto Recv-Q Send-Q Local Address   Foreign Address
 State   PID/Program nameTimer
 ...
 tcp0  0 :::21065:::*
 LISTEN  6988/java   off (0.00/0/0)
 tcp0  0 :::127.0.0.1:11065  :::*
 LISTEN  6988/java   off (0.00/0/0)
 tcp0  0 :::172.27.127.201:21065 :::129.41.29.243:43992
 FIN_WAIT2   -   timewait (56.71/0/0)
 tcp0  0 :::172.27.127.201:21065 :::129.41.29.243:43991
 FIN_WAIT2   -   timewait (56.24/0/0)
 ...
 
 
 
[Pantvaidya, Vishwajit] By the way, in the thread console, I see 8 TP-Processor 
threads (2 RUNNABLE, 6 WAITING). But the above netstat output on the tomcat side 
shows only 2 connections on port 21065. Shouldn't there be one thread per connection? 




Pointing java.endorsed.dirs to a different location when running tomcat

2009-05-20 Thread Pantvaidya, Vishwajit
I am running a webapp under tomcat 5.5 with the server.xml having a Host 
element as:

<Host name="localhost" appBase="C:/myapphome" debug="0">

The webapp needs to set the system property java.endorsed.dirs to a location 
like C:\myapphome\lib\endorsed.
But the setclasspath.bat that comes bundled with tomcat sets this property to 
CATALINA_HOME\common\endorsed. It does not even check whether the property is 
already set - so calling the tomcat script from a wrapper script that sets this 
property will not work.

Any options/recommendations to resolve this issue?
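(A possible workaround, sketched under the assumption that editing the bundled scripts is acceptable: add a guard to setclasspath.bat so that an externally supplied JAVA_ENDORSED_DIRS is honored rather than overwritten. The label name below is illustrative.)

```bat
rem Sketch: honor a pre-set JAVA_ENDORSED_DIRS instead of always
rem overwriting it (label name is illustrative)
if not "%JAVA_ENDORSED_DIRS%" == "" goto gotEndorsedDirs
set JAVA_ENDORSED_DIRS=%CATALINA_HOME%\common\endorsed
:gotEndorsedDirs
```

A wrapper script could then set JAVA_ENDORSED_DIRS=C:\myapphome\lib\endorsed before calling the tomcat scripts.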





RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-20 Thread Pantvaidya, Vishwajit
 This FAQ entry looks promising:
 http://tomcat.apache.org/connectors-doc/miscellaneous/faq.html
 
 Look at the entry entitled "I've got a firewall between my web server and
 Tomcat which drops ajp13 connections after some time".
 
 Configuring keep-alives is a fairly low-overhead workaround, but it would
 be better to fix the firewall so it doesn't silently drop connections.
 
  - Chuck
 
[Pantvaidya, Vishwajit] Thanks Chuck. My workers.properties already has 
following settings:

port=21065
cachesize=10
cache_timeout=600
socket_keepalive=1
recycle_timeout=300

So socket_keepalive is already 1 - does this mean the firewall is dropping 
connections in spite of it?

About the netstat output I sent earlier - I guess an indicator of a firewall 
dropping connections would be if the output showed many more active 
connections on the tomcat side than on the httpd side - is that accurate?
My netstat output had only 2 tomcat connections in FIN_WAIT2 and about 11 
in keepalive on the httpd side - I guess this does not indicate any hanging 
connections? Could that be because connectionTimeout is currently active in my 
server.xml?
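(For reference, a hedged sketch of the FAQ-style setup - keepalive probes plus explicit idle timeouts on both sides. The worker name and the 10-minute values are illustrative, and connection_pool_timeout is the newer mod_jk name, which may not exist in 1.2.15.)

```properties
# workers.properties sketch - worker name and values are illustrative
worker.ajp13w.type=ajp13
# ask the OS to send TCP keepalive probes on idle backend connections
worker.ajp13w.socket_keepalive=1
# close pooled connections idle for more than 600 seconds
worker.ajp13w.connection_pool_timeout=600
```

This would be paired with connectionTimeout="600000" (milliseconds) on the matching AJP Connector in server.xml, so both ends agree on when an idle connection may go away.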





Caching static files in hierarchical directory structure

2009-05-19 Thread Pantvaidya, Vishwajit
Is there any way to cache sets of files in multiple levels of a hierarchical 
directory structure, e.g.
/js/*.js
/js/1/*.js
/js/1/1/*.js

I was checking this out on the httpd side using mod_file_cache, mod_headers and 
mod_expires. The Directory and other directives seem to take wildcards like * 
and ?, but I don't see anything to span multiple levels.
Or will it work if I just point to the top of the hierarchy, i.e. /js here?
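(One hedged option: instead of per-directory wildcards, match the whole subtree with a regex. A sketch using mod_expires; the 7-day lifetime is illustrative.)

```apache
# Matches /js/a.js, /js/1/b.js, /js/1/1/c.js, ... at any depth
<LocationMatch "^/js/.*\.js$">
    ExpiresActive On
    ExpiresDefault "access plus 7 days"
</LocationMatch>
```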





RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-19 Thread Pantvaidya, Vishwajit
  Yes, sure - we will upgrade at some point of
  time. But since upgrading all our servers will be some work, that may
  not happen right away.
 
 Upgrading mod_jk is the least painful of all of these, and the most
 likely to affect you.
 
[Pantvaidya, Vishwajit] I understand and agree and will push for this - but 
most admins are conservative, so I am not harboring high hopes of success. 

  Here are figures from 3 of the servers which
  have not yet run out of threads (so the thread count does not add up
  to 200). I have taken these late at night when no users are present,
  so I was expecting all threads to be Waiting for tomcat thread-pool.
 
  1. Total TP-Processor threads 48, Waiting 46, Runnable 2
  2. Total TP-Processor threads 40, Waiting 29, Runnable 11
  3. Total TP-Processor threads 120, Waiting 7, Runnable 113
 
 Are you sure you aren't seeing any traffic, even that late at night?
 What if you watch the access logs? Are there requests actively being
 serviced?
 
[Pantvaidya, Vishwajit] I was tailing the logs - there were no accesses.

  Do you think this could be because of the
  application? I was under the impression that there is some tomcat
  config parameter that controls this - which was set to 4.
 
 No, Tomcat uses precisely 1 thread to handle each incoming HTTP request.
 If keepalives are used, multiple requests may be handled by the same
 thread before it is returned to the pool, or different threads may be
 used to serve different requests from the single connection, but in
 either case, no more than 1 thread will be used to service a single HTTP
 request.
 

[Pantvaidya, Vishwajit] Could this happen if, upon my browser's request, the 
app spawns multiple redirects in quick succession, leading tomcat to create 
multiple threads? Any other thoughts on why I could be seeing tomcat spawn 
threads in multiples of 4?

  My workers config is:
 
  Worker...type=ajp13
  Worker...cachesize=10
  Are you using the prefork MPM? If so, cachesize should be /1/.
 
  [Pantvaidya, Vishwajit] Could you please elaborate. What is the
  prefork MPM?
 
 The MPM is the concurrency strategy employed by Apache httpd. Either you
 are using the prefork MPM which starts multiple httpd processes to
 handle requests, or you are using the worker MPM which starts multiple
 threads to handle requests. Actually, mod_jk should be able to
 auto-detect the appropriate cachesize (called connection_pool_size,
 now), so you shouldn't have to set this.
 
[Pantvaidya, Vishwajit] Ok thanks. httpd -l showed prefork.c, so I guess that 
means we are using the prefork MPM. So our cachesize should be 1? Our mod_jk 
version is 1.2.15 - will that also auto-detect the cache size?
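(If the pool size does need to be pinned explicitly for prefork, the change would be small; a sketch with an illustrative worker name. Newer mod_jk releases document connection_pool_size as auto-detected, so this may be unnecessary after an upgrade.)

```properties
# prefork httpd children are single-threaded, so one backend
# connection per child process is enough
worker.ajp13w.type=ajp13
worker.ajp13w.connection_pool_size=1
```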


  Worker...cache_timeout=600 Worker...socket_keepalive=1
  Worker...recycle_timeout=300
  Are these timeouts necessary? Why not simply let the connections
  stay alive all the time?
 
  [Pantvaidya, Vishwajit] Sure we could. But for any production change,
  I would have to offer a good enough reason.
 
 What was the good enough reason to set those timeouts in the first
 place?
 
[Pantvaidya, Vishwajit] I agree - but again, as I mentioned above, because the 
admin will be conservative about any changes I need to have a strong reason.
Also, when you say let the connections stay alive, does that mean let the 
TP-Processor thread remain in the Waiting state or the Runnable state?




RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-19 Thread Pantvaidya, Vishwajit
 -Original Message-
 From: Rainer Jung [mailto:rainer.j...@kippdata.de]
 Sent: Monday, May 18, 2009 11:10 PM
 To: Tomcat Users List
 Subject: Re: Running out of tomcat threads - why many threads in
 RUNNABLEstage even with no activity
 
 On 19.05.2009 02:54, Caldarale, Charles R wrote:
  From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com]
  Subject: RE: Running out of tomcat threads - why many threads in
  RUNNABLEstage even with no activity
 
  Ok - so then the question is when does tomcat transition the thread
  from Running to Waiting? Does that happen after AJP drops that
  connection?
 
 RUNNABLE and WAITING are thread states in the JVM. They don't relate in
 general to states inside Tomcat. In this special situation they do.
 
 The states you observe are both completely normal in themselves. One
 (the stack you abbreviate with RUNNABLE) is handling a persistent
 connection between web server and Tomcat which could send more requests,
 but at the moment no request is being processed, the other (you
 abbreviate with WAITING) is available to be associated with a new
 connection that might come in some time in the future.
 

[Pantvaidya, Vishwajit] Thanks Rainer. The RUNNABLE thread - is it a connection 
between Tomcat and the web server, or between Tomcat and AJP? Is it still RUNNABLE 
and not WAITING because the servlet has not yet explicitly closed the connection 
(something like HttpServletResponse.getOutputStream().close())?

 
  So could the problem be occurring here because AJP is holding on to
  connections?
 
  Sorry, I haven't been following the thread that closely.  Not sure
  what the problem you're referring to actually is, but having a Tomcat
  thread reading input from the AJP connector is pretty normal.
 
 The same to me. What's the problem? AJP is designed to reuse connections
 (use persistent connections). If you do not want them to be used for a
 very long time or like those connections to be closed when being idle,
 you have to configure the appropriate timeouts. Look at the timeouts
 documentation page of mod_jk.
 
 In general your max thread numbers in the web server layer and in the
 Tomcat AJP pool need to be set consistently.
 

[Pantvaidya, Vishwajit] My problem is that tomcat is running out of threads 
(maxThreads=200). My analysis of the issue is:
- the thread count is exceeded because of a slow buildup of RUNNABLE threads (and 
not because the number of simultaneous http requests at some point exceeded the 
max thread count)
- most/all newly created TP-Processor threads are in RUNNABLE state and remain 
RUNNABLE - they never go back to WAITING state (waiting for the thread pool)
- in that case, I find that tomcat spawns new threads when a new request comes 
in
- this continues and finally tomcat runs out of threads
- setting connectionTimeout in server.xml seems to have resolved the issue - 
but I am wondering if that was just a workaround, i.e. whether so many threads 
remaining RUNNABLE indicates a flaw in our webapp, i.e. it is not doing whatever 
is necessary to close them and return them to WAITING state.
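(For reference, the connectionTimeout change described here amounts to roughly the following in server.xml; the port is the one used in this thread, and the 10-minute value is illustrative.)

```xml
<!-- Sketch: AJP connector with an explicit idle timeout so threads
     blocked reading a dead connection are eventually reclaimed -->
<Connector port="21065" protocol="AJP/1.3"
           maxThreads="200" connectionTimeout="600000" />
```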






RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-18 Thread Pantvaidya, Vishwajit
Hi Chris,

Thanks for your reply.

 
 On 5/13/2009 5:28 PM, Pantvaidya, Vishwajit wrote:
  My setup is tomcat 5.5.17 + mod_jk 1.2.15 + httpd 2.2.2. I am using
  AJP1.3.
 
 Old versions of everything. Consider upgrading?
 
[Pantvaidya, Vishwajit] Yes, sure - we will upgrade at some point in time. But 
since upgrading all our servers will be some work, that may not happen right 
away.


  Every 2-3 days with no major load, tomcat throws the error: SEVERE:
  All threads (200) are currently busy, waiting...
 
  I have been monitoring my tomcat TP-Processor thread behavior over
  extended time intervals and observe that: - even when there is no
  activity on the server, several TP-Processor threads are in RUNNABLE
  state while few are in WAITING state
 
 It appears that you have 200 threads available. How many (on average)
 are RUNNABLE versus WAITING? (The two counts should add to 200, unless
 there's some other state (BLOCKED?) that the threads can be in, but you
 didn't mention any other states).

[Pantvaidya, Vishwajit] Here are figures from 3 of the servers which have not 
yet run out of threads (so the thread count does not add up to 200). I have 
taken these late at night when no users are present, so I was expecting all 
threads to be Waiting for tomcat thread-pool.

1. Total TP-Processor threads 48, Waiting 46, Runnable 2
2. Total TP-Processor threads 40, Waiting 29, Runnable 11
3. Total TP-Processor threads 120, Waiting 7, Runnable 113


 
  - RUNNABLE threads stack trace shows java.lang.Thread.State:
  RUNNABLE at java.net.SocketInputStream.socketRead0(Native
  Method)...
 
 This indicates that the client has not yet disconnected, and is probably
 still sending data. If there were not any data waiting, the state should
 be BLOCKED.
 
  - WAITING thread stack trace shows java.lang.Thread.State: WAITING
  on
  org.apache.tomcat.util.threads.threadpool$controlrunna...@53533c55
 
 These are idle threads.
 
  - tomcat adds 4 new TP-Processor threads when a request comes in and
  it can find no WAITING threads
 
 Wow, 4 new threads? That seems like 3 too many...

[Pantvaidya, Vishwajit] Do you think this could be because of the application? 
I was under the impression that there is some tomcat config parameter that 
controls this - which was set to 4.

 
  So I conclude that my tomcat is running out of threads due to many
  threads being in RUNNABLE state when actually they should be in
  WAITING state. Is that happening because of the socket_keepalive in
  my workers.properties shown below?
 
 worker.socket_keepalive just keeps the connection between Apache httpd
 and Tomcat alive in case you have an overzealous firewall that closes
 inactive connections. The request processor shouldn't be affected by
 this setting.
 
  Why are threads added in bunches of 4 - is there any way to configure
  this?
 
  My workers config is:
 
  Worker...type=ajp13 Worker...cachesize=10
 
 Are you using the prefork MPM? If so, cachesize should be /1/.
 

[Pantvaidya, Vishwajit] Could you please elaborate. What is the prefork MPM?


  Worker...cache_timeout=600 Worker...socket_keepalive=1
  Worker...recycle_timeout=300
 
 Are these timeouts necessary? Why not simply let the connections stay
 alive all the time?
 
[Pantvaidya, Vishwajit] Sure we could. But for any production change, I would 
have to offer a good enough reason.

  Earlier posts related to this issue on the list seem to recommend
  tweaking: - several timeouts - JkOptions +DisableReuse
 
 This will require that every incoming HTTP connection opens up a new
 ajp13 connection to Tomcat. Your performance will totally suck if you
 enable this. But if it's the only way for you to get your application
 working properly, then I guess you'll have to do it. I suspect you /will
 not/ have to enable +DisableReuse.
 
[Pantvaidya, Vishwajit] I was seeing earlier posts on this list mention some 
disagreement on the performance impact of setting +DisableReuse. Otherwise I 
would not even think of this.

By the way, the above 3 figures I provided are without connectiontimeout being 
set for Connector element in server.xml.

- Vish.




RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-18 Thread Pantvaidya, Vishwajit

 [Pantvaidya, Vishwajit] Here are figures from 3 of the servers which have
 not yet run out of threads (so the thread count does not add up to 200). I
 have taken these late at night when no users are present, so I was
 expecting all threads to be Waiting for tomcat thread-pool.
 
 1. Total TP-Processor threads 48, Waiting 46, Runnable 2
 2. Total TP-Processor threads 40, Waiting 29, Runnable 11
 3. Total TP-Processor threads 120, Waiting 7, Runnable 113
 
 
 

[Pantvaidya, Vishwajit] Posting the thread dumps for the above 3 cases, since 
Rainer mentioned that he would like to see more of the stack trace.




RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-18 Thread Pantvaidya, Vishwajit
  [Pantvaidya, Vishwajit] Here are figures from 3 of the servers which

 have

  not yet run out of threads (so the thread count does not add up to 200).

 I

  have taken these late at night when no users are present, so I was

  expecting all threads to be Waiting for tomcat thread-pool.

 

  1. Total TP-Processor threads 48, Waiting 46, Runnable 2

  2. Total TP-Processor threads 40, Waiting 29, Runnable 11

  3. Total TP-Processor threads 120, Waiting 7, Runnable 113

 

 

[Pantvaidya, Vishwajit] Since Rainer mentioned that he would like to see more 
of the stack trace, here are the complete stack traces for a Runnable and 
Waiting thread from #3 above. All Runnable/Waiting threads from all the above 
cases have same stack trace as below:



TP-Processor119 - Thread t...@2294

   java.lang.Thread.State: RUNNABLE

at java.net.SocketInputStream.socketRead0(Native Method)

at java.net.SocketInputStream.read(SocketInputStream.java:129)

at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)

at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)

at java.io.BufferedInputStream.read(BufferedInputStream.java:313)

at org.apache.jk.common.ChannelSocket.read(ChannelSocket.java:607)

at org.apache.jk.common.ChannelSocket.receive(ChannelSocket.java:545)

at 
org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:672)

at 
org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:876)

at 
org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)

at java.lang.Thread.run(Thread.java:595)



TP-Processor118 - Thread t...@2293

   java.lang.Thread.State: WAITING on 
org.apache.tomcat.util.threads.threadpool$controlrunna...@3579cafe

at java.lang.Object.wait(Native Method)

at java.lang.Object.wait(Object.java:474)

at 
org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:656)

at java.lang.Thread.run(Thread.java:595)




RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-18 Thread Pantvaidya, Vishwajit
 -Original Message-
 From: Rainer Jung [mailto:rainer.j...@kippdata.de]
 Sent: Monday, May 18, 2009 2:43 PM
 To: Tomcat Users List
 Subject: Re: Running out of tomcat threads - why many threads in RUNNABLE
 stage even with no activity
 
 Yes, those two look like waiting for next request on an existing
 connection from the web server to Tomcat and sitting idle in the pool,
 waiting for a new connection to handle.
 

[Pantvaidya, Vishwajit] Thanks Rainer. Any idea why threads would be sitting 
around in the Runnable state even when nobody has been using the application for a 
long time? From whatever I have read on this, it seems this could happen if a 
servlet writes something to a response stream, closes the response stream, but 
after that keeps on doing some processing (e.g. running an infinite loop). 
I am reasonably sure that our app is not doing something like that. Unless 
something like an infinite loop were running in a servlet, I would assume that 
the servlet would eventually return and the tomcat TP-Processor thread would be 
released back to the connection pool (go into Waiting state).


 On 18.05.2009 22:44, Pantvaidya, Vishwajit wrote:
  [Pantvaidya, Vishwajit] Here are figures from 3 of the servers which
 
  have
 
  not yet run out of threads (so the thread count does not add up to
 200).
 
  I
 
  have taken these late at night when no users are present, so I was
 
  expecting all threads to be Waiting for tomcat thread-pool.
 
 
  1. Total TP-Processor threads 48, Waiting 46, Runnable 2
 
  2. Total TP-Processor threads 40, Waiting 29, Runnable 11
 
  3. Total TP-Processor threads 120, Waiting 7, Runnable 113
 
 
 
  [Pantvaidya, Vishwajit] Since Rainer mentioned that he would like to see
 more of the stack trace, here are the complete stack traces for a Runnable
 and Waiting thread from #3 above. All Runnable/Waiting threads from all
 the above cases have same stack trace as below:
 
 
 
  TP-Processor119 - Thread t...@2294
 
 java.lang.Thread.State: RUNNABLE
 
  at java.net.SocketInputStream.socketRead0(Native Method)
 
  at java.net.SocketInputStream.read(SocketInputStream.java:129)
 
  at
 java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
 
  at
 java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
 
  at
 java.io.BufferedInputStream.read(BufferedInputStream.java:313)
 
  at
 org.apache.jk.common.ChannelSocket.read(ChannelSocket.java:607)
 
  at
 org.apache.jk.common.ChannelSocket.receive(ChannelSocket.java:545)
 
  at
 org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:67
 2)
 
  at
 org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.ja
 va:876)
 
  at
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.j
 ava:684)
 
  at java.lang.Thread.run(Thread.java:595)
 
 
 
  TP-Processor118 - Thread t...@2293
 
 java.lang.Thread.State: WAITING on
 org.apache.tomcat.util.threads.threadpool$controlrunna...@3579cafe
 
  at java.lang.Object.wait(Native Method)
 
  at java.lang.Object.wait(Object.java:474)
 
  at
 org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.j
 ava:656)
 
  at java.lang.Thread.run(Thread.java:595)
 
 




RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-18 Thread Pantvaidya, Vishwajit
 -Original Message-
 From: Caldarale, Charles R [mailto:chuck.caldar...@unisys.com]
 Sent: Monday, May 18, 2009 4:02 PM
 To: Tomcat Users List
 Subject: RE: Running out of tomcat threads - why many threads in
 RUNNABLEstage even with no activity
 
  From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com]
  Subject: RE: Running out of tomcat threads - why many threads in
  RUNNABLEstage even with no activity
 
  From whatever I have read on this, it seems to me that this could
  happen if a servlet writes something to a response stream, closes
  the response stream, but after that keeps on doing some processing
  (e.g. running an infinite loop).
 
 No - the thread would be inside the servlet in that case.  The thread here
 in the RUNNABLE state is waiting for a *new* request to come in over an
 active AJP connection; a thread in the WAITING state would be assigned to
 a new connection when one is accepted.
 

[Pantvaidya, Vishwajit] 
Ok - so then the question is when does tomcat transition the thread from 
Running to Waiting?
Does that happen after AJP drops that connection?
So could the problem be occurring here because AJP is holding on to connections?






RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-14 Thread Pantvaidya, Vishwajit
I set connectionTimeout in server.xml to 60 and now the RUNNABLE threads go 
back to WAITING state after that time.

But our other servers, which are running the same configuration and the same 
webapp and do not have connectionTimeout set in server.xml, do not show so many 
RUNNABLE threads, but more WAITING threads. So it looks like the threads are 
getting recycled properly there.

Any idea why this could be? Could it be the OS (all servers run Linux, but I do 
not know which flavors/versions)?



-Original Message-
From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] 
Sent: Wednesday, May 13, 2009 2:28 PM
To: users@tomcat.apache.org
Subject: Running out of tomcat threads - why many threads in RUNNABLE stage 
even with no activity

My setup is tomcat 5.5.17 + mod_jk 1.2.15 + httpd 2.2.2. I am using AJP1.3.
Every 2-3 days with no major load, tomcat throws the error: SEVERE: All 
threads (200) are currently busy, waiting...

I have been monitoring my tomcat TP-Processor thread behavior over extended 
time intervals and observe that:
- even when there is no activity on the server, several TP-Processor threads 
are in RUNNABLE state while few are in WAITING state
- RUNNABLE threads stack trace shows java.lang.Thread.State: RUNNABLE at 
java.net.SocketInputStream.socketRead0(Native Method)...
- WAITING thread stack trace shows java.lang.Thread.State: WAITING on 
org.apache.tomcat.util.threads.threadpool$controlrunna...@53533c55
- tomcat adds 4 new TP-Processor threads when a request comes in and it can 
find no WAITING threads

So I conclude that my tomcat is running out of threads due to many threads 
being in RUNNABLE state when actually they should be in WAITING state. Is that 
happening because of the socket_keepalive in my workers.properties shown below?
Why are threads added in bunches of 4 - is there any way to configure this?

My workers config is:

Worker...type=ajp13
Worker...cachesize=10
Worker...cache_timeout=600
Worker...socket_keepalive=1
Worker...recycle_timeout=300
 
Earlier posts related to this issue on the list seem to recommend tweaking:
- several timeouts
- JkOptions +DisableReuse

I am planning to do the following to resolve our problem:
- upgrade jk to latest version - e.g. 1.2.28
- replace recycle_timeout with connection_pool_timeout
- add connectionTimeout in server.xml
- add JkOptions +DisableReuse

Please let me know if this is okay, or send suggestions if any.






RE: Running out of tomcat threads - why many threads in RUNNABLE stage even with no activity

2009-05-14 Thread Pantvaidya, Vishwajit
Since I did not get any responses to this, I just wanted to ask - did I post this 
to the wrong list, and should I be posting it to the tomcat developers list 
instead?


-Original Message-
From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] 
Sent: Thursday, May 14, 2009 11:29 AM
To: Tomcat Users List
Subject: RE: Running out of tomcat threads - why many threads in RUNNABLE stage 
even with no activity

I set connectionTimeout in server.xml to 60 and now the RUNNABLE threads go 
back to WAITING state after that time.

But our other servers, which are running the same configuration and the same 
webapp and do not have connectionTimeout set in server.xml, do not show so many 
RUNNABLE threads, but more WAITING threads. So it looks like the threads are 
getting recycled properly there.

Any idea why this could be? Could it be the OS (all servers run Linux, but I do 
not know which flavors/versions)?



-Original Message-
From: Pantvaidya, Vishwajit [mailto:vpant...@selectica.com] 
Sent: Wednesday, May 13, 2009 2:28 PM
To: users@tomcat.apache.org
Subject: Running out of tomcat threads - why many threads in RUNNABLE stage 
even with no activity

My setup is tomcat 5.5.17 + mod_jk 1.2.15 + httpd 2.2.2. I am using AJP1.3.
Every 2-3 days with no major load, tomcat throws the error: SEVERE: All 
threads (200) are currently busy, waiting...

I have been monitoring my tomcat TP-Processor thread behavior over extended 
time intervals and observe that:
- even when there is no activity on the server, several TP-Processor threads 
are in RUNNABLE state while few are in WAITING state
- RUNNABLE threads stack trace shows java.lang.Thread.State: RUNNABLE at 
java.net.SocketInputStream.socketRead0(Native Method)...
- WAITING thread stack trace shows java.lang.Thread.State: WAITING on 
org.apache.tomcat.util.threads.threadpool$controlrunna...@53533c55
- tomcat adds 4 new TP-Processor threads when a request comes in and it can 
find no WAITING threads

So I conclude that my tomcat is running out of threads due to many threads 
being in RUNNABLE state when actually they should be in WAITING state. Is that 
happening because of the socket_keepalive in my workers.properties shown below?
Why are threads added in bunches of 4 - is there any way to configure this?

My workers config is:

Worker...type=ajp13
Worker...cachesize=10
Worker...cache_timeout=600
Worker...socket_keepalive=1
Worker...recycle_timeout=300
 
Earlier posts related to this issue on the list seem to recommend tweaking:
- several timeouts
- JkOptions +DisableReuse

I am planning to do the following to resolve our problem:
- upgrade jk to latest version - e.g. 1.2.28
- replace recycle_timeout with connection_pool_timeout
- add connectionTimeout in server.xml
- add JkOptions +DisableReuse

Please let me know if this is okay, or send suggestions if any.





-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org


