On 13.05.2009 23:28, Pantvaidya, Vishwajit wrote:
> My setup is tomcat 5.5.17 + mod_jk 1.2.15 + httpd 2.2.2. I am using 
> AJP1.3. Every 2-3 days with no major load, tomcat throws the error: 
> "SEVERE: All threads (200) are currently busy, waiting..."
> 
> I have been monitoring my tomcat TP-Processor thread behavior over
> extended time intervals and observe that:
> - even when there is no activity on the server, several TP-Processor
>   threads are in RUNNABLE state while few are in WAITING state
> - RUNNABLE threads stack trace shows "java.lang.Thread.State: RUNNABLE
>   at java.net.SocketInputStream.socketRead0(Native Method)..."

We would need to see more of the stack. It's likely that those are
connected to the web server, waiting for the next request.

> - WAITING thread stack trace shows "java.lang.Thread.State: WAITING
>   on org.apache.tomcat.util.threads.ThreadPool$ControlRunnable@53533c55"

Likely idle in the pool, available to handle new connections.

> - tomcat adds 4 new TP-Processor threads when a request comes in and 
> it can find no WAITING threads
> 
> So I conclude that my tomcat is running out of threads due to many 
> threads being in RUNNABLE state when actually they should be in 
> WAITING state. Is that happening because of the socket_keepalive in 
> my workers.properties shown below? Why are threads added in bunches 
> of 4 - is there any way to configure this?

Those socketRead0 threads (disclaimer: I already said we need more of
the stack to be sure) are connected to the web server, waiting for new
requests. As long as the new requests come from one of those web server
processes, no new thread is needed to handle them.

socket_keepalive is not directly related to that. It tries to work
around a problem where some component (e.g. a firewall) between web
server and Tomcat cuts an idle connection without letting the web
server and Tomcat know.

If you want to free the threads handling the persistent connections,
you can use the connection pool timeout on the jk side, combined with
the connection pool minimum size (e.g. setting it to 0).
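
As a sketch, assuming a worker named "ajp13w" (the name and values are
just examples; connection_pool_minsize needs jk 1.2.16 or later, so the
1.2.28 you plan to upgrade to is fine):

  # workers.properties - jk side, values in seconds
  worker.ajp13w.connection_pool_timeout=600
  worker.ajp13w.connection_pool_minsize=0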

On the Tomcat side use connectionTimeout. Be warned that jk and Tomcat
do not use the same time unit for those parameters: jk counts in
seconds, Tomcat in milliseconds. Have a look at the timeouts
documentation of mod_jk.
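
For instance, to pair with the connection_pool_timeout of 600 seconds
above, the AJP connector in server.xml would look something like this
(port and protocol as in a typical default setup):

  <!-- server.xml - Tomcat side, value in milliseconds -->
  <Connector port="8009" protocol="AJP/1.3"
             connectionTimeout="600000" />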

> My workers config is:
> 
> Worker...type=ajp13
> Worker...cachesize=10
> Worker...cache_timeout=600
> Worker...socket_keepalive=1
> Worker...recycle_timeout=300
> 
> Earlier posts related to this issue on the list seem to recommend
> tweaking:
> - several timeouts
> - JkOptions +DisableReuse

Very last resort. Should not be needed and might obscure some other problem.

> I am planning to do the following to resolve our problem:
> - upgrade jk to latest version - e.g. 1.2.28
> - replace recycle_timeout with connection_pool_timeout
> - add connectionTimeout in server.xml
> - add JkOptions +DisableReuse
> 
> Please let me know if this is okay, or share any suggestions.

I suspect that at the time of the "all threads are currently busy"
message, something in or behind your app was slow, so requests got
stuck in front of Tomcat and the web server pool kept growing until 200
web server processes/threads were connected, all trying to send
requests to Tomcat. To find the root cause, you'll need to take thread
dumps while the problem is happening.
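
For example, assuming a Sun JDK on Linux/Solaris and the Tomcat process
id (12345 is just a placeholder):

  # writes the dump to catalina.out
  kill -QUIT 12345
  # or, keeping the dump in a separate file
  jstack 12345 > threads-1.txt

Take a few dumps some seconds apart, so you can see whether the same
threads stay stuck in the same place.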

Also note that the maximum concurrency in your web server layer should
be a good fit for the maximum concurrency (thread pool size) in the
Tomcat layer.
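
For example, with the prefork MPM of your httpd 2.2 and the Tomcat
default of 200 threads (both numbers just illustrate the pairing, not a
recommendation):

  # httpd.conf - at most 200 processes opening AJP connections
  MaxClients 200

  <!-- server.xml - an AJP connector able to serve all 200 of them -->
  <Connector port="8009" protocol="AJP/1.3" maxThreads="200" />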

Regards,

Rainer
