Re: Response already committed
On 03 October 2006, Dan Adams said:
> Hmm, I don't think that is the culprit. I think all of our stuff is
> thread safe. We're using a framework (Tapestry) which shields us from
> threading issues like that and prevents us from storing request stuff
> in the session. Also, if that were the case, would that cause problems
> when loading static files? I don't think so.

We have occasionally seen "response already committed" errors with our
Tapestry-based app. I suspect it's a subtle bug in our code that we
haven't tracked down yet. You might ask on the Tapestry list for advice.

> Also, we are using a filter which, when it tries to do a redirect,
> will throw an error complaining about this, so this happens way before
> our app ever gets to do anything.

I don't understand that paragraph, but you might try disabling that
filter and seeing if the problem goes away (assuming you can reproduce
it in a non-production environment).

Greg

---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
Monitoring Tomcat load (4.1.30)
Hi -- we've recently had problems with Tomcat on one server getting
overloaded and then refusing further requests. What happens is that more
and more request-processing threads get stuck on some slow operation
(e.g. a database query, or opening a socket to another server), until
eventually Tomcat's thread pool is exhausted and it starts responding
with "503 Service temporarily unavailable".

Now, obviously the *real* fix is to get to the root of things and figure
out why those requests are slow (or blocked, or whatever). But this
keeps happening for various different reasons, and when it does we get
angry customers on the line wanting to know what the hell happened to
the web server. (We deploy a suite of related web applications on
dedicated servers to a hundred or so customers; one day a deadlock will
affect customer X, a few weeks later a network outage between servers
will affect customer Y, and so on.)

So I want to implement some sort of automatic load monitoring of Tomcat.
This should give us a rough idea of when things are about to go bad
*before* the customer even finds out about it (never mind picks up the
phone) -- and it's independent of what the underlying cause is. Ideally,
I'd like to know if the number of concurrent requests goes above X for Y
minutes, and raise the alarm if so. This is across *all* webapps running
in the same container.

I've implemented a vile hack that hits Tomcat's process with SIGQUIT to
trigger a thread dump, then parses the thread dump to look for threads
that the JVM says are runnable. E.g. this:

  Ajp13Processor[8009][33] daemon prio=1 tid=0x0856a528 nid=0x2263 in Object.wait() [99bc4000..99bc487c]

is presumed to be an idle thread (it's waiting on an
org.apache.ajp.tomcat4.Ajp13Processor monitor). But this:

  Ajp13Processor[8009][28] daemon prio=1 tid=0x0856c6d8 nid=0x2263 runnable [99dc7000..99dc887c]

is presumed to be processing a request (it's deep in the bowels of our
JDBC driver, reading from the database server ... which is what most of
our requests seem to spend most of their time doing).

This seems to work, and it gives a rough-and-ready snapshot of how busy
Tomcat is at the moment. If I run it every 60 sec for a while, I get
output like this:

  /var/log/tomcat/thread-dump-20060925_112753.log: 20/34
  /var/log/tomcat/thread-dump-20060925_112858.log: 17/34
  /var/log/tomcat/thread-dump-20060925_113003.log: 20/34
  /var/log/tomcat/thread-dump-20060925_113109.log: 20/34
  /var/log/tomcat/thread-dump-20060925_113214.log: 18/34
  /var/log/tomcat/thread-dump-20060925_113319.log: 21/34

where the first number is the count of runnable Ajp13Processor threads
(i.e. concurrent requests) and the second number is the total number of
Ajp13Processors.

I have two concerns about this vile hack:

  * well, it *is* a vile hack -- is there a cleaner way to get this
    information out of Tomcat? (keeping in mind that we're running
    4.1.30)

  * just how hard on the JVM is it to get a thread dump? I would
    probably decrease the frequency to every 10 min in production, but
    even so it makes me a bit nervous.

Thanks --

Greg
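For what it's worth, the counting step of a hack like the one described above could be done with a few lines of Java instead of shell parsing. This is only a sketch: the class name and matching rules are mine, and it assumes thread-dump header lines look like the examples in the post (it tolerates thread names with or without surrounding quotes, since some JVMs print them quoted).

```java
import java.util.regex.Pattern;

/** Tallies Ajp13Processor threads in a captured JVM thread dump,
 *  in the spirit of the SIGQUIT-parsing hack described above. */
public class DumpCounter {
    // A thread header line looks roughly like:
    //   "Ajp13Processor[8009][28]" daemon prio=1 tid=0x... nid=0x... runnable [...]
    private static final Pattern HEADER =
        Pattern.compile("\"?Ajp13Processor\\[\\d+\\]\\[\\d+\\]\"?.*");

    /** Returns {runnable, total} counts for Ajp13Processor threads. */
    public static int[] count(String dump) {
        int runnable = 0, total = 0;
        for (String line : dump.split("\n")) {
            if (HEADER.matcher(line.trim()).matches()) {
                total++;
                // The JVM prints the thread state after the nid field.
                if (line.contains(" runnable ")) runnable++;
            }
        }
        return new int[] { runnable, total };
    }

    public static void main(String[] args) {
        String dump =
            "\"Ajp13Processor[8009][33]\" daemon prio=1 tid=0x0856a528 nid=0x2263 in Object.wait() [99bc4000..99bc487c]\n" +
            "\"Ajp13Processor[8009][28]\" daemon prio=1 tid=0x0856c6d8 nid=0x2263 runnable [99dc7000..99dc887c]\n" +
            "\"VM Thread\" prio=1 tid=0x080a2d40 nid=0x225c runnable\n";
        int[] c = count(dump);
        System.out.println(c[0] + "/" + c[1]);   // prints "1/2"
    }
}
```

Feeding it each dump file would produce the same "runnable/total" numbers as the log excerpt above.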
Re: Limiting the number of connection threads per application
On 04 May 2006, Ken Dombeck said:
> We have 2 applications installed inside the same Tomcat 5.0 instance,
> app1 and app2. URL app1.url.com is for app1 and app2.url.com is for
> app2. Both URLs have the same IP address but still hit port 80. The
> maxThreads for the connector is set to 100.
>
> The problem we are experiencing is that app2 will slow down and
> consume all 100 of the connector threads. What we would like to do is
> limit the number of threads that each application can use to 50, so
> that one application can not cause a denial of service attack on the
> other applications in the container. By limiting the number of
> connections to 50 we would expect the 51st connection to that
> application to wait until a thread freed up.

Wow, I asked this very same question just the other day. Tim Funk
(funkman at joedog dot org) kindly provided the following suggestion:

> An easier solution is to throttle the webapp via a filter. For
> example:
>
>   Filter {
>     final int maxThreadCount = 10;
>     int threadCount = 0;
>
>     doFilter() {
>       synchronized (this) {
>         if (threadCount > maxThreadCount) {
>           response.sendError(SC_SERVICE_UNAVAILABLE);
>           return;
>         }
>         threadCount++;
>       }
>       try {
>         chain.doFilter(request, response);
>       } finally {
>         synchronized (this) {
>           threadCount--;
>         }
>       }
>     }
>   }

I have implemented something like this for us and it works like a
charm. Haven't got it into production yet, though.

Oh yeah, Tim's pseudo-code has an off-by-one error in it: I ended up
using the equivalent of

  doFilter() {
    synchronized (this) {
      if (threadCount >= maxThreadCount) {
        response.sendError(SC_SERVICE_UNAVAILABLE);
        ...

And if your problem is at the level of whole webapps, make sure you
apply the filter to whole webapps. Here's a snippet from my web.xml that
configures this filter:

  <!-- Filter to throttle the number of concurrent requests
       across the whole webapp. -->
  <filter>
    <filter-name>throttle</filter-name>
    <filter-class>com.intelerad.web.lib.RequestThrottler</filter-class>
    <init-param>
      <param-name>max_concurrent_requests</param-name>
      <param-value>5</param-value>
    </init-param>
  </filter>
  <filter-mapping>
    <filter-name>throttle</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>

The <filter-mapping> is the important part to make sure it covers a
whole webapp.

Greg
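Stripped of the servlet plumbing so it stands alone, the counting logic of such a filter (with the >= fix for the off-by-one) could look like the sketch below. The class and method names are mine, not taken from the real RequestThrottler; in the actual filter, doFilter() would call tryEnter(), send SC_SERVICE_UNAVAILABLE on refusal, and call exit() in a finally block.

```java
/** Concurrency throttle in the spirit of Tim's pseudo-code.
 *  Class and method names are illustrative only. */
public class Throttler {
    private final int maxConcurrent;
    private int current = 0;

    public Throttler(int maxConcurrent) {
        this.maxConcurrent = maxConcurrent;
    }

    /** Returns false (request should get a 503) if the limit is reached. */
    public synchronized boolean tryEnter() {
        if (current >= maxConcurrent) {   // >= avoids the off-by-one
            return false;
        }
        current++;
        return true;
    }

    /** Must be called, in a finally block, for every successful tryEnter(). */
    public synchronized void exit() {
        current--;
    }
}
```

The finally block matters: if a request throws and exit() is skipped, the counter leaks and the webapp eventually throttles itself to zero.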
Limiting effects of badly-behaved webapps
We've been using Tomcat 4.1.30 happily for a couple of years now, but
every so often one badly-behaved webapp can make life unhappy for
everyone living in the container. (Our Tomcat deployment is part of a
suite of applications that run on a small cluster of Linux servers; all
of the webapps running inside Tomcat are written and controlled by us.
We have around a hundred of these small clusters deployed worldwide, so
several hundred servers all told.)

Here's what typically happens:

  * webapp A tries to open a database connection to another server in
    the cluster, but that server is down and packets to it just
    disappear (alternately, A runs a badly-written and consequently
    very s-l-o-w query: either way, it's a database operation that
    takes a long time)

  * meanwhile, the thread running that request for A is holding a
    synchronization lock: yes, we know that you're not supposed to
    hold synchronization locks while doing I/O, but the programmers
    who wrote this stuff 3-5 years ago did not know that. We fix the
    bugs as we find them, but they aren't easy to find and they aren't
    easy to fix.

  * thus, all requests to A back up in a queue, waiting for the
    original thread to finish its slow I/O and release that
    synchronization lock. If there are enough incoming requests for A,
    then Tomcat's thread pool is gradually exhausted, eventually
    allocating all 75 threads to process requests for A that are
    blocked by that one synchronization lock.

  * now Tomcat is unable to process requests for webapps B, C, D, and
    our whole application suite is effectively dead. Oops!

Obviously, the right long-term fix is "don't hold synchronization locks
while doing database I/O". (It would also help if database connections
and queries were always fast, but alas! life just doesn't work that
way.) But until all those bugs are found and fixed, this cascading
failure is going to happen occasionally.

One idea that has occurred to me is to limit the number of threads
Tomcat allocates to any one webapp. Say we could limit webapp A to 25
threads from Tomcat's pool of 75: users depending on A would still be
shut out (all requests block), but that failure would not cascade out
to affect all the other webapps running in the same container.

So I'm wondering:

  * is there an easy way to implement this with Tomcat 4.1? how about
    5.5? (we haven't upgraded because we're pretty happy with 4.1 ...
    but if there's a compelling reason to switch to 5.5, we'll do it)

  * are there other good techniques for limiting the damage caused by
    badly-behaved webapps? I'm sure holding a synchronization lock
    while doing database I/O is only one type of bad behaviour lurking
    in our code ... I'd like to reduce the effect webapps in the same
    container have on each other as much as possible.

Thanks --

Greg
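The long-term fix mentioned above ("don't hold synchronization locks while doing database I/O") usually amounts to narrowing the synchronized block: touch shared state under the lock, but run the slow call outside it. Here is a hypothetical before/after sketch; the class, its cache, and slowQuery() are invented for illustration and stand in for whatever shared state and database call the real code has.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical cache illustrating the locking fix: hold the lock only
 *  long enough to read or write shared state, never across the query. */
public class ReportCache {
    private final Map<String, String> cache = new HashMap<String, String>();

    // BAD: every caller blocks on 'this' while one thread waits on the DB.
    public synchronized String getReportHoldingLock(String key) {
        String report = cache.get(key);
        if (report == null) {
            report = slowQuery(key);       // could take minutes if the DB is down!
            cache.put(key, report);
        }
        return report;
    }

    // BETTER: the slow query runs outside the lock, so other requests proceed.
    public String getReport(String key) {
        synchronized (this) {
            String cached = cache.get(key);
            if (cached != null) {
                return cached;
            }
        }
        String report = slowQuery(key);    // no lock held here
        synchronized (this) {
            cache.put(key, report);        // two threads may duplicate work,
        }                                  // but nobody blocks behind the DB
        return report;
    }

    // Stand-in for the real database call.
    protected String slowQuery(String key) {
        return "report:" + key;
    }
}
```

The trade-off is that two threads asking for the same uncached key may both run the query; for this failure mode, duplicated work is far cheaper than a container-wide pile-up.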
Re: iSeries Installation
On 02 May 2006, Mike Scherer said:
> How and where do I specify the JAVA_HOME or the JRE_HOME environment
> variables?

What OS are you using? With most Unix-like systems, you just do
something like

  export JAVA_HOME=...

before running the Tomcat launch script. E.g. if you start Tomcat from
an init script, you could add that line to your init script. Or you
could hack startup.sh to set the required variables. Or you could hack
startup.sh to source an external file where you set whatever env
variables you need (that's what we do, although we're still using
Tomcat 4.1, so it's catalina.sh that we hacked ... same idea).

Anyways, this isn't really a Tomcat question, it's a "how do I set
environment variables on my OS?" question.

Greg
Re: Limiting effects of badly-behaved webapps
On 02 May 2006, Tim Funk said:
> An easier solution is to throttle the webapp via a filter. For
> example:

Good idea, thanks! I'll try to implement that and report back to the
list.

Greg