I have a similar problem (TC 3.3.1), but mine is related to our connection-pooling solution running out of connections and/or trying to reap connections that were never checked back in. The lockups users reported coincided with a reap cycle. After a few seconds to a minute (sound familiar?) it would "go away" and the app would start servicing requests again (try "netstat -a" and count your connections before and after your hiccup). It only happens when someone hits the code path where I forgot to check a connection back in enough times to deplete the pool. I'm feverishly trying to finish the implementation, so as a band-aid I've increased my pool depth to 100 connections for alpha testing... working for now.

Question: does anyone have ideas on how to track down where a checked-out connection never gets checked back in (or a better pooling solution? I'm using bitmechanic now)? What benefits does the built-in pooling mechanism in TC 4/5 have over my current solution?
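One approach that might help (a rough sketch, not bitmechanic-specific; the class and method names below are made up for illustration) is to wrap the DataSource and remember the stack trace of every checkout, then dump whatever is still outstanding when the pool starts to run dry:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Map;
import javax.sql.DataSource;

// Wraps a real pooled DataSource and records where each connection was
// checked out, so connections that never come back can be traced.
public class TracingDataSource {

    private final DataSource target;                 // the real pool
    private final Map checkedOut = new Hashtable();  // Connection -> checkout stack trace

    public TracingDataSource(DataSource target) {
        this.target = target;
    }

    public Connection getConnection() throws SQLException {
        Connection c = target.getConnection();
        // Remember who checked this connection out.
        checkedOut.put(c, new Throwable("checked out here"));
        return c;
    }

    // Call this everywhere a connection is returned to the pool.
    public void release(Connection c) throws SQLException {
        checkedOut.remove(c);
        c.close(); // close() hands it back to the underlying pool
    }

    // Dump the checkout site of every connection still outstanding.
    public void reportOutstanding() {
        synchronized (checkedOut) {
            for (Iterator i = checkedOut.values().iterator(); i.hasNext();) {
                ((Throwable) i.next()).printStackTrace();
            }
        }
    }
}

Calling reportOutstanding() from a debug servlet or a timer when the pool looks depleted should point straight at the code paths that never checked their connections back in.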

Also, when my TC ran out of memory (an out-of-resources exception), I just increased the heap size by putting TOMCAT_OPTS="$TOMCAT_OPTS -Xmx300m" in the tomcat.sh script (TC 3.3.1, remember), and it hasn't happened again (no object loitering, just lots of objects!). If your heap grows too large (and/or never stops growing!) I'd look into indirectly referenced objects that are being kept alive. Here's a link to an article about this behavior (good performance-analysis info): http://www.opensourcetutorials.com/tutorials/Server-Side-Coding/Java/java-garbage-collection-performance/page4.html
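In case it helps anyone picture what "loitering" looks like, here's a contrived example (the class is made up): a long-lived static collection keeps every payload it ever saw reachable, so the collector can never reclaim them and the heap only grows:

import java.util.ArrayList;
import java.util.List;

public class LoiteringCache {

    // Lives for the life of the JVM; everything added here stays reachable,
    // so it (and anything it indirectly references) can never be collected.
    private static final List HISTORY = new ArrayList();

    public void handleRequest(byte[] payload) {
        // Meant to be a short-lived record, but nothing ever removes it,
        // so the payload loiters in the heap forever.
        HISTORY.add(payload);
    }
}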

--Jonathan

Sam Gallant wrote:

Everyone,
Thanks in advance for any help. Also, I have a Gmail invite for the
person who has a fix for this, if they are interested.

My company has been using Tomcat for several years, but a problem has
crept up that we have not been able to solve. Basically, Tomcat will
stop processing requests for a 2-60 second period several times a day.

Here is a list of software that we have tried. (Note we have tried
changing each key component to see if we can isolate the component that is
the problem, but no luck yet.)

OS: RedHat 9 & AS3
Threading model: linux threads & nptl
JVM: sun 1.4.2_4 & latest ibm
Http connector: ajp w/apache 2 and coyote connector
JDBC connector 1.0


1. It doesn't always happen during old-gen garbage collection, but it does sometimes.
2. Before switching to incremental GC we received out-of-memory errors, which resulted in Tomcat completely hanging.
3. After switching to incremental GC the effect changed to 2-60 second periods of time when Tomcat won't process requests, but it does resume on its own.
4. CPU usage for most of the day is less than 20% utilization, but when the problem occurs the CPU briefly spikes to 100% utilization.
