Re: Any synchronization issues with SMP?

2004-07-01 Thread Martin Schulz
OT: The following is advice to servlet developers rather than container 
developers.

For what it's worth, just be careful when you use GZIP(In|Out)putStream:
the native code takes a mutex (for no good reason, imho) to prevent
garbage collection from interfering with the array being used. That was
a bottleneck where I didn't expect one.
In particular, never place an Object(In|Out)putStream directly on top of
the GZIP streams; always put a buffered stream between the two. The
reason is that the object stream reads and writes in very small chunks,
causing four system calls per chunk. Otherwise, interesting things are
going to happen on larger SMP systems.
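As a minimal sketch of that layering (the 8 KB buffer size and the file
name are illustrative choices, not from the original advice):

    import java.io.*;
    import java.util.zip.*;

    public class GzipObjectStreams {
        public static void main(String[] args) throws Exception {
            // A buffered stream sits between the object stream and the
            // GZIP stream, so the object stream's many tiny writes are
            // coalesced before they reach GZIPOutputStream's
            // (mutex-guarded) native deflate code.
            ObjectOutputStream out = new ObjectOutputStream(
                    new BufferedOutputStream(
                            new GZIPOutputStream(
                                    new FileOutputStream("data.gz")), 8192));
            out.writeObject("example payload");
            out.close();

            // Same layering on the read side: Object <- Buffered <- GZIP.
            ObjectInputStream in = new ObjectInputStream(
                    new BufferedInputStream(
                            new GZIPInputStream(
                                    new FileInputStream("data.gz")), 8192));
            System.out.println(in.readObject());
            in.close();
        }
    }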
 Martin
Martin Schulz wrote:
It appears that the JVM slows everything down to a crawl, including the
code path which should lead to another accept being called, for up to 8
minutes!
Furthermore, the mpstat output has the nice property that CPU usage adds
up to exactly 100%, i.e. a single CPU is used... no more, no less. This
corresponds to the 12% or 13% CPU utilization shown in prstat across 8
CPUs. My interpretation is that the JVM is effectively preventing
parallel execution (which otherwise appears to work fine).
Nearly all threads either wait, read from a Socket, or zip/unzip data.
I'm not sure what all that means, but Tomcat appears to be a victim of
it. I'll experiment some more. The main difference from the systems
Rainer mentioned is the JVM (1.4.2_04) and the CPU (Sparc III 1.2GHz).
If any of this rings a bell, drop me a note. I'll be happy to share data
as appropriate.
I'll repost to the list only if I learn anything which impacts Tomcat
directly (other than that the code path to hand off the socket accept
responsibility is not suitable for _very_ high hit rates, which does not
worry me too much at this point).

Cheers!
   Martin
Martin Schulz wrote:
Rainer,
Thanks for the tips.  I am about to take timing stats
internally in the ThreadPool and the Tcp workers.
Also, the described symptoms do not disappear, but seem to be of much
shorter duration when only 4 CPUs are used for the application.
I'll summarize what I find.
Martin
Rainer Jung wrote:
Hi,
we know of one application running on 9 systems with 4 US II CPUs each
under Solaris 9, with peak request rates of 20 requests/second per
system. Tomcat is 4.1.29, Java is 1.3.1_09. No symptoms like yours!
You should send a QUIT signal to the JVM process during the unresponsive
period. This is a general JVM mechanism (at least for Sun JVMs). The
signal makes the JVM write a stack trace for every thread to STDOUT (so
you should also start Tomcat with STDOUT redirected to some file).
Beware: older JVMs in rare cases stopped working after receiving this
signal (not expected with 1.3.1_09).

From this stack dump you should be able to figure out in which methods
most of your threads are sitting and what their status is.
Is there native code involved (via JNI)? Any synchronization done in the
application itself? Are you using Tomcat clustering? Which JVM?

Sincerely
Rainer Jung
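As a rough illustration of Rainer's suggestion, here is a hypothetical
helper; it assumes a Unix system with kill(1) on the PATH and that the
Tomcat JVM's pid is known. It simply sends SIGQUIT so that the JVM
prints a thread dump on its STDOUT:

    // Hypothetical helper: triggers a thread dump in another JVM.
    // Assumes a Unix system with kill(1) on the PATH; the pid of the
    // target (Tomcat) JVM is passed as the first argument.
    public class ThreadDumpTrigger {
        public static void main(String[] args) throws Exception {
            String pid = args[0];
            // SIGQUIT makes a Sun JVM dump all thread stacks to its
            // STDOUT, which is why Tomcat should be started with
            // STDOUT redirected to a file.
            Process p = Runtime.getRuntime().exec(
                    new String[] { "kill", "-QUIT", pid });
            int rc = p.waitFor();
            System.out.println("kill -QUIT " + pid + " exited with " + rc);
        }
    }

From a shell, a plain "kill -QUIT <pid>" achieves the same thing.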





Re: Any synchronization issues with SMP?

2004-06-25 Thread Martin Schulz
It appears that the JVM slows everything down to a crawl, including the
code path which should lead to another accept being called, for up to 8
minutes!
Furthermore, the mpstat output has the nice property that CPU usage adds
up to exactly 100%, i.e. a single CPU is used... no more, no less. This
corresponds to the 12% or 13% CPU utilization shown in prstat across 8
CPUs. My interpretation is that the JVM is effectively preventing
parallel execution (which otherwise appears to work fine).
Nearly all threads either wait, read from a Socket, or zip/unzip data.
I'm not sure what all that means, but Tomcat appears to be a victim of
it. I'll experiment some more. The main difference from the systems
Rainer mentioned is the JVM (1.4.2_04) and the CPU (Sparc III 1.2GHz).
If any of this rings a bell, drop me a note. I'll be happy to share data
as appropriate.
I'll repost to the list only if I learn anything which impacts Tomcat
directly (other than that the code path to hand off the socket accept
responsibility is not suitable for _very_ high hit rates, which does not
worry me too much at this point).

Cheers!
   Martin
Martin Schulz wrote:
Rainer,
Thanks for the tips.  I am about to take timing stats
internally in the ThreadPool and the Tcp workers.
Also, the described symptoms do not disappear, but seem to be of much
shorter duration when only 4 CPUs are used for the application.
I'll summarize what I find.
Martin
Rainer Jung wrote:
Hi,
we know of one application running on 9 systems with 4 US II CPUs each
under Solaris 9, with peak request rates of 20 requests/second per
system. Tomcat is 4.1.29, Java is 1.3.1_09. No symptoms like yours!
You should send a QUIT signal to the JVM process during the unresponsive
period. This is a general JVM mechanism (at least for Sun JVMs). The
signal makes the JVM write a stack trace for every thread to STDOUT (so
you should also start Tomcat with STDOUT redirected to some file).
Beware: older JVMs in rare cases stopped working after receiving this
signal (not expected with 1.3.1_09).
From this stack dump you should be able to figure out in which methods
most of your threads are sitting and what their status is.
Is there native code involved (via JNI)? Any synchronization done in the
application itself? Are you using Tomcat clustering? Which JVM?
Sincerely
Rainer Jung




Re: Any synchronization issues with SMP?

2004-06-23 Thread Martin Schulz
Rainer,
Thanks for the tips.  I am about to take timing stats
internally in the ThreadPool and the Tcp workers.
Also, the described symptoms do not disappear, but seem to be of much
shorter duration when only 4 CPUs are used for the application.
I'll summarize what I find.
Martin
Rainer Jung wrote:
Hi,
we know of one application running on 9 systems with 4 US II CPUs each
under Solaris 9, with peak request rates of 20 requests/second per
system. Tomcat is 4.1.29, Java is 1.3.1_09. No symptoms like yours!
You should send a QUIT signal to the JVM process during the unresponsive
period. This is a general JVM mechanism (at least for Sun JVMs). The
signal makes the JVM write a stack trace for every thread to STDOUT (so
you should also start Tomcat with STDOUT redirected to some file).
Beware: older JVMs in rare cases stopped working after receiving this
signal (not expected with 1.3.1_09).
From this stack dump you should be able to figure out in which methods
most of your threads are sitting and what their status is.
Is there native code involved (via JNI)? Any synchronization done in the
application itself? Are you using Tomcat clustering? Which JVM?
Sincerely
Rainer Jung
Martin Schulz wrote:
Can someone confirm that Tomcat works well on busy SMP systems (e.g. a
Sun V1280), or tell us whether there are known problems in Tomcat?

Here's what we have at our end:
We are currently performance-testing our application (Tomcat 4.1.30) on
Solaris 9, on a V1280 system with 8 CPUs (SDK 1.4.2_04). Transaction
rates are moderate, around 30/s.

The application, after about 30-40 minutes, becomes unresponsive for a
little while (1-10 minutes), gets back to work (for a varying period of
time, but definitely under 30 min), and then the cycle repeats. At half
the transaction rate, the symptoms are no longer easily observed.

The above symptoms disappear when we use a single CPU, or a single board
of 4 CPUs. That seems to imply a synchronization problem somewhere in
the Java code. The problem could not be observed in development
configurations either (Wintel, 1-CPU Sun boxes).

The behavior is such that connections get accepted, but no response 
is sent (established connections
remain at a fixed level).   The number of connections in this state 
varies (20-70).

From the timers we keep, we learn that the service gets stuck when
reading the input. Once unblocked, the responses get sent out rapidly
again.

We have tuned the system for efficient, high-volume TCP/IP traffic.
We tried the Coyote connector and the HttpConnector, both with the same
effect.

Please respond if you can confirm or dismiss threading issues in Tomcat.
We would also be interested in testing approaches for such a scenario.
I have kept system statistics for both scenarios and can provide these
on request.

Thanks!
   Martin Schulz




Any synchronization issues with SMP?

2004-06-18 Thread Martin Schulz
Can someone confirm that Tomcat works well on busy SMP systems (e.g. a
Sun V1280), or tell us whether there are known problems in Tomcat?

Here's what we have at our end:
We are currently performance-testing our application (Tomcat 4.1.30) on
Solaris 9, on a V1280 system with 8 CPUs (SDK 1.4.2_04). Transaction
rates are moderate, around 30/s.

The application, after about 30-40 minutes, becomes unresponsive for a
little while (1-10 minutes), gets back to work (for a varying period of
time, but definitely under 30 min), and then the cycle repeats. At half
the transaction rate, the symptoms are no longer easily observed.

The above symptoms disappear when we use a single CPU, or a single board
of 4 CPUs. That seems to imply a synchronization problem somewhere in
the Java code. The problem could not be observed in development
configurations either (Wintel, 1-CPU Sun boxes).

The behavior is such that connections get accepted, but no response is 
sent (established connections
remain at a fixed level).   The number of connections in this state 
varies (20-70).

From the timers we keep, we learn that the service gets stuck when
reading the input. Once unblocked, the responses get sent out rapidly
again.

We have tuned the system for efficient, high-volume TCP/IP traffic.
We tried the Coyote connector and the HttpConnector, both with the same
effect.

Please respond if you can confirm or dismiss threading issues in Tomcat.
We would also be interested in testing approaches for such a scenario.
I have kept system statistics for both scenarios and can provide these
on request.

Thanks!
   Martin Schulz


org.apache.catalina.loader.WebappClassLoader caching

2003-02-05 Thread Martin Schulz
Hi,

I would like to know whether I can somehow control the caching
behavior of org.apache.catalina.loader.WebappClassLoader.

For the purpose of developing a server application, I would like to be
able to drop in config files and other resource files (such as scripts),
and to reload them programmatically through
myclass.class.getResourceAsStream(myResource).

Since WebappClassLoader diligently caches all resources
in a HashMap, the resource file can never be reloaded.

Subclassing WebappClassLoader is an option, but I'd like to avoid
modifying the Tomcat jar if possible.

Are there any workarounds for this restriction, e.g. using getResource()
and the resulting URL, which will not ultimately wind up accessing the
same cache?
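For what it's worth, here is a minimal sketch of one possible
workaround, not verified against WebappClassLoader's internals: resolve
the resource to a URL once, then open a fresh URLConnection with caching
disabled for every read. Whether this actually bypasses the loader's
HashMap is an assumption that would need to be tested per Tomcat
version.

    import java.io.FileNotFoundException;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URL;
    import java.net.URLConnection;

    public class FreshResourceLoader {
        // Opens the named classpath resource while asking the URL layer
        // not to serve a cached copy. Whether this defeats
        // WebappClassLoader's internal resource cache is an assumption.
        public static InputStream openFresh(Class anchor, String name)
                throws IOException {
            URL url = anchor.getResource(name);
            if (url == null) {
                throw new FileNotFoundException(name);
            }
            URLConnection conn = url.openConnection();
            conn.setUseCaches(false); // request an uncached read
            return conn.getInputStream();
        }
    }

Usage would be, e.g., FreshResourceLoader.openFresh(MyClass.class,
"/myResource") in place of the getResourceAsStream call (MyClass and
/myResource being placeholders).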

Thanks!

   Martin


