'Justin Erenkrantz' wrote:

On Fri, Mar 01, 2002 at 10:19:17PM -0600, Emery Berger wrote:

High performance was indeed one of my design goals. What tests would you
consider authoritative? I've been using static page loads, driven by a
process on the same machine. This was the best way I found to really
stress pool allocation. I'd be happy to run any other tests you could
recommend.


I believe we found that mod_include (multiple #includes) or a
mod_autoindex request (lots of subrequests) really stresses the pool
code. I think Brian had one test case where ~30 pools were created
and destroyed during a single mod_include'd request. -- justin


Right, mod_include was where the performance differences between
pool implementations became really apparent.
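
To make the pool churn concrete: each #include directive is handled
as a subrequest, and a subrequest typically gets its own subpool,
created from the request's pool and destroyed when the subrequest
completes. The loop below is just an illustrative sketch of that
pattern against the APR pool API (not the actual mod_include code);
the count of 30 only echoes Justin's test case.

#include <apr_general.h>
#include <apr_pools.h>

int main(void)
{
    apr_pool_t *request_pool, *subreq_pool;
    int i;

    apr_initialize();
    apr_pool_create(&request_pool, NULL);

    /* Stand-in for one mod_include'd request: each #include
     * triggers a subrequest with its own subpool. */
    for (i = 0; i < 30; i++) {
        apr_pool_create(&subreq_pool, request_pool);
        /* ... the subrequest would allocate from subreq_pool here ... */
        apr_palloc(subreq_pool, 64);  /* token allocation */
        apr_pool_destroy(subreq_pool);
    }

    apr_pool_destroy(request_pool);
    apr_terminate();
    return 0;
}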

To answer Emery's original question about other recommended performance
tests, here's some background info:

I've been testing mod_include performance mostly with artificial
benchmark files that do things like this:

<!--#include virtual="1.shtml" -->
<!--#include virtual="1.shtml" -->
<!--#include virtual="1.shtml" -->
<!--#include virtual="1.shtml" -->
<!--#include virtual="1.shtml" -->
...

where "1.shtml" is a 1-byte file (so that network transfer time
doesn't overshadow the time spent in the httpd code).
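
In case it helps to reproduce this setup, here's a minimal sketch
that writes such a benchmark file; the output filename and the
include count of 100 are arbitrary choices, not the exact values
from my tests.

#include <stdio.h>

/* Write an .shtml benchmark file containing many identical
 * #include directives that all point at a 1-byte file. */
int main(void)
{
    FILE *f = fopen("bench.shtml", "w");
    int i;

    if (f == NULL)
        return 1;
    for (i = 0; i < 100; i++)  /* include count is arbitrary */
        fprintf(f, "<!--#include virtual=\"1.shtml\" -->\n");
    fclose(f);
    return 0;
}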

Ian H. has been testing with pages from news.com that do a total
of about 10 includes, using a stress-testing setup that provides
data on CPU and memory utilization under high load.

For testing pool stuff, a couple of techniques that we've found to
be valuable are:

 * Look at httpd CPU utilization, not just throughput.

 * For really useful comparisons of the CPU utilization of
   different implementations, run with a good profiler to
   get precise measurements of the time spent in apr_palloc().
   We've used mostly Quantify for this, because it provides
   CPU utilization measurements down to the basic block level--
   good for finding out where the bottlenecks are within a
   function. (A rough timing sketch follows after this list.)
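
If Quantify isn't available, a crude wall-clock microbenchmark can
at least give a ballpark figure for apr_palloc() cost. This is a
hedged sketch, not a substitute for basic-block-level profiling;
the 64-byte allocation size and the iteration count are arbitrary
assumptions.

#include <stdio.h>
#include <apr_general.h>
#include <apr_pools.h>
#include <apr_time.h>

#define ITERATIONS 100000

int main(void)
{
    apr_pool_t *pool;
    apr_time_t start, elapsed;
    int i;

    apr_initialize();
    apr_pool_create(&pool, NULL);

    start = apr_time_now();
    for (i = 0; i < ITERATIONS; i++)
        apr_palloc(pool, 64);          /* arbitrary small allocation */
    elapsed = apr_time_now() - start;  /* APR times are in microseconds */

    printf("%d allocations in %" APR_TIME_T_FMT " usec\n",
           ITERATIONS, elapsed);

    apr_pool_destroy(pool);
    apr_terminate();
    return 0;
}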

--Brian




