Re: [PATCH] remove some mutex locks in the worker MPM

2003-01-02 Thread Brian Pane
Aaron Bannert wrote:


The patch looks good at first glance. Have you done any testing
to see how much it improves performance (on UP and MP machines)
and if it has any effect when APR is built with generic atomics?



Here are the performance numbers that I have.  I ran
httpd-2.1.0-dev on an 8-CPU (167 MHz) Sun machine with Solaris 8
(32-bit mode).  The client driver sent a fixed number
of concurrent requests for a 1-byte file (to keep the
time spent in network writes from overshadowing the
results).

I tested with both the SPARC V8+ native atomic ops
and APR's mutex-based default atomics:

50 clients
                                 load   %CPU   req/s
  standard 2.1.0-dev worker      4.59   0.53   1090
  patched w/native atomic ops    4.57   0.53   1097
  patched w/mutex atomic ops     4.70   0.52   1093

100 clients
                                 load   %CPU   req/s
  standard 2.1.0-dev worker      4.74   0.54   1070
  patched w/native atomic ops    4.69   0.54   1083
  patched w/mutex atomic ops     4.61   0.54   1067

With the native atomic ops, the patch yields slightly
higher throughput at a slightly lower CPU load.  That
matches what I'd expect from a reduction in mutex contention.
With the mutex-based fallback implementation of the
apr_atomic API, performance was slightly worse than
the original worker code in the 100-client case, but
faster in the 50-client case.  (The effect of using
my worker patch with the mutex-based atomics is to
increase the number of lock calls while reducing
the amount of time spent in each critical region.
These two effects seem to counteract each other.)
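
As a hedged illustration of that tradeoff (not APR's actual source; the
lock name and its initialization are assumed), a mutex-based CAS fallback
looks roughly like this:

    #include <apr_thread_mutex.h>

    /* Sketch of a mutex-based compare-and-swap fallback, in the spirit
     * of APR's generic atomics.  Assumes fallback_lock was created with
     * apr_thread_mutex_create() during startup. */
    static apr_thread_mutex_t *fallback_lock;

    apr_uint32_t fallback_cas32(volatile apr_uint32_t *mem,
                                apr_uint32_t with, apr_uint32_t cmp)
    {
        apr_uint32_t old;
        apr_thread_mutex_lock(fallback_lock);  /* one lock call per CAS */
        old = *mem;
        if (old == cmp) {
            *mem = with;                       /* tiny critical region */
        }
        apr_thread_mutex_unlock(fallback_lock);
        return old;
    }

Every CAS costs a lock/unlock pair, which is why making more CAS calls
can offset the win from shorter critical regions.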

Brian





RE: [PATCH] remove some mutex locks in the worker MPM

2003-01-01 Thread Bill Stoddard
 it may also have to do with caching we were doing (mod_mem_cache crashed
 and burned,

What version were you running?  What was the failure?  If you can give me
enough info to debug the problem, I'll work on it.

Bill




Re: [PATCH] remove some mutex locks in the worker MPM

2003-01-01 Thread Aaron Bannert
The patch looks good at first glance. Have you done any testing
to see how much it improves performance (on UP and MP machines)
and if it has any effect when APR is built with generic atomics?

-aaron


On Tuesday, December 31, 2002, at 05:30 PM, Brian Pane wrote:


I'm working on replacing some mutex locks with atomic-compare-and-swap
based algorithms in the worker MPM, in order to get better concurrency
and lower overhead.

Here's the first change: take the pool recycling code out of the
mutex-protected critical region in the queue_info code.  Comments
welcome...

Next on my list is the code that synchronizes the idle worker count.
I think I can eliminate the need to lock a mutex except in the
special case where all the workers are busy.
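
To sketch the compare-and-swap idea (illustrative names only, shown with
the apr_atomic_cas32 spelling of APR's compare-and-swap call; this is not
the actual patch):

    #include <apr_atomic.h>

    /* Sketch: bump the idle-worker count with a CAS retry loop instead
     * of holding a mutex.  Names are illustrative, not from the patch. */
    static void worker_becomes_idle(volatile apr_uint32_t *idle_count)
    {
        apr_uint32_t prev;
        do {
            prev = *idle_count;
            /* apr_atomic_cas32() returns the value it found; if another
             * thread changed the count between our read and the swap,
             * the comparison fails and we retry. */
        } while (apr_atomic_cas32(idle_count, prev + 1, prev) != prev);
    }

The uncontended path never blocks; a mutex would only be needed in the
special case (e.g. all workers busy) where a thread has to sleep.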





Re: [PATCH] remove some mutex locks in the worker MPM

2002-12-31 Thread David Burry
Oh, I should have mentioned: our mutex issues lessened a lot when we made
more processes with fewer threads each, but that kind of started defeating
the purpose of using the worker MPM after a while...  Your optimizations
sound like they may help fix this issue.  Thanks again.

Dave

- Original Message -
From: David Burry [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, December 31, 2002 5:54 PM
Subject: Re: [PATCH] remove some mutex locks in the worker MPM


 Ohh, this sounds like an awesome optimization... I noticed mutex
 contention was extremely high on a very high-traffic machine (say, high
 enough to get close to maxing out a gig ethernet card) using the worker
 MPM on Solaris 8...  It may also have to do with caching we were doing
 (mod_mem_cache crashed and burned; we had to use mod_file_cache to get
 it to work, but it was still quite the exercise).

 Dave

 - Original Message -
 From: Brian Pane [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, December 31, 2002 5:30 PM
 Subject: [PATCH] remove some mutex locks in the worker MPM


  I'm working on replacing some mutex locks with atomic-compare-and-swap
  based algorithms in the worker MPM, in order to get better concurrency
  and lower overhead.
 
  Here's the first change: take the pool recycling code out of the
  mutex-protected critical region in the queue_info code.  Comments
  welcome...
 
  Next on my list is the code that synchronizes the idle worker count.
  I think I can eliminate the need to lock a mutex except in the
  special case where all the workers are busy.
 
  Brian