It's amazing, but we have had this design before. In fact, this was the threaded MPM. The reason we removed that MPM was that it falls apart when trying to do a restart.
The easy way to fix the problem you observed is to have the listener thread not call accept() if there are no free worker threads. That is why we originally had two condition variables in the worker design.

Ryan

----------------------------------------------
Ryan Bloom          [EMAIL PROTECTED]
645 Howard St.      [EMAIL PROTECTED]
San Francisco, CA

> -----Original Message-----
> From: Brian Pane [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, April 09, 2002 10:51 PM
> To: [EMAIL PROTECTED]
> Subject: [PATCH] convert worker MPM to leader/followers design
>
> Based on the "slow Apache 2.0" thread earlier today,
> and my observation therein that it's possible for a
> worker child process to block on a full file descriptor
> queue (all threads busy) while other child procs have
> idle threads, I decided to revive the idea of switching
> the worker thread management to a leader/followers
> pattern.
>
> The way it works is:
>
> * There's no dedicated listener thread. The workers
>   take turns serving as the listener.
>
> * Idle threads are listed in a stack. Each thread has
>   a condition variable. When the current listener
>   accepts a connection, it pops the next idle thread
>   from the stack and wakes it up using the condition
>   variable. The newly awakened thread becomes the
>   new listener.
>
> * If there is no idle thread available to become
>   the new listener, the next thread to finish handling
>   its current connection takes over as listener.
>   (Thus a process that's already saturated with
>   connections won't call accept() until it actually
>   has an idle thread available.)
>
> In order to implement the patch quickly, I've used a
> mutex to guard the stack for now, rather than using
> atomic compare-and-swap operations like I'd once
> proposed. In order to improve scalability, though,
> this mutex is *not* used for the condition variable
> signaling. Instead, each worker thread has a private
> mutex for use with its condition variable.
> This
> thread-private mutex is locked at thread creation,
> and the only subsequent operations on it are those
> done implicitly by the cond_signal/cond_wait. Thus
> only the thread associated with that mutex ever locks
> or unlocks it, which should help to reduce synchronization
> overhead. (The design is dependent on the semantics
> of the one-listener-at-a-time model to synchronize
> the cond_signal with the cond_wait.)
>
> Can I get a few volunteers to test/review this?
>
> Thanks,
> --Brian