> On Fri, Apr 26, 2002 at 11:32:19AM -0400, Paul J. Reder wrote:
> > In my tests, this patch allows existing worker threads to continue
> > processing requests while the new threads are started.
> >
> > In the previous code the server would pause while new threads were
> > being created. The new threads started accepting work immediately,
> > causing the existing threads to starve even though there is only a
> > small (but growing) number of new threads.
> >
> > This patch allows the server to maintain a higher level of responsiveness
> > during the ramp up time.
>
> I don't quite understand what you are saying here. AIUI the worker MPM
> creates all threads as soon as it is started, and as an optimization it
> creates the listener thread as soon as there is at least one worker
> thread available. By delaying the startup of the listener thread we're
> merely increasing the amount of time it takes to start a new child and
> start accepting connections.
By deferring the start-up of the listener, we are decreasing the amount
of time it takes to start the new process. My speculation in creating
the patch was that we could save the time spent context switching
between a few active workers and the listener thread, and use that time
to start up the new threads. More speculation: context switching may be
particularly expensive while threads are starting, or conversely, thread
creation may be really expensive while lots of context switches are
happening in the process. What is interesting is that, at least by
Paul's measurements, the patch does make a difference.

I think Jeff's comment was close to on target as well. If the listener
thread can efficiently defer accepting connections when there are no
workers available, that would probably accomplish much the same.
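A minimal sketch of that deferral idea, purely for illustration (the names and structure are invented here, not taken from the actual worker MPM code): the listener blocks on a condition variable until at least one worker is idle, and reserves that worker before calling accept(), so the accept queue can never outrun the worker pool.

```c
/* Hypothetical sketch: listener back-pressure via an idle-worker count.
 * Not the real worker MPM code; names are invented for illustration. */
#include <pthread.h>

static pthread_mutex_t idle_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  idle_cond = PTHREAD_COND_INITIALIZER;
static int idle_workers = 0;

/* Called by a worker when it finishes a request and rejoins the pool. */
void worker_now_idle(void)
{
    pthread_mutex_lock(&idle_lock);
    idle_workers++;
    pthread_cond_signal(&idle_cond);
    pthread_mutex_unlock(&idle_lock);
}

/* Called by the listener before each accept(): blocks while no worker
 * is free, then reserves one idle slot for the connection it is about
 * to accept. */
void listener_wait_for_idle_worker(void)
{
    pthread_mutex_lock(&idle_lock);
    while (idle_workers == 0)
        pthread_cond_wait(&idle_cond, &idle_lock);
    idle_workers--;
    pthread_mutex_unlock(&idle_lock);
}
```

With something like this, the listener would naturally stall during thread ramp-up instead of pre-accepting a queue full of connections that nobody can service yet.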
Bill
> Please correct me if I'm missing something.
>
> The reason I think you were seeing a pause while new threads were being
> created, as Jeff points out, was because our listener thread was able
> to accept far more connections than we had available workers or would
> have available workers. In the worst case, since we create the listener
> as soon as there is 1 worker, it is possible to have a queue filled
> with ap_threads_per_child accept()ed connections and only 1 worker.
> As soon as the next worker is created the listener is able to accept()
> yet another connection and stuff that into the queue.
>
> And I think I've just realized something else. Since the scoreboard
> is not updated until a worker thread pulls the connection off of the
> queue, the parent is not going to create another child in accordance
> with how many connections are accept()ed. This means that we are able to
> accept up to 2*ThreadsPerChild*number_of_children connections while the
> parent will only count us as having 1/2 that amount of concurrency, and
> therefore will not match the demand. This is another bug in the worker
> MPM that would be fixed if we prevented the listener from accepting more
> connections than workers.
Yep, and that is closely related to another problem Paul is tracking
down: process_idle_server maintenance is thrashing a bit when a load
spike comes in (i.e., processes are actually being told to shut down in
the midst of a load spike).
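One conceivable way to damp that thrashing (a sketch only; the names, the damping factor, and the approach are all my invention, not anything Paul has proposed) is to require the idle surplus to persist for several consecutive maintenance intervals before any child is told to exit, so a momentary dip during a spike does not kill processes that will be needed again immediately:

```c
/* Hypothetical sketch of damped idle-server maintenance.
 * All names and the damping constant are invented for illustration. */
#include <stdbool.h>

#define SURPLUS_INTERVALS_BEFORE_KILL 3  /* assumed damping factor */

/* Consecutive maintenance intervals with too many idle threads. */
static int surplus_count = 0;

/* Returns true only once the surplus has persisted long enough that
 * shutting a child down is unlikely to fight an incoming load spike. */
bool should_kill_idle_child(int idle_threads, int max_spare_threads)
{
    if (idle_threads > max_spare_threads)
        surplus_count++;
    else
        surplus_count = 0;  /* demand returned; reset the damper */
    return surplus_count >= SURPLUS_INTERVALS_BEFORE_KILL;
}
```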
Bill
>
> -aaron
>