> Idle processing would remain largely the same. Count the number of
> idle threads and see if it falls within the min/max range. If not,
> consider adjusting.
>
> For request processing I suggest the following algorithm. First,
> we need to add a config directive called max_total_threads. The
> algorithm would then be:
> x = count_the_total_number_of_threads;
> y = count_the_total_number_of_servers;
> if (((x + threads_per_child) <= max_total_threads) &&
>     (y < max_clients))
>     start_a_new_server;
>
> This allows max_clients to be set higher even when the number of
> threads per child is also high, because you could configure something
> like the following:
>
> max_clients = 250; /* A very large scary number, but stick with me... */
> threads_per_child = 100; /* for a total of 25000 possible threads! */
>
> max_total_threads = 300; /* Max at any given time, out of that possible 25000. */
> min_spare_threads = 20;
> max_spare_threads = 50;
>
> This means that there will be 3 servers at a minimum (3 * 100 threads = 300).
> But, in the worst case, there could be as many as 201 servers running (each
> with only 1 thread still busy; since 201 + 100 > 300, no new servers can start).
>
> If you have 201 requests still being processed, then the server is probably
> quite busy (as opposed to the current design, where the server could be
> nearly idle in the same situation). If a server that is tuned reasonably
> for its hardware gets into this busy a state, then it is understandable
> that it is slow to process new requests.
>
> Thoughts?
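
To restate the proposal in rough C (every name below is made up for
illustration; none of these are real httpd symbols), this is what I
understand the maintenance pass to look like:

    /* Values that would come from the proposed config directives. */
    static int max_clients       = 250;
    static int threads_per_child = 100;
    static int max_total_threads = 300;
    static int min_spare_threads = 20;
    static int max_spare_threads = 50;

    /* Stand-ins for whatever actually walks the scoreboard. */
    extern int  count_idle_threads(void);
    extern int  count_total_threads(void);
    extern int  count_total_servers(void);
    extern void adjust_idle_threads(int idle);
    extern void start_a_new_server(void);

    static void perform_maintenance(void)
    {
        int idle    = count_idle_threads();
        int total   = count_total_threads();
        int servers = count_total_servers();

        /* Idle processing stays the same: keep the number of idle
         * threads inside the configured min/max spare range. */
        if (idle < min_spare_threads || idle > max_spare_threads)
            adjust_idle_threads(idle);

        /* New request-processing rule: only start another child if its
         * full complement of threads still fits under max_total_threads. */
        if ((total + threads_per_child) <= max_total_threads
            && servers < max_clients)
            start_a_new_server();
    }
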
I don't see how this will work without a major overhaul of the basic
Apache design. The problem is the scoreboard. Each thread/process gets a
spot in the scoreboard, and in order for a new process to start, there
need to be spots for all of its threads. Currently, all of the threads
for a given process occupy contiguous spots in the scoreboard. To handle
the case you are describing, we would have to modify the scoreboard to
either consolidate threads as processes exit (a nightmare for
serialization), or make it smarter about finding open locations (again,
serialization issues). Currently, a thread's location in the scoreboard
is computed with a very simple mathematical operation; that would have to
change for your plan to work.
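
To make the contiguity point concrete, the lookup today amounts to
something like this (illustrative only, not the actual scoreboard code):

    /* A worker's slot is pure arithmetic on (child, thread), which only
     * works because every child owns a fixed, contiguous block. */
    #define THREAD_LIMIT 100   /* threads_per_child in the example above */

    static int slot_index(int child_num, int thread_num)
    {
        return child_num * THREAD_LIMIT + thread_num;
    }

    /* Packing children into whatever holes happen to be free means the
     * parent has to search for and reserve slots instead, and that
     * search has to be serialized against every other process. */
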
The numbers you describe above would actually allocate a scoreboard big
enough to hold 25,000 threads, which is a non-starter. You will need to
figure out how to allocate the scoreboard without over-allocating by that
much and without introducing a lot of serialization, because too much
serialization will kill our performance.
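
The allocation follows directly from the two configured limits (again,
the names are only illustrative):

    #define MAX_CLIENTS        250
    #define THREADS_PER_CHILD  100

    /* 250 * 100 = 25,000 worker slots sized up front, even though
     * max_total_threads only ever lets 300 of them be live. */
    enum { SCOREBOARD_SLOTS = MAX_CLIENTS * THREADS_PER_CHILD };
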
Allow me to offer another idea. What if we created temporary processes
for this case? The idea is that when we are shutting down, if processes
get stuck with a couple of long-lived requests, the parent checks the
scoreboard to see how many contiguous free locations there are. It then
creates a single process with that many threads. As more processes are
required, we can create more temporary servers. Then, as soon as we can
create a full-sized process, we start to kill off the temporary processes
and spawn new full processes.
This solves your problem of having processes stuck in long-lived requests,
and it solves the scoreboard issue too.
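
A rough sketch of what the parent would do, assuming nothing more than a
free/in-use flag per slot (the names here are hypothetical, not code that
exists in the tree):

    #define TOTAL_SLOTS 300

    extern int  slot_is_free(int slot);                       /* stand-in */
    extern void spawn_temporary_child(int first_slot, int nthreads);

    /* Find the longest run of contiguous free slots and spawn one
     * temporary child sized to fit that run. */
    static void spawn_into_largest_hole(void)
    {
        int i;
        int best_start = -1, best_len = 0;
        int run_start = -1, run_len = 0;

        for (i = 0; i < TOTAL_SLOTS; i++) {
            if (slot_is_free(i)) {
                if (run_len == 0)
                    run_start = i;
                run_len++;
                if (run_len > best_len) {
                    best_len = run_len;
                    best_start = run_start;
                }
            }
            else {
                run_len = 0;
            }
        }

        /* Start one temporary child with however many threads fit in the
         * hole.  Once a full-sized block frees up, the temporary children
         * are killed off and replaced with full children, so the simple
         * per-child slot arithmetic stays intact. */
        if (best_len > 0)
            spawn_temporary_child(best_start, best_len);
    }
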
Ryan
_______________________________________________________________________________
Ryan Bloom [EMAIL PROTECTED]
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------