< snipping the pieces about locking.  Not because I don't care, but
because I have stated too often that I can't accept locking the scoreboard on
every request.  If this design requires that lock, I have modules that
will be greatly impacted. >

> > > The only reason we have locking at all is to prevent the 4 cases listed
> > > above from colliding with each other.  Even in the 4 cases above, the lock
> > > contention will be minimal and the performance degradation minimal and
> > > perhaps not even measurable.
> > >
> > > A few other benefits to Paul's design:
> > > 1. Eliminates the requirement for compiled-in HARD_SERVER_LIMIT or
> > >    HARD_THREAD_LIMIT.
> >
> > You still need to request a certain amount of shared memory when you
> > start the parent process, and you will require HARD_SERVER_LIMIT and
> > HARD_THREAD_LIMIT to know how much to request.  Either that, or whenever
> > you add a new child, you will need to allocate more shared memory, and
> > somehow tell your child processes about it.
>
> The allocation is based on the configured values for MaxClients and
> ThreadsPerChild.  These values can be changed from one start to the next
> (allowing system tuning to happen between restarts - without a recompile!)

You're going to lose information between restarts, or you are going to
require copying large chunks of shared memory on graceful restarts.
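
(For reference only: the run-time sizing being argued about here amounts to
something like the sketch below.  The struct and variable names are mine, not
Paul's, and the process count is however the MPM derives it from MaxClients.)

    /* Illustrative sketch only; names are not from the patch.  The point is
     * that the scoreboard is sized from run-time configuration rather than
     * from compiled-in HARD_SERVER_LIMIT / HARD_THREAD_LIMIT. */
    #include <stddef.h>
    #include <sys/types.h>

    typedef struct { pid_t pid; int generation; } process_score;
    typedef struct { int status; unsigned long access_count; } worker_score;

    static size_t scoreboard_size(int num_processes, int threads_per_child)
    {
        return (size_t)num_processes * sizeof(process_score)
             + (size_t)num_processes * (size_t)threads_per_child
               * sizeof(worker_score);
    }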

> > > 2. You don't need to allocate child score if you don't care about
> > >    mod_status (but it can be added during a restart)
> >
> > You still need to request a certain amount of shared memory when you
> > start the parent process, and you will require HARD_SERVER_LIMIT and
> > HARD_THREAD_LIMIT to know how much to request.  Either that, or whenever
> > you restart, you will forget all information about your child processes.
>
> See above for the allocation size basis. As for keeping info from one restart
> to the next, the code currently allocates a new scoreboard on graceless
> restarts and all info from the old scoreboard is lost. On graceful restarts
> the existing scoreboard is reused. In the future, graceful restarts would get
> a clean scoreboard and the old one would eventually be cleaned up. The
> results would be collected and stored at a common top level.

You are either going to leak shared memory like a sieve, or you are going
to need to copy the data.  At what point are you planning to free the
shared memory that was allocated during the first start of the server?
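
(To make the trade-off concrete, the "copy the data" option would look roughly
like the sketch below.  Every name in it is hypothetical; it is not code from
either patch.)

    /* Illustrative only: fold the old generation's results into the new
     * scoreboard so the old segment can eventually be freed. */
    typedef struct {
        unsigned long total_requests;   /* the "results ... stored at a
                                           common top level" */
        unsigned long total_kbytes;
    } global_score;

    static void carry_forward(global_score *new_global,
                              const global_score *old_global)
    {
        new_global->total_requests += old_global->total_requests;
        new_global->total_kbytes   += old_global->total_kbytes;
        /* Even after copying, the old shared segment cannot be freed until
         * the last child of the old generation has exited and stopped
         * writing to it. */
    }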

> > > 4. Does not require any changes to the MPM.  Each MPM can start threads
> > >    according to its ThreadsPerChild setting w/o needing to pay attention
> > >    to the scoreboard (I believe your design required child processes to
> > >    wait for worker score slots to become available before they can start
> > >    threads. This is imposing too much unnecessary work on the MPM.)
> >
> > You are ignoring one problem with the design.  I have now posted about it
> > three times, and nobody has told me how it will be fixed.  You need to
> > honor MaxClients.  If I set MaxClients to 250, then I expect MaxClients to
> > be 250, regardless of whether those clients are long-lived or not.  If
> > every child can always start threads according to ThreadsPerChild, you
> > will be violating MaxClients on a heavily loaded server.  This means that
> > you are actually requiring MORE changes to the starting logic than the
> > table implementation, because each child will need to determine how many
> > threads to start at any given time.
>
> There is currently code to check whether another process can be started (in
> process_idle_server_maintenance), and there will still be code to do that with
> my design. The design still limits the number of processes and the number
> of workers per process. The difference is that a new process can be started
> when (current_total_workers < ((MaxClients-1) * ThreadsPerChild)) provided
> there is a spare process.
>
> "My design can handle this" (I heard you say that all the way from CA). This
> is true to a point. Currently, in the table design, a process cannot own
> workers from multiple rows. A process owns all of the workers in a given row.
> Currently there is a one to one mapping of process slots to worker rows.
> Your proposal is to allow multiple processes to map to a single row, but still
> not map a process to workers in multiple rows.
>
> Apache does a wonderful job of distributing the request load to workers.
> There is a cycle that happens where many workers start running into the MRPC
> limit and start quiescing. As workers quiesce, you will see 5 freed up from
> this row, and 6 from that, and another 10 from that...

So what?  Worst case, if you have a lot of child processes dying and being
respawned, you will lose some performance.  If your server is displaying
this behavior, then I would argue that MRPC is set too low, and you should
take a look at how it is set up.
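
(For reference, the start test quoted above boils down to roughly the sketch
below.  The names are illustrative, not from Paul's code, and MaxClients is
read as the process limit, as in his formula.)

    /* Illustrative sketch of the quoted start condition; names are made up. */
    static void start_new_child(int threads_per_child)
    {
        /* stub: the real code would fork a child that starts
         * threads_per_child worker threads */
        (void)threads_per_child;
    }

    static void maybe_start_child(int max_clients, int threads_per_child,
                                  int current_total_workers,
                                  int active_processes)
    {
        int spare_process = (active_processes < max_clients);

        if (current_total_workers < (max_clients - 1) * threads_per_child
            && spare_process) {
            start_new_child(threads_per_child);
        }
    }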

> You will need to be able to start up a new process per row to take advantage
> of these dribs and drabs that are appearing. With my design, as soon as there
> are ThreadsPerChild workers on the free list, and there is a free
> process, I can start a new process with ThreadsPerChild workers.

What do you mean, "and there is a free process"?  Does that mean that if I
am in a position where I have MaxClients = 5, and I have 5 processes with
one thread in a long-lived request, that I won't start a new process?
That won't work.  You need to be able to start a new process to replace an
old one before an old process has died; otherwise, in pathological cases,
you will have a dead server.

> What am I missing here? There is no overrun of the configured values. And the
> algorithm isn't any more complex.

The algorithm is MUCH more complex.  I know it is, because I implemented
my entire patch in 155 lines.  That is the size of the entire patch.  I
need to test it more, but I expect to post it later today or tomorrow.


_______________________________________________________________________________
Ryan Bloom                              [EMAIL PROTECTED]
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------

