Re: svn commit: r1895553 - /httpd/httpd/trunk/server/mpm/event/event.c

2021-12-06 Thread Eric Covener
> This is fine. Maybe the large headroom is a good idea for people who
> complain about the change in behavior and, as you state, a large
> scoreboard should not be too costly these days. Hence +1 to the
> additional condition.

+1, the option to just over-allocate ServerLimit even further is
better than a new obscure directive.
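
For illustration, a hypothetical event MPM sizing along those lines
(values invented, only the 2x headroom ratio matters):

    # Illustrative only: keep a full generation's worth of scoreboard
    # headroom so a graceful restart has slots for a whole new set of
    # children while the old ones drain.
    ThreadsPerChild    25
    MaxRequestWorkers  400   # 400 / 25 = 16 active children at most
    ServerLimit        32    # 2 * 16: room for one full graceful restart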


Re: svn commit: r1895553 - /httpd/httpd/trunk/server/mpm/event/event.c

2021-12-06 Thread Ruediger Pluem



On 12/6/21 4:58 PM, Yann Ylavic wrote:
> On Mon, Dec 6, 2021 at 3:27 PM Yann Ylavic  wrote:
>>
>> On Mon, Dec 6, 2021 at 1:53 PM Ruediger Pluem  wrote:
>>>
>>> On 12/6/21 1:33 PM, Yann Ylavic wrote:

>>>> How about (modulo brain fart):
>>>> const int N = 1; /* or 2, 3, 4.. */
>>>> int avail_daemons = server_limit - retained->total_daemons;
>>>> int have_room_for_N_restarts = (avail_daemons / N >= active_daemons_limit);
>>>> int inactive_daemons = retained->total_daemons - retained->active_daemons;
>>>> int do_kill = (have_room_for_N_restarts || inactive_daemons == 0);
>>>> if (do_kill) {
>>>>     ap_mpm_podx_signal(retained->buckets[child_bucket].pod,
>>>>                        AP_MPM_PODX_GRACEFUL);
>>>> }
>>>> else {
>>>>     /* Wait for inactive_daemons to settle down */
>>>> }
>>>> ?

>>>
>>> Not sure if it is worth it as I think that have_room_for_N_restarts will 
>>> only rarely be true.
>>
>> You are right if "server_limit < 2 * active_daemons_limit", but I
>> don't think that's a "reasonable" configuration; I wouldn't mind
>> serializing kills (i.e. inactive_daemons == 0) in this case to avoid
>> "Scoreboard is full" issues on a restart under load, all the more
>> with slow-to-exit processes (asynchronous connections with huge
>> timeouts). The scoreboard is quite cheap in (shared-)memory space;
>> "ServerLimit >= 5 * active_daemons_limit" is not what will eat
>> system memory...
>>
>> So if we take "N = server_limit / active_daemons_limit", the above
>> looks like something that could work, with a "reasonable"
>> configuration, for killing processes above MaxSpareThreads faster
>> (even though I don't personally find it necessarily helpful).
> 
> Argh, no, sorry for the babbling (we don't want to maintain room for
> N potential restarts all the time; one is enough, and more is at the
> admin's discretion depending on the workflow...).
> 
> If we want to stop killing processes to keep a reserve of
> active_daemons_limit for a potential graceful restart, the condition
> might be:
>   int do_kill = (retained->total_daemons == retained->active_daemons
>                  || (server_limit - retained->total_daemons >
>                      active_daemons_limit));
> 
> But after all, it might be neither your concern nor your priority...

This is fine. Maybe the large headroom is a good idea for people who
complain about the change in behavior and, as you state, a large
scoreboard should not be too costly these days. Hence +1 to the
additional condition.

Regards

RĂ¼diger



Re: svn commit: r1895553 - /httpd/httpd/trunk/server/mpm/event/event.c

2021-12-06 Thread Yann Ylavic
On Mon, Dec 6, 2021 at 3:27 PM Yann Ylavic  wrote:
>
> On Mon, Dec 6, 2021 at 1:53 PM Ruediger Pluem  wrote:
> >
> > On 12/6/21 1:33 PM, Yann Ylavic wrote:
> > >
> > > How about (modulo brain fart):
> > > const int N = 1; /* or 2, 3, 4.. */
> > > int avail_daemons = server_limit - retained->total_daemons;
> > > int have_room_for_N_restarts = (avail_daemons / N >= active_daemons_limit);
> > > int inactive_daemons = retained->total_daemons - retained->active_daemons;
> > > int do_kill = (have_room_for_N_restarts || inactive_daemons == 0);
> > > if (do_kill) {
> > >     ap_mpm_podx_signal(retained->buckets[child_bucket].pod,
> > >                        AP_MPM_PODX_GRACEFUL);
> > > }
> > > else {
> > >     /* Wait for inactive_daemons to settle down */
> > > }
> > > ?
> > >
> >
> > Not sure if it is worth it as I think that have_room_for_N_restarts will 
> > only rarely be true.
>
> You are right if "server_limit < 2 * active_daemons_limit", but I
> don't think that's a "reasonable" configuration; I wouldn't mind
> serializing kills (i.e. inactive_daemons == 0) in this case to avoid
> "Scoreboard is full" issues on a restart under load, all the more
> with slow-to-exit processes (asynchronous connections with huge
> timeouts). The scoreboard is quite cheap in (shared-)memory space;
> "ServerLimit >= 5 * active_daemons_limit" is not what will eat
> system memory...
>
> So if we take "N = server_limit / active_daemons_limit", the above
> looks like something that could work, with a "reasonable"
> configuration, for killing processes above MaxSpareThreads faster
> (even though I don't personally find it necessarily helpful).

Argh, no, sorry for the babbling (we don't want to maintain room for
N potential restarts all the time; one is enough, and more is at the
admin's discretion depending on the workflow...).

If we want to stop killing processes to keep a reserve of
active_daemons_limit for a potential graceful restart, the condition
might be:
  int do_kill = (retained->total_daemons == retained->active_daemons
                 || (server_limit - retained->total_daemons >
                     active_daemons_limit));
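
A quick sanity check with invented numbers (nothing from a real
config): take server_limit = 20 and active_daemons_limit = 8, with
total_daemons = 14 and active_daemons = 12. Then total != active (two
children are still dying) and 20 - 14 = 6 <= 8 (less than a full
restart's worth of free slots), so do_kill is false and we wait. Once
the dying children exit (total == active), or enough slots free up
(20 - total > 8), killing resumes.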

But after all, it might be neither your concern nor your priority...

>
> Cheers;
> Yann.


Re: svn commit: r1895553 - /httpd/httpd/trunk/server/mpm/event/event.c

2021-12-06 Thread Yann Ylavic
On Mon, Dec 6, 2021 at 1:53 PM Ruediger Pluem  wrote:
>
> On 12/6/21 1:33 PM, Yann Ylavic wrote:
> >
> > How about (modulo brain fart):
> > const int N = 1; /* or 2, 3, 4.. */
> > int avail_daemons = server_limit - retained->total_daemons;
> > int have_room_for_N_restarts = (avail_daemons / N >= active_daemons_limit);
> > int inactive_daemons = retained->total_daemons - retained->active_daemons;
> > int do_kill = (have_room_for_N_restarts || inactive_daemons == 0);
> > if (do_kill) {
> >     ap_mpm_podx_signal(retained->buckets[child_bucket].pod,
> >                        AP_MPM_PODX_GRACEFUL);
> > }
> > else {
> >     /* Wait for inactive_daemons to settle down */
> > }
> > ?
> >
>
> Not sure if it is worth it as I think that have_room_for_N_restarts will only 
> rarely be true.

You are right if "server_limit < 2 * active_daemons_limit", but I
don't think that's a "reasonable" configuration; I wouldn't mind
serializing kills (i.e. inactive_daemons == 0) in this case to avoid
"Scoreboard is full" issues on a restart under load, all the more
with slow-to-exit processes (asynchronous connections with huge
timeouts). The scoreboard is quite cheap in (shared-)memory space;
"ServerLimit >= 5 * active_daemons_limit" is not what will eat
system memory...

So if we take "N = server_limit / active_daemons_limit", the above
looks like something that could work, with a "reasonable"
configuration, for killing processes above MaxSpareThreads faster
(even though I don't personally find it necessarily helpful).
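
For example (illustrative values only): server_limit = 48 and
active_daemons_limit = 16 give N = 3, so
    have_room_for_N_restarts = (avail_daemons / 3 >= 16)
requires avail_daemons >= 48, i.e. a completely empty scoreboard,
which is why, with such an N, it would indeed rarely be true.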


Cheers;
Yann.


Re: svn commit: r1895553 - /httpd/httpd/trunk/server/mpm/event/event.c

2021-12-06 Thread Ruediger Pluem



On 12/6/21 1:33 PM, Yann Ylavic wrote:
> On Fri, Dec 3, 2021 at 6:41 PM Eric Covener  wrote:
>>
>> On Fri, Dec 3, 2021 at 11:23 AM Ruediger Pluem  wrote:
>>>
>>> On 12/3/21 2:25 PM, yla...@apache.org wrote:
>>>> Author: ylavic
>>>> Date: Fri Dec  3 13:25:51 2021
>>>> New Revision: 1895553
>>>>
>>>> URL: http://svn.apache.org/viewvc?rev=1895553&view=rev
>>>> Log:
>>>> mpm_event: Follow up to r1894285: new MaxSpareThreads heuristics.
>>>>
>>>> When at MaxSpareThreads, instead of deferring the stop if we are close to
>>>> active/server limit let's wait for the pending exits to complete.
>>>>
>>>> This way we always and accurately account for slow-to-exit processes to
>>>> avoid filling up the scoreboard, whether at the limits or not.
>>>
>>> Just as a comment in case users report it: this can slow down
>>> process reduction even when far away from the limits.
>>>
>>> Previously, each call to perform_idle_server_maintenance killed off
>>> one process if there was one to kill from the spare-threads point
>>> of view. Now it can take more calls, as the process killed by the
>>> previous call to perform_idle_server_maintenance might not have
>>> died by the time we return to perform_idle_server_maintenance,
>>> which prevents killing another one. Hence we won't have multiple
>>> processes dying in parallel when we want to reduce processes due to
>>> too many spare threads.
>>> This can cause situations where, if we kill a slow-dying process
>>> first, we will have completely idle processes floating around for
>>> quite some time.
> 
> Since the connections are more or less evenly distributed across all
> the processes, and each process handles all types of connections
> (w.r.t. lifetime/timeout, hence slowness to exit), there is probably
> not a single slow-to-exit process but rather all processes or none.
> So if a process is slow to exit, I don't think we gain anything by
> killing more of them quickly (unless we have room for that, see
> below); we will still have completely idle processes (the dying ones)
> for the same time, but they won't be able to absorb any increase in
> load happening soon (whereas waiting for the dying processes before
> killing more allows that).
> 
> Though killing them one at a time is possibly a bit too drastic; what
> would be a reasonable maximum number of dying processes?
> 
>>
>> Could we base it on max_daemons_limit instead? In the current
>> implementation we might still have ample slack space in the
>> scoreboard.
> 
> IIUC, the issue (by design) with max_daemons_limit is that it
> accounts for large holes on graceful restart (when the old generation
> stops), and those won't fill up until MaxSpareThreads (or
> MaxRequestsPerChild) kicks in; so after a graceful restart it may not
> be the appropriate metric for "how much room do we have?" at
> MaxSpareThreads time.
> 
> How about (modulo brain fart):
> const int N = 1; /* or 2, 3, 4.. */
> int avail_daemons = server_limit - retained->total_daemons;
> int have_room_for_N_restarts = (avail_daemons / N >= active_daemons_limit);
> int inactive_daemons = retained->total_daemons - retained->active_daemons;
> int do_kill = (have_room_for_N_restarts || inactive_daemons == 0);
> if (do_kill) {
>     ap_mpm_podx_signal(retained->buckets[child_bucket].pod,
>                        AP_MPM_PODX_GRACEFUL);
> }
> else {
>     /* Wait for inactive_daemons to settle down */
> }
> ?
> 

Not sure if it is worth it as I think that have_room_for_N_restarts will only 
rarely be true.

Regards

RĂ¼diger



Re: svn commit: r1895553 - /httpd/httpd/trunk/server/mpm/event/event.c

2021-12-06 Thread Yann Ylavic
On Fri, Dec 3, 2021 at 6:41 PM Eric Covener  wrote:
>
> On Fri, Dec 3, 2021 at 11:23 AM Ruediger Pluem  wrote:
> >
> > On 12/3/21 2:25 PM, yla...@apache.org wrote:
> > > Author: ylavic
> > > Date: Fri Dec  3 13:25:51 2021
> > > New Revision: 1895553
> > >
> > > URL: http://svn.apache.org/viewvc?rev=1895553&view=rev
> > > Log:
> > > mpm_event: Follow up to r1894285: new MaxSpareThreads heuristics.
> > >
> > > When at MaxSpareThreads, instead of deferring the stop if we are close to
> > > active/server limit let's wait for the pending exits to complete.
> > >
> > > This way we always and accurately account for slow-to-exit processes to
> > > avoid filling up the scoreboard, whether at the limits or not.
> >
> > Just as a comment in case users report it: this can slow down
> > process reduction even when far away from the limits.
> >
> > Previously, each call to perform_idle_server_maintenance killed off
> > one process if there was one to kill from the spare-threads point
> > of view. Now it can take more calls, as the process killed by the
> > previous call to perform_idle_server_maintenance might not have
> > died by the time we return to perform_idle_server_maintenance,
> > which prevents killing another one. Hence we won't have multiple
> > processes dying in parallel when we want to reduce processes due to
> > too many spare threads.
> > This can cause situations where, if we kill a slow-dying process
> > first, we will have completely idle processes floating around for
> > quite some time.

Since the connections are more or less evenly distributed across all
the processes, and each process handles all types of connections
(w.r.t. lifetime/timeout, hence slowness to exit), there is probably
not a single slow-to-exit process but rather all processes or none.
So if a process is slow to exit, I don't think we gain anything by
killing more of them quickly (unless we have room for that, see
below); we will still have completely idle processes (the dying ones)
for the same time, but they won't be able to absorb any increase in
load happening soon (whereas waiting for the dying processes before
killing more allows that).

Though killing them one at a time is possibly a bit too drastic; what
would be a reasonable maximum number of dying processes?

>
> Could we base it on max_daemons_limit instead? In the current
> implementation we might still have ample slack space in the
> scoreboard.

IIUC, the issue (by design) with max_daemons_limit is that it accounts
for large holes on graceful restart (when the old generation stops),
and those won't fill up until MaxSpareThreads (or MaxRequestsPerChild)
kicks in; so after a graceful restart it may not be the appropriate
metric for "how much room do we have?" at MaxSpareThreads time.

How about (modulo brain fart):
const int N = 1; /* or 2, 3, 4.. */
int avail_daemons = server_limit - retained->total_daemons;
int have_room_for_N_restarts = (avail_daemons / N >= active_daemons_limit);
int inactive_daemons = retained->total_daemons - retained->active_daemons;
int do_kill = (have_room_for_N_restarts || inactive_daemons == 0);
if (do_kill) {
    ap_mpm_podx_signal(retained->buckets[child_bucket].pod,
                       AP_MPM_PODX_GRACEFUL);
}
else {
    /* Wait for inactive_daemons to settle down */
}
?
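
To make the N = 1 case concrete (numbers invented for illustration):
with server_limit = 32 and active_daemons_limit = 16, avail_daemons is
32 - total_daemons, so have_room_for_N_restarts holds while
total_daemons <= 16; beyond that, do_kill is only true when no child
is already dying (inactive_daemons == 0), which serializes the kills.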


Regards;
Yann.


Re: release vibes?

2021-12-06 Thread Noel Butler

On 06/12/2021 20:36, Stefan Eissing wrote:

> Friends of httpd, how do you feel about a release in the next two
> weeks?
>
> Kind Regards,
> Stefan


-1

That's days before Christmas; most testers, and I'm sure devs too,
will be in holiday mode. Even if not, that close to Christmas server
updates are under embargo in most organisations.


--
Regards,
Noel Butler


release vibes?

2021-12-06 Thread Stefan Eissing
Friends of httpd, how do you feel about a release in the next two weeks?

Kind Regards,
Stefan