> On 14.04.2022, at 17:54, Yann Ylavic <ylavic....@gmail.com> wrote:
> 
> On Thu, Apr 14, 2022 at 1:43 PM Stefan Eissing <ste...@eissing.org> wrote:
>> 
>> 
>> In test/modules/core/test_002_restarts.py there is now a start of this.
>> Invoked specifically with:
>> 
>> trunk> STRESS_TEST=1 pytest -vvv -k test_core_002
> 
> Thanks for writing this!
> 
>> 
>> This uses the given config and runs h2load with 6 clients, each using 5
>> connections over time to perform a total number of requests. The number of
>> clients is easily tweakable.
> 
> I saw h2load issue up to 12 connections with this configuration, so I
> raised the MPM limits a bit in r1899862 to avoid failing on
> "MaxRequestWorkers reached" in the logs.

It may not be totally exact in its limiting, or you may have counted
lingering connections?
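For reference, the load generation boils down to something like the sketch
below (Python; the URL and the request/client counts are made-up
placeholders, and the real test wires h2load up through the test framework
rather than calling it directly like this):

  import subprocess

  # Rough sketch only: run h2load against a test URL and return its text
  # output. URL, request and client counts are placeholders, not the
  # values the actual test uses.
  def run_h2load(url="https://localhost:5001/index.html",
                 requests=300, clients=6, max_streams=5):
      cmd = ["h2load", "-n", str(requests), "-c", str(clients),
             "-m", str(max_streams),   # max concurrent streams per connection
             url]
      p = subprocess.run(cmd, capture_output=True, text=True)
      return p.stdout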

>> 
>> The test in its current form fails, because it expects all requests to
>> succeed. However, h2load counts the remaining requests on a connection as
>> unsuccessful if the server closes it. So maybe we should use another test
>> client or count 'success' differently.
> 
> It passes for me currently with the new MPM limits, probably because
> kept-alive connections are not killed anymore.
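One way to count 'success' differently would be to parse h2load's summary
instead of insisting that every planned request completes. A rough sketch
below; the acceptance rule itself is exactly what we would need to agree
on, and which bucket h2load puts requests cut off by a keep-alive close
into would need checking:

  import re

  # h2load ends with a summary line roughly like:
  #   requests: 300 total, 300 started, 300 done, 298 succeeded, 2 failed, ...
  # Sketch of one possible rule: the run is OK if nothing errored and
  # everything that was actually completed also succeeded.
  def h2load_ok(output):
      m = re.search(r"requests: (\d+) total, (\d+) started, (\d+) done, "
                    r"(\d+) succeeded, (\d+) failed, (\d+) errored", output)
      if not m:
          return False
      total, started, done, succeeded, failed, errored = map(int, m.groups())
      return errored == 0 and succeeded == done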
> 
>> 
>> Anyway, in test/gen/apache/logs/error_log one can see how mpm_event hits
>> its limits on workers and concurrent connections, like:
>> 
>> "All workers are busy or dying, will shutdown 1 keep-alive connections"
>> or "AH03269: Too many open connections (1), not accepting new conns in this
>> process"
>> 
>> So, we need to define how we measure `success` and what we do or do not
>> expect to see in the error log, I guess?
> 
> I'm not sure what we want to test here, the limits or the restart?
> I thought it was the latter as a first step, but yes, if we want to
> test the former we need to either disable keepalive to avoid confusing
> h2load, or use another (dedicated?) client that is more tolerant of how
> we may close kept-alive connections early.

This initial version was just a trial balloon. I think we want to test
the dynamic behaviour of mpm_event with regard to child processes and
restarts. I'll take a stab at changing it for this.
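As for what we expect in the error log, one option is to scan it and
classify the limit-related messages as tolerated (just counted), while
treating anything else at error level or worse as a test failure. A rough
sketch; the path and the tolerated/forbidden split are placeholders for
discussion:

  # Sketch of a possible error_log policy: count the messages we expect
  # to see under stress, collect anything else logged at error/crit level.
  def scan_error_log(path="test/gen/apache/logs/error_log"):
      tolerated, unexpected = 0, []
      with open(path) as f:
          for line in f:
              if "AH03269" in line or "All workers are busy or dying" in line:
                  tolerated += 1
              elif ":error]" in line or ":crit]" in line:
                  unexpected.append(line.strip())
      return tolerated, unexpected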

> Measuring how we behave at the limits may also be tricky; what would
> the expectations be? No unhandled connections? A minimal latency? Or?
> That may depend on the running machine too.
> 
> If we want to test the reload, we don't need to stress too much (just
> some active connections for each generation), but we need something
> that issues "-k restart" and/or "-k graceful" in parallel and counts
> the processes or the "Child <slot> started|stopped: pid=<pid>" messages
> in the logs.
> 
> 
> Cheers;
> Yann.
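For the reload case, something along these lines might be enough. This is
a rough sketch: "apachectl -k graceful" and the log path stand in for
whatever the test environment actually provides, and the message pattern
is the one quoted above:

  import re
  import subprocess
  import time

  # Sketch: trigger a (graceful) restart while load is running, then count
  # the per-child lifecycle messages in the error log.
  def restart_server():
      subprocess.run(["apachectl", "-k", "graceful"], check=True)
      time.sleep(2)  # give the old generation a moment to finish

  def count_child_events(error_log="test/gen/apache/logs/error_log"):
      started = stopped = 0
      pat = re.compile(r"Child \d+ (started|stopped): pid=\d+")
      with open(error_log) as f:
          for line in f:
              m = pat.search(line)
              if m:
                  if m.group(1) == "started":
                      started += 1
                  else:
                      stopped += 1
      return started, stopped

The actual assertion would then be something like: every old generation
stops all of its children, and the new generation starts the expected
number, while the in-flight connections still get served.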
