On 22.01.2017 at 22:22, Yann Ylavic wrote:
> Hi Stefan,
>
> On Sun, Jan 22, 2017 at 8:00 PM, Stefan Priebe - Profihost AG
> <[email protected]> wrote:
>> On 22.01.2017 at 18:02, Eric Covener wrote:
>>> On Sun, Jan 22, 2017 at 11:21 AM, Stefan Priebe - Profihost AG
>>> <[email protected]> wrote:
>>>> Hi Stefan,
>>>>
>>>> No, I was mistaken, the customer isn't using mod_proxy - but I think
>>>> this is the patch causing my problems:
>>>> https://github.com/apache/httpd/commit/a61a4bd02e483fb45d433343740d0130ee3d8d5d
>>>>
>>>> What do you think?
>>>>
>>>
>>> If that is the culprit, you could likely minimize it & help confirm with
>>>
>>> * Increase MaxSpareThreads to the current value of MaxClients (aka
>>>   MaxRequestWorkers)
>>> * Make sure MaxRequestsPerChild is 0.
>>
>> Why not just revert that one?
>
> This commit is likely not the "culprit", but probably the one that
> brings the issue to light.
>
> It makes mpm_event terminate its threads/keepalive connections more
> aggressively/quickly on graceful restart, so that the new generation of
> child processes can also handle new connections more quickly.
>
> I agree with Eric that the next test would be to avoid "maintenance"
> graceful restarts by tuning MaxSpareThreads and MaxRequestsPerChild as
> suggested; mod_http2 should then see the same behaviour from the
> MPM as with 2.4.23.
>
> You may still reproduce the crash with "explicit" graceful restarts
> (e.g. "apache[2]ctl -k graceful" or "/etc/init.d/apache2 reload"), but
> if it proves stable otherwise, the issue is still about double
> cleanup/close when http2 pools/buckets are in place.
Thanks for this great explanation, it makes sense. Currently I'm waiting
for the next crash, but haven't gotten one yet... I have no idea which
kind of clients are able to trigger them - but they were very rare. 99%
of the crashes are gone with mod_http2 v1.8.9. Maybe I should retest your
V7 patch?

Greets,
Stefan

> @icing: Any special expectation in mod_h2 with regard to mpm worker
> threads' lifetime (or keepalive connections that should stay alive for
> the configured limit)?
> I see that beam buckets make use of thread local storage/keys for
> locking, and that they also handle the double cleanup like eoc buckets
> did before 1.8.9, but I can't follow all the paths yet.
> Maybe something to look at there?
>
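For what it's worth, the "double cleanup" pattern mentioned above usually
looks roughly like the following generic APR sketch - this is not
mod_http2's actual beam code, and the my_beam_* names are invented for
illustration. The idea is that an object can be torn down either by its
pool cleanup or by an explicit destroy call, and whichever path runs
second must become a no-op:

    #include <apr_pools.h>

    typedef struct my_beam_t {
        apr_pool_t *pool;
        int destroyed;          /* set once teardown has happened */
    } my_beam_t;

    /* Pool cleanup: runs when the pool is destroyed, unless it was
     * already killed by an explicit my_beam_destroy() call. */
    static apr_status_t my_beam_cleanup(void *data)
    {
        my_beam_t *beam = data;
        if (!beam->destroyed) {
            beam->destroyed = 1;
            /* ... release buffers, mutexes, thread keys ... */
        }
        return APR_SUCCESS;
    }

    /* Explicit destroy: unregister the pool cleanup first so it cannot
     * run a second time against already-released state. */
    static void my_beam_destroy(my_beam_t *beam)
    {
        apr_pool_cleanup_kill(beam->pool, beam, my_beam_cleanup);
        my_beam_cleanup(beam);
    }

    static my_beam_t *my_beam_create(apr_pool_t *pool)
    {
        my_beam_t *beam = apr_pcalloc(pool, sizeof(*beam));
        beam->pool = pool;
        apr_pool_cleanup_register(pool, beam, my_beam_cleanup,
                                  apr_pool_cleanup_null);
        return beam;
    }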

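And for reference, the tuning Eric suggested earlier in the thread would
look roughly like this in the mpm_event configuration (the worker count
is only a placeholder; the point is to raise MaxSpareThreads to whatever
MaxRequestWorkers is already set to, so idle children are never reaped,
and to disable child recycling):

    # Illustrative values only; MaxSpareThreads should match the
    # existing MaxRequestWorkers setting.
    MaxRequestWorkers    400
    MaxSpareThreads      400
    MaxRequestsPerChild  0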