> On 19.09.2018 at 17:12, Yann Ylavic <ylavic....@gmail.com> wrote:
> 
> On Wed, Sep 19, 2018 at 4:46 PM Stefan Eissing
> <stefan.eiss...@greenbytes.de> wrote:
>> 
>> Thanks, Yann, this helped me pin the problem down further:
>> 
>> - with disablereuse=on everything works fine
>> - with ttl=1 the problem is still there
> 
> Is the KeepAliveTimeout on the backend side above 1 (second)?

Yes, upped it to 20, no difference.
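
For reference, here is roughly how I plan to check from the outside when the
backend really drops an idle keepalive connection (just a sketch, host/port/path
are assumptions for my local setup, not the actual test config):

import socket, time

HOST, PORT, PATH = "127.0.0.1", 8081, "/index.html"

def probe(idle):
    # open a keepalive connection, send one request, idle, then try to reuse it
    s = socket.create_connection((HOST, PORT))
    req = ("GET %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (PATH, HOST)).encode()
    s.sendall(req)
    s.recv(65535)                   # drain the first response (sloppy, but enough here)
    time.sleep(idle)
    try:
        s.sendall(req)              # second request on the same connection
        s.settimeout(5)
        return bool(s.recv(65535))  # b"" means the backend sent FIN while we were idle
    except OSError:                 # RST / broken pipe also means "closed"
        return False
    finally:
        s.close()

for idle in (0.5, 1, 2, 5, 10, 25):
    print(idle, "reusable" if probe(idle) else "closed by backend")

If the connection already reads as closed well below KeepAliveTimeout, that would
point at the backend; if not, the proxy side is the more likely suspect.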

>> 
>> and then:
>> - with mpm_worker, the problem also disappears (no disable/ttl needed)
> 
> Hmm, something (new?) which does not respect KA timeout on MPM event...
> Can you confirm this (for instance with tcpdump or the like)?

I will debug more tomorrow. As usual, timing seems to play a role. Basically
there is a sequence of 3 requests in play which repeats with different content:
1. POST (no expect), small request body
2. POST (Expect: 100-continue), upload.py with a 1k/10k/100k file as the body
3. GET on files/data-1k etc.

The requests are done in this order, not in parallel, all on a new connection.

If a request fails, it is always request 3, and always with the proxy re-using a
dead connection. So, assuming it is the same proxy connection that gets re-used,
what may cause the connection to close after request 2?
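
To see that directly, I can replay requests 2 and 3 on a single connection against
the backend itself (bypassing the proxy). Rough sketch, assumptions again: backend
on 127.0.0.1:8081, the upload handler at /upload.py and the static file at
/files/data-1k (the body is just raw bytes, enough to watch the connection even if
upload.py rejects it):

import socket

HOST, PORT = "127.0.0.1", 8081
BODY = b"x" * 10240

s = socket.create_connection((HOST, PORT))
s.settimeout(5)                   # don't hang forever if the 100 never comes

# request 2: POST with Expect: 100-continue
head = ("POST /upload.py HTTP/1.1\r\nHost: %s\r\n"
        "Content-Length: %d\r\nExpect: 100-continue\r\n\r\n" % (HOST, len(BODY))).encode()
s.sendall(head)
print(s.recv(1024))               # ideally b"HTTP/1.1 100 Continue\r\n\r\n"
s.sendall(BODY)
print(s.recv(65535)[:120])        # final response to the POST; draining is sloppy here

# request 3: GET on the same, hopefully still open, connection
s.sendall(("GET /files/data-1k HTTP/1.1\r\nHost: %s\r\n\r\n" % HOST).encode())
data = s.recv(65535)
print("peer closed the connection" if not data else data[:120])
s.close()

If the POST response carries "Connection: close", or the GET recv() comes back
empty, the close after request 2 happens on the backend and the proxy is just the
victim; otherwise the reused proxy connection dies somewhere else.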

That it runs fine on mpm_worker *may* point to mpm_event. But with "LogLevel
core:trace8" the problem seems to disappear on event as well, while with
core:trace6 it still happens...

Strange thing.

Btw. Solving this is not urgent for me. I see this only in trunk.

Let's see if I gain more insight tomorrow on this.

Cheers,

Stefan

> 
>> 
>> These tests have been running since the dawn of h2 time and are still running
>> in 2.4.x. Since the problem also goes away on worker, this looks like a new
>> problem with mpm_event and connection closing (keepalive)?
> 
> I'm having a look at it, will come back if something pops up.
