Maybe this can work? It goes into a blocking read with the default timeout when
there is definitely nothing to send from our end.

Attachment: h2-proxy-timeout.patch
Description: Binary data
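
Roughly the idea, as a hand-wavy sketch (not the actual diff; the helper name,
parameters and the nghttp2_session_want_write() check are simplifications of my
own, not the real plumbing in h2_proxy_session):

/* Sketch: when nghttp2 has nothing queued for the backend, switch the
 * backend socket to a blocking read with the configured worker/ProxyTimeout
 * value instead of the short polling interval. */
#include <apr_time.h>
#include <apr_network_io.h>
#include <nghttp2/nghttp2.h>

static apr_status_t set_backend_read_timeout(nghttp2_session *ngh2,
                                             apr_socket_t *backend,
                                             apr_interval_time_t proxy_timeout,
                                             apr_interval_time_t poll_interval)
{
    apr_interval_time_t t;

    if (nghttp2_session_want_write(ngh2)) {
        /* Still frames to send (e.g. request body): keep the short
         * timeout so we return quickly and can look after the frontend. */
        t = poll_interval;
    }
    else {
        /* Definitely nothing to send from our end: just block on the
         * read for up to the configured timeout. */
        t = proxy_timeout;
    }
    return apr_socket_timeout_set(backend, t);
}

As long as there is something left to send we keep waking up quickly; once the
send side is drained, we can sit in the blocking read until the backend answers
or the timeout fires.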


> On 27.05.2020 at 15:36, Stefan Eissing <[email protected]> wrote:
> 
> Is this a new stupid feature where Apple Mail inserts a sig in front of
> quoted text? Sorry about that...
> 
>> On 27.05.2020 at 15:35, Stefan Eissing <[email protected]> wrote:
>> 
>> 
>> Stefan Eissing
>> 
>> <green/>bytes GmbH
>> Hafenweg 16
>> 48155 Münster
>> www.greenbytes.de
>> 
>>> On 27.05.2020 at 15:05, Ruediger Pluem <[email protected]> wrote:
>>> 
>>> 
>>> 
>>> On 5/27/20 1:10 PM, Stefan Eissing wrote:
>>>> The whole thing initially handled processing of several streams in 
>>>> parallel. That is why it has more states than currently necessary.
>>>> 
>>>> h2_proxy_session_process() returns from the H2_PROXYS_ST_WAIT state to the
>>>> caller in mod_proxy_http2.c#255. That one checks the aborted state of the
>>>> "master" connection. So, when our frontend connection goes away,
>>>> mod_proxy_http2 processing also aborts. (Which raises the question whether
>>>> c->aborted can ever be set on an HTTP/1.1 connection during this, uhmm.)
>>> 
>>> If you don't write to the frontend, I doubt that c->aborted will be set.
>>> 
>>>> 
>>>> But, as I reread it now, h2_proxy_session_process() will not time out
>>>> when the frontend connection stays and the backend just does not send
>>>> anything back at all. So, any "ProxyTimeout" seems to be ignored atm.
>>> 
>>> So I seem to read this correctly. This gets me to the next question: Is this
>>> the desired behavior? I would expect it to obey ProxyTimeout or the worker
>>> timeout settings. Any particular reason why we have that timeout starting at
>>> 25ms and increasing up to 100ms, instead of just using the current timeout
>>> set on the socket? I mean, even in case it processed several streams in
>>> parallel, it could block reading from the socket until ProxyTimeout and give
>>> up if nothing was delivered by then. Or does it need to wake up quicker in
>>> the multi-stream scenario, because a request for a new stream from the
>>> proxy_handler needs to be processed by the blocking thread?
>> 
>> I think respecting ProxyTimeout is the correct behaviour, now that we only
>> handle one request at a time. However, due to the nature of h2, we cannot
>> always do a blocking read with that timeout. For example, while a request
>> body still needs to be sent, we might be in the same processing loop and need
>> to check regularly for new data arriving on the frontend connection.
>> 
>> But we can track how long we have neither sent nor received and compare that
>> to ProxyTimeout. And that includes writing to and reading from the frontend
>> connection (or the slave connection when the frontend is h2 itself).
>> 
>>> 
>>> OTOH I understand that in the multiple-stream scenario, which we currently
>>> don't use, users might expect the ProxyTimeout to apply to each single
>>> stream, and hence doing a blocking socket read with ProxyTimeout could lead
>>> to streams waiting for much longer than ProxyTimeout if there are other
>>> active streams on the connection. I guess this would require a
>>> stream-specific last_frame_received that would need to be checked on a
>>> regular basis.
>>> 
>>> Having said this, any proposal on how to move on? Should we, for now, do a
>>> blocking read with ProxyTimeout and bail out if it times out?
>>> 
>>> Regards
>>> 
>>> Rüdiger
> 
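
For reference, the "neither sent nor received for longer than ProxyTimeout"
bookkeeping I mentioned above could look roughly like this (made-up names, not
code taken from mod_proxy_http2):

/* Sketch: remember the last time anything was sent or received on either
 * connection and give up once that exceeds the configured timeout. */
#include <apr_time.h>
#include <apr_errno.h>

typedef struct {
    apr_time_t last_progress;      /* last send/receive, front- or backend */
    apr_interval_time_t timeout;   /* ProxyTimeout / worker timeout */
} idle_tracker;

static void idle_mark_progress(idle_tracker *it)
{
    it->last_progress = apr_time_now();
}

static apr_status_t idle_check(idle_tracker *it)
{
    if (apr_time_now() - it->last_progress >= it->timeout) {
        return APR_TIMEUP;   /* nothing moved for too long, bail out */
    }
    return APR_SUCCESS;
}

In the multi-stream case this could then become per-stream state, along the
lines of the stream-specific last_frame_received idea above.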
