On 10/14/2018 04:20 PM, Mark Blackman wrote:
> 
> 
>> On 14 Oct 2018, at 12:33, Rainer Jung <rainer.j...@kippdata.de> wrote:
>>
>> On 13.10.2018 at 11:46, Rainer Jung wrote:
>>> On 11.10.2018 at 20:55, Ruediger Pluem wrote:
>>>>
>>>>
>>>> On 10/11/2018 08:10 PM, Christophe JAILLET wrote:
>>>>> No issue on my Ubuntu 18.04 VM.
>>>>>
>>>>> On what configuration are you running your tests, Rüdiger? macOS, just 
>>>>> like Jim?
>>>>
>>>> Centos 7.5 64 Bit
>>>>
>>>> Regards
>>>>
>>>> Rüdiger
>>> The test fails for me as well for 2.4.36 on SLES12. Small bodies are OK,
>>> large ones are not. The limit is somewhere between 1.3 and 1.5 MB, not
>>> always the same. The test hangs there until mod_reqtimeout times out after
>>> a minute, complaining that it could not read more data from the client.
>>> If I reduce the multiplier 1000000 to e.g. 200000, it always passes.
>>> If I start the test server using "t/TEST -start-httpd" and then use curl
>>> to POST data, I can even POST much bigger data and get the correct result
>>> back. I use
>>>   curl -v --data-binary @BIGFILE http://localhost:8529/apache/buffer_in/ > response-body
>>> So I assume it is a problem in the interaction between the server reading
>>> the POST body and the client sending it.
>>> My test framework was freshly assembled recently, so it uses lots of
>>> current modules.
>>> The setup is based on OpenSSL 1.1.1 in the server and in the test
>>> framework, but the actual test runs over http, so I don't expect any
>>> OpenSSL-related reason for the failure.
>>
>> I did some more tests, including using LWP directly, sniffing the packets
>> on the network, using mod_dumpio, and also running truss / strace.
>>
>> I can reproduce it even when sending with LWP directly or with the POST
>> binary that comes with LWP. I cannot reproduce it with curl.
>>
>> With mod_dumpio, in a network sniff, and under truss it looks like the
>> client simply stops sending once it has received the first response bytes.
>> LWP seems to select the socket FD for read and write. As long as only
>> write gets signalled, it happily sends data. Once it gets write plus read
>> signalled, it switches over to read and no longer checks for write. Since
>> our server side implementation is streaming and starts to send the
>> reflected bytes right away, this LWP behavior breaks the request.
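To illustrate the pattern Rainer describes, here is a rough Python sketch (not
LWP's actual code; host, port and body size are just placeholders for the test
setup) of a client that selects the socket for read and write and stops
sending as soon as the first response bytes arrive:

  # Sketch only: models the observed client behavior, not LWP internals.
  import select
  import socket

  HOST, PORT = "localhost", 8529          # placeholder test server address
  body = b"x" * (5 * 1000 * 1000)         # large POST body, above the failure threshold

  sock = socket.create_connection((HOST, PORT))
  sock.sendall((
      "POST /apache/buffer_in/ HTTP/1.1\r\n"
      "Host: localhost\r\n"
      "Content-Length: " + str(len(body)) + "\r\n"
      "Connection: close\r\n\r\n"
  ).encode("ascii"))

  sent = 0
  response = b""
  while sent < len(body):
      readable, writable, _ = select.select([sock], [sock], [])
      if readable:
          # First response bytes have arrived: switch over to reading and
          # stop checking for writability, leaving part of the body unsent.
          response += sock.recv(65536)
          break
      if writable:
          sent += sock.send(body[sent:sent + 65536])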
> 
> Hmm, it almost seems like that test/reflector module doesn't reflect the
> protocol definition though: https://tools.ietf.org/html/rfc7231#section-1
> 
> "A server listens on a connection for a request,
>    parses each message received, interprets the message semantics in
>    relation to the identified request target, and responds to that
>    request with one or more response messages”
> 
> I would interpret that "message received" as meaning the server is expected
> to wait until the entire request is received, aside from the case of
> "Expect: 100-continue", and even that alludes to waiting.
> 
> https://tools.ietf.org/html/rfc7231#section-6.2.1
> 
> "The server intends to send a final response after the request has been fully 
> received and acted upon."
> 
> What do you think?

Actually good points. mod_proxy_http, for example, first processes the whole
message received before it starts looking for an answer. In the light of the
above I guess that is a perfectly reasonable approach for a client. But even
if LWP acted this way (which, as I understand it, it currently does not), the
test could still fail: the output (server) / input (client) TCP buffers fill
up, then the input (server) / output (client) buffers fill up as well, the
connection stalls and finally runs into a timeout. I remember bug reports for
mod_proxy_http being used as a reverse proxy where clients expected to already
have a response without having sent the full request (i.e. with parts of the
request body still unsent). Given the above from the RFC, a client cannot
expect to read a response before it has completely written the request.
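
To make that stall concrete, here is a rough Python sketch of the "send the
whole request first" pattern (again only an illustration; host, port and body
size are placeholders, not the real test client):

  # Sketch only: a client that writes the complete request before reading.
  # Against a handler that reflects the body back while still reading it,
  # the server-to-client buffers can fill (the client is not reading yet),
  # the server then blocks in its write and stops reading, the
  # client-to-server buffers fill too, and sendall() below blocks until
  # something times out (mod_reqtimeout on the server side in this test).
  import socket

  HOST, PORT = "localhost", 8529          # placeholder test server address
  body = b"x" * (5 * 1000 * 1000)         # larger than the combined TCP buffers

  sock = socket.create_connection((HOST, PORT))
  sock.sendall((
      "POST /apache/buffer_in/ HTTP/1.1\r\n"
      "Host: localhost\r\n"
      "Content-Length: " + str(len(body)) + "\r\n"
      "Connection: close\r\n\r\n"
  ).encode("ascii"))
  sock.sendall(body)                      # may stall against a streaming reflector

  response = b""
  while True:
      chunk = sock.recv(65536)
      if not chunk:
          break
      response += chunk

Where exactly such a transfer stalls depends on the TCP socket buffer sizes on
both ends, so the threshold would not be a fixed number of bytes.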

Regards

Rüdiger
