> The next client request on that connection fails with a client timeout.

Seen in isolation like this, it's rather dubious behaviour on the part of 
the client: it shouldn't put the TCP connection back in the pool for reuse 
while the previous upload is still incomplete.

Unfortunately, it is Java's HttpURLConnection which exhibits this 
behaviour, so we probably ought to support it.
We're currently using that client (backed 
by sun.net.www.protocol.http.HttpURLConnection), as the Akka client isn't 
working for us yet: it doesn't have connection pooling (see 
https://github.com/akka/akka/issues/16856) and it has some other bugs (see 
https://github.com/akka/akka/issues/16865).

I'll create an isolated repro case and raise a ticket. Give me a day or two 
to pull the code out of our app framework.
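In the meantime, for reference, the shape of the client-side code involved is roughly the following (a minimal sketch against a local stub server using the JDK's built-in com.sun.net.httpserver; the endpoint path and sizes are illustrative, not from our app):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class UploadClientSketch {
    public static void main(String[] args) throws Exception {
        // Stub server: reads the full request body, then responds 200.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/upload", exchange -> {
            InputStream in = exchange.getRequestBody();
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* drain the upload */ }
            byte[] body = "ok".getBytes(StandardCharsets.US_ASCII);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();

        URL url = new URL("http://127.0.0.1:"
                + server.getAddress().getPort() + "/upload");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setChunkedStreamingMode(8192); // stream rather than buffer the upload
        try (OutputStream out = conn.getOutputStream()) {
            out.write(new byte[64 * 1024]); // stands in for a "large" upload
        }
        System.out.println(conn.getResponseCode());
        conn.disconnect();
        server.stop(0);
    }
}
```

When the server reads the whole body, as the stub above does, this works fine and the connection is pooled for reuse; the trouble described in this thread starts when the server responds before consuming the body.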



On Monday, February 16, 2015 at 3:58:36 PM UTC, Richard Bradley wrote:
>
> On Thursday, February 12, 2015 at 7:50:50 PM UTC, rkuhn wrote:
>>
>> Hi Richard,
>>
>> 11 feb 2015 kl. 11:30 skrev Richard Bradley <[email protected]>:
>>
>> We are now running up against this issue in practice, so we need a 
>> workaround.
>>
>> To recap: if a client makes a large upload request to an Akka HTTP 
>> server ("akka-stream-and-http-experimental-1.0-M2") and the server 
>> responds (i.e. returns an HttpResponse from the "handlerFlow" function) 
>> before reading the full request body, then the TCP stream stalls.
>> The client receives the response but thinks that the stream is still 
>> valid. It will then send the next HTTP request on the same TCP stream, but 
>> Akka will never read that request, as it's still waiting for the (now 
>> aborted) body stream of the responded-to request to finish.
>>
>> So, for our Akka HTTP server to be well behaved, we must always either 
>> read the full request body or half-close the TCP upload stream (or both).
>>
>>
>> As has been discussed earlier, the only reliable way to treat this case 
>> is to fully close the connection; I'm not sure that half-closing will 
>> improve the situation. You may or may not try to emit a response before 
>> closing, but as the standard does not mandate that the client read the 
>> response before the request body has been delivered, this cannot really 
>> be guaranteed to work.
>>
>
>
> I don't think that's true.
> The HTTP specs seem clear -- the server is allowed to reply early and 
> close the upload stream while the client is doing a large upload, and the 
> client should listen for such a close:
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.2.2
>
> "An HTTP/1.1 (or later) client sending a message-body SHOULD monitor the 
> network connection for an error status while it is transmitting the 
> request."
>
> In the case of a large upload where the server discovers half-way through 
> that the upload is invalid, the correct behaviour for the server is to:
>  1. Respond on the "down" stream with the error details (HTTP 400 Bad 
> Request, etc.)
>  2. Close the "up" stream (i.e. half-close the connection) so that the 
> client stops sending the second half of the large request.
>  (3. Don't close the "down" stream, as we want the client to read our 
> error message.)
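For the avoidance of doubt, steps 1-3 can be sketched at the plain-socket level like this (HTTP framing mostly elided; the port, sizes and sleep are illustrative). The client side shows the RFC 2616 §8.2.2 behaviour of monitoring the connection while transmitting:

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class EarlyResponseDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);

        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept()) {
                InputStream in = s.getInputStream();
                in.read(new byte[1024]);      // read only part of the "upload"
                OutputStream out = s.getOutputStream();
                // Step 1: respond on the "down" stream with the error details.
                out.write("HTTP/1.1 400 Bad Request\r\n\r\n"
                        .getBytes(StandardCharsets.US_ASCII));
                out.flush();
                // Step 2: stop reading the "up" stream; further client data
                // is acknowledged and silently discarded.
                s.shutdownInput();
                // Step 3: leave the "down" stream open so the client can
                // read the error before we close.
                Thread.sleep(200);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        serverThread.start();

        try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
            client.getOutputStream().write(new byte[512]); // first half of upload
            client.getOutputStream().flush();
            // A well-behaved client monitors the connection while uploading:
            BufferedReader r = new BufferedReader(new InputStreamReader(
                    client.getInputStream(), StandardCharsets.US_ASCII));
            System.out.println(r.readLine());
        }
        serverThread.join();
        server.close();
    }
}
```

(Note that from the server side, `shutdownInput()` doesn't signal anything on the wire; the client finds out by reading the early response, which is exactly why the RFC tells it to monitor the connection.)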
>
> As well as the specs, we can look to pre-existing servers. The Apache AXIS 
> SOAP server (Tomcat-based) does this: for example, if a very large XML 
> upload has a syntax error half-way through, Apache AXIS will follow steps 
> 1-3 above when it reaches that point in the stream.
>
> I think that a well-behaved server needs to always either read the full 
> request or half-close the TCP upload stream (or both).
> My questions from before still stand: 
>  1) how do I know, in the akka.http.server.ExceptionHandler interface (or 
> nearby), whether I need to half-close a part-read request body stream, or 
> if the request body has been read?
>  2) in the akka.http.server.ExceptionHandler interface (or nearby) how do 
> I half-close a part-read materialized request body stream, when I don't 
> have a reference to it? I only have a reference to the unmaterialized 
> Source.
>
>
>> Another way to tackle this problem would be to require the use of an 
>> `Expect: 100-continue` header, which would allow exactly the early response 
>> that you want to give.
>>
>
> That's not relevant here -- I'm talking about an error part-way through 
> the request body, not an error in the request headers.
>
>  
>
>> How can I "cancel" the stream? Given an instance of HttpRequest, the 
>> entity has “dataBytes”, but that’s not a materialized stream, but a Source. 
>> How do I get a reference to the materialized stream to cancel it? I tried 
>> "request.entity.dataBytes.runWith(Sink.cancelled)", but that seems wrong.
>>
>>
>> No, that is exactly right.
>>
>
> As I feared, this doesn't work.
> I have written an isolated test case which does the following:
>  1. The client makes a large upload
>  2. The routing code in the Akka server dispatches the request to a worker 
> layer
>  3. The worker layer reads half of the upload body, then returns an 
> exception to the router
>  4. The routing layer runs 
> "request.entity.dataBytes.runWith(Sink.cancelled)"
>   a. ... but this doesn't do anything, as the newly materialized stream 
> isn't actually connected to the TCP stream -- see 
> https://github.com/akka/akka/issues/15835
>  5. The server responds with an error code
>  6. Now the TCP upload stream is still neither fully read nor closed. The 
> client is blocked trying to upload the second half of the request. The Akka 
> server code thinks the stream is still in use somewhere, as it doesn't know 
> that the worker code has stopped reading the request.
>  7. That TCP connection is stalled, but the client doesn't know it. The 
> next client request on that connection fails with a client timeout.
>
> This test case is tied to some of my app framework code. I'll try to 
> minimise it into a standalone repro and post it here or raise an issue.
>
>
>
> Do you see what I mean?
>
> I think we need some support for this case from the Akka framework -- it 
> seems very difficult to deal with in userland code only.
>
> Thanks very much,
>
>
>
> Rich

-- 
>>>>>>>>>>      Read the docs: http://akka.io/docs/
>>>>>>>>>>      Check the FAQ: 
>>>>>>>>>> http://doc.akka.io/docs/akka/current/additional/faq.html
>>>>>>>>>>      Search the archives: https://groups.google.com/group/akka-user
--- 
You received this message because you are subscribed to the Google Groups "Akka 
User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.
