We just finished converting all of our REST endpoints from
Unfiltered to Akka HTTP. In doing so, we noticed a strange error when our
nginx (version 1.6) proxy sits in front of Akka HTTP. If the
response is somewhat large, say over 30K, nginx fails with the following error:
upstream prematurely closed connection while reading upstream
The only way we could find to fix this was to explicitly chunk responses
larger than a specified size. I used code like this to handle the chunking:
import akka.http.scaladsl.model.HttpEntity
import akka.http.scaladsl.server.Directive0
import akka.http.scaladsl.server.Directives.mapResponseEntity
import akka.stream.scaladsl.Source

// Re-chunk any strict entity larger than the threshold; pass
// everything else through untouched.
def chunkResponses(chunkThreshold: Int): Directive0 = mapResponseEntity {
  case HttpEntity.Strict(contentType, data) if data.size > chunkThreshold =>
    val chunks =
      Source.single(data)
        .mapConcat(_.grouped(chunkThreshold).toList) // split the ByteString
        .map(HttpEntity.Chunk(_))                    // wrap each piece as a chunk
    HttpEntity.Chunked(contentType, chunks)
  case other => other
}
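
For reference, here is roughly how I apply the directive above. This is a
minimal sketch: the route, port, and payload are placeholders, not our real
endpoints.

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer

object ChunkedServer extends App {
  implicit val system = ActorSystem("chunked-server")
  implicit val materializer = ActorMaterializer()

  // Re-chunk anything over ~30K before it reaches the nginx proxy.
  val route =
    chunkResponses(chunkThreshold = 30 * 1024) {
      path("big") {
        complete("x" * 100000) // a strict entity well over the threshold
      }
    }

  Http().bindAndHandle(route, "localhost", 8080)
}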
All of the responses we had been producing were HttpEntity.Strict, since we
always had all of the JSON we were going to respond with in memory already,
so I did not see any need to chunk. For illustration, our completions look
roughly like this (the JSON body is just a placeholder):
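
import akka.http.scaladsl.model.{ ContentTypes, HttpEntity }
import akka.http.scaladsl.server.Directives.complete

// complete() with an in-memory string yields an HttpEntity.Strict.
complete(HttpEntity(ContentTypes.`application/json`, """{"status":"ok"}"""))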
I want to better understand why this is happening. Has anyone else run into
this? I couldn't find much information out there on it. The closest thing I
found was someone else seeing this a while back with their own HTTP server
implementation:
https://github.com/ztellman/aleph/issues/169