Thanks all for the feedback, ideas and suggestions.
I tested two of the suggested changes separately: reading the request
completely before replying, and adding a "Connection: close" header (which
I was not sending). Both options seem to eliminate the 502 responses, so
I've decided to add the Connection: close header.
If you are going to reject the request in your own code with a 400, make
sure you set the `Connection: close` header on the response, as this will
trigger Jetty to close the connection on its side. This is something the
HTTP spec allows for (the server is allowed to close the connection at any
point).
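To make the mechanism concrete, here is a minimal sketch using a raw JDK socket rather than Jetty's API: the server reads only the request line and headers, replies 400 with `Connection: close`, and closes the socket without ever consuming the body. The class and method names are illustrative, not part of any Jetty API.

```java
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

public class EarlyRejectServer {

    // Reads only the request line and headers, then rejects with a 400 that
    // carries "Connection: close" and closes the socket. The request body
    // (if any) is deliberately left unread.
    static void handle(Socket socket) throws IOException {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), StandardCharsets.ISO_8859_1));
        String line;
        while ((line = in.readLine()) != null && !line.isEmpty()) {
            // consume headers only; stop at the blank line before the body
        }
        OutputStream out = socket.getOutputStream();
        out.write(("HTTP/1.1 400 Bad Request\r\n"
                + "Connection: close\r\n"
                + "Content-Length: 0\r\n"
                + "\r\n").getBytes(StandardCharsets.ISO_8859_1));
        out.flush();
        socket.close(); // server-side close, as the HTTP spec allows
    }

    // Starts the server on an ephemeral port, sends a POST whose body is
    // never transmitted, and returns the raw response the client saw.
    public static String roundTrip() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread t = new Thread(() -> {
                try (Socket s = server.accept()) {
                    handle(s);
                } catch (IOException ignored) {
                }
            });
            t.start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                OutputStream out = client.getOutputStream();
                out.write(("POST /upload HTTP/1.1\r\n"
                        + "Host: localhost\r\n"
                        + "Content-Length: 10\r\n"
                        + "\r\n").getBytes(StandardCharsets.ISO_8859_1));
                out.flush();
                ByteArrayOutputStream resp = new ByteArrayOutputStream();
                client.getInputStream().transferTo(resp);
                t.join();
                return resp.toString(StandardCharsets.ISO_8859_1);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

In a real deployment Jetty performs the close itself once it sees `Connection: close` on the response; the sketch only shows the wire-level behavior an intermediary like nginx observes.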
Note that if you can control the client, then you can use the Expect:
100-Continue header to avoid sending the body until the headers have been
checked... but that doesn't work if the client is just a browser.
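For a client you do control, the JDK's `java.net.http.HttpClient` can send `Expect: 100-continue` automatically via `Builder.expectContinue(true)`, so the body is only transmitted after the server signals it will accept it. A minimal sketch (the URL and payload are hypothetical placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class ExpectContinueClient {

    // Client configured to send "Expect: 100-continue" on requests with a
    // body, deferring the upload until the server's interim response.
    public static HttpClient client() {
        return HttpClient.newBuilder()
                .expectContinue(true)
                .build();
    }

    // Builds a POST carrying the given body; "url" is a placeholder for
    // whatever endpoint the service actually exposes.
    public static HttpRequest request(String url, byte[] body) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofByteArray(body))
                .build();
    }
}
```

Sending is then the usual `client().send(request(...), HttpResponse.BodyHandlers.ofString())`; if the server rejects the headers with a 400, the large body is never put on the wire.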
It does seem like nginx could do better if Jetty has already sent a 400
with Connection: close.
Did you get the same outcome with Apache as well?
On Wed, 31 Mar 2021, 08:00 Simone Bordet, wrote:
> Hi,
>
> On Wed, Mar 31, 2021 at 6:45 AM Daniel Gredler wrote:
> >
> > Hi,
> >
> > I'm playing around with a Jetty-based API service deployed to AWS
> > Elastic Beanstalk in a Docker container.
Hi,
I'm playing around with a Jetty-based API service deployed to AWS Elastic
Beanstalk in a Docker container. The setup is basically: EC2 load balancer
-> nginx reverse proxy -> Docker container running the Jetty service.
One of the API endpoints accepts large POST requests. As a safeguard, I