Re: [jetty-users] Fail-fast request handling best practices

2021-04-05 Thread Daniel Gredler
Thanks all for the feedback, ideas and suggestions.

I tested two of the suggested changes separately: reading the request
completely before replying, and adding a "Connection: close" header (which
I was not sending). Both options seem to eliminate the 502 responses, so
I've decided to add the header and otherwise keep the fail-fast behavior as
is (without reading the request completely).
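
For anyone who finds this thread later, the approach I ended up with looks
roughly like the sketch below (a minimal javax.servlet sketch; the class name,
limit and status handling are illustrative, not my exact code):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class UploadServlet extends HttpServlet {

        private static final long MAX_BODY_BYTES = 1024 * 1024; // 1 MB limit (illustrative)

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            long declared = request.getContentLengthLong(); // -1 if no Content-Length header
            if (declared > MAX_BODY_BYTES) {
                // The Connection: close header tells Jetty to close the connection
                // after the response, which in turn tells nginx to stop forwarding
                // the rest of the request body.
                response.setHeader("Connection", "close");
                response.setStatus(HttpServletResponse.SC_BAD_REQUEST);
                return; // fail fast: the body is never read
            }
            // ... normal request handling ...
        }
    }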

I did briefly look into replacing nginx with Apache to check the behavior
in that scenario. This should theoretically be possible, but the
corresponding configuration did not seem to be available in the AWS
console, and I didn't dig in far enough to figure out why.

Also, I don't control the client in this scenario, so the "Expect:
100-continue" header isn't really an option.

I did manage to stay away from SO_LINGER :-)

Thanks again!

Daniel


On Wed, Mar 31, 2021 at 3:38 PM Joakim Erdfelt  wrote:

> If you are going to reject the request in your own code with a 400, make
> sure you set the `Connection: close` header on the response, as this will
> trigger Jetty to close the connection on its side.
>
> This is something the HTTP spec allows for (the server may close the
> connection at any point).
> nginx should see this header and not send more data to Jetty (this is
> actually spelled out in the spec).
>
> Joakim Erdfelt / joa...@webtide.com
>
>
> On Tue, Mar 30, 2021 at 11:45 PM Daniel Gredler 
> wrote:
>
>> Hi,
>>
>> I'm playing around with a Jetty-based API service deployed to AWS Elastic
>> Beanstalk in a Docker container. The setup is basically: EC2 load balancer
>> -> nginx reverse proxy -> Docker container running the Jetty service.
>>
>> One of the API endpoints accepts large POST requests. As a safeguard, I
>> wanted to add a maximum request size (e.g. any request body larger than 1
>> MB is rejected). I thought I'd be clever and check the Content-Length
>> header, if present. If the header indicates that the body is too large, I'd
>> reject the request immediately (HTTP 400 error), without even wasting time
>> reading the request body. I can imagine similar fail-fast checks on the
>> security side, using the Authorization HTTP request header.
>>
>> This Content-Length check works correctly most of the time, but
>> occasionally nginx reports "writev() failed (32: Broken pipe) while sending
>> request to upstream" and sends a HTTP 502 error upstream to the load
>> balancer, which duly informs the client that there was a HTTP 502 Bad
>> Gateway error somewhere along the line.
>>
>> It appears that in these instances Jetty is closing the connection after
>> sending back the HTTP 400 error, nginx doesn't notice and continues to try
>> to send the request body content to Jetty, sees at that point that the
>> connection is closed, and reports a less-than-friendly HTTP 502 error to
>> the client.
>>
>> So I'm wondering... is this fail-fast Content-Length header check too
>> clever? Is it best practice to actually always read the full request body,
>> and only fail once the body has been fully read, even if we have enough
>> information to reject the request much earlier? Or would most people just
>> accept the occasional 502 error? I've seen some mentions of SO_LINGER /
>> setSoLingerTime and setAcceptQueueSize as possible workarounds, but
>> SO_LINGER especially always seems to be surrounded with "here be dragons"
>> warnings...
>>
>> What's the best practice here? Should I just accept that I need to read
>> these useless bytes?
>>
>> Take care,
>>
>> Daniel
>>
>> ___
>> jetty-users mailing list
>> jetty-users@eclipse.org
>> To unsubscribe from this list, visit
>> https://www.eclipse.org/mailman/listinfo/jetty-users
>>
> ___
> jetty-users mailing list
> jetty-users@eclipse.org
> To unsubscribe from this list, visit
> https://www.eclipse.org/mailman/listinfo/jetty-users
>
___
jetty-users mailing list
jetty-users@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/jetty-users


Re: [jetty-users] Fail-fast request handling best practices

2021-03-31 Thread Joakim Erdfelt
If you are going to reject the request in your own code with a 400, make
sure you set the `Connection: close` header on the response, as this will
trigger Jetty to close the connection on its side.

This is something the HTTP spec allows for (the server may close the
connection at any point).
nginx should see this header and not send more data to Jetty (this is
actually spelled out in the spec).
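
For illustration, the rejected exchange then looks roughly like this on the
wire (headers abbreviated):

    HTTP/1.1 400 Bad Request
    Connection: close
    ...

Because the Connection: close header is on the response itself, nginx learns
that the upstream connection is going away and should stop forwarding the
remainder of the request body.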

Joakim Erdfelt / joa...@webtide.com


On Tue, Mar 30, 2021 at 11:45 PM Daniel Gredler  wrote:

> Hi,
>
> I'm playing around with a Jetty-based API service deployed to AWS Elastic
> Beanstalk in a Docker container. The setup is basically: EC2 load balancer
> -> nginx reverse proxy -> Docker container running the Jetty service.
>
> One of the API endpoints accepts large POST requests. As a safeguard, I
> wanted to add a maximum request size (e.g. any request body larger than 1
> MB is rejected). I thought I'd be clever and check the Content-Length
> header, if present. If the header indicates that the body is too large, I'd
> reject the request immediately (HTTP 400 error), without even wasting time
> reading the request body. I can imagine similar fail-fast checks on the
> security side, using the Authorization HTTP request header.
>
> This Content-Length check works correctly most of the time, but
> occasionally nginx reports "writev() failed (32: Broken pipe) while sending
> request to upstream" and sends a HTTP 502 error upstream to the load
> balancer, which duly informs the client that there was a HTTP 502 Bad
> Gateway error somewhere along the line.
>
> It appears that in these instances Jetty is closing the connection after
> sending back the HTTP 400 error, nginx doesn't notice and continues to try
> to send the request body content to Jetty, sees at that point that the
> connection is closed, and reports a less-than-friendly HTTP 502 error to
> the client.
>
> So I'm wondering... is this fail-fast Content-Length header check too
> clever? Is it best practice to actually always read the full request body,
> and only fail once the body has been fully read, even if we have enough
> information to reject the request much earlier? Or would most people just
> accept the occasional 502 error? I've seen some mentions of SO_LINGER /
> setSoLingerTime and setAcceptQueueSize as possible workarounds, but
> SO_LINGER especially always seems to be surrounded with "here be dragons"
> warnings...
>
> What's the best practice here? Should I just accept that I need to read
> these useless bytes?
>
> Take care,
>
> Daniel
>
> ___
> jetty-users mailing list
> jetty-users@eclipse.org
> To unsubscribe from this list, visit
> https://www.eclipse.org/mailman/listinfo/jetty-users
>
___
jetty-users mailing list
jetty-users@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/jetty-users


Re: [jetty-users] Fail-fast request handling best practices

2021-03-31 Thread Greg Wilkins
Note that if you can control the client, then you can use the Expect:
100-Continue header to avoid sending the body until the headers are
checked... but that doesn't work if the client is just a browser.
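
For illustration, with Expect: 100-continue the client first sends only the
request headers (the values below are made up):

    POST /upload HTTP/1.1
    Host: api.example.com
    Content-Length: 5242880
    Expect: 100-continue

The server then either replies with an interim "HTTP/1.1 100 Continue", after
which the client sends the body, or rejects the request outright (e.g. "400
Bad Request" with "Connection: close") before any body bytes have been
transferred.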

It does seem like nginx could do better if Jetty has already sent a 400
with Connection: close and then closed the connection. It is wrong for
nginx to report a bad gateway, since closing the connection after the
response is a perfectly legal thing to do in HTTP.

On Wed, 31 Mar 2021 at 18:00, Simone Bordet  wrote:

> Hi,
>
> On Wed, Mar 31, 2021 at 6:45 AM Daniel Gredler 
> wrote:
> >
> > Hi,
> >
> > I'm playing around with a Jetty-based API service deployed to AWS
> Elastic Beanstalk in a Docker container. The setup is basically: EC2 load
> balancer -> nginx reverse proxy -> Docker container running the Jetty
> service.
> >
> > One of the API endpoints accepts large POST requests. As a safeguard, I
> wanted to add a maximum request size (e.g. any request body larger than 1
> MB is rejected). I thought I'd be clever and check the Content-Length
> header, if present. If the header indicates that the body is too large, I'd
> reject the request immediately (HTTP 400 error), without even wasting time
> reading the request body. I can imagine similar fail-fast checks on the
> security side, using the Authorization HTTP request header.
> >
> > This Content-Length check works correctly most of the time, but
> occasionally nginx reports "writev() failed (32: Broken pipe) while sending
> request to upstream" and sends a HTTP 502 error upstream to the load
> balancer, which duly informs the client that there was a HTTP 502 Bad
> Gateway error somewhere along the line.
> >
> > It appears that in these instances Jetty is closing the connection after
> sending back the HTTP 400 error, nginx doesn't notice and continues to try
> to send the request body content to Jetty, sees at that point that the
> connection is closed, and reports a less-than-friendly HTTP 502 error to
> the client.
> >
> > So I'm wondering... is this fail-fast Content-Length header check too
> clever? Is it best practice to actually always read the full request body,
> and only fail once the body has been fully read, even if we have enough
> information to reject the request much earlier? Or would most people just
> accept the occasional 502 error? I've seen some mentions of SO_LINGER /
> setSoLingerTime and setAcceptQueueSize as possible workarounds, but
> SO_LINGER especially always seems to be surrounded with "here be dragons"
> warnings...
> >
> > What's the best practice here? Should I just accept that I need to read
> these useless bytes?
>
> Don't use SO_LINGER.
> Your best option is to read all the bytes; would be best if you can do
> this asynchronously.
>
> The problem is that by the time you close the connection from Jetty,
> Nginx may not have received the whole content from the client.
> So Jetty closes, then Nginx receives some more content from the
> client, tries to write to Jetty, but finds the connection closed, and
> reports back the 502.
>
> Not reading the content will just cause the TCP connection to congest
> with the same results (502), so if you really want to send a clean 400
> you have to read the whole content.
>
> --
> Simone Bordet
> 
> http://cometd.org
> http://webtide.com
> Developer advice, training, services and support
> from the Jetty & CometD experts.
> ___
> jetty-users mailing list
> jetty-users@eclipse.org
> To unsubscribe from this list, visit
> https://www.eclipse.org/mailman/listinfo/jetty-users
>


-- 
Greg Wilkins  CTO http://webtide.com
___
jetty-users mailing list
jetty-users@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/jetty-users


Re: [jetty-users] Fail-fast request handling best practices

2021-03-31 Thread Som Lima
Did you get the same outcome with Apache as well?

On Wed, 31 Mar 2021, 08:00 Simone Bordet,  wrote:

> Hi,
>
> On Wed, Mar 31, 2021 at 6:45 AM Daniel Gredler 
> wrote:
> >
> > Hi,
> >
> > I'm playing around with a Jetty-based API service deployed to AWS
> Elastic Beanstalk in a Docker container. The setup is basically: EC2 load
> balancer -> nginx reverse proxy -> Docker container running the Jetty
> service.
> >
> > One of the API endpoints accepts large POST requests. As a safeguard, I
> wanted to add a maximum request size (e.g. any request body larger than 1
> MB is rejected). I thought I'd be clever and check the Content-Length
> header, if present. If the header indicates that the body is too large, I'd
> reject the request immediately (HTTP 400 error), without even wasting time
> reading the request body. I can imagine similar fail-fast checks on the
> security side, using the Authorization HTTP request header.
> >
> > This Content-Length check works correctly most of the time, but
> occasionally nginx reports "writev() failed (32: Broken pipe) while sending
> request to upstream" and sends a HTTP 502 error upstream to the load
> balancer, which duly informs the client that there was a HTTP 502 Bad
> Gateway error somewhere along the line.
> >
> > It appears that in these instances Jetty is closing the connection after
> sending back the HTTP 400 error, nginx doesn't notice and continues to try
> to send the request body content to Jetty, sees at that point that the
> connection is closed, and reports a less-than-friendly HTTP 502 error to
> the client.
> >
> > So I'm wondering... is this fail-fast Content-Length header check too
> clever? Is it best practice to actually always read the full request body,
> and only fail once the body has been fully read, even if we have enough
> information to reject the request much earlier? Or would most people just
> accept the occasional 502 error? I've seen some mentions of SO_LINGER /
> setSoLingerTime and setAcceptQueueSize as possible workarounds, but
> SO_LINGER especially always seems to be surrounded with "here be dragons"
> warnings...
> >
> > What's the best practice here? Should I just accept that I need to read
> these useless bytes?
>
> Don't use SO_LINGER.
> Your best option is to read all the bytes; would be best if you can do
> this asynchronously.
>
> The problem is that by the time you close the connection from Jetty,
> Nginx may not have received the whole content from the client.
> So Jetty closes, then Nginx receives some more content from the
> client, tries to write to Jetty, but finds the connection closed, and
> reports back the 502.
>
> Not reading the content will just cause the TCP connection to congest
> with the same results (502), so if you really want to send a clean 400
> you have to read the whole content.
>
> --
> Simone Bordet
> 
> http://cometd.org
> http://webtide.com
> Developer advice, training, services and support
> from the Jetty & CometD experts.
> ___
> jetty-users mailing list
> jetty-users@eclipse.org
> To unsubscribe from this list, visit
> https://www.eclipse.org/mailman/listinfo/jetty-users
>
___
jetty-users mailing list
jetty-users@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/jetty-users


Re: [jetty-users] Fail-fast request handling best practices

2021-03-31 Thread Simone Bordet
Hi,

On Wed, Mar 31, 2021 at 6:45 AM Daniel Gredler  wrote:
>
> Hi,
>
> I'm playing around with a Jetty-based API service deployed to AWS Elastic 
> Beanstalk in a Docker container. The setup is basically: EC2 load balancer -> 
> nginx reverse proxy -> Docker container running the Jetty service.
>
> One of the API endpoints accepts large POST requests. As a safeguard, I 
> wanted to add a maximum request size (e.g. any request body larger than 1 MB 
> is rejected). I thought I'd be clever and check the Content-Length header, if 
> present. If the header indicates that the body is too large, I'd reject the 
> request immediately (HTTP 400 error), without even wasting time reading the 
> request body. I can imagine similar fail-fast checks on the security side, 
> using the Authorization HTTP request header.
>
> This Content-Length check works correctly most of the time, but occasionally 
> nginx reports "writev() failed (32: Broken pipe) while sending request to 
> upstream" and sends a HTTP 502 error upstream to the load balancer, which 
> duly informs the client that there was a HTTP 502 Bad Gateway error somewhere 
> along the line.
>
> It appears that in these instances Jetty is closing the connection after 
> sending back the HTTP 400 error, nginx doesn't notice and continues to try to 
> send the request body content to Jetty, sees at that point that the 
> connection is closed, and reports a less-than-friendly HTTP 502 error to the 
> client.
>
> So I'm wondering... is this fail-fast Content-Length header check too clever? 
> Is it best practice to actually always read the full request body, and only 
> fail once the body has been fully read, even if we have enough information to 
> reject the request much earlier? Or would most people just accept the 
> occasional 502 error? I've seen some mentions of SO_LINGER / setSoLingerTime 
> and setAcceptQueueSize as possible workarounds, but SO_LINGER especially 
> always seems to be surrounded with "here be dragons" warnings...
>
> What's the best practice here? Should I just accept that I need to read these 
> useless bytes?

Don't use SO_LINGER.
Your best option is to read all the bytes; it would be best if you can do
this asynchronously.

The problem is that by the time you close the connection from Jetty,
Nginx may not have received the whole content from the client.
So Jetty closes, then Nginx receives some more content from the
client, tries to write to Jetty, but finds the connection closed, and
reports back the 502.

Not reading the content will just cause the TCP connection to congest
with the same results (502), so if you really want to send a clean 400
you have to read the whole content.
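
A minimal sketch of the asynchronous variant, using the Servlet 3.1
non-blocking read API (the class name, URL pattern, limit and error message
are illustrative; real code would want more careful error handling):

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.ReadListener;
    import javax.servlet.ServletInputStream;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/upload", asyncSupported = true)
    public class DrainingRejectServlet extends HttpServlet {

        private static final long MAX_BODY_BYTES = 1024 * 1024; // 1 MB limit (illustrative)

        @Override
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            if (request.getContentLengthLong() <= MAX_BODY_BYTES) {
                // ... normal request handling ...
                return;
            }
            // Too large: drain the body without blocking a thread, then send the 400.
            AsyncContext async = request.startAsync();
            ServletInputStream in = request.getInputStream();
            in.setReadListener(new ReadListener() {
                private final byte[] buffer = new byte[8192];

                @Override
                public void onDataAvailable() throws IOException {
                    // Read and discard whatever is currently available, without blocking.
                    while (in.isReady() && in.read(buffer) != -1) {
                        // discard
                    }
                }

                @Override
                public void onAllDataRead() throws IOException {
                    // The whole body has been consumed; the 400 can now be sent cleanly.
                    HttpServletResponse resp = (HttpServletResponse) async.getResponse();
                    resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Request body too large");
                    async.complete();
                }

                @Override
                public void onError(Throwable failure) {
                    async.complete();
                }
            });
        }
    }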

-- 
Simone Bordet

http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.
___
jetty-users mailing list
jetty-users@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/jetty-users


[jetty-users] Fail-fast request handling best practices

2021-03-30 Thread Daniel Gredler
Hi,

I'm playing around with a Jetty-based API service deployed to AWS Elastic
Beanstalk in a Docker container. The setup is basically: EC2 load balancer
-> nginx reverse proxy -> Docker container running the Jetty service.

One of the API endpoints accepts large POST requests. As a safeguard, I
wanted to add a maximum request size (e.g. any request body larger than 1
MB is rejected). I thought I'd be clever and check the Content-Length
header, if present. If the header indicates that the body is too large, I'd
reject the request immediately (HTTP 400 error), without even wasting time
reading the request body. I can imagine similar fail-fast checks on the
security side, using the Authorization HTTP request header.
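
The check itself is roughly this (a simplified sketch, not the exact code):

    long declared = request.getContentLengthLong(); // -1 if no Content-Length header
    if (declared > 1024 * 1024) { // 1 MB limit
        response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Request body too large");
        return; // the request body is never read
    }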

This Content-Length check works correctly most of the time, but
occasionally nginx reports "writev() failed (32: Broken pipe) while sending
request to upstream" and sends an HTTP 502 error back to the load balancer,
which duly informs the client that there was an HTTP 502 Bad Gateway error
somewhere along the line.

It appears that in these instances Jetty closes the connection after
sending back the HTTP 400 error; nginx doesn't notice, continues trying to
send the request body to Jetty, discovers at that point that the connection
is closed, and reports a less-than-friendly HTTP 502 error to the client.

So I'm wondering... is this fail-fast Content-Length header check too
clever? Is it best practice to actually always read the full request body,
and only fail once the body has been fully read, even if we have enough
information to reject the request much earlier? Or would most people just
accept the occasional 502 error? I've seen some mentions of SO_LINGER /
setSoLingerTime and setAcceptQueueSize as possible workarounds, but
SO_LINGER especially always seems to be surrounded with "here be dragons"
warnings...

What's the best practice here? Should I just accept that I need to read
these useless bytes?

Take care,

Daniel
___
jetty-users mailing list
jetty-users@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/jetty-users