Re: Static or dynamic content

2016-10-20 Thread Francis Daly
On Thu, Oct 20, 2016 at 09:19:29AM +, Jens Dueholm Christensen wrote:
> On Tuesday, October 18, 2016 08:28 AM Francis Daly wrote,

Hi there,

> > what output do you get when you use the test config in the earlier mail?
> 
> Alas I did not try that config yet, but I would assume that my tests would 
> show exactly the same as yours - should I try or is it purely academic?

Academic now, I think.

> If upstream returns 503 or 404 I would like to have the contents of the 
> error_page for 404 or 503 returned to the client regardless of the HTTP 
> request method used.

> > Possibly in your case you could convert the POST to a GET by using
> > proxy_method and proxy_pass within your error_page location.

> How would you suggest I could use proxy_method and proxy_pass within the 
> @error_503 location?
> I'm coming up short on how to do that without beginning to resend a POST 
> request as a GET request to upstream - a new request that could now 
> potentially succeed (since a HAProxy backend server could become available 
> between when the POST failed and when the request is retried as a GET)?

I was imagining doing a proxy_pass to nginx, not to upstream, for the
503 errors. So dealing with upstream is not a concern.
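
For the record, the shape I had in mind was roughly this -- an untested
sketch, where the port and the error-page uri are only placeholders:

  error_page 503 @error_503_get;
  location @error_503_get {
    # re-issue the failed request as a GET, back into this same nginx
    # (not to upstream), so that the static error page handler accepts it
    proxy_method GET;
    # proxy_pass in a named location cannot carry a URI part,
    # so set the URI with a rewrite first
    rewrite ^ /errors/503.html break;
    proxy_pass http://127.0.0.1:8080;
  }

The error-page location would have to be reachable by that looped-back
request (so not marked "internal"), and error_page without "=" should keep
the original 503 status rather than taking the code from the proxied
response.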

However, I've tried testing this now, and it all seems to work happily
if you omit the @named location for error_page.

I add "internal" below, to avoid having the url be directly
externally-accessible.

Does this do what you want it to do?

  server {
    listen 8080;
    error_page 503 /errors/503.html;
    location / {
      root html;
      try_files /offline.html @xact;
    }
    location @xact {
      # this server is "upstream"
      proxy_pass http://127.0.0.1:8082;
      proxy_intercept_errors on;
    }
    location = /errors/503.html {
      internal;
      # this serves $document_root/errors/503.html
    }
  }

For me, when "upstream" returns 503, I get back 503 with the content of
my /usr/local/nginx/html/errors/503.html, whether the initial request
was a GET or a POST.
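
For reference, the check with curl looks like the earlier examples (any
request uri will do, assuming offline.html does not exist, so everything
falls through to @xact):

  # GET -> 503, body of errors/503.html
  curl -i http://127.0.0.1:8080/x

  # POST -> also 503, body of errors/503.html, not 405
  curl -i -d k=v http://127.0.0.1:8080/x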

So perhaps the previous awkwardness was due to the @named location and
the various rewrites.

If that does do what you want, then you can add something similar for 404,
I guess.
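
Something like this, I expect -- the same pattern with the code changed
(untested sketch):

  error_page 404 /errors/404.html;
  location = /errors/404.html {
    internal;
    # serves $document_root/errors/404.html with a 404 status
  }

Since proxy_intercept_errors is already on in @xact, a 404 from upstream
should then be picked up in the same way as the 503.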

Cheers,

f
-- 
Francis Daly        fran...@daoine.org

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


RE: Static or dynamic content

2016-10-20 Thread Jens Dueholm Christensen
On Tuesday, October 18, 2016 08:28 AM Francis Daly wrote,

> So: a POST for /x will be handled in @xact, which will return 503,
> which will be handled in @error_503, which will be rewritten to a POST
> for /error503.html which will be sent to the file error/error503.html,
> which will return a 405.
>
> Is that what you see?

Yes - per your comments later in your reply about internal redirects and the 
debug log, I enabled the debug log, which confirms it (several lines have been 
removed from the following snippet, but it's pretty clear):

---
2016/10/20 10:23:45 [debug] 8408#2492: *1 http upstream request: "/2?"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http proxy status 503 "503 Service 
Unavailable"
2016/10/20 10:23:45 [debug] 8408#2492: *1 finalize http upstream request: 503
2016/10/20 10:23:45 [debug] 8408#2492: *1 http special response: 503, "/2?"
2016/10/20 10:23:45 [debug] 8408#2492: *1 test location: "@error_503"
2016/10/20 10:23:45 [debug] 8408#2492: *1 using location: @error_503 "/2?"
2016/10/20 10:23:45 [notice] 8408#2492: *1 "^(.*)$" matches "/2" while sending 
to client, client: 127.0.0.1, server: localhost, request: "POST /2 HTTP/1.1", 
upstream: "http://127.0.0.1:4431/2", host: "localhost"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http script copy: "/error503.html"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http script regex end
2016/10/20 10:23:45 [notice] 8408#2492: *1 rewritten data: "/error503.html", 
args: "" while sending to client, client: 127.0.0.1, server: localhost, 
request: "POST /2 HTTP/1.1", upstream: "http://127.0.0.1:4431/2", host: 
"localhost"
2016/10/20 10:23:45 [debug] 8408#2492: *1 http finalize request: 405, 
"/error503.html?" a:1, c:2
2016/10/20 10:23:45 [debug] 8408#2492: *1 http special response: 405, 
"/error503.html?"
2016/10/20 10:23:45 [debug] 8408#2492: *1 HTTP/1.1 405 Not Allowed
Server: nginx/1.8.0
Date: Thu, 20 Oct 2016 08:23:45 GMT
Content-Type: text/html
Content-Length: 172
Connection: keep-alive
---

> Two sets of questions remain:

> what output do you get when you use the test config in the earlier mail?

Alas I did not try that config yet, but I would assume that my tests would show 
exactly the same as yours - should I try or is it purely academic?

> what output do you want?

> That last one is probably "http 503 with the content of *this* file";
> and is probably completely obvious to you; but I don't think it has been
> explicitly stated here, and it is better to know than to try guessing.

100% correct.
If upstream returns 503 or 404 I would like to have the contents of the 
error_page for 404 or 503 returned to the client regardless of the HTTP request 
method used.

> If you remove the error_page 503 part or the proxy_intercept_errors part,
> does the expected http status code get to your client?

Yes!

> I think that the nginx handling of subrequests from a POST for error
> handling is a bit awkward here. But until someone who cares comes up with
> an elegant and consistent alternative, I expect that it will remain as-is.

Alas.. 

> Possibly in your case you could convert the POST to a GET by using
> proxy_method and proxy_pass within your error_page location.

> That also feels inelegant, but may give the output that you want.

Yes, similar "solutions" like this 
(http://leandroardissone.com/post/19690882654/nginx-405-not-allowed ) and 
others are IMO really ugly and it does make the configfile harder to understand 
and maintain over time.

The "best" (but still ugly!) version I could find is where I catch the 405 
error inside the @error_503 location (as described in the answer to this 
question 
http://stackoverflow.com/questions/16180947/return-503-for-post-request-in-nginx
 ), but I dislike the use of if and $request_filename in that solution - and it 
still doesn't make for easy understanding.


How would you suggest I could use proxy_method and proxy_pass within the 
@error_503 location?
I'm coming up short on how to do that without beginning to resend a POST 
request as a GET request to upstream - a new request that could now potentially 
succeed (since a HAProxy backend server could become available between when the 
POST failed and when the request is retried as a GET)?


Regards,
Jens Dueholm Christensen

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Static or dynamic content

2016-10-18 Thread Francis Daly
On Sat, Oct 15, 2016 at 12:18:11PM +, Jens Dueholm Christensen wrote:
> On Friday, September 30, 2016 12:55 AM Francis Daly wrote,

Hi there,

> > I suspect that when you show your error_page config and the relevant
> > locations, it may become clearer what you want to end up with.
> 
> My local test config looks like this (log specifications and other stuff left 
> out):

> location / {
>     root html;
>     try_files /offline.html @xact;
> }
> location @xact {
>     proxy_pass http://127.0.0.1:4431;
>     proxy_intercept_errors on;
> }
> error_page 503 @error_503;
> location @error_503 {
>     root error;
>     rewrite (logo.png)$ /$1 break;
>     rewrite ^(.*)$ /error503.html break;
> }

So: a POST for /x will be handled in @xact, which will return 503,
which will be handled in @error_503, which will be rewritten to a POST
for /error503.html which will be sent to the file error/error503.html,
which will return a 405.

Is that what you see?

Two sets of questions remain:

what output do you get when you use the test config in the earlier mail?

what output do you want?

That last one is probably "http 503 with the content of *this* file";
and is probably completely obvious to you; but I don't think it has been
explicitly stated here, and it is better to know than to try guessing.

> HAProxy returns this:
> 
>   HTTP/1.0 503 Service Unavailable
>   Cache-Control: no-cache
>   Connection: close
>   Content-Type: text/html
> 
>   <html><body><h1>503 Service Unavailable</h1>
>   No server is available to handle this request.
>   </body></html>

Ok, that's a normal 503.

> HAProxy also logs this (raw syslog packet):
> 
>   <134>Oct 15 13:17:33 jedc-local haproxy[10104]: 127.0.0.1:64746 
> [15/Oct/2016:13:17:33.800] xact_in-DK xact_admin/ 0/-1/-1/-1/0 503 212 
> - - SC-- 0/0/0/0/0 0/0 "POST /2 HTTP/1.0"
> 
> This makes nginx return this back to the browser:
> 
>   HTTP/1.1 405 Not Allowed
>   Server: nginx/1.8.0
>   Date: Sat, 15 Oct 2016 11:17:33 GMT
>   Content-Type: text/html
>   Content-Length: 172
>   Connection: keep-alive

And that's the 405 because your config sends the 503 to a static file.

> nginx also logs this:
> 
>   localhost 127.0.0.1 "-" [15/Oct/2016:13:17:33 +0200] "POST /2 HTTP/1.1" 405 
> 172 503 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 
> Firefox/47.0" http "-" "-" "-" "-" -/-

> There is no mention of the error_page 503 location or any of the resources 
> they specify (logo.png or error503.html) in any of nginx' logs, so I assume 
> that they are not really connected to the problems I see.
> 

Unless you are looking at the nginx debug log, you are not seeing anything
about nginx's internal subrequests.

If you remove the error_page 503 part or the proxy_intercept_errors part,
does the expected http status code get to your client?

> Any ideas?

I think that the nginx handling of subrequests from a POST for error
handling is a bit awkward here. But until someone who cares comes up with
an elegant and consistent alternative, I expect that it will remain as-is.

Possibly in your case you could convert the POST to a GET by using
proxy_method and proxy_pass within your error_page location.

That also feels inelegant, but may give the output that you want.

Cheers,

f
-- 
Francis Daly        fran...@daoine.org

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


RE: Static or dynamic content

2016-10-15 Thread Jens Dueholm Christensen
On Friday, September 30, 2016 12:55 AM Francis Daly wrote,



>> No, I have an "error_page 503" and a similar one for 404 that points to two
>> named locations, but that's it.
>
> That might matter.
>
> I can now get a 503, 404, or 405 result from nginx, when upstream sends a 503.

[...]

> Now make /tmp/x exist, and /tmp/y not exist.
>
> A GET request for /x is proxied, gets a 503, and returns the content of
> /tmp/x with a 503 status.
>
> A GET request for /y is proxied, gets a 503, and returns a 404 status.
>
> A POST request for /x is proxied, gets a 503, and returns a 405 status.
>
> A POST request for /y is proxied, gets a 503, and returns a 404 status.
>
> Since you also have an error_page for 404, perhaps that does something that
> leads to the output that you see.
>
> I suspect that when you show your error_page config and the relevant
> locations, it may become clearer what you want to end up with.

My local test config looks like this (log specifications and other stuff left out):

server {
    listen       80;
    server_name  localhost;

    location / {
        root html;
        try_files /offline.html @xact;
        add_header Cache-Control "no-cache, max-age=0, no-store, must-revalidate";
    }

    location @xact {
        proxy_pass http://127.0.0.1:4431;
        proxy_redirect default;
        proxy_read_timeout 2s;
        proxy_send_timeout 2s;
        proxy_connect_timeout 2s;
        proxy_intercept_errors on;
    }

    error_page 404 @error_404;
    error_page 503 @error_503;

    location @error_404 {
        root error;
        rewrite (logo.png)$ /$1 break;
        rewrite ^(.*)$ /error404.html break;
    }

    location @error_503 {
        root error;
        rewrite (logo.png)$ /$1 break;
        rewrite ^(.*)$ /error503.html break;
    }
}

> A test system which talks to a local HAProxy which has no "up" backends
> would probably be quicker to build.

Yes, that's what I had listening on 127.0.0.1:4431, and it did give me the same 
behaviour as I'm seeing in our production environment.

I got the following captures via pcap and wireshark:

Conditions are: HAProxy has a backend with no available servers, so every 
request results in a 503 to the upstream client (nginx).

A POST request to some resource from a browser:

  POST /2 HTTP/1.1
  Host: localhost
  User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0
  Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  Accept-Language: en
  Accept-Encoding: gzip, deflate
  DNT: 1
  Content-Type: application/x-www-form-urlencoded
  Content-Length: 0
  Cookie: new-feature=1; Language_In_Use=
  Connection: keep-alive

This makes nginx send this request to HAProxy:

  POST /2 HTTP/1.0
  Host: 127.0.0.1:4431
  Connection: close
  Content-Length: 0
  User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0
  Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  Accept-Language: en
  Accept-Encoding: gzip, deflate
  DNT: 1
  Content-Type: application/x-www-form-urlencoded
  Cookie: new-feature=1; Language_In_Use=

HAProxy returns this:

  HTTP/1.0 503 Service Unavailable
  Cache-Control: no-cache
  Connection: close
  Content-Type: text/html

  <html><body><h1>503 Service Unavailable</h1>
  No server is available to handle this request.
  </body></html>

HAProxy also logs this (raw syslog packet):

  <134>Oct 15 13:17:33 jedc-local haproxy[10104]: 127.0.0.1:64746 
[15/Oct/2016:13:17:33.800] xact_in-DK xact_admin/ 0/-1/-1/-1/0 503 212 - 
- SC-- 0/0/0/0/0 0/0 "POST /2 HTTP/1.0"

This makes nginx return this back to the browser:

  HTTP/1.1 405 Not Allowed
  Server: nginx/1.8.0
  Date: Sat, 15 Oct 2016 11:17:33 GMT
  Content-Type: text/html
  Content-Length: 172
  Connection: keep-alive

nginx also logs this:

  localhost 127.0.0.1 "-" [15/Oct/2016:13:17:33 +0200] "POST /2 HTTP/1.1" 405 
172 503 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:47.0) Gecko/20100101 
Firefox/47.0" http "-" "-" "-" "-" -/-

There is no mention of the error_page 503 location or any of the resources they 
specify (logo.png or error503.html) in any of nginx' logs, so I assume that 
they are not really connected to the problems I see.

Any ideas?

Regards,
Jens Dueholm Christensen
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Static or dynamic content

2016-09-30 Thread Francis Daly
On Fri, Sep 30, 2016 at 09:56:50AM +, Jens Dueholm Christensen wrote:
> On Friday, September 30, 2016 12:02 AM Francis Daly wrote,

Hi there,

> >> I've got an issue where nginx (see below for version/compile options)
> >> returns a 405 (not allowed) to POST requests to clients when the upstream
> >> proxy returns a 503.
> 
> > I don't see that, when I try to build a test system.
> 
> > Are you doing anything clever with error_page or anything like that?
> 
> No, I have an "error_page 503" and a similar one for 404 that points to two 
> named locations, but that's it.

That might matter.

I can now get a 503, 404, or 405 result from nginx, when upstream sends a 503.

Add

error_page 503 @err;
proxy_intercept_errors on;

within the server{} block, and add

location @err {
  root /tmp;
}

Now make /tmp/x exist, and /tmp/y not exist.

A GET request for /x is proxied, gets a 503, and returns the content of
/tmp/x with a 503 status.

A GET request for /y is proxied, gets a 503, and returns a 404 status.

A POST request for /x is proxied, gets a 503, and returns a 405 status.

A POST request for /y is proxied, gets a 503, and returns a 404 status.
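
For reference, against that config (the test server from the earlier mail,
listening on 8080), the four cases look like this with curl:

  curl -i http://127.0.0.1:8080/x          # GET  -> 503, body of /tmp/x
  curl -i http://127.0.0.1:8080/y          # GET  -> 404
  curl -i -d k=v http://127.0.0.1:8080/x   # POST -> 405
  curl -i -d k=v http://127.0.0.1:8080/y   # POST -> 404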

Since you also have an error_page for 404, perhaps that does something
that leads to the output that you see.

I suspect that when you show your error_page config and the relevant
locations, it may become clearer what you want to end up with.


> > If you can see the bytes on the wire between HAProxy and nginx in both
> > cases, the difference may be obvious.
> 
> Yes, but alas that involves tinkering with a production system and/or having 
> tcpdump running just at the right point in time when all backends are down.

A test system which talks to a local HAProxy which has no "up" backends
would probably be quicker to build.

Good luck with it,

f
-- 
Francis Daly        fran...@daoine.org

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


RE: Static or dynamic content

2016-09-30 Thread Jens Dueholm Christensen
On Friday, September 30, 2016 12:02 AM Francis Daly wrote,

>> I've got an issue where nginx (see below for version/compile options)
>> returns a 405 (not allowed) to POST requests to clients when the upstream
>> proxy returns a 503.

> I don't see that, when I try to build a test system.

> Are you doing anything clever with error_page or anything like that?

No, I have an "error_page 503" and a similar one for 404 that points to two 
named locations, but that's it.

> My config is exactly this:
[SNIP]

> Do you see the problem when you test with that?

> (The file /tmp/offline.html does not exist.)

Alas, I will have to test, but I am unable to do so today and I am out of the 
office for most of next week, so I'll have to get back to you on that.

>> When the request method is a POST nginx transforms this to a 405 "not
>> allowed", which - as far as I can understand from a number of posts
>> found googling - is to be expected when POSTing to a static resource.

> If you can build a repeatable test case -- either like the above, or
> perhaps configure a HAProxy instance that will return the 503 itself --
> then it should be possible to see what is happening.

Yes, that would be preferable, but I'd just like to have others comment on what 
I'm seeing before I go there.

>> That 503 response is sent back to HAProxy, which passes it along to
>> nginx, which returns the 503 to the client (and it's NOT converted into
>> a 405!), so I know that nginx doesn't translate all 503s to 405s if it's
>> generated by a backend instance even if the request is POSTed.

> If you can see the bytes on the wire between HAProxy and nginx in both
> cases, the difference may be obvious.

Yes, but alas that involves tinkering with a production system and/or having 
tcpdump running just at the right point in time when all backends are down.
Given the nature of this system and the fact that we receive anywhere from 5,000 to 
20,000+ hits per minute, it could be close to impossible - but if it comes down 
to that, I must find a way if I cannot reproduce it locally...

>> This leads me to think that somehow HAProxy does not provide the
>> correct headers in the 503 errorpage to make nginx assume the response
>> from HAProxy isn't dynamic.

> I suspect that that is incorrect -- 503 is 503 -- but a config that
> allows the error to be reproduced will be very helpful.

Oh yes - and given that both nginx and HAProxy are pretty darn good at handling 
HTTP requests, I know my idea is a bit far-fetched, but from what I'm seeing I 
cannot come up with another good reason for this behaviour apart from moonbeams, 
unusual sunspot activity localized to this single nginx instance in my 
datacenter, or leprechauns hunting gold - which, quite frankly, I have never 
before accepted (and never will accept) as possible explanations for anything... ;)

Regards,
Jens Dueholm Christensen

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Static or dynamic content

2016-09-29 Thread Francis Daly
On Wed, Sep 28, 2016 at 11:01:36AM +, Jens Dueholm Christensen wrote:

Hi there,

> I've got an issue where nginx (see below for version/compile options)
> returns a 405 (not allowed) to POST requests to clients when the upstream
> proxy returns a 503.

I don't see that, when I try to build a test system.

Are you doing anything clever with error_page or anything like that?

> My config is basically this

My config is exactly this:

===
http {
  server {
    listen 8080;
    location / {
      root /tmp;
      try_files /offline.html @xact;
    }
    location @xact {
      proxy_pass http://127.0.0.1:8082;
    }
  }

  server {
    listen 8082;
    return 503;
  }
}
===

Do you see the problem when you test with that?

(The file /tmp/offline.html does not exist.)

If not, what part of your full config can you add to the 8080 server
which causes the problem to appear?

  curl -i http://127.0.0.1:8080/x

does a GET.

  curl -i -d k=v http://127.0.0.1:8080/x

does a POST. (Use -v instead of -i for even more information.)

> When the request method is a POST nginx transforms this to a 405 "not
> allowed", which - as far as I can understand from a number of posts
> found googling - is to be expected when POSTing to a static resource.

If you can build a repeatable test case -- either like the above, or
perhaps configure a HAProxy instance that will return the 503 itself --
then it should be possible to see what is happening.
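
For the HAProxy variant, a minimal sketch (untested; the names, port and
timeouts are only placeholders -- the backend simply has no servers, so
HAProxy serves its own 503 page) could replace the "return 503" server on
8082 in the config above:

  defaults
      mode http
      timeout connect 2s
      timeout client  5s
      timeout server  5s

  frontend test_in
      bind 127.0.0.1:8082
      default_backend test_backend

  backend test_backend
      # no "server" lines here, so every request gets
      # HAProxy's own "503 Service Unavailable" page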

> An individual backend instance can also return a 503 error if it
> encounters a problem servicing a request.
>
> That 503 response is sent back to HAProxy, which passes it along to
> nginx, which returns the 503 to the client (and it's NOT converted into
> a 405!), so I know that nginx doesn't translate all 503s to 405s if it's
> generated by a backend instance even if the request is POSTed.

If you can see the bytes on the wire between HAProxy and nginx in both
cases, the difference may be obvious.

> This leads me to think that somehow HAProxy does not provide the
> correct headers in the 503 errorpage to make nginx assume the response
> from HAProxy isn't dynamic.

I suspect that that is incorrect -- 503 is 503 -- but a config that
allows the error to be reproduced will be very helpful.

Cheers,

f
-- 
Francis Daly        fran...@daoine.org

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx