Re: limit_req per subnet?
>>>> I'm looking for something that can
>>>> be implemented independently of the backend, but that doesn't seem to
>>>> exist in nginx.
>>>
>>> http://nginx.org/r/limit_req_zone
>>>
>>> You can define the "key" any way that you want.
>>>
>>> Perhaps you can create something using "geo". Perhaps you want "the first
>>> three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
>>> address, rounded down to a multiple of 8". Perhaps you want something
>>> else.
>>
>> So I'm sure I understand, none of the functionality described above
>> exists currently?
>
> A variable with exactly the value that you want it to have probably
> does not exist currently in the stock nginx code.
>
> The code that allows you to create a variable with exactly the value
> that you want it to have probably does exist in the stock nginx code.
>
> You can use "geo", "map", "set", or (probably) any of the extension
> languages to give the variable the value that you want it to have.
>
> For example:
>
> map $binary_remote_addr $bin_slash16 {
>     "~^(?P<a>..)..$" "$a";
> }
>
> will probably come close to making $bin_slash16 hold a binary
> representation of the first two octets of the connecting ip address.
>
> (You'll want to confirm whether "dot" matches "any byte" in your regex
> engine; or whether you can make it match "any byte" (specifically
> including the byte that normally represents newline); before you trust
> that fully, of course.)

That sounds like a good solution. Will using map along with a regex
slow the server down much?

- Grant

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
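[Editor's note: a sketch of how a map like the one discussed above could feed limit_req_zone. The zone name "per16", the rates, and the sizes are placeholder values, and the map should be verified against your regex engine's byte-matching behavior before relying on it, per the caution in the thread.]

```nginx
# Group clients by the first two octets (a /16) of the IPv4 address.
# $binary_remote_addr is 4 bytes for IPv4; capture the first 2.
map $binary_remote_addr $bin_slash16 {
    "~^(?P<a>..)..$" "$a";
}

# One shared token bucket per /16; rate and zone size are placeholders.
limit_req_zone $bin_slash16 zone=per16:10m rate=10r/s;

server {
    location / {
        limit_req zone=per16 burst=20 nodelay;
    }
}
```

On the performance question: a single short anchored regex evaluated once per request is generally negligible next to the cost of proxying the request itself.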
Re: limit_req per subnet?
>>> I'm looking for something that can
>>> be implemented independently of the backend, but that doesn't seem to
>>> exist in nginx.
>>
>> http://nginx.org/r/limit_req_zone
>>
>> You can define the "key" any way that you want.
>>
>> Perhaps you can create something using "geo". Perhaps you want "the first
>> three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
>> address, rounded down to a multiple of 8". Perhaps you want something
>> else.

So I'm sure I understand, none of the functionality described above
exists currently? Or can it be configured without hacking the nginx core?

- Grant

>> The exact thing that you want, probably does not exist.
>>
>> The tools that are needed to create it, probably do exist.
>>
>> All that seems to be missing is the incentive for someone to actually
>> do the work to build a thing that you would like to exist.
Re: limit_req per subnet?
>> I'm looking for something that can
>> be implemented independently of the backend, but that doesn't seem to
>> exist in nginx.
>
> http://nginx.org/r/limit_req_zone
>
> You can define the "key" any way that you want.
>
> Perhaps you can create something using "geo". Perhaps you want "the first
> three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
> address, rounded down to a multiple of 8". Perhaps you want something
> else.

So I'm sure I understand, none of the functionality described above
exists currently?

- Grant

> The exact thing that you want, probably does not exist.
>
> The tools that are needed to create it, probably do exist.
>
> All that seems to be missing is the incentive for someone to actually
> do the work to build a thing that you would like to exist.
Re: limit_req per subnet?
> That is why you cache the request. DoS, or in your case DDoS since
> multiple IPs are involved. Caching backend responses and having Nginx
> serve a cached response, even if that cached response is valid for only
> 1 second, will save your day.

That would be a big project because it would mean rewriting some of the
functionality of my backend. I'm looking for something that can be
implemented independently of the backend, but that doesn't seem to
exist in nginx.

- Grant
Re: limit_req per subnet?
> proxy_cache / fastcgi_cache of the pages' output will help. Flood all
> you want: Nginx handles flooding and lots of connections fine. Your
> back end is your weakness / bottleneck that is allowing them to be
> successful in affecting your service.

Definitely. My backend is of course the bottleneck, so I'd like nginx
to refrain from passing a request on to the backend if it is deemed to
be part of a group of requests that should be rate limited. But there
doesn't seem to be a good way to do that if the group should contain
more than one IP. I think any method that groups requests by UA will
require too much human monitoring.

- Grant
Re: limit_req per subnet?
>> I rate limit them using the user-agent
>
> Maybe this is the best solution, although of course it doesn't rate
> limit real attackers. Is there a good method for monitoring which UAs
> request pages above a certain rate so I can write a limit for them?

Actually, is there a way to limit rate by UA on the fly? If so, can I
do that and somehow avoid limiting multiple legitimate browsers with
the same UA?

- Grant
Re: limit_req per subnet?
> I rate limit them using the user-agent

Maybe this is the best solution, although of course it doesn't rate
limit real attackers. Is there a good method for monitoring which UAs
request pages above a certain rate so I can write a limit for them?

- Grant
Re: limit_req per subnet?
> I'm no fail2ban guru. Trust me. I'd suggest going on serverfault. But
> my other post indicates semrush resides on AWS, so just block AWS. I
> doubt there is any harm in blocking AWS since no major search engine
> uses them.
>
> Regarding search engines, the reality is only Google matters. Just look
> at your logs. That said, I allow Google, Yahoo, and Bing. But
> Yahoo/Bing isn't even 5% of Google traffic. Everything else I block.
> Majestic (MJ12) is just ridiculous. I allow the anti-virus companies to
> poke around, though I can't figure out what exactly their probes
> accomplish. Often Intel/McAfee just pings the server, perhaps to survey
> hosting software and revision. Good advertising for nginx!

I would really prefer not to block cloud services. It sounds like an
admin headache down the road. nginx limit_req works great for a single
IP attacker, but all it takes is 3 IPs for an attacker to triple his
allowable rate, even sequential IPs. I'm surprised there's no way to
combat this.

- Grant

>> Did you see if the IPs were from an ISP? If not, I'd ban the service
>> using the Hurricane Electric BGP as a guide. At a minimum, you should
>> be blocking the major cloud services, especially OVH. They offer free
>> trial accounts, so of course the hackers abuse them.
>
> What sort of sites run into problems after doing that? I'm sure some
> sites need to allow cloud services to access them. A startup search
> engine could be run from such a service.
>
>> If the attack was from an ISP, I can visualize a fail2ban scheme
>> blocking the last quad not being too hard to implement. That is, block
>> xxx.xxx.xxx.0/24. Or maybe just let a typical fail2ban setup do your
>> limiting and don't get fancy about the IP range.
>>
>> I try "traffic management" at the firewall first. As I discovered with
>> "deny" in nginx, much CPU work is still done prior to ignoring the
>> request. (I don't recall the details exactly, but there is a thread I
>> started on the topic in this list.) Better to block via the firewall
>> since you will be running one anyway.
>
> It sounds like limit_req in nginx does not have any way to do this.
> How would you accomplish this in fail2ban?
>
>> I recently suffered DoS from a series of 10 sequential IP addresses.
>> limit_req would have dealt with the problem if a single IP address had
>> been used. Can it be made to work in a situation like this where a
>> series of sequential IP addresses are in play? Maybe per subnet?
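[Editor's note: one way to rate-limit a known bad /24 inside nginx, without touching the firewall, is a "geo" block that emits a non-empty key only for the offending range. The range 192.0.2.0/24, the zone name, and the rate below are placeholders; requests whose key is empty are not accounted by limit_req_zone, so everyone else is exempt.]

```nginx
# Emit a key only for the offending /24; all other clients map to ""
# and are not counted against the zone. 192.0.2.0/24 is a placeholder.
geo $abuser {
    default      "";
    192.0.2.0/24 "abuse";
}

# All IPs in the range share one bucket; rate is a placeholder.
limit_req_zone $abuser zone=abusers:1m rate=1r/s;

server {
    location / {
        limit_req zone=abusers burst=5;
    }
}
```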
Re: limit_req per subnet?
> I am curious what the request URI was that they were hitting. Was it a
> dynamic page or file, or a static one?

It was semrush and it was all manner of dynamic pages.

- Grant
limit_req per subnet?
I recently suffered DoS from a series of 10 sequential IP addresses. limit_req would have dealt with the problem if a single IP address had been used. Can it be made to work in a situation like this where a series of sequential IP addresses are in play? Maybe per subnet? - Grant ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: keepalive upstream
> I've been struggling with a very difficult to diagnose problem when
> using apache2 and Odoo in a reverse proxy configuration with nginx.
> Enabling keepalive for upstream in nginx seems to have fixed it. Why
> is it not enabled upstream by default as it is downstream?

Does anyone know why this isn't a default?

- Grant
Re: location query string?
>>> I'm not quite sure what the specific problem you are seeing is, from
>>> the previous mails.
>>>
>>> Might the problem be related to your upstream not cleanly
>>> closing the connections?
>>
>> It sure could be. Would this be a good way to monitor that possibility:
>>
>> netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n
>
> That could indicate when the number of tcp connections in various states
> changes; it may be a good starting point to find what the cause of the
> problem is.
>
> nginx makes a http request of upstream; it expects a http response. If
> the tcp connection and the http connection is staying open longer than
> necessary, that suggests that either the client (nginx) or the server
> (upstream) is doing something wrong.
>
> Can you make the same request manually of upstream, and see if there is
> any indication of things not being as they should?
>
> Is there any difference between a http/1.0 and a http/1.1 request to
> upstream? Or if the response includes a Content-Length header or is
> chunked? Or any other things that are different between the "working"
> and "not-working" cases.
>
> Your later mail suggests that "Keepalive" is involved somehow. If you
> are still keen to investigate -- can you see that nginx does something
> wrong when Keepalive is or is not set? Or does upstream do something
> wrong when Keepalive is or is not set? (If there is an nginx problem,
> I suspect that people will be interested in fixing it. If there is an
> upstream problem, then possibly people there will be interested in
> fixing it, or possibly a workaround can be provided on the nginx side.)

Admittedly this is over my head. I would be happy to test and probe if
anyone is interested enough to tell me what to do.

- Grant
keepalive upstream
I've been struggling with a very difficult to diagnose problem when using apache2 and Odoo in a reverse proxy configuration with nginx. Enabling keepalive for upstream in nginx seems to have fixed it. Why is it not enabled upstream by default as it is downstream? - Grant ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: proxy_set_header Connection "";
> By default the Connection header is passed to the origin. If a client
> sends a request with "Connection: close", Nginx would send this to the
> upstream, effectively disabling keepalive. By clearing this header,
> Nginx will not send it on to the upstream source, leaving it to send
> its own Connection header as appropriate.

That makes perfect sense. Is there a way to test if keepalive is active
between nginx and the upstream server?

- Grant

>> Does anyone know why this is required for upstream keepalive?
>>
>> - Grant
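[Editor's note: a minimal sketch of the upstream keepalive configuration under discussion, with the upstream name and port being illustrative. Per the nginx docs, upstream keepalive needs HTTP/1.1 toward the backend and the Connection header cleared, which is the directive this thread is about.]

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;   # idle connections cached per worker process
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend;
    }
}
```

One way to check that it is active: issue a few requests in a row, then look at the proxy host's connections to the backend port, e.g. with `ss -tn state established '( dport = :8080 )'`. With keepalive working, the same local port is reused across requests instead of a new connection appearing for each one.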
proxy_set_header Connection "";
Does anyone know why this is required for upstream keepalive? - Grant ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: location query string?
> I'm not quite sure what the specific problem you are seeing is, from
> the previous mails.
>
> Do you have a reproducible test case where you can see the problem?
> (And therefore, see if any suggested fix makes a difference?)
>
> http://nginx.org/r/proxy_read_timeout should (I think) matter between
> successive reads, and should not matter when the upstream cleanly
> closes the connection. Might the problem be related to your upstream
> not cleanly closing the connections?

It sure could be. Would this be a good way to monitor that possibility:

netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n

I could watch for the TIME_WAIT row getting too large.

- Grant
location query string?
Can I define a location block based on the value of a query string so I can set a longer timeout for certain form submissions even though all of my form submissions POST to the same URL? - Grant ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: nginx reverse proxy causing TCP queuing spikes
>> I do think this is related to 'proxy_read_timeout 60m;' leaving too
>> many connections open. Can I somehow allow pages to load for up to
>> 60m but not bog my server down with too many connections?
>
> Pardon me, but why on earth do you have an environment in which an HTTP
> request can take an hour? That seems like a serious abuse of the
> protocol.
>
> Keeping an HTTP request open means keeping the associated TCP
> connection open as well. If you have connections open for an hour,
> you're probably going to run into concurrency issues.

I don't actually need 60m, but I do need up to about 20m for some
backend administrative processes. What is the right way to solve this
problem? I don't think I can speed up the processes.

- Grant
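[Editor's note: one common pattern for this situation is to scope the long read timeout to just the admin endpoints and keep a short default everywhere else. The paths and timeout values below are illustrative, not from the thread.]

```nginx
server {
    # Short default timeout for normal traffic.
    location / {
        proxy_read_timeout 60s;
        proxy_pass http://127.0.0.1:8080;
    }

    # Only the long-running admin endpoints get the long timeout;
    # "/admin/" and "20m" are illustrative values.
    location /admin/ {
        proxy_read_timeout 20m;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The more robust long-term fix is to make the backend run such jobs asynchronously and have the page poll for completion, so that no single HTTP request has to stay open for 20 minutes at all.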
Re: nginx reverse proxy causing TCP queuing spikes
>> I've been struggling with http response time slowdowns and
>> corresponding spikes in my TCP Queuing graph in munin. I'm using
>> nginx as a reverse proxy to apache which then hands off to my backend,
>> and I think the proxy_read_timeout line in my nginx config is at least
>> contributing to the issue. Here is all of my proxy config:
>>
>> proxy_read_timeout 60m;
>> proxy_pass http://127.0.0.1:8080;
>>
>> I think this means I'm leaving connections open for 60 minutes after
>> the last server response which sounds like a bad thing. However, some
>> of my admin pages need to run for a long time while they wait for the
>> server-side stuff to execute. I only use the proxy_read_timeout
>> directive on my admin locations and I'm experiencing the TCP spikes
>> and http slowdowns during the exact hours that the admin stuff is in
>> use.
>
> It turns out this issue was due to Odoo which also runs behind nginx
> in a reverse proxy configuration on my machine. Has anyone else had
> that kind of trouble with Odoo?

I do think this is related to 'proxy_read_timeout 60m;' leaving too
many connections open. Can I somehow allow pages to load for up to
60m but not bog my server down with too many connections?

- Grant
Speed up initial connection
Is there anything I can do to speed up the initial connection? It seems like the first page of my site I hit is consistently slower to respond than all subsequent requests. This is the case even when my backend session is still valid and unexpired for that initial request. Is 'multi_accept on;' a good idea? - Grant ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: limit-req and greedy UAs
> limit_req works with multiple connections, it is usually configured
> per IP using $binary_remote_addr. See
> http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone
> - you can use variables to set the key to whatever you like.
>
> limit_req generally helps protect eg your backend against request
> floods from a single IP and any amount of connections. limit_conn
> protects against excessive connections tying up resources on the
> webserver itself.

I'm suspicious that Odoo, which runs behind nginx in a reverse proxy
config, could be creating too many connections or something similar and
bogging things down for my main site, which runs in apache2 behind
nginx as well. Is there a good way to find out? Stopping the Odoo
daemon certainly kills the problem instantly.

- Grant
Re: nginx reverse proxy causing TCP queuing spikes
> I've been struggling with http response time slowdowns and
> corresponding spikes in my TCP Queuing graph in munin. I'm using
> nginx as a reverse proxy to apache which then hands off to my backend,
> and I think the proxy_read_timeout line in my nginx config is at least
> contributing to the issue. Here is all of my proxy config:
>
> proxy_read_timeout 60m;
> proxy_pass http://127.0.0.1:8080;
>
> I think this means I'm leaving connections open for 60 minutes after
> the last server response which sounds like a bad thing. However, some
> of my admin pages need to run for a long time while they wait for the
> server-side stuff to execute. I only use the proxy_read_timeout
> directive on my admin locations and I'm experiencing the TCP spikes
> and http slowdowns during the exact hours that the admin stuff is in
> use.

It turns out this issue was due to Odoo which also runs behind nginx
in a reverse proxy configuration on my machine. Has anyone else had
that kind of trouble with Odoo?

- Grant
nginx reverse proxy causing TCP queuing spikes
I've been struggling with http response time slowdowns and
corresponding spikes in my TCP Queuing graph in munin. I'm using nginx
as a reverse proxy to apache which then hands off to my backend, and I
think the proxy_read_timeout line in my nginx config is at least
contributing to the issue. Here is all of my proxy config:

proxy_read_timeout 60m;
proxy_pass http://127.0.0.1:8080;

I think this means I'm leaving connections open for 60 minutes after
the last server response which sounds like a bad thing. However, some
of my admin pages need to run for a long time while they wait for the
server-side stuff to execute. I only use the proxy_read_timeout
directive on my admin locations and I'm experiencing the TCP spikes and
http slowdowns during the exact hours that the admin stuff is in use.

- Grant
Re: limit-req and greedy UAs
> limit_req works with multiple connections, it is usually configured
> per IP using $binary_remote_addr. See
> http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone
> - you can use variables to set the key to whatever you like.
>
> limit_req generally helps protect eg your backend against request
> floods from a single IP and any amount of connections. limit_conn
> protects against excessive connections tying up resources on the
> webserver itself.

Perfectly understood. Thank you Richard.

- Grant
Re: limit-req and greedy UAs
> Re-reading the original post, it was concluded that multiple
> connections don't affect the rate limiting. I interpreted this
> incorrectly the first time:
>
> "Nginx's limit_rate function limits the data transfer rate of a single
> connection."
>
> But I'm certain a few posts, perhaps not on the nginx forum, state
> incorrectly that the limiting is per individual connection rather than
> all the connections in total.

Nice job. Very good to know.

- Grant
Re: Don't process requests containing folders
>> location ~ (^/[^/]*|.html)$ {}
>
> Yes, that should do what you describe.

I realize now that I didn't define the requirement properly. I said:

"match requests with a single / or ending in .html"

but what I need is:

"match requests with a single / *and* ending in .html, also match /".

Will this do it:

location ~ ^(/[^/]*\.html|/)$ {}

> Note that the . is a metacharacter for "any one"; if you really want
> the five-character string ".html" at the end of the request, you should
> escape the . to \.

Fixed. Do I ever need to escape / in location blocks?

>> And let everything else match the following, most of which will 404
>> (cheaply):
>>
>> location / { internal; }
>
> Testing and measuring might show that "return 404;" is even cheaper
> than "internal;" in the cases where they have the same output. But if
> there are cases where the difference in output matters, or if the
> difference is not measurable, then leaving it as-is is fine.

I'm sure you're right. I'll switch to:

location / { return 404; }

- Grant
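[Editor's note: putting the two pieces from this thread together, the tail of the config could look like the sketch below. The include file name is illustrative. Note that regex locations are evaluated before the plain "location /" prefix, which is what makes the cheap 404 catch-all work.]

```nginx
# Send requests that are exactly "/" or a single path component ending
# in ".html" (no second slash) to the backend.
location ~ ^(/[^/]*\.html|/)$ {
    include backend.conf;   # illustrative include holding the proxy settings
}

# Everything else (any request with a folder in it) 404s cheaply in nginx.
location / {
    return 404;
}
```

On the side question in the post: "/" has no special meaning in regexes, so it never needs escaping in location patterns.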
Re: Don't process requests containing folders
>> My site doesn't have any folders in its URL structure so I'd like to
>> have nginx process any request which includes a folder (cheap 404)
>> instead of sending the request to my backend (expensive 404).
>>
>> Currently I'm using a series of location blocks to check for a valid
>> request. Here's the last one before nginx internal takes over:
>>
>> location ~ (^/|.html)$ {
>> }
>
> I think that says "is exactly /, or ends in html".

Yes that is my intention.

> I'm actually not sure whether this is intended to be the "good"
> request, or the "bad" request. If it is the "bad" one, then "return
> 404;" can easily be copied in to each. If it is the "good" one, with a
> complicated config, then you may need to have many duplicate lines in
> the two locations; or just "include" a file with the "good"
> configuration.

That's the good request. I do need it in multiple locations but an
include is working well for that.

>> Can I expand that to only match requests with a single / or ending in
>> .html like this:
>>
>> location ~ (^[^/]+/?[^/]+$|.html$) {
>
> Since every real request starts with a /, I think that that pattern
> effectively says "ends in html", which matches fewer requests than the
> earlier one.

That is not what I intended.

> If you want to match "requests with a second slash", do just that:
>
> location ~ ^/.*/ {}
>
> (the "^" is not necessary there, but I guess-without-testing that
> it helps.)

When you say it helps, you mean for performance?

> If you want to match "requests without a second slash", you could do
>
> location ~ ^/[^/]*$ {}
>
> but I suspect you'll be better off with the positive match, plus a
> "location /" for "all the rest".

I want to keep my location blocks to a minimum, so I think I should use
the following as my last location block, which will send all remaining
good requests to my backend:

location ~ (^/[^/]*|.html)$ {}

And let everything else match the following, most of which will 404
(cheaply):

location / { internal; }

- Grant
Re: limit-req and greedy UAs
> https://www.nginx.com/blog/tuning-nginx/
>
> I have far more faith in this write-up regarding tuning than the
> anti-ddos one, though both have similarities.
>
> My interpretation is the user bandwidth is connections times rate. But
> you can't limit the connection to one because (again my interpretation)
> there can be multiple users behind one IP. Think of a university
> reading your website. Thus I am more comfortable limiting bandwidth
> than I am limiting the number of connections. The 512k rate limit is
> fine. I wouldn't go any higher.

If I understand correctly, limit_req only works if the same connection
is used for each request. My goal with limit_conn and limit_conn_zone
would be to prevent someone from circumventing limit_req by opening a
new connection for each request. Given that, why would my
limit_conn/limit_conn_zone config be any different from my
limit_req/limit_req_zone config?

- Grant

>> Should I basically duplicate my limit_req and limit_req_zone
>> directives into limit_conn and limit_conn_zone? In what sort of
>> situation would someone not do that?
>>
>> - Grant
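[Editor's note: a sketch of pairing the two modules; zone names and numbers below are placeholders. The directives are not duplicates of each other: limit_req counts request rate per key regardless of which connection carries the requests (as established elsewhere in this thread), while limit_conn caps concurrent connections per key, so the two numbers are chosen independently.]

```nginx
# Requests per second per client IP.
limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=2r/s;
# Concurrent connections per client IP (no rate; it is a simple counter).
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

server {
    location / {
        limit_req  zone=req_per_ip burst=10;
        limit_conn conn_per_ip 10;   # placeholder cap per IP
    }
}
```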
Don't process requests containing folders
My site doesn't have any folders in its URL structure so I'd like to have nginx process any request which includes a folder (cheap 404) instead of sending the request to my backend (expensive 404). Currently I'm using a series of location blocks to check for a valid request. Here's the last one before nginx internal takes over: location ~ (^/|.html)$ { } Can I expand that to only match requests with a single / or ending in .html like this: location ~ (^[^/]+/?[^/]+$|.html$) { } Should that work as expected? - Grant ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Back button causes limiting?
I just saw some strange stuff in my logs and it only makes sense if pressing the back button creates a new request on an iPad. So if an iPad user presses the back button 5 times quickly, they will have generated 5 requests in a very short period of time which could turn on rate limiting if so configured. Has anyone else noticed this? - Grant ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: limit-req: better message for users?
>> Has anyone experimented with displaying a more informative message
>> than "503 Service Temporarily Unavailable" when someone exceeds the
>> limit-req?
>
> maybe https://tools.ietf.org/html/rfc6585#section-4 ?

That's awesome. Any idea why it isn't the default? Do you remember the
directive that will set this and roughly where it should go?

- Grant
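[Editor's note: the directive being asked about is most likely limit_req_status (available since nginx 1.3.15). It goes at the same level as limit_req, and error_page can then serve a friendlier page for the 429 status. The zone name "perip" and the page path below are assumptions for illustration.]

```nginx
location / {
    limit_req zone=perip burst=10;   # "perip" is an assumed zone name
    limit_req_status 429;            # RFC 6585 "Too Many Requests"
    error_page 429 /rate_limited.html;
}

# Friendlier message served only via error_page, not directly.
location = /rate_limited.html {
    internal;
}
```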
Re: limit-req and greedy UAs
> This page has all the secret sauce, including how to limit the number
> of connections.
>
> https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
>
> I set up the firewall with a higher number as a "just in case."

Should I basically duplicate my limit_req and limit_req_zone
directives into limit_conn and limit_conn_zone? In what sort of
situation would someone not do that?

- Grant
Re: limit-req and greedy UAs
> Since this limit is per IP, is the scenario you stated really a
> problem? Only that IP is affected. Or as is often the case, did I miss
> something?

The idea (which I used bad examples to illustrate) is that some
mainstream browsers make a series of requests for files which don't
necessarily exist. Too many of those requests trigger limiting even
though the user didn't do anything wrong.

- Grant

>> Has anyone considered the problem of legitimate UAs which request a
>> series of files which don't necessarily exist when they access your
>> site? Requests for files like robots.txt, sitemap.xml,
>> crossdomain.xml, apple-touch-icon.png, etc. could quickly cause the
>> UA to exceed the limit-req burst value. What is the right way to deal
>> with this?
>>
>> - Grant
Re: limit-req and greedy UAs
> What looks to me to be a real resource hog that quite frankly you
> can't do much about are download managers. They open up multiple
> connections, but the rate limits apply to each individual connection.
> (This is why you want to limit the number of connections.)

Does this mean an attacker (for example) could get around rate limits
by opening a new connection for each request? How are the number of
connections limited?

- Grant
limit-req and greedy UAs
Has anyone considered the problem of legitimate UAs which request a series of files which don't necessarily exist when they access your site? Requests for files like robots.txt, sitemap.xml, crossdomain.xml, apple-touch-icon.png, etc could quickly cause the UA to exceed the limit-req burst value. What is the right way to deal with this? - Grant ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
limit-req: better message for users?
Has anyone experimented with displaying a more informative message than "503 Service Temporarily Unavailable" when someone exceeds the limit-req? - Grant ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: 301 executes before authentication
> In the links provided above, I see one example of Maxim suggesting a
> 2-step solution playing with a returned status code.

Wow, that works. I couldn't follow it at first. Thanks!

- Grant

>>> Rewrites will execute before authentication module handlers run;
>>> this is a function of how Nginx is designed, and this order isn't
>>> configurable. See
>>> http://forum.nginx.org/read.php?2,41891,43112#msg-43112 and
>>> http://www.nginxguts.com/2011/01/phases/.
>>
>> In that case, can anyone figure out how to rewrite this config without
>> a redirect so that munin can be accessed with host:port? I worked on
>> it for quite a bit but couldn't come up with anything functional.
>>
>> location = / {
>>     return 301 https://$host:$server_port/munin/;
>> }
>>
>> location /munin {
>>     fastcgi_split_path_info ^(/munin)(.*);
>>     fastcgi_param PATH_INFO $fastcgi_path_info;
>>     fastcgi_pass unix:/var/run/munin/fcgi-html.sock-1;
>>     include fastcgi_params;
>> }
>>
>> - Grant
>>
>>>> I have a server block that contains the following:
>>>>
>>>> auth_basic "Please log in.";
>>>> location = / {
>>>>     return 301 https://$host:$server_port/folder/;
>>>> }
>>>>
>>>> I noticed that /folder/ is appended to the URL before the user is
>>>> prompted for authentication. Can that behavior be changed?
>>>>
>>>> - Grant
301 executes before authentication
I have a server block that contains the following:

auth_basic "Please log in.";
location = / {
    return 301 https://$host:$server_port/folder/;
}

I noticed that /folder/ is appended to the URL before the user is
prompted for authentication. Can that behavior be changed?

- Grant
Re: ssl_dhparam compatibility issues?
I'm using Mozilla's "Old backward compatibility" ssl_ciphers so I feel
good about my compatibility there, but does the following open me up to
potential compatibility problems:

# openssl dhparam -out dhparams.pem 2048

DHE params larger than 1024 bits are not compatible with Java 6/7
clients. If you need compatibility with those clients, use a DHE of
1024 bits, or disable DHE entirely.

My server is open to the internet so I'd like to maintain compatibility
with as many clients as possible, but I don't serve any Java apps.
Given that, will DHE params larger than 1024 bits affect my
compatibility? If so, I believe a DHE of 1024 bits opens me to the
Logjam attack, so if I disable DHE entirely will that affect my
compatibility?

- Grant
ssl_dhparam compatibility issues?
I'm using Mozilla's "Old backward compatibility" ssl_ciphers so I feel
good about my compatibility there, but does the following open me up to
potential compatibility problems:

    # openssl dhparam -out dhparams.pem 2048

nginx.conf:

    ssl_dhparam {path to dhparams.pem}

https://wiki.mozilla.org/Security/Server_Side_TLS

- Grant
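For reference, wiring the generated parameters into nginx looks roughly
like the sketch below (the certificate and file paths are examples, not
taken from this thread):

```nginx
# Generated beforehand with: openssl dhparam -out /etc/nginx/dhparams.pem 2048
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/server.crt;   # example paths
    ssl_certificate_key /etc/nginx/server.key;
    # Custom DH parameters used for DHE key exchange; omit this (and the
    # DHE ciphers) entirely if you decide to drop DHE support.
    ssl_dhparam /etc/nginx/dhparams.pem;
}
```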
Re: gzip_types not working as expected
>> gzip is not working on my piwik.js file according to Google at
>> developers.google.com/speed/pagespeed/insights. It's working fine on
>> my CSS file. How can I troubleshoot this?
>>
>> gzip on;
>> gzip_disable msie6;
>> gzip_types text/javascript application/x-javascript text/css text/plain;
>
> You are probably missing application/javascript in your list, or the
> size of the javascript in question is less than the (default?) minimum
> size for gzip to be applied; check the official documentation for this.
> Anyway, an easy way to check whether you are missing a MIME type in
> your gzip list is to open your page with Firebug (or similar) enabled
> and check the type and size of the particular resource.

Just needed to add application/javascript. Thanks guys.

- Grant
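The fixed directive set would then look something like this
(gzip_min_length is shown with its documented default, purely for
illustration of the size threshold mentioned above):

```nginx
gzip on;
gzip_disable msie6;
# application/javascript is the type most modern stacks emit for .js
# files; the older text/javascript and application/x-javascript entries
# are kept for safety.
gzip_types text/javascript application/x-javascript application/javascript
           text/css text/plain;
# Responses smaller than this many bytes are never compressed
# (20 is the default).
gzip_min_length 20;
```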
Re: gzip_types not working as expected
>> gzip is not working on my piwik.js file according to Google at
>> developers.google.com/speed/pagespeed/insights. It's working fine on
>> my CSS file. How can I troubleshoot this?
>>
>> gzip on;
>> gzip_disable msie6;
>> gzip_types text/javascript application/x-javascript text/css text/plain;
>>
>> - Grant

Any help here guys?

- Grant
gzip_types not working as expected
gzip is not working on my piwik.js file according to Google at
developers.google.com/speed/pagespeed/insights. It's working fine on my
CSS file. How can I troubleshoot this?

    gzip on;
    gzip_disable msie6;
    gzip_types text/javascript application/x-javascript text/css text/plain;

- Grant
Re: Translating apache config to nginx
>>> location ~ ^(?!installer)(\.?[^\.]+)$ { deny all; }
>>>
>>> Alternatively: what request do you make? What response do you expect?
>>> And what is the regex above intended to do?
>>
>> I actually got these apache deny directives from the roundcube list.
>
> Possibly the roundcube list will be able to explain, in words, what the
> intention is. Then someone may be able to translate those words into an
> nginx config fragment.

Here is the description:

    deny access to files not containing a dot or starting with a dot
    in all locations except installer directory

Should the following accomplish this in nginx? It gives me 403 during
normal operation.

    location ~ ^(?!installer)(\.?[^\.]+)$ {
        deny all;
    }

- Grant
fastcgi caching
I'm using the following config to cache only /piwik/piwik.php:

    fastcgi_cache_path /var/cache/php-fpm levels=1:2 keys_zone=piwik:10m;
    fastcgi_cache_key $scheme$request_method$host$request_uri;

    location /piwik/piwik.php {
        fastcgi_cache piwik;
        add_header X-Cache $upstream_cache_status;
        fastcgi_pass unix:/run/php-fpm.socket;
        include fastcgi.conf;
    }

I'm getting X-Cache: HIT. I tried to set up a minimal config, but am I
missing anything essential? Is setting up a manual purge required or
will this manage itself?

- Grant
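On the purge question: two directives worth considering on top of a
minimal setup like the one above are fastcgi_cache_valid, which controls
how long cached responses stay fresh, and the inactive= parameter of
fastcgi_cache_path, which lets the cache manager evict unused entries on
its own, so no manual purge is needed for ordinary expiry. The time
values below are illustrative, not recommendations:

```nginx
# inactive= lets the cache manager drop entries not accessed for 60 minutes
fastcgi_cache_path /var/cache/php-fpm levels=1:2 keys_zone=piwik:10m
                   inactive=60m;
fastcgi_cache_key $scheme$request_method$host$request_uri;

location /piwik/piwik.php {
    fastcgi_cache piwik;
    # consider successful responses fresh for 5 minutes (example value)
    fastcgi_cache_valid 200 5m;
    add_header X-Cache $upstream_cache_status;
    fastcgi_pass unix:/run/php-fpm.socket;
    include fastcgi.conf;
}
```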
Re: Translating apache config to nginx
>>> Here is the description:
>>>
>>>     deny access to files not containing a dot or starting with a dot
>>>     in all locations except installer directory
>>
>> So: you want it to block /one and /two/, to allow /thr.ee, and to
>> block /.four, yes?

That's how I read it too.

>>> Should the following accomplish this in nginx? It gives me 403 during
>>> normal operation.
>>
>> That configuration seems to get the first three correct and the last
>> one wrong. If you add a / immediately after the first ^, it seems to
>> get all four correct.
>>
>> What is normal operation? If the request you make is like /thr.ee, it
>> should be allowed; if it is not like /thr.ee, it should be blocked.

I just meant normal browsing around the inbox in Roundcube.

>> (Personally, I'm not sure why you would want that set of restrictions.
>> But if you want it, this is one way to get it.)
>>
>>> location ~ ^(?!installer)(\.?[^\.]+)$ { deny all; }

I think the corrected directive is as follows?

    location ~ ^/(?!installer)(\.?[^\.]+)$ {
        deny all;
    }

- Grant
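Spelled out, the corrected pattern behaves as follows on the four test
URIs discussed above (a sketch of the reasoning, not an exhaustive
test):

```nginx
# ^/             anchor at the leading slash of the URI
# (?!installer)  reject any path that starts with "installer" after the /
# \.?[^\.]+$     an optional leading dot, then characters containing no
#                further dot, i.e. a name with no dot or starting with one
location ~ ^/(?!installer)(\.?[^\.]+)$ {
    deny all;   # /one -> 403, /two/ -> 403, /.four -> 403;
                # /thr.ee and /installer/... do not match, so are allowed
}
```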
Re: Translating apache config to nginx
>>>> What is normal operation? If the request you make is like /thr.ee,
>>>> it should be allowed; if it is not like /thr.ee, it should be
>>>> blocked.
>>>
>>> I just meant normal browsing around the inbox in Roundcube.
>>
>> If you assume that people on this list magically know what Roundcube
>> URIs look like, you're going to be massively reducing the audience
>> that might otherwise be able to help you! ;-)

You're right, but the regex was originally written for Roundcube, so my
point was that it was supposed to work but didn't, and something was
probably lost in translation between apache and nginx. It just needed
an extra slash.

- Grant
Re: Translating apache config to nginx
>>> But this causes a 403 during normal operation:
>>>
>>> location ~ ^(?!installer)(\.?[^\.]+)$ { deny all; }
>>>
>>> Why is that happening?
>>
>> What requests do you want to match that location? What requests
>> actually match that location?
>>
>> Alternatively: what request do you make? What response do you expect?
>> And what is the regex above intended to do?

I actually got these apache deny directives from the roundcube list. I
don't have a more specific idea of what this one is supposed to do
beyond securing things. I'm not very good with regex and I was hoping
someone here would see the problem. Does it make sense that this would
work in apache but not in nginx?

- Grant
Translating apache config to nginx
Roundcube uses some apache config to deny access to certain locations
and I'm trying to translate them to nginx. The following seems to work
fine:

    location ~ ^/?(\.git|\.tx|SQL|bin|config|logs|temp|tests|program\/(include|lib|localization|steps)) {
        deny all;
    }

    location ~ /?(README\.md|composer\.json-dist|composer\.json|package\.xml)$ {
        deny all;
    }

But this causes a 403 during normal operation:

    location ~ ^(?!installer)(\.?[^\.]+)$ {
        deny all;
    }

Why is that happening?

- Grant
Re: fastcgi index
> The fastcgi_index directive is to instruct a fastcgi backend which file
> to use if a request with a URI ending in "/" is passed to the backend.
> That is, it makes sense in a configuration like this:
>
> location / {
>     fastcgi_pass localhost:9000;
>     fastcgi_index index.php;
>     include fastcgi.conf;
> }
>
> It doesn't make sense in configurations with only *.php files passed to
> fastcgi backends though. E.g., in a configuration like this it doesn't
> make sense and should be removed:
>
> location ~ \.php$ {
>     fastcgi_pass localhost:9000;
>     # wrong: fastcgi_index doesn't make sense here
>     fastcgi_index index.php;
>     include fastcgi.conf;
> }

In that case, should it be removed from the example here?

http://wiki.nginx.org/PHPFcgiExample

- Grant
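Worth spelling out the mechanism behind the advice above: fastcgi_index
works by filling in $fastcgi_script_name for URIs ending in /, which is
what the shipped fastcgi.conf then uses to build SCRIPT_FILENAME:

```nginx
location / {
    fastcgi_pass localhost:9000;
    # For a request to "/", $fastcgi_script_name becomes "/index.php",
    # so the SCRIPT_FILENAME below resolves to $document_root/index.php.
    # For a request matched by ~ \.php$ the URI never ends in "/",
    # which is why the directive is pointless there.
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```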
Re: fastcgi index
>> No, I mean the \.php regex-based one.
>
> So now you probably know why top-posting is discouraged. ;)
>
>> It's just that it opens the door to a lot of problems by allowing all
>> .php scripts to be processed. Furthermore it's even mentioned on the
>> wiki Pitfalls page:
>> http://wiki.nginx.org/Pitfalls#Passing_Uncontrolled_Requests_to_PHP
>
> The trivial and correct fix for the problem mentioned on the wiki is to
> properly configure php, with cgi.fix_pathinfo=0.
>
> I would also recommend not allowing php at all under locations where
> you allow untrusted parties to put files - or, rather, only allowing
> php under locations where untrusted parties are not allowed to put
> files, by properly isolating the \.php$ location. But again, there is
> nothing wrong with the configuration per se.

Is the example from the wiki a good one to use?

    location ~ [^/]\.php(/|$) {

http://wiki.nginx.org/PHPFcgiExample

- Grant
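One common way to harden the \.php$ location, independent of the
cgi.fix_pathinfo setting, is to refuse requests whose script file does
not actually exist on disk. A sketch (the socket path is an example):

```nginx
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    # Reject the request up front if the named .php file is not a real
    # file, so something like /uploads/evil.jpg/foo.php never reaches
    # the PHP backend.
    if (!-f $document_root$fastcgi_script_name) {
        return 404;
    }
    fastcgi_pass unix:/run/php-fpm.socket;
    include fastcgi.conf;
}
```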
Re: fastcgi index
> The trivial and correct fix for the problem mentioned on the wiki is to
> properly configure php, with cgi.fix_pathinfo=0.

I didn't realize the PHP config should be changed for nginx. Are there
other important changes to make besides cgi.fix_pathinfo=0?

- Grant
Re: minimal fastcgi config for 1 file?
>> I noticed my distro doesn't include any of the following in
>> fastcgi_params, and only the first of these in fastcgi.conf:
>>
>> SCRIPT_FILENAME
>> PATH_INFO
>> PATH_TRANSLATED
>>
>> They are all included in fastcgi_params in the example here:
>> http://wiki.nginx.org/PHPFcgiExample
>>
>> Should they all be added to fastcgi_params?
>
> No. The idea is that fastcgi_params includes basic parameters, and is
> usable in configurations like:
>
> location / {
>     fastcgi_pass ...
>     fastcgi_param SCRIPT_FILENAME /path/to/script.php;
>     fastcgi_param PATH_INFO $uri;
>     include fastcgi_params;
> }

Should the wiki example be switched from fastcgi_params to fastcgi.conf?

http://wiki.nginx.org/PHPFcgiExample

Also, PATH_INFO and PATH_TRANSLATED appear in the wiki but don't appear
in the shipped files. Should they be removed from the wiki?

- Grant
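For context, the only difference between the two shipped files is that
fastcgi.conf also defines SCRIPT_FILENAME, so it can be included from a
generic location without setting that parameter by hand. Roughly:

```nginx
# fastcgi_params: basic CGI variables only (QUERY_STRING, REQUEST_METHOD, ...)
# fastcgi.conf:   the same set plus this one line:
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

location ~ \.php$ {
    fastcgi_pass unix:/run/php-fpm.socket;   # example backend address
    include fastcgi.conf;                    # no explicit SCRIPT_FILENAME needed
}
```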
fastcgi index
I've found that if I don't specify:

    index index.html index.htm index.php;

in the server blocks where I use fastcgi, I can get a 403 due to the
forbidden directory index. I would have thought 'fastcgi_index
index.php;' would take care of that. If this is the expected behavior,
should the index directive be added to the fastcgi wiki?

http://wiki.nginx.org/HttpFastcgiModule

- Grant
minimal fastcgi config for 1 file?
Is it OK to use a minimal fastcgi configuration for a single file like
this:

    location ~ ^/piwik/piwik.php$ {
        fastcgi_pass unix:/run/php-fpm.socket;
        include fastcgi_params;
    }

- Grant
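For a single known file, an exact-match location avoids the regex
entirely. One thing to watch: on distros whose fastcgi_params file
(unlike fastcgi.conf) does not set SCRIPT_FILENAME, the backend will
need it added. A sketch:

```nginx
location = /piwik/piwik.php {
    fastcgi_pass unix:/run/php-fpm.socket;
    include fastcgi_params;
    # fastcgi_params ships without SCRIPT_FILENAME on some distros;
    # php-fpm needs it to know which script to execute.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```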
Re: No authentication prompt with if block
>> Authentication works fine if I don't include the if block, but I'd
>> like to allow only a certain user access to this server block. I get a
>> 403 in the browser without any prompt for authentication.
>>
>> auth_basic "Authentication Required";
>> auth_basic_user_file htpasswd;
>>
>> if ($remote_user != myuser) {
>>     return 403;
>> }
>>
>> What am I doing wrong?
>
> Rewrite directives, including "if", are executed before access checks
> (and hence auth_basic). So in your configuration 403 is returned before
> auth_basic has a chance to ask for authentication by returning 401.
>
> Something like
>
> map $remote_user $invalid_user {
>     default 1;
>     "" 0;
>     myuser 0;
> }
>
> if ($invalid_user) {
>     return 403;
> }
>
> auth_basic ...
>
> should work, as it will allow an empty $remote_user, and auth_basic
> will be able to ask for authentication if credentials weren't supplied.

That works great, thank you. Does adding 'map' slow the server down
much?

- Grant
Hiring a dev: nginx+interchange
Hello,

I use a perl framework called interchange (icdevgroup.org) and I've
been using a perl module called Interchange::Link to interface
interchange to apache:

https://github.com/interchange/interchange/blob/master/dist/src/mod_perl2/Interchange/Link.pm

I'd like to switch from apache to nginx and I need to hire someone to
help me interface interchange to nginx. I don't need the interface to
include all of the features from Interchange::Link.

- Grant
Re: root works, alias doesn't
>> That's true. Is alias or root preferred in this situation for
>> performance?
>
> The root directive is better from any point of view. It is less
> complicated and bug-free (alias has bugs, see
> https://trac.nginx.org/nginx/ticket/97). You should always prefer root
> over alias when it is possible.

Many thanks Valentin.

- Grant
Re: root works, alias doesn't
>>> It works if I specify the full path for the alias. What is the
>>> difference between alias and root? I have root specified outside of
>>> the server block and I thought I could use alias to avoid specifying
>>> the full path again.
>>
>> http://nginx.org/en/docs/http/ngx_http_core_module.html#alias
>> http://nginx.org/en/docs/http/ngx_http_core_module.html#root
>>
>> The docs say that the requested filepath is constructed by
>> concatenating root + URI. That's for root. The docs also say that
>> alias replaces the content directory (so it must be absolutely defined
>> through alias). By default, the last part of the URI (after the last
>> slash, so the file name) is searched for in the directory specified by
>> alias. alias doesn't construct itself based on root, it's totally
>> independent, so by using it you'll need to specify the directory
>> absolutely, which is precisely what you wish to avoid.

I see. It seems like root and alias function identically within
location /. I tried both of the following with the same result:

    location / {
        alias webalizer/;
    }

    location ~ ^/$ {
        alias webalizer/$1;
    }

>> For what you wish to do, you might try the following:
>>
>> set $rootDir /var/www/localhost/htdocs
>> root $rootDir/;
>>
>> location / {
>>     alias $rootDir/webalizer/;
>> }
>>
>> alias is meant for exceptional overload of root in a location block,
>> so I guess its use here is a good idea.

I'm not sure what you mean by that last sentence. When should alias be
used instead of root inside of location /?

>> However, there seems to be no environmental propagation of some $root
>> variable (which may be wanted by developers to avoid confusion and
>> unwanted concatenation of values in the variables tree).
>> $document_root and $realpath_root must be computed last, based on the
>> value of the 'root' directive (or its 'alias' overload), so they can't
>> be used indeed. I'd be glad to know the real reasons of the developers
>> behind the absence of environmental propagation of some $root
>> variable.

Me too.

- Grant
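The path-construction difference the docs describe can be summarized
like this (paths are the ones from this thread; the /stats/ prefix is
made up for illustration):

```nginx
# With root, the full URI is appended to the root value:
#   GET /webalizer/index.html
#   -> /var/www/localhost/htdocs/webalizer/index.html
location /webalizer/ {
    root /var/www/localhost/htdocs;
}

# With alias, the matched location prefix is *replaced* by the alias
# value, which must therefore be a full path:
#   GET /stats/index.html
#   -> /var/www/localhost/htdocs/webalizer/index.html
location /stats/ {
    alias /var/www/localhost/htdocs/webalizer/;
}
```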
All webapps behind nginx reverse proxy by port?
I'm thinking of using nginx as a reverse proxy for all of my
administrative webapps so I can keep them under nice tight control. Is
this a good idea? Would you use port numbers to separate each of them?

- Grant
Re: All webapps behind nginx reverse proxy by port?
> I'm thinking of using nginx as a reverse proxy for all of my
> administrative webapps so I can keep them under nice tight control. Is
> this a good idea? Would you use port numbers to separate each of them?
>
> - Grant

On second thought, this wouldn't be a reverse proxy setup in every
instance. Some webapps could be served straight from nginx, but is
configuring them into separate nginx servers a good idea for more
control? I'm trying to find out if I'm thinking straight about this.

- Grant
root works, alias doesn't
Can anyone tell me why this works:

    root /var/www/localhost/htdocs;
    location / {
        root /var/www/localhost/htdocs/webalizer/;
    }

And this doesn't:

    root /var/www/localhost/htdocs;
    location / {
        alias /webalizer/;
    }

I get:

    /webalizer/index.html is not found (2: No such file or directory)

/var/www/localhost/htdocs/webalizer/index.html does exist.

- Grant
Re: Strange log file behavior
> You are right. There is a problem:
> https://bugs.gentoo.org/show_bug.cgi?id=473036
>
> Upstream (nginx) accepted the report:
> http://trac.nginx.org/nginx/ticket/376

Many thanks Igor! You've saved me a lot of trouble.

- Grant
Re: Strange log file behavior
>> I noticed that most of my rotated nginx log files are empty (0 bytes).
>> My only access_log directive is in nginx.conf:
>>
>> access_log /var/log/nginx/localhost.access_log combined;
>>
>> Also nginx is currently logging to
>> /var/log/nginx/localhost.access_log.1 instead of localhost.access_log.
>> Does anyone know why these things are happening?
>
> This usually happens if someone doesn't ask nginx to reopen log files
> after a rotation. See here for details:
> http://nginx.org/en/docs/control.html#logs

I use logrotate:

    /var/log/nginx/*_log {
        missingok
        sharedscripts
        postrotate
            test -r /run/nginx.pid && kill -USR1 `cat /run/nginx.pid`
        endscript
    }

Does it look OK?

- Grant
Re: Strange log file behavior
>>> I noticed that most of my rotated nginx log files are empty (0
>>> bytes). My only access_log directive is in nginx.conf:
>>>
>>> access_log /var/log/nginx/localhost.access_log combined;
>>>
>>> Also nginx is currently logging to
>>> /var/log/nginx/localhost.access_log.1 instead of
>>> localhost.access_log. Does anyone know why these things are
>>> happening?
>>
>> This usually happens if someone doesn't ask nginx to reopen log files
>> after a rotation. See here for details:
>> http://nginx.org/en/docs/control.html#logs
>
>> I use logrotate:
>>
>> /var/log/nginx/*_log {
>>     missingok
>>     sharedscripts
>>     postrotate
>>         test -r /run/nginx.pid && kill -USR1 `cat /run/nginx.pid`
>>     endscript
>> }
>>
>> Does it look OK?
>
> Make sure paths used in postrotate are correct.

The paths are correct. I made some tweaks and I'll report back tomorrow
on how it goes. Any other ideas?

- Grant
Re: Strange log file behavior
>> Any other ideas?
>
> Not sure if relevant, but in Gentoo's bug tracker there are some open
> bugs regarding current logrotate versions:
>
> https://bugs.gentoo.org/show_bug.cgi?id=476202
> https://bugs.gentoo.org/show_bug.cgi?id=474572
> https://bugs.gentoo.org/show_bug.cgi?id=476720
>
> They seem to be upstream bugs (not Gentoo specific). So maybe you are
> affected, too? Which logrotate version do you use?

I'm on Gentoo also and I think you nailed it. I will watch those bugs.
Thank you!

- Grant
Strange log file behavior
I noticed that most of my rotated nginx log files are empty (0 bytes).
My only access_log directive is in nginx.conf:

    access_log /var/log/nginx/localhost.access_log combined;

Also nginx is currently logging to
/var/log/nginx/localhost.access_log.1 instead of localhost.access_log.
Does anyone know why these things are happening?

- Grant
Re: munin plugin for nginx
>> I'm having some trouble getting the nginx plugin working for munin.
>> I've added the following to nginx config and restarted:
>>
>> location /nginx_status {
>>     stub_status on;
>>     access_log off;
>>     allow 127.0.0.1;
>>     deny all;
>> }
>>
>> I've added the following munin config:
>>
>> [nginx*]
>> env.url http://localhost/nginx_status
>>
>> Unfortunately I still get:
>>
>> # munin-run nginx_request
>> request.value U
>> # munin-run nginx_status
>> total.value U
>> reading.value U
>> writing.value U
>> waiting.value U
>>
>> If I remove the allow/deny, I can browse to /nginx_status and I get:
>>
>> Active connections: 13
>> server accepts handled requests
>> 15 15 16
>> Reading: 0 Writing: 1 Waiting: 12
>>
>> What could be the problem?
>
> The munin plugin is broken or not getting the status information. Try
> stracing munin-run, a network capture, or turning on the access logs on
> /nginx_status just to be sure.
>
> Well, I run it all over the place with no problem. I usually set it up
> only on localhost:
>
> server {
>     listen 127.0.0.1:80 default;
>     server_name localhost;
>     root /var/www;
>     access_log /var/log/nginx/localhost.access.log;
>     error_log /var/log/nginx/localhost.error.log;
>
>     location ~ /nginx_status {
>         stub_status on;
>         access_log off;
>         allow 127.0.0.1;
>         deny all;
>     }
> }
>
> (in /etc/nginx/conf.d/stub, or /etc/nginx/sites-available/stub,
> symlinked to ../sites-enabled/stub depending on preference) and restart
> nginx. Then test this bit works...
>
> $ wget -O - http://localhost/nginx_status 2> /dev/null
> Active connections: 1
> server accepts handled requests
> 67892 67892 70215
> Reading: 0 Writing: 1 Waiting: 0
>
> Some OSs seem to like
>
> [nginx*]
> env.url http://localhost/nginx_status
>
> added to /etc/munin/plugin-conf.d/munin_node; then munin-run
> nginx_status should run just fine.

You fixed it! Reducing it to the simplest config that still works, I
found that the location /nginx_status block doesn't work with munin
inside of any other server block. It only works inside its own server
block like so:

    server {
        location ~ ^/nginx_status$ {
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            deny all;
        }
    }

Is this a munin bug? Thank you!

- Grant
Re: munin plugin for nginx
> My config looks like:
>
> /etc/munin/plugin-conf.d/munin-node
> ...
> [nginx_*]
> user root
>
> /etc/nginx/sites-enabled/default
> ...
> ## munin nginx status (requests / connections handled)
> location /nginx_status {
>     stub_status on;
>     access_log off;
>     allow 127.0.0.1;
>     deny all;
> }
> ...
>
> This runs for me very well, so try editing your
> /etc/munin/plugin-conf.d/munin-node. You can also debug with
> munin-configure --suggest

I don't have /etc/nginx/sites-enabled/ at all. What kind of stuff is in
the default file? I'm on Gentoo.

- Grant

On 18.06.2013 15:42, Grant wrote:
> [...]
Re: munin plugin for nginx
> Replace
>
> [nginx*]
> env.url http://localhost/nginx_status
>
> with
>
> [nginx_*]
> user root

Thanks!

- Grant

> My nginx default file:
>
> $ egrep -v "(^$|^#)" /etc/nginx/sites-enabled/default
> server {
>     listen 80;  ## listen for ipv4
>     listen [::]:80 default ipv6only=on;  ## listen for ipv6
>     server_name localhost;
>     access_log /var/log/nginx/localhost.access.log;
>
>     location / {
>         root /var/www;
>         index index.html index.htm;
>     }
>
>     location /doc {
>         root /usr/share;
>         autoindex on;
>         allow 127.0.0.1;
>         deny all;
>     }
>
>     location /images {
>         root /usr/share;
>         autoindex on;
>     }
>
>     location /nginx_status {
>         stub_status on;
>         access_log off;
>         allow 127.0.0.1;
>         deny all;
>     }
>
>     #error_page 404 /404.html;
>     # redirect server error pages to the static page /50x.html
>     #error_page 500 502 503 504 /50x.html;
>     #location = /50x.html {
>     #    root /var/www/nginx-default;
>     #}
>
>     # proxy the PHP scripts to Apache listening on 127.0.0.1:80
>     #location ~ \.php$ {
>     #    proxy_pass http://127.0.0.1;
>     #}
>
>     # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
>     #location ~ \.php$ {
>     #    fastcgi_pass 127.0.0.1:9000;
>     #    fastcgi_index index.php;
>     #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
>     #    include fastcgi_params;
>     #}
>
>     # deny access to .htaccess files, if Apache's document root
>     # concurs with nginx's one
>     #location ~ /\.ht {
>     #    deny all;
>     #}
> }

On 18.06.2013 16:10, Grant wrote:
> [...]

- Grant
munin plugin for nginx
I'm having some trouble getting the nginx plugin working for munin.
I've added the following to nginx config and restarted:

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

I've added the following munin config:

    [nginx*]
    env.url http://localhost/nginx_status

Unfortunately I still get:

    # munin-run nginx_request
    request.value U
    # munin-run nginx_status
    total.value U
    reading.value U
    writing.value U
    waiting.value U

If I remove the allow/deny, I can browse to /nginx_status and I get:

    Active connections: 13
    server accepts handled requests
    15 15 16
    Reading: 0 Writing: 1 Waiting: 12

What could be the problem?

- Grant
Permissions check
I just updated nginx and was warned about permissions. Are these
appropriate:

    /var/log/nginx:
    drwxr-x--- root root

    /var/lib/nginx/tmp and /var/lib/nginx/tmp/*:
    drwx------ nginx nginx

- Grant
Re: Permissions check
> I just updated nginx and was warned about permissions. Are these
> appropriate:
>
> /var/log/nginx:
> drwxr-x--- root root
>
> /var/lib/nginx/tmp and /var/lib/nginx/tmp/*:
> drwx------ nginx nginx
>
> - Grant

Whoops, please make that:

    /var/lib/nginx/tmp and /var/lib/nginx/tmp/*:
    drwx------ apache nginx

With nginx running as user apache.

- Grant
Re: IMAP: auth_http
>> nginx seems to require being pointed to an HTTP server for imap
>> authentication. Here's the protocol spec:
>> http://wiki.nginx.org/MailCoreModule#Authentication
>>
>> Is the idea to program this server yourself or does a server like this
>> already exist?
>
> It's usually a script written individually for a specific system. Some
> samples may be found on the wiki, e.g. here:
> http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript

In that case I'd request that nginx's imap proxy function more like
imapproxy, which is easier to set up.

- Grant
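For reference, the protocol such an auth script speaks is plain HTTP: on
success it answers with headers like the following, which tell nginx
which backend to proxy the authenticated client to (the server address
is an illustrative example):

```
HTTP/1.0 200 OK
Auth-Status: OK
Auth-Server: 192.0.2.10
Auth-Port: 143
```

On failure the script instead returns Auth-Status with an error message
(and optionally Auth-Wait with a delay in seconds before the next
attempt is allowed).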
Re: HTTPS header missing from single server
>>>> How can I make nginx set the HTTPS header in a single http/https
>>>> server?
>>>
>>> What is the HTTPS header?
>>
>> I meant to say the HTTPS environment variable. piwik with force_ssl=1
>> on apache goes into a redirect loop because it doesn't know SSL is on
>> due to the nginx reverse proxy.
>
> This sounds like one or more fastcgi_param key/value pairs are not set
> the way your application wants them to be set.
> http://nginx.org/r/fastcgi_param is how you set them. And it includes
> an example with the $https variable, which is described in
> http://nginx.org/en/docs/http/ngx_http_core_module.html#variables

I should have mentioned that I'm using proxy_pass. I was able to get it
working like this:

    proxy_set_header X-Forwarded-Proto $scheme;

- Grant
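The working setup then looks roughly like this on the nginx side (the
backend address is an example; the backend still has to be configured to
trust X-Forwarded-Proto and set its HTTPS flag from it):

```nginx
server {
    listen 443 ssl;
    location / {
        # $scheme is "https" here, so the backend can tell that TLS was
        # terminated at the proxy even though its own connection is http.
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;   # example backend address
    }
}
```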
SSL default changes?
It looks like these changes from the defaults are required for SSL
session resumption and to mitigate the BEAST SSL vulnerability:

    ssl_session_cache shared:SSL:10m;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

Should the defaults be changed to these?

- Grant
Re: nginx for images (apache for pages)
> Nice tutorial! Didn't you find anything appropriate here?
> http://wiki.nginx.org/Configuration

I tried some of those but nothing seemed to match my situation as
clearly as the one I used:

http://kbeezie.com/apache-with-nginx/

- Grant
HTTPS header missing from single server
How can I make nginx set the HTTPS header in a single http/https
server? piwik with force_ssl=1 on apache goes into a redirect loop
because it doesn't know SSL is on due to the nginx reverse proxy. There
is a piwik bug which references a similar problem and blames the HTTPS
header:

http://dev.piwik.org/trac/ticket/2073

- Grant
nginx for images (apache for pages)
I'm serving images and dynamic .html pages via apache on port 80.  I'd
like to have nginx serve the images.  How can this be done since both
apache and nginx need to serve requests on port 80?

- Grant
Re: nginx for images (apache for pages)
>> I'm serving images and dynamic .html pages via apache on port 80.
>> I'd like to have nginx serve the images.  How can this be done since
>> both apache and nginx need to serve requests on port 80?
>
> Set apache up as a proxy server for dynamic html behind the nginx
> server.

Is there a good howto for this?  Is it difficult when dealing with an
ecommerce site?

- Grant
imap: invalid header in response while in http auth state
I'm using imapproxy and trying to switch to nginx.  courier is
listening on port 143.

  mail {
    auth_http localhost:143;
    proxy on;
    server {
      listen 144;
      protocol imap;
    }
  }

I get:

  auth http server 127.0.0.1:143 sent invalid header in response while
  in http auth state, client: 127.0.0.1, server: 0.0.0.0:144

Does anyone know what's wrong?

- Grant
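The error message suggests auth_http is pointed at courier's IMAP port,
which answers in IMAP rather than HTTP.  A sketch of the likely fix,
assuming a separate HTTP auth service exists (the port 8008 and /auth
path here are hypothetical):

```nginx
mail {
    # auth_http must point at an HTTP service that speaks nginx's
    # auth protocol -- not at the IMAP server itself.
    auth_http http://127.0.0.1:8008/auth;
    proxy on;

    server {
        listen 144;
        protocol imap;
    }
}
```

The auth service then tells nginx which backend (e.g. courier on
port 143) to proxy the authenticated client to.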
IMAP: auth_http
nginx seems to require being pointed to an HTTP server for imap
authentication.  Here's the protocol spec:

http://wiki.nginx.org/MailCoreModule#Authentication

Is the idea to program this server yourself or does a server like this
already exist?

- Grant
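The wiki page implies you either write this service yourself or reuse
one that ships with your mail stack.  To make the protocol concrete,
here is a minimal sketch of an auth_http responder in Python, based on
the headers described in the spec above; the backend address is a
placeholder and the accept-everyone check is obviously not for
production:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Backend IMAP server that authenticated clients get proxied to
# (hypothetical values -- point these at your courier instance).
BACKEND_HOST = "127.0.0.1"
BACKEND_PORT = "143"

class AuthHandler(BaseHTTPRequestHandler):
    """Answers nginx's auth_http requests for the mail proxy."""

    def do_GET(self):
        user = self.headers.get("Auth-User", "")
        pwd = self.headers.get("Auth-Pass", "")
        # A real implementation would verify user/pwd against a
        # credential store; this sketch accepts everyone.
        self.send_response(200)
        self.send_header("Auth-Status", "OK")
        self.send_header("Auth-Server", BACKEND_HOST)
        self.send_header("Auth-Port", BACKEND_PORT)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve(host="127.0.0.1", port=8008):
    """Run the auth endpoint that mail{} auth_http points at."""
    HTTPServer((host, port), AuthHandler).serve_forever()
```

On failure a real responder would send "Auth-Status: <error message>"
(and optionally Auth-Wait) instead of OK.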
Re: nginx for images (apache for pages)
>>>> I'm serving images and dynamic .html pages via apache on port 80.
>>>> I'd like to have nginx serve the images.  How can this be done
>>>> since both apache and nginx need to serve requests on port 80?
>>>
>>> Set apache up as a proxy server for dynamic html behind the nginx
>>> server.
>>
>> Is there a good howto for this?  Is it difficult when dealing with
>> an ecommerce site?

What a fine little server.  This howto was perfect:

http://kbeezie.com/apache-with-nginx/

The ecommerce factor was a breeze.  Slightly different SSL certificate
handling.

- Grant
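For readers following the same path, a rough sketch of the arrangement
the howto describes: nginx takes port 80, apache moves to a local port
(the domain, document root, and port 8080 here are assumptions):

```nginx
server {
    listen 80;
    server_name example.com;                 # placeholder

    # nginx serves the static images directly from disk...
    location ~* \.(jpg|jpeg|png|gif|ico)$ {
        root /var/www/example.com;           # placeholder docroot
        expires 30d;
    }

    # ...and proxies everything else to apache, which now listens
    # only on localhost:8080.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```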
proxy_read_timeout for an apache location?
Can I set proxy_read_timeout for only a particular location which is
passed to apache?

- Grant
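proxy_read_timeout is valid in http, server, and location context, so
a per-location override should work; a sketch (the location path,
backend address, and timeout value are hypothetical):

```nginx
location /slow-report/ {
    proxy_pass http://127.0.0.1:8080;  # apache backend

    # Override the default 60s read timeout for this location only;
    # other locations keep whatever is inherited from server/http.
    proxy_read_timeout 300s;
}
```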