Re: Custom redirect for one page from https to http with different name.
On Mon, Jan 08, 2024 at 02:22:14PM -0500, James Read wrote:

Hi there,

> how would I redirect https://example.com/oldname.php to
> http://example.com/newname.php

Within the https server{} block:

    location = /oldname.php {
        return 301 http://example.com/newname.php;
    }

should do it. (Other 30x numbers can work too.)

Cheers,

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx
Re: Nginx serving wrong site
On Mon, Jan 08, 2024 at 11:48:13AM -0500, James Read wrote:

Hi there,

> OK this is a browser issue and not a nginx issue. I just accessed the site
> with lynx and it is showing the right site. However with Chrome it is
> showing the wrong site. This may have something to do with the fact that I
> had to clear the HSTS cache in the browser in order to be able to see
> anything.

Thanks for sharing the resolution with the list.

It looks like this was a case where you wanted the browser to talk to
your nginx on port 80; but the browser was instead talking to a thing
on port 443.

Cheers,

f
--
Francis Daly        fran...@daoine.org
Re: Nginx serving wrong site
On Mon, Jan 08, 2024 at 09:49:23AM -0500, James Read wrote:
> On Mon, 8 Jan 2024, 09:29 Francis Daly, wrote:
> > On Mon, Jan 08, 2024 at 09:13:38AM -0500, James Read wrote:

Hi there,

> > So I'm going to guess that your "server_name" line is of the
> > form "www.example.com"; and your browser is instead accessing
> > http://example.com; and nginx is returning the content of the
> > default_server for that ip:port instead of this server.
>
> My server_name is of the form "example.com www.example.com;" so I don't
> think that is the problem. Could this be anything to do with dns
> configuration?

Do your nginx logs indicate that the request is being handled by this
nginx instance at all? If not, maybe DNS is not causing your browser
to talk to this server's IP address.

Do you have any "listen" directives that include specific IP addresses,
instead of just ports?

Does your example.com resolve to the address of the "listen" in this
"server{}"; or to the address of the "listen" in whichever "server{}"
is actually being used; or to a different address?

Cheers,

f
--
Francis Daly        fran...@daoine.org
Re: Nginx serving wrong site
On Mon, Jan 08, 2024 at 09:13:38AM -0500, James Read wrote:

Hi there,

> I literally copied a working configuration. The only changes I made were
> the name of the server and the root to find the files to be served.

If you're not going to show a configuration, then anyone who might be
able to help will be reduced to guessing.

So I'm going to guess that your "server_name" line is of the form
"www.example.com"; and your browser is instead accessing
http://example.com; and nginx is returning the content of the
default_server for that ip:port instead of this server.

https://nginx.org/en/docs/http/request_processing.html

Cheers,

f
--
Francis Daly        fran...@daoine.org
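To illustrate that guess with a minimal sketch (hypothetical names and paths, not taken from any posted config): a request for http://example.com/ matches neither server_name below, so nginx falls back to the default server for *:80, which is the first block defined.

```nginx
# Hypothetical sketch of the guessed misconfiguration.
# A request with "Host: example.com" matches neither server_name,
# so nginx serves it from the default server for *:80 -- in the
# absence of an explicit default_server, the first block listed.
server {
    listen 80;
    server_name other-site.example;
    root /var/www/other-site;      # "wrong site" content comes from here
}

server {
    listen 80;
    server_name www.example.com;   # bare "example.com" is not listed
    root /var/www/example;
}
```

Adding "example.com" to the second server_name, or marking the intended block with "listen 80 default_server;", changes which server{} handles the request.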
Re: Nginx serving wrong site
On Mon, Jan 08, 2024 at 08:11:21AM -0500, James Read wrote:

Hi there,

> My nginx server is serving the wrong site. I found this explanation online
> https://www.computerworld.com/article/2987967/why-your-nginx-server-is-responding-with-content-from-the-wrong-site.html
> However this explanation doesn't seem to fit my case as I have a location
> which nginx should match correctly. Is there any other reason why nginx
> would serve the wrong site?

It pretty much always is because what you think you have told nginx to
do is not what you have actually told nginx to do. (The other occasions
are usually when your browser is not talking to the nginx that you
think it is talking to.)

To a first approximation: when a request comes to nginx, it first
chooses which server{} to handle the request in, then chooses which
location{} within that server{} to handle the request in.

Can you show a configuration and a request that is handled in a
different location{} from what you want?

Thanks,

f
--
Francis Daly        fran...@daoine.org
Re: Wrong content served
On Tue, Dec 26, 2023 at 07:57:41PM -0300, Daniel A. Rodriguez wrote:

Hi there,

> This behavior is driving me crazy. Currently have more than 30 sites behind
> this reverse proxy, but the latest is refusing to work.

Can you provide more details?

> Config is simple and pretty similar between them all.

"include" means "anything in that file is effectively in this config".
Nobody but you knows what is in that file.

> server {
>     listen 80;
>     server_name material.av.domain;
>
>     include /etc/nginx/snippets/location-letsencrypt.conf;
>
>     # return 301 https://$server_name$request_uri;
> }

Your test request is:

    $ curl -i http://material.av.domain/

What response do you get? What response do you want to get instead?

The "return" is commented out, so unless there is something surprising
in the location-letsencrypt.conf file, I would expect a http 200
response with the content of "the default" index.html file.

> If I point the browser to material.av.domain got redirected to another
> sub-domain, among the 30 mentioned before. However, everything else works
> just fine.

Can you show the response to the "curl" request, to see whether
"redirect" is a http 301 from the web server, or is something like a
http 200 from the web server with maybe some javascript content that
redirects to "the wrong" place?

Cheers,

f
--
Francis Daly        fran...@daoine.org
Re: Forcing incognito mode on a reverse proxy
On Sat, Dec 16, 2023 at 02:16:45PM -0500, Saint Michael wrote:

Hi there,

> I have a reverse proxy but for security reasons, I need to force the
> client to work the closest to an Incognito session as possible.

I suspect that that can only reliably be done by telling the client to
use an Incognito session.

nginx-in-the-middle will not be able to do it, without lots of extra
state being stored across requests. (Which may well be doable by you
writing the code to do it; but I suspect that it can't be done purely
in stock nginx configuration.)

> I tried adding the following:
>
>     proxy_set_header Cookie "";
>     add_header Set-Cookie "cookie_name=; Expires=Thu, 01 Jan 1970 00:00:01 GMT;";
> }
>
> but it still does not work correctly.

I suspect that it will be useful to learn what exactly you consider an
Incognito session to be.

My understanding is that, among other things, the client will choose
not to send any cookies that had been set outside of this session, but
will choose to send cookies that were set within this session.

If that is correct, then "never sending cookies" is not the correct
design. The client can know when the cookies that it has were set;
for nginx to know that, it would need to keep track of the Set-Cookie
responses for each client, and only allow through matching Cookie
requests from the matching client. And by default, nginx does not know
or care about that information.

> Is there a way to do this?

Probably not trivially.

Good luck with it!

f
--
Francis Daly        fran...@daoine.org
Re: serving files from /proc
On Tue, Dec 12, 2023 at 04:17:11PM +0100, Jérôme Loyet wrote:

Hi there,

> I'm trying to serve some files from /proc but nginx return a 0 bytes
> content because the file size of many files in /proc/ tree is simply 0 by
> design.

I suspect that you are going to have to write something to read the
file and tell nginx what the "real" size and content is.

Stock nginx knows that for static files, when the filesystem says that
st_size is 0, the file has a size of 0. That happens to not be true
for some things within /proc; so either you get to change the code
behind /proc to return "real" values, or you get to write something to
return real values.

I guess it will be simpler to run something like a fastcgi process
that will "cat" the file, and tell nginx to fastcgi_pass to that
process for these requests; than to rewrite the /proc code or to
rewrite the nginx static file handler to do extra things when it is
told that the size is 0.

> is there a simple way to configure nginx to return the cotent of
> /proc/net/route or any other file in /proc ?

Untested, but I suspect "directly: no; indirectly, maybe (as above)".

Cheers,

f
--
Francis Daly        fran...@daoine.org
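A sketch of the indirect approach described above, assuming an fcgiwrap-style FastCGI responder and a small helper script; the socket path and script name are assumptions for illustration only.

```nginx
# Hand /proc requests to a FastCGI responder that reads the file and
# prints it, so the content is produced at read time rather than taken
# from the (zero) st_size reported by the filesystem.
location ^~ /proc/ {
    include fastcgi_params;
    # hypothetical wrapper script that validates and "cat"s the file
    fastcgi_param SCRIPT_FILENAME /usr/local/bin/cat-proc.cgi;
    fastcgi_param PATH_INFO $uri;        # which /proc file to read
    fastcgi_pass unix:/run/fcgiwrap.socket;
}
```

The script itself (not shown) should check PATH_INFO against an allow-list before printing the file, so that arbitrary paths cannot be requested through it.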
Re: HTTP status 500 when using Nginx with Jenkins
On Wed, Sep 06, 2023 at 03:15:11PM +0000, David Aldrich wrote:

Hi there,

> In the failure condition, the browser (Edge) shows (in Developer Tools
> Console):
>
> POST https://jenkins-temptest./pipeline-syntax/generateSnippet 500
>
> I don't know how to access the contents of the 500 reply.

The Console can show the Response on the Network tab, if you select
this request.

But: 500 is the generic "something went wrong" message. If it was
generated by nginx, there should be something in the nginx error_log
about it. If it was generated by Jenkins and passed through nginx,
there should be something in the Jenkins-equivalent.

> Is anything obviously wrong with these?

Nothing stands out as being "clearly incorrect"; so checking the logs
(and maybe increasing the log level before trying again, if the message
was generated by nginx) is probably the most useful next step.

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
Re: 502 Bad Gateway using Cloudflare and Kestrel
On Sun, Sep 03, 2023 at 09:57:54PM -0700, Sam Hobbs wrote:

Hi there,

> curl -k https://127.0.0.1:5443
>
> (the address that Kestrel is listening to) I get a page that I am expecting.

> proxy_passhttp://127.0.0.1:5443;

You probably have a space after proxy_pass in your actual config; but
you probably should also have "https://", not "http://", there as well,
since your upstream service is listening for https connections.

> Is there a way to determine with relative certainty that the 502 is caused
> by something in nginx and not Cloudflare or Kestrel or the application? Is
> there a way to get more details? If someone knows how to fix the problem
> regardless of where and why it is happening then that would be great help.

The nginx error log should show its description of what it thinks is
happening; you can change the logging level to have more details
written, if that will help diagnose things.

And the port-5443 service should log something like "I got a http
request to a https port" wherever it writes its information.

Cheers,

f
--
Francis Daly        fran...@daoine.org
Re: Nginx Support required
On Sun, Sep 03, 2023 at 08:05:37AM -0400, Saint Michael wrote:

Hi there,

> The question is: if this is a full reverse proxy, and all requests come
> from my NGINX server,

I suspect that the answer is "they don't".

Dailymotion seems to be a video hosting site with a business model
based around allowing certain web sites to link to embedded videos.
You seem to want to link to embedded videos, without being one of those
"certain sites".

The simplest way to avoid the issue is probably for you to agree terms
with the video hosting site, so that your site is allowed to embed
links to all-or-some of their videos.

> Is there something else that I am missing in the code so 100% of requests
> that hit Dailymotion seem to come from the NGNX machine?

It's not immediately clear to me why the requests should come from your
nginx server. That would seem to miss the point of "outsourcing" the
bandwidth requirements for video delivery away from your server --
which is presumably the purpose of using a video hosting site in the
first place.

If I were running a video-hosting web site based on allowing some sites
to link to my videos, and not allowing others, then I would probably
try to come up with a sophisticated mechanism to ensure that as many
as possible "should be blocked" sources are blocked, while allowing
every "should be allowed" source. And probably one of the very first
checks I would do would be around the Referer header sent in the
request -- if the client tells me that it is coming from a "should be
blocked" source, I would probably believe it without needing to do any
more sophisticated checking for this request.

Cheers,

f
--
Francis Daly        fran...@daoine.org
Re: Nginx Support required
On Sat, Sep 02, 2023 at 08:01:42AM +0000, Shashi Kant Sharma wrote:

Hi there,

> I am looking forward response on this. Can you please response or suggest any
> time for discussion today.

For the nginx@nginx.org public list for "community" support of the open
source application: if you can show what you are doing, and what you
are seeing, and what you want to see instead; then there's a better
chance that someone will be able to either recognise the problem, or
re-create the problem.

You seem to be reporting that an upload of something bigger than 20 MB
leads to some problem.

It sounds like you might want to use the client_max_body_size directive
(http://nginx.org/r/client_max_body_size) if you get a 413 response
from nginx when you send a big file and not when you send a small file.

But if your nginx accepts the current big upload, and whatever nginx
sends the request to for further processing rejects it as too big, then
that other thing is the thing that would need to be reconfigured to
allow it.

Maybe that's enough to allow you to resolve the issue? If not, if you
can provide more specific details about what your system is doing,
someone might be able to make an alternate suggestion.

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
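For reference, a minimal sketch of the directive mentioned above; the 50m limit, names, and backend address are illustrative assumptions, not taken from the original report.

```nginx
# Allow uploads up to 50 MB on one endpoint; elsewhere the default
# (1m) still applies. Bodies above the limit get a 413 from nginx.
server {
    listen 80;
    server_name upload.example.com;

    location /upload/ {
        client_max_body_size 50m;          # 0 would disable the check entirely
        proxy_pass http://127.0.0.1:8080;  # hypothetical backend
    }
}
```

Note that raising this only changes what nginx accepts; whatever the request is passed to for further processing can still reject it as too big.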
Re: Reverse proxying with URL Rewrite
On Wed, Aug 02, 2023 at 01:42:46PM +0300, O G wrote:

Hi there,

> I would like to reverse proxy the following:
>
> https://test.us/citizensolutions/AB to https://AB.test.us/ and all the
> subdirectories so that all files like https://test.us/citizensolutions/AB/css/*.css
> reverse proxy to https://AB.test.us/css/*.css
>
> AB in the above example can be any dynamic input that nginx needs to treat
> as a variable.

Use "map" to create the variables that you want, and then later use
the variables.

You say "any dynamic input"; I'm going to use "up to the next slash,
which must be present"; and you will want to decide how you want to
handle the cases of "AB is empty" or "AB is index.html" and the like.

Strictly, I'd use "map" to create one variable, and a (pcre) regex
named capture group to create the other, like so:

===
map $request_uri $the_bit_after_the_prefix {
    ~/citizensolutions/([^/]*)(?<the_rest_of_the_request>/.*) $1;
    default "";
}
===

(http://nginx.org/r/map)

And then for testing, something like

===
location = /citizensolutions/ {
    return 200 "Nope; try again\n";
}
location ^~ /citizensolutions/ {
    return 200 "Would use $the_bit_after_the_prefix, $the_rest_of_the_request\n";
}
===

and then when you are happy that the "Would use" is showing you what
you expect in all cases you care about, replace that location{} content
with

===
if ($the_bit_after_the_prefix = "") {
    return 301 $request_uri/;
}
proxy_pass https://$the_bit_after_the_prefix.test.us$the_rest_of_the_request;
===

You are effectively telling your nginx to resolve-and-access a dns
name provided by the user, so you'll want to be happy that your nginx
can do that, or the users will see 502 errors.

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
Re: wordpress - Primary script unknown
On Thu, Aug 03, 2023 at 02:00:11PM +0200, lejeczek via nginx wrote:

Hi there,

> 2023/08/03 13:50:24 [debug] 1112963#1112963: *27 fastcgi param:
> "SCRIPT_FILENAME: /var/www/ale.xyx_wordpress/index.php"
> 2023/08/03 13:50:24 [error] 1112963#1112963: *27 FastCGI sent in stderr:
> "Primary script unknown" while reading response header from upstream,
> client: 10.3.9.144, server: ale.xyz, request: "GET / HTTP/2.0", upstream:
> "fastcgi://unix:/run/php-fpm/www.sock:", host: "ale.xyz"

> This is pretty much vanilla-default on Centos 9, those configs are - what am
> I missing?

"Primary script unknown" is a message from the fastcgi server saying
that it is unable to access the file that it has been asked to use.
Usually, that filename comes from a specific one of the SCRIPT_FILENAME
values it receives (if it receives more than one). Your fastcgi server
might log more somewhere about what it thinks it was doing.

But can you check: can the user that the fastcgi service is running as,
read the file /var/www/ale.xyx_wordpress/index.php from the perspective
of the fastcgi service?

That is -- do directory permissions towards that file and file
permissions on that file allow that user/group to read? Does the file
exist at that path name, if the fastcgi service is running in a chroot
or other confined context? Do selinux or other access control
mechanisms allow the fastcgi service to read the file?

Some other (less likely?) possibilities include -- does your nginx send
more than one value for SCRIPT_FILENAME; and if so, is your fastcgi
server trying to use a different one? Does your fastcgi server actually
use SCRIPT_FILENAME, or does it use some other param or combination of
params?

Cheers,

f
--
Francis Daly        fran...@daoine.org
Re: Mixing limit_except breaks rewrite functionality: workaround request
On Mon, Jul 10, 2023 at 06:27:04AM +0200, Sten Grüner wrote:

Hi there,

> Got to do rewrites, since otherwise nginx breaks urlencoded query
> parameters.

Yes, that sounds like a good reason to not just use "the obvious"
config.

So -- following the example in the trac ticket that you linked, doing
something like

===
http {
    map $request_uri $request_without_x {
        ~^/x/(.*) $1;
        default "";
    }
    ...
    server {
        ...
        location /x/ {
            limit_except GET OPTIONS {
                auth_basic "Write Access";
                auth_basic_user_file /etc/nginx/conf.d/htpasswd_write;
            }
            proxy_pass http://server:8081/$request_without_x;
        }
    }
}
===

looks like it should do what you want?

Cheers,

f
--
Francis Daly        fran...@daoine.org
Re: Mixing limit_except breaks rewrite functionality: workaround request
On Fri, Jun 30, 2023 at 11:29:21AM +0200, Sten Gruener wrote:

Hi there,

> I trying to mix authentication for POST requests with some
> rewrite/proxy_pass logic. This mean that password is required only on
> POST/PUT requests.

This does not answer the question you asked, but is there a reason for
the "rewrite, rewrite, return, proxy_pass" sequence instead of just
using exactly "proxy_pass http://server:8081/;"?

It looks like that should do what you want, so bugs in the handling of
more complicated configs would not apply.

Thanks,

f
--
Francis Daly        fran...@daoine.org
Re: NGINX 1.23.4 warning message
On Thu, Jun 29, 2023 at 11:00:21AM -0300, Fabiano Furtado Pessoa Coelho wrote:
> On Wed, Jun 28, 2023 at 10:03 PM Sergey A. Osokin wrote:
> > On Wed, Jun 28, 2023 at 05:19:55PM -0300, Fabiano Furtado Pessoa Coelho wrote:

Hi there,

> > > Changes with nginx 1.23.4                                 28 Mar 2023
> > >
> > >     *) Change: now nginx issues a warning if protocol parameters
> > >        of a listening socket are redefined.
> >
> > [...]
> >
> > > What do these warning messages mean? Should I be worried?
> >
> > From the provided configuration snippet it's unclear is the same
> > ip:port pairs for two server blocks. In case those are - all
> > servers are listening on the same ip:port should have the same
> > parameters. To avoid such a warning update parameters for the
> > listen directive in the second server block.
>
> OK. The IPs and ports are the same. However, if I continue to use
> exactly this configuration, are there any possible technical issues (I
> don't think so, because it is a "warning".) with this environment?
>
> Sorry, but I think I wasn't clear enough with my question.

I think the point is: your config makes it look like you want one
server to use ssl-and-http2, and you want the other server to use
ssl-without-http2.

That is not how it works; and that is never how it worked. So a naïve
reading of your config might lead the reader to be unclear about what
exactly each server is actually using.

This new warning is telling you "your config might be misleading". It
is still doing exactly what it always was doing (and: I guess it will
continue to do that in future updates; but nothing is guaranteed). It
is additionally letting you know of this possibly-unintentional config,
so you can choose to leave it as-is (and get the warnings on startup)
or you can choose to make it explicit that you know what settings apply
to (this ip:port in) each server.

Cheers,

f
--
Francis Daly        fran...@daoine.org
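A sketch of the kind of config that produces the warning (hypothetical names; certificate directives omitted): the listen socket for *:443 is shared, so its protocol parameters come from the first definition, and the second block's differing parameters trigger the 1.23.4 warning.

```nginx
# Both servers share the socket *:443; only the first sets http2.
# The socket parameters apply to every server on that ip:port, so the
# second block effectively gets ssl+http2 too -- and nginx now warns
# that the parameters were redefined.
server {
    listen 443 ssl http2;
    server_name one.example.com;
}

server {
    listen 443 ssl;                 # differs from the first definition
    server_name two.example.com;    # -> triggers the redefinition warning
}
```

Making the parameters identical on every listen directive for that ip:port removes the warning without changing behaviour.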
Re: duplicate ports across servers in nginx.conf
On Wed, May 31, 2023 at 06:14:41AM +0000, Yuval Abadi via nginx wrote:

Hi there,

I don't speak for the project, but my guess is:

> If the configuration has 2 servers sharing the same name and the same port
> I got this warning:
> "nginx: [warn] conflicting server name "http://www.mut.com/" on 0.0.0.0:80,
> ignored"
>
> Why not block this mistake?

If you have 20 server{}s, and 2 share the name and port, should the
entire system fail to start (or reload config)?

It seems friendlier to me to use the config as-provided, and alert on
things that are not used as the administrator apparently expected.

Some configuration issues are considered more important than some
others. This particular one is currently not considered "fatal".

> I assume the second server ignored, but why let it possible?

nginx does not control what the administrator types.

> If the servers do not have name
> I got this warning:
> nginx: [warn] conflicting server name "" on 0.0.0.0:9002, ignored
> nginx: [warn] conflicting server name "" on 0.0.0.0:80, ignored

Yes; it's the same message, showing the listen ip:port and server_name
values that are unexpected.

> both warning:
> first no way for NGINX gives good warning, both server looks the same.

I agree that it would be even friendlier if the error message indicated
the filename and line number that the unexpected configuration came
from; I suspect that a patch to change that would be thoughtfully
considered. Maybe someone will be interested in providing that patch,
now that the issue has been mentioned.

(Maybe the only reason the log omits the filename is that no-one
thought to add it here, where it is added in other places. Or maybe it
is harder than that to implement.)

> if user did such mistake, better to block.

I disagree. It appears that the current code disagrees too; maybe that
will change in the future.

> Why not enforce using at list one server have "listen default_server port"?

I think that is enforced already -- if you have more than one
"default_server", you get an "emerg" failure. If you have none
explicitly, then the implicit config applies -- and I would rather not
lose the implicit config.

> Why not enforce server names , and not let more than one server with same
> name?

I think that is what it is doing already; it considers it a "warning"
rather than an "emergency" configuration issue.

> Is NGINX set the bit default_server, on the first "ngx_http_conf_addr_t",
> of the first server, that read from conf file? (if no default_server was
> defined)?

I'm not quite sure what you are asking: if it is about the code, it is
not hidden and is quite readable; if it is about which server is
default_server if none is explicit, then the documentation also
describes that -- the "implicit" default_server for a specific ip:port
is the first server{} that was read with that (possibly implicit)
"listen" config.

Cheers,

f
--
Francis Daly        fran...@daoine.org
Re: duplicate ports across servers in nginx.conf
On Tue, May 30, 2023 at 06:50:19AM +0000, Yuval Abadi via nginx wrote:

Hi there,

> When I have 2 servers in nginx.conf with same listen port if the server
> have name, nginx issue warning ignore … but nit failed to load.
> What happens is only the first server in conf binds the socket.
> And worse, If no server names I did not get a warning.

Does https://nginx.org/en/docs/http/request_processing.html explain
what you are seeing?

If not, can you show one small but complete configuration that shows
the problem that you are reporting?

"name-based virtual servers" are based around listening on the same
port, and having the http server responding differently based on the
Host: in the incoming request. It would be surprising if that feature
became broken.

The documentation for "listen" at https://nginx.org/r/listen does note
that some parameters only make sense when set once (or set the same
each time, if they are set more than once); I don't know if you are
hitting one of those cases?

Cheers,

f
--
Francis Daly        fran...@daoine.org
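For comparison, a minimal sketch of name-based virtual servers sharing one port (hypothetical names): both blocks use the same listen socket, and nginx chooses between them per request by the Host header; no warning is produced because the names differ.

```nginx
# Two servers on the same port is normal: one socket is bound, and the
# Host header of each request selects the server{} that handles it.
server {
    listen 80;
    server_name one.example.com;
    return 200 "site one\n";
}

server {
    listen 80;
    server_name two.example.com;
    return 200 "site two\n";
}
```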
Re: Reverse proxy to forward proxy to internet access
On Sat, May 27, 2023 at 10:42:01AM -0400, Saint Michael wrote:

Hi there,

> Please look at the links.
> All those links are a live digital tunnel to each website.

Yes; it looks like you are making "normal" use of nginx's proxy_pass
directive, to provide indirect access for clients, to some content on
the web that you have access to, that someone on either the
upstream-server or client-network side had attempted to block direct
access to.

That looks like a convenient service for a client who wants to avoid
those attempted blocks.

I'm just not sure how what you wrote relates to what the original
poster asked, or to anything else in the thread. And your original
mail that I responded to, could just as well have been written in
response to pretty much any message on the mailing list, and it would
have had the same looks-like-spam appearance.

For your follow-up questions about your service: I would have imagined
that there would be a bigger readership for a new thread, rather than
hiding things in an unrelated thread; but whatever works for you
is good.

Cheers,

f
--
Francis Daly        fran...@daoine.org
Re: Reverse proxy to forward proxy to internet access
On Sat, May 27, 2023 at 12:39:05AM -0400, Saint Michael wrote:

Hi there,

> 100% Nginx

That looks like an ad for a donation button; but it doesn't immediately
seem to say "here is how nginx is configured to access a remote web
site through a proxy server". Or "here is how nginx is configured to
be accessed as if it were a proxy server".

(It does seem to indicate "this server acts as a reverse proxy for
some specific remote web sites"; but that's pretty much what
http://nginx.org/r/proxy_pass does. No doubt there is extra cleverness
to handle the "I don't control the upstream server" issues that
usually arise; but it does not seem to be relevant to this thread. Am
I missing something?)

Thanks,

f
--
Francis Daly        fran...@daoine.org
Re: Reverse proxy to forward proxy to internet access
On Sat, May 27, 2023 at 09:51:10AM +0530, Miten Mehta wrote:

Hi there,

> I consider from your reply that niginx reverse proxy cannot provide
> internet access through a forward proxy like squid, websense or alike.

"http through a proxy" uses a different form of requests from "http".

nginx as a client does not make the "http through a proxy" request
when it is talking to a configured upstream server.

The general "forward proxy" server will expect clients that talk to
it, to make "http through a proxy" requests. Your specific "forward
proxy" server might be configured to "transparently" intercept "http"
requests and make a best-guess effort at interpreting them as if they
had been "http through a proxy" requests. And that might work in many
cases.

If that works well enough in your specific case, great! Only you can
know whether it works well enough in your case, to be worth
investigating further for problems.

> I understand you mentioned that nginx cannot be used as forward proxy.

nginx as a server does not specially interpret any "http through a
proxy" requests that it receives, and it does not try to follow the
"http proxy server" rules for handling requests and responses. If what
it does do, works well enough for you, great!

> There are many blogs on net claiming to use nginx as forward proxy and also
> using upstream forward proxy that is false? I don't see this in official
> documentation of nginx.

Maybe those many blogs refer to cases where the combination of their
client, their upstream proxy server, and their configuration of nginx,
works well enough for them. In which case -- great! You should be able
to build a test nginx configuration based on those blogs, to see
whether it works well enough for you, too.

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
Re: Reverse proxy to forward proxy to internet access
On Fri, May 26, 2023 at 04:18:59PM +0530, Miten Mehta wrote:

Hi there,

> Thanks for guidance. If i enable direct internet access from reverse proxy
> then can i just use proxy_pass $request_uri and have user format his url as
> https://myreverseproxy.com/https://mypub/somepath.

Here, $request_uri would start with /, so it would not Just Work as-is.

I'm not sure how https://myreverseproxy.com/https://mypub/somepath is
different from a "normal" https://myreverseproxy.com/mypub/somepath
with a "normal" nginx config based on

    location ^~ /mypub/ {
        proxy_pass https://mypub/;
    }

(plus the supporting configuration).

So then you have a "normal" nginx proxy_pass setup for specific remote
web servers. Which should Just Work like any other proxy_pass
configuration.

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
Re: Reverse proxy to forward proxy to internet access
On Thu, May 25, 2023 at 05:12:26PM +0530, Miten Mehta wrote:

Hi there,

> Can you guide to configuration to put in reverse proxy config file to use
> forward internet proxy?

nginx does not talk to a proxy server. If you need to talk to a proxy
server, you need something other than "stock" nginx.

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
Re: Separate location for files served by php-fpm
On Thu, May 18, 2023 at 09:14:42PM -0700, Palvelin Postmaster via nginx wrote:

Hi there,

> My goal is to serve only requests which include URI /files/hash/*
> using a separate location block. Everything else should be served by
> the default location block I included in my previous message.

Untested, but would

    location ^~ /files/hash/ {
        fastcgi_pass php74;
        fastcgi_param SCRIPT_FILENAME /var/your-php-script.php;
        expires 10d;
    }

meet what your goal is? Adjust the fastcgi_param value to whatever
your fastcgi server needs. The important part is probably the
"location" line that matches all-and-only these requests.

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
Re: Separate location for files served by php-fpm
On Mon, May 15, 2023 at 12:46:14PM -1000, Palvelin Postmaster via nginx wrote:

> > On 8. May 2023, at 8.49, Palvelin Postmaster via nginx wrote:

Hi there,

> > I use php-fpm together with nginx.
> >
> > My PHP app serves files which have hashed filenames and no filename
> > extension from a specific subdirectory url, e.g.
> > /files/hash/31b4ba4a0dc6536201c25e92fe464f85
> >
> > I would like to be able to set, for example, a separate 'expires' value to
> > these files with nginx (using a separate location block?). Is that
> > achiavable?

In principle, yes. So long as the requests use different urls (excluding
query string).

In practice: from the words here, it is not entirely clear to me what
your overall application is doing. Maybe you can have a location{}
dedicated to these file-requests; or maybe it would be "cleaner" for the
php side to add the extra Expires header.

Can you show one or two sample requests that are made to nginx that you
do want to have this extra Expires header; and one or two that you do
not want to have this extra Expires header?

The aim is to come up with a location{} block that matches only the
requests that you want, if that is possible.

Thanks,
f
Re: Adding 3rd party module not taking effect
On Tue, Apr 25, 2023 at 10:20:12PM +, Martin Wolf wrote:

Hi there,

> I tried to add a third part module "fair balancing", but it seems to be not
> properly added:
>
> Error log:
> 2023/04/25 17:14:45 [emerg] 34510#0: unknown directive "fair" in
> /usr/local/nginx/conf/conf.d/nginx-upstream-fair.conf:2...

Yes, that says that the nginx binary that is running does not include
that module.

> # nginx -V
> nginx version: nginx/1.23.4
> built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
> configure arguments: --without-http_rewrite_module --without-http_gzip_module
> --add-module=/usr/local/nginx/modules/nginx_upstream_check_module-master

That nginx binary was built with those three arguments to "configure".

> steps:
> $ ./configure --with-http_ssl_module
> --add-module=/usr/local/nginx/modules/nginx-upstream-fair-master

That is a different set of arguments to "configure". The nginx binary
that you want to run is not the nginx binary that you are running.

Perhaps there is more than one nginx binary in your $PATH?

> make -f objs/Makefile install
> cp objs/nginx '/usr/local/nginx/sbin/nginx'

Maybe try "/usr/local/nginx/sbin/nginx -V" to see if that is the
newly-built binary?

Cheers,
f
Re: situation with friendly urls
On Mon, Apr 10, 2023 at 11:07:11AM -0400, Rick Gutierrez wrote:

Hi there,

> Sorry Francis the folder path was wrong, update and try again

I now see some more things getting good responses, but there are still
lots of missing images, and the browser "developer tools" network
console shows lots of failing requests.

I do not know which ones are broken because files are not where they are
expected to be, and which ones are broken because of the "friendly url"
issue you are reporting.

> Let me see if I send a screenshot, locally it loads fine.

When I load the web site from the public internet, I do not see what
specific url should lead to those screenshots.

If you can identify one particular url where the response from nginx is
not what you want it to be, then maybe that one can be analysed in more
detail.

Otherwise, I'm afraid that "something does not work" is not enough of a
problem report for further action.

Cheers,
f
Re: situation with friendly urls
On Sun, Apr 09, 2023 at 04:08:30PM -0400, Rick Gutierrez wrote:

> On Sun, 9 Apr 2023 at 7:27, Francis Daly () wrote:

Hi there,

> https://netsoluciones.com
>
> This is the site, for example when I want to load the site in English
> it doesn't do it, it doesn't load the images and css either.

When I try loading that site now, I see lots of requests to things that
end in ".css" that get a HTTP 404 response; I do not see any images.

The first one is for
https://netsoluciones.com/assets/helpers/animate.css.

Based on the config you provided, that should be handled by the "front"
nginx by doing a proxy_pass to the "back" nginx; and the "back" nginx
should provide the content of the file
/var/www/sites/netsoluciones.com/htdocs/assets/helpers/animate.css.

The end result is a 404 File Not Found.

Does that file exist on the back-end nginx server? What do those nginx
logs say for this request? Did the request get to it at all, or did the
request stop at the front-end nginx server? What do *those* nginx logs
say for this request?

If you make a test request like

    curl -i https://netsoluciones.com/assets/helpers/animate.css

do you see the response that you expect? (Which should probably be HTTP
200 along with the content of the expected file.)

Slightly strangely: when I do that using curl, I get a HTTP 200 response
but with html not css; where my browser gets a HTTP 404 response. Maybe
they are talking to different servers, or maybe there is some config
that handles the request differently based on something other than
the url.

> I hope that by looking at the site you have a better idea.

Not really, no, sorry. I do not know how things are intended to look,
so I cannot tell which parts are not that way.

Cheers,
f
Re: situation with friendly urls
On Sat, Apr 08, 2023 at 11:29:25PM -0400, Rick Gutierrez wrote:

Hi there,

> Hi here again, I have tried different configurations but I cannot get
> the project website to load correctly.

When you say that it does not load correctly, can you show one specific
request that does not get the response that you want it to get? That
should make it easier to identify where things are going wrong.

For what it is worth: the debug log that you show does not appear to
come from a system that is using the configuration that you show. So it
is possible that the configuration that you are changing is not the one
that the running nginx is actively using. (Or maybe you are only showing
a part of the configuration that is not used in this request?)

The debug log does not show the locations /assets/ or /css/ or the like;
it mainly shows locations related to the third-party pagespeed module.

From what you describe, the browser should make a request to the "front"
nginx server, which should use its proxy_pass config to make a request
to the "backend" nginx server, which should then do whatever it is
configured to do. It is not clear to me what request is being made to
the "front" server that is not being handled as you want it to be.

> location /assets/ {
>     alias /var/www/sites/netsoluciones.com/htdocs/assets/;
> }
> https://pastebin.com/JMP3n7iB

That seems to show a request for
/assets/images/empresa/x26910210_152867885365030_7535289409698400565_o.png.pagespeed.ic.B57rrxzkqD.webp
that is handled in the regex location
~ ".*\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+", and not in the
prefix location /assets/.

> any idea , suggestion?

More information. It looks like you want the "/assets/" request to be
handled by serving a file from the filesystem; but that seems unrelated
to php, friendly urls, and two languages.
So if you can describe how you want one specific request to be handled,
and can show how it actually is handled, maybe the first place where
those two things differ can be identified.

Cheers,
f
Re: failure to limit access to a secure area with self-signed client SSL cert fingerprint match
On Wed, Mar 22, 2023 at 08:48:50AM -0400, PGNet Dev wrote:

Hi there,

> > Do you have the certificate that has that value as the Subject? What
> > is that certificate's Issuer? And repeat until you get to the root
> > certificate.
> >
> > And which of the ssl*certificate files named in your config holds those
> > certificates?
>
> i verified all my certs/chains. all good.

You verified things in your way, and saw they were good. The nginx logs
you provided indicated that nginx verified things in its way, and saw
they were not good.

It seems like you have a system that works for you now, and that is
good.

If you want to keep testing for another system, then based on what you
reported, and what you provided here, my guess is that your client
certificate does verify against whatever is in myCA.CHAIN.crt.pem, and
does not verify against whatever is in intermediate_ca.ec.crt.pem.

So I suspect that if you put the contents of those two files into a
single file, and then refer to that either as ssl_client_certificate or
as ssl_trusted_certificate, and do not use the other directive at all,
then things might work more like you want.

Good luck with it,
f
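[Editor's note: a minimal sketch of that suggestion, using the two file names from the thread; the combined-file name and path are invented for illustration.]

```nginx
# Shell step, outside nginx -- concatenate the two CA files:
#   cat myCA.CHAIN.crt.pem intermediate_ca.ec.crt.pem > combined_ca.crt.pem

server {
    listen 443 ssl;
    # ... ssl_certificate / ssl_certificate_key as before ...

    ssl_verify_client on;
    # point one directive at the combined file:
    ssl_client_certificate /etc/nginx/combined_ca.crt.pem;
    # ...and do not also set ssl_trusted_certificate
}
```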
Re: failure to limit access to a secure area with self-signed client SSL cert fingerprint match
On Tue, Mar 21, 2023 at 07:02:23PM -0400, PGNet Dev wrote:

> > What does the error_log say about this request and response?

> 2023/03/21 18:52:14 [info] 4955#4955: *7 client SSL certificate verify
> error: certificate status request failed while reading client request
> headers, client: 2401::...::1, server: example.com, request: "GET /
> HTTP/2.0", host: "example.com"

That'll be why nginx blocks the access, at least -- the client cert is
not verified as good.

You have indicated that the client cert has:

    Issuer: C = US, ST = NY, O = example.com, OU = example.com_CA,
    CN = example.com_CA_INT, emailAddress = s...@example.com

Do you have the certificate that has that value as the Subject? What is
that certificate's Issuer? And repeat until you get to the root
certificate.

And which of the ssl*certificate files named in your config holds those
certificates?

f
Re: failure to limit access to a secure area with self-signed client SSL cert fingerprint match
On Mon, Mar 20, 2023 at 01:51:47PM -0400, PGNet Dev wrote:

Hi there,

> now, on access to EITHER of
>
> https://otherexample.com
> https://otherexample.com/sec/test
>
> in browser i get
>
> 400 Bad Request
> The SSL certificate error
> nginx

What does the error_log say about this request and response?

It looks like some part of your nginx/tls setup fails to verify the
client certificate; maybe the debug or info log will hint at why.

Good luck with it,
f
Re: Connecting a reverse proxy to an http proxy service
On Sat, Feb 25, 2023 at 02:10:46PM -0500, Saint Michael wrote:

Hi there,

> that there exist private HTTP proxy services, like webshare.io
> the question is how I use them from nginx, and how do I add a pool of HTTP
> proxies to the configuration?

Stock nginx does not use the correct protocol to be able to talk to
upstream http proxy servers. I'm not aware of a third-party module that
lets it work.

Cheers,
f
Re: FAQ Suggestions --- mapping to file, not folder?
On Wed, Feb 22, 2023 at 09:44:42AM -0800, Ivo Welch wrote:

Hi there,

> I think my fundamental misunderstanding was that a `location` block in
> the nginx configuration always maps to a directory (folder) in the
> file system.

Yes, that's a misunderstanding. The details at
http://nginx.org/r/location might help clarify; but they may be too
terse for this.

> The root just identifies the default file.

That's a misunderstanding too. http://nginx.org/r/root for details.

> (this also
> means that the browser can then always look around for other files in
> this directory, though they may be kept unreadable for security.)

I'm not sure how that relates to the previous lines. Yes, the browser
can request any url. The server can choose how to respond to each
request.

> I was trying to define one specific URL to map to one specific file in
> the file system. Is this possible? That is, is there a way to map one
> specific URL to one specific file?

    location = /this-url {
        alias /var/www/that-file.txt;
    }

Cheers,
f
Re: Private location does not work
On Sun, Feb 19, 2023 at 09:33:46AM -0500, Saint Michael wrote:

Hi there,

> it does not work:
> 404 Not Found

It appears that you are not asking "how do I ensure that a location{}
can only be used for internal redirects/requests".

> in the public location, /carrier_00163e1bb23c, I have
>
> Your browser does not support iframes
>
> so how do I block the public from looking at my HTML and executing
> directly /asr?

You don't.

> Is this a bug?

It's a misunderstanding on your part of how the requests from the
browser to the server work.

Right now, your question is "how do I block people from accessing a URL,
while also allowing them to access the URL". And the answer is "you
can't, reliably".

The thing that you want to achieve can't be achieved using the plan that
you are currently following.

In the tradition of "the XY problem": if you will describe the thing
that you want to achieve, instead of just a part of the current thing
that you are doing to attempt to achieve it, then it may be that someone
can suggest a way to achieve it.

I do see a later mail that has some more details; but on first glance it
seems to be describing your current solution rather than the problem.

Cheers,
f
Re: FAQ Suggestions
On Sun, Feb 19, 2023 at 05:49:48PM -0800, Ivo Welch wrote:

Hi there,

> thank you, F. I created a completely new ubuntu VM, with a completely
> vanilla configuration and only this one extra location statement at
> http://164.67.176.22/ , describing the nginx configuration and
> referencing its /wth, and it's not working :-( .

For the convenience of future searchers, it would be better to include
the content at that url in the mail directly.

In this particular case, I suspect that the key line is

> try /wth, which nginx should resolve to
> /var/www/fcgi-bin/wth-root.html. However, this causes a 404 error.

When you make the request to /wth and get the 404 response, what is
written in the nginx error log? That will tell you what nginx thought
that nginx was doing; if that does not match what you thought that nginx
should be doing, that might point at the problem.

I suspect that the issue is a misunderstanding of what "root" does --
http://nginx.org/r/root. That content also includes a link to "alias",
which might be what you want, depending on what you want to have happen.

Good luck with it,
f
Re: FAQ Suggestions
On Sat, Feb 18, 2023 at 05:27:45PM -0800, Ivo Welch wrote:

Hi there,

> 1. is this mailing list the correct place to suggest additions to the FAQ?

It's as good a place as any, yes.

> 2. why does
>
> ```
> location /wth {
>    root /var/www/fcgi-bin/;
>    index wth-root.html;
> }
> ```
>
> not resolve '/wth' (but incidentally does resolve '/wth-root.html',
> though not '/wth-root').

What test makes you believe that "location /wth" does not resolve the
request "/wth", in your config?

> I have been scratching my head about this for the longest time.

What other location{}s are in this config, which you might have told
nginx to use instead of this one?

Can you show one example config that shows the problem? For example,
if I use:

```
server {
    listen 10080;
    root /tmp/r;
    location /wth {
        root /tmp/w;
        index w.html;
    }
}
```

then "curl http://localhost:10080/wth" redirects me to
http://localhost:10080/wth/; and "curl http://localhost:10080/wth/"
gets me the content of /tmp/w/wth/w.html.

Do you see or expect something different?

Thanks,
f
Re: Private location does not work
On Sun, Feb 19, 2023 at 01:52:12AM -0500, Saint Michael wrote:

Hi there,

> it fails with forbidden. But I am using only from another location inside
> the same server.
>
> How do I protect internal service locations and at the same time use them?

If you are asking "how do I ensure that a location{} can only be used
for internal redirects/requests", then you want
http://nginx.org/r/internal

Cheers,
f
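[Editor's note: a sketch of that directive in use; the location name and the backend address are invented placeholders, not from the thread.]

```nginx
location /internal-service/ {
    internal;   # direct external requests get a 404; only internal
                # redirects (error_page, try_files, rewrite, or an
                # X-Accel-Redirect response header) can reach here
    proxy_pass http://127.0.0.1:8080;
}
```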
Re: Question about proxy
On Mon, Jan 30, 2023 at 10:39:52PM -0500, Saint Michael wrote:

Hi there,

> Can you please elaborate on this:
> "You probably want subs_filter_types to include text/html, and you probably
> want "r" on the subs_filter patterns that are regular expressions rather
> than fixed strings"
> one example will suffice.

https://github.com/yaoweibin/ngx_http_substitutions_filter_module
includes:

"""
Example

location / {
    subs_filter_types text/html text/css text/xml;
    subs_filter st(\d*).example.com $1.example.com ir;
    subs_filter a.example.com s.example.com;
    subs_filter http://$host https://$host;
}
"""

along with explanations of each directive. (If that's *not* the module
that you are using, then the documentation for your module should show
something similar.)

Although I do see that some later text suggests that text/html content
is always searched, so maybe being explicit about that in
subs_filter_types is not necessary.

Cheers,
f
Re: Question about proxy
On Sun, Jan 29, 2023 at 03:17:15PM -0500, Saint Michael wrote:

Hi there,

> What causes each case, i.e., what do I need to do so always the
> https://domain.com is NOT the original domain being proxied, but my
> own domain (https://disney.ibm.com).

You seem to be using the module at
https://github.com/yaoweibin/ngx_http_substitutions_filter_module.

You probably want subs_filter_types to include text/html, and you
probably want "r" on the subs_filter patterns that are regular
expressions rather than fixed strings.

Generally, you proxy_pass to a server you control, so it may be easier
to adjust the upstream so that subs_filter is not needed.

But basically: you want any string in the response that the browser will
interpret as a url, to be on your server not on the upstream one. So in
this case, you can test the output of things like "curl -i
https://disney.ibm.com/something", and see that it does not contain any
unexpected mention of perplexity.ai.

> subs_filter_types text/css text/javascript application/javascript;
> subs_filter "https://cdn*.perplexity.ai/(.*)" "https://disney.ibm.com/cdn*/$1" gi;
> subs_filter "https://perplexity.ai/(.*)" "https://disney.ibm.com/$1" gi;
> subs_filter "https://(.*).perplexity.ai/(.*)" "https://disney.ibm.com/$1/$2" gi;
> subs_filter "https://www.perplexity.ai" "https://disney.ibm.com" gi;
> subs_filter "https://perplexity.ai" "https://disney.ibm.com" gi;
> subs_filter "perplexity.ai" "disney.ibm.com" gi;

If you do see an unexpected mention, you can try to see why it is there
-- especially the first subs_filter above, I'm not certain what it is
trying to do; and the second one probably does not need the regex parts
at all -- the fifth and sixth ones probably both do the same thing as
it. The third and fourth seem to have different ideas of how
"https://www.perplexity.ai/something" should be substituted; maybe you
have a test case which shows why both are needed.
Good luck with it,
f
Re: module geoip2 with map directive
On Sat, Jan 21, 2023 at 04:34:26PM -0600, Rick Gutierrez wrote:

Hi there,

> I'm using the geoip2 module and when I add the maps directive and make
> an include to specify the file it doesn't work.

I'm pretty sure that this "include" line works, but...

> part of my nginx.conf
>
> map $geoip2_data_country_code $allowed_country {
>    default yes;
>    include /etc/nginx/conf.d/geo_country.conf;
> }

...I suspect that you have another line somewhere else like "include
/etc/nginx/conf.d/*.conf;", and this file is also included at that point
in the config, and its content is not valid there.

> nginx: [emerg] unknown directive "NI" in /etc/nginx/conf.d/geo_country.conf:1
> nginx: configuration file /etc/nginx/nginx.conf test failed
>
> Any ideas, how to do this?

Rename this file (and change the include line) to something like
geo_country.map, so that the name does not match the other include
directive pattern in your config.

f
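[Editor's note: for illustration, the renamed arrangement might look like the sketch below. The "NI" entry comes from the error message in the thread; the second country code and the "no" values are examples only.]

```nginx
# In nginx.conf -- the renamed file no longer matches an
# "include /etc/nginx/conf.d/*.conf;" pattern:
map $geoip2_data_country_code $allowed_country {
    default yes;
    include /etc/nginx/conf.d/geo_country.map;
}

# Contents of /etc/nginx/conf.d/geo_country.map -- bare map entries,
# which are only valid inside a map{} block:
#   NI no;
#   CU no;
```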
Re: nginx-1.23.3 on Win Server wth HTTPS
On Wed, Jan 18, 2023 at 08:51:18AM +, Kappes, Michael wrote:

Hi there,

> After unpacking the ZIP file I have a "nginx.conf" file which I edit; from
> line 98, the HTTPS server block starts there.

> C:\nginx\nginx-1.23.3>nginx -s reload
> nginx: [emerg] unknown directive "HTTPS" in
> C:\nginx\nginx-1.23.3/conf/nginx.conf:98

In that file,

    # HTTPS server

is a comment that should stay a comment. You should uncomment and adjust
the relevant lines that go from

    # server {

to the matching

    # }

> What do I have to do so that NGINX also accepts HTTPS connections?

Have a "listen" with "ssl", and the correct certificate information.

http://nginx.org/en/docs/http/configuring_https_servers.html

Good luck with it,
f
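[Editor's note: uncommented and adjusted, that stock "HTTPS server" block might end up looking roughly like this sketch; the certificate file names are placeholders for files you provide yourself.]

```nginx
# HTTPS server
server {
    listen       443 ssl;
    server_name  localhost;

    # certificate and key -- relative paths are resolved against
    # the conf/ directory:
    ssl_certificate      cert.pem;
    ssl_certificate_key  cert.key;

    location / {
        root   html;
        index  index.html index.htm;
    }
}
```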
Re: Redirect www to not-www
On Tue, Jan 10, 2023 at 06:45:15PM -0500, Paul wrote:

Hi there,

> BUT... for that one step further and have all server (nginx) responses go
> back to the end-client as:
> https://a.example.com
> and NOT as:
> https://www.a.example.com
>      ^^^
> I have written an /etc/nginx/conf.d/redirect.conf as:
> server {
>     server_name www.a.example.com;
>     return 301 $scheme://a.example.com$request_uri;
> }
>
> which seems to work, but I would appreciate your opinion - is this the best,
> most elegant, secure way? Does it need "permanent" somewhere?

It does not need "permanent" -- that is a signal to "rewrite" to use a
http 301 not http 302 response; and you are using a http 301 response
directly.

(See, for example, http://http.cat/301 or http://http.cat/302 for the
meaning of the numbers. Warning: contains cats.)

> I've never used "scheme" before today, but we've got an external advisory
> audit going on, and I'm trying to keep them happy.

$scheme is http or https depending on the incoming ssl status. That
4-line server{} block does not do ssl, so $scheme is always http there.
http://nginx.org/r/$scheme

Either way, this would redirect from http://www.a. to http://a., and
then the next request would redirect from http://a. to https://a.. I
suggest that you are better off just redirecting to https the first
time.

You will want a server{} with something like "listen 443 ssl;" and
"server_name www.a.example.com;" and the appropriate certificate and
key; and then also redirect to https://a. in that block.

So for the four combinations of http/https and www.a/a, you will
probably want three or four server{} blocks -- you could either put
http www.a and http a in one block; or you could put https www.a and
http www.a in one block, and then one block for the other; plus one for
the https a that is the "real" config. The other ones will be small
enough configs that "just" return 301 to https://a. Which should be
simple enough to audit for correctness.
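[Editor's note: one possible layout of those server{} blocks, as a sketch; the certificate paths are placeholders, and the certificate used for the www.a block must be valid for www.a.example.com.]

```nginx
# http://a and http://www.a -> https://a
server {
    listen 80;
    server_name a.example.com www.a.example.com;
    return 301 https://a.example.com$request_uri;
}

# https://www.a -> https://a
server {
    listen 443 ssl;
    server_name www.a.example.com;
    ssl_certificate     /etc/ssl/a.example.com.fullchain.pem;
    ssl_certificate_key /etc/ssl/a.example.com.key.pem;
    return 301 https://a.example.com$request_uri;
}

# https://a -- the "real" config
server {
    listen 443 ssl;
    server_name a.example.com;
    ssl_certificate     /etc/ssl/a.example.com.fullchain.pem;
    ssl_certificate_key /etc/ssl/a.example.com.key.pem;
    # ... site configuration here ...
}
```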
Good luck with it,
f
Re: Redirect www to not-www
On Tue, Jan 10, 2023 at 12:03:06PM -0500, Paul wrote:

Hi there,

> Using nginx (1.18.0 on Ubuntu 20.04.5) as proxy to back-end, I have three
> sites (a|b|c.example.com) in a fast, reliable production environment. I have
> DNS records set up for www.a|b|c.example.com. I have CertBot set up for
> only a|b|c.example.com.
>
> To avoid "doubling" the number of sites-available and security scripts, and
> to avoid the unnecessary "www." I would like to add something like:
>
> server {
>     server_name www.a.example.com;
>     return 301 $scheme://a.example.com$request_uri;
> }

> Maybe I'm missing something fundamental?

Yes, you are missing something fundamental :-(

There are 4 families of requests that the client can make:

* http://www.a.example.com
* http://a.example.com
* https://www.a.example.com
* https://a.example.com

It looks like you want each of the first three to be redirected to the
fourth?

It is straightforward to redirect the first two to the fourth --
something like

    server {
        server_name a.example.com www.a.example.com;
        return 301 https://a.example.com$request_uri;
    }

should cover both. (Optionally with "listen 80;", it replaces your
similar no-ssl server{} block.)

But for the third family, the client will first try to validate the
certificate that it is given when it connects to www.a.example.com,
before it will make the http(s) request that you can reply to with a
redirect. And since you do not (appear to) have a certificate for
www.a.example.com, that validation will fail and there is nothing you
can do about it. (Other than get a certificate.)

Cheers,
f
Re: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}}
On Sat, Jan 07, 2023 at 08:28:17AM +0530, Kaushal Shriyan wrote:

Hi there,

> Thanks Francis for the detailed explanation. Is there a way to configure
> Nginx for the below conditions?

I think, indirectly, yes.

If you want your nginx to react to some external status (e.g., the
up-or-down state of a MySQL DB), then you need to indicate how your
nginx will become aware of that status.

(For what it's worth: if there is nothing new in your system since
November / December, the suggestion remains "change your php to do
this".)

f
Re: Upstream service php-fpm is up and running but reports {"errors": {"status_code": 502,"status": "php-fpm server is down"}}
On Thu, Jan 05, 2023 at 10:15:34PM +0530, Kaushal Shriyan wrote:

Hi there,

> When I hit http://mydomain.com/apis for conditions when MySQL DB is down, I
> get the below output and it works as expected.
>
> {"errors": "MySQL DB Server is down"}
>
> When I hit http://mydomain.com/apis for conditions when MySQL DB is up and
> running fine, I get the below output in spite of MySQL DB server being
> fine.
>
> {"errors": "MySQL DB Server is down"}

Your config is

    location /apis {
        return 500 '{"errors": "MySQL DB Server is down"}';
    }

Whenever you make a request that is handled in that location{}, your
nginx will return that response.

It looks like your nginx is doing what it was told to do. No part of
your config indicates that nginx knows (or cares) whether MySQL DB is up
or down. Does something outside of nginx know that?

f
Re: Is there a conflict between Debian Bullseye and nginx?
On Fri, Dec 16, 2022 at 04:27:15PM +0800, Mike Lieberman wrote:

Hi there,

You have configured your nginx to listen on port 80 on all IP addresses.
You have configured your apache to listen on port 80 on all IP
addresses.

They can't both do that at the same time. The first one works, the other
one fails.

If you want both to be running, you must configure them to listen on
different IP:ports from each other.

This is normal.

Cheers,
f
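[Editor's note: one way to separate the two listeners, as a sketch; the port number and address are arbitrary examples, not from the thread.]

```nginx
# nginx keeps port 80 on all addresses:
server {
    listen 80;
    # ... site configuration ...
}

# while apache is moved to a different ip:port, e.g. in its own config:
#   Listen 127.0.0.1:8080
```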
Re: nginx returns html instead of json response
On Tue, Nov 29, 2022 at 09:58:20PM +0530, Kaushal Shriyan wrote:

Hi there,

> I have a follow up question related to the below error which appears in
> html instead of JSON format when I hit rest api calls
> http://mydomain.com/apis in case of when the MySQL Database service is down
> as part of testing the end to end flow. The flow is as follows:
>
> User -> Nginx webserver -> PHP-FPM upstream server -> MySQL Database.
>
> *The Website Encountered an Unexpected Error. Please try again Later*
>
> Is there a way to display the above string in JSON format?

The easiest is probably to see which part of the chain creates that
error message (and I guess that it is probably some php); and to change
it to return the error content that you want it to return. And then let
everything else continue to pass the error content through without
change.

Cheers,
f
-- 
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: How can I redirect a request via Another External proxy
On Fri, Nov 25, 2022 at 01:41:37PM +0530, Aakarshit Agarwal wrote:

Hi there,

> Inside a private network, I want to redirect a request via another External
> proxy.

"Stock" nginx does not talk to a proxy server. If you want to talk
to/through a proxy server, you will want something other than nginx.
(I'm not aware of any third-party modules that add the facility.)

Cheers,
f
Re: Use nginx to return a scraped copy of another site
On Thu, Nov 10, 2022 at 08:50:14PM +0800, Tony Mobily wrote:

Hi there,

> I want to run b.com as a duplicate of a.com, with nginx acting as a proxy.
> The contents would be identical; however, I would apply some minor
> modifications to the HTML.

> Is this possible with nginx? Is there an example configuration I can use as
> a starting point?

It sounds mostly straightforward. http://nginx.org/r/proxy_pass is
probably the basic directive to use.

If there are specific things that do not respond the way you want,
perhaps something can be changed somewhere to deal with those.

Good luck with it,
f
Re: nginx returns html instead of json response
On Wed, Nov 23, 2022 at 11:27:35PM +0530, Kaushal Shriyan wrote:

> On Wed, Nov 23, 2022 at 11:20 PM Francis Daly wrote:

Hi there,

> I am not sure about this line error_page 555 /dummyfile; what does 555
> code mean and what will be the contents of dummyfile?
>
> location ^~ /apis/ {
>     fastcgi_intercept_errors off;
>     error_page 555 /dummyfile;
>     fastcgi_pass 127.0.0.1:9000;
>     include fastcgi.conf;
>     fastcgi_param SCRIPT_FILENAME /var/www/html/gsmaidp/web/index.php;
> }

I thought I had explained it in the previous mails?

555 is an error code that you do not care about (because you do not
expect to see it). You can remove either the "error_page" or the
"fastcgi_intercept_errors" line (or leave them both in).

What happened when you tried it?

f
Re: nginx returns html instead of json response
On Tue, Nov 22, 2022 at 07:52:41PM +0530, Kaushal Shriyan wrote:

Hi there,

> map $sent_http_content_type $enableerror {
>     default on;
>     application/json off;
> }

I believe that from a timing point of view, that variable is not going
to have a useful value when you want to use it.

> I have attached the nginxtest.conf file for your reference. It is not
> working for me. Am I missing anything? Please guide me.

The test conf file does not appear to include the seven lines that I
suggested you add.

And "is not working" is an incomplete problem report.

Cheers,
f
Re: nginx returns html instead of json response
On Sat, Nov 19, 2022 at 09:09:34PM +0530, Kaushal Shriyan wrote:

Hi there,

> On 500 errors also we are handling at Drupal and sending JSON responses to
> specify the details about errors.

I think that for these api requests, you want to do either one of:

* set fastcgi_intercept_errors off
* unset error_page for 500

In the below config, I show both. You can probably comment out either
one of those two lines, without changing things.

Depending on the error indication that you get, you might need to swap
the order of the "include" and the "fastcgi_param" lines.

So, starting with your original nginx config, add the following stanza
within the appropriate server{} block, and outside of any other
location{} blocks. The position of this within the server{} should
not matter.

    location ^~ /apis/ {
        fastcgi_intercept_errors off;
        error_page 555 /dummyfile;
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi.conf;
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    }

Then make some test requests and report either that it works; or that
it does not work because when you make this specific request, you get
this specific response, but you want that other response instead.

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: nginx returns html instead of json response
On Fri, Nov 18, 2022 at 11:10:20PM +0530, Kaushal Shriyan wrote:
> On Fri, Nov 18, 2022 at 9:37 PM Francis Daly wrote:

Hi there,

> Thanks Francis for your email response. Let me explain with two different
> scenarios :-

Yes, thank you.

I believe that what you want is still all clear to me, apart from the
actual specific url patterns that you are using. I suspect that I am
being unclear in the question that I am asking.

Do you actually make an api request for exactly
https://mydrupalsite.com/apis/unique_id? If so, what response do you get?

And if you do not make an api request for exactly that url, can you show
any one url that you do use when making an api request?

What you will eventually want to add to your nginx config is something like

    location ^~ /apis/ {
        error_page 555 /dummyfile;
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi.conf;
        fastcgi_param SCRIPT_FILENAME something;
    }

but I am unable to guess what the "something" should be. Maybe it should
always be "$document_root/index.php". And maybe different or other config
is needed as well.

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: nginx returns html instead of json response
On Fri, Nov 18, 2022 at 07:07:41PM +0530, Kaushal Shriyan wrote: > On Thu, Nov 17, 2022 at 10:57 PM Francis Daly wrote: Hi there, > Please let me know if you need any additional information and I look > forward to hearing from you. Thanks in advance. When the request is for "/apis/unique_id", what file on the filesystem do you want nginx to ask drupal to use? Thanks, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Running ssl on custom port and its not working
On Thu, Nov 17, 2022 at 12:58:31PM -0500, blason wrote: Hi there, > Nothing interesting as such however below is the curl output from nginx > server How sure are you that this response came from your nginx server? > curl -I https://xxx..xxx:8081/neutrino-sso-web The nginx config you showed included some add_header directives. The matching http response headers are not in what you show here. > HTTP/1.1 302 Found > Date: Thu, 17 Nov 2022 17:57:10 GMT > Server: JBoss-EAP/7 > Strict-Transport-Security: max-age=63072000; includeSubDomains; preload Is there any chance that you are actually talking to a different web server entirely? Do your nginx server logs show this request being handled? (Or have I misunderstood something about this post?) Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: nginx returns html instead of json response
On Mon, Nov 14, 2022 at 08:24:15PM +0530, Kaushal Shriyan wrote: > > On Fri, Nov 11, 2022 at 2:38 PM Francis Daly wrote: Hi there, > >> What one specific request do you want to make? (Maybe > >> http://mydomain.com/apis, maybe http://mydomain.com/api/v1/*, maybe > >> http://mydomain.com/api/v1/example, maybe something else?) > >> > >> For that one specific request, what do you want nginx to do with > >> it? (Maybe make a http request to the Drupal system? Or a fastcgi request > >> to the Drupal system? Or handle it internally withint nginx?) > >> > >> For the response from that request, what do you want nginx to do with > >> it? (Send it to the user as-is? Mangle / modify it somehow? If so -- > >> how? Change the http response code or headers? Change the response body?) > So I think, if somehow we can pass the information to Nginx to not take any > action if 500 error occurred while hitting the > https://mydrupalsite.com/apis or https://mydrupalsite.com/apis/uinque_id > URLs then our job will done, because in that case whatever Drupal is > sending we will be able to see that if 500 error occurred. Correct. You will want a location{} to handle the "api" requests; and in that location, do not have the inherited "error_page 500" directive take effect. I think that you cannot "undo" an error_page directive from a previous level, but you can set a "dummy" error_page directive which will have the effect of overriding any values set at a previous level. So -- pick a http response code that you do not care about (e.g. 555) and set error_page for that in this location. 
From your config, it looks like there are three forms of "non-api"
requests that matter:

* /one/file.html - which will return the local file
  /var/www/html/gsmamarketplace/web/one/file.html
* /two/file.php - which will ask drupal to use the local file
  /var/www/html/gsmamarketplace/web/two/file.php
* /three/not-a-file - which will ask drupal to use the local file
  /var/www/html/gsmamarketplace/web/index.php

What forms of "api" request do you expect to receive? And what,
specifically, do you want nginx to do with each form?

That is -- do you expect "/apis/one/file.html", or "/apis/two/file.php",
or "/apis/three/not-a-file", or some of each, or something else?

When the request is for "/apis/unique_id", what file on the filesystem
do you want nginx to serve; or what file on the filesystem do you want
nginx to ask drupal to use?

Cheers,

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: CGIT + NGINX : Not able to push commit
On Fri, Nov 18, 2022 at 12:04:42AM +0800, Robbi wrote: Hi there, > Hi, I plan to setup my own git web using cgit. For now I able to clone but I > not able to push changes. What do the nginx logs say about this request? Specifically, these headers: > 23:24:26.253252 http.c:662 <= Recv header: cf-cache-status: > DYNAMIC > 23:24:26.253252 http.c:662 <= Recv header: report-to: > {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=fD6cyeHU7oKzC1IXV6hfWtcCXjRGLX7lNK39sEBhlpUSgG6%2F4V8RjFxV%2F20PIQPuFFJeb03csCfZb87f9Q7b7amvGWLhncuAPTZEZ9GraBoHdhs1MObZEz5FdlvADngnu8w%3D"}],"group":"cf-nel","max_age":604800} > 23:24:26.253252 http.c:662 <= Recv header: server: cloudflare are not obviously listed in your nginx config; so it might be that you are talking to something other than the nginx server you think you are talking to; and that other thing might be returning the 403. If you can test talking to nginx directly, then maybe something will show whether the 403 is coming from nginx, or is coming from the fastcgi server that nginx, in turn, is talking to. Good luck with it, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Rewrite rules not working
On Fri, Nov 11, 2022 at 08:29:44AM -0500, blason wrote: Hi there, > By the way which one would you confirm is preferable method rewrite or > return? It depends, based on what you want to do. For what I think you want, in this case, "return" is simpler. f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: nginx returns html instead of json response
On Wed, Nov 09, 2022 at 11:45:20PM +0530, Kaushal Shriyan wrote:

Hi there,

> Checking in again if someone can help me with my earlier post to this
> mailing list?

The question in the post was, and is, a bit unclear to me. You seem to
be showing multiple different requests, so I'm not sure exactly what you
are asking.

Maybe it is also unclear to others? In that case, it may be useful if
you can simplify your example question?

> I have a follow up question, when the user invokes ->
> http://mydomain.com/apis <http://mydomain.com/api/v1/*> -> Nginx Webserver
> -> Drupal 9 Core CMS -> PHP-FPM backend server.
>
> Nginx should present the below info on 500 ISE error conditions for /apis
> and /apis/* The below message sends back the response to Nginx web server
> to render it to the client browser instead of the /error-500.html file
> contents.
>
>     "type" => "/problems/API-saving-error",
>     "title" => $this->t("Issue occured while saving the API."),
>     "detail" => $this->t("There are some wrong inputs passed to DB
>         which caused this issue."),

What one specific request do you want to make? (Maybe
http://mydomain.com/apis, maybe http://mydomain.com/api/v1/*, maybe
http://mydomain.com/api/v1/example, maybe something else?)

For that one specific request, what do you want nginx to do with it?
(Maybe make a http request to the Drupal system? Or a fastcgi request
to the Drupal system? Or handle it internally within nginx?)

For the response from that request, what do you want nginx to do with
it? (Send it to the user as-is? Mangle / modify it somehow? If so --
how? Change the http response code or headers? Change the response body?)

I suspect that if you can describe what exactly you want nginx to do,
someone will have a better chance of sharing how to configure nginx to
do that thing.
> I have the below settings in nginx conf file
>
>     error_page 500 /error-500.html;
>     location = /error-500.html {
>         root /var/www/html/gsmamarketplace/web/servererrorpages/error-pages-500-503/html;
>     }

For example: the above stanza says "if nginx is going to send a http 500
response, it should send the contents of the file
/var/www/html/gsmamarketplace/web/servererrorpages/error-pages-500-503/html/error-500.html
as the response body", along with the http 500 response header.

If that is what you want nginx to do, the configuration is correct. If
it is not, it is not.

> I am trying to set the below location and try_files directive block in
> nginx.conf file
>
>     location /apis {
>         try_files $uri $uri/ /path/to/api/handler; (This part is not
>         clear with me)
>     }

And I can see what this nginx config will do; but I do not know what
you want it to do.

If you can give the full details for one example request, then maybe it
will become clear to me. (And maybe others will be able to help too,
if they are similarly confused.)

Thanks,

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: Rewrite rules not working
On Thu, Nov 10, 2022 at 01:07:23PM -0500, blason wrote:

Hi there,

> I have a website http://web1.example.local/web1
> Instead I need a rewrite so that if user enters http://web1.example.local it
> will be diverted to http://web1.example.local/web1

If you want it to happen, without needing it to be a rewrite, you can
do a redirect with

    location = / {
        return 301 /web1;
    }

(although I suspect that you will want a trailing slash there, "/web1/;".
And variants with a different http response code can be used. And you
can use the full "http://web1.example.local/web1/"; if you prefer.)

Cheers,

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: proxy_pass works on main page but not other pages
On Sun, Oct 30, 2022 at 11:56:39AM -0600, Brian Carey wrote: Hi there, > Thinking it through though I think my solution is bad since it implies a > dependency between the urls defined in the program and the location used in > nginx, ie. they must match and the program cannot be proxied at an arbitrary > location. if you have a "back-end" at http://one.internal.example.com/, that is reverse-proxied behind the public-facing http://example.com/one/ using the "normal" nginx config fragment location /one/ { proxy_pass http://one.internal.example.com/; } then the client browser making the requests does not know that there is a back-end service. When the client requests http://example.com/one/two.html, nginx will ask for http://one.internal.example.com/two.html, and will send the response http headers and body content back to the client. If that response contains links or references of the form "three.jpg" or "./three.jpg", then the client will make the next request for http://example.com/one/three.jpg, which will get to nginx, which will know to proxy_pass to the same back-end service, and all will probably work. If the response contains links of the form "/three.jpg", then the client will make the next request for http://example.com/three.jpg, which will get to nginx but will probably not get a useful response, because nginx knows that it must not proxy_pass to the same back-end because the local part of the request does not start with /one/. The user will probably see an error or something that looks broken. If the response contains links of the form http://one.internal.example.com/three.jpg, then the client will presumably fail to resolve the hostname one.internal.example.com, and the user will probably see an error. > So hopefully there is a better solution than the one I found. I > hope I'm not asking too many questions. 
Whether or not a particular back-end can be reverse-proxied easily, or
can be reverse-proxied easily at a different local part of the url
hierarchy from where it thinks it is installed, is mostly down to the
back-end application to decide.

In general (and there are exceptions), nginx can readily rewrite the
http response headers, and cannot readily rewrite the http response body,
in order to adjust links or references to other internal resources.

If you control the back-end service, and you know that you want to
reverse-proxy it behind http://example.com/one/, you will probably find
it easier to work with, if you can install the back-end service at
http://one.internal.example.com/one/. That would make the first two
forms of links "Just Work"; and the third (full-url) form is usually
easier to recognise and replace.

> > > I am able to use proxy_pass to forward https:/biscotty.me/striker to
> > > the main page of my app. The problem is that all of the links in the
> > > app result in a page not found error from the apache server handling
> > > requests to /. So it seems like the port number information is
> > > somehow being lost in translation?

More likely, I guess, is that the links are of the second form, to
"/three.jpg" instead of the "three.jpg".

But it could also be related to what the initial request from the client
was -- "/striker" and "/striker/" are different, and I suspect you should
use the with-trailing-slash version in your config "location" line.

But if you already have a working configuration, that's good!

Cheers,

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: how can I cache upstream by mime type in nginx
On Tue, Nov 01, 2022 at 10:07:37PM +, Francis Daly wrote: > On Wed, Nov 02, 2022 at 12:29:49AM +0800, Drweb Mike wrote: Hi there, > > My front end is nginx using reverse proxy work with my backend, I was > > trying to cache images files only, but it seems doesn't work at all, the > > *$no_cache* always output default value "proxy", which should be "0" when I > > visit image files > If you want to use variables to decide whether nginx should handle a > request by looking in the cache before asking upstream, you should only > use variables that are available in the request, not ones that come > from upstream. > > (Maybe you can use part of the request uri -- starts with /images/ or > ends with .jpg or .png, for example? It depends on what requests your > clients will be making.) proxy_cache_bypass (http://nginx.org/r/proxy_cache_bypass) is "nginx should not look in the cache for this response; go straight to upstream". proxy_no_cache (http://nginx.org/r/proxy_no_cache) is "nginx should not save this response from upstream to the cache". Maybe you want to never use proxy_cache_bypass; and use proxy_no_cache to make sure that only the things that should be written to the cache, are written there? You could do proxy_no_cache based on $upstream_http_content_type; and that any other requests will look in the cache, see nothing there, and go to upstream anyway. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
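A sketch of that last suggestion (the cache zone name and upstream address are invented placeholders): a map on $upstream_http_content_type is usable with proxy_no_cache, because proxy_no_cache is evaluated after the upstream response headers have arrived:

```nginx
# In the http{} context:
proxy_cache_path /var/cache/nginx keys_zone=imgcache:10m;

map $upstream_http_content_type $no_store {
    default     1;  # do not save non-image responses
    "~^image/"  0;  # save image/* responses
}

server {
    listen 80;

    location / {
        proxy_cache imgcache;
        proxy_pass http://127.0.0.1:8080;
        # Evaluated once the response headers are back from upstream:
        # a non-empty, non-"0" value means "do not store this response".
        proxy_no_cache $no_store;
    }
}
```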
Re: how can I cache upstream by mime type in nginx
On Wed, Nov 02, 2022 at 12:29:49AM +0800, Drweb Mike wrote: Hi there, > My front end is nginx using reverse proxy work with my backend, I was > trying to cache images files only, but it seems doesn't work at all, the > *$no_cache* always output default value "proxy", which should be "0" when I > visit image files $upstream_http_content_type is the Content-Type response header from the upstream server, after nginx has sent a request to the upstream server. Before nginx has sent a request to the upstream server, the variable has no value. If you want to use variables to decide whether nginx should handle a request by looking in the cache before asking upstream, you should only use variables that are available in the request, not ones that come from upstream. (Maybe you can use part of the request uri -- starts with /images/ or ends with .jpg or .png, for example? It depends on what requests your clients will be making.) > why *$upstream_http_content_type* map doesn't works as expected Your expectation is wrong. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: NGINX 1.21.x EOL?
On Tue, Oct 18, 2022 at 02:12:59PM +, Devendra.S.Daiya--- via nginx wrote:

Hi there,

I don't speak for nginx-the-company, or for nginx-the-software. But from
knowing some of the history...

> I don't see 1.21.6 available for Download. Is it already End Of Life? nginx:
> download<https://nginx.org/en/download.html>

I don't see any 1.odd-number versions on that page, other than the
most recent.

1.odd-number is the "mainline/development" version. Generally, if you
are using it, you should be tracking updates yourself.

You can find all of the tagged versions by following the "Source Code"
links further down the page. Simplest is probably to go to the read-only
code repository and click "tags", to get to http://hg.nginx.org/nginx/tags

> I don't see any update on NGINX webpage. Could anyone please share the
> announcement link from NGINX that says 1.21.x no more supported. Or any other
> reference.
>

What would you like "supported" to mean? What it actually means is
described at https://nginx.org/en/support.html

The licence is at https://nginx.org/LICENSE

If you've got a problem with using the code, this list is as good a place
as any to ask questions and generally help out; and someone will probably
respond at some point. But realistically, problems with an older
development version are likely to be most quickly addressed by using a
current development version.

Of course, if the same problem can be shown in whatever version someone
is using, there is a better chance that they'll be able to see if a
config change can address the problem.

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: Installing two versions of PHP-FMP?
On Wed, Oct 05, 2022 at 12:08:34AM +0200, Gilles Ganault wrote: Hi there, > I only have shallow experience with nginx. > > To migrate an old php5-based application to the latest release which expects > php7, I'd like to install both versions of PHP-FPM in one nginx server. php (with php-fpm) is independent of nginx; so the way to install one or more versions of php is "whatever your operating system wants". So do that, to end up with (probably) one tcp port listener (or unix domain socket) for the php5 fastcgi server, plus one for the php7 fastcgi server. ("fpm" = "fastcgi", in this context.) > Although I read elsewhere it's a mistake to install the php package instead > of php-fpm because the former also installs Apache… this is what this > document > <https://menchomneau.medium.com/how-to-install-multi-php-server-on-ubuntu-20-04-and-nginx-ae63bc87c74b> This, and the link in the parallel reply, show how to run one nginx process, configured to run two server{} blocks (which means "two host names"); and one server{} only uses php5 and the other only uses php7. Depending on the applications involved, that might be the simplest way to deploy them. However, there is no reason not to use one server{} block, provided that you have a way of knowing which requests should go to each fastcgi server. > So, what's the recommended way to set things up so that nginx can support > both interpreters and manage two versions of a web app in their respective > directory? Within the server{}, each incoming request is handled in one location{}. Make sure that requests that should be handled by php5 are handled in a location that does "fastcgi_pass" to the php5 server; and the other requests are handled in a location that does "fastcgi_pass" to the php7 server. 
That could be something like

    location ~ ^/app5/.*\.php {
        fastcgi_pass unix:/tmp/php5.sock;
    }
    location ~ \.php {
        fastcgi_pass unix:/tmp/php7.sock;
    }

but the extra details for how each application is installed and what it
expects, will matter. (And that config fragment would need extra
supporting config, in order to be useful.)

Good luck with it,

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: Nginx does not serve avif
On Fri, Oct 07, 2022 at 02:00:44PM +0200, Martin Wolfert wrote: Hi there, > i found the issue! Good stuff! > Solution: When enabling the webp caching compatibility in WP Rocket > (WordPress plugin), the nginx rules / config could not work. Because WP > Rocket adds ".webp" as suffix to all .jpg images. So having the suffix set > to bla.jpg.webp, the Nginx location ( /location ~ \.(jpg|png)$ {/ ) for sure > could not match! So disabling the webp caching compatibilty in WP Rocket > solves the problem. Nice. My next guess would have been that the browser was requesting thing.jpg, and getting back content that was not a jpeg image, and was getting confused by that mismatch. My guess would have been wrong :-) Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Nginx does not serve avif
On Thu, Oct 06, 2022 at 02:30:08PM +0200, Martin Wolfert wrote:

Hi there,

> In "/var/www/htdocs/blog.lichttraeumer.de/wp-content/uploads/2022/05" i have
> located .jpg, .webp and .avif files:

Thanks for the details.

Both ideas seem to work for me, when testing with curl:

===
$ cat /etc/nginx/conf.d/test-avif.conf
server {
    listen 127.0.0.5:80;
    root /tmp/t3;

    set $img_suffix "";
    if ($http_accept ~* "webp") {
        set $img_suffix ".webp";
    }
    if ($http_accept ~* "avif") {
        set $img_suffix ".avif";
    }

    location ~ \.(jpg|png)$ {
        try_files $uri$img_suffix $uri $uri/ =404;
    }
}
$ mkdir /tmp/t3
$ echo one.png > /tmp/t3/one.png
$ echo one.png.avif > /tmp/t3/one.png.avif
$ curl http://127.0.0.5/one.png
one.png
$ curl -H Accept:webp http://127.0.0.5/one.png
one.png
$ curl -H Accept:avif http://127.0.0.5/one.png
one.png.avif
===
$ cat /etc/nginx/conf.d/test-avif-map.conf
map $http_accept $webp_suffix {
    "~image/webp" "$uri.webp";
}
map $http_accept $avif_suffix {
    "~image/avif" "$uri.avif";
}
server {
    listen 127.0.0.6:80;
    root /tmp/t4;

    location ~ \.(jpg|jpeg)$ {
        try_files $avif_suffix $webp_suffix $uri =404;
    }
}
$ mkdir /tmp/t4
$ echo one.jpg > /tmp/t4/one.jpg
$ echo one.jpg.webp > /tmp/t4/one.jpg.webp
$ echo one.jpg.avif > /tmp/t4/one.jpg.avif
$ echo two.jpg.webp > /tmp/t4/two.jpg.webp
$ curl http://127.0.0.6/one.jpg
one.jpg
$ curl -H Accept:image/avif http://127.0.0.6/one.jpg
one.jpg.avif
$ curl -H Accept:image/webp http://127.0.0.6/one.jpg
one.jpg.webp
$ curl -H Accept:image/other http://127.0.0.6/one.jpg
one.jpg
$ curl -H Accept:image/avif,image/webp http://127.0.0.6/one.jpg
one.jpg.avif
$ curl -H Accept:image/avif,image/webp http://127.0.0.6/two.jpg
two.jpg.webp
$
===

Do they work for you, when testing with curl? If not -- why not / what
is different between your test config and my test config?

And if so -- what is different between the curl request and the Firefox
request?
Thanks, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Nginx does not serve avif
On Wed, Sep 28, 2022 at 10:49:15AM +0200, Martin Wolfert wrote: Hi there, > i want to use new image files. That means: first serve (if available) avif, > than webp and lastly jpg images. > location ~* ^/wp-content/.*/.*/.*\.(png|jpg)$ { > add_header Vary Accept; > try_files $uri$img_ext $uri =404; > } > Unfortunately ... Nginx does not serve avif files, if available. Tested it > with the newest Chrome Versions. > > Anyone any idea where my error is located? When you make the request for /dir/thing.png, do you want to get the file /var/www/dir/thing.avif, or the file /var/www/dir/thing.png.avif? The usual questions are: What request do you make? What response do you get? What response do you want to get instead? Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Impersonation in Nginx
On Thu, Sep 22, 2022 at 11:07:36AM -0400, suryamohan05 wrote: Hi there, > Does Nginx has a feasibility to pass impersonation in of the config file. I'm afraid I don't understand what you are asking. I suspect that a translation to english chose the wrong possible meaning of the word you are using? Could you use more, or other, words to describe what you want? If you mean: when a user talks to nginx, can nginx send a different http Host: header to different upstream services, then "yes". If you mean: when a user talks to nginx, can nginx send that user's http Basic Authentication credentials, or "login" Cookie, to the upstream, then "yes". If you mean: the same, but nginx sends *another* user's credential, then "maybe" (you can hard-code the http headers to send on every request). Maybe your question is clear to someone else who can answer; but in case not, if you re-ask with an example, you might get a better answer. Thanks, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
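For the last "maybe" above -- hard-coding the http headers sent upstream -- a sketch (the location, upstream hostname, and credential are all invented placeholders; the base64 string encodes the made-up pair "serviceuser:secret"):

```nginx
location /reports/ {
    proxy_pass http://upstream.example.com;

    # Every proxied request carries this fixed identity, regardless of
    # which user made the original request.
    proxy_set_header Authorization "Basic c2VydmljZXVzZXI6c2VjcmV0";

    # A fixed Host header can be set the same way.
    proxy_set_header Host internal.example.com;
}
```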
Re: forward 443 requests to correct (?) port
On Mon, Sep 19, 2022 at 12:25:04PM -0600, Brian Carey wrote:

Hi there,

> Maybe I'm misunderstanding how this should work. Can I use non-ssl
> connections for upstream servers when the originating request is https?

From nginx's point of view: yes, not a problem.

From the upstream application's point of view: it will often want to
know what scheme://host:port/prefix/ it should use when creating links
in what it produces; and it might be configured to only "work" when the
connection from the client is https. So you might need to configure
that side of things in some way to believe that "anything from nginx"
is trustworthy; or that "anything with specific http headers" is
trustworthy, or something else that depends on this particular
application.

> I'm forwarding nginx requests to an apache server listening on 8080.
> Everything works fine if I explicitly use http but not https. My nginx site
> itself has no problem with https and all http traffic is forwarded to https.
> However when I try to go to wordpress (on apache) I get an error in my
> browser that I am forwarding plain http to https, and indeed the port I see
> in the browser is 443 not 8080. Again if I explicitly request http I'm good
> but it fails with https. Why is nginx forwarding this traffic to 443 instead
> of 8080? Or probably better how do I change this behavior?

I'm a bit unclear on what exactly you are reporting, sorry.

In general, the browser talks to nginx only; and nginx talks to upstream
only; and the browser should not necessarily be aware that it is not
talking to upstream. So if you have https://nginx reverse proxying to
http://apache:8080, the browser should never know or care about port 8080.

It's probably good to be very clear about what should be talking to what;
and about what is talking to what. And I suspect that that will need
some specific copy-paste details from you, if you are unsure.

> So I'm trying to find out how nginx makes that decision. This is the stanza
> nginx conf file.
>
>     server {
>         listen 80 default_server;
>         listen [::]:80;
>         server_name biscotty.me;
>         return 301 https://$hostname$request_uri;
>     }

Ok. Any http request to nginx (on port 80), gets nginx inviting the
browser to make a https request. (You may want $server_name or $host,
instead of $hostname; but anything that works is good.)

>     server{
>
>         listen 443 ssl http2;
...
>         location /wordpress {
>             proxy_pass http://0.0.0.0:8080;
>             proxy_buffering on;
>             proxy_buffers 12 12k;
>             proxy_redirect off;
>
>             proxy_set_header X-Real-IP $remote_addr;
>             proxy_set_header X-Forwarded-For $remote_addr;
>             proxy_set_header Host $host:8080;
>         }
>
>     }

A https request from the browser to /wordpress/x will lead to a http
request from nginx to /wordpress/x. I'm not sure that 0.0.0.0:8080
always works as an IP:port to connect to (I'd probably use a specific
IP there); but if it works for you, it is good.

What happens after that, is entirely up to wordpress on apache.

If you can show the specific request that you make and the response that
you get, perhaps using "curl -i" in order to avoid browser caching or
"friendly" response interception, then it may become clear what problem
exists and what solution to it can be found.

I suspect that you will want to omit ":8080" from the proxy_set_header.

When you show one request and its response, it may become clear whether
your current proxy_redirect setting is appropriate here.

(And I do think that, in the past, wordpress was not happy being
installed anywhere other than the root of the web service -- it did not
work well in a subdirectory. It may well be that that is no longer the
case, and things will all Just Work now.)

Cheers,

f
--
Francis Daly        fran...@daoine.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
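One common way to let the upstream application know the original scheme, as mentioned above, is to pass it in a request header (whether wordpress/apache honours X-Forwarded-Proto depends on their configuration; this is a sketch, not a tested wordpress setup):

```nginx
location /wordpress {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Tell the upstream which scheme the client actually used, so it
    # can build https:// links even though this hop is plain http.
    proxy_set_header X-Forwarded-Proto $scheme;
}
```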
Re: Fwd: soooo close: nginx gunicorn flask directory stripped from links
On Fri, Sep 16, 2022 at 12:11:05PM -0600, Brian Carey wrote: Hi there, > OK, sadly I was pre-mature in my explanation and claim of success, although > the trailing slash was clearly an issue. Now I can use the application fine > in the open browser which I can see did implement my changes because I can > move around in the app normally. > > But I get too many redirects with other browsers or other instances of the > same browser, which suggests to me that something was cached at some point > in my testing that is allowing it to work. curl returns a 301 and firefox > returns a too many redirects. A 301 to the same Location: url will be a redirect loop; a 301 to a different url may not be. So right now, without involving nginx at all, can you fully use your application if you point your browser at http://gunicorn:8080/, or if you point your browser at http://gunicorn:8080/prefix/ ? Once the upstream / backend is in a known state, adding nginx in front should be more straightforward. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: soooo close: nginx gunicorn flask directory stripped from links
On Fri, Sep 16, 2022 at 02:03:21AM -0600, Brian Scott wrote: Hi there, > Wow that looks promising. Can't try until tomorrow because it's 2am but I'll > try first thing tomorrow. From a best practices point of view would one > solution be better than the other assuming both work? The second suggestion > seems more straight-forward and avoids patches/fixes which is a good thing in > general. > I'm not aware of official "best practices" in this matter. I like "simple", so I tend to try to set up the internal "thing" so that I can reverse-proxy https://external/thing/ to http://internal/thing/, with the hope that internally I can access both forms (while externally only the external form is accessible). (I also try to make http://internal/ redirect to http://internal/thing/, so that I *can* access it easily internally.) Fundamentally, both options should work, provided that the application does not use any internal links that start with "/". Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
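[Editorial note: the "simple" same-prefix layout described above can be sketched as follows; all hostnames are hypothetical.]

```nginx
# External server: https://external/thing/ is reverse-proxied to
# http://internal/thing/ -- same prefix on both sides, so the app's
# relative links work unchanged in either view.
server {
    listen 443 ssl;
    server_name external.example.com;

    location /thing/ {
        proxy_pass http://internal.example.com/thing/;
    }
}

# Internal server: make the bare root redirect, so the app is easy
# to reach directly from inside as well.
server {
    listen 80;
    server_name internal.example.com;

    location = / {
        return 301 /thing/;
    }
}
```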
Re: soooo close: nginx gunicorn flask directory stripped from links
On Fri, Sep 16, 2022 at 01:22:46AM -0600, Brian Carey wrote: Hi there, > I'm very close to getting my flask app properly reverse-proxied by nginx. If your nginx config is correct, then it might be that the upstream / backend service (the flask app, in this case) does not like being reverse-proxied (to a different "sub-directory"), > I can access and use my app successfully at http://127.0.0.1:8000. > > I can get to my main page at http://my.domain/app. If I specifically enter > the url of a sub-directory/page I can get there, for example > http://my.domain/app works and http://my.domain/app/home works. > Hovering over the link it points to http://my.domain/home instead of > http://my.domain/app/home. Does https://pypi.org/project/flask-reverse-proxy-fix/ apply in your case? That page links to a 404 page, where the original content appears to be at https://web.archive.org/web/20131129080707/http://flask.pocoo.org/snippets/35/ You can possibly / potentially avoid all of that, if you are happy to deploy your app at http://127.0.0.1:8000/app/ instead of at http://127.0.0.1:8000/ -- in that case, all of the "local" links will be the same in both the direct and reverse-proxied cases, so only the hostname/port would need adjusting. (Which is usually more straightforward.) (I'm presuming that it is possible to deploy a flask app somewhere other than the root of the web service.) Good luck with it, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
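[Editorial note: if the flask app can indeed be deployed under /app/ on the backend as suggested above, the nginx side reduces to something like this sketch; the port is the one from the thread, the prefix is illustrative.]

```nginx
# Assumes the flask app itself is served at http://127.0.0.1:8000/app/
location /app/ {
    # No URI part on proxy_pass, so /app/home is forwarded as /app/home;
    # the app's own links already carry the /app prefix, and only the
    # hostname/port differ between direct and proxied access.
    proxy_pass http://127.0.0.1:8000;
}
```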
Re: help with regex in nginx map
On Thu, Sep 15, 2022 at 01:30:24PM -0400, libresco_27 wrote: Hi there, > I'm trying to write a simple regex for a map where only the first part of a > string should match. I went through the documentation, which unfortunately > didn't have much examples. I'm not sure if you are asking "how to use map", "how to set a variable", "how to write a regex in nginx", or something else. Does the following config fragment and example requests help at all? Within a http{} block: === map $arg_input $my_output_variable { "" "it was empty or not set"; default "did not match anything else"; ~^abc*$ "matches start abc star end"; ~^abc "starts with abc"; abc "is abc"; ~abc "contains abc"; } server { listen 127.0.0.3:80; location / { return 200 "input is :$arg_input:, output is :$my_output_variable:\n"; } } === $ curl http://127.0.0.3/ input is ::, output is :it was empty or not set: $ curl http://127.0.0.3/?input=abc input is :abc:, output is :is abc: $ curl http://127.0.0.3/?input=abcc input is :abcc:, output is :matches start abc star end: $ curl http://127.0.0.3/?input=abcd input is :abcd:, output is :starts with abc: $ curl http://127.0.0.3/?input=dabcd input is :dabcd:, output is :contains abc: $ curl http://127.0.0.3/?input=d input is :d:, output is :did not match anything else: > map $string $redirct_string{ > "~^abc*$" 1; > } That regex will only match the strings "ab", "abc", "abcc", "abccc", etc, with any number of c:s. > I also tried to change the regex to a simple "abc*", but it didn't work. That regex will match any string that includes "ab" anywhere in it. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: help with https to http and WSS to WS reverse proxy conf
On Mon, Sep 12, 2022 at 05:46:21PM -0700, Michael Williams wrote: Hi there, > Wow thank you. This really helps all the guidance and instruction. I really > appreciate your time. No worries. > One thing to clarify, is that if I turn off NGINX, the client page works > fine and connects to the app server inside the docker OK. I confess that I am confused as to what your current architecture is. Can you describe it? Along the lines of: Without nginx involved, we have (http service) running on (docker ip:port) and when we tell the client to access (http:// docker ip:port) everything works, including the websocket thing. With nginx involved to test reverse-proxying http, the docker side is identical, but we tell the client to access (http:// nginx ip:port) and everything works? Or not everything works? With nginx involved to test reverse-proxying https, the docker side is identical, and we tell the client to access (https:// nginx ip:port), and some things work? With that information, it might be clear to someone where the first problem appears. In this configuration: > server { > index index.html index.htm; > listen [::]:443 ssl ipv6only=on; # managed by Certbot > listen 443 ssl; # managed by Certbot > listen 25566 ssl; nginx is listening for https on two ports. What test are you running? Which port are you using? > location @wss { > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection $connection_upgrade; > proxy_pass http://172.31.24.191:25565; nginx is talking to this port without https. What works here / what fails here? What do the logs say? > My idea was to try changing our client webpage to access a different port # > than the one our app server in the docker is listening to. I'm afraid I am not sure what that means. I thought the client webpage was accessing nginx on port 443 and the backend / upstream http server was listening on the high port? Maybe I am getting confused among multiple tests that you are running. 
> With that change > I see from WIreshark on my local that the WSS connection seems to go > through OK with NGINX: > > [image: Screen Shot 2022-09-12 at 5.29.50 PM.png] I'm seeing a picture; but I'm not seeing anything that obviously says that a WSS connection is working anywhere. I'm seeing a TLS connection between the client and nginx that is cleanly closed after a fraction of a second. I see nothing that suggests that nginx is doing a proxy_pass to the upstream server. (But maybe that was excluded from the tcpdump?) > Our app server shows that the connection to the server also starts but then > disconnect it: > (22:36:59) Disconnected (unknown opcode 22) With nginx involved, the app server should never see the client IP address directly; it should only see connections from nginx. (It might see the client IP listed in the http headers.) > My question here, does NGINX negotiate the entire handshake for HTTPS to > WSS upgrade itself, without forwarding the same pages to our app server ? > Is there a way to forward those pages to the app server also ? I think our > app server may insist on negotiating a ws:// connection itself, but not a > wss:// connection. As I understand it: the client makes a TLS connection to nginx, and sends a http request inside that TLS connection (== a https request). Separately, nginx makes a http connection to the upstream server, and (through config) passes along the Upgrade-and-friends headers that the client sent to nginx, requesting that this connection switch to a websocket connection. And after that works, nginx effectively becomes a "blind tunnel" for the connection contents, passing unencrypted things on the nginx-to-upstream side and encrypted things on the nginx-to-client side, and generally not caring about what is inside. If things are still not working as wanted, I suggest simplifying things as much as possible. 
Make the nginx config be not much more than what step 6 on https://www.nginx.com/blog/websocket-nginx/ shows, and include enough information in any report of a test, so that someone else will be able to repeat the test on their system to see what happens there. Good luck with it, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
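[Editorial note: the minimal config referred to above (step 6 of the linked article) is roughly the following; the upstream IP:port is the one reported later in the thread.]

```nginx
# At http{} level: pick the right Connection header value based on
# whether the client asked to upgrade.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key as already configured

    location / {
        proxy_pass http://172.31.24.191:25565;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```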
Re: help with https to http and WSS to WS reverse proxy conf
here will probably be an error returned to the client. > location @websocket { > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection $connection_upgrade; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header Host myFQDN; > proxy_set_header Referer https://myFQDN; > proxy_set_header Referrer https://myFQDN; > # proxy_pass http://localhost:25565; > proxy_pass http://to-websocket; From below, your websocket service appears to be listening on ip-172-31-24-191.:25565. You'll want to invite nginx to talk to that IP:port, not localhost. > location @ { And this is what should be used if the incoming request has no "Upgrade" header. This entire block is equivalent to "location @ { }" > Here is the listener process on netstat: > > netstat -a -o | grep 255 > > tcp0 0 ip-172-31-24-191.:25565 0.0.0.0:* LISTEN > off (0.00/0/0) If you can access that IP:port from the nginx server to talk to the websocket service, that's what you should configure nginx to try to talk to. > Here is the interface being used: In this case: nginx is talking to an IP. It does not care what the physical interface is. (iptables and the like do care; but that part all looks good from here.) > Here are the iptables stats: If these rules block nginx from talking to the IP:port and getting the response, that will want fixing. Otherwise, it's good. > iptables -L -n -v These appear to say "accept almost everything; nothing has been dropped", so these rules are presumably not blocking nginx. Good luck with it, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: negation in the map directive of nginx
On Sun, Sep 11, 2022 at 11:22:39AM -0400, libresco_27 wrote: Hi there, > I tried the approach you suggested and it still doesn't seem to work. > This is what I am doing right now :- > > limit_req_zone $default_client_id zone=sample_zone:50k rate=3r/m sync; > map $client_id $default_client_id { > Z ""; > $client_id $client_id; Probably you want "default" there as the first word on the last line. > When I try to hit the gateway with client_id, it still limits the > requests according to 3rpm configuration. Am I doing this wrong? The map has 5 Zs. Your example has 4 Zs. But more interestingly: $client_id is not a standard nginx variable. How is it being set; and what test are you running? Presumably somewhere else in your config you have a "limit_req" directive, so that you can see the delay between responses. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: help with https to http and WSS to WS reverse proxy conf
On Sat, Sep 10, 2022 at 05:47:29PM -0700, Michael Williams wrote: Hi there, > Can someone with fresh eye please review this config and tell me why > requests are infinite redirection to https? I suspect that whatever you are proxy_pass'ing to is seeing that it is getting a http connection, and it has been configured to insist on having a https connection. In this particular case, your "listen 80 default_server" server block presumably includes "localhost"; and so your "proxy_pass http://localhost:80;" directive is talking back to that. Which is where the loop is. So - proxy_pass to something that will return content. Cheers, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
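[Editorial note: a minimal sketch of the fix described above -- point proxy_pass at the backend that actually serves content, not back at this same nginx listener. The backend address here is hypothetical.]

```nginx
server {
    listen 80 default_server;

    location / {
        # Must be the content-serving backend, e.g. an app server on
        # another port -- NOT http://localhost:80, which loops back
        # into this very server block.
        proxy_pass http://127.0.0.1:8080;
    }
}
```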
Re: negation in the map directive of nginx
On Thu, Sep 08, 2022 at 01:25:20PM -0400, libresco_27 wrote: Hi there, > I'm working on rate limiting for specific group of client ids where if the > client id is equal to XYZ don't map it, thus, the zone doesn't get > incremented. http://nginx.org/r/limit_req_zone: Requests with an empty key value are not accounted. It's probably easier to set the value to empty for those ones, and not-empty for the rest. > For ex - > limit_req_zone $default_rate_client_id zone=globalClientRateLimit_zone:50k > rate=10r/m sync; > map $client_id $default_rate_client_id { > "^(?!ZZ)$" "$1" > } map $client_id $default_rate_client_id { Z ""; default $client_id; } (or whatever value is wanted). > But this doesn't seem to work. Is this the correct way to negate a > particular string(Z in this example)? Please let me know. Negative regexes can be hard; it's simpler to avoid them entirely. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
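[Editorial note: the map-based negation from the message above, shown as a complete http{}-level fragment. The client id "Z" and the zone parameters are taken from the thread; the "sync" parameter from the original post is omitted here, as it is only available with zone synchronization support.]

```nginx
# Requests whose $client_id is exactly "Z" map to an empty key, and
# requests with an empty key are not accounted by limit_req_zone --
# so that client id is effectively exempt from the limit.
map $client_id $default_rate_client_id {
    Z       "";           # the excluded client id
    default $client_id;   # everyone else is limited per client id
}

limit_req_zone $default_rate_client_id zone=globalClientRateLimit_zone:50k rate=10r/m;
```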
Re: nginx not listening on port 443
On Wed, Aug 31, 2022 at 06:06:09PM -0400, biscotty wrote: Hi there, > http { > include mime.types; ... > server { > listen 80; ... > } > } That file does not "include" any files other than mime.types. It looks like you want it to "include conf.d/*.conf" somewhere, or something like that? $ sudo /usr/sbin/nginx -T | grep '^# conf' should show you which configuration files are actually read; if the ones that you want / expect are not listed there, you'll want to change the config to include them. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Securing URLs with Secure Link + HLS
On Wed, Aug 31, 2022 at 05:42:27PM -0400, LewisMM wrote: Hi there, > I make a request with the .m3u8 file with the MD5 Hash and expiration. It > receives a 200 code. Then the m3u8 playlist file tries to load the first > segment in the playlist, however, I receive 403 "Not Authorised" error. > Nginx isn't passing the MD5 hash and expiration to the segment file. The client is not including the MD5 hash and expiration in its second request, because nothing told the client to include it. And nginx is configured not to allow requests without MD5 hash and expiration. So the system is acting as it is configured to do. Just not as you would like it to. > I hope you understand my problem. I think that you may have the same misunderstanding of how secure_link and m3u8/ts files should work together, as was displayed initially in the (long-ish) thread at https://forum.nginx.org/read.php?2,284473,284473 If you read through that entire thread, maybe the various design possibilities will become clear. (Don't worry about the S3 part; it is only the secure_link that is relevant here.) Basically, you have to decide why you are using secure_link, and whether you want that to happen just for the m3u8 file, or also for the ts files. And if you decide "yes" for the ts files, then you need to ensure that the client knows to include the information in the request that it sends to nginx -- either by you changing the m3u8 file so that each link has the information needed; or by you changing your url layout so that the simple m3u8 file "just works". Cheers, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
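[Editorial note: a hedged sketch of the secure_link layout discussed above. The /hls/ prefix and the "mysecret" string are hypothetical; the point is that the protected location must cover the .ts segments as well as the .m3u8 playlist, and the hash covers $uri, so each segment link in the playlist must carry its own md5/expires arguments.]

```nginx
location /hls/ {
    # The client must send ?md5=...&expires=... on EVERY request,
    # playlist and segments alike -- which means the .m3u8 contents
    # must be rewritten to include those arguments per segment link.
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri mysecret";

    if ($secure_link = "")  { return 403; }  # hash missing or wrong
    if ($secure_link = "0") { return 410; }  # link expired

    root /var/www/media;
}
```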
Re: Securing URLs with Secure Link + HLS
On Wed, Aug 31, 2022 at 11:16:23AM -0400, LewisMM wrote: Hi there, > I've been following this resource: > https://www.nginx.com/blog/securing-urls-secure-link-module-nginx-plus/ > This works with the .M3U8 playlist file. It successfully secures it. > However, when the playlist tries to load the segment files (.ts) files, I > get a 403 error. Nginx is not passing the MD5 hash to the segment files. What request do you make to nginx? (That might be in the nginx access log, or error log.) What file on the filesystem do you want nginx to send you, instead of the 403 response? Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: location regex?
On Wed, Aug 03, 2022 at 08:39:32PM -0400, Jay Haines wrote: Hi there, > I am trying to weed out requests for any uri that contains the string, > "announce" (no quotes). That would include > > * /announce > * /announce/ > * /announce.php Normal config there would be of the form location ~ announce {} but only in a place where that location will actually have a chance to be matched -- so before any other ~regex location that might match the same request; and any =exact location, or ^~prefix location that is the longest-prefix match, will mean that regex matches are not tried. > each with or without query strings. I have the following location blocks in > my server context: > > location ~* announce { > return 444; > } > > location ~* /announce.php { > return 444; > } In that sequence, that second one will never be used. But that's ok; it has the same handling as the first one. > and my log looks good: > > "122.100.172.162" "03/Aug/2022:20:19:00 -0400" "GET > /announce.php?info_hash=%DF%AEF%40%7F%1DA%C9%91S%9F%D4%0D%D6J%E6%992%A3~&peer_id=-BC0171-_sSI%D1n%AA%A9%C3%A5%25%1E&port=15302&natmapped=1&localip=172.18.80.247&port_type=lan&uploaded=46317568&downloaded=11285925264&left=178446055&numwant=50&compact=1&no_peer_id=1&key=38892 > HTTP/1.1" "444" "0" "0.000" "-" "BitComet/1.71.9.7" > > until it doesn't: > > "81.110.165.170" "03/Aug/2022:20:24:03 -0400" "GET > /announce.php?info_hash=%5B%EA0r%8A*8%C4%DAA%81%02%B4%BF%97%CC%1E%A9y%C8&am_peer_id=-TR300%5A-LDXTt3fAIyq%00&port=43342&uploaded=0&downloaded=0&left=5593535899&event=started&key=0&compact=1&numwant=200 > HTTP/1.1" "400" "150" "0.000" "-" "-" If those two requests went to the same server{}, and there was no other config that will have handled them differently, they would both be handled in the same location{} (because each request was "/announce.php", as far as location matching is concerned). The second response is 400, which is "Bad Request", which can come from nginx before any location{} matching is attempted. 
For example -- something claiming to be a HTTP/1.1 request but not having a Host: header can lead to a log line like that. > I have tried various location prefixes and regexes (and combinations > thereof) but can't seem to find the one that works correctly. The first location{} that you have looks correct to me, in normal nginx terms. If you can investigate the 400-request, maybe you can see whether the response came from nginx directly, or came from something later that involved the announce.php code. (With the config shown, I expect it will have been a "real" bad request, so was rejected before the location-matching (and probably also the server-matching) happened.) Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Proxy buffering query
On Wed, Aug 03, 2022 at 08:49:20AM -0400, libresco_27 wrote: Hi there, > But I'm not explicitly defining the value for proxy_busy_buffers_size to > something. Right now it is set with the default value Oh, sorry. I had misunderstood what you were reporting. I now think that the confusion comes from what "the default value" for proxy_busy_buffers_size is. The documentation has one set of words, which is correct once you know how it is intended to be interpreted. If we can find a set of words that is both correct and clear, we can probably get the documentation changed to help the next person. I think that the key is: if not explicitly defined, the value for proxy_busy_buffers_size is "the bigger of: twice proxy_buffer_size; and the size of two proxy_buffers". By default-default, that is the 8k or 16k that the documentation summary shows. But when you set a big proxy_buffer_size value, you are implicitly increasing proxy_busy_buffers_size as well. And then the other requirement kicks in -- proxy_busy_buffers_size must be not bigger than "proxy_buffers number-minus-one times size". Also, if you want to explicitly set proxy_busy_buffers_size, you cannot make it be smaller than a single proxy_buffers, or than proxy_buffer_size. So with all default values on a 4k page system, proxy_buffers is "8 4k" (32k total, and 28k is the maximum for proxy_busy_buffers_size); proxy_buffer_size is 4k; and proxy_busy_buffers_size is 8k. If you want to set proxy_buffer_size to more than 14k, you must either also increase proxy_buffers (in number or size); or explicitly set proxy_busy_buffers_size to 28k or lower (while not being smaller than proxy_buffer_size). Hopefully this does not make things more confusing... Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
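[Editorial note: one combination of values that is internally consistent with the arithmetic above; the numbers are illustrative, not a recommendation.]

```nginx
# proxy_buffer_size of 32k implicitly raises the default
# proxy_busy_buffers_size to max(2 x 32k, 2 x 8k) = 64k, so the total
# buffer pool must be large enough that 64k <= (number - 1) x size.
proxy_buffer_size 32k;
proxy_buffers 16 8k;   # 128k total; busy ceiling is 15 x 8k = 120k
# Implicit proxy_busy_buffers_size = 64k: within the 120k ceiling,
# and not smaller than proxy_buffer_size -- so no [emerg] at startup.
```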
Re: Questions about real ip module
On Wed, Aug 03, 2022 at 02:58:59PM +0900, nanaya wrote: > On Wed, Aug 3, 2022, at 07:34, Francis Daly wrote: > > On Sat, Jul 30, 2022 at 05:13:52AM +0900, nanaya wrote: Hi there, > It looks like I tested it on location level. I guess it's similar behavior to > real_ip_header inheritance you mentioned below? Ah, I hadn't tested at location{} level. I had thought it would basically be: wherever the real_ip_header that is used is set, use the matching set_real_ip_from. But I see somewhat confusing test results there now too. So it's probably simplest to say that the current code works most clearly when there is exactly one set of directives in the configuration. If someone finds a use-case that they can't configure with the current code, maybe that will inspire someone to change something. > >> 2. does setting `real_ip_header '';` in a section effectively disable the > >> module for the section? > > > > I don't see that it does; and I don't see that the documentation says > > that it would. So I'd say "no, it does not". > > It seems to achieve the same effect though considering it's not really > possible to send empty header (or is it?). With the odd effective inheritance that I see, any "inner" directive seems to be effectively ignored. So having an "inner" one with the empty value should not make a difference. But I do not understand fully what it is doing. > Thanks. I've reworked the config so it's not needed anymore. Good that you have a config that now works for you. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Proxy buffering query
On Tue, Aug 02, 2022 at 11:01:32AM -0400, libresco_27 wrote: Hi there, > What is the relationship between these three directives - > proxy_busy_buffers_size, proxy_buffers and proxy_buffer_size? http://nginx.org/r/proxy_buffer_size, plus some of the following sections. > Currently, I'm only using proxy_buffer_size in my location block but > whenever I set it to some higher number, for ex: 32k, it throws the > following error - > nginx: [emerg] "proxy_busy_buffers_size" must be less than the size of all > "proxy_buffers" minus one buffer If you have told nginx to use 20 kB of buffers; then also telling nginx that it can have up to 40 kB of those buffers busy sending, is unlikely to be a correct config. I suspect that the error message is to ensure that you do not think that you have configured more buffers than you actually have. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Questions about real ip module
On Sat, Jul 30, 2022 at 05:13:52AM +0900, nanaya wrote: Hi there, > I have a few questions about the real ip module (tried on nginx/1.22.0): I can see similar curious behaviour to what you report. I'm not sure if it is "intended behaviour", or "that kind of variation was never considered" -- either way, you'll likely need a code change to achieve what you want, unless you can adapt your config to what the current code provides. > 1. is there no way to reset the list of `set_real_ip_from` for a specific > subsection? For example to have a completely different set of trusted > addresses for a specific server > That one seems to work for me. set_real_ip_from at http level, with another value at server level. A server without the second value uses the http-level one; a server with the second value uses that value only. Can you show a sample config that does not work? > 2. does setting `real_ip_header '';` in a section effectively disable the > module for the section? I don't see that it does; and I don't see that the documentation says that it would. So I'd say "no, it does not". > 3. documentation says `real_ip_header` is allowed in location block but it > doesn't seem to do anything? > This one is a bit subtle. As far as I can see, if there is no value at http or server level, then the value at location level is effectively used. But if there is something at http or server level, then the value at location level is effectively ignored. That's not the usual way that nginx directive inheritance works; my guess in this case is that the replacement-ip-address-variable is set at the outermost level, and then in the inner level, the variable is seen to have a value and that value is re-used rather than re-calculated. 
> This still uses address from X-Real-Ip instead of X-Other for allow check and > log: From playing with 1.22, if you want different real_ip_header header values to apply in different locations, you probably need to only set the directive at location level -- and set it in every location where you want it. Basically -- ensure that there is nothing to be inherited into a section that wants to have a specific value set, so that the curious effective inheritance behaviour of this directive does not take effect. That might let you get the end result that you want today; if you want a future version to work in "the expected" fashion, then you'll want to convince someone that the cost of maintaining the new code to do that is less than the benefit of being able to do that. Cheers, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
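[Editorial note: the workaround described above -- set the realip directives only at location level, in every location that needs them, so nothing is inherited from http{} or server{}. Addresses and header names here are hypothetical.]

```nginx
server {
    listen 80;

    # No set_real_ip_from / real_ip_header at server (or http) level.

    location /a/ {
        set_real_ip_from 10.0.0.0/8;
        real_ip_header X-Real-Ip;
    }

    location /b/ {
        set_real_ip_from 192.0.2.1;
        real_ip_header X-Other;
    }
}
```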
Re: 2 x Applications using the same domain behind a reverse proxy
On Tue, Jul 26, 2022 at 01:11:45AM +, Mik J via nginx wrote: Hi there, I don't have a full answer, but a few config changes should hopefully help with the ongoing diagnosis. > When I access to example.org, I was to use /var/www/htdocs/app1 and it works. > > When I access to example.org/app2, I was to use /var/www/htdocs/app2 and it > doesn't really work. > location / { > try_files $uri $uri/ /index.php$is_args$args; > root /var/www/htdocs/app1; That says "a request for /thing will look for the file /var/www/htdocs/app1/thing, or else will become a subrequest for /index.php". So far, so good. > location /app2 { > #root /var/www/htdocs/app2; > alias /var/www/htdocs/app2; > try_files $uri $uri/ /index.php$is_args$args; Depending on whether you use "root" or "alias" there, a request for "/app2/thing" will look for one of two different files, or else become a subrequest for "/index.php". I suspect that instead of the above, you want root /var/www/htdocs; try_files $uri $uri/ /app2/index.php$is_args$args; so that if /var/www/htdocs/app2/thing does not exist, the subrequest is for /app2/index.php. > location ~ \.php$ { > root /var/www/htdocs/app2; With that, later things will be looking for /var/www/htdocs/app2/app2/index.php (double /app2) which almost certainly does not exist. With "root" set correctly outside this location{}, you can remove that "root" line entirely. Or change it to be "root /var/www/htdocs;". Those two changes to within "location /app2" and the nested "location ~ \.php$" should be enough to allow whatever the next error is, to appear. If you test by doing (for example) curl -i http://example.org/app2/ the response http headers and content may give a clue as to what is happening versus what should be happening. For the other problem reports -- if they matter, if you can include enough of the configuration that it can be copy-paste'd in to a test system, it will be simpler for someone else to repeat what you are doing. 
But possibly the above change will mean that they no longer happen. You had a few other questions initially: > > Also what is the best practice on the backend server: > > - should I make one single virtual host with two location statements > > like I did or 2 virtual hosts with a fake name like > > internal.app1.example.org and internal.app2.example.org ? The answer there is always "it depends" :-( In this case, you have moved away from proxy_pass to a backend server, towards fastcgi_pass to a local socket; so I guess it does not really matter here and now. The more important thing is: does your application allow itself to be (reverse-proxy) accessed or installed in a "subdirectory" like "/app2/"? If it does not, then there are likely to be problems. > > - can I mutualise the location ~ \.php$ between the two ? Probably not; because the two location{}s probably have different requirements. You might be able to have all of the fastcgi_param directives in a common place, and "just" have duplicate "fastcgi_pass" directives in the two locations, though. > > - Should I copy access_log and error_log in the location /app2 statement ? As you wish. You can have nginx writing one log file, and make sure that whatever is reading it knows how to interpret it; or you can have nginx writing multiple log files, and have whatever is reading each one know how to interpret that one. I suspect that the main advantage to "different log files per location" is that it will be very clear which location{} was in use when the request completed; and if that is not the one that you expected, then you'll want to investigate why. (The main disadvantage is: multiple files to search through, in case things were not handled as you expected.) Good luck with it, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
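[Editorial note: the two changes suggested for the /app2 location in the message above, combined into one sketch. Paths are from the thread; the fastcgi_pass socket and parameter details are placeholders for whatever the poster already has.]

```nginx
location /app2 {
    root /var/www/htdocs;
    # If /var/www/htdocs/app2/thing does not exist, fall back to
    # /app2/index.php -- not the top-level /index.php.
    try_files $uri $uri/ /app2/index.php$is_args$args;

    location ~ \.php$ {
        root /var/www/htdocs;   # no doubled /app2 in the file path
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;  # placeholder socket
    }
}
```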
Re: Thanks for your help, Francis Daly
On Sat, Jul 23, 2022 at 09:21:55AM +0100, Francis Daly wrote: > On Fri, Jul 22, 2022 at 11:41:25PM -0600, Jim Taylor wrote: Hi there, one update / possible correction: > > "N: Skipping acquire of configured file 'nginx/binary-arnhf/Packages' > > because the repository doesn't support armhf (or something like that)" > > In this case, it sounds like you may be running one of the Debian-derived > raspberry pi OS's; possibly the one for Raspberry Pi 2 which uses the > 32-bit "armhf" architecture. Debian provides binaries built for that > architecture; RaspberryPi provides binaries built for that architecture; > Nginx does not provide binaries built for that architecture. Web content like https://discourse.osmc.tv/t/rpi-4-architecture-armhf-instead-of-arm64/90382 makes it look like maybe your current system could be multi-architecture, and perhaps you can configure your sources.list to look for the arm64 variant explicitly? You'll probably want to check your-OS-specific documentation; but it might be the case that you can use the nginx repository without a reinstall. Of course, if what you have right now works, it is zero extra effort to keep it working as-is. Cheers, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Thanks for your help, Francis Daly
On Fri, Jul 22, 2022 at 11:41:25PM -0600, Jim Taylor wrote: Hi there, > Thank you for helping me unmake a mess! You're welcome. > I had mistyped " ' " for " ` ". To be honest, in 60 years in this business > I had never typed " ` " on purpose before, so I typed single quotes around > "lsb_release -cs" There's a whole keyboard full of characters there; why leave the edge ones out? ;-) (Although the `backtick is often awkward to type, because it is often a "dead key" which only shows on-screen after the subsequent keypress.) Given the choice between retyping commands or error messages and copy-pasting them, copy-paste is usually the better option. Although that can go wrong when something decides to auto-convert plain quotes to something prettier in typography, so there is no one good answer. > Now everything seems to have run correctly, but ... > > When I do an apt-get update, the line after the 5 'hit' messages and > 'Reading Package Lists' says > > "N: Skipping acquire of configured file 'nginx/binary-arnhf/Packages' > because the repository doesn't support armhf (or something like that)" In this case, it sounds like you may be running one of the Debian-derived raspberry pi OS's; possibly the one for Raspberry Pi 2 which uses the 32-bit "armhf" architecture. Debian provides binaries built for that architecture; RaspberryPi provides binaries built for that architecture; Nginx does not provide binaries built for that architecture. If that is the case, then the quick-and-easy option is for you to remove the nginx repository from your system config -- which is "remove or #-comment the line that you recently edited". Then the next time you run "apt-get update", it will only use the other configured sources. That means that you will continue to get whichever nginx version is provided by the other sources, and you won't get the errors about skipping arm-hf. If you do want to run "the latest" nginx version on your system, then you will need to have a binary built for your system.
That could be any of (in no particular order): * install an "arm64" version of Debian -- nginx does provide binaries built for that architecture * build an "armhf" binary of nginx for yourself whenever you want to update * see if someone else has built an "armhf" binary of nginx that you are happy to use * encourage someone else to build an "armhf" binary of nginx for you "Simplest to use right now" is probably "stick with the Debian version" -- you won't get the new features of later nginx versions, but Debian will (try to) incorporate any security-related fixes and issue a new build then. "More educational in your Copious Free Time(TM)" is probably to build an nginx binary for yourself -- either as a "normal" binary build, or as a package suitable for your current system -- and then build-and-replace whenever there is an interesting update to the nginx source code. And, depending on the hardware that you have and what other things you want to run on it, possibly "simplest to support for the future" could be "re-install the operating system as the arm64 version". > Is this normal? Do I need to do another do over? It is an informational message which basically says "now that I look, I'm not using anything from that source this time"; so having that source listed does no harm, but removing that source will mean that it won't try to look the next time. > Thanks again for your help. Now I can build a configuration file and get my > website back on the air. Cheers; and good luck with it, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
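The "build for yourself" option usually follows the standard source-build steps; this is an outline, not a tested script, and the version number and configure options are placeholders to adjust:

```
# Sketch of a local source build (paths/versions are illustrative):
wget https://nginx.org/download/nginx-1.22.0.tar.gz
tar xzf nginx-1.22.0.tar.gz
cd nginx-1.22.0
./configure --prefix=/usr/local/nginx   # add your usual --with/--without options
make
sudo make install
```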
Re: 400 Bad request (spaces in request)
On Fri, Jul 22, 2022 at 05:41:15AM -0400, sipopo wrote: Hi there, > nginx 1.21.1 started return 400 error if exists spaces in request. But I > have old clients which need supports. Maybe anyone knows workaround? spaces in urls have always been incorrect. Early nginx rejected them as broken input; middle nginx was changed to allow most (but not all) spaces, to give broken clients a chance to become fixed clients (which in turn led to problem reports of the form "nginx accepts space G in a url, but rejects space H"); new nginx rejects them again. The change log lists the change as having happened in 1.21.1. It appears that the "become fixed clients" part did not happen. So for your use case for right now -- change back to something earlier than 1.21.1. Once that is working as much as it did previously, you have some time in which you can choose between (as I see it): * fixing your old clients (or links? It might depend how the broken urls are created in the first place.) * staying on the older nginx * carrying your own patch to your newer nginx to handle spaces in the way that you prefer * getting a patch to allow a configuration choice on what to do with spaces committed to stock nginx [+] * using something other than nginx [+] There is a reason why 1.21.1 rejected spaces. You will likely need to convince someone that the benefits of having an option to change back to the known-broken behaviour exceed the costs to them of doing that. Good luck with it, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Problem with basic auth on nuxtjs frontend with wordpress backend
On Wed, Jul 20, 2022 at 10:19:28AM -0400, strtwtsn wrote: Hi there, > I'm trying to add basic authentication to a nginx reverse proxy which is in > front of a nuxtjs app. > But if hangs. I've also tried it in the location section, but this hangs > too, what am I missing? What does "it hangs" mean? As in: * what request do you make? (ideally using something like "curl", to avoid any extra-browser complications) * what response do you get? * what response do you want instead? And possibly: * what do the nginx logs (access and error) say about this request? From your config, a request of the form curl -v https://your-server/TESTING should return information about the SSL negotiation; and after that succeeds, should return a http 401. And then curl -v --user your-name https://your-server/TESTING should have curl ask you for the password, and then should give a different response when using a wrong password and when using the correct password. If this happens: > nothing actually appears in the network page after you have submitted credentials, then something has gone wrong on the client side. When you hit "submit" or "go", the client should make a network request and the network page should show that request. Cheers, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
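For comparison, a minimal reverse-proxy-with-basic-auth setup looks something like this. The names are hypothetical (the original config was not quoted here): the certificate paths, htpasswd path, and upstream port are all assumptions.

```nginx
server {
    listen 443 ssl;
    server_name your-server;
    ssl_certificate     /etc/ssl/your-server.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/your-server.key;

    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd;    # created with e.g. htpasswd -c
        proxy_pass http://127.0.0.1:3000;            # the nuxtjs app (assumed port)
        proxy_set_header Host $host;
    }
}
```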
Re: Reverse proxy forcing language in cookies
On Tue, Jul 19, 2022 at 09:51:24PM -0400, Saint Michael wrote: Hi there, > I was asked to proxy google.com through > https://ГУГЛЭ.pl > but I need to make Google believe that clients are behind a computer > with the Russian language, not English. The question of "what do I include in a request to invite Google to respond in the Russian language" is probably best asked elsewhere. (Because (I guess) there are more likely to be people who know the answer, in a different group.) Once you have the answer -- specific http headers, specific headers with specific values, maybe something else -- then you can start to configure your nginx to include those things in the requests that it makes to its upstream. I guess it will involve proxy_set_header, but I do not know what your upstream requirements are. (Typically, you reverse-proxy to a thing that you control, so that you can know in advance if those requirements will change.) Good luck with it, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
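If the answer turns out to be "send an Accept-Language request header", the proxy_set_header part might look like this. This is a sketch only: whether Google actually honours this header for language selection is exactly the part to verify elsewhere first.

```nginx
location / {
    proxy_pass https://www.google.com;
    proxy_set_header Host www.google.com;
    # ask the upstream for Russian-language responses (assumed mechanism)
    proxy_set_header Accept-Language "ru-RU,ru;q=0.9";
}
```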
Re: update failure
On Thu, Jul 21, 2022 at 07:58:51PM -0600, Jim Taylor wrote: Hi there, > Installed nginx a couple of days ago, everything appeared to go as it > should. This evening a was going to add some programs, and was stopped cold > at my first command. What do i need to do to go forward? This is more a "debian" question than an "nginx" question, but my best guess is: > root@D-00:~# apt-get update > Hit:1 http://security.debian.org/debian-security bullseye-security InRelease > Hit:2 http://deb.debian.org/debian bullseye InRelease > Hit:3 http://deb.debian.org/debian bullseye-updates InRelease > Ign:4 http://nginx.org/packages/debian 'lsb-release InRelease Wherever your list of sources is configured (possibly /etc/apt/sources.list?) has the string "'lsb-release" (with a leading single quote) where it should probably have the string "bullseye". I wonder... did you follow the installation instructions at http://nginx.org/en/linux_packages.html#Debian? If so, perhaps there was a typo when writing the line http://nginx.org/packages/debian `lsb_release -cs` nginx" \ and "'" was used instead of "`"? (That's not exactly right, because of the _/- difference.) If that is what happened, then: edit the file /etc/apt/sources.list.d/nginx.list as root and change the line that has 'lsb-release to end in /debian bullseye nginx and then repeat the apt-get update. Good luck with it, f -- Francis Dalyfran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
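For concreteness, after the edit the file would contain a line like the following (assuming a bullseye system; the deb-line format follows the nginx.org Debian install instructions):

```
# /etc/apt/sources.list.d/nginx.list
deb http://nginx.org/packages/debian bullseye nginx
```

followed by repeating "apt-get update".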
Re: Domains not working as expected with nginx
On Fri, Jul 08, 2022 at 12:53:39PM -0700, Jason Crews wrote: Hi there, Thanks for this. I think it says that if you ask for "http://secondarydomain.com", you will get to > server { > server_name secondarydomain.com; that server block (unless secondarydomain.com resolves to 127.0.0.2); but if you ask for "https://secondarydomain.com", you will get to > server { > listen 443 ssl http2; > server_name sub.maindomain.com; that server block. Which I think is what you describe for the "wordpress" side of things. Either configure a server block with ssl for secondarydomain.com; or make sure to only access secondarydomain.com over http. (And if something like wordpress redirects to https, make it stop doing that.) Hope this helps, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
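A sketch of the first option -- a dedicated ssl server block for secondarydomain.com -- might look like this. The certificate paths are placeholders; a certificate valid for secondarydomain.com (e.g. from Let's Encrypt) would be needed.

```nginx
server {
    listen 443 ssl http2;
    server_name secondarydomain.com;
    ssl_certificate     /etc/letsencrypt/live/secondarydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/secondarydomain.com/privkey.pem;
    # plus the same root / php handling as the existing
    # port-80 wordpress server block
}
```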
Re: Domains not working as expected with nginx
On Fri, Jul 08, 2022 at 10:14:13AM -0700, Jason Crews wrote: Hi there, > I'm not sure what I've got misconfigured here, I would appreciate > anyone who could point me in the right direction. > Site structure: > > maindomain.com -> mediawiki -> works > sub.maindomain.com -> basic php website -> works > secondarydomain.com -> wordpress -> goes to sub.maindomain.com > > I've posted all of the config files on reddit: > https://www.reddit.com/r/nginx/comments/vtuha9/domains_not_going_where_expected/ For each server{} block that you have, what are the "listen" directives and what are the "server_name" directives? $ nginx -T | grep 'server\|listen' will probably give a reasonable starting point for that data. Feel free to edit it to hide anything you consider private; but please be consistent. If you use the same IP address in the config twice, edit it to the same thing. If you use different IP addresses, edit them to be different things -- anything in the 10.x network is "private enough". And for server_name entries, one.example.com, two.example.com, and *.example.net might be reasonable ways to edit things. (Also: feel free not to change things if you don't consider them private.) And when you report something not working, please be specific about http or https, to which particular hostname. (And confirm whether the hostname resolves to the IP address that nginx is listening on.) Hopefully the answers to those will make it clear what is happening, and what should be changed to make things happen the way you want them to happen. Cheers, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Reverse proxy to traefik
On Thu, Jul 07, 2022 at 11:17:03AM -0300, Daniel A. Rodriguez wrote: Hi there, > Nginx is actually working as RP for several subdomains for which is also SSL > termination. The traefik box is out of my scope, but it has the ability to > negotiate TLS certificates for its own. That's why I need to forward just > specific subdomain TCP traffic to it. I think you are indicating that you currently have a http section with something like

===
server {
    listen nginx-ip:443 ssl;
    server_name one.example.com;
    location / {
        proxy_pass http://internal-one;  # or maybe "https://internal-one;"
    }
}
server {
    listen nginx-ip:443 ssl;
    server_name two.example.com;
    location / {
        proxy_pass http://internal-two;  # or maybe "https://internal-two;"
    }
}
===

If you need your traefik server to see the original data stream from the client (such as: if your traefik server is using client certificates for authentication; I can't immediately think of any other https reason), then I suspect that in nginx terms you will need a second IP address, and have a separate nginx "stream" block that will listen on that-ip:443. If you are not using client certificates, you can still use a second IP to let traefik see the original data stream. But maybe you can "get away" with a normal http proxy_pass? I guess it depends on your use case, and I'm afraid that I do not know what your specific use case is. The short answer is: on a single IP:port, nginx either listens for stream, or for http, but not both. Cheers, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
Re: Reverse proxy to traefik
On Tue, Jul 05, 2022 at 12:53:05PM +, Daniel Armando Rodriguez via nginx wrote: > El 2022-07-02 08:24, Francis Daly escribió: > > On Fri, Jun 24, 2022 at 04:23:54PM -0300, Daniel Armando Rodriguez > > wrote: Hi there, > > > Made this representation to illustrate the situation. > > > https://i.postimg.cc/Zq1Ndyws/scheme.png > What I need to do is allowing traefik "black" box to negotiate SSL > certificate directly with Let's Encrypt, that was intended to be referred as > stream. I think you are saying that you want nginx to be a "plain" tcp-forwarder in this case. (I'm not certain *why* that matters here, but that's ok; I don't need to understand it ;-) .) Does http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html work for you? Something like

==
stream {
    server {
        listen nginx-ip:443;
        proxy_pass traefik-ip:443;
    }
}
==

(If you have a stream listener on an IP:port, you cannot also have a http listener on that same IP:port.) Your picture also shows some blue lines on the left-hand side, so it may be that you also want something like http://nginx.org/en/docs/stream/ngx_stream_ssl_preread_module.html, to choose which "upstream" to proxy_pass to, depending on the server name presented in the SSL connection to nginx. Cheers, f -- Francis Daly fran...@daoine.org ___ nginx mailing list -- nginx@nginx.org To unsubscribe send an email to nginx-le...@nginx.org
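The ssl_preread variant can be sketched like this. The hostnames and addresses are hypothetical, and this assumes nginx was built with the stream and stream_ssl_preread modules:

```nginx
stream {
    # route on the SNI server name, without terminating TLS in nginx
    map $ssl_preread_server_name $backend {
        traefik.example.com  traefik-ip:443;  # traefik handles its own certs
        default              other-ip:443;    # everything else
    }
    server {
        listen nginx-ip:443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```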