QUIC docs with php-fpm & ssl cache
I've installed v1.25:

  nginx -V
  nginx version: nginx/1.25.0 (Local Build)
  built with OpenSSL 3.0.8 7 Feb 2023
  TLS SNI support enabled
  configure arguments: --with-debug ... --with-http_v2_module --with-http_v3_module ...

First tries on my existing site -- which frontends php-fpm, uses ssl cache, etc. -- config-check with NO errors, and exec with NO errors. But I see no HTTP/3 protocol responses; everything's still HTTP/2. Testing with Firefox at Cloudflare, client-side QUIC support is fine. So I'm fairly certain it's my config. I'll reduce to the simplest config and track down the issue.

Are there (yet) any documented examples for release nginx + QUIC in a php-fpm/fastcgi setup?

___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel
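[Editor's sketch, not from the original post: a minimal HTTP/3 server block of the kind the nginx 1.25 QUIC docs describe; all paths and sockets are placeholders. A common reason responses stay on HTTP/2 is that browsers only switch to HTTP/3 after seeing an Alt-Svc advertisement, and only if UDP 443 is reachable.]

```nginx
server {
    listen 443 ssl http2;
    listen 443 quic reuseport;   # QUIC is UDP; 443/udp must be open end-to-end

    server_name example.com;

    ssl_certificate     /path/to/fullchain.pem;   # placeholder paths
    ssl_certificate_key /path/to/priv.key;
    # HTTP/3 requires TLS 1.3 (included in the default ssl_protocols)

    # Browsers discover HTTP/3 via Alt-Svc on an HTTP/1.1 or HTTP/2 response
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm/www.sock;  # placeholder socket
        include fastcgi.conf;
    }
}
```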
Re: nginx 1.24 + njs build errors [-Werror=dangling-pointer=] after switch from GCC 12 (Fedora 37) -> GCC13 (Fedora 38)
> GCC 13 is not released yet, right? "Real Soon Now (tm)"

fyi, https://gcc.gnu.org/gcc-13

  April 26, 2023
  The GCC developers are pleased to announce the release of GCC 13.1.
Re: nginx 1.24 + njs build errors [-Werror=dangling-pointer=] after switch from GCC 12 (Fedora 37) -> GCC13 (Fedora 38)
> GCC 13 is not released yet, right? "Real Soon Now (tm)"

GCC 13.0.1 Status Report (2023-04-17)
https://gcc.gnu.org/pipermail/gcc/2023-April/241140.html

It's in the Fedora 38 release, which dropped today:

  Fedora 38 Released With GNOME 44 Desktop, GCC 13, Many New Features
  https://www.phoronix.com/news/Fedora-38-Released

  gcc -v
  Using built-in specs.
  COLLECT_GCC=gcc
  COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/13/lto-wrapper
  OFFLOAD_TARGET_NAMES=nvptx-none
  OFFLOAD_TARGET_DEFAULT=1
  Target: x86_64-redhat-linux
  Configured with: ../configure --enable-bootstrap
    --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,m2,lto --prefix=/usr
    --mandir=/usr/share/man --infodir=/usr/share/info
    --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared
    --enable-threads=posix --enable-checking=release --enable-multilib
    --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions
    --enable-gnu-unique-object --enable-linker-build-id
    --with-gcc-major-version-only --enable-libstdcxx-backtrace
    --with-libstdcxx-zoneinfo=/usr/share/zoneinfo --with-linker-hash-style=gnu
    --enable-plugin --enable-initfini-array
    --with-isl=/builddir/build/BUILD/gcc-13.0.1-20230401/obj-x86_64-redhat-linux/isl-install
    --enable-offload-targets=nvptx-none --without-cuda-driver
    --enable-offload-defaulted --enable-gnu-indirect-function --enable-cet
    --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
    --with-build-config=bootstrap-lto --enable-link-serialization=1
  Thread model: posix
  Supported LTO compression algorithms: zlib zstd
  gcc version 13.0.1 20230401 (Red Hat 13.0.1-0) (GCC)

This looks like the origin:

  https://gcc.gnu.org/gcc-13/changes.html
  --> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106393

> Can you reproduce the issue with the -Wdangling-pointer option and GCC 12?

As of ~an hour ago, the last of my boxes finished updates -- with all GCC 13. I can try to set something up on the COPR build sys to check ...
nginx 1.24 + njs build errors [-Werror=dangling-pointer=] after switch from GCC 12 (Fedora 37) -> GCC13 (Fedora 38)
I'm building nginx mainline v1.24 on Fedora.

On F37, with gcc 12:

  gcc --version
  gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
  Copyright (C) 2022 Free Software Foundation, Inc.
  This is free software; see the source for copying conditions.  There is NO
  warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

the build's good.

Upgrading to today's new/latest F38, with gcc 13:

  gcc --version
  gcc (GCC) 13.0.1 20230401 (Red Hat 13.0.1-0)
  Copyright (C) 2023 Free Software Foundation, Inc.
  This is free software; see the source for copying conditions.  There is NO
  warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

builds for target F38+ fail @ dangling-pointer errors:

  ...
  src/njs_iterator.c: In function 'njs_object_iterate':
  src/njs_iterator.c:358:25: error: storing the address of local variable 'string_obj' in '*args.value' [-Werror=dangling-pointer=]
    358 |                 args->value = &string_obj;
        |                 ^
  ...
  cc1: all warnings being treated as errors

Adding -Wno-dangling-pointer to the build flags works around it, with a successful build.

For ref,

  FAILED build log:
  https://download.copr.fedorainfracloud.org/results/pgfed/nginx-mainline/fedora-38-x86_64/05802768-nginx/build.log.gz

  OK build log:
  https://download.copr.fedorainfracloud.org/results/pgfed/nginx-mainline/fedora-38-x86_64/05802814-nginx/build.log.gz

I'm checking to see whether the error flag was added to GCC 13 upstream, or just to Red Hat/Fedora flags ...
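[Editor's note, not part of the original report: one way to apply the workaround described above is at configure time; a sketch, with hypothetical paths.]

```shell
# Pass the warning suppression through nginx's configure script;
# --with-cc-opt appends to the compiler flags used for nginx and
# for modules built along with it (here, njs as a hypothetical path).
./configure \
    --with-http_v2_module \
    --add-module=../njs/nginx \
    --with-cc-opt='-Wno-dangling-pointer'
```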
Re: failure to limit access to a secure area with self-signed client SSL cert fingerprint match
> Do you have the certificate that has that value as the Subject? What is
> that certificate's Issuer? And repeat until you get to the root
> certificate. And which of the ssl_*certificate files named in your config
> holds those certificates?

I verified all my certs/chains. All good.

With my orig conf, it appears I can't manage to grab/verify ssl client FPs for other-than-primary domains. This fails to work, and errors as reported above:

  server {
      ...
      server_name example.com;
      ssl_verify_client optional;
      ssl_verify_depth 2;
      ssl_client_certificate  "/www/ssl/self-signed/myCA.CHAIN.crt.pem";
      ssl_trusted_certificate "/www/ssl/le/deploy/example.com/intermediate_ca.ec.crt.pem";
      ssl_certificate         "/www/ssl/le/deploy/example.com/fullchain.ec.crt.pem";
      ssl_certificate_key     "/www/ssl/le/deploy/example.com/priv.ec.key";
      location /test {
          if ($ssl_client_verify != SUCCESS) { return 403; }
          if ($test_ssl_fp_reject)           { return 403; }
          ...
      }
  }

OTOH, simply splitting the secure subdir out into a separate server{}/subdomain, with a separate, self-signed cert:

  server {
      ...
      server_name example.com;
      ssl_verify_client off;
      ssl_trusted_certificate "/www/ssl/le/deploy/example.com/intermediate_ca.ec.crt.pem";
      ssl_certificate         "/www/ssl/le/deploy/example.com/fullchain.ec.crt.pem";
      ssl_certificate_key     "/www/ssl/le/deploy/example.com/priv.ec.key";
  }

  server {
      server_name test.example.com;
      ssl_verify_client on;
      ssl_client_certificate "/www/ssl/self-signed/myCA.CHAIN.crt.pem";
      ssl_verify_depth 2;
      ssl_certificate     "/www/ssl/self-signed/test.example.com.server.ec.crt.pem";
      ssl_certificate_key "/www/ssl/self-signed/test.example.com.ec.key.pem";
      location / {
          if ($ssl_client_verify != SUCCESS) { return 403; }
          if ($test_ssl_fp_reject)           { return 403; }
          ...
      }
      ...
  }

achieves the intended result -- just not in the same server{} block.

___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx
Re: failure to limit access to a secure area with self-signed client SSL cert fingerprint match
> What does the error_log say about this request and response?

Nothing that's giving me a hint I recognize ...

  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 header: "cache-control: no-cache"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 encoded string, len:2
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 encoded string, len:6
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 table add: "te: trailers"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 table account: 42 free:2775
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 header: "te: trailers"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 request line: "GET / HTTP/2.0"
  2023/03/21 18:52:14 [info] 4955#4955: *7 client SSL certificate verify error: certificate status request failed while reading client request headers, client: 2401::...::1, server: example.com, request: "GET / HTTP/2.0", host: "example.com"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http finalize request: 495, "/?" a:1, c:1
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http special response: 495, "/?"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 headers more header filter, uri "/"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 xslt filter header
  2023/03/21 18:52:14 [debug] 4955#4955: *7 charset: "" > "utf-8"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 header filter
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 push resources
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 table size update: 0
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: ":status: 400"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "date: Tue, 21 Mar 2023 22:52:14 GMT"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "content-type: text/html; charset=utf-8"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "content-length: 208"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "secure: Server"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "x-robots-tag: noindex, nofollow, nosnippet, noarchive"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "x-download-options: noopen"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "x-permitted-cross-domain-policies: none"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "permissions-policy: interest-cohort=()"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "x-xss-protection: 1; mode=block"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "strict-transport-security: max-age=63072000; includeSubDomains; preload"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "x-frame-options: SAMEORIGIN"
  2023/03/21 18:52:14 [debug] 4955#4955: *7 http2 output header: "referrer-policy: strict-origin-when-cross-origin"
  ...
failure to limit access to a secure area with self-signed client SSL cert fingerprint match
I run:

  nginx -v
  nginx version: nginx/1.23.3 (COPR Build)

The server's set up to use LE certs:

  server {
      ...
      ssl_trusted_certificate "/www/sec/le/deploy/otherexample.com/intermediate_ca.ec.crt.pem";
      ssl_certificate         "/www/sec/le/deploy/otherexample.com/fullchain.ec.crt.pem";
      ssl_certificate_key     "/www/sec/le/deploy/otherexample.com/priv.ec.key";
      ...

I've a secure area that I want to limit access to only those clients with exact-matching ssl cert fingerprints. I've added:

  map $ssl_client_fingerprint $test_ssl_fp_reject {
      default 1;
      # cert's SHA1 FP
      01234567890ABCDEFGHIJK1234567890ABCDEFGH 0;
  }
  ...
  log_format ssl_client '"Client fingerprint" $ssl_client_fingerprint '
                        '"Client DN" $ssl_client_s_dn ';
  ...
  server {
      ...
      # attempt the verify, to populate $ssl_client_fingerprint
      ssl_verify_client optional;
      ssl_verify_depth 2;
      ssl_client_certificate "/etc/ssl/cert.pem";
      ...
      location /sec/test {
          if ($test_ssl_fp_reject) { return 403; }
          root /www/sec/test;
          try_files /test.php =444;
          fastcgi_pass phpfpm;
          fastcgi_index test.php;
          fastcgi_param PATH_INFO $fastcgi_script_name;
          include fastcgi.conf;
      }
      ...
      access_log /var/log/nginx/ssl.log ssl_client;

The client cert's self-signed with my own CA, and usage's config'd for client auth:

  openssl x509 -in desktop.example.com.client.ec.crt.pem -text -noout
  Certificate:
      Data:
          Version: 3 (0x2)
          Serial Number: 4859 (0x12fb)
          Signature Algorithm: ecdsa-with-SHA256
          Issuer: C = US, ST = NY, O = example.com, OU = example.com_CA, CN = example.com_CA_INT, emailAddress = s...@example.com
          Validity
              Not Before: Mar 20 11:17:47 2023 GMT
              Not After : Mar 17 11:17:47 2024 GMT
          Subject: C = US, ST = NY, L = New_York, O = example.com, OU = example.com_CA, CN = desktop.example.com, emailAddress = s...@example.com
          Subject Public Key Info:
              Public Key Algorithm: id-ecPublicKey
                  Public-Key: (384 bit)
                  pub: 04:...:e5
                  ASN1 OID: secp384r1
                  NIST CURVE: P-384
          X509v3 extensions:
              X509v3 Basic Constraints: CA:FALSE
              Netscape Cert Type: SSL Client, S/MIME
              Netscape Comment: example.com CLIENT Certificate
              X509v3 Subject Key Identifier: CC:...:06
              X509v3 Authority Key Identifier: D0:...:CD
              X509v3 Key Usage: critical
                  Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment, Key Agreement
              X509v3 Extended Key Usage: TLS Web Client Authentication, E-mail Protection
              X509v3 Subject Alternative Name: DNS:desktop.example.com, DNS:www.desktop.example.com
      Signature Algorithm: ecdsa-with-SHA256
      Signature Value: 30:...:6f

I've imported the cert as .pfx into Firefox & Chrome. I can access https://otherexample.com as usual. Now, on access to EITHER of

  https://otherexample.com
  https://otherexample.com/sec/test

in the browser I get:

  400 Bad Request
  The SSL certificate error
  nginx

while in the log, I _do_ see the captured FP & DN:

  tail -f /var/log/nginx/ssl.log
  "Client fingerprint" 01234567890ABCDEFGHIJK1234567890ABCDEFGH "Client DN" emailAddress=s...@example.com,CN=desktop.example.com,OU=example.com_CA,O=example.com,L=New_York,ST=NY,C=US

If I toggle

  - ssl_verify_client optional;
  + ssl_verify_client off;

then access to https://otherexample.com works.
But https://otherexample.com/sec/test returns

  403 Forbidden
  nginx

since $ssl_client_fingerprint doesn't populate:

  tail -f /var/log/nginx/ssl.log
  "Client fingerprint" - "Client DN"
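[Editor's aside, not from the thread: as I understand it (worth verifying against the docs), nginx exposes $ssl_client_fingerprint as the lowercase, colon-free hex SHA-1 digest of the client certificate's DER bytes -- so the map key must be in that exact form. A minimal sketch of that computation:]

```python
import hashlib

def cert_fingerprint_sha1(der_bytes: bytes) -> str:
    """SHA-1 fingerprint in colon-free lowercase hex -- the form that,
    per my reading, matches nginx's $ssl_client_fingerprint."""
    return hashlib.sha1(der_bytes).hexdigest()

# Any byte string stands in for DER-encoded certificate data here.
fp = cert_fingerprint_sha1(b"not-a-real-certificate")
print(len(fp))   # 40 hex characters
```

[To get a comparable value from a real cert, something like `openssl x509 -noout -fingerprint -sha1 -in cert.pem | cut -d= -f2 | tr -d ':' | tr '[:upper:]' '[:lower:]'` should produce the same digest form.]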
Re: "ssl_stapling" ignored warning on boot with LE certs?
Hi,

> The error message suggests there is something wrong with DNS on your
> host. If this happens only on boot but not when you restart/reload nginx
> after boot,

Ah. Testing -- yep, that does seem to be the case.

> this might indicate that DNS is not yet properly available when nginx
> starts. One possible reason is that the nginx systemd service is not
> properly configured to depend on DNS being available: for nginx to start
> properly you may want to ensure that there is a Wants= and After=
> dependency on network-online.target, and an After= dependency on
> nss-lookup.target; see nginx.service as shipped by nginx.org nginx
> packages[1] for an example.

I'd added/use unbound as a local resolver. Changing both

  edit /etc/systemd/system/nginx.service
  - After=network-online.target
  - Wants=network-online.target
  + After=network-online.target nss-lookup.target unbound.target
  + Wants=network-online.target nss-lookup.target unbound.target

and

  edit /etc/nsswitch.conf
  - hosts: files dns
  + hosts: dns files

does the trick. I wasn't noticing any DNS issues anywhere (else); just this OCSP fail. Good catch, thx! o/
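[Editor's side note, not from the thread: the same unit dependencies can be carried in a systemd drop-in override, which survives package updates, rather than editing the unit file directly. A sketch; `unbound.service` is my assumption for the local unbound resolver's unit name.]

```ini
# Created via "systemctl edit nginx"; lands in
# /etc/systemd/system/nginx.service.d/override.conf
[Unit]
Wants=network-online.target nss-lookup.target
After=network-online.target nss-lookup.target unbound.service
```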
"ssl_stapling" ignored warning on boot with LE certs?
I run:

  nginx -v
  nginx version: nginx/1.23.3 (Local Build)

nginx is launched on boot with a systemd service. My site's SSL-enabled, using letsencrypt certs. In my boot logs, I see:

  Feb 15 11:54:03 svr017 nginx[912]: nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "r3.o.lencr.org/" in the certificate "/sec/svr017/fullchain.ec.crt.pem"

The nginx site config includes:

  ssl_trusted_certificate "/sec/svr017/intermediate_ca.ec.crt.pem";
  ssl_certificate         "/sec/svr017/fullchain.ec.crt.pem";
  ssl_certificate_key     "/sec/svr017/priv.ec.key";
  ssl_stapling on;
  ssl_stapling_verify on;
  ssl_ocsp on;
  ssl_ocsp_cache shared:OCSP:10m;
  ssl_stapling_responder http://r3.o.lencr.org/;
  ssl_ocsp_responder     http://r3.o.lencr.org/;

Checking the cert:

  openssl x509 -noout -text -in /sec/svr017/fullchain.ec.crt.pem | grep -i ocsp -A2 -B1
      Authority Information Access:
          OCSP - URI:http://r3.o.lencr.org
          CA Issuers - URI:http://r3.i.lencr.org/
      X509v3 Subject Alternative Name:

From the host:

  dig A r3.o.lencr.org +short
  o.lencr.edgesuite.net.
  a1887.dscq.akamai.net.
  23.215.130.112
  23.215.130.106
  23.215.130.113
  23.215.130.88

  telnet -4 r3.o.lencr.org 80
  Trying 23.63.77.32...
  Connected to r3.o.lencr.org.
  Escape character is '^]'.

  curl -Ii http://r3.o.lencr.org/
  HTTP/1.1 200 OK
  Server: nginx
  Content-Length: 0
  Cache-Control: max-age=5863
  Expires: Wed, 15 Feb 2023 18:52:39 GMT
  Date: Wed, 15 Feb 2023 17:14:56 GMT
  Connection: keep-alive

Is this warning due to an nginx misconfig? Or a cert issue?
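[Editor's note, not from the post: the ngx_http_ssl_module docs mention that for resolution of the OCSP responder hostname, the `resolver` directive should also be specified. A sketch, assuming a local caching resolver on 127.0.0.1:]

```nginx
http {
    # Used by nginx for runtime DNS lookups, including the OCSP
    # responder host; 127.0.0.1 assumes a local resolver (e.g. unbound).
    resolver 127.0.0.1 valid=300s;
    resolver_timeout 5s;
}
```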
Re: OCSP checks fail only on 1st site hit; OK afterwards ?
This 2012 post,

  Priming the OCSP cache in Nginx
  https://unmitigatedrisk.com/?p=241

comments:

> ... in Nginx 1.3.7, unfortunately, architectural restrictions made it
> impractical to pre-fetch the OCSP response on server start-up, so instead
> the first connection to the server primes the cache that is used for
> later connections. This is a fine compromise, but what if you really want
> the first connection to have the benefit too? Well there are two
> approaches you can take: ...

where OCSP pre-fetching is a challenge that Cloudflare similarly took up in 2017, outside of its then-nginx usage:

  High-reliability OCSP stapling and why it matters
  https://blog.cloudflare.com/high-reliability-ocsp-stapling/

Adding

  edit /etc/systemd/system/nginx.service
  + ExecStartPost=/bin/bash /etc/nginx/scripts/ocsp_prefetch.sh

where /etc/nginx/scripts/ocsp_prefetch.sh iterates over served domains,

  echo QUIT | openssl s_client -connect ${_thisDom}:443 -servername ${_thisDom} -tls1_3 -tlsextdebug -status 2> /dev/null

does the trick. After a cold reboot, 1st hits to site(s) no longer fail to respond in-browser, or fail to provide an OCSP response to an openssl s_client query.

IS there an nginx prefetch mechanism available natively in the current version? I found this 7-yr-old enhancement request,

  Fetch OCSP responses on startup, and store across restarts
  https://trac.nginx.org/nginx/ticket/812

which afaict wasn't resolved.

___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Re: OCSP checks fail only on 1st site hit; OK afterwards ?
An old, 2015 post from the Caddy webserver's author,

  OCSP Stapling Robustness in Apache and nginx
  https://gist.github.com/mholt/3b4910c802b2ed7e92294e26a1ae8551

comments:

> ... nginx's logic is a lot more robust than Apache's in this regard. Good
> OCSP responses are cached for an hour, but are not replaced until a
> successful new response has been received, meaning nginx can weather
> temporary OCSP responder outages.
>
> Unfortunately, nginx's logic is drastically worse in a different way:
> nginx kicks off OCSP queries on-demand, during the TLS handshake, but
> continues the handshake without waiting for the OCSP response to return.
> And since the OCSP response caches are unique per worker process, the
> first TLS connection handled by any given worker process never has a
> response stapled! (By the way, this makes testing whether you've properly
> enabled OCSP stapling rather annoying and confusing if you don't know
> about this.) This behavior also means that if a worker process sits idle
> for a long time, it doesn't refresh its OCSP responses and could staple
> an expired OCSP response on the next request it handles.
>
> [Update: the expired response issue is fixed in nginx 1.9.2. Now, if the
> cached OCSP response is expired, no response at all is stapled. A query
> to the OCSP responder is still initiated in the background, so subsequent
> handshakes should have a fresh stapled response.] ...

That suggests an 'updated' (back then, as of v >= 1.9.2) behavior of no OCSP response on the 1st try, but a background-queried-and-cached OK response subsequently. Which sounds like what I'm seeing.
> i run nginx/1.23.2 on linux
>
> after a clean reboot, on first access to my site front page, I see in log
>
>   ==> /var/log/nginx/example.com.443.error.log <==
>   2022/11/09 12:38:15 [info] 1460#1460: *2 SSL_do_handshake() failed (SSL: error:0A000412:SSL routines::sslv3 alert bad certificate:SSL alert number 42) while SSL handshaking, client: 2601:...:xxx1, server: [2600:...:xxx6]:443
>
> if I immediately just reload the page in browser, no more problem; the
> page renders ok, SSL checks out, all site nav is fine. subsequent hits to
> the front page are also OK ...

Is that (still?) the current mode of operation in nginx's OCSP logic?
OCSP checks fail only on 1st site hit; OK afterwards ?
I run nginx/1.23.2 on linux. After a clean reboot, on first access to my site front page, I see in the log:

  ==> /var/log/nginx/example.com.443.error.log <==
  2022/11/09 12:38:15 [info] 1460#1460: *2 SSL_do_handshake() failed (SSL: error:0A000412:SSL routines::sslv3 alert bad certificate:SSL alert number 42) while SSL handshaking, client: 2601:...:xxx1, server: [2600:...:xxx6]:443

If I immediately just reload the page in the browser, no more problem; the page renders OK, SSL checks out, all site nav is fine. Subsequent hits to the front page are also OK.

I use letsencrypt certs. Digging around, I found this from 2013:

  Can't get OCSP stapling to work, despite openssl working fine
  https://success.qualys.com/discussions/s/question/0D52L4TnuFdSAJ/cant-get-ocsp-stapling-to-work-despite-openssl-working-fine

My config includes:

  ssl_stapling on;
  ssl_stapling_verify on;
  ssl_stapling_responder http://r3.o.lencr.org/;
  server {
      ssl_trusted_certificate ...;
  }

Checking, after a cold reboot, the 1st connect returns a missing OCSP response:

  echo | openssl s_client -connect example.com:443 -servername example.com -tls1_3 -tlsextdebug -status
  CONNECTED(0003)
  ...
  depth=0 CN = example.com
  verify return:1
  !! OCSP response: no response sent
  ...
  ---
  SSL handshake has read 4384 bytes and written 318 bytes
  Verification: OK
  ---
  New, TLSv1.3, Cipher is TLS_CHACHA20_POLY1305_SHA256
  Server public key is 384 bit
  Secure Renegotiation IS NOT supported
  Compression: NONE
  Expansion: NONE
  No ALPN negotiated
  Early data was not sent
  Verify return code: 0 (ok)
  ---
  DONE

but an immediately subsequent 2nd try returns a response:

  echo | openssl s_client -connect example.com:443 -servername example.com -tls1_3 -tlsextdebug -status
  CONNECTED(0003)
  ...
  verify return:1
  OCSP response:
  ==
  OCSP Response Data:
      OCSP Response Status: successful (0x0)
      Response Type: Basic OCSP Response
      Version: 1 (0x0)
      Responder Id: C = US, O = Let's Encrypt, CN = R3
      Produced At: Nov  9 17:09:00 2022 GMT
      Responses:
      Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: 48D...3D1
          Issuer Key Hash: 142...2BC
          Serial Number: 022...84E
      Cert Status: good
      This Update: Nov  9 17:00:00 2022 GMT
      Next Update: Nov 16 16:59:58 2022 GMT
      Signature Algorithm: sha256WithRSAEncryption
      Signature Value: 09:...:cf
  ==
  ...
  ---
  SSL handshake has read 4894 bytes and written 318 bytes
  Verification: OK
  ---
  New, TLSv1.3, Cipher is TLS_CHACHA20_POLY1305_SHA256
  Server public key is 384 bit
  Secure Renegotiation IS NOT supported
  Compression: NONE
  Expansion: NONE
  No ALPN negotiated
  Early data was not sent
  Verify return code: 0 (ok)
  ---
  DONE

So far, this^^ is 100% reproducible for me: always/only on the first load after boot, and no issues after. This 'feels' like a timeout before OCSP is cached. Not sure.

Reading up at https://nginx.org/en/docs/http/ngx_http_ssl_module.html I see

  ssl_stapling_responder
    "Overrides the URL of the OCSP responder specified in the "Authority
    Information Access" certificate extension."

which I use, but also

  ssl_ocsp_responder
    "Overrides the URL of the OCSP responder specified in the "Authority
    Information Access" certificate extension for validation of client
    certificates."

which I don't currently. What's the difference in function/usage between those two?

As far as caching, I also see ssl_ocsp_cache, which I haven't defined, so it's at the default:

  ssl_ocsp_cache off;

Any clues as to what's missing/misconfig'd and responsible for the 1st-time-only fails I see?
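[Editor's note, not raised in the thread: the same module also offers `ssl_stapling_file`, which staples a response pre-fetched to disk instead of querying the responder, and so sidesteps the first-hit cache miss entirely. A sketch; the file path and the out-of-band refresh job are hypothetical:]

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /path/to/fullchain.ec.crt.pem;   # placeholder paths
    ssl_certificate_key /path/to/priv.ec.key;

    ssl_stapling on;
    # DER-encoded OCSP response, refreshed out-of-band (e.g. from cron
    # with "openssl ocsp ... -respout /etc/nginx/ocsp/example.com.der")
    ssl_stapling_file /etc/nginx/ocsp/example.com.der;
}
```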
Re: How to patch and/or upgrade Nginx from source in production environment?
> My primary driving reason for considering the deployment of Nginx from
> source is to use ModSecurity WAF with Nginx. I'm under the impression
> that it's much easier to use ModSecurity with Nginx when compiled from
> source.

If ModSecurity is the issue ... there are old instructions easily found ON the nginx.com site,

  https://www.nginx.com/blog/compiling-and-installing-modsecurity-for-open-source-nginx/

for building it as a dynamic module, which can be separately built and added to a packaged nginx build -- no need to rebuild/repackage/reinstall nginx itself. Of course, you need to match the source version to your pkg'd version.

But note, NGINX is dumping ... er ... "Transitioning to End-of-Life" ... ModSecurity support,

  F5 NGINX ModSecurity WAF Is Transitioning to End-of-Life
  https://www.nginx.com/blog/f5-nginx-modsecurity-waf-transitioning-to-eol/

and ModSecurity itself is on its way out,

  Talking about ModSecurity and the new Coraza WAF
  https://coreruleset.org/20211222/talking-about-modsecurity-and-the-new-coraza-waf/

but it's not quite dead yet. In the interim, there's ModSecurity v3/master, https://github.com/SpiderLabs/ModSecurity , with a new architecture, and a specific nginx connector, https://github.com/SpiderLabs/ModSecurity-nginx , which can, similarly to the above, be built/added as a dynamic module, and still works well enough.

And here's a useful tutorial for setting up nginx + libmodsecurity:

  Configure LibModsecurity with Nginx on CentOS 8
  https://kifarunix.com/configure-libmodsecurity-with-nginx-on-centos-8/
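[Editor's sketch of the dynamic-module route described above -- my summary, not the linked instructions verbatim. The version and paths are hypothetical; the nginx source used for the build must match the installed package, and libmodsecurity itself must already be installed.]

```shell
# Build the ModSecurity-nginx connector as a dynamic module against
# sources matching the packaged nginx version, then install the .so.
git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git
wget https://nginx.org/download/nginx-1.24.0.tar.gz
tar xzf nginx-1.24.0.tar.gz
cd nginx-1.24.0
./configure --with-compat --add-dynamic-module=../ModSecurity-nginx
make modules
cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules/
```

[nginx.conf then loads it with `load_module modules/ngx_http_modsecurity_module.so;`. `--with-compat` is what lets a separately built module load into a packaged binary.]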
Re: How to patch and/or upgrade Nginx from source in production environment?
> I don't know the process to install patches.

That's a big ol' red flag. Personally, I'd strongly recommend against building/installing into a *production* env until you're up to snuff with managing the sources, including patches.

That said, are you solving for a real/existing production problem you have? Or is this more a want-to-learn-how-to-build exercise? Looking here,

  https://packages.ubuntu.com/search?keywords=nginx
  https://changelogs.ubuntu.com/changelogs/pool/main/n/nginx/nginx_1.18.0-6ubuntu14.2/changelog
  https://changelogs.ubuntu.com/changelogs/pool/main/n/nginx/nginx_1.22.0-1ubuntu1/changelog

at first glance it sure looks like sources/packages are actively patched & maintained. Is there a specific example of an nginx patch your production environment needed that isn't/wasn't acted upon? If so, had you raised it first with the maintainers, and they refused or failed to act? Or is there a version that you need, for valid reasons, that isn't available to you?

> pkgsrc [1] is one of the good choices to automate builds and manage
> dependencies in a non-root environment on your favorite operating system.

+1. There are many; each is its own rabbit-hole, with its own infrastructure & process gotchas -- i.e., another layer of stuff/complexity. Once mastered, sure: great to have.
Re: How to patch and/or upgrade Nginx from source in production environment?
> I should have mentioned that I'm running in an Ubuntu environment so I'm
> not sure if that makes much difference?

Ubuntu/Debian have all the tools for source builds. They also have the apt packaging solution, and I assume there are available build services. I'm not an Ubuntu/Debian user -- simply a matter of preference. Beyond that, no opinion worth its salt :-/

> worth

Per whose definition? Stuff breaks. You either live with it, patch it yourself, or ask someone else to patch it for you. What's the 'worth' to you of leaving any particular breakage unsolved? i.e., it depends.

> Thoughts? Opinions?

Only: don't blindly do what others suggest. Do what works best for you. For me, 'my' distro chooses not to build/package nginx mainline, or to build/config the way I want. So I do it myself, using the distro's build service & tools. Is it a PITA? Sure -- just less than not having what I need.
Re: How to patch and/or upgrade Nginx from source in production environment?
nginx is an easy build from source, thankfully. Deploying tarball'd local source-builds to other machines is not terrible at all if you isolate your install dir (e.g., 'everything' under /opt/nginx); ansible is your friend. But it's a bit of a slog to deploy into the usual distro env, avoid collisions, and, if needed, cleanly uninstall. Certainly doable, but it can be messy.

To solve for that inconvenience, build your own packages from your own sources on an open build system (e.g., SUSE's OBS, Fedora's COPR, etc.), and install those packages via rpms. For that matter, even local rpmbuilds should be portable, as long as you correctly account for differences in target deployment ENVs. Yes, rpm .spec files can be annoying. It's a trade-off.

> I'm curious how many people run Nginx in a production environment that
> was installed from source and not a package. For those people who are
> running Nginx in this manner, how do you keep Nginx patched when patches
> are released? How do you upgrade your existing Nginx in your production
> environment while minimizing downtime?
>
> Thank you, Ed
Re: Nginx as mail proxy: different domains with different certs
> Name-based (including SNI-based) virtual servers are not supported in the
> mail proxy module. As such, the remaining options are:
>
> - Use multiple names in a certificate
> - Use IP-based (or port-based) virtual servers
>
> You can combine both options as appropriate.

An add'l useful option for mail proxy + SNI:

  https://www.linuxbabe.com/mail-server/smtp-imap-proxy-with-haproxy-debian-ubuntu-centos
Re: nginx: lua modules
> Want to use lua pages with nginx. Can you please suggest what the
> correct modules are? Also, where can I find the same?

Lua support with nginx is third-party -- via OpenResty:

  https://www.nginx.com/resources/wiki/modules/lua/
  https://openresty.org/en/

OpenResty is packaged as a standalone web platform, bundling a modified version of nginx's open-source core. It's possible, though not trivial, to extract the lua bits from OpenResty source and build them for official nginx.

In 2019, nginx chose to develop its own native scripting tools -- njs:

  https://github.com/nginx/njs/issues/179
  https://nginx.org/en/docs/njs/

A useful example read re: Lua 'vs' njs script support/usage in nginx:

  https://www.rkatz.xyz/post/2021-09-13-nginx-njs-experiments/
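[Editor's illustration, not from the post: a minimal njs "hello" for contrast with the Lua route; file names and the listen port are hypothetical, directives per the njs docs.]

```nginx
# nginx.conf
load_module modules/ngx_http_js_module.so;

http {
    js_import main from /etc/nginx/njs/http.js;

    server {
        listen 8080;
        location /hello {
            js_content main.hello;   # handler exported by http.js
        }
    }
}

# /etc/nginx/njs/http.js would contain:
#   function hello(r) { r.return(200, "hello from njs\n"); }
#   export default { hello };
```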
hostname support in geo (ngx_http_geo_module) variable maps?
I'm running nginx/1.23.1. I use 'geo'-based (ngx_http_geo_module) permissions to restrict access to some sites, e.g., for explicit static IPs:

  geo $RESTRICT_ACCESS {
      default 0;
      127.0.0.1/32 1;
      2601:...:abcd 1;
  }

  server {
      ...
      if ($RESTRICT_ACCESS = 0) { return 403; }

It works as intended.

I'd like to add access for a couple of hosts with dynamic IPs. The IPs *are* tracked, and updated to DNS -- e.g., both A & AAAA records exist, and are automatically updated on change, at mydynamicIP.example.com -- so that, in effect:

  geo $RESTRICT_ACCESS {
      default 0;
      127.0.0.1/32 1;
      2601:...:abcd 1;
      mydynamicIP.example.com 1;
  }

At the wiki, there is mention of "ngx_http_rdns_module",

  https://www.nginx.com/resources/wiki/modules/rdns/

which points to

  https://github.com/flant/nginx-http-rdns

but, there:

> Disclaimer (February, 2022): This module hasn't been maintained by its
> original developers for years already.

Is there a recommended/current method for using *hostnames* in geo? Ideally, without lua.
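[Editor's suggestion, not from the thread: geo doesn't take hostnames, but it does take `include`, so one common workaround is to re-resolve the tracked names outside nginx on a schedule, regenerate an include file of address/value pairs, and reload on change. All paths and the schedule below are hypothetical.]

```nginx
# /etc/cron.d/nginx-dyn-geo (hypothetical): every 5 minutes, run a script
# that "dig +short A/AAAA"s each tracked name, writes "ADDR 1;" lines to
# /etc/nginx/geo_dynamic.conf, and runs "nginx -s reload" if it changed.
#
#   */5 * * * * root /etc/nginx/scripts/dyn_geo.sh

# nginx.conf then picks up the generated entries:
geo $RESTRICT_ACCESS {
    default 0;
    127.0.0.1/32 1;
    include /etc/nginx/geo_dynamic.conf;
}
```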
Re: nginx 1.21.5 + PCRE2 build fail @ naxsi
On 12/28/21 13:14, Maxim Dounin wrote:
> The NAXSI bug mentioned in the second commit needs to be fixed before it
> will be possible to build NAXSI with PCRE2.

Noted, & mentioned @ the upstream bug (afaict, no prior relevant bug?). Thanks!

fwiw, without naxsi, nginx 1.21.5 + pcre2 also fails @ ngx_http_lua_module

  https://docs.nginx.com/nginx/admin-guide/dynamic-modules/lua/

with

  grep lua_module nginx.conf
  load_module /usr/local/nginx-modules/ngx_http_lua_module.so;

Seems that's a separate issue, and just newly noted here:

  https://github.com/openresty/lua-nginx-module/issues/1984
nginx 1.21.5 + PCRE2 build fail @ naxsi
with pcre2 enabled for the new-version build, builds with naxsi now fail. i'd reported @ naxsi upstream issue,
https://github.com/nbs-system/naxsi/issues/580
a comment there suggests, "Possibly connected commits: nginx/nginx@931acbf nginx/nginx@d5f1f16 (mentions NAXSI) nginx/nginx@c6fec0b "
the 2nd commit, https://github.com/nginx/nginx/commit/d5f1f169bc71d32b96960266d54e189c69af00ba specifically mentions NAXSI, suggesting it was tested/working before commit to release 1.21.5. unclear if i've missed something required, or if it's still an issue :-/ has anyone here yet been successful with v1.21.5 + PCRE2 + naxsi ?
Re: nginx 1.19.7 + njs {0.5.0/0.5.1/HEAD} fail for Fedora 34; ok for Fedora 33. gcc version? (F33 -> 10x, F34 -> 11x)
On 2/22/21 12:20 PM, Dmitry Volyntsev wrote: No, it is not. Feel free to report it on Github. +1 -> https://github.com/nginx/njs/issues/376 ___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
nginx 1.19.7 + njs {0.5.0/0.5.1/HEAD} fail for Fedora 34; ok for Fedora 33. gcc version? (F33 -> 10x, F34 -> 11x)
i'm re-building nginx 1.19.7 @ fedora's COPR buildsys, enabling Fedora 34 builds, in addition to (current) Fedora 33 builds, https://copr.fedorainfracloud.org/coprs/pgfed/nginx-mainline/build/2013618/ builds with any of njs release tag 0.5.0, 0.5.1 or HEAD are fine for Fedora 33 -- no errors on build, and bins exec ok. builds with same spec, for Fedora 34, fails @ njs ... /usr/bin/gcc -c -pipe -fPIC -fvisibility=hidden -O -W -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wmissing-prototypes -Werror -g -O -I/usr/local/luajit2-openresty/include/luajit-2.1 -O3 -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -march=x86-64 -mtune=generic -O3 -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -march=x86-64 -mtune=generic -DNDK_SET_VAR -Wno-deprecated-declarations \ -Isrc -Ibuild -Injs \ -o build/src/njs_fs.o \ -MMD -MF build/src/njs_fs.dep -MT build/src/njs_fs.o \ src/njs_fs.c !!! 
src/njs_fs.c:1376:33: error: argument 2 of type 'char *' declared as a pointer [-Werror=array-parameter=] 1376 | njs_fs_path(njs_vm_t *vm, char *storage, const njs_value_t *src, | ~~^~~ src/njs_fs.c:96:51: note: previously declared as an array 'char[4097]' 96 | static const char *njs_fs_path(njs_vm_t *vm, char storage[NJS_MAX_PATH + 1], | ~^ /usr/bin/gcc -c -pipe -fPIC -fvisibility=hidden -O -W -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wmissing-prototypes -Werror -g -O -I/usr/local/luajit2-openresty/include/luajit-2.1 -O3 -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -march=x86-64 -mtune=generic -O3 -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -march=x86-64 -mtune=generic -DNDK_SET_VAR -Wno-deprecated-declarations \ -Isrc -Ibuild -Injs \ -o build/src/njs_crypto.o \ -MMD -MF build/src/njs_crypto.dep -MT build/src/njs_crypto.o \ src/njs_crypto.c ... full build log, https://download.copr.fedorainfracloud.org/results/pgfed/nginx-mainline/fedora-34-x86_64/02013618-nginx/builder-live.log.gz i suspect F34's updated gcc version may be at fault @ https://src.fedoraproject.org/rpms/gcc, gcc release versions are Fedora 34 gcc-11.0.0-0.19.fc34 Fedora 33 gcc-10.2.1-3.fc33 i checked @ njs github, and didn't find a gcc11-related issue. is this a known/suspected issue already? or some other cause? ___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
Re: v1.19.5 OOPS: "Main process exited, code=dumped, status=11/SEGV" ?
On 12/5/20 2:35 PM, itpp2012 wrote: Known perl issue, google: "segfault at 10 error 4 in libperl.so" aha. +1. thanks! noting, https://serverfault.com/questions/1041031/nginx-sometimes-gets-killed-after-reloading-it-using-systemd ... If you haven't got a need to run Perl code inside nginx (as most people do not) then you can uninstall the package libnginx-mod-http-perl and restart nginx to avoid the problem. This package was pulled in by the virtual package nginx-extras but most people don't actually run perl in the web server and so don't need it. ... my server IS built with ... --with-http_perl_module=dynamic ... and in config load_module /usr/local/nginx-modules/ngx_http_perl_module.so; afayk, is - load_module /usr/local/nginx-modules/ngx_http_perl_module.so; + #load_module /usr/local/nginx-modules/ngx_http_perl_module.so; a sufficient cure? or is a rebuild withOUT the --with-http_perl_module=dynamic opt required? Since it's dynamic, I suspect the simple disable _should_ do the trick; still reading to find/check details ... ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
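For a dynamically built module, not loading it is enough -- the shared object is only mapped into the process if a `load_module` directive names it, so no rebuild is needed. A sketch of the change (paths are the ones from this thread):

```nginx
# Commenting out the load_module line prevents the perl module from being
# loaded at all; the .so file can stay on disk harmlessly. If any perl_*
# directives (perl_modules, perl_set, ...) remain elsewhere in the config,
# `nginx -t` will now flag them as unknown, which makes leftovers easy to find.
#load_module /usr/local/nginx-modules/ngx_http_perl_module.so;
```

After the edit, `nginx -t` followed by a full restart (not just a reload, since the crashing master process needs replacing) should confirm the fix.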
v1.19.5 OOPS: "Main process exited, code=dumped, status=11/SEGV" ?
I'm running nginx/1.19.5 on a Fedora32 VM, w/

uname -rm
 5.9.11-100.fc32.x86_64 x86_64

It's run for ages without issues. At least none that I'd noticed ... Today, I caught a SEGV/core-dump; the server stopped

systemctl status nginx
 ● nginx.service - The nginx HTTP and reverse proxy server
      Loaded: loaded (/etc/systemd/system/nginx.service; enabled; vendor preset: disabled)
      Active: failed (Result: core-dump) since Sat 2020-12-05 05:58:03 PST; 7h ago
     Process: 993 ExecStartPre=/bin/chown -R wwwrun:www /usr/local/etc/nginx (code=exited, status=0/SUCCESS)
     Process: 999 ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/etc/nginx/nginx.conf -g pid /run/nginx/nginx.pid; (code=exited, status=0/SUCCESS)
     Process: 1063 ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/etc/nginx/nginx.conf -g pid /run/nginx/nginx.pid; (code=exited, status=0/SUCCESS)
     Process: 1108 ExecStartPost=/bin/chown -R wwwrun:www /var/log/nginx (code=exited, status=0/SUCCESS)
     Process: 25986 ExecReload=/bin/kill -s HUP $MAINPID (code=exited, status=0/SUCCESS)
    Main PID: 1103 (code=dumped, signal=SEGV)
         CPU: 14.607s

Checking logs (at current production loglevels) for this one, nothing out of the ordinary ... 
EXCEPT The last log entry I see,: /var/log/nginx/main.access.log:61.219.11.153 _ - [05/Dec/2020:05:08:14 -0800] \x01A\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 "400" 150 "-" "-" "-" Given the proximity of the timestamp, I'g guess it's related? I haven't yet figured out where/how to grab the core-dump ; working on that. Checking history it's happened a few times xzegrep SEGV /var/log/messages 2020-11-29T05:52:32.436235-08:00 vm0026 systemd[1]: nginx.service: Main process exited, code=dumped, status=11/SEGV 2020-12-01T05:39:03.218376-08:00 vm0026 systemd[1]: nginx.service: Main process exited, code=dumped, status=11/SEGV 2020-12-03T05:17:51.653637-08:00 vm0026 systemd[1]: nginx.service: Main process exited, code=dumped, status=11/SEGV 2020-12-05T05:58:03.611240-08:00 vm0026 systemd[1]: nginx.service: Main process exited, code=dumped, status=11/SEGV where each instance in log looks like, 2020-12-05T05:55:00.854490-08:00 vm0026 systemd[25768]: Reached target Shutdown. 
2020-12-05T05:55:00.854510-08:00 vm0026 systemd[25768]: systemd-exit.service: Succeeded. 2020-12-05T05:55:00.854531-08:00 vm0026 systemd[25768]: Finished Exit the Session. 2020-12-05T05:55:00.854550-08:00 vm0026 systemd[25768]: Reached target Exit the Session. 2020-12-05T05:55:00.858225-08:00 vm0026 systemd[1]: user@0.service: Succeeded. 2020-12-05T05:55:00.858322-08:00 vm0026 systemd[1]: Stopped User Manager for UID 0. 2020-12-05T05:55:00.860232-08:00 vm0026 systemd[1]: Stopping User Runtime Directory /run/user/0... 2020-12-05T05:55:00.868288-08:00 vm0026 systemd[1]: run-user-0.mount: Succeeded. 2020-12-05T05:55:00.870265-08:00 vm0026 systemd[1]: user-runtime-dir@0.service: Succeeded. 2020-12-05T05:55:00.870383-08:00 vm0026 systemd[1]: Stopped User Runtime Directory /run/user/0. 2020-12-05T05:55:00.871216-08:00 vm0026 systemd[1]: Removed slice User Slice of UID 0. 2020-12-05T05:58:03.418222-08:00 vm0026 systemd[1]: Reloading The nginx HTTP and reverse proxy server. 2020-12-05T05:58:03.420214-08:00 vm0026 systemd[1]: Reloaded The nginx HTTP and reverse proxy server. 2020-12-05T05:58:03.432221-08:00 vm0026 systemd[1]: nginx.service: Unit cannot be reloaded because it is inactive. 2020-12-05T05:58:03.432358-08:00 vm0026 systemctl[25987]: nginx.service is not active, cannot reload. 2020-12-05T05:58:03.468235-08:00 vm0026 kernel: nginx[1103]: segfault at 10 ip 7f5c566d6283 sp 7ffeebdca500 error
Re: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ?
On 6/2/20 12:34 PM, Maxim Dounin wrote: > The mis-match comes from trying to redefine the name in some parts > of the configuration but not others. Hope the above explanation > helps. I've reread your comment That is, the name you've written in the proxy_pass directive is the actual hostname, and it will be used in the Host header when creating requests to upstream server. And it is also used in the proxy_ssl_name, so it will be used during SSL handshake for SNI and certificate verification. It's not just "an upstream name". If you want it to be only an upstream name, you'll have to redefine at least proxy_ssl_name and "proxy_set_header Host". (Well, not really, since $proxy_host is also used at least in the proxy_cache_key, but this is probably not that important.) a bunch of times. Still can't grasp it clearly. Which is the source of the pebkac :-/ Otoh, simply _doing_ Alternatively, you may want to use the real name, and define an upstream{} block with that name. This way you won't need to redefine anything. i.e., changing to EITHER case (1): vhost config, - upstream test-upstream { + upstream test.example.com { server test.example.com:1; } server { listen 10.10.10.1:443 ssl http2; server_name example.com; ... 
location /app1 { proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_certificate "/etc/ssl/nginx/test.client.crt"; proxy_ssl_certificate_key "/etc/ssl/nginx/test.client.key"; proxy_ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; - proxy_pass https://test-upstream/; + proxy_pass https://test.example.com/; proxy_ssl_server_name on; proxy_ssl_name test.example.com; } } and, upstream config server { listen 127.0.0.1:1 ssl http2; server_name test.example.com; root /srv/www/test; index index.php; expires -1; ssl_certificate "/etc/ssl/nginx/test.server.crt"; ssl_certificate_key "/etc/ssl/nginx/test.server.key"; ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; - ssl_verify_client off; + ssl_verify_client on; ssl_verify_depth 2; ssl_client_certificate "/etc/ssl/nginx/ca_int.crt"; location ~ \.php { try_files $uri =404; fastcgi_pass phpfpm; fastcgi_index index.php; fastcgi_param PATH_INFO $fastcgi_script_name; include includes/fastcgi/fastcgi_params; } error_log /var/log/nginx/test.error.log info; } or case (2): vhost config, - upstream test-upstream { + upstream JUNK { server test.example.com:1; } server { listen 10.10.10.1:443 ssl http2; server_name example.com; ... location /app1 { proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_certificate "/etc/ssl/nginx/test.client.crt"; proxy_ssl_certificate_key "/etc/ssl/nginx/test.client.key"; proxy_ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt"; - proxy_pass https://test-upstream/; + proxy_pass https://test.example.com:1/; proxy_ssl_server_name on; proxy_ssl_name test.example.com; } } and, upstream config server { listen 127.0.0.1:1 ssl http2; server_name test.example.com;
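Putting case (1) together, the essential point is that the upstream{} block name, the proxy_pass host, and the certificate's CN/SAN all agree, so nothing needs redefining. A condensed sketch (directives taken from the thread's own config):

```nginx
# Condensed case (1): naming the upstream{} block after the real hostname
# means $proxy_host, the SNI sent upstream, and the name checked during
# certificate verification all line up automatically.
upstream test.example.com {
    server test.example.com:1;
}

server {
    listen      10.10.10.1:443 ssl http2;
    server_name example.com;

    location /app1 {
        proxy_pass                    https://test.example.com/;
        proxy_ssl_verify              on;
        proxy_ssl_verify_depth        2;
        proxy_ssl_server_name         on;
        proxy_ssl_certificate         "/etc/ssl/nginx/test.client.crt";
        proxy_ssl_certificate_key     "/etc/ssl/nginx/test.client.key";
        proxy_ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt";
    }
}
```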
Re: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ?
On 6/2/20 8:27 AM, Francis Daly wrote:
> That suggests that if you choose to use "proxy_ssl_server_name on;",
> then you almost certainly do not want to add your own "proxy_set_header
> Host" value.
>
> The nginx code probably should not try to check for (and reject) that
> combination of directives-and-values; but might it be worth adding a
> note to http://nginx.org/r/proxy_ssl_server_name to say that that other
> directive is probably a bad idea, especially if you get a http 421 response
> from your upstream?

trying to simplify/repeat, i've vhost config,

upstream test-upstream {
    server test.example.com:1;
}
server {
    listen      10.10.10.1:443 ssl http2;
    server_name example.com;
    ...
    location /app1 {
        proxy_ssl_verify              on;
        proxy_ssl_verify_depth        2;
        proxy_ssl_certificate         "/etc/ssl/nginx/test.client.crt";
        proxy_ssl_certificate_key     "/etc/ssl/nginx/test.client.key";
        proxy_ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt";
        proxy_pass                    https://test-upstream/;
        proxy_ssl_server_name         on;
        proxy_ssl_name                test.example.com;
    }
}

and, upstream config

server {
    listen      127.0.0.1:1 ssl http2;
    server_name test.example.com;
    root        /srv/www/test;
    index       index.php;
    expires     -1;
    ssl_certificate         "/etc/ssl/nginx/test.server.crt";
    ssl_certificate_key     "/etc/ssl/nginx/test.server.key";
    ssl_trusted_certificate "/etc/ssl/nginx/ca_int.crt";
    ssl_verify_client       off;
    ssl_verify_depth        2;
    ssl_client_certificate  "/etc/ssl/nginx/ca_int.crt";
    location ~ \.php {
        try_files     $uri =404;
        fastcgi_pass  phpfpm;
        fastcgi_index index.php;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        include       includes/fastcgi/fastcgi_params;
    }
    error_log /var/log/nginx/test.error.log info;
}

on access to https://example.com/app1 still get

421 Misdirected Request

in log

==> /var/log/nginx/test.error.log <==
2020/06/02 11:52:13 [info] 8713#8713: *18 client attempted to request the server name different from the one that was negotiated while reading client request headers, client: 127.0.0.1, server: test.example.com, request: "GET / HTTP/1.0", 
host: "test-upstream"

Is that host: "test-upstream" to be expected? It's an upstream name, not an actual host. Still unable to wrap my head around where this mismatch is coming from ... I have a nagging suspicion I'm missing something *really* obvious :-/
Re: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ?
with patch applied, and 'proxy_ssl_server_name on;' this is where the problem appears 2020/06/02 00:50:08 [debug] 20166#20166: *3 verify:1, error:0, depth:2, subject:"/O=example.com/OU=example.com_CA/L=New_York/ST=NY/C=US/emailAddress=ad...@example.com/CN=example.com_CA", issuer:"/O=example.com/OU=example.com_CA/L=New_York/ST=NY/C=US/emailAddress=ad...@example.com/CN=example.com_CA" 2020/06/02 00:50:08 [debug] 20166#20166: *3 verify:1, error:0, depth:1, subject:"/C=US/ST=NY/O=example.com/OU=example.com_CA/CN=example.com_CA_INTERMEDIATE/emailAddress=ad...@example.com", issuer:"/O=example.com/OU=example.com_CA/L=New_York/ST=NY/C=US/emailAddress=ad...@example.com/CN=example.com_CA" 2020/06/02 00:50:08 [debug] 20166#20166: *3 verify:1, error:0, depth:0, subject:"/C=US/ST=NY/L=New_York/O=example.com/OU=example.com_CA/CN=test.example.net/emailAddress=ad...@example.com", issuer:"/C=US/ST=NY/O=example.com/OU=example.com_CA/CN=example.com_CA_INTERMEDIATE/emailAddress=ad...@example.com" 2020/06/02 00:50:08 [debug] 20166#20166: *3 ssl new session: 0E2A0672:32:1105 2020/06/02 00:50:08 [debug] 20166#20166: *3 ssl new session: 31C878D7:32:1104 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL_do_handshake: 1 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL: TLSv1.3, cipher: "TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD" 2020/06/02 00:50:08 [debug] 20166#20166: *3 reusable connection: 1 2020/06/02 00:50:08 [debug] 20166#20166: *3 http wait request handler 2020/06/02 00:50:08 [debug] 20166#20166: *3 malloc: 555967A0B2E0:1024 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL_read: 772 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL_read: -1 2020/06/02 00:50:08 [debug] 20166#20166: *3 SSL_get_error: 2 2020/06/02 00:50:08 [debug] 20166#20166: *3 reusable connection: 0 2020/06/02 00:50:08 [debug] 20166#20166: *3 posix_memalign: 5559678F6460:4096 @16 2020/06/02 00:50:08 [debug] 20166#20166: *3 posix_memalign: 5559675113A0:4096 @16 2020/06/02 
00:50:08 [debug] 20166#20166: *3 http process request line 2020/06/02 00:50:08 [debug] 20166#20166: *3 http request line: "GET /app1 HTTP/1.1" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http uri: "/app1" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http args: "" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http exten: "" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http process request header line 2020/06/02 00:50:08 [info] 20166#20166: *3 client attempted to request the server name different from the one that was negotiated while reading client request headers, client: 127.0.0.1, server: test.example.net, request: "GET /app1 HTTP/1.1", host: "example.net" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http finalize request: 421, "/app1?" a:1, c:1 2020/06/02 00:50:08 [debug] 20166#20166: *3 event timer del: 50: 3334703 2020/06/02 00:50:08 [debug] 20166#20166: *3 http special response: 421, "/app1?" 2020/06/02 00:50:08 [debug] 20166#20166: *3 http set discard body 2020/06/02 00:50:08 [debug] 20166#20166: *3 headers more header filter, uri "/app1" 2020/06/02 00:50:08 [debug] 20166#20166: *3 lua capture header filter, uri "/app1" 2020/06/02 00:50:08 [debug] 20166#20166: *3 xslt filter header 2020/06/02 00:50:08 [debug] 20166#20166: *3 charset: "" > "utf-8" 2020/06/02 00:50:08 [debug] 20166#20166: *3 HTTP/1.1 421 Misdirected Request noting 2020/06/02 00:50:08 [info] 20166#20166: *3 client attempted to request the server name different from the one that was negotiated while reading client request headers, client: 127.0.0.1, server: test.example.net, request: "GET /app1 HTTP/1.1", host: "example.net" now, need to stare at this and try to figure out 'why?' ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ?
On 6/1/20 8:42 AM, Maxim Dounin wrote: > > proxy_ssl_server_name on; > > to see if it helps. See http://nginx.org/r/proxy_ssl_server_name > for details. enabling it _has_ an effect. now, access to https://example.com/app1 responds, - 502 Bad Gateway + 421 Misdirected Request > > You may also try the following patch to provide somewhat better > debug logging when checking upstream server SSL certificates: I'll get this in place & see what i learn ... ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
proxy_ssl_verify error: 'upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream', for CN/SAN 'matched' client & server certs ?
I'm running nginx -V nginx version: nginx/1.19.0 (pgnd Build) built with OpenSSL 1.1.1g 21 Apr 2020 TLS SNI support enabled ... It serves as front-end SSL termination, site host, and reverse-proxy to backend apps. I'm trying to get a backend app to proxy_ssl_verify the proxy connection to it. I have two self-signed certs: One for "TLS Web Client Authentication, E-mail Protection" openssl x509 -in test.example.com.client.crt -text | egrep "Subject.*CN|DNS|TLS" Subject: C = US, ST = NY, L = New_York, O = example2.com, OU = myCA, CN = test.example.com, emailAddress = s...@example2.com TLS Web Client Authentication, E-mail Protection DNS:test.example.com, DNS:www.test.example.com, DNS:localhost and the other, for "TLS Web Server Authentication" openssl x509 -in test.example.com.server.crt -text | egrep "Subject.*CN|DNS|TLS" Subject: C = US, ST = NY, L = New_York, O = example2.com, OU = myCA, CN = test.example.com, emailAddress = s...@example2.com TLS Web Server Authentication DNS:test.example.com, DNS:www.test.example.com, DNS:localhost The certs 'match' CN & SAN, differing in "X509v3 Extended Key Usage". Both are verified "OK" with my local CA cert openssl verify -CAfile myCA.crt.pem test.example.com.server.crt test.example.com.server.crt: OK openssl verify -CAfile /myCA.crt.pem test.example.com.client.crt test.example.com.client.crt: OK My main nginx config includes, upstream test.example.com { server test.example.com:1; } server { listen 10.10.10.1:443 ssl http2; server_name example.com; ... 
    ssl_verify_client      on;
    ssl_client_certificate "/etc/ssl/nginx/myCA.crt";
    ssl_verify_depth       2;
    ssl_certificate         "/etc/ssl/nginx/example.com.server.crt";
    ssl_certificate_key     "/etc/ssl/nginx/example.com.server.key";
    ssl_trusted_certificate "/etc/ssl/nginx/myCA.crt";

    location /app1 {
        proxy_pass https://test.example.com;
        proxy_ssl_certificate         "/etc/ssl/nginx/test.example.com.client.crt";
        proxy_ssl_certificate_key     "/etc/ssl/nginx/test.example.com.client.key";
        proxy_ssl_trusted_certificate "/etc/ssl/nginx/myCA.crt";
        proxy_ssl_verify       on;
        proxy_ssl_verify_depth 2;
        include includes/reverse-proxy.inc;
    }
}

and the upstream config,

server {
    listen      127.0.0.1:1 ssl http2;
    server_name test.example.com;
    root        /data/webapps/demo_app/;
    index       index.php;
    expires     -1;
    ssl_certificate        "/etc/ssl/nginx/test.example.com.server.crt";
    ssl_certificate_key    "/etc/ssl/nginx/test.example.com.server.key";
    ssl_client_certificate "/etc/ssl/nginx/myCA.crt";
    ssl_verify_client      optional;
    ssl_verify_depth       2;
    location ~ \.php {
        try_files     $uri =404;
        fastcgi_pass  phpfpm;
        fastcgi_index index.php;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        include       fastcgi_params;
    }
}

access to https://example.com/app1 responds,

502 Bad Gateway

logs show an SSL handshake fail ...

2020/05/29 19:00:06 [debug] 29419#29419: *7 SSL: TLSv1.3, cipher: "TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD"
2020/05/29 19:00:06 [debug] 29419#29419: *7 http upstream ssl handshake: "/app1/?"
2020/05/29 19:00:06 [debug] 29419#29419: *7 X509_check_host(): no match
2020/05/29 19:00:06 [error] 29419#29419: *7 upstream SSL certificate does not match "test.example.com" while SSL handshaking to upstream, client: 10.10.10.73, server: example.com, request: "GET /app1/ HTTP/2.0", upstream: "https://127.0.0.1:1/app1/", host: "example.com"
2020/05/29 19:00:06 [debug] 29419#29419: *7 http next upstream, 2 ... 
If I toggle - ssl_verify_client on; + ssl_verify_client off; then I'm able to connect to the backend site, as expected. What exactly is NOT matching in the handshake? CN & SAN do ... &/or, is there a config problem above? ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
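One knob worth checking first (it is the one suggested upthread): the name verified against the upstream certificate is `proxy_ssl_name`, which defaults to the host part of `proxy_pass`; and SNI is not sent to the upstream unless `proxy_ssl_server_name` is on. Pinning both explicitly is a reasonable first step when `X509_check_host(): no match` appears:

```nginx
# Make the verified/sent name explicit rather than relying on the
# proxy_pass host. Certificate paths are the ones from this thread.
location /app1 {
    proxy_pass                    https://test.example.com;
    proxy_ssl_verify              on;
    proxy_ssl_verify_depth        2;
    proxy_ssl_server_name         on;                 # send SNI upstream
    proxy_ssl_name                test.example.com;   # name to send & verify
    proxy_ssl_trusted_certificate "/etc/ssl/nginx/myCA.crt";
}
```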
Re: editing a general location match to exclude one, specific instance?
> Second, it's all in the location documentation: I'm not asking about the order. I'm asking about a specific match(es) that'd work in this specific case. If it's trivial, care to share a working example? ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
editing a general location match to exclude one, specific instance?
editing a general location match to exclude one, specific instance? I run nginx 1.18.0. I've had a trivial 'protection' rule in place for a long time location ~* (gulpfile\.js|settings.php|readme|schema|htpasswd|password|config) { deny all; } That hasn't caused me any particular problems. Recently, I've added a proxied back end app. In logs I see ==> /var/log/nginx/auth.example1.com.error.log <== 2020/05/12 22:16:39 [error] 57803#57803: *1 access forbidden by rule, client: 10.10.10.10, server: testapp.example1.com, request: "GET /api/configuration HTTP/2.0", host: "testapp.example1.com", referrer: "https://testapp.example1.com/?rd=https://example2.net/app2; removing the "config" match from the protection rule, - location ~* (gulpfile\.js|settings.php|readme|schema|htpasswd|password|config) { + location ~* (gulpfile\.js|settings.php|readme|schema|htpasswd|password) { eliminates the problem. I'd like to edit the match to PASS that^ logged match -- as specifically/uniquely as possible -- but CONTINUE to 'deny all' for all other/remaining matches on "config". How would that best be done? A preceding location match? Or editing the existing one? ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
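One answer to the question as asked: an exact-match (`=`) location is selected before any regex location is even considered, so the single legitimate URI can be carved out with a preceding location while the broad deny rule stays intact. A sketch (the backend name is hypothetical; the regex is the thread's own rule, with the dots escaped):

```nginx
# Exact match wins over regex locations regardless of order, so this
# passes the one known-good URI to the proxied app ...
location = /api/configuration {
    proxy_pass https://testapp-backend;
}

# ... while every other URI containing "config" still hits the deny rule.
location ~* (gulpfile\.js|settings\.php|readme|schema|htpasswd|password|config) {
    deny all;
}
```

This is more precise than loosening the regex itself, since it allows exactly one path rather than every URI matching a narrower pattern.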
Re: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ?
You may want to re-read my initial answer and the ticket it links to. If that were _clear_, neither I nor others would STILL be spending time/effort trying to understand & clarify this. Nevermind. ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ?
On 7/19/19 11:02 AM, Maxim Dounin wrote: > Hello! > > On Fri, Jul 19, 2019 at 10:52:55AM -0700, PGNet Dev wrote: > >>>> And, if I change nginx to be 'TLSv1.3-only', >>>> >>>> - ssl_protocols TLSv1.3 TLSv1.2; >>>> - ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 >>>> TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305"; >>>> + ssl_protocols TLSv1.3; >>>> + ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 >>>> TLS13-AES-128-GCM-SHA256"; >>>> >>>> even the webserver config check FAILs, >>>> >>>>nginxconfcheck >>>>TLS13-AES-128-GCM-SHA256") failed (SSL: error:1410D0B9:SSL >>>> routines:SSL_CTX_set_cipher_list:no cipher match) >>>>nginx: configuration file /usr/local/etc/nginx/nginx.conf test >>>> failed >>>> >>>> and the server fails to start. >>> >>> That's because the cipher string listed contains no valid ciphers. >> >> >> Sorry, I'm missing something :-/ >> >> What's specifically "invalid" about the 3, listed ciphers? >> >> TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 >> TLS13-AES-128-GCM-SHA256 > > There are no such ciphers in the OpenSSL. > Try it yourself: > > $ openssl ciphers TLS13-CHACHA20-POLY1305-SHA256 > Error in cipher list > 0:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher > match:ssl/ssl_lib.c:2549: > > [...] > Then what are these lists? 
https://wiki.openssl.org/index.php/TLS1.3 Ciphersuites OpenSSL has implemented support for five TLSv1.3 ciphersuites as follows: TLS_AES_256_GCM_SHA384 TLS_CHACHA20_POLY1305_SHA256 TLS_AES_128_GCM_SHA256 TLS_AES_128_CCM_8_SHA256 TLS_AES_128_CCM_SHA256 https://www.openssl.org/blog/blog/2017/05/04/tlsv1.3/ Ciphersuites OpenSSL has implemented support for five TLSv1.3 ciphersuites as follows: TLS13-AES-256-GCM-SHA384 TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-128-GCM-SHA256 TLS13-AES-128-CCM-8-SHA256 TLS13-AES-128-CCM-SHA256 "$ openssl ciphers -s -v ECDHE Will list all the ciphersuites for TLSv1.2 and below that support ECDHE and additionally all of the default TLSv1.3 ciphersuites." openssl ciphers -s -v ECDHE >> TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) >> Mac=AEAD >> TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any >> Enc=CHACHA20/POLY1305(256) Mac=AEAD >> TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) >> Mac=AEAD ... openssl ciphers -tls1_3 >> TLS_AES_256_GCM_SHA384: >> TLS_CHACHA20_POLY1305_SHA256: >> TLS_AES_128_GCM_SHA256: 
ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA

openssl ciphers TLS13-CHACHA20-POLY1305-SHA256
Error in cipher list
140418731745728:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549:

openssl ciphers TLS-CHACHA20-POLY1305-SHA256
Error in cipher list
140126717628864:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl/ssl_lib.c:2549:

openssl ciphers TLS13_CHACHA20_POLY1305_SHA256
Error in cipher list
Re: nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ?
>> And, if I change nginx to be 'TLSv1.3-only',
>>
>>   -ssl_protocols TLSv1.3 TLSv1.2;
>>   -ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305";
>>   +ssl_protocols TLSv1.3;
>>   +ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256";
>>
>> even the webserver config check FAILs,
>>
>>   nginxconfcheck
>>   ... TLS13-AES-128-GCM-SHA256") failed (SSL: error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match)
>>   nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed
>>
>> and the server fails to start.
>
> That's because the cipher string listed contains no valid ciphers.

Sorry, I'm missing something :-/  What's specifically "invalid" about the 3 listed ciphers?

  TLS13-CHACHA20-POLY1305-SHA256
  TLS13-AES-256-GCM-SHA384
  TLS13-AES-128-GCM-SHA256

as stated here, https://www.openssl.org/blog/blog/2018/02/08/tlsv1.3/

  OpenSSL has implemented support for five TLSv1.3 ciphersuites as follows:
    TLS13-AES-256-GCM-SHA384
    TLS13-CHACHA20-POLY1305-SHA256
    TLS13-AES-128-GCM-SHA256
    TLS13-AES-128-CCM-8-SHA256
    TLS13-AES-128-CCM-SHA256

For

  openssl ciphers -stdname -s -V 'TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:TLS13-AES-256-GCM-SHA384:ECDHE:!AES128:!SHA1:!SHA256:!SHA384:!COMPLEMENTOFDEFAULT'

the output is

  0x13,0x02 - TLS_AES_256_GCM_SHA384 - TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD
  0x13,0x03 - TLS_CHACHA20_POLY1305_SHA256 - TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256) Mac=AEAD
  0x13,0x01 - TLS_AES_128_GCM_SHA256 - TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD
  0xC0,0x2C - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 - ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD
  0xC0,0x30 - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 - ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD
  0xCC,0xA9 - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=ECDSA Enc=CHACHA20/POLY1305(256) Mac=AEAD
  0xCC,0xA8 - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 - ECDHE-RSA-CHACHA20-POLY1305 TLSv1.2 Kx=ECDH Au=RSA Enc=CHACHA20/POLY1305(256) Mac=AEAD

Using the alias TLSv1.3 ciphersuite names is also fine:

  openssl ciphers -stdname -s -V 'TLS-CHACHA20-POLY1305-SHA256:TLS-AES-128-GCM-SHA256:TLS-AES-256-GCM-SHA384:ECDHE:!AES128:!SHA1:!SHA256:!SHA384:!COMPLEMENTOFDEFAULT'

returns identical output to the above.

If, in nginx config,

  ssl_protocols TLSv1.3 TLSv1.2;
  ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:TLS13-AES-256-GCM-SHA384:ECDHE:!AES128:!SHA1:!SHA256:!SHA384:!COMPLEMENTOFDEFAULT";

ssllabs.com/ssltest reports

  Configuration
  Protocols
    TLS 1.3  Yes
    TLS 1.2  Yes
    TLS 1.1  No
    TLS 1.0  No
    SSL 3    No
    SSL 2    No
  For TLS 1.3 tests, we only support RFC 8446.

  Cipher Suites
  # TLS 1.3 (suites in server-preferred order)
    TLS_AES_256_GCM_SHA384 (0x1302)        ECDH x25519 (eq. 3072 bits RSA)  FS  256
    TLS_CHACHA20_POLY1305_SHA256 (0x1303)  ECDH x25519 (eq. 3072 bits RSA)  FS  256
    TLS_AES_128_GCM_SHA256 (0x1301)        ECDH x25519 (eq. 3072 bits RSA)  FS  128
  # TLS 1.2 (suites in server-preferred order)
    TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 (0xc02c)  ECDH x25519 (eq. 3072 bits RSA)  FS  256
    TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030)    ECDH x25519 (eq. 3072 bits RSA)  FS  256
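For context on the "no cipher match" failure quoted above: the TLS13-* names come from pre-release OpenSSL TLSv1.3 drafts. In released OpenSSL 1.1.1 and later, TLSv1.3 suites are configured via SSL_CTX_set_ciphersuites(), not SSL_CTX_set_cipher_list() -- and the latter is the call behind nginx's ssl_ciphers, so a cipher string containing only TLSv1.3 names matches nothing. In later nginx releases (1.19.4+) the ciphersuites list is reachable with ssl_conf_command; a minimal sketch (listen address illustrative, not from the thread):

```nginx
# Sketch, assuming nginx >= 1.19.4 built with OpenSSL 1.1.1+.
# ssl_ciphers maps to SSL_CTX_set_cipher_list(), which only matches
# TLSv1.2-and-earlier cipher names; TLSv1.3 suites are set separately
# via SSL_CTX_set_ciphersuites(), reachable with ssl_conf_command.
server {
    listen 443 ssl;
    ssl_protocols TLSv1.3;
    # RFC 8446 suite names, colon-separated, in preference order:
    ssl_conf_command Ciphersuites TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384;
}
```

The nginx 1.17.1 in this thread predates ssl_conf_command, which is why no ssl_ciphers string could select TLSv1.3 suites there.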
nginx 1.17.1 configcheck fails if config'd for TLSv1.3-only ?
I run nginx

  nginx -v
  nginx version: nginx/1.17.1

on linux/64. I've installed

  which openssl
  /usr/local/openssl/bin/openssl
  openssl version
  OpenSSL 1.1.1c  28 May 2019

nginx is built with/linked to this version

  ldd `which nginx` | grep ssl
  libssl.so.1.1 => /usr/local/openssl/lib64/libssl.so.1.1 (0x7f95bdc09000)
  libcrypto.so.1.1 => /usr/local/openssl/lib64/libcrypto.so.1.1 (0x7f95bd6f9000)

I'm currently working on setting up a local-only server, attempting to get it to use TLSv1.3/CHACHA20 only. I've tightened down restrictions in nginx config. With my attempted restrictions in place, I've found that I'm apparently NOT using TLSv1.3/CHACHA20.

With this nginx config

  server {
    listen 10.0.1.20:443 ssl http2;
    server_name test.dev.lan;
    root /data/webapps/nulldir;
    index index.html;
    rewrite_log on;
    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log info;
    ssl_protocols TLSv1.3 TLSv1.2;
    ssl_ciphers "TLS13-CHACHA20-POLY1305-SHA256 TLS13-AES-256-GCM-SHA384 TLS13-AES-128-GCM-SHA256 ECDHE-ECDSA-CHACHA20-POLY1305";
    ssl_ecdh_curve X25519:prime256v1:secp384r1;
    ssl_prefer_server_ciphers on;
    ssl_trusted_certificate "/usr/local/etc/ssl/myCA/myCA.chain.crt.pem";
    ssl_certificate "/usr/local/etc/ssl/test/test.ec.crt.pem";
    ssl_certificate_key "/usr/local/etc/ssl/test/test.ec.key.pem";
    location / { }
  }

config check is ok,

  nginxconfcheck
  nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
  nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful

and I see a TLS 1.3 handshake,

  openssl s_client -connect 10.0.1.20:443 -CAfile /usr/local/etc/ssl/myCA/myCA.chain.crt.pem
  CONNECTED(0003)
  Can't use SSL_get_servername
  depth=2 O = dev.lan, OU = myCA, L = NewYork, ST = NY, C = US, emailAddress = ad...@dev.lan, CN = myCA_ROOT
  verify return:1
  depth=1 C = US, ST = NY, O = dev.lan, OU = myCA, CN = myCA_INT, emailAddress = ad...@dev.lan
  verify return:1
  depth=0 C = US, ST = NY, L = NewYork, O = dev.lan, OU = myCA, CN = test.dev.lan, emailAddress = ad...@dev.lan
  verify return:1
  ---
  Certificate chain
   0 s:C = US, ST = NY, L = NewYork, O = dev.lan, OU = myCA, CN = test.dev.lan, emailAddress = ad...@dev.lan
     i:C = US, ST = NY, O = dev.lan, OU = myCA, CN = myCA_INT, emailAddress = ad...@dev.lan
  ---
  Server certificate
  -----BEGIN CERTIFICATE-----
  MIIEhjCCBAygAwIBAgICELAwCgYIKoZIzj0EAwIwgbAxCzAJBgNVBAYTAlVTMQsw
  ...
  VHldKgTNpiGuFA==
  -----END CERTIFICATE-----
  subject=C = US, ST = NY, L = NewYork, O = dev.lan, OU = myCA, CN = test.dev.lan, emailAddress = ad...@dev.lan
  issuer=C = US, ST = NY, O = dev.lan, OU = myCA, CN = myCA_INT, emailAddress = ad...@dev.lan
  ---
  No client certificate CA names sent
  Peer signing digest: SHA384
  Peer signature type: ECDSA
  Server Temp Key: X25519, 253 bits
  ---
  SSL handshake has read 1565 bytes and written 373 bytes
  Verification: OK
  ---
  New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
  Server public key is 384 bit
  Secure Renegotiation IS NOT supported
  No ALPN negotiated
  Early data was not sent
  Verify return code: 0 (ok)
  ---
  ---
  Post-Handshake New Session Ticket arrived:
  SSL-Session:
    Protocol : TLSv1.3
    Cipher   : TLS_AES_256_GCM_SHA384
    Session-ID: CA79B0596A2CCF19BBA9A49E086F99E7F811FAC8349888E37531E46B17FE35A9
    Session-ID-ctx:
    Resumption PSK: 9966170E5086490D231260B15CDA6852D0CCDED661D1C075BF0DE3334C89472B158F2524282DD5F1175381B4317D8DC9
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    - 1e 49 9a 75 97 46 90 9c-8a ec 1b 8d ac 90 5a a6  .I.u.FZ.
    ...
    00d0 - 49 e4 e0 50 62 3b 45 a5-10 f9 9e 2e 43 09 41 40
Re: TLS1.3
On 7/18/19 1:15 PM, Thomas Ward wrote:

> Might be helpful to point at https://trac.nginx.org/nginx/ticket/1654#comment:2 and other issues which have spurred the request to rebuild downstream. Which, given that NGINX built against 1.1.0 downstream and OpenSSL downstream in Ubuntu with 1.1.1 is set such that TLS 1.3 is "on by default" and therefore is just 'available' and enabled but not able to be controlled/disabled by NGINX directly, it DOES work with TLS1.3 connections and ciphers. We just can't manipulate things. The developer concern downstream is that this rebuild won't introduce any other TLS 1.3 behaviors not already present as a result of OpenSSL being "TLS1.3 Enabled By Default", which is the current situation.

Thanks for the trac link. fwiw, here I've

  nginx -V
  nginx version: nginx/1.17.1 (local build)
  built with OpenSSL 1.1.1c 28 May 2019
  TLS SNI support enabled
  ...

yet, despite the build, I'm seeing some problems with TLSv1.3 cipher usage/config in Nginx. cref: https://mta.openssl.org/pipermail/openssl-users/2019-July/010881.html

I've _just_ started poking around with that, and don't know what/where the problem lies atm. It _seems_ to me an issue with Nginx, but I simply am unsure ... Perhaps something in the trac issue will light a bulb for me; I'll take a closer look. Thx o/

___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
Re: TLS1.3
On 7/18/19 1:01 PM, Thomas Ward wrote:

> There's a few considerations here. We need to make certain that such a rebuild to allow NGINX to control TLS 1.3 protocol or ciphers isn't going to introduce any additional TLS1.3 behaviors or feature functionality that otherwise would not be controlled by OpenSSL under

Offhand, have you already demonstrated to your satisfaction that TLSv1.3 served up in Nginx is, in fact, using the TLSv1.3 ciphers? Regardless of any 'additional' "behaviors or features or functionality"?

Atm, in some simple testing I'm seeing that it doesn't -- and I'm looking for any evidence, anecdotal or otherwise, that it does, so I can narrow down whether it's 'me' ...

___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
Nextcloud 16 on Nginx 1.17.1 -- "Status: 500 Internal Server Error" & "Something is wrong with your openssl setup" ?
I run nginx/1.17.1 + PHP 7.4.0-dev on linux/64. It's an in-production setup, with lots of directly hosted, as well as proxied, SSL-secured webapps.

I've now installed Nextcloud v16.0.3. For the moment, it's directly hosted on Nginx, not-yet proxied. It installs to DB with no errors. And the site's accessible hosted on Nginx; I can get to the app's login page -- securely.

But when I enter login credentials & submit, I get an nginx/fastcgi http header,

  "Status: 500 Internal Server Error"

and in Nextcloud logs,

  "Something is wrong with your openssl setup: error:02001002:system library:fopen:No such file or directory"

I've posted an issue, with config & error details, here: https://github.com/nextcloud/server/issues/16378

Since I've got lots of other webapps running securely with no issues, and I _am_ able to get to Nextcloud's secure login page with my Nginx-served SSL cert, I suspect the problem's in Nextcloud -- NOT nginx. But thought I'd check in here ... anyone successfully using Nextcloud on Nginx that can suggest what the problem is, or a fix? OR, *is* this an Nginx issue that I simply haven't recognized? A header I've missed, or misconfigured? It's not immediately clear to me why it wouldn't surface on the login page at 1st connect ...

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
how to force/send TLS Certificate Request for all client connections, in client-side ssl-verification?
I've set up my nginx server with self-signed SSL server-side certs, using my own/local CA. Without client-side verification, i.e. just an unverified-TLS connection, all's good.

If I enable client-side SSL cert verification with,

  ssl_certificate "ssl/example.com.server.crt.pem";
  ssl_certificate_key "ssl/example.com.server.key.pem";
  ssl_verify_client on;
  ssl_client_certificate "ssl_cert_dir/CA_intermediate.crt.pem";
  ssl_verify_depth 2;

a connecting android app is failing on connect, receiving FROM the nginx server,

  HTTP RESPONSE: Response{protocol=http/1.1, code=400, message=Bad Request, url=https://proxy.example.com/dav/myuser%40example.com/3d75dc22-8afc-1946-5b3f-4d84e9b28432/}
  400 No required SSL certificate was sent
  400 Bad Request
  No required SSL certificate was sent
  nginx

I've been unsuccessful so far using tshark/ssldump to decrypt the SSL handshake; I suspect (?) it's because my certs are ec signed. Still working on that ...

In 'debug' level nginx logs, I see

  2019/06/30 21:58:14 [debug] 41777#41777: *7 s:0 in:'35:5'
  2019/06/30 21:58:14 [debug] 41777#41777: *7 s:0 in:'2F:/'
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http uri: "/dav/myu...@example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http args: ""
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http exten: ""
  2019/06/30 21:58:14 [debug] 41777#41777: *7 posix_memalign: 558C35B3C840:4096 @16
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http process request header line
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Depth: 0"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Content-Type: application/xml; charset=utf-8"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Content-Length: 241"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Host: proxy.example.com"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Connection: Keep-Alive"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Accept-Encoding: gzip"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Accept-Language: en-US, en;q=0.7, *;q=0.5"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Authorization: Basic 1cC5...WUVi"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http header done
  2019/06/30 21:58:14 [info] 41777#41777: *7 client sent no required SSL certificate while reading client request headers, client: 10.0.1.235, server: proxy.example.com, request: "PROPFIND /dav/myuser%40example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/ HTTP/1.1", host: "proxy.example.com"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http finalize request: 496, "/dav/myu...@example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/?" a:1, c:1
  2019/06/30 21:58:14 [debug] 41777#41777: *7 event timer del: 15: 91237404
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http special response: 496, "/dav/myu...@example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/?"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 http set discard body
  2019/06/30 21:58:14 [debug] 41777#41777: *7 headers more header filter, uri "/dav/myu...@example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 charset: "" > "utf-8"
  2019/06/30 21:58:14 [debug] 41777#41777: *7 HTTP/1.1 400 Bad Request
  Date: Mon, 01 Jul 2019 04:58:14 GMT
  Content-Type: text/html; charset=utf-8
  Content-Length: 230
  Connection: close
  Secure: Groupware Server
  X-Content-Type-Options: nosniff

In comms with the app vendor, I was asked

  Does your proxy send TLS Certificate Request, https://tools.ietf.org/html/rfc5246#section-7.4.4 ?
  ... the TLS stack which is used ... won't send certificates preemptively, but only when they're requested.
  In my tests, client certificates are working as expected, but ONLY if the server explicitly requests them.

I don't recognize the preemptive request above. DOES nginx send such a TLS Certificate Request by default? Is there a required, additional config to force that request?
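For reference, nginx does send a TLS CertificateRequest during the handshake whenever ssl_verify_client is enabled for the server{} block the connection is routed to; no additional directive is needed to force it. A minimal sketch of the relevant wiring (hostnames and paths illustrative, not the poster's actual config):

```nginx
# Sketch: with ssl_verify_client on|optional, nginx includes a TLS
# CertificateRequest in the handshake for connections routed (by SNI /
# default_server) to this server{} block.
server {
    listen 443 ssl;
    server_name proxy.example.com;

    ssl_certificate        ssl/example.com.server.crt.pem;
    ssl_certificate_key    ssl/example.com.server.key.pem;

    # CA(s) used to verify the client's certificate chain; must cover
    # the chain up to ssl_verify_depth:
    ssl_client_certificate ssl/CA_intermediate.crt.pem;
    ssl_verify_client      on;   # 'optional' requests but does not require
    ssl_verify_depth       2;
}
```

With `on`, a client that presents no certificate is rejected at the HTTP layer with 400 (internal code 496, as in the debug log above); with `optional`, the request proceeds and `$ssl_client_verify` can be checked per location. One pitfall worth checking when the client insists it is sending a cert: if SNI routes the connection to a different (e.g. default) server block where verification isn't enabled, no CertificateRequest is sent at all.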
___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: effect of bcrypt hash $cost on HTTP Basic authentication's login performance?
> (And no, it does not look like an appropriate question for the nginx-devel@ list. Consider using nginx@ instead.)

k.

On 7/2/19 5:23 PM, Maxim Dounin wrote:
> On Sat, Jun 29, 2019 at 09:48:01AM -0700, PGNet Dev wrote:
>> When generating hashed data for "HTTP Basic" login auth protection, using bcrypt as the hash algorithm, one can vary the resultant hash strength by varying bcrypt's $cost, e.g. [...] For site login usage, does *client* login time vary at all with the hash $cost? Other than the initial, one-time hash generation, is there any login-performance reason NOT to use the highest hash $cost?
>
> With Basic HTTP authentication, hashing happens on every user request. That is, with high costs you are likely to make your site completely unusable.

Noted.

*ARE* there authentication mechanisms available that do NOT hash on every request? Perhaps via some mode of secure caching? AND that still maintain a high algorithmic cost, to prevent breach attempts, or at least maximize the effort they require?

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
effect of bcrypt hash $cost on HTTP Basic authentication's login performance?
When generating hashed data for "HTTP Basic" login auth protection, using bcrypt as the hash algorithm, one can vary the resultant hash strength by specifying bcrypt's $cost, e.g.

  php -r "echo password_hash('$my_pass', PASSWORD_BCRYPT, ['cost' => $cost]) . PHP_EOL;"

Of course, increased $cost requires increased encryption time. E.g., on my desktop, the hash encryption times vary with cost as,

  cost  time
   5    0m0.043s
   6    0m0.055s
   7    0m0.059s
   8    0m0.075s
   9    0m0.081s
  10    0m0.110s
  11    0m0.169s
  12    0m0.285s
  13    0m0.518s
  14    0m0.785s
  15    0m1.945s
  16    0m3.782s
  17    0m7.512s
  18    0m14.973s
  19    0m29.903s
  20    0m59.735s
  21    1m59.418s
  22    3m58.792s
  ...

For site login usage, does *client* login time vary at all with the hash $cost? Other than the initial, one-time hash generation, is there any login-performance reason NOT to use the highest hash $cost?

___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
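The growth in the table is what bcrypt's design predicts: the work scales as 2**cost, and the near-constant times at low cost are dominated by a fixed `php -r` process-startup overhead. A rough model of the measurements above (the constants are fitted to this table, so it's an illustration, not a benchmark):

```python
# Rough model of the timings above: bcrypt's work scales as 2**cost,
# plus a fixed process-startup overhead (the ~0.04 s floor visible at
# low costs). Constants fitted to this table -- illustrative only.
STARTUP = 0.04            # seconds: approx. 'php -r' startup floor
K = 59.7 / 2**20          # seconds per 2**cost unit, from the cost=20 row

def est_seconds(cost: int) -> float:
    """Estimated wall time for one bcrypt hash at the given cost."""
    return STARTUP + K * 2**cost

for cost in (10, 15, 20, 21):
    print(f"cost {cost:2d}: ~{est_seconds(cost):7.2f}s")
```

Checking against the measured rows: the model gives ~1.91 s at cost 15 (measured 1.945 s) and ~119.4 s at cost 21 (measured 1m59.418s = 119.4 s), so the per-increment doubling holds once the startup floor is negligible.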
Re: after upgrade to nginx 1.16.0, $realpath_root returns incorrect path ?
On 5/5/19 2:41 AM, A. Schulze wrote:
> Am 05.05.19 um 07:14 schrieb PGNet Dev:
>> Dropping back to 1.15 branch, all's working again -- with the var.
> For example, the diff between 1.15.12 and 1.16.0 is *only* the changed version number. So, be precise about which 1.15 version is working for you.

Here, I'd not upgraded these couple of boxes to latest -- instead, I had 1.15.10 & 1.15.9 in place. Both exhibited the same behavior re: the var, and both 'recovered' after I did _clean_ checkout & builds.

Sounds like the problem was on my end, tho odd that it's 'just' a build issue. In any case, pebkac, I think.

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: after upgrade to nginx 1.16.0, $realpath_root returns incorrect path ?
On 5/4/19 8:11 AM, PGNet Dev wrote:
> but turning on debug,
>
>   2019/05/04 07:51:50 [debug] 6510#6510: *8 http script var: "/index.php"
>   2019/05/04 07:51:50 [debug] 6510#6510: *8 fastcgi param: "SCRIPT_FILENAME: /usr/local/html/index.php"
>
> the SCRIPT_FILENAME path is incorrect. there appears to be an issue with $realpath_root. While I'm digging locally for the problem ... question(s): -- has anything changed in usage of $realpath_root? -- are there any php v7.3.6 related issues? -- any other hints?

I replaced the $realpath_root var with a literal path string, and everything works again as expected. Dropping back to 1.15 branch, all's working again -- with the var. Rebuilding PHP had no effect; neither did dropping back to earlier PHP branch(es).

Finally, I trashed all of nginx, and did a clean checkout/build. And it works. Of course. No concrete idea what specifically was the problem ... but it's gone now.

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
after upgrade to nginx 1.16.0, $realpath_root returns incorrect path ?
after upgrading my working nginx instance from v1.15.x to

  nginx -V
  nginx version: nginx/1.16.0 (local build)
  built with OpenSSL 1.1.1b 26 Feb 2019
  ...

running with php-fpm from

  php -v
  PHP 7.3.6-dev (cli) (built: Apr 23 2019 19:34:32) ( NTS )
  Copyright (c) 1997-2018 The PHP Group
  Zend Engine v3.3.6-dev, Copyright (c) 1998-2018 Zend Technologies
    with Zend OPcache v7.3.6-dev, Copyright (c) 1999-2018, by Zend Technologies

my local site's no longer accessible. standard log reports,

  2019/05/04 07:51:50 [error] 6510#6510: *8 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: dev01.pgnd.loc, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm.sock:", host: "dev01.pgnd.loc"

in my config, I've got -- as usual,

  fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;

and my expected/target index.php is in its usual path

  /srv/www/test03/public/index.php

but turning on debug,

  2019/05/04 07:51:50 [debug] 6510#6510: *8 http script var: "/index.php"
  2019/05/04 07:51:50 [debug] 6510#6510: *8 fastcgi param: "SCRIPT_FILENAME: /usr/local/html/index.php"

the SCRIPT_FILENAME path is incorrect. there appears to be an issue with $realpath_root.

While I'm digging locally for the problem ... question(s):

  -- has anything changed in usage of $realpath_root?
  -- are there any php v7.3.6 related issues?
  -- any other hints?

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
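For readers hitting similar symptoms: $realpath_root is the effective root for the request with symlinks resolved at request time, and when no root/alias is in effect it falls back to nginx's compiled-in default root ("html" under the build prefix) -- which would be consistent with the /usr/local/html path in the debug log above if the build prefix were /usr/local. A typical wiring, sketched with illustrative paths:

```nginx
# Sketch (paths illustrative): both $document_root and $realpath_root
# resolve against the effective root for the request; $realpath_root
# additionally resolves symlinks at request time, which matters for
# "current -> release-N" deployment layouts. If no root/alias applies,
# nginx falls back to the compiled-in default root under its prefix.
server {
    root /srv/www/test03/public;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;
    }
}
```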
Re: PCRE2 support?
> Well, this depends on your point of view. If a project which actually developed the library fails to introduce support to the new version of the library - for an external observer this suggests that there is something wrong with the new version.

FUD 'suggestions' simply aren't needed. The Exim project didn't develop the pcre2 library ... Philip Hazel did (https://www.pcre.org/current/doc/html/pcre2.html#SEC4), as a separate project. Exim's last (? something newer out there?) rationale for not adopting it was simply, https://bugs.exim.org/show_bug.cgi?id=1878

  "The original PCRE support is not broken. If it is going to go away, then adding PCRE2 support becomes much more important, but I've seen nobody saying that yet."

Also of note, https://pcre.org/pcre.txt

  "The old libraries (now called PCRE1) are still being maintained for bug fixes, but there will be no new development. New projects are advised to use the new PCRE2 libraries."

___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
Re: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives?
Hi

On 6/12/18 12:03 AM, Andrei wrote:
> - The sheer amount of added context switches (proxying was done local on a cPanel box, seeing 20-30k reqs/sec during peak hours)

Not clear what you mean here.

> - Having to manage two software versions, configs, auto config builders used by internal tools, etc

Not a huge headache here. I can see this possibly gets annoying at scale with # of sites.

> - More added headaches with central logging

Having Varnish's detailed logging is a big plus, IME, for tracking down cache issues specifically, and header issues in general. No issues with 'central' logging.

> - No projected TLS support in Varnish

Having a terminator out front hasn't been a problem, save for the additional config considerations.

> - Bare minimum H2 support in Varnish vs a more mature implementation in Nginx

This one I'm somewhat aware of -- I haven't yet convinced myself of whether/where there's a really problematic bottleneck.

> Since Nginx can pretty much do everything Varnish does, and more,

Except for the richness of the VCL ...

> I decided to avoid the headaches and just jump over to Nginx (even though I've been an avid Varnish fan since 2.1.5). As for a VCL replacement and purging in Nginx, I suggest reading up on Lua and checking out openresty if you want streamlined updates and don't want to manually compile/manage modules. To avoid overloading the filesystem with added I/O from purge requests/scans/etc, I wrote a simple Perl script that handles all the PURGE requests in order to have regex support and control over the removals (it basically validates ownership to purge on the related domain, queues removals, then has another thread for the cleanup).

My main problem so far is that WordPress appears to be generally Varnish-UNfriendly. Not core, but plugins. With Varnish, I'm having all SORTS of issues/artifacts cropping up. So far, (my) VCL pass exceptions haven't been sufficient. Without Varnish, there are far fewer 'surprises'.
Then again, I'm not a huge WP fan to begin with; it's a pain to debug anything beyond standard server config issues. Caching in particular. OTOH, my sites with Nginx+Varnish with Symfony work without a hitch. My leaning is: for WP, Nginx only. For SF, Nginx+Varnish. And, TBH, avoiding WP if/when I can.

> Hope this helps some :)

It does, thx!

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives?
On 6/7/18 9:27 AM, Reinis Rozitis wrote:
> this patch https://github.com/FRiCKLE/ngx_cache_purge/commit/c7345057ad5429617fc0823e92e3fa8043840cef.diff

Noted, thx.

> In my case, at one project we decided/had to switch to nginx caching from varnish, because varnish (even if you are using disk-based (mmap/file) backend storage) has a memory overhead per cacheable object (like ~1Kb). While 1Kb doesn't sound like much, when you start to have millions of objects it adds up -- and in this case, even though we had several terabytes of fast SSDs, the actual bottleneck ended up being that there was not enough RAM: the instances were limited to 32 Gb, so in general there couldn't be more than ~33 million cached objects. Nginx, on the other hand, on the same hardware deals with 800+ million (and increasing) objects without a problem.

Point taken. Not an issue for my typical use case; may come up in future, so good to remember.

> p.s. there is also obviously the ssl thing with varnish vs nginx .. but that's another topic.

No real "vs" or "thing", IME. nginx (ssl terminator) -> varnish -> nginx works quite nicely.

There's also Varnish's terminator, Hitch, as an alternative,

  https://www.varnish-software.com/plus/ssl-tls-support/
  https://github.com/varnish/hitch

which I've been told works well; I haven't bothered, since I've already got nginx in place on the backend -- adding a listener on the frontend is trivial.

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
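The per-object arithmetic quoted above checks out: at roughly 1 KiB of metadata per cached object, 32 GiB of RAM caps the cache at about 33 million objects (the figures below are the post's approximations, not measurements):

```python
# Rough arithmetic behind the RAM ceiling described above, using the
# post's approximate figures: ~1 KiB of Varnish bookkeeping per cached
# object, and 32 GiB of RAM available on the instance.
PER_OBJECT_BYTES = 1024           # approx. metadata per cacheable object
RAM_BYTES = 32 * 1024**3          # 32 GiB

max_objects = RAM_BYTES // PER_OBJECT_BYTES
print(f"~{max_objects / 1e6:.1f} million objects")
```

That yields about 33.6 million objects before metadata alone exhausts RAM, matching the "~33 million" ceiling the poster hit.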
Re: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives?
On 6/6/18 11:31 PM, Jon Franklin wrote:
> You can try this: https://github.com/nginx-modules/ngx_cache_purge

Thx! I'd aptly managed to not find/notice that fork. That does address the 'stale' development status. Still, it leaves some of the concerns about nginx ABI, etc. mentioned earlier. I'll set up a test instance and take it all for a spin.

OTOH, I've set up a Varnish instance in front of WP. As predicted, it's straightforward. And the test WP site 'feels' a *lot* more responsive than using the FastCGI cache alternative. I've no quantitative benchmarks ... yet ... and I've not yet run all the 'Canary' tests I need to, by any stretch. But it certainly looks promising.

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives?
On 6/6/18 4:09 PM, Robert Paprocki wrote:
> Nginx has no stable API/ABI. With every release you want to leverage, you need to walk through your entire test/canary/B-G/whatever cycle. That's a question only you can answer, but asking about "what about X release" is fruitless because of a complete lack of ABI support. In six months it's an obsolete question, whose only two answers are "be the developer and watch the changelog" or "compile the module, test it, and pray to the deity of your choice that it doesn't explode".

That's an excellent point. Esp since I tend to keep production current with Nginx releases. TBH, tho, I've said such a prayer-or-three re: Varnish!

Stepping back, these articles compare Nginx vs. Varnish straight-up. There is considerable difference to take into account in examining a stack leveraging both.

> ...

Much agreed. Apparently my reference to 'TheGoogle' refs wasn't snarky or dismissive enough! ;-)

> If I were you I would strongly question this "prefer to have" if the only question is manageable cache purging. :)

Been done. Not convincingly enough, apparently. You can lead a horse ... It's a Nordstrom's(-of-long-ago) moment: "Customer's Right. Because they say so."

Thx agn!

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
Re: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives?
Hi

> My $0.02 coming from experience building out scalable WP clusters is, stick to Varnish here.

Miscommunication on my part -- my aforementioned Varnish-in-front referred to site dev in general. To date, it's been in front of Symfony sites. Works like a champ there.

Since you're apparently working with WP under real-world loads, do you perchance have a production-ready, V6-compatible VCL & nginx config you can share? or point to?

> FRiCKLE's module is great, but it would be scary to put into production -- have fun with that test/release cycle :p

Yep. Hence my question(s)!

> The overhead of putting Nginx in front of Varnish is fairly small in the grand scheme of things. What's your motivation to strictly use Nginx?

This time 'round, it's not entirely 'my' motivation; it came with the job's "prefer to haves". Based, in apparently large part, on the usual use of TheGoogle; these 2 in particular:

  https://deliciousbrains.com/page-caching-varnish-vs-nginx-fastcgi-cache-2018/
  https://www.scalescale.com/tips/nginx/nginx-vs-varnish/

> There is official support for cache purging with the commercial version of Nginx: https://www.nginx.com/products/nginx/caching/.

Ah, so not (yet) in the FOSS product. I see it's proxy_cache, not fastcgi_cache, based ...

> I've seen moderate hardware running Nginx (for TLS offload + WAF) -> Varnish (cache + purge) -> Apache/mod_php do 50k r/s on a single node.

One would hope this suffices; it's a stable and proven stack.

> Again, ngx_cache_purge is great, but any unsupported module in a prod environment is scary when you're not writing the code. ;)

Again, yep. Thx!

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives?
For some new WordPress sites, I'll be deploying fastcgi_cache as reverse proxy / page cache, instead of the usual Varnish. Although there are a number of WP-module-based PURGE options, I prefer that it's handled by the web server.

A commonly referenced approach is to use 'FRiCKLE/ngx_cache_purge',

  https://github.com/FRiCKLE/ngx_cache_purge/

with associated nginx conf additions,

  https://easyengine.io/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/
  https://www.ryadel.com/en/nginx-purge-proxy-cache-delete-invalidate-linux-centos-7/

ngx_cache_purge module development appears to have gone stale; no commits since ~2014.

What are your experiences with current use of that module, with the latest 1.15.x nginx releases? Is there a cleaner, nginx-native approach? Or another nginx purge module that's better maintained? Comments &/or pointers to any docs, etc. would be helpful.

___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx
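As a concrete reference point, the wiring the linked tutorials describe looks roughly like this. A sketch only -- fastcgi_cache_purge is the third-party module's directive, not stock nginx, and the zone name, key, and paths are illustrative:

```nginx
# Sketch of ngx_cache_purge wiring for a fastcgi_cache setup, per the
# tutorials linked above. fastcgi_cache_purge comes from the third-party
# module; zone name, key, and paths are illustrative.
http {
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/run/php-fpm.sock;
            fastcgi_cache WORDPRESS;
            fastcgi_cache_valid 200 60m;
        }

        # GET /purge/some/uri evicts the cached entry for /some/uri;
        # the purge key must reconstruct the original cache key.
        location ~ ^/purge(/.*) {
            allow 127.0.0.1;   # restrict purging to trusted clients
            deny all;
            fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
        }
    }
}
```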
Re: nginx/1.13.9 + njs/head build : ngx_http_js_module.so: undefined symbol
On 2/20/18 1:26 PM, Dmitry Volyntsev wrote:
> Thank you for reporting the problem. Please make sure that you do 'make clean' in the njs directory after hg update.

Of course. I always

  hg update --clean tip

r450 is reproducibly causing the error. r446 is fine atm.

___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
Re: nginx/1.13.9 + njs/head build : ngx_http_js_module.so: undefined symbol
On 2/20/18 11:41 AM, PGNet Dev wrote:
>   cd /usr/local/src/njs/nginx
>   hg log | head
>   changeset: 450:757271547b56
>   tag:       tip
>   user:      Dmitry Volyntsev <xei...@nginx.com>
>   date:      Tue Feb 20 19:12:55 2018 +0300
>   summary:   Fixed the names of global functions in backtraces.

fyi, a simple revert to

  hg revert -r 446 --all

fixes the config check problem, and the subsequent nginx build execs as usual.

___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
nginx/1.13.9 + njs/head build : ngx_http_js_module.so: undefined symbol
Upgrading nginx 1.13.8 -> 1.13.9 with the usual

  ./configure \
  ... \
  --add-dynamic-module=/usr/local/src/njs/nginx

and

  cd /usr/local/src/njs/nginx
  hg log | head
  changeset: 450:757271547b56
  tag:       tip
  user:      Dmitry Volyntsev
  date:      Tue Feb 20 19:12:55 2018 +0300
  summary:   Fixed the names of global functions in backtraces.

  ls -al /usr/local/src/njs/nginx
  total 76K
  drwxr-xr-x 2 root root 4.0K Feb 20 11:31 ./
  drwxr-xr-x 6 root root 4.0K Feb 20 11:31 ../
  -rw-r--r-- 1 root root  749 Feb 20 11:31 config
  -rw-r--r-- 1 root root  260 Feb 20 11:31 config.make
  -rw-r--r-- 1 root root  32K Feb 20 11:31 ngx_http_js_module.c
  -rw-r--r-- 1 root root  26K Feb 20 11:31 ngx_stream_js_module.c

No errors on build, but on conf check

  /usr/local/sbin/nginx -t -c /usr/local/etc/nginx/nginx.conf
  nginx: [emerg] dlopen() "/usr/local/nginx-modules/ngx_http_js_module.so" failed (/usr/local/nginx-modules/ngx_http_js_module.so: undefined symbol: njs_vm_value_to_ext_string) in /usr/local/etc/nginx/nginx.conf:34
  nginx: configuration file /usr/local/etc/nginx/nginx.conf test failed

where line 34 is

  load_module /usr/local/nginx-modules/ngx_http_js_module.so;

and

  ls -al /usr/local/nginx-modules/ngx_http_js_module.so
  -rwxr-xr-x 1 root root 1.3M Feb 20 11:26 /usr/local/nginx-modules/ngx_http_js_module.so*

  ldd /usr/local/nginx-modules/ngx_http_js_module.so
    linux-vdso.so.1 (0x7ffef8db5000)
    libssl.so.1.1 => /usr/local/openssl11/lib64/libssl.so.1.1 (0x7f8822d18000)
    libcrypto.so.1.1 => /usr/local/openssl11/lib64/libcrypto.so.1.1 (0x7f882286e000)
    libdl.so.2 => /lib64/libdl.so.2 (0x7f882266a000)
    libz.so.1 => /lib64/libz.so.1 (0x7f8822453000)
    libpcre.so.1 => /usr/local/lib64/libpcre.so.1 (0x7f88221dc000)
    libpcrecpp.so.0 => /usr/local/lib64/libpcrecpp.so.0 (0x7f8821fd2000)
    libm.so.6 => /lib64/libm.so.6 (0x7f8821cd5000)
    libc.so.6 => /lib64/libc.so.6 (0x7f8821934000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x7f8821717000)
    /lib64/ld-linux-x86-64.so.2 (0x7f88231d7000)
    libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x7f882139)
    libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f8821179000)

  nm /usr/local/nginx-modules/ngx_http_js_module.so | grep "U njs"
    U njs_vm_external_create
    U njs_vm_external_prototype
    U njs_vm_retval_to_ext_string
    U njs_vm_value_to_ext_string

___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
Re: [njs] Version 0.1.12. DOC BUG
fyi, a site doc bug/typo: the instructions @ http://nginx.org/en/docs/njs_about.html state

  ... The modules can also be built as dynamic:
  ./configure --add-dynamic_module=path-to-njs/nginx
  ...

that's a typo. it should be

  ./configure --add-dynamic-module=path-to-njs/nginx

___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel
Re: nginx 1.11.3 build, linking openssl 1.1.0 (beta/pre-6) fails @ ‘SSL_R_NO_CIPHERS_PASSED’
On 08/11/2016 09:33 AM, Valentin V. Bartenev wrote:
> On Thursday 11 August 2016 08:51:40 pgndev wrote:
> This was already fixed a few days ago. http://hg.nginx.org/nginx/rev/1891b2892b68
>
> wbr, Valentin V. Bartenev

Didn't see that. Applies cleanly to the 1.11.3 release,

  nginx -V
  nginx version: nginx/1.11.3
  built with OpenSSL 1.1.0-pre6 (beta) 4 Aug 2016
  ...

& serves up the chacha cipher nicely. Thanks.

___ nginx-devel mailing list nginx-devel@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx-devel