ssl stapling, verification fails

2015-04-29 Thread drookie
Hi.

I'm trying to get nginx 1.6.2 to authenticate users using their client
certificates.

I'm using this configuration (besides the usual SSL settings, which are proven
to work):

ssl_stapling on;
ssl_client_certificate /etc/nginx/certs/trusted.pem;
ssl_verify_client optional_no_ca;

trusted.pem contains 3 CA certificates: a test CA and 2 production CAs (main
and intermediate).
To pass the verification data to the application, I'm using:

fastcgi_param X-SSL-Verified $ssl_client_verify;
fastcgi_param X-SSL-Certificate $ssl_client_cert;
fastcgi_param X-SSL-IDN $ssl_client_i_dn;
fastcgi_param X-SSL-SDN $ssl_client_s_dn;

And here comes the issue: when using the test CA and a test certificate, I get
X-SSL-Verified: SUCCESS, but when using the production ones, I get
X-SSL-Verified: FAILED. You could say there's a problem with my certificate
bundle, but I checked whether the production certificate is really issued
by the CA I think it is:

openssl verify -verbose -CAfile trusted.pem rt.cert 
rt.cert: OK

Looks like it passes verification, and trusted.pem is the same file that nginx
uses. At the same time, nginx thinks the certificate doesn't pass the check.
Why can this happen? I've also tried setting 'ssl_verify_client on;' - the
only difference is that I get a 400 response, because then the verification
fails explicitly.
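
One possibility I haven't ruled out yet: as far as I understand, 'openssl
verify' builds the full chain from trusted.pem, while nginx limits the chain
length with ssl_verify_depth, which defaults to 1. Since the production setup
involves an intermediate CA, that asymmetry would fit. A sketch of what I mean
(the value 2 is my assumption for a chain with one intermediate; also, if I
read the docs right, ssl_stapling only concerns OCSP stapling of the server
certificate and is unrelated to client verification):

ssl_client_certificate /etc/nginx/certs/trusted.pem;
ssl_verify_client optional_no_ca;
# the default ssl_verify_depth is 1; a client certificate issued by an
# intermediate CA needs at least 2 (client cert -> intermediate -> root)
ssl_verify_depth 2;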

Thanks.

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,258480,258480#msg-258480



upstream member liveness

2016-04-11 Thread drookie
What is the scope of upstream member liveness: is it per upstream group,
or per vhost?

If the question is unclear, consider this: I have 3 nginx instances - one
balancer and two backends - with the following config fragment on the balancer:


upstream backends {
    server 192.168.0.1;
    server 192.168.0.2;
}

And on both 192.168.0.1 and 192.168.0.2 the following configs:

server {
    server_name A;

    root /foo/bar1;

    location / {
        fastcgi_pass 127.0.0.1:9000;
    }
}

server {
    server_name B;

    root /foo/bar1;

    location / {
        fastcgi_pass 127.0.0.1:9000;
    }
}


If server 192.168.0.1 returns 500 for vhost A, will it be considered dead for
vhost B as well (I suppose it will be)?
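
My current understanding, which I'd like to have confirmed, is that failure
accounting belongs to the 'server' entries of the upstream group itself, not
to any particular vhost, and that a 500 marks a member as failed only if the
*_next_upstream directive says so. A sketch with illustrative values (the
max_fails/fail_timeout numbers and the proxy_pass balancer config are
examples, not my actual config):

upstream backends {
    # failure state is kept per "server" line in this block; every vhost
    # that passes requests to "backends" shares that state
    server 192.168.0.1 max_fails=3 fail_timeout=30s;
    server 192.168.0.2 max_fails=3 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://backends;
        # errors and timeouts always count as failed attempts; a 500
        # response counts only because http_500 is listed here
        proxy_next_upstream error timeout http_500;
    }
}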

Thanks.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,266077,266077#msg-266077



Re: upstream member liveness

2016-04-12 Thread drookie
Is there someone besides Captain Obvious who knows the answer? This is
actually the problem with the modern internet: half of the decent questions
are drowned out by people who not only think they know the answer, but are
arrogant enough to insist on it, so that to an outside observer the topic
looks "answered".

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,266077,266137#msg-266137



Re: no live upstreams and NO previous error

2016-09-15 Thread drookie
(Yup, it's still the author of the original post; my other browser just
remembers a different set of credentials.)

If I increase the verbosity of the error_log, I see additional messages in
the log, like

upstream server temporarily disabled while reading response header from

but this message doesn't explain why the upstream server was disabled. I
understand that an error occurred, but what exactly? I'm used to seeing
timeouts, or some other explicit problem, instead. This looks totally
mysterious to me. Could someone shed some light on it?
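
The closest thing I've found in the docs so far: a member gets "temporarily
disabled" after failed attempts, and which responses count as failed attempts
is controlled by the *_next_upstream family of directives. A sketch (the
upstream name and location are placeholders, not my config):

location / {
    proxy_pass http://backends;
    # error, timeout and invalid_header always count as failed attempts;
    # http_500 and friends count only when listed here, and max_fails
    # failed attempts within fail_timeout disable the member
    proxy_next_upstream error timeout http_500;
}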

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269577,269583#msg-269583



Re: no live upstreams and NO previous error

2016-09-15 Thread drookie
Oh, solved. Upstreams do respond with 500.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,269577,269584#msg-269584



nginx caching proxy

2018-10-17 Thread drookie
Hello, 

I didn't find the answer in the documentation, but am I right in assuming,
from my observations, that when proxy_cache is enabled for a location and a
client requests a file that isn't in the cache yet, nginx starts transmitting
the file only after it has been fully received from the upstream? I'm asking
because I'm seeing lags equal to the upstream request_time.

If I'm right, is there a way to start transmitting without waiting for the
end of the file?
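
For reference, the directives I've been staring at while testing this. My
reading of the docs, which I'd like confirmed, is that nginx normally streams
the response to the client while it is being written to the cache, so the
wait may come from one of these instead (the zone name is a placeholder):

proxy_cache my_zone;             # "my_zone" stands for my actual cache zone
proxy_buffering on;              # must be on for caching; the response is
                                 # still sent to the client as it arrives
proxy_cache_lock off;            # when on, concurrent requests for the same
                                 # key wait until the first one fills the cache
proxy_cache_use_stale updating;  # serve a stale copy while a fresh one
                                 # is being fetched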

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,281621,281621#msg-281621



still seeing 413 error with client_max_body_size 0

2019-09-05 Thread drookie
Hello,

I was getting a bunch of 413 statuses in the access log, along with explicit
error-log messages about a client (logstash in my case; it seems it was
trying to send bodies of around 100 megabytes) posting a body larger than
client_max_body_size. After I raised this setting to 128m, the error-log
messages stopped, but the 413 statuses in the access log did not:

10.3.51.214 - - [05/Sep/2019:15:21:27 +0500] elasticsearch.dev.alamics.ru
"POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 82.609
192.168.57.23:9200 413 -
10.3.51.214 - - [05/Sep/2019:15:23:00 +0500] elasticsearch.dev.alamics.ru
"POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 91.931
192.168.57.23:9200 413 -
10.3.51.214 - - [05/Sep/2019:15:24:24 +0500] elasticsearch.dev.alamics.ru
"POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 83.679
192.168.57.23:9200 413 -
10.3.51.214 - - [05/Sep/2019:15:25:35 +0500] elasticsearch.dev.alamics.ru
"POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 69.195
192.168.57.23:9200 413 -
10.3.51.214 - - [05/Sep/2019:15:27:01 +0500] elasticsearch.dev.alamics.ru
"POST /_bulk HTTP/1.1" 413 0 "-" "Java/1.8.0_212" "-" "-" 85.953
192.168.57.23:9200 413 -

I've even tried setting client_max_body_size to 0, but I'm still getting
these 413s about once per minute. As you can see, the request times are
around 1.5 minutes, so this is not a case of still seeing requests that
failed under the old setting.

I'm pretty much stuck at this point.
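
To pin down who actually generates the 413, I'm thinking of logging nginx's
own status next to the upstream's explicitly; a sketch (the format name and
log path are arbitrary; and if the upstream is Elasticsearch, its
http.max_content_length limit defaults to 100mb, which would match bulk
bodies of around 100 megabytes):

# $upstream_status stays "-" when nginx rejects the body itself, because the
# request never reaches the upstream; a filled-in upstream address and status
# mean the 413 came from the backend
log_format who413 '$status up=$upstream_addr up_status=$upstream_status '
                  'len=$content_length t=$request_time';
access_log /var/log/nginx/who413.log who413;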

This is nginx/1.16.0 on FreeBSD 12-STABLE amd64, from ports.

Thanks.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,285564,285564#msg-285564



Re: still seeing 413 error with client_max_body_size 0

2019-09-05 Thread drookie
Oh, sorry.

It's clear now that the upstream is sending the 413 errors, not nginx itself:
the trailing fields of the access log entries show the upstream address and
its 413 status.

I should have read the log more carefully.

Sorry again.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,285564,285565#msg-285565
