Re: Problem when using return and custom errors at the server level

2018-03-22 Thread Aziz Rozyev
If there is a way to tie it to some standard error, e.g. for 404:
 
error_page 404 =527 /plugin.html

Otherwise, nginx does not seem to provide an error code 527 out of the box.
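For what it’s worth, an untested sketch of one setup that usually serves the custom page for a synthetic 527: keep the return out of the server level (a server-level return fires again on the internal redirect to the error page) and mark the error-page location internal:

```nginx
server {
    listen 80;
    server_name localhost;

    error_page 527 /plug.html;

    # Raise the synthetic error from a catch-all location instead of a
    # server-level "return", which would also short-circuit the internal
    # redirect to /plug.html.
    location / {
        return 527;
    }

    location = /plug.html {
        internal;
        root /usr/share/nginx/html/error_pages;
    }
}
```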

br,
Aziz.





> On 22 Mar 2018, at 15:54, Никита  wrote:
> 
> 
> Good afternoon.
> 
> When I try to return a custom error_page at the server level, nginx
> returns the standard error instead of my HTML page.
> 
> Here is the config:
> server {
>     listen 80;
>     server_name localhost;
> 
>     access_log /var/log/nginx/nginx-www-access.log logstash;
> 
>     location /plug.html {
>         access_log /var/log/nginx/bot_permanent_plug.log logstash;
>         root /usr/share/nginx/html/error_pages;
>     }
> 
>     error_page 527 /plug.html;
>     return 527;
> }
> 
> Here is the debug log:
> 
> https://pastebin.com/Zebuqtzt
> 
> How can I return a custom error at the server level?
> 
> Thanks.
> -- 
> Никита Маслянников
> ___
> nginx-ru mailing list
> nginx-ru@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-ru


Re: nginx Connections

2018-03-15 Thread Aziz Rozyev
Check the limit_req module:

http://nginx.org/ru/docs/http/ngx_http_limit_req_module.html

Our beloved and hugely useful search engines also gave this:

https://www.nginx.com/blog/rate-limiting-nginx/

It’s not possible to manipulate the limits via the CLI, though.
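A minimal sketch of the rate-limiting setup from the linked docs ("backend" is a placeholder upstream): limit each client IP to 10 requests per second, with a small burst allowance:

```nginx
http {
    # One shared 10 MB zone keyed by client address, refilled at 10 r/s.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        listen 80;

        location / {
            # Allow bursts of up to 20 requests, served without delay;
            # anything beyond that is rejected.
            limit_req zone=perip burst=20 nodelay;
            proxy_pass http://backend;
        }
    }
}
```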

br,
Aziz.





> On 15 Mar 2018, at 09:52, Manali  wrote:
> 
> I want to limit the connections used by nginx using the CLI.
> 
> I know that we can set worker_connections to different values in the nginx
> conf file. But the number of worker connections includes not only the
> connections to the host; it also includes proxy connections.
> 
> If I want to give the user the flexibility to limit connections, the user
> will not know about proxy connections.
> 
> Is there any flexibility in the nginx source code to know whether a
> connection established by nginx is to the proxy server or to the host?
> 
> Can you please help me with this ?
> 
> Let me know if more information is needed.
> 
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?2,279052,279052#msg-279052
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx


Re: How can i configure proxy multiple hosts for a domain?

2018-03-12 Thread Aziz Rozyev
Hi,

Perhaps you should have explained your intention in a bit more detail, maybe
with a functional diagram.

As far as I currently understand, you want something like port mirroring to
duplicate your network traffic.

Anyway, that is out of scope for nginx; search for “port/traffic mirroring”.


br,
Aziz.





> On 12 Mar 2018, at 05:56, mslee  wrote:
> 
> It's not load balancing like round robin, least_conn, or ip_hash.
> I want to know how to proxy simultaneously to all of the registered proxy
> hosts for one domain.
> 
> I searched for this method, but all documents were about load balancing.
> Please help me if you are aware of this problem.
> 
> Thank you in advance.
> 
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?2,278997,278997#msg-278997
> 


Re: Check the size of one of the request header in nginx conf

2018-03-06 Thread Aziz Rozyev
By the way, there is an easier solution to this (thanks to Ruslan Ermilov),
something like:

map $http_ $disabled {        # header name after $http_ was truncated in the archive
    default    0;
    "~^.{65,}" 1;
}

location / {
    if ($disabled) {
        return 404;
    }
    proxy_pass http://upstream;
}
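The original question asked to reset the header rather than reject the request; a sketch along the same lines (X-Token is an invented header name here, substitute the real one):

```nginx
# Map the (hypothetical) incoming X-Token header to an empty string
# whenever it exceeds 64 characters, otherwise pass it through unchanged.
map $http_x_token $x_token_checked {
    "~^.{65,}" "";
    default    $http_x_token;
}

server {
    location / {
        # Forward the sanitized value instead of the raw header.
        proxy_set_header X-Token $x_token_checked;
        proxy_pass http://upstream;
    }
}
```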
 

br,
Aziz.





> On 6 Mar 2018, at 15:16, Aziz Rozyev <aroz...@nginx.com> wrote:
> 
> hi,
> 
> I think you can do such a checking with lua/njs modules.
> 
> 
> br,
> Aziz.
> 
> 
> 
> 
> 
>> On 6 Mar 2018, at 15:13, mejetjoseph <nginx-fo...@forum.nginx.org> wrote:
>> 
>> Dear Team,
>> 
>> I would like to know is it possible to check the size of one of header
>> values in nginx conf file . I need to reset the header value if the size of
>> this header value exceed 64 character.
>> 
>> Could you please provide can I able to do this condition check in ngnix conf
>> file?
>> 
>> 
>> Kind regards,
>> Joseph
>> 
>> Posted at Nginx Forum: 
>> https://forum.nginx.org/read.php?2,278940,278940#msg-278940
>> 
> 



Re: fsync()-in webdav PUT

2018-03-02 Thread Aziz Rozyev
Attila,

The man page quote relates to Valery’s argument that fsync won’t affect
performance; forget it.

It’s nonsense because you’re trying to solve the reliability problem at the
wrong level. It has already been suggested here multiple times, by Maxim and
Paul, that it’s better to invest in good server/storage infrastructure
instead of fsyncing each PUT.

Regarding the DB server analogy: you’re still not safe from power outages
as long as your transaction isn’t in the transaction log.

If you still insist on syncing and are ready to sacrifice performance, try
mounting the file system with the ‘sync’ option.


br,
Aziz.





> On 2 Mar 2018, at 12:12, Nagy, Attila  wrote:
> 
> On 02/28/2018 03:08 PM, Maxim Dounin wrote:
>> The question here is - why you want the file to be on disk, and
>> not just in a buffer?  Because you expect the server to die in a
>> few seconds without flushing the file to disk?  How probable it
>> is, compared to the probability of the disk to die?  A more
>> reliable server can make this probability negligible, hence the
>> suggestion.
> Because the files I upload to nginx servers are important to me. Please step 
> back a little and forget that we are talking about nginx or an HTTP server.
> We have data which we want to write to somewhere.
> Check any of the database servers. Would you accept a DB server which can 
> lose confirmed data, or couldn't be configured such that a 
> write/insert/update/commit/whatever operation you use to modify or put data 
> into it is reliably written by the time you receive acknowledgement?
> Now try to use this example. I would like to use nginx to store files. That's 
> what HTTP PUT is for.
> Of course I'm not expecting that the server will die every day. But when that 
> happens, I want to make sure that the confirmed data is there.
> Let's take a look at various object storage systems, like ceph. Would you 
> accept a confirmed write to be lost there? They make a great deal of work to 
> make that impossible.
> Now try to imagine that somebody doesn't need the complexity of -for example- 
> ceph, but wants to store data with plain HTTP. And you got there. If you 
> store data, then you want to make sure the data is there.
> If you don't, why do you store it anyways?
> 
>> (Also, another question is what "on the disk" means from a physical
>> point of view.  In many cases this in fact means "somewhere in the
>> disk buffers", and a power outage can easily result in the file
>> being not accessible even after fsync().)
> Not with good software/hardware. (and it doesn't really have to be super 
> good, but average)
> 
>> 
>>> Why doing this in a thread is not a good idea? It wouldn't block nginx
>>> that way.
>> Because even in threads, fsync() is likely to cause performance
>> degradation.  It might be a better idea to let the OS manage
>> buffers instead.
>> 
> Sure, it will cause some (not much BTW in a good configuration). But if my 
> primary goal is to store files reliably, why should I care?
> I can solve that by using SSDs for logs, BBWCs and a lot more thing. But in 
> the current way, I can't make sure that a HTTP PUT was really successful or 
> it will be successful in some seconds or it will fail badly.
> 


Re: why hardcoded /var/log/nginx/error.log in pre-built packages?

2018-03-01 Thread Aziz Rozyev
It’s not hardcoded, AFAIK.

Check the output of nginx -T; perhaps an error_log directive is defined somewhere.


br,
Aziz.





> On 1 Mar 2018, at 23:40, Daniel  wrote:
> 
> Hello all,
> 
> can someone please explain to me why the location /var/log/nginx/error
> log is hardcoded in the official prebuilt packages?
> 
> Or why nginx -t checks if this file exists even if there is another
> location defined in the config file?
> 
> 
> Thank you.
> 
> Daniel


Re: fsync()-in webdav PUT

2018-02-28 Thread Aziz Rozyev
Here is a synthetic test on a VM; not perfect, but representative:

[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=30960
30960+0 records in
30960+0 records out
253624320 bytes (254 MB) copied, 0.834861 s, 304 MB/s

[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=30960 conv=fsync
30960+0 records in
30960+0 records out
253624320 bytes (254 MB) copied, 0.854208 s, 297 MB/s

[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=61960
61960+0 records in
61960+0 records out
507576320 bytes (508 MB) copied, 1.71833 s, 295 MB/s

[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=61960 conv=fsync
61960+0 records in
61960+0 records out
507576320 bytes (508 MB) copied, 1.74482 s, 291 MB/s
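The same comparison can be reproduced at the application level; a small Python sketch (timings vary wildly with hardware, filesystem, and cache state, so no numbers are claimed here):

```python
import os
import tempfile
import time

def write_file(path, data, do_fsync):
    """Write data; optionally force it to stable storage with fsync()."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()                  # flush Python's userspace buffer
        if do_fsync:
            os.fsync(f.fileno())   # flush OS page cache to the device

data = b"x" * (8 * 1024) * 1024    # 8 MB payload

with tempfile.TemporaryDirectory() as d:
    for label, sync in (("buffered", False), ("fsync", True)):
        path = os.path.join(d, label)
        start = time.perf_counter()
        write_file(path, data, sync)
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed:.4f}s, {os.path.getsize(path)} bytes")
```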


br,
Aziz.





> On 1 Mar 2018, at 00:41, Aziz Rozyev <aroz...@nginx.com> wrote:
> 
> Valery, 
> 
> Could you please explain how you came to the conclusion that 
> 
> “fsync simply instructs OS to ensure consistency of a file"?
> 
> As far as I understand, simply instructing the OS comes at no cost, right?
> 
>> Without fsyncing file's data and metadata a client will receive a positive 
>> reply before data has reached the storage, thus leaving non-zero probability 
>> that states of two systems involved into a web transaction end up 
>> inconsistent.
> 
> 
> I understand why one may need consistency, but doing it with fsync is 
> nonsense.
> 
> Here is what man page says in that regard:
> 
> 
> fsync() transfers ("flushes") all modified in-core data of (i.e., modified
> buffer cache pages for) the file referred to by the file descriptor fd to
> the disk device (or other permanent storage device) so that all changed
> information can be retrieved even after the system crashed or was rebooted.
> This includes writing through or flushing a disk cache if present. The call
> blocks until the device reports that the transfer has completed. It also
> flushes metadata information associated with the file (see stat(2)).
> 
> 
> 
> 
> br,
> Aziz.
> 
> 
> 
> 
> 
>> On 28 Feb 2018, at 21:24, Valery Kholodkov <valery+ngin...@grid.net.ru> 
>> wrote:
>> 
>> It's completely clear why someone would need to flush file's data and 
>> metadata upon a WebDAV PUT operation. That is because many architectures 
>> expect a PUT operation to be completely settled before a reply is returned.
>> 
>> Without fsyncing file's data and metadata a client will receive a positive 
>> reply before data has reached the storage, thus leaving non-zero probability 
>> that states of two systems involved into a web transaction end up 
>> inconsistent.
>> 
>> Further, the exact moment when the data of certain specific file reaches the 
>> storage depends on numerous factors, for example, I/O contention. 
>> Consequently, the exact moment when the data of a file being uploaded 
>> reaches the storage can be only determined by executing fsync.
>> 
>> val
>> 
>> On 28-02-18 11:04, Aziz Rozyev wrote:
>>> While it’s not clear why one may need to flush the data on each HTTP
>>> operation, I can imagine what performance degradation that may lead to.
>>> If it’s not some kind of fancy clustering among nodes, I wouldn't care
>>> much where the actual data is; RAM should still be much faster than disk I/O.
>>> br,
>>> Aziz.
>>>> On 28 Feb 2018, at 12:30, Nagy, Attila <b...@fsn.hu> wrote:
>>>> 
>>>> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>>>>> 
>>>>>> Now, that nginx supports running threads, are there plans to convert at
>>>>>> least DAV PUTs into it's own thread(pool), so make it possible to do
>>>>>> non-blocking (from nginx's event loop PoV) fsync on the uploaded file?
>>>>> No, there are no such plans.
>>>>> 
>>>>> (Also, trying to do fsync() might not be the best idea even in
>>>>> threads.  A reliable server might be a better option.)
>>>>> 
>>>> What do you mean by a reliable server?
>>>> I want to make sure when the HTTP operation returns, the file is on the 
>>>> disk, not just in a buffer waiting for an indefinite amount of time to be 
>>>> flushed.
>>>> This is what fsync is for.
>>>> 
>>>> Why doing this in a thread is not a good idea? It wouldn't block nginx
>>>> that way.
>> 
> 


Re: fsync()-in webdav PUT

2018-02-28 Thread Aziz Rozyev
Valery, 

Could you please explain how you came to the conclusion that 

“fsync simply instructs OS to ensure consistency of a file"?

As far as I understand, simply instructing the OS comes at no cost, right?

> Without fsyncing file's data and metadata a client will receive a positive 
> reply before data has reached the storage, thus leaving non-zero probability 
> that states of two systems involved into a web transaction end up 
> inconsistent.


I understand why one may need consistency, but doing it with fsync is 
nonsense.

Here is what man page says in that regard:


fsync() transfers ("flushes") all modified in-core data of (i.e., modified
buffer cache pages for) the file referred to by the file descriptor fd to the
disk device (or other permanent storage device) so that all changed
information can be retrieved even after the system crashed or was rebooted.
This includes writing through or flushing a disk cache if present. The call
blocks until the device reports that the transfer has completed. It also
flushes metadata information associated with the file (see stat(2)).




br,
Aziz.





> On 28 Feb 2018, at 21:24, Valery Kholodkov <valery+ngin...@grid.net.ru> wrote:
> 
> It's completely clear why someone would need to flush file's data and 
> metadata upon a WebDAV PUT operation. That is because many architectures 
> expect a PUT operation to be completely settled before a reply is returned.
> 
> Without fsyncing file's data and metadata a client will receive a positive 
> reply before data has reached the storage, thus leaving non-zero probability 
> that states of two systems involved into a web transaction end up 
> inconsistent.
> 
> Further, the exact moment when the data of certain specific file reaches the 
> storage depends on numerous factors, for example, I/O contention. 
> Consequently, the exact moment when the data of a file being uploaded reaches 
> the storage can be only determined by executing fsync.
> 
> val
> 
> On 28-02-18 11:04, Aziz Rozyev wrote:
>> While it’s not clear why one may need to flush the data on each HTTP
>> operation, I can imagine what performance degradation that may lead to.
>> If it’s not some kind of fancy clustering among nodes, I wouldn't care much
>> where the actual data is; RAM should still be much faster than disk I/O.
>> br,
>> Aziz.
>>> On 28 Feb 2018, at 12:30, Nagy, Attila <b...@fsn.hu> wrote:
>>> 
>>> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>>>> 
>>>>> Now, that nginx supports running threads, are there plans to convert at
>>>>> least DAV PUTs into it's own thread(pool), so make it possible to do
>>>>> non-blocking (from nginx's event loop PoV) fsync on the uploaded file?
>>>> No, there are no such plans.
>>>> 
>>>> (Also, trying to do fsync() might not be the best idea even in
>>>> threads.  A reliable server might be a better option.)
>>>> 
>>> What do you mean by a reliable server?
>>> I want to make sure when the HTTP operation returns, the file is on the 
>>> disk, not just in a buffer waiting for an indefinite amount of time to be 
>>> flushed.
>>> This is what fsync is for.
>>> 
>>> Why doing this in a thread is not a good idea? It wouldn't block nginx
>>> that way.
> 


Re: fsync()-in webdav PUT

2018-02-28 Thread Aziz Rozyev
While it’s not clear why one may need to flush the data on each HTTP operation,
I can imagine what performance degradation that may lead to.

If it’s not some kind of fancy clustering among nodes, I wouldn't care much
where the actual data is; RAM should still be much faster than disk I/O.


br,
Aziz.





> On 28 Feb 2018, at 12:30, Nagy, Attila  wrote:
> 
> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>> 
>>> Now, that nginx supports running threads, are there plans to convert at
>>> least DAV PUTs into it's own thread(pool), so make it possible to do
>>> non-blocking (from nginx's event loop PoV) fsync on the uploaded file?
>> No, there are no such plans.
>> 
>> (Also, trying to do fsync() might not be the best idea even in
>> threads.  A reliable server might be a better option.)
>> 
> What do you mean by a reliable server?
> I want to make sure when the HTTP operation returns, the file is on the disk, 
> not just in a buffer waiting for an indefinite amount of time to be flushed.
> This is what fsync is for.
> 
> Why doing this in a thread is not a good idea? It wouldn't block nginx
> that way.


Re: Jenkins reverse proxy on single domain with multiple apps

2018-02-26 Thread Aziz Rozyev
Well, as I’ve said, try checking the headers first, as per the following doc:
https://wiki.jenkins.io/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy

It can be more complicated than just proxy_pass’ing.

Regarding sub_filter, try getting rid of one of the sub_filters, probably
the second one:

    proxy_pass http://jenkins:8080/;
    sub_filter 'url=/login?from=%2F' 'url=/jenkins/login?from=%2F';
    # sub_filter "('/login?from=%2F')" "('/jenkins/login?from=%2F')";
    sub_filter_once off;

br,
Aziz.





> On 26 Feb 2018, at 18:03, kefi...@gmail.com  
> wrote:
> 
> The setup is a simple set of docker containers. 
> 
> a 'localhost' is a docker container with nginx acting as a reverse proxy
> 
> it has defined /jenkins location to reverse proxy connection to jenkins
> container which exposes the java app at port 8080 so:
> 
> location /jenkins {
>  proxy_pass http://jenkins:8080;
> }
> 
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?2,278748,278771#msg-278771
> 


Re: Jenkins reverse proxy on single domain with multiple apps

2018-02-25 Thread Aziz Rozyev
Hi,

Compare the output of

  curl -ivvv http://jenkins:8080
  curl -ivvv http://localhost/jenkins

then

  curl -iLvvv http://jenkins:8080
  curl -iLvvv http://localhost/jenkins

and pay attention to the cookie headers.

Java-based applications usually set session cookies, and you should
handle them accordingly.

Also, it’s not clear (at least to me) why you mentioned localhost/app1|2;
are app1 and app2 different Jenkins application servers?
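If the cookies do turn out to be the problem, one common fix when serving a Java app from a sub-path (an untested sketch; hostnames are from the thread) is to rewrite the cookie path along with the proxying:

```nginx
location /jenkins/ {
    proxy_pass http://jenkins:8080/;
    proxy_set_header Host $host;

    # Rewrite the Set-Cookie path so the session cookie issued for "/"
    # is scoped to the /jenkins/ sub-path the client actually sees.
    proxy_cookie_path / /jenkins/;
}
```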


br,
Aziz.





> On 25 Feb 2018, at 11:08, kefi...@gmail.com  
> wrote:
> 
> Hello,
> 
> I am trying to setup a reverse proxy on a single domain to host multiple
> apps separated by URI, for example:
> http://localhost/app1
> http://localhost/app2
> etc.
> 
> Right now I'm having problems with reverse proxying Jenkins, which sends an
> HTML page in the HTTP reply with relative paths, e.g. /static/abc/css/common.css
> 
> So far I have rewritten the content of the login screen as per below, but I
> want to know if there is any other way to rewrite this automatically:
> 
> location /jenkins {
>     proxy_pass http://jenkins:8080/;
>     sub_filter 'url=/login?from=%2F' 'url=/jenkins/login?from=%2F';
>     sub_filter "('/login?from=%2F')" "('/jenkins/login?from=%2F')";
>     sub_filter_once off;
> }
> 
> 
> Once I hit http://localhost/jenkins I get a whole bunch of 404s because of
> the relative links, as in the HTML response below:
> 
> [mangled HTML of the Jenkins “Unlock Jenkins” / Getting Started setup page,
> full of root-relative asset links such as /static/a4190cd9/css/style.css,
> /static/a4190cd9/scripts/prototype.js, and
> /static/a4190cd9/jsbundles/pluginSetupWizard.js]
> 
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?2,278748,278748#msg-278748
> 

Re: Redirection

2018-02-22 Thread Aziz Rozyev
Hi,

Show your full config. Usually there is no need to set a variable like
$tomcatdomain;

proxy_pass http://tomcatdomain;

is enough.
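As a sketch of what the location usually looks like for this case (names taken from the question, untested): giving proxy_pass a URI part makes nginx replace the matched /app/ prefix with the Tomcat context path:

```nginx
location /app/ {
    # /app/foo on nginx becomes /application_name/foo on Tomcat.
    proxy_pass http://tomcatdomain/application_name/;
    proxy_set_header Host $host;
    proxy_pass_request_headers on;
}
```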

br,
Aziz.





> On 22 Feb 2018, at 16:17, imrickysingh  wrote:
> 
> Hi guys,
> 
> I am new to nginx and facing some problem with my setup.
> 
> In my setup I have nginx and Tomcat, with the application running on Tomcat
> as http://tomcatdomain/application_name. I want to redirect to the
> application if someone hits http://nginxdomain/app. I am able to do the
> redirection using a location block:
> 
> location /app {
>     proxy_pass $tomcatdomain;
>     proxy_set_header Host $host;
>     proxy_pass_request_headers on;
> }
> 
> http://nginxdomain/app gives the default Tomcat page, but I am not able to
> reach the application.
> If I try http://nginxdomain/app/application_name, it doesn't go anywhere
> either, but gives me the default Tomcat page.
> 
> 
> 
> Regards,
> Ricky Singh
> 
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?2,278722,278722#msg-278722
> 



Re: Nginx error log parser

2018-01-11 Thread Aziz Rozyev
Hi,

It seems that fluentd already has an nginx parser plugin. Another solution
that should probably work is to use grep filters, something like the
following:

<filter **>
  @type grep
  <regexp>
    key client
    pattern ^client.*$
  </regexp>
  <regexp>
    key server
    pattern ^server.*$
  </regexp>
  <regexp>
    key host
    pattern ^host.*$
  </regexp>
  <regexp>
    key zone
    pattern ^zone.*$
  </regexp>
</filter>

Then use the record_transformer type to make further modifications. But I
haven’t tried the above; it’s probably something better asked of the
fluentd community.


br,
Aziz.





> On 10 Jan 2018, at 15:23, mohit Agrawal <mohit3081...@gmail.com> wrote:
> 
> Thanks Aziz for this, I get your point, but can we do the awking in a fluentd 
> conf file? Basically we are looking at real-time awking of an nginx error log 
> file; how heavy would this be, according to you?
> 
> On 10 January 2018 at 17:44, Aziz Rozyev <aroz...@nginx.com> wrote:
> If you need to parse exactly the format you’ve shown in your question,
> it’s fairly easy to create something with e.g. a perl/awk/sed script.
> 
> for instance:
> 
> # tst.awk #
> BEGIN {FS = "," }
> {
> split($1, m, "\ ")
> printf "%s", "{ "
> printf "%s",$2
> printf "%s",$3
> printf "%s",$5
> printf "%s",$4
> printf "reason: %s %s %s %s \"%s\"\n", m[6], m[7], m[8], m[9], m[10]
> print " }"
> 
> }
> #
> 
> 
> result:
> 
> echo '2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections 
> by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET 
> /api/xyz HTTP/1.1", host: "www.xyz.com"' | awk -f /tmp/test.awk
> {  client: xx.xx.xx.xx server: www.xyz.com host: www.xyz.com request: GET 
> /api/xyz HTTP/1.1reason: limiting connections by zone "rl_conn"
>  }
> 
> 
> br,
> Aziz.
> 
> 
> 
> 
> 
> > On 10 Jan 2018, at 14:45, mohit Agrawal <mohit3081...@gmail.com> wrote:
> >
> > Yeah, I have tried grok / regex patterns as well, but without much success. 
> > grok didn't work for me; with regex I was able to segregate time, pid, tid, 
> > log_level and message. I also need the message broken up as in the pattern 
> > above.
> >
> > On 10 January 2018 at 17:12, Aziz Rozyev <aroz...@nginx.com> wrote:
> > Hi Mohit,
> >
> > Check the second reply. I’m not sure there is a conventional
> > pretty-printing tool for the nginx error log.
> >
> >
> > br,
> > Aziz.
> >
> >
> >
> >
> >
> > > On 10 Jan 2018, at 14:37, mohit Agrawal <mohit3081...@gmail.com> wrote:
> > >
> > > Hi Aziz,
> > >
> > > log_format directive only provides formatting for access log, I am 
> > > looking to format error.log which doesn't take log_format directive.
> > > Above example that I gave is just for nginx error logs.
> > >
> > > Thanks
> > >
> > > On 10 January 2018 at 15:26, Aziz Rozyev <aroz...@nginx.com> wrote:
> > > BTW, after re-reading your question, it looks like you need something
> > > like a logstash grok filter.
> > >
> > > br,
> > > Aziz.
> > >
> > >
> > >
> > >
> > >
> > > > On 10 Jan 2018, at 11:45, mohit Agrawal <mohit3081...@gmail.com> wrote:
> > > >
> > > > Hi ,
> > > >
> > > > I am looking to parse nginx error log so as to find out which 
> > > > particular IP is throttled during specific amount of time on connection 
> > > > throttling  / request throttling. The format looks like :
> > > >
> > > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections 
> > > > by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: 
> > > > "GET /api/xyz HTTP/1.1", host: "www.xyz.com"
> > > > And the sample that I am looking for is :
> > > >
> > > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", 
> > > > "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by 
> > > > zone "rl_conn""}
> > > > so that I can pass it through ELK stack and find out the root ip which 
> > > > is causing issue.
> > > >
> > > >
> > > > --
> > > > Mohit Agrawal
> > >
> > >
> > >
> > >
> > > --
> > > Mohit Agrawal
> >
> >
> >
> >
> > --
> > Mohit Agrawal
> 
> 
> 
> 
> -- 
> Mohit Agrawal


Re: Nginx error log parser

2018-01-10 Thread Aziz Rozyev
If you need to parse exactly the format you’ve shown in your question,
it’s fairly easy to create something with e.g. a perl/awk/sed script.

for instance:

# tst.awk # 
BEGIN {FS = "," }
{
split($1, m, "\ ")
printf "%s", "{ "
printf "%s",$2
printf "%s",$3
printf "%s",$5
printf "%s",$4
printf "reason: %s %s %s %s \"%s\"\n", m[6], m[7], m[8], m[9], m[10]
print " }"

}
#


result:

echo '2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by 
zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET 
/api/xyz HTTP/1.1", host: "www.xyz.com"' | awk -f /tmp/test.awk 
{  client: xx.xx.xx.xx server: www.xyz.com host: www.xyz.com request: GET 
/api/xyz HTTP/1.1reason: limiting connections by zone "rl_conn"
 }
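For completeness, a rough Python equivalent of the awk script that emits the JSON shape the question asked for (the regex is tailored to this one sample line and would need hardening for other error-log messages):

```python
import json
import re

# Regex tailored to the "limiting connections" sample from the thread.
PATTERN = re.compile(
    r'^(?P<time>\S+ \S+) \[(?P<level>\w+)\] \d+#\d+: \*\d+ '
    r'(?P<reason>[^,]+), client: (?P<client>[^,]+), '
    r'server: (?P<server>[^,]+), '
    r'request: "(?P<request>[^"]+)", host: "(?P<host>[^"]+)"$'
)

def parse_error_line(line):
    """Return the named fields as a dict, or None if the line doesn't match."""
    m = PATTERN.match(line)
    return m.groupdict() if m else None

LINE = ('2018/01/10 06:26:31 [error] 13485#13485: *64285471 '
        'limiting connections by zone "rl_conn", client: xx.xx.xx.xx, '
        'server: www.xyz.com, request: "GET /api/xyz HTTP/1.1", '
        'host: "www.xyz.com"')

record = parse_error_line(LINE)
print(json.dumps(record))
```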


br,
Aziz.





> On 10 Jan 2018, at 14:45, mohit Agrawal <mohit3081...@gmail.com> wrote:
> 
> Yeah, I have tried grok / regex patterns as well, but without much success. 
> grok didn't work for me; with regex I was able to segregate time, pid, tid, 
> log_level and message. I also need the message broken up as in the pattern 
> above.
> 
> On 10 January 2018 at 17:12, Aziz Rozyev <aroz...@nginx.com> wrote:
> Hi Mohit,
> 
> check the second reply. I’m not sure that there is a conventional pretty 
> printing
> tools for nginx error log.
> 
> 
> br,
> Aziz.
> 
> 
> 
> 
> 
> > On 10 Jan 2018, at 14:37, mohit Agrawal <mohit3081...@gmail.com> wrote:
> >
> > Hi Aziz,
> >
> > log_format directive only provides formatting for access log, I am looking 
> > to format error.log which doesn't take log_format directive.
> > Above example that I gave is just for nginx error logs.
> >
> > Thanks
> >
> > On 10 January 2018 at 15:26, Aziz Rozyev <aroz...@nginx.com> wrote:
> > BTW, after re-reading your question, it looks like you need something
> > like a logstash grok filter.
> >
> > br,
> > Aziz.
> >
> >
> >
> >
> >
> > > On 10 Jan 2018, at 11:45, mohit Agrawal <mohit3081...@gmail.com> wrote:
> > >
> > > Hi ,
> > >
> > > I am looking to parse nginx error log so as to find out which particular 
> > > IP is throttled during specific amount of time on connection throttling  
> > > / request throttling. The format looks like :
> > >
> > > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections 
> > > by zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: 
> > > "GET /api/xyz HTTP/1.1", host: "www.xyz.com"
> > > And the sample that I am looking for is :
> > >
> > > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", 
> > > "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone 
> > > "rl_conn""}
> > > so that I can pass it through ELK stack and find out the root ip which is 
> > > causing issue.
> > >
> > >
> > > --
> > > Mohit Agrawal
> > > ___
> > > nginx mailing list
> > > nginx@nginx.org
> > > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> >
> >
> > --
> > Mohit Agrawal
> 
> 
> 
> 
> -- 
> Mohit Agrawal

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx error log parser

2018-01-10 Thread Aziz Rozyev
Hi Mohit,

check the second reply. I'm not sure there is a conventional pretty-printing 
tool for the nginx error log.


br,
Aziz.





> On 10 Jan 2018, at 14:37, mohit Agrawal <mohit3081...@gmail.com> wrote:
> 
> Hi Aziz,
> 
> the log_format directive only provides formatting for the access log; I am looking to 
> format error.log, which doesn't take the log_format directive. 
> Above example that I gave is just for nginx error logs.
> 
> Thanks
> 
> On 10 January 2018 at 15:26, Aziz Rozyev <aroz...@nginx.com> wrote:
> btw, after re-reading your question, it looks like you need something 
> like a logstash grok filter.
> 
> br,
> Aziz.
> 
> 
> 
> 
> 
> > On 10 Jan 2018, at 11:45, mohit Agrawal <mohit3081...@gmail.com> wrote:
> >
> > Hi ,
> >
> > I am looking to parse nginx error log so as to find out which particular IP 
> > is throttled during specific amount of time on connection throttling  / 
> > request throttling. The format looks like :
> >
> > 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by 
> > zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET 
> > /api/xyz HTTP/1.1", host: "www.xyz.com"
> > And the sample that I am looking for is :
> >
> > {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", 
> > "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone 
> > "rl_conn""}
> > so that I can pass it through ELK stack and find out the root ip which is 
> > causing issue.
> >
> >
> > --
> > Mohit Agrawal
> > ___
> > nginx mailing list
> > nginx@nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> 
> 
> 
> -- 
> Mohit Agrawal

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx error log parser

2018-01-10 Thread Aziz Rozyev
btw, after re-reading your question, it looks like you need something like a 
logstash grok filter.

br,
Aziz.





> On 10 Jan 2018, at 11:45, mohit Agrawal  wrote:
> 
> Hi ,
> 
> I am looking to parse nginx error log so as to find out which particular IP 
> is throttled during specific amount of time on connection throttling  / 
> request throttling. The format looks like :
> 
> 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by 
> zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET 
> /api/xyz HTTP/1.1", host: "www.xyz.com"
> And the sample that I am looking for is : 
> 
> {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", 
> "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone 
> "rl_conn""}
> so that I can pass it through ELK stack and find out the root ip which is 
> causing issue.
> 
> 
> -- 
> Mohit Agrawal
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Nginx error log parser

2018-01-10 Thread Aziz Rozyev
is a JSON-style 'log_format' what you're asking for?

http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
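For completeness, a hand-rolled JSON format for the *access* log (the error log is not configurable this way) can look like this; with nginx 1.11.8+ the escape=json parameter keeps the values valid JSON, and the field choice here is just an example:

```nginx
http {
    log_format json_combined escape=json
        '{"time":"$time_iso8601",'
        '"client":"$remote_addr",'
        '"request":"$request",'
        '"status":$status,'
        '"host":"$host"}';

    access_log /var/log/nginx/access.json.log json_combined;
}
```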

br,
Aziz.





> On 10 Jan 2018, at 11:45, mohit Agrawal  wrote:
> 
> Hi ,
> 
> I am looking to parse nginx error log so as to find out which particular IP 
> is throttled during specific amount of time on connection throttling  / 
> request throttling. The format looks like :
> 
> 2018/01/10 06:26:31 [error] 13485#13485: *64285471 limiting connections by 
> zone "rl_conn", client: xx.xx.xx.xx, server: www.xyz.com, request: "GET 
> /api/xyz HTTP/1.1", host: "www.xyz.com"
> And the sample that I am looking for is : 
> 
> {client: "xx.xx.xx.xx", server: "www.xyz.com", host: "www.xyz.com", 
> "request": "GET /api/xyz HTTP/1.1", reason: "limiting connections by zone 
> "rl_conn""}
> so that I can pass it through ELK stack and find out the root ip which is 
> causing issue.
> 
> 
> -- 
> Mohit Agrawal
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: 504 gateway timeouts

2018-01-09 Thread Aziz Rozyev
Hi Wade,

At least provide the access/error log fragments, the curl -ivvv <..> output 
both directly against the 3rd-party service and via nginx, and the 
jmeter output (if you use it). It would also be worth comparing the nginx 
configurations from the Mac and the Linux box.

As it stands, it's barely possible to conclude anything relevant about the issue; 
having exactly the same nginx configuration yet getting different results 
looks rather strange. 

Hint: change the 'error_log' directive's level to 'info', check the error/access logs, 
and gather a tcpdump. If that doesn't make the problem
clearer, start nginx in debug mode. This article should be useful: 
https://www.nginx.com/resources/admin-guide/debug/
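For the timeout itself, the relevant proxy directives are set per location; a sketch (the upstream name and the values are illustrative — the stock default for each timeout is 60 s):

```nginx
location /api/ {
    proxy_pass            http://tomcat_backend;  # assumed upstream name

    # raise the 60 s defaults to cover the slow third-party calls
    proxy_connect_timeout 75s;
    proxy_send_timeout    120s;
    proxy_read_timeout    120s;
}
```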



br,
Aziz.





> On 9 Jan 2018, at 23:56, Peter Booth  wrote:
> 
> Wade, 
> 
> This reminds me of something I once saw with an application that was making 
> web service requests to FedEx. So are you saying that the response times are 
> bimodal? That you either get a remote response within a few seconds or the 
> request takes more than 60 seconds, and that you have no 20sec,30sec,40sec 
> requests?
> 
> And, if so, do those 60+ sec requests ever get a healthy response?
> 
> 
> Sent from my iPhone
> 
> On Jan 9, 2018, at 1:52 PM, Wade Girard  wrote:
> 
>> Hi nginx group,
>> 
>> If anyone has any ideas on this, they would be appreciated.
>> 
>> Thanks
>> 
>> On Fri, Jan 5, 2018 at 6:28 AM, Wade Girard  wrote:
>> Hi Peter,
>> 
>> Thank You.
>> 
>> In my servlet I am making https requests to third party vendors to get data 
>> from them. The requests typically take 4~5 seconds, but every now and then 
>> one of the requests will take more than 60 seconds. So the connection from 
>> the client to nginx to tomcat will remain open, and at 60 seconds nginx is 
>> terminating the request to tomcat, even though the connection from the third 
>> party server to tomcat is still open.
>> 
>> I am also working with the third party vendor to have them see why their 
>> connections sometimes take more than 60 seconds.
>> 
>> Through googling I discovered that adding the settings proxy_send_timeout, 
>> proxy_read_timeout, proxy_connection_timeout, etc... to my location 
>> definition in my conf file could change the timeout to be different (higher) 
>> than the apparent default 60 second timeout. I use a Mac for development. I 
>> added these to my local conf file, and added the long connection request to 
>> test if the settings worked. They did. However they do not have the same 
>> effect for nginx installed on my production Ubuntu 16.x servers. I did not 
>> realize that these settings were limited by the OS that nginx is installed 
>> on. Are there are similar settings that will work for the Ubuntu 16.x OS to 
>> achieve the same result?
>> 
>> Wade
>> 
>> On Fri, Jan 5, 2018 at 1:33 AM, Peter Booth  wrote:
>> Wade,
>> 
>> I think that you are asking “hey why isn’t nginx behaving identically on 
>> MacOS and Linux when I create a servlet that invokes Thread.sleep(30) 
>> before it returns a response?”
>> 
>> Am I reading you correctly?
>> 
>> A flippant response would be to say: “because OS/X and Linux are different 
>> OSes that behave differently”
>> 
>> It would probably help us if you explained a little more about your test, 
>> why the sleep is there and what your goals are?
>> 
>> 
>> Peter
>> 
>> 
>>> On Jan 4, 2018, at 11:45 PM, Wade Girard  wrote:
>>> 
>>> I am not sure what is meant by this or what action you are asking me to 
>>> take. The settings, when added to nginx conf file on Mac OS server and 
>>> nginx reloaded take effect immediately and work as expected, the same 
>>> settings when added to nginx conf file on Ubuntu and nginx reloaded have no 
>>> effect at all. What steps can I take to have the proxy in nginx honor these 
>>> timeouts, or what other settings/actions can I take to make this work?
>>> 
>>> Thanks
>>> 
>>> On Thu, Jan 4, 2018 at 7:46 PM, Zhang Chao  wrote:
>>> > The version that is on the ubuntu servers was 1.10.xx. I just updated it 
>>> > to 
>>> >
>>> > nginx version: nginx/1.13.8
>>> >
>>> > And I am still having the same issue.
>>> >
>>> > How do I "Try to flush out some output early on so that nginx will know 
>>> > that Tomcat is alive."
>>> >
>>> > The nginx and tomcat connection is working fine for all 
>>> > requests/responses that take less t
>>> 
>>> Maybe you can flush out the HTTP response headers quickly.
>>> 
>>> 
>>> ___
>>> nginx mailing list
>>> nginx@nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>> 
>>> 
>>> 
>>> -- 
>>> Wade Girard
>>> c: 612.363.0902
>>> ___
>>> nginx mailing list
>>> nginx@nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>> 
>> 
>> ___
>> nginx mailing list
>> 

Re: MAP location in conf file

2017-12-29 Thread Aziz Rozyev
check docs:
http://nginx.org/en/docs/http/ngx_http_map_module.html

every directive description has a "Context" field, where you can see
in which configuration context the directive may be used.

regarding your question: yes, you actually have to put it at the very top, in the 'http' context.
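A minimal sketch of such a "global" map at http level (the variable and pattern are made up for illustration):

```nginx
http {
    # declared once in the http context, the resulting variable
    # is visible in every server/location block below
    map $http_user_agent $is_bot {
        default        0;
        "~*bot|crawl"  1;
    }

    server {
        listen 80;

        location / {
            if ($is_bot) { return 403; }
        }
    }
}
```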

br,
Aziz.





> On 29 Dec 2017, at 04:06, li...@lazygranch.com wrote:
> 
> Presently I'm putting maps in the server location. Can they be put in
> the very top to make them work for all servers? If not, I can just make
> the maps into include files and insert as needed, but maybe making the
> map global is more efficient.
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Proxy pass and URL rewrite with upstream

2017-12-21 Thread Aziz Rozyev
Hi

create 4 separate upstreams for each of these apaches,

create 4 locations, within each location block proxy_pass to appropriate 
upstream.

avoid using sub_filters; they are mostly for rewriting the bodies of html documents.

http {

# for phpadmin
upstream phpadminup {
   server phpadmin.ltda.local:80;
}

upstream whateverup {
   server whatevername.ltda.local:80;
}

server {
   listen 80;

   location /phpadmin/ {
      # note the trailing slash: it strips /phpadmin/ before proxying
      proxy_pass http://phpadminup/;
   }
   
   location /whatevername/ {
      proxy_pass http://whateverup/;
   }

   ...

}

}
br,
Aziz.





> On 21 Dec 2017, at 17:09, M. Rodrigo Monteiro  
> wrote:
> 
> 
>  server XXX.XXX.XXX.XXX:80 fail_timeout=60;
>  server XXX.XXX.XXX.XXX:80 fail_timeout=60;
>  server XXX.XXX.XXX.XXX:80 fail_timeout=60;
>  server XXX.XXX.XXX.XXX:80 fail_timeout=60;
> }
> 
> # cat systems.ltda.local.conf
> 
> server {
>  listen 80;
>  server_name systems.ltda.local;
>  access_log /var/log/nginx/systems.ltda.local_access.log;
>  error_log /var/log/nginx/systems.ltda.local_error.log;
> 
> location /phpmyadmin {
>  proxy_pass http://wpapp/;
>  sub_filter "http://systems.ltda.local/phpmyadmin;
> "http://phpmyadmin.ltda.local;;
>  sub_filter "http://systems.ltda.local/phpmyadmin/; "http://phpmyadmin.ltda/;;
>  sub_filter_once off;
>  }

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Centos 7 file permission problem

2017-12-20 Thread Aziz Rozyev
no problem, btw, check out this post

https://www.nginx.com/blog/nginx-se-linux-changes-upgrading-rhel-6-6/


br,
Aziz.





> On 21 Dec 2017, at 03:33, li...@lazygranch.com wrote:
> 
> Well that was it. You can't believe how many hours I wasted on that.
> Thanks. Double thanks. 
> I'm going to mention this in the Digital Ocean help pages. 
> 
> I disabled selinux, but I have a book lying around on how to set it up.
> Eh, it is on the list. 
> 
> On Wed, 20 Dec 2017 14:17:18 +0300
> Aziz Rozyev <aroz...@nginx.com> wrote:
> 
>> Hi,
>> 
>> have you checked this with SELinux disabled? 
>> 
>> br,
>> Aziz.
>> 
>> 
>> 
>> 
>> 
>>> On 20 Dec 2017, at 11:07, li...@lazygranch.com wrote:
>>> 
>>> I'm setting up a web server on a Centos 7 VPS. I'm relatively sure I
>>> have the firewalls set up properly since I can see my browser
>>> requests in the access and error log. That said, I have file
>>> permission problem. 
>>> 
>>> nginx 1.12.2
>>> Linux servername 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20
>>> 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
>>> 
>>> 
>>> nginx.conf (with comments removed for brevity and my domain name
>>> remove because google)
>>> ---
>>> user nginx;
>>> worker_processes auto;
>>> error_log /var/log/nginx/error.log;
>>> pid /run/nginx.pid;
>>> 
>>> events {
>>>   worker_connections 1024;
>>> }
>>> 
>>> http {
>>>   log_format  main  '$remote_addr - $remote_user [$time_local]
>>> "$request" ' '$status $body_bytes_sent "$http_referer" '
>>> '"$http_user_agent" "$http_x_forwarded_for"';
>>> 
>>>   access_log  /var/log/nginx/access.log  main;
>>> 
>>>   sendfileon;
>>>   tcp_nopush  on;
>>>   tcp_nodelay on;
>>>   keepalive_timeout   65;
>>>   types_hash_max_size 2048;
>>> 
>>>   include /etc/nginx/mime.types;
>>>   default_typeapplication/octet-stream;
>>> 
>>> server {
>>>   listen 80;
>>>   server_name mydomain.com www.mydomain.com;
>>> 
>>>   return 301 https://$host$request_uri;
>>> }
>>> 
>>>   server {
>>>   listen   443 ssl  http2;
>>>   server_name  mydomain.com www.mydomain.com;
>>>   ssl_dhparam /etc/ssl/certs/dhparam.pem;
>>>   root /usr/share/nginx/html/mydomain.com/public_html;
>>> 
>>> ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; #
>>> managed by Certbot
>>> ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
>>> # managed by Certbot ssl_ciphers HIGH:!aNULL:!MD5;
>>> ssl_prefer_server_ciphers on;
>>> 
>>>   location / {
>>>   root   /usr/share/nginx/html/mydomain.com/public_html;
>>>   index  index.html index.htm;
>>>   }
>>> #
>>>   error_page 404 /404.html;
>>>   location = /40x.html {
>>>   }
>>> #
>>>   error_page 500 502 503 504 /50x.html;
>>>   location = /50x.html {
>>>   }
>>>   }
>>> 
>>> }
>>> 
>>> I have firefox set up with no cache and do not save history.
>>> -
>>> access log:
>>> 
>>> mypi - - [20/Dec/2017:07:46:44 +] "GET /index.html HTTP/2.0"
>>> 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
>>> Firefox/52.0" "-"
>>> 
>>> myip - - [20/Dec/2017:07:48:44 +] "GET /index.html
>>> HTTP/2.0" 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0)
>>> Gecko/20100101 Firefox/52.0" "-"
>>> ---
>>> error log:
>>> 
>>> 2017/12/20 07:46:44 [error] 10146#0: *48 open()
>>> "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed
>>> (13: Permission denied), client: myip, server: mydomain.com,
>>> request: "GET /index.html HTTP/2.0", host: "mydomain.com"
>>> 2017/12/20 07:48:44 [error] 10146#0: *48 open()
>>> "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed
>>> (13: Permission denied), client: myip, server

Re: Centos 7 file permission problem

2017-12-20 Thread Aziz Rozyev
Hi,

have you checked this with SELinux disabled? 

br,
Aziz.





> On 20 Dec 2017, at 11:07, li...@lazygranch.com wrote:
> 
> I'm setting up a web server on a Centos 7 VPS. I'm relatively sure I
> have the firewalls set up properly since I can see my browser requests
> in the access and error log. That said, I have file permission problem. 
> 
> nginx 1.12.2
> Linux servername 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> 
> 
> nginx.conf (with comments removed for brevity and my domain name remove
> because google)
> ---
> user nginx;
> worker_processes auto;
> error_log /var/log/nginx/error.log;
> pid /run/nginx.pid;
> 
> events {
>worker_connections 1024;
> }
> 
> http {
>log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
>  '$status $body_bytes_sent "$http_referer" '
>  '"$http_user_agent" "$http_x_forwarded_for"';
> 
>access_log  /var/log/nginx/access.log  main;
> 
>sendfileon;
>tcp_nopush  on;
>tcp_nodelay on;
>keepalive_timeout   65;
>types_hash_max_size 2048;
> 
>include /etc/nginx/mime.types;
>default_typeapplication/octet-stream;
> 
> server {
>listen 80;
>server_name mydomain.com www.mydomain.com;
> 
>return 301 https://$host$request_uri;
> }
> 
>server {
>listen   443 ssl  http2;
>server_name  mydomain.com www.mydomain.com;
>ssl_dhparam /etc/ssl/certs/dhparam.pem;
>root /usr/share/nginx/html/mydomain.com/public_html;
> 
> ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed 
> by Certbot
> ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed 
> by Certbot
>ssl_ciphers HIGH:!aNULL:!MD5;
>ssl_prefer_server_ciphers on;
> 
>location / {
>root   /usr/share/nginx/html/mydomain.com/public_html;
>index  index.html index.htm;
>}
> #
>error_page 404 /404.html;
>location = /40x.html {
>}
> #
>error_page 500 502 503 504 /50x.html;
>location = /50x.html {
>}
>}
> 
> }
> 
> I have firefox set up with no cache and do not save history.
> -
> access log:
> 
> mypi - - [20/Dec/2017:07:46:44 +] "GET /index.html HTTP/2.0" 403 169
> "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
> Firefox/52.0" "-"
> 
> myip - - [20/Dec/2017:07:48:44 +] "GET /index.html
> HTTP/2.0" 403 169 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:52.0)
> Gecko/20100101 Firefox/52.0" "-"
> ---
> error log:
> 
> 2017/12/20 07:46:44 [error] 10146#0: *48 open() 
> "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed (13: 
> Permission denied), client: myip, server: mydomain.com, request: "GET 
> /index.html HTTP/2.0", host: "mydomain.com"
> 2017/12/20 07:48:44 [error] 10146#0: *48 open() 
> "/usr/share/nginx/html/mydomain.com/public_html/index.html" failed (13: 
> Permission denied), client: myip, server: mydomain.com, request: "GET 
> /index.html HTTP/2.0", host: "mydomain.com"
> 
> 
> Directory permissions:
> For now, I made eveything 755 with ownership nginx:nginx I did chmod
> and chown with the -R option
> 
> /etc/nginx:
> drwxr-xr-x.  4 nginx nginx4096 Dec 20 07:39 nginx
> 
> /usr/share/nginx:
> drwxr-xr-x.   4 nginx nginx33 Dec 15 08:47 nginx
> 
> /var/log:
> drwx--. 2 nginx  nginx4096 Dec 20 07:51 nginx
> --
> systemctl status nginx
> ● nginx.service - The nginx HTTP and reverse proxy server
>   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor 
> preset: disabled)
>   Active: active (running) since Wed 2017-12-20 04:21:37 UTC; 3h 37min ago
>  Process: 10145 ExecReload=/bin/kill -s HUP $MAINPID (code=exited, 
> status=0/SUCCESS)
> Main PID: 9620 (nginx)
>   CGroup: /system.slice/nginx.service
>   ├─ 9620 nginx: master process /usr/sbin/nginx
>   └─10146 nginx: worker process
> 
> 
> Dec 20 07:18:33 servername systemd[1]: Reloaded The nginx HTTP and reverse 
> proxy server.
> --
> 
> ps aux | grep nginx
> root  9620  0.0  0.3  71504  3848 ?Ss   04:21   0:00 nginx: 
> master process /usr/sbin/nginx
> nginx10146  0.0  0.4  72004  4216 ?S07:18   0:00 nginx: 
> worker process
> root 10235  0.0  0.0 112660   952 pts/1S+   08:01   0:00 grep ngin
> 
> ---
> firewall-cmd --zone=public --list-all
> public (active)
>  target: default
>  icmp-block-inversion: no
>  interfaces: eth0
>  sources: 
>  services: ssh dhcpv6-client http https
>  ports: 
>  protocols: 
>  masquerade: no
>  forward-ports: 
>  source-ports: 
>  icmp-blocks: 
>  rich rules:
> 

Re: https upstream server and a local backup http upstream

2017-12-16 Thread Aziz Rozyev
If the point is a "hot" standby for the remote, it's not quite clear why all these complications are needed:
just define location @fallback { internal; proxy_pass http://local_up; }, send 
everything else
to the remote, and fall back with error_page ... = @fallback. In fact, you have already done exactly that.
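A minimal sketch of that fallback layout (ports and upstream hostnames are assumptions):

```nginx
upstream remote_up { server remote.example.com:443; }  # assumed remote host
upstream local_up  { server 127.0.0.1:7070; }          # assumed local node

server {
    listen 80;

    location / {
        proxy_pass https://remote_up;
        # treat upstream 5xx as "try the fallback" instead of relaying it
        proxy_intercept_errors on;
        error_page 500 502 504 = @fallback;
    }

    location @fallback {
        internal;
        proxy_pass http://local_up;
    }
}
```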


br,
Aziz.





> On 16 Dec 2017, at 23:20, Fedor Dikarev <f...@hamilton.rinet.ru> wrote:
> 
> The crux of the task is also the second "but":
>> The second but: if that proxy is unreachable / didn't answer / simply
>> returns 500s, then fall back to the local node again.
> 
> that is, if I have:
>> upstream remote { server 127.0.0.1:8081; }
>> upstream local { server 127.0.0.1:7070; }
> 
>> server { listen 8081; return 200 remote\n; }
>> server { listen 7070; return 200 local\n; }
> 
> then, based on the header, I go to the remote server:
>> curl -fs -H "X-Some-Header: yes" hamilton.rinet.ru/some_header
>> remote
> 
> And if remote returns 500s, or is commented out entirely, it should go to
> local:
>> curl -fs -H "X-Some-Header: yes" hamilton.rinet.ru/some_header
>> local
> 
> and yes, this construct behaves as it should:
>>location =/some_header {
>>proxy_intercept_errors on;
>>if ($remote) {
>>  proxy_pass http://remote;
>>  error_page 500 502 504 = @local;
>>}
>>proxy_pass http://local;
>>}
>>location @local {   
>>internal;
>>proxy_pass http://local;
>>}
> 
> But the big original task already has plenty of ifs as it is, which is why
> I'd like to get a solution that doesn't use if.
> 
> Ideally: literally in the form of backup_proto=http :) since this very task
> was solved most elegantly with
>> upstream remote {
>>  server remote;
>>  server local backup;
>> }
> until the requirement that the upstream be https appeared
> 
> 16.12.17 22:52, Aziz Rozyev writes:
>> maybe I haven't grasped the point of the task, of course, but it switches over even without error_page:
>> 
>> map $http_x_myheader $remote {
>> "" 0;
>> "test" 1;
>> }
>> 
>> upstream remote_up {
>> server nginx.org:443;
>> }
>> 
>> upstream local_up {
>> server localhost:7070;
>> }
>> 
>> server {
>> listen 8085;
>> location / {
>> proxy_set_header Host $host;
>> proxy_set_header Connection "";
>> proxy_http_version 1.1;
>> 
>> if ($remote) {
>>    proxy_pass https://remote_up;
>> }
>> proxy_pass http://local_up;
>> }
>> }
>> 
>> server {
>> listen 7070;
>> return 200 "OK";
>> }
>> 
>> 
>> [root@tc ~]# curl http://localhost:8085/
>> OK
>> 
>> [root@tc ~]# curl -H 'X-Myheader: test' http://localhost:8085/
>> 
>> > "http://www.w3.org/TR/html4/loose.dtd;>
>> > type="application/rss+xml" title="nginx news" 
>> href="http://nginx.org/index.rss;>nginx news
>> [ … ]
>> 
>> 
>> I couldn't come up with a way to manage without the if, though.
>> 
>> br,
>> Aziz.
>> 
>> 
>> 
>> 
>> 
>>> On 16 Dec 2017, at 19:28, Fedor Dikarev <f...@hamilton.rinet.ru> wrote:
>>> 
>>> I tried this variant, and without error_page it does not switch to local.
>>> But with error_page written inside the if, it seems to work as needed.
>>> Now I just have to convince myself that the if cannot be avoided here,
>>> much as I would like to.
>>> 
>>> 16.12.17 15:21, Aziz Rozyev writes:
>>>> wouldn't a variant with 2 upstreams work?
>>>> 
>>>> upstream remote_up {
>>>>   server remote_upstream:443;
>>>> }
>>>> 
>>>> upstream local_up {
>>>>   server localhost:7070;
>>>> }
>>>> 
>>>> map $http_x_some_header $remote {
>>>>   ""      0;
>>>>   default 1;
>>>> }
>>>> 
>>>> if ($remote) {
>>>>   proxy_pass https://remote_up;
>>>> }
>>>> proxy_pass http://local_up;
>>>> 
>>>> 
>>>> br,
>>>> Aziz.
>>>> 
>>>> 

Re: https upstream server and a local backup http upstream

2017-12-16 Thread Aziz Rozyev
wouldn't a variant with 2 upstreams work?

upstream remote_up {
    server remote_upstream:443;
}

upstream local_up {
    server localhost:7070;
}

map $http_x_some_header $remote {
    ""      0;
    default 1;
}

# inside the location:
if ($remote) {
    proxy_pass https://remote_up;
}
proxy_pass http://local_up;


br,
Aziz.





> On 16 Dec 2017, at 12:50, Fedor Dikarev  wrote:
> 
> Hi!
> 
> I'm trying to tidy up one nginx config here, and so far it's going
> rather badly :-(
> 
> The requirement, admittedly, is rather twisted from the start:
> there is an nginx that by default proxies requests to a locally running node.
> But if the request carries the X-Some-Header header, the request
> must be proxied to another server over https.
> The second but: if that proxy is unreachable / didn't answer / simply
> returns 500s, then fall back to the local node again.
> 
> the first thought was:
> map $http_x_some_header $use_backend {
>  "" http://localhost:7070;
>  default https://remote_upstream;
> }
> upstream remote_upstream {
>  server remote:443;
>  server localhost:7070 backup;
> }
> 
> but the local server is not https, and it all looks rather ugly :-(
> I also thought about bringing up a local https listener on port 7073 on this
> same nginx and proxying to it as the backup, but then the complications
> with certificates begin.
> 
> Then again, I considered another variant:
> proxy_pass $use_backend;
> error_page 500 502 504 = @fallback_local_node;
> 
> location @fallback_local_node {
>  internal;
>  proxy_pass http://localhost:7070;
> }
> 
> but then it turns out that if the header was absent and node answered 502,
> we would go to node a second time. Not exactly terrible, but it comes out
> ugly...
> 
> Can anyone suggest an elegant solution here?
> 
> And as a feature request: perhaps some backup_proto=http parameter (or a
> separate backup_http option) could be added to the backup option of the
> server directive in an upstream, so that switching to the backup server
> also switches the protocol used to reach it.
> -- 
> Fedor Dikarev
> ___
> nginx-ru mailing list
> nginx-ru@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-ru

___
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru

Re: Nginx reload intermittently fails when the protocol in the proxy_pass directive is HTTPS

2017-11-20 Thread Aziz Rozyev
Hi,

try 

1) curl -ivvv https:// to your upstreams. 
2) add the port to the server directive, e.g. server :443; (if your upstreams accept ssl connections on 443)
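A sketch of the relevant bits (hostnames are placeholders; proxy_ssl_server_name is only needed if the upstream selects its certificate by SNI):

```nginx
upstream backend_ssl {
    server backend.example.com:443;   # explicit TLS port
    keepalive 100;
}

server {
    listen 80;

    location / {
        proxy_pass https://backend_ssl;

        # required for keepalive connections to the upstream
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # send the SNI / Host the upstream expects
        proxy_ssl_server_name on;
        proxy_set_header Host backend.example.com;
    }
}
```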



br,
Aziz.





> On 20 Nov 2017, at 20:46, shivramg94  wrote:
> 
> I am trying to use nginx as a reverse proxy with upstream SSL. For this, I
> am using the below directive in the nginx configuration file 
> 
> proxy_pass https://; 
> 
> where "" is another file which has the list of
> upstream servers. 
> 
> upstream  { 
> server : weight=1; 
> keepalive 100; 
> } 
> 
> With this configuration if I try to reload the Nginx configuration, it fails
> intermittently with the below error message 
> 
> nginx: [emerg] host not found in upstream \"\" 
> 
> However, if I changed the protocol mentioned in the proxy_pass directive
> from https to http, then the reload goes through. 
> 
> Could anyone please explain what mistake I might be doing here? 
> 
> Thanks in advance.
> 
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?2,277415,277415#msg-277415
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Different Naxsi rulesets

2017-11-13 Thread Aziz Rozyev
hello,

how about the logs? does naxsi provide any variables that can be monitored?

so far it seems that your rules in 'strict|relaxed' are not triggering; the 'default'
set will always hit (as expected), since '/' is the first location matched and it is
from there that you route to the other 2 locations.

also, try logging in debug mode; maybe that will give more insight.

br,
Aziz.





> On 13 Nov 2017, at 21:47, Jean-Paul Hemelaar <hemel...@desikkel.nl> wrote:
> 
> Hi,
> 
> I have updated the config to use 'map' instead of the if-statements. That's 
> indeed a better way.
> The problem however remains:
> 
> - Naxsi mainrules are in the http-block
> - Config similar to:
> 
> map $geoip_country_code $ruleSetCC {
> default "strict";
> CC1 "relaxed";
> CC2 "relaxed";
> }
> 
> location /strict/ {
>include /usr/local/nginx/naxsi.rules.strict;
> 
>proxy_pass  http://app-server/;
> }
> 
> location /relaxed/ {
>include /usr/local/nginx/naxsi.rules.relaxed;
> 
>proxy_pass  http://app-server/;
> }
> 
> location / {
>include /usr/local/nginx/naxsi.rules.default;
> 
>set $ruleSet $ruleSetCC;
>rewrite ^(.*)$ /$ruleSet$1 last;
> }
> 
> 
> It's always using naxsi.rules.default. If this line is removed it's not using 
> any rules (pass-all). 
> 
> Thanks so far!
> 
> JP
> 
> 
> 
> 
> 
> On Mon, Nov 13, 2017 at 2:14 PM, Aziz Rozyev <aroz...@nginx.com> wrote:
> At first glance the config looks correct, so it's probably something with the naxsi 
> rulesets.
> Btw, why don’t you use maps?
> 
> map $geoip_country_code $strictness {
>   default "strict";
>   CC_1    "not-so-strict";
>   CC_2    "not-so-strict";
>   # .. more country codes;
> }
> 
> # strict and not-so-strict locations
> 
> map $strictness $path {
>    "strict"        "/strict/";
>    "not-so-strict" "/not-so-strict/";
> }
> 
> location / {
>    return 302 $path$request_uri;
>    # ..
> }
> 
> 
> br,
> Aziz.
> 
> 
> 
> 
> 
> > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar <hemel...@desikkel.nl> wrote:
> >
> > T THIS WORKS:
> >  # include /usr/local/n
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> 
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Different Naxsi rulesets

2017-11-13 Thread Aziz Rozyev
At first glance the config looks correct, so it's probably something with the naxsi 
rulesets.
Btw, why don't you use maps? 

map $geoip_country_code $strictness {
    default "strict";
    CC_1    "not-so-strict";
    CC_2    "not-so-strict";
    # .. more country codes;
}

# strict and not-so-strict locations

map $strictness $path {
    "strict"        "/strict/";
    "not-so-strict" "/not-so-strict/";
}

location / {
    return 302 $path$request_uri;
    # .. 
}
 

br,
Aziz.





> On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar  wrote:
> 
> T THIS WORKS:
>  # include /usr/local/n

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Different Naxsi rulesets

2017-11-12 Thread Aziz Rozyev
at least you're missing an or (|) operator between 

> TRUSTED_CC_2  and TRUSTED_CC_3
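i.e., with the missing pipe added the condition would read (the country codes are the placeholders from the original post):

```nginx
if ( $geoip_country_code ~ (TRUSTED_CC_1|TRUSTED_CC_2|TRUSTED_CC_3) ) {
    set $ruleSet "not_so_strict";
}
```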



br,
Aziz.





> On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar  wrote:
> 
> Hi!
> 
> I'm using Nginx together with Naxsi; so not sure it this is the correct place 
> for this post, but I'll give it a try.
> 
> I want to configure two detection thresholds: a strict detection threshold 
> for 'far away countries', and a less-strict set
> for local countries. I'm using a setup like:
> 
> location /strict/ {
>  include /usr/local/nginx/naxsi.rules.strict;
> 
>  proxy_pass  http://app-server/;
> }
> 
> location /not_so_strict/ {
>  include /usr/local/nginx/naxsi.rules.not_so_strict;
> 
>  proxy_pass  http://app-server/;
> }
> 
> location / {
>  # REMOVED BUT THIS WORKS:
>  # include /usr/local/nginx/naxsi.rules.not_so_strict;
>  set $ruleSet "strict";
>  if ( $geoip_country_code ~ (TRUSTED_CC_1|TRUSTED_CC_2TRUSTED_CC_3) ) {
> set $ruleSet "not_so_strict";
>  }
> 
>  rewrite ^(.*)$ /$ruleSet$1 last;
> }
> 
> location /RequestDenied {
> return 403;
> }
> 
> 
> The naxsi.rules.strict file contains the check rules:
> CheckRule "$SQL >= 8" BLOCK;
> etc.
> 
> For some reason this doesn't work. The syntax is ok, and I can reload Nginx. 
> However the firewall never triggers. If I uncomment the include in the 
> location-block / it works perfectly.
> Any idea's why this doesn't work, or any better setup to use different 
> rulesets based on some variables?
> 
> Thanks,
> 
> JP
> 
> 


Re: Dynamic CRL for "client" SSL certificates

2017-08-09 Thread Aziz Rozyev
Leonid,

wouldn't the auth_request approach with an additional script work for you?
Something like this:

 location / {
     auth_request /auth-proxy;
     add_header X-Client-Cert $ssl_client_cert;
     proxy_pass http://backend;
     proxy_set_header X-Client-Cert $ssl_client_cert;
     proxy_set_header X-Client-Dn $ssl_client_s_dn;
 }

 location = /auth-proxy {
     internal;
     proxy_pass http://127.0.0.1/;
     proxy_pass_request_body off;
     proxy_set_header Content-Length "";
     proxy_set_header X-Client-Dn $ssl_client_s_dn;
     proxy_set_header X-Client-Cert $ssl_client_cert;
 }
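If the goal is to reject revoked client certificates without reloading a full CRL, a map keyed on the certificate serial can serve as a lightweight deny-list. A sketch, not a tested setup: the file path and its contents are assumptions, and the map is re-read on a plain `nginx -s reload`, so the serial list can be regenerated from the CRL by an external script:

```nginx
# Hypothetical deny-list file /etc/nginx/revoked_serials.conf,
# containing lines like:  "0123abcd" 1;
map $ssl_client_serial $cert_revoked {
    default 0;
    include /etc/nginx/revoked_serials.conf;
}

server {
    listen 443 ssl;
    ssl_verify_client on;

    location / {
        # Refuse requests whose client certificate serial is listed.
        if ($cert_revoked) {
            return 403;
        }
        proxy_pass http://backend;
    }
}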


br,
Aziz.





> On 9 Aug 2017, at 09:28, Andrey Oktyabrskiy  wrote:
> 
> Ilya Shipitsin wrote:
>> Ansible and similar tools are good for deploying products.
>> Excuse me, but following your own logic... the nginx mailing list is
>> about nginx itself, whereas your task is distributing files.
> Updating the nginx configuration concerns nginx directly.

___
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru

Re: How do I finally get the site to open?

2017-07-07 Thread Aziz Rozyev
There are instructions on the official site:

https://www.nginx.com/resources/wiki/start/topics/recipes/wordpress/
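For reference, the core of the recipe on that page boils down to something like the sketch below; the web root and the PHP-FPM socket path are assumptions that depend on your distribution:

```nginx
server {
    server_name example.com;
    root /var/www/wordpress;       # assumed install location
    index index.php;

    location / {
        # Fall back to index.php so WordPress permalinks work.
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm.sock;  # hypothetical socket path
    }
}
```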


br,
Aziz.





> On 7 Jul 2017, at 18:01, Nadya  wrote:
> 
> Thanks for suggesting a solution. It wouldn't hurt to include the links, too.
> 
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?21,275347,275357#msg-275357
> 
