Stream module logging questions

2018-02-28 Thread Grzegorz Kulewski
Hello,

1. How can I log the IP and (especially) the port used by nginx (the proxy) to 
connect to an upstream when the stream module is used?
2. Can I somehow get a log entry at stream connection setup time as well (or 
instead), not only after the connection ends?
3. I think the $tcpinfo_* variables aren't supported in stream. Is there any 
reason for this?
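
[Editor's note: for reference, a sketch of what stream logging offers today, 
assuming nginx 1.11.4+ where the stream log module and these variables exist. 
$upstream_addr holds the "IP:port" nginx connected to, but there is no built-in 
variable for the local source port of that connection (question 1), and the log 
entry is only emitted when the session ends (question 2):]

```nginx
stream {
    # $upstream_addr is the "IP:port" nginx connected to upstream;
    # the log line is written at session end
    log_format proxied '$remote_addr -> $upstream_addr '
                       '[$time_local] $protocol $status '
                       '$bytes_sent $bytes_received $session_time';

    server {
        listen 12345;
        proxy_pass backend.example.com:12345;
        access_log /var/log/nginx/stream.log proxied;
    }
}
```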

-- 
Grzegorz Kulewski
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Nginx Directory Autoindex

2018-02-28 Thread Miguel C
I'm unsure if that's possible without a 3rd-party module...

I've used fancyindex before when I wanted sorting.
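
[Editor's note: a sketch of the sort of thing the third-party ngx_fancyindex 
module allows; the directives below are from that module, not stock nginx:]

```nginx
location /files/ {
    fancyindex on;
    # reverse-order listing, e.g. newest first;
    # name_desc gives a reverse alphabetical listing
    fancyindex_default_sort date_desc;
}
```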

On Wednesday, February 28, 2018, Luciano Mannucci 
wrote:

>
> Hello all,
>
> I have a directory served by nginx via autoindex (That works perfectly
> as documented :). I need to show the content in reverse order (ls -r),
> is there any rather simple method?
>
> Thanks in advance,
>
> Luciano.
> --
>  /"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
>  \ /  ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
>   X   AGAINST HTML MAIL/  E-MAIL: posthams...@sublink.sublink.org
>  / \  AND POSTINGS/   WWW: http://www.lesassaie.IT/


-- 
Miguel Clara,
IT Consulting

Re: fsync()-in webdav PUT

2018-02-28 Thread Peter Booth
This discussion is interesting, educational, and thought-provoking. Web 
architects only learn "the right way" by first doing things "the wrong way" and 
seeing what happens. Attila and Valery asked questions that sound logical, and 
I think there's value in exploring what would happen if their suggestions were 
implemented.

First caveat: nginx is deployed in all manner of different scenarios, on 
different hardware and operating systems. Physical servers and VMs behave very 
differently, as do local and remote storage. When an application writes to 
NFS-mounted storage there's no guarantee that even an fsync will correctly 
enforce a write barrier. Still, if we consider real numbers:

On current-model quad-socket hosts, nginx can support well over 1 million 
requests per second (see the TechEmpower benchmarks).
On the same hardware, a web app that writes to a PostgreSQL DB can do at least 
a few thousand writes per second.
A SATA drive might support 300 write IOPS, whilst an SSD will support 100x 
that.
What this means is that doing fully synchronous writes can reduce your 
potential throughput by a factor of 100 or more. So it's not a great way to 
ensure consistency.

But there are cheaper ways to achieve the same consistency and reliability 
characteristics:

If you are using Linux then your reads and writes will occur through the page 
cache - so the actual disk itself really doesn't matter (whilst your host is 
up).
If you want to protect against loss of physical disk then use RAID.
If you want to protect against a random power failure then use drives with 
battery-backed caches, so writes will get persisted when a server restarts 
after a power failure.
If you want to protect against a crazy person hitting your server with an axe 
then write to two servers ...
But the bottom line is separation of concerns. Nginx should not use fsync 
because it isn’t nginx's business.

My two cents,

Peter


> On Feb 28, 2018, at 4:41 PM, Aziz Rozyev  wrote:
> 
> Hello!
> 
> On Wed, Feb 28, 2018 at 10:30:08AM +0100, Nagy, Attila wrote:
> 
>> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>>> 
 Now that nginx supports running threads, are there plans to convert at
 least DAV PUTs into their own thread (pool), to make it possible to do
 non-blocking (from nginx's event loop PoV) fsync on the uploaded file?
>>> No, there are no such plans.
>>> 
>>> (Also, trying to do fsync() might not be the best idea even in
>>> threads.  A reliable server might be a better option.)
>>> 
>> What do you mean by a reliable server?
>> I want to make sure when the HTTP operation returns, the file is on the 
>> disk, not just in a buffer waiting for an indefinite amount of time to 
>> be flushed.
>> This is what fsync is for.
> 
> The question here is - why do you want the file to be on disk, and 
> not just in a buffer?  Because you expect the server to die in a 
> few seconds without flushing the file to disk?  How probable is 
> that, compared to the probability of the disk dying?  A more 
> reliable server can make this probability negligible, hence the 
> suggestion.
> 
> (Also, another question is what "on the disk" means from a physical 
> point of view.  In many cases this in fact means "somewhere in the 
> disk buffers", and a power outage can easily result in the file 
> being not accessible even after fsync().)
> 
>> Why is doing this in a thread not a good idea? It wouldn't block nginx 
>> that way.
> 
> Because even in threads, fsync() is likely to cause performance 
> degradation.  It might be a better idea to let the OS manage 
> buffers instead.
> 
> -- 
> Maxim Dounin
> http://mdounin.ru/ 

Re: fsync()-in webdav PUT

2018-02-28 Thread Aziz Rozyev
Here is a synthetic test on a VM; not perfect, but representative:


[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=30960 
30960+0 records in
30960+0 records out
253624320 bytes (254 MB) copied, 0.834861 s, 304 MB/s

[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=30960 
conv=fsync
30960+0 records in
30960+0 records out
253624320 bytes (254 MB) copied, 0.854208 s, 297 MB/s

[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=61960
61960+0 records in
61960+0 records out
507576320 bytes (508 MB) copied, 1.71833 s, 295 MB/s
[root@nginx-single ~]# dd if=/dev/zero of=/writetest bs=8k count=61960 
conv=fsync
61960+0 records in
61960+0 records out
507576320 bytes (508 MB) copied, 1.74482 s, 291 MB/s


br,
Aziz.







Re: fsync()-in webdav PUT

2018-02-28 Thread Aziz Rozyev
Valery, 

could you please explain how you came to the conclusion that 

“fsync simply instructs OS to ensure consistency of a file"?

As far as I understand, simply instructing the OS to do stuff comes at no cost, 
right?

> Without fsyncing file's data and metadata a client will receive a positive 
> reply before data has reached the storage, thus leaving non-zero probability 
> that states of two systems involved into a web transaction end up 
> inconsistent.


I understand why one may need consistency, but doing so with fsync is 
nonsense.

Here is what man page says in that regard:


fsync() transfers ("flushes") all modified in-core data of (i.e., modified 
buffer cache pages for) the file referred to by the file descriptor fd to the 
disk device (or other permanent storage device) so that all changed information 
can be retrieved even after the system crashed or was rebooted.  This includes 
writing through or flushing a disk cache if present.  The call blocks until the 
device reports that the transfer has completed.  It also flushes metadata 
information associated with the file (see stat(2)).




br,
Aziz.







Re: nginx 1.11 + fast-cgi cache + map + ssi

2018-02-28 Thread gz
Specify the volatile flag so that the values are not cached after the first
evaluation within the main request.

map $request_uri $fastcgi_cache_key {
volatile;

default
$request_method|$host|$uri|$request_uri|$cookie_currency|$cookie_show_mode;
~^/objekti/.+
$request_method|$host|$uri|$request_uri|$cookie_currency|$http_x_requested_with;
~^/xml/yml.php $request_method|$host|$uri|$arg_type|$arg_nosim;
}

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?21,267467,278848#msg-278848


Re: fsync()-in webdav PUT

2018-02-28 Thread itpp2012
Not waiting for fsync to complete makes calling fsync pointless, and waiting for
fsync is blocking, thread-based or otherwise.
The only midway solution is to implement fsync as a CGI, e.g. a non-blocking
(background) fc call in combination with an OS resource lock.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,278788,278847#msg-278847



ngx_stream_auth_request?

2018-02-28 Thread Grzegorz Kulewski
Hello,

Could you add something similar to HTTP auth_request module for stream?

Basically I want to allow or deny access to a TCP stream proxy based on the 
result of an HTTP request. I want to pass to this request the source and 
destination IP addresses and ports, and possibly some more information (results 
from TLS negotiation/preread and similar).
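
[Editor's note: for reference, the HTTP-side module being referred to works 
roughly like this; it is http-context only, and as of this thread no stream 
equivalent exists, which is what is being requested. The auth endpoint below is 
hypothetical:]

```nginx
location /protected/ {
    auth_request /auth;                  # 2xx allows, 401/403 denies
    proxy_pass http://backend;
}

location = /auth {
    internal;
    proxy_pass http://127.0.0.1:8080/check;      # hypothetical auth service
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```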

-- 
Grzegorz Kulewski


Re: fsync()-in webdav PUT

2018-02-28 Thread Valery Kholodkov

On 28-02-18 15:08, Maxim Dounin wrote:

What do you mean by a reliable server?
I want to make sure when the HTTP operation returns, the file is on the
disk, not just in a buffer waiting for an indefinite amount of time to
be flushed.
This is what fsync is for.


The question here is - why do you want the file to be on disk, and
not just in a buffer?  Because you expect the server to die in a
few seconds without flushing the file to disk?  How probable is
that, compared to the probability of the disk dying?  A more
reliable server can make this probability negligible, hence the
suggestion.


I think the point here is that the lack of fsync leaves some questions 
unanswered. Adding fsync would simply dot all the i's.



(Also, another question is what "on the disk" means from a physical
point of view.  In many cases this in fact means "somewhere in the
disk buffers", and a power outage can easily result in the file
being not accessible even after fsync().)


Why is doing this in a thread not a good idea? It wouldn't block nginx
that way.


Because even in threads, fsync() is likely to cause performance
degradation.  It might be a better idea to let the OS manage
buffers instead.


fsync does not cause performance degradation. fsync simply instructs the OS 
to ensure consistency of a file. What causes performance degradation is the 
expenditure of the resources necessary to ensure that consistency.


val


Re: proxy_pass and concurrent connections

2018-02-28 Thread Дилян Палаузов

Hello,

thanks for your answer.

The documentation is explicit when an upstream server group is used with 
proxy_pass, but it says nothing when proxy_pass is used with a URL.  Being 
explicit in one place and saying nothing in the same manner in an alternative 
scenario leaves space for interpretation, hence my question.

The text "Warning: The ticket field 'nginx_version' is invalid: nginx_version 
is required" is misleading.  It is not a warning but a permanent error.

Moreover, the 'Version' field was set, so the message should be "The 'nginx -V' 
field at the bottom is required" (or alternatively remove the Version field and 
leave only 'nginx -V'; then it will be absolutely clear what is meant).

Regards
  Dilian

On 02/28/18 16:52, Maxim Dounin wrote:

Hello!

On Wed, Feb 28, 2018 at 11:52:18AM +0100, Дилян Палаузов wrote:


when I try to enter a bug at
https://trac.nginx.org/nginx/newticket#ticket, choose as version
1.12.x and submit the system rejects the ticket with the
message:

Warning: The ticket field 'nginx_version' is invalid:
nginx_version is required


That's about the "nginx -V" field you've left blank.


And here is the actual question:

proxy_pass can accept a URL or a server group. When a server
group (upstream) is defined, the maximum number of simultaneous
connections can be specified with max_conns=.

The documentation of proxy_pass should clarify whether, by
default with a URL, only sequential connections are allowed/whether
defining an upstream is the only way to introduce parallelism
towards the proxied server.


Or not, because the answer is obvious unless you have a very unusual
background.  Moreover, the documentation explicitly explains the
default for "max_conns", so this should be obvious regardless of
background:

: max_conns=number
: limits the maximum number of simultaneous active connections to
: the proxied server (1.11.5). Default value is zero, meaning there
: is no limit.

If you for some reason think that only sequential connections are
allowed, most likely you are facing limitations of your backend.



Re: fsync()-in webdav PUT

2018-02-28 Thread Valery Kholodkov
It's completely clear why someone would need to flush a file's data and 
metadata upon a WebDAV PUT operation: many architectures expect a PUT 
operation to be completely settled before a reply is returned.


Without fsyncing file's data and metadata a client will receive a 
positive reply before data has reached the storage, thus leaving 
non-zero probability that states of two systems involved into a web 
transaction end up inconsistent.


Further, the exact moment when the data of a certain specific file reaches 
the storage depends on numerous factors, for example I/O contention. 
Consequently, the exact moment when the data of a file being uploaded 
reaches the storage can only be determined by executing fsync.


val

On 28-02-18 11:04, Aziz Rozyev wrote:

While it's not clear why one may need to flush the data on each HTTP operation,
I can imagine what performance degradation that may lead to.

If it's not some kind of fancy clustering among nodes, I wouldn't care much
where the actual data is; RAM should still be much faster than disk I/O.


br,
Aziz.






On 28 Feb 2018, at 12:30, Nagy, Attila  wrote:

On 02/27/2018 02:24 PM, Maxim Dounin wrote:



Now that nginx supports running threads, are there plans to convert at
least DAV PUTs into their own thread (pool), to make it possible to do
non-blocking (from nginx's event loop PoV) fsync on the uploaded file?

No, there are no such plans.

(Also, trying to do fsync() might not be the best idea even in
threads.  A reliable server might be a better option.)


What do you mean by a reliable server?
I want to make sure when the HTTP operation returns, the file is on the disk, 
not just in a buffer waiting for an indefinite amount of time to be flushed.
This is what fsync is for.

Why is doing this in a thread not a good idea? It wouldn't block nginx that way.



Nginx Directory Autoindex

2018-02-28 Thread Luciano Mannucci

Hello all,

I have a directory served by nginx via autoindex (That works perfectly
as documented :). I need to show the content in reverse order (ls -r),
is there any rather simple method?

Thanks in advance,

Luciano.
-- 
 /"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
 \ /  ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
  X   AGAINST HTML MAIL/  E-MAIL: posthams...@sublink.sublink.org
 / \  AND POSTINGS/   WWW: http://www.lesassaie.IT/


Re: [PATCH] $request_scheme variable

2018-02-28 Thread Maxim Dounin
Hello!

On Tue, Feb 27, 2018 at 10:32:40PM +, Chris Branch via nginx-devel wrote:

> Hi, just giving this patch some birthday bumps.
> 
> > On 27 Feb 2017, at 11:58, Chris Branch via nginx-devel 
> >  wrote:
> > 
> > # HG changeset patch
> > # User Chris Branch 
> > # Date 1488195909 0
> > #  Mon Feb 27 11:45:09 2017 +
> > # Node ID 05f555d65a33ebf005fedc569fb52eba3758e1d7
> > # Parent  87cf6ddb41c216876d13cffa5e637a61b159362c
> > $request_scheme variable.
> > 
> > Contains the URI scheme supplied by the client. If no scheme supplied,
> > equivalent to $scheme.
> > 
> > Scheme can be supplied by the client in two ways:
> > 
> > * HTTP/2 :scheme pseudo-header.
> > * HTTP/1 absolute URI in request line.

The $scheme variable is already documented as

: $scheme
: request scheme, “http” or “https”

and introducing additional variable with the $request_scheme might 
not be a good idea.

If we really need this for some reason, we should rather consider 
changing $scheme instead, like we do with $host.  This might be a 
bad idea for other reasons though, as an ability to supply 
incorrect $scheme might cause security problems.

-- 
Maxim Dounin
http://mdounin.ru/

Re: Antw: [PATCH 1 of 2] Access log: Support for disabling escaping

2018-02-28 Thread Vladimir Homutov
On Thu, Feb 22, 2018 at 06:12:52PM +0100, Johannes Baiter wrote:
> Sorry, I accidentally submitted an incomplete version of the patch.
> Here is the corrected version.
>

Hello,

I've slightly updated the patch (also note your mail client has broken
it - you may want to update your settings to avoid this); please take a look.

See also ticket 1450: https://trac.nginx.org/nginx/ticket/1450

# HG changeset patch
# User Vladimir Homutov 
# Date 1519834295 -10800
#  Wed Feb 28 19:11:35 2018 +0300
# Node ID d420ce6b46768ea7eb23bdec84f992212293af19
# Parent  20f139e9ffa84f1a1db6039547cd35fc4534
Access log: support for disabling escaping (ticket #1450).

Based on patches by Johannes Baiter 
and Calin Don.

diff --git a/src/http/modules/ngx_http_log_module.c 
b/src/http/modules/ngx_http_log_module.c
--- a/src/http/modules/ngx_http_log_module.c
+++ b/src/http/modules/ngx_http_log_module.c
@@ -90,6 +90,11 @@ typedef struct {
 } ngx_http_log_var_t;
 
 
+#define NGX_HTTP_LOG_ESCAPE_DEFAULT  0
+#define NGX_HTTP_LOG_ESCAPE_JSON 1
+#define NGX_HTTP_LOG_ESCAPE_NONE 2
+
+
 static void ngx_http_log_write(ngx_http_request_t *r, ngx_http_log_t *log,
 u_char *buf, size_t len);
 static ssize_t ngx_http_log_script_write(ngx_http_request_t *r,
@@ -126,7 +131,7 @@ static u_char *ngx_http_log_request_leng
 ngx_http_log_op_t *op);
 
 static ngx_int_t ngx_http_log_variable_compile(ngx_conf_t *cf,
-ngx_http_log_op_t *op, ngx_str_t *value, ngx_uint_t json);
+ngx_http_log_op_t *op, ngx_str_t *value, ngx_uint_t escape);
 static size_t ngx_http_log_variable_getlen(ngx_http_request_t *r,
 uintptr_t data);
 static u_char *ngx_http_log_variable(ngx_http_request_t *r, u_char *buf,
@@ -136,6 +141,10 @@ static size_t ngx_http_log_json_variable
 uintptr_t data);
 static u_char *ngx_http_log_json_variable(ngx_http_request_t *r, u_char *buf,
 ngx_http_log_op_t *op);
+static size_t ngx_http_log_unescaped_variable_getlen(ngx_http_request_t *r,
+uintptr_t data);
+static u_char *ngx_http_log_unescaped_variable(ngx_http_request_t *r,
+u_char *buf, ngx_http_log_op_t *op);
 
 
 static void *ngx_http_log_create_main_conf(ngx_conf_t *cf);
@@ -905,7 +914,7 @@ ngx_http_log_request_length(ngx_http_req
 
 static ngx_int_t
 ngx_http_log_variable_compile(ngx_conf_t *cf, ngx_http_log_op_t *op,
-ngx_str_t *value, ngx_uint_t json)
+ngx_str_t *value, ngx_uint_t escape)
 {
 ngx_int_t  index;
 
@@ -916,11 +925,18 @@ ngx_http_log_variable_compile(ngx_conf_t
 
 op->len = 0;
 
-if (json) {
+switch (escape) {
+case NGX_HTTP_LOG_ESCAPE_JSON:
 op->getlen = ngx_http_log_json_variable_getlen;
 op->run = ngx_http_log_json_variable;
+break;
 
-} else {
+case NGX_HTTP_LOG_ESCAPE_NONE:
+op->getlen = ngx_http_log_unescaped_variable_getlen;
+op->run = ngx_http_log_unescaped_variable;
+break;
+
+default: /* NGX_HTTP_LOG_ESCAPE_DEFAULT */
 op->getlen = ngx_http_log_variable_getlen;
 op->run = ngx_http_log_variable;
 }
@@ -1073,6 +1089,39 @@ ngx_http_log_json_variable(ngx_http_requ
 }
 
 
+static size_t
+ngx_http_log_unescaped_variable_getlen(ngx_http_request_t *r, uintptr_t data)
+{
+ngx_http_variable_value_t  *value;
+
+value = ngx_http_get_indexed_variable(r, data);
+
+if (value == NULL || value->not_found) {
+return 0;
+}
+
+value->escape = 0;
+
+return value->len;
+}
+
+
+static u_char *
+ngx_http_log_unescaped_variable(ngx_http_request_t *r, u_char *buf,
+ngx_http_log_op_t *op)
+{
+ngx_http_variable_value_t  *value;
+
+value = ngx_http_get_indexed_variable(r, op->data);
+
+if (value == NULL || value->not_found) {
+return buf;
+}
+
+return ngx_cpymem(buf, value->data, value->len);
+}
+
+
 static void *
 ngx_http_log_create_main_conf(ngx_conf_t *cf)
 {
@@ -1536,18 +1585,21 @@ ngx_http_log_compile_format(ngx_conf_t *
 size_t   i, len;
 ngx_str_t   *value, var;
 ngx_int_t   *flush;
-ngx_uint_t   bracket, json;
+ngx_uint_t   bracket, escape;
 ngx_http_log_op_t   *op;
 ngx_http_log_var_t  *v;
 
-json = 0;
+escape = NGX_HTTP_LOG_ESCAPE_DEFAULT;
 value = args->elts;
 
 if (s < args->nelts && ngx_strncmp(value[s].data, "escape=", 7) == 0) {
 data = value[s].data + 7;
 
 if (ngx_strcmp(data, "json") == 0) {
-json = 1;
+escape = NGX_HTTP_LOG_ESCAPE_JSON;
+
+} else if (ngx_strcmp(data, "none") == 0) {
+escape = NGX_HTTP_LOG_ESCAPE_NONE;
 
 } else if (ngx_strcmp(data, "default") != 0) {
 ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
@@ -1636,7 +1688,7 @@ ngx_http_log_compile_format(ngx_conf_t *
 }
 }
 
-if (ngx_http_log_variable_compile(cf, op, , json)
+if (ngx_http_log_variable_compile(cf, op, , 
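
[Editor's note: the patch is truncated here in the archive. For context, with 
it applied the new value would be selected the same way escape=json already is; 
a hypothetical usage sketch:]

```nginx
# escape=none leaves variable values unescaped, e.g. when a downstream
# log collector does its own escaping
log_format raw escape=none '$remote_addr "$http_user_agent"';
access_log /var/log/nginx/access_raw.log raw;
```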

[njs] Skip empty buffers in HTTP response send().

2018-02-28 Thread Roman Arutyunyan
details:   http://hg.nginx.org/njs/rev/c86a0cc40ce5
branches:  
changeset: 454:c86a0cc40ce5
user:  Roman Arutyunyan 
date:  Wed Feb 28 19:16:25 2018 +0300
description:
Skip empty buffers in HTTP response send().

Such buffers lead to send errors and should never be sent.

diffstat:

 nginx/ngx_http_js_module.c |  4 
 1 files changed, 4 insertions(+), 0 deletions(-)

diffs (14 lines):

diff -r ab1f67b69707 -r c86a0cc40ce5 nginx/ngx_http_js_module.c
--- a/nginx/ngx_http_js_module.cWed Feb 28 16:20:11 2018 +0300
+++ b/nginx/ngx_http_js_module.cWed Feb 28 19:16:25 2018 +0300
@@ -891,6 +891,10 @@ ngx_http_js_ext_send(njs_vm_t *vm, njs_v
 return NJS_ERROR;
 }
 
+if (s.length == 0) {
+continue;
+}
+
 /* TODO: njs_value_release(vm, value) in buf completion */
 
 ngx_log_debug2(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,


Re: Client certificates and check for DN?

2018-02-28 Thread rainer

Am 2018-02-28 16:41, schrieb Igor A. Ippolitov:

Hello.

I'm not sure about what do you really need, but it looks like you can
get almost the same result using a combination of map{} blocks and
conditionals.

Something like this:

map $ssl_client_s_dn $ou_matched {
    ~OU=whatever 1;
    default 0;
}
map $ssl_client_s_dn $cn_matched {
    ~CN=whatever 1;
    default 0;
}
map $ou_matched$cn_matched $unauthed {
    ~0 1;
    default 0;
}
server {
    
    ssl_trusted_certificate path/to/public/certs;
    ssl_verify_client on;
    if ($unauthed) {return 403;}
}



OK, thanks a lot.


I'll look into it.

Currently, the exact details are still a bit murky.
Customer was very vague...
I'll know more Friday next week.



Regards,
Rainer

Re: proxy_pass and concurrent connections

2018-02-28 Thread Maxim Dounin
Hello!

On Wed, Feb 28, 2018 at 11:52:18AM +0100, Дилян Палаузов wrote:

> when I try to enter a bug at 
> https://trac.nginx.org/nginx/newticket#ticket, choose as version 
> 1.12.x and submit the system rejects the ticket with the 
> message:
> 
> Warning: The ticket field 'nginx_version' is invalid: 
> nginx_version is required

That's about the "nginx -V" field you've left blank.

> And here is the actual question:
> 
> proxy_pass can accept a URL or a server group. When a server 
> group (upstream) is defined, the maximum number of simultaneous 
> connections can be specified with max_conns=.
> 
> The documentation of proxy_pass should clarify whether, by 
> default with a URL, only sequential connections are allowed/whether 
> defining an upstream is the only way to introduce parallelism 
> towards the proxied server.

Or not, because the answer is obvious unless you have a very unusual 
background.  Moreover, the documentation explicitly explains the 
default for "max_conns", so this should be obvious regardless of 
background:

: max_conns=number
: limits the maximum number of simultaneous active connections to 
: the proxied server (1.11.5). Default value is zero, meaning there 
: is no limit.

If you for some reason think that only sequential connections are 
allowed, most likely you are facing limitations of your backend.
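
[Editor's note: to make the distinction concrete, max_conns is a parameter of 
the server directive inside an upstream block; the plain-URL form of proxy_pass 
simply has no per-server connection cap. Addresses below are illustrative:]

```nginx
upstream backend {
    server 10.0.0.1:8080 max_conns=100;  # at most 100 concurrent connections
    server 10.0.0.2:8080 max_conns=100;
}

server {
    location / {
        proxy_pass http://backend;   # the URL form, e.g.
                                     # proxy_pass http://10.0.0.1:8080;
                                     # imposes no such limit
    }
}
```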

-- 
Maxim Dounin
http://mdounin.ru/

Re: Client certificates and check for DN?

2018-02-28 Thread Igor A. Ippolitov

Hello.

I'm not sure what you really need, but it looks like you can 
get almost the same result using a combination of map{} blocks and 
conditionals.


Something like this:

map $ssl_client_s_dn $ou_matched {
    ~OU=whatever 1;
    default 0;
}
map $ssl_client_s_dn $cn_matched {
    ~CN=whatever 1;
    default 0;
}
map $ou_matched$cn_matched $unauthed {
    ~0 1;
    default 0;
}
server {
    
    ssl_trusted_certificate path/to/public/certs;
    ssl_verify_client on;
    if ($unauthed) {return 403;}
}


On 28.02.2018 16:39, rai...@ultra-secure.de wrote:

Hi,

it seems most examples, even for Apache, assume that the 
client certificates are issued by your own CA.
In that case, you just need to check whether the certificates were issued 
by this CA; if they're not, it's game over.



However, I may have a case where the CA is a public CA and the client 
certificates need to be verified down to the correct O and OU.


How do you do this with nginx?

Something along these lines:

https://www.tbs-certificates.co.uk/FAQ/en/183.html


Best Regards
Rainer




[nginx] Generic subrequests in memory.

2018-02-28 Thread Roman Arutyunyan
details:   http://hg.nginx.org/nginx/rev/20f139e9ffa8
branches:  
changeset: 7220:20f139e9ffa8
user:  Roman Arutyunyan 
date:  Wed Feb 28 16:56:58 2018 +0300
description:
Generic subrequests in memory.

Previously, only the upstream response body could be accessed with the
NGX_HTTP_SUBREQUEST_IN_MEMORY feature.  Now any response body from a subrequest
can be saved in a memory buffer.  It is available as a single buffer in r->out
and the buffer size is configured by the subrequest_output_buffer_size
directive.

Upstream, proxy and fastcgi code used to handle the old-style feature is
removed.
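
A hypothetical configuration fragment using the new directive (the location and the size value are illustrative):

```nginx
location / {
    ssi on;
    # Allow up to 8k of a subrequest's response body to be
    # captured in memory.
    subrequest_output_buffer_size 8k;
}
```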

diffstat:

 src/http/modules/ngx_http_fastcgi_module.c|   30 --
 src/http/modules/ngx_http_proxy_module.c  |   30 --
 src/http/modules/ngx_http_ssi_filter_module.c |8 +-
 src/http/ngx_http_core_module.c   |   21 
 src/http/ngx_http_core_module.h   |2 +
 src/http/ngx_http_postpone_filter_module.c|   78 
 src/http/ngx_http_upstream.c  |  126 +-
 7 files changed, 107 insertions(+), 188 deletions(-)

diffs (428 lines):

diff -r d0d32b33167d -r 20f139e9ffa8 src/http/modules/ngx_http_fastcgi_module.c
--- a/src/http/modules/ngx_http_fastcgi_module.cThu Feb 22 17:25:43 
2018 +0300
+++ b/src/http/modules/ngx_http_fastcgi_module.cWed Feb 28 16:56:58 
2018 +0300
@@ -2512,36 +2512,6 @@ ngx_http_fastcgi_non_buffered_filter(voi
 break;
 }
 
-/* provide continuous buffer for subrequests in memory */
-
-if (r->subrequest_in_memory) {
-
-cl = u->out_bufs;
-
-if (cl) {
-buf->pos = cl->buf->pos;
-}
-
-buf->last = buf->pos;
-
-for (cl = u->out_bufs; cl; cl = cl->next) {
-ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
-   "http fastcgi in memory %p-%p %O",
-   cl->buf->pos, cl->buf->last, ngx_buf_size(cl->buf));
-
-if (buf->last == cl->buf->pos) {
-buf->last = cl->buf->last;
-continue;
-}
-
-buf->last = ngx_movemem(buf->last, cl->buf->pos,
-cl->buf->last - cl->buf->pos);
-
-cl->buf->pos = buf->last - (cl->buf->last - cl->buf->pos);
-cl->buf->last = buf->last;
-}
-}
-
 return NGX_OK;
 }
 
diff -r d0d32b33167d -r 20f139e9ffa8 src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c  Thu Feb 22 17:25:43 2018 +0300
+++ b/src/http/modules/ngx_http_proxy_module.c  Wed Feb 28 16:56:58 2018 +0300
@@ -2321,36 +2321,6 @@ ngx_http_proxy_non_buffered_chunked_filt
 return NGX_ERROR;
 }
 
-/* provide continuous buffer for subrequests in memory */
-
-if (r->subrequest_in_memory) {
-
-cl = u->out_bufs;
-
-if (cl) {
-buf->pos = cl->buf->pos;
-}
-
-buf->last = buf->pos;
-
-for (cl = u->out_bufs; cl; cl = cl->next) {
-ngx_log_debug3(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
-   "http proxy in memory %p-%p %O",
-   cl->buf->pos, cl->buf->last, ngx_buf_size(cl->buf));
-
-if (buf->last == cl->buf->pos) {
-buf->last = cl->buf->last;
-continue;
-}
-
-buf->last = ngx_movemem(buf->last, cl->buf->pos,
-cl->buf->last - cl->buf->pos);
-
-cl->buf->pos = buf->last - (cl->buf->last - cl->buf->pos);
-cl->buf->last = buf->last;
-}
-}
-
 return NGX_OK;
 }
 
diff -r d0d32b33167d -r 20f139e9ffa8 
src/http/modules/ngx_http_ssi_filter_module.c
--- a/src/http/modules/ngx_http_ssi_filter_module.c Thu Feb 22 17:25:43 
2018 +0300
+++ b/src/http/modules/ngx_http_ssi_filter_module.c Wed Feb 28 16:56:58 
2018 +0300
@@ -2231,9 +2231,11 @@ ngx_http_ssi_set_variable(ngx_http_reque
 {
 ngx_str_t  *value = data;
 
-if (r->upstream) {
-value->len = r->upstream->buffer.last - r->upstream->buffer.pos;
-value->data = r->upstream->buffer.pos;
+if (r->headers_out.status < NGX_HTTP_SPECIAL_RESPONSE
+&& r->out && r->out->buf)
+{
+value->len = r->out->buf->last - r->out->buf->pos;
+value->data = r->out->buf->pos;
 }
 
 return rc;
diff -r d0d32b33167d -r 20f139e9ffa8 src/http/ngx_http_core_module.c
--- a/src/http/ngx_http_core_module.c   Thu Feb 22 17:25:43 2018 +0300
+++ b/src/http/ngx_http_core_module.c   Wed Feb 28 16:56:58 2018 +0300
@@ -399,6 +399,13 @@ static ngx_command_t  ngx_http_core_comm
   offsetof(ngx_http_core_loc_conf_t, sendfile_max_chunk),
   NULL },
 
+{ ngx_string("subrequest_output_buffer_size"),
+  NGX_HTTP_MAIN_CONF|NGX_HTTP_SRV_CONF|NGX_HTTP_LOC_CONF|NGX_CONF_TAKE1,
+  ngx_conf_set_size_slot,
+  NGX_HTTP_LOC_CONF_OFFSET,
+  

Re: fsync()-in webdav PUT

2018-02-28 Thread Maxim Dounin
Hello!

On Wed, Feb 28, 2018 at 10:30:08AM +0100, Nagy, Attila wrote:

> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
> >
> >> Now, that nginx supports running threads, are there plans to convert at
> >> least DAV PUTs into it's own thread(pool), so make it possible to do
> >> non-blocking (from nginx's event loop PoV) fsync on the uploaded file?
> > No, there are no such plans.
> >
> > (Also, trying to do fsync() might not be the best idea even in
> > threads.  A reliable server might be a better option.)
> >
> What do you mean by a reliable server?
> I want to make sure when the HTTP operation returns, the file is on the 
> disk, not just in a buffer waiting for an indefinite amount of time to 
> be flushed.
> This is what fsync is for.

The question here is - why you want the file to be on disk, and 
not just in a buffer?  Because you expect the server to die in a 
few seconds without flushing the file to disk?  How probable it 
is, compared to the probability of the disk to die?  A more 
reliable server can make this probability negligible, hence the 
suggestion.

(Also, another question is what "on the disk" means from a physical 
point of view.  In many cases this in fact means "somewhere in the 
disk buffers", and a power outage can easily result in the file 
being not accessible even after fsync().)

> Why is doing this in a thread not a good idea? It wouldn't block nginx 
> that way.

Because even in threads, fsync() is likely to cause performance 
degradation.  It might be a better idea to let the OS manage 
buffers instead.

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Files still on disc after inactive time

2018-02-28 Thread Maxim Dounin
Hello!

On Wed, Feb 28, 2018 at 05:22:14AM -0500, Andrzej Walas wrote:

> Can you answer?

The last recommendation you were given is to find out who killed the 
nginx worker process and why, see here:

http://mailman.nginx.org/pipermail/nginx/2018-February/055648.html

If you think nginx processes are no longer killed, please make 
sure it is indeed the case: that is, make sure you've stopped 
nginx (make sure "ps -ef | grep nginx" shows no nginx processes), 
started it again (record "ps -ef | grep nginx" output here), and 
you've started to see "ignore long locked" messages after it, and 
no any critical / alert messages in between.  Additionally, 
compare "ps -ef | grep nginx" output with what you've got right 
after start - to make sure there are the same worker processes, 
and no processes were lost or restarted.

You may want to share all intermediate results of these steps here 
for us to make sure you've done it right.

If any of these steps indicate that nginx processes are still 
killed, consider further investigating the reasons.

If these steps demonstrate that "ignore long locked" messages 
appear without any crashes, consider testing various other things 
to further isolate the cause.  In particular, if you use http2, 
try disabling it to see if it helps.  If it does, we need debug 
logs of all requests to a particular resource since nginx start till 
the "ignore long locked" message.  Further information on how to 
configure debug logging can be found here:

http://nginx.org/en/docs/debugging_log.html

Note though that enabling debug logging will result in a lot of 
logs, and obtaining required logs with "inactive" set to 1d might 
not be trivial as you'll have to store at least the whole day of 
debug logs till the "ignore long locked" message will appear.
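
As a sketch (an nginx binary built with --with-debug is required; the path and address are illustrative), debug output can be scoped to a single client with debug_connection to keep the log volume manageable:

```nginx
error_log /var/log/nginx/error.log;

events {
    # Debug-level logging only for connections from this client;
    # other connections use the level set by error_log above.
    debug_connection 192.0.2.1;
}
```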

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Client certificates and check for DN?

2018-02-28 Thread rainer

Hi,

Most examples, even for Apache, seem to assume that the client 
certificates are issued by your own CA.
In this case, you just need to check whether your certificates were issued by 
this CA - and if they're not, it's game over.



However, I may have a case where the CA is a public CA and the client 
certificates need to be verified down to the correct O and OU.


How do you do this with nginx?

Something along these lines:

https://www.tbs-certificates.co.uk/FAQ/en/183.html


Best Regards
Rainer
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


proxy_pass and concurrent connections

2018-02-28 Thread Дилян Палаузов

Hello,

when I try to enter a bug at https://trac.nginx.org/nginx/newticket#ticket, 
choose 1.12.x as the version, and submit, the system rejects the ticket with the 
message:

Warning: The ticket field 'nginx_version' is invalid: nginx_version is required

And here is the actual question:

proxy_pass can accept an URL or a server group. When a server group (upstream) 
is defined, then with max_conns= the maximum number of simultaneous connections 
can be specified.

The documentation of proxy_pass should clarify whether, by default with a URL, only 
sequential connections are allowed, and whether defining an upstream is the only way to 
introduce parallelism towards the proxied server.

Regards
  Dilian
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


[njs] Fixed String.prototype.toUTF8() function.

2018-02-28 Thread Igor Sysoev
details:   http://hg.nginx.org/njs/rev/ab1f67b69707
branches:  
changeset: 453:ab1f67b69707
user:  Igor Sysoev 
date:  Wed Feb 28 16:20:11 2018 +0300
description:
Fixed String.prototype.toUTF8() function.

A byte string returned by String.prototype.toUTF8() had length equal
to its size so the string can be processed later as an ASCII string.

diffstat:

 njs/njs_string.c |  5 +
 njs/test/njs_unit_test.c |  9 +
 2 files changed, 14 insertions(+), 0 deletions(-)

diffs (34 lines):

diff -r 0f1c3efcd894 -r ab1f67b69707 njs/njs_string.c
--- a/njs/njs_string.c  Tue Feb 27 14:11:00 2018 +0300
+++ b/njs/njs_string.c  Wed Feb 28 16:20:11 2018 +0300
@@ -1051,6 +1051,11 @@ njs_string_slice(njs_vm_t *vm, njs_value
 start += slice->start;
 size = slice->length;
 
+if (string->length == 0) {
+/* Byte string. */
+length = 0;
+}
+
 } else {
 /* UTF-8 string. */
 end = start + string->size;
diff -r 0f1c3efcd894 -r ab1f67b69707 njs/test/njs_unit_test.c
--- a/njs/test/njs_unit_test.c  Tue Feb 27 14:11:00 2018 +0300
+++ b/njs/test/njs_unit_test.c  Wed Feb 28 16:20:11 2018 +0300
@@ -3529,6 +3529,15 @@ static njs_unit_test_t  njs_test[] =
 { nxt_string("'α'.toUTF8()[0]"),
   nxt_string("\xCE") },
 
+{ nxt_string("/^\\x80$/.test('\\x80'.toBytes())"),
+  nxt_string("true") },
+
+{ nxt_string("/^\\xC2\\x80$/.test('\\x80'.toUTF8())"),
+  nxt_string("true") },
+
+{ nxt_string("'α'.toUTF8().toBytes()"),
+  nxt_string("α") },
+
 { nxt_string("var a = 'a'.toBytes() + 'α'; a + a.length"),
   nxt_string("aα3") },
 
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: Questions about http2 push

2018-02-28 Thread Nick Lavlinsky - Method Lab

On 28.02.2018 12:59, S.A.N wrote:

If a resource delivered as a push response is used on the page and the
response had HTTP caching headers, does the browser move that resource
into the HTTP cache?


Yes.

When I experimented with push responses, browsers did not move pushed
resources into the HTTP cache. Did you test only in the latest Chrome;
how do other browsers behave?
I just tested in Chrome 66 and FF 58.0.2: everything works correctly;
resources that were pushed and used are cached.



Posted at Nginx Forum: 
https://forum.nginx.org/read.php?21,278784,278824#msg-278824

___
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru



--

Best regards,
Nikolay Lavlinsky,
Method Lab: doing it right!
www.methodlab.ru
+7 (499) 519-00-12

___
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru

Re: Files still on disc after inactive time

2018-02-28 Thread Andrzej Walas
Can you answer?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,278589,278826#msg-278826

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: fsync()-in webdav PUT

2018-02-28 Thread Aziz Rozyev
While it’s not clear why one may need to flush the data on each HTTP operation,
I can imagine what performance degradation that may lead to. 

Unless it’s some kind of funny clustering among nodes, I wouldn't care much
where the actual data is; RAM should still be much faster than disk I/O.


br,
Aziz.





> On 28 Feb 2018, at 12:30, Nagy, Attila  wrote:
> 
> On 02/27/2018 02:24 PM, Maxim Dounin wrote:
>> 
>>> Now, that nginx supports running threads, are there plans to convert at
>>> least DAV PUTs into it's own thread(pool), so make it possible to do
>>> non-blocking (from nginx's event loop PoV) fsync on the uploaded file?
>> No, there are no such plans.
>> 
>> (Also, trying to do fsync() might not be the best idea even in
>> threads.  A reliable server might be a better option.)
>> 
> What do you mean by a reliable server?
> I want to make sure when the HTTP operation returns, the file is on the disk, 
> not just in a buffer waiting for an indefinite amount of time to be flushed.
> This is what fsync is for.
> 
> Why is doing this in a thread not a good idea? It wouldn't block nginx that 
> way.
> ___
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Questions about http2 push

2018-02-28 Thread S.A.N
> The push cache is cleared when the connection is closed, but every 
> item is placed into the http cache on first use by the browser, so 
> everything is fine.
> More details here: 
> https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/

If a resource delivered as a push response is used on the page and the
response had HTTP caching headers, does the browser move that resource
into the HTTP cache?
When I experimented with push responses, browsers did not move pushed
resources into the HTTP cache. Did you test only in the latest Chrome;
how do other browsers behave?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?21,278784,278824#msg-278824

___
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru

Re: Questions about http2 push

2018-02-28 Thread Nick Lavlinsky - Method Lab

On 28.02.2018 11:40, S.A.N wrote:

I did not quite understand your words about "non-cacheable content".

According to the HTTP/2 specification, the browser can cache push responses
only in a separate per-connection cache (look at the connection_id in
devtools); after the connection is closed that cache is cleared.
Or am I wrong, and browsers store push responses in the shared cache, so
they can later be used across different connections and revalidated?


No, there are different caches: the push cache and the http cache. On the
contrary, the specification recommends using cacheable responses for push
(with a push, all headers arrive as usual). The push cache is cleared when
the connection is closed, but every item is placed into the http cache on
first use by the browser, so everything is fine.
More details here: 
https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/



The use case is simple: replacing inline CSS, which blocks page
rendering, as well as any resources on the critical rendering path
of the page.

For these purposes it is better to use separate requests with HTTP caching
headers set for a year, and to invalidate that cache only by changing the URL.

Push does the same thing, only without waiting for the request (saving up to 1 RTT).
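
As an illustrative nginx fragment (the asset paths are hypothetical; http2_push is available since nginx 1.13.9), pushing critical rendering path resources alongside the page saves the round trip the browser would otherwise spend requesting them:

```nginx
server {
    listen 443 ssl http2;

    location = /index.html {
        # Push render-blocking assets together with the page.
        http2_push /css/critical.css;
        http2_push /js/app.js;
    }
}
```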

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?21,278784,278821#msg-278821

___
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru



--

Best regards,
Nikolay Lavlinsky,
Method Lab: doing it right!
www.methodlab.ru
+7 (499) 519-00-12

___
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru

Re: Questions about http2 push

2018-02-28 Thread S.A.N
> I did not quite understand your words about "non-cacheable content".

According to the HTTP/2 specification, the browser can cache push responses
only in a separate per-connection cache (look at the connection_id in
devtools); after the connection is closed that cache is cleared.
Or am I wrong, and browsers store push responses in the shared cache, so
they can later be used across different connections and revalidated?
 
> The use case is simple: replacing inline CSS, which blocks page 
> rendering, as well as any resources on the critical rendering path
> of the page.

For these purposes it is better to use separate requests with HTTP caching
headers set for a year, and to invalidate that cache only by changing the URL.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?21,278784,278821#msg-278821

___
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru