Re: Nginx mail proxy

2018-03-02 Thread Maxim Dounin
Hello!

On Fri, Mar 02, 2018 at 09:54:31AM -0500, peanky wrote:

> > Because the nginx smtp proxy is designed to protect / balance your own 
> > smtp backends. If you want to proxy to external smtp servers, 
> > consider using other solutions.
> 
> Thank you for the answer!
> 1. What is the difference between "my smtp" and "3rd party smtp" from a
> technical point of view?

The difference lies in the assumptions made during development, and 
in the solutions implemented according to those assumptions.  The most 
obvious ones, as already mentioned in this thread, are:

- you don't need to bother with authenticating to a backend, but 
  can use XCLIENT instead;

- you don't need to use SSL to your backends, and can assume 
  secure internal network instead.

Others include various protocol limitations when it comes to 
talking to backends (some exotic yet valid responses might not be 
recognized properly), and lack of various negotiations - e.g., 
SMTP pipelining must be supported by the backend if you list it in 
the smtp_capabilities.
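For illustration, a minimal sketch of the kind of deployment these assumptions describe (hostnames, ports and the auth endpoint are placeholders, not a recommendation):

```nginx
mail {
    server_name  mail.example.com;
    auth_http    127.0.0.1:9000/auth;   # your own auth service

    server {
        listen    25;
        protocol  smtp;

        # pass the client address to the backend via XCLIENT
        # instead of authenticating to it
        xclient   on;

        # every capability listed here must actually be supported
        # by the backend, e.g. PIPELINING
        smtp_capabilities  PIPELINING  "SIZE 10485760";
    }
}
```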

> 2. Which other solutions can you imagine? It's very interesting!

This depends on what you are trying to do.  In some basic cases a 
TCP proxy as provided by the nginx stream module might do the 
trick.  In others, a properly configured SMTP server will be 
enough.
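In the basic case, the stream-module variant amounts to a plain TCP proxy in front of an SMTP server; a minimal sketch (addresses are placeholders):

```nginx
stream {
    server {
        listen      25;
        # plain TCP proxying: nginx does no SMTP parsing,
        # authentication or capability negotiation here
        proxy_pass  backend-smtp.example.com:25;
    }
}
```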

> 3. I've heard that "the nginx mail module supports only non-SSL backends".
> Is that true?

Yes.

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: NTLM sharepoint when use nginx reverse proxy

2018-03-02 Thread Francis Daly
On Fri, Mar 02, 2018 at 05:30:00AM -0500, sonpg wrote:

Hi there,

> my design is: enduser --> nginx --> sites (sharepoint site: 443; web: 80,
> 443)
> if the server listens on 80, it will redirect to 443

That seems generally sensible.

> I tried to use a stream block, but it can't use the same port.

Ah: you have one nginx, but with one "stream { server { listen 80; } }"
and also one "http { server { listen 80; } }".

Yes, that will not work. (And is not a case I had imagined, when I sent
the previous mail.)

If you use both stream and http, they cannot both listen on the same ip:port.

You use "http" because you want nginx to reverse-proxy one or more
web sites. You use "stream" because you want nginx to reverse-proxy
one ntlm-authentication web site, and you know that nginx does not
reverse-proxy ntlm.

You use "stream" to send all inbound traffic to a specific backend server,
in order to get around nginx's lack of ntlm support. You can do that,
but you can not also use "http" on the same port, because that would
want to handle the same inbound traffic.

So you must choose to stop supporting the ntlm web site, or to stop
supporting more-than-one web site, or to use something other than nginx.

(Or to put the ntlm stream listener and the http listener on different
ip:ports -- you might be able to use multiple IP addresses, depending
on your setup.)
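A sketch of that last option, assuming two addresses are available (the IPs and backend names are hypothetical):

```nginx
# NTLM SharePoint site: opaque TCP passthrough on one address
stream {
    server {
        listen      192.0.2.1:443;
        proxy_pass  sharepoint.internal:443;
    }
}

# all other web sites: ordinary HTTP reverse proxy on the other address
http {
    server {
        listen       192.0.2.2:80;
        server_name  www.example.com;

        location / {
            proxy_pass  http://web.internal;
        }
    }
}
```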

f
-- 
Francis Daly        fran...@daoine.org


Re: fsync()-in webdav PUT

2018-03-02 Thread Valery Kholodkov

On 02-03-18 17:06, Maxim Dounin wrote:

>>> The question here is - why you want the file to be on disk, and
>>> not just in a buffer?  Because you expect the server to die in a
>>> few seconds without flushing the file to disk?  How probable it
>>> is, compared to the probability of the disk to die?  A more
>>> reliable server can make this probability negligible, hence the
>>> suggestion.
>>
>> Because the files I upload to nginx servers are important to me. Please
>> step back a little and forget that we are talking about nginx or an HTTP
>> server.
>
> If files are indeed important to you, you have to keep a second
> copy in a different location, or even in multiple different
> locations.  Trying to do fsync() won't save your data in a lot of
> quite realistic scenarios, but certainly will imply performance
> (and complexity, from the nginx code point of view) costs.


But do you understand that even in a replicated setup the time interval 
before data reaches permanent storage might be significantly long, and, 
according to your assumptions, is random and unpredictable?


In other words, without fsync() it's not possible to make any judgments 
about the consistency of your data; consequently, it's not possible to 
implement a program that tells whether your data is consistent or not.


Don't you think that your arguments are fundamentally flawed, because you 
insist on the probabilistic nature of the problem, while it is actually 
deterministic?


By the way, even LevelDB has options for synchronous writes:

https://github.com/google/leveldb/blob/master/doc/index.md#synchronous-writes

and it implements them with fsync().

Bitcoin Core varies these options depending on operation mode (see 
src/validation.cpp, src/txdb.cpp, src/dbwrapper.cpp).


Oh, I forgot: Bitcoin is nonsense...

val


Re: fsync()-in webdav PUT

2018-03-02 Thread Maxim Dounin
Hello!

On Fri, Mar 02, 2018 at 08:47:17PM +0100, Valery Kholodkov wrote:

> On 02-03-18 17:06, Maxim Dounin wrote:
> >>> The question here is - why you want the file to be on disk, and
> >>> not just in a buffer?  Because you expect the server to die in a
> >>> few seconds without flushing the file to disk?  How probable it
> >>> is, compared to the probability of the disk to die?  A more
> >>> reliable server can make this probability negligible, hence the
> >>> suggestion.
> >> Because the files I upload to nginx servers are important to me. Please
> >> step back a little and forget that we are talking about nginx or an HTTP
> >> server.
> >
> > If files are indeed important to you, you have to keep a second
> > copy in a different location, or even in multiple different
> > locations.  Trying to do fsync() won't save your data in a lot of
> > quite realistic scenarios, but certainly will imply performance
> > (and complexity, from the nginx code point of view) costs.
> 
> But do you understand that even in a replicated setup the time interval 
> before data reaches permanent storage might be significantly long, and, 
> according to your assumptions, is random and unpredictable?
> 
> In other words, without fsync() it's not possible to make any judgments 
> about the consistency of your data; consequently, it's not possible to 
> implement a program that tells whether your data is consistent or not.
> 
> Don't you think that your arguments are fundamentally flawed, because you 
> insist on the probabilistic nature of the problem, while it is actually 
> deterministic?

In no particular order:

1. There are no "my assumptions".

2. This is not about consistency, it's about fault tolerance.  
Everything is consistent unless a server crash happens.

3. Using fsync() can increase the chance that your data will 
survive a server crash / power outage.  It doesn't matter in many 
other scenarios though, for example, if your disk dies.

4. Trying to insist that reliability is deterministic looks unwise 
to me, but it's up to you to insist on anything you want.

-- 
Maxim Dounin
http://mdounin.ru/


Re: fsync()-in webdav PUT

2018-03-02 Thread Nagy, Attila

On 02/28/2018 03:08 PM, Maxim Dounin wrote:

> The question here is - why you want the file to be on disk, and
> not just in a buffer?  Because you expect the server to die in a
> few seconds without flushing the file to disk?  How probable it
> is, compared to the probability of the disk to die?  A more
> reliable server can make this probability negligible, hence the
> suggestion.
Because the files I upload to nginx servers are important to me. Please 
step back a little and forget that we are talking about nginx or an HTTP 
server.

We have data which we want to write somewhere.
Check any of the database servers. Would you accept a DB server which 
can lose confirmed data, or which couldn't be configured so that whatever 
operation you use to modify or put data into it 
(write/insert/update/commit) is reliably written by the time you receive 
the acknowledgement?
Now try to use this example. I would like to use nginx to store files. 
That's what HTTP PUT is for.
Of course I'm not expecting that the server will die every day. But when 
that happens, I want to make sure that the confirmed data is there.
Let's take a look at various object storage systems, like Ceph. Would 
you accept a confirmed write being lost there? They go to a great deal 
of trouble to make that impossible.
Now try to imagine that somebody doesn't need the complexity of, for 
example, Ceph, but wants to store data with plain HTTP. And there you 
are. If you store data, then you want to make sure the data is there.

If you don't, why do you store it at all?


> (Also, another question is what "on the disk" means from a physical
> point of view.  In many cases this in fact means "somewhere in the
> disk buffers", and a power outage can easily result in the file
> being not accessible even after fsync().)

Not with good software/hardware. (And it doesn't really have to be super 
good, just average.)





>> Why doing this in a thread is not a good idea? It wouldn't block nginx
>> that way.
>
> Because even in threads, fsync() is likely to cause performance
> degradation.  It might be a better idea to let the OS manage
> buffers instead.

Sure, it will cause some (not much, BTW, in a good configuration). But if 
my primary goal is to store files reliably, why should I care?
I can solve that by using SSDs for logs, BBWCs and a lot more things. But 
as things stand, I can't tell whether an HTTP PUT really succeeded, will 
succeed in a few seconds, or will fail badly.




Re: fsync()-in webdav PUT

2018-03-02 Thread Nagy, Attila

On 02/28/2018 11:33 PM, Peter Booth wrote:
> This discussion is interesting, educational, and thought provoking.
> Web architects only learn "the right way" by first doing things "the
> wrong way" and seeing what happens. Attila and Valery asked questions
> that sound logical, and I think there's value in exploring what would
> happen if their suggestions were implemented.
>
> First caveat - nginx is deployed in all manner of different scenarios
> on different hardware and operating systems. Physical servers and VMs
> behave very differently, as do local and remote storage. When an
> application writes to NFS-mounted storage there's no guarantee that
> even an fsync will correctly enforce a write barrier. Still, if we
> consider real numbers:


>   * On current model quad-socket hosts, nginx can support well over 1
>     million requests per second (see TechEmpower benchmarks).
>   * On the same hardware, a web app that writes to a PostgreSQL DB can
>     do at least a few thousand writes per second.
>   * A SATA drive might support 300 write IOPS, whilst an SSD will
>     support 100x that.

> What this means is that doing fully synchronous writes can reduce your
> potential throughput by a factor of 100 or more. So it’s not a great
> way to ensure consistency.
>
> But there are cheaper ways to achieve the same consistency and
> reliability characteristics:


>   * If you are using Linux then your reads and writes will occur
>     through the page cache - so the actual disk itself really doesn’t
>     matter (whilst your host is up).
>   * If you want to protect against loss of a physical disk then use RAID.
>   * If you want to protect against a random power failure then use
>     drives with battery-backed caches, so writes will get persisted
>     when a server restarts after a power failure.

Sorry, but this point shows that you don't understand the problem. A 
BBWC alone won't save you from a random power failure, because the data 
is still in RAM!
A BBWC will save you when you do an fsync at the end of the write (and 
that fsync will still write to RAM, but it will be the controller's RAM, 
which is protected by a battery).
But nginx doesn't do this today. And that's what this discussion is all 
about...



>   * If you want to protect against a crazy person hitting your server
>     with an axe then write to two servers ...


And still you won't have it reliably on your disks.


> *But the bottom line is separation of concerns.* Nginx should not use
> fsync because it isn’t nginx's business.


Please suggest at least one working solution which is compatible with 
nginx's asynchronous architecture and ensures that a successful HTTP 
PUT means the data was written to reliable storage.


There are several filesystems which can be switched to "fsync by default", 
but that will fail miserably, because nginx does the writes in the same 
process, in the same thread. That could be solved by doing at least the 
fsyncs in different threads, so they wouldn't block the main 
thread.


BTW, I'm not proposing this to be the default. It should be an optional 
setting, so if somebody wants to keep the current behaviour, they 
could do that.

Re: fsync()-in webdav PUT

2018-03-02 Thread Nagy, Attila

On 02/28/2018 11:04 AM, Aziz Rozyev wrote:

> While it’s not clear why one may need to flush the data on each HTTP
> operation, I can imagine what performance degradation that may lead to.
I store data on HTTP servers in a distributed manner and have a catalog 
of where each file is. If I get back a successful HTTP response for a 
PUT operation, I want it to be true, so the file must be on stable storage.
If I just write it to a buffer and something happens to the machine 
while the data is still in the buffer, I can't trust that response, and I 
have to verify from time to time that the file is there in its entirety, 
which is much, much more of a performance degradation.
With clever file systems and/or good hardware (battery backed write 
cache) it won't cost you much.
Anyways, it's completely irrelevant how fast you can write to RAM. The 
task here is to write reliably. And you can make it fast if you want 
with software and hardware.




> If it’s not some kind of funny clustering among nodes, I wouldn't care
> much where the actual data is; RAM should still be much faster than disk I/O.

Let's turn the question around: if you write to RAM, you can't be 
sure that the file really made its way to storage.
Why do you upload files to an HTTP server if you don't care whether they 
are there or not?

You could use /dev/null too. It's even faster...
Or just turn your upload_file() function into a dummy "return immediately" 
call.

That's even faster. :)


Re: Nginx mail proxy

2018-03-02 Thread peanky
> Because the nginx smtp proxy is designed to protect / balance your own 
> smtp backends. If you want to proxy to external smtp servers, 
> consider using other solutions.

Thank you for the answer!
1. What is the difference between "my smtp" and "3rd party smtp" from a
technical point of view?
2. Which other solutions can you imagine? It's very interesting!
3. I've heard that "the nginx mail module supports only non-SSL backends".
Is that true?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,257510,278897#msg-278897



Re: Nginx Directory Autoindex

2018-03-02 Thread James

On 01/03/2018 11:54, Luciano Mannucci wrote:


> It is probably trivial to add an option
> to the configuration, something like "autoindex_reverse_sort" (set to
> off by default), though I don't know if it would be useful for other
> nginx users...


I'd like the option of order by date, "ls -t", "ls -rt".  This helps 
because the text order of numbers is "10 11 8 9", see:


http://nginx.org/download/

The latest version is two-thirds of the way down, and if I were looking 
for an update then 1.13.10 would not be next to the current 1.13.9.



Re: fsync()-in webdav PUT

2018-03-02 Thread Aziz Rozyev
Attila,

The man page quote is related to Valery’s argument that fsync won't affect 
performance; forget it.

It’s nonsense because you’re trying to solve the reliability problem at a 
different level. It has already been suggested here multiple times, by 
Maxim and Paul, that it’s better to invest in good server/storage 
infrastructure instead of fsyncing each PUT.

Regarding the DB server analogy: you’re still not safe from power outages 
as long as your transaction isn’t in the transaction log.

If you still insist on syncing and are ready to sacrifice your time, try 
mounting the file system with the ‘sync’ option.


br,
Aziz.





> On 2 Mar 2018, at 12:12, Nagy, Attila  wrote:
> 
> On 02/28/2018 03:08 PM, Maxim Dounin wrote:
>> The question here is - why you want the file to be on disk, and
>> not just in a buffer?  Because you expect the server to die in a
>> few seconds without flushing the file to disk?  How probable it
>> is, compared to the probability of the disk to die?  A more
>> reliable server can make this probability negligible, hence the
>> suggestion.
> Because the files I upload to nginx servers are important to me. Please step 
> back a little and forget that we are talking about nginx or an HTTP server.
> We have data which we want to write to somewhere.
> Check any of the database servers. Would you accept a DB server which can 
> lose confirmed data, or which couldn't be configured so that whatever 
> operation you use to modify or put data into it (write/insert/update/commit) 
> is reliably written by the time you receive the acknowledgement?
> Now try to use this example. I would like to use nginx to store files. That's 
> what HTTP PUT is for.
> Of course I'm not expecting that the server will die every day. But when that 
> happens, I want to make sure that the confirmed data is there.
> Let's take a look at various object storage systems, like Ceph. Would you 
> accept a confirmed write being lost there? They go to a great deal of 
> trouble to make that impossible.
> Now try to imagine that somebody doesn't need the complexity of -for example- 
> ceph, but wants to store data with plain HTTP. And you got there. If you 
> store data, then you want to make sure the data is there.
> If you don't, why do you store it anyways?
> 
> >> (Also, another question is what "on the disk" means from a physical
>> point of view.  In many cases this in fact means "somewhere in the
>> disk buffers", and a power outage can easily result in the file
>> being not accessible even after fsync().)
> Not with good software/hardware. (and it doesn't really have to be super 
> good, but average)
> 
>> 
> >>> Why doing this in a thread is not a good idea? It wouldn't block nginx
> >>> that way.
>> Because even in threads, fsync() is likely to cause performance
>> degradation.  It might be a better idea to let the OS manage
>> buffers instead.
>> 
> Sure, it will cause some (not much, BTW, in a good configuration). But if my 
> primary goal is to store files reliably, why should I care?
> I can solve that by using SSDs for logs, BBWCs and a lot more things. But as 
> things stand, I can't tell whether an HTTP PUT really succeeded, will succeed 
> in a few seconds, or will fail badly.
> 


Re: fsync()-in webdav PUT

2018-03-02 Thread Nagy, Attila

On 02/28/2018 10:41 PM, Aziz Rozyev wrote:

Without fsyncing a file's data and metadata, a client will receive a positive 
reply before the data has reached the storage, thus leaving a non-zero 
probability that the states of the two systems involved in a web transaction 
end up inconsistent.


I understand why one may need consistency, but doing so with fsyncing is 
nonsense.

Here is what man page says in that regard:


    fsync() transfers ("flushes") all modified in-core data of (i.e.,
    modified buffer cache pages for) the file referred to by the file
    descriptor fd to the disk device (or other permanent storage device)
    so that all changed information can be retrieved even after the
    system crashed or was rebooted.  This includes writing through or
    flushing a disk cache if present.  The call blocks until the device
    reports that the transfer has completed.  It also flushes metadata
    information associated with the file (see stat(2)).



Could you please elaborate on what you mean by calling this nonsense?
Also, I don't understand why you cited the man page. It clearly says this 
is what ensures that when fsync returns, the file will be on stable storage.


What other method do you recommend if somebody wants the HTTP PUT to be 
acknowledged only after the file is safely stored?



Re: Nginx Directory Autoindex

2018-03-02 Thread Luciano Mannucci
On Fri, 2 Mar 2018 10:33:45 +
James  wrote:

> I'd like the option of order by date, "ls -t", "ls -rt".  This helps 
> because the text order of numbers is "10 11 8 9", see:
> 
> http://nginx.org/download/
Well, this is way less trivial than simply adding a flag to reverse the sort
order. It really belongs in fancyindex, which already does that:

fancyindex_default_sort

Syntax: fancyindex_default_sort [name | size | date | name_desc |
size_desc | date_desc]

Though you need to compile your own nginx to get that working (and
follow the module installation instructions :-).

Cheers,

Luciano.
-- 
 /"\ /Via A. Salaino, 7 - 20144 Milano (Italy)
 \ /  ASCII RIBBON CAMPAIGN / PHONE : +39 2 485781 FAX: +39 2 48578250
  X   AGAINST HTML MAIL/  E-MAIL: posthams...@sublink.sublink.org
 / \  AND POSTINGS/   WWW: http://www.lesassaie.IT/


Re: fsync()-in webdav PUT

2018-03-02 Thread Nagy, Attila

On 03/02/2018 11:42 AM, Aziz Rozyev wrote:

> man page quote is related to the Valery’s argument that fsync wont affect 
> performance, forget it.

Of course it affects performance. But as for how much: it depends on 
many factors. It's possible to build servers where the overall effect 
will be negligible.


> It’s nonsense because you’re trying to solve the reliability problem at a
> different level; it has been suggested here multiple times already, by
> Maxim and Paul, that it’s better to invest in good server/storage
> infrastructure instead of fsyncing each PUT.

Yes, it has been suggested multiple times; the only problem is that it's 
not true. No matter how good a server/storage you have, if you write to 
unbacked memory buffers (which nginx does), you are toast.




> Regarding the DB server analogy, you’re still not safe from power outages
> as long as your transaction isn’t in the transaction log.
>
> If you still insist on syncing and are ready to sacrifice your time, try
> mounting the file system with the ‘sync’ option.

That's what really kills performance, because of the async nature of 
nginx. That's why I'm proposing an option to do the fsync at the end of 
the PUT (or maybe even of the whole operation) in a thread (pool).


If you care about performance and reliability, that's the way it has to 
be solved.


Re: Nginx Directory Autoindex

2018-03-02 Thread James

On 02/03/2018 11:33, Luciano Mannucci wrote:


>> I'd like the option of order by date, "ls -t", "ls -rt".  This helps
>> because the text order of numbers is "10 11 8 9", see:
>>
>> http://nginx.org/download/
>
> Well, this is way less trivial than simply adding a flag to reverse the
> sort order. It really belongs in fancyindex, which already does that:
>
> ...
>
> Though you need to compile your own nginx to get that working (and
> follow the module installation instructions :-).


Perhaps I should have expressed it as: I'd like other people to sort by 
date, and that isn't going to happen unless it's easy, i.e., built in.


autoindex on | off | date | text | ... ;


James.


Re: Nginx Directory Autoindex

2018-03-02 Thread Vladimir Homutov
On Fri, Mar 02, 2018 at 02:03:36PM +, James wrote:
> On 02/03/2018 11:33, Luciano Mannucci wrote:
>
> >> I'd like the option of order by date, "ls -t", "ls -rt".  This helps
> >> because the text order of numbers is "10 11 8 9", see:
> >>
> >> http://nginx.org/download/
> > Well, this is way less trivial than simply add a flag to reverse sort
> > order. It belongs indeed to fancyindex, that already does that:
> ...
> > Though you need to compile your own nginx to get that working (and
> > follow the module installation instructions :-).
>
> Perhaps I should have expressed it as: I'd like other people to sort by
> date, and that isn't going to happen unless it's easy, i.e., built in.
>
> autoindex on | off | date | text | ... ;
>
>
Well, if you want interactive sorting, you have to do it in
JavaScript; the native autoindex module is able to output json/jsonp,
so a simple JS script may be used to implement any desired behaviour.
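For example (the location name is illustrative), the JSON listing can be enabled with:

```nginx
location /downloads/ {
    autoindex         on;
    autoindex_format  json;   # or jsonp; available since nginx 1.7.9
    # a small client-side script can then fetch this listing and sort
    # the entries by mtime, name, or any other field
}
```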




Re: Nginx Directory Autoindex

2018-03-02 Thread James

On 02/03/2018 14:17, Vladimir Homutov wrote:


>> Perhaps I should have expressed it as: I'd like other people to sort by
>> date, and that isn't going to happen unless it's easy, i.e., built in.
>>
>> autoindex on | off | date | text | ... ;
>
> Well, if you want interactive sorting, you have to do it in
> JavaScript; the native autoindex module is able to output json/jsonp,
> so a simple js script may be used to implement any desired behaviour.


No, I want *other* people to sort their indices by date where 
appropriate.  It's more likely to happen if it's easy - no plugin, no 
scripting.


Having just done my monthly check for software updates on 421 projects, I 
have a reason: many are in text-sorted directory listings and would 
benefit from date ordering.



James.




Re: [PATCH] HTTP/2: make http2 server support http1

2018-03-02 Thread Valentin V. Bartenev
On Friday 02 March 2018 15:53:07 Haitao Lv wrote:
> # HG changeset patch
> # User 吕海涛 
> # Date 1519976498 -28800
> #  Fri Mar 02 15:41:38 2018 +0800
> # Node ID 200955343460c4726015180f20c03e31c0b35ff6
> # Parent  81fae70d6cb81c67607931ec3ecc585a609c97e0
> make http2 server support http1
> 
[..]

It doesn't look like a useful feature. 
Could you please explain the use cases?


> diff -r 81fae70d6cb8 -r 200955343460 src/http/ngx_http.c
> --- a/src/http/ngx_http.c Thu Mar 01 20:25:50 2018 +0300
> +++ b/src/http/ngx_http.c Fri Mar 02 15:41:38 2018 +0800
[..]
> +void
> +ngx_http_v2_init_after_preface(ngx_event_t *rev, ngx_buf_t *buf)
> +{
>  ngx_connection_t  *c;
>  ngx_pool_cleanup_t*cln;
>  ngx_http_connection_t *hc;
> @@ -316,6 +323,12 @@
>  h2c->state.handler = hc->proxy_protocol ? 
> ngx_http_v2_state_proxy_protocol
>  : ngx_http_v2_state_preface;
>  
> +if (buf != NULL) {
> +ngx_memcpy(h2mcf->recv_buffer, buf->pos, buf->last - buf->pos);
> +h2c->state.buffer_used = buf->last - buf->pos;
> +h2c->state.handler = ngx_http_v2_state_head;
> +}
[..]

What if the received data is bigger than h2mcf->recv_buffer?

  wbr, Valentin V. Bartenev

___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

[nginx] Access log: support for disabling escaping (ticket #1450).

2018-03-02 Thread Vladimir Homutov
details:   http://hg.nginx.org/nginx/rev/265c29b0b8b8
branches:  
changeset: 7223:265c29b0b8b8
user:  Vladimir Homutov 
date:  Thu Mar 01 11:42:55 2018 +0300
description:
Access log: support for disabling escaping (ticket #1450).

Based on patches by Johannes Baiter 
and Calin Don.
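With this changeset applied, the new parameter can be used as follows (the format name and variables are illustrative):

```nginx
log_format  raw  escape=none  '$request "$http_user_agent"';
access_log  /var/log/nginx/access.log  raw;
```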

diffstat:

 src/http/modules/ngx_http_log_module.c |  68 ++
 src/stream/ngx_stream_log_module.c |  68 ++
 2 files changed, 120 insertions(+), 16 deletions(-)

diffs (277 lines):

diff -r 81fae70d6cb8 -r 265c29b0b8b8 src/http/modules/ngx_http_log_module.c
--- a/src/http/modules/ngx_http_log_module.cThu Mar 01 20:25:50 2018 +0300
+++ b/src/http/modules/ngx_http_log_module.cThu Mar 01 11:42:55 2018 +0300
@@ -90,6 +90,11 @@ typedef struct {
 } ngx_http_log_var_t;
 
 
+#define NGX_HTTP_LOG_ESCAPE_DEFAULT  0
+#define NGX_HTTP_LOG_ESCAPE_JSON 1
+#define NGX_HTTP_LOG_ESCAPE_NONE 2
+
+
 static void ngx_http_log_write(ngx_http_request_t *r, ngx_http_log_t *log,
 u_char *buf, size_t len);
 static ssize_t ngx_http_log_script_write(ngx_http_request_t *r,
@@ -126,7 +131,7 @@ static u_char *ngx_http_log_request_leng
 ngx_http_log_op_t *op);
 
 static ngx_int_t ngx_http_log_variable_compile(ngx_conf_t *cf,
-ngx_http_log_op_t *op, ngx_str_t *value, ngx_uint_t json);
+ngx_http_log_op_t *op, ngx_str_t *value, ngx_uint_t escape);
 static size_t ngx_http_log_variable_getlen(ngx_http_request_t *r,
 uintptr_t data);
 static u_char *ngx_http_log_variable(ngx_http_request_t *r, u_char *buf,
@@ -136,6 +141,10 @@ static size_t ngx_http_log_json_variable
 uintptr_t data);
 static u_char *ngx_http_log_json_variable(ngx_http_request_t *r, u_char *buf,
 ngx_http_log_op_t *op);
+static size_t ngx_http_log_unescaped_variable_getlen(ngx_http_request_t *r,
+uintptr_t data);
+static u_char *ngx_http_log_unescaped_variable(ngx_http_request_t *r,
+u_char *buf, ngx_http_log_op_t *op);
 
 
 static void *ngx_http_log_create_main_conf(ngx_conf_t *cf);
@@ -905,7 +914,7 @@ ngx_http_log_request_length(ngx_http_req
 
 static ngx_int_t
 ngx_http_log_variable_compile(ngx_conf_t *cf, ngx_http_log_op_t *op,
-ngx_str_t *value, ngx_uint_t json)
+ngx_str_t *value, ngx_uint_t escape)
 {
 ngx_int_t  index;
 
@@ -916,11 +925,18 @@ ngx_http_log_variable_compile(ngx_conf_t
 
 op->len = 0;
 
-if (json) {
+switch (escape) {
+    case NGX_HTTP_LOG_ESCAPE_JSON:
         op->getlen = ngx_http_log_json_variable_getlen;
         op->run = ngx_http_log_json_variable;
+        break;
 
-    } else {
+    case NGX_HTTP_LOG_ESCAPE_NONE:
+        op->getlen = ngx_http_log_unescaped_variable_getlen;
+        op->run = ngx_http_log_unescaped_variable;
+        break;
+
+    default: /* NGX_HTTP_LOG_ESCAPE_DEFAULT */
         op->getlen = ngx_http_log_variable_getlen;
         op->run = ngx_http_log_variable;
     }
@@ -1073,6 +1089,39 @@ ngx_http_log_json_variable(ngx_http_requ
 }
 
 
+static size_t
+ngx_http_log_unescaped_variable_getlen(ngx_http_request_t *r, uintptr_t data)
+{
+    ngx_http_variable_value_t  *value;
+
+    value = ngx_http_get_indexed_variable(r, data);
+
+    if (value == NULL || value->not_found) {
+        return 0;
+    }
+
+    value->escape = 0;
+
+    return value->len;
+}
+
+
+static u_char *
+ngx_http_log_unescaped_variable(ngx_http_request_t *r, u_char *buf,
+    ngx_http_log_op_t *op)
+{
+    ngx_http_variable_value_t  *value;
+
+    value = ngx_http_get_indexed_variable(r, op->data);
+
+    if (value == NULL || value->not_found) {
+        return buf;
+    }
+
+    return ngx_cpymem(buf, value->data, value->len);
+}
+
+
 static void *
 ngx_http_log_create_main_conf(ngx_conf_t *cf)
 {
@@ -1536,18 +1585,21 @@ ngx_http_log_compile_format(ngx_conf_t *
     size_t               i, len;
     ngx_str_t           *value, var;
     ngx_int_t           *flush;
-    ngx_uint_t           bracket, json;
+    ngx_uint_t           bracket, escape;
     ngx_http_log_op_t   *op;
     ngx_http_log_var_t  *v;
 
-    json = 0;
+    escape = NGX_HTTP_LOG_ESCAPE_DEFAULT;
     value = args->elts;
 
     if (s < args->nelts && ngx_strncmp(value[s].data, "escape=", 7) == 0) {
         data = value[s].data + 7;
 
         if (ngx_strcmp(data, "json") == 0) {
-            json = 1;
+            escape = NGX_HTTP_LOG_ESCAPE_JSON;
+
+        } else if (ngx_strcmp(data, "none") == 0) {
+            escape = NGX_HTTP_LOG_ESCAPE_NONE;
 
         } else if (ngx_strcmp(data, "default") != 0) {
             ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
@@ -1636,7 +1688,7 @@ ngx_http_log_compile_format(ngx_conf_t *
                 }
             }
 
-            if (ngx_http_log_variable_compile(cf, op, &var, json)
+            if (ngx_http_log_variable_compile(cf, op, &var, escape)
                 != NGX_OK)
             {
                 return NGX_CONF_ERROR;
diff -r 81fae70d6cb8 -r 
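The patch above extends the log_format "escape=" parameter with a "none" value, alongside the existing "default" and "json". If applied, usage would look roughly like this (the format name and log path are illustrative):

```nginx
# "escape=none" writes variable values into the log verbatim,
# with no escaping; format name and path are examples only
log_format raw escape=none '$remote_addr "$request" "$http_user_agent"';
access_log /var/log/nginx/access-raw.log raw;
```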

Re: WebSockets over HTTP/2

2018-03-02 Thread S.A.N
> What are the advantages of HTTP/2 for you? Have you tried running
> benchmarks comparing HTTP/2 with HTTP/1.1?

For us the only advantage is multiplexing; there are no other benefits.
We simply have many parallel AJAX requests. We have not used HTTP/2
push; it may well be useful too, but we have no practical experience
with it.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?21,278858,278884#msg-278884

___
nginx-ru mailing list
nginx-ru@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-ru

Re: WebSockets over HTTP/2

2018-03-02 Thread Валентин Бартенев
On Friday, 2 March 2018 13:12:31 MSK S.A.N wrote:
> > What are the advantages of HTTP/2 for you? Have you tried running
> > benchmarks comparing HTTP/2 with HTTP/1.1?
> 
> For us the only advantage is multiplexing; there are no other benefits.
> We simply have many parallel AJAX requests. We have not used HTTP/2
> push; it may well be useful too, but we have no practical experience
> with it.
> 

Multiplexing at the application level brings few advantages by itself
and more drawbacks.  Several TCP connections perform better than the
same number of streams multiplexed inside a single TCP connection, as is
done in HTTP/2 (and HTTP/2 does it particularly badly at that).

--
Валентин Бартенев

Re: WebSockets over HTTP/2

2018-03-02 Thread S.A.N
> Multiplexing at the application level brings few advantages by itself
> and more drawbacks.

Judge for yourself: at the application (browser) level there are
currently two options:

HTTP/1.x - a limit of 8 open sockets per host; all requests line up in
those 8 queues.
HTTP/2   - all requests are sent over a single socket, responses arrive
asynchronously, and there is almost no limit on the number of requests
and almost no queueing.

Our client is a web application that aggregates data collected from many
AJAX requests to a single host.
We often need to make 20-30 parallel HTTP GET requests to gather all the
required data. Without H2 we ended up waiting in a sequential queue,
because that many requests do not fit into 8 sockets; with H2 there is
no such problem. This matters to us because many GET responses are
cached at the nginx level, and it is wasteful for us to wait in a queue
just to get a response from the nginx cache.

P.S.
Incidentally, we also need to do the same thing (many parallel GET
requests) on the backend, between backend applications; I have filed an
issue for NGINX Unit:
https://github.com/nginx/unit/issues/81
Is there a realistic chance that this will ever be implemented?

Thank you.
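For completeness, the multiplexing described above becomes available to browsers once HTTP/2 is enabled on the server side; a minimal sketch, assuming nginx 1.9.5+ built with ngx_http_v2_module (server name, certificate paths and upstream name are illustrative):

```nginx
server {
    # "http2" on the listen socket lets browsers multiplex many
    # parallel GET requests over one TCP connection
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://backend;
    }
}
```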

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?21,278858,278891#msg-278891


Re: WebSockets over HTTP/2

2018-03-02 Thread Konstantin Tokarev


02.03.2018, 16:16, "S.A.N":
>>  Multiplexing at the application level brings few advantages by
>>  itself and more drawbacks.
>
> Judge for yourself: at the application (browser) level there are
> currently two options:
>
> HTTP/1.x - a limit of 8 open sockets per host; all requests line up in
> those 8 queues.
> HTTP/2 - all requests are sent over a single socket, responses arrive
> asynchronously, and there is almost no limit on the number of requests
> and almost no queueing.
>
> Our client is a web application that aggregates data collected from
> many AJAX requests to a single host.
> We often need to make 20-30 parallel HTTP GET requests to gather all
> the required data. Without H2 we ended up waiting in a sequential
> queue, because that many requests do not fit into 8 sockets; with H2
> there is no such problem. This matters to us because many GET
> responses are cached at the nginx level, and it is wasteful for us to
> wait in a queue just to get a response from the nginx cache.

But none of this has anything to do with WebSockets, does it?

>
> P.S.
> Incidentally, we also need to do the same thing (many parallel GET
> requests) on the backend, between backend applications; I have filed
> an issue for NGINX Unit:
> https://github.com/nginx/unit/issues/81
> Is there a realistic chance that this will ever be implemented?
>
> Thank you.
>
> Posted at Nginx Forum: 
> https://forum.nginx.org/read.php?21,278858,278891#msg-278891
>
> ___
> nginx-ru mailing list
> nginx-ru@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-ru

-- 
Regards,
Konstantin


Re: WebSockets over HTTP/2

2018-03-02 Thread S.A.N
> But none of this has anything to do with WebSockets, does it?

Yes, you are right; my answer was in the context of HTTP requests.

The benefit of running WebSocket over an H2 socket is that it would keep
open the same socket that the AJAX requests use.
We may have 20-30 AJAX requests and then 15 minutes of silence: the user
has switched to another tab, and after about 10 minutes the browser
closes our H2 socket; when the user comes back we have to open a new H2
socket for the AJAX requests.

I also have an idea to identify users by the key
$server_id$nginx_worker$connection_id; the user cannot forge these
values, and nginx would pass them to the backend in a custom header.
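A rough sketch of how such a key could be attached today: $server_id and $nginx_worker are not built-in nginx variables (they would have to be defined separately, e.g. via "set" or "map"), but similar built-ins exist, so an approximation might combine $hostname, $pid and $connection. The header name and upstream are made up for illustration:

```nginx
location /api/ {
    # $hostname, $pid and $connection are real nginx variables:
    # the machine's host name, the worker process PID, and the
    # per-worker connection serial number; "X-Client-Key" is a
    # made-up header name
    proxy_set_header X-Client-Key "$hostname.$pid.$connection";
    proxy_pass http://backend;
}
```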

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?21,278858,278893#msg-278893
