Re: Add support for buffering in scripted logs

2017-08-14 Thread Alexey Ivanov
> wrote: > >> -Original Message- >> From: nginx-devel [mailto:nginx-devel-boun...@nginx.org] On Behalf Of Alexey >> Ivanov >> Sent: Monday, August 14, 2017 9:25 PM >> To: nginx-devel@nginx.org >> Subject: Re: Add support for buffering in scripted logs

Re: Add support for buffering in scripted logs

2017-08-14 Thread Alexey Ivanov
Using syslog for that particular use case seems way more elegant, customizable, and simple. As a side bonus, you won't block the event loop on VFS operations (open/write/close). > On Aug 14, 2017, at 11:00 AM, Eran Kornblau wrote: > >> >> -Original Message- >>
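For reference, a minimal sketch of what the syslog-based approach could look like (the syslog: prefix for access_log/error_log is standard nginx since 1.7.1; the socket path, facility, and tags below are placeholder choices):

    # error_log accepts syslog: at any level; access_log goes inside http/server/location
    error_log  syslog:server=unix:/dev/log,facility=local7,tag=nginx_error warn;
    access_log syslog:server=unix:/dev/log,facility=local7,tag=nginx_access,severity=info combined;

Writes become sendto() calls on a datagram socket, so buffering and batching are delegated to the local syslog daemon rather than the worker's event loop.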

Re: coredump in 1.10.3

2017-03-13 Thread Alexey Ivanov
We have a couple of these per week. I was blaming our third-party modules, but it seems like vanilla is also affected. > On Mar 13, 2017, at 7:22 AM, George . wrote: > > Yes, for me it looks like memory corruption and really hard to guess with > only bt. > We will run with

Re: HTTP/2 upstream support

2017-01-18 Thread Alexey Ivanov
Just as a data point: why do you need that functionality? Can you describe your particular use case? > On Jan 17, 2017, at 8:37 AM, Sreekanth M via nginx-devel > wrote: > > > Is HTTP/2 proxy support planned ? > > -Sreekanth > >

Re: How to contribute fix for checking x509 extended key attrs to nginx?

2017-01-10 Thread Alexey Ivanov
On Jan 10, 2017, at 3:41 PM, Ethan Rahn via nginx-devel wrote: > > Hello, > > I noticed that nginx does not check x509v3 certificates ( in > event/ngx_event_openssl.c::ngx_ssl_get_client_verify as an example ) to see > that the optional extended key usage settings are
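For context, a rough sketch of the client-certificate setup where that check would surface (standard directives, placeholder paths); as the thread points out, the verification behind $ssl_client_verify covers chain validation only and does not consult extended key usage:

    server {
        listen 443 ssl;
        ssl_certificate         /etc/nginx/tls/server.crt;
        ssl_certificate_key     /etc/nginx/tls/server.key;
        ssl_client_certificate  /etc/nginx/tls/client-ca.crt;
        ssl_verify_client       on;

        location / {
            # Populated via ngx_ssl_get_client_verify(); EKU is not part of this check.
            add_header X-Client-Verify $ssl_client_verify;
        }
    }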

Re: Why not remove UNIX domain socket before bind

2016-12-01 Thread Alexey Ivanov
Why not just use `flock(2)` there? > On Nov 30, 2016, at 6:57 AM, Maxim Dounin wrote: > > Hello! > > On Tue, Nov 29, 2016 at 01:30:25PM -0800, Shuxin Yang wrote: > >> Is there any reason not to delete UNIX domain socket before bind? > > To name a few, deleting a

Re: [PATCH] Added the $upstream_connection variable

2016-09-12 Thread Alexey Ivanov
+1 to that. Connection reuse to an upstream is a very important metric for Edge->DC communication. In our production, since we have nginx on both sides, we are gathering that metric from the other side of the connection. I assume not everybody has that luxury, therefore that
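As a sketch of how that could be observed on the proxy side, assuming the $upstream_connection variable this patch proposes (keepalive, log_format, and the proxy settings are standard nginx; repeated values in the log indicate a reused upstream connection):

    upstream backend {
        server 10.0.0.1:8080;
        keepalive 32;                       # idle keepalive connections cached per worker
    }

    log_format upstream_reuse '$remote_addr "$request" upstream_conn=$upstream_connection';

    server {
        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";    # required for upstream keepalive
            proxy_pass http://backend;
            access_log /var/log/nginx/upstream.log upstream_reuse;
        }
    }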

Re: [PATCH 1 of 2] HTTP: add support for trailers in HTTP responses

2016-07-20 Thread Alexey Ivanov
> On Jul 20, 2016, at 6:23 PM, Maxim Dounin <mdou...@mdounin.ru> wrote: > > Hello! > > On Wed, Jul 20, 2016 at 03:34:46PM -0700, Alexey Ivanov wrote: >> Speaking of trailers: we had a couple of use cases for HTTP >> trailers, most of them were around stream

Re: [PATCH 1 of 2] HTTP: add support for trailers in HTTP responses

2016-07-20 Thread Alexey Ivanov
Speaking of trailers: we had a couple of use cases for HTTP trailers, most of them around streaming data to the user. For example, when a webpage is generated we send the headers and part of the body (usually up to ``) almost immediately, but then we start querying all the microservices for the
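As a rough illustration of that pattern, assuming the add_trailer directive that this support eventually shipped as (1.13.2, if memory serves); proxy_pass, proxy_buffering, and $request_time are standard nginx, the upstream name is a placeholder, and trailers require chunked encoding or HTTP/2 on the client side:

    server {
        location /stream {
            proxy_pass http://backend;        # placeholder upstream
            proxy_buffering off;              # start streaming the body immediately

            # Trailer values are evaluated after the body has been sent, so data
            # that is only known at the end of the response can be attached here.
            add_trailer X-Request-Time $request_time;
        }
    }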

Re: [PATCH] Variables: added $tcpinfo_retrans

2015-12-21 Thread Alexey Ivanov
# HG changeset patch # User Alexey Ivanov <savether...@gmail.com> # Date 1450520577 28800 # Sat Dec 19 02:22:57 2015 -0800 # Branch tcpi_retrans # Node ID b018f837480dbad3dc45f1a2ba93fb99bc625ef5 # Parent 78b4e10b4367b31367aad3c83c9c3acdd42397c4 Variables: added $tcpinfo_retrans Th
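A sketch of how the new variable could be used alongside the existing $tcpinfo_* variables (log_format/access_log and $tcpinfo_rtt/$tcpinfo_snd_cwnd are standard nginx; $tcpinfo_retrans itself is what this patch adds):

    log_format tcp_metrics '$remote_addr "$request" '
                           'rtt=$tcpinfo_rtt cwnd=$tcpinfo_snd_cwnd '
                           'retrans=$tcpinfo_retrans';

    access_log /var/log/nginx/tcp_metrics.log tcp_metrics;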

Error counters in nginx

2015-06-12 Thread Alexey Ivanov
Hi. I have a feature request: from a system administrator's point of view it would be nice to have counters for each type of error log message. For example, right now nginx's error.log consists of a myriad of different error message formats: open() “%s” failed, directory index of “%s” is

Re: problems when use fastcgi_pass to deliver request to backend

2015-05-31 Thread Alexey Ivanov
If your backend can’t handle 10k connections then you should limit them there. Forwarding requests to a backend that cannot handle the request is generally a bad idea[1], and it is usually better to fail the request or make it wait for an available backend on the proxy itself. Nginx can retry
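A sketch of what limiting pressure on the backend could look like in configuration (directives are standard nginx; max_conns landed in the open-source version later than this thread, around 1.11.5, and the numbers are placeholders):

    upstream fastcgi_backend {
        # Cap concurrent connections per backend instead of piling requests onto it.
        server 127.0.0.1:9000 max_conns=512;
        server 127.0.0.1:9001 max_conns=512;
    }

    server {
        location ~ \.php$ {
            include                 fastcgi_params;
            fastcgi_pass            fastcgi_backend;
            # Fail fast or move on to the next backend rather than queueing forever.
            fastcgi_connect_timeout 1s;
            fastcgi_next_upstream   error timeout;
        }
    }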