Re: Port numbers in the access or error logs ?

2022-09-13 Thread Lucas Rolff
Yes, it’s documented in 
http://nginx.org/en/docs/http/ngx_http_core_module.html#variables

$remote_port is probably what you’re after.
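For example, a minimal sketch of a log format carrying the client port (format name is made up; use $server_port instead if you mean the local port the request came in on):

log_format ports '$remote_addr:$remote_port - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent';
access_log /var/log/nginx/access.log ports;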

On 13 Sep 2022, at 20:24, Michael Williams <michael.glenn.willi...@totalvu.tv> wrote:

Is there a way to include the request port number in each line of the access 
logs?
I'm on Debian 11, using the free NGINX download.

Many thanks,
Michael


___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Re: Slice module 206 requirement

2022-07-10 Thread Lucas Rolff
You’re truly awesome! I’ll give the patch a try tomorrow - and thanks for the 
other bits and pieces of information, especially regarding the expectations as 
well.

I wish you an awesome Sunday!

Best Regards,
Lucas Rolff

> On 10 Jul 2022, at 10:35, Maxim Dounin  wrote:
> 
> Hello!
> 
> On Fri, Jul 08, 2022 at 07:13:33PM +0000, Lucas Rolff wrote:
> 
>> I’m having an nginx instance where I utilise the nginx slice 
>> module to slice upstream mp4 files when using proxy_cache.
>> 
>> However, I have an interesting origin where if sending a range 
>> request (which happens when the slice module is enabled), to a 
>> file that’s less than the slice range, the origin returns a 200 
>> OK, but with the range related headers such as content-range, 
>> but obviously the full file is returned since it’s within the 
>> requested range.
>> 
>> When playing the MP4s through Google Chrome and Firefox it works 
>> fine when going through the nginx proxy instance, however, it 
>> somehow breaks Safari (both on MacOS, and iOS) - I guess Safari 
>> is more strict.
>> When playing directly through the origin it works fine in all 
>> browsers.
>> 
>> The md5 of response from the origin remains the same, so it’s 
>> not that the response itself is an invalid MP4 file, and even if 
>> you compare the cache files on disk with a “working” origin and 
>> the “broken” origin (one sends a 206 Partial Content, another 
>> sends 200 OK) - the content of the cache files remain the same, 
>> except obviously the header section of the cache file.
>> 
>> The origin returns a 206 status code, only if the file exceeds 
>> the slice size, so if I configure a slice size of 5 megabyte, 
>> only files above 5 megabytes will give 206s. Anything under 5 
>> megabytes will result in a 200 OK with content-range and the 
>> correct content-length,
>> 
>> Looking in the slice module itself I see:
>> https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L116-L126
>> 
>> 
>>    if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) {
>>        if (r == r->main) {
>>            ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module);
>>            return ngx_http_next_header_filter(r);
>>        }
>>
>>        ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
>>                      "unexpected status code %ui in slice response",
>>                      r->headers_out.status);
>>        return NGX_ERROR;
>>    }
>> 
>> This seems like the slice module expects a 206 status code to be 
>> returned,
> 
> For the main request, the code accepts two basic valid variants:
> 
> - 206, so the slice module will combine multiple responses to 
>  range requests as needed;
> 
> - anything else, so the slice module will give up and simply 
>  return the response to the client.
> 
> If the module sees a non-206 response to a subrequest, this is an 
> error, as the slice module expects underlying resources to be 
> immutable, and does not expect that some ranges can be requested 
> while some others aren't.  This isn't something related to your 
> case though.
> 
>> however, later in the same function 
>> https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L200-L211
>> 
>> 
>>    if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) {
>>        if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) {
>>            ctx->start = slcf->size
>>                         * (r->headers_out.content_offset / slcf->size);
>>        }
>>
>>        ctx->end = r->headers_out.content_offset
>>                   + r->headers_out.content_length_n;
>>
>>    } else {
>>        ctx->end = cr.complete_length;
>>    }
>> 
>> There it will do an else statement if the status code isn’t 206.
>> So would this piece of code ever be reached, since there’s the initial error?
> 
> Following the initial check, r->headers_out.status is explicitly 
> changed to NGX_HTTP_OK.  Later on, the 
> ngx_http_next_header_filter() call might again change 
> r->headers_out.status, as long as the client used a range request, 
> and this is what is checked here.
> 
>> Additionally I don’t see in RFC7233 that 206 responses are an 
>> absolute requirement, additionally I don’t see content-range 
>> being prohibited/forbidden to be used for 200 OK responses.
> Now, if one has a secondary proxy that modifies the response 
> headers

Slice module 206 requirement

2022-07-08 Thread Lucas Rolff
Hi guys,

I’m having an nginx instance where I utilise the nginx slice module to slice 
upstream mp4 files when using proxy_cache.
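For reference, the setup is essentially the documented slice pattern (zone name, slice size, and origin are assumptions):

proxy_cache_path /var/cache/nginx keys_zone=mp4cache:50m;

server {
    location / {
        slice             5m;
        proxy_cache       mp4cache;
        proxy_cache_key   $uri$is_args$args$slice_range;
        proxy_set_header  Range $slice_range;
        proxy_cache_valid 200 206 1h;
        proxy_pass        http://origin;
    }
}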

However, I have an interesting origin: if nginx sends a range request (which 
happens when the slice module is enabled) for a file that's smaller than the 
slice range, the origin returns a 200 OK with the range-related headers such as 
Content-Range - but obviously the full file is returned, since it's within the 
requested range.

When playing the MP4s through Google Chrome and Firefox it works fine when 
going through the nginx proxy instance; however, it somehow breaks Safari (both 
on macOS and iOS) - I guess Safari is more strict.
When playing directly through the origin it works fine in all browsers.

The md5 of the response from the origin remains the same, so it's not that the 
response itself is an invalid MP4 file. Even if you compare the cache files on 
disk between a "working" origin and the "broken" origin (one sends 206 Partial 
Content, the other sends 200 OK), the content of the cache files remains the 
same - except, obviously, the header section of the cache file.

The origin returns a 206 status code only if the file exceeds the slice size, 
so if I configure a slice size of 5 megabytes, only files above 5 megabytes 
will give 206s. Anything under 5 megabytes will result in a 200 OK with 
Content-Range and the correct Content-Length.

Looking in the slice module itself I see:
https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L116-L126


    if (r->headers_out.status != NGX_HTTP_PARTIAL_CONTENT) {
        if (r == r->main) {
            ngx_http_set_ctx(r, NULL, ngx_http_slice_filter_module);
            return ngx_http_next_header_filter(r);
        }

        ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                      "unexpected status code %ui in slice response",
                      r->headers_out.status);
        return NGX_ERROR;
    }

It seems like the slice module expects a 206 status code to be returned; 
however, later in the same function 
https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_slice_filter_module.c#L200-L211


    if (r->headers_out.status == NGX_HTTP_PARTIAL_CONTENT) {
        if (ctx->start + (off_t) slcf->size <= r->headers_out.content_offset) {
            ctx->start = slcf->size
                         * (r->headers_out.content_offset / slcf->size);
        }

        ctx->end = r->headers_out.content_offset
                   + r->headers_out.content_length_n;

    } else {
        ctx->end = cr.complete_length;
    }

There it takes the else branch if the status code isn't 206.
So would this piece of code ever be reached, given the initial error check?

Additionally, I don't see in RFC 7233 that 206 responses are an absolute 
requirement, nor do I see Content-Range being prohibited for 200 OK responses.
Now, if one has a secondary proxy in between that strips the Content-Range 
header out of the origin's 200 OK response, the nginx slice module seems to 
handle it fine - so somehow it's the combination of a 200 OK and a 
Content-Range header being present that stops the slice module from functioning.

I’m just curious why this happens within the slice module, and if there’s any 
possible solution for it (like allowing the combination of 200 OK and 
Content-Range, since those two would still indicate that the origin/upstream 
supports range requests) - obviously it would be nice to fix the upstream 
server but sometimes that’s sadly not possible.

I know these parts of the slice module haven't been touched for years, so 
obviously it works for most people; I'm just dipping my toes here to see if 
there's a possible solution other than disabling slice when an origin returns 
200 OK for files smaller than the slice size.

Thanks in advance

Best Regards,
Lucas Rolff
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Re: Strange problem with expires?

2022-03-01 Thread Lucas Rolff
= is for exact matches, so unless your static file is called "/", it obviously 
won't match that exact location.

No modifier (so no =) means it's a prefix match.
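To illustrate the difference:

location = / { expires epoch; }   # exact match: applies only to "/"
location /   { expires epoch; }   # prefix match: applies to "/", "/foo.css", ...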

Get Outlook for iOS

From: Grzegorz Kulewski 
Sent: Wednesday, March 2, 2022 4:40:48 AM
To: nginx@nginx.org 
Subject: Strange problem with expires?

Hello,

I am using nginx 1.21.0 to serve static files for one domain and when I have:

location = / {
expires epoch;
}

Expires headers are not added for / but when I remove '=' they are.

Is this some bug or just me doing something stupid?

Can anybody reproduce it too?

--
Grzegorz Kulewski

___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org


Re: Memory usage in nginx proxy setup and use of min_uses

2021-05-19 Thread Lucas Rolff
> If you nevertheless observe 500 being returned in practice, this might be the 
> actual thing to focus on.

Even with sub 100 requests and 4 workers, I've experienced it multiple times, 
where simply because the number of cache keys got exceeded, it was throwing 500 
internal server errors for new uncached requests for hours on end (The 
particular instance, I have about 300 expired keys per 5 minutes)

When it happens again, I'll obviously investigate further if it's not supposed 
to happen.

> an attacker can easily request the same resource several times, moving it to 
> the "normal" category

Correct - an attacker can almost always find ways to do things if they want to; 
I've just yet to see them being "smart" enough to request the same things 
multiple times.
Even if it's not an attacker but a misconfigured application (one that isn't 
directly managed by whoever manages the nginx server): if an application, for 
example, passes identifiers through in the URI (imagine gclid or fbclid 
hashes) - these types of IDs are generally unique per visitor, so query strings 
may differ, but we're only going to see each request once or twice in 99% of 
the cases where this happens. As a result we do not fill the disk, because of 
min_uses, but we do fill the memory, because it isn't cleared out before 
reaching the inactive option.

So at least in use-cases like that, we'd often be able to mitigate somewhat 
misconfigured applications - it's quite common within the CDN industry to see 
this issue anyway. While the ones running the CDN then obviously have to reach 
out to the customer and ask them to fix their application, it would be awesome 
to have a more proactive approach available, that would limit the importance of 
an urgent fix.

What I can hear is that you don't see the point of such a feature - that's fine :)

I guess the alternative is to use Lua to hook into nginx for the cache 
metadata/SHM (probably needs a custom nginx module as well, since the SHM isn't 
exposed in Lua); then one should be able to wipe out the useless keys that way.

Best Regards,
Lucas Rolff

On 18/05/2021, 03.27, "nginx on behalf of Maxim Dounin" 
 wrote:

Hello!

On Mon, May 17, 2021 at 07:33:43PM +0000, Lucas Rolff wrote:

> Hi Maxim!
> 
> > - The attack you are considering is not about "poisoning".  At 
> > most, it can be used to make the cache less efficient.
> 
> Poisoning is probably the wrong word indeed, and since nginx 
> doesn't really handle reaching the limit of keys_zone, it simply 
> starts to return a 500 internal server error. So I don't think 
> it's making the cache less efficient (Other than you won't be 
> able to cache that much), you're ending up breaking nginx 
> because when the keys_zone limit has been reached, nginx simply 
> starts returning 500 internal server error for items that are 
> not already in proxy_cache - if it would do an LRU/LFU on the 
> keys - then yes, you could probably end up with a cache less 
> efficient.

While 500 is possible in some cases, especially in configurations 
with many worker processes and high request concurrency, even in 
the worst case it's expected to happen at most for half of the 
requests, usually much less than that.  Further, cache manager 
monitors the number of cache items in the keys_zone, cleaning 
things in advance, making 500 almost impossible in practice.

If you nevertheless observe 500 being returned in practice, this 
might be the actual thing to focus on.

[...]

> Unless nginx very recently implemented that reaching keys_zone 
> limit, will start purging old cache - then no, it would still 
> break the nginx for non-cached requests (returning 500 internal 
> server error). If nginx has started to purge old things if the 
> limit is reached, then sure the attacker would still be able to 
> wipe out the cache.

Clearing old cache items when it is not possible to allocate a 
cache node dates back to initial cache support in nginx 0.7.44[1].  
And cache manager monitoring of the keys_zone and clearing it in 
advance dates back to nginx 1.9.13 released about five years 
ago[2].  Not sure any of these counts as "very recently".

> But let's say we have an "inactive" set to 24+ hours (Which is 
> often used for static files) - an attack where someone would 
> append random query strings - those keys would first be removed 
> after 24 hours (or higher, depending on the limit) - with a 
> separate flag, one could set this counter to something like 60 
> seconds (So delete the key from memory if the key haven't 
> reached it's min_uses within 60 seconds) - this way, you're 
> still rotating those keys out *a lot

Re: Memory usage in nginx proxy setup and use of min_uses

2021-05-17 Thread Lucas Rolff
Hi Maxim!

> - The attack you are considering is not about "poisoning".  At most, it can 
> be used to make the cache less efficient.

Poisoning is probably the wrong word indeed, and since nginx doesn't really 
handle reaching the limit of keys_zone, it simply starts to return a 500 
internal server error. So I don't think it's making the cache less efficient 
(Other than you won't be able to cache that much), you're ending up breaking 
nginx because when the keys_zone limit has been reached, nginx simply starts 
returning 500 internal server error for items that are not already in 
proxy_cache - if it would do an LRU/LFU on the keys - then yes, you could 
probably end up with a cache less efficient.

But as it stands currently, if one uses $request_uri, an attacker could reach 
the keys_zone limit and break all traffic that is not yet cached.
Even if one didn't use $request_uri but some specific argument - a query string 
that wouldn't directly affect the output - it could cause the same behavior.

One application where avoiding this is hard is a CDN, for example - while 
ideally one would not use $request_uri as the cache key, it's sometimes 
required by customer applications.

> At most, you can try to limit the number of keys an attacker will be able to 
> put into keys_zone

If we take the example of a CDN or any kind of reverse proxy: what impact would 
it have to have a proxy_cache_path for each domain? Let's say we're talking 
thousands of domains on a single nginx server. Normally one would share one or 
a couple (fast/slow storage, for example), and the domain would be part of the 
cache key.

If we want to limit (per domain), it would require a proxy_cache_path per 
domain - which surely would be very flexible, but I also think you're then 
asking nginx to do a lot more management.

> But using separate inactive timer for keys not reached min_uses won't help 
> here: an attacker who is able to do arbitrary amount of requests will be able 
> to flush all cache items anyway.

Unless nginx very recently implemented purging old cache entries when the 
keys_zone limit is reached - then no, it would still break nginx for non-cached 
requests (returning 500 internal server error). If nginx has started to purge 
old entries when the limit is reached, then sure, the attacker would still be 
able to wipe out the cache.

But let's say we have "inactive" set to 24+ hours (which is often used for 
static files) - in an attack where someone appends random query strings, those 
keys would first be removed after 24 hours (or more, depending on the limit). 
With a separate flag, one could set this counter to something like 60 seconds 
(so: delete the key from memory if it hasn't reached its min_uses within 60 
seconds) - this way, you're still rotating those keys out *a lot* 
faster.

> In particular, this can be done with limit_req

If we'd limit this to 20 req/s, this would allow a single IP to use up about 
1.73 million keys in the keys_zone if "inactive" is 24 hours - do this with 10 
IPs, and we're at 17.3 million.
If we'd flush the keys not reaching min_uses after 1 minute, we'd limit the 
keys in the keys_zone per IP to 1,200 - the attacker can surely keep doing his 
20 requests per second, but since we're throwing things out pretty quickly, 
we've decreased the "damage" a single IP can do from 1.73 million keys down to 
1,200 keys, or even 12,000 keys if we'd keep them for 10 minutes.
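Spelled out (assuming an "inactive" of 24 hours versus a 60-second flush):

    20 req/s x 86,400 s = 1,728,000 keys per IP per day
    20 req/s x     60 s =     1,200 keys per IP per minute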

I still think such a feature would be awesome, since it would allow better 
control (and play nicely with the proxy_cache_min_uses directive); 
proxy_cache_min_uses is often used to prevent excessive storage use from 
objects that don't get enough hits. Being able to do the same with the 
keys_zone data would (I think) benefit quite a lot, since it would solve (or at 
least help mitigate) the above from happening. It makes things just a tad 
harder for someone to cause trouble.

Best Regards,
Lucas Rolff

On 17/05/2021, 21.06, "nginx on behalf of Maxim Dounin" 
 wrote:

Hello!

On Mon, May 17, 2021 at 02:47:33PM +0000, Lucas Rolff wrote:

> Hi Maxim,
> 
> Thanks a lot for your reply!
> 
> I'm indeed aware of the ~8k keys per mb of memory, I was just 
> wondering if it was handled differently when min_uses are in 
> use, but it does indeed make sense that nginx has to keep track 
> of it somehow, and the keys zone makes the most sense!
> 
> > Much like with any cache item, such keys are removed from the 
> > keys_zone if no matching requests are seen during the 
> > "inactive" time
> 
> That's a bummer, since that still allows memory "poisoning" - it 
> would be awesome to have another flag for proxy_cache_path to 
> control how long keys that have not yet reached min_uses are 
> kept in SHM.
> 

Re: Memory usage in nginx proxy setup and use of min_uses

2021-05-17 Thread Lucas Rolff
Hi Maxim,

Thanks a lot for your reply!

I'm indeed aware of the ~8k keys per MB of memory; I was just wondering if it 
was handled differently when min_uses is in use, but it does indeed make sense 
that nginx has to keep track of it somehow, and the keys_zone makes the most 
sense!

> Much like with any cache item, such keys are removed from the keys_zone if no 
> matching requests are seen during the "inactive" time

That's a bummer, since that still allows memory "poisoning" - it would be 
awesome to have another flag for proxy_cache_path to control how long keys that 
have not yet reached min_uses are kept in SHM.
The benefit of this would be to say: if min_uses has not been reached within, 
let's say, 5 minutes, then we purge those keys from SHM to free up the memory.

For controlling the cache items: ideally we want to use query strings as part 
of the cache key, yet still prevent memory poisoning as above - an inactive 
flag for min_uses would be pretty useful for this. While it won't prevent it 
fully, we'd still be able to somewhat control memory even if people are trying 
to do cache/memory poisoning.

Best Regards,
Lucas Rolff

On 17/05/2021, 16.37, "nginx on behalf of Maxim Dounin" 
 wrote:

Hello!

On Sun, May 16, 2021 at 04:46:17PM +0000, Lucas Rolff wrote:

> Hi everyone,
> 
> I have a few questions regarding proxy_cache and the use of 
> proxy_cache_min_uses in nginx:
> 
> Let’s assume you have an nginx server with proxy_cache enabled, 
> and you’ve set proxy_cache_min_uses to 5;
> 
> Q1: How does nginx internally keep track of the count for 
> min_uses? Is it using SHM to do it (and counts towards the 
> key_zone limit?), or something else?
> 
> Q2: How long time does nginx keep this information for the 
> number of accesses. Let’s say the file gets visited once in a 24 
> hour period; Would nginx keep the counter at 1 for that whole 
> period, or are there some set timeout where it’s “flushed”.
> 
> Q3: If you have a user who decides to access files with a random 
> query string on it; We want to prevent caching this to fill up 
> the storage (The main reason for setting the 
> proxy_cache_min_uses in the first place) – but are we gonna fill 
> up the memory (and keys_zone limit) regardless; If yes – is 
> there a way to prevent this?
> 
> Basically the goal is to understand even just broadly how 
> min_uses are counted, and possibly how to prevent memory from 
> being eaten up in case someone decides to access the same URL 
> once with millions of requests – if there’s any way to flush out 
> the memory for example, for anything that haven’t yet reached 
> the proxy_cache_min_uses if it indeed uses up memory.

The proxy_cache_min_uses basically means that if nginx sees a 
request whose uses count has not reached the specified limit yet, it 
won't try to store the response to disk.  It will, however, keep 
the key in the keys_zone with the relevant information, notably 
the number of uses seen so far.  Quoting the proxy_cache_path 
directive description (http://nginx.org/r/proxy_cache_path):
directive description (http://nginx.org/r/proxy_cache_path):

"In addition, all active keys and information about data are stored 
in a shared memory zone, whose name and size are configured by the 
keys_zone parameter. One megabyte zone can store about 8 thousand 
keys."

Much like with any cache item, such keys are removed from the 
keys_zone if no matching requests are seen during the "inactive" 
time.  Similarly, least recently used keys are removed if there is 
not enough room in the keys_zone.

Much like with normal caching, you can control the cache key nginx 
uses.  If you don't want to take query string into account, you 
may want to configure proxy_cache_key without the query string 
(see http://nginx.org/r/proxy_cache_key).
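For example, with the default key being $scheme$proxy_host$request_uri, this 
variant simply drops the query string:

    proxy_cache_key $scheme$proxy_host$uri;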

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Memory usage in nginx proxy setup and use of min_uses

2021-05-16 Thread Lucas Rolff
Hi everyone,

I have a few questions regarding proxy_cache and the use of 
proxy_cache_min_uses in nginx:

Let’s assume you have an nginx server with proxy_cache enabled, and you’ve set 
proxy_cache_min_uses to 5;
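A minimal sketch of that setup (paths, zone name, and sizes are assumptions):

proxy_cache_path /var/cache/nginx keys_zone=mycache:100m inactive=24h max_size=50g;

server {
    location / {
        proxy_cache          mycache;
        proxy_cache_min_uses 5;
        proxy_pass           http://origin;
    }
}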

Q1: How does nginx internally keep track of the count for min_uses? Is it using 
SHM to do it (and does it count towards the keys_zone limit?), or something else?

Q2: How long does nginx keep this information about the number of accesses? 
Let's say the file gets visited once in a 24-hour period; would nginx keep the 
counter at 1 for that whole period, or is there some set timeout after which 
it's "flushed"?

Q3: Say you have a user who decides to access files with random query strings 
appended; we want to prevent this from filling up the storage (the main reason 
for setting proxy_cache_min_uses in the first place) – but are we going to fill 
up the memory (and the keys_zone limit) regardless? If yes – is there a way to 
prevent this?

Basically the goal is to understand, even just broadly, how min_uses are 
counted, and possibly how to prevent memory from being eaten up in case someone 
decides to request millions of unique URLs once each – for example, whether 
there's any way to flush out the memory for anything that hasn't yet reached 
proxy_cache_min_uses, if it does indeed use up memory.

Best Regards,
Lucas Rolff



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: LiteSpeed 5.4 vs Nginx 1.16 benchmarks

2019-08-18 Thread Lucas Rolff
> Misconfigure Nginx

Which parts are misconfigured? If I run the tests and tweak the config to use 
CloudFlare's suggested SSL settings, for example, it still doesn't really 
change anything - and I'd assume CloudFlare wants good SSL performance. So I'm 
curious which settings would be configured wrong; at least they accept PRs to 
correct the config in case it's wrong.

> and use an obsolete distro version of Nginx? 

It uses the nginx stable repository, which isn't exactly obsolete.

- Lucas



Get Outlook for iOS

From: nginx  on behalf of Mark Mielke 

Sent: Sunday, August 18, 2019 6:27:30 PM
To: nginx@nginx.org 
Subject: Re: LiteSpeed 5.4 vs Nginx 1.16 benchmarks

Any idea how they did what? Misconfigure Nginx and use an obsolete distro 
version of Nginx? 


On Sat., Aug. 17, 2019, 1:17 p.m. Christos Chatzaras <ch...@cretaforce.gr> wrote:
Today I read this post:

http://www.webhostingtalk.com/showthread.php?t=1775139

In their changelog ( 
https://www.litespeedtech.com/products/litespeed-web-server/release-log ) I see 
that they made changes related to HTTP/2.

Any idea how they did it?
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Do nginx 1.14 and 1.17 have compatible configuration file formats?

2019-07-15 Thread Lucas Rolff
I would say: install a box with 1.17, copy your config over, and do a config 
test to see if it works.

From: nginx  on behalf of "Zheng, Qi" 

Reply-To: "nginx@nginx.org" 
Date: Monday, 15 July 2019 at 10.46
To: "'nginx@nginx.org'" 
Subject: RE: Do nginx 1.14 and 1.17 have compatible configuration file formats?

Hi,

Can someone help answer my question?
Or any other channel I can throw the question to?
Many thanks.

Best Regards

SSP->LSE->Clear Linux Engineering (Shanghai)

From: Zheng, Qi
Sent: Thursday, July 11, 2019 9:04 AM
To: nginx@nginx.org
Subject: Do nginx 1.14 and 1.17 have compatible configuration file formats?

Hi,

I am now using nginx 1.14.
I am planning to upgrade it to the latest 1.17 version.
My question is do nginx 1.14 and 1.17 have compatible configuration file 
formats?
Can I use whatever I configured before for 1.14 on 1.17 version?
Thanks.

Best Regards

SSP->LSE->Clear Linux Engineering (Shanghai)

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: I'm about to embark on creating 12000 vhosts

2019-02-12 Thread Lucas Rolff
In haproxy, you simply specify a path where you have all your certificates.

frontend https_frontend
    bind *:443 ssl crt /etc/haproxy/certs/default-cert.pem crt /etc/haproxy/certs alpn h2,http/1.1

This way, haproxy will read all certs, and when a connection comes in, it uses 
the SNI hostname to determine which certificate it should serve.

There was a thread on the haproxy mailing list not long ago about managing more 
than 100k certificates per haproxy instance, and they're working on further 
optimizations for those kinds of deployments (if it's not already done - 
haven't checked, to be honest).

Best Regards,

From: nginx  on behalf of Richard Paul 

Reply-To: "nginx@nginx.org" 
Date: Tuesday, 12 February 2019 at 10.04
To: "nginx@nginx.org" 
Subject: Re: I'm about to embark on creating 12000 vhosts

Hi Jeff

That's interesting - how do you manage the programming to load the right 
certificate for the domain coming in as the server name? We need to load the 
right certificate for the incoming domain, and the 12000 figure is the number 
of unique vanity domains without the www. subdomains.

We're planning to follow the same path as you though, we're essentially putting 
these Nginx TLS terminators (fronted by GCP load balancers) in front of our 
existing Varnish caching and Nginx backend infrastructure which currently only 
listen on port 80.

I couldn't work out what the limits are at LE, as it's not clear with regard to 
the limits on adding new unique domains. I'm going to have to ask in the forums 
at some point so that I can work out what our daily batches are going to be.

Kind regards,
Richard

On Mon, 2019-02-11 at 14:33 -0500, Jeff Dyke wrote:
I use haproxy in a similar way as stated by Rainer: rather than having hundreds 
and hundreds of config files (yes, there are other ways), I have 1 for haproxy 
and 2 (on multiple machines defined in HAProxy) - one for my main domain that 
listens to a "real" server_name, and another that listens to `server_name _;`. 
All of the nginx servers simply listen on 80 and 81 to handle non-H2 clients, 
and the application does the correct thing with the domain - which is where 
YMMV, as all applications differ.

I found this much simpler and easier to maintain over time. I got around the LE 
limits with a staggered migration, so I was only requesting what was within the 
limit each day; I then have a custom script that calls LE (which is also on the 
same machine as HAProxy) when certs are about 10 days out, so the staggering 
stays within the limits. When I was using custom configuration, I built the 
configs via Python using a YAML file, and nginx would effectively be a Jinja2 
template. But even that became onerous. When going down the nginx path, ensure 
you pay attention to the variables that control domain hash sizes (see the 
sketch below): http://nginx.org/en/docs/hash.html
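For example (a sketch; the values are illustrative and depend on how many 
server names you load):

    server_names_hash_max_size    32768;
    server_names_hash_bucket_size 128;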

HTH, good luck!
Jeff

On Mon, Feb 11, 2019 at 1:58 PM Rainer Duffner <rai...@ultra-secure.de> wrote:



On 11.02.2019 at 16:16, rick_pri <nginx-fo...@forum.nginx.org> wrote:

However, our customers, with about 12000 domain names at present have


Let’s Encrypt rate limits will likely make these very difficult to obtain and 
also to renew.

If you own the DNS, maybe using Wildcard DNS entries is more practical.

Then, HAProxy allows you to just drop all the certificates in a directory and 
figure out itself which domain names it has to answer.
At least, that's what my co-worker told me.

Also, there’s the fabio LB with similar goal-posts.




___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: .service ExecStartPre in example

2019-01-11 Thread Lucas Rolff
There's nothing wrong with testing the configuration before starting the web 
server.

The config is tested during restart by the ExecStartPre. If you modify a config 
and you want to restart, you should execute nginx -t prior to restarting your 
service - but generally you'd want to use nginx -s reload as much as possible.
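A typical edit-and-reload workflow would then be (a sketch; paths and service 
names may differ per distro):

    nginx -t && nginx -s reload
    # or, under systemd:
    nginx -t && systemctl reload nginx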

On 11/01/2019, 11.12, "nginx on behalf of Olaf van der Spek" 
 wrote:

What's the purpose of testing the configuration file in the systemd
example?
Just starting the server seems simpler.. and the test isn't run prior to a
restart request.


ExecStartPre=/usr/sbin/nginx -t

https://www.nginx.com/resources/wiki/start/topics/examples/systemd/

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,282654,282654#msg-282654

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Byte-range request not possible for proxy_cache if origin doesn't return accept-ranges header

2018-11-14 Thread Lucas Rolff
Hi Roman,

I can confirm that indeed does fix the problem, thanks!

I do wonder though, why not let nginx make the decision instead of relying on 
what the origin sends or does not send?

Thanks!

On 14/11/2018, 17.36, "nginx on behalf of Roman Arutyunyan" 
 wrote:

Hi,

On Wed, Nov 14, 2018 at 02:36:10PM +0000, Lucas Rolff wrote:
> Hi guys,
> 
> I've been investigating why byte-range requests didn't work for files
> that are cached in nginx with proxy_cache, I'd simply do something like:
> 
> $ curl -r 0-1023 https://cdn.domain.com/mymovie.mp4
> 
> What would happen was that the full length of a file would be returned,
> despite being in the cache already (I know that the initial request, you
> can't seek into a file).
> 
> Now, after investigation, I compared it with another file that I knew
> worked fine, I looked in the file on disk, the only difference between
> the two files, was the fact that one cached file contained
> Accept-Ranges: bytes, and another didn't have it.
> 
> Investigating this, I tried to add the header Accept-Ranges: bytes on an
> origin server, and everything started to work from nginx as well.
> 
> Now, I understand that Accept-Ranges: bytes should be sent whenever a
> server supports byte-range requests.
> I'd expect that after nginx has fetched the full file, that it would be
> perfectly capable of doing byte-range requests itself, but it seems like
> it's not a possibility.
> 
> I'm not really sure if this is a bug or not, but I do find it odd that
> the behavior is something like: "If origin does not understand byte-range
> requests, then I also shouldn't understand them".
> 
> Is there a way to solve this on the nginx side directly to "fix" origin
> servers that do not send an Accept-Ranges header, or is it something that
> could possibly be fixed in such a way that nginx doesn't "require" the
> cached file to contain the "Accept-Ranges: bytes" header, to be able to
> do range requests to it?

The "proxy_force_ranges" directive enables byte ranges regardless of the
Accept-Ranges header.

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_force_ranges
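    For example (zone and upstream names assumed):

        location / {
            proxy_pass         http://origin;
            proxy_cache        mycache;
            proxy_force_ranges on;
        }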

-- 
Roman Arutyunyan
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Byte-range request not possible for proxy_cache if origin doesn't return accept-ranges header

2018-11-14 Thread Lucas Rolff
Hi guys,

I've been investigating why byte-range requests didn't work for files that are 
cached in nginx with proxy_cache, I'd simply do something like:

$ curl -r 0-1023 https://cdn.domain.com/mymovie.mp4

What would happen was that the full length of the file would be returned, 
despite it being in the cache already (I know that on the initial request, you 
can't seek into a file).

Now, after investigating, I compared it with another file that I knew worked 
fine. Looking at the files on disk, the only difference between the two was 
that one cached file contained Accept-Ranges: bytes, and the other didn't have 
it.

Investigating this, I tried to add the header Accept-Ranges: bytes on an origin 
server, and everything started to work from nginx as well.

Now, I understand that Accept-Ranges: bytes should be sent whenever a server 
supports byte-range requests.
I'd expect that after nginx has fetched the full file, it would be perfectly 
capable of serving byte-range requests itself, but it seems that's not the 
case.

I'm not really sure if this is a bug or not, but I do find it odd that the 
behavior is something like: "If origin does not understand byte-range requests, 
then I also shouldn't understand them".

Is there a way to solve this on the nginx side directly to "fix" origin servers 
that do not send an Accept-Ranges header, or is it something that could 
possibly be fixed in such a way that nginx doesn't "require" the cached file to 
contain the "Accept-Ranges: bytes" header, to be able to do range requests to 
it?

Thanks in advance!

Best Regards,
Lucas Rolff
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Please DO NOT add [nginx] to subject

2018-10-15 Thread Lucas Rolff
Might be important to mention that services such as Exchange don't support 
subaddressing, so it's a bit harder there :)

With that said, I’d love [nginx] in the header, regardless if it breaks DKIM or 
similar, I have mailing lists whitelisted anyway for that exact reason, because 
there’s already plenty of lists that break DKIM or SPF for that matter.

In my case, I don’t filter on mailing lists and put them in specific 
directories, they all end up in my inbox, and I click through them, if the 
subject interests me - and I see the email of the list, and know which list 
it’s from.

Additionally, I know most common names that post on the mailing list, so it’s 
easy to see which list it comes from ^_^

Get Outlook for iOS


From: Stefan Müller
Sent: Monday, October 15, 2018 3:32 PM
To: nginx@nginx.org; Ralph Seichter
Subject: Re: Please DO NOT add [nginx] to subject


why not accept the advice you have been offered?

I read up on email extensions on Gizmodo and Wikipedia, and I'm very familiar 
with filtering and labeling (all list-based mails are labeled automatically), 
but I still believe that adding [nginx] would make the situation more 
comfortable.

You have no case

My case is that when I open my email application on a phone or desktop 
occasionally throughout the day, I want to see what I've got today at a glance, 
without needing to click/tab into subfolders. I open the app, see what came in, 
and decide whether it is important or can be handled later. To make this 
decision quicker, a label in the subject would improve things enormously, as 
you focus only on the subject during such actions.


Anyone else want to share her/his thoughts besides me and Ralph?


https://en.wikipedia.org/wiki/Email_address#Subaddressing

On 15.10.2018 15:16, Ralph Seichter wrote:

On 15.10.18 14:59, Stefan Müller wrote:



but it seems others do, or at least agree with me


So what if "others" agree with you? People agree with me as well, check
existing discussions about this issue.

If you challenge conventions that have been around for good reason, for
longer than some mailing list subscribers lived on this fair planet, you
better make a damn good case of it, based on evidence and not on your
limited personal experience in this particular matter (which is not
something to be ashamed of, just a learning opportunity). You have no
case, so why not accept the advice you have been offered?

-Ralph
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Avoiding Nginx restart when rsyncing cache across machines

2018-09-13 Thread Lucas Rolff
> How does one ensure cache consistency on all edges?

I wouldn't - you can never really rely on anything being consistently cached; 
there will always be stuff that doesn't follow the standards and thus can give 
an inconsistent state for one or more users.

What I'd do would simply be to purge the files whenever needed (and possibly 
warm them up if you want them to be "hot" when visitors arrive) - sure, the 
first 1-2 visitors in each location might have a slightly slower request, but 
that's about it.

Alternatively you could just set a super low Cache-Control: when you're using 
proxy_cache_background_update together with proxy_cache_use_stale's "updating" 
parameter, nginx will ask the origin server if the file has changed - so if it 
hasn't, you'll simply get a 304 from the origin (if the origin supports it). 
You'll do more requests to the origin, but traffic will be minimal because it 
just returns 304 Not Modified (plus some headers).
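The exact directives would be along these lines (a sketch; "updating" is a 
parameter of proxy_cache_use_stale, not a directive of its own):

    proxy_cache_background_update on;
    proxy_cache_use_stale updating error timeout;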

Best Regards,
Lucas Rolff


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Avoiding Nginx restart when rsyncing cache across machines

2018-09-13 Thread Lucas Rolff
> The cache is pretty big and I want to limit unnecessary requests if I can.

30gb of cache and ~ 400k hits isn’t a lot.

> Cloudflare is in front of my machines and I pay for load balancing, firewall, 
> Argo among others. So there is a cost per request.

Doesn’t matter if you pay for load balancing, firewall, argo etc – implementing 
a secondary caching layer won’t increase your costs on the CloudFlare side of 
things, because you’re not communicating via CloudFlare but rather between 
machines – you’d connect your X amount of locations to a smaller amount of 
locations, doing direct traffic between your DigitalOcean instances – so no 
CloudFlare costs involved.

Communication between your CDN servers and your origin server also (IMO) 
shouldn't go via any CloudFlare-related products, so additional hits on the 
origin will be "free" at the expense of a bit higher load – however, since only 
a subset of locations would request via the origin, and they then serve as the 
origin for your other servers, you're effectively decreasing the origin 
traffic.

You should easily be able to get a 97-99% offload of your origin (in my own 
setup, it’s at 99.95% at this point), even without using a secondary layer, and 
performance can get improved by using stuff such as:

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_background_update

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale_updating
 

Nginx is smart enough to do a sub-request in the background to check if the 
origin file was updated (using Last-Modified or ETags, e.g.) – this way the 
origin communication would be minimal anyway.

The only Load Balancer / Argo / Firewall costs you should have is the “CDN 
Server -> end user” traffic, and that won’t increase or decrease by doing a 
normal proxy_cache setup or a setup with a secondary cache layer.

You also won’t increase costs by doing a warmup of your CDN servers – you could 
do something as simple as:

curl -o /dev/null -k -I --resolve cdn.yourdomain.com:443:127.0.0.1 
https://cdn.yourdomain.com/img/logo.png

You could do the same with python or another language if you’re feeling more 
comfortable there.

However, using a method like the above will keep your warmup "local": since 
you're resolving cdn.yourdomain.com to localhost, requests that are not yet 
cached will use whatever is configured in your proxy_pass in the nginx config.

> Admittedly I have a not so complex cache architecture. i.e. all cache 
> machines in front of the origin and it has worked so far

I would say it's complex if you have to sync your content – many pull-based 
CDNs simply do a normal proxy_cache + proxy_pass setup, without syncing 
content, and then use some of the nifty features (such as 
proxy_cache_background_update and proxy_cache_use_stale's "updating" parameter) 
to decrease the origin traffic, or possibly implement a secondary layer if 
they're still doing a lot of origin traffic (e.g. because of having a lot of 
"edge servers") – if you're running like 10 servers, I wouldn't even consider a 
secondary layer unless your origin is under heavy load and can't handle 10 
possible clients (CDN servers).

Best Regards,
Lucas Rolff


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Avoiding Nginx restart when rsyncing cache across machines

2018-09-12 Thread Lucas Rolff
Can I ask why you need to start with a warm cache directly? Sure, it will lower 
the requests to the origin, but you could implement a secondary caching layer 
if you wanted to (using nginx): you'd have your primary cache in, let's say, 10 
locations spread across 3 continents (US, EU, Asia), and then a second layer 
that consists of a smaller number of locations (1 instance in each continent) - 
this way you'll warm up faster when you add new servers, and it won't really 
affect your origin server.

It's also a lot cleaner, because you're able to use proxy_cache, which is 
really what (in my opinion) you should use when you're building caching proxies.

Generally I'd just slowly warm up new servers prior to putting them into 
production: get a list of the top X files accessed, and loop over them to pull 
them in as normal HTTP requests, as sketched below.
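A sketch of such a warmup loop (hostname and file list are assumptions):

    while read -r path; do
        curl -sS -o /dev/null --resolve cdn.example.com:443:127.0.0.1 "https://cdn.example.com${path}"
    done < top_urls.txt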

There are plenty of decent solutions (some more complex than others), but there 
should really never be a reason to have to sync your cache across machines - 
even for new servers.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: reverse proxy https not working

2018-08-26 Thread Lucas Rolff
> Both did the trick, but which one is better?

I personally prefer the $request_uri one because it’s very clear exactly what 
it does.

> I think I read somewhere that nginx would connect unencrypted to the backend, 
> and do the encryption / decryption, is this wrong then?

Nginx will connect the way you’ve told it to connect, if you’re connecting to a 
http backend, it will do plain communication over http – if you’re connecting 
to a https backend, it will establish a secure connection with the backend, and 
decrypt the response before encrypting it again when going to the client.

> It works on some of my other domains, so is this just an exeption?
> What I really ask is this: Should I change my other domains also, or should I 
> kepp them as they are as long as they work?

I would change it for consistency across your configs, but that’s my opinion – 
if it works then it’s all OK anyway, I don’t know the specific case when it 
will and will not work – so I by default set $request_uri because it works in 
99% of the cases, and I’ll only modify it if another behaviour is required.

Best Regards,
Lucas Rolff

From: nginx  on behalf of "Jungersen, Danjel - 
Jungersen Grafisk ApS" 
Organization: Jungersen Grafisk ApS
Reply-To: "nginx@nginx.org" 
Date: Sunday, 26 August 2018 at 11.29
To: "nginx@nginx.org" 
Subject: Re: reverse proxy https not working

Thanks !!!

 proxy_pass  https://192.168.1.3;
 proxy_pass  https://192.168.1.3$request_uri;

Both did the trick, but which one is better?

I will now try to re-enable all the "force encryption" settings.

And closing firewall ports to see what I can avoid having open.

I'm a bit of novice at proxies, so please be patient :-)
I will read the documentation sections you mentioned.

I think I read somewhere that nginx would connect unencrypted to the backend, 
and do the encryption / decryption, is this wrong then?
It works on some of my other domains, so is this just an exeption?
What I really ask is this: Should I change my other domains also, or should I 
kepp them as they are as long as they work?

It sounds like you recommend removing the "/" on all sites(?)

A current typical setup:

server {

  server_name www.printlight.dk;
  server_name printlight.dk;

  location / {
    proxy_pass http://192.168.20.3/;
    proxy_set_header Host $host;
  }

  listen 443 ssl; # managed by Certbot
  ssl_certificate /etc/letsencrypt/live/printlight.dk/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/printlight.dk/privkey.pem; # managed by Certbot
  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
  if ($host = www.printlight.dk) {
    return 301 https://$host$request_uri;
  } # managed by Certbot

  if ($host = printlight.dk) {
    return 301 https://$host$request_uri;
  } # managed by Certbot

  listen 80;

  server_name www.printlight.dk;
  server_name printlight.dk;
  return 404; # managed by Certbot
}



Best regards
Danjel


From: Lucas Rolff 
To:"nginx@nginx.org" 
Subject: Re: reverse proxy https not working
Date sent:  Sun, 26 Aug 2018 08:47:03 +0000
Send reply to: nginx@nginx.org

> > The vendor recommended me to use a reverse proxy
>
> Ideally the vendor should have a working config in that case, but, I do see
> a few things that can be an issue.
>
> You’re serving https but proxying to an http backend – depending on how the
> software works, a lot of the reverse URLs that are sent back might be
> linking to http:// instead of https://
>
> This in itself can break a lot of functionality, you might want to try to
> proxy to an https backend – this might require that you create a
> self-signed certificate on the backend (can be valid for 10 years) – the
> backend software itself, if it has a way to enable “https”, you’d have to
> set this as well.
>
> I also recommend removing the / (slash) at the end of the proxy_pass, this
> will pass through the request URI from the client, as per documentation:
>
> > If proxy_pass is specified without a URI, the request URI is passed to
> > the server in the same form as sent by a client when the original request
> > is processed, or the full normalized request URI is passed when
> > processing the changed URI
>
> Alternatively do proxy_pass http://192.168.1.3$request_uri; or proxy_pass
> https://192.168.1.3$request_uri;
>
> Additionally, if your software uses Location or Refresh headers, then you
> might want to look into proxy_redirect (
> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect )
> to rewrite th

Re: reverse proxy https not working

2018-08-26 Thread Lucas Rolff
> The vendor recommended me to use a reverse proxy

Ideally the vendor should have a working config in that case, but, I do see a 
few things that can be an issue.

You’re serving https but proxying to an http backend – depending on how the 
software works, a lot of the reverse URLs that are sent back might be linking 
to http:// instead of https://

This in itself can break a lot of functionality; you might want to try to proxy 
to an https backend instead – this might require that you create a self-signed 
certificate on the backend (it can be valid for 10 years) – and if the backend 
software itself has a way to enable “https”, you’d have to set this as well.

I also recommend removing the / (slash) at the end of the proxy_pass; this will 
pass through the request URI from the client, as per the documentation:

> If proxy_pass is specified without a URI, the request URI is passed to the 
> server in the same form as sent by a client when the original request is 
> processed, or the full normalized request URI is passed when processing the 
> changed URI

Alternatively do proxy_pass http://192.168.1.3$request_uri; or proxy_pass 
https://192.168.1.3$request_uri;

Additionally, if your software uses Location or Refresh headers, then you might 
want to look into proxy_redirect ( 
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect ) to 
rewrite this on the “return” to the user.
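For example (addresses assumed), rewriting backend Location headers back to the 
public scheme and host:

    proxy_redirect http://192.168.1.3/ https://$host/;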

Best Regards,
Lucas Rolff

From: nginx  on behalf of "Jungersen, Danjel - 
Jungersen Grafisk ApS" 
Organization: Jungersen Grafisk ApS
Reply-To: "nginx@nginx.org" 
Date: Sunday, 26 August 2018 at 10.33
To: "nginx@nginx.org" 
Subject: Re: reverse proxy https not working



From: Lucas Rolff 
To:"nginx@nginx.org" 
Subject: Re: reverse proxy https not working
Date sent:  Sun, 26 Aug 2018 08:19:28 +0000
Send reply to: nginx@nginx.org

> Which functions do not work?
That's a bit hard to say, but I'll try...

It's a print production system.
One part is approval of pages in a job.

When I try to open a page for approval the system should open up the page in 
large size.
That does not happen.
The thumbnails on the side works.
And as stated, when I do the same thing when connected via http, there are no 
issues.

>
> Be aware some software (WordPress being a good example) doesn’t always work 
> with reverse
> proxies that easy.
The vendor recommended me to use a reverse proxy

>
> Could you possibly include your nginx configuration? Especially your proxy 
> parts.

server {

  server_name portal.printlight.dk;

  client_max_body_size 1000m;  # (I tried with and without this line)

  error_log /etc/nginx/log warn;

  location / {
    proxy_pass http://192.168.1.3:80/;
    proxy_set_header Host $host;
  }

  listen 80;
  listen 443 ssl; # managed by Certbot
  ssl_certificate /etc/letsencrypt/live/portal.printlight.dk/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/portal.printlight.dk/privkey.pem; # managed by Certbot
  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}


>
> From: nginx  on behalf of "Jungersen, Danjel -
> Jungersen Grafisk ApS"
> Organization: Jungersen Grafisk ApS
> Reply-To: "nginx@nginx.org" 
> Date: Sunday, 26 August 2018 at 10.13
> To: "nginx@nginx.org" 
> Subject: reverse proxy https not working
>
> Hi there.
>
> I have a setup that almost works.
> :-)
>
> I have a handful of domains that works as they should.
> Traffic as accepted and forwarded to my apache on another server (also in 
> dmz).
> I have setup certificates with certbot.
> I have green (encrypted) icon on my browser when I visit my sites.
>
> 1 site is running on my green network.
> When I connect to that site all seems to work.
> However, certain functions fail, but only when connected via https.
> If I change the setup so that port 80 is not redirected to 443, everything 
> works as long as I
> stay with http.
> As soon as I change the URL to https:// some functions fail.
> I have tried but cannot understand the debug log.
>
> I don't see any hits on my firewall.
>
> Any clues?
> I will be happy to send config and logfiles, but I'm not sure exactly what to 
> send.
>
> Best regards
> Danjel
>


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: reverse proxy https not working

2018-08-26 Thread Lucas Rolff
Which functions do not work?

Be aware that some software (WordPress being a good example) doesn’t always 
work with reverse proxies that easily.

Could you possibly include your nginx configuration? Especially your proxy 
parts.

From: nginx  on behalf of "Jungersen, Danjel - 
Jungersen Grafisk ApS" 
Organization: Jungersen Grafisk ApS
Reply-To: "nginx@nginx.org" 
Date: Sunday, 26 August 2018 at 10.13
To: "nginx@nginx.org" 
Subject: reverse proxy https not working

Hi there.

I have a setup that almost works.
:-)

I have a handful of domains that work as they should.
Traffic is accepted and forwarded to my Apache on another server (also in the 
DMZ).
I have set up certificates with certbot.
I have a green (encrypted) icon in my browser when I visit my sites.

1 site is running on my green network.
When I connect to that site all seems to work.
However, certain functions fail, but only when connected via https.
If I change the setup so that port 80 is not redirected to 443, everything 
works as long as I stay with http.
As soon as I change the URL to https:// some functions fail.
I have tried but cannot understand the debug log.

I don't see any hits on my firewall.

Any clues?
I will be happy to send config and logfiles, but I'm not sure exactly what to 
send.

Best regards
Danjel

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: proxy_cache_background_update leads to 200 ms delay

2018-07-07 Thread Lucas Rolff
It's not a combination of tcp_nopush and proxy_cache_background_update that 
creates this delay.

tcp_nopush (TCP_CORK on Linux) delays sending packets for up to 200 ms, or 
until the packet reaches the defined MTU.

proxy_cache_background_update (if I remember correctly) will do the usual 
checks at the origin to see whether a file changed; since the response to this 
request is (often) smaller than the MTU, you end up having to wait out the 
200 ms delay.

So disabling tcp_nopush also disables the 200ms delay.
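That is, in the relevant http or server block (a sketch; tcp_nopush only takes 
effect when sendfile is enabled):

    sendfile   on;
    tcp_nopush off;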

On 07/07/2018, 14.15, "nginx on behalf of stephan13360" 
 wrote:

Wow, thats it! The delay is gone.

For now I am satisfied that the delay is gone and will read up some more on
tcp_nopush.

For the future: is there any information on why the combination of
tcp_nopush and proxy_cache_background_update creates the delay, and not the
STALE response you get when the backend is down and
proxy_cache_background_update is off?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,280434,280444#msg-280444

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: location blocks, and if conditions in server context

2018-03-08 Thread Lucas Rolff
Hi Francis,

I indeed thought about having a separate server {} block in case there’s the 
http to https redirect for a specific domain.
Since it depends on the domain, I can’t make a general one to match everything.

>Or: you use $sslproxy_protocol. Where does that come from?

$sslproxy_protocol is a simple map doing:

map $https $sslproxy_protocol {
default "http";
SSL "https";
on  "https";
}

Best Regards,
Lucas Rolff

On 08/03/2018, 09.44, "nginx on behalf of Francis Daly" 
<nginx-boun...@nginx.org on behalf of fran...@daoine.org> wrote:

    On Wed, Mar 07, 2018 at 04:55:15PM +, Lucas Rolff wrote:

Hi there,

> This means I have something like:
> 
> 1: location ~* /.well-known
> 2: if condition doing redirect if protocol is http
> 3: location /
> 4: location /api
> 5: location /test
> 
> All my templates include 1 to 3, and *might* have additional locations.

> My issue is – because of this if condition that does the redirect to 
https – it also applies to my location ~* /.well-known – thus causing a 
redirect, and I want to prevent this, since it breaks the Let’s Encrypt 
validation (they do not accept 301 redirects).

> Is there a smart way without adding too much complexity, which is still 
super-fast (I know if is evil) ?

As phrased, I think the short answer to your question is "no".

However...

You optionally redirect things from http to https. Is that "you want
to redirect *everything* from http to https, apart from the letsencrypt
thing"? If so, you could potentially have just one

  server {
listen 80;
location / { return 301 https://$host$uri; }
location /.well-known/ { proxy_pass 
http://letsencrypt.validation.backend.com; }
  }

and a bunch of

  server {
listen 443;
  }

blocks.

Or: you use $sslproxy_protocol. Where does that come from?

If it is a thing that you create to decide whether or not to redirect
to https, then could you include a check for whether the request starts
with /.well-known/, and if so set it to something other than "http"?

f
-- 
Francis Daly        fran...@daoine.org
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: location blocks, and if conditions in server context

2018-03-07 Thread Lucas Rolff
Hi peter,

I generate configs already using a template engine (more specifically Laravel 
Blade), so creating the functionality in the template is easy; however, I 
generally don’t like having server blocks that are hundreds of lines long 
because of repeated settings.

I don’t know the internals of nginx fully, how it uses memory when storing 
configs, but I would assume that inheritance is better than duplication in 
terms of memory usage.

I’m just wondering if there’s a way I can avoid the if condition within the 
location blocks.

- lucas

Get Outlook for iOS<https://aka.ms/o0ukef>

From: nginx <nginx-boun...@nginx.org> on behalf of Peter Booth 
<peter_bo...@me.com>
Sent: Wednesday, March 7, 2018 11:08:40 PM
To: nginx@nginx.org
Subject: Re: location blocks, and if conditions in server context

I agree that avoiding if is a good thing. But avoiding duplication isn’t always 
good.

Have you considered a model where your configuration file is generated with a 
templating engine? The input file that you modify to add/remove/change 
configurations could be free of duplication but the conf file that nginx reads 
could be concrete and verbose

Sent from my iPhone

On Mar 7, 2018, at 11:55, Lucas Rolff 
<lu...@lucasrolff.com> wrote:

Hi guys,

I have a few hundred nginx zones, where I try to remove as much duplicate code 
as possible, and inherit as much as possible to prevent nginx from consuming 
memory (and also to keep things clean).

However I came across something today, that I don’t know how to get my head 
around without duplicating code, even within a single server context.

I have a set of distributed nginx servers; all of these require SSL 
certificates, and I use Let’s Encrypt for this.
When doing the Let’s Encrypt validation, it uses a path such as 
/.well-known/acme-challenge/

For this, I made a location block such as:

location ~* /.well-known {
proxy_pass http://letsencrypt.validation.backend.com$request_uri;
}

Basically, I proxy_pass to the backend where I actually run the acme client – 
works great.

However, I have an option to force a redirect from http to https, and I’ve 
implemented that by doing an if condition on the server block level (so not 
within a location):

if ($sslproxy_protocol = "http") {
return 301 https://$host$request_uri;
}

This means I have something like:

1: location ~* /.well-known
2: if condition doing redirect if protocol is http
3: location /
4: location /api
5: location /test

All my templates include 1 to 3, and *might* have additional locations.
I’ve decided to not put e.g. location /api inside the location / - because 
there are things I don’t want to inherit, thus keeping them at the same “level”, 
and not a location context inside a location context.
Things I don’t want to inherit, is stuff such as headers, max_ranges directive 
etc.

My issue is – because of this if condition that does the redirect to https – it 
also applies to my location ~* /.well-known – thus causing a redirect, and I 
want to prevent this, since it breaks the Let’s Encrypt validation (they do not 
accept 301 redirects).

A solution would be to move the if condition into each location block that I 
want to have redirected, but then I start repeating myself 1, 2 or even 10 
times – which I don’t wanna do.

Is there a smart way without adding too much complexity, which is still 
super-fast (I know if is evil) ?

A config example is seen below:

server {
listen  80;
listen  443 ssl http2;

server_name secure.domain.com;

access_log /var/log/nginx/secure.domain.com main;

location ~* /.well-known {
proxy_pass http://letsencrypt.validation.backend.com$request_uri;
}

if ($sslproxy_protocol = "http") {
return 301 https://$host$request_uri;
}

location / {

expires 10m;
etag off;

proxy_ignore_client_abort   on;
proxy_intercept_errors  on;
proxy_next_upstream error timeout invalid_header;
proxy_ignore_headersSet-Cookie Vary X-Accel-Expires Expires 
Cache-Control;
more_clear_headers  Set-Cookie Cookie Upgrade;

proxy_cache one;
proxy_cache_min_uses1;
proxy_cache_lockoff;
proxy_cache_use_stale   error timeout invalid_header updating 
http_500 http_502 http_503 http_504;

proxy_cache_valid 200   10m;
proxy_cache_valid any   1m;

proxy_cache_revalidate  on;
proxy_ssl_server_name   on;

include /etc/nginx/server.conf;

proxy_set_header Host backend-host.com;

proxy_cache_key"http://backend-host.com-1-$request_uri;;
proxy_pass http

location blocks, and if conditions in server context

2018-03-07 Thread Lucas Rolff
Hi guys,

I have a few hundred nginx zones, where I try to remove as much duplicate code 
as possible, and inherit as much as possible to prevent nginx from consuming 
memory (and also to keep things clean).

However I came across something today, that I don’t know how to get my head 
around without duplicating code, even within a single server context.

I have a set of distributed nginx servers; all of these require SSL 
certificates, and I use Let’s Encrypt for this.
When doing the Let’s Encrypt validation, it uses a path such as 
/.well-known/acme-challenge/

For this, I made a location block such as:

location ~* /.well-known {
proxy_pass http://letsencrypt.validation.backend.com$request_uri;
}

Basically, I proxy_pass to the backend where I actually run the acme client – 
works great.

However, I have an option to force a redirect from http to https, and I’ve 
implemented that by doing an if condition on the server block level (so not 
within a location):

if ($sslproxy_protocol = "http") {
return 301 https://$host$request_uri;
}

This means I have something like:

1: location ~* /.well-known
2: if condition doing redirect if protocol is http
3: location /
4: location /api
5: location /test

All my templates include 1 to 3, and *might* have additional locations.
I’ve decided to not put e.g. location /api inside the location / - because 
there are things I don’t want to inherit, thus keeping them at the same “level”, 
and not a location context inside a location context.
Things I don’t want to inherit, is stuff such as headers, max_ranges directive 
etc.

My issue is – because of this if condition that does the redirect to https – it 
also applies to my location ~* /.well-known – thus causing a redirect, and I 
want to prevent this, since it breaks the Let’s Encrypt validation (they do not 
accept 301 redirects).

A solution would be to move the if condition into each location block that I 
want to have redirected, but then I start repeating myself 1, 2 or even 10 
times – which I don’t wanna do.

Is there a smart way without adding too much complexity, which is still 
super-fast (I know if is evil) ?

A config example is seen below:

server {
listen  80;
listen  443 ssl http2;

server_name secure.domain.com;

access_log /var/log/nginx/secure.domain.com main;

location ~* /.well-known {
proxy_pass http://letsencrypt.validation.backend.com$request_uri;
}

if ($sslproxy_protocol = "http") {
return 301 https://$host$request_uri;
}

location / {

expires 10m;
etag off;

proxy_ignore_client_abort   on;
proxy_intercept_errors  on;
proxy_next_upstream error timeout invalid_header;
proxy_ignore_headersSet-Cookie Vary X-Accel-Expires Expires 
Cache-Control;
more_clear_headers  Set-Cookie Cookie Upgrade;

proxy_cache one;
proxy_cache_min_uses1;
proxy_cache_lockoff;
proxy_cache_use_stale   error timeout invalid_header updating 
http_500 http_502 http_503 http_504;

proxy_cache_valid 200   10m;
proxy_cache_valid any   1m;

proxy_cache_revalidate  on;
proxy_ssl_server_name   on;

include /etc/nginx/server.conf;

proxy_set_header Host backend-host.com;

proxy_cache_key"http://backend-host.com-1-$request_uri;;
proxy_pass http://backend-host.com$request_uri;

proxy_redirect  off;
}
}

Thank you in advance!

Best Regards,
Lucas Rolff
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: What kind of problems will happen to nginx when updated from centos 6 to 7 ?

2018-02-20 Thread Lucas Rolff
You do not update from CentOS 6 to CentOS 7 – you install a new server – so 
you’ll have proper time to perform tests on a new box.

On 21/02/2018, 08.51, "nginx on behalf of mslee"  wrote:

Hello.

I am preparing to migrate my servers from CentOS 6 to 7.
My servers are currently providing web services to people, so there must not
be any problems with nginx when updating from CentOS 6 to 7.
I've been trying to see if there are any such cases, but I haven't seen
anything yet.

If anyone's ever had a problem, let me know.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,278691,278691#msg-278691

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: 2 of 16 cores are constantly maxing out - how to balance the load?

2018-01-11 Thread Lucas Rolff
In high traffic environments it generally makes sense to “dedicate” a core to 
each RX and TX queue you have on the NIC – this way you lower the chances of a 
single core being overloaded by handling network traffic and thus degrading 
performance.

And then, at the same time, within nginx map the individual worker processes to 
the other cores.

So, let’s say you have 8 cores and 1 RX and 1 TX queue:
Core 0: RX queue
Core 1: TX queue
Core 2 to 7: nginx processes

You’d then set nginx to 6 workers (if you’re not running other stuff on the 
box).
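
A rough sketch of the nginx side of this (the affinity masks assume the 8-core 
layout above and are illustrative, not a drop-in config):

    worker_processes    6;
    # one bitmask per worker, rightmost bit = CPU 0; pin the workers to cores 2-7
    worker_cpu_affinity 00000100 00001000 00010000 00100000 01000000 10000000;

The RX/TX interrupts can then be pinned via /proc/irq/<n>/smp_affinity (the IRQ 
numbers below are hypothetical – look yours up in /proc/interrupts):

    echo 1 > /proc/irq/24/smp_affinity   # RX queue -> CPU 0
    echo 2 > /proc/irq/25/smp_affinity   # TX queue -> CPU 1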

Now, in your case with php-fpm in the mix as well, controlling that can be hard 
(I’m not sure if you can pin php-fpm processes to cores) – but for nginx and the 
RX/TX queues, it’s certainly possible.

From: nginx  on behalf of Raffael Vogler 

Reply-To: "nginx@nginx.org" 
Date: Thursday, 11 January 2018 at 11.55
To: "nginx@nginx.org" 
Subject: Re: 2 of 16 cores are constantly maxing out - how to balance the load?

Or would it make sense (if possible at all) to assign two or three more cores 
to networking interrupts?
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: 2 of 16 cores are constantly maxing out - how to balance the load?

2018-01-11 Thread Lucas Rolff
If it’s the same two cores, it might be another process that is pinned to the 
same two cores and thus happens to max them out.
One very likely possibility would be interrupts from e.g. networking. You can 
check /proc/interrupts to see on which cores the network interrupts happen.
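
For example (the interface name eth0 is an assumption – substitute your own):

    grep -E 'CPU|eth0' /proc/interrupts

Each row shows per-CPU counters for one IRQ; if a single CPU column carries 
nearly all of the NIC’s interrupts, that core is doing all the network work.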

From: nginx  on behalf of Raffael Vogler 

Reply-To: "nginx@nginx.org" 
Date: Thursday, 11 January 2018 at 11.14
To: "nginx@nginx.org" 
Subject: 2 of 16 cores are constantly maxing out - how to balance the load?

Hello!

I have nginx with php-fpm running on a 16 core Ubuntu 16.04 instance. The 
server is handling more than 10 million requests per hour.

https://imgur.com/a/iRZ7V

As you can see on the htop screenshot cores 6 and 7 are maxed out and that's 
the case constantly - even after restarting nginx those two cores stay at that 
level.

I wonder why is that so and how to balance the load more evenly?

Also I'm curious to know whether this might indicate a performance relevant 
issue or if it is most likely harmless and just looks odd.

> cat /etc/nginx/nginx.conf | grep -v '^\s*#'



user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Thanks

Raffael
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: NTLM

2018-01-10 Thread Lucas Rolff
It’s only available for nginx-plus

Get Outlook for iOS

From: nginx  on behalf of Otto Kucera 
Sent: Wednesday, January 10, 2018 12:37:49 PM
To: nginx@nginx.org
Subject: NTLM


Hi all,


I am testing ntlm for a reverse proxy scenario.


Info:

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ntlm


this is my config:

upstream http_backend {
    server 127.0.0.1:8080;

    ntlm;
}

server {
    listen 443;
    ...

    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}

I always get this error:

nginx: [emerg] unknown directive "ntlm" in 
/etc/nginx/conf.d/test.conf:4


This is my version:

nginx version: nginx/1.12.2


What am I doing wrong? Since version 1.9.2 this option should be available.


Thanks,

Otto
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

NGINX and RFC7540 (http2) violation

2017-12-28 Thread Lucas Rolff
Hi guys,

I was playing around with nginx and haproxy recently to decide whether to go 
for nginx or haproxy in a specific environment.
One of the requirements was http2 support which both pieces of software support 
(with nginx having supported it for a lot longer than haproxy).

However, one thing I saw, is that according to the http2 specification section 
8.1.2.2 (https://tools.ietf.org/html/rfc7540#section-8.1.2.2 ), HTTP2 does not 
use the Connection header field to indicate connection-specific headers in the 
protocol.

If a client sends a Connection: keep-alive header, the client effectively 
violates the specification, which surely should not happen – but in case a 
client does send the Connection header, the server MUST treat the messages 
containing it as malformed.

I saw that nginx does not enforce this in any way, which causes it to not 
follow the actual specification on this point.

Can I ask why it was decided to implement it to simply “ignore” the fact that a 
client might violate the spec? And is there any plans to make nginx compliant 
with the current http2 specification?

I’ve found that both Firefox and Safari violate this very specific section, 
and they get away with it because servers implementing the http2 specification 
allowed them to do so, effectively causing the specification not to be followed.

Thanks in advance.

Best Regards,
Lucas Rolff
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx optimal speed in limit_rate for video streams

2017-11-16 Thread Lucas Rolff
That depends on the bitrate of your movies as well – it will be hard to play 4K 
videos on 1 megabit/s
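
Note that limit_rate is specified in bytes per second, so sizing it means 
converting from the video bitrate. A minimal sketch (the numbers are 
illustrative assumptions, not recommendations):

    location /video/ {
        mp4;
        # send the first 4 MB at full speed so playback starts quickly
        limit_rate_after 4m;
        # a ~16 Mbit/s stream needs >= 2 MB/s to avoid stalling
        limit_rate 2m;
    }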

Get Outlook for iOS

From: nginx  on behalf of c0nw0nk 

Sent: Thursday, November 16, 2017 9:26:06 PM
To: nginx@nginx.org
Subject: Nginx optimal speed in limit_rate for video streams

So when dealing with mp4 and similar video streams, what is the best speed to
send / transfer files to people that does not cause latency delays / lagging
on the video?

My current :

location /video/ {
mp4;
limit_rate_after 1m;
limit_rate 1m;
}


On other sites when I download / watch videos, it seems they transfer files
at speeds of 200k/s

Should I lower my rates ?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,277352,277352#msg-277352

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx Listen directive with reuseport; SO_REUSEPORT

2017-10-23 Thread Lucas Rolff
What’s high traffic? At a previous employer we used it across the 
infrastructure, and I’d say it’s fairly high traffic (100s of gigabit of 
traffic).

On 23/10/2017, 21.15, "nginx on behalf of c0nw0nk"  wrote:

So on each server you can add to your listen directive.


listen 8181 default bind reuseport;


Cloudflare use it and posted in on their blog and github here (benchmark
stats included)

GitHub :

https://github.com/cloudflare/cloudflare-blog/tree/master/2017-10-accept-balancing
Cloudflare Blog :
https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/


I question whether, in a high-traffic production environment, using
"reuseport" will be beneficial, or if it is best to just leave it out as it is
by default anyway.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,277041,277041#msg-277041

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Reverse cache not working on start pages (solution found)

2017-10-12 Thread Lucas Rolff
If your server gets hacked due to a single website, you have bigger problems, 
and mod_security won’t fix the issue.
Consult with security professionals or give the task of managing your 
infrastructure to someone that can properly secure the environment.

On 12/10/2017, 13.26, "nginx on behalf of Dingo"  wrote:

You are right. I didn't know what canonical URLs were, but now I know. Yes
there is in fact two servers. One server is running Apache with a website
that has maybe 10 different DNS-domains pointing to it and then there is
another server running IIS with lots of websites but usually only one
DNS-domain pointing to each of them. The IIS server has a control panel
software that enables customers to add both websites and DNS-records, so I
don't want to change the configuration in my nginx proxy every time someone
adds or changes something on that server, so there needs to be a bit of
compromising.

I have very limited knowledge about how to configure and protect webservers
and the reason all this is happening now, is that the IIS server has been
hacked due to an old wordpress vulnerability in a plugin called revslider,
so I have had to do things in a bit of a hurry. When I installed nginx i
didn't know that it was revslider, so nginx didn't fix the problem, so the
server got hacked once again. I have now installed modsecurity, which seems
to have stopped the problem.

I am seriously considering using nginx plus, but it's not entirely my
decision and my colleagues are already upset over all cost surrounding the
web-servers at the moment.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276670,276836#msg-276836

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx



Re: Directive inheritance

2017-10-06 Thread Lucas Rolff

Hi Francis,

Thank you a lot for your response - from a directive point of view - I 
don't use a lot of different headers in that sense, it's really just 
some settings that I would want to avoid repeating again and again - I 
like clean configs - and generally speaking I really want to inherit as 
much as possible from the initial server, or even http context when 
possible - all I change usually can be things like the expires header, 
the proxy_cache_valid directive, or adding an additional header (CORS 
for example).


I do use some of the openresty modules such as the ngx_headers_more 
module, and it's pretty explicit about it's inheritance.


And thank you for the pointer regarding the _module_ctx and 
_merge_loc_conf functions, it gave me enough information, taking the 
http_proxy module as an example - it seems that as long as there is an 
"offsetof(ngx_http_proxy_loc_conf_t, ...)" the directive can be inherited, 
or it's a coincidence that the "offsetof" is missing for all directives that 
don't inherit in that module, from the top of my head.


Thanks again!


Francis Daly wrote:

On Fri, Oct 06, 2017 at 07:32:51PM +, Lucas Rolff wrote:

Hi there,


I know that there’s some settings such as proxy_pass which can’t inherit from 
the parent location or server block, however – is there any semi-easy way to 
figure out if a directive in nginx or it’s modules gets inherited or not? (I 
don’t mind digging around in some nginx source code)



I wonder if someone either knows a good way to figure out, or any document on 
the web that goes extensively into explaining what (might) inherit based on 
general design patterns.


My quick response, without doing too much research, is:

* "rewrite" module directives (if, return) don't inherit
* "handler" directives (proxy_pass, fastcgi_pass) don't inherit
* pretty much anything else that is valid in "location" does inherit

(That's probably not correct, but could be a good starting point for
experimentation.)

And be aware that inheritance is by replacement, or not at all -- so one
"add_header" in a location means that the only "add_header" relevant
to that location is the one that is there; while no "add_header" in a
location means that all of the ones inherited from server{} are relevant.
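
A minimal sketch of that replacement behaviour (the header names are just 
placeholders):

    server {
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-Content-Type-Options "nosniff";

        location /a/ {
            # no add_header here: both server-level headers are sent
        }

        location /b/ {
            # any add_header here discards the inherited set;
            # only X-Debug is sent for /b/
            add_header X-Debug "1";
        }
    }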


If you want the full details, it's a matter of Read The Fine Source --
each module has a "_module_ctx" which includes a function typically named
"_merge_loc_conf" which shows how each directive is set if it is not
defined in this location: unset, set to a default value, or inherited
from the previous level.

f


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Directive inheritance

2017-10-06 Thread Lucas Rolff
Hi guys,

I do a lot of nginx configuration which contains plenty of “location” blocks, 
however – I often see myself duplicating a lot of directives throughout my 
configuration which can sadly make a single nginx server block about 400 lines 
long, often due to repeated settings.

Not only is it a mess with big files (at least they’re generated 
automatically), but I also have the feeling I waste some memory if I keep 
redefining the settings again and again (or is nginx smart enough to 
“deduplicate” settings whenever possible?)

My configs usually look something like

server {

    location / {
        # sendfile, client_body_buffer_size, proxy_* settings, add_header repeated

        location ~* \.(?:htm|html)$ {
            # sendfile, client_body_buffer_size, proxy_* settings, add_header repeated
        }

        location ~* \.(?:manifest|appcache|xml|json)$ {
            # sendfile, client_body_buffer_size, proxy_* settings, add_header repeated
        }
    }
}

I know that there’s some settings such as proxy_pass which can’t inherit from 
the parent location or server block, however – is there any semi-easy way to 
figure out if a directive in nginx or it’s modules gets inherited or not? (I 
don’t mind digging around in some nginx source code)

I could try to remove a bunch of directives from the lower location directives 
and see if things still work, however it would be very time consuming.

Reading the docs of nginx and its directives, *sometimes* the docs say whether 
a directive gets inherited or not, but it’s not always complete – take sendfile, 
for example: as far as I know it gets inherited, but the docs don’t say so.

The directives I mostly use are things like:

proxy_*
sendfile
client_body_buffer_size
add_header
expires (these differ for each location block I have)

I wonder if someone either knows a good way to figure out, or any document on 
the web that goes extensively into explaining what (might) inherit based on 
general design patterns.

Also, if anyone can either confirm or deny whether duplicating the directives 
actually increases memory usage – because if it has next to no additional 
resource usage, then I could save some time.

The amount of zones/server blocks are currently small, but I’d like to be able 
to scale it to thousands on fairly common hardware.

Best Regards,

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Using request URI path to store cached files instead of MD5 hash-based path

2017-10-06 Thread Lucas Rolff
Hi,

> Is it possible to change this behaviour through configuration to cache the 
> files using the request URI path itself, say, under the host-name directory 
> under the proxy_cache_path.

No, it’s not possible to do that with proxy_cache, you can however do it with 
proxy_store ( 
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_store ).
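
A minimal sketch of the proxy_store approach (the paths and origin name are 
assumptions; note that proxy_store gives you a plain mirrored file tree, but no 
expiry or eviction):

    location / {
        proxy_pass         http://origin.example.com;
        proxy_store        /tmp/cache1/$host$uri;
        proxy_store_access user:rw group:rw all:r;
    }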

> I think such a direct way of defining cached file paths would help in finding 
> or locating specific content in cache

You can already find cached file paths by calculating the md5 hash yourself, 
it’s rather easy.
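
For example, assuming proxy_cache_path /tmp/cache1 levels=1:2 and a cache key of 
"http://example.com/file.mp4" (both assumptions), the on-disk path can be 
derived like this:

    KEY="http://example.com/file.mp4"
    HASH=$(echo -n "$KEY" | md5sum | cut -d' ' -f1)
    # levels=1:2 -> last character of the md5, then the two characters before it
    echo "/tmp/cache1/${HASH: -1}/${HASH: -3:2}/${HASH}"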

> Also, it would be helpful in purging content from cache, even using 
> wild-carded expressions. 

You can easily purge the cache, just not wildcard expressions, for that you’d 
need the plus version of nginx.

> However, I seem to be missing the key benefit of why files are stored based 
> on MD5 hash based paths

One of the benefits I can think of is the fact that you only deal with [a-z0-9] 
characters; using ASCII characters ensures compatibility with every system, and 
it’s lightweight since you only have to deal with a small set of characters. If 
you used $request_uri as the filename, for example, you’d have to deal with 
UTF-8 or similar, which makes lookups a lot heavier and can cause compatibility 
issues with characters – and since $request_uri includes query strings as well, 
you’d end up with very weird filenames.

At the same time, it wouldn’t surprise me that it’s a lot more efficient for nginx 
to have a consistent filename length when indexing data, you know that every 
file on the filesystem will be 32 characters long, you know exactly how much 
memory each file takes in memory, and you wouldn’t run into the problem where 
people have a request uri of a few hundred or even thousands of characters and 
possibly 10s or 100s of sub-directories.

I’m pretty sure that nginx decided to use an md5 hash due to a lot of benefits 
over storing it as proxy_store currently does. Maybe Maxim or someone else with 
extensive knowledge about the codebase and its design decisions can share 
briefly why.

Best Regards,
Lucas Rolff

On 05/10/2017, 13.29, "nginx on behalf of rnmx18" <nginx-boun...@nginx.org on 
behalf of nginx-fo...@forum.nginx.org> wrote:

Hi,

If proxy caching is enabled, NGINX is saving the files under subdirectories
of the proxy_cache_path, based on the MD5 hash of the cache-key and the
levels parameter value.

Is it possible to change this behaviour through configuration to cache the
files using the request URI path itself, say, under the host-name directory
under the proxy_cache_path.

For example, if the proxy_cache_path is /tmp/cache1 and the request is
http://www.example.com/movies/file1.mp4, then can the file get cached as
/tmp/cache1/www.example.com/movies/file1.mp4

I think such a direct way of defining cached file paths would help in
finding or locating specific content in cache. Also, it would be helpful in
purging content from cache, even using wild-carded expressions. 

However, I seem to be missing the key benefit of why files are stored based
on MD5 hash based paths.

Could someone explain the reason for using MD5 hash based file paths? 

Also, with vanilla-NGINX, if there is no configurable way to use direct
request URI paths, is there any external module which could help me to get
this functionality?

Thanks
Rajesh

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276700,276700#msg-276700

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: limit_conn is dropping valid connections and causing memory leaks on nginx reload

2017-09-30 Thread Lucas Rolff
Anoop,

He added a "v" prefix and double quotes around $binary_remote_addr.

Best Regards,

From: nginx  on behalf of Anoop Alias 

Reply-To: "nginx@nginx.org" 
Date: Saturday, 30 September 2017 at 12.14
To: Nginx 
Subject: Re: limit_conn is dropping valid connections and causing memory leaks 
on nginx reload

What is the change (workaround) you made ?I don't see a difference?

On Sat, Sep 30, 2017 at 3:35 PM, Dejan Grofelnik Pelzel wrote:
Hello,

We are running the nginx 1.13.5  with HTTP/2 in a proxy_pass proxy_cache
configuration with clients having relatively long open connections. Our
system does automatic reloads for any new configuration and we recently
introduced a limit_conn to some of the config files. After that, I've
started noticing a rapid drop in connections and outgoing network every-time
the system would perform a configuration reload. Even stranger, on every
reload the memory usage would go up for about 1-2GB until ultimately
everything crashed if the reloads were too frequent. The memory usage did go
down after old workers were released, but that could take up to 30 minutes,
while the configuration could get reloaded up to twice per minute.

We used the following configuration as recommended by pretty much any
example:
limit_conn_zone $binary_remote_addr zone=1234con:10m;
limit_conn zone1234con 10;

I was able to verify the connection drop by doing a simple ab test, for
example, I would run ab -c 100 -n -k 1000 https://127.0.0.1/file.bin
990 of the connections went through, however, 10 would still be active.
Immediately after the reload, those would get dropped as well. Adding -r
option would help the problem, but that doesn't fix our problem.

Finally, after I tried to create a workaround, I've configured the limit
zone to:
limit_conn_zone "v$binary_remote_addr" zone=1234con:10m;

Suddenly everything magically started to work. The connections were not
being dropped, the limit worked as expected and even more surprisingly the
memory usage was not going up anymore. I've been tearing my hair out almost
all day yesterday trying to figure this out. While I was very happy to see
this resolved, I am now confused as to why nginx behaves in such a way.

I'm thinking this might likely be a bug, so I'm just wondering if anyone
could explain why it is happening or has a similar problem.

Thank you!

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276633,276633#msg-276633

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx



--
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Can the cacheloader process stay alive and keep rebuilding or updating the cache metadata?

2017-09-29 Thread Lucas Rolff
> It would help in a use-case when there are 2 NGINX processes, both 
working with the same cache directory.


Why would you want 2 nginx processes to use the same cache directory? 
Explain your situation, what's your end-goal, etc.


If it's to minimize the amount of origin requests, you can build 
multiple layers of cache (fast and slow storage if you want), use load 
balancing mechanisms such as URI-based balancing to spread the cache 
across multiple servers, and maybe use some of the special flags for 
balancing, so even if a machine goes down it wouldn't cause a full shift 
of data.


I'm sure that regardless of what your goal is - someone here will be 
able to suggest a (better) and already supported solution.


rnmx18 wrote:

It would help in a use-case when there are 2 NGINX processes, both working
with the same cache directory.

NGINX-A runs with a proxy-cache-path /disk1/cache with zone name "cacheA".

NGINX-B runs with the same proxy-cache-path /disk1/cache with zone name
"cacheB".

When NGINX-B adds content to the cache (say for URL test/a.html), the file
gets added to cache as /disk/cache1/test/a.html (again, avoiding md5 for
simplicity).

I think it may be nice if a subsequent request for this URL to NGINX-A would
result in a hit, as the file is available in the disk. However, today it
does not result in a HIT, as the in-memory metadata is missing for NGINX-A
for this URL. So, it would fetch from origin and add it again to cache, and
update its in-memory metadata.

Otherwise, a restart of NGINX-A would build up the cache metadata for files
found in the cache directory.

Thanks
Rajesh

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276624,276627#msg-276627

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx




Re: Can the cacheloader process stay alive and keep rebuilding or updating the cache metadata?

2017-09-29 Thread Lucas Rolff
I can’t think of any scenario where you’d want that – care to explain why you’d like 
this behaviour?

On 29/09/2017, 22.28, "nginx on behalf of rnmx18"  wrote:

Hi,

As I understand, during startup, the cache loader process scans the files in
the defined proxy-cache-path directories, and builds up the
in-memory-metadata. Once the metadata is built-up, the cache loader process
exits.

Is there any mechanism by which, this cache loader process can be made to
stay alive, so that it can maybe periodically rebuild/update the in-memory
metadata by monitoring files in the corresponding directory?

Thanks
Rajesh

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276624,276624#msg-276624

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx



Re: Any method for NGINX (as a web-server) to skip metadata and serve content from cached file?

2017-09-29 Thread Lucas Rolff
> In this model, only the originally cached file at /disk1/cache can be served 
> properly by the NGINX-proxy. 

You can balance disk usage using split_clients module in nginx, and use 
different proxy_cache’s (e.g. /disk1/cache, /disk2/cache and so on) as 
described in https://www.nginx.com/blog/nginx-caching-guide/ (Splitting the 
cache across multiple hard drives).
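
A minimal sketch of that split-cache pattern (disk paths, zone names and sizes 
are assumptions):

    proxy_cache_path /disk1/cache levels=1:2 keys_zone=cache_hdd1:50m max_size=500g inactive=30d;
    proxy_cache_path /disk2/cache levels=1:2 keys_zone=cache_hdd2:50m max_size=500g inactive=30d;

    split_clients $request_uri $my_cache {
        50% "cache_hdd1";
        *   "cache_hdd2";
    }

    server {
        location / {
            proxy_cache $my_cache;
            proxy_pass  http://origin.example.com;
        }
    }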


On 29/09/2017, 21.31, "nginx on behalf of rnmx18"  wrote:

Hi Lucas,

As long as the cached files (with the metadata at the beginning) reside in
the directory specified with the proxy_cache_path directive, they are fine.
The NGINX-proxy, which added them there in the first place, can serve the
content correctly, after skipping the right amount of metadata bytes.

In my case, I have a background application, which might copy the cached
file to an alternate location. For example, a file movies/welcome.mp4 may be
originally cached by the NGINX-proxy on /disk1/cache as
/disk1/cache/movies/welcome.mp4  (For the moment, let us forget the
md5-based cache path for simplicity).  In my use-case, the file may either
stay there itself, or sometimes, my application may copy it to another
location  - say /disk2/cache/movies/welcome.mp4 or say,
/disk3/cache/movies/welcome.mp4, say for some kind of disk usage balancing..
The application exposes /disk1/pubroot/cache/movies/welcome.mp4 as the
published location.

In this model, only the originally cached file at /disk1/cache can be served
properly by the NGINX-proxy. 

The files in any of the other locations cannot be served by NGINX properly.
It cannot serve as a server, as the copied file contains metadata. Even if I
have other proxy-cache-paths defined for the alternate locations
(/disk2/cache or /disk3/cache), the NGINX-proxy also cannot serve them as
the corresponding in-memory metadata will not have the entries for these
files. The files in those paths, would have been physically copied "under
the hood" by another process, and not NGINX.

Thanks
Rajesh

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276612,276621#msg-276621

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx



Re: Any method for NGINX (as a web-server) to skip metadata and serve content from cached file?

2017-09-29 Thread Lucas Rolff
Can I ask, what’s the problem with having the metadata in the files?

On 29/09/2017, 18.33, "nginx on behalf of rnmx18"  wrote:

Hi Reinis,

Thank you for that pointer to proxy_store directive.

I understand that this would be a useful option for static files. However,
my application currently cannot handle aspects like expiry, revalidation,
eviction etc. So, I guess I will not be able to use the proxy_store
directive for the type of content which I need NGINX to act as a proxy for.

Regards
Rajesh

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276612,276618#msg-276618

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx



Re: Scaling nginx caching storage

2017-09-23 Thread Lucas Rolff
> if one node had the storage capacity to satisfy my needs it couldn't handle 
> all the requests

What amount of requests / traffic are we talking about, and which kind of 
hardware do you use?
You can make nginx serve 20+ gigabit of traffic from a single machine if the 
content is right, or 50k+ req/s

> But at this point i'm beginning to think if it's even worth it . Should i 
> settle for having multiple nginx nodes requesting the same item to our 
> upstream server ?

If you’re offloading 99.xx% of the content to nginx anyway, a few extra 
requests to the upstream shouldn’t really matter much.
You could even have multiple layers of nginx to lower the amount of upstream 
connections going to the server – so on your let’s say 10 nginx instances, you 
could use 1-2 nginx instances as upstream, and on those 1-2 nginx instances use 
the actual upstream.
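
A rough sketch of that two-tier idea (hostnames and zone names are assumptions): 
the ~10 edge instances cache locally and send misses to a small "shield" tier, 
which keeps its own cache and is the only layer talking to the real origin.

    # on each edge instance
    upstream shield_tier {
        hash $request_uri consistent;   # the same URI always hits the same shield node
        server shield1.example.com;
        server shield2.example.com;
    }

    server {
        location / {
            proxy_cache edge_cache;
            proxy_pass  http://shield_tier;
        }
    }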

Generally speaking you’ll have downsides with sharing storage or cache between 
multiple servers; it just adds a lot of complexity to minimize the cost, and 
then it might turn out you do not actually save anything anyway.

Best Regards,
Lucas

From: nginx <nginx-boun...@nginx.org> on behalf of Amir Keshavarz 
<amirk...@gmail.com>
Reply-To: "nginx@nginx.org" <nginx@nginx.org>
Date: Saturday, 23 September 2017 at 11.48
To: "nginx@nginx.org" <nginx@nginx.org>
Subject: Re: Scaling nginx caching storage

Sorry for the confusion.
My problem is that I need to cache items as much as possible, so even if one 
node had the storage capacity to satisfy my needs, it couldn't handle all the 
requests – and we can't afford multiple nginx nodes requesting the same item 
from our main server each time an item is requested on a different nginx node.

For that problem I have a few scenarios, but they either have huge overhead on 
our servers and our network, or are not suitable for a sensitive production 
environment because they cause weird problems (sharing storage).

But at this point I'm beginning to wonder if it's even worth it. Should I 
settle for having multiple nginx nodes requesting the same item from our 
upstream server?


On Sat, Sep 23, 2017 at 1:48 PM, Lucas Rolff 
<lu...@lucasrolff.com> wrote:
> is there any way to share a cache directory between two nginx instances ?
> If it can't be done what do you think is the best way to go when we need to 
> scale the nginx caching storage ?

One is about using same storage for two nginx instances, the other one is 
scaling the nginx cache storage.
I believe it’s two different things.

There’s nothing that prevents you from having two nginx instances reading from 
the same cache storage – however you will get into scenarios where if you try 
to write from both machines (Let’s say it tries to cache the same file on both 
nginx instances), you might have some issues.

Why exactly would you need two instances to share the same storage?
And what scale do you mean by scaling the nginx caching storage?

Currently there’s really only a limit to your disk size and the size of your 
keys_zone – if you have 50 terabytes of storage, just set the keys_zone size to 
be big enough to contain the amount of files you wanna manage (you can store 
about 8000 files per 1 megabyte).



From: nginx <nginx-boun...@nginx.org> on behalf 
of Amir Keshavarz <amirk...@gmail.com>
Reply-To: "nginx@nginx.org" <nginx@nginx.org>
Date: Saturday, 23 September 2017 at 10.58
To: "nginx@nginx.org" <nginx@nginx.org>
Subject: Scaling nginx caching storage

Hello,
Since nginx stores some cache metadata in memory , is there any way to share a 
cache directory between two nginx instances ?

If it can't be done what do you think is the best way to go when we need to 
scale the nginx caching storage ?

Thanks


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Scaling nginx caching storage

2017-09-23 Thread Lucas Rolff
> is there any way to share a cache directory between two nginx instances ?
> If it can't be done what do you think is the best way to go when we need to 
> scale the nginx caching storage ?

One is about using same storage for two nginx instances, the other one is 
scaling the nginx cache storage.
I believe it’s two different things.

There’s nothing that prevents you from having two nginx instances reading from 
the same cache storage – however you will get into scenarios where if you try 
to write from both machines (Let’s say it tries to cache the same file on both 
nginx instances), you might have some issues.

Why exactly would you need two instances to share the same storage?
And what scale do you mean by scaling the nginx caching storage?

Currently there’s really only a limit to your disk size and the size of your 
keys_zone – if you have 50 terabytes of storage, just set the keys_zone size to 
be big enough to contain the number of files you want to manage (you can store 
about 8000 files per 1 megabyte of keys_zone, so e.g. a 40 gigabyte keys_zone 
can track roughly 320 million files).



From: nginx  on behalf of Amir Keshavarz 

Reply-To: "nginx@nginx.org" 
Date: Saturday, 23 September 2017 at 10.58
To: "nginx@nginx.org" 
Subject: Scaling nginx caching storage

Hello,
Since nginx stores some cache metadata in memory , is there any way to share a 
cache directory between two nginx instances ?

If it can't be done what do you think is the best way to go when we need to 
scale the nginx caching storage ?

Thanks
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: nginx cache growing well above max_size threshold

2017-09-14 Thread Lucas Rolff
Okay cool, I'll give it a try

In our case we do not run http2 on the machines since haproxy runs in front as 
well (which doesn't support http2)

I'll also try to enable a bit more verbose logging on one of the machines to see 
what the logs say

Thanks a lot Maxim!

Best regards,
Lucas Rolff

Get Outlook for iOS<https://aka.ms/o0ukef>

From: nginx <nginx-boun...@nginx.org> on behalf of Maxim Dounin 
<mdou...@mdounin.ru>
Sent: Thursday, September 14, 2017 6:55:57 PM
To: nginx@nginx.org
Subject: Re: nginx cache growing well above max_size threshold

Hello!

On Thu, Sep 14, 2017 at 04:34:09PM +, Lucas Rolff wrote:

> I have a minor question, so I have an nginx box using
> proxy_cache, it has a key zone of 40 gigabyte (so it can cache
> 320 million files), a max_size of 1500 gigabyte for the cache
> and the inactive set to 30 days.
>
> However we experience that nginx goes well above the defined
> limit - in our case the max size is 1500 gigabyte, but the cache
> directory grows well above 1700 gigabyte.
>
> There's a total of 42.000.000 files currently on the system,
> meaning the average filesize is about 43 kilobyte.
>
> Normally I know that nginx can go slightly above the limit,
> until the cache manager purges the files, but it stays at about
> 1700 gigabyte constantly unless we manually clear out the size.
>
> I see there's a change in 1.13.1 that ignores long locked cache
> entries, is it possible that this bugfix actually fixes above
> issue?
>
> Upgrading is rather time consuming and we have to ensure nginx
> versions across the platform, so I wonder if anyone has some
> pointers if the above bugfix would maybe solve our issue.
> (currently the custom nginx version is based on nginx 1.10.3).

https://trac.nginx.org/nginx/ticket/1163

TL;DR:

This behaviour indicate there is a problem somewhere, likely
socket leaks or process crashes.  Reports suggests it might be
related to HTTP/2.  The change in 1.13.1 don't fix the root cause,
but will allow nginx to keep cache under max_size regardless of
the problem.

--
Maxim Dounin
http://nginx.org/
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

nginx cache growing well above max_size threshold

2017-09-14 Thread Lucas Rolff
Hi guys,


I have a minor question, so I have an nginx box using proxy_cache, it has a key 
zone of 40 gigabyte (so it can cache 320 million files), a max_size of 1500 
gigabyte for the cache and the inactive set to 30 days.


However we experience that nginx goes well above the defined limit - in our 
case the max size is 1500 gigabyte, but the cache directory grows well 
above 1700 gigabyte.

There's a total of 42.000.000 files currently on the system, meaning the 
average filesize is about 43 kilobyte.


Normally I know that nginx can go slightly above the limit, until the cache 
manager purges the files, but it stays at about 1700 gigabyte constantly unless 
we manually clear out the size.


I see there's a change in 1.13.1 that ignores long locked cache entries, is it 
possible that this bugfix actually fixes above issue?

Upgrading is rather time consuming and we have to ensure nginx versions across 
the platform, so I wonder if anyone has some pointers if the above bugfix would 
maybe solve our issue. (currently the custom nginx version is based on nginx 
1.10.3).


Best Regards,
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: MP4 module with pseudo streaming + proxy_cache

2017-09-12 Thread Lucas Rolff

> is it too much to ask for nginx to implement this

It depends on what you want to get implemented.

You can just have a location block in nginx handling mp4 and then use 
the slice module as Roman already mentioned; this will cause the initial 
chunk (which contains the MOOV atom) to be loaded pretty quickly even 
for big files, and thus enable pseudo streaming rather quickly (if the 
mp4 is not encoded with the MOOV atom at the end, which happens in so 
many cases).


The only problem you'll have is invalidating the cache of a file if 
you use the slice module, since you basically have to calculate every 
cache entry that you want to remove from the cache (the range starts at 0 
and increments by the number of bytes you've set as the slice size); the 
only thing making it hard is the very last slice, since this will be 
equal to or less than your slice size.


So you can do it already out of the box using the slice module.
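
A minimal sketch of that setup (origin name, zone name and slice size are 
assumptions):

    location /video/ {
        slice              1m;
        proxy_cache        one;
        proxy_cache_key    $uri$is_args$args$slice_range;
        proxy_set_header   Range $slice_range;
        proxy_http_version 1.1;   # ranged subrequests use the Range header upstream
        proxy_cache_valid  200 206 1h;
        proxy_pass         http://origin.example.com;
    }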

Now, sure it would be nice for the mp4 module to support pseudo 
streaming for files that are not yet in the cache - this however 
requires nginx to be aware of where to seek in a file that is not yet on 
the filesystem - it can be done, but I don't think it's super pretty.



tbs wrote:

is it too much to ask for nginx to implement this feature if others can do
it via their own developers?

I couldn't find developers that are familiar with this; I looked already.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,276322,276334#msg-276334

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx




Re: Too many connections in waiting state

2017-09-07 Thread Lucas Rolff
Check if any of the sites you run on the server get crawled by any crawlers 
around the time you see an increase – I know that a crawler such as Screaming 
Frog doesn’t handle servers that are capable of http2 connections and have it 
activated for the sites being crawled, which will result in connections 
with a “waiting” state in nginx.

It might be there’s other tools that behave the same way, but I’d personally 
look into what kind of traffic/requests happened that increased the waiting 
state a lot.

Best Regards,

From: nginx  on behalf of Anoop Alias 

Reply-To: "nginx@nginx.org" 
Date: Thursday, 7 September 2017 at 11.52
To: Nginx 
Subject: Too many connections in waiting state

Hi,

I see sometimes too many waiting connections on nginx .

This often gets cleared on a restart , but otherwise pileup

###
Active connections: 4930
server accepts handled requests
 442071 442071 584163
Reading: 2 Writing: 539 Waiting: 4420
###
[root@web1 ~]# grep keep /etc/nginx/conf.d/http_settings_custom.conf
keepalive_timeout   10s;
keepalive_requests  200;
keepalive_disable   msie6 safari;


[root@web1 ~]# nginx -V
nginx version: nginx/1.13.3
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
built with LibreSSL 2.5.5
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx 
--modules-path=/etc/nginx/modules --with-pcre=./pcre-8.41 --with-pcre-jit 
--with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.5 
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log 
--http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid 
--lock-path=/var/run/nginx.lock 
--http-client-body-temp-path=/var/cache/nginx/client_temp 
--http-proxy-temp-path=/var/cache/nginx/proxy_temp 
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp 
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp 
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody --group=nobody 
--with-http_ssl_module --with-http_realip_module --with-http_addition_module 
--with-http_sub_module --with-http_dav_module --with-http_flv_module 
--with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module 
--with-http_random_index_module --with-http_secure_link_module 
--with-http_stub_status_module --with-http_auth_request_module 
--add-dynamic-module=naxsi-http2/naxsi_src --with-file-aio --with-threads 
--with-stream --with-stream_ssl_module --with-http_slice_module --with-compat 
--with-http_v2_module --with-http_geoip_module=dynamic 
--add-dynamic-module=ngx_pagespeed-1.12.34.2-stable 
--add-dynamic-module=/usr/local/rvm/gems/ruby-2.4.1/gems/passenger-5.1.8/src/nginx_module
 --add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.60 
--add-dynamic-module=headers-more-nginx-module-0.32 
--add-dynamic-module=ngx_http_redis-0.3.8 
--add-dynamic-module=redis2-nginx-module 
--add-dynamic-module=srcache-nginx-module-0.31 
--add-dynamic-module=ngx_devel_kit-0.3.0 
--add-dynamic-module=set-misc-nginx-module-0.31 
--add-dynamic-module=testcookie-nginx-module 
--add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 
--with-ld-opt=-Wl,-E
###


What could be causing this? The server is quite capable and this happens only 
rarely


--
Anoop P Alias

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Get rid of args from $request_uri

2017-08-08 Thread Lucas Rolff
I use the set_misc module from openresty and do something like:

if ($request_uri ~ "([^/?]*)(?:\?|$)") {
  # capture the (still percent-encoded) filename portion of the URI
  set $double_encoded_uri $1;
}
set_unescape_uri $encoded_uri $double_encoded_uri;

Can probably be improved, but I can use $encoded_uri and get the result you’re 
looking for, c0nw0nk.



From: nginx  on behalf of Zhang Chao 

Reply-To: "nginx@nginx.org" 
Date: Tuesday, 8 August 2017 at 16.07
To: "nginx@nginx.org" 
Subject: Re: Get rid of args from $request_uri





On 8 August 2017 at 22:02:32, chilly_bang 
(nginx-fo...@forum.nginx.org) wrote:
c0nw0nk Wrote:
---
> why don't you use
>
> $uri

Is it not so, that $uri will output an encoded url?

$uri is always decoded once, with slashes merged (if you enable it).
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: NGINX stale-while-revalidate cluster

2017-07-07 Thread Lucas Rolff
Instead of doing round robin load balancing why not do a URI based load 
balancing? Then you ensure your cached file is only present on a single machine 
behind the load balancer.

Sure, there will be moments where this is not the case – let's assume a box 
goes down and traffic shifts; in that case I'd, as a "post task", take the 
window from when the machine went down until it came online again, find all 
requests that expired in the meantime, and flush those entries to ensure they 
are updated on the machine that had been down.

It will still require some work, but at least over time your "overhead" should 
be less.
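
If the balancing layer were nginx instead of ELB, the URI-based balancing 
mentioned above could look like this (a sketch; hostnames are assumptions):

    upstream cache_nodes {
        hash $request_uri consistent;
        server web1.example.com;
        server web2.example.com;
    }

With consistent hashing, only a fraction of the cached keys move to other nodes 
when a server is added or removed.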

From:  nginx  on behalf of Joan Tomàs i Buliart 

Reply-To:  
Date:  Friday, 7 July 2017 at 11.52
To:  
Subject:  NGINX stale-while-revalidate cluster


 

Hi,
 
 We are implementing a stale-while-revalidate webserver cluster with NGINX. 
 
 We are using the new proxy_cache_background_update to answer request as soon 
as possible while NGINX updates the content from the origin in the background. 
This solution works perfectly when the requests for the same object are served 
by the same NGINX server (when we have only one server or when we have a 
previous load balancer that classifies the requests). 
 
 In our scenario we have a round robin load balancer (ELB) and we need to scale 
the webservers layer. So, as a consequence, only the Nginx that receives the 
request updates the cache content while the others keep the old version. This 
means that, we can send old versions of content due to the content not being 
updated on all the webservers. The problem accentuates when we put a CDN in 
front of the webservers.
 
 We are thinking of developing something so that, once an Nginx instance updates 
its cache, it would let all the other instances know to fetch a copy of the newest 
content. We are thinking about processing NGINX logs and, when it detects a 
MISS, EXPIRED or UPDATING cache status, it makes a HEAD request to the other 
NGINXs on the cluster to force the invalidation of this content.
 
 


 Do any of you have dealt with this problem or a similar one?
 
 
 

We have also tried the post_action but it is blocking the client request until 
it completes. It is not clear for us which would be the best approach. The 
options that we are considering are:
 
 - NGINX module
 - LUA script
 - External script that process syslog entries from NGINX
 
 What would be your recommendation?
 
 
 Many thanks in advance,
 
 
  
-- 
Joan Tomàs-Buliart
+34 931 785 950
www.marfeel.com
 
 

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Measuring nginx's efficiency

2017-06-29 Thread Lucas Rolff
> Well, this php-engine is built into apache itself

Just because Apache has a built-in PHP handler such as mod_php (loaded as a DSO) 
doesn't mean it's actually used to serve static files (I can tell you that the 
php engine is never hit if you serve static files)

> Anyway, considering only this fact, such a bad apache configuration should 
> not be significantly slower than that of nginx?
> Which ones?

Things like avoiding .htaccess and using mpm_event instead of prefork or worker 
will both increase performance and decrease memory usage

> And how exactly can I measure this?

Benchmark
Change config
 Repeat
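
For example, an illustrative wrk run against a single static URL (the URL and numbers are made up):

    wrk -t4 -c100 -d30s http://your-server/static/test.html

Change one thing in the config, run the exact same command again, and compare requests/sec and the latency percentiles.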

> Right now we have a pretty capable dedicated server which costs ca. 40Euro 
> per month and is an overkill for our needs.

True - but it's good to know what your stack is capable of doing for 
capacity planning, and to see when you should scale up your infrastructure 
- personally I optimize my environments even if I have plenty of resources, 
because I like being able to handle unexpected spikes in traffic

> Do you think I should stress a production server?

It's not up to me, or anyone else, to decide - we do not know how your 
application works and what it does - some people might be able to benchmark a 
server in production, others might not - it's a case-by-case thing in my opinion.
Just be aware of the consequences of benchmarking/stress testing, such as 
increased server load, increased response times and possible downtime in case 
you push it too hard.

I've personally done it plenty of times, but I do it in a controlled way and 
I'm fully aware of what can possibly go wrong.

Best Regards,






On 29/06/2017, 20.47, "nginx on behalf of ST"  wrote:

>> If your current apache configuration serves static files via the php engine, 
>> then you're doing something very wrong.
>Well, this php-engine is built into apache itself... Anyway, considering only 
>this fact, such a bad apache
>configuration should not be significantly slower than that of nginx?
>
>> You might or might not see any speed gain depending on your apache 
>> configuration, but you should see a big difference in the amount of 
>> resources used to serve traffic.
>Which ones? And how exactly can I measure this? This also might be a
>good point to convince my boss to switch...
>
>> As Valentin mentioned, it's about scalability majority of the time - and 
>> that in itself will decrease your costs in hardware or resources that is 
>> required to be able to serve your static traffic, and I'm sure whomever you 
>> have to prove to, why you should switch from Apache to nginx, would love to 
>> see that the cost of running your current setup might decrease to some or to 
>> huge extend.
>Right now we have a pretty capable dedicated server which costs ca.
>40Euro per month and is an overkill for our needs. So for now resources
>is not an issue that much...
>
>> 
>> If you run wrk as suggested below, you will get a bunch of useful data that 
>> will help you chose whichever software solution is the best to use.
>
>Do you think I should stress a production server?
>
>Thank you!
>
>
>> 
>> 
>> 
>> On 29/06/2017, 19.38, "nginx on behalf of ST" > behalf of smn...@gmail.com> wrote:
>> 
>> >On Thu, 2017-06-29 at 16:16 +0300, Valentin V. Bartenev wrote:
>> >> On Thursday 29 June 2017 15:32:21 ST wrote:
>> >> > On Thu, 2017-06-29 at 15:09 +0300, Valentin V. Bartenev wrote:
>> >> > > On Thursday 29 June 2017 14:00:37 ST wrote:
>> >> > > > Hello,
>> >> > > > 
>> >> > > > with your help I managed to configure nginx and our website now can 
>> >> > > > be
>> >> > > > accessed both - through apache and nginx.
>> >> > > > 
>> >> > > > Now, how can I prove to my boss that nginx is more efficient than 
>> >> > > > apache
>> >> > > > to switch to it? How do I measure its performance and compare it to 
>> >> > > > that
>> >> > > > of apache? Which tools would you recommend?
>> >> > > > 
>> >> > > > Thank you in advance!
>> >> > > > 
>> >> > > 
>> >> > > I suggest wrk.
>> >> > > 
>> >> > > https://github.com/wg/wrk
>> >> > > 
>> >> > 
>> >> > Should I stress our production system with this tool? Our system blocks
>> >> > users that make to many requests in a given amount of time...
>> >> > Also, how do I prove that static content is now served faster?
>> >> > 
>> >> > Thank you.
>> >> > 
>> >> 
>> >> Switching from Apache to nginx usually isn't about speed, but about 
>> >> scalability.
>> >> It's all about how many users/connections you can serve from the same 
>> >> hardware.
>> >> 
>> >
>> >Shouldn't it be also about speed, at least for static content, that no
>> >longer needs to be served through php-engine? And thus overall loading
>> >speed should be higher?
>> >
>> >___
>> >nginx mailing list
>> >nginx@nginx.org
>> >http://mailman.nginx.org/mailman/listinfo/nginx
>> ___
>> 

Re: Measuring nginx's efficiency

2017-06-29 Thread Lucas Rolff
If your current apache configuration serves static files via the php engine, 
then you're doing something very wrong.

You might or might not see any speed gain depending on your apache 
configuration, but you should see a big difference in the amount of resources 
used to serve traffic.
As Valentin mentioned, it's about scalability the majority of the time - and that 
in itself will decrease your costs in hardware or resources that are required to 
be able to serve your static traffic, and I'm sure whomever you have to prove 
to why you should switch from Apache to nginx would love to see that the cost 
of running your current setup might decrease to some - or to a huge - extent.

If you run wrk as suggested below, you will get a bunch of useful data that 
will help you choose whichever software solution is the best to use.



On 29/06/2017, 19.38, "nginx on behalf of ST"  wrote:

>On Thu, 2017-06-29 at 16:16 +0300, Valentin V. Bartenev wrote:
>> On Thursday 29 June 2017 15:32:21 ST wrote:
>> > On Thu, 2017-06-29 at 15:09 +0300, Valentin V. Bartenev wrote:
>> > > On Thursday 29 June 2017 14:00:37 ST wrote:
>> > > > Hello,
>> > > > 
>> > > > with your help I managed to configure nginx and our website now can be
>> > > > accessed both - through apache and nginx.
>> > > > 
>> > > > Now, how can I prove to my boss that nginx is more efficient than 
>> > > > apache
>> > > > to switch to it? How do I measure its performance and compare it to 
>> > > > that
>> > > > of apache? Which tools would you recommend?
>> > > > 
>> > > > Thank you in advance!
>> > > > 
>> > > 
>> > > I suggest wrk.
>> > > 
>> > > https://github.com/wg/wrk
>> > > 
>> > 
>> > Should I stress our production system with this tool? Our system blocks
>> > users that make to many requests in a given amount of time...
>> > Also, how do I prove that static content is now served faster?
>> > 
>> > Thank you.
>> > 
>> 
>> Switching from Apache to nginx usually isn't about speed, but about 
>> scalability.
>> It's all about how many users/connections you can serve from the same 
>> hardware.
>> 
>
>Shouldn't it be also about speed, at least for static content, that no
>longer needs to be served through php-engine? And thus overall loading
>speed should be higher?
>
>___
>nginx mailing list
>nginx@nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: [nginx logging module]$Request_time almost show 0.000 with proxy cache configuration

2017-06-22 Thread Lucas Rolff
> - a cache hit means that the resource should also be in the linux 
page cache - so no physical disk read needed.


That's a very wrong assumption to make, and one that only makes sense in very 
small-scale setups - multiple terabytes of memory aren't exactly 
cheap; that's why we have SSD storage to handle such things, and it 
would still be a "HIT" even if it's not in memory.




Peter Booth wrote:

This might not be a bug at all. Remember that when nginx logs request
time it's doing so with millisecond precision. This is very, very 
coarse-grained when you consider what modern hardware is capable of.
The Tech Empower benchmarks show that an (openresty) nginx on
a quad-socket host can serve more than 800,000 dynamic lua requests 
per second. We should expect

static resources served from the nginx cache to be faster than this.

Remember:
 - a cache hit means that the resource should also be in the linux 
page cache - so no physical disk read needed.
- writing a small png file from memory to the network (on 10G 
ethernet) could take a few microseconds. Depending on NIC IRQ 
coalescing settings this might be as much as 60/70 microseconds.

- reading the time (gettimeofday()) will itself take about 30 nanoseconds.

These are all intervals that are too small to be visible at the 1ms 
granularity of the request_time logging.


My experience has been that very busy webservers running on even five 
year old hardware
will consistently log 0ms  request time for cache hits. If I saw 
anything different I'd be wondering

what was wrong with the environment.

Peter

On Jun 22, 2017, at 05:53 AM, jindov  wrote:


Hi guys,

I've configured nginx to cache static files like jpeg|png. The problem 
is that a request with MISS status shows a non-zero request_time, 
but a HIT request shows a request_time of 0.000.
Is this an nginx bug, and is there any way to resolve it?

My log format

```
log_format cache '$remote_addr - [$time_local] $upstream_cache_status $upstream_addr '
                 '"$request" $status $body_bytes_sent $request_time ["$upstream_response_time"] "$http_referer" '
                 '"$http_user_agent" "$host" "$server_port" "$connection"';
```

I read a topic about this but it wasn't very informative. I've tried to set
timer_resolution to 0ms but nothing changed

Thanks

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,275053,275053#msg-275053


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Separate logs within the same server for different server names?

2017-06-15 Thread Lucas Rolff
http://nginx.org/en/docs/http/ngx_http_log_module.html


"The file path can contain variables (0.7.6+), but such logs have some 
constraints"

So yes, you can use things such as $host - but there will be a performance 
penalty.
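
A minimal sketch, assuming the per-host directories already exist and are writable by the nginx worker user:

    server {
        server_name one.org two.org;
        access_log /var/log/nginx/$host/access.log combined;
    }

The penalty is that with variables in the path, nginx opens and closes the log file for every write (open_log_file_cache can soften this). Note that error_log does not support variables at all, so the error-log half of the question can't be solved this way.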



On 15/06/2017, 13.02, "nginx on behalf of ST"  wrote:

>Hello,
>
>is it possible somehow to define separate logs within the same server{}
>for different server names (server_name one.org two.org;)?
>
>access_log /var/log/nginx$server_name/access.log;
>error_log /var/log/nginx$server_name/error.log;
>
>Thank you!
>
>___
>nginx mailing list
>nginx@nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Serve index.html file if exists try_files + proxy_pass?

2017-05-01 Thread Lucas Rolff

Hi Francis,

Thanks for your reply.

A little about what I'm doing/trying to make work.

I use Minio (https://www.minio.io/) - which is an S3 compatible object 
storage server - it's a simple Go binary that takes a single argument: 
the directory in which you want to store your buckets and files.


In my case, I've created a user called minio, with a homedir of /home/minio
I've configured nginx to run under the user "minio" as well, to ensure 
correct permissions.


Minio by default listens to 0.0.0.0:9000, and to simplify working with 
SSL certificates, I ended up putting nginx on the same machine, and all 
it does is basically a proxy_pass on localhost:9000


When I access https://minio.box.com// Minio will generate an XML 
containing a list of objects within a specific bucket (as per S3 API 
standards).


Example: https://gist.github.com/lucasRolff/7a0afb95103f6c93d8bc448f5c1c35f4

Since I do not want to expose this bucket object list, I want to do so 
if a bucket has the file "index.html" that it will serve this, instead 
of showing the bucket object list.


Minio runs from /home/minio, with /home/minio being the 
storage directory.

This means that when I create a bucket called "storage", the directory
/home/minio/storage will be created - within the storage directory, 
objects will be placed as if they were normal files, so if I decide to upload 
index.html, I will be able to find the exact file, with that name, 
at the path /home/minio/storage/index.html


Now on nginx, if I have the domain https://minio.box.com/storage/ - what 
I want to load is /home/minio/storage/index.html if the file exists, 
else load the bucket object list


If I access https://minio.box.com/images/ - it should look for the file 
/home/minio/images/index.html and serve if existing else load the bucket 
object list (basically, just proxy_pass as normal).


Any other request I do such as 
https://minio.box.com/images/nginx-rocks.png should go to my upstream 
server (localhost:9000)


> I also suspect that the correct solution is to just configure the 
upstream http server to serve the contents of index.html if it exists


If I could, I would have done that, but it returns a bucket object list 
as defined in the S3 API standard.


nginx itself can have a root /home/minio; defined - and the 'bucket' is 
just an actual folder on the file-system, with normal files.


The only problem I have is to serve index.html from within the current 
'bucket', so /images/ would load /home/minio/images/index.html


If I do try_files index.html @upstream;

Then try_files will base it on the root directive defined; in this case 
it would try to look for /home/minio/index.html if I set the root directive 
to "/home/minio", correct?


I guess I could use try_files "${uri}index.html" @upstream; which would 
produce something like /home/minio/storage/index.html if you have 
/storage/ as the URI, but if the URI is /storage/image1.png it would try to 
look for "/home/minio/storage/image1.pngindex.html", and for me that 
doesn't seem very efficient, since it would have to stat for a file on 
the file system on every request before actually going to my upstream.


I could maybe do:

location / {
  location ~ /$ {
    try_files "${uri}index.html" @upstream;
  }

  # continue normal code here
}

location @upstream {
  proxy_pass http://127.0.0.1:9000;
}

I'm not sure if the above made it more clear.

Best Regards,
Lucas R


Francis Daly wrote:

On Sun, Apr 30, 2017 at 10:44:21AM +, Lucas Rolff wrote:

Hi there,


I have a small scenario where I have a backend (s3 compatible storage), which 
by default generates a directory listing overview of the files stored.
I want to be able to serve an "index.html" file if the file exists, else just 
proxy_pass as normally.


I think that it will be very useful to be utterly clear on the distinction
between a file and a url here. If you can describe what you want to happen
in exact terms, there is a much better chance that the configuration
you want will be clear.

A file is a thing available on the local nginx filesystem. Its full name
will be some thing like /usr/local/nginx/html/one/two.

A url is a thing available by proxy_pass:ing to a http server. (That's
good enough for these purposes.) Its full name will be something like
http://upstream/one/two.

(The http server on upstream may have a direct mapping between urls it
receives and files it knows about; that's because those files are on
upstream's local filesystem. Similarly, nginx receives requests which
are urls, and it may map them to files or to other urls. This can get
confusing. That's why it is useful to be explicit.)


https://gist.github.com/lucasRolff/c7ea13305e9bff40eb6729246cd7eb39

My nginx config for some reason doesn't work – or maybe it's because I 
misunderstand how try_files actually works.


try_files checks for the existence of a file. In the common ca

Re: Http proxy module

2017-04-29 Thread Lucas Rolff
You shouldn't pass --with-http_proxy_module at all - the proxy module is built in by default, and there is only a --without-http_proxy_module option to disable it.


From: nginx > on behalf 
of Roman Pastushkov via nginx >
Reply-To: "nginx@nginx.org" 
>
Date: Saturday, 29 April 2017 at 08.57
To: "nginx@nginx.org" 
>
Cc: Roman Pastushkov >
Subject: Http proxy module

Hello everyone! I am trying to build nginx with the proxy module from sources:

./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx \
  --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log \
  --http-client-body-temp-path=/var/lib/nginx/tmp/client_body \
  --http-proxy-temp-path=/var/lib/nginx/tmp/proxy \
  --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi \
  --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi \
  --http-scgi-temp-path=/var/lib/nginx/tmp/scgi \
  --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx \
  --user=nginx --group=nginx --with-ipv6 --with-http_ssl_module \
  --with-http_v2_module --with-http_realip_module --with-http_addition_module \
  --with-http_sub_module --with-http_dav_module --with-http_flv_module \
  --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module \
  --with-http_random_index_module --with-http_secure_link_module \
  --with-http_degradation_module --with-http_slice_module \
  --with-http_stub_status_module --with-http_perl_module=dynamic \
  --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit \
  --with-stream=dynamic --with-stream_ssl_module --with-http_proxy_module \
  --with-debug --with-cc-opt='-O3'


And it outputs: ./configure: error: invalid option "--with-http_proxy_module"

I tried version 1.13, and 1.12 too - what's wrong?

Roman Pastushkov
xnucleargemi...@aol.com
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: URL-Rewriting not working

2017-04-09 Thread Lucas Rolff
In general try to avoid using the if directive too much.
https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/

For what you're trying to do, using a map would be the cleanest (and nicest) 
way I believe – someone can correct me if they want :-D
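
A minimal sketch of the map approach, reusing the ports from this thread (the fallback port is hypothetical and should point at something that returns a friendly error):

    map $remote_user $per_user_port {
        default  2099;   # hypothetical fallback backend
        user1    2001;
        user2    2002;
    }

    server {
        listen 2000 ssl;
        ...
        location / {
            auth_basic 'Restricted';
            auth_basic_user_file /etc/nginx/ssl/.htpasswd;
            proxy_pass https://127.0.0.1:$per_user_port;
        }
    }

The map block lives at http{} level; since the proxy_pass host is a literal IP, no resolver is needed even though the port is a variable.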

From: nginx > on behalf 
of Ajay Garg >
Reply-To: "nginx@nginx.org" 
>
Date: Sunday, 9 April 2017 at 17.37
To: "nginx@nginx.org" 
>
Subject: Re: URL-Rewriting not working

Hi Francis.

On Sun, Apr 9, 2017 at 8:47 PM, Francis Daly 
> wrote:
On Sun, Apr 09, 2017 at 06:36:51PM +0530, Ajay Garg wrote:

Hi there,

> Got it Francis !!

Good news.

> location / {
> auth_basic 'Restricted';
> auth_basic_user_file
> /home/20da689b45c84f2b80bc84d651ed573f/.htpasswd;
>
> if ($remote_user =
> "20da689b45c84f2b80bc84d651ed573f") {
> proxy_pass
> https://127.0.0.1:2000;
> }
>
> }

When you come to add the second user, you will see that you want one
file with all the user/pass details.


Yes, I have already changed it to use just one file.
Given that, would not just multiple "if" checks for $remote_user 
suffice, something like:

#
server {
listen 2000 ssl;

ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;

location / {
auth_basic 'Restricted';
auth_basic_user_file 
/etc/nginx/ssl/.htpasswd;

if ($remote_user =  "user1") {
proxy_pass 
https://127.0.0.1:2001;
}

if ($remote_user =  "user2") {
proxy_pass 
https://127.0.0.1:2002;
}

   # and so on 

}
 }
#

Looking forward to hearing back from you.


Thanks and Regards,
Ajay




You will probably also see that it will be good to use a map
(http://nginx.org/r/map) to set a variable for the port to connect to,
based on $remote_user. Then your main config becomes just "proxy_pass
http://127.0.0.1:$per_user_port;".

Note that I have not tested that, and expect that there may be some more
subtleties involved, such as perhaps requiring an explicit proxy_redirect
directive.

Note also that you will probably want to set a default value for
$per_user_port, and make sure that something sensible happens when that
value is used -- probably a response along the lines of "something isn't
fully set up on the server yet; please wait or let us know", so the user
is not confused.

Good luck with it,

f
--
Francis Daly  fran...@daoine.org
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx



--
Regards,
Ajay
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Mechanism to avoid restarting nginx upon every change

2017-04-09 Thread Lucas Rolff
Hi Ajay,

If you generate the configuration and issue an nginx reload – it won't cause 
any downtime. The master process will reread the configuration, start new 
workers, and gracefully shut down the old ones.
There's absolutely no downtime involved in this process.
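
For reference, the usual sequence (assuming the nginx binary is in $PATH):

    nginx -t && nginx -s reload

Testing with -t first means a broken generated config never reaches the running master.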


From: nginx > on behalf 
of Ajay Garg >
Reply-To: "nginx@nginx.org" 
>
Date: Sunday, 9 April 2017 at 15.55
To: "nginx@nginx.org" 
>
Subject: Mechanism to avoid restarting nginx upon every change

Hi All.

We are wanting to implement a solution, wherein the user gets proxied to the 
appropriate local-url, depending upon the credentials.
Following architecture works like a charm (thanks a ton 
tofran...@daoine.org, without whom I would not have 
been able to reach here) ::


server {
listen 2000 ssl;

ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;

location / {
auth_basic 'Restricted';
auth_basic_user_file 
/etc/nginx/ssl/.htpasswd;

if ($remote_user =  "user1") {
proxy_pass 
https://127.0.0.1:2001;
}

if ($remote_user =  "user2") {
proxy_pass 
https://127.0.0.1:2002;
}

   # and so on 

}
 }



Things are good, except that adding any new user information requires 
reloading/restarting the nginx server, causing (however small) downtime.

Can this be avoided?
Can the above be implemented using some sort of database, so that the nginx 
itself does not have to be down, and the "remote_user <=> proxy_pass" mapping 
can be retrieved from a database instead?

Will be grateful for pointers.


Thanks and Regards,
Ajay
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Memory issue

2017-04-06 Thread Lucas Rolff
> cpanel stat generation cause thet nginx makes a lot of reload to grab new 
> file descriptor... no issue on that

Even though this is off-topic - if you issue a lot of reloads during cPanel 
stat generation, your hooks are configured wrong, since Apache in cPanel only 
reloads *once* during the whole process.

> Issue is nginx, I show you situation now with 2.58% used for 1 work, which is 
> same value for others, but gloably, nginx uses now 2.58%, this number is 
> increasing slowly at the rythm of nginx reloads asked..

I happen to use nginx (mainline version) myself on a cPanel server - no custom 
modules, there's no memory leak in the latest versions of nginx - I stay 
happily at 0.2% in memory (32 gigabyte server)

> Anoop is dev of Xtendweb stack using nginx core, he is investigating, and it 
> seems his solution if to use a pre-approved nginx core : 
> https://openresty.org/en/

Using OpenResty wouldn't solve your issues - you can use the exact same modules 
as OpenResty does in a normal nginx build (be aware that some modules, such as 
the lua module and the echo-nginx module, aren't yet building correctly against 
nginx 1.11.11+)

In the end, include only the extra modules you require - because as far as I 
can tell, in the mainline version of nginx, nothing (at least for me) has 
caused memory issues with workers, even when reloading a bunch of times.

It might very well be that one of the 3rd-party modules have not been fully 
tested to work with 1.11.13







On 06/04/2017, 17.32, "nginx on behalf of JohnCarne"  wrote:

>cpanel stat generation cause thet nginx makes a lot of reload to grab new
>file descriptor... no issue on that
>
>Issue is nginx, I show you situation now with 2.58% used for 1 work, which
>is same value for others, but gloably, nginx uses now 2.58%, this number is
>increasing slowly at the rythm of nginx reloads asked...
>
>
>[root@web1 ~]# ps alx | grep nginx
>599  711213  913692  20   0 3917936 3397132 ep_pol S ?  0:22
>nginx: worker process
>599  711224  913692  20   0 3918128 3397300 ep_pol S ?  0:24
>nginx: worker process
>599  711229  913692  20   0 3918392 3397456 ep_pol S ?  0:26
>nginx: worker process
>599  711238  913692  20   0 3918128 3397228 ep_pol S ?  0:20
>nginx: worker process
>599  711245  913692  20   0 3917936 3397144 ep_pol S ?  0:23
>nginx: worker process
>599  711248  913692  20   0 3918096 3397296 ep_pol S ?  0:18
>nginx: worker process
>599  711252  913692  20   0 3918392 3397392 ep_pol S ?  0:21
>nginx: worker process
>599  711255  913692  20   0 3918128 3397132 -   R?  0:19
>nginx: worker process
>599  711257  913692  20   0 3917580 3394632 ep_pol S ?  0:00
>nginx: cache manager process
>0 0  767011  766950  20   0 112652   956 pipe_w S+   pts/2  0:00
>grep --color=auto nginx
>5 0  913692   1  20   0 3917576 3396176 sigsus Ss ?74:26
>nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
>
>
>Output of nginx -T is taking 100's of pages, i can't pu it here...
>
>[root@web1 ~]# nginx -V
>nginx version: nginx/1.11.13
>built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
>built with LibreSSL 2.5.2
>TLS SNI support enabled
>configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
>--modules-path=/etc/nginx/modules --with-pcre=./pcre-8.40 --with-pcre-jit
>--with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.5.2
>--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log
>--http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid
>--lock-path=/var/run/nginx.lock
>--http-client-body-temp-path=/var/cache/nginx/client_temp
>--http-proxy-temp-path=/var/cache/nginx/proxy_temp
>--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
>--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
>--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nobody
>--group=nobody --with-http_ssl_module --with-http_realip_module
>--with-http_addition_module --with-http_sub_module --with-http_dav_module
>--with-http_flv_module --with-http_mp4_module --with-http_gunzip_module
>--with-http_gzip_static_module --with-http_random_index_module
>--with-http_secure_link_module --with-http_stub_status_module
>--with-http_auth_request_module --add-dynamic-module=naxsi-http2/naxsi_src
>--with-file-aio --with-threads --with-stream --with-stream_ssl_module
>--with-http_slice_module --with-compat --with-http_v2_module
>--with-http_geoip_module=dynamic
>--add-dynamic-module=ngx_pagespeed-release-1.11.33.4-beta
>--add-dynamic-module=/usr/local/rvm/gems/ruby-2.3.1/gems/passenger-5.1.2/src/nginx_module
>--add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.60
>--add-dynamic-module=headers-more-nginx-module-0.32
>--add-dynamic-module=ngx_http_redis-0.3.8
>--add-dynamic-module=redis2-nginx-module

Re: Binary upgrade with systemd

2017-04-04 Thread Lucas Rolff
According to the documentation: 
http://nginx.org/en/docs/control.html#upgrade


You'd have to send the QUIT signal to finish off upgrading (replacing) 
the binary during runtime.


Marc Soda wrote:

I sent WINCH to the old master.  In this case 32277.

After sending WINCH, I can send QUIT to the old master and it exits. 
 Everything looks fine at that point.  But it seems a little odd to 
have to do this.


On Apr 4, 2017, at 4:43 AM, Lucas Rolff <lu...@lucasrolff.com 
<mailto:lu...@lucasrolff.com>> wrote:


Hello Marc,

For which PID do you send the WINCH signal?


From: nginx <nginx-boun...@nginx.org 
<mailto:nginx-boun...@nginx.org>> on behalf of Marc Soda 
<m...@soda.fm <mailto:m...@soda.fm>>
Reply-To: "nginx@nginx.org <mailto:nginx@nginx.org>" <nginx@nginx.org 
<mailto:nginx@nginx.org>>

Date: Tuesday, 4 April 2017 at 04.04
To: "nginx@nginx.org <mailto:nginx@nginx.org>" <nginx@nginx.org 
<mailto:nginx@nginx.org>>

Subject: Binary upgrade with systemd

Hello,

I’m using nginx 1.10.3 custom built on Ubuntu 16.04.  I’m also
using the recommended systemd service file:

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

I'm trying to do a no-downtime upgrade with the USR2 and WINCH
signals.  Here is my process list before:

root 32277  0.0  0.4 1056672 71148 ?   Ss   21:51   0:00
nginx: master process /usr/local/nginx/sbin/nginx
www  32278  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32279  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32280  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32281  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32282  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32283  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32288  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32289  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32290  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32291  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32292  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32293  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32294  0.0  0.4 1056672 72212 ?   S21:51   0:00
 \_ nginx: cache manager process

and here it is after sending USR2:

root 32277  0.0  0.4 1056672 71868 ?   Ss   21:51   0:00
nginx: master process /usr/local/nginx/sbin/nginx
www  32278  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32279  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32280  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32281  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32282  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32283  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32288  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32289  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32290  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32291  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32292  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32293  0.0  0.4 1057924 73152 ?   S<   21:51   0:00
 \_ nginx: worker process
www  32294  0.0  0.4 1056672 72212 ?   S21:51   0:00
 \_ nginx: cache manager process
root 32461  5.5  0.5 1056676 82316 ?   S22:01   0:00
 \_ nginx: master process /usr/local/nginx/sbin/nginx
www  32465  0.0  0.4 1057928 73052 ?   S<   22:01   0:00
 \_ nginx: worker process
www  32466  0.0  0.4 1057928 73052 ?   S<   22:01   0:00
 \_ nginx: worker pr

Re: Binary upgrade with systemd

2017-04-04 Thread Lucas Rolff
Hello Marc,

For which PID do you send the WINCH signal?


From: nginx > on behalf 
of Marc Soda >
Reply-To: "nginx@nginx.org" 
>
Date: Tuesday, 4 April 2017 at 04.04
To: "nginx@nginx.org" 
>
Subject: Binary upgrade with systemd

Hello,

I’m using nginx 1.10.3 custom built on Ubuntu 16.04.  I’m also using the 
recommended systemd service file:

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

I'm trying to do a no-downtime upgrade with the USR2 and WINCH signals.  Here is 
my process list before:

root 32277  0.0  0.4 1056672 71148 ?   Ss   21:51   0:00 nginx: master 
process /usr/local/nginx/sbin/nginx
www  32278  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32279  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32280  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32281  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32282  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32283  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32288  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32289  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32290  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32291  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32292  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32293  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32294  0.0  0.4 1056672 72212 ?   S21:51   0:00  \_ nginx: 
cache manager process

and here it is after sending USR2:

root 32277  0.0  0.4 1056672 71868 ?   Ss   21:51   0:00 nginx: master 
process /usr/local/nginx/sbin/nginx
www  32278  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32279  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32280  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32281  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32282  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32283  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32288  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32289  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32290  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32291  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32292  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32293  0.0  0.4 1057924 73152 ?   S<   21:51   0:00  \_ nginx: 
worker process
www  32294  0.0  0.4 1056672 72212 ?   S21:51   0:00  \_ nginx: 
cache manager process
root 32461  5.5  0.5 1056676 82316 ?   S22:01   0:00  \_ nginx: 
master process /usr/local/nginx/sbin/nginx
www  32465  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32466  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32467  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32468  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32469  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32470  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32471  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32472  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32473  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32474  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32475  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32476  0.0  0.4 1057928 73052 ?   S<   22:01   0:00  \_ nginx: 
worker process
www  32477  0.0  

Re: echo-nginx-module and 1.11.12

2017-03-26 Thread Lucas Rolff
When the pull request here gets merged into master, then it should work 
on 1.11.11 (and 1.11.12):


https://github.com/openresty/echo-nginx-module/pull/65

A. Schulze wrote:

Am 01.02.2016 um 23:53 schrieb Yichun Zhang (agentzh):

Hello!

On Fri, Jan 29, 2016 at 8:40 PM, Kurt Cancemi wrote:

I was doing some debugging and though I haven't found a fix. The problem is
in the ngx_http_echo_client_request_headers_variable() function c->buffer is
NULL when http v2 is used for some reason (internal to nginx).


This is expected since the HTTP/2 mode of NGINX reads the request
header into a different place. We should branch the code accordingly.

Regards,
-agentzh


Hello,

unfortunately the module fails to compile on 1.11.12,
while it compiled successfully up to 1.11.10

cc -c -g -O2 -fdebug-prefix-map=/<>=. -fstack-protector-strong -Wformat 
-Werror=format-security -g -O2 -fdebug-prefix-map=/<>=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2  -I src/core 
-I src/event -I src/event/modules -I src/os/unix -I /usr/include -I objs -I src/http -I 
src/http/modules -I src/http/v2 -I src/stream \
 -o objs/addon/src/ngx_http_echo_request_info.o \
 ./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c
./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c: In function 
'ngx_http_echo_client_request_headers_variable':
./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c:219:15: error: 
incompatible types when assigning to type 'ngx_buf_t * {aka struct ngx_buf_s 
*}' from type 'ngx_chain_t {aka struct ngx_chain_s}'
  b = hc->busy[i];
^
./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c:284:15: error: 
incompatible types when assigning to type 'ngx_buf_t * {aka struct ngx_buf_s 
*}' from type 'ngx_chain_t {aka struct ngx_chain_s}'
  b = hc->busy[i];
^
objs/Makefile:1523: recipe for target 
'objs/addon/src/ngx_http_echo_request_info.o' failed

I guess, something changed from 1.11.10 to 1.11.12 ...

Andreas
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Change target host in proxy_pass

2017-03-17 Thread Lucas Rolff
You can proxy_set_header Host – that should override whatever is defined in 
proxy_pass
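
A sketch using a map with a regex capture, assuming nginx 1.11.0+ (for text-plus-variable map values) and a reachable resolver (its address is illustrative):

    map $host $backend_host {
        ~^(?<app>.+)\.aa\.com$  $app.bb.com;
        default                 $host;
    }

    server {
        resolver 127.0.0.1;   # required: proxy_pass with a variable resolves the name at runtime
        location / {
            proxy_set_header Host $backend_host;
            proxy_pass http://$backend_host;
        }
    }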

From: nginx > on behalf 
of Tomasz Kapek >
Reply-To: "nginx@nginx.org" 
>
Date: Friday, 17 March 2017 at 12.12
To: "nginx@nginx.org" 
>
Subject: Change target host in proxy_pass

Hello,
I have NGINX acting as reverse proxy and I would like to achieve something like 
this:

When I get a request like this GET http://app1.mydomain.aa.com/aaa/bbb it 
should be converted to:
GET http://app1.mydomain.bb.com/aaa/bbb so such directive will do the job:

proxy_pass http://app1.mydomain.bb.com;
The problem is that I want to convert the host part automatically (regex), based on 
the incoming requests to NGINX - the app1.mydomain parts are not fixed; they change 
very often.
Is it possible? Can anyone give a clue how the proxy_pass statement should look?
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: slow https performance compared to http

2016-11-13 Thread Lucas Rolff

Because you have the TLS handshake that has to be done, which is CPU-bound.

Try changing things like ssl_ciphers (to something faster), and use 
ssl_session_cache.
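
A minimal sketch (sizes are illustrative; one megabyte of cache holds roughly 4000 sessions):

    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1h;

That lets returning clients resume sessions instead of paying for a full handshake every time.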

--
Best Regards,
Lucas Rolff


gigihc11 wrote:

Hi, I have:
nginx 1.11.3
Ubuntu 16.04.1 LTS
openssl 1.0.2g-1ubuntu4.5  amd64
libssl1.0.0:amd64 1.0.2g-1ubuntu4.5
weak CPU: N3150
16 GB RAM

with this test-setup:
open_file_cache max=1000 inactive=360s;
open_file_cache_valid 30s;

I test on running ab command on the same host as nginx is.
For a 1.6KB text file I get 4600 req/s with http and 550 req/sec with https.
I get the same using or not gzip encoding.

Why is there this huge performance difference?

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,270898,270898#msg-270898

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Blocking tens of thousands of IP's

2016-11-01 Thread Lucas Rolff
You could very well do a small ipset together with iptables - it's fast, 
and you don't have to reload for every subnet/IP you add.

Doing it within nginx is rather... yeah.
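
A sketch of the ipset approach (set name and subnet are illustrative):

    ipset create blocklist hash:net
    ipset add blocklist 203.0.113.0/24
    iptables -I INPUT -m set --match-set blocklist src -j DROP

Lookups are hash-based, so the single iptables rule costs about the same with ten entries as with tens of thousands, and adding/removing entries never touches the rule itself.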

--
Best Regards,
Lucas Rolff


Cox, Eric S wrote:
Random blocks, certain durations, etc. It's very random and/or short-lived, 
which is something we don't want to move to the firewall at the 
moment


-Original Message-
*From:* Jeff Dyke [jeff.d...@gmail.com]
*Received:* Tuesday, 01 Nov 2016, 5:46PM
*To:* nginx@nginx.org [nginx@nginx.org]
*Subject:* Re: Blocking tens of thousands of IP's

What is your firewall? That is the place to block subnets etc. I 
assume they are not random IPs; they are likely from a block owned by 
someone?


On Tue, Nov 1, 2016 at 5:37 PM, CJ Ess <zxcvbn4...@gmail.com 
<mailto:zxcvbn4...@gmail.com>> wrote:


I don't think managing large lists of IPs is nginx's strength - as
far as I can tell all of its ACLs are arrays that have to be
iterated through on each request.

When I do have to manage IP lists in Nginx I try to compress the
lists into the most compact CIDR representation so there is less
to search. Here is a perl snippet I use to do that (handles ipv4
and ipv6):

#!/usr/bin/perl

use NetAddr::IP;

# read the whitespace-separated list of IPs/subnets from stdin or a file argument
my $list_of_ips = do { local $/; <> };

my @addresses;

foreach my $subnet (split(/\s+/, $list_of_ips)) {
  push(@addresses, NetAddr::IP->new($subnet));
}

foreach my $cidr (NetAddr::IP::compact(@addresses)) {
  if ($cidr->version == 4) {
    print $cidr . "\n";
  } else {
    print $cidr->short() . "/" . $cidr->masklen() . "\n";
  }
}


On Tue, Nov 1, 2016 at 11:15 AM, Cox, Eric S <eric@kroger.com
<mailto:eric@kroger.com>> wrote:

Is anyone aware of a performance-wise difference between using

return 403;

vs

deny all;

when mapping against a list of tens of thousands of IPs?

Thanks




This e-mail message, including any attachments, is for the
sole use of the intended recipient(s) and may contain
information that is confidential and protected by law from
unauthorized disclosure. Any unauthorized review, use,
disclosure or distribution is prohibited. If you are not the
intended recipient, please contact the sender by reply e-mail
and destroy all copies of the original message.

___
nginx mailing list
nginx@nginx.org <mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx




___
nginx mailing list
nginx@nginx.org <mailto:nginx@nginx.org>
http://mailman.nginx.org/mailman/listinfo/nginx






This e-mail message, including any attachments, is for the sole use of 
the intended recipient(s) and may contain information that is 
confidential and protected by law from unauthorized disclosure. Any 
unauthorized review, use, disclosure or distribution is prohibited. If 
you are not the intended recipient, please contact the sender by reply 
e-mail and destroy all copies of the original message.

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Pre-compressed (gzip) HTML using fastcgi_cache?

2016-10-31 Thread Lucas Rolff

Hello,

It's not strange behavior, it's expected.
What happens is that even though the key is the same - the actual 
returned content *might* be different. As an example:

If your origin returns Vary: Accept-Encoding,

nginx will cache based on this - so if Accept-Encoding differs, 
the md5 (the path) will be different.

So if your cache_key is $host$request_uri and I request 
http://domain.com/text.html using a standard curl, my Accept-Encoding 
won't be there, and the file will be cached under hash X.
Whenever a Google Chrome user comes along and does the exact same request to 
http://domain.com/text.html, the cache_key will still be the same, but 
since Chrome sends gzip, deflate (and some others), nginx will still 
cache it differently, thus resulting in different md5's on the filesystem.


If you use fastcgi_ignore_headers Vary; (I don't see this in the initial 
post), it shouldn't generate multiple md5's for the same key.

Basically nginx's cache works as it should and actually obeys the 
Vary header; if you don't want to obey it, you should ignore it, and use 
some other variable (like a gzip_enabled variable) within your cache key to still 
generate 2 different files.
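
A minimal sketch of that combination (the $gzip_key variable name is made up):

    map $http_accept_encoding $gzip_key {
        default  "";
        ~*gzip   gzip;
    }

    fastcgi_ignore_headers Vary;
    fastcgi_cache_key "$host$request_uri$gzip_key";

That way you get at most two cache files per URL - one per value of $gzip_key - instead of one per distinct Accept-Encoding string.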


--
Best Regards,
Lucas Rolff


seo010 wrote:

Hi Lucas,

Thanks a lot for the suggestion. We were already using that solution but a
strange behavior occurred (see opening post). The first request uses an
expected MD5 hash of the KEY, and the client will keep using that hash (the
MISS/HIT header is accurate). However, requests from other clients will make
Nginx use a different (unknown) MD5 hash for the exact same content and KEY.
The cache file contains a row with "KEY: ..." that matches the expected KEY
and KEY for other MD5 hashes.

Do you have an idea what may cause this behavior?

Best Regards,
Jan Jaap

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,270604,270661#msg-270661

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Pre-compressed (gzip) HTML using fastcgi_cache?

2016-10-30 Thread Lucas Rolff
Well - then put fastcgi_ignore_headers Vary, make your map determine if 
the client supports gzip or not; then you'll have 2 entries of 
everything, 1 gzipped and one not gzipped.

I'm not sure how much traffic we're talking about when it's about 'high 
traffic' - you'd probably want to run your proxy separately anyway, and 
then you can basically just scale out during peaks.

--
Best Regards,
Lucas Rolff


seo010 wrote:

Hi!

It sounds like a good solution to improve the performance, however, I just
read the following post by Jake Archibald (Google Chrome developer).

"Yeah, ~10% of BBC visitors don’t support gzip compression. It was higher
during the day (15-20%) but lower in the evenings and weekends (<10%).
Pretty much puts the blame in the direction of corporate proxies."

https://www.stevesouders.com/blog/2009/11/11/whos-not-getting-gzip/

It appears that an amount of traffic would require gzip. For high traffic
websites it may not be a sufficient solution to guarantee optimal
performance. It is not a nice idea to have an aspect of an implemented
solution of which the stability and performance cannot be depended on.
Imagine a high traffic website that receives a spike in traffic after a TV
commercial. If just 5% of traffic would not support gzip, it may cause a
load that would reduce the overall performance of the website, potentially
causing a loss in revenue and user experience. Load tests may not have been
able to show the performance bottleneck, as they may not factor in gzip
support and it may not be possible to predict what amount of clients support
gzip. If a global website receives a traffic spike, it may be that for a
specific geographic area a larger percentage of users does not support gzip,
causing the server performance to fail.

Best Regards,
Jan Jaap

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,270604,270649#msg-270649

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Pre-compressed (gzip) HTML using fastcgi_cache?

2016-10-30 Thread Lucas Rolff
What you could do (I basically asked the same question 1 week ago) is, 
whenever you fastcgi_pass, enforce Accept-Encoding: gzip - 
meaning you'll always request gzipped content from your backend - and then 
you can enable the gunzip directive by using "gunzip on;".

This means that in case a client does not support gzip compression, 
nginx will uncompress the file on the fly - while the rest of the requests 
will be served the file directly with Content-Encoding: gzip - and 
the supporting client will automatically do whatever it should.

First of all it saves you a bunch of storage, and it should give you the 
result you want: serving (pre)compressed files to clients that support it.
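
A sketch of that setup, assuming nginx was built with --with-http_gunzip_module (the socket path and cache zone name are illustrative):

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param HTTP_ACCEPT_ENCODING gzip;   # always ask the backend for gzip
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_cache mycache;
        gunzip on;   # decompress on the fly only for clients that can't take gzip
    }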

--
Best Regards,
Lucas Rolff


seo010 wrote:

Hi *B. R.*!

Thanks a lot for the reply and information! The KEY however, does not
contain different data from http_accept_encoding. When viewing the contents
of the cache file it contains the exact same KEY for both MD5 hashes. Also,
it does not matter what browser is used for the first request. For example,
using a Google PageSpeed test at the first request will create the expected
MD5 hash for the KEY, and a next request using Chrome will create a new hash
for a file that contains the line "KEY: ..." that matches the KEY for the
first MD5 hash.

The third request also has a different KEY. I did not test any further, it
may be that the KEY will change for every new client. The KEY does remain
the same however for the same client. For example, the first request uses
the MD5 hash as expected for the KEY (as generated by MD5) and it will keep
using it in next requests.

As gzip compression causes a huge overhead on servers with high traffic, I
was wondering if Nginx would cache the gzip compressed result and if so, if
there is a setting with a maximum cache size. It would however, cause a
waste of cache space.

In tests the overhead added 4 tot 10ms on a powerful server for every
request compared with loading pre-compressed gzip HTML directly. It makes me
wonder what will be the effect on servers with high traffic.

As there appears to be no solution in Google, finding an answer may be
helpful for a lot of websites and it will make Nginx the best option for
full page cache.

Best Regards,
Jan Jaap

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,270604,270647#msg-270647

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Rewrite Vary header before stored in proxy_cache

2016-10-24 Thread Lucas Rolff

Hi Maxim,

Thank you a lot for the reply!


The best possible solution I can think of is to ask the client to fix the Vary 
header it returns


I completely agree, but sometimes it's hard to ask customers to do this, 
though I do try to do it as often as possible.




When using purge as availalbe in nginx-plus 
(http://nginx.org/r/proxy_cache_purge), it takes care of removing all cached 
variants, much like it does for wildcard purge requests.
Ahh cool! Nice one - maybe we'll be lucky and it'll get into the open 
source version one day ;)



This can be done easily, just

 proxy_set_header Accept-Encoding "";

should be enough.  Alternatively, you can use

 proxy_set_header Accept-Encoding gzip;
 gunzip on;

to always ask gzipped resources and gunzip them when needed, see
http://nginx.org/en/docs/http/ngx_http_gunzip_module.html.
This is actually what I ended up doing, and it seems to work perfectly - 
I still have to gunzip if the client doesn't support gzip in the first 
place, but the percentage is very small these days, so it seems like 
the best option: not only do I save a bunch of storage (due to compression 
and only storing the file once and not 3 times) - it also makes purging 
super easy!


Once again, thanks a lot!

--
Best Regards,
Lucas Rolff


Maxim Dounin wrote:

Hello!

On Mon, Oct 24, 2016 at 06:38:25AM +0200, Lucas Rolff wrote:


Hi guys,

I'm building a small nginx reverse proxy to take care of a bunch of static
files for my clients - and it works great.

One thing I'm facing though is that some client sites sent "Vary:
Accept-Encoding, User-Agent" - which gives an awful cache hit rate - since
proxy_cache takes this into account, unless I use something like
"proxy_ignore_headers Vary;"

But ignoring Vary headers can cause other issues such as gzipped content
being sent to a non-gzip client.

So I'm looking for a way to basically rewrite the vary header to "Vary:
Accept-Encoding" before storing it in proxy_cache - but I wonder if this is
even possible in nginx, and if yes - can you give any pointers?

I found a temporary fix, and that is to ignore the Vary header, and using a
custom variable as a part of the cache key, that is either "", "gzip" or
"deflate" (I use a map to look at the Accept-Encoding header from the
client).

This works great - but I rather keep the cache key a bit clean (since I'll
use it later)

Do you guys have any recommendations how to make this happen?


The best possible solution I can think of is to ask the client to fix
the Vary header it returns.  Using User-Agent in Vary is something
one shouldn't use without a very good reason, and if there a
reason - it's likely a bad idea to strip from the Vary header.
And if there are no reasons, then it shouldn't be returned in the
first place.


Also as a side note, if I remove the custom variable from the cache key,
how would one actually purge the file then? I assume I have to send
different purge requests, since the cached file is based on the Vary:
accept-encoding - so I'd have to purge at least the amount of cached
encodings right?


When using purge as availalbe in nginx-plus
(http://nginx.org/r/proxy_cache_purge), it takes care of removing
all cached variants, much like it does for wildcard purge requests.


Also I could opt for another way, and that's always requesting a
uncompressed file from the origin (Is it simply not sending the
accept-encoding header, or should I do something else?), and then on every
request either decide to gzip it or not - the downside I see here, is the
fact that most clients request gzip,deflate content, so having to compress
on every request will use additional CPU resources.


This can be done easily, just

 proxy_set_header Accept-Encoding "";

should be enough.  Alternatively, you can use

 proxy_set_header Accept-Encoding gzip;
 gunzip on;

to always ask gzipped resources and gunzip them when needed, see
http://nginx.org/en/docs/http/ngx_http_gunzip_module.html.



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Rewrite Vary header before stored in proxy_cache

2016-10-23 Thread Lucas Rolff
Hi guys,

I'm building a small nginx reverse proxy to take care of a bunch of static
files for my clients - and it works great.

One thing I'm facing though is that some client sites send "Vary:
Accept-Encoding, User-Agent" - which gives an awful cache hit rate - since
proxy_cache takes this into account, unless I use something like
"proxy_ignore_headers Vary;"

But ignoring Vary headers can cause other issues such as gzipped content
being sent to a non-gzip client.

So I'm looking for a way to basically rewrite the vary header to "Vary:
Accept-Encoding" before storing it in proxy_cache - but I wonder if this is
even possible in nginx, and if yes - can you give any pointers?

I found a temporary fix, and that is to ignore the Vary header, and using a
custom variable as a part of the cache key, that is either "", "gzip" or
"deflate" (I use a map to look at the Accept-Encoding header from the
client).

This works great - but I rather keep the cache key a bit clean (since I'll
use it later)

Do you guys have any recommendations how to make this happen?

Also as a side note, if I remove the custom variable from the cache key,
how would one actually purge the file then? I assume I have to send
different purge requests, since the cached file is based on the Vary:
accept-encoding - so I'd have to purge at least the amount of cached
encodings right?

Also I could opt for another way, and that's always requesting an
uncompressed file from the origin (is that simply not sending the
Accept-Encoding header, or should I do something else?), and then on every
request either decide to gzip it or not - the downside I see here is that
most clients request gzip,deflate content, so having to compress
on every request will use additional CPU resources.

Thanks in advance!

--
Best regards,
Lucas Rolff
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Problems with custom log file format

2016-08-23 Thread Lucas Rolff
The 1st one matches your log format, the 2nd one matches the `combined`
format - so:


- Maybe you still have some old nginx processes that weren't closed after
reloading, which would cause them to log in the combined format - or could
it be another vhost logging to the same log file without using
`access_log <logfile> main`?


Can you paste your *full* config? That allows for easier debugging.
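
As a sketch of what the relevant bits should look like (the path is
illustrative) - the access_log directive must name the format, otherwise
nginx falls back to the built-in `combined` format:

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" "$http_user_agent"';

    access_log /var/log/nginx/access.log main;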

li...@lazygranch.com wrote:

Configuration file included in the post. I already checked it.


   Original Message  
From: Maxim Dounin

Sent: Tuesday, August 23, 2016 10:10 AM
To: nginx@nginx.org
Reply To: nginx@nginx.org
Subject: Re: Problems with custom log file format

Hello!

On Tue, Aug 23, 2016 at 10:07:56AM -0700, li...@lazygranch.com wrote:


Looks like I have no takers on this problem. Should I file a
bug report? If so, where?


I would recommend you to start with double-checking your
configuration instead.



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Issue with HTTP/2 and async file upload from Safari on iOS

2016-06-04 Thread Lucas Rolff

https://trac.nginx.org/nginx/ticket/979
https://trac.nginx.org/nginx/ticket/959

It's a known bug


ZaneCEO 
5 June 2016 at 01:17
Hi guys,
I'm at my first deploy of Nginx with php-fpm after 10+ years of love with
Apache and mod_php. So far so (very) good.

I just have a peculiar issue with Safari on iOS. As you can read here
http://stackoverflow.com/questions/37635277/safari-on-ios-fails-to-ajax-upload-some-image-file-cannot-connect-to-server
, my webapp allows the user to select an image, client-resize it via JS,
and then upload it via jQuery.

The problem is that Safari on iOS 9 sometimes fails the upload with the
error

POST , Could not connect to the server.

I just found out that when I disable HTTP/2 from my server config the
issue vanishes.

Is this a known issue somehow? Is there any other solution that doesn't
require me to go nuclear on HTTP/2?

Thanks for your help!

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,267385,267385#msg-267385


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: 502 Bad Gateway once running php on command line

2016-03-23 Thread Lucas Rolff
When issuing php directly from the command line, you don't even go
through nginx.
php from the command line uses the php-cli binary, which talks to neither
your nginx process nor php-fpm.



marcusy3k 
23 March 2016 at 10:05
Eventually I found what went wrong: it was caused by having both Zend
OPcache and XCache installed - they may conflict with each other in this
case. Once I removed XCache, it works fine, and the php command line no
longer causes the php-fpm error.

XCache should be unnecessary when OPcache is running.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,265576,265584#msg-265584


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
marcusy3k 
23 March 2016 at 05:59
I have just installed:
- FreeBSD 10.2
- Nginx 1.8.1
- PHP 5.5.33

Nginx works fine with PHP - the web sites seem OK running php pages.
However, once I run php on the command line (e.g. php -v), the web site
gets a "502 Bad Gateway" error, and I find the nginx error log as below:

[error] 714#0: *3 upstream prematurely closed connection while reading
response header from upstream, client: _, server: www..com,
request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:",
host: "_"

I have tried using either the socket or a port, but the problem still
exists... any idea about what's wrong? Thanks.

"php -v" shows below:
PHP 5.5.33 (cli) (built: Mar 15 2016 01:22:17)
Copyright (c) 1997-2015 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2015 Zend Technologies
with XCache v3.2.0, Copyright (c) 2005-2014, by mOo
with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2015, by Zend
Technologies
with XCache Cacher v3.2.0, Copyright (c) 2005-2014, by mOo

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,265576,265576#msg-265576


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: 502 Bad Gateway once running php on command line

2016-03-23 Thread Lucas Rolff

Hi,

What is the exact call you're trying to do?

- Lucas

marcusy3k wrote:

I have just installed:
- FreeBSD 10.2
- Nginx 1.8.1
- PHP 5.5.33

Nginx works fine with PHP - the web sites seem OK running php pages.
However, once I run php on the command line (e.g. php -v), the web site
gets a "502 Bad Gateway" error, and I find the nginx error log as below:

[error] 714#0: *3 upstream prematurely closed connection while reading
response header from upstream, client: _, server: www..com,
request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:",
host: "_"

I have tried using either the socket or a port, but the problem still
exists... any idea about what's wrong? Thanks.

"php -v" shows below:
PHP 5.5.33 (cli) (built: Mar 15 2016 01:22:17)
Copyright (c) 1997-2015 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2015 Zend Technologies
 with XCache v3.2.0, Copyright (c) 2005-2014, by mOo
 with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2015, by Zend
Technologies
 with XCache Cacher v3.2.0, Copyright (c) 2005-2014, by mOo

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,265576,265576#msg-265576

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: nginx 1.9.12 proxy_cache always returns MISS

2016-03-19 Thread Lucas Rolff

Seems like it's resolved:

$ curl -I 
http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg

HTTP/1.1 200 OK
Server: nginx
Date: Sat, 19 Mar 2016 20:42:46 GMT
Content-Type: image/jpeg
Content-Length: 53491
Connection: keep-alive
Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT
ETag: "d0f3-52daab51fbe80"
Expires: Sun, 19 Mar 2017 20:42:46 GMT
Cache-Control: max-age=31536000
Cache-Control: public, max-age=31536000
X-Cache-Status: MISS
Accept-Ranges: bytes

$ curl -I 
http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg

HTTP/1.1 200 OK
Server: nginx
Date: Sat, 19 Mar 2016 20:42:48 GMT
Content-Type: image/jpeg
Content-Length: 53491
Connection: keep-alive
Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT
ETag: "d0f3-52daab51fbe80"
Expires: Sun, 19 Mar 2017 20:42:48 GMT
Cache-Control: max-age=31536000
Cache-Control: public, max-age=31536000
X-Cache-Status: HIT
Accept-Ranges: bytes

CJ Ess wrote:
I think I've run into this problem before - move the proxy_pass
statement from the top of the location stanza to the bottom, and I
think that will solve your issue.



On Sat, Mar 19, 2016 at 4:10 PM, shiz wrote:


Been playing with this for 2 days.

proxy_pass is working correctly but the proxy_cache_path remains empty
whatever I do.


Here's the source I use for tests:
root@NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I
http://www.kuriyama-truck.com/images/parts/13375/thumbnail_0/1_1.jpg
HTTP/1.1 200 OK
Date: Sat, 19 Mar 2016 18:15:16 GMT
Server: Apache/2.4.16 (Amazon) PHP/5.6.17
Last-Modified: Thu, 10 Mar 2016 05:01:30 GMT
ETag: "d0f3-52daab51fbe80"
Accept-Ranges: bytes
Content-Length: 53491
Content-Type: image/jpeg

Now here's the response from the nginx:
root@NC-PH-0657-10:/etc/nginx/snippets# curl -X GET -I

http://dev.ts-export.com/kuriyamacache/images/parts/13375/thumbnail_0/1_1.jpg
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 19 Mar 2016 18:14:46 GMT
Content-Type: image/jpeg
Content-Length: 53491
Connection: keep-alive
Expires: Sun, 19 Mar 2017 18:14:46 GMT
Cache-Control: max-age=31536000
Cache-Control: public
X-Cache-Status: MISS
Accept-Ranges: bytes

Here are the request headers from my browser:
GET /kuriyamacache/images/parts/13375/thumbnail_1/1_1.jpg HTTP/1.1
Host: dev.ts-export.com 
Connection: keep-alive
Cache-Control: max-age=0
Accept:
text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36
(KHTML,
like Gecko) Chrome/49.0.2623.87 Safari/537.36
DNT: 1
Accept-Encoding: gzip, deflate, sdch
Accept-Language: fr-CA,fr;q=0.8,en-US;q=0.6,en;q=0.4,ja;q=0.2,de;q=0.2
Cookie: PRUM_EPISODES=s=1458412951203

Part of my setup:

proxy_cache_path /tmp/nginx/dev levels=1:2 keys_zone=my_zone:10m
max_size=10g inactive=60m use_temp_path=off;

server {

  set $rule_3 0;
  set $rule_4 0;
  set $rule_5 0;
  set $rule_8 0;
  set $rule_9 0;

  server_name dev.ts-export.com;
  listen 80;
  listen [::]:80;

  root /home/tsuchi/public_html;

  if ($reject_those_args) {
return 403;
  }

  include snippets/filters.conf;

  error_page 404 /404.html;

  if ($request_uri ~ "^/index.(php|html?)$" ) {
return 301 $scheme://dev.ts-export.com;
  }


  # no SSL for IE6-8 on XP and Android 2.3
  if ($scheme = https) {
set $rule_8 1$rule_8;
  }
  if ($http_user_agent ~ "MSIE (6|7|8).*Windows NT 5|Android 2\.3"){
set $rule_8 2$rule_8;
  }
  if ($rule_8 = "210"){
return 301 http://dev.ts-export.com$request_uri;
  }


  location = / {
allow all;
  }

  location = /robots.txt {
add_header X-Robots-Tag noindex;
  }


  location '/.well-known/acme-challenge' {
default_type "text/plain";
root /tmp/letsencrypt-auto;
  }

  include snippets/proxyimg.conf;

  location / {
try_files $uri $uri/ @rewrites;
allow all;
  }
 (...)
}

Contents of proxyimg.conf:

 location ^~ /kuriyamacache {
expires 1y;
access_log off;
log_not_found off;
resolver 127.0.0.1;

proxy_pass http://www.kuriyama-truck.com/;

proxy_cache my_zone;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_buffering on;

proxy_cache_valid  200 301 302  60m;
proxy_cache_valid  404  1m;
proxy_cache_use_stale error timeout http_500 http_502 http_503
http_504;
proxy_cache_revalidate on;
proxy_cache_lock on;

proxy_set_header Host $host;

Re: secure and httponly cookies

2016-03-07 Thread Lucas Rolff
Without knowing much about webseal (only from simple googling), webseal
really seems to be a very custom IBM product that does one thing:
integrate into Tivoli Access Manager - meaning it has very specific
features (such as single sign-on), etc.
nginx is a general-purpose web server; it doesn't hook into your backend
system - usually you proxy some requests through it, or serve some files.


The only way I can think of is using Lua to rewrite the Set-Cookie
headers, but it's not really a nice solution.
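
A rough sketch of that Lua approach (requires the third-party
lua-nginx-module; untested and purely illustrative):

    header_filter_by_lua_block {
        local cookies = ngx.header["Set-Cookie"]
        if cookies then
            -- normalize to a table: one entry per Set-Cookie header
            if type(cookies) == "string" then
                cookies = { cookies }
            end
            for i, c in ipairs(cookies) do
                -- append the flags only if the backend didn't set them
                if not c:find("[Ss]ecure") then
                    c = c .. "; Secure"
                end
                if not c:find("[Hh]ttp[Oo]nly") then
                    c = c .. "; HttpOnly"
                end
                cookies[i] = c
            end
            ngx.header["Set-Cookie"] = cookies
        end
    }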



kris...@brocade.com wrote:

Thanks for the response.

Yes, I understand that. But here they don't create a secure or httponly
cookie in the backend (webseal/ibm portal).

Earlier we were using ibm http server (IHS) and were adding these flags in
the web server itself.

Now we are trying to replace IHS with nginx but are not able to
accomplish the same here.

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,265137,265140#msg-265140

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: secure and httponly cookies

2016-03-07 Thread Lucas Rolff
This isn't really something you do on your web server but rather in your 
backend configuration (such as php.ini), etc.



kris...@brocade.com 
7 March 2016 at 20:38
Hi,

How to mark all the cookies from the backend servers as secure and
httponly?

Is there some config in NGINX available for this?

Thanks,
Krishna

Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,265137,265137#msg-265137


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: lose value of $uid_set

2016-02-25 Thread Lucas Rolff

What is the http status code of those failed requests?


youjie zeng 
25 February 2016 at 14:28
Hello, master of nginx, I have a question when using 
ngx_http_userid_module. here is the detail description:


nginx version:1.7
conf:

log_format  main '$remote_addr - $remote_user [$time_local] "$request"
$uid_set';


...

server {

userid on;

userid_name user_id;

set $uid_reset myuid;

   ...

}


Because I set $uid_reset to a non-empty value, $uid_set should get a new
UUID every time nginx processes a request.


But I found something strange: I got the value "-" for $uid_set, meaning
$uid_set did not get a value. Over 92% of requests have a correct value
for $uid_set, but around 2% do not. And I did not find anything abnormal
about those requests.


Do you have any idea about this?

Looking forward to your reply!

have a nice day!



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: proxy_pass not seen as SNI-client according to Apache directive

2016-02-14 Thread Lucas Rolff

Hi Maxim,

Thank you a lot for the quick reply, I'll give it a test tomorrow morning!

And Robert has a valid point indeed, why is it actually disabled by default?


Robert Paprocki <rpapro...@fearnothingproductions.net>
14 February 2016 at 22:46


Out of curiosity, is there a philosophical/design reason this option 
is not enabled by default?


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Maxim Dounin <mdou...@mdounin.ru>
14 February 2016 at 21:58
Hello!


http://nginx.org/r/proxy_ssl_server_name
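
(A minimal sketch of the fix being pointed to, assuming an https upstream:

    location / {
        proxy_pass https://my_domain;
        proxy_ssl_server_name on;   # send SNI to the upstream during the TLS handshake
    }
)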

Lucas Rolff <lu...@slcoding.com>
14 February 2016 at 20:14
Hi guys,

I'm having a rather odd behavior - I use nginx as a reverse proxy 
(basically as a CDN) - where if the file isn't in cache, I do use 
proxy_pass to the origin server, to get the file and then cache it.


This works perfectly in most cases - except if the origin is running
Apache and happens to use the Apache directive "SSLStrictSNIVHostCheck"
set to On.


Basically it decides whether or not a non-SNI client is allowed to access
a name-based virtual host over SSL.
But when using proxy_pass, nginx appears to the Apache server as a non-SNI
client:
[Sun Feb 14 19:32:50 2016] [error] No hostname was provided via SNI
for a name based virtual host
[Sun Feb 14 19:33:00 2016] [error] No hostname was provided via SNI
for a name based virtual host


I was able to replicate this issue on multiple nginx versions (both on 
1.8.1, 1.9.9 and 1.9.10).

It results in 403 forbidden for the client.

If I set the directive SSLStrictSNIVHostCheck to off, I do not get a
403 forbidden - and the files I try to fetch get fetched correctly.
(Meaning proxy_pass does understand SNI.)


The nginx zone does a proxy_pass https://my_domain; and my_domain is
running on a server that uses SNI.


Best Regards,
Lucas Rolff


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

proxy_pass not seen as SNI-client according to Apache directive

2016-02-14 Thread Lucas Rolff

Hi guys,

I'm having a rather odd behavior - I use nginx as a reverse proxy 
(basically as a CDN) - where if the file isn't in cache, I do use 
proxy_pass to the origin server, to get the file and then cache it.


This works perfectly in most cases - except if the origin is running
Apache and happens to use the Apache directive "SSLStrictSNIVHostCheck"
set to On.


Basically it decides whether or not a non-SNI client is allowed to access
a name-based virtual host over SSL.
But when using proxy_pass, nginx appears to the Apache server as a non-SNI
client:
[Sun Feb 14 19:32:50 2016] [error] No hostname was provided via SNI
for a name based virtual host
[Sun Feb 14 19:33:00 2016] [error] No hostname was provided via SNI
for a name based virtual host


I was able to replicate this issue on multiple nginx versions (both on 
1.8.1, 1.9.9 and 1.9.10).

It results in 403 forbidden for the client.

If I set the directive SSLStrictSNIVHostCheck to off, I do not get a
403 forbidden - and the files I try to fetch get fetched correctly.
(Meaning proxy_pass does understand SNI.)


The nginx zone does a proxy_pass https://my_domain; and my_domain is
running on a server that uses SNI.


Best Regards,
Lucas Rolff

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Throughput with Loadbalancer

2015-09-29 Thread Lucas Rolff
You'll decrease your capacity to 1 gigabit, because you'll send it out
via the load balancer again.
Otherwise you need to look at "DSR" (Direct Server Return); I'm not
completely sure nginx actually supports this.



wolfgangpue 
29 Sep 2015 08:21
Hi,

I am not sure how the load balancer affects the data throughput.

For example: I have two nginx servers with 1 Gbps network connections.
I configure the first server as the nginx load balancer and as an upstream
server, and the second server only as an upstream server.

Is the second upstream server sending its data directly to the client, or
is the data routed through the load balancer (first server)? When all data
is routed through the first server, I only have a maximum bandwidth of
1 Gbps. If the second server sends its data directly to the client, I have
a maximum bandwidth of 2 Gbps. And if the second server sends its data
directly to the client, does the response data contain the IP of the
second upstream server?

So which one is correct?

Best Regards
Wolfgang

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,261912,261912#msg-261912


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Throughput with Loadbalancer

2015-09-29 Thread Lucas Rolff
You could also use multiple A records on a DNS level, and let DNS 
balance the traffic between the two machines.




wolfgangpue <nginx-fo...@nginx.us>
29 Sep 2015 14:19
Lucas Rolff Wrote:
---


Ok, thank you. I think DSR is on a lower level, and nginx has no influence
on it.

I will try another approach. Because both nginx servers contain the same
files, I will code my own load balancer in my php frontend system and
request data from my nginx servers in a round-robin queue.

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,261912,261923#msg-261923


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Lucas Rolff <lu...@slcoding.com>
29 Sep 2015 08:32
You'll decrease your capacity to 1 gigabit, because you'll send it out
via the load balancer again.
Otherwise you need to look at "DSR" (Direct Server Return); I'm not
completely sure nginx actually supports this.





___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3)

2015-08-02 Thread Lucas Rolff

Be aware it doesn't work in Chrome on Mac either :-)


tunist <nginx-fo...@nginx.us>
2 Aug 2015 13:16
oh, so the solution here was to add: add_header Accept-Ranges bytes;
to the site's config file.
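
For context, a minimal sketch of a location where that header would go,
assuming the stock mp4 module is used for pseudo-streaming (the pattern is
illustrative):

    location ~ \.mp4$ {
        mp4;                              # ngx_http_mp4_module
        add_header Accept-Ranges bytes;   # advertise byte-range support to players
    }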

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,260615,260702#msg-260702


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
tunist <nginx-fo...@nginx.us>
29 Jul 2015 13:14
greetings!

i am seeing an unexplained malfunction here with nginx when serving
videos. flv and mp4 files have different symptoms. mp4 streams correctly
when i view the file in firefox 39 in fedora 22, but in windows 7
(firefox 39) the file cannot be 'seeked' and must be played linearly.
after speaking with the coders of video.js (the player i use), it was
determined that nginx is not returning byte range data appropriately (or
at all) - so seeking would not work. however, this does not explain why
firefox 39 in fedora works perfectly, and does not provide a solution as
to how to get nginx to serve correctly.

the only advice i have seen is to change the value of the 'max_ranges'
directive - but doing that has made no difference. i have left it as
'unset' - which i understand to mean 'unlimited'.

an example video from the server is here:
https://www.ureka.org/file/play/17924/censored%20on%20google%202.mp4

any tips welcomed! thanks

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,260615,260615#msg-260615


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Serving from cache even when the origin server goes down

2015-06-29 Thread Lucas Rolff

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale

You can use multiple values, e.g. the below is probably a good start:

proxy_cache_use_stale error timeout invalid_header updating;
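
In context, a minimal sketch (upstream and zone names are illustrative):

    location / {
        proxy_pass            http://origin;
        proxy_cache           my_zone;
        proxy_cache_use_stale error timeout invalid_header updating;
    }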


Cherian Thomas <cherian...@gmail.com>
30 Jun 2015 03:27

Is it possible to configure Nginx to serve from cache even when the 
origin server is not accessible?


I am trying to figure out if I can use a replicated Nginx instance that
has cache files rsynced (lsyncd - https://code.google.com/p/lsyncd/) from
the primary instance, and serve from the replicated instance (DNS switch)
if the primary goes down.


- Cherian

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: Malware in /tmp/nginx_client

2015-06-27 Thread Lucas Rolff
It's not harmful that they're there, but you could simply exclude the
/tmp/nginx_client folder from maldet.


It's due to the option client_body_in_file_only being set to on in your
nginx.conf (sounds like you're using http://www.nginxcp.com/ for cPanel).
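
That setting looks roughly like this in nginx.conf (a sketch; the temp
path is illustrative):

    client_body_in_file_only on;                # write every request body to a file and keep it
    client_body_temp_path    /tmp/nginx_client; # where those files end up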

guillefar <nginx-fo...@nginx.us>
27 Jun 2015 15:45
The software maldet, discovered some malware in the the /tmp/nginx_client
directory, like this:


{HEX}php.cmdshell.unclassed.357 : /tmp/nginx_client/0050030641
{HEX}php.cmdshell.unclassed.357 : /tmp/nginx_client/0060442670


I did some research, and found out that indeed, there was some malicious
code in them.

I did an extensive search of the sites, and nothing malicious was found,
including the code that appeared in the tmp files.

Around the time the files were created, there were similar requests to
non-existent WordPress plugins, and to a file of the WordPress backend.

Digging up a little, I found this:
blog.inurl.com.br/2015/03/wordpress-revslider-exploit-0day-inurl.html

Basically an exploit for a WordPress plugin vulnerability (it doesn't
affect my sites, though) that does similar requests to the ones I found.

One of those is a POST request that includes an attacker's php file,
which thanks to this vulnerability will be uploaded to the site and can
be run by the attacker.

So what seems to be happening is that nginx is caching POST requests with
malicious code, which is later found by the antimalware software.

Could this be the case? From what I read, nginx doesn't cache POST
requests by default, so it seems odd.

Is there a way to know if those tmp files are caching internal or
external content?

I will be thankful for any info about it.

Nginx is working as reverse proxy only.


This is a bit of another file that was marked as malware:


--13530703071348311
Content-Disposition: form-data; name=uploader_url

http:/MISITE/wp-content/plugins/wp-symposium/server/php/
--13530703071348311
Content-Disposition: form-data; name=uploader_uid



1
--13530703071348311
Content-Disposition: form-data; name=uploader_dir

./NgzaJG
--13530703071348311
Content-Disposition: form-data; name=files[]; filename=SFAlTDrV.php
Content-Type: application/octet-stream


Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,259948,259948#msg-259948

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: static file performance staircase pattern

2015-05-11 Thread Lucas Rolff

It's not actually required to always serve it from the same sub-domain.
The most optimal solution would be to add the canonical Link header when
serving using domain sharding.

But from a caching perspective, keeping the sharding consistent is indeed
beneficial (you can e.g. hash the image name with crc32 - this will always
return the same hash, and based on that do the domain sharding), but from
an SEO perspective it doesn't matter, as long as you do it right with a
canonical link.
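
As a sketch of a consistent server-side mapping (nginx's split_clients
hashes with MurmurHash2 rather than crc32, but gives the same
same-name-same-shard property; domain names are illustrative):

    split_clients "$uri" $asset_host {
        33% static1.domain.com;
        33% static2.domain.com;
        *   static3.domain.com;
    }

    # and on the shard vhosts, point search engines at the canonical URL:
    add_header Link '<https://www.domain.com$request_uri>; rel="canonical"';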



Nikolaj Schomacker <sjum...@gmail.com>
11 May 2015 08:49
And a last thing you should be aware of, if it applies to your case, is
SEO. Using multiple domains for images is perfectly fine in the eyes of
Google, but be sure the same image is always served from the same
subdomain. Also be sure to have all of the subdomains added to the same
webmasters account as your main site.


~ Nikolaj

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Lucas Rolff <lu...@slcoding.com>
9 May 2015 20:24
What you should do to increase the number of concurrent requests is use
domain sharding - since, as Paul mentioned, browsers actually allow
between 4 and 8 simultaneous connections per domain, introducing
static1,2,3.domain.com will increase your concurrency.

But at the same time you need to be aware that this can have a negative
effect on your performance if you use too many domains - there's no golden
rule on how many you need; it's a site-by-site case, and it differs.
Also take into account that your end-user's connection can be a heavy
limit as well if you push too much concurrency (hence the negative
effect) - a high number of concurrent requests being processed slows down
the download time of each, so the perceived performance the user sees
might get worse because the page feels slower.


- Lucas

Paul Smith <paul.j.smi...@gmail.com>
9 May 2015 20:03
On Sat, May 9, 2015 at 11:37 AM, Dennis Jacobfeuerborn

I am not an expert, but I believe that most browsers only make between
4 and 6 simultaneous connections to a domain.
requests are sent and the response received and then the second round
go out and are received back and so forth. Doing a search for
something like max downloads per domain may bring you better
information.

Paul

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Dennis Jacobfeuerborn <denni...@conversis.de>
9 May 2015 19:37
Hi,
I'm trying to find out how to effectively deliver pages with lots of
images on a page. Attached you see opening a static html page that
contains lots of img tags pointing to static images. Please also note
that all images are cached in the browser (hence the 304 response) so no
actual data needs to be downloaded.
All of this is happening on a CentOS 7 system using nginx 1.6.

The question I have is why is it that the responses get increasingly
longer? There is nothing else happening on that server and I also tried
various optimizations like keepalive, multi_accept, epoll,
open_file_cache, etc. but nothing seems to get rid of that staircase
pattern in the image.

Does anybody have an idea what the cause is for this behavior and how to
improve it?

Regards,
Dennis
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: static file performance staircase pattern

2015-05-09 Thread Lucas Rolff
What you should do to increase the number of concurrent requests is use
domain sharding - since, as Paul mentioned, browsers actually allow
between 4 and 8 simultaneous connections per domain, introducing
static1,2,3.domain.com will increase your concurrency.

But at the same time you need to be aware that this can have a negative
effect on your performance if you use too many domains - there's no golden
rule on how many you need; it's a site-by-site case, and it differs.
Also take into account that your end-user's connection can be a heavy
limit as well if you push too much concurrency (hence the negative
effect) - a high number of concurrent requests being processed slows down
the download time of each, so the perceived performance the user sees
might get worse because the page feels slower.


- Lucas


Paul Smith <paul.j.smi...@gmail.com>
9 May 2015 20:03
On Sat, May 9, 2015 at 11:37 AM, Dennis Jacobfeuerborn

I am not an expert, but I believe that most browsers only make between
4 and 6 simultaneous connections to a domain.
requests are sent and the response received and then the second round
go out and are received back and so forth. Doing a search for
something like max downloads per domain may bring you better
information.

Paul

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Dennis Jacobfeuerborn <denni...@conversis.de>
9 May 2015 19:37
Hi,
I'm trying to find out how to effectively deliver pages with lots of
images on a page. Attached you see opening a static html page that
contains lots of img tags pointing to static images. Please also note
that all images are cached in the browser (hence the 304 response) so no
actual data needs to be downloaded.
All of this is happening on a CentOS 7 system using nginx 1.6.

The question I have is why is it that the responses get increasingly
longer? There is nothing else happening on that server and I also tried
various optimizations like keepalive, multi_accept, epoll,
open_file_cache, etc. but nothing seems to get rid of that staircase
pattern in the image.

Does anybody have an idea what the cause is for this behavior and how to
improve it?

Regards,
Dennis
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: please suggest performance tweak and the right siege options for load test

2015-03-18 Thread Lucas Rolff
Have you checked the socket level, and checking kernel log on all 3 
servers (nginx and load balancer) meanwhile doing the test?
It could be that for some reason you reach a limit really fast (We had 
an issue that we reached the nf_conntrack limit at 600 concurrent users 
because we had like 170 requests per page load)
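
If you want to check whether you're bumping into that particular limit
while the test runs, something like this works on most Linux boxes (sysctl
names can vary by kernel; the value below is illustrative):

    # watch the connection-tracking table fill up
    sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

    # raise the ceiling if you're hitting it
    sysctl -w net.netfilter.nf_conntrack_max=262144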



halozen wrote:

2 nginx 1.4.6 web servers - ocfs cluster, web root inside mounted LUN
from SAN storage
2 MariaDB 5.5 servers - galera cluster, different network segment than
nginx web servers

nginx servers each have two quad-core Xeon sockets and 128 GB RAM,
load balanced via an F5 load balancer (round-robin, http performance).

Based on my setup above, what options should I use with siege to perform
a load test of at least 5000 concurrent users?

There are times when thousands of students storm the university's web
application.

Below is the result for 300 concurrent users.

# siege -c 300 -q -t 1m domain.com

siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc

Transactions: 370 hits
Availability:   25.38 %
Elapsed time:   47.06 secs
Data transferred:4.84 MB
Response time:   20.09 secs
Transaction rate:7.86 trans/sec
Throughput:0.10 MB/sec
Concurrency:  157.98
Successful transactions: 370
Failed transactions:1088
Longest transaction:   30.06
Shortest transaction:0.00

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,257373,257373#msg-257373

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Image serving via nginx are too slow, why ?

2014-08-26 Thread Lucas Rolff

Takes me 2.65 seconds to load the PDF with no caching.

tristanb wrote:

Thanks for your message,

I applied your patch, restarted varnish, nginx and php5-fpm, and it's
still the same.
Browsing with the browser cache off, it feels like the images are
downloaded and displayed progressively because of the slowness.

Another example is this 3 MB PDF, which takes 3 minutes to display:
http://goo.gl/og3xG5

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,252816,252820#msg-252820

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Image serving via nginx are too slow, why ?

2014-08-26 Thread Lucas Rolff
I've been testing from a 10 Mbit connection in the Netherlands, a 100 Mbit
connection in the Netherlands, a 500 Mbit connection in the Netherlands,
a 500 Mbit connection in France, a 100 Mbit connection in France, a
250 Mbit connection in France, and a 20 Mbit connection in the UK.

I can ask people from Denmark to do the same test.
But it seems rather fast from all connections I've tested.

- Lucas R

tristanb wrote:

Damn, I tested this on 3 different connections from 3 different providers
(all based in France though, where the servers are):
- 20 Mbps ADSL by Orange
- 1 Gbps fiber by Free
- 50 Mbps fiber by SFR

Where are you based, and what are your connection specs, please?

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,252816,252822#msg-252822

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: Image serving via nginx are too slow, why ?

2014-08-26 Thread Lucas Rolff

NL example:

mtr admin.yproximite.fr

HOST: server1                     Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. hosted.by.leaseweb.com         0.0%     2    0.5   0.5   0.5   0.5   0.0
  2. te0-7-0-3.hvc3.evo.leaseweb.   0.0%     2    0.7   0.7   0.6   0.7   0.1
  3. ix-5-1-1-0.thar1.HNN-Amsterd   0.0%     2    0.3   0.3   0.3   0.3   0.0
  4. if-10-2.tcore2.AV2-Amsterdam   0.0%     2    1.3   1.3   1.3   1.3   0.0
  5. if-2-2.tcore1.AV2-Amsterdam.   0.0%     2    2.9   2.5   2.1   2.9   0.6
  6. be3044.agr21.ams03.atlas.cog   0.0%     2    2.9   4.1   2.9   5.4   1.8
  7. be2440.ccr42.ams03.atlas.cog   0.0%     2    2.0   4.8   2.0   7.5   3.9
  8. be2266.ccr42.par01.atlas.cog   0.0%     2   11.3  12.5  11.3  13.8   1.7
  9. be2309.ccr21.par04.atlas.cog   0.0%     2   11.5  11.5  11.5  11.5   0.0
 10. 149.6.164.222                  0.0%     2   13.6  13.7  13.6  13.8   0.1
 11. dedibox-1-t.intf.routers.pro   0.0%     2   13.4  13.5  13.4  13.6   0.2
 12. 49e-s46-1-a9k2.dc3.poneytele   0.0%     2   12.5  12.1  11.7  12.5   0.6
 13. 88-190-234-137.rev.poneytele   0.0%     2   13.2  13.0  12.8  13.2   0.2


France:
HOST: minecraft                   Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 5.135.139.252                  0.0%     2    0.4   0.3   0.3   0.4   0.1
  2. rbx-g1-a9.fr.eu                0.0%     2    6.7   3.8   0.9   6.7   4.1
  3. th2-g1-a9.fr.eu                0.0%     2    4.5   4.5   4.5   4.5   0.0
  4. ???                          100.0      2    0.0   0.0   0.0   0.0   0.0
  5. ???                          100.0      2    0.0   0.0   0.0   0.0   0.0
  6. cbv-crs8-1-be1005.routers.pr   0.0%     2    8.0   6.8   5.6   8.0   1.7
  7. bzn-9k-4-be1005.intf.routers   0.0%     2    5.1   5.2   5.1   5.3   0.1
  8. dedibox-2-t.intf.routers.pro   0.0%     2    5.5   5.3   5.1   5.5   0.3
  9. 195.154.1.146                  0.0%     2    5.2   5.2   5.2   5.2   0.0
 10. 49e-s46-1-a9k2.dc3.poneytele   0.0%     2    5.1   5.1   5.1   5.1   0.0
 11. 88-190-234-137.rev.poneytele   0.0%     2    4.8   4.8   4.8   4.8   0.0


NL2:
HOST: Lucass-MacBook-Pro.local    Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 10.4.13.252                 0.0%     2    1.0   1.2   1.0   1.4   0.3
  2.|-- 87.255.57.222               0.0%     2    1.5   1.5   1.5   1.5   0.0
  3.|-- adm-b4-link.telia.net       0.0%     2    1.2   1.3   1.2   1.4   0.2
  4.|-- adm-bb4-link.telia.net      0.0%     2    1.3   1.4   1.3   1.4   0.1
  5.|-- adm-b5-link.telia.net       0.0%     2    1.7   2.1   1.7   2.5   0.6
  6.|-- cogent-ic-130765-adm-b3.c   0.0%     2    2.4   3.1   2.4   3.9   1.1
  7.|-- be2312.ccr42.ams03.atlas.   0.0%     2    2.9   2.9   2.9   2.9   0.0
  8.|-- be2266.ccr42.par01.atlas.   0.0%     2   12.1  12.1  12.1  12.1   0.0
  9.|-- be2309.ccr21.par04.atlas.   0.0%     2   13.7  13.9  13.7  14.1   0.3
 10.|-- 149.6.165.198               0.0%     2   12.1  12.2  12.1  12.2   0.0
 11.|-- dedibox-1-t.intf.routers.   0.0%     2   14.4  13.6  12.7  14.4   1.2
 12.|-- 49e-s46-1-a9k2.dc3.poneyt   0.0%     2   12.8  12.7  12.5  12.8   0.2
 13.|-- 88-190-234-137.rev.poneyt   0.0%     2   12.2  12.3  12.2  12.4   0.1


NL3:
HOST: api                         Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. 82.196.14.1                    0.0%     2    1.4   1.0   0.5   1.4   0.7
  2. 83.231.213.61                  0.0%     2    1.4   1.4   1.4   1.4   0.0
  3. te0-7-0-3.agr21.ams03.atlas.   0.0%     2    0.9   1.0   0.9   1.1   0.2
  4. be2434.ccr41.ams03.atlas.cog   0.0%     2    1.1   1.3   1.1   1.5   0.3
  5. be2265.ccr41.par01.atlas.cog   0.0%     2   12.0  12.0  12.0  12.0   0.1
  6. be2308.ccr21.par04.atlas.cog   0.0%     2   12.5  12.5  12.5  12.6   0.0
  7. 149.6.165.214                  0.0%     2   10.8  10.8  10.7  10.8   0.0
  8. dedibox-1-t.intf.routers.pro   0.0%     2   12.6  12.6  12.6  12.7   0.0
  9. 49e-s46-1-a9k2.dc3.poneytele   0.0%     2   11.3  11.3  11.3  11.3   0.1
 10. 88-190-234-137.rev.poneytele   0.0%     2   12.3  12.3  12.3  12.3   0.0



tristanb wrote:

A last thing: can you provide a traceroute, please?

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,252816,252828#msg-252828

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff

Well, it used to work before 1.6.0..

For me 
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header 
shows that I should do:


proxy_pass_header Cache-Control;

So that should be correct

Best regards,
Lucas Rolff

Jonathan Matthews wrote:


On 1 Jul 2014 07:58, Lucas Rolff <lu...@slcoding.com> wrote:


 Hi guys,

 I'm currently running nginx version 1.6.0 (after upgrading from 1.4.4).

 Sadly I've found out that after upgrading, proxy_pass_header seems to
stop working, meaning no headers are passed from the upstream at all


You need to read the proxy_pass_header and proxy_hide_header reference 
documentation. You're using it wrongly, possibly because you've 
assumed it takes generic parameters instead of very specific ones.


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff
So.. Where is the thing that states I can't use proxy_pass_header 
cache-control, or expires? :)))


Maybe I'm just stupid

Best regards,
Lucas Rolff

Jonathan Matthews wrote:


On 1 Jul 2014 10:34, Lucas Rolff <lu...@slcoding.com> wrote:


 Do you have a link to a documentation that has info about this then? 
Because in the below link, and in 
http://wiki.nginx.org/HttpProxyModule#proxy_pass_header theres nothing 
about what it accepts.


How about the doc you already found, and then the link that it contains:

 On 1 Jul 2014 10:20, Lucas Rolff <lu...@slcoding.com> wrote:
  For me 
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff
I've verified that 1.4.4 works as it should - I receive the cache-control
and expires headers sent from the upstream (Apache 2.4 in this case);
upgrading to nginx 1.6.0 breaks this - no config changes, nothing.


But thanks for the explanation Robert!
I'll try investigate it further to see if I can find the root cause, 
since for me this is very odd that it's suddenly not sent to the client 
anymore.


Best regards,
Lucas Rolff

Robert Paprocki wrote:

Can we move past passive aggressive posting to a public mailing list and
actually try to accomplish something?

The nginx docs indicate the following about proxy_pass_header

Permits passing otherwise disabled header fields from a proxied server
to a client.

'otherwise disabled header fields' are documented as the following (from
proxy_hide_header docs):

By default, nginx does not pass the header fields “Date”, “Server”,
“X-Pad”, and “X-Accel-...” from the response of a proxied server to a
client.

So I don't know why you would need to have proxy_pass_header
Cache-Control in the first place, since this wouldn't seem to be dropped
by default from the response of a proxied server to a client.
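
In other words, something like the below sketch is the intended use, while
Cache-Control should flow through without any directive at all (upstream
name is illustrative):

    location / {
        proxy_pass        http://backend;
        proxy_pass_header Server;   # re-enable a header nginx hides by default
        proxy_pass_header X-Pad;
        # no directive needed for Cache-Control - it isn't hidden by default
    }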

Have you tried downgrading back to 1.4.4 to confirm whatever problem
you're having doesn't exist within some other part of your
infrastructure that was potentially changed as part of your upgrade?


On 07/01/2014 01:09 AM, Jonathan Matthews wrote:

On 1 Jul 2014 11:01, Lucas Rolff <lu...@slcoding.com> wrote:

So.. Where is the thing that states I can't use proxy_pass_header

cache-control, or expires? :)))

The proxy_hide_header and  proxy_pass_header reference docs.



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff
I've been investigating, and it seems like it's related to 1.6 or so -
because 1.4.2 and 1.4.4 work perfectly with the config in the first email.

Can anyone possibly reproduce this as well?

Best regards,
Lucas R

Robert Paprocki wrote:

Can we move past passive aggressive posting to a public mailing list and
actually try to accomplish something?

The nginx docs indicate the following about proxy_pass_header

Permits passing otherwise disabled header fields from a proxied server
to a client.

'otherwise disabled header fields' are documented as the following (from
proxy_hide_header docs):

By default, nginx does not pass the header fields “Date”, “Server”,
“X-Pad”, and “X-Accel-...” from the response of a proxied server to a
client.

So I don't know why you would need to have proxy_pass_header
Cache-Control in the first place, since this wouldn't seem to be dropped
by default from the response of a proxied server to a client.

Have you tried downgrading back to 1.4.4 to confirm whatever problem
you're having doesn't exist within some other part of your
infrastructure that was potentially changed as part of your upgrade?


On 07/01/2014 01:09 AM, Jonathan Matthews wrote:

On 1 Jul 2014 11:01, Lucas Rolff <lu...@slcoding.com> wrote:

So.. Where is the thing that states I can't use proxy_pass_header

cache-control, or expires? :)))

The proxy_hide_header and  proxy_pass_header reference docs.



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx



___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff

nginx:

curl -I http://domain.com/wp-content/uploads/2012/05/forside.png
HTTP/1.1 200 OK
Server: nginx
Date: Tue, 01 Jul 2014 10:42:06 GMT
Content-Type: image/png
Content-Length: 87032
Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT
Connection: keep-alive
Vary: Accept-Encoding
ETag: "51399b28-153f8"
Accept-Ranges: bytes

Backend:

curl -I http://domain.com:8081/wp-content/uploads/2012/05/forside.png
HTTP/1.1 200 OK
Date: Tue, 01 Jul 2014 10:42:30 GMT
Server: Apache
Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT
Accept-Ranges: bytes
Content-Length: 87032
Cache-Control: max-age=2592000
Expires: Thu, 31 Jul 2014 10:42:30 GMT
Content-Type: image/png

So backend returns the headers just fine.

Best regards,
Lucas Rolff


Valentin V. Bartenev wrote:

On Tuesday 01 July 2014 10:30:47 Lucas Rolff wrote:

I've verified that 1.4.4 works as it should, I receive the cache-control
and expires headers sent from upstream (Apache 2.4 in this case),
upgrading to nginx 1.6.0 breaks this, no config changes, nothing.

But thanks for the explanation Robert!
I'll try investigate it further to see if I can find the root cause,
since for me this is very odd that it's suddenly not sent to the client
anymore.


[..]

They may not be sent because your backend stopped returning them for some
reason.  Try to investigate what happens on the wire between your backend
and nginx.

   wbr, Valentin V. Bartenev

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: proxy_pass_header not working in 1.6.0

2014-07-01 Thread Lucas Rolff
But if files were served from the backend, I would assume the
$upstream_response_time variable in nginx would return something other
than a dash in 1.4.4.

Like this, using the log format
'"$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"
$request_time $upstream_response_time';


"GET /css/colors.css HTTP/1.1" 304 0 "http://viewabove.dk/?page_id=2"
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36" 0.000 -


Again, the configs are exactly the same - same operating system, same
permissions, same site - so it seems odd to me, especially because nothing
has been listed in the change logs about this 'fix'; it worked in earlier
versions, and content was actually served by nginx, even when it did fetch
headers from the backend.


Best regards,
Lucas Rolff

Valentin V. Bartenev wrote:

On Tuesday 01 July 2014 14:33:54 Lucas Rolff wrote:

Hmm, okay..

Then I'll go back to an old buggy version of nginx which gives me the
possibility to use the headers from Backend!


[..]

It doesn't do this either.  Probably, it just has a different
configuration or permissions, which results in try_files always failing,
and all requests being served from your backend.

   wbr, Valentin V. Bartenev

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

proxy_pass_header not working in 1.6.0

2014-06-30 Thread Lucas Rolff

Hi guys,

I'm currently running nginx version 1.6.0 (after upgrading from 1.4.4).

Sadly I've found out that after upgrading, proxy_pass_header seems to
have stopped working, meaning no headers are passed from the upstream at
all. I've tried setting caching headers, expires headers, removing ETag,
etc., but nothing seems to go through.


I then wanted to test it on other machines, because it could be a faulty
installation, but I can replicate it on 3 different machines. I always get
my releases from https://github.com/nginx/nginx/releases.


My config looks as following:

https://gist.github.com/lucasRolff/c4a359d93b5906678a23

Do you guys know what could be wrong, and whether there is a fix for it in
any newer version of nginx, or if I should downgrade to 1.4.4 again (where
I know it works, at least)?


Thanks in advance!

Best regards,
Lucas Rolff

___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx