Re: [PATCH] HTTP/2: make http2 server support http1

2018-06-13 Thread 吕海涛
hello?

Sent from my iPhone

> On Apr 2, 2018, at 08:28, Haitao Lv  wrote:
> 
> Is anybody here?
> 
>> On Mar 21, 2018, at 11:36, Haitao Lv  wrote:
>> 
>> Thank you for reviewing.
>> 
>> And here is the patch that fixes the broken PROXY protocol functionality.
>> 
>> Sorry for the disturbance.
>> 
>> diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
>> index 2db7a627..9f1b8544 100644
>> --- a/src/http/ngx_http_request.c
>> +++ b/src/http/ngx_http_request.c
>> @@ -17,6 +17,10 @@ static ssize_t ngx_http_read_request_header(ngx_http_request_t *r);
>> static ngx_int_t ngx_http_alloc_large_header_buffer(ngx_http_request_t *r,
>>ngx_uint_t request_line);
>> 
>> +#if (NGX_HTTP_V2)
>> +static void ngx_http_wait_v2_preface_handler(ngx_event_t *rev);
>> +#endif
>> +
>> static ngx_int_t ngx_http_process_header_line(ngx_http_request_t *r,
>>ngx_table_elt_t *h, ngx_uint_t offset);
>> static ngx_int_t ngx_http_process_unique_header_line(ngx_http_request_t *r,
>> @@ -325,7 +329,7 @@ ngx_http_init_connection(ngx_connection_t *c)
>> 
>> #if (NGX_HTTP_V2)
>>if (hc->addr_conf->http2) {
>> -rev->handler = ngx_http_v2_init;
>> +rev->handler = ngx_http_wait_v2_preface_handler;
>>}
>> #endif
>> 
>> @@ -381,6 +385,131 @@ ngx_http_init_connection(ngx_connection_t *c)
>> }
>> 
>> 
>> +#if (NGX_HTTP_V2)
>> +static void
>> +ngx_http_wait_v2_preface_handler(ngx_event_t *rev)
>> +{
>> +size_t                  size;
>> +ssize_t                 n;
>> +u_char                 *p;
>> +ngx_buf_t              *b;
>> +ngx_connection_t       *c;
>> +ngx_http_connection_t  *hc;
>> +static const u_char     preface[] = "PRI";
>> +
>> +c = rev->data;
>> +hc = c->data;
>> +
>> +size = sizeof(preface) - 1;
>> +
>> +if (hc->proxy_protocol) {
>> +size += NGX_PROXY_PROTOCOL_MAX_HEADER;
>> +}
>> +
>> +ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
>> +"http wait h2 preface handler");
>> +
>> +if (rev->timedout) {
>> +ngx_log_error(NGX_LOG_INFO, c->log, NGX_ETIMEDOUT, "client timed out");
>> +ngx_http_close_connection(c);
>> +return;
>> +}
>> +
>> +if (c->close) {
>> +ngx_http_close_connection(c);
>> +return;
>> +}
>> +
>> +b = c->buffer;
>> +
>> +if (b == NULL) {
>> +b = ngx_create_temp_buf(c->pool, size);
>> +if (b == NULL) {
>> +ngx_http_close_connection(c);
>> +return;
>> +}
>> +
>> +c->buffer = b;
>> +
>> +} else if (b->start == NULL) {
>> +
>> +b->start = ngx_palloc(c->pool, size);
>> +if (b->start == NULL) {
>> +ngx_http_close_connection(c);
>> +return;
>> +}
>> +
>> +b->pos = b->start;
>> +b->last = b->start;
>> +b->end = b->last + size;
>> +}
>> +
>> +n = c->recv(c, b->last, b->end - b->last);
>> +
>> +if (n == NGX_AGAIN) {
>> +
>> +if (!rev->timer_set) {
>> +ngx_add_timer(rev, c->listening->post_accept_timeout);
>> +ngx_reusable_connection(c, 1);
>> +}
>> +
>> +if (ngx_handle_read_event(rev, 0) != NGX_OK) {
>> +ngx_http_close_connection(c);
>> +return;
>> +}
>> +
>> +/*
>> + * We are trying to not hold c->buffer's memory for an idle connection.
>> + */
>> +
>> +if (ngx_pfree(c->pool, b->start) == NGX_OK) {
>> +b->start = NULL;
>> +}
>> +
>> +return;
>> +}
>> +
>> +if (n == NGX_ERROR) {
>> +ngx_http_close_connection(c);
>> +return;
>> +}
>> +
>> +if (n == 0) {
>> +ngx_log_error(NGX_LOG_INFO, c->log, 0,
>> +  "client closed connection");
>> +ngx_http_close_connection(c);
>> +return;
>> +}
>> +
>> +b->last += n;
>> +
>> +if (hc->proxy_protocol) {
>> +hc->proxy_protocol = 0;
>> +
>> +p = ngx_proxy_protocol_read(c, b->pos, b->last);
>> +
>> +if (p == NULL) {
>> +ngx_http_close_connection(c);
>> +return;
>> +}
>> +
>> +b->pos = p;
>> +}
>> +
>> +if (b->last >= b->pos + sizeof(preface) - 1) {
>> +/* b will be freed in ngx_http_v2_init/ngx_http_wait_request_handler */
>> +
>> +if (ngx_strncmp(b->pos, preface, sizeof(preface) - 1) == 0) {
>> +ngx_http_v2_init(rev);
>> +} else {
>> +rev->handler = ngx_http_wait_request_handler;
>> +ngx_http_wait_request_handler(rev);
>> +}
>> +}
>> +}
>> +#endif
>> +
>> +
>> static void
>> ngx_http_wait_request_handler(ngx_event_t *rev)
>> {
>> @@ -393,6 +522,7 @@ ngx_http_wait_request_handler(ngx_event_t *rev)
>>ngx_http_core_srv_conf_t  *cscf;
>> 
>>c = rev->data;
>> +n = NGX_AGAIN;
>> 
>>ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http wait request handler");
>> 
>> @@ -434,9 +564,27 @@ 

Re: Should listen *:443 bind to IPv4 and IPv6 ?

2018-06-13 Thread Maxim Dounin
Hello!

On Wed, Jun 13, 2018 at 05:10:51PM +0200, Ralph Seichter wrote:

> On 13.06.18 14:19, Maxim Dounin wrote:
> 
> > The "listen *:443" snippet always created only IPv4 listening socket.
> 
> That's interesting. Maybe Gentoo Linux did indeed add a custom patch to
> previous nginx versions.
> 
> What is the shortest officially recommended way to bind nginx to port
> 443 for both IPv4 and IPv6? I should probably mention that my servers
> usually service multiple domains using TLS SNI.
> 
>   server {
> listen *:443 ssl;
> listen [::]:443;
>   }
> 
> works, but perhaps there is a method with just one listen statement?

Using 

listen 443 ssl;
listen [::]:443 ssl;

should be good enough.

While it is possible to use just one listen statement with an IPv6 
address and "ipv6only=off", I would rather recommend using an 
explicit configuration with two distinct listening sockets.
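
For illustration, a minimal sketch of the two variants discussed above; the 
server_name and certificate paths are placeholders, not taken from this thread:

    # Recommended: two distinct listening sockets.
    server {
        listen 443 ssl;          # IPv4 (0.0.0.0:443)
        listen [::]:443 ssl;     # IPv6 ([::]:443)
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
    }

    # Possible alternative: a single dual-stack socket.  With ipv6only=off
    # the IPv6 socket also accepts IPv4 clients (as IPv4-mapped addresses),
    # subject to the OS allowing it; the two-socket form above is what is
    # recommended in this thread.
    server {
        listen [::]:443 ssl ipv6only=off;
        server_name example.com;
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
    }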

-- 
Maxim Dounin
http://mdounin.ru/


Re: status/usage of FRiCKLE/ngx_cache_purge. still reliable? alternatives?

2018-06-13 Thread PGNet Dev

Hi

On 6/12/18 12:03 AM, Andrei wrote:
> - The sheer amount of added context switches (proxying was done local on 
> a cPanel box, seeing 20-30k reqs/sec during peak hours)


Not clear what you mean here

> - Having to manage two software versions, configs, auto config builders 
> used by internal tools, etc


Not a huge headache here.  I can see this possibly getting annoying at 
scale with the # of sites.



> - More added headaches with central logging


Having Varnish's detailed logging is a big plus, IME, for tracking down 
cache issues, specifically, and header issues in general.


No issues with 'central' logging.


> - No projected TLS support in Varnish


Having a terminator out front hasn't been a problem, save for the 
additional config considerations.


> - Bare minimum H2 support in Varnish vs a more mature implementation in 
> Nginx


This one I'm somewhat aware of -- haven't yet convinced myself of 
if/where there's a really problematic bottleneck.



> Since Nginx can pretty much do everything Varnish does, and more,


Except for the richness of the VCL ...

> I decided to avoid the headaches and just jump over to Nginx (even though 
> I've been an avid Varnish fan since 2.1.5). As for a VCL replacement and 
> purging in Nginx, I suggest reading up on Lua and checking out openresty 
> if you want streamlined updates and don't want to manually 
> compile/manage modules. To avoid overloading the filesystem with added 
> I/O from purge requests/scans/etc, I wrote a simple Perl script that 
> handles all the PURGE requests in order to have regex support and 
> control over the removals (it basically validates ownership to purge on 
> the related domain, queues removals, then has another thread for the 
> cleanup).


My main problem so far is that WordPress appears to be generally 
Varnish-UNfriendly.


Not core, but plugins.  With Varnish, I'm having all SORTS of 
issues/artifacts cropping up.  So far, (my) VCL pass exceptions haven't 
been sufficient.


Without Varnish, there are far fewer 'surprises'.

Then again, I'm not a huge WP fan to begin with; it's a pain to debug 
anything beyond standard server config issues.  Caching in particular.


OTOH, my sites with Nginx+Varnish and Symfony work without a hitch.

My leaning is, for WP, Nginx only.  For SF, Nginx+Varnish.  And, TBH, 
avoiding WP if/when I can.



> Hope this helps some :)


It does, thx!


[njs] http internalRedirect() method.

2018-06-13 Thread Dmitry Volyntsev
details:   http://hg.nginx.org/njs/rev/c939541c37bc
branches:  
changeset: 535:c939541c37bc
user:  Dmitry Volyntsev 
date:  Wed Jun 13 14:15:43 2018 +0300
description:
http internalRedirect() method.

Performs an internal redirect to the specified uri.

req.internalRedirect():
uri - string. If uri starts with '@' it is considered a
named location.
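
For illustration only, a minimal njs handler using the new method; the 
function name, the URIs and the js_content wiring are assumptions made for 
this sketch, not part of the changeset:

    function redirector(req) {
        // hooked up in nginx.conf with:  js_content redirector;
        if (req.uri == '/old') {
            req.internalRedirect('/new');        // internal redirect to a uri
        } else {
            req.internalRedirect('@fallback');   // leading '@': named location
        }
    }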

diffstat:

 nginx/ngx_http_js_module.c |  92 +-
 1 files changed, 82 insertions(+), 10 deletions(-)

diffs (158 lines):

diff -r bf3d32cc6716 -r c939541c37bc nginx/ngx_http_js_module.c
--- a/nginx/ngx_http_js_module.cWed Jun 13 14:11:58 2018 +0300
+++ b/nginx/ngx_http_js_module.cWed Jun 13 14:15:43 2018 +0300
@@ -31,6 +31,7 @@ typedef struct {
 ngx_uint_t           done;
 ngx_int_t            status;
 njs_opaque_value_t   request_body;
+ngx_str_t            redirect_uri;
 } ngx_http_js_ctx_t;
 
 
@@ -51,6 +52,8 @@ typedef struct {
 static ngx_int_t ngx_http_js_content_handler(ngx_http_request_t *r);
 static void ngx_http_js_content_event_handler(ngx_http_request_t *r);
 static void ngx_http_js_content_write_event_handler(ngx_http_request_t *r);
+static void ngx_http_js_content_finalize(ngx_http_request_t *r,
+ngx_http_js_ctx_t *ctx);
 static ngx_int_t ngx_http_js_variable(ngx_http_request_t *r,
 ngx_http_variable_value_t *v, uintptr_t data);
 static ngx_int_t ngx_http_js_init_vm(ngx_http_request_t *r);
@@ -89,6 +92,8 @@ static njs_ret_t ngx_http_js_ext_finish(
 nxt_uint_t nargs, njs_index_t unused);
 static njs_ret_t ngx_http_js_ext_return(njs_vm_t *vm, njs_value_t *args,
 nxt_uint_t nargs, njs_index_t unused);
+static njs_ret_t ngx_http_js_ext_internal_redirect(njs_vm_t *vm,
+njs_value_t *args, nxt_uint_t nargs, njs_index_t unused);
 
 static njs_ret_t ngx_http_js_ext_log(njs_vm_t *vm, njs_value_t *args,
 nxt_uint_t nargs, njs_index_t unused);
@@ -589,6 +594,18 @@ static njs_external_t  ngx_http_js_ext_r
   NULL,
   ngx_http_js_ext_return,
   0 },
+
+{ nxt_string("internalRedirect"),
+  NJS_EXTERN_METHOD,
+  NULL,
+  0,
+  NULL,
+  NULL,
+  NULL,
+  NULL,
+  NULL,
+  ngx_http_js_ext_internal_redirect,
+  0 },
 };
 
 
@@ -683,8 +700,9 @@ ngx_http_js_content_event_handler(ngx_ht
 }
 
 /*
- * status is expected to be overriden by finish() or return() methods,
- * otherwise the content handler is considered invalid.
+ * status is expected to be overriden by finish(), return() or
+ * internalRedirect() methods, otherwise the content handler is
+ * considered invalid.
  */
 
 ctx->status = NGX_HTTP_INTERNAL_SERVER_ERROR;
@@ -704,10 +722,7 @@ ngx_http_js_content_event_handler(ngx_ht
 return;
 }
 
-ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
-   "http js content rc: %i", ctx->status);
-
-ngx_http_finalize_request(r, ctx->status);
+ngx_http_js_content_finalize(r, ctx);
 }
 
 
@@ -725,10 +740,7 @@ ngx_http_js_content_write_event_handler(
 ctx = ngx_http_get_module_ctx(r, ngx_http_js_module);
 
 if (!njs_vm_pending(ctx->vm)) {
-ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
-   "http js content rc: %i", ctx->status);
-
-ngx_http_finalize_request(r, ctx->status);
+ngx_http_js_content_finalize(r, ctx);
 return;
 }
 
@@ -764,6 +776,28 @@ ngx_http_js_content_write_event_handler(
 }
 
 
+static void
+ngx_http_js_content_finalize(ngx_http_request_t *r, ngx_http_js_ctx_t *ctx)
+{
+ngx_str_t  args;
+
+ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+   "http js content rc: %i", ctx->status);
+
+if (ctx->redirect_uri.len) {
+if (ctx->redirect_uri.data[0] == '@') {
+ngx_http_named_location(r, &ctx->redirect_uri);
+
+} else {
+ngx_http_split_args(r, &ctx->redirect_uri, &args);
+ngx_http_internal_redirect(r, &ctx->redirect_uri, &args);
+}
+}
+
+ngx_http_finalize_request(r, ctx->status);
+}
+
+
 static ngx_int_t
 ngx_http_js_variable(ngx_http_request_t *r, ngx_http_variable_value_t *v,
 uintptr_t data)
@@ -1391,6 +1425,44 @@ ngx_http_js_ext_return(njs_vm_t *vm, njs
 
 
 static njs_ret_t
+ngx_http_js_ext_internal_redirect(njs_vm_t *vm, njs_value_t *args,
+nxt_uint_t nargs, njs_index_t unused)
+{
+nxt_str_t            uri;
+ngx_http_js_ctx_t   *ctx;
+ngx_http_request_t  *r;
+
+if (nargs < 2) {
+njs_vm_error(vm, "too few arguments");
+return NJS_ERROR;
+}
+
+r = njs_value_data(njs_argument(args, 0));
+
+ctx = ngx_http_get_module_ctx(r, ngx_http_js_module);
+
+if (njs_vm_value_to_ext_string(vm, &uri, njs_argument(args, 1), 0)
+== NJS_ERROR)
+{
+njs_vm_error(vm, "failed to convert uri arg");
+return NJS_ERROR;
+}
+
+if (uri.length == 0) {
+njs_vm_error(vm, "uri is empty");
+   

[nginx] Upstream: disable body cleanup with preserve_output (ticket #1565).

2018-06-13 Thread Maxim Dounin
details:   http://hg.nginx.org/nginx/rev/a10e5fe44762
branches:  
changeset: 7297:a10e5fe44762
user:  Maxim Dounin 
date:  Wed Jun 13 15:28:11 2018 +0300
description:
Upstream: disable body cleanup with preserve_output (ticket #1565).

With u->conf->preserve_output set the request body file might be used
after the response header is sent, so avoid cleaning it.  (Normally
this is not a problem as u->conf->preserve_output is only set with
r->request_body_no_buffering, but the request body might be already
written to a file in a different context.)

diffstat:

 src/http/ngx_http_upstream.c |  3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diffs (13 lines):

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -2901,7 +2901,8 @@ ngx_http_upstream_send_response(ngx_http
 }
 
 if (r->request_body && r->request_body->temp_file
-&& r == r->main && !r->preserve_body)
+&& r == r->main && !r->preserve_body
+&& !u->conf->preserve_output)
 {
 ngx_pool_run_cleanup_file(r->pool, r->request_body->temp_file->file.fd);
 r->request_body->temp_file->file.fd = NGX_INVALID_FILE;


Re: Nginx crashing with image filter and cache enabled

2018-06-13 Thread Maxim Dounin
Hello!

On Mon, Jun 11, 2018 at 08:53:49AM -0400, ayman wrote:

> When enabling the cache with the image filter, nginx workers crash and I
> keep getting 500s.
> 
> I'm using Nginx 1.14.0
> 
> error log:
> 2018/06/11 12:30:49 [alert] 46105#0: worker process 46705 exited on signal
> 11 (core dumped)
> 
> proxy_cache_path /opt/nginx/img-cache/resized levels=1:2
> keys_zone=resizedimages:10m max_size=3G;
> 
> location ~ ^/resize/(\d+)x(\d+)/(.*) {
> proxy_pass  https://proxypass/$3;
> proxy_cache resizedimages;
> proxy_cache_key "$host$document_uri";
> proxy_temp_path off;
> proxy_cache_valid 200 1d;
> proxy_cache_valid any 1m;
> proxy_cache_use_stale error timeout invalid_header
> updating;
> 
> image_filterresize $1 $2;
> image_filter_jpeg_quality   90;
> image_filter_buffer 20M;
> image_filter_interlace  on;
> 
> }
> 
> If i disable the cache it's working perfectly!
> 
> Do you recommend to change anything in the config? What could be the issue?

You may want to provide "nginx -V" output, backtrace as obtained 
from the core dump, and details on the GD library used.

-- 
Maxim Dounin
http://mdounin.ru/


Re: Should listen *:443 bind to IPv4 and IPv6 ?

2018-06-13 Thread Maxim Dounin
Hello!

On Wed, Jun 13, 2018 at 11:01:09AM +0200, Ralph Seichter wrote:

> I wonder if I missed an announcement for a change in nginx behaviour
> or if some local issue is causing me problems. The configuration
> 
>   server {
> listen *:443 ssl default_server;
>   }
> 
> used to bind to both 0.0.0.0:443 and [::]:443, but since I updated to
> nginx 1.15.0 it only binds to IPv4 but no longer to IPv6. When I add
> a second listen directive
> 
>   server {
> listen *:443 ssl default_server;
> listen [::]:443 ssl default_server;
>   }
> 
> the server can be reached via both IPv6 and IPv4 again. Was this a
> deliberate change?

The "listen *:443" snippet always created only IPv4 listening 
socket.  Though I think I've seen some distributions patching 
nginx to create IPv6+IPv4 sockets instead.

-- 
Maxim Dounin
http://mdounin.ru/


[njs] Merged HTTP Response and Reply into Request.

2018-06-13 Thread Dmitry Volyntsev
details:   http://hg.nginx.org/njs/rev/bf3d32cc6716
branches:  
changeset: 534:bf3d32cc6716
user:  Dmitry Volyntsev 
date:  Wed Jun 13 14:11:58 2018 +0300
description:
Merged HTTP Response and Reply into Request.

Splitting HTTP functionality into 3 objects Request, Response and Reply
introduced a lot of confusion as to which method should belong to which object.

New members of Request:
- req.status (res.status)
- req.parent (reply.parent)
- req.requestBody (req.body)
- req.responseBody (reply.body)
- req.headersIn (req.headers)
- req.headersOut (res.headers)
- req.sendHeader() (res.sendHeader())
- req.send() (res.send())
- req.finish() (res.finish())
- req.return() (res.return())

Deprecated members of Request:
- req.body (use req.requestBody or req.responseBody)
- req.headers (use req.headersIn or req.headersOut)
- req.response

Response remains in place for backward compatibility and will be removed in
the following releases.  Reply is replaced with Request in the req.subrequest()
callback.  The deprecated properties will also be removed in the following
releases.
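
A rough before/after sketch of what the renaming means for handler code (the 
handler body and header value are illustrative, not taken from the changeset):

    // Before: response members lived on the separate res object.
    function content(req, res) {
        res.headers['Content-Type'] = 'text/plain';
        res.status = 200;
        res.sendHeader();
        res.send('ok');
        res.finish();
    }

    // After: the same members are reachable from the request object.
    function content(req) {
        req.headersOut['Content-Type'] = 'text/plain';
        req.status = 200;
        req.sendHeader();
        req.send('ok');
        req.finish();
    }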

diffstat:

 nginx/ngx_http_js_module.c |  245 +++-
 1 files changed, 151 insertions(+), 94 deletions(-)

diffs (389 lines):

diff -r bc3f64aab9f9 -r bf3d32cc6716 nginx/ngx_http_js_module.c
--- a/nginx/ngx_http_js_module.cTue Jun 05 15:21:20 2018 +0300
+++ b/nginx/ngx_http_js_module.cWed Jun 13 14:11:58 2018 +0300
@@ -16,7 +16,6 @@ typedef struct {
 njs_vm_t            *vm;
 const njs_extern_t  *req_proto;
 const njs_extern_t  *res_proto;
-const njs_extern_t  *rep_proto;
 } ngx_http_js_main_conf_t;
 
 
@@ -106,6 +105,10 @@ static njs_ret_t ngx_http_js_ext_get_rem
 njs_value_t *value, void *obj, uintptr_t data);
 static njs_ret_t ngx_http_js_ext_get_request_body(njs_vm_t *vm,
 njs_value_t *value, void *obj, uintptr_t data);
+static njs_ret_t ngx_http_js_ext_get_headers(njs_vm_t *vm, njs_value_t *value,
+void *obj, uintptr_t data);
+static njs_ret_t ngx_http_js_ext_foreach_headers(njs_vm_t *vm, void *obj,
+void *next); /*FIXME*/
 static njs_ret_t ngx_http_js_ext_get_header_in(njs_vm_t *vm, njs_value_t *value,
 void *obj, uintptr_t data);
 static njs_ret_t ngx_http_js_ext_foreach_header_in(njs_vm_t *vm, void *obj,
@@ -359,10 +362,34 @@ static njs_external_t  ngx_http_js_ext_r
   NULL,
   0 },
 
+{ nxt_string("parent"),
+  NJS_EXTERN_PROPERTY,
+  NULL,
+  0,
+  ngx_http_js_ext_get_parent,
+  NULL,
+  NULL,
+  NULL,
+  NULL,
+  NULL,
+  0 },
+
 { nxt_string("body"),
   NJS_EXTERN_PROPERTY,
   NULL,
   0,
+  ngx_http_js_ext_get_reply_body,
+  NULL,
+  NULL,
+  NULL,
+  NULL,
+  NULL,
+  0 },
+
+{ nxt_string("requestBody"),
+  NJS_EXTERN_PROPERTY,
+  NULL,
+  0,
   ngx_http_js_ext_get_request_body,
   NULL,
   NULL,
@@ -371,10 +398,34 @@ static njs_external_t  ngx_http_js_ext_r
   NULL,
   0 },
 
+{ nxt_string("responseBody"),
+  NJS_EXTERN_PROPERTY,
+  NULL,
+  0,
+  ngx_http_js_ext_get_reply_body,
+  NULL,
+  NULL,
+  NULL,
+  NULL,
+  NULL,
+  0 },
+
 { nxt_string("headers"),
   NJS_EXTERN_OBJECT,
   NULL,
   0,
+  ngx_http_js_ext_get_headers,
+  NULL,
+  NULL,
+  ngx_http_js_ext_foreach_headers,
+  ngx_http_js_ext_next_header,
+  NULL,
+  0 },
+
+{ nxt_string("headersIn"),
+  NJS_EXTERN_OBJECT,
+  NULL,
+  0,
   ngx_http_js_ext_get_header_in,
   NULL,
   NULL,
@@ -407,6 +458,30 @@ static njs_external_t  ngx_http_js_ext_r
   NULL,
   0 },
 
+{ nxt_string("status"),
+  NJS_EXTERN_PROPERTY,
+  NULL,
+  0,
+  ngx_http_js_ext_get_status,
+  ngx_http_js_ext_set_status,
+  NULL,
+  NULL,
+  NULL,
+  NULL,
+  offsetof(ngx_http_request_t, headers_out.status) },
+
+{ nxt_string("headersOut"),
+  NJS_EXTERN_OBJECT,
+  NULL,
+  0,
+  ngx_http_js_ext_get_header_out,
+  ngx_http_js_ext_set_header_out,
+  NULL,
+  ngx_http_js_ext_foreach_header_out,
+  ngx_http_js_ext_next_header,
+  NULL,
+  0 },
+
 { nxt_string("response"),
   NJS_EXTERN_PROPERTY,
   NULL,
@@ -466,105 +541,53 @@ static njs_external_t  ngx_http_js_ext_r
   NULL,
   ngx_http_js_ext_error,
   0 },
-};
-
-
-static njs_external_t  ngx_http_js_ext_reply[] = {
-
-{ nxt_string("headers"),
-  NJS_EXTERN_OBJECT,
+
+{ nxt_string("sendHeader"),
+  NJS_EXTERN_METHOD,
   NULL,
   0,
-  ngx_http_js_ext_get_header_out,
-  NULL,
-  NULL,
-  ngx_http_js_ext_foreach_header_out,
-  ngx_http_js_ext_next_header,
-  NULL,
-  0 },
-
-{ nxt_string("status"),
-  NJS_EXTERN_PROPERTY,
-  NULL,
-  0,
-  ngx_http_js_ext_get_status,
   NULL,
   NULL,
   NULL,
   NULL,
   

Should listen *:443 bind to IPv4 and IPv6 ?

2018-06-13 Thread Ralph Seichter
Hi folks,

I wonder if I missed an announcement for a change in nginx behaviour
or if some local issue is causing me problems. The configuration

  server {
listen *:443 ssl default_server;
  }

used to bind to both 0.0.0.0:443 and [::]:443, but since I updated to
nginx 1.15.0 it only binds to IPv4 but no longer to IPv6. When I add
a second listen directive

  server {
listen *:443 ssl default_server;
listen [::]:443 ssl default_server;
  }

the server can be reached via both IPv6 and IPv4 again. Was this a
deliberate change?

-Ralph


Support for ticket #557

2018-06-13 Thread Jefferson Carpenter
Just want to show my support for allowing `autoindex` to include 
dotfiles (ticket #557).


I am relatively new to nginx, and have been using it in increasingly 
large and complex capacities recently.  Specifically, more than once I 
have now set up location blocks that basically enable directory 
browsing.  These location blocks generally look like this:


location ~ ^/git/?(.*)$ {
    root /home/aoeu/git-webserver;
    autoindex on;
    try_files /$1 /$1/ 404;
}

(That location block takes requests under the /git/ path on my domain 
and lets them browse my local /home/aoeu/git-webserver directory - 
generally, I am interested in turning a particular path on my domain 
into a file browser for a particular directory on my server.)


The problem with this is that the `autoindex on` directive skips over 
hidden (`.`) files when it generates directory listings, and it cannot 
be configured not to.


I'm still up in the air about how best to let my sites list and 
statically serve files.  Beyond simply displaying hidden (`.`) files, 
I would like to be able to configure (maybe through a regular 
expression) exactly which files are hidden; but if `autoindex on` 
displayed all files (rather than hiding `.` files), that could 
probably be done well enough by modifying the regular expression 
that my location block matches paths against.


That is all.  If anyone has ideas on plugins that could help me create 
browsable directory listings *including* all dot files, that would be 
great.  I did see 
https://www.nginx.com/resources/wiki/modules/fancy_index/, but I don't 
think it supports my full use case of mapping a specific path on my 
domain onto a specific directory on my computer.  I also saw some code 
under ticket #557 that would let me recompile nginx so that 
`autoindex on` does not skip over dot files; that's probably what 
I'll do, as the most direct way to meet my wants and needs in lieu of 
any way to do it without compiling nginx locally.
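
For reference, the behaviour in question comes down to a single check in the 
directory-scan loop of src/http/ngx_http_autoindex_module.c; from memory it 
looks roughly like the excerpt below, and a local rebuild along the lines of 
ticket #557 essentially removes or conditionalizes that check (a sketch, not 
the ticket's actual patch):

    /* inside ngx_http_autoindex_handler(), while reading directory entries */

    len = ngx_de_namelen(&dir);

    if (ngx_de_name(&dir)[0] == '.') {
        continue;    /* this is what hides dotfiles; a local build can drop
                        or guard this check to have them listed as well    */
    }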


Jefferson
