[nginx] HTTP/2: flushing of the SSL buffer in transition to the idle state.

2016-07-19 Thread Valentin Bartenev
details:   http://hg.nginx.org/nginx/rev/72282dd5884e
branches:  
changeset: 6642:72282dd5884e
user:  Valentin Bartenev 
date:  Tue Jul 19 20:34:17 2016 +0300
description:
HTTP/2: flushing of the SSL buffer in transition to the idle state.

This fixes a potential connection leak when unsent data was left in the SSL
buffer.  In particular, that could happen when a client canceled a stream
after the HEADERS frame had already been created.  In that case no other
frames might be produced, and the HEADERS frame alone didn't flush the buffer.

diffstat:

 src/http/v2/ngx_http_v2.c |  20 ++--
 1 files changed, 18 insertions(+), 2 deletions(-)

diffs (37 lines):

diff -r b5d1c17181ca -r 72282dd5884e src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c Tue Jul 19 20:34:02 2016 +0300
+++ b/src/http/v2/ngx_http_v2.c Tue Jul 19 20:34:17 2016 +0300
@@ -599,7 +599,8 @@ error:
 static void
 ngx_http_v2_handle_connection(ngx_http_v2_connection_t *h2c)
 {
-    ngx_connection_t  *c;
+    ngx_int_t                rc;
+    ngx_connection_t        *c;
     ngx_http_v2_srv_conf_t  *h2scf;
 
     if (h2c->last_out || h2c->processing) {
@@ -614,7 +615,22 @@ ngx_http_v2_handle_connection(ngx_http_v
     }
 
     if (c->buffered) {
-        return;
+        h2c->blocked = 1;
+
+        rc = ngx_http_v2_send_output_queue(h2c);
+
+        h2c->blocked = 0;
+
+        if (rc == NGX_ERROR) {
+            ngx_http_close_connection(c);
+            return;
+        }
+
+        if (rc == NGX_AGAIN) {
+            return;
+        }
+
+        /* rc == NGX_OK */
     }
 
     h2scf = ngx_http_get_module_srv_conf(h2c->http_connection->conf_ctx,
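
For reference, assembled from the hunks above, the buffered-connection path
of ngx_http_v2_handle_connection() now reads roughly as follows (an excerpt
of the patched function; the comments are explanatory and not part of the
patch):

    if (c->buffered) {
        h2c->blocked = 1;

        rc = ngx_http_v2_send_output_queue(h2c);

        h2c->blocked = 0;

        if (rc == NGX_ERROR) {
            /* the flush failed: close the connection instead of leaking it */
            ngx_http_close_connection(c);
            return;
        }

        if (rc == NGX_AGAIN) {
            /* data is still pending; ngx_http_v2_send_output_queue() has
               armed the send timer, so wait for the next write event */
            return;
        }

        /* rc == NGX_OK: the buffer is empty, safe to go idle */
    }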



[nginx] HTTP/2: fixed send timer handling.

2016-07-19 Thread Valentin Bartenev
details:   http://hg.nginx.org/nginx/rev/e78eca6bfaf0
branches:  
changeset: 6640:e78eca6bfaf0
user:  Valentin Bartenev 
date:  Tue Jul 19 20:31:09 2016 +0300
description:
HTTP/2: fixed send timer handling.

Checking the return value of c->send_chain() isn't sufficient, since data
can be left in the SSL buffer.  Now the wev->ready flag is used instead.

In particular, this fixes a connection leak in cases when all streams were
closed, but some data was still left unsent in the SSL buffer and the
client had forgotten about the connection.

diffstat:

 src/http/v2/ngx_http_v2.c |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 82efcedb310b -r e78eca6bfaf0 src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c Tue Jul 19 20:30:21 2016 +0300
+++ b/src/http/v2/ngx_http_v2.c Tue Jul 19 20:31:09 2016 +0300
@@ -549,7 +549,7 @@ ngx_http_v2_send_output_queue(ngx_http_v
         c->tcp_nodelay = NGX_TCP_NODELAY_SET;
     }
 
-    if (cl) {
+    if (!wev->ready) {
         ngx_add_timer(wev, clcf->send_timeout);
 
     } else {
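
In other words, c->send_chain() consuming the whole chain (cl == NULL) does
not guarantee that everything has reached the socket: with SSL, bytes can be
left in the SSL buffer, and when that happens because the transport could not
take them, the write event is not ready.  A sketch of the patched check, with
the rationale spelled out in comments that are not part of the patch:

    if (!wev->ready) {
        /* the chain or the SSL buffer still holds unsent data: keep the
           send timeout armed so a stalled client is eventually dropped */
        ngx_add_timer(wev, clcf->send_timeout);

    } else {
        if (wev->timer_set) {
            ngx_del_timer(wev);
        }
    }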



[nginx] HTTP/2: refactored ngx_http_v2_send_output_queue().

2016-07-19 Thread Valentin Bartenev
details:   http://hg.nginx.org/nginx/rev/b5d1c17181ca
branches:  
changeset: 6641:b5d1c17181ca
user:  Valentin Bartenev 
date:  Tue Jul 19 20:34:02 2016 +0300
description:
HTTP/2: refactored ngx_http_v2_send_output_queue().

Now it returns NGX_AGAIN if there's still data to be sent.

diffstat:

 src/http/v2/ngx_http_v2.c |  20 ++--
 1 files changed, 10 insertions(+), 10 deletions(-)

diffs (44 lines):

diff -r e78eca6bfaf0 -r b5d1c17181ca src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c Tue Jul 19 20:31:09 2016 +0300
+++ b/src/http/v2/ngx_http_v2.c Tue Jul 19 20:34:02 2016 +0300
@@ -478,7 +478,7 @@ ngx_http_v2_send_output_queue(ngx_http_v
     wev = c->write;
 
     if (!wev->ready) {
-        return NGX_OK;
+        return NGX_AGAIN;
     }
 
     cl = NULL;
@@ -549,15 +549,6 @@ ngx_http_v2_send_output_queue(ngx_http_v
         c->tcp_nodelay = NGX_TCP_NODELAY_SET;
     }
 
-    if (!wev->ready) {
-        ngx_add_timer(wev, clcf->send_timeout);
-
-    } else {
-        if (wev->timer_set) {
-            ngx_del_timer(wev);
-        }
-    }
-
     for ( /* void */ ; out; out = fn) {
         fn = out->next;
 
@@ -582,6 +573,15 @@ ngx_http_v2_send_output_queue(ngx_http_v
 
     h2c->last_out = frame;
 
+    if (!wev->ready) {
+        ngx_add_timer(wev, clcf->send_timeout);
+        return NGX_AGAIN;
+    }
+
+    if (wev->timer_set) {
+        ngx_del_timer(wev);
+    }
+
     return NGX_OK;
 
 error:
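
Assembled from the hunks above, the tail of ngx_http_v2_send_output_queue()
now reads roughly like this (an excerpt): NGX_AGAIN consistently means
"unsent data remain and the send timer is armed", while NGX_OK means the
queue has been flushed completely.

    h2c->last_out = frame;

    if (!wev->ready) {
        ngx_add_timer(wev, clcf->send_timeout);
        return NGX_AGAIN;
    }

    if (wev->timer_set) {
        ngx_del_timer(wev);
    }

    return NGX_OK;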



[nginx] HTTP/2: avoid sending output queue if there's nothing to send.

2016-07-19 Thread Valentin Bartenev
details:   http://hg.nginx.org/nginx/rev/82efcedb310b
branches:  
changeset: 6639:82efcedb310b
user:  Valentin Bartenev 
date:  Tue Jul 19 20:30:21 2016 +0300
description:
HTTP/2: avoid sending output queue if there's nothing to send.

In particular, this fixes alerts on OS X and NetBSD systems when HTTP/2 is
configured over plain TCP sockets.

On these systems, calling writev() with no data leads to EINVAL errors
being logged as "writev() failed (22: Invalid argument) while processing
HTTP/2 connection".

diffstat:

 src/http/v2/ngx_http_v2.c |  10 ++
 1 files changed, 10 insertions(+), 0 deletions(-)

diffs (20 lines):

diff -r a2b310a8b2af -r 82efcedb310b src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c Tue Jul 19 20:22:44 2016 +0300
+++ b/src/http/v2/ngx_http_v2.c Tue Jul 19 20:30:21 2016 +0300
@@ -410,6 +410,16 @@ ngx_http_v2_write_handler(ngx_event_t *w
 
     ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http2 write handler");
 
+    if (h2c->last_out == NULL && !c->buffered) {
+
+        if (wev->timer_set) {
+            ngx_del_timer(wev);
+        }
+
+        ngx_http_v2_handle_connection(h2c);
+        return;
+    }
+
     h2c->blocked = 1;
 
     rc = ngx_http_v2_send_output_queue(h2c);
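
Put together, the write handler now bails out early when there is neither a
queued frame nor buffered data, instead of ending up in an empty writev()
call.  A sketch of the patched block (the comment is explanatory and not
part of the patch):

    if (h2c->last_out == NULL && !c->buffered) {

        /* nothing to send: an empty writev() would fail with EINVAL on
           OS X and NetBSD, so go straight to connection handling */
        if (wev->timer_set) {
            ngx_del_timer(wev);
        }

        ngx_http_v2_handle_connection(h2c);
        return;
    }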



[nginx] HTTP/2: always handle streams in error state.

2016-07-19 Thread Valentin Bartenev
details:   http://hg.nginx.org/nginx/rev/a2b310a8b2af
branches:  
changeset: 6638:a2b310a8b2af
user:  Valentin Bartenev 
date:  Tue Jul 19 20:22:44 2016 +0300
description:
HTTP/2: always handle streams in error state.

Previously, a stream could be closed by timeout if it was canceled
while its send window was exhausted.

diffstat:

 src/http/v2/ngx_http_v2_filter_module.c |  14 --
 1 files changed, 8 insertions(+), 6 deletions(-)

diffs (30 lines):

diff -r 699e409a3e0c -r a2b310a8b2af src/http/v2/ngx_http_v2_filter_module.c
--- a/src/http/v2/ngx_http_v2_filter_module.c   Tue Jul 19 20:22:44 2016 +0300
+++ b/src/http/v2/ngx_http_v2_filter_module.c   Tue Jul 19 20:22:44 2016 +0300
@@ -1294,18 +1294,20 @@ static ngx_inline void
 ngx_http_v2_handle_stream(ngx_http_v2_connection_t *h2c,
     ngx_http_v2_stream_t *stream)
 {
-    ngx_event_t  *wev;
+    ngx_connection_t  *fc;
 
-    if (stream->handled || stream->blocked || stream->exhausted) {
+    if (stream->handled || stream->blocked) {
         return;
     }
 
-    wev = stream->request->connection->write;
+    fc = stream->request->connection;
 
-    if (!wev->delayed) {
-        stream->handled = 1;
-        ngx_queue_insert_tail(&h2c->posted, &stream->queue);
+    if (!fc->error && (stream->exhausted || fc->write->delayed)) {
+        return;
     }
+
+    stream->handled = 1;
+    ngx_queue_insert_tail(&h2c->posted, &stream->queue);
 }
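
Pieced together from the hunks above, ngx_http_v2_handle_stream() after this
change reads as follows (the comment paraphrases the commit message and is
not part of the patch):

    static ngx_inline void
    ngx_http_v2_handle_stream(ngx_http_v2_connection_t *h2c,
        ngx_http_v2_stream_t *stream)
    {
        ngx_connection_t  *fc;

        if (stream->handled || stream->blocked) {
            return;
        }

        fc = stream->request->connection;

        /* a stream in error state is queued even if its send window is
           exhausted or its output is delayed, so it can be finalized
           instead of being closed by timeout */
        if (!fc->error && (stream->exhausted || fc->write->delayed)) {
            return;
        }

        stream->handled = 1;
        ngx_queue_insert_tail(&h2c->posted, &stream->queue);
    }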
 
 



[nginx] HTTP/2: always send GOAWAY while worker is shutting down.

2016-07-19 Thread Valentin Bartenev
details:   http://hg.nginx.org/nginx/rev/ea284434db0f
branches:  
changeset: 6636:ea284434db0f
user:  Valentin Bartenev 
date:  Tue Jul 19 20:22:44 2016 +0300
description:
HTTP/2: always send GOAWAY while worker is shutting down.

Previously, when a worker process exited, GOAWAY was sent to connections in
the idle state, but connections with active streams were closed without GOAWAY.

diffstat:

 src/http/v2/ngx_http_v2.c |  2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diffs (12 lines):

diff -r 6acaa638fa07 -r ea284434db0f src/http/v2/ngx_http_v2.c
--- a/src/http/v2/ngx_http_v2.c Fri Jul 15 15:18:57 2016 +0300
+++ b/src/http/v2/ngx_http_v2.c Tue Jul 19 20:22:44 2016 +0300
@@ -615,7 +615,7 @@ ngx_http_v2_handle_connection(ngx_http_v
     }
 
     if (ngx_terminate || ngx_exiting) {
-        ngx_http_close_connection(c);
+        ngx_http_v2_finalize_connection(h2c, NGX_HTTP_V2_NO_ERROR);
         return;
     }
 



[nginx] HTTP/2: prevented output of the HEADERS frame for canceled streams.

2016-07-19 Thread Valentin Bartenev
details:   http://hg.nginx.org/nginx/rev/699e409a3e0c
branches:  
changeset: 6637:699e409a3e0c
user:  Valentin Bartenev 
date:  Tue Jul 19 20:22:44 2016 +0300
description:
HTTP/2: prevented output of the HEADERS frame for canceled streams.

It's useless to generate HEADERS if the stream has been canceled already.

diffstat:

 src/http/v2/ngx_http_v2_filter_module.c |  8 ++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diffs (25 lines):

diff -r ea284434db0f -r 699e409a3e0c src/http/v2/ngx_http_v2_filter_module.c
--- a/src/http/v2/ngx_http_v2_filter_module.c   Tue Jul 19 20:22:44 2016 +0300
+++ b/src/http/v2/ngx_http_v2_filter_module.c   Tue Jul 19 20:22:44 2016 +0300
@@ -165,6 +165,12 @@ ngx_http_v2_header_filter(ngx_http_reque
         return NGX_OK;
     }
 
+    fc = r->connection;
+
+    if (fc->error) {
+        return NGX_ERROR;
+    }
+
     if (r->method == NGX_HTTP_HEAD) {
         r->header_only = 1;
     }
@@ -255,8 +261,6 @@ ngx_http_v2_header_filter(ngx_http_reque
         len += 1 + ngx_http_v2_literal_size("Wed, 31 Dec 1986 18:00:00 GMT");
     }
 
-    fc = r->connection;
-
     if (r->headers_out.location && r->headers_out.location->value.len) {
 
         if (r->headers_out.location->value.data[0] == '/') {



Re: Should nginx's default shipped fastcgi_param file be updated to mitigate httpoxy?

2016-07-19 Thread Maxim Dounin
Hello!

On Tue, Jul 19, 2016 at 03:48:16PM +0200, Thomas Deutschmann wrote:

> Hi,
> 
> I am proxy maintaining the nginx package on Gentoo.
> 
> Regarding the recent "httpoxy" problem (you have already published a blog
> post [1] with instructions on how to mitigate it), we are unsure whether
> we should update our package to ship your mitigation by default, i.e.
> alter your "fastcgi_param" file to add
> 
> > fastcgi_param  HTTP_PROXY "";
> 
> This would protect default configurations. However, some setups might
> require a proxy, and those could break when the fastcgi_param file is
> sourced after the user's configuration.
> 
> 
> From my point of view this is a user education problem: users who know
> what they are doing won't have to do anything; they should be fine
> already, or at least will set their required values *after* sourcing
> the default fastcgi_param file.
> 
> For Gentoo we would use our elog and/or news system to tell the user
> about the changes.
> 
> 
> However, we want to know whether you, upstream, are going to change the
> default shipped fastcgi_param file (don't forget the .conf file) in the
> next release to include a "safer" default configuration as well, or
> whether there are reasons not to ship such a default and you would
> recommend that we also do nothing.

I don't think that the default should be changed.

The problem is about improperly using the HTTP_PROXY environment
variable in CGI[-like] contexts, and that is what should be fixed,
much like any other uses of HTTP_* environment variables.

While filtering particular headers can be effectively used as a
mitigation before all the affected uses are fixed, it doesn't
look like a good long-term solution.
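
For deployments that do want the header filtering, the mitigation from the
blog post comes down to a few configuration lines added in the user's own
configuration; the snippet below is a sketch of that approach (the uwsgi and
scgi lines are the analogous directives for those modules, not necessarily
quoted from the post):

    # Blank out the "Proxy" request header before it can become the
    # HTTP_PROXY environment variable in CGI-like backends.
    fastcgi_param     HTTP_PROXY  "";

    # The equivalent for proxied, uwsgi and scgi backends.
    proxy_set_header  Proxy       "";
    uwsgi_param       HTTP_PROXY  "";
    scgi_param        HTTP_PROXY  "";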

-- 
Maxim Dounin
http://nginx.org/



Should nginx's default shipped fastcgi_param file be updated to mitigate httpoxy?

2016-07-19 Thread Thomas Deutschmann

Hi,

I am proxy maintaining the nginx package on Gentoo.

Regarding the recent "httpoxy" problem (you have already published a blog
post [1] with instructions on how to mitigate it), we are unsure whether
we should update our package to ship your mitigation by default, i.e.
alter your "fastcgi_param" file to add

> fastcgi_param  HTTP_PROXY "";

This would protect default configurations. However, some setups might
require a proxy, and those could break when the fastcgi_param file is
sourced after the user's configuration.


From my point of view this is a user education problem: users who know
what they are doing won't have to do anything; they should be fine
already, or at least will set their required values *after* sourcing
the default fastcgi_param file.

For Gentoo we would use our elog and/or news system to tell the user
about the changes.


However, we want to know whether you, upstream, are going to change the
default shipped fastcgi_param file (don't forget the .conf file) in the
next release to include a "safer" default configuration as well, or
whether there are reasons not to ship such a default and you would
recommend that we also do nothing.

Thanks.


[1]
https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/


-- 
Regards,
Thomas
