Re: [nginx] QUIC: "handshake_timeout" configuration parameter.

2024-04-10 Thread Vladimir Homutov via nginx-devel
On Tue, Apr 09, 2024 at 03:02:21PM +0400, Roman Arutyunyan wrote:
> Hello Vladimir,
>
> On Mon, Apr 08, 2024 at 03:03:27PM +0300, Vladimir Homutov via nginx-devel 
> wrote:
> > On Fri, Sep 22, 2023 at 03:36:25PM +, Roman Arutyunyan wrote:
> > > details:   https://hg.nginx.org/nginx/rev/ad3d34ddfdcc
> > > branches:
> > > changeset: 9158:ad3d34ddfdcc
> > > user:  Roman Arutyunyan 
> > > date:  Wed Sep 13 17:59:37 2023 +0400
> > > description:
> > > QUIC: "handshake_timeout" configuration parameter.
> > >
> > > Previously QUIC did not have such parameter and handshake duration was
> > > controlled by HTTP/3.  However that required creating and storing HTTP/3
> > > session on first client datagram.  Apparently there's no convenient way to
> > > store the session object until QUIC handshake is complete.  In the 
> > > followup
> > > patches session creation will be postponed to init() callback.
> > >
> >
> > [...]
> >
> > > diff -r daf8f5ba23d8 -r ad3d34ddfdcc src/event/quic/ngx_event_quic.c
> > > --- a/src/event/quic/ngx_event_quic.c Fri Sep 01 20:31:46 2023 +0400
> > > +++ b/src/event/quic/ngx_event_quic.c Wed Sep 13 17:59:37 2023 +0400
> > > @@ -211,6 +211,8 @@ ngx_quic_run(ngx_connection_t *c, ngx_qu
> > >  qc = ngx_quic_get_connection(c);
> > >
> > >  ngx_add_timer(c->read, qc->tp.max_idle_timeout);
> > > +ngx_add_timer(>close, qc->conf->handshake_timeout);
> > > +
> >
> > It looks like I've hit an issue with early data in such case.
> > See the attached patch with details.
>
> Indeed, there's an issue there.
>
> > While there, I suggest a little debug improvement to better track
> > stream and their parent connections.
> >
> >
>
> > # HG changeset patch
> > # User Vladimir Khomutov 
> > # Date 1712576340 -10800
> > #  Mon Apr 08 14:39:00 2024 +0300
> > # Node ID 6e79f4ec40ed1c1ffec6a46b453051c01e556610
> > # Parent  99e7050ac886f7c70a4048691e46846b930b1e28
> > QUIC: fixed close timer processing with early data.
> >
> > The ngx_quic_run() function uses qc->close timer to limit the handshake
> > duration.  Normally it is removed by ngx_quic_do_init_streams() which is
> > called once when we are done with initial SSL processing.
> >
> > The problem happens when the client sends early data and streams are
> > initialized in the ngx_quic_run() -> ngx_quic_handle_datagram() call.
> > The order of set/remove timer calls is now reversed; the close timer is
> > set up and the timer fires when assigned, starting the unexpected connection
> > close process.
> >
> > The patch moves timer cancelling right before the place where the stream
> > initialization flag is tested, thus making it work with early data.
> >
> > The issue was introduced in ad3d34ddfdcc.
> >
> > diff --git a/src/event/quic/ngx_event_quic_streams.c 
> > b/src/event/quic/ngx_event_quic_streams.c
> > --- a/src/event/quic/ngx_event_quic_streams.c
> > +++ b/src/event/quic/ngx_event_quic_streams.c
> > @@ -575,6 +575,10 @@ ngx_quic_init_streams(ngx_connection_t *
> >
> >  qc = ngx_quic_get_connection(c);
> >
> > +if (!qc->closing && qc->close.timer_set) {
> > +ngx_del_timer(>close);
> > +}
> > +
> >  if (qc->streams.initialized) {
> >  return NGX_OK;
> >  }
> > @@ -630,10 +634,6 @@ ngx_quic_do_init_streams(ngx_connection_
> >
> >  qc->streams.initialized = 1;
> >
> > -if (!qc->closing && qc->close.timer_set) {
> > -ngx_del_timer(>close);
> > -}
> > -
> >  return NGX_OK;
> >  }
>
> This assumes that ngx_quic_init_streams() is always called on handshake end,
> even if not needed.  This is true now, but it's not something we can to rely 
> on.
>
> Also, we probably don't need to limit handshake duration after streams are
> initialized.  Application level will set the required keepalive timeout for
> this.  Also, we need to include OCSP validation time in handshake timeout,
> which your removed.
>
> I assume a simpler solution would be not to set the timer in ngx_quic_run()
> if streams are already initialized.

Agreed, see the updated patch:


# HG changeset patch
# User Vladimir Khomutov 
# Date 1712731090 -10800
#  Wed Apr 10 09:38:10 2024 +0300
# Node ID 155c9093de9db02e3c0a511a45930d39ff51c709
# Parent  99e7050ac886f7c70a4048691e46846b930b1e28
QUIC: fixed clos

Re: [nginx] QUIC: "handshake_timeout" configuration parameter.

2024-04-08 Thread Vladimir Homutov via nginx-devel
On Fri, Sep 22, 2023 at 03:36:25PM +, Roman Arutyunyan wrote:
> details:   https://hg.nginx.org/nginx/rev/ad3d34ddfdcc
> branches:
> changeset: 9158:ad3d34ddfdcc
> user:  Roman Arutyunyan 
> date:  Wed Sep 13 17:59:37 2023 +0400
> description:
> QUIC: "handshake_timeout" configuration parameter.
>
> Previously QUIC did not have such parameter and handshake duration was
> controlled by HTTP/3.  However that required creating and storing HTTP/3
> session on first client datagram.  Apparently there's no convenient way to
> store the session object until QUIC handshake is complete.  In the followup
> patches session creation will be postponed to init() callback.
>

[...]

> diff -r daf8f5ba23d8 -r ad3d34ddfdcc src/event/quic/ngx_event_quic.c
> --- a/src/event/quic/ngx_event_quic.c Fri Sep 01 20:31:46 2023 +0400
> +++ b/src/event/quic/ngx_event_quic.c Wed Sep 13 17:59:37 2023 +0400
> @@ -211,6 +211,8 @@ ngx_quic_run(ngx_connection_t *c, ngx_qu
>  qc = ngx_quic_get_connection(c);
>
>  ngx_add_timer(c->read, qc->tp.max_idle_timeout);
> +ngx_add_timer(>close, qc->conf->handshake_timeout);
> +

It looks like I've hit an issue with early data in such case.
See the attached patch with details.

While there, I suggest a little debug improvement to better track
stream and their parent connections.


# HG changeset patch
# User Vladimir Khomutov 
# Date 1712576340 -10800
#  Mon Apr 08 14:39:00 2024 +0300
# Node ID 6e79f4ec40ed1c1ffec6a46b453051c01e556610
# Parent  99e7050ac886f7c70a4048691e46846b930b1e28
QUIC: fixed close timer processing with early data.

The ngx_quic_run() function uses qc->close timer to limit the handshake
duration.  Normally it is removed by ngx_quic_do_init_streams() which is
called once when we are done with initial SSL processing.

The problem happens when the client sends early data and streams are
initialized in the ngx_quic_run() -> ngx_quic_handle_datagram() call.
The order of set/remove timer calls is now reversed; the close timer is
set up and the timer fires when assigned, starting the unexpected connection
close process.

The patch moves timer cancelling right before the place where the stream
initialization flag is tested, thus making it work with early data.

The issue was introduced in ad3d34ddfdcc.

diff --git a/src/event/quic/ngx_event_quic_streams.c 
b/src/event/quic/ngx_event_quic_streams.c
--- a/src/event/quic/ngx_event_quic_streams.c
+++ b/src/event/quic/ngx_event_quic_streams.c
@@ -575,6 +575,10 @@ ngx_quic_init_streams(ngx_connection_t *
 
 qc = ngx_quic_get_connection(c);
 
+if (!qc->closing && qc->close.timer_set) {
+ngx_del_timer(>close);
+}
+
 if (qc->streams.initialized) {
 return NGX_OK;
 }
@@ -630,10 +634,6 @@ ngx_quic_do_init_streams(ngx_connection_
 
 qc->streams.initialized = 1;
 
-if (!qc->closing && qc->close.timer_set) {
-ngx_del_timer(>close);
-}
-
 return NGX_OK;
 }
 
# HG changeset patch
# User Vladimir Khomutov 
# Date 1712575741 -10800
#  Mon Apr 08 14:29:01 2024 +0300
# Node ID d9b80de50040bb8ac2a7e193971d1dfeb579cfc9
# Parent  6e79f4ec40ed1c1ffec6a46b453051c01e556610
QUIC: added debug logging of stream creation.

Currently, it is hard to associate stream connection number with its parent
connection.  The typical case is to identify QUIC connection number given
some user-visible URI (which occurs in request stream).

The patch adds the debug log message which reports about stream creation in
the stream log and also shows the parent connection number.

diff --git a/src/event/quic/ngx_event_quic_streams.c 
b/src/event/quic/ngx_event_quic_streams.c
--- a/src/event/quic/ngx_event_quic_streams.c
+++ b/src/event/quic/ngx_event_quic_streams.c
@@ -805,6 +805,10 @@ ngx_quic_create_stream(ngx_connection_t 
 
 ngx_rbtree_insert(>streams.tree, >node);
 
+ngx_log_debug2(NGX_LOG_DEBUG_EVENT, sc->log, 0,
+   "quic stream id:0x%xL created in connection *%uA", id,
+   c->log->connection);
+
 return qs;
 }
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: nginxQuic: скорость загрузки при активации kTLS

2024-01-03 Thread Vladimir Homutov via nginx-ru
On Wed, Jan 03, 2024 at 05:12:19PM +0300, izor...@gmail.com wrote:
> Здравствуйте, Илья.
> Результат анализа дампа:
> Using local file 
> /nix/store/dd7van8jrcmnxmwdsbkyyzhd98myzg2j-nginxQuic-1.25.3/bin/nginx.
> Argument "MSWin32" isn't numeric in numeric eq (==) at 
> /run/current-system/sw/bin/pprof line 5047.
> Argument "linux" isn't numeric in numeric eq (==) at 
> /run/current-system/sw/bin/pprof line 5047.
> Using local file /var/www/test/profile/ktls.7743.
> Warning: address  128f35:      eb ae                  jmp    128ee5 is longer 
> than address length 16
> Total: 3431 samples
>     1225  35.7%  35.7%    1225  35.7% epoll_wait
>     875  25.5%  61.2%      880  25.6% __sendmsg
>     477  13.9%  75.1%      477  13.9% _aesni_ctr32_ghash_6x
>     146  4.3%  79.4%      146  4.3% pthread_cond_signal@@GLIBC_2.3.2
>     127  3.7%  83.1%      127  3.7% __memmove_avx_unaligned_erms
>     123  3.6%  86.7%      127  3.7% __recvmsg
>       58  1.7%  88.3%      58  1.7% __lll_lock_wake
>       16  0.5%  88.8%      16  0.5% __strcmp_avx2
>       15  0.4%  89.2%    1867  54.4% ngx_epoll_process_events
>       15  0.4%  89.7%      51  1.5% ngx_quic_create_frame
>       14  0.4%  90.1%      14  0.4% aesni_ctr32_encrypt_blocks
>       14  0.4%  90.5%      255  7.4% ngx_quic_recvmsg
>       13  0.4%  90.9%      14  0.4% evp_cipher_init_internal
>       13  0.4%  91.3%    1540  44.9% ngx_quic_output
>       11  0.3%  91.6%      11  0.3% gcm_ghash_avx
>       10  0.3%  91.9%      10  0.3% ngx_quic_parse_frame
>       8  0.2%  92.1%        8  0.2% __pthread_disable_asynccancel
>       7  0.2%  92.3%        7  0.2% ngx_quic_commit_send
>       6  0.2%  92.5%        6  0.2% aesni_encrypt
>       6  0.2%  92.7%      506  14.7% generic_aes_gcm_cipher_update
>       6  0.2%  92.8%      114  3.3% ngx_http_write_filter
>       6  0.2%  93.0%      598  17.4% ngx_quic_crypto_common
> ...
>  
> Если использовать протокол HTTP/1.1
> Using local file 
> /nix/store/dd7van8jrcmnxmwdsbkyyzhd98myzg2j-nginxQuic-1.25.3/bin/nginx.
> Argument "MSWin32" isn't numeric in numeric eq (==) at 
> /run/current-system/sw/bin/pprof line 5047.
> Argument "linux" isn't numeric in numeric eq (==) at 
> /run/current-system/sw/bin/pprof line 5047.
> Using local file /var/www/test/profile/ktls.9140.
> Warning: address  128f35:      eb ae                  jmp    128ee5 is longer 
> than address length 16
> Total: 2354 samples
>     2329  98.9%  98.9%    2329  98.9% sendfile64
>       7  0.3%  99.2%        7  0.3% __sched_yield
>       5  0.2%  99.4%        5  0.2% epoll_wait
>       2  0.1%  99.5%    2335  99.2% ngx_http_sub_body_filter
>       2  0.1%  99.6%    2339  99.4% ngx_http_writer
>       1  0.0%  99.7%        1  0.0% CRYPTO_free
>       1  0.0%  99.7%    2330  99.0% SSL_sendfile
>       1  0.0%  99.7%        1  0.0% __GI___clock_gettime
>       1  0.0%  99.8%        7  0.3% ngx_epoll_process_events
>       1  0.0%  99.8%    2336  99.2% ngx_http_copy_filter
>       1  0.0%  99.9%    2337  99.3% ngx_http_range_body_filter
>       1  0.0%  99.9%    2333  99.1% ngx_http_xslt_body_filter
>       1  0.0% 100.0%    2332  99.1% ngx_ssl_send_chain
>       1  0.0% 100.0%        1  0.0% xmlMutexLock
>       0  0.0% 100.0%        1  0.0% ERR_clear_error
>       0  0.0% 100.0%    2354 100.0% __libc_start_call_main
>       0  0.0% 100.0%    2354 100.0% __libc_start_main_impl
>       0  0.0% 100.0%    2354 100.0% _start
>       0  0.0% 100.0%    2354 100.0% main
> ...

quic_gso (nginx.org/r/quic_gso) включён ?
___
nginx-ru mailing list
nginx-ru@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-ru


Re: [PATCH 00 of 12] HTTP/3 proxying to upstreams

2023-12-28 Thread Vladimir Homutov via nginx-devel
On Thu, Dec 28, 2023 at 04:31:41PM +0300, Maxim Dounin wrote:
> Hello!
>
> On Wed, Dec 27, 2023 at 04:17:38PM +0300, Vladimir Homutov via nginx-devel 
> wrote:
>
> > On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:
> > > Hello!
> > >
> > > On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via 
> > > nginx-devel wrote:
> > >
> > > > Hello, everyone,
> > > >
> > > > and Merry Christmas to all!
> > > >
> > > > I'm a developer of an nginx fork Angie.  Recently we implemented
> > > > an HTTP/3 proxy support in our fork [1].
> > > >
> > > > We'd like to contribute this functionality to nginx OSS community.
> > > > Hence here is a patch series backported from Angie to the current
> > > > head of nginx mainline branch (1.25.3)
> > >
> > > Thank you for the patches.
> > >
> > > Are there any expected benefits from HTTP/3 being used as a
> > > protocol to upstream servers?
> >
> > Personally, I don't see much.
> >
> > Probably, faster connection establishing to due 0RTT support (need to be
> > implemented) and better multiplexing (again, if implemented wisely).
> > I have made some simple benchmarks, and it looks more or less similar
> > to usual SSL connections.
>
> Thanks for the details.
>
> Multiplexing is available since introduction of the FastCGI
> protocol, yet to see it working in upstream connections.  As for
> 0-RTT, using keepalive connections is probably more efficient
> anyway (and not really needed for upstream connections in most
> cases as well).

With HTTP/3 and keepalive we can have just one quic "connection" per upstream
server (in extreme). We perform heavy handshake once, and leave it open.
Next we just create HTTP/3 streams to perform request. They can perfectly
run in parallel and use same quic connection. Probably, this is something
worth implementing, with limitations of course: we don't want to mix
requests from different (classes of) clients in same connection, we
don't want eternal life of such connection and we need means to control
level of such multiplexing.

>
> > >
> > > [...]
> > >
> > > > Probably, the HTTP/3 proxy should be implemented in a separate 
> > > > module.
> > > > Currently it is a patch to the HTTP proxy module to minimize 
> > > > boilerplate.
> > >
> > > Sure.  I'm very much against the idea of mixing different upstream
> > > protocols in a single protocol module.
> >
> > noted.
> >
> > > (OTOH, there are some uncertain plans to make proxy module able to
> > > work with other protocols based on the scheme, such as in
> > > "proxy_pass fastcgi://127.0.0.1:9000;".  This is mostly irrelevant
> > > though, and might never happen.)
> >
> > well, currently we have separate proxying modules that are similar enough to
> > think about merging them like suggested. Not sure if one big module with
> > methods will worth it, as semantics is slightly different.
> >
> > proxy modules are already addons on top of upstream module, which does
> > the heavy lifting. What requires improvement is probably the
> > configuration that makes user to remember many similar directives doing
> > the same thing but for different protocols.
>
> Yep, making things easier to configure (and modify, if something
> related to configuration directives is changed or additional
> protocol is added) is the main motivator.  Still, there are indeed
> differences between protocol modules, and this makes single module
> inconvenient sometimes.  As such, plans are uncertain (and the
> previous attempt to do this failed miserably).
>
> --
> Maxim Dounin
> http://mdounin.ru/
> ___
> nginx-devel mailing list
> nginx-devel@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx-devel
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH 00 of 12] HTTP/3 proxying to upstreams

2023-12-27 Thread Vladimir Homutov via nginx-devel
On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:
> Hello!
>
> On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via nginx-devel 
> wrote:
>
> > Hello, everyone,
> >
> > and Merry Christmas to all!
> >
> > I'm a developer of an nginx fork Angie.  Recently we implemented
> > an HTTP/3 proxy support in our fork [1].
> >
> > We'd like to contribute this functionality to nginx OSS community.
> > Hence here is a patch series backported from Angie to the current
> > head of nginx mainline branch (1.25.3)
>
> Thank you for the patches.
>
> Are there any expected benefits from HTTP/3 being used as a
> protocol to upstream servers?

Personally, I don't see much.

Probably, faster connection establishing to due 0RTT support (need to be
implemented) and better multiplexing (again, if implemented wisely).
I have made some simple benchmarks, and it looks more or less similar
to usual SSL connections.

>
> [...]
>
> > Probably, the HTTP/3 proxy should be implemented in a separate module.
> > Currently it is a patch to the HTTP proxy module to minimize 
> > boilerplate.
>
> Sure.  I'm very much against the idea of mixing different upstream
> protocols in a single protocol module.

noted.

> (OTOH, there are some uncertain plans to make proxy module able to
> work with other protocols based on the scheme, such as in
> "proxy_pass fastcgi://127.0.0.1:9000;".  This is mostly irrelevant
> though, and might never happen.)

well, currently we have separate proxying modules that are similar enough to
think about merging them like suggested. Not sure if one big module with
methods will worth it, as semantics is slightly different.

proxy modules are already addons on top of upstream module, which does
the heavy lifting. What requires improvement is probably the
configuration that makes user to remember many similar directives doing
the same thing but for different protocols.


___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 11 of 12] Proxy: HTTP/3 support

2023-12-25 Thread Vladimir Homutov via nginx-devel
Example configuration:

location /foo {
proxy_http_version 3;
proxy_pass https://http3-server.example.com:4433;
}


 src/http/modules/ngx_http_proxy_module.c  |  2276 -
 src/http/modules/ngx_http_upstream_keepalive_module.c |47 +-
 src/http/ngx_http_header_filter_module.c  |50 +
 src/http/ngx_http_request.h   | 2 +
 src/http/ngx_http_upstream.c  |   556 -
 src/http/ngx_http_upstream.h  |14 +
 src/http/v3/ngx_http_v3.h | 7 +
 src/http/v3/ngx_http_v3_parse.c   |36 +-
 src/http/v3/ngx_http_v3_request.c |23 +
 src/http/v3/ngx_http_v3_uni.c |45 +-
 10 files changed, 3018 insertions(+), 38 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1703160644 -10800
#  Thu Dec 21 15:10:44 2023 +0300
# Node ID 6150bf13f72af4f2ecc918381a2d5a8916eaf8e5
# Parent  fcbbdbc00cbf51dc54f6da114e12ba5ec0f278cc
Proxy: HTTP/3 support.

Example configuration:

location /foo {
proxy_http_version 3;
proxy_pass https://http3-server.example.com:4433;
}

diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -10,6 +10,10 @@
 #include 
 #include 
 
+#if (NGX_HTTP_V3 && NGX_QUIC_OPENSSL_COMPAT)
+#include 
+#endif
+
 
 #define  NGX_HTTP_PROXY_COOKIE_SECURE   0x0001
 #define  NGX_HTTP_PROXY_COOKIE_SECURE_ON0x0002
@@ -131,6 +135,7 @@ typedef struct {
 #if (NGX_HTTP_V3)
 ngx_str_t  host;
 ngx_uint_t host_set;
+ngx_flag_t enable_hq;
 #endif
 } ngx_http_proxy_loc_conf_t;
 
@@ -146,6 +151,8 @@ typedef struct {
 
 #if (NGX_HTTP_V3)
 ngx_str_t  host;
+ngx_http_v3_parse_t   *v3_parse;
+size_t data_recvd;
 #endif
 
 unsigned   head:1;
@@ -253,6 +260,80 @@ static ngx_int_t ngx_http_proxy_set_ssl(
 #endif
 static void ngx_http_proxy_set_vars(ngx_url_t *u, ngx_http_proxy_vars_t *v);
 
+#if (NGX_HTTP_V3)
+
+/* context for creating http/3 request */
+typedef struct {
+/* calculated length of request */
+size_t n;
+
+/* encode method state */
+ngx_str_t  method;
+
+/* encode path state */
+size_t loc_len;
+size_t uri_len;
+uintptr_t  escape;
+ngx_uint_t unparsed_uri;
+
+/* encode headers state */
+size_t max_head;
+ngx_http_proxy_headers_t  *headers;
+ngx_http_script_engine_t   le;
+ngx_http_script_engine_t   e;
+
+} ngx_http_v3_proxy_ctx_t;
+
+
+static char *ngx_http_v3_proxy_host_key(ngx_conf_t *cf, ngx_command_t *cmd,
+void *conf);
+static ngx_int_t ngx_http_v3_proxy_merge_quic(ngx_conf_t *cf,
+ngx_http_proxy_loc_conf_t *conf, ngx_http_proxy_loc_conf_t *prev);
+
+static ngx_int_t ngx_http_v3_proxy_create_request(ngx_http_request_t *r);
+
+static ngx_chain_t *ngx_http_v3_create_headers_frame(ngx_http_request_t *r,
+ngx_buf_t *hbuf);
+static ngx_chain_t *ngx_http_v3_create_data_frame(ngx_http_request_t *r,
+ngx_chain_t *body, size_t size);
+static ngx_inline ngx_uint_t ngx_http_v3_map_method(ngx_uint_t method);
+static ngx_int_t ngx_http_v3_proxy_encode_method(ngx_http_request_t *r,
+ngx_http_v3_proxy_ctx_t *v3c, ngx_buf_t *b);
+static ngx_int_t ngx_http_v3_proxy_encode_authority(ngx_http_request_t *r,
+ngx_http_v3_proxy_ctx_t *v3c, ngx_buf_t *b);
+static ngx_int_t ngx_http_v3_proxy_encode_path(ngx_http_request_t *r,
+ngx_http_v3_proxy_ctx_t *v3c, ngx_buf_t *b);
+static ngx_int_t ngx_http_v3_proxy_encode_headers(ngx_http_request_t *r,
+ngx_http_v3_proxy_ctx_t *v3c, ngx_buf_t *b);
+static ngx_int_t ngx_http_v3_proxy_body_length(ngx_http_request_t *r,
+ngx_http_v3_proxy_ctx_t *v3c);
+static ngx_chain_t *ngx_http_v3_proxy_encode_body(ngx_http_request_t *r,
+ngx_http_v3_proxy_ctx_t *v3c);
+static ngx_int_t ngx_http_v3_proxy_body_output_filter(void *data,
+ngx_chain_t *in);
+
+static ngx_int_t ngx_http_v3_proxy_reinit_request(ngx_http_request_t *r);
+static ngx_int_t ngx_http_v3_proxy_process_status_line(ngx_http_request_t *r);
+static void ngx_http_v3_proxy_abort_request(ngx_http_request_t *r);
+static void ngx_http_v3_proxy_finalize_request(ngx_http_request_t *r,
+ngx_int_t rc);
+static ngx_int_t ngx_http_v3_proxy_process_header(ngx_http_request_t *r,
+ngx_str_t *name, ngx_str_t *value);
+
+static ngx_int_t ngx_http_v3_proxy_headers_done(ngx_http_request_t *r);
+static ngx_int_t ngx_http_v3_proxy_process_pseudo_header(ngx_http_request_t *r,
+ngx_str_t *name, ngx_str_t *value);
+static ngx_int_t 

[PATCH 10 of 12] Added host/host_set logic to proxy module

2023-12-25 Thread Vladimir Homutov via nginx-devel
Patch is to be merged with next.
This is basically a copy from grpc proxy.


 src/http/modules/ngx_http_proxy_module.c |  67 
 1 files changed, 67 insertions(+), 0 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1703082897 -10800
#  Wed Dec 20 17:34:57 2023 +0300
# Node ID fcbbdbc00cbf51dc54f6da114e12ba5ec0f278cc
# Parent  183d5a20c159a380d9a7562f3188d91aea465ab7
Added host/host_set logic to proxy module.

Patch is to be merged with next.
This is basically a copy from grpc proxy.

diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -1,5 +1,6 @@
 
 /*
+ * Copyright (C) 2023 Web Server LLC
  * Copyright (C) Igor Sysoev
  * Copyright (C) Nginx, Inc.
  */
@@ -126,6 +127,11 @@ typedef struct {
 ngx_str_t  ssl_crl;
 ngx_array_t   *ssl_conf_commands;
 #endif
+
+#if (NGX_HTTP_V3)
+ngx_str_t  host;
+ngx_uint_t host_set;
+#endif
 } ngx_http_proxy_loc_conf_t;
 
 
@@ -138,6 +144,10 @@ typedef struct {
 ngx_chain_t   *free;
 ngx_chain_t   *busy;
 
+#if (NGX_HTTP_V3)
+ngx_str_t  host;
+#endif
+
 unsigned   head:1;
 unsigned   internal_chunked:1;
 unsigned   header_sent:1;
@@ -958,6 +968,9 @@ ngx_http_proxy_handler(ngx_http_request_
 u = r->upstream;
 
 if (plcf->proxy_lengths == NULL) {
+#if (NGX_HTTP_V3)
+ctx->host = plcf->host;
+#endif
 ctx->vars = plcf->vars;
 u->schema = plcf->vars.schema;
 #if (NGX_HTTP_SSL)
@@ -1128,6 +1141,22 @@ ngx_http_proxy_eval(ngx_http_request_t *
 u->resolved->port = (in_port_t) (url.no_port ? port : url.port);
 u->resolved->no_port = url.no_port;
 
+#if (NGX_HTTP_V3)
+if (url.family != AF_UNIX) {
+
+if (url.no_port) {
+ctx->host = url.host;
+
+} else {
+ctx->host.len = url.host.len + 1 + url.port_text.len;
+ctx->host.data = url.host.data;
+}
+
+} else {
+ngx_str_set(>host, "localhost");
+}
+#endif
+
 return NGX_OK;
 }
 
@@ -3351,6 +3380,9 @@ ngx_http_proxy_create_loc_conf(ngx_conf_
  * conf->ssl_ciphers = { 0, NULL };
  * conf->ssl_trusted_certificate = { 0, NULL };
  * conf->ssl_crl = { 0, NULL };
+ *
+ * conf->host = { 0, NULL };
+ * conf->host_set = 0;
  */
 
 conf->upstream.store = NGX_CONF_UNSET;
@@ -3859,6 +3891,9 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t
 conf->upstream.upstream = prev->upstream.upstream;
 conf->location = prev->location;
 conf->vars = prev->vars;
+#if (NGX_HTTP_V3)
+conf->host = prev->host;
+#endif
 
 conf->proxy_lengths = prev->proxy_lengths;
 conf->proxy_values = prev->proxy_values;
@@ -3905,6 +3940,10 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t
 #if (NGX_HTTP_CACHE)
 conf->headers_cache = prev->headers_cache;
 #endif
+
+#if (NGX_HTTP_V3)
+conf->host_set = prev->host_set;
+#endif
 }
 
 rc = ngx_http_proxy_init_headers(cf, conf, >headers,
@@ -3937,6 +3976,10 @@ ngx_http_proxy_merge_loc_conf(ngx_conf_t
 #if (NGX_HTTP_CACHE)
 prev->headers_cache = conf->headers_cache;
 #endif
+
+#if (NGX_HTTP_V3)
+conf->host_set = prev->host_set;
+#endif
 }
 
 return NGX_CONF_OK;
@@ -3989,6 +4032,14 @@ ngx_http_proxy_init_headers(ngx_conf_t *
 src = conf->headers_source->elts;
 for (i = 0; i < conf->headers_source->nelts; i++) {
 
+#if (NGX_HTTP_V3)
+if (src[i].key.len == 4
+&& ngx_strncasecmp(src[i].key.data, (u_char *) "Host", 4) == 0)
+{
+conf->host_set = 1;
+}
+#endif
+
 s = ngx_array_push(_merged);
 if (s == NULL) {
 return NGX_ERROR;
@@ -4203,6 +4254,22 @@ ngx_http_proxy_pass(ngx_conf_t *cf, ngx_
 plcf->vars.schema.data = url->data;
 plcf->vars.key_start = plcf->vars.schema;
 
+#if (NGX_HTTP_V3)
+if (u.family != AF_UNIX) {
+
+if (u.no_port) {
+plcf->host = u.host;
+
+} else {
+plcf->host.len = u.host.len + 1 + u.port_text.len;
+plcf->host.data = u.host.data;
+}
+
+} else {
+ngx_str_set(>host, "localhost");
+}
+#endif
+
 ngx_http_proxy_set_vars(, >vars);
 
 plcf->location = clcf->name;
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 09 of 12] HTTP/3: added $quic_connection variable

2023-12-25 Thread Vladimir Homutov via nginx-devel
The variable contains number of main quic connection (shared between streams).
This is useful for keepalive tests.


 src/http/v3/ngx_http_v3_module.c |  40 
 1 files changed, 40 insertions(+), 0 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1703082840 -10800
#  Wed Dec 20 17:34:00 2023 +0300
# Node ID 183d5a20c159a380d9a7562f3188d91aea465ab7
# Parent  1fb8eae095661a3fa1ce5598528d81dddc0811a6
HTTP/3: added $quic_connection variable.

The variable contains number of main quic connection (shared between streams).
This is useful for keepalive tests.

diff --git a/src/http/v3/ngx_http_v3_module.c b/src/http/v3/ngx_http_v3_module.c
--- a/src/http/v3/ngx_http_v3_module.c
+++ b/src/http/v3/ngx_http_v3_module.c
@@ -1,5 +1,6 @@
 
 /*
+ * Copyright (C) 2023 Web Server LLC
  * Copyright (C) Nginx, Inc.
  * Copyright (C) Roman Arutyunyan
  */
@@ -12,6 +13,8 @@
 
 static ngx_int_t ngx_http_v3_variable(ngx_http_request_t *r,
 ngx_http_variable_value_t *v, uintptr_t data);
+static ngx_int_t ngx_http_v3_quic_connection_variable(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data);
 static ngx_int_t ngx_http_v3_add_variables(ngx_conf_t *cf);
 static void *ngx_http_v3_create_srv_conf(ngx_conf_t *cf);
 static char *ngx_http_v3_merge_srv_conf(ngx_conf_t *cf, void *parent,
@@ -117,6 +120,9 @@ static ngx_http_variable_t  ngx_http_v3_
 
 { ngx_string("http3"), NULL, ngx_http_v3_variable, 0, 0, 0 },
 
+{ ngx_string("quic_connection"), NULL, ngx_http_v3_quic_connection_variable,
+  0, 0, 0 },
+
   ngx_http_null_variable
 };
 
@@ -158,6 +164,40 @@ ngx_http_v3_variable(ngx_http_request_t 
 
 
 static ngx_int_t
+ngx_http_v3_quic_connection_variable(ngx_http_request_t *r,
+ngx_http_variable_value_t *v, uintptr_t data)
+{
+u_char *p;
+ngx_connection_t   *c;
+ngx_quic_stream_t  *qs;
+
+if (r->connection->quic) {
+qs = r->connection->quic;
+
+c = qs->parent;
+
+p = ngx_pnalloc(r->pool, NGX_ATOMIC_T_LEN);
+if (p == NULL) {
+return NGX_ERROR;
+}
+
+v->len = ngx_sprintf(p, "%uA", c->number) - p;
+v->valid = 1;
+v->no_cacheable = 0;
+v->not_found = 0;
+v->data = p;
+
+return NGX_OK;
+}
+
+*v = ngx_http_variable_null_value;
+
+return NGX_OK;
+
+}
+
+
+static ngx_int_t
 ngx_http_v3_add_variables(ngx_conf_t *cf)
 {
 ngx_http_variable_t  *var, *v;
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 08 of 12] Upstream: separate function to handle upstream connection closing

2023-12-25 Thread Vladimir Homutov via nginx-devel
No functional changes.


 src/http/ngx_http_upstream.c |  91 ---
 1 files changed, 43 insertions(+), 48 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1695647888 -10800
#  Mon Sep 25 16:18:08 2023 +0300
# Node ID 1fb8eae095661a3fa1ce5598528d81dddc0811a6
# Parent  f8275ecea4a7b18ae128f4e622ec50aa139cc6e1
Upstream: separate function to handle upstream connection closing.

No functional changes.

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -98,6 +98,8 @@ static void ngx_http_upstream_dummy_hand
 ngx_http_upstream_t *u);
 static void ngx_http_upstream_next(ngx_http_request_t *r,
 ngx_http_upstream_t *u, ngx_uint_t ft_type);
+static void ngx_http_upstream_close_peer_connection(ngx_http_request_t *r,
+ngx_http_upstream_t *u, ngx_uint_t no_send);
 static void ngx_http_upstream_cleanup(void *data);
 static void ngx_http_upstream_finalize_request(ngx_http_request_t *r,
 ngx_http_upstream_t *u, ngx_int_t rc);
@@ -4462,25 +4464,7 @@ ngx_http_upstream_next(ngx_http_request_
 }
 
 if (u->peer.connection) {
-ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
-   "close http upstream connection: %d",
-   u->peer.connection->fd);
-#if (NGX_HTTP_SSL)
-
-if (u->peer.connection->ssl) {
-u->peer.connection->ssl->no_wait_shutdown = 1;
-u->peer.connection->ssl->no_send_shutdown = 1;
-
-(void) ngx_ssl_shutdown(u->peer.connection);
-}
-#endif
-
-if (u->peer.connection->pool) {
-ngx_destroy_pool(u->peer.connection->pool);
-}
-
-ngx_close_connection(u->peer.connection);
-u->peer.connection = NULL;
+ngx_http_upstream_close_peer_connection(r, u, 1);
 }
 
 ngx_http_upstream_connect(r, u);
@@ -4488,6 +4472,39 @@ ngx_http_upstream_next(ngx_http_request_
 
 
 static void
+ngx_http_upstream_close_peer_connection(ngx_http_request_t *r,
+ngx_http_upstream_t *u, ngx_uint_t no_send)
+{
+ngx_pool_t   *pool;
+ngx_connection_t *c;
+
+c = u->peer.connection;
+
+ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
+   "close http upstream connection: %d", c->fd);
+
+#if (NGX_HTTP_SSL)
+if (c->ssl) {
+c->ssl->no_wait_shutdown = 1;
+c->ssl->no_send_shutdown = no_send;
+
+(void) ngx_ssl_shutdown(c);
+}
+#endif
+
+pool = c->pool;
+
+ngx_close_connection(c);
+
+if (pool) {
+ngx_destroy_pool(pool);
+}
+
+u->peer.connection = NULL;
+}
+
+
+static void
 ngx_http_upstream_cleanup(void *data)
 {
 ngx_http_request_t *r = data;
@@ -4544,37 +4561,15 @@ ngx_http_upstream_finalize_request(ngx_h
 }
 
 if (u->peer.connection) {
-
-#if (NGX_HTTP_SSL)
-
 /* TODO: do not shutdown persistent connection */
 
-if (u->peer.connection->ssl) {
-
-/*
- * We send the "close notify" shutdown alert to the upstream only
- * and do not wait its "close notify" shutdown alert.
- * It is acceptable according to the TLS standard.
- */
-
-u->peer.connection->ssl->no_wait_shutdown = 1;
-
-(void) ngx_ssl_shutdown(u->peer.connection);
-}
-#endif
-
-ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
-   "close http upstream connection: %d",
-   u->peer.connection->fd);
-
-if (u->peer.connection->pool) {
-ngx_destroy_pool(u->peer.connection->pool);
-}
-
-ngx_close_connection(u->peer.connection);
-}
-
-u->peer.connection = NULL;
+/*
+ * We send the "close notify" shutdown alert to the upstream only
+ * and do not wait its "close notify" shutdown alert.
+ * It is acceptable according to the TLS standard.
+ */
+ngx_http_upstream_close_peer_connection(r, u, 0);
+}
 
 if (u->pipe && u->pipe->temp_file) {
 ngx_log_debug1(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 07 of 12] Upstream: refactored upstream initialization

2023-12-25 Thread Vladimir Homutov via nginx-devel
No functional changes.  This will be used by the following patches.


 src/http/ngx_http_upstream.c |  133 +++---
 1 files changed, 74 insertions(+), 59 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1703082747 -10800
#  Wed Dec 20 17:32:27 2023 +0300
# Node ID f8275ecea4a7b18ae128f4e622ec50aa139cc6e1
# Parent  e1c4b57622ea1d8b65db495e88f3cd7c0c5f95ea
Upstream: refactored upstream initialization.

No functional changes.  This will be used by the following patches.

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -1,5 +1,6 @@
 
 /*
+ * Copyright (C) 2023 Web Server LLC
  * Copyright (C) Igor Sysoev
  * Copyright (C) Nginx, Inc.
  */
@@ -37,6 +38,8 @@ static void ngx_http_upstream_check_brok
 ngx_event_t *ev);
 static void ngx_http_upstream_connect(ngx_http_request_t *r,
 ngx_http_upstream_t *u);
+static ngx_int_t ngx_http_upstream_configure(ngx_http_request_t *r,
+ngx_http_upstream_t *u, ngx_connection_t *c);
 static ngx_int_t ngx_http_upstream_reinit(ngx_http_request_t *r,
 ngx_http_upstream_t *u);
 static void ngx_http_upstream_send_request(ngx_http_request_t *r,
@@ -1527,9 +1530,8 @@ ngx_http_upstream_check_broken_connectio
 static void
 ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u)
 {
-ngx_int_t  rc;
-ngx_connection_t  *c;
-ngx_http_core_loc_conf_t  *clcf;
+ngx_int_t  rc;
+ngx_connection_t  *c;
 
 r->connection->log->action = "connecting to upstream";
 
@@ -1587,16 +1589,6 @@ ngx_http_upstream_connect(ngx_http_reque
 c->write->handler = ngx_http_upstream_handler;
 c->read->handler = ngx_http_upstream_handler;
 
-u->write_event_handler = ngx_http_upstream_send_request_handler;
-u->read_event_handler = ngx_http_upstream_process_header;
-
-c->sendfile &= r->connection->sendfile;
-u->output.sendfile = c->sendfile;
-
-if (r->connection->tcp_nopush == NGX_TCP_NOPUSH_DISABLED) {
-c->tcp_nopush = NGX_TCP_NOPUSH_DISABLED;
-}
-
 if (c->pool == NULL) {
 
 /* we need separate pool here to be able to cache SSL connections */
@@ -1614,52 +1606,17 @@ ngx_http_upstream_connect(ngx_http_reque
 c->read->log = c->log;
 c->write->log = c->log;
 
-/* init or reinit the ngx_output_chain() and ngx_chain_writer() contexts */
-
-clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
-
-u->writer.out = NULL;
-u->writer.last = >writer.out;
-u->writer.connection = c;
-u->writer.limit = clcf->sendfile_max_chunk;
-
-if (u->request_sent) {
-if (ngx_http_upstream_reinit(r, u) != NGX_OK) {
-ngx_http_upstream_finalize_request(r, u,
-   NGX_HTTP_INTERNAL_SERVER_ERROR);
-return;
-}
-}
-
-if (r->request_body
-&& r->request_body->buf
-&& r->request_body->temp_file
-&& r == r->main)
-{
-/*
- * the r->request_body->buf can be reused for one request only,
- * the subrequests should allocate their own temporary bufs
- */
-
-u->output.free = ngx_alloc_chain_link(r->pool);
-if (u->output.free == NULL) {
-ngx_http_upstream_finalize_request(r, u,
-   NGX_HTTP_INTERNAL_SERVER_ERROR);
-return;
-}
-
-u->output.free->buf = r->request_body->buf;
-u->output.free->next = NULL;
-u->output.allocated = 1;
-
-r->request_body->buf->pos = r->request_body->buf->start;
-r->request_body->buf->last = r->request_body->buf->start;
-r->request_body->buf->tag = u->output.tag;
-}
-
-u->request_sent = 0;
-u->request_body_sent = 0;
-u->request_body_blocked = 0;
+c->sendfile &= r->connection->sendfile;
+
+if (r->connection->tcp_nopush == NGX_TCP_NOPUSH_DISABLED) {
+c->tcp_nopush = NGX_TCP_NOPUSH_DISABLED;
+}
+
+if (ngx_http_upstream_configure(r, u, c) != NGX_OK) {
+ngx_http_upstream_finalize_request(r, u,
+   NGX_HTTP_INTERNAL_SERVER_ERROR);
+return;
+}
 
 if (rc == NGX_AGAIN) {
 ngx_add_timer(c->write, u->conf->connect_timeout);
@@ -1679,6 +1636,64 @@ ngx_http_upstream_connect(ngx_http_reque
 }
 
 
+static ngx_int_t
+ngx_http_upstream_configure(ngx_http_request_t *r, ngx_http_upstream_t *u,
+ngx_connection_t *c)
+{
+ngx_http_core_loc_conf_t  *clcf;
+
+u->write_event_handler = ngx_http_upstream_send_request_handler;
+u->read_event_handler = ngx_http_upstream_process_header;
+
+u->output.sendfile = c->sendfile;
+
+/* init or reinit the ngx_output_chain() and ngx_chain_writer() contexts */
+
+clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
+
+u->writer.out = NULL;
+u->writer.last = 

[PATCH 05 of 12] QUIC: client loss detection updates

2023-12-25 Thread Vladimir Homutov via nginx-devel
Patch subject is complete summary.


 src/event/quic/ngx_event_quic_ack.c |  69 +++-
 1 files changed, 66 insertions(+), 3 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1703081117 -10800
#  Wed Dec 20 17:05:17 2023 +0300
# Node ID f275f3a9992ca09a34a5281269d05e23136c6f0b
# Parent  f39271dd260b831fac70c776904d9f5ded053968
QUIC: client loss detection updates.

diff --git a/src/event/quic/ngx_event_quic_ack.c b/src/event/quic/ngx_event_quic_ack.c
--- a/src/event/quic/ngx_event_quic_ack.c
+++ b/src/event/quic/ngx_event_quic_ack.c
@@ -307,6 +307,18 @@ ngx_quic_handle_ack_frame_range(ngx_conn
 ngx_post_event(>push, _posted_events);
 }
 
+if (qc->client && ctx->level == ssl_encryption_initial) {
+/*
+ * RFC 9002   6.2.1. Computing PTO
+ *
+ * the PTO backoff is not reset at a client that is not yet certain
+ * that the server has finished validating the client's address. That
+ * is, a client does not reset the PTO backoff factor on receiving
+ * acknowledgments in Initial packets.
+ */
+return NGX_OK;
+}
+
 qc->pto_count = 0;
 
 return NGX_OK;
@@ -383,8 +395,8 @@ ngx_quic_congestion_reset(ngx_quic_conne
 ngx_memzero(>congestion, sizeof(ngx_quic_congestion_t));
 
 qc->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size,
-   ngx_max(2 * qc->tp.max_udp_payload_size,
-   14720));
+ngx_max(2 * qc->tp.max_udp_payload_size,
+14720));
 qc->congestion.ssthresh = (size_t) -1;
 qc->congestion.recovery_start = ngx_current_msec;
 }
@@ -804,6 +816,30 @@ ngx_quic_set_lost_timer(ngx_connection_t
 return;
 }
 
+/* no lost packets and no in-flight packets */
+if (qc->client && !c->ssl->handshaked
+&& ngx_quic_keys_available(qc->keys, ssl_encryption_handshake, 1))
+{
+/*
+ * 6.2.2.1
+ *
+ * That is, the client MUST set the PTO timer if the client has not
+ * received an acknowledgment for any of its Handshake packets and the
+ * handshake is not confirmed (see Section 4.1.2 of [QUIC-TLS]), even
+ * if there are no packets in flight.
+ */
+
+ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_handshake);
+
+pto = (ngx_quic_pto(c, ctx) << qc->pto_count);
+
+ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
+   "quic client lost timer pto:%M", pto);
+
+qc->pto.handler = ngx_quic_pto_handler; ngx_add_timer(>pto, pto);
+return;
+}
+
 ngx_log_debug0(NGX_LOG_DEBUG_EVENT, c->log, 0, "quic lost timer unset");
 }
 
@@ -850,7 +886,7 @@ void ngx_quic_lost_handler(ngx_event_t *
 void
 ngx_quic_pto_handler(ngx_event_t *ev)
 {
-ngx_uint_t  i;
+ngx_uint_t  i, sent;
 ngx_msec_t  now;
 ngx_queue_t*q;
 ngx_msec_int_t  w;
@@ -864,6 +900,7 @@ ngx_quic_pto_handler(ngx_event_t *ev)
 c = ev->data;
 qc = ngx_quic_get_connection(c);
 now = ngx_current_msec;
+sent = 0;
 
 for (i = 0; i < NGX_QUIC_SEND_CTX_LAST; i++) {
 
@@ -896,6 +933,32 @@ ngx_quic_pto_handler(ngx_event_t *ev)
 ngx_quic_close_connection(c, NGX_ERROR);
 return;
 }
+
+sent = 1;
+}
+
+
+/*
+ * RFC 9002  6.2.2.1  Before Address Validation
+ *
+ * When the PTO fires, the client MUST send a Handshake packet if it has
+ * Handshake keys, otherwise it MUST send an Initial packet in a UDP
+ * datagram with a payload of at least 1200 bytes.
+ */
+
+if (qc->client && !c->ssl->handshaked && !sent) {
+
+if (ngx_quic_keys_available(qc->keys, ssl_encryption_handshake, 1)) {
+ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_handshake);
+
+} else {
+ctx = ngx_quic_get_send_ctx(qc, ssl_encryption_initial);
+}
+
+if (ngx_quic_ping_peer(c, ctx) != NGX_OK) {
+ngx_quic_close_connection(c, NGX_ERROR);
+return;
+}
 }
 
 qc->pto_count++;
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 06 of 12] HTTP/3: make http/3 request defines available

2023-12-25 Thread Vladimir Homutov via nginx-devel
Patch subject is complete summary.


 src/http/v3/ngx_http_v3.h   |  20 
 src/http/v3/ngx_http_v3_filter_module.c |  20 +---
 2 files changed, 21 insertions(+), 19 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1703082710 -10800
#  Wed Dec 20 17:31:50 2023 +0300
# Node ID e1c4b57622ea1d8b65db495e88f3cd7c0c5f95ea
# Parent  f275f3a9992ca09a34a5281269d05e23136c6f0b
HTTP/3: make http/3 request defines available.

diff --git a/src/http/v3/ngx_http_v3.h b/src/http/v3/ngx_http_v3.h
--- a/src/http/v3/ngx_http_v3.h
+++ b/src/http/v3/ngx_http_v3.h
@@ -1,5 +1,6 @@
 
 /*
+ * Copyright (C) 2023 Web Server LLC
  * Copyright (C) Roman Arutyunyan
  * Copyright (C) Nginx, Inc.
  */
@@ -72,6 +73,25 @@
 #define NGX_HTTP_V3_ERR_CONNECT_ERROR  0x10f
 #define NGX_HTTP_V3_ERR_VERSION_FALLBACK   0x110
 
+/* static table indices */
+#define NGX_HTTP_V3_HEADER_AUTHORITY   0
+#define NGX_HTTP_V3_HEADER_PATH_ROOT   1
+#define NGX_HTTP_V3_HEADER_CONTENT_LENGTH_ZERO 4
+#define NGX_HTTP_V3_HEADER_DATE6
+#define NGX_HTTP_V3_HEADER_LAST_MODIFIED   10
+#define NGX_HTTP_V3_HEADER_LOCATION12
+#define NGX_HTTP_V3_HEADER_METHOD_GET  17
+#define NGX_HTTP_V3_HEADER_SCHEME_HTTP 22
+#define NGX_HTTP_V3_HEADER_SCHEME_HTTPS23
+#define NGX_HTTP_V3_HEADER_STATUS_200  25
+#define NGX_HTTP_V3_HEADER_ACCEPT_ENCODING 31
+#define NGX_HTTP_V3_HEADER_CONTENT_TYPE_TEXT_PLAIN 53
+#define NGX_HTTP_V3_HEADER_VARY_ACCEPT_ENCODING59
+#define NGX_HTTP_V3_HEADER_ACCEPT_LANGUAGE 72
+#define NGX_HTTP_V3_HEADER_SERVER  92
+#define NGX_HTTP_V3_HEADER_USER_AGENT  95
+
+
 /* QPACK errors */
 #define NGX_HTTP_V3_ERR_DECOMPRESSION_FAILED   0x200
 #define NGX_HTTP_V3_ERR_ENCODER_STREAM_ERROR   0x201
diff --git a/src/http/v3/ngx_http_v3_filter_module.c b/src/http/v3/ngx_http_v3_filter_module.c
--- a/src/http/v3/ngx_http_v3_filter_module.c
+++ b/src/http/v3/ngx_http_v3_filter_module.c
@@ -1,5 +1,6 @@
 
 /*
+ * Copyright (C) 2023 Web Server LLC
  * Copyright (C) Roman Arutyunyan
  * Copyright (C) Nginx, Inc.
  */
@@ -10,25 +11,6 @@
 #include 
 
 
-/* static table indices */
-#define NGX_HTTP_V3_HEADER_AUTHORITY 0
-#define NGX_HTTP_V3_HEADER_PATH_ROOT 1
-#define NGX_HTTP_V3_HEADER_CONTENT_LENGTH_ZERO   4
-#define NGX_HTTP_V3_HEADER_DATE  6
-#define NGX_HTTP_V3_HEADER_LAST_MODIFIED 10
-#define NGX_HTTP_V3_HEADER_LOCATION  12
-#define NGX_HTTP_V3_HEADER_METHOD_GET17
-#define NGX_HTTP_V3_HEADER_SCHEME_HTTP   22
-#define NGX_HTTP_V3_HEADER_SCHEME_HTTPS  23
-#define NGX_HTTP_V3_HEADER_STATUS_20025
-#define NGX_HTTP_V3_HEADER_ACCEPT_ENCODING   31
-#define NGX_HTTP_V3_HEADER_CONTENT_TYPE_TEXT_PLAIN   53
-#define NGX_HTTP_V3_HEADER_VARY_ACCEPT_ENCODING  59
-#define NGX_HTTP_V3_HEADER_ACCEPT_LANGUAGE   72
-#define NGX_HTTP_V3_HEADER_SERVER92
-#define NGX_HTTP_V3_HEADER_USER_AGENT95
-
-
 typedef struct {
 ngx_chain_t *free;
 ngx_chain_t *busy;
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 04 of 12] QUIC: client support

2023-12-25 Thread Vladimir Homutov via nginx-devel
Patch subject is complete summary.


 src/event/quic/ngx_event_quic.c|  617 -
 src/event/quic/ngx_event_quic.h|   11 +
 src/event/quic/ngx_event_quic_ack.c|   13 +
 src/event/quic/ngx_event_quic_ack.h|2 +
 src/event/quic/ngx_event_quic_connection.h |   10 +
 src/event/quic/ngx_event_quic_connid.c |   64 ++-
 src/event/quic/ngx_event_quic_connid.h |7 +-
 src/event/quic/ngx_event_quic_migration.c  |   10 +-
 src/event/quic/ngx_event_quic_openssl_compat.c |4 +
 src/event/quic/ngx_event_quic_output.c |  109 +++-
 src/event/quic/ngx_event_quic_protection.c |  101 ++-
 src/event/quic/ngx_event_quic_protection.h |3 +
 src/event/quic/ngx_event_quic_socket.c |   71 ++-
 src/event/quic/ngx_event_quic_socket.h |4 +-
 src/event/quic/ngx_event_quic_ssl.c|  172 +-
 src/event/quic/ngx_event_quic_ssl.h|3 +
 src/event/quic/ngx_event_quic_streams.c|  550 +++--
 src/event/quic/ngx_event_quic_streams.h|3 +
 src/event/quic/ngx_event_quic_tokens.c |   48 +
 src/event/quic/ngx_event_quic_tokens.h |9 +
 src/event/quic/ngx_event_quic_transport.c  |  214 ++--
 src/event/quic/ngx_event_quic_transport.h  |7 +-
 22 files changed, 1677 insertions(+), 355 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1703082264 -10800
#  Wed Dec 20 17:24:24 2023 +0300
# Node ID f39271dd260b831fac70c776904d9f5ded053968
# Parent  f54423e057f909b1d644cc0af316d67b91cd408f
QUIC: client support.

diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c
--- a/src/event/quic/ngx_event_quic.c
+++ b/src/event/quic/ngx_event_quic.c
@@ -18,6 +18,11 @@ static ngx_int_t ngx_quic_handle_statele
 static void ngx_quic_input_handler(ngx_event_t *rev);
 static void ngx_quic_close_handler(ngx_event_t *ev);
 
+static void ngx_quic_dummy_handler(ngx_event_t *ev);
+static void ngx_quic_client_input_handler(ngx_event_t *rev);
+static ngx_int_t ngx_quic_client_start(ngx_connection_t *c,
+ngx_quic_header_t *pkt);
+
 static ngx_int_t ngx_quic_handle_datagram(ngx_connection_t *c, ngx_buf_t *b,
 ngx_quic_conf_t *conf);
 static ngx_int_t ngx_quic_handle_packet(ngx_connection_t *c,
@@ -188,8 +193,16 @@ ngx_quic_apply_transport_params(ngx_conn
 qc->streams.server.bidi.max = peer_tp->initial_max_streams_bidi;
 qc->streams.server.uni.max = peer_tp->initial_max_streams_uni;
 
+if (qc->client) {
+ngx_memcpy(qc->path->cid->sr_token,
+   peer_tp->sr_token, NGX_QUIC_SR_TOKEN_LEN);
+}
+
 ngx_memcpy(>peer_tp, peer_tp, sizeof(ngx_quic_tp_t));
 
+/* apply transport parameters to early created streams */
+ngx_quic_streams_init_state(c);
+
 return NGX_OK;
 }
 
@@ -222,10 +235,339 @@ ngx_quic_run(ngx_connection_t *c, ngx_qu
 }
 
 
+static void
+ngx_quic_dummy_handler(ngx_event_t *ev)
+{
+}
+
+
+ngx_int_t
+ngx_quic_create_client(ngx_quic_conf_t *conf, ngx_connection_t *c)
+{
+int value;
+ngx_log_t  *log;
+ngx_quic_connection_t  *qc;
+
+#if (NGX_HAVE_IP_MTU_DISCOVER)
+
+if (c->sockaddr->sa_family == AF_INET) {
+value = IP_PMTUDISC_DO;
+
+if (setsockopt(c->fd, IPPROTO_IP, IP_MTU_DISCOVER,
+   (const void *) , sizeof(int))
+== -1)
+{
+ngx_log_error(NGX_LOG_ALERT, c->log, ngx_socket_errno,
+  "setsockopt(IP_MTU_DISCOVER) "
+  "for quic conn failed, ignored");
+}
+}
+
+#elif (NGX_HAVE_IP_DONTFRAG)
+
+if (c->sockaddr->sa_family == AF_INET) {
+value = 1;
+
+if (setsockopt(c->fd, IPPROTO_IP, IP_DONTFRAG,
+   (const void *) , sizeof(int))
+== -1)
+{
+ngx_log_error(NGX_LOG_ALERT, c->log, ngx_socket_errno,
+  "setsockopt(IP_DONTFRAG) "
+  "for quic conn failed, ignored");
+}
+}
+
+#endif
+
+#if (NGX_HAVE_INET6)
+
+#if (NGX_HAVE_IPV6_MTU_DISCOVER)
+
+if (c->sockaddr->sa_family == AF_INET6) {
+value = IPV6_PMTUDISC_DO;
+
+if (setsockopt(c->fd, IPPROTO_IPV6, IPV6_MTU_DISCOVER,
+   (const void *) , sizeof(int))
+== -1)
+{
+ngx_log_error(NGX_LOG_ALERT, c->log, ngx_socket_errno,
+  "setsockopt(IPV6_MTU_DISCOVER) "
+  "for quic conn failed, ignored");
+}
+}
+
+#elif (NGX_HAVE_IP_DONTFRAG)
+
+if (c->sockaddr->sa_family == AF_INET6) {
+
+value = 1;
+
+if (setsockopt(c->fd, IPPROTO_IPV6, IPV6_DONTFRAG,
+   (const void *) , sizeof(int))
+== -1)
+{
+ngx_log_error(NGX_LOG_ALERT, c->log, ngx_socket_errno,
+  

[PATCH 02 of 12] QUIC: renamed "ctp" to "peer_tp"

2023-12-25 Thread Vladimir Homutov via nginx-devel
The "ctp" refers to "client transport parameters", but in the code that
supports both client and server, the name is confusing, thus rename.


 src/event/quic/ngx_event_quic.c|  41 +
 src/event/quic/ngx_event_quic_ack.c|   8 ++--
 src/event/quic/ngx_event_quic_connection.h |   5 +-
 src/event/quic/ngx_event_quic_connid.c |   4 +-
 src/event/quic/ngx_event_quic_migration.c  |   7 ++-
 src/event/quic/ngx_event_quic_openssl_compat.c |  11 +++---
 src/event/quic/ngx_event_quic_ssl.c|   9 +++--
 src/event/quic/ngx_event_quic_streams.c|   9 +++--
 8 files changed, 51 insertions(+), 43 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1703080262 -10800
#  Wed Dec 20 16:51:02 2023 +0300
# Node ID f30bd37ac6b6b2f051883d0173942794ea73d8fb
# Parent  5ea917e44e03e88a2b6bc935510839a5a14e5dae
QUIC: renamed "ctp" to "peer_tp".

The "ctp" refers to "client transport parameters", but in the code that
supports both client and server, the name is confusing, thus rename.

diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c
--- a/src/event/quic/ngx_event_quic.c
+++ b/src/event/quic/ngx_event_quic.c
@@ -1,5 +1,6 @@
 
 /*
+ * Copyright (C) 2023 Web Server LLC
  * Copyright (C) Nginx, Inc.
  */
 
@@ -122,7 +123,7 @@ ngx_quic_connstate_dbg(ngx_connection_t 
 
 
 ngx_int_t
-ngx_quic_apply_transport_params(ngx_connection_t *c, ngx_quic_tp_t *ctp)
+ngx_quic_apply_transport_params(ngx_connection_t *c, ngx_quic_tp_t *peer_tp)
 {
 ngx_str_t   scid;
 ngx_quic_connection_t  *qc;
@@ -132,16 +133,16 @@ ngx_quic_apply_transport_params(ngx_conn
 scid.data = qc->path->cid->id;
 scid.len = qc->path->cid->len;
 
-if (scid.len != ctp->initial_scid.len
-|| ngx_memcmp(scid.data, ctp->initial_scid.data, scid.len) != 0)
+if (scid.len != peer_tp->initial_scid.len
+|| ngx_memcmp(scid.data, peer_tp->initial_scid.data, scid.len) != 0)
 {
 ngx_log_error(NGX_LOG_INFO, c->log, 0,
   "quic client initial_source_connection_id mismatch");
 return NGX_ERROR;
 }
 
-if (ctp->max_udp_payload_size < NGX_QUIC_MIN_INITIAL_SIZE
-|| ctp->max_udp_payload_size > NGX_QUIC_MAX_UDP_PAYLOAD_SIZE)
+if (peer_tp->max_udp_payload_size < NGX_QUIC_MIN_INITIAL_SIZE
+|| peer_tp->max_udp_payload_size > NGX_QUIC_MAX_UDP_PAYLOAD_SIZE)
 {
 qc->error = NGX_QUIC_ERR_TRANSPORT_PARAMETER_ERROR;
 qc->error_reason = "invalid maximum packet size";
@@ -151,7 +152,7 @@ ngx_quic_apply_transport_params(ngx_conn
 return NGX_ERROR;
 }
 
-if (ctp->active_connection_id_limit < 2) {
+if (peer_tp->active_connection_id_limit < 2) {
 qc->error = NGX_QUIC_ERR_TRANSPORT_PARAMETER_ERROR;
 qc->error_reason = "invalid active_connection_id_limit";
 
@@ -160,7 +161,7 @@ ngx_quic_apply_transport_params(ngx_conn
 return NGX_ERROR;
 }
 
-if (ctp->ack_delay_exponent > 20) {
+if (peer_tp->ack_delay_exponent > 20) {
 qc->error = NGX_QUIC_ERR_TRANSPORT_PARAMETER_ERROR;
 qc->error_reason = "invalid ack_delay_exponent";
 
@@ -169,7 +170,7 @@ ngx_quic_apply_transport_params(ngx_conn
 return NGX_ERROR;
 }
 
-if (ctp->max_ack_delay >= 16384) {
+if (peer_tp->max_ack_delay >= 16384) {
 qc->error = NGX_QUIC_ERR_TRANSPORT_PARAMETER_ERROR;
 qc->error_reason = "invalid max_ack_delay";
 
@@ -178,16 +179,16 @@ ngx_quic_apply_transport_params(ngx_conn
 return NGX_ERROR;
 }
 
-if (ctp->max_idle_timeout > 0
-&& ctp->max_idle_timeout < qc->tp.max_idle_timeout)
+if (peer_tp->max_idle_timeout > 0
+&& peer_tp->max_idle_timeout < qc->tp.max_idle_timeout)
 {
-qc->tp.max_idle_timeout = ctp->max_idle_timeout;
+qc->tp.max_idle_timeout = peer_tp->max_idle_timeout;
 }
 
-qc->streams.server_max_streams_bidi = ctp->initial_max_streams_bidi;
-qc->streams.server_max_streams_uni = ctp->initial_max_streams_uni;
+qc->streams.server_max_streams_bidi = peer_tp->initial_max_streams_bidi;
+qc->streams.server_max_streams_uni = peer_tp->initial_max_streams_uni;
 
-ngx_memcpy(>ctp, ctp, sizeof(ngx_quic_tp_t));
+ngx_memcpy(>peer_tp, peer_tp, sizeof(ngx_quic_tp_t));
 
 return NGX_OK;
 }
@@ -226,7 +227,7 @@ ngx_quic_new_connection(ngx_connection_t
 ngx_quic_header_t *pkt)
 {
 ngx_uint_t  i;
-ngx_quic_tp_t  *ctp;
+ngx_quic_tp_t  *peer_tp;
 ngx_quic_connection_t  *qc;
 
 qc = ngx_pcalloc(c->pool, sizeof(ngx_quic_connection_t));
@@ -288,13 +289,13 @@ ngx_quic_new_connection(ngx_connection_t
 return NULL;
 }
 
-ctp = >ctp;
+peer_tp = >peer_tp;
 
 /* defaults to be used before actual client parameters are received */
-ctp->max_udp_payload_size = NGX_QUIC_MAX_UDP_PAYLOAD_SIZE;
-ctp->ack_delay_exponent = 

[PATCH 03 of 12] QUIC: added a structure for stream limits/counters

2023-12-25 Thread Vladimir Homutov via nginx-devel
To simplify code dealing with stream states when both client and server
are supported, instead of 8 named fields, use two structures split into
uni/bidi and client/server.


 src/event/quic/ngx_event_quic.c|   8 ++--
 src/event/quic/ngx_event_quic_ack.c|   4 +-
 src/event/quic/ngx_event_quic_connection.h |  23 +++
 src/event/quic/ngx_event_quic_streams.c|  60 +++---
 4 files changed, 50 insertions(+), 45 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1697113160 -10800
#  Thu Oct 12 15:19:20 2023 +0300
# Node ID f54423e057f909b1d644cc0af316d67b91cd408f
# Parent  f30bd37ac6b6b2f051883d0173942794ea73d8fb
QUIC: added a structure for stream limits/counters.

To simplify code dealing with stream states when both client and server
are supported, instead of 8 named fields, use two structures split into
uni/bidi and client/server.

diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c
--- a/src/event/quic/ngx_event_quic.c
+++ b/src/event/quic/ngx_event_quic.c
@@ -185,8 +185,8 @@ ngx_quic_apply_transport_params(ngx_conn
 qc->tp.max_idle_timeout = peer_tp->max_idle_timeout;
 }
 
-qc->streams.server_max_streams_bidi = peer_tp->initial_max_streams_bidi;
-qc->streams.server_max_streams_uni = peer_tp->initial_max_streams_uni;
+qc->streams.server.bidi.max = peer_tp->initial_max_streams_bidi;
+qc->streams.server.uni.max = peer_tp->initial_max_streams_uni;
 
 ngx_memcpy(>peer_tp, peer_tp, sizeof(ngx_quic_tp_t));
 
@@ -303,8 +303,8 @@ ngx_quic_new_connection(ngx_connection_t
 qc->streams.recv_max_data = qc->tp.initial_max_data;
 qc->streams.recv_window = qc->streams.recv_max_data;
 
-qc->streams.client_max_streams_uni = qc->tp.initial_max_streams_uni;
-qc->streams.client_max_streams_bidi = qc->tp.initial_max_streams_bidi;
+qc->streams.client.uni.max = qc->tp.initial_max_streams_uni;
+qc->streams.client.bidi.max = qc->tp.initial_max_streams_bidi;
 
 qc->congestion.window = ngx_min(10 * qc->tp.max_udp_payload_size,
 ngx_max(2 * qc->tp.max_udp_payload_size,
diff --git a/src/event/quic/ngx_event_quic_ack.c b/src/event/quic/ngx_event_quic_ack.c
--- a/src/event/quic/ngx_event_quic_ack.c
+++ b/src/event/quic/ngx_event_quic_ack.c
@@ -614,8 +614,8 @@ ngx_quic_resend_frames(ngx_connection_t 
 case NGX_QUIC_FT_MAX_STREAMS:
 case NGX_QUIC_FT_MAX_STREAMS2:
 f->u.max_streams.limit = f->u.max_streams.bidi
- ? qc->streams.client_max_streams_bidi
- : qc->streams.client_max_streams_uni;
+ ? qc->streams.client.bidi.max
+ : qc->streams.client.uni.max;
 ngx_quic_queue_frame(qc, f);
 break;
 
diff --git a/src/event/quic/ngx_event_quic_connection.h b/src/event/quic/ngx_event_quic_connection.h
--- a/src/event/quic/ngx_event_quic_connection.h
+++ b/src/event/quic/ngx_event_quic_connection.h
@@ -136,6 +136,18 @@ struct ngx_quic_socket_s {
 
 
 typedef struct {
+uint64_t  max;
+uint64_t  count;
+} ngx_quic_stream_ctl_t;
+
+
+typedef struct {
+ngx_quic_stream_ctl_t uni;
+ngx_quic_stream_ctl_t bidi;
+} ngx_quic_stream_peer_t;
+
+
+typedef struct {
 ngx_rbtree_t  tree;
 ngx_rbtree_node_t sentinel;
 
@@ -150,15 +162,8 @@ typedef struct {
 uint64_t  send_offset;
 uint64_t  send_max_data;
 
-uint64_t  server_max_streams_uni;
-uint64_t  server_max_streams_bidi;
-uint64_t  server_streams_uni;
-uint64_t  server_streams_bidi;
-
-uint64_t  client_max_streams_uni;
-uint64_t  client_max_streams_bidi;
-uint64_t  client_streams_uni;
-uint64_t  client_streams_bidi;
+ngx_quic_stream_peer_tserver;
+ngx_quic_stream_peer_tclient;
 
 ngx_uint_tinitialized;
  /* unsigned  initialized:1; */
diff --git a/src/event/quic/ngx_event_quic_streams.c b/src/event/quic/ngx_event_quic_streams.c
--- a/src/event/quic/ngx_event_quic_streams.c
+++ b/src/event/quic/ngx_event_quic_streams.c
@@ -59,47 +59,47 @@ ngx_quic_open_stream(ngx_connection_t *c
 }
 
 if (bidi) {
-if (qc->streams.server_streams_bidi
->= qc->streams.server_max_streams_bidi)
+if (qc->streams.server.bidi.count
+>= qc->streams.server.bidi.max)
 {
 ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
"quic too many server bidi 

[PATCH 00 of 12] HTTP/3 proxying to upstreams

2023-12-25 Thread Vladimir Homutov via nginx-devel
Hello, everyone,

and Merry Christmas to all!

I'm a developer of an nginx fork Angie.  Recently we implemented
an HTTP/3 proxy support in our fork [1].

We'd like to contribute this functionality to nginx OSS community.
Hence here is a patch series backported from Angie to the current
head of nginx mainline branch (1.25.3)

If you find patching and building nginx from source irritating in order
to test the feature, you can use the prebuilt packages of Angie [2]

[1] https://angie.software/en/http_proxy/#proxy-http-version
[2] https://angie.software/en/install/

Your feedback is welcome!

__.  .--,
*-/___,  ,-/___,-/___,-/___,-/___,   _.-.=,{\/ _/  /`)
 `\ _ ),-/___,-/___,-/___,-/___, ) _..-'`-(`._(_.;`   /
  /< \\=`\ _ )`\ _ )`\ _ )`\ _ )<`--''` (__\_/___,
 /< <\ https://http3-server.example.com:4433;
}
}

You may also need to configure SNI using the appropriate values
for the "proxy_ssl_name" and "proxy_ssl_server_name" directives
as well as certificates and other related things.

There is a number of proxy_http3_ directives is available that
configure quic settings.  For the interop testing purposes, HQ support
is available.


Below are technical details about the current state of the patch set.

 *** TESTS ***

The patchset includes tests which are added to the "t" directory for
convenience. Copy them to nginx-tests and run them as usual.
Most of them are proxy tests adapted for use with HTTP/3.

 *** LIMITATIONS ***

The following features are NOT implemented:

 * Trailers: it requires full trailers support in nginx first
 * Connection migration: does not seem necessary for proxying scenarios
 * 0RTT: currently not supported

The SSL library requirements are the same as for the server-side support.
There are some interoperability issues when using different libraries on
client and server: the combination of  client + openssl/compat
and server + boringssl leads to a handshake failure with an error:

>> SSL_do_handshake() failed (SSL: error:1132:SSL routines:
>>  OPENSSL_internal:UNEXPECTED_COMPATIBILITY_MODE)


 *** MULTIPLEXING ***

With keepalive disabled, the HTTP/3 connection to backend is very similar
to a normal TCP SSL connection: the connection is established, handshake
is performed, the request stream is created and everything is closed
when the request is completed.

With keepalive enabled, the underlying QUIC connection is cached,
and can be reused by another client. Each client is using its own
QUIC connection.

Theoretically, it is possible to use only one quic connection to each
backend and use separate HTTP/3 streams to make requests. This is NOT
currently implemented, as it requires more changes to the upstream
and keepalive modules and has security implications.


 *** INTERNALS ***

This is a first attempt to integrating the HTTP/3 proxy into nginx,
so all currently exposed interfaces are not final.

Probably, the HTTP/3 proxy should be implemented in a separate module.
Currently it is a patch to the HTTP proxy module to minimize boilerplate.

Things that need improvement:
- client interface: the way to create client, start handshake and
  create first stream to use for request;
  The way SSL sessions are supported doesn't look good.

- upstreams interface: one way is to hide quic details and make it
   feel more SSL-like, maybe even kind of SSL module.
   Probably need a separate keepalive module for HTTP/3 to
   allow some controlled level of multiplexing.

- connection termination is quite tricky due to the handling of
  the underlying quic UDP connection and stream requests.
  Closing an HTTP/3 connection may be incorrect in some cases.

- Some interop tests still fail. This is partly due to the nature
  of the tests. This part requires more work with hard-to-reproduce
  cases.

___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 01 of 12] QUIC: fixed accounting of in-flight PING frames

2023-12-25 Thread Vladimir Homutov via nginx-devel
Previously, such frames were not accounted as in-flight, and they were not
stored in sent queue.  This prevented proper PTO calculation and ACK handling.


 src/event/quic/ngx_event_quic_ack.c |  62 +---
 1 files changed, 43 insertions(+), 19 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1703154552 -10800
#  Thu Dec 21 13:29:12 2023 +0300
# Node ID 5ea917e44e03e88a2b6bc935510839a5a14e5dae
# Parent  cc16989c6d61385027c1ebfd43929f8369fa5f62
QUIC: fixed accounting of in-flight PING frames.

Previously, such frames were not accounted as in-flight, and they were not
stored in sent queue.  This prevented proper PTO calculation and ACK handling.

diff --git a/src/event/quic/ngx_event_quic_ack.c b/src/event/quic/ngx_event_quic_ack.c
--- a/src/event/quic/ngx_event_quic_ack.c
+++ b/src/event/quic/ngx_event_quic_ack.c
@@ -1,5 +1,6 @@
 
 /*
+ * Copyright (C) 2023 Web Server LLC
  * Copyright (C) Nginx, Inc.
  */
 
@@ -43,6 +44,8 @@ static ngx_msec_t ngx_quic_pcg_duration(
 static void ngx_quic_persistent_congestion(ngx_connection_t *c);
 static void ngx_quic_congestion_lost(ngx_connection_t *c,
 ngx_quic_frame_t *frame);
+static ngx_int_t ngx_quic_ping_peer(ngx_connection_t *c,
+ngx_quic_send_ctx_t *ctx);
 static void ngx_quic_lost_handler(ngx_event_t *ev);
 
 
@@ -834,7 +837,7 @@ void ngx_quic_lost_handler(ngx_event_t *
 void
 ngx_quic_pto_handler(ngx_event_t *ev)
 {
-ngx_uint_t  i, n;
+ngx_uint_t  i;
 ngx_msec_t  now;
 ngx_queue_t*q;
 ngx_msec_int_t  w;
@@ -876,20 +879,9 @@ ngx_quic_pto_handler(ngx_event_t *ev)
"quic pto %s pto_count:%ui",
ngx_quic_level_name(ctx->level), qc->pto_count);
 
-for (n = 0; n < 2; n++) {
-
-f = ngx_quic_alloc_frame(c);
-if (f == NULL) {
-goto failed;
-}
-
-f->level = ctx->level;
-f->type = NGX_QUIC_FT_PING;
-f->ignore_congestion = 1;
-
-if (ngx_quic_frame_sendto(c, f, 0, qc->path) == NGX_ERROR) {
-goto failed;
-}
+if (ngx_quic_ping_peer(c, ctx) != NGX_OK) {
+ngx_quic_close_connection(c, NGX_ERROR);
+return;
 }
 }
 
@@ -898,13 +890,45 @@ ngx_quic_pto_handler(ngx_event_t *ev)
 ngx_quic_set_lost_timer(c);
 
 ngx_quic_connstate_dbg(c);
+}
 
-return;
+
+static ngx_int_t
+ngx_quic_ping_peer(ngx_connection_t *c, ngx_quic_send_ctx_t *ctx)
+{
+ngx_uint_t  i;
+ngx_quic_frame_t   *f;
+ngx_quic_congestion_t  *cg;
+ngx_quic_connection_t  *qc;
+
+qc = ngx_quic_get_connection(c);
+
+cg = >congestion;
+
+for (i = 0; i < 2; i++) {
 
-failed:
+f = ngx_quic_alloc_frame(c);
+if (f == NULL) {
+return NGX_ERROR;
+}
+
+f->level = ctx->level;
+f->type = NGX_QUIC_FT_PING;
+f->ignore_congestion = 1;
+f->len = ngx_quic_create_frame(NULL, f);
 
-ngx_quic_close_connection(c, NGX_ERROR);
-return;
+if (ngx_quic_frame_sendto(c, f, 0, qc->path) != NGX_OK) {
+return NGX_ERROR;
+}
+
+ngx_queue_insert_tail(>sent, >queue);
+cg->in_flight += f->plen;
+}
+
+ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
+   "quic congestion send if:%uz", cg->in_flight);
+
+return NGX_OK;
 }
 
 
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH 1 of 2] HTTP: uniform overflow checks in ngx_http_alloc_large_header_buffer

2023-11-29 Thread Vladimir Homutov via nginx-devel
On Tue, Nov 28, 2023 at 05:58:23AM +0300, Maxim Dounin wrote:
> Hello!
>
> On Fri, Nov 10, 2023 at 12:11:54PM +0300, Vladimir Homutov via nginx-devel 
> wrote:
>
> > If URI is not fully parsed yet, some pointers are not set.
> > As a result, the calculation of "new + (ptr - old)" expression
> > may overflow. In such a case, just avoid calculating it, as value
> > will be set correctly later by the parser in any case.
> >
> > The issue was found by GCC undefined behaviour sanitizer.
> >
> >
> >  src/http/ngx_http_request.c |  34 ++
> >  1 files changed, 26 insertions(+), 8 deletions(-)
> >
> >
>
> > # HG changeset patch
> > # User Vladimir Khomutov 
> > # Date 1699604478 -10800
> > #  Fri Nov 10 11:21:18 2023 +0300
> > # Node ID 505e927eb7a75f0fdce4caddb4ab9d9c71c9b9c8
> > # Parent  dadd13fdcf5228c8e8380e120d4621002e3b0919
> > HTTP: uniform overflow checks in ngx_http_alloc_large_header_buffer.
> >
> > If URI is not fully parsed yet, some pointers are not set.
> > As a result, the calculation of "new + (ptr - old)" expression
> > may overflow. In such a case, just avoid calculating it, as value
> > will be set correctly later by the parser in any case.
> >
> > The issue was found by GCC undefined behaviour sanitizer.
>
> I would rather refrain from saying this is an issue, it is not
> (unless a compiler actually starts to do silly things as long as
> it sees something formally defined as "undefined behavior" in C
> standard, and this would be indeed an issue - in the compiler).
> As already noted in the initial review, the code as written is
> completely safe in practice.  For such mostly style commits we
> usually write something like "Prodded by...".

totally agreed

>
> Also note that the issue is not an overflow, but rather
> subtraction of pointers which do not belong to the same array
> object (C11, 6.5.6 Additive operators, p.9):
>
> : When two pointers are subtracted, both shall point to elements
> : of the same array object, or one past the last element of the
> : array object
>
> The undefined behaviour is present as long as "ptr" and "old" are
> not in the same buffer (i.e., array object), which is the case
> when "ptr" is not set.  And another one follows when trying to add
> the (already undefined) subtraction result to "new" (since the
> result is not going to belong to the same array object):
>
> : If both the pointer operand and the result point to elements of
> : the same array object, or one past the last element of the array
> : object, the evaluation shall not produce an overflow; otherwise,
> : the behavior is undefined.
>
> Overflow here might be an indicator that certainly there is an
> undefined behaviour, but it's just an indicator.
>
> You may want to rewrite commit log accordingly.

The commit log was updated.

> > diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
> > --- a/src/http/ngx_http_request.c
> > +++ b/src/http/ngx_http_request.c
> > @@ -1718,14 +1718,23 @@ ngx_http_alloc_large_header_buffer(ngx_h
[...]
> >  if (r->host_start) {
>
> See review of the second patch about r->port_start / r->port_end.
> I would rather change it similarly for now.

I would prefer to remove both, so this patch has nothing about it.

updated patch:


# HG changeset patch
# User Vladimir Khomutov 
# Date 1701245585 -10800
#  Wed Nov 29 11:13:05 2023 +0300
# Node ID 7c8ecb3fee20dfbb9a627441377dd09509988e2a
# Parent  dacad3a9c7b8435a4c67ad2b67f261e7b4e36d8e
HTTP: uniform checks in ngx_http_alloc_large_header_buffer().

If URI is not fully parsed yet, some pointers are not set.  As a result,
the calculation of "new + (ptr - old)" expression is flawed.

According to  C11, 6.5.6 Additive operators, p.9:

: When two pointers are subtracted, both shall point to elements
: of the same array object, or one past the last element of the
: array object

Since "ptr" is not set, subtraction leads to undefined behaviour, because
"ptr" and "old" are not in the same buffer (i.e. array objects).

Prodded by GCC undefined behaviour sanitizer.

diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1718,14 +1718,23 @@ ngx_http_alloc_large_header_buffer(ngx_h
 r->request_end = new + (r->request_end - old);
 }
 
-r->method_end = new + (r->method_end - old);
-
-r->uri_start = new + (r->uri_start - old);
-r->uri_end = new + (r->uri_end - old);
+if

Re: [PATCH 2 of 2] HTTP: removed unused r->port_start

2023-11-29 Thread Vladimir Homutov via nginx-devel
On Tue, Nov 28, 2023 at 05:57:39AM +0300, Maxim Dounin wrote:
> Hello!
>
> On Fri, Nov 10, 2023 at 12:11:55PM +0300, Vladimir Homutov via nginx-devel 
> wrote:
>
> >
> > It is no longer used since the refactoring in 8e5bf1bc87e2 (2008).
>
> Neither r->port_start nor r->port_end were ever used.
>
> The r->port_end is set by the parser, though it was never used by
> the following code (and was never usable, since not copied by the
> ngx_http_alloc_large_header_buffer() without r->port_start set).
>
> The 8e5bf1bc87e2 commit is completely unrelated, it is about
> refactoring of the ngx_parse_inet_url() function, which had a
> local variable named "port_start".

exactly, thanks for noticing.

>
> >
> > diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
> > --- a/src/http/ngx_http_request.c
> > +++ b/src/http/ngx_http_request.c
> > @@ -1744,8 +1744,7 @@ ngx_http_alloc_large_header_buffer(ngx_h
> >  }
> >  }
> >
> > -if (r->port_start) {
> > -r->port_start = new + (r->port_start - old);
> > +if (r->port_end) {
> >  r->port_end = new + (r->port_end - old);
> >  }
> >
> > diff --git a/src/http/ngx_http_request.h b/src/http/ngx_http_request.h
> > --- a/src/http/ngx_http_request.h
> > +++ b/src/http/ngx_http_request.h
> > @@ -597,7 +597,6 @@ struct ngx_http_request_s {
> >  u_char   *schema_end;
> >  u_char   *host_start;
> >  u_char   *host_end;
> > -u_char   *port_start;
> >  u_char   *port_end;
> >
> >  unsigned  http_minor:16;
>
> I don't think it's a good change.  Rather, we should either remove
> both, or (eventually) fix these and provide some valid usage of
> the port as parsed either from the request line or from the Host
> header, similarly to the $host variable.
>

I think that we should remove both, as unused code still needs to be
maintained without any advantage, as this example shows.
Restoring it will be trivial, if ever required.



# HG changeset patch
# User Vladimir Khomutov 
# Date 1701165434 -10800
#  Tue Nov 28 12:57:14 2023 +0300
# Node ID dacad3a9c7b8435a4c67ad2b67f261e7b4e36d8e
# Parent  f366007dd23a6ce8e8427c1b3042781b618a2ade
HTTP: removed unused r->port_start and r->port_end.

Neither r->port_start nor r->port_end were ever used.

The r->port_end is set by the parser, though it was never used by
the following code (and was never usable, since not copied by the
ngx_http_alloc_large_header_buffer() without r->port_start set).

diff --git a/src/http/ngx_http_parse.c b/src/http/ngx_http_parse.c
--- a/src/http/ngx_http_parse.c
+++ b/src/http/ngx_http_parse.c
@@ -451,19 +451,16 @@ ngx_http_parse_request_line(ngx_http_req
 
 switch (ch) {
 case '/':
-r->port_end = p;
 r->uri_start = p;
 state = sw_after_slash_in_uri;
 break;
 case '?':
-r->port_end = p;
 r->uri_start = p;
 r->args_start = p + 1;
 r->empty_path_in_uri = 1;
 state = sw_uri;
 break;
 case ' ':
-r->port_end = p;
 /*
  * use single "/" from request line to preserve pointers,
  * if request line will be copied to large client buffer
diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1735,11 +1735,6 @@ ngx_http_alloc_large_header_buffer(ngx_h
 }
 }
 
-if (r->port_start) {
-r->port_start = new + (r->port_start - old);
-r->port_end = new + (r->port_end - old);
-}
-
 if (r->uri_ext) {
 r->uri_ext = new + (r->uri_ext - old);
 }
diff --git a/src/http/ngx_http_request.h b/src/http/ngx_http_request.h
--- a/src/http/ngx_http_request.h
+++ b/src/http/ngx_http_request.h
@@ -597,8 +597,6 @@ struct ngx_http_request_s {
 u_char   *schema_end;
 u_char   *host_start;
 u_char   *host_end;
-u_char   *port_start;
-u_char   *port_end;
 
 unsigned  http_minor:16;
 unsigned  http_major:16;
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 1 of 2] HTTP: uniform overflow checks in ngx_http_alloc_large_header_buffer

2023-11-10 Thread Vladimir Homutov via nginx-devel
If URI is not fully parsed yet, some pointers are not set.
As a result, the calculation of "new + (ptr - old)" expression
may overflow. In such a case, just avoid calculating it, as value
will be set correctly later by the parser in any case.

The issue was found by GCC undefined behaviour sanitizer.


 src/http/ngx_http_request.c |  34 ++
 1 files changed, 26 insertions(+), 8 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1699604478 -10800
#  Fri Nov 10 11:21:18 2023 +0300
# Node ID 505e927eb7a75f0fdce4caddb4ab9d9c71c9b9c8
# Parent  dadd13fdcf5228c8e8380e120d4621002e3b0919
HTTP: uniform overflow checks in ngx_http_alloc_large_header_buffer.

If URI is not fully parsed yet, some pointers are not set.
As a result, the calculation of "new + (ptr - old)" expression
may overflow. In such a case, just avoid calculating it, as value
will be set correctly later by the parser in any case.

The issue was found by GCC undefined behaviour sanitizer.

diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1718,14 +1718,23 @@ ngx_http_alloc_large_header_buffer(ngx_h
 r->request_end = new + (r->request_end - old);
 }
 
-r->method_end = new + (r->method_end - old);
-
-r->uri_start = new + (r->uri_start - old);
-r->uri_end = new + (r->uri_end - old);
+if (r->method_end) {
+r->method_end = new + (r->method_end - old);
+}
+
+if (r->uri_start) {
+r->uri_start = new + (r->uri_start - old);
+}
+
+if (r->uri_end) {
+r->uri_end = new + (r->uri_end - old);
+}
 
 if (r->schema_start) {
 r->schema_start = new + (r->schema_start - old);
-r->schema_end = new + (r->schema_end - old);
+if (r->schema_end) {
+r->schema_end = new + (r->schema_end - old);
+}
 }
 
 if (r->host_start) {
@@ -1754,9 +1763,18 @@ ngx_http_alloc_large_header_buffer(ngx_h
 
 } else {
 r->header_name_start = new;
-r->header_name_end = new + (r->header_name_end - old);
-r->header_start = new + (r->header_start - old);
-r->header_end = new + (r->header_end - old);
+
+if (r->header_name_end) {
+r->header_name_end = new + (r->header_name_end - old);
+}
+
+if (r->header_start) {
+r->header_start = new + (r->header_start - old);
+}
+
+if (r->header_end) {
+r->header_end = new + (r->header_end - old);
+}
 }
 
 r->header_in = b;
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 0 of 2] [patch] some issues found by gcc undef sanitizer

2023-11-10 Thread Vladimir Homutov via nginx-devel

> As already noted off-list, this is certainly not the only field
> which might be not yet set when
> ngx_http_alloc_large_header_buffer() is called.  From the patch
> context as shown, at least r->method_end and r->uri_start might
> not be set as well, leading to similar overflows.  And certainly
> there are other fields as well.

Agreed, there is a clear pattern in this case.
I have updated the patch to test other cases as well.

Also, I've created a separate patch to remove r->port_start,
which is actually unused and looks like remnant of old refactoring.

___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 2 of 2] HTTP: removed unused r->port_start

2023-11-10 Thread Vladimir Homutov via nginx-devel
It is no longer used since the refactoring in 8e5bf1bc87e2 (2008).


 src/http/ngx_http_request.c |  3 +--
 src/http/ngx_http_request.h |  1 -
 2 files changed, 1 insertions(+), 3 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1699603821 -10800
#  Fri Nov 10 11:10:21 2023 +0300
# Node ID 6f957e137407d8f3f7e34f413c92103004b44594
# Parent  505e927eb7a75f0fdce4caddb4ab9d9c71c9b9c8
HTTP: removed unused r->port_start.

It is no longer used since the refactoring in 8e5bf1bc87e2 (2008).

diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1744,8 +1744,7 @@ ngx_http_alloc_large_header_buffer(ngx_h
 }
 }
 
-if (r->port_start) {
-r->port_start = new + (r->port_start - old);
+if (r->port_end) {
 r->port_end = new + (r->port_end - old);
 }
 
diff --git a/src/http/ngx_http_request.h b/src/http/ngx_http_request.h
--- a/src/http/ngx_http_request.h
+++ b/src/http/ngx_http_request.h
@@ -597,7 +597,6 @@ struct ngx_http_request_s {
 u_char   *schema_end;
 u_char   *host_start;
 u_char   *host_end;
-u_char   *port_start;
 u_char   *port_end;
 
 unsigned  http_minor:16;
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [patch] quic PTO counter fixes

2023-11-09 Thread Vladimir Homutov via nginx-devel
> On Thu, Oct 26, 2023 at 03:08:55AM +0400, Sergey Kandaurov wrote:
> > # HG changeset patch
> > # User Vladimir Khomutov 
> > # Date 1697031803 -10800
> > #  Wed Oct 11 16:43:23 2023 +0300
> > # Node ID 9ba2840e88f62343b3bd794e43900781dab43686
> > # Parent  1f188102fbd944df797e8710f70cccee76164add
> > QUIC: fixed handling of PTO counter.
> >
> > The RFC 9002 clearly says in "6.2. Probe Timeout":
> > ...
> > As with loss detection, the PTO is per packet number space.
> > That is, a PTO value is computed per packet number space.
> >
> > Despite that, current code is using per-connection PTO counter.
> > For example, this may lead to situation when packet loss at handshake
> > level will affect PTO calculation for initial packets, preventing
> > send of new probes.
>
> Although PTO value is per packet number space, PTO backoff is not,
> see "6.2.1 Computing PTO":
>
> : When ack-eliciting packets in multiple packet number spaces are in flight, 
> the
> : timer MUST be set to the earlier value of the Initial and Handshake packet
> : number spaces.

And I read this fragment as:
- there are multiple timer values (i.e. per packet number space)
  (and value is pto * backoff)
- we have to choose the earliest value

The ngx_quic_pto() has nothing that depends on packet number space
(with the minor exception that we add max_ack_delay at application level
 after the handshake)
So pto_count is the only thing that can make timer values to be
different in different packet number spaces.

>
> But:
>
> : When a PTO timer expires, the PTO backoff MUST be increased <..>
>
> : This exponential reduction in the sender's rate is important because 
> consecutive
> : PTOs might be caused by loss of packets or acknowledgments due to severe
> : congestion.  Even when there are ack-eliciting packets in flight in multiple
> : packet number spaces, the exponential increase in PTO occurs across all 
> spaces
> : to prevent excess load on the network.  For example, a timeout in the 
> Initial
> : packet number space doubles the length of the timeout in the Handshake 
> packet
> : number space.

yes, this really looks like contradiction.
At least I don't understand how it is possible to have PTO value
different by packet number space given the way we calculate it.

> Even if that would be proven otherwise, I don't think the description
> provides detailed explanation.  It describes a pretty specific use case,
> when both Initial and Handshake packet number spaces have in-flight packets
> with different PTO timeout (i.e. different f->last).  Typically they are
> sent coalesced (e.g. CRYPTO frames for ServerHello and (at least)
> EncryptedExtensions TLS messages).
> In interop tests, though, it might be different: such packets may be
> sent separately, with Handshake packet thus having a later PTO timeout.
> If such, PTO timer will first fire for the Initial packet, then for Handshake,
> which will result in PTO backoff accumulated for each packet:
>
>  t1: <- Initial (lost)
>  t2: <- Handshake (lost)
> t1': pto(t1) timeout
>  <- Initial (pto_count=1)
> t2': pto(t2) timeout
>  <- Handshake (pto_count=2)
> t1'': pto(t1') timeout
>  <- Initial (pto_count=3)
>
> So, I would supplement the description with the phrase that that's
> fair typically with uncoalesced packets seen in interop tests, and
> that the same is true vice verse with packet loss at initial packet
> number space affecting PTO backoff in handshake packet number space.
>
> But see above about PTO backoff increase across all spaces.

I tend to think that it is better to leave things as is.
maybe RFC needs some better wording in this case.

I've checked ngtcp2 and msquic and it it looks like both
handle pto counter per-connection too;
(see pto_count in ngtcp2 and QUIC_LOSS_DETECTION.ProbeCount in msquic)


> > Additionally, one case of successful ACK receiving was missing:
> > PING frames are not stored in the ctx->sent queue, thus PTO was not
> > reset when corresponding packets were acknowledged.
>
> See below.
>
> >
> > diff --git a/src/event/quic/ngx_event_quic.c 
> > b/src/event/quic/ngx_event_quic.c
> > --- a/src/event/quic/ngx_event_quic.c
> > +++ b/src/event/quic/ngx_event_quic.c
> > @@ -1088,8 +1088,6 @@ ngx_quic_discard_ctx(ngx_connection_t *c
> >
> >  ngx_quic_keys_discard(qc->keys, level);
> >
> > -qc->pto_count = 0;
> > -
> >  ctx = ngx_quic_get_send_ctx(qc, level);
> >
> >  ngx_quic_free_buffer(c, >crypto);
> > @@ -1120,6 +1118,7 @@ ngx_quic_discard_ctx(ngx_connection_t *c
> >  }
> >
> >  ctx->send_ack = 0;
> > +ctx->pto_count = 0;
> >
> >  ngx_quic_set_lost_timer(c);
> >  }
> > diff --git a/src/event/quic/ngx_event_quic_ack.c 
> > b/src/event/quic/ngx_event_quic_ack.c
> > --- a/src/event/quic/ngx_event_quic_ack.c
> > +++ b/src/event/quic/ngx_event_quic_ack.c
> > @@ -286,8 +286,12 @@ ngx_quic_handle_ack_frame_range(ngx_conn
> >  if (!found) {
> >
> >  if (max < ctx->pnum) {
> > -

[PATCH 2 of 2] HTTP: suppressed possible overflow in interim r->uri_end calculation

2023-10-27 Thread Vladimir Homutov via nginx-devel
If URI is not fully parsed yet, the r->uri_end pointer is NULL.
As a result, calculation of "new + (r->uri_end - old)" expression
may overflow.  In such case, just avoid calculating it, as r->uri_end
will be set correctly later by the parser in any case.

The issue was found by GCC undefined behaviour sanitizer.


 src/http/ngx_http_request.c |  4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1698407686 -10800
#  Fri Oct 27 14:54:46 2023 +0300
# Node ID 1b28902de1c648fc2586bba8e05c2ff63e0e33cb
# Parent  ef9f124b156aff0e9f66057e438af835bd7a60d2
HTTP: suppressed possible overflow in interim r->uri_end calculation.

If URI is not fully parsed yet, the r->uri_end pointer is NULL.
As a result, calculation of "new + (r->uri_end - old)" expression
may overflow.  In such case, just avoid calculating it, as r->uri_end
will be set correctly later by the parser in any case.

The issue was found by GCC undefined behaviour sanitizer.

diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -1721,7 +1721,9 @@ ngx_http_alloc_large_header_buffer(ngx_h
 r->method_end = new + (r->method_end - old);
 
 r->uri_start = new + (r->uri_start - old);
-r->uri_end = new + (r->uri_end - old);
+if (r->uri_end) {
+r->uri_end = new + (r->uri_end - old);
+}
 
 if (r->schema_start) {
 r->schema_start = new + (r->schema_start - old);
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH 1 of 2] Core: avoid calling memcpy() in edge cases

2023-10-27 Thread Vladimir Homutov via nginx-devel
Patch subject is complete summary.


 src/core/ngx_cycle.c |  10 ++
 src/core/ngx_resolver.c  |   2 +-
 src/core/ngx_string.c|  15 +++
 src/http/modules/ngx_http_proxy_module.c |   4 ++--
 src/http/ngx_http_file_cache.c   |   4 +++-
 src/http/ngx_http_variables.c|   3 +++
 src/mail/ngx_mail_auth_http_module.c |  12 +---
 src/stream/ngx_stream_script.c   |   4 +++-
 8 files changed, 42 insertions(+), 12 deletions(-)


# HG changeset patch
# User Vladimir Khomutov 
# Date 1698407658 -10800
#  Fri Oct 27 14:54:18 2023 +0300
# Node ID ef9f124b156aff0e9f66057e438af835bd7a60d2
# Parent  ea1f29c2010cda4940b741976f103d547308815a
Core: avoid calling memcpy() in edge cases.

diff --git a/src/core/ngx_cycle.c b/src/core/ngx_cycle.c
--- a/src/core/ngx_cycle.c
+++ b/src/core/ngx_cycle.c
@@ -115,10 +115,12 @@ ngx_init_cycle(ngx_cycle_t *old_cycle)
 old_cycle->conf_file.len + 1);
 
 cycle->conf_param.len = old_cycle->conf_param.len;
-cycle->conf_param.data = ngx_pstrdup(pool, _cycle->conf_param);
-if (cycle->conf_param.data == NULL) {
-ngx_destroy_pool(pool);
-return NULL;
+if (cycle->conf_param.len) {
+cycle->conf_param.data = ngx_pstrdup(pool, _cycle->conf_param);
+if (cycle->conf_param.data == NULL) {
+ngx_destroy_pool(pool);
+return NULL;
+}
 }
 
 
diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c
--- a/src/core/ngx_resolver.c
+++ b/src/core/ngx_resolver.c
@@ -4206,7 +4206,7 @@ ngx_resolver_dup(ngx_resolver_t *r, void
 
 dst = ngx_resolver_alloc(r, size);
 
-if (dst == NULL) {
+if (dst == NULL || size == 0 || src == NULL) {
 return dst;
 }
 
diff --git a/src/core/ngx_string.c b/src/core/ngx_string.c
--- a/src/core/ngx_string.c
+++ b/src/core/ngx_string.c
@@ -252,6 +252,11 @@ ngx_vslprintf(u_char *buf, u_char *last,
 case 'V':
 v = va_arg(args, ngx_str_t *);
 
+if (v->len == 0 || v->data == NULL) {
+fmt++;
+continue;
+}
+
 buf = ngx_sprintf_str(buf, last, v->data, v->len, hex);
 fmt++;
 
@@ -260,6 +265,11 @@ ngx_vslprintf(u_char *buf, u_char *last,
 case 'v':
 vv = va_arg(args, ngx_variable_value_t *);
 
+if (vv->len == 0 || vv->data == NULL) {
+fmt++;
+continue;
+}
+
 buf = ngx_sprintf_str(buf, last, vv->data, vv->len, hex);
 fmt++;
 
@@ -268,6 +278,11 @@ ngx_vslprintf(u_char *buf, u_char *last,
 case 's':
 p = va_arg(args, u_char *);
 
+if (slen == 0 || p == NULL) {
+fmt++;
+continue;
+}
+
 buf = ngx_sprintf_str(buf, last, p, slen, hex);
 fmt++;
 
diff --git a/src/http/modules/ngx_http_proxy_module.c b/src/http/modules/ngx_http_proxy_module.c
--- a/src/http/modules/ngx_http_proxy_module.c
+++ b/src/http/modules/ngx_http_proxy_module.c
@@ -1205,7 +1205,7 @@ ngx_http_proxy_create_key(ngx_http_reque
 
 key->data = p;
 
-if (r->valid_location) {
+if (r->valid_location && ctx->vars.uri.len) {
 p = ngx_copy(p, ctx->vars.uri.data, ctx->vars.uri.len);
 }
 
@@ -1422,7 +1422,7 @@ ngx_http_proxy_create_request(ngx_http_r
 b->last = ngx_copy(b->last, r->unparsed_uri.data, r->unparsed_uri.len);
 
 } else {
-if (r->valid_location) {
+if (r->valid_location && ctx->vars.uri.len) {
 b->last = ngx_copy(b->last, ctx->vars.uri.data, ctx->vars.uri.len);
 }
 
diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c
+++ b/src/http/ngx_http_file_cache.c
@@ -1270,7 +1270,9 @@ ngx_http_file_cache_set_header(ngx_http_
 
 if (c->etag.len <= NGX_HTTP_CACHE_ETAG_LEN) {
 h->etag_len = (u_char) c->etag.len;
-ngx_memcpy(h->etag, c->etag.data, c->etag.len);
+if (c->etag.len) {
+ngx_memcpy(h->etag, c->etag.data, c->etag.len);
+}
 }
 
 if (c->vary.len) {
diff --git a/src/http/ngx_http_variables.c b/src/http/ngx_http_variables.c
--- a/src/http/ngx_http_variables.c
+++ b/src/http/ngx_http_variables.c
@@ -2157,6 +2157,9 @@ ngx_http_variable_request_body(ngx_http_
 
 for ( /* void */ ; cl; cl = cl->next) {
 buf = cl->buf;
+if (buf->last == buf->pos) {
+continue;
+}
 p = ngx_cpymem(p, buf->pos, buf->last - buf->pos);
 }
 
diff --git a/src/mail/ngx_mail_auth_http_module.c b/src/mail/ngx_mail_auth_http_module.c
--- a/src/mail/ngx_mail_auth_http_module.c
+++ b/src/mail/ngx_mail_auth_http_module.c
@@ -1314,11 +1314,15 @@ ngx_mail_auth_http_create_request(ngx_ma
 *b->last++ = CR; 

[PATCH 0 of 2] [patch] some issues found by gcc undef sanitizer

2023-10-27 Thread Vladimir Homutov via nginx-devel

Hello,

Below are two patches, created by results of running nginx-tests with
GCC undefined behaviour sanitizer enabled.

The first one is about memcpy() with NULL second argument calls, which
are considere undefined behaviour by sanitizer.  While the actual harm
is arguable, having such calls is not a good practice.

Most of them are results of passing empty ngx_str_t, either for logging
or in some other cases.

I've decided to test arguments in ngx_resolver_dup() as it seems that
adding checks to the calling code will introduce to much changes. YMMV.

In ngx_http_variables_request_body() all buffers are copied to output,
which may include special. Probably the check must be ngx_buf_special() ?

Other cases are obvious checks that allow to skip copy if there is
nothing to do actually.


___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [patch] quic PTO counter fixes

2023-10-26 Thread Vladimir Homutov via nginx-devel
On Fri, Oct 27, 2023 at 12:27:22AM +0400, Sergey Kandaurov wrote:
> On Thu, Oct 26, 2023 at 05:20:39PM +0300, Vladimir Homutov wrote:
> > On Thu, Oct 26, 2023 at 03:08:55AM +0400, Sergey Kandaurov wrote:
> > > On Wed, Oct 11, 2023 at 04:58:47PM +0300, Vladimir Homutov via 
> > > nginx-devel wrote:
> > [..]
> >
> > > > diff --git a/src/event/quic/ngx_event_quic_output.c 
> > > > b/src/event/quic/ngx_event_quic_output.c
> > > > --- a/src/event/quic/ngx_event_quic_output.c
> > > > +++ b/src/event/quic/ngx_event_quic_output.c
> > > > @@ -563,8 +563,6 @@ ngx_quic_output_packet(ngx_connection_t
> > > >  pkt.need_ack = 1;
> > > >  }
> > > >
> > > > -ngx_quic_log_frame(c->log, f, 1);
> > > > -
> > > >  flen = ngx_quic_create_frame(p, f);
> > > >  if (flen == -1) {
> > > >  return NGX_ERROR;
> > > > @@ -578,6 +576,8 @@ ngx_quic_output_packet(ngx_connection_t
> > > >  f->last = now;
> > > >  f->plen = 0;
> > > >
> > > > +ngx_quic_log_frame(c->log, f, 1);
> > > > +
> > > >  nframes++;
> > >
> > > I'd rather move setting frame fields before calling
> > > ngx_quic_log_frame()/ngx_quic_create_frame()
> > > to preserve consistency with other places, i.e.:
> > > - set fields
> > > - log frame
> > > - create frame
> > >
> > > To look as follows:
> > >
> > > :f->pnum = ctx->pnum;
> > > :f->first = now;
> > > :f->last = now;
> > > :f->plen = 0;
> > > :
> > > :ngx_quic_log_frame(c->log, f, 1);
> > > :
> > > :flen = ngx_quic_create_frame(p, f);
> > > :
> >
> > agreed
> >
> > > >  }
> > > >
> > > > @@ -925,6 +925,13 @@ ngx_quic_send_early_cc(ngx_connection_t
> > > >
> > > >  res.data = dst;
> > > >
> > > > +ngx_log_debug7(NGX_LOG_DEBUG_EVENT, c->log, 0,
> > > > +   "quic packet tx %s bytes:%ui need_ack:%d"
> > > > +   " number:%L encoded nl:%d trunc:0x%xD frame:%ui]",
> > >
> > > typo: closing square bracket
> >
> > thanks, removed
> >
> > > Not sure we need logging for a (particular) frame in packet logging,
> > > not to say that it looks like a layering violation.
> > > Anyway, it is logged nearby, for example:
> > >
> > >  quic frame tx init:0 CONNECTION_CLOSE err:11 invalid address validation 
> > > token ft:0
> > >  quic packet tx init bytes:36 need_ack:0 number:0 encoded nl:1 trunc:0x0
> > >
> > > So I'd remove this part.
> >
> > agreed, frame logging removed
> >
> > > > +   ngx_quic_level_name(pkt.level), pkt.payload.len,
> > > > +   pkt.need_ack, pkt.number, pkt.num_len, pkt.trunc,
> > > > +   frame->type);
> > > > +
> > >
> > > BTW, it would make sense to get a new macro / inline function
> > > for packet tx logging, similar to ngx_quic_log_frame(),
> > > since we will have three places with identical ngx_log_debug7().
> >
> > actually, four (we have also retry), so having a macro is a good idea
> >
> > updated patch attached
>
> Well, I don't think retry needs logging, because this is not a real
> packet, it carries a token and is used to construct a Retry packet
> (which is also a special packet) later in ngx_quic_encrypt().
> Logging such a construct is bogus, because nearly all fields aren't
> initialized to sensible values, personally I've got the following:
>
>  quic packet tx init bytes:0 need_ack:0 number:0 encoded nl:0 trunc:0x0

yes, this makes sense, removed.

# HG changeset patch
# User Vladimir Khomutov 
# Date 1698352509 -10800
#  Thu Oct 26 23:35:09 2023 +0300
# Node ID d62960a9e75f07a1d260cf7aaad965f56a9520c2
# Parent  25a2efd97a3e21d106ce4547a763b77eb9c732ad
QUIC: improved packet and frames debug tracing.

Currently, packets generated by ngx_quic_frame_sendto() and
ngx_quic_send_early_cc() are not logged, thus making it hard
to read logs due to gaps appearing in packet numbers sequence.

At frames level, it is handy to see immediately packet number
in which they arrived or being sent.

diff --git a/src/event/quic/ngx_event_quic_frames.c b/src/event/quic/ngx_event_quic_frames.

Re: [patch] quic PTO counter fixes

2023-10-26 Thread Vladimir Homutov via nginx-devel
On Thu, Oct 26, 2023 at 03:08:55AM +0400, Sergey Kandaurov wrote:
> On Wed, Oct 11, 2023 at 04:58:47PM +0300, Vladimir Homutov via nginx-devel 
> wrote:
[..]

> > diff --git a/src/event/quic/ngx_event_quic_output.c 
> > b/src/event/quic/ngx_event_quic_output.c
> > --- a/src/event/quic/ngx_event_quic_output.c
> > +++ b/src/event/quic/ngx_event_quic_output.c
> > @@ -563,8 +563,6 @@ ngx_quic_output_packet(ngx_connection_t
> >  pkt.need_ack = 1;
> >  }
> >
> > -ngx_quic_log_frame(c->log, f, 1);
> > -
> >  flen = ngx_quic_create_frame(p, f);
> >  if (flen == -1) {
> >  return NGX_ERROR;
> > @@ -578,6 +576,8 @@ ngx_quic_output_packet(ngx_connection_t
> >  f->last = now;
> >  f->plen = 0;
> >
> > +ngx_quic_log_frame(c->log, f, 1);
> > +
> >  nframes++;
>
> I'd rather move setting frame fields before calling
> ngx_quic_log_frame()/ngx_quic_create_frame()
> to preserve consistency with other places, i.e.:
> - set fields
> - log frame
> - create frame
>
> To look as follows:
>
> :f->pnum = ctx->pnum;
> :f->first = now;
> :f->last = now;
> :f->plen = 0;
> :
> :ngx_quic_log_frame(c->log, f, 1);
> :
> :flen = ngx_quic_create_frame(p, f);
> :

agreed

> >  }
> >
> > @@ -925,6 +925,13 @@ ngx_quic_send_early_cc(ngx_connection_t
> >
> >  res.data = dst;
> >
> > +ngx_log_debug7(NGX_LOG_DEBUG_EVENT, c->log, 0,
> > +   "quic packet tx %s bytes:%ui need_ack:%d"
> > +   " number:%L encoded nl:%d trunc:0x%xD frame:%ui]",
>
> typo: closing square bracket

thanks, removed

> Not sure we need logging for a (particular) frame in packet logging,
> not to say that it looks like a layering violation.
> Anyway, it is logged nearby, for example:
>
>  quic frame tx init:0 CONNECTION_CLOSE err:11 invalid address validation 
> token ft:0
>  quic packet tx init bytes:36 need_ack:0 number:0 encoded nl:1 trunc:0x0
>
> So I'd remove this part.

agreed, frame logging removed

> > +   ngx_quic_level_name(pkt.level), pkt.payload.len,
> > +   pkt.need_ack, pkt.number, pkt.num_len, pkt.trunc,
> > +   frame->type);
> > +
>
> BTW, it would make sense to get a new macro / inline function
> for packet tx logging, similar to ngx_quic_log_frame(),
> since we will have three places with identical ngx_log_debug7().

actually, four (we have also retry), so having a macro is a good idea

updated patch attached
# HG changeset patch
# User Vladimir Khomutov 
# Date 1698329226 -10800
#  Thu Oct 26 17:07:06 2023 +0300
# Node ID b8cdb9518f877fb3ed6386731df1e263eeae8e7c
# Parent  25a2efd97a3e21d106ce4547a763b77eb9c732ad
QUIC: improved packet and frames debug tracing.

Currently, packets generated by ngx_quic_frame_sendto() and
ngx_quic_send_early_cc() are not logged, thus making it hard
to read logs due to gaps appearing in packet numbers sequence.

At frames level, it is handy to see immediately packet number
in which they arrived or being sent.

diff --git a/src/event/quic/ngx_event_quic_frames.c b/src/event/quic/ngx_event_quic_frames.c
--- a/src/event/quic/ngx_event_quic_frames.c
+++ b/src/event/quic/ngx_event_quic_frames.c
@@ -886,8 +886,8 @@ ngx_quic_log_frame(ngx_log_t *log, ngx_q
 break;
 }
 
-ngx_log_debug4(NGX_LOG_DEBUG_EVENT, log, 0, "quic frame %s %s %*s",
-   tx ? "tx" : "rx", ngx_quic_level_name(f->level),
+ngx_log_debug5(NGX_LOG_DEBUG_EVENT, log, 0, "quic frame %s %s:%ui %*s",
+   tx ? "tx" : "rx", ngx_quic_level_name(f->level), f->pnum,
p - buf, buf);
 }
 
diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c
--- a/src/event/quic/ngx_event_quic_output.c
+++ b/src/event/quic/ngx_event_quic_output.c
@@ -35,6 +35,15 @@
 #define NGX_QUIC_SOCKET_RETRY_DELAY  10 /* ms, for NGX_AGAIN on write */
 
 
+#define ngx_quic_log_packet(log, pkt) \
+ngx_log_debug6(NGX_LOG_DEBUG_EVENT, log, 0,   \
+   "quic packet tx %s bytes:%ui need_ack:%d"  \
+   " number:%L encoded nl:%d trunc:0x%xD",\
+   ngx_quic_level_name((pkt)->level), (pkt)->payload.len, \
+   (pkt)->need_ack, (pkt)->number, (pkt)->num_len,\
+(pkt)->trunc);
+
+
 static ngx_int_t ngx_quic_

[patch] quic PTO counter fixes

2023-10-11 Thread Vladimir Homutov via nginx-devel
Hello,

a couple of patches in the quic code:

first patch improves a bit debugging, and the second patch contains
fixes for PTO counter calculation - see commit log for details.

This helps with some clients in interop handshakeloss/handshakecorruption
testcases


# HG changeset patch
# User Vladimir Khomutov 
# Date 1697031939 -10800
#  Wed Oct 11 16:45:39 2023 +0300
# Node ID 1f188102fbd944df797e8710f70cccee76164add
# Parent  cdda286c0f1b4b10f30d4eb6a63fefb9b8708ecc
QUIC: improved packet and frames debug tracing.

Currently, packets generated by ngx_quic_frame_sendto() and
ngx_quic_send_early_cc() are not logged, thus making it hard
to read logs due to gaps appearing in packet numbers sequence.
For such special packets, a frame type being sent is also output.

At frames level, it is handy to see immediately packet number
in which they arrived or being sent.

diff --git a/src/event/quic/ngx_event_quic_frames.c b/src/event/quic/ngx_event_quic_frames.c
--- a/src/event/quic/ngx_event_quic_frames.c
+++ b/src/event/quic/ngx_event_quic_frames.c
@@ -886,8 +886,8 @@ ngx_quic_log_frame(ngx_log_t *log, ngx_q
 break;
 }
 
-ngx_log_debug4(NGX_LOG_DEBUG_EVENT, log, 0, "quic frame %s %s %*s",
-   tx ? "tx" : "rx", ngx_quic_level_name(f->level),
+ngx_log_debug5(NGX_LOG_DEBUG_EVENT, log, 0, "quic frame %s %s:%ui %*s",
+   tx ? "tx" : "rx", ngx_quic_level_name(f->level), f->pnum,
p - buf, buf);
 }
 
diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c
--- a/src/event/quic/ngx_event_quic_output.c
+++ b/src/event/quic/ngx_event_quic_output.c
@@ -563,8 +563,6 @@ ngx_quic_output_packet(ngx_connection_t 
 pkt.need_ack = 1;
 }
 
-ngx_quic_log_frame(c->log, f, 1);
-
 flen = ngx_quic_create_frame(p, f);
 if (flen == -1) {
 return NGX_ERROR;
@@ -578,6 +576,8 @@ ngx_quic_output_packet(ngx_connection_t 
 f->last = now;
 f->plen = 0;
 
+ngx_quic_log_frame(c->log, f, 1);
+
 nframes++;
 }
 
@@ -925,6 +925,13 @@ ngx_quic_send_early_cc(ngx_connection_t 
 
 res.data = dst;
 
+ngx_log_debug7(NGX_LOG_DEBUG_EVENT, c->log, 0,
+   "quic packet tx %s bytes:%ui need_ack:%d"
+   " number:%L encoded nl:%d trunc:0x%xD frame:%ui]",
+   ngx_quic_level_name(pkt.level), pkt.payload.len,
+   pkt.need_ack, pkt.number, pkt.num_len, pkt.trunc,
+   frame.type);
+
 if (ngx_quic_encrypt(, ) != NGX_OK) {
 return NGX_ERROR;
 }
@@ -1179,6 +1186,10 @@ ngx_quic_frame_sendto(ngx_connection_t *
 pad = 4 - pkt.num_len;
 min_payload = ngx_max(min_payload, pad);
 
+#if (NGX_DEBUG)
+frame->pnum = pkt.number;
+#endif
+
 len = ngx_quic_create_frame(NULL, frame);
 if (len > NGX_QUIC_MAX_UDP_PAYLOAD_SIZE) {
 return NGX_ERROR;
@@ -1201,6 +1212,13 @@ ngx_quic_frame_sendto(ngx_connection_t *
 
 res.data = dst;
 
+ngx_log_debug7(NGX_LOG_DEBUG_EVENT, c->log, 0,
+   "quic packet tx %s bytes:%ui need_ack:%d"
+   " number:%L encoded nl:%d trunc:0x%xD frame:%ui",
+   ngx_quic_level_name(pkt.level), pkt.payload.len,
+   pkt.need_ack, pkt.number, pkt.num_len, pkt.trunc,
+   frame->type);
+
 if (ngx_quic_encrypt(, ) != NGX_OK) {
 return NGX_ERROR;
 }
diff --git a/src/event/quic/ngx_event_quic_transport.c b/src/event/quic/ngx_event_quic_transport.c
--- a/src/event/quic/ngx_event_quic_transport.c
+++ b/src/event/quic/ngx_event_quic_transport.c
@@ -1135,6 +1135,9 @@ ngx_quic_parse_frame(ngx_quic_header_t *
 }
 
 f->level = pkt->level;
+#if (NGX_DEBUG)
+f->pnum = pkt->pn;
+#endif
 
 return p - start;
 
# HG changeset patch
# User Vladimir Khomutov 
# Date 1697031803 -10800
#  Wed Oct 11 16:43:23 2023 +0300
# Node ID 9ba2840e88f62343b3bd794e43900781dab43686
# Parent  1f188102fbd944df797e8710f70cccee76164add
QUIC: fixed handling of PTO counter.

The RFC 9002 clearly says in "6.2. Probe Timeout":
...
As with loss detection, the PTO is per packet number space.
That is, a PTO value is computed per packet number space.

Despite that, current code is using per-connection PTO counter.
For example, this may lead to situation when packet loss at handshake
level will affect PTO calculation for initial packets, preventing
send of new probes.

Additionally, one case of successful ACK receiving was missing:
PING frames are not stored in the ctx->sent queue, thus PTO was not
reset when corresponding packets were acknowledged.

diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c
--- a/src/event/quic/ngx_event_quic.c
+++ b/src/event/quic/ngx_event_quic.c
@@ -1088,8 +1088,6 @@ ngx_quic_discard_ctx(ngx_connection_t *c
 
 ngx_quic_keys_discard(qc->keys, level);
 
-qc->pto_count = 0;
-
 ctx = 

Re: [PATCH] QUIC openssl compat mode error handling

2023-09-22 Thread Vladimir Homutov via nginx-devel
On Fri, Sep 22, 2023 at 07:30:50PM +0400, Roman Arutyunyan wrote:
> Hi Vladimir,
>
> On Fri, Sep 22, 2023 at 03:44:08PM +0300, Vladimir Homutov via nginx-devel 
> wrote:
> > # HG changeset patch
> > # User Vladimir Khomutov 
> > # Date 1695386443 -10800
> > #  Fri Sep 22 15:40:43 2023 +0300
> > # Node ID 974ba23e68909ba708616410aa77074213d4d1e5
> > # Parent  5741eddf82e826766cd0f5ec7c6fe383145ca581
> > QUIC: handle add_handhshake_data() callback errors in compat.
> >
> > The error may be triggered by incorrect transport parameter sent by client.
> > The expected behaviour in this case is to close connection complaining
> > about incorrect parameter.  Currently the connection just times out.
> >
> > diff --git a/src/event/quic/ngx_event_quic_openssl_compat.c 
> > b/src/event/quic/ngx_event_quic_openssl_compat.c
> > --- a/src/event/quic/ngx_event_quic_openssl_compat.c
> > +++ b/src/event/quic/ngx_event_quic_openssl_compat.c
> > @@ -408,7 +408,10 @@ ngx_quic_compat_message_callback(int wri
> > "quic compat tx %s len:%uz ",
> > ngx_quic_level_name(level), len);
> >
> > -(void) com->method->add_handshake_data(ssl, level, buf, len);
> > +if (com->method->add_handshake_data(ssl, level, buf, len) != 1) {
> > +ngx_post_event(>close, _posted_events);
> > +return;
> > +}
> >
> >  break;
>
> Thanks for the patch.  Indeed, it's a simple way to handle errors in 
> callbacks.
> I'd also handle the error in send_alert(), even though we don't generate any
> errors in it now.

Yes, although I was not sure if we need to close connection if we failed
to send alert (but probably if we are sending it, everything is already
bad enough). In either case, handling both cases similarly looks
as a way to go.

>
> --
> Roman Arutyunyan

> # HG changeset patch
> # User Vladimir Khomutov 
> # Date 1695396237 -14400
> #  Fri Sep 22 19:23:57 2023 +0400
> # Node ID 3db945fda515014d220151046d02f3960bcfca0a
> # Parent  32b5aaebcca51854de6e1f8a40798edb13662edb
> QUIC: handle callback errors in compat.
>
> The error may be triggered in add_handhshake_data() by incorrect transport
> parameter sent by client.  The expected behaviour in this case is to close
> connection complaining about incorrect parameter.  Currently the connection
> just times out.
>
> diff --git a/src/event/quic/ngx_event_quic_openssl_compat.c 
> b/src/event/quic/ngx_event_quic_openssl_compat.c
> --- a/src/event/quic/ngx_event_quic_openssl_compat.c
> +++ b/src/event/quic/ngx_event_quic_openssl_compat.c
> @@ -408,7 +408,9 @@ ngx_quic_compat_message_callback(int wri
> "quic compat tx %s len:%uz ",
> ngx_quic_level_name(level), len);
>
> -(void) com->method->add_handshake_data(ssl, level, buf, len);
> +if (com->method->add_handshake_data(ssl, level, buf, len) != 1) {
> +goto failed;
> +}
>
>  break;
>
> @@ -420,11 +422,19 @@ ngx_quic_compat_message_callback(int wri
> "quic compat %s alert:%ui len:%uz ",
> ngx_quic_level_name(level), alert, len);
>
> -(void) com->method->send_alert(ssl, level, alert);
> +if (com->method->send_alert(ssl, level, alert) != 1) {
> +goto failed;
> +}
>  }
>
>  break;
>  }
> +
> +return;
> +
> +failed:
> +
> +ngx_post_event(>close, _posted_events);
>  }
>

Looks good!


___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH] QUIC openssl compat mode error handling

2023-09-22 Thread Vladimir Homutov via nginx-devel
# HG changeset patch
# User Vladimir Khomutov 
# Date 1695386443 -10800
#  Fri Sep 22 15:40:43 2023 +0300
# Node ID 974ba23e68909ba708616410aa77074213d4d1e5
# Parent  5741eddf82e826766cd0f5ec7c6fe383145ca581
QUIC: handle add_handhshake_data() callback errors in compat.

The error may be triggered by incorrect transport parameter sent by client.
The expected behaviour in this case is to close connection complaining
about incorrect parameter.  Currently the connection just times out.

diff --git a/src/event/quic/ngx_event_quic_openssl_compat.c 
b/src/event/quic/ngx_event_quic_openssl_compat.c
--- a/src/event/quic/ngx_event_quic_openssl_compat.c
+++ b/src/event/quic/ngx_event_quic_openssl_compat.c
@@ -408,7 +408,10 @@ ngx_quic_compat_message_callback(int wri
"quic compat tx %s len:%uz ",
ngx_quic_level_name(level), len);

-(void) com->method->add_handshake_data(ssl, level, buf, len);
+if (com->method->add_handshake_data(ssl, level, buf, len) != 1) {
+ngx_post_event(>close, _posted_events);
+return;
+}

 break;
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH 2 of 4] QUIC: always add ACK frame to the queue head

2023-08-10 Thread Vladimir Homutov via nginx-devel
On Thu, Aug 10, 2023 at 08:02:06PM +0400, Sergey Kandaurov wrote:
>
> > On 27 Jul 2023, at 16:42, Roman Arutyunyan  wrote:
> >
> > # HG changeset patch
> > # User Roman Arutyunyan 
> > # Date 1690461509 -14400
> > #  Thu Jul 27 16:38:29 2023 +0400
> > # Node ID 0d12ada84c168c62e9bae847af2725641da583d0
> > # Parent  2fd16fc76920ef0b8ea2fa64858934e38c4477c5
> > QUIC: always add ACK frame to the queue head.
> >
> > Previously it was added to the tail as all other frames.  However, if the
> > amount of queued data is large, it could delay the delivery of ACK, which
> > could trigger frames retransmissions and slow down the connection.
> >
> > diff --git a/src/event/quic/ngx_event_quic_output.c 
> > b/src/event/quic/ngx_event_quic_output.c
> > --- a/src/event/quic/ngx_event_quic_output.c
> > +++ b/src/event/quic/ngx_event_quic_output.c
> > @@ -1175,7 +1175,9 @@ ngx_quic_send_ack(ngx_connection_t *c, n
> > frame->u.ack.range_count = ctx->nranges;
> > frame->u.ack.first_range = ctx->first_range;
> >
> > -ngx_quic_queue_frame(qc, frame);
> > +ngx_queue_insert_head(>frames, >queue);
> > +
> > +frame->len = ngx_quic_create_frame(NULL, frame);
> >
> > return NGX_OK;
> > }
>
> place frame->len first, to other frame assignments?
>
> Otherwise, looks good.

The ngx_quic_queue_frame() functions also posts the push event.

Most likely it will be set anyway, but formally the change missing it.
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Enable QUIC with Tongsuo SSL library

2023-05-26 Thread Vladimir Homutov via nginx-devel

# HG changeset patch
# User Vladimir Khomutov 
# Date 1677761453 -10800
#  Thu Mar 02 15:50:53 2023 +0300
# Node ID 348772f63be2b77a893b8d101c6b6905382a5735
# Parent  8eae1b4f1c5528b063351804168a6085f5f50b42
QUIC: added support for the Tongsuo SSL library.

For the needs of QUIC, this is basically openssl-1.1.1h with
BoringSSL-compatible QUIC support.

The library was developed by AliBaba and was previously called BabaSSL,
thus macro names [1].

[1] https://github.com/Tongsuo-Project/Tongsuo

diff --git a/src/event/quic/ngx_event_quic_ssl.c b/src/event/quic/ngx_event_quic_ssl.c
--- a/src/event/quic/ngx_event_quic_ssl.c
+++ b/src/event/quic/ngx_event_quic_ssl.c
@@ -12,6 +12,7 @@
 
 #if defined OPENSSL_IS_BORINGSSL  \
 || defined LIBRESSL_VERSION_NUMBER\
+|| defined BABASSL_VERSION_NUMBER \
 || NGX_QUIC_OPENSSL_COMPAT
 #define NGX_QUIC_BORINGSSL_API   1
 #endif
___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: QUIC: position of RTT and congestion

2022-11-30 Thread Vladimir Homutov via nginx-devel
On Wed, Nov 30, 2022 at 08:10:29PM +0800, Yu Zhu wrote:
>
> Hi,
>
> As described in "rfc 9002 6. Loss Detection",  "RTT and congestion
> control are properties of the path", so moves first_rtt,
> latest_rtt, avg_rtt, min_rtt, rttvar and congestion from
> ngx_quic_connection_t to struct ngx_quic_path_t looks more
> reasonable?

yes, you are right.

Currently per-path calculations are not implemented, as well as path mtu
discovery and some other things.

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [patch] ngx_cpp_test module build issue cleanup

2022-11-24 Thread Vladimir Homutov via nginx-devel
On Thu, Nov 24, 2022 at 06:46:15PM +0300, Maxim Dounin wrote:
> Hello!
>
> On Thu, Nov 24, 2022 at 02:31:33PM +0300, Vladimir Homutov via nginx-devel 
> wrote:
>
> > On Thu, Nov 24, 2022 at 01:25:30PM +0400, Sergey Kandaurov wrote:
> > >
> > > > On 23 Nov 2022, at 21:50, Vladimir Homutov via nginx-devel 
> > > >  wrote:
> > > >
> > > > Hello,
> > > >
> > > > the simplest ./configure --with-cpp_test_module leads to build error
> > > > after successful configuration:
> > > >
> > > > src/misc/ngx_cpp_test_module.cpp:13:12: fatal error: ngx_mail.h: No 
> > > > such file or directory
> > > >   13 |   #include 
> > > >  |^~~~
> > > > compilation terminated.
> > > >
> > > >
> > > > # HG changeset patch
> > > > # User Vladimir Khomutov 
> > > > # Date 1669225034 -10800
> > > > #  Wed Nov 23 20:37:14 2022 +0300
> > > > # Node ID 6237563c81707c8c2453cb0a7509ddaf64c02f4e
> > > > # Parent  49e7db44b57c9f4d54b87d19a696178b913aec5c
> > > > The ngx_cpp_test_module build requires mail and stream.
> > > >
> > > > # HG changeset patch
> > > > # User Vladimir Khomutov 
> > > > # Date 1669225742 -10800
> > > > #  Wed Nov 23 20:49:02 2022 +0300
> > > > # Node ID 12c04127e3fe4d6aa689ef3bcf3ae0834e7e9ed5
> > > > # Parent  b809f53d3f5bd04df36ac338845289d8e60a888b
> > > > The ngx_cpp_test_module build requires mail and stream.
> > > >
> > > > diff --git a/auto/modules b/auto/modules
> > > > --- a/auto/modules
> > > > +++ b/auto/modules
> > > > @@ -1358,6 +1358,17 @@ if [ $NGX_GOOGLE_PERFTOOLS = YES ]; then
> > > > fi
> > > >
> > > > if [ $NGX_CPP_TEST = YES ]; then
> > > > +
> > > > +if [ $MAIL = NO ]; then
> > > > +echo "$0: error: ngx_cpp_test_module assumes \"--with-mail\""
> > > > +exit 1
> > > > +fi
> > > > +
> > > > +if [ $STREAM = NO ]; then
> > > > +echo "$0: error: ngx_cpp_test_module assumes \"--with-stream\""
> > > > +exit 1
> > > > +fi
> > > > +
> > > > ngx_module_name=
> > > > ngx_module_incs=
> > > > ngx_module_deps=
> > > >
> > >
> > > Hello,
> > >
> > > if at all try to fix it,
> > > --without-http would also need to be addressed.
> >
> > yes, you are right. missed that since it is enabled by default.
> >
> > A bit shorter patch:
> >
> > # HG changeset patch
> > # User Vladimir Khomutov 
> > # Date 1669289342 -10800
> > #  Thu Nov 24 14:29:02 2022 +0300
> > # Node ID fd671044ba73ab8a32e558ba9d4dbe718f2b7a54
> > # Parent  b809f53d3f5bd04df36ac338845289d8e60a888b
> > The ngx_cpp_test_module build requires http, mail and stream.
> >
> > diff --git a/auto/modules b/auto/modules
> > --- a/auto/modules
> > +++ b/auto/modules
> > @@ -1358,6 +1358,12 @@ if [ $NGX_GOOGLE_PERFTOOLS = YES ]; then
> >  fi
> >
> >  if [ $NGX_CPP_TEST = YES ]; then
> > +
> > +if [ $HTTP = NO -o $MAIL = NO -o $STREAM = NO ]; then
> > +echo "$0: error: ngx_cpp_test_module requires http, mail and 
> > stream"
> > +exit 1
> > +fi
> > +
> >  ngx_module_name=
> >  ngx_module_incs=
> >  ngx_module_deps=
>
> Following other configure error messages (see auto/lib/zlib/conf,
> auto/lib/pcre/conf, or auto/lib/google-perftools/conf), this
> should be "the C++ test module ...", if at all.
>
> Also, it probably should be in auto/options, since it is a
> verification of configure options, while the auto/modules is
> expected to construct various internal lists based on the modules
> being enabled.
>
> Overall, I'm not convinced it is actually needed, since the module
> is basically a development tool, and not expected to be used by
> users who are not aware of how it is expected to be used, why the
> compilation could fail if mail or stream aren't compiled in, and
> how to fix it even without reconfiguring nginx with mail and
> stream.
>

I tend to agree that it's not worth effort.
Probably we need to have macro defined depending on presence of
HTTP/MAIL/STREAM subsystems, and use them in source code.
But I can hardly imaging code that needs it.
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [patch] ngx_cpp_test module build issue cleanup

2022-11-24 Thread Vladimir Homutov via nginx-devel
On Thu, Nov 24, 2022 at 01:25:30PM +0400, Sergey Kandaurov wrote:
>
> > On 23 Nov 2022, at 21:50, Vladimir Homutov via nginx-devel 
> >  wrote:
> >
> > Hello,
> >
> > the simplest ./configure --with-cpp_test_module leads to build error
> > after successful configuration:
> >
> > src/misc/ngx_cpp_test_module.cpp:13:12: fatal error: ngx_mail.h: No such 
> > file or directory
> >   13 |   #include 
> >  |^~~~
> > compilation terminated.
> >
> >
> > # HG changeset patch
> > # User Vladimir Khomutov 
> > # Date 1669225034 -10800
> > #  Wed Nov 23 20:37:14 2022 +0300
> > # Node ID 6237563c81707c8c2453cb0a7509ddaf64c02f4e
> > # Parent  49e7db44b57c9f4d54b87d19a696178b913aec5c
> > The ngx_cpp_test_module build requires mail and stream.
> >
> > # HG changeset patch
> > # User Vladimir Khomutov 
> > # Date 1669225742 -10800
> > #  Wed Nov 23 20:49:02 2022 +0300
> > # Node ID 12c04127e3fe4d6aa689ef3bcf3ae0834e7e9ed5
> > # Parent  b809f53d3f5bd04df36ac338845289d8e60a888b
> > The ngx_cpp_test_module build requires mail and stream.
> >
> > diff --git a/auto/modules b/auto/modules
> > --- a/auto/modules
> > +++ b/auto/modules
> > @@ -1358,6 +1358,17 @@ if [ $NGX_GOOGLE_PERFTOOLS = YES ]; then
> > fi
> >
> > if [ $NGX_CPP_TEST = YES ]; then
> > +
> > +if [ $MAIL = NO ]; then
> > +echo "$0: error: ngx_cpp_test_module assumes \"--with-mail\""
> > +exit 1
> > +fi
> > +
> > +if [ $STREAM = NO ]; then
> > +echo "$0: error: ngx_cpp_test_module assumes \"--with-stream\""
> > +exit 1
> > +fi
> > +
> > ngx_module_name=
> > ngx_module_incs=
> > ngx_module_deps=
> >
>
> Hello,
>
> if at all try to fix it,
> --without-http would also need to be addressed.

yes, you are right. missed that since it is enabled by default.

A bit shorter patch:

# HG changeset patch
# User Vladimir Khomutov 
# Date 1669289342 -10800
#  Thu Nov 24 14:29:02 2022 +0300
# Node ID fd671044ba73ab8a32e558ba9d4dbe718f2b7a54
# Parent  b809f53d3f5bd04df36ac338845289d8e60a888b
The ngx_cpp_test_module build requires http, mail and stream.

diff --git a/auto/modules b/auto/modules
--- a/auto/modules
+++ b/auto/modules
@@ -1358,6 +1358,12 @@ if [ $NGX_GOOGLE_PERFTOOLS = YES ]; then
 fi

 if [ $NGX_CPP_TEST = YES ]; then
+
+if [ $HTTP = NO -o $MAIL = NO -o $STREAM = NO ]; then
+echo "$0: error: ngx_cpp_test_module requires http, mail and stream"
+exit 1
+fi
+
 ngx_module_name=
 ngx_module_incs=
 ngx_module_deps=
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


[patch] ngx_cpp_test module build issue cleanup

2022-11-23 Thread Vladimir Homutov via nginx-devel
Hello,

the simplest ./configure --with-cpp_test_module leads to build error
after successful configuration:

src/misc/ngx_cpp_test_module.cpp:13:12: fatal error: ngx_mail.h: No such file 
or directory
   13 |   #include 
  |^~~~
compilation terminated.


# HG changeset patch
# User Vladimir Khomutov 
# Date 1669225034 -10800
#  Wed Nov 23 20:37:14 2022 +0300
# Node ID 6237563c81707c8c2453cb0a7509ddaf64c02f4e
# Parent  49e7db44b57c9f4d54b87d19a696178b913aec5c
The ngx_cpp_test_module build requires mail and stream.

# HG changeset patch
# User Vladimir Khomutov 
# Date 1669225742 -10800
#  Wed Nov 23 20:49:02 2022 +0300
# Node ID 12c04127e3fe4d6aa689ef3bcf3ae0834e7e9ed5
# Parent  b809f53d3f5bd04df36ac338845289d8e60a888b
The ngx_cpp_test_module build requires mail and stream.

diff --git a/auto/modules b/auto/modules
--- a/auto/modules
+++ b/auto/modules
@@ -1358,6 +1358,17 @@ if [ $NGX_GOOGLE_PERFTOOLS = YES ]; then
 fi

 if [ $NGX_CPP_TEST = YES ]; then
+
+if [ $MAIL = NO ]; then
+echo "$0: error: ngx_cpp_test_module assumes \"--with-mail\""
+exit 1
+fi
+
+if [ $STREAM = NO ]; then
+echo "$0: error: ngx_cpp_test_module assumes \"--with-stream\""
+exit 1
+fi
+
 ngx_module_name=
 ngx_module_incs=
 ngx_module_deps=

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: how to avoid new quic connection distributed to old workers when nginx-quic reload

2022-05-30 Thread Vladimir Homutov via nginx-devel
On Mon, May 30, 2022 at 06:55:15PM +0800, 朱宇 wrote:
> Hi,
>
>
> in "src/event/quic/bpf/ngx_quic_reuseport_helper.c",  if can not find socket 
> by dcid (cookie), udp packet will be distributed by kernel.
>
>
> so when nginx-quic reload, how to avoid new quic connecion packet distributed 
> to old workers which result in old worker processes can't exit?
>
>
> thanks

Old (exiting) workers stop accepting new connections and ignores such
packet or replies with 'retry' if configured. The sender assumes that the
packet was lost or gets the 'retry' packet and repeats send. Hopefully,
the next time old worker will already exit and the packet will be
delivered to new worker. This is not ideal, but this is how it works now.

See 
http://hg.nginx.org/nginx-quic/file/quic/src/event/quic/ngx_event_quic.c#l913

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [PATCH 1 of 4] QUIC: fixed-length buffers for secrets

2022-02-22 Thread Vladimir Homutov
On Mon, Feb 21, 2022 at 05:51:42PM +0300, Sergey Kandaurov wrote:
> On Mon, Feb 21, 2022 at 02:10:31PM +0300, Vladimir Homutov wrote:
> > Patch subject is complete summary.
> >
> >
> >  src/event/quic/ngx_event_quic_protection.c |  202 
> > +++-
> >  1 files changed, 105 insertions(+), 97 deletions(-)
> >
> >
>
> > # HG changeset patch
> > # User Vladimir Homutov 
> > # Date 1645440604 -10800
> > #  Mon Feb 21 13:50:04 2022 +0300
> > # Branch quic
> > # Node ID 1a0a12bef7f00b5422d449b2d4642fff39e0a47e
> > # Parent  55b38514729b8f848709b31295e72d6886a7a433
> > QUIC: fixed-length buffers for secrets.
> >
> > diff --git a/src/event/quic/ngx_event_quic_protection.c 
> > b/src/event/quic/ngx_event_quic_protection.c
> > --- a/src/event/quic/ngx_event_quic_protection.c
> > +++ b/src/event/quic/ngx_event_quic_protection.c
> > @@ -17,6 +17,8 @@
> >
> >  #define NGX_QUIC_AES_128_KEY_LEN  16
> >
> > +#define NGX_QUIC_KEY_LEN  32
> > +
> >  #define NGX_AES_128_GCM_SHA2560x1301
> >  #define NGX_AES_256_GCM_SHA3840x1302
> >  #define NGX_CHACHA20_POLY1305_SHA256  0x1303
> > @@ -30,6 +32,27 @@
> >
> >
> >  typedef struct {
> > +size_tlen;
> > +u_chardata[SHA256_DIGEST_LENGTH];
> > +} ngx_quic_okm_t;
> > +
> > +typedef struct {
> > +size_tlen;
> > +u_chardata[NGX_QUIC_KEY_LEN];
> > +} ngx_quic_key_t;
> > +
> > +typedef struct {
> > +size_tlen;
> > +u_chardata[NGX_QUIC_KEY_LEN];
> > +} ngx_quic_hp_t;
> > +
> > +typedef struct {
> > +size_tlen;
> > +u_chardata[NGX_QUIC_IV_LEN];
> > +} ngx_quic_iv_t;
>
> Style: two empty lines between struct declarations.

thanks, fixed this

>
> > +
> > +
> > +typedef struct {
> >  const ngx_quic_cipher_t  *c;
> >  const EVP_CIPHER *hp;
> >  const EVP_MD *d;
> > @@ -37,10 +60,10 @@ typedef struct {
> >
> >
> >  typedef struct ngx_quic_secret_s {
> > -ngx_str_t secret;
> > -ngx_str_t key;
> > -ngx_str_t iv;
> > -ngx_str_t hp;
> > +ngx_quic_okm_tsecret;
> > +ngx_quic_key_tkey;
> > +ngx_quic_iv_t iv;
> > +ngx_quic_hp_t hp;
> >  } ngx_quic_secret_t;
> >
> >
> > @@ -57,6 +80,29 @@ struct ngx_quic_keys_s {
> >  };
> >
> >
> > +typedef struct {
> > +size_tout_len;
> > +u_char   *out;
> > +
> > +size_tprk_len;
> > +const uint8_t*prk;
> > +
> > +size_tlabel_len;
> > +const u_char *label;
> > +
> > +size_tinfo_len;
> > +uint8_t   info[20];
> > +} ngx_quic_hkdf_t;
> > +
> > +#define ngx_quic_hkdf_set(label, out, prk) 
> >\
> > +{  
> >\
> > +(out)->len, (out)->data,   
> >\
> > +(prk)->len, (prk)->data,   
> >\
> > +(sizeof(label) - 1), (u_char *)(label),
> >\
> > +0, { 0 }   
> >\
> > +}
> > +
> > +
> >  static ngx_int_t ngx_hkdf_expand(u_char *out_key, size_t out_len,
> >  const EVP_MD *digest, const u_char *prk, size_t prk_len,
> >  const u_char *info, size_t info_len);
> > @@ -78,8 +124,8 @@ static ngx_int_t ngx_quic_tls_seal(const
> >  ngx_str_t *ad, ngx_log_t *log);
> >  static ngx_int_t ngx_quic_tls_hp(ngx_log_t *log, const EVP_CIPHER *cipher,
> >  ngx_quic_secret_t *s, u_char *out, u_char *in);
> > -static ngx_int_t ngx_quic_hkdf_expand(ngx_pool_t *pool, const EVP_MD 
> > *digest,
> > -ngx_str_t *out, ngx_str_t *label, const uint8_t *prk, size_t prk_len);
> > +static ngx_int_t ngx_quic_hkdf_expand(ngx_quic_hkdf_t *hkdf,
> > +const EVP_MD *digest, ngx_pool_t *pool);
> >
> >  static ngx_int_t ngx_quic_create_packet(ngx_q

Re: Clients fail to connect via HTTP3 over QUIC

2022-02-21 Thread Vladimir Homutov

22.02.2022 00:43, David Hu via nginx-devel пишет:

I have compiled the latest master branch of nginx-quic with these options:

nginx version: nginx/1.21.7 (8861:b5c87e0e57ef)
built with OpenSSL 3.0.1+quic 14 Dec 2021
TLS SNI support enabled
configure arguments: --prefix=/usr/local/nginx --build=8861:b5c87e0e57ef 
--with-debug --with-http_ssl_module --with-http_v2_module 
--with-stream_quic_module --with-http_v3_module 
--with-cc-opt='-I/usr/local/include/openssl -O0 -DNGX_HTTP_V3_HQ=1' 
--with-ld-opt=-L/usr/local/lib64


and OpenSSL version (quictls):
OpenSSL 3.0.1+quic 14 Dec 2021 (Library: OpenSSL 3.0.1+quic 14 Dec 2021)
built on: Sun Feb 20 01:43:12 2022 UTC
platform: linux-x86_64
options:  bn(64,64)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3 
-DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC 
-DOPENSSL_BUILDING_OPENSSL -DNDEBUG -DOPENSSL_TLS_SECURITY_LEVEL=2

OPENSSLDIR: "/usr/local/ssl"
ENGINESDIR: "/usr/local/lib64/engines-81.3"
MODULESDIR: "/usr/local/lib64/ossl-modules"
Seeding source: os-specific
CPUINFO: OPENSSL_ia32cap=0xfff83203078b:0x4219c01ab


And my nginx config file http block looks like this:
[redacted sensitive configs]
http {
     [redacted some configs]
     quic_retry on;
     http3_push on;
     http3_hq on;
}

However clients cannot cannot to my server either through H3 or HQ anymore

Wireshark shows handshake failure
CONNECTION_CLOSE (Transport) Error code: CRYPTO_ERROR (No application 
Protocol)

     Frame Type: CONNECTION_CLOSE (Transport) (0x001c)
     Error code: CRYPTO_ERROR (376)
     TLS Alert Description: No application Protocol (120)
     Frame Type: 0
     Reason phrase Length: 16
     Reason phrase: handshake failed


How am I supposed to solve this?


First, check the logs, the error should be logged. Message supposes your 
client did not send proper protocol (or no ALPN at all). We've recently
removed draft version suppoort 
(http://hg.nginx.org/nginx-quic/rev/d8865baab732), so now only quic v1 
is supported, and "h3" should be used for application protocol. You may 
want to check your configuration for 'Alt-Svc' header.

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


[PATCH 4 of 4] QUIC: avoided pool usage in token calculation

2022-02-21 Thread Vladimir Homutov
Patch subject is complete summary.


 src/event/quic/ngx_event_quic_output.c|  11 +--
 src/event/quic/ngx_event_quic_tokens.c|  22 --
 src/event/quic/ngx_event_quic_tokens.h|  14 +-
 src/event/quic/ngx_event_quic_transport.h |   1 +
 4 files changed, 27 insertions(+), 21 deletions(-)


# HG changeset patch
# User Vladimir Homutov 
# Date 1645440587 -10800
#  Mon Feb 21 13:49:47 2022 +0300
# Branch quic
# Node ID b3fb81ecc3431c4dbf9e849d72d13a84fe02703b
# Parent  dfc2fc335990e05da1a6f087ca75721cbf8c8891
QUIC: avoided pool usage in token calculation.

diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c
--- a/src/event/quic/ngx_event_quic_output.c
+++ b/src/event/quic/ngx_event_quic_output.c
@@ -1009,10 +1009,13 @@ ngx_quic_send_retry(ngx_connection_t *c,
 
 u_char buf[NGX_QUIC_RETRY_BUFFER_SIZE];
 u_char dcid[NGX_QUIC_SERVER_CID_LEN];
+u_char tbuf[NGX_QUIC_TOKEN_BUF_SIZE];
 
 expires = ngx_time() + NGX_QUIC_RETRY_TOKEN_LIFETIME;
 
-if (ngx_quic_new_token(c, c->sockaddr, c->socklen, conf->av_token_key,
+token.data = tbuf;
+
+if (ngx_quic_new_token(c->log, c->sockaddr, c->socklen, conf->av_token_key,
, >dcid, expires, 1)
 != NGX_OK)
 {
@@ -1075,11 +1078,15 @@ ngx_quic_send_new_token(ngx_connection_t
 ngx_quic_frame_t   *frame;
 ngx_quic_connection_t  *qc;
 
+u_char  tbuf[NGX_QUIC_TOKEN_BUF_SIZE];
+
 qc = ngx_quic_get_connection(c);
 
 expires = ngx_time() + NGX_QUIC_NEW_TOKEN_LIFETIME;
 
-if (ngx_quic_new_token(c, path->sockaddr, path->socklen,
+token.data = tbuf;
+
+if (ngx_quic_new_token(c->log, path->sockaddr, path->socklen,
qc->conf->av_token_key, , NULL, expires, 0)
 != NGX_OK)
 {
diff --git a/src/event/quic/ngx_event_quic_tokens.c b/src/event/quic/ngx_event_quic_tokens.c
--- a/src/event/quic/ngx_event_quic_tokens.c
+++ b/src/event/quic/ngx_event_quic_tokens.c
@@ -11,14 +11,6 @@
 #include 
 
 
-#define NGX_QUIC_MAX_TOKEN_SIZE  64
-/* SHA-1(addr)=20 + sizeof(time_t) + retry(1) + odcid.len(1) + odcid */
-
-/* RFC 3602, 2.1 and 2.4 for AES-CBC block size and IV length */
-#define NGX_QUIC_AES_256_CBC_IV_LEN  16
-#define NGX_QUIC_AES_256_CBC_BLOCK_SIZE  16
-
-
 static void ngx_quic_address_hash(struct sockaddr *sockaddr, socklen_t socklen,
 ngx_uint_t no_port, u_char buf[20]);
 
@@ -48,7 +40,7 @@ ngx_quic_new_sr_token(ngx_connection_t *
 
 
 ngx_int_t
-ngx_quic_new_token(ngx_connection_t *c, struct sockaddr *sockaddr,
+ngx_quic_new_token(ngx_log_t *log, struct sockaddr *sockaddr,
 socklen_t socklen, u_char *key, ngx_str_t *token, ngx_str_t *odcid,
 time_t exp, ngx_uint_t is_retry)
 {
@@ -81,10 +73,6 @@ ngx_quic_new_token(ngx_connection_t *c, 
 iv_len = NGX_QUIC_AES_256_CBC_IV_LEN;
 
 token->len = iv_len + len + NGX_QUIC_AES_256_CBC_BLOCK_SIZE;
-token->data = ngx_pnalloc(c->pool, token->len);
-if (token->data == NULL) {
-return NGX_ERROR;
-}
 
 ctx = EVP_CIPHER_CTX_new();
 if (ctx == NULL) {
@@ -119,7 +107,7 @@ ngx_quic_new_token(ngx_connection_t *c, 
 EVP_CIPHER_CTX_free(ctx);
 
 #ifdef NGX_QUIC_DEBUG_PACKETS
-ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0,
+ngx_log_debug2(NGX_LOG_DEBUG_EVENT, log, 0,
"quic new token len:%uz %xV", token->len, token);
 #endif
 
@@ -268,10 +256,8 @@ ngx_quic_validate_token(ngx_connection_t
 
 if (odcid.len) {
 pkt->odcid.len = odcid.len;
-pkt->odcid.data = ngx_pstrdup(c->pool, );
-if (pkt->odcid.data == NULL) {
-return NGX_ERROR;
-}
+pkt->odcid.data = pkt->odcid_data;
+ngx_memcpy(pkt->odcid.data, odcid.data, odcid.len);
 
 } else {
 pkt->odcid = pkt->dcid;
diff --git a/src/event/quic/ngx_event_quic_tokens.h b/src/event/quic/ngx_event_quic_tokens.h
--- a/src/event/quic/ngx_event_quic_tokens.h
+++ b/src/event/quic/ngx_event_quic_tokens.h
@@ -12,9 +12,21 @@
 #include 
 
 
+#define NGX_QUIC_MAX_TOKEN_SIZE  64
+/* SHA-1(addr)=20 + sizeof(time_t) + retry(1) + odcid.len(1) + odcid */
+
+/* RFC 3602, 2.1 and 2.4 for AES-CBC block size and IV length */
+#define NGX_QUIC_AES_256_CBC_IV_LEN  16
+#define NGX_QUIC_AES_256_CBC_BLOCK_SIZE  16
+
+#define NGX_QUIC_TOKEN_BUF_SIZE (NGX_QUIC_AES_256_CBC_IV_LEN  \
++ NGX_QUIC_MAX_TOKEN_SIZE \
++ NGX_QUIC_AES_256_CBC_BLOCK_SIZE)
+
+
 ngx_int_t ngx_quic_new_sr_token(ngx_connection_t *c, ngx_str_t *cid,
 u_char *secret, u_char *token);
-ngx_int_t ngx_quic_new_token(ngx_connection_t *c, struct sockaddr *sockaddr,
+ngx_int_t ngx_quic_new

[PATCH 3 of 4] QUIC: removed ngx_quic_keys_new()

2022-02-21 Thread Vladimir Homutov
The ngx_quic_keys_t structure is now exposed.
This allows to use it in contexts where no pool/connection is available,
i.e. early packet processing.


 src/event/quic/ngx_event_quic.c|   2 +-
 src/event/quic/ngx_event_quic_output.c |   8 ++--
 src/event/quic/ngx_event_quic_protection.c |  53 --
 src/event/quic/ngx_event_quic_protection.h |  48 ++-
 4 files changed, 52 insertions(+), 59 deletions(-)


# HG changeset patch
# User Vladimir Homutov 
# Date 1645440522 -10800
#  Mon Feb 21 13:48:42 2022 +0300
# Branch quic
# Node ID dfc2fc335990e05da1a6f087ca75721cbf8c8891
# Parent  950a45270e862b02f43ed1df02a9146e8686b8e5
QUIC: removed ngx_quic_keys_new().

The ngx_quic_keys_t structure is now exposed.
This allows to use it in contexts where no pool/connection is available,
i.e. early packet processing.

diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c
--- a/src/event/quic/ngx_event_quic.c
+++ b/src/event/quic/ngx_event_quic.c
@@ -238,7 +238,7 @@ ngx_quic_new_connection(ngx_connection_t
 return NULL;
 }
 
-qc->keys = ngx_quic_keys_new(c->pool);
+qc->keys = ngx_pcalloc(c->pool, sizeof(ngx_quic_keys_t));
 if (qc->keys == NULL) {
 return NULL;
 }
diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c
--- a/src/event/quic/ngx_event_quic_output.c
+++ b/src/event/quic/ngx_event_quic_output.c
@@ -928,6 +928,7 @@ ngx_quic_send_early_cc(ngx_connection_t 
 {
 ssize_tlen;
 ngx_str_t  res;
+ngx_quic_keys_tkeys;
 ngx_quic_frame_t   frame;
 ngx_quic_header_t  pkt;
 
@@ -956,10 +957,9 @@ ngx_quic_send_early_cc(ngx_connection_t 
 return NGX_ERROR;
 }
 
-pkt.keys = ngx_quic_keys_new(c->pool);
-if (pkt.keys == NULL) {
-return NGX_ERROR;
-}
+ngx_memzero(, sizeof(ngx_quic_keys_t));
+
+pkt.keys = 
 
 if (ngx_quic_keys_set_initial_secret(pkt.keys, >dcid, c->log)
 != NGX_OK)
diff --git a/src/event/quic/ngx_event_quic_protection.c b/src/event/quic/ngx_event_quic_protection.c
--- a/src/event/quic/ngx_event_quic_protection.c
+++ b/src/event/quic/ngx_event_quic_protection.c
@@ -10,15 +10,11 @@
 #include 
 
 
-/* RFC 5116, 5.1 and RFC 8439, 2.3 for all supported ciphers */
-#define NGX_QUIC_IV_LEN   12
 /* RFC 9001, 5.4.1.  Header Protection Application: 5-byte mask */
 #define NGX_QUIC_HP_LEN   5
 
 #define NGX_QUIC_AES_128_KEY_LEN  16
 
-#define NGX_QUIC_KEY_LEN  32
-
 #define NGX_AES_128_GCM_SHA2560x1301
 #define NGX_AES_256_GCM_SHA3840x1302
 #define NGX_CHACHA20_POLY1305_SHA256  0x1303
@@ -32,54 +28,12 @@
 
 
 typedef struct {
-size_tlen;
-u_chardata[SHA256_DIGEST_LENGTH];
-} ngx_quic_okm_t;
-
-typedef struct {
-size_tlen;
-u_chardata[NGX_QUIC_KEY_LEN];
-} ngx_quic_key_t;
-
-typedef struct {
-size_tlen;
-u_chardata[NGX_QUIC_KEY_LEN];
-} ngx_quic_hp_t;
-
-typedef struct {
-size_tlen;
-u_chardata[NGX_QUIC_IV_LEN];
-} ngx_quic_iv_t;
-
-
-typedef struct {
 const ngx_quic_cipher_t  *c;
 const EVP_CIPHER *hp;
 const EVP_MD *d;
 } ngx_quic_ciphers_t;
 
 
-typedef struct ngx_quic_secret_s {
-ngx_quic_okm_tsecret;
-ngx_quic_key_tkey;
-ngx_quic_iv_t iv;
-ngx_quic_hp_t hp;
-} ngx_quic_secret_t;
-
-
-typedef struct {
-ngx_quic_secret_t client;
-ngx_quic_secret_t server;
-} ngx_quic_secrets_t;
-
-
-struct ngx_quic_keys_s {
-ngx_quic_secrets_tsecrets[NGX_QUIC_ENCRYPTION_LAST];
-ngx_quic_secrets_tnext_key;
-ngx_uint_tcipher;
-};
-
-
 typedef struct {
 size_tout_len;
 u_char   *out;
@@ -731,13 +685,6 @@ ngx_quic_keys_set_encryption_secret(ngx_
 }
 
 
-ngx_quic_keys_t *
-ngx_quic_keys_new(ngx_pool_t *pool)
-{
-return ngx_pcalloc(pool, sizeof(ngx_quic_keys_t));
-}
-
-
 ngx_uint_t
 ngx_quic_keys_available(ngx_quic_keys_t *keys,
 enum ssl_encryption_level_t level)
diff --git a/src/event/quic/ngx_event_quic_protection.h b/src/event/quic/ngx_event_quic_protection.h
--- a/src/event/quic/ngx_event_quic_protection.h
+++ b/src/event/quic/ngx_event_quic_protection.h
@@ -16,8 +16,54 @@
 
 #define NGX_QUIC_ENCRYPTION_LAST  ((ssl_encryption_application) + 1)
 
+/* RFC 5116, 5.1 and RFC 8439, 2.3 for all supported ciphers */
+#define NGX_QUIC_IV_LEN   12
 
-ngx_quic_keys_t *ngx_quic_keys_new(ngx_pool_t *pool);
+#define NGX_QUIC_KEY_LEN  32
+
+
+typedef struct {
+size_tlen;
+u_chardata[SHA256_DIGEST_LENGTH];
+} ngx_quic_okm_t;
+
+type

[PATCH 2 of 4] QUIC: avoided pool usage in ngx_quic_protection.c

2022-02-21 Thread Vladimir Homutov
Patch subject is complete summary.


 src/event/quic/ngx_event_quic.c|   2 +-
 src/event/quic/ngx_event_quic_output.c |   2 +-
 src/event/quic/ngx_event_quic_protection.c |  37 -
 src/event/quic/ngx_event_quic_protection.h |   6 ++--
 src/event/quic/ngx_event_quic_ssl.c|   8 +++---
 5 files changed, 24 insertions(+), 31 deletions(-)


# HG changeset patch
# User Vladimir Homutov 
# Date 1645440574 -10800
#  Mon Feb 21 13:49:34 2022 +0300
# Branch quic
# Node ID 950a45270e862b02f43ed1df02a9146e8686b8e5
# Parent  1a0a12bef7f00b5422d449b2d4642fff39e0a47e
QUIC: avoided pool usage in ngx_quic_protection.c.

diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c
--- a/src/event/quic/ngx_event_quic.c
+++ b/src/event/quic/ngx_event_quic.c
@@ -325,7 +325,7 @@ ngx_quic_new_connection(ngx_connection_t
 }
 }
 
-if (ngx_quic_keys_set_initial_secret(c->pool, qc->keys, >dcid)
+if (ngx_quic_keys_set_initial_secret(qc->keys, >dcid, c->log)
 != NGX_OK)
 {
 return NULL;
diff --git a/src/event/quic/ngx_event_quic_output.c b/src/event/quic/ngx_event_quic_output.c
--- a/src/event/quic/ngx_event_quic_output.c
+++ b/src/event/quic/ngx_event_quic_output.c
@@ -961,7 +961,7 @@ ngx_quic_send_early_cc(ngx_connection_t 
 return NGX_ERROR;
 }
 
-if (ngx_quic_keys_set_initial_secret(c->pool, pkt.keys, >dcid)
+if (ngx_quic_keys_set_initial_secret(pkt.keys, >dcid, c->log)
 != NGX_OK)
 {
 return NGX_ERROR;
diff --git a/src/event/quic/ngx_event_quic_protection.c b/src/event/quic/ngx_event_quic_protection.c
--- a/src/event/quic/ngx_event_quic_protection.c
+++ b/src/event/quic/ngx_event_quic_protection.c
@@ -125,7 +125,7 @@ static ngx_int_t ngx_quic_tls_seal(const
 static ngx_int_t ngx_quic_tls_hp(ngx_log_t *log, const EVP_CIPHER *cipher,
 ngx_quic_secret_t *s, u_char *out, u_char *in);
 static ngx_int_t ngx_quic_hkdf_expand(ngx_quic_hkdf_t *hkdf,
-const EVP_MD *digest, ngx_pool_t *pool);
+const EVP_MD *digest, ngx_log_t *log);
 
 static ngx_int_t ngx_quic_create_packet(ngx_quic_header_t *pkt,
 ngx_str_t *res);
@@ -191,8 +191,8 @@ ngx_quic_ciphers(ngx_uint_t id, ngx_quic
 
 
 ngx_int_t
-ngx_quic_keys_set_initial_secret(ngx_pool_t *pool, ngx_quic_keys_t *keys,
-ngx_str_t *secret)
+ngx_quic_keys_set_initial_secret(ngx_quic_keys_t *keys, ngx_str_t *secret,
+ngx_log_t *log)
 {
 size_t  is_len;
 uint8_t is[SHA256_DIGEST_LENGTH];
@@ -229,12 +229,12 @@ ngx_quic_keys_set_initial_secret(ngx_poo
 .len = is_len
 };
 
-ngx_log_debug0(NGX_LOG_DEBUG_EVENT, pool->log, 0,
+ngx_log_debug0(NGX_LOG_DEBUG_EVENT, log, 0,
"quic ngx_quic_set_initial_secret");
 #ifdef NGX_QUIC_DEBUG_CRYPTO
-ngx_log_debug3(NGX_LOG_DEBUG_EVENT, pool->log, 0,
+ngx_log_debug3(NGX_LOG_DEBUG_EVENT, log, 0,
"quic salt len:%uz %*xs", sizeof(salt), sizeof(salt), salt);
-ngx_log_debug3(NGX_LOG_DEBUG_EVENT, pool->log, 0,
+ngx_log_debug3(NGX_LOG_DEBUG_EVENT, log, 0,
"quic initial secret len:%uz %*xs", is_len, is_len, is);
 #endif
 
@@ -263,7 +263,7 @@ ngx_quic_keys_set_initial_secret(ngx_poo
 };
 
 for (i = 0; i < (sizeof(seq) / sizeof(seq[0])); i++) {
-if (ngx_quic_hkdf_expand([i], digest, pool) != NGX_OK) {
+if (ngx_quic_hkdf_expand([i], digest, log) != NGX_OK) {
 return NGX_ERROR;
 }
 }
@@ -273,17 +273,10 @@ ngx_quic_keys_set_initial_secret(ngx_poo
 
 
 static ngx_int_t
-ngx_quic_hkdf_expand(ngx_quic_hkdf_t *h, const EVP_MD *digest, ngx_pool_t *pool)
+ngx_quic_hkdf_expand(ngx_quic_hkdf_t *h, const EVP_MD *digest, ngx_log_t *log)
 {
 uint8_t  *p;
 
-if (h->out == NULL) {
-h->out = ngx_pnalloc(pool, h->out_len);
-if (h->out == NULL) {
-return NGX_ERROR;
-}
-}
-
 h->info_len = 2 + 1 + h->label_len + 1;
 
 h->info[0] = 0;
@@ -297,13 +290,13 @@ ngx_quic_hkdf_expand(ngx_quic_hkdf_t *h,
 h->prk, h->prk_len, h->info, h->info_len)
 != NGX_OK)
 {
-ngx_ssl_error(NGX_LOG_INFO, pool->log, 0,
+ngx_ssl_error(NGX_LOG_INFO, log, 0,
   "ngx_hkdf_expand(%*s) failed", h->label_len, h->label);
 return NGX_ERROR;
 }
 
 #ifdef NGX_QUIC_DEBUG_CRYPTO
-ngx_log_debug5(NGX_LOG_DEBUG_EVENT, pool->log, 0,
+ngx_log_debug5(NGX_LOG_DEBUG_EVENT, log, 0,
"quic expand \"%*s\" key len:%uz %*xs",
h->label_len, h->label, h->out_len, h->out_len, h->out);
 #endif
@@ -684,7 +677,7 @@ failed:
 
 
 ngx_int_t
-ngx_quic_keys_set_encryption_secret(ngx_pool_t *pool, ngx_uint_t is_write,
+ngx_quic_keys_set_encryption_secret(ngx_log_t

[PATCH 1 of 4] QUIC: fixed-length buffers for secrets

2022-02-21 Thread Vladimir Homutov
Patch subject is complete summary.


 src/event/quic/ngx_event_quic_protection.c |  202 +++-
 1 files changed, 105 insertions(+), 97 deletions(-)


# HG changeset patch
# User Vladimir Homutov 
# Date 1645440604 -10800
#  Mon Feb 21 13:50:04 2022 +0300
# Branch quic
# Node ID 1a0a12bef7f00b5422d449b2d4642fff39e0a47e
# Parent  55b38514729b8f848709b31295e72d6886a7a433
QUIC: fixed-length buffers for secrets.

diff --git a/src/event/quic/ngx_event_quic_protection.c b/src/event/quic/ngx_event_quic_protection.c
--- a/src/event/quic/ngx_event_quic_protection.c
+++ b/src/event/quic/ngx_event_quic_protection.c
@@ -17,6 +17,8 @@
 
 #define NGX_QUIC_AES_128_KEY_LEN  16
 
+#define NGX_QUIC_KEY_LEN  32
+
 #define NGX_AES_128_GCM_SHA2560x1301
 #define NGX_AES_256_GCM_SHA3840x1302
 #define NGX_CHACHA20_POLY1305_SHA256  0x1303
@@ -30,6 +32,27 @@
 
 
 typedef struct {
+size_tlen;
+u_chardata[SHA256_DIGEST_LENGTH];
+} ngx_quic_okm_t;
+
+typedef struct {
+size_tlen;
+u_chardata[NGX_QUIC_KEY_LEN];
+} ngx_quic_key_t;
+
+typedef struct {
+size_tlen;
+u_chardata[NGX_QUIC_KEY_LEN];
+} ngx_quic_hp_t;
+
+typedef struct {
+size_tlen;
+u_chardata[NGX_QUIC_IV_LEN];
+} ngx_quic_iv_t;
+
+
+typedef struct {
 const ngx_quic_cipher_t  *c;
 const EVP_CIPHER *hp;
 const EVP_MD *d;
@@ -37,10 +60,10 @@ typedef struct {
 
 
 typedef struct ngx_quic_secret_s {
-ngx_str_t secret;
-ngx_str_t key;
-ngx_str_t iv;
-ngx_str_t hp;
+ngx_quic_okm_tsecret;
+ngx_quic_key_tkey;
+ngx_quic_iv_t iv;
+ngx_quic_hp_t hp;
 } ngx_quic_secret_t;
 
 
@@ -57,6 +80,29 @@ struct ngx_quic_keys_s {
 };
 
 
+typedef struct {
+size_tout_len;
+u_char   *out;
+
+size_tprk_len;
+const uint8_t*prk;
+
+size_tlabel_len;
+const u_char *label;
+
+size_tinfo_len;
+uint8_t   info[20];
+} ngx_quic_hkdf_t;
+
+#define ngx_quic_hkdf_set(label, out, prk)\
+{ \
+(out)->len, (out)->data,  \
+(prk)->len, (prk)->data,  \
+(sizeof(label) - 1), (u_char *)(label),   \
+0, { 0 }  \
+}
+
+
 static ngx_int_t ngx_hkdf_expand(u_char *out_key, size_t out_len,
 const EVP_MD *digest, const u_char *prk, size_t prk_len,
 const u_char *info, size_t info_len);
@@ -78,8 +124,8 @@ static ngx_int_t ngx_quic_tls_seal(const
 ngx_str_t *ad, ngx_log_t *log);
 static ngx_int_t ngx_quic_tls_hp(ngx_log_t *log, const EVP_CIPHER *cipher,
 ngx_quic_secret_t *s, u_char *out, u_char *in);
-static ngx_int_t ngx_quic_hkdf_expand(ngx_pool_t *pool, const EVP_MD *digest,
-ngx_str_t *out, ngx_str_t *label, const uint8_t *prk, size_t prk_len);
+static ngx_int_t ngx_quic_hkdf_expand(ngx_quic_hkdf_t *hkdf,
+const EVP_MD *digest, ngx_pool_t *pool);
 
 static ngx_int_t ngx_quic_create_packet(ngx_quic_header_t *pkt,
 ngx_str_t *res);
@@ -204,28 +250,20 @@ ngx_quic_keys_set_initial_secret(ngx_poo
 client->iv.len = NGX_QUIC_IV_LEN;
 server->iv.len = NGX_QUIC_IV_LEN;
 
-struct {
-ngx_str_t   label;
-ngx_str_t  *key;
-ngx_str_t  *prk;
-} seq[] = {
+ngx_quic_hkdf_t seq[] = {
 /* labels per RFC 9001, 5.1. Packet Protection Keys */
-{ ngx_string("tls13 client in"), >secret,  },
-{ ngx_string("tls13 quic key"),  >key,>secret },
-{ ngx_string("tls13 quic iv"),   >iv, >secret },
-{ ngx_string("tls13 quic hp"),   >hp, >secret },
-{ ngx_string("tls13 server in"), >secret,  },
-{ ngx_string("tls13 quic key"),  >key,>secret },
-{ ngx_string("tls13 quic iv"),   >iv, >secret },
-{ ngx_string("tls13 quic hp"),   >hp, >secret },
+ngx_quic_hkdf_set("tls13 client in", >secret, ),
+ngx_quic_hkdf_set("tls13 quic key",  >key,>secret),
+ngx_quic_hkdf_set("tls13 quic iv",   >iv, >secret),
+ngx_quic_hkdf_set("tls13 quic hp",   >hp, >secret),
+ngx_quic_hkdf_set("tls13 server in", >secret, ),
+ngx_quic_hkdf_set("tls13 

[PATCH 0 of 4] [QUIC] avoid pool allocations

2022-02-21 Thread Vladimir Homutov


  it is desirable to avoid pool allocations at early stages of quic connection
  processing. Currently, code in protection.c and tokens.c allocates memory
  dynamically, while this is not strictly necessary, as allocated objects have
  fixed size and sometimes short lifetime.  The patchset revises this cases
  and removes pool usage.

  This patchset prepares base to more lightweight early packet processing
  (parsing, retry and rejection with error without creating connection object
   and memory allocations)


___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [QUIC] padding of Initial packets

2022-02-09 Thread Vladimir Homutov
On Tue, Feb 08, 2022 at 03:42:54PM +0300, Vladimir Homutov wrote:
> On Tue, Feb 08, 2022 at 02:10:04PM +0300, Andrey Kolyshkin wrote:
> > Hello.
> >
> > This patch is strange.
> > 1. ngx_quic_revert_send can set to ctx an uninitialized value from
> > preserved_pnum. (example if min > len and i = 0, only 0 element is filled
> > in preserved_pnum but restored all)
> > 2. ngx_quic_revert_send will restored pnum for ctx that have already called
> > ngx_quic_output_packet and the packet with this pnum will be queued.
> > (example if min > len and i = 1)
>
> thank you for noticing.
> indeed, this needs to be fixed. we don't want to restore contexts we
> didn't yet touch.


The suggested fix is below. Also, while investigating the issue
thoroughly, we found that it is also possible to run into negative ctx->inflight
when discarding context. This is  addressed by a second patch.

# HG changeset patch
# User Vladimir Homutov 
# Date 1644411201 -10800
#  Wed Feb 09 15:53:21 2022 +0300
# Branch quic
# Node ID a4fb28741e19af426228e64b8d2c02ed3950b538
# Parent  dde5cb0205ef8c2a2a3255e7bd369a9c644f2049
QUIC: fixed output context restoring.

The cd8018bc81a5 fixed unintended send of non-padded initial packets,
but failed to restore context properly: only processed contexts need
to be restored.  As a consequence, a packet number could be restored
from uninitialized value.

diff --git a/src/event/quic/ngx_event_quic_output.c 
b/src/event/quic/ngx_event_quic_output.c
--- a/src/event/quic/ngx_event_quic_output.c
+++ b/src/event/quic/ngx_event_quic_output.c
@@ -165,7 +165,7 @@ ngx_quic_create_datagrams(ngx_connection
 if (min > len) {
 /* padding can't be applied - avoid sending the packet */

-for (i = 0; i < NGX_QUIC_SEND_CTX_LAST; i++) {
+while (i-- > 0) {
 ctx = >send_ctx[i];
 ngx_quic_revert_send(c, ctx, preserved_pnum[i]);
 }


# HG changeset patch
# User Vladimir Homutov 
# Date 1644411102 -10800
#  Wed Feb 09 15:51:42 2022 +0300
# Branch quic
# Node ID 2e27c45e2edb2c9540b211040d314b1748865820
# Parent  a4fb28741e19af426228e64b8d2c02ed3950b538
QUIC: fixed in-flight bytes accounting.

Initially, frames are genereated and stored in ctx->frames.
Next, ngx_quic_output() collects frames to be sent in in ctx->sending.
On failure, ngx_quic_revert_sned() returns frames into ctx->frames.

On success, the ngx_quic_commit_send() moves ack-eliciting frames into
ctx->sent and frees non-ack-eliciting frames.
This function also updates in-flight bytes counter, so only actually sent
frames are accounted.

The counter is decremented in the following cases:
 - acknowledgment is received
 - packet was declared lost
 - we are discarding context completely

In each of this cases frame is removed from ctx->sent queue and in-flight
counter is accordingly decremented.

The patch fixes the case of discarding context - only removing frames
from ctx->sent must be followed by in-flight bytes counter decrement,
otherwise cg->in_flight could experience type underflow.

The issue appeared in b1676cd64dc9.

diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c
--- a/src/event/quic/ngx_event_quic.c
+++ b/src/event/quic/ngx_event_quic.c
@@ -1092,7 +1092,6 @@ ngx_quic_discard_ctx(ngx_connection_t *c
 ngx_queue_remove(q);

 f = ngx_queue_data(q, ngx_quic_frame_t, queue);
-ngx_quic_congestion_ack(c, f);
 ngx_quic_free_frame(c, f);
 }

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [quic] Why use sendmsg in loop instead of sendmmsg

2022-02-09 Thread Vladimir Homutov
On Wed, Feb 09, 2022 at 06:51:26AM +, Gao,Yan(媒体云) wrote:
> HI
>ngx_quic_create_datagrams use sendmsg in loop when without gso. Can use 
> sendmmsg directly?

there are some reasons we don't do it:

first, attempt to send multiple packets at once makes the code more
complex, especially when you have to deal with multiple encryption levels.
Typically, this is an initial stage of the connection (i.e. handshake)
and you won't get much performance boost from sending multiple packets
at once. That's why we switch to GSO only for application-level packets.

second, sendmmsg() (while being useful) still doesn't provide breakthrough
performance gain. Probably, it would be beneficial to have sendmmsg()
support as well, but currently this is not a top priority.


___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [QUIC] padding of Initial packets

2022-02-08 Thread Vladimir Homutov
On Tue, Feb 08, 2022 at 02:10:04PM +0300, Andrey Kolyshkin wrote:
> Hello.
>
> This patch is strange.
> 1. ngx_quic_revert_send can set to ctx an uninitialized value from
> preserved_pnum. (example if min > len and i = 0, only 0 element is filled
> in preserved_pnum but restored all)
> 2. ngx_quic_revert_send will restored pnum for ctx that have already called
> ngx_quic_output_packet and the packet with this pnum will be queued.
> (example if min > len and i = 1)

thank you for noticing.
indeed, this needs to be fixed. we don't want to restore contexts we
didn't yet touch.


>
>
> On Wed, Feb 2, 2022 at 2:07 PM Sergey Kandaurov  wrote:
>
> >
> > > On 2 Feb 2022, at 13:55, Vladimir Homutov  wrote:
> > >
> > > # HG changeset patch
> > > # User Vladimir Homutov 
> > > # Date 1643796973 -10800
> > > #  Wed Feb 02 13:16:13 2022 +0300
> > > # Branch quic
> > > # Node ID fbfbcf66990e8964bcf308f3869f37d1a1acceeb
> > > # Parent  8c6645ecaeb6cbf27976fd9035440bfcab943117
> > > QUIC: fixed padding of initial packets in case of limited path.
> > >
> > > Previously, non-padded initial packet could be sent as a result of the
> > > following situation:
> > >
> > > - initial queue is not empty (so padding to 1200 is required)
> > > - handhsake queue is not empty (so padding is to be added after h/s
> > packet)
> >
> > handshake
> >
> > > - path is limited
> > >
> > > If serializing handshake packet would violate path limit, such packet was
> > > omitted, and the non-padded initial packet was sent.
> > >
> > > The fix is to avoid sending the packet at all in such case.  This
> > follows the
> > > original intention introduced in c5155a0cb12f.
> > >
> > > diff --git a/src/event/quic/ngx_event_quic_output.c
> > b/src/event/quic/ngx_event_quic_output.c
> > > --- a/src/event/quic/ngx_event_quic_output.c
> > > +++ b/src/event/quic/ngx_event_quic_output.c
> > > @@ -158,7 +158,14 @@ ngx_quic_create_datagrams(ngx_connection
> > >   ? NGX_QUIC_MIN_INITIAL_SIZE - (p - dst) : 0;
> > >
> > > if (min > len) {
> > > -continue;
> > > +/* padding can't be applied - avoid sending the packet
> > */
> > > +
> > > +for (i = 0; i < NGX_QUIC_SEND_CTX_LAST; i++) {
> > > +ctx = >send_ctx[i];
> > > +ngx_quic_revert_send(c, ctx, preserved_pnum[i]);
> >
> > this could be simplified to reduce ctx variable:
> > ngx_quic_revert_send(c, >send_ctx[i], preserved_pnum[i]);
> >
> > but it won't fit into 80 line, so that's good just as well
> >
> > > +}
> > > +
> > > +return NGX_OK;
> > > }
> > >
> > > n = ngx_quic_output_packet(c, ctx, p, len, min);
> > >
> >
> > --
> > Sergey Kandaurov
> >
> > ___
> > nginx-devel mailing list -- nginx-devel@nginx.org
> > To unsubscribe send an email to nginx-devel-le...@nginx.org
> >
>
>
> --
> Best regards, Andrey

> ___
> nginx-devel mailing list -- nginx-devel@nginx.org
> To unsubscribe send an email to nginx-devel-le...@nginx.org

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [PATCH] QUIC: stream lingering

2022-02-08 Thread Vladimir Homutov

On 2/8/22 15:18, Roman Arutyunyan wrote:

On Tue, Feb 08, 2022 at 02:45:19PM +0300, Vladimir Homutov wrote:

On Mon, Feb 07, 2022 at 05:16:17PM +0300, Roman Arutyunyan wrote:

Hi,

On Fri, Feb 04, 2022 at 04:56:23PM +0300, Vladimir Homutov wrote:

On Tue, Feb 01, 2022 at 04:39:59PM +0300, Roman Arutyunyan wrote:

# HG changeset patch
# User Roman Arutyunyan 
# Date 1643722727 -10800
#  Tue Feb 01 16:38:47 2022 +0300
# Branch quic
# Node ID db31ae16c1f2050be9c9f6b1f117ab6725b97dd4
# Parent  308ac307b3e6952ef0c5ccf10cc82904c59fa4c3
QUIC: stream lingering.

Now ngx_quic_stream_t is decoupled from ngx_connection_t in a way that it
can persist after connection is closed by application.  During this period,
server is expecting stream final size from client for correct flow control.
Also, buffered output is sent to client as more flow control credit is granted.


[..]


+static ngx_int_t
+ngx_quic_stream_flush(ngx_quic_stream_t *qs)
+{
+size_t  limit, len;
+ngx_uint_t  last;
+ngx_chain_t*out, *cl;
+ngx_quic_frame_t   *frame;
+ngx_connection_t   *pc;
+ngx_quic_connection_t  *qc;
+
+if (qs->send_state != NGX_QUIC_STREAM_SEND_SEND) {
+return NGX_OK;
+}
+
+pc = qs->parent;
+qc = ngx_quic_get_connection(pc);
+
+limit = ngx_quic_max_stream_flow(qs);
+last = 0;
+
+out = ngx_quic_read_chain(pc, >out, limit);
+if (out == NGX_CHAIN_ERROR) {
+return NGX_ERROR;
+}
+
+len = 0;
+last = 0;


this assignment looks duplicate.


Thanks, fixed.


[..]


+static ngx_int_t
+ngx_quic_close_stream(ngx_quic_stream_t *qs)
+{
  ngx_connection_t   *pc;
  ngx_quic_frame_t   *frame;
-ngx_quic_stream_t  *qs;
  ngx_quic_connection_t  *qc;

-qs = c->quic;
  pc = qs->parent;
  qc = ngx_quic_get_connection(pc);

-ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
-   "quic stream id:0x%xL cleanup", qs->id);
+if (!qc->closing) {
+if (qs->recv_state == NGX_QUIC_STREAM_RECV_RECV
+|| qs->send_state == NGX_QUIC_STREAM_SEND_READY
+|| qs->send_state == NGX_QUIC_STREAM_SEND_SEND)
+{


so basically this are the states where we need to wait for FIN?
and thus avoid closing till we get it.
I would add a comment here.


On the receiving end we wait either for fin or for reset to have final size.
On the sending end we wait for everything that's buffered to be sent.
Added a comment about that.


[..]

+if (qs->connection == NULL) {
+return ngx_quic_close_stream(qs);
+}
+
  ngx_quic_set_event(qs->connection->write);


this pattern - check connection, close if NULL and set event seem to
repeat. Maybe it's worth to try to put this check/action into
ngx_quic_set_event somehow ? we could instead have
set_read_event/set_write_event maybe.


I thought about this too, but it's not always that simple.  And even if it was,
the new function/macro would have unclear semantics.  Let's just remember this
as a possible future optimiation.


+static ngx_int_t
+ngx_quic_stream_flush(ngx_quic_stream_t *qs)
+

[..]

+if (len == 0 && !last) {
+return NGX_OK;
+}
+
+frame = ngx_quic_alloc_frame(pc);
+if (frame == NULL) {
+return NGX_ERROR;
+}
+
+frame = ngx_quic_alloc_frame(pc);
+if (frame == NULL) {
+return NGX_ERROR;
+}


one more dup here.


Yes, thanks.


Overal, it looks good, but the testing revealed another issue: with big
buffer sizes we run into issue of too long chains in ngx_quic_write_chain().
As discussed, this certainly needs optimization - probably adding some
pointer to the end to facilitate appending, or something else.


It's true ngx_quic_write_chain() needs to be optimized.  When the buffered
chain is big, it takes too much time to find the write point.  I'll address
this is a separate patch.  Meanwhile, attached is an updated version of the
current one.

In the new version of the patch I also eliminated the
ngx_quic_max_stream_flow() function and embedded its content in
ngx_quic_stream_flush().


yes, this looks correct - flow limit should not consider buffer as it
was before.

I think we should check for limit == 0 before doing read_chain and this
is good place for debug logging about 'hit MAX_DATA/MAX_STREAM_DATA' that
was removed by update.


I don't know how much do we really need those messages.  What really needs to
be added here is sending DATA_BLOCKED/STREAM_DATA_BLOCKED, for which I
already have a separate patch.  That patch also adds some logging.
Once we finish with optimization, I'll send it out.

ok, good.



Apart from logging, checking limit == 0 does not seem to make sense, because
even if the limit is zero, we should still proceed, since we are still able to
send fin.

yes, exactly.

I have no more concerns regarding this patch, updated version looks good 
(considering further

Re: [PATCH] QUIC: stream lingering

2022-02-08 Thread Vladimir Homutov
On Mon, Feb 07, 2022 at 05:16:17PM +0300, Roman Arutyunyan wrote:
> Hi,
>
> On Fri, Feb 04, 2022 at 04:56:23PM +0300, Vladimir Homutov wrote:
> > On Tue, Feb 01, 2022 at 04:39:59PM +0300, Roman Arutyunyan wrote:
> > > # HG changeset patch
> > > # User Roman Arutyunyan 
> > > # Date 1643722727 -10800
> > > #  Tue Feb 01 16:38:47 2022 +0300
> > > # Branch quic
> > > # Node ID db31ae16c1f2050be9c9f6b1f117ab6725b97dd4
> > > # Parent  308ac307b3e6952ef0c5ccf10cc82904c59fa4c3
> > > QUIC: stream lingering.
> > >
> > > Now ngx_quic_stream_t is decoupled from ngx_connection_t in a way that it
> > > can persist after connection is closed by application.  During this 
> > > period,
> > > server is expecting stream final size from client for correct flow 
> > > control.
> > > Also, buffered output is sent to client as more flow control credit is 
> > > granted.
> > >
> > [..]
> >
> > > +static ngx_int_t
> > > +ngx_quic_stream_flush(ngx_quic_stream_t *qs)
> > > +{
> > > +size_t  limit, len;
> > > +ngx_uint_t  last;
> > > +ngx_chain_t*out, *cl;
> > > +ngx_quic_frame_t   *frame;
> > > +ngx_connection_t   *pc;
> > > +ngx_quic_connection_t  *qc;
> > > +
> > > +if (qs->send_state != NGX_QUIC_STREAM_SEND_SEND) {
> > > +return NGX_OK;
> > > +}
> > > +
> > > +pc = qs->parent;
> > > +qc = ngx_quic_get_connection(pc);
> > > +
> > > +limit = ngx_quic_max_stream_flow(qs);
> > > +last = 0;
> > > +
> > > +out = ngx_quic_read_chain(pc, >out, limit);
> > > +if (out == NGX_CHAIN_ERROR) {
> > > +return NGX_ERROR;
> > > +}
> > > +
> > > +len = 0;
> > > +last = 0;
> >
> > this assignment looks duplicate.
>
> Thanks, fixed.
>
> > [..]
> >
> > > +static ngx_int_t
> > > +ngx_quic_close_stream(ngx_quic_stream_t *qs)
> > > +{
> > >  ngx_connection_t   *pc;
> > >  ngx_quic_frame_t   *frame;
> > > -ngx_quic_stream_t  *qs;
> > >  ngx_quic_connection_t  *qc;
> > >
> > > -qs = c->quic;
> > >  pc = qs->parent;
> > >  qc = ngx_quic_get_connection(pc);
> > >
> > > -ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
> > > -   "quic stream id:0x%xL cleanup", qs->id);
> > > +if (!qc->closing) {
> > > +if (qs->recv_state == NGX_QUIC_STREAM_RECV_RECV
> > > +|| qs->send_state == NGX_QUIC_STREAM_SEND_READY
> > > +|| qs->send_state == NGX_QUIC_STREAM_SEND_SEND)
> > > +{
> >
> > so basically this are the states where we need to wait for FIN?
> > and thus avoid closing till we get it.
> > I would add a comment here.
>
> On the receiving end we wait either for fin or for reset to have final size.
> On the sending end we wait for everything that's buffered to be sent.
> Added a comment about that.
>
> > [..]
> > > +if (qs->connection == NULL) {
> > > +return ngx_quic_close_stream(qs);
> > > +}
> > > +
> > >  ngx_quic_set_event(qs->connection->write);
> >
> > this pattern - check connection, close if NULL and set event seem to
> > repeat. Maybe it's worth to try to put this check/action into
> > ngx_quic_set_event somehow ? we could instead have
> > set_read_event/set_write_event maybe.
>
> I thought about this too, but it's not always that simple.  And even if it 
> was,
> the new function/macro would have unclear semantics.  Let's just remember this
> as a possible future optimiation.
>
> > > +static ngx_int_t
> > > +ngx_quic_stream_flush(ngx_quic_stream_t *qs)
> > > +
> > [..]
> > > +if (len == 0 && !last) {
> > > +return NGX_OK;
> > > +}
> > > +
> > > +frame = ngx_quic_alloc_frame(pc);
> > > +if (frame == NULL) {
> > > +return NGX_ERROR;
> > > +}
> > > +
> > > +frame = ngx_quic_alloc_frame(pc);
> > > +if (frame == NULL) {
> > > +return NGX_ERROR;
> > > +}
> >
> > one more dup here.
>
> Yes, thanks.
>
> > Overal, it looks good, but the testing revealed another issue: w

Re: [PATCH] QUIC: stream lingering

2022-02-04 Thread Vladimir Homutov
On Tue, Feb 01, 2022 at 04:39:59PM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1643722727 -10800
> #  Tue Feb 01 16:38:47 2022 +0300
> # Branch quic
> # Node ID db31ae16c1f2050be9c9f6b1f117ab6725b97dd4
> # Parent  308ac307b3e6952ef0c5ccf10cc82904c59fa4c3
> QUIC: stream lingering.
>
> Now ngx_quic_stream_t is decoupled from ngx_connection_t in a way that it
> can persist after connection is closed by application.  During this period,
> server is expecting stream final size from client for correct flow control.
> Also, buffered output is sent to client as more flow control credit is 
> granted.
>
[..]

> +static ngx_int_t
> +ngx_quic_stream_flush(ngx_quic_stream_t *qs)
> +{
> +size_t  limit, len;
> +ngx_uint_t  last;
> +ngx_chain_t*out, *cl;
> +ngx_quic_frame_t   *frame;
> +ngx_connection_t   *pc;
> +ngx_quic_connection_t  *qc;
> +
> +if (qs->send_state != NGX_QUIC_STREAM_SEND_SEND) {
> +return NGX_OK;
> +}
> +
> +pc = qs->parent;
> +qc = ngx_quic_get_connection(pc);
> +
> +limit = ngx_quic_max_stream_flow(qs);
> +last = 0;
> +
> +out = ngx_quic_read_chain(pc, >out, limit);
> +if (out == NGX_CHAIN_ERROR) {
> +return NGX_ERROR;
> +}
> +
> +len = 0;
> +last = 0;

this assignment looks duplicate.

[..]

> +static ngx_int_t
> +ngx_quic_close_stream(ngx_quic_stream_t *qs)
> +{
>  ngx_connection_t   *pc;
>  ngx_quic_frame_t   *frame;
> -ngx_quic_stream_t  *qs;
>  ngx_quic_connection_t  *qc;
>
> -qs = c->quic;
>  pc = qs->parent;
>  qc = ngx_quic_get_connection(pc);
>
> -ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
> -   "quic stream id:0x%xL cleanup", qs->id);
> +if (!qc->closing) {
> +if (qs->recv_state == NGX_QUIC_STREAM_RECV_RECV
> +|| qs->send_state == NGX_QUIC_STREAM_SEND_READY
> +|| qs->send_state == NGX_QUIC_STREAM_SEND_SEND)
> +{

so basically this are the states where we need to wait for FIN?
and thus avoid closing till we get it.
I would add a comment here.

[..]
> +if (qs->connection == NULL) {
> +return ngx_quic_close_stream(qs);
> +}
> +
>  ngx_quic_set_event(qs->connection->write);

this pattern - check connection, close if NULL and set event seem to
repeat. Maybe it's worth to try to put this check/action into
ngx_quic_set_event somehow ? we could instead have
set_read_event/set_write_event maybe.


> +static ngx_int_t
> +ngx_quic_stream_flush(ngx_quic_stream_t *qs)
> +
[..]
> +if (len == 0 && !last) {
> +return NGX_OK;
> +}
> +
> +frame = ngx_quic_alloc_frame(pc);
> +if (frame == NULL) {
> +return NGX_ERROR;
> +}
> +
> +frame = ngx_quic_alloc_frame(pc);
> +if (frame == NULL) {
> +return NGX_ERROR;
> +}

one more dup here.


Overal, it looks good, but the testing revealed another issue: with big
buffer sizes we run into issue of too long chains in ngx_quic_write_chain().
As discussed, this certainly needs optimization - probably adding some
pointer to the end to facilitate appending, or something else.


___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [PATCH] QUIC: do not arm loss detection timer if nothing was sent

2022-02-02 Thread Vladimir Homutov
On Wed, Feb 02, 2022 at 03:05:07PM +0300, Sergey Kandaurov wrote:
> # HG changeset patch
> # User Sergey Kandaurov 
> # Date 1643803485 -10800
> #  Wed Feb 02 15:04:45 2022 +0300
> # Branch quic
> # Node ID 768445d1ba6e2bce9001704c52b516ad421ae776
> # Parent  cd8018bc81a52ca7de2eb4e779dfd574c8a661a2
> QUIC: do not arm loss detection timer if nothing was sent.
>
> Notably, this became quite practicable after the recent fix in cd8018bc81a5.
>
> diff --git a/src/event/quic/ngx_event_quic_output.c 
> b/src/event/quic/ngx_event_quic_output.c
> --- a/src/event/quic/ngx_event_quic_output.c
> +++ b/src/event/quic/ngx_event_quic_output.c
> @@ -109,7 +109,9 @@ ngx_quic_output(ngx_connection_t *c)
>  ngx_add_timer(c->read, qc->tp.max_idle_timeout);
>  }
>
> -ngx_quic_set_lost_timer(c);
> +if (in_flight != cg->in_flight) {
> +ngx_quic_set_lost_timer(c);
> +}
>
>  return NGX_OK;
>  }
>

Instead of adding one more check, I would invert condition and test if
we need to set any timers first, and then arm whatever needed;
This would simplify conditions and make logic simpler;
i.e. something like:


diff --git a/src/event/quic/ngx_event_quic_output.c 
b/src/event/quic/ngx_event_quic_output.c
--- a/src/event/quic/ngx_event_quic_output.c
+++ b/src/event/quic/ngx_event_quic_output.c
@@ -104,7 +104,12 @@ ngx_quic_output(ngx_connection_t *c)
 return NGX_ERROR;
 }

-if (in_flight != cg->in_flight && !qc->send_timer_set && !qc->closing) {
+if (in_flight == cg->in_flight || qc->closing) {
+/* no ack-eliciting data was sent or we are done */
+return NGX_OK;
+}
+
+if (!qc->send_timer_set) {
 qc->send_timer_set = 1;
 ngx_add_timer(c->read, qc->tp.max_idle_timeout);
 }

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


[QUIC] padding of Initial packets

2022-02-02 Thread Vladimir Homutov
# HG changeset patch
# User Vladimir Homutov 
# Date 1643796973 -10800
#  Wed Feb 02 13:16:13 2022 +0300
# Branch quic
# Node ID fbfbcf66990e8964bcf308f3869f37d1a1acceeb
# Parent  8c6645ecaeb6cbf27976fd9035440bfcab943117
QUIC: fixed padding of initial packets in case of limited path.

Previously, non-padded initial packet could be sent as a result of the
following situation:

 - initial queue is not empty (so padding to 1200 is required)
 - handhsake queue is not empty (so padding is to be added after h/s packet)
 - path is limited

If serializing handshake packet would violate path limit, such packet was
omitted, and the non-padded initial packet was sent.

The fix is to avoid sending the packet at all in such case.  This follows the
original intention introduced in c5155a0cb12f.

diff --git a/src/event/quic/ngx_event_quic_output.c 
b/src/event/quic/ngx_event_quic_output.c
--- a/src/event/quic/ngx_event_quic_output.c
+++ b/src/event/quic/ngx_event_quic_output.c
@@ -158,7 +158,14 @@ ngx_quic_create_datagrams(ngx_connection
   ? NGX_QUIC_MIN_INITIAL_SIZE - (p - dst) : 0;

 if (min > len) {
-continue;
+/* padding can't be applied - avoid sending the packet */
+
+for (i = 0; i < NGX_QUIC_SEND_CTX_LAST; i++) {
+ctx = >send_ctx[i];
+ngx_quic_revert_send(c, ctx, preserved_pnum[i]);
+}
+
+return NGX_OK;
 }

 n = ngx_quic_output_packet(c, ctx, p, len, min);

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [PATCH 0 of 4] QUIC stream states and events

2022-01-31 Thread Vladimir Homutov
On Mon, Jan 31, 2022 at 06:21:03PM +0300, Roman Arutyunyan wrote:
> - added zero size handling in stream recv()
> - renamed http3 uni stream handlers
> - added style patch

looks good to me
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [PATCH 3 of 3] QUIC: stream event setting function

2022-01-31 Thread Vladimir Homutov
On Mon, Jan 31, 2022 at 10:34:08AM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1643187691 -10800
> #  Wed Jan 26 12:01:31 2022 +0300
> # Branch quic
> # Node ID 9f5c59800a9894aad00b06df93ec454aab97372d
> # Parent  d3c6dea9454c48ded14b8c087dffc4dea46f78ef
> QUIC: stream event setting function.
>
> The function ngx_quic_set_event() is now called instead of posting events
> directly.
>
> diff --git a/src/event/quic/ngx_event_quic_streams.c 
> b/src/event/quic/ngx_event_quic_streams.c
> --- a/src/event/quic/ngx_event_quic_streams.c
> +++ b/src/event/quic/ngx_event_quic_streams.c
> @@ -34,6 +34,7 @@ static ngx_int_t ngx_quic_control_flow(n
>  static ngx_int_t ngx_quic_update_flow(ngx_connection_t *c, uint64_t last);
>  static ngx_int_t ngx_quic_update_max_stream_data(ngx_connection_t *c);
>  static ngx_int_t ngx_quic_update_max_data(ngx_connection_t *c);
> +static void ngx_quic_set_event(ngx_event_t *ev);
>
>
>  ngx_connection_t *
> @@ -156,7 +157,6 @@ ngx_quic_close_streams(ngx_connection_t
>  {
>  ngx_pool_t *pool;
>  ngx_queue_t*q;
> -ngx_event_t*rev, *wev;
>  ngx_rbtree_t   *tree;
>  ngx_rbtree_node_t  *node;
>  ngx_quic_stream_t  *qs;
> @@ -195,17 +195,8 @@ ngx_quic_close_streams(ngx_connection_t
>  qs->recv_state = NGX_QUIC_STREAM_RECV_RESET_RECVD;
>  qs->send_state = NGX_QUIC_STREAM_SEND_RESET_SENT;
>
> -rev = qs->connection->read;
> -rev->ready = 1;
> -
> -wev = qs->connection->write;
> -wev->ready = 1;
> -
> -ngx_post_event(rev, _posted_events);
> -
> -if (rev->timer_set) {
> -ngx_del_timer(rev);
> -}
> +ngx_quic_set_event(qs->connection->read);
> +ngx_quic_set_event(qs->connection->write);
>
>  #if (NGX_DEBUG)
>  ns++;
> @@ -1024,7 +1015,6 @@ ngx_quic_handle_stream_frame(ngx_connect
>  ngx_quic_frame_t *frame)
>  {
>  uint64_t  last;
> -ngx_event_t  *rev;
>  ngx_connection_t *sc;
>  ngx_quic_stream_t*qs;
>  ngx_quic_connection_t*qc;
> @@ -1102,12 +1092,7 @@ ngx_quic_handle_stream_frame(ngx_connect
>  }
>
>  if (f->offset == qs->recv_offset) {
> -rev = sc->read;
> -rev->ready = 1;
> -
> -if (rev->active) {
> -ngx_post_event(rev, _posted_events);
> -}
> +ngx_quic_set_event(sc->read);
>  }
>
>  return NGX_OK;
> @@ -1118,7 +1103,6 @@ ngx_int_t
>  ngx_quic_handle_max_data_frame(ngx_connection_t *c,
>  ngx_quic_max_data_frame_t *f)
>  {
> -ngx_event_t*wev;
>  ngx_rbtree_t   *tree;
>  ngx_rbtree_node_t  *node;
>  ngx_quic_stream_t  *qs;
> @@ -1140,12 +1124,7 @@ ngx_quic_handle_max_data_frame(ngx_conne
>   node = ngx_rbtree_next(tree, node))
>  {
>  qs = (ngx_quic_stream_t *) node;
> -wev = qs->connection->write;
> -
> -if (wev->active) {
> -wev->ready = 1;
> -ngx_post_event(wev, _posted_events);
> -}
> +ngx_quic_set_event(qs->connection->write);
>  }
>  }
>
> @@ -1206,7 +1185,6 @@ ngx_quic_handle_max_stream_data_frame(ng
>  ngx_quic_header_t *pkt, ngx_quic_max_stream_data_frame_t *f)
>  {
>  uint64_tsent;
> -ngx_event_t*wev;
>  ngx_quic_stream_t  *qs;
>  ngx_quic_connection_t  *qc;
>
> @@ -1236,12 +1214,7 @@ ngx_quic_handle_max_stream_data_frame(ng
>  sent = qs->connection->sent;
>
>  if (sent >= qs->send_max_data) {
> -wev = qs->connection->write;
> -
> -if (wev->active) {
> -wev->ready = 1;
> -ngx_post_event(wev, _posted_events);
> -}
> +ngx_quic_set_event(qs->connection->write);
>  }
>
>  qs->send_max_data = f->limit;
> @@ -1254,7 +1227,6 @@ ngx_int_t
>  ngx_quic_handle_reset_stream_frame(ngx_connection_t *c,
>  ngx_quic_header_t *pkt, ngx_quic_reset_stream_frame_t *f)
>  {
> -ngx_event_t*rev;
>  ngx_connection_t   *sc;
>  ngx_quic_stream_t  *qs;
>  ngx_quic_connection_t  *qc;
> @@ -1308,12 +1280,7 @@ ngx_quic_handle_reset_stream_frame(ngx_c
>  return NGX_ERROR;
>  }
>
> -rev = sc->read;
> -rev->ready = 1;
> -
> -if (rev->active) {
> -ngx_post_event(rev, _posted_events);
> -}
> +ngx_quic_set_event(qs->connection->read);
>
>  return NGX_OK;
>  }
> @@ -1323,7 +1290,6 @@ ngx_int_t
>  ngx_quic_handle_stop_sending_frame(ngx_connection_t *c,
>  ngx_quic_header_t *pkt, ngx_quic_stop_sending_frame_t *f)
>  {
> -ngx_event_t*wev;
>  ngx_quic_stream_t  *qs;
>  ngx_quic_connection_t  *qc;
>
> @@ -1350,12 +1316,7 @@ ngx_quic_handle_stop_sending_frame(ngx_c
>  return NGX_ERROR;
>  }
>
> -wev = qs->connection->write;
> -
> -if (wev->active) {
> -   

Re: [PATCH 2 of 3] HTTP/3: proper uni stream closure detection

2022-01-31 Thread Vladimir Homutov
On Mon, Jan 31, 2022 at 10:34:07AM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1643611590 -10800
> #  Mon Jan 31 09:46:30 2022 +0300
> # Branch quic
> # Node ID d3c6dea9454c48ded14b8c087dffc4dea46f78ef
> # Parent  8dcb9908989401d750b14fe5dccf444a5485c23d
> HTTP/3: proper uni stream closure detection.
>
> Previously, closure detection for server-initiated uni streams was not 
> properly
> implemented.  Instead, HTTP/3 code relied on QUIC code posting the read event
> and setting rev->error when it needed to close the stream.  Then, regular
> uni stream read handler called c->recv() and received error, which closed the
> stream.  This was an ad-hoc solution.  If, for whatever reason, the read
> handler was called earlier, c->recv() would return 0, which would also close
> the stream.
>
> Now server-initiated uni streams have a separate read event handler for
> tracking stream closure.  The handler calls c->recv(), which normally returns
> 0, but may return error in case of closure.
>
> diff --git a/src/http/v3/ngx_http_v3_uni.c b/src/http/v3/ngx_http_v3_uni.c
> --- a/src/http/v3/ngx_http_v3_uni.c
> +++ b/src/http/v3/ngx_http_v3_uni.c
> @@ -26,6 +26,7 @@ typedef struct {
>
>  static void ngx_http_v3_close_uni_stream(ngx_connection_t *c);
>  static void ngx_http_v3_uni_read_handler(ngx_event_t *rev);
> +static void ngx_http_v3_dummy_read_handler(ngx_event_t *wev);
>  static void ngx_http_v3_dummy_write_handler(ngx_event_t *wev);
>  static void ngx_http_v3_push_cleanup(void *data);
>  static ngx_connection_t *ngx_http_v3_get_uni_stream(ngx_connection_t *c,
> @@ -252,6 +253,32 @@ failed:
>
>
>  static void
> +ngx_http_v3_dummy_read_handler(ngx_event_t *rev)

should it be ngx_http_v3_uni_dummy_read_handler?

> +{
> +u_char ch;
> +ngx_connection_t  *c;
> +
> +c = rev->data;
> +
> +ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "http3 dummy read 
> handler");
> +
> +if (rev->ready) {
> +if (c->recv(c, , 1) != 0) {
> +ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_NO_ERROR, 
> NULL);
> +ngx_http_v3_close_uni_stream(c);
> +return;
> +}
> +}
> +
> +if (ngx_handle_read_event(rev, 0) != NGX_OK) {
> +ngx_http_v3_finalize_connection(c, NGX_HTTP_V3_ERR_INTERNAL_ERROR,
> +NULL);
> +ngx_http_v3_close_uni_stream(c);
> +}
> +}
> +
> +
> +static void
>  ngx_http_v3_dummy_write_handler(ngx_event_t *wev)
>  {
>  ngx_connection_t  *c;
> @@ -393,7 +420,7 @@ ngx_http_v3_get_uni_stream(ngx_connectio
>
>  sc->data = us;
>
> -sc->read->handler = ngx_http_v3_uni_read_handler;
> +sc->read->handler = ngx_http_v3_dummy_read_handler;
>  sc->write->handler = ngx_http_v3_dummy_write_handler;
>
>  if (index >= 0) {
> @@ -409,6 +436,8 @@ ngx_http_v3_get_uni_stream(ngx_connectio
>  goto failed;
>  }
>
> +ngx_post_event(sc->read, _posted_events);
> +
>  return sc;
>
>  failed:

Looks ok
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [PATCH 1 of 3] QUIC: introduced explicit stream states

2022-01-31 Thread Vladimir Homutov
On Mon, Jan 31, 2022 at 10:34:06AM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1643611562 -10800
> #  Mon Jan 31 09:46:02 2022 +0300
> # Branch quic
> # Node ID 8dcb9908989401d750b14fe5dccf444a5485c23d
> # Parent  81a3429db8b00ec9fc476d3687d1cd18088f3365
> QUIC: introduced explicit stream states.
>
> This allows to eliminate the usage of stream connection event flags for 
> tracking
> stream state.
>
> diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h
> --- a/src/event/quic/ngx_event_quic.h
> +++ b/src/event/quic/ngx_event_quic.h
> @@ -28,6 +28,26 @@
>  #define NGX_QUIC_STREAM_UNIDIRECTIONAL   0x02
>
>
> +typedef enum {
> +NGX_QUIC_STREAM_SEND_READY = 0,
> +NGX_QUIC_STREAM_SEND_SEND,
> +NGX_QUIC_STREAM_SEND_DATA_SENT,
> +NGX_QUIC_STREAM_SEND_DATA_RECVD,
> +NGX_QUIC_STREAM_SEND_RESET_SENT,
> +NGX_QUIC_STREAM_SEND_RESET_RECVD
> +} ngx_quic_stream_send_state_e;
> +
> +
> +typedef enum {
> +NGX_QUIC_STREAM_RECV_RECV = 0,
> +NGX_QUIC_STREAM_RECV_SIZE_KNOWN,
> +NGX_QUIC_STREAM_RECV_DATA_RECVD,
> +NGX_QUIC_STREAM_RECV_DATA_READ,
> +NGX_QUIC_STREAM_RECV_RESET_RECVD,
> +NGX_QUIC_STREAM_RECV_RESET_READ
> +} ngx_quic_stream_recv_state_e;
> +
> +
>  typedef struct {
>  ngx_ssl_t *ssl;
>
> @@ -66,6 +86,8 @@ struct ngx_quic_stream_s {
>  ngx_chain_t   *in;
>  ngx_chain_t   *out;
>  ngx_uint_t cancelable;  /* unsigned  cancelable:1; */
> +ngx_quic_stream_send_state_e  send_state;
> +ngx_quic_stream_recv_state_e  recv_state;
>  };

let's fix this little style incosistency in a separate patch by moving
all struct stuff to the right.

[..]

> @@ -780,8 +764,23 @@ ngx_quic_stream_recv(ngx_connection_t *c
>
>  ngx_quic_free_chain(pc, in);
>
> -if (qs->in == NULL) {
> -rev->ready = rev->pending_eof;
> +if (len == 0) {

this also covers the case when ngx_quic_stream_recv() is called
with zero-length buffer. Not sure what semantic should be implemented.

man 2 read says:

If count is zero, read() may detect the errors described below.  In
the absence of any errors, or if read() does not check for errors, a
read()  with a count of 0 returns zero and has no other effects.

i.e. if we have data in buffer, but we are called with zero, we should
not change state probably and handle this case separately.

> +rev->ready = 0;
> +
> +if (qs->recv_state == NGX_QUIC_STREAM_RECV_SIZE_KNOWN
> +&& qs->recv_offset == qs->final_size)
> +{
> +qs->recv_state = NGX_QUIC_STREAM_RECV_DATA_READ;
> +}
> +
> +if (qs->recv_state == NGX_QUIC_STREAM_RECV_DATA_READ) {
> +rev->eof = 1;
> +return 0;
> +}
> +
> +ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
> +   "quic stream id:0x%xL recv() not ready", qs->id);
> +return NGX_AGAIN;
>  }
>
>  ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0,


side note:  while looking at state transitions, i've realized we never send
STREAM_DATA_BLOCKED and DATA_BLOCKED.

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

2022-01-30 Thread Vladimir Homutov
On Fri, Jan 28, 2022 at 02:09:31PM +, Gao,Yan(媒体云) wrote:
> > c->quic is never set on main connection (it is not really needed there).
> > ngx_http_v3_init() is first called with main connection, and later it is
> > called with _another_ connection that is a stream, and it has c->quic set.
>
> > ngx_ssl_shutdown() is not supposed to do something on stream
> > connections, ssl object is shared with main connection. all necessary
> > cleanup will be done by main connection handlers.
>
> ngx_http_v3_init() is only called in ngx_http_init_connection, as ls->handler.
> And then ngx_quic_listen add the main quic connection to udp rbtree.
> It call main quic connection read->handler If find connection in
> ngx_lookup_udp_connection, else call ls->handler.

> But when ngx_http_v3_init() is called by _another_ connection that is a 
> stream?

the ngx_http_v3_init() may be called with either main or stream quic
connection.

for main connection c->quic is NULL, and ngx_quic_run() is invoked,
after that it returns.

if c->quic is set, then ngx_http_v3_init() proceeds further, and
initializes HTTP/3 stream and proceeds to processing requests.

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

2022-01-28 Thread Vladimir Homutov
On Fri, Jan 28, 2022 at 03:29:06AM +, Gao,Yan(媒体云) wrote:
> > first time you get there with main nginx connection, when a first QUIC
> > packet arrives. Thus test c->quic. and if it is NULL it means we need
> > to create main quic connection and proceed with the handshake.
>
> > When the handshake is complete, a stream will be created, and the
> > ngx_quic_init_stream_handler() will be called which will invoke
> > listening handler, and we will return into ngx_http_v3_init() with
> > stream connection that has c->quic set and follow the other path.
>
> Yes, I understand. But what you said, as stream connection that has c->quic 
> set, when main nginx connection c->quic set?
> ngx_ssl_shutdown and ngx_http_v3_init check c->quic == NULL, but it is never 
> set.
> No problem?

c->quic is never set on main connection (it is not really needed there).
ngx_http_v3_init() is first called with main connection, and later it is
called with _another_ connection that is a stream, and it has c->quic set.

ngx_ssl_shutdown() is not supposed to do something on stream
connections, ssl object is shared with main connection. all necessary
cleanup will be done by main connection handlers.
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

2022-01-27 Thread Vladimir Homutov
On Thu, Jan 27, 2022 at 04:33:08AM +, Gao,Yan(媒体云) wrote:
> > The main quic connection is created in ngx_quic_new_connection(), which
> > calls ngx_quic_open_sockets() and it sets c->udp for the first time.
>
> > When packet arrives, c->udp is updated by ngx_lookup_udp_connection().
>
> > The main connection does not have c->quic set; this is used in stream
> > connections. To access main connection from quic stream, c->quic->parent
> > may be used.
>
> ngx_event_recvmsg->(ls->handler) ngx_http_init_connection->ngx_http_v3_init:
> if (c->quic == NULL) {
> h3scf->quic.timeout = clcf->keepalive_timeout;
> ngx_quic_run(c, >quic);
> return;
> }
>
> And, why check c->quic == NULL, as it is never set

first time you get there with main nginx connection, when a first QUIC
packet arrives. Thus test c->quic. and if it is NULL it means we need
to create main quic connection and proceed with the handshake.

When the handshake is complete, a stream will be created, and the
ngx_quic_init_stream_handler() will be called which will invoke
listening handler, and we will return into ngx_http_v3_init() with
stream connection that has c->quic set and follow the other path.
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


[nginx] Core: added autotest for UDP segmentation offloading.

2022-01-27 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/c0a432c0301b
branches:  
changeset: 8004:c0a432c0301b
user:  Vladimir Homutov 
date:  Wed Jan 26 20:40:00 2022 +0300
description:
Core: added autotest for UDP segmentation offloading.

diffstat:

 auto/os/linux  |  16 
 src/os/unix/ngx_linux_config.h |   4 
 2 files changed, 20 insertions(+), 0 deletions(-)

diffs (38 lines):

diff -r 0f6cc8f73744 -r c0a432c0301b auto/os/linux
--- a/auto/os/linux Tue Jan 25 15:48:58 2022 +0300
+++ b/auto/os/linux Wed Jan 26 20:40:00 2022 +0300
@@ -232,4 +232,20 @@ ngx_feature_test="struct crypt_data  cd;
 ngx_include="sys/vfs.h"; . auto/include
 
 
+# UDP segmentation offloading
+
+ngx_feature="UDP_SEGMENT"
+ngx_feature_name="NGX_HAVE_UDP_SEGMENT"
+ngx_feature_run=no
+ngx_feature_incs="#include 
+  #include 
+  #include "
+ngx_feature_path=
+ngx_feature_libs=
+ngx_feature_test="socklen_t optlen = sizeof(int);
+  int val;
+  getsockopt(0, SOL_UDP, UDP_SEGMENT, , )"
+. auto/feature
+
+
 CC_AUX_FLAGS="$cc_aux_flags -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64"
diff -r 0f6cc8f73744 -r c0a432c0301b src/os/unix/ngx_linux_config.h
--- a/src/os/unix/ngx_linux_config.hTue Jan 25 15:48:58 2022 +0300
+++ b/src/os/unix/ngx_linux_config.hWed Jan 26 20:40:00 2022 +0300
@@ -103,6 +103,10 @@ typedef struct iocb  ngx_aiocb_t;
 #include 
 #endif
 
+#if (NGX_HAVE_UDP_SEGMENT)
+#include 
+#endif
+
 
 #define NGX_LISTEN_BACKLOG511
 
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


[nginx] Core: added function for local source address cmsg.

2022-01-27 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/0f6cc8f73744
branches:  
changeset: 8003:0f6cc8f73744
user:  Vladimir Homutov 
date:  Tue Jan 25 15:48:58 2022 +0300
description:
Core: added function for local source address cmsg.

diffstat:

 src/event/ngx_event_udp.c   |  92 
 src/event/ngx_event_udp.h   |   2 +
 src/os/unix/ngx_udp_sendmsg_chain.c |  65 ++
 3 files changed, 77 insertions(+), 82 deletions(-)

diffs (221 lines):

diff -r cfe1284e5d1d -r 0f6cc8f73744 src/event/ngx_event_udp.c
--- a/src/event/ngx_event_udp.c Tue Jan 25 15:48:56 2022 +0300
+++ b/src/event/ngx_event_udp.c Tue Jan 25 15:48:58 2022 +0300
@@ -46,18 +46,8 @@ ngx_event_recvmsg(ngx_event_t *ev)
 ngx_connection_t  *c, *lc;
 static u_char  buffer[65535];
 
-#if (NGX_HAVE_MSGHDR_MSG_CONTROL)
-
-#if (NGX_HAVE_IP_RECVDSTADDR)
-u_char msg_control[CMSG_SPACE(sizeof(struct in_addr))];
-#elif (NGX_HAVE_IP_PKTINFO)
-u_char msg_control[CMSG_SPACE(sizeof(struct in_pktinfo))];
-#endif
-
-#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO)
-u_char msg_control6[CMSG_SPACE(sizeof(struct in6_pktinfo))];
-#endif
-
+#if (NGX_HAVE_ADDRINFO_CMSG)
+u_char msg_control[CMSG_SPACE(sizeof(ngx_addrinfo_t))];
 #endif
 
 if (ev->timedout) {
@@ -92,25 +82,13 @@ ngx_event_recvmsg(ngx_event_t *ev)
 msg.msg_iov = iov;
 msg.msg_iovlen = 1;
 
-#if (NGX_HAVE_MSGHDR_MSG_CONTROL)
-
+#if (NGX_HAVE_ADDRINFO_CMSG)
 if (ls->wildcard) {
+msg.msg_control = _control;
+msg.msg_controllen = sizeof(msg_control);
 
-#if (NGX_HAVE_IP_RECVDSTADDR || NGX_HAVE_IP_PKTINFO)
-if (ls->sockaddr->sa_family == AF_INET) {
-msg.msg_control = _control;
-msg.msg_controllen = sizeof(msg_control);
-}
-#endif
-
-#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO)
-if (ls->sockaddr->sa_family == AF_INET6) {
-msg.msg_control = _control6;
-msg.msg_controllen = sizeof(msg_control6);
-}
-#endif
-}
-
+ngx_memzero(_control, sizeof(msg_control));
+   }
 #endif
 
 n = recvmsg(lc->fd, , 0);
@@ -129,7 +107,7 @@ ngx_event_recvmsg(ngx_event_t *ev)
 return;
 }
 
-#if (NGX_HAVE_MSGHDR_MSG_CONTROL)
+#if (NGX_HAVE_ADDRINFO_CMSG)
 if (msg.msg_flags & (MSG_TRUNC|MSG_CTRUNC)) {
 ngx_log_error(NGX_LOG_ALERT, ev->log, 0,
   "recvmsg() truncated data");
@@ -159,7 +137,7 @@ ngx_event_recvmsg(ngx_event_t *ev)
 local_sockaddr = ls->sockaddr;
 local_socklen = ls->socklen;
 
-#if (NGX_HAVE_MSGHDR_MSG_CONTROL)
+#if (NGX_HAVE_ADDRINFO_CMSG)
 
 if (ls->wildcard) {
 struct cmsghdr  *cmsg;
@@ -171,59 +149,9 @@ ngx_event_recvmsg(ngx_event_t *ev)
  cmsg != NULL;
  cmsg = CMSG_NXTHDR(, cmsg))
 {
-
-#if (NGX_HAVE_IP_RECVDSTADDR)
-
-if (cmsg->cmsg_level == IPPROTO_IP
-&& cmsg->cmsg_type == IP_RECVDSTADDR
-&& local_sockaddr->sa_family == AF_INET)
-{
-struct in_addr  *addr;
-struct sockaddr_in  *sin;
-
-addr = (struct in_addr *) CMSG_DATA(cmsg);
-sin = (struct sockaddr_in *) local_sockaddr;
-sin->sin_addr = *addr;
-
+if (ngx_get_srcaddr_cmsg(cmsg, local_sockaddr) == NGX_OK) {
 break;
 }
-
-#elif (NGX_HAVE_IP_PKTINFO)
-
-if (cmsg->cmsg_level == IPPROTO_IP
-&& cmsg->cmsg_type == IP_PKTINFO
-&& local_sockaddr->sa_family == AF_INET)
-{
-struct in_pktinfo   *pkt;
-struct sockaddr_in  *sin;
-
-pkt = (struct in_pktinfo *) CMSG_DATA(cmsg);
-sin = (struct sockaddr_in *) local_sockaddr;
-sin->sin_addr = pkt->ipi_addr;
-
-break;
-}
-
-#endif
-
-#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO)
-
-if (cmsg->cmsg_level == IPPROTO_IPV6
-&& cmsg->cmsg_type == IPV6_PKTINFO
-&& local_sockaddr->sa_family == AF_INET6)
-{
-struct in6_pktinfo   *pkt6;
-struct sockaddr_in6  *sin6;
-
-pkt6 = (struct in6_pktinfo *) CMSG_DATA(cmsg);
-sin6 = (struct sockaddr_in6 *) local_sockaddr;
-sin6->sin6_addr = pkt6->ipi6_addr;
-
-break;
-}
-
-#endif
-
 }
 }
 
diff -r cfe1284e5d1d 

[nginx] Core: made the ngx_sendmsg() function non-static.

2022-01-27 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/cfe1284e5d1d
branches:  
changeset: 8002:cfe1284e5d1d
user:  Vladimir Homutov 
date:  Tue Jan 25 15:48:56 2022 +0300
description:
Core: made the ngx_sendmsg() function non-static.

The NGX_HAVE_ADDRINFO_CMSG macro is defined when at least one of methods
to deal with corresponding control message is available.

diffstat:

 src/event/ngx_event_udp.h   |   32 ++
 src/os/unix/ngx_udp_sendmsg_chain.c |  169 ---
 2 files changed, 129 insertions(+), 72 deletions(-)

diffs (279 lines):

diff -r 8206ecdcd837 -r cfe1284e5d1d src/event/ngx_event_udp.h
--- a/src/event/ngx_event_udp.h Tue Jan 25 15:41:48 2022 +0300
+++ b/src/event/ngx_event_udp.h Tue Jan 25 15:48:56 2022 +0300
@@ -13,7 +13,39 @@
 
 
 #if !(NGX_WIN32)
+
+#if ((NGX_HAVE_MSGHDR_MSG_CONTROL)\
+ && (NGX_HAVE_IP_SENDSRCADDR || NGX_HAVE_IP_RECVDSTADDR   \
+ || NGX_HAVE_IP_PKTINFO   \
+ || (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO)))
+#define NGX_HAVE_ADDRINFO_CMSG  1
+
+#endif
+
+
+#if (NGX_HAVE_ADDRINFO_CMSG)
+
+typedef union {
+#if (NGX_HAVE_IP_SENDSRCADDR || NGX_HAVE_IP_RECVDSTADDR)
+struct in_addraddr;
+#endif
+
+#if (NGX_HAVE_IP_PKTINFO)
+struct in_pktinfo pkt;
+#endif
+
+#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO)
+struct in6_pktinfopkt6;
+#endif
+} ngx_addrinfo_t;
+
+size_t ngx_set_srcaddr_cmsg(struct cmsghdr *cmsg,
+struct sockaddr *local_sockaddr);
+
+#endif
+
 void ngx_event_recvmsg(ngx_event_t *ev);
+ssize_t ngx_sendmsg(ngx_connection_t *c, struct msghdr *msg, int flags);
 void ngx_udp_rbtree_insert_value(ngx_rbtree_node_t *temp,
 ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel);
 #endif
diff -r 8206ecdcd837 -r cfe1284e5d1d src/os/unix/ngx_udp_sendmsg_chain.c
--- a/src/os/unix/ngx_udp_sendmsg_chain.c   Tue Jan 25 15:41:48 2022 +0300
+++ b/src/os/unix/ngx_udp_sendmsg_chain.c   Tue Jan 25 15:48:56 2022 +0300
@@ -12,7 +12,7 @@
 
 static ngx_chain_t *ngx_udp_output_chain_to_iovec(ngx_iovec_t *vec,
 ngx_chain_t *in, ngx_log_t *log);
-static ssize_t ngx_sendmsg(ngx_connection_t *c, ngx_iovec_t *vec);
+static ssize_t ngx_sendmsg_vec(ngx_connection_t *c, ngx_iovec_t *vec);
 
 
 ngx_chain_t *
@@ -88,7 +88,7 @@ ngx_udp_unix_sendmsg_chain(ngx_connectio
 
 send += vec.size;
 
-n = ngx_sendmsg(c, );
+n = ngx_sendmsg_vec(c, );
 
 if (n == NGX_ERROR) {
 return NGX_CHAIN_ERROR;
@@ -204,24 +204,13 @@ ngx_udp_output_chain_to_iovec(ngx_iovec_
 
 
 static ssize_t
-ngx_sendmsg(ngx_connection_t *c, ngx_iovec_t *vec)
+ngx_sendmsg_vec(ngx_connection_t *c, ngx_iovec_t *vec)
 {
-ssize_tn;
-ngx_err_t  err;
-struct msghdr  msg;
-
-#if (NGX_HAVE_MSGHDR_MSG_CONTROL)
+struct msghdrmsg;
 
-#if (NGX_HAVE_IP_SENDSRCADDR)
-u_char msg_control[CMSG_SPACE(sizeof(struct in_addr))];
-#elif (NGX_HAVE_IP_PKTINFO)
-u_char msg_control[CMSG_SPACE(sizeof(struct in_pktinfo))];
-#endif
-
-#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO)
-u_char msg_control6[CMSG_SPACE(sizeof(struct in6_pktinfo))];
-#endif
-
+#if (NGX_HAVE_ADDRINFO_CMSG)
+struct cmsghdr  *cmsg;
+u_char   msg_control[CMSG_SPACE(sizeof(ngx_addrinfo_t))];
 #endif
 
 ngx_memzero(, sizeof(struct msghdr));
@@ -234,88 +223,115 @@ ngx_sendmsg(ngx_connection_t *c, ngx_iov
 msg.msg_iov = vec->iovs;
 msg.msg_iovlen = vec->count;
 
-#if (NGX_HAVE_MSGHDR_MSG_CONTROL)
+#if (NGX_HAVE_ADDRINFO_CMSG)
+if (c->listening && c->listening->wildcard && c->local_sockaddr) {
+
+msg.msg_control = msg_control;
+msg.msg_controllen = sizeof(msg_control);
+ngx_memzero(msg_control, sizeof(msg_control));
+
+cmsg = CMSG_FIRSTHDR();
+
+msg.msg_controllen = ngx_set_srcaddr_cmsg(cmsg, c->local_sockaddr);
+}
+#endif
+
+return ngx_sendmsg(c, , 0);
+}
+
+
+#if (NGX_HAVE_ADDRINFO_CMSG)
 
-if (c->listening && c->listening->wildcard && c->local_sockaddr) {
+size_t
+ngx_set_srcaddr_cmsg(struct cmsghdr *cmsg, struct sockaddr *local_sockaddr)
+{
+size_tlen;
+#if (NGX_HAVE_IP_SENDSRCADDR)
+struct in_addr   *addr;
+struct sockaddr_in   *sin;
+#elif (NGX_HAVE_IP_PKTINFO)
+struct in_pktinfo*pkt;
+struct sockaddr_in   *sin;
+#endif
+
+#if (NGX_HAVE_INET6 && NGX_HAVE_IPV6_RECVPKTINFO)
+struct in6_pktinfo   *pkt6;
+struct sockaddr_in6  *sin6;
+#endif
+
+
+#if (NGX_HAVE_IP_SENDSRCADDR) || (NGX_HAVE_IP_PKTINFO)
+
+if (local_sockaddr->sa_family == AF_INET) {
+
+cmsg->cmsg_level = IPPROTO_IP;
 
 #if (NGX_HAVE_IP_SENDSRCADDR)
 
-if (c->local_sockaddr->sa_family == AF_INET) {
-struct cmsghdr  *cmsg;
-  

[nginx] Core: the ngx_event_udp.h header file.

2022-01-27 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/8206ecdcd837
branches:  
changeset: 8001:8206ecdcd837
user:  Vladimir Homutov 
date:  Tue Jan 25 15:41:48 2022 +0300
description:
Core: the ngx_event_udp.h header file.

diffstat:

 auto/sources  |   3 ++-
 src/event/ngx_event.h |   7 +--
 src/event/ngx_event_udp.h |  24 
 3 files changed, 27 insertions(+), 7 deletions(-)

diffs (65 lines):

diff -r 60b8f529db13 -r 8206ecdcd837 auto/sources
--- a/auto/sources  Thu Jan 27 13:44:09 2022 +0300
+++ b/auto/sources  Tue Jan 25 15:41:48 2022 +0300
@@ -89,7 +89,8 @@ EVENT_DEPS="src/event/ngx_event.h \
 src/event/ngx_event_timer.h \
 src/event/ngx_event_posted.h \
 src/event/ngx_event_connect.h \
-src/event/ngx_event_pipe.h"
+src/event/ngx_event_pipe.h \
+src/event/ngx_event_udp.h"
 
 EVENT_SRCS="src/event/ngx_event.c \
 src/event/ngx_event_timer.c \
diff -r 60b8f529db13 -r 8206ecdcd837 src/event/ngx_event.h
--- a/src/event/ngx_event.h Thu Jan 27 13:44:09 2022 +0300
+++ b/src/event/ngx_event.h Tue Jan 25 15:41:48 2022 +0300
@@ -494,12 +494,6 @@ extern ngx_module_t   ngx_event_
 
 
 void ngx_event_accept(ngx_event_t *ev);
-#if !(NGX_WIN32)
-void ngx_event_recvmsg(ngx_event_t *ev);
-void ngx_udp_rbtree_insert_value(ngx_rbtree_node_t *temp,
-ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel);
-#endif
-void ngx_delete_udp_connection(void *data);
 ngx_int_t ngx_trylock_accept_mutex(ngx_cycle_t *cycle);
 ngx_int_t ngx_enable_accept_events(ngx_cycle_t *cycle);
 u_char *ngx_accept_log_error(ngx_log_t *log, u_char *buf, size_t len);
@@ -529,6 +523,7 @@ ngx_int_t ngx_send_lowat(ngx_connection_
 
 #include 
 #include 
+#include 
 
 #if (NGX_WIN32)
 #include 
diff -r 60b8f529db13 -r 8206ecdcd837 src/event/ngx_event_udp.h
--- /dev/null   Thu Jan 01 00:00:00 1970 +
+++ b/src/event/ngx_event_udp.h Tue Jan 25 15:41:48 2022 +0300
@@ -0,0 +1,24 @@
+
+/*
+ * Copyright (C) Nginx, Inc.
+ */
+
+
+#ifndef _NGX_EVENT_UDP_H_INCLUDED_
+#define _NGX_EVENT_UDP_H_INCLUDED_
+
+
+#include 
+#include 
+
+
+#if !(NGX_WIN32)
+void ngx_event_recvmsg(ngx_event_t *ev);
+void ngx_udp_rbtree_insert_value(ngx_rbtree_node_t *temp,
+ngx_rbtree_node_t *node, ngx_rbtree_node_t *sentinel);
+#endif
+
+void ngx_delete_udp_connection(void *data);
+
+
+#endif /* _NGX_EVENT_UDP_H_INCLUDED_ */
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


[nginx] Version bump.

2022-01-27 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/60b8f529db13
branches:  
changeset: 8000:60b8f529db13
user:  Vladimir Homutov 
date:  Thu Jan 27 13:44:09 2022 +0300
description:
Version bump.

diffstat:

 src/core/nginx.h |  4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diffs (14 lines):

diff -r 56ead48cfe88 -r 60b8f529db13 src/core/nginx.h
--- a/src/core/nginx.h  Tue Jan 25 18:03:52 2022 +0300
+++ b/src/core/nginx.h  Thu Jan 27 13:44:09 2022 +0300
@@ -9,8 +9,8 @@
 #define _NGINX_H_INCLUDED_
 
 
-#define nginx_version  1021006
-#define NGINX_VERSION  "1.21.6"
+#define nginx_version  1021007
+#define NGINX_VERSION  "1.21.7"
 #define NGINX_VER  "nginx/" NGINX_VERSION
 
 #ifdef NGX_BUILD
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

2022-01-26 Thread Vladimir Homutov
On Wed, Jan 26, 2022 at 10:00:06AM +, Gao,Yan(媒体云) wrote:
> > the case you are describing is not what see in backtrace. And in
> > described case connection is main quic connection which has process
> > c->quic pointer set.
>
> I only find sc->quic = qs; in ngx_quic_create_stream,and this is stream 
> connection, not the main quic connection.
> How the main quic connection c->quic set?

The main quic connection is created in ngx_quic_new_connection(), which
calls ngx_quic_open_sockets() and it sets c->udp for the first time.

When packet arrives, c->udp is updated by ngx_lookup_udp_connection().

The main connection does not have c->quic set; this is used in stream
connections. To access main connection from quic stream, c->quic->parent
may be used.

>
> And the local code at this position:
> changeset:   8813:c37ea624c307
> branch:  quic
> tag: tip
> user:Roman Arutyunyan 
> date:Fri Jan 21 11:20:18 2022 +0300
> summary: QUIC: changed debug message.

can you confirm that the problem occured using this code and no other
patches? In any case, it would be useful to enable debug and get
debug log or at least reproduce on a binary without optimization to get
meaninigful backtrace.
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

2022-01-26 Thread Vladimir Homutov
On Wed, Jan 26, 2022 at 06:38:13AM +, Gao,Yan(媒体云) wrote:
> Why sc->type = SOCK_STREAM in ngx_quic_create_stream? Should it be SOCK_DGRAM?

no, SOCK_STREAM is a correct setting for the quic streams.
SOCK_DGRAM is only used for main quic connection which actually handles
UDP datagrams and deals with QUIC protocol. Streams is an abstract layer
that utilizes ngx_connection_t with custom event handling.

> I guess the problem function call chain: final_early_data(openssl)->
> quic_set_encryption_secrets-> ngx_quic_set_encryption_secrets ->
> ngx_quic_init_streams -> ngx_ssl_ocsp_validate-> ngx_handle_read_event

> But this connection->quic would always be null, and cannot jump to
> quic if branch in ngx_handle_read_event

the case you are describing is not what see in backtrace. And in
described case connection is main quic connection which has process
c->quic pointer set.

> >  Thank you for report!
> >  Can you please enable debug and provide debug log?
>
> Sorry, this is a very rare case, and do not know how to trigger this bug 
> steadily
> here is more data from the stack

ok, what exactly code revision are you running ? Line numbers (if
correct) guess that it's something quite different from the current.

normally, you only see c->udp->dgram = NULL only in packets that were
not dispatched by dcid to any existing connection, and the handler is
ngx_quic_run().

If packet goes to known connection, c->udp->dgram is initialized and the
handler is ngx_quic_input_handler().

Hope this helps.

> p *c
> $1 = {data = 0x7efd695c74c0, read = 0xf2aa990, write = 0xfa72ca0, fd = 5547, 
> recv = 0x4a7c9a , send = 0x4ab5b9 , 
> recv_chain = 0x0,
>   send_chain = 0x4ab7a7 , listening = 0x29cf140, 
> sent = 0, log = 0x7efd695c73f0, pool = 0x7efd695c7330, type = 2, sockaddr = 
> 0x7efd695c7380, socklen = 16,
>   addr_text = {len = 15, data = 0x7efd695c74b0 "123.101.125.168.H\270(\v"}, 
> proxy_protocol = 0x0, quic = 0x0, ssl = 0x1e491e8, udp = 0x1e49150, 
> local_sockaddr = 0x7efd695c7440, local_socklen = 16,
>   buffer = 0x7efd695c7450, queue = {prev = 0x0, next = 0x0}, number = 
> 433923428, start_time = 3194843312, requests = 0, buffered = 0, log_error = 
> 2, timedout = 0, error = 0,
>   destroyed = 0, idle = 0, reusable = 0, close = 0, shared = 1, sendfile = 0, 
> sndlowat = 0, tcp_nodelay = 0, tcp_nopush = 0, need_last_buf = 0}
>
> p *c->ssl
> $2 = {connection = 0x7efd708fdb00, session_ctx = 0x7efd69052970, last = 0, 
> buf = 0x0, buffer_size = 16384,
>   handler = 0x0, session = 0x0, save_session = 0x0, saved_read_handler = 0x0, 
> saved_write_handler = 0x0, ocsp = 0x0, early_buf = 0 '\000', handshaked = 0, 
> handshake_rejected = 0, renegotiation = 0,
>   buffer = 1, sendfile = 0, no_wait_shutdown = 1, no_send_shutdown = 0, 
> shutdown_without_free = 0, handshake_buffer_set = 0, try_early_data = 0, 
> in_early = 0, in_ocsp = 0, early_preread = 0, write_blocked = 0}
>
> And you can see it happened before handshaked
>
> Gao,Yan(ACG VCP)

> ___
> nginx-devel mailing list -- nginx-devel@nginx.org
> To unsubscribe send an email to nginx-devel-le...@nginx.org

___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [quic] ngx_quic_input_handler Segmentation fault because c->udp->dgram is null

2022-01-25 Thread Vladimir Homutov

On 1/25/22 13:05, Gao,Yan(媒体云) wrote:

loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic 
unknown transport param id:0x20, skipped while SSL handshaking, client: 
223.90.188.154, server: 0.0.0.0:7232
loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic 
unknown transport param id:0x3127, skipped while SSL handshaking, client: 
223.90.188.154, server: 0.0.0.0:7232
loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic 
unknown transport param id:0x4752, skipped while SSL handshaking, client: 
223.90.188.154, server: 0.0.0.0:7232
loggingHost:kfcm-jorcol-82 2022/01/24 09:09:13 [info] 2224#0: *1007558770 quic 
reserved transport param id:0x3a86dd60d110621a, skipped while SSL handshaking, 
client: 223.90.188.154, server: 0.0.0.0:7232

Gao,Yan(ACG VCP)

在 2022/1/25 下午5:20,“Gao,Yan(媒体云)” 写入:

 Program terminated with signal SIGSEGV, Segmentation fault.
 #0  0x004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at 
src/event/quic/ngx_event_quic.c:497
 497src/event/quic/ngx_event_quic.c: No such file or directory.
 (gdb) bt
 #0  0x004bc3f9 in ngx_quic_input_handler (rev=0x2a119170) at 
src/event/quic/ngx_event_quic.c:497
 #1  0x004b011e in ngx_epoll_process_events (cycle=0x17011ab0, 
timer=, flags=) at 
src/event/modules/ngx_epoll_module.c:928
 #2  0x004a6ab1 in ngx_process_events_and_timers 
(cycle=cycle@entry=0x17011ab0) at src/event/ngx_event.c:262
 #3  0x004ae487 in ngx_worker_process_cycle (cycle=0x17011ab0, 
data=) at src/os/unix/ngx_process_cycle.c:727
 #4  0x004acc01 in ngx_spawn_process (cycle=cycle@entry=0x17011ab0, 
proc=proc@entry=0x4ae397 , data=data@entry=0x3, 
name=name@entry=0x9386ee "worker process",
 respawn=respawn@entry=-4) at src/os/unix/ngx_process.c:199
 #5  0x004ad723 in ngx_start_worker_processes 
(cycle=cycle@entry=0x17011ab0, n=16, type=type@entry=-4) at 
src/os/unix/ngx_process_cycle.c:350
 #6  0x004aefc0 in ngx_master_process_cycle (cycle=0x17011ab0, 
cycle@entry=0x289e7a0) at src/os/unix/ngx_process_cycle.c:235
 #7  0x004878e8 in main (argc=3, argv=) at 
src/core/nginx.c:397
 (gdb) p c->udp->dgram
 $1 = (ngx_udp_dgram_t *) 0x0

 Gao,Yan(ACG VCP)


Thank you for report!
Can you please enable debug and provide debug log?
___
nginx-devel mailing list -- nginx-devel@nginx.org
To unsubscribe send an email to nginx-devel-le...@nginx.org


Re: [PATCH 3 of 3] QUIC: stream recv shutdown support

2021-12-13 Thread Vladimir Homutov
On Mon, Dec 13, 2021 at 03:03:58PM +0300, Roman Arutyunyan wrote:
> On Fri, Dec 10, 2021 at 10:38:00AM +0300, Vladimir Homutov wrote:
> > On Fri, Nov 26, 2021 at 04:11:33PM +0300, Roman Arutyunyan wrote:
> > > On Thu, Nov 25, 2021 at 05:20:51PM +0300, Roman Arutyunyan wrote:
> > > > # HG changeset patch
> > > > # User Roman Arutyunyan 
> > > > # Date 1637695967 -10800
> > > > #  Tue Nov 23 22:32:47 2021 +0300
> > > > # Branch quic
> > > > # Node ID e1de02d829f7f85b1e2e6b289ec4c20318712321
> > > > # Parent  3d2354bfa1a2a257b9f73772ad0836585be85a6c
> > > > QUIC: stream recv shutdown support.
> > > >
> > > > Recv shutdown sends STOP_SENDING to client.  Both send and recv shutdown
> > > > functions are now called from stream cleanup handler.  While here, 
> > > > setting
> > > > c->read->pending_eof is moved down to fix recv shutdown in the cleanup 
> > > > handler.
> > >
> > > This definitely needs some improvement.  Now it's two patches.
> >
> > I suggest merging both into one (also, second needs rebasing)
>
> OK let's merge them.
>
> > > [..]
> > >
> > > --
> > > Roman Arutyunyan
> >
> > > # HG changeset patch
> > > # User Roman Arutyunyan 
> > > # Date 1637931593 -10800
> > > #  Fri Nov 26 15:59:53 2021 +0300
> > > # Branch quic
> > > # Node ID c2fa3e7689a4e286f45ccbac2288ade5966273b8
> > > # Parent  3d2354bfa1a2a257b9f73772ad0836585be85a6c
> > > QUIC: do not shutdown write part of a client uni stream.
> > >
> > > diff --git a/src/event/quic/ngx_event_quic_streams.c 
> > > b/src/event/quic/ngx_event_quic_streams.c
> > > --- a/src/event/quic/ngx_event_quic_streams.c
> > > +++ b/src/event/quic/ngx_event_quic_streams.c
> > > @@ -267,13 +267,20 @@ ngx_quic_shutdown_stream(ngx_connection_
> > >  return NGX_OK;
> > >  }
> > >
> > > +qs = c->quic;
> > > +
> > > +if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) == 0
> > > +&& (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL))
> > > +{
> > > +return NGX_OK;
> > > +}
> > > +
> > >  wev = c->write;
> > >
> > >  if (wev->error) {
> > >  return NGX_OK;
> > >  }
> > >
> > > -qs = c->quic;
> > >  pc = qs->parent;
> > >  qc = ngx_quic_get_connection(pc);
> > >
> >
> > this one looks good
> >
> >
> > > # HG changeset patch
> > > # User Roman Arutyunyan 
> > > # Date 1637932014 -10800
> > > #  Fri Nov 26 16:06:54 2021 +0300
> > > # Branch quic
> > > # Node ID ed0cefd9fc434a7593f2f9e4b9a98ce65aaf05e9
> > > # Parent  c2fa3e7689a4e286f45ccbac2288ade5966273b8
> > > QUIC: write and full stream shutdown support.
> > >
> > > Full stream shutdown is now called from stream cleanup handler instead of
> > > explicitly sending frames.  The call is moved up not to be influenced by
> > > setting c->read->pending_eof, which was erroneously set too early.
> > >
> > > diff --git a/src/event/quic/ngx_event_quic_streams.c 
> > > b/src/event/quic/ngx_event_quic_streams.c
> > > --- a/src/event/quic/ngx_event_quic_streams.c
> > > +++ b/src/event/quic/ngx_event_quic_streams.c
> > > @@ -13,6 +13,8 @@
> > >  #define NGX_QUIC_STREAM_GONE (void *) -1
> > >
> > >
> > > +static ngx_int_t ngx_quic_shutdown_stream_send(ngx_connection_t *c);
> > > +static ngx_int_t ngx_quic_shutdown_stream_recv(ngx_connection_t *c);
> > >  static ngx_quic_stream_t *ngx_quic_get_stream(ngx_connection_t *c, 
> > > uint64_t id);
> > >  static ngx_int_t ngx_quic_reject_stream(ngx_connection_t *c, uint64_t 
> > > id);
> > >  static void ngx_quic_init_stream_handler(ngx_event_t *ev);
> > > @@ -257,16 +259,31 @@ ngx_quic_reset_stream(ngx_connection_t *
> > >  ngx_int_t
> > >  ngx_quic_shutdown_stream(ngx_connection_t *c, int how)
> > >  {
> > > +if (how == NGX_RW_SHUTDOWN || how == NGX_WRITE_SHUTDOWN) {
> > > +if (ngx_quic_shutdown_stream_send(c) != NGX_OK) {
> > > +return NGX_ERROR;
> > > +}
> > > +}
> > > +
> > > +if (how == NGX_RW_SHUTDOWN || how == NGX_READ_SHUTDOWN) {
> > > +if (ngx_quic_s

Re: Congestion control questions

2021-12-09 Thread Vladimir Homutov
On Tue, Dec 07, 2021 at 06:05:48PM +0800, sun edward wrote:
> Hi dev team,
>I have some questions about congestion control,   what's the current
> congestion control algorithm in nginx quic,  is there any way or plan to
> support CUBIC or BBR in nginx quic?
>
> thanks & regards

Currently we have implemented minimalistic congestion control, as
described in RFC 9002 [1].
There are no exact plans for implementing more advanced schemes, but it
is quite obvious that this area needs improvements, so we will have to
do something about it in future.


[1] https://www.rfc-editor.org/rfc/rfc9002.html#name-congestion-control
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH 3 of 3] QUIC: stream recv shutdown support

2021-12-09 Thread Vladimir Homutov
On Fri, Nov 26, 2021 at 04:11:33PM +0300, Roman Arutyunyan wrote:
> On Thu, Nov 25, 2021 at 05:20:51PM +0300, Roman Arutyunyan wrote:
> > # HG changeset patch
> > # User Roman Arutyunyan 
> > # Date 1637695967 -10800
> > #  Tue Nov 23 22:32:47 2021 +0300
> > # Branch quic
> > # Node ID e1de02d829f7f85b1e2e6b289ec4c20318712321
> > # Parent  3d2354bfa1a2a257b9f73772ad0836585be85a6c
> > QUIC: stream recv shutdown support.
> >
> > Recv shutdown sends STOP_SENDING to client.  Both send and recv shutdown
> > functions are now called from stream cleanup handler.  While here, setting
> > c->read->pending_eof is moved down to fix recv shutdown in the cleanup 
> > handler.
>
> This definitely needs some improvement.  Now it's two patches.

I suggest merging both into one (also, second needs rebasing)

>
> [..]
>
> --
> Roman Arutyunyan

> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1637931593 -10800
> #  Fri Nov 26 15:59:53 2021 +0300
> # Branch quic
> # Node ID c2fa3e7689a4e286f45ccbac2288ade5966273b8
> # Parent  3d2354bfa1a2a257b9f73772ad0836585be85a6c
> QUIC: do not shutdown write part of a client uni stream.
>
> diff --git a/src/event/quic/ngx_event_quic_streams.c 
> b/src/event/quic/ngx_event_quic_streams.c
> --- a/src/event/quic/ngx_event_quic_streams.c
> +++ b/src/event/quic/ngx_event_quic_streams.c
> @@ -267,13 +267,20 @@ ngx_quic_shutdown_stream(ngx_connection_
>  return NGX_OK;
>  }
>
> +qs = c->quic;
> +
> +if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) == 0
> +&& (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL))
> +{
> +return NGX_OK;
> +}
> +
>  wev = c->write;
>
>  if (wev->error) {
>  return NGX_OK;
>  }
>
> -qs = c->quic;
>  pc = qs->parent;
>  qc = ngx_quic_get_connection(pc);
>

this one looks good


> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1637932014 -10800
> #  Fri Nov 26 16:06:54 2021 +0300
> # Branch quic
> # Node ID ed0cefd9fc434a7593f2f9e4b9a98ce65aaf05e9
> # Parent  c2fa3e7689a4e286f45ccbac2288ade5966273b8
> QUIC: write and full stream shutdown support.
>
> Full stream shutdown is now called from stream cleanup handler instead of
> explicitly sending frames.  The call is moved up not to be influenced by
> setting c->read->pending_eof, which was erroneously set too early.
>
> diff --git a/src/event/quic/ngx_event_quic_streams.c 
> b/src/event/quic/ngx_event_quic_streams.c
> --- a/src/event/quic/ngx_event_quic_streams.c
> +++ b/src/event/quic/ngx_event_quic_streams.c
> @@ -13,6 +13,8 @@
>  #define NGX_QUIC_STREAM_GONE (void *) -1
>
>
> +static ngx_int_t ngx_quic_shutdown_stream_send(ngx_connection_t *c);
> +static ngx_int_t ngx_quic_shutdown_stream_recv(ngx_connection_t *c);
>  static ngx_quic_stream_t *ngx_quic_get_stream(ngx_connection_t *c, uint64_t 
> id);
>  static ngx_int_t ngx_quic_reject_stream(ngx_connection_t *c, uint64_t id);
>  static void ngx_quic_init_stream_handler(ngx_event_t *ev);
> @@ -257,16 +259,31 @@ ngx_quic_reset_stream(ngx_connection_t *
>  ngx_int_t
>  ngx_quic_shutdown_stream(ngx_connection_t *c, int how)
>  {
> +if (how == NGX_RW_SHUTDOWN || how == NGX_WRITE_SHUTDOWN) {
> +if (ngx_quic_shutdown_stream_send(c) != NGX_OK) {
> +return NGX_ERROR;
> +}
> +}
> +
> +if (how == NGX_RW_SHUTDOWN || how == NGX_READ_SHUTDOWN) {
> +if (ngx_quic_shutdown_stream_recv(c) != NGX_OK) {
> +return NGX_ERROR;
> +}
> +}
> +
> +return NGX_OK;
> +}
> +
> +
> +static ngx_int_t
> +ngx_quic_shutdown_stream_send(ngx_connection_t *c)
> +{
>  ngx_event_t*wev;
>  ngx_connection_t   *pc;
>  ngx_quic_frame_t   *frame;
>  ngx_quic_stream_t  *qs;
>  ngx_quic_connection_t  *qc;
>
> -if (how != NGX_WRITE_SHUTDOWN) {
> -return NGX_OK;
> -}
> -
>  qs = c->quic;
>
>  if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) == 0
> @@ -290,7 +307,7 @@ ngx_quic_shutdown_stream(ngx_connection_
>  }
>
>  ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
> -   "quic stream id:0x%xL shutdown", qs->id);
> +   "quic stream id:0x%xL send shutdown", qs->id);
>
>  frame->level = ssl_encryption_application;
>  frame->type = NGX_QUIC_FT_STREAM;
> @@ -311,6 +328,55 @@ ngx_quic_shutdown_stream(ngx_connection_
>  }
>
>
> +static ngx_int_t
> +ngx_quic_shutdown_stream_recv(ngx_connection_t *c)
> +{
> +ngx_event_t*rev;
> +ngx_connection_t   *pc;
> +ngx_quic_frame_t   *frame;
> +ngx_quic_stream_t  *qs;
> +ngx_quic_connection_t  *qc;
> +
> +qs = c->quic;
> +
> +if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED)
> +&& (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL))
> +{

maybe it's worth trying to move server/client bidi/uni tests into
ngx_quic_shutdown_stream() ? It looks like more natural place to
test which end to shut, and whether we need to do it at all.

> +return 

Re: [PATCH 2 of 2] QUIC: handle DATA_BLOCKED frame from client

2021-11-22 Thread Vladimir Homutov
On Mon, Nov 22, 2021 at 02:54:20PM +0300, Roman Arutyunyan wrote:
> On Mon, Nov 22, 2021 at 11:26:10AM +0300, Vladimir Homutov wrote:
> > On Wed, Nov 17, 2021 at 10:31:02AM +0300, Roman Arutyunyan wrote:
> > > # HG changeset patch
> > > # User Roman Arutyunyan 
> > > # Date 1637086755 -10800
> > > #  Tue Nov 16 21:19:15 2021 +0300
> > > # Branch quic
> > > # Node ID 0fb2613594f6bd8dd8f07a30c69900866b573158
> > > # Parent  4e3a7fc0533192f51a01042a1e9dd2b595881420
> > > QUIC: handle DATA_BLOCKED frame from client.
> > >
> > > Previously the frame was not handled and connection was closed with an 
> > > error.
> > > Now, after receiving this frame, global flow control is updated and new
> > > flow control credit is sent to client.
> > >
> > > diff --git a/src/event/quic/ngx_event_quic.c 
> > > b/src/event/quic/ngx_event_quic.c
> > > --- a/src/event/quic/ngx_event_quic.c
> > > +++ b/src/event/quic/ngx_event_quic.c
> > > @@ -1252,6 +1252,17 @@ ngx_quic_handle_frames(ngx_connection_t
> > >
> > >  break;
> > >
> > > +case NGX_QUIC_FT_DATA_BLOCKED:
> > > +
> > > +if (ngx_quic_handle_data_blocked_frame(c, pkt,
> > > +   _blocked)
> > > +!= NGX_OK)
> > > +{
> > > +return NGX_ERROR;
> > > +}
> > > +
> > > +break;
> > > +
> > >  case NGX_QUIC_FT_STREAM_DATA_BLOCKED:
> > >
> > >  if (ngx_quic_handle_stream_data_blocked_frame(c, pkt,
> > > diff --git a/src/event/quic/ngx_event_quic_streams.c 
> > > b/src/event/quic/ngx_event_quic_streams.c
> > > --- a/src/event/quic/ngx_event_quic_streams.c
> > > +++ b/src/event/quic/ngx_event_quic_streams.c
> > > @@ -32,6 +32,7 @@ static void ngx_quic_stream_cleanup_hand
> > >  static ngx_int_t ngx_quic_control_flow(ngx_connection_t *c, uint64_t 
> > > last);
> > >  static ngx_int_t ngx_quic_update_flow(ngx_connection_t *c, uint64_t 
> > > last);
> > >  static ngx_int_t ngx_quic_update_max_stream_data(ngx_connection_t *c);
> > > +static ngx_int_t ngx_quic_update_max_data(ngx_connection_t *c);
> > >
> > >
> > >  ngx_connection_t *
> > > @@ -1188,6 +1189,14 @@ ngx_quic_handle_streams_blocked_frame(ng
> > >
> > >
> > >  ngx_int_t
> > > +ngx_quic_handle_data_blocked_frame(ngx_connection_t *c,
> > > +ngx_quic_header_t *pkt, ngx_quic_data_blocked_frame_t *f)
> > > +{
> > > +return ngx_quic_update_max_data(c);
> > > +}
> > > +
> > > +
> > > +ngx_int_t
> > >  ngx_quic_handle_stream_data_blocked_frame(ngx_connection_t *c,
> > >  ngx_quic_header_t *pkt, ngx_quic_stream_data_blocked_frame_t *f)
> > >  {
> > > @@ -1544,7 +1553,6 @@ ngx_quic_update_flow(ngx_connection_t *c
> > >  uint64_tlen;
> > >  ngx_event_t*rev;
> > >  ngx_connection_t   *pc;
> > > -ngx_quic_frame_t   *frame;
> > >  ngx_quic_stream_t  *qs;
> > >  ngx_quic_connection_t  *qc;
> > >
> > > @@ -1577,22 +1585,9 @@ ngx_quic_update_flow(ngx_connection_t *c
> > >  if (qc->streams.recv_max_data
> > >  <= qc->streams.recv_offset + qc->streams.recv_window / 2)
> > >  {
> > > -qc->streams.recv_max_data = qc->streams.recv_offset
> > > -+ qc->streams.recv_window;
> > > -
> > > -ngx_log_debug1(NGX_LOG_DEBUG_EVENT, pc->log, 0,
> > > -   "quic flow update md:%uL", 
> > > qc->streams.recv_max_data);
> > > -
> > > -frame = ngx_quic_alloc_frame(pc);
> > > -if (frame == NULL) {
> > > +if (ngx_quic_update_max_data(pc) != NGX_OK) {
> > >  return NGX_ERROR;
> > >  }
> > > -
> > > -frame->level = ssl_encryption_application;
> > > -frame->type = NGX_QUIC_FT_MAX_DATA;
> > > -frame->u.max_data.max_data = qc->streams.recv_max_data;
> > > -
> > > -ngx_quic_queue_frame(qc, frame);
> > >  }
> > >
> > >  return NGX_OK;
> > > @@ -1637,6 +1632,41 @@ ngx_quic_update_max_stream_data(ngx_conn
> > >  }
>

Re: [PATCH 2 of 2] QUIC: handle DATA_BLOCKED frame from client

2021-11-22 Thread Vladimir Homutov
On Wed, Nov 17, 2021 at 10:31:02AM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1637086755 -10800
> #  Tue Nov 16 21:19:15 2021 +0300
> # Branch quic
> # Node ID 0fb2613594f6bd8dd8f07a30c69900866b573158
> # Parent  4e3a7fc0533192f51a01042a1e9dd2b595881420
> QUIC: handle DATA_BLOCKED frame from client.
>
> Previously the frame was not handled and connection was closed with an error.
> Now, after receiving this frame, global flow control is updated and new
> flow control credit is sent to client.
>
> diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c
> --- a/src/event/quic/ngx_event_quic.c
> +++ b/src/event/quic/ngx_event_quic.c
> @@ -1252,6 +1252,17 @@ ngx_quic_handle_frames(ngx_connection_t
>
>  break;
>
> +case NGX_QUIC_FT_DATA_BLOCKED:
> +
> +if (ngx_quic_handle_data_blocked_frame(c, pkt,
> +   _blocked)
> +!= NGX_OK)
> +{
> +return NGX_ERROR;
> +}
> +
> +break;
> +
>  case NGX_QUIC_FT_STREAM_DATA_BLOCKED:
>
>  if (ngx_quic_handle_stream_data_blocked_frame(c, pkt,
> diff --git a/src/event/quic/ngx_event_quic_streams.c 
> b/src/event/quic/ngx_event_quic_streams.c
> --- a/src/event/quic/ngx_event_quic_streams.c
> +++ b/src/event/quic/ngx_event_quic_streams.c
> @@ -32,6 +32,7 @@ static void ngx_quic_stream_cleanup_hand
>  static ngx_int_t ngx_quic_control_flow(ngx_connection_t *c, uint64_t last);
>  static ngx_int_t ngx_quic_update_flow(ngx_connection_t *c, uint64_t last);
>  static ngx_int_t ngx_quic_update_max_stream_data(ngx_connection_t *c);
> +static ngx_int_t ngx_quic_update_max_data(ngx_connection_t *c);
>
>
>  ngx_connection_t *
> @@ -1188,6 +1189,14 @@ ngx_quic_handle_streams_blocked_frame(ng
>
>
>  ngx_int_t
> +ngx_quic_handle_data_blocked_frame(ngx_connection_t *c,
> +ngx_quic_header_t *pkt, ngx_quic_data_blocked_frame_t *f)
> +{
> +return ngx_quic_update_max_data(c);
> +}
> +
> +
> +ngx_int_t
>  ngx_quic_handle_stream_data_blocked_frame(ngx_connection_t *c,
>  ngx_quic_header_t *pkt, ngx_quic_stream_data_blocked_frame_t *f)
>  {
> @@ -1544,7 +1553,6 @@ ngx_quic_update_flow(ngx_connection_t *c
>  uint64_tlen;
>  ngx_event_t*rev;
>  ngx_connection_t   *pc;
> -ngx_quic_frame_t   *frame;
>  ngx_quic_stream_t  *qs;
>  ngx_quic_connection_t  *qc;
>
> @@ -1577,22 +1585,9 @@ ngx_quic_update_flow(ngx_connection_t *c
>  if (qc->streams.recv_max_data
>  <= qc->streams.recv_offset + qc->streams.recv_window / 2)
>  {
> -qc->streams.recv_max_data = qc->streams.recv_offset
> -+ qc->streams.recv_window;
> -
> -ngx_log_debug1(NGX_LOG_DEBUG_EVENT, pc->log, 0,
> -   "quic flow update md:%uL", qc->streams.recv_max_data);
> -
> -frame = ngx_quic_alloc_frame(pc);
> -if (frame == NULL) {
> +if (ngx_quic_update_max_data(pc) != NGX_OK) {
>  return NGX_ERROR;
>  }
> -
> -frame->level = ssl_encryption_application;
> -frame->type = NGX_QUIC_FT_MAX_DATA;
> -frame->u.max_data.max_data = qc->streams.recv_max_data;
> -
> -ngx_quic_queue_frame(qc, frame);
>  }
>
>  return NGX_OK;
> @@ -1637,6 +1632,41 @@ ngx_quic_update_max_stream_data(ngx_conn
>  }
>
>
> +static ngx_int_t
> +ngx_quic_update_max_data(ngx_connection_t *c)
> +{
> +uint64_trecv_max_data;
> +ngx_quic_frame_t   *frame;
> +ngx_quic_connection_t  *qc;
> +
> +qc = ngx_quic_get_connection(c);
> +
> +recv_max_data = qc->streams.recv_offset + qc->streams.recv_window;
> +
> +if (qc->streams.recv_max_data == recv_max_data) {
> +return NGX_OK;
> +}

same question as in previous patch; logic is the same;

> +
> +qc->streams.recv_max_data = recv_max_data;
> +
> +ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
> +   "quic flow update md:%uL", qc->streams.recv_max_data);
> +
> +frame = ngx_quic_alloc_frame(c);

looks like the same issue as in the previous patch - should be pc here

> +if (frame == NULL) {
> +return NGX_ERROR;
> +}
> +
> +frame->level = ssl_encryption_application;
> +frame->type = NGX_QUIC_FT_MAX_DATA;
> +frame->u.max_data.max_data = qc->streams.recv_max_data;
> +
> +ngx_quic_queue_frame(qc, frame);
> +
> +return NGX_OK;
> +}
> +
> +
>  ngx_int_t
>  ngx_quic_handle_read_event(ngx_event_t *rev, ngx_uint_t flags)
>  {
> diff --git a/src/event/quic/ngx_event_quic_streams.h 
> b/src/event/quic/ngx_event_quic_streams.h
> --- a/src/event/quic/ngx_event_quic_streams.h
> +++ b/src/event/quic/ngx_event_quic_streams.h
> @@ -20,6 +20,8 @@ ngx_int_t ngx_quic_handle_max_data_frame
>  ngx_quic_max_data_frame_t *f);
>  ngx_int_t 

Re: [PATCH 1 of 2] QUIC: update stream flow control credit on STREAM_DATA_BLOCKED

2021-11-22 Thread Vladimir Homutov
On Wed, Nov 17, 2021 at 11:17:27AM +0300, Roman Arutyunyan wrote:
> On Wed, Nov 17, 2021 at 10:31:01AM +0300, Roman Arutyunyan wrote:
> > # HG changeset patch
> > # User Roman Arutyunyan 
> > # Date 1637133234 -10800
> > #  Wed Nov 17 10:13:54 2021 +0300
> > # Branch quic
> > # Node ID 4e3a7fc0533192f51a01042a1e9dd2b595881420
> > # Parent  4ad8fc79cb33257c928a9098a87324b350576551
> > QUIC: update stream flow control credit on STREAM_DATA_BLOCKED.
> >
> > Previously, after receiving STREAM_DATA_BLOCKED, current flow control limit
> > was sent to client.  Now, if the limit can be updated to the full window 
> > size,
> > it is updated and the new value is sent to client, otherwise nothing is 
> > sent.
> >
> > The change lets client update flow control credit on demand.  Also, it saves
> > traffic by not sending MAX_STREAM_DATA with the same value twice.
> >
> > diff --git a/src/event/quic/ngx_event_quic_streams.c 
> > b/src/event/quic/ngx_event_quic_streams.c
> > --- a/src/event/quic/ngx_event_quic_streams.c
> > +++ b/src/event/quic/ngx_event_quic_streams.c
> > @@ -31,6 +31,7 @@ static size_t ngx_quic_max_stream_flow(n
> >  static void ngx_quic_stream_cleanup_handler(void *data);
> >  static ngx_int_t ngx_quic_control_flow(ngx_connection_t *c, uint64_t last);
> >  static ngx_int_t ngx_quic_update_flow(ngx_connection_t *c, uint64_t last);
> > +static ngx_int_t ngx_quic_update_max_stream_data(ngx_connection_t *c);
> >
> >
> >  ngx_connection_t *
> > @@ -1190,8 +1191,6 @@ ngx_int_t
> >  ngx_quic_handle_stream_data_blocked_frame(ngx_connection_t *c,
> >  ngx_quic_header_t *pkt, ngx_quic_stream_data_blocked_frame_t *f)
> >  {
> > -uint64_tlimit;
> > -ngx_quic_frame_t   *frame;
> >  ngx_quic_stream_t  *qs;
> >  ngx_quic_connection_t  *qc;
> >
> > @@ -1217,29 +1216,10 @@ ngx_quic_handle_stream_data_blocked_fram
> >  return NGX_OK;
> >  }
> >
> > -limit = qs->recv_max_data;
> > -
> > -if (ngx_quic_init_stream(qs) != NGX_OK) {
> > -return NGX_ERROR;
> > -}
> > -
> > -} else {
> > -limit = qs->recv_max_data;
> > +return ngx_quic_init_stream(qs);
> >  }
> >
> > -frame = ngx_quic_alloc_frame(c);
> > -if (frame == NULL) {
> > -return NGX_ERROR;
> > -}
> > -
> > -frame->level = pkt->level;
> > -frame->type = NGX_QUIC_FT_MAX_STREAM_DATA;
> > -frame->u.max_stream_data.id = f->id;
> > -frame->u.max_stream_data.limit = limit;
> > -
> > -ngx_quic_queue_frame(qc, frame);
> > -
> > -return NGX_OK;
> > +return ngx_quic_update_max_stream_data(qs->connection);
> >  }
> >
> >
> > @@ -1587,22 +1567,9 @@ ngx_quic_update_flow(ngx_connection_t *c
> >  if (!rev->pending_eof && !rev->error
> >  && qs->recv_max_data <= qs->recv_offset + qs->recv_window / 2)
> >  {
> > -qs->recv_max_data = qs->recv_offset + qs->recv_window;
> > -
> > -ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
> > -   "quic flow update msd:%uL", qs->recv_max_data);
> > -
> > -frame = ngx_quic_alloc_frame(pc);
> > -if (frame == NULL) {
> > +if (ngx_quic_update_max_stream_data(c) != NGX_OK) {
> >  return NGX_ERROR;
> >  }
> > -
> > -frame->level = ssl_encryption_application;
> > -frame->type = NGX_QUIC_FT_MAX_STREAM_DATA;
> > -frame->u.max_stream_data.id = qs->id;
> > -frame->u.max_stream_data.limit = qs->recv_max_data;
> > -
> > -ngx_quic_queue_frame(qc, frame);
> >  }
> >
> >  qc->streams.recv_offset += len;
> > @@ -1632,6 +1599,44 @@ ngx_quic_update_flow(ngx_connection_t *c
> >  }
> >
> >
> > +static ngx_int_t
> > +ngx_quic_update_max_stream_data(ngx_connection_t *c)
> > +{
> > +uint64_trecv_max_data;
> > +ngx_quic_frame_t   *frame;
> > +ngx_quic_stream_t  *qs;
> > +ngx_quic_connection_t  *qc;
> > +
> > +qs = c->quic;
> > +qc = ngx_quic_get_connection(qs->parent);
> > +
> > +recv_max_data = qs->recv_offset + qs->recv_window;
> > +
> > +if (qs->recv_max_data == recv_max_data) {

shouldn't it be >= ? (i.e. we want to avoid sending frame if current
window doesn't extend recv_max_data; could qs->recv_window change ?)

> > +return NGX_OK;
> > +}
> > +
> > +qs->recv_max_data = recv_max_data;
> > +
> > +ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
> > +   "quic flow update msd:%uL", qs->recv_max_data);
> > +
> > +frame = ngx_quic_alloc_frame(c);
>
> The argument should be "pc":
>
> frame = ngx_quic_alloc_frame(pc);

also, it need to be declared/initialized, similar to other places

>
> > +if (frame == NULL) {
> > +return NGX_ERROR;
> > +}
> > +
> > +frame->level = ssl_encryption_application;
> > +frame->type = NGX_QUIC_FT_MAX_STREAM_DATA;
> > +frame->u.max_stream_data.id = qs->id;
> > +frame->u.max_stream_data.limit = 

Re: [PATCH 2 of 3] HTTP/3: allowed QUIC stream connection reuse

2021-11-17 Thread Vladimir Homutov

17.11.2021 10:12, Roman Arutyunyan пишет:

On Tue, Nov 16, 2021 at 12:18:47PM +0300, Vladimir Homutov wrote:

On Mon, Nov 15, 2021 at 03:33:25PM +0300, Roman Arutyunyan wrote:

# HG changeset patch
# User Roman Arutyunyan 
# Date 1636646820 -10800
#  Thu Nov 11 19:07:00 2021 +0300
# Branch quic
# Node ID 801103b7645d93d0d06f63019e54d9e76f1baa6c
# Parent  d2c193aa84800da00314f1af72ae722d964445a4
QUIC: reject streams which we could not create.

The reasons why a stream may not be created by server currently include hitting
worker_connections limit and memory allocation error.  Previously in these
cases the entire QUIC connection was closed and all its streams were shut down.
Now the new stream is rejected and existing streams continue working.

To reject an HTTP/3 request stream, RESET_STREAM and STOP_SENDING with
H3_REQUEST_REJECTED error code are sent to client.  HTTP/3 uni streams and
Stream streams are not rejected.

diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h
--- a/src/event/quic/ngx_event_quic.h
+++ b/src/event/quic/ngx_event_quic.h
@@ -61,6 +61,9 @@ typedef struct {
  ngx_flag_t retry;
  ngx_flag_t gso_enabled;
  ngx_str_t  host_key;
+ngx_int_t  close_stream_code;
+ngx_int_t  reject_uni_stream_code;
+ngx_int_t  reject_bidi_stream_code;


i would prefer stream_close_code and stream_reject_code_uni|bidi,
a bit similar to transport parameter naming like
'initial_max_stream_data_bidi_local', YMMV


OK, let's do this.


  u_char av_token_key[NGX_QUIC_AV_KEY_LEN];
  u_char sr_token_key[NGX_QUIC_SR_KEY_LEN];
  } ngx_quic_conf_t;
diff --git a/src/event/quic/ngx_event_quic_streams.c 
b/src/event/quic/ngx_event_quic_streams.c
--- a/src/event/quic/ngx_event_quic_streams.c
+++ b/src/event/quic/ngx_event_quic_streams.c
@@ -15,6 +15,7 @@

  static ngx_quic_stream_t *ngx_quic_create_client_stream(ngx_connection_t *c,
  uint64_t id);
+static ngx_int_t ngx_quic_reject_stream(ngx_connection_t *c, uint64_t id);
  static ngx_int_t ngx_quic_init_stream(ngx_quic_stream_t *qs);
  static void ngx_quic_init_streams_handler(ngx_connection_t *c);
  static ngx_quic_stream_t *ngx_quic_create_stream(ngx_connection_t *c,
@@ -377,8 +378,13 @@ ngx_quic_create_client_stream(ngx_connec
  for ( /* void */ ; min_id < id; min_id += 0x04) {

  qs = ngx_quic_create_stream(c, min_id);
+
  if (qs == NULL) {
-return NULL;
+if (ngx_quic_reject_stream(c, min_id) != NGX_OK) {
+return NULL;
+}
+
+continue;
  }

  if (ngx_quic_init_stream(qs) != NGX_OK) {
@@ -390,7 +396,66 @@ ngx_quic_create_client_stream(ngx_connec
  }
  }

-return ngx_quic_create_stream(c, id);
+qs = ngx_quic_create_stream(c, id);
+
+if (qs == NULL) {
+if (ngx_quic_reject_stream(c, id) != NGX_OK) {
+return NULL;
+}
+
+return NGX_QUIC_STREAM_GONE;
+}
+
+return qs;
+}
+
+
+static ngx_int_t
+ngx_quic_reject_stream(ngx_connection_t *c, uint64_t id)
+{
+uint64_tcode;
+ngx_quic_frame_t   *frame;
+ngx_quic_connection_t  *qc;
+
+qc = ngx_quic_get_connection(c);
+
+code = (id & NGX_QUIC_STREAM_UNIDIRECTIONAL)
+   ? qc->conf->reject_uni_stream_code
+   : qc->conf->reject_bidi_stream_code;
+
+ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0,
+   "quic stream id:0x%xL reject err:0x%xL", id, code);


Here we may decline stream rejection, but have already logged it.
I suggest putting debug below 'code == 0' test.


Zero code still carries some information.  If it looks misleading, then yes,
let's move it below.



not really necessary, let it stay as is; we have many places where we 
debug just fact that function was entered.



+if (code == 0) {
+return NGX_DECLINED;
+}
+
+frame = ngx_quic_alloc_frame(c);
+if (frame == NULL) {
+return NGX_ERROR;
+}
+
+frame->level = ssl_encryption_application;
+frame->type = NGX_QUIC_FT_RESET_STREAM;
+frame->u.reset_stream.id = id;
+frame->u.reset_stream.error_code = code;
+frame->u.reset_stream.final_size = 0;
+
+ngx_quic_queue_frame(qc, frame);
+
+frame = ngx_quic_alloc_frame(c);
+if (frame == NULL) {
+return NGX_ERROR;
+}
+
+frame->level = ssl_encryption_application;
+frame->type = NGX_QUIC_FT_STOP_SENDING;
+frame->u.stop_sending.id = id;
+frame->u.stop_sending.error_code = code;
+
+ngx_quic_queue_frame(qc, frame);
+
+return NGX_OK;
  }


@@ -866,7 +931,9 @@ ngx_quic_stream_cleanup_handler(void *da
  if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) == 0
  || (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0)
  {
-

Re: [PATCH 2 of 3] HTTP/3: allowed QUIC stream connection reuse

2021-11-16 Thread Vladimir Homutov
On Mon, Nov 15, 2021 at 03:33:25PM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1636646820 -10800
> #  Thu Nov 11 19:07:00 2021 +0300
> # Branch quic
> # Node ID 801103b7645d93d0d06f63019e54d9e76f1baa6c
> # Parent  d2c193aa84800da00314f1af72ae722d964445a4
> QUIC: reject streams which we could not create.
>
> The reasons why a stream may not be created by server currently include 
> hitting
> worker_connections limit and memory allocation error.  Previously in these
> cases the entire QUIC connection was closed and all its streams were shut 
> down.
> Now the new stream is rejected and existing streams continue working.
>
> To reject an HTTP/3 request stream, RESET_STREAM and STOP_SENDING with
> H3_REQUEST_REJECTED error code are sent to client.  HTTP/3 uni streams and
> Stream streams are not rejected.
>
> diff --git a/src/event/quic/ngx_event_quic.h b/src/event/quic/ngx_event_quic.h
> --- a/src/event/quic/ngx_event_quic.h
> +++ b/src/event/quic/ngx_event_quic.h
> @@ -61,6 +61,9 @@ typedef struct {
>  ngx_flag_t retry;
>  ngx_flag_t gso_enabled;
>  ngx_str_t  host_key;
> +ngx_int_t  close_stream_code;
> +ngx_int_t  reject_uni_stream_code;
> +ngx_int_t  reject_bidi_stream_code;

i would prefer stream_close_code and stream_reject_code_uni|bidi,
a bit similar to transport parameter naming like
'initial_max_stream_data_bidi_local', YMMV


>  u_char av_token_key[NGX_QUIC_AV_KEY_LEN];
>  u_char sr_token_key[NGX_QUIC_SR_KEY_LEN];
>  } ngx_quic_conf_t;
> diff --git a/src/event/quic/ngx_event_quic_streams.c 
> b/src/event/quic/ngx_event_quic_streams.c
> --- a/src/event/quic/ngx_event_quic_streams.c
> +++ b/src/event/quic/ngx_event_quic_streams.c
> @@ -15,6 +15,7 @@
>
>  static ngx_quic_stream_t *ngx_quic_create_client_stream(ngx_connection_t *c,
>  uint64_t id);
> +static ngx_int_t ngx_quic_reject_stream(ngx_connection_t *c, uint64_t id);
>  static ngx_int_t ngx_quic_init_stream(ngx_quic_stream_t *qs);
>  static void ngx_quic_init_streams_handler(ngx_connection_t *c);
>  static ngx_quic_stream_t *ngx_quic_create_stream(ngx_connection_t *c,
> @@ -377,8 +378,13 @@ ngx_quic_create_client_stream(ngx_connec
>  for ( /* void */ ; min_id < id; min_id += 0x04) {
>
>  qs = ngx_quic_create_stream(c, min_id);
> +
>  if (qs == NULL) {
> -return NULL;
> +if (ngx_quic_reject_stream(c, min_id) != NGX_OK) {
> +return NULL;
> +}
> +
> +continue;
>  }
>
>  if (ngx_quic_init_stream(qs) != NGX_OK) {
> @@ -390,7 +396,66 @@ ngx_quic_create_client_stream(ngx_connec
>  }
>  }
>
> -return ngx_quic_create_stream(c, id);
> +qs = ngx_quic_create_stream(c, id);
> +
> +if (qs == NULL) {
> +if (ngx_quic_reject_stream(c, id) != NGX_OK) {
> +return NULL;
> +}
> +
> +return NGX_QUIC_STREAM_GONE;
> +}
> +
> +return qs;
> +}
> +
> +
> +static ngx_int_t
> +ngx_quic_reject_stream(ngx_connection_t *c, uint64_t id)
> +{
> +uint64_tcode;
> +ngx_quic_frame_t   *frame;
> +ngx_quic_connection_t  *qc;
> +
> +qc = ngx_quic_get_connection(c);
> +
> +code = (id & NGX_QUIC_STREAM_UNIDIRECTIONAL)
> +   ? qc->conf->reject_uni_stream_code
> +   : qc->conf->reject_bidi_stream_code;
> +
> +ngx_log_debug2(NGX_LOG_DEBUG_EVENT, c->log, 0,
> +   "quic stream id:0x%xL reject err:0x%xL", id, code);

Here we may decline stream rejection, but have already logged it.
I suggest putting debug below 'code == 0' test.

> +
> +if (code == 0) {
> +return NGX_DECLINED;
> +}
> +
> +frame = ngx_quic_alloc_frame(c);
> +if (frame == NULL) {
> +return NGX_ERROR;
> +}
> +
> +frame->level = ssl_encryption_application;
> +frame->type = NGX_QUIC_FT_RESET_STREAM;
> +frame->u.reset_stream.id = id;
> +frame->u.reset_stream.error_code = code;
> +frame->u.reset_stream.final_size = 0;
> +
> +ngx_quic_queue_frame(qc, frame);
> +
> +frame = ngx_quic_alloc_frame(c);
> +if (frame == NULL) {
> +return NGX_ERROR;
> +}
> +
> +frame->level = ssl_encryption_application;
> +frame->type = NGX_QUIC_FT_STOP_SENDING;
> +frame->u.stop_sending.id = id;
> +frame->u.stop_sending.error_code = code;
> +
> +ngx_quic_queue_frame(qc, frame);
> +
> +return NGX_OK;
>  }
>
>
> @@ -866,7 +931,9 @@ ngx_quic_stream_cleanup_handler(void *da
>  if ((qs->id & NGX_QUIC_STREAM_SERVER_INITIATED) == 0
>  || (qs->id & NGX_QUIC_STREAM_UNIDIRECTIONAL) == 0)
>  {
> -if (!c->read->pending_eof && !c->read->error) {
> +if (!c->read->pending_eof && !c->read->error
> +&& qc->conf->close_stream_code)
> +{
>  frame = 

[nginx] Mail: connections with wrong ALPN protocols are now rejected.

2021-10-20 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/dc955d274130
branches:  
changeset: 7938:dc955d274130
user:  Vladimir Homutov 
date:  Wed Oct 20 09:45:34 2021 +0300
description:
Mail: connections with wrong ALPN protocols are now rejected.

This is a recommended behavior by RFC 7301 and is useful
for mitigation of protocol confusion attacks [1].

For POP3 and IMAP protocols IANA-assigned ALPN IDs are used [2].
For the SMTP protocol "smtp" is used.

[1] https://alpaca-attack.com/
[2] https://www.iana.org/assignments/tls-extensiontype-values/

diffstat:

 src/mail/ngx_mail.h |   1 +
 src/mail/ngx_mail_imap_module.c |   1 +
 src/mail/ngx_mail_pop3_module.c |   1 +
 src/mail/ngx_mail_smtp_module.c |   1 +
 src/mail/ngx_mail_ssl_module.c  |  58 +
 5 files changed, 62 insertions(+), 0 deletions(-)

diffs (126 lines):

diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail.h
--- a/src/mail/ngx_mail.h   Wed Oct 20 09:50:02 2021 +0300
+++ b/src/mail/ngx_mail.h   Wed Oct 20 09:45:34 2021 +0300
@@ -324,6 +324,7 @@ typedef ngx_int_t (*ngx_mail_parse_comma
 
 struct ngx_mail_protocol_s {
 ngx_str_t   name;
+ngx_str_t   alpn;
 in_port_t   port[4];
 ngx_uint_t  type;
 
diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail_imap_module.c
--- a/src/mail/ngx_mail_imap_module.c   Wed Oct 20 09:50:02 2021 +0300
+++ b/src/mail/ngx_mail_imap_module.c   Wed Oct 20 09:45:34 2021 +0300
@@ -46,6 +46,7 @@ static ngx_str_t  ngx_mail_imap_auth_met
 
 static ngx_mail_protocol_t  ngx_mail_imap_protocol = {
 ngx_string("imap"),
+ngx_string("\x04imap"),
 { 143, 993, 0, 0 },
 NGX_MAIL_IMAP_PROTOCOL,
 
diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail_pop3_module.c
--- a/src/mail/ngx_mail_pop3_module.c   Wed Oct 20 09:50:02 2021 +0300
+++ b/src/mail/ngx_mail_pop3_module.c   Wed Oct 20 09:45:34 2021 +0300
@@ -46,6 +46,7 @@ static ngx_str_t  ngx_mail_pop3_auth_met
 
 static ngx_mail_protocol_t  ngx_mail_pop3_protocol = {
 ngx_string("pop3"),
+ngx_string("\x04pop3"),
 { 110, 995, 0, 0 },
 NGX_MAIL_POP3_PROTOCOL,
 
diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail_smtp_module.c
--- a/src/mail/ngx_mail_smtp_module.c   Wed Oct 20 09:50:02 2021 +0300
+++ b/src/mail/ngx_mail_smtp_module.c   Wed Oct 20 09:45:34 2021 +0300
@@ -39,6 +39,7 @@ static ngx_str_t  ngx_mail_smtp_auth_met
 
 static ngx_mail_protocol_t  ngx_mail_smtp_protocol = {
 ngx_string("smtp"),
+ngx_string("\x04smtp"),
 { 25, 465, 587, 0 },
 NGX_MAIL_SMTP_PROTOCOL,
 
diff -r db6b630e6086 -r dc955d274130 src/mail/ngx_mail_ssl_module.c
--- a/src/mail/ngx_mail_ssl_module.cWed Oct 20 09:50:02 2021 +0300
+++ b/src/mail/ngx_mail_ssl_module.cWed Oct 20 09:45:34 2021 +0300
@@ -14,6 +14,12 @@
 #define NGX_DEFAULT_ECDH_CURVE  "auto"
 
 
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+static int ngx_mail_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn,
+const unsigned char **out, unsigned char *outlen,
+const unsigned char *in, unsigned int inlen, void *arg);
+#endif
+
 static void *ngx_mail_ssl_create_conf(ngx_conf_t *cf);
 static char *ngx_mail_ssl_merge_conf(ngx_conf_t *cf, void *parent, void 
*child);
 
@@ -244,6 +250,54 @@ ngx_module_t  ngx_mail_ssl_module = {
 static ngx_str_t ngx_mail_ssl_sess_id_ctx = ngx_string("MAIL");
 
 
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+
+static int
+ngx_mail_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, const unsigned char **out,
+unsigned char *outlen, const unsigned char *in, unsigned int inlen,
+void *arg)
+{
+unsigned int   srvlen;
+unsigned char *srv;
+ngx_connection_t  *c;
+ngx_mail_session_t*s;
+ngx_mail_core_srv_conf_t  *cscf;
+#if (NGX_DEBUG)
+unsigned int   i;
+#endif
+
+c = ngx_ssl_get_connection(ssl_conn);
+s = c->data;
+
+#if (NGX_DEBUG)
+for (i = 0; i < inlen; i += in[i] + 1) {
+ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0,
+   "SSL ALPN supported by client: %*s",
+   (size_t) in[i], [i + 1]);
+}
+#endif
+
+cscf = ngx_mail_get_module_srv_conf(s, ngx_mail_core_module);
+
+srv = cscf->protocol->alpn.data;
+srvlen = cscf->protocol->alpn.len;
+
+if (SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen,
+  in, inlen)
+!= OPENSSL_NPN_NEGOTIATED)
+{
+return SSL_TLSEXT_ERR_ALERT_FATAL;
+}
+
+ngx_log_debug2(NGX_LOG_DEBUG_MAIL, c->log, 0,
+   "SSL ALPN selected: %*s", (size_t) *outlen, *out);
+
+return SSL_TLSEXT_ERR_OK;
+}
+
+#endif
+
+
 static void *
 ngx_mail_ssl_create_conf(ngx_conf_t *cf)
 {
@

[nginx] HTTP: connections with wrong ALPN protocols are now rejected.

2021-10-20 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/db6b630e6086
branches:  
changeset: 7937:db6b630e6086
user:  Vladimir Homutov 
date:  Wed Oct 20 09:50:02 2021 +0300
description:
HTTP: connections with wrong ALPN protocols are now rejected.

This is a recommended behavior by RFC 7301 and is useful
for mitigation of protocol confusion attacks [1].

To avoid possible negative effects, list of supported protocols
was extended to include all possible HTTP protocol ALPN IDs
registered by IANA [2], i.e. "http/1.0" and "http/0.9".

[1] https://alpaca-attack.com/
[2] https://www.iana.org/assignments/tls-extensiontype-values/

diffstat:

 src/http/modules/ngx_http_ssl_module.c |  13 ++---
 1 files changed, 6 insertions(+), 7 deletions(-)

diffs (39 lines):

diff -r b9e02e9b2f1d -r db6b630e6086 src/http/modules/ngx_http_ssl_module.c
--- a/src/http/modules/ngx_http_ssl_module.cTue Oct 19 12:19:59 2021 +0300
+++ b/src/http/modules/ngx_http_ssl_module.cWed Oct 20 09:50:02 2021 +0300
@@ -17,7 +17,7 @@ typedef ngx_int_t (*ngx_ssl_variable_han
 #define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5"
 #define NGX_DEFAULT_ECDH_CURVE  "auto"
 
-#define NGX_HTTP_ALPN_PROTO "\x08http/1.1"
+#define NGX_HTTP_ALPN_PROTOS"\x08http/1.1\x08http/1.0\x08http/0.9"
 
 
 #ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
@@ -442,21 +442,20 @@ ngx_http_ssl_alpn_select(ngx_ssl_conn_t 
 hc = c->data;
 
 if (hc->addr_conf->http2) {
-srv = (unsigned char *) NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTO;
-srvlen = sizeof(NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTO) - 1;
-
+srv = (unsigned char *) NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTOS;
+srvlen = sizeof(NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTOS) - 1;
 } else
 #endif
 {
-srv = (unsigned char *) NGX_HTTP_ALPN_PROTO;
-srvlen = sizeof(NGX_HTTP_ALPN_PROTO) - 1;
+srv = (unsigned char *) NGX_HTTP_ALPN_PROTOS;
+srvlen = sizeof(NGX_HTTP_ALPN_PROTOS) - 1;
 }
 
 if (SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen,
   in, inlen)
 != OPENSSL_NPN_NEGOTIATED)
 {
-return SSL_TLSEXT_ERR_NOACK;
+return SSL_TLSEXT_ERR_ALERT_FATAL;
 }
 
 ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0,
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Stream: the "ssl_alpn" directive.

2021-10-20 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/b9e02e9b2f1d
branches:  
changeset: 7936:b9e02e9b2f1d
user:  Vladimir Homutov 
date:  Tue Oct 19 12:19:59 2021 +0300
description:
Stream: the "ssl_alpn" directive.

The directive sets the server list of supported application protocols
and requires one of this protocols to be negotiated if client is using
ALPN.

diffstat:

 src/event/ngx_event_openssl.c  |3 +
 src/stream/ngx_stream_ssl_module.c |  117 +
 src/stream/ngx_stream_ssl_module.h |1 +
 3 files changed, 121 insertions(+), 0 deletions(-)

diffs (200 lines):

diff -r eb6c77e6d55d -r b9e02e9b2f1d src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c Thu Oct 14 11:46:23 2021 +0300
+++ b/src/event/ngx_event_openssl.c Tue Oct 19 12:19:59 2021 +0300
@@ -3134,6 +3134,9 @@ ngx_ssl_connection_error(ngx_connection_
 #ifdef SSL_R_CALLBACK_FAILED
 || n == SSL_R_CALLBACK_FAILED/*  234 */
 #endif
+#ifdef SSL_R_NO_APPLICATION_PROTOCOL
+|| n == SSL_R_NO_APPLICATION_PROTOCOL/*  235 */
+#endif
 || n == SSL_R_UNEXPECTED_MESSAGE /*  244 */
 || n == SSL_R_UNEXPECTED_RECORD  /*  245 */
 || n == SSL_R_UNKNOWN_ALERT_TYPE /*  246 */
diff -r eb6c77e6d55d -r b9e02e9b2f1d src/stream/ngx_stream_ssl_module.c
--- a/src/stream/ngx_stream_ssl_module.cThu Oct 14 11:46:23 2021 +0300
+++ b/src/stream/ngx_stream_ssl_module.cTue Oct 19 12:19:59 2021 +0300
@@ -25,6 +25,11 @@ static void ngx_stream_ssl_handshake_han
 #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
 int ngx_stream_ssl_servername(ngx_ssl_conn_t *ssl_conn, int *ad, void *arg);
 #endif
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+static int ngx_stream_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn,
+const unsigned char **out, unsigned char *outlen,
+const unsigned char *in, unsigned int inlen, void *arg);
+#endif
 #ifdef SSL_R_CERT_CB_ERROR
 static int ngx_stream_ssl_certificate(ngx_ssl_conn_t *ssl_conn, void *arg);
 #endif
@@ -45,6 +50,8 @@ static char *ngx_stream_ssl_password_fil
 void *conf);
 static char *ngx_stream_ssl_session_cache(ngx_conf_t *cf, ngx_command_t *cmd,
 void *conf);
+static char *ngx_stream_ssl_alpn(ngx_conf_t *cf, ngx_command_t *cmd,
+void *conf);
 
 static char *ngx_stream_ssl_conf_command_check(ngx_conf_t *cf, void *post,
 void *data);
@@ -211,6 +218,13 @@ static ngx_command_t  ngx_stream_ssl_com
   offsetof(ngx_stream_ssl_conf_t, conf_commands),
   _stream_ssl_conf_command_post },
 
+{ ngx_string("ssl_alpn"),
+  NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_1MORE,
+  ngx_stream_ssl_alpn,
+  NGX_STREAM_SRV_CONF_OFFSET,
+  0,
+  NULL },
+
   ngx_null_command
 };
 
@@ -446,6 +460,46 @@ ngx_stream_ssl_servername(ngx_ssl_conn_t
 #endif
 
 
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+
+static int
+ngx_stream_ssl_alpn_select(ngx_ssl_conn_t *ssl_conn, const unsigned char **out,
+unsigned char *outlen, const unsigned char *in, unsigned int inlen,
+void *arg)
+{
+ngx_str_t *alpn;
+#if (NGX_DEBUG)
+unsigned int   i;
+ngx_connection_t  *c;
+
+c = ngx_ssl_get_connection(ssl_conn);
+
+for (i = 0; i < inlen; i += in[i] + 1) {
+ngx_log_debug2(NGX_LOG_DEBUG_STREAM, c->log, 0,
+   "SSL ALPN supported by client: %*s",
+   (size_t) in[i], [i + 1]);
+}
+
+#endif
+
+alpn = arg;
+
+if (SSL_select_next_proto((unsigned char **) out, outlen, alpn->data,
+  alpn->len, in, inlen)
+!= OPENSSL_NPN_NEGOTIATED)
+{
+return SSL_TLSEXT_ERR_ALERT_FATAL;
+}
+
+ngx_log_debug2(NGX_LOG_DEBUG_STREAM, c->log, 0,
+   "SSL ALPN selected: %*s", (size_t) *outlen, *out);
+
+return SSL_TLSEXT_ERR_OK;
+}
+
+#endif
+
+
 #ifdef SSL_R_CERT_CB_ERROR
 
 int
@@ -605,6 +659,7 @@ ngx_stream_ssl_create_conf(ngx_conf_t *c
  * scf->client_certificate = { 0, NULL };
  * scf->trusted_certificate = { 0, NULL };
  * scf->crl = { 0, NULL };
+ * scf->alpn = { 0, NULL };
  * scf->ciphers = { 0, NULL };
  * scf->shm_zone = NULL;
  */
@@ -663,6 +718,7 @@ ngx_stream_ssl_merge_conf(ngx_conf_t *cf
 ngx_conf_merge_str_value(conf->trusted_certificate,
  prev->trusted_certificate, "");
 ngx_conf_merge_str_value(conf->crl, prev->crl, "");
+ngx_conf_merge_str_value(conf->alpn, prev->alpn, "");
 
 ngx_conf_merge_str_value(conf->ecdh_curve, prev->ecdh_curve,
  NGX_DEFAULT_ECDH_CURVE);
@@ -723,6 +779,13 @@ ngx_stream_ssl_merge_conf(ngx_conf_t *cf
 

[nginx] SSL: added $ssl_alpn_protocol variable.

2021-10-20 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/eb6c77e6d55d
branches:  
changeset: 7935:eb6c77e6d55d
user:  Vladimir Homutov 
date:  Thu Oct 14 11:46:23 2021 +0300
description:
SSL: added $ssl_alpn_protocol variable.

The variable contains protocol selected by ALPN during handshake and
is empty otherwise.

diffstat:

 src/event/ngx_event_openssl.c  |  30 ++
 src/event/ngx_event_openssl.h  |   2 ++
 src/http/modules/ngx_http_ssl_module.c |   3 +++
 src/stream/ngx_stream_ssl_module.c |   3 +++
 4 files changed, 38 insertions(+), 0 deletions(-)

diffs (78 lines):

diff -r 61abb35bb8cf -r eb6c77e6d55d src/event/ngx_event_openssl.c
--- a/src/event/ngx_event_openssl.c Fri Oct 15 10:02:15 2021 +0300
+++ b/src/event/ngx_event_openssl.c Thu Oct 14 11:46:23 2021 +0300
@@ -4699,6 +4699,36 @@ ngx_ssl_get_server_name(ngx_connection_t
 
 
 ngx_int_t
+ngx_ssl_get_alpn_protocol(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t *s)
+{
+#ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
+
+unsigned int  len;
+const unsigned char  *data;
+
+SSL_get0_alpn_selected(c->ssl->connection, , );
+
+if (len > 0) {
+
+s->data = ngx_pnalloc(pool, len);
+if (s->data == NULL) {
+return NGX_ERROR;
+}
+
+ngx_memcpy(s->data, data, len);
+s->len = len;
+
+return NGX_OK;
+}
+
+#endif
+
+s->len = 0;
+return NGX_OK;
+}
+
+
+ngx_int_t
 ngx_ssl_get_raw_certificate(ngx_connection_t *c, ngx_pool_t *pool, ngx_str_t 
*s)
 {
 size_t   len;
diff -r 61abb35bb8cf -r eb6c77e6d55d src/event/ngx_event_openssl.h
--- a/src/event/ngx_event_openssl.h Fri Oct 15 10:02:15 2021 +0300
+++ b/src/event/ngx_event_openssl.h Thu Oct 14 11:46:23 2021 +0300
@@ -265,6 +265,8 @@ ngx_int_t ngx_ssl_get_early_data(ngx_con
 ngx_str_t *s);
 ngx_int_t ngx_ssl_get_server_name(ngx_connection_t *c, ngx_pool_t *pool,
 ngx_str_t *s);
+ngx_int_t ngx_ssl_get_alpn_protocol(ngx_connection_t *c, ngx_pool_t *pool,
+ngx_str_t *s);
 ngx_int_t ngx_ssl_get_raw_certificate(ngx_connection_t *c, ngx_pool_t *pool,
 ngx_str_t *s);
 ngx_int_t ngx_ssl_get_certificate(ngx_connection_t *c, ngx_pool_t *pool,
diff -r 61abb35bb8cf -r eb6c77e6d55d src/http/modules/ngx_http_ssl_module.c
--- a/src/http/modules/ngx_http_ssl_module.cFri Oct 15 10:02:15 2021 +0300
+++ b/src/http/modules/ngx_http_ssl_module.cThu Oct 14 11:46:23 2021 +0300
@@ -358,6 +358,9 @@ static ngx_http_variable_t  ngx_http_ssl
 { ngx_string("ssl_server_name"), NULL, ngx_http_ssl_variable,
   (uintptr_t) ngx_ssl_get_server_name, NGX_HTTP_VAR_CHANGEABLE, 0 },
 
+{ ngx_string("ssl_alpn_protocol"), NULL, ngx_http_ssl_variable,
+  (uintptr_t) ngx_ssl_get_alpn_protocol, NGX_HTTP_VAR_CHANGEABLE, 0 },
+
 { ngx_string("ssl_client_cert"), NULL, ngx_http_ssl_variable,
   (uintptr_t) ngx_ssl_get_certificate, NGX_HTTP_VAR_CHANGEABLE, 0 },
 
diff -r 61abb35bb8cf -r eb6c77e6d55d src/stream/ngx_stream_ssl_module.c
--- a/src/stream/ngx_stream_ssl_module.cFri Oct 15 10:02:15 2021 +0300
+++ b/src/stream/ngx_stream_ssl_module.cThu Oct 14 11:46:23 2021 +0300
@@ -266,6 +266,9 @@ static ngx_stream_variable_t  ngx_stream
 { ngx_string("ssl_server_name"), NULL, ngx_stream_ssl_variable,
   (uintptr_t) ngx_ssl_get_server_name, NGX_STREAM_VAR_CHANGEABLE, 0 },
 
+{ ngx_string("ssl_alpn_protocol"), NULL, ngx_stream_ssl_variable,
+  (uintptr_t) ngx_ssl_get_alpn_protocol, NGX_STREAM_VAR_CHANGEABLE, 0 },
+
 { ngx_string("ssl_client_cert"), NULL, ngx_stream_ssl_variable,
   (uintptr_t) ngx_ssl_get_certificate, NGX_STREAM_VAR_CHANGEABLE, 0 },
 
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] HTTP/2: removed support for NPN.

2021-10-20 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/61abb35bb8cf
branches:  
changeset: 7934:61abb35bb8cf
user:  Vladimir Homutov 
date:  Fri Oct 15 10:02:15 2021 +0300
description:
HTTP/2: removed support for NPN.

NPN was replaced with ALPN, published as RFC 7301 in July 2014.
It used to negotiate SPDY (and, in transition, HTTP/2).

NPN supported appeared in OpenSSL 1.0.1. It does not work with TLSv1.3 [1].
ALPN is supported since OpenSSL 1.0.2.

The NPN support was dropped in Firefox 53 [2] and Chrome 51 [3].

[1] https://github.com/openssl/openssl/issues/3665.
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1248198
[3] https://www.chromestatus.com/feature/5767920709795840

diffstat:

 src/http/modules/ngx_http_ssl_module.c |  59 ++---
 src/http/ngx_http.c|   5 +-
 src/http/ngx_http_request.c|  14 +---
 src/http/v2/ngx_http_v2.h  |   3 +-
 4 files changed, 9 insertions(+), 72 deletions(-)

diffs (166 lines):

diff -r 2f443cac3f1e -r 61abb35bb8cf src/http/modules/ngx_http_ssl_module.c
--- a/src/http/modules/ngx_http_ssl_module.cMon Oct 18 16:46:59 2021 +0300
+++ b/src/http/modules/ngx_http_ssl_module.cFri Oct 15 10:02:15 2021 +0300
@@ -17,7 +17,7 @@ typedef ngx_int_t (*ngx_ssl_variable_han
 #define NGX_DEFAULT_CIPHERS "HIGH:!aNULL:!MD5"
 #define NGX_DEFAULT_ECDH_CURVE  "auto"
 
-#define NGX_HTTP_NPN_ADVERTISE  "\x08http/1.1"
+#define NGX_HTTP_ALPN_PROTO "\x08http/1.1"
 
 
 #ifdef TLSEXT_TYPE_application_layer_protocol_negotiation
@@ -26,11 +26,6 @@ static int ngx_http_ssl_alpn_select(ngx_
 const unsigned char *in, unsigned int inlen, void *arg);
 #endif
 
-#ifdef TLSEXT_TYPE_next_proto_neg
-static int ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn,
-const unsigned char **out, unsigned int *outlen, void *arg);
-#endif
-
 static ngx_int_t ngx_http_ssl_static_variable(ngx_http_request_t *r,
 ngx_http_variable_value_t *v, uintptr_t data);
 static ngx_int_t ngx_http_ssl_variable(ngx_http_request_t *r,
@@ -444,15 +439,14 @@ ngx_http_ssl_alpn_select(ngx_ssl_conn_t 
 hc = c->data;
 
 if (hc->addr_conf->http2) {
-srv =
-   (unsigned char *) NGX_HTTP_V2_ALPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE;
-srvlen = sizeof(NGX_HTTP_V2_ALPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1;
+srv = (unsigned char *) NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTO;
+srvlen = sizeof(NGX_HTTP_V2_ALPN_PROTO NGX_HTTP_ALPN_PROTO) - 1;
 
 } else
 #endif
 {
-srv = (unsigned char *) NGX_HTTP_NPN_ADVERTISE;
-srvlen = sizeof(NGX_HTTP_NPN_ADVERTISE) - 1;
+srv = (unsigned char *) NGX_HTTP_ALPN_PROTO;
+srvlen = sizeof(NGX_HTTP_ALPN_PROTO) - 1;
 }
 
 if (SSL_select_next_proto((unsigned char **) out, outlen, srv, srvlen,
@@ -471,44 +465,6 @@ ngx_http_ssl_alpn_select(ngx_ssl_conn_t 
 #endif
 
 
-#ifdef TLSEXT_TYPE_next_proto_neg
-
-static int
-ngx_http_ssl_npn_advertised(ngx_ssl_conn_t *ssl_conn,
-const unsigned char **out, unsigned int *outlen, void *arg)
-{
-#if (NGX_HTTP_V2 || NGX_DEBUG)
-ngx_connection_t  *c;
-
-c = ngx_ssl_get_connection(ssl_conn);
-ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "SSL NPN advertised");
-#endif
-
-#if (NGX_HTTP_V2)
-{
-ngx_http_connection_t  *hc;
-
-hc = c->data;
-
-if (hc->addr_conf->http2) {
-*out =
-(unsigned char *) NGX_HTTP_V2_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE;
-*outlen = sizeof(NGX_HTTP_V2_NPN_ADVERTISE NGX_HTTP_NPN_ADVERTISE) - 1;
-
-return SSL_TLSEXT_ERR_OK;
-}
-}
-#endif
-
-*out = (unsigned char *) NGX_HTTP_NPN_ADVERTISE;
-*outlen = sizeof(NGX_HTTP_NPN_ADVERTISE) - 1;
-
-return SSL_TLSEXT_ERR_OK;
-}
-
-#endif
-
-
 static ngx_int_t
 ngx_http_ssl_static_variable(ngx_http_request_t *r,
 ngx_http_variable_value_t *v, uintptr_t data)
@@ -792,11 +748,6 @@ ngx_http_ssl_merge_srv_conf(ngx_conf_t *
 SSL_CTX_set_alpn_select_cb(conf->ssl.ctx, ngx_http_ssl_alpn_select, NULL);
 #endif
 
-#ifdef TLSEXT_TYPE_next_proto_neg
-SSL_CTX_set_next_protos_advertised_cb(conf->ssl.ctx,
-  ngx_http_ssl_npn_advertised, NULL);
-#endif
-
 if (ngx_ssl_ciphers(cf, >ssl, >ciphers,
 conf->prefer_server_ciphers)
 != NGX_OK)
diff -r 2f443cac3f1e -r 61abb35bb8cf src/http/ngx_http.c
--- a/src/http/ngx_http.c   Mon Oct 18 16:46:59 2021 +0300
+++ b/src/http/ngx_http.c   Fri Oct 15 10:02:15 2021 +0300
@@ -1338,13 +1338,12 @@ ngx_http_add_address(ngx_conf_t *cf, ngx
 }
 
 #if (NGX_HTTP_V2 && NGX_HTTP_SSL  \
- && !defined TLSEXT_TYPE_application_layer_protocol_negotiation   \
- && !defined TLSEXT_TYPE_next_proto_neg)
+ && !defined TLSEXT_TYPE_application_layer_protocol_negotiation)
 

Re: [PATCH 3 of 5] HTTP/3: traffic-based flood detection

2021-10-13 Thread Vladimir Homutov
On Thu, Oct 07, 2021 at 02:36:16PM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1633602162 -10800
> #  Thu Oct 07 13:22:42 2021 +0300
> # Branch quic
> # Node ID 31561ac584b74d29af9a442afca47821a98217b2
> # Parent  1b87f4e196cce2b7aae33a63ca6dfc857b99f2b7
> HTTP/3: traffic-based flood detection.
>
> With this patch, all traffic over HTTP/3 bidi and uni streams is counted in
> the h3c->total_bytes field, and payload traffic is counted in the
> h3c->payload_bytes field.  As long as total traffic is many times larger than
> payload traffic, we consider this to be a flood.
>
> Request header traffic is counted as if all fields are literal.  Response
> header traffic is counted as is.

[..]

this looks more complex than QUIC part, as we don't have clear
understanding what 'payload' is.

Attempt to count literal fields vs bytes leads to situations when
payload is greater than total due to en/decoding. It looks like
it doesn't harm though, as the difference is not that big and we
should not have something like zip-bomb here
(i.e. decoded payload increases greatly in length, while total is quite
small)

I'm not sure that assuming reserved frames is not a good payload
is a good idea. While we don't know what is there, RFC tells us
not assume anything about their meaning. On the other side,
we can definitely consider huge number of reserved frames as a flood,
as we don't make any progress with request as we receive them
and waste resources.

overal, it looks working, and I have no better ideas how we can improve
it.
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: Should continue when ngx_quic_bpf_group_add_socket failed with adding one socket during reloading

2021-10-13 Thread Vladimir Homutov

13.10.2021 09:46, Gao,Yan(媒体云) пишет:

ngx_quic_bpf_module_init:
Should continue when ngx_quic_bpf_group_add_socket failed with adding one 
socket during reloading?

Gao,Yan(ACG VCP)



Hello Gao Yan,

this is a hard question. I would say that only valid reason to
fail there is if you hit some kernel limits. Otherwise it is some
bug in code that should be fixed. If you fail to add sockets into
map on reload, you end up in inconsistent state anyway, and
there is not much you can do, but restart nginx completely.

I hope this helps.
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [PATCH 2 of 5] HTTP/3: fixed request length calculation

2021-10-12 Thread Vladimir Homutov
On Thu, Oct 07, 2021 at 02:36:15PM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1633521076 -10800
> #  Wed Oct 06 14:51:16 2021 +0300
> # Branch quic
> # Node ID 1b87f4e196cce2b7aae33a63ca6dfc857b99f2b7
> # Parent  d53039c3224e8227979c113f621e532aef7c0f9b
> HTTP/3: fixed request length calculation.
>
> Previously, when request was blocked, r->request_length was not updated.
>
> diff --git a/src/http/v3/ngx_http_v3_request.c 
> b/src/http/v3/ngx_http_v3_request.c
> --- a/src/http/v3/ngx_http_v3_request.c
> +++ b/src/http/v3/ngx_http_v3_request.c
> @@ -297,6 +297,8 @@ ngx_http_v3_process_request(ngx_event_t
>  break;
>  }
>
> +r->request_length += b->pos - p;
> +
>  if (rc == NGX_BUSY) {
>  if (rev->error) {
>  ngx_http_close_request(r, NGX_HTTP_CLOSE);
> @@ -310,8 +312,6 @@ ngx_http_v3_process_request(ngx_event_t
>  break;
>  }
>
> -r->request_length += b->pos - p;
> -
>  if (rc == NGX_AGAIN) {
>  continue;
>  }

Looks good
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH 1 of 5] HTTP/3: removed client-side encoder support

2021-10-12 Thread Vladimir Homutov
On Thu, Oct 07, 2021 at 02:36:14PM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1633520939 -10800
> #  Wed Oct 06 14:48:59 2021 +0300
> # Branch quic
> # Node ID d53039c3224e8227979c113f621e532aef7c0f9b
> # Parent  1ead7d64e9934c1a6c0d9dd3c5f1a3d643b926d6
> HTTP/3: removed client-side encoder support.
>
> Dynamic tables are not used when generating responses anyway.
>
> diff --git a/src/http/v3/ngx_http_v3_streams.c 
> b/src/http/v3/ngx_http_v3_streams.c
> --- a/src/http/v3/ngx_http_v3_streams.c
> +++ b/src/http/v3/ngx_http_v3_streams.c
> @@ -480,155 +480,6 @@ failed:
>
>
>  ngx_int_t
> -ngx_http_v3_send_ref_insert(ngx_connection_t *c, ngx_uint_t dynamic,
> -ngx_uint_t index, ngx_str_t *value)
> -{
> -u_char*p, buf[NGX_HTTP_V3_PREFIX_INT_LEN * 2];
> -size_t n;
> -ngx_connection_t  *ec;
> -
> -ngx_log_debug3(NGX_LOG_DEBUG_HTTP, c->log, 0,
> -   "http3 client ref insert, %s[%ui] \"%V\"",
> -   dynamic ? "dynamic" : "static", index, value);
> -
> -ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER);
> -if (ec == NULL) {
> -return NGX_ERROR;
> -}
> -
> -p = buf;
> -
> -*p = (dynamic ? 0x80 : 0xc0);
> -p = (u_char *) ngx_http_v3_encode_prefix_int(p, index, 6);
> -
> -/* XXX option for huffman? */
> -*p = 0;
> -p = (u_char *) ngx_http_v3_encode_prefix_int(p, value->len, 7);
> -
> -n = p - buf;
> -
> -if (ec->send(ec, buf, n) != (ssize_t) n) {
> -goto failed;
> -}
> -
> -if (ec->send(ec, value->data, value->len) != (ssize_t) value->len) {
> -goto failed;
> -}
> -
> -return NGX_OK;
> -
> -failed:
> -
> -ngx_http_v3_close_uni_stream(ec);
> -
> -return NGX_ERROR;
> -}
> -
> -
> -ngx_int_t
> -ngx_http_v3_send_insert(ngx_connection_t *c, ngx_str_t *name, ngx_str_t 
> *value)
> -{
> -u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN];
> -size_t n;
> -ngx_connection_t  *ec;
> -
> -ngx_log_debug2(NGX_LOG_DEBUG_HTTP, c->log, 0,
> -   "http3 client insert \"%V\":\"%V\"", name, value);
> -
> -ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER);
> -if (ec == NULL) {
> -return NGX_ERROR;
> -}
> -
> -/* XXX option for huffman? */
> -buf[0] = 0x40;
> -n = (u_char *) ngx_http_v3_encode_prefix_int(buf, name->len, 5) - buf;
> -
> -if (ec->send(ec, buf, n) != (ssize_t) n) {
> -goto failed;
> -}
> -
> -if (ec->send(ec, name->data, name->len) != (ssize_t) name->len) {
> -goto failed;
> -}
> -
> -/* XXX option for huffman? */
> -buf[0] = 0;
> -n = (u_char *) ngx_http_v3_encode_prefix_int(buf, value->len, 7) - buf;
> -
> -if (ec->send(ec, buf, n) != (ssize_t) n) {
> -goto failed;
> -}
> -
> -if (ec->send(ec, value->data, value->len) != (ssize_t) value->len) {
> -goto failed;
> -}
> -
> -return NGX_OK;
> -
> -failed:
> -
> -ngx_http_v3_close_uni_stream(ec);
> -
> -return NGX_ERROR;
> -}
> -
> -
> -ngx_int_t
> -ngx_http_v3_send_set_capacity(ngx_connection_t *c, ngx_uint_t capacity)
> -{
> -u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN];
> -size_t n;
> -ngx_connection_t  *ec;
> -
> -ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0,
> -   "http3 client set capacity %ui", capacity);
> -
> -ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER);
> -if (ec == NULL) {
> -return NGX_ERROR;
> -}
> -
> -buf[0] = 0x20;
> -n = (u_char *) ngx_http_v3_encode_prefix_int(buf, capacity, 5) - buf;
> -
> -if (ec->send(ec, buf, n) != (ssize_t) n) {
> -ngx_http_v3_close_uni_stream(ec);
> -return NGX_ERROR;
> -}
> -
> -return NGX_OK;
> -}
> -
> -
> -ngx_int_t
> -ngx_http_v3_send_duplicate(ngx_connection_t *c, ngx_uint_t index)
> -{
> -u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN];
> -size_t n;
> -ngx_connection_t  *ec;
> -
> -ngx_log_debug1(NGX_LOG_DEBUG_HTTP, c->log, 0,
> -   "http3 client duplicate %ui", index);
> -
> -ec = ngx_http_v3_get_uni_stream(c, NGX_HTTP_V3_STREAM_ENCODER);
> -if (ec == NULL) {
> -return NGX_ERROR;
> -}
> -
> -buf[0] = 0;
> -n = (u_char *) ngx_http_v3_encode_prefix_int(buf, index, 5) - buf;
> -
> -if (ec->send(ec, buf, n) != (ssize_t) n) {
> -ngx_http_v3_close_uni_stream(ec);
> -return NGX_ERROR;
> -}
> -
> -return NGX_OK;
> -}
> -
> -
> -ngx_int_t
>  ngx_http_v3_send_ack_section(ngx_connection_t *c, ngx_uint_t stream_id)
>  {
>  u_char buf[NGX_HTTP_V3_PREFIX_INT_LEN];
> diff --git a/src/http/v3/ngx_http_v3_streams.h 
> b/src/http/v3/ngx_http_v3_streams.h
> --- a/src/http/v3/ngx_http_v3_streams.h
> +++ b/src/http/v3/ngx_http_v3_streams.h
> @@ -27,13 +27,6 @@ ngx_int_t ngx_http_v3_cancel_stream(ngx_
>
>  

Re: [PATCH 5 of 5] QUIC: limited the total number of frames

2021-10-12 Thread Vladimir Homutov
On Thu, Oct 07, 2021 at 02:36:18PM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1633603050 -10800
> #  Thu Oct 07 13:37:30 2021 +0300
> # Branch quic
> # Node ID 25aeebb9432182a6246fedba6b1024f3d61e959b
> # Parent  e20f00b8ac9005621993ea19375b1646c9182e7b
> QUIC: limited the total number of frames.
>
> Exceeding 1 allocated frames is considered a flood.
>
> diff --git a/src/event/quic/ngx_event_quic_connection.h 
> b/src/event/quic/ngx_event_quic_connection.h
> --- a/src/event/quic/ngx_event_quic_connection.h
> +++ b/src/event/quic/ngx_event_quic_connection.h
> @@ -228,10 +228,8 @@ struct ngx_quic_connection_s {
>  ngx_chain_t  *free_bufs;
>  ngx_buf_t*free_shadow_bufs;
>
> -#ifdef NGX_QUIC_DEBUG_ALLOC
>  ngx_uint_tnframes;
>  ngx_uint_tnbufs;
> -#endif

nbufs are actually used only inside NGX_QUIC_DEBUG_ALLOC macro...

>
>  ngx_quic_streams_tstreams;
>  ngx_quic_congestion_t congestion;
> diff --git a/src/event/quic/ngx_event_quic_frames.c 
> b/src/event/quic/ngx_event_quic_frames.c
> --- a/src/event/quic/ngx_event_quic_frames.c
> +++ b/src/event/quic/ngx_event_quic_frames.c
> @@ -38,18 +38,22 @@ ngx_quic_alloc_frame(ngx_connection_t *c
> "quic reuse frame n:%ui", qc->nframes);
>  #endif
>
> -} else {
> +} else if (qc->nframes < 1) {
>  frame = ngx_palloc(c->pool, sizeof(ngx_quic_frame_t));
>  if (frame == NULL) {
>  return NULL;
>  }
>
> -#ifdef NGX_QUIC_DEBUG_ALLOC
>  ++qc->nframes;
>
> +#ifdef NGX_QUIC_DEBUG_ALLOC
>  ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
> "quic alloc frame n:%ui", qc->nframes);
>  #endif
> +
> +} else {
> +ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic flood detected");
> +return NULL;
>  }
>
>  ngx_memzero(frame, sizeof(ngx_quic_frame_t));
> @@ -372,9 +376,9 @@ ngx_quic_alloc_buf(ngx_connection_t *c)
>
>  cl->buf = b;
>
> -#ifdef NGX_QUIC_DEBUG_ALLOC
>  ++qc->nbufs;

... so this change seems unnecessary

>
> +#ifdef NGX_QUIC_DEBUG_ALLOC
>  ngx_log_debug1(NGX_LOG_DEBUG_EVENT, c->log, 0,
> "quic alloc buffer n:%ui", qc->nbufs);
>  #endif

note: again, the patch follows approach used in HTTP/2 for limiting number of
allocated frames and uses same constant.

as a whole, should be working.
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH 4 of 5] QUIC: traffic-based flood detection

2021-10-12 Thread Vladimir Homutov
On Thu, Oct 07, 2021 at 02:36:17PM +0300, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan 
> # Date 1633602816 -10800
> #  Thu Oct 07 13:33:36 2021 +0300
> # Branch quic
> # Node ID e20f00b8ac9005621993ea19375b1646c9182e7b
> # Parent  31561ac584b74d29af9a442afca47821a98217b2
> QUIC: traffic-based flood detection.
>
> With this patch, all traffic over a QUIC connection is compared to traffic
> over QUIC streams.  As long as total traffic is many times larger than stream
> traffic, we consider this to be a flood.
>
> diff --git a/src/event/quic/ngx_event_quic.c b/src/event/quic/ngx_event_quic.c
> --- a/src/event/quic/ngx_event_quic.c
> +++ b/src/event/quic/ngx_event_quic.c
> @@ -662,13 +662,17 @@ ngx_quic_close_timer_handler(ngx_event_t
>  static ngx_int_t
>  ngx_quic_input(ngx_connection_t *c, ngx_buf_t *b, ngx_quic_conf_t *conf)
>  {
> -u_char *p;
> -ngx_int_t   rc;
> -ngx_uint_t  good;
> -ngx_quic_header_t   pkt;
> +size_t  size;
> +u_char *p;
> +ngx_int_t   rc;
> +ngx_uint_t  good;
> +ngx_quic_header_t   pkt;
> +ngx_quic_connection_t  *qc;
>
>  good = 0;
>
> +size = b->last - b->pos;
> +
>  p = b->pos;
>
>  while (p < b->last) {
> @@ -701,7 +705,8 @@ ngx_quic_input(ngx_connection_t *c, ngx_
>
>  if (rc == NGX_DONE) {
>  /* stop further processing */
> -return NGX_DECLINED;
> +good = 0;
> +break;
>  }

this chunk looks unnecessary: we will test 'good' after the loop and
return NGX_DECLINED anyway in this case (good = 0).

>
>  if (rc == NGX_OK) {
> @@ -733,7 +738,27 @@ ngx_quic_input(ngx_connection_t *c, ngx_
>  p = b->pos;
>  }
>
> -return good ? NGX_OK : NGX_DECLINED;
> +if (!good) {
> +return NGX_DECLINED;
> +}
> +
> +qc = ngx_quic_get_connection(c);
> +
> +if (qc) {
> +qc->received += size;
> +
> +if ((uint64_t) (c->sent + qc->received) / 8 >
> +(qc->streams.sent + qc->streams.recv_last) + 1048576)
> +{

note: the comparison is intentionally similar to one used HTTP/2 for the
same purposes

> +ngx_log_error(NGX_LOG_INFO, c->log, 0, "quic flood detected");
> +
> +qc->error = NGX_QUIC_ERR_NO_ERROR;
> +qc->error_reason = "QUIC flood detected";
> +return NGX_ERROR;
> +}
> +}
> +
> +return NGX_OK;
>  }
>
>
> diff --git a/src/event/quic/ngx_event_quic_connection.h 
> b/src/event/quic/ngx_event_quic_connection.h
> --- a/src/event/quic/ngx_event_quic_connection.h
> +++ b/src/event/quic/ngx_event_quic_connection.h
> @@ -236,6 +236,8 @@ struct ngx_quic_connection_s {
>  ngx_quic_streams_tstreams;
>  ngx_quic_congestion_t congestion;
>
> +off_t received;
> +
>  ngx_uint_terror;
>  enum ssl_encryption_level_t   error_level;
>  ngx_uint_terror_ftype;

As a whole, it seems to be working good enough.
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


[nginx] Stream: added half-close support.

2021-09-22 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/bfad703459b4
branches:  
changeset: 7929:bfad703459b4
user:  Vladimir Homutov 
date:  Wed Sep 22 10:20:00 2021 +0300
description:
Stream: added half-close support.

The "proxy_half_close" directive enables handling of TCP half close.  If
enabled, connection to proxied server is kept open until both read ends get
EOF.  Write end shutdown is properly transmitted via proxy.

diffstat:

 src/stream/ngx_stream_proxy_module.c |  36 
 src/stream/ngx_stream_upstream.h |   1 +
 2 files changed, 37 insertions(+), 0 deletions(-)

diffs (92 lines):

diff -r 97cf8284fd19 -r bfad703459b4 src/stream/ngx_stream_proxy_module.c
--- a/src/stream/ngx_stream_proxy_module.c  Fri Sep 10 12:59:22 2021 +0300
+++ b/src/stream/ngx_stream_proxy_module.c  Wed Sep 22 10:20:00 2021 +0300
@@ -31,6 +31,7 @@ typedef struct {
 ngx_uint_t   next_upstream_tries;
 ngx_flag_t   next_upstream;
 ngx_flag_t   proxy_protocol;
+ngx_flag_t   half_close;
 ngx_stream_upstream_local_t *local;
 ngx_flag_t   socket_keepalive;
 
@@ -245,6 +246,13 @@ static ngx_command_t  ngx_stream_proxy_c
   offsetof(ngx_stream_proxy_srv_conf_t, proxy_protocol),
   NULL },
 
+{ ngx_string("proxy_half_close"),
+  NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_CONF_FLAG,
+  ngx_conf_set_flag_slot,
+  NGX_STREAM_SRV_CONF_OFFSET,
+  offsetof(ngx_stream_proxy_srv_conf_t, half_close),
+  NULL },
+
 #if (NGX_STREAM_SSL)
 
 { ngx_string("proxy_ssl"),
@@ -1755,6 +1763,24 @@ ngx_stream_proxy_process(ngx_stream_sess
 }
 
 if (dst) {
+
+if (dst->type == SOCK_STREAM && pscf->half_close
+&& src->read->eof && !u->half_closed && !dst->buffered)
+{
+if (ngx_shutdown_socket(dst->fd, NGX_WRITE_SHUTDOWN) == -1) {
+ngx_connection_error(c, ngx_socket_errno,
+ ngx_shutdown_socket_n " failed");
+
+ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR);
+return;
+}
+
+u->half_closed = 1;
+ngx_log_debug1(NGX_LOG_DEBUG_STREAM, s->connection->log, 0,
+   "stream proxy %s socket shutdown",
+   from_upstream ? "client" : "upstream");
+}
+
 if (ngx_handle_write_event(dst->write, 0) != NGX_OK) {
 ngx_stream_proxy_finalize(s, NGX_STREAM_INTERNAL_SERVER_ERROR);
 return;
@@ -1833,6 +1859,13 @@ ngx_stream_proxy_test_finalize(ngx_strea
 return NGX_DECLINED;
 }
 
+if (pscf->half_close) {
+/* avoid closing live connections until both read ends get EOF */
+if (!(c->read->eof && pc->read->eof && !c->buffered && !pc->buffered)) 
{
+ return NGX_DECLINED;
+}
+}
+
 handler = c->log->handler;
 c->log->handler = NULL;
 
@@ -2052,6 +2085,7 @@ ngx_stream_proxy_create_srv_conf(ngx_con
 conf->proxy_protocol = NGX_CONF_UNSET;
 conf->local = NGX_CONF_UNSET_PTR;
 conf->socket_keepalive = NGX_CONF_UNSET;
+conf->half_close = NGX_CONF_UNSET;
 
 #if (NGX_STREAM_SSL)
 conf->ssl_enable = NGX_CONF_UNSET;
@@ -2110,6 +2144,8 @@ ngx_stream_proxy_merge_srv_conf(ngx_conf
 ngx_conf_merge_value(conf->socket_keepalive,
   prev->socket_keepalive, 0);
 
+ngx_conf_merge_value(conf->half_close, prev->half_close, 0);
+
 #if (NGX_STREAM_SSL)
 
 ngx_conf_merge_value(conf->ssl_enable, prev->ssl_enable, 0);
diff -r 97cf8284fd19 -r bfad703459b4 src/stream/ngx_stream_upstream.h
--- a/src/stream/ngx_stream_upstream.h  Fri Sep 10 12:59:22 2021 +0300
+++ b/src/stream/ngx_stream_upstream.h  Wed Sep 22 10:20:00 2021 +0300
@@ -142,6 +142,7 @@ typedef struct {
 ngx_stream_upstream_state_t   *state;
 unsigned   connected:1;
 unsigned   proxy_protocol:1;
+unsigned   half_closed:1;
 } ngx_stream_upstream_t;
 
 
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [nginx-quic] Segmentation offloading

2021-07-27 Thread Vladimir Homutov
On Mon, Jul 26, 2021 at 04:08:02PM -0500, Lucas Cuminato wrote:
> Hello,
>
> I was testing this feature the other day but unsure if it's doing the right
> thing.
> Nginx is generating 65k UDP datagrams which are then being fragmented at
> the IP layer.
> Reading the spec, rfc9000, it looks like IP fragmentation is not allowed
> (Section 14).
>
> "UDP datagrams MUST NOT be fragmented at the IP layer. In IPv4
>
> IPv4 ], the
> Don't Fragment (DF) bit MUST be set if possible, to
> prevent fragmentation on the path."
>
>
> Also, it doesn't seem to be respecting the client's endpoint
> max_udp_payload_size.
>
>
> Can you please confirm if this is desired ?

Hi Lucas,

thank you for the feedback.

Of course, 65K datagrams is not something expected. It looks like GSO is
not working properly in your case. The expected result is that kernel
will split 65K buffer into smaller UDP datagrams of specified (segment) size,
and this segment size respects QUIC settings.

Do you see it in the wire? If yes, please share output of configure
script, debug log [1], and output of 'nginx -T'.
Are you running nginx on hardware directly or is it some virtual machine?
NIC/interface details are valuable (ethtool -k ,
ip link show ).


[1] http://nginx.org/en/docs/debugging_log.html
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: QUIC and HTTP/3 roadmap blog post

2021-07-16 Thread Vladimir Homutov
On Tue, Jul 13, 2021 at 05:29:15PM +0530, Raminda Subashana wrote:
> Hi Maxim,
>
> Just tested nginx-quic release and there is a performance issue. I compared
> it with Cloudflare quic experimental release which is based on nginx 1.16.
>
> It is almost 3 times slower than 1.16. Below config worked for me and it
> never advertised h3-29. if you have specific config file to test appreciate
> if you can share
>

Hello Raminda,

I've just looked at your results (in your letter 14/07 with PDFs
attached), and here is a summary:

-+--+-+
metric   | nginx-1.16.1 | nginx-quic  |
-+--+-+
 |  | |
avg rps  | 25   | 25  |
max rps  | 80   | 61  |
 |  | |
avg resp | 564  | 597 |
95% resp | 570  | 591 |
max resp | 1550 | 1342|
 |  | |
-+--+-+
 |  | |
FCP* | 0.4 s| 0.6 s   |
SI   | 0.8 s| 3.7 s   |
LCP  | 0.4 s| 0.9 s   |
TTI  | 0.5 s| 1.9 s   |
TBT  | 0 ms | 0 ms|
CLS  | 0.016| 0.015   |
 |  | |
Rx   | 240.973  | 240.489 |
Tx   | 388.72   | 388.524 |
-+--+-+

* First Contentful Paint, Speed Index,
Largest Contentful Paint, Time To Interactive,
Total Blocking Time, Cumulative Layout Shift

Looking at it, I don't see any real difference, except in metrics
related to rendering, like some syntetic 'Speed Index'.

You may want to dive into details how your application interacts with
server and find out what happens, if such results are repeatable. Maybe
some difference in HTTP/3 implementation affect it, but I have no idea
how this index is calculated.

Also, 25 rps is really low load, unless your system is a very slow
machine. What are the parameters of your machine?

Finally, I'd like to know, how did you manage to get QUIC HTTP/3 support
in k6.io ? I don't see it in opensource version?  Is it some dev branch?





___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: QUIC and HTTP/3 roadmap blog post

2021-07-15 Thread Vladimir Homutov

14.07.2021 13:39, Raminda Subashana writes:

Hi Vladimir,

Please see below; details & herewith attached another detail report as a 
PDF. I tested with Magento 2.4.2 & below results based on it. PHP 7.4 on 
Ubuntu 20.04 LTS




Hi Raminda,

thank you for the feedback!

can you please send full nginx config (to produce, run 'nginx -T')
and nginx configure options (nginx -V).

It would be also interesting to see results of vanilla nginx with https 
(and TLS 1.3) as a baseline.


What was the request used for testing? Is it request for some static 
file ? Of which size?


Thank you.
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: QUIC and HTTP/3 roadmap blog post

2021-07-15 Thread Vladimir Homutov

13.07.2021 15:42, Marcin Wanat пишет:

Hi Maxim,

does Nginx have plans to adopt BBR as congestion control when using QUIC ?

Regards,
Marcin Wanat



Hi Marcin Wanat,

Short-term, there are no such plans. We still have plenty of things to 
do. Currently for congestion we use what is described in RFC 9002.

There are no objections in general to introduction of other algorithms.

Any feedback with real statistics how we behave under different 
circumstances will be useful.


Thank you for question!
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Re: QUIC and HTTP/3 roadmap blog post

2021-07-13 Thread Vladimir Homutov
On Tue, Jul 13, 2021 at 06:55:14PM +1000, Mathew Heard wrote:
> Hi Maxim,
>
> Really interesting read.
>
> Do you have any plans for resolving the SIGHUP causes session closure
> issues that currently exist with nginx-quic? The closure of long lived
> connections has been a thorn in the side of people doing HTTP/1.1 web
> sockets (and probably HTTP/2 push) for many years. With HTTP/3 (QUIC)
> it's even more pronounced.
>
> From my point of view its the single biggest obstacle to the QUIC
> upgrade. as a user.
>
> Regards,
> Mathew

Hi Mathew,

connections are handled in worker processes, and reload means running
new worker processes, that don't have state for existing connections.
QUIC doesn't change how nginx handles connections, so there are no
specific plans to change it.

Can you please elaborate how HTTP/3 makes things worse from your
perspective?
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: QUIC and HTTP/3 roadmap blog post

2021-07-13 Thread Vladimir Homutov
On Tue, Jul 13, 2021 at 05:29:15PM +0530, Raminda Subashana wrote:
> Hi Maxim,
>
> Just tested nginx-quic release and there is a performance issue. I compared
> it with Cloudflare quic experimental release which is based on nginx 1.16.
>
> It is almost 3 times slower than 1.16. Below config worked for me and it
> never advertised h3-29. if you have specific config file to test appreciate
> if you can share
>
> server {
> listen 443 ssl;  # TCP listener for HTTP/1.1
> listen 443 http3 reuseport;  # UDP listener for QUIC+HTTP/3
>
> ssl_protocols   TLSv1.3; # QUIC requires TLS 1.3
> ssl_certificate ssl/www.example.com.crt;
> ssl_certificate_key ssl/www.example.com.key;
>
> add_header Alt-Svc 'h3=":443"';   # Advertise that HTTP/3 is available
> add_header QUIC-Status $quic; # Sent when QUIC was used
> }

Hi Raminda,

Can you please describe how do you measure performance? What clients do
you use?  Full nginx configuration would be nice to see also.
Do you measure request time or maybe overall throughput or something
else. Any details are appreciated.

Please ensure that for perforamnce tests you use nginx built without debug,
as it produces quite a lot messages if built in debug mode.


___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


[nginx] Core: added the ngx_rbtree_data() macro.

2021-06-21 Thread Vladimir Homutov
details:   https://hg.nginx.org/nginx/rev/0c5e84096d99
branches:  
changeset: 7875:0c5e84096d99
user:  Vladimir Homutov 
date:  Mon Jun 21 09:42:43 2021 +0300
description:
Core: added the ngx_rbtree_data() macro.

diffstat:

 src/core/ngx_rbtree.h   |  3 +++
 src/core/ngx_resolver.c |  4 +---
 src/event/ngx_event_timer.c |  4 ++--
 3 files changed, 6 insertions(+), 5 deletions(-)

diffs (48 lines):

diff -r d1079d6b2f19 -r 0c5e84096d99 src/core/ngx_rbtree.h
--- a/src/core/ngx_rbtree.h Fri Jun 18 04:00:21 2021 +0300
+++ b/src/core/ngx_rbtree.h Mon Jun 21 09:42:43 2021 +0300
@@ -47,6 +47,9 @@ struct ngx_rbtree_s {
 (tree)->sentinel = s; \
 (tree)->insert = i
 
+#define ngx_rbtree_data(node, type, link) \
+(type *) ((u_char *) (node) - offsetof(type, link))
+
 
 void ngx_rbtree_insert(ngx_rbtree_t *tree, ngx_rbtree_node_t *node);
 void ngx_rbtree_delete(ngx_rbtree_t *tree, ngx_rbtree_node_t *node);
diff -r d1079d6b2f19 -r 0c5e84096d99 src/core/ngx_resolver.c
--- a/src/core/ngx_resolver.c   Fri Jun 18 04:00:21 2021 +0300
+++ b/src/core/ngx_resolver.c   Mon Jun 21 09:42:43 2021 +0300
@@ -51,9 +51,7 @@ typedef struct {
 } ngx_resolver_an_t;
 
 
-#define ngx_resolver_node(n) \
-(ngx_resolver_node_t *)  \
-((u_char *) (n) - offsetof(ngx_resolver_node_t, node))
+#define ngx_resolver_node(n)  ngx_rbtree_data(n, ngx_resolver_node_t, node)
 
 
 static ngx_int_t ngx_udp_connect(ngx_resolver_connection_t *rec);
diff -r d1079d6b2f19 -r 0c5e84096d99 src/event/ngx_event_timer.c
--- a/src/event/ngx_event_timer.c   Fri Jun 18 04:00:21 2021 +0300
+++ b/src/event/ngx_event_timer.c   Mon Jun 21 09:42:43 2021 +0300
@@ -73,7 +73,7 @@ ngx_event_expire_timers(void)
 return;
 }
 
-ev = (ngx_event_t *) ((char *) node - offsetof(ngx_event_t, timer));
+ev = ngx_rbtree_data(node, ngx_event_t, timer);
 
 ngx_log_debug2(NGX_LOG_DEBUG_EVENT, ev->log, 0,
"event timer del: %d: %M",
@@ -113,7 +113,7 @@ ngx_event_no_timers_left(void)
  node;
  node = ngx_rbtree_next(_event_timer_rbtree, node))
 {
-ev = (ngx_event_t *) ((char *) node - offsetof(ngx_event_t, timer));
+ev = ngx_rbtree_data(node, ngx_event_t, timer);
 
 if (!ev->cancelable) {
 return NGX_AGAIN;
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [nginx-quic]

2021-06-14 Thread Vladimir Homutov

14.06.2021 19:43, Lucas Cuminato пишет:

Hi, Vladimir, thanks for replying.

I'm not using any protocol over QUIC, just using QUIC to send/receive 
raw data to/from my application and the server, and having nginx proxy 
it to a TCP server.
I do have a proxy_pass configured in my setup. I just omitted for 
simplicity.


R,
Lucas.


Ok, so you have custom backend that knows what to do with QUIC streams?
And you backend is TCP-based? Sounds quite interesting. Or does it deal
with single stream only?

Anyway, right now it fails at ALPN stage. Probably, in future, we may
introduce some configuration directive to control it. It is not yet
absolutely clear how the stream module should deal with quic.

Yoy may want to try to copy the code wich sets ALPN callback from 
http_quic module and provides some meaningful value for protocol.





On Mon, Jun 14, 2021 at 11:35 AM Vladimir Homutov <mailto:v...@nginx.com>> wrote:


14.06.2021 18:08, Lucas Cuminato пишет:
 > Hello,
 >
 > Not sure If this is a bug in nginx-quic or if I'm not configuring
 > it correctly but when trying to use nginx-quic with the following
settings.
 >
 > stream {
 >      server {
 >          listen  quic reuseport;
 >          ssl_session_cache off;
 >          ssl_client_certificate ca.pem
 >          ssl_verify_client on;
 >          ssl_session_tickets off;
 >          ssl_certificate         cert.pem
 >          ssl_certificate_key    key.pem;
 >          ssl_protocols       TLSv1.3;
 >      }
 > }
 >
 > and using a standalone application that uses ngtcp2 to try to
connect to
 > nginx-quic, I get a TLS alert saying that "No application protocol".
 > I've tracked this down and it seems like nginx-quic is not
setting any
 > ALPN for the SSL context when using QUIC as a stream (in
 > ngx_stream_ssl_module.c).
 > It does it set it when using QUIC as HTTP
(in ngx_http_ssl_module.c).
 > Now, I believe ALPN is mandatory for QUIC according to the
 > QUIC-TRANSPORT draft, so this might be a bug.
 > By copying the code done in ngx_http_ssl_module.c for setting the
ALPN
 > and using it in ngx_stream_ssl_module.c, I was able to make my
 > standalone app connect and transfer data, but not sure
 > if this is the right fix.
 >
 > R,
 > Lucas.
 >
Hello,
this is expected with stream module.
ALPN is required, but is not clear what protocol (http3? other protocol
over quic?) is going to be used.
Can you please elaborate your use case? What are you going to achieve?
Also, the suggested configuration is not going to work, since you don't
have any content handling module (i.e. proxy_pass or return).





___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [nginx-quic]

2021-06-14 Thread Vladimir Homutov

14.06.2021 18:08, Lucas Cuminato пишет:

Hello,

Not sure If this is a bug in nginx-quic or if I'm not configuring 
it correctly but when trying to use nginx-quic with the following settings.


stream {
     server {
         listen  quic reuseport;
         ssl_session_cache off;
         ssl_client_certificate ca.pem
         ssl_verify_client on;
         ssl_session_tickets off;
         ssl_certificate         cert.pem
         ssl_certificate_key    key.pem;
         ssl_protocols       TLSv1.3;
     }
}

and using a standalone application that uses ngtcp2 to try to connect to 
nginx-quic, I get a TLS alert saying that "No application protocol".
I've tracked this down and it seems like nginx-quic is not setting any 
ALPN for the SSL context when using QUIC as a stream (in 
ngx_stream_ssl_module.c).
It does it set it when using QUIC as HTTP (in ngx_http_ssl_module.c). 
Now, I believe ALPN is mandatory for QUIC according to the 
QUIC-TRANSPORT draft, so this might be a bug.
By copying the code done in ngx_http_ssl_module.c for setting the ALPN 
and using it in ngx_stream_ssl_module.c, I was able to make my 
standalone app connect and transfer data, but not sure

if this is the right fix.

R,
Lucas.


Hello,
this is expected with stream module.
ALPN is required, but is not clear what protocol (http3? other protocol 
over quic?) is going to be used.

Can you please elaborate your use case? What are you going to achieve?
Also, the suggested configuration is not going to work, since you don't
have any content handling module (i.e. proxy_pass or return).


___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [QUIC][BUG] function 'ngx_hkdf_extract ' has memory leak when use OPENSSL but not BoringSSL.

2021-03-12 Thread Vladimir Homutov
On Tue, Mar 09, 2021 at 10:17:43PM -0500, lingtao.klt wrote:
> In ngx_hkdf_expand, when use OPENSSL, the *pctx need to  be free.
>
>
> ```
>
> static ngx_int_t
> ngx_hkdf_expand(u_char *out_key, size_t out_len, const EVP_MD *digest,
> const uint8_t *prk, size_t prk_len, const u_char *info, size_t
> info_len)
> {
> #ifdef OPENSSL_IS_BORINGSSL
> if (HKDF_expand(out_key, out_len, digest, prk, prk_len, info, info_len)
> == 0)
> {
> return NGX_ERROR;
> }
> #else
>
> EVP_PKEY_CTX  *pctx;
>
> pctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
>
> if (EVP_PKEY_derive_init(pctx) <= 0) {
> return NGX_ERROR;
> }
>
> if (EVP_PKEY_CTX_hkdf_mode(pctx, EVP_PKEY_HKDEF_MODE_EXPAND_ONLY) <= 0)
> {
> return NGX_ERROR;
> }
>
> if (EVP_PKEY_CTX_set_hkdf_md(pctx, digest) <= 0) {
> return NGX_ERROR;
> }
>
> if (EVP_PKEY_CTX_set1_hkdf_key(pctx, prk, prk_len) <= 0) {
> return NGX_ERROR;
> }
>
> if (EVP_PKEY_CTX_add1_hkdf_info(pctx, info, info_len) <= 0) {
> return NGX_ERROR;
> }
>
> if (EVP_PKEY_derive(pctx, out_key, _len) <= 0) {
> return NGX_ERROR;
> }
>
> #endif
>
> return NGX_OK;
> }
>
> ```
Thank you for reporting, this was fixed:

http://hg.nginx.org/nginx-quic/rev/1c48629cfa74
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx


Re: [QUIC] When old worker listen fd detach ebpf reuseport group when reload

2021-03-09 Thread Vladimir Homutov

10.03.2021 06:17, Gao,Yan(ACG VCP) пишет:

Hello Vladimir Homutov,


I'm not sure I understand what you are trying to do.



Do you have some issues with existing quic implementations in nginx?


I just want to know how nginx handle old and new quic connections when 
reload.


Nginx keep quic connections open when reload to complete old connections.

But new connections can still be handled by old workers.

Can the listening fd detach from reuseport group with keeping open, as 
kernel says, ebpf only look up an unconnected socket for a packet (UDP)


Gao,Yan(ACG VCP)



Each worker process has it's own socket, identified by SO_COOKIE.
Such sockets belong to same reuseport group. BPF is used to route
packets with the same key (injected into DCID when connection is 
established) to the same socket.


The reload part is not yet complete. New connections may reach old
workers. Since worker knows it is terminating, it will not accept
such connection. Client will retry, and next time it will probably
reach new worker (or the old one again).

You cannot touch old socket, since it is needed to work with existing
connections in old worker (and it needs to be in reuseport group, so 
that packets could reach proper worker).


As I said, there still work to do in regard to reloads and upgrades.
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [QUIC] When old worker listen fd detach ebpf reuseport group when reload?

2021-03-09 Thread Vladimir Homutov

09.03.2021 17:43, Gao,Yan(ACG VCP) пишет:

We cannot close quic fd to let old session complete when reload.
Can detach ebpf reuseport group manually when ngx_close_listening_sockets?


Hello Gao,Yan,

I'm not sure I understand what you are trying to do.
Do you have some issues with existing quic implementations in nginx?



Linxu kernel
commit e57892f50a07953053dcb1e0c9431197e569c258
Merge: bfdfa51702de 0ab5539f8584
Author: Alexei Starovoitov 
Date:   Fri Jul 17 20:18:18 2020 -0700

 Merge branch 'bpf-socket-lookup'

 Jakub Sitnicki says:

 BPF sk_lookup program runs when transport layer is looking up a listening
 socket for a new connection request (TCP), or when looking up an
 unconnected socket for a packet (UDP).

 To select a socket BPF program fetches it from a map holding socket
 references, like SOCKMAP or SOCKHASH, calls bpf_sk_assign(ctx, sk, ...)
 helper to record the selection, and returns SK_PASS code. Transport layer
 then uses the selected socket as a result of socket lookup.



___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel



___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

  1   2   3   >