Re: [PATCH 00 of 12] HTTP/3 proxying to upstreams
Hello!

On Thu, Dec 28, 2023 at 05:23:38PM +0300, Vladimir Homutov via nginx-devel wrote:

> On Thu, Dec 28, 2023 at 04:31:41PM +0300, Maxim Dounin wrote:
> > Hello!
> >
> > On Wed, Dec 27, 2023 at 04:17:38PM +0300, Vladimir Homutov via
> > nginx-devel wrote:
> >
> > > On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:
> > > > Hello!
> > > >
> > > > On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via
> > > > nginx-devel wrote:
> > > >
> > > > > Hello, everyone,
> > > > >
> > > > > and Merry Christmas to all!
> > > > >
> > > > > I'm a developer of an nginx fork Angie. Recently we implemented
> > > > > HTTP/3 proxy support in our fork [1].
> > > > >
> > > > > We'd like to contribute this functionality to the nginx OSS
> > > > > community. Hence here is a patch series backported from Angie to
> > > > > the current head of the nginx mainline branch (1.25.3).
> > > >
> > > > Thank you for the patches.
> > > >
> > > > Are there any expected benefits from HTTP/3 being used as a
> > > > protocol to upstream servers?
> > >
> > > Personally, I don't see much.
> > >
> > > Probably, faster connection establishment due to 0-RTT support
> > > (needs to be implemented) and better multiplexing (again, if
> > > implemented wisely). I have made some simple benchmarks, and it
> > > looks more or less similar to usual SSL connections.
> >
> > Thanks for the details.
> >
> > Multiplexing is available since the introduction of the FastCGI
> > protocol, yet to see it working in upstream connections. As for
> > 0-RTT, using keepalive connections is probably more efficient anyway
> > (and not really needed for upstream connections in most cases as
> > well).
>
> With HTTP/3 and keepalive we can have just one QUIC "connection" per
> upstream server (in the extreme). We perform the heavy handshake once
> and leave it open. Next we just create HTTP/3 streams to perform
> requests. They can perfectly run in parallel and use the same QUIC
> connection.
>
> Probably, this is something worth implementing, with limitations of
> course: we don't want to mix requests from different (classes of)
> clients in the same connection, we don't want eternal life of such a
> connection, and we need means to control the level of such
> multiplexing.

Multiplexing has various downsides: the already mentioned security
implications, issues with balancing requests between upstream entities
not directly visible to the client (such as different worker processes),
and added complexity. And, as already mentioned, it is not something new
in HTTP/3.

[...]

--
Maxim Dounin
http://mdounin.ru/

___
nginx-devel mailing list
nginx-devel@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx-devel
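[Editorial note: the lifetime and "level of multiplexing" controls discussed above already have rough analogues in nginx's existing upstream keepalive machinery for TCP/SSL connections. A sketch using stock ngx_http_upstream_module directives; the server address and the numeric values are purely illustrative:]

```nginx
upstream backend {
    server 127.0.0.1:8443;

    keepalive 16;             # cache at most 16 idle connections per worker
    keepalive_requests 1000;  # recycle a connection after 1000 requests...
    keepalive_time 1h;        # ...or after one hour of total lifetime
}

server {
    listen 80;

    location / {
        proxy_pass https://backend;

        # HTTP/1.1 with a cleared Connection header is required
        # for keepalive to upstream servers
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

A QUIC-aware equivalent would additionally need a cap on concurrent streams per cached connection, which is the multiplexing-level control the discussion above asks for.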
Re: [PATCH 00 of 12] HTTP/3 proxying to upstreams
On Thu, 28 Dec 2023 17:23:38 +0300, Vladimir Homutov via nginx-devel wrote:

> On Thu, Dec 28, 2023 at 04:31:41PM +0300, Maxim Dounin wrote:
> > Hello!
> >
> > On Wed, Dec 27, 2023 at 04:17:38PM +0300, Vladimir Homutov via
> > nginx-devel wrote:
> >
> > > On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:
> > > > Hello!
> > > >
> > > > On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via
> > > > nginx-devel wrote:
> > > >
> > > > > Hello, everyone,
> > > > >
> > > > > and Merry Christmas to all!
> > > > >
> > > > > I'm a developer of an nginx fork Angie. Recently we implemented
> > > > > HTTP/3 proxy support in our fork [1].
> > > > >
> > > > > We'd like to contribute this functionality to the nginx OSS
> > > > > community. Hence here is a patch series backported from Angie to
> > > > > the current head of the nginx mainline branch (1.25.3).
> > > >
> > > > Thank you for the patches.
> > > >
> > > > Are there any expected benefits from HTTP/3 being used as a
> > > > protocol to upstream servers?
> > >
> > > Personally, I don't see much.
> > >
> > > Probably, faster connection establishment due to 0-RTT support
> > > (needs to be implemented) and better multiplexing (again, if
> > > implemented wisely). I have made some simple benchmarks, and it
> > > looks more or less similar to usual SSL connections.
> >
> > Thanks for the details.
> >
> > Multiplexing is available since the introduction of the FastCGI
> > protocol, yet to see it working in upstream connections. As for
> > 0-RTT, using keepalive connections is probably more efficient anyway
> > (and not really needed for upstream connections in most cases as
> > well).
>
> With HTTP/3 and keepalive we can have just one QUIC "connection" per
> upstream server (in the extreme). We perform the heavy handshake once
> and leave it open. Next we just create HTTP/3 streams to perform
> requests. They can perfectly run in parallel and use the same QUIC
> connection.
>
> Probably, this is something worth implementing, with limitations of
> course: we don't want to mix requests from different (classes of)
> clients in the same connection, we don't want eternal life of such a
> connection, and we need means to control the level of such
> multiplexing.

Those heavy handshakes wouldn't be the only concern either...

Lack of upstream multiplexing has come up as a concern in the past with
the grpc module (which lacks it), due to the amplification effect of
client-side h2 connections and streams being translated into x*y
upstream connections. This poses a danger of ephemeral port exhaustion
when targeting relatively few upstream servers (such as proxying to an
L4 load balancer instead of directly to application servers). This
necessitates provisioning a ton of VIPs and using proxy_bind (which
isn't always practical / is a pain). It would be exactly the same for h3
(and more so once grpc over h3 eventually becomes solid, especially
bidi).

> > > > [...]
> > > >
> > > > > Probably, the HTTP/3 proxy should be implemented in a separate
> > > > > module. Currently it is a patch to the HTTP proxy module to
> > > > > minimize boilerplate.
> > > >
> > > > Sure. I'm very much against the idea of mixing different upstream
> > > > protocols in a single protocol module.
> > >
> > > noted.
> > >
> > > > (OTOH, there are some uncertain plans to make the proxy module
> > > > able to work with other protocols based on the scheme, such as in
> > > > "proxy_pass fastcgi://127.0.0.1:9000;". This is mostly irrelevant
> > > > though, and might never happen.)
> > >
> > > well, currently we have separate proxying modules that are similar
> > > enough to think about merging them as suggested. Not sure if one
> > > big module with methods will be worth it, as the semantics are
> > > slightly different.
> > >
> > > proxy modules are already addons on top of the upstream module,
> > > which does the heavy lifting. What requires improvement is probably
> > > the configuration, which makes the user remember many similar
> > > directives doing the same thing but for different protocols.
> >
> > Yep, making things easier to configure (and modify, if something
> > related to configuration directives is changed or an additional
> > protocol is added) is the main motivator. Still, there are indeed
> > differences between protocol modules, and this makes a single module
> > inconvenient sometimes. As such, plans are uncertain (and the
> > previous attempt to do this failed miserably).
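[Editorial note: the proxy_bind workaround mentioned above typically looks something like the sketch below, spreading upstream connections across several local source addresses so that each one contributes its own ephemeral port range. All addresses and the balancer host name are illustrative; `proxy_bind` has accepted variables since nginx 1.11.2:]

```nginx
# Distribute requests across four local source addresses.
split_clients $request_id $proxy_local_addr {
    25%   10.0.0.10;
    25%   10.0.0.11;
    25%   10.0.0.12;
    *     10.0.0.13;
}

server {
    listen 80;

    location / {
        # Each local address quadruples the usable port space
        # toward the single L4 balancer VIP.
        proxy_bind $proxy_local_addr;
        proxy_pass https://l4-balancer.example.com;
    }
}
```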
Re: [PATCH 00 of 12] HTTP/3 proxying to upstreams
On Thu, Dec 28, 2023 at 04:31:41PM +0300, Maxim Dounin wrote:

> Hello!
>
> On Wed, Dec 27, 2023 at 04:17:38PM +0300, Vladimir Homutov via
> nginx-devel wrote:
>
> > On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:
> > > Hello!
> > >
> > > On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via
> > > nginx-devel wrote:
> > >
> > > > Hello, everyone,
> > > >
> > > > and Merry Christmas to all!
> > > >
> > > > I'm a developer of an nginx fork Angie. Recently we implemented
> > > > HTTP/3 proxy support in our fork [1].
> > > >
> > > > We'd like to contribute this functionality to the nginx OSS
> > > > community. Hence here is a patch series backported from Angie to
> > > > the current head of the nginx mainline branch (1.25.3).
> > >
> > > Thank you for the patches.
> > >
> > > Are there any expected benefits from HTTP/3 being used as a
> > > protocol to upstream servers?
> >
> > Personally, I don't see much.
> >
> > Probably, faster connection establishment due to 0-RTT support
> > (needs to be implemented) and better multiplexing (again, if
> > implemented wisely). I have made some simple benchmarks, and it
> > looks more or less similar to usual SSL connections.
>
> Thanks for the details.
>
> Multiplexing is available since the introduction of the FastCGI
> protocol, yet to see it working in upstream connections. As for 0-RTT,
> using keepalive connections is probably more efficient anyway (and not
> really needed for upstream connections in most cases as well).

With HTTP/3 and keepalive we can have just one QUIC "connection" per
upstream server (in the extreme). We perform the heavy handshake once
and leave it open. Next we just create HTTP/3 streams to perform
requests. They can perfectly run in parallel and use the same QUIC
connection.

Probably, this is something worth implementing, with limitations of
course: we don't want to mix requests from different (classes of)
clients in the same connection, we don't want eternal life of such a
connection, and we need means to control the level of such multiplexing.

> > > [...]
> > >
> > > > Probably, the HTTP/3 proxy should be implemented in a separate
> > > > module. Currently it is a patch to the HTTP proxy module to
> > > > minimize boilerplate.
> > >
> > > Sure. I'm very much against the idea of mixing different upstream
> > > protocols in a single protocol module.
> >
> > noted.
> >
> > > (OTOH, there are some uncertain plans to make the proxy module able
> > > to work with other protocols based on the scheme, such as in
> > > "proxy_pass fastcgi://127.0.0.1:9000;". This is mostly irrelevant
> > > though, and might never happen.)
> >
> > well, currently we have separate proxying modules that are similar
> > enough to think about merging them as suggested. Not sure if one big
> > module with methods will be worth it, as the semantics are slightly
> > different.
> >
> > proxy modules are already addons on top of the upstream module, which
> > does the heavy lifting. What requires improvement is probably the
> > configuration, which makes the user remember many similar directives
> > doing the same thing but for different protocols.
>
> Yep, making things easier to configure (and modify, if something
> related to configuration directives is changed or an additional
> protocol is added) is the main motivator. Still, there are indeed
> differences between protocol modules, and this makes a single module
> inconvenient sometimes. As such, plans are uncertain (and the previous
> attempt to do this failed miserably).
>
> --
> Maxim Dounin
> http://mdounin.ru/
Re: [PATCH 00 of 12] HTTP/3 proxying to upstreams
Hello!

On Wed, Dec 27, 2023 at 04:17:38PM +0300, Vladimir Homutov via nginx-devel wrote:

> On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:
> > Hello!
> >
> > On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via
> > nginx-devel wrote:
> >
> > > Hello, everyone,
> > >
> > > and Merry Christmas to all!
> > >
> > > I'm a developer of an nginx fork Angie. Recently we implemented
> > > HTTP/3 proxy support in our fork [1].
> > >
> > > We'd like to contribute this functionality to the nginx OSS
> > > community. Hence here is a patch series backported from Angie to
> > > the current head of the nginx mainline branch (1.25.3).
> >
> > Thank you for the patches.
> >
> > Are there any expected benefits from HTTP/3 being used as a protocol
> > to upstream servers?
>
> Personally, I don't see much.
>
> Probably, faster connection establishment due to 0-RTT support (needs
> to be implemented) and better multiplexing (again, if implemented
> wisely). I have made some simple benchmarks, and it looks more or less
> similar to usual SSL connections.

Thanks for the details.

Multiplexing is available since the introduction of the FastCGI
protocol, yet to see it working in upstream connections. As for 0-RTT,
using keepalive connections is probably more efficient anyway (and not
really needed for upstream connections in most cases as well).

> > [...]
> >
> > > Probably, the HTTP/3 proxy should be implemented in a separate
> > > module. Currently it is a patch to the HTTP proxy module to
> > > minimize boilerplate.
> >
> > Sure. I'm very much against the idea of mixing different upstream
> > protocols in a single protocol module.
>
> noted.
>
> > (OTOH, there are some uncertain plans to make the proxy module able
> > to work with other protocols based on the scheme, such as in
> > "proxy_pass fastcgi://127.0.0.1:9000;". This is mostly irrelevant
> > though, and might never happen.)
>
> well, currently we have separate proxying modules that are similar
> enough to think about merging them as suggested. Not sure if one big
> module with methods will be worth it, as the semantics are slightly
> different.
>
> proxy modules are already addons on top of the upstream module, which
> does the heavy lifting. What requires improvement is probably the
> configuration, which makes the user remember many similar directives
> doing the same thing but for different protocols.

Yep, making things easier to configure (and modify, if something related
to configuration directives is changed or an additional protocol is
added) is the main motivator. Still, there are indeed differences
between protocol modules, and this makes a single module inconvenient
sometimes. As such, plans are uncertain (and the previous attempt to do
this failed miserably).

--
Maxim Dounin
http://mdounin.ru/
Re: [PATCH 00 of 12] HTTP/3 proxying to upstreams
On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:

> Hello!
>
> On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via
> nginx-devel wrote:
>
> > Hello, everyone,
> >
> > and Merry Christmas to all!
> >
> > I'm a developer of an nginx fork Angie. Recently we implemented
> > HTTP/3 proxy support in our fork [1].
> >
> > We'd like to contribute this functionality to the nginx OSS
> > community. Hence here is a patch series backported from Angie to the
> > current head of the nginx mainline branch (1.25.3).
>
> Thank you for the patches.
>
> Are there any expected benefits from HTTP/3 being used as a protocol
> to upstream servers?

Personally, I don't see much.

Probably, faster connection establishment due to 0-RTT support (needs to
be implemented) and better multiplexing (again, if implemented wisely).
I have made some simple benchmarks, and it looks more or less similar to
usual SSL connections.

> [...]
>
> > Probably, the HTTP/3 proxy should be implemented in a separate
> > module. Currently it is a patch to the HTTP proxy module to minimize
> > boilerplate.
>
> Sure. I'm very much against the idea of mixing different upstream
> protocols in a single protocol module.

noted.

> (OTOH, there are some uncertain plans to make the proxy module able to
> work with other protocols based on the scheme, such as in
> "proxy_pass fastcgi://127.0.0.1:9000;". This is mostly irrelevant
> though, and might never happen.)

well, currently we have separate proxying modules that are similar
enough to think about merging them as suggested. Not sure if one big
module with methods will be worth it, as the semantics are slightly
different.

proxy modules are already addons on top of the upstream module, which
does the heavy lifting. What requires improvement is probably the
configuration, which makes the user remember many similar directives
doing the same thing but for different protocols.
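[Editorial note: the directive duplication described above is easy to see in a configuration that talks to two different backend protocols; each protocol module carries its own copy of essentially the same settings. The addresses and timeout values are illustrative:]

```nginx
location /app {
    proxy_pass            http://127.0.0.1:8080;
    proxy_connect_timeout 5s;
    proxy_read_timeout    60s;
}

location /legacy {
    fastcgi_pass            127.0.0.1:9000;
    fastcgi_connect_timeout 5s;
    fastcgi_read_timeout    60s;
}
```

Under the scheme-based idea floated in the thread, both would presumably be expressed through a single proxy_pass (e.g. "proxy_pass fastcgi://127.0.0.1:9000;") with one shared set of directives.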
Re: [PATCH 00 of 12] HTTP/3 proxying to upstreams
Hello!

On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via nginx-devel wrote:

> Hello, everyone,
>
> and Merry Christmas to all!
>
> I'm a developer of an nginx fork Angie. Recently we implemented HTTP/3
> proxy support in our fork [1].
>
> We'd like to contribute this functionality to the nginx OSS community.
> Hence here is a patch series backported from Angie to the current head
> of the nginx mainline branch (1.25.3).

Thank you for the patches.

Are there any expected benefits from HTTP/3 being used as a protocol to
upstream servers?

[...]

> Probably, the HTTP/3 proxy should be implemented in a separate module.
> Currently it is a patch to the HTTP proxy module to minimize
> boilerplate.

Sure. I'm very much against the idea of mixing different upstream
protocols in a single protocol module.

(OTOH, there are some uncertain plans to make the proxy module able to
work with other protocols based on the scheme, such as in
"proxy_pass fastcgi://127.0.0.1:9000;". This is mostly irrelevant
though, and might never happen.)

[...]

--
Maxim Dounin
http://mdounin.ru/
[PATCH 00 of 12] HTTP/3 proxying to upstreams
Hello, everyone,

and Merry Christmas to all!

I'm a developer of an nginx fork Angie. Recently we implemented HTTP/3
proxy support in our fork [1].

We'd like to contribute this functionality to the nginx OSS community.
Hence here is a patch series backported from Angie to the current head
of the nginx mainline branch (1.25.3).

If you find patching and building nginx from source irritating in order
to test the feature, you can use the prebuilt packages of Angie [2].

[1] https://angie.software/en/http_proxy/#proxy-http-version
[2] https://angie.software/en/install/

Your feedback is welcome!

A minimal configuration example:

    location / {
        proxy_pass  https://http3-server.example.com:4433;
    }

You may also need to configure SNI using the appropriate values for the
"proxy_ssl_name" and "proxy_ssl_server_name" directives, as well as
certificates and other related things.

A number of proxy_http3_* directives are available to configure QUIC
settings.

For interop testing purposes, HQ support is available.

Below are technical details about the current state of the patch set.

*** TESTS ***

The patch set includes tests, which are added to the "t" directory for
convenience. Copy them to nginx-tests and run them as usual. Most of
them are proxy tests adapted for use with HTTP/3.

*** LIMITATIONS ***

The following features are NOT implemented:

 * Trailers: requires full trailers support in nginx first
 * Connection migration: does not seem necessary for proxying scenarios
 * 0-RTT: currently not supported

The SSL library requirements are the same as for the server-side
support.
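[Editorial note: the SNI-related settings mentioned above might look like the following in context. The host name, certificate path, and verification settings are illustrative; all directives shown are stock ngx_http_proxy_module directives:]

```nginx
location / {
    proxy_pass                     https://http3-server.example.com:4433;

    # Send the expected SNI name to the upstream server
    proxy_ssl_name                 http3-server.example.com;
    proxy_ssl_server_name          on;

    # Verify the upstream certificate against a trusted CA
    proxy_ssl_verify               on;
    proxy_ssl_trusted_certificate  /etc/ssl/certs/ca.pem;
}
```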
There are some interoperability issues when using different libraries on
the client and server sides: the combination of a client built with
openssl/compat and a server built with boringssl leads to a handshake
failure with an error:

>> SSL_do_handshake() failed (SSL: error:1132:SSL routines:
>> OPENSSL_internal:UNEXPECTED_COMPATIBILITY_MODE)

*** MULTIPLEXING ***

With keepalive disabled, the HTTP/3 connection to the backend is very
similar to a normal TCP SSL connection: the connection is established,
the handshake is performed, the request stream is created, and
everything is closed when the request is completed.

With keepalive enabled, the underlying QUIC connection is cached and can
be reused by another client. Each client uses its own QUIC connection.

Theoretically, it is possible to use only one QUIC connection to each
backend and use separate HTTP/3 streams to make requests. This is NOT
currently implemented, as it requires more changes to the upstream and
keepalive modules and has security implications.

*** INTERNALS ***

This is a first attempt at integrating the HTTP/3 proxy into nginx, so
all currently exposed interfaces are not final.

Probably, the HTTP/3 proxy should be implemented in a separate module.
Currently it is a patch to the HTTP proxy module to minimize
boilerplate.

Things that need improvement:

 - client interface: the way to create a client, start the handshake,
   and create the first stream to use for the request; the way SSL
   sessions are supported doesn't look good.

 - upstream interface: one way is to hide QUIC details and make it feel
   more SSL-like, maybe even a kind of SSL module. Probably a separate
   keepalive module for HTTP/3 is needed to allow some controlled level
   of multiplexing.

 - connection termination is quite tricky due to the handling of the
   underlying QUIC UDP connection and stream requests. Closing an
   HTTP/3 connection may be incorrect in some cases.

 - some interop tests still fail. This is partly due to the nature of
   the tests.
This part requires more work with hard-to-reproduce cases.