Hello!
On Wed, Dec 27, 2023 at 10:56:44AM +0800, Jiuzhou Cui wrote:
> Hello!
>
>
> # HG changeset patch
> # User Jiuzhou Cui
> # Date 1703645578 -28800
> # Wed Dec 27 10:52:58 2023 +0800
> # Node ID 474ae07e47272e435d81c0ca9e4867aae35c30ab
> # Parent ee40e2b1d0833b46128a357fbc84c6e23be9b
Hello!
On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via nginx-devel
wrote:
> Hello, everyone,
>
> and Merry Christmas to all!
>
> I'm a developer of Angie, an nginx fork. Recently we implemented
> HTTP/3 proxy support in our fork [1].
>
> We'd like to contribute this function
Thank you for your reply.
First, we ran into this problem, and this patch works for me.
My scenario is: after sending about 10-20 MB of the response body, we set:
1. limit_rate = 1KB
2. limit_rate_after = body_bytes_sent
3. proxy_buffering = "on" (I think this is the key issue)
At the request begin
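To make the scenario above concrete, here is a minimal body-filter sketch.
It is not the patch under discussion; the module name, the hard-coded 10 MB
threshold and the 1 KB/s rate are just placeholders standing in for whatever
the real module decides at runtime. It flips the same r->limit_rate /
r->limit_rate_after request fields once part of the response has gone out:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

static ngx_int_t ngx_http_throttle_body_filter(ngx_http_request_t *r,
    ngx_chain_t *in);
static ngx_int_t ngx_http_throttle_init(ngx_conf_t *cf);

static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_http_module_t  ngx_http_throttle_module_ctx = {
    NULL,                                  /* preconfiguration */
    ngx_http_throttle_init,                /* postconfiguration */

    NULL,                                  /* create main configuration */
    NULL,                                  /* init main configuration */

    NULL,                                  /* create server configuration */
    NULL,                                  /* merge server configuration */

    NULL,                                  /* create location configuration */
    NULL                                   /* merge location configuration */
};

ngx_module_t  ngx_http_throttle_module = {
    NGX_MODULE_V1,
    &ngx_http_throttle_module_ctx,         /* module context */
    NULL,                                  /* module directives */
    NGX_HTTP_MODULE,                       /* module type */
    NULL,                                  /* init master */
    NULL,                                  /* init module */
    NULL,                                  /* init process */
    NULL,                                  /* init thread */
    NULL,                                  /* exit thread */
    NULL,                                  /* exit process */
    NULL,                                  /* exit master */
    NGX_MODULE_V1_PADDING
};

static ngx_int_t
ngx_http_throttle_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    /* once roughly 10 MB have gone to the client (c->sent also counts
     * header bytes, which is close enough for a sketch), drop the rest
     * of the response to 1 KB/s; bumping limit_rate_after to the bytes
     * already sent keeps the earlier full-speed part of the transfer
     * from being charged against the new budget */

    if (r == r->main && r->connection->sent > 10 * 1024 * 1024) {
        r->limit_rate = 1024;
        r->limit_rate_after = (size_t) r->connection->sent;
    }

    return ngx_http_next_body_filter(r, in);
}

static ngx_int_t
ngx_http_throttle_init(ngx_conf_t *cf)
{
    ngx_http_next_body_filter = ngx_http_top_body_filter;
    ngx_http_top_body_filter = ngx_http_throttle_body_filter;

    return NGX_OK;
}

Registration is the standard filter-module postconfiguration hook; nothing
else about the build is unusual.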
On Wed, Dec 27, 2023 at 02:48:04PM +0300, Maxim Dounin wrote:
> Hello!
>
> On Mon, Dec 25, 2023 at 07:52:41PM +0300, Vladimir Homutov via nginx-devel
> wrote:
>
> > Hello, everyone,
> >
> > and Merry Christmas to all!
> >
> > I'm a developer of Angie, an nginx fork. Recently we implemented
> > HTTP/3 proxy support in our fork [1].
Hello!
On Wed, Dec 27, 2023 at 08:38:15PM +0800, Jiuzhou Cui wrote:
> Thank you for your reply.
>
> First, we ran into this problem, and this patch works for me.
>
> My scenario is: after sending about 10-20 MB of the response body, we set:
> 1. limit_rate = 1KB
> 2. limit_rate_after = body_bytes_sent
>
On Wed, Dec 13, 2023 at 06:06:59PM +0400, Roman Arutyunyan wrote:
> # HG changeset patch
> # User Roman Arutyunyan
> # Date 1702476295 -14400
> # Wed Dec 13 18:04:55 2023 +0400
> # Node ID 844486cdd43a32d10b78493d7e7b80e9e2239d7e
> # Parent 6c8595b77e667bd58fd28186939ed820f2e55e0e
> Stream:
> You modify r->limit_rate and r->limit_rate_after from your module
> after sending some parts of the response?
Yes. This isn't a bug but a requirement, so we need to handle it.
OK, I got your idea and no more questions. Thanks.
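The arithmetic behind moving r->limit_rate_after up to the bytes already
sent is easy to check on its own. The tiny standalone program below only
mirrors, roughly, the budget formula the write filter uses (it is not nginx
code, and the 15 MB / 10 s numbers are made up to match the scenario earlier
in the thread):

/* rough model of the write filter's rate budget:
 *   budget = limit_rate * (elapsed + 1) - (sent - limit_rate_after)   */

#include <stdio.h>

static long long
budget(long long limit_rate, long long elapsed_sec,
       long long sent, long long limit_rate_after)
{
    return limit_rate * (elapsed_sec + 1) - (sent - limit_rate_after);
}

int
main(void)
{
    long long rate = 1024;                  /* new limit: 1 KB/s   */
    long long elapsed = 10;                 /* seconds since start */
    long long sent = 15LL * 1024 * 1024;    /* ~15 MB already sent */

    /* limit_rate_after left alone: the full-speed bytes are charged
     * against the new 1 KB/s budget, the budget goes hugely negative,
     * and the connection would sit delayed for hours */
    printf("limit_rate_after = 0:    budget = %lld bytes\n",
           budget(rate, elapsed, sent, 0));

    /* limit_rate_after = bytes already sent: throttling starts cleanly
     * from the switch point */
    printf("limit_rate_after = sent: budget = %lld bytes\n",
           budget(rate, elapsed, sent, sent));

    return 0;
}

Which is exactly why the scenario above sets limit_rate_after to
body_bytes_sent at the moment the rate is lowered.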
At 2023-12-27 21:58:48, "Maxim Dounin" wrote:
> Hello!
>
>