On Monday 21 December 2015 13:42:27 Stefan Hellkvist wrote:
>
> > On 21 Dec 2015, at 13:36, Valentin V. Bartenev <[email protected]> wrote:
> >
> > On Monday 21 December 2015 13:18:43 Stefan Hellkvist wrote:
> >> On Mon, Dec 21, 2015 at 12:26 PM, Ruslan Ermilov <[email protected]> wrote:
> >>
> >>> On Mon, Dec 21, 2015 at 11:00:09AM +0100, Stefan Hellkvist wrote:
> >>>> Hi,
> >>>>
> >>>> From reading the code and the docs I have gotten the impression that
> >>>> limit_rate (and limit_rate_after) is per ngx_connection which (I think)
> >>>> means that it is per HTTP request and not per socket. Am I right in this
> >>>> conclusion or is the limit actually per socket/TCP connection?
> >>>
> >>> The docs at http://nginx.org/r/limit_rate clearly state that the limit
> >>> is set per request, and they describe one of the ways this limit can be
> >>> "avoided" by the client.
> >>>
> >>> The limit is implemented at the ngx_connection_t level, which is usually
> >>> mapped 1:1 to a physical connection.
> >>>
> >>
> >> In our case we have clients that use pipelining, where several requests
> >> share the same TCP session. In this case an ngx_connection_t is mapped 1:1
> >> to the requests and not to the physical socket, am I right?
> >>
> > [..]
> >
> > Regardless of the internal implementation, it's better to think of
> > "limit_rate" and "limit_rate_after" as currently working per request only.
> >
> > The ngx_connection_t is mapped to the physical socket, but the number
> > of sent bytes is reset to zero on each request in the connection.
>
>
> Interesting! So perhaps a quick fix for my current use case would be to avoid
> resetting the "sent bytes" counter on each request? In that case the limit
> would be counted per socket rather than per request. Probably not a generic
> solution that everybody would like, since it likely breaks other use cases,
> but perhaps something I can quickly try out on a private branch.
That will break limit_rate.
The other peculiarity of the current implementation is that it limits
the average rate, and the average is calculated by this formula:
rate = bytes_sent / (current_time - request_start_time)
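
To make that concrete, here is a minimal standalone sketch (not nginx code;
the function name rate_budget and its variables are invented for the example)
of how such an average-rate limit turns into a byte budget, including the
catch-up burst a sender gets after falling behind:

#include <stdio.h>

/*
 * With limit_rate bytes/second allowed on average, the budget after
 * "elapsed_sec" seconds is limit_rate * elapsed_sec minus what has
 * already been sent.  A sender that fell behind may therefore burst
 * until it catches up with the average.
 */
static long
rate_budget(long limit_rate, long elapsed_sec, long bytes_sent)
{
    long  allowed = limit_rate * elapsed_sec - bytes_sent;

    return allowed > 0 ? allowed : 0;
}

int
main(void)
{
    /* 100 KB/s limit; after 10 s only 200 KB have been sent, so the
     * average so far is 20 KB/s and roughly 800 KB may go out at once. */
    printf("budget: %ld bytes\n", rate_budget(100 * 1024, 10, 200 * 1024));

    return 0;
}
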
You may have better luck with the patch below (untested):
diff -r def9c9c9ae05 -r 9e66c0bf7efd src/http/ngx_http_write_filter_module.c
--- a/src/http/ngx_http_write_filter_module.c Sat Dec 12 10:32:58 2015 +0300
+++ b/src/http/ngx_http_write_filter_module.c Mon Dec 21 16:59:07 2015 +0300
@@ -219,7 +219,7 @@ ngx_http_write_filter(ngx_http_request_t
     }
 
     if (r->limit_rate) {
-        if (r->limit_rate_after == 0) {
+        if (c->requests == 1 && r->limit_rate_after == 0) {
             r->limit_rate_after = clcf->limit_rate_after;
         }
 
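
As a rough illustration of what the added "c->requests == 1" check does
(this is only a sketch with invented names, not nginx code): only the first
request on a keepalive/pipelined connection keeps the unthrottled
limit_rate_after head start, while later requests on the same connection are
rate-limited from their first byte.

#include <stdio.h>

/*
 * Sketch: how many bytes a request may send before throttling starts,
 * given its ordinal number on the connection.  With the patch applied,
 * the limit_rate_after allowance is granted only to the first request.
 */
static long
unthrottled_bytes(long request_number_on_conn, long limit_rate_after)
{
    return (request_number_on_conn == 1) ? limit_rate_after : 0;
}

int
main(void)
{
    long  limit_rate_after = 512 * 1024;   /* as if "limit_rate_after 512k;" */
    long  n;

    for (n = 1; n <= 3; n++) {
        printf("request %ld on this connection: %ld unthrottled bytes\n",
               n, unthrottled_bytes(n, limit_rate_after));
    }

    return 0;
}
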
wbr, Valentin V. Bartenev
_______________________________________________
nginx-devel mailing list
[email protected]
http://mailman.nginx.org/mailman/listinfo/nginx-devel