Re: [PATCH] Allow using nodelay=N semantics in limit_req configuration

2018-10-31 Thread Щучкин Петр
Hello, Maxim!

It took me some time to express the new parameter in words, but in the end it
is as simple as "start to delay after N requests". I agree it is more suitable
than nodelay=, and there is no need to warn about delay > burst. Thank you,
I'll be very happy to see this feature upstream.

PS There is now a small gotcha when trying to calculate the delay (in seconds)
based on the excess value reported in the log file: one should remember to
subtract the delay value from the excess to get the correct delay.

I am attaching the updated tests in case you find them useful:

changeset:   1390:6ca33dd13079
tag:         tip
user:        Peter Shchuchkin
date:        Sun Oct 28 11:34:41 2018 +0300
summary:     Tests: testing limit_req with "burst=A delay=B" configuration

diff -r 73a9504ae6fd -r 6ca33dd13079 limit_req.t
--- a/limit_req.t    Fri Oct 19 18:49:45 2018 +0300
+++ b/limit_req.t    Sun Oct 28 11:34:41 2018 +0300
@@ -21,7 +21,7 @@
 select STDERR; $| = 1;
 select STDOUT; $| = 1;
 
-my $t = Test::Nginx->new()->has(qw/http limit_req/)->plan(6);
+my $t = Test::Nginx->new()->has(qw/http limit_req/)->plan(8);
 
 $t->write_file_expand('nginx.conf', <<'EOF');
 
@@ -38,6 +38,7 @@
     limit_req_zone  $binary_remote_addr  zone=one:1m   rate=2r/s;
     limit_req_zone  $binary_remote_addr  zone=long:1m  rate=2r/s;
     limit_req_zone  $binary_remote_addr  zone=fast:1m  rate=1000r/s;
+    limit_req_zone  $binary_remote_addr  zone=mixed:1m  rate=10r/s;
 
     server {
         listen       127.0.0.1:8080;
@@ -56,6 +57,12 @@
         location /fast {
             limit_req    zone=fast  burst=1;
         }
+        location /mixed {
+            limit_req    zone=mixed  burst=2 delay=1;
+        }
+        location /mixed-pass {
+            limit_req    zone=mixed  burst=2 nodelay;
+        }
     }
 }
 
@@ -64,6 +71,8 @@
 $t->write_file('test1.html', 'XtestX');
 $t->write_file('long.html', "1234567890\n" x (1 << 16));
 $t->write_file('fast.html', 'XtestX');
+$t->write_file('mixed.html', 'XtestX');
+$t->write_file('mixed-pass.html', 'XtestX');
 $t->run();
 
 ###
@@ -94,4 +103,20 @@
 select undef, undef, undef, 0.1;
 like(http_get('/fast.html'), qr/^HTTP\/1.. 200 /m, 'negative excess');
 
+# make sure requests are delayed when 0 < delay < burst
+# this test should fail when using nodelay or when delay>=burst
+
+http_get('/mixed.html');
+http_get('/mixed.html');
+http_get('/mixed.html');
+like(http_get('/mixed.html'), qr/^HTTP\/1.. 200 /m, 'mixed request is not rejected');
+
+# make sure it is possible to partially fill up excess through /mixed.html and get the last request rejected
+# this test should fail when using "burst=A" with all requests delayed
+
+http_get('/mixed.html');
+http_get('/mixed.html');
+http_get('/mixed-pass.html');
+like(http_get('/mixed-pass.html'), qr/^HTTP\/1.. 503 /m, 'mixed request is rejected');
+
 ###
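
To make the gotcha above concrete, here is a minimal sketch of a
configuration using the "delay=" form the attached tests exercise; the zone
name, rate and numbers are only an illustration, they are not taken from the
patch or the tests:

    limit_req_zone  $binary_remote_addr  zone=example:1m  rate=10r/s;

    server {
        location / {
            # up to 5 excess requests are served without delay, further
            # excess requests are delayed, and anything above burst=10
            # is rejected (503 by default)
            limit_req  zone=example  burst=10  delay=5;
        }
    }

For a request logged with excess E, the delay to expect here is roughly
(E - 5) / 10 seconds, e.g. an excess of 7.000 means about 0.2 s, not 0.7 s.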
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel

Re: [PATCH] Allow using nodelay=N semantics in limit_req configuration

2018-10-29 Thread Maxim Dounin
Hello!

On Sun, Oct 28, 2018 at 05:58:45PM +0300, Peter Shchuchkin wrote:

> # HG changeset patch
> # User Peter Shchuchkin 
> # Date 1540737213 -10800
> #  Sun Oct 28 17:33:33 2018 +0300
> # Node ID 70c0d476999d9b893c644606624134248ac7abad
> # Parent  874d47ac871a4b62fbe0ff5d469a8ad7bc5a4160
> Allow using nodelay=N semantics in limit_req configuration
> 
> This allows using reasonably low limits without forcing a delay on normal
> users.
>
> In addition to the standard "burst=A nodelay" form, the following form of
> limit_req may now be used:
> burst=A nodelay=B, where 0 <= B <= burst
>
> burst=A nodelay=0 means the same as just "burst=A"
> burst=A nodelay=A means the same as "burst=A nodelay"
> burst=A nodelay=B means that the first B requests matching the
> limit_req_zone key will not be delayed, and subsequent requests will be
> delayed. The delay is calculated against the excess over B, so the
> (B+1)-th request has an effective excess of 1.
>
> When using limit_req with nodelay, the responsibility for limiting
> request speed lies with the client.
> If a client does not want to, or cannot, correctly limit its speed, it
> will get 503 errors and you will get numerous messages in the error and
> access logs.
> When using limit_req without nodelay, every request that arrives faster
> than the expected speed is delayed. This is not always convenient.
> Sometimes you want to allow a normal client to make a bunch of requests
> as fast as possible while still keeping the configured limit on request
> speed.
>
> Using these new semantics you get the best of both worlds. Specifying
> burst=A nodelay=B allows clients to make B requests without any delay
> (and without warnings in the error log). If a client exceeds B requests,
> further requests are delayed, effectively limiting the client's rps to
> the desired limit without returning 503 errors. Thus one can ensure
> maximum speed for clients with the expected usage profile while limiting
> all other clients to a certain speed without errors.

[...]

I posted a patch for this a while ago, see here:

http://mailman.nginx.org/pipermail/nginx-devel/2016-April/008136.html

Please take a look to see if it works for you, and/or whether you have 
any comments.

The most noticeable difference I see is in how the delay is calculated, 
and your variant of counting only the excess above the nodelay level is 
probably more logical.

-- 
Maxim Dounin
http://mdounin.ru/
___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


Re: [PATCH] Allow using nodelay=N semantics in limit_req configuration

2018-10-28 Thread Щучкин Петр
Hello!

Sorry, I've duplicated this message.

Additionally, I've tried to send the nginx-tests changes with hg email in the 
same thread, but I'm not sure I succeeded; instead, a new thread seems to have 
been created with the changes to the tests project.

Please review these changes; I hope they may be really useful for some 
real-world use cases.

I'm using nodelay_val = -1 when parsing the config to allow nodelay=0, and 
thus to be able to distinguish between the value not being set and the value 
being set to 0.

Also, the excess variable is cast to (ngx_uint_t) in several places, because 
I've seen such an example in the same file when comparing burst to excess.


Thank you!



___
nginx-devel mailing list
nginx-devel@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx-devel


[PATCH] Allow using nodelay=N semantics in limit_req configuration

2018-10-28 Thread Peter Shchuchkin
# HG changeset patch
# User Peter Shchuchkin 
# Date 1540737213 -10800
#  Sun Oct 28 17:33:33 2018 +0300
# Node ID 70c0d476999d9b893c644606624134248ac7abad
# Parent  874d47ac871a4b62fbe0ff5d469a8ad7bc5a4160
Allow using nodelay=N semantics in limit_req configuration

This allows using reasonably low limits without forcing a delay on normal
users.

In addition to the standard "burst=A nodelay" form, the following form of
limit_req may now be used:
burst=A nodelay=B, where 0 <= B <= burst

burst=A nodelay=0 means the same as just "burst=A"
burst=A nodelay=A means the same as "burst=A nodelay"
burst=A nodelay=B means that the first B requests matching the limit_req_zone
key will not be delayed, and subsequent requests will be delayed. The delay is
calculated against the excess over B, so the (B+1)-th request has an effective
excess of 1.

When using limit_req with nodelay, the responsibility for limiting request
speed lies with the client.
If a client does not want to, or cannot, correctly limit its speed, it will
get 503 errors and you will get numerous messages in the error and access
logs.
When using limit_req without nodelay, every request that arrives faster than
the expected speed is delayed. This is not always convenient. Sometimes you
want to allow a normal client to make a bunch of requests as fast as possible
while still keeping the configured limit on request speed.

Using these new semantics you get the best of both worlds. Specifying
burst=A nodelay=B allows clients to make B requests without any delay (and
without warnings in the error log). If a client exceeds B requests, further
requests are delayed, effectively limiting the client's rps to the desired
limit without returning 503 errors. Thus one can ensure maximum speed for
clients with the expected usage profile while limiting all other clients to a
certain speed without errors.
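
For illustration, a configuration using the new parameter could look like
this (the zone name, rate and numbers below are arbitrary examples and are
not part of the patch):

    limit_req_zone  $binary_remote_addr  zone=api:1m  rate=5r/s;

    server {
        listen       127.0.0.1:8080;

        location /api/ {
            # the first 10 excess requests are served without delay,
            # excess requests above 10 are delayed down to 5r/s, and
            # anything above burst=20 is rejected (503 by default)
            limit_req  zone=api  burst=20  nodelay=10;
        }
    }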

diff -r 874d47ac871a -r 70c0d476999d src/http/modules/ngx_http_limit_req_module.c
--- a/src/http/modules/ngx_http_limit_req_module.c    Fri Oct 19 13:50:36 2018 +0800
+++ b/src/http/modules/ngx_http_limit_req_module.c    Sun Oct 28 17:33:33 2018 +0300
@@ -499,12 +499,11 @@
 
     excess = *ep;
 
-    if (excess == 0 || (*limit)->nodelay) {
+    if (excess == 0 || ((ngx_uint_t) excess < (*limit)->nodelay)) {
         max_delay = 0;
-
     } else {
         ctx = (*limit)->shm_zone->data;
-        max_delay = excess * 1000 / ctx->rate;
+        max_delay = (excess - (*limit)->nodelay) * 1000 / ctx->rate;
     }
 
     while (n--) {
@@ -544,11 +543,16 @@
 
         ctx->node = NULL;
 
-        if (limits[n].nodelay) {
+        /*
+         * Delay when:
+         * excess > 0, nodelay = 0
+         * excess > 0, nodelay > 0, excess >= nodelay
+         */
+        if ((ngx_uint_t) excess < limits[n].nodelay) {
             continue;
         }
 
-        delay = excess * 1000 / ctx->rate;
+        delay = (excess - limits[n].nodelay) * 1000 / ctx->rate;
 
         if (delay > max_delay) {
             max_delay = delay;
@@ -875,7 +879,7 @@
 {
     ngx_http_limit_req_conf_t  *lrcf = conf;
 
-    ngx_int_t                    burst;
+    ngx_int_t                    burst, nodelay_val;
     ngx_str_t                   *value, s;
     ngx_uint_t                   i, nodelay;
     ngx_shm_zone_t              *shm_zone;
@@ -885,6 +889,7 @@
 
     shm_zone = NULL;
     burst = 0;
+    nodelay_val = -1;
    nodelay = 0;
 
     for (i = 1; i < cf->args->nelts; i++) {
@@ -915,7 +920,20 @@
             continue;
         }
 
-        if (ngx_strcmp(value[i].data, "nodelay") == 0) {
+        if (ngx_strncmp(value[i].data, "nodelay=", 8) == 0) {
+
+            nodelay_val = ngx_atoi(value[i].data + 8, value[i].len - 8);
+            nodelay = 1;
+            if (nodelay_val < 0) {
+                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+                                   "invalid nodelay \"%V\" - "
+                                   "must not be less than 0",
+                                   &value[i]);
+                return NGX_CONF_ERROR;
+            }
+
+            continue;
+        } else if (ngx_strcmp(value[i].data, "nodelay") == 0) {
             nodelay = 1;
             continue;
         }
@@ -925,6 +943,23 @@
         return NGX_CONF_ERROR;
     }
 
+    if (nodelay) {
+        /* nodelay without explicit value */
+        if (nodelay_val < 0) {
+            nodelay_val = burst;
+        }
+
+        if (nodelay_val > burst) {
+            ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
+                               "invalid nodelay value: %i - "
+                               "must not be greater than burst: %i",
+                               nodelay_val, burst);
+            return NGX_CONF_ERROR;
+        }
+    } else {
+        nodelay_val = 0;
+    }
+
     if (shm_zone == NULL) {
         ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                            "\"%V\" must have \"zone\" parameter",
@@ -956,7 +991,7 @@
 
     limit->shm_zone = shm_zone;
     limit->burst = burst * 1000;
-
