Peter,

Thanks for your reply.

What I’d really like is to understand how to tune nginx to avoid the delays 
when I run my tests.

I am comfortable with the overly optimistic results from my current “closed 
model” test design.  Once I determine my system’s throughput limits I will 
introduce significant think times into my scripts so that much larger user 
populations are required to produce the same work demand.  This will more 
closely approximate an “open model” test design.
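
To make that concrete, Little's Law relates the three quantities.  The numbers 
below are purely illustrative, not our measured values:

    N = X * (R + Z)    users = throughput * (response time + think time)

    With X = 200 req/s and R = 0.05 s:
      Z = 0 s   =>  N = 200 * (0.05 + 0)  =   10 virtual users
      Z = 10 s  =>  N = 200 * (0.05 + 10) = 2010 virtual users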

Could you explain further why a different load generation tool would avoid 
triggering a DDoS response from nginx?  My first guess would have been that it 
would also generate requests from a single IP address, and thus look the same 
as a JMeter load.

I did try my test with JMeter driving workload from 2 different machines at the 
same time.  I ran each machine's workload at a low enough level that 
individually it did not trigger the 1 second delay.  The combined workload did 
trigger the delay for each of the JMeter workload generators.  I'm not sure 
how many machines would be required to avoid the collective response from nginx.
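
For reference, both injectors can be driven from a single controller with 
JMeter's remote mode (the hostnames below are placeholders, not our actual 
machines):

    # non-GUI run, fanning one test plan out to two remote JMeter servers;
    # each server generates load from its own source IP address
    jmeter -n -t myTest.jmx -R loadgen1.example.com,loadgen2.example.com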

Thanks,

John


From: nginx [mailto:nginx-boun...@nginx.org] On Behalf Of Peter Booth
Sent: Monday, March 26, 2018 3:57 PM
To: nginx@nginx.org
Subject: Re: Nginx throttling issue?

You’re correct that this is the DDoS throttling.  The real question is what do 
you want to do?  JMeter with zero think time is an imperfect load generator, 
and this is only one complication.  The bigger one is the open/closed model 
issue.  With your design you have back pressure from your system under test to 
your load generator.  A JMeter virtual user will only ever issue a request when 
the prior one completes.  Real users are not so well behaved, which is why your 
test results will always be overly optimistic with this design.

A better approach is to use a load generator that replicates the desired 
request distribution without triggering the DDoS protection.  wrk2, Tsung, and 
httperf are candidates, as well as the cloud-based load generator services. 
Also see Neil Gunther’s paper on how to combine multiple JMeter instances to 
replicate real-world traffic patterns.
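
For instance, wrk2 (built as the wrk binary) holds a constant arrival rate no 
matter how slowly the server responds, which is exactly the open-model 
behavior you want.  Host and rate here are placeholders:

    # open-model load: 4 threads, 100 connections, a constant 1000 req/s
    # for 60 s, with a latency-percentile report at the end
    wrk -t4 -c100 -d60s -R1000 --latency https://your-elb-host/myReq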

Peter
Sent from my iPhone

On Mar 26, 2018, at 4:21 PM, John Melom <john.me...@spok.com> wrote:
Hi,

I am load testing our system using JMeter as a load generator.  We execute a 
script consisting of an https request executing in a loop.  The loop does not 
contain a think time, since at this point I am not trying to emulate a “real 
user”.  I want to get a quick look at our system capacity.  Load on our system 
is increased by increasing the number of JMeter threads executing our script.  
Each JMeter thread references different data.

Our system is in AWS with an ELB fronting Nginx, which serves as a reverse 
proxy for our Docker Swarm application cluster.

At moderate loads, a subset of our https requests start experiencing a 1 
second delay in addition to their normal response time.  The delay is not due 
to resource contention; system utilizations remain low.  The response times 
cluster around 4 values: 0 milliseconds, 50 milliseconds, 1 second, and 1.050 
seconds.  Right now, I am most interested in understanding and eliminating the 
1 second delay that gives the clusters at 1 second and 1.050 seconds.

The attachment shows a response time scatterplot from one of our runs.  The 
x-axis is the number of seconds into the run, the y-axis is the response time 
in milliseconds.  The plotted data shows the response time of requests at the 
time they occurred in the run.

If I run the test bypassing the ELB and Nginx, this delay does not occur.
If I bypass the ELB, but include Nginx in the request path, the delay returns.

This leads me to believe the 1 second delay is coming from Nginx.

One possible candidate is Nginx DDoS throttling.  Since all requests are coming 
from the same JMeter system, I expect they share the same originating IP 
address.  I attempted to control DDoS throttling by setting limit_req as shown 
in the nginx.conf fragment below:

http {
…
    limit_req_zone $binary_remote_addr zone=perf:20m rate=10000r/s;
…
    server {
…
        location /myReq {
            limit_req zone=perf burst=600;
            proxy_pass http://xxx.xxx.xxx.xxx;
        }
…
    }
}

The thinking behind the values set in this conf file is that my aggregate 
demand would not exceed 10000 requests per second, so throttling of requests 
should not occur.  If there were short bursts more intense than that, the burst 
value would buffer these requests.

This tuning did not change my results.  I still get the 1 second delay.
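
One follow-up I have not yet tried, sketched below reusing the zone defined 
above: without the nodelay flag, limit_req paces queued burst requests out at 
the zone rate.  At rate=10000r/s that is one request per 0.1 ms, so a full 
burst of 600 should drain in roughly 60 milliseconds, nowhere near 1 second, 
but adding nodelay plus a log line would at least confirm whether limit_req is 
firing at all:

        location /myReq {
            # nodelay: serve queued burst requests immediately instead of
            # pacing them out at the zone rate
            limit_req zone=perf burst=600 nodelay;
            # log at notice level when requests are actually limited or delayed
            limit_req_log_level notice;
            proxy_pass http://xxx.xxx.xxx.xxx;
        }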

Am I implementing this correctly?
Is there something else I should be trying?

The responses are not large, so I don’t believe limit_rate is the answer.
I have a small number of intense users, so limit_conn does not seem likely to 
be the answer either.

Thanks,

John Melom
Performance Test Engineer
Spōk, Inc.
+1 (952) 230 5311 Office
john.me...@spok.com



<rawRespScatterplot.png>

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
