On Thu, Mar 12, 2015 at 12:57 PM, Victor Stinner <[email protected]>
wrote:

> Hi,
>
> 2015-03-11 22:43 GMT+01:00 Ludovic Gasc <[email protected]>:
> > As promised, this is the benchmarks based on your remarks:
> > http://blog.gmludo.eu/2015/03/benchmark-python-web-production-stack.html
>
> I really don't understand round 4.
>
> Django+Meinheld: 3992.68 requests/sec and 1,031,238 errors. The number
> of errors is very high: 86% of requests fail with an error.
>
> API-Hour w/o Nginx: 3646.15 requests/sec, 0 errors, but 9.74 seconds
> for the *average* latency. An average of almost 10 sec looks very
> high. What is the biggest latency? 60 seconds or more? I'm not sure
> that 60 seconds of latency is acceptable for a web page. Personally, I
> close a webpage if it takes more than 30 seconds to load.
>

You have more latency with API-Hour compared to the others because
API-Hour serves the full, big response, while for the others Nginx
returns only a small error response.


>
> The other setups all have an average latency between 40 ms and 120 ms,
> which looks more acceptable for a webpage. But I'm not sure that these
> numbers are useful if more than 50% of requests fail.
>
>
> Round 5 looks like a normal benchmark: no HTTP errors at all, average
> latency lower than 100 ms.
>
> It's surprising that Django and Flask have *very* close performance (a
> difference of 3.3%). It looks like Django and Flask have the same
> bottleneck. Could it be an arbitrary limit in the nginx config?


For this point, I've tried the opposite: in particular, I used 16
workers instead of 1, as recommended in the gunicorn documentation:
https://github.com/Eyepea/API-Hour/blob/master/benchmarks/etc/nginx/nginx.conf
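
For reference, the relevant part of that config looks roughly like
this (a sketch, not the exact file):

    # one nginx worker per core instead of a single worker
    worker_processes  16;

    events {
        # maximum simultaneous connections per worker
        worker_connections  1024;
    }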


> It would help to see numbers for more and fewer concurrent connections
> (e.g. 10, 50, 100, 200, 500 clients), to check whether it's linear or
> not.
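
For the record, such a sweep is easy to script with wrk; something
along these lines (the port and endpoint are illustrative, not the
exact benchmark setup):

    # try several concurrency levels against the same endpoint
    for c in 10 50 100 200 500; do
        wrk -t4 -c$c -d60s http://localhost:8000/index
    done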
>
>
> Again, I don't understand round 6. What's the purpose of testing
> 10 requests/sec for 30 seconds? The previous test showed that every
> setup supports more than 500 requests per second for 5 minutes
> without any error. This test doesn't stress anything.
>

You've got the point: it was a benchmark designed not to stress anything.
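
With wrk2 this kind of run is simply a fixed-rate test, along these
lines (again, the port and endpoint are illustrative):

    # wrk2's -R option pins the throughput: 10 req/s for 30 s,
    # so latency is measured without saturating the server
    wrk -t1 -c10 -d30s -R10 http://localhost:8000/index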


>
>
> I understand that rounds 4 and 6 use a single client. Is it realistic?
> Most of the time the bottleneck comes from the number of concurrent
> *clients*, no? I don't think that a server is stressed by a single
> client in practice (I'm not talking about DoS, but regular usage of a
> web server).
>

You do have concurrent requests with wrk/wrk2: a single client machine
keeps many connections open at once (the -c option), so the server
still sees concurrent requests.


>
> Victor
>
