Hi,

2015-03-11 22:43 GMT+01:00 Ludovic Gasc <[email protected]>:
> As promised, these are the benchmarks based on your remarks:
> http://blog.gmludo.eu/2015/03/benchmark-python-web-production-stack.html

I really don't understand round 4.

Django+Meinheld: 3992.68 requests/sec and 1,031,238 errors. The number
of errors is very high: 86% of requests fail with an error.

API-Hour w/o Nginx: 3646.15 requests/sec, 0 errors, but 9.74 seconds
for the *average* latency. An average of almost 10 sec looks very
high. What is the maximum latency? 60 seconds or more? I'm not sure
that 60 seconds of latency is acceptable for a web page. Personally,
I close a webpage if it takes more than 30 seconds to load.
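
To illustrate why the average alone says little: a few very slow
requests can dominate the mean while most requests stay fast. A
minimal sketch in Python (the latency values are invented for
illustration):

    import statistics

    # Hypothetical distribution: most requests answer quickly, a few
    # are stuck queueing behind the others -- the long tail that an
    # average hides.
    latencies = [0.05] * 900 + [90.0] * 100  # seconds

    print(statistics.mean(latencies))    # ~9.05 s
    print(statistics.median(latencies))  # 0.05 s
    print(max(latencies))                # 90.0 s

An average near 10 seconds is perfectly compatible with a maximum
above one minute, which is why the full latency distribution matters.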

All the other setups have an average latency between 40 ms and 120 ms,
which looks more acceptable for a webpage. But I'm not sure that these
numbers are useful if more than 50% of requests fail.


Round 5 looks like a normal benchmark: no HTTP errors at all, and an
average latency lower than 100 ms.

It's surprising that Django and Flask have *very* close performance (a
difference of 3.3%). It looks like Django and Flask hit the same
bottleneck. Could it be an arbitrary limit in the nginx config? It
would help to see numbers for more and fewer concurrent connections
(ex: 10, 50, 100, 200, 500 clients), to check whether the throughput
is linear or not.
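
Such a sweep is easy to script. A minimal sketch in Python, assuming
the wrk load generator and a server listening on 127.0.0.1:8000 (both
are placeholders, adjust to the actual setup):

    import subprocess

    URL = "http://127.0.0.1:8000/"  # placeholder for the tested app

    # Run the same 60-second test at several connection counts to see
    # whether requests/sec grows linearly or flattens out at a shared
    # bottleneck (nginx limit, number of workers, ...).
    for clients in (10, 50, 100, 200, 500):
        print("--- %d concurrent connections ---" % clients)
        subprocess.run(
            ["wrk", "-t", "4", "-c", str(clients), "-d", "60s",
             "--latency", URL],
            check=True,
        )

If requests/sec stops growing at the same connection count for both
frameworks, the limit probably sits in front of them, not in Django or
Flask themselves.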


Again, I don't understand round 6. What's the purpose of testing
10 requests/sec for 30 seconds? The previous test showed that all
setups support more than 500 requests per second for 5 minutes
without any error. This test doesn't stress anything.


I understand that rounds 4 and 6 use a single client. Is that
realistic? Most of the time the bottleneck comes from the number of
concurrent *clients*, no? I don't think that a server is stressed by a
single client in practice (I'm not talking about DoS, but about
regular usage of a web server).
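
For what it's worth, here is a minimal sketch in Python of what
"concurrent clients" means: N workers each keeping one request in
flight at the same time, instead of a single client sending requests
one after the other (the URL and the counts are placeholders):

    import concurrent.futures
    import urllib.request

    URL = "http://127.0.0.1:8000/"  # placeholder

    def fetch(_):
        # Each worker holds an open connection while the server
        # responds: 100 workers means up to 100 requests in flight.
        with urllib.request.urlopen(URL, timeout=30) as resp:
            return resp.status

    with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
        statuses = list(pool.map(fetch, range(1000)))

    print("%d/%d requests returned 200"
          % (statuses.count(200), len(statuses)))

A dedicated tool like wrk does this much more efficiently, but the
principle is the same: the load on the server is driven by how many
requests are in flight concurrently, not by the request rate of a
single client.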

Victor
