On Wed, 11 Mar 2015 14:43:44 -0700 (PDT) Ludovic Gasc <[email protected]> wrote:
> Hi people,
>
> As promised, here are the benchmarks based on your remarks:
> http://blog.gmludo.eu/2015/03/benchmark-python-web-production-stack.html
> I've started to receive positive feedback from a few users who run API-Hour
> in production; it seems we aren't alone in observing a positive performance
> improvement with this architecture.
How can you get almost the same number of errors for all four different setups, i.e. Django+Meinheld, Django+uWSGI, Flask+Meinheld and Flask+uWSGI? And when the number of errors decreases, the fact that all four setups converge on almost the same number of requests per second doesn't really make sense either. It suggests you're not actually measuring any of those setups, but rather some fixed overhead that's specific to your benchmarking setup.

YMMV, but I personally wouldn't take any of those numbers seriously.

Regards

Antoine.
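To illustrate the point about fixed overhead, here is a minimal sketch (all latency numbers are hypothetical, not taken from the benchmark): when each request pays a large fixed cost on the load-generator side, the measured requests/sec converge to nearly the same value no matter how fast the server is, which is exactly the pattern that should raise suspicion.

```python
# Hypothetical model: each request costs the server's latency plus a
# fixed per-request overhead in the benchmarking client itself.
def measured_rps(server_latency_s: float, client_overhead_s: float = 0.010) -> float:
    """Requests/sec observed by a single-connection client."""
    return 1.0 / (server_latency_s + client_overhead_s)

# Server latencies differing by more than 2x (hypothetical values)...
setups = {
    "Django+Meinheld": 0.0020,
    "Django+uWSGI":    0.0015,
    "Flask+Meinheld":  0.0010,
    "Flask+uWSGI":     0.0008,
}

# ...still yield nearly identical measured throughput, because the
# 10 ms client overhead dominates every per-request cost.
for name, latency in setups.items():
    print(f"{name}: {measured_rps(latency):.0f} req/s")
```

With a 10 ms client-side overhead dominating, a 2.5x spread in server latency collapses into roughly a 10% spread in measured throughput; the benchmark then characterizes the harness, not the stacks under test.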
