Hi Antoine,

I'm really interested in your suggestions to improve my benchmark
suite; that's the goal.
Feel free to use your motivation to prove me wrong; I will be happy in
the end, because I will have learned something new.

Maybe I've made a mistake somewhere; I'm not a uWSGI expert, as I've told
you before.
Or maybe it's because I have very simplistic problems.
Or maybe everybody has biases about architectures; who knows where the
"truth" lies?
Certainly the "truth" is a mix of several causes.
Nevertheless, even if I've missed something in the benchmark, I believe
less and less that it will have a big impact on the numbers.
And even if somebody finds a big error, I'm pretty sure I won't be the
only one making this mistake in production.
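
To be sure I understand your point about fixed overhead, here is a toy
sketch in Python (hypothetical numbers, not my real benchmark code): if
the load generator itself tops out at some rate, every backend faster
than that rate measures the same throughput, so all 4 setups would look
identical whatever they can really do.

    # Toy illustration only: made-up numbers, not the benchmark code.
    # If the load generator can only issue `client_limit` requests per
    # second, every backend faster than that measures the same rate.
    def measured_rps(backend_rps, client_limit=5000):
        # The observed rate is capped by the slower of the two sides.
        return min(backend_rps, client_limit)

    setups = [("Django+Meinheld", 9000), ("Django+uWSGI", 12000),
              ("Flask+Meinheld", 8000), ("Flask+uWSGI", 11000)]
    for name, real_rps in setups:
        print(name, measured_rps(real_rps))  # all four print 5000

If that is what is happening, the numbers would say more about the
client than about the servers.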

If you go to PyCon US this year, we can discuss this face-to-face; it
will be easier.

Kind regards.

--
Ludovic Gasc

On Wed, Mar 11, 2015 at 11:36 PM, Antoine Pitrou <[email protected]>
wrote:

> On Wed, 11 Mar 2015 14:43:44 -0700 (PDT)
> Ludovic Gasc <[email protected]> wrote:
> > Hi people,
> >
> > As promised, here are the benchmarks based on your remarks:
> > http://blog.gmludo.eu/2015/03/benchmark-python-web-production-stack.html
> > I've started to receive positive feedback from a few users who run
> > API-Hour in production; it seems we aren't alone in observing a
> > positive performance improvement with this architecture.
>
> How can you get almost the same number of errors for all 4
> different setups, i.e. Django+Meinheld, Django+uWSGI, Flask+Meinheld,
> Flask+uWSGI?
>
> And, when decreasing the number of errors, the fact that all 4 setups
> get almost the same number of requests per second also doesn't really
> make sense.  Or, it means that you're not testing any of those setups,
> but rather some fixed overhead that's specific to your benchmarking
> setup.
>
> YMMV, but I personally wouldn't take any of those numbers seriously.
>
> Regards
>
> Antoine.
