On Thu, Feb 26, 2015 at 8:19 AM, Aymeric Augustin <
[email protected]> wrote:

> On 26 févr. 2015, at 00:00, Antoine Pitrou <[email protected]> wrote:
>
> > On Wed, 25 Feb 2015 23:44:33 +0100
> > Ludovic Gasc <[email protected]> wrote:
> >>
> >> I've disabled keep_alive in api_hour. I quickly tested the agents-list
> >> webservices via localhost: I get 3334.52 req/s instead of 4179 req/s,
> >> 0.233s average latency instead of 0.098s, and 884 errors instead of 0.
> >> It isn't a big change compared to the other Web frameworks' values, but
> >> it's a change.
> >
> > IMO, the fact that you get so many errors indicates that something is
> > probably wrong in your benchmark setup. It is difficult to believe that
> > Flask and Django would behave so badly in such a simple (almost
> > simplistic) workload.
>
> If you push concurrency too far (like 5000 threads), I expect
> performance to plummet and results of exactly that kind. I suspect
> that's the situation Ludovic's benchmark creates. It's a pathological
> use case for threads.
>

I don't understand: to my knowledge, my benchmark doesn't use threads, only
processes.
Moreover, what do "too far" and "pathological use" mean here?
What's the difference between my benchmark and a server that receives a lot
of requests?
Do you think this use case doesn't happen in production? Or do you maybe
have a tip to avoid it?
