This is probably an effect of the Python GIL (Global Interpreter Lock). Python 
threads give you concurrency but no real parallelism: only one thread executes 
Python bytecode at a time. The more concurrent requests you serve, the more the 
threads contend for the GIL and the less efficient it gets, even if (and 
especially if) you have multiple CPU cores, because the OS keeps rescheduling 
the contending threads across cores.

The way to achieve better performance is to use processes, not threads. Other 
web servers let you configure multiple worker processes, but Rocket does not.
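The thread/process difference is easy to demonstrate outside any web server. 
The sketch below (my own illustration, not web2py or Rocket code) runs the same 
CPU-bound function through a thread pool and a process pool; the worker counts 
and loop size are arbitrary:

```python
# Sketch: the same CPU-bound work run with threads vs. processes.
# Thread workers share one GIL, so they cannot execute Python bytecode
# in parallel; process workers each get their own interpreter and GIL.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # Pure-Python CPU-bound loop; it holds the GIL the whole time it runs.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, jobs, n):
    # Run `jobs` copies of burn(n) on the given executor and time them.
    start = time.perf_counter()
    with executor_cls(max_workers=4) as ex:
        results = list(ex.map(burn, [n] * jobs))
    return results, time.perf_counter() - start

if __name__ == "__main__":  # guard required for processes on Windows
    t_res, t_sec = timed(ThreadPoolExecutor, 8, 200_000)
    p_res, p_sec = timed(ProcessPoolExecutor, 8, 200_000)
    assert t_res == p_res  # identical answers either way
    # On a multi-core machine the process pool usually finishes first,
    # because the thread pool serializes on the GIL.
    print("threads: %.2fs  processes: %.2fs" % (t_sec, p_sec))
```

On a multi-core box the process-pool timing is typically several times lower 
for work like this, while the thread pool runs no faster than a single thread.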

On Tuesday, 21 August 2012 08:38:29 UTC-5, David Marko wrote:
>
> I have the latest web2py from trunk, Python 2.7 (Win7), with standalone 
> web2py (using the default Rocket server). I just benchmarked a simple page 
> without a model (just to see how high I can get when stripping out all 
> unnecessary code...) and I'm seeing something strange. To test I'm using 
> Apache Benchmark. When I set the concurrency level to 5 or up to 10, I get 
> about 90 req/sec. When I increase the concurrency level to 20 (or higher), 
> the rate drops to around 15-20 req/sec. Why is this? Is there a way to get 
> (configure something?) stable performance even under higher load?
