I think so, but this may be server-dependent and machine-dependent. Async 
servers probably do not have this problem.
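If you want to test that idea, the web2py launcher exposes thread-pool options that are handed to Rocket. A minimal sketch, assuming the `--minthreads`/`--maxthreads` option names (check `python web2py.py --help` on your trunk copy) and a hypothetical app/password:

```shell
# Start web2py with a bounded Rocket thread pool
# (--minthreads/--maxthreads are assumed option names; verify with --help)
python web2py.py --nogui -i 127.0.0.1 -p 8000 -a 'yourpassword' \
    --minthreads=5 --maxthreads=10

# Then repeat the apache benchmark at the higher concurrency level
ab -n 1000 -c 20 http://127.0.0.1:8000/yourapp/default/index
```

If throughput stays near 90 req/sec with the pool capped, the drop you saw was thread contention rather than the server itself.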

On Tuesday, 21 August 2012 09:17:10 UTC-5, David Marko wrote:
>
> I see. So is the solution to limit the maximum number of threads to, e.g., 
> 10, so the system is not overwhelmed by the concurrency and I get the 90 
> req/sec back?
>
> On Tuesday, 21 August 2012 15:45:58 UTC+2, Massimo Di Pierro wrote:
>>
>> This is probably an effect of the Python GIL. Python has no real 
>> parallelization even if you have concurrency. The more concurrent requests 
>> there are, the more inefficient it gets, even if (and especially if) you have 
>> multiple computing cores.
>>
>> The way to achieve better performance is to use processes, not 
>> threads. Other web servers allow you to configure multiple processes, but 
>> Rocket does not.
>>
>> On Tuesday, 21 August 2012 08:38:29 UTC-5, David Marko wrote:
>>>
>>> I have the latest web2py from trunk, Python 2.7 (Win7) with standalone web2py 
>>> (using the default Rocket server). I just benchmarked a simple page without a 
>>> model (just to see how high I can get when stripping all unnecessary code 
>>> ...) and I see something strange. To test I'm using apache 
>>> benchmark. With a concurrency level of 5 up to 10, I get about 90 
>>> req/sec. When I increase the concurrency level to 20 (or higher), the req/sec 
>>> drops to around 15-20.  Why is this?  Is there a way to 
>>> get (configure something?) stable performance even under higher load?
>>
>>

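The GIL effect Massimo describes is easy to reproduce outside web2py. A minimal sketch (Python 3 syntax for brevity; on Python 2.7 you would need the `futures` backport or the `multiprocessing` module directly) showing that CPU-bound work gains nothing from extra threads but does scale with processes:

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # CPU-bound busy loop; the GIL serializes this across threads
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, workers, n):
    # Run `workers` copies of the job concurrently and time the batch
    start = time.time()
    with executor_cls(max_workers=workers) as ex:
        list(ex.map(burn, [n] * workers))
    return time.time() - start

if __name__ == "__main__":
    N = 2_000_000
    t_threads = timed(ThreadPoolExecutor, 4, N)
    t_procs = timed(ProcessPoolExecutor, 4, N)
    # On a multi-core machine the thread version takes roughly as long as
    # four sequential runs, while the process version runs them in parallel.
    print("4 threads:   %.2fs" % t_threads)
    print("4 processes: %.2fs" % t_procs)
```

The same logic explains the benchmark numbers: Rocket's worker threads all contend for one GIL on CPU-bound request handling, so adding concurrency past a few threads only adds scheduling overhead.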