That looks like a nice increase. You might be able to get more than 100 
users by using an evented WSGI server. You can launch web2py with 
anyserver.py using gevent, gunicorn, or mongrel2. If you use gevent, you 
could try monkey.patch_all(). Also with gevent, I like to add a sleep(0) 
between database access and serialization, and after every row of 
serialization, so long requests yield to other greenlets. You can also look 
at adding the gevent backdoor, which is useful if you want to troubleshoot 
while the app is in development.
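The gevent pattern above can be sketched roughly as below. `fetch_rows` and `serialize` are hypothetical stand-ins for a web2py DAL select and JSON encoding, not real web2py API; the launch command in the comment is the usual anyserver.py invocation.

```python
# Sketch of the gevent pattern: monkey-patch first, then yield control
# with sleep(0) between chunks of work so one long request does not
# starve other greenlets. You would launch web2py with something like:
#   python anyserver.py -s gevent -i 127.0.0.1 -p 8000
from gevent import monkey
monkey.patch_all()  # make sockets, DNS, etc. cooperative; do this before other imports

import json
import gevent

def fetch_rows():
    # hypothetical stand-in for db(query).select() in web2py
    return [{"id": i, "name": "row%d" % i} for i in range(5)]

def serialize(rows):
    out = []
    gevent.sleep(0)  # yield between database access and serialization
    for row in rows:
        out.append(json.dumps(row))
        gevent.sleep(0)  # yield after every row of serialization
    return "[" + ",".join(out) + "]"

result = serialize(fetch_rows())
```

The sleep(0) calls cost almost nothing but give the gevent hub a chance to schedule other waiting greenlets between each unit of work.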

On Sunday, April 14, 2013 4:32:39 PM UTC-7, BlueShadow wrote:
>
>
> Thanks to the awesome help of Ricardo Pedroso, my performance increased by 
> a factor of 100 (at least).
>
> Well, what were the problems?
>
> First of all, my vserver CPU is very slow (600 MHz, 1 core), which is just 
> ridiculous these days; my mobile phone has a faster processor.
>
> apache + mod_wsgi + web2py is pretty processor-intensive, especially if you 
> add mod_pagespeed.
>
>  
>
> What did we do (descending order of performance increase):
>
> we enabled a lot of caching,
>
> moved code from models into modules,
>
> split controllers (from over 20 functions to fewer than 7 each),
>
> compiled the application,
>
> used plain HTML instead of Python helper functions.
>
>  
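The caching step was reportedly the biggest win. In web2py this is usually done with `cache.ram` (or the `@cache.action` decorator); the sketch below is only a minimal stand-in that mimics `cache.ram`'s calling convention — `RamCache` and `expensive_query` are illustrative names, not web2py code:

```python
# Minimal stand-in for web2py's cache.ram API: cache(key, function,
# time_expire). In web2py you would use the provided `cache.ram` object;
# this sketch only shows the calling convention and why expensive
# queries stop hitting the database on every request.
import time

class RamCache(object):
    def __init__(self):
        self.storage = {}

    def __call__(self, key, f, time_expire=300):
        now = time.time()
        item = self.storage.get(key)
        if item is not None and now - item[0] < time_expire:
            return item[1]          # fresh cached value, skip the work
        value = f()                 # recompute (e.g. db(...).select())
        self.storage[key] = (now, value)
        return value

cache_ram = RamCache()

calls = []
def expensive_query():
    calls.append(1)                 # stand-in for a slow DAL select
    return [1, 2, 3]

first = cache_ram('front_page_rows', expensive_query, time_expire=60)
second = cache_ram('front_page_rows', expensive_query, time_expire=60)
```

On a 600 MHz CPU, turning a per-request database select into a once-per-minute one is exactly the kind of change that produces the large gains described here.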
>
> The last thing we did was switching from Apache to nginx, which is just 
> awesome.
>
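For readers wanting to reproduce the switch: a common web2py-on-nginx setup serves static files directly and proxies everything else to uWSGI. The fragment below is a hypothetical minimal server block in that spirit — the paths, port, and domain are illustrative, not taken from the post:

```nginx
# Hypothetical minimal nginx server block for web2py behind uWSGI.
server {
    listen 80;
    server_name domain.com;

    # static files bypass web2py entirely (a big win on a slow CPU)
    location ~* ^/(\w+)/static/ {
        root /home/www-data/web2py/applications/;
    }

    location / {
        uwsgi_pass 127.0.0.1:9001;
        include uwsgi_params;
    }

    gzip on;
    gzip_types text/css application/json application/javascript;
}
```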
> Just to show the difference, I list some of our test results.
>
> After optimization, with Apache:
>
> apache bench:
>
> ab -n 100 -c 2 http://domain.com/
>
> This is ApacheBench, Version 2.3
>
> ...
> Server Software: Apache/2.2.22
>
> Document Path: /
> Document Length: 27429 bytes
>
> Concurrency Level: 2
> Time taken for tests: 17.198 seconds
> Complete requests: 100
> ...
>
> Requests per second: 5.81 [#/sec] (mean)
> Time per request: 343.961 [ms] (mean)
> Time per request: 171.981 [ms] (mean, across all concurrent requests)
> Transfer rate: 158.22 [Kbytes/sec] received
>
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        0    0   0.0      0       0
> Processing:   138  343  81.8    332     639
> Waiting:      138  341  82.3    328     638
> Total:        138  343  81.8    332     639
>
> Percentage of the requests served within a certain time (ms)
>   50%    332
>   66%    358
>   75%    365
>   80%    396
>   90%    456
>   95%    493
>   98%    634
>   99%    639
>  100%    639 (longest request)
>
>  
>
> With nginx (and yes, the size is smaller because I compressed the PNGs):
>
>  
>
> ab -n 100 -c 2 http://domain.com/
>
> ...
>
> Server Software:        nginx/1.1.19
>
> ...
>
> Document Length:        25420 bytes
>
>  
>
> Concurrency Level:      2
>
> Time taken for tests:   10.821 seconds
>
> Complete requests:      100
>
> ...
>
> Requests per second:    9.24 [#/sec] (mean)
>
> Time per request:       216.427 [ms] (mean)
>
> Time per request:       108.214 [ms] (mean, across all concurrent 
> requests)
>
> Transfer rate:          233.14 [Kbytes/sec] received
>
>  
>
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        0    0   0.0      0       0
> Processing:    25  215  42.9    207     387
> Waiting:       25  214  42.6    207     387
> Total:         25  215  42.9    207     387
>
>  
>
> Percentage of the requests served within a certain time (ms)
>
>   50%    207
>
>   66%    210
>
>   75%    217
>
>   80%    265
>
>   90%    272
>
>   95%    278
>
>   98%    286
>
>   99%    387
>
>  100%    387 (longest request)
>
>  
>
> webpagetest.org now gives results that are not that good, but I suspect 
> they depend a lot on their servers' CPU utilization. After I tried some 
> other pages which I knew were good before, I came to the conclusion that 
> this site isn't reliable at all at the moment. The compilation that made 
> things worse, which I mentioned earlier, is probably a result of that too.
>
> loadimpact.com tells me that the server can handle 100 simultaneous users 
> before it starts to fail, so a pretty good improvement. Unfortunately I 
> don't have loadimpact results from before.
>
>  
>
> I hope this helps some people who are looking for more performance, 
> especially with slow CPUs.
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/groups/opt_out.

