With ~100 table definitions executed on every request, even a single request 
will probably be relatively slow. You might want to see if you can define 
your tables conditionally, depending on the controller/function being called 
(you can set up conditional model files, which run only for a particular 
controller or function, or move the definitions to modules that get imported 
and run only when needed).
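
As a framework-agnostic sketch of the idea, you can register each table's 
definition as a callable and run it only the first time that table is 
actually touched. The names here (`LazyTables`, `define_message`) are 
illustrative, not web2py API; in a real app the registered callable would 
invoke `db.define_table(...)`, or you would simply rely on web2py's 
conditional models / `DAL(..., lazy_tables=True)` instead of rolling your own:

```python
class LazyTables(object):
    """Registry of table definitions that are executed on first access."""

    def __init__(self):
        self._definitions = {}   # table name -> callable that defines it
        self._defined = {}       # table name -> result of the definition

    def register(self, name, define_fn):
        self._definitions[name] = define_fn

    def __getitem__(self, name):
        # Define the table lazily, exactly once, on first use.
        if name not in self._defined:
            self._defined[name] = self._definitions[name]()
        return self._defined[name]


calls = []

def define_message():
    # Stand-in for db.define_table('message', ...) in web2py.
    calls.append('message')
    return 'message table'

tables = LazyTables()
tables.register('message', define_message)

assert calls == []             # nothing defined until first access
tables['message']
tables['message']
assert calls == ['message']    # the definition ran exactly once
```

With this pattern, a request that never touches a given table never pays 
the cost of defining it, which is exactly what the profile above suggests 
is dominating your response time.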

Anthony

On Monday, November 25, 2013 12:23:37 PM UTC-5, Saurabh Kumar wrote:
>
> The Web2py response slows down when faced with concurrent requests. On 
> doing some profiling, I found out that this occurs because DAL start 
> responding slowly. Here is the profile log, that confirms the same:
>
> root@replica:/home/www-data/lambda# python c.py  mod-20131125T164445.prof
>          16217 function calls (16199 primitive calls) in 1.558 seconds
>
>    Ordered by: internal time, call count
>    List reduced from 188 to 50 due to restriction <50>
>
>    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
>      1710    0.342    0.000    0.530    0.000 /home/www-data/lambda/gluon/dal.py:8247(__getitem__)
>        14    0.188    0.013    0.188    0.013 /usr/lib/python2.7/dist-packages/MySQLdb/cursors.py:97(nextset)
>       183    0.152    0.001    0.152    0.001 /usr/lib/python2.7/threading.py:146(acquire)
>       366    0.131    0.000    0.132    0.000 /usr/lib/python2.7/genericpath.py:38(isdir)
>       102    0.107    0.001    0.241    0.002 /home/www-data/lambda/gluon/dal.py:8025(__init__)
>       595    0.102    0.000    0.556    0.001 /home/www-data/lambda/gluon/dal.py:9100(__init__)
>      2242    0.086    0.000    0.100    0.000 /home/www-data/lambda/gluon/dal.py:8317(__setattr__)
>       596    0.055    0.000    0.055    0.000 /home/www-data/lambda/gluon/dal.py:5060(cleanup)
>       379    0.052    0.000    0.052    0.000 /usr/lib/python2.7/posixpath.py:68(join)
>         3    0.048    0.016    0.048    0.016 /usr/lib/python2.7/dist-packages/MySQLdb/cursors.py:277(_do_query)
>       101    0.038    0.000    0.681    0.007 /home/www-data/lambda/gluon/dal.py:7658(lazy_define_table)
>       101    0.036    0.000    0.073    0.001 /home/www-data/lambda/gluon/dal.py:8198(_create_references)
>       520    0.035    0.000    0.214    0.000 /home/www-data/lambda/gluon/dal.py:6620(sqlhtml_validators)
>       101    0.035    0.000    0.717    0.007 /home/www-data/lambda/gluon/dal.py:7628(define_table)
>         1    0.029    0.029    1.369    1.369 /home/www-data/lambda/applications/chat/compiled/models/db.py:1(<module>)
>       158    0.024    0.000    0.024    0.000 /home/www-data/lambda/gluon/validators.py:2564(__init__)
>      1384    0.015    0.000    0.138    0.000 /home/www-data/lambda/gluon/dal.py:8339(__iter__)
>       102    0.014    0.000    0.014    0.000 /home/www-data/lambda/gluon/dal.py:457(pluralize)
>       695    0.013    0.000    0.013    0.000 /home/www-data/lambda/gluon/dal.py:8288(__setitem__)
>       414    0.013    0.000    0.013    0.000 /home/www-data/lambda/gluon/dal.py:7762(__getattr__)
>        89    0.012    0.000    0.014    0.000 /home/www-data/lambda/gluon/validators.py:450(__init__)
>         6    0.005    0.001    0.005    0.001 /usr/lib/python2.7/socket.py:223(meth)
>
>
> To further confirm this, I removed most of the table definitions (~100) 
> and kept just the auth_user table in db.py. This improved the response time 
> significantly.
>
> Concurrency Level:      25
> Time taken for tests:   2.150 seconds
> Complete requests:      50
> Failed requests:        0
> Write errors:           0
> Total transferred:      588512 bytes
> HTML transferred:       567000 bytes
> Requests per second:    23.25 [#/sec] (mean)
> Time per request:       1075.173 [ms] (mean)
> Time per request:       43.007 [ms] (mean, across all concurrent requests)
> Transfer rate:          267.27 [Kbytes/sec] received
>
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        1    1   0.1      1       1
> Processing:   291  901 296.5   1011    1230
> Waiting:      288  899 296.4   1011    1230
> Total:        292  902 296.5   1012    1232
>
> Percentage of the requests served within a certain time (ms)
>   50%   1012
>   66%   1103
>   75%   1120
>   80%   1140
>   90%   1182
>   95%   1229
>   98%   1232
>   99%   1232
>  100%   1232 (longest request)
>
>
> Earlier benchmark logs (with all ~100 table definitions in place) looked like:
>
>
> Concurrency Level:      25
> Time taken for tests:   8.882 seconds
> Complete requests:      50
> Failed requests:        0
> Write errors:           0
> Total transferred:      588190 bytes
> HTML transferred:       567000 bytes
> Requests per second:    5.63 [#/sec] (mean)
> Time per request:       4440.767 [ms] (mean)
> Time per request:       177.631 [ms] (mean, across all concurrent requests)
> Transfer rate:          64.67 [Kbytes/sec] received
>
> Connection Times (ms)
>               min  mean[+/-sd] median   max
> Connect:        1    1   0.7      1       6
> Processing:  1883 4011 1211.9   3863    5865
> Waiting:     1882 4008 1211.6   3856    5861
> Total:       1884 4012 1211.9   3864    5867
>
> Percentage of the requests served within a certain time (ms)
>   50%   3864
>   66%   4845
>   75%   5009
>   80%   5165
>   90%   5557
>   95%   5748
>   98%   5867
>   99%   5867
>  100%   5867 (longest request)
>  
> Any clues on what could be the cause of this? 
>
>

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/groups/opt_out.
