Here is the output of the two commands, plus one more run with -n 500 -c 100.

I think, as you say, the problem is with the table definition code, which I am
trying to sort out. If you have any other suggestions on the same front
(speeding up and concurrency), please let me know. I am really thankful to
all of you for helping me out :)

I have one more question. Is it possible to lazy-load only some tables
instead of all of them? What I have found so far is that a lazy_tables=True
argument can be passed to the DAL constructor. Is there a simple way to
say: load these tables immediately and the rest lazily?
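To make the question concrete, here is a plain-Python sketch of the behaviour I am after (a conceptual sketch only -- LazyTables, define, and load are made-up names, not web2py/DAL API):

```python
# Conceptual sketch, NOT web2py API: register every table cheaply,
# build a table object only on first access, and force-load a small
# whitelist of "hot" tables up front.

class LazyTables:
    def __init__(self):
        self._factories = {}  # name -> zero-arg factory (the expensive part)
        self._cache = {}      # name -> built table object

    def define(self, name, factory):
        # Cheap: just remember how to build the table later.
        self._factories[name] = factory

    def load(self, name):
        # Build on first access, then reuse the cached object.
        if name not in self._cache:
            self._cache[name] = self._factories[name]()
        return self._cache[name]

tables = LazyTables()
tables.define("auth_user", lambda: "<auth_user table>")
tables.define("monthly_report", lambda: "<monthly_report table>")

# Eagerly build only the tables every request touches; the
# remaining ~100 definitions stay deferred.
for hot in ("auth_user",):
    tables.load(hot)
```

With lazy_tables=True on the DAL constructor, my understanding is that simply accessing db.some_table right after defining it forces that table to be built eagerly while the rest stay lazy -- but I'd appreciate confirmation that this is the intended way.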

On Tue, Nov 26, 2013 at 9:00 PM, Anthony <[email protected]> wrote:

> Are you comparing how long it takes to do 25 requests done in parallel to
> how long it takes to do 25 requests serially, or are you comparing how long
> it takes to do 25 request in parallel to how long it takes to do 1 request
> all by itself? You seem to be making the latter comparison and expecting
> the machine to be able to do 25 requests just as fast as it can do a single
> request. Instead, try comparing:
>
> ab -n 50 -c 1 [url]
>
>
Concurrency Level:      1
Time taken for tests:   5.791 seconds
Complete requests:      50
Failed requests:        0
Write errors:           0
Total transferred:      558150 bytes
HTML transferred:       531500 bytes
Requests per second:    8.63 [#/sec] (mean)
Time per request:       115.813 [ms] (mean)
Time per request:       115.813 [ms] (mean, across all concurrent requests)
Transfer rate:          94.13 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    1   0.2      1       2
Processing:    79  115  90.7     86     629
Waiting:       79  114  90.8     86     629
Total:         80  116  90.7     87     631

Percentage of the requests served within a certain time (ms)
  50%     87
  66%     90
  75%     95
  80%     98
  90%    218
  95%    240
  98%    631
  99%    631
 100%    631 (longest request)



> with
>
> ab -n 50 -c 25 [url]
>
>
Concurrency Level:      25
Time taken for tests:   3.461 seconds
Complete requests:      50
Failed requests:        0
Write errors:           0
Total transferred:      558150 bytes
HTML transferred:       531500 bytes
Requests per second:    14.45 [#/sec] (mean)
Time per request:       1730.347 [ms] (mean)
Time per request:       69.214 [ms] (mean, across all concurrent requests)
Transfer rate:          157.50 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   0.5      1       3
Processing:   392 1214 599.6   1093    2531
Waiting:      392 1214 599.6   1093    2531
Total:        393 1216 599.6   1095    2533
WARNING: The median and mean for the initial connection time are not within
a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%   1095
  66%   1520
  75%   1650
  80%   1762
  90%   2118
  95%   2203
  98%   2533
  99%   2533
 100%   2533 (longest request)


ab -n 500 -c 100 [url]

Concurrency Level:      100
Time taken for tests:   29.829 seconds
Complete requests:      500
Failed requests:        1
   (Connect: 0, Receive: 0, Length: 1, Exceptions: 0)
Write errors:           0
Total transferred:      5581331 bytes
HTML transferred:       5314831 bytes
Requests per second:    16.76 [#/sec] (mean)
Time per request:       5965.761 [ms] (mean)
Time per request:       59.658 [ms] (mean, across all concurrent requests)
Transfer rate:          182.73 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   1.8      1       9
Processing:   379 5392 2761.9   5080   14603
Waiting:      379 5391 2761.6   5080   14603
Total:        387 5394 2761.1   5081   14605

Percentage of the requests served within a certain time (ms)
  50%   5081
  66%   6040
  75%   6914
  80%   7500
  90%   9714
  95%  10752
  98%  11747
  99%  12821
 100%  14605 (longest request)
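To double-check Anthony's point against these numbers, here is a quick back-of-the-envelope script (figures copied from the ab output above; the effective per-request cost is just total test time divided by the number of requests, which matches ab's "across all concurrent requests" line):

```python
# Reproducing ab's "Time per request ... (mean, across all concurrent
# requests)" from the raw figures above: total time / requests,
# equivalently mean latency / concurrency level.

runs = [
    # (concurrency, total_time_s, requests, mean_latency_ms)
    (1,   5.791,  50,  115.813),
    (25,  3.461,  50,  1730.347),
    (100, 29.829, 500, 5965.761),
]

for conc, total_s, n, latency_ms in runs:
    per_request_ms = total_s * 1000.0 / n  # effective cost per request
    throughput = n / total_s               # requests per second
    # Sanity check: ab's mean latency / concurrency gives the same number
    assert abs(latency_ms / conc - per_request_ms) < 1.0
```

So although each individual request gets slower under load, aggregate throughput rises from about 8.6 req/s at c=1 to about 16.8 req/s at c=100; the flattening between c=25 and c=100 (plus the one failed request) suggests the server saturates well before 100 concurrent clients.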




> and observe the *total time* taken for the entire 50 requests in each
> case.
>
> Regarding table definitions, yes, they do take up non-trivial amounts of
> CPU time. When you define a table, it calls the gluon.dal.Table.__init__
> method -- check it out.
>
>
Checking.


> Anthony
>
> On Tuesday, November 26, 2013 9:32:37 AM UTC-5, Saurabh Kumar wrote:
>>
>>
>> On Tue, Nov 26, 2013 at 7:27 PM, Anthony <[email protected]> wrote:
>>
>>> *Case 1:*
>>>> Number of Tables in db.py: ~100 (all with migrate=False)
>>>> No. of concurrent requests: 1
>>>> Time taken per request: 900 ms
>>>>
>>>> *Case 2:*
>>>> Number of Tables in db.py: ~100 (all with migrate=False)
>>>> No. of concurrent requests: 25
>>>> Time taken per request: 4440 ms
>>>>
>>>
>>> For an apples-to-apples comparison, you should look at the "(mean, across
>>> all concurrent requests)" value. In case #2, that is only 177.6 ms. With
>>> multiple concurrent requests, of course each request is going to take
>>> longer to complete from start to finish (each thread is sharing system
>>> resources, so can't run as fast as when only a single thread is processing
>>> a request). However, because the requests are being processed in parallel,
>>> you have to divide the overall average time per request by the concurrency
>>> level to get the true time spent on each request (i.e., if 50 requests take
>>> an average of 4.4 seconds per request processed 25 at a time, then the true
>>> time spent on each request is 4.4 seconds / 25 = 177.6 ms). This is the
>>> number that should be compared to the single request case. Alternatively,
>>> you can just compare the total test time in both cases (assuming you ran
>>> the same number of total requests in each case).
>>>
>>
>> I am still unclear.
>>
>> Have a look at the longest response time, which is 5.8 s as compared to <1
>> s. This means that as soon as the website faces 25-odd concurrent users,
>> the response time goes up six-fold, which is certainly not a good thing. I
>> am sure CPU/memory/bandwidth are not the bottleneck here.
>>
>> Comparing the total time taken in the two cases won't be a fair comparison.
>> One of them is doing things concurrently and the other sequentially. If
>> both were expected to take the same time, then why do things in parallel
>> in the first place?
>>
>> I understand that expecting [time for 25 concurrent requests] ~= 1*[time
>> for a single request] is wrong because of shared resources. But it
>> should still be considerably less than 25*[time for a single request]. And as
>> 25 is a fairly small number, I'd expect it to be close to 1*[time for a
>> single request]. The CPU time taken in processing a light request should be
>> close to zero, so it should not be a bottleneck while processing 25 such
>> requests in parallel. If the CPU is turning out to be the bottleneck here,
>> there must be something wrong in what we are doing. A lot of table
>> definitions is one such thing. And, just out of curiosity, why is table
>> definition expensive in the first place if migrate is set to False and
>> there is no need for database interaction while defining tables?
>>
>> Regarding the suggestion to optimize the table definitions, yes, it
>> definitely makes sense and answers my doubts. I will make the changes and
>> post how it turns out.
>>
>>
>>>
>>> Anthony
>>>
>>> --
>>> Resources:
>>> - http://web2py.com
>>> - http://web2py.com/book (Documentation)
>>> - http://github.com/web2py/web2py (Source code)
>>> - https://code.google.com/p/web2py/issues/list (Report Issues)
>>> ---
>>> You received this message because you are subscribed to a topic in the
>>> Google Groups "web2py-users" group.
>>> To unsubscribe from this topic, visit
>>> https://groups.google.com/d/topic/web2py/rH1C7iXMPNA/unsubscribe.
>>> To unsubscribe from this group and all its topics, send an email to
>>> [email protected].
>>>
>>> For more options, visit https://groups.google.com/groups/opt_out.
>>>
>>
