> On Tue, Nov 26, 2013 at 2:44 AM, Anthony <[email protected]> wrote:
>
>> So, is time for 25 concurrent requests significantly greater than
>> 25*[time for a single request]?
>
> No, time for 25 concurrent requests is smaller than 25*[time for a single
> request]. Avg. time taken per request is 4 to 5 times higher, though. They
> should be processed in parallel, not sequentially, right?
>
> Ideally, I would expect time for 25 concurrent requests ~= 1*[time for a
> single request].
No, that's not how concurrency works. When you process 25 requests simultaneously, the computer doesn't suddenly gain 25x its normal computing power. Each request is handled in a separate thread, but all of those threads still have to share the same computing resources, so each request takes longer to process than it would in the single-request case.

That said, it doesn't look like concurrency is what's slowing things down for you. The problem is that you have 100 table definitions. Even with migrations turned off, table definitions take some time, so you should avoid executing so many on each request. At a minimum, you should set lazy_tables=True. That postpones most of the table-definition work until a table is actually referenced in the code, so tables that are never referenced within a given request will not get fully defined. In addition, you can use conditional models to execute only some of the model files on each request (i.e., those needed for the particular controller/function being called), and/or define models in modules and import them where needed.

Anthony
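The concurrency point above can be illustrated with a small, self-contained Python sketch (this is not web2py code; the "request" is simulated with a CPU loop plus a sleep, and the numbers are illustrative): total wall time for 25 concurrent requests stays well below 25x the single-request time, while the time taken per individual request grows, because the CPU-bound portion is shared.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Simulate one request: some CPU-bound work plus some I/O wait."""
    start = time.perf_counter()
    total = sum(i * i for i in range(200_000))  # CPU work: threads must share this
    time.sleep(0.05)                            # I/O wait: overlaps across threads
    return time.perf_counter() - start

# Time one request running alone.
single = handle_request()

# Time 25 requests running concurrently.
wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=25) as pool:
    per_request = list(pool.map(lambda _: handle_request(), range(25)))
wall = time.perf_counter() - wall_start

# Wall time is far less than 25 * single (the I/O waits overlap),
# but each individual request takes longer than it did alone,
# because all 25 threads contend for the same CPU.
print(f"single: {single:.3f}s  "
      f"avg concurrent: {sum(per_request) / len(per_request):.3f}s  "
      f"wall: {wall:.3f}s")
```

This matches the numbers reported in the thread: total time well under 25x, but per-request time several times higher than the single-request case.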

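What lazy_tables=True buys you can be sketched in plain Python (an illustrative stand-in, not web2py's actual DAL implementation): defining a table becomes cheap, and the expensive construction work runs only on first access, so a request that touches 2 of 100 tables pays for 2.

```python
class LazyTables:
    """Defer expensive table construction until first attribute access,
    mimicking the effect of web2py's DAL(..., lazy_tables=True)."""

    def __init__(self):
        self._factories = {}  # table name -> zero-argument builder
        self._built = {}      # table name -> constructed table

    def define_table(self, name, factory):
        # Cheap: only records how to build the table later.
        self._factories[name] = factory

    def __getattr__(self, name):
        # Called only for attributes not found normally; build on demand, once.
        if name not in self._factories:
            raise AttributeError(name)
        if name not in self._built:
            self._built[name] = self._factories[name]()
        return self._built[name]


db = LazyTables()
built = []  # records which tables were actually constructed
for i in range(100):
    # A real model file would call db.define_table('t%d' % i, Field(...), ...)
    db.define_table("t%d" % i, lambda i=i: built.append(i) or "table-%d" % i)

# At this point no table has been constructed; only the definitions exist.
first = db.t7  # first access triggers construction of t7, and only t7
```

With real web2py models the same idea applies: all 100 define_table calls still run on every request, but each one becomes a cheap registration rather than full table construction, which is why combining lazy_tables with conditional models (so most model files don't run at all) helps even more.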
