On Tuesday, April 14, 2015 at 10:51:31 PM UTC+2, Louis Amon wrote:
>
> I'm trying to optimize the load-speed of my app, and I'm considering using 
> conditional models.
>
> I've also been using lazy_tables for a while now and I can attest for the 
> speed boost, but I don't really understand how and why.
>
>
There's a lot of machinery around table definition. With ~50 tables you may 
not even notice it (beyond 50 you usually will), but the point is that the 
machinery is there.
lazy_tables defers that machinery to the moment you first access a table in 
a request, and only for that table. The cost drops from a theoretical 100% 
to roughly 2%, speed-wise.
This means that a model defining 50 tables will still be read on every 
request, but if your controller needs only 1 table, the other 49 
"machineries" are never executed, saving that ~98% for each and every one 
of the 49 unneeded tables.
There are corner cases where you need a referenced table to be fully 
"machinered" even though you're using a single table in your controller 
(e.g. smartgrid); that's a chicken-and-egg problem that will probably never 
be fixed, but such cases can be readily "patched" by putting something like 
"db.tablename_that_needs_to_be_machinered" in the controller.
Apart from those, lazy_tables, coupled with migrate=False, is the single 
most important no-frills setting one should enable in production: it 
doesn't impact any piece of your app's code in any way, and it gives a 
HUGE speedup.
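The deferral principle is easy to illustrate with a toy sketch (this is NOT 
web2py's actual implementation, just the idea: definitions are recorded 
cheaply, and the expensive "machinery" runs only on first attribute access):

```python
class LazyDB:
    """Toy illustration of the lazy_tables idea; not web2py's real code."""

    def __init__(self):
        self._definitions = {}   # name -> fields: cheap to store
        self._built = {}         # name -> fully built table

    def define_table(self, name, *fields):
        # Cheap: just remember the definition, do no heavy work yet.
        self._definitions[name] = fields

    def __getattr__(self, name):
        # The expensive machinery is deferred to the first db.<name> access.
        if name in self._built:
            return self._built[name]
        if name in self._definitions:
            # stand-in for the real (costly) table-building work
            table = {"name": name, "fields": self._definitions[name]}
            self._built[name] = table
            return table
        raise AttributeError(name)

db = LazyDB()
for i in range(50):
    db.define_table("t%d" % i, "id", "name")  # 50 cheap registrations
db.t3                                         # only t3 pays the build cost
```

In real web2py you just pass the flags to the DAL constructor, e.g. 
db = DAL('sqlite://storage.sqlite', lazy_tables=True, migrate=False), and 
keep your define_table calls unchanged.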

> Can someone explain how lazy_tables compare to response.models_to_run in 
> terms of filtering and executing models ?
>

response.models_to_run (and conditional models, which are just a smart 
implementation on top of response.models_to_run) instead skips the 
execution of some model files altogether. This means that that piece of 
Python in models, which is normally executed on each and every request, is 
skipped. It also means that if a table is defined in a model file that 
isn't executed, it won't be there when you ask for it from the controller.
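web2py decides which model files to execute by matching their paths against 
the regex patterns in response.models_to_run (conditional models just 
derive sensible patterns from the models/<controller>/ and 
models/<controller>/<function>/ folder layout). A self-contained sketch of 
that filtering logic — the file names here are made up for illustration, 
and the patterns are in the spirit of the defaults, not copied from 
web2py's source:

```python
import re

# Hypothetical model files of an app, relative to models/
model_files = [
    "db.py",                 # top-level: runs for every request
    "menu.py",               # top-level: runs for every request
    "admin/access.py",       # runs only for the admin controller
    "admin/index/stats.py",  # runs only for admin/index
    "shop/cart.py",          # runs only for the shop controller
]

def models_to_run(controller, function):
    # Top-level models always run; models in per-controller and
    # per-function subfolders run conditionally.
    return [
        r"^\w+\.py$",
        r"^%s/\w+\.py$" % controller,
        r"^%s/%s/\w+\.py$" % (controller, function),
    ]

def selected(controller, function):
    patterns = models_to_run(controller, function)
    return [f for f in model_files
            if any(re.match(p, f) for p in patterns)]

print(selected("admin", "index"))
# -> ['db.py', 'menu.py', 'admin/access.py', 'admin/index/stats.py']
print(selected("shop", "checkout"))
# -> ['db.py', 'menu.py', 'shop/cart.py']
```

A request to shop/checkout never executes admin/access.py, so any table 
defined only there simply won't exist for that request.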
This is also a speedup, but it requires you to craft your application in 
such a way that each piece of code in your controllers is tied to the 
models that are executed for that request.
That's not a big "con" per se, but in medium-sized apps it means you 
*need* some "vision" of the overall architecture: you'll quickly find that 
some tables (or methods) are needed in all controllers (so that piece of 
model must be executed on every request), while others are needed in 
controllers "a" and "b" but not in "c". You'll then move models around, 
only to find that one of the tables needed in "a" and "b" is also needed 
in "c". Suffice it to say that without a clean organization you'll face 
either "moving" controller functions between controllers or "duplicating" 
some models.

As with everything, a general rule of thumb (don't take it for granted or 
set in stone): a single-file model with 100 tables, once "lazy_tabled", 
may cut the response time from 600ms to 100ms.
Splitting that model into 20 files (~5 tables per file) will add ~50ms 
(either to the original 600ms or to the "lazied" 100ms). Skipping the 
execution of 15 of those model files will save ~30ms.
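Putting those ballpark figures together (just the arithmetic from the rule 
of thumb above, nothing measured):

```python
# Ballpark figures from the rule of thumb above (milliseconds).
single_file_plain = 600        # one model file, 100 tables, no lazy_tables
single_file_lazy = 100         # same file, with lazy_tables
split_overhead = 50            # extra cost of splitting into 20 model files
skip_15_files_saving = 30      # saving from skipping 15 of the 20 files

# 20 files + lazy_tables + conditional models skipping 15 files:
combined = single_file_lazy + split_overhead - skip_15_files_saving
print(combined)  # -> 120: most of the win comes from lazy_tables alone
```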

<tl;dr> skipping models is cool, but it requires vision and code 
reorganization. It doesn't even come close to the instant gain provided by 
lazy table definitions. That doesn't mean you can't use lazy_tables 
together with conditional models (you should use lazy_tables pretty much 
always), but don't expect big improvements unless you're working with 50 
model files or so.

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)