Hi Iceberg -

With regard to bypassing the source code parsing overhead, I think that
overhead only occurs when the site is not compiled. Once you compile a
site from the web2py admin pages, .pyc bytecode files are created for
the models, and then on each page request it looks like those .pyc
files are loaded directly and executed in the prepared environment
containing the request, response objects etc.
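
In case it helps to make that concrete, here is a rough sketch of the
general idea as I understand it - my own illustration only, not
web2py's actual gluon.compileapp code (paths and names are made up):

    # the expensive parse/compile happens once; each request only pays for exec()
    _code_cache = {}

    def run_model_in(environment, path='applications/myapp/models/db.py'):
        code = _code_cache.get(path)
        if code is None:
            with open(path) as f:
                # parsing/compiling only happens on the first request
                code = compile(f.read(), path, 'exec')
            _code_cache[path] = code
        # every request still executes the model code, so the db/Table
        # objects are rebuilt each time even though parsing is skipped
        exec(code, environment)

The point being that the model objects are still rebuilt per request;
only the parsing cost goes away.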

(apologies if you are already well versed in the above, I am only just
finding these things out myself!)

Just to confirm this, attached to another post on this thread I have
some example code with the model definitions moved into a module.
Setting ENABLE_APPLICATION_SCOPE to False at the top of that file
makes it create the model objects on every page request, which is
basically your suggestion above of moving model definitions into a
module file. With this arrangement I clocked much the same timings as
when all the model definitions lived in one models/db.py file - no
speed increase from using modules alone, unfortunately (although I
did get a speed increase by reusing a shared object between page
requests).
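
For reference, the shared-object arrangement boils down to something
like the sketch below. It is a simplified stand-in for the attached
module, not the file itself - the table, connection string and
get_db name are made up, and the exact DAL import path depends on
your web2py version:

    # modules/app_models.py  (sketch of the idea, not the attached file verbatim)
    ENABLE_APPLICATION_SCOPE = True   # False -> rebuild the model objects per request

    _shared_db = None                 # module-level "static" variable surviving requests

    def get_db(uri='sqlite://storage.sqlite'):
        from gluon.sql import DAL, Field   # import path may differ between web2py versions
        global _shared_db
        if ENABLE_APPLICATION_SCOPE and _shared_db is not None:
            return _shared_db              # reuse the same DAL/Table objects
        db = DAL(uri)
        db.define_table('person', Field('name'), Field('email'))
        if ENABLE_APPLICATION_SCOPE:
            _shared_db = db
        return db

models/db.py then only needs something like:

    from applications.yourapp.modules.app_models import get_db
    db = get_db()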

As for cache.ram, I did not go with it for now. In my experience with
other development environments, caching usually involves serialising
and deserialising objects, so it will never be as fast as simply
holding onto one shared object in a static variable - that is what I
concentrated my next test on.

Cheers,
 - Alex

On Aug 24, 1:28 pm, Iceberg <[email protected]> wrote:

> Thanks for the interesting test, Pearson (What_Ho).
>
> Besides trying Yarko's "putting table definitions in gluon just as
> a quick hack to compare performance", would you mind trying a more
> general way?
>
> A. Put table definitions into applications/yourapp/modules/
> yourtable.py, and then just import them from your db.py? I assume
> this is enough to bypass the source code parsing overhead in each
> request, although the db objects are still rebuilt in every request.
>
> B. If you do plan A, you will probably also want to try plan B:
> simply compile your app with the full db.py, then benchmark again.
> Maybe A and B give similar results.
>
> C. I am not sure yet, but if plan A helps a bit, we can again use the
> cache trick to reuse the same db object for every request. Something
> like this:
>   from applications.yourapp.modules.yourtable import _init_db
>   db = cache.ram('my_db', _init_db, 99999999)
> Can it give another boost?
>
> Sincerely,
> Iceberg