*Massimo and Nico:*
Thanks for looking into those things, can't wait!
*RAM Cache and DAL?*
I've been looking into conditional models and attempting to combine them
with the module-based system just to see how far I can take it, and I've
run into a question:
Is there any reason I shouldn't use cache.ram for a DAL instance? I can't
use the automatic migration tools since our data structure wouldn't allow
for that kind of thing (running a single-column update against some of the
bigger tables can take 30+ minutes, and we want that done in a controlled
environment, probably outside of web2py). So, with migration out of the
picture, could I do this in the models to avoid re-defining the tables on
every request?
def load_models():
    # Migration is handled outside web2py, so disable it here.
    db = DAL('postgres://localhost:5432/Demo', migrate_enabled=False)
    db.define_table('table', Field('field'))
    db.define_table('table2', Field('field'))
    # etc...
    return db

db = cache.ram('datamodels', load_models, time_expire=None)
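To make it clear what behavior I'm relying on: with time_expire=None, the
call above should act as a per-process memoized singleton. Here's a minimal
plain-Python sketch of those semantics (ram_cache and the module-level dict
are just illustrative stand-ins, not web2py's actual implementation):

```python
# Illustrative stand-in for cache.ram with time_expire=None:
# a per-process cache keyed by name, computed at most once.
_cache = {}

def ram_cache(key, factory, time_expire=None):
    # time_expire=None means "never recompute" -- the first call wins.
    if key not in _cache:
        _cache[key] = factory()
    return _cache[key]

calls = []
def load_models():
    calls.append(1)        # count how many times the factory runs
    return object()        # stands in for the DAL instance

db1 = ram_cache('datamodels', load_models)
db2 = ram_cache('datamodels', load_models)
assert db1 is db2          # same object across "requests"
assert len(calls) == 1     # factory ran only once
```

Every request in the same process would get the identical db object, which
is exactly the redundant-redefinition cost I'm trying to avoid.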
My real goal is just to keep the data model remembered between requests
(since it'd be redundant to load it every time). I suppose it's really
just a process-specific singleton, but it does make some difference. Here
are some non-scientific benchmarks I performed on my data model:
All tables defined in request: ~420ms
All tables defined in request w/ cache hit: ~90ms
All tables defined in request (compiled app): ~350ms
All tables defined in request w/ cache hit (compiled app): ~25ms
Obviously the first request after a cold start would be fairly slow, but
all subsequent requests would benefit greatly. By caching the DAL instance
this way, am I potentially hurting myself in some way?
On Saturday, May 26, 2012 11:13:17 AM UTC+1, Nico de Groot wrote:
>
> Hi David,
> Got Jenkins running on mac and windows with unittests, will send you
> details later.
> Nico de Groot