2016-11-15 13:19 GMT+01:00 Cédric Krier <cedric.kr...@b2ck.com>:

> On 2016-11-15 03:14, Mikhail Savushkin wrote:
> > Now we have another blocker on the road - the Pool building process is
> > too long to be fired on every request. And we can't afford keeping the
> > Pools for all of the DBs in memory, since that would eat all the
> > available memory pretty quickly.
>
> Do you have measurements for that?
>

I would also be interested in some metrics! How many DBs are you talking
about?
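
For what it's worth, a rough way to get such numbers could be something
along these lines (just a sketch; it assumes the usual trytond entry points
Pool(database_name) and pool.init() and that the configuration is already
loaded, so adjust to your setup and trytond version):

    import time
    import tracemalloc  # Python >= 3.4

    from trytond.pool import Pool

    def measure_pool(database_name):
        """Return (seconds, peak bytes) needed to build one database's pool."""
        tracemalloc.start()
        start = time.time()
        Pool(database_name).init()  # build every registered class for this DB
        elapsed = time.time() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return elapsed, peak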


> > So we have an idea: rewrite a small part of the pool.py module, so
> > that it will build only the currently needed classes for the particular
> > request, i.e. patch the "Pool.get()" method mostly. And throw the data
> away
> > after request. So, we hope that building only a small part of all the
> > classes out there will cut the building time a lot, and will allow us to
> do
> > so on each request without significant response time lags.
>
> Some requests may require access to many classes; in such cases it will
> not be very efficient to build the classes on the fly.
> Also, to build a class you need to query the database to know which
> modules are activated and to build the dependency graph, so this will
> generate a lot of queries.


A hybrid solution could perhaps be to select some "core" models that would
always be loaded in memory (typically all ir / res models), and load the
others on demand. But I agree with Cédric that I am not sure the gain would
be that interesting even in the best-case scenario (i.e. only one or two
models to build).
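
For illustration only (this is not how trytond's Pool is actually
implemented, and the names below are made up), the hybrid idea could look
roughly like a per-database registry that builds a small "core" set of
classes eagerly and the rest lazily on first access:

    class LazyRegistry:
        """Per-database registry: eager "core" classes, lazy for the rest."""

        def __init__(self, factories, core_prefixes=('ir.', 'res.')):
            # factories: mapping of model name -> callable that builds the class
            self._factories = factories
            self._built = {}
            for name, factory in factories.items():
                if name.startswith(core_prefixes):
                    # Build the "core" models (ir / res) up front.
                    self._built[name] = factory()

        def get(self, name):
            # Build on first access and cache for the registry's lifetime
            # (e.g. one request).
            if name not in self._built:
                self._built[name] = self._factories[name]()
            return self._built[name]

Even then, the first requests touching a long dependency chain would pay the
full build cost plus the extra module/graph queries Cédric mentions, which is
why I doubt the gain would be significant.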

Jean Cavallo
*Coopengo*
