Metrics are:

"""
DBs in Pool  RSS, kBytes                 Added to memory, kBytes
0             65,020                     -
1             86,576                     +21,500
2            105,520                     +19,000
3            114,520                     + 9,000
4            123,708                     + 9,000
5            132,444                     + 9,000
"""


And regarding the efficiency: that's why we want to use "maps", possibly stored in Redis.
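
For the sake of clarity, a rough sketch of what such a "map" could look like in Redis - the key layout, the map contents and the helper names below are only illustrative, not a finished design:

"""
import json
import redis

r = redis.StrictRedis(host='localhost', port=6379, db=0)

def save_model_map(db_name, model_map):
    # e.g. model name -> list of activated modules; purely illustrative
    r.set('pool:%s' % db_name, json.dumps(model_map))

def load_model_map(db_name):
    raw = r.get('pool:%s' % db_name)
    return json.loads(raw) if raw else None
"""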

By the way, we would be perfectly happy to simply store the entire Pool in Redis and fetch it from Redis on each request, but as far as I understand, serializing the Pool is practically impossible.


Since we're talking about millions of DBs, this straightforward approach won't work.

On Tuesday, November 15, 2016 at 17:19:24 UTC+3, Giovanni wrote:
>
>
> 2016-11-15 13:19 GMT+01:00 Cédric Krier <cedric...@b2ck.com>:
>
>> On 2016-11-15 03:14, Mikhail Savushkin wrote:
>> > Now we have another blocker on the road - the Pool building process
>> > is too long to be fired on every request. And we can't afford
>> > collecting Pools for all of the DBs in memory, since it will eat all
>> > the available memory pretty quickly.
>>
>> Do you have measures for that?
>>
>
> I would be interested as well to get some metrics! How many DBs are you
> talking about?
>  
>
>> > So we have an idea: rewrite a small part of the pool.py module, so
>> > that it will build only the currently needed classes for the particular
>> > request, i.e. patch the "Pool.get()" method mostly. And throw the data
>> > away after the request. So, we hope that building only a small part of
>> > all the classes out there will cut the building time a lot, and will
>> > allow us to do so on each request without significant response time lags.
>>
>> Some requests may require access to many classes; in such a case it will
>> not be very efficient to build the classes on the fly.
>> Also, to build a class, you need to query the database to know which
>> modules are activated and build the dependency graph, so this will
>> generate a lot of queries.
>
>
> A hybrid solution could maybe be to select some "core" models that will
> always be loaded in memory (typically all ir / res models), and load the
> others on demand. But I agree with Cedric that I am not sure the gain would
> be that interesting even in the best case scenario (i.e. only one or two
> models to build).
>
> Jean Cavallo
> *Coopengo*
>
