On Jan 11, 2008, at 1:08 PM, Chris McDonough wrote:
Related... but fuzzier... is it your expectation that the amount of memory used by a database-walk routine that manages memory via some combination of connection.cacheMinimize() or cacheGC() every n iterations (with no calls to individual objects' _p_deactivate) should be close to one that uses _p_deactivate aggressively on the objects being walked? In my experience, no combination of the "aggregate" cache-management calls works nearly as well as aggressively _p_deactivate-ing walked objects directly while walking a large object tree (at least under ZODB 3.6). Calling cacheMinimize, etc. just doesn't seem to have much effect on real memory usage during the walk when it's the only thing used. It's a difficult thing to test, as you need a truly huge database to finally see the failure mode (which is that you run out of RAM ;-), but that's my experience anyway.

Python isn't good at returning memory to the OS, so once the process has grown you really can't free it. Calling _p_deactivate along the way prevents it from growing much in the first place.
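The walk-and-deactivate pattern being discussed can be sketched as follows. This is a stdlib-only illustration, not ZODB code: FakePersistent is a stand-in for a persistent object so the pattern runs anywhere, and its _p_deactivate() merely drops the simulated state the way real ZODB turns a loaded object back into a "ghost".

```python
class FakePersistent:
    """Stand-in for a ZODB persistent object with loadable state."""

    def __init__(self, payload, children=()):
        self._payload = payload          # simulates loaded object state
        self.children = list(children)

    def _p_deactivate(self):
        # Real ZODB ghosts the object, releasing its in-memory state
        # back to the pickle cache; here we just drop the payload.
        self._payload = None


def walk(root, visit):
    """Depth-first walk that deactivates each node right after use."""
    stack = [root]
    visited = 0
    while stack:
        node = stack.pop()
        visit(node)
        stack.extend(node.children)
        # Aggressively release the node's state so memory never holds
        # the whole tree at once -- the point made in the reply above.
        node._p_deactivate()
        visited += 1
    return visited
```

With real ZODB objects the shape is the same: process the object, then call obj._p_deactivate() before moving on, so the resident set stays roughly constant regardless of tree size.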

BTW, in situations in which you don't know which objects to deactivate, an alternative is to call cacheGC on the connection frequently. This is fairly inexpensive and incremental.
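The "call cacheGC frequently" alternative amounts to a periodic trim inside the loop. A minimal sketch of that shape, assuming the real call would be connection.cacheGC() on a ZODB connection; FakeConnection below only counts invocations so the loop runs standalone:

```python
class FakeConnection:
    """Stand-in for a ZODB connection; only counts cacheGC calls."""

    def __init__(self):
        self.gc_calls = 0

    def cacheGC(self):
        # Real ZODB trims the pickle cache down toward its target
        # size incrementally; cheap enough to call often.
        self.gc_calls += 1


def process_all(items, connection, gc_every=1000):
    """Process items, trimming the cache every gc_every iterations."""
    for i, item in enumerate(items, 1):
        _ = item  # stand-in for real per-object work
        if i % gc_every == 0:
            connection.cacheGC()  # inexpensive, incremental trim
```

The interval is a tuning knob: smaller values keep memory flatter at the cost of slightly more overhead per iteration.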


Jim Fulton
Zope Corporation

Zope-Dev maillist  -  Zope-Dev@zope.org
