I have a database consisting mainly of an IOBTree of ~700,000 
items (each a persistent mapping), and a zc.catalog indexing them by one 
of the mapping's fields (a date).  I am opening the FileStorage in 
read-only mode.  For each day in the index, I fetch that day's mappings and 
read the contents of another field.  As I iterate through the days, memory 
usage explodes (over 32 GB).  Is there a way to configure the cache to 
automatically keep itself under the value of the cache-size parameter?
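
For reference, this is roughly how the database is opened -- a ZConfig 
sketch, where the path and the cache-size value are placeholders, not my 
actual settings:

```
<zodb main>
  # Target number of objects per connection cache; this is the limit
  # I would expect the cache to stay under, but apparently doesn't.
  cache-size 5000
  <filestorage>
    path Data.fs
    read-only true
  </filestorage>
</zodb>
```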

        To work around this in most cases, I wrap the IOBTree in another 
object which does nothing more than call db.cacheMinimize() after every 
10,000 items iterated over.  But for random access, that's not an 
option.
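
The wrapper is essentially the following sketch (the function name and 
the `every` parameter are my own; `db.cacheMinimize()` is the ZODB DB 
method):

```python
def iter_with_cache_minimize(items, db, every=10000):
    """Yield items, flushing the ZODB object cache every `every` items.

    `db` is expected to be a ZODB.DB.DB instance (or anything with a
    cacheMinimize() method); `items` is any iterable, e.g. the IOBTree's
    itervalues().
    """
    for i, item in enumerate(items, 1):
        yield item
        if i % every == 0:
            # Evict everything from the pickle cache so memory
            # stays bounded while scanning large collections.
            db.cacheMinimize()
```

This keeps memory flat for sequential scans, but since it only fires on 
iteration it does nothing for random lookups.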

        From reading the archives, it sounds like cache cleaning does not 
happen while a transaction is running.  Is that my problem?  I'm in 
read-only mode, so I can't commit a transaction anyway, but could ZODB 
think I intend to?  Or is my problem something else?

        Failing that, perhaps someone can point me to a description of how 
the caching works.

-- 
Anthony Foglia
Princeton Consultants
(609) 987-8787 x233
_______________________________________________
For more information about ZODB, see the ZODB Wiki:
http://www.zope.org/Wikis/ZODB/

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
http://mail.zope.org/mailman/listinfo/zodb-dev
