Hi Tim,

Tim Peters wrote:

>     import ZODB
>     print ZODB.__version__
> 
> to find out.

Good to know, thanks...

>> I have a Stepper (zopectl run on steroids) job that deals with lots of
>> big objects.
> 
> Can you quantify this?

60,000 File objects on the order of 2MB each.

> It does not do cacheMinimize().  It tries to reduce the memory cache to the
> target number of objects specified for that cache, which is not at all the
> same as cache minimization (the latter shoots for a target size of 0).
> Whether that's "sane" or not depends on the product of:
> 
>     the cache's target number of objects
> 
> and
> 
>     "the average" byte size of an object
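To make that product concrete, here is a quick back-of-the-envelope sketch. The cache target below is an assumed example value, not a measured setting; the ~2MB average object size comes from the File objects mentioned earlier in the thread:

```python
# Rough RAM estimate for a ZODB per-connection object cache.
# The cache is bounded by object *count*, not bytes, so resident
# size is roughly target_count * average_object_size.

target_count = 5000              # hypothetical cache-size setting
avg_object_size = 2 * 1024 ** 2  # ~2 MB per object, as in this thread

estimated_bytes = target_count * avg_object_size
print("estimated cache RAM: %.1f GiB" % (estimated_bytes / 1024.0 ** 3))
# prints "estimated cache RAM: 9.8 GiB"
```

With large PData chunks even a modest object-count target can translate into many gigabytes of resident memory.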

Ah, that'll do it; I wondered why it was only this step that was hurting. My guess is that our cache size settings, combined with lots of max-sized PData objects, lead to the RAM blowup...

...oh well, if only the ZODB cache were RAM-usage-based rather than object-count-based ;-)
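Since the count-based target can't see object sizes, one workaround for a long-running job like this is to shrink the cache explicitly between batches. cacheMinimize() is a real method on a ZODB Connection; everything else here (run_job, process_batch, the batching scheme) is a hypothetical sketch:

```python
def run_job(conn, batches, process_batch):
    """Process batches of work, shrinking the ZODB cache after each.

    Hedged sketch: Connection.cacheMinimize() is a real ZODB method
    that ghosts every cached object regardless of the count target;
    conn, batches and process_batch are stand-ins for the job's own
    pieces.
    """
    for batch in batches:
        process_batch(conn, batch)
        conn.transaction_manager.commit()  # flush changes first
        conn.cacheMinimize()               # release cached objects' RAM
```

Calling cacheMinimize() after every batch trades re-loading cost for a bounded memory footprint, which is usually the right trade when the objects are multi-megabyte blobs.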

thanks for the info!


Simplistix - Content Management, Zope & Python Consulting
           - http://www.simplistix.co.uk
For more information about ZODB, see the ZODB Wiki:

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
