Dieter Maurer wrote:

Currently, the ZODB cache can only be controlled via the maximum number
of objects it may hold. This makes configuration difficult, as the actual
limiting factor is the amount of available RAM, and it is very hard to
estimate the size of the objects in the cache.

I therefore propose implementing cache replacement policies based on the
estimated size of the cached objects.

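(For anyone who hasn't poked at this: the knob Dieter means is the per-connection cache_size, which is a count of objects rather than bytes, so something roughly like this, from memory:)

    from ZODB import DB, FileStorage

    storage = FileStorage.FileStorage('Data.fs')
    # cache_size is a number of objects per connection, not a byte limit,
    # so the RAM it actually maps to depends on how big your objects happen to be
    db = DB(storage, cache_size=5000)
    conn = db.open()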
I don't suppose there's any way that, as part of this work, the size of the cache could also be limited DURING a transaction? I'm sure we've all been through the situation where some loop accidentally pulls most of a ZODB into memory because the loop happens in one transaction.
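Something as innocent as this is all it takes (made-up names, carrying on from a freshly opened connection); every object the loop touches stays in the connection cache, and nothing gets trimmed until the transaction boundary:

    root = conn.root()
    total = 0
    # 'documents' is assumed to be a large BTree of persistent objects
    for doc_id, doc in root['documents'].items():
        total += doc.word_count   # hypothetical attribute; reading it loads the whole object
    # by this point the cache holds more or less every document in the database,
    # because the cache_size limit is only enforced between transactions, not during one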

It would also mean that a lot of the abuses of get_transaction().commit() and get_transaction().abort() (and the _p_deactivate() stuff in ZopeFind) wouldn't be needed any more...
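By "abuses" I mean the sort of thing people write today to keep that loop from eating all the RAM, roughly like this (again made-up names, and from memory, so take the details with a pinch of salt):

    count = 0
    total = 0
    for doc_id, doc in root['documents'].items():
        total += doc.word_count    # same hypothetical attribute as above
        doc._p_deactivate()        # ghost it again straight away (only works if it hasn't been modified)
        count += 1
        if count % 1000 == 0:
            # commit (or abort) mid-loop purely so the connection cache gets a chance to shrink
            get_transaction().commit()   # old-style API; newer ZODBs spell this transaction.commit()
    get_transaction().commit()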

cheers,

Chris

--
Simplistix - Content Management, Zope & Python Consulting
           - http://www.simplistix.co.uk
