Chris Withers wrote:
> > The only way that ZODB could keep such a promise would be to
> > disallow loading new objects, generating errors under some circumstances.
>
> ...or to bump objects out of the cache mid-way through a transaction,
> generating errors if there are no non-modified objects that can be
> unloaded, hence my interest in the _v_ and _p_sticky discussion...
That's what I said. Memory limits could be exceeded by modified
or sticky objects. The only way to guarantee a memory limit (assuming
that we could actually measure memory usage) is to refuse to load objects
at some point.
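The constraint described above can be sketched in a few lines. This is a toy illustration, not ZODB's actual cache API: the `BoundedCache` class, its method names, and the eviction policy are all hypothetical, but they show why a hard limit forces a refusal once every cached object is modified (dirty) and therefore unevictable.

```python
class BoundedCache:
    """Toy sketch (not ZODB's API): a cache with a hard object limit.

    Modified ("dirty") objects cannot be evicted mid-transaction, so once
    every cached object is dirty the only way to honor the limit is to
    refuse further loads.
    """

    def __init__(self, limit):
        self.limit = limit
        self.objects = {}    # oid -> object
        self.dirty = set()   # oids of modified objects

    def load(self, oid, obj):
        if oid in self.objects:
            return self.objects[oid]
        if len(self.objects) >= self.limit:
            # Try to evict a clean (non-modified) object first.
            for victim in list(self.objects):
                if victim not in self.dirty:
                    del self.objects[victim]
                    break
            else:
                # Every cached object is modified: nothing can go.
                raise MemoryError("cache full of modified objects; refusing load")
        self.objects[oid] = obj
        return obj

    def modify(self, oid):
        self.dirty.add(oid)
```

With a limit of 2, loading two objects and modifying both leaves nothing evictable, so a third load raises; if one object is still clean, it gets bumped instead.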
> I'm sure things _can_ be better than this, I just wish I could help...
Well, I think that Dieter's proposal is a good start. First, we need
to get to a saner architecture so we can make progress on cache
BTW, it occurs to me that a tool could be written (as a custom unpickler)
to give much better estimates of object size based on analysis of the pickle.
If this were integrated with unpickling, then the added cost would probably be
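One way to sketch this idea is with the standard library's `pickletools`, which walks a pickle's opcode stream without executing it. The heuristic below is entirely hypothetical (the per-object overhead constant and the choice of "creation" opcodes are assumptions for illustration), but it shows how an estimator could weigh a record by more than just its raw byte length:

```python
import pickle
import pickletools

# Assumed constant: rough CPython per-object header cost, chosen for
# illustration only.
PER_OBJECT_OVERHEAD = 56

# Opcodes that (roughly) correspond to creating a new in-memory object.
CREATION_OPCODES = {"REDUCE", "NEWOBJ", "EMPTY_DICT", "EMPTY_LIST", "EMPTY_TUPLE"}


def estimate_size(data: bytes) -> int:
    """Estimate the in-memory footprint of a pickled object graph.

    Scans the opcode stream (no unpickling, so no code from the pickle
    runs) and adds a fixed overhead for each object-creating opcode on
    top of the pickle's own byte length.
    """
    n_creations = sum(
        1
        for opcode, arg, pos in pickletools.genops(data)
        if opcode.name in CREATION_OPCODES
    )
    return len(data) + n_creations * PER_OBJECT_OVERHEAD


blob = pickle.dumps({"x": list(range(10)), "y": "hello"})
print(estimate_size(blob))
```

Because `pickletools.genops` only tokenizes the stream, this kind of analysis could indeed piggyback on loading with little extra cost, as suggested above.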
Jim Fulton mailto:[EMAIL PROTECTED] Python Powered!
CTO (540) 361-1714 http://www.python.org
Zope Corporation http://www.zope.com http://www.zope.org
For more information about ZODB, see the ZODB Wiki:
ZODB-Dev mailing list - ZODB-Dev@zope.org