Dieter Maurer wrote:
> A new proposal:
> It outlines how to implement a ZODB cache limited not by the
> number of contained objects but by their estimated memory size.
> Feedback welcome -- either here or in the Wiki.
I'm going to reply here because I think discussion is easier here, at
least in the short term.
First, I like the gist of the proposal. I think the proposal
should be split into 2 proposals:
1. Refactor (persistence and) cache management to reduce coupling
between persistence, database, and cache frameworks.
This needs more thought and specification. I'm gonna spend
some time today on a proposal that is mostly complementary to
this one.
I'll note that, as a guiding principle, any refactoring we do
should allow pure-python implementation. This means that APIs
need to be Python APIs, although we should consider efficient
C implementations when designing these APIs.
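To make the guiding principle concrete, a pure-Python cache API of the kind described above might look like the sketch below. The class and method names here are illustrative assumptions, not the actual ZODB interfaces; the point is that the contract is expressed in Python, so a C implementation can later satisfy the same interface for speed.

```python
from abc import ABC, abstractmethod


class Cache(ABC):
    """Hypothetical cache interface, decoupled from persistence and
    database frameworks (names are illustrative, not real ZODB APIs)."""

    @abstractmethod
    def get(self, oid):
        """Return the cached object for oid, or None if absent."""

    @abstractmethod
    def store(self, oid, obj):
        """Add obj to the cache under oid."""

    @abstractmethod
    def invalidate(self, oid):
        """Drop oid from the cache, e.g. after an external commit."""


class DictCache(Cache):
    """Trivial pure-Python implementation of the interface; an
    efficient C implementation could plug in behind the same API."""

    def __init__(self):
        self._data = {}

    def get(self, oid):
        return self._data.get(oid)

    def store(self, oid, obj):
        self._data[oid] = obj

    def invalidate(self, oid):
        self._data.pop(oid, None)
```

Because the contract lives at the Python level, the cache can be tested and replaced independently of the persistence and database machinery, which is the decoupling proposal 1 is after.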
2. Provide object size information to the cache for use in its
eviction decisions.
I wouldn't do much to this proposal until you've had a chance to read
mine.
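For proposal 2, a size-limited cache could combine per-object size estimates with the usual least-recently-used ordering. The sketch below is a minimal illustration, assuming `sys.getsizeof` as a crude stand-in for the "estimated memory size" the proposal describes; the real ZODB pickle cache would estimate size differently (e.g. from pickle length) and is not implemented this way.

```python
import sys
from collections import OrderedDict


class SizeLimitedCache:
    """Sketch of a cache bounded by estimated byte size rather than
    object count (illustrative only, not the actual ZODB cache)."""

    def __init__(self, byte_limit):
        self.byte_limit = byte_limit
        self.current_size = 0
        self._data = OrderedDict()  # oid -> (obj, estimated_size)

    def store(self, oid, obj):
        if oid in self._data:
            self.invalidate(oid)
        size = sys.getsizeof(obj)  # crude size estimate (assumption)
        self._data[oid] = (obj, size)
        self.current_size += size
        self._shrink()

    def get(self, oid):
        pair = self._data.get(oid)
        if pair is None:
            return None
        self._data.move_to_end(oid)  # mark as most recently used
        return pair[0]

    def invalidate(self, oid):
        obj, size = self._data.pop(oid)
        self.current_size -= size

    def _shrink(self):
        # Evict least-recently-used entries until the estimated total
        # size is within the configured byte limit (always keep one).
        while self.current_size > self.byte_limit and len(self._data) > 1:
            oid, (obj, size) = self._data.popitem(last=False)
            self.current_size -= size
```

The eviction loop is where the size information earns its keep: instead of counting objects, the cache keeps evicting in LRU order until the size estimate drops below the limit.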
Jim Fulton            mailto:[EMAIL PROTECTED]   Python Powered!
CTO                   (540) 361-1714             http://www.python.org
Zope Corporation      http://www.zope.com        http://www.zope.org
For more information about ZODB, see the ZODB Wiki:
ZODB-Dev mailing list - ZODB-Dev@zope.org