Currently, the ZODB cache can only be controlled via the maximum number
of objects. This makes configuration difficult, as the actual limiting
factor is the amount of available RAM, and it is very hard to
estimate the size of the objects in the cache.

I therefore propose implementing cache replacement policies
based on the estimated size of the cached objects.

I propose to use the pickle size as the size estimate.
The connection could store the pickle size in the object
as "_p_size" (and might call a hook function "_p_estimateSize",
if one is defined -- though I do not think we need this).
I am aware that the actual size of an object may differ
significantly from its pickle size, but usually they will
at least be of the same order of magnitude.
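To make the idea concrete, here is a minimal sketch of the proposed estimate. The function name "estimate_size" and the "Record" class are illustrative only; "_p_size" and "_p_estimateSize" are the attribute names proposed above, not existing ZODB API:

```python
import pickle

def estimate_size(obj):
    """Estimate an object's memory footprint via its pickle size.

    If the object defines the (proposed, optional) "_p_estimateSize"
    hook, defer to it; otherwise fall back to the pickle length.
    """
    hook = getattr(obj, "_p_estimateSize", None)
    if hook is not None:
        return hook()
    return len(pickle.dumps(obj, protocol=2))

class Record:
    def __init__(self, payload):
        self.payload = payload

r = Record("x" * 1000)
# The connection would store the estimate on the object itself:
r._p_size = estimate_size(r)
```

Since the connection produces the pickle anyway when loading or storing the object, the length comes essentially for free.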

As additional limiting parameters, I propose "MAX_OBJECT_SIZE"
and "MAX_TOTAL_SIZE".

Objects with size >= "MAX_OBJECT_SIZE" are invalidated at the next
possible time (at a transaction boundary), before other potential
invalidations are considered.
The purpose of this limit is to prevent a single large object (or a
few of them) from flushing large numbers of small objects. Such large
objects are kept in a special (doubly linked) list so they can be
located quickly.
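A sketch of that bookkeeping, assuming a hypothetical threshold value; Python's OrderedDict stands in for the doubly linked list, since it likewise gives O(1) insertion and removal plus ordered iteration:

```python
from collections import OrderedDict

MAX_OBJECT_SIZE = 64 * 1024  # hypothetical threshold, in bytes

class LargeObjectTracker:
    """Track oversized cache entries so they can be flushed first."""

    def __init__(self):
        self._large = OrderedDict()  # oid -> estimated size

    def note(self, oid, size):
        # Record the object if it crosses the threshold; otherwise
        # make sure it is no longer on the large-object list.
        if size >= MAX_OBJECT_SIZE:
            self._large[oid] = size
        else:
            self._large.pop(oid, None)

    def flush_candidates(self):
        # At a transaction boundary, these are invalidated before any
        # other eviction is considered.
        oids = list(self._large)
        self._large.clear()
        return oids
```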

After the large objects are flushed, the replacement policy works
as it does now, except that, besides the number of objects, their
total estimated size is accumulated. As soon as
either "MAX_OBJECT_NUMBER" or "MAX_TOTAL_SIZE" is reached,
the remaining objects are invalidated (as far as possible).
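The eviction pass might then look like the following sketch. The limit values are placeholders, the cache is modeled as an LRU-ordered mapping (least recently used first), and complications such as pinned or modified objects are deliberately ignored:

```python
from collections import OrderedDict

MAX_OBJECT_NUMBER = 1000      # hypothetical limits, for illustration
MAX_TOTAL_SIZE = 1024 * 1024  # bytes

def select_evictions(cache):
    """Return the oids to invalidate, oldest first, so that both the
    object-count and the total-estimated-size limits are respected.

    ``cache`` maps oid -> estimated size, ordered least recently
    used first.
    """
    total = sum(cache.values())
    count = len(cache)
    evict = []
    for oid, size in cache.items():
        if count <= MAX_OBJECT_NUMBER and total <= MAX_TOTAL_SIZE:
            break
        evict.append(oid)
        count -= 1
        total -= size
    return evict
```

Note that both conditions are checked together, so eviction stops as soon as the cache is back under both limits.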

ZODB-Dev mailing list