Izak Burger wrote at 2008-9-17 12:10 +0200:
>I'm sure this question has been asked before, but it drives me nuts so I
>figured I'll ask again. This is a problem that has been bugging me for
>ages. Why does zope memory use never decrease? Okay, I've seen it
>decrease maybe by a couple megabyte, but never by much. It seems the
>general way to run zope is to put in some kind of monitoring, and
>restart it when memory goes out of bounds. In general it always uses
>more and more RAM until the host starts paging to disk. This sort of
>baby-sitting just seems wrong to me.
This is standard behaviour for long-running processes on
a system without memory compaction:
it is almost a consequence of the "increasing entropy" theorem.
Memory tends to fragment over time.
Some memory requests cannot be satisfied by the existing fragments
(because no individual fragment is large enough and compaction is
not available), and therefore a new large block must be requested
from the operating system.
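A toy model may make this concrete. The sketch below is not how
CPython's allocator actually works; it just models a heap whose free
"holes" cannot be moved or merged (no compaction), so a large request
fails even though the total free space would be sufficient. All names
here are made up for illustration.

```python
# Toy model of a heap without compaction: after many small blocks are
# allocated and every second one is freed, the free space is scattered
# into fixed holes that cannot be coalesced.

def largest_hole(holes):
    """Size of the biggest contiguous free fragment."""
    return max(holes) if holes else 0

def can_satisfy(holes, request):
    """First-fit check: is any single fragment big enough?"""
    return any(h >= request for h in holes)

# 100 small 8-unit blocks allocated, then every second one freed:
# 50 holes of 8 units each remain between the surviving blocks.
holes = [8] * 50

print(sum(holes))              # 400 units free in total
print(largest_hole(holes))     # but the largest single hole is only 8
print(can_satisfy(holes, 64))  # False: no fragment can hold 64 units
```

Since no fragment can satisfy the 64-unit request, the allocator has to
grow the heap instead, and the process footprint ratchets upward.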
>It doesn't seem to make any difference if you set the cache-size to a
>smaller number of objects or use a different number of threads. Over
>time things always go from good to bad and then on to worse. I have only
>two theories: a memory leak, or an issue with garbage collection (python
More likely than a leak is the lack of compaction together with
weaknesses in *nix memory management: *nix essentially provides only
"mmap" and "brk". "mmap" is not adequate for large numbers of small
memory requests, and "brk" can only allocate/release at the heap
boundary.
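The "brk" limitation can be sketched as follows. This is only a
minimal model (the names are invented for illustration, not a real
API): the heap is a sequence of blocks, and memory can be returned to
the OS only by lowering the break, i.e. only a run of free blocks at
the very end of the heap is ever releasable.

```python
# Minimal model of "brk"-style heap management: one long-lived object
# near the break pins all the free space below it.

def releasable(blocks):
    """blocks: list of (size, live) pairs from heap start to break.
    Return how far the break could be lowered: only the trailing run
    of free blocks counts."""
    freed = 0
    for size, live in reversed(blocks):
        if live:
            break          # a live block pins everything below it
        freed += size
    return freed

# 200 units are free, but an 8-unit live block sits at the top:
heap = [(100, False), (100, False), (8, True)]
print(releasable(heap))  # 0: nothing can be given back

# Same contents with the live block at the bottom:
heap = [(8, True), (100, False), (100, False)]
print(releasable(heap))  # 200: trailing free space can be released
```

This is why a Zope process that once held many objects rarely shrinks:
a single long-lived allocation near the top of the heap keeps the
break where it is.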
ZODB-Dev mailing list - ZODB-Dev@zope.org