Hello,

That approach works for us on DBs over 100 GB.

Let's CC zodb-dev, which seems to be the better place to discuss this.

On 12/12/2012 09:39 AM, Jeroen Michiel wrote:

Thanks for the reply!

I already tried calling
transaction.savepoint()
every minute, but that didn't help: the memory usage dropped the first
time, but never again.

I changed the code to what you suggested, but it still doesn't seem to help.
Something must be wrong somewhere along the line, but I don't have a clue
where to begin looking.
Would using something like guppy (or heapy, or whatever it's called) reveal
something?
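
For what it's worth, a minimal heapy probe might look something like this
(a sketch, assuming guppy is installed; setrelheap() makes the next heap()
report only what was allocated after that point):

from guppy import hpy

hp = hpy()
hp.setrelheap()      # measure only allocations made after this point
# ... run one batch of the import loop here ...
print(hp.heap())     # memory usage by type, relative to setrelheap()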

Could it be that objects with circular references aren't being
garbage-collected?
The objects in my DB are quite complex, so something like that might
actually be happening.
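
One quick way to check for that (a sketch, not from the original thread):
on Python 2, a reference cycle that contains an object with a __del__
method is never collected; such objects are left in gc.garbage, which you
can inspect directly:

import gc

unreachable = gc.collect()   # force a full collection of all generations
print('objects found unreachable: %d' % unreachable)
print('uncollectable, left in gc.garbage: %d' % len(gc.garbage))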


Adam Groszer-3 wrote:

Well, it loads too many objects in a single transaction.
Doing this after some number of iterations (10k? It depends on your object
sizes) usually helps:

import transaction

def forceSavepoint(anyPersistentObject=None):
    # Flush pending changes so modified objects can be released from memory.
    transaction.savepoint(optimistic=True)

    if anyPersistentObject is not None:
        # Also garbage-collect the connection's pickle cache.
        conn = anyPersistentObject._p_jar
        conn.cacheGC()
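
For context, a hypothetical batch-import loop calling it might look like
this (new_objects, root, and the 'items' container are made-up names; the
10k batch size is the one suggested above and depends on your object sizes):

for i, obj in enumerate(new_objects):
    root['items'][obj.id] = obj    # hypothetical container (assumption)
    if i and i % 10000 == 0:
        forceSavepoint(obj)        # flush and prune the pickle cache
transaction.commit()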


--
Best regards,
 Adam GROSZER