On Thu, Apr 22, 2010 at 03:00:16PM +0200, Jürgen Herrmann wrote:
> Today I ran a check script which iterates over approx. 150000 objects
> and does some sanity-check calculations on them. During this loop
> I saw the Zope process use up to about 4.5 GB of memory. The database
> has ~3.5 million objects in it.
> 
> I set the ZODB cache size for the mount point in question to 10000
> objects. Obviously this limit is not honoured during one transaction:
> 
> connection                     active objects      total objects
> ...
> <Connection at 24c10410>       263104              462834
> ...
> 
> So, two questions:
> - would the byte-limited ZODB cache setting help here?
> - if not, how can I iterate over a big set of objects without
>   forcing them to stay in the cache for the whole transaction?
>   After all, I only need each object once during the iteration.

Use savepoints: committing a savepoint flushes modified objects out of
memory, which lets the connection's cache evict them mid-transaction:

   import transaction

   for n, obj in enumerate(your_objects, start=1):
       perform_sanity_check(obj)
       if n % 10000 == 0:
           transaction.savepoint()

Real-world example:
http://bazaar.launchpad.net/~schooltool-owners/schooltool/schooltool/annotate/head:/src/schooltool/generations/evolve26.py
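The pattern generalizes to "run a check, then flush every N objects". Here is a
minimal, self-contained sketch of that batching loop; the helper name
check_in_batches and the flush parameter are hypothetical, and in a real ZODB
run you would pass flush=transaction.savepoint (optionally followed by a call
to the connection's cacheGC()):

```python
def check_in_batches(objects, check, flush, batch_size=10000):
    """Run check() on every object, calling flush() after each
    batch_size items so a ZODB cache could evict processed objects."""
    for n, obj in enumerate(objects, start=1):
        check(obj)
        if n % batch_size == 0:
            flush()

# Hypothetical stand-ins, just to demonstrate the call pattern:
flush_calls = []
check_in_batches(range(25000),
                 check=lambda obj: None,
                 flush=lambda: flush_calls.append(1),
                 batch_size=10000)
# flush() fires after items 10000 and 20000, i.e. twice for 25000 items
```

Counting from 1 (start=1) avoids the off-by-one where `n % batch_size == 0`
would otherwise fire a pointless flush on the very first object.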

Marius Gedminas
-- 
http://pov.lt/ -- Zope 3 consulting and development


_______________________________________________
Zope-Dev maillist  -  Zope-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zope-dev
