On Thu, Jun 14, 2012 at 5:34 PM, Ralf Hauenschild
<ralf_hauensch...@gmx.de> wrote:
> Unfortunately, my aim in using ZODB was to store dictionaries of ~3 GB
> (in memory) to the hard drive after the data had been read from there.
> Of course, this only makes sense if retrieving the dictionaries from
> ZODB back into RAM is faster than parsing the original files via
> readline() again.
> This could not be accomplished:
> the transaction.commit() alone took over an hour for a dictionary that
> was initially parsed from a file in 5 minutes!
> So I guess using ZODB for large files is not recommended. But in the
> case of small files, my RAM is big enough anyway, so unfortunately ZODB
> has no use for me at the moment, or does it?

Do not store large items as single persistent objects, no. Your options:

* Store the data as a ZODB blob, if it is one opaque chunk; see the
first sketch below.

* Store it as a tree of persistent objects if parts need changing over
time. Future commits would be a lot smaller that way. I'd parse the
original large dictionary in a separate process, and perhaps chunk the
commits (so build up the structure over several transactions); see the
second sketch below.
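
For the blob route, a minimal sketch (untested; it assumes the parsed
dictionary pickles cleanly, and big_dict, Data.fs and blobs/ are
placeholders of mine, not anything from your setup):

    import pickle
    import transaction
    from ZODB import DB
    from ZODB.FileStorage import FileStorage
    from ZODB.blob import Blob

    # FileStorage with a blob directory alongside the data file.
    db = DB(FileStorage('Data.fs', blob_dir='blobs'))
    conn = db.open()
    root = conn.root()

    # Write the whole parsed dictionary into one blob, as an opaque chunk.
    blob = Blob()
    with blob.open('w') as f:
        pickle.dump(big_dict, f, pickle.HIGHEST_PROTOCOL)
    root['big_data'] = blob
    transaction.commit()

    # Later: read it back. Blob data lives in its own file on disk, so
    # the commit doesn't have to serialize one giant persistent record.
    with root['big_data'].open('r') as f:
        big_dict = pickle.load(f)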
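
And for the tree route, a sketch of chunked commits into an OOBTree,
reusing conn and root from the sketch above. parse_records() is a
hypothetical generator yielding (key, value) pairs from the original
file, and the chunk size of 100,000 is arbitrary:

    import transaction
    from BTrees.OOBTree import OOBTree

    root['records'] = tree = OOBTree()
    transaction.commit()

    for i, (key, value) in enumerate(parse_records('input.txt'), 1):
        tree[key] = value
        if i % 100000 == 0:
            transaction.commit()   # commit this chunk of 100,000 entries
            conn.cacheMinimize()   # drop committed objects from the RAM cache
    transaction.commit()           # commit the final partial chunk

Because a BTree is made of many small persistent buckets, a later change
to one entry only rewrites the affected buckets, not the whole mapping.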

-- 
Martijn Pieters
