After lots of googling and browsing the source, I can answer some of my  
own questions:

> - What's the difference between storing bigger objects as blobs and as
> plain large strings?

Plain large strings cannot be streamed, for instance. Products like Zope  
chop their file uploads into 64 kB chunks, which are then stored as  
individual objects in the ZODB.
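As an illustration of that chunking strategy, here is a simplified stand-in for what Zope does internally (the `chunk_stream` helper and the 150 kB payload are my own invention for this sketch, not actual Zope code):

```python
import io

CHUNK_SIZE = 64 * 1024  # the chunk size Zope uses for file uploads


def chunk_stream(f, chunk_size=CHUNK_SIZE):
    """Split an upload stream into fixed-size chunks; Zope stores
    each such chunk as a separate object in the ZODB."""
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        yield chunk


data = b"x" * (150 * 1024)  # a 150 kB "upload"
chunks = list(chunk_stream(io.BytesIO(data)))
# 150 kB splits into two full 64 kB chunks plus a 22 kB remainder
```

Because each chunk is its own object, a reader can load and send them one at a time instead of materializing the whole upload as a single string.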

> - Can I stream in parts of a blob/large string without having to read all
> of it?

I can get a real file handle to a blob, so it can be read in parts.  
Strings are always loaded as a whole.

> - Where can I find example code on zodb blobs? E.g. how do I save a blob,
> how do I read it back in?

The ZODB/tests directory features a few blob doctests which provide all  
the necessary code to get started. Having these online would be nice  
(especially since the doctests are already ReST-formatted).

Additionally I made some quick performance tests. I committed 1 kB sized  
objects and I can do about 40 transactions/s if one object is changed per  
transaction. For 100 kB objects it's also around 40 transactions/s. Only  
for object sizes bigger than that does the raw I/O throughput seem to  
start to matter.

Still don't know the answers to these:

- Does it make sense to use ZODB in this scenario? My data is not well  
suited to an RDBMS.
- Are there more complications to blobs other than a slightly different  
backup procedure?
- Is it OK to use cross-database references? Or is this better avoided at  
all costs?

And new questions:

- Does the _p_invalidate hooking as outlined at work reliably?
- Are there any performance penalties from using very large invalidation  
queues (e.g. 300,000 objects) to reduce client cache verification time?  
From what I've read it only seems to consume memory.
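If I read the ZEO docs right, the queue is sized per storage server in zeo.conf via `invalidation-queue-size`; a sketch (the address and path are placeholders):

```
<zeo>
  address localhost:8100
  # A larger queue lets reconnecting clients catch up by replaying
  # invalidations instead of doing a full cache verification.
  invalidation-queue-size 300000
</zeo>

<filestorage 1>
  path /path/to/Data.fs
</filestorage>
```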

For more information about ZODB, see the ZODB Wiki:

ZODB-Dev mailing list
