> After lots of googling and browsing the source I can answer some of my
> own questions:
>> - What's the difference between storing bigger objects as blobs and as
>> plain large strings?
> Plain large strings cannot be streamed, for instance. Products like Zope
> chop up their file uploads into 64kb chunks which are then stored as
> individual objects in the zodb.
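That pre-blob chunking strategy can be sketched in plain Python (a minimal illustration; `chunk_stream` is a hypothetical helper, not Zope code, though the 64kb size matches what Zope used):

```python
import io

CHUNK_SIZE = 64 * 1024  # 64kb chunks, as Zope used for file uploads

def chunk_stream(fileobj, chunk_size=CHUNK_SIZE):
    """Yield fixed-size chunks; each chunk would be stored as its own
    persistent object, so the upload never has to sit in the database
    (or in RAM) as one large string."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

# A 130kb upload becomes three objects: 64kb + 64kb + 2kb.
upload = io.BytesIO(b"x" * (130 * 1024))
chunks = list(chunk_stream(upload))
```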
That was the strategy before blobs existed. ZODB versions since 3.8 support
storing blob data as files on the filesystem.
>> - Can I stream in parts of a blob/large string without having to read all
>> of it?
> I can get a file handle to a blob. Strings are always read as a whole.
>> - Where can I find example code on zodb blobs? E.g. how do I save a blob,
>> how do I read it back in?
> The ZODB/tests directory features a few blob doctests which provide all
> the necessary code to get started. Having this on zodb.org would be nice
> (especially since the doctests are already ReST-formatted).
> Additionally I made some quick performance tests. I committed 1kb sized
> objects and I can do about 40 transactions/s if one object is changed per
> transaction. For 100kb objects it's also around 40 transactions/s. Only
> for object sizes bigger than that does the raw I/O throughput seem to
> matter.
40 tps sounds low: are you pushing blob content over the wire somehow?
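For reference, numbers like those can be gathered with a trivial harness (a generic sketch; `commit_one` stands in for whatever changes one object and commits the transaction):

```python
import time

def measure_tps(commit_one, seconds=2.0):
    """Run commit_one() in a loop for roughly `seconds` and
    return the achieved transactions per second."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        commit_one()
        count += 1
    return count / (time.perf_counter() - start)

# Shape of the call, with a do-nothing commit:
tps = measure_tps(lambda: None, seconds=0.1)
```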
> Still don't know the answers to these:
> - Does it make sense to use ZODB in this scenario? My data is not suited
> well for an RDBMS.
YMMV. I still default to using ZODB for anything at all, unless the
problem smells very strongly relational.
> - Are there more complications to blobs other than a slightly different
> backup procedure?
You need to think about how the blob data is shared between ZEO clients
(your appserver) and the ZEO storage server: opinions vary here, but I
would prefer to have the blobs living in a writable shared filesystem,
in order to avoid the necessity of fetching their data over ZEO on the
individual clients which were not the one "pushing" the blob into the
database.
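That setup can be expressed in the ZEO client's ZConfig section (server address and paths here are illustrative); `shared-blob-dir true` declares that the blob directory is the same filesystem the storage server writes to, so blob data need not be fetched over the ZEO protocol:

```
<zeoclient>
  server zeo.example.com:8100
  blob-dir /var/shared/blobs
  shared-blob-dir true
</zeoclient>
```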
> - Is it ok to use cross-database references? Or is this better avoided at
> all cost?
I would normally avoid them out of habit. They seem to work, though.
> And new questions:
> - Does the _p_invalidate hooking as outlined at
> http://email@example.com/msg00637.html work reliably?
Never tried it, nor felt the need.
> - Are there any performance penalties by using very large invalidation
> queues (i.e. 300,000 objects) to reduce client cache verification time?
At a minimum, RAM occupied by that queue might be better used elsewhere.
I just don't use persistent caches, and tend to reboot appservers in
rotation after the ZEO storage has been down for any significant period
(almost never happens).
> From what I've read it only seems to consume memory.
Note that the ZEO storage server makes copies of that queue to avoid
mutating it while it is handed to clients, so the real memory footprint
can be a multiple of the queue's nominal size.
Tres Seaver +1 540-429-0999 tsea...@palladion.com
Palladion Software "Excellence by Design" http://palladion.com
For more information about ZODB, see the ZODB Wiki:
ZODB-Dev mailing list - ZODB-Dev@zope.org