40 tps sounds low:  are you pushing blob content over the wire somehow?

I have seen the ZEO storage committing transactions at least an order of
magnitude faster than that (e.g., when processing incoming newswire
feeds).  I would guess that there could have been some other latencies
involved in your setup (e.g., that 0-100ms lag you mention below).

See my attached test script. It outputs 45-55 transactions/s for a 100-byte payload. Maybe there's a very fundamental flaw in the way the test is set up. Note that I am testing on a regular desktop machine (Windows 7, WoW64, 4GB RAM, 1TB hard disk capable of transfer rates >100MB/s).
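For what it's worth, commit rate on a single desktop disk is often bound by fsync latency rather than by ZEO itself, since every durable transaction commit ends in a sync to disk. This stdlib-only sketch (my own illustration, not the attached test.py, which isn't reproduced here) measures that floor:

```python
# Measure how many fsync-backed writes per second the disk allows.
# Each ZODB commit must sync to disk, so this approximates an upper
# bound on single-client transactions/s for a given machine.
import os
import tempfile
import time

def commits_per_second(n=200, payload=b"x" * 100):
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        for _ in range(n):
            os.write(fd, payload)
            os.fsync(fd)  # force the payload to the platter, like a commit
        elapsed = time.time() - start
        return n / elapsed
    finally:
        os.close(fd)
        os.remove(path)

print("~%.0f fsync-bound commits/s" % commits_per_second())
```

On a consumer 7200rpm disk this often lands in the tens per second, which would be consistent with the 45-55 tps figure regardless of payload size.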

The ZEO server and clients will be in different physical locations, so I'd
probably have to employ some shared filesystem which can deal with that.
Speaking of locations of server and clients: is it a problem if they are
not in the same location (typical latency 0-100ms), in the sense that ZEO
will perform very badly under these circumstances because it was not
designed for this?

That depends on the mix of reads and writes in your application.  I have
personally witnessed a case where the clients stayed up and serving
pages over a whole weekend in a clusterfsck where both the ZEO server
and the monitoring infrastructure went belly up.  This was for a large
corporate intranet, in case that helps:  the problem surfaced
mid-morning on Monday when the employee in charge of updating the lunch
menu for the week couldn't save the changes.

Haha, I hope they solved this critical problem in time!

In my case the clients might be down for a couple of days (typically 1 or
2 days), and they should not spend 30 minutes in cache verification each
time they reconnect. So if these 300k objects take up 1k each, then they
occupy 300 MB of RAM, which I am fine with.

If the client is disconnected for any period of time, it is far more
likely that just dumping the cache and starting over fresh will be a
win.  The 'invalidation_queue' is primarily to support clients which
remain up while the storage server is down or unreachable.

Yes, taking the verification time hit is my plan for now. However, dumping the whole client cache is something I'd like to avoid: the app I am working on will not run over a corporate intranet, so the bandwidth for transferring the blobs is limited and a full refetch can take considerable time. Maybe I am overestimating the whole client cache problem, though.
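In case it helps future readers: both the server's invalidation queue and a persistent on-disk client cache are configurable, which is how one usually avoids refetching everything after a restart. A sketch, with option names as I remember them from ZEO's ZConfig schema (the host name, paths, and sizes are made up; check them against your ZEO version's documentation):

```
# zeo.conf on the server (sketch)
<zeo>
  address 8100
  # How many invalidations the server keeps for quick reconnects;
  # clients that fall off the end of this queue do full verification.
  invalidation-queue-size 1000
</zeo>
<filestorage 1>
  path /var/zeo/Data.fs
</filestorage>

# client-side ZConfig (e.g. in zope.conf) -- sketch
<zodb_db main>
  <zeoclient>
    server zeoserver.example.com:8100
    # Naming the client enables a persistent on-disk cache that
    # survives restarts, so cached objects need not be refetched.
    client mydesktop
    cache-size 300MB
    var /var/zeo/cache
  </zeoclient>
</zodb_db>
```

A persistent cache still has to be verified on reconnect, so it trades bandwidth for verification time; whether that wins depends on the 300 MB figure above versus the link speed.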

Thanks again for your valuable advice,
-Matthias

Attachment: test.py
Description: Binary data

_______________________________________________
For more information about ZODB, see the ZODB Wiki:
http://www.zope.org/Wikis/ZODB/

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zodb-dev
