We have an application in which multimegabyte strings, each generated from the contents of several thousand small objects in ZODB, are downloaded from Zope as CSV files. A given CSV file is typically downloaded once or twice a day at most.

The major issue is that it takes a long time for the download to start. Would it be possible to use a stream iterator (introduced in Zope 2.7.1), or should we simply write the data out in chunks with RESPONSE.write()? What are the requirements for using chunked RESPONSE.write() - do specific HTTP headers need to be set, or does Zope take care of it all?
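For what it's worth, here is a minimal sketch of the chunked approach. The generator below is a stand-in for walking the ZODB objects and encoding them as CSV; `iter_csv_chunks`, its `rows` argument, and the chunk size are hypothetical names, not anything from Zope itself, and how Zope frames the transfer once you start writing is exactly the open question above.

```python
import csv
import io

def iter_csv_chunks(rows, chunk_size=65536):
    """Yield CSV-encoded text in chunks of roughly chunk_size characters.

    `rows` stands in for the sequence of values pulled from the several
    thousand small ZODB objects; each element is one CSV row.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(row)
        if buf.tell() >= chunk_size:
            yield buf.getvalue()
            # Reset the buffer for the next chunk.
            buf.seek(0)
            buf.truncate()
    if buf.tell():
        yield buf.getvalue()

# Inside a Zope method the chunks would be pushed to the response,
# something like (assuming RESPONSE is the usual response object):
#
#     RESPONSE.setHeader('Content-Type', 'text/csv')
#     for chunk in iter_csv_chunks(rows):
#         RESPONSE.write(chunk)
```

Note that the response headers must be set before the first write; whether anything beyond Content-Type is required is the question I'm asking.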


Any advice or specific pointers would be appreciated. I've googled all I could find about stream iterators, serving blobs, etc., but all the information seems to be focused on serving large filesystem-based data. I'm not sure which parts of it apply to a use case that involves reading data from ZODB, or whether stream iterators make sense for the use case at all.

I guess we could generate a temporary file and then serve that using a stream iterator. I understand parts of the Zope machinery could be released sooner that way, but I wonder whether the benefits are worth the extra step.
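The temp-file route I'm imagining looks roughly like the sketch below. In real code one would presumably return `ZPublisher.Iterators.filestream_iterator(path)`; the class here is only an illustration of the protocol a stream iterator has to satisfy (a `next()` method yielding successive chunks and raising StopIteration at EOF, plus `__len__` so Content-Length can be set). The helper name `csv_to_stream_iterator` is made up, and cleanup of the temp file is deliberately left unsolved.

```python
import os
import tempfile

class TempfileStreamIterator:
    """Illustrative stand-in for ZPublisher's filestream_iterator:
    next() returns successive byte chunks, __len__ the total size."""

    def __init__(self, path, chunk_size=65536):
        self._file = open(path, 'rb')
        self._size = os.path.getsize(path)
        self._chunk_size = chunk_size

    def next(self):
        data = self._file.read(self._chunk_size)
        if not data:
            self._file.close()
            raise StopIteration
        return data

    def __len__(self):
        # Lets the publisher set Content-Length up front.
        return self._size

def csv_to_stream_iterator(csv_bytes):
    """Write the already-generated CSV data to a temp file and wrap it.

    Hypothetical helper: the temp file outlives this function, so a
    real implementation must arrange for its eventual removal.
    """
    fd, path = tempfile.mkstemp(suffix='.csv')
    with os.fdopen(fd, 'wb') as f:
        f.write(csv_bytes)
    return TempfileStreamIterator(path)
```

The appeal, as I understand it, is that once the method returns the iterator, serving the bytes no longer ties up a publisher thread - but that's part of what I'm hoping someone can confirm.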

Thanks!

 Petri

_______________________________________________
Zope maillist  -  Zope@zope.org
http://mail.zope.org/mailman/listinfo/zope
**   No cross posts or HTML encoding!  **
(Related lists - http://mail.zope.org/mailman/listinfo/zope-announce
http://mail.zope.org/mailman/listinfo/zope-dev )
