Petri Savolainen <[EMAIL PROTECTED]> wrote:
> We have an application whereby multimegabyte strings, each generated
> from the contents of several thousands of small objects in ZODB, are
> downloaded from zope as CSV files. A particular CSV file is typically
> downloaded once or twice a day at most.
> The major issue is that it takes a long time for the download to start.
> Would it be possible to use a stream iterator (introduced in zope 2.7.1)
> or should we simply do chunks of REQUEST.write() ? What are the
> requirements for using chunked REQUEST.write - do some specific HTTP
> headers need to be set or does zope take care of it all?
You may need to set the Content-Length header yourself if you want one.
Otherwise REQUEST.write is pretty simple to use.
The only thing is that it ties up a Zope thread during the output.
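For example, something along these lines (a rough sketch, written for modern Python; `rows` stands in for whatever you pull out of ZODB, and `response` would be REQUEST.RESPONSE in Zope -- the buffering helper and its names are mine, not a Zope API):

```python
import csv
import io

def write_csv_chunked(rows, response, chunk_rows=1000):
    """Stream CSV via repeated write() calls on a Zope-like response.

    `response` only needs setHeader() and write(). Rows are buffered
    and flushed every `chunk_rows` rows so the whole multi-megabyte
    string never has to be built in memory, and the client starts
    receiving data as soon as the first chunk is written.
    """
    response.setHeader('Content-Type', 'text/csv')
    buf = io.StringIO()
    writer = csv.writer(buf)
    for i, row in enumerate(rows, 1):
        writer.writerow(row)
        if i % chunk_rows == 0:
            response.write(buf.getvalue())
            buf.seek(0)
            buf.truncate()
    # Flush whatever is left in the buffer.
    response.write(buf.getvalue())
```

The first write() is what makes the download start early; note that a Zope worker thread is still occupied until the loop finishes.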
> Any advice or specific pointers would be appreciated. I've googled all I
> could find about the stream iterators, serving blobs etc. but all the
> info seems to be focused on serving large filesystem-based data. I am
> not sure what parts of that info apply to a use case that involves
> reading data from ZODB, or if stream iterators make sense for the use
> case at all.
> I guess we could generate a temporary file and then serve that using a
> stream iterator.
It would still take time to generate the temporary file, though.
> I understand some parts of the zope machinery could
> perhaps be released faster that way, but I wonder if the benefits are
> worth the extra step.
If you do it properly you end up handing the file off to the asyncore
machinery, which frees your Zope threads to do more interesting things
than serving bytes.
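A minimal sketch of the idea behind a stream iterator (written for modern Python; in real Zope 2.7.1+ you would return a `ZPublisher.Iterators.filestream_iterator` over your temporary file rather than roll your own, and the class below is only an illustration of the protocol):

```python
import os

class FileStreamIterator:
    """Illustrative stream iterator: yields fixed-size chunks from a
    file so the publisher's async machinery can send them to the
    client without occupying a worker thread for the whole transfer.
    """
    def __init__(self, path, streamsize=1 << 16):
        self.file = open(path, 'rb')
        self.streamsize = streamsize
        self.size = os.path.getsize(path)

    def __iter__(self):
        return self

    def __next__(self):
        data = self.file.read(self.streamsize)
        if not data:
            self.file.close()
            raise StopIteration
        return data

    def __len__(self):
        # Lets the publisher set Content-Length before streaming.
        return self.size
```

So the pattern would be: generate the CSV into a temporary file (a worker thread is busy only for that step), then return the iterator and let asyncore drain it.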
Florent Guillaume, Nuxeo (Paris, France) CTO, Director of R&D
+33 1 40 33 71 59 http://nuxeo.com [EMAIL PROTECTED]
Zope maillist - Zope@zope.org