Chris McDonough wrote:
> I have put a new proposal up at
> which deals with serving large "static" content objects faster from Zope
> 2.  This is based on some work that Paul Winkler and I did at the PyCon
> Zope 2 sprint.  Comments appreciated.  I think this would also be very
> useful for Zope 3; the concept would just need to be adapted to the new
> server architecture in use there instead of ZServer.

This sounds useful for serving content from the filesystem.

However, I'm a little concerned about this, because producers must not read from the object database. After the application finishes interpreting a request and returns a producer to be processed asynchronously, it closes the ZODB connection. If the producer then tries to load a ZODB object, ZODB will do one of two things: if the connection is still closed, it will raise an error; if the connection happens to have been re-opened by another thread, it may allow the load, but it has a chance of going insane if that other thread is loading or storing something at the same time.
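To make the hazard concrete, here is a rough sketch (names and the read() method on the persistent object are made up here, not taken from the proposal) of the pattern that would get into trouble:

    class LazyZODBProducer:
        # Medusa-style producer: the async loop calls more() repeatedly;
        # an empty string means the producer is exhausted.
        def __init__(self, persistent_file, chunk_size=1 << 16):
            self.file = persistent_file    # a Persistent object from ZODB
            self.pos = 0
            self.chunk_size = chunk_size

        def more(self):
            # By the time the asyncore main thread calls this, the
            # application thread has already closed the connection that
            # loaded self.file, so this load can raise an error or step on
            # a connection another thread has since re-opened.
            data = self.file.read(self.pos, self.chunk_size)
            self.pos += len(data)
            return data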

To work around this, you could hand the producer an object that contains the chunks already loaded from the database. If you do that, though, I think you've lost all the benefit of using a producer.
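That workaround would look roughly like this (a minimal sketch, assuming the application thread has already split the object into plain string chunks before its connection closes):

    class PreloadedChunkProducer:
        # Holds only plain strings, so it never touches ZODB after the
        # application thread's connection is closed.
        def __init__(self, chunks):
            self.chunks = list(chunks)

        def more(self):
            if self.chunks:
                return self.chunks.pop(0)
            return ''    # empty string tells the async loop we're done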

Another workaround might be to open a special ZODB connection just for the main thread. The producer would load objects from that connection instead of the application thread's connection. Hey, I think that would solve another problem at the same time: multiple requests for the same large object would share the same ZODB cache. Do you see any holes in that plan? I'm thinking it would be a win for both speed and memory.
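In rough outline it might look like this (MainThreadProducer and the read() method on the loaded object are invented for illustration; db.open() and Connection.get(oid) are the standard ZODB calls, and db is assumed to be the already-opened ZODB.DB instance):

    # One extra connection, opened at startup and used only by the
    # asyncore main thread.
    async_connection = db.open()

    class MainThreadProducer:
        def __init__(self, oid, chunk_size=1 << 16):
            self.oid = oid                 # remember the oid, not the object
            self.pos = 0
            self.chunk_size = chunk_size

        def more(self):
            # Resolve the object through the main thread's own connection,
            # so we never touch an application thread's closed connection;
            # repeated requests for the same object also hit the same cache.
            obj = async_connection.get(self.oid)
            data = obj.read(self.pos, self.chunk_size)
            self.pos += len(data)
            return data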

