as a lower bound. (Note that the cache is still allowed to grow indefinitely within the scope of a request, however.)
This is the single biggest cause of Zope becoming unresponsive for me:
people doing silly things that drag hordes of objects into memory in a single request.
Now that would be an interesting feature: an upper bound on the number of objects a request is allowed to touch, period. If a request requires more than that, it's rolled back.
Hmm, not so useful, as people would just keep retrying the request.
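For illustration, the dismissed idea above could be sketched roughly like this: a request-scoped counter that aborts once too many distinct objects have been touched. All names here (`RequestGuard`, `touch`, `TooManyObjects`) are invented for this sketch and are not part of Zope or the ZODB.

```python
class TooManyObjects(Exception):
    """Hypothetical signal that a request exceeded its object budget."""
    pass

class RequestGuard:
    """Counts distinct objects touched during one request and aborts
    past a hard limit. Purely illustrative, not real Zope machinery."""

    def __init__(self, max_touches):
        self.max_touches = max_touches
        self.touched = set()

    def touch(self, oid):
        self.touched.add(oid)
        if len(self.touched) > self.max_touches:
            # In Zope this is where the transaction would be rolled back.
            raise TooManyObjects(
                "request touched %d objects (limit %d)"
                % (len(self.touched), self.max_touches))

guard = RequestGuard(max_touches=100)
aborted = False
try:
    for oid in range(200):
        guard.touch(oid)  # would be called on each object load
except TooManyObjects:
    aborted = True
print(aborted)
```

As noted, this only turns runaway memory use into a failed (and probably retried) request, which is why the eviction approach below looks more attractive.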
What I'd like to see is the cache checking its size on object load.
If the object load would cause the cache to go above its maximum number, then boot an object out of the cache to make room for the new one.
So, you'd get slowness because of cache thrashing on THAT PARTICULAR REQUEST, but at least you'd be able to control the amount of memory Zope actually uses, and other requests would stand a chance of being processed normally.
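A minimal sketch of that eviction-on-load behaviour, assuming a plain LRU policy over a dict-backed cache; the class and method names are hypothetical and not the ZODB cache's actual internals:

```python
from collections import OrderedDict

class BoundedCache:
    """Illustrative cache that boots the least-recently-used object
    out whenever a new load would push it past max_size."""

    def __init__(self, max_size):
        self.max_size = max_size
        self._data = OrderedDict()  # oid -> object, oldest first

    def load(self, oid, loader):
        if oid in self._data:
            self._data.move_to_end(oid)  # mark as recently used
            return self._data[oid]
        # Evict before loading, so the cache never exceeds max_size.
        if len(self._data) >= self.max_size:
            self._data.popitem(last=False)
        obj = loader(oid)
        self._data[oid] = obj
        return obj

cache = BoundedCache(max_size=3)
for i in range(5):
    cache.load(i, lambda oid: "obj-%d" % oid)
print(len(cache._data))  # stays at 3, however many objects are loaded
```

A greedy request still thrashes (every load past the limit evicts and may have to re-load later), but the process-wide memory ceiling holds, which is the point being made above.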
<snip very enlightening explanation of why a Python process using lots of memory, even for a short period of time, is a 'bad thing'>
That was pretty informative, but does give even more of a good reason why we really need to be able to put a maximum upper bound on the amount of memory Zope can use at any one point...
Zope-Dev maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **
(Related lists - http://mail.zope.org/mailman/listinfo/zope-announce