Jim Fulton wrote:
Chris Withers wrote:
Jim Fulton wrote:
- I wonder if an argument could be made that we shouldn't
  implicitly deactivate an object that has been accessed in a
  transaction while the transaction is still running.

Would this prevent ZODB from ever promising not to use more than a certain amount of memory?

Yes, and non-necessarily.  :)

Don't follow what that means...

The only way that ZODB could keep such a promise would be to
disallow loading new objects, generating errors under some circumstances.

...or to bump objects out of the cache mid-way through a transaction, generating errors if there are no non-modified objects that can be unloaded, hence my interest in the _v_ and _p_sticky discussion...
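The eviction policy being described can be sketched in a few lines of plain Python. This is a toy illustration, not ZODB's actual cache code; the class name `ToyCache` and the `modified`/`sticky` flags are hypothetical stand-ins for `_p_changed` and `_p_sticky` state:

```python
class NothingToEvict(Exception):
    """Raised when every cached object is modified or sticky."""

class ToyCache:
    """Illustrative cache that evicts only clean, non-sticky objects."""

    def __init__(self, limit):
        self.limit = limit
        self.objects = {}  # oid -> (obj, modified, sticky)

    def add(self, oid, obj, modified=False, sticky=False):
        if len(self.objects) >= self.limit:
            self._evict_one()
        self.objects[oid] = (obj, modified, sticky)

    def _evict_one(self):
        # Unload the first clean, non-sticky object we find.
        for oid, (obj, modified, sticky) in self.objects.items():
            if not modified and not sticky:
                del self.objects[oid]  # "deactivate" the clean object
                return
        # Mid-transaction, nothing can safely be unloaded:
        raise NothingToEvict("all cached objects are modified or sticky")
```

The interesting case is the last line: once every cached object is modified (or sticky), staying under the limit is only possible by raising an error, which is exactly the trade-off being discussed.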

The biggest ZODB-related performance problems I've seen are when a scripter writes code that drags way more objects into memory than any sane script should. This creates a HUGE Python process which never releases the memory back to the OS (I believe that may be fixed in Python 2.5?), which causes all kinds of performance problems...
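One way such a script can avoid dragging everything into memory at once is to work in bounded batches, dropping references between batches so objects can be collected. A minimal pure-Python sketch (the batching helper is my own, not a ZODB API):

```python
from itertools import islice

def in_batches(iterable, size):
    """Yield lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Only one batch of objects is referenced at a time; in a ZODB
# script you might also garbage-collect the connection's object
# cache between batches (e.g. via the connection's cache GC hook).
batch_sizes = []
for batch in in_batches(range(10), size=4):
    batch_sizes.append(len(batch))
```

The point is simply that memory use is bounded by the batch size rather than by the total number of objects the script touches.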

It should be possible to prevent scripters shooting themselves in the foot!

Sorry, that's impossible.

Well, okay, but the behaviour in this area is pretty crummy right now. We have an object cache that knows nothing about object size, and which is happy to balloon until it's using all the memory on a machine - sometimes to the point where all you can do is ask the machine's hosters to hit the big red button :-(
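For what it's worth, a size-aware cache of the kind being wished for here can be sketched quite compactly: track a total byte budget and evict least-recently-used entries until the cache fits. A toy illustration only (sizes are supplied by the caller; this is not a proposal for ZODB's actual cache):

```python
from collections import OrderedDict

class SizeBoundedCache:
    """LRU cache bounded by total size rather than object count."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.total = 0
        self.data = OrderedDict()  # oid -> (obj, size), oldest first

    def put(self, oid, obj, size):
        if oid in self.data:
            self.total -= self.data.pop(oid)[1]
        self.data[oid] = (obj, size)
        self.total += size
        # Evict least-recently-used entries until we fit the budget
        # (always keep at least the entry just added).
        while self.total > self.max_bytes and len(self.data) > 1:
            _, (_, evicted_size) = self.data.popitem(last=False)
            self.total -= evicted_size

    def get(self, oid):
        obj, _ = self.data[oid]
        self.data.move_to_end(oid)  # mark as recently used
        return obj
```

Nothing clever, but it shows the cache respecting a memory budget instead of an object count - which is the behaviour missing today.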

I'm sure things _can_ be better than this, I just wish I could help...


Simplistix - Content Management, Zope & Python Consulting
           - http://www.simplistix.co.uk
ZODB-Dev mailing list  -  ZODB-Dev@zope.org
