On Thu, 2003-08-28 at 17:13, Shane Hathaway wrote:
> Leonardo Rochael Almeida wrote:
> > Let's assume we're not hitting a 2.6 bug; we could be hitting a 2.6
> > feature :-) I've read Casey's article about optimizing ZCatalog, and I
> > know there has been a lot of ZCatalog optimization work. Have there been
> > any changes in 2.6 (in ZCatalog, the thread cache, or anywhere else)
> > that trade memory for speed?
> Not that I know of.  To the contrary, I think--we sometimes can't do an 
> optimization because it would consume extra memory.

Well, as long as I can put an upper bound on the consumed memory, and
the tradeoff is reasonable, I'd be glad to trade memory for speed.

> > You mean that one ZCatalog object is in the cache, and (indirectly)
> > attached to it are thousands of DateTimes? Then we have a huge problem.
> Why?  Although the catalog manages those DateTime objects, the DateTimes 
> are in many ZODB records.

The problem is that there should be a fairly direct relation between the
number of objects allowed in the cache and the refcounts. As you say,
the DateTimes live in ZODB records, so if I see a DateTime refcount of,
say, 120k, I should be seeing at least that many objects in the caches.
But I was seeing under 3k objects--unless something else acts as a
refcount multiplier, e.g. something keeping an extra reference to the
class for every instance, besides the reference the instance itself
holds.

> > I was counting on the "objects in cache" parameter to put an upper bound
> > in Zope memory consumption, even if this upper bound was a little fuzzy.
> In the past, the cache size parameter was very fuzzy.  It was not an 
> upper bound.  Now, it is not only an upper bound, it also tends to act 
> as a lower bound.  (Note that the cache is still allowed to grow 
> indefinitely within the scope of a request, however.)
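To make sure I understand the new behavior, here is a toy model of a
cache that enforces its limit only at request boundaries (all names are
invented; this is not ZODB's actual Connection cache implementation):

```python
from collections import OrderedDict

class ToyObjectCache:
    """Toy model: the size limit is enforced only at request
    boundaries, so the cache may grow freely within a request."""

    def __init__(self, target_size):
        self.target_size = target_size
        self._data = OrderedDict()  # least-recently-used entries first

    def store(self, oid, obj):
        # Within a request the cache may grow past target_size.
        self._data[oid] = obj
        self._data.move_to_end(oid)

    def access(self, oid):
        obj = self._data[oid]
        self._data.move_to_end(oid)  # mark as recently used
        return obj

    def end_of_request(self):
        # Evict least-recently-used entries down to the target size.
        while len(self._data) > self.target_size:
            self._data.popitem(last=False)

cache = ToyObjectCache(target_size=1000)
for oid in range(2500):
    cache.store(oid, object())
print(len(cache._data))   # 2500 -- grew past the limit mid-"request"
cache.end_of_request()
print(len(cache._data))   # 1000 -- trimmed back at the boundary
```

If that model is roughly right, the limit acting as both an upper and a
lower bound between requests makes sense.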

> > [..] but for now we need to deal with the emergency that is the fact that
> > migrating to 2.6.1 put us in the very hot spot of not having a way to
> > put an upper bound to Zope memory consumption.
> I would characterize it the other way around; there was no way to 
> establish a hard boundary until now.  I suggest you reduce the ZODB 
> cache size to 2000 (or less) and see if the problem goes away.

The first time you suggested this I didn't even bother, because we were
watching our memory go down the drain even while the caches were still
at 1600 objects each (we had our limit at 5000).

But as we ran out of options, and seeing that flushing the cache did get
rid of the DateTimes, we decided to test keeping Zope thrashing its
cache, even at the expense of performance, to see if it could shed the
DateTimes.

So we set the cache limit to 1000 objects and, to our amazement, Zope
sat comfortably at 150MB RSS, and performance was very good. Some heavy
ZCatalog pages still send Zope for a walk for a few seconds (stalling
those and all other requests), but the site becomes responsive again in
under 20 seconds. Not only that, but the DateTime refcount isn't even
among the top 5 any more (the BTrees are winning there now).

So we now have breathing space to do the great ZCatalog reform.

> > Anyone know off the top of their head how do I get to the object caches?
> app._p_jar._cache will get you started.  It is dictionary-like.

Thanks. As soon as we get more RAM in the machine, I'll see if I can
switch the cache back to a high number and find out where all those
DateTimes are hanging.
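Something along these lines should do for the tally, assuming the cache
supports the mapping protocol as you say (demonstrated here on a plain
dict standing in for `app._p_jar._cache`; the helper name is mine):

```python
from collections import Counter

def tally_cache_classes(cache, top=5):
    """Count cached objects by class name; `cache` is any dict-like
    mapping of oid -> object. Since app._p_jar._cache is described
    as dictionary-like, something similar should work there too."""
    counts = Counter(type(obj).__name__ for obj in cache.values())
    return counts.most_common(top)

# Plain dict standing in for the real cache: 5 lists, 5 tuples.
fake_cache = {i: ("odd", i) if i % 2 else [i] for i in range(10)}
print(tally_cache_classes(fake_cache))
```

That should show at a glance which classes dominate the cache, DateTime
included.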

Cheers, Leo

Ideas don't stay in some minds very long because they don't like
solitary confinement.

Zope-Dev maillist  -  [EMAIL PROTECTED]
**  No cross posts or HTML encoding!  **
(Related lists - 
 http://mail.zope.org/mailman/listinfo/zope )