Gerhard Schmidt wrote at 2007-5-21 08:42 +0200:
> ...
>We have experienced some problems with ZEO when requests take too long.
>Sometimes we have problems like this:
>2007-05-21T03:36:22 INFO ZEO.StorageServer (56016/ 
>Transaction blocked waiting for storage. Clients waiting: 1.
>2007-05-21T03:36:22 INFO ZEO.StorageServer (56016/ 
>Transaction blocked waiting for storage. Clients waiting: 2.
> ...

What you see here is information about commit lock contention:

  Between the "vote" (start of the second phase) and the
  "finish" (end of the third phase) of the commit protocol,
  a "FileStorage" must hold a lock to prevent modifications
  of the voted state.
  When another connection tries to commit during this time,
  ZEO reports "Transaction blocked ...".
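The locked window described above can be sketched with a toy storage (my own simplified illustration, not ZEO's actual implementation -- the class and attribute names here are made up, only the "tpc_vote"/"tpc_finish" method names mirror the real storage API):

```python
import threading

# Toy sketch of the commit-lock window: the storage takes an
# exclusive lock at "vote" and releases it at "finish"; any other
# committer arriving in between must wait (and a real ZEO server
# would log "Transaction blocked ..." at that point).

class ToyStorage:
    def __init__(self):
        self._commit_lock = threading.Lock()
        self.waiting = 0

    def tpc_vote(self):
        # start of the locked window
        if not self._commit_lock.acquire(False):
            self.waiting += 1          # ZEO logs "Clients waiting: N" here
            self._commit_lock.acquire()
            self.waiting -= 1

    def tpc_finish(self):
        # end of the locked window -- modifications allowed again
        self._commit_lock.release()

storage = ToyStorage()
storage.tpc_vote()                                  # first committer holds the lock
blocked = not storage._commit_lock.acquire(False)   # a second committer would block
storage.tpc_finish()
print(blocked)   # True: a concurrent commit had to wait
```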

I had to enhance ZEO to report which transactions are blocking
in order to quickly resolve problems.

> ...
>This incident wasn't a problem because it was resolved within one second.
>But sometimes situations like this take up to 30 seconds to resolve.
>The site is completely unresponsive during this time and takes up to 10 minutes
>to resume normal operation (response times < 1 sec per dynamic page).

That's strange. I have never seen this (though commit lock contention
is not unusual on a site busy writing, as ours is).

It should not have any lasting effects....

> ....
>It seems there is a possibility of a deadlock when requests take too much
>time to process.

I have never observed something like this -- but a colleague once
committed a monster transaction, which kept committing for ages....
That prompted me to add the more detailed information to the
"Transaction blocked" log entry.

>But the main problem we have is the memory growth of the Zope server
>processes. They grow to 500 MB of memory before serving the first request.

That's strange, too.

You may take a look at my "analyseObjects".

It was developed to help in tracking down memory leaks, but
it can be useful for analysing unreasonably high memory use after
startup as well.

It has several drawbacks, however:

  *  It knows only about objects registered with the garbage collector.

     A Python debug build is necessary to learn about all
     Python objects.

  *  It does not know how much memory the objects use.

     An integration with "PySizer" might improve on this.
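A much simpler census in the same spirit can be done with the standard library alone (this is not "analyseObjects" itself, just my sketch of the idea). It shares the first drawback above: it only sees objects the garbage collector tracks, and it knows nothing about their sizes.

```python
import gc
from collections import Counter

# Count the gc-tracked objects by type name. Untracked objects
# (e.g. most ints and strings) are invisible here, as noted above.
counts = Counter(type(obj).__name__ for obj in gc.get_objects())

# The few most numerous types are usually the interesting ones
# when hunting a leak or an unexpectedly large startup footprint.
for name, n in counts.most_common(5):
    print(name, n)
```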

>and while running they grow constantly until they hit the limit of the
>physical memory. When they do, they slow down very dramatically
>(response times 800% higher than usual). We have done some debugging and it
>seems that the Python garbage collection kicks in and kills the whole

The garbage collector by itself does not know about the physical
memory limit -- but when it kicks in while large parts of your objects
have been swapped out, its traversal (to determine unreachable objects)
will page them back in, and this can drastically slow down the process.
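One common mitigation -- my suggestion, not something prescribed in this thread -- is to make the cyclic collector run far less often, or to disable its automatic runs and collect explicitly at a moment of your choosing:

```python
import gc

# Raise the collection thresholds so generation-0 collections
# (and hence the cascading older-generation traversals) happen
# much less often. The CPython default is typically (700, 10, 10).
gc.set_threshold(100_000, 50, 50)

# Alternatively, disable automatic collection entirely and run an
# explicit full collection at a quiet moment (e.g. after startup).
gc.disable()
unreachable = gc.collect()   # number of unreachable objects found
gc.enable()
```

Whether this helps depends on the workload; it trades GC pauses for potentially more uncollected cycles between explicit runs.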

Zope maillist