The majority of the Purdue University Engineering web presence is
provided via a cluster running ZEO. We offer hosting for every school,
department, faculty, staff, and student in the College, so we have a
large number of content maintainers/developers on our system. We are
running into problems with users writing bad code that spins, or
uploading huge files, either of which can tie up the database for long
periods of time.

We end up in a situation where something spins one client and a user
repeatedly resubmits the request until all of our ZEO clients are
spinning. The clients are eventually killed, usually manually, and
quickly come back up. We drop the ZEO cache on restart to improve
startup speed, so when the clients do come back up we have 100% cache
misses and the ZEO server gets pounded, resulting in slow performance
until the client caches repopulate.

Occasionally we can track down the offending URL and correct the
problem, sometimes we cannot.
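What would help with tracking these down is a per-thread stack dump.
Something like the sketch below is what I have in mind (stdlib only;
note that sys._current_frames appeared in a Python newer than the one
Zope 2.6 runs on, so this is illustrative rather than something we can
drop in today):

```python
import sys
import traceback


def dump_threads():
    """Return a formatted stack trace for every running thread.

    Wired to a signal handler on a ZEO client, this would show which
    URL/code a spinning thread is stuck in, without killing the whole
    client.  sys._current_frames is CPython-specific.
    """
    lines = []
    for thread_id, frame in sys._current_frames().items():
        lines.append("Thread %s:" % thread_id)
        lines.extend(traceback.format_stack(frame))
    return "\n".join(lines)
```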

Perhaps these issues will be addressed in future versions of Zope; we
are currently running Zope 2.6.4.

What I would like is some sort of timeout for requests; however, I do
not want to punish users with slow connections. Perhaps there is a way
to kill off a specific request that is consuming excessive resources
without killing the entire client.
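To make the idea concrete, the kind of watchdog I'm imagining would
flag (not kill) requests that exceed a server-side processing budget,
so slow client connections are not penalized. A rough sketch, with
hypothetical start/finish hooks standing in for wherever Zope begins
and ends publishing a request:

```python
import threading
import time


class RequestWatchdog:
    """Flag requests whose server-side processing exceeds a budget.

    Illustrative only: 'label' is just a request identifier (e.g. the
    URL); in Zope one would call start()/finish() from publication
    hooks.  Wall-clock time is measured on the server, so users on
    slow links are not punished for transfer time.
    """

    def __init__(self, budget_seconds):
        self.budget = budget_seconds
        self.active = {}          # thread name -> (label, start time)
        self.lock = threading.Lock()

    def start(self, label):
        with self.lock:
            self.active[threading.current_thread().name] = (
                label, time.monotonic())

    def finish(self):
        with self.lock:
            self.active.pop(threading.current_thread().name, None)

    def overdue(self):
        """Return labels of requests currently over budget."""
        now = time.monotonic()
        with self.lock:
            return [label for label, started in self.active.values()
                    if now - started > self.budget]
```

A monitor thread could poll overdue() and log the offending labels, so
we at least know which URL to investigate.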

Below is some information on our setup:

1 ZEO server (Solaris)
  - 82 GB Data.fs
  - transaction timeout of 120 seconds

2 load-balanced ZEO clients (Linux)
  - 2 GB ZEO cache
  - database cache of 30000 objects
  - 4 threads

2 failover Apaches (Linux)
  - using pydirector for load balancing

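For completeness, our client storage is wired up roughly like the
custom_zodb.py sketch below (hostname, port, and paths are
placeholders; parameter names are from ZODB 3.x's ClientStorage as I
understand them, so treat this as a sketch rather than our exact file):

```python
# custom_zodb.py (sketch) -- placeholders throughout
from ZEO.ClientStorage import ClientStorage
from ZODB import DB

storage = ClientStorage(
    ('zeo-server.example.edu', 8100),
    client='zeoclient1',           # names a persistent on-disk cache
    var='/var/zope/cache',         # where the cache file lives
    cache_size=2 * 1024 ** 3,      # 2 GB ZEO cache, as above
)
db = DB(storage, cache_size=30000)  # 30000-object pickle cache
```

Keeping a persistent cache (the client= parameter) instead of dropping
it on restart might be one way to avoid the 100% cache-miss stampede.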
We are receiving approximately 1 million hits per day, which from what
I've read is not all that much. We probably have a higher number of
database writes than usual because of the number of
developers/maintainers. Can anyone make suggestions for providing a
more stable environment?

Thank you,
Brian Brinegar
Web Systems Developer
Engineering Computer Network
Zope maillist