Well, one of the things I was going to ask next was for some help doing just that.
We aren't getting any core files, even after setting ulimit correctly
(although we could be setting it incorrectly; I'll look into that
further). Anyway, someone else on this list said that core dumps for
threaded apps on Linux were mostly useless, so we aren't investing much
energy in it anyway.
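For what it's worth, one common gotcha is that the limit has to be raised in the same shell that launches the Zope process, since children inherit it. A quick sanity check (just a sketch, not our actual startup script):

```shell
# Raise the core file size limit for this shell and its children.
# This must run in the same shell that then starts Zope.
ulimit -c unlimited
# Verify it took effect -- should print "unlimited".
ulimit -c
```

If the second command still prints 0, a hard limit or the init/startup environment is probably clamping it back down.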
With the short restart times we have, I'd prefer a solution that doesn't
involve keeping a dead site dead for too long (as in, interactive
debugging with gdb). We are working on a ZEO scheme that would switch
the accelerator over to proxy another ZEO client, but we are not there yet.
It would be ideal if we could instruct Python to grab the SIG11, invoke
gdb, get a C stack trace for all threads, and let Zope die in peace. If it
all happened in a few seconds, we would still keep the client happy.
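Something along these lines might work, with a big caveat: after a real SIGSEGV the interpreter's heap may already be corrupted, so running Python code in the handler is inherently unreliable; an out-of-process watchdog is safer. Still, as a sketch (the handler name and gdb flags are illustrative, not anything from Zope):

```python
import os
import signal
import subprocess


def dump_threads(signum, frame):
    # Hypothetical handler: attach gdb to our own pid in batch mode,
    # print a C backtrace for every thread, then restore the default
    # disposition and re-raise the signal so the process dies normally.
    pid = os.getpid()
    subprocess.call([
        "gdb", "-batch",
        "-ex", "thread apply all bt",
        "-p", str(pid),
    ])
    signal.signal(signum, signal.SIG_DFL)
    os.kill(pid, signum)


# Catch SIG11 (SIGSEGV) and dump stacks before letting the process die.
signal.signal(signal.SIGSEGV, dump_threads)
```

The whole gdb invocation typically finishes in a few seconds, and re-raising with the default handler means the restart machinery still sees an ordinary crash.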
So, to answer your question, yes, I am comfortable hooking up gdb. I'd
just prefer that it be done in as little time as possible.
On Wed, 2001-12-05 at 18:10, Matthew T. Kromer wrote:
> Are you comfortable with hooking up gdb to Zope to try to catch this? I
> suspect, but do not know, that the MySQL python adapter is probably not
> doing something right w.r.t. memory management. Unfortunately, it is
> probably also the case that the problem only occurs with high-volume
> traffic -- particularly if it is a timing related bug.
> We have not been able to reproduce this problem in any deterministic way
> -- and the only people who seem to have it are those who are heavy MySQL
> users; it makes me think there is something in the adapter which is not
> behaving the same way under Python 2.1 as it did under Python 1.5.2.
> I have not looked at the adapter, so I'm making a few guesses as to what
> is going wrong.
Zope-Dev maillist - [EMAIL PROTECTED]
** No cross posts or HTML encoding! **