On Wed, Jan 28, 2004 at 10:40:54AM +, Colm MacCarthaigh wrote:
On Tue, Jan 27, 2004 at 02:24:46PM -0500, Jeff Trawick wrote:
I'm testing with this patch currently (so far so good):
Same here, I've applied the patch, and right now have 1 hour's uptime,
which is 12 times more than I've ever had with worker before.
Looks like that was it. Where do I send the
Jeff Trawick wrote:
worker MPM stack corruption in parent:
int free_slots[MAX_SPAWN_RATE];
...
/* great! we prefer these, because the new process can
 * start more threads sooner. So prioritize this slot
 * by putting it ahead of any slots with active threads.
 */
On Mon, Jan 26, 2004 at 07:37:23PM +, Colm MacCarthaigh wrote:
On Mon, Jan 26, 2004 at 06:28:03PM +, Colm MacCarthaigh wrote:
I'd love to find out what's causing your worker failures. Are you using
any thread-unsafe modules or libraries?
Not to my knowledge, I wasn't planning to do this till later, but
I've bumped to 2.1, I'll try out the
On Mon, Jan 26, 2004 at 10:09:20AM -0800, Aaron Bannert wrote:
On Thu, Jan 15, 2004 at 04:04:38PM +, Colm MacCarthaigh wrote:
There were other changes coincidental to that, like going to 12Gb
of RAM, which certainly helped, so it's hard to narrow it down too
much.
Ok with 18,000 or so child processes (all in the run queue), what does
your load look like?
On Mon, Jan 26, 2004 at 04:25:58PM -0500, Jeff Trawick wrote:
*sigh*, forensic_id didn't catch it,
forensic_id is just for crash in child
I know, but I couldn't rule out a crash in the child being a root cause
... until now. It doesn't look like it's triggered by a particular URI
anyway.
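For context, the forensic ids mentioned above come from mod_log_forensic, which logs each request once before and once after processing, so an unmatched entry points at the request a child died on. A minimal config sketch (module and log paths are assumed; ForensicLog is the real directive):

```
# Assumed paths; adjust to the build's layout.
LoadModule log_forensic_module modules/mod_log_forensic.so

# Each request logs "+<id> <request line>..." before processing and
# "-<id>" after; a "+" line with no matching "-" line is the suspect.
ForensicLog logs/forensic_log
```

As noted in the thread, this only helps when the crash is in a child handling a request, not when the parent's stack is corrupted.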
Colm MacCarthaigh wrote:
Not entirely serious, but today, we actually hit this, in production :)
The hardware, a dual 2GHz Xeon with 12GB RAM running Linux 2.6.1-rc2, coped
and remained responsive. So 20,000 may no longer be outside the realms
of what administrators reasonably desire to have.
On Thu, Jan 15, 2004 at 10:49:43AM -0500, [EMAIL PROTECTED] wrote:
-#define MAX_SERVER_LIMIT 2
+#define MAX_SERVER_LIMIT 10
dang!
Committed a limit of 20.
A couple of observations:
* I don't think you could do this with an early 2.4 kernel on i386 because
of eating up