On Thu, Jan 15, 2004 at 10:49:43AM -0500, [EMAIL PROTECTED] wrote:
> >-#define MAX_SERVER_LIMIT 20000
> >+#define MAX_SERVER_LIMIT 100000
> 
> dang!
> 
> Committed a limit of 200000.
> 
> A couple of observations:
> 
> * I don't think you could do this with an early 2.4 kernel on i386
>   because of eating up kernel memory with LDTs, assuming APR thinks it
>   can support threads.  Not sure about current kernels from popular
>   distros.
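For reference, the raised compile-time cap only matters once the run-time
limits follow it; a minimal sketch of the prefork knobs involved (values
illustrative, not a recommendation):

    <IfModule prefork.c>
        ServerLimit  25000
        MaxClients   25000
    </IfModule>

ServerLimit can't be raised past MAX_SERVER_LIMIT at startup, which is
why the #define was the ceiling.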
I'm running 2.6.1-mm2 now, and things are much, much better. We got away
with it - just about - with 2.6.1 vanilla, but -mm2 has improved the
stability a lot. We've peaked at just over 18,000 with 2.6.1-mm2 so far,
and it was a lot more bearable.

We were having major stability problems with 2.4, and as soon as we went
to 2.6 we found out why - we were getting a lot more client requests than
we thought, but they were queuing. We hit 20,000 within 2 days of going
to 2.6.1-rc2. There were other changes coincidental with that, like going
to 12Gb of RAM, which certainly helped, so it's hard to narrow it down
too much.

> * Should I assume you tried worker but it uses too much CPU?  If so, is
>   it a small percentage more or really really bad?

I don't use worker because it still dumps an un-backtraceable corefile
within about 5 minutes for me. I still have no idea why, though I have
plenty of corefiles. I haven't tried a serious analysis yet, because I've
been moving house, but I hope to get to it soon. Moving to worker would
be a good thing :)

If I get time, I'll compile the forensic logging module and see if I can
find a request-specific trigger I can replicate for testing. I have
worker running for months on end on the exact same platform at much
lower request rates, so I suspect the problem is only being triggered by
the sheer volume of requests.
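For anyone wanting to try the same thing, wiring it up is only a couple
of lines once the module is built (paths illustrative for a stock 2.0
layout):

    LoadModule log_forensic_module modules/mod_log_forensic.so
    ForensicLog logs/forensic_log

Each request gets a unique ID logged before and after processing, so any
ID that appears only once points at a request that never completed -
which should narrow down the trigger.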
-- 
Colm MacCárthaigh                        Public Key: [EMAIL PROTECTED]