> On Tue, 9 Jan 2001, Rob Bloodgood wrote:
> > OK, so my next question about per-process size limits is this:
> > Is it a hard limit???
> >
> > As in,
> > what if I alloc 10MB/per and every now & then one of my processes
> > spikes to a (not unreasonable) 11MB? Will it be nuked mid-process? Or
> > just instructed to die at the end of the current request?
>
> It's not a hard limit, and I actually only have it check on every other
> request. We do use hard limits with BSD::Resource to set maximums on CPU
> and RAM, in case something goes totally out of control. That's just a
> safety though.
<chokes> JUST a safety, huh? :-)
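(For reference, here's roughly how I read that setup -- a guess on my part,
untested, using Apache::SizeLimit for the soft every-other-request check and
Apache::Resource/BSD::Resource for the hard caps; the numbers are made up:

    # startup.pl -- soft limit, only checked every other request
    use Apache::SizeLimit;
    $Apache::SizeLimit::MAX_PROCESS_SIZE       = 10240;   # in KB, ~10MB
    $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 2;

    # httpd.conf
    PerlCleanupHandler   Apache::SizeLimit
    # hard kernel rlimits (soft:hard) set at child startup -- the "safety"
    PerlSetEnv           PERL_RLIMIT_DATA 12:16     # MB
    PerlSetEnv           PERL_RLIMIT_CPU  60:120    # CPU seconds
    PerlChildInitHandler Apache::Resource

Correct me if that's not what you meant.)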
Alright, then, a question for you and the mod_perl community in general,
since I never saw a worthwhile resolution to the "edge of chaos" thread:
In a VERY busy mod_perl environment (and I'm taking 12.1M hits/mo right
now), which has the potential to melt VERY badly if something hiccups
(like the DB getting locked in a transaction that holds up all MaxClients
worth of httpd processes, and YES, it's happened more than once in the
last couple of weeks),
what specific modules/checks/balances would you install on your webserver
to prevent such a melt from killing the box?
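The sort of belt-and-suspenders I'm imagining -- and this is purely a
sketch, the handler names are made up -- is a hard wall-clock timeout
wrapped around the content handler, so a request stuck waiting on the DB
gives up instead of pinning a child until MaxClients is exhausted:

    package My::Timeout;
    use strict;
    use Apache::Constants qw(SERVER_ERROR);

    sub handler {
        my $r = shift;
        my $status = eval {
            local $SIG{ALRM} = sub { die "request timed out\n" };
            alarm(30);                              # hard 30-second ceiling
            my $rc = My::RealHandler::handler($r);  # the real content handler
            alarm(0);
            $rc;
        };
        if ($@) {
            alarm(0);
            $r->log_error("aborting stuck request: $@");
            return SERVER_ERROR;
        }
        return $status;
    }
    1;

Is that sane, or is there a module that already does this properly?
Because when it wedges, this is what the console looks like: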
Red Hat Linux release 6.1 (Cartman)
Kernel 2.2.16-3smp on an i686
login: Out of memory for httpd
Out of memory for httpd
Out of memory for httpd
Out of memory for httpd
root
Out of memory for mingetty
Out of memory for httpd
Out of memory for httpd
<sigh>
<reset>
...and before the comments about client/server/DBA/caching/proxy/load-balancing
design start flying: I *know*! I'm working on it right now, but in the
meantime I have what I have, and I'm trying to keep it alive just a little
longer until the real fix is done. :-)
TIA!
L8r,
Rob