> RB> Alright, then to you and the mod_perl community in general, since
> RB> I never saw a worthwhile resolution to the thread "the edge of
> RB> chaos,"
>
> The resolution is that the machine was powerful enough. If you're
> running your mission critical service at "the edge of chaos" then
> you're not budgeting your resources properly. You should have at
> least 50% room for expansion. That is, you should run your machines
> around 50% of their maximum load so you have room to absorb the spikes
> in traffic.
Well, yes and no... the HW is PLENTY powerful enough, and I *know* I'm not
budgeting resources properly.
First of all, I'm a true geek... I can melt *any* machine. :-)
Second of all, given the literally thousands of pages of docs you have to
absorb to become really mod_perl proficient, I'm not at all surprised or
embarrassed that there are things about tuning a high-powered server
environment that I don't know.
Thirdly, I don't have significant load, for the most part. I've designed
everything I've written to have as little impact as possible on each specific
phase of the request cycle. On my most important server, a dual PIII/600 with
2GB of RAM, part of my problem is that I put in the second GB even though I'm
*CONVINCED* that all I really needed to do was find the right set of resource
limits to prevent a meltdown... I mean, 1GB is a lot of RAM.
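(The back-of-the-envelope math I keep coming back to, with numbers I'm
frankly guessing at: if each mod_perl child balloons to, say, 20MB of
unshared memory, then 1GB only really covers about 50 children before the
box starts swapping, and once it swaps under a spike the game is over. So
presumably the trick is keeping MaxClients times worst-case child size
comfortably under physical RAM, plus something to kill any child that blows
past its share.)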
And finally, I was hoping to prod somebody into posting snippets of CODE and
httpd.conf that describe SPECIFIC steps/checks/modules/configs designed to
put a reasonable cap on resources so that we can serve millions of hits w/o
needing a restart.
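To make it concrete, here's the *kind* of thing I'm imagining, pieced
together from what I've skimmed in the docs (just a sketch on my part,
assuming mod_perl 1.x with Apache::SizeLimit; the numbers are invented and
I may well have the knob names wrong, so please correct me):

  # startup.pl -- have oversized children kill themselves after a request
  use Apache::SizeLimit;
  # sizes are in KB
  $Apache::SizeLimit::MAX_PROCESS_SIZE       = 30000;  # ~30MB total per child
  $Apache::SizeLimit::MIN_SHARE_SIZE         = 4000;   # or sharing < ~4MB
  $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;      # don't check every hit

  # httpd.conf
  PerlFixupHandler Apache::SizeLimit
  # keep the fleet small enough that the children can't collectively
  # outgrow physical RAM (see the math above)
  MaxClients          50
  # and recycle children periodically in case something leaks anyway
  MaxRequestsPerChild 5000

Is that roughly the right shape, or am I missing a piece?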
I know I'm not dumb... in fact, I know I'm exceptionally good. But with the
ridiculous number of things I have to keep track of (being head geek is
always a busy job), I still haven't been able to wrap my mind around the
correct usage(s) of the various resource-limiting modules. A working
example (even for a completely different machine) would make my job 10 times
easier.
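For instance, I gather Apache::Resource can put hard rlimits on each child
via BSD::Resource, and I *think* the httpd.conf incantation looks something
like this (values invented, and I'm not even sure I have the env-var names
right, which is exactly my point):

  PerlModule Apache::Resource
  # soft:hard cap on the data segment, in MB
  PerlSetEnv PERL_RLIMIT_DATA 32:48
  # cap CPU seconds per child so a runaway loop can't take the box down
  PerlSetEnv PERL_RLIMIT_CPU  120:180
  PerlChildInitHandler Apache::Resource

Does that play nicely alongside Apache::SizeLimit, or do people pick one or
the other? That's the sort of "correct usage" question I'm groping for.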
L8r,
Rob