Stas Bekman <[EMAIL PROTECTED]> writes:

> > I had huge problems yesterday.  Our web site made it in to the Sunday
> > Times and has had to serve 1/2 million requests in the last 2 days.
> 
> Oh, I thought there was a /. effect, now it's a sunday effect :)

The original concept should be credited to Larry Niven, who called the effect
"flash crowds".

> > Had I set it up to have proxy servers and a separate mod_perl server?
> > No.  DOH!  So what happened to my 1Gig baby? It died. A sad and unhappy
> > death.

I strongly suggest you move the images to a separate hostname altogether. The
proxy is a good idea, but a dedicated image server has other useful effects
that I plan to write about in a separate message sometime. It does mean
rewriting all your img tags, though.
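As a sketch of what that separate image host might look like (hostnames and
paths here are made up), the idea is a bare-bones Apache with no mod_perl
compiled in, so each child stays tiny:

```apache
# Hypothetical image-only virtual host -- names are illustrative.
# This Apache should be built WITHOUT mod_perl so its children stay small.
<VirtualHost *:80>
    ServerName images.example.com
    DocumentRoot /www/images

    # Static files only: no handlers, no overrides.
    <Directory /www/images>
        Options None
        AllowOverride None
    </Directory>
</VirtualHost>
```

Your HTML then changes from `<img src="/images/logo.gif">` to
`<img src="http://images.example.com/logo.gif">`.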

> > What happened was this:  My memory usage went up and up until I got "Out
> > of memory" messages and MySQL bailed out.  Memory usage was high, and the
> > server was swapping as well.
> >
> > So I thought - restart MySQL and restart Apache.  But I couldn't reclaim
> > memory.  It was just unavailable.  How do you reclaim memory other than
> > by stopping the processes or powering down?  Is this something that
> > might have happened because it went past the Out of Memory stage?

Have you rebooted yet? Linux has trouble recovering when it runs out of memory
really badly. I haven't tried debugging it, but our mail exchangers have done
some extremely wonky things after running out of memory, even once everything
had returned to normal. At one point non-root users couldn't fork (they just
got "Resource unavailable") but root was fine and memory usage was low.

> First, what you should have done in the first place is set MaxClients to
> a number such that, even in the worst case of each process growing to
> X size in memory, your machine wouldn't swap. This will probably return
> an error to some of the users once the processes can no longer queue all
> the requests, but it will never bring your machine down!
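As a back-of-the-envelope sketch (all numbers here are made up for
illustration), the worst-case arithmetic looks like this:

```apache
# Hypothetical sizing: 1 GB box, ~300 MB reserved for MySQL plus the
# kernel and buffers, worst-case mod_perl child ~25 MB.
# (1024 - 300) / 25 is roughly 28, so cap the children there:
MaxClients 28

# Also keep long-lived children from growing without bound:
MaxRequestsPerChild 500
```

The point is that MaxClients comes from measuring your own worst-case child
size, not from guessing how much traffic you'd like to handle.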

I claim MaxClients should only be large enough to force 100% CPU usage,
whether from your database or your Perl code. There's no benefit to having
more processes running if they're just context switching and splitting the
same resources finer. Better to queue the users in the listen queue.

On that note you might want to set the backlog parameter (I forget the precise
name); it depends on whether you want users to wait indefinitely or just get
an error.
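If memory serves, the directive is ListenBacklog in Apache. A sketch:

```apache
# Length of the pending-connection queue passed to listen().  Note the
# kernel may silently cap this (e.g. via net.core.somaxconn on Linux),
# so raising it here alone may not be enough.
ListenBacklog 511
```

A deep backlog keeps users waiting during a spike instead of refusing them; a
shallow one fails fast.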

-- 
greg
