Francis Dubé wrote:
Hi everyone,

I'm running a webserver on FreeBSD (6.2-RELEASE-p6) and I have this error in my logs:

collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

I've read that this is mainly caused by Apache spawning too many processes. Everyone seems to suggest decreasing the MaxClients directive in Apache (set to 450 at the moment), but here's the problem... I need to increase it! During peaks all the processes are in use, and we even have small drops sometimes because there aren't enough processes to serve the requests. Our traffic is increasing slowly over time, so I'm afraid it'll become a real problem soon. Any tips on how I could deal with this situation, on Apache's or FreeBSD's side?

Here's the useful part of my conf :

Apache/2.2.4, compiled with prefork mpm.
httpd.conf :
<IfModule mpm_prefork_module>
   ServerLimit         450
   StartServers          5
   MinSpareServers       5
   MaxSpareServers      10
   MaxClients          450
   MaxRequestsPerChild   0
</IfModule>

KeepAlive On
KeepAliveTimeout 15
MaxKeepAliveRequests 500

You don't say what sort of content you're serving, but if it is
PHP, Ruby-on-Rails, Apache mod_perl or similar dynamic content then here's a very useful strategy.

Something like 25-75% of the HTTP queries on a dynamic web site will
typically be for static files: images, CSS, javascript, etc.  An
instance of Apache padded out with all the machinery to run all that
dynamic code is not the ideal server for the static stuff.  In fact,
if you install one of the special super-fast webservers optimised
for static content, you'll probably be able to answer all those requests from a single thread of execution of a daemon substantially
slimmer than apache.  I like nginx for this purpose, but lighttpd
is another candidate, or you can even use a second, highly optimised instance of apache with almost all of the loadable modules and other stuff stripped out.

The tricky bit is managing to direct the HTTP requests to the appropriate 
server.  With nginx I arrange for apache to bind to the
loopback interface and nginx handles the external network i/f, but
the document root for both servers is the same directory tree.  Then
I'd filter off requests for, say, PHP pages using a snippet like so
in nginx.conf:

       location ~ \.php$ {
           proxy_pass  http://127.0.0.1;    # apache, listening on the loopback
       }

So all the PHP gets passed through to Apache, and all of the other content 
(assumed to be static files) is served directly by nginx[1].
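
For concreteness, the apache side of that arrangement is just a matter of restricting what it listens on -- a sketch, assuming apache stays on the stock port 80 but bound to the loopback only:

       # httpd.conf -- accept connections only from the local machine
       Listen 127.0.0.1:80

nginx then listens on the machine's public address and hands PHP requests across the loopback.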
It also helps if you set nginx to put an 'Expires:' header several
days or weeks in the future for all the static content -- that way
the client browser will cache it locally and it won't even need to
connect back to your server and try doing an 'if-modified-since' HTTP
GET on page refreshes.
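
In nginx that far-future caching policy is a one-directive affair -- a sketch, with the file extensions and the 30-day lifetime chosen purely for illustration:

       location ~* \.(gif|jpg|jpeg|png|css|js)$ {
           expires     30d;
       }

Anything matching those extensions goes out with an 'Expires:' header a month in the future.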

The principal effect of this is that Apache+PHP basically spends all its time doing the heavy lifting it's optimised for, and doesn't get distracted by all the little itty-bitty requests.  So you need fewer apache child processes, which reduces memory pressure and, to some extent, competition for CPU resources.

An alternative variation on this strategy is to use a reverse proxy
-- varnish is purpose designed for this, but you could also use squid
in this role -- the idea being that static content can be served mostly
out of the proxy cache and it's only the expensive to compute dynamic
content that always gets passed all the way back to the origin server.
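
As a sketch of the varnish variant (VCL syntax has changed between varnish releases, so treat this as illustrative only): you declare the origin server as a backend, and varnish caches in front of it --

       backend default {
           .host = "127.0.0.1";
           .port = "8080";
       }

with varnish on port 80 and apache moved to 8080, the cache serves whatever the origin's Expires/Cache-Control headers permit, and only misses travel to apache.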

You can also see the same strategy commonly used on Java based sites,
with Apache being the small-and-lightning-fast component, shielding
a larger and slower instance of Tomcat from the rapacious demands of the Internet surfing public.
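
The glue in that setup is usually mod_jk or mod_proxy_ajp; a one-line sketch of the latter, assuming Tomcat's default AJP port and a hypothetical /app context path:

       ProxyPass /app ajp://127.0.0.1:8009/app

Apache serves the static files itself and forwards only the servlet/JSP traffic to Tomcat.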



[1] Setting 'index index.php' in nginx.conf means it will DTRT with
   directory URLs too.

Dr Matthew J Seaman MA, D.Phil.                   7 Priory Courtyard
                                                 Flat 3
PGP:     Ramsgate
                                                 Kent, CT11 9PW
