On Tue, Jun 08, 2004 at 10:23:56AM -0400, Greg Ames wrote:
> I'm interested to know how httpd 2.x can be made more scalable. Could we
> serve 10,000 clients with current platforms as discussed at
> http://www.kegel.com/c10k.html , without massive code churn and module
> breakage?
I've served over 20,000 using httpd 2.x; our current record is just over
23,000, set back in February. That's stock 2.x code with only a slight
ten-line patch which enabled sendfile for IPv4 connections only, plus the
higher hard-limits patch I jokingly sent to the list (and got committed ;).
I mailed Dan (the c10k person) about it back on the third of February but
never got a reply.

As previously stated on the list, the server is running Linux 2.6.x-mm2
(where x is usually the most current); it's a Dell 2650, dual 2.4GHz Xeon,
12GB of RAM. KeepAlives are on, though "Timeout" is 30, MaxKeepAliveRequests
is 100, KeepAliveTimeout is 15 and the net/ipv4/tcp_keepalive_time sysctl
is 300. I've also found that:

  fs/xfs/refcache_size     = 512
  vm/min_free_kbytes       = 1024000
  vm/lower_zone_protection = 1024
  vm/page-cluster          = 5
  vm/swappiness            = 10

all helped lots :) This is all with prefork as well; worker works out
slower for our load for some reason. (A sysctl.conf-style sketch of the
above tuning is appended after my sig.)

Oh, and while I'm at it: the same server (http://ftp.heanet.ie/) recently
shipped 966Mbit/sec in production, but only to about 5,000 concurrent
users, during the release of Fedora Core 2. 10k is way too low a target,
I didn't even have to configure much to achieve that. 100k is a good
target :)

> I believe that reducing the number of active threads would help by reducing
> the stack memory requirements.

I've found that to work for us, with definitely good results. I'll
certainly try it on ftp.heanet.ie (once it looks ready ;) and report back
if that's useful.

-- 
Colm MacCárthaigh
Public Key: [EMAIL PROTECTED]
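For anyone who wants to try the same tuning, here's a rough sketch of how
the sysctls above would look in /etc/sysctl.conf (dotted notation). The
values are just what I reported above, tuned for this particular 12GB box;
fs.xfs.refcache_size and vm.lower_zone_protection only exist on some 2.6
kernels (this box runs -mm), so check what's under /proc/sys before
applying, and treat the numbers as a starting point, not a recommendation.

  # sysctl.conf fragment -- values as used on ftp.heanet.ie above
  net.ipv4.tcp_keepalive_time = 300      # first keepalive probe after 300s idle (default 7200)
  fs.xfs.refcache_size        = 512      # XFS refcache size (XFS-specific, -mm kernels)
  vm.min_free_kbytes          = 1024000  # keep ~1GB free for atomic allocations
  vm.lower_zone_protection    = 1024     # protect lowmem from highmem allocation pressure
  vm.page-cluster             = 5        # swap readahead of 2^5 pages
  vm.swappiness               = 10       # prefer reclaiming page cache over swapping

  # load with "sysctl -p", or echo the values into the matching
  # files under /proc/sys/ (e.g. /proc/sys/vm/swappiness)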