A quick fix could be to use HAProxy for load balancing and to set the maximum number of connections per mongrel to 1. An added bonus is that all requests get queued up at HAProxy (which is very conservative in its memory use) and routed to the first available mongrel process, instead of getting queued up at the mongrel level. I read that nginx will in the near future (or may already) have an option to limit the number of simultaneous proxied connections, but HAProxy is the only tool I have experience with that can do this (I'm still wondering why the Rails community seems to favor Pound so much).

Piet.
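For illustration, a minimal HAProxy listen section along those lines might look like the following; the listener port, backend addresses, and names are placeholders, not taken from the original post:

    listen mongrel_cluster
        bind 0.0.0.0:8080
        mode http
        balance roundrobin
        # maxconn 1 means HAProxy hands each mongrel only one request at a time;
        # anything beyond that waits in HAProxy's own (cheap) queue
        server mongrel1 127.0.0.1:8001 maxconn 1 check
        server mongrel2 127.0.0.1:8002 maxconn 1 check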
________________________________
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alexey Verkhovsky
Sent: Thursday, March 8, 2007 7:36
To: mongrel-users@rubyforge.org
Subject: Re: [Mongrel] Memory leaks in my site

<snip>

It also doesn't leak any memory at all *when it is not overloaded*. E.g., under maximum non-concurrent load (a single-threaded test client that fires the next request immediately upon receiving the response to the previous one), it stays up forever. When Mongrel + "Hello, World" is overloaded, there is a memory leak to the tune of 6 MB per hour. I have yet to figure out where it is coming from.

<snip>

Yes. In the meantime, the recipe apparently is "serve static content through an upstream web server, and use smaller values of --num-procs". A Mongrel that only receives dynamic requests is essentially a single-threaded process anyway. The only reason to have more than one thread is so that other requests can queue up while it is doing something that takes time. Cool.
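A rough sketch of that recipe, assuming nginx as the upstream web server; the paths, ports, and upstream name are placeholders and not from the original post:

    # nginx serves static files itself and proxies dynamic requests to the mongrels
    upstream mongrels {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    server {
        listen 80;

        # static assets served directly by nginx
        location ~ ^/(images|stylesheets|javascripts)/ {
            root /var/www/myapp/public;
        }

        # everything else goes to the Mongrel cluster
        location / {
            proxy_set_header Host $host;
            proxy_pass http://mongrels;
        }
    }

    # each mongrel then only sees dynamic requests and can be started with a
    # smaller --num-procs, for example:
    #   mongrel_rails start -d -e production -p 8001 --num-procs 10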