I noticed that if a large number of buckets in a brigade are sent out, the resident memory footprint of the httpd process (I've been playing with 2.2.6 for now) goes up significantly.
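To make "a large number of buckets" concrete, a handler as dumb as the sketch below is enough to show the growth. This is not the code I actually run; the module/handler names, the bucket count and the payload are made up, just to illustrate the pattern:

#include "httpd.h"
#include "http_config.h"
#include "http_protocol.h"
#include "util_filter.h"
#include "apr_buckets.h"

static int many_buckets_handler(request_rec *r)
{
    apr_bucket_brigade *bb;
    apr_status_t rv;
    int i;

    if (strcmp(r->handler, "many-buckets")) {
        return DECLINED;
    }

    ap_set_content_type(r, "text/plain");
    bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);

    /* Each iteration pulls a fresh heap bucket (plus a copy of the one
     * byte of data) out of conn->bucket_alloc. */
    for (i = 0; i < 200000; i++) {
        apr_bucket *b = apr_bucket_heap_create("x", 1, NULL,
                                               r->connection->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(bb, b);
    }

    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_eos_create(r->connection->bucket_alloc));

    rv = ap_pass_brigade(r->output_filters, bb);
    return (rv == APR_SUCCESS) ? OK : HTTP_INTERNAL_SERVER_ERROR;
}

static void many_buckets_hooks(apr_pool_t *p)
{
    ap_hook_handler(many_buckets_handler, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA many_buckets_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    many_buckets_hooks
};

Once the request is done, all that bucket memory sits in the allocator behind conn->bucket_alloc/conn->pool and, as far as I can tell, never goes back to the OS.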
One does not need custom code to see this, either. For instance, have a file processed by the INCLUDES filter which contains a lot of references (say a thousand) to something like this:

  <!--#include virtual='somefile.html' -->

The size of somefile.html does not matter (it can actually be zero). In this particular example, the resident size of httpd jumped from 3 to 11 MB. The served file in question (i.e. the SHTML file containing the virtual directives) was about 40 kB. Quite a bit of memory for such a small chunk of HTML being pushed out.

What appears to be happening is that conn->pool and conn->bucket_alloc do not get destroyed (but rather just cleaned) at the end of the connection, so the footprint of the process stays up once a lot of buckets have been allocated. In fact, even destroying conn->pool does not help, because it would appear that conn->pool is not the owner of its allocator. Destroying conn->pool->parent brings the memory footprint of httpd back in check.

Now imagine someone (like yours truly :-) writing a handler/filter that sends many, many buckets inside a brigade down the filter chain. This causes the httpd process to consume many, many megabytes (in some instances I measured almost 500 MB in my tests), which are never returned. Then imagine multiple httpd processes doing the same thing and not releasing any of that memory. The machine quickly goes into a DoS due to excessive swapping.

Sure, I could fix my code to slam buckets together to reduce their number, but that would not fix any other handler/filter (e.g. mod_include).

So, I'm guessing the correct fix would be to:

- make conn->pool have/own its own allocator (a rough sketch of what I mean is in the P.S. below)
- destroy, rather than clear, conn->pool on connection close

Thoughts?

--
Bojan
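P.S. Here is a rough sketch of the first point, written against the APR pool API. I haven't traced where each MPM actually sets up conn->pool, so the function below is hypothetical (name, placement and the max_free value are made up); it is only meant to show the shape of the change:

#include "apr_pools.h"
#include "apr_allocator.h"

/* Give the connection pool its own allocator, so that destroying the
 * pool also destroys the allocator and hands the memory back, instead
 * of parking it on the parent pool's allocator free list. */
static apr_pool_t *create_conn_pool(apr_pool_t *parent)
{
    apr_allocator_t *allocator;
    apr_pool_t *pool;

    if (apr_allocator_create(&allocator) != APR_SUCCESS) {
        return NULL;
    }

    /* don't let the allocator hoard an unbounded free list between
     * connections; the limit here is arbitrary */
    apr_allocator_max_free_set(allocator, 16 * 1024 * 1024);

    if (apr_pool_create_ex(&pool, parent, NULL, allocator) != APR_SUCCESS) {
        apr_allocator_destroy(allocator);
        return NULL;
    }

    /* the pool now owns the allocator, so apr_pool_destroy(pool)
     * takes the allocator (and its memory) down with it */
    apr_allocator_owner_set(allocator, pool);

    return pool;
}

Then, on connection close, call apr_pool_destroy() on conn->pool instead of apr_pool_clear().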
