Quoting Joe Orton <[EMAIL PROTECTED]>:
> It sounds like that is the root cause. If you create a brigade with N
> buckets in it for arbitrary values of N, expect maximum memory consumption
> to be O(N). The output filtering guide touches on this:
>
> http://httpd.apache.org/docs/trunk/developer/output-filters.html
>
> Filters need to be written to pass processed buckets down the filter
> chain ASAP rather than buffering them up into big brigades. Likewise
> for a content generator - buffering up a hundred 1MB HEAP buckets in a
> brigade will obviously give you a maximum heap usage of ~100MB; instead,
> pass each HEAP bucket down the filter chain as it's generated and you get
> a maximum of ~1MB.
>
> Most of the shipped filters do behave correctly in this respect (though
> there are problems with the handling of FLUSH buckets); see e.g. the
> AP_MIN_BYTES_TO_WRITE handling in mod_include.

All true. The problem in both cases was created by the number of
buckets rather than by their size. At first I suspected I was doing
something stupid in my code, so I checked out mod_include to see what
happens there, and to my surprise it was eating memory and not
returning it to the OS either.
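
Just to make sure I'm reading the recommendation right, the generator
side would end up looking roughly like this (a minimal sketch against
the APR bucket API; stream_chunks() and next_chunk() are made-up names,
and next_chunk() stands in for whatever actually produces the data):

#include "httpd.h"
#include "http_protocol.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* Hypothetical: produces the next chunk of generated content. */
int next_chunk(request_rec *r, char **buf, apr_size_t *len);

/* Pass each chunk down the filter chain as soon as it is produced,
 * instead of accumulating N buckets in one brigade. */
static int stream_chunks(request_rec *r)
{
    apr_bucket_brigade *bb = apr_brigade_create(r->pool,
                                                r->connection->bucket_alloc);
    char *chunk;
    apr_size_t len;

    while (next_chunk(r, &chunk, &len)) {
        apr_bucket *b = apr_bucket_heap_create(chunk, len, NULL,
                                               r->connection->bucket_alloc);
        APR_BRIGADE_INSERT_TAIL(bb, b);

        /* Send it now; the heap high-water mark stays at roughly one chunk. */
        if (ap_pass_brigade(r->output_filters, bb) != APR_SUCCESS) {
            return HTTP_INTERNAL_SERVER_ERROR;
        }
        apr_brigade_cleanup(bb);            /* brigade is empty again */
    }

    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_eos_create(r->connection->bucket_alloc));
    return ap_pass_brigade(r->output_filters, bb) == APR_SUCCESS
           ? OK : HTTP_INTERNAL_SERVER_ERROR;
}

The apr_brigade_cleanup() after each ap_pass_brigade() is what keeps the
brigade, and with it the heap usage, from growing with N.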
What I was worried about here is that after the connection is closed,
the memory used for all this is never reclaimed, which is what
actually caused the DoS (i.e. multiple huge processes started being
swapped out). Not sure how far mod_include would push this (I can test
with, say, 10,000+ includes and see what happens), but maybe it would
be better to destroy the pool, just like the request pool gets
destroyed, to be on the safe side. But then again, if a module is
buggy, it's not Apache's problem...
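
What I have in mind is something along these lines (a sketch only,
assuming the temporary buckets and brigades can be made to live in
their own subpool; with_scratch_pool() is just an illustrative name):

#include "httpd.h"
#include "apr_pools.h"

/* Give the per-request scratch allocations their own subpool so they
 * can be torn down explicitly, instead of hanging around on a pool
 * that lives as long as the connection does. */
static void with_scratch_pool(request_rec *r)
{
    apr_pool_t *scratch;

    apr_pool_create(&scratch, r->pool);

    /* ... build the temporary buckets/brigades out of 'scratch' here ... */

    apr_pool_destroy(scratch); /* memory goes back to the allocator now,
                                  not when the connection finally goes away */
}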
Not sure why MaxMemFree never kicked in to save the day. Gdb says the
value was passed in correctly, and yet it had no effect...
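
For what it's worth, my understanding is that MaxMemFree boils down to
apr_allocator_max_free_set() being called on the per-process allocators,
i.e. a cap on how much free memory an allocator keeps on its free list
before free()ing it back to the OS. Roughly (make_capped_pool() is an
illustrative name, and the 80KB cap is just an example value; the
MaxMemFree directive itself is given in KBytes):

#include "apr_allocator.h"
#include "apr_pools.h"

/* Sketch of a pool whose allocator is capped in the way I understand
 * MaxMemFree to cap the per-process allocators. */
static apr_status_t make_capped_pool(apr_pool_t **pool)
{
    apr_allocator_t *allocator;
    apr_status_t rv;

    rv = apr_allocator_create(&allocator);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    /* Free blocks beyond ~80KB are free()d back to the OS instead of
     * being held on the allocator's free list. */
    apr_allocator_max_free_set(allocator, 80 * 1024);

    rv = apr_pool_create_ex(pool, NULL, NULL, allocator);
    if (rv != APR_SUCCESS) {
        apr_allocator_destroy(allocator);
        return rv;
    }
    apr_allocator_owner_set(allocator, *pool);

    return APR_SUCCESS;
}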
--
Bojan