Hello all,

I have found an explanation of the "problem" I reported a little while ago - and I 
have been asked to post it here, in case someone else has observed something similar.

I spent a day in the debugger with Apache, and I found that I could in fact reproduce
the problem in "the lab" - with some patience. The short description of the problem:
Apache's virtual memory consumption apparently increases over time as pages are viewed,
but not necessarily every time.

Apparently... It was (as "expected") not a memory leak, but an effect of the large
number of threads I have configured, 144. It appears that each thread allocates a
"completion context" from a pool (see mpm\winnt\child.c), which is recycled shortly
afterwards. Each context takes around half a megabyte. Usually only a limited number of
contexts is needed, but in the worst case every thread requests a context and the full
amount gets allocated. However, the larger numbers of contexts appear only after "some"
not well defined amount of use, which is why it looks like a leak. Initially, my Apache
uses around 10MB, and if all threads allocate a context it will amount to 86MB, but it
gets there very slowly, raising the memory usage only when one or more threads need a
context and don't find one in the pool.
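
To make the mechanism concrete, here is a small stand-alone sketch of the pattern as I
understand it. This is not Apache's actual code from mpm\winnt\child.c - the names and
sizes are made up, and the real pool is of course shared between the worker threads and
locked accordingly - it only shows the take-one-or-allocate idea:

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the ~0.5MB completion context; name and size are
       made up, the real thing lives in mpm\winnt\child.c. */
    #define CONTEXT_SIZE (512 * 1024)

    typedef struct context {
        struct context *next;          /* free-list link */
        char payload[CONTEXT_SIZE];
    } context;

    static context *free_list = NULL;  /* the pool of recycled contexts */
    static int total_allocated = 0;    /* never shrinks once grown */

    /* A thread asking for a context: reuse one from the pool if
       possible, otherwise allocate a new one (raising memory use). */
    static context *get_context(void)
    {
        context *c = free_list;
        if (c) {
            free_list = c->next;
            return c;
        }
        total_allocated++;
        return malloc(sizeof(context));
    }

    /* Recycling: back on the free list, never back to the OS. */
    static void release_context(context *c)
    {
        c->next = free_list;
        free_list = c;
    }

    int main(void)
    {
        context *burst[20];
        int i;

        /* First burst of 10 requests, then recycle all contexts. */
        for (i = 0; i < 10; i++) burst[i] = get_context();
        for (i = 0; i < 10; i++) release_context(burst[i]);

        /* A bigger burst of 20: 10 come from the pool, 10 are new. */
        for (i = 0; i < 20; i++) burst[i] = get_context();

        printf("contexts ever allocated: %d (~%d MB)\n", total_allocated,
               total_allocated * CONTEXT_SIZE / (1024 * 1024));
        return 0;
    }

The point being that get_context() only grows the total when the pool happens to be
empty, and release_context() never hands memory back to the OS, so the footprint can
only ever go up.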

The reason I see this is www.mdjnet.dk/discog.html, an HTML document with well over 100
small pictures embedded. If I don't have at least one thread per picture, I get this in
the error.log:
"[warn] Server ran out of threads to serve requests. Consider raising the
ThreadsPerChild setting"
and pictures missing from the page. When someone views that page, or when several users
access pages at the same time, the server is literally "showered" with requests, mostly
requests for pictures. Each request activates a thread. Usually only a limited number of
threads will need to allocate a context at the same time, but - perhaps because my
server is an old 200MHz machine with only 128MB of RAM - eventually a larger and larger
number of threads will, coincidentally, need contexts at the same time. This does not
necessarily happen immediately, and with 144 threads there is room for the "record" to
be broken many times over a period of weeks or months. This is why it appears to be a
memory leak: memory consumption grows "forever", to proportions much larger than the
entire content of my simple site, even though there is indeed an upper limit - in my
case 86MB - which it approaches "asymptotically".

So, I am no longer worried about a leak in Apache, but there are still a few things I 
don't understand in depth:

- The allocation of contexts. It appears the contexts are not released back to the pool
until after a few seconds, while downloading the page in question (locally) takes less
time than that. Why is the total need for contexts still typically relatively small?
After one download of the page, the pool will typically contain around 10-20 contexts,
while the next download may result in 30 contexts being used. Why the difference, if
the recycling is relatively slow anyway?

- Why do I so easily get the "out of threads" error? My server is indeed a low volume
server, and I know Apache serves a significantly larger number of clients at the same
time elsewhere, so even without a 100+ jpg page the number of simultaneous requests
there must be rather large. Are people out there really running their Apaches with a
gigantic number of threads, or could the fact that I see the error message be related
to my "2.0.x / W2K / slow CPU / slow DSL (128kbps)" combination? Or have I simply
configured my Apache all wrong?
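
For reference, the thread count comes from the winnt MPM section of my httpd.conf,
which looks roughly like this:

    <IfModule mpm_winnt.c>
        ThreadsPerChild 144
    </IfModule>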

Not questions I lose any sleep over, but it could be interesting to understand them in
detail. If you feel like discussing this further, please cc me on any posts, as I will
be leaving this list some time soon.

I hope this analysis is interesting to someone, and that it might even save someone
some headaches if they run into the same effect.

Regards,
Morten Due Jorgensen
http://www.mdjnet.dk
