Hi,

I have an Ubuntu server with a Linux 2.6.24 kernel running on an HP ProLiant box with two 64-bit dual-core CPUs and 4 GB of RAM. I'm trying to measure the peak request rate with Apache configured as a forward proxy, so I load the system with 5000 req/sec from a Web Polygraph setup. I have configured mod_mem_cache to use only about 2 GB, and the MCacheMaxObjectCount setting is configured to be approximately "cache size / mean object size". I have also added log messages to the mod_mem_cache code, so that when the cache limit is reached a message is logged and from then on the replacement algorithm takes over. The Polygraph server is configured to offer an object hit ratio of about 30%.
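For concreteness, the cache-related part of the configuration looks roughly like this (numbers rounded here; note that MCacheSize is given in KB, and the object count assumes a mean object size of about 5.5 KB, the midpoint of my 3k-8k range):

ProxyRequests On
CacheEnable mem /

# MCacheSize is in KB: 2097152 KB ~= 2 GB
MCacheSize 2097152
# ~2 GB / ~5.5 KB mean object size
MCacheMaxObjectCount 380000
MCacheMinObjectSize 1
MCacheMaxObjectSize 16384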

Since mod_mem_cache doesn't use shared memory, I run a single Apache server process (ServerLimit set to 1). If I used more than one process, I couldn't control which requests go to which child, so I might well end up caching the same object multiple times, once per child process. I am using the event MPM because I thought it would give better performance than the prefork and worker MPMs. I'm aware that the event MPM is an experimental module, but getting the best server performance matters more to me than a few bugs we might hit along the way.
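The MPM settings are along these lines (the thread counts here are illustrative; the essential part is the single child process, which also means MaxClients caps out at ServerLimit x ThreadsPerChild concurrent connections):

<IfModule mpm_event_module>
    StartServers        1
    ServerLimit         1
    ThreadLimit         256
    ThreadsPerChild     256
    MaxClients          256
    MaxRequestsPerChild 0
</IfModule>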

Anyway, when I examine the memory consumption of the httpd process via top, I see that it has consumed up to 3.7 GB of heap space even though the maximum cache size is set to 2 GB. At that point the server had been running for about 30 minutes serving 5k req/sec. I initially thought this might be due to backlogged requests and apr_pool memory allocations, so in the next run I disabled mod_mem_cache entirely. In that run Apache could only serve around 4000 req/sec, since every request was a cache miss; however, its memory footprint after 45 minutes was only around 40 MB. As soon as I enable the cache, I observe that Apache's memory footprint is almost 0.5-1.5 times the current cache size. The cache hasn't even reached its limit at that point, so evicted objects not being freed can't be the explanation.

I found a link that touches on this topic; it suggests restarting the Apache worker processes via the MaxRequestsPerChild directive to avoid problems caused by memory fragmentation and leaks (http://www.apachesecurity.net/blog/2006/08/apache_reverse_proxy_memory_co.html). Restarting the child process is not an option for me, since I would lose everything I have cached. If the problem is fragmentation because the cached object sizes vary between 3k and 8k in my test, that is something I can live with: I could write my own memory allocation module that allocates chunks in slabs, so that memory consumption becomes predictable (a sketch of what I have in mind follows below). The only apr_pool allocation that mem_cache seems to make is when it creates a cache_object, and I don't see that accounting for such large memory consumption. I am beginning to believe this really is a fragmentation issue, but I have only been working with the Apache code for a week, so I am not entirely sure of my hypothesis.
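To illustrate what I mean by slabs (this is a toy sketch, not wired into mod_mem_cache, and all the names and sizes are made up): every cached body, 3k and 8k alike, would occupy one fixed 8 KB bucket, and freed buckets would go back onto a free list rather than back to malloc, so the heap grows only in whole-slab increments and cannot fragment around odd-sized objects.

#include <stdlib.h>

#define SLAB_OBJ_SIZE 8192   /* one bucket covers the whole 3k-8k range */
#define SLAB_OBJS     512    /* 512 * 8 KB = 4 MB added per slab        */

typedef union obj {
    union obj *next;          /* free-list link while the bucket is idle */
    char payload[SLAB_OBJ_SIZE];
} obj_t;

typedef struct slab {
    struct slab *next;        /* slabs are chained and never released    */
    obj_t objs[SLAB_OBJS];
} slab_t;

static slab_t *slabs;         /* every slab ever allocated               */
static obj_t *free_objs;      /* buckets currently available for reuse   */

static int slab_grow(void)
{
    slab_t *s = malloc(sizeof(*s));
    size_t i;
    if (s == NULL)
        return -1;
    s->next = slabs;
    slabs = s;
    for (i = 0; i < SLAB_OBJS; i++) {  /* push fresh buckets on the list */
        s->objs[i].next = free_objs;
        free_objs = &s->objs[i];
    }
    return 0;
}

void *slab_alloc(void)
{
    obj_t *o;
    if (free_objs == NULL && slab_grow() != 0)
        return NULL;
    o = free_objs;
    free_objs = o->next;
    return o;
}

void slab_free(void *p)
{
    obj_t *o = p;             /* bucket is recycled, never handed to free() */
    o->next = free_objs;
    free_objs = o;
}

A real version would of course need locking (or per-thread free lists) since the event MPM is heavily threaded, and probably a few size classes instead of one, but the point stands: total consumption is bounded by the number of slabs times 4 MB, no matter how the 3k-8k object sizes are mixed.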

I would really appreciate it if someone on this list could share their thoughts on this matter, and if you have read this far, thank you for your time.

Best Regards,
Manik
