Hi !
Have a look at the following example:


The Apache server configuration:

Timeout 100 
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15
MinSpareServers 3
MaxSpareServers 40
StartServers 3
MaxClients 40           <-- This is important !
MaxRequestsPerChild 40  <-- This is important !

Let us assume that these 40 httpd processes handle
the many requests of some not-so-small WWW portal.
Many PHP scripts are being executed. Most of them need
a small amount of memory (1-2 M), but there are a few scripts
which use, mmm... let it be 30 M. Statistically, such a "BIG"
script is executed once per 39 executions of "SMALL" scripts.
Let us also assume that all scripts have similar execution times,
and that all httpd processes are busy almost 100% of the time.
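
Just to fix ideas, a "BIG" script could be as trivial as the
sketch below (purely illustrative - the 30 M figure is my
assumption from above):

<?php
// big.php - grabs ~30 M for the duration of one request
$data = str_repeat('x', 30 * 1024 * 1024);
// ... do some work on $data ...
unset($data);   // released at the script level - but see below
?>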

So...

In a simple model the situation is not bad: at any moment
there should be about 1 (let it be 5 at most) "BIG" processes,
and the rest are "SMALL".
The memory used is 5*30 M plus something for the SMALL
ones - 200 M at most.

But...
it does not work like that in real life.

The php_module DOES NOT release the allocated memory (this
is my opinion) until the httpd process dies (after the 40 requests
set by MaxRequestsPerChild in the configuration above). It holds
ALL of it in order to reuse it when the next request comes.
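
One rough way to watch this (my sketch, and Linux-only, since it
parses /proc/self/status): drop a probe like the one below into the
portal and request it repeatedly. Once a child has served a single
"BIG" request, its VmRSS should stay around 30 M even while it
serves only "SMALL" ones.

<?php
// probe.php - shows which child answered and how much resident
// memory it holds (Linux-specific: reads /proc/self/status)
echo "pid: " . getmypid() . "\n";
$status = file('/proc/self/status');
foreach ($status as $line) {
    if (strpos($line, 'VmRSS') === 0) {
        echo $line;   // e.g. "VmRSS:    31240 kB"
    }
}
?>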

The effect is...

After a short time we have about 20 httpd processes (mmm... 15?
when we are lucky...), where EACH holds 30 M (in the hope that
the next script will use it...).
This comes from the following computation: every httpd process
executes a "BIG" script once during its lifetime (statistically),
at a random moment, so statistically half of its lifetime is
"before the BIG execution" and half is "after". So at any moment
half of all httpd processes are past "the BIG execution", and
each of them holds 30 M of memory.

20*30 M = 600 M     =:O

oops... It is not so good.
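
As a sanity check of both computations, here is a rough simulation
under exactly the assumptions above (40 children, 1 BIG per 40
requests, BIG = 30 M, SMALL = 2 M, a child dies after 40 requests):

<?php
// sanity.php - compares "memory freed after every request"
// with "each child keeps its peak until it dies"
$trials = 10000;
$naive = 0; $sticky = 0;
for ($t = 0; $t < $trials; $t++) {
    for ($c = 0; $c < 40; $c++) {
        $age = rand(1, 40);   // requests this child has served so far
        // chance it has already executed at least one BIG script
        $seen_big = lcg_value() < 1 - pow(39 / 40, $age);
        $naive  += (rand(1, 40) == 1) ? 30 : 2;  // freed each time
        $sticky += $seen_big ? 30 : 2;           // keeps its peak
    }
}
printf("freed after each request: ~%d M\n", $naive / $trials);
printf("kept until child dies:    ~%d M\n", $sticky / $trials);
?>

This lands at roughly 110 M for the "freed" model and about 500 M
for the "kept" one - close to the "lucky" case of 15 children
sitting on 30 M each.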


Solutions:

1. Decrease the value of MaxRequestsPerChild

hmmm... but what is the optimal value ? It is
hard to compute in real life (where the
scripts are not just BIG and SMALL).
And all the extra forking costs performance.
:(

2. Decrease the value of MaxClients

... this is even worse than 1. The performance of
our web server ... :(

3. Set a limit on memory (see the snippet after this list)

... then the BIG scripts cannot be executed at all... :(

4. Force the httpd processes (more exactly: the PHP
module, I suppose) to release the memory
BEFORE the next request. Or better: set some limit
down to which the memory should at least be released.
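
As for solution 3: PHP already has such a mechanism, the
memory_limit setting (if I remember correctly, PHP must be
compiled with --enable-memory-limit for it to work):

; php.ini
memory_limit = 8M

But a request that allocates more than this is simply aborted,
so it is exactly the legitimate BIG scripts that get killed.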

Number 4 - yeah ... :))

That is it !

But...

How to do that ?
Any ideas ?


I've found some interesting code in
Zend/zend_alloc.h and Zend/zend_alloc.c,
but setting CACHE_MEMORY_DISABLED to 1 does
not help.

Also, the functions shutdown_memory_manager()
and set_memory_manager() are not simply usable.
I suppose it is not Zend's memory cache that is
responsible for the described effect.

Filip Sielimowicz

