Edit report at http://bugs.php.net/bug.php?id=52569&edit=1

 ID:                 52569
 Comment by:         dennisml at conversis dot de
 Reported by:        mplomer at gmx dot de
 Summary:            Implement "ondemand" process-manager (to allow zero
                     children)
 Status:             Analyzed
 Type:               Feature/Change Request
 Package:            FPM related
 PHP Version:        5.3.3
 Assigned To:        fat
 Block user comment: N

 New Comment:

Is v5 of the patch known not to work with FPM in PHP 5.3.3? When
applying the patch I get the following segfault:



Program received signal SIGSEGV, Segmentation fault.

0x00000000005cf319 in fpm_env_conf_wp (wp=<value optimized out>)

    at /home/dennis/php-5.3.3/sapi/fpm/fpm/fpm_env.c:141

141                     if (*kv->value == '$') {


Previous Comments:
------------------------------------------------------------------------
[2010-09-05 20:42:56] f...@php.net

@dennisml at conversis dot de



It's complex to do and security risky. Don't want to mess with that.

------------------------------------------------------------------------
[2010-09-04 16:26:06] dennisml at conversis dot de

Since this patch makes the master process fork children dynamically on
demand, I'm wondering whether it would be feasible to add
setuid()/setgid() calls after the fork, so that the child process runs
under a different user ID?

What I'm thinking about is the mass-hosting case that was previously
discussed on the mailing list. Back then this would have been quite a
bit of work, but with this patch it should be much easier to
accomplish.

------------------------------------------------------------------------
[2010-08-30 10:21:37] mplomer at gmx dot de

Some test results of the "ondemand"-pm:



General

- Pool has to start with 0 children - OK

- Handling and checking of new config options - OK



Concurrent requests

- Children have to be forked immediately on new requests without delay -
OK

- Idle children have to be killed after pm.process_idle_timeout + 0-1s -
OK

- When there is more than one idle child, kill only one per second
PER POOL - OK



Reaching pm.max_children limit

- No further processes are created - OK

- Requests have to wait until one child becomes idle and are then
handled immediately without further delay - OK

- When limit is reached, issue a warning and increase status counter
(and do this only once) - OK:

  Aug 28 13:39:41.537174 [WARNING] pid 27540,
fpm_pctl_on_socket_accept(), line 507: [pool www] server reached
max_children setting (10), consider raising it

- Warning is re-issued after the child count decreases and the limit is
hit again - OK
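For reference, the options exercised above correspond to a pool configured roughly as follows (pm.max_children = 10 matches the warning quoted above; the idle timeout value is illustrative, and the option names are those introduced by the patch):

```ini
; hypothetical pool config for the ondemand PM under test
[www]
pm = ondemand
pm.max_children = 10          ; limit hit in the warning above
pm.process_idle_timeout = 10s ; illustrative; idle children are
                              ; killed after this timeout + 0-1s
```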



CPU burns

- When reaching the max_children limit, pause the libevent callback and
re-enable it in the maintenance routine, to avoid CPU burn - OK



- When children take too long to accept() the request, avoid CPU burn -
NOTOK

 -> happens sometimes (in practice only sometimes after forking) - to
reproduce, add a usleep(50000) to the child's code after fork(), or use
ApacheBench with ~200 concurrent requests :-)

 -> You get a lot of: "fpm_pctl_on_socket_accept(), line 502: [pool www]
fpm_pctl_on_socket_accept() called"

 -> It's not a big problem, because this doesn't take much time (in one
rare case it took ~90ms on my machine), but it's not nice, especially
when the server is flooded with requests

 -> one idea:

   - do not re-enable the event callback in fpm_pctl_on_socket_accept

   - send an event from the child to the parent process just after
accept()

   - re-enable the event callback in the parent process when it
receives this event from the child

   - in case of an error it is re-enabled in the maintenance routine
after at most 1s, which is IMHO not bad for throttling requests in case
of error



Stress tests

- Test-machine: Intel Core i7 930 (4 x 2.8 GHz) (VMware with 256 MB
RAM)



- Testing with 100 concurrent requests on the same pool to a sleep(10);
php script with 0 running processes and max_children = 200:

 - took about 4ms per fork on average

 - 25 processes are forked in one block (timeslice?); after this there
is a gap of 200-1000ms

  - took about 125ms to fork 25 children

  - took about 2.5s to fork all 100 children and accept the requests

- Testing with 200 concurrent requests

  - hits the RAM limit of the VM, so it's maybe not meaningful

  - took ~10.5s to fork all 200 children and accept the requests

- Testing with 10 concurrent requests on 20 pools (so in fact 200
concurrent requests)

  - took ~11.2s to fork all 200 children and accept the requests

  - all children are killed after pm.process_idle_timeout + 10s (1
process per second per pool is killed) - OK

------------------------------------------------------------------------
[2010-08-30 10:18:14] mplomer at gmx dot de

Patch version 5:

- Added missing fpm_globals.is_child check (proposed by jerome)

- Implemented "max children reached" status counter.

- Fixed missing last_idle_child = NULL; in
fpm_pctl_perform_idle_server_maintenance, which caused the routine to
shut down only one (or a few?) processes per second globally instead of
per pool when multiple pools are configured. I think this was not the
intention, and it's a bug.

------------------------------------------------------------------------
[2010-08-27 08:38:34] f...@php.net

Updates to come:



1- there is a bug: after fork, the child process seems to run code
reserved for the parent process:



Aug 27 08:32:30.646905 [WARNING] pid 4335, fpm_stdio_child_said(), line
143: [pool www_chroot] child 4450 said into stderr: "Aug 27
08:32:30.628866 [DEBUG] pid 4450, fpm_pctl_on_socket_accept(), line
529: [pool www_chroot] got accept without idle child available .... I
forked, now=22184178.981102"



2- the 1s max delay before resetting fpm_pctl_on_socket_accept() is in
theory enough. But I prefer to set a much smaller specific timer (~1ms)
just in case. Imagine there is a bug, children start to segfault and
are not restarted: there would be a delay of up to 1s before one is
forked again. I know it's the worst-case scenario.

------------------------------------------------------------------------


The remainder of the comments for this report are too long. To view
the rest of the comments, please view the bug report online at

    http://bugs.php.net/bug.php?id=52569

