Edit report at http://bugs.php.net/bug.php?id=52569&edit=1

 ID:                 52569
 Comment by:         f...@php.net
 Reported by:        mplomer at gmx dot de
 Summary:            Implement "ondemand" process-manager (to allow zero
                     children)
 Status:             Analyzed
 Type:               Feature/Change Request
 Package:            FPM related
 PHP Version:        5.3.3
 Assigned To:        fat
 Block user comment: N

 New Comment:

You should "make clean" before recompiling with v5 patch.



The v5 patch does not apply to 5.3.3; it applies to the svn PHP5_3_3
branch.



++ Jerome


Previous Comments:
------------------------------------------------------------------------
[2010-09-13 03:30:56] dennisml at conversis dot de

Is v5 of the patch known not to work with fpm in php 5.3.3? When
applying the patch I get the following segfault:



Program received signal SIGSEGV, Segmentation fault.

0x00000000005cf319 in fpm_env_conf_wp (wp=<value optimized out>)

    at /home/dennis/php-5.3.3/sapi/fpm/fpm/fpm_env.c:141

141                     if (*kv->value == '$') {

------------------------------------------------------------------------
[2010-09-05 20:42:56] f...@php.net

@dennisml at conversis dot de



It's complex to do and risky security-wise. I don't want to mess with
that.

------------------------------------------------------------------------
[2010-09-04 16:26:06] dennisml at conversis dot de

Since this patch causes the master process to dynamically fork children
on demand, I'm wondering whether it would be feasible to introduce the
possibility of calling setuid()/setgid() after the fork to run the
child process under different user IDs?

What I'm thinking about is the mass-hosting case that was previously
talked about on the mailing list. Back then this would have been quite
a bit of work, but with this patch it should be much easier to
accomplish.

------------------------------------------------------------------------
[2010-08-30 10:21:37] mplomer at gmx dot de

Some test results of the "ondemand"-pm:



General

- Pool has to start with 0 children - OK

- Handling and checking of new config options - OK



Concurrent requests

- Children have to be forked immediately on new requests, without delay
- OK

- Idle children have to be killed after pm.process_idle_timeout + 0-1s -
OK

- When there is more than one idle child, kill only one per second
PER POOL - OK



Reaching pm.max_children limit

- No more processes have to be created - OK

- Requests have to wait until one child becomes idle and then get
handled immediately without further delay - OK

- When limit is reached, issue a warning and increase status counter
(and do this only once) - OK:

  Aug 28 13:39:41.537174 [WARNING] pid 27540,
fpm_pctl_on_socket_accept(), line 507: [pool www] server reached
max_children setting (10), consider raising it

- Warning is re-issued after the child count decreases and hits the
limit again - OK



CPU burns

- When reaching the max_children limit, pause the libevent callback and
re-enable it in the maintenance routine, to avoid CPU burns - OK



- When a child takes too long to accept() the request, avoid CPU burn -
NOTOK

 -> happens sometimes (in practice only sometimes after forking) - to
reproduce, add a usleep(50000) in the child's code after fork(), or use
apachebench with ~200 concurrent requests :-)

 -> You get a lot of: "fpm_pctl_on_socket_accept(), line 502: [pool www]
fpm_pctl_on_socket_accept() called"

 -> It's not a big problem, because this doesn't take much time (in one
rare case it took ~90ms on my machine), but it's not nice, especially
when the server is flooded with requests

 -> one idea:

   - do not re-enable event-callback in fpm_pctl_on_socket_accept

   - send an event from children just after accept() to parent process

   - re-enable event-callback in parent process, when it receives this
event from children

   - in case of an error it is re-enabled in the maintenance routine
after max 1s, which is IMHO not bad to throttle requests in case of
error



Stress tests

- Test-machine: Intel Core i7 930 (4 x 2.8 GHz) (VMware with 256 MB
RAM)



- Testing with 100 concurrent requests on the same pool to a sleep(10);
php script with 0 running processes and max_children = 200:

 - took about 4ms per fork on average

 - 25 processes are forked in one block (timeslice?), after which there
is a gap of 200-1000ms

  - took about 125ms to fork 25 children

  - took about 2.5s to fork all 100 children and accept the requests

- Testing with 200 concurrent requests

  - hits RAM limit of VM, so it's maybe not meaningful

  - took ~10.5s to fork all 200 children and accept the requests

- Testing with 10 concurrent requests on 20 pools (so in fact 200
concurrent requests)

  - took ~11.2s to fork all 200 children and accept the requests

  - all children are killed after process_timeout + 10s (1 process per
second per pool is killed) - OK

------------------------------------------------------------------------
[2010-08-30 10:18:14] mplomer at gmx dot de

Patch version 5:

- Added missing fpm_globals.is_child check (proposed by jerome)

- Implemented "max children reached" status counter.

- Fixed missing last_idle_child = NULL; in
fpm_pctl_perform_idle_server_maintenance, which caused the routine to
shut down only one (or a few?) processes per second globally instead of
per pool when there are multiple pools. I think this was not the
intention, and it's a bug.

------------------------------------------------------------------------


The remainder of the comments for this report are too long. To view
the rest of the comments, please view the bug report online at

    http://bugs.php.net/bug.php?id=52569


-- 
Edit this bug report at http://bugs.php.net/bug.php?id=52569&edit=1
