Hello Luca,

On 05.02.2018 at 02:27, Luca Toscano wrote:
Hi Hajo,

2018-02-01 3:58 GMT+01:00 Luca Toscano <toscano.l...@gmail.com>:

    Hi Hajo,

    2018-01-31 2:37 GMT-08:00 Hajo Locke <hajo.lo...@gmx.de>:

        Hello,


        On 22.01.2018 at 11:54, Hajo Locke wrote:
        Hello,

        On 19.01.2018 at 15:48, Luca Toscano wrote:
        Hi Hajo,

        2018-01-19 13:23 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:

            Hello,

            thanks Daniel and Stefan. This is a good point.
            I did the test with a static file and this test was
            successfully done within only a few seconds.

            finished in 20.06s, 4984.80 req/s, 1.27GB/s
            requests: 100000 total, 100000 started, 100000 done,
            100000 succeeded, 0 failed, 0 errored, 0 timeout

            so the problem does not seem to be h2load or basic Apache;
            maybe I should look deeper into the proxy_fcgi configuration.
            The php-fpm configuration is unchanged and was used
            successfully with a classical fastcgi benchmark, so I think
            I have to double-check the proxy.

            now i did this change in proxy:

            from
            enablereuse=on
            to
            enablereuse=off

            this change leads to a working h2load test run:
            finished in 51.74s, 1932.87 req/s, 216.05MB/s
            requests: 100000 total, 100000 started, 100000 done,
            100000 succeeded, 0 failed, 0 errored, 0 timeout

            I am surprised by that; I expected higher performance when
            reusing backend connections rather than creating new ones.
            I did some further tests and changed some other
            php-fpm/proxy values, but once "enablereuse=on" is set,
            the problem returns.

            Should I just run the proxy with enablereuse=off? Or do
            you have another suspicion?



        Before giving up I'd check two things:

        1) That the same results happen with a regular localhost
        socket rather than a unix one.
        I changed my setup to use TCP sockets in php-fpm and
        proxy_fcgi. I currently see the same behaviour.
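For reference, the TCP-socket variant of such a setup might look like the following (a sketch only; the port, address, and pool name are assumptions, not taken from the actual configuration):

```apache
# php-fpm pool config would use a TCP listen address instead of
# a unix socket, e.g.:  listen = 127.0.0.1:9000

<Proxy "fcgi://127.0.0.1:9000/">
    ProxySet enablereuse=on timeout=3600
</Proxy>
<FilesMatch \.php$>
    SetHandler "proxy:fcgi://127.0.0.1:9000/"
</FilesMatch>
```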
        2) What changes on the php-fpm side. Are there more busy
        workers when enablereuse is set to on? I am wondering how
        php-fpm handles FCGI requests happening on the same socket,
        as opposed to assuming that 1 connection == 1 FCGI request.
        With "enablereuse=off" I see a lot of running php worker
        processes (120-130) and high load. That is the expected
        behaviour.
        With "enablereuse=on" I can see a big change: the number of
        running php workers is really low (~40). The test runs for
        some time and then it gets stuck.
        I can see that the php-fpm processes are still active and
        waiting for connections, but proxy_fcgi is neither using them
        nor establishing new connections. loadavg is low and the
        benchmark is not able to finish.
        I did some further tests to work around this issue. I set
        ttl=1 for this proxy and achieved good performance and a high
        number of working children. But this is paradoxical:
        proxy_fcgi knows about an inactive connection in order to kill
        it, but does not re-enable that connection for work.
        Maybe this is helpful to others.
        Maybe it is a kind of communication problem when checking the
        health/busy status of the php processes.
        The whole proxy configuration is this:

        <Proxy "unix:/dev/shm/php70fpm.sock|fcgi://php70fpm">
            ProxySet enablereuse=off flushpackets=On timeout=3600 max=15000
        </Proxy>
        <FilesMatch \.php$|\.php70$>
           SetHandler "proxy:fcgi://php70fpm"
        </FilesMatch>
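The ttl workaround mentioned above would amount to adding the parameter to the ProxySet line, e.g. (a sketch based on the quoted settings, with reuse switched back on):

```apache
<Proxy "unix:/dev/shm/php70fpm.sock|fcgi://php70fpm">
    # ttl=1: close pooled connections idle for more than 1 second,
    # freeing the PHP-FPM worker parked behind each of them
    ProxySet enablereuse=on flushpackets=On timeout=3600 ttl=1
</Proxy>
```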


    Thanks a lot for following up and reporting these interesting
    results! Yann opened a thread[1] on dev@ to discuss the issue,
    let's follow up in there so we don't keep two conversations open.

    Luca

    [1]: https://lists.apache.org/thread.html/a9586dab96979bf45550c9714b36c49aa73526183998c5354ca9f1c8@%3Cdev.httpd.apache.org%3E



reporting here what I think is happening in your test environment when enablereuse is set to on. A recap of your settings:

/etc/apache2/conf.d/limits.conf
StartServers          10
MaxClients          500
MinSpareThreads      450
MaxSpareThreads      500
ThreadsPerChild      150
MaxRequestsPerChild   0
Serverlimit 500

<Proxy "unix:/dev/shm/php70fpm.sock|fcgi://php70fpm/">
    ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
</Proxy>
<FilesMatch \.php$|\.php70$>
   SetHandler "proxy:fcgi://php70fpm/"
</FilesMatch>

request_terminate_timeout = 7200
listen = /dev/shm/php70fpm.sock
pm = ondemand
pm.max_children = 500
pm.max_requests = 2000

By default mod_proxy allows a connection pool of ThreadsPerChild connections to the backend for each httpd process; in your case you have raised this limit via the 'max' parameter (as stated in the docs it is a per-process setting, not an overall one). PHP-FPM handles one connection per worker at a time, and your settings allow a maximum of 500 worker processes, therefore a maximum of 500 connections established at the same time from httpd. When connection reuse is set to on, the side effect is that each open/established connection in mod_proxy's pool keeps one PHP-FPM worker tied to it, even when it is not serving any request (it is basically waiting for one). This can lead to a situation in which all PHP-FPM workers are "busy", not allowing mod_proxy to create more connections (even if it is set/allowed to do so), leading to a stall that ends only when a PHP-FPM worker is freed (for example when Timeout/ProxyTimeout or 'ttl' elapses).
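The arithmetic above can be checked with a quick back-of-the-envelope calculation (a simulation of the reasoning only; the numbers are taken from the settings quoted above):

```python
import math

# Settings quoted above (mpm worker / PHP-FPM)
max_clients = 500        # MaxClients: total httpd worker threads
threads_per_child = 150  # ThreadsPerChild
pm_max_children = 500    # PHP-FPM pm.max_children

# Rough number of httpd processes needed to provide MaxClients threads
httpd_processes = math.ceil(max_clients / threads_per_child)

# The default per-process pool size is ThreadsPerChild (the 'max'
# parameter can only raise it), so the total number of reusable backend
# connections httpd may hold open is at least:
pooled_connections = httpd_processes * threads_per_child

# With enablereuse=on, every pooled connection pins one PHP-FPM worker,
# idle or not.  If the pool can outgrow pm.max_children, all workers
# can end up pinned and new connection attempts stall.
print(httpd_processes, pooled_connections,
      pooled_connections >= pm_max_children)
```

With these numbers the pool can hold more connections than PHP-FPM has workers, which matches the stall described above.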

If you have time and patience, could you re-try your tests with different settings in light of what was said above and see if the weird slowdown/stall issue goes away?
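A concrete variant to try might be the following (illustrative values only, not tested; the idea is to keep the per-process pool small enough that pinned connections can never exhaust pm.max_children, and to let idle connections expire):

```apache
<Proxy "unix:/dev/shm/php70fpm.sock|fcgi://php70fpm/">
    # max well below pm.max_children; ttl frees idle pooled connections
    ProxySet enablereuse=on max=100 ttl=60 timeout=3600
</Proxy>
```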

I am planning to update the docs to reflect this use case!
Thanks for investigating this issue. I will need some time to test and evaluate this. Currently I have the flu... :(

Thanks,

Luca
Hajo


