Hello,

Thanks Daniel and Stefan, that is a good point.
I ran the test against a static file, and it completed quickly and without any problems:
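For reference, a call along these lines (the static file name is just a placeholder):

h2load -n100000 -c100 -m10 https://example.com/static.html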

finished in 20.06s, 4984.80 req/s, 1.27GB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout

So the problem does not seem to be h2load or basic Apache; maybe I should look deeper into the proxy_fcgi configuration. The PHP-FPM configuration is unchanged and worked fine in the classical FastCGI benchmark, so I think I have to double-check the proxy.

I then made this change in the proxy configuration:

from
enablereuse=on
to
enablereuse=off
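
i.e. the ProxySet line in the vhost now reads (assuming the other parameters from the config quoted below stay unchanged):

ProxySet enablereuse=off flushpackets=On timeout=3600 max=1500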

This change leads to a working h2load test run:
finished in 51.74s, 1932.87 req/s, 216.05MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout

I am surprised by that. I expected higher performance when reusing backend connections rather than creating new ones. I did some further tests and changed some other PHP-FPM/proxy values, but as soon as "enablereuse=on" is set, the problem returns.

Should I just run the proxy with enablereuse=off, or do you have another suspicion?

Thanks,
Hajo


On 19.01.2018 at 12:45, Daniel wrote:
What are the results exactly, and what are the results for a non-PHP
file such as a GIF or similar?

2018-01-19 12:38 GMT+01:00 Hajo Locke <hajo.lo...@gmx.de>:
Hello list,

I am doing some HTTP/2 benchmarks on my machine and have problems finishing
even a single test.

The system is Ubuntu 16.04, libnghttp2-14 1.7.1, Apache 2.4.29, mpm_event.

I start h2load with standard parameters:

h2load  -n100000 -c100 -m10 https://example.com/phpinfo.php
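(i.e. 100000 requests in total, 100 clients, and up to 10 concurrent streams per client)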

The first steps are really quick and I can see progress up to 50-70%, but after
that the requests from h2load to the server decrease dramatically.
It seems that h2load stops sending requests to the server, but I don't see any
reason for that on the server side. I can start a second h2load and it again
starts furiously while the first one is stuck with no progress, so I can't
believe there is a server problem.

All server limits are set really high to avoid any kind of bottleneck.

/etc/apache2/conf.d/limits.conf
StartServers          10
MaxClients          500
MinSpareThreads      450
MaxSpareThreads      500
ThreadsPerChild      150
MaxRequestsPerChild   0
Serverlimit 500

My test vhost just has some default values like ServerName, DocumentRoot etc.
Additionally, there is the proxy_fcgi config:
<Proxy "unix:/dev/shm/php70fpm.sock|fcgi://php70fpm/">
     ProxySet enablereuse=on flushpackets=On timeout=3600 max=1500
</Proxy>
<FilesMatch \.php$|\.php70$>
    SetHandler "proxy:fcgi://php70fpm/"
</FilesMatch>

The PHP-FPM config also has high limits to serve every incoming connection:
request_terminate_timeout = 7200
security.limit_extensions = no
listen = /dev/shm/php70fpm.sock
listen.owner = myuser
listen.group = mygroup
listen.mode = 0660
user = myuser
group = mygroup
pm = ondemand
pm.max_children = 500
pm.max_requests = 2000
catch_workers_output = yes

Currently I have no explanation for this: a really fast start and then a drop
to low activity, but I can't see that any limits are reached or that
processes stop responding.
Could there be a problem in h2load or a hidden problem in my
configuration? Is there another recommended way to do HTTP/2 speed benchmarking?

Before using proxy_fcgi I used the classical mod_fastcgi with
FastCgiExternalServer and did not have this kind of problem; with
mod_fastcgi the test could complete.
Currently I am stumped and need a hint, please.

Thanks,
Hajo

