Data flow is: Client ---HTTP/2---> HAProxy ---HTTP/1.1---> Nginx
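For context, that topology corresponds to something like the following minimal sketch (the bind port, certificate path, and Nginx address here are placeholders, not the exact values from the config quoted further down):

```
frontend fe_https
    mode http
    # Terminate TLS and offer HTTP/2 to clients via ALPN
    bind :443 ssl crt /etc/ssl/example.pem alpn h2,http/1.1
    default_backend be_nginx

backend be_nginx
    mode http
    # haproxy 1.8 speaks HTTP/1.1 to servers, so h2 requests are
    # translated before reaching Nginx
    server nginx1 192.0.2.10:80 check
```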
2018-01-31 11:51 GMT+08:00 Igor Cicimov <ig...@encompasscorporation.com>:
>
> On Wed, Jan 31, 2018 at 1:41 PM, 龙红波 <dragonorlo...@gmail.com> wrote:
>
>> *hi all,*
>> *recently we have been preparing to upgrade to haproxy 1.8; however, when
>> testing HTTP/2 we found a drop in performance. Below is the test scenario:*
>>
>> *haproxy version:*
>>
>> HA-Proxy version 1.8.3-205f675 2017/12/30
>> Copyright 2000-2017 Willy Tarreau <wi...@haproxy.org>
>>
>> Build options :
>>   TARGET  = linux2628
>>   CPU     = generic
>>   CC      = gcc
>>   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
>>             -fwrapv -Wno-unused-label
>>   OPTIONS = USE_OPENSSL=1
>>
>> Default settings :
>>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>>
>> Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
>> Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
>> OpenSSL library supports TLS extensions : yes
>> OpenSSL library supports SNI : yes
>> OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
>> Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
>> Encrypted password support via crypt(3): yes
>> Built with multi-threading support.
>> Built without PCRE or PCRE2 support (using libc's regex instead)
>> Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
>> Compression algorithms supported : identity("identity")
>> Built with network namespace support.
>>
>> *haproxy config:*
>>
>> global
>>     chroot /var/lib/haproxy
>>     stats socket /run/haproxy/admin.sock mode 660 level admin
>>     stats timeout 10s
>>     user haproxy
>>     group haproxy
>>     maxconn 81920
>>     daemon
>>     tune.ssl.default-dh-param 2048
>>     ssl-default-bind-options no-sslv3
>>     ssl-default-bind-ciphers HIGH:!aNULL:!MD5:!ADH:!RC4
>>     tune.ssl.lifetime 600s
>>     tune.ssl.maxrecord 1500
>>     tune.ssl.cachesize 20m
>>     nbproc 1
>>     tune.h2.max-concurrent-streams 500
>>
>> defaults
>>     maxconn 81920
>>     option clitcpka
>>     option srvtcpka
>>     option log-health-checks
>>     option splice-auto
>>     option http-keep-alive
>>     option redispatch
>>     no option http-buffer-request
>>     timeout http-keep-alive 90s
>>     backlog 8192
>>     timeout connect 4000
>>     timeout queue 90s
>>     timeout check 5s
>>     timeout client-fin 90s
>>     timeout server-fin 90s
>>     monitor-net 10.185.3.117/32
>>     errorfile 400 /etc/haproxy/errors/400.http
>>     errorfile 403 /etc/haproxy/errors/403.http
>>     errorfile 408 /etc/haproxy/errors/408.http
>>     errorfile 500 /etc/haproxy/errors/500.http
>>     errorfile 503 /etc/haproxy/errors/503.http
>>     errorfile 504 /etc/haproxy/errors/504.http
>>
>> backend 1999_8c78604d-287a-4f95-b216-40a568f06b77
>>     option tcp-check
>>     timeout check 2000
>>     timeout server 90s
>>     balance roundrobin
>>     mode http
>>     option httplog
>>     no option splice-auto
>>     server backserver-group-ins:10.172.114.50:000_8888 10.172.114.50:8888 check inter 5000 rise 2 fall 5 weight 100
>>     server backserver-group-ins:10.172.114.49:000_8888 10.172.114.49:8888 check inter 5000 rise 2 fall 5 weight 100
>>
>> frontend 1999_da24bbd3-00b5-45ef-8bf4-32d05d417818
>>     timeout client 90s
>>     mode http
>>     option dontlognull
>>     no option splice-auto
>>     bind :1999 mss 1360 ssl crt /etc/ssl/xip.io/xip.io.pem alpn h2 npn h2,http/1.1
>>
>>     acl host_acl_0 hdr_reg(host) -i ^.*$
>>     acl path_acl_0_0 path_reg -i /
>>     use_backend 1999_8c78604d-287a-4f95-b216-40a568f06b77 if host_acl_0 path_acl_0_0
>>
>> *We used h2load to test http/1.1 and http/2 respectively, for a total of
>> three sets of data. In every run haproxy reached 100% CPU.*
>>
>> *group 1:*
>>
>> h2load -n1000000 -c20 -m5 https://10.172.144.113:1999/128
>>
>> starting benchmark...
>> spawning thread #0: 20 total client(s). 1000000 total requests
>> TLS Protocol: TLSv1.2
>> Cipher: ECDHE-RSA-AES256-GCM-SHA384
>> Application protocol: h2
>> ......
>>
>> finished in 86.23s, 11596.77 req/s, 2.90MB/s
>> requests: 1000000 total, 1000000 started, 1000000 done, 1000000 succeeded, 0 failed, 0 errored, 0 timeout
>> status codes: 1000000 2xx, 0 3xx, 0 4xx, 0 5xx
>>
>> *group 2:*
>>
>> h2load -n1000000 -c20 -m1 https://10.172.144.113:1999/128 --h1
>>
>> starting benchmark...
>> spawning thread #0: 20 total client(s). 1000000 total requests
>> TLS Protocol: TLSv1.2
>> Cipher: ECDHE-RSA-AES256-GCM-SHA384
>> Application protocol: http/1.1
>> ......
>>
>> finished in 73.72s, 13564.36 req/s, 4.42MB/s
>> requests: 1000000 total, 1000000 started, 1000000 done, 1000000 succeeded, 0 failed, 0 errored, 0 timeout
>> status codes: 1000000 2xx, 0 3xx, 0 4xx, 0 5xx
>>
>> *group 3:*
>>
>> h2load -n1000000 -c100 -m1 https://10.172.144.113:1999/128 --h1
>>
>> starting benchmark...
>> spawning thread #0: 100 total client(s). 1000000 total requests
>> TLS Protocol: TLSv1.2
>> Cipher: ECDHE-RSA-AES256-GCM-SHA384
>> Application protocol: http/1.1
>> ......
>>
>> finished in 67.84s, 14739.69 req/s, 4.81MB/s
>> requests: 1000000 total, 1000000 started, 1000000 done, 1000000 succeeded, 0 failed, 0 errored, 0 timeout
>> status codes: 1000000 2xx, 0 3xx, 0 4xx, 0 5xx
>>
>> *Is this phenomenon normal, or am I using it the wrong way?*
>
>
> Are the backend servers http2 enabled too? If not it might be the http2
> -> http1.1 conversion? Not sure, I might be talking rubbish ...
>
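For what it's worth, the size of the drop can be quantified directly from the req/s figures h2load reported in the three runs above (a quick sketch; only the numbers are taken from the quoted output):

```python
# Requests/sec reported by h2load in the three runs above.
h2_rps = 11596.77        # group 1: h2, -c20 -m5
h1_rps_c20 = 13564.36    # group 2: http/1.1, -c20 -m1
h1_rps_c100 = 14739.69   # group 3: http/1.1, -c100 -m1

def drop_pct(h2: float, h1: float) -> float:
    """Relative throughput drop of h2 vs. http/1.1, in percent."""
    return (h1 - h2) / h1 * 100

print(f"h2 vs http/1.1 (c20):  {drop_pct(h2_rps, h1_rps_c20):.1f}% slower")
print(f"h2 vs http/1.1 (c100): {drop_pct(h2_rps, h1_rps_c100):.1f}% slower")
```

So the h2 run is roughly 14.5% slower than the comparable 20-client http/1.1 run, and about 21% slower than the 100-client one.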