Re: HAProxy 1.8.3 SSL caching regression
Hi William,

Verified.

Thanks for the quick fix,
Jeffrey J. Persch

On Wed, Jan 3, 2018 at 2:02 PM, Jeffrey J. Persch <jjper...@gmail.com> wrote:
> Hi William,
>
> The test case now works. I will do load testing with the patch today.
>
> Thanks,
> Jeffrey J. Persch
>
> On Wed, Jan 3, 2018 at 1:25 PM, William Lallemand <wlallem...@haproxy.com> wrote:
>
>> On Wed, Jan 03, 2018 at 06:41:01PM +0100, William Lallemand wrote:
>> > I'm able to reproduce the problem thanks to your detailed example, it looks
>> > like a regression in the code.
>> >
>> > I will check the code to see what's going on.
>>
>> I found the issue, would you mind trying the attached patch?
>>
>> Thanks.
>>
>> --
>> William Lallemand
Re: HAProxy 1.8.3 SSL caching regression
Hi William,

The test case now works. I will do load testing with the patch today.

Thanks,
Jeffrey J. Persch

On Wed, Jan 3, 2018 at 1:25 PM, William Lallemand <wlallem...@haproxy.com> wrote:
> On Wed, Jan 03, 2018 at 06:41:01PM +0100, William Lallemand wrote:
> > I'm able to reproduce the problem thanks to your detailed example, it looks
> > like a regression in the code.
> >
> > I will check the code to see what's going on.
>
> I found the issue, would you mind trying the attached patch?
>
> Thanks.
>
> --
> William Lallemand
HAProxy 1.8.3 SSL caching regression
Greetings,

We have been load testing 1.8.3 and noticed that SSL session caching was broken in 1.8 during the shctx refactoring. New SSL connections are cached until the cache reaches tune.ssl.cachesize, after which no connections are ever cached again. In HAProxy 1.7 and before, the SSL cache works correctly as an LRU cache.

Example configuration file, haproxy-ssl-cache.cfg, with cachesize set to 3 to easily reproduce:

    global
        ssl-default-bind-ciphers HIGH:!aNULL:!MD5
        ssl-default-bind-options no-sslv3 no-tls-tickets
        tune.ssl.default-dh-param 2048
        tune.ssl.cachesize 3
        tune.ssl.lifetime 60

    defaults
        stats enable
        stats uri /haproxy/stats

    frontend some-frontend
        bind :8443 ssl crt self-signed.pem
        mode http
        timeout client 15s
        timeout http-request 15s
        use_backend some-backend

    backend some-backend
        mode http
        timeout connect 1s
        timeout queue 0s
        timeout server 1s
        server some-server 127.0.0.1:8091 check

Example script to build and test on macosx:

    srcdir=haproxy-1.8

    # Install openssl library
    brew install openssl

    # Build HAProxy with OpenSSL support
    make -C $srcdir TARGET=osx USE_OPENSSL=1 \
        SSL_INC=/usr/local/opt/openssl/include \
        SSL_LIB=/usr/local/opt/openssl/lib USE_ZLIB=1

    # Generate self signed cert
    openssl req -newkey rsa:2048 -nodes -keyout self-signed.key \
        -x509 -days 365 -out self-signed.crt \
        -subj "/C=US/ST=Pennsylvania/L=Philadelphia/O=HAProxy/OU=QA/CN=localhost"
    cat self-signed.crt self-signed.key >>self-signed.pem

    # Run HAProxy
    $srcdir/haproxy -f haproxy-ssl-cache.cfg &

    # Demonstrate failure to cache new sessions after the cache fills
    openssl s_client -connect localhost:8443 -reconnect -no_ticket </dev/null 2>verify.err | egrep 'New|Reused'
    # PASS: 1 New, 5 Reused
    openssl s_client -connect localhost:8443 -reconnect -no_ticket </dev/null 2>verify.err | egrep 'New|Reused'
    # PASS: 1 New, 5 Reused
    openssl s_client -connect localhost:8443 -reconnect -no_ticket </dev/null 2>verify.err | egrep 'New|Reused'
    # PASS: 1 New, 5 Reused
    openssl s_client -connect localhost:8443 -reconnect -no_ticket </dev/null 2>verify.err | egrep 'New|Reused'
    # FAIL: 6 New

    # Demonstrate failure to evict old entries from the cache
    sleep 65
    openssl s_client -connect localhost:8443 -reconnect -no_ticket </dev/null 2>verify.err | egrep 'New|Reused'
    # FAIL: 6 New

This appears to be independent of target and OpenSSL version; we have reproduced it on linux2628 with openssl 1.0.1k-fips and on osx with openssl 1.0.2n.

Any insights appreciated.

Thanks,
Jeffrey J. Persch
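To make the expected behavior concrete: a session cache at capacity should evict its oldest entry so a new session can always be stored, rather than rejecting all inserts. Here is a minimal stand-alone sketch of that invariant (a simplified fixed-size cache with oldest-first eviction, not HAProxy's actual shctx code; names and sizes are illustrative only):

```c
#include <assert.h>

/* Simplified model of a bounded session cache. When full, the oldest
 * slot is overwritten so inserts always succeed -- the 1.7 behavior.
 * The 1.8 regression described above is the opposite: once the cache
 * fills, no new session is ever stored. */
#define CACHE_SIZE 3

struct cache {
    int keys[CACHE_SIZE]; /* 0 means the slot is empty */
    int next;             /* cursor pointing at the oldest slot */
};

static void cache_store(struct cache *c, int key)
{
    c->keys[c->next] = key;              /* evict/overwrite oldest entry */
    c->next = (c->next + 1) % CACHE_SIZE;
}

static int cache_has(const struct cache *c, int key)
{
    for (int i = 0; i < CACHE_SIZE; i++)
        if (c->keys[i] == key)
            return 1;
    return 0;
}
```

Storing a fourth key into a three-slot cache must evict the first key, not fail; that is exactly what the `FAIL: 6 New` runs above show is no longer happening.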
Re: Question regarding url_param hashing
Greetings,

I have reproduced this same issue on haproxy-1.4.6.

Configure a backend with:

    balance url_param x
    hash-type consistent
    server s1 host1:80
    server s2 host2:80
    server s3 host3:80
    server s4 host4:80

HAProxy will only dispatch requests to two of the servers. The problem is seen in chash_init_server_tree:

    /* queue active and backup servers in two distinct groups */
    for (srv = p->srv; srv; srv = srv->next) {
        srv->lb_tree = (srv->state & SRV_BACKUP) ? &p->lbprm.chash.bck : &p->lbprm.chash.act;
        srv->lb_nodes_tot = srv->uweight * BE_WEIGHT_SCALE;
        srv->lb_nodes_now = 0;
        srv->lb_nodes = (struct tree_occ *)calloc(srv->lb_nodes_tot, sizeof(struct tree_occ));
        for (node = 0; node < srv->lb_nodes_tot; node++) {
            srv->lb_nodes[node].server = srv;
            srv->lb_nodes[node].node.key = chash_hash(srv->puid * SRV_EWGHT_RANGE + node);
        }
        if (srv_is_usable(srv->state, srv->eweight))
            chash_queue_dequeue_srv(srv);
    }

The problem is that when this code runs, all of the servers have the same srv->puid == 0. With equal weights, each server generates the exact same set of node keys. Regardless of the key you look up using eb32_lookup_ge, you get a node for the last server. The chash_get_server_hash function is reduced to choosing between the closer of the last or first server.

Any insights as to why srv->puid is zero and not unique per server?

Jeff Persch
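The collision can be demonstrated in isolation. Each server's ring positions are derived solely from `srv->puid`, so when every puid is 0 the key sequences are identical and later servers' nodes overwrite earlier ones' positions in the tree. A minimal sketch (the mixing hash and the SRV_EWGHT_RANGE value below are stand-ins, not HAProxy's actual `chash_hash` or constant; any deterministic hash exhibits the same collapse):

```c
#include <assert.h>

#define SRV_EWGHT_RANGE 4096u /* placeholder; the real constant differs */

/* Stand-in integer mixing hash (not HAProxy's chash_hash). */
static unsigned chash_hash(unsigned a)
{
    a = (a ^ 61u) ^ (a >> 16);
    a = a + (a << 3);
    a = a ^ (a >> 4);
    a = a * 0x27d4eb2du;
    return a ^ (a >> 15);
}

/* Key for node `node` of a server with id `puid`, mirroring the
 * expression in chash_init_server_tree above. */
static unsigned node_key(unsigned puid, unsigned node)
{
    return chash_hash(puid * SRV_EWGHT_RANGE + node);
}
```

With every puid equal to 0, `node_key(0, n)` is the same for all servers, so all four servers hash onto the same ring positions; with unique puids the key ranges are disjoint and the ring spreads out.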
Re: Question regarding url_param hashing
On May 25, 2010, at 3:03 PM, Jeffrey J. Persch wrote:
> Any insights as to why srv->puid is zero and not unique per server?

I'll answer my own question.

The consistent hash server tree is initialized in check_config_validity, line 5075. Generated puids are not assigned until check_config_validity, line 5166.

Jeff Persch
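The ordering bug in miniature: if the tree keys are computed before the unique ids are assigned, every server sees puid == 0 and the keys collide; run the two steps in the opposite order and the keys spread out. A sketch under assumed names (both functions below are hypothetical simplifications of the two check_config_validity steps):

```c
#include <assert.h>

struct srv_model { unsigned puid; unsigned key; };

/* Hypothetical stand-in for the puid-generation step (line 5166). */
static void assign_puids(struct srv_model *s, int n)
{
    for (int i = 0; i < n; i++)
        s[i].puid = (unsigned)(i + 1); /* unique, non-zero ids */
}

/* Hypothetical stand-in for the chash tree build (line 5075):
 * the key depends only on puid, as in chash_init_server_tree. */
static void build_tree_keys(struct srv_model *s, int n)
{
    for (int i = 0; i < n; i++)
        s[i].key = s[i].puid * 4096u; /* simplified key derivation */
}
```

Building the tree before assigning puids reproduces the collision; reversing the order yields distinct keys per server, which is the fix implied above.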