Re: [ANNOUNCE] haproxy-1.8.13
Hi Vincent,

On Mon, Jul 30, 2018 at 11:16:39PM +0200, Vincent Bernat wrote:
> ❦ 30 July 2018 20:55 +0200, Willy Tarreau:
>
> > What I don't like with PGP on an exposed machine is that it reduces
> > the size of your 4096-bit key to the size of your passphrase (which
> > most often contains much less than the ~700 characters it would need
> > to be as large), and also increases your ability to get fooled into
> > entering it. Some would call me paranoid, but I don't think I am, I'm
> > just trying to keep a balanced level of security, knowing that the
> > global one is not better than the weakest point.
>
> Attacks on asymmetric ciphers do not rely on brute force: you don't
> have to explore the whole keyspace to guess the private key. You can
> use algorithms like the general number field sieve. A 4096-bit RSA
> keypair would be roughly equivalent to a symmetric algorithm using a
> 160-bit key (unless we find better algorithms to break RSA).

I thought RSA-4096 was equivalent to more than this, I'm disappointed :-)

> A 32-character passphrase would be enough to protect the private key.
> Moreover, if you use a weaker passphrase, you have not lost yet, as
> the string-to-key function used to turn the passphrase into an AES key
> is slow. I don't know where the limit is, but the idea is that with a
> shorter passphrase, the attacker may still have a better time finding
> the AES key instead of the passphrase.

I see, the same principle as system passwords using many rounds to slow down brute-force attacks. With this said, when you see the amount of power that some ASICs, FPGAs and GPUs have developed over the years due to mining activity, often counting in gigahashes/s, I suspect you'll need many rounds to be safe :-/

> But if someone can steal your encrypted key from your machine, they
> may also be able to steal the unencrypted one through various means.
> So, you may still be right about being paranoid. :)

Yes, that's still the point.
After all, when you have access to a user-owned file, you also have access to this user's processes. It's not very complicated to run:

    while ! strace -o foo.log -p $(pgrep gpg); do sleep 0.1; done

It remains very discreet and will easily reveal the passphrase.

Thanks for the detailed explanation!

Willy
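To put rough numbers on the passphrase-versus-key-size point discussed above, here is a small back-of-the-envelope sketch. The 160-bit figure is Vincent's estimate of RSA-4096's effective symmetric strength; the character-set sizes (64 usable symbols, 95 printable ASCII characters) are illustrative assumptions, not from the thread:

```python
import math

def chars_needed(bits, charset_size):
    """Characters required for `bits` of entropy from a random passphrase
    drawn uniformly from a charset of `charset_size` symbols."""
    return math.ceil(bits / math.log2(charset_size))

# ~64 usable symbols give 6 bits/char: matching the raw 4096-bit keyspace
# would indeed need close to 700 characters, as Willy estimates...
print(chars_needed(4096, 64))   # -> 683

# ...but matching the ~160-bit effective strength of RSA-4096 needs far less,
# which is why a ~32-character passphrase is considered sufficient:
print(chars_needed(160, 64))    # -> 27
print(chars_needed(160, 95))    # -> 25 (all printable ASCII)
```

And as noted in the thread, a slow string-to-key function buys additional margin on top of these raw entropy counts.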
Re: [ANNOUNCE] haproxy-1.8.13
❦ 30 July 2018 20:55 +0200, Willy Tarreau:

> What I don't like with PGP on an exposed machine is that it reduces
> the size of your 4096-bit key to the size of your passphrase (which
> most often contains much less than the ~700 characters it would need
> to be as large), and also increases your ability to get fooled into
> entering it. Some would call me paranoid, but I don't think I am, I'm
> just trying to keep a balanced level of security, knowing that the
> global one is not better than the weakest point.

Attacks on asymmetric ciphers do not rely on brute force: you don't have to explore the whole keyspace to guess the private key. You can use algorithms like the general number field sieve. A 4096-bit RSA keypair would be roughly equivalent to a symmetric algorithm using a 160-bit key (unless we find better algorithms to break RSA). A 32-character passphrase would be enough to protect the private key. Moreover, if you use a weaker passphrase, you have not lost yet, as the string-to-key function used to turn the passphrase into an AES key is slow. I don't know where the limit is, but the idea is that with a shorter passphrase, the attacker may still have a better time finding the AES key instead of the passphrase.

But if someone can steal your encrypted key from your machine, they may also be able to steal the unencrypted one through various means. So, you may still be right about being paranoid. :)

-- 
The man who sets out to carry a cat by its tail learns something that
will always be useful and which never will grow dim or doubtful.
 -- Mark Twain
Re: [ANNOUNCE] haproxy-1.8.13
On Mon, Jul 30, 2018 at 07:41:33PM +0200, Tim Düsterhus wrote:
> Willy,
>
> On 30.07.2018 at 18:05, Willy Tarreau wrote:
> > A small update happened to the download directory, the sha256 of the
> > tar.gz files are now present in addition to the (quite old) md5 ones.
> > We may start to think about phasing md5 signatures out, for example
> > after 1.9 is released.
>
> I'd even like to see PGP signatures, like you already do for the git
> tags (but not the tarballs). But this is a greater change than just
> updating the checksums :-)

I know and I've already thought about it. But I personally refuse to store my PGP key on any exposed machine. Right now in order to tag, I have to SSH into an isolated machine, run "git pull --tags", create-release, and "git push --tags". Then I upload the release.

What I don't like with PGP on an exposed machine is that it reduces the size of your 4096-bit key to the size of your passphrase (which most often contains much less than the ~700 characters it would need to be as large), and also increases your ability to get fooled into entering it. Some would call me paranoid, but I don't think I am, I'm just trying to keep a balanced level of security, knowing that the global one is not better than the weakest point.

If I wanted to sign the images, it would require finding a different release method and would significantly complicate the procedure.

Willy
Re: [ANNOUNCE] haproxy-1.8.13
Willy,

On 30.07.2018 at 18:05, Willy Tarreau wrote:
> A small update happened to the download directory, the sha256 of the
> tar.gz files are now present in addition to the (quite old) md5 ones.
> We may start to think about phasing md5 signatures out, for example
> after 1.9 is released.

I'd even like to see PGP signatures, like you already do for the git tags (but not the tarballs). But this is a greater change than just updating the checksums :-)

Best regards
Tim Düsterhus
Re: [ANNOUNCE] haproxy-1.8.13
On 30/07/2018 18:05, Willy Tarreau wrote:
> Hi,
>
> HAProxy 1.8.13 was released on 2018/07/30. It added 28 new commits
> after version 1.8.12. [...]

As always the new version is also on the docker hub:

https://hub.docker.com/r/me2digital/haproxy18/

Regards
Aleks
Re: Help with backend server sni setup
Hi.

On 30/07/2018 16:39, Lukas Tribus wrote:
> On Mon, 30 Jul 2018 at 13:30, Aleksandar Lazic wrote:
> > Hi.
> >
> > I have the following Setup.
> >
> > APP -> Internal Haproxy -(HTTPS)-> external HAProxy -> APP
> >
> > The external HAProxy is configured with multiple TLS Vhost.
>
> Never use SNI for Vhosting. It should work with the host header only.
> SNI should only be used for certificate selection, otherwise
> overlapping certificates will cause wrong forwarding decisions.

The openshift router, based on haproxy 1.8, looks for the sni hostname for routing.

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L198-L209

Due to this fact we *must* set the ssl hostname.

> > I assume that when I add `sni appinternal.domain.com` to the server
> > line, the hostname field in the TLS session will be set to this value.
>
> No, the sni keyword expects a fetch expression. Set it to the host
> header for example:
>
>     sni req.hdr(host)
>
> Or to a static string:
>
>     sni str(www.example.com)

When I take a look into the code I see this line:

http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/backend.c;hb=ada31afbc1e9095d494973cad91a4e507c4c1d9b#l1255

    ssl_sock_set_servername(srv_conn, smp->data.u.str.str);

and the implementation of this function is here:

http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/ssl_sock.c;hb=ada31afbc1e9095d494973cad91a4e507c4c1d9b#l5922

The block begins here:

http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/backend.c;hb=ada31afbc1e9095d494973cad91a4e507c4c1d9b#l1236

As far as I understand this block (and I'm not sure I have understood it right), the fetch sample evaluates the expression, as you have written, AND sets the hostname into the SNI TLS extension. Now after I looked into the code and read the doc again it's clear to me: this option sets, to quote the doc, "the host name sent in the SNI TLS extension to the server". Please excuse the rush on my part.

> cheers, lukas

Best greetings
Aleks
[ANNOUNCE] haproxy-1.8.13
Hi,

HAProxy 1.8.13 was released on 2018/07/30. It added 28 new commits after version 1.8.12.

Nothing critical this time, however we finally got rid of the annoying CLOSE_WAIT on H2 thanks to the continued help from Milan Petruzelka, Janusz Dziemidowicz and Olivier Doucet. Just for this it was worth emitting a release.

During all these tests we also met a case where sending a POST to the stats applet over a slow link using H2 could sometimes result in haproxy busy-waiting for data, causing 100% CPU being seen. It was fixed, along with another bug affecting applets like stats, possibly causing occasional CPU spikes.

While developing on 1.9 we found a few interesting corner cases with threads, one of which causes performance to significantly drop when reaching a server maxconn *if* there are more threads than available CPUs. It turned out to be caused by the synchronization point not leaving enough CPU to sleeping threads to be scheduled and join. You should never ever run more threads than CPUs, but config errors definitely happen and we'd rather limit their impact.

Speaking about config errors, another case existed where a "process" directive on a "bind" line could reference non-existing threads. If only non-existing threads were referenced, it didn't trigger an error and would silently start, but with nobody to accept the traffic. It easily happens when reducing the number of threads in a config. This was addressed similarly to the process case: the threads are automatically remapped and a warning is emitted.

An issue was addressed with the proxy protocol header sent to servers. If a "http-request set-src" directive is used, it is possible to end up with a mix of IPv4 and IPv6, which cannot be transported by the protocol (since it makes no sense from a network perspective). Till now a server would only receive "PROXY UNKNOWN" and would not even be able to get the client's address.
Tim Duesterhus addressed this by converting the IPv4 address to IPv6 if exactly one of the addresses is IPv6. It is the only way not to lose information.

Christopher addressed a rare issue which could trigger during soft reloads with threads enabled: if a thread quits at the exact moment a thread sync is requested, the remaining threads could wait for it forever.

Vincent Bernat updated the systemd unit file so that when quitting, if the master reports 143 (SIGTERM+128) as the exit status, due to the fact that it reports the last killed worker's status, systemd doesn't consider this a failure.

The remaining changes are pretty minor. Some H2 debugging code developed to fix the CLOSE_WAIT issues was backported in order to simplify the retrieval of internal states when such issues happen. A small update happened to the download directory: the sha256 of the tar.gz files are now present in addition to the (quite old) md5 ones. We may start to think about phasing md5 signatures out, for example after 1.9 is released.

As usual, it's worth updating if you're on 1.8, especially if you're using H2 and/or threads. If you think you've found a bug that is not addressed in the changelog below, please update and try again before reporting it. There are so many possible side effects from H2 issues and thread issues that it is possible that your issue is a different manifestation of one of these.
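As an aside on the exit status 143 mentioned above: shells and service managers conventionally report a process killed by a signal as 128 plus the signal number, which a quick sketch confirms (assuming a POSIX platform where SIGTERM is 15):

```python
import signal

def exit_status_for(sig):
    """Exit status conventionally reported for a process killed by `sig`:
    128 + signal number, as seen by shells and by systemd."""
    return 128 + int(sig)

# A worker killed by SIGTERM during a clean shutdown thus reports 143,
# which is why the unit file must treat 143 as a successful exit.
print(exit_status_for(signal.SIGTERM))  # -> 143 on Linux
```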
Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Sources          : http://www.haproxy.org/download/1.8/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.8.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.8.git
   Changelog        : http://www.haproxy.org/download/1.8/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy

---
Complete changelog :

Christopher Faulet (4):
      BUG/MINOR: http: Set brackets for the unlikely macro at the right place
      MINOR: debug: Add check for CO_FL_WILL_UPDATE
      MINOR: debug: Add checks for conn_stream flags
      BUG/MEDIUM: threads: Fix the exit condition of the thread barrier

Olivier Houchard (2):
      BUG/MINOR: servers: Don't make "server" in a frontend fatal.
      BUG/MINOR: threads: Handle nbthread == MAX_THREADS.

Tim Duesterhus (2):
      BUILD: Generate sha256 checksums in publish-release
      MEDIUM: proxy_protocol: Convert IPs to v6 when protocols are mixed

Vincent Bernat (1):
      MINOR: systemd: consider exit status 143 as successful

Willy Tarreau (19):
      BUG/MINOR: ssl: properly ref-count the tls_keys entries
      MINOR: mux: add a "show_fd" function to dump debugging information for "show fd"
      MINOR: h2: implement a basic "show_fd" function
      BUG/MINOR: h2: remove accidental debug code introduced with show_fd function
      MINOR: h2: keep a count of the number of conn_streams attached to the mux
      MINOR: h2: add the mux and demux buffer lengths on "show fd"
      BUG/MEDIUM: h2: don't accept new
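For readers who want to check the newly published sha256 files against their downloads, here is a minimal verification sketch equivalent to `sha256sum -c`; the tarball and digest values you would feed it come from the download directory above:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Stream a file through SHA-256 in chunks, like `sha256sum` does,
    so large tarballs are not read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    """Return True if the file's digest matches the published one."""
    return sha256_of(path) == expected_hex.strip().lower()
```

Usage would be `verify("haproxy-1.8.13.tar.gz", published_digest)`, where the digest is copied from the corresponding `.sha256` file.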
Re: Help with backend server sni setup
On Mon, 30 Jul 2018 at 13:30, Aleksandar Lazic wrote:
>
> Hi.
>
> I have the following Setup.
>
> APP -> Internal Haproxy -(HTTPS)-> external HAProxy -> APP
>
> The external HAProxy is configured with multiple TLS Vhost.

Never use SNI for Vhosting. It should work with the host header only. SNI should only be used for certificate selection, otherwise overlapping certificates will cause wrong forwarding decisions.

> I assume that when I add `server sni appinternal.domain.com` to the
> server line will be set the hostname field in the TLS session to this
> value.

No, the sni keyword expects a fetch expression. Set it to the host header for example:

    sni req.hdr(host)

Or to a static string:

    sni str(www.example.com)

cheers,
lukas
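To make the answer concrete, a minimal sketch of a server line using the sni keyword might look like this; the backend name, address, and ca-file path are placeholder assumptions, not taken from the thread:

```text
backend be_external
    # "sni" takes a fetch expression evaluated per connection, not a
    # literal string: forward the client's Host header as the SNI name.
    server app1 192.0.2.10:443 ssl verify required ca-file /etc/ssl/ca.pem sni req.hdr(host)

    # Or pin a fixed name using the str() sample fetch:
    # server app1 192.0.2.10:443 ssl verify required ca-file /etc/ssl/ca.pem sni str(appinternal.domain.com)
```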
Understanding certain balance configuration
Hi,

I'm trying to understand how "balance url_param" with "hash-type consistent" should work. Haproxy 1.7.11.

Let's say we have a config of two haproxy instances that balance content between local and remote (sibling).

server0 (10.0.0.1) would have a config section like this:

    backend load_balancer
        balance url_param file_id
        hash-type consistent
        server local_backend /path/to/socket id 1
        server remote_backend 10.0.0.2:80 id 2

    backend local_backend
        balance url_param file_id
        hash-type consistent
        server server0 127.0.0.1:100
        server server1 127.0.0.1:200

server1 (10.0.0.2) would have a config section like this:

    backend load_balancer
        balance url_param file_id
        hash-type consistent
        server local_backend /path/to/socket id 2
        server remote_backend 10.0.0.1:80 id 1

    backend local_backend
        balance url_param file_id
        hash-type consistent
        server server0 127.0.0.1:100
        server server1 127.0.0.1:200

Assuming that all requests indeed have the URL parameter "file_id", should requests on both servers only reach a single "local_backend" server, since they are already balanced and are no longer divided in "local_backend" because of the identical configuration on both "load_balancer" and "local_backend"?

thanks in advance,
Veiko
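Not an authoritative answer to the question above, but a toy model of consistent hashing (deliberately simplified; haproxy's actual algorithm differs in its hash function and ring construction) can illustrate the two properties at play: the two load_balancer instances agree with each other because their rings contain the same nodes with the same ids, while local_backend is an independent ring that re-spreads whatever subset of keys it receives:

```python
import hashlib
from bisect import bisect

class Ring:
    """Toy consistent-hash ring: each node gets `vnodes` points on the
    ring; a key maps to the first node point at or after its own hash."""
    def __init__(self, nodes, vnodes=100):
        self.points = sorted(
            (int(hashlib.md5(f"{n}-{v}".encode()).hexdigest(), 16), n)
            for n in nodes for v in range(vnodes)
        )
    def pick(self, key):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        i = bisect(self.points, (h,)) % len(self.points)
        return self.points[i][1]

# Level 1: both haproxy instances build the same ring (same names/ids),
# so they make identical local-vs-remote decisions for any file_id.
lb_a = Ring(["local:1", "remote:2"])
lb_b = Ring(["local:1", "remote:2"])
keys = [f"file{i}" for i in range(1000)]
assert all(lb_a.pick(k) == lb_b.pick(k) for k in keys)

# Level 2: local_backend is an independent ring over different node
# names, so the keys level 1 sent to "local" still spread across both
# of its servers rather than collapsing onto a single one.
local = Ring(["server0", "server1"])
hit = {local.pick(k) for k in keys if lb_a.pick(k) == "local:1"}
print(hit)  # typically both servers appear
```

The sketch suggests the second-level backend does not degenerate to one server, because the two hash rings place their nodes independently even though the key is the same.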
Re: force-persist and use_server combined
On 07/25/2018 03:05 PM, Veiko Kukk wrote:
> The idea here is that the HAproxy statistics page, some other backend
> statistics and also some remote health checks running against a path
> under /dl/ would always reach only local_http_frontend, never go
> anywhere else even when local really is down, not just marked as down.
>
> This config does not work, it forwards the /haproxy?stats request to
> remote_http_frontend when local_http_frontend is really down. Is it
> expected? Any ways to overcome this limitation?

I wonder if my question was too stupid or was just left unnoticed by someone who knows how force-persist is supposed to work. Meanwhile I've created a workaround by adding additional config sections and using a use_backend ACL instead of a use_server ACL to achieve what was needed.

regards,
Veiko
Help with backend server sni setup
Hi.

I have the following Setup.

APP -> Internal Haproxy -(HTTPS)-> external HAProxy -> APP

The external HAProxy is configured with multiple TLS Vhost.

I assume that when I add `sni appinternal.domain.com` to the server line, the hostname field in the TLS session will be set to this value. I'm not sure from reading the doc whether this could work.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-sni

Could this work?

Best regards
Aleks
Re: [PATCH] MEDIUM: proxy_protocol: Convert IPs to v6 when protocols are mixed
Hi Tim,

On Fri, Jul 27, 2018 at 06:46:13PM +0200, Tim Duesterhus wrote:
> Willy,
>
> attached is an updated patch that:
>
> 1. Only converts the addresses to IPv6 if at least one of them is IPv6.
>    But it does not convert them to IPv4 if both of them can be converted
>    to IPv4.
> 2. Does not copy the whole `struct connection`, but performs the
>    conversion inside `make_proxy_line_v?`.
>
> I'm not sure whether I like this better than my first attempt at it.
> Proxy protocol v2 was rather easy to modify, but proxy protocol v1
> required a complete restructuring to not create a new case for each of
> the 4 address combinations (44, 46, 64, 66).

In my opinion the code resulting from this approach is cleaner and safer than the code it replaces. I've looked carefully at it (v1 and v2) and am fine with it. I personally prefer to use the unhandled case in the "else" part to avoid maintaining two sets of conditions, but here it's fine because these conditions are easy enough to enumerate. Thus I'm merging it and backporting it to 1.8 since it also makes sense to address this issue there.

Thanks!

Willy
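To illustrate the idea behind the patch (a sketch, not haproxy's actual C code): for proxy protocol v1, when the source and destination address families differ, the IPv4 side is lifted to its IPv4-mapped IPv6 form so a single TCP6 line can carry both, instead of falling back to "PROXY UNKNOWN":

```python
import ipaddress

def to_v6(ip):
    """Map an IPv4 address object to IPv4-mapped IPv6; pass IPv6 through."""
    if ip.version == 4:
        return ipaddress.IPv6Address(f"::ffff:{ip}")
    return ip

def proxy_v1_line(src, dst, sport, dport):
    """Build a proxy protocol v1 header line for a TCP connection."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if s.version != d.version:
        # Mixed families (the case the patch handles): lift the v4 side
        # to v6 so the pair fits one address family.
        s, d = to_v6(s), to_v6(d)
    fam = "TCP6" if s.version == 6 else "TCP4"
    return f"PROXY {fam} {s} {d} {sport} {dport}\r\n"
```

With this, a mixed pair such as a "http-request set-src" IPv4 client address and an IPv6 server-facing address produces a TCP6 line, so the server still learns the client's address rather than receiving "PROXY UNKNOWN".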