[PATCH 1/2] DOC: agent-check: fix typo in "fail" word expected reply

2020-09-26 Thread William Dauchy
`tcpcheck_agent_expect_reply` expects "fail" not "failed"

This should fix github issue #876

This can be backported to all maintained versions (i.e. >= 1.6) as of
today.
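
For context, a minimal agent-check setup looks roughly like this (a sketch;
the address, port and timings are illustrative), where the external agent
replies over TCP with one of the documented words such as "up", "down",
"fail" or "stopped":

```
backend be_app
    # the agent listening on port 9999 is expected to reply with a word
    # such as "up", "down", "fail" or "stopped", optionally followed by
    # a '#' and a description string
    server srv1 192.0.2.10:8080 check agent-check agent-port 9999 agent-inter 5s
```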

Signed-off-by: William Dauchy 
---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 309d076aa..97ff2e499 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13073,7 +13073,7 @@ agent-check
 MAINT mode, thus it will not accept any new connections at all, and health
 checks will be stopped.
 
-  - The words "down", "failed", or "stopped", optionally followed by a
+  - The words "down", "fail", or "stopped", optionally followed by a
 description string after a sharp ('#'). All of these mark the server's
 operating state as DOWN, but since the word itself is reported on the stats
 page, the difference allows an administrator to know if the situation was
-- 
2.28.0




[PATCH 2/2] DOC: crt: advise to move away from cert bundle

2020-09-26 Thread William Dauchy
especially when starting to use the `new ssl cert` runtime API, it might
become confusing for users to mix bundles and single certificates,
especially when it comes to using the commit command:
e.g.:
- start the process with `crt` loading a bundle
- use `set ssl cert my_cert.pem.ecdsa`: API detects it as a replacement
  of a bundle.
- `commit` has to be done on the bundle: `commit ssl cert my_cert.pem`

however:
- add a new cert: `new ssl cert my_cert.pem.rsa`: added as a single
  certificate
- `commit` has to be done on the certificate: `commit ssl cert
  my_cert.pem.rsa`

this should resolve github issue #872

this should probably be backported to >= v2.2 in order to encourage
people to move away from bundle certificate loading.
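
As a migration sketch (file names and paths below are illustrative), loading
each certificate as a single entry instead of relying on bundle detection
would look like:

```
global
    # stop loading .rsa/.ecdsa/.dsa extra files as a bundle; every file
    # passed to "crt" is then handled as a single certificate, which is
    # what the runtime API ("new/set/commit ssl cert") expects
    ssl-load-extra-file none

frontend fe_https
    bind :443 ssl crt /etc/haproxy/tls/my_cert.pem.rsa crt /etc/haproxy/tls/my_cert.pem.ecdsa
```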

Signed-off-by: William Dauchy 
---
 doc/configuration.txt | 7 ++-
 doc/management.txt| 4 
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 97ff2e499..87f35e984 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -12560,10 +12560,15 @@ crt 
   connecting with "ecdsa.example.com" will only be able to use ECDSA cipher
   suites. With BoringSSL and Openssl >= 1.1.1 multi-cert is natively supported,
   no need to bundle certificates. ECDSA certificate will be preferred if client
-  support it.
+  supports it.
 
   If a directory name is given as the  argument, haproxy will
   automatically search and load bundled files in that directory.
+  It is however recommended to move away from bundle loading, especially if
+  you want to use the runtime API to load new certificates, which does not
+  support bundles. A recommended way to migrate is to set the
+  `ssl-load-extra-file` parameter to `none` in the global config so that each
+  certificate is loaded as a single one.
 
   OSCP files (.ocsp) and issuer files (.issuer) are supported with multi-cert
   bundling. Each certificate can have its own .ocsp and .issuer file. At this
diff --git a/doc/management.txt b/doc/management.txt
index adbad95d3..42e8ddbca 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -1725,6 +1725,10 @@ new ssl cert 
   Create a new empty SSL certificate store to be filled with a certificate and
   added to a directory or a crt-list. This command should be used in
   combination with "set ssl cert" and "add ssl crt-list".
+  Note that bundle certificates are not supported; it is recommended to use
+  `ssl-load-extra-file none` in the global config to avoid loading certificates
+  as a bundle and then mixing them with single certificates in the runtime API.
+  This avoids confusion, especially when it comes to the `commit` command.
 
 prompt
   Toggle the prompt at the beginning of the line and enter or leave interactive
-- 
2.28.0




Re: `ssl_fc_has_early` fetcher and 0rtt

2020-09-09 Thread William Dauchy
> > That seems strange indeed but looking at the code that's what I'm
> > seeing. Was your access to ssl_fc_has_early placed before or after the
> > rule above ? If it's after it must indeed report false.

the fetcher is placed before the rule

> > I seem to remember there was one but can't find it, so I may have been
> > confused. With this said, it doesn't provide a big information since
> > once the handshake is completed, it's exactly identical to a regular
> > one. But it can be nice for statistics at least.
> >
>
> Pretty sure the original version always returned 1 if there were early
> data, but we changed it so that it would return 0 once the handshake was
> done, as we thought it was more useful, that may be what you're thinking
> about.

Yes, I indeed found the commit which changed the behaviour. In fact I
probably mixed up the actual meaning of the definition in the doc with
the old blog post about it, which is vague.
Now everything is clear.

From our point of view, it's interesting to check some behavioural
changes we could make at the L4 layer (e.g. hash including the source
port or not).

> That indeed tells me something. However I checked if CO_FL_EARLY_DATA
> persists, and apparently we kill it very early once we have everything
> so it's not as if we could trivially add a new sample fetch function to
> report its past status.
>
> Also I seem to remember that we concluded that depending on the timing
> and how data were aggregated and processed in the SSL lib, we couldn't
> reliably report the use of early data if those were converted very early.

At some point I was wondering whether this could be done simply to
give a signal without any guarantee, but you seem to have concluded
that it's not reliable enough to become a new fetcher.
-- 
William



Re: `ssl_fc_has_early` fetcher and 0rtt

2020-09-09 Thread William Dauchy
Hello Willy,

Thank you for your answer,

On Wed, Sep 9, 2020 at 4:39 PM Willy Tarreau  wrote:
> If I remember well, the principle consists in detecting whether or not
> the request was received using TLS early data (0-rtt) before the handshake
> was completed. The problem is that early data may trivially be captured
> and replayed, so you don't necessarily want to accept all of them, only
> replay-safe requests. Typically a login page that is limited to 3
> attempts before blocking should not be allowed, but fetching a favicon
> is totally safe.
>
> Once the handshake ends you'll know whether it was safe or not, so you
> can actually decide to wait on this function to return false to indicate
> that the request is complete and not replayed, or just use it to return
> a "425 too early" response for certain sensitive resources.
>
> I think it's not easy to reproduce these tests, you need a high enough
> latency between haproxy and the client so that the handshake is not
> already completed when you evaluate the rule, and of course you need
> to make sure the client sends using early data. I don't remember how
> Olivier used to run his tests but I remember that it was a bit tricky,
> so it's very possible that you never fall into the situation where you
> can see the unvalidated early data yet.

It means my understanding of this fetcher was indeed wrong.
For me the protection was here:
  http-request wait-for-handshake if ! METH_GET
and the fetcher was there to log whether it was a 0rtt request or not.
In reality, it means all our requests have already completed the
handshake when the rule is evaluated (which is surprising given the
numbers we see).
So maybe we could work on an alternative fetcher to know whether this
was a 0rtt request? Or is there another way?
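
As a side note, a minimal sketch of the pattern Willy describes above (the
ACL, path and certificate locations are made up) would be to either hold
replay-unsafe requests until the handshake completes, or answer
"425 Too Early" on sensitive resources:

```
frontend fe_foo
    bind :443 ssl crt /etc/haproxy/tls/ alpn h2,http/1.1 allow-0rtt

    # hold replay-unsafe requests until the handshake completes
    http-request wait-for-handshake if ! METH_GET

    # or reject early data on sensitive paths with a 425 status
    http-request return status 425 if { ssl_fc_has_early } { path_beg /login }
```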

Thanks,
-- 
William



Re: `ssl_fc_has_early` fetcher and 0rtt

2020-09-09 Thread William Dauchy
On Wed, Sep 9, 2020 at 10:48 AM William Dauchy  wrote:
> I'm trying to understand `ssl_fc_has_early` fetcher behavior as I'm
> unable to find a single request where it returns 1.

(sorry, forgot to mention, all of these tests were done on v2.2.x)

-- 
William



`ssl_fc_has_early` fetcher and 0rtt

2020-09-09 Thread William Dauchy
Hello,

I'm trying to understand `ssl_fc_has_early` fetcher behavior as I'm
unable to find a single request where it returns 1.

Our config has 0rtt enabled and is as follows:

```
global
log 127.0.0.1 format rfc5424 local0 info
daemon
stats socket /var/lib/haproxy/stats level admin mode 600 user
haproxy group haproxy expose-fd listeners
stats timeout 2m
maxconn 524288
user haproxy
group haproxy
set-dumpable
tune.bufsize 33792
tune.runqueue-depth 1200
tune.sched.low-latency on
tune.ssl.cachesize 0
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
ssl-default-bind-ciphers
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384
ssl-default-bind-ciphersuites
TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11
ssl-default-server-ciphers
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES256-SHA
ssl-default-server-ciphersuites
TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_256_GCM_SHA384
ssl-server-verify none
hard-stop-after 183s
nbthread 16
cpu-map auto:1/1-16 0-15
spread-checks 5
chroot /etc/haproxy/chroot
strict-limits

defaults
mode http
log global
option httplog
option http-keep-alive
option forwardfor except 127.0.0.0/8
option redispatch 1
option http-ignore-probes
retries 3
retry-on conn-failure empty-response response-timeout 0rtt-rejected
timeout http-request 10s
timeout queue 1s
timeout connect 10s
timeout client 300s
timeout server 300s
timeout http-keep-alive 10s
timeout check 5s
maxconn 524288
balance roundrobin
http-reuse always
default-server inter 10s fastinter 1s fall 3 slowstart 20s observe
layer7 error-limit 5 on-error fastinter pool-purge-delay 10s tfo
allow-0rtt pool-low-conn 32
http-check expect rstatus ^(200|429)

frontend fe_foo
bind x:80 name http_ip process 1/all tfo
bind x:443 name https_ip process 1/all ssl crt
/etc/haproxy/tls/fe_foo alpn h2,http/1.1 tfo allow-0rtt

log-format-sd [fc_rtt=\"%[fc_rtt]\"\
fc_0rtt=\"%[ssl_fc_has_early]\"\ fc_resumed=\"%[ssl_fc_is_resumed]\"]

http-request disable-l7-retry if ! METH_GET
http-request wait-for-handshake if ! METH_GET

use_backend bar
```

- Am I right that tune.ssl.cachesize is not involved here?
- What's odd is that I do find requests which return 1 with the
`ssl_fc_is_resumed` fetcher.
- I looked at `ssl_fc_has_early` and did not find anything strange.
- I tried removing `wait-for-handshake` and `tune.ssl.cachesize 0`
without success.

From the client point of view, I'm able to test it with:
openssl s_client -tls1_3 -state -connect x:443 -servername foo.com
-keylogfile keylogfile.log -sess_out ~/ssl/session.data
openssl s_client -tls1_3 -state -connect x:443 -servername foo.com
-keylogfile keylogfile.log -sess_in ~/ssl/session.data -early_data
httpget.txt

The second time I'm able to see:

Reused, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was accepted
Verify return code: 0 (ok)

So to me, from the client point of view I'm able to make use of 0rtt.

Any idea how to explain `ssl_fc_has_early` fetcher behaviour? Am I
missing something in my config? Does someone have a different
behaviour with a similar config?

Best,
-- 
William



Re: Logging using %HP (path) produce different results with H1 and H2

2020-08-21 Thread William Dauchy
Hello Pierre,

On Fri, Aug 21, 2020 at 7:46 PM Pierre Cheynier  wrote:
> We're running HAProxy 2.2.2.
> It turns out logging request paths using the "%HP" var produces
> different results on H1 vs. H2.
>
> H1: /path
> H2: https://hostname.domain/path (< I consider this one buggy)
>
> No idea where this comes from exactly; I essentially understand the
> txn->uri structure ends up being completely different between the 2 code paths.
> Anybody here with lightspeed knowledge of src/h* to investigate this?

I believe this is expected; this behaviour has changed since v2.1 though.
see 
http://git.haproxy.org/?p=haproxy.git;a=commit;h=30ee1efe676e8264af16bab833c621d60a72a4d7
see also older threads talking about it
https://www.mail-archive.com/haproxy@formilux.org/msg37783.html
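
As a workaround sketch (untested), logging the `path` sample fetch instead of
%HP should give the same value for both H1 and H2, since it only extracts the
path component:

```
# untested sketch: %[path] logs only the path, regardless of whether the
# request line carried an absolute URI (typical with H2) or a relative one
log-format "%ci:%cp [%tr] %ft %b/%s %ST %B %[path]"
```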

Best,
--
William



Re: [PATCH v3 1/2] CLEANUP: ssl: ssl_sock_crt2der semicolon and spaces

2020-08-11 Thread William Dauchy
On Tue, Aug 11, 2020 at 11:36 AM William Lallemand
 wrote:
> I just pushed this fix on top on your patch, the sk_X509_pop_free() was
> provoking a double free in the session release.
>
> e3a5f84 BUG/MINOR: ssl: double free w/ smp_fetch_ssl_x_chain_der()

ok sorry for that and thanks for the fix. Strange that I did not notice it.
-- 
William



[PATCH] CLEANUP: fix all duplicated semicolons

2020-08-07 Thread William Dauchy
trivial commit, does not change the code behaviour

Signed-off-by: William Dauchy 
---
 src/mux_fcgi.c | 2 +-
 src/mux_h2.c   | 2 +-
 src/ring.c | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/mux_fcgi.c b/src/mux_fcgi.c
index 875049884..ab9ddf095 100644
--- a/src/mux_fcgi.c
+++ b/src/mux_fcgi.c
@@ -825,7 +825,7 @@ static inline struct fcgi_strm *fcgi_conn_st_by_id(struct 
fcgi_conn *fconn, int
  */
 static void fcgi_release(struct fcgi_conn *fconn)
 {
-   struct connection *conn = NULL;;
+   struct connection *conn = NULL;
 
TRACE_POINT(FCGI_EV_FCONN_END);
 
diff --git a/src/mux_h2.c b/src/mux_h2.c
index 44955ba8f..bfa65bd24 100644
--- a/src/mux_h2.c
+++ b/src/mux_h2.c
@@ -926,7 +926,7 @@ static inline struct h2s *h2c_st_by_id(struct h2c *h2c, int 
id)
  */
 static void h2_release(struct h2c *h2c)
 {
-   struct connection *conn = NULL;;
+   struct connection *conn = NULL;
 
TRACE_ENTER(H2_EV_H2C_END);
 
diff --git a/src/ring.c b/src/ring.c
index 7c7d58edc..2ee6c10b9 100644
--- a/src/ring.c
+++ b/src/ring.c
@@ -186,7 +186,7 @@ ssize_t ring_write(struct ring *ring, size_t maxlen, const 
struct ist pfx[], siz
totlen += len;
}
 
-   *b_tail(buf) = 0; buf->data++;; // new read counter
+   *b_tail(buf) = 0; buf->data++; // new read counter
sent = lenlen + totlen + 1;
 
/* notify potential readers */
-- 
2.28.0




[PATCH v3 1/2] CLEANUP: ssl: ssl_sock_crt2der semicolon and spaces

2020-08-06 Thread William Dauchy
trivial commit, does not change the code behaviour

Signed-off-by: William Dauchy 
---
 src/ssl_utils.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/ssl_utils.c b/src/ssl_utils.c
index 510b5da80..0a12fea5b 100644
--- a/src/ssl_utils.c
+++ b/src/ssl_utils.c
@@ -87,16 +87,16 @@ int ssl_sock_get_serial(X509 *crt, struct buffer *out)
 int ssl_sock_crt2der(X509 *crt, struct buffer *out)
 {
int len;
-   unsigned char *p = (unsigned char *) out->area;;
+   unsigned char *p = (unsigned char *) out->area;
 
-   len =i2d_X509(crt, NULL);
+   len = i2d_X509(crt, NULL);
if (len <= 0)
return 1;
 
if (out->size < len)
return -1;
 
-   i2d_X509(crt,&p);
+   i2d_X509(crt, &p);
out->data = len;
return 1;
 }
-- 
2.27.0




[PATCH v3 2/2] MINOR: ssl: add ssl_{c,s}_chain_der fetch methods

2020-08-06 Thread William Dauchy
Following work from Arjen and Mathilde, it adds ssl_{c,s}_chain_der
methods; it returns DER encoded certs from SSL_get_peer_cert_chain

Also update existing vtc tests to add random intermediate certificates

When getting the result through this header:
  http-response add-header x-ssl-chain-der %[ssl_c_chain_der,hex]
One can parse it with any lib accepting ASN.1 DER data, such as in go:
  bin, err := encoding/hex.DecodeString(cert)
  certs_parsed, err := x509.ParseCertificates(bin)

Cc: Arjen Nienhuis 
Signed-off-by: Mathilde Gilles 
Signed-off-by: William Dauchy 
---
v2:
- add missing check after trash alloc
v3:
- add ssl_s_chain_der support 
---
 doc/configuration.txt|  14 
 reg-tests/ssl/client1.pem| 106 +++
 reg-tests/ssl/common.pem |  54 +-
 reg-tests/ssl/ssl_client_samples.vtc |   2 +
 reg-tests/ssl/ssl_server_samples.vtc |   4 +-
 src/ssl_sample.c |  69 +
 6 files changed, 247 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9afb5662c..98ec9393c 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17062,6 +17062,13 @@ ssl_c_der : binary
   incoming connection was made over an SSL/TLS transport layer. When used for
   an ACL, the value(s) to match against can be passed in hexadecimal form.
 
+ssl_c_chain_der : binary
+  Returns the DER formatted chain certificate presented by the client when the
+  incoming connection was made over an SSL/TLS transport layer. When used for
+  an ACL, the value(s) to match against can be passed in hexadecimal form. One
+  can parse the result with any lib accepting ASN.1 DER data. It currently
+  does not support resumed sessions.
+
 ssl_c_err : integer
   When the incoming connection was made over an SSL/TLS transport layer,
   returns the ID of the first error detected during verification at depth 0, or
@@ -17426,6 +17433,13 @@ ssl_s_der : binary
   outgoing connection was made over an SSL/TLS transport layer. When used for
   an ACL, the value(s) to match against can be passed in hexadecimal form.
 
+ssl_s_chain_der : binary
+  Returns the DER formatted chain certificate presented by the server when the
+  outgoing connection was made over an SSL/TLS transport layer. When used for
+  an ACL, the value(s) to match against can be passed in hexadecimal form. One
+  can parse the result with any lib accepting ASN.1 DER data. It currently
+  does not support resumed sessions.
+
 ssl_s_key_alg : string
   Returns the name of the algorithm used to generate the key of the certificate
   presented by the server when the outgoing connection was made over an
diff --git a/reg-tests/ssl/client1.pem b/reg-tests/ssl/client1.pem
index 3725f79f4..d830a4236 100644
--- a/reg-tests/ssl/client1.pem
+++ b/reg-tests/ssl/client1.pem
@@ -28,6 +28,112 @@ 
LpmkYVHWhADLY4q06PUz8gFGsfDHnx9RQIV01SXbcFxhmAjJBequBbTpucW4UAK4
 sTmg59wlYEeNdomLBPW8f0zkY3Nm/IbyJ8kEofUa4kbwdD/osS5fgWgARiVQXEMW
 W4oMGWpplJan6qe+hInvd+5syZXtO+K/uSOj63H6BwAu
 -END CERTIFICATE-
+-BEGIN CERTIFICATE-
+MIIJazCCBVOgAwIBAgIUWHoc5e2FUECgyCvyVf8wCtt8gTYwDQYJKoZIhvcNAQEL
+BQAwRTELMAkGA1UEBhMCRlIxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
+GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMDA4MDQxODU4MTZaFw0yMDA5
+MDMxODU4MTZaMEUxCzAJBgNVBAYTAkZSMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
+HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggQiMA0GCSqGSIb3DQEB
+AQUAA4IEDwAwggQKAoIEAQDARiuHkhrnf38Md1nxGDSneJfwv/QksdNNMNTJBdjg
+OVmaRCIAyz43oefTWDQ/TebbSwB+Lg9pud1zadGWhlZRhCgBPP8JDMhIKH4eXIRk
+5IIa8WD08EwvSlqJL0r4gsMtVsxy7BZHAkka/2Ket9pyGt4kG5n75RFdc6BI80/8
+RwJt/MDxPrcVBAT7LnCluxQpyya9mZCabj7l+9a2yU2hgWS6QqfZJ133krkP/MMh
+AEQkSoA4mmBwWk9yPqXmUqiOi7v6iLkIUEh5SgYVPRk9BtU/kDaUdSwuqRrpCZo4
+SsWZWFLxBmLHkSh+G+BWjCVYMQr2ye7e+VMT/20+5xAfq4fj9n5BsPcx3QcVuTof
+RAc/Oygnt4MYnIcUb7zRFvCAvgpUHL7BnEn6nhyXjHJGqGDchsg8m9t3v/Y3ohq+
+qmrSzdeuylE1n3W5aWJlbFmyXegNP45MJ0xicesVrXEWF7YD/ir9mGJ8bQYr4blf
+77PrbF02komC6AzVPKOJa0jR+eW1wErzYlkYgez6ylBWCiHJd1dhEHlK3h2rXdYa
+Gnb45ILCLpEDjNEUrHifLLNXwqJpgZQsJU6BgMgk7ZgBfAKrCfTeg0rkCqCAPeVb
+8eSLf7FBF7YBRJ5P6u8qXc4RtgEu607GaWV0gIMfyVBY52oV+OaNsEdFetrJnp3c
+friG8vJ+7jdq6zjUCGgnfUIHoViJPh3JuFfhA3jT0gQDKW5PeI7dxhrNvlqdYfHI
+fxX7Y1/J6cTQkqJ1cai2f0bwJIJiTAThNbG+zrtjJ7fZ3wJ4udyU/IKrwShqtmTb
+1Ofj0tJDdwOH8i84vIySLUvR9aAb7ClFlnsx6rzwOxG90W7C0LA2M0EHm4FezJm/
+FfujnZwEWr1T9Wki6qE0MHCbdN/TTDws//EKkkE44FC+amL96w0IQl70vpE37j2A
+zlDWvFFID95SIxfmpkwWDvXDKv6gr1GMLeysCl2fgpY05Xidw5cEo9/tEkuWn/dG
+x/D9hnLBGeroA0251ES12jemqDjI2U0tfaeHakjwSsoWElf94Qmuh2iPZ+1zIxQs
+7o6nAWN8X9hfsmrDTTHlww0TEfrjlbzG5Yh+0ZRxmejgiUyOCXck+eh/ZXMXvfWh
+y3CorIIuWgkRjm80PYkdaRDJdZuyP6R7tXfTXNVzAiSQf0Qx9ru2KB2Fs/XZPamH
+KjItAU5Q6msIVvaRMS0muQgV+b6hqSEBzqXqJfAlpVLHXr5FqK+U7EB9y02B6piB
+tAmxqXP8OOCoQql6/vgIcrDFUOo6KtGBW36ef74XE3KCUVaIzVJZSIt6i/Vi0bZj
+bAjsJUQ3qDlHdorv9TRVOhnC1GUz7SuYnpEOyiXmyx3LAgMBAAGjUzBRMB0GA1Ud
+DgQWBBQ62csZcH/meQcENHhNbqz9LMzwjjAfBgNVHSMEGDAWgBQ62csZcH/meQcE

Re: [PATCH 2/2] MINOR: ssl: add ssl_c_chain_der fetch method

2020-08-05 Thread William Dauchy
On Wed, Aug 5, 2020 at 3:20 PM Emeric Brun  wrote:
> I think this code could also be useful to declare a "ssl_s_chain_der" with
> minor changes, as is done for ssl_c_serial:

true, I can do a v3 to handle this.

--
William



[PATCH v2 2/2] MINOR: ssl: add ssl_c_chain_der fetch method

2020-08-05 Thread William Dauchy
Following work from Arjen and Mathilde, it adds ssl_c_chain_der method;
it returns DER encoded certs from SSL_get_peer_cert_chain

Also update existing vtc tests to add random intermediate certificates

When getting the result through this header:
  http-response add-header x-ssl-chain-der %[ssl_c_chain_der,hex]
One can parse it with any lib accepting ASN.1 DER data, such as in go:
  bin, err := encoding/hex.DecodeString(cert)
  certs_parsed, err := x509.ParseCertificates(bin)

Cc: Arjen Nienhuis 
Signed-off-by: Mathilde Gilles 
Signed-off-by: William Dauchy 
---
Changes:
v2: add return value check for `alloc_trash_chunk()` call
---
 doc/configuration.txt|   7 ++
 reg-tests/ssl/client1.pem| 106 +++
 reg-tests/ssl/ssl_client_samples.vtc |   2 +
 src/ssl_sample.c |  63 
 4 files changed, 178 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9afb5662c..b12c2dc8b 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17062,6 +17062,13 @@ ssl_c_der : binary
   incoming connection was made over an SSL/TLS transport layer. When used for
   an ACL, the value(s) to match against can be passed in hexadecimal form.
 
+ssl_c_chain_der : binary
+  Returns the DER formatted chain certificate presented by the client when the
+  incoming connection was made over an SSL/TLS transport layer. When used for
+  an ACL, the value(s) to match against can be passed in hexadecimal form. One
+  can parse the result with any lib accepting ASN.1 DER data. It currently
+  does not support resumed sessions.
+
 ssl_c_err : integer
   When the incoming connection was made over an SSL/TLS transport layer,
   returns the ID of the first error detected during verification at depth 0, or
diff --git a/reg-tests/ssl/client1.pem b/reg-tests/ssl/client1.pem
index 3725f79f4..d830a4236 100644
--- a/reg-tests/ssl/client1.pem
+++ b/reg-tests/ssl/client1.pem
@@ -28,6 +28,112 @@ 
LpmkYVHWhADLY4q06PUz8gFGsfDHnx9RQIV01SXbcFxhmAjJBequBbTpucW4UAK4
 sTmg59wlYEeNdomLBPW8f0zkY3Nm/IbyJ8kEofUa4kbwdD/osS5fgWgARiVQXEMW
 W4oMGWpplJan6qe+hInvd+5syZXtO+K/uSOj63H6BwAu
 -END CERTIFICATE-
+-BEGIN CERTIFICATE-
+MIIJazCCBVOgAwIBAgIUWHoc5e2FUECgyCvyVf8wCtt8gTYwDQYJKoZIhvcNAQEL
+BQAwRTELMAkGA1UEBhMCRlIxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
+GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMDA4MDQxODU4MTZaFw0yMDA5
+MDMxODU4MTZaMEUxCzAJBgNVBAYTAkZSMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
+HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggQiMA0GCSqGSIb3DQEB
+AQUAA4IEDwAwggQKAoIEAQDARiuHkhrnf38Md1nxGDSneJfwv/QksdNNMNTJBdjg
+OVmaRCIAyz43oefTWDQ/TebbSwB+Lg9pud1zadGWhlZRhCgBPP8JDMhIKH4eXIRk
+5IIa8WD08EwvSlqJL0r4gsMtVsxy7BZHAkka/2Ket9pyGt4kG5n75RFdc6BI80/8
+RwJt/MDxPrcVBAT7LnCluxQpyya9mZCabj7l+9a2yU2hgWS6QqfZJ133krkP/MMh
+AEQkSoA4mmBwWk9yPqXmUqiOi7v6iLkIUEh5SgYVPRk9BtU/kDaUdSwuqRrpCZo4
+SsWZWFLxBmLHkSh+G+BWjCVYMQr2ye7e+VMT/20+5xAfq4fj9n5BsPcx3QcVuTof
+RAc/Oygnt4MYnIcUb7zRFvCAvgpUHL7BnEn6nhyXjHJGqGDchsg8m9t3v/Y3ohq+
+qmrSzdeuylE1n3W5aWJlbFmyXegNP45MJ0xicesVrXEWF7YD/ir9mGJ8bQYr4blf
+77PrbF02komC6AzVPKOJa0jR+eW1wErzYlkYgez6ylBWCiHJd1dhEHlK3h2rXdYa
+Gnb45ILCLpEDjNEUrHifLLNXwqJpgZQsJU6BgMgk7ZgBfAKrCfTeg0rkCqCAPeVb
+8eSLf7FBF7YBRJ5P6u8qXc4RtgEu607GaWV0gIMfyVBY52oV+OaNsEdFetrJnp3c
+friG8vJ+7jdq6zjUCGgnfUIHoViJPh3JuFfhA3jT0gQDKW5PeI7dxhrNvlqdYfHI
+fxX7Y1/J6cTQkqJ1cai2f0bwJIJiTAThNbG+zrtjJ7fZ3wJ4udyU/IKrwShqtmTb
+1Ofj0tJDdwOH8i84vIySLUvR9aAb7ClFlnsx6rzwOxG90W7C0LA2M0EHm4FezJm/
+FfujnZwEWr1T9Wki6qE0MHCbdN/TTDws//EKkkE44FC+amL96w0IQl70vpE37j2A
+zlDWvFFID95SIxfmpkwWDvXDKv6gr1GMLeysCl2fgpY05Xidw5cEo9/tEkuWn/dG
+x/D9hnLBGeroA0251ES12jemqDjI2U0tfaeHakjwSsoWElf94Qmuh2iPZ+1zIxQs
+7o6nAWN8X9hfsmrDTTHlww0TEfrjlbzG5Yh+0ZRxmejgiUyOCXck+eh/ZXMXvfWh
+y3CorIIuWgkRjm80PYkdaRDJdZuyP6R7tXfTXNVzAiSQf0Qx9ru2KB2Fs/XZPamH
+KjItAU5Q6msIVvaRMS0muQgV+b6hqSEBzqXqJfAlpVLHXr5FqK+U7EB9y02B6piB
+tAmxqXP8OOCoQql6/vgIcrDFUOo6KtGBW36ef74XE3KCUVaIzVJZSIt6i/Vi0bZj
+bAjsJUQ3qDlHdorv9TRVOhnC1GUz7SuYnpEOyiXmyx3LAgMBAAGjUzBRMB0GA1Ud
+DgQWBBQ62csZcH/meQcENHhNbqz9LMzwjjAfBgNVHSMEGDAWgBQ62csZcH/meQcE
+NHhNbqz9LMzwjjAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IEAQBA
+wLsGf3R1+/I2zQE+lsj7RasZtA/Cos92iEGDAPvFbx9e+roG8Gg8KBsEJu/HN0JH
+lMMiQ8dDRHSBMvRBENL5/57oOOhmqc+1u5sazLuANhzAYPZG17Klib7YpEwWoXar
+FDDiJYtCyLW0oNLpCswYopWK9GC0RJNucB0NFvOxehJ2sP2/fxGBQMB09L6mjKjd
+4KsOzyd3dNf0VYS6jB+/1pcKSHKQUo9HRHB5FK04PsYHoh4AtmEHvmYQKcWWidgU
+v26ftlH00ERzuW2juqBbz9mghlNRqXi0IyZ9b4tSj29dxW+WWFzo7j2zEPaD6z2W
+DEHq7zvON+g+q6qLgWeszqMgJzjvWjMj00E/t06PoHPiz/cAnDKEqp+ZzxCIFrxj
+/qneChpogDWyLbawhyyzbZvbirx5znOSbWjPZgydqaNEFViqbxwinBx4Xxabo6XN
+TU020FuMWmgfbIcvtgjKgyKqc97l7JMNNm7LQV9+9W0U5zdIqQKLZ9MMrd2w3xh4
+MAB8NKnwzHReK0TWwUU9HSgFAGdEX6HnyZ3bQ13ijg+sNBRMEi0gBHaqZKDdyoft
+B2u2uasSwioV48dbSIcHl+rTBKxiMh5XQ7ENnaGOJkjsIqTVzizqnPHU8eMBnSbb
+dsXlamROYII44+j3Ku6OGt51w86eGk4VxI3tmaECcJKqTkwUFD8AcNDrkjtmLuxK
+12yjnoM+u1cclfqQ5NOtRc6MJZ27jCobfBBhVdKVDp4X1WNyqGlbsU5adDAzknuI
+GT7MJO7lGjkZX2n54BNPSfrSknYMOVYcZqL0Dbcrhx5IyEmg

[PATCH v2 1/2] CLEANUP: ssl: ssl_sock_crt2der semicolon and spaces

2020-08-05 Thread William Dauchy
trivial commit, does not change the code behaviour

Signed-off-by: William Dauchy 
---
 src/ssl_utils.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/ssl_utils.c b/src/ssl_utils.c
index 510b5da80..0a12fea5b 100644
--- a/src/ssl_utils.c
+++ b/src/ssl_utils.c
@@ -87,16 +87,16 @@ int ssl_sock_get_serial(X509 *crt, struct buffer *out)
 int ssl_sock_crt2der(X509 *crt, struct buffer *out)
 {
int len;
-   unsigned char *p = (unsigned char *) out->area;;
+   unsigned char *p = (unsigned char *) out->area;
 
-   len =i2d_X509(crt, NULL);
+   len = i2d_X509(crt, NULL);
if (len <= 0)
return 1;
 
if (out->size < len)
return -1;
 
-   i2d_X509(crt,&p);
+   i2d_X509(crt, &p);
out->data = len;
return 1;
 }
-- 
2.27.0




Re: [PATCH 0/2] ssl chain fetcher

2020-08-05 Thread William Dauchy
Hello William,

Thanks for your quick answer.

On Wed, Aug 5, 2020 at 2:41 PM William Lallemand  wrote:
> I don't know if it is possible to fix, but I think most of the SSL
> fetches have the problem.

ok makes sense in that case. So for now I assume there is no easy way
to get the information from a resumed session, right?

> Your patches look good, my only concern is the alloc_trash_chunk()
> return value which is not tested, otherwise I'm okay to merge them!

fixed in v2.

-- 
William



[PATCH 0/2] ssl chain fetcher

2020-08-05 Thread William Dauchy
Hi,

Here is a patch to add a new fetcher for cert chain.
It follows discussion after thread
https://www.mail-archive.com/haproxy@formilux.org/msg35607.html

It currently does not support session reuse, but I was looking for
input on whether I could make use of the `reused_sess` objects in
haproxy, and in which way. Indeed, as pointed out by Emeric in
https://www.mail-archive.com/haproxy@formilux.org/msg37380.html
`SSL_get_peer_cert_chain` returns NULL in case of a resumed session.
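
As a usage sketch (the certificate paths and header name are illustrative),
the fetch is intended to be used on a frontend doing client certificate
authentication, as shown in the patch's commit message:

```
frontend fe_mtls
    bind :443 ssl crt /etc/haproxy/tls/server.pem ca-file /etc/haproxy/tls/ca.pem verify required
    # expose the client's certificate chain, DER encoded then hex encoded,
    # so that any ASN.1 DER parser can consume it downstream
    http-response add-header x-ssl-chain-der %[ssl_c_chain_der,hex]
```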

Thanks,

William Dauchy (2):
  CLEANUP: ssl: ssl_sock_crt2der semicolon and spaces
  MINOR: ssl: add ssl_c_chain_der fetch method

 doc/configuration.txt|   7 ++
 reg-tests/ssl/client1.pem| 106 +++
 reg-tests/ssl/ssl_client_samples.vtc |   2 +
 src/ssl_sample.c |  61 +++
 src/ssl_utils.c  |   6 +-
 5 files changed, 179 insertions(+), 3 deletions(-)

-- 
2.27.0




[PATCH 1/2] CLEANUP: ssl: ssl_sock_crt2der semicolon and spaces

2020-08-05 Thread William Dauchy
trivial commit, does not change the code behaviour

Signed-off-by: William Dauchy 
---
 src/ssl_utils.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/ssl_utils.c b/src/ssl_utils.c
index 510b5da80..0a12fea5b 100644
--- a/src/ssl_utils.c
+++ b/src/ssl_utils.c
@@ -87,16 +87,16 @@ int ssl_sock_get_serial(X509 *crt, struct buffer *out)
 int ssl_sock_crt2der(X509 *crt, struct buffer *out)
 {
int len;
-   unsigned char *p = (unsigned char *) out->area;;
+   unsigned char *p = (unsigned char *) out->area;
 
-   len =i2d_X509(crt, NULL);
+   len = i2d_X509(crt, NULL);
if (len <= 0)
return 1;
 
if (out->size < len)
return -1;
 
-   i2d_X509(crt,&p);
+   i2d_X509(crt, &p);
out->data = len;
return 1;
 }
-- 
2.27.0




[PATCH 2/2] MINOR: ssl: add ssl_c_chain_der fetch method

2020-08-05 Thread William Dauchy
Following work from Arjen and Mathilde, it adds ssl_c_chain_der method;
it returns DER encoded certs from SSL_get_peer_cert_chain

Also update existing vtc tests to add random intermediate certificates

When getting the result through this header:
  http-response add-header x-ssl-chain-der %[ssl_c_chain_der,hex]
One can parse it with any lib accepting ASN.1 DER data, such as in go:
  bin, err := encoding/hex.DecodeString(cert)
  certs_parsed, err := x509.ParseCertificates(bin)

Cc: Arjen Nienhuis 
Signed-off-by: Mathilde Gilles 
Signed-off-by: William Dauchy 
---
 doc/configuration.txt|   7 ++
 reg-tests/ssl/client1.pem| 106 +++
 reg-tests/ssl/ssl_client_samples.vtc |   2 +
 src/ssl_sample.c |  61 +++
 4 files changed, 176 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9afb5662c..b12c2dc8b 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -17062,6 +17062,13 @@ ssl_c_der : binary
   incoming connection was made over an SSL/TLS transport layer. When used for
   an ACL, the value(s) to match against can be passed in hexadecimal form.
 
+ssl_c_chain_der : binary
+  Returns the DER formatted chain certificate presented by the client when the
+  incoming connection was made over an SSL/TLS transport layer. When used for
+  an ACL, the value(s) to match against can be passed in hexadecimal form. One
+  can parse the result with any lib accepting ASN.1 DER data. It currently
+  does not support resumed sessions.
+
 ssl_c_err : integer
   When the incoming connection was made over an SSL/TLS transport layer,
   returns the ID of the first error detected during verification at depth 0, or
diff --git a/reg-tests/ssl/client1.pem b/reg-tests/ssl/client1.pem
index 3725f79f4..d830a4236 100644
--- a/reg-tests/ssl/client1.pem
+++ b/reg-tests/ssl/client1.pem
@@ -28,6 +28,112 @@ 
LpmkYVHWhADLY4q06PUz8gFGsfDHnx9RQIV01SXbcFxhmAjJBequBbTpucW4UAK4
 sTmg59wlYEeNdomLBPW8f0zkY3Nm/IbyJ8kEofUa4kbwdD/osS5fgWgARiVQXEMW
 W4oMGWpplJan6qe+hInvd+5syZXtO+K/uSOj63H6BwAu
 -END CERTIFICATE-
+-BEGIN CERTIFICATE-
+MIIJazCCBVOgAwIBAgIUWHoc5e2FUECgyCvyVf8wCtt8gTYwDQYJKoZIhvcNAQEL
+BQAwRTELMAkGA1UEBhMCRlIxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoM
+GEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZDAeFw0yMDA4MDQxODU4MTZaFw0yMDA5
+MDMxODU4MTZaMEUxCzAJBgNVBAYTAkZSMRMwEQYDVQQIDApTb21lLVN0YXRlMSEw
+HwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggQiMA0GCSqGSIb3DQEB
+AQUAA4IEDwAwggQKAoIEAQDARiuHkhrnf38Md1nxGDSneJfwv/QksdNNMNTJBdjg
+OVmaRCIAyz43oefTWDQ/TebbSwB+Lg9pud1zadGWhlZRhCgBPP8JDMhIKH4eXIRk
+5IIa8WD08EwvSlqJL0r4gsMtVsxy7BZHAkka/2Ket9pyGt4kG5n75RFdc6BI80/8
+RwJt/MDxPrcVBAT7LnCluxQpyya9mZCabj7l+9a2yU2hgWS6QqfZJ133krkP/MMh
+AEQkSoA4mmBwWk9yPqXmUqiOi7v6iLkIUEh5SgYVPRk9BtU/kDaUdSwuqRrpCZo4
+SsWZWFLxBmLHkSh+G+BWjCVYMQr2ye7e+VMT/20+5xAfq4fj9n5BsPcx3QcVuTof
+RAc/Oygnt4MYnIcUb7zRFvCAvgpUHL7BnEn6nhyXjHJGqGDchsg8m9t3v/Y3ohq+
+qmrSzdeuylE1n3W5aWJlbFmyXegNP45MJ0xicesVrXEWF7YD/ir9mGJ8bQYr4blf
+77PrbF02komC6AzVPKOJa0jR+eW1wErzYlkYgez6ylBWCiHJd1dhEHlK3h2rXdYa
+Gnb45ILCLpEDjNEUrHifLLNXwqJpgZQsJU6BgMgk7ZgBfAKrCfTeg0rkCqCAPeVb
+8eSLf7FBF7YBRJ5P6u8qXc4RtgEu607GaWV0gIMfyVBY52oV+OaNsEdFetrJnp3c
+friG8vJ+7jdq6zjUCGgnfUIHoViJPh3JuFfhA3jT0gQDKW5PeI7dxhrNvlqdYfHI
+fxX7Y1/J6cTQkqJ1cai2f0bwJIJiTAThNbG+zrtjJ7fZ3wJ4udyU/IKrwShqtmTb
+1Ofj0tJDdwOH8i84vIySLUvR9aAb7ClFlnsx6rzwOxG90W7C0LA2M0EHm4FezJm/
+FfujnZwEWr1T9Wki6qE0MHCbdN/TTDws//EKkkE44FC+amL96w0IQl70vpE37j2A
+zlDWvFFID95SIxfmpkwWDvXDKv6gr1GMLeysCl2fgpY05Xidw5cEo9/tEkuWn/dG
+x/D9hnLBGeroA0251ES12jemqDjI2U0tfaeHakjwSsoWElf94Qmuh2iPZ+1zIxQs
+7o6nAWN8X9hfsmrDTTHlww0TEfrjlbzG5Yh+0ZRxmejgiUyOCXck+eh/ZXMXvfWh
+y3CorIIuWgkRjm80PYkdaRDJdZuyP6R7tXfTXNVzAiSQf0Qx9ru2KB2Fs/XZPamH
+KjItAU5Q6msIVvaRMS0muQgV+b6hqSEBzqXqJfAlpVLHXr5FqK+U7EB9y02B6piB
+tAmxqXP8OOCoQql6/vgIcrDFUOo6KtGBW36ef74XE3KCUVaIzVJZSIt6i/Vi0bZj
+bAjsJUQ3qDlHdorv9TRVOhnC1GUz7SuYnpEOyiXmyx3LAgMBAAGjUzBRMB0GA1Ud
+DgQWBBQ62csZcH/meQcENHhNbqz9LMzwjjAfBgNVHSMEGDAWgBQ62csZcH/meQcE
+NHhNbqz9LMzwjjAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IEAQBA
+wLsGf3R1+/I2zQE+lsj7RasZtA/Cos92iEGDAPvFbx9e+roG8Gg8KBsEJu/HN0JH
+lMMiQ8dDRHSBMvRBENL5/57oOOhmqc+1u5sazLuANhzAYPZG17Klib7YpEwWoXar
+FDDiJYtCyLW0oNLpCswYopWK9GC0RJNucB0NFvOxehJ2sP2/fxGBQMB09L6mjKjd
+4KsOzyd3dNf0VYS6jB+/1pcKSHKQUo9HRHB5FK04PsYHoh4AtmEHvmYQKcWWidgU
+v26ftlH00ERzuW2juqBbz9mghlNRqXi0IyZ9b4tSj29dxW+WWFzo7j2zEPaD6z2W
+DEHq7zvON+g+q6qLgWeszqMgJzjvWjMj00E/t06PoHPiz/cAnDKEqp+ZzxCIFrxj
+/qneChpogDWyLbawhyyzbZvbirx5znOSbWjPZgydqaNEFViqbxwinBx4Xxabo6XN
+TU020FuMWmgfbIcvtgjKgyKqc97l7JMNNm7LQV9+9W0U5zdIqQKLZ9MMrd2w3xh4
+MAB8NKnwzHReK0TWwUU9HSgFAGdEX6HnyZ3bQ13ijg+sNBRMEi0gBHaqZKDdyoft
+B2u2uasSwioV48dbSIcHl+rTBKxiMh5XQ7ENnaGOJkjsIqTVzizqnPHU8eMBnSbb
+dsXlamROYII44+j3Ku6OGt51w86eGk4VxI3tmaECcJKqTkwUFD8AcNDrkjtmLuxK
+12yjnoM+u1cclfqQ5NOtRc6MJZ27jCobfBBhVdKVDp4X1WNyqGlbsU5adDAzknuI
+GT7MJO7lGjkZX2n54BNPSfrSknYMOVYcZqL0Dbcrhx5IyEmg+iOlOu1HO1tdnZop
+ej4vT+1V2w9Sa4Wo3UCo84jcm5v/4z7jCYh4BRQ60CFb7GLxZoqXIslcGSPool3n

Re: Haproxy 2.2 LTS package for Debian Stretch oldstable

2020-08-03 Thread William Dauchy
On Mon, Aug 3, 2020 at 10:31 PM Artur  wrote:
> It would be nice to have a Debian Stretch package for the current LTS
> 2.2 branch in backports. It seems it's not available for now.

CCing Vincent who is maintaining those packages.

--
William



[PATCH 1/2] BUG/MINOR: spoa-server: fix size_t format printing

2020-08-01 Thread William Dauchy
From https://www.python.org/dev/peps/pep-0353/
"A new type Py_ssize_t is introduced, which has the same size as the
compiler's size_t type, but is signed. It will be a typedef for ssize_t
where available."

The 'z' length modifier, for integer types, causes printf to expect a
size_t-sized integer argument.

This should fix github issue #702

This should be backported to >= v2.2

Signed-off-by: William Dauchy 
---
 contrib/spoa_server/ps_python.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/spoa_server/ps_python.c b/contrib/spoa_server/ps_python.c
index 00cb0e30d..be6fc4b7e 100644
--- a/contrib/spoa_server/ps_python.c
+++ b/contrib/spoa_server/ps_python.c
@@ -66,7 +66,7 @@ static int ps_python_check_overflow(Py_ssize_t len)
 */
if (len >= (Py_ssize_t)INT_MAX) {
PyErr_Format(spoa_error,
-   "%d is over 2GB. Please split in smaller 
pieces.", \
+   "%zd is over 2GB. Please split in smaller 
pieces.", \
len);
return -1;
} else {
-- 
2.27.0




[PATCH 2/2] DOC: spoa-server: fix false friends `actually`

2020-08-01 Thread William Dauchy
it seems like `actually` was wrongly used instead of `currently`

Signed-off-by: William Dauchy 
---
 contrib/spoa_server/README | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/spoa_server/README b/contrib/spoa_server/README
index f654a2321..a6c696c78 100644
--- a/contrib/spoa_server/README
+++ b/contrib/spoa_server/README
@@ -23,7 +23,7 @@ It's recommended to use python version 3 where possible due 
to python 2 deprecat
   Compilation
 ---
 
-Actually, the server support Lua and Python. Type "make" with the options:
+The server currently supports Lua and Python. Type "make" with the options:
 USE_LUA=1 and/or USE_PYTHON=1.
 
 You can add LUA_INC=.. LUA_LIB=.. to the make command to set the paths to
-- 
2.27.0




Re: [ANNOUNCE] haproxy-2.3-dev1

2020-07-17 Thread William Dauchy
On Fri, Jul 17, 2020 at 3:33 PM Willy Tarreau  wrote:
> We'll try to emit more frequent -dev releases from now on, typically
> every 2 weeks, as the end of the 2.2 cycle showed that it improves accuracy
> in bug reports, brings less risks to users, and shortens the time needed to
> write the announcement :-)

very happy to read this. I believe it will encourage people to test
more in advance as well.
It is also much easier to debug when you can bisect around a
regression.

on the road to v2.3!

-- 
William



Re: [ANNOUNCE] haproxy-2.2-dev12

2020-07-04 Thread William Dauchy
Hello Willy,

On Sat, Jul 4, 2020 at 8:11 AM Willy Tarreau  wrote:
> HAProxy 2.2-dev12 was released on 2020/07/04. It added 72 new commits
> after version 2.2-dev11.

maybe the package is being uploaded but I can't find it on
http://www.haproxy.org/download/2.2/src/devel/

>   - for developers, building with DEBUG_MEM_STATS provides a new expert
> command "debug dev memstats" which shows the total counts (calls and
> sizes) of memory allocations per line of code. This is very cheap and
> can be enabled on production servers if suspecting a memory leak
> somewhere (and it served to spot a regression in a recent fix).

Not forgetting the useful reset argument, "debug dev memstats reset",
which then allows you to see the allocations starting from a specific
moment.

-- 
William



Re: dev 2.2 High CPU Constantly

2020-07-02 Thread William Dauchy
Hi Igor,

On Thu, Jul 2, 2020 at 9:57 AM Igor Pav  wrote:
> By using dev11, the CPU consumption drops a lot, but when connections
> reach ~1000, the CPU would still go high, remove the 0rtt-rejected
> from conf, the CPU becomes normal...

If you have the opportunity to test the latest HEAD of the repo, it
could be useful.
I believe this could be fixed by Willy's latest commit:
https://git.haproxy.org/?p=haproxy.git;a=commit;h=18ed789ae262382e3fe89288132d709c114575aa
-- 
William



Re: [PATCH] BUG/MAJOR: Fix bogus free() during deinit() for http-request rules

2020-06-14 Thread William Dauchy
Hi Tim,

Thanks for the quick fix. I've also tested it on my side; it fixes the
issue I reported earlier.

On Sun, Jun 14, 2020 at 5:27 PM Tim Duesterhus  wrote:
> We cannot simply `release_sample_expr(rule->arg.vars.expr)` for a
> `struct act_rule`, because `rule->arg` is a union that might not
> contain valid `vars`. This leads to a crash on a configuration using
> `http-request redirect` and possibly others:
>
> frontend http
> mode http
> bind 127.0.0.1:80
> http-request redirect scheme https
>
> Instead a `struct act_rule` has a `release_ptr` that must be used
> to properly free any additional storage allocated.
>
> This patch fixes a regression in commit 
> ff78fcdd7f15c8626c7e70add7a935221ee2920c.
> It must be backported to whereever that patch is backported.
>
> It has be verified that the configuration above no longer crashes.
> It has also been verified that the configuration in 
> ff78fcdd7f15c8626c7e70add7a935221ee2920c
> does not leak.

Reported-by: William Dauchy 

> ---
>  src/haproxy.c | 1 -
>  src/vars.c| 7 +++
>  2 files changed, 7 insertions(+), 1 deletion(-)

Best,
-- 
William



Re: [PATCH 3/3] BUG/MINOR: haproxy: Free rule->arg.vars.expr during deinit_act_rules

2020-06-14 Thread William Dauchy
Hello,

On Sun, Jun 14, 2020 at 7:40 AM Willy Tarreau  wrote:
> On Sun, Jun 14, 2020 at 12:37:43AM +0200, Tim Duesterhus wrote:
> > careful with this one: I don't know whether it's safe to simply free the
> > expression there or whether I need to somehow check whether there actually
> > is some expression.
> >
> > It does not crash with my stupid example configuration showcasing the leak,
> > but of course real world configurations might or might not trigger a bogus
> > free there.
>
> It seems to be OK. I was worried that the pointer could be part of a union
> containing either the expression or its text version during parsing, but
> this doesn't seem to be the case, so apparently it's OK to always free it
> if it appears in a rule.

After this patch, I'm getting a segfault after firing a USR1 signal
to trigger the deinit:

#0  0x55b653aaec34 in release_sample_expr (expr=0x55b654766bc0) at
src/sample.c:1427
1427            list_for_each_entry_safe(conv_expr, conv_exprb,
                &expr->conv_exprs, list)
(gdb) bt
#0  0x55b653aaec34 in release_sample_expr (expr=0x55b654766bc0) at
src/sample.c:1427
#1  0x55b653b0d884 in deinit_act_rules (rules=0x55b6547644b0) at
src/haproxy.c:2559
#2  0x55b653b0f09d in deinit () at src/haproxy.c:2707
#3  0x55b653b0fef8 in deinit_and_exit (status=0) at src/haproxy.c:2872
#4  0x55b6539f7256 in main (argc=<optimized out>, argv=<optimized out>) at src/haproxy.c:3771

Here is my config:

global
log 127.0.0.1 format rfc5424 local0 info
daemon
stats timeout 2m
set-dumpable
nbproc 1
tune.bufsize 33792
ssl-server-verify none
nbthread 16
cpu-map auto:1/1-16 0-15
spread-checks 5
strict-limits
no busy-polling

defaults
mode http
log global
option httplog
option http-keep-alive
option forwardfor except 127.0.0.0/8
option redispatch 1
option http-ignore-probes
retries 3
retry-on conn-failure empty-response response-timeout 0rtt-rejected
timeout http-request 10s
timeout queue 1s
timeout connect 10s
timeout client 180s
timeout server 180s
timeout http-keep-alive 10s
timeout check 5s
balance roundrobin
http-reuse always
default-server inter 5s fastinter 1s fall 3 slowstart 20s observe
layer7 error-limit 5 on-error fail-check pool-purge-delay 10s tfo
http-check expect status 200

listen stats
bind *:8080
stats enable
stats uri /haproxy_stats
http-request use-service prometheus-exporter if { path /metrics }
monitor-uri /

frontend fe_foo
bind 127.0.0.1:80 name http_ip4 process 1/all tfo

acl http  ssl_fc,not
acl https ssl_fc
monitor-uri /delivery/monitor/lb-check
# errorfile 200 /etc/haproxy/errorfiles/200.http
tcp-request content capture fc_rtt len 10
capture request header Host len 64
capture request header user-agent len 128
capture request header cf-ray len 20
log-format "%tr %TR/%Tw/%Ta %ac/%fc/%bc/%sc/%rc %sq/%bq %{+Q}r"
log-format-sd [user@51719\ src_ip=\"%ci\"\ src_port=\"%cp\"\
ftd=\"%f\"\ bkd=\"%b\"\ srv=\"%si:%sp\"\ status=\"%ST\"\
bytes_r=\"%B\"\ tsc=\"%tsc\"\ sslv=\"%sslv\"\ sslc=\"%sslc\"\
h_host=\"%[capture.req.hdr(1)]\"\ meth=\"%HM\"\ version=\"%HV\"\
h_ua=\"%[capture.req.hdr(2)]\"\
cf_dc=\"%[capture.req.hdr(3),word(2,-)]\"\
fc_rtt=\"%[capture.req.hdr(0)]\"\ time_tr=\"%Tr\"\ time_tc=\"%Tc\"\
time_tt=\"%Tt\"]
http-response set-log-level silent if { rand(100) ge 1 }

http-request del-header connection if { req.hdr(connection) close }
http-request set-header x-proto ssl if https
http-request set-header x-forwarded-proto https if https
http-request set-header x-tls-sessionid %[ssl_fc_session_id,hex] if https
http-request add-header x-client-ip %[src]
http-request set-header client-ip %[src]

http-request set-header x-real-host %[req.hdr(host)]
http-request set-header x-server-address %[dst]
acl content_encoding_header_exists req.hdr(content-encoding) -m found
http-request set-header x-original-content-encoding
%[req.hdr(content-encoding)] if content_encoding_header_exists
acl host_header_exists req.hdr(host) -m found
http-request set-header host %[req.hdr(host),field(1,:)] if
host_header_exists
acl ::disabled_http_methods method CONNECT
http-request deny if ::disabled_http_methods
http-request disable-l7-retry if ! METH_GET
http-request redirect scheme https code 302 if !{ ssl_fc } METH_GET
http-request redirect scheme https code 307 if !{ ssl_fc } ! METH_GET

default_backend be_foo

backend be_foo
   server srv0 127.0.0.1:6011 weight 1

-- 
William



Re: No access to git.haproxy.org from Travis CI

2020-06-13 Thread William Dauchy
Hi,

On Thu, Jun 11, 2020 at 1:10 PM Willy Tarreau  wrote:
> Sure but what I wanted to say was that travis seems to be the only
> point experiencing such difficulties and we don't know how it works
> nor what are the rules in place.

I don't know whether this is related to the issue described here but I just had:

$ git pull --rebase
fatal: unable to access 'http://git.haproxy.org/git/haproxy.git/': The
requested URL returned error: 502
$ git pull --rebase
error: The requested URL returned error: 502 (curl_result = 22,
http_code = 502, sha1 = f38175cf6ed4d02132e6b21cbf643f73be5ee000)
error: Unable to find f38175cf6ed4d02132e6b21cbf643f73be5ee000 under
http://git.haproxy.org/git/haproxy.git
Fetching objects: 4529, done.
Cannot obtain needed commit f38175cf6ed4d02132e6b21cbf643f73be5ee000
while processing commit ff0e8a44a4c23ab36b6f67c4052777ac908d4211.
error: fetch failed.

Third try was ok though :/

-- 
William



Re: dev 2.2 High CPU Constantly

2020-06-11 Thread William Dauchy
Hello Igor,

On Thu, Jun 11, 2020 at 5:25 PM Igor Pav  wrote:
> We got a very high CPU constantly while using 2.2dev. Any suggestion? Thanks.

Do you have more inputs of what you are doing to trigger that?
does it end at some point with an abort? e.g do you have in logs
things like "Thread X is about to kill the process". If so could you
provide the full logs of the error?
or do you have logs starting with "A bogus STREAM" -> in that case it
can be related to what we are currently looking at in
https://github.com/haproxy/haproxy/issues/662

Thanks,
-- 
William



Re: [ANNOUNCE] haproxy-2.1.5

2020-06-03 Thread William Dauchy
Hello Arthur,

On Wed, Jun 3, 2020 at 11:38 AM Artur  wrote:
> What is the risk to trigger this bug in real life ? How to avoid it
> before the next haproxy release ?
> In my setup haproxy 2.1.5 has threads enabled by default (Debian Buster
> backports).

You will trigger this issue if you are in the following cumulative cases:
- using a non-default `pool-purge-delay` value; the higher the value
is, the easier it is to reproduce
- changing the servers' state at runtime (i.e. from running to
maintenance state) and/or changing the ip/port of servers at runtime
for a given backend.

So basically, it will impact heavy users of the runtime API.
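
In configuration terms, the combination described above looks roughly like
this (values are only illustrative), together with runtime commands such as
"set server be_foo/srv0 state maint" or "set server be_foo/srv0 addr ..."
issued on the stats socket:

```
defaults
    # non-default purge delay: the higher it is, the easier the bug is
    # to reproduce
    default-server pool-purge-delay 60s

backend be_foo
    server srv0 192.0.2.10:8080 check
```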

Best,
-- 
William



Re: [ANNOUNCE] haproxy-2.1.5

2020-06-02 Thread William Dauchy
On Tue, Jun 2, 2020 at 12:13 PM William Dauchy  wrote:
> it seems like I broke something with this commit, but I did not have
> it in v2.2

small followup:
Sorry for that one, the backport was not exactly as I thought, and so
no tests were done before release outside of the 2.2 branch:
- a small mistake in an index within a loop
- more importantly, srv->idle_conns and srv->safe_conns are not
thread-safe lists in 2.0 and 2.1

I chose to revert the changes for < 2.2.
-- 
William



[v2.0 PATCH] Revert "BUG/MEDIUM: connections: force connections cleanup on server changes"

2020-06-02 Thread William Dauchy
As explained by Christopher on github issue #665:
In 2.2, srv->idle_conns and srv->safe_conns are thread-safe lists. But
not in 2.1. So the patch must be reverted or the lists must be changed
to use mt_list instead. The same must be done in 2.0, but the mt_list
does not exist on this version.

I chose to revert it as the original bug is truly revealed in v2.2
after commit 079cb9af22da6 ("MEDIUM: connections: Revamp the way idle
connections are killed")

this should resolve github issue #665

This reverts commit e18f603012fe3b698a30fcedfb0684f48fe403aa.
This reverts commit 3b614335cf25a7081c25e0a789c932c5960f221c.
---
 src/server.c | 39 +--
 1 file changed, 1 insertion(+), 38 deletions(-)

diff --git a/src/server.c b/src/server.c
index 851f32a9..9f2452e0 100644
--- a/src/server.c
+++ b/src/server.c
@@ -51,7 +51,6 @@ static void srv_update_status(struct server *s);
 static void srv_update_state(struct server *srv, int version, char **params);
 static int srv_apply_lastaddr(struct server *srv, int *err_code);
 static int srv_set_fqdn(struct server *srv, const char *fqdn, int dns_locked);
-static void srv_cleanup_connections(struct server *srv);
 
 /* List head of all known server keywords */
 static struct srv_kw_list srv_keywords = {
@@ -3780,11 +3779,8 @@ const char *update_server_addr_port(struct server *s, 
const char *addr, const ch
}
 
 out:
-   if (changed) {
-   /* force connection cleanup on the given server */
-   srv_cleanup_connections(s);
+   if (changed)
srv_set_dyncookie(s);
-   }
if (updater)
chunk_appendf(msg, " by '%s'", updater);
chunk_appendf(msg, "\n");
@@ -5016,8 +5012,6 @@ static void srv_update_status(struct server *s)
if (s->onmarkeddown & 
HANA_ONMARKEDDOWN_SHUTDOWNSESSIONS)
srv_shutdown_streams(s, SF_ERR_DOWN);
 
-   /* force connection cleanup on the given server */
-   srv_cleanup_connections(s);
/* we might have streams queued on this server and 
waiting for
 * a connection. Those which are redispatchable will be 
queued
 * to another server or to the proxy itself.
@@ -5346,37 +5340,6 @@ struct task *srv_cleanup_toremove_connections(struct 
task *task, void *context,
return task;
 }
 
-/* cleanup connections for a given server
- * might be useful when going on forced maintenance or live changing ip/port
- */
-static void srv_cleanup_connections(struct server *srv)
-{
-   struct connection *conn;
-   int did_remove;
-   int i;
-   int j;
-
-   HA_SPIN_LOCK(OTHER_LOCK, &idle_conn_srv_lock);
-   for (i = 0; i < global.nbthread; i++) {
-   did_remove = 0;
-   HA_SPIN_LOCK(OTHER_LOCK, &toremove_lock[i]);
-   for (j = 0; j < srv->curr_idle_conns; j++) {
-   conn = LIST_ELEM(srv->idle_conns[tid].n, struct 
connection *, list);
-   if (!conn)
-   conn = LIST_ELEM(srv->safe_conns[tid].n,
-struct connection *, list);
-   if (!conn)
-   break;
-   did_remove = 1;
-   LIST_ADDQ_LOCKED(&toremove_connections[i], &conn->list);
-   }
-   HA_SPIN_UNLOCK(OTHER_LOCK, &toremove_lock[i]);
-   if (did_remove)
-   task_wakeup(idle_conn_cleanup[i], TASK_WOKEN_OTHER);
-   }
-   HA_SPIN_UNLOCK(OTHER_LOCK, &idle_conn_srv_lock);
-}
-
 struct task *srv_cleanup_idle_connections(struct task *task, void *context, 
unsigned short state)
 {
struct server *srv;
-- 
2.26.2




[v2.1 PATCH] Revert "BUG/MEDIUM: connections: force connections cleanup on server changes"

2020-06-02 Thread William Dauchy
As explained by Christopher on github issue #665:
In 2.2, srv->idle_conns and srv->safe_conns are thread-safe lists. But
not in 2.1. So the patch must be reverted or the lists must be changed
to use mt_list instead. The same must be done in 2.0, but the mt_list
does not exist on this version.

I chose to revert it as the original bug is truly revealed in v2.2
after commit 079cb9af22da6 ("MEDIUM: connections: Revamp the way idle
connections are killed")

this should resolve github issue #665

this reverts commit 7eab37b6819af685c647cf5a581e29fca2f3e079.
this reverts commit 3ad3306ec0bcb0cd4ca2b9ba134ed67663473ee8.
---
 src/server.c | 39 +--
 1 file changed, 1 insertion(+), 38 deletions(-)

diff --git a/src/server.c b/src/server.c
index 7d44921a7..408458f0c 100644
--- a/src/server.c
+++ b/src/server.c
@@ -55,7 +55,6 @@ static int srv_apply_lastaddr(struct server *srv, int 
*err_code);
 static int srv_set_fqdn(struct server *srv, const char *fqdn, int dns_locked);
 static void srv_state_parse_line(char *buf, const int version, char **params, 
char **srv_params);
 static int srv_state_get_version(FILE *f);
-static void srv_cleanup_connections(struct server *srv);
 
 /* List head of all known server keywords */
 static struct srv_kw_list srv_keywords = {
@@ -3945,11 +3944,8 @@ const char *update_server_addr_port(struct server *s, 
const char *addr, const ch
}
 
 out:
-   if (changed) {
-   /* force connection cleanup on the given server */
-   srv_cleanup_connections(s);
+   if (changed)
srv_set_dyncookie(s);
-   }
if (updater)
chunk_appendf(msg, " by '%s'", updater);
chunk_appendf(msg, "\n");
@@ -5110,8 +5106,6 @@ static void srv_update_status(struct server *s)
if (s->onmarkeddown & 
HANA_ONMARKEDDOWN_SHUTDOWNSESSIONS)
srv_shutdown_streams(s, SF_ERR_DOWN);
 
-   /* force connection cleanup on the given server */
-   srv_cleanup_connections(s);
/* we might have streams queued on this server and 
waiting for
 * a connection. Those which are redispatchable will be 
queued
 * to another server or to the proxy itself.
@@ -5440,37 +5434,6 @@ struct task *srv_cleanup_toremove_connections(struct 
task *task, void *context,
return task;
 }
 
-/* cleanup connections for a given server
- * might be useful when going on forced maintenance or live changing ip/port
- */
-static void srv_cleanup_connections(struct server *srv)
-{
-   struct connection *conn;
-   int did_remove;
-   int i;
-   int j;
-
-   HA_SPIN_LOCK(OTHER_LOCK, &idle_conn_srv_lock);
-   for (i = 0; i < global.nbthread; i++) {
-   did_remove = 0;
-   HA_SPIN_LOCK(OTHER_LOCK, &toremove_lock[i]);
-   for (j = 0; j < srv->curr_idle_conns; j++) {
-   conn = LIST_ELEM(srv->idle_conns[tid].n, struct 
connection *, list);
-   if (!conn)
-   conn = LIST_ELEM(srv->safe_conns[tid].n,
-struct connection *, list);
-   if (!conn)
-   break;
-   did_remove = 1;
-   MT_LIST_ADDQ(&toremove_connections[i], (struct mt_list *)&conn->list);
-   }
-   HA_SPIN_UNLOCK(OTHER_LOCK, &toremove_lock[i]);
-   if (did_remove)
-   task_wakeup(idle_conn_cleanup[i], TASK_WOKEN_OTHER);
-   }
-   HA_SPIN_UNLOCK(OTHER_LOCK, &idle_conn_srv_lock);
-}
-
 struct task *srv_cleanup_idle_connections(struct task *task, void *context, 
unsigned short state)
 {
struct server *srv;
-- 
2.26.2




Re: [ANNOUNCE] haproxy-2.1.5

2020-06-02 Thread William Dauchy
On Fri, May 29, 2020 at 4:31 PM William Lallemand
 wrote:
> William Dauchy fixed the connection idle cleanup upon a server maintenance or
> an ip/port change.
>   BUG/MEDIUM: connections: force connections cleanup on server changes

it seems like I broke something with this commit, but I did not hit
it in v2.2, unless it is also the cause of issue #662

#0  0x7fac7d4f0500 in ?? ()
#1  0x55afb63fd2d9 in srv_cleanup_toremove_connections
(task=0x7fac7c81b980, context=<optimized out>, state=<optimized out>)
at src/server.c:5437
#2  0x55afb6485248 in process_runnable_tasks () at src/task.c:433
#3  0x55afb6435594 in run_poll_loop () at src/haproxy.c:2743
#4  0x55afb643599d in run_thread_poll_loop (data=)
at src/haproxy.c:2878
#5  0x7faca30bfea5 in start_thread () from /lib64/libpthread.so.0
#6  0x7faca1b6f8dd in clone () from /lib64/libc.so.6

I will have a look at it today.
-- 
William



Re: haproxy 2.2-dev8-7867525 - 100% cpu usage on 1 core after config 'reload'

2020-05-29 Thread William Dauchy
Hello Christopher,

On Fri, May 29, 2020 at 9:02 AM Christopher Faulet  wrote:
> I was able to reproduce the bug. Thanks for the reproducer. I've fixed it. It
> should be ok now.

I believe you are referring to
https://git.haproxy.org/?p=haproxy.git;a=commit;h=56192cc60b786f2c82925411d8b2ccd7d9f97d84
right?
I'm trying to hunt another loop issue in v2.2, that's why I prefer to
remove unrelated noise.

Thanks,
-- 
William



Re: RFC: set minimum default TLS version to 1.2 for HAProxy 2.2

2020-05-29 Thread William Dauchy
On Wed, May 27, 2020 at 12:42 PM William Lallemand
 wrote:
> So in my opinion we should do the same, and set the minimum version to
> TLSv12 by default on bind lines. It's still configurable with
> ssl-min-ver if you want the support for prior TLS versions.
> Does anybody have any objections?

Even though I'm late to the reply, I think it is a good decision.
Modern browsers are going to disable it at some point; on our side we
disabled TLS 1.0 and 1.1 completely last year. The traffic coming from
browsers with these versions was very low (around 1% IIRC, no more than
2%), and we also realised a big part of it was in fact fraudulent
traffic coming from bots, so the final decision was not hard.
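
For anyone who still needs to accept older clients after the default changes,
the minimum version remains configurable with the `ssl-min-ver` keyword, e.g.
(a sketch; the certificate path is illustrative):

```
global
    # keep accepting TLSv1.0/1.1 despite the new TLSv1.2 default
    ssl-default-bind-options ssl-min-ver TLSv1.0

frontend fe_legacy
    # or override it on a single bind line only
    bind :443 ssl crt /etc/haproxy/tls/site.pem ssl-min-ver TLSv1.0
```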

-- 
William



Re: [PR] Add verfied chain

2020-05-18 Thread William Dauchy
Hi Arjen,

On Mon, May 18, 2020 at 6:02 PM Arjen Nienhuis  wrote:
> I used PKCS7 because I did not know how to parse concatenated blobs.

Mathilde, how did we plan to use it? :)

> I think you should use SSL_get_peer_cert_chain because:
> - BoringSSL has no SSL_get0_verified_chain.
> - For debugging having all the certs is better. Especially if the chain
> is not valid.
> - In theory it's not always possible to do OCSP with the verified chain.
> OCSP is part of finding a valid chain. OpenSSL could choose a cert chain
> that doesn't pass OCSP while another chain exists that can pass OCSP.

Thank you for your feedback.
Do you want to handle the changes? Otherwise we can handle them and
credit you as the original author of the proposal in the commit
message. As you wish.

-- 
William



Re: [PR] Add verfied chain

2020-05-18 Thread William Dauchy
On Mon, May 18, 2020 at 3:58 PM William Lallemand
 wrote:
> I suppose it was put in a PKCS7 container to be able to distinguish each
> DER part of the chain easily? So it can be used by an external tool. I'm not
> sure of what is done with the result of this.
>
> The two patches seem to take different approaches: Arjen's one is
> using SSL_get0_verified_chain() and Mathilde's one is using
> SSL_get_peer_cert_chain(). I'm not sure which approach is the best; I
> suppose that SSL_get_peer_cert_chain() is better if we want to have the
> chain even if it wasn't verified, and it could be completed with the
> ssl_c_verify sample fetch if we need this information!
>
> I will be grateful if a .vtc test file is also provided with sample
> fetch patches, it's difficult to test every sample fetch nowadays.
>
> There is already a vtc for client auth which is available here:
> https://git.haproxy.org/?p=haproxy.git;a=blob;f=reg-tests/ssl/ssl_client_auth.vtc

Thanks for the feedback. I believe we will send our proposal soon.
-- 
William



Re: [PR] Add verified chain

2020-05-18 Thread William Dauchy
Hello William L.,

On Wed, Dec 4, 2019 at 4:24 PM PR Bot  wrote:
> Patch title(s):
>MINOR: add fetch 'ssl_c_verified_chain'
>Merge branch 'master' of https://github.com/haproxy/haproxy
> Link:
>https://github.com/haproxy/haproxy/pull/396
> Edit locally:
>wget https://github.com/haproxy/haproxy/pull/396.patch && vi 396.patch
> Apply locally:
>curl https://github.com/haproxy/haproxy/pull/396.patch | git am -

We were wondering if you had the time to have a look at this one?
In fact we have a similar need and Mathilde started to work on a very
similar patch, see
https://github.com/ShimmerGlass/haproxy/commit/c63116fe7048320abc41709e4d1b25513da91f57
the difference being that Mathilde simply concatenated the certs, while the
patch from Arjen uses PKCS7. Is there any specific reason to use
PKCS7?
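
For reference, the "concatenated certs" approach boils down to something like
the sketch below; this is generic OpenSSL usage, not code taken from either
patch, and error handling is trimmed. On OpenSSL >= 1.1.0,
SSL_get0_verified_chain() could be substituted to only expose the validated
chain.

#include <openssl/bio.h>
#include <openssl/pem.h>
#include <openssl/ssl.h>

/* illustrative sketch: write every certificate of the peer chain as a PEM
 * block into <out>, producing one concatenated PEM blob */
static int peer_chain_to_pem(SSL *ssl, BIO *out)
{
	/* full presented chain, even when it could not be verified */
	STACK_OF(X509) *chain = SSL_get_peer_cert_chain(ssl);
	int i;

	if (!chain)
		return -1;
	for (i = 0; i < sk_X509_num(chain); i++)
		if (!PEM_write_bio_X509(out, sk_X509_value(chain, i)))
			return -1;
	return sk_X509_num(chain);
}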

note: it also refers to https://github.com/haproxy/haproxy/issues/297

Best,
-- 
William



[PATCH] BUILD: ssl: include buffer common headers for ssl_sock_ctx

2020-05-17 Thread William Dauchy
since commit c0cdaffaa338 ("REORG: ssl: move ssl_sock_ctx and fix
cross-dependencies issues"), `struct ssl_sock_ctx` was moved to
ssl_sock.h. As it contains a `struct buffer`, including
`common/buffer.h` is now mandatory. I encountered an issue while
including ssl_sock.h in another patch:

include/types/ssl_sock.h:240:16: error: field ‘early_buf’ has incomplete type
  240 |  struct buffer early_buf;  /* buffer to store the early data 
received */

no backport needed.

Fixes: c0cdaffaa338 ("REORG: ssl: move ssl_sock_ctx and fix
cross-dependencies issues")
Signed-off-by: William Dauchy 
---
 include/types/ssl_sock.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/types/ssl_sock.h b/include/types/ssl_sock.h
index 99c964d6e..78639ac10 100644
--- a/include/types/ssl_sock.h
+++ b/include/types/ssl_sock.h
@@ -31,6 +31,7 @@
 #include 
 #include 
 
+#include <common/buffer.h>
 #include 
 #include 
 #include 
-- 
2.26.2




Re: [PATCH 1/3] BUG/MINOR: pollers: remove unneeded free in global init

2020-05-13 Thread William Dauchy
On Wed, May 13, 2020 at 03:52:54PM +0200, Willy Tarreau wrote:
> > This one breaks clang builds because it removes the fail_revt label but it 
> > is
> > still declared as a local label, and clang errors on it.
> 
> Ah crap! I didn't notice this part which didn't appear in the context
> of the patch. I didn't notice we still had a few such labels in very
> old files. Do you mind if instead I edit your patch to completely remove
> the line ?

ah sorry for that!

-- 
William



Re: [PATCH 2/3] BUG/MINOR: select: remove unneeded free in thread init

2020-05-13 Thread William Dauchy
On Wed, May 13, 2020 at 11:50:50AM +0200, Willy Tarreau wrote:
> It's not a bug, it's just a way to ensure a common error path undoes
> everything that was done before reaching it. If another operation is
> added after the DIR_WR malloc, it will only have to add its own undo
> operation to the fail label instead of having to figure which one was
> the last to succeed and whether or not it needs to be addressed.
> 
> Thus I'd rather not change it since it does not have any impact and
> makes the code more future-proof.

understood, thanks for the feedback.

-- 
William



Re: [RFC PATCH 1/1] MEDIUM: ssl: add alternative way to load certificates with io_uring

2020-05-11 Thread William Dauchy
On Mon, May 11, 2020 at 5:32 PM William Lallemand
 wrote:
> By removing, it just means:
> - In the case of a directory there is basicaly nothing to do but remove
>   the bundle code, everything will be loaded as usual, it will just be
>   done in separate SSL_CTX.
> - In the configuration, if you specify a bundle with "example.pem" to
>   load the .rsa and .ecdsa, I'll probably add a shim to emulate the
>   previous behavior, but they will be loaded in separate SSL_CTX.

understood, this will indeed simplify a lot of things. Thanks for the
clarification!
-- 
William



Re: [RFC PATCH 1/1] MEDIUM: ssl: add alternative way to load certificates with io_uring

2020-05-11 Thread William Dauchy
Hello William L.

Thank you for your answer.

On Mon, May 11, 2020 at 1:01 PM William Lallemand
 wrote:
> I'm wondering if we could have the timing directly in the logs or on the
> CLI, because I'm actually also doing this kind of things, and people
> could probably optimize the loading if they could see the loading time.

I agree it is something that was missing while doing my tests.
I kept removing/re-adding my timer each time I made some modifications,
to the point where I made a note to ask you whether you had some
tooling I had missed.
So that's definitely something I could do in a separate patch! What is
your preferred timing function?
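
(For what it's worth, something as simple as a monotonic clock around the
loading calls is enough for this kind of ad-hoc measurement; the helper below
is purely illustrative and not part of any patch.)

#include <time.h>

/* illustrative only: milliseconds elapsed since <start> on a monotonic clock */
static double elapsed_ms(const struct timespec *start)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) * 1000.0 +
	       (now.tv_nsec - start->tv_nsec) / 1000000.0;
}

/* usage:
 *	struct timespec t0;
 *	clock_gettime(CLOCK_MONOTONIC, &t0);
 *	... load the certificates ...
 *	fprintf(stderr, "cert loading took %.1f ms\n", elapsed_ms(&t0));
 */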

> Your patch is really interesting in my opinion, a lot of people could
> benefits of something like that. Regarding the reload part, I have some
> ideas to pass the ckch structures to the new processes without reloading
> the files one the filesystem, but I didn't implement anything yet.

ok; interesting.

> I'm currently reworking the ssl_sock.c and .h files, and splitting them
> in several files, so you will probably need to rebase your patch on top
> of this. At the moment I separated this in 3 parts: 1 part for the
> crt-list/directories, 1 part for the ckch, and 1 part for the
> SSL_CTX/bind. I'm not happy yet with what I have, but I'll do my best to
> push that as soon as possible in the git repository, I don't want the
> 2.2 to be released in the current state of the files.

good to know; that's mostly why I avoided making too many changes to the
file, to limit the rebase work while evolving my feature.

> I don't really know the io_uring API in details so I trust you on the
> implementation.

Yes, I've been using it for a few months now, so I'm starting to
better understand how to use it properly. There are indeed a few
different options where you can easily get trapped and make the
code more complicated.
I was however planning to ask Jens for a quick review once I have
something more feature-complete.

> Regarding the name of the file, ssl_load.c, it is really generic, I
> don't mind having a more specific file only for loading the SSL
> certificates with io_uring.

Yes, I totally agree; as it was more of an experiment I will do a
second pass on the naming and improve that. As I also started some
poll changes, it also helps me to name things better and make elements
more reusable.

> I'm wondering if we couldn't push directly the certificate in an X509
> structure instead of storing it in an intermediate buffer, and use the
> ckch instead of a specific cert_iobuf. That's just some thoughts, I don't
> know io_uring enough.

I will look into that; it was also on my todo list, but for now I
considered it OK for a first implementation. It mostly depends on whether
we are able to properly allocate memory at the right time, while keeping
the code clear enough. For now I consider it a possible
incremental improvement.
Thanks for the input.
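
For reference, going from an in-memory buffer to an X509 object is typically
done through a memory BIO; the snippet below is generic OpenSSL usage, not
code from this patch:

#include <openssl/bio.h>
#include <openssl/pem.h>
#include <openssl/x509.h>

/* illustrative only: parse a PEM certificate held in a memory buffer */
static X509 *x509_from_buffer(const char *buf, int len)
{
	BIO *bio = BIO_new_mem_buf(buf, len); /* read-only BIO over <buf> */
	X509 *cert;

	if (!bio)
		return NULL;
	cert = PEM_read_bio_X509(bio, NULL, NULL, NULL);
	BIO_free(bio);
	return cert; /* NULL if <buf> does not contain a valid PEM certificate */
}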

> The crtlist files could also benefit from this, since they can contain
> a lot of certificates too.

Yes, indeed, for now I limited myself to the cases I was concerned
about to avoid too much testing. I will improve that. I should have
warned about it in my commit message.

> I'm seeing that you are using in ssl_load_preparetree() the
> SSL_SOCK_KEYTYPES_NAME array, which is only used for merging a
> certificate bundle. I know that this can be confusing in the code
> because the bundle implementation is crappy. But in a directory we
> don't use any filename extension to load a PEM, we load everything as a
> PEM file with the exception of .key, .ocsp, .sctl and .issuer, which are
> loaded separately with their own functions. (The cert_exts[] array is
> used for that in ssl_sock.c)

ok, good to recall all files should be taken into account; for now I
blindly implemented my only known case. This will be part of the above
point to take into account all possible options.

> I'm going to remove the support for multi-cert bundle once 2.2 is
> released, so it will simplify a lot of things. I encourage people
> writing new features to not support multi-cert bundles, more
> particularly on the CLI.

So you mean that
example.pem.rsa
example.pem.ecdsa
will be loaded separately, like all the other certificates?
I'm not 100% sure what you meant by removing support for "multi-cert bundle".

> Unfortunately your patch is too late for 2.2, but I think it could be
> great for the next release!

Yes, and it was clear to me since the beginning that I would not have the
time to finish things early enough.

Thanks a lot for your feedback. I will try to combine Willy's and yours
for a future submission. I also hope we can find a good middle ground
between something perfect and something we can improve over time.

--
William



Re: [RFC PATCH 0/1] using io_uring to load certificates

2020-05-11 Thread William Dauchy
Hello Willy,

Thank you for your feedbacks.

On Sat, May 9, 2020 at 6:39 AM Willy Tarreau  wrote:
> Your work could be extended to other files, like errorfiles, maps, ACLs,
> etc. However with the config parser being very linear, it's extremely
> unlikely that we could actually benefit from this in places other than
> certs. What makes certs specific is that a single config line can induce
> hundreds of thousands of file accesses and that these ones do benefit in
> being performed in parallel.

Yes, I take your comment as also something to improve in my commit
message explanation.

> Despite this I'm wondering if we should think about a generic approach
> like an async file open with a callback on what to call once the file
> is opened, or optionally an automatic async read and a callback set up
> for received data. This would bring the benefit of allowing to perform
> runtime FS accesses without destroying performance: right now, if you
> try to access the file system from Lua or maybe from within one of the
> device detection libs, or if you'd like to reload some files from the
> CLI, the whole thread totally freezes until the I/O is completed, which
> is why such accesses are absolutely forbidden. But done asynchronously
> it could be different. We could imagine an async read API to be used
> from Lua. We could even imagine being able to trigger reloading certain
> files (ACL patterns or error files maybe) from the CLI, or even to serve
> static files. This will of course not solve the security issues that
> come with doing this, but at least it could solve the performance issue.

wow, I had not thought that far, but we could indeed make things more
generic. I will try to keep that in mind and improve things along the
way.
Regarding the CLI, I'm however a bit worried about `chroot`
support, which is important from my point of view.

> Probably that the work should be split into a functional and an
> implementation layer. The functional layer is the ability to perform
> async file operations. The implementation is done using uring. But for
> other OSes we could imagine using an emulation layer based on AIO or
> even a bunch of threads, which could still possibly provide some
> benefits during startup when loading tons of certificates, especially
> when that's from spinning disks (not sure if those are still in use
> though, last time I saw one was when I tried to repair my old Ultra5
> a few months ago).

ok, I will try to keep that in mind. I'm not sure I will be able to come up
with a 100% nice interface, but I will at least try to have something
we can iterate on.
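
Purely as an illustration of the "bunch of threads" emulation layer mentioned
above, an async read with a completion callback could look like the sketch
below; all the names are hypothetical and this is not part of any patch:

#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* hypothetical callback type: invoked once the whole file has been read;
 * <buf> is owned by the callback (NULL and len == -1 on error) */
typedef void (*file_read_cb)(char *buf, ssize_t len, void *arg);

struct async_read_job {
	char *path;
	file_read_cb cb;
	void *arg;
};

static void *async_read_thread(void *p)
{
	struct async_read_job *job = p;
	char *buf = NULL;
	ssize_t len = -1;
	int fd = open(job->path, O_RDONLY);

	if (fd >= 0) {
		off_t size = lseek(fd, 0, SEEK_END);

		lseek(fd, 0, SEEK_SET);
		buf = malloc(size);
		if (buf)
			len = read(fd, buf, size); /* single read() kept for brevity */
		close(fd);
	}
	job->cb(buf, len, job->arg);
	free(job->path);
	free(job);
	return NULL;
}

/* submit an asynchronous file read; returns 0 if the job was queued */
static int async_read_file(const char *path, file_read_cb cb, void *arg)
{
	struct async_read_job *job = calloc(1, sizeof(*job));
	pthread_t tid;

	if (!job)
		return -1;
	job->path = strdup(path);
	job->cb = cb;
	job->arg = arg;
	if (!job->path || pthread_create(&tid, NULL, async_read_thread, job) != 0) {
		free(job->path);
		free(job);
		return -1;
	}
	pthread_detach(tid);
	return 0;
}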

> From day one I've been totally convinced it's possible to create a new
> poller entirely based on io-uring. We could avoid a number of the ugly
> tricks we use in epoll to avoid useless epoll_ctl() calls. What's nice
> about our pollers is that they're entirely modular so it is perfectly
> possible to create your own and experiment with it.

yes, totally. For now I've duplicated the code into a new poller, but
depending on the result and the diff, I might simply propose a similar
#ifdef IO_URING in ev_poll.c to activate it or not. I will see the final
result. Regarding ev_epoll, there is also a hybrid implementation which
keeps `epoll_wait` but uses uring for all `epoll_ctl` calls.

Thanks a lot for your inputs,
-- 
William



[PATCH 3/3] CLEANUP: select: enhance readability in init

2020-05-11 Thread William Dauchy
while reading the code, I thought it was clearer to put one instruction
per line as it is mostly done elsewhere

Signed-off-by: William Dauchy 
---
 src/ev_select.c | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/src/ev_select.c b/src/ev_select.c
index 167bb7eb7..d15f3d828 100644
--- a/src/ev_select.c
+++ b/src/ev_select.c
@@ -225,9 +225,11 @@ static int init_select_per_thread()
int fd_set_bytes;
 
fd_set_bytes = sizeof(fd_set) * (global.maxsock + FD_SETSIZE - 1) / 
FD_SETSIZE;
-   if ((tmp_evts[DIR_RD] = (fd_set *)calloc(1, fd_set_bytes)) == NULL)
+   tmp_evts[DIR_RD] = (fd_set *)calloc(1, fd_set_bytes);
+   if (tmp_evts[DIR_RD] == NULL)
goto fail_rd;
-   if ((tmp_evts[DIR_WR] = (fd_set *)calloc(1, fd_set_bytes)) == NULL)
+   tmp_evts[DIR_WR] = (fd_set *)calloc(1, fd_set_bytes);
+   if (tmp_evts[DIR_WR] == NULL)
goto fail_wr;
return 1;
   fail_wr:
@@ -238,8 +240,10 @@ static int init_select_per_thread()
 
 static void deinit_select_per_thread()
 {
-   free(tmp_evts[DIR_WR]); tmp_evts[DIR_WR] = NULL;
-   free(tmp_evts[DIR_RD]); tmp_evts[DIR_RD] = NULL;
+   free(tmp_evts[DIR_WR]);
+   tmp_evts[DIR_WR] = NULL;
+   free(tmp_evts[DIR_RD]);
+   tmp_evts[DIR_RD] = NULL;
 }
 
 /*
-- 
2.26.2




[PATCH 1/3] BUG/MINOR: pollers: remove unneeded free in global init

2020-05-11 Thread William Dauchy
Since commit d4604adeaa8c ("MAJOR: threads/fd: Make fd stuffs
thread-safe"), we init pollers per thread using a helper. This was still
correct for mono-thread mode until commit cd7879adc2c4 ("BUG/MEDIUM:
threads: Run the poll loop on the main thread too"). We now use a deinit
helper for all threads, making those frees unneeded.

Only poll and select are affected by this very minor issue.

it could be backported from v1.8 to v2.1.

Fixes: cd7879adc2c4 ("BUG/MEDIUM: threads: Run the poll loop on the main
thread too")
Signed-off-by: William Dauchy 
---
 src/ev_poll.c   | 1 -
 src/ev_select.c | 5 +
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/src/ev_poll.c b/src/ev_poll.c
index 538f07057..8df244566 100644
--- a/src/ev_poll.c
+++ b/src/ev_poll.c
@@ -286,7 +286,6 @@ static int _do_init(struct poller *p)
  fail_swevt:
free(fd_evts[DIR_RD]);
  fail_srevt:
-   free(poll_events);
p->pref = 0;
return 0;
 }
diff --git a/src/ev_select.c b/src/ev_select.c
index ab021b97f..acfdbb94a 100644
--- a/src/ev_select.c
+++ b/src/ev_select.c
@@ -255,7 +255,7 @@ static int _do_init(struct poller *p)
p->private = NULL;
 
if (global.maxsock > FD_SETSIZE)
-   goto fail_revt;
+   goto fail_srevt;
 
fd_set_bytes = sizeof(fd_set) * (global.maxsock + FD_SETSIZE - 1) / 
FD_SETSIZE;
 
@@ -272,9 +272,6 @@ static int _do_init(struct poller *p)
  fail_swevt:
free(fd_evts[DIR_RD]);
  fail_srevt:
-   free(tmp_evts[DIR_WR]);
-   free(tmp_evts[DIR_RD]);
- fail_revt:
p->pref = 0;
return 0;
 }
-- 
2.26.2




[PATCH 2/3] BUG/MINOR: select: remove unneeded free in thread init

2020-05-11 Thread William Dauchy
we wrongly free `tmp_evts[DIR_RD]` in the error case.

it could be backported from v1.8 to v2.1.

Fixes: d4604adeaa8c ("MAJOR: threads/fd: Make fd stuffs thread-safe")
Signed-off-by: William Dauchy 
---
 src/ev_select.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/ev_select.c b/src/ev_select.c
index acfdbb94a..167bb7eb7 100644
--- a/src/ev_select.c
+++ b/src/ev_select.c
@@ -226,13 +226,13 @@ static int init_select_per_thread()
 
fd_set_bytes = sizeof(fd_set) * (global.maxsock + FD_SETSIZE - 1) / 
FD_SETSIZE;
if ((tmp_evts[DIR_RD] = (fd_set *)calloc(1, fd_set_bytes)) == NULL)
-   goto fail;
+   goto fail_rd;
if ((tmp_evts[DIR_WR] = (fd_set *)calloc(1, fd_set_bytes)) == NULL)
-   goto fail;
+   goto fail_wr;
return 1;
-  fail:
+  fail_wr:
free(tmp_evts[DIR_RD]);
-   free(tmp_evts[DIR_WR]);
+  fail_rd:
return 0;
 }
 
-- 
2.26.2




[PATCH 0/3] minor fixes for pollers

2020-05-11 Thread William Dauchy
Hello,

Here are a few minor fixes on pollers as I'm currently spending some
time in this area.

William Dauchy (3):
  BUG/MINOR: pollers: remove unneeded free in global init
  BUG/MINOR: select: remove unneeded free in thread init
  CLEANUP: select: enhance readability in init

 src/ev_poll.c   |  1 -
 src/ev_select.c | 25 +
 2 files changed, 13 insertions(+), 13 deletions(-)

-- 
2.26.2




Re: Proposal to resolve (again) the include dependency hell

2020-05-09 Thread William Dauchy
On Sat, May 09, 2020 at 12:48:30PM +0200, Willy Tarreau wrote:
> I prefer to warn, this is long and only of interest for those who code,
> so don't force yourself to digest this if not necessary :-)

nice summary as always!

> In the end I think that would leave us more or less with this:
>   - creation of a new "base" directory to put the few files that must not
> depend on anything else and which are solely used to ease portability
> (compat, defaults, compiler as of today).
> 
>   - move of ebtree/* into import/

I must admit that on this specific ebtree one, I have always asked myself why
it was not maintained out-of-tree. I guess that's mainly for simplicity and
historical reasons. I also don't know whether it is already maintained
elsewhere, but I guess it could be, or could have been, used in other
projects.

>   - creation of haproxy/ to move everything related to our internal
> architecture there, with a second file when static functions are
> needed.
> 
>   - kill types/ and proto/

Having been confronted with the situation very recently, it's true
that it is not necessarily natural to always ask yourself what is a type
and what is a proto, so I agree it would probably make things easier to
think about and search, even though files would be bigger.
Even when reading code, it will be easier to find the information
in the same place, I believe.

On top of that, there are sometimes situations where defines/types/protos are
declared within the .c file because they are only used there. In my very own
opinion I would enforce moving them into .h files. While reading the code,
that's where you would expect to find them, and when you suddenly
need them in another .c file, it would avoid having to reorganise things. I
might be completely wrong here though.

> I'm interested in any opinion on the subject, especially from those who
> touch this code all day, but also from those who touch is once in a while
> and who experienced difficulties finding their way through this.

So I'm not a daily user of those files, but I believe it is often
annoying to land in a circular dependency issue; at some point you
somehow find a solution, and once it compiles without warnings, you just hope
not to touch them anymore. I also have the frustration of not being 100%
sure I did not forget some includes which are naturally included by
other dependencies, or worse, of leaving extra includes which are not needed
anymore because code was removed; anyway, that's more inherent to the
language itself.

Well, this was my short two-cents opinion, even though I do not count
much.

-- 
William



[RFC PATCH 1/1] MEDIUM: ssl: add alternative way to load certificates with io_uring

2020-05-08 Thread William Dauchy
experimental patch to make use of io_uring to batch-load certificates;
this drastically reduces the number of syscalls and might benefit
setups with a large number of certificates.
it uses liburing in order to batch operations as follows:
for each certificate directory, we apply the same operations on each
file:
 - statx
 - openat
 - read
 - close
the results are stored in an ebtree. Then when we need to load them
with the SSL lib, instead of providing a path, we provide a buffer to
be consumed. The tree is freed after each directory.
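
To give an idea of the pattern, here is a stripped-down sketch of batching
the openat step with the liburing API (no error handling, not the actual
patch code):

#include <fcntl.h>
#include <liburing.h>

/* illustrative only: issue all openat() calls of a file list through a
 * single io_uring submission instead of one syscall per file */
static int batch_open(const char **paths, int *fds, int nb)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	int i;

	if (io_uring_queue_init(nb, &ring, 0) < 0)
		return -1;

	for (i = 0; i < nb; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		io_uring_prep_openat(sqe, AT_FDCWD, paths[i], O_RDONLY, 0);
		io_uring_sqe_set_data(sqe, (void *)(long)i); /* remember which file */
	}
	io_uring_submit(&ring); /* one syscall submits <nb> opens */

	for (i = 0; i < nb; i++) {
		io_uring_wait_cqe(&ring, &cqe);
		fds[(long)io_uring_cqe_get_data(cqe)] = cqe->res; /* fd or -errno */
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}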

for now it requires quite a large file descriptor limit, as all
operation types are done one after another, so the fd limit should
be set higher than the number of certificates you have to load. This
part is probably going to evolve very soon, as io_uring plans to allow
chaining operations on a given pre-defined file descriptor.

on a setup with 25k certificates I was able to measure a minimum of 20%
gain on the init time when the filesystem cache is empty. My testing is
as follows; for each directory:
- put a timer before `ssl_sock_load_cert_list_file`, including
  `ssl_load_certiodir` in the io_uring case.
- measure the time at the end of those operations, just after
  `ssl_free_certiodir` in the case of io_uring.

some results with my old ssd laptop, the average certificate init time
in ms, using 24686 certs:
  io_uring_empty_cache:   8554
  io_uring_full_cache:    8100
  syscall_empty_cache:   11395
  syscall_full_cache:     8087

The benefits are of course less obvious on subsequent reloads on a
machine where the haproxy process is the only one managing its own memory,
but it can be interesting on a busier setup where the filesystem cache is
often invalidated. However, a very quick overview of reload times on our
production seems to show that more than 50% of the time the files are
no longer present in the cache while reloading (I hope I'm not too far
from reality in my analysis); so I would expect this patch to help
flatten our reload time. I have however not tested it in a
production environment.

It remains Linux-specific though; the operations used require a kernel
>= v5.6. Even though some operations were already supported in v5.4, I
did not change the code to adapt the behaviour. All my tests were
performed on >= v5.7-rc4 at the time of writing.

it introduces three build parameters:
- USE_IO_URING
- IO_URING_INC
- IO_URING_LIB

Signed-off-by: William Dauchy 
---
 Makefile |  10 +-
 doc/io_uring.txt |  18 +++
 include/proto/ssl_load.h |  23 
 include/types/ssl_load.h |  38 ++
 include/types/ssl_sock.h |   7 +
 src/ssl_load.c   | 274 +++
 src/ssl_sock.c   |  32 -
 7 files changed, 395 insertions(+), 7 deletions(-)
 create mode 100644 doc/io_uring.txt
 create mode 100644 include/proto/ssl_load.h
 create mode 100644 include/types/ssl_load.h
 create mode 100644 src/ssl_load.c

diff --git a/Makefile b/Makefile
index 1e4213989..afba381c0 100644
--- a/Makefile
+++ b/Makefile
@@ -52,6 +52,7 @@
 #   USE_SYSTEMD  : enable sd_notify() support.
 #   USE_OBSOLETE_LINKER  : use when the linker fails to emit 
__start_init/__stop_init
 #   USE_THREAD_DUMP  : use the more advanced thread state dump system. 
Automatic.
+#   USE_IO_URING : use IO_URING advanced async features
 #
 # Options can be forced by specifying "USE_xxx=1" or can be disabled by using
 # "USE_xxx=" (empty string). The list of enabled and disabled options for a
@@ -293,7 +294,8 @@ use_opts = USE_EPOLL USE_KQUEUE USE_NETFILTER   
  \
USE_GETADDRINFO USE_OPENSSL USE_LUA USE_FUTEX USE_ACCEPT4  \
USE_ZLIB USE_SLZ USE_CPU_AFFINITY USE_TFO USE_NS   \
USE_DL USE_RT USE_DEVICEATLAS USE_51DEGREES USE_WURFL USE_SYSTEMD  \
-   USE_OBSOLETE_LINKER USE_PRCTL USE_THREAD_DUMP USE_EVPORTS
+   USE_OBSOLETE_LINKER USE_PRCTL USE_THREAD_DUMP USE_EVPORTS  \
+   USE_IO_URING
 
  Target system options
 # Depending on the target platform, some options are set, as well as some
@@ -531,6 +533,12 @@ ifneq ($(USE_BACKTRACE),)
 OPTIONS_LDFLAGS += -Wl,$(if $(EXPORT_SYMBOL),$(EXPORT_SYMBOL),--export-dynamic)
 endif
 
+ifneq ($(USE_IO_URING),)
+OPTIONS_OBJS  += src/ssl_load.o
+OPTIONS_CFLAGS  += $(if $(IO_URING_INC),-I$(IO_URING_INC))
+OPTIONS_LDFLAGS += $(if $(IO_URING_LIB),-L$(IO_URING_LIB)) -luring
+endif
+
 ifneq ($(USE_OPENSSL),)
 # OpenSSL is packaged in various forms and with various dependencies.
 # In general -lssl is enough, but on some platforms, -lcrypto may be needed,
diff --git a/doc/io_uring.txt b/doc/io_uring.txt
new file mode 100644
index 0..aa290dd0d
--- /dev/null
+++ b/doc/io_uring.txt
@@ -0,0 +1,18 @@
+Linux IO

[RFC PATCH 0/1] using io_uring to load certificates

2020-05-08 Thread William Dauchy
Hello,

Here is a first experiment with io_uring for certificate loading. It
targets heavy users of certificates (several thousand), most likely
public hosting operators. I believe it could be post-v2.2 material if
people are interested. I tried to make it as little invasive as possible in
ssl_sock.c so it remains maintainable.

Other experiments are ongoing around epoll operations (I'm thinking
about an alternative way to do busy polling), but that's something I want
to extend later with more serious testing. I also recently got some
crazy ideas around traffic duplication to make use of async, but that's
another story.

see also https://kernel.dk/io_uring.pdf for more info about io_uring.

William Dauchy (1):
  MEDIUM: ssl: add alternative way to load certificates with io_uring

 Makefile |  10 +-
 doc/io_uring.txt |  18 +++
 include/proto/ssl_load.h |  23 
 include/types/ssl_load.h |  38 ++
 include/types/ssl_sock.h |   7 +
 src/ssl_load.c   | 274 +++
 src/ssl_sock.c   |  32 -
 7 files changed, 395 insertions(+), 7 deletions(-)
 create mode 100644 doc/io_uring.txt
 create mode 100644 include/proto/ssl_load.h
 create mode 100644 include/types/ssl_load.h
 create mode 100644 src/ssl_load.c

-- 
2.26.2




Re: [ANNOUNCE] haproxy-2.2-dev7

2020-05-05 Thread William Dauchy
Hello,

Thank you for this release update!

On Tue, May 5, 2020 at 11:46 PM Willy Tarreau  wrote:
> HAProxy 2.2-dev7 was released on 2020/05/05. It added 205 new commits
> after version 2.2-dev6.
>
> The most visible change in this version is the rework of the health checks
> that was started by Gaëtan and completed by Christopher. I'll certainly say
> a number of stupidities about all this so I won't enter into details, but the
> main points to be aware of is that the health checks which for 18 years have
> been the ugliest part of the internals have now become smart. They are now
> all internally implemented on top of tcp-check rules, and that these ones
> were improved to satisfy the new requirements. For now all this new stuff is
> not yet fully exploited beyond what is needed for the checks but we can hope
> a lot of new cool stuff in a near future.
>
> In addition, HTTP checks now run over HTX and employ the muxes so they can
> now run over HTTP/1 and HTTP/2, and can separately set headers and body.
> All the elements may be extracted and processed for advanced checks. You
> should refer to the documentation to figure all the details. Please beware
> that the check configuration rules are subject to change a little bit before
> the release but the main principle is already here.

> Christopher Faulet (136):
>   MAJOR: checks: Implement HTTP check using tcp-check rules

What is funny is that after this commit we noticed we had wrongly relied
on a config which was written this way:
  http-check expect status 200 string passing
instead of:
  http-check expect status 200
  http-check expect string passing

I first thought it was a bad regression, even though the documentation
does not allow this syntax, but `string passing` was never evaluated and so
silently ignored; now things are enforced, which is better as it lets us
see those long-standing mistakes ;)
Thanks Christopher!

-- 
William



Re: [PATCH] CLEANUP: connections: align function declaration

2020-05-04 Thread William Dauchy
On Mon, May 04, 2020 at 01:56:10PM +0200, William Dauchy wrote:
> Sorry for this one, I originally did update the server.h file, but I
> don't know why I forgot to add it in my patch submission :/
> then I saw you fixed it while pushing my commit, thank you for this.

I'm talking too fast, the .h was correctly sent.
In fact it was indeed a mistake introduced in between while pushing the
fix. I lost myself between the different versions here and there.
Anyway :)

-- 
William



Re: [PATCH] CLEANUP: connections: align function declaration

2020-05-04 Thread William Dauchy
Hello Olivier,

Sorry for this one, I originally did update the server.h file, but I
don't know why I forgot to add it in my patch submission :/
then I saw you fixed it while pushing my commit, thank you for this.

On Mon, May 04, 2020 at 01:52:40PM +0200, William Dauchy wrote:
> this patch should be backported where commit 6318d33ce625
> ("BUG/MEDIUM: connections: force connections cleanup on server changes")
> will be backported, that is to say v1.9 to v2.1.
> 
> Fixes: 6318d33ce625 ("BUG/MEDIUM: connections: force connections cleanup
> on server changes")
> Signed-off-by: William Dauchy 
> ---
>  src/server.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/server.c b/src/server.c
> index 94e7aeed1..9e74fa485 100644
> --- a/src/server.c
> +++ b/src/server.c
> @@ -5169,7 +5169,7 @@ struct task *srv_cleanup_toremove_connections(struct 
> task *task, void *context,
>  /* cleanup connections for a given server
>   * might be useful when going on forced maintenance or live changing ip/port
>   */
> -void srv_cleanup_connections(struct server *srv)
> +static void srv_cleanup_connections(struct server *srv)
>  {
>   struct connection *conn;
>   int did_remove;
> -- 
> 2.26.2
> 

-- 
William



[PATCH] CLEANUP: connections: align function declaration

2020-05-04 Thread William Dauchy
this patch should be backported where commit 6318d33ce625
("BUG/MEDIUM: connections: force connections cleanup on server changes")
will be backported, that is to say v1.9 to v2.1.

Fixes: 6318d33ce625 ("BUG/MEDIUM: connections: force connections cleanup
on server changes")
Signed-off-by: William Dauchy 
---
 src/server.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/server.c b/src/server.c
index 94e7aeed1..9e74fa485 100644
--- a/src/server.c
+++ b/src/server.c
@@ -5169,7 +5169,7 @@ struct task *srv_cleanup_toremove_connections(struct task 
*task, void *context,
 /* cleanup connections for a given server
  * might be useful when going on forced maintenance or live changing ip/port
  */
-void srv_cleanup_connections(struct server *srv)
+static void srv_cleanup_connections(struct server *srv)
 {
struct connection *conn;
int did_remove;
-- 
2.26.2




Re: [PATCH 1/1] BUG/MEDIUM: connections: force connections cleanup on server changes

2020-05-02 Thread William Dauchy
On Sat, May 02, 2020 at 10:17:01PM +0200, Olivier Houchard wrote:
> If that's the only change you have for a v2, don't bother, I already
> integrated it in what I plan to push :)

ok, thanks a lot!

-- 
William



Re: [PATCH 0/1] fix idle connections cleanup

2020-05-02 Thread William Dauchy
Hello Olivier,

On Sat, May 02, 2020 at 10:11:26PM +0200, Olivier Houchard wrote:
> And I think your fix is correct, so I will apply it as-is. However, I
> think it should be applied to previous branches, which also had idle
> connection pools. It is probably less noticeable before 079cb9af22da6,
> because before that patch, one thread couldn't reuse a connection owned
> by another thread, so to reproduce it you'd have to test it until you
> hit the same thread twice, but I don't see why it wouldn't be an issue
> there

Thanks for the review, I hope it is not too late to send a v2 with a
small fix I realised after sending my first version.
Will also change the commit message regarding the backport.


Thanks,
-- 
William



Re: [PATCH 1/1] BUG/MEDIUM: connections: force connections cleanup on server changes

2020-05-02 Thread William Dauchy
On Sat, May 02, 2020 at 09:52:36PM +0200, William Dauchy wrote:
> +void srv_cleanup_connections(struct server *srv)
> +{
> + struct connection *conn;
> + int did_remove;
> + int i;
> + int j;
> +
> + HA_SPIN_LOCK(OTHER_LOCK, &idle_conn_srv_lock);
> + for (i = 0; i < global.nbthread; i++) {
> + did_remove = 0;
> + HA_SPIN_LOCK(OTHER_LOCK, &toremove_lock[i]);
> + for (j = 0; j < srv->curr_idle_conns; j++) {
> + conn = MT_LIST_POP(&srv->idle_conns[i], struct connection *, list);
> + if (!conn)
> + conn = MT_LIST_POP(&srv->safe_conns[i],
> +    struct connection *, list);
> + if (!conn)
> + break;

just realised I forgot:
did_remove = 1;

but I will wait for other feedback before sending a possible v2.

> + MT_LIST_ADDQ(&toremove_connections[i], (struct mt_list *)&conn->list);
> + }
> + HA_SPIN_UNLOCK(OTHER_LOCK, &toremove_lock[i]);
> + if (did_remove)
> + task_wakeup(idle_conn_cleanup[i], TASK_WOKEN_OTHER);
> + }
> + HA_SPIN_UNLOCK(OTHER_LOCK, &idle_conn_srv_lock);
> +}
> +
>  struct task *srv_cleanup_idle_connections(struct task *task, void *context, 
> unsigned short state)
>  {
>   struct server *srv;
> -- 
> 2.26.2
> 



[PATCH 0/1] fix idle connections cleanup

2020-05-02 Thread William Dauchy
Hello Olivier,

The following patch is an attempt to fix a change of behaviour we encountered
following commit:

079cb9af22da6 ("MEDIUM: connections: Revamp the way idle connections are 
killed")
being the origin of this new behaviour.

All details are in the commit message itself.
The approach is most likely not the right one, but feel free to advise so
I may improve it.

William Dauchy (1):
  BUG/MEDIUM: connections: force connections cleanup on server changes

 include/proto/server.h |  1 +
 src/server.c   | 37 -
 2 files changed, 37 insertions(+), 1 deletion(-)

Thanks,
-- 
William




[PATCH 1/1] BUG/MEDIUM: connections: force connections cleanup on server changes

2020-05-02 Thread William Dauchy
I've been trying to understand a change of behaviour between v2.2dev5 and
v2.2dev6. Indeed our probe is regularly testing to add and remove
servers on a given backend such as:

 # echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
 113 be_foo 1 srv0 10.236.139.34 2 0 1 1 263 15 3 4 6 0 0 0 - 31255 -
 113 be_foo 2 srv1 0.0.0.0 0 1 256 256 0 15 3 0 14 0 0 0 - 0 -

 -> curl on the corresponding frontend: reply from server:31255

 # echo "set server be_foo/srv1 addr 10.236.139.34 port 31257" | sudo socat 
stdio /var/lib/haproxy/stats
 IP changed from '0.0.0.0' to '10.236.139.34', port changed from '0' to '31257' 
by 'stats socket command'
 # echo "set server be_foo/srv1 weight 256" | sudo socat stdio 
/var/lib/haproxy/stats
 # echo "set server be_foo/srv1 check-port 8500" | sudo socat stdio 
/var/lib/haproxy/stats
 health check port updated.
 # echo "set server be_foo/srv1 state ready" | sudo socat stdio 
/var/lib/haproxy/stats
 # echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
 113 be_foo 1 srv0 10.236.139.34 2 0 1 1 105 15 3 4 6 0 0 0 - 31255 -
 113 be_foo 2 srv1 10.236.139.34 2 0 256 256 2319 15 3 2 6 0 0 0 - 31257 -

 -> curl on the corresponding frontend: reply for server:31257
 (notice the difference of weight)

 # echo "set server be_foo/srv1 state maint" | sudo socat stdio 
/var/lib/haproxy/stats
 # echo "set server be_foo/srv1 addr 0.0.0.0 port 0" | sudo socat stdio 
/var/lib/haproxy/stats
 IP changed from '10.236.139.34' to '0.0.0.0', port changed from '31257' to '0' 
by 'stats socket command'
 # echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
 113 be_foo 1 srv0 10.236.139.34 2 0 1 1 263 15 3 4 6 0 0 0 - 31255 -
 113 be_foo 2 srv1 0.0.0.0 0 1 256 256 0 15 3 0 14 0 0 0 - 0 -

 -> curl on the corresponding frontend: reply from server:31255

 # echo "set server be_foo/srv1 addr 10.236.139.34 port 31256" | sudo socat 
stdio /var/lib/haproxy/stats
 IP changed from '0.0.0.0' to '10.236.139.34', port changed from '0' to '31256' 
by 'stats socket command'
 # echo "set server be_foo/srv1 weight 256" | sudo socat stdio 
/var/lib/haproxy/stats
 # echo "set server be_foo/srv1 check-port 8500" | sudo socat stdio 
/var/lib/haproxy/stats
 health check port updated.
 # echo "set server be_foo/srv1 state ready" | sudo socat stdio 
/var/lib/haproxy/stats
 # echo "show servers state be_foo" | sudo socat stdio /var/lib/haproxy/stats
 113 be_foo 1 srv0 10.236.139.34 2 0 1 1 105 15 3 4 6 0 0 0 - 31255 -
 113 be_foo 2 srv1 10.236.139.34 2 0 256 256 2319 15 3 2 6 0 0 0 - 31256 -

 -> curl on the corresponding frontend: reply from server:31257 (!)

Here we would indeed expect to get an answer from server:31256. The issue
is highly linked to the usage of `pool-purge-delay`, with a value which
is higher than the duration of the test, 10s in our case.

a git bisect between dev5 and dev6 seems to show commit
079cb9af22da6 ("MEDIUM: connections: Revamp the way idle connections are 
killed")
being the origin of this new behaviour.

So if I understand the latter correctly, it seems that it was more a
matter of chance that we did not see the issue earlier.

My patch proposes to force clean idle connections in the two following
cases:
- we set a (still running) server to maintenance
- we change the ip/port of a server

A backport is not needed as the commit revealing the issue is present
on the v2.2 branch only.

Fixes: 079cb9af22da6 ("MEDIUM: connections: Revamp the way idle
connections are killed")
Signed-off-by: William Dauchy 
---
 include/proto/server.h |  1 +
 src/server.c   | 37 -
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/include/proto/server.h b/include/proto/server.h
index f8fed3148..388c7e2ff 100644
--- a/include/proto/server.h
+++ b/include/proto/server.h
@@ -65,6 +65,7 @@ const char *update_server_fqdn(struct server *server, const 
char *fqdn, const ch
 int snr_resolution_cb(struct dns_requester *requester, struct dns_nameserver 
*nameserver);
 int snr_resolution_error_cb(struct dns_requester *requester, int error_code);
 struct server *snr_check_ip_callback(struct server *srv, void *ip, unsigned 
char *ip_family);
+void srv_cleanup_connections(struct server *srv);
 struct task *srv_cleanup_idle_connections(struct task *task, void *ctx, 
unsigned short state);
 struct task *srv_cleanup_toremove_connections(struct task *task, void 
*context, unsigned short state);
 
diff --git a/src/server.c b/src/server.c
index 4d5e706d3..751439d76 100644
--- a/src/server.c
+++ b/src/server.c
@@ -3669,8 +3669,11 @@ const char *update_server_addr_port(struct server *s, 
const char *addr, const ch
}
 
 out:
-   if (changed)
+   if (changed) {
+   /* force connection cleanup on the given serve

Re: [ANNOUNCE] haproxy-2.2-dev6

2020-04-28 Thread William Dauchy
On Tue, Apr 28, 2020 at 8:50 PM Willy Tarreau  wrote:
> So just to let you know, we could finally nail the issue down to a
> change I did on mux-h1 at the end of dev4 (so dev5 is also affected).
> Only front connections dying in a dirty way were affected (i.e. users
> disappearing from the net after they got a response, the timeout was
> not rearmed).

oh, that's odd; I based my statement on our automatic probing, so maybe
we have another issue.

> I'm not aware of anything past dev5 which could condition its
> occurrence. As such it's very likely that you have this bug as well
> as I don't see what could work around it, but that your workload makes
> you very unlikely to encounter it.
>
> I've pushed the fix below now:
>
>   ca39747dcf ("BUG/MEDIUM: mux-h1: make sure we always have a timeout on 
> front connections")
>
> Next time you rebuild, it could make sense to apply it (even on top of
> your dev5) to improve the stability. If by any chance you retest dev6
> I'm interested in knowing if you still get problems with it, of course :-)

Thanks for the follow-up, I will apply it on top of dev6 as I'm too lazy
to adapt the patch on top of dev5 ;-)
Our probing will quickly tell us the result!
-- 
William



Re: [ANNOUNCE] haproxy-2.2-dev6

2020-04-27 Thread William Dauchy
Hello Willy,

On Mon, Apr 27, 2020 at 7:29 PM Willy Tarreau  wrote:
> just a quick note, be careful with -dev6, monitor your FDs from time
> to time. Today it caused an outage on haproxy.org after all FDs were
> in use. Sadly we had skipped a number of -dev in the past, jumping
> from something between dev2 and dev3 to dev6, so the breakage might
> have happened anywhere in between.
> The symptoms were that the CLI was still working pretty fine, "show fd"
> would show a lot of FD and "show sess" would show only the CLI's socket.
> A reload managed to get my stats request immediately accepted by the new
> process so that proves that it was waiting in the accept queue, thus
> accept() was refraining from doing its work.
> For now that's all I know, but I do have traces so I hope we'll quickly
> figure where it comes from.
> I just wanted to let you know given that others might start to run a few
> -dev releases in production and such issues are very nasty to detect :-/

If this can help, the issue is possibly between dev5 and dev6. We have
been following the dev releases quite closely, but we had to revert
from dev6 to dev5 as it produced issues on our side - whereas it is
running perfectly fine on dev5. I did not report it earlier (shame
on me) because I did not have the time to investigate and collect some
more details. I can however do that in the following days if
necessary; but I believe you now have traces on your side as well.
-- 
William



Re: [PATCH] MINOR: config: make strict limits enabled by default

2020-03-28 Thread William Dauchy
On Sat, Mar 28, 2020 at 7:29 PM Lukas Tribus  wrote:
> master is still for 2.2 which is in development. If you want to target
> v2.3, you have to wait until 2.2 is released.

oh true, I'm mixing versions, probably because we already started to
deploy part of v2.2...

Sorry for that, please ignore this patch :)
-- 
William



[PATCH v2] MINOR: config: make strict limits enabled by default

2020-03-28 Thread William Dauchy
as agreed a few months ago, enable strict-limits for v2.3 and
update the configuration manual accordingly

Signed-off-by: William Dauchy 
---
 doc/configuration.txt | 11 +--
 src/cfgparse-global.c |  2 --
 src/haproxy.c | 17 +++--
 3 files changed, 12 insertions(+), 18 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 8be19e369..3057d5198 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1536,12 +1536,11 @@ wurfl-cache-size 
   with USE_WURFL=1.
 
 strict-limits
-  Makes process fail at startup when a setrlimit fails. Haproxy is tries to set
-  the best setrlimit according to what has been calculated. If it fails, it
-  will emit a warning. Use this option if you want an explicit failure of
-  haproxy when those limits fail. This option is disabled by default. If it has
-  been enabled, it may still be forcibly disabled by prefixing it with the "no"
-  keyword.
+  Makes process fail at startup when a setrlimit fails. Haproxy tries to set 
the
+  best setrlimit according to what has been calculated. If it fails, it will
+  emit a warning. This option is here to guarantee an explicit failure of
+  haproxy when those limits fail. It is enabled by default. It may still be
+  forcibly disabled by prefixing it with the "no" keyword.
 
 3.2. Performance tuning
 ---
diff --git a/src/cfgparse-global.c b/src/cfgparse-global.c
index 0e6905976..4c98df8e7 100644
--- a/src/cfgparse-global.c
+++ b/src/cfgparse-global.c
@@ -1196,8 +1196,6 @@ int cfg_parse_global(const char *file, int linenum, char 
**args, int kwm)
goto out;
if (kwm == KWM_NO)
global.tune.options &= ~GTUNE_STRICT_LIMITS;
-   else
-   global.tune.options |= GTUNE_STRICT_LIMITS;
}
else {
struct cfg_kw_list *kwl;
diff --git a/src/haproxy.c b/src/haproxy.c
index 278476c3e..6f57f4cb6 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1705,6 +1705,7 @@ static void init(int argc, char **argv)
 #if defined(SO_REUSEPORT)
global.tune.options |= GTUNE_USE_REUSEPORT;
 #endif
+   global.tune.options |= GTUNE_STRICT_LIMITS;
 
pid = getpid();
progname = *argv;
@@ -3062,8 +3063,7 @@ int main(int argc, char **argv)
if (setrlimit(RLIMIT_NOFILE, &limit) != -1)
getrlimit(RLIMIT_NOFILE, &limit);
 
-   ha_warning("[%s.main()] Cannot raise FD limit 
to %d, limit is %d. "
-  "This will fail in >= v2.3\n",
+   ha_warning("[%s.main()] Cannot raise FD limit 
to %d, limit is %d.\n",
   argv[0], global.rlimit_nofile, 
(int)limit.rlim_cur);
global.rlimit_nofile = limit.rlim_cur;
}
@@ -3082,8 +3082,7 @@ int main(int argc, char **argv)
exit(1);
}
else
-   ha_warning("[%s.main()] Cannot fix MEM limit to 
%d megs."
-  "This will fail in >= v2.3\n",
+   ha_warning("[%s.main()] Cannot fix MEM limit to 
%d megs.\n",
   argv[0], global.rlimit_memmax);
}
 #else
@@ -3095,8 +3094,7 @@ int main(int argc, char **argv)
exit(1);
}
else
-   ha_warning("[%s.main()] Cannot fix MEM limit to 
%d megs.",
-  "This will fail in >= v2.3\n",
+   ha_warning("[%s.main()] Cannot fix MEM limit to 
%d megs.\n",
   argv[0], global.rlimit_memmax);
}
 #endif
@@ -3282,8 +3280,7 @@ int main(int argc, char **argv)
}
else
ha_alert("[%s.main()] FD limit (%d) too low for 
maxconn=%d/maxsock=%d. "
-"Please raise 'ulimit-n' to %d or more to 
avoid any trouble."
-"This will fail in >= v2.3\n",
+"Please raise 'ulimit-n' to %d or more to 
avoid any trouble.\n",
 argv[0], (int)limit.rlim_cur, global.maxconn, 
global.maxsock,
 global.maxsock);
}
@@ -3565,7 +3562,7 @@ int main(int argc, char **argv)
}
else
ha_warning("[%s.main()] Failed to set the raise 
the maximum "
-  

[PATCH] MINOR: config: make strict limits enabled by default

2020-03-28 Thread William Dauchy
as agreed a few months ago, enable strict-limits for v2.3

Signed-off-by: William Dauchy 
---
 src/cfgparse-global.c |  2 --
 src/haproxy.c | 17 +++--
 2 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/src/cfgparse-global.c b/src/cfgparse-global.c
index 0e6905976..4c98df8e7 100644
--- a/src/cfgparse-global.c
+++ b/src/cfgparse-global.c
@@ -1196,8 +1196,6 @@ int cfg_parse_global(const char *file, int linenum, char 
**args, int kwm)
goto out;
if (kwm == KWM_NO)
global.tune.options &= ~GTUNE_STRICT_LIMITS;
-   else
-   global.tune.options |= GTUNE_STRICT_LIMITS;
}
else {
struct cfg_kw_list *kwl;
diff --git a/src/haproxy.c b/src/haproxy.c
index 278476c3e..6f57f4cb6 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1705,6 +1705,7 @@ static void init(int argc, char **argv)
 #if defined(SO_REUSEPORT)
global.tune.options |= GTUNE_USE_REUSEPORT;
 #endif
+   global.tune.options |= GTUNE_STRICT_LIMITS;
 
pid = getpid();
progname = *argv;
@@ -3062,8 +3063,7 @@ int main(int argc, char **argv)
if (setrlimit(RLIMIT_NOFILE, &limit) != -1)
getrlimit(RLIMIT_NOFILE, &limit);
 
-   ha_warning("[%s.main()] Cannot raise FD limit 
to %d, limit is %d. "
-  "This will fail in >= v2.3\n",
+   ha_warning("[%s.main()] Cannot raise FD limit 
to %d, limit is %d.\n",
   argv[0], global.rlimit_nofile, 
(int)limit.rlim_cur);
global.rlimit_nofile = limit.rlim_cur;
}
@@ -3082,8 +3082,7 @@ int main(int argc, char **argv)
exit(1);
}
else
-   ha_warning("[%s.main()] Cannot fix MEM limit to 
%d megs."
-  "This will fail in >= v2.3\n",
+   ha_warning("[%s.main()] Cannot fix MEM limit to 
%d megs.\n",
   argv[0], global.rlimit_memmax);
}
 #else
@@ -3095,8 +3094,7 @@ int main(int argc, char **argv)
exit(1);
}
else
-   ha_warning("[%s.main()] Cannot fix MEM limit to 
%d megs.",
-  "This will fail in >= v2.3\n",
+   ha_warning("[%s.main()] Cannot fix MEM limit to 
%d megs.\n",
   argv[0], global.rlimit_memmax);
}
 #endif
@@ -3282,8 +3280,7 @@ int main(int argc, char **argv)
}
else
ha_alert("[%s.main()] FD limit (%d) too low for 
maxconn=%d/maxsock=%d. "
-"Please raise 'ulimit-n' to %d or more to 
avoid any trouble."
-"This will fail in >= v2.3\n",
+"Please raise 'ulimit-n' to %d or more to 
avoid any trouble.\n",
 argv[0], (int)limit.rlim_cur, global.maxconn, 
global.maxsock,
 global.maxsock);
}
@@ -3565,7 +3562,7 @@ int main(int argc, char **argv)
}
else
ha_warning("[%s.main()] Failed to set the raise 
the maximum "
-  "file size. This will fail in >= 
v2.3\n", argv[0]);
+  "file size.\n", argv[0]);
}
 #endif
 
@@ -3579,7 +3576,7 @@ int main(int argc, char **argv)
}
else
ha_warning("[%s.main()] Failed to set the raise 
the core "
-  "dump size. This will fail in >= 
v2.3\n", argv[0]);
+  "dump size.\n", argv[0]);
}
 #endif
 
-- 
2.25.1




Re: [PATCH 2/2] CLEANUP: namespace: remove unneeded ns check in my_socketat

2020-02-12 Thread William Dauchy
On Thu, Feb 13, 2020 at 07:48:03AM +0100, Willy Tarreau wrote:
> For me it's different, it's not related to the fact that it's used
> later but to the fact that we only need to undo the namespace change
> if it was changed in the first call. Indeed "ns" is not null only
> when the caller wants to switch the namespace to create a socket. In
> fact as long as there's no namespace configured on the servers, or
> if we're trying to connect to a dispatch or transparent address, ns
> will be NULL and we'll save two setns calls (i.e. almost always the
> case).

indeed, the code was not very clear in that regard, thank you for the
clarification. I overlooked the case where we only go through this function to
get a file descriptor without a namespace.

> In fact I think we could simplify the logic a bit and merge the code
> into the inline function present in common/namespace.h. This would also
> require to export default_namespace:
> 
> 
>   static inline int my_socketat(const struct netns_entry *ns, int domain, int 
> type, int protocol)
>   {
>   #ifdef USE_NS
>   int sock;
> 
>   if (likely(!ns || default_namespace < 0))
>   goto no_ns;
> 
>   if (setns(ns->fd, CLONE_NEWNET) == -1)
>   return -1;
> 
>   sock = socket(domain, type, protocol);
> 
>   if (setns(default_namespace, CLONE_NEWNET) == -1) {
>   close(sock);
>   sock = -1;
>   }
>   return sock;
>   no_ns:
>   #endif
>   return socket(domain, type, protocol);
>   }
> 
> This allows to remove the !ns || default_namespace logic from
> the function's epilogue. What do you think ?

Indeed, that's clearer in my opinion. I guess I'll let you handle that
since you wrote the new proposal.

Thank you,
-- 
William



Re: [PATCH 2/2] CLEANUP: namespace: remove unneeded ns check in my_socketat

2020-02-12 Thread William Dauchy
On Thu, Feb 13, 2020 at 01:31:51AM +0500, Илья Шипицин wrote:
> we "use" it.
> depending on true/false we either return -1 or not

I guess it is present in the first condition to be able to access
`ns->fd` safely in setns; but the second condition does not access `ns`
later.

-- 
William



[PATCH 1/2] BUG/MINOR: namespace: avoid closing fd when socket failed in my_socketat

2020-02-12 Thread William Dauchy
we cannot return right after socket opening as we need to move back to
the default namespace first

this should fix github issue #500

this might be backported to all versions >= 1.6

Fixes: b3e54fe387c7c1 ("MAJOR: namespace: add Linux network namespace
support")
Signed-off-by: William Dauchy 
---
 src/namespace.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/namespace.c b/src/namespace.c
index f23da48f8..89a968e36 100644
--- a/src/namespace.c
+++ b/src/namespace.c
@@ -121,7 +121,8 @@ int my_socketat(const struct netns_entry *ns, int domain, 
int type, int protocol
sock = socket(domain, type, protocol);
 
if (default_namespace >= 0 && ns && setns(default_namespace, 
CLONE_NEWNET) == -1) {
-   close(sock);
+   if (sock >= 0)
+   close(sock);
return -1;
}
return sock;
-- 
2.25.0




[PATCH 2/2] CLEANUP: namespace: remove unneeded ns check in my_socketat

2020-02-12 Thread William Dauchy
we check the ns variable but we don't use it later

Signed-off-by: William Dauchy 
---
 src/namespace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/namespace.c b/src/namespace.c
index 89a968e36..3536629bc 100644
--- a/src/namespace.c
+++ b/src/namespace.c
@@ -120,7 +120,7 @@ int my_socketat(const struct netns_entry *ns, int domain, 
int type, int protocol
 
sock = socket(domain, type, protocol);
 
-   if (default_namespace >= 0 && ns && setns(default_namespace, 
CLONE_NEWNET) == -1) {
+   if (default_namespace >= 0 && setns(default_namespace, CLONE_NEWNET) == 
-1) {
if (sock >= 0)
close(sock);
return -1;
-- 
2.25.0




[PATCH 0/2] namespace cleaning in my_socketat

2020-02-12 Thread William Dauchy
Two minor patches for namespace.

William Dauchy (2):
  BUG/MINOR: namespace: avoid closing fd when socket failed in
my_socketat
   CLEANUP: namespace: remove unneeded ns check in my_socketat

 src/namespace.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.25.0




Re: [PATCH] BUG/MINOR: tcp: don't try to set defaultmss when value is negative

2020-02-12 Thread William Dauchy
On Wed, Feb 12, 2020 at 03:32:07PM +0100, Willy Tarreau wrote:
> I'd do it differently so that we neither try nor report an error if
> the default mss was not set. Indeed, if it already failed earlier,
> we already had an issue, so no need to fail again. So if you agree
> I'll change it to :
> 
>  if (defaultmss > 0 &&
>  tmpmaxseg != defaultmss &&
>  setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, sizeof(defaultmss)) == -1)

agreed, sent v2 as well.

Thanks,
-- 
William



[PATCH v2] BUG/MINOR: tcp: don't try to set defaultmss when value is negative

2020-02-12 Thread William Dauchy
when `getsockopt` previously failed, we were trying to set defaultmss
with a value of -2.

this is a followup of github issue #499

this should be backported to all versions >= v1.8

Fixes: 153659f1ae69a1 ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy 
---
 src/proto_tcp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index a9d5229c9..044ade430 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -906,9 +906,9 @@ int tcp_bind_listener(struct listener *listener, char 
*errmsg, int errlen)
defaultmss = default_tcp6_maxseg;
 
getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, , );
-   if (tmpmaxseg != defaultmss && setsockopt(fd, IPPROTO_TCP,
-   TCP_MAXSEG, &defaultmss,
-   sizeof(defaultmss)) == -1) {
+   if (defaultmss > 0 &&
+   tmpmaxseg != defaultmss &&
+   setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, sizeof(defaultmss)) == -1) {
msg = "cannot set MSS";
err |= ERR_WARN;
}
-- 
2.25.0




[PATCH] BUG/MINOR: tcp: don't try to set defaultmss when value is negative

2020-02-12 Thread William Dauchy
when `getsockopt` previously failed, we were trying to set defaultmss
with a value of -2.

this is a followup of github issue #499

this should be backported to all versions >= v1.8

Fixes: 153659f1ae69a1 ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy 
---
 src/proto_tcp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index a9d5229c9..e509a17bc 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -906,9 +906,9 @@ int tcp_bind_listener(struct listener *listener, char 
*errmsg, int errlen)
defaultmss = default_tcp6_maxseg;
 
getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, , );
-   if (tmpmaxseg != defaultmss && setsockopt(fd, IPPROTO_TCP,
-   TCP_MAXSEG, &defaultmss,
-   sizeof(defaultmss)) == -1) {
+   if (defaultmss < 0 ||
+   (tmpmaxseg != defaultmss &&
+setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, sizeof(defaultmss)) == -1)) {
msg = "cannot set MSS";
err |= ERR_WARN;
}
-- 
2.25.0




[PATCH] BUG/MINOR: tcp: avoid closing fd when socket failed in tcp_bind_listener

2020-02-12 Thread William Dauchy
we were trying to close the file descriptor even when the `socket` call
had failed.
this should fix github issue #499

this should be backported to all versions >= v1.8

Fixes: 153659f1ae69a1 ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy 
---
 src/proto_tcp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index df4e5a4d2..a9d5229c9 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -749,8 +749,8 @@ int tcp_bind_listener(struct listener *listener, char 
*errmsg, int errlen)
if (getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, 
_tcp_maxseg,
_len) == -1)
ha_warning("Failed to get the default value of 
TCP_MAXSEG\n");
+   close(fd);
}
-   close(fd);
}
if (default_tcp6_maxseg == -1) {
default_tcp6_maxseg = -2;
-- 
2.25.0




Re: [PATCH v2] BUG/MINOR: connection: fix ip6 dst_port copy in make_proxy_line_v2

2020-01-26 Thread William Dauchy
On Sun, Jan 26, 2020 at 11:06 PM Lukas Tribus  wrote:
> Just because commit 456 fixes something that has been introduced with
> commit 123 DOES NOT mean we backport 456 to all the branches that 123
> was committed to - instead we backport 456 to a certain branch if it
> *actually* makes sense to do so.
> This is not something a VCS will figure out for us, but is a human
> decision. The author should be suggesting it's intention regarding the
> backport in the commit message, because it's the one person that spend
> the most time with the issue and probably has the best understanding
> of its impacts as well as the complexity and necessity of a backport.
> That does not mean the author needs to tests the fix with those
> branches, send branch-specific diff's, or do other research regarding
> the backport, but *the intention needs to be stated* (based on the
> research already conducted for the matter at hand). If the committer
> (or reviewer) disagrees, it will be clarified. If during the backport
> issues come up, those issues will be dealt with then (this happens
> latter, not when the fix is committed to master).

yes, that's why I was rather talking about a semi-automatic process,
which would only suggest doing a backport based on this tag.
I am thinking of examples like the netdev linux kernel subsystem, where a
"Fixes:" tag suggests that a patch is a possible candidate for the stable
branches, but then it's up to the maintainer to decide whether to
backport it or not.

Anyway, that being said, I have nothing against adding those messages,
it was just a suggestion.

Thanks for the discussion,
-- 
William


Re: [PATCH v2] BUG/MINOR: connection: fix ip6 dst_port copy in make_proxy_line_v2

2020-01-26 Thread William Dauchy
On Sun, Jan 26, 2020 at 8:17 PM Julien Pivotto  wrote:
> That will not work with cherry-picked commits.

that's indeed what I nuanced in the previous email you quoted.
-- 
William


Re: [PATCH v2] BUG/MINOR: connection: fix ip6 dst_port copy in make_proxy_line_v2

2020-01-26 Thread William Dauchy
On Sun, Jan 26, 2020 at 7:56 PM Tim Düsterhus  wrote:
> Yes, and because this is so easy to look up I simply add both the commit
> and the first version to my commit message to save the person doing the
> backporting the brain cycles. Backporting appears to mostly be a bulk
> process and I can imagine it to be mind-numbing.
>
> I can't comment on whether leaving this information out actually makes
> it harder or not, I'm just a community contributor as well. The
> CONTRIBUTING file says this, though:
>
> > The explanation of the user-visible impact and the need for
> backporting to stable branches or not are MANDATORY.

Yes; I was simply challenging that, as writing in the commit message which
versions a fix should be backported to is also open to mistakes.
-- 
William


[PATCH v4] BUG/MINOR: dns: allow 63 char in hostname

2020-01-26 Thread William Dauchy
hostname labels were limited to 62 chars, which is not RFC1035 compliant;
- the parsing loop should stop when reaching the max label size
- fix the label length test where d[i] was wrongly used
- simplify the whole function to avoid using two extra char* variables

this should fix github issue #387

Signed-off-by: William Dauchy 
---
 src/dns.c | 31 +--
 1 file changed, 13 insertions(+), 18 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index eefd8d0dc..28d47d26c 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1470,7 +1470,6 @@ int dns_str_to_dn_label(const char *str, int str_len, 
char *dn, int dn_len)
  */
 int dns_hostname_validation(const char *string, char **err)
 {
-   const char *c, *d;
int i;
 
if (strlen(string) > DNS_MAX_NAME_SIZE) {
@@ -1479,36 +1478,32 @@ int dns_hostname_validation(const char *string, char 
**err)
return 0;
}
 
-   c = string;
-   while (*c) {
-   d = c;
-
+   while (*string) {
i = 0;
-   while (*d != '.' && *d && i <= DNS_MAX_LABEL_SIZE) {
-   i++;
-   if (!((*d == '-') || (*d == '_') ||
- ((*d >= 'a') && (*d <= 'z')) ||
- ((*d >= 'A') && (*d <= 'Z')) ||
- ((*d >= '0') && (*d <= '9')))) {
+   while (*string && *string != '.' && i < DNS_MAX_LABEL_SIZE) {
+   if (!(*string == '-' || *string == '_' ||
+ (*string >= 'a' && *string <= 'z') ||
+ (*string >= 'A' && *string <= 'Z') ||
+ (*string >= '0' && *string <= '9'))) {
if (err)
*err = DNS_INVALID_CHARACTER;
return 0;
}
-   d++;
+   i++;
+   string++;
}
 
-   if ((i >= DNS_MAX_LABEL_SIZE) && (d[i] != '.')) {
+   if (!(*string))
+   break;
+
+   if (*string != '.' && i >= DNS_MAX_LABEL_SIZE) {
if (err)
*err = DNS_LABEL_TOO_LONG;
return 0;
}
 
-   if (*d == '\0')
-   goto out;
-
-   c = ++d;
+   string++;
}
- out:
return 1;
 }
 
-- 
2.24.1




Re: [PATCH v2] BUG/MINOR: connection: fix ip6 dst_port copy in make_proxy_line_v2

2020-01-26 Thread William Dauchy
On Sun, Jan 26, 2020 at 7:33 PM Tim Düsterhus  wrote:
> Backport information are missing (without looking up that commit). 1.8+
> it is.

Thanks. It could be nice to change these rules a bit; indeed, when the
`Fixes` tag is present, it's very easy to ask git in which tag the faulty
commit was introduced (`git tag --contains`); so in my opinion this could be
part of a semi-automatic process proposing to backport a given fix whenever
this tag is present.
However, I agree that's a bit wonky as a few commits are cherry-picked,
like this one which was indeed cherry-picked into v1.8.
-- 
William


Re: [PATCH v3] BUG/MINOR: dns: allow 63 char in hostname

2020-01-26 Thread William Dauchy
Thanks for your review Tim

On Sun, Jan 26, 2020 at 7:20 PM Tim Düsterhus  wrote:
> >   int i;
>
> Consider moving this into the `while` loop to reduce the scope of `i`.

I'm not against doing this for a block condition, but for a loop I find it
a bit dirty and confusing.

> For consistency consider to to either:
> *string and !*string
> or
> *string != '\0' and *string == '\0'
> or
> *string != 0 and *string == 0

fixed.

Thanks,
-- 
William


Re: [PATCH v3] BUG/MINOR: dns: allow 63 char in hostname

2020-01-26 Thread William Dauchy
On Sun, Jan 26, 2020 at 7:08 PM Илья Шипицин  wrote:
> such things are fragile.  once fixed, they can silently break during further 
> refactoring.
> on other hand, such functions are good candidates to write unit tests.

I considered it, but to my knowledge this is currently not possible with
varnishtest, as we would need to mock a dns resolution and make haproxy
start. I don't know whether there are other plans for haproxy tests.
-- 
William
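
Since varnishtest cannot easily cover this, a plain C harness is enough to
exercise the label-length rule in isolation. The sketch below is illustrative
only: it re-implements the RFC1035 length check on its own (it is not
dns_hostname_validation() itself, and the 63-char constant is assumed to match
DNS_MAX_LABEL_SIZE), then verifies that a 63-character label passes while a
64-character one is rejected.

#include <assert.h>
#include <stdio.h>
#include <string.h>

#define MAX_LABEL 63 /* assumed value of DNS_MAX_LABEL_SIZE */

/* standalone re-implementation of the label-length rule, for testing only */
static int label_len_ok(const char *s)
{
	size_t i;

	while (*s) {
		i = 0;
		while (*s && *s != '.') {
			if (++i > MAX_LABEL)
				return 0;
			s++;
		}
		if (*s == '.')
			s++;
	}
	return 1;
}

int main(void)
{
	char name[128];

	memset(name, 'a', 63); name[63] = '\0';
	assert(label_len_ok(name));      /* 63 chars: RFC1035 compliant */

	memset(name, 'a', 64); name[64] = '\0';
	assert(!label_len_ok(name));     /* 64 chars: label too long */

	puts("label length checks OK");
	return 0;
}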


[PATCH v2] BUG/MINOR: connection: fix ip6 dst_port copy in make_proxy_line_v2

2020-01-26 Thread William Dauchy
triggered by coverity; src_port is set earlier.

this should fix github issue #467

Fixes: 7fec02153712 ("MEDIUM: proxy_protocol: Convert IPs to v6 when
protocols are mixed")
Signed-off-by: William Dauchy 
---
 src/connection.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/connection.c b/src/connection.c
index ced919f0e..ff3c34612 100644
--- a/src/connection.c
+++ b/src/connection.c
@@ -1372,7 +1372,7 @@ int make_proxy_line_v2(char *buf, int buf_len, struct 
server *srv, struct connec
if (dst->ss_family == AF_INET) {
v4tov6(&tmp, &((struct sockaddr_in *)dst)->sin_addr);
memcpy(hdr->addr.ip6.dst_addr, &tmp, 16);
-   hdr->addr.ip6.src_port = ((struct sockaddr_in *)src)->sin_port;
+   hdr->addr.ip6.dst_port = ((struct sockaddr_in *)dst)->sin_port;
}
else {
memcpy(hdr->addr.ip6.dst_addr, &((struct 
sockaddr_in6 *)dst)->sin6_addr, 16);
-- 
2.24.1
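
To make the intent of the fixed branch easier to follow, here is a small
self-contained sketch (illustrative only, not the HAProxy implementation; the
helper name and the sample address/port are made up) of what building the IPv6
destination fields from an IPv4 destination amounts to: map the v4 address into
the RFC 4291 ::ffff:a.b.c.d form and take the port from dst, not from src.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

/* fill a 16-byte buffer with the IPv4-mapped IPv6 form ::ffff:a.b.c.d */
static void map_v4_to_v6(unsigned char out[16], const struct in_addr *v4)
{
	memset(out, 0, 10);
	out[10] = 0xff;
	out[11] = 0xff;
	memcpy(out + 12, &v4->s_addr, 4);
}

int main(void)
{
	struct sockaddr_in dst;
	unsigned char dst_addr[16];
	char txt[INET6_ADDRSTRLEN];

	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(443);
	inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr);

	map_v4_to_v6(dst_addr, &dst.sin_addr);

	/* the point of the fix: the destination port must come from dst, not src */
	unsigned short dst_port = dst.sin_port;

	inet_ntop(AF_INET6, dst_addr, txt, sizeof(txt));
	printf("dst_addr=%s dst_port=%u\n", txt, ntohs(dst_port));
	return 0;
}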




[PATCH] BUG/MINOR: connection: fix ip6 dst_port copy in make_proxy_line_v2

2020-01-26 Thread William Dauchy
triggered by coverity; src_port is set earlier.

this should fix github issue #467

Fixes: 7fec02153712 ("MEDIUM: proxy_protocol: Convert IPs to v6 when
protocols are mixed")
Signed-off-by: William Dauchy 
---
 src/connection.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/connection.c b/src/connection.c
index ced919f0e..b3c39f1fb 100644
--- a/src/connection.c
+++ b/src/connection.c
@@ -1372,7 +1372,7 @@ int make_proxy_line_v2(char *buf, int buf_len, struct 
server *srv, struct connec
if (dst->ss_family == AF_INET) {
v4tov6(&tmp, &((struct sockaddr_in *)dst)->sin_addr);
memcpy(hdr->addr.ip6.dst_addr, &tmp, 16);
-   hdr->addr.ip6.src_port = ((struct sockaddr_in *)src)->sin_port;
+   hdr->addr.ip6.dst_port = ((struct sockaddr_in *)src)->sin_port;
}
else {
memcpy(hdr->addr.ip6.dst_addr, &((struct 
sockaddr_in6 *)dst)->sin6_addr, 16);
-- 
2.24.1




[PATCH v3] BUG/MINOR: dns: allow 63 char in hostname

2020-01-26 Thread William Dauchy
hostname labels were limited to 62 chars, which is not RFC1035 compliant;
- the parsing loop should stop when reaching the max label size
- fix the label length test where d[i] was wrongly used
- simplify the whole function to avoid using two extra char* variables

this should fix github issue #387

Signed-off-by: William Dauchy 
---
 src/dns.c | 31 +--
 1 file changed, 13 insertions(+), 18 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index eefd8d0dc..212c55f0d 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1470,7 +1470,6 @@ int dns_str_to_dn_label(const char *str, int str_len, 
char *dn, int dn_len)
  */
 int dns_hostname_validation(const char *string, char **err)
 {
-   const char *c, *d;
int i;
 
if (strlen(string) > DNS_MAX_NAME_SIZE) {
@@ -1479,36 +1478,32 @@ int dns_hostname_validation(const char *string, char 
**err)
return 0;
}
 
-   c = string;
-   while (*c) {
-   d = c;
-
+   while (*string) {
i = 0;
-   while (*d != '.' && *d && i <= DNS_MAX_LABEL_SIZE) {
-   i++;
-   if (!((*d == '-') || (*d == '_') ||
- ((*d >= 'a') && (*d <= 'z')) ||
- ((*d >= 'A') && (*d <= 'Z')) ||
- ((*d >= '0') && (*d <= '9')))) {
+   while (*string && *string != '.' && i < DNS_MAX_LABEL_SIZE) {
+   if (!(*string == '-' || *string == '_' ||
+ (*string >= 'a' && *string <= 'z') ||
+ (*string >= 'A' && *string <= 'Z') ||
+ (*string >= '0' && *string <= '9'))) {
if (err)
*err = DNS_INVALID_CHARACTER;
return 0;
}
-   d++;
+   i++;
+   string++;
}
 
-   if ((i >= DNS_MAX_LABEL_SIZE) && (d[i] != '.')) {
+   if (*string == '\0')
+   break;
+
+   if (*string != '.' && i >= DNS_MAX_LABEL_SIZE) {
if (err)
*err = DNS_LABEL_TOO_LONG;
return 0;
}
 
-   if (*d == '\0')
-   goto out;
-
-   c = ++d;
+   string++;
}
- out:
return 1;
 }
 
-- 
2.24.1




Re: [PATCH v2] BUG/MINOR: dns: allow 63 char in hostname

2020-01-26 Thread William Dauchy
Hello Tim,

On Sun, Jan 26, 2020 at 1:39 PM Tim Düsterhus  wrote:
> I wonder if we should move the `i++` down to above the `d++`. `i` is not
> being used in the loop body and the only way to exit the loop is the
> `return`. Moving it down has the benefit that it's not as easy to miss
> that both are incremented within the loop (like you did in your first
> patch).
>
> Or we could get rid of either `i` or `d` entirely and replace it by `(d
> - c)` or `c[i]` respectively.

In fact I sent v1 after going too fast on my tests, then realised we could
potentially simplify a lot of things, but thought that would be easier to
read as two separate patches, so I sent v2. However, as I see you also agree
on a few points I had thought about, I will send a v3 with a simplified
version of the function.

Thanks,
-- 
William


[PATCH v2] BUG/MINOR: dns: allow 63 char in hostname

2020-01-26 Thread William Dauchy
hostname labels were limited to 62 chars, which is not RFC1035 compliant;
- the loop should stop when reaching the max label size
- fix the resulting test where d[i] was wrongly used
(also simplify brackets for readability)

this should fix github issue #387

Signed-off-by: William Dauchy 
---
 src/dns.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index eefd8d0dc..dccd0498c 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1484,7 +1484,7 @@ int dns_hostname_validation(const char *string, char 
**err)
d = c;
 
i = 0;
-   while (*d != '.' && *d && i <= DNS_MAX_LABEL_SIZE) {
+   while (*d != '.' && *d && i < DNS_MAX_LABEL_SIZE) {
i++;
if (!((*d == '-') || (*d == '_') ||
  ((*d >= 'a') && (*d <= 'z')) ||
@@ -1497,7 +1497,7 @@ int dns_hostname_validation(const char *string, char 
**err)
d++;
}
 
-   if ((i >= DNS_MAX_LABEL_SIZE) && (d[i] != '.')) {
+   if (i >= DNS_MAX_LABEL_SIZE && *d != '.') {
if (err)
*err = DNS_LABEL_TOO_LONG;
return 0;
-- 
2.24.1




Re: [PATCH] BUG/MINOR: dns: allow 63 char in hostname

2020-01-26 Thread William Dauchy
Hello Miroslav,

On Sun, Jan 26, 2020 at 2:57 AM Miroslav Zagorac  wrote:
> i think that patch does not correct the bug in that function because in
> the loop above the variable i and pointer d increase at the same time.
>
> This means that *d should be written instead of d[i].

that is very true, d[i] does not make sense at this point. I will send v2.

Thanks,
-- 
William


[PATCH] BUG/MINOR: dns: allow 63 char in hostname

2020-01-25 Thread William Dauchy
hostname labels were limited to 62 chars, which is not RFC1035 compliant;
simply remove the equality in the condition as the counter is initialised
at zero.
(also simplify brackets for readability)

this should fix github issue #387

Signed-off-by: William Dauchy 
---
 src/dns.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/dns.c b/src/dns.c
index eefd8d0dc..8b3a0927e 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1497,7 +1497,7 @@ int dns_hostname_validation(const char *string, char 
**err)
d++;
}
 
-   if ((i >= DNS_MAX_LABEL_SIZE) && (d[i] != '.')) {
+   if (i > DNS_MAX_LABEL_SIZE && d[i] != '.') {
if (err)
*err = DNS_LABEL_TOO_LONG;
return 0;
-- 
2.24.1




[PATCH] MINOR: proxy: clarify number of connections log when stopping

2020-01-25 Thread William Dauchy
this log could sometimes be a bit confusing (depending on the number, in
fact) when you read it (e.g. is it the number of active connections?) -
only trained eyes know that haproxy outputs a different log line when
closing active connections while stopping.

Signed-off-by: William Dauchy 
---
 src/proxy.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/proxy.c b/src/proxy.c
index dbce45dd4..2d6fe48fd 100644
--- a/src/proxy.c
+++ b/src/proxy.c
@@ -975,9 +975,9 @@ struct task *manage_proxy(struct task *t, void *context, 
unsigned short state)
int t;
t = tick_remain(now_ms, p->stop_time);
if (t == 0) {
-   ha_warning("Proxy %s stopped (FE: %lld conns, BE: %lld conns).\n",
+   ha_warning("Proxy %s stopped (cumulated conns: FE: %lld, BE: %lld).\n",
   p->id, p->fe_counters.cum_conn, p->be_counters.cum_conn);
-   send_log(p, LOG_WARNING, "Proxy %s stopped (FE: %lld conns, BE: %lld conns).\n",
+   send_log(p, LOG_WARNING, "Proxy %s stopped (cumulated conns: FE: %lld, BE: %lld).\n",
 p->id, p->fe_counters.cum_conn, p->be_counters.cum_conn);
stop_proxy(p);
/* try to free more memory */
@@ -2071,9 +2071,9 @@ static int cli_parse_shutdown_frontend(char **args, char 
*payload, struct appctx
if (px->state == PR_STSTOPPED)
return cli_msg(appctx, LOG_NOTICE, "Frontend was already shut down.\n");

-   ha_warning("Proxy %s stopped (FE: %lld conns, BE: %lld conns).\n",
+   ha_warning("Proxy %s stopped (cumulated conns: FE: %lld, BE: %lld).\n",
   px->id, px->fe_counters.cum_conn, px->be_counters.cum_conn);
-   send_log(px, LOG_WARNING, "Proxy %s stopped (FE: %lld conns, BE: %lld conns).\n",
+   send_log(px, LOG_WARNING, "Proxy %s stopped (cumulated conns: FE: %lld, BE: %lld).\n",
  px->id, px->fe_counters.cum_conn, px->be_counters.cum_conn);
 
stop_proxy(px);
-- 
2.24.1




Re: [PATCH] ssl certificates load speedup and dedup (pem/ctx)

2020-01-25 Thread William Dauchy
On Sat, Jan 25, 2020 at 6:31 PM William Lallemand
 wrote:
> There is no limitation with the chroot since the file is uploaded over the 
> CLI.
> If you use the "set ssl cert" and "commit ssl cert" commands over the CLI the
> chroot option is not suppose to affect these.
>
> Do you have an example that might be a problem? I don't see how it could but I
> can be wrong.

no, it was not clear to me that you were talking about an extension of
`set ssl cert`; the description you made earlier of the possible
`haproxyctl cert reload` helper seemed to require reading the filesystem.
So never mind, if it stays in the same spirit as the current CLI
commands, i.e. loading over the command line, it is perfectly ok to me.
-- 
William


Re: [PATCH] ssl certificates load speedup and dedup (pem/ctx)

2020-01-25 Thread William Dauchy
On Fri, Jan 24, 2020 at 4:40 PM William Lallemand
 wrote:
> What we are trying to do with the certificates and the CLI, is to be able to 
> do
> a 'reload' of the filesystem, but without reloading haproxy. You could imagine
> an haproxy helper (let's say `haproxyctl cert reload`) that will scan the
> changes on the filesystem and apply the changes without launching a new
> process, so it could be a file removed, updated or added.

just being curious here, do we assume this will not be compatible with
chroot usage?

-- 
William


[PATCH] CLEANUP: proxy: simplify proxy_parse_rate_limit proxy checks

2020-01-15 Thread William Dauchy
rate-limit is valid for both frontend and listen sections, but not backend;
so we can simplify this check in the same manner as is done for e.g.
max-keep-alive-queue.

this should fix github issue #449

Signed-off-by: William Dauchy 
---
 src/proxy.c | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/src/proxy.c b/src/proxy.c
index 8720b2880..aed32f94b 100644
--- a/src/proxy.c
+++ b/src/proxy.c
@@ -330,7 +330,7 @@ static int proxy_parse_rate_limit(char **args, int section, 
struct proxy *proxy,
   struct proxy *defpx, const char *file, int 
line,
   char **err)
 {
-   int retval, cap;
+   int retval;
char *res;
unsigned int *tv = NULL;
unsigned int *td = NULL;
@@ -341,7 +341,6 @@ static int proxy_parse_rate_limit(char **args, int section, 
struct proxy *proxy,
if (strcmp(args[1], "sessions") == 0) {
tv = >fe_sps_lim;
td = >fe_sps_lim;
-   cap = PR_CAP_FE;
}
else {
memprintf(err, "'%s' only supports 'sessions' (got '%s')", 
args[0], args[1]);
@@ -359,10 +358,9 @@ static int proxy_parse_rate_limit(char **args, int 
section, struct proxy *proxy,
return -1;
}
 
-   if (!(proxy->cap & cap)) {
-   memprintf(err, "%s %s will be ignored because %s '%s' has no %s 
capability",
-args[0], args[1], proxy_type_str(proxy), proxy->id,
-(cap & PR_CAP_BE) ? "backend" : "frontend");
+   if (!(proxy->cap & PR_CAP_FE)) {
+   memprintf(err, "%s %s will be ignored because %s '%s' has no 
frontend capability",
+ args[0], args[1], proxy_type_str(proxy), proxy->id);
retval = 1;
}
else if (defpx && *tv != *td) {
-- 
2.24.1




Re: [PATCH] CLEANUP: ssl: remove opendir call in ssl_sock_load_cert

2020-01-13 Thread William Dauchy
Hi William L.,

Thanks for your answer.

On Mon, Jan 13, 2020 at 04:00:53PM +0100, William Lallemand wrote:
> I understand that you were trying to remove opendir, which is good idea.
> However, I find it kind of confusing: if ssl_sock_load_ckchs() returns an
> error, this error will be added to the "unable to scan directory" message, and
> at this point it is not trying to scan a directory anymore.
> 
> However, there is already a call to stat(), we could probably skip the
> call to opendir() by checking S_IFDIR in the stat structure.

True, I probably over-engineered it; I've sent v2, which is simpler.

Thanks,
-- 
William



[PATCH v2] CLEANUP: ssl: remove opendir call in ssl_sock_load_cert

2020-01-13 Thread William Dauchy
Since commit 3180f7b55434 ("MINOR: ssl: load certificates in
alphabetical order"), `readdir` was replaced by `scandir`, so the remaining
`opendir` call can be replaced by a check on the previous `stat` call.

This micro cleanup can be a good benefit in terms of syscalls when you have
hundreds of bind lines opening TLS certificate directories, especially in
case of frequent reloads.

Signed-off-by: William Dauchy 
---
 src/ssl_sock.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 124b2c602..ff8af2395 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -4249,7 +4249,6 @@ int ssl_sock_load_cert(char *path, struct bind_conf 
*bind_conf, char **err)
 {
struct dirent **de_list;
int i, n;
-   DIR *dir;
struct stat buf;
char *end;
char fp[MAXPATHLEN+1];
@@ -4265,8 +4264,7 @@ int ssl_sock_load_cert(char *path, struct bind_conf 
*bind_conf, char **err)
}
 
if (stat(path, ) == 0) {
-   dir = opendir(path);
-   if (!dir) {
+   if (S_ISDIR(buf.st_mode) == 0) {
ckchs =  ckchs_load_cert_file(path, 0,  err);
if (!ckchs)
return ERR_ALERT | ERR_FATAL;
@@ -4355,7 +4353,6 @@ ignore_entry:
}
free(de_list);
}
-   closedir(dir);
return cfgerr;
}
 
-- 
2.24.1




[PATCH] DOC: clarify crt-base usage

2020-01-11 Thread William Dauchy
crt-base is also used with the "crt" directive.

Signed-off-by: William Dauchy 
---
 doc/configuration.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index d0bb9741..9ac89851 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -800,8 +800,8 @@ cpu-map [auto:][/] ...
 
 crt-base 
   Assigns a default directory to fetch SSL certificates from when a relative
-  path is used with "crtfile" directives. Absolute locations specified after
-  "crtfile" prevail and ignore "crt-base".
+  path is used with "crtfile" or "crt" directives. Absolute locations specified
+  prevail and ignore "crt-base".
 
 daemon
   Makes the process fork into background. This is the recommended mode of
-- 
2.24.1




[PATCH] CLEANUP: compression: remove unused deinit_comp_ctx section

2020-01-11 Thread William Dauchy
Since commit 27d93c3f9428 ("BUG/MAJOR: compression/cache: Make it
really works with these both filters"), we no longer use the
deinit_comp_ctx section.

This should fix github issue #441

Signed-off-by: William Dauchy 
---
 src/flt_http_comp.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/flt_http_comp.c b/src/flt_http_comp.c
index e98ff46b..574b9373 100644
--- a/src/flt_http_comp.c
+++ b/src/flt_http_comp.c
@@ -577,8 +577,6 @@ select_compression_response_header(struct comp_state *st, 
struct stream *s, stru
msg->flags |= HTTP_MSGF_COMPRESSING;
return 1;
 
-  deinit_comp_ctx:
-   st->comp_algo->end(>comp_ctx);
   fail:
st->comp_algo = NULL;
return 0;
-- 
2.24.1




Re: [PATCH] MINOR: cli: use global unix settings for stats/master sockets

2020-01-09 Thread William Dauchy
Thanks for your answers.

On Thu, Jan 09, 2020 at 06:17:29AM +0100, Willy Tarreau wrote:
> I had a look at how this currently works and am embarrassed by both the
> patch and the way things currently work. Indeed, the patch only makes
> use of the mode, uid and gid from the unix-bind statement and silently
> ignores the path. It would be tempting to decide that since it's a unix
> socket we should enfore all of unix-bind settings to the stats socket,
> but then the problem caused by unix-bind is that the path component is
> a mandatory prefix that is prepended before all socket paths. So if we
> enforce the path we'll break all configs already using unix-bind.
> 
> I tend to think that at some point we could decide to purposely break
> some of this stuff so that the stats socket is not special at all, but
> I fear that some configs could not be expressed anymore due to this,
> and typically users will place the stats socket into a location outside
> of the chroot so that it cannot be accessed by accident by the process.
> That's even more true for the master socket where the path is specified
> on the command line, regardless of any global setting.
> 
> So maybe in the end your approach is the most reasonable one. However
> in this case we should explicitly state in the doc what settings from
> the unix-bind directive are reused by the stats/master socket, because
> to be transparent, I wasn't aware of the other ones beyond "prefix" and
> that's the first thing I tried and was surprized not to see it work as
> I imagined it would.
> 
> What do you think ?

I was indeed puzzled for the exact same reasons you mentioned, but felt too
lazy to express them (shame on me).
I also think we should consider those sockets like any others, and so
take the prefix into account as well. This will make everything clearer
from the user's point of view, even though it will come with a breaking
change. Indeed, when I started using the master socket, I was a bit
frustrated to have to replicate the permission settings in two different
places instead of having one single coherent one.

On Thu, Jan 09, 2020 at 12:00:43PM +0100, William Lallemand wrote:
> In my opinion that's not a good idea for the master CLI as it shouldn't depend
> on the configuration file. As a reminder the master CLI must work when a
> configuration file is corrupted, in "waitpid" mode. In this mode there is no
> configuration and the master CLI is running only with the configuration used 
> in
> argument.

that's a good subject to talk about. While looking at this I also
wondered why it was not possible to configure it in the config file; to my
knowledge there is no explanation about it in the doc, nor in the code,
but I might have missed something.
I did not know about those concerns around being able to access the CLI
even if the config file is corrupted, thanks for the clarification.

That being said, it makes my patch less likely to be acceptable, but I
still stand by the user point of view where:
- you don't get why you cannot configure the master CLI in the
  configuration file; that's probably a point to address in the doc,
  since the configuration doc mentions the master CLI, but you only find
  how to set it up in the management doc.
- having to replicate the same settings is frustrating, even though our
  config is generated
- when reading the "stats socket" doc, it is mentioned that all settings
  from "bind" are supported, but while reading "unix-bind" it becomes
  confusing whether this will be taken into account or not.

-- 
William



[PATCH] MINOR: cli: use global unix settings for stats/master sockets

2020-01-08 Thread William Dauchy
allow using the unix-bind settings from the config file for both the stats
and master sockets; this saves some painful duplicated config when you can
rely on the global unix-bind.
Local settings will still override the global defaults.

Signed-off-by: William Dauchy 
---
 doc/configuration.txt | 1 +
 doc/management.txt| 1 +
 src/cli.c | 8 
 3 files changed, 10 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index d0bb9741..c3aedb9e 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1293,6 +1293,7 @@ stats socket [|] [param*]
   All parameters supported by "bind" lines are supported, for instance to
   restrict access to some users or their access rights. Please consult
   section 5.1 for more information.
+  "unix-bind" settings also have an effect on this socket's settings.
 
 stats timeout 
   The default timeout on the stats socket is set to 10 seconds. It is possible
diff --git a/doc/management.txt b/doc/management.txt
index 973b6f3a..2c48db1f 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -283,6 +283,7 @@ list of options is :
 For security reasons, it is recommended to bind the master CLI to a local
 UNIX socket. The bind options are the same as the keyword "bind" in
 the configuration file with words separated by commas instead of spaces.
+"unix-bind" settings also have an effect on this socket's settings.
 
 Note that this socket can't be used to retrieve the listening sockets from
 an old process during a seamless reload.
diff --git a/src/cli.c b/src/cli.c
index ba48d147..07a74067 100644
--- a/src/cli.c
+++ b/src/cli.c
@@ -256,6 +256,10 @@ static int stats_parse_global(char **args, int 
section_type, struct proxy *curpx
bind_conf = bind_conf_alloc(global.stats_fe, file, line, 
args[2], xprt_get(XPRT_RAW));
bind_conf->level &= ~ACCESS_LVL_MASK;
bind_conf->level |= ACCESS_LVL_OPER; /* default access level */
+   /* use default settings for unix sockets */
+   bind_conf->ux.uid  = global.unix_bind.ux.uid;
+   bind_conf->ux.gid  = global.unix_bind.ux.gid;
+   bind_conf->ux.mode = global.unix_bind.ux.mode;
 
if (!str2listener(args[2], global.stats_fe, bind_conf, file, 
line, err)) {
memprintf(err, "parsing [%s:%d] : '%s %s' : %s\n",
@@ -2547,6 +2551,10 @@ int mworker_cli_proxy_new_listener(char *line)
 
bind_conf->level &= ~ACCESS_LVL_MASK;
bind_conf->level |= ACCESS_LVL_ADMIN;
+   /* use default settings for unix sockets */
+   bind_conf->ux.uid  = global.unix_bind.ux.uid;
+   bind_conf->ux.gid  = global.unix_bind.ux.gid;
+   bind_conf->ux.mode = global.unix_bind.ux.mode;
 
if (!str2listener(args[0], mworker_proxy, bind_conf, "master-socket", 0, &err)) {
ha_alert("Cannot create the listener of the master CLI\n");
-- 
2.24.1




[PATCH] CLEANUP: server: remove unused err section in server_finalize_init

2020-01-08 Thread William Dauchy
Since commit 980855bd953c ("BUG/MEDIUM: server: initialize the orphaned
conns lists and tasks at the end"), we no longer use the err section.

This should fix github issue #438

Signed-off-by: William Dauchy 
---
 src/server.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/server.c b/src/server.c
index 6212a420..14ff716a 100644
--- a/src/server.c
+++ b/src/server.c
@@ -2053,8 +2053,6 @@ static int server_finalize_init(const char *file, int 
linenum, char **args, int
srv_lb_commit_status(srv);
 
return 0;
-err:
-   return ERR_ALERT | ERR_FATAL;
 }
 
 /*
-- 
2.24.1




[PATCH] CLEANUP: mux-h2: remove unused goto "out_free_h2s"

2020-01-08 Thread William Dauchy
Since commit fa8aa867b915 ("MEDIUM: connections: Change struct
wait_list to wait_event."), we no longer use this section.

this should fix github issue #437

Signed-off-by: William Dauchy 
---
 src/mux_h2.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/src/mux_h2.c b/src/mux_h2.c
index be9dae92..6ec8d6c0 100644
--- a/src/mux_h2.c
+++ b/src/mux_h2.c
@@ -1347,9 +1347,6 @@ static struct h2s *h2s_new(struct h2c *h2c, int id)
 
TRACE_LEAVE(H2_EV_H2S_NEW, h2c->conn, h2s);
return h2s;
-
- out_free_h2s:
-   pool_free(pool_head_h2s, h2s);
  out:
TRACE_DEVEL("leaving in error", H2_EV_H2S_ERR|H2_EV_H2S_END, h2c->conn);
return NULL;
-- 
2.24.1



