Re: HTTP method ACLs broken in HAProxy 2.2.3

2020-09-18 Thread James Brown
Thanks Christopher, I'll give it a shot today.

On Fri, Sep 18, 2020 at 6:39 AM Christopher Faulet 
wrote:

> > On 18/09/2020 at 10:47, Christopher Faulet wrote:
> > On 18/09/2020 at 01:33, James Brown wrote:
> >> git bisect says that this regression was caused
> >> by commit c89077713915f605eb5d716545f182c8d0bf5581
> >>
> >> This makes little sense to me, since that commit doesn't touch anything
> >> even slightly related.
> >>
> >> As far as I can tell, the proximate issue is that PATCH is not a
> >> "well-known method" to HAProxy, despite being in RFC 5789. find_http_meth
> >> is returning HTTP_METH_OTHER (silently) for it, and there's something
> >> hinky with how the method ACL matcher handles HTTP_METH_OTHER.
> >>
> >> I tried to poke the pat_match_meth function with a debugger, but it's
> >> not even being called in v2.2.3 (it /is/ being called in v2.2.2). Did
> >> something break in how custom matchers are called?
> >>
> >
> > I'm able to reproduce the bug. In fact, it was introduced by the commit
> > 05f3910f5 ("BUG/MEDIUM: htx: smp_prefetch_htx() must always validate the
> > direction"). I attached a patch to fix the bug.
> >
>
> Sorry, it is the wrong patch. I attached a totally unrelated and outdated
> patch.
> Anyway, I pushed the fix in the 2.3-dev:
>
>http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=d2414a23
>
> It is not backported to 2.2 yet, but you can apply it.
>
> --
> Christopher Faulet
>
>

-- 
James Brown
Engineer


Re: HTTP method ACLs broken in HAProxy 2.2.3

2020-09-18 Thread Christopher Faulet

On 18/09/2020 at 10:47, Christopher Faulet wrote:

On 18/09/2020 at 01:33, James Brown wrote:

git bisect says that this regression was caused
by commit c89077713915f605eb5d716545f182c8d0bf5581

This makes little sense to me, since that commit doesn't touch anything even
slightly related.

As far as I can tell, the proximate issue is that PATCH is not a "well-known
method" to HAProxy, despite being in RFC 5789. find_http_meth is returning
HTTP_METH_OTHER (silently) for it, and there's something hinky with how the
method ACL matcher handles HTTP_METH_OTHER.

I tried to poke the pat_match_meth function with a debugger, but it's not even
being called in v2.2.3 (it /is/ being called in v2.2.2). Did something break in
how custom matchers are called?
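For context, a minimal configuration exercising this code path might look like the sketch below (the frontend name, port, and rule are illustrative, not taken from the thread):

```
# Hypothetical config, for illustration only. PATCH is not in HAProxy's
# internal table of well-known methods, so it parses as HTTP_METH_OTHER
# and the ACL below has to go through the custom matcher pat_match_meth().
frontend fe_api
    bind :8080
    acl is_patch method PATCH
    http-request deny if is_patch
```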



I'm able to reproduce the bug. In fact, it was introduced by the commit
05f3910f5 ("BUG/MEDIUM: htx: smp_prefetch_htx() must always validate the
direction"). I attached a patch to fix the bug.



Sorry, it is the wrong patch. I attached a totally unrelated and outdated patch. 
Anyway, I pushed the fix in the 2.3-dev:


  http://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=d2414a23

It is not backported to 2.2 yet, but you can apply it.

--
Christopher Faulet



Re: Naming and discoverability of fetches and converters

2020-09-18 Thread Willy Tarreau
Hi Tim,

On Thu, Sep 17, 2020 at 07:27:55PM +0200, Tim Düsterhus wrote:
> Hi List,
> 
> I'd like to discuss something that's been bothering me for quite some time
> now: The naming, namespacing and discoverability of fetches and converters.
> 
> Frankly, it's a small mess right now. I assume this is due to historic
> growth and features being added over time.

I don't think anyone will disagree with this!

> Let me give a few examples:
> 
> I want to encode something using base64? I use the `base64` converter.
> Decoding? **Obviously** that's `b64dec` ;)

That's a perfect example of something initially created to do one
thing and taking too generic a name ("base64"). Adding the decoder
years later proved to be difficult. I even think that by then we
didn't yet easily support arguments for converters, so it wasn't
trivial to indicate in base64() what direction to use, plus it
wouldn't have helped much anyway, given that what you write usually
means something in a context and might not be the correct form anyway.

> Hex encoding? Easily done with `hex`. Decoding back to a string? Not
> possible. But you can use `hex2i` to turn a hexadecimal string into an
> integer.

This one was recently discussed as well; I even thought we had "unhex"
or something like this. Maybe it appeared in a private discussion with
someone, I don't remember.
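The asymmetry Tim describes can be seen side by side in a config fragment like this sketch (the header names are hypothetical):

```
# Illustration of the naming asymmetry: base64 encodes but its inverse is
# b64dec; hex encodes, while hex2i only decodes a hex string to an integer.
http-request set-header X-B64 %[req.hdr(x-raw),base64]
http-request set-header X-Raw %[req.hdr(x-b64),b64dec]
http-request set-header X-Hex %[req.hdr(x-raw),hex]
http-request set-header X-Int %[req.hdr(x-hex),hex2i]
```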

> We have `url_dec`. But there isn't a `url_enc`, or maybe I am just not
> able to find it.

I'm pretty sure it appeared in a private discussion recently as well,
in which I explained that it does have the exact same issue as url_dec
in that it needs to work differently depending on the part that's being
encoded and as such has to be very flexible.

> Then there's quite a few redundant converters. We have `sha1`, then
> `sha2` and most recently we have `digest`.

From what I'm reading in the commit message (the doc not being clear
on this point), they don't seem to overlap as "digest" seems to return
an HMAC. In addition, "sha1" doesn't require openssl so it comes for
free. But anyway I get your point and generally agree.

> Then within the description of the converters the variable naming and
> scoping is explained over and over again for each converter that takes
> variable names.

Did you also notice that copy-pasting is quite a common practice in the
doc :-/

> For fetches it's not as bad, many have proper prefixes.

Initially there were very few converters, they were usable and needed
only for stick tables, so these were ipmask(), lower(), upper() and
that was about all. These slowly expanded without any general direction
being observed, which made the naming far from trivial. For fetches it's
different because you have to indicate where you want to retrieve the
information; and for some of them, depending on the mode haproxy was
working in, it was possible very early on to have some confusion about
that, so they needed to be a bit more precise.

> But there's also
> a difference in naming. Sometimes the namespace is separated by a dot
> (`req.hdr`). Sometimes it's an underscore (`ssl_fc`).

Yep. The dot was added later to try to group all those working within
the same domain (e.g. the request or the response). We also (re)discovered
that the automatic binding making them directly available from Lua is less
transparent with the dot as that one has to be turned to an underscore
there. But it's less common and the dot still appears much more readable
to me.

> Long story short: Is it feasible to update the canonical names of
> fetches and converters, preserving backwards compatibility with old
> versions and adding the missing "duals" for encoders / decoders or do we
> have to live with the current situation?

We can *try*. Trust me, it is *extremely hard* to correctly name things.
A significant part of the development job is to find correct names for
variables, functions, struct members, and worse, config options! You want
non-confusing names that accurately represent what you're designating with
a very limited risk that it later degenerates into something broader and
confusing again.

> ---
> 
> Let me give a proposal of what makes sense to me:
> 
> Make 's.' contain string related converters.
> 
> s.field() instead of field()
> s.word() instead of word()
> s.json() instead of json()

Maybe, why not for these ones. But then json() alone cannot stay
this way and needs to be called json-enc() or something like this
since it only encodes and doesn't decode.
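As a reminder of what json() does today, a fragment like this sketch (the variable name is hypothetical) only escapes its input for embedding in a JSON string, and there is no inverse converter:

```
# json() escapes its input for use inside a JSON string; nothing decodes it back.
http-request set-var(txn.ua_json) req.hdr(user-agent),json
```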

> Make 'http.' contain HTTP related converters.
> 
> http.language() instead of language()

This already becomes difficult. http is very wide and touches all
areas. For example you'll use http.date() to encode an HTTP date
while plenty of users will search for it in "date.something" because
it's just one way to represent a date, or "time.something" because a
date is first and foremost something related to time.

> Make 'math.' contain mathematics related converters.
> 
> math.add() instead of add()
> math.sub() instead of sub()

Re: Right way to get file version with Data Plane API?

2020-09-18 Thread Илья Шипицин
What do you mean by "file version"?

Fri, 18 Sep 2020 at 12:57, Ricardo Fraile :

> Hello,
>
> Getting the file version seems to be one of the first things to do at
> the beginning of using the API, but I can't find an easy and clear way
> to get it.
>
> It seems strange that this doesn't have a target URL to get it.
> Maybe I'm wrong, but I get it with the raw output:
>
> # curl --user user1:password1
> http://127.0.0.1:/v2/services/haproxy/configuration/raw
>
> Is there another right way to get it?
>
>
> Thanks,
>
>
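If it helps, here is a sketch of pulling the version out of that raw output. The sample JSON below stands in for the actual response of the curl command quoted above; some Data Plane API builds may also expose a dedicated version endpoint, so check the documentation for your version.

```shell
# Hedged sketch: extract "_version" from a Data Plane API response body.
# The hard-coded JSON stands in for the real curl output.
response='{"_version":3,"data":"global\n  daemon\n"}'
version=$(printf '%s' "$response" \
  | sed -n 's/.*"_version"[[:space:]]*:[[:space:]]*\([0-9][0-9]*\).*/\1/p')
echo "version=$version"
```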


Re: TLS handshake error

2020-09-18 Thread vcjouni

Thanks for your help!

I tested your patch but it did not change client behavior; most
browsers still tried to use RSA-PSS in the handshake. From the TLS 1.3
draft I understood that if TLS 1.3 is available, the client can always
choose to use RSA-PSS, so the only way to get this working was to remove
TLS 1.3 completely. That is what I did: I built a custom haproxy
ingress-controller Pod with an openssl that does not have TLS 1.3 and got
auth working.


Br, Jouni


On 9/18/20 2:57 AM, Bruno Henc wrote:

Move ../test/recipes/80-test_ssl_new.t outside of the build root; that is what
"throw out" means. rm -f ../test/recipes/80-test_ssl_new.t also works.




‐‐‐ Original Message ‐‐‐
On Tuesday, September 15, 2020 8:28 PM, vcjouni  
wrote:


Hi,

I tested for openssl-1.1.1g.tar.gz from openssl.org in Linux Mint 19.3:

$ patch -p1 < reorder-sigalgs.patch
patching file ssl/t1_lib.c

./config

make

make test

Test Summary Report

-

../test/recipes/80-test_ssl_new.t    (Wstat: 256 Tests: 29 Failed: 1)
   Failed test:  20
   Non-zero exit status: 1
Files=155, Tests=1466, 76 wallclock secs ( 1.74 usr  0.10 sys + 75.96
cusr  9.72 csys = 87.52 CPU)
Result: FAIL
Makefile:207: recipe for target '_tests' failed

What did you mean by throwing that test out? That test is now failing.

Br,

Jouni

On 9/15/20 4:36 PM, Bruno Henc wrote:


Hi,
Last time I saw this error it involved TLS decryption by firewalls that didn't 
support RSA-PSS. Why they blow up
when the new, more secure RSA-PSS signature algorithms are used beats me, but 
it's principally on them for not supporting the latest IETF standards.
Attached is a patch that reorders the signature algorithms
in openssl 1.1.1 so that the pkcs1 ones are first. Also,
the test/recipes/80-test_ssl_new.t test needs to be thrown out for this to 
work. I would recommend trying to get this to work without docker and 
kubernetes first, and using haproxy's -d option to get more detailed OpenSSL 
logging.
It is also likely you will need to set ssl-default-bind-curves since the 
firewalls in question do not support curve x25519 either. This is a HAProxy 2.2 
option (https://www.haproxy.com/blog/announcing-haproxy-2-2/). 
ssl-default-bind-curves P-256 should do the trick, although tuning this to 
include all supported curves should be done for production traffic.
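Concretely, that suggestion might look like the global fragment below (a sketch only; ssl-default-bind-curves requires HAProxy 2.2+, and the curve list should be tuned for production):

```
# Hedged sketch: pin the curve list so middleboxes without x25519 support
# can still complete the TLS handshake (HAProxy 2.2+ only).
global
    ssl-default-bind-curves P-256
```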
As for implementing this in HAProxy, SSL_CTX_set1_sigalgs_list could be used in 
the part of the code that initializes the TLS connection, but it seems that 
somewhere in the handshake code the signature algorithms get renegotiated.
I haven't had any luck in identifying where exactly this renegotiation happens. Someone 
else might have more luck in writing up a patch that adds a "sigalgs" or 
similarly named option to adjust the signature algorithms.
Regards,
Bruno
‐‐‐ Original Message ‐‐‐
On Tuesday, September 15, 2020 1:45 PM, vcjouni jouni.rosen...@valuecode.com 
wrote:


Hi!
We cannot get haproxy-ingress to work with TLS authentication. The only
option that works at all is force-tlsv12, and then only Chrome
works. The problem is a TLS handshake decrypt error when the RSA-PSS
signature algorithm is used; the handshake fails every time. When we use
force-tlsv12, only Chrome falls back to the pkcs1 signature and then
the session is OK. Safari, Edge and IE do not work with any option; they
keep offering RSA-PSS signatures.
This same CA bundle and these cards have been used before with the citrix
netscaler ingress controller without problems (TLSv1.2). The smart cards are
government-provided FINEID cards, so we can't test them on the command line,
only by using the cards in a browser.
I already discussed this with the haproxy-ingress maintainer jcmoraisjr and he
suggested asking on the haproxy mailing list.
Docker image: quay.io/jcmoraisjr/haproxy-ingress
haproxy -vv

HA-Proxy version 2.0.17 2020/07/31 - https://haproxy.org/
Build options :
   TARGET  = linux-glibc
   CPU = generic
   CC  = gcc
   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -Wno-address-of-packed-member -Wno-unused-label
-Wno-sign-compare -Wno-unused-parameter -Wno-old-style-declaration
-Wno-ignored-qualifiers -Wno-clobbered -Wno-missing-field-initializers
-Wno-implicit-fallthrough -Wno-stringop-overflow -Wno-cast-function-type
-Wtype-limits -Wshift-negative-value -Wshift-overflow=2
-Wduplicated-cond -Wnull-dereference
   OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_GETADDRINFO=1 USE_OPENSSL=1
USE_LUA=1 USE_ZLIB=1
Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE
-PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD
-PTHREAD_PSHARED -REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY
+LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO
+OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO
+NS +DL +RT -DEVICEAT

Re: HTTP method ACLs broken in HAProxy 2.2.3

2020-09-18 Thread Christopher Faulet

On 18/09/2020 at 01:33, James Brown wrote:
git bisect says that this regression was caused 
by commit c89077713915f605eb5d716545f182c8d0bf5581


This makes little sense to me, since that commit doesn't touch anything even 
slightly related.


As far as I can tell, the proximate issue is that PATCH is not a "well-known 
method" to HAProxy, despite being in RFC 5789. find_http_meth is returning 
HTTP_METH_OTHER (silently) for it, and there's something hinky with how the 
method ACL matcher handles HTTP_METH_OTHER.


I tried to poke the pat_match_meth function with a debugger, but it's not even 
being called in v2.2.3 (it /is/ being called in v2.2.2). Did something break in 
how custom matchers are called?




I'm able to reproduce the bug. In fact, it was introduced by the commit 
05f3910f5 ("BUG/MEDIUM: htx: smp_prefetch_htx() must always validate the 
direction"). I attached a patch to fix the bug.


Thanks,
--
Christopher Faulet
From 6ad22f6b3610628eae275918464d2e9386299e1f Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Fri, 20 Mar 2020 08:21:55 +0100
Subject: [PATCH] WIP: Add flag to explicitly add a connection header

---
 src/mux_h1.c | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/src/mux_h1.c b/src/mux_h1.c
index b3c954e0e..0b8e6b120 100644
--- a/src/mux_h1.c
+++ b/src/mux_h1.c
@@ -79,6 +79,7 @@
 /* 0x0800 .. 0x1000 unused */
 #define H1S_F_HAVE_SRV_NAME  0x2000 /* Set during output process if the server name header was added to the request */
 #define H1S_F_HAVE_O_CONN0x4000 /* Set during output process to know connection mode was processed */
+#define H1S_F_EXPLICIT_CONN  0x8000 /* Explicitly set connection header during output processing */
 
 /* H1 connection descriptor */
 struct h1c {
@@ -942,13 +943,13 @@ static void h1_update_req_conn_value(struct h1s *h1s, struct h1m *h1m, struct is
 		return;
 
 	if (h1s->flags & H1S_F_WANT_KAL || px->options2 & PR_O2_FAKE_KA) {
-		if (!(h1m->flags & H1_MF_VER_11)) {
+		if (!(h1m->flags & H1_MF_VER_11) || (h1s->flags & H1S_F_EXPLICIT_CONN)) {
 			TRACE_STATE("add \"Connection: keep-alive\"", H1_EV_TX_DATA|H1_EV_TX_HDRS, h1s->h1c->conn, h1s);
 			*conn_val = ist("keep-alive");
 		}
 	}
 	else { /* H1S_F_WANT_CLO && !PR_O2_FAKE_KA */
-		if (h1m->flags & H1_MF_VER_11) {
+		if ((h1m->flags & H1_MF_VER_11) || (h1s->flags & H1S_F_EXPLICIT_CONN)) {
 			TRACE_STATE("add \"Connection: close\"", H1_EV_TX_DATA|H1_EV_TX_HDRS, h1s->h1c->conn, h1s);
 			*conn_val = ist("close");
 		}
@@ -965,13 +966,14 @@ static void h1_update_res_conn_value(struct h1s *h1s, struct h1m *h1m, struct is
 
 	if (h1s->flags & H1S_F_WANT_KAL) {
 		if (!(h1m->flags & H1_MF_VER_11) ||
-		!((h1m->flags & h1s->req.flags) & H1_MF_VER_11)) {
+		!((h1m->flags & h1s->req.flags) & H1_MF_VER_11) ||
+		(h1s->flags & H1S_F_EXPLICIT_CONN)) {
 			TRACE_STATE("add \"Connection: keep-alive\"", H1_EV_TX_DATA|H1_EV_TX_HDRS, h1s->h1c->conn, h1s);
 			*conn_val = ist("keep-alive");
 		}
 	}
 	else { /* H1S_F_WANT_CLO */
-		if (h1m->flags & H1_MF_VER_11) {
+		if ((h1m->flags & H1_MF_VER_11) || (h1s->flags & H1S_F_EXPLICIT_CONN)) {
 			TRACE_STATE("add \"Connection: close\"", H1_EV_TX_DATA|H1_EV_TX_HDRS, h1s->h1c->conn, h1s);
 			*conn_val = ist("close");
 		}
@@ -1684,10 +1686,13 @@ static size_t h1_process_output(struct h1c *h1c, struct buffer *buf, size_t coun
 		goto skip_hdr;
 }
 else if (isteq(n, ist("connection"))) {
+	h1s->flags |= H1S_F_EXPLICIT_CONN;
 	h1_parse_connection_header(h1m, &v);
 	if (!v.len)
 		goto skip_hdr;
 }
+else if (isteq(n, ist("keep-alive")))
+	h1s->flags |= H1S_F_EXPLICIT_CONN;
 
 /* Skip header if same name is used to add the server name */
 if (!(h1m->flags & H1_MF_RESP) && h1c->px->server_id_hdr_name &&
-- 
2.24.1



Right way to get file version with Data Plane API?

2020-09-18 Thread Ricardo Fraile

Hello,

Getting the file version seems to be one of the first things to do at 
the beginning of using the API, but I can't find an easy and clear way 
to get it.


It seems strange that this doesn't have a target URL to get it. 
Maybe I'm wrong, but I get it with the raw output:


# curl --user user1:password1 
http://127.0.0.1:/v2/services/haproxy/configuration/raw


Is there another right way to get it?


Thanks,