Package: release.debian.org
Severity: normal
Tags: bullseye
User: release.debian....@packages.debian.org
Usertags: pu
X-Debbugs-Cc: hapr...@packages.debian.org, t...@security.debian.org
Control: affects -1 + src:haproxy

Hi,

For ELTS I was fixing haproxy's CVEs CVE-2023-40225 and CVE-2023-45539,
and I would also like to fix them for stable and oldstable.

CC'ing the security team, in case they want to issue a DSA instead.

The changes can also be found on the LTS repository:
https://salsa.debian.org/lts-team/packages/haproxy

[ Tests ]
I've tested the fixes manually, using netcat to inject
problematic HTTP requests and confirm that the patched
version rejects the malicious requests (using nginx and
also netcat as HTTP server).

(Being verbose here to document the tests for later reference ;-))

haproxy is listening on port 8080

e.g. for CVE-2023-45539:
echo 'GET /index.nginx-debian.html# HTTP/1.0' | netcat localhost 8080
must be rejected with 400 Bad Request,
and without the "#" it must be accepted.
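The netcat check above can also be scripted; here is a minimal Python sketch (host, port, and path are taken from the setup above, and the actual network call is shown commented out because it assumes a running haproxy):

```python
import socket

def raw_request(path):
    """Build a minimal raw HTTP/1.0 request for the given path."""
    return ("GET %s HTTP/1.0\r\n\r\n" % path).encode("ascii")

def first_status_line(host, port, req):
    """Send a raw request and return the first response line,
    e.g. 'HTTP/1.0 400 Bad Request'."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(req)
        return s.recv(4096).split(b"\r\n", 1)[0].decode("ascii", "replace")

# Example, assuming haproxy listens on localhost:8080 as described above:
# first_status_line("localhost", 8080, raw_request("/index.nginx-debian.html#"))
# should contain "400" on a patched haproxy, while the same path without
# the "#" should be accepted.
```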

for CVE-2023-40225, nginx is stopped, and netcat listens on port 80:
echo 'GET / HTTP/1.1
host: whatever
content-length:
' | netcat localhost 8080

If the request is accepted (and forwarded to the listening netcat),
haproxy is vulnerable. If a "400 Bad Request" is thrown, without
netcat receiving anything, haproxy is not vulnerable.
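For reference, the rule the Content-Length patch enforces (reject an empty value anywhere in a comma-separated list, and only accept repeated values if they are identical) can be sketched as a standalone Python function. This is only an illustration of the rule, not haproxy's actual C parser:

```python
def parse_content_length(value):
    """Strict Content-Length parsing in the spirit of the fix:
    - an empty element anywhere (empty header, leading/trailing comma)
      is rejected instead of being silently ignored;
    - comma-separated values must all be identical digit strings."""
    items = [v.strip(" \t") for v in value.split(",")]
    if any(item == "" for item in items):
        raise ValueError("empty content-length value")
    if not all(item.isdigit() for item in items):
        raise ValueError("non-digit in content-length")
    if len(set(items)) != 1:
        raise ValueError("conflicting content-length values")
    return int(items[0])
```

With this rule, "content-length: 0," and a bare "content-length:" are both rejected, which is exactly what the manual test above checks for.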


[ Risks ]
Upstream patches, applied cleanly.

[ Checklist ]
  [x] *all* changes are documented in the d/changelog
  [x] I reviewed all changes and I approve them
  [x] attach debdiff against the package in (old)stable
  [x] the issue is verified as fixed in unstable

Debdiff attached.

I've uploaded the package to o-s-p-u already.

Cheers,
-- 
tobi
diff -Nru haproxy-2.2.9/debian/changelog haproxy-2.2.9/debian/changelog
--- haproxy-2.2.9/debian/changelog      2023-04-10 16:18:09.000000000 +0200
+++ haproxy-2.2.9/debian/changelog      2023-12-25 09:46:44.000000000 +0100
@@ -1,3 +1,13 @@
+haproxy (2.2.9-2+deb11u6) bullseye; urgency=high
+
+  * Non-maintainer upload by the Security Team.
+  * Import upstream patch for CVE-2023-40225, forwarding malformed
+    Content-Length Headers. Closes: #1043502.
+  * Import upstream patch for CVE-2023-45539, accepting invalid requests
+    containing '#' as part of the URI component.
+
+ -- Tobias Frost <t...@debian.org>  Mon, 25 Dec 2023 09:46:44 +0100
+
 haproxy (2.2.9-2+deb11u5) bullseye-security; urgency=high
 
   * Non-maintainer upload by the Security Team.
diff -Nru haproxy-2.2.9/debian/.gitlab-ci.yml haproxy-2.2.9/debian/.gitlab-ci.yml
--- haproxy-2.2.9/debian/.gitlab-ci.yml 1970-01-01 01:00:00.000000000 +0100
+++ haproxy-2.2.9/debian/.gitlab-ci.yml 2023-12-25 09:46:44.000000000 +0100
@@ -0,0 +1,7 @@
+---
+include:
+  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml
+  - 
https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
+
+variables:
+  RELEASE: 'bullseye'
diff -Nru haproxy-2.2.9/debian/patches/CVE-2023-40225.patch haproxy-2.2.9/debian/patches/CVE-2023-40225.patch
--- haproxy-2.2.9/debian/patches/CVE-2023-40225.patch   1970-01-01 01:00:00.000000000 +0100
+++ haproxy-2.2.9/debian/patches/CVE-2023-40225.patch   2023-12-25 09:46:44.000000000 +0100
@@ -0,0 +1,257 @@
+Description: CVE-2023-40225 - http: reject any empty content-length header value
+ The content-length header parser has its dedicated function, in order
+ to take extreme care about invalid, unparsable, or conflicting values.
+ But there's a corner case in it, by which it stops comparing values
+ when reaching the end of the header. This has for a side effect that
+ an empty value or a value that ends with a comma does not deserve
+ further analysis, and it acts as if the header was absent.
+ .
+ While this is not necessarily a problem for the value ending with a
+ comma as it will be cause a header folding and will disappear, it is a
+ problem for the first isolated empty header because this one will not
+ be recontructed when next ones are seen, and will be passed as-is to the
+ backend server. A vulnerable HTTP/1 server hosted behind haproxy that
+ would just use this first value as "0" and ignore the valid one would
+ then not be protected by haproxy and could be attacked this way, taking
+ the payload for an extra request.
+ .
+ In field the risk depends on the server. Most commonly used servers
+ already have safe content-length parsers, but users relying on haproxy
+ to protect a known-vulnerable server might be at risk (and the risk of
+ a bug even in a reputable server should never be dismissed).
+ .
+ A configuration-based work-around consists in adding the following rule
+ in the frontend, to explicitly reject requests featuring an empty
+ content-length header that would have not be folded into an existing
+ one:
+ .
+ http-request deny if { hdr_len(content-length) 0 }
+ .
+ The real fix consists in adjusting the parser so that it always expects a
+ value at the beginning of the header or after a comma. It will now reject
+ requests and responses having empty values anywhere in the C-L header.
+ .
+ This needs to be backported to all supported versions. Note that the
+ modification was made to functions h1_parse_cont_len_header() and
+ http_parse_cont_len_header(). Prior to 2.8 the latter was in
+ h2_parse_cont_len_header(). One day the two should be refused but the
+ former is also used by Lua.
+ .
+ The HTTP messaging reg-tests were completed to test these cases.
+ .
+ Thanks to Ben Kallus of Dartmouth College and Narf Industries for
+ reporting this! (this is in GH haproxy#2237).
+Origin: https://github.com/FireBurn/haproxy/commit/e8ba5e106444fc78558f4ff26e9ce946f89216f4
+Bug: https://github.com/haproxy/haproxy/issues/2237
+Bug-Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1043502
+Last-Update: 2023-12-24
+---
+This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
+diff --git a/reg-tests/http-messaging/h1_to_h1.vtc b/reg-tests/http-messaging/h1_to_h1.vtc
+index 5b02f172433b..39708a246f6d 100644
+--- a/reg-tests/http-messaging/h1_to_h1.vtc
++++ b/reg-tests/http-messaging/h1_to_h1.vtc
+@@ -269,3 +269,29 @@ client c3h1 -connect ${h1_feh1_sock} {
+       # arrive here.
+       expect_close
+ } -run
++
++client c4h1 -connect ${h1_feh1_sock} {
++      # this request is invalid and advertises an invalid C-L ending with an
++        # empty value, which results in a stream error.
++      txreq \
++        -req "GET" \
++        -url "/test31.html" \
++          -hdr "content-length: 0," \
++          -hdr "connection: close"
++      rxresp
++      expect resp.status == 400
++      expect_close
++} -run
++
++client c5h1 -connect ${h1_feh1_sock} {
++      # this request is invalid and advertises an empty C-L, which results
++      # in a stream error.
++      txreq \
++        -req "GET" \
++        -url "/test41.html" \
++          -hdr "content-length:" \
++          -hdr "connection: close"
++      rxresp
++      expect resp.status == 400
++      expect_close
++} -run
+diff --git a/reg-tests/http-messaging/h2_to_h1.vtc b/reg-tests/http-messaging/h2_to_h1.vtc
+index 481aded12aee..3ed3751902e1 100644
+--- a/reg-tests/http-messaging/h2_to_h1.vtc
++++ b/reg-tests/http-messaging/h2_to_h1.vtc
+@@ -10,6 +10,8 @@ barrier b1 cond 2 -cyclic
+ barrier b2 cond 2 -cyclic
+ barrier b3 cond 2 -cyclic
+ barrier b4 cond 2 -cyclic
++barrier b5 cond 2 -cyclic
++barrier b6 cond 2 -cyclic
+ 
+ server s1 {
+       rxreq
+@@ -31,6 +33,12 @@ server s1 {
+ 
+       barrier b4 sync
+       # the next request is never received
++
++      barrier b5 sync
++      # the next request is never received
++
++      barrier b6 sync
++      # the next request is never received
+ } -repeat 2 -start
+ 
+ haproxy h1 -conf {
+@@ -115,6 +123,32 @@ client c1h2 -connect ${h1_feh2_sock} {
+               txdata -data "this is sent and ignored"
+               rxrst
+       } -run
++
++      # fifth request is invalid and advertises an invalid C-L ending with an
++        # empty value, which results in a stream error.
++      stream 9 {
++              barrier b5 sync
++              txreq \
++                -req "GET" \
++                -scheme "https" \
++                -url "/test5.html" \
++                -hdr "content-length" "0," \
++                -nostrend
++              rxrst
++      } -run
++
++      # sixth request is invalid and advertises an empty C-L, which results
++      # in a stream error.
++      stream 11 {
++              barrier b6 sync
++              txreq \
++                -req "GET" \
++                -scheme "https" \
++                -url "/test6.html" \
++                -hdr "content-length" "" \
++                -nostrend
++              rxrst
++      } -run
+ } -run
+ 
+ # HEAD requests : don't work well yet
+@@ -257,4 +291,30 @@ client c3h2 -connect ${h1_feh2_sock} {
+               txdata -data "this is sent and ignored"
+               rxrst
+       } -run
++
++      # fifth request is invalid and advertises invalid C-L ending with an
++        # empty value, which results in a stream error.
++      stream 9 {
++              barrier b5 sync
++              txreq \
++                -req "POST" \
++                -scheme "https" \
++                -url "/test25.html" \
++                -hdr "content-length" "0," \
++                -nostrend
++              rxrst
++      } -run
++
++      # sixth request is invalid and advertises an empty C-L, which results
++      # in a stream error.
++      stream 11 {
++              barrier b6 sync
++              txreq \
++                -req "POST" \
++                -scheme "https" \
++                -url "/test26.html" \
++                -hdr "content-length" "" \
++                -nostrend
++              rxrst
++      } -run
+ } -run
+diff --git a/src/h1.c b/src/h1.c
+index 83912343d2c1..f351af8f7125 100644
+--- a/src/h1.c
++++ b/src/h1.c
+@@ -29,13 +29,20 @@ int h1_parse_cont_len_header(struct h1m *h1m, struct ist *value)
+       int not_first = !!(h1m->flags & H1_MF_CLEN);
+       struct ist word;
+ 
+-      word.ptr = value->ptr - 1; // -1 for next loop's pre-increment
++      word.ptr = value->ptr;
+       e = value->ptr + value->len;
+ 
+-      while (++word.ptr < e) {
++      while (1) {
++              if (word.ptr >= e) {
++                      /* empty header or empty value */
++                      goto fail;
++              }
++
+               /* skip leading delimiter and blanks */
+-              if (unlikely(HTTP_IS_LWS(*word.ptr)))
++              if (unlikely(HTTP_IS_LWS(*word.ptr))) {
++                      word.ptr++;
+                       continue;
++              }
+ 
+               /* digits only now */
+               for (cl = 0, n = word.ptr; n < e; n++) {
+@@ -74,6 +81,13 @@ int h1_parse_cont_len_header(struct h1m *h1m, struct ist *value)
+               h1m->flags |= H1_MF_CLEN;
+               h1m->curr_len = h1m->body_len = cl;
+               *value = word;
++
++              /* Now either n==e and we're done, or n points to the comma,
++               * and we skip it and continue.
++               */
++              if (n++ == e)
++                      break;
++
+               word.ptr = n;
+       }
+       /* here we've reached the end with a single value or a series of
+diff --git a/src/h2.c b/src/h2.c
+index bda4b05c7e22..b25aee10a51b 100644
+--- a/src/h2.c
++++ b/src/h2.c
+@@ -79,13 +79,20 @@ int h2_parse_cont_len_header(unsigned int *msgf, struct ist *value, unsigned lon
+       int not_first = !!(*msgf & H2_MSGF_BODY_CL);
+       struct ist word;
+ 
+-      word.ptr = value->ptr - 1; // -1 for next loop's pre-increment
++      word.ptr = value->ptr;
+       e = value->ptr + value->len;
+ 
+-      while (++word.ptr < e) {
++      while (1) {
++              if (word.ptr >= e) {
++                      /* empty header or empty value */
++                      goto fail;
++              }
++
+               /* skip leading delimiter and blanks */
+-              if (unlikely(HTTP_IS_LWS(*word.ptr)))
++              if (unlikely(HTTP_IS_LWS(*word.ptr))) {
++                      word.ptr++;
+                       continue;
++              }
+ 
+               /* digits only now */
+               for (cl = 0, n = word.ptr; n < e; n++) {
+@@ -124,6 +131,13 @@ int h2_parse_cont_len_header(unsigned int *msgf, struct ist *value, unsigned lon
+               *msgf |= H2_MSGF_BODY_CL;
+               *body_len = cl;
+               *value = word;
++
++              /* Now either n==e and we're done, or n points to the comma,
++               * and we skip it and continue.
++               */
++              if (n++ == e)
++                      break;
++
+               word.ptr = n;
+       }
+       /* here we've reached the end with a single value or a series of
diff -Nru haproxy-2.2.9/debian/patches/CVE-2023-45539.patch haproxy-2.2.9/debian/patches/CVE-2023-45539.patch
--- haproxy-2.2.9/debian/patches/CVE-2023-45539.patch   1970-01-01 01:00:00.000000000 +0100
+++ haproxy-2.2.9/debian/patches/CVE-2023-45539.patch   2023-12-25 09:46:44.000000000 +0100
@@ -0,0 +1,102 @@
+Description: CVE-2023-45539 - HAProxy accepts # as part of the URI component.
+ Seth Manesse and Paul Plasil reported that the "path" sample fetch
+ function incorrectly accepts '#' as part of the path component. This
+ can in some cases lead to misrouted requests for rules that would apply
+ on the suffix:
+ .
+ use_backend static if { path_end .png .jpg .gif .css .js }
+ .
+ Note that this behavior can be selectively configured using
+ "normalize-uri fragment-encode" and "normalize-uri fragment-strip".
+ .
+ The problem is that while the RFC says that this '#' must never be
+ emitted, as often it doesn't suggest how servers should handle it. A
+ diminishing number of servers still do accept it and trim it silently,
+ while others are rejecting it, as indicated in the conversation below
+ with other implementers:
+ .
+ https://lists.w3.org/Archives/Public/ietf-http-wg/2023JulSep/0070.html
+ .
+ Looking at logs from publicly exposed servers, such requests appear at
+ a rate of roughly 1 per million and only come from attacks or poorly
+ written web crawlers incorrectly following links found on various pages.
+ .
+ Thus it looks like the best solution to this problem is to simply reject
+ such ambiguous requests by default, and include this in the list of
+ controls that can be disabled using "option accept-invalid-http-request".
+ .
+ We're already rejecting URIs containing any control char anyway, so we
+ should also reject '#'.
+ .
+ In the H1 parser for the H1_MSG_RQURI state, there is an accelerated
+ parser for bytes 0x21..0x7e that has been tightened to 0x24..0x7e (it
+ should not impact perf since 0x21..0x23 are not supposed to appear in
+ a URI anyway). This way '#' falls through the fine-grained filter and
+ we can add the special case for it also conditionned by a check on the
+ proxy's option "accept-invalid-http-request", with no overhead for the
+ vast majority of valid URIs. Here this information is available through
+ h1m->err_pos that's set to -2 when the option is here (so we don't need
+ to change the API to expose the proxy). Example with a trivial GET
+ through netcat:
+ .
+ [08/Aug/2023:16:16:52.651] frontend layer1 (#2): invalid request
+ backend <NONE> (#-1), server <NONE> (#-1), event #0, src 127.0.0.1:50812
+ buffer starts at 0 (including 0 out), 16361 free,
+ len 23, wraps at 16336, error at position 7
+ H1 connection flags 0x00000000, H1 stream flags 0x00000810
+ H1 msg state MSG_RQURI(4), H1 msg flags 0x00001400
+ H1 chunk len 0 bytes, H1 body len 0 bytes :
+ .
+ 00000  GET /aa#bb HTTP/1.0\r\n
+ 00021  \r\n
+ .
+ This should be progressively backported to all stable versions along with
+ the following patch:
+ .
+ REGTESTS: http-rules: add accept-invalid-http-request for normalize-uri tests
+ .
+ Similar fixes for h2 and h3 will come in followup patches.
+ .
+ Thanks to Seth Manesse and Paul Plasil for reporting this problem with
+ detailed explanations.
+Origin: https://git.haproxy.org/?p=haproxy-2.2.git;a=commit;h=178cea76b1c9d9413afa6961b6a4576fcb5b26fa
+---
+ src/h1.c | 15 +++++++++++----
+ 1 file changed, 11 insertions(+), 4 deletions(-)
+
+--- a/src/h1.c
++++ b/src/h1.c
+@@ -399,13 +399,13 @@
+       case H1_MSG_RQURI:
+       http_msg_rquri:
+ #ifdef HA_UNALIGNED_LE
+-              /* speedup: skip bytes not between 0x21 and 0x7e inclusive */
++              /* speedup: skip bytes not between 0x24 and 0x7e inclusive */
+               while (ptr <= end - sizeof(int)) {
+-                      int x = *(int *)ptr - 0x21212121;
++                      int x = *(int *)ptr - 0x24242424;
+                       if (x & 0x80808080)
+                               break;
+ 
+-                      x -= 0x5e5e5e5e;
++                      x -= 0x5b5b5b5b;
+                       if (!(x & 0x80808080))
+                               break;
+ 
+@@ -417,8 +417,15 @@
+                       goto http_msg_ood;
+               }
+       http_msg_rquri2:
+-              if (likely((unsigned char)(*ptr - 33) <= 93)) /* 33 to 126 included */
++              if (likely((unsigned char)(*ptr - 33) <= 93)) { /* 33 to 126 included */
++                      if (*ptr == '#') {
++                              if (h1m->err_pos < -1) /* PR_O2_REQBUG_OK not set */
++                                      goto invalid_char;
++                              if (h1m->err_pos == -1) /* PR_O2_REQBUG_OK set: just log */
++                                      h1m->err_pos = ptr - start + skip;
++                      }
+                       EAT_AND_JUMP_OR_RETURN(ptr, end, http_msg_rquri2, http_msg_ood, state, H1_MSG_RQURI);
++              }
+ 
+               if (likely(HTTP_IS_SPHT(*ptr))) {
+                       sl.rq.u.len = ptr - sl.rq.u.ptr;
diff -Nru haproxy-2.2.9/debian/patches/series haproxy-2.2.9/debian/patches/series
--- haproxy-2.2.9/debian/patches/series 2023-04-10 16:17:01.000000000 +0200
+++ haproxy-2.2.9/debian/patches/series 2023-12-25 09:46:44.000000000 +0100
@@ -21,3 +21,5 @@
 
 # applied during the build process:
 # debianize-dconv.patch
+CVE-2023-40225.patch
+CVE-2023-45539.patch
