A team of security researchers notified me on Thursday evening that they
had found a dirty bug in HAProxy's header processing which, when
properly exploited, allows an attacker to build an HTTP request smuggling
attack. HTTP request smuggling attacks consist of passing extra requests
after a first one on a connection to a proxy, and making the subsequent
ones bypass the filtering in place.

Here, a properly crafted HTTP request can make HAProxy drop some important
header fields such as Connection, Content-Length, Transfer-Encoding, Host,
etc. after having parsed and at least partially processed them. Because of
this, the request that HAProxy forwards doesn't match what it thinks it is,
and some parts of a request body can be used to create extra requests to
the server, which will be neither filtered nor detected by HAProxy. This
can for example be used to bypass an authentication check that is present
on HAProxy for some URLs, or to access some restricted area that is
normally accessible only if some specific checks are validated. The
difficulty of building such attacks and their impact depend in great part
on the site's architecture and on what the servers are willing to accept;
if there is no filtering, routing, nor caching on HAProxy, at best some
connections will eventually fail and the logs will not reflect the
extraneous requests. In general it's not trivial to build such an attack
in that it depends on the site, but it's not terribly difficult either
for someone who knows HTTP internals well enough and studies the fix.
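For illustration only, here is a minimal Python sketch of what a smuggled
payload looks like in principle (the paths, host, and sizes below are
hypothetical, and this is not an exploit for this specific bug): if the
proxy parses the Content-Length and then drops it, the backend never sees
it and interprets the "body" bytes as a second, unfiltered request.

```python
# Hypothetical illustration of request smuggling: a first request whose
# declared body is itself a complete second request. If the proxy forwards
# it without the Content-Length header it already parsed, the backend
# treats the body as a new request targeting /restricted.
smuggled = (
    b"GET /restricted HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"\r\n"
)
first = (
    b"POST /public HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: " + str(len(smuggled)).encode() + b"\r\n"
    b"\r\n"
)
# From the proxy's point of view this is one request; from a backend that
# never received Content-Length, it is two.
payload = first + smuggled
print(payload.decode())
```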

The problem affects all versions to different degrees:

  - HTX-aware versions (2.0 in default config and all versions above) are
    impacted in HTTP/1. HTTP/2 and HTTP/3 also suffer from the bug, but no
    parsing nor processing happens on the dropped headers, so HAProxy stays
    properly synchronized with the server (i.e. there's no request
    smuggling attack there)

  - non-HTX versions (1.9 and before, or 2.0 in legacy mode) will not
    drop the header, but will nonetheless pass the faulty request as-is
    to a server. This means that, while such versions cannot be abused
    to attack a server, if placed at the edge they are not sufficient to
    protect an internal HAProxy instance either.

The issue was fixed in all versions and all modes (HTX and legacy), and
all versions were upgraded. The status of supported versions is now the
following:

   Branch     Vulnerable               Fixed      Maintained until
   2.8-dev    2.8-dev0 .. 2.8-dev3     2.8-dev4     2028-Q4 (LTS)
   2.7        2.7.0 .. 2.7.2           2.7.3        2024-Q1
   2.6        2.6.0 .. 2.6.8           2.6.9        2027-Q2 (LTS)
   2.5        2.5.0 .. 2.5.11          2.5.12       2023-Q1
   2.4        2.4.0 .. 2.4.21          2.4.22       2026-Q2 (LTS)
   2.2        2.2.0 .. 2.2.28          2.2.29       2025-Q2 (LTS)
   2.0        2.0.0 .. 2.0.30          2.0.31       2024-Q2 (LTS)

Distros were notified (admittedly not very long in advance; the delay was
quite short for them) and updated packages will appear soon. If you don't
see yours immediately, please be gentle, it takes time to build many
versions.

If for any reason you're not sure where to retrieve an updated package,
as a reminder the list of available packages (both provided by distros
and by the community) is here:


If you're running an outdated version (a branch that is not listed
above), the best short-term option is to upgrade to the immediately
following branch, which is the one that will bring you the fewest
surprises or changes. Please do not ask for help upgrading from outdated
versions; if you didn't care about updating for 5 years, it's unlikely
that anyone will care about helping you catch up.

For those who for some reason cannot update right now, we were able to
design a workaround that was tested on all of the versions above and in
both modes (legacy and HTX; only 2.0 has legacy). It consists of adding a
rule in each exposed frontend, preferably before other "http-request"
statements, that detects the internal condition resulting from an
attempt to exploit the bug (warning: it's a single very long line):

   http-request deny if { fc_http_major 1 } !{ req.body_size 0 } !{ req.hdr(content-length) -m found } !{ req.hdr(transfer-encoding) -m found } !{ method CONNECT }

   Note: versions 2.4 and above may optionally drop the final test on the
         "CONNECT" method as it's only strictly needed for 2.3 and earlier
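To make the rule easier to audit, here is the same tested rule preceded
by a comment block spelling out what each condition matches (the comments
are explanatory only; in haproxy.cfg the rule itself must stay on a
single line):

```
# { fc_http_major 1 }                      -> only HTTP/1 front connections
# !{ req.body_size 0 }                     -> a request body is present...
# !{ req.hdr(content-length) -m found }    -> ...yet no Content-Length header
# !{ req.hdr(transfer-encoding) -m found } -> ...and no Transfer-Encoding either
# !{ method CONNECT }                      -> CONNECT legitimately has neither
http-request deny if { fc_http_major 1 } !{ req.body_size 0 } !{ req.hdr(content-length) -m found } !{ req.hdr(transfer-encoding) -m found } !{ method CONNECT }
```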

This will result in the request being rejected with a 403 response if it
tries to abuse this bug in the parser. An increase in 403s in your logs
may indicate attempts to exploit the bug. With the fix deployed, a 400
(bad request) will be returned and logged instead, and blocked HTTP/1
requests will appear as usual, with the details of the protocol violation
in "show errors". HTTP/2 and HTTP/3 requests are sent in binary format
and are currently not dumped by "show errors", though, as mentioned
above, they should be mostly harmless.

As usual, config-based workarounds should only be seen as a fallback
solution in case it is not possible or desirable to deploy an update.
This one was extensively tested to make sure it doesn't block valid
traffic, but it is not possible to be certain it will block every form
of the attack, so it is not a durable solution anyway.

If you believe you face a regression after deploying the updated version,
please do the following:

  1) make sure the symptoms you're observing were not present in the
     last version of the same branch affected by the bug (i.e. the
     regression could come from another of the versions you've skipped)

  2) roll back to the latest known good version with the workaround
     above and report your problem, either on the mailing list or on
     the GitHub-hosted issue tracker after verifying that yours is not
     there yet:  https://github.com/haproxy/haproxy/issues

  3) as usual, please share any relevant info (confs, type of traffic,
     logs, stats, observations etc).

CVE-2023-25725 was assigned to this bug.

I would like to particularly thank the security research team composed
of Bahruz Jabiyev, Anthony Gavazzi, and Engin Kirda from Northeastern
University, Kaan Onarlioglu from Akamai Technologies, Adi Peleg and
Harvey Tuch from Google for their responsible disclosure of the problem
with sufficient details allowing the fix to be issued very quickly.

Big thumbs up as well to distro packagers for being so responsive to the
need of an emergency release, and to Robert Frohl from SuSE for filing and
handling the CVE.

A separate message will be sent for each version.

