Hi John,

On Thu, Aug 21, 2025 at 01:04:24AM -0400, John Lauro wrote:
> Our webdevs noticed that clients are no longer receiving pages in gzip
> format. One thing that has changed is that we have another haproxy
> server in the middle, and the backend connection to it uses "server
> send-proxy-v2". If I skip the middle haproxy server and hit the final
> servers directly, then haproxy will properly encode the html pages.
>
> We were running 2.8, but upgraded to 3.2.4 and verified it is still an
> issue. The configuration is fairly basic:
>
>     filter compression
>     compression algo gzip
>     compression type text/html text/plain text/css application/xml
>         application/javascript application/x-www-form-urlencoded
>         application/json
>
> I can narrow it down to a more complete config and test case if this
> is expected to work and not easily reproduced. If it is expected to
> fail (a known limitation of proxy-v2), is the recommended option to
> use http/https instead and to set headers for IP tracking? I prefer
> passing the original IPs via the proxy protocol instead of headers,
> but that's only a requirement with tcp.
That's very strange, as there is really *zero* relation between the two
mechanisms (compression and the proxy protocol). Compression works at the
upper layer (the application layer, on top of HTTP), while the proxy
protocol sits at the connection layer. Are you sure it's not something
like the server omitting some headers, or responding in HTTP/1.0 or some
such thing, when it receives a proxy header? If you have the ability to
capture the response headers with and without PP to compare, that would
be nice.

On an unrelated note, I don't much like using the proxy protocol with
HTTP, because it forces each server connection to be marked as belonging
to a single client, which prevents connections from being shared between
clients and forces them to be closed as soon as the client connection
closes. This results in more concurrent connections on the server (less
sharing) and a higher connection rate. Using "forwarded" or
"x-forwarded-for" is much better (but requires that the server supports
it, of course).
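For example, something along these lines (a rough, untested sketch; the
section and server names are just placeholders for your setup) would log
the Content-Encoding returned by the server, and shows the header-based
alternative to send-proxy-v2:

    frontend fe_www
        mode http
        option httplog
        # capture the Content-Encoding the server sends back, to compare
        # runs with and without send-proxy-v2 on the middle hop
        declare capture response len 64
        http-response capture res.hdr(Content-Encoding) id 0
        default_backend be_app

    backend be_app
        mode http
        # instead of send-proxy-v2, pass the client address in a header
        option forwardfor        # adds X-Forwarded-For
        # or, starting with 2.9, the standard RFC 7239 variant:
        # option forwarded
        server app1 192.0.2.10:80

With the capture in place, the value appears between braces in the access
logs, so any difference with and without PP should be directly visible
there.

Willy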