Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-31 Thread Abhijeet Rastogi
Tim,


> Partial responses should not happen with TCP being used. I didn't follow
> along with all the DNS stuff, but does the resolver support TCP by now
> or is it still limited to UDP?


Using TCP doesn't avoid the truncation, it only delays the problem to a
higher limit. From
https://datatracker.ietf.org/doc/html/rfc1035#section-4.2.2

The message is prefixed with a two byte length field which gives the message
length, excluding the two byte length field.


And that limits the message length to at most 2^16 - 1 (65535) bytes. At our
HAProxy setup/scale, we're already hitting those limits. As a typical example,
I just ran an ordinary query: even with DNS compression enabled, the TCP
response with around 2000 answers occupies around 33k.

(pasting specific output from dig)
;; flags: qr rd; QUERY: 1, ANSWER: 2105, AUTHORITY: 0, ADDITIONAL: 0
;; Query time: 73 msec
;; SERVER: ::1#53(::1)
;; WHEN: Thu Mar 31 15:22:33 UTC 2022
;; MSG SIZE  rcvd: 33739
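
(For scale: with name compression, each A record in the answer section costs
16 bytes on the wire -- a 2-byte name pointer, 2-byte type, 2-byte class,
4-byte TTL, 2-byte rdlength and a 4-byte address -- so the 2105 answers alone
account for 2105 x 16 = 33680 bytes, which together with the 12-byte header
and the question section lines up with the 33739 bytes reported above.)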

It is perfectly reasonable to reach the 65k limit as well. When that limit
is exceeded, you may see the DNS server randomly choosing which "answers"
to include so that the response fits within the 65k limit.
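
For illustration, the RFC 1035 TCP framing is small enough to sketch in a
few lines of Python (the helper names are mine, not HAProxy or resolver
code); it makes the hard cap obvious, since whatever the server wants to
return has to fit in a length expressible in two bytes:

    import struct

    MAX_TCP_DNS = 0xFFFF  # 65535 bytes: the ceiling imposed by the 2-byte length prefix

    def frame_dns_message(msg: bytes) -> bytes:
        """Prefix a DNS message with the RFC 1035 two-byte length for TCP transport."""
        if len(msg) > MAX_TCP_DNS:
            # No way to express a longer message; a server has to truncate
            # or drop answers to stay under the cap.
            raise ValueError(f"{len(msg)}-byte DNS message cannot be framed over TCP")
        return struct.pack("!H", len(msg)) + msg

    def recv_exact(sock, n: int) -> bytes:
        """Read exactly n bytes from a connected TCP socket."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection mid-message")
            buf += chunk
        return buf

    def read_dns_message(sock) -> bytes:
        """Read one length-prefixed DNS message; the prefix excludes itself."""
        (length,) = struct.unpack("!H", recv_exact(sock, 2))
        return recv_exact(sock, length)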

Cheers,
Abhijeet (https://abhi.host)

On Thu, Mar 31, 2022 at 5:01 AM Tim Düsterhus  wrote:

> Willy,
>
> On 3/31/22 08:31, Willy Tarreau wrote:
> >> Can you clarify what *exactly* is expected to be removed and what will
> >> remain? Is it just SRV DNS records or more?
> >
> > watchdog was triggered due to this, and in my opinion the concept is
> > fundamentally flawed since responses are often partial. As soon as you
> > suspect that all active addresses were not delivered, you know that you
> > have to put lots of hacks in place.
>
> Partial responses should not happen with TCP being used. I didn't follow
> along with all the DNS stuff, but does the resolver support TCP by now
> or is it still limited to UDP?
> > I hope this clarifies the situation and doesn't start to make anyone
> > worry :-)  Anyway there's no emergency, the code is still there, and
>
> It makes me worry :-) On one box I'm relying on a server-template being
> filled with A records from the resolver. Let me explain my use-case:
>
> For that service I need to proxy (static) resources from a service
> *external* to me to:
>
> a) Prevent IP addresses of my end users being sent to this external
> service (for privacy reasons)
> b) Locally cache the files to reduce the load on the external service.
>
> The IP address(es) of the external service are not under my control and
> can and will unpredictably change. The service also exposes multiple A
> records pointing to different backend servers. I must spread my requests
> across multiple of those backend servers to be "nice to them" and not
> tie up resources on a single of their backends.
>
> Sure, DNS-based load-balancing comes with great limitations, due to
> browser's behavior, caching resolvers and TTLs, but apparently it works
> for the service. And as the service is external to me, that's what I
> need to work with.
>
> Currently it's working well:
>
> Requests from the user's browser for the proxied resources come in via
> my HAProxy, are then routed to my nginx (which handles the caching),
> which then uses a dedicated HAProxy 'listen' section for the upstream
> requests (as nginx is too dumb to perform DNS resolving properly). This
> 'listen' section then uses 'server-template' to spread the requests
> across the external service's servers.
>
> Browser -> HAProxy -> nginx -> HAProxy -> External Service (uncached)
> Browser -> HAProxy -> nginx (cached)
>
> nginx:
>
> > upstream some_service {
> >   server unix:/var/lib/haproxy/some-service-internal.sock;
> > }
> >
> > proxy_cache_path /var/lib/some_service/cache/ levels=2
> keys_zone=some_service:50m max_size=7G inactive=30d;
> >
> > server {
> >   listen unix:/var/lib/haproxy/some-service.sock;
> >   real_ip_header X-Forwarded-For;
> >
> >   server_name some-service.example.net;
> >
> >   error_log /var/log/nginx/some_service.log;
> >
> >   proxy_cache some_service;
> >   proxy_cache_key "https://some_service$request_uri";
> >   proxy_cache_background_update on;
> >   proxy_cache_valid 200 30d;
> >   proxy_cache_use_stale error timeout invalid_header updating
> http_500 http_502 http_503 http_504 http_429;
> >   proxy_http_version 1.1;
> >   proxy_temp_path /var/lib/some_service/tmp/;
> >   add_header X-Proxy-Cache $upstream_cache_status;
> >
> >   location / {
> >   proxy_pass http://some_service;
> >   }
> > }
>
> HAProxy:
>
> >
> > listen some-service-internal
> >   mode http
> >
> >   bind unix@some-service-internal.sock mode 666
> >
> >   http-request set-header Host example.com
> >
> >   server-template some-service 1-4 example.com:443 resolvers
> my-resolver check inter 60s ssl sni req.hdr(host) verify required ca-file
> ca-certificates.crt resolve-prefer ipv4
>
> To replicate this set-up without support for server-template + 

Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-31 Thread Tim Düsterhus

Willy,

On 3/31/22 08:31, Willy Tarreau wrote:

>> Can you clarify what *exactly* is expected to be removed and what will
>> remain? Is it just SRV DNS records or more?


> watchdog was triggered due to this, and in my opinion the concept is
> fundamentally flawed since responses are often partial. As soon as you
> suspect that all active addresses were not delivered, you know that you
> have to put lots of hacks in place.


Partial responses should not happen with TCP being used. I didn't follow 
along with all the DNS stuff, but does the resolver support TCP by now 
or is it still limited to UDP?

> I hope this clarifies the situation and doesn't start to make anyone
> worry :-)  Anyway there's no emergency, the code is still there, and


It makes me worry :-) On one box I'm relying on a server-template being 
filled with A records from the resolver. Let me explain my use-case:


For that service I need to proxy (static) resources from a service
*external* to me, in order to:


a) Prevent the IP addresses of my end users from being sent to this external
service (for privacy reasons)

b) Locally cache the files to reduce the load on the external service.

The IP address(es) of the external service are not under my control and 
can and will unpredictably change. The service also exposes multiple A 
records pointing to different backend servers. I must spread my requests
across several of those backend servers to be "nice to them" and not
tie up resources on a single one of their backends.


Sure, DNS-based load-balancing comes with great limitations, due to
browser behavior, caching resolvers and TTLs, but apparently it works
for the service. And as the service is external to me, that's what I 
need to work with.


Currently it's working well:

Requests from the user's browser for the proxied resources come in via 
my HAProxy, are then routed to my nginx (which handles the caching), 
which then uses a dedicated HAProxy 'listen' section for the upstream 
requests (as nginx is too dumb to perform DNS resolving properly). This 
'listen' section then uses 'server-template' to spread the requests 
across the external service's servers.


Browser -> HAProxy -> nginx -> HAProxy -> External Service (uncached)
Browser -> HAProxy -> nginx (cached)

nginx:


upstream some_service {
server unix:/var/lib/haproxy/some-service-internal.sock;
}

proxy_cache_path /var/lib/some_service/cache/ levels=2 
keys_zone=some_service:50m max_size=7G inactive=30d;

server {
listen unix:/var/lib/haproxy/some-service.sock;
real_ip_header X-Forwarded-For;

server_name some-service.example.net;

error_log /var/log/nginx/some_service.log;

proxy_cache some_service;
proxy_cache_key "https://some_service$request_uri";
proxy_cache_background_update on;
proxy_cache_valid 200 30d;
proxy_cache_use_stale error timeout invalid_header updating http_500 
http_502 http_503 http_504 http_429;
proxy_http_version 1.1;
proxy_temp_path /var/lib/some_service/tmp/;
add_header X-Proxy-Cache $upstream_cache_status;

location / {
proxy_pass http://some_service;
}
}


HAProxy:



listen some-service-internal
mode http

bind unix@some-service-internal.sock mode 666

http-request set-header Host example.com

server-template some-service 1-4 example.com:443 resolvers my-resolver 
check inter 60s ssl sni req.hdr(host) verify required ca-file 
ca-certificates.crt resolve-prefer ipv4


To replicate this set-up without support for server-template + grabbing 
A records from the DNS response I would need to:


1) Use some other software (Varnish or Squid might or might not be capable
of this; I have not used either).
2) Write some custom sidecar script that looks up the IP addresses of
the external service and then updates the HAProxy or nginx config (a rough
sketch of what that could look like follows below).
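
For what it's worth, option 2 does not have to be big. Below is a minimal
Python sketch of such a sidecar (not an existing tool; the socket path,
backend name, server names and hostname are placeholders): it re-resolves
the external name and pushes the addresses through HAProxy's runtime API
with the existing "set server <backend>/<server> addr" command instead of
rewriting config files.

    import socket

    STATS_SOCKET = "/var/run/haproxy.sock"    # placeholder: admin-level stats socket
    BACKEND = "some-service-internal"         # placeholder: the 'listen' section name
    SERVERS = ["some-service1", "some-service2", "some-service3", "some-service4"]

    def runtime_api(cmd: str) -> str:
        """Send one command to the HAProxy runtime API and return the raw reply."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(STATS_SOCKET)
            s.sendall(cmd.encode() + b"\n")
            return s.recv(65536).decode()     # good enough for short replies

    def refresh(hostname: str) -> None:
        # Resolve the external name (A records only here, like resolve-prefer ipv4).
        infos = socket.getaddrinfo(hostname, 443,
                                   family=socket.AF_INET, type=socket.SOCK_STREAM)
        addrs = sorted({info[4][0] for info in infos})
        # One address per templated slot; reuse addresses when there are fewer
        # records than slots, much like a one-shot libc lookup at boot would.
        for i, name in enumerate(SERVERS):
            print(runtime_api(f"set server {BACKEND}/{name} addr {addrs[i % len(addrs)]}"))

    if __name__ == "__main__":
        refresh("example.com")

Run it from cron or a systemd timer; the point is only that the runtime API
makes this possible without config rewrites or reloads.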



> my concern is more about how we can encourage such existing users to
> start to think about revisiting their approach with new tools and
> practices. And this will also require that we have working alternatives
> to suggest. While I'm pretty confident that the dataplane-api, ingress
> controller and such things already offer a valid response, I don't know
> for sure if they can be considered as drop-in replacement nor if they
> support everything, and this will have to be studied as well before
> starting to scare users!



Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-31 Thread Willy Tarreau
Hi Alex,

On Sat, Mar 26, 2022 at 08:30:56PM +0100, Aleksandar Lazic wrote:
> I fully agree with "using DNS for service discovery is a disaster." and the 
> DNS
> was the easiest way in the past for service discovery.
> 
> A possible solution could be a registration API in HAProxy which uses the
> dynamic server feature so that servers can add themselves to a HAProxy
> backend/listener.

That's one of the conclusions we've started to come to. But additionally
we're seeing some requirements about being able to restart to apply certain
changes and at this point it appears clear that this needs to be managed by
an external process, and given that the existing dataplane API already deals
with all that, it makes more sense to distribute the effort that way:
  - make haproxy's API more friendly to the dataplane API
  - make the dataplane API able to communicate with fast-moving
registries.

> There should be a shared secret to protect the HAProxy API against attacks,
> and it should only be usable via TLS.

In fact I would like us to have a new applet ("service" as exposed in the
config) for this, that could be either called via "use-service blah", or
be used by default on the existing CLI when HTTP is detected there. The
current CLI's language has zero intersection with HTTP so it should be
easy to let it adapt by itself. That would be cool because it already
supports permissions etc and it makes sense that the same set of controls
and permissions is used to perform the same actions using two different
languages.

> I would suggest JSON as it is more or less the standard for API interaction.
> Another benefit of JSON is that no separate CLI syntax needs to be maintained
> just for that feature.

We came to that conclusion as well. The current CLI format is a big problem
for external components to deal with. For example, Rémi recently added some
commands to show OCSP responses and by accident we noticed line breaks there
that come from the openssl output. He had to reprocess openssl's output to
eliminate them, because on the current CLI they act as delimiters. The CLI
was designed for humans and is best used with "socat readline /path/socket".
A program ought not to have to read messages made for humans nor deal with
such a syntax.
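
To make that concrete, here is a small Python sketch (the socket path is a
placeholder, and the parsing is deliberately naive) of what a program has to
do today to consume the existing "show servers state" command: open the
socket, send one line, then hand-parse a whitespace-separated table whose
first data line is a dump format version number.

    import socket

    STATS_SOCKET = "/var/run/haproxy.sock"   # placeholder: stats socket path

    def cli(cmd: str) -> str:
        """Run one command on the (human-oriented) CLI and return the raw text."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(STATS_SOCKET)
            s.sendall(cmd.encode() + b"\n")
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:            # HAProxy closes after answering
                    break
                chunks.append(data)
        return b"".join(chunks).decode()

    # The caller has to know that '#' introduces the header line, that records
    # are newline-delimited and space-separated, and that the first plain line
    # carries the dump format version -- exactly the kind of ad-hoc knowledge
    # a structured (e.g. JSON) API would make unnecessary.
    raw = cli("show servers state")
    rows = [l.split(" ") for l in raw.splitlines() if l and not l.startswith("#")]
    version, records = rows[0][0], rows[1:]
    print(version, records)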

> I'm not sure if it's known that Envoy uses the "xDS REST and gRPC protocol" for
> their endpoint config. I'm not sure if "xDS REST" brings any benefit to
> HAProxy, but maybe we can get some ideas about how the problem is solved there.
> 
> https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol
>
> The terminology helps to understand some of the xds parts.
> https://www.envoyproxy.io/docs/envoy/latest/intro/life_of_a_request#terminology

It's different but covers more aspects (frontends, rules, etc). It's yet
another reason for placing that in an external component that would speak
natively with haproxy. This way more protocols could be adopted with less
efforts (and sometimes libs are also provided in high-level languages).

> If we zoom out a little bit from "add backend servers to HAProxy" and think
> more along the lines of "add a bunch of backends to a HAProxy cluster", we can
> think about using something like the Raft protocol ( https://raft.github.io/ ).

Don't know, maybe. I never heard about it before.

> Because most companies out there run not just one HAProxy instance but at
> least two, a solution is required that can work with more than one HAProxy
> instance.

For sure! That's by the way another big problem posed by DNS: you cannot
even keep your LBs consistent because they all receive different responses!

> Adding the Raft protocol would of course increase the complexity of HAProxy,
> but it would handle the join/remove of backends, and the dynamic server
> feature could then be used in HAProxy to add the new backend to the backend
> section.

Well, be careful, it's important to think in terms of layers. HAProxy is a
dataplane which deals with its traffic and its own servers. It doesn't deal
with other haproxy nodes. However it totally makes sense to stack layers on
top of that to control multiple haproxy nodes.

> The benefit from my point of view is to have an underlying algorithm which
> offers consensus-based handling of the join/remove of servers/endpoints.

One of the problems with performing such an approach at too low a layer is
that at some point you have to speak to one node and hope that it spreads
the info to the other ones. That's bad in terms of high availability because
it means that you trust one node a bit too much at one critical instant.
Also there are plenty of multi-site architectures in which some central
management components have access to all LBs but the LBs cannot see each
other, or only within the same LAN+site.

> Maybe the peers protocol could also be used for that part as it is already 
> part
> of HAProxy.

That's exactly the type of thing we wanted to do long ago and that I'm now
convinced we must *not* 

Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-31 Thread Willy Tarreau
Hi Tim,

On Wed, Mar 30, 2022 at 09:14:42PM +0200, Tim Düsterhus wrote:
> Willy,
> 
> On 3/26/22 10:22, Willy Tarreau wrote:
> > be the last LTS version with this. I'm interested in opinions and feedback
> > about this. And the next question will obviously be "how could we detect
> 
> Can you clarify what *exactly* is expected to be removed and what will
> remain? Is it just SRV DNS records or more?

What I believe is causing significant trouble at the moment in the DNS
area is the assignment of randomly delivered IP addresses to a fleet of
servers. Whether it's from SRV records or just from a wide range of addresses
returned for a single request, it's basically the same. For example, if
you configure 10 servers with the same name "foo.example.com", the resolver
has to check for each response whether there are addresses already
assigned to active servers, and just refresh them; then find whether there
are addresses that are not yet assigned and see if some addressless servers
are available, in which case these addresses are assigned to them;
then spot any address that has disappeared for a while, and decide
whether or not the servers that were assigned such addresses finally
ought to be stopped. In addition to being totally unreliable, it's
extremely CPU intensive. We've seen plenty of situations where the
watchdog was triggered due to this, and in my opinion the concept is
fundamentally flawed since responses are often partial. As soon as you
suspect that not all active addresses were delivered, you know that you
have to put lots of hacks in place.
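
In rough pseudo-Python (a sketch of the concept only, not the actual HAProxy
code; the Srv type and the hold period are invented for the example), that
per-response reconciliation looks like this:

    import time
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Srv:
        name: str
        addr: Optional[str] = None   # currently assigned address, if any
        last_seen: float = 0.0       # last time that address appeared in a response

    def reconcile(servers: list, answered: set, hold: float) -> None:
        """One reconciliation pass for a (possibly partial) DNS response."""
        now = time.monotonic()

        # 1. Refresh servers whose address is still being announced.
        for srv in servers:
            if srv.addr in answered:
                srv.last_seen = now

        # 2. Hand addresses that are new to us to servers that have none.
        assigned = {srv.addr for srv in servers if srv.addr}
        fresh = [a for a in answered if a not in assigned]
        empty = [srv for srv in servers if srv.addr is None]
        for srv, addr in zip(empty, fresh):
            srv.addr, srv.last_seen = addr, now

        # 3. Only after a grace period, stop servers whose address vanished --
        #    the response may simply have been partial and the address may
        #    come back in the next one.
        for srv in servers:
            if srv.addr and srv.addr not in answered and now - srv.last_seen > hold:
                srv.addr = None      # effectively take the server down

Every response walks the whole fleet several times, and the hold timer only
exists to paper over the fact that a single response cannot be trusted to be
complete.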

What I would like to see is a resolver that does just that: resolving.

If multiple addresses are returned for a name, as long as one of them
is already assigned, that's OK; otherwise the server's address changes.
If you have multiple servers with the same name, it should be written
clearly that it's not the resolver's role to try to distribute multiple
responses fairly. Instead I'd rather see addresses assigned like they
would be at boot when using the libc's resolver, i.e. any address to any
server, possibly the same address. This would definitely clarify that
the resolver is there to respond to the question "give me the first
[ipv4/ipv6/any] address corresponding to this name" and not be involved
in backend-wide hacks. This would also make sure that do-resolve() does
simple and reliable things. Also I would like to see the resolvers really
resolve CNAMEs, because that's what application-level code (e.g. Lua or
the HTTP client) really needs. If I understand right, at the moment CNAMEs
are only resolved if they appear in the same response, thus I strongly
doubt they can work cross-domain.
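
By contrast, the "resolve like the libc would at boot" behaviour described
here fits in a few lines of Python (the name and port are placeholders;
getaddrinfo also follows CNAMEs transparently, which is what application-level
code expects):

    import socket

    def first_address(name: str, port: int = 443, family=socket.AF_INET) -> str:
        """Return the first address the system resolver gives for a name, libc-style."""
        infos = socket.getaddrinfo(name, port, family=family, type=socket.SOCK_STREAM)
        return infos[0][4][0]       # sockaddr of the first entry; [0] is the IP

    # Ten servers configured with the same name may all end up with the same
    # address -- and that is fine: spreading them out is not the resolver's job.
    servers = {f"srv{i}": first_address("foo.example.com") for i in range(1, 11)}
    print(servers)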

It's important to keep in mind that the reason such mechanisms were put
in place originally was to adopt new emerging trends around
Consul and similar registries. Nowadays all of these have evolved to
support far more reliable and richer APIs precisely because of such previous
limitations, and the DNS as we support it really, really should not be used.

I hope this clarifies the situation and doesn't start to make anyone
worry :-)  Anyway there's no emergency, the code is still there, and
my concern is more about how we can encourage such existing users to
start to think about revisiting their approach with new tools and
practices. And this will also require that we have working alternatives
to suggest. While I'm pretty confident that the dataplane-api, ingress
controller and such things already offer a valid response, I don't know
for sure if they can be considered as drop-in replacement nor if they
support everything, and this will have to be studied as well before
starting to scare users!

Cheers,
Willy



Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-30 Thread Tim Düsterhus

Willy,

On 3/26/22 10:22, Willy Tarreau wrote:

> be the last LTS version with this. I'm interested in opinions and feedback
> about this. And the next question will obviously be "how could we detect


Can you clarify what *exactly* is expected to be removed and what will 
remain? Is it just SRV DNS records or more?


Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-28 Thread Amaury Denoyelle
On Sun, Mar 27, 2022 at 12:09:22AM +0500, Илья Шипицин wrote:
> On Sat, 26 Mar 2022 at 22:23, Ionel GARDAIS wrote:
> > Thanks Willy for these updates.
> >
> > While skimming the result on the interop website, I was surprised that
> > haproxy is always more than 50% slower than its competitor.
> > Is it because you've enable lots of traces as part of your debugging
> > process for the runs ?
> >
> looks like this dockerfile is used
> https://github.com/haproxytech/haproxy-qns/blob/master/Dockerfile
> 

Hi,

Thanks for your interest in the haproxy QUIC implementation :) Indeed the
perf results reported by the interop test suite for haproxy look
miserable. To be honest, I have not yet looked at how these tests
are implemented. We left this part aside while we dealt with plenty of
functional issues. As such, we enabled a lot of traces and debug options
to be able to quickly understand bugs, so this may have an impact on the
results of the perf tests.

Now, we are focused on trying to deploy QUIC on haproxy.org to observe
the behavior with real-life browsers. When this is done, the next
objective will be to try to improve the results of these perf tests.

-- 
Amaury Denoyelle



Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-26 Thread Willy Tarreau
On Sat, Mar 26, 2022 at 08:49:14PM +0100, Lukas Tribus wrote:
> Hello Willy,
> 
> On Sat, 26 Mar 2022 at 10:22, Willy Tarreau  wrote:
> > A change discussed around previous announce was made in the H2 mux: the
> > "timeout http-keep-alive" and "timeout http-request" are now respected
> > and work as documented, so that it will finally be possible to force such
> > connections to be closed when no request comes even if they're seeing
> > control traffic such as PING frames. This can typically happen in some
> > server-to-server communications whereby the client application makes use
> > of PING frames to make sure the connection is still alive. I intend to
> > backport this after some time, probably to 2.5 and later 2.4, as I've
> > got reports about stable versions currently posing this problem.
> 
> While I agree with the change, actually documented is the previous behavior.
> 
> So this is a change in behavior, and documentation will need updating
> as well to actually reflect this new behavior (patch incoming).
> 
> I have to say I don't like the idea of backporting such changes. We
> have documented and trained users that H2 doesn't respect "timeout
> http-keep-alive" and that it uses "timeout client" instead. We even
> argued that this is a good thing because we want H2 connections to
> stay up longer. I suggest not changing documented behavior in bugfix
> releases of stable and stable/LTS releases.

These are interesting points. Actually these previous choices came from
technical limitations back then and from some wrong assumptions from me
that we would like H2 connections to last longer in order to amortize
the TLS setup cost. But nowadays TLS is equally used both for H1 and H2,
which makes my assumption wrong.

Aside from the two lines that your patch reverted from these options, I'm
seeing that all the justification for the timeouts essentially speaks
about HTTP without a specific version, since this is more about user
behavior in front of a browser than about technical connection behavior.

And given that the introduction of H2 post-dates the timeouts and
nowadays one can receive H2 traffic without having revisited their docs,
I think it's fair to assume that most users who care about these
timeouts have likely set them before enabling H2 and do have
expectations about their effectiveness that were not met.

Thus I tend to think that it can be argued both ways. I'm not seeing
an emergency in backporting this, so I'm fine with waiting for more
reports of surprises before reconsidering this option. But that's
definitely something I don't want to rule out for the reasons above.

Thanks!
Willy



Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-26 Thread Lukas Tribus
Hello Willy,

On Sat, 26 Mar 2022 at 10:22, Willy Tarreau  wrote:
> A change discussed around previous announce was made in the H2 mux: the
> "timeout http-keep-alive" and "timeout http-request" are now respected
> and work as documented, so that it will finally be possible to force such
> connections to be closed when no request comes even if they're seeing
> control traffic such as PING frames. This can typically happen in some
> server-to-server communications whereby the client application makes use
> of PING frames to make sure the connection is still alive. I intend to
> backport this after some time, probably to 2.5 and later 2.4, as I've
> got reports about stable versions currently posing this problem.

While I agree with the change, what is actually documented is the previous behavior.

So this is a change in behavior, and documentation will need updating
as well to actually reflect this new behavior (patch incoming).

I have to say I don't like the idea of backporting such changes. We
have documented and trained users that H2 doesn't respect "timeout
http-keep-alive" and that it uses "timeout client" instead. We even
argued that this is a good thing because we want H2 connections to
stay up longer. I suggest not changing documented behavior in bugfix
releases of stable and stable/LTS releases.


cheers,
lukas



Re: [ANNOUNCE] haproxy-2.6-dev4

2022-03-26 Thread Aleksandar Lazic
Hi Willy.

On Sat, 26 Mar 2022 10:22:02 +0100
Willy Tarreau  wrote:

> Hi,
> 
> HAProxy 2.6-dev4 was released on 2022/03/26. It added 80 new commits
> after version 2.6-dev3.
> 
> The activity started to calm down a bit, which is good because we're
> roughly 2 months before the release and it will become important to avoid
> introducing last-minute regressions.
> 
> This version mostly integrates fixes for various bugs in various places
> like stream-interfaces, QUIC, the HTTP client or the trace subsystem. The
> remaining patches are mostly QUIC improvements and code cleanups. In
> addition the MQTT protocol parser was extended to also support MQTTv3.1.
> 
> A change discussed around previous announce was made in the H2 mux: the
> "timeout http-keep-alive" and "timeout http-request" are now respected
> and work as documented, so that it will finally be possible to force such
> connections to be closed when no request comes even if they're seeing
> control traffic such as PING frames. This can typically happen in some
> server-to-server communications whereby the client application makes use
> of PING frames to make sure the connection is still alive. I intend to
> backport this after some time, probably to 2.5 and later 2.4, as I've
> got reports about stable versions currently posing this problem.
> 
> I'm expecting to see another batch of stream-interface code refactoring
> that Christopher is still working on. This is a very boring and tedious
> task that should significantly lower the long-term maintenance effort,
> so I'm willing to wait a little bit for such changes to be ready. What
> this means for users is a reduction of the bugs we've seen over the last
> 2-3 years alternating between truncated responses and never-dying
> connections and that result from the difficulty to propagate certain
> events across multiple layers.
> 
> Also William still has some updates to finish on the HTTP client
> (connection retries, SSL cert verification and host name resolution
> mainly). On the paper, each of them is relatively easy, but practically,
> since the HTTP client is the first one of its category, each attempt to
> progress is stopped by the discovery of a shortcoming or bug that were
> not visible before. Thus the progress takes more time than desired but
> as a side effect, the core code gets much more reliable by getting rid
> of these old issues.
> 
> One front that made impressive progress over the last few months is QUIC.
> While a few months ago we were counting the number of red boxes on the
> interop tests at https://interop.seemann.io/ to figure what to work on as
> a top priority, now we're rather counting the number of tests that report
> a full-green state, and haproxy is now on par with other servers in these
> tests. Thus the idea emerged, in order to continue to make progress on
> this front, to start to deploy QUIC on haproxy.org so that interoperability
> issues with browsers and real-world traffic can be spotted. A few attempts
> were made and already revealed issues so for now it's disabled again. Be
> prepared to possibly observe a few occasional hiccups when visiting the
> site (and if so, please do complain to us). The range of possible issues
> would likely be frozen transfers and truncated responses, but these should
> not happen.
> 
> From a technical point, the way it's done is by having a separate haproxy
> process listening to QUIC on UDP port 1443, and forwarding HTTP requests
> to the existing process. The main process constantly checks the QUIC one,
> and when it's seen as operational, it appends an Alt-Svc header that
> indicates the client that an HTTP/3 implementation is available on port
> 1443, and that this announce is valid for a short time (we'll leave it to
> one minute only so that issues can resolve quickly, but for now it's only
> 10s so that quick tests cause no harm):
> 
> http-response add-header alt-svc 'h3=":1443"; ma=60' if \
>{ var(txn.host) -m end haproxy.org } { nbsrv(quic) gt 0 }
> 
> As such, compatible browsers are free to try to connect there or not. Other
> tools (such as git clone) will not use it. For those impatient to test it,
> the QUIC process' status is reported at the bottom of the stats page here:
> http://stats.haproxy.org/. The "quic" socket in the frontend at the top
> reports the total traffic received from the QUIC process, so if you're
> seeing it increase while you reload the page it's likely that you're using
> QUIC to read it. In Firefox I'm having this little plugin loaded:
> 
>   https://addons.mozilla.org/en-US/firefox/addon/http2-indicator/
> 
> It displays a small flash on the URL bar with different colors depending
> on the protocol used to load the page (H1/SPDY/H2/H3). When that works it's
> green (H3), otherwise it's blue (H2).
> 
> At this point I'd still say "do not reproduce these experiments at home".
> Amaury and Fred are still watching the process' traces very closely to
> spot bugs and stop it as 

Re: [*EXT*] [ANNOUNCE] haproxy-2.6-dev4

2022-03-26 Thread Илья Шипицин
On Sat, 26 Mar 2022 at 22:23, Ionel GARDAIS wrote:

> Thanks Willy for these updates.
>
> While skimming the result on the interop website, I was surprised that
> haproxy is always more than 50% slower than its competitor.
> Is it because you've enable lots of traces as part of your debugging
> process for the runs ?
>

looks like this dockerfile is used
https://github.com/haproxytech/haproxy-qns/blob/master/Dockerfile


>
> Ionel
>
> - Original Message -
> From: "Willy Tarreau" 
> To: "haproxy" 
> Sent: Saturday, March 26, 2022 10:22:02
> Subject: [*EXT*] [ANNOUNCE] haproxy-2.6-dev4
>
> Hi,
>
> HAProxy 2.6-dev4 was released on 2022/03/26. It added 80 new commits
> after version 2.6-dev3.
>
> The activity started to calm down a bit, which is good because we're
> roughly 2 months before the release and it will become important to avoid
> introducing last-minute regressions.
>
> This version mostly integrates fixes for various bugs in various places
> like stream-interfaces, QUIC, the HTTP client or the trace subsystem. The
> remaining patches are mostly QUIC improvements and code cleanups. In
> addition the MQTT protocol parser was extended to also support MQTTv3.1.
>
> A change discussed around previous announce was made in the H2 mux: the
> "timeout http-keep-alive" and "timeout http-request" are now respected
> and work as documented, so that it will finally be possible to force such
> connections to be closed when no request comes even if they're seeing
> control traffic such as PING frames. This can typically happen in some
> server-to-server communications whereby the client application makes use
> of PING frames to make sure the connection is still alive. I intend to
> backport this after some time, probably to 2.5 and later 2.4, as I've
> got reports about stable versions currently posing this problem.
>
> I'm expecting to see another batch of stream-interface code refactoring
> that Christopher is still working on. This is a very boring and tedious
> task that should significantly lower the long-term maintenance effort,
> so I'm willing to wait a little bit for such changes to be ready. What
> this means for users is a reduction of the bugs we've seen over the last
> 2-3 years alternating between truncated responses and never-dying
> connections and that result from the difficulty to propagate certain
> events across multiple layers.
>
> Also William still has some updates to finish on the HTTP client
> (connection retries, SSL cert verification and host name resolution
> mainly). On the paper, each of them is relatively easy, but practically,
> since the HTTP client is the first one of its category, each attempt to
> progress is stopped by the discovery of a shortcoming or bug that were
> not visible before. Thus the progress takes more time than desired but
> as a side effect, the core code gets much more reliable by getting rid
> of these old issues.
>
> One front that made impressive progress over the last few months is QUIC.
> While a few months ago we were counting the number of red boxes on the
> interop tests at https://interop.seemann.io/ to figure what to work on as
> a top priority, now we're rather counting the number of tests that report
> a full-green state, and haproxy is now on par with other servers in these
> tests. Thus the idea emerged, in order to continue to make progress on
> this front, to start to deploy QUIC on haproxy.org so that
> interoperability
> issues with browsers and real-world traffic can be spotted. A few attempts
> were made and already revealed issues so for now it's disabled again. Be
> prepared to possibly observe a few occasional hiccups when visiting the
> site (and if so, please do complain to us). The range of possible issues
> would likely be frozen transfers and truncated responses, but these should
> not happen.
>
> From a technical point, the way it's done is by having a separate haproxy
> process listening to QUIC on UDP port 1443, and forwarding HTTP requests
> to the existing process. The main process constantly checks the QUIC one,
> and when it's seen as operational, it appends an Alt-Svc header that
> indicates the client that an HTTP/3 implementation is available on port
> 1443, and that this announce is valid for a short time (we'll leave it to
> one minute only so that issues can resolve quickly, but for now it's only
> 10s so that quick tests cause no harm):
>
> http-response add-header alt-svc 'h3=":1443"; ma=60' if \
>{ var(txn.host) -m end haproxy.org } { nbsrv(quic) gt 0 }
>
> As such, compatible browsers are free to try to connect there or not. Other
> tools (such as git clone) will not use it. For those impatient to test 

Re: [*EXT*] [ANNOUNCE] haproxy-2.6-dev4

2022-03-26 Thread Ionel GARDAIS
Thanks Willy for these updates.

While skimming the results on the interop website, I was surprised that haproxy
is always more than 50% slower than its competitors.
Is it because you've enabled lots of traces as part of your debugging process
for the runs?

Ionel

- Original Message -
From: "Willy Tarreau" 
To: "haproxy" 
Sent: Saturday, March 26, 2022 10:22:02
Subject: [*EXT*] [ANNOUNCE] haproxy-2.6-dev4

Hi,

HAProxy 2.6-dev4 was released on 2022/03/26. It added 80 new commits
after version 2.6-dev3.

The activity started to calm down a bit, which is good because we're
roughly 2 months before the release and it will become important to avoid
introducing last-minute regressions.

This version mostly integrates fixes for various bugs in various places
like stream-interfaces, QUIC, the HTTP client or the trace subsystem. The
remaining patches are mostly QUIC improvements and code cleanups. In
addition the MQTT protocol parser was extended to also support MQTTv3.1.

A change discussed around previous announce was made in the H2 mux: the
"timeout http-keep-alive" and "timeout http-request" are now respected
and work as documented, so that it will finally be possible to force such
connections to be closed when no request comes even if they're seeing
control traffic such as PING frames. This can typically happen in some
server-to-server communications whereby the client application makes use
of PING frames to make sure the connection is still alive. I intend to
backport this after some time, probably to 2.5 and later 2.4, as I've
got reports about stable versions currently posing this problem.

I'm expecting to see another batch of stream-interface code refactoring
that Christopher is still working on. This is a very boring and tedious
task that should significantly lower the long-term maintenance effort,
so I'm willing to wait a little bit for such changes to be ready. What
this means for users is a reduction of the bugs we've seen over the last
2-3 years alternating between truncated responses and never-dying
connections and that result from the difficulty to propagate certain
events across multiple layers.

Also William still has some updates to finish on the HTTP client
(connection retries, SSL cert verification and host name resolution
mainly). On the paper, each of them is relatively easy, but practically,
since the HTTP client is the first one of its category, each attempt to
progress is stopped by the discovery of a shortcoming or bug that were
not visible before. Thus the progress takes more time than desired but
as a side effect, the core code gets much more reliable by getting rid
of these old issues.

One front that made impressive progress over the last few months is QUIC.
While a few months ago we were counting the number of red boxes on the
interop tests at https://interop.seemann.io/ to figure what to work on as
a top priority, now we're rather counting the number of tests that report
a full-green state, and haproxy is now on par with other servers in these
tests. Thus the idea emerged, in order to continue to make progress on
this front, to start to deploy QUIC on haproxy.org so that interoperability
issues with browsers and real-world traffic can be spotted. A few attempts
were made and already revealed issues so for now it's disabled again. Be
prepared to possibly observe a few occasional hiccups when visiting the
site (and if so, please do complain to us). The range of possible issues
would likely be frozen transfers and truncated responses, but these should
not happen.

From a technical point, the way it's done is by having a separate haproxy
process listening to QUIC on UDP port 1443, and forwarding HTTP requests
to the existing process. The main process constantly checks the QUIC one,
and when it's seen as operational, it appends an Alt-Svc header that
indicates the client that an HTTP/3 implementation is available on port
1443, and that this announce is valid for a short time (we'll leave it to
one minute only so that issues can resolve quickly, but for now it's only
10s so that quick tests cause no harm):

http-response add-header alt-svc 'h3=":1443"; ma=60' if \
   { var(txn.host) -m end haproxy.org } { nbsrv(quic) gt 0 }

As such, compatible browsers are free to try to connect there or not. Other
tools (such as git clone) will not use it. For those impatient to test it,
the QUIC process' status is reported at the bottom of the stats page here:
http://stats.haproxy.org/. The "quic" socket in the frontend at the top
reports the total traffic received from the QUIC process, so if you're
seeing it increase while you reload the page it's likely that you're using
QUIC to read it. In Firefox I'm having this little plugin loaded:

  https://addons.mozilla.org/en-US/firefox/addon/http2-indicator/

It displays a small flash on the URL bar with different colors depending
on the protocol used to load the page (H1/SPDY/H

[ANNOUNCE] haproxy-2.6-dev4

2022-03-26 Thread Willy Tarreau
Hi,

HAProxy 2.6-dev4 was released on 2022/03/26. It added 80 new commits
after version 2.6-dev3.

The activity started to calm down a bit, which is good because we're
roughly 2 months before the release and it will become important to avoid
introducing last-minute regressions.

This version mostly integrates fixes for various bugs in various places
like stream-interfaces, QUIC, the HTTP client or the trace subsystem. The
remaining patches are mostly QUIC improvements and code cleanups. In
addition the MQTT protocol parser was extended to also support MQTTv3.1.

A change discussed around the previous announcement was made in the H2 mux: the
"timeout http-keep-alive" and "timeout http-request" are now respected
and work as documented, so that it will finally be possible to force such
connections to be closed when no request comes even if they're seeing
control traffic such as PING frames. This can typically happen in some
server-to-server communications whereby the client application makes use
of PING frames to make sure the connection is still alive. I intend to
backport this after some time, probably to 2.5 and later 2.4, as I've
got reports about stable versions currently posing this problem.

I'm expecting to see another batch of stream-interface code refactoring
that Christopher is still working on. This is a very boring and tedious
task that should significantly lower the long-term maintenance effort,
so I'm willing to wait a little bit for such changes to be ready. What
this means for users is a reduction of the bugs we've seen over the last
2-3 years alternating between truncated responses and never-dying
connections and that result from the difficulty to propagate certain
events across multiple layers.

Also William still has some updates to finish on the HTTP client
(connection retries, SSL cert verification and host name resolution
mainly). On paper, each of them is relatively easy, but in practice,
since the HTTP client is the first one of its category, each attempt to
progress is stopped by the discovery of a shortcoming or bug that was
not visible before. Thus the progress takes more time than desired, but
as a side effect, the core code gets much more reliable by getting rid
of these old issues.

One front that made impressive progress over the last few months is QUIC.
While a few months ago we were counting the number of red boxes on the
interop tests at https://interop.seemann.io/ to figure what to work on as
a top priority, now we're rather counting the number of tests that report
a full-green state, and haproxy is now on par with other servers in these
tests. Thus the idea emerged, in order to continue to make progress on
this front, to start to deploy QUIC on haproxy.org so that interoperability
issues with browsers and real-world traffic can be spotted. A few attempts
were made and already revealed issues so for now it's disabled again. Be
prepared to possibly observe a few occasional hiccups when visiting the
site (and if so, please do complain to us). The range of possible issues
would likely be frozen transfers and truncated responses, but these should
not happen.

From a technical standpoint, the way it's done is by having a separate haproxy
process listening to QUIC on UDP port 1443, and forwarding HTTP requests
to the existing process. The main process constantly checks the QUIC one,
and when it's seen as operational, it appends an Alt-Svc header that
indicates to the client that an HTTP/3 implementation is available on port
1443, and that this announcement is valid for a short time (we'll leave it at
one minute only so that issues can resolve quickly, but for now it's only
10s so that quick tests cause no harm):

http-response add-header alt-svc 'h3=":1443"; ma=60' if \
   { var(txn.host) -m end haproxy.org } { nbsrv(quic) gt 0 }

As such, compatible browsers are free to try to connect there or not. Other
tools (such as git clone) will not use it. For those impatient to test it,
the QUIC process' status is reported at the bottom of the stats page here:
http://stats.haproxy.org/. The "quic" socket in the frontend at the top
reports the total traffic received from the QUIC process, so if you're
seeing it increase while you reload the page it's likely that you're using
QUIC to read it. In Firefox I have this little plugin loaded:

  https://addons.mozilla.org/en-US/firefox/addon/http2-indicator/

It displays a small flash on the URL bar with different colors depending
on the protocol used to load the page (H1/SPDY/H2/H3). When that works it's
green (H3), otherwise it's blue (H2).

At this point I'd still say "do not reproduce these experiments at home".
Amaury and Fred are still watching the process' traces very closely to
spot bugs and stop it as soon as a problem is detected. But it's still
too early for it to be operated by non-developers. The hope is that by 2.6
we'll reach the point where enthusiasts can deploy a few instances on
not-too-sensitive sites with sufficient confidence