Re: [ANNOUNCE] haproxy-2.6-dev11

2022-05-23 Thread Илья Шипицин
Tue, 24 May 2022 at 10:47, Willy Tarreau:

> Hi Ilya,
>
> On Tue, May 24, 2022 at 09:53:01AM +0500, Илья Шипицин wrote:
> > Hello,
> >
> > can we please address https://github.com/haproxy/haproxy/issues/1585
> before
> > final 2.6 ?
>
> I thought it was, since I replied it was a false positive, but OK, I
> pushed a patch to silence it.
>

thanks!


>
> Thanks,
> Willy
>


Re: [ANNOUNCE] haproxy-2.6-dev11

2022-05-23 Thread Willy Tarreau
Hi Ilya,

On Tue, May 24, 2022 at 09:53:01AM +0500, Илья Шипицин wrote:
> Hello,
> 
> can we please address https://github.com/haproxy/haproxy/issues/1585 before
> final 2.6 ?

I thought it was, since I replied it was a false positive, but OK, I
pushed a patch to silence it.

Thanks,
Willy



Re: [ANNOUNCE] haproxy-2.6-dev11

2022-05-23 Thread Илья Шипицин
Hello,

can we please address https://github.com/haproxy/haproxy/issues/1585 before
final 2.6 ?

Ilya

Sat, 21 May 2022 at 13:11, Willy Tarreau:

> Hi,
>
> HAProxy 2.6-dev11 was released on 2022/05/20. It added 106 new commits
> after version 2.6-dev10.
>
> Yes, there were still too many changes for a final version; it's often
> like this when getting close to a release. And I couldn't finish the
> renaming of the confusing stuff in the conn_stream layer, for which I'll
> rely on Christopher's help next week. I now understand the trouble some
> developers face when creating an applet, and why the only practical
> solution is to copy-paste existing stuff: even some of the existing
> functions' comments are ambiguous if you stumble on them with the wrong
> idea of what they do. I absolutely want to address this for the release,
> or it will further complicate development in new versions, or the
> maintenance of 2.6 if we rename later.
>
> Most of the changes are of minor importance or are bug fixes, but some
> are particularly interesting:
>
>  - on the SSL front, a few global settings were added to configure the
>    ssl-providers that come with OpenSSL 3 to replace the engines. At this
>    point it's not totally clear to me how this will evolve, but since
>    these are just global settings that are very likely to become
>    necessary mid-term, it's better if they're readily available.
>
>  - QUIC now provides a number of counters of retries, errors etc, and
>    finally supports the Retry mechanism, which is the QUIC equivalent of
>    TCP SYN cookies. These are used to validate a client's connection
>    request and make sure it's not a spoofed packet. They can be forced,
>    or will be automatically enabled when a configurable number of
>    incoming connections are not yet confirmed. This is done via the
>    global "tune.quic.retry-threshold" parameter. BTW I'm just seeing
>    that it's not documented yet; Fred, please do not forget to update it!
>
>  - outgoing applets now support delayed initialization. I know it's a
>    bit late for merging this, but it addresses a long-existing problem
>    with the peers, one that could possibly be further emphasized with
>    the http client. The problem was that outgoing applets were only
>    created on the thread that required them, and for peers this happened
>    during config parsing, thus all outgoing applets were on thread 1,
>    possibly eating a lot of CPU on this thread. That's the issue that
>    Maciej Zdeb reported a month ago. Maciej tried to address this, but
>    there was a chicken-and-egg issue that made it impossible to create
>    the applets on another thread. Now that they can be initialized
>    later, it's possible to schedule them on any thread, and Maciej's
>    patches could be integrated as well, so the peers will no longer
>    aggregate mostly on one thread.
>
>  - a QUIC flow-control limitation that was preventing large POST
>    requests from working was addressed, so with this last limitation
>    removed, the stack is expected to be fully operational. In addition,
>    the HTTP/3 decoder now has better latency as it doesn't need to wait
>    for a full data frame anymore before starting to decode and forward
>    it.
>
>  - a new global setting "cluster-secret" was added. For now it's only
>    used by QUIC for cluster-wide crypto such as retries, so that a
>    connection retry can be validated by any node. It will likely be used
>    for more QUIC stuff, and it makes sense to use it for anything else
>    that is cluster-wide in the future, so the option was named without
>    "quic" in its name.
>
>  - a new option "http-restrict-req-hdr-names" was added at the proxy
>    level. It can be used to inspect HTTP header names and decide what to
>    do with those having any character other than alphanumericals or
>    dash ("-"): either delete the header or reject the request. The
>    purpose is to help protect application servers that map dash to
>    underscore due to CGI inheritance, or worse, that crash when passed
>    such characters. The option is automatically set to the delete mode
>    in backends having FastCGI configured. This will eventually be
>    backported, because we got reports of such broken application servers
>    deployed in the field, where site owners count on haproxy to work
>    around this problem.
>
>  - some configuration issues related to QUIC remained, by which it was
>    possible to combine incompatible values of "proto" and sockets, such
>    as a QUIC bind with a "proto h2" or no "proto", or "proto quic" on a
>    TCP line, or a QUIC address used in peers, or "quic" without "ssl",
>    etc. Such combinations were problematic at runtime because the QUIC
>    mux and transport cannot be split apart, so each being used with the
>    wrong other part caused immediate crashes. This is what made "proto
>    quic" mandatory for QUIC bind lines. This was finally sorted out so
>    that incompatible 
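[Editor's note: pulling the new settings mentioned in the announcement into one place, here is a hedged configuration sketch. The directive names ("cluster-secret", "tune.quic.retry-threshold", "option http-restrict-req-hdr-names") come from the announcement itself; the values, addresses, certificate path, and the exact bind-line syntax are illustrative assumptions, so check the 2.6 documentation before use.]

```
global
    # shared secret used cluster-wide, e.g. so any node can validate a
    # QUIC Retry token (the value here is a placeholder)
    cluster-secret my-shared-secret
    # enable QUIC Retry automatically once this many incoming
    # connections are not yet confirmed (illustrative value)
    tune.quic.retry-threshold 100

frontend quic_fe
    # hypothetical QUIC listener; "quic4@" address form assumed
    bind quic4@:443 ssl crt /etc/haproxy/site.pem alpn h3

backend fcgi_be
    # reject requests whose header names contain characters other than
    # alphanumericals or '-'; the announcement also describes a delete
    # mode, which is the automatic default with FastCGI configured
    option http-restrict-req-hdr-names reject
```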

[PR] chore: Included githubactions in the dependabot config

2022-05-23 Thread PR Bot
Dear list!

Author: naveen <172697+naveensriniva...@users.noreply.github.com>
Number of patches: 1

This is an automated relay of the Github pull request:
   chore: Included githubactions in the dependabot config

Patch title(s): 
   chore: Included githubactions in the dependabot config

Link:
   https://github.com/haproxy/haproxy/pull/1713

Edit locally:
   wget https://github.com/haproxy/haproxy/pull/1713.patch && vi 1713.patch

Apply locally:
   curl https://github.com/haproxy/haproxy/pull/1713.patch | git am -

Description:
   This should help with keeping the GitHub actions updated on new
   releases. This will also help with keeping it secure.
   Dependabot helps in keeping the supply chain secure
   https://docs.github.com/en/code-security/dependabot
   
   It keeps GitHub Actions up to date:
   https://docs.github.com/en/code-security/dependabot/working-with-dependabot/keeping-your-actions-up-to-date-with-dependabot
   https://github.com/ossf/scorecard/blob/main/docs/checks.md#dependency-update-tool
   Signed-off-by: naveen
   <172697+naveensriniva...@users.noreply.github.com>

Instructions:
   This github pull request will be closed automatically; patch should be
   reviewed on the haproxy mailing list (haproxy@formilux.org). Everyone is
   invited to comment, even the patch's author. Please keep the author and
   list CCed in replies. Please note that in absence of any response this
   pull request will be lost.
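[Editor's note: the patch body is not relayed here, but a Dependabot config covering GitHub Actions typically takes the shape below. This is the generic form from GitHub's documentation, not the literal content of PR #1713; the weekly interval is an illustrative choice.]

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"          # workflows live under .github/workflows
    schedule:
      interval: "weekly"
```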



[PATCH] REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+ (2)

2022-05-23 Thread Tim Duesterhus
Introduced in:

18c13d3bd MEDIUM: http-ana: Add a proxy option to restrict chars in request header names

see also:

fbbbc33df REGTESTS: Do not use REQUIRE_VERSION for HAProxy 2.5+
---
 reg-tests/http-rules/restrict_req_hdr_names.vtc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/reg-tests/http-rules/restrict_req_hdr_names.vtc b/reg-tests/http-rules/restrict_req_hdr_names.vtc
index 28a10d3db..a3a95939f 100644
--- a/reg-tests/http-rules/restrict_req_hdr_names.vtc
+++ b/reg-tests/http-rules/restrict_req_hdr_names.vtc
@@ -1,5 +1,5 @@
 varnishtest "http-restrict-req-hdr-names option tests"
-#REQUIRE_VERSION=2.6
+feature cmd "$HAPROXY_PROGRAM -cc 'version_atleast(2.6-dev0)'"
 
 # This config tests "http-restrict-req-hdr-names" option
 
-- 
2.36.1




Re: [PATCH 1/2] BUG/MEDIUM: tools: Fix `inet_ntop` usage in sa2str

2022-05-23 Thread Thayne McCombs


Thanks for catching that.



Re: Increase SSL Key Generation after upgrade from 2.4.15 to 2.4.17

2022-05-23 Thread Tomasz Ludwiczak
Thank you for your reply

I think it is related to these changes and the configuration we have for
timeouts.

http://git.haproxy.org/?p=haproxy-2.4.git;a=commit;h=f5b2c3f1e65f57782afe30981031f122bd8ee24c

http://git.haproxy.org/?p=haproxy-2.4.git;a=commit;h=211fc0b5b060bc7b1f83e6514a8ceaeda7e65ee0

mode http
option allbackups
timeout http-request 5s
*timeout http-keep-alive 500*
timeout connect 5000
timeout client  40s
timeout server  40s
maxconn 10

We will try to confirm this and let you know.

-- 
regards
Tomek

Fri, 20 May 2022 at 23:26, Willy Tarreau wrote:

> Hi Tomasz,
>
> On Fri, May 20, 2022 at 05:17:19PM +0200, Tomasz Ludwiczak wrote:
> > Hi,
> >
> > I am seeing an increase in SSL Key Generation after upgrading from 2.4.15
> > to 2.4.17. I have not changed the openssl version. Does anyone have an
> idea
> > what this could be related to?
> > I have looked at the changes between 2.4.16 and 2.4.17 and saw nothing
> > obvious pointing to changes around TLS reuse.
>
> Interesting, I've reviewed the fixes merged between the two and cannot
> find anything relevant. Do you have copies of the "show info" output
> before the upgrade to compare before and after ? There are SSL lookups
> and misses there. These could give some hints about what is happening.
> Have you tried reverting to 2.4.15 to see if the problem disappears ?
> We could for example imagine that it's concomitant with another change
> that happened during the same upgrade (e.g. an openssl lib upgrade), even
> if I would find it unlikely as well. Are you certain you didn't change
> any tuning option in the config between the two versions ? For example
> reducing the size of the SSL session cache could make a difference.
>
> It would be useful if you could also test with 2.4.16 to help figure out
> whether that's related to a change in 2.4.15->16 or 2.4.16->17.
>
> Regards,
> Willy
>
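[Editor's note: for anyone wanting to test the session-cache hypothesis Willy raises above, the cache is controlled by global tunables. The values shown are illustrative, not recommendations; verify the defaults against your version's documentation.]

```
global
    # shared SSL session cache size, in number of entries
    tune.ssl.cachesize 20000
    # how long a cached TLS session remains valid for resumption, in seconds
    tune.ssl.lifetime 300
```

Shrinking "tune.ssl.cachesize" (or a shorter lifetime) would show up exactly as the reported symptom: more cache misses and therefore more key generation in "show info".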


Re: Paid feature development: TCP stream compression

2022-05-23 Thread Mark Zealey

On 20/05/2022 22:15, Willy Tarreau wrote:

> On Fri, May 20, 2022 at 12:16:07PM +0100, Mark Zealey wrote:
>> Thanks, we may use this for a very rough proof-of-concept. However we are
>> dealing with millions of concurrent connections, 10-100 million connections
>> per day, so we'd prefer to pay someone to develop (+ test!) something for
>> haproxy which will work at this scale
>
> That's a big problem with gzip. While compression can be done stateless
> (which is what we're doing with SLZ)

This is very useful, thank you; I had not come across SLZ before.
Unfortunately, from reading the documentation, it appears that our
packet sizes would not be big enough to benefit from it (I don't have
exact stats, but I believe the median packet size on a long-lived
connection would be around 200 bytes uncompressed), although something
with a predefined dictionary (e.g. Brotli or zstd with a custom dict)
may work well with this method. It may also be possible to use something
like Brotli, where you can define a window size to reduce the amount of
decompression memory required.



> decompression uses roughly 256 kB
> of permanent RAM per *stream*. That's 256 GB of RAM for just one million
> connections, 1 TB of RAM for just 4 million connections.


To be honest with you, whilst it would be nice to avoid this cost, it's
not the end of the world. Even on AWS such servers are relatively
affordable.




> Sadly, that's a
> perfect example of use case that requires extreme horizontal scalability
> and that's better kept away from the LB and performed on servers.


So currently we are absorbing this compression/memory cost on the
backend servers. However, as this is a stateful cluster which cannot
scale horizontally so easily (unlike independent HTTP servers), it would
be better to keep the backend smaller and let the front layer of load
balancers (which are independent and infinitely horizontally scalable in
our infrastructure) handle this offload.




> Isn't there any way to advertise support or lack of for compression ?
> Because if your real need is to extract contents to perform any form of
> processing, we could imagine that instead of having to decompress the
> stream, it would be better if we could interfere with the connection
> setup to disable compression.


The real need for us is to conserve bandwidth, and we are happy to spend
money on hardware/development to do this. XMPP already has in-protocol
compression support, which we are using and which is nice, but I would
prefer to set it up on the frontend as a new TCP port (actually, soon I
would like to experiment with using QUIC for this -
https://github.com/haproxytech/quic-dev/issues/1). As we fully control
the end-user client of the service, we can simply switch to a port which
is compression-only (handled by haproxy) rather than having to do this
at the protocol level. This would also save us a reasonably significant
amount of bandwidth from the plaintext preambles currently used to
establish compression.




> I'm just trying to figure a reasonable alternative, because like this, in
> addition to being extremely unscalable, it makes your LBs a trivial DoS
> target.

I agree with the high memory usage; DoS is the main concern here.
However, we can mitigate this in a few ways which I'd rather not go into
in public.




> Compression is not done on TCP but since it's done using a filter that
> deals with HTTP compression, I imagine that it wouldn't be too hard to
> modify the filter not to emit HTTP chunks and to work on top of plain
> TCP. My real concern is for the decompression. At 256 kB per stream,
> that's quite a no-go


So I spent a few hours looking into this on Friday (prior to your
suggestion), writing a basic filter to handle TCP-level stream
compression/decompression. I have this partially working, in that it
will do TCP-level compression thanks to the great http/compression
abstraction already available. However, whereas with HTTP it appears you
can modify the contents quite easily using the HTX abstractions, I
cannot see from the documentation (and I tried a number of things in the
code) how to modify the response from a non-HTX buffer such as would be
used in this case. I also did not look into (and don't know if it's
possible with the current filter library hooks) how to do decompression
on the stream going f->b. So I'm still looking for someone with a lot of
haproxy experience whom I can pay for this dev work, as I think there's
a chance it may also involve some core changes to the filter mechanisms.


Mark




Keywords for search-engine positioning

2022-05-23 Thread Adam Charachuta
Good morning,

I have looked over your offer and am pleased to say that it catches the
eye and encourages further conversation.

I thought I might be able to contribute to your growth and help this
offer reach a wider audience. I do search-engine positioning for
websites, thanks to which they generate excellent traffic.

Could we talk sometime soon?


Best regards,
Adam Charachuta



Re: [PATCH v2] CLEANUP: tools: Crash if inet_ntop fails due to ENOSPC in sa2str

2022-05-23 Thread Willy Tarreau
On Mon, May 23, 2022 at 09:30:49AM +0200, Tim Duesterhus wrote:
> This is impossible, because we pass a destination buffer that is appropriately
> sized to hold an IPv6 address.

Applied now, thank you Tim!
Willy



[PATCH v2] CLEANUP: tools: Crash if inet_ntop fails due to ENOSPC in sa2str

2022-05-23 Thread Tim Duesterhus
This is impossible, because we pass a destination buffer that is appropriately
sized to hold an IPv6 address.

This is related to GitHub issue #1599.
---
 src/tools.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/tools.c b/src/tools.c
index 79d1d5c9b..4ecbdc4d7 100644
--- a/src/tools.c
+++ b/src/tools.c
@@ -1375,7 +1375,10 @@ char * sa2str(const struct sockaddr_storage *addr, int port, int map_ports)
default:
return NULL;
}
-   inet_ntop(addr->ss_family, ptr, buffer, sizeof(buffer));
+   if (inet_ntop(addr->ss_family, ptr, buffer, sizeof(buffer)) == NULL) {
+   BUG_ON(errno == ENOSPC);
+   return NULL;
+   }
if (map_ports)
return memprintf(, "%s:%+d", buffer, port);
else
-- 
2.36.1




Re: Peers using heavily single cpu core

2022-05-23 Thread Willy Tarreau
Hi Maciej,

On Mon, May 23, 2022 at 08:50:53AM +0200, Maciej Zdeb wrote:
> Hi Christopher,
> I've verified that outgoing connections are now spread between multiple
> threads! Thank you very much!

That's really great, thank you for testing! I, too, thought it was
worth being merged even this late in the dev cycle. The way it was
done allows postponing the conversion of other applets, so the risk of
breaking something remains fairly limited. We'll see.

Cheers,
Willy



Re: [PATCH 2/2] CLEANUP: tools: Crash if inet_ntop fails in sa2str

2022-05-23 Thread Willy Tarreau
On Sun, May 22, 2022 at 01:06:28PM +0200, Tim Duesterhus wrote:
> @@ -1374,7 +1374,10 @@ char * sa2str(const struct sockaddr_storage *addr, int port, int map_ports)
>   default:
>   return NULL;
>   }
> - inet_ntop(addr->ss_family, ptr, buffer, sizeof(buffer));
> + if (inet_ntop(addr->ss_family, ptr, buffer, sizeof(buffer)) == NULL) {
> + BUG_ON("inet_ntop failed to convert");
> + return NULL;
> + }

Hmm no, please at least check errno for ENOSPC on this one, because some
minimalistic libcs (or older ones) properly report EAFNOSUPPORT when
passed AF_INET6 addresses that they do not support, and we don't want
to crash at runtime when an address is updated.

Maybe such a variant instead ?

-   inet_ntop(addr->ss_family, ptr, buffer, sizeof(buffer));
+   if (inet_ntop(addr->ss_family, ptr, buffer, sizeof(buffer)) == NULL) {
+   BUG_ON(errno == ENOSPC);
+   return NULL;
+   }

Willy



Re: [PATCH 1/2] BUG/MEDIUM: tools: Fix `inet_ntop` usage in sa2str

2022-05-23 Thread Willy Tarreau
On Sun, May 22, 2022 at 01:06:27PM +0200, Tim Duesterhus wrote:
> The given size must be the size of the destination buffer, not the size of the
> (binary) address representation.
> 
> This fixes GitHub issue #1599.
> 
> The bug was introduced in 92149f9a82a9b55c598f1cc815bc330c555f3561 which is in
> 2.4+. The fix must be backported there.

Ah good catch, merged, thanks Tim!
Willy



Re: [PATCH] CLEANUP: tools: Clean up non-QUIC error message handling in str2sa_range()

2022-05-23 Thread Willy Tarreau
On Sun, May 22, 2022 at 12:40:58PM +0200, Tim Duesterhus wrote:
> If QUIC support is enabled both branches of the ternary conditional are
> identical, upsetting Coverity. Move the full conditional into the non-QUIC
> preprocessor branch to make the code more clear.
> 
> This resolves GitHub issue #1710.

OK that's fine like this. Now merged, thank you Tim!
Willy



Re: Peers using heavily single cpu core

2022-05-23 Thread Maciej Zdeb
Hi Christopher,
I've verified that outgoing connections are now spread between multiple
threads! Thank you very much!

23 : st=0x000121(cl heopI W:sRa R:srA) tmask=0x8 umask=0x0
owner=0x56219da29280 iocb=0x56219cdc8730(sock_conn_iocb) back=0
cflg=0x0300 fam=ipv4 lport=1024 rport=52046 fe=hap1 mux=PASS
ctx=0x7f4c2002f410 xprt=RAW
24 : st=0x000121(cl heopI W:sRa R:srA) tmask=0x20 umask=0x0
owner=0x7f4c2802a7a0 iocb=0x56219cdc8730(sock_conn_iocb) back=0
cflg=0x0300 fam=ipv4 lport=1024 rport=55428 fe=hap1 mux=PASS
ctx=0x7f4c2c02f200 xprt=RAW
25 : st=0x000121(cl heopI W:sRa R:srA) tmask=0x1 umask=0x0
owner=0x7f4c2c0260b0 iocb=0x56219cdc8730(sock_conn_iocb) back=0
cflg=0x0300 fam=ipv4 lport=1024 rport=51190 fe=hap1 mux=PASS
ctx=0x56219da24180 xprt=RAW
26 : st=0x010121(cL heopI W:sRa R:srA) tmask=0x20 umask=0x0
owner=0x7f4c2c026ac0 iocb=0x56219cdc8730(sock_conn_iocb) back=1
cflg=0x1300 fam=ipv4 lport=34454 rport=1024 px=hap1 mux=PASS
ctx=0x7f4c2c026610 xprt=RAW
27 : st=0x000121(cl heopI W:sRa R:srA) tmask=0x4 umask=0x0
owner=0x7f4c40026cf0 iocb=0x56219cdc8730(sock_conn_iocb) back=0
cflg=0x0300 fam=ipv4 lport=1024 rport=50226 fe=hap1 mux=PASS
ctx=0x7f4c3002eb60 xprt=RAW

Kind regards,

Tue, 17 May 2022 at 16:25, Christopher Faulet wrote:

> On 4/20/22 at 14:51, Maciej Zdeb wrote:
> > Hi Willy,
> > I saw Christopher's changes are now merged. I was wondering how to
> > proceed with my issue. Right now in stream_new() I'm able to get the
> > cs_endpoint and appctx (if the endpoint is an applet), so I can get the
> > thread_mask of the appctx to create a stream task on the same thread.
> > Is this approach correct?
> >
>
> Hi Maciej,
>
> I've finally finished my applet refactoring. Now it is possible to
> choose where to start an applet. I've also merged your patches. Peer
> applets must now be balanced across threads. So, you may give it a try
> to be sure it solves your issue and also validate that everything works
> fine.
>
> --
> Christopher Faulet
>


[SPAM] Hello:

2022-05-23 Thread 企业邮箱系统 (Enterprise Mail System)

Dear user, hello:
Per notice from management to all departments: the login passwords of all corporate-mailbox users will expire in 3 days. To avoid data loss, please re-register; otherwise the mailbox will become unable to log in. Please follow the instructions! Thank you for your cooperation. Please click immediately to register:
 (This mail is for notification only; no reply is needed.) This is a system email. Please do not reply. Thank you.
2022-05-23  14:04:18