Re: [PATCH 1/2] CI: reduce ASAN log redirection umbrella size

2024-05-13 Thread Илья Шипицин
Mon, May 13, 2024 at 11:29, William Lallemand :

> On Thu, May 09, 2024 at 10:24:55PM +0200, Илья Шипицин wrote:
> > sorry for the delay.
> >
> > indeed, it's better to drop asan redirection. I sent a patch to the list.
> >
> > for my defence I can say that in my experiments asan.log worked as
> expected
> > :)
> >
>
> No worries, we had a change of distribution version since then, maybe
> the variable needs more parameters to achieve it with this version, no
> idea. But honestly I find it simpler to have the ASAN errors in the
> reg-test directly.
>

initially I thought of using "ASAN_OPTIONS=halt_on_error=0".
actually, I believed that this was the default behaviour (it is not); even worse,
switching to halt_on_error=0 required rebuilding the ASAN library.
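
For reference, a minimal sketch of the combination I was experimenting with, assuming
an ASAN runtime that supports continuing after errors (as noted above, halt_on_error=0
may require rebuilding the ASAN library); the test path is only illustrative:

```
# Collect reports into per-process files and keep going on errors.
# ASAN appends the PID, so the reports end up in asan.log.<pid>.
export ASAN_OPTIONS="halt_on_error=0:log_path=asan.log"
make reg-tests reg-tests/ssl/crt_store.vtc -- --debug
ls asan.log.* 2>/dev/null
```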


>
> > ignore show version error · chipitsine/haproxy@2900d66 (github.com)
> > <https://github.com/chipitsine/haproxy/actions/runs/9022839976/job/24793325629>
> >
>
> I never saw this at all, I doubt it worked in master for a long time :-)
>
> https://github.com/haproxy/haproxy/actions/runs/9060411631/job/24890056499
>
> That's better indeed, I'll merge the patch. Thanks!
>
>
> --
> William Lallemand
>


Re: [PATCH 1/3] BUILD: illumos: pthread_getcpuclockid is not available

2024-05-05 Thread Илья Шипицин
updated patches.

I'll address the reorganization into "compat.h" a bit later, once it is settled in my head

Sun, May 5, 2024 at 12:48, Илья Шипицин :

> I will test and send a simplified patch, i.e. I'll patch clock.c directly
>
> if we want to move that macro to compat.h, I'd postpone that for some
> investigation
>
> 1) we will need to include "pthread.h" from compat.h (currently it does not)
> 2) we will need to make sure compat.h is included everywhere (I do not see
> that include in clock.c)
>
Sun, May 5, 2024 at 12:24, Willy Tarreau :
>
>> On Sun, May 05, 2024 at 11:15:24AM +0200, Ilya Shipitsin wrote:
>> > Sun, May 5, 2024 at 10:42, Willy Tarreau :
>> >
>> > > On Sun, May 05, 2024 at 09:12:41AM +0200, Miroslav Zagorac wrote:
>> > > > On 05. 05. 2024. 08:32, Willy Tarreau wrote:
>> > > > > On Sun, May 05, 2024 at 07:49:55AM +0200, Ilya Shipitsin wrote:
>> > > > >> Sun, May 5, 2024 at 02:05, Miroslav Zagorac :
>> > > > >>> I think that this patch is not satisfactory because, for
>> example,
>> > > Solaris
>> > > > >>> 11.4.0.0.1.15.0 (from 2018) has _POSIX_TIMERS and
>> > > _POSIX_THREAD_CPUTIME
>> > > > >>> defined, but does not have the pthread_getcpuclockid() function;
>> > > while
>> > > > >>> solaris
>> > > > >>> 11.4.42.0.0.111.0 (from 2022) has that function.
>> > > > >>>
>> > > > >>
>> > > > >> I'm trying to build on this vmactions/solaris-vm: Use Solaris in
>> > > github
>> > > > >> actions <https://github.com/vmactions/solaris-vm>
>> > > > >> it does not have pthread_getcpuclockid()
>> > > > >
>> > > > > I'm wondering what the point of defining _POSIX_THREAD_CPUTIME
>> can be
>> > > > > then :-/
>> > > > >
>> > > >
>> > > > The pthread_getcpuclockid() function is declared in the include file
>> > > > /usr/include/pthread.h.  The only difference between the two
>> "versions"
>> > > of
>> > > > Solaris 11.4 is that the newer version has a declaration and the
>> older
>> > > one
>> > > > does not.
>> > > >
>> > > > However, _POSIX_THREAD_CPUTIME is defined in the
>> /usr/include/unistd.h
>> > > file as
>> > > > -1 in the UNIX 03 block of options that are not supported in Solaris
>> > > 11.4.
>> > > >
>> > > > /* Unsupported UNIX 03 options */
>> > > > #if defined(_XPG6)
>> > > > ..
>> > > > #define _POSIX_THREAD_CPUTIME (-1)
>> > > > ..
>> > > > #endif
>> > > >
>> > > >
>> > > > An explanation of that definition can be found here:
>> > > >
>> > > > https://docs.oracle.com/cd/E88353_01/html/E37842/unistd-3head.html
>> > > >
>> > > > "If a symbolic constant is defined with the value -1, the option is
>> not
>> > > > supported. Headers, data types, and function interfaces required
>> only
>> > > for the
>> > > > option need not be supplied. An application that attempts to use
>> anything
>> > > > associated only with the option is considered to be requiring an
>> > > extension.
>> > > (...)
>> > >
>> > > Ah excellent, that's quite useful! We're already doing that with
>> > > _POSIX_TIMERS. So I guess one just needs to try this instead:
>> > >
>> > > -#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
>> > > defined(_POSIX_THREAD_CPUTIME
>> > > +#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
>> > > defined(_POSIX_THREAD_CPUTIME && (_POSIX_THREAD_CPUTIME >= 0)
>> > >
>> >
>> > that worked (I added closing bracket after second "defined")
>>
>> Ah yes indeed. Thanks for the test. Do you want to update your patch maybe,
>> since you can test it ?
>>
>> Thanks,
>> Willy
>>
>
From 3cc9c51ae6b22beeaaee89d1e5f69b99c6874f39 Mon Sep 17 00:00:00 2001
From: Ilia Shipitsin 
Date: Sun, 5 May 2024 13:41:32 +0200
Subject: [PATCH 3/3] CI: netbsd: limit scheduled workflow to parent repo only

it is not very useful for most forks.
---
 .github/workflows/netbsd.yml | 1 +
 1 file changed, 1 in

Re: [PATCH 1/3] BUILD: illumos: pthread_getcpuclockid is not available

2024-05-05 Thread Илья Шипицин
I will test and send a simplified patch, i.e. I'll patch clock.c directly

if we want to move that macro to compat.h, I'd postpone that for some
investigation

1) we will need to include "pthread.h" from compat.h (currently it does not)
2) we will need to make sure compat.h is included everywhere (I do not see
that include in clock.c)

Sun, May 5, 2024 at 12:24, Willy Tarreau :

> On Sun, May 05, 2024 at 11:15:24AM +0200, Ilya Shipitsin wrote:
> > Sun, May 5, 2024 at 10:42, Willy Tarreau :
> >
> > > On Sun, May 05, 2024 at 09:12:41AM +0200, Miroslav Zagorac wrote:
> > > > On 05. 05. 2024. 08:32, Willy Tarreau wrote:
> > > > > On Sun, May 05, 2024 at 07:49:55AM +0200, Ilya Shipitsin wrote:
> > > > >> Sun, May 5, 2024 at 02:05, Miroslav Zagorac :
> > > > >>> I think that this patch is not satisfactory because, for example,
> > > Solaris
> > > > >>> 11.4.0.0.1.15.0 (from 2018) has _POSIX_TIMERS and
> > > _POSIX_THREAD_CPUTIME
> > > > >>> defined, but does not have the pthread_getcpuclockid() function;
> > > while
> > > > >>> solaris
> > > > >>> 11.4.42.0.0.111.0 (from 2022) has that function.
> > > > >>>
> > > > >>
> > > > >> I'm trying to build on this vmactions/solaris-vm: Use Solaris in
> > > github
> > > > >> actions 
> > > > >> it does not have pthread_getcpuclockid()
> > > > >
> > > > > I'm wondering what the point of defining _POSIX_THREAD_CPUTIME can
> be
> > > > > then :-/
> > > > >
> > > >
> > > > The pthread_getcpuclockid() function is declared in the include file
> > > > /usr/include/pthread.h.  The only difference between the two
> "versions"
> > > of
> > > > Solaris 11.4 is that the newer version has a declaration and the
> older
> > > one
> > > > does not.
> > > >
> > > > However, _POSIX_THREAD_CPUTIME is defined in the
> /usr/include/unistd.h
> > > file as
> > > > -1 in the UNIX 03 block of options that are not supported in Solaris
> > > 11.4.
> > > >
> > > > /* Unsupported UNIX 03 options */
> > > > #if defined(_XPG6)
> > > > ..
> > > > #define _POSIX_THREAD_CPUTIME (-1)
> > > > ..
> > > > #endif
> > > >
> > > >
> > > > An explanation of that definition can be found here:
> > > >
> > > > https://docs.oracle.com/cd/E88353_01/html/E37842/unistd-3head.html
> > > >
> > > > "If a symbolic constant is defined with the value -1, the option is
> not
> > > > supported. Headers, data types, and function interfaces required only
> > > for the
> > > > option need not be supplied. An application that attempts to use
> anything
> > > > associated only with the option is considered to be requiring an
> > > extension.
> > > (...)
> > >
> > > Ah excellent, that's quite useful! We're already doing that with
> > > _POSIX_TIMERS. So I guess one just needs to try this instead:
> > >
> > > -#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
> > > defined(_POSIX_THREAD_CPUTIME
> > > +#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
> > > defined(_POSIX_THREAD_CPUTIME && (_POSIX_THREAD_CPUTIME >= 0)
> > >
> >
> > that worked (I added closing bracket after second "defined")
>
> Ah yes indeed. Thanks for the test. Do you want to update your patch maybe,
> since you can test it ?
>
> Thanks,
> Willy
>


Re: [PATCH 1/3] BUILD: illumos: pthread_getcpuclockid is not available

2024-05-05 Thread Илья Шипицин
Sun, May 5, 2024 at 10:42, Willy Tarreau :

> On Sun, May 05, 2024 at 09:12:41AM +0200, Miroslav Zagorac wrote:
> > On 05. 05. 2024. 08:32, Willy Tarreau wrote:
> > > On Sun, May 05, 2024 at 07:49:55AM +0200, Ilya Shipitsin wrote:
> > >> Sun, May 5, 2024 at 02:05, Miroslav Zagorac :
> > >>> I think that this patch is not satisfactory because, for example,
> Solaris
> > >>> 11.4.0.0.1.15.0 (from 2018) has _POSIX_TIMERS and
> _POSIX_THREAD_CPUTIME
> > >>> defined, but does not have the pthread_getcpuclockid() function;
> while
> > >>> solaris
> > >>> 11.4.42.0.0.111.0 (from 2022) has that function.
> > >>>
> > >>
> > >> I'm trying to build on this vmactions/solaris-vm: Use Solaris in
> github
> > >> actions 
> > >> it does not have pthread_getcpuclockid()
> > >
> > > I'm wondering what the point of defining _POSIX_THREAD_CPUTIME can be
> > > then :-/
> > >
> >
> > The pthread_getcpuclockid() function is declared in the include file
> > /usr/include/pthread.h.  The only difference between the two "versions"
> of
> > Solaris 11.4 is that the newer version has a declaration and the older
> one
> > does not.
> >
> > However, _POSIX_THREAD_CPUTIME is defined in the /usr/include/unistd.h
> file as
> > -1 in the UNIX 03 block of options that are not supported in Solaris
> 11.4.
> >
> > /* Unsupported UNIX 03 options */
> > #if defined(_XPG6)
> > ..
> > #define _POSIX_THREAD_CPUTIME (-1)
> > ..
> > #endif
> >
> >
> > An explanation of that definition can be found here:
> >
> > https://docs.oracle.com/cd/E88353_01/html/E37842/unistd-3head.html
> >
> > "If a symbolic constant is defined with the value -1, the option is not
> > supported. Headers, data types, and function interfaces required only
> for the
> > option need not be supplied. An application that attempts to use anything
> > associated only with the option is considered to be requiring an
> extension.
> (...)
>
> Ah excellent, that's quite useful! We're already doing that with
> _POSIX_TIMERS. So I guess one just needs to try this instead:
>
> -#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
> defined(_POSIX_THREAD_CPUTIME
> +#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
> defined(_POSIX_THREAD_CPUTIME && (_POSIX_THREAD_CPUTIME >= 0)
>

that worked (I added a closing bracket after the second "defined")
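
For the record, here is a quick way to see how the corrected guard evaluates on a given
system (a hedged sketch with gcc/clang reading the source from stdin; the committed
patch may be formatted differently):

```
# Prints 1 only when _POSIX_TIMERS > 0 and _POSIX_THREAD_CPUTIME is defined to a
# non-negative value, i.e. when pthread_getcpuclockid() is expected to be usable.
cat <<'EOF' | cc -xc - -o /tmp/guard_check && /tmp/guard_check
#include <unistd.h>
#include <stdio.h>
int main(void)
{
#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) && \
    defined(_POSIX_THREAD_CPUTIME) && (_POSIX_THREAD_CPUTIME >= 0)
	printf("1\n");
#else
	printf("0\n");
#endif
	return 0;
}
EOF
```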


>
> Please note that it appears at a few places, so we'll probably have to
> move that painful definition in compat.h I think.
>


that makes sense


>
> Willy
>


Re: [PATCH 1/3] BUILD: illumos: pthread_getcpuclockid is not available

2024-05-05 Thread Илья Шипицин
Sun, May 5, 2024 at 08:32, Willy Tarreau :

> On Sun, May 05, 2024 at 07:49:55AM +0200, Ilya Shipitsin wrote:
> > Sun, May 5, 2024 at 02:05, Miroslav Zagorac :
> >
> > > On 04. 05. 2024. 17:36, Ilya Shipitsin wrote:
> > > > this function is considered optional for POSIX and not implemented
> > > > on Illumos
> > > >
> > > > Reference:
> > >
> https://www.gnu.org/software/gnulib/manual/html_node/pthread_005fgetcpuclockid.html
> > > > According to
> > > https://github.com/cpredef/predef/blob/master/OperatingSystems.md
> Illumos
> > > > is identified by __illumos__ macro available since gcc-11
> > > > ---
> > > >  src/clock.c | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >
> > > > diff --git a/src/clock.c b/src/clock.c
> > > > index ec2133c8b..f484c2d9c 100644
> > > > --- a/src/clock.c
> > > > +++ b/src/clock.c
> > > > @@ -135,7 +135,7 @@ uint64_t now_cpu_time_thread(int thr)
> > > >  /* set the clock source for the local thread */
> > > >  void clock_set_local_source(void)
> > > >  {
> > > > -#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
> > > defined(_POSIX_THREAD_CPUTIME)
> > > > +#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
> > > defined(_POSIX_THREAD_CPUTIME) && !defined(__illumos__)
> > > >  #ifdef USE_THREAD
> > > >   pthread_getcpuclockid(pthread_self(),
> _thread_clock_id[tid]);
> > > >  #else
> > >
> > > Hello Ilya,
> > >
> > > I think that this patch is not satisfactory because, for example,
> Solaris
> > > 11.4.0.0.1.15.0 (from 2018) has _POSIX_TIMERS and _POSIX_THREAD_CPUTIME
> > > defined, but does not have the pthread_getcpuclockid() function; while
> > > solaris
> > > 11.4.42.0.0.111.0 (from 2022) has that function.
> > >
> >
> > I'm trying to build on this vmactions/solaris-vm: Use Solaris in github
> > actions 
> > it does not have pthread_getcpuclockid()
>
> I'm wondering what the point of defining _POSIX_THREAD_CPUTIME can be
> then :-/
>
> Just guessing, are you sure you're building with -pthread -lrt ? Just in
> case, please double-check with V=1. Solaris sets USE_RT, but maybe
> something
> else is needed.
>

I did "find / -name pthread.h -exec cat {} ';' -print"
and there was no declaration of pthread_getcpuclockid()

chances are that it is shipped in a lib, but the prototype is missing ...
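
A hedged way to check that hypothesis is to declare the function by hand and see
whether the link succeeds; -pthread and -lrt are the flags mentioned earlier in the
thread, and the output path is illustrative:

```
# If this links, the symbol exists in the libraries and only the header
# declaration is missing on this Solaris/illumos image.
cat <<'EOF' | cc -xc - -o /tmp/clockid_check -pthread -lrt && echo "symbol resolved"
extern int pthread_getcpuclockid();  /* declared by hand, no prototype in pthread.h */
int main(void) { return pthread_getcpuclockid != 0 ? 0 : 1; }
EOF
```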


>
> Willy
>


Re: [PATCH 1/3] BUILD: illumos: pthread_getcpuclockid is not available

2024-05-04 Thread Илья Шипицин
Sun, May 5, 2024 at 02:05, Miroslav Zagorac :

> On 04. 05. 2024. 17:36, Ilya Shipitsin wrote:
> > this function is considered optional for POSIX and not implemented
> > on Illumos
> >
> > Reference:
> https://www.gnu.org/software/gnulib/manual/html_node/pthread_005fgetcpuclockid.html
> > According to
> https://github.com/cpredef/predef/blob/master/OperatingSystems.md Illumos
> > is identified by __illumos__ macro available since gcc-11
> > ---
> >  src/clock.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/src/clock.c b/src/clock.c
> > index ec2133c8b..f484c2d9c 100644
> > --- a/src/clock.c
> > +++ b/src/clock.c
> > @@ -135,7 +135,7 @@ uint64_t now_cpu_time_thread(int thr)
> >  /* set the clock source for the local thread */
> >  void clock_set_local_source(void)
> >  {
> > -#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
> defined(_POSIX_THREAD_CPUTIME)
> > +#if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) &&
> defined(_POSIX_THREAD_CPUTIME) && !defined(__illumos__)
> >  #ifdef USE_THREAD
> >   pthread_getcpuclockid(pthread_self(), _thread_clock_id[tid]);
> >  #else
>
> Hello Ilya,
>
> I think that this patch is not satisfactory because, for example, Solaris
> 11.4.0.0.1.15.0 (from 2018) has _POSIX_TIMERS and _POSIX_THREAD_CPUTIME
> defined, but does not have the pthread_getcpuclockid() function; while
> solaris
> 11.4.42.0.0.111.0 (from 2022) has that function.
>

I'm trying to build on this vmactions/solaris-vm: Use Solaris in github
actions 
it does not have pthread_getcpuclockid()



>
> Maybe it could be solved in a different way..
>
>
> Best regards,
>
> --
> Miroslav Zagorac
>
> What can change the nature of a man?
>


Re: How to configure DH groups for TLS 1.3

2024-05-02 Thread Илья Шипицин
I'd try openssl.cnf
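
Something along the lines of what the OpenSSL blog article (linked below) describes,
i.e. restricting the TLS 1.3 key-exchange groups in the library configuration. The
section names, file path and group list here are only an illustrative sketch, and the
file has to be picked up by the OpenSSL build haproxy is linked against (for example
via OPENSSL_CONF):

```
# Restrict key exchange to elliptic-curve groups so clients cannot force
# large ffdhe* finite-field groups.
cat > /etc/haproxy/openssl-groups.cnf <<'EOF'
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Groups = X25519:P-256:P-384
EOF
export OPENSSL_CONF=/etc/haproxy/openssl-groups.cnf   # in haproxy's environment
```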

Thu, May 2, 2024 at 17:17, Froehlich, Dominik :

> Hello everyone,
>
>
>
> I’m hardening HAProxy for CVE-2002-20001 (DHEAT attack) at the moment.
>
>
>
> For TLS 1.2 I’m using the “tune.ssl.default-dh-param” option to limit the
> key size to 2048 bit so that an attacker can’t force huge keys and thus
> lots of CPU cycles on the server.
>
>
>
> However, I’ve noticed that the property has no effect on TLS 1.3
> connections. An attacker can still negotiate an 8192-bit key and brick the
> server with relative ease.
>
>
>
> I’ve found an OpenSSL blog article about the issue:
> https://www.openssl.org/blog/blog/2022/10/21/tls-groups-configuration/index.html
>
>
>
> As it seems, this used to be a non-issue with OpenSSL 1.1.1 because it
> only supported EC groups, not finite field ones but in OpenSSL 3.x it is
> again possible to select the vulnerable groups, even with TLS 1.3.
>
>
>
> The article mentions a way of configuring OpenSSL with a “Groups” setting
> to restrict the number of supported DH groups, however I haven’t found any
> HAProxy config option equivalent.
>
>
>
> The closest I’ve gotten is the “curves” property:
> https://docs.haproxy.org/2.8/configuration.html#5.1-curves
>
>
>
> However, I think it only restricts the available elliptic curves in a
> ECDHE handshake, but it does not prevent a TLS 1.3 client from selecting a
> non-ECDHE prime group, for example “ffdhe8192”.
>
>
>
> The article provides example configurations for NGINX and Apache, but is
> there any way to restrict the DH groups (e.g to just ECDHE) for TLS 1.3 for
> HAProxy, too?
>
>
>
>
>
> Best Regards,
>
> Dominik
>


Re: Changes in HAProxy 3.0's Makefile and build options

2024-04-22 Thread Илья Шипицин
I'll postpone it for a while.
I thought that the value of "2" means the same as "1"; I'll try to find better documentation.

It seems that I didn't specify "-march", and that might be the cause.
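
Roughly what I plan to double-check, as a hedged sketch based on Willy's suggestions
quoted below (the QuicTLS paths are illustrative):

```
# 1) Does the detection still trigger once the real flags (-m32, -march=...) are
#    passed, i.e. does this arch actually need libatomic?
gcc -march=i686 -m32 -dM -E -xc - < /dev/null | grep -c 'LOCK_FREE.*1'

# 2) Does the manually built QuicTLS itself pull in libatomic? If so, adding it
#    to ADDLIB is just satisfying an external dependency.
objdump -p /opt/quictls/lib/libssl.so    | grep NEEDED
objdump -p /opt/quictls/lib/libcrypto.so | grep NEEDED
```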

Sat, Apr 20, 2024 at 15:21, Willy Tarreau :

> On Sat, Apr 20, 2024 at 03:11:19PM +0200, Ilya Shipitsin wrote:
> > Sat, Apr 20, 2024 at 15:07, Willy Tarreau :
> >
> > > On Sat, Apr 20, 2024 at 02:49:38PM +0200, Ilya Shipitsin wrote:
> > > > Thu, Apr 11, 2024 at 21:05, Willy Tarreau :
> > > >
> > > > > Hi Ilya,
> > > > >
> > > > > On Thu, Apr 11, 2024 at 08:27:39PM +0200, Ilya Shipitsin wrote:
> > > > > > do you know maybe how this was supposed to work ?
> > > > > > haproxy/Makefile at master · haproxy/haproxy (github.com)
> > > > > > 
> > > > >
> > > > > That's this:
> > > > >
> > > > >   ifneq ($(shell $(CC) $(CFLAGS) -dM -E -xc - </dev/null 2>/dev/null | grep -c 'LOCK_FREE.*1'),0)
> > > > > USE_LIBATOMIC   = implicit
> > > > >   endif
> > > > >
> > > > > It calls the compiler with the known flags and checks if for this
> arch,
> > > > > it's configured to require libatomic.
> > > > >
> > > >
> > > > the macro values have changed from 1 to 2:
> > > >
> > > > ilia@fedora:~/Downloads$ cc -dM -E -xc - </dev/null | grep LOCK
> > > > #define __GCC_ATOMIC_CHAR_LOCK_FREE 2
> > > > #define __GCC_ATOMIC_CHAR32_T_LOCK_FREE 2
> > > > #define __GCC_ATOMIC_BOOL_LOCK_FREE 2
> > > > #define __GCC_ATOMIC_POINTER_LOCK_FREE 2
> > > > #define __GCC_ATOMIC_INT_LOCK_FREE 2
> > > > #define __GCC_ATOMIC_WCHAR_T_LOCK_FREE 2
> > > > #define __GCC_ATOMIC_LONG_LOCK_FREE 2
> > > > #define __GCC_ATOMIC_CHAR16_T_LOCK_FREE 2
> > > > #define __GCC_ATOMIC_LLONG_LOCK_FREE 2
> > > > #define __GCC_ATOMIC_SHORT_LOCK_FREE 2
> > >
> > > This means it's always lock-free, implemented natively thus doesn't
> > > require libatomic. Value 1 means "sometimes lock-free" and implemented
> > > as a function provided by libatomic.
> > >
> > > Did the problem appear when I changed the flags in the makefile ? Maybe
> > > I accidently lost one and it's falling back to a subset of the target
> > > arch ?
> > >
> >
> >
> > the problem appears only with QuicTLS manually built with the "-m32" flag. It
> > does not appear with "-m32" if built and linked against the system OpenSSL
> >
> > but after I modify the condition (the same as previously enforcing libatomic
> > in ADDLIB), it builds fine.
>
> OK thanks for that, but was it already present before my changes in the
> makefile ? Could you check that the -m32 flag is properly passed to this
> test ?
>
> On 32-bit ix86, there are different cases that require libatomic:
>
>   $ gcc -march=i686 -m32 -dM -E -xc - < /dev/null |grep -c LOCK.*1
>   0
>   $ gcc -march=i586 -m32 -dM -E -xc - < /dev/null |grep -c LOCK.*1
>   0
>   $ gcc -march=i486 -m32 -dM -E -xc - < /dev/null |grep -c LOCK.*1
>   1
>   $ gcc -march=i386 -m32 -dM -E -xc - < /dev/null |grep -c LOCK.*1
>   10
>
> Only i386 and i486 require it. That makes me think, maybe it's quictls
> that was built against it and adds a dependency to it. You can check
> it using objdump -p|grep NEEDED.
>
> If so that would make sense to just manually add it (or any other
> required dependencies) in ADDLIB since they're here just to satisfy
> external dependencies.
>
> Willy
>


Re: Changes in HAProxy 3.0's Makefile and build options

2024-04-20 Thread Илья Шипицин
Sat, Apr 20, 2024 at 15:07, Willy Tarreau :

> On Sat, Apr 20, 2024 at 02:49:38PM +0200, Ilya Shipitsin wrote:
> > Thu, Apr 11, 2024 at 21:05, Willy Tarreau :
> >
> > > Hi Ilya,
> > >
> > > On Thu, Apr 11, 2024 at 08:27:39PM +0200, Ilya Shipitsin wrote:
> > > > do you know maybe how this was supposed to work ?
> > > > haproxy/Makefile at master · haproxy/haproxy (github.com)
> > > > 
> > >
> > > That's this:
> > >
> > >   ifneq ($(shell $(CC) $(CFLAGS) -dM -E -xc - </dev/null | grep -c 'LOCK_FREE.*1'),0)
> > > USE_LIBATOMIC   = implicit
> > >   endif
> > >
> > > It calls the compiler with the known flags and checks if for this arch,
> > > it's configured to require libatomic.
> > >
> >
> > the macro values have changed from 1 to 2:
> >
> > ilia@fedora:~/Downloads$ cc -dM -E -xc - </dev/null | grep LOCK
> > #define __GCC_ATOMIC_CHAR_LOCK_FREE 2
> > #define __GCC_ATOMIC_CHAR32_T_LOCK_FREE 2
> > #define __GCC_ATOMIC_BOOL_LOCK_FREE 2
> > #define __GCC_ATOMIC_POINTER_LOCK_FREE 2
> > #define __GCC_ATOMIC_INT_LOCK_FREE 2
> > #define __GCC_ATOMIC_WCHAR_T_LOCK_FREE 2
> > #define __GCC_ATOMIC_LONG_LOCK_FREE 2
> > #define __GCC_ATOMIC_CHAR16_T_LOCK_FREE 2
> > #define __GCC_ATOMIC_LLONG_LOCK_FREE 2
> > #define __GCC_ATOMIC_SHORT_LOCK_FREE 2
>
> This means it's always lock-free, implemented natively thus doesn't
> require libatomic. Value 1 means "sometimes lock-free" and implemented
> as a function provided by libatomic.
>
> Did the problem appear when I changed the flags in the makefile ? Maybe
> I accidently lost one and it's falling back to a subset of the target
> arch ?
>


the problem appears only with QuicTLS manually built with the "-m32" flag. It
does not appear with "-m32" if built and linked against the system OpenSSL

but after I modify the condition (the same as previously enforcing libatomic in
ADDLIB), it builds fine.


>
> > the following patch works, but I'll play a bit more 
> >
> > diff --git a/Makefile b/Makefile
> > index 4bd263498..370ac7ed0 100644
> > --- a/Makefile
> > +++ b/Makefile
> > @@ -493,7 +493,7 @@ $(set_target_defaults)
> >  # linking with it by default as it's not always available nor deployed
> >  # (especially on archs which do not need it).
> >  ifneq ($(USE_THREAD:0=),)
> > -  ifneq ($(shell $(CC) $(OPT_CFLAGS) $(ARCH_FLAGS) $(CPU_CFLAGS) $(STD_CFLAGS) $(WARN_CFLAGS) $(NOWARN_CFLAGS) $(ERROR_CFLAGS) $(CFLAGS) -dM -E -xc - </dev/null | grep -c 'LOCK_FREE.*1'),0)
> > +  ifneq ($(shell $(CC) $(OPT_CFLAGS) $(ARCH_FLAGS) $(CPU_CFLAGS) $(STD_CFLAGS) $(WARN_CFLAGS) $(NOWARN_CFLAGS) $(ERROR_CFLAGS) $(CFLAGS) -dM -E -xc - </dev/null | grep -c 'LOCK_FREE.*[12]'),0)
> >  USE_LIBATOMIC   = implicit
> >endif
> >  endif
>
> It would impose libatomic for everyone, and not everyone has it. For
> example this would break clang on freebsd from what I'm seeing. That's
> not logic, there must be another reason.
>
> Willy
>


Re: Changes in HAProxy 3.0's Makefile and build options

2024-04-20 Thread Илья Шипицин
Thu, Apr 11, 2024 at 21:05, Willy Tarreau :

> Hi Ilya,
>
> On Thu, Apr 11, 2024 at 08:27:39PM +0200, Ilya Shipitsin wrote:
> > do you know maybe how this was supposed to work ?
> > haproxy/Makefile at master · haproxy/haproxy (github.com)
> > 
>
> That's this:
>
>   ifneq ($(shell $(CC) $(CFLAGS) -dM -E -xc - </dev/null | grep -c 'LOCK_FREE.*1'),0)
> USE_LIBATOMIC   = implicit
>   endif
>
> It calls the compiler with the known flags and checks if for this arch,
> it's configured to require libatomic.
>

the macro values have changed from 1 to 2:

ilia@fedora:~/Downloads$ cc -dM -E -xc - </dev/null | grep LOCK
#define __GCC_ATOMIC_CHAR_LOCK_FREE 2
#define __GCC_ATOMIC_CHAR32_T_LOCK_FREE 2
#define __GCC_ATOMIC_BOOL_LOCK_FREE 2
#define __GCC_ATOMIC_POINTER_LOCK_FREE 2
#define __GCC_ATOMIC_INT_LOCK_FREE 2
#define __GCC_ATOMIC_WCHAR_T_LOCK_FREE 2
#define __GCC_ATOMIC_LONG_LOCK_FREE 2
#define __GCC_ATOMIC_CHAR16_T_LOCK_FREE 2
#define __GCC_ATOMIC_LLONG_LOCK_FREE 2
#define __GCC_ATOMIC_SHORT_LOCK_FREE 2

the following patch works, but I'll play a bit more 

diff --git a/Makefile b/Makefile
index 4bd263498..370ac7ed0 100644
--- a/Makefile
+++ b/Makefile
@@ -493,7 +493,7 @@ $(set_target_defaults)
 # linking with it by default as it's not always available nor deployed
 # (especially on archs which do not need it).
 ifneq ($(USE_THREAD:0=),)
-  ifneq ($(shell $(CC) $(OPT_CFLAGS) $(ARCH_FLAGS) $(CPU_CFLAGS) $(STD_CFLAGS) $(WARN_CFLAGS) $(NOWARN_CFLAGS) $(ERROR_CFLAGS) $(CFLAGS) -dM -E -xc - </dev/null | grep -c 'LOCK_FREE.*1'),0)
+  ifneq ($(shell $(CC) $(OPT_CFLAGS) $(ARCH_FLAGS) $(CPU_CFLAGS) $(STD_CFLAGS) $(WARN_CFLAGS) $(NOWARN_CFLAGS) $(ERROR_CFLAGS) $(CFLAGS) -dM -E -xc - </dev/null | grep -c 'LOCK_FREE.*[12]'),0)
 USE_LIBATOMIC   = implicit
   endif
 endif



>
> > I had to enable libatomic explicitly even though it was supposed to be
> > detected on the fly (I haven't had a deeper look yet)
> >
> > haproxy/.github/workflows/fedora-rawhide.yml at master · haproxy/haproxy
> > <https://github.com/haproxy/haproxy/blob/master/.github/workflows/fedora-rawhide.yml#L17-L18>
>
> I honestly don't know why it's not working, it could be useful to archive
> the output of this command, either by calling it directly or for example
> by inserting "tee /tmp/log |" before grep. If you have a copy of the linker
> error you got without the lib, maybe it will reveal something. But honestly
> this is a perfect example of the reasons why I prefer generic makefiles to
> automatic detection: you see that you need libatomic, you add it, done. No
> need to patch a configure to mask an error due to an unhandled special case
> etc. The test above was mostly meant to ease the build for the most common
> cases (and for me till now it has always worked, on various systems and
> hardware), but I wouldn't mind too much.
>
> In fact you could simplify your build script by just passing
> USE_LIBATOMIC=1 to enable it.
>
> BTW, is there a reason for "make -j3" in the build script ? Limiting
> oneself
> to 3 build processes when modern machines rarely have less than 8 cores is
> a bit of a waste of time, especially if every other package does the same
> in the distro! I'd just do "make -j$(nproc)" as usual there.
>
> Cheers,
> Willy
>


Re: [PATCH 1/2] CI: reduce ASAN log redirection umbrella size

2024-04-17 Thread Илья Шипицин
In my experiments, the ASAN log was grouped under "show vtest results".
On the provided branch there is indeed no grouping.

I'll play a bit; maybe we'll end up dropping that log redirection
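
If we keep the redirection, a hedged sketch of a follow-up workflow step that would
surface the reports in the job output (assuming ASAN_OPTIONS=log_path=asan.log, where
ASAN appends the PID to the file name):

```
# Print every ASAN report written during the VTest run, one collapsible
# GitHub Actions group per file.
for f in asan.log.*; do
  [ -e "$f" ] || continue
  echo "::group::$f"
  cat "$f"
  echo "::endgroup::"
done
```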

Wed, Apr 17, 2024 at 21:17, William Lallemand :

> On Sun, Apr 14, 2024 at 09:23:51AM +0200, Ilya Shipitsin wrote:
> > previously ASAN_OPTIONS=log_path=asan.log was intended for VTest
> > execution only, it should not affect "haproxy -vv" and hsproxy
> > config smoke testing
> > ---
> >  .github/workflows/vtest.yml | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/.github/workflows/vtest.yml b/.github/workflows/vtest.yml
> > index 9d0bf48b0..5ee8a7a64 100644
> > --- a/.github/workflows/vtest.yml
> > +++ b/.github/workflows/vtest.yml
> > @@ -42,8 +42,6 @@ jobs:
> ># Configure a short TMPDIR to prevent failures due to long unix
> socket
> ># paths.
> >TMPDIR: /tmp
> > -  # Force ASAN output into asan.log to make the output more
> readable.
> > -  ASAN_OPTIONS: log_path=asan.log
> >OT_CPP_VERSION: 1.6.0
> >  steps:
> >  - uses: actions/checkout@v4
> > @@ -143,6 +141,9 @@ jobs:
> >run: echo "::add-matcher::.github/vtest.json"
> >  - name: Run VTest for HAProxy ${{
> steps.show-version.outputs.version }}
> >id: vtest
> > +  env:
> > +# Force ASAN output into asan.log to make the output more
> readable.
> > +ASAN_OPTIONS: log_path=asan.log
> >run: |
> >  # This is required for macOS which does not actually allow to
> increase
> >  # the '-n' soft limit to the hard limit, thus failing to run.
>
>
> Ilya,
>
> I still don't get how ASAN is working with the CI. Each time I have an
> ASAN issue I can't get a trace out of github.
>
> For example, there was an issue with ASAN in this commit:
>
> https://github.com/haproxy/haproxy/commit/bdee8ace814139771efa90cc200c67e7d9b72751
>
> I couldn't get a trace in the CI:
> https://github.com/haproxy/haproxy/actions/runs/8724600484/job/23936238899
>
> But I had no problem when testing it from my computer, I'm just doing a
> ` make reg-tests reg-tests/ssl/crt_store.vtc -- --debug` and have the
> ASAN output directly.
>
> Do you think we could achieve the same thing with github actions? I
> never saw an output from this asan.log file in the CI.
>
> --
> William Lallemand
>


Re: Changes in HAProxy 3.0's Makefile and build options

2024-04-14 Thread Илья Шипицин
Sat, Apr 13, 2024 at 15:26, Willy Tarreau :

> Hi Tristan,
>
> On Fri, Apr 12, 2024 at 07:38:18AM +, Tristan wrote:
> > Hi Willy,
> >
> > > On 11 Apr 2024, at 18:18, Willy Tarreau  wrote:
> > >
> > > Some distros simply found that stuffing their regular CFLAGS into
> > > DEBUG_CFLAGS or CPU_CFLAGS does the trick most of the time. Others use
> > > other combinations depending on the tricks they figured.
> >
> > Good to know I wasn't alone scratching my head about what came from
> where!
>
> Definitely.
>
> > > After this analysis, I figured that we needed to simplify all this, get
> > > back to what is more natural to use or more commonly found, without
> > > breaking everything for everyone. [...]
> >
> > These are very nice improvements indeed, but I admit that if (at least
> > partially) breaking backwards compatibility was the acceptable choice
> here,
>
> It should almost not break it otherwise it will indicate what changed
> and how to proceed.
>
> > I'd have hoped to see something like cmake rather than a makefile
> > refactoring.
>

I would like to hear about the real pain behind the idea of moving to cmake.


I suspect such pain exists; for example, I had to explicitly specify the Lua
options depending on the Linux distro (it was hard to autodetect Lua from the
Makefile, but it is easier with cmake).

Maybe we can deal with the pain and improve things using make :)
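
To illustrate what I mean about the Lua options, a hedged sketch (the include paths
and library names are distro-specific and purely illustrative):

```
# Debian/Ubuntu-style layout:
make -j$(nproc) TARGET=linux-glibc USE_LUA=1 \
     LUA_INC=/usr/include/lua5.4 LUA_LIB_NAME=lua5.4

# Fedora-style layout, where the library is simply "lua":
make -j$(nproc) TARGET=linux-glibc USE_LUA=1 \
     LUA_INC=/usr/include LUA_LIB_NAME=lua
```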



>
> That's orthogonal. cmake is an alternative to autoconf, not to make,
> you still run "make -j$(nproc)" once cmake is done.
>
> And such mechanisms are really really a pain to deal with, at every
> stage (development and user). They can be justified for ultra-complex
> projects but quite frankly, having to imagine not being able to flip
> an option without rebuilding everything, not having something as simple
> as "V=1" to re-run the failed file and see exactly what was tried,
> having to fight against the config generator all the time etc is really
> a no-go for me. I even remember having stopped using OpenCV long ago
> when it switched from autoconf to cmake because it turned something
> complicated to something out of control that would no longer ever build
> outside of the authors' environment. We could say whatever, like they
> did it wrong or anything else, but the fact is I'm not going to impose
> a similar pain to our users.
>
> In our case, we only need to set a list of files to be built, and pass
> a few -I/-L/-l while leaving the final choice to the user.
>
> > Actually, I'd thought a few times in the past about starting a
> discussion in
> > that direction but figured it would be inconceivable.
> >
> > I don't know how controversial it is, so the main two reasons I mention
> it are:
> > - generally better tooling (and IDE support) out of the box:
> options/flags
> >   discovery and override specifically tends to be quite a bit simpler as
> the
> >   boilerplate and conventions are mostly handled by default
> > - easier to parse final result: both of them look frankly awful, but the
> >   result of cmake setups is often a little easier to parse as it
> encourages a
> >   rather declarative style (can be done in gmake, but it is very much up
> to the
> >   maintainers to be extremely disciplined about it)
>
> The problem with tools like cmake is not to parse the output when it
> works but to figure how to force it to accept that *you* are the one who
> knows when it decides it will not do what you want. While a makefile can
> trivially be overridden or even fixed, good luck for guessing all the
> magic names of cmake variables that are missing and that may help you.
>
> I really do understand why some super complex projects involving git
> submodules and stuff like this would go in that direction, but otherwise
> it's just asking for trouble with little to no benefit.
>
> > Arguably, there's the downside of requiring an extra tool everyone might
> not
> > be deeply familiar with already, and cmake versions matter more than
> gmake
> > ones so I would worry about compatibility for distros of the GCC 4 era,
> but
> > it seemed to me like it's reasonably proven and spread by now to
> consider.
>
> It's not even a matter of compatibility with any gcc version, rather a
> compatibility with what developers are able to write and what users want
> to do if that was not initially imagined by the developers. Have you read
> already about all the horrors faced by users trying to use distcc or ccache
> with cmake, often having to run sed over their cmake files ? Some time ago,
> cmake even implemented yet another magic variable specifically to make this
> less painful. And cross-compilation is yet another entire topic. All of
> this just ends up in a situation where the build system becomes an entire
> project on its own, just for the sake of passing 6 variables to make in
> the end :-/  In the case of a regular makefile at least, 100% of the
> variables you may have to use are present in the makefile, you don't need
> to resort to random guesses by 

Re: [PATCH 0/1] CI: revert entropy hack

2024-04-13 Thread Илья Шипицин
It has been resolved on the image-generation side:
https://github.com/actions/runner-images/issues/9491

There is no harm in keeping it on our side as well, but we can drop it

On Fri, Apr 12, 2024, 18:55 Willy Tarreau  wrote:

> On Fri, Apr 12, 2024 at 12:42:51PM +0200, Ilya Shipitsin wrote:
> > ping :)
>
> Ah thanks for the reminder. I noticed it a few days ago and I wanted to
> ask you to please include a commit message explaining why it's no longer
> necessary. We don't need much, just to understand the rationale for the
> removal.
>
> If you just send me one or two human-readable sentences that can be
> copy-pasted in the message, I'm willing to do it myself to save you
> from resending.
>
> Thanks,
> Willy
>


Re: [PATCH 0/1] CI: revert entropy hack

2024-04-12 Thread Илья Шипицин
ping :)

Sat, Apr 6, 2024 at 15:38, Ilya Shipitsin :

> hack introduced in  3a0fc8641b1549b00cd3125107545b6879677801 might be
> reverted
>
> Ilya Shipitsin (1):
>   CI: revert kernel entropy introduced in
> 3a0fc8641b1549b00cd3125107545b6879677801
>
>  .github/workflows/vtest.yml | 11 ---
>  1 file changed, 11 deletions(-)
>
> --
> 2.44.0
>
>


Re: Changes in HAProxy 3.0's Makefile and build options

2024-04-11 Thread Илья Шипицин
Thu, Apr 11, 2024 at 19:18, Willy Tarreau :

> Hi all,
>
> after all the time where we've all been complaining about the difficulty
> to adjust CFLAGS during the build, I could tackle the problem for a first
> step in the right direction.
>
> First, let's start with a bit of history to explain the situation and why
> it was bad. Originally, a trivial makefile with very simple rules and a
> single object to build (haproxy.o) had just CFLAGS set to "-O2 -g" or
> something like that, and that was perfect. It was possible to just pass a
> different CFLAGS value on the "make" command line and be done with it.
>
> Options started to pile up, but in a way that remained manageable for a
> long time (e.g. add PCRE support, later dlmalloc), so CFLAGS was still the
> only thing to override if needed. With 32/64 bit variants and so on, we
> started to feel the need to split those CFLAGS into multiple pieces for
> more flexibility. But these pieces were still aggregated into CFLAGS so
> that those used to overriding it were still happy. This situation forced
> us not to break these precious CFLAGS that some users were relying on.
>
> And that went like this for a long time, though the definition of this
> CFLAGS variable became more and more complicated by inheriting from some
> automatic options. For example, in 3.0-dev7, CFLAGS is initialized to
> "$(ARCH_FLAGS) $(CPU_CFLAGS) $(DEBUG_CFLAGS) $(SPEC_CFLAGS)", i.e. it
> concatenates 4 variables (in apparence). The 4th one (SPEC_CFLAGS) is
> already a concatenation of a fixed one (WARN_CFLAGS) and a series of
> automatically detected ones (the rest of it). ARCH_FLAGS is set from a
> a fixed list of presets depending on the ARCH variable, and CPU_CFLAGS
> is set from a list of presets depending on the CPU variable. And the
> most beautiful is that CFLAGS alone is no longer sufficient since some
> of the USE_* options append their own values behind it, and we complete
> with $(TARGET_CFLAGS) $(SMALL_OPTS) $(DEFINE).
>
> Yeah I know that's ugly. We all know it. Whenever someone asks me "how can
> I enable -fsanitize=address because I'd like to run ASAN", I respond "hmmm
> it depends what options you already use and which ones are easiest to
> hack".
>
> Some distros simply found that stuffing their regular CFLAGS into
> DEBUG_CFLAGS or CPU_CFLAGS does the trick most of the time. Others use
> other combinations depending on the tricks they figured.
>
> After this analysis, I figured that we needed to simplify all this, get
> back to what is more natural to use or more commonly found, without
> breaking everything for everyone. What is certain however in the current
> situation, is that nobody is overriding CFLAGS since it's too rich, too
> complex and too unpredictable. So it was really time to reset it.
>
> Thus here's what was done:
>   - CFLAGS is now empty by default and can be passed some build options
> that are appended at the end of the automatic flags. This means that
> -Os, -ggdb3, -Wfoobar, -Wno-foobar, -I../secret-place/include etc
> will properly be honored without having to trick anything anymore.
> Thus for package maintainers, building with CFLAGS="$DISTRO_CFLAGS"
> should just get the work done.
>
>   - LDFLAGS is now empty by default and can be passed some build options
> that are prepended to the list of linker options.
>
>   - ARCH_FLAGS now defaults to -g and is passed to both the compiler and
> the linker. It may be overridden to change the word size (-m32/-m64),
> enable alternate debugging formats (-ggdb3), enable LTO (-flto),
> ASAN (-fsanitize=address) etc. It's in fact for flags that must be
> consistent between the compiler and the linker. It was not renamed
> since it was already there and used quite a bit already.
>
>   - CPU_CFLAGS was preserved for ease of porting but is empty by default.
> Some may find it convenient to pass their -march, -mtune etc there.
>
>   - CPU and ARCH are gone. Let's face it, only 2 of them were usable and
> no single maintainer will be crazy enough to use these options here
> and resort to another approach for other archs. However the previous
> values are mentioned in the doc as a hint about what's known to work
> well.
>
>   - OPT_CFLAGS was created with "-O2" and nothing else. As developers,
> we spend our time juggling between -O2 and -O0 depending on whether
> we're testing or debugging. Some embedded distros also have options
> to pass -O2 or -Os to choose between fast and small, and that may
> fit very well there.
>
>   - STD_CFLAGS contains a few flags related to standards compliance.
> It is currently set to -fwrapv, -fno-strict-overflow and/or empty
> depending on what the compiler prefers. It's important not to touch
> it unless you know exactly what you're doing, and previously these
> options used to be lost by accident when overriding other ones.
>
>   - WARN_CFLAGS is now set to the list of warnings to 
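
In practice, a hedged sketch of how the variables described above are meant to be
combined on the make command line (all values are illustrative):

```
# CFLAGS:     the packager's/user's own flags, appended after the automatic ones
# ARCH_FLAGS: flags shared by compiler and linker (-m32, -flto, -fsanitize=address, ...)
# OPT_CFLAGS: -O2 by default, -O0 while debugging
# CPU_CFLAGS: optional -march/-mtune style tuning
make -j$(nproc) TARGET=linux-glibc \
     CFLAGS="$DISTRO_CFLAGS" ARCH_FLAGS="-ggdb3" OPT_CFLAGS="-O2" CPU_CFLAGS="-march=x86-64-v2"
```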

Re: haproxy backend server template service discovery questions

2024-04-08 Thread Илья Шипицин
Am I right that you consider this a documentation bug?

Mon, Apr 8, 2024 at 10:44, Andrii Ustymenko :

> Yes, for the 1) question indeed.
> Basically I have tested with local "out of sync" custom nameserver. And I
> was observing some inconsistent results of the backend-servers table. That
> led to this question.
>
> Most of the time I was seeing the state of only from the local nameserver.
> However sometimes I have seen the "merged" state where all the replies
> existed together in the table.
>
> It was also observed that amount of the requests made by haproxy to all
> nameservers is the same even though the local one normally replies faster.
>
> And sorry, forgot to mention we are running haproxy version 2.8.7
> On 08/04/2024 10:31, Илья Шипицин wrote:
>
> and particularly your question is "does HAProxy merge all responses or
> pick the first one or use some other approach" ?
>
> Mon, Apr 8, 2024 at 10:23, Andrii Ustymenko :
>
>> I guess indeed it is not a case of consul-template specifically, but more
>> of rendered templates and how haproxy maintains it.
>> On 06/04/2024 20:15, Илья Шипицин wrote:
>>
>> Consul template is something done by consul itself, after that
>> haproxy.conf is rendered
>>
>> Do you mean "how haproxy deals with rendered template"?
>>
>> On Fri, Apr 5, 2024, 15:02 Andrii Ustymenko 
>> wrote:
>>
>>> Dear list!
>>>
>>> My name is Andrii. I work for Adyen. We are using haproxy as our main
>>> software loadbalancer at quite large scale.
> >> As of now our main use-case for backends routing is based on
>>> server-template and dynamic dns from consul as service discovery. Below
>>> is the example of simple backend configuration:
>>>
>>> ```
>>> backend example
>>>balance roundrobin
>>>server-template application 10 _application._tcp.consul resolvers
>>> someresolvers init-addr last,libc,none resolve-opts allow-dup-ip
>>> resolve-prefer ipv4 check ssl verify none
>>> ```
>>>
>>> and in global configuration
>>>
>>> ```
>>> resolvers someresolvers
>>>nameserver ns1 10.10.10.10:53
>>>nameserver ns2 10.10.10.11:53
>>> ```
>>>
>>> As we see haproxy will create internal table for backends with some
>>> be_id and be_name=application and allocate 10 records for each server
>>> with se_id from 1 to 10. Then those records get populated and updated
>>> with the data from resolvers.
>>> I would like to understand couple of things with regards to this
>>> structure and how it works, which I could not figure out myself from the
>>> source code:
>>>
>>> 1) In tcpdump for dns queries we see that haproxy asynchronously polls
>>> all the nameservers simultaneously. For instance:
>>>
>>> ```
>>> 11:06:17.587798 eth2  Out ifindex 4 aa:aa:aa:aa:aa:aa ethertype IPv4
>>> (0x0800), length 108: 10.10.10.50.24050 > 10.10.10.10.53: 34307+ [1au]
>>> SRV? _application._tcp.consul. (60)
>>> 11:06:17.587802 eth2  Out ifindex 4 aa:aa:aa:aa:aa:aa ethertype IPv4
>>> (0x0800), length 108: 10.10.10.50.63155 > 10.10.10.11.53: 34307+ [1au]
>>> SRV? _application._tcp.consul. (60)
>>> 11:06:17.588097 eth2  In  ifindex 4 ff:ff:ff:ff:ff:ff ethertype IPv4
>>> (0x0800), length 205: 10.10.10.10.53 > 10.10.10.50.24050: 2194 2/0/1 SRV
>>> 0a5099e5.addr.consul.:25340 1 1, SRV 0a509934.addr.consul.:26010 1 1
>>> (157)
>>> 11:06:17.588097 eth2  In  ifindex 4 ff:ff:ff:ff:ff:ff ethertype IPv4
>>> (0x0800), length 205: 10.10.10.11.53 > 10.10.10.50.63155: 2194 2/0/1 SRV
>>> 0a5099e5.addr.consul.:25340 1 1, SRV 0a509934.addr.consul.:26010 1 1
>>> (157)
>>> ```
>>>
>>> Both nameservers reply with the same response. But what if they are out
>>> of sync? Let's say one says: server1, server2 and the second one says
>>> server2, server3? So far testing this locally - I see sometimes the
>>> reply overrides the table, but sometimes it seems to just gets merged
>>> with the rest.
>>>
>>> 2) Each entry from SRV reply will be placed into the table under
>>> specific se_id. Most of the times that placement won't change. So, for
>>> the example above the most likely 0a5099e5.addr.consul. and
>>> 0a509934.addr.consul. will have se_id 1 and 2 respectively. However
>>> sometimes we have the following scenario:
>>>
> 1. We administratively disable the server (drain traffic)

Re: haproxy backend server template service discovery questions

2024-04-08 Thread Илья Шипицин
and particularly your question is "does HAProxy merge all responses or pick
the first one or use some other approach" ?

Mon, Apr 8, 2024 at 10:23, Andrii Ustymenko :

> I guess indeed it is not a case of consul-template specifically, but more
> of rendered templates and how haproxy maintains it.
> On 06/04/2024 20:15, Илья Шипицин wrote:
>
> Consul template is something done by consul itself, after that
> haproxy.conf is rendered
>
> Do you mean "how haproxy deals with rendered template"?
>
> On Fri, Apr 5, 2024, 15:02 Andrii Ustymenko 
> wrote:
>
>> Dear list!
>>
>> My name is Andrii. I work for Adyen. We are using haproxy as our main
>> software loadbalancer at quite large scale.
> > As of now our main use-case for backends routing is based on
>> server-template and dynamic dns from consul as service discovery. Below
>> is the example of simple backend configuration:
>>
>> ```
>> backend example
>>balance roundrobin
>>server-template application 10 _application._tcp.consul resolvers
>> someresolvers init-addr last,libc,none resolve-opts allow-dup-ip
>> resolve-prefer ipv4 check ssl verify none
>> ```
>>
>> and in global configuration
>>
>> ```
>> resolvers someresolvers
>>nameserver ns1 10.10.10.10:53
>>nameserver ns2 10.10.10.11:53
>> ```
>>
>> As we see haproxy will create internal table for backends with some
>> be_id and be_name=application and allocate 10 records for each server
>> with se_id from 1 to 10. Then those records get populated and updated
>> with the data from resolvers.
>> I would like to understand couple of things with regards to this
>> structure and how it works, which I could not figure out myself from the
>> source code:
>>
>> 1) In tcpdump for dns queries we see that haproxy asynchronously polls
>> all the nameservers simultaneously. For instance:
>>
>> ```
>> 11:06:17.587798 eth2  Out ifindex 4 aa:aa:aa:aa:aa:aa ethertype IPv4
>> (0x0800), length 108: 10.10.10.50.24050 > 10.10.10.10.53: 34307+ [1au]
>> SRV? _application._tcp.consul. (60)
>> 11:06:17.587802 eth2  Out ifindex 4 aa:aa:aa:aa:aa:aa ethertype IPv4
>> (0x0800), length 108: 10.10.10.50.63155 > 10.10.10.11.53: 34307+ [1au]
>> SRV? _application._tcp.consul. (60)
>> 11:06:17.588097 eth2  In  ifindex 4 ff:ff:ff:ff:ff:ff ethertype IPv4
>> (0x0800), length 205: 10.10.10.10.53 > 10.10.10.50.24050: 2194 2/0/1 SRV
>> 0a5099e5.addr.consul.:25340 1 1, SRV 0a509934.addr.consul.:26010 1 1 (157)
>> 11:06:17.588097 eth2  In  ifindex 4 ff:ff:ff:ff:ff:ff ethertype IPv4
>> (0x0800), length 205: 10.10.10.11.53 > 10.10.10.50.63155: 2194 2/0/1 SRV
>> 0a5099e5.addr.consul.:25340 1 1, SRV 0a509934.addr.consul.:26010 1 1 (157)
>> ```
>>
>> Both nameservers reply with the same response. But what if they are out
>> of sync? Let's say one says: server1, server2 and the second one says
>> server2, server3? So far testing this locally - I see sometimes the
>> reply overrides the table, but sometimes it seems to just gets merged
>> with the rest.
>>
>> 2) Each entry from SRV reply will be placed into the table under
>> specific se_id. Most of the times that placement won't change. So, for
>> the example above the most likely 0a5099e5.addr.consul. and
>> 0a509934.addr.consul. will have se_id 1 and 2 respectively. However
>> sometimes we have the following scenario:
>>
> >> 1. We administratively disable the server (drain traffic) with the next
>> command:
>>
>> ```
>> echo "set server example/application1 state maint" | nc -U
>> /var/lib/haproxy/stats
>> ```
>>
>> the MAINT flag will be added to the record with se_id 1
>>
>> 2. Instance of application goes down and gets de-registered from consul,
>> so also evicted from srv replies and out of discovery of haproxy.
>>
>> 3. Instance of application goes up and gets registered by consul and
>> discovered by haproxy, but haproxy allocates different se_id for it.
>> Haproxy healthchecks will control the traffic to it in this case.
>>
>> 4. We will still have se_id 1 with MAINT flag and application instance
>> dns record placed into different se_id.
>>
>> The problem comes that any new discovered record which get placed into
>> se_id 1 will never be active until either command:
>>
>> ```
>> echo "set server example/application1 state ready" | nc -U
>> /var/lib/haproxy/stats
>> ```
>>
>> executed or haproxy gets reloaded without state file. With this pattern
>> we ba

Re: [ANNOUNCE] haproxy-3.0-dev7

2024-04-07 Thread Илья Шипицин
Sat, Apr 6, 2024 at 17:53, Willy Tarreau :

> Hi,
>
> HAProxy 3.0-dev7 was released on 2024/04/06. It added 73 new commits
> after version 3.0-dev6.
>
> Among the changes that stand out in this version, here's what I'm seeing:
>
>   - improvements to the CLI internal API so that the various keyword
> handlers now have their own buffers. This might possibly uncover
> a few long-lasting bugs but over time will improve the reliability
> and avoid the occasional bugs with connections never closing or
> spinning loops.
>
>   - we no longer depend on libsystemd. Not only this will avoid pulling
> in tons of questionable dependencies, this also allows to enable
> USE_SYSTEMD by default (it's only done on linux-glibc though), thus
> reducing config combinations.
>
>   - log load-balancing internals were simplified. The very first version
> (never merged) didn't rely on backends, thus used to implement its
> own servers and load-balancing. It was finally remapped to backends
> and real servers, but the LB algorithms had remained specific, with
> some exceptions at various places in the setup code to handle them.
> Now the backends have switched to regular LB algorithms, which not
> only helps for code maintenance, but also exposes all table-based
> algorithms to the log backends with support for weights, and also
> exposed the "sticky" algorithm to TCP and HTTP backends. It's one of
> these changes which remove code while adding features :-)
>
>   - Linux capabilities are now properly checked so that haproxy won't
> complain about permissions for example when used in transparent mode,
> if capabilities are sufficient. In addition, file-system capabilities
> set on the binary are also supported now.
>
>   - stick-tables are now sharded over multiple tree heads each with their
> own locks. This significantly reduces locking contention on systems
> with many threads (gains of ~6x measured on a 80-thread systems). In
> addition, the locking could be reduced even with low thread counts,
> particulary when using peers, where the performance could be doubled.
>
>   - cookies are now permitted for dynamically added servers. The only
> reason they were not previously was that it required to audit the
> whole cookie initialization/release code to figure whether it had
> corner cases or not. With that audit now done, the cookies could
> be allowed. In addition, dynamic cookies were supported a bit by
> accident with a small defect (one had to set the address again to
> index the server), and are now properly supported.
>
>   - the "enabled" keyword used to be silently ignored when adding a
> dynamic server. Now it's properly rejected to avoid confusing
> scripts. We don't know yet if it will be supported later or not,
> so better stay safe.
>
>   - the key used by consistent hash to map to a server used to always
> be the server's id (either explicit or implicit, position-based).
> Now the "hash-key" directive will also allow to use the server's
> address or address+port for this. The benefit is that multiple LBs
> with servers in a different order will still send the same hashes
> to the same servers.
>
>   - a new "guid" keyword was added for servers, listeners and proxies.
> The purpose will be to make it possible for external APIs to assign
> a globally unique object identifier to each of them in stats dumps
> or CLI accesses, and to later reliably recognize a server upon
> reloads. For now the identifier is not exploited.
>

I have a question about the UUID version: it is not specified. Is it UUID
version 6?


>
>   - QUIC now supports the HyStart++ (RFC9406) alternative to slowstart
> with the Cubic algorithm. It's supposed to show better recovery
> patterns. More testing is needed before enabling it by default.
>
>   - a few bug fixes (truncated responses when splicing, QUIC crashes
> on strict-alignment platforms, redispatch 0 didn't work, more OCSP
> update fixes, proper reporting of too big CLI payload, etc).
>
>   - some build fixes, code cleanups, CI updates, doc updates, and
> cleanups of regtests.
>
> I think that's all. It's currently up and running on haproxy.org. I'd
> suspect that with the many stable updates yesterday, we may see less
> test reports on 3.0-dev7, but please don't forget to test it if you
> can, that helps a lot ;-)
>
> Please find the usual URLs below :
>Site index   : https://www.haproxy.org/
>Documentation: https://docs.haproxy.org/
>Wiki : https://github.com/haproxy/wiki/wiki
>Discourse: https://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Issue tracker: https://github.com/haproxy/haproxy/issues
>Sources  : https://www.haproxy.org/download/3.0/src/
>Git repository   : https://git.haproxy.org/git/haproxy.git/

Re: haproxy backend server template service discovery questions

2024-04-06 Thread Илья Шипицин
Consul-template is something done by Consul itself; after that, haproxy.conf
is rendered.

Do you mean "how haproxy deals with the rendered template"?

On Fri, Apr 5, 2024, 15:02 Andrii Ustymenko 
wrote:

> Dear list!
>
> My name is Andrii. I work for Adyen. We are using haproxy as our main
> software loadbalancer at quite large scale.
> As of now our main use-case for backends routing is based on
> server-template and dynamic dns from consul as service discovery. Below
> is the example of simple backend configuration:
>
> ```
> backend example
>balance roundrobin
>server-template application 10 _application._tcp.consul resolvers
> someresolvers init-addr last,libc,none resolve-opts allow-dup-ip
> resolve-prefer ipv4 check ssl verify none
> ```
>
> and in global configuration
>
> ```
> resolvers someresolvers
>nameserver ns1 10.10.10.10:53
>nameserver ns2 10.10.10.11:53
> ```
>
> As we see haproxy will create internal table for backends with some
> be_id and be_name=application and allocate 10 records for each server
> with se_id from 1 to 10. Then those records get populated and updated
> with the data from resolvers.
> I would like to understand couple of things with regards to this
> structure and how it works, which I could not figure out myself from the
> source code:
>
> 1) In tcpdump for dns queries we see that haproxy asynchronously polls
> all the nameservers simultaneously. For instance:
>
> ```
> 11:06:17.587798 eth2  Out ifindex 4 aa:aa:aa:aa:aa:aa ethertype IPv4
> (0x0800), length 108: 10.10.10.50.24050 > 10.10.10.10.53: 34307+ [1au]
> SRV? _application._tcp.consul. (60)
> 11:06:17.587802 eth2  Out ifindex 4 aa:aa:aa:aa:aa:aa ethertype IPv4
> (0x0800), length 108: 10.10.10.50.63155 > 10.10.10.11.53: 34307+ [1au]
> SRV? _application._tcp.consul. (60)
> 11:06:17.588097 eth2  In  ifindex 4 ff:ff:ff:ff:ff:ff ethertype IPv4
> (0x0800), length 205: 10.10.10.10.53 > 10.10.10.50.24050: 2194 2/0/1 SRV
> 0a5099e5.addr.consul.:25340 1 1, SRV 0a509934.addr.consul.:26010 1 1 (157)
> 11:06:17.588097 eth2  In  ifindex 4 ff:ff:ff:ff:ff:ff ethertype IPv4
> (0x0800), length 205: 10.10.10.11.53 > 10.10.10.50.63155: 2194 2/0/1 SRV
> 0a5099e5.addr.consul.:25340 1 1, SRV 0a509934.addr.consul.:26010 1 1 (157)
> ```
>
> Both nameservers reply with the same response. But what if they are out
> of sync? Let's say one says: server1, server2 and the second one says
> server2, server3? So far testing this locally - I see sometimes the
> reply overrides the table, but sometimes it seems to just gets merged
> with the rest.
>
> 2) Each entry from SRV reply will be placed into the table under
> specific se_id. Most of the times that placement won't change. So, for
> the example above the most likely 0a5099e5.addr.consul. and
> 0a509934.addr.consul. will have se_id 1 and 2 respectively. However
> sometimes we have the following scenario:
>
> > 1. We administratively disable the server (drain traffic) with the next
> command:
>
> ```
> echo "set server example/application1 state maint" | nc -U
> /var/lib/haproxy/stats
> ```
>
> the MAINT flag will be added to the record with se_id 1
>
> 2. The application instance goes down and gets de-registered from Consul,
> so it is also evicted from SRV replies and out of haproxy's discovery.
>
> 3. The application instance goes up, gets registered by Consul and
> discovered by haproxy, but haproxy allocates a different se_id for it.
> Haproxy health checks will control the traffic to it in this case.
>
> 4. We will still have se_id 1 with the MAINT flag and the application
> instance's DNS record placed under a different se_id.
>
> The problem is that any newly discovered record which gets placed into
> se_id 1 will never be active until either the command:
>
> ```
> echo "set server example/application1 state ready" | nc -U
> /var/lib/haproxy/stats
> ```
>
> is executed or haproxy gets reloaded without a state file. With this
> pattern we basically have persistent "record pollution" due to operations
> made directly on the control socket.
>
> I am not sure whether there is anything to do about this. Maybe haproxy
> could cache not only the state of the se_id but also the record associated
> with it, and re-schedule health checks if that record changes. Or, instead
> of integer ids, it could use hashed ids based on the DNS names/IP addresses
> of the discovered records; in that case the binding would always happen in
> the same slot.
>
> Thanks in advance!
>
> --
>
> Best regards,
>
> Andrii Ustymenko
>
>
>


Re: [PATCH 0/1] CI: additional ASAN smoke tests

2024-03-04 Thread Илья Шипицин
ping :)

Sat, 17 Feb 2024 at 20:43, Ilya Shipitsin :

>
>
> Ilya Shipitsin (1):
>   CI: run more smoke tests on config syntax to check memory related
> issues
>
>  .github/workflows/vtest.yml | 4 
>  1 file changed, 4 insertions(+)
>
> --
> 2.43.2
>
>


Re: WolfSSL builds for use with HAProxy

2024-02-10 Thread Илья Шипицин
Sat, 10 Feb 2024 at 00:00, Tristan :

> Hi Ilya,
>
> On 09/02/2024 20:31, Илья Шипицин wrote:
> > I run QUIC Interop from time to time, WolfSSL shows the best
> > compatibility compared to LibreSSL and aws-lc.
> > it really looks like a winner today
>
> And in comparison to current QuicTLS?
>

QuicTLS

Run took 0:40:46.016826
+-++
| |haproxy |
+-++
| quic-go | HDCLRC20MSRZ3BUAL2C1C2 |
| |  EV2   |
| |  L16   |
+-++
+-+--+
| |   haproxy|
+-+--+
| quic-go | G: 8879 (± 46) kbps  |
| | C: 5504 (± 120) kbps |
+-+--+

wolfSSL

Run took 0:43:09.272822
+-+-+
| |   haproxy   |
+-+-+
| quic-go | HDCLRC20MSR3BUAL2C2 |
| | EV2 |
| |ZL1C16   |
+-+-+
+-+--+
| |   haproxy|
+-+--+
| quic-go | G: 8508 (± 152) kbps |
| | C: 5262 (± 441) kbps |
+-+--+

This is the combination of the quic-go client and haproxy.
wolfSSL passes all tests except "TestCaseHandshakeLoss" and
"TestCaseZeroRTT".

I configure wolfSSL with "./configure --enable-haproxy --enable-quic"; maybe
there are some dedicated flags for 0-RTT, I haven't checked yet.
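
If it is about 0-RTT specifically, something like the following might be worth
trying; --enable-earlydata and --enable-session-ticket are existing wolfSSL
configure switches, but whether they are enough to pass the interop
"TestCaseZeroRTT" case is only a guess on my side:

```
./configure --enable-haproxy --enable-quic --enable-earlydata --enable-session-ticket
make && sudo make install
```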


>
> > I'm afraid in practice it works in a different way.
> > First, you install WolfSSL to prod, next INSTALL/wiki got updated :)
>
> I appreciate the joke... :) but more seriously I am not very
> knowledgeable when it comes to low-level programming or the associated
> tuning/performance-testing.
> So even if I deployed it, my opinion on that topic is unlikely to be the
> best (besides bug reports anyway).
>
> Either way, for now I'm waiting on OCSP support first (hi William,
> Rémi); hopefully someone else figures out the best build flags by the
> time that's dealt with.
>
> Tristan
>


Re: WolfSSL builds for use with HAProxy

2024-02-09 Thread Илья Шипицин
Thu, 8 Feb 2024 at 15:49, Tristan :

> Hi all,
>
> With the ever-increasing threat of one day needing to give up on OpenSSL
> 1.1.1 (whenever the next bad CVE is found on QuicTLS 1.1.1w,
> essentially) I was looking at alternatives a bit closer.
>
> Based on the wiki,
> https://github.com/openssl/openssl/issues/20286#issuecomment-1527869072,
> and that it has support for other features I'm interested in (notably
> ECH), WolfSSL seems by far my best bet at the moment.
>

I run QUIC Interop from time to time; WolfSSL shows the best compatibility
compared to LibreSSL and aws-lc.
It really looks like a winner today.


>
> However, given that almost everything is compile time and defaults focus
> on suitability for constrained embedded environments rather than best
> big-proxy-server oriented performance, does anyone have pointers on what
> flags are important/traps/etc?
>
> Besides the getrandom thing, HAProxy's INSTALL/wiki only vaguely mention
> that such build-time tuning is required, so I'm hoping someone might
> have gone through that already.
>

I'm afraid in practice it works in a different way.
First, you install WolfSSL to prod, and then INSTALL/the wiki gets updated :)


>
> This one is a bit extra, but considering that aiming for bleeding edge
> with WolfSSL is not entirely compatible with how most distros work (ie
> even if it was packaged, it's likely going to be perpetually quite far
> behind), what does the future look like in that regard from the distros'
> side?
>
> Thanks,
> Tristan
>
>


Re: [PATCH 0/3] fix speling remnants, enable spel chek on push

2024-01-26 Thread Илья Шипицин
Fri, 26 Jan 2024 at 20:01, Willy Tarreau :

> On Fri, Jan 26, 2024 at 05:30:31PM +0100, Willy Tarreau wrote:
> > On Wed, Jan 24, 2024 at 02:26:13PM +0100, Ilya Shipitsin wrote:
> > > it is very fast check, should not affect developer velocity much
> >
> > OK now pushed, thank you Ilya!
>
> Ilya, I reverted the last one (automatic check on push) for now, because
> it's making all pushes fail for reasons that neither the patch's author
> nor the committer can validate before pushing, which is extremly
> frustrating and turning the CI into just noise since it's always red
> now.
>

On the one hand, this validation is no different from other "pre-checkin"
validation, i.e. a developer can push to their own fork first (it could even
be automatic if pull requests were a valid way of contribution).

On the other hand, we can go back to cron; it will accumulate some
"spell-checking debt", but we have lived with that for years, np.


>
> We'll need to think a bit more about how to address this, but let's get
> back to the state where error reports correspond to functional or technical
> issues for now.
>
> Willy
>


Re: HAProxy Technologies NERC CIP 13 Vendor Questionnaire

2024-01-23 Thread Илья Шипицин
How can HAProxy be related, for example, to "NERC requires CORE to revoke
access within 24 hours when remote or onsite access is no longer needed by
your personnel to CORE systems or facilities."?



Tue, 23 Jan 2024 at 00:58, Robert Dillabough :

> Hi Support,
>
> For NERC compliance, CORE needs to perform a CIP-013 Cyber Security Supply
> Chain Risk Assessment on *HAProxy Technologies*. Attached is CIP-013
> Questionnaire. If you could fill it out to the best of your ability and
> return it to me that would be much appreciated. Once returned, the
> Compliance Team here at CORE will review your answers. CIP-013’s purpose is
> to mitigate cyber security risks to the reliable operation of the Bulk
> Electric System (BES) by implementing security controls for supply chain
> risk management of BES Cyber Systems. If I need to email this to another
> person or group, can you please direct me to them. If you have any
> questions, please feel free to contact me, all my information is in my
> signature below.
>
> Thanks,
>
> *Robbie Dillabough*
>
> Electrical Engineer - Operations
>
>
>
> 800.332.9540 *DIRECT*
>
> 720.733.5672 *MAIN*
>
> 303.880.9912 *MOBILE*
>
> *rdillabo...@core.coop *
>


Re: Exchange services

2023-12-13 Thread Илья Шипицин
Good idea, I will try that

On Wed, Dec 13, 2023, 22:52 Henning Svane  wrote:

> Hi
>
>
>
> Have you tried to test from
>
> Microsoft Remote Connectivity Analyzer
> <https://testconnectivity.microsoft.com/tests/exchange>
>
>
>
> Here you can see what error there is in the connection.
>
> Maybe that could be usable information for debugging
>
>
>
> Regards
>
> Henning
>
>
>
> *From:* Илья Шипицин 
> *Sent:* 13 December 2023 22:37
> *To:* Dario Girella 
> *Cc:* HAProxy 
> *Subject:* Re: exchange services
>
>
>
> It would be interesting to bisect on 2.9
>
>
>
> On Wed, Dec 13, 2023, 20:24 Dario Girella  wrote:
>
> Hello,
>
> I just upgraded my haproxy version from 2.8.5 to 2.9; all seems fine, but I
> receive an error from Outlook when trying to configure a mailbox via
> autodiscover.
>
> There is also a problem opening OWA.
>
> Did something change, or is there something to check?
>
> I reverted back to 2.8.5 and all is fine.
>
>
>
> Regards
>
>
>
> Dario
>
>


Re: exchange services

2023-12-13 Thread Илья Шипицин
It would be interesting to bisect on 2.9
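
A rough sketch of what I have in mind, using the versions from the report
(v2.8.5 known good, v2.9.0 known bad); the build flags and the reproduction
step are placeholders:

```
git bisect start v2.9.0 v2.8.5
# at each step suggested by git:
make -j"$(nproc)" TARGET=linux-glibc USE_OPENSSL=1
# reproduce the autodiscover/OWA failure against the freshly built binary, then:
git bisect good    # or: git bisect bad
```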

On Wed, Dec 13, 2023, 20:24 Dario Girella  wrote:

> Hello,
>
> I just upgraded my haproxy version from 2.8.5 to 2.9; all seems fine, but I
> receive an error from Outlook when trying to configure a mailbox via
> autodiscover.
>
> There is also a problem opening OWA.
>
> Did something change, or is there something to check?
>
> I reverted back to 2.8.5 and all is fine.
>
>
>
> Regards
>
>
>
> Dario
>


Re: [PATCH 1/1] CI: switch aws-lc builds to "latest" semantic

2023-11-23 Thread Илья Шипицин
Thu, 23 Nov 2023 at 22:18, William Lallemand :

> Hi Ilya,
>
> On Thu, Nov 23, 2023 at 06:57:52PM +0100, Ilya Shipitsin wrote:
> > for development branches let's use "latest" and fixed for stable
> >
> > LibreSSL-3.6.0 had some regression, it was fixed in 3.6.1, let us
> > switch back to the latest LibreSSL available
>
>
> I think you made a mistake, doesn't seem related to libreSSL at all.
>
> > ---
> >  .github/matrix.py | 8 +++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/.github/matrix.py b/.github/matrix.py
> > index b5a971c5a..2d1831a4d 100755
> > --- a/.github/matrix.py
> > +++ b/.github/matrix.py
> > @@ -195,7 +195,6 @@ def main(ref_name):
> >  "OPENSSL_VERSION=1.1.1s",
> >  "QUICTLS=yes",
> >  "WOLFSSL_VERSION=5.6.4",
> > -"AWS_LC_VERSION=1.16.0",
> >  # "BORINGSSL=yes",
> >  ]
> >
> > @@ -203,6 +202,11 @@ def main(ref_name):
> >  ssl_versions = ssl_versions + [
> >  "OPENSSL_VERSION=latest",
> >  "LIBRESSL_VERSION=latest",
> > +"AWS_LC_VERSION=latest",
> > +]
> > +else: # stable branch
> > +ssl_versions = ssl_versions + [
> > +"AWS_LC_VERSION=1.17.3",
> >  ]
> >
> >  for ssl in ssl_versions:
> > @@ -213,6 +217,8 @@ def main(ref_name):
> >  flags.append("USE_OPENSSL_WOLFSSL=1")
> >  if "AWS_LC" in ssl:
> >  flags.append("USE_OPENSSL_AWSLC=1")
> > +if "latest" in ssl:
> > +ssl = determine_latest_aws_lc(ssl)
> >  if ssl != "stock":
> >  flags.append("SSL_LIB=${HOME}/opt/lib")
> >  flags.append("SSL_INC=${HOME}/opt/include")
>
>
> Well, the idea was to build the "latest" aws-lc outside the push CI, so
> we are already doing this here:
>
> http://github.com/haproxy/haproxy/blob/master/.github/workflows/aws-lc.yml
>
> I'm not really confortable with having everything in "latest" in the
> master in fact, we already have the "openssl-3.2.0-*"
> builds for a while without even testing 3.1 anymore, and I didn't
> noticed.
>


In theory we can do it like that.

We can pin openssl=3.2.0beta1 and dynamically check during the build whether
it still resolves to the latest release; if not, we fail the build.
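
A rough sketch of such a check (the repository URL, tag filter and pinned
value are only illustrative, and alpha/beta tag ordering under sort -V would
need refinement):

```
PIN="openssl-3.2.0-beta1"
LATEST=$(git ls-remote --tags --refs https://github.com/openssl/openssl.git "openssl-3*" \
         | awk -F/ '{print $NF}' | sort -V | tail -n 1)
if [ "$LATEST" != "$PIN" ]; then
    echo "pinned $PIN no longer resolves to the latest upstream tag ($LATEST)" >&2
    exit 1
fi
```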


>
> That's a problem, maybe we should put the "latest" builds in a daily
> build so it can evolve on its own without impacting the dev.
>
> Having a library which change its version between 2 pushes can be quite
> confusing, even more if the library broke something, usually you want to
> test your code when you push in master, not the libraries!
>
> For example we could have had build breakage when switching
> automatically to 3.2-alpha them 3.2-beta etc.
>
> But since we didn't had any problem for now, maybe we could just try it,
> it can be reverted easily anyway...
>
> --
> William Lallemand
>


Re: [PATCH 1/1] CI: switch aws-lc builds to "latest" semantic

2023-11-23 Thread Илья Шипицин
Thu, 23 Nov 2023 at 22:18, William Lallemand :

> Hi Ilya,
>
> On Thu, Nov 23, 2023 at 06:57:52PM +0100, Ilya Shipitsin wrote:
> > for development branches let's use "latest" and fixed for stable
> >
> > LibreSSL-3.6.0 had some regression, it was fixed in 3.6.1, let us
> > switch back to the latest LibreSSL available
>
>
> I think you made a mistake, doesn't seem related to libreSSL at all.
>

It's a copy-paste error, sorry.


>
> > ---
> >  .github/matrix.py | 8 +++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/.github/matrix.py b/.github/matrix.py
> > index b5a971c5a..2d1831a4d 100755
> > --- a/.github/matrix.py
> > +++ b/.github/matrix.py
> > @@ -195,7 +195,6 @@ def main(ref_name):
> >  "OPENSSL_VERSION=1.1.1s",
> >  "QUICTLS=yes",
> >  "WOLFSSL_VERSION=5.6.4",
> > -"AWS_LC_VERSION=1.16.0",
> >  # "BORINGSSL=yes",
> >  ]
> >
> > @@ -203,6 +202,11 @@ def main(ref_name):
> >  ssl_versions = ssl_versions + [
> >  "OPENSSL_VERSION=latest",
> >  "LIBRESSL_VERSION=latest",
> > +"AWS_LC_VERSION=latest",
> > +]
> > +else: # stable branch
> > +ssl_versions = ssl_versions + [
> > +"AWS_LC_VERSION=1.17.3",
> >  ]
> >
> >  for ssl in ssl_versions:
> > @@ -213,6 +217,8 @@ def main(ref_name):
> >  flags.append("USE_OPENSSL_WOLFSSL=1")
> >  if "AWS_LC" in ssl:
> >  flags.append("USE_OPENSSL_AWSLC=1")
> > +if "latest" in ssl:
> > +ssl = determine_latest_aws_lc(ssl)
> >  if ssl != "stock":
> >  flags.append("SSL_LIB=${HOME}/opt/lib")
> >  flags.append("SSL_INC=${HOME}/opt/include")
>
>
> Well, the idea was to build the "latest" aws-lc outside the push CI, so
> we are already doing this here:
>
> http://github.com/haproxy/haproxy/blob/master/.github/workflows/aws-lc.yml
>
> I'm not really confortable with having everything in "latest" in the
> master in fact, we already have the "openssl-3.2.0-*"
> builds for a while without even testing 3.1 anymore, and I didn't
> noticed.
>
> That's a problem, maybe we should put the "latest" builds in a daily
> build so it can evolve on its own without impacting the dev.
>
> Having a library which change its version between 2 pushes can be quite
> confusing, even more if the library broke something, usually you want to
> test your code when you push in master, not the libraries!
>
> For example we could have had build breakage when switching
> automatically to 3.2-alpha them 3.2-beta etc.
>
> But since we didn't had any problem for now, maybe we could just try it,
> it can be reverted easily anyway...
>

Accidental breakage is not a big issue: you can always go back to a fixed
version if you want to.

We should definitely try it. If we find a better way later, we'll switch to it.


>
> --
> William Lallemand
>


Re: CVE-2023-44487 and haproxy-1.8

2023-10-16 Thread Илья Шипицин
Does 1.8 support http/2?
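
For what it's worth, a quick way to check whether a given haproxy frontend
actually negotiates HTTP/2 (the address and port are placeholders):

```
curl -sk --http2 -o /dev/null -w '%{http_version}\n' https://127.0.0.1:443/
```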

On Mon, Oct 16, 2023, 18:58 Ryan O'Hara  wrote:

> Hi all.
>
> I read the most recent HAProxy Newsletter, specifically the article "HAProxy
> is Not Affected by the HTTP/2 Rapid Reset Attack" by Nick Ramirez [1].
> This article states that HAProxy versions 1.9 and later are *not* affected,
> which is great. This implies that haproxy-1.8 *is* affected, but it also
> doesn't come right out and say that. I understand haproxy-1.8 is EOL, but
> do we know for certain whether haproxy-1.8 is affected or not? Asking for a
> reason.
>
> And shout-out to Nick for writing such a great article! Thank you, Nick!
>
> Ryan
>
> [1]
> https://www.haproxy.com/blog/haproxy-is-not-affected-by-the-http-2-rapid-reset-attack-cve-2023-44487
>


Re: [PATCH 1/1] CI: cirrus-ci: display gdb bt if any

2023-09-21 Thread Илья Шипицин
ping :)

Fri, 8 Sep 2023 at 22:57, Ilya Shipitsin :

> previously, if test process crashes (either BUG_ON or segfault), no
> coredump were collected and analysed
> ---
>  .cirrus.yml | 7 ++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/.cirrus.yml b/.cirrus.yml
> index 2993b943a..4bf3fb672 100644
> --- a/.cirrus.yml
> +++ b/.cirrus.yml
> @@ -6,8 +6,13 @@ FreeBSD_task:
>install_script:
>  - pkg update -f && pkg upgrade -y && pkg install -y openssl git gmake
> lua53 socat pcre
>script:
> +- sudo sysctl kern.corefile=/tmp/%N.%P.core
> +- sudo sysctl kern.sugid_coredump=1
>  - scripts/build-vtest.sh
>  - gmake CC=clang V=1 ERR=1 TARGET=freebsd USE_ZLIB=1 USE_PCRE=1
> USE_OPENSSL=1 USE_LUA=1 LUA_INC=/usr/local/include/lua53
> LUA_LIB=/usr/local/lib LUA_LIB_NAME=lua-5.3
>  - ./haproxy -vv
>  - ldd haproxy
> -- env VTEST_PROGRAM=../vtest/vtest gmake reg-tests
> REGTESTS_TYPES=default,bug,devel || (for folder in /tmp/*regtest*/vtc.*; do
> cat $folder/INFO $folder/LOG; done && exit 1)
> +  test_script:
> +- env VTEST_PROGRAM=../vtest/vtest gmake reg-tests
> REGTESTS_TYPES=default,bug,devel
> +  on_failure:
> +debug_script: (for folder in /tmp/*regtest*/vtc.*; do cat
> $folder/INFO $folder/LOG; done && ls /tmp/haproxy.*.core && gdb -ex 'thread
> apply all bt full' ./haproxy /tmp/haproxy.*.core)
> --
> 2.35.3.windows.1
>
>


Re: mux-h2: Backend stream is not fully closed if frontend keeps stream open

2023-09-17 Thread Илья Шипицин
Yes, that e2e test is probably not going to do nasty things. But it is worth
a try.
On Sun, Sep 17, 2023, 03:26 Valters Jansons 
wrote:

> On Sat, Sep 16, 2023 at 10:02 PM Илья Шипицин 
> wrote:
> > I wonder if there're gRPC tests similar to h2spec (I couldn't find them)
>
> I am not aware of a single binary that could be used as a gRPC test
> for proxies. The closest thing that I can think of is examples from
> official gRPC libraries. Internally, the Go library runs a Bash script
> for CI "Run extra tests" to check the outputs of examples. It can be
> seen at
> https://github.com/grpc/grpc-go/blob/v1.58.1/examples/examples_test.sh
> and could be adopted for HAProxy, by overriding the client `-addr`
> flag. There are no guarantees that it continues being maintained in
> the current format though.
>
> ---
>
> But, gRPC is essentially a framework (binary encoding and custom
> headers) for object-oriented HTTP/2. My observed issue is an HTTP/2
> processing issue by HAProxy, when the frontend client doesn't send an
> END_STREAM flag.
>
> When I implement a similar scenario in other languages, the failure
> rate is much lower. For example, I pushed a custom Java client to
> https://github.com/sigv/grpcopen/tree/main/java that I was testing
> with nginx (same HAProxy configuration as my other proof). When this
> request succeeds, however, the session state is reported SD--. The
> gRPC example to me is simply the most reliable failure, as there is
> some timing involved, which I can't seem to yet figure out fully.
>
> To highlight H2 standard (RFC7540), section 8.1 states "the last frame
> in the sequence bears an END_STREAM flag". It is however never
> clarified if that's a "MUST" or a "SHOULD". The client could be argued
> as non-compliant, assuming it is intended as "MUST". Maybe this is the
> reason why the h2spec project does not cover such a scenario ("who
> would do such a thing").
>
> What is even more important for this issue though, is the last
> paragraph of the same section:
>
>An HTTP response is complete after the server sends -- or the client
>receives -- a frame with the END_STREAM flag set (including any
>CONTINUATION frames needed to complete a header block).  A server can
>send a complete response prior to the client sending an entire
>request if the response does not depend on any portion of the request
>that has not been sent and received.  When this is true, a server MAY
>request that the client abort transmission of a request without error
>by sending a RST_STREAM with an error code of NO_ERROR after sending
>a complete response (i.e., a frame with the END_STREAM flag).
>Clients MUST NOT discard responses as a result of receiving such a
>RST_STREAM, though clients can always discard responses at their
>discretion for other reasons.
>
> To paraphrase, when the backend server sends END_STREAM, the response
> is complete and the server is not interested in the request data
> anymore. The END_STREAM is a success in the eyes of the H2 spec, and
> "clients MUST NOT discard responses as a result" (that is - HAProxy is
> expected to send the response to client when END_STREAM). HAProxy
> should not treat it as an abrupt close. When the backend server sends
> RST_STREAM(NO_ERROR) to abort the connection, HAProxy must simply stop
> sending data to the backend server.
>
> I understand there is some complexity with the mux, and tieing H1 into
> it. But the H2 spec says the exchange is successful, and the observed
> half-close is okay.
>
> Best regards,
> Valters Jansons
>


Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-09-07 Thread Илья Шипицин
Thu, 7 Sep 2023 at 00:05, Hopkins, Andrew :

> I tried it and HAProxy doesn’t build with AWS-LC when quic is turned on.
> There are at least two issues:
> 1. AWS-LC’s TLS 1.3 cipher suite names are a little different, this is
> easy to fix and I opened https://github.com/aws/aws-lc/pull/1175
>
> 2. ChaCha Poly and AES CCM are not usable through the EVP_CIPHER API,
> AWS-LC only exposes these through the AEAD API
>
>
>
> How important is ChaCha Poly & AES CCM to HAProxy and your users? I see
> three options:
>

It is mandatory; browsers require it.


accidentally "QUIC without ChaCha Poly" was implemented for LibreSSL (it
was added few commits later), nothing worked from browser point of view.


> 1. AWS-LC plumbs these two algorithms through the EVP_CIPHER API. This is
> useful for HAProxy and other AWS-LC customers, but is the most work
> 2. HAProxy adopts AWS-LC’s (and BoringSSL’s) AEAD API
>
> 3. HAProxy turns off ChaCha Poly and AES CCM support in quic when built
> with AWS-LC
>

I recall there was similar handling for BoringSSL; maybe just adjusting the
"#ifdef" guards would work.


>
>
>
>
> *From: *Илья Шипицин 
> *Date: *Wednesday, September 6, 2023 at 5:41 AM
> *To: *William Lallemand 
> *Cc: *"Hopkins, Andrew" , Willy Tarreau ,
> Aleksandar Lazic , "haproxy@formilux.org" <
> haproxy@formilux.org>
> *Subject: *RE: [EXTERNAL] [PATCH] BUILD: ssl: Build with new
> cryptographic library AWS-LC
>
>
>
> *CAUTION*: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
> Based on USE_OPENSSL_AWSLC, QUIC could be enabled?
>
>
>
> Wed, 6 Sep 2023 at 14:26, William Lallemand :
>
> On Tue, Sep 05, 2023 at 11:56:26PM +, Hopkins, Andrew wrote:
> > I split up the remaining CI changes into 4 new attached patches. The
> > latest changes are still passing on my fork
> > https://github.com/andrewhop/haproxy/actions/runs/6090899582.
> >
>
> Thanks, I just merged them!
>
>
> > I was hoping to take advantage of the better HAProxy support in
> > AWS-LC's CI but I'm running into some issues in
> > https://github.com/aws/aws-lc/pull/1174 I was wondering if you had any
> > pointers of what to look at. I think this is CodeBuild specific issue
> > since the tests pass in HAProxy's CI and when I run AWS-LC's CI
> > locally. I just can't figure out what CodeBuild might be doing to mess
> > with the results.
> >
> > Looking at the log for mcli_start_progs.vtc the two sleep programs are
> > started as expected but the overall process returns the wrong exit
> > code (0x0 instead of 0x82). Does anything stand out to you as weird
> > looking?
> >
>
> I never used CodeBuild so I'm not aware on any timers or process
> limitation but that could be something like that.
>
> From what I understand from the trace, I think every processes received a
> SIGTERM. You can see 2 "Exiting Master process..." and the first one is
> before
> the "kill" from VTest which is suppose to send a SIGINT so it was probably
> sent
> outside the test.
>
> This test should finish like this:
>
> ***  h1debug|:MASTER.accept(0008)=000e from [127.0.0.1:41542]
> ALPN=
> ***  h1debug|:MASTER.srvcls[000e:]
>  h1CLI connection normally closed
> ***  h1CLI closing fd 9
>  h1CLI recv|#
>  
>  h1CLI recv|357949  master  0 [failed: 0]
>  0d00h00m00s 2.9-dev4-06d369-78
>  h1CLI recv|# workers
>  h1CLI recv|357955  worker  0
>  0d00h00m00s 2.9-dev4-06d369-78
>  h1CLI recv|# programs
>  h1CLI recv|357953  foo 0
>  0d00h00m00s -
>  h1CLI recv|357954  bar 0
>  0d00h00m00s -
>  h1CLI recv|
> ***  h1debug|0001:MASTER.clicls[:]
> ***  h1debug|0001:MASTER.closed[:]
>  h1CLI expect match ~ ".*foo.*
> .*bar.*
> "
> **   h1CLI ending
> **   h1Wait
> **   h1Stop HAproxy pid=357949
>  h1Kill(2)=0: Success
> ***  h1debug|[NOTICE]   (357949) : haproxy version is
> 2.9-dev4-06d369-78
> ***  h1debug|[NOTICE]   (357949) : path to executable is
> /home/wla/projects/haproxy/haproxy-community-maint/haproxy
> ***  h1debug|[WARNING]  (357949) : Exiting Master process...
> ***  h1debug|[ALERT](357949) : Current program 'foo' (357953)
> exited with code 130 (Interrupt)
> ***  h1debug|[ALERT](357949) : Current pro

Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-09-06 Thread Илья Шипицин
Based on USE_OPENSSL_AWSLC, QUIC could be enabled?

Wed, 6 Sep 2023 at 14:26, William Lallemand :

> On Tue, Sep 05, 2023 at 11:56:26PM +, Hopkins, Andrew wrote:
> > I split up the remaining CI changes into 4 new attached patches. The
> > latest changes are still passing on my fork
> > https://github.com/andrewhop/haproxy/actions/runs/6090899582.
> >
>
> Thanks, I just merged them!
>
>
> > I was hoping to take advantage of the better HAProxy support in
> > AWS-LC's CI but I'm running into some issues in
> > https://github.com/aws/aws-lc/pull/1174 I was wondering if you had any
> > pointers of what to look at. I think this is CodeBuild specific issue
> > since the tests pass in HAProxy's CI and when I run AWS-LC's CI
> > locally. I just can't figure out what CodeBuild might be doing to mess
> > with the results.
> >
> > Looking at the log for mcli_start_progs.vtc the two sleep programs are
> > started as expected but the overall process returns the wrong exit
> > code (0x0 instead of 0x82). Does anything stand out to you as weird
> > looking?
> >
>
> I never used CodeBuild so I'm not aware on any timers or process
> limitation but that could be something like that.
>
> From what I understand from the trace, I think every processes received a
> SIGTERM. You can see 2 "Exiting Master process..." and the first one is
> before
> the "kill" from VTest which is suppose to send a SIGINT so it was probably
> sent
> outside the test.
>
> This test should finish like this:
>
> ***  h1debug|:MASTER.accept(0008)=000e from [127.0.0.1:41542]
> ALPN=
> ***  h1debug|:MASTER.srvcls[000e:]
>  h1CLI connection normally closed
> ***  h1CLI closing fd 9
>  h1CLI recv|#
>  
>  h1CLI recv|357949  master  0 [failed: 0]
>  0d00h00m00s 2.9-dev4-06d369-78
>  h1CLI recv|# workers
>  h1CLI recv|357955  worker  0
>  0d00h00m00s 2.9-dev4-06d369-78
>  h1CLI recv|# programs
>  h1CLI recv|357953  foo 0
>  0d00h00m00s -
>  h1CLI recv|357954  bar 0
>  0d00h00m00s -
>  h1CLI recv|
> ***  h1debug|0001:MASTER.clicls[:]
> ***  h1debug|0001:MASTER.closed[:]
>  h1CLI expect match ~ ".*foo.*
> .*bar.*
> "
> **   h1CLI ending
> **   h1Wait
> **   h1Stop HAproxy pid=357949
>  h1Kill(2)=0: Success
> ***  h1debug|[NOTICE]   (357949) : haproxy version is
> 2.9-dev4-06d369-78
> ***  h1debug|[NOTICE]   (357949) : path to executable is
> /home/wla/projects/haproxy/haproxy-community-maint/haproxy
> ***  h1debug|[WARNING]  (357949) : Exiting Master process...
> ***  h1debug|[ALERT](357949) : Current program 'foo' (357953)
> exited with code 130 (Interrupt)
> ***  h1debug|[ALERT](357949) : Current program 'bar' (357954)
> exited with code 130 (Interrupt)
>  dT0.076
> ***  h1debug|[ALERT](357949) : Current worker (357955) exited with
> code 130 (Interrupt)
> ***  h1debug|[WARNING]  (357949) : All workers exited. Exiting... (130)
>  dT0.077
>  h1STDOUT EOF
>  dT0.171
> **   h1WAIT4 pid=357949 status=0x8200 (user 0.058881 sys 0.026402)
> *top   RESETTING after reg-tests/mcli/mcli_start_progs.vtc
> **   h1Reset and free h1 haproxy -1
>  dT0.172
> **   s1Waiting for server (4/-1)
> *top   TEST reg-tests/mcli/mcli_start_progs.vtc completed
> *diag  0.0 /usr/bin/sleep
> #top  TEST reg-tests/mcli/mcli_start_progs.vtc passed (0.173)
> 0 tests failed, 0 tests skipped, 1 tests passed
>
>
> --
> William Lallemand
>


Re: [PATCH] MEDIUM: sample: Implement sample fetch for arbitrary PROXY protocol v2 TLV values

2023-08-31 Thread Илья Шипицин
cirrus-ci backtrace

freebsd (cirrus-ci) crash · Issue #2275 · haproxy/haproxy (github.com)
<https://github.com/haproxy/haproxy/issues/2275>

As usual, I'll send the CI improvements once they are polished.

Thu, 31 Aug 2023 at 18:22, Илья Шипицин :

> While trying to enable "gdb bt" on cirrus-ci, I noticed that we have
> similar crashes on musl (where gdb is implemented already)
>
> https://github.com/haproxy/haproxy/issues/2274
>
> Wed, 30 Aug 2023 at 05:29, Willy Tarreau :
>
>> On Tue, Aug 29, 2023 at 11:16:32PM +0200, Илья Шипицин wrote:
>> > Tue, 29 Aug 2023 at 16:45, Willy Tarreau :
>> >
>> > > On Tue, Aug 29, 2023 at 04:31:31PM +0200, Willy Tarreau wrote:
>> > > > On Tue, Aug 29, 2023 at 02:16:55PM +, Stephan, Alexander wrote:
>> > > > > However, I noticed there is a problem now with the FreeBSD test.
>> Have
>> > > you
>> > > > > already looked into it?
>> > > >
>> > > > Ah no, I had not noticed. I first pushed into a temporary branch and
>> > > > everything was OK so I pushed into master again without checking
>> since
>> > > > it was the exact same commit.
>> > > >
>> > > > > I was not able to reproduce it for now. Looks like the test
>> passes but
>> > > it
>> > > > > crashes afterwards? Maybe some SEGFAULT. Not sure... the CI on
>> your
>> > > branch
>> > > > > looked good iirc.
>> > > >
>> > > > Hmmm no, signal 4 is SIGILL, we're using it for our asserts
>> (BUG_ON()).
>> > > > I never know how to retrieve the stderr log from these tests, there
>> are
>> > > > too many layers for me, I've been explained several times but cannot
>> > > > memorize it.
>> > > >
>> > > > I'm rebuilding here on a freebsd machine, hoping to see it happen
>> again.
>> > > >
>> > > > Note that we cannot rule out a transient issue such as a temporary
>> memory
>> > > > shortage or too long a CPU pause as such VMs are usually overloaded.
>> > > > Maybe it has nothing to do with your new test but just randomly
>> triggered
>> > > > there, or maybe it put the light on a corner case. At least it's
>> not a
>> > > > segfault or such a crash that would indicate a pointer misuse.
>> > >
>> > > I'm getting totally random results from vtest on FreeBSD, I don't know
>> > > why. I even tried to limit to one parallel process and am still
>> getting
>> > > on average 2 errors per run, most always on SSL but not only. Some of
>> > > them include connection failures (sC) with 503 being returned and
>> showing
>> > > "Timeout during SSL handshake". I'm not getting the SIGILL though.
>> Thus
>> > > I don't know what to think about it, especially since it previously
>> worked.
>> > > We'll see if it happens again on next push.
>> > >
>> > > Willy
>> > >
>> >
>> > we can disable cirrus-ci, not sure what is the purpose of randomly
>> > failing
>> > CI :)
>>
>> I think it's still used for arm builds or something like this. Regardless,
>> given that freebsd randomly fails on my local platform,there might be
>> something else. I remember that cirrus used to fail some time ago, I
>> don't know what its status is right now.
>>
>> Willy
>>
>


Re: [PATCH] MEDIUM: sample: Implement sample fetch for arbitrary PROXY protocol v2 TLV values

2023-08-31 Thread Илья Шипицин
While trying to enable "gdb bt" on cirrus-ci, I noticed that we have
similar crashes on musl (where gdb is implemented already).

https://github.com/haproxy/haproxy/issues/2274

Wed, 30 Aug 2023 at 05:29, Willy Tarreau :

> On Tue, Aug 29, 2023 at 11:16:32PM +0200, Илья Шипицин wrote:
> > Tue, 29 Aug 2023 at 16:45, Willy Tarreau :
> >
> > > On Tue, Aug 29, 2023 at 04:31:31PM +0200, Willy Tarreau wrote:
> > > > On Tue, Aug 29, 2023 at 02:16:55PM +, Stephan, Alexander wrote:
> > > > > However, I noticed there is a problem now with the FreeBSD test.
> Have
> > > you
> > > > > already looked into it?
> > > >
> > > > Ah no, I had not noticed. I first pushed into a temporary branch and
> > > > everything was OK so I pushed into master again without checking
> since
> > > > it was the exact same commit.
> > > >
> > > > > I was not able to reproduce it for now. Looks like the test passes
> but
> > > it
> > > > > crashes afterwards? Maybe some SEGFAULT. Not sure... the CI on your
> > > branch
> > > > > looked good iirc.
> > > >
> > > > Hmmm no, signal 4 is SIGILL, we're using it for our asserts
> (BUG_ON()).
> > > > I never know how to retrieve the stderr log from these tests, there
> are
> > > > too many layers for me, I've been explained several times but cannot
> > > > memorize it.
> > > >
> > > > I'm rebuilding here on a freebsd machine, hoping to see it happen
> again.
> > > >
> > > > Note that we cannot rule out a transient issue such as a temporary
> memory
> > > > shortage or too long a CPU pause as such VMs are usually overloaded.
> > > > Maybe it has nothing to do with your new test but just randomly
> triggered
> > > > there, or maybe it put the light on a corner case. At least it's not
> a
> > > > segfault or such a crash that would indicate a pointer misuse.
> > >
> > > I'm getting totally random results from vtest on FreeBSD, I don't know
> > > why. I even tried to limit to one parallel process and am still getting
> > > on average 2 errors per run, most always on SSL but not only. Some of
> > > them include connection failures (sC) with 503 being returned and
> showing
> > > "Timeout during SSL handshake". I'm not getting the SIGILL though. Thus
> > > I don't know what to think about it, especially since it previously
> worked.
> > > We'll see if it happens again on next push.
> > >
> > > Willy
> > >
> >
> > we can disable cirrus-ci, not sure what is the purpose of randomly
> > failing
> > CI :)
>
> I think it's still used for arm builds or something like this. Regardless,
> given that freebsd randomly fails on my local platform,there might be
> something else. I remember that cirrus used to fail some time ago, I
> don't know what its status is right now.
>
> Willy
>


Re: [PATCH] MEDIUM: sample: Implement sample fetch for arbitrary PROXY protocol v2 TLV values

2023-08-29 Thread Илья Шипицин
Tue, 29 Aug 2023 at 16:45, Willy Tarreau :

> On Tue, Aug 29, 2023 at 04:31:31PM +0200, Willy Tarreau wrote:
> > On Tue, Aug 29, 2023 at 02:16:55PM +, Stephan, Alexander wrote:
> > > However, I noticed there is a problem now with the FreeBSD test. Have
> you
> > > already looked into it?
> >
> > Ah no, I had not noticed. I first pushed into a temporary branch and
> > everything was OK so I pushed into master again without checking since
> > it was the exact same commit.
> >
> > > I was not able to reproduce it for now. Looks like the test passes but
> it
> > > crashes afterwards? Maybe some SEGFAULT. Not sure... the CI on your
> branch
> > > looked good iirc.
> >
> > Hmmm no, signal 4 is SIGILL, we're using it for our asserts (BUG_ON()).
> > I never know how to retrieve the stderr log from these tests, there are
> > too many layers for me, I've been explained several times but cannot
> > memorize it.
> >
> > I'm rebuilding here on a freebsd machine, hoping to see it happen again.
> >
> > Note that we cannot rule out a transient issue such as a temporary memory
> > shortage or too long a CPU pause as such VMs are usually overloaded.
> > Maybe it has nothing to do with your new test but just randomly triggered
> > there, or maybe it put the light on a corner case. At least it's not a
> > segfault or such a crash that would indicate a pointer misuse.
>
> I'm getting totally random results from vtest on FreeBSD, I don't know
> why. I even tried to limit to one parallel process and am still getting
> on average 2 errors per run, most always on SSL but not only. Some of
> them include connection failures (sC) with 503 being returned and showing
> "Timeout during SSL handshake". I'm not getting the SIGILL though. Thus
> I don't know what to think about it, especially since it previously worked.
> We'll see if it happens again on next push.
>
> Willy
>

We can disable cirrus-ci; I'm not sure what the purpose of a randomly failing
CI is :)


Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-08-17 Thread Илья Шипицин
I suggest submitting the scripts/build-ssl.sh changes separately (i.e.
sending them to the mailing list as described in CONTRIBUTING). Those changes
are good and they do not affect any further steps.

Nowadays we do not benefit from
echo "${AWS_LC_VERSION}" > "${HOME}/opt/.aws_lc-version";
we used to use that for Travis CI caching earlier, and today some people
still use it for local caching.

As for further steps, there were several ideas:

1) add aws-lc to the push-based vtest runs
2) add aws-lc to the weekly CI
3) add a dedicated USE_OPENSSL_AWSLC build option (similar to
   USE_OPENSSL_WOLFSSL); see the sketch below
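
For option 3, a hedged sketch of what such a build invocation could look like,
mirroring the existing wolfSSL wiring; the flag name follows the proposal
above and the install paths are assumptions:

```
make TARGET=linux-glibc USE_OPENSSL_AWSLC=1 \
     SSL_INC="${HOME}/opt/include" SSL_LIB="${HOME}/opt/lib" \
     ADDLIB="-Wl,-rpath,${HOME}/opt/lib"
```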



Thu, 17 Aug 2023 at 00:49, Hopkins, Andrew :

> Yes, what are the next steps? I updated my test PR with the latest changes
> from HAProxy master and it is still passing [1]. With a cached AWS-LC build
> the HAProxy build + test takes 2 minutes. Attached are the updated patch
> files, I can also combine them since they’re both small.
>
>
>
> For the defines: we already have OPENSSL_IS_AWSLC and I agree that’s
> reasonable to use if there is a spot we need to branch on, but the goal is
> not to need it. For the OPENSSL_VERSION_NUMBER we are currently not 100%
> 1.1.1 API compatible, we are working to improve that so other projects can
> easily migrate. [2] will make the version string behavior match OpenSSL’s.
> We are compatible for HAPRoxy’s current use of OpenSSL after [3], [4], [5]
> were merged in.
>
>
>
> [1] https://github.com/andrewhop/haproxy/pull/1
>
> [2] https://github.com/aws/aws-lc/pull/767
>
> [3] https://github.com/aws/aws-lc/pull/1032
>
> [4] https://github.com/aws/aws-lc/pull/1055
>
> [5] https://github.com/aws/aws-lc/pull/1070
>
>
>
> *From: *Илья Шипицин 
> *Date: *Wednesday, August 9, 2023 at 11:26 PM
> *To: *William Lallemand 
> *Cc: *Willy Tarreau , "Hopkins, Andrew" ,
> Aleksandar Lazic , "haproxy@formilux.org" <
> haproxy@formilux.org>
> *Subject: *RE: [EXTERNAL] [PATCH] BUILD: ssl: Build with new
> cryptographic library AWS-LC
>
>
>
> *CAUTION*: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
> shall we unfreeze this activity?
>
>
>
Tue, 18 Jul 2023 at 10:46, William Lallemand :
>
> On Tue, Jul 18, 2023 at 09:11:33AM +0200, Willy Tarreau wrote:
> > I'll let the SSL maintainers check all this, but my sentiment is that in
> > general if there are differences between the libs, it would be better if
> > we have a special define for this one as well. It's easier to write and
> > maintain "#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC)"
> > than making it appear sometimes as one of them, sometimes as the other.
> > That's what we had a long time ago and it was a real pain, every single
> > move in any lib would cause breakage somewhere. Being able to reliably
> > identify a library and handle its special cases is much better.
>
> I agree, we could even add a build option OPENSSL_AWSLC=1 like we've
> done with wolfssl, since this is a variant of the Openssl API. Then
> every supported features could be activated with the HAVE_SSL_* defines
> in openssl-compat.h. Discovering the features with libreSSL and
> boringSSL version defines was a real mess, we are probably going to end
> up with a matrix of features supported by different libraries.
>
> I'm seeing multiple defines that can be useful in haproxy:
>
> - OPENSSL_IS_AWSLC could be used as Willy said, that could enough and we
>   maybe won't need the build option.
>
> - OPENSSL_VERSION_NUMBER it seems to be set to 0x1010107f but is this
>   100% compatible with the openssl 1.1.1 API?
>
> - AWSLC_VERSION_NUMBER_STRING It seems to be the OPENSSL_VERSION_TEXT
>   counterpart but I don't see the equivalent as a number, in
>   OpenSSL there is OPENSSL_VERSION_NUMBER which is used for doing #if
>   (OPENSSL_VERSION_NUMBER >= 0x1010107f) in the code for example, this
>   is really important for maintenance if we want to support multiple
>   versions of aws-lc.
>
> - AWSLC_API_VERSION maybe this would be enough instead of the
>   VERSION_NUMBER. We could activate the HAVE_SSL_* defines using
>   OPENSSL_VERSION_NUMBER and this.
>
> > > To Alex's concern on API compatibility: yes AWS-LC is aiming to
> provide a
> > > more stable API. We already run integration tests with 6 other
> projects [2]
> > > including HAProxy. This will help ensure API compatibility going
> forward.
> > > What is your specific concern with ABI compatibility? Are you looking
> to take
> > > the haproxy executable built with OpenSSL libcrypto/libssl and drop in
> AWS-LC
> > 

Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-08-10 Thread Илья Шипицин
Shall we unfreeze this activity?

Tue, 18 Jul 2023 at 10:46, William Lallemand :

> On Tue, Jul 18, 2023 at 09:11:33AM +0200, Willy Tarreau wrote:
> > I'll let the SSL maintainers check all this, but my sentiment is that in
> > general if there are differences between the libs, it would be better if
> > we have a special define for this one as well. It's easier to write and
> > maintain "#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC)"
> > than making it appear sometimes as one of them, sometimes as the other.
> > That's what we had a long time ago and it was a real pain, every single
> > move in any lib would cause breakage somewhere. Being able to reliably
> > identify a library and handle its special cases is much better.
>
> I agree, we could even add a build option OPENSSL_AWSLC=1 like we've
> done with wolfssl, since this is a variant of the Openssl API. Then
> every supported features could be activated with the HAVE_SSL_* defines
> in openssl-compat.h. Discovering the features with libreSSL and
> boringSSL version defines was a real mess, we are probably going to end
> up with a matrix of features supported by different libraries.
>
> I'm seeing multiple defines that can be useful in haproxy:
>
> - OPENSSL_IS_AWSLC could be used as Willy said, that could enough and we
>   maybe won't need the build option.
>
> - OPENSSL_VERSION_NUMBER it seems to be set to 0x1010107f but is this
>   100% compatible with the openssl 1.1.1 API?
>
> - AWSLC_VERSION_NUMBER_STRING It seems to be the OPENSSL_VERSION_TEXT
>   counterpart but I don't see the equivalent as a number, in
>   OpenSSL there is OPENSSL_VERSION_NUMBER which is used for doing #if
>   (OPENSSL_VERSION_NUMBER >= 0x1010107f) in the code for example, this
>   is really important for maintenance if we want to support multiple
>   versions of aws-lc.
>
> - AWSLC_API_VERSION maybe this would be enough instead of the
>   VERSION_NUMBER. We could activate the HAVE_SSL_* defines using
>   OPENSSL_VERSION_NUMBER and this.
>
> > > To Alex's concern on API compatibility: yes AWS-LC is aiming to
> provide a
> > > more stable API. We already run integration tests with 6 other
> projects [2]
> > > including HAProxy. This will help ensure API compatibility going
> forward.
> > > What is your specific concern with ABI compatibility? Are you looking
> to take
> > > the haproxy executable built with OpenSSL libcrypto/libssl and drop in
> AWS-LC
> > > without recompiling haproxy? Or do that between AWS-LC libcrypto/libssl
> > > versions?
> >
> > I personally have no interest in cross-libs ABI compatibility because
> > that does not make much sense, particularly when considering that Openssl
> > does not support QUIC so by definition there will be many symbol-level
> > differences. Regarding aws-lc's libs over time, yes for the users it
> > would be desirable that within a stable branch it's possible to update
> > the library or the application in any order without having to rebuild
> > the application. We all know that it's something that only becomes
> > possible once the lib stabilizes enough to avoid invasive backports in
> > stable branches. I don't know what the current status is for aws-lc's
> > stable branches at the moment.
> >
>
> Agreed, cross-libs ABI is not useful, but the ABI should remain stable
> between minor releases so the library package could be updated without
> rebuilding every software that depends on it.
>
> Regards,
>
>
> --
> William Lallemand
>
>


Re: [PATCH 2/2] CI: get rid of travis-ci wrapper for Coverity scan

2023-08-07 Thread Илья Шипицин
I made a typo:

+  https://scan.coverity.com/builds?project=Hsproxy

Can it be fixed on the fly, or should I send a v2?
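
For reference, the intended target (matching the 'Haproxy' project name used
elsewhere in the workflow) would be:

```
# other --form fields (email, version, description) as in the patch above
curl --form token="${COVERITY_SCAN_TOKEN}" \
     --form file=@cov.tar.gz \
     https://scan.coverity.com/builds?project=Haproxy
```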

Sun, 6 Aug 2023 at 00:10, Ilya Shipitsin :

> historically coverity scan was performed by travis-ci script, let us
> rewrite it in bash
> ---
>  .github/workflows/coverity.yml | 28 +---
>  1 file changed, 17 insertions(+), 11 deletions(-)
>
> diff --git a/.github/workflows/coverity.yml
> b/.github/workflows/coverity.yml
> index e208c8cac..e4e2bd5dc 100644
> --- a/.github/workflows/coverity.yml
> +++ b/.github/workflows/coverity.yml
> @@ -16,13 +16,6 @@ jobs:
>scan:
>  runs-on: ubuntu-latest
>  if: ${{ github.repository_owner == 'haproxy' }}
> -env:
> -  COVERITY_SCAN_PROJECT_NAME: 'Haproxy'
> -  COVERITY_SCAN_BRANCH_PATTERN: '*'
> -  COVERITY_SCAN_NOTIFICATION_EMAIL: 'chipits...@gmail.com'
> -  # We cannot pass the DEBUG at once here because Coverity splits
> -  # parameters at whitespaces, without taking quoting into account.
> -  COVERITY_SCAN_BUILD_COMMAND: "make CC=clang TARGET=linux-glibc
> USE_ZLIB=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_LUA=1 USE_OPENSSL=1 USE_QUIC=1
> USE_SYSTEMD=1 USE_WURFL=1 WURFL_INC=addons/wurfl/dummy
> WURFL_LIB=addons/wurfl/dummy USE_DEVICEATLAS=1
> DEVICEATLAS_SRC=addons/deviceatlas/dummy USE_51DEGREES=1
> 51DEGREES_SRC=addons/51degrees/dummy/pattern
> ADDLIB=\"-Wl,-rpath,$HOME/opt/lib/\" SSL_LIB=${HOME}/opt/lib
> SSL_INC=${HOME}/opt/include DEBUG+=-DDEBUG_STRICT=1
> DEBUG+=-DDEBUG_USE_ABORT=1"
>  steps:
>  - uses: actions/checkout@v3
>  - name: Install apt dependencies
> @@ -34,10 +27,23 @@ jobs:
>  - name: Install QUICTLS
>run: |
>  QUICTLS=yes scripts/build-ssl.sh
> +- name: Download Coverity build tool
> +  run: |
> +wget -c -N https://scan.coverity.com/download/linux64
> --post-data "token=${{ secrets.COVERITY_SCAN_TOKEN }}&project=Haproxy" -O
> coverity_tool.tar.gz
> +mkdir coverity_tool
> +tar xzf coverity_tool.tar.gz --strip 1 -C coverity_tool
>  - name: Build WURFL
>run: make -C addons/wurfl/dummy
> -- name: Run Coverity Scan
> -  env:
> -COVERITY_SCAN_TOKEN: ${{ secrets.COVERITY_SCAN_TOKEN }}
> +- name: Build with Coverity build tool
> +  run: |
> +export PATH=`pwd`/coverity_tool/bin:$PATH
> +cov-build --dir cov-int make CC=clang TARGET=linux-glibc
> USE_ZLIB=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_LUA=1 USE_OPENSSL=1 USE_QUIC=1
> USE_SYSTEMD=1 USE_WURFL=1 WURFL_INC=addons/wurfl/dummy
> WURFL_LIB=addons/wurfl/dummy USE_DEVICEATLAS=1
> DEVICEATLAS_SRC=addons/deviceatlas/dummy USE_51DEGREES=1
> 51DEGREES_SRC=addons/51degrees/dummy/pattern
> ADDLIB=\"-Wl,-rpath,$HOME/opt/lib/\" SSL_LIB=${HOME}/opt/lib
> SSL_INC=${HOME}/opt/include DEBUG+=-DDEBUG_STRICT=1
> DEBUG+=-DDEBUG_USE_ABORT=1
> +- name: Submit build result to Coverity Scan
>run: |
> -curl -fsSL "
> https://scan.coverity.com/scripts/travisci_build_coverity_scan.sh" | bash
> || true
> +tar czvf cov.tar.gz cov-int
> +curl --form token=${{ secrets.COVERITY_SCAN_TOKEN }} \
> +  --form email=chipits...@gmail.com \
> +  --form file=@cov.tar.gz \
> +  --form version="Commit $GITHUB_SHA" \
> +  --form description="Build submitted via CI" \
> +  https://scan.coverity.com/builds?project=Hsproxy
> --
> 2.41.0
>
>


Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-18 Thread Илья Шипицин
Tue, 18 Jul 2023 at 09:14, Willy Tarreau :

> Hi Andrew,
>
> On Tue, Jul 18, 2023 at 06:26:45AM +, Hopkins, Andrew wrote:
> > Willy you're correct. AWS-LC does have support for the QUIC primitives
> > HAProxy  needs, we just need to fix some of the names [1] in either
> HAProxy's
> > code or AWS-LC in a follow up change.
>
> OK, thanks for confirming :-)
>
> I'll let the SSL maintainers check all this, but my sentiment is that in
> general if there are differences between the libs, it would be better if
> we have a special define for this one as well. It's easier to write and
> maintain "#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC)"
>

AWSLC_VERSION_NAME?

aws-lc/include/openssl/base.h at 1dd5cf92e96edd4092bc307b14969dae5eaaa507 ·
aws/aws-lc (github.com)



> than making it appear sometimes as one of them, sometimes as the other.
> That's what we had a long time ago and it was a real pain, every single
> move in any lib would cause breakage somewhere. Being able to reliably
> identify a library and handle its special cases is much better.
>
> > To Alex's concern on API compatibility: yes AWS-LC is aiming to provide a
> > more stable API. We already run integration tests with 6 other projects
> [2]
> > including HAProxy. This will help ensure API compatibility going forward.
> > What is your specific concern with ABI compatibility? Are you looking to
> take
> > the haproxy executable built with OpenSSL libcrypto/libssl and drop in
> AWS-LC
> > without recompiling haproxy? Or do that between AWS-LC libcrypto/libssl
> > versions?
>
> I personally have no interest in cross-libs ABI compatibility because
> that does not make much sense, particularly when considering that Openssl
> does not support QUIC so by definition there will be many symbol-level
> differences. Regarding aws-lc's libs over time, yes for the users it
> would be desirable that within a stable branch it's possible to update
> the library or the application in any order without having to rebuild
> the application. We all know that it's something that only becomes
> possible once the lib stabilizes enough to avoid invasive backports in
> stable branches. I don't know what the current status is for aws-lc's
> stable branches at the moment.
>
> > Willy that's an interesting idea for library name conflict avoidance. I
> did
> > not know that's how quictls solved this problem, it seems like it would
> work
> > for simple applications with only one dependency on libcrypto (such as
> > HAProxy). However, I don't think it would solve the issue of transitive
> > dependencies mixing libcrypto implementations, e.g. an application
> depends on
> > 2 libraries that each depend on libcrypto, if one library moves to
> AWS-LC the
> > application will get symbol conflicts.
>
> Sure, but that application couldn't work otherwise anyway if some of
> its dependencies rely on openssl's libcrypto, because if you replace
> it with yours it'll get a different one than the one that was expected
> by all the macros, struct definitions and inline funtions anyway. That's
> already the problem we've faced with quictls: some applications with many
> dependencies (e.g. curl) need to have all these dependencies rebuilt
> against it anyway, so the application and its dependencies need to be
> explicitly built against the new lib. And at least with a distinct
> version number you don't need to rebuild the whole system, only the
> components that directly depend on aws-lc and their dependencies. This
> also allows to know better what depends on what.
>
> Just my two cents,
> Willy
>
>


Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-17 Thread Илья Шипицин
Sat, 15 Jul 2023 at 10:44, Willy Tarreau :

> Hi Alex, Andrew,
>
> On Thu, Jul 13, 2023 at 11:54:44AM +0200, Aleksandar Lazic wrote:
> > On 2023-07-13 (Do.) 08:22, Hopkins, Andrew wrote:
> > > * Do you plan to add quic (Server part) faster then OpenSSL?
> > >
> > > I have not looked into quic benchmarks but it uses the same
> > > cryptographic primitives as TLS so I imagine we'd be faster for a lot
> of
> > > the algorithms. It might not be useful for HAProxy which is all C, but
> > > AWS also launched s2n-quic [1] which does have extensive testing for
> > > correctness and performance. s2n-quic evenuses AWS-LC's libcrypto for
> > > all of the cryptographic operations [2] though our rust bindings
> > > aws-lc-rs [3].
> >
> > Hm, this implies a dependency for rust which increases the complexity to
> > build HAProxy. From my point of view isn't this very helpfull to bring
> the
> > library into haproxy.
>
> I think there was a misunderstanding between you two above. I understand
> Alex' question as "will you merge quic support soon", but my understanding
> last time I had the opportunity to discuss about AWS-LC was that it *is*
> already there. Alex, what Andrew mentioned above is that their own QUIC
> implementation, s2n-quic, uses their rust bindings, but we don't need
> s2n-quic (since we have our own stack) and this will not add any
> dependency.
>

I tried to build with USE_QUIC=1:

[ilia@fedora haproxy]$ sh go.sh
  CC  src/quic_conn.o
  CC  src/h3.o
  CC  src/xprt_quic.o
  CC  src/quic_frame.o
  CC  src/quic_tls.o
In file included from src/quic_conn.c:60:
include/haproxy/quic_tls.h: In function ‘tls_aead’:
include/haproxy/quic_tls.h:124:14: error: ‘TLS1_3_CK_AES_128_GCM_SHA256’
undeclared (first use in this function); did you mean
‘TLS1_CK_AES_128_GCM_SHA256’?
  124 | case TLS1_3_CK_AES_128_GCM_SHA256:
  |  ^~~~
  |  TLS1_CK_AES_128_GCM_SHA256
compilation terminated due to -Wfatal-errors.
make: *** [Makefile:998: src/quic_conn.o] Error 1
make: *** Waiting for unfinished jobs
[ilia@fedora haproxy]$


>
> > > * Will be there some packages for debian/ubuntu/RHEL/... so that the
> > > users of HAProxy can "just install and run" HAProxy with that SSL Lib?
> > >
> > > In the near future no. Currently AWS-LC does not support enough
> packages
> > > to fully replace libcrypto for the entire operating system, and
> > > balancing different programs using different library paths and
> libcrypto
> > > implementations is tricky. Eventually distributing static archives and
> > > shared libraries once we have more support makes sense. There is more
> > > context/history in this issue [4].
> >
> > Uh that's a show stopper, at least from my point of view. This implies
> the
> > same work as HAProxy team have for wolfssl, BoringSSL and quictls and
> that's
> > a lot of work.
>
> I think that what would be important would be to find a package maintainer
> willing to maintain this into a distro for the whole maintenance life cycle
> of that distro. It also helps a lot in designing stable and durable APIs
> that boost adoption, because the early maintenance trouble are quickly
> learned during backports and everyone is more careful over time.
>
> One thing that should really be avoided would be to name the library as
> the regular openssl one, because while it helps with *early* adoption,
> it complicates everything over time and for all packages. For example,
> quictls adopted the .81 suffix (81 for "Q") and as such there's no
> conflict on the shared lib. For building, it's as simple as passing the
> SSL_INC/SSL_LIB during the build (and basically all software supporting
> openssl support this), so there's no trouble either, and all libs can
> coexist on the same distro. But yeah, it's really important to find
> someone willing to maintain such a package long enough for a distro,
> that fills a hole that the whole world is waiting to be plugged (since
> we know it will not come from openssl anyway).
>
> The partial support from various software is not a problem as long as
> openssl and aws-lc can coexist on the system. Some will just be built
> with one and others with the other. This has always existed and I would
> be surprised if there's no software linked against GNUtls on distros
> that ship it for example. So if aws-lc enters say, debian and ubuntu
> LTS, and common software like curl, apache, nginx, ngtcp2, haproxy can
> build with it, we'll basically have what users are waiting for to fully
> enable QUIC, and with a lib that cares about performance. For now there
> is only wolfSSL in that place, and while it works pretty well, it
> requires more porting effort from applications and a sensible set of
> build settings still needs to be defined for distros. In any case, the
> more options for package maintainers, the better.
>
> Regards,
> Willy
>
>


Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-17 Thread Илья Шипицин
Mon, 17 Jul 2023 at 11:58, William Lallemand :

> On Wed, Jul 12, 2023 at 12:26:06AM +, Hopkins, Andrew wrote:
> > Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC)
> > project [1]. Our goal is to improve the cryptography we use internally
> > at AWS and help our customers externally. In the spirit of helping
> > people use good crypto we know it’s important to make it easy to use
> > AWS-LC everywhere they use cryptography. This is why we are interested
> > in integrating AWS-LC into HAProxy.
> >
> > AWS-LC is a fork of BoringSSL which you already partially support. We
> > recently merged in several PRs (Full OCSP support [2] and custom
> > extension support [3]) to fully support HAProxy the same as OpenSSL.
> > To ensure we continue to support HAProxy long term we added HAProxy
> > built with AWS-LC to our CI [4].
> >
> > In our early testing we see modest improvements in overall throughput
> > when compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar
> > setup as this blog [5] I observe a small (~2.5%) increase in requests
> > per second for 5 kb requests on a C6i (x86) and C6g (arm) instance
> > using TLS 1.3 and AES 256 GCM. For both tests I used `taskset -c 2-47
> > ./h1load -e -ll -P -t 46 -s 30 -d 120 -c 500 https://[c6i or c6g
> > ip]:[aws-lc or openssl port]/?s=5k`.
> >
> > This small difference in this symmetric crypto workload comes down to
> > AWS-LC and OpenSSL having similar AES implementations. We observe
> > larger performance improvements with our micro-benchmarks for
> > algorithms related to the TLS handshake such as 15% reduction for ECDH
> > with P-256, and 40% reduction for P-521 on a C6i. This comes from our
> > s2n-bignum library[6], a formally verified bignum library with a focus
> > on performance and correctness.
> >
> > When built with AWS-LC all current regression tests pass. I have
> > included a small patch to update your documentation with AWS-LC as an
> > option and I attempted to add AWS-LC to your CI. I need a little help
> > figuring out how to test that part. Lastly from your excellent
> > contributing guide I am not subscribed so I would like to be cc’d on
> > all responses.
> >
> > Thanks, Andrew
> >
>
> Hello Andrew,
>
> That's interesting and we are open to new libraries that can give us a
> good alternative to OpenSSL.
>
> However the CI is becoming quite slow and we'd rather not add a new
> build for each push. I don't really like the fact that the patch is
> based on the git master for the push, the CI is principally used to check
> if we didn't break anything in haproxy, so if the library keeps changing
> during haproxy development it's more difficult to know if a breakage is
> because of haproxy or because of the library. We stopped building
> BoringSSL on the CI because of this, because there wasn't any clear
> maintenance cycle and the library kept changing. It looks like you
> have real releases in aws-lc though.
>

BoringSSL is "rolling releases only" while aws-lc has stable releases.
there's work in progress how to cache aws-lc build (I guess only couple of
small issues still exist)

[PATCH] Minor: ssl: Build with new cryptographic library AWS-LC by
andrewhop · Pull Request #1 · andrewhop/haproxy (github.com)


I'm fine with either scheduled or "on push" CI checks, no particular
opinion from my side.

also, if aws-lc is indeed very similar to openssl-1.1.1, we do not expect
to catch many build errors daily, because we already run builds against
openssl-1.1.1; maybe a weekly CI run would be enough.



>
> What I suggest is that we create a scheduled build for aws-lc like we've
> done with
>
> https://github.com/haproxy/haproxy/blob/master/.github/workflows/openssl-nodeprecated.yml
> for example.
>
> This way we don't increase the CI build for each push, and using the
> master branch doesn't become a problem.


> Regards,
>
> --
> William Lallemand
>
>


Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-15 Thread Илья Шипицин
Andrew,

I could not find how to enable "DHE-RSA-AES256-GCM-SHA384" on aws-lc
(required by haproxy vtest)

*** h3 debug|[ALERT] (7370) : config : Proxy 'ssl-dhfile-lst': unable to
set SSL cipher list to 'DHE-RSA-AES256-GCM-SHA384' for bind
'/tmp/haregtests-2023-07-14_19-06-01.w4q6ak/vtc.6061.2c564146/ssl_dhfile.sock'
at
[/tmp/haregtests-2023-07-14_19-06-01.w4q6ak/vtc.6061.2c564146/h3/cfg:26].
*** h3 debug|Proxy 'ssl-dhfile-lst': unable to set SSL cipher list to
'DHE-RSA-AES256-GCM-SHA384' for bind
'/tmp/haregtests-2023-07-14_19-06-01.w4q6ak/vtc.6061.2c564146/ssl_dhfile.sock'
at
[/tmp/haregtests-2023-07-14_19-06-01.w4q6ak/vtc.6061.2c564146/h3/cfg:26].
*** h3 debug|[ALERT] (7370) : config : Fatal errors found in configuration.

Wed, 12 Jul 2023 at 02:29, Hopkins, Andrew :

> Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC) project
> [1]. Our goal is to improve the cryptography we use internally at AWS and
> help our customers externally. In the spirit of helping people use good
> crypto we know it’s important to make it easy to use AWS-LC everywhere they
> use cryptography. This is why we are interested in integrating AWS-LC into
> HAProxy.
>
> AWS-LC is a fork of BoringSSL which you already partially support. We
> recently merged in several PRs (Full OCSP support [2] and custom extension
> support [3]) to fully support HAProxy the same as OpenSSL. To ensure we
> continue to support HAProxy long term we added HAProxy built with AWS-LC to
> our CI [4].
>
> In our early testing we see modest improvements in overall throughput when
> compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar setup as
> this blog [5] I observe a small (~2.5%) increase in requests per second for
> 5 kb requests on a C6i (x86) and C6g (arm) instance using TLS 1.3 and AES
> 256 GCM. For both tests I used `taskset -c 2-47 ./h1load -e -ll -P -t 46 -s
> 30 -d 120 -c 500 https://[c6i or c6g ip]:[aws-lc or openssl port]/?s=5k`.
>
> This small difference in this symmetric crypto workload comes down to
> AWS-LC and OpenSSL having similar AES implementations. We observe larger
> performance improvements with our micro-benchmarks for algorithms related
> to the TLS handshake such as 15% reduction for ECDH with P-256, and 40%
> reduction for P-521 on a C6i. This comes from our s2n-bignum library[6], a
> formally verified bignum library with a focus on performance and
> correctness.
>
> When built with AWS-LC all current regression tests pass. I have included
> a small patch to update your documentation with AWS-LC as an option and I
> attempted to add AWS-LC to your CI. I need a little help figuring out how
> to test that part. Lastly from your excellent contributing guide I am not
> subscribed so I would like to be cc’d on all responses.
>
> Thanks, Andrew
>
> [1] https://github.com/aws/aws-lc
> [2] https://github.com/aws/aws-lc/pull/1054
> [3] https://github.com/aws/aws-lc/pull/1071
> [4] https://github.com/aws/aws-lc/pull/1083
> [5]
> https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance
> [6] https://github.com/awslabs/s2n-bignum
>
>
>


Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-13 Thread Илья Шипицин
another significant thing is developer velocity: 4 minutes to build a
supplementary lib is too high.

[image: image.png]


can we implement something like we already do for openssl (i.e. taking the
latest available tag, which is even easier because aws-lc uses semantic
versioning)?

@functools.lru_cache(5)
def determine_latest_openssl(ssl):
    headers = {}
    if environ.get("GITHUB_TOKEN") is not None:
        headers["Authorization"] = "token {}".format(environ.get("GITHUB_TOKEN"))
    request = urllib.request.Request(
        "https://api.github.com/repos/openssl/openssl/tags", headers=headers
    )
    try:
        openssl_tags = urllib.request.urlopen(request)
    except:
        return "OPENSSL_VERSION=failed_to_detect"
    tags = json.loads(openssl_tags.read().decode("utf-8"))
    latest_tag = ""
    for tag in tags:
        name = tag["name"]
        if "openssl-" in name:
            if name > latest_tag:
                latest_tag = name
    return "OPENSSL_VERSION={}".format(latest_tag[8:])


in that case we can enable the GitHub cache and the lib won't be rebuilt every time.
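
something along these lines should be enough to derive the cache key,
assuming aws-lc keeps its vX.Y.Z tag naming (a sketch, to be wired into
matrix.py or the workflow):

  # pick the newest aws-lc release tag, usable as a GitHub Actions cache key
  AWS_LC_VERSION=$(git ls-remote --tags https://github.com/aws/aws-lc.git 'v*' \
    | awk -F/ '{print $NF}' | grep -v '\^{}' | sort -V | tail -n 1)
  echo "AWS_LC_VERSION=${AWS_LC_VERSION}"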

Thu, 13 Jul 2023 at 08:25, Hopkins, Andrew :

> Hi Alex, thanks for taking a look at this change, to answer your questions:
>
> * Do you plan to make releases which stable ABI on that we can rely on?
> Yes, we have releases on GitHub that follow semantic versioning and
> within minor versions everything is backward compatible. Internal details
> of structs may change in an API compatible way over time but might not be
> ABI. This would be signaled in the release notes and version number.
>
> * Do you plan to add quic (Server part) faster then OpenSSL?
>
> I have not looked into quic benchmarks but it uses the same cryptographic
> primitives as TLS so I imagine we'd be faster for a lot of the
> algorithms. It might not be useful for HAProxy which is all C, but AWS also
> launched s2n-quic [1] which does have extensive testing for correctness and
> performance. s2n-quic even uses AWS-LC's libcrypto for all of the
> cryptographic operations [2] though our rust bindings aws-lc-rs [3].
>
>
> * Will be there some packages for debian/ubuntu/RHEL/... so that the users
> of HAProxy can "just install and run" HAProxy with that SSL Lib?
>
> In the near future no. Currently AWS-LC does not support enough packages
> to fully replace libcrypto for the entire operating system, and balancing
> different programs using different library paths and libcrypto
> implementations is tricky. Eventually distributing static archives and
> shared libraries once we have more support makes sense. There is more
> context/history in this issue [4].
>
>
>
> [1] https://github.com/aws/s2n-quic
> [2] https://github.com/aws/s2n-quic/pull/1840
> [3]
> https://github.com/aws/aws-lc-rs
> [4] https://github.com/aws/aws-lc/issues/804
>
> Thanks, Andrew
>
> --
> *From:* Aleksandar Lazic 
> *Sent:* Wednesday, July 12, 2023 1:14 AM
> *To:* Hopkins, Andrew; haproxy@formilux.org
> *Subject:* RE: [EXTERNAL][PATCH] BUILD: ssl: Build with new cryptographic
> library AWS-LC
>
> CAUTION: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
> Hi Andrew.
>
> On 2023-07-12 (Mi.) 02:26, Hopkins, Andrew wrote:
> > Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC) project
> [1].
> > Our goal is to improve the cryptography we use internally at AWS and
> help our
> > customers externally. In the spirit of helping people use good crypto we
> know
> > it’s important to make it easy to use AWS-LC everywhere they use
> cryptography.
> > This is why we are interested in integrating AWS-LC into HAProxy.
> >
> > AWS-LC is a fork of BoringSSL which you already partially support. We
> recently
> > merged in several PRs (Full OCSP support [2] and custom extension
> support [3])
> > to fully support HAProxy the same as OpenSSL. To ensure we continue to
> support
> > HAProxy long term we added HAProxy built with AWS-LC to our CI [4].
> >
> > In our early testing we see modest improvements in overall throughput
> when
> > compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar setup
> as this
> > blog [5] I observe a small (~2.5%) increase in requests per second for 5
> kb
> > requests on a C6i (x86) and C6g (arm) instance using TLS 1.3 and AES 256
> GCM. For
> > both tests I used
> > `taskset -c 2-47 ./h1load -e -ll -P -t 46 -s 30 -d 120 -c 500
> https://[c6i or c6g ip]:[aws-lc or openssl port]/?s=5k`.
> >
> > This small difference in this symmetric crypto workload comes down to
> AWS-LC
> > and OpenSSL having similar AES implementations. We observe larger
> performance
> > improvements with our micro-benchmarks for algorithms related to the TLS
> > handshake such as 15% reduction for ECDH with P-256, and 40% reduction
> for
> > P-521 on a C6i. This comes from our s2n-bignum library[6], a formally
> verified
> > 

Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-13 Thread Илья Шипицин
maybe we'll join both VTest sections like that

[image: image.png]

Thu, 13 Jul 2023 at 01:45, Hopkins, Andrew :

>
> Thanks for the tip, I got the CI running and it found a minor visibility
> issue that we had to fix with our shared build [1]. All but one test [2] is
> now passing in the HAProxy CI while they all pass locally. Do you have any
> suggestions/tips for debugging  this test?
>
> Also the compiler and/or options used in your CI turned a warning into an
> error so I had to update the patch slightly to use the correct callback
> type for modern libcryptos.
>
> src/ssl_sock.c:1183:43: error: passing argument 2 of
> ‘SSL_CTX_get_tlsext_status_cb’ from incompatible pointer type
> [-Werror=incompatible-pointer-types]
>  1183 | SSL_CTX_get_tlsext_status_cb(ctx, );
>   |   ^
>   |   |
>   |   void (**)(void)
> compilation terminated due to -Wfatal-errors.
>
> OpenSSL >= 1.1.1 have the same callback signature as AWS-LC: int
> (*callback)(SSL *, void *). I believe this works with OpenSSL >= 1.1.1.
> because their SSL_CTX_ctrl performs the cast while AWS-LC has a dedicated
> function SSL_CTX_get_tlsext_status_cb with the right type.
>
> [1] https://github.com/aws/aws-lc/pull/1091
> [1]
> https://github.com/andrewhop/haproxy/actions/runs/5537027817/jobs/10105411198?pr=1#step:15:215
>
>
> From: Илья Шипицин 
> Sent: Wednesday, July 12, 2023 12:53 AM
> To: Hopkins, Andrew
> Cc: haproxy@formilux.org
> Subject: RE: [EXTERNAL][PATCH] BUILD: ssl: Build with new cryptographic
> library AWS-LC
>
>
>CAUTION: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
> Hello, Andrew!
>
>
> you already tried to launch CI in fork [PATCH] Minor: ssl: Build with new
> cryptographic library AWS-LC by andrewhop · Pull Request #1 ·
> andrewhop/haproxy (github.com)
>
>
> please make sure you've enabled GHA for fork (here: Actions ·
> andrewhop/haproxy (github.com))
>
>
> also, current trigger is set to "push"
> haproxy/.github/workflows/vtest.yml at master · andrewhop/haproxy · GitHub
>
>
>
> I'd try
>
>
> on: [ push, pull_request, workflow_dispatch ]
>
>
>
>
>
Wed, 12 Jul 2023 at 02:29, Hopkins, Andrew :
>   Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC) project
> [1]. Our goal is to improve the cryptography we use internally at AWS and
> help our customers externally. In the spirit of helping people use good
> crypto we know it’s important to make it  easy to use AWS-LC everywhere
> they use cryptography. This is why we are interested in integrating AWS-LC
> into HAProxy.
>
> AWS-LC is a fork of BoringSSL which you already partially support. We
> recently merged in several PRs (Full OCSP support [2] and custom extension
> support [3]) to fully support HAProxy the same as OpenSSL. To ensure we
> continue to support HAProxy long term we  added HAProxy built with AWS-LC
> to our CI [4].
>
> In our early testing we see modest improvements in overall throughput when
> compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar setup as
> this blog [5] I observe a small (~2.5%) increase in requests per second for
> 5 kb requests on a C6i (x86) and  C6g (arm) instance using TLS 1.3 and AES
> 256 GCM. For both tests I used `taskset -c 2-47 ./h1load -e -ll -P -t 46 -s
> 30 -d 120 -c 500 https://[c6i or c6g ip]:[aws-lc or openssl port]/?s=5k`.
>
> This small difference in this symmetric crypto workload comes down to
> AWS-LC and OpenSSL having similar AES implementations. We observe larger
> performance improvements with our micro-benchmarks for algorithms related
> to the TLS handshake such as 15% reduction  for ECDH with P-256, and 40%
> reduction for P-521 on a C6i. This comes from our s2n-bignum library[6], a
> formally verified bignum library with a focus on performance and
> correctness.
>
> When built with AWS-LC all current regression tests pass. I have included
> a small patch to update your documentation with AWS-LC as an option and I
> attempted to add AWS-LC to your CI. I need a little help figuring out how
> to test that part. Lastly from your  excellent contributing guide I am not
> subscribed so I would like to be cc’d on all responses.
>
> Thanks, Andrew
>
> [1] https://github.com/aws/aws-lc
> [2]  https://github.com/aws/aws-lc/pull/1054
> [3]  https://github.com/aws/aws-lc/pull/1071
> [4]  https://github.com/aws/aws-lc/pull/1083
> [5]
> https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance
> [6]  https://github.com/awslabs/s2n-bignum
>
>
>


Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-12 Thread Илья Шипицин
Hello, Andrew

failure reason was right there ))

[image: image.png]

Thu, 13 Jul 2023 at 01:45, Hopkins, Andrew :

>
> Thanks for the tip, I got the CI running and it found a minor visibility
> issue that we had to fix with our shared build [1]. All but one test [2] is
> now passing in the HAProxy CI while they all pass locally. Do you have any
> suggestions/tips for debugging  this test?
>
> Also the compiler and/or options used in your CI turned a warning into an
> error so I had to update the patch slightly to use the correct callback
> type for modern libcryptos.
>
> src/ssl_sock.c:1183:43: error: passing argument 2 of
> ‘SSL_CTX_get_tlsext_status_cb’ from incompatible pointer type
> [-Werror=incompatible-pointer-types]
>  1183 | SSL_CTX_get_tlsext_status_cb(ctx, );
>   |   ^
>   |   |
>   |   void (**)(void)
> compilation terminated due to -Wfatal-errors.
>
> OpenSSL >= 1.1.1 have the same callback signature as AWS-LC: int
> (*callback)(SSL *, void *). I believe this works with OpenSSL >= 1.1.1.
> because their SSL_CTX_ctrl performs the cast while AWS-LC has a dedicated
> function SSL_CTX_get_tlsext_status_cb with the right type.
>
> [1] https://github.com/aws/aws-lc/pull/1091
> [1]
> https://github.com/andrewhop/haproxy/actions/runs/5537027817/jobs/10105411198?pr=1#step:15:215
>
>
> From: Илья Шипицин 
> Sent: Wednesday, July 12, 2023 12:53 AM
> To: Hopkins, Andrew
> Cc: haproxy@formilux.org
> Subject: RE: [EXTERNAL][PATCH] BUILD: ssl: Build with new cryptographic
> library AWS-LC
>
>
>CAUTION: This email originated from outside of the organization. Do not
> click links or open attachments unless you can confirm the sender and know
> the content is safe.
>
>
>
> Hello, Andrew!
>
>
> you already tried to launch CI in fork [PATCH] Minor: ssl: Build with new
> cryptographic library AWS-LC by andrewhop · Pull Request #1 ·
> andrewhop/haproxy (github.com)
>
>
> please make sure you've enabled GHA for fork (here: Actions ·
> andrewhop/haproxy (github.com))
>
>
> also, current trigger is set to "push"
> haproxy/.github/workflows/vtest.yml at master · andrewhop/haproxy · GitHub
>
>
>
> I'd try
>
>
> on: [ push, pull_request, workflow_dispatch ]
>
>
>
>
>
Wed, 12 Jul 2023 at 02:29, Hopkins, Andrew :
>   Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC) project
> [1]. Our goal is to improve the cryptography we use internally at AWS and
> help our customers externally. In the spirit of helping people use good
> crypto we know it’s important to make it  easy to use AWS-LC everywhere
> they use cryptography. This is why we are interested in integrating AWS-LC
> into HAProxy.
>
> AWS-LC is a fork of BoringSSL which you already partially support. We
> recently merged in several PRs (Full OCSP support [2] and custom extension
> support [3]) to fully support HAProxy the same as OpenSSL. To ensure we
> continue to support HAProxy long term we  added HAProxy built with AWS-LC
> to our CI [4].
>
> In our early testing we see modest improvements in overall throughput when
> compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar setup as
> this blog [5] I observe a small (~2.5%) increase in requests per second for
> 5 kb requests on a C6i (x86) and  C6g (arm) instance using TLS 1.3 and AES
> 256 GCM. For both tests I used `taskset -c 2-47 ./h1load -e -ll -P -t 46 -s
> 30 -d 120 -c 500 https://[c6i or c6g ip]:[aws-lc or openssl port]/?s=5k`.
>
> This small difference in this symmetric crypto workload comes down to
> AWS-LC and OpenSSL having similar AES implementations. We observe larger
> performance improvements with our micro-benchmarks for algorithms related
> to the TLS handshake such as 15% reduction  for ECDH with P-256, and 40%
> reduction for P-521 on a C6i. This comes from our s2n-bignum library[6], a
> formally verified bignum library with a focus on performance and
> correctness.
>
> When built with AWS-LC all current regression tests pass. I have included
> a small patch to update your documentation with AWS-LC as an option and I
> attempted to add AWS-LC to your CI. I need a little help figuring out how
> to test that part. Lastly from your  excellent contributing guide I am not
> subscribed so I would like to be cc’d on all responses.
>
> Thanks, Andrew
>
> [1] https://github.com/aws/aws-lc
> [2]  https://github.com/aws/aws-lc/pull/1054
> [3]  https://github.com/aws/aws-lc/pull/1071
> [4]  https://github.com/aws/aws-lc/pull/1083
> [5]
> https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance
> [6]  https://github.com/awslabs/s2n-bignum
>
>
>


Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-07-12 Thread Илья Шипицин
Hello, Andrew!

you already tried to launch CI in fork [PATCH] Minor: ssl: Build with new
cryptographic library AWS-LC by andrewhop · Pull Request #1 ·
andrewhop/haproxy (github.com) 

please make sure you've enabled GHA for fork (here: Actions ·
andrewhop/haproxy (github.com)
)

also, current trigger is set to "push"
haproxy/.github/workflows/vtest.yml at master · andrewhop/haproxy · GitHub


I'd try

on: [ push, pull_request, workflow_dispatch ]


Wed, 12 Jul 2023 at 02:29, Hopkins, Andrew :

> Hello HAProxy maintainers, I work on the AWS libcrypto (AWS-LC) project
> [1]. Our goal is to improve the cryptography we use internally at AWS and
> help our customers externally. In the spirit of helping people use good
> crypto we know it’s important to make it easy to use AWS-LC everywhere they
> use cryptography. This is why we are interested in integrating AWS-LC into
> HAProxy.
>
> AWS-LC is a fork of BoringSSL which you already partially support. We
> recently merged in several PRs (Full OCSP support [2] and custom extension
> support [3]) to fully support HAProxy the same as OpenSSL. To ensure we
> continue to support HAProxy long term we added HAProxy built with AWS-LC to
> our CI [4].
>
> In our early testing we see modest improvements in overall throughput when
> compared to OpenSSL 3.1 on x86 and arm CPUs. Following a similar setup as
> this blog [5] I observe a small (~2.5%) increase in requests per second for
> 5 kb requests on a C6i (x86) and C6g (arm) instance using TLS 1.3 and AES
> 256 GCM. For both tests I used `taskset -c 2-47 ./h1load -e -ll -P -t 46 -s
> 30 -d 120 -c 500 https://[c6i or c6g ip]:[aws-lc or openssl port]/?s=5k`.
>
> This small difference in this symmetric crypto workload comes down to
> AWS-LC and OpenSSL having similar AES implementations. We observe larger
> performance improvements with our micro-benchmarks for algorithms related
> to the TLS handshake such as 15% reduction for ECDH with P-256, and 40%
> reduction for P-521 on a C6i. This comes from our s2n-bignum library[6], a
> formally verified bignum library with a focus on performance and
> correctness.
>
> When built with AWS-LC all current regression tests pass. I have included
> a small patch to update your documentation with AWS-LC as an option and I
> attempted to add AWS-LC to your CI. I need a little help figuring out how
> to test that part. Lastly from your excellent contributing guide I am not
> subscribed so I would like to be cc’d on all responses.
>
> Thanks, Andrew
>
> [1] https://github.com/aws/aws-lc
> [2] https://github.com/aws/aws-lc/pull/1054
> [3] https://github.com/aws/aws-lc/pull/1071
> [4] https://github.com/aws/aws-lc/pull/1083
> [5]
> https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance
> [6] https://github.com/awslabs/s2n-bignum
>
>
>


Re: QUIC (mostly) working on top of unpatched OpenSSL

2023-07-07 Thread Илья Шипицин
currently, it is client-side QUIC support only, see openssl/CHANGES.md at
master · openssl/openssl · GitHub



Fri, 7 Jul 2023 at 10:58, Aleksandar Lazic :

> Hi.
>
> Just a addendum below to my last mail.
>
> On 2023-07-07 (Fr.) 00:33, Aleksandar Lazic wrote:
> > Hi Willy
> >
> > On 2023-07-06 (Do.) 22:05, Willy Tarreau wrote:
> >> Hi all,
> >>
> >> as the subject says it, Fred managed to make QUIC mostly work on top of
> >> a regular OpenSSL. Credit goes to the NGINX team who found a clever and
> >> absolutely ugly way to abuse OpenSSL callbacks to intercept and inject
> >> data from/to the TLS hello messages. It does have limitations, such as
> >> 0-RTT not being supported, and maybe other ones we're not aware of. I'm
> >> hesitating in merging it because there are some non-negligible impacts
> >> for the QUIC ecosystem itself in doing this, ranging from a possibly
> >> lower performance or reliability that could disappoint some users of the
> >> protocol, to discouraging the efforts to get a real alternative stack
> >> working.
> >>
> >> I've opened the discussion on the QUIC working group here to collect
> >> various opinions and advices:
> >>
> >>
> >>
> https://mailarchive.ietf.org/arch/browse/quic/?gbt=1=M9pkSGzTSHunNC1yeySaB3irCVo
> >>
> >> Unsurprizingly, the perception for now is mostly aligned with my first
> >> feelings, i.e. "OpenSSL will be happy and QUIC will be degraded, that's
> >> a bad idea". But I also know that on the WG we exclusively speak between
> >> implementors, who don't always have the users' perspective.
> >>
> >> I would encourage those who really want to ease QUIC adoption to read
> >> the thread above (possibly even share their opinion, that would be
> >> welcome) so that we can come to a consensus regarding this (e.g. merge,
> >> drop, merge conditioned at build time, or with an expert runtime option,
> >> anything else, I don't know). I feel like it's a difficult stretch to
> >> find the best approach. The "it's not possible at all with openssl,
> >> period" excuse is no longer true, however "it's only a degraded
> approach"
> >> remains true.
> >>
> >> I wouldn't like end-users to just think "pwah, all that for this, I'm
> >> not impressed" without realizing that they wouldn't be benefitting from
> >> everything. But maybe it would be good enough for most of those who are
> >> not going to rebuild QuicTLS or wolfSSL. I sincerely don't know and I
> >> do welcome opinions.
> >
> > Amazing work from the nginx team :-)
> >
> >  From my point of view wolfSSL is the way to go, as the path OpenSSL is
> > on does not look very promising for the future, at least for
> > me. This implies that HAProxy will have different packages for the OS
> > and creates much more work for the nice packaging Persons :-(. I don't
> > know how big the challenge is to run HAProxy complete with wolfSSL, if
> > it's not already done but to have a package like "haproxy-openssl" and
> > "haproxy-quic" which implies wolfSSL would be a nice solution for the
> > HAProxy Users, imho. A nice Change would be if nginx and Apache HTTPd
> > also move to wolfSSL :-).
> > What's not clear to me is how the future of wolfSSL will be as the
> > Company behind the lib looks for now very open for Open Source Projects
> > but who knows the future.
> >
> > Maybe another option could be gnutls as it added the QUIC API in 3.7.0
> > but I think that's even a higher challenge then to move from OpenSSL to
> > wolfSSL then to gnutls just because there is not even a single line of
> > code with gnutls.
> >
> > https://lists.gnupg.org/pipermail/gnutls-help/2020-December/004670.html
> > ...
> > ** libgnutls: Added a new set of API to enable QUIC implementation
> > (#826, #849, #850).
> > ...
> >
> > ngtcp2 have examples with different TLS library, just fyi.
> > https://github.com/ngtcp2/ngtcp2/tree/main/examples
> >
> > Another Question is, is the TLS/SSL Layer in HAProxy enough separated to
> > add another TLS implementation? I'm pretty sure that a lot of people
> > knows this but just for the archive let me share the way how curl handle
> > different TLS backends.
> >
> > https://github.com/curl/curl/tree/master/lib/vtls
> >
> > All in all from my point of view was OpenSSL a good library in the past
> > but for the future should a more modern and open (from Org and Mindset)
> > Library be used, Jm2c.
>
> Interesting point, at least for me, it looks like that OpenSSL starts to
> implement quic, is there any official info from OpenSSL about this part
> in this year? Is there also a statement about the performance issue with
> 3.x?
>
> https://github.com/openssl/openssl/tree/master/ssl/quic
>
> >> Cheers,
> >> Willy
> >
> > Regards
> > Alex
> >
>
>


regression caught in HAProxy + LibreSSL

2023-07-06 Thread Илья Шипицин
Hello,

since this is a day of interesting QUIC things, I tried to run the Interop
suite against an ASAN-enabled LibreSSL and HAProxy.

it "mostly" works, however some bugs were caught (not reproducible with QuicTLS)

QUIC regression on haproxy · Issue #862 · libressl/portable (github.com)
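
for reference, the ASAN part of the build boils down to extra compile/link
flags; a rough sketch (LibreSSL version, prefix and flags are illustrative,
and the library itself also needs -fsanitize=address in its CFLAGS to be
sanitized):

  LIBRESSL_VERSION=3.7.2 ./scripts/build-ssl.sh    # installs under ${HOME}/opt
  make -j$(nproc) TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 \
       SSL_INC=${HOME}/opt/include SSL_LIB=${HOME}/opt/lib \
       DEBUG_CFLAGS="-g -fsanitize=address" LDFLAGS="-fsanitize=address"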


Ilya


Re: QUIC (mostly) working on top of unpatched OpenSSL

2023-07-06 Thread Илья Шипицин
interesting.

I think, I can try run QUIC Interop locally to compare against QuicTLS

Thu, 6 Jul 2023 at 22:08, Willy Tarreau :

> Hi all,
>
> as the subject says it, Fred managed to make QUIC mostly work on top of
> a regular OpenSSL. Credit goes to the NGINX team who found a clever and
> absolutely ugly way to abuse OpenSSL callbacks to intercept and inject
> data from/to the TLS hello messages. It does have limitations, such as
> 0-RTT not being supported, and maybe other ones we're not aware of. I'm
> hesitating in merging it because there are some non-negligible impacts
> for the QUIC ecosystem itself in doing this, ranging from a possibly
> lower performance or reliability that could disappoint some users of the
> protocol, to discouraging the efforts to get a real alternative stack
> working.
>
> I've opened the discussion on the QUIC working group here to collect
> various opinions and advices:
>
>
> https://mailarchive.ietf.org/arch/browse/quic/?gbt=1=M9pkSGzTSHunNC1yeySaB3irCVo
>
> Unsurprizingly, the perception for now is mostly aligned with my first
> feelings, i.e. "OpenSSL will be happy and QUIC will be degraded, that's
> a bad idea". But I also know that on the WG we exclusively speak between
> implementors, who don't always have the users' perspective.
>
> I would encourage those who really want to ease QUIC adoption to read
> the thread above (possibly even share their opinion, that would be
> welcome) so that we can come to a consensus regarding this (e.g. merge,
> drop, merge conditioned at build time, or with an expert runtime option,
> anything else, I don't know). I feel like it's a difficult stretch to
> find the best approach. The "it's not possible at all with openssl,
> period" excuse is no longer true, however "it's only a degraded approach"
> remains true.
>
> I wouldn't like end-users to just think "pwah, all that for this, I'm
> not impressed" without realizing that they wouldn't be benefitting from
> everything. But maybe it would be good enough for most of those who are
> not going to rebuild QuicTLS or wolfSSL. I sincerely don't know and I
> do welcome opinions.
>
> Cheers,
> Willy
>
>


Re: Debian + QUIC / HTTP/3

2023-06-05 Thread Илья Шипицин
I think that people use the README as a landing page.
maybe it's worth adding a Docker Hub link there? it is hard for a first-time
user to identify whether docker image(s) exist or not.

Mon, 5 Jun 2023 at 11:57, Artur :

> Thank you Илья and Dinko.
>
> What I can see is that haproxy doc suggest using QuicTLS library.
> The build process is well explained in Dockerfile. That's perfect.
>
> I've also seen some information about haproxy 2.6 configuration for
> HTTP/3 over QUIC in the following article. I imagine it may be suitable
> for 2.8 version as well...
>
> https://www.haproxy.com/blog/announcing-haproxy-2-6
>
> --
> Best regards,
> Artur
>
>
>


Re: Debian + QUIC / HTTP/3

2023-06-05 Thread Илья Шипицин
There are at least:

"build from source" haproxy/INSTALL at master · haproxy/haproxy (github.com)

"use docker images" haproxytech's Profile | Docker Hub


maybe there are other ways?
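
for the "build from source" path, the gist is roughly this (a sketch; the
install prefix is the default used by the repo's scripts/build-ssl.sh helper):

  git clone https://github.com/haproxy/haproxy.git && cd haproxy
  QUICTLS=yes ./scripts/build-ssl.sh        # builds QuicTLS under ${HOME}/opt
  make -j$(nproc) TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 \
       SSL_INC=${HOME}/opt/include SSL_LIB=${HOME}/opt/lib \
       ADDLIB="-Wl,-rpath,${HOME}/opt/lib"

then it's a matter of a "quic4@" bind line with "alpn h3" in the frontend, as
described in the haproxy 2.6 announcement article mentioned elsewhere in this
thread.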

Mon, 5 Jun 2023 at 09:44, Artur :

> Hello,
>
> What is suggested/recommended way to get QUIC / HTTP/3 working in
> haproxy on Debian ?
>
> --
> Best regards,
> Artur
>
>
>


Re: Slower responses from me starting now

2023-06-02 Thread Илья Шипицин
nice, nothing will stop us from rewriting HAProxy in rust

Fri, 2 Jun 2023 at 20:44, Willy Tarreau :

> Hi all,
>
> with 2.8 released and a nice weather here, I decided to take a few weeks
> of holidays (I think last time was in september 2016 so I don't remember
> how it feels). No travel plans in sight and mostly hacking stuff at home,
> catching up with kernel stuff I postponed due to the upcoming release,
> etc, so I'll continue to look at my messages from time to time but with
> no real frenezy, so keep this in mind if you send me a direct message and
> don't get a response, and preferably contact the list or the subsystem
> maintainers for questions, patches etc, this will be much more efficient.
> And if I get more messages than I can handle, it's not completely
> impossible I'll proceed like Linus: delete all on return ;-)
>
> Christopher and Amaury just verified they can produce releases if needed,
> so you're in good hands.
>
> Cheers,
> Willy
>
>


Re: Followup on openssl 3.0 note seen in another thread

2023-05-25 Thread Илья Шипицин
Thu, 25 May 2023 at 17:11, Willy Tarreau :

> On Thu, May 25, 2023 at 07:33:11AM -0600, Shawn Heisey wrote:
> > On 3/11/23 22:52, Willy Tarreau wrote:
> > > According to the OpenSSL devs, 3.1 should be "4 times better than 3.0",
> > > so it could still remain 5-40 times worse than 1.1.1. I intend to run
> > > some tests soon on it on a large machine, but preparing tests takes a
> > > lot of time and my progress got delayed by the painful bug of last
> week.
> > > I'll share my findings anywya.
> >
> > Just noticed that quictls has a special branch for lock changes in 3.1.0:
> >
> > https://github.com/quictls/openssl/tree/openssl-3.1.0+quic+locks
>
> Yes, it was made so that the few of us who reported important issues can
> retest the impact of the changes. I hope to be able to run a test on a
> smaller machine soon.
>
> > I am not sure how to go about proper testing for performance on this.  I
> did
> > try a very basic "curl a URL 1000 times in bash" test back when 3.1.0 was
> > released, but that showed 3.0.8 and 3.1.0 were faster than 1.1.1, so
> > concurrency is likely required to see a problem.
>
> The problem definitely is concurrency, so 1000 curl will show nothing
> and will not even match production traffic. You'll need to use a load
>

I do not think 1000 instances of curl are required.

I recall doing some comparative tests (when we evaluated arm64 servers);
some really lightweight runs with profiling enabled were enough to compare
"before" and "after".

I'll try the JMeter next weekend maybe.



> generator that allows you to tweak the TLS resume support, like we do
> with h1load's argument "--tls-reuse". Also I don't know how often the
> recently modified locks are used per server connection and per client
> connection, that's what the SSL guys want to know since they're not able
> to test their changes.
>
> The first test report *before* the changes was published here a month
> ago:
>
>
> https://github.com/openssl/openssl/issues/20286#issuecomment-1527869072
>
> And now we have to find time to setup a test platform to test this one
> in more or less similar conditions (or at least run a before/after).
>
> Do not hesitate to participate if you see you can provide results
> comparing the two quictls-3.1 branches, it will help already. It's even
> possible that these efforts do not bring anything yet, we don't know and
> that's what they want to know.
>
> Thanks,
> Willy
>
>
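
a sketch of the kind of before/after run this boils down to ($TARGET is a
placeholder, and the --tls-reuse value format is an assumption, check
h1load -h for the exact syntax):

  # full handshakes vs. resumed sessions, same target, same duration
  taskset -c 2-47 ./h1load -e -ll -P -t 46 -d 60 -c 500 --tls-reuse 0 "https://$TARGET/?s=5k"
  taskset -c 2-47 ./h1load -e -ll -P -t 46 -d 60 -c 500 --tls-reuse 100 "https://$TARGET/?s=5k"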


Re: [PATCH 1/1] BUILD: SSL: enable TLS key material logging if built with LibreSSL>=3.5.0

2023-05-24 Thread Илья Шипицин
please ignore this patch.

the LibreSSL implementation of key logging is intended only to silence build
warnings; the functions themselves do nothing.

Tue, 23 May 2023 at 22:57, Ilya Shipitsin :

> LibreSSL implements TLS key material since 3.5.0, let's enable it
> ---
>  include/haproxy/openssl-compat.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/include/haproxy/openssl-compat.h
> b/include/haproxy/openssl-compat.h
> index 7fb153810..ed162031c 100644
> --- a/include/haproxy/openssl-compat.h
> +++ b/include/haproxy/openssl-compat.h
> @@ -88,7 +88,8 @@
>  #define HAVE_SSL_SCTL
>  #endif
>
> -#if (HA_OPENSSL_VERSION_NUMBER >= 0x10101000L)
> +/* minimum OpenSSL 1.1.1 & libreSSL 3.5.0 */
> +#if (defined(LIBRESSL_VERSION_NUMBER) && (LIBRESSL_VERSION_NUMBER >=
> 0x305fL)) || (HA_OPENSSL_VERSION_NUMBER >= 0x10101000L)
>  #define HAVE_SSL_KEYLOG
>  #endif
>
> --
> 2.40.1
>
>


Re: [PATCH] re-enable EVP_chacha20_poly1305() for LibreSSL

2023-05-23 Thread Илья Шипицин
also, there'll be a patch for unlocking  haproxy/openssl-compat.h at master
· haproxy/haproxy · GitHub
<https://github.com/haproxy/haproxy/blob/master/include/haproxy/openssl-compat.h#L92>
for
LibreSSL soon
(it was too boring to run QUIC Interop without keylog)

Tue, 23 May 2023 at 17:06, Илья Шипицин :

> oops.
>
btw, not enabling chacha20_poly1305 leads to LibreSSL API usage
inconsistency
> QUIC regression on LibreSSL-3.7.2 (HAProxy) · Issue #860 ·
> libressl/portable (github.com)
> <https://github.com/libressl/portable/issues/860>
>
> it is claimed that OpenSSL does not check for null deref as well, so
> LibreSSL just mimics it :)
> joke.
>
Tue, 23 May 2023 at 16:57, Willy Tarreau :
>
>> Hi Ilya,
>>
>> On Sun, May 21, 2023 at 12:57:21PM +0200,  ??? wrote:
>> > Hello,
>> >
>> > that exclude was only needed for pre-3.6.0 LibreSSL, while support was
>> > added in
>> > 3.6.0, so every released LibreSSL supports that, no need to keep "ifdef"
>>
>> While I'm probably not the one who will be the best to review this, you
>> forgot to attach the patch :-)  (for once it's not me).
>>
>> Willy
>>
>


Re: [PATCH] re-enable EVP_chacha20_poly1305() for LibreSSL

2023-05-23 Thread Илья Шипицин
oops.

btw, not enabling chacha20_poly1305 leads to LibreSSL API usage
inconsistency
QUIC regression on LibreSSL-3.7.2 (HAProxy) · Issue #860 ·
libressl/portable (github.com)


it is claimed that OpenSSL does not check for null deref as well, so
LibreSSL just mimics it :)
joke.

Tue, 23 May 2023 at 16:57, Willy Tarreau :

> Hi Ilya,
>
> On Sun, May 21, 2023 at 12:57:21PM +0200,  ??? wrote:
> > Hello,
> >
> > that exclude was only needed for pre-3.6.0 LibreSSL, while support was
> > added in
> > 3.6.0, so every released LibreSSL supports that, no need to keep "ifdef"
>
> While I'm probably not the one who will be the best to review this, you
> forgot to attach the patch :-)  (for once it's not me).
>
> Willy
>
From 4c2a848a9e9eb244244082c29cdcd5eebddbf9c5 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sun, 21 May 2023 12:51:46 +0200
Subject: [PATCH 1/3] BUILD: chacha20_poly1305 for libressl

this reverts d2be9d4c48b71b2132938dbfac36142cc7b8f7c4

LibreSSL implements EVP_chacha20_poly1305() with EVP_CIPHER for every
released version starting with 3.6.0
---
 include/haproxy/quic_tls.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/include/haproxy/quic_tls.h b/include/haproxy/quic_tls.h
index a2eb2230a..35efbb91d 100644
--- a/include/haproxy/quic_tls.h
+++ b/include/haproxy/quic_tls.h
@@ -118,10 +118,8 @@ static inline const EVP_CIPHER *tls_aead(const SSL_CIPHER 
*cipher)
return EVP_aes_128_gcm();
case TLS1_3_CK_AES_256_GCM_SHA384:
return EVP_aes_256_gcm();
-#if !defined(LIBRESSL_VERSION_NUMBER)
case TLS1_3_CK_CHACHA20_POLY1305_SHA256:
return EVP_chacha20_poly1305();
-#endif
 #ifndef USE_OPENSSL_WOLFSSL
case TLS1_3_CK_AES_128_CCM_SHA256:
return EVP_aes_128_ccm();
-- 
2.39.2.windows.1



couple of questions on QUIC Interop

2023-05-22 Thread Илья Шипицин
Hello,

I played with the QUIC Interop suite (for HAProxy + LibreSSL) over the weekend...

a couple of questions:

1) the patch haproxy-qns/0001-Add-timestamps-to-stderr-sink.patch at
master · haproxytech/haproxy-qns (github.com) is not included in haproxy
upstream; is there a good reason for that?

2) why is the "quic-dev" repo used, not the primary haproxy upstream?
haproxy-qns/Dockerfile at master · haproxytech/haproxy-qns (github.com)
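
for anyone wanting to reproduce locally, the image builds straight from that
repo, e.g.:

  git clone https://github.com/haproxytech/haproxy-qns.git && cd haproxy-qns
  docker build -t haproxy-qns:local .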


cheers,
Ilya


[PATCH] re-enable EVP_chacha20_poly1305() for LibreSSL

2023-05-21 Thread Илья Шипицин
Hello,

that exclude was only needed for pre-3.6.0 LibreSSL, while support was added
in 3.6.0, so every released LibreSSL supports it; no need to keep the "ifdef"

Cheers,
Ilya


[PATCH] CI: drop dedicated Fedora m32 pipeline

2023-05-14 Thread Илья Шипицин
Hello,

no need to keep it, cross build matrix covers this.

Ilya
From 2d03749317d8963551cfef90b6a8b164e12ba643 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sun, 14 May 2023 21:40:20 +0200
Subject: [PATCH] CI: drop Fedora m32 pipeline in favour of cross matrix

Fedora m32 monthly was introduced before cross matrix. Actually,
many of cross builds are 32 bit, no need to keep dedicated Fedora
definition
---
 .github/workflows/m32.yml | 42 ---
 1 file changed, 42 deletions(-)
 delete mode 100644 .github/workflows/m32.yml

diff --git a/.github/workflows/m32.yml b/.github/workflows/m32.yml
deleted file mode 100644
index 1b61f1e7a..0
--- a/.github/workflows/m32.yml
+++ /dev/null
@@ -1,42 +0,0 @@
-#
-# special purpose CI: test build on x86_64 with "m32" flag enabled
-# let us run those builds weekly
-#
-# some details might be found at GH: https://github.com/haproxy/haproxy/issues/1760
-#
-
-name: 32 Bit
-
-on:
-  schedule:
-- cron: "0 0 * * 5"
-
-
-permissions:
-  contents: read
-
-jobs:
-  build:
-name: Fedora
-runs-on: ubuntu-latest
-container:
-  image: fedora:rawhide
-steps:
-- uses: actions/checkout@v3
-- name: Install dependencies
-  run: |
-dnf -y groupinstall "Development Tools"
-dnf -y install 'perl(FindBin)' 'perl(File::Compare)' perl-IPC-Cmd 'perl(File::Copy)' glibc-devel.i686
-- name: Compile QUICTLS
-  run: |
-QUICTLS=yes QUICTLS_EXTRA_ARGS="-m32 linux-generic32" ./scripts/build-ssl.sh
-- name: Compile HAProxy
-  run: |
-make -j$(nproc) CC=gcc ERR=1 \
-  TARGET=linux-glibc \
-  USE_OPENSSL=1 \
-  USE_QUIC=1 \
-  DEBUG_CFLAGS="-m32" \
-  LDFLAGS="-m32" \
-  SSL_LIB=${HOME}/opt/lib \
-  SSL_INC=${HOME}/opt/include
-- 
2.40.1



[PATCH] CI: re-enable Fedora Rawhide clang builds

2023-05-12 Thread Илья Шипицин
Hello,

this enables monthly clang builds (previously only gcc was run).

Ilya
From 9eaae2062b2800e166263855c096dfd44cc03a39 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Fri, 12 May 2023 19:26:49 +0200
Subject: [PATCH] CI: enable monthly Fedora Rawhide clang builds

that was temporarily disabled due to
https://github.com/haproxy/haproxy/issues/1868

we are unblocked, let us enable clang in matrix
---
 .github/workflows/fedora-rawhide.yml | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/.github/workflows/fedora-rawhide.yml b/.github/workflows/fedora-rawhide.yml
index 36ab7c141..7e735a36c 100644
--- a/.github/workflows/fedora-rawhide.yml
+++ b/.github/workflows/fedora-rawhide.yml
@@ -11,9 +11,7 @@ jobs:
   build_and_test:
 strategy:
   matrix:
-cc: [ gcc
-# ,clang  # commented due to https://github.com/haproxy/haproxy/issues/1868
-]
+cc: [ gcc, clang ]
 name: ${{ matrix.cc }}
 runs-on: ubuntu-latest
 container:
-- 
2.40.1



[PATCH] cleanup: remove redundant check

2023-05-10 Thread Илья Шипицин
Hello,

small cleanup patch.
it mutes a Coverity finding.

Ilya
From 4fdccb44933c2a91c7d6711bf821cc8b1d4c6d30 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Wed, 26 Apr 2023 21:05:12 +0200
Subject: [PATCH 1/2] CLEANUP: src/listener.c: remove redundant NULL check

fixes #2031

quoting Willy Tarreau:

"Originally the listeners were intended to work without a bind_conf
(e.g. for FTP processing) hence these tests, but over time the
bind_conf has become omnipresent"
---
 src/listener.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/listener.c b/src/listener.c
index d5390ed85..3da01df21 100644
--- a/src/listener.c
+++ b/src/listener.c
@@ -160,7 +160,7 @@ struct task *accept_queue_process(struct task *t, void 
*context, unsigned int st
if (!(li->bind_conf->options & BC_O_UNLIMITED)) {
HA_ATOMIC_UPDATE_MAX(_max,
 
update_freq_ctr(_per_sec, 1));
-   if (li->bind_conf && li->bind_conf->options & 
BC_O_USE_SSL) {
+   if (li->bind_conf->options & BC_O_USE_SSL) {
HA_ATOMIC_UPDATE_MAX(_max,
 
update_freq_ctr(_per_sec, 1));
}
-- 
2.39.2.windows.1



Re: [PATCH] CI: more granular failure on build matrix generating

2023-05-08 Thread Илья Шипицин
np.

It addresses quite rare conditions, when either the GitHub API or the OpenBSD
website is down.
yet we have seen that happen once in 2 years.

Mon, 8 May 2023 at 14:07, Willy Tarreau :

> On Mon, May 08, 2023 at 01:59:15PM +0200,  ??? wrote:
> > seems, it was accidentally lost ...
>
> Indeed, I don't konw why I missed it. Thanks for resending Ilya,
> now applied!
>
> Willy
>


Re: [PATCH] CI: more granular failure on build matrix generating

2023-05-08 Thread Илья Шипицин
it seems it was accidentally lost ...

Wed, 26 Apr 2023 at 20:45, Илья Шипицин :

> Hello,
>
> recent openbsd ftp unavailability has shown that we should more carefully
> generate build matrix
>
> Ilya
>


[PATCH] CI: more granular failure on build matrix generating

2023-04-26 Thread Илья Шипицин
Hello,

the recent OpenBSD FTP unavailability has shown that we should generate the
build matrix more carefully

Ilya
From 62069d1e7edefdd313bdc7567e3817069632bfda Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Wed, 26 Apr 2023 20:39:39 +0200
Subject: [PATCH] CI: more granular failure on generating build matrix

when some api endpoints used for determine latest OpenSSL, LibreSSL
are unavailable, fail only those builds, not entire matrix
---
 .github/matrix.py | 15 ++-
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/.github/matrix.py b/.github/matrix.py
index a2a02e968..7f22c43bb 100755
--- a/.github/matrix.py
+++ b/.github/matrix.py
@@ -33,11 +33,13 @@ def determine_latest_openssl(ssl):
 headers = {}
 if environ.get("GITHUB_TOKEN") is not None:
 headers["Authorization"] = "token {}".format(environ.get("GITHUB_TOKEN"))
-
 request = urllib.request.Request(
 "https://api.github.com/repos/openssl/openssl/tags;, headers=headers
 )
-openssl_tags = urllib.request.urlopen(request)
+try:
+  openssl_tags = urllib.request.urlopen(request)
+except:
+  return "OPENSSL_VERSION=failed_to_detect"
 tags = json.loads(openssl_tags.read().decode("utf-8"))
 latest_tag = ""
 for tag in tags:
@@ -50,9 +52,12 @@ def determine_latest_openssl(ssl):
 
 @functools.lru_cache(5)
 def determine_latest_libressl(ssl):
-libressl_download_list = urllib.request.urlopen(
-"https://cdn.openbsd.org/pub/OpenBSD/LibreSSL/;
-)
+try:
+  libressl_download_list = urllib.request.urlopen(
+  "https://cdn.openbsd.org/pub/OpenBSD/LibreSSL/;
+  )
+except:
+  return "LIBRESSL_VERSION=failed_to_detect"
 for line in libressl_download_list.readlines():
 decoded_line = line.decode("utf-8")
 if "libressl-" in decoded_line and ".tar.gz.asc" in decoded_line:
-- 
2.40.0



[PATCH] temporarily switch to libressl mirror

2023-04-26 Thread Илья Шипицин
Hello,

it is probably good idea to learn not to fail when libressl site is down
(I'll work on that).

as a fast fix, let us switch to mirror.

Ilya
From 283f9b790071f5333f00792f883f470f12b7933c Mon Sep 17 00:00:00 2001
From: Ilia Shipitsin 
Date: Wed, 26 Apr 2023 12:15:11 +0200
Subject: [PATCH 2/2] BUILD: ssl: switch LibreSSL to Fastly CDN

OpenBSD ftp is down, let us switch to CDN
---
 scripts/build-ssl.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/build-ssl.sh b/scripts/build-ssl.sh
index 4360adf34..a1935dd1e 100755
--- a/scripts/build-ssl.sh
+++ b/scripts/build-ssl.sh
@@ -59,7 +59,7 @@ build_openssl () {
 download_libressl () {
 if [ ! -f "download-cache/libressl-${LIBRESSL_VERSION}.tar.gz" ]; then
 wget -P download-cache/ \
-   "https://ftp.openbsd.org/pub/OpenBSD/LibreSSL/libressl-${LIBRESSL_VERSION}.tar.gz"
+   "https://cdn.openbsd.org/pub/OpenBSD/LibreSSL/libressl-${LIBRESSL_VERSION}.tar.gz"
 fi
 }
 
-- 
2.39.2.windows.1

From d48b80838c22bdb1ad799cd14d6e8635428bffd5 Mon Sep 17 00:00:00 2001
From: Ilia Shipitsin 
Date: Wed, 26 Apr 2023 12:12:54 +0200
Subject: [PATCH 1/2] CI: switch to Fastly CDN to download LibreSSL

OpenBSD ftp is down, let us switch to mirror
---
 .github/matrix.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/matrix.py b/.github/matrix.py
index a0e90bc2d..a2a02e968 100755
--- a/.github/matrix.py
+++ b/.github/matrix.py
@@ -51,7 +51,7 @@ def determine_latest_openssl(ssl):
 @functools.lru_cache(5)
 def determine_latest_libressl(ssl):
 libressl_download_list = urllib.request.urlopen(
-"http://ftp.openbsd.org/pub/OpenBSD/LibreSSL/;
+"https://cdn.openbsd.org/pub/OpenBSD/LibreSSL/;
 )
 for line in libressl_download_list.readlines():
 decoded_line = line.decode("utf-8")
-- 
2.39.2.windows.1



[PATCH] spell fixes, spelling whitelist addition

2023-04-22 Thread Илья Шипицин
Hello,

yet another round of spell fixes

Ilya
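
for reference, this kind of typo is usually caught with a plain codespell pass
over the tree, e.g. (paths and options illustrative):

  pip install codespell
  codespell --skip=.git src include doc reg-tests
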
From d1884cb1de7292ab657d27676c08cb7aaf3f1cba Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 22 Apr 2023 20:20:39 +0200
Subject: [PATCH 4/4] CLEANUP: assorted typo fixes in the code and comments

This is 36th iteration of typo fixes
---
 doc/lua-api/index.rst  |  2 +-
 include/haproxy/stconn-t.h |  2 +-
 src/hlua.c | 10 +-
 src/hlua_fcn.c |  4 ++--
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index f7b453e73..66d5da67f 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -1338,7 +1338,7 @@ Server class
   care about a single server, and also prevents useless wakeups.
 
   For instance, if you want to be notified for UP/DOWN events on a given set of
-  servers, it is recommended to peform multiple per-server subscriptions since
+  servers, it is recommended to perform multiple per-server subscriptions since
   it will be more efficient that doing a single global subscription that will
   filter the received events.
   Unless you really want to be notified for servers events of ALL servers of
diff --git a/include/haproxy/stconn-t.h b/include/haproxy/stconn-t.h
index dd76a9422..fe10950b6 100644
--- a/include/haproxy/stconn-t.h
+++ b/include/haproxy/stconn-t.h
@@ -153,7 +153,7 @@ enum sc_flags {
 	SC_FL_NEED_BUFF = 0x0100,  /* SC waits for an rx buffer allocation to complete */
 	SC_FL_NEED_ROOM = 0x0200,  /* SC needs more room in the rx buffer to store incoming data */
 
-	SC_FL_RCV_ONCE  = 0x0400,  /* Don't loop to receive data. cleared after a sucessful receive */
+	SC_FL_RCV_ONCE  = 0x0400,  /* Don't loop to receive data. cleared after a successful receive */
 	SC_FL_SND_ASAP  = 0x0800,  /* Don't wait for sending. cleared when all data were sent */
 	SC_FL_SND_NEVERWAIT = 0x1000,  /* Never wait for sending (permanent) */
 	SC_FL_SND_EXP_MORE  = 0x1000,  /* More data expected to be sent very soon. cleared when all data were sent */
diff --git a/src/hlua.c b/src/hlua.c
index 5c29bc5ac..87f022d6b 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -9113,7 +9113,7 @@ static struct task *hlua_event_runner(struct task *task, void *context, unsigned
 
 		/* We reached the limit of pending events in the queue: we should
 		 * warn the user, and temporarily pause the subscription to give a chance
-		 * to the handler to catch up? (it also prevents ressource shortage since
+		 * to the handler to catch up? (it also prevents resource shortage since
 		 * the queue could grow indefinitely otherwise)
 		 * TODO: find a way to inform the handler that it missed some events
 		 * (example: stats within the subscription in event_hdl api exposed via lua api?)
@@ -9176,7 +9176,7 @@ static struct task *hlua_event_runner(struct task *task, void *context, unsigned
 		RESET_SAFE_LJMP(hlua_sub->hlua);
 
 		/* At this point the event was successfully translated into hlua ctx,
-		 * or hlua error occured, so we can safely discard it
+		 * or hlua error occurred, so we can safely discard it
 		 */
 		event_hdl_async_free_event(event);
 		event = NULL;
@@ -9199,7 +9199,7 @@ static struct task *hlua_event_runner(struct task *task, void *context, unsigned
 			task_wakeup(task, TASK_WOKEN_OTHER);
 		}
 		else if (hlua_sub->paused) {
-			/* empty queue, the handler catched up: resume the subscription */
+			/* empty queue, the handler caught up: resume the subscription */
 			event_hdl_resume(hlua_sub->sub);
 			hlua_sub->paused = 0;
 		}
@@ -9308,7 +9308,7 @@ __LJMP static struct event_hdl_sub_type hlua_check_event_sub_types(lua_State *L,
 /* Wrapper for hlua_fcn_new_event_sub(): catch errors raised by
  * the function to prevent LJMP
  *
- * If no error occured, the function returns 1, else it returns 0 and
+ * If no error occurred, the function returns 1, else it returns 0 and
  * the error message is pushed at the top of the stack
  */
 __LJMP static int _hlua_new_event_sub_safe(lua_State *L)
@@ -9328,7 +9328,7 @@ static int hlua_new_event_sub_safe(lua_State *L, struct event_hdl_sub *sub)
 		case LUA_OK:
 			return 1;
 		default:
-			/* error was catched */
+			/* error was caught */
 			return 0;
 	}
 }
diff --git a/src/hlua_fcn.c b/src/hlua_fcn.c
index f275c5434..188fcf4fd 100644
--- a/src/hlua_fcn.c
+++ b/src/hlua_fcn.c
@@ -1023,7 +1023,7 @@ int hlua_server_get_name(lua_State *L)
 }
 
 /* __index metamethod for server class
- * support for additionnal keys that are missing from the main table
+ * support for additional keys that are missing from the main table
  * stack:1 = table (server class), stack:2 = requested key
  * Returns 1 if key is supported
  * else returns 0 to make lua return NIL value to the caller
@@ -1567,7 +1567,7 @@ int hlua_proxy_get_uuid(lua_State *L)
 }
 
 /* __index metamethod for proxy class
- * support for additionnal keys that are missing from the main table
+ * support for additional keys that are 

[PATCH] regtests: remove unsupported "stats" keyword

2023-04-22 Thread Илья Шипицин
Hello,

small cleanup.

Ilya
From a92f679ba384d38a749fae909763c0e0598baec7 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 22 Apr 2023 20:09:05 +0200
Subject: [PATCH 2/4] reg-tests/connection/proxy_protocol_random_fail.vtc:
 remove unsupported "stats" keyword

***  h1debug|[ALERT](1476756) : config : parsing [/tmp/haregtests-2023-04-22_19-24-10.diuT6B/vtc.1476661.74f1092e/h1/cfg:7] : 'stats' is
***  h1debug|not supported anymore.
---
 reg-tests/connection/proxy_protocol_random_fail.vtc | 1 -
 1 file changed, 1 deletion(-)

diff --git a/reg-tests/connection/proxy_protocol_random_fail.vtc b/reg-tests/connection/proxy_protocol_random_fail.vtc
index bcaa03cf2..1ae33deb9 100644
--- a/reg-tests/connection/proxy_protocol_random_fail.vtc
+++ b/reg-tests/connection/proxy_protocol_random_fail.vtc
@@ -25,7 +25,6 @@ syslog Slog_1 -repeat 8 -level info {
 haproxy h1 -conf {
 global
 tune.ssl.default-dh-param 2048
-stats bind-process 1
 log ${Slog_1_addr}:${Slog_1_port} len 2048 local0 debug err
 
 defaults
-- 
2.40.0



[PATCH] CI: bump cirrus-ci freebsd to 13-2

2023-04-22 Thread Илья Шипицин
Hello

minor freebsd cirrus-ci image update

Ilya
From 11ecce42b32fd533a262e5e2adc0487d347aacf0 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 22 Apr 2023 19:13:03 +0200
Subject: [PATCH 1/4] CI: cirrus-ci: bump FreeBSD image to 13-1

FreeBSD-13.2 released on April 11, 2023
---
 .cirrus.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index e6b63e1d1..2993b943a 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -1,7 +1,7 @@
 FreeBSD_task:
   freebsd_instance:
 matrix:
-  image_family: freebsd-13-1
+  image_family: freebsd-13-2
   only_if: $CIRRUS_BRANCH =~ 'master|next'
   install_script:
 - pkg update -f && pkg upgrade -y && pkg install -y openssl git gmake lua53 socat pcre
-- 
2.40.0



[PATCH] CLEANUP: use "offsetof" macro where appropriate

2023-04-15 Thread Илья Шипицин
Hello,

small cleanup patch attached.

Ilya
From 77babd04c417709bb41c951701d62dec0574eb35 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 15 Apr 2023 23:39:43 +0200
Subject: [PATCH] CLEANUP: use "offsetof" where appropriate

let's use the C library macro "offsetof"
---
 src/cache.c| 4 ++--
 src/ssl_sock.c | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/cache.c b/src/cache.c
index 39e947820..4deb34ea8 100644
--- a/src/cache.c
+++ b/src/cache.c
@@ -425,12 +425,12 @@ static void delete_entry(struct cache_entry *del_entry)
 
 static inline struct shared_context *shctx_ptr(struct cache *cache)
 {
-	return (struct shared_context *)((unsigned char *)cache - ((struct shared_context *)NULL)->data);
+	return (struct shared_context *)((unsigned char *)cache -  offsetof(struct shared_context, data));
 }
 
 static inline struct shared_block *block_ptr(struct cache_entry *entry)
 {
-	return (struct shared_block *)((unsigned char *)entry - ((struct shared_block *)NULL)->data);
+	return (struct shared_block *)((unsigned char *)entry - offsetof(struct shared_block, data));
 }
 
 
diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index abbcfa6af..740fc0aeb 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -4205,7 +4205,7 @@ static inline void sh_ssl_sess_free_blocks(struct shared_block *first, struct sh
 /* return first block from sh_ssl_sess  */
 static inline struct shared_block *sh_ssl_sess_first_block(struct sh_ssl_sess_hdr *sh_ssl_sess)
 {
-	return (struct shared_block *)((unsigned char *)sh_ssl_sess - ((struct shared_block *)NULL)->data);
+	return (struct shared_block *)((unsigned char *)sh_ssl_sess - offsetof(struct shared_block, data));
 
 }
 
-- 
2.40.0



[PATCH] CI: monthly Fedora Rawhide, bump "actions/checkout" to v3

2023-04-08 Thread Илья Шипицин
Hello,

A couple of patches:

1) Fedora Rawhide (known to include the most recent compilers) monthly
builds
2) small cleanup, "actions/checkout" bumped to v3

Cheers,
Ilya
From 2ffed99562df8be55ba6e120f9952f53904b2269 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 8 Apr 2023 13:32:31 +0200
Subject: [PATCH 2/2] CI: bump "actions/checkout" to v3 for cross zoo matrix

actions/checkout@v2 is deprecated, accidently it was not updated in our
build definition
---
 .github/workflows/cross-zoo.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/cross-zoo.yml b/.github/workflows/cross-zoo.yml
index e2a5816fa..f2c8d7ad8 100644
--- a/.github/workflows/cross-zoo.yml
+++ b/.github/workflows/cross-zoo.yml
@@ -97,7 +97,7 @@ jobs:
 sudo apt-get -yq --force-yes install \
 gcc-${{ matrix.platform.arch }} \
 ${{ matrix.platform.libs }}
-- uses: actions/checkout@v2
+- uses: actions/checkout@v3
 
 
 - name: install quictls
-- 
2.40.0

From 79486b3780f009777df799e5f057b1ee0dee7f4f Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 8 Apr 2023 13:30:42 +0200
Subject: [PATCH 1/2] CI: enable monthly test on Fedora Rawhide

Fedora Rawhide is shipped with the most recent compilers, not yet released with
more conservative distro. It is good to catch compile errors on those compilers.
---
 .github/workflows/fedora-rawhide.yml | 61 
 1 file changed, 61 insertions(+)
 create mode 100644 .github/workflows/fedora-rawhide.yml

diff --git a/.github/workflows/fedora-rawhide.yml b/.github/workflows/fedora-rawhide.yml
new file mode 100644
index 0..36ab7c141
--- /dev/null
+++ b/.github/workflows/fedora-rawhide.yml
@@ -0,0 +1,61 @@
+name: Fedora/Rawhide/QuicTLS
+
+on:
+  schedule:
+- cron: "0 0 25 * *"
+
+permissions:
+  contents: read
+
+jobs:
+  build_and_test:
+strategy:
+  matrix:
+cc: [ gcc
+# ,clang  # commented due to https://github.com/haproxy/haproxy/issues/1868
+]
+name: ${{ matrix.cc }}
+runs-on: ubuntu-latest
+container:
+  image: fedora:rawhide
+steps:
+- uses: actions/checkout@v3
+- name: Install dependencies
+  run: |
+dnf -y groupinstall 'C Development Tools and Libraries' 'Development Tools'
+dnf -y install pcre-devel zlib-devel pcre2-devel 'perl(FindBin)' perl-IPC-Cmd 'perl(File::Copy)' 'perl(File::Compare)' lua-devel socat findutils systemd-devel clang
+- name: Install VTest
+  run: scripts/build-vtest.sh
+- name: Install QuicTLS
+  run: QUICTLS=yes scripts/build-ssl.sh
+- name: Build contrib tools
+  run: |
+make admin/halog/halog
+make dev/flags/flags
+make dev/poll/poll
+make dev/hpack/decode dev/hpack/gen-enc dev/hpack/gen-rht
+- name: Compile HAProxy with ${{ matrix.cc }}
+  run: |
+make -j3 CC=${{ matrix.cc }} V=1 ERR=1 TARGET=linux-glibc USE_OPENSSL=1 USE_QUIC=1 USE_ZLIB=1 USE_PCRE=1 USE_PCRE_JIT=1 USE_LUA=1 USE_SYSTEMD=1 ADDLIB="-Wl,-rpath,${HOME}/opt/lib" SSL_LIB=${HOME}/opt/lib SSL_INC=${HOME}/opt/include
+make install
+- name: Show HAProxy version
+  id: show-version
+  run: |
+echo "::group::Show dynamic libraries."
+ldd $(command -v haproxy)
+echo "::endgroup::"
+haproxy -vv
+echo "version=$(haproxy -v |awk 'NR==1{print $3}')" >> $GITHUB_OUTPUT
+- name: Run VTest for HAProxy ${{ steps.show-version.outputs.version }}
+  id: vtest
+  run: |
+make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
+- name: Show VTest results
+  if: ${{ failure() && steps.vtest.outcome == 'failure' }}
+  run: |
+for folder in ${TMPDIR}/haregtests-*/vtc.*; do
+  printf "::group::"
+  cat $folder/INFO
+  cat $folder/LOG
+  echo "::endgroup::"
+done
-- 
2.40.0



Re: Interest in HA Proxy from Sonicwall

2023-04-05 Thread Илья Шипицин
ср, 5 апр. 2023 г. в 20:18, Aleksandar Lazic :

> Hi Kenny.
>
> On 05.04.23 20:04, Kenny Lederman wrote:
> > Hi team,
> >
> > Do you have an account rep assigned to Sonicwall that could help me with
> > getting a POC set up?
>
> This is the Open Source Mailing list, if you want to get in touch with
> the Company behind HAProxy please use this.
>

The original intention was not clear :) Maybe Kenny is looking for open source
contributors to hire for this purpose.

Otherwise, yes, https://www.haproxy.com is the proper way to reach
sales/commercial contacts.


>
> https://www.haproxy.com/contact-us/
>
> Of course can you setup the Open Source HAProxy by your team, the
> documentation is hosted at this URL.
>
> http://docs.haproxy.org/
>
> > Thank you,
> >
> > Kenny Lederman
>
> Best Regards
> Alex
>
> > Enterprise Account Manager
> >
> > (206) 455-6488 - Office
> >
> > (847) 932-9771 - Cell
> >
> > kenny.leder...@softchoice.com 
> >
> > 
> >
> >
> >
> > Softchoice 
> >
> >
> >
> > 415 1st Avenue North, Suite 300
> > Seattle, WA  98109
> >
> >
> >
> > Manage Subscription
> > Unsubscribe
> > Privacy
> > 
> >
>
>


[PATCH] CI: add memory related code flow smoke test

2023-04-01 Thread Илья Шипицин
Hello,

now that https://github.com/haproxy/haproxy/issues/2082 is resolved,
let's add a CI test.

Ilya
From 43f66093c25b182b22b26bd9037a9e2105e02521 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 1 Apr 2023 13:29:46 +0200
Subject: [PATCH] CI: run smoke tests on config syntax to check memory related
 issues

config syntax check seems add a value on testing code path not
covered by VTest, also checks are very fast
---
 .github/workflows/vtest.yml | 4 
 1 file changed, 4 insertions(+)

diff --git a/.github/workflows/vtest.yml b/.github/workflows/vtest.yml
index 5137099de..25d3cc72e 100644
--- a/.github/workflows/vtest.yml
+++ b/.github/workflows/vtest.yml
@@ -140,6 +140,10 @@ jobs:
 # the '-n' soft limit to the hard limit, thus failing to run.
 ulimit -n 65536
 make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
+- name: Config syntax check memleak smoke testing
+  if: ${{ contains(matrix.name, 'ASAN') }}
+  run: |
+./haproxy -f .github/h2spec.config -c
 - name: Show VTest results
   if: ${{ failure() && steps.vtest.outcome == 'failure' }}
   run: |
-- 
2.40.0



[PATCH] spelling fixes, CI filter

2023-04-01 Thread Илья Шипицин
Hello,

Please find some spelling fixes attached.
Also, the folders ./doc/design-thoughts and ./doc/internals are now excluded from
further checks.

cheers,
Ilya
From 15f8a4031bb53a77155ffd923d17d12478848bb9 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 1 Apr 2023 12:27:31 +0200
Subject: [PATCH 2/2] CI: exclude doc/{design-thoughts,internals} from spell
 check

as those directories do contain many documents written in French,
codespell is catching a lot of false positives scanning them.
---
 .github/workflows/codespell.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/codespell.yml b/.github/workflows/codespell.yml
index 2243d8b37..1e118c70b 100644
--- a/.github/workflows/codespell.yml
+++ b/.github/workflows/codespell.yml
@@ -15,5 +15,5 @@ jobs:
 - uses: codespell-project/codespell-problem-matcher@v1
 - uses: codespell-project/actions-codespell@master
   with:
-skip: CHANGELOG,Makefile,*.fig,*.pem
+skip: CHANGELOG,Makefile,*.fig,*.pem,./doc/design-thoughts,./doc/internals
 ignore_words_list: ist,ists,hist,wan,ca,cas,que,ans,te,nd,referer,ot,uint,iif,fo,keep-alives,dosen,ifset,thrid,strack,ba,chck,hel,unx,mor
-- 
2.40.0

From 0c9552a966b90ca4969978bbea85513576de8299 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 1 Apr 2023 12:26:42 +0200
Subject: [PATCH 1/2] CLEANUP: assorted typo fixes in the code and comments

This is 35th iteration of typo fixes
---
 doc/SPOE.txt   |  4 ++--
 doc/configuration.txt  |  6 +++---
 include/haproxy/htx.h  |  2 +-
 include/haproxy/quic_frame-t.h |  2 +-
 include/haproxy/quic_frame.h   |  2 +-
 include/haproxy/spoe-t.h   |  2 +-
 include/haproxy/spoe.h |  2 +-
 include/haproxy/stconn-t.h |  2 +-
 src/http_act.c |  2 +-
 src/http_conv.c|  4 ++--
 src/http_ext.c | 14 +++---
 src/mux_h2.c   |  2 +-
 src/mux_quic.c |  2 +-
 src/quic_conn.c|  4 ++--
 src/ssl_sock.c |  2 +-
 src/stconn.c   |  2 +-
 src/thread.c   |  2 +-
 17 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/doc/SPOE.txt b/doc/SPOE.txt
index 9a4cc4b5c..2001c1fa0 100644
--- a/doc/SPOE.txt
+++ b/doc/SPOE.txt
@@ -230,7 +230,7 @@ max-frame-size 
 
 max-waiting-frames 
   Set the maximum number of frames waiting for an acknowledgement on the same
-  connection. This value is only used when the pipelinied or asynchronus
+  connection. This value is only used when the pipelinied or asynchronous
   exchanges between HAProxy and SPOA are enabled. By default, it is set to 20.
 
 messages  ...
@@ -248,7 +248,7 @@ messages  ...
 
 option async
 no option async
-  Enable or disable the support of asynchronus exchanges between HAProxy and
+  Enable or disable the support of asynchronous exchanges between HAProxy and
   SPOA. By default, this option is enabled.
 
 
diff --git a/doc/configuration.txt b/doc/configuration.txt
index e236f0a87..586689ca2 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -3390,7 +3390,7 @@ tune.ssl.ocsp-update.mindelay 
 tune.stick-counters 
   Sets the number of stick-counters that may be tracked at the same time by a
   connection or a request via "track-sc*" actions in "tcp-request" or
-  "http-request" rules. The defaut value is set at build time by the macro
+  "http-request" rules. The default value is set at build time by the macro
   MAX_SESS_STK_CTR, and defaults to 3. With this setting it is possible to
   change the value and ignore the one passed at build time. Increasing this
   value may be needed when porting complex configurations to haproxy, but users
@@ -9175,7 +9175,7 @@ no option forwarded
 those result will be used as 'for' parameter nodeport value
 
 
-  Since HAProxy works in reverse-proxy mode, servers are loosing some request
+  Since HAProxy works in reverse-proxy mode, servers are losing some request
   context (request origin: client ip address, protocol used...)
 
   A common way to address this limitation is to use the well known
@@ -9189,7 +9189,7 @@ no option forwarded
   forwarded header (RFC7239).
   More information here: https://www.rfc-editor.org/rfc/rfc7239.html
 
-  The use of this single header allow to convey multiple informations
+  The use of this single header allow to convey numerous details
   within the same header, and most importantly, fixes the proxy chaining
   issue. (the rfc allows for multiple chained proxies to append their own
   values to an already existing header).
diff --git a/include/haproxy/htx.h b/include/haproxy/htx.h
index 339f0a559..e80ecad68 100644
--- a/include/haproxy/htx.h
+++ b/include/haproxy/htx.h
@@ -30,7 +30,7 @@
 #include 
 #include 
 
-/* ->extra field value when the payload lenght is unknown (non-chunked message
+/* ->extra field value when the payload length is unknown (non-chunked message
  * with no 

Re: PostgreSQL: How can use slave for some read operations?

2023-03-15 Thread Илья Шипицин
There are several L7 balancing tools for this, such as pgPool.

As for HAProxy, it currently does not provide such advanced PostgreSQL
routing.

ср, 15 мар. 2023 г. в 06:09, Muhammed Fahid :

> Hi,
>
> I have A master and a slave PostgreSQL databases. I would like to know
> that major read operations can be processed with slave for reducing load in
> master ??
>
> for example : I have a large number of products.when customers want to
> list all products. Is it possible to read from a slave database? instead of
> from the master database ?. If major read operations are done in master its
> slows down the other operations in master.
>


Re: wolfSSL: how to treat expired certs ?

2023-03-12 Thread Илья Шипицин
By the way, the build-only tests already pass with wolfSSL. Should we start
with a build-only wolfSSL CI job?

A few "vtc" tests fail for various reasons.

вс, 12 мар. 2023 г. в 18:35, Илья Шипицин :

> Hello,
>
> during enabling wolfSSL CI I met the following
>
> #top  TEST reg-tests/ssl/ssl_default_server.vtc FAILED (5.123) exit=2
>
> ***  h1debug|<134>Mar 12 12:04:49 haproxy[115196]: unix:1
> [12/Mar/2023:12:04:49.922] ssl-lst/1: SSL client CA chain cannot be verified
> ***  h1debug|fd[0x12] OpenSSL error[0x2d] : unknown error number
> ***  h1debug|fd[0x12] OpenSSL error[0x139] : received alert fatal error
>  dT1.152
> ***  h1debug|fd[0x12] OpenSSL error[0x2d] : unknown error number
> ***  h1debug|fd[0x12] OpenSSL error[0x139] : received alert fatal error
>  dT1.157
> ***  h1debug|fd[0x13] OpenSSL error[0x97] : ASN date error, current
> date after
> ***  h1debug|fd[0x13] OpenSSL error[0x97] : ASN date error, current
> date after
> ***  h1debug|fd[0x13] OpenSSL error[0x97] : ASN date error, current
> date after
> ***  h1debug|fd[0x13] OpenSSL error[0x97] : ASN date error, current
> date after
> ***  h1debug|<134>Mar 12 12:04:51 haproxy[115196]: unix:1
> [12/Mar/2023:12:04:50.963] ssl-lst/1: SSL client CA chain cannot be verified
>
>
> I wonder what is prefferable way of addressing that
>
> 1) excluding several "vtc" if haproxy is built with wolfSSL
> 2) adding "WOLFSSL_LOAD_FLAG_DATE_ERR_OKAY" to cert validation
>
> cheers,
> Ilya
>


wolfSSL: how to treat expired certs ?

2023-03-12 Thread Илья Шипицин
Hello,

While enabling the wolfSSL CI I ran into the following:

#top  TEST reg-tests/ssl/ssl_default_server.vtc FAILED (5.123) exit=2

***  h1debug|<134>Mar 12 12:04:49 haproxy[115196]: unix:1
[12/Mar/2023:12:04:49.922] ssl-lst/1: SSL client CA chain cannot be verified
***  h1debug|fd[0x12] OpenSSL error[0x2d] : unknown error number
***  h1debug|fd[0x12] OpenSSL error[0x139] : received alert fatal error
 dT1.152
***  h1debug|fd[0x12] OpenSSL error[0x2d] : unknown error number
***  h1debug|fd[0x12] OpenSSL error[0x139] : received alert fatal error
 dT1.157
***  h1debug|fd[0x13] OpenSSL error[0x97] : ASN date error, current
date after
***  h1debug|fd[0x13] OpenSSL error[0x97] : ASN date error, current
date after
***  h1debug|fd[0x13] OpenSSL error[0x97] : ASN date error, current
date after
***  h1debug|fd[0x13] OpenSSL error[0x97] : ASN date error, current
date after
***  h1debug|<134>Mar 12 12:04:51 haproxy[115196]: unix:1
[12/Mar/2023:12:04:50.963] ssl-lst/1: SSL client CA chain cannot be verified


I wonder what is the preferable way of addressing that:

1) excluding several "vtc" tests if haproxy is built with wolfSSL
2) adding "WOLFSSL_LOAD_FLAG_DATE_ERR_OKAY" to the cert validation

cheers,
Ilya


Re: RFQ - Royal Court - HAProxy

2023-03-11 Thread Илья Шипицин
You have reached the open source mailing list; sales inquiries are not handled here.

Please reach HAProxy Technologies instead: https://www.haproxy.com/contact-us/

вс, 12 мар. 2023 г. в 07:46, Jumanh Khalid :

> Dear Team,
> We are waiting for your kind reply.
>
> Regards,
>
> *Jumanh Khalid*
>
> *Operations Manager.*
>
> *Neusol Middle East.*
>
> Riyadh | Melbourne | Calgary | Karachi | Dubai
>
> Office 108, Al Ummam Commercial Center, Riyadh, KSA
>
> Mob: +966 50 562 9863 - Tel: +966 11 292 3918
>
> jum...@neusolme.com   – www.neusolme.com
>
> --
> *From:* Jumanh Khalid 
> *Sent:* Tuesday, March 7, 2023 12:33 PM
> *To:* haproxy@formilux.org 
> *Subject:* Re: RFQ - Royal Court - HAProxy
>
> Dear Team ,
>
> Kindly send me HAProxy quotation for the requirements below. The end
> customer is Royal Court - KSA.
>
>- 02 Load Balancer for DMZ.
>- 02 Load Balancers for Internal
>
> Thank you.
>
> *Jumanh Khalid*
>
> *Operations Manager.*
>
> *Neusol Middle East.*
>
> Riyadh | Melbourne | Calgary | Karachi | Dubai
>
> Office 108, Al Ummam Commercial Center, Riyadh, KSA
>
> Mob: +966 50 562 9863 - Tel: +966 11 292 3918
>
> jum...@neusolme.com   – www.neusolme.com
>
>


Re: HAProxy performance on OpenBSD

2023-01-23 Thread Илья Шипицин
Gmail decided to put the original message into spam, so I replied to the first reply.

Indeed it was mentioned, sorry.

пн, 23 янв. 2023 г. в 14:22, Willy Tarreau :

> Hi Ilya,
>
> On Mon, Jan 23, 2023 at 02:11:56PM +0600,  ??? wrote:
> > I would start with big picture view
> >
> > 1) are CPUs utilized at 100% ?
> > 2) what is CPU usage in details - fraction of system, user, idle ... ?
> >
> > it will allow us to narrow things and find what is the bottleneck, either
> > kernel space or user space.
>
> This was mentioned:
>
>  the haproxy process uses about 1545% CPU
>  under this load. Overall CPU utilization is 21% user, 0% nice, 37% sys,
>  18% spin, 0.7% intr, 23.1% idle
>
> The %sys is high. The %spin could indicate spinlocks and if so it's
> related to the kernel running in SMP and not necessarily being very
> scalable.
>
> Willy
>


Re: HAProxy performance on OpenBSD

2023-01-23 Thread Илья Шипицин
Also, I wonder how LibreSSL compares to OpenSSL performance-wise.
I'll try "openssl speed" (I recall LibreSSL has the same feature), but I'm
not sure I can get an OpenBSD machine.

Can you try haproxy + openssl-1.1.1? (It is considered the most performant
these days.)

пн, 23 янв. 2023 г. в 14:17, Илья Шипицин :

> and fun fact from my own experience.
> I used to run load balancer on FreeBSD with OpenSSL built from ports.
> somehow I chose "assembler optimization" to "no" and OpenSSL big numbers
> arith were implemented in slow way
>
> I was able to find big fraction of BN-functions using "perf" tool.
> something like 25% of general impact
>
> later, I used "openssl speed", I compared Linux <--> FreeBSD (on required
> cipher suites)
>
> How can I interpret openssl speed output? - Stack Overflow
> <https://stackoverflow.com/questions/17410270/how-can-i-interpret-openssl-speed-output>
>
> пн, 23 янв. 2023 г. в 14:11, Илья Шипицин :
>
>> I would start with big picture view
>>
>> 1) are CPUs utilized at 100% ?
>> 2) what is CPU usage in details - fraction of system, user, idle ... ?
>>
>> it will allow us to narrow things and find what is the bottleneck, either
>> kernel space or user space.
>>
>> пн, 23 янв. 2023 г. в 14:01, Willy Tarreau :
>>
>>> Hi Marc,
>>>
>>> On Mon, Jan 23, 2023 at 12:13:13AM -0600, Marc West wrote:
>>> (...)
>>> > I understand that raw performance on OpenBSD is sometimes not as high
>>> as
>>> > other OSes in some scenarios, but the difference of 500 vs 10,000+
>>> > req/sec and 1100 vs 40,000 connections here is very large so I wanted
>>> to
>>> > see if there are any thoughts, known issues, or tunables that could
>>> > possibly help improve HAProxy throughput on OpenBSD?
>>>
>>> Based on my experience a long time ago (~13-14 years), I remember that
>>> PF's connection tracking didn't scale at all with the number of
>>> connections. It was very clear that there was a very high per-packet
>>> lookup cost indicating that a hash table was too small. Unfortunately
>>> I didn't know how to change such settings, and since my home machine
>>> was being an ADSL line anyway, the line would have been filled long
>>> before the hash table so I didn't really care. But I was a bit shocked
>>> by this observation. I supposed that since then it has significantly
>>> evolved, but it would be worth having a look around this.
>>>
>>> > The usual OS tunables openfiles-cur/openfiles-max are raised to 200k,
>>> > kern.maxfiles=205000 (openfiles peaked at 15k), and haproxy stats
>>> > reports those as expected. PF state limit is raised to 1 million and
>>> > peaked at 72k in use. BIOS power profile is set to max performance.
>>>
>>> I think you should try to flood the machine using UDP traffic to see
>>> the difference between the part that happens in the network stack and
>>> the part that happens in the rest of the system (haproxy included). If
>>> a small UDP flood on accepted ports brings the machine on its knees,
>>> it's definitely related to the network stack and/or filtering/tracking.
>>> If it does nothing to it, I would tend to say that the lower network
>>> layers and PF are innocent. This would leave us with TCP and haproxy.
>>> A SYN flood test could be useful, maybe the listening queues are too
>>> small and incoming packets are dropped too fast.
>>>
>>> At the TCP layer, a long time ago OpenBSD used to be a bit extremist
>>> in the way it produces random sequence numbers. I don't know how it
>>> is today nor if this has a significant cost. Similarly, outgoing
>>> connections will need a random source port, and this can be expensive,
>>> particularly when the number of concurrent connections raises and ports
>>> become scarce, though you said that even blocked traffic causes harm
>>> to the machine, so I doubt this is your concern for now.
>>>
>>> > pid = 78180 (process #1, nbproc = 1, nbthread = 32)
>>> > uptime = 1d 19h10m11s
>>> > system limits: memmax = unlimited; ulimit-n = 20
>>> > maxsock = 20; maxconn = 99904; maxpipes = 0
>>> >
>>> > No errors that I can see in logs about hitting any limits. There is no
>>> > change in results with http vs https, http/1.1 vs h2, with or without
>>> > httplog, or reducing nbthread on this 40 core machine. If there are any
>>> > other details I can pr

Re: HAProxy performance on OpenBSD

2023-01-23 Thread Илья Шипицин
And a fun fact from my own experience: I used to run a load balancer on FreeBSD
with OpenSSL built from ports. Somehow I had set "assembler optimization" to "no",
so the OpenSSL big-number arithmetic was implemented in a slow way.

I was able to spot a large fraction of BN functions with the "perf" tool,
something like 25% of the overall impact.

Later, I used "openssl speed" to compare Linux <--> FreeBSD (on the required
cipher suites):

How can I interpret openssl speed output? - Stack Overflow
<https://stackoverflow.com/questions/17410270/how-can-i-interpret-openssl-speed-output>

пн, 23 янв. 2023 г. в 14:11, Илья Шипицин :

> I would start with big picture view
>
> 1) are CPUs utilized at 100% ?
> 2) what is CPU usage in details - fraction of system, user, idle ... ?
>
> it will allow us to narrow things and find what is the bottleneck, either
> kernel space or user space.
>
> пн, 23 янв. 2023 г. в 14:01, Willy Tarreau :
>
>> Hi Marc,
>>
>> On Mon, Jan 23, 2023 at 12:13:13AM -0600, Marc West wrote:
>> (...)
>> > I understand that raw performance on OpenBSD is sometimes not as high as
>> > other OSes in some scenarios, but the difference of 500 vs 10,000+
>> > req/sec and 1100 vs 40,000 connections here is very large so I wanted to
>> > see if there are any thoughts, known issues, or tunables that could
>> > possibly help improve HAProxy throughput on OpenBSD?
>>
>> Based on my experience a long time ago (~13-14 years), I remember that
>> PF's connection tracking didn't scale at all with the number of
>> connections. It was very clear that there was a very high per-packet
>> lookup cost indicating that a hash table was too small. Unfortunately
>> I didn't know how to change such settings, and since my home machine
>> was being an ADSL line anyway, the line would have been filled long
>> before the hash table so I didn't really care. But I was a bit shocked
>> by this observation. I supposed that since then it has significantly
>> evolved, but it would be worth having a look around this.
>>
>> > The usual OS tunables openfiles-cur/openfiles-max are raised to 200k,
>> > kern.maxfiles=205000 (openfiles peaked at 15k), and haproxy stats
>> > reports those as expected. PF state limit is raised to 1 million and
>> > peaked at 72k in use. BIOS power profile is set to max performance.
>>
>> I think you should try to flood the machine using UDP traffic to see
>> the difference between the part that happens in the network stack and
>> the part that happens in the rest of the system (haproxy included). If
>> a small UDP flood on accepted ports brings the machine on its knees,
>> it's definitely related to the network stack and/or filtering/tracking.
>> If it does nothing to it, I would tend to say that the lower network
>> layers and PF are innocent. This would leave us with TCP and haproxy.
>> A SYN flood test could be useful, maybe the listening queues are too
>> small and incoming packets are dropped too fast.
>>
>> At the TCP layer, a long time ago OpenBSD used to be a bit extremist
>> in the way it produces random sequence numbers. I don't know how it
>> is today nor if this has a significant cost. Similarly, outgoing
>> connections will need a random source port, and this can be expensive,
>> particularly when the number of concurrent connections raises and ports
>> become scarce, though you said that even blocked traffic causes harm
>> to the machine, so I doubt this is your concern for now.
>>
>> > pid = 78180 (process #1, nbproc = 1, nbthread = 32)
>> > uptime = 1d 19h10m11s
>> > system limits: memmax = unlimited; ulimit-n = 20
>> > maxsock = 20; maxconn = 99904; maxpipes = 0
>> >
>> > No errors that I can see in logs about hitting any limits. There is no
>> > change in results with http vs https, http/1.1 vs h2, with or without
>> > httplog, or reducing nbthread on this 40 core machine. If there are any
>> > other details I can provide please let me know.
>>
>> At least I'm seeing you're using kqueue, which is a good point.
>>
>> >   source  0.0.0.0 usesrc clientip
>>
>> I don't know if it's on-purpose that you're using transparent proxying
>> to the servers, but it's very likely that it will increase the processing
>> cost at the lower layers by creating extra states in the network sessions
>> table. Again this will only have an effect for traffic between haproxy and
>> the servers.
>>
>> > listen test_https
>> >   bind ip.ip.ip.ip:443 ssl crt /path/to/cert.pem no-tlsv11 alpn
>> h2,http/1.1

Re: HAProxy performance on OpenBSD

2023-01-23 Thread Илья Шипицин
I would start with a big-picture view:

1) are the CPUs utilized at 100%?
2) what is the CPU usage in detail - the fractions of system, user, idle, ...?

That will allow us to narrow things down and find the bottleneck, either in
kernel space or in user space.

пн, 23 янв. 2023 г. в 14:01, Willy Tarreau :

> Hi Marc,
>
> On Mon, Jan 23, 2023 at 12:13:13AM -0600, Marc West wrote:
> (...)
> > I understand that raw performance on OpenBSD is sometimes not as high as
> > other OSes in some scenarios, but the difference of 500 vs 10,000+
> > req/sec and 1100 vs 40,000 connections here is very large so I wanted to
> > see if there are any thoughts, known issues, or tunables that could
> > possibly help improve HAProxy throughput on OpenBSD?
>
> Based on my experience a long time ago (~13-14 years), I remember that
> PF's connection tracking didn't scale at all with the number of
> connections. It was very clear that there was a very high per-packet
> lookup cost indicating that a hash table was too small. Unfortunately
> I didn't know how to change such settings, and since my home machine
> was being an ADSL line anyway, the line would have been filled long
> before the hash table so I didn't really care. But I was a bit shocked
> by this observation. I supposed that since then it has significantly
> evolved, but it would be worth having a look around this.
>
> > The usual OS tunables openfiles-cur/openfiles-max are raised to 200k,
> > kern.maxfiles=205000 (openfiles peaked at 15k), and haproxy stats
> > reports those as expected. PF state limit is raised to 1 million and
> > peaked at 72k in use. BIOS power profile is set to max performance.
>
> I think you should try to flood the machine using UDP traffic to see
> the difference between the part that happens in the network stack and
> the part that happens in the rest of the system (haproxy included). If
> a small UDP flood on accepted ports brings the machine on its knees,
> it's definitely related to the network stack and/or filtering/tracking.
> If it does nothing to it, I would tend to say that the lower network
> layers and PF are innocent. This would leave us with TCP and haproxy.
> A SYN flood test could be useful, maybe the listening queues are too
> small and incoming packets are dropped too fast.
>
> At the TCP layer, a long time ago OpenBSD used to be a bit extremist
> in the way it produces random sequence numbers. I don't know how it
> is today nor if this has a significant cost. Similarly, outgoing
> connections will need a random source port, and this can be expensive,
> particularly when the number of concurrent connections raises and ports
> become scarce, though you said that even blocked traffic causes harm
> to the machine, so I doubt this is your concern for now.
>
> > pid = 78180 (process #1, nbproc = 1, nbthread = 32)
> > uptime = 1d 19h10m11s
> > system limits: memmax = unlimited; ulimit-n = 20
> > maxsock = 20; maxconn = 99904; maxpipes = 0
> >
> > No errors that I can see in logs about hitting any limits. There is no
> > change in results with http vs https, http/1.1 vs h2, with or without
> > httplog, or reducing nbthread on this 40 core machine. If there are any
> > other details I can provide please let me know.
>
> At least I'm seeing you're using kqueue, which is a good point.
>
> >   source  0.0.0.0 usesrc clientip
>
> I don't know if it's on-purpose that you're using transparent proxying
> to the servers, but it's very likely that it will increase the processing
> cost at the lower layers by creating extra states in the network sessions
> table. Again this will only have an effect for traffic between haproxy and
> the servers.
>
> > listen test_https
> >   bind ip.ip.ip.ip:443 ssl crt /path/to/cert.pem no-tlsv11 alpn
> h2,http/1.1
>
> One thing you can try here is to duplicate that line to have multiple
> listening sockets (or just append "shards X" to specify the number of
> sockets you want). One of the benefits is that it will multiply the
> number of listening sockets hence increase the global queue size. Maybe
> some of your packets are lost in socket queues and this could improve
> the situation.
>
> I don't know if you have something roughly equivalent to "perf" on
> OpenBSD nowadays, as that could prove extremely useful to figure where
> the CPU time is spent. Other than that I'm a bit out of ideas.
>
> Willy
>
>


Re: Information Required For PostgreSQL HA

2023-01-18 Thread Илья Шипицин
There might be professional paid services that can help with migrating to F5,
but I'm afraid this is the wrong place to ask for that kind of service.

чт, 19 янв. 2023 г. в 13:07, Willy Tarreau :

> On Thu, Jan 19, 2023 at 06:40:30AM +, Zahid Haseeb wrote:
> > ENVIRONMENT DETAIL
> > We have setup high availability environment for KONG API Gateway
> product. We
> > used two kong applications and two postgresql databases and placed a
> haproxy
> > load balancer between kong application and postgresql database. Every
> product
> > used in this environment is opensource. Now we want to replace only
> > opensource haproxy product with an existing fortigate/F5 load balancer
> > instead of haproxy. Is it possible. please advise
>
> That's probably one of the funniest request we've received here in many
> years :-)
>
> You're basically asking those who make the product that you use if you
> can replace it with another product they don't know, and this, without
> even any detail on your config so that even if some participants here
> would have knowledge on multiple products, they wouldn't be able to
> respond. Also, you should be aware that members of this list tend to
> value opensource, so explaining that you want to replace your opensource
> product with a proprietary one is not going to give you a lot of help
> I'm afraid.
>
> Good luck in your fun adventures anyway! And if you manage to do it,
> do not forget to document it publicly, explaining what that would
> bring you, beyond paying for something you previously had for free!
>
> Willy
>
>


Re: is there releases.json ?

2023-01-11 Thread Илья Шипицин
ср, 11 янв. 2023 г. в 20:52, Willy Tarreau :

> Hi Ilya,
>
> On Wed, Jan 11, 2023 at 08:39:43PM +0600,  ??? wrote:
> > Hello,
> >
> > is "releases.json" generated by haproxy/make-releases-json at master ·
> > haproxy/haproxy (github.com)
> > <
> https://github.com/haproxy/haproxy/blob/master/scripts/make-releases-json>
> > published somewhere ?
>
> Yes, it's in the download/$VER/src directory. E.g:
>
>   http://www.haproxy.org/download/2.7/src/releases.json



thanks!


>
>
> Willy
>
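
For anyone who wants to consume that file from a script, here is a minimal Python
sketch (not taken from the thread) that fetches a branch's releases.json and
pretty-prints it; the branch number is just an example, and the file's schema is
not shown here, so nothing beyond valid JSON is assumed:

# Minimal sketch: fetch one branch's releases.json and pretty-print it.
# "2.7" is only an example branch; the JSON structure is not documented
# in this thread, so we do not assume any particular fields.
import json
import urllib.request

def fetch_releases(branch="2.7"):
    url = "http://www.haproxy.org/download/%s/src/releases.json" % branch
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    print(json.dumps(fetch_releases(), indent=2))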


is there releases.json ?

2023-01-11 Thread Илья Шипицин
Hello,

is "releases.json" generated by haproxy/make-releases-json at master ·
haproxy/haproxy (github.com)

published
somewhere ?

Ilya


Re: [PATCH 0/5] Changes to matrix.py

2022-12-29 Thread Илья Шипицин
I'm fine with the reformatting/caching/whatever.

By the way, Tim, while on this, can you please add LibreSSL-3.7.0 (the fixed
release) to the stable branches?
I had forgotten: right now we do not run LibreSSL for the stable branches at all.

чт, 29 дек. 2022 г. в 22:40, Tim Duesterhus :

> Willy,
>
> please find some opinionated (formatting) changes to matrix.py that I
> believe
> improve readability and maintainability of that script.
>
> All of them may be backported if desired, but I did not add any such note
> to
> the commit message. Also feel free to drop any patches you disagree with.
>
> Best regards
>
> Tim Duesterhus (6):
>   CI: Improve headline in matrix.py
>   CI: Add in-memory cache for the latest OpenSSL/LibreSSL
>   CI: Use proper `if` blocks instead of conditional expressions in
> matrix.py
>   CI: Unify the `GITHUB_TOKEN` name across matrix.py and vtest.yml
>   CI: Explicitly check environment variable against `None` in matrix.py
>   CI: Reformat `matrix.py` using `black`
>
>  .github/matrix.py   | 70 ++---
>  .github/workflows/vtest.yml |  2 +-
>  2 files changed, 50 insertions(+), 22 deletions(-)
>
> --
> 2.39.0
>
>
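
Tim's actual patches are not reproduced in this thread. Purely as an illustration
of what an "in-memory cache for the latest OpenSSL/LibreSSL" can look like, here
is a hedged Python sketch built around functools.lru_cache; the function name,
the repository argument and the "first tag is newest" shortcut are assumptions,
not the real matrix.py logic:

# Sketch only: memoize the GitHub tag lookup so the matrix generator
# queries the API at most once per repository per run, no matter how
# many matrix entries ask for the "latest" version.
import functools
import json
import urllib.request

@functools.lru_cache(maxsize=None)
def latest_tag(repo):
    # repo would be e.g. "openssl/openssl"; the real script applies its
    # own filtering instead of blindly taking the first tag.
    url = "https://api.github.com/repos/%s/tags" % repo
    with urllib.request.urlopen(url) as response:
        tags = json.loads(response.read().decode("utf-8"))
    return tags[0]["name"] if tags else ""

# Repeated calls with the same argument hit the cache, not the API:
#   latest_tag("openssl/openssl"); latest_tag("openssl/openssl")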


Re: testing haproxy against older/newer gcc compilers

2022-12-29 Thread Илья Шипицин
чт, 29 дек. 2022 г. в 22:06, Willy Tarreau :

> Hi Ilya,
>
> On Thu, Dec 29, 2022 at 09:24:43PM +0600,  ??? wrote:
> > Hello,
> >
> > I noticed some patches/commits related to "fix compilation on gcc-4/5..."
> >
> > I came to an idea to use official gcc images:
> > https://hub.docker.com/_/gcc/tags?page=1
> > that mostly works in Github actions except gcc-4.8 :(
> >
> > so...
> > are we interested in (monthly ?) run of something like this
> >
> https://github.com/chipitsine/haproxy/actions/runs/3801460101/jobs/6465951522
>
> Well, I'd say it depends on the effort required for this. We often get
> reports of breakage on older compilers, but at the same time they're so
> rarely used that issues can sometimes last for weeks or months. And most
> often it's a combination of a compiler and a specific architecture or
> certain build options that breaks.
>
> For example I cannot reproduce the issue you found above with gcc-4.7,
> gcc-5.5 nor gcc-6.5 on x86_64. It typically means that it's in fact an
> API problem in OpenSSL that was likely fixed at some point. Trying to
> fix it by removing the const would probably issue warnings on other
> compiler/openssl combinations.
>

Yes, that particular failure is related to the OpenSSL shipped with the gcc docker
image.
Maybe we can omit ERR=1.


>
> FWIW I'm using 11.3 locally on my laptop, 9.5 and 6.5 on the build farm
> and sometimes 4.7 on the build farm as well or when I build for less
> common architectures such as MIPS. I seldom do some minimal builds on
> 4.4 when I feel brave enough to trigger a compilation on my old Atom
> server, and used to have 4.5 on our AIX/PPC machine (which doesn't
> boot anymore, dead disk).
>
> I tend to think that the coverage is generally sufficient even for
> older compilers, and that those who need to build with them either do
> not care about an occasional warning or are not in a hurry and are
> willing to report the issue and wait for it to be fixed.
>
> Thus if you're interested in trying old versions, maybe it can make
> sense to run on the latest 4.8. It was far from having been the best,
> but RHEL7 shipped with it and it's still one of the reasons we sometimes
> get feedback about build issues.
>

ok, I'll check RHEL7


>
> Willy
>


testing haproxy against older/newer gcc compilers

2022-12-29 Thread Илья Шипицин
Hello,

I noticed some patches/commits related to "fix compilation on gcc-4/5..."

I came up with the idea of using the official gcc images:
https://hub.docker.com/_/gcc/tags?page=1
That mostly works in GitHub Actions, except for gcc-4.8 :(

So... are we interested in a (monthly?) run of something like this?
https://github.com/chipitsine/haproxy/actions/runs/3801460101/jobs/6465951522

Ilya


Re: Failures on "Generate Build Matrix"

2022-12-22 Thread Илья Шипицин
haproxy/vtest.yml at master · chipitsine/haproxy (github.com)


The secret name can be arbitrary, for example "TOKEN";
the environment variable is GITHUB_API_TOKEN.

пт, 23 дек. 2022 г. в 00:12, Willy Tarreau :

> On Fri, Dec 23, 2022 at 12:08:29AM +0600,  ??? wrote:
> > not perfect, but it works
>
> Can you please elaborate ? You sent a two-line screenshot of
> something I have no idea what this is nor what to do with it.
> Are you suggesting to rename the token or something else ? I'm
> sorry but your messages are too cryptic for me Ilya.
>
> Willy
>


Re: Failures on "Generate Build Matrix"

2022-12-22 Thread Илья Шипицин
not perfect, but it works

[image: image.png]


From GitHub's point of view, if the token is bad, you'll get a 401.
As long as I'm getting a 200, I assume it works for the "openssl" org as well :)

пт, 23 дек. 2022 г. в 00:04, Willy Tarreau :

> On Thu, Dec 22, 2022 at 11:56:24PM +0600,  ??? wrote:
> > you can limit token scope to read repo information.
>
> I tried anyway, it created one and failed with:
>
> Failed to add secret. Secret names must not start with GITHUB_.
>
> So I guess we should have tried it before committing the entry :-/
>
> Willy
>
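
Beyond watching for a 401 versus a 200, GitHub's public /rate_limit endpoint shows
directly which quota a request is being charged against. Here is a small Python
sketch, assuming only that endpoint and the GITHUB_API_TOKEN variable name used
elsewhere in this thread:

# Sketch: query GitHub's rate-limit endpoint, optionally authenticated.
# Anonymous calls share a small per-IP quota; calls with a valid token
# get the much larger authenticated quota, which is easy to verify here.
import json
import os
import urllib.request

def github_rate_limit():
    headers = {}
    token = os.environ.get("GITHUB_API_TOKEN")
    if token:
        headers["Authorization"] = "token " + token
    request = urllib.request.Request("https://api.github.com/rate_limit",
                                     headers=headers)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    core = github_rate_limit()["resources"]["core"]
    print("limit=%d remaining=%d" % (core["limit"], core["remaining"]))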


Re: Failures on "Generate Build Matrix"

2022-12-22 Thread Илья Шипицин
You can limit the token scope to reading repository information.

[image: image.png]

чт, 22 дек. 2022 г. в 23:49, Willy Tarreau :

> On Thu, Dec 22, 2022 at 11:35:35PM +0600,  ??? wrote:
> > here's how it works
> >
> > (unfortunately, github does not allow secret named GITHUB_ , so I created
> > secret "TOKEN" and assigned it to variable GITHUB_API_TOKEN)
> >
> > I also added "env" to print all variables, you can value of
> > GITHUB_API_TOKEN is masked. is it set to wrong value, so api call failed:
> >
> >
> https://github.com/chipitsine/haproxy/actions/runs/3759885064/jobs/6389967966
>
> OK, it was supposed to appear at line 27 and was maked in the console
> output. And the backtrace didn't reveal the value of the argument, just
> their name. So normally if it fails in urllib.request.Request() it should
> only log the URL and "headers", nothing more.
>
> In that case I think it's acceptable. We'll just need to watch from time
> to time and destroy the token if we notice it for whatever other reason
> (e.g. debug mode enabled in HTTP fetch showing headers etc). Sorry for
> being annoying but you'll agree that the whole security around this is
> extremely fragile and solely relies on the console filtering known
> strings!
>
> So now the next step will be for me to find my way through the painful
> settings interface. I'll find Tim's previous howto in my mails.
>
> Thanks!
> Willy
>


Re: Failures on "Generate Build Matrix"

2022-12-22 Thread Илья Шипицин
Here's how it works.

(Unfortunately, GitHub does not allow a secret whose name starts with GITHUB_, so
I created a secret named "TOKEN" and assigned it to the variable GITHUB_API_TOKEN.)

I also added "env" to print all variables; you can see that the value of
GITHUB_API_TOKEN is masked. Here it is set to a wrong value, so the API call failed:

https://github.com/chipitsine/haproxy/actions/runs/3759885064/jobs/6389967966

чт, 22 дек. 2022 г. в 23:28, Willy Tarreau :

> On Thu, Dec 22, 2022 at 06:20:26PM +0100, William Lallemand wrote:
> > On Thu, Dec 22, 2022 at 06:12:46PM +0100, Willy Tarreau wrote:
> > > On Thu, Dec 22, 2022 at 11:00:26PM +0600,  ??? wrote:
> > > > I'm not sure if it possible to issue organization based token (not a
> > > > personal one).
> > > >
> > > > As for visibility, secrets are not visible for pull requests.
> > >
> > > My concern is not that they are in PR or any such thing, but they're
> > > passed in HTTP requests and function arguments in python scripts. So
> > > once we get a failure, if the failed request is dumped into the CI's
> > > logs, or if the python interpreter emits a stack trace with all
> > > arguments to the functions in the stack, the build logs will reveal
> > > the secret. Maybe there's a way to be certain that the logs from the
> > > python script are never dumped to publicly accessible logs, or to
> > > redirect them to files only accessible to authorized people, and that
> > > would be fine, but until this, I don't know what such guarantees we
> > > have. This is my concern regarding the use of this token like this.
> > >
> > > Thanks,
> > > Willy
> >
> > You need to be logged to see the logs of the CI, I don't know if it is
> > only accessible to the people in the haproxy group or if it only need to
> > be logged to github.
>
> OK. At least this is something we need to verify before proceeding. I
> don't know if anyone has access to an account not part of the users
> here. Or conversely maybe we can try to look for another project's
> CI logs.
>
> Willy
>


Re: Failures on "Generate Build Matrix"

2022-12-22 Thread Илья Шипицин
I'm not sure if it is possible to issue an organization-based token (not a
personal one).

As for visibility, secrets are not visible to pull requests.

чт, 22 дек. 2022 г. в 22:57, Илья Шипицин :

> there are couple of steps left (no hurry, because "matrix.py" is backward
> compatible)
>
> 1. issue "some kind of token".
>either Personal Access Tokens (Classic) (github.com)
> <https://github.com/settings/tokens>   (no time limit)
>or  Fine-grained Personal Access Tokens (github.com)
> <https://github.com/settings/tokens?type=beta>  (1year token)
>
> 2. add issued token to secrets:
> https://github.com/haproxy/haproxy/settings/secrets/actions/new
>
> 3. add secret definition to workflow, like this: haproxy/coverity.yml at
> master · haproxy/haproxy (github.com)
> <https://github.com/haproxy/haproxy/blob/master/.github/workflows/coverity.yml#L40-L41>
>
> чт, 22 дек. 2022 г. в 22:43, Willy Tarreau :
>
>> On Thu, Dec 22, 2022 at 10:32:22PM +0600,  ??? wrote:
>> > I attached a patch. It keeps current behaviour and is safe to apply.
>> >
>> > in order to make a difference, github token must be issued and set via
>> > github ci settings.
>>
>> OK I understand better now, thanks! I didn't know that there was a
>> difference between auth vs non-auth.
>>
>> I'm having a few questions though:
>>   - where are we supposed to find that token to fill the variable (most
>> likely Tim will facepalm and come rescue me here :-))
>>
>>   - how can we certain that there isn't a risk that this token leaks
>> into build logs which are public ? Because that's what I absolutely
>> hate with the principle of github insecure tokens, it's that they're
>> purely private keys that have to be blindly copy-pasted everywhere.
>>
>> It would be wise to be certain we don't become the de-facto standard
>> github API token provider for all anonymous users...
>>
>> Thanks,
>> Willy
>>
>


Re: Failures on "Generate Build Matrix"

2022-12-22 Thread Илья Шипицин
There are a couple of steps left (no hurry, because "matrix.py" is backward
compatible):

1. Issue "some kind of token":
   either a Personal Access Token (Classic) <https://github.com/settings/tokens>
   (no time limit)
   or a Fine-grained Personal Access Token <https://github.com/settings/tokens?type=beta>
   (1-year token)

2. Add the issued token to the secrets:
https://github.com/haproxy/haproxy/settings/secrets/actions/new

3. Add the secret definition to the workflow, like this: haproxy/coverity.yml at
master · haproxy/haproxy (github.com)


чт, 22 дек. 2022 г. в 22:43, Willy Tarreau :

> On Thu, Dec 22, 2022 at 10:32:22PM +0600,  ??? wrote:
> > I attached a patch. It keeps current behaviour and is safe to apply.
> >
> > in order to make a difference, github token must be issued and set via
> > github ci settings.
>
> OK I understand better now, thanks! I didn't know that there was a
> difference between auth vs non-auth.
>
> I'm having a few questions though:
>   - where are we supposed to find that token to fill the variable (most
> likely Tim will facepalm and come rescue me here :-))
>
>   - how can we certain that there isn't a risk that this token leaks
> into build logs which are public ? Because that's what I absolutely
> hate with the principle of github insecure tokens, it's that they're
> purely private keys that have to be blindly copy-pasted everywhere.
>
> It would be wise to be certain we don't become the de-facto standard
> github API token provider for all anonymous users...
>
> Thanks,
> Willy
>


Re: Failures on "Generate Build Matrix"

2022-12-22 Thread Илья Шипицин
I attached a patch. It keeps the current behaviour and is safe to apply.

In order to make a difference, a GitHub token must be issued and set via
the GitHub CI settings.

Ilya

чт, 22 дек. 2022 г. в 16:57, Willy Tarreau :

> On Thu, Dec 22, 2022 at 04:47:09PM +0600,  ??? wrote:
> > what if I make it conditional, i.e. if github token is defined via env,
> > make non anonymous api call,
>
> I'm sorry, Ilya, but I have no idea what this means :-)
>
> Willy
>
From c4e038b014c3c8e565857bc971d200b091192d93 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Thu, 22 Dec 2022 22:27:37 +0600
Subject: [PATCH] CI: enable github api authentication for OpenSSL tags read

github api throttles requests with no auth, thus we can enable
GITHUB_API_TOKEN env variable. if not set, current behaviour is kept
---
 .github/matrix.py | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/.github/matrix.py b/.github/matrix.py
index ffc3414b9..72e6b1a85 100755
--- a/.github/matrix.py
+++ b/.github/matrix.py
@@ -26,7 +26,9 @@ def clean_ssl(ssl):
 return ssl.replace("_VERSION", "").lower()
 
 def determine_latest_openssl(ssl):
-openssl_tags = urllib.request.urlopen("https://api.github.com/repos/openssl/openssl/tags;)
+headers = {'Authorization': 'token ' + environ.get('GITHUB_API_TOKEN')} if environ.get('GITHUB_API_TOKEN') else {}
+request = urllib.request.Request('https://api.github.com/repos/openssl/openssl/tags', headers=headers)
+openssl_tags = urllib.request.urlopen(request)
 tags = json.loads(openssl_tags.read().decode('utf-8'))
 latest_tag = ''
 for tag in tags:
-- 
2.38.1



Re: Failures on "Generate Build Matrix"

2022-12-22 Thread Илья Шипицин
What if I make it conditional, i.e. if a GitHub token is defined via an env
variable, make a non-anonymous API call?

чт, 22 дек. 2022 г. в 16:27, Willy Tarreau :

> On Thu, Dec 22, 2022 at 03:49:34PM +0600,  ??? wrote:
> > it is something I was afraid of "HTTP Error 403: rate limit exceeded".
> > ok, I'll try to deal with that
>
> Yep I've also seen a 429 this morning, indicating we were making too many
> requests to clone a repo. I think this is purely a problem of threshold on
> github's side. They might need to white-list their own CI servers or to
> raise some thresholds to reasonable levels.
>
> Willy
>


Re: Failures on "Generate Build Matrix"

2022-12-22 Thread Илья Шипицин
It is something I was afraid of: "HTTP Error 403: rate limit exceeded".
OK, I'll try to deal with that.

чт, 22 дек. 2022 г. в 15:41, William Lallemand :

> Hi Guys,
>
> Since a few days I'm seeing some failure on the "Generate Build Matrix"
> part of
> the CI, the request.urlopen() seems to fail the urlopen(), it's easy to
> restart
> it manually but it happened to me a few times recently.
>
> Do you think that would be possible to cache these values so the script
> don't
> fail ? or maybe just let the "latest" fail.
>
> Generating matrix for type 'master'.
> Traceback (most recent call last):
>   File "/home/runner/work/haproxy/haproxy/.github/matrix.py", line 167, in
> 
> ssl = determine_latest_openssl(ssl)
>   File "/home/runner/work/haproxy/haproxy/.github/matrix.py", line 29, in
> determine_latest_openssl
> openssl_tags = urllib.request.urlopen("
> https://api.github.com/repos/openssl/openssl/tags;)
>   File "/usr/lib/python3.10/urllib/request.py", line 216, in urlopen
> return opener.open(url, data, timeout)
>   File "/usr/lib/python3.10/urllib/request.py", line 525, in open
> response = meth(req, response)
>   File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response
> response = self.parent.error(
>   File "/usr/lib/python3.10/urllib/request.py", line 563, in error
> return self._call_chain(*args)
>   File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
> result = func(*args)
>   File "/usr/lib/python3.10/urllib/request.py", line 643, in
> http_error_default
> raise HTTPError(req.full_url, code, msg, hdrs, fp)
> urllib.error.HTTPError: HTTP Error 403: rate limit exceeded
>
> Thanks!
> --
> William Lallemand
>
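
On the idea of caching the values or letting "latest" fail gracefully: here is a
hedged Python sketch of how the tag lookup could fall back to a pinned version
instead of aborting matrix generation when the API is throttled; the function
shape and the pinned version string are assumptions, not the actual fix that was
applied:

# Sketch: do not let a throttled GitHub API call abort matrix generation.
# On an HTTP error (e.g. "403: rate limit exceeded") fall back to a pinned
# version so the workflow still produces a usable build matrix.
import json
import urllib.error
import urllib.request

FALLBACK_OPENSSL = "OPENSSL_VERSION=3.0.7"  # illustrative pin only

def latest_openssl_or_fallback():
    try:
        url = "https://api.github.com/repos/openssl/openssl/tags"
        with urllib.request.urlopen(url, timeout=10) as response:
            tags = json.loads(response.read().decode("utf-8"))
        # the real script scans all tags; taking the first one is a shortcut
        return "OPENSSL_VERSION=" + tags[0]["name"].replace("openssl-", "")
    except urllib.error.HTTPError as err:
        print("GitHub API lookup failed (%s), using %s" % (err, FALLBACK_OPENSSL))
        return FALLBACK_OPENSSL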


Re: Followup on openssl 3.0 note seen in another thread

2022-12-14 Thread Илья Шипицин
Can you try to bisect?

I suspect that it won't help; browsers tend to remember things in their own
way.

On Thu, Dec 15, 2022, 9:09 AM Shawn Heisey  wrote:

> On 12/14/22 19:33, Shawn Heisey wrote:
> > With quictls 3.0.7 it was working.  I will try rebuilding and see
> > whether it still does.  There was probably an update to haproxy as well
> > as changing quictls -- my build script pulls the latest from the 2.7 git
> > repo.
>
> Rebuilding with quictls 3.0.7 didn't change the behavior -- browsers
> still don't switch to http as they did before, so the obvious conclusion
> is that something changed in haproxy.
>
> If you would like me to do anything to help troubleshoot, please let me
> know.
>
> This is the simplest test I have.  Reloading this page used to switch to
> http3:
>
> https://http3test.elyograg.org/
>
> I also built and installed the latest 2.8.0-dev version with quictls
> 1.1.1s.  It doesn't switch to h3 either.
>
> Thanks,
> Shawn
>
>


Re: Reproducible CI build with OpenSSL and "latest" keyword

2022-12-14 Thread Илья Шипицин
As for reporting "what is ubuntu-latest" and "what is ssl=stock", I have not
had much success yet; GitHub does not expose that information in an easy way.

Actually, there is a build step where the image version is reported, but it is
collapsed:

[image: image.png]

ср, 14 дек. 2022 г. в 19:55, Илья Шипицин :

>
>
> ср, 14 дек. 2022 г. в 19:23, William Lallemand :
>
>> On Wed, Dec 14, 2022 at 06:34:26PM +0500, Илья Шипицин wrote:
>> > I am attaching another patch, i.e. using "ubuntu-latest" and
>> "macos-latest"
>> > for development branches and fixed images for stable branches.
>> >
>>
>> Thank you, that make sense, I'll backport it to 2.6 as well.
>>
>> We just need to be careful every 2 years when the ubuntu version change
>> and an HAProxy release is done, not to be stuck in 22.04 :-)
>>
>
> it is what you have asked for, if versions are fixed, someone has to keep
> an eye on it :)
>
>
>>
>> --
>> William Lallemand
>>
>
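
On reporting what "ubuntu-latest" actually resolved to: a small Python sketch that
prints the runner image details from the environment. It assumes the ImageOS and
ImageVersion variables that GitHub-hosted runners usually export; that assumption
is worth verifying before relying on it:

# Sketch: record which runner image a job actually ran on, so the log
# shows what "ubuntu-latest" resolved to for a given run.
# ImageOS / ImageVersion are assumed to exist on GitHub-hosted runners;
# on self-hosted runners they may simply be absent.
import os

def describe_runner():
    return {
        "runner_os": os.environ.get("RUNNER_OS", "unknown"),
        "image_os": os.environ.get("ImageOS", "unknown"),
        "image_version": os.environ.get("ImageVersion", "unknown"),
    }

if __name__ == "__main__":
    for key, value in describe_runner().items():
        print("%s=%s" % (key, value))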


Re: Reproducible CI build with OpenSSL and "latest" keyword

2022-12-14 Thread Илья Шипицин
ср, 14 дек. 2022 г. в 19:23, William Lallemand :

> On Wed, Dec 14, 2022 at 06:34:26PM +0500, Илья Шипицин wrote:
> > I am attaching another patch, i.e. using "ubuntu-latest" and
> "macos-latest"
> > for development branches and fixed images for stable branches.
> >
>
> Thank you, that make sense, I'll backport it to 2.6 as well.
>
> We just need to be careful every 2 years when the ubuntu version change
> and an HAProxy release is done, not to be stuck in 22.04 :-)
>

It is what you asked for: if versions are fixed, someone has to keep
an eye on them :)


>
> --
> William Lallemand
>


  1   2   3   4   5   6   7   8   9   10   >