Re: Dataplane exits at haproxytech/haproxy-ubuntu:2.9 in Containers

2024-03-17 Thread Aleksandar Lazic

Hi.

Looks like there was a similar question in the forum:
https://discourse.haproxy.org/t/trouble-with-starting-the-data-plane-api/9200

Any idea how to fix this?

Regards
Alex


On 2024-03-13 (Wed.) 00:11, Aleksandar Lazic wrote:

Hi.

I'm trying to run the Data Plane API as a "random" (non-root) user through a 
"program api" section in haproxy.cfg.
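
Roughly along these lines (a minimal sketch; the binary path, flags and paths 
here are assumptions for illustration, not the actual config):

```
# haproxy.cfg, master-worker mode -- hypothetical "program api" section
program api
    command /usr/local/bin/dataplaneapi -f /data/haproxy/etc/dataplaneapi.yaml --log-level=trace
    no option start-on-reload
```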

Below is the debug output from the start of the container. Even though I have 
set --log-level=trace for the Data Plane API, I can't see any reason why the 
api program gets killed.



```
# Debug output with dataplane api
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:51:49_CET 
/datadisk/container-haproxy $ sudo buildah bud --tag craftcms-hap .

STEP 1/4: FROM haproxytech/haproxy-ubuntu:2.9
STEP 2/4: COPY container-files/ /
STEP 3/4: RUN set -x   && mkdir -p /data/haproxy/etc /data/haproxy/run /data/haproxy/maps /data/haproxy/ssl /data/haproxy/general /data/haproxy/spoe   && chown -R 1001:0 /data   && chmod -R g=u /data   && touch /data/haproxy/etc/dataplaneapi.yaml
+ mkdir -p /data/haproxy/etc /data/haproxy/run /data/haproxy/maps /data/haproxy/ssl /data/haproxy/general /data/haproxy/spoe
+ chown -R 1001:0 /data
+ chmod -R g=u /data
+ touch /data/haproxy/etc/dataplaneapi.yaml
STEP 4/4: USER 1001
COMMIT craftcms-hap
Getting image source signatures
Copying blob d101c9453715 skipped: already exists
Copying blob 5c32e8ef5ef0 skipped: already exists
Copying blob 5bbbd68c0c20 skipped: already exists
Copying blob 2f5b49454406 [--] 0.0b / 0.0b
Copying blob 83d27970fa5a [--] 0.0b / 0.0b
Copying blob 5a567c1d5233 done
Copying config 1ac0ae6824 done
Writing manifest to image destination
Storing signatures
--> 1ac0ae6824c
Successfully tagged localhost/craftcms-hap:latest
1ac0ae6824c91a9bc4fa1f19979c0b9dc672981fb82949429006d53252f8de9c
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:54:21_CET 
/datadisk/container-haproxy $ sudo podman run -it --rm --network host --name haproxy craftcms-hap haproxy -f /data/haproxy/etc/haproxy.cfg -d

Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
   [NOTICE]   (1) : New program 'api' (3) forked
   [NOTICE]   (1) : New worker (4) forked
   [NOTICE]   (1) : Loading success.
Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
   [BWLIM] bwlim-in
   [BWLIM] bwlim-out
   [CACHE] cache
   [COMP] compression
   [FCGI] fcgi-app
   [SPOE] spoe
   [TRACE] trace
Using epoll() as the polling mechanism.
time="2024-03-12T22:54:24Z" level=info msg="HAProxy Data Plane API v2.9.1 
4d10854c"
time="2024-03-12T22:54:24Z" level=info msg="Build from: 
https://github.com/haproxytech/dataplaneapi.git;

time="2024-03-12T22:54:24Z" level=info msg="Reload strategy: custom"
time="2024-03-12T22:54:24Z" level=info msg="Build date: 2024-02-26T18:06:06Z"
:GLOBAL.accept(0008)=0038 from [unix:1] ALPN=
:GLOBAL.clicls[:]
:GLOBAL.srvcls[:]
:GLOBAL.closed[:]
0001:GLOBAL.accept(0008)=0039 from [unix:1] ALPN=
0001:GLOBAL.clicls[:]
0001:GLOBAL.srvcls[:]
0001:GLOBAL.closed[:]
[NOTICE]   (1) : haproxy version is 2.9.6-9eafce5
[NOTICE]   (1) : path to executable is /usr/local/sbin/haproxy

[ALERT]    (1) : Current program 'api' (3) exited with code 1 (Exit) #< Why exit

[ALERT]    (1) : exit-on-failure: killing every processes with SIGTERM
[ALERT]    (1) : Current worker (4) exited with code 143 (Terminated)
[WARNING]  (1) : All workers exited. Exiting... (1)
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:54:24_CET 
/datadisk/container-haproxy $

```
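
For reference, the build steps above correspond to a Containerfile along these 
lines (reconstructed from the STEP output; only the line layout is assumed):

```
FROM haproxytech/haproxy-ubuntu:2.9
COPY container-files/ /
RUN set -x \
 && mkdir -p /data/haproxy/etc /data/haproxy/run /data/haproxy/maps \
             /data/haproxy/ssl /data/haproxy/general /data/haproxy/spoe \
 && chown -R 1001:0 /data \
 && chmod -R g=u /data \
 && touch /data/haproxy/etc/dataplaneapi.yaml
USER 1001
```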

When I start HAProxy without the "program api" block, HAProxy is able to 
start. When I then connect to the container with another shell and run the 
Data Plane API inside the container, I can see that the Data Plane API 
connects to HAProxy and stops immediately.


# shell 1
```
sudo podman run -it --rm --network host --name haproxy craftcms-hap haproxy -f /data/haproxy/etc/haproxy.cfg -d

Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.

Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
[NOTICE]   (1) : New worker (3) forked
[NOTICE]   (1) : Loading success.
Available polling systems :
epoll : pref=300,  test result OK
poll : pref=200,  test result OK
select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.
```
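
# shell 2
A sketch of the manual start described above; the dataplaneapi binary 
location and flags are assumptions:
```
sudo podman exec -it haproxy /bin/bash
dataplaneapi -f /data/haproxy/etc/dataplaneapi.yaml --log-level=trace
```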

[PATCH 2/2] CI: temporarily adjust kernel entropy to work with ASAN/clang

2024-03-17 Thread Ilia Shipitsin
clang runtime (shipped with clang14) is not compatible with recent
Ubuntu kernels

more details: https://github.com/actions/runner-images/issues/9491
---
 .github/workflows/vtest.yml | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/.github/workflows/vtest.yml b/.github/workflows/vtest.yml
index 6adf2a456..8c461385f 100644
--- a/.github/workflows/vtest.yml
+++ b/.github/workflows/vtest.yml
@@ -57,6 +57,17 @@ jobs:
       run: |
         echo "key=$(echo ${{ matrix.name }} | sha256sum | awk '{print $1}')" >> $GITHUB_OUTPUT
 
+
+    #
+    # temporary hack
+    # should be revisited after https://github.com/actions/runner-images/issues/9491 is resolved
+    #
+
+    - name: Setup entropy
+      if: ${{ startsWith(matrix.os, 'ubuntu-') }}
+      run: |
+        sudo sysctl vm.mmap_rnd_bits=28
+
     - name: Cache SSL libs
       if: ${{ matrix.ssl && matrix.ssl != 'stock' && matrix.ssl != 'BORINGSSL=yes' && matrix.ssl != 'QUICTLS=yes' }}
       id: cache_ssl
-- 
2.43.0.windows.1
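
For reference, the workaround itself is just a sysctl and can also be applied 
on a local machine hitting the same ASan startup failures (a sketch; the note 
about the raised kernel default is taken from the linked runner-images issue):

```
# Inspect the current ASLR setting; recent Ubuntu kernels raised the
# default (reportedly to 32), which older clang ASan runtimes cannot handle.
sysctl vm.mmap_rnd_bits
# Same adjustment as in the patch, valid until the next reboot.
sudo sysctl vm.mmap_rnd_bits=28
```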




[PATCH 0/2] CI entropy adjust (clang asan fix) and spell fixes

2024-03-17 Thread Ilia Shipitsin
A couple of patches:
1) spelling fixes
2) a CI sysctl to make new Ubuntu kernels and ASAN friends again

Ilia Shipitsin (2):
  CLEANUP: assorted typo fixes in the code and comments
  CI: temporarily adjust kernel entropy to work with ASAN/clang

 .github/workflows/vtest.yml | 11 +++
 src/quic_cli.c  |  4 ++--
 2 files changed, 13 insertions(+), 2 deletions(-)

-- 
2.43.0.windows.1




[PATCH 1/2] CLEANUP: assorted typo fixes in the code and comments

2024-03-17 Thread Ilia Shipitsin
This is the 40th iteration of typo fixes
---
 src/quic_cli.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/quic_cli.c b/src/quic_cli.c
index 49501d53d..b4b9329da 100644
--- a/src/quic_cli.c
+++ b/src/quic_cli.c
@@ -12,7 +12,7 @@
 unsigned int qc_epoch = 0;
 
 enum quic_dump_format {
-   QUIC_DUMP_FMT_DEFAULT, /* value used if not explicitely specified. */
+   QUIC_DUMP_FMT_DEFAULT, /* value used if not explicitly specified. */
 
QUIC_DUMP_FMT_ONELINE,
QUIC_DUMP_FMT_CUST,
@@ -39,7 +39,7 @@ struct show_quic_ctx {
 
 #define QC_CLI_FL_SHOW_ALL 0x1 /* show closing/draining connections */
 
-/* Returns the output format for show quic. If specified explicitely use it as
+/* Returns the output format for show quic. If specified explicitly use it as
  * set. Else format depends if filtering on a single connection instance. If
  * true, full format is preferred else oneline.
  */
-- 
2.43.0.windows.1




Re: About the SPOE

2024-03-17 Thread Aleksandar Lazic

Hi.

On 2024-03-15 (Fri.) 15:09, Christopher Faulet wrote:

Hi all,

It was brought up on the ML by Willy and mentioned in a few issues on GH. It 
is now official: the SPOE is marked as deprecated for 3.0. It is not a 
pleasant announcement, because removing a feature is always an admission of 
failure. Sadly, this filter would have to be refactored to work properly. It 
was implemented as a functional PoC for 1.7, and since then no time was 
invested to improve it and make it truly maintainable over time. Worse, other 
parts of HAProxy evolved, especially the applets part, making maintenance 
ever more expensive.


We must be realistic on the subject: there was no real adoption of the SPOE, 
and this partly explains why no time was invested in it. So we are really 
sorry for users relying on it, but we cannot continue in this direction.


3.0 will be an LTS version. It means the SPOE will still be maintained on 
this version and lower ones for 5 years. In 3.1 it will be marked as 
unmaintained and possibly removed if an alternative solution is implemented.


A few months remain before the 3.0 release to change our minds. Maybe this 
announcement will be an electroshock that gives it a new lease of life. 
Otherwise, it is time to find an alternative solution based on an existing 
protocol.


For all 3.0 users, there is now a warning if an SPOE filter is configured. 
But there is also a global option to silence it: to do so, 
"expose-deprecated-directives" must be added in the global section.


Now we are open to discussion on this subject. Let us know how you feel, and 
if you have any suggestions we will be happy to talk about them.


While I fully understand this step, it would be very helpful to have a filter 
that offers the possibility to run some tasks outside of HAProxy in an async 
way.


There was a short discussion in the past about the future of filters
https://www.mail-archive.com/haproxy@formilux.org/msg44164.html
maybe there are some ideas there which can be reused.

From my point of view, an HTTP filter (HTTP/1 or 2, not 3, imho) would be, 
with all its pros and cons, one of the best ways to build a filter, because 
this protocol is so widely used and a lot of knowledge could be reused. One 
of the biggest benefits is also that this filter could be used even in 
enterprise environments, as this protocol is able to run across a proxy.


FCGI is another option, as it's already part of the filter chain :-).
I don't know too much about gRPC, but maybe this protocol could also be used 
for a filter ¯\_(ツ)_/¯.


The Lua API with some external daemons could also be used to move the 
workload out of HAProxy; see the sketch below.
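
A rough sketch of that idea, assuming a hypothetical external daemon speaking 
a simple line protocol on 127.0.0.1:12345 (loaded via lua-load; not a drop-in 
SPOE replacement):

```
-- HAProxy Lua action that asks an external daemon for a verdict over TCP.
-- The daemon address, its line-based protocol and the variable name are
-- hypothetical.
core.register_action("ext_check", { "http-req" }, function(txn)
    local sock = core.tcp()
    sock:settimeout(1)
    if sock:connect("127.0.0.1", 12345) then
        -- send the Host header, read a one-line verdict back
        sock:send((txn.sf:req_hdr("host") or "") .. "\n")
        local verdict = sock:receive("*l")
        txn:set_var("txn.ext_verdict", verdict or "none")
        sock:close()
    end
end)
```

After "http-request lua.ext_check", the verdict could then drive the 
configuration, e.g. "http-request deny if { var(txn.ext_verdict) -m str deny }".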


From my point of view, whatever solution is chosen, the idea behind the SPOE 
should be kept, because scaling the filters outside of HAProxy is a good 
concept.


I see a lot of possibilities here; the main point is always how much work it 
is to maintain the filter chain.



Regards,


Jm2c

Regards
Alex