Re: git clone git.haproxy.git with curl-8.7.1 failing writing received data

2024-04-05 Thread Bertrand Jacquin

On 2024-04-05 20:24, Bertrand Jacquin wrote:

Just let us know if you're interested. We can also first wait for Stefan
and/or Daniel's analysis of a possible cause for the commit you bisected
above before hacking too much stuff, though :-)


Let's see! The latest digging seems to point to some issue with libcurl in
this case. I might do some tests to drop haproxy from the loop to eliminate
that, but it'll take me a few hours before I can set up a lab.


Well, I'm just stupid, I now have a setup bypassing HAProxy and targeting
nginx directly. I did tests against H2 with and without TLS, and I'm not
able to reproduce either issue (the failure writing received data with curl
8.7.1 and git falling back to the dumb http client, or the hang with the
dumb http client forced) with curl 8.7.1 with H2 enabled.


Cheers,

--
Bertrand



Re: git clone git.haproxy.git with curl-8.7.1 failing writing received data

2024-04-05 Thread Bertrand Jacquin

Hey Willy,

On 2024-04-05 19:44, Willy Tarreau wrote:


Thanks a lot for these details. One thing to keep in mind that could
explain why you have not observed this on other servers is that we're
using plain HTTP; we haven't deployed the git-server stuff, so maybe
it triggers a different object transfer sequence and/or code path.


You're definitely onto something here: by forcing the git client to use dumb
HTTP (GIT_SMART_HTTP=0), it also fails against my own endpoint using HAProxy
2.8.7-1a82cdf, although with different symptoms, the clone ends up hanging.
It does however (still with dumb HTTP forced) work just fine with curl 8.6.0
with HTTP2 enabled, or curl 8.7.1 with HTTP2 disabled, so that's definitely
an HTTP2 issue and likely not an HAProxy one. Let me know if you want me to
drop the haproxy list from recipients.


Bisecting curl again in this situation points to a different first bad 
commit: 0dc036225b30 ("HTTP/2: write response directly") 
(https://github.com/curl/curl/commit/0dc036225b3013bf994da81fa44571bcfcecbe92)


Note this issue only appears when curl is compiled with HTTP2 enabled. I'm
quite sure git.haproxy.org is running bleeding-edge HAProxy, which might not
support the changes brought by 463472a2d6e3; however I'm curious to
understand the protocol corruption here.


If you want to make some tests, we can synchronize.


I've done more than enough for this week, I'm off for today, so I'm 
definitely free at the moment :)



Among the easy stuff
we can try on the haproxy side are:
  - restart with the stable version in case it has any effect
  - disable H2 (not sure if that's doable in git-clone, but we can also
easily open an extra port for you to test)


I haven't found a way to prevent git from using HTTP2, but curl is small
enough to recompile quickly to test with different features enabled. I've
indeed only been observing this issue with HTTP2; I'm not able to reproduce
the initial findings with curl built with HTTP2 disabled at build time.
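
That said, a couple of knobs might do the trick without rebuilding curl; I
have not verified either one against this setup, so treat this as an
assumption rather than a recipe:

  # Force the dumb HTTP client (the same switch as in the tests above)
  $ GIT_SMART_HTTP=0 git fetch https://git.haproxy.org/git/haproxy.git

  # Recent git can also pin libcurl to HTTP/1.1 via the http.version knob
  $ git -c http.version=HTTP/1.1 fetch https://git.haproxy.org/git/haproxy.git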


Just let us know if you're interested. We can also first wait for Stefan
and/or Daniel's analysis of a possible cause for the commit you bisected
above before hacking too much stuff, though :-)


Let's see! The latest digging seems to point to some issue with libcurl in
this case. I might do some tests to drop haproxy from the loop to eliminate
that, but it'll take me a few hours before I can set up a lab.



Cheers,
Willy


--
Bertrand



git clone git.haproxy.git with curl-8.7.1 failing writing received data

2024-04-05 Thread Bertrand Jacquin

Hi,

For the last few days, I've been unable to git clone 
https://git.haproxy.org/git/haproxy.git with curl-8.7.1, where I'm 
getting the following error:


  $ GIT_TRACE=1 git fetch https://git.haproxy.org/git/haproxy.git
  19:12:01.277740 git.c:463   trace: built-in: git fetch 
https://git.haproxy.org/git/haproxy.git
  19:12:01.278266 run-command.c:657   trace: run_command: 
GIT_DIR=.git git remote-https https://git.haproxy.org/git/haproxy.git 
https://git.haproxy.org/git/haproxy.git
  19:12:01.279001 git.c:749   trace: exec: git-remote-https 
https://git.haproxy.org/git/haproxy.git 
https://git.haproxy.org/git/haproxy.git
  19:12:01.279018 run-command.c:657   trace: run_command: 
git-remote-https https://git.haproxy.org/git/haproxy.git 
https://git.haproxy.org/git/haproxy.git
  fatal: unable to access 'https://git.haproxy.org/git/haproxy.git/': 
Failed writing received data to disk/application


I'm unable to identify anything going wrong on the client, even though the
error seems to indicate git could not write data on disk. After doing some
digging, fetching https://git.haproxy.org/git/haproxy.git using curl 8.6.0
works like a charm; however, I'm only seeing this issue with git fetch/clone
against git.haproxy.org and no other repo, thus both mailing lists are in
the recipients of this email.


Here is attached the output of git fetch with GIT_TRACE_CURL=1 in 
git-fetch-haproxy-curl-8.7.1.log


Version of git used on client side:
  git version 2.43.2
  cpu: x86_64
  no commit associated with this build
  sizeof-long: 8
  sizeof-size_t: 8
  shell-path: /bin/sh

Details of functional curl 8.6.0:
  curl 8.6.0 (x86_64-pc-linux-gnu) libcurl/8.6.0 OpenSSL/3.0.13 zlib/1.3 
zstd/1.5.5 c-ares/1.26.0 libidn2/2.3.7 libpsl/0.21.5 nghttp2/1.57.0

  Release-Date: 2024-01-31
  Protocols: dict file http https imap imaps ipfs ipns mqtt pop3 pop3s 
rtsp smtp smtps tftp
  Features: alt-svc AsynchDNS HSTS HTTP2 HTTPS-proxy IDN IPv6 Largefile 
libz NTLM PSL SSL threadsafe TLS-SRP UnixSockets zstd


Details of non-functional curl 8.7.1:
  curl 8.7.1 (x86_64-pc-linux-gnu) libcurl/8.7.1 OpenSSL/3.0.13 zlib/1.3 
brotli/1.1.0 zstd/1.5.5 c-ares/1.26.0 libidn2/2.3.7 libpsl/0.21.5 
libssh2/1.11.0 nghttp2/1.61.0 librtmp/2.3

  Release-Date: 2024-03-27
  Protocols: dict file ftp ftps gopher gophers http https imap imaps 
ipfs ipns mqtt pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp ws 
wss
  Features: alt-svc AsynchDNS brotli HSTS HTTP2 HTTPS-proxy IDN IPv6 
Largefile libz NTLM PSL SSL threadsafe TLS-SRP UnixSockets zstd


Bisecting through changes from curl 8.6.0 to 8.7.1 points me to 
463472a2d6e3 ("lib: move client writer into own source") 
(https://github.com/curl/curl/commit/463472a2d6e3301c1468b5323b856cb67a91f579) 
as the first bad commit.


Note this issue only appears when curl is compiled with HTTP2 enabled. I'm
quite sure git.haproxy.org is running bleeding-edge HAProxy, which might not
support the changes brought by 463472a2d6e3; however I'm curious to
understand the protocol corruption here.
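
For completeness, the bisection boils down to something like this (tag names
follow curl's curl-X_Y_Z convention; the configure flags and the LD_PRELOAD
trick are a sketch of my setup, not an exact transcript):

  $ git clone https://github.com/curl/curl.git && cd curl
  $ git bisect start curl-8_7_1 curl-8_6_0
  # at each step, rebuild curl with HTTP2 (nghttp2) enabled...
  $ autoreconf -fi && ./configure --with-openssl --with-nghttp2 && make -j$(nproc)
  # ...then point git-remote-https at the freshly built libcurl and retry the failing fetch
  $ LD_PRELOAD=$PWD/lib/.libs/libcurl.so git fetch https://git.haproxy.org/git/haproxy.git
  $ git bisect good    # or 'git bisect bad' when the fetch fails or hangs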


Cheers,

--
Bertrand

git-fetch-haproxy-curl-8.7.1.log
Description: Binary data


[PATCH 1/1] BUG/MEDIUM: tests: use tmpdir to create UNIX socket

2022-12-17 Thread Bertrand Jacquin
testdir can be a very long directory path since it depends on the source
directory path; this can lead to failures during tests when the UNIX socket
path exceeds the maximum allowed length of 97 characters, as defined in
str2sa_range().

  16:48:14 [ALERT] ***  h1debug|(10082) : config : parsing 
[/tmp/haregtests-2022-12-17_16-47-39.4RNzIN/vtc.4850.5d0d728a/h1/cfg:19] : 
'bind' : socket path 
'unix@/local/p4clients/pkgbuild-bB20r/workspace/build/HAProxy/HAProxy-2.7.x.68.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/HAProxy-2.7.x/src/reg-tests/lua/srv3'
 too long (max 97)
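
To make the length problem concrete, comparing the testdir-based path from
the log above with the tmpdir-based one (a quick shell check; the exact
counts obviously depend on where the sources are checked out):

  $ long=/local/p4clients/pkgbuild-bB20r/workspace/build/HAProxy/HAProxy-2.7.x.68.0/AL2_x86_64/DEV.STD.PTHREAD/build/private/HAProxy-2.7.x/src/reg-tests/lua/srv3
  $ short=/tmp/haregtests-2022-12-17_16-47-39.4RNzIN/vtc.4850.5d0d728a/h1/srv3
  $ echo ${#long} ${#short}
  152 68    # the first is well above the 97 character limit, the second well below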

Also, it is not advisable to create a UNIX socket in the actual source
directory; instead, use the dedicated temporary directory created for test
purposes.

This should be backported to 2.6
---
 reg-tests/lua/lua_httpclient.vtc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/reg-tests/lua/lua_httpclient.vtc b/reg-tests/lua/lua_httpclient.vtc
index 0850ddb5f3f2..0a274932ab23 100644
--- a/reg-tests/lua/lua_httpclient.vtc
+++ b/reg-tests/lua/lua_httpclient.vtc
@@ -51,13 +51,13 @@ haproxy h1 -conf {
 
listen li1
mode http
-   bind unix@${testdir}/srv3
+   bind unix@${tmpdir}/srv3
server srv3 ${s3_addr}:${s3_port}
 
 } -start
 
 client c0 -connect ${h1_fe1_sock} {
-txreq -url "/" -hdr "vtcport: ${s1_port}" -hdr "vtcport2: ${s2_port}" -hdr 
"vtcport3: unix@${testdir}/srv3"
+txreq -url "/" -hdr "vtcport: ${s1_port}" -hdr "vtcport2: ${s2_port}" -hdr 
"vtcport3: unix@${tmpdir}/srv3"
 rxresp
 expect resp.status == 200
 } -run



[PATCH 1/1] BUG/MINOR: lua: remove loop initial declarations

2021-11-24 Thread Bertrand Jacquin
HAProxy is documented to support gcc >= 3.4 as per the INSTALL file, however
hlua.c makes use of loop initial declarations, which require C99 or later,
leading to a build failure when using gcc-4.9.4:

  x86_64-unknown-linux-gnu-gcc -Iinclude  -Wchar-subscripts -Wcomment -Wformat 
-Winit-self -Wmain -Wmissing-braces -Wno-pragmas -Wparentheses -Wreturn-type 
-Wsequence-point -Wstrict-aliasing -Wswitch -Wtrigraphs -Wuninitialized 
-Wunknown-pragmas -Wunused-label -Wunused-variable -Wunused-value 
-Wpointer-sign -Wimplicit -pthread -fdiagnostics-color=auto -D_LARGEFILE_SOURCE 
-D_FILE_OFFSET_BITS=64 -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -O3 -msse 
-mfpmath=sse -march=core2 -g -fPIC -g -Wall -Wextra -Wundef 
-Wdeclaration-after-statement -fwrapv -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wtype-limits  -DUSE_EPOLL  -DUSE_NETFILTER   -DUSE_PCRE2 -DUSE_PCRE2_JIT 
-DUSE_POLL -DUSE_THREAD -DUSE_BACKTRACE   -DUSE_TPROXY -DUSE_LINUX_TPROXY 
-DUSE_LINUX_SPLICE -DUSE_LIBCRYPT -DUSE_CRYPT_H -DUSE_GETADDRINFO -DUSE_OPENSSL 
-DUSE_LUA -DUSE_ACCEPT4   -DUSE_SLZ -DUSE_CPU_AFFINITY -DUSE_TFO -DUSE_NS 
-DUSE_DL -DUSE_RT  -DUSE_PRCTL  -DUSE_THREAD_DUMP-DUSE_PCRE2 
-DPCRE2_CODE_UNIT_WIDTH=8  -I/usr/local/include 
-DCONFIG_HAPROXY_VERSION=\"2.5.0\" -DCONFIG_HAPROXY_DATE=\"2021/11/23\" -c -o 
src/connection.o src/connection.c
  src/hlua.c: In function 'hlua_config_prepend_path':
  src/hlua.c:11292:2: error: 'for' loop initial declarations are only allowed 
in C99 or C11 mode
for (size_t i = 0; i < 2; i++) {
^
  src/hlua.c:11292:2: note: use option -std=c99, -std=gnu99, -std=c11 or 
-std=gnu11 to compile your code
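
The underlying reason is that gcc releases before 5 default to -std=gnu90,
where loop initial declarations are rejected; a quick way to see the
difference (the compiler binary name below is just an example):

  $ printf 'int f(void) { for (int i = 0; i < 2; i++); return 0; }\n' > t.c
  $ gcc-4.9 -c t.c              # fails with the error above
  $ gcc-4.9 -std=gnu99 -c t.c   # builds fine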

This commit moves the loop iterator to an explicit declaration.

No backport needed as this issue was introduced in v2.5-dev10~69 with
commit 9e5e586e35c5 ("BUG/MINOR: lua: Fix lua error handling in
`hlua_config_prepend_path()`")
---
 src/hlua.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/hlua.c b/src/hlua.c
index 08735374af77..8dea91e75832 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -11249,6 +11249,7 @@ static int hlua_config_prepend_path(char **args, int 
section_type, struct proxy
char *path;
char *type = "path";
struct prepend_path *p = NULL;
+   size_t i;
 
if (too_many_args(2, args, err, NULL)) {
goto err;
@@ -11289,7 +11290,7 @@ static int hlua_config_prepend_path(char **args, int 
section_type, struct proxy
 * thread. The remaining threads will be initialized based on
 * prepend_path_list.
 */
-   for (size_t i = 0; i < 2; i++) {
+   for (i = 0; i < 2; i++) {
lua_State *L = hlua_states[i];
const char *error;
 



[PATCH spoa-server] BUG/MINOR: build: install binary inside bin/ directory

2021-06-12 Thread Bertrand Jacquin
Prior to this change, spoa is installed under DESTDIR as a file named `bin`.
---
 Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Makefile b/Makefile
index 5de6135ec7d6..52eb43e5f938 100644
--- a/Makefile
+++ b/Makefile
@@ -68,6 +68,7 @@ spoa: $(OBJS)
$(LD) $(LDFLAGS) -o $@ $^ $(LDLIBS)
 
 install: spoa
+   install -v -d $(DESTDIR)$(BINDIR)
install spoa $(DESTDIR)$(BINDIR)
 
 clean:



Re: [ANNOUNCE] haproxy-2.4-dev14

2021-03-27 Thread Bertrand Jacquin
Hi Willy,

> Another discussion started around an easier support for some modern
> platforms. In issue #1194, Ashley Penney was caught running on AWS's ARM
> instances with the default ARM target optimizations. For having run some
> tests on these machines, I can't say enough how great they are, but I
> also know that in order to unlock their power when dealing with 16 cores
> or more, it's mandatory to use their modern locking extensions. Worse,
> the default ones do not scale at all and easily maintain deadlocks until
> the watchdog gets rid of the situation. I can trivially add a new CPU
> option in the makefile, there are already a few preset, it's easy. But
> the question is more how this could flow back into distro packages if
> possible at all, considering that such code will not run on legacy
> platforms (e.g. RPi). The difficulty here is that it's not about
> optimization anymore but rather choose from "crashes all the time" and
> "works amazingly fast". Another option could be for distros to limit
> the number of threads on such platforms to 4 to cover legacy devices
> well and prevent the degradation from happening on larger systems. But
> this will definitely require that users rebuild themselves to use larger
> platforms. In my opinion it is exactly the same problem we've seen a long
> time ago with x86 and cmov/mmx/sse/avx in that these are all extensions
> used at the lowest compiler level, so I'm sure there's a clean way to
> deal with this but I'm not qualified to say how. All ideas welcome!

Correct me if I'm wrong, but I believe this is a perfect example for the new
glibc hwcaps feature, which allows a given object to be compiled with
multiple levels of optimization and lets the loader select the most
appropriate ELF tree at runtime. I am no expert on it, nor have I played
with it, however the issue you are mentioning here reminds me of
https://sourceware.org/pipermail/libc-alpha/2020-June/115250.html. Note
it appears only x86_64, powerpc and s390 are supported at this stage.
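
To illustrate the idea (the library name and paths below are made up; the
layout itself is the one documented for glibc >= 2.33):

  # The loader probes per-microarchitecture subdirectories first, so a distro
  # can ship several builds of the same DSO:
  #   /usr/lib64/libfoo.so.1                          <- baseline, runs anywhere
  #   /usr/lib64/glibc-hwcaps/x86-64-v2/libfoo.so.1   <- picked on x86-64-v2 CPUs
  #   /usr/lib64/glibc-hwcaps/x86-64-v3/libfoo.so.1   <- picked on AVX2-class CPUs

  # The dynamic loader reports which subdirectories it would search on a host:
  $ /lib64/ld-linux-x86-64.so.2 --help | grep -A4 glibc-hwcaps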

Cheers,

-- 
Bertrand



Re: [PATCH 1/1] MINOR: build: force CC to set a return code when probing options

2021-03-06 Thread Bertrand Jacquin
Hi Lukas,

On Saturday, March 06 2021 at 23:48:52 +0100, Lukas Tribus wrote:
> Hello,
> 
> On Sat, 6 Mar 2021 at 21:25, Bertrand Jacquin  wrote:
> >
> > gcc returns non zero code if an option is not supported (tested
> > from 6.5 to 10.2).
> >
> >   $ gcc -Wfoobar -E -xc - -o /dev/null < /dev/null > /dev/null 2>&1 ; echo 
> > $?
> >   1
> >
> > clang always return 0 if an option in not recognized unless
> > -Werror is also passed, preventing a correct probing of options
> > supported by the compiler (tested with clang 6.0.1 to 11.1.0)
> 
> -Werror is more than just "-Wunknown-warning-option" on clang.
> -Werror/ERR is specifically optional in the Makefile.
> 
> If we want to change the haproxy build-system to do -Werror from now
> on we need a) discuss it and b) fix it all up. We can't hardcode
> -Werror and at the same time claim that it's an optional parameter.

I am not proposing that the haproxy build system use -Werror here, I'm only
proposing to use -Werror when probing for options supported by the compiler,
as clang effectively returns a code of 0 even if an option is not supported.
The fact that haproxy does not use -Wno-clobbered with clang builds today is
because of an internal bug in clang with how haproxy is using stdin/stderr
indirection.

With the proposal above, -Werror is only used to probe options; it is not
reflected in SPEC_CFLAGS.

-- 
Bertrand



[PATCH 1/1] MINOR: build: force CC to set a return code when probing options

2021-03-06 Thread Bertrand Jacquin
gcc returns a non-zero code if an option is not supported (tested
from 6.5 to 10.2).

  $ gcc -Wfoobar -E -xc - -o /dev/null < /dev/null > /dev/null 2>&1 ; echo $?
  1

clang always returns 0 if an option is not recognized unless
-Werror is also passed, preventing correct probing of the options
supported by the compiler (tested with clang 6.0.1 to 11.1.0).

  $ clang -Wfoobar -E -xc - -o /dev/null < /dev/null > /dev/null 2>&1 ; echo $?
  0
  $ clang -Werror -Wfoobar -E -xc - -o /dev/null < /dev/null > /dev/null 2>&1 ; 
echo $?
  1

Please note that today this is not visible, since clang 11 exits with
SIGABRT (or with return code 1 on older versions) due to bad file
descriptor handling:

  $ clang -Wfoobar -E -xc - -o /dev/null < /dev/null 2>&0 ; echo $?
  Aborted (core dumped)
  134
  $ clang -Wfoobar -E -xc - -o /dev/null < /dev/null ; echo $?
  warning: unknown warning option '-Wfoobar'; did you mean '-Wformat'? 
[-Wunknown-warning-option]
  1 warning generated.
  0
  $ clang-11 -Werror -Wfoobar -E -xc - -o /dev/null < /dev/null ; echo $?
  error: unknown warning option '-Wfoobar'; did you mean '-Wformat'? 
[-Werror,-Wunknown-warning-option]
  1

This specific issue is being tracked with clang upstream in 
https://bugs.llvm.org/show_bug.cgi?id=49463
---
 Makefile | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Makefile b/Makefile
index 77960ba4cac5..0867047fdc69 100644
--- a/Makefile
+++ b/Makefile
@@ -126,16 +126,16 @@ endif
 # Usage: CFLAGS += $(call cc-opt,option). Eg: $(call cc-opt,-fwrapv)
 # Note: ensure the referencing variable is assigned using ":=" and not "=" to
 #   call it only once.
-cc-opt = $(shell set -e; if $(CC) $(1) -E -xc - -o /dev/null </dev/null >&0 2>&0; then echo "$(1)"; fi;)
+cc-opt = $(shell set -e; if $(CC) -Werror $(1) -E -xc - -o /dev/null </dev/null >&0 2>&0; then echo "$(1)"; fi;)
 
 # same but emits $2 if $1 is not supported
-cc-opt-alt = $(shell set -e; if $(CC) $(1) -E -xc - -o /dev/null </dev/null >&0 2>&0; then echo "$(1)"; else echo "$(2)"; fi;)
+cc-opt-alt = $(shell set -e; if $(CC) -Werror $(1) -E -xc - -o /dev/null </dev/null >&0 2>&0; then echo "$(1)"; else echo "$(2)"; fi;)
 
 # Disable a warning when supported by the compiler. Don't put spaces around the
 # warning! And don't use cc-opt which doesn't always report an error until
 # another one is also returned.
 # Usage: CFLAGS += $(call cc-nowarn,warning). Eg: $(call 
cc-opt,format-truncation)
-cc-nowarn = $(shell set -e; if $(CC) -W$(1) -E -xc - -o /dev/null </dev/null >&0 2>&0; then echo "-Wno-$(1)"; fi;)
+cc-nowarn = $(shell set -e; if $(CC) -Werror -W$(1) -E -xc - -o /dev/null </dev/null >&0 2>&0; then echo "-Wno-$(1)"; fi;)
 
  Installation options.
 DESTDIR =



[PATCH v2 2/2] MINOR: sample: allow build with OpenSSL < 1.0.1d

2021-01-21 Thread Bertrand Jacquin
According to the INSTALL file, OpenSSL versions older than 1.0.1d are still
supported by HAProxy; however, they lack CRYPTO_memcmp(), so haproxy does
not build:

  $ make V=1 TARGET=linux-glibc USE_NS= USE_OPENSSL=1
  ..
  cc -Iinclude  -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv  
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers   -Wtype-limits   -DUSE_EPOLL  
-DUSE_NETFILTER -DUSE_POLL  -DUSE_THREAD  -DUSE_BACKTRACE   -DUSE_TPROXY 
-DUSE_LINUX_TPROXY -DUSE_LINUX_SPLICE -DUSE_LIBCRYPT -DUSE_CRYPT_H 
-DUSE_GETADDRINFO -DUSE_OPENSSL  -DUSE_FUTEX -DUSE_ACCEPT4
-DUSE_CPU_AFFINITY -DUSE_TFO  -DUSE_DL -DUSE_RT  -DUSE_PRCTL 
-DUSE_THREAD_DUMP  -DCONFIG_HAPROXY_VERSION=\"2.4-dev5-37286a-78\" 
-DCONFIG_HAPROXY_DATE=\"2021/01/21\" -c -o src/sample.o src/sample.c
src/sample.c: In function 'sample_conv_secure_memcmp':
src/sample.c:3130:2: warning: implicit declaration of function 'CRYPTO_memcmp'
  ..
  cc  -g -o haproxy src/ev_poll.o src/ev_epoll.o src/ssl_sample.o 
src/ssl_sock.o src/ssl_crtlist.o src/ssl_ckch.o src/ssl_utils.o 
src/cfgparse-ssl.o src/mux_h2.o src/mux_fcgi.o src/http_ana.o src/stream.o 
src/mux_h1.o src/stats.o src/flt_spoe.o src/backend.o src/tcpcheck.o 
src/server.o src/tools.o src/cli.o src/cfgparse.o src/log.o 
src/cfgparse-listen.o src/check.o src/stick_table.o src/peers.o src/dns.o 
src/stream_interface.o src/sample.o src/http_htx.o src/haproxy.o src/http_act.o 
src/proxy.o src/pattern.o src/listener.o src/cache.o src/http_fetch.o 
src/session.o src/connection.o src/sink.o src/task.o src/filters.o 
src/fcgi-app.o src/tcp_rules.o src/payload.o src/mux_pt.o src/flt_http_comp.o 
src/cfgparse-global.o src/vars.o src/map.o src/debug.o src/queue.o src/h1_htx.o 
src/compression.o src/mworker.o src/flt_trace.o src/acl.o src/trace.o 
src/proto_sockpair.o src/proto_tcp.o src/lb_chash.o src/htx.o 
src/xprt_handshake.o src/h1.o src/sock.o src/ring.o src/extcheck.o 
src/tcp_sample.o src/frontend.o src/h2.o src/channel.o src/applet.o 
src/tcp_act.o src/http_rules.o src/fd.o src/raw_sock.o src/pool.o src/mailers.o 
src/http_conv.o src/lb_fwrr.o src/proto_uxst.o src/http.o src/lb_fwlc.o 
src/lb_fas.o src/activity.o src/sock_unix.o src/protocol.o src/mworker-prog.o 
src/signal.o src/proto_udp.o src/lb_map.o src/sock_inet.o src/ev_select.o 
src/cfgparse-tcp.o src/action.o src/thread.o src/sha1.o src/ebmbtree.o 
src/cfgparse-unix.o src/dict.o src/time.o src/hpack-dec.o src/arg.o 
src/hpack-tbl.o src/eb64tree.o src/chunk.o src/shctx.o src/regex.o src/fcgi.o 
src/eb32tree.o src/eb32sctree.o src/dynbuf.o src/pipe.o src/lru.o 
src/ebimtree.o src/uri_auth.o src/freq_ctr.o src/ebsttree.o src/ebistree.o 
src/auth.o src/wdt.o src/http_acl.o src/hpack-enc.o src/hpack-huff.o 
src/ebtree.o src/base64.o src/hash.o src/dgram.o src/version.o src/fix.o 
src/mqtt.o   -lcrypt -ldl -lrt -lpthread -Wl,--export-dynamic  -lssl -lcrypto 
-ldl
  src/sample.o: In function `sample_conv_secure_memcmp':
  src/sample.c:3130: undefined reference to `CRYPTO_memcmp'
  collect2: ld returned 1 exit status
  make: *** [haproxy] Error 1

See: 
https://git.openssl.org/?p=openssl.git;a=commit;h=2ee798880a246d648ecddadc5b91367bee4a5d98

  $ git describe --contains 2ee798880a246d648ecddadc5b91367bee4a5d98
  OpenSSL_1_0_1d~26
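
For reference, since the hex encoding is easy to get wrong:
OPENSSL_VERSION_NUMBER uses the 0xMNNFFPPS layout (major, minor, fix, patch,
status), so 1.0.1d is 0x1000104fL. A quick way to check against the local
headers (the preprocessor one-liner is just one option):

  $ printf '#include <openssl/opensslv.h>\nOPENSSL_VERSION_NUMBER\n' | cc -E -xc - | grep '^0x'
  0x1000104fL    # on a 1.0.1d install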

Since secure_memcmp is meant to be used in constant-time-sensitive
environments, this commit removes the converter when the version of
OpenSSL does not meet the requirement. Related documentation is adjusted,
pointing the user to strcmp instead.

Cc: Emeric Brun 
Cc: William Lallemand 
Cc: Tim Düsterhus 
Cc: Илья Шипицин 
---
 doc/configuration.txt | 2 +-
 src/sample.c  | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 899bdf553a85..4c9d75dbc9a9 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15996,7 +15996,7 @@ secure_memcmp()
   performed in constant time.
 
   Please note that this converter is only available when haproxy has been
-  compiled with USE_OPENSSL.
+  compiled with USE_OPENSSL. Requires at least OpenSSL 1.0.1d.
 
   Example :
 
diff --git a/src/sample.c b/src/sample.c
index bf2de2a2522d..22246abf8dd7 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -3100,7 +3100,7 @@ static int sample_conv_strcmp(const struct arg *arg_p, 
struct sample *smp, void
return 1;
 }
 
-#ifdef USE_OPENSSL
+#if defined(USE_OPENSSL) && HA_OPENSSL_VERSION_NUMBER >= 0x1000104fL
 /* Compares bytestring with a variable containing a bytestring. Return value
  * is `true` if both bytestrings are bytewise identical and `false` otherwise.
  *
@@ -3422,7 +3422,7 @@ static int smp_check_strcmp(struct arg *args, struct 
sample_conv *conv,
return 0;
 }
 
-#ifdef USE_OPENSSL
+#if defined(USE_OPENSSL) && HA_OPENSSL_VERSION_NUMBER >= 0x1000104fL
 /* This function checks the "secure_memcmp" converter's arguments and extracts 
the
  * variable name and its scope.
  

[PATCH v2 1/2] DOC: ssl: fix typo in INSTALL

2021-01-21 Thread Bertrand Jacquin
---
 INSTALL | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/INSTALL b/INSTALL
index 32c0dd338fb5..7d3cbf4ba5c4 100644
--- a/INSTALL
+++ b/INSTALL
@@ -213,7 +213,7 @@ to forcefully enable it using "USE_LIBCRYPT=1".
 4.5) Cryptography
 -
 For SSL/TLS, it is necessary to use a cryptography library. HAProxy currently
-supports the OpenSSL library, and is known to build ant work with branches
+supports the OpenSSL library, and is known to build and work with branches
 0.9.8, 1.0.0, 1.0.1, 1.0.2, 1.1.0 and 1.1.1. OpenSSL follows a long-term
 support cycle similar to HAProxy's, and each of the branches above receives its
 own fixes, without forcing you to upgrade to another branch. There is no excuse



Re: [PATCH 1/1] MINOR: sample: allow build with OpenSSL 1.0.2

2021-01-21 Thread Bertrand Jacquin
On Friday, January 22 2021 at 00:58:06 +0100, Tim Düsterhus wrote:
> Bertrand,
> 
> Am 22.01.21 um 00:45 schrieb Bertrand Jacquin:
> >> The strcmp converter is not binary safe. It uses strncmp internally.
> > 
> > It is not indeed, what do you think of improving current related strcmp
> > documentation and example to add an hex conversion to achieve the same
> > goal? This would be pretty slow, I'd be happy if you have something more
> > efficient to offer for this use case. Also, CRYPTO_memcmp() is
> 
> Do you have a specific use-case in mind? Where would you / one need to
> compare binary data outside something like hash comparisons?

I don't have a specific use case in mind; the idea was to reduce the
feature-set gap with the underlying lib being used and so remove confusion
for the user. However, I initially thought this was introduced in 1.0.2 when
it was technically introduced within a patch release. I don't believe it is
legitimate for haproxy to carry the burden for old releases of OpenSSL by
backporting a feature used by very few users; instead we should encourage
them to upgrade if they really need this constant-time binary comparison.

> I'd say that users can figure out how to combine strcmp with the hex
> converter themselves. If performance is desired it might make sense to
> add a memcmp() converter that nicely complements strcmp and
> secure_memcmp. It's just that I did not yet have a need for this (and
> apparently no one else did).

Good to me.

> > relatively simple and could be rewritten in openssl compat as an inline
> > function too.
> 
> I prefer to defer to a "known good" implementation. Getting this right
> across compilers is non-trivial to prevent the compiler from optimizing
> it. OpenSSL specifically includes Assembler implementations. IMO If
> users actually need secure_memcmp in HAProxy they should upgrade their
> OpenSSL.

I concur on this.

Cheers,

-- 
Bertrand



Re: [PATCH 1/1] MINOR: sample: allow build with OpenSSL 1.0.2

2021-01-21 Thread Bertrand Jacquin
Hi,

On Thursday, January 21 2021 at 22:34:27 +0100, Tim Düsterhus wrote:
> Bertrand,
> 
> Note: I was the contributor that added the secure_memcmp converter.
> 
> Am 21.01.21 um 22:16 schrieb Bertrand Jacquin:
> > diff --git a/doc/configuration.txt b/doc/configuration.txt
> > index 899bdf553a85..f25da9c1bfa6 100644
> > --- a/doc/configuration.txt
> > +++ b/doc/configuration.txt
> > @@ -15996,7 +15996,11 @@ secure_memcmp()
> >performed in constant time.
> >  
> >Please note that this converter is only available when haproxy has been
> > -  compiled with USE_OPENSSL.
> > +  compiled with USE_OPENSSL. Requires at least OpenSSL 1.0.2.
> > +
> > +  See also the strcmp converter if you need to compare two binary
> > +  strings without concern related to constant time or if OpenSSL is not
> > +  enabled.
> 
> The strcmp converter is not binary safe. It uses strncmp internally.

It is not indeed. What do you think of improving the current strcmp
documentation and example to add a hex conversion to achieve the same goal?
This would be pretty slow, so I'd be happy if you have something more
efficient to offer for this use case. Also, CRYPTO_memcmp() is relatively
simple and could be rewritten in the openssl compat layer as an inline
function too.

> >Example :
> >  
> > diff --git a/src/sample.c b/src/sample.c
> > index bf2de2a2522d..bb12789b551f 100644
> > --- a/src/sample.c
> > +++ b/src/sample.c
> > @@ -3100,12 +3100,14 @@ static int sample_conv_strcmp(const struct arg 
> > *arg_p, struct sample *smp, void
> > return 1;
> >  }
> >  
> > -#ifdef USE_OPENSSL
> > +#if defined(USE_OPENSSL) && HA_OPENSSL_VERSION_NUMBER > 0x1000200fL
> 
> We strive to use feature detection instead of version number comparisons
> for SSL. Is it possible to use feature detection here? Adding Ilya to CC.

Feature detection is definitely the way to go, however it is not
available for CRYPTO_memcmp() specifically.

You can find the initial definition of this function in
https://git.openssl.org/?p=openssl.git;a=commit;h=2ee798880a246d648ecddadc5b91367bee4a5d98
as part of OpenSSL 1.0.1d release (I'll adjust that as well).
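
To illustrate what a build-time feature probe could look like, independent
of version numbers (just a sketch of the idea, not something haproxy does
today):

  $ cat > probe.c <<'EOF'
  #include <openssl/crypto.h>
  int main(void) { char a[4] = "x", b[4] = "x"; return CRYPTO_memcmp(a, b, sizeof(a)); }
  EOF
  $ cc probe.c -lcrypto -o /dev/null && echo '-DHAVE_CRYPTO_MEMCMP'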

> >  /* Compares bytestring with a variable containing a bytestring. Return 
> > value
> >   * is `true` if both bytestrings are bytewise identical and `false` 
> > otherwise.
> >   *
> > - * Comparison will be performed in constant time if both bytestrings are of
> > - * the same length. If the lengths differ execution time will not be 
> > constant.
> > + * Comparison will be performed in constant time if the library support
> > + * constant time memcmp (starting with OpenSSL 1.0.2) and if both
> > + * bytestrings are of the same length. Otherwise execution time will not
> > + * be constant.
> 
> I am not sure whether this wording change is useful, as the definition
> of the function already is guarded by the #if. As such
> sample_conv_secure_memcmp guarantees the constant time comparison
> (independent of the library support). It just might be that the function
> might not exist.

You are correct, that's a leftover from the initial change :) I'll send a
new version with this adjusted once the first point is addressed.

Cheers,
Bertrand

-- 
Bertrand



[PATCH 1/1] MINOR: sample: allow build with OpenSSL 1.0.2

2021-01-21 Thread Bertrand Jacquin
According to the INSTALL file, OpenSSL versions older than 1.0.2 are still
supported by HAProxy; however, they lack CRYPTO_memcmp(), so haproxy does
not build:

  $ make V=1 TARGET=linux-glibc USE_NS= USE_OPENSSL=1
  ..
  cc -Iinclude  -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv  
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers   -Wtype-limits   -DUSE_EPOLL  
-DUSE_NETFILTER -DUSE_POLL  -DUSE_THREAD  -DUSE_BACKTRACE   -DUSE_TPROXY 
-DUSE_LINUX_TPROXY -DUSE_LINUX_SPLICE -DUSE_LIBCRYPT -DUSE_CRYPT_H 
-DUSE_GETADDRINFO -DUSE_OPENSSL  -DUSE_FUTEX -DUSE_ACCEPT4
-DUSE_CPU_AFFINITY -DUSE_TFO  -DUSE_DL -DUSE_RT  -DUSE_PRCTL 
-DUSE_THREAD_DUMP  -DCONFIG_HAPROXY_VERSION=\"2.4-dev5-37286a-78\" 
-DCONFIG_HAPROXY_DATE=\"2021/01/21\" -c -o src/sample.o src/sample.c
src/sample.c: In function 'sample_conv_secure_memcmp':
src/sample.c:3130:2: warning: implicit declaration of function 'CRYPTO_memcmp'
  ..
  cc  -g -o haproxy src/ev_poll.o src/ev_epoll.o src/ssl_sample.o 
src/ssl_sock.o src/ssl_crtlist.o src/ssl_ckch.o src/ssl_utils.o 
src/cfgparse-ssl.o src/mux_h2.o src/mux_fcgi.o src/http_ana.o src/stream.o 
src/mux_h1.o src/stats.o src/flt_spoe.o src/backend.o src/tcpcheck.o 
src/server.o src/tools.o src/cli.o src/cfgparse.o src/log.o 
src/cfgparse-listen.o src/check.o src/stick_table.o src/peers.o src/dns.o 
src/stream_interface.o src/sample.o src/http_htx.o src/haproxy.o src/http_act.o 
src/proxy.o src/pattern.o src/listener.o src/cache.o src/http_fetch.o 
src/session.o src/connection.o src/sink.o src/task.o src/filters.o 
src/fcgi-app.o src/tcp_rules.o src/payload.o src/mux_pt.o src/flt_http_comp.o 
src/cfgparse-global.o src/vars.o src/map.o src/debug.o src/queue.o src/h1_htx.o 
src/compression.o src/mworker.o src/flt_trace.o src/acl.o src/trace.o 
src/proto_sockpair.o src/proto_tcp.o src/lb_chash.o src/htx.o 
src/xprt_handshake.o src/h1.o src/sock.o src/ring.o src/extcheck.o 
src/tcp_sample.o src/frontend.o src/h2.o src/channel.o src/applet.o 
src/tcp_act.o src/http_rules.o src/fd.o src/raw_sock.o src/pool.o src/mailers.o 
src/http_conv.o src/lb_fwrr.o src/proto_uxst.o src/http.o src/lb_fwlc.o 
src/lb_fas.o src/activity.o src/sock_unix.o src/protocol.o src/mworker-prog.o 
src/signal.o src/proto_udp.o src/lb_map.o src/sock_inet.o src/ev_select.o 
src/cfgparse-tcp.o src/action.o src/thread.o src/sha1.o src/ebmbtree.o 
src/cfgparse-unix.o src/dict.o src/time.o src/hpack-dec.o src/arg.o 
src/hpack-tbl.o src/eb64tree.o src/chunk.o src/shctx.o src/regex.o src/fcgi.o 
src/eb32tree.o src/eb32sctree.o src/dynbuf.o src/pipe.o src/lru.o 
src/ebimtree.o src/uri_auth.o src/freq_ctr.o src/ebsttree.o src/ebistree.o 
src/auth.o src/wdt.o src/http_acl.o src/hpack-enc.o src/hpack-huff.o 
src/ebtree.o src/base64.o src/hash.o src/dgram.o src/version.o src/fix.o 
src/mqtt.o   -lcrypt -ldl -lrt -lpthread -Wl,--export-dynamic  -lssl -lcrypto 
-ldl
  src/sample.o: In function `sample_conv_secure_memcmp':
  src/sample.c:3130: undefined reference to `CRYPTO_memcmp'
  collect2: ld returned 1 exit status
  make: *** [haproxy] Error 1

See: 
https://git.openssl.org/?p=openssl.git;a=commitdiff;h=f5cd3561ba9363e6bcc58fcb6b1e94930f81967d

  $ git describe --contains f5cd3561ba9363e6bcc58fcb6b1e94930f81967d
  OpenSSL_1_0_2-beta1~439

Since secure_memcmp is meant to be used in constant-time-sensitive
environments, this commit removes the converter when the version of
OpenSSL does not meet the requirement. Related documentation is adjusted,
pointing the user to strcmp instead.

Cc: Emeric Brun 
Cc: William Lallemand 
---
 doc/configuration.txt |  6 +-
 src/sample.c  | 12 +++-
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 899bdf553a85..f25da9c1bfa6 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15996,7 +15996,11 @@ secure_memcmp()
   performed in constant time.
 
   Please note that this converter is only available when haproxy has been
-  compiled with USE_OPENSSL.
+  compiled with USE_OPENSSL. Requires at least OpenSSL 1.0.2.
+
+  See also the strcmp converter if you need to compare two binary
+  strings without concern related to constant time or if OpenSSL is not
+  enabled.
 
   Example :
 
diff --git a/src/sample.c b/src/sample.c
index bf2de2a2522d..bb12789b551f 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -3100,12 +3100,14 @@ static int sample_conv_strcmp(const struct arg *arg_p, 
struct sample *smp, void
return 1;
 }
 
-#ifdef USE_OPENSSL
+#if defined(USE_OPENSSL) && HA_OPENSSL_VERSION_NUMBER > 0x1000200fL
 /* Compares bytestring with a variable containing a bytestring. Return value
  * is `true` if both bytestrings are bytewise identical and `false` otherwise.
  *
- * Comparison will be performed in constant time if both bytestrings are of
- * the same length. If the lengths differ execution time will not be constant.
+ * Comparison will be 

[PATCH 1/1] BUG/MINOR: lua: define _GNU_SOURCE for LLONG_MAX

2021-01-21 Thread Bertrand Jacquin
Lua requires LLONG_MAX, which is only defined with __USE_ISOC99; that macro
is set by _GNU_SOURCE and is not necessarily defined by default on old
compiler/glibc combinations.

  $ make V=1 TARGET=linux-glibc-legacy USE_THREAD= USE_ACCEPT4= USE_PCRE=1 
USE_OPENSSL=1 USE_ZLIB=1  USE_LUA=1
  ..
  cc -Iinclude  -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv  
-Wno-strict-aliasing -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter  
-Wno-missing-field-initializers  -DUSE_EPOLL  -DUSE_NETFILTER 
-DUSE_PCRE-DUSE_POLL   -DUSE_TPROXY -DUSE_LINUX_TPROXY 
-DUSE_LINUX_SPLICE -DUSE_LIBCRYPT -DUSE_CRYPT_H -DUSE_GETADDRINFO -DUSE_OPENSSL 
-DUSE_LUA -DUSE_FUTEX   -DUSE_ZLIB  -DUSE_CPU_AFFINITY   -DUSE_DL -DUSE_RT  
-DUSE_PRCTL -DUSE_THREAD_DUMP -I/usr/include/openssl101e/  -DUSE_PCRE 
-I/usr/include  -DCONFIG_HAPROXY_VERSION=\"2.4-dev5-73246d-83\" 
-DCONFIG_HAPROXY_DATE=\"2021/01/21\" -c -o src/hlua.o src/hlua.c
  In file included from /usr/local/include/lua.h:15,
   from /usr/local/include/lauxlib.h:15,
   from src/hlua.c:16:
  /usr/local/include/luaconf.h:581:2: error: #error "Compiler does not support 
'long long'. Use option '-DLUA_32BITS'   or '-DLUA_C89_NUMBERS' (see file 
'luaconf.h' for details)"
  ..
  cc -Iinclude  -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv  
-Wno-strict-aliasing -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter  
-Wno-missing-field-initializers  -DUSE_EPOLL  -DUSE_NETFILTER 
-DUSE_PCRE-DUSE_POLL   -DUSE_TPROXY -DUSE_LINUX_TPROXY 
-DUSE_LINUX_SPLICE -DUSE_LIBCRYPT -DUSE_CRYPT_H -DUSE_GETADDRINFO -DUSE_OPENSSL 
-DUSE_LUA -DUSE_FUTEX   -DUSE_ZLIB  -DUSE_CPU_AFFINITY   -DUSE_DL -DUSE_RT  
-DUSE_PRCTL -DUSE_THREAD_DUMP -I/usr/include/openssl101e/  -DUSE_PCRE 
-I/usr/include  -DCONFIG_HAPROXY_VERSION=\"2.4-dev5-73246d-83\" 
-DCONFIG_HAPROXY_DATE=\"2021/01/21\" -c -o src/hlua_fcn.o src/hlua_fcn.c
  In file included from /usr/local/include/lua.h:15,
   from /usr/local/include/lauxlib.h:15,
   from src/hlua_fcn.c:17:
  /usr/local/include/luaconf.h:581:2: error: #error "Compiler does not support 
'long long'. Use option '-DLUA_32BITS'   or '-DLUA_C89_NUMBERS' (see file 
'luaconf.h' for details)"
  ..

Cc: Thierry Fournier 
---
 src/hlua.c | 2 ++
 src/hlua_fcn.c | 3 +++
 2 files changed, 5 insertions(+)

diff --git a/src/hlua.c b/src/hlua.c
index 785a1fa3686e..0de1937db145 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -10,6 +10,8 @@
  *
  */
 
+#define _GNU_SOURCE
+
 #include 
 #include 
 
diff --git a/src/hlua_fcn.c b/src/hlua_fcn.c
index 1f66c181fb49..aab864370cc7 100644
--- a/src/hlua_fcn.c
+++ b/src/hlua_fcn.c
@@ -14,6 +14,9 @@
  * in an environment able to catch a longjmp, otherwise a
  * critical error can be raised.
  */
+
+#define _GNU_SOURCE
+
 #include 
 #include 
 #include 



[PATCH 1/1] MINOR: lua: remove unused variable

2021-01-21 Thread Bertrand Jacquin
hlua_init() uses 'idx' only in OpenSSL-related code, while 'i' is used
in shared code and is safe to be reused. This commit replaces the use of
'idx' with 'i'.

  $ make V=1 TARGET=linux-glibc USE_LUA=1 USE_OPENSSL=
  ..
  cc -Iinclude  -O2 -g -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-clobbered -Wno-missing-field-initializers 
-Wno-cast-function-type  -Wtype-limits -Wshift-negative-value 
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference   -DUSE_EPOLL  
-DUSE_NETFILTER -DUSE_POLL  -DUSE_THREAD  -DUSE_BACKTRACE   -DUSE_TPROXY 
-DUSE_LINUX_TPROXY -DUSE_LINUX_SPLICE -DUSE_LIBCRYPT -DUSE_CRYPT_H 
-DUSE_GETADDRINFO  -DUSE_LUA -DUSE_FUTEX -DUSE_ACCEPT4-DUSE_CPU_AFFINITY 
-DUSE_TFO -DUSE_NS -DUSE_DL -DUSE_RT  -DUSE_PRCTL -DUSE_THREAD_DUMP
-I/usr/include/lua5.3 -I/usr/include/lua5.3  
-DCONFIG_HAPROXY_VERSION=\"2.4-dev5-37286a-78\" 
-DCONFIG_HAPROXY_DATE=\"2021/01/21\" -c -o src/hlua.o src/hlua.c
  src/hlua.c: In function 'hlua_init':
  src/hlua.c:9145:6: warning: unused variable 'idx' [-Wunused-variable]
   9145 |  int idx;
|  ^~~
---
 src/hlua.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/src/hlua.c b/src/hlua.c
index 1893511f4a59..785a1fa3686e 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -9142,7 +9142,6 @@ lua_State *hlua_init_state(int thread_num)
 
 void hlua_init(void) {
int i;
-   int idx;
 #ifdef USE_OPENSSL
struct srv_kw *kw;
int tmp_error;
@@ -9273,8 +9272,8 @@ void hlua_init(void) {
socket_ssl.use_ssl = 1;
socket_ssl.xprt = xprt_get(XPRT_SSL);
 
-   for (idx = 0; args[idx] != NULL; idx++) {
-   if ((kw = srv_find_kw(args[idx])) != NULL) { /* Maybe it's 
registered server keyword */
+   for (i = 0; args[i] != NULL; i++) {
+   if ((kw = srv_find_kw(args[i])) != NULL) { /* Maybe it's 
registered server keyword */
/*
 *
 * If the keyword is not known, we can search in the 
registered
@@ -9282,13 +9281,13 @@ void hlua_init(void) {
 * features like client certificates and ssl_verify.
 *
 */
-   tmp_error = kw->parse(args, &idx, &socket_proxy, &socket_ssl, &error);
+   tmp_error = kw->parse(args, &i, &socket_proxy, &socket_ssl, &error);
if (tmp_error != 0) {
fprintf(stderr, "INTERNAL ERROR: %s\n", error);
abort(); /* This must be never arrives because 
the command line
not editable by the user. */
}
-   idx += kw->skip;
+   i += kw->skip;
}
}
 #endif



[PATCH 1/1] BUG/MINOR: worker: define _GNU_SOURCE for strsignal()

2021-01-20 Thread Bertrand Jacquin
glibc < 2.10 requires _GNU_SOURCE in order to make use of strsignal(),
otherwise leading to a SEGV at runtime.

  $ make V=1 TARGET=linux-glibc-legacy USE_THREAD= USE_ACCEPT4=
  ..
  src/mworker.c: In function 'mworker_catch_sigchld':
  src/mworker.c:285: warning: implicit declaration of function 'strsignal'
  src/mworker.c:285: warning: pointer/integer type mismatch in conditional 
expression
  ..

  $ make V=1 reg-tests REGTESTS_TYPES=slow,default
  ..
  ## Test case: reg-tests/mcli/mcli_start_progs.vtc ##
  ## test results in: 
"/tmp/haregtests-2021-01-19_15-18-07.n24989/vtc.29077.28f6153d"
   h1Bad exit status: 0x008b exit 0x0 signal 11 core 128
   h1Assert error in haproxy_wait(), src/vtc_haproxy.c line 792:  
Condition(*(>fds[1]) >= 0) not true.  Errno=0 Success
  ..

  $ gdb ./haproxy /tmp/core.0.haproxy.30270
  ..
  Core was generated by `/root/haproxy/haproxy -d -W -S fd@8 -dM -f 
/tmp/haregtests-2021-01-19_15-18-07.'.
  Program terminated with signal 11, Segmentation fault.
  #0  0x2b387a10 in strlen () from /lib64/libc.so.6
  (gdb) bt
  #0  0x2b387a10 in strlen () from /lib64/libc.so.6
  #1  0x2b354b69 in vfprintf () from /lib64/libc.so.6
  #2  0x2b37788a in vsnprintf () from /lib64/libc.so.6
  #3  0x004a76a3 in memvprintf (out=0x7fffedc680a0, format=0x5a5d58 
"Current worker #%d (%d) exited with code %d (%s)\n", orig_args=0x7fffedc680d0)
  at src/tools.c:3868
  #4  0x004bbd40 in print_message (label=0x58abed "ALERT", fmt=0x5a5d58 
"Current worker #%d (%d) exited with code %d (%s)\n", argp=0x7fffedc680d0)
  at src/log.c:1066
  #5  0x004bc07f in ha_alert (fmt=0x5a5d58 "Current worker #%d (%d) 
exited with code %d (%s)\n") at src/log.c:1109
  #6  0x00534b7b in mworker_catch_sigchld (sh=) at 
src/mworker.c:293
  #7  0x00556af3 in __signal_process_queue () at src/signal.c:88
  #8  0x004f6216 in signal_process_queue () at 
include/haproxy/signal.h:39
  #9  run_poll_loop () at src/haproxy.c:2859
  #10 0x004f63b7 in run_thread_poll_loop (data=) 
at src/haproxy.c:3028
  #11 0x004faaac in main (argc=, 
argv=0x7fffedc68498) at src/haproxy.c:904

See: https://man7.org/linux/man-pages/man3/strsignal.3.html
---
 src/mworker.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/mworker.c b/src/mworker.c
index 690f3f025721..abdc1d3893cd 100644
--- a/src/mworker.c
+++ b/src/mworker.c
@@ -10,6 +10,8 @@
  *
  */
 
+#define _GNU_SOURCE
+
 #include 
 #include 
 #include 



Re: [PATCH] DOC: replace use of HA-Proxy with HAProxy

2021-01-20 Thread Bertrand Jacquin
Hi Willy,

On Wednesday, January 20 2021 at 19:54:09 +0100, Willy Tarreau wrote:
> On Mon, Jan 18, 2021 at 08:47:43AM +0100, William Lallemand wrote:
> > Hello Bertrand,
> > 
> > On Sun, Jan 17, 2021 at 06:58:46PM +0000, Bertrand Jacquin wrote:
> > > This is a pretty lame commit in a attempt to use a common wording of
> > > HAProxy used 1319 times compared to HAproxy used 10 times
> > > index e36e020c5ce7..92449a04f6e2 100644
> > > 
> > > [...]
> > >
> > > --- a/src/haproxy.c
> > > +++ b/src/haproxy.c
> > > @@ -537,7 +537,7 @@ static void display_version()
> > >  {
> > >   struct utsname utsname;
> > >  
> > > - printf("HA-Proxy version %s %s - https://haproxy.org/\n"
> > > + printf("HAProxy version %s %s - https://haproxy.org/\n"
> > >  PRODUCT_STATUS "\n", haproxy_version, haproxy_date);
> > >  
> > >   if (strlen(PRODUCT_URL_BUGS) > 0) {
> > > 
> > 
> > I wanted to do this a long time ago, and at this time we decided to keep
> > it as it was to not break existing scripts. I think we'll let Willy
> > decide if that's a good idea now :-)
> 
> I'm totally fine with changing this ugly one that I usually spot right
> after the release :-)
> 
> However, this one and the other only real user-visible one affecting the
> mailers subject should be changed as a separate patch because we won't
> backport them. Ideally the doc and code comments should be in separate
> patches so that the doc ones can be backported and we keep the cleanups
> apart. But at quick glance there aren't that many comments so I think
> they'll easily be backported without causing trouble. Just something to
> keep in mind for next time.

This all definitely makes sense. I'll provide the split patchset over the
week-end, as I want to adjust vtest as well: as William rightfully pointed
out, vtest itself is also messing around with naming and making my eyes
bleed every time I see it. Again, this is all silly and pretty much
pointless, but that's not a reason not to split and test all this properly.

Cheers,
Bertrand

-- 
Bertrand



Re: [PATCH] DOC: replace use of HA-Proxy with HAProxy

2021-01-17 Thread Bertrand Jacquin
On Sunday, January 17 2021 at 20:28:40 +0100, Tim Düsterhus wrote:
> Bertrand,
> 
> Am 17.01.21 um 20:19 schrieb Bertrand Jacquin:
> > On Sunday, January 17 2021 at 20:02:47 +0100, Tim Düsterhus wrote:
> >> Bertrand,
> >>
> >> Am 17.01.21 um 19:58 schrieb Bertrand Jacquin:
> >>> This is a pretty lame commit in a attempt to use a common wording of
> >>> HAProxy used 1319 times compared to HAproxy used 10 times
> >>
> >> I believe you have a typo in the commit message.
> > 
> > You are correct, I was too quick in reusing the commit message.
> > Fixed in new patchset
> > 
> 
> I see you use git send-email, but the patches are not properly numbered
> which makes it hard to see which patches belong together. Here's what I
> do for my patches, which results in proper threading.

Indeed, they are not well numbered since I use format.numbered = false
in my git config. Let me know if you want me to resend them with proper
subject/threading.
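
For reference, one way to get numbered and threaded patches out of git (an
assumption about a workable setup, not necessarily Tim's exact workflow,
which isn't shown above):

  $ git config format.numbered auto
  $ git format-patch --cover-letter --thread=shallow -o out/ origin/master
  $ git send-email out/*.patch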

Cheers,

-- 
Bertrand



[PATCH] DOC: replace use of HA-Proxy with HAProxy

2021-01-17 Thread Bertrand Jacquin
This is a pretty lame commit in an attempt to use a common wording of
HAProxy used 1319 times compared to HA-Proxy used 10 times
---
 doc/internals/filters.txt | 2 +-
 doc/intro.txt | 8 
 doc/management.txt| 2 +-
 examples/haproxy.init | 2 +-
 scripts/run-regtests.sh   | 2 +-
 src/haproxy.c | 4 ++--
 6 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/doc/internals/filters.txt b/doc/internals/filters.txt
index 5e9b58e56194..a72e908136b4 100644
--- a/doc/internals/filters.txt
+++ b/doc/internals/filters.txt
@@ -109,7 +109,7 @@ itself.
 The list of available filters is reported by 'haproxy -vv':
 
 $> haproxy -vv
-HA-Proxy version 1.7-dev2-3a1d4a-33 2016/03/21
+HAProxy version 1.7-dev2-3a1d4a-33 2016/03/21
 Copyright 2000-2016 Willy Tarreau 
 
 [...]
diff --git a/doc/intro.txt b/doc/intro.txt
index c8021405ba7c..3e650e5faf07 100644
--- a/doc/intro.txt
+++ b/doc/intro.txt
@@ -1498,24 +1498,24 @@ branch, you need to proceed this way :
 generally sufficient to type "haproxy -v". A development version will
 appear like this, with the "dev" word after the branch number :
 
-  HA-Proxy version 1.6-dev3-385ecc-68 2015/08/18
+  HAProxy version 1.6-dev3-385ecc-68 2015/08/18
 
 A stable version will appear like this, as well as unmodified stable
 versions provided by operating system vendors :
 
-  HA-Proxy version 1.5.14 2015/07/02
+  HAProxy version 1.5.14 2015/07/02
 
 And a nightly snapshot of a stable version will appear like this with an
 hexadecimal sequence after the version, and with the date of the snapshot
 instead of the date of the release :
 
-  HA-Proxy version 1.5.14-e4766ba 2015/07/29
+  HAProxy version 1.5.14-e4766ba 2015/07/29
 
 Any other format may indicate a system-specific package with its own
 patch set. For example HAProxy Enterprise versions will appear with the
 following format (--) :
 
-  HA-Proxy version 1.5.0-994126-357 2015/07/02
+  HAProxy version 1.5.0-994126-357 2015/07/02
 
 In addition, versions 2.1 and above will include a "Status" line indicating
 whether the version is safe for production or not, and if so, till when, as
diff --git a/doc/management.txt b/doc/management.txt
index 2600478fddbc..b4a610d46f12 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -354,7 +354,7 @@ the versions of the libraries being used are reported 
there. It is also what
 you will systematically be asked for when posting a bug report :
 
   $ haproxy -vv
-  HA-Proxy version 1.6-dev7-a088d3-4 2015/10/08
+  HAProxy version 1.6-dev7-a088d3-4 2015/10/08
   Copyright 2000-2015 Willy Tarreau 
 
   Build options :
diff --git a/examples/haproxy.init b/examples/haproxy.init
index f08fcb0dd95c..cc120d855dae 100644
--- a/examples/haproxy.init
+++ b/examples/haproxy.init
@@ -1,7 +1,7 @@
 #!/bin/sh
 #
 # chkconfig: - 85 15
-# description: HA-Proxy is a TCP/HTTP reverse proxy which is particularly 
suited \
+# description: HAProxy is a TCP/HTTP reverse proxy which is particularly 
suited \
 #  for high availability environments.
 # processname: haproxy
 # config: /etc/haproxy/haproxy.cfg
diff --git a/scripts/run-regtests.sh b/scripts/run-regtests.sh
index 27bb13cbf75b..5e2cf0f23bf6 100755
--- a/scripts/run-regtests.sh
+++ b/scripts/run-regtests.sh
@@ -345,7 +345,7 @@ if [ $preparefailed ]; then
 fi
 
 { read HAPROXY_VERSION; read TARGET; read FEATURES; read SERVICES; } << EOF
-$($HAPROXY_PROGRAM $HAPROXY_ARGS -vv | grep 'HA-Proxy 
version\|TARGET.*=\|^Feature\|^Available services' | sed 's/.* [:=] //')
+$($HAPROXY_PROGRAM $HAPROXY_ARGS -vv | grep 'HAProxy 
version\|TARGET.*=\|^Feature\|^Available services' | sed 's/.* [:=] //')
 EOF
 
 HAPROXY_VERSION=$(echo $HAPROXY_VERSION | cut -d " " -f 3)
diff --git a/src/haproxy.c b/src/haproxy.c
index e36e020c5ce7..92449a04f6e2 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1,5 +1,5 @@
 /*
- * HA-Proxy : High Availability-enabled HTTP/TCP proxy
+ * HAProxy : High Availability-enabled HTTP/TCP proxy
  * Copyright 2000-2021 Willy Tarreau .
  *
  * This program is free software; you can redistribute it and/or
@@ -537,7 +537,7 @@ static void display_version()
 {
struct utsname utsname;
 
-   printf("HA-Proxy version %s %s - https://haproxy.org/\n"
+   printf("HAProxy version %s %s - https://haproxy.org/\n"
   PRODUCT_STATUS "\n", haproxy_version, haproxy_date);
 
if (strlen(PRODUCT_URL_BUGS) > 0) {



Re: [PATCH] DOC: replace use of HA-Proxy with HAProxy

2021-01-17 Thread Bertrand Jacquin
On Sunday, January 17 2021 at 20:02:47 +0100, Tim Düsterhus wrote:
> Bertrand,
> 
> Am 17.01.21 um 19:58 schrieb Bertrand Jacquin:
> > This is a pretty lame commit in a attempt to use a common wording of
> > HAProxy used 1319 times compared to HAproxy used 10 times
> 
> I believe you have a typo in the commit message.

You are correct, I was too quick in reusing the commit message.
Fixed in new patchset

-- 
Bertrand



[PATCH] DOC: replace use of HAproxy with HAProxy

2021-01-17 Thread Bertrand Jacquin
This is a pretty lame commit in an attempt to use a common wording of
HAProxy used 1319 times compared to HAproxy used 18 times
---
 contrib/syntax-highlight/haproxy.vim |  2 +-
 doc/SPOE.txt |  2 +-
 doc/architecture.txt |  2 +-
 doc/internals/htx-api.txt|  2 +-
 doc/lua-api/index.rst|  8 
 doc/lua.txt  |  6 +++---
 doc/regression-testing.txt   | 10 +-
 src/mailers.c|  2 +-
 src/mux_fcgi.c   |  2 +-
 9 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/contrib/syntax-highlight/haproxy.vim 
b/contrib/syntax-highlight/haproxy.vim
index 48fd78c637f1..02c323255190 100644
--- a/contrib/syntax-highlight/haproxy.vim
+++ b/contrib/syntax-highlight/haproxy.vim
@@ -1,5 +1,5 @@
 " Vim syntax file
-" Language:HAproxy
+" Language:HAProxy
 " Maintainer:  Bruno Michel 
 " Last Change: Mar 30, 2007
 " Version: 0.3
diff --git a/doc/SPOE.txt b/doc/SPOE.txt
index dfc0b718b8e3..9c0ae6be237e 100644
--- a/doc/SPOE.txt
+++ b/doc/SPOE.txt
@@ -749,7 +749,7 @@ Here are the list of official capabilities that HAProxy and 
agents can support:
 
   * pipelining: This is the ability for a peer to decouple NOTIFY and ACK
 frames. This is a symmectical capability. To be used, it must
-be supported by HAproxy and agents. Unlike HTTP pipelining, the
+be supported by HAProxy and agents. Unlike HTTP pipelining, the
 ACK frames can be send in any order, but always on the same TCP
 connection used for the corresponding NOTIFY frame.
 
diff --git a/doc/architecture.txt b/doc/architecture.txt
index f359dd7f11b6..6b122e858a09 100644
--- a/doc/architecture.txt
+++ b/doc/architecture.txt
@@ -665,7 +665,7 @@ Description :
 When an application is spread across several servers, the time to update all
 instances increases, so the application seems jerky for a longer period.
 
-HAproxy offers several solutions for this. Although it cannot be reconfigured
+HAProxy offers several solutions for this. Although it cannot be reconfigured
 without being stopped, nor does it offer any external command, there are other
 working solutions.
 
diff --git a/doc/internals/htx-api.txt b/doc/internals/htx-api.txt
index 32cfd1e4125c..d2dc87b012af 100644
--- a/doc/internals/htx-api.txt
+++ b/doc/internals/htx-api.txt
@@ -295,7 +295,7 @@ buffer. There are 2 functions to do so, the second one 
relying on the first:
 Both functions return a "zero-sized" HTX message if the buffer is null. This
 way, you are sure to always have a valid HTX message. The first function is the
 default function to use. The second one is only useful when some content will 
be
-added. For instance, it used by the HTX analyzers when HAproxy generates a
+added. For instance, it used by the HTX analyzers when HAProxy generates a
 response. This way, the buffer is in a right state and you don't need to take
 care of it anymore outside the possible error paths.
 
diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index fb7eb0978caf..e0bda2dc8963 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -488,7 +488,7 @@ Core class
   end)
 ..
 
-  This example code is used in HAproxy configuration like this:
+  This example code is used in HAProxy configuration like this:
 
 ::
 
@@ -511,7 +511,7 @@ Core class
   core.register_action("hello-world", { "tcp-req", "http-req" }, hello_world, 
2)
 ..
 
-  This example code is used in HAproxy configuration like this:
+  This example code is used in HAProxy configuration like this:
 
 ::
 
@@ -635,7 +635,7 @@ Core class
   end)
 ..
 
-  This example code is used in HAproxy configuration like this:
+  This example code is used in HAProxy configuration like this:
 
 ::
 
@@ -2808,7 +2808,7 @@ Action class
 
   This attribute is an integer (6). It aborts the current message. The message
   processing is stopped and the transaction is terminated. For HTTP streams,
-  HAproxy assumes a response was already sent to the client. From the Lua
+  HAProxy assumes a response was already sent to the client. From the Lua
   actions point of view, when this code is used, the transaction is terminated
   with no reply.
 
diff --git a/doc/lua.txt b/doc/lua.txt
index 9bf9f144d9b0..87b6af423010 100644
--- a/doc/lua.txt
+++ b/doc/lua.txt
@@ -384,7 +384,7 @@ object name.
 The Lua developer can add entries to the HAProxy object. They just work 
carefully
 and prevent to modify the index [0].
 
-Common HAproxy objects are:
+Common HAProxy objects are:
 
  - TXN: manipulates the transaction between the client and the server
  - Channel: manipulates proxified data between the client and the server
@@ -488,7 +488,7 @@ yield must be jump to the root of execution. The 
intermediate setjmp() avoids
 this behaviour.
 
 
-   HAproxy start Lua execution
+   HAProxy start Lua execution
  + Lua puts a 


[PATCH] MINOR: build: discard echoing in help target

2021-01-17 Thread Bertrand Jacquin
When V=1 is used in conjunction with help, the output becomes pretty
difficult to read properly.

  $ make TARGET=linux-glibc V=1 help
  ..
DEBUG_USE_ABORT: use abort() for program termination, see 
include/haproxy/bug.h for details
  echo; \
 if [ -n "" ]; then \
   if [ -n "" ]; then \
  echo "Current TARGET: "; \
   else \
  echo "Current TARGET:  (custom target)"; \
   fi; \
 else \
   echo "TARGET not set, you may pass 'TARGET=xxx' to set one among :";\
   echo "  linux-glibc, linux-glibc-legacy, solaris, freebsd, dragonfly, 
netbsd,"; \
   echo "  osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, generic,"; 
\
   echo "  custom"; \
 fi

  TARGET not set, you may pass 'TARGET=xxx' to set one among :
linux-glibc, linux-glibc-legacy, solaris, freebsd, dragonfly, netbsd,
osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, generic,
custom
  echo;echo "Enabled features for TARGET '' (disable with 'USE_xxx=') :"

  Enabled features for TARGET '' (disable with 'USE_xxx=') :
  set --POLL  ; echo "  $*" | (fmt || 
cat) 2>/dev/null
POLL
  echo;echo "Disabled features for TARGET '' (enable with 'USE_xxx=1') :"

  Disabled features for TARGET '' (enable with 'USE_xxx=1') :
  set -- EPOLL KQUEUE NETFILTER PCRE PCRE_JIT PCRE2 PCRE2_JIT  PRIVATE_CACHE 
THREAD PTHREAD_PSHARED BACKTRACE STATIC_PCRE STATIC_PCRE2 TPROXY LINUX_TPROXY 
LINUX_SPLICE LIBCRYPT CRYPT_H GETADDRINFO OPENSSL LUA FUTEX ACCEPT4 CLOSEFROM 
ZLIB SLZ CPU_AFFINITY TFO NS DL RT DEVICEATLAS 51DEGREES WURFL SYSTEMD 
OBSOLETE_LINKER PRCTL THREAD_DUMP EVPORTS OT QUIC; echo "  $*" | (fmt || cat) 
2>/dev/null
EPOLL KQUEUE NETFILTER PCRE PCRE_JIT PCRE2 PCRE2_JIT PRIVATE_CACHE

This commit ensures the help target always discards line echoing
regardless of the V variable, as done for the reg-tests-help target.
---
 Makefile | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/Makefile b/Makefile
index 27d56451cdd7..b0ab6bce5281 100644
--- a/Makefile
+++ b/Makefile
@@ -882,8 +882,8 @@ INCLUDES = $(wildcard include/*/*.h)
 DEP = $(INCLUDES) .build_opts
 
 help:
-   $(Q)sed -ne "/^[^#]*$$/q;s/^# \{0,1\}\(.*\)/\1/;p" Makefile
-   $(Q)echo; \
+   @sed -ne "/^[^#]*$$/q;s/^# \{0,1\}\(.*\)/\1/;p" Makefile
+   @echo; \
   if [ -n "$(TARGET)" ]; then \
 if [ -n "$(set_target_defaults)" ]; then \
echo "Current TARGET: $(TARGET)"; \
@@ -896,10 +896,10 @@ help:
 echo "  osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, 
generic,"; \
 echo "  custom"; \
   fi
-   $(Q)echo;echo "Enabled features for TARGET '$(TARGET)' (disable with 
'USE_xxx=') :"
-   $(Q)set -- $(foreach opt,$(patsubst USE_%,%,$(use_opts)),$(if 
$(USE_$(opt)),$(opt),)); echo "  $$*" | (fmt || cat) 2>/dev/null
-   $(Q)echo;echo "Disabled features for TARGET '$(TARGET)' (enable with 
'USE_xxx=1') :"
-   $(Q)set -- $(foreach opt,$(patsubst USE_%,%,$(use_opts)),$(if 
$(USE_$(opt)),,$(opt))); echo "  $$*" | (fmt || cat) 2>/dev/null
+   @echo;echo "Enabled features for TARGET '$(TARGET)' (disable with 
'USE_xxx=') :"
+   @set -- $(foreach opt,$(patsubst USE_%,%,$(use_opts)),$(if 
$(USE_$(opt)),$(opt),)); echo "  $$*" | (fmt || cat) 2>/dev/null
+   @echo;echo "Disabled features for TARGET '$(TARGET)' (enable with 
'USE_xxx=1') :"
+   @set -- $(foreach opt,$(patsubst USE_%,%,$(use_opts)),$(if 
$(USE_$(opt)),,$(opt))); echo "  $$*" | (fmt || cat) 2>/dev/null
 
 # Used only to force a rebuild if some build options change, but we don't do
 # it for certain targets which take no build options



[PATCH] BUG/MEDIUM: contrib/spoa: do not register python3.8 if --embed fail

2020-05-22 Thread Bertrand Jacquin
spoa server fails to build when python3.8 is not available. If
python3-config --embed fails, the output of the command is registered in
check_python_config. However, when it's later used to define
PYTHON_DEFAULT_INC and PYTHON_DEFAULT_LIB, its content does not match
and the build falls back to python2.7.

Content of check_python_config when building with python3.6:

  Usage: bin/python3-config 
--prefix|--exec-prefix|--includes|--libs|--cflags|--ldflags|--extension-suffix|--help|--abiflags|--configdir
 python3

As we are only looking at the return code, this commit ensures we always
ignore the output of the python3-config and hash commands.
---
 contrib/spoa_server/Makefile | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/contrib/spoa_server/Makefile b/contrib/spoa_server/Makefile
index e7b20db4c00f..5de6135ec7d6 100644
--- a/contrib/spoa_server/Makefile
+++ b/contrib/spoa_server/Makefile
@@ -25,9 +25,9 @@ ifneq ($(USE_PYTHON),)
 OBJS += ps_python.o
 
 # "--embed" flag is supported (and required) only from python 3.8+
-check_python_config := $(shell if python3-config --embed; then echo 
"python3.8+"; \
-elif hash python3-config; then echo "python3"; \
-elif hash python-config; then echo "python2"; fi)
+check_python_config := $(shell if python3-config --embed > /dev/null 2>&1 ; 
then echo "python3.8+"; \
+elif hash python3-config > /dev/null 2>&1 ; then echo "python3"; \
+elif hash python-config > /dev/null 2>&1 ; then echo "python2"; fi)
 
 ifeq ($(check_python_config), python3.8+)
 PYTHON_DEFAULT_INC := $(shell python3-config --includes)



[PATCH] DOC: fix successful typo

2019-05-16 Thread Bertrand Jacquin
---
 scripts/run-regtests.sh | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/scripts/run-regtests.sh b/scripts/run-regtests.sh
index ccfdd601acf4..19e8a1564b1b 100755
--- a/scripts/run-regtests.sh
+++ b/scripts/run-regtests.sh
@@ -49,12 +49,12 @@ _help()
 
   Including text below into a .vtc file will check for its requirements
   related to haproxy's target and compilation options
-# Below targets are not capable of completing this test succesfully
+# Below targets are not capable of completing this test successfully
 #EXCLUDE_TARGET=freebsd, abns sockets are not available on freebsd
 
 #EXCLUDE_TARGETS=dos,freebsd,windows
 
-# Below option is required to complete this test succesfully
+# Below option is required to complete this test successfully
 #REQUIRE_OPTION=OPENSSL, this test needs OPENSSL compiled in.
 
 #REQUIRE_OPTIONS=ZLIB|SLZ,OPENSSL,LUA
@@ -355,7 +355,7 @@ fi
 
 
 if [ $_vtresult -eq 0 ]; then
-  # all tests were succesfull, removing tempdir (the last part.)
+  # all tests were successful, removing tempdir (the last part.)
   # ignore errors is the directory is not empty or if it does not exist
rmdir "$TESTDIR" 2>/dev/null
 fi



Re: [PATCH 2/2] DOC: ssl: Specify stronger example ciphers

2019-02-06 Thread Bertrand Jacquin

Hi all,

On 05/02/2019 05:37, Willy Tarreau wrote:

Hi guys,

On Mon, Feb 04, 2019 at 10:13:11PM +0100, Lukas Tribus wrote:

> Since TLS ciphers are not well understand, it is very common parameters
> from documentation are used as is. Since RC4 should not be used anymore
> I believe it is wiser to show example including stronger ciphers to
> avoid deploying unsafe configuration in the wild.
>
> "ALL" is also to avoid since it contains a lot of deprecated,
> insecure ciphers, and garbage that are not applicable in haproxy
> context.

Frankly I would rather remove those altogether and maybe link to
somewhere else, like the Mozilla TLS recommendations:
https://wiki.mozilla.org/Security/Server_Side_TLS

No one checks for documentation updates in stable releases, unless
it's for a new feature, so I'd be inclined to say backporting doc
fixes regarding security relevant stuff does not really work.


I agree, we've been caught several times shipping old warnings like
"threads are experimental" or "haproxy doesn't cache" or stuff like
this. It's terribly difficult to maintain isolated doc parts and even
harder to keep them up to date in stable versions. Thus probably we
should instead propose the link to Mozilla's Wiki above as well as
the link to their config generator which is trivial to use :

https://mozilla.github.io/server-side-tls/ssl-config-generator/

It even explains how to use HSTS by default. What do you think Bertrand 
?


Yep, all of this sounds legit. Please find attache a new patch serie 
attempting to address all your concerns.


Cheers,
Bertrand

--
Bertrand

From 10071238c893b49cd43cf447a885e4b6af4cd44c Mon Sep 17 00:00:00 2001
From: Bertrand Jacquin 
Date: Sun, 3 Feb 2019 18:48:49 +
Subject: [PATCH 2/2] DOC: ssl: Stop documenting ciphers example to use

Since TLS ciphers are not well understood, it is very common practice to
copy and paste parameters from documentation and use them as-is. Since RC4
should not be used anymore, it is wiser to link users to up-to-date
documentation from Mozilla to avoid unsafe configuration in the wild.

Clarify the location of man pages for OpenSSL when missing.
---
 doc/configuration.txt | 62 +--
 1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 9d366b9c7e7a..d2a49cf11bdf 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1029,10 +1029,12 @@ ssl-default-bind-ciphers 
   the default string describing the list of cipher algorithms ("cipher suite")
   that are negotiated during the SSL/TLS handshake up to TLSv1.2 for all
   "bind" lines which do not explicitly define theirs. The format of the string
-  is defined in "man 1 ciphers" from OpenSSL man pages, and can be for instance
-  a string such as "AES:ALL:!aNULL:!eNULL:+RC4:@STRENGTH" (without quotes). For
-  TLSv1.3 cipher configuration, please check the "ssl-default-bind-ciphersuites"
-  keyword. Please check the "bind" keyword for more information.
+  is defined in "man 1 ciphers" from OpenSSL man pages. For background
+  information and recommendations see e.g.
+  (https://wiki.mozilla.org/Security/Server_Side_TLS) and
+  (https://mozilla.github.io/server-side-tls/ssl-config-generator/). For TLSv1.3
+  cipher configuration, please check the "ssl-default-bind-ciphersuites" keyword.
+  Please check the "bind" keyword for more information.
 
 ssl-default-bind-ciphersuites 
   This setting is only available when support for OpenSSL was built in and
@@ -1040,11 +1042,9 @@ ssl-default-bind-ciphersuites 
   describing the list of cipher algorithms ("cipher suite") that are negotiated
   during the TLSv1.3 handshake for all "bind" lines which do not explicitly define
   theirs. The format of the string is defined in
-  "man 1 ciphers" from OpenSSL man pages under the section "ciphersuites", and can
-  be for instance a string such as
-  "TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256"
-  (without quotes). For cipher configuration for TLSv1.2 and earlier, please check
-  the "ssl-default-bind-ciphers" keyword. Please check the "bind" keyword for more
+  "man 1 ciphers" from OpenSSL man pages under the section "ciphersuites". For
+  cipher configuration for TLSv1.2 and earlier, please check the
+  "ssl-default-bind-ciphers" keyword. Please check the "bind" keyword for more
   information.
 
 ssl-default-bind-options []...
@@ -1061,9 +1061,13 @@ ssl-default-server-ciphers 
   sets the default string describing the list of cipher algorithms that are
   negotiated during the SSL/TLS handshake up to TLSv1.2 with the server,
   for all "server" lines which do not explicitly define theirs. The format of
-  

[PATCH 1/2] DOC: ssl: Clarify when pre TLSv1.3 cipher can be used

2019-02-04 Thread Bertrand Jacquin
This is mainly driven by the fact TLSv1.3 will have a successor at some
point.
---
 doc/configuration.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index fe5eb25076c7..f7e1339a3e9b 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1027,7 +1027,7 @@ setenv  
 ssl-default-bind-ciphers 
   This setting is only available when support for OpenSSL was built in. It sets
   the default string describing the list of cipher algorithms ("cipher suite")
-  that are negotiated during the SSL/TLS handshake except for TLSv1.3 for all
+  that are negotiated during the SSL/TLS handshake up to TLSv1.2 for all
   "bind" lines which do not explicitly define theirs. The format of the string
   is defined in "man 1 ciphers" from OpenSSL man pages, and can be for instance
   a string such as "AES:ALL:!aNULL:!eNULL:+RC4:@STRENGTH" (without quotes). For
@@ -10893,7 +10893,7 @@ ca-sign-pass 
 ciphers 
   This setting is only available when support for OpenSSL was built in. It sets
   the string describing the list of cipher algorithms ("cipher suite") that are
-  negotiated during the SSL/TLS handshake except for TLSv1.3. The format of the
+  negotiated during the SSL/TLS handshake up to TLSv1.2. The format of the
   string is defined in "man 1 ciphers" from OpenSSL man pages, and can be for
   instance a string such as "AES:ALL:!aNULL:!eNULL:+RC4:@STRENGTH" (without
   quotes). Depending on the compatibility and security requirements, the list



[PATCH 2/2] DOC: ssl: Specify stronger example ciphers

2019-02-04 Thread Bertrand Jacquin
Since TLS ciphers are not well understood, it is very common that parameters
from documentation are used as-is. Since RC4 should not be used anymore,
I believe it is wiser to show examples including stronger ciphers to
avoid deploying unsafe configuration in the wild.

"ALL" is also to be avoided since it contains a lot of deprecated,
insecure ciphers, and garbage that is not applicable in the haproxy
context.
---
 doc/configuration.txt | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index f7e1339a3e9b..14951d662f97 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1030,7 +1030,7 @@ ssl-default-bind-ciphers 
   that are negotiated during the SSL/TLS handshake up to TLSv1.2 for all
   "bind" lines which do not explicitly define theirs. The format of the string
   is defined in "man 1 ciphers" from OpenSSL man pages, and can be for instance
-  a string such as "AES:ALL:!aNULL:!eNULL:+RC4:@STRENGTH" (without quotes). For
+  a string such as "ECDHE+AES:CHACHA20:!SHA1:@STRENGTH" (without quotes). For
   TLSv1.3 cipher configuration, please check the 
"ssl-default-bind-ciphersuites"
   keyword. Please check the "bind" keyword for more information.
 
@@ -10895,7 +10895,7 @@ ciphers 
   the string describing the list of cipher algorithms ("cipher suite") that are
   negotiated during the SSL/TLS handshake up to TLSv1.2. The format of the
   string is defined in "man 1 ciphers" from OpenSSL man pages, and can be for
-  instance a string such as "AES:ALL:!aNULL:!eNULL:+RC4:@STRENGTH" (without
+  instance a string such as "ECDHE+AES:CHACHA20:!SHA1:@STRENGTH" (without
   quotes). Depending on the compatibility and security requirements, the list
   of suitable ciphers depends on a variety of variables. For background
   information and recommendations see e.g.
@@ -11665,8 +11665,8 @@ ciphers 
   servers on the local network, it is common to see a weaker set of algorithms
   than what is used over the internet. Doing so reduces CPU usage on both the
   server and haproxy while still keeping it compatible with deployed software.
-  Some algorithms such as RC4-SHA1 are reasonably cheap. If no security at all
-  is needed and just connectivity, using DES can be appropriate.
+  Some algorithms such as ECDHE+CHACHA20 are reasonably cheap. If no security
+  at all is needed and just connectivity, using DES can be appropriate.
 
 ciphersuites 
   This setting is only available when support for OpenSSL was built in and



[PATCH] DOC: Fix typos in lua documentation

2018-09-13 Thread Bertrand Jacquin

Hi,

Please find attached a patch to fix some typos in the lua documentation.

Cheers,

--
Bertrand

From a8c24069246fc6ba6e9a956963158596ec4a0a3b Mon Sep 17 00:00:00 2001
From: Bertrand Jacquin 
Date: Mon, 10 Sep 2018 21:26:07 +0100
Subject: [PATCH] DOC: Fix typos in lua documentation

---
 doc/lua-api/index.rst  | 114 -
 src/filters.c  |   2 +-
 src/stream_interface.c |   2 +-
 3 files changed, 59 insertions(+), 59 deletions(-)

diff --git a/doc/lua-api/index.rst b/doc/lua-api/index.rst
index 0c79766eb939..553cf521edd3 100644
--- a/doc/lua-api/index.rst
+++ b/doc/lua-api/index.rst
@@ -209,7 +209,7 @@ Core class
   configuration file, on the default syslog server if it is configured and on
   the stderr if it is allowed.
 
-  :param integer loglevel: Is the log level asociated with the message. It is a
+  :param integer loglevel: Is the log level associated with the message. It is a
 number between 0 and 7.
   :param string msg: The log content.
   :see: :js:attr:`core.emerg`, :js:attr:`core.alert`, :js:attr:`core.crit`,
@@ -314,8 +314,8 @@ Core class
   Returns HAProxy core informations. We can found information like the uptime,
   the pid, memory pool usage, tasks number, ...
 
-  These information are also returned by the management sockat via the command
-  "show info". See the management socket documentation fpor more information
+  These information are also returned by the management socket via the command
+  "show info". See the management socket documentation for more information
   about the content of these variables.
 
   :returns: an array of values.
@@ -325,7 +325,7 @@ Core class
   **context**: body, init, task, action
 
   This function returns the current time. The time returned is fixed by the
-  HAProxy core and assures than the hour will be monotnic and that the system
+  HAProxy core and assures than the hour will be monotonic and that the system
   call 'gettimeofday' will not be called too. The time is refreshed between each
   Lua execution or resume, so two consecutive call to the function "now" will
   probably returns the same result.
@@ -338,7 +338,7 @@ Core class
 
   **context**: body, init, task, action
 
-  This function take a string repsenting http date, and returns an integer
+  This function take a string representing http date, and returns an integer
   containing the corresponding date with a epoch format. A valid http date
   me respect the format IMF, RFC850 or ASCTIME.
 
@@ -353,7 +353,7 @@ Core class
 
   **context**: body, init, task, action
 
-  This function take a string repsenting IMF date, and returns an integer
+  This function take a string representing IMF date, and returns an integer
   containing the corresponding date with a epoch format.
 
   :param string date: a date IMF formatted
@@ -371,7 +371,7 @@ Core class
 
   **context**: body, init, task, action
 
-  This function take a string repsenting RFC850 date, and returns an integer
+  This function take a string representing RFC850 date, and returns an integer
   containing the corresponding date with a epoch format.
 
   :param string date: a date RFC859 formatted
@@ -389,7 +389,7 @@ Core class
 
   **context**: body, init, task, action
 
-  This function take a string repsenting ASCTIME date, and returns an integer
+  This function take a string representing ASCTIME date, and returns an integer
   containing the corresponding date with a epoch format.
 
   :param string date: a date ASCTIME formatted
@@ -407,7 +407,7 @@ Core class
 
   **context**: body, init, task, action
 
-  This function take a string repsenting http date, and returns an integer
+  This function take a string representing http date, and returns an integer
   containing the corresponding date with a epoch format.
 
   :param string date: a date http-date formatted
@@ -416,7 +416,7 @@ Core class
 
   **context**: body, init, task, action
 
-  This function take a string repsenting http date, and returns an integer
+  This function take a string representing http date, and returns an integer
   containing the corresponding date with a epoch format.
 
   :param string date: a date http-date formatted
@@ -433,7 +433,7 @@ Core class
 
   **context**: body, init, task, action, sample-fetch, converter
 
-  proxies is a table containing the list of all proxies declared in the
+  Proxies is a table containing the list of all proxies declared in the
   configuration file. The table is indexed by the proxy name, and each entry
   of the proxies table is an object of type :ref:`proxy_class`.
 
@@ -466,9 +466,9 @@ Core class
   * **txn** (:ref:`txn_class`): this is a TXN object used for manipulating the
 current request or TCP stream.
 
-  * **argX**: this is argument provided throught the HAProxy configuration file.
+  * **argX**: this is argument provided through the HAProxy configuration file.
 
-  Here, an exemple of action regis

Re: [ANNOUNCE] haproxy-1.8.13

2018-07-31 Thread Bertrand Jacquin

On 31/07/2018 18:26, Bertrand Jacquin wrote:

Hi Willy,

On 30/07/2018 19:55, Willy Tarreau wrote:

On Mon, Jul 30, 2018 at 07:41:33PM +0200, Tim Düsterhus wrote:

Willy,

Am 30.07.2018 um 18:05 schrieb Willy Tarreau:
> A small update happened to the download directory, the sha256 of the
> tar.gz files are now present in addition to the (quite old) md5 ones.
> We may start to think about phasing md5 signatures out, for example
> after 1.9 is released.

I'd even like to see PGP signatures, like you already do for the git
tags (but not the Tarballs). But this is a greater change than just
updating the checksums :-)


I know and I've already thought about it. But I personally refuse to 
store
my PGP key on any exposed machine. Right now in order to tag, I have 
to
SSH into an isolated machine, run "git pull --tags", create-release, 
and

"git push --tags". Then I upload the release.

What I don't like with PGP on an exposed machine is that it reduces 
the

size of your 4096-bit key to the size of your passphrase (which most
often contains much less than the ~700 characters it would need to be
as large), and also increases your ability to get fooled into entering
it. Some would call me paranoid, but I don't think I am, I'm just 
trying
to keep a balanced level of security, knowing that the global one is 
not

better than the weakest point.

If I wanted to sign the images, it would require to find a different
release method and would significantly complicate the procedure.


I know old farts don't change, but for my two cents, newer versions of
OpenSSH (>= 6.7) and GnuPG (>= 2.1.1) allow you to forward the GnuPG agent
over SSH with reduced capabilities, to limit the attack surface you are
mentioning. More details are available at
https://wiki.gnupg.org/AgentForwarding


Also, old farts press the send button too quickly.

The benefit of forwarding the gpg agent is that you don't need to copy
your private key to any remote machine; the gpg agent running on the
machine forwarding it will perform all the crypto operations. With an ssh
config alias, you could enable agent forwarding only when you want to
create a release.


Combined with a smartcard, no computer at all would be able to access
the private key material.


Cheers

--
Bertrand



Re: [ANNOUNCE] haproxy-1.8.13

2018-07-31 Thread Bertrand Jacquin

Hi Willy,

On 30/07/2018 19:55, Willy Tarreau wrote:

On Mon, Jul 30, 2018 at 07:41:33PM +0200, Tim Düsterhus wrote:

Willy,

Am 30.07.2018 um 18:05 schrieb Willy Tarreau:
> A small update happened to the download directory, the sha256 of the
> tar.gz files are now present in addition to the (quite old) md5 ones.
> We may start to think about phasing md5 signatures out, for example
> after 1.9 is released.

I'd even like to see PGP signatures, like you already do for the git
tags (but not the Tarballs). But this is a greater change than just
updating the checksums :-)


I know and I've already thought about it. But I personally refuse to 
store

my PGP key on any exposed machine. Right now in order to tag, I have to
SSH into an isolated machine, run "git pull --tags", create-release, 
and

"git push --tags". Then I upload the release.

What I don't like with PGP on an exposed machine is that it reduces the
size of your 4096-bit key to the size of your passphrase (which most
often contains much less than the ~700 characters it would need to be
as large), and also increases your ability to get fooled into entering
it. Some would call me paranoid, but I don't think I am, I'm just 
trying
to keep a balanced level of security, knowing that the global one is 
not

better than the weakest point.

If I wanted to sign the images, it would require to find a different
release method and would significantly complicate the procedure.


I know old farts don't change, but for my two cents, newer versions of
OpenSSH (>= 6.7) and GnuPG (>= 2.1.1) allow you to forward the GnuPG agent
over SSH with reduced capabilities, to limit the attack surface you are
mentioning. More details are available at
https://wiki.gnupg.org/AgentForwarding


Cheers

--
Bertrand



server source behaviour

2018-02-05 Thread Bertrand Jacquin
Hi,

I am not sure if the following configuration should be working, but it
looks like it does, while the documentation does not specify whether I
could be using it. Having a quick look at the source, it does not seem
obvious either. Anyway, I'm just curious to know if the following should
be working:

  backend bk_ipv6
mode http
server upc fe80::5a23:8cff:fe9f:fa9c:80 source * interface wlan0

I used 'source *' here since I was not able to specify an IPv6
link-local address to use.

Cheers,
Bertrand

-- 
Bertrand


signature.asc
Description: Digital signature


Re: [PATCH] BUG: NetScaler CIP handling is incorrect

2017-12-21 Thread Bertrand Jacquin


On 21/12/17 13:28, Willy Tarreau wrote:
> On Thu, Dec 21, 2017 at 10:46:17AM +0000, Bertrand Jacquin wrote:
>> I'm all good with backporting this in 1.8. Feel free to.
> 
> Great, now merged. Do not hesitate to report back any issues you
> would notice on your infrastructure.

Thanks Willy, sure I will!

-- 
Bertrand




signature.asc
Description: OpenPGP digital signature

Re: [PATCH] BUG: NetScaler CIP handling is incorrect

2017-12-21 Thread Bertrand Jacquin
Hi Willy,

On 21/12/17 10:08, Willy Tarreau wrote:
> On Thu, Dec 21, 2017 at 11:05:30AM +0100, Andreas Mahnke wrote:
>> Hi Willy,
>>
>> The support of the standard protocol in 1.8 would be nice, because we are
>> planning to migrate to haproxy 1.8 from our self - patched 1.7 instances.
> 
> OK. Bertrand, if you don't scream quickly, I'll pick your patches in 1.8 :-)

I'm all good with backporting this in 1.8. Feel free to.

Cheers,
Bertrand

-- 
Bertrand



signature.asc
Description: OpenPGP digital signature

Re: [PATCH] BUG: NetScaler CIP handling is incorrect

2017-12-19 Thread Bertrand Jacquin
Hi Andreas and Willy,

Please find attached a patch serie adding support for both legacy and
standard CIP protocol while keeping compatibility with current
configuration format.

This also fixes numerous bugs spotted during this dev cycle and present
since the first version of the patch.

This serie applies cleanly on v1.9-dev0-76-g789691778fde but also on
v1.8.1-20-gdd8ea125889d while I only tested it on v1.9.

Cheers,
Bertrand

On 11/12/17 15:04, Bertrand Jacquin wrote:
> Hi Andreas,
> 
> I got this really side tracked, my apology. Let me take a look at that
> this evening again. Some corps need to be unburied.
> 
> I'm afraid the patch, as is, will break compatibility with other version
> of the CIP protocol, I'd like haproxy to support both of them.
> 
> Cheers,
> Bertrand
> 
> On 07/12/17 13:41, Andreas Mahnke wrote:
>> Hello everybody,
>>
>> Is there an update regarding the merging of the patch? We are thinking
>> to switch to version 1.8 in the near future and it would be nice to have
>> the patch merged, so that we do not need to merge each update on our own.
>>
>> Best regards,
>> Andreas
>>
>> On Fri, Jul 7, 2017 at 2:43 PM, Willy Tarreau <w...@1wt.eu> wrote:
>>
>> On Fri, Jul 07, 2017 at 02:36:07PM +0200, Andreas Mahnke wrote:
>> > Hi Willy,
>> >
>> > I had some direct mail conversations with him and he wanted to create a
>> > patch based in the findings.
>> > In the meantime we got it working using the patch I provided, 
>> therefore I
>> > sent it yesterday so that it gets integrated and we do not need to 
>> patch
>> > haproxy on our end everytime a new version comes out.
>> >
>> > I wrote him yesterday that the patch was sent by me, but he seems to 
>> be out
>> > of office until monday - so maybe he will get back to us then.
>>
>> Yep I noticed the out-of-office notification as well. Let's wait for
>> him to
>> get back. I'm pretty fine with merging your patch, I just want to be
>> certain
>> that everything is OK on his side as well, especially before
>> backporting it
>> (since it's a bug fix).
>>
>> Thanks!
>> Willy
>>
>>
> 

-- 
Bertrand
Payments Infrastructure Engineering, Amazon
From 3245e08747ed3c72fa429365010450c2ca04caac Mon Sep 17 00:00:00 2001
From: Bertrand Jacquin <jacqu...@amazon.com>
Date: Tue, 12 Dec 2017 01:17:23 +
Subject: [PATCH 8/8] MEDIUM: netscaler: add support for standard NetScaler CIP
 protocol

It looks like two versions of the protocol exist, as reported by
Andreas Mahnke. This patch adds support for both the legacy and standard CIP
protocols according to NetScaler specifications.
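
For illustration, here is a minimal sketch of the standard CIP framing
described in the documentation hunk below. Field names are made up for
this example and the sizes come from the description, not from HAProxy's
actual parser:

    #include <stdint.h>

    /* Standard (NetScaler >= 10.5) CIP header: fields are in network
     * byte order on the wire, and are followed by the client's IP
     * header and then its TCP header. */
    struct cip_std_hdr {
            uint32_t magic;    /* agreed-upon CIP magic number   */
            uint32_t cip_len;  /* total length of the CIP header */
            uint16_t type;     /* always set to 1                */
            uint16_t hdr_len;  /* length of the remaining data   */
    };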
---
 doc/netscaler-client-ip-insertion-protocol.txt | 28 ++-
 src/connection.c   | 38 --
 2 files changed, 50 insertions(+), 16 deletions(-)

diff --git a/doc/netscaler-client-ip-insertion-protocol.txt b/doc/netscaler-client-ip-insertion-protocol.txt
index 6f77f6522c7f..559d98a82a92 100644
--- a/doc/netscaler-client-ip-insertion-protocol.txt
+++ b/doc/netscaler-client-ip-insertion-protocol.txt
@@ -10,7 +10,9 @@ Specifications and documentations from NetScaler:
   https://www.citrix.com/blogs/2016/04/25/how-to-enable-client-ip-in-tcpip-option-of-netscaler/
 
 When CIP is enabled on the NetScaler, then a TCP packet is inserted just after
-the TCP handshake. This is composed as:
+the TCP handshake. Two versions of the CIP extension exist.
+
+Legacy (NetScaler < 10.5)
 
   - CIP magic number : 4 bytes
 Both sender and receiver have to agree on a magic number so that
@@ -27,3 +29,27 @@ the TCP handshake. This is composed as:
   - TCP header : >= 20 bytes
 Contains the header of the last TCP packet sent by the client during TCP
 handshake.
+
+Standard (NetScaler >= 10.5)
+
+  - CIP magic number : 4 bytes
+Both sender and receiver have to agree on a magic number so that
+they both handle the incoming data as a NetScaler Client IP insertion
+packet.
+
+  - CIP length : 4 bytes
+Defines the total length on the CIP header.
+
+  - CIP type: 2 bytes
+Always set to 1.
+
+  - Header length : 2 bytes
+Defines the length on the remaining data.
+
+  - IP header : >= 20 bytes if IPv4, 40 bytes if IPv6
+Contains the header of the last IP packet sent by the client during TCP
+handshake.
+
+  - TCP header : >= 20 bytes
+Contains the header of the last TCP packet sent by the client during TCP
+handshake.
diff --git a/src/connection.c b/src/connection.c
index 58bf4a5f85f5..0f8acb02dbdb 100644
--- a/src/connection.c
+++ b/src/connection.c
@@ -678,14 +678,8 @@ in

Re: [PATCH] BUG: NetScaler CIP handling is incorrect

2017-12-11 Thread Bertrand Jacquin
Hi Andreas,

I got this really side-tracked, my apologies. Let me take a look at it
again this evening. Some corpses need to be unburied.

I'm afraid the patch, as is, will break compatibility with the other
version of the CIP protocol; I'd like haproxy to support both of them.

Cheers,
Bertrand

On 07/12/17 13:41, Andreas Mahnke wrote:
> Hello everybody,
> 
> Is there an update regarding the merging of the patch? We are thinking
> to switch to version 1.8 in the near future and it would be nice to have
> the patch merged, so that we do not need to merge each update on our own.
> 
> Best regards,
> Andreas
> 
> On Fri, Jul 7, 2017 at 2:43 PM, Willy Tarreau  > wrote:
> 
> On Fri, Jul 07, 2017 at 02:36:07PM +0200, Andreas Mahnke wrote:
> > Hi Willy,
> >
> > I had some direct mail conversations with him and he wanted to create a
> > patch based in the findings.
> > In the meantime we got it working using the patch I provided, therefore 
> I
> > sent it yesterday so that it gets integrated and we do not need to patch
> > haproxy on our end everytime a new version comes out.
> >
> > I wrote him yesterday that the patch was sent by me, but he seems to be 
> out
> > of office until monday - so maybe he will get back to us then.
> 
> Yep I noticed the out-of-office notification as well. Let's wait for
> him to
> get back. I'm pretty fine with merging your patch, I just want to be
> certain
> that everything is OK on his side as well, especially before
> backporting it
> (since it's a bug fix).
> 
> Thanks!
> Willy
> 
> 

-- 
Bertrand
Payments Infrastructure Engineering, Amazon



signature.asc
Description: OpenPGP digital signature

Re: NetScaler CIP analysing code seems to be incorrect

2017-04-04 Thread Bertrand Jacquin
Hi Andreas,

Sorry for the long delay, I was out for a while and missed this email.

To be honest with you, I was expecting to get some weirdness like this.
I'm reaching out to Citrix on a side channel to see if different specs for
the protocol exist so I can make the appropriate changes in haproxy to
support both (or more) versions of the protocol.

Cheers,
Bertrand

On 03/03/17 16:57, Andreas Mahnke wrote:
> Hello,
> 
> as requested at discource.haproxy.org 
> (http://discourse.haproxy.org/t/netscaler-cip-analysing-code-seems-to-be-incorrect/1043)
> I hereby report the issue below to the HAProxy mailling list cc-ing the
> author of the netscaler-cip patch.
> 
> we are trying to run several instances of HAProxy (v1.7.3) behind a
> NetScaler VPX (Version 11.0 64.34) and want to use the NetScaler CIP
> feature so that the original IP of the client can be passed through HAProxy.
> 
> The haproxy.cfg listener looks like this:
> 
> listen weblistener
> bind *:80 accept-netscaler-cip 4711
> tcp-request connection expect-netscaler-cip layer4
> mode tcp
> option tcplog
> server s1 192.168.0.123:8000  check
> 
> The NetScaler sends the CIP Packet as specified here:
> https://support.citrix.com/article/CTX205670 and the Packet looks as
> expected after taking a tcpdump and viewing it with wireshark. But the
> analysis in HAProxy fails with 'CO_ER_CIP_BAD_PROTO' at line 784. (IP
> Version not v4/v6).
> 
> An exemplary CIP Header arriving in HAPRoxy looks like this:
> 
> 0x00 0x00 0x12 0x67 0x00 0x00 0x00 0x34 0x00 0x01 0x00 0x28 0x45
> 0x00 0x00 0x28
> 0x5c 0xef 0x40 0x00 0x3e 0x06 0x39 0x08 0xbe 0x64 0xdc 0x99 0x0a
> 0x70 0x01 0x6b
> 0xe3 0xfc 0x00 0x50 0xba 0x67 0x50 0x9e 0x9e 0xcc 0x0e 0xcd 0x50
> 0x10 0x72 0x10
> 0xf9 0xfe 0x00 0x00
> 
> where the beginning parts are:
> 
> Magic Number:0x00 0x00 0x12 0x67
> CIP Length:  0x00 0x00 0x00 0x34
> Type:  0x00 0x01
> CIP Header size:   0x00 0x28
> IP Version:  0x45 0x00 0x00 0x28
> 
> Based on this Header and the specificaton from citrix, the code part in
> HAProxy which analyses, the CIP seems to be incorrect in 2 places:
> 
>   - Line 711
> The "IP Version" bytes start at index 12 as specified by Citrix.
> Hence the correct increment here would be 12 (instead of 8) in our eyes.
> We patched the code for testing purposes and with the increment of
> 12 the IP Version analysis works as expected and also the source ip
> retrieval worked fine.
> 
>   - Line 788
> The line pointer is incremented by the length of the CIP, hence the
> pointer should be decremented by the amout used in item 1 (8 or 12,
> depending what is correct)
> 
> Does anyone has a deeper knowledge of NetScaler CIP and can review our
> findings in order to determine the cause of the problem? Maybe the
> version of our NetScaler is to new or old?
> 
> Best regards,
> mahnkong

-- 
Bertrand
Payments Infrastructure Engineering, Amazon



signature.asc
Description: OpenPGP digital signature

Re: [ANNOUNCE] haproxy-1.7.1

2016-12-15 Thread Bertrand Jacquin

On 15/12/2016 19:03, Lukas Tribus wrote:

Hi Igor,


Am 14.12.2016 um 20:47 schrieb Igor Pav:

Hi Lukas, in fact, openssl already gets early TLS 1.3 adoption in dev,
will release in 1.1.1, and BoringSSL supports TLSv1.3 already.


That's nice, and in fact since 1.1.1 will be API compatible with 1.1.0
[1] *and* support
TLS 1.3 (or whatever we end up calling it [2]), this shouldn't require
any changes in
haproxy at all.


haproxy might be able to use it as soon as the underlying lib supports
it. But it might be good to update the doc and configuration keywords,
like no-tls13, force-tls13 and so on.


--
Bertrand



Re: [PATCH 1/8] MINOR: tcp: Store upstream proxy TCP informations before overwrite

2016-11-13 Thread Bertrand Jacquin
On Sun, Nov 13, 2016 at 07:48:46PM +0100, Willy Tarreau wrote:
> Hi Bertrand,
> 
> On Sun, Nov 13, 2016 at 04:37:07PM +, Bertrand Jacquin wrote:
> > This can be useful in order to extend ACL and log format with upstream
> > proxy information when accept-proxy or accept-netscaler-cip is being
> > used
> 
> Thanks for these patches!
> 
> Well, I understand that it can be useful, however I'm having an issue
> with doubling the size of the connection struct, particularly due to
> the fact that few people use the proxy protocol and that these 256
> extra bytes are never used/usable on the other side. For 100k conns,
> that's 51 extra MB of memory that are used. The most annoying here is
> that we know that most users only have IPv4 and would be fine with
> 8-16 bytes per connection).

This makes total sense indeed.

> I'm wondering what could be done to address this. I've been thinking
> about having a pool of available address blocks which would be usable
> at several places (we also need to have some in the stream interface
> to fix another problem related to the proxy mode). We could even
> imagine later having multiple pools depending on the address size if
> we want to save more resources.
> 
> I'm not against merging this patchset as-is, but at least I'd like
> to be sure that we find an elegant long-term solution to this. If
> it's not too hard to implement we could even implement it later and
> backport it as a resource usage fix.
> 
> What do you think ?

I personally have no urgency in getting this merged, so if you prefer to
get the right thing done from day one, I really don't mind delaying this
feature. I understand your proposal; I need to dive deep into this since I
believe the change will be pretty invasive, and this would take quite some
time. Maybe we can #ifdef this feature until we have a proper solution,
so at least some people might be able to use it while the enhancement of
the address structures happens. Or, more easily, I may be able to replace
the struct proxy_addr with a pointer to a struct proxy_addr that would
be initialized only if one of the proxy protocols is in use.
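
For illustration, a rough sketch of what that pointer-based layout could
look like (purely hypothetical, names made up, not an actual patch):

    #include <sys/socket.h>

    /* Instead of embedding two sockaddr_storage (256 extra bytes per
     * connection whether or not a proxy protocol is used), only keep a
     * pointer that is allocated when accept-proxy or accept-netscaler-cip
     * actually provides upstream proxy addresses. */
    struct proxy_addrs {
            struct sockaddr_storage from; /* upstream proxy source address         */
            struct sockaddr_storage to;   /* address reached by the upstream proxy */
    };

    /* in struct connection, the two embedded fields would become:
     *     struct proxy_addrs *proxy_addr;   (NULL when unused)
     */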

Cheers

-- 
Bertrand


signature.asc
Description: Digital signature


[PATCH 3/8] MINOR: log: Add upstream proxy keyword

2016-11-13 Thread Bertrand Jacquin
When accept-proxy or accept-netscaler-cip is used, this gives the
ability to log the upstream proxy source IP and port.
---
 doc/configuration.txt |  4 
 include/types/log.h   |  4 
 src/log.c | 66 +++
 3 files changed, 74 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index a8036abf9f22..6d92a4530749 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15328,6 +15328,10 @@ Please refer to the table below for currently defined 
variables :
   |   | %bq  | backend_queue | numeric |
   |   | %ci  | client_ip (accepted address)  | IP  |
   |   | %cp  | client_port   (accepted address)  | numeric |
+  |   | %psi | proxy_source_ip   | IP  |
+  |   | %psp | proxy_source_port | numeric |
+  |   | %pdi | proxy_destination_ip  | IP  |
+  |   | %pdp | proxy_destination_port| numeric |
   |   | %f   | frontend_name | string  |
   |   | %fc  | feconn (frontend concurrent connections)  | numeric |
   |   | %fi  | frontend_ip  (accepting address)  | IP  |
diff --git a/include/types/log.h b/include/types/log.h
index 0fdb77577103..4ca9197b8ba6 100644
--- a/include/types/log.h
+++ b/include/types/log.h
@@ -55,6 +55,10 @@ enum {
LOG_FMT_GLOBAL,
LOG_FMT_CLIENTIP,
LOG_FMT_CLIENTPORT,
+   LOG_FMT_PROXY_SRC_IP,
+   LOG_FMT_PROXY_SRC_PORT,
+   LOG_FMT_PROXY_DST_IP,
+   LOG_FMT_PROXY_DST_PORT,
LOG_FMT_BACKENDIP,
LOG_FMT_BACKENDPORT,
LOG_FMT_FRONTENDIP,
diff --git a/src/log.c b/src/log.c
index 12329888130e..5d21a12926ff 100644
--- a/src/log.c
+++ b/src/log.c
@@ -145,6 +145,10 @@ static const struct logformat_type logformat_keywords[] = {
{ "bq", LOG_FMT_BCKQUEUE, PR_MODE_TCP, LW_BYTES, NULL }, /* 
backend_queue */
{ "ci", LOG_FMT_CLIENTIP, PR_MODE_TCP, LW_CLIP, NULL },  /* client ip */
{ "cp", LOG_FMT_CLIENTPORT, PR_MODE_TCP, LW_CLIP, NULL }, /* client 
port */
+   { "psi", LOG_FMT_PROXY_SRC_IP, PR_MODE_TCP, LW_CLIP, NULL },  /* proxy 
source ip */
+   { "psp", LOG_FMT_PROXY_SRC_PORT, PR_MODE_TCP, LW_CLIP, NULL }, /* proxy 
source port */
+   { "pdi", LOG_FMT_PROXY_DST_IP, PR_MODE_TCP, LW_CLIP, NULL },  /* proxy 
destination ip */
+   { "pdp", LOG_FMT_PROXY_DST_PORT, PR_MODE_TCP, LW_CLIP, NULL }, /* proxy 
destination port */
{ "f", LOG_FMT_FRONTEND, PR_MODE_TCP, LW_INIT, NULL },  /* frontend */
{ "fc", LOG_FMT_FECONN, PR_MODE_TCP, LW_BYTES, NULL },   /* feconn */
{ "fi", LOG_FMT_FRONTENDIP, PR_MODE_TCP, LW_FRTIP, NULL }, /* frontend 
ip */
@@ -1399,6 +1403,68 @@ int build_logline(struct stream *s, char *dst, size_t 
maxsize, struct list *list
last_isspace = 0;
break;
 
+   case LOG_FMT_PROXY_SRC_IP:  // %psi
+   conn = objt_conn(sess->origin);
+   if (conn)
+   ret = lf_ip(tmplog, (struct sockaddr 
*)>proxy_addr.from, dst + maxsize - tmplog, tmp);
+   else
+   ret = lf_text_len(tmplog, NULL, 0, dst 
+ maxsize - tmplog, tmp);
+   if (ret == NULL)
+   goto out;
+   tmplog = ret;
+   last_isspace = 0;
+   break;
+
+   case LOG_FMT_PROXY_SRC_PORT:  // %psp
+   conn = objt_conn(sess->origin);
+   if (conn) {
+   if (conn->proxy_addr.from.ss_family == 
AF_UNIX) {
+   ret = 
ltoa_o(sess->listener->luid, tmplog, dst + maxsize - tmplog);
+   } else {
+   ret = lf_port(tmplog, (struct 
sockaddr *)>proxy_addr.from,
+ dst + maxsize - 
tmplog, tmp);
+   }
+   }
+   else
+   ret = lf_text_len(tmplog, NULL, 0, dst 
+ maxsize - tmplog, tmp);
+
+   if (ret == NULL)
+   goto out;
+   tmplog = ret;
+   last_isspace = 0;
+   break;
+
+   case LOG_FMT_PROXY_DST_IP:  // %pdi
+   conn = objt_conn(sess->origin);
+   if (conn)
+

[PATCH 4/8] MINOR: ssl: Remove goto after return dead code

2016-11-13 Thread Bertrand Jacquin
This code can never be reached.
---
 src/payload.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/src/payload.c b/src/payload.c
index 3a534c377925..a02a86966051 100644
--- a/src/payload.c
+++ b/src/payload.c
@@ -181,9 +181,6 @@ smp_fetch_req_ssl_st_ext(const struct arg *args, struct 
sample *smp, const char
smp->flags = SMP_F_VOLATILE;
return 1;
 
-   /* server name not found */
-   goto not_ssl_hello;
-
  too_short:
smp->flags = SMP_F_MAY_CHANGE;
 



[PATCH 7/8] BUG/MEDIUM: ssl: Store certificate filename in a variable

2016-11-13 Thread Bertrand Jacquin
Before this change, trash is being used to create the certificate filename
to read in case a Multi-Cert bundle is in use. But then ssl_sock_load_ocsp()
modifies trash, leading to potentially wrong information given in a later
error message.

This also blocks any further use of the certificate filename for other
purposes, like an ongoing patch to support Certificate Transparency handling
in Multi-Cert bundles.
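
To illustrate the class of problem being fixed, here is a small standalone
sketch (hypothetical names, not the actual HAProxy code) where a filename
built in a shared scratch buffer gets clobbered before the error message
is printed:

    #include <stdio.h>

    static char trash[256];

    /* stands in for ssl_sock_load_ocsp(), which also writes into trash */
    static void helper_that_reuses_trash(void)
    {
            snprintf(trash, sizeof(trash), "something unrelated");
    }

    int main(void)
    {
            snprintf(trash, sizeof(trash), "%s.%s", "cert", "rsa"); /* filename built in trash */
            helper_that_reuses_trash();                             /* clobbers it             */
            printf("error loading '%s'\n", trash);                  /* wrong name printed      */
            return 0;
    }

Copying the name into a local variable (cur_file in the patch below)
before calling the helper avoids this.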
---
 src/ssl_sock.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index ae6d19f2f69c..453647bd7e6c 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -2137,6 +2137,7 @@ static int ssl_sock_load_multi_cert(const char *path, 
struct bind_conf *bind_con
node = ebmb_first(_keytypes_map);
while (node) {
SSL_CTX *cur_ctx;
+   char cur_file[MAXPATHLEN+1];
 
str = (char *)container_of(node, struct sni_keytype, 
name)->name.key;
i = container_of(node, struct sni_keytype, name)->keytypes;
@@ -2156,8 +2157,8 @@ static int ssl_sock_load_multi_cert(const char *path, 
struct bind_conf *bind_con
for (n = 0; n < SSL_SOCK_NUM_KEYTYPES; n++) {
if (i & (1<

[PATCH 2/8] MINOR: tcp: Add upstream proxy information fetch

2016-11-13 Thread Bertrand Jacquin
When accept-proxy or accept-netscaler-cip is used, this gives the
ability to perform actions based on the TCP connection between the upstream
proxy and haproxy instead of the connection between the client and the
upstream proxy.
---
 doc/configuration.txt | 28 
 src/proto_tcp.c   | 93 +++
 2 files changed, 121 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index ec851d80d66c..a8036abf9f22 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -13164,6 +13164,34 @@ fe_id : integer
   backends to check from which backend it was called, or to stick all users
   coming via a same frontend to the same server.
 
+proxy_dst : ip
+  This is the destination IP address of TCP connection from the upstream
+  proxy if "accept-proxy" or "accept-netscaler-cip" bind directive is used.
+
+proxy_dst_port : integer
+  Returns an integer value corresponding to the destination TCP port of the
+  upstream proxy if "accept-proxy" or "accept-netscaler-cip" bind directive is
+  used.
+
+proxy_src : ip
+  This is the source IP address of TCP connection from the upstream
+  proxy if "accept-proxy" or "accept-netscaler-cip" bind directive is used.
+  It is of type IP and works on both IPv4 and IPv6 tables. On IPv6 tables, IPv4
+  addresses are mapped to their IPv6 equivalent, according to RFC 4291. Note
+  that it is the TCP-level source address which is used, and not the address of
+  a client behind a proxy.
+
+  Example:
+   # Allow connection only from known upstream proxy
+   acl known-proxy proxy_src 198.51.100.29
+   http-request deny if !proxy_src
+
+proxy_src_port : integer
+  Returns an integer value corresponding to the TCP source port of the upstream
+  proxy if "accept-proxy" or "accept-netscaler-cip" bind directive is used.
+  Usage of this function is very limited as modern protocols do not care much
+  about source ports nowadays.
+
 sc_bytes_in_rate([,]) : integer
 sc0_bytes_in_rate([]) : integer
 sc1_bytes_in_rate([]) : integer
diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index 8b3d546fd1e8..dd3ec09abecf 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -2420,6 +2420,31 @@ int smp_fetch_src(const struct arg *args, struct sample 
*smp, const char *kw, vo
return 1;
 }
 
+/* fetch the connection's proxy source IPv4/IPv6 address */
+int smp_fetch_proxy_src(const struct arg *args, struct sample *smp, const char 
*kw, void *private)
+{
+   struct connection *cli_conn = objt_conn(smp->sess->origin);
+
+   if (!cli_conn)
+   return 0;
+
+   switch (cli_conn->proxy_addr.from.ss_family) {
+   case AF_INET:
+   smp->data.u.ipv4 = ((struct sockaddr_in 
*)_conn->proxy_addr.from)->sin_addr;
+   smp->data.type = SMP_T_IPV4;
+   break;
+   case AF_INET6:
+   smp->data.u.ipv6 = ((struct sockaddr_in6 
*)_conn->proxy_addr.from)->sin6_addr;
+   smp->data.type = SMP_T_IPV6;
+   break;
+   default:
+   return 0;
+   }
+
+   smp->flags = 0;
+   return 1;
+}
+
 /* set temp integer to the connection's source port */
 static int
 smp_fetch_sport(const struct arg *args, struct sample *smp, const char *k, 
void *private)
@@ -2437,6 +2462,23 @@ smp_fetch_sport(const struct arg *args, struct sample 
*smp, const char *k, void
return 1;
 }
 
+/* fetch the connection's proxy source TCP port */
+static int
+smp_fetch_proxy_sport(const struct arg *args, struct sample *smp, const char 
*k, void *private)
+{
+   struct connection *cli_conn = objt_conn(smp->sess->origin);
+
+   if (!cli_conn)
+   return 0;
+
+   smp->data.type = SMP_T_SINT;
+   if (!(smp->data.u.sint = get_host_port(_conn->proxy_addr.from)))
+   return 0;
+
+   smp->flags = 0;
+   return 1;
+}
+
 /* fetch the connection's destination IPv4/IPv6 address */
 static int
 smp_fetch_dst(const struct arg *args, struct sample *smp, const char *kw, void 
*private)
@@ -2465,6 +2507,34 @@ smp_fetch_dst(const struct arg *args, struct sample 
*smp, const char *kw, void *
return 1;
 }
 
+/* fetch the connection's proxy destination IPv4/IPv6 address */
+static int
+smp_fetch_proxy_dst(const struct arg *args, struct sample *smp, const char 
*kw, void *private)
+{
+   struct connection *cli_conn = objt_conn(smp->sess->origin);
+
+   if (!cli_conn)
+   return 0;
+
+   conn_get_to_proxy_addr(cli_conn);
+
+   switch (cli_conn->proxy_addr.to.ss_family) {
+   case AF_INET:
+   smp->data.u.ipv4 = ((struct sockaddr_in 
*)_conn->proxy_addr.to)->sin_addr;
+   smp->data.type = SMP_T_IPV4;
+   break;
+   case AF_INET6:
+   smp->data.u.ipv6 = ((struct sockaddr_in6 
*)_conn->proxy_addr.to)->sin6_addr;
+   smp->data.type = SMP_T_IPV6;
+   break;
+   default:
+   return 

[PATCH 8/8] BUG/MINOR: ssl: Print correct filename when error occurs reading OCSP

2016-11-13 Thread Bertrand Jacquin
When Multi-Cert bundles are used, an error is thrown regarding the
certificate filename without including the certificate type extension.
---
 src/ssl_sock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 453647bd7e6c..ef03525fc514 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -2169,7 +2169,7 @@ static int ssl_sock_load_multi_cert(const char *path, 
struct bind_conf *bind_con
if (ssl_sock_load_ocsp(cur_ctx, 
cur_file) < 0) {
if (err)
memprintf(err, "%s 
'%s.ocsp' is present and activates OCSP but it is impossible to compute the 
OCSP certificate ID (maybe the issuer could not be found)'.\n",
- *err ? *err : 
"", path);
+ *err ? *err : 
"", cur_file);
SSL_CTX_free(cur_ctx);
rv = 1;
goto end;



[PATCH 5/8] CLEANUP: ssl: Fix bind keywords name in comments

2016-11-13 Thread Bertrand Jacquin
Along with a whitespace cleanup and a grammar typo
---
 src/ssl_sock.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index b14bb8a46e1b..ae6d19f2f69c 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -5144,8 +5144,7 @@ static int bind_parse_ca_sign_file(char **args, int 
cur_arg, struct proxy *px, s
return 0;
 }
 
-/* parse the ca-sign-pass bind keyword */
-
+/* parse the "ca-sign-pass" bind keyword */
 static int bind_parse_ca_sign_pass(char **args, int cur_arg, struct proxy *px, 
struct bind_conf *conf, char **err)
 {
if (!*args[cur_arg + 1]) {
@@ -5237,7 +5236,7 @@ static int bind_parse_crl_file(char **args, int cur_arg, 
struct proxy *px, struc
 #endif
 }
 
-/* parse the "ecdhe" bind keyword keywords */
+/* parse the "ecdhe" bind keyword keyword */
 static int bind_parse_ecdhe(char **args, int cur_arg, struct proxy *px, struct 
bind_conf *conf, char **err)
 {
 #if OPENSSL_VERSION_NUMBER < 0x0090800fL
@@ -5261,7 +5260,7 @@ static int bind_parse_ecdhe(char **args, int cur_arg, 
struct proxy *px, struct b
 #endif
 }
 
-/* parse the "crt_ignerr" and "ca_ignerr" bind keywords */
+/* parse the "crt-ignore-err" and "ca-ignore-err" bind keywords */
 static int bind_parse_ignore_err(char **args, int cur_arg, struct proxy *px, 
struct bind_conf *conf, char **err)
 {
int code;



[PATCH 6/8] DOC: ssl: Use correct wording for ca-sign-pass

2016-11-13 Thread Bertrand Jacquin
The doc references ca-sign-passphrase but the source code refers to
ca-sign-pass. Align the doc to reality.
---
 doc/configuration.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 6d92a4530749..24a222f3d1b9 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -10110,7 +10110,7 @@ ca-sign-file 
   setting when the dynamic generation of certificates is enabled. See
   'generate-certificates' for details.
 
-ca-sign-passphrase 
+ca-sign-pass 
   This setting is only available when support for OpenSSL was built in. It is
   the CA private key passphrase. This setting is optional and used only when
   the dynamic generation of certificates is enabled. See



[PATCH 1/8] MINOR: tcp: Store upstream proxy TCP informations before overwrite

2016-11-13 Thread Bertrand Jacquin
This can be useful in order to extend ACLs and the log format with upstream
proxy information when accept-proxy or accept-netscaler-cip is being
used.
---
 include/proto/connection.h | 32 
 include/types/connection.h |  9 -
 src/connection.c   | 42 ++
 src/session.c  |  4 ++--
 4 files changed, 84 insertions(+), 3 deletions(-)

diff --git a/include/proto/connection.h b/include/proto/connection.h
index ef078add2f20..86087950b421 100644
--- a/include/proto/connection.h
+++ b/include/proto/connection.h
@@ -524,6 +524,22 @@ static inline void conn_get_from_addr(struct connection 
*conn)
conn->flags |= CO_FL_ADDR_FROM_SET;
 }
 
+/* Retrieves the connection's proxy source address */
+static inline void conn_get_from_proxy_addr(struct connection *conn)
+{
+   if (conn->flags & CO_FL_PROXY_ADDR_FROM_SET)
+   return;
+
+   if (!conn_ctrl_ready(conn) || !conn->ctrl->get_src)
+   return;
+
+   if (conn->ctrl->get_src(conn->t.sock.fd, (struct sockaddr 
*)>proxy_addr.from,
+   sizeof(conn->proxy_addr.from),
+   obj_type(conn->target) != OBJ_TYPE_LISTENER) == 
-1)
+   return;
+   conn->flags |= CO_FL_PROXY_ADDR_FROM_SET;
+}
+
 /* Retrieves the connection's original destination address */
 static inline void conn_get_to_addr(struct connection *conn)
 {
@@ -540,6 +556,22 @@ static inline void conn_get_to_addr(struct connection 
*conn)
conn->flags |= CO_FL_ADDR_TO_SET;
 }
 
+/* Retrieves the connection's proxy destination address */
+static inline void conn_get_to_proxy_addr(struct connection *conn)
+{
+   if (conn->flags & CO_FL_PROXY_ADDR_TO_SET)
+   return;
+
+   if (!conn_ctrl_ready(conn) || !conn->ctrl->get_dst)
+   return;
+
+   if (conn->ctrl->get_dst(conn->t.sock.fd, (struct sockaddr 
*)&conn->proxy_addr.to,
+   sizeof(conn->proxy_addr.to),
+   obj_type(conn->target) != OBJ_TYPE_LISTENER) == 
-1)
+   return;
+   conn->flags |= CO_FL_PROXY_ADDR_TO_SET;
+}
+
 /* Attaches a connection to an owner and assigns a data layer */
 static inline void conn_attach(struct connection *conn, void *owner, const 
struct data_cb *data)
 {
diff --git a/include/types/connection.h b/include/types/connection.h
index beb9b898e2cd..0576778f62b1 100644
--- a/include/types/connection.h
+++ b/include/types/connection.h
@@ -126,7 +126,10 @@ enum {
/* This connection may not be shared between clients */
CO_FL_PRIVATE   = 0x1000,
 
-   /* unused : 0x2000, 0x4000 */
+   /* These flags are used to report whether the from/to proxy
+* addresses are set or not */
+   CO_FL_PROXY_ADDR_FROM_SET = 0x2000, /* proxy_addr.from is set */
+   CO_FL_PROXY_ADDR_TO_SET   = 0x4000, /* proxy_addr.to is set */
 
/* This last flag indicates that the transport layer is used (for 
instance
 * by logs) and must not be cleared yet. The last call to 
conn_xprt_close()
@@ -286,6 +289,10 @@ struct connection {
struct sockaddr_storage from;   /* client address, or address 
to spoof when connecting to the server */
struct sockaddr_storage to; /* address reached by the 
client, or address to connect to */
} addr; /* addresses of the remote side, client for producer and server 
for consumer */
+   struct {
+   struct sockaddr_storage from;   /* upstream proxy address */
+   struct sockaddr_storage to; /* address reached by upstream 
proxy */
+   } proxy_addr; /* Addresses of the upstream proxy if accept-proxy or 
accept-netscaler-cip is used */
 };
 
 /* proxy protocol v2 definitions */
diff --git a/src/connection.c b/src/connection.c
index 7a9f3913ca74..380dca1b5370 100644
--- a/src/connection.c
+++ b/src/connection.c
@@ -420,6 +420,13 @@ int conn_recv_proxy(struct connection *conn, int flag)
if (*line++ != '\n')
goto bad_header;
 
+   /* update the session's proxy addresses and mark them set */
+   memcpy ((struct sockaddr_in *)&conn->proxy_addr.from, (struct 
sockaddr_in *)&conn->addr.from,
+   sizeof(struct sockaddr_in));
+   memcpy ((struct sockaddr_in *)&conn->proxy_addr.to, (struct 
sockaddr_in *)&conn->addr.to,
+   sizeof(struct sockaddr_in));
+   conn->flags |= CO_FL_PROXY_ADDR_FROM_SET | 
CO_FL_PROXY_ADDR_TO_SET;
+
/* update the session's addresses and mark them set */
((struct sockaddr_in *)&conn->addr.from)->sin_family  = 
AF_INET;
((struct sockaddr_in *)&conn->addr.from)->sin_addr.s_addr = 
htonl(src3);
@@ -481,6 +488,13 @@ int conn_recv_proxy(struct connection *conn, int flag)
if (inet_pton(AF_INET6, dst_s, (void *)&((struct sockaddr_in6 *)&conn->addr.to)->sin6_addr) != 1)

Re: [PATCH] MINOR: build: Allow linking to device-atlas library file

2016-10-06 Thread Bertrand Jacquin
Hi,

On Thu, Oct 06, 2016 at 09:00:06AM +0100, David CARLIER wrote:
> Hi,
> 
> DEVICEATLAS_SRC instead of DEVICEATLAS_LIB might be more appropriated here
> ...
> +ifeq ($(DEVICEATLAS_LIB),)
> ...
> 
> Otherwise seems ok to me.

Indeed, you are right. You will find attached a new version of the
patch.
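
With DEVICEATLAS_SRC left unset, a build against a system-wide DeviceAtlas
install would then look roughly like this (an untested sketch; the include
path is an assumption):

  # json.o/dac.o are not rebuilt, haproxy links against the system libda (-lda)
  $ make TARGET=linux2628 USE_DEVICEATLAS=1 DEVICEATLAS_INC=/usr/include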

Cheers,
Bertrand

> Cheers.
> 
> On 6 October 2016 at 00:39, Bertrand Jacquin <bertr...@jacquin.bzh> wrote:
> > DeviceAtlas might be installed in a location where a user might not have
> > enough permissions to write json.o and dac.o
> > ---
> >  Makefile | 4 
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/Makefile b/Makefile
> > index 0ce4b325305c..a10d2e4c1040 100644
> > --- a/Makefile
> > +++ b/Makefile
> > @@ -622,8 +622,12 @@ endif
> >  DEVICEATLAS_SRC =
> >  DEVICEATLAS_INC = $(DEVICEATLAS_SRC)
> >  DEVICEATLAS_LIB = $(DEVICEATLAS_SRC)
> > +ifeq ($(DEVICEATLAS_LIB),)
> > +OPTIONS_LDFLAGS += -lda
> > +else
> >  OPTIONS_OBJS   += $(DEVICEATLAS_LIB)/json.o
> >  OPTIONS_OBJS   += $(DEVICEATLAS_LIB)/dac.o
> > +endif
> >  OPTIONS_OBJS   += src/da.o
> >  OPTIONS_CFLAGS += -DUSE_DEVICEATLAS $(if 
> > $(DEVICEATLAS_INC),-I$(DEVICEATLAS_INC))
> >  BUILD_OPTIONS  += $(call ignore_implicit,USE_DEVICEATLAS)

-- 
Bertrand
From 430ed00dfedb25b686b327cbd0f685aa29bea5a1 Mon Sep 17 00:00:00 2001
From: Bertrand Jacquin <bertr...@jacquin.bzh>
Date: Thu, 6 Oct 2016 00:32:39 +0100
Subject: [PATCH] MINOR: build: Allow linking to device-atlas library file

DeviceAtlas might be installed in a location where a user might not have
enough permissions to write json.o and dac.o
---
 Makefile | 4 
 1 file changed, 4 insertions(+)

diff --git a/Makefile b/Makefile
index 0ce4b325305c..226a32627b9f 100644
--- a/Makefile
+++ b/Makefile
@@ -622,8 +622,12 @@ endif
 DEVICEATLAS_SRC =
 DEVICEATLAS_INC = $(DEVICEATLAS_SRC)
 DEVICEATLAS_LIB = $(DEVICEATLAS_SRC)
+ifeq ($(DEVICEATLAS_SRC),)
+OPTIONS_LDFLAGS += -lda
+else
 OPTIONS_OBJS	+= $(DEVICEATLAS_LIB)/json.o
 OPTIONS_OBJS	+= $(DEVICEATLAS_LIB)/dac.o
+endif
 OPTIONS_OBJS	+= src/da.o
 OPTIONS_CFLAGS += -DUSE_DEVICEATLAS $(if $(DEVICEATLAS_INC),-I$(DEVICEATLAS_INC))
 BUILD_OPTIONS  += $(call ignore_implicit,USE_DEVICEATLAS)


signature.asc
Description: Digital signature


[PATCH] MINOR: build: Allow linking to device-atlas library file

2016-10-05 Thread Bertrand Jacquin
DeviceAtlas might be installed in a location where a user might not have
enough permissions to write json.o and dac.o
---
 Makefile | 4 
 1 file changed, 4 insertions(+)

diff --git a/Makefile b/Makefile
index 0ce4b325305c..a10d2e4c1040 100644
--- a/Makefile
+++ b/Makefile
@@ -622,8 +622,12 @@ endif
 DEVICEATLAS_SRC =
 DEVICEATLAS_INC = $(DEVICEATLAS_SRC)
 DEVICEATLAS_LIB = $(DEVICEATLAS_SRC)
+ifeq ($(DEVICEATLAS_LIB),)
+OPTIONS_LDFLAGS += -lda
+else
 OPTIONS_OBJS   += $(DEVICEATLAS_LIB)/json.o
 OPTIONS_OBJS   += $(DEVICEATLAS_LIB)/dac.o
+endif
 OPTIONS_OBJS   += src/da.o
 OPTIONS_CFLAGS += -DUSE_DEVICEATLAS $(if 
$(DEVICEATLAS_INC),-I$(DEVICEATLAS_INC))
 BUILD_OPTIONS  += $(call ignore_implicit,USE_DEVICEATLAS)



Re: [PATCH] MAJOR: filters: Add filters support

2016-09-22 Thread Bertrand Jacquin

Hi Christopher,

On 22/09/2016 14:59, Christopher Faulet wrote:

On 22/09/2016 at 04:05, Bertrand Jacquin wrote:

On Tue, Sep 20, 2016 at 08:16:09AM +0200, Willy Tarreau wrote:

Hi Bertrand,

On Tue, Sep 20, 2016 at 12:13:32AM +0100, Bertrand Jacquin wrote:

And finally, If you can share with me your HA and
Nginx configurations, this could help.


I'm attaching a stripped-down version of haproxy/nginx/php-fpm on which
I can reproduce this issue.


I think another thing would be extremely useful, it would be a 
full-packet
network capture between haproxy and nginx so that we have the full 
headers,
the chunk sizes (if any) and the response timing, which generally 
matters a
lot. Ideally the dump in text (or binary) format in a distinct file 
using

curl -i would be nice as well.


Sure thing, you will find attached a tcpdump between haproxy and nginx
along with a curl output.

I changed initial configuration to use TCP instead of UNIX socket for
capture purpose. I also removed the proxy protocol between haproxy and
nginx to get an easier output to parse.


Thanks for all this information. It helped me find the bug. I
attached a patch. Could you check if it fixes your bug?


Thanks a lot for this, I confirm this patch fixes the issue I was
observing.


Cheers,

--
Bertrand



Re: [PATCH] MAJOR: filters: Add filters support

2016-09-19 Thread Bertrand Jacquin
On Mon, Sep 19, 2016 at 10:08:32AM +0200, Christopher Faulet wrote:
> On 18/09/2016 at 04:17, Bertrand Jacquin wrote:
> > Today I noticed data corruption when haproxy is used for compression
> > offloading. I bisected twice, and it led to this specific commit but
> > I'm not 100% confident this commit is the actual root cause.
> > 
> > HTTP body coming from the nginx backend is consistent, but HTTP headers
> > are different depending on the setup I'm enabling. Data corruption only
> > happens with transfer encoding chunked. HTTP body coming then from
> > haproxy to curl can be randomly corrupted, I attached a diff
> > (v1.7-dev1-50-gd7c9196ae56e.Transfer-Encoding-chunked.diff) revealing an
> > unrelated blob like TLS structure in the middle of the javascript. For
> > example, you will find my x509 client certificate in there
> > 
> > I'm also attaching HTTP headers from haproxy to nginx may that help.
> > 
> > Note that I tested with zlib 1.2.8 and libslz 1.0.0, result remains the
> > same in both case.
> 
> I've done some tests. Unfortunately, I'm unable to reproduce the bug. So
> I need more information. First, I need to know how you hit it. Does this
> happen under load or randomly when you do a single request?

I can reproduce this issue with 100% accuracy on arm or amd64:

  $ for (( i = 0 ; i < 25 ; i++ )) ; do
  curl -s -H 'Accept-Encoding: gzip' \
'https://pants-off.xyz/v1.7-dev1-50-gd7c9196ae56e.js' \
  | zcat | md5sum
done
  01a32fcef0a6894caf112c1a9d5c2a5d  -
  b2a109843f4c43fcde3cb051e4fbf8d2  -
  dedc59fb28ae5d91713234e3e5c08dec  -
  3c8f6d8d53c0ab36bb464b8283570355  -
  e1957e16479bc3106adc68fee2019be8  -
  4cc54367717e5adcdf940f619949ea72  -
  bf637a26e62582c35da6888a4928d4ec  -
  3eeecd478f8e6ea4d690c70f9444954a  -
  79ab805209777ab02bdc6fb829048c74  -
  2aaf9577c1fefdd107a5173aee270c83  -
  .. and so on, shrinking the output here

Note that md5sum of the file should be a4d8bb8ba2a76d7caf090ab632708d7d.

> Then, do
> you still have the bug when you are not using SSL? Let me also know how
> often the bug appears.

I did not do that test since it was easy for me to track output
containing details about my x509 client certificate.

Running same test as before with 100 iterations, counting similar
output.

  $ for (( i = 0 ; i < 100 ; i++ )) ; do
  curl -s -H 'Accept-Encoding: gzip' \
'http://pants-off.xyz/v1.7-dev1-50-gd7c9196ae56e.js' \
  | zcat | md5sum
done | uniq -c
  1 6c38ef6556efa9e0fa6825803679b2f2  -
 99 a4d8bb8ba2a76d7caf090ab632708d7d  -

Note that 6c38ef6556efa9e0fa6825803679b2f2 appears for the first
iteration. Second test after a few seconds.

  1 ffaf62147b43f82d587df59c39b48e54  -
 29 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 ae6e4404422b93c9fe64bffdea87f36d  -
 41 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 3e8c507e16733af8b728e229c00f21c3  -
  4 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 f6195005b050edcb5ca682b1cde9777f  -
 22 a4d8bb8ba2a76d7caf090ab632708d7d  -

Third test:

  1 6c38ef6556efa9e0fa6825803679b2f2  -
 80 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 3e8c507e16733af8b728e229c00f21c3  -
 18 a4d8bb8ba2a76d7caf090ab632708d7d  -

So it looks a bit more stable. Now if I query HTTP and HTTPS at the
same time, here is what I get:

HTTPS:
  2 17bfe6f7f6296cc5e1d623381afc9e55  -
  1 cbc1779ce5636c31bcf3ea175088da11  -
  1 52ba63995295f5399ddd91b9f9bdf81d  -
  1 5b4115080f35ac5f564b7164a3ada701  -
  1 adfb87fe9efc33e0218a891b2b8b4d42  -
  1 a6f8707556b2f760d20b51dd59b11fb4  -
  .. and so on

HTTP:
  1 3a794f99df4f7a282f822bbaca508852  -
  1 24242f218d9041383c523984d19feddc  -
  2 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 9987d0621c7fbe4b399e462f421b2157  -
  1 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 e261d9cdf988c4fd3d75877812fa5028  -
  .. and so on

Yet it does not look stable. Let's do a test with HTTP only from 2
different hosts:

HTTP client 1:
  1 64cd299604d1f7fac29ef7b2b623b1d0  -
  6 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 bd0372d30c564925ebd1866cf2476474  -
 11 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 64cd299604d1f7fac29ef7b2b623b1d0  -
  9 a4d8bb8ba2a76d7caf090ab632708d7d  -

HTTP client 2:
  1 8749926476d446ead3bd8d81523330eb  -
 16 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 c533c33a3ff469086bdbb6a936e2  -
 14 a4d8bb8ba2a76d7caf090ab632708d7d  -
  1 bd89ab7eab271b2ac13dff42e8e96ba4  -

We are again in a less stable situation.

> And finally, If you can share with me your HA and
> Nginx configurations, this could help.

I'm attaching a stripped-down version of haproxy/nginx/php-fpm on which I
can reproduce this issue.

Cheers,

-- 
Bertrand


v1.7-dev1-50-gd7c9196ae56e.tgz
Description: GNU Unix tar archive


signature.asc
Description: Digital signature


Re: timeout gated by ACL is enforced regardless of ACL match set

2014-05-21 Thread Bertrand Jacquin

Hi Adam,

On 2014-05-21 21:45, Adam Bruehl wrote:


I added the following to one of my front ends.

acl abuse_users src -f /etc/haproxy/abuse_users.lst

timeout http-request 5s if abuse_users


Unfortunately, you cannot use any condition on the 'timeout' keyword; they
are simply ignored. There has been a similar topic on this ML lately:


  http://marc.info/?l=haproxy&m=140058111320423&w=2
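
Conditions are accepted on 'http-request' rules though, so if the goal is
simply to block the listed sources outright, something along these lines
would work (a rough, untested sketch):

  acl abuse_users src -f /etc/haproxy/abuse_users.lst

  # unlike 'timeout', http-request rules do accept ACL conditions
  http-request deny if abuse_users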

--
Bertrand



Re: BUG/FEATURE? http-request always applied if backend not available

2014-04-02 Thread Bertrand Jacquin

On 2014-04-02 14:05, Willy Tarreau wrote:

Hi,

On Wed, Apr 02, 2014 at 12:50:08AM +0200, Cyril Bonté wrote:

On 02/04/2014 00:16, Bertrand Jacquin wrote:
What is adding the Vary and Strict-Transport-Security headers in this
second case ?

A missing 'http-response set-header' in the previous copy and paste.

   http-response set-header Vary   Accept-Encoding
   http-response set-header Strict-Transport-Security max-age=16070400 if
   { ssl_fc }

Sorry but we're certainly missing something with your configuration.
Even if those set-header were added, they can't be applied to the
redirect with the configuration provided in the example.

It makes me think there is a second level of proxy in your test. Am I
wrong ?


Strange, I can't reproduce with latest master.

I easily imagine there could be a bug with the way the http-request
redirect rule works though (since we're keeping the pointer to the last
validated rule and executing it later).

But looking at the code, I don't see how we can leave the function
http_req_get_intercept_rule() with a valid rule when the ACL condition
is not met :-/

Bertrand, would you like to add a return NULL; at the top of the
aforementioned function?


My bad, too much crap in my eyes yesterday evening; I had another proxy 
in the path due to some (old) iptables rules. Sorry for the noise, Willy and 
Cyril. Thanks
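
In case it helps anyone hitting the same thing, stray NAT rules that quietly
put another proxy in the path can usually be spotted with (a rough sketch):

  # look for unexpected REDIRECT/DNAT targets on ports 80/443
  iptables -t nat -L -n -v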




BUG/FEATURE? http-request always applied if backend not available

2014-04-01 Thread Bertrand Jacquin
Hi Willy,

I'm running into trouble with this sample configuration when the backend has
no server available:

defaults HTTP
  mode http

  option httplog
  log global

frontend ft_public
  bind 0.0.0.0:80 name HTTP
  bind 0.0.0.0:443 name HTTPS ssl crt foo.pem

  acl v-local hdr(Host) 203.0.113.42
  acl p-admin path_beg /__bar

  http-request redirect scheme https code 301 if v-local p-admin ! { ssl_fc }

  use_backend bk_local if v-local p-admin

  default_backend bk_default

backend bk_local
  balance source
  option forwardfor except 127.0.0.1/8

  server localhost 127.0.0.1:8080 weight 10 maxconn 100 check inter 1000 fall 2 
rise 2

backend bk_default
  block if TRUE

If bk_local has a server UP in the farm, and the request looks like
https://203.0.113.42/__bar, then everything is fine, the request is nicely
handled by bk_local/localhost. http://203.0.113.42/__bar is correctly
redirected.

If bk_local has no server UP in the farm, then the 'http-request
redirect scheme' is always applied instead of a 503 response. I don't
know if this is really the intended result. In the request
(https://203.0.113.42/__bar),

  v-local match
  p-admin match
  ! { ssl_fc } does not match

So no redirection should be applied.

I'm using HA-Proxy version 1.5-dev22-1a34d57 2014/02/03
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = x86_64-pc-linux-gnu-gcc
  CFLAGS  = -march=native -O2 -pipe -fomit-frame-pointer -fno-strict-aliasing
  OPTIONS = USE_LIBCRYPT=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 
USE_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.33 2013-05-28
PCRE library supports JIT : yes
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Thanks.

-- 
Bertrand



Re: BUG/FEATURE? http-request always applied if backend not available

2014-04-01 Thread Bertrand Jacquin
Hi Cyril,

On Tuesday, 1 April 2014 at 23:35, Cyril Bonté wrote:

  If bk_local has a server UP in the farm, and the request looks like
  https://203.0.113.42/__bar, then everything is fine, the request is nicely
  handled by bk_local/localhost. http://203.0.113.42/__bar is correctly
  redirected.
 
  If bk_local has no server UP in the farm, then the 'http-request
  redirect scheme' is always applied instead of a 503 response. I don't
  know if this is really the intended result. In the request
  (https://203.0.113.42/__bar),
 
 I'm not sure I understand. Did you want to write 
 http://203.0.113.42/__bar just above?

No, https://203.0.113.42/__bar should not pass 'http-request redirect
scheme' as not all the prerequisites are matched.

I need the 'http-request redirect scheme' to be applied only if the request
is for the right vhost (v-local) and for the right path (p-admin) and if it
is not an SSL request.

 If it was supposed to be http instead of https, I'd call it a 
 feature and you can use nbsrv to disable redirects when no 
 servers are available.

Sure, but I'm more trying to understand that behaviour here.
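
For reference, the nbsrv-based guard you mention would look roughly like
this (an untested sketch based on the configuration above):

  acl bk_local_up nbsrv(bk_local) gt 0
  http-request redirect scheme https code 301 if v-local p-admin bk_local_up ! { ssl_fc }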

-- 
Bertrand



Re: BUG/FEATURE? http-request always applied if backend not available

2014-04-01 Thread Bertrand Jacquin
On Wednesday, 2 April 2014 at 00:13, Cyril Bonté wrote:
 On 01/04/2014 23:56, Bertrand Jacquin wrote:
  When bk_local/localhost is UP :
 
  $ curl -vk -so /dev/null https://203.0.113.42/__bar/
  (...)
  GET /__bar/ HTTP/1.1
  User-Agent: curl/7.35.0
  Host: 203.0.113.42
  Accept: */*
 
   HTTP/1.1 200 OK
   Date: Tue, 01 Apr 2014 21:54:46 GMT
  * Server Apache is not blacklisted
   Server: Apache
   Expires: Tue, 01 Apr 2014 21:54:46 +
   Cache-Control: no-store, no-cache, must-revalidate,  pre-check=0, 
  post-check=0, max-age=0
   Last-Modified: Tue, 01 Apr 2014 21:54:46 +
   Transfer-Encoding: chunked
   Content-Type: text/html; charset=utf-8
   Vary: Accept-Encoding
   Strict-Transport-Security: max-age=16070400
  { [data not shown]
  * Connection #0 to host 203.0.113.42 left intact
 
  When bk_local/localhost is DOWN :
 
  $ curl -vk -so /dev/null https://203.0.113.42/__bar/
  (...)
  GET /__bar/ HTTP/1.1
  User-Agent: curl/7.35.0
  Host: 203.0.113.42
  Accept: */*
 
   HTTP/1.1 301 Moved Permanently
   Content-length: 0
   Location: https://203.0.113.42/__bar/
   Vary: Accept-Encoding
   Strict-Transport-Security: max-age=16070400
  
  * Connection #0 to host 203.0.113.42 left intact
 
 What is adding the Vary and Strict-Transport-Security headers in this 
 second case ?

A missing 'http-response set-header' in the previous copy and paste.

  http-response set-header Vary   Accept-Encoding
  http-response set-header Strict-Transport-Security max-age=16070400 if { 
ssl_fc }

-- 
Bertrand



Re: [PATCH] proxy: support use_backend with dynamic names

2014-03-31 Thread Bertrand Jacquin
On Monday, 31 March 2014 at 10:45, Willy Tarreau wrote:
 On Thu, Mar 27, 2014 at 08:57:09PM -0400, Rajat Chopra wrote:
  Hi!
 This solution very much solves the problem that I have been facing i.e. 
  large number of acl rules causing latency in requests. Been in discussions 
  separately about it and today I got a chance to test out this patch. I 
  report that it works great! I have been able to route 150k backends with 
  this and the latency added because of the dynamic lookup is on the order of 
  microseconds (compared to 24ms earlier).
  
  
  The usage 'use_backend bk_%[hdr(Host)] if TRUE' works for my use-case but 
  originally I was wondering if one could do a map based lookup for the 
  backend.
  As posted here :
  http://stackoverflow.com/questions/22025412/how-to-use-thousands-of-backends-in-haproxy-is-the-new-map-feature-useful-for-t
  
  Most of the issues in the above question are now solved, but I tested this 
  with the patch -
  use_backend bk_%[hdr(Host), map(host_to_backend_map.file)] if TRUE
  
  And it does not work. I am not yet familiar with code to determine why this 
  does not work. Again, the current proposal works well for me but an 
  enhancement should probably consider using maps within dynamic lookup.
  
  +1 for the patch.
 
 OK so in the absence of any other comment, I just merged it.

Thanks Willy for the advice and for the merge!

-- 
Bertrand Jacquin
Technical Engineer
Tel: +33 1 30 67 60 65
GSM: +33 6 71 01 70 30
http://pom-monitoring.com
*POM Monitoring*, Information System performance management



[PATCH] proxy: support use_backend with dynamic names

2014-03-23 Thread Bertrand Jacquin
Hi,

I did this patch for dev19 some time ago but I am still not sure whether
it is the best way to do it or not, and did not have the time to discuss
it since. As the latest changes broke it and forced me to rebase it, and
it's very useful for us, I'd like to propose it for inclusion before the
final release if you think it's OK, or to discuss how it should be done.

The main purpose is to be able to use many backends without the need to
declare each frontend-to-backend routing rule, and instead use generic,
dynamic switching when a sane parameter can be taken from the user request
using the logformat logic. For example, when we have a backend farm dedicated
to each 'Host:' HTTP header, it is painful to have to declare every backend
and the relevant use_backend rule.

With the proposed solution, you first need to declare a dynamic
use_backend as follows:

  use_backend bk_cust_%[hdr(Host)] if { hdr(Host) -m found }

And then declare the needed backends. For every new vhost hosted, you only
need to add the backend section to the configuration.
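
For illustration, a complete minimal sketch (host names and addresses are
placeholders):

  frontend ft_public
    bind 0.0.0.0:80
    use_backend bk_cust_%[hdr(Host)] if { hdr(Host) -m found }
    default_backend bk_default

  backend bk_cust_example.com
    server app1 192.0.2.10:8080

  backend bk_default
    server fallback 192.0.2.11:8080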

More detailed usage and implementation notes are in the patch itself.

It was rebased on commit 0e9b1b4d1f0efc5e46a10371d9be21e97581faab.

A current limitation of that patch is that if a dynamic use_backend
is evaluated and the named backend is not found, the default_backend is
then used. This is not a need we have, but some may complain about it.

A future enhancement could be to use the same logic for use-server and
many other parts such as reqadd.

Bertrand

-- 
Bertrand Jacquin, EXOSEC (http://www.exosec.fr/)
ZAC des Metz - 3 Rue du petit robinson - 78350 JOUY EN JOSAS
Tel: +33 1 30 67 60 65  -  Fax: +33 1 75 43 40 70
mailto:bjacq...@exosec.fr
From bc306e460752244e73ca7c04dcf3df7506f1c013 Mon Sep 17 00:00:00 2001
From: Bertrand Jacquin bjacq...@exosec.fr
Date: Tue, 19 Nov 2013 11:43:06 +0100
Subject: [PATCH] MEDIUM: proxy: support use_backend with dynamic names

We have a use case where we look up a customer ID in an HTTP header
and direct it to the corresponding server. This can easily be done
using ACLs and use_backend rules, but the configuration becomes
painful to maintain when the number of customers grows to a few
tens or even a several hundreds.

We realized it would be nice if we could make the use_backend
resolve its name at run time instead of config parsing time, and
use a similar expression as http-request add-header to decide on
the proper backend to use. This permits the use of prefixes or
even complex names in backend expressions. If no name matches,
then the default backend is used. Doing so allowed us to get rid
of all the use_backend rules.

Since there are some config checks on the use_backend rules to see
if the referenced backend exists, we want to keep them to detect
config errors in normal config. So this patch does not modify the
default behaviour and proceeds this way :

  - if the backend name in the use_backend directive parses as a log
format rule, it's used as-is and is resolved at run time ;

  - otherwise it's a static name which must be valid at config time.

There was the possibility of doing this with the use-server directive
instead of use_backend, but it seems like use_backend is more suited
to this task, as it can be used for other purposes. For example, it
becomes easy to serve a customer-specific proxy.pac file based on the
customer ID by abusing the errorfile primitive :

 use_backend bk_cust_%[hdr(X-Cust-Id)] if { hdr(X-Cust-Id) -m found }
 default_backend bk_err_404

 backend bk_cust_1
 errorfile 200 /etc/haproxy/static/proxy.pac.cust1

Signed-off-by: Bertrand Jacquin bjacq...@exosec.fr
---
 doc/configuration.txt | 24 +---
 include/types/proxy.h |  2 ++
 src/cfgparse.c| 29 -
 src/session.c | 19 ++-
 4 files changed, 69 insertions(+), 5 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index ba8057f..661f537 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2463,7 +2463,9 @@ fullconn <conns>
 
   Since it's hard to get this value right, haproxy automatically sets it to
   10% of the sum of the maxconns of all frontends that may branch to this
-  backend. That way it's safe to leave it unset.
+  backend (based on use_backend and default_backend rules). That way it's
+  safe to leave it unset. However, use_backend involving dynamic names are
+  not counted since there is no way to know if they could match or not.
 
   Example :
  # The servers will accept between 100 and 1000 concurrent connections each
@@ -7191,7 +7193,8 @@ use_backend <backend> [{if | unless} <condition>]
   May be used in sections :   defaults | frontend | listen | backend
   no   |yes   |   yes  |   no
   Arguments :
-backend   is the name of a valid backend or listen section.
+backend   is the name of a valid backend or listen section, or a
+log

Re: Follow-up on thread 'SSL handshake failure' from 2/5/2013

2013-04-26 Thread Bertrand Jacquin

Hi,

If it can help, I've been in touch with Emeric about this SSL handshake
failure for some time now, but it's probably preferable to use the ML to
share experience.


I'm using the following cipher filter list :
 'ALL:!SSLv2:!eNULL:!aNULL:!LOW:!EXPORT:!kECDH:!MD5:@STRENGTH'

The PEM file I used is composed of the following:
 -BEGIN CERTIFICATE-  = Leaf cert
 -BEGIN CERTIFICATE-  = Intermediate cert
 -BEGIN CERTIFICATE-  = Root cert
 -BEGIN DH PARAMETERS- = openssl dhparam 4096 result
 -BEGIN DSA PARAMETERS- = openssl dsaparam 4096 result
 -BEGIN EC PARAMETERS-  = openssl ecparam -name prime256v1 
result

 -BEGIN RSA PRIVATE KEY- = Dumbo jacket
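
For reference, the bundle was assembled roughly like this (a sketch; file
names are placeholders):

  openssl dhparam 4096 > dhparam.pem
  openssl dsaparam 4096 > dsaparam.pem
  openssl ecparam -name prime256v1 > ecparam.pem
  cat leaf.crt intermediate.crt root.crt \
      dhparam.pem dsaparam.pem ecparam.pem server.key > haproxy.pem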

Here is the result of trying each cipher on the list:

 $ openssl ciphers -v 'ALL:!SSLv2:!eNULL:!aNULL:!LOW:!EXPORT:!kECDH:!MD5:@STRENGTH' \
   | while read C dumb; do
       echo -n "# $C  "
       openssl s_client -connect 176.31.104.63:443 -cipher $C < /dev/null > /dev/null 2>&1 \
         && echo OK \
         || echo FAIL
     done \
   | sort -k 3 \
   | column -t

#  DHE-DSS-AES128-GCM-SHA256  FAIL
#  DHE-DSS-AES128-SHA256  FAIL
#  DHE-DSS-AES128-SHA FAIL
#  DHE-DSS-AES256-GCM-SHA384  FAIL
#  DHE-DSS-AES256-SHA256  FAIL
#  DHE-DSS-AES256-SHA FAIL
#  DHE-DSS-CAMELLIA128-SHAFAIL
#  DHE-DSS-CAMELLIA256-SHAFAIL
#  DHE-DSS-SEED-SHA   FAIL
#  ECDHE-ECDSA-AES128-GCM-SHA256  FAIL
#  ECDHE-ECDSA-AES128-SHA256  FAIL
#  ECDHE-ECDSA-AES128-SHA FAIL
#  ECDHE-ECDSA-AES256-GCM-SHA384  FAIL
#  ECDHE-ECDSA-AES256-SHA384  FAIL
#  ECDHE-ECDSA-AES256-SHA FAIL
#  ECDHE-ECDSA-DES-CBC3-SHA   FAIL
#  ECDHE-ECDSA-RC4-SHAFAIL
#  EDH-DSS-DES-CBC3-SHA   FAIL
#  PSK-3DES-EDE-CBC-SHA   FAIL
#  PSK-AES128-CBC-SHA FAIL
#  PSK-AES256-CBC-SHA FAIL
#  PSK-RC4-SHAFAIL
#  SRP-DSS-3DES-EDE-CBC-SHA   FAIL
#  SRP-DSS-AES-128-CBC-SHAFAIL
#  SRP-DSS-AES-256-CBC-SHAFAIL
#  SRP-RSA-3DES-EDE-CBC-SHA   FAIL
#  SRP-RSA-AES-128-CBC-SHAFAIL
#  SRP-RSA-AES-256-CBC-SHAFAIL
#  AES128-GCM-SHA256  OK
#  AES128-SHA256  OK
#  AES128-SHA OK
#  AES256-GCM-SHA384  OK
#  AES256-SHA256  OK
#  AES256-SHA OK
#  CAMELLIA128-SHAOK
#  CAMELLIA256-SHAOK
#  DES-CBC3-SHA   OK
#  DHE-RSA-AES128-GCM-SHA256  OK
#  DHE-RSA-AES128-SHA256  OK
#  DHE-RSA-AES128-SHA OK
#  DHE-RSA-AES256-GCM-SHA384  OK
#  DHE-RSA-AES256-SHA256  OK
#  DHE-RSA-AES256-SHA OK
#  DHE-RSA-CAMELLIA128-SHAOK
#  DHE-RSA-CAMELLIA256-SHAOK
#  DHE-RSA-SEED-SHA   OK
#  ECDHE-RSA-AES128-GCM-SHA256OK
#  ECDHE-RSA-AES128-SHA256OK
#  ECDHE-RSA-AES128-SHA   OK
#  ECDHE-RSA-AES256-GCM-SHA384OK
#  ECDHE-RSA-AES256-SHA384OK
#  ECDHE-RSA-AES256-SHA   OK
#  ECDHE-RSA-DES-CBC3-SHA OK
#  ECDHE-RSA-RC4-SHA  OK
#  EDH-RSA-DES-CBC3-SHA   OK
#  IDEA-CBC-SHA   OK
#  RC4-SHAOK
#  SEED-SHA   OK

On the client (openssl s_client) it gives:
error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert 
handshake failure:s23_clnt.c:741:

---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE

On the server :
SSL handshake failure

Also, while debugging haproxy v1.5-dev18-34-gf27af0d I can see that
PEM_read_bio_DHparams(), called in ssl_sock_load_dh_params(), returns the
original DH parameters (from the PEM file) and that ssl_sock_load_dh_params()
returns 1.
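
Note that every suite in the FAIL column needs a DSS or ECDSA certificate
(or PSK/SRP material), which this RSA-only PEM does not carry, so those
particular failures are expected. A quick way to confirm which key type the
server actually presents (a hedged sketch):

  openssl s_client -connect 176.31.104.63:443 < /dev/null 2> /dev/null \
    | openssl x509 -noout -text \
    | grep -A1 'Public Key Algorithm'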

beber

On 2013-04-19 20:53, Connelly, Zachary (CGI Federal) wrote:

HAProxy list,

I am currently working to implement SSL within HAProxy using the
1.5-dev18 version. Much like the thread started by Samat Galimov [1]
on 2/5/2013, I am seeing the same behavior where the first time I send
a request via SSL the request is serviced and everything is fine; the
next time the same request is attempted I receive 'ERROR:Exception in
request: javax.net.ssl.SSLHandshakeException: Remote host closed
connection during handshake.' I noticed the attached code in the
thread was not put into the dev18 version (I believe). Did that code
end up resolving the issue or is the issue still being reviewed? I can
supply my config file if that would help. Is there any way to get more
info out of HAProxy to see what it is doing while it handles the SSL
Handshake (the log does not seem to write anything when the request
fails)?

Any assistance would be appreciated. Thanks,

Zack Connelly

Links:
--
[1] http://search.gmane.org/?author=Samat+Galimov&sort=date