Re: dns fails to process response / hold valid? (since commit 2.2-dev0-13a9232)

2020-02-19 Thread PiBa-NL

Hi Baptiste,

On 19-2-2020 at 13:06, Baptiste wrote:

> Hi,
>
> I found a couple of bugs in that part of the code.
> Can you please try the attached patch? (0001 is useless, but I share
> it too just in case.)

Works for me, thanks!

> It will allow parsing of additional records for SRV queries only and,
> when done, will silently ignore any record that is not A or AAAA.
>
> @maint team, please don't apply the patch yet, I want to test it much
> more before.

When the final patch is ready, I'll be happy to give it a try as well.

> Baptiste

On a side note: with the config below I would expect 2 servers with status 'MAINT (resolving)'.


Using this configuration in Unbound (4 server IPs defined):
server:
local-data: "_https._tcp.pkg.test.tld 3600 IN SRV 0 100 80 srv1.test.tld"
local-data: "_https._tcp.pkg.test.tld 3600 IN SRV 0 100 80 srv2.test.tld"
local-data: "srv1.test.tld 3600 IN A 192.168.0.51"
local-data: "srv2.test.tld 3600 IN A 192.168.0.52"
local-data: "srvX.test.tld 3600 IN A 192.168.0.53"
local-data: "srvX.test.tld 3600 IN A 192.168.0.54"

And this in a HAProxy backend:

    server-template PB_SRVrecords 3 ipv4@_https._tcp.pkg.test.tld:77 id 10110 check inter 18 resolvers globalresolvers resolve-prefer ipv4
    server-template PB_multipleA 3 ipv4@srvX.test.tld:78 id 10111 check inter 18 resolvers globalresolvers resolve-prefer ipv4


This results in 6 servers, of which 1 has 'MAINT (resolution)' status and 1 has an IP of 0.0.0.0 but shows as 'DOWN'. I would have expected 2 servers with status MAINT.
(P.S. None of the IPs actually exist on my network, so it is correct that the other servers are also shown as DOWN.)


# pxname   svname           addr              status
PB_ipv4    PB_SRVrecords1   192.168.0.51:80   DOWN   (L4CON: Layer4 connection problem)
PB_ipv4    PB_SRVrecords2   192.168.0.52:80   DOWN   (L4CON: Layer4 connection problem)
PB_ipv4    PB_SRVrecords3   0.0.0.0:77        DOWN   (L4CON: Layer4 connection problem)
PB_ipv4    PB_multipleA1    192.168.0.53:78   DOWN   (L4CON: Layer4 connection problem)
PB_ipv4    PB_multipleA2    192.168.0.54:78   DOWN   (L4CON: Layer4 connection problem)
PB_ipv4    PB_multipleA3    0.0.0.0:78        MAINT (resolution)


If additional info is desired, please let me know :).

On Tue, Feb 18, 2020 at 2:03 PM Baptiste wrote:

> Hi guys,
>
> Thx Tim for investigating.
> I'll check the PCAP and see why such behavior happens.
>
> Baptiste


> On Tue, Feb 18, 2020 at 12:09 AM Tim Düsterhus wrote:
>
>> Pieter,
>>
>> On 09.02.20 at 15:35, PiBa-NL wrote:
>>> Before commit '2.2-dev0-13a9232, released 2020/01/22 (use additional
>>> records from SRV responses)' I get seemingly proper working
>>> resolving of a server name.
>>> After this commit all responses are counted as 'invalid' in the
>>> socket stats.
>>
>> I can confirm the issue with the provided configuration. The
>> 'if (len == 0) {' check in line 1045 of the commit causes HAProxy to
>> consider the responses 'invalid':

Thanks for confirming :).

>> https://github.com/haproxy/haproxy/commit/13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b#diff-b2ddf457bc423779995466f7d8b9d147R1045-R1048
>>
>> Best regards
>> Tim Düsterhus

Regards,
PiBa-NL (Pieter)




Documentation clarification: option redispatch

2020-02-19 Thread Luke Seelenbinder
Hello list,

I'm working on improving our error rates (the elusive 0 is rather close) and, as a result, tightening up our HAProxy configuration. Based on some testing I'm doing, I realized there's a bit of a documentation hole around the exact behavior of `option redispatch`.

In the part I'm currently debugging, I have two servers: one is the main server and one is the backup. Does `option redispatch 1` retry on a backup server if the request to the main server fails, or does it redispatch to the same (main) backend server? Ideally a redispatch could operate across the normal/backup server pools, but based on observed behavior, I'm rather convinced it does not. My next step is to configure the backup server as a normal server but assign it a weight of 0, to make it act as a backup while still allowing redispatches (see the sketch below).
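
A minimal sketch of the setup in question (hypothetical names and addresses):

    backend be_main
        retries 3
        option redispatch 1      # allow a retried connection to be sent to another server
        server main  192.0.2.10:80 check
        server spare 192.0.2.11:80 check backup
        # variant considered above: declare 'spare' as a normal server
        # with 'weight 0' instead of marking it 'backup'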

Is anyone able to shed some light on the specifics of this behavior?

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder
stadiamaps.com



[PATCH] partially enable s390x builds in travis-ci

2020-02-19 Thread Илья Шипицин
Hello,


I enabled s390x builds, except for reg-tests/seamless-reload/abns_socket.vtc.

Cheers,
Ilya Shipitsin
From 8924057b8a25cf8f6595929f4b18fec1a85a10c5 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Wed, 19 Feb 2020 23:47:56 +0500
Subject: [PATCH] BUILD: travis-ci: enable s390x builds

reg-tests/seamless-reload/abns_socket.vtc is skipped due to #504
---
 .travis.yml | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/.travis.yml b/.travis.yml
index fd136c980..d263cf75a 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -47,6 +47,13 @@ matrix:
     if: type != cron
     compiler: clang
     env: TARGET=linux-glibc OPENSSL_VERSION=1.1.1d
+  - os: linux
+    arch: s390x
+    if: type != cron
+    compiler: gcc
+    env: TARGET=linux-glibc OPENSSL_VERSION=1.1.1d
+    before_script:
+      - rm reg-tests/seamless-reload/abns_socket.vtc # please, see https://github.com/haproxy/haproxy/issues/504
   - os: linux
     if: type == cron
     compiler: clang
-- 
2.24.1



Re: Segfault on HAProxy 2.0.11 on HTX mode

2020-02-19 Thread Christopher Faulet

On 19/02/2020 at 17:12, Olivier D wrote:

> I thought HTX was default mode since 2.0-dev3
> (https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#no%20option%20http-use-htx)
>
> We don't have custom config on this, so default mode was used
> everywhere.

Ahhh, you're right. I've mixed up the versions...

>>>> Did you make any recent changes on HAProxy or your servers? I'm
>>>> surprised the segfaults appear spontaneously after 2 months without
>>>> any problem.
>>>
>>> Only minor modifications in the last few days ...
>>
>> minor modifications may have huge impact, especially if you hit a
>> hidden bug :)
>
> Config file is auto-generated from a central server, so we always add
> frontends, backends or certificates. That's all.
>
> I can send you the config file, but it's 8k lines, so it won't help
> you much I guess. Can the coredump help you more, with the binary
> used?

Yes, send me everything, it could help and limit the useless round-trips. Don't forget to tell me the distro you are using.


Thanks,
--
Christopher Faulet



Re: Segfault on HAProxy 2.0.11 on HTX mode

2020-02-19 Thread Olivier D
On Wed, Feb 19, 2020 at 16:24, Christopher Faulet wrote:

> On 19/02/2020 at 16:05, Olivier D wrote:
>>> A bug was fixed in 2.0.12 that could explain such crashes. The
>>> upstream commit id is eec7f8ac0 (or 0ed1e8963 in the 2.0 tree). It
>>> is related to the GitHub issue #420.
>>>
>>> But I don't know if it is the same bug, because I don't know how it
>>> is possible to apply an HTTP load-balancing algo on a TCP backend.
>>> I must take a look at your configuration. You can send it to me in
>>> private. Maybe I'll find something explaining your crashes.
>>
>> I have hundreds of frontends/backends in this config. What made you
>> think this is related to a TCP backend? That would help me a lot.
>
> Because the mentioned commit fixes a bug where it was possible to
> assign a TCP backend to an HTX stream. It is possible to hit this bug
> when dynamic rules are used to choose the backend. In such a case, we
> are unable to detect a bad configuration during HAProxy startup.

We do use some "use_backend ... if { }", but only on HTTP frontends (I checked). Never on TCP.
We have a mix between "listen" blocks with "server" defined inside, and some frontend/backend blocks. So one "listen" block may also have a "use_backend ... if".

Yes, it's bad, but it has been auto-generated since HAProxy 1.5 and we never rewrote this part.

> So, if you have TCP frontends that can be dynamically routed to HTTP
> or TCP backends, you may hit the bug. See GitHub issue #420.

I don't think it is this one. Our only TCP frontends are all formatted like this:

listen x
    id 20609
    bind-process 18
    balance source
    hash-type consistent
    mode tcp
    bind X.X.X.X:443
    server s1 X.X.X.X:443 id 4567 check weight 5 send-proxy-v2-ssl-cn check-ssl verify none
    server s2 X.X.X.X:443 id 1234 check weight 5 send-proxy-v2-ssl-cn check-ssl verify none



> There is another source of bugs. In HAProxy 2.0, the HTX mode is not
> enabled by default. If you have dynamic routing rules, be careful to
> have the same mode (legacy or HTX) everywhere. I will do some tests
> to be sure this case is properly handled.

I thought HTX was default mode since 2.0-dev3 (
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#no%20option%20http-use-htx
)
We don't have custom config on this, so default mode was used everywhere.


>>> Did you make any recent changes on HAProxy or your servers? I'm
>>> surprised the segfaults appear spontaneously after 2 months without
>>> any problem.
>>
>> Only minor modifications in the last few days ...
>
> minor modifications may have huge impact, especially if you hit a
> hidden bug :)

Config file is auto-generated from a central server, so we always add frontends, backends or certificates. That's all.

I can send you the config file, but it's 8k lines, so it won't help you much I guess. Can the coredump help you more, with the binary used?

Olivier



>
> --
> Christopher Faulet
>


Re: Segfault on HAProxy 2.0.11 on HTX mode

2020-02-19 Thread Christopher Faulet

On 19/02/2020 at 16:05, Olivier D wrote:

>> A bug was fixed in 2.0.12 that could explain such crashes. The
>> upstream commit id is eec7f8ac0 (or 0ed1e8963 in the 2.0 tree). It is
>> related to the GitHub issue #420.
>>
>> But I don't know if it is the same bug, because I don't know how it
>> is possible to apply an HTTP load-balancing algo on a TCP backend. I
>> must take a look at your configuration. You can send it to me in
>> private. Maybe I'll find something explaining your crashes.
>
> I have hundreds of frontends/backends in this config. What made you
> think this is related to a TCP backend? That would help me a lot.

Because the mentioned commit fixes a bug where it was possible to assign a TCP backend to an HTX stream. It is possible to hit this bug when dynamic rules are used to choose the backend. In such a case, we are unable to detect a bad configuration during HAProxy startup.

So, if you have TCP frontends that can be dynamically routed to HTTP or TCP backends, you may hit the bug. See GitHub issue #420.
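
For illustration, a minimal sketch of such a dynamically routed setup (hypothetical names, addresses and map file); the mode mismatch between fe_tcp and be_http_app cannot be caught at startup because the backend name is only known at runtime:

    frontend fe_tcp
        mode tcp
        bind :8443
        tcp-request inspect-delay 5s
        tcp-request content accept if { req.ssl_hello_type 1 }
        # the backend name comes from a runtime map lookup, so HAProxy
        # cannot know at startup which backend (and which mode) is used
        use_backend %[req.ssl_sni,lower,map(/etc/haproxy/sni.map,be_tcp_default)]

    backend be_tcp_default
        mode tcp
        server raw1 192.0.2.21:8443

    backend be_http_app
        mode http
        server app1 192.0.2.20:8080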


There is another source of bugs. In HAProxy 2.0, the HTX mode is not enabled by default. If you have dynamic routing rules, be careful to have the same mode (legacy or HTX) everywhere. I will do some tests to be sure this case is properly handled.




>> Did you make any recent changes on HAProxy or your servers? I'm
>> surprised the segfaults appear spontaneously after 2 months without
>> any problem.
>
> Only minor modifications in the last few days ...

minor modifications may have huge impact, especially if you hit a hidden bug :)

--
Christopher Faulet



Re: Segfault on HAProxy 2.0.11 on HTX mode

2020-02-19 Thread Olivier D
Hello,

On Wed, Feb 19, 2020 at 15:27, Christopher Faulet wrote:

> On 19/02/2020 at 11:35, Olivier D wrote:
>> Hello,
>>
>> I would like to report a segfault on HAProxy 2.0.11; this version
>> has been running fine for two months, and this morning it started
>> segfaulting over and over. Mitigation was performed by adding
>> "no option http-use-htx" in the 'defaults' block.
>>
>> I know it's not the latest version :) I'll update to 2.0.13 this
>> evening.
>>
>> Program terminated with signal 11, Segmentation fault.
>> #0  htx_sl_p2 (sl=<optimized out>) at include/common/htx.h:293
>> 293 include/common/htx.h: No such file or directory.
>> (gdb) bt
>> #0  htx_sl_p2 (sl=<optimized out>) at include/common/htx.h:293
>> #1  htx_sl_req_uri (sl=<optimized out>) at include/common/htx.h:308
>> #2  assign_server (s=0xdc139f0) at src/backend.c:746
>> #3  0x00552114 in assign_server_and_queue (s=s@entry=0xdc139f0) at src/backend.c:977
>> #4  0x005556f8 in assign_server_and_queue (s=0xdc139f0) at src/backend.c:1772
>> #5  srv_redispatch_connect (s=s@entry=0xdc139f0) at src/backend.c:1705
>> #6  0x004c2cf8 in sess_prepare_conn_req (s=<optimized out>) at src/stream.c:1250
>> #7  process_stream (t=t@entry=0xd1db790, context=0xdc139f0, state=<optimized out>) at src/stream.c:2414
>> #8  0x00594865 in process_runnable_tasks () at src/task.c:412
>> #9  0x005038f7 in run_poll_loop () at src/haproxy.c:2520
>> #10 run_thread_poll_loop (data=data@entry=0x0) at src/haproxy.c:2641
>> #11 0x004653b0 in main (argc=<optimized out>, argv=0x7fff848ae498) at src/haproxy.c:3318
>>
>> Config file is very long ... If needed, a coredump + binary can be
>> sent in private.
>
> Hi,
>
> A bug was fixed in 2.0.12 that could explain such crashes. The
> upstream commit id is eec7f8ac0 (or 0ed1e8963 in the 2.0 tree). It is
> related to the GitHub issue #420.
>
> But I don't know if it is the same bug, because I don't know how it
> is possible to apply an HTTP load-balancing algo on a TCP backend. I
> must take a look at your configuration. You can send it to me in
> private. Maybe I'll find something explaining your crashes.

I have hundreds of frontends/backends in this config. What made you think this is related to a TCP backend? That would help me a lot.


> Did you make any recent changes on HAProxy or your servers? I'm
> surprised the segfaults appear spontaneously after 2 months without
> any problem.

Only minor modifications in the last few days ...

I'll update to the latest HAProxy version to check.

Olivier


>
>
> --
> Christopher Faulet
>


Re: Segfault on HAProxy 2.0.11 on HTX mode

2020-02-19 Thread Christopher Faulet

On 19/02/2020 at 11:35, Olivier D wrote:

> Hello,
>
> I would like to report a segfault on HAProxy 2.0.11; this version has
> been running fine for two months, and this morning it started
> segfaulting over and over.
>
> Mitigation was performed by adding "no option http-use-htx" in the
> 'defaults' block.
>
> I know it's not the latest version :) I'll update to 2.0.13 this
> evening.
>
> Program terminated with signal 11, Segmentation fault.
> #0  htx_sl_p2 (sl=<optimized out>) at include/common/htx.h:293
> 293     include/common/htx.h: No such file or directory.
> (gdb) bt
> #0  htx_sl_p2 (sl=<optimized out>) at include/common/htx.h:293
> #1  htx_sl_req_uri (sl=<optimized out>) at include/common/htx.h:308
> #2  assign_server (s=0xdc139f0) at src/backend.c:746
> #3  0x00552114 in assign_server_and_queue (s=s@entry=0xdc139f0) at src/backend.c:977
> #4  0x005556f8 in assign_server_and_queue (s=0xdc139f0) at src/backend.c:1772
> #5  srv_redispatch_connect (s=s@entry=0xdc139f0) at src/backend.c:1705
> #6  0x004c2cf8 in sess_prepare_conn_req (s=<optimized out>) at src/stream.c:1250
> #7  process_stream (t=t@entry=0xd1db790, context=0xdc139f0, state=<optimized out>) at src/stream.c:2414
> #8  0x00594865 in process_runnable_tasks () at src/task.c:412
> #9  0x005038f7 in run_poll_loop () at src/haproxy.c:2520
> #10 run_thread_poll_loop (data=data@entry=0x0) at src/haproxy.c:2641
> #11 0x004653b0 in main (argc=<optimized out>, argv=0x7fff848ae498) at src/haproxy.c:3318
>
> Config file is very long ... If needed, a coredump + binary can be
> sent in private.

Hi,

A bug was fixed in 2.0.12 that could explain such crashes. The upstream commit id is eec7f8ac0 (or 0ed1e8963 in the 2.0 tree). It is related to the GitHub issue #420.

But I don't know if it is the same bug, because I don't know how it is possible to apply an HTTP load-balancing algo on a TCP backend. I must take a look at your configuration. You can send it to me in private. Maybe I'll find something explaining your crashes.

Did you make any recent changes on HAProxy or your servers? I'm surprised the segfaults appear spontaneously after 2 months without any problem.



--
Christopher Faulet



Re: dns fails to process response / hold valid? (since commit 2.2-dev0-13a9232)

2020-02-19 Thread Baptiste
Hi,

I found a couple of bugs in that part of the code.
Can you please try the attached patch? (0001 is useless, but I share it too just in case.)
It will allow parsing of additional records for SRV queries only and, when done, will silently ignore any record that is not A or AAAA.

@maint team, please don't apply the patch yet, I want to test it much more
before.

Baptiste


On Tue, Feb 18, 2020 at 2:03 PM Baptiste  wrote:

> Hi guys,
>
> Thx Tim for investigating.
> I'll check the PCAP and see why such behavior happens.
>
> Baptiste
>
>
> On Tue, Feb 18, 2020 at 12:09 AM Tim Düsterhus  wrote:
>
>> Pieter,
>>
>> On 09.02.20 at 15:35, PiBa-NL wrote:
>> > Before commit '2.2-dev0-13a9232, released 2020/01/22 (use additional
>> > records from SRV responses)' I get seemingly proper working resolving
>> > of a server name.
>> > After this commit all responses are counted as 'invalid' in the socket
>> > stats.
>>
>> I can confirm the issue with the provided configuration. The 'if (len ==
>> 0) {' check in line 1045 of the commit causes HAProxy to consider the
>> responses 'invalid':
>>
>>
>> https://github.com/haproxy/haproxy/commit/13a9232ebc63fdf357ffcf4fa7a1a5e77a1eac2b#diff-b2ddf457bc423779995466f7d8b9d147R1045-R1048
>>
>> Best regards
>> Tim Düsterhus
>>
>
From fa0b9563c40006be83c3fa1b52eeb3dbbb1b028b Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Wed, 19 Feb 2020 00:53:26 +0100
Subject: [PATCH 1/2] CLEANUP: remove obsolete comments

---
 src/dns.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index 86147a417..9e49babf1 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1030,7 +1030,6 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend,
 
 	/* now parsing additional records */
 	nb_saved_records = 0;
-	//TODO: check with Dinko for DNS poisoning
 	for (i = 0; i < dns_p->header.arcount; i++) {
 		if (reader >= bufend)
 			return DNS_RESP_INVALID;
@@ -1202,7 +1201,6 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend,
 				continue;
 			tmp_record->ar_item = dns_answer_record;
 			}
-			//TODO: there is a leak for now, since we don't clean up AR records
 
 			LIST_ADDQ(&dns_p->ar_list, &dns_answer_record->list);
 		}
-- 
2.17.1

From 96a09ab7538af2644c7247be2313fc0cc294949b Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Wed, 19 Feb 2020 01:08:51 +0100
Subject: [PATCH 2/2] BUG/MEDIUM: dns: improper parsing of additional records

---
 src/dns.c | 26 ++
 1 file changed, 6 insertions(+), 20 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index 9e49babf1..5550ab976 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1028,7 +1028,9 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend,
 	/* Save the number of records we really own */
 	dns_p->header.ancount = nb_saved_records;
 
-	/* now parsing additional records */
+	/* now parsing additional records for SRV queries only */
+	if (dns_query->type != DNS_RTYPE_SRV)
+		goto skip_parsing_additional_records;
 	nb_saved_records = 0;
 	for (i = 0; i < dns_p->header.arcount; i++) {
 		if (reader >= bufend)
@@ -1043,25 +1045,7 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend,
 
 		if (len == 0) {
 			pool_free(dns_answer_item_pool, dns_answer_record);
-			return DNS_RESP_INVALID;
-		}
-
-		/* Check if the current record dname is valid.  previous_dname
-		 * points either to queried dname or last CNAME target */
-		if (dns_query->type != DNS_RTYPE_SRV && memcmp(previous_dname, tmpname, len) != 0) {
-			pool_free(dns_answer_item_pool, dns_answer_record);
-			if (i == 0) {
-				/* First record, means a mismatch issue between
-				 * queried dname and dname found in the first
-				 * record */
-				return DNS_RESP_INVALID;
-			}
-			else {
-				/* If not the first record, this means we have a
-				 * CNAME resolution error */
-				return DNS_RESP_CNAME_ERROR;
-			}
-
+			continue;
 		}
 
 		memcpy(dns_answer_record->name, tmpname, len);
@@ -1206,6 +1190,8 @@ static int dns_validate_dns_response(unsigned char *resp, unsigned char *bufend,
 		}
 	} /* for i 0 to arcount */
 
+ skip_parsing_additional_records:
+
 	/* Save the number of records we really own */
 	dns_p->header.arcount = nb_saved_records;
 
-- 
2.17.1



Re: [PATCH] BUG/MINOR: ssl: Stop passing dynamic strings as format arguments

2020-02-19 Thread William Lallemand
On Wed, Feb 19, 2020 at 11:41:13AM +0100, Tim Duesterhus wrote:
> gcc complains rightfully:
> 
> src/ssl_sock.c: In function ‘ssl_load_global_issuers_from_path’:
> src/ssl_sock.c:9860:4: warning: format not a string literal and no format 
> arguments [-Wformat-security]
> ha_warning(warn);
> ^
> 
> Introduced in 70df7bf19cebd5593c0abb01923e6c9f72961da6.
> ---
>  src/ssl_sock.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/ssl_sock.c b/src/ssl_sock.c
> index e30bb8a6c..ade5ffc84 100644
> --- a/src/ssl_sock.c
> +++ b/src/ssl_sock.c
> @@ -9857,7 +9857,7 @@ static int ssl_load_global_issuers_from_path(char 
> **args, int section_type, stru
>   goto next;
>   ssl_load_global_issuer_from_BIO(in, fp, &warn);
>   if (warn) {
> - ha_warning(warn);
> + ha_warning("%s", warn);
>   free(warn);
>   warn = NULL;
>   }
> -- 
> 2.25.0
> 

Merged, thanks!

-- 
William Lallemand



Re: [PATCH] BUG/MINOR: ssl: Stop passing dynamic strings as format arguments

2020-02-19 Thread Илья Шипицин
It happens because we now run ERR=1 in CI builds.

On Wed, Feb 19, 2020, 3:41 PM Tim Duesterhus  wrote:

> gcc complains rightfully:
>
> src/ssl_sock.c: In function ‘ssl_load_global_issuers_from_path’:
> src/ssl_sock.c:9860:4: warning: format not a string literal and no format
> arguments [-Wformat-security]
> ha_warning(warn);
> ^
>
> Introduced in 70df7bf19cebd5593c0abb01923e6c9f72961da6.
> ---
>  src/ssl_sock.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/ssl_sock.c b/src/ssl_sock.c
> index e30bb8a6c..ade5ffc84 100644
> --- a/src/ssl_sock.c
> +++ b/src/ssl_sock.c
> @@ -9857,7 +9857,7 @@ static int ssl_load_global_issuers_from_path(char
> **args, int section_type, stru
> goto next;
> ssl_load_global_issuer_from_BIO(in, fp, &warn);
> if (warn) {
> -   ha_warning(warn);
> +   ha_warning("%s", warn);
> free(warn);
> warn = NULL;
> }
> --
> 2.25.0
>
>


[PATCH] BUG/MINOR: ssl: Stop passing dynamic strings as format arguments

2020-02-19 Thread Tim Duesterhus
gcc complains rightfully:

src/ssl_sock.c: In function ‘ssl_load_global_issuers_from_path’:
src/ssl_sock.c:9860:4: warning: format not a string literal and no format 
arguments [-Wformat-security]
ha_warning(warn);
^

Introduced in 70df7bf19cebd5593c0abb01923e6c9f72961da6.
---
 src/ssl_sock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index e30bb8a6c..ade5ffc84 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -9857,7 +9857,7 @@ static int ssl_load_global_issuers_from_path(char **args, 
int section_type, stru
goto next;
ssl_load_global_issuer_from_BIO(in, fp, &warn);
if (warn) {
-   ha_warning(warn);
+   ha_warning("%s", warn);
free(warn);
warn = NULL;
}
-- 
2.25.0
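
For context, a minimal standalone C sketch (not HAProxy code; hypothetical function names) of the bug class that -Wformat-security flags:

    #include <stdio.h>

    /* Dangerous: the caller-supplied string is used as the format itself.
     * If it ever contains conversion specifiers such as "%s" or "%n",
     * printf reads (or writes) through arbitrary stack values. */
    void warn_bad(const char *msg)
    {
            printf(msg);
    }

    /* Safe: the format is a literal; msg is only ever treated as data. */
    void warn_good(const char *msg)
    {
            printf("%s", msg);
    }

The patch applies the same literal-format pattern to ha_warning().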




Segfault on HAProxy 2.0.11 on HTX mode

2020-02-19 Thread Olivier D
Hello,

I would like to report a segfault on HAProxy 2.0.11; this version has been
running fine for two months, and this morning it started segfaulting over
and over.
Mitigation was performed by adding "no option http-use-htx" in the
'defaults' block.

I know it's not the latest version :) I'll update to 2.0.13 this evening.

Program terminated with signal 11, Segmentation fault.
#0  htx_sl_p2 (sl=<optimized out>) at include/common/htx.h:293
293 include/common/htx.h: No such file or directory.
(gdb) bt
#0  htx_sl_p2 (sl=<optimized out>) at include/common/htx.h:293
#1  htx_sl_req_uri (sl=<optimized out>) at include/common/htx.h:308
#2  assign_server (s=0xdc139f0) at src/backend.c:746
#3  0x00552114 in assign_server_and_queue (s=s@entry=0xdc139f0) at src/backend.c:977
#4  0x005556f8 in assign_server_and_queue (s=0xdc139f0) at src/backend.c:1772
#5  srv_redispatch_connect (s=s@entry=0xdc139f0) at src/backend.c:1705
#6  0x004c2cf8 in sess_prepare_conn_req (s=<optimized out>) at src/stream.c:1250
#7  process_stream (t=t@entry=0xd1db790, context=0xdc139f0, state=<optimized out>) at src/stream.c:2414
#8  0x00594865 in process_runnable_tasks () at src/task.c:412
#9  0x005038f7 in run_poll_loop () at src/haproxy.c:2520
#10 run_thread_poll_loop (data=data@entry=0x0) at src/haproxy.c:2641
#11 0x004653b0 in main (argc=<optimized out>, argv=0x7fff848ae498) at src/haproxy.c:3318

haproxy -vv:
HA-Proxy version 2.0.11 2019/12/11 - https://haproxy.org/
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wno-implicit-fallthrough
-Wno-stringop-overflow -Wtype-limits -Wshift-negative-value
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_THREAD=0 USE_STATIC_PCRE=1 USE_OPENSSL=1 USE_LUA=1
USE_ZLIB=1 USE_NS=

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE
-PCRE_JIT -PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED
-REGPARM +STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE
+LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4
-MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO -NS +DL +RT -DEVICEATLAS
-51DEGREES -WURFL -SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=20).
Built with OpenSSL version : OpenSSL 1.1.1d  10 Sep 2019
Running on OpenSSL version : OpenSSL 1.1.1d  10 Sep 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.43 2019-02-23
Running on PCRE version : 8.43 2019-02-23
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE     mux=H2
              h2 : mode=HTTP       side=FE        mux=H2
       <default> : mode=HTX        side=FE|BE     mux=H1
       <default> : mode=TCP|HTTP   side=FE|BE     mux=PASS

Available services : none

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace


Config file is very long ... If needed, a coredump + binary can be sent on
private.

Olivier