Re: Loading multiple TLS certificates

2019-05-13 Thread Gibson, Brian (IMS)
I personally use separate files to make it easier for my own sanity.
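
For reference, a minimal sketch of that approach (the frontend name, host names
and paths below are made up for illustration). As far as I know, HAProxy treats
each PEM file as one certificate chain plus its key, so six unrelated
certificates generally mean six files, each laid out as: leaf certificate, then
the shared intermediate(s), then that certificate's private key. The bind line
can then point at the directory, or list the files explicitly:

frontend fe_tls
    # /etc/haproxy/ssl/ would contain one PEM per host, e.g.
    #   app1.example.com.pem  (leaf cert + shared intermediate + matching key)
    #   app2.example.com.pem  (leaf cert + shared intermediate + matching key)
    #   ... six files in total; SNI picks the matching certificate per connection.
    bind :443 ssl crt /etc/haproxy/ssl/
    # Equivalent alternative: repeat "crt <file>" once per certificate instead
    # of pointing at a directory.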


From: Norman Branitsky 
Sent: Monday, May 13, 2019 4:57 PM
To: haproxy@formilux.org
Subject: Loading multiple TLS certificates

For the first time, I have a client that refused to let me use a wildcard 
certificate.
So I submitted 6 separate CSRs and now have 6 separate certificates and 6 
separate keys.
The intermediate certificates all appear to be the same.
So should I create 6 separate PEM files containing the certificate, the 
intermediates, and the key,
or should I create a single PEM file containing all 6 certificates, 6 keys, and 
1 intermediate file?

Norman Branitsky
Senior Cloud Architect
MicroPact Toronto
416.916.1752 (61752)






Loading multiple TLS certificates

2019-05-13 Thread Norman Branitsky
For the first time, I have a client that refused to let me use a wildcard 
certificate.
So I submitted 6 separate CSRs and now have 6 separate certificates and 6 
separate keys.
The intermediate certificates all appear to be the same.
So should I create 6 separate PEM files containing the certificate, the 
intermediates, and the key,
or should I create a single PEM file containing all 6 certificates, 6 keys, and 
1 intermediate file?

Norman Branitsky
Senior Cloud Architect
MicroPact Toronto
416.916.1752 (61752)


capture length and redirect

2019-05-13 Thread Jeff Abrahamson
I have an haproxy (1.6) instance that 301 redirects certain GET
requests. Unfortunately, the paths of those requests are being truncated
at 1024 bytes.

After much reading and experimenting, I strongly suspect the issue is
the length of what is captured by capture.req.uri, which is 1024. But
I've not succeeded in increasing that number. I'm quite open to pointers.

Here are the most relevant snippets from my haproxy.cfg:

global
    tune.bufsize 131072
    tune.maxrewrite 65536

defaults

frontend www-https
    bind 1.2.3.4:443 ssl crt /etc/haproxy/ssl/
    declare capture request len 16382
    declare capture response len 16382
    http-request capture capture.req.uri len 16382
    acl redirect_canonical ssl_fc_sni -i myname.example.com
    http-request redirect code 301 location https://www.example.com%[capture.req.uri] if redirect_canonical

There's clearly some cruft in there, as I've been experimenting trying
to make it go.  And I've dropped all that's clearly not relevant, like
the backends etc.

My test is to do this (which may be slightly off because, of course, my
domain is not example.com), but the point is that the Location: should
end with 1234567890 and it doesn't. The printf just generates
TESTTESTTEST... 256 times in a row.

$ curl -I --silent https://myname.example.com/$(printf 'TEST%.0s'
{1..256}; echo 1234567890) | sed -e 's/TEST//g;'
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://www.example.com/T
Connection: close

$

I've poked around in the code a bit, and I see that the default for
captures is 1024 (defaults.h:58), but it seems that it can be
overridden, just not how I'm trying to do it.

Any pointers?
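
One direction that might be worth testing (an untested sketch, and it assumes
the query string is not needed, since path excludes it) is to build the
Location from the path sample fetch rather than from the captured URI, so the
1024-byte capture buffer is not involved at all:

frontend www-https
    bind 1.2.3.4:443 ssl crt /etc/haproxy/ssl/
    acl redirect_canonical ssl_fc_sni -i myname.example.com
    # Expand the request path directly in the redirect rule instead of
    # going through capture.req.uri.
    http-request redirect code 301 location https://www.example.com%[path] if redirect_canonical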

-- 

Jeff Abrahamson
http://p27.eu/jeff/
http://transport-nantes.com/




Re: [PATCH v2 0/2] mworker: Fix memory leak of mworker_proc members

2019-05-13 Thread Илья Шипицин
I tested, everything is ok

Mon, 13 May 2019 at 20:24, Tim Duesterhus :

> Oops, my patch was incomplete, because I noticed that I missed one
> location after creating the commit and forgot to amend after making
> the necessary adjustments.
>
> So here's version 2 that fixes the leak on SIGTERM in addition to the
> leak on SIGUSR2.
>
> Best regards
> Tim Duesterhus
>
> Tim Duesterhus (2):
>   BUG/MINOR: mworker: Prevent potential use-after-free in
> mworker_env_to_proc_list
>   BUG/MINOR: mworker: Fix memory leak of mworker_proc members
>
>  include/proto/mworker.h |  2 ++
>  src/haproxy.c   |  3 ++-
>  src/mworker-prog.c  | 19 +--
>  src/mworker.c   | 36 +---
>  4 files changed, 34 insertions(+), 26 deletions(-)
>
> --
> 2.21.0
>
>
>


[PATCH v2 2/2] BUG/MINOR: mworker: Fix memory leak of mworker_proc members

2019-05-13 Thread Tim Duesterhus
The struct mworker_proc is not uniformly freed everywhere, sometimes leading
to leaks of the `id` string (and possibly the other strings).

Introduce a mworker_free_child function instead of duplicating the freeing
logic everywhere to prevent this kind of issue.

This leak was reported in issue #96.

It looks like the leaks have been introduced in commit 
9a1ee7ac31c56fd7d881adf2ef4659f336e50c9f,
which is specific to 2.0-dev. Backporting `mworker_free_child` might be
helpful to ease backporting other fixes, though.
---
 include/proto/mworker.h |  2 ++
 src/haproxy.c   |  3 ++-
 src/mworker-prog.c  | 19 +--
 src/mworker.c   | 30 ++
 4 files changed, 31 insertions(+), 23 deletions(-)

diff --git a/include/proto/mworker.h b/include/proto/mworker.h
index 86f09049f..05fe82af6 100644
--- a/include/proto/mworker.h
+++ b/include/proto/mworker.h
@@ -37,4 +37,6 @@ int mworker_ext_launch_all();
 
 void mworker_kill_max_reloads(int sig);
 
+void mworker_free_child(struct mworker_proc *);
+
 #endif /* PROTO_MWORKER_H_ */
diff --git a/src/haproxy.c b/src/haproxy.c
index a47b7dd32..b688375e2 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -3002,7 +3002,8 @@ int main(int argc, char **argv)
continue;
}
LIST_DEL(&child->list);
-   free(child);
+   mworker_free_child(child);
+   child = NULL;
}
}
 
diff --git a/src/mworker-prog.c b/src/mworker-prog.c
index fd8e66384..467ce9b24 100644
--- a/src/mworker-prog.c
+++ b/src/mworker-prog.c
@@ -69,24 +69,7 @@ int mworker_ext_launch_all()
 
 
LIST_DEL(&child->list);
-   if (child->command) {
-   int i;
-
-   for (i = 0; child->command[i]; i++) {
-   if (child->command[i]) {
-   free(child->command[i]);
-   child->command[i] = 
NULL;
-   }
-   }
-   free(child->command);
-   child->command = NULL;
-   }
-   if (child->id) {
-   free(child->id);
-   child->id = NULL;
-   }
-
-   free(child);
+   mworker_free_child(child);
child = NULL;
 
continue;
diff --git a/src/mworker.c b/src/mworker.c
index 491d40837..6b814870b 100644
--- a/src/mworker.c
+++ b/src/mworker.c
@@ -185,8 +185,7 @@ void mworker_env_to_proc_list()
if (child->pid) {
LIST_ADDQ(&proc_list, &child->list);
} else {
-   free(child->id);
-   free(child);
+   mworker_free_child(child);
}
}
 
@@ -244,7 +243,6 @@ void mworker_catch_sigchld(struct sig_handler *sh)
 {
int exitpid = -1;
int status = 0;
-   struct mworker_proc *child, *it;
int childfound;
 
 restart_wait:
@@ -253,6 +251,8 @@ restart_wait:
 
exitpid = waitpid(-1, &status, WNOHANG);
if (exitpid > 0) {
+   struct mworker_proc *child, *it;
+
if (WIFEXITED(status))
status = WEXITSTATUS(status);
else if (WIFSIGNALED(status))
@@ -300,7 +300,8 @@ restart_wait:
ha_warning("Former program '%s' (%d) 
exited with code %d (%s)\n", child->id, exitpid, status, (status >= 128) ? 
strsignal(status - 128) : "Exit");
}
}
-   free(child);
+   mworker_free_child(child);
+   child = NULL;
}
 
/* do it again to check if it was the last worker */
@@ -553,6 +554,27 @@ out:
return err_code;
 }
 
+void mworker_free_child(struct mworker_proc *child) {
+   if (child == NULL) return;
+
+   if (child->command) {
+   int i;
+
+   for (i = 0; child->command[i]; i++) {
+   if (child->command[i]) {
+   free(child->command[i]);
+   child->command[i] = NULL;
+   }
+   }
+   free(child->command);
+   child->command = NULL;
+   }
+   if (child->id) {
+   free(child->id);
+   child->id = NULL;
+   }
+   free(child);
+}

[PATCH v2 0/2] mworker: Fix memory leak of mworker_proc members

2019-05-13 Thread Tim Duesterhus
Oops, my patch was incomplete, because I noticed that I missed one
location after creating the commit and forgot to amend after making
the necessary adjustments.

So here's version 2 that fixes the leak on SIGTERM in addition to the
leak on SIGUSR2.

Best regards
Tim Duesterhus

Tim Duesterhus (2):
  BUG/MINOR: mworker: Prevent potential use-after-free in
mworker_env_to_proc_list
  BUG/MINOR: mworker: Fix memory leak of mworker_proc members

 include/proto/mworker.h |  2 ++
 src/haproxy.c   |  3 ++-
 src/mworker-prog.c  | 19 +--
 src/mworker.c   | 36 +---
 4 files changed, 34 insertions(+), 26 deletions(-)

-- 
2.21.0




[PATCH v2 1/2] BUG/MINOR: mworker: Prevent potential use-after-free in mworker_env_to_proc_list

2019-05-13 Thread Tim Duesterhus
This was found by reading the code while investigating issue #96 and not
verified with any tools:

If `child->pid` is falsy `child` will be freed instead of being added to
`proc_list`. The setting of `PROC_O_LEAVING` happens unconditionally after
this check.

Fix the issue by moving the setting of the LEAVING option to right behind the
allocation of `child`.

This bug was introduced in 4528611ed66d8bfa344782f6c7f1e7151cf48bf5, which
is specific to the 2.0-dev branch. No backport required.
---
 src/mworker.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/mworker.c b/src/mworker.c
index 8df748d3f..491d40837 100644
--- a/src/mworker.c
+++ b/src/mworker.c
@@ -147,6 +147,9 @@ void mworker_env_to_proc_list()
 
child = calloc(1, sizeof(*child));
 
+   /* this is a process inherited from a reload that should be 
leaving */
+   child->options |= PROC_O_LEAVING;
+
while ((subtoken = strtok_r(token, ";", ))) {
 
token = NULL;
@@ -184,10 +187,7 @@ void mworker_env_to_proc_list()
} else {
free(child->id);
free(child);
-
}
-   /* this is a process inherited from a reload that should be 
leaving */
-   child->options |= PROC_O_LEAVING;
}
 
unsetenv("HAPROXY_PROCESSES");
-- 
2.21.0




[ANNOUNCE] haproxy-1.9.8

2019-05-13 Thread Willy Tarreau
Hi,

HAProxy 1.9.8 was released on 2019/05/13. It added 53 new commits
after version 1.9.7.

The most important bugs fall into 3 main categories here :
  - a possible crash in multi-threads when issuing "show map" or
"show acl" on the CLI in parallel to "clear map" or "clear acl" on
another CLI session ;

  - an incorrect handling in H2 of the HTX end-of-message mark after
the response trailers which can lead to an endless loop between
the caller seeing there's still something to send and the callee
seeing it cannot send this block alone. This one gave a few of us
some difficulties and helped us see how we can improve HTX for
future versions by making certain cases more straightforward.
Thanks to Patrick Hemmer for providing backtraces exhibiting the
issue.

  - multiple incorrect list handling in the H2 mux resulting in endless
loops for some users with large objects. The assumptions that once
were granted in this code evolved several times during 1.9-dev and
have led to situations where the presence of an element in the send
list was not guarded anymore by some previous conditions. Multiple
iterations of fixes were only pushing the problem forward to the
next point. Now that these issues were addressed, we've figured how
certain parts can be simplified to significantly reduce the
probability that similar issues appear in the future. We owe a big
thanks to Maciej Zdeb for testing countless patches and reporting
detailed traces, and even core dumps.

There were some other annoying issues among which :
  - occasionally a 100% CPU condition (but traffic not interrupted) on
certain fragmented H2 HEADER frames. Thanks go to Yves Lafon for
providing cores and traces.

  - missing locks on source port ranges occasionally causing connections
running on different threads to pick the same outgoing source port,
resulting in connection failures.

  - a missing lock on the server slowstart code causing deadlocks on the
roundrobin algorithm when using threads and slowstart.

The rest is either a bit less important or became confusing to me after
having dealt with the ones above, to be honest.

I'm quite confident this one works way better than previous ones, and at
the same time that someone will soon raise their hand saying "I think I
have a problem". Having said that, with 305 bugs fixed since 1.9.0 was
released, you have no valid reason for still using an earlier release
now that this one is available.

I would generally like to thank all the early adopters who are running
on 1.9, because they are the ones going through all the problems above
and taking the risks for others, and thanks to them we can expect a much
calmer 2.0. So please continue to report any issue you'll meet!

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Sources  : http://www.haproxy.org/download/1.9/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
   Changelog: http://www.haproxy.org/download/1.9/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Chris Packham (1):
  BUILD: threads: Add __ha_cas_dw fallback for single threaded builds

Christopher Faulet (13):
  BUG/MINOR: http: Call stream_inc_be_http_req_ctr() only one time per 
request
  MINOR: spoe: Use the sample context to pass frag_ctx info during encoding
  MINOR: examples: Use right locale for the last changelog date in 
haproxy.spec
  BUG/MEDIUM: listener: Fix how unlimited number of consecutive accepts is 
handled
  MINOR: config: Test validity of tune.maxaccept during the config parsing
  CLEANUP: config: Don't alter listener->maxaccept when nbproc is set to 1
  BUG/MEDIUM: spoe: Be sure the sample is found before setting its context
  BUG/MINOR: mux-h1: Fix the parsing of trailers
  BUG/MINOR: htx: Never transfer more than expected in htx_xfer_blks()
  MINOR: htx: Split on DATA blocks only when blocks are moved to an HTX 
message
  BUG/MINOR: stream: Attach the read side on the response as soon as 
possible
  BUG/MEDIUM: http: Use pointer to the begining of input to parse message 
headers
  MINOR: spoe: Set the argument chunk size to 0 when SPOE variables are 
checked

Dragan Dosen (4):
  BUG/MINOR: haproxy: fix rule->file memory leak
  BUG/MINOR: log: properly free memory on logformat parse error and deinit()
  BUG/MINOR: checks: free memory allocated for tasklets
  BUG/MEDIUM: pattern: fix memory leak in regex pattern functions

Ilya Shipitsin (1):
  BUG/MEDIUM: servers: fix typo "src" instead of "srv"

Kevin Zhu (1):
  BUG/MEDIUM: spoe: arg len encoded in 

Re: HAProxy 1.9.6 unresponsive

2019-05-13 Thread Willy Tarreau
Hi Patrick,

On Mon, May 13, 2019 at 08:18:21AM -0400, Patrick Hemmer wrote:
> There's been mention of releasing 1.9.8. Will that release contain a fix for
> the issue reported in this thread?

Sorry, I mixed it with the other ones speaking about 100% CPU. I've
re-read the whole thread and yours was related to the trailers in HTX
mode which was indeed addressed in 1.9 by commit ec4ae19eb ("BUG/MEDIUM:
mux-h2/htx: never wait for EOM when processing trailers").

I've been trying to release 1.9.8 this week-end then today but got
distracted by reviews, last minute fixes and carefully checking that we
don't need any new patch. Now everything looks OK, trying to get back
to this now.

Willy



Re: QAT intermittent healthcheck errors

2019-05-13 Thread Marcin Deranek

Hi Emeric,

On 5/13/19 11:06 AM, Emeric Brun wrote:


Just to let you know: I'm waiting for feedback from Intel's team, and I will 
receive QAT 1.7 compliant hardware soon to make some tests here.


Thank you for the update.
Regards,

Marcin Deranek



[PATCH 2/2] BUG/MINOR: mworker: Fix memory leak of mworker_proc members

2019-05-13 Thread Tim Duesterhus
The struct mworker_proc is not uniformly freed everywhere, sometimes leading
to leaks of the `id` string (and possibly the other strings).

Introduce a mworker_free_child function instead of duplicating the freeing
logic everywhere to prevent this kind of issue.

This leak was reported in issue #96.

It looks like the leaks have been introduced in commit 
9a1ee7ac31c56fd7d881adf2ef4659f336e50c9f,
which is specific to 2.0-dev. Backporting `mworker_free_child` might be
helpful to ease backporting other fixes, though.
---
 include/proto/mworker.h |  2 ++
 src/haproxy.c   |  3 ++-
 src/mworker-prog.c  | 19 +--
 src/mworker.c   | 24 ++--
 4 files changed, 27 insertions(+), 21 deletions(-)

diff --git a/include/proto/mworker.h b/include/proto/mworker.h
index 86f09049f..05fe82af6 100644
--- a/include/proto/mworker.h
+++ b/include/proto/mworker.h
@@ -37,4 +37,6 @@ int mworker_ext_launch_all();
 
 void mworker_kill_max_reloads(int sig);
 
+void mworker_free_child(struct mworker_proc *);
+
 #endif /* PROTO_MWORKER_H_ */
diff --git a/src/haproxy.c b/src/haproxy.c
index a47b7dd32..b688375e2 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -3002,7 +3002,8 @@ int main(int argc, char **argv)
continue;
}
LIST_DEL(&child->list);
-   free(child);
+   mworker_free_child(child);
+   child = NULL;
}
}
 
diff --git a/src/mworker-prog.c b/src/mworker-prog.c
index fd8e66384..467ce9b24 100644
--- a/src/mworker-prog.c
+++ b/src/mworker-prog.c
@@ -69,24 +69,7 @@ int mworker_ext_launch_all()
 
 
LIST_DEL(&child->list);
-   if (child->command) {
-   int i;
-
-   for (i = 0; child->command[i]; i++) {
-   if (child->command[i]) {
-   free(child->command[i]);
-   child->command[i] = 
NULL;
-   }
-   }
-   free(child->command);
-   child->command = NULL;
-   }
-   if (child->id) {
-   free(child->id);
-   child->id = NULL;
-   }
-
-   free(child);
+   mworker_free_child(child);
child = NULL;
 
continue;
diff --git a/src/mworker.c b/src/mworker.c
index 491d40837..8debbdc36 100644
--- a/src/mworker.c
+++ b/src/mworker.c
@@ -185,8 +185,7 @@ void mworker_env_to_proc_list()
if (child->pid) {
LIST_ADDQ(&proc_list, &child->list);
} else {
-   free(child->id);
-   free(child);
+   mworker_free_child(child);
}
}
 
@@ -553,6 +552,27 @@ out:
return err_code;
 }
 
+void mworker_free_child(struct mworker_proc *child) {
+   if (child == NULL) return;
+
+   if (child->command) {
+   int i;
+
+   for (i = 0; child->command[i]; i++) {
+   if (child->command[i]) {
+   free(child->command[i]);
+   child->command[i] = NULL;
+   }
+   }
+   free(child->command);
+   child->command = NULL;
+   }
+   if (child->id) {
+   free(child->id);
+   child->id = NULL;
+   }
+   free(child);
+}
 
 static struct cfg_kw_list mworker_kws = {{ }, {
{ CFG_GLOBAL, "mworker-max-reloads", mworker_parse_global_max_reloads },
-- 
2.21.0




[PATCH 1/2] BUG/MINOR: mworker: Prevent potential use-after-free in mworker_env_to_proc_list

2019-05-13 Thread Tim Duesterhus
This was found by reading the code while investigating issue #96 and not
verified with any tools:

If `child->pid` is falsy `child` will be freed instead of being added to
`proc_list`. The setting of `PROC_O_LEAVING` happens unconditionally after
this check.

Fix the issue by moving the setting of the LEAVING option to right behind the
allocation of `child`.

This bug was introduced in 4528611ed66d8bfa344782f6c7f1e7151cf48bf5, which
is specific to the 2.0-dev branch. No backport required.
---
 src/mworker.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/mworker.c b/src/mworker.c
index 8df748d3f..491d40837 100644
--- a/src/mworker.c
+++ b/src/mworker.c
@@ -147,6 +147,9 @@ void mworker_env_to_proc_list()
 
child = calloc(1, sizeof(*child));
 
+   /* this is a process inherited from a reload that should be 
leaving */
+   child->options |= PROC_O_LEAVING;
+
while ((subtoken = strtok_r(token, ";", ))) {
 
token = NULL;
@@ -184,10 +187,7 @@ void mworker_env_to_proc_list()
} else {
free(child->id);
free(child);
-
}
-   /* this is a process inherited from a reload that should be 
leaving */
-   child->options |= PROC_O_LEAVING;
}
 
unsetenv("HAPROXY_PROCESSES");
-- 
2.21.0




Re: HAProxy 1.9.6 unresponsive

2019-05-13 Thread Patrick Hemmer




*From:* Willy Tarreau [mailto:w...@1wt.eu]
*Sent:* Saturday, May 11, 2019, 06:10 EDT
*To:* Patrick Hemmer 
*Cc:* haproxy@formilux.org
*Subject:* HAProxy 1.9.6 unresponsive


Hi Patrick,

On Fri, May 10, 2019 at 09:17:25AM -0400, Patrick Hemmer wrote:

So I see a few updates on some of the other 100% CPU usage threads, and that
some fixes have been pushed. Are any of those in relation to this issue? Or
is this one still outstanding?

Apparently we've pulled a long piece of string and uncovered a series of
such bugs. It's likely that different persons have been affected by
different bugs. We still have the issue Maciej is experiencing that I'd
really like to nail down, given the last occurrence doesn't seem to make
sense as the code looks right after Olivier's fix.

Thanks,
Willy


Thanks, but I'm unsure if that means the issue I reported is fixed, or 
if other related issues are fixed and this one is still outstanding.
There's been mention of releasing 1.9.8. Will that release contain a fix 
for the issue reported in this thread?


-Patrick


Re: [External] Re: QAT intermittent healthcheck errors

2019-05-13 Thread Emeric Brun
Hi Marcin,

> 
> Thank you Marcin. It shows that haproxy is waiting for an event on all those 
> fds because crypto jobs were launched on the engine 
> and we can't free the session until the end of those jobs (it would result in a 
> segfault).
> 
> So the processes are stuck, unable to free the session because the engine 
> doesn't signal the end of those jobs via the async fd.
> 
> I didn't reproduce this issue on QAT 1.5, so I will try to discuss it with 
> the Intel guys to know why there is this behavior change in v1.7
> and what we can do.
> 
> R,
> Emeric
> 

Just to let you know: I'm waiting for feedback from Intel's team, and I will 
receive QAT 1.7 compliant hardware soon to make some tests here.

R,
Emeric




Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-13 Thread Maciej Zdeb
Ok, I'll wait for an update from you! :)

Mon, 13 May 2019 at 08:03, Willy Tarreau  wrote:

> Hi Maciej,
>
> On Mon, May 13, 2019 at 07:21:59AM +0200, Maciej Zdeb wrote:
> > Hi,
> >
> > I'm not observing any issues, so I think it's fixed. :)
> >
> > Willy, Olivier thank you very much!
>
> Great, thanks. I'm going to issue 1.9.8 with the patch I sent you then.
> However after discussing about it with Olivier, we came to the conclusion
> that it is not the cleanest solution, as it addresses the consequence of
> a problem (being flow controlled and being in the send list) instead of
> the cause.  Olivier proposed me a different one that I'd very much
> appreciate you to test after the release; I don't want to take any new
> risk for 1.9.8 for now. We'll keep you updated.
>
> Cheers,
> Willy
>


Re: [PATCH] BUG/MINOR: Fix memory leak in cfg_parse_peers

2019-05-13 Thread Willy Tarreau
On Sun, May 12, 2019 at 10:54:50PM +0200, Tim Duesterhus wrote:
> cfg_parse_peers previously leaked the contents of the `kws` string,
> as it was unconditionally filled using bind_dump_kws, but only used
> (and freed) within the error case.
(...)
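
For illustration, a minimal self-contained sketch of the leak pattern being
described (hypothetical, simplified code, not the actual cfg_parse_peers or
bind_dump_kws implementation): a diagnostic string is built unconditionally,
but only the error path ever frees it.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for a parser that builds a keyword list up front
 * but only uses (and frees) it when reporting an error. */
static int parse_section(int fail)
{
    char *kws = strdup("keyword1, keyword2");   /* always allocated */

    if (fail) {
        fprintf(stderr, "unknown keyword, known keywords are: %s\n", kws);
        free(kws);                               /* freed only on error */
        return -1;
    }
    return 0;                                    /* success path leaks kws */
}

int main(void)
{
    parse_section(0);   /* leaks the string */
    parse_section(1);   /* no leak */
    return 0;
}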

Applied, thanks Tim!
Willy



Re: [1.9 HEAD] HAProxy using 100% CPU

2019-05-13 Thread Willy Tarreau
Hi Maciej,

On Mon, May 13, 2019 at 07:21:59AM +0200, Maciej Zdeb wrote:
> Hi,
> 
> I'm not observing any issues, so I think it's fixed. :)
> 
> Willy, Olivier thank you very much!

Great, thanks. I'm going to issue 1.9.8 with the patch I sent you then.
However after discussing about it with Olivier, we came to the conclusion
that it is not the cleanest solution, as it addresses the consequence of
a problem (being flow controlled and being in the send list) instead of
the cause.  Olivier proposed me a different one that I'd very much
appreciate you to test after the release; I don't want to take any new
risk for 1.9.8 for now. We'll keep you updated.

Cheers,
Willy