On Wed, May 23, 2018 at 08:45:04PM +0200, William Dauchy wrote:
> More details which could help understand what is going on:
>
> ps output:
>
> root 15928 0.3 0.0 255216 185268 ? Ss May21 10:11
> /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haprox
Hi William,
Thank you for your reply.
On Thu, May 24, 2018 at 12:01:38PM +0200, William Lallemand wrote:
> I managed to reproduce something similar with the 1.8.8 version. It looks like
> letting a socat connected to the socket helps.
>
> I'm looking into the code to see what's happening.
Hello,
I am trying to understand a possible issue we have regarding haproxy (seamless)
reloads.
I am using haproxy v1.8.9 with the following config (using nbthread):
global
    log 127.0.0.1 local0 info
    maxconn 262144
    user haproxy
    group haproxy
    nbproc 1
    daemon
On Wed, May 23, 2018 at 06:49:09PM +0200, William Dauchy wrote:
> We do frequent reloads (approximatively every 10s).
> After a while some processes remains alive and seem to never exit (waited >24
> hours). While stracing them, some of them are still handling traffic and
> doi
On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> That's great news!
>
> Here's the new patches. It shouldn't change anything to the fix, it only
> changes the sigprocmask to pthread_sigmask.
thanks, I attached the backport for 1.8 and started a new test with
them.
Feel free to
Hello William L.,
On Thu, Jun 07, 2018 at 11:50:45AM +0200, William Lallemand wrote:
> Sorry for the late reply, I manage to reproduce and fix what seams to be the
> bug.
> The signal management was not handled correctly with threads.
> Could you try those patches and see if it fixes the
Hello William L,
On Fri, Jun 08, 2018 at 04:31:30PM +0200, William Lallemand wrote:
> That's great news!
>
> Here's the new patches. It shouldn't change anything to the fix, it only
> changes the sigprocmask to pthread_sigmask.
In fact, I now have a different but similar issue.
root 18547
On Tue, Jun 12, 2018 at 04:33:43PM +0200, William Lallemand wrote:
> Those processes are still using a lot of CPU...
> Are they still delivering traffic?
they don't seem to handle any traffic (at least I can't see it through strace),
but that's the main difference here: they're using lots of CPU.
> >
Hello,
Thanks for your answer. Here is the information requested.
On Fri, Jun 15, 2018 at 11:22 AM William Lallemand
wrote:
> - haproxy -vv
HA-Proxy version 1.8.9-83616ec 2018/05/18
Copyright 2000-2018 Willy Tarreau
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
Hello Dave,
On Tue, May 29, 2018 at 5:55 PM, Dave Chiluk wrote:
> We've battled the same issue with our haproxys. We root caused it to slow
> dns lookup times while parsing the config was causing haproxy config parsing
> to take so long that we were attempting to reload again before the
On Wed, May 30, 2018 at 5:29 PM, William Lallemand
wrote:
> I can reproduce the same situation there, however I disabled the seamless
> reload. When doing a -USR1 & strace on an remaining worker, I can see that the
> the signal is not blocked, and that it's still polling
good news!
>
Hello Tim,
On Tue, May 29, 2018 at 9:33 PM, Tim Düsterhus wrote:
> Run systemctl status haproxy. It shows the status:
>
>> [timwolla@/s/haproxy (maxrewrite-warn)]sudo systemctl status haproxy
>> ● haproxy.service - HAProxy Load Balancer
>> Loaded: loaded (/lib/systemd/system/haproxy.service;
Hello Stephane,
On Tue, May 29, 2018 at 12:38 PM, Stéphane Cottin
wrote:
> This simple test config file :
>
> resolvers mydns
> nameserver ns1 10.0.0.1:53
> listen mysql
> bind unix@/tmp/mysql.sock user www-data mode 600
> option mysql-check user haproxy post-41
> server MYSQL
Hello William,
Sorry for the late answer.
> Are the problematical workers leaving when you reload a second time?
no, they seem to stay for a long time (forever?)
> Did you try to kill -USR1 the worker ? It should exits with "Former worker
> $PID
> exited with code 0" on stderr.
> If not,
Hello William L.,
I did some more testing:
I simplified my config, removing the multi-binding part and cpu-map.
Conclusion: I have this issue when I activate the nbthread feature
(meaning no problem without it).
I tried to kill -USR1 the failing worker, but it remains.
Here are the Sig* from status
Hello Christopher,
Thanks for the followup patch.
On Wed, Jun 20, 2018 at 04:42:58PM +0200, Christopher Faulet wrote:
> Hum, ok, forget the previous patch. Here is a second try. It solves the same
> bug using another way. In this patch, all threads must enter in the sync
> point to exit. I hope
Hello William,
Not much progress on my side, apart from the fact I forgot to mention
where the processes are now stuck using all the CPU, in
src/hathreads.c:112:
    while (*barrier != all_threads_mask)
        pl_cpu_relax();
--
William
On Tue, Jun 19, 2018 at 4:30 PM William Lallemand
wrote:
> That's interesting, we can suppose that this bug is not related anymore to the
> signal problem we had previously.
> Looks like it's blocking in the thread sync point.
> Are you able to do a backtrace with gdb? that could help a lot.
On Wed, Jan 17, 2018 at 05:43:02PM +0100, Olivier Houchard wrote:
> Ok you got me convinced, the attached patch don't check for duplicate
> cookies for disabled server, until we enable them.
I also prefer this approach.
> From cfc333d2b04686a3c488fdcb495cba64dbfec14b Mon Sep 17 00:00:00 2001
>
backported to 1.8.
Reported-by: Pierre Cheynier <p.cheyn...@criteo.com>
Tested-by: William Dauchy <w.dau...@criteo.com>
> ---
> src/server.c | 50 +++---
> 1 file changed, 35 insertions(+), 15 deletions(-)
>
> diff --git a/
Hello,
I'm reviving this old thread since we are hitting the issue again with:
# haproxy -vv
HA-Proxy version 1.8.4-1deb90d 2018/02/08
I am trying to see whether I can reproduce it easily.
Best,
--
William
Hello Christopher,
Thank you for the quick answer and the patch.
On Wed, Jun 20, 2018 at 11:32 AM Christopher Faulet wrote:
> Here is a patch to avoid a thread to exit its polling loop while others
> are waiting in the sync point. It is a theoretical patch because I was
> not able to reproduce
On Thu, Jun 21, 2018 at 5:03 PM William Lallemand
wrote:
> Maybe one client was still connected on a frontend (including the stats
> socket).
> The process first unbind the listeners, and then wait for all clients to
> leave.
> It's difficult to see what's going on since the stats socket is
Hi Christopher,
A quick followup from this morning.
On Thu, Jun 21, 2018 at 10:41 AM William Dauchy wrote:
> it seems better now, but not completely gone, in a way, I think we now
> have a new issue.
> this morning, on one test machine I have a process which remains polling
>
On Thu, Jun 21, 2018 at 5:21 PM William Lallemand
wrote:
> Once you are sure this is not a bug and that a client is still connected, you
> could use the keyword 'hard-stop-after' to force a hard stop.
After double checking some cases, indeed there are still a few
remaining established
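For readers of the archive: the 'hard-stop-after' keyword mentioned here goes in the global section. An illustrative fragment (the value is an example, not a recommendation):

```
global
    # force remaining workers to exit at most 30s after a soft-stop or
    # reload, even if some clients are still connected
    hard-stop-after 30s
```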
Hi Willy,
Thank your for your detailed answer.
On Mon, Mar 19, 2018 at 07:28:16PM +0100, Willy Tarreau wrote:
> Threading was clearly released with an experimental status, just like
> H2, because we knew we'd be facing some post-release issues in these
> two areas that are hard to get 100% right
Hi Christopher,
On Thu, Mar 15, 2018 at 04:05:04PM +0100, Christopher Faulet wrote:
> From 91b1349b6a1a64d43cc41e8546ff1d1ce17a8e14 Mon Sep 17 00:00:00 2001
> From: Christopher Faulet
> Date: Wed, 14 Mar 2018 16:18:06 +0100
> Subject: [PATCH] BUG/MAJOR: threads/queue: Fix
On Mon, Mar 19, 2018 at 08:41:16PM +0100, Willy Tarreau wrote:
> For me, "experimental" simply means "we did our best to ensure it works
> but we're realist and know that bug-free doesn't exist, so a risk remains
> that a bug will be hard enough to fix so as to force you to disable the
> feature
Hello,
On Wed, Mar 21, 2018 at 5:09 AM, Willy Tarreau wrote:
> OK patch now applied, thanks. Since you added a new hash algo, it could
> be nice to create a "crc32c" converter to expose it to the configuration
> as well. Could you please take a look at crc32 and do the same ?
We
On Fri, Jun 22, 2018 at 2:28 PM William Lallemand
wrote:
> The seamless reload only transfers the listeners, the remaining connections
> are
> handled in the leaving process. It would be complicated to handle them in a
> new
> process with a different configuration.
Thanks for the
Hi Willy,
On Thu, Apr 11, 2019 at 11:34:52AM +0200, Willy Tarreau wrote:
> Unfortunately this one looks different. It dies when it is stopping
> (see deinit, unbind*, ...), are you sure it's not the previous process
> when you're performing a reload ? Note that it *could* possibly still
> be the
Hello Willy,
On Thu, Apr 11, 2019 at 11:34:52AM +0200, Willy Tarreau wrote:
> With this said, we've got no negative feedback on the patch above after
> one month and a half, which likely is a good indication that we should
> now backport it (carefully as mentionned in the commit message).
Based
Hello,
Using haproxy 2.0 we are seeing logs with connection numbers while reloading:
Proxy stopped (FE: 0 conns, BE: 549563 conns).
while we have in our configuration:
global
    maxconn 262144
defaults
    maxconn 262134
I was wondering whether this could be considered as expected, to have a
backend
Hello,
Using haproxy v1.9.8 + 3d4052131ba3abe63d9cde213d6b89763f5bf828 we are
seeing unexplained behavior after a reload, where ~5-7 cores at 100% are
used out of the 16 configured.
It's hard to know whether it is still handling traffic since the unix
socket is not reachable anymore.
Hello,
We are using "rate limiting" config such as the one mentioned below:
backend foo
    stick-table type ip size 200k expire 30s store conn_rate(60s)
    tcp-request content track-sc1 src
    http-request deny deny_status 429 if { src_conn_rate ge 42000 }
While giving a try to v2.0 we are
upposed the stick-table was
> always
> stored in struct arg at parsing time. This is never the case with the usage of
> "if/unless" conditions in stick-table declared as backends. In this case,
> this is
> the name of the proxy which must be considered as the stick-table
On Wed, May 15, 2019 at 2:10 PM Olivier Houchard wrote:
> We usually only add options in ssl-default-bind-options that can later be
> overriden on a per-bind basis, but right now, there's no option to disable
> 0RTT.
Thanks for the explanation!
--
William
Hello Olivier,
In another subject related to 0rtt, I was wondering why it was not
available in ssl-default-bind-options?
Thanks,
--
William
Hi,
On Fri, Apr 19, 2019 at 6:26 AM Willy Tarreau wrote:
> Good catch but in my opinion we should instead fix the called function
> (h1_skip_chunk_crlf). h1_skip_chunk_size() properly deals with the case
> where start==stop, but h1_skip_chunk_crlf() says "If not enough data are
> available, the
Hi Willy,
On Sat, Apr 13, 2019 at 08:01:53AM +0200, Willy Tarreau wrote:
> Did you issue one of the commands that tries to be alone, thus "show sess"
> or "show fd" ? It's possible that you were having only one thread blocked
> initially and that with the command that was waiting for all threads
Hello,
We are triggering a segfault on the last HEAD of haproxy-1.9 tree, last
commit being
1e0fd266db3e503783ff623faabcb1dfe211cb89 BUG/MINOR: mworker: disable busy
polling in the master process
backtrace:
Thread 1 (Thread 0x7f73aeffd700 (LWP 13044)):
#0 h1_skip_chunk_crlf (stop=0, start=0,
On Fri, Apr 19, 2019 at 01:30:07PM +0200, Willy Tarreau wrote:
> Note that with all the scheduling issues we've fixed over the last
> days, there are multiple candidates which could cause this. Another
> one was the lack of effect of the nice parameter which is normally
> set on the CLI but the
On Sat, May 4, 2019 at 1:58 PM Vincent Bernat wrote:
> The more changes, the less likely the release team will accept the
> change. Assuming we can only make one proposition (which is not true),
> what would you (as upstream) try? 1.8.19, one bug, all major bugs, even
> more bugs, or 1.8.20?
Not
Hi Willy,
On Sun, May 05, 2019 at 07:07:21AM +0200, Willy Tarreau wrote:
> Hi William,
> > we got a similar issue with last v1.9.7+HEAD
> At first I thought you were again on a deadlock that I couldn't spot, due
> to the fact that nearly all threads were waiting on the LB lock, and I
> couldn't
Hi Willy,
> Note that with all the scheduling issues we've fixed over the last
> days, there are multiple candidates which could cause this. Another
> one was the lack of effect of the nice parameter which is normally
> set on the CLI but the lack of which could result in socat timing
> out
On Thu, Jul 04, 2019 at 01:36:48PM +0200, Olivier Houchard wrote:
> There's indeed no reason to disallow tfo on default-server, it was an
> oversight, it is fixed in master with commit
> 8d82db70a5f32427fb592a947d2a612bcb01d95e, and should be backported in 2.0
> before the next release.
Thanks!
On Wed, Jun 26, 2019 at 11:29:47AM +1000, Igor Cicimov wrote:
> Those maxconn values are per frontend so if your backend is referenced by
> two frontends you might end up with a limit of 2 x maxconn on the backend.
> Hence it is recommended to set maxconn per server too to protect from
> situation
Hello,
On haproxy v2.0.x, while using tfo option in default-server:
default-server inter 5s fastinter 1s fall 3 slowstart 20s observe layer7
error-limit 5 on-error fail-check pool-purge-delay 10s tfo
we are getting:
'default-server inter' : 'tfo' option is not accepted in default-server
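Until the fix referenced later in this thread reaches a release, a possible workaround is to keep 'tfo' off the default-server line and repeat it on each server line, where it is accepted. An illustrative fragment (addresses and names are examples):

```
backend app
    default-server inter 5s fastinter 1s fall 3 slowstart 20s
    # 'tfo' rejected in default-server before the fix, so set per server
    server s1 192.0.2.10:443 tfo
    server s2 192.0.2.11:443 tfo
```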
On Wed, Jun 19, 2019 at 05:51:19PM +0200, William Dauchy wrote:
> Using haproxy v1.9.8 + 3d4052131ba3abe63d9cde213d6b89763f5bf828 we are
> seeing unexplained behavior after a reload where ~5/7 cores at 100% are
> used over the 16 possible in the config.
> It's hard to know whether
Hello,
Probably a useless report as I don't have a lot of information to provide,
but we faced an issue where the unix socket was unresponsive, with the
processes using all cpu (1600% with 16 nbthreads)
I only have the backtrace of the main process but lost the backtrace of
all threads (nbthread
Hello,
We are seeing a quite regular segfault when haproxy v1.9 joins our cluster;
it almost immediately crashes with:
#0 0x55b66f75c825 in do_unbind_listener
(listener=listener@entry=0x55b67206dcb0, do_close=do_close@entry=1) at
src/listener.c:328
328
Hi Willy,
Thank you for the quick answer.
On Sun, Nov 17, 2019 at 11:12:29AM +0100, Willy Tarreau wrote:
> That's strange, I was absolutely certain it was done after the setuid
> stuff precisely for the reason you explain above. Do you think the
> sequence differs in master-worker mode maybe ?
this patch moves the whole set-dumpable logic before the polling code in
order to catch all possible cases where we could have changed the
uid/gid. It however does not cover the possible segfault at startup.
this patch should be backported in 2.0.
Signed-off-by: William Dauchy
---
src/haproxy.c
Hi Willy,
As discussed, here in my v2 where I simply moved the whole set-dumpable
logic. While being here I took the liberty to mutualise the setuid/gid
code in the place to avoid confusion in the future. I let you decide
whether this is something acceptable.
Thanks,
William Dauchy (2):
BUG
since the introduction of mworker, the setuid/setgid was duplicated in
two places; try to improve that by creating a dedicated function.
this patch does not introduce any functional change.
Signed-off-by: William Dauchy
---
src/haproxy.c | 63
thought it was sufficient to simply avoid a given type
of metric.
By default, everything is exported as before.
Signed-off-by: William Dauchy
---
contrib/prometheus-exporter/README| 7 ++
.../prometheus-exporter/service-prometheus.c | 100 ++
2 files changed, 87
this patch duplicates the prctl part done earlier in the init, but after
the uid/gid changes. There might be a cleaner way to do it and avoid
code duplication.
this patch should be backported in 2.0.
Signed-off-by: William Dauchy
---
src/haproxy.c | 9 +
1 file changed, 9 insertions
d and scraping time. This
> is
> particularly true for huge configuration with thousands of backends and
> servers.
> Also note that this configuration was possible on the previous official
> haproxy
> exporter but with even more parameters to select the needed metrics. Here we
> thought it wa
On Thu, Nov 21, 2019 at 03:09:30PM +0500, Илья Шипицин wrote:
> I understand. However, those patches add complexity (which might be moved
> to another dedicated tool)
those patches make sense for heavy haproxy instances. As you might have
seen above, we are talking about > 130MB of data. So for a
On Thu, Nov 21, 2019 at 03:00:02PM +0500, Илья Шипицин wrote:
> is it common thing in Prometheus world to perform cascade export ? like
> "exporter" --> "filter out" --> "aggregate" --> "deliver to prometheus"
> in order to keep things simple and not to push everything into single tool
no, the
On Fri, Nov 22, 2019 at 11:25:44AM +0100, Christopher Faulet wrote:
> URI delimiters must not be encoded, except if you want to escape it. So,
> your first url is not equivalent to the second one. The encoded question
> mark is part of the path in the first url, it is not the query-string
>
Hi Christopher,
On Tue, Nov 19, 2019 at 04:35:47PM +0100, Christopher Faulet wrote:
> +/* Parse the query stirng of request URI to filter the metrics. It returns 1
> on
> + * success and -1 on error. */
> +static int promex_parse_uri(struct appctx *appctx, struct stream_interface
> *si)
> +{
>
conditions look the same in "no" parsing.
Signed-off-by: William Dauchy
---
doc/configuration.txt | 4 +++-
src/cfgparse.c| 9 ++---
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/doc/configuration.txt b/doc/configuration.txt
index a2bafcf40..d949f6bf4 100644
busy-polling parameter was forgotten in the list of performance tuning
Signed-off-by: William Dauchy
---
doc/configuration.txt | 1 +
1 file changed, 1 insertion(+)
diff --git a/doc/configuration.txt b/doc/configuration.txt
index f942b83dc..a2bafcf40 100644
--- a/doc/configuration.txt
+++ b
incoherency in the code handling set-dumpable
I choose to implement as you suggested in the previous thread, but
tell me if I need to adapt something.
for reference, v1 was
https://www.mail-archive.com/haproxy@formilux.org/msg35142.html
William Dauchy (3):
MINOR: doc: fix busy-polling performance
he
general health check of the process.
As discussed, plan to use the strict by default mode starting from v2.3.
Signed-off-by: William Dauchy
---
Changes in v2:
- introduced a specific strict limits option to force process failure
when a setrlimit fails
---
doc/configuration
this is a change of behaviour, and you can see it as a starting point for
a conversation on how to better handle those breaking cases. But the
general idea is to be coherent with other failing config errors.
Signed-off-by: William Dauchy
---
src/haproxy.c | 23 ---
1 file changed, 12
On Tue, Oct 29, 2019 at 06:57:44AM +0100, Willy Tarreau wrote:
> Here are the adjustments I'm proposing (only 2 and 3 were slightly
> touched). I'm not committing them without your consent since they
> were signed-off :-) Just let me know.
:-)
the new diff looks good to me
Thanks,
--
William
On Tue, Oct 29, 2019 at 06:40:23AM +0100, Willy Tarreau wrote:
> I'm taking the patchset. However I'll apply some small changes :
>
> On Sun, Oct 27, 2019 at 08:08:10PM +0100, William Dauchy wrote:
> > in global config parsing, we currently expect to have a possible no
>
to ms
while reading the code.
Signed-off-by: William Dauchy
---
src/proto_tcp.c | 4
1 file changed, 4 deletions(-)
diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index 3dfc6a62f..b136a602b 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -1580,10 +1580,6 @@ smp_fetch_dport(const struct
Hello Willy,
On Tue, Oct 22, 2019 at 04:46:43PM +0200, Willy Tarreau wrote:
> To be honest, I'm quite embarrassed by such a change. I think the main
> reason for the initial warning instead of error was that most people
> using haproxy in unprivileged environments never knew what to use as
> a
Hi,
On Tue, Nov 19, 2019 at 02:42:23PM +0500, Илья Шипицин wrote:
> small question.
> `/proc/sys/fs/suid_dumpable` is linux specific. will it work under freebsd,
> openbsd ? windows ?
> also, linux might not mount that filesystem. will it work ?
this code is protected around USE_PRCTL define,
this patch moves the whole set-dumpable logic before the polling code in
order to catch all possible cases where we could have changed the
uid/gid. It however does not cover the possible segfault at startup.
this patch should be backported in 2.0.
Signed-off-by: William Dauchy
---
Hi Willy,
Here
On Tue, Nov 19, 2019 at 04:35:47PM +0100, Christopher Faulet wrote:
> Here is updated patches with the support for "scope" and "no-maint"
> parameters. If this solution is good enough for you (and if it works :), I
> will push it.
$ curl
On Tue, Nov 19, 2019 at 03:31:28PM +0100, Christopher Faulet wrote:
> > * also for `check_status`, there is the case of L7STS and its associated
> > values that are present in another field. Most probably it could benefit
> > from a better representation in a prometheus output (thanks to labels)?
Hi Christopher,
On Tue, Nov 19, 2019 at 04:35:47PM +0100, Christopher Faulet wrote:
> Here is updated patches with the support for "scope" and "no-maint"
> parameters. If this solution is good enough for you (and if it works :), I
> will push it.
this looks good to me and the test was conclusive
On Mon, Nov 25, 2019 at 10:30:55AM +0100, Christopher Faulet wrote:
> First, a key without value is not properly handled. You must not try to
> parse the value, otherwise the following parameter is read as value. For
> instance "/metrics?no-maint=server".
oh, good catch, fixed.
> Then, if empty
ould be backported to 2.0
Signed-off-by: William Dauchy
---
.../prometheus-exporter/service-prometheus.c | 80 +++
1 file changed, 45 insertions(+), 35 deletions(-)
diff --git a/contrib/prometheus-exporter/service-prometheus.c
b/contrib/prometheus-exporter/service-promethe
On Wed, Nov 27, 2019 at 02:30:31PM +0100, Christopher Faulet wrote:
> It was merged. I amended your patch to fix 2 issues. The fixes were minor,
> so I preferred to not bother you with that. First, when the scope value is
> tested, we must first be sure it is defined to not dereference a null
>
we were decoding the whole substring and then parsing; this could lead to
treating & and = in the decoded result as delimiters where it should not.
this patch reverses the order by first parsing and then decoding each key
and value separately.
This patch should be backported to 2.0
Signed-off-by: Wil
we were checking old_ckchs twice where we wanted to check whether
the transaction path was freed (and so null) before
this should fix issue #351
Reported-by: Илья Шипицин
Signed-off-by: William Dauchy
---
src/ssl_sock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
quot;MINOR: ssl: ssl_sock_prepare_ctx() return an error code")
Reported-by: Илья Шипицин
Signed-off-by: William Dauchy
---
src/ssl_sock.c | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 6513760a..bcfa3e71 100644
--- a/
On Thu, Nov 28, 2019 at 11:10:34AM +0500, Илья Шипицин wrote:
> Willy thinks it should be "eb" instead of "res"
yup I saw that comment but I don't get why as it is tested above.
--
William
`eb` being tested above, `res` cannot be null, so the condition is
not needed and introduces potential dead code.
also fixes a typo in the associated comment
This should fix issue #349
Reported-by: Илья Шипицин
Signed-off-by: William Dauchy
---
src/dns.c | 7 +--
1 file changed, 1 insertion
Hi Christopher,
On Wed, Nov 20, 2019 at 02:56:28PM +0100, Christopher Faulet wrote:
> Nice, Thanks for your feedback. It is merged now. And I'm on the backports
> for the 2.0.
You apparently forgot to backport
commit 0d1c2a65e8370a770d01 (MINOR: stats: Report max times in addition of the
trivial patch to fix issue #351
Fixes: bc6ca7ccaa72 ("MINOR: ssl/cli: rework 'set ssl cert' as 'set/commit'")
Reported-by: Илья Шипицин
Signed-off-by: William Dauchy
---
src/ssl_sock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/ssl_sock.c b/src/ssl_so
On Sun, Nov 24, 2019 at 12:15 AM William Lallemand
wrote:
> That's a remain of the previous way of doing this, which was done with an
> array of 2 old_ckch, so the previous check was something like:
>
> > if (!old_ckchs[0] && !old_ckchs[1])
>
> When a transaction is created the old_ckchs and the
On Mon, Nov 25, 2019 at 05:57:04PM +0100, Willy Tarreau wrote:
> What I'd suggest instead as a better and more durable cleanup would be
> to explicitly mention above the function's prototype that it must not
> be called with a null err pointer, and remove all "if (err)" or "err &&"
> tests so that
;)
Signed-off-by: William Dauchy
---
src/proto_tcp.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index a9d5229c9..e509a17bc 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -906,9 +906,9 @@ int tcp_bind_listener(struct listener *listener,
we were trying to close the file descriptor even when the `socket` call
failed.
this should fix github issue #499
this should be backported to all versions >= v1.8
Fixes: 153659f1ae69a1 ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: W
On Wed, Feb 12, 2020 at 03:32:07PM +0100, Willy Tarreau wrote:
> I'd do it differently so that we neither try nor report an error if
> the default mss was not set. Indeed, if it already failed earlier,
> we already had an issue, so no need to fail again. So if you agree
> I'll change it to :
>
>
;)
Signed-off-by: William Dauchy
---
src/proto_tcp.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index a9d5229c9..044ade430 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -906,9 +906,9 @@ int tcp_bind_listener(struct listener *listener,
hostnames were limited to 62 chars, which is not RFC1035 compliant;
- the loop should stop when above max char
- fix resulting test where d[i] was wrongly used
(also simplify brackets for readability)
this should fix github issue #387
Signed-off-by: William Dauchy
---
src/dns.c | 4 ++--
1 file
On Sat, Jan 25, 2020 at 6:31 PM William Lallemand
wrote:
> There is no limitation with the chroot since the file is uploaded over the
> CLI.
> If you use the "set ssl cert" and "commit ssl cert" commands over the CLI the
> chroot option is not suppose to affect these.
>
> Do you have an example
this log could sometimes be a bit confusing (depending on the number, in
fact) when you read it (e.g. is it the number of active connections?) -
only trained eyes know haproxy outputs a different log when closing
active connections while stopping.
Signed-off-by: William Dauchy
---
src/proxy.c | 8
hostnames were limited to 62 chars, which is not RFC1035 compliant;
simply remove the equality in the condition, as the counter is initialised at zero.
(also simplify brackets for readability)
this should fix github issue #387
Signed-off-by: William Dauchy
---
src/dns.c | 2 +-
1 file changed, 1 insertion(+), 1
Hello Miroslav,
On Sun, Jan 26, 2020 at 2:57 AM Miroslav Zagorac wrote:
> i think that patch does not correct the bug in that function because in
> the loop above the variable i and pointer d increase at the same time.
>
> This means that *d should be written instead of d[i].
that is very true,
On Sun, Jan 26, 2020 at 11:06 PM Lukas Tribus wrote:
> Just because commit 456 fixes something that has been introduced with
> commit 123 DOES NOT mean we backport 456 to all the branches that 123
> was committed to - instead we backport 456 to a certain branch if it
> *actually* makes sense to
On Sun, Jan 26, 2020 at 7:33 PM Tim Düsterhus wrote:
> Backport information are missing (without looking up that commit). 1.8+
> it is.
Thanks. It could be nice to change these rules a bit; indeed, when the
`Fixes` tag is present, it's very easy to ask git in which tag this
was introduced; so in my
On Sun, Jan 26, 2020 at 8:17 PM Julien Pivotto wrote:
> That will not work with cherry-picked commits.
that's what I nuanced in the previous email you quoted indeed.
--
William