INVITATION TO ATTEND A WORKSHOP ON MONITORING AND EVALUATION ACCOUNTABILITY AND LEARNING 5TH-16TH SEPTEMBER, 2022

2022-08-19 Thread Afriex Training Limited



VENUE: BEST WESTERN MERIDIAN HOTEL, NAIROBI, KENYA
Office Telephone: +254-715-310-685
Register as a group of 5 or more participants and get a 15% discount on the course fee.
Send us an email: training@afriextraining.org or call +254-715-310-685
 
INTRODUCTION
Monitoring, evaluation, accountability, and learning (MEAL) are part of everyday program
management and are critical to the success of any program. Without an effective MEAL
system, we would be unable to track progress, make adjustments, discover unplanned
effects of a program, or judge the impact we have made on the lives of those with whom
we are working. MEAL also helps individuals and teams to be accountable to stakeholders
through information sharing and by developing a complaints or feedback mechanism that
can help to guide program implementation.
This training introduces participants to MEAL concepts and practices. It will stimulate
ideas on how to design and implement monitoring and evaluation processes that strengthen
accountability and learning, and so promote project, program and strategy effectiveness.
DURATION
10 days
TARGET AUDIENCE
This is a general training targeting a variety of program staff: management team members
and thematic staff, project management officials, government officials, program managers,
policy makers and program implementers, development practitioners and activists, and NGO
and CSO members.
OBJECTIVES
The training will enable participants to:
  Develop an understanding of the basic principles underlying MEAL
  Design and implement MEAL strategies, systems, and frameworks
  Develop and support the use of MEAL tools and practices in program development
  Support the development of MEAL work pieces such as program baselines, mid-term reviews and end lines
  Equip participants with an understanding of the major tools and best practices involved in MEAL
  Provide participants with effective rules of thumb and a framework to approach methodological issues in MEAL
  Collect data, ensure data quality management and analyse the data
  Understand project reporting by using MEAL
COURSE OUTLINE
Module 1: Introduction to Monitoring & Evaluation
  M&E and the project/program cycle
  Importance of M&E
  Purposes and uses of M&E
  Identifying gaps in M&E
  Barriers to effective M&E
Monitoring, Evaluation, Accountability and Learning (MEAL) overview
  Introduction to MEAL
  MEAL Components
  Purposes and functions of MEAL
  MEAL approach
  How MEAL changes M&E
  Challenges to MEAL
Module 2: Frameworks and MEAL Cycle
  Frameworks and approaches informing MEAL
  The language used in MEAL
  The MEAL cycle
  Exploring key ele

Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

2022-08-19 Thread Willy Tarreau
On Fri, Aug 19, 2022 at 11:37:47PM +0200, Vincent Bernat wrote:
> On 2022-08-19 23:09, Ionel GARDAIS wrote:
> > Aug 19 22:09:09 haproxy-2 haproxy[1280]: [WARNING]  (1280) : Failed to 
> > connect to the old process socket '/run/haproxy/admin.sock'
> > Aug 19 22:09:09 haproxy-2 haproxy[1280]: [ALERT](1280) : Failed to get 
> > the sockets from the old process!
> 
> There was a change in 2.6.0 (but not in 2.6.3) where "expose-fd listeners"
> for stats socket is not needed anymore. Is the line present in your
> configuration? (grep admin.sock /etc/haproxy/haproxy.cfg)
> 
> What's the output of systemctl cat haproxy?
> 
> > Aug 19 22:09:09 haproxy-2 haproxy[1280]: [NOTICE]   (1280) : New worker 
> > (1282) forked
> > Aug 19 22:09:09 haproxy-2 haproxy[1280]: [NOTICE]   (1280) : Loading 
> > success.
> > Aug 19 22:09:09 haproxy-2 haproxy[1282]: [WARNING]  (1282) : Server 
> > bck-speedtest/go-v6 is DOWN, reason: Layer4 connection problem, info: 
> > "Connection>
> > Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-speedtest/go-v6 is 
> > DOWN, reason: Layer4 connection problem, info: "Connection refused", check 
> > dur>
> > Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-speedtest/go-v6 is 
> > DOWN, reason: Layer4 connection problem, info: "Connection refused", check 
> > dur>
> > Aug 19 22:09:09 haproxy-2 haproxy[1282]: [WARNING]  (1282) : Server 
> > bck-nuxeo-arch/nuxeo-arch is DOWN, reason: Layer4 connection problem, info: 
> > "No r>
> > Aug 19 22:09:09 haproxy-2 haproxy[1282]: [ALERT](1282) : backend 
> > 'bck-nuxeo-arch' has no server available!
> > Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-nuxeo-arch/nuxeo-arch 
> > is DOWN, reason: Layer4 connection problem, info: "No route to host", check>
> > Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-nuxeo-arch/nuxeo-arch 
> > is DOWN, reason: Layer4 connection problem, info: "No route to host", check>
> > Aug 19 22:09:09 haproxy-2 haproxy[1282]: backend bck-nuxeo-arch has no 
> > server available!
> > Aug 19 22:09:09 haproxy-2 haproxy[1282]: backend bck-nuxeo-arch has no 
> > server available!
> > [...] // a few other backends not responding
> > // then
> 
> Are these expected?

That's what I would like to know as well. I'm bothered by the systemd
message indicating that haproxy is not responding; I don't really know
what it corresponds to, and it could possibly indicate that the sd_notify()
message was not sent before switching to wait mode.
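
For reference, the relevant part of the Debian unit looks roughly like this
(an assumption based on the stock packaging; the exact lines may differ):

    # output of "systemctl cat haproxy" (approximate)
    [Service]
    Type=notify
    ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

With Type=notify, systemd waits for READY=1 from sd_notify(); if the master
never sends it, the start times out after the default TimeoutStartSec of 90s
and the unit is killed, which would match the 1m30 cycle reported here.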

Ideally the startup messages from 2.6.2 where it's working fine would help
as well.

Thanks,
Willy



Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

2022-08-19 Thread Vincent Bernat

On 2022-08-19 23:09, Ionel GARDAIS wrote:

Aug 19 22:09:09 haproxy-2 haproxy[1280]: [WARNING]  (1280) : Failed to connect 
to the old process socket '/run/haproxy/admin.sock'
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [ALERT](1280) : Failed to get the 
sockets from the old process!


There was a change in 2.6.0 (but not in 2.6.3) where "expose-fd 
listeners" for stats socket is not needed anymore. Is the line present 
in your configuration? (grep admin.sock /etc/haproxy/haproxy.cfg)
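
(For reference, the kind of line being asked about sits in the global section
and looks roughly like this; the socket path is the one from the logs above,
the rest is illustrative:)

    global
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners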


What's the output of systemctl cat haproxy?


Aug 19 22:09:09 haproxy-2 haproxy[1280]: [NOTICE]   (1280) : New worker (1282) 
forked
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [NOTICE]   (1280) : Loading success.
Aug 19 22:09:09 haproxy-2 haproxy[1282]: [WARNING]  (1282) : Server bck-speedtest/go-v6 
is DOWN, reason: Layer4 connection problem, info: "Connection>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-speedtest/go-v6 is DOWN, reason: Layer4 
connection problem, info: "Connection refused", check dur>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-speedtest/go-v6 is DOWN, reason: Layer4 
connection problem, info: "Connection refused", check dur>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: [WARNING]  (1282) : Server 
bck-nuxeo-arch/nuxeo-arch is DOWN, reason: Layer4 connection problem, info: "No 
r>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: [ALERT](1282) : backend 
'bck-nuxeo-arch' has no server available!
Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-nuxeo-arch/nuxeo-arch is DOWN, reason: 
Layer4 connection problem, info: "No route to host", check>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-nuxeo-arch/nuxeo-arch is DOWN, reason: 
Layer4 connection problem, info: "No route to host", check>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: backend bck-nuxeo-arch has no server 
available!
Aug 19 22:09:09 haproxy-2 haproxy[1282]: backend bck-nuxeo-arch has no server 
available!
[…] // a few other backends not responding
// then


Are these expected?



Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

2022-08-19 Thread Ionel GARDAIS
Yes, from haproxy.debian.net

Aug 19 22:09:09 haproxy-2 systemd[1]: Starting HAProxy Load Balancer...
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [NOTICE]   (1280) : haproxy version is 
2.6.3-1~bpo11+1
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [NOTICE]   (1280) : path to executable 
is /usr/sbin/haproxy
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [WARNING]  (1280) : config : 'option 
forwardfor' ignored for frontend 'gerrit-ssh' as it requires HTTP mode.
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [WARNING]  (1280) : config : 'option 
forwardfor' ignored for backend 'bck-review-ssh' as it requires HTTP mo>
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [WARNING]  (1280) : Failed to connect 
to the old process socket '/run/haproxy/admin.sock'
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [ALERT](1280) : Failed to get the 
sockets from the old process!
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [NOTICE]   (1280) : New worker (1282) 
forked
Aug 19 22:09:09 haproxy-2 haproxy[1280]: [NOTICE]   (1280) : Loading success.
Aug 19 22:09:09 haproxy-2 haproxy[1282]: [WARNING]  (1282) : Server 
bck-speedtest/go-v6 is DOWN, reason: Layer4 connection problem, info: 
"Connection>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-speedtest/go-v6 is DOWN, 
reason: Layer4 connection problem, info: "Connection refused", check dur>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-speedtest/go-v6 is DOWN, 
reason: Layer4 connection problem, info: "Connection refused", check dur>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: [WARNING]  (1282) : Server 
bck-nuxeo-arch/nuxeo-arch is DOWN, reason: Layer4 connection problem, info: "No 
r>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: [ALERT](1282) : backend 
'bck-nuxeo-arch' has no server available!
Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-nuxeo-arch/nuxeo-arch is 
DOWN, reason: Layer4 connection problem, info: "No route to host", check>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: Server bck-nuxeo-arch/nuxeo-arch is 
DOWN, reason: Layer4 connection problem, info: "No route to host", check>
Aug 19 22:09:09 haproxy-2 haproxy[1282]: backend bck-nuxeo-arch has no server 
available!
Aug 19 22:09:09 haproxy-2 haproxy[1282]: backend bck-nuxeo-arch has no server 
available!
[…] // a few other backends not responding
// then
Aug 19 22:10:39 haproxy-2 systemd[1]: haproxy.service: start operation timed 
out. Terminating.
Aug 19 22:10:39 haproxy-2 systemd[1]: haproxy.service: Killing process 1282 
(haproxy) with signal SIGKILL.
Aug 19 22:10:39 haproxy-2 systemd[1]: haproxy.service: Failed with result 
'timeout'.
Aug 19 22:10:39 haproxy-2 systemd[1]: haproxy.service: Unit process 1282 
(haproxy) remains running after unit stopped.
Aug 19 22:10:39 haproxy-2 systemd[1]: Failed to start HAProxy Load Balancer.
Aug 19 22:10:39 haproxy-2 systemd[1]: haproxy.service: Consumed 1min 30.496s 
CPU time.


- Original Mail -
From: "Vincent Bernat"
To: "Ionel GARDAIS", "Willy Tarreau"
Cc: "haproxy"
Sent: Friday, 19 August 2022 22:37:49
Subject: Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

On 2022-08-19 22:16, Ionel GARDAIS wrote:

> I had to rollback to 2.6.2 after having upgraded to 2.6.3 because systemd was
> restarting the haproxy process every 1m30s (on an up-to-date Debian 11).
> apt upgrade itself hung while doing the upgrade.

With Debian packages from haproxy.debian.net? Logs from "journalctl -eu 
haproxy" should help.
--
232 avenue Napoleon BONAPARTE 92500 RUEIL MALMAISON
Capital EUR 219 300,00 - RCS Nanterre B 408 832 301 - TVA FR 09 408 832 301




Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

2022-08-19 Thread Vincent Bernat

On 2022-08-19 22:16, Ionel GARDAIS wrote:


I had to rollback to 2.6.2 after having upgraded to 2.6.3 because systemd was
restarting the haproxy process every 1m30s (on an up-to-date Debian 11).
apt upgrade itself hung while doing the upgrade.


With Debian packages from haproxy.debian.net? Logs from "journalctl -eu 
haproxy" should help.




Re: [*EXT*] [ANNOUNCE] haproxy-2.6.3

2022-08-19 Thread Ionel GARDAIS
Hi Willy,

I had to rollback to 2.6.2 after having upgraded to 2.6.3 because systemd was
restarting the haproxy process every 1m30s (on an up-to-date Debian 11).
apt upgrade itself hung while doing the upgrade.

Regards,
Ionel

- Original Mail -
From: "Willy Tarreau"
To: "haproxy"
Sent: Friday, 19 August 2022 18:51:25
Subject: [*EXT*] [ANNOUNCE] haproxy-2.6.3

Hi,

HAProxy 2.6.3 was released on 2022/08/19. It added 60 new commits
after version 2.6.2.

This release contains assorted fixes for issues discovered after the
previous 2.6.2 release, and there are quite a few annoying ones so I
preferred not to let them rot too long:

- there was an issue with the log-forward section, where a missing
  initialization due to code duplication caused some settings from
  "bind" lines to be ignored (ssl, thread, a few such things).

- the late cleanup of the CLI keyword processing in 2.6 caused some
  breakage when certain commands are chained using a semi-colon, due
  to a command context that was not reset between commands and could
  then be misused. For example "show version; show sess" could crash
  the process.

- some ugly crashes saying "offset > buf->data" were reported when
  using the DNS (e.g. issue #1781), and it was found that it was using
  uninitialized fields in a structure. A pool_zalloc() was used to paper
  over it, since it's not even impossible that other fields are affected
  and that this part requires a deep breath before diving into it.

- there was a logic bug in the processing of option http-restrict-req-hdr-names
  that could cause deletion of a wrong header or a crash when facing
  multiple forbidden chars. This was reported in issue #1822, analysed
  and fixed by Mateusz Malek.

- an old bug in the H2 mux may cause spurious stream resets when uploading
  and downloading at the same time from the same stream, due to the window
  update frames having to be delayed when the output is full, and sent
  later after the stream ID was reset. Those using POST to servers might
  have experienced such occasional issues and might want to check for any
  improvement there. This was reported in issue #1830 and diagnosed by
  David le Blanc.

- during atomic map updates of entries based on prefix length ("_ip" and
  "_beg"), if a new finer entry was added and matched an input before being
  committed, it was naturally ignored, but the lookup would continue with
  next keys without rechecking the key, possibly returning an incorrect
  match. This was reported by Miroslav in issue #1802.

- Tim reported in issue #1799 that upon reload, an old process that failed
  to synchronize its tables with the new one could loop for a while without
  any pause and waste a lot of CPU doing this.

- the recently added assertion in fd_delete() already spotted a long
  existing bug on reload, where the FD that was used by the pipe of an
  exiting thread could be instantly reused as a socket by another thread
  and be incorrectly inserted in the table. Most of the time it remained
  unnoticed as these were mostly health checks on a reloading process, but
  since the assertion a few users started to see logs of a crash of the
  exiting process. This was reported both by Christian Ruppert in issue
  #1807 and by Cedric Paillet.

- there was an undesired sharing of data between default-servers that
  could lead to double-frees concretized by crashes when checking the
  config. This was reported in issue #1804 by Fabiano Nunes.

- when a server had numerous requests waiting in queue, it was possible
  for a thread to spend its time picking requests from this queue while
  all other threads were working at refilling it, and the time spent
  doing this was unbounded, which could 1) add high processing latencies,
  and 2) even trigger the watchdog if the thread worked too long. I could
  trigger the watchdog a few times on a 48-thread machine. I think it's
  the same issue that was reported 2 years ago by Jaroslaw Rzeszotko in
  issue #880.

- the ring section's "size" parser was too lax and would take "1M" for "1"
  without even issuing a warning... Also error messages regarding incorrect
  values would copy the input string instead of the parsed value, providing
  no way to diagnose.

- there was a problem with the ring forwarding that's not very clear to me
  (I have no idea about the impact, commit 96417f3 in master).

- I managed to trigger an error on reload where the old process died saying
  "t->tid >= 0 && t->tid != tid". This is caused by the deinit code that
  needs to stop stuff initialized on other threads, and as such it violates
  some consistency checks. The check was relaxed to ignore the stopping
  condition.

- reading from the rings could also occasionally freeze at high rate if
  the reader had to stop due to a buffer full while the writer had already
  stopped due to a ring full.

- Tim reported in issue #1803 that sometimes a new process would fail to
  get the sockets from the old one on reload, du

Re: [PATCH] MINOR: tcp_sample: extend support for get_tcp_info to OpenBSD

2022-08-19 Thread Brad Smith

ping.

On 8/12/2022 10:23 PM, Brad Smith wrote:

extend support for get_tcp_info to OpenBSD.


diff --git a/src/tcp_sample.c b/src/tcp_sample.c
index 2e8a62173..300ce1c8d 100644
--- a/src/tcp_sample.c
+++ b/src/tcp_sample.c
@@ -357,8 +357,8 @@ static inline int get_tcp_info(const struct arg *args, struct sample *smp,
 	case 5:  smp->data.u.sint = info.tcpi_retrans;break;
 	case 6:  smp->data.u.sint = info.tcpi_fackets;break;
 	case 7:  smp->data.u.sint = info.tcpi_reordering; break;
-# elif defined(__FreeBSD__) || defined(__NetBSD__)
-	/* the ones are found on FreeBSD and NetBSD featuring TCP_INFO */
+# elif defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
+	/* the ones are found on FreeBSD, NetBSD and OpenBSD featuring TCP_INFO */
 	case 2:  smp->data.u.sint = info.__tcpi_unacked;  break;
 	case 3:  smp->data.u.sint = info.__tcpi_sacked;   break;
 	case 4:  smp->data.u.sint = info.__tcpi_lost; break;
@@ -373,7 +373,7 @@ static inline int get_tcp_info(const struct arg *args, struct sample *smp,
 	return 1;
 }
 
-#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__APPLE__)
+#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__) || defined(__APPLE__)
 /* get the mean rtt of a client connection */
 static int
 smp_fetch_fc_rtt(const struct arg *args, struct sample *smp, const char *kw, void *private)
@@ -389,7 +389,7 @@ smp_fetch_fc_rtt(const struct arg *args, struct sample *smp, const char *kw, voi
 }
 #endif
 
-#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__APPLE__)
+#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__) || defined(__APPLE__)
 /* get the variance of the mean rtt of a client connection */
 static int
 smp_fetch_fc_rttvar(const struct arg *args, struct sample *smp, const char *kw, void *private)
@@ -406,7 +406,7 @@ smp_fetch_fc_rttvar(const struct arg *args, struct sample *smp, const char *kw,
 #endif
 
 
-#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__APPLE__)
+#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__) || defined(__APPLE__)
 /* get the unacked counter on a client connection */
 static int
 smp_fetch_fc_unacked(const struct arg *args, struct sample *smp, const char *kw, void *private)
@@ -417,7 +417,7 @@ smp_fetch_fc_unacked(const struct arg *args, struct sample *smp, const char *kw,
 }
 #endif
 
-#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__)
+#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
 /* get the sacked counter on a client connection */
 static int
 smp_fetch_fc_sacked(const struct arg *args, struct sample *smp, const char *kw, void *private)
@@ -428,7 +428,7 @@ smp_fetch_fc_sacked(const struct arg *args, struct sample *smp, const char *kw,
 }
 #endif
 
-#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__APPLE__)
+#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__) || defined(__APPLE__)
 /* get the lost counter on a client connection */
 static int
 smp_fetch_fc_lost(const struct arg *args, struct sample *smp, const char *kw, void *private)
@@ -439,7 +439,7 @@ smp_fetch_fc_lost(const struct arg *args, struct sample *smp, const char *kw, vo
 }
 #endif
 
-#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__APPLE__)
+#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__) || defined(__APPLE__)
 /* get the retrans counter on a client connection */
 static int
 smp_fetch_fc_retrans(const struct arg *args, struct sample *smp, const char *kw, void *private)
@@ -450,7 +450,7 @@ smp_fetch_fc_retrans(const struct arg *args, struct sample *smp, const char *kw,
 }
 #endif
 
-#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__)
+#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
 /* get the fackets counter on a client connection */
 static int
 smp_fetch_fc_fackets(const struct arg *args, struct sample *smp, const char *kw, void *private)
@@ -461,7 +461,7 @@ smp_fetch_fc_fackets(const struct arg *args, struct sample *smp, const char *kw,
 }
 #endif
 
-#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__)
+#if defined(__linux__) || defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
 /* get the reordering counter on a client connection */
 static int
 smp_fetch_fc_reordering(const struct arg *args, struct sample *smp, const char *kw, void *private)
@@ -502,22 +502,22 @@ static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
 #ifdef TCP_INFO
 	{ "fc_rtt",   s
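
(For context, these are the sample fetches the patch makes available on
OpenBSD; a small, purely illustrative usage sketch in a proxy section:)

    # export the kernel-measured connection RTT, e.g. for logging/debugging
    http-response set-header X-Conn-RTT %[fc_rtt]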

[ANNOUNCE] haproxy-2.6.3

2022-08-19 Thread Willy Tarreau
Hi,

HAProxy 2.6.3 was released on 2022/08/19. It added 60 new commits
after version 2.6.2.

This release contains assorted fixes for issues discovered after the
previous 2.6.2 release, and there are quite a few annoying ones so I
preferred not to let them rot too long:

- there was an issue with the log-forward section, where a missing
  initialization due to code duplication caused some settings from
  "bind" lines to be ignored (ssl, thread, a few such things).

- the late cleanup of the CLI keyword processing in 2.6 caused some
  breakage when certain commands are chained using a semi-colon, due
  to a command context that was not reset between commands and could
  then be misused. For example "show version; show sess" could crash
  the process (a small reproduction sketch follows after this list).

- some ugly crashes saying "offset > buf->data" were reported when
  using the DNS (e.g. issue #1781), and it was found that it was using
  uninitialized fields in a structure. A pool_zalloc() was used to paper
  over it, since it's not even impossible that other fields are affected
  and that this part requires a deep breath before diving into it.

- there was a logic bug in the processing of option http-restrict-req-hdr-names
  that could cause deletion of a wrong header or a crash when facing
  multiple forbidden chars. This was reported in issue #1822, analysed
  and fixed by Mateusz Malek.

- an old bug in the H2 mux may cause spurious stream resets when uploading
  and downloading at the same time from the same stream, due to the window
  update frames having to be delayed when the output is full, and sent
  later after the stream ID was reset. Those using POST to servers might
  have experienced such occasional issues and might want to check for any
  improvement there. This was reported in issue #1830 and diagnosed by
  David le Blanc.

- during atomic map updates of entries based on prefix length ("_ip" and
  "_beg"), if a new finer entry was added and matched an input before being
  committed, it was naturally ignored, but the lookup would continue with
  next keys without rechecking the key, possibly returning an incorrect
  match. This was reported by Miroslav in issue #1802.

- Tim reported in issue #1799 that upon reload, an old process that failed
  to synchronize its tables with the new one could loop for a while without
  any pause and waste a lot of CPU doing this.

- the recently added assertion in fd_delete() already spotted a long
  existing bug on reload, where the FD that was used by the pipe of an
  exiting thread could be instantly reused as a socket by another thread
  and be incorrectly inserted in the table. Most of the time it remained
  unnoticed as these were mostly health checks on a reloading process, but
  since the assertion a few users started to see logs of a crash of the
  exiting process. This was reported both by Christian Ruppert in issue
  #1807 and by Cedric Paillet.

- there was an undesired sharing of data between default-servers that
  could lead to double-frees concretized by crashes when checking the
  config. This was reported in issue #1804 by Fabiano Nunes.

- when a server had numerous requests waiting in queue, it was possible
  for a thread to spend its time picking requests from this queue while
  all other threads were working at refilling it, and the time spent
  doing this was unbounded, which could 1) add high processing latencies,
  and 2) even trigger the watchdog if the thread worked too long. I could
  trigger the watchdog a few times on a 48-thread machine. I think it's
  the same issue that was reported 2 years ago by Jaroslaw Rzeszotko in
  issue #880.

- the ring section's "size" parser was too lax and would take "1M" for "1"
  without even issuing a warning... Also error messages regarding incorrect
  values would copy the input string instead of the parsed value, providing
  no way to diagnose.

- there was a problem with the ring forwarding that's not very clear to me
  (I have no idea about the impact, commit 96417f3 in master).

- I managed to trigger an error on reload where the old process died saying
  "t->tid >= 0 && t->tid != tid". This is caused by the deinit code that
  needs to stop stuff initialized on other threads, and as such it violates
  some consistency checks. The check was relaxed to ignore the stopping
  condition.

- reading from the rings could also occasionally freeze at high rate if
  the reader had to stop due to a buffer full while the writer had already
  stopped due to a ring full.

- Tim reported in issue #1803 that sometimes a new process would fail to
  get the sockets from the old one on reload, due to a flag that was not
  correctly updated before switching to wait mode.

- the function used to send FDs to the new process was using a wrong error
  code on failure, leading to the failure not being detected.

- and a usual lot of QUIC fixes and updates, but more on that below.
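
(For the CLI chaining item above, a minimal way to exercise chained commands
against the stats socket; the socket path is the one seen in the logs earlier
in this thread and may differ on your setup:)

    echo "show version; show sess" | socat stdio /run/haproxy/admin.sock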

A few build issues were addressed (essentially warnings with older compilers).
Two 

Re: "Success" logs in HTTP frontends

2022-08-19 Thread Christian Ruppert

On 2022-08-01 09:45, Christian Ruppert wrote:

On 2022-07-29 13:59, William Lallemand wrote:

On Fri, Jul 29, 2022 at 11:10:32AM +0200, Christopher Faulet wrote:

On 7/29/22 at 10:13, Christian Ruppert wrote:
> Hi list,
>
> so I noticed on my private HAProxy I have 2 of those logs within the
> past ~1-2 months:
> haproxy[28669]: 1.2.3.4:48596 [17/Jun/2022:13:55:18.530] public/HTTPSv4:
> Success
>
> So that's nothing so far but still no idea what that means.
> At work, out of 250 million log entries per day, there are about 600k of
> those "Success" ones.
> haproxy[27892]: 192.168.70.102:7904 [29/May/2022:00:13:37.316]
> genfrontend_35310-foobar/3: Success
>
> I'm not sure what it means by "3". Is it the third bind?
>
> I couldn't trigger those "Success" logs by either restarting or
> reloading. What is it for / where does it come from?
>

Hi Christian,

What is your version? At first glance, I can't find such a log message in the
code. It could come from a lua module.

In fact, I found something. It is probably because an "embryonic" session is
killed with no connection/ssl error. For instance, an SSL connection rejected
because of a "tcp-request session" rule (so after the SSL handshake). The same
may happen with a listener using the PROXY protocol.
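
(A minimal sketch of the kind of setup described above, with illustrative
names and paths, where the session is killed only after the TLS handshake and
therefore carries no connection/ssl error:)

    frontend public
        bind :443 ssl crt /etc/haproxy/site.pem
        tcp-request session reject if { src -f /etc/haproxy/blocklist.lst }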

Regards,



Could be something like that indeed, the "Success" message is the string
for CO_ER_NONE in the fc_err_str fetch. (The default error string)

Maybe we lack some intermediate state, or we could just change the string?

It is only the string for the handshake status so this is confusing when
used as an error.


Since it's that much every day I'd agree to change/improve it.
If it's the connection one then I only see it in combination with
SOCKS. There is no SOCKS in my config though, unless that also
triggers if something does a SOCKS request on that bind anyway.
I wasn't able to reproduce/trigger it that way yet.



Does anybody know how to trigger that on purpose? Would be really 
interesting.

--
Regards,
Christian Ruppert



Understanding show table output and rate limiting weirdness

2022-08-19 Thread Corin Langosch
Hello guys,

I’m using the docker image 2.5.7-2ef551d with basic rate limiting configured 
like this:

  backend test
    acl test_rate_limit_by_ip_exceeds_limit src,table_http_req_rate(test_rate_limit_by_ip) gt 5
    http-request deny deny_status 429 if test_rate_limit_by_ip_exceeds_limit
    http-request track-sc0 src table test_rate_limit_by_ip

    acl test_rate_limit_api_exceeds_limit req.hdr(authorization),table_http_req_rate(test_rate_limit_api) gt 100
    http-request deny deny_status 429 if test_rate_limit_api_exceeds_limit
    http-request track-sc1 req.hdr(authorization) table test_rate_limit_api if { path -i -m beg "/api" }

    acl test_rate_limit_graphql_exceeds_limit req.hdr(x-api-key),table_http_req_rate(test_rate_limit_graphql) gt 100
    http-request deny deny_status 429 if test_rate_limit_graphql_exceeds_limit
    http-request track-sc2 req.hdr(authorization) table test_rate_limit_graphql if { path -i -m beg "/graphql" }

    server s1 10.0.0.1:80
    server s2 10.0.0.2:80

  backend test_rate_limit_by_ip
    stick-table type ipv6 size 1m expire 24h store http_req_rate(5m)

  backend test_rate_limit_api
    stick-table type string size 1m expire 24h store http_req_rate(3m)

  backend test_rate_limit_graphql
    stick-table type string size 1m expire 24h store http_req_rate(3m)
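
(For reference, the runtime contents of these tables can be inspected over the
stats socket; the socket path below is an assumption:)

    echo "show table test_rate_limit_by_ip" | socat stdio /var/run/haproxy.sock
    echo "show table test_rate_limit_by_ip data.http_req_rate gt 100" | socat stdio /var/run/haproxy.sock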

Now I have some users reporting they are blocked (getting a 429) even though
they don’t perform a lot of requests. To analyze this, I ran "show table
table_rate_limit_by_ip", but the output looks a bit weird to me. Here’s an
anonymized extract:

  0x55cfd58df130: key=:::1.2.3.1 use=0 exp=1498762521 http_req_rate(30)=1181926
  0x55cfd627f840: key=:::1.2.3.2 use=0 exp=1599966154 http_req_rate(30)=80740
  0x55cfda287e00: key=:::1.2.3.3 use=0 exp=58273431 http_req_rate(30)=0
  0x55cfd5cd5320: key=:::1.2.3.4 use=0 exp=2006327751 http_req_rate(30)=1606589

I wonder what the value of exp represents. It doesn’t seem to be a unix
timestamp (not even in milliseconds), nor does it seem to be the number of
(milli)seconds until the entry expires. Unfortunately, I can’t find any
documentation about it.

The http_req_rate (which should be the rate within the last 5 minutes for this
table) is extremely (unrealistically) high for many entries. How is this possible?

Is my configuration broken? Is it a bug? …?

Thanks for any help,
Corin

PS: Is there any way to filter the output of show table table_rate_limit_by_ip 
by key (by ip in my case)? In the docs, I only find how to filter by value (by 
http_req_rate in my case).