stable-bot: Bugfixes waiting for a release 2.1 (26)

2020-07-07 Thread stable-bot
Hi,

This is a friendly bot that watches fixes pending for the next haproxy-stable 
release!  One such e-mail is sent periodically once patches are waiting in the 
last maintenance branch, and an ideal release date is computed based on the 
severity of these fixes and their merge date.  Responses to this mail must be 
sent to the mailing list.


Last release 2.1.7 was issued on 2020-06-09.  There are currently 26 patches in 
the queue, broken down as follows:
- 9 MEDIUM, first one merged on 2020-06-12
- 17 MINOR, first one merged on 2020-06-12

Thus the computed ideal release date for 2.1.8 would be 2020-07-10, which is in 
one week or less.
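
(For illustration only, since the exact per-severity delays are the bot's 
internal policy and are assumed here rather than quoted from it: if a MEDIUM 
fix is given roughly four weeks from its merge date, then the first MEDIUM 
merged on 2020-06-12 yields 2020-06-12 + 28 days = 2020-07-10, matching the 
date above; MINOR fixes carry a longer delay and therefore do not pull the 
date earlier.)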

The current list of patches in the queue is:
 - 2.1   - MEDIUM  : log: don't hold the log lock during writev() on a file descriptor
 - 2.1   - MEDIUM  : ssl: crt-list must continue parsing on ERR_WARN
 - 2.1   - MEDIUM  : ebtree: use a byte-per-byte memcmp() to compare memory blocks
 - 2.1   - MEDIUM  : fetch: Fix hdr_ip misparsing IPv4 addresses due to missing NUL
 - 2.1   - MEDIUM  : mux-h1: Disable splicing for the conn-stream if read0 is received
 - 2.1   - MEDIUM  : mux-h1: Subscribe rather than waking up in h1_rcv_buf()
 - 2.1   - MEDIUM  : connection: Continue to recv data to a pipe when the FD is not ready
 - 2.1   - MEDIUM  : pattern: Add a trailing \0 to match strings only if possible
 - 2.1   - MEDIUM  : pattern: fix thread safety of pattern matching
 - 2.1   - MINOR   : proxy: fix dump_server_state()'s misuse of the trash
 - 2.1   - MINOR   : ssl: fix ssl-{min,max}-ver with openssl < 1.1.0
 - 2.1   - MINOR   : spoe: add missing key length check before checking key names
 - 2.1   - MINOR   : cli: allow space escaping on the CLI
 - 2.1   - MINOR   : mux-h1: Fix the splicing in TUNNEL mode
 - 2.1   - MINOR   : mworker/cli: fix the escaping in the master CLI
 - 2.1   - MINOR   : spoe: correction of setting bits for analyzer
 - 2.1   - MINOR   : http: make smp_fetch_body() report that the contents may change
 - 2.1   - MINOR   : systemd: Wait for network to be online
 - 2.1   - MINOR   : backend: Remove CO_FL_SESS_IDLE if a client remains on the last server
 - 2.1   - MINOR   : mux-h1: Don't read data from a pipe if the mux is unable to receive
 - 2.1   - MINOR   : mux-h1: Disable splicing only if input data was processed
 - 2.1   - MINOR   : proxy: always initialize the trash in show servers state
 - 2.1   - MINOR   : tcp-rules: tcp-response must check the buffer's fullness
 - 2.1   - MINOR   : mworker/cli: fix semicolon escaping in master CLI
 - 2.1   - MINOR   : http_ana: clarify connection pointer check on L7 retry
 - 2.1   - MINOR   : http_act: don't check capture id in backend (2)

-- 
The haproxy stable-bot is freely provided by HAProxy Technologies to help 
improve the quality of each HAProxy release.  If you have any issue with these 
emails or if you want to suggest some improvements, please post them on the 
list so that the solutions suiting the most users can be found.



Re: [PATCH] CLEANUP: contrib/prometheus-exporter: typo fixes for ssl reuse metric

2020-07-07 Thread Willy Tarreau
Hi Pierre,

On Tue, Jul 07, 2020 at 07:14:08PM +0200, Pierre Cheynier wrote:
> A typo I identified while having a look at our metric inventory.

Thank you, now merged.

Willy



Re: [BUG] haproxy retries dispatch to wrong server

2020-07-07 Thread Christopher Faulet

On 07/07/2020 at 15:16, Michael Wimmesberger wrote:

Hi,

I might have found a potentially critical bug in haproxy. It occurs when
haproxy is retrying to dispatch a request to a server. If haproxy fails
to dispatch a request to a server that is either up or has no health
checks enabled it dispatches the request to a random server on any
backend in any mode (tcp or http) as long as they are in the up state
(via tcp-connect or httpchk health checks). In addition haproxy logs the
correct server although it dispatches the request to a wrong server.



Hi Michael,

Thanks for the reproducer and the detailed description; I was able to reproduce 
the bug with it. I've attached a patch to address it. I will check with Willy 
tomorrow morning whether it is the right way to fix it, but it should do the 
trick.


Thanks again!

--
Christopher Faulet
>From bbc89e04a252b3719f221cd0afbad49f507c4c46 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Tue, 7 Jul 2020 22:25:19 +0200
Subject: [PATCH] BUG/MAJOR: stream: Mark the server address as unset on new
 outgoing connection

In the connect_server() function, when a new connection is created, it is
important to mark the server address as explicitly unset by removing the
stream's SF_ADDR_SET flag, so that the address is assigned again for this new
connection. On the first connect attempt this is not a problem because the flag
is not set yet. But on a connection failure, the faulty endpoint is detached.
This is specific to 2.0; see commit 7b69c91e7 ("BUG/MAJOR: stream-int: always
detach a faulty endpoint on connect failure") for details. As a result of that
commit, on a connect retry a new connection is created, but this time the
SF_ADDR_SET flag is already set (except on a redispatch), even though the
server address of this new connection has not actually been set.

On the other hand, when a connection is released (or when it is created), the
from/to addresses are not cleared. Thus, because of the bug described above,
when a connection is taken from the memory pool, the addresses of the previous
connection are reused, leading to undefined and random routing. For a totally
new connection, no addresses are set and an internal error is reported by
si_connect().

A reproducer with a detailed description was posted on the ML:

  https://www.mail-archive.com/haproxy@formilux.org/msg37850.html

This patch should fix issue #717. There is no corresponding mainline commit for
this fix, and it must not be backported as it is solely for 2.0.
---
 src/backend.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/backend.c b/src/backend.c
index d0d11779a..2d0fd6a43 100644
--- a/src/backend.c
+++ b/src/backend.c
@@ -1457,6 +1457,7 @@ int connect_server(struct stream *s)
 
 	/* no reuse or failed to reuse the connection above, pick a new one */
 	if (!srv_conn) {
+		s->flags &= ~SF_ADDR_SET;
 		srv_conn = conn_new();
 		if (srv_conn)
 			srv_conn->target = s->target;
-- 
2.26.2



Re: [BUG] haproxy retries dispatch to wrong server

2020-07-07 Thread Lukas Tribus
Hello Michael,


On Tue, 7 Jul 2020 at 15:16, Michael Wimmesberger
 wrote:
>
> Hi,
>
> I might have found a potentially critical bug in haproxy. It occurs when
> haproxy is retrying to dispatch a request to a server. If haproxy fails
> to dispatch a request to a server that is either up or has no health
> checks enabled it dispatches the request to a random server on any
> backend in any mode (tcp or http) as long as they are in the up state
> (via tcp-connect or httpchk health checks). In addition haproxy logs the
> correct server although it dispatches the request to a wrong server.
>
> I could not reproduce this issue on 2.0.14 or any 2.1.x version. This
> happens in tcp and http mode and http requests might be dispatched to
> tcp servers and vice versa.
>
> I have tried to narrow this problem down in source using git bisect,
> which results in this commit marked as the first bad one:
> 7b69c91e7d9ac6d7513002ecd3b06c1ac3cb8297.

Makes sense that 2.1 is not affected because this commit was
specifically written for 2.0 (it's not a backport).

Exceptionally detailed and thorough reporting here, this will help a
lot, thank you.

A bug has been previously filed, but the details mentioned in this
thread will help get things going:
https://github.com/haproxy/haproxy/issues/717



Lukas



[ANNOUNCE] haproxy-2.2.0

2020-07-07 Thread Willy Tarreau
Hi,

HAProxy 2.2.0 was released on 2020/07/07. It added 24 new commits
after version 2.2-dev12.

There were very few last-minute changes since dev12, just as I hoped;
that's pretty fine.

We're late by about one month compared to the initial planning, which is not
terrible and should instead be seen as an investment in the debugging cycle,
since almost only bug fixes were merged during that period. In the end you get
a better version, just a bit later.

While I was initially worried that this version didn't seem to contain
any outstanding changes, looking back in the mirror tells me it's another
awesome one instead:

  - dynamic content emission (see the configuration sketch after this list):
     - "http-request return" directive to build dynamic responses;
     - rewriting of headers (including our own) after the response;
     - dynamic error files (errorfiles can be used as templates to
       deliver personalized pages)

  - further improvements to TLS runtime certificate management:
     - insertion of new certificates
     - split of key and cert
     - manipulation and creation of crt-lists
     - even directories can be handled

    And by the way, TLSv1.2 is now set as the default minimum version.

  - significant reduction of server-side resources by sharing idle connection
    pools between all threads; until 2.1, if you had 64 threads, each of them
    had its own connections, so the reuse rate was lower and the idle
    connection count was very high. This is not the case anymore.

  - health checks were rewritten to all rely on tcp-check rules behind the
    scenes. This allowed us to get rid of all the dirt we had accumulated over
    18 years and to write extensible checks; new ones are much easier to add.
    In addition we now have http-checks which support header and body
    addition, and which pass through the muxes (HTTP/1 and HTTP/2).

  - ring buffer creation with the ability to forward any event to any log
    server, including over TCP. This means that it's now possible to log to a
    TCP syslog server, and that adding new protocols should be fairly easy.

  - further refined and improved debugging (symbols in panic dumps, malloc
    debugging, more activity counters)

  - the default security was improved. For example fork() is now forbidden by
    default, which helps block potential code execution exploits (and also
    blocks external checks by default unless they are explicitly re-enabled).

  - new performance improvements in the scheduler and I/O layers, reducing
    the cost of I/O processing and overall latency. I've learned from private
    discussions that some users noticed tremendous gains there.
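
To make a few of the items above concrete, here is a minimal and untested
configuration sketch assembled from my reading of the 2.2 docs. The names and
addresses (fe_main, be_app, the "buf0" ring, 192.0.2.x) are made up for the
example, and keywords such as "http-request return", "http-check send",
"http-check expect" and the "ring" section should be double-checked against
the documentation before being copied anywhere:

    ring buf0
        description "events forwarded to a TCP syslog server"
        format rfc3164
        size 32768
        timeout connect 5s
        timeout server 10s
        server log1 192.0.2.10:514      # plain TCP syslog endpoint (example)

    defaults
        mode http
        log ring@buf0 local0            # log over TCP through the ring
        timeout connect 5s
        timeout client 30s
        timeout server 30s

    frontend fe_main
        bind :8080
        # dynamic content emission: answer a probe without any backend
        http-request return status 200 content-type "text/plain" string "pong" if { path /ping }
        default_backend be_app

    backend be_app
        # health checks now ride on tcp-check/http-check rules internally
        option httpchk
        http-check send meth GET uri /health hdr Host app.local
        http-check expect status 200
        server app1 192.0.2.20:8000 check

Again, this is only a sketch of how the new keywords fit together, not a
recommended configuration.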

I'm pretty sure there are many other things that I don't remember even while
looking at my notes. I'm aware that HaproxyTech will soon post an in-depth
review on the haproxy.com blog, so just have a look there for all the details
(edit: it's already there: https://www.haproxy.com/blog/announcing-haproxy-2-2/).

There are three things I noted during the development of this version.

The first one is that with the myriad of new tools we're using to help
users and improve our code quality (discourse, travis, cirrus, oss-fuzz,
mailing-list etc), some people really found their role in the project and
are becoming more autonomous. This definitely scales much better and helps
me spend less time on things that are not directly connected to my code
activities, so thank you very much for this (Lukas, Tim, Ilya, Cyril).

The second one is that this is the first version that has been tortured
in production long before the release. And when I'm saying "tortured", I
really mean it, because several of us were suffering as well. But it
allowed us to address very serious issues that would have been a nightmare
to debug and fix post-release. For this I really want to publicly thank
William Dauchy for all his work and involvement on this, and for all the
very detailed reports he's sent us. For me this is proof that running
code early, even on very limited traffic, is enough to catch unacceptable
bugs before they hit you later. And this pays off because he will be able
to deploy 2.2 soon without sweating. Others might face bugs that were not
in the perimeter he tested, hehe :-) I really encourage anyone who can to
do this. I know it's not easy and can be risky, but with some organization
and good prod automation it's possible and is great. What's nice about
reporting bugs during development is that you have a safe version to roll
back to, and fixing the bug can take the time it takes; it's not a
problem! Please think about what it would imply for you to adopt such a
model; it's a real time saver and risk saver for your production.

The last one is that we started to use the -next branch to queue some
pending work (that was already merged) and that the principle of finishing
one version while we're starting to queue some work for the next one is
well accepted and will really help us. I'd like this to continue and grow 
in importance.

Enough talking, now's time to download and 

[PATCH] CLEANUP: contrib/prometheus-exporter: typo fixes for ssl reuse metric

2020-07-07 Thread Pierre Cheynier
A typo I identified while having a look at our metric inventory.

---
 contrib/prometheus-exporter/README   | 2 +-
 contrib/prometheus-exporter/service-prometheus.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/contrib/prometheus-exporter/README b/contrib/prometheus-exporter/README
index 1c5a99241..a1b9e269c 100644
--- a/contrib/prometheus-exporter/README
+++ b/contrib/prometheus-exporter/README
@@ -122,7 +122,7 @@ Exported metrics
 | haproxy_process_max_ssl_rate   | Maximum observed number of SSL sessions per second.   |
 | haproxy_process_current_frontend_ssl_key_rate  | Current frontend SSL Key computation per second over last elapsed second. |
 | haproxy_process_max_frontend_ssl_key_rate  | Maximum observed frontend SSL Key computation per second. |
-| haproxy_process_frontent_ssl_reuse | SSL session reuse ratio (percent).|
+| haproxy_process_frontend_ssl_reuse | SSL session reuse ratio (percent).|
 | haproxy_process_current_backend_ssl_key_rate   | Current backend SSL Key computation per second over last elapsed second.  |
 | haproxy_process_max_backend_ssl_key_rate   | Maximum observed backend SSL Key computation per second.  |
 | haproxy_process_ssl_cache_lookups_total| Total number of SSL session cache lookups.|
diff --git a/contrib/prometheus-exporter/service-prometheus.c b/contrib/prometheus-exporter/service-prometheus.c
index 952558c70..009e817ae 100644
--- a/contrib/prometheus-exporter/service-prometheus.c
+++ b/contrib/prometheus-exporter/service-prometheus.c
@@ -485,7 +485,7 @@ const struct ist promex_inf_metric_names[INF_TOTAL_FIELDS] = {
 	[INF_MAX_SSL_RATE]   = IST("max_ssl_rate"),
 	[INF_SSL_FRONTEND_KEY_RATE]  = IST("current_frontend_ssl_key_rate"),
 	[INF_SSL_FRONTEND_MAX_KEY_RATE]  = IST("max_frontend_ssl_key_rate"),
-	[INF_SSL_FRONTEND_SESSION_REUSE_PCT] = IST("frontent_ssl_reuse"),
+	[INF_SSL_FRONTEND_SESSION_REUSE_PCT] = IST("frontend_ssl_reuse"),
 	[INF_SSL_BACKEND_KEY_RATE]   = IST("current_backend_ssl_key_rate"),
 	[INF_SSL_BACKEND_MAX_KEY_RATE]   = IST("max_backend_ssl_key_rate"),
 	[INF_SSL_CACHE_LOOKUPS]  = IST("ssl_cache_lookups_total"),
-- 
2.27.0




[BUG] haproxy retries dispatch to wrong server

2020-07-07 Thread Michael Wimmesberger
Hi,

I might have found a potentially critical bug in haproxy. It occurs when
haproxy is retrying to dispatch a request to a server. If haproxy fails
to dispatch a request to a server that is either up or has no health
checks enabled it dispatches the request to a random server on any
backend in any mode (tcp or http) as long as they are in the up state
(via tcp-connect or httpchk health checks). In addition haproxy logs the
correct server although it dispatches the request to a wrong server.

I could not reproduce this issue on 2.0.14 or any 2.1.x version. This
happens in tcp and http mode and http requests might be dispatched to
tcp servers and vice versa.

I have tried to narrow this problem down in source using git bisect,
which results in this commit marked as the first bad one:
7b69c91e7d9ac6d7513002ecd3b06c1ac3cb8297.


I have created a setup with a minimal config that reproduces this
unintended behavior with a high probability. The odds of this bug
occurring can be increased by having more backend servers using
health checks. With 2 faulty servers without health checks and 20
servers with health checks I get about a 90-95% chance of a wrong dispatch.


reduced haproxy.cfg:
# note: replace 127.0.0.1 with the internal ip of the host running the
# container, i.e. 172.17.0.1 when using docker, or the container names when
# using a container-network
# make sure port 8999 is not available
defaults
  mode  http
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 5s
  timeout  client 1m
  timeout  server 1m

frontend fe_http_in
  bind 0.0.0.0:8100
  use_backend be_bad.example.com if { req.hdr(host) bad.example.com }
  use_backend be_good.example.com if { req.hdr(host) good.example.com }

backend be_bad.example.com
  server bad.example.com_8999 127.0.0.1:8999 # make sure this port is not bound

backend be_good.example.com
  server good.example.com_8070 127.0.0.1:8070 check

listen li_bad.example.com_tcp_39100:
  bind 0.0.0.0:39100
  mode tcp
  server bad.example.com_tcp_8999 127.0.0.1:8999 # make sure this port is not bound

listen li_good.example.com_tcp_39200:
  bind 0.0.0.0:39200
  mode tcp
  server good.example.com_tcp_8071 127.0.0.1:8071 check

running test-webservices:
podman run -d --rm -p 8070:80 --name nginxdemo nginxdemos/hello
podman run -d --rm -p 8071:8000 --name crccheckdemo crccheck/hello-world
# note: I am running two different webservices to highlight the random
# aspect of the redispatch

run haproxy inside a container:
podman run -it --rm \
  --name haproxy \
  -v "${PWD}/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:z" \
  -p 8100:8100 \
  -p 39100:39100 \
  -p 39200:39200 \
  haproxy:2.0.15-alpine
# note: I have selinux enabled and thus require :z or :Z to mount a file
# or directory into the container


testing using curl:
# expected: HTTP/1.1 503 Service Unavailable
curl -sv -o /dev/null http://bad.example.com --connect-to ::127.0.0.1:8100 2>&1 | grep HTTP/1
# expected: nothing (curl writes "Empty reply from server")
curl -sv -o /dev/null http://127.0.0.1:39100 2>&1 | grep HTTP/1

# expected: HTTP/1.1 200 OK
curl -sv -o /dev/null http://good.example.com --connect-to ::127.0.0.1:8100 2>&1 | grep HTTP/1
# expected: HTTP/1.0 200 OK
curl -sv -o /dev/null http://127.0.0.1:39200 2>&1 | grep HTTP/1


In this setup, consecutive runs of the curls that get mismatched to a wrong
backend server flip between HTTP/1.1 (when dispatched to nginxdemos/hello),
HTTP/1.0 (when dispatched to crccheck/hello-world), and the correct
response (503 or nothing).

I have attached a simple script which recreates this small test-setup
using podman but it could fairly easily be converted to docker.


cheers,
Michael



create-setup.sh
Description: application/shellscript


Re: [PATCH] DOC: typo fixes for configuration.txt

2020-07-07 Thread Daniel Corbett

Hello,


On 7/7/20 7:19 AM, Илья Шипицин wrote:

Daniel,

can you please tell me which spellchecker you use?

(codespell does not see those typos)



I used aspell.


Thanks,

-- Daniel





Re: [PATCH] DOC: typo fixes for configuration.txt

2020-07-07 Thread Илья Шипицин
Daniel,

can you please tell me which spellchecker you use?

(codespell does not see those typos)

Tue, 7 Jul 2020 at 08:34, Daniel Corbett :

> Hello,
>
>
> Here's a quick round of typo corrections for configuration.txt
>
>
> Thanks,
> -- Daniel
>
>
>


Re: [PATCH] DOC: typo fixes for configuration.txt

2020-07-07 Thread Willy Tarreau
> Here's a quick round of typo corrections for configuration.txt

Merged, thank you Daniel
Willy



Bid Writing Workshops Via Zoom

2020-07-07 Thread NFP Workshops


NFP WORKSHOPS
18 Blake Street, York YO1 8QG
Affordable Training Courses for Charities, Schools & Public Sector 
Organisations 




This email has been sent to haproxy@formilux.org
CLICK TO UNSUBSCRIBE FROM LIST
Alternatively send a blank e-mail to unsubscr...@nfpmail2001.co.uk quoting 
haproxy@formilux.org in the subject line.
Unsubscribe requests will take effect within seven days. 




Bid Writing: The Basics
Online via ZOOM 

START 13.30 FINISH 16.00

COST £95.00

TOPICS COVERED

Do you know the most common reasons for rejection? Are you gathering the right 
evidence? Are you making the right arguments? Are you using the right 
terminology? Are your numbers right? Are you learning from rejections? Are you 
assembling the right documents? Do you know how to create a clear and concise 
standard funding bid?

Are you communicating with people or just excluding them? Do you know your own 
organisation well enough? Are you thinking through your projects carefully 
enough? Do you know enough about your competitors? Are you answering the 
questions funders will ask themselves about your application? Are you 
submitting applications correctly?

PARTICIPANTS  

Staff members, volunteers, trustees or board members of charities, schools, not 
for profits or public sector organisations who intend to submit grant funding 
applications to charitable grant making trusts and foundations. People who 
provide advice to these organisations are also welcome.

BOOKING DETAILS   

Participants receive full notes and sample bids by e-mail after the workshop. 
The workshop consists of talk, questions and answers. There are no power points 
or audio visuals used. All places must be booked through the online booking 
system using a debit card, credit card or paypal. We do not issue invoices or 
accept bank or cheque payments. If you do not have a payment card from your 
organisation please use a personal one and claim reimbursement using the 
booking confirmation e-mail as proof of purchase.

BOOKING TERMS

Workshop bookings are non-cancellable and non-refundable. If you are unable to 
participate on the booked date you may allow someone else to log on in your 
place. There is no need to contact us to let us know that there will be a 
different participant. Bookings are non-transferable between dates unless an 
event is postponed. If an event is postponed then bookings will be valid on any 
future scheduled date for that workshop.
   
QUESTIONS

If you have a question please e-mail questi...@nfpmail2001.co.uk You will 
usually receive a response within 24 hours. Due to our training commitments we 
are unable to accept questions by phone. 
Bid Writing: Advanced
Online via ZOOM 

START 13.30 FINISH 16.00

COST £95.00

TOPICS COVERED

Are you applying to the right trusts? Are you applying to enough trusts? Are 
you asking for the right amount of money? Are you applying in the right ways? 
Are your projects the most fundable projects? 

Are you carrying out trust fundraising in a professional way? Are you 
delegating enough work? Are you highly productive or just very busy? Are you 
looking for trusts in all the right places? 

How do you compare with your competitors for funding? Is the rest of your 
fundraising hampering your bids to trusts? Do you understand what trusts are 
ideally looking for?

PARTICIPANTS  

Staff members, volunteers, trustees or board members of charities, schools, not 
for profits or public sector organisations who intend to submit grant funding 
applications to charitable grant making trusts and foundations. People who 
provide advice to these organisations are also welcome.

BOOKING DETAILS   

Participants receive full notes and sample bids by e-mail after the workshop. 
The workshop consists of talk, questions and answers. There are no power points 
or audio visuals used. All places must be booked through the online booking 
system using a debit card, credit card or paypal. We do not issue invoices or 
accept bank or cheque payments. If you do not have a payment card from your 
organisation please use a personal one and claim reimbursement using the 
booking confirmation e-mail as proof of purchase.

BOOKING TERMS

Workshop bookings are non-cancellable and non-refundable. If you are unable to 
participate on the booked date you may allow someone else to log on in your 
place. There is no need to contact us to let us know that there will be a 
different participant. Bookings are non-transferable between dates unless an 
event is postponed. If an event is postponed then bookings will be valid on any 
future scheduled date for that workshop.
   
QUESTIONS

If you have a question please e-mail questi...@nfpmail2001.co.uk You will 
usually receive a response within 24 hours. Due to our training commitments we 
are unable to accept questions by phone. 
Dates & Booking Links
BID WRITING: THE BASICS
Mon 13 Jul 2020    Booking Link
Mon 27 Jul 2020    Booking Link
Mon 10 Aug 2020    Booking Link
Mon 24 Aug 2020    Booking