Re: [PATCH 2/2] CLEANUP: namespace: remove uneeded ns check in my_socketat

2020-02-12 Thread William Dauchy
On Thu, Feb 13, 2020 at 07:48:03AM +0100, Willy Tarreau wrote:
> For me it's different, it's not related to the fact that it's used
> later but to the fact that we only need to undo the namespace change
> if it was changed in the first call. Indeed "ns" is not null only
> when the caller wants to switch the namespace to create a socket. In
> fact as long as there's no namespace configured on the servers, or
> if we're trying to connect to a dispatch or transparent address, ns
> will be NULL and we'll save two setns calls (i.e. almost always the
> case).

indeed, the code was not very clear in that regard, thank you for the
clarification. I overlooked the case where we only go through this function
to get a file descriptor without a namespace.

> In fact I think we could simplify the logic a bit and merge the code
> into the inline function present in common/namespace.h. This would also
> require to export default_namespace:
> 
> 
>   static inline int my_socketat(const struct netns_entry *ns, int domain, int type, int protocol)
>   {
>   #ifdef USE_NS
>   int sock;
> 
>   if (likely(!ns || default_namespace < 0))
>   goto no_ns;
> 
>   if (setns(ns->fd, CLONE_NEWNET) == -1)
>   return -1;
> 
>   sock = socket(domain, type, protocol);
> 
>   if (setns(default_namespace, CLONE_NEWNET) == -1) {
>   close(sock);
>   sock = -1;
>   }
>   return sock;
>   no_ns:
>   #endif
>   return socket(domain, type, protocol);
>   }
> 
> This allows to remove the !ns || default_namespace logic from
> the function's epilogue. What do you think ?

Indeed, that's clearer in my opinion. I guess I'll let you handle that
since you wrote the new proposal.

Thank you,
-- 
William



Re: how to properly reload haproxy (from systemd + master-worker) ?

2020-02-12 Thread Илья Шипицин
I confirm that adding "no option start-on-reload" resolves the reload issue.

I'll report it to the dataplane documentation.

Tue, Feb 4, 2020 at 16:01, William Lallemand :

> On Tue, Feb 04, 2020 at 12:48:18AM +0500, Илья Шипицин wrote:
> > > > Sun, Feb 2, 2020 at 22:58, Tim Düsterhus :
> > > >
> > > > [...]
> > > > > > Feb 02 20:50:07 xxx systemd[1]: haproxy.service failed.
> > > > >
> > > > > ... leading to the unit being stopped.
> > > > >
> > > > > So you would have to find out why the dataplane API dies. My educated
> > > > > guess would be that it fails because the port is already in use by
> > > > > the old dataplane API process.
> > > > >
> > >
> > > From what I understand of the dataplaneapi, it is reloaded upon a USR1
> > > signal, which is sent by the master process upon a reload.
> > >
> > > So you probably just need to add "no option start-on-reload" in your
> > > program section to prevent HAProxy from launching the dataplane-api again.
> > >
> >
> >
> > thank you, I'll try
> >
> > maybe it should be added to docs.
> >
>
> Could you open an issue on the dataplaneapi bugtracker? Because I'm not
> sure this is the right behavior.
>
> The doc speaks about "no option start-on-reload" but only if used by
> docker.
>
>
> https://www.haproxy.com/documentation/hapee/1-9r1/configuration/dataplaneapi/
>
> --
> William Lallemand
>


Re: is it allowed to bind both http and https on the same port ?

2020-02-12 Thread Willy Tarreau
On Thu, Feb 13, 2020 at 11:40:08AM +0500, Илья Шипицин wrote:
> hello,
> while playing with dataplane api (I copy-pasted code), accidentally I
> created the following config
> 
> frontend git_example_com_https_frontend
>   mode http
>   bind 10.216.7.1:7080 name http
>   bind 10.216.7.1:7080 name https crt /etc/haproxy/bundle.pem ssl alpn h2,http/1.1
>   default_backend git_example_com_https_backend
>   redirect scheme https code 301 if !{ ssl_fc }
> 
> both bind lines include the same 7080 port. haproxy does not complain. is
> that configuration correct ? should haproxy complain on such config ?

Theoretically it's not valid. But it's hard to make it complain about this,
as the IP and port are just some of the elements; you can also have other
differentiators like the interface, the namespace, etc.

For example, this config probably wouldn't make much sense:

  bind 10.216.7.1:7080 ssl crt /etc/haproxy/bundle1.pem
  bind 10.216.7.1:7080 ssl crt /etc/haproxy/bundle2.pem

But what if someone is purposely trying to progressively deploy a new
certificate (4096bit or ECDSA) and observe the performance impacts ?

Similarly, this one doesn't seem to make much sense at first glance:

  bind 10.216.7.1:7080 ssl crt /etc/haproxy/bundle.pem alpn h2,http/1.1
  bind 10.216.7.1:7080 ssl crt /etc/haproxy/bundle.pem alpn http/1.1

But someone might want to enforce H1 on some clients to collect various
metrics or to experiment a bit.

Thus I'd tend to agree that having both SSL and non-SSL on the same
IP+port+interface+namespace doesn't seem to make much sense, but it's
just *one* very likely wrong combination in the middle of a lot of
other suspicious ones.

In fact that's exactly the type of things I'd like a diag utility to
detect. For me it's very similar to the case where two servers have
the same cookie value in a farm. Very often it's a copy-paste mistake
but sometimes it's on purpose and you don't want to emit warnings or
even less errors when seeing this. And with such a tool, I'd be happy
to remove some of the warnings we have.

I initially thought we'd have haproxy itself run advanced self-checks
on a config, but given that such diags would require more complex logic
and more relations between config elements, it would needlessly complicate
what is already complicated, so better delegate that to external tools.
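
A rough sketch of the kind of check such an external diag tool could run (the
struct and field names below are hypothetical placeholders, not haproxy's
internal types):

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical, simplified view of a parsed bind line. */
  struct bind_line {
          const char *frontend;
          const char *addr;   /* "ip:port" exactly as written in the config */
          int is_ssl;         /* 1 if "ssl" was present on the bind line */
  };

  /* Warn when a frontend binds the same address both with and without SSL. */
  static void diag_mixed_ssl(const struct bind_line *b, size_t n)
  {
          size_t i, j;

          for (i = 0; i < n; i++)
                  for (j = i + 1; j < n; j++)
                          if (!strcmp(b[i].frontend, b[j].frontend) &&
                              !strcmp(b[i].addr, b[j].addr) &&
                              b[i].is_ssl != b[j].is_ssl)
                                  printf("diag: %s: %s bound both with and without SSL\n",
                                         b[i].frontend, b[i].addr);
  }

  int main(void)
  {
          const struct bind_line binds[] = {
                  { "git_example_com_https_frontend", "10.216.7.1:7080", 0 },
                  { "git_example_com_https_frontend", "10.216.7.1:7080", 1 },
          };

          diag_mixed_ssl(binds, sizeof(binds) / sizeof(binds[0]));
          return 0;
  }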

Willy



Re: [PATCH 2/2] CLEANUP: namespace: remove uneeded ns check in my_socketat

2020-02-12 Thread Willy Tarreau
On Wed, Feb 12, 2020 at 09:42:06PM +0100, William Dauchy wrote:
> On Thu, Feb 13, 2020 at 01:31:51AM +0500, Илья Шипицин wrote:
> > we "use" it.
> > depending on true/false we either return -1 or not
> 
> I guess it is present in the first condition to be able to access
> `ns->fd` safely in setns; but the second condition does not access `ns`
> later.

For me it's different, it's not related to the fact that it's used
later but to the fact that we only need to undo the namespace change
if it was changed in the first call. Indeed "ns" is not null only
when the caller wants to switch the namespace to create a socket. In
fact as long as there's no namespace configured on the servers, or
if we're trying to connect to a dispatch or transparent address, ns
will be NULL and we'll save two setns calls (i.e. almost always the
case).

In fact I think we could simplify the logic a bit and merge the code
into the inline function present in common/namespace.h. This would also
require to export default_namespace:


  static inline int my_socketat(const struct netns_entry *ns, int domain, int type, int protocol)
  {
  #ifdef USE_NS
int sock;

if (likely(!ns || default_namespace < 0))
goto no_ns;

if (setns(ns->fd, CLONE_NEWNET) == -1)
return -1;

sock = socket(domain, type, protocol);

if (setns(default_namespace, CLONE_NEWNET) == -1) {
close(sock);
sock = -1;
}
return sock;
  no_ns:
  #endif
return socket(domain, type, protocol);
  }

This allows to remove the !ns || default_namespace logic from
the function's epilogue. What do you think ?

Willy



is it allowed to bind both http and https on the same port ?

2020-02-12 Thread Илья Шипицин
hello,
while playing with dataplane api (I copy-pasted code), accidentally I
created the following config

frontend git_example_com_https_frontend
  mode http
  bind 10.216.7.1:7080 name http
  bind 10.216.7.1:7080 name https crt /etc/haproxy/bundle.pem ssl alpn h2,http/1.1
  default_backend git_example_com_https_backend
  redirect scheme https code 301 if !{ ssl_fc }

both bind lines include the same 7080 port. haproxy does not complain. is
that configuration correct ? should haproxy complain on such config ?

(I did not mean to put that into production, it happened accidentally).

Cheers,
Ilya Shipitcin


Re: stable-bot: WARNING: 54 bug fixes in queue for next release - 2.1

2020-02-12 Thread Willy Tarreau
Hi Daniel,

On Wed, Feb 12, 2020 at 06:47:07PM -0500, Daniel Corbett wrote:
> I'll do what I can to improve it starting with working on moving these items
> to a thread and increasing the calculations for expected release dates for
> older branches.

Probably we can start simple for the calculations: consider that there are
"recent" and "older" branches, that the last two (i.e. one LTS and one
non-LTS) are "recent", and the others are "older". Just double all delays
for the older ones, and maybe update them at most once every two weeks
(don't do complicated calculations, do it only on Sundays whose day-of-year
is even).
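
A tiny sketch of that heuristic (only an illustration of the rule described
above, not the stable-bot's actual code):

  #include <stdbool.h>
  #include <stdio.h>

  /* Older branches simply get their delays doubled. */
  int release_delay(int base_delay_days, bool is_older_branch)
  {
          return is_older_branch ? 2 * base_delay_days : base_delay_days;
  }

  /* Refresh older branches only on Sundays with an even day-of-year:
   * consecutive Sundays are 7 days apart, so the parity flips every week
   * and this fires roughly once every two weeks. */
  bool should_update_older(int day_of_week /* 0 = Sunday */, int day_of_year)
  {
          return day_of_week == 0 && (day_of_year % 2) == 0;
  }

  int main(void)
  {
          printf("older branch, 14-day base delay -> %d days\n",
                 release_delay(14, true));
          return 0;
  }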

> I'll also take your other suggestions into account as well but I think these
> might be a good starting point.
> 
> I may be a little slow on this but I'll put something together that can
> hopefully make the bot more useful.

No problem. I really do value these announcements (even if I totally understand
how they can be boring). Regularly when I see them, I can't help but have a
quick check and think "ah shit, we're late", and that was exactly the point.
I'm also thinking right now that it could probably be more productive to
send them in the middle of the week (Wednesday?) than on Sundays: they will
be seen instantly instead of appearing in the middle of the backlog noise
of the past weekend.

I think in the meantime you can already kill 1.9 from the list of announcements.

Thanks,
Willy



Re: Mirror concepts

2020-02-12 Thread Aleksandar Lazic


Hi.


Feb 13, 2020 1:04:58 AM Panneer Selvam :

> Hi, I need quick help with HAProxy mirroring concepts

Can you please tell us a little bit more about what you need, and please
reply to all, thanks.

Have you read and understood the post?
https://www.haproxy.com/blog/haproxy-traffic-mirroring-for-real-world-testing/

> Thanks Panneer

Regards
Aleks




Mirror concepts

2020-02-12 Thread Panneer Selvam
Hi
I need quick help with HAProxy mirroring concepts

Thanks
Panneer


Re: stable-bot: WARNING: 54 bug fixes in queue for next release - 2.1

2020-02-12 Thread Julien Pivotto
On 12 Feb 18:47, Daniel Corbett wrote:
> Hello,
> 
> 
> On 2/12/20 12:55 PM, Tim Düsterhus wrote:
> > 
> > Threading would solve most of the pain points for me, because the emails
> > will nicely be merged on both my computer and my phone. For the
> > remaining points I don't really care that much. I'll leave this up to
> > the people that actually read the emails. I'm currently just marking
> > them as read without taking a single look :-) Most of by curiosity is
> > satisfied using git and the bug list on haproxy.org.
> 
> 
> I just wanted to acknowledge this thread and let you know that I appreciate
> the suggestions.
> 
> I'll do what I can to improve it starting with working on moving these items
> to a thread and increasing the calculations for expected release dates for
> older branches.
> 
> I'll also take your other suggestions into account as well but I think these
> might be a good starting point.
> 
> I may be a little slow on this but I'll put something together that can
> hopefully make the bot more useful.
> 
> 
> Thanks again,
> 
> -- Daniel

Hi Daniel,

I want to tell another story.

I find the bot useful and appreciate it. We get more real spam on the
mailing list than emails from the bot, and it gives a good reminder
about what's coming next and what the bugs are. In the current state of
HAProxy development, where lots of things only happen on the mailing
list and branches are not all mirrored on github, this is really welcome.


-- 
 (o-Julien Pivotto
 //\Open-Source Consultant
 V_/_   Inuits - https://www.inuits.eu




Re: stable-bot: WARNING: 54 bug fixes in queue for next release - 2.1

2020-02-12 Thread Daniel Corbett

Hello,


On 2/12/20 12:55 PM, Tim Düsterhus wrote:


Threading would solve most of the pain points for me, because the emails
will nicely be merged on both my computer and my phone. For the
remaining points I don't really care that much. I'll leave this up to
the people that actually read the emails. I'm currently just marking
them as read without taking a single look :-) Most of my curiosity is
satisfied using git and the bug list on haproxy.org.



I just wanted to acknowledge this thread and let you know that I 
appreciate the suggestions.


I'll do what I can to improve it starting with working on moving these 
items to a thread and increasing the calculations for expected release 
dates for older branches.


I'll also take your other suggestions into account as well but I think 
these might be a good starting point.


I may be a little slow on this but I'll put something together that can 
hopefully make the bot more useful.



Thanks again,

-- Daniel





Re: [PATCH 2/2] CLEANUP: namespace: remove uneeded ns check in my_socketat

2020-02-12 Thread William Dauchy
On Thu, Feb 13, 2020 at 01:31:51AM +0500, Илья Шипицин wrote:
> we "use" it.
> depending on true/false we either return -1 or not

I guess it is present in the first condition to be able to access
`ns->fd` safely in setns; but the second condition does not access `ns`
later.

-- 
William



Re: [PATCH 2/2] CLEANUP: namespace: remove uneeded ns check in my_socketat

2020-02-12 Thread Илья Шипицин
Thu, Feb 13, 2020 at 01:26, William Dauchy :

> we check ns variable but we don't use it later
>

we "use" it.
depending on true/false we either return -1 or not


>
> Signed-off-by: William Dauchy 
> ---
>  src/namespace.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/src/namespace.c b/src/namespace.c
> index 89a968e36..3536629bc 100644
> --- a/src/namespace.c
> +++ b/src/namespace.c
> @@ -120,7 +120,7 @@ int my_socketat(const struct netns_entry *ns, int domain, int type, int protocol
>
> sock = socket(domain, type, protocol);
>
> -   if (default_namespace >= 0 && ns && setns(default_namespace, CLONE_NEWNET) == -1) {
> +   if (default_namespace >= 0 && setns(default_namespace, CLONE_NEWNET) == -1) {
> if (sock >= 0)
> close(sock);
> return -1;
> --
> 2.25.0
>
>
>


[PATCH 1/2] BUG/MINOR: namespace: avoid closing fd when socket failed in my_socketat

2020-02-12 Thread William Dauchy
we cannot return right after socket opening as we need to move back to
the default namespace first

this should fix github issue #500

this might be backported to all versions >= 1.6

Fixes: b3e54fe387c7c1 ("MAJOR: namespace: add Linux network namespace
support")
Signed-off-by: William Dauchy 
---
 src/namespace.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/namespace.c b/src/namespace.c
index f23da48f8..89a968e36 100644
--- a/src/namespace.c
+++ b/src/namespace.c
@@ -121,7 +121,8 @@ int my_socketat(const struct netns_entry *ns, int domain, int type, int protocol
sock = socket(domain, type, protocol);
 
if (default_namespace >= 0 && ns && setns(default_namespace, CLONE_NEWNET) == -1) {
-   close(sock);
+   if (sock >= 0)
+   close(sock);
return -1;
}
return sock;
-- 
2.25.0




[PATCH 2/2] CLEANUP: namespace: remove uneeded ns check in my_socketat

2020-02-12 Thread William Dauchy
we check ns variable but we don't use it later

Signed-off-by: William Dauchy 
---
 src/namespace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/namespace.c b/src/namespace.c
index 89a968e36..3536629bc 100644
--- a/src/namespace.c
+++ b/src/namespace.c
@@ -120,7 +120,7 @@ int my_socketat(const struct netns_entry *ns, int domain, int type, int protocol
 
sock = socket(domain, type, protocol);
 
-   if (default_namespace >= 0 && ns && setns(default_namespace, CLONE_NEWNET) == -1) {
+   if (default_namespace >= 0 && setns(default_namespace, CLONE_NEWNET) == -1) {
if (sock >= 0)
close(sock);
return -1;
-- 
2.25.0




[PATCH 0/2] namespace cleaning in my_socketat

2020-02-12 Thread William Dauchy
Two minor patches for namespace.

William Dauchy (2):
  BUG/MINOR: namespace: avoid closing fd when socket failed in
my_socketat
  CLEANUP: namespace: remove uneeded ns check in my_socketat

 src/namespace.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

-- 
2.25.0




Re: stable-bot: WARNING: 54 bug fixes in queue for next release - 2.1

2020-02-12 Thread Tim Düsterhus
Willy,

Am 12.02.20 um 18:09 schrieb Willy Tarreau:
> Hi Tim,
> 
> On Wed, Feb 12, 2020 at 04:36:58PM +0100, Tim Düsterhus wrote:
>>> Thus the computed ideal release date for 2.1.3 would be 2020/02/03, which 
>>> was one week ago.
>>> [...]
>>> ---
>>> The haproxy stable-bot is freely provided by HAProxy Technologies to help 
>>> improve the quality of each HAProxy release.  If you have any issue with 
>>> these emails or if you want to suggest some improvements, please post them 
>>> on the list so that the solutions suiting the most users can be found.
>>>
>>
>> It appears that with the new backporting helper scripts you immediately
>> backport fixes, instead of doing that in bulk whenever you feel like a
>> new release. This causes this bot to send more emails than before. It's
>> regularly 4 emails every Sunday now.
> 
> I don't think the bot gets triggered on new patches additions, though
> I could be wrong. Instead I think that as it now announces 4 versions,
> the messages are becoming more visible.

It sends an email once per week (on Sunday), but only if there are
actually patches in queue. Previously the backporting process for the
older branches often happened right before actually releasing a new
version with no Sunday in between. Thus no email for those.

>> 2. I don't expect any 1.8 releases for the nearer future,
> 
> So I had a quick look. Last one was 2 months ago, pending fixes there
> are mostly non-critical so I think we can make it wait a bit more.

That's exactly what I assumed.

>> so the bot
>> will continue to annoy the list about an optimal release date on
>> December, 24th which is long in the past. Not acting on the emails and
>> the calculated date makes this whole "optimal" release date calculation
>> rather useless.
> 
> Not that much because it bores us and adds pressure on the stable team
> to produce a release to shut them up. The bot is only a human manipulation
> tool, nothing more.

Yes. That's what I was attempting to say: I've never seen a release at
the calculated "optimal release date". If that's never happening we
could also employ a random number generator for the "optimal release date".

>> 3. They are "boring": One does not need to be reminded about the same
>> bugs every week, with a single addition buried somewhere deep in the list.
>>
>> Proposal:
>>
>> - Merge the 4 emails into a single one. Alternatively make sure they
>> thread so that they can be collapsed within the recipients mailers.
> 
> I'd rather not merge them into a single one because having a visual
> reminder of a rotting branch is useful, however I agree that having

The branch information could be put into the subject. Like this:

stable-bot: Bugfixes waiting for a release 2.1 (60), 2.0 (50), 1.9 (34)

> a single thread would be nice! Also we might possibly prefer to split
> the list of pending patches from the announces of desired releases.
> 
> But I'm having an idea based on what we're doing right now with the
> backports: the reality is that older stable branches need to move slower
> than recent ones. So the algorithm calculating the delays should be
> adjusted to modulate the delay based on the branch's age. And very likely
> it should avoid talking about a branch when the expected release date is
> more than 20 days ahead, and maybe talk about it less often if the date
> was missed (e.g. once every two weeks only, then once a month, i.e. 
> when the age is 7-13 days, then 21-27, then 35-41, then 65-71, then
> 95-101, then 125-131 and so on).
> 
> In addition, I think that once we emit a technical version (an odd one)
> we can stop talking about the previous one. Thus, we have 2.1 so we can
> stop talking about 1.9 at all. It will also help remind users that it
> is about to disappear.
> 
>> - Instead of listing all patches backported just list what *changed* in
>> the past week. This would make the emails more interesting, because they
>> contain *new* information. Or list all of them with a special 'new
>> patches' section at the top.
> 
> That's an interesting idea. What would you think about this then :
> 
>   - limited set of announces as detailed above ;
>   - one mail per branch (threaded) with only the changes since
> the last announce ;
>   - a summary one at the end of the thread with a summary of all
> pending patches for all supported branches.

I'd make the summary the top level email of the thread, because the
first email is the one that's most readily available. It should also be
easier to implement, because the other ones can simply be 'In-Reply-To'
that summary.

Threading would solve most of the pain points for me, because the emails
will nicely be merged on both my computer and my phone. For the
remaining points I don't really care that much. I'll leave this up to
the people that actually read the emails. I'm currently just marking
them as read without taking a single look :-) Most of my curiosity is
satisfied using git and the bug list on haproxy.org.

Best regards
Tim Düsterhus

Re: stable-bot: WARNING: 54 bug fixes in queue for next release - 2.1

2020-02-12 Thread Willy Tarreau
Hi Tim,

On Wed, Feb 12, 2020 at 04:36:58PM +0100, Tim Düsterhus wrote:
> > Thus the computed ideal release date for 2.1.3 would be 2020/02/03, which 
> > was one week ago.
> > [...]
> > ---
> > The haproxy stable-bot is freely provided by HAProxy Technologies to help 
> > improve the quality of each HAProxy release.  If you have any issue with 
> > these emails or if you want to suggest some improvements, please post them 
> > on the list so that the solutions suiting the most users can be found.
> > 
> 
> It appears that with the new backporting helper scripts you immediately
> backport fixes, instead of doing that in bulk whenever you feel like a
> new release. This causes this bot to send more emails than before. It's
> regularly 4 emails every Sunday now.

I don't think the bot gets triggered on new patches additions, though
I could be wrong. Instead I think that as it now announces 4 versions,
the messages are becoming more visible.

> I assume that 1.7 and lower is already excluded from the emails.

It seems so to me as well.

> 1. Can we get a 2.1 release in the next days? I'm primarily asking
> because of the backported Lua package path patches :-)

Comment ignored, as requested :-)  Others have been requesting it
as well for about a week now, it's just that it takes time and it's
always hard to settle on something when you see issues accumulating.

> 2. I don't expect any 1.8 releases for the nearer future,

So I had a quick look. Last one was 2 months ago, pending fixes there
are mostly non-critical so I think we can make it wait a bit more.

> so the bot
> will continue to annoy the list about an optimal release date on
> December, 24th which is long in the past. Not acting on the emails and
> the calculated date makes this whole "optimal" release date calculation
> rather useless.

Not that much because it bores us and adds pressure on the stable team
to produce a release to shut them up. The bot is only a human manipulation
tool, nothing more.

> 3. They are "boring": One does not need to be reminded about the same
> bugs every week, with a single addition buried somewhere deep in the list.
> 
> Proposal:
> 
> - Merge the 4 emails into a single one. Alternatively make sure they
> thread so that they can be collapsed within the recipients mailers.

I'd rather not merge them into a single one because having a visual
reminder of a rotting branch is useful, however I agree that having
a single thread would be nice! Also we might possibly prefer to split
the list of pending patches from the announces of desired releases.

But I'm having an idea based on what we're doing right now with the
backports: the reality is that older stable branches need to move slower
than recent ones. So the algorithm calculating the delays should be
adjusted to modulate the delay based on the branch's age. And very likely
it should avoid talking about a branch when the expected release date is
more than 20 days ahead, and maybe talk about it less often if the date
was missed (e.g. once every two weeks only, then once a month, i.e. 
when the age is 7-13 days, then 21-27, then 35-41, then 65-71, then
95-101, then 125-131 and so on).
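
A small sketch of that reminder schedule (an illustration of the windows
above, not the bot's actual code), deciding whether to nag again based on how
many days the computed release date has been missed:

  #include <stdbool.h>
  #include <stdio.h>

  /* Days past the missed "ideal" release date during which a reminder is
   * sent: every two weeks at first, then only about once a month. */
  static const struct { int lo, hi; } windows[] = {
          { 7, 13 }, { 21, 27 }, { 35, 41 }, { 65, 71 }, { 95, 101 }, { 125, 131 },
  };

  bool should_remind(int days_overdue)
  {
          size_t i;

          for (i = 0; i < sizeof(windows) / sizeof(windows[0]); i++)
                  if (days_overdue >= windows[i].lo && days_overdue <= windows[i].hi)
                          return true;
          return false;
  }

  int main(void)
  {
          printf("10 days overdue -> %d, 50 days overdue -> %d\n",
                 should_remind(10), should_remind(50));
          return 0;
  }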

In addition, I think that once we emit a technical version (an odd one)
we can stop talking about the previous one. Thus, we have 2.1 so we can
stop talking about 1.9 at all. It will also help remind users that it
is about to disappear.

> - Instead of listing all patches backported just list what *changed* in
> the past week. This would make the emails more interesting, because they
> contain *new* information. Or list all of them with a special 'new
> patches' section at the top.

That's an interesting idea. What would you think about this then :

  - limited set of announces as detailed above ;
  - one mail per branch (threaded) with only the changes since
the last announce ;
  - a summary one at the end of the thread with a summary of all
pending patches for all supported branches.

I really want to have the last one so that we don't forget the
critical bug merged at the beginning that deserves a quick fix that
we didn't have the time to emit due to being at a conference and that
we later forgot about, you see. I remember having let some versions
rot for a long time with very important fixes in them. When you're
on the backporter side, you remember having done the painful backport
work and you never know if a release was emitted with this or that fix.
For users it's the opposite, they need their fixes in a released version.
This complete list is what highlights the divergence between what one thinks
has been done and what others expect.

Note, I'm commenting on the process and what we'd like to have, but I'm
not the one maintaining the script, I don't know how feasible all of this
is :-)

Cheers,
Willy



[ANNOUNCE] haproxy-2.1.3

2020-02-12 Thread Willy Tarreau
Hi,

HAProxy 2.1.3 was released on 2020/02/12. It added 86 new commits
after version 2.1.2.

It's clear that 2.1 has been one of the calmest releases in a while, to
the point of making us forget that it still had a few fixes pending that
would be pleasant to have in a released version! So after accumulating
fixes for 7 weeks, it's about time to have another one!

Here are the most relevant fixes:

  - pools: there is an ABA race condition in pool_flush() (which is called
when stopping as well as under memory pressure) which can lead to a
crash. It's been there since 1.9 and is very hard to trigger, but if
you run with many threads and reload very often you may occasionally
hit it, seeing a trace of the old process crashing in your system
logs.

  - there was a bug in the way our various hashes were calculated, some
of them were considering the inputs as signed chars instead of
unsigned ones, so some non-ASCII characters would hash differently
across different architectures and wouldn't match another component's
calculation (e.g. a CRC32 inserted in a header would differ when given
values with the 8th bit set, or applied to the PROXY protocol header).
The bug has been there since 1.5-dev20 but became visible since it
affected Postfix's validation of the PROXY protocol's CRC32. It's
unlikely that anyone will ever witness it if it didn't happen already,
but I tagged it "major" to make sure it is properly backported to
distro packages, since not having it on certain nodes may sometimes
result in hash inconsistencies which can be very hard to diagnose (a small
illustration of the signed/unsigned difference follows after this list).

  - the addition of the Early-Data header when using 0rtt could wrongly
be emitted during SSL handshake as well.

  - health checks could crash if using handshakes (e.g. SSL) mixed with
DNS that takes time to retrieve an address, causing an attempt to
use an incompletely initialized connection.

  - the peers listening socket was missing from the seamless reload,
possibly causing some failed bindings when not using reuseport,
resulting in the new process giving up.

  - splicing could often end up on a timeout because after the last block
we did not switch back to HTX to complete the message.

  - fixed a small race affecting idle connections, allowing one thread to
pick a connection at the same moment another one would decide to free
it because there are too many idle.

  - response redirects were appended to the actual response instead of
replacing it. This could cause various errors, including data
corruption on the client if the entire response didn't fit into the
buffer at once.

  - when stopping or when releasing a few connections after a listener's
maxconn was reached, we could distribute some work to inexistent
threads if the listener had "1/odd" or "1/even" while the process
had less than 64 threads. An easy workaround for this is to explicitly
reference the thread numbers instead.

  - when proxying an HTTP/1 client to an HTTP/2 server, make sure to clean
up the "TE" header from anything but "trailers", otherwise the server
may reject a request if it came from a browser placing "gzip" there.

  - the H2 mux had an incorrect buffer full detection causing the send
phase to stop on a fragment boundary then to immediately wake up all
waiting threads to go on, resulting in an excessive CPU usage in some
tricky situations. It is possible that those using H2 with many streams
per connection and moderately large objects, like Luke's map servers,
could observe a CPU usage drop.

  - it was possible to lose the master-worker status after a failed reload
when it was only mentioned in the config and not on the command line.

  - when decoding the Netscaler's CIP protocol we forgot to allocate the
storage for the src/dst addresses, crashing the process.

  - upon pipe creation failure due to shortage of file descriptors, the
struct pipe was still returned after having been released, quickly
crashing the process. Fortunately the automatic maxconn/maxpipe
settings do not allow this situation to happen but very old configs
still having "ulimit-n" could have been affected.

  - the "tcp-request session" rules would report an error upon a "reject"
action, making the listener throttle itself to protect resources,
which could actually amplify the problem.

  - the "commit ssl cert" command on the CLI used the old SSL_CTX instead
of the new one, which caused some certs not to work anymore (found on
openssl-1.0.2 with ECDSA+ECDHE). There are quite a number of other SSL
fixes for small bugs that were found while troubleshooting this
issue, mainly in relation with dynamic cert updates.

  - the H1 mux could attempt to perform a sendto() when facing new data
after having already failed, resulting in excess calls to sendto().
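
A small illustration of the signed/unsigned char issue mentioned in the hash
fix above (a toy hash written only for demonstration, not haproxy's actual
hash functions):

  #include <stdio.h>

  unsigned int hash_with_plain_char(const char *s, int len)
  {
          unsigned int h = 0;
          int i;

          for (i = 0; i < len; i++)
                  h = h * 31 + s[i];  /* on ABIs where plain char is signed
                                         (e.g. x86), 0xE9 is taken as -23 here */
          return h;
  }

  unsigned int hash_with_unsigned_char(const char *s, int len)
  {
          unsigned int h = 0;
          int i;

          for (i = 0; i < len; i++)
                  h = h * 31 + (unsigned char)s[i];  /* 0xE9 stays 233 everywhere */
          return h;
  }

  int main(void)
  {
          const char buf[] = { 'a', (char)0xE9, 'z' };  /* a byte with the 8th bit set */

          /* The two results differ as soon as a byte >= 0x80 is present, and the
           * first one also differs between signed-char and unsigned-char targets,
           * which is how two components could compute mismatching checksums. */
          printf("plain char:    %u\n", hash_with_plain_char(buf, 3));
          printf("unsigned char: %u\n", hash_with_unsigned_char(buf, 3));
          return 0;
  }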

The rest has 

Re: stable-bot: WARNING: 54 bug fixes in queue for next release - 2.1

2020-02-12 Thread Tim Düsterhus
Willy,

Am 12.02.20 um 16:36 schrieb Tim Düsterhus:
> 1. Can we get a 2.1 release in the next days? I'm primarily asking
> because of the backported Lua package path patches :-)

Talk about timing. I'm seeing a 2.1.3 tag in git now. So ignore that
point (1).

Best regards
Tim Düsterhus



Re: stable-bot: WARNING: 54 bug fixes in queue for next release - 2.1

2020-02-12 Thread Tim Düsterhus
List,
Willy,

[removed bot from Cc]

Am 09.02.20 um 01:00 schrieb stable-...@haproxy.com:
> This is a friendly bot that watches fixes pending for the next haproxy-stable 
> release!  One such e-mail is sent periodically once patches are waiting in 
> the last maintenance branch, and an ideal release date is computed based on 
> the severity of these fixes and their merge date.  Responses to this mail 
> must be sent to the mailing list.
> 
> Last release 2.1.2 was issued on 2019/12/21.  There are currently 54 patches 
> in the queue cut down this way:
> - 2 MAJOR, first one merged on 2020/01/20
> - 20 MEDIUM, first one merged on 2020/01/09
> - 32 MINOR, first one merged on 2020/01/07
> 
> Thus the computed ideal release date for 2.1.3 would be 2020/02/03, which was 
> one week ago.
> [...]
> ---
> The haproxy stable-bot is freely provided by HAProxy Technologies to help 
> improve the quality of each HAProxy release.  If you have any issue with 
> these emails or if you want to suggest some improvements, please post them on 
> the list so that the solutions suiting the most users can be found.
> 

It appears that with the new backporting helper scripts you immediately
backport fixes, instead of doing that in bulk whenever you feel like a
new release. This causes this bot to send more emails than before. It's
regularly 4 emails every Sunday now. I assume that 1.7 and lower is
already excluded from the emails.

1. Can we get a 2.1 release in the next days? I'm primarily asking
because of the backported Lua package path patches :-)

2. I don't expect any 1.8 releases for the nearer future, so the bot
will continue to annoy the list about an optimal release date on
December, 24th which is long in the past. Not acting on the emails and
the calculated date makes this whole "optimal" release date calculation
rather useless.

3. They are "boring": One does not need to be reminded about the same
bugs every week, with a single addition buried somewhere deep in the list.

Proposal:

- Merge the 4 emails into a single one. Alternatively make sure they
thread so that they can be collapsed within the recipients mailers.
- Instead of listing all patches backported just list what *changed* in
the past week. This would make the emails more interesting, because they
contain *new* information. Or list all of them with a special 'new
patches' section at the top.

Best regards
Tim Düsterhus



Re: [PATCH v2] BUG/MINOR: tcp: don't try to set defaultmss when value is negative

2020-02-12 Thread Willy Tarreau
On Wed, Feb 12, 2020 at 03:53:04PM +0100, William Dauchy wrote:
> when `getsockopt` previously failed, we were trying to set defaultmss
> with -2 value.

Now merged, thanks William.

Willy



unsubscribe

2020-02-12 Thread Dustin Schuemann



Re: [PATCH] BUG/MINOR: tcp: don't try to set defaultmss when value is negative

2020-02-12 Thread William Dauchy
On Wed, Feb 12, 2020 at 03:32:07PM +0100, Willy Tarreau wrote:
> I'd do it differently so that we neither try nor report an error if
> the default mss was not set. Indeed, if it already failed earlier,
> we already had an issue, so no need to fail again. So if you agree
> I'll change it to :
> 
>  if (defaultmss > 0 &&
>  tmpmaxseg != defaultmss &&
>  setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, sizeof(defaultmss)) == -1)

agreed, sent v2 as well.

Thanks,
-- 
William



[PATCH v2] BUG/MINOR: tcp: don't try to set defaultmss when value is negative

2020-02-12 Thread William Dauchy
when `getsockopt` previously failed, we were trying to set defaultmss
with -2 value.

this is a followup of github issue #499

this should be backported to all versions >= v1.8

Fixes: 153659f1ae69a1 ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy 
---
 src/proto_tcp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index a9d5229c9..044ade430 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -906,9 +906,9 @@ int tcp_bind_listener(struct listener *listener, char *errmsg, int errlen)
 defaultmss = default_tcp6_maxseg;
 
 getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &tmpmaxseg, &len);
-   if (tmpmaxseg != defaultmss && setsockopt(fd, IPPROTO_TCP,
-   TCP_MAXSEG, &defaultmss,
-   sizeof(defaultmss)) == -1) {
+   if (defaultmss > 0 &&
+   tmpmaxseg != defaultmss &&
+   setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, sizeof(defaultmss)) == -1) {
msg = "cannot set MSS";
err |= ERR_WARN;
}
-- 
2.25.0




Re: [PATCH] travis-ci: remove "allow failures", add ERR=1, modernize build-ssl.sh script

2020-02-12 Thread Willy Tarreau
Hi Ilya,

series now applied, thanks!
Willy



Re: [PATCH v3] MINOR: build: add aix72-gcc build TARGET and power{8,9} CPUs

2020-02-12 Thread Willy Tarreau
Hi Chris,

On Wed, Feb 12, 2020 at 08:25:36AM +0100, Chris wrote:
> I agree! I just made a new patch which adds the missing documentation
> for the new build-TARGET as well as the two new CPU-types.

Perfect, now applied. Thanks for taking care of the doc as well!
Don't worry for the "newbie issues" as you call them, that's normal
and we're all someone else's newbie once in a while :-)

I've also backported it to 2.1 as you requested.

Thanks!
Willy



Re: [PATCH] BUG/MINOR: tcp: avoid closing fd when socket failed in tcp_bind_listener

2020-02-12 Thread Willy Tarreau
On Wed, Feb 12, 2020 at 10:09:14AM +0100, William Dauchy wrote:
> we were trying to close file descriptor even when `socket` call was
> failing.
> this should fix github issue #499
> 
> this should be backported to all versions >= v1.8

Now merged, thanks!
Willy



Re: [PATCH] BUG/MINOR: tcp: don't try to set defaultmss when value is negative

2020-02-12 Thread Willy Tarreau
On Wed, Feb 12, 2020 at 01:16:15PM +0100, William Dauchy wrote:
>   getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &tmpmaxseg, &len);
> - if (tmpmaxseg != defaultmss && setsockopt(fd, IPPROTO_TCP,
> - TCP_MAXSEG, &defaultmss,
> - sizeof(defaultmss)) == -1) {
> + if (defaultmss < 0 ||
> + (tmpmaxseg != defaultmss &&
> +  setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, sizeof(defaultmss)) == -1)) {
>   msg = "cannot set MSS";
>   err |= ERR_WARN;

I'd do it differently so that we neither try nor report an error if
the default mss was not set. Indeed, if it already failed earlier,
we already had an issue, so no need to fail again. So if you agree
I'll change it to :

 if (defaultmss > 0 &&
 tmpmaxseg != defaultmss &&
 setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, sizeof(defaultmss)) == -1)

Thanks,
Willy



[PATCH] BUG/MINOR: tcp: don't try to set defaultmss when value is negative

2020-02-12 Thread William Dauchy
when `getsockopt` previously failed, we were trying to set defaultmss
with -2 value.

this is a followup of github issue #499

this should be backported to all versions >= v1.8

Fixes: 153659f1ae69a1 ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy 
---
 src/proto_tcp.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index a9d5229c9..e509a17bc 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -906,9 +906,9 @@ int tcp_bind_listener(struct listener *listener, char *errmsg, int errlen)
 defaultmss = default_tcp6_maxseg;
 
 getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &tmpmaxseg, &len);
-   if (tmpmaxseg != defaultmss && setsockopt(fd, IPPROTO_TCP,
-   TCP_MAXSEG, &defaultmss,
-   sizeof(defaultmss)) == -1) {
+   if (defaultmss < 0 ||
+   (tmpmaxseg != defaultmss &&
+    setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &defaultmss, sizeof(defaultmss)) == -1)) {
msg = "cannot set MSS";
err |= ERR_WARN;
}
-- 
2.25.0




Re : haproxy.com: New ways to improve your google Ranking.

2020-02-12 Thread Olivia Giles
Hi haproxy.com

Warm greetings of the day. We hope this email finds you well as this is
concerning your website haproxy.com. We had a glance
at the current status of your haproxy.com and came
across a huge room of improvement. Apart from several technical errors,
your website also contains a lack luster ranking in the major search
engines like Google and Bing. We understand, the online presence get’s
stagnated after a while and starts to fall off the grid. That’s when we
come in help. We as a Digital Marketing firm have over 7 years of
experience in helping clients in a predicament like this.

We can very well, help you close the gap and cross over the bridge of
stagnation and start doing well in the online presence sector, with the
assurance of bringing your website in the first 3 ranks of all the major
search engines (Google, Bing) in a very short span of time with our quality
SEO service.

If you are interested we can do a complete Audit on your
website "haproxy.com" and share the No Obligation Audit Report of your
website with you. Once you check the report we can figure out where we
stand and we can customize a tailor made package attaining the improvement
of your website and social media and discuss further.

We Await your Kind Response.

Kind Regards

Olivia Giles

Business Development Manager.

---
If this communication is no longer relevant to you, please email us on No -
thank you!



Re: payload inspection using req.payload

2020-02-12 Thread mihe...@gmx.de

Hey Mathias,

wow, brilliant! Made my day, really! - I was about to get frustrated
during troubleshooting :)
That was exactly what I needed. Thanks a bunch!
I failed to find something like that because I didn't know exactly
what to search for.

> As a side note: In case you want to match the payload in a binary
(non-HTTP) protocol,
> make sure you convert the payload to hex first, see section 7.1.3 in the
> newest configuration docs, here's the excerpt:

Yes, thats right. Luckily I already had some experience how to handle
that type of stuff from previous scripting jobs.

I wrote a bin2hex function for the LUA script I am testing. Not sure;
maybe in terms of performance it makes more sense to leave that to
haproxy's "payload(),hex" and just evaluate the converted result in my
script. I will have a look into that.

So far I got the impression that troubleshooting and testing patterns is more
"obvious" and debuggable when implemented in my own LUA script.
I felt a bit "blind" about tracking the decision making when testing a haproxy
ACL equivalent (maybe just my first impression).
I used "set-var" + "if acl" and printed that via log-format; not sure if
there is a better way to test ACLs?

Thanks again, BR
Micha



On 12.02.2020 12:09, Mathias Weiersmüller (cyberheads GmbH) wrote:

Hi Micha,


My problem is that the "req.payload(0,10)" fetch, which I am using for
that purpose, does not seem to reliably have access to the payload at
all times.

The problem is not the fetch per se, it is the timing of the evaluation
of the rule: tcp-request content rules are evaluated very early - there's
a high probability the payload buffer is empty at this moment.

if you add a condition to check if there is already any content present,
it will always match (checked using your config, thanks!):

example:
tcp-request content set-var(txn.rawPayload) req.payload(0,2),hex if { req_len gt 0 }

As a side note: In case you want to match the payload in a binary (non-HTTP) 
protocol,
make sure you convert the payload to hex first, see section 7.1.3 in the
newest configuration docs, here's the excerpt:

Do not use string matches for binary fetches which might contain null bytes
(0x00), as the comparison stops at the occurrence of the first null byte.
Instead, convert the binary fetch to a hex string with the hex converter first.

Example:

# matches if the string <tag> is present in the binary sample
acl tag_found req.payload(0,0),hex -m sub 3C7461673E


Best regards

Mathias




Re: payload inspection using req.payload

2020-02-12 Thread cyberheads GmbH
Hi Micha,

> My problem is that the "req.payload(0,10)" fetch, which I am using for
> that purpose, does not seem to reliably have access to the payload at
> all times.

The problem is not the fetch per se, it is the timing of the evaluation
of the rule: tcp-request content rules are evaluated very early - there's
a high probability the payload buffer is empty at this moment.

if you add a condition to check if there is already any content present, 
it will always match (checked using your config, thanks!):

example:
tcp-request content set-var(txn.rawPayload) req.payload(0,2),hex if { req_len gt 0 }

As a side note: In case you want to match the payload in a binary (non-HTTP) 
protocol, 
make sure you convert the payload to hex first, see section 7.1.3 in the
newest configuration docs, here's the excerpt:

Do not use string matches for binary fetches which might contain null bytes
(0x00), as the comparison stops at the occurrence of the first null byte.
Instead, convert the binary fetch to a hex string with the hex converter first.

Example:

# matches if the string <tag> is present in the binary sample
acl tag_found req.payload(0,0),hex -m sub 3C7461673E
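
For reference, 3C7461673E is just the hex encoding of the five bytes "<tag>".
A minimal bin2hex sketch in C (an illustration only, not haproxy's hex
converter):

  #include <stdio.h>
  #include <string.h>

  /* Convert a byte buffer to an uppercase hex string; out must hold 2*len+1 bytes. */
  void bin2hex(const unsigned char *in, size_t len, char *out)
  {
          static const char digits[] = "0123456789ABCDEF";
          size_t i;

          for (i = 0; i < len; i++) {
                  out[2 * i]     = digits[in[i] >> 4];
                  out[2 * i + 1] = digits[in[i] & 0x0F];
          }
          out[2 * len] = '\0';
  }

  int main(void)
  {
          const char *payload = "<tag>";
          char hex[2 * 5 + 1];

          bin2hex((const unsigned char *)payload, strlen(payload), hex);
          printf("%s\n", hex);  /* prints 3C7461673E */
          return 0;
  }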


Best regards

Mathias


Re: spoa-mirror

2020-02-12 Thread Aleksandar Lazic

Hi Dmitry.

Please keep the mailing-list in the loop, thanks.

On 12.02.20 08:17, Дмитрий Меркулов wrote:

Hello Aleks
Information about version, haproxy and spoa-agent conf in the attachment.


HA-Proxy version 2.1.0 2019/11/25 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.0.html
...

 conf
...

backend mirroragent
mode tcp
option tcplog
balance roundrobin
timeout connect 5s
timeout server 5s
server agent 0.0.0.0:12345 #check observe layer4 inter 3000 rise 2 fall 2

listen port:10026
mode http
option httplog
bind *:10026
maxconn 1000
filter spoe engine trafficmirror config /etc/haproxy/mirror.cfg
balance roundrobin
###


During the request, in the haproxy logs I see the following strings:
Feb 12 10:13:47 localhost haproxy[6374]: SPOE: [mirroragent] 
 sid=52 st=0 0/0/0/0/0 2/2 0/0 1/8
Feb 12 10:13:47 localhost haproxy[6374]: 194.87.225.137:54385 [12/Feb/2020:10:13:47.615] 
port:10026 port:10026/DPS1 1/0/0/2/3 200 138 - -  1/1/0/0/0 0/0 "GET / 
HTTP/1.1"
If I create 3-4 requests in a row, then in the agent’s logs:
[ 2][   66.224617]   <2:27> (E) Failed to send frame length: Broken pipe


I can't find this message in the code.
https://github.com/haproxytech/spoa-mirror/search?q=Failed+to+send+frame+length_q=Failed+to+send+frame+length

Can you try to build the mirror with debug enabled?
./configure --enable-debug

Can you try to run `strace -fveall -a1024 -s1024 -o spoa-mirror.log spoa-mirror 
`

I don't use the mirror myself; hopefully someone on the list can help you
debug the issue further.



Tuesday, February 11, 2020, 19:23 +03:00 from Aleksandar Lazic
:
Hi Dmitry.

On 11.02.20 15:29, Дмитрий Меркулов wrote:
 > Good day!
 > Could you help with the setup spoa-mirror v1.2.1?
 > SPOE: [mirroragent]  sid=0 st=0 
0/0/0/0/0 1/1 0/0 0/1
 > I see the backend sends data to the agent, but the agent does not 
broadcast anything to the destination server.
 > I run the agent with the following command
 > spoa-mirror --runtime 0 -u http://***.***.***.**:*/  --logfile 
W:/var/log/haproxy-mirror.log -n 1 -i 2s -b 30
 > Sometimes in the log I get the following error:
 > [ 1][  110.567823]   <7:10> (E) Failed to send frame length: Broken pipe
 > I would be very grateful for your help.

Which version of haproxy do you use?
haproxy -vv

What's your haproxy config?
Do you have any logs from haproxy?

Please don't send Screenshots because they are not visible in text only 
mailers, thanks.

 > --
 > Dmitry Merkulov

Regards
Aleks

-- Dmitry Merkulov





payload inspection using req.payload

2020-02-12 Thread mihe...@gmx.de

Hi everyone,

writing to get some help on a setup I am building with haproxy.

Part of the setup is content inspection of the TCP payload (binary
stream), for which the load balancing will be done.
I am testing content inspection based on simple ACL pattern matches, but I
also tried evaluating the payload in LUA scripts, where the latter is my
personal preference.
In the end the incoming requests should be accepted/rejected based on
the payload evaluation result.
My target is to process multiple hundreds of simultaneous requests at
peak times, which *ALL* should undergo a payload inspection for the
initial request. The scenario will also terminate TLS later on, but this
should make no difference for the inspection (at least to my understanding).

My problem is that the "req.payload(0,10)" fetch, which I am using for
that purpose, does not seem to reliably have access to the payload at
all times.
So far I was not able to find out, what the cause of that could be.
There were several mitigation hints on that problem, but somehow I am
failing to get it to work.

For troubleshooting I got down to a very simplistic setup, which just
accesses the payload and prints it to the logfile.

I am using apache benchmark "ab" to generate ingress traffic for larger
batches. An apache server acts as a test backend.
Please note this is just for testing purposes. The final protocol is
*NOT* http.
I think this is negligible atm(?) as the part I am focussing on is the
actual inbound/eval stuff, before the backend is contacted.

So out of 100 requests sent with "ab", about 10-50% of the requests are
failing to display payload content.
I also noticed that localhost-generated ab requests have a much higher
chance of failing to print the payload.

I have the strong feeling that the payload is being accessed before
it's fully available to haproxy - even if it's just a few bytes (testing
with 2-8).

I'm kind of lost here at the moment and I would really be grateful for any
suggestions and help on that one.
Is there a reasonable way to reliably "wait" for incoming requests w/o
delaying the requests too much in the end?

Best Regards
Micha


Below you can find the setup I came up with:



# VERSIONS

$ grep VERSION= /etc/os-release
VERSION="18.04.4 LTS (Bionic Beaver)"

$ grep 2.1 /etc/apt/sources.list.d/vbernat-ubuntu-haproxy-2_0-bionic.list
deb http://ppa.launchpad.net/vbernat/haproxy-2.1/ubuntu bionic main


$ haproxy -vv
HA-Proxy version 2.1.2-1ppa1~bionic 2019/12/21 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2021.
Known bugs: http://www.haproxy.org/bugs/bugs-2.1.2.html
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -O2
-fdebug-prefix-map=/build/haproxy-HuTwKZ/haproxy-2.1.2=.
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time
-D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wno-implicit-fallthrough
-Wno-stringop-overflow -Wtype-limits -Wshift-negative-value
-Wshift-overflow=2 -Wduplicated-cond -Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_REGPARM=1 USE_OPENSSL=1
USE_LUA=1 USE_ZLIB=1 USE_SYSTEMD=1
Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER -PCRE
-PCRE_JIT +PCRE2 +PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD
-PTHREAD_PSHARED +REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY
+LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO
+OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY +TFO
+NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD -OBSOLETE_LINKER
+PRCTL +THREAD_DUMP -EVPORTS
Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
Running on OpenSSL version : OpenSSL 1.1.1  11 Sep 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND
Built with PCRE2 version : 10.31 2018-02-12
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with the Prometheus exporter as a service
Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.
Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2
 

[PATCH] BUG/MINOR: tcp: avoid closing fd when socket failed in tcp_bind_listener

2020-02-12 Thread William Dauchy
we were trying to close the file descriptor even when the `socket` call was
failing.
this should fix github issue #499

this should be backported to all versions >= v1.8

Fixes: 153659f1ae69a1 ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy 
---
 src/proto_tcp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/proto_tcp.c b/src/proto_tcp.c
index df4e5a4d2..a9d5229c9 100644
--- a/src/proto_tcp.c
+++ b/src/proto_tcp.c
@@ -749,8 +749,8 @@ int tcp_bind_listener(struct listener *listener, char *errmsg, int errlen)
 if (getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &default_tcp_maxseg, &ready_len) == -1)
 ha_warning("Failed to get the default value of TCP_MAXSEG\n");
+   close(fd);
 }
-   close(fd);
 }
if (default_tcp6_maxseg == -1) {
default_tcp6_maxseg = -2;
-- 
2.25.0