RE: adding Coverity badge to some visible place

2021-05-14 Thread Daniel Corbett
Hello Ilya,

> From: Илья Шипицин  
> Sent: Friday, May 14, 2021 11:10 AM
> To: HAProxy 
> Subject: adding Coverity badge to some visible place

> Hello,
> 
> I would like to improve code quality visibility.
> What if we convert README --> README.md and put a Coverity badge there?
>

Funny, I had a similar thought earlier today about having a README.md.

I would like to help improve it from a visual perspective.

+1 from me and very interested!

Thanks,
-- Daniel




[PATCH] DOC: config: Fix configuration example for mqtt

2021-05-13 Thread Daniel Corbett
Hello,

 

This patch fixes the example for mqtt_is_valid(), which was missing
curly braces within the ACL.
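For reference, the corrected form looks like this (a minimal sketch; the listener address and backend server are placeholders, and the sample expression follows the one used in configuration.txt):

```haproxy
listen mqtt
    mode tcp
    bind :1883
    # inline ACL expressions must be enclosed in curly braces
    tcp-request content reject unless { req.payload(0,0),mqtt_is_valid }
    server broker1 192.0.2.10:1883
```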

 

 

Thanks,

-- Daniel

 



0001-DOC-config-Fix-configuration-example-for-mqtt.patch
Description: Binary data


[PATCH] CLEANUP: cli/activity: Remove double spacing in set profiling

2021-05-10 Thread Daniel Corbett
Hello,

 

 

It was found that when viewing the help output from the CLI,
"set profiling" had two spaces in it, which was pushing it out from
the rest of similar commands.

 

i.e. it looked like this:

  prepare acl 
  prepare map 
  set  profiling{auto|on|off}
  set dynamic-cookie-key backend  
  set map  [|#] 
  set maxconn frontend  

 

This patch removes all of the double spaces within the command and
unifies them to single spacing, which is what is observed within the
rest of the commands.

 

 

Thanks,

-- Daniel

 

 



0001-CLEANUP-cli-activity-Remove-double-spacing-in-set-pr.patch
Description: Binary data


RE: [PATCH] DOC: Fix a few grammar/spelling issues and casing of HAProxy

2021-05-09 Thread Daniel Corbett
Hey Willy,

> -----Original Message-----
> From: Willy Tarreau 
> Sent: Sunday, May 9, 2021 12:07 AM
> To: Daniel Corbett 
> Cc: haproxy@formilux.org
> Subject: Re: [PATCH] DOC: Fix a few grammar/spelling issues and casing of
> HAProxy
> 
> Hi Daniel,
> 
...

> Thank you for going through this painful task. After carefully reviewing
> it, I only found two that I had to revert (one occurrence of "HAProxy -vv"
> and the process name for "kill"), and applied it.
> 

My pleasure. I tried to avoid those situations with the binary. Thanks for
catching that!


> There's still the version reported on startup that I want to fix (and will
> have to adjust the regtest script as well I think). It's been consistently
> reported as "HA-Proxy" for 20 years and it's the only place I can think of
> where it's reported like this. I think it was Bertrand who proposed a patch
> for this a few months ago but it had to be split.


Ah yes! For some reason I thought that was merged already. 

I see you have now fixed it. That's great!

Thanks Willy. Have a great day.

-- Daniel





RE: [PATCH] DOC: Fix a few grammar/spelling issues and casing of HAProxy

2021-05-08 Thread Daniel Corbett
Sorry, I forgot to add the area the patch touches to the commit title.

 

New patch attached to add that.

 

Thanks,

-- Daniel

 



0001-DOC-config-Fix-a-few-grammar-spelling-issues-and-cas.patch
Description: Binary data


[PATCH] DOC: Fix a few grammar/spelling issues and casing of HAProxy

2021-05-08 Thread Daniel Corbett
Hello,

 

 

This patch fixes a few grammar and spelling issues in configuration.txt.
It was also noted that there was a wide range of case usage
(e.g. haproxy, HAproxy, HAProxy, etc.). This patch updates them
all to be consistently "HAProxy" except where a binary is mentioned.

 

 

Thanks,

-- Daniel

 

 



0001-DOC-Fix-a-few-grammar-spelling-issues-and-casing-of-.patch
Description: Binary data


[PATCH] BUG/MINOR: sample: Rename SenderComID/TargetComID to SenderCompID/TargetCompID

2021-03-09 Thread Daniel Corbett
Hello,

 

 

The recently introduced Financial Information eXchange (FIX)
converters have some hard-coded tags based on the specification that
were misspelled. Specifically, SenderComID and TargetComID should
be SenderCompID and TargetCompID according to the specification [1][2].

This patch updates all references, which includes the converters
themselves, the regression test, and the documentation.

 

[1] https://fiximate.fixtrading.org/en/FIX.5.0SP2_EP264/tag49.html

[2] https://fiximate.fixtrading.org/en/FIX.5.0SP2_EP264/tag56.html

 

 

 

Thanks,

-- Daniel

 



0001-BUG-MINOR-sample-Rename-SenderComID-TargetComID-to-S.patch
Description: Binary data


[PATCH] DOC: Add dns as an available domain to show stat

2020-11-01 Thread Daniel Corbett

Hello,


Within management.txt, proxy was listed as the only available option. "dns"
is now supported so let's add that. This change also updates the command to
list the available options <dns|proxy> for "domain" as previously it only
specified <domain>, which could be confusing as a user may think this field
accepts dynamic options when it actually requires a specific keyword.
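For illustration, after this change both keywords can be exercised on the stats socket (the socket path here is an assumption; adjust it to your "stats socket" setting):

```
$ echo "show stat domain proxy" | socat stdio /var/run/haproxy.sock
$ echo "show stat domain dns" | socat stdio /var/run/haproxy.sock
```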


Thanks,

-- Daniel

From 2107724812463093fbcee23d669dd86d3949aa28 Mon Sep 17 00:00:00 2001
From: Daniel Corbett 
Date: Sun, 1 Nov 2020 10:54:17 -0500
Subject: [PATCH] DOC: Add dns as an available domain to show stat

Within management.txt, proxy was listed as the only available option. "dns"
is now supported so let's add that. This change also updates the command to list
the available options <dns|proxy> for "domain" as previously it only specified
<domain>, which could be confusing as a user may think this field accepts
dynamic options when it actually requires a specific keyword.
---
 doc/management.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/management.txt b/doc/management.txt
index 308e725d4..1ca5f7bef 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -2438,10 +2438,10 @@ show sess 
   The special id "all" dumps the states of all sessions, which must be avoided
   as much as possible as it is highly CPU intensive and can take a lot of time.
 
-show stat [domain <domain>] [{<iid>|<proxy>} <type> <sid>] [typed|json] \
+show stat [domain <dns|proxy>] [{<iid>|<proxy>} <type> <sid>] [typed|json] \
   [desc] [up|no-maint]
-  Dump statistics. The domain is used to select which statistics to print; only
-  proxy is available for now. By default, the CSV format is used; you can
+  Dump statistics. The domain is used to select which statistics to print; dns
+  and proxy are available for now. By default, the CSV format is used; you can
   activate the extended typed output format described in the section above if
   "typed" is passed after the other arguments; or in JSON if "json" is passed
  after the other arguments. By passing <iid>, <type> and <sid>, it is possible
-- 
2.17.1



Re: [PATCH] CONTRIB: release-estimator: Add release estimating tool

2020-10-24 Thread Daniel Corbett

Hey Willy,


On 10/24/20 6:29 AM, Willy Tarreau wrote:

On Thu, Oct 22, 2020 at 03:55:14PM -0400, Daniel Corbett wrote:

Attached is a patch that adds the release estimating tool (stable-bot) to
the contrib directory.

Ah, great, thank you Daniel!

I've merged it now. If we find that it starts to receive contribs,
maybe it will make sense to spin it off into its own repo, since,
after all, it's cross-version. But it's the same for other tools
and that's not a problem like this at all.



Sounds good. Thank you!

Have a great day.


-- Daniel




[PATCH] CONTRIB: release-estimator: Add release estimating tool

2020-10-22 Thread Daniel Corbett

Hello,


Attached is a patch that adds the release estimating tool (stable-bot)
to the contrib directory.

This tool monitors the HAProxy stable branches and calculates a proposed
release date for the next minor release based on the bug fixes that are
in the queue.

This tool will continue to run weekly as it has been, but it will allow
others to contribute to it and also run it locally.
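Not part of the patch, but a rough sketch of the kind of calculation the tool performs, using the MAX_VERSION_AGE / MAX_VERSION_AGE_NONLTS constants from the script; the per-severity padding values below are illustrative assumptions, not necessarily the tool's exact THRESHOLDS:

```python
from datetime import date, timedelta

# Constants taken from release-estimator.py.
MAX_VERSION_AGE = 1095         # days; skip anything older
MAX_VERSION_AGE_NONLTS = 547   # days; odd-numbered (non-LTS) releases age out sooner

# Illustrative padding per severity (days); assumed values, see THRESHOLDS
# in the real script.
THRESHOLDS = {"MEDIUM": 30, "MAJOR": 14, "CRITICAL": 2}

def skip_version(minor: int, age_days: int) -> bool:
    """A branch drops out of the report once it is too old; non-LTS
    branches (odd minor: 1.9, 2.1, 2.3) drop out sooner."""
    if age_days > MAX_VERSION_AGE:
        return True
    return minor % 2 == 1 and age_days > MAX_VERSION_AGE_NONLTS

def proposed_release_date(first_fix_commit: date, severity: str) -> date:
    """Pad the first queued fix's commit date by the severity threshold."""
    return first_fix_commit + timedelta(days=THRESHOLDS[severity])

def urgency(proposed: date, today: date) -> str:
    """Escalate the urgency label as the proposed date approaches."""
    days_left = (proposed - today).days
    if days_left <= 7:
        return "WARNING"
    if days_left <= 21:
        return "NOTICE"
    return "INFO"

d = proposed_release_date(date(2020, 10, 1), "MAJOR")
print(d, urgency(d, date(2020, 10, 10)))   # 2020-10-15 WARNING
```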



Thanks,

-- Daniel

From b3ad15e87b70ed741b8cafc2830491c9d1b7e01a Mon Sep 17 00:00:00 2001
From: Daniel Corbett 
Date: Thu, 22 Oct 2020 11:19:55 -0400
Subject: [PATCH] CONTRIB: release-estimator: Add release estimating tool

This tool monitors the HAProxy stable branches and calculates a proposed
release date for the next minor release based on the bug fixes that are in
the queue.

Print only:
./release-estimator.py --print

Send email:
./release-estimator.py --send-mail --from-email from@domain.local --to-email to@domain.local

See contrib/release-estimator/README.md for details.
---
 contrib/release-estimator/README.md   |  69 +++
 .../release-estimator/release-estimator.py| 429 ++
 2 files changed, 498 insertions(+)
 create mode 100644 contrib/release-estimator/README.md
 create mode 100755 contrib/release-estimator/release-estimator.py

diff --git a/contrib/release-estimator/README.md b/contrib/release-estimator/README.md
new file mode 100644
index 0..5688eb065
--- /dev/null
+++ b/contrib/release-estimator/README.md
@@ -0,0 +1,69 @@
+# Release Estimator
+This tool monitors the HAProxy stable branches and calculates a proposed
+release date for the next minor release based on the bug fixes that are in
+the queue.
+
+
+## Requirements
+  - Python 3.x
+  - [lxml](https://lxml.de/installation.html)
+
+
+## Usage
+release-estimator.py [-h] [--print] [--to-email TO_EMAIL]
+   [--from-email FROM_EMAIL] [--send-mail]
+
+optional arguments:
+  -h, --help            show this help message and exit
+  --print   Print email only
+  --to-email TO_EMAIL   Send email to 
+  --from-email FROM_EMAIL
+Send email from 
+  --send-mail   Send email
+
+
+## Examples
+
+
+### Print only:
+./release-estimator.py --print
+
+
+### Send email:
+./release-estimator.py --send-mail --from-email from@domain.local --to-email to@domain.local
+
+
+## How it works
+For each version we check the age and apply the following logic:
+  - Skip the release if it's:
+  - older than MAX_VERSION_AGE
+  - older than MAX_VERSION_AGE_NONLTS days and an odd numbered release
+(1.9,2.1,2.3)
+
+  - For all other valid releases we will then collect the number of bug fixes
+in queue for each of the defined severity levels:
+  - BUG
+  - BUILD
+  - MINOR
+  - MEDIUM
+  - MAJOR
+  - CRITICAL
+
+We'll then begin calculating the proposed release date based on the last
+release date plus the first commit date of the first bug fix for the defined
+severity level.
+
+By default the proposed release dates use the following padding:
+(Can be modified in THRESHOLDS)
+  - BUG/BUILD/MINOR - 28 days
+  - MEDIUM - 30 days
+  - MAJOR - 14 days
+  - CRITICAL - 2 days
+
+After we have a proposed release date we will assign a release urgency
+to it. As we get closer to the proposed release date the urgency level changes.
+By default the urgency levels and their times are:
+  - WARNING - proposed date is 7 days or less
+  - NOTICE  - proposed date is 21 days or less
+  - INFO- proposed date is longer than the above
+
diff --git a/contrib/release-estimator/release-estimator.py b/contrib/release-estimator/release-estimator.py
new file mode 100755
index 0..bf005df3f
--- /dev/null
+++ b/contrib/release-estimator/release-estimator.py
@@ -0,0 +1,429 @@
+#!/usr/bin/python3
+#
+# Release estimator for HAProxy
+#
+# A tool that monitors the HAProxy stable branches and calculates a proposed
+# release date for the next minor release based on the bug fixes that are in
+# the queue.
+#
+# Copyright 2020 HAProxy Technologies, Daniel Corbett 
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version
+# 3 of the License, or (at your option) any later version.
+#
+#
+
+from lxml import html
+import requests
+import traceback
+import smtplib
+import math
+import copy
+import time
+import sys
+import argparse
+from datetime import datetime
+from datetime import timedelta
+from email.mime.text import MIMEText
+
+# Do not report on versions older than
+# MAX_VERSION_AGE.
+MAX_VERSION_AGE = 1095 # days
+
+# Do not report on non-lts releases (odd releases) that
+# are older than MAX_VERSION_AGE_NONLTS
+MAX_VERSION_AGE_NONLTS = 547 # days
+
+# For each severity/issue type, set thresholds
+# co

Re: stable-bot: Bugfixes waiting for a release 2.2 (18), 2.1 (13), 2.0 (8), 1.8 (6)

2020-08-19 Thread Daniel Corbett

Hello Tim,


On 8/19/20 5:09 AM, Tim Düsterhus wrote:

Daniel,

The indentation is messed up here. Is this caused by the removal of 1.9?



Yea, quite likely.



I also have another proposal for readability of this list: Can the
versions be ordered in descending order and possibly aligned within
columns? Here's an example of what I'm thinking of:



I'll see what I can do about fixing the indentations and improving the 
readability.


The last time I tried improving the indentations it just didn't go well 
and I haven't had much time lately to work on it.


I was thinking to contribute the script to the contrib/ directory and 
allow others to help improve it as well.



Thanks,

-- Daniel




Re: [PATCH] DOC: typo fixes for configuration.txt

2020-07-07 Thread Daniel Corbett

Hello,


On 7/7/20 7:19 AM, Илья Шипицин wrote:

Daniel,

can you please tell what spellchecker do you use ?

(codespell does not see those typos)



I used aspell.


Thanks,

-- Daniel





[PATCH] DOC: typo fixes for configuration.txt

2020-07-06 Thread Daniel Corbett
Hello,


Here's a quick round of typo corrections for configuration.txt


Thanks,
-- Daniel




0001-DOC-configuration-various-typo-fixes.patch
Description: Binary data


Re: [PATCH] BUG/MEDIUM: sink: fix crash when null sink is used in __do_send_log

2020-06-20 Thread Daniel Corbett

Hello,


On 6/19/20 3:22 PM, Willy Tarreau wrote:

Hi Daniel,
...
Thanks for the fix and the detailed report, that's very useful! However
the problem is somewhere else, and I suspect is slightly harder to solve.
It's normally not possible to have a null sink on a log server if its
type is a buffer, so we have an inconsistency somewhere that we must
address.



I suspected this to be the case but could not really trace the root of 
the issue and chose the path of least resistance :)




I'm CCing Emeric so that he can have a look, and am keeping the rest
intact below.



If I can provide any more info please let me know.


Thanks,

-- Daniel





Re: [PATCH] BUG/MEDIUM: sink: fix crash when null sink is used in __do_send_log

2020-06-17 Thread Daniel Corbett

Hello,


On 6/18/20 12:35 AM, Daniel Corbett wrote:

Hello,


When using a ring log in combination with proto fcgi, it was possible
to cause a crash by sending a request for a non-existent fastcgi file
to php-fpm, causing it to produce the error "Primary script unknown".
When php-fpm produces this error for some reason a sink is not able to be
established and __do_send_log gets passed a null sink.

This commit verifies that the sink exists in __do_send_log before attempting
to use it.



My apologies, I mentioned the wrong function in the commit message; the
fix was applied in sink_write and not __do_send_log. Attached is an
amended commit.



Thanks,

-- Daniel


From 90e1a21db447da3053bd80cede5b45628004117f Mon Sep 17 00:00:00 2001
From: Daniel Corbett 
Date: Thu, 18 Jun 2020 00:10:17 -0400
Subject: [PATCH] BUG/MEDIUM: sink: fix crash when null sink is used in
 sink_write

When using a ring log in combination with proto fcgi, it was possible
to cause a crash by sending a request for a non-existent fastcgi file
to php-fpm, causing it to produce the error "Primary script unknown".
When php-fpm produces this error for some reason a sink is not able to be
established and sink_write gets passed a null sink.

This commit verifies that the sink exists in sink_write before attempting
to use it.
---
 include/haproxy/sink.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/haproxy/sink.h b/include/haproxy/sink.h
index 025fa4185..b290d2189 100644
--- a/include/haproxy/sink.h
+++ b/include/haproxy/sink.h
@@ -49,6 +49,9 @@ static inline ssize_t sink_write(struct sink *sink, const struct ist msg[], size
 {
 	ssize_t sent;
 
+	if (!sink)
+		goto fail;
+
 	if (unlikely(sink->ctx.dropped > 0)) {
 		/* We need to take an exclusive lock so that other producers
 		 * don't do the same thing at the same time and above all we
-- 
2.17.1



[PATCH] BUG/MEDIUM: sink: fix crash when null sink is used in __do_send_log

2020-06-17 Thread Daniel Corbett

Hello,


When using a ring log in combination with proto fcgi, it was possible
to cause a crash by sending a request for a non-existent fastcgi file
to php-fpm, causing it to produce the error "Primary script unknown".
When php-fpm produces this error for some reason a sink is not able to be
established and __do_send_log gets passed a null sink.

This commit verifies that the sink exists in __do_send_log before attempting
to use it.


Sample below to cause the crash. Note: it requires that you have a
php-fpm server running and that you request a file that is not found.


global
    log ring@requests0 local1 info

ring requests0
    description "request logs"
    format rfc3164
    maxlen 1200
    size 32764
    timeout connect 5s
    timeout server 10s
    server request-log 127.0.0.1:6514

fcgi-app php-fpm
    log-stderr global
    option keep-conn
    docroot /var/www/html
    index app.php
    path-info ^(/.+\.php)(/.*)?$


listen be_dynamic
    mode http
    bind :80
    use-fcgi-app php-fpm
    option httpchk
    http-check connect proto fcgi
    http-check send meth GET uri /ping
    http-check expect string pong
    server php-fpm1 172.31.31.151:18081 proto fcgi



$ curl 192.168.1.25/404.php
curl: (52) Empty reply from server

0014:be_dynamic.accept(0006)=0010 from [76.110.41.73:51892] ALPN=
0014:be_dynamic.clireq[0010:]: GET /404.php HTTP/1.1
0014:be_dynamic.clihdr[0010:]: host: demo.haproxy.net
0014:be_dynamic.clihdr[0010:]: user-agent: curl/7.58.0
0014:be_dynamic.clihdr[0010:]: accept: */*
Segmentation fault (core dumped)



Backtrace:

(gdb) bt
#0  0x5586ffea472b in sink_write (sink=0x0, msg=0x7ffcd94951b0, nmsg=1, level=3, facility=17, tag=0x7ffcd94951c0, pid=0x7ffcd94951d0, sd=0x7ffcd94951e0) at include/haproxy/sink.h:55
#1  0x5586ffea8fe2 in __do_send_log (logsrv=0x5587008e9f30, nblogger=1, pid_str=0x7f7d48069e7c "21018", pid_size=5, level=3, message=0x558700916340 "Primary script unknown\n\n", size=22, sd=0x55870025e6c6  "- ", sd_size=2, tag_str=0x558700917d70 "php-fpm", tag_size=7) at src/log.c:1742
#2  0x5586ffea9717 in __send_log (logsrvs=0x558700917ed8, tag=0x7ffcd94954e0, level=3, message=0x558700916340 "Primary script unknown\n\n", size=24, sd=0x55870025e6c6  "- ", sd_size=2) at src/log.c:1851
#3  0x5586ffeaea21 in app_log (logsrvs=0x558700917ed8, tag=0x7ffcd94954e0, level=3, format=0x5586fffb3006 "%s") at src/log.c:3097
#4  0x5586ffda3ed8 in fcgi_strm_handle_stderr (fconn=0x5587009a4200, fstrm=0x558700987e50) at src/mux_fcgi.c:2376
#5  0x5586ffda5b33 in fcgi_process_demux (fconn=0x5587009a4200) at src/mux_fcgi.c:2563
#6  0x5586ffda8eb4 in fcgi_process (fconn=0x5587009a4200) at src/mux_fcgi.c:2977
#7  0x5586ffda8b37 in fcgi_io_cb (t=0x558700987e00, ctx=0x5587009a4200, status=0) at src/mux_fcgi.c:2948
#8  0x5586fff5e870 in run_tasks_from_list (list=0x5587003893f0 , max=67) at src/task.c:345
#9  0x5586fff5ecfb in process_runnable_tasks () at src/task.c:446
#10 0x5586ffefa415 in run_poll_loop () at src/haproxy.c:2884
#11 0x5586ffefa956 in run_thread_poll_loop (data=0x0) at src/haproxy.c:3056
#12 0x5586ffefc622 in main (argc=4, argv=0x7ffcd94960a8) at src/haproxy.c:3758




Thanks,

-- Daniel


From c5793a0ab7372329b7bc69e823b4bb960311a58a Mon Sep 17 00:00:00 2001
From: Daniel Corbett 
Date: Thu, 18 Jun 2020 00:10:17 -0400
Subject: [PATCH] BUG/MEDIUM: sink: fix crash when null sink is used in 
 __do_send_log

When using a ring log in combination with proto fcgi, it was possible
to cause a crash by sending a request for a non-existent fastcgi file
to php-fpm, causing it to produce the error "Primary script unknown".
When php-fpm produces this error for some reason a sink is not able to be
established and __do_send_log gets passed a null sink.

This commit verifies that the sink exists in __do_send_log before attempting
to use it.
---
 include/haproxy/sink.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/haproxy/sink.h b/include/haproxy/sink.h
index 025fa4185..b290d2189 100644
--- a/include/haproxy/sink.h
+++ b/include/haproxy/sink.h
@@ -49,6 +49,9 @@ static inline ssize_t sink_write(struct sink *sink, const struct ist msg[], size
 {
 	ssize_t sent;
 
+	if (!sink)
+		goto fail;
+
 	if (unlikely(sink->ctx.dropped > 0)) {
 		/* We need to take an exclusive lock so that other producers
 		 * don't do the same thing at the same time and above all we
-- 
2.17.1



[PATCH] BUG/MINOR: stats: Fix color of draining servers on stats page

2020-03-28 Thread Daniel Corbett

Hello,


This patch fixes #53 where it was noticed that when an active
server is set to DRAIN it no longer has the color blue reflected
within the stats page. This patch addresses that and adds the
color back to drain. It's to be noted that backup servers are
configured to have an orange color when they are draining.

Should be backported as far as 1.7.

Thanks,
-- Daniel
From ab59ade862665e8235d889d398e63d1b67aff243 Mon Sep 17 00:00:00 2001
From: Daniel Corbett 
Date: Sat, 28 Mar 2020 12:35:50 -0400
Subject: [PATCH] BUG/MINOR: stats: Fix color of draining servers on stats page

This patch fixes #53 where it was noticed that when an active
server is set to DRAIN it no longer has the color blue reflected
within the stats page. This patch addresses that and adds the
color back to drain. It's to be noted that backup servers are
configured to have an orange color when they are draining.

Should be backported as far as 1.7.
---
 src/stats.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/stats.c b/src/stats.c
index 28724fa2f..36be3153b 100644
--- a/src/stats.c
+++ b/src/stats.c
@@ -912,6 +912,9 @@ static int stats_dump_fields_html(struct buffer *out,
 		else if (strcmp(field_str(stats, ST_F_STATUS), "DOWN ") == 0) {
 			style = "going_up";
 		}
+		else if (strcmp(field_str(stats, ST_F_STATUS), "DRAIN") == 0) {
+			style = "draining";
+		}
 		else if (strcmp(field_str(stats, ST_F_STATUS), "NOLB ") == 0) {
 			style = "going_down";
 		}
-- 
2.17.1



Re: stable-bot: Bugfixes waiting for a release 2.1 (7), 2.0 (5)

2020-02-25 Thread Daniel Corbett

Hey Tim & Willy,

On 2/25/20 10:45 PM, Willy Tarreau wrote:

On Wed, Feb 26, 2020 at 01:23:24AM +0100, Tim Düsterhus wrote:

Daniel,

I already told you in IRC, but as I now have an email to reply to: This
new email format is much better than the old one.

FWIW I agree, it's more synthetic and still seems to contain all the
relevant information.

Thanks for these changes!
Willy


Thanks for the feedback!  Tim, I've immediately implemented your 
suggestion around the dashes.  I'll look into the alignment recommendations.


Glad I could help improve the output (required quite a bit of refactoring!)

Good news is the bot is much more dynamic now. I no longer need to
manually add new versions, and older versions should have their release
dates automatically adjusted to account for age. Also, we will see
non-LTS versions (1.9) disappear from the emails after 1 year of
release. Versions older than 3 years will not be in the report either.


Thanks again for all of the feedback and suggestions.

Thanks,
-- Daniel



Re: stable-bot: WARNING: 54 bug fixes in queue for next release - 2.1

2020-02-12 Thread Daniel Corbett

Hello,


On 2/12/20 12:55 PM, Tim Düsterhus wrote:


Threading would solve most of the pain points for me, because the emails
will nicely be merged on both my computer and my phone. For the
remaining points I don't really care that much. I'll leave this up to
the people that actually read the emails. I'm currently just marking
them as read without taking a single look :-) Most of by curiosity is
satisfied using git and the bug list on haproxy.org.



I just wanted to acknowledge this thread and let you know that I 
appreciate the suggestions.


I'll do what I can to improve it, starting with moving these items into
a thread and improving the expected release date calculations for older
branches.


I'll also take your other suggestions into account, but I think these
might be a good starting point.


I may be a little slow on this but I'll put something together that can 
hopefully make the bot more useful.



Thanks again,

-- Daniel





Re: stable-bot for 2.1

2019-12-09 Thread Daniel Corbett

Hey William,


On 12/9/19 1:34 AM, William Lallemand wrote:

On Sun, Dec 08, 2019 at 07:37:46PM -0500, Daniel Corbett wrote:

Hey Julien,


On 12/8/19 2:49 PM, Julien Pivotto wrote:

Hi,

I have the impression that stable-bot has not been configured for the
2.1 branch yet.

Regards,


Thanks for the reminder!  I've gone ahead and added it.

Thanks,


Hi Daniel,

I fear we will be reaching the point where the bot is sending too many mails
very soon, if we continue to add 2 versions per year.
Maybe the bot should group the mails in the same thread or send only one mail
with all the content in it so there's not too much noise on the mailing list.

What do you think?




I suspect you are right.

I think both of your suggestions are good.

I am open to suggestions on the best way to handle this and will make 
any required adjustments.





Thanks,
-- Daniel





Re: stable-bot for 2.1

2019-12-08 Thread Daniel Corbett

Hey Julien,


On 12/8/19 2:49 PM, Julien Pivotto wrote:

Hi,

I have the impression that stable-bot has not been configured for the
2.1 branch yet.

Regards,



Thanks for the reminder!  I've gone ahead and added it.

Thanks,

-- Daniel




Re: [PATCH] [MEDIUM] dns: Add resolve-opts "ignore-weight"

2019-11-18 Thread Daniel Corbett

Hello,


On 11/18/19 7:05 AM, Willy Tarreau wrote:

On Mon, Nov 18, 2019 at 12:06:08PM +0100, Baptiste wrote:

When we first designed this feature, we did it with this in mind "if admins
can update a SRV record in a DNS server, they can adjust the weight
accordingly".

I understand the need, but the response is way too short. It's a global
question of precedence in HAProxy from my point of view.
I am scared that if we start to adjust things this way, we'll end up with
1000s of flags overlapping each others and adding complexity on top of
complexity.

The real question is "what prevents an admin from updating a DNS record?"
Or why they don't failover to A/ records only?

I must admit I understand a valid use case : have the DNS set up to advertise
the list of servers, and let the agent adjust the servers' health based on
their load, the fact that they're running backup or OS updates etc. Thus in
my opinion, the *use case* makes sense. What I'm unsure about is the proper
way to do it, because as you mention, it's more a matter of overall
consistency between all sources. We could very well instead have a per-backend
setting indicating what source to fetch the weight from (agent, dns, health,
other?), where to fetch the maxconn from etc. Some may even want to combine
these (average, multiply, ...). I'm fine if you prefer to postpone it. If in
the end we decide to merge it as-is we could also backport it, and if we
decide to address it differently, at least we won't have to maintain one
extra short-lived flag.

Thanks,
Willy



I'm open to ideas on the implementation method, definitely not stuck on
this one :)  To be honest, I was trying to find some "good first issues"
to tackle.


GitHub request here: https://github.com/haproxy/haproxy/issues/48


Thanks for taking the time to review and provide your input guys!


Thanks,

-- Daniel





[PATCH] [MEDIUM] dns: Add resolve-opts "ignore-weight"

2019-11-17 Thread Daniel Corbett

Hello,


I realize that new features are not preferred at the moment, but I think
this might be a usability issue and hopefully it can be considered for
2.1-dev; however, it's perfectly fine if it's decided to wait until the
next release.


It was noted in GitHub issue #48 that there are times when a 
configuration may use the server-template directive with SRV records and 
simultaneously want to control weights separately using an agent-check 
or through the runtime api.  This patch adds a new option 
"ignore-weight" to the "resolve-opts" directive.


When specified, any weight indicated within an SRV record will be 
ignored.  This is for both initial resolution and ongoing resolution.


I wanted to include a VTC test with this; however, I could not think of
an appropriate way to do it, as I suspect we may need a "fake DNS server"
similar to what was made for syslog.



Thanks,

-- Daniel


From 59381396c68d776d220e76ccc6b80e4a2f7ff068 Mon Sep 17 00:00:00 2001
From: Daniel Corbett 
Date: Sun, 17 Nov 2019 09:48:56 -0500
Subject: [PATCH] [MEDIUM] dns: Add resolve-opts "ignore-weight"

It was noted in #48 that there are times when a configuration
may use the server-template directive with SRV records and
simultaneously want to control weights using an agent-check or
through the runtime api.  This patch adds a new option
"ignore-weight" to the "resolve-opts" directive.

When specified, any weight indicated within an SRV record will
be ignored.  This is for both initial resolution and ongoing
resolution.
---
 doc/configuration.txt |  5 +
 include/types/dns.h   |  1 +
 src/dns.c | 21 -
 src/server.c  |  6 +-
 4 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index b17ab9064..6ce23ece4 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -11932,6 +11932,11 @@ resolve-opts ,,...
 For such case, simply enable this option.
 This is the opposite of prevent-dup-ip.
 
+  * ignore-weight
+Ignore any weight that is set within an SRV record.  This is useful when
+you would like to control the weights using an alternate method, such as
+using an "agent-check" or through the runtime api.
+
   * prevent-dup-ip
 Ensure HAProxy's default behavior is enforced on a server: prevent re-using
 an IP address already set to a server in the same backend and sharing the
diff --git a/include/types/dns.h b/include/types/dns.h
index 5a60c0708..8347e93ab 100644
--- a/include/types/dns.h
+++ b/include/types/dns.h
@@ -249,6 +249,7 @@ struct dns_options {
 	int pref_net_nb; /* The number of registered preferred networks. */
 	int accept_duplicate_ip; /* flag to indicate whether the associated object can use an IP address
 already set to an other object of the same group */
+	int ignore_weight; /* flag to indicate whether to ignore the weight within the record */
 };
 
 /* Resolution structure associated to single server and used to manage name
diff --git a/src/dns.c b/src/dns.c
index 5f9e7eae5..9f422eef6 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -537,7 +537,8 @@ static void dns_check_dns_response(struct dns_resolution *res)
 HA_SPIN_LOCK(SERVER_LOCK, &srv->lock);
 if (srv->srvrq == srvrq && srv->svc_port == item->port &&
 item->data_len == srv->hostname_dn_len &&
-!memcmp(srv->hostname_dn, item->target, item->data_len)) {
+!memcmp(srv->hostname_dn, item->target, item->data_len) &&
+srv->dns_opts.ignore_weight != 1) {
 	int ha_weight;
 
 	/* DNS weight range if from 0 to 65535
@@ -589,15 +590,17 @@ static void dns_check_dns_response(struct dns_resolution *res)
 !(srv->flags & SRV_F_CHECKPORT))
 	srv->check.port = item->port;
 
-/* DNS weight range if from 0 to 65535
- * HAProxy weight is from 0 to 256
- * The rule below ensures that weight 0 is well respected
- * while allowing a "mapping" from DNS weight into HAProxy's one.
- */
-ha_weight = (item->weight + 255) / 256;
+if (srv->dns_opts.ignore_weight != 1) {
+	/* DNS weight range if from 0 to 65535
+ 	 * HAProxy weight is from 0 to 256
+ 	 * The rule below ensures that weight 0 is well respected
+ 	 * while allowing a "mapping" from DNS weight into HAProxy's one.
+ 	 */
+	ha_weight = (item->weight + 255) / 256;
 
-snprintf(weight, sizeof(weight), "%d", ha_weight);
-server_parse_weight_change_request(srv, weight);
+	snprintf(weight, sizeof(weight), "%d", ha_weight);
+	server_parse_weight_change_request(srv, weight);
+}
 HA_SPIN_UNLOCK(SERVER_LOCK, &srv->lock);
 			}
 		}
diff --git a/src/server.c b/src/server.c
index 6b740240e..60c5ee38d 100644
--- a/src/server.c
+++ b/src/server.c
@@ -17

Re: Haproxy only tracking at maximum 3 keys for api-key based tracking

2019-11-01 Thread Daniel Corbett

Hello


On 11/1/19 2:03 AM, Victor Jonathon Calel wrote:


backend sec_table
         stick-table type string len 64 size 500k expire 5s store 
gpc0,gpt0,http_req_rate(1s)


backend min_table
          stick-table type string len 64 size 500k expire 5s store 
http_req_rate(60s)



As you can see, triad03 is matching 5 keys, and requests are coming from 
all 5 keys, but the stick-table only shows 3 keys in it (first, second, 
and fourth).



watch -n1 'echo "show table sec_table" | socat stdio 
unix:/var/run/haproxy.sock'

# table: sec_table, type: string, size:512000, used:3

Why is that? Does this have anything to do with MAX_SESS_STCKTR? But 
here I am only using two counters, sc0 and sc1.


What am I doing wrong? Please help.





I have tested this config and it seems to work fine for me on the latest 
stable version.


# table: sec_table, type: string, size:512000, used:5

Are you sure the entries are not expiring?  It seems your expiration 
time is set to 5 seconds.




Thanks,

-- Daniel




Re: [PATCH] BUG/MEDIUM: dns: Correctly use weight specified in SRV record

2019-10-17 Thread Daniel Corbett

Hello,

On 10/17/19 1:47 AM, Baptiste wrote:



Hi Daniel,

Thanks for the patch, but I don't think it's accurate.
What this part of the code aims to do is to "map" a DNS weight into an 
HAProxy weight.
There is a ratio of 256 between both: DNS being in range "0-65535" and 
HAProxy in range "0-255".
What your code does, is that it ignores any DNS weight above 256 and 
force them to 1...


The only "bug" I can see here now is that a server's weight can never 
be 0. But nobody reported this as an issue yet.


I'll check what question is asked into #48 and answer it.



Ah ha!  Thanks for the explanation and my apologies for the noise.  This 
makes sense now.


I have put together another patch that I will send later for the 
"resolve-opts ignore-weight" within that same issue report but wanted to 
get this one out first.
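For reference, here is a hypothetical sketch of how such an option might look in a configuration; the exact keyword placement and syntax depend on the final patch, so treat everything below as an assumption rather than documented behavior:

```
resolvers mydns
    nameserver ns1 192.168.0.53:53

backend app
    # "ignore-weight" is the proposed resolve-opts option discussed in
    # the issue; shown here purely for illustration.
    server-template srv 5 _http._tcp.example.local resolvers mydns resolve-opts ignore-weight check
```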



Thanks for taking the time to review this Baptiste.


-- Daniel





Re: [PATCH] BUG/MEDIUM: dns: Correctly use weight specified in SRV record

2019-10-16 Thread Daniel Corbett

Hello,


On 10/16/19 11:32 PM, Daniel Corbett wrote:



This patch should be backported to 1.8 and 1.9



Sorry, I forgot to indicate that this should be backported to 2.0 also.


Thanks,

-- Daniel





[PATCH] BUG/MEDIUM: dns: Correctly use weight specified in SRV record

2019-10-16 Thread Daniel Corbett

Hello,


In #48 it was reported that when using the server-template
directive combined with an SRV record, HAProxy would
always set the weight to "1" regardless of what the SRV record
contains.

It was found that an attempt to force a minimum value of "1"
actually ended up forcing "1" in all situations.  This was due to
an improper equation: ( x / 256 ) + 1

This patch should be backported to 1.8 and 1.9



Thanks,

-- Daniel


>From b9c2d037b0b3cf06a7908bcaa4949a29245574ca Mon Sep 17 00:00:00 2001
From: Daniel Corbett 
Date: Wed, 16 Oct 2019 23:13:20 -0400
Subject: [PATCH] BUG/MEDIUM: dns: Use weight as specified in SRV record

In #48 it was reported that when using the server-template
directive combined with an SRV record, HAProxy would
always set the weight to "1" regardless of what the SRV record
contains.

It was found that an attempt to force a minimum value of "1"
actually ended up forcing "1" in all situations.  This was due to
an improper equation: ( x / 256 ) + 1

This patch should be backported to 1.8 and 1.9
---
 src/dns.c | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/src/dns.c b/src/dns.c
index 0ce6e8302..8a71bd495 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -546,7 +546,12 @@ static void dns_check_dns_response(struct dns_resolution *res)
 	/* Make sure weight is at least 1, so
 	 * that the server will be used.
 	 */
-	ha_weight = item->weight / 256 + 1;
+	if (item->weight > 1 && item->weight <= 256) {
+		ha_weight = item->weight;
+	}
+	else {
+		ha_weight = 1;
+	}
 	if (srv->uweight != ha_weight) {
 		char weight[9];
 
@@ -593,7 +598,12 @@ static void dns_check_dns_response(struct dns_resolution *res)
 /* Make sure weight is at least 1, so
  * that the server will be used.
  */
-ha_weight = item->weight / 256 + 1;
+if (item->weight > 1 && item->weight <= 256) {
+	ha_weight = item->weight;
+}
+else {
+	ha_weight = 1;
+}
 
 snprintf(weight, sizeof(weight), "%d", ha_weight);
 server_parse_weight_change_request(srv, weight);
-- 
2.17.1



Re: [PATCH] BUG/MEDIUM: contrib/spoa_server: Set FIN flag on agent frames

2019-06-11 Thread Daniel Corbett
Hello Willy,


Apologies for the delay.

Thank you for taking the time to review.


> On Jun 2, 2019, at 10:11 PM, Willy Tarreau  wrote:
> 
> Hi Daniel,
> 
> On Thu, May 30, 2019 at 12:57:10AM -0400, Daniel Corbett wrote:
>> Hello,
>> 
>> 
>> When communicating over SPOP the AGENT-HELLO, AGENT-DISCONNECT,
>> and ACK frames must have the FIN flag set.
>> 
>> This patch also upgrades the SPOP_VERSION to 2.0 and adds the
>> "ip_score" transaction variable to the python & lua scripts.
>> 
>> Thanks to Christopher for giving me the answer on how to solve this issue.
> 
> Are all these changes related ?  It seems to me these are distinct
> features, or maybe even a bug fix plus an extra feature, in which
> case these should be separate patches.


Something like that :)  

They are all changes to make the spoa_server work out of the box, as it 
currently does not work with 2.0-dev.

I have gone ahead and separated the patches.


> 
> I'm having a few quick questions below.
> 
>> From e0e7992b3b2af3ac0e898d48d6ce5f7e9416b568 Mon Sep 17 00:00:00 2001
>> From: Daniel Corbett 
>> Date: Wed, 29 May 2019 17:44:05 -0400
>> Subject: [PATCH] BUG/MEDIUM: contrib/spoa_server: Set FIN flag on agent 
>> frames
>> 
>> When communicating over SPOP the AGENT-HELLO, AGENT-DISCONNECT,
>> and ACK frames must have the FIN flag set.
>> 
>> This patch also upgrades the SPOP_VERSION to 2.0 and adds the
>> "ip_score" transaction variable to the python & lua scripts.
> (...)
>> diff --git a/contrib/spoa_server/ps_lua.lua b/contrib/spoa_server/ps_lua.lua
>> index 26620459..aaf373c6 100644
>> --- a/contrib/spoa_server/ps_lua.lua
>> +++ b/contrib/spoa_server/ps_lua.lua
>> @@ -14,4 +14,5 @@ spoa.register_message("check-client-ip", function(args)
>>  spoa.set_var_ipv6("ipv6", spoa.scope.txn, "1::f")
>>  spoa.set_var_str("str", spoa.scope.txn, "1::f")
>>  spoa.set_var_bin("bin", spoa.scope.txn, "1::f")
>> +spoa.set_var_int32("ip_score", spoa.scope.txn, 77)
> 
> Be careful, your editor seems to have mixed spaces and tabs here, and
> a few other times in the same patch.
> 


Sorry about that.  I believe it was mainly due to Python indentation errors 
that I was dealing with :(


>> diff --git a/contrib/spoa_server/spoa-server.conf 
>> b/contrib/spoa_server/spoa-server.conf
>> index 13bd126a..87bbb721 100644
>> --- a/contrib/spoa_server/spoa-server.conf
>> +++ b/contrib/spoa_server/spoa-server.conf
>> @@ -1,5 +1,6 @@
>> global
>>  debug
>> +nbthread 1
> 
> What is the reason for forcing nbthread to 1 here ? Did you face a
> particular bug revealed by threads ? Given that many people use example
> configs as starting point for theirs, I'd rather avoid doing this, or
> this will last 10 years just like "option httpclose" or even the funny
> "ssl-engine rdrand" which in addition to being wrong was so much copied
> that google now auto-completes it when searching "ssl-engine haproxy"!
> 


Yes, I have hit a bug where the SPOA results are blank between requests if 
threading is enabled.  I found the quick fix to be setting nbthread to 1 -- 
however, I agree with you that we must be careful with these settings, as I 
realize they can persist long after their intended need.

Unfortunately, I'm not really sure where to start fixing this issue.


>> diff --git a/contrib/spoa_server/spoa.c b/contrib/spoa_server/spoa.c
>> index a958f222..8cb40c54 100644
>> --- a/contrib/spoa_server/spoa.c
>> +++ b/contrib/spoa_server/spoa.c
>> @@ -679,13 +679,16 @@ error:
>>  * the number of written bytes otherwise. */
>> static void prepare_agentack(struct worker *w)
>> {
>> +unsigned int flags = SPOE_FRM_FL_FIN;
>> +
>>  w->ack_len = 0;
>> 
>>  /* Frame type */
>>  w->ack[w->ack_len++] = SPOE_FRM_T_AGENT_ACK;
>> 
>> -/* No flags for now */
>> -memset(w->ack + w->ack_len, 0, 4); /* No flags */
>> +/* Set flags */
>> +flags = htonl(flags);
> 
> In general it's not a good idea to have the same variable hold two
> different endian versions of a same value. You can be certain that
> at one point something will be added above or below and will use the
> wrong one. Better simply do "flags = htonl(SPOE_FRM_FL_FIN)" or even
> better, preset flags to zero at the beginning of the function, and
> then add this flag this way : "flags |= htonl(SPOE_FRM_FL_FIN)".
> 
>> +memcpy(w->ack + w->ack_len, (char

[PATCH] BUG/MEDIUM: contrib/spoa_server: Set FIN flag on agent frames

2019-05-29 Thread Daniel Corbett
Hello,


When communicating over SPOP the AGENT-HELLO, AGENT-DISCONNECT,
and ACK frames must have the FIN flag set.

This patch also upgrades the SPOP_VERSION to 2.0 and adds the
"ip_score" transaction variable to the python & lua scripts.

Thanks to Christopher for giving me the answer on how to solve this issue.


-- Daniel




0001-BUG-MEDIUM-contrib-spoa_server-Set-FIN-flag-on-agent.patch
Description: Binary data


Re: [PATCH] BUG/MEDIUM: init: Initialize idle_orphan_conns for first server in server-template

2019-01-09 Thread Daniel Corbett

Hello,

On 1/9/19 6:06 AM, Willy Tarreau wrote:

On Wed, Jan 09, 2019 at 11:54:36AM +0100, Olivier Houchard wrote:

Oops, that seems right, and the patch looks fine, Willy can you push it ?

Sure. Daniel, may I put your real name or do you want to resubmit the
patch ? We usually don't take patches using aliases only for the author.

Thanks!
Willy



Sure -- attached you will find the new patch with the updated details.

Thanks,
-- Daniel
>From 13b37d5366be36535b3c67242ae0ac328e3aaaf8 Mon Sep 17 00:00:00 2001
From: Daniel Corbett 
Date: Wed, 9 Jan 2019 08:13:29 -0500
Subject: [PATCH] BUG/MEDIUM: init: Initialize idle_orphan_conns for first
 server in server-template

When initializing server-template all of the servers after the first
have srv->idle_orphan_conns initialized within server_template_init().
The first server does not have this initialized, and when http-reuse
is active this causes a segmentation fault when accessed from
srv_add_to_idle_list().  This patch removes the check for
srv->tmpl_info.prefix within server_finalize_init() and allows
the first server within a server-template to have srv->idle_orphan_conns
properly initialized.

This should be backported to 1.9.
---
 src/server.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/server.c b/src/server.c
index 4cd8784..bc9e805 100644
--- a/src/server.c
+++ b/src/server.c
@@ -1936,7 +1936,7 @@ static int server_finalize_init(const char *file, int linenum, char **args, int
 		px->srv_act++;
 	srv_lb_commit_status(srv);
 
-	if (!srv->tmpl_info.prefix && srv->max_idle_conns != 0) {
+	if (srv->max_idle_conns != 0) {
 			int i;
 
 			srv->idle_orphan_conns = calloc(global.nbthread, sizeof(*srv->idle_orphan_conns));
-- 
2.7.4



Re: HAProxy listed as Ingress controllers

2018-09-25 Thread Daniel Corbett

Hello Aleks,

On 09/25/2018 03:25 AM, Aleksandar Lazic wrote:

Hi.

Today was the PR to the website approved which add the haproxy ingress
controller to the list of Ingress controllers ;-)

https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers

Is there any plan to have a official HAProxy Operator/Controller for
kubernetes/openshift from HAProxy Inc?

Regards
Aleks



I'm the Director of Product at HAProxy Technologies and just wanted to 
clear up some of the confusion.  We greatly appreciate the work and 
effort that was made by Joao in producing the haproxy-ingress project.  
In fact, we've not only contributed to it but we also offer support of 
it for our enterprise customers.


With that said, we do have plans to develop our own ingress controller.  
We've just entered into the beginning phases of it and are developing a 
proof of concept.


We're quite excited and while we can't give a timeline at the moment we 
will be sure to update this list once we're close to release.


Thanks,
-- Daniel



Re: how to run vtc files?

2018-06-20 Thread Daniel Corbett

Hello,


On 06/20/2018 03:34 PM, Илья Шипицин wrote:

hi


[ilia@localhost haproxy]$ HAPROXY_PROGRAM=./haproxy varnishtest 
reg-tests/ssl/h0.vtc

 top   0.0 extmacro def pwd=/home/ilia/xxx/haproxy
 top   0.0 extmacro def localhost=127.0.0.1
 top   0.0 extmacro def bad_backend=127.0.0.1 36769
 top   0.0 extmacro def bad_ip=192.0.2.255
 top   0.0 macro def tmpdir=/tmp/vtc.31222.33cfe809
*    top   0.0 TEST reg-tests/ssl/h0.vtc starting
**   top   0.0 === varnishtest "OpenSSL bug: Random crashes"
*    top   0.0 TEST OpenSSL bug: Random crashes
**   top   0.0 === feature ignore_unknown_macro
**   top   0.0 === haproxy h1 -conf {
 top   0.0 Unknown command: "haproxy"
*    top   0.0 RESETTING after reg-tests/ssl/h0.vtc
*    top   0.0 TEST reg-tests/ssl/h0.vtc FAILED
#    top  TEST reg-tests/ssl/h0.vtc FAILED (0.001) exit=2
[ilia@localhost haproxy]$


Something like this should work (note the absolute path to the binary via $PWD):

$ HAPROXY_PROGRAM=$PWD/haproxy varnishtest reg-tests/ssl/h0.vtc
#    top  TEST reg-tests/ssl/h0.vtc passed (0.147)


Thanks,
-- Daniel




[PATCH] REGTEST: stick-tables: Test expiration when used with table_*

2018-06-20 Thread Daniel Corbett

Hello,

Thanks for adding this integration, Fred.  Great job!

Attached is a new regression test to check for stick-tables expiration when
they are used with table_* converters as noted in commit id:
3e60b11100cbc812b77029ca142b83ac7a314db1


Thanks,

-- Daniel

>From 386ed3ca039ea2b5dec7397ba9934576217a421e Mon Sep 17 00:00:00 2001
From: Daniel Corbett 
Date: Wed, 20 Jun 2018 10:16:16 -0400
Subject: [PATCH] REGTEST: stick-tables: Test expiration when used with table_*

New regression test to check for stick-tables expiration when
they are used with table_* converters as noted in commit id:
3e60b11100cbc812b77029ca142b83ac7a314db1.
---
 reg-tests/stick-tables/h0.vtc | 65 +++
 1 file changed, 65 insertions(+)
 create mode 100644 reg-tests/stick-tables/h0.vtc

diff --git a/reg-tests/stick-tables/h0.vtc b/reg-tests/stick-tables/h0.vtc
new file mode 100644
index 000..c4fe6fb
--- /dev/null
+++ b/reg-tests/stick-tables/h0.vtc
@@ -0,0 +1,65 @@
+# commit 3e60b11
+# BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_* converters
+#
+# When using table_* converters ref_cnt was incremented
+# and never decremented causing entries to not expire.
+#
+# The root cause appears to be that stktable_lookup_key()
+# was called within all sample_conv_table_* functions which was
+# incrementing ref_cnt and not decrementing after completion.
+#
+# Added stktable_release() to the end of each sample_conv_table_*
+# function and reworked the end logic to ensure that ref_cnt is
+# always decremented after use.
+#
+# This should be backported to 1.8
+
+varnishtest "stick-tables: Test expirations when used with table_*"
+
+# As some macros for haproxy are used in this file, this line is mandatory.
+feature ignore_unknown_macro
+
+haproxy h1 -conf {
+# Configuration file of 'h1' haproxy instance.
+defaults
+mode   http
+timeout connect 5s
+timeout server  30s
+timeout client  30s
+
+frontend http1
+bind "fd@${my_frontend_fd}"
+stick-table size 1k expire 1ms type ip store conn_rate(10s),http_req_cnt,http_err_cnt,http_req_rate(10s),http_err_rate(10s),gpc0,gpc0_rate(10s),gpt0
+http-request track-sc0 req.hdr(X-Forwarded-For)
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_http_req_cnt(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_trackers(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),in_table(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_bytes_in_rate(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_bytes_out_rate(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_conn_cnt(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_conn_cur(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_conn_rate(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_gpt0(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_gpc0(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_gpc0_rate(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_http_err_cnt(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_http_err_rate(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_http_req_cnt(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_http_req_rate(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_kbytes_in(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_kbytes_out(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_server_id(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_sess_cnt(http1) -m int lt 0  }
+http-request redirect location https://www.haproxy.com/ if { req.hdr(X-Forwarded-For),table_sess_rate(http1) -m int lt 0  }
+http-request 

Re: [PATCH] BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_* converters

2018-05-27 Thread Daniel Corbett

Hello Willy,


On 05/24/2018 04:16 PM, Willy Tarreau wrote:


Interesting one! However it's not correct, it doesn't decrement the
refcount on the return 0 path so the problem remains when a data
type is looked up in a table where it is not stored. For example :


Nice catch.  Thanks for pointing this out.


Given that all functions seem to be written the same way, I suggest
that we change the end and invert the !ptr condition to centralize
the release call. It would give this above :

 ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_CUR);
 if (ptr)
smp->data.u.sint = stktable_data_cast(ptr, conn_cur);
 stktable_release(t, ts);
 return !!ptr;

Could you please rework your patch to do this so that I can merge it ?



I have attached the latest patch with these changes.

Thanks,
-- Daniel


>From e47065ac6cc7478d21cfa00c5c45a0ae7cd412bf Mon Sep 17 00:00:00 2001
From: Daniel Corbett <dcorb...@haproxy.com>
Date: Sun, 27 May 2018 09:47:12 -0400
Subject: [PATCH] BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_*
 converters

When using table_* converters ref_cnt was incremented
and never decremented causing entries to not expire.

The root cause appears to be that stktable_lookup_key()
was called within all sample_conv_table_* functions which was
incrementing ref_cnt and not decrementing after completion.

Added stktable_release() to the end of each sample_conv_table_*
function and reworked the end logic to ensure that ref_cnt is
always decremented after use.

This should be backported to 1.8
---
 src/stick_table.c | 170 +++---
 1 file changed, 86 insertions(+), 84 deletions(-)

diff --git a/src/stick_table.c b/src/stick_table.c
index fcc6fe6..101a4e2 100644
--- a/src/stick_table.c
+++ b/src/stick_table.c
@@ -875,6 +875,7 @@ static int sample_conv_in_table(const struct arg *arg_p, struct sample *smp, voi
 	smp->data.type = SMP_T_BOOL;
 	smp->data.u.sint = !!ts;
 	smp->flags = SMP_F_VOL_TEST;
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -907,12 +908,12 @@ static int sample_conv_table_bytes_in_rate(const struct arg *arg_p, struct sampl
 		return 1;
 
 	ptr = stktable_data_ptr(t, ts, STKTABLE_DT_BYTES_IN_RATE);
-	if (!ptr)
-		return 0; /* parameter not stored */
+	if (ptr)
+		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_in_rate),
+   t->data_arg[STKTABLE_DT_BYTES_IN_RATE].u);
 
-	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_in_rate),
-	   t->data_arg[STKTABLE_DT_BYTES_IN_RATE].u);
-	return 1;
+	stktable_release(t, ts);
+	return !!ptr;
 }
 
 /* Casts sample  to the type of the table specified in arg(0), and looks
@@ -944,11 +945,11 @@ static int sample_conv_table_conn_cnt(const struct arg *arg_p, struct sample *sm
 		return 1;
 
 	ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_CNT);
-	if (!ptr)
-		return 0; /* parameter not stored */
+	if (ptr)
+		smp->data.u.sint = stktable_data_cast(ptr, conn_cnt);
 
-	smp->data.u.sint = stktable_data_cast(ptr, conn_cnt);
-	return 1;
+	stktable_release(t, ts);
+	return !!ptr;
 }
 
 /* Casts sample  to the type of the table specified in arg(0), and looks
@@ -980,11 +981,11 @@ static int sample_conv_table_conn_cur(const struct arg *arg_p, struct sample *sm
 		return 1;
 
 	ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_CUR);
-	if (!ptr)
-		return 0; /* parameter not stored */
+	if (ptr)
+		smp->data.u.sint = stktable_data_cast(ptr, conn_cur);
 
-	smp->data.u.sint = stktable_data_cast(ptr, conn_cur);
-	return 1;
+	stktable_release(t, ts);
+	return !!ptr;
 }
 
 /* Casts sample  to the type of the table specified in arg(0), and looks
@@ -1016,12 +1017,12 @@ static int sample_conv_table_conn_rate(const struct arg *arg_p, struct sample *s
 		return 1;
 
 	ptr = stktable_data_ptr(t, ts, STKTABLE_DT_CONN_RATE);
-	if (!ptr)
-		return 0; /* parameter not stored */
+	if (ptr)
+		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, conn_rate),
+   t->data_arg[STKTABLE_DT_CONN_RATE].u);
 
-	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, conn_rate),
-	   t->data_arg[STKTABLE_DT_CONN_RATE].u);
-	return 1;
+	stktable_release(t, ts);
+	return !!ptr;
 }
 
 /* Casts sample  to the type of the table specified in arg(0), and looks
@@ -1053,12 +1054,12 @@ static int sample_conv_table_bytes_out_rate(const struct arg *arg_p, struct samp
 		return 1;
 
 	ptr = stktable_data_ptr(t, ts, STKTABLE_DT_BYTES_OUT_RATE);
-	if (!ptr)
-		return 0; /* parameter not stored */
+	if (ptr)
+		smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_out_rate),
+   t->data_arg[STKTABLE_DT_BYTES_OUT_RATE].u);
 
-	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_out_rate),
-	   t->data_arg[STKTABLE_DT_BYTE

Re: Use acl on spoe events

2018-05-27 Thread Daniel Corbett

Hello Joao,


On 05/26/2018 05:54 PM, Joao Morais wrote:


There is no difference whether I use an acl like the example above, use the `if 
{...}` syntax, or remove the acl entirely: my modsecurity agent always receives 
a new connection regardless of the host I’m using.

Is there a way to use a spoe filter only if some l7 conditions match?


This appears to have been fixed in 1.9-dev with this commit:

https://git.haproxy.org/?p=haproxy.git;a=commitdiff;h=333694d7715952a9610a3e6f00807eaf5edd209a;hp=336d3ef0e77192582c98b3c578927a529ceadd9b


Thanks,
-- Daniel




Re: gRPC protocol

2018-05-24 Thread Daniel Corbett

Hello Aleks,

gRPC is on our roadmap.  We're currently working on implementing a new 
native internal HTTP representation, which will bring us end-to-end 
HTTP/2 -- the prerequisite for adding gRPC.


Regarding the gRPC lua script -- thanks for sharing.  It's the first 
time I have seen it, so I can't comment :)


Thanks,
-- Daniel




Re: SPOE and modsecurity contrib

2018-05-20 Thread Daniel Corbett

Hello Joao,

While I haven't been able to get 'tcp-request content reject' to work 
with this configuration -- I am able to get 'http-request deny' to work:



http-request deny if { var(txn.modsec.code) -m int gt 0 }


Regarding txn.modsec.code -- I have been able to reproduce the 
"txn.modsec.code=-101" and "set variable code=4294967195" when 
crs-setup.conf  / crs.setup.conf.example is missing the following 
SecDefaultAction lines:



SecDefaultAction "phase:1,log,auditlog,deny,status:403"
SecDefaultAction "phase:2,log,auditlog,deny,status:403"


When those are in place, I receive the following in the logs:

The txn.modsec.code is: 403

Please let me know if that solves it for you.


Thanks,
-- Daniel





Re: warnings during loading load-server-state, expected?

2018-05-19 Thread Daniel Corbett

Hi Pieter,

While I'm not sure what may be happening with the server-template 
messages that you have pointed out, I did run into the unix socket one 
a couple of weeks ago and have been meaning to send this patch to the 
mailing list.


What is happening is that currently only AF_INET and AF_INET6 are 
checked within the switch statement when dumping the server state. This 
causes the value of srv_addr to be empty and thus a missing field in the 
server state file.


This patch adds a default case that sets srv_addr to "-" when the 
address family is not covered.


This should be backported to 1.8

Thanks,
-- Daniel


>From 24f8a74f490435969c04e2bb5387d396b62850c0 Mon Sep 17 00:00:00 2001
From: Daniel Corbett <dcorb...@haproxy.com>
Date: Sat, 19 May 2018 19:43:24 -0400
Subject: [PATCH] BUG/MEDIUM: servers state: Add srv_addr default placeholder

When creating a state file using "show servers state" an empty field is created in the srv_addr column if the server is from the socket family AF_UNIX.  This leads to a warning on start up when using "load-server-state-from-file".  This patch defaults srv_addr to "-" if the socket family is not covered.

This patch should be backported to 1.8.
---
 src/proxy.c  | 3 +++
 src/server.c | 3 ++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/proxy.c b/src/proxy.c
index 31253f1..6f71b4b 100644
--- a/src/proxy.c
+++ b/src/proxy.c
@@ -1450,6 +1450,9 @@ static int dump_servers_state(struct stream_interface *si, struct chunk *buf)
 inet_ntop(srv->addr.ss_family, &((struct sockaddr_in6 *)&srv->addr)->sin6_addr,
 	  srv_addr, INET6_ADDRSTRLEN + 1);
 break;
+			default:
+memcpy(srv_addr, "-\0", 2);
+break;
 		}
 		srv_time_since_last_change = now.tv_sec - srv->last_change;
 		bk_f_forced_id = px->options & PR_O_FORCED_ID ? 1 : 0;
diff --git a/src/server.c b/src/server.c
index ebac357..277d140 100644
--- a/src/server.c
+++ b/src/server.c
@@ -2936,7 +2936,8 @@ static void srv_update_state(struct server *srv, int version, char **params)
 			server_recalc_eweight(srv);
 
 			/* load server IP address */
-			srv->lastaddr = strdup(params[0]);
+			if (strcmp(params[0], "-"))
+srv->lastaddr = strdup(params[0]);
 
 			if (fqdn && srv->hostname) {
 if (!strcmp(srv->hostname, fqdn)) {
-- 
2.7.4



[PATCH] BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_* converters

2018-05-17 Thread Daniel Corbett

Hello,

When using table_* converters ref_cnt was incremented
and never decremented causing entries to not expire.

The root cause appears to be that stktable_lookup_key()
was called within all sample_conv_table_* functions which was
incrementing ref_cnt and not decrementing after completion.

Added stktable_release() to the end of each sample_conv_table_*
function.

This should be backported to 1.8.


Thanks,
-- Daniel

>From 28530921746e62bb229880774a311bfebfcf7579 Mon Sep 17 00:00:00 2001
From: Daniel Corbett <dcorb...@haproxy.com>
Date: Thu, 17 May 2018 13:17:54 -0400
Subject: [PATCH] BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_*
 converters

When using table_* converters ref_cnt was incremented
and never decremented causing entries to not expire.

The root cause appears to be that stktable_lookup_key()
was called within all sample_conv_table_* functions which was
incrementing ref_cnt and not decrementing after completion.

Added stktable_release() to the end of each sample_conv_table_*
function.

This should be backported to 1.8
---
 src/stick_table.c | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/src/stick_table.c b/src/stick_table.c
index 3e44747..f1ad347 100644
--- a/src/stick_table.c
+++ b/src/stick_table.c
@@ -912,6 +912,7 @@ static int sample_conv_table_bytes_in_rate(const struct arg *arg_p, struct sampl
 
 	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_in_rate),
 	   t->data_arg[STKTABLE_DT_BYTES_IN_RATE].u);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -948,6 +949,7 @@ static int sample_conv_table_conn_cnt(const struct arg *arg_p, struct sample *sm
 		return 0; /* parameter not stored */
 
 	smp->data.u.sint = stktable_data_cast(ptr, conn_cnt);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -984,6 +986,7 @@ static int sample_conv_table_conn_cur(const struct arg *arg_p, struct sample *sm
 		return 0; /* parameter not stored */
 
 	smp->data.u.sint = stktable_data_cast(ptr, conn_cur);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1021,6 +1024,7 @@ static int sample_conv_table_conn_rate(const struct arg *arg_p, struct sample *s
 
 	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, conn_rate),
 	   t->data_arg[STKTABLE_DT_CONN_RATE].u);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1058,6 +1062,7 @@ static int sample_conv_table_bytes_out_rate(const struct arg *arg_p, struct samp
 
 	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, bytes_out_rate),
 	   t->data_arg[STKTABLE_DT_BYTES_OUT_RATE].u);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1094,6 +1099,7 @@ static int sample_conv_table_gpt0(const struct arg *arg_p, struct sample *smp, v
 		return 0; /* parameter not stored */
 
 	smp->data.u.sint = stktable_data_cast(ptr, gpt0);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1130,6 +1136,7 @@ static int sample_conv_table_gpc0(const struct arg *arg_p, struct sample *smp, v
 		return 0; /* parameter not stored */
 
 	smp->data.u.sint = stktable_data_cast(ptr, gpc0);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1167,6 +1174,7 @@ static int sample_conv_table_gpc0_rate(const struct arg *arg_p, struct sample *s
 
 	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, gpc0_rate),
 	  t->data_arg[STKTABLE_DT_GPC0_RATE].u);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1203,6 +1211,7 @@ static int sample_conv_table_gpc1(const struct arg *arg_p, struct sample *smp, v
 		return 0; /* parameter not stored */
 
 	smp->data.u.sint = stktable_data_cast(ptr, gpc1);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1240,6 +1249,7 @@ static int sample_conv_table_gpc1_rate(const struct arg *arg_p, struct sample *s
 
 	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, gpc1_rate),
 	  t->data_arg[STKTABLE_DT_GPC1_RATE].u);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1276,6 +1286,7 @@ static int sample_conv_table_http_err_cnt(const struct arg *arg_p, struct sample
 		return 0; /* parameter not stored */
 
 	smp->data.u.sint = stktable_data_cast(ptr, http_err_cnt);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1313,6 +1324,7 @@ static int sample_conv_table_http_err_rate(const struct arg *arg_p, struct sampl
 
 	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, http_err_rate),
 	   t->data_arg[STKTABLE_DT_HTTP_ERR_RATE].u);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1349,6 +1361,7 @@ static int sample_conv_table_http_req_cnt(const struct arg *arg_p, struct sample
 		return 0; /* parameter not stored */
 
 	smp->data.u.sint = stktable_data_cast(ptr, http_req_cnt);
+	stktable_release(t, ts);
 	return 1;
 }
 
@@ -1386,6 +1399,7 @@ static int sample_conv_table_http_req_rate(const struct arg *arg_p, struct sampl
 
 	smp->data.u.sint = read_freq_ctr_period(&stktable_data_cast(ptr, http_req_rate),
 	   t->data_arg[STKTABLE_DT_HTTP_REQ_RATE].u);
+	st