using variables in reg-tests

2020-02-27 Thread Илья Шипицин
Hello,

${no-htx} option http-use-htx

how can I set that variable per test run? I want to run reg-tests with and
without HTX
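
One possible way, assuming the reg-test runner (VTest) still accepts the
-D option to define macros for the .vtc files (the test path below is just
an example):

   # run with HTX: expand ${no-htx} to nothing
   vtest -D no-htx= -k -t 10 reg-tests/http-rules/example.vtc
   # run without HTX: expand it to "no", giving "no option http-use-htx"
   vtest -D no-htx=no -k -t 10 reg-tests/http-rules/example.vtc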

cheers,
Ilya Shipitcin


Re: FW: HAProxy: Information request

2020-02-27 Thread Sander Klein

Hi,

please be aware that you are posting to a public mailing list. You might want
to check where you send your emails.

Regards,

Sander Klein



On 2020-02-27 22:14, EMEA Request wrote:

Hi Team,

Apologies for delayed response.

Can you please help with the details provided below and provide a
quote.

Thanks and Regards,


 Anandita Sharma | Procurement Specialist –GSDC| SoftwareONE

 anandita.sha...@softwareone.com [4]  | www.softwareone.com [3]
 Phone no : +91 8950320646

 Check out: Why SoftwareONE? [8] | PyraCloud [9] | Customer
Transformation [10]

From: Parsons, Branden 
Sent: Thursday, February 27, 2020 8:14 PM
To: Sharma, Anandita 
Subject: RE: HAProxy: Information request

Hi Anandita

Please see below

On AWS, but not sure on the number of connections. Can they get a
quote without knowing that? We will set up a call once we have an idea
of price.

With kind regards,

Branden Parsons

Internal Sales Executive

SoftwareONE UK Ltd

Direct. +44 203 3729 481

From: Sharma, Anandita 
Sent: 24 February 2020 14:16
To: Parsons, Branden 
Subject: FW: HAProxy: Information request

Hi Branden,

FYI


 Anandita Sharma | Procurement Specialist –GSDC| SoftwareONE

 anandita.sha...@softwareone.com [4]  | www.softwareone.com [3]
 Phone no : +91 8950320646

 Check out: Why SoftwareONE? [8] | PyraCloud [9] | Customer
Transformation [10]

From: Anamarija Murgic 
Sent: Friday, January 17, 2020 7:23 PM
To: EMEA Request 
Cc: Sean Meroth 
Subject: Re: HAProxy: Information request

Hi Anandita,

Thanks for letting me know.

Have a great weekend!

Best,
Anamarija

On 17/01/2020 1:34 PM, EMEA Request wrote:


Hi Anamarija ,

Apologies for delay in reply.

Our team is in contact with customer for some clarifications.

Will get back to you after clarifying.

Thanks and Regards,


Anandita Sharma | Procurement Specialist –GSDC| SoftwareONE

anandita.sha...@softwareone.com [4]  | www.softwareone.com [3]
Phone no : +91 8950320646

Check out: Why SoftwareONE? [5] | PyraCloud [6] | Customer
Transformation [7]

From: Anamarija Murgic 
Sent: Tuesday, January 14, 2020 4:20 PM
To: Sharma, Anandita 
Cc: Sean Meroth 
Subject: Re: HAProxy: Information request

Hello Anandita,

I am following up on my previous email as I haven't heard back from
you. Please let me know when is a good time to talk?

Looking forward to hearing from you soon.

Thanks,
Anamarija

On 07/01/2020 6:08 PM, Anamarija Murgic wrote:


Hi Anandita,

My colleagues forwarded me your email request sent to our Open
source email asking for the product information.

We have both: ALOHA LB, virtual or hardware, and our software-only
HAProxy Enterprise Edition (HAPEE) that you would install on your own
infrastructure. HAProxy Enterprise Edition (HAPEE) comes as an annual
subscription per server, while ALOHA appliance prices are based on the
application performance you need to sustain.

It would be very helpful to know:

- Are they using current appliance on Azure or AWS
- The number of new connections (HTTP or HTTPS) per second
- The number of concurrent connections.

Also, if possible at all, if you can share with us their current
ADC configuration.

In general, we've found that it's best to get some more context in
a quick conference call that will help us understand the use case
of TheTrainline.com. Then we can make the best recommendation for
you and the project and go over pricing.

Please let me know your availability this week, tomorrow or Friday
afternoon?

Thanks,
Anamarija

--

Anamarija Murgic

Sr. Account Executive

HAProxy Technologies - Powering your uptime!

15 Avenue Raymond Aron | 92160 Antony, France

+385 99 44 11 521 | www.haproxy.com [1] | Unsubscribe [2]


--

Anamarija Murgic

Sr. Account Executive

HAProxy Technologies - Powering your uptime!

15 Avenue Raymond Aron | 92160 Antony, France

+385 99 44 11 521 | www.haproxy.com [1] | Unsubscribe [2]


--

Anamarija Murgic

Sr. Account Executive

HAProxy Technologies - Powering your uptime!

15 Avenue Raymond Aron | 92160 Antony, France

+385 99 44 11 521 | www.haproxy.com [11] | Unsubscribe [12]

Links:
--
[1] https://www.haproxy.com/
[2] https://www.haproxy.com/manage-email-preferences/
 [3] http://www.softwareone.com/
[4] http://@softwareone.com
[5]

RE: HAProxy v2.1.0 health checks with POST and data corrupted by extra "connection: close"

2020-02-27 Thread Kiran Gavali
Hello Willy,



As per my findings, the existing "httpchk" directive takes the header and body
as a single argument (args[4]), and there is no definite identifier to
differentiate the header from the body.

Therefore, I have added 2 new options to the http-check directive, for the
header and the body. Specifying these options would be mandatory if the
"http-check expect" option is configured in the haproxy.cfg file. If
"http-check expect" is configured but "http-check header" and/or "http-check
body" is not configured, an error is logged in haproxy.log.
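
For reference, with the proposed options (the proposal under discussion here,
not an existing HAProxy directive at the time of writing), the health check
from the original report would be configured along these lines:

   option httpchk POST / HTTP/1.1\r\n
   http-check header Host: 10.10.26.236\r\nContent-Type: application/json\r\nContent-Length: 38\r\n
   http-check body {"command":"getNodeInfo"}
   http-check expect rstatus (2|3)[0-9][0-9]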



Following is the pseudocode for the src/checks.c file.

if check->type == PR_O2_HTTP_CHK

    /* prevent HTTP keep-alive when "http-check expect" is used */
    if s->proxy->options2 & PR_O2_EXP_TYPE

        if "http-check header" and "http-check body" are present in the haproxy.cfg file
            b_putblk(&check->bo, s->proxy->header_check_req, s->proxy->header_check_len);
            b_putist(&check->bo, ist("Connection: close\r\n"));
            b_putblk(&check->bo, s->proxy->body_check_req, s->proxy->body_check_len);
        else
            log an error in haproxy.log: ErrorMsg=Specify the header and body
        endif

    else
        b_putblk(&check->bo, s->proxy->check_req, s->proxy->check_len);
    endif

endif



Thanks And Regards

Kiran Gavali



-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: Monday, February 24, 2020 1:26 PM
To: Kiran Gavali 
Subject: Re: HAProxy v2.1.0 health checks with POST and data corrupted by extra 
"connection: close"



Hello Kiran,



On Mon, Feb 24, 2020 at 07:00:08AM +, Kiran Gavali wrote:

> Hello Mr. Willy Tarreau ,

>

> Issue Link: https://github.com/haproxy/haproxy/issues/16



Oh I didn't notice that you've updated the report there a month ago, sorry 
about that, I get so many github notifications that I miss quite a few of 
them :-(



> I have reproduced and analyzed the above bug on HAProxy v2.1.0 (using 
> Wireshark).

> As per my findings, the "Connection: close" string is appended after

> the data block instead of being appended after the header. [Refer the

> attached Wireshark screenshot at the end]



Yes absolutely, that's the problem (and limitation) described there.



> As stated by @wtarreau in [

> #https://www.mail-archive.com/haproxy@formilux.org/msg28200.html], it

> would be best to add another option "body" to http-check directive so

> as to correctly differentiate body from header, as shown below:-

>

> option httpchk POST / HTTP/1.1\r\n
> http-check header Host: 10.10.26.236\r\nContent-Type: application/json\r\nContent-Length: 38\r\n
> http-check body {"command":"getNodeInfo"}
> http-check expect rstatus (2|3)[0-9][0-9]

>

> To incorporate the change, we need to change the code below:
>
> File: cfgparse-listen.c
> Function: int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
> Code Snippet: curproxy->check_len = snprintf(curproxy->check_req, reqlen, "%s %s %s\r\n", args[2], args[3], *args[4]?args[4]:"HTTP/1.0");

>

> Please let me know if my understanding of the above issue is correct,

> so that I can proceed further accordingly. Looking forward to your guidance.



Your analysis is correct. However this issue is more than two years old now and 
the check system is currently being reworked in order to support muxes for 
HTTP/1 and HTTP/2 so that we have better checks in 2.2. However I don't know if 
for this specific issue we can come up with something trivial enough to be 
backported to older releases with no risk at all.

I guess that it might be worth trying, but let me be clear about the fact that 
if the code becomes tricky, we may finally decide not to merge it and/or not to 
backport it.



Do not hesitate to share your findings on the mailing list so that others can 
follow the progress and possibly suggest adjustments.



Thanks!

Willy




Re: Log lines in 2.0

2020-02-27 Thread Willy Tarreau
On Fri, Feb 28, 2020 at 01:49:36PM +1100, Igor Cicimov wrote:
> > Do you have a *real* use case of "debug" in the global section that is
> > not easily solved with "-d" ? I'm asking because I'd really like to see
> > this design mistake disappear, but am not (too much) stubborn.
> 
> Not a problem at all, it is just something carried over from our default
> test setup that has its roots in v1.5. I'll remove it and use "-d" from
> now on as intended.

OK perfect then!
Willy



Re: Log lines in 2.0

2020-02-27 Thread Igor Cicimov
Hi Willy,

On Fri, Feb 28, 2020, 2:15 AM Willy Tarreau  wrote:

> Hi Igor,
>
> On Thu, Feb 27, 2020 at 10:36:44PM +1100, Igor Cicimov wrote:
> > > This looks like you are running HAProxy in debug mode. Debug mode is
> > > enabled via the '-d' command line switch or 'debug' configuration
> option
> > > (http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#debug).
> > >
> > > Best regards
> > > Tim Düsterhus
> > >
> >
> > Yes, I have the debug option on, thanks. The thing is, it is there in 1.8
> > too but I don't see the same effect.
>
> I recently marked the debug option deprecated because it has caused a lot
> of trouble over time (services staying in foreground, spamming logs,
> filling
> file-systems with boot log files etc), and indicated that only "-d" should
> be used explicitly when you want to use the debug mode.
>
> Do you have a *real* use case of "debug" in the global section that is
> not easily solved with "-d" ? I'm asking because I'd really like to see
> this design mistake disappear, but am not (too much) stubborn.
>
> Thanks,
> Willy
>

Not a problem at all, it is just something carried over from our default
test setup that has its roots in v1.5. I'll remove it and use "-d" from
now on as intended.

Thanks,
Igor

>


Linkedin Admire

2020-02-27 Thread Dellena Blanchard
I saw your profile on LinkedIn. I'm sorry to intrude, but I couldn't help but 
notice your lovely smile, so I thought to write you, if there is any possibility 
of knowing each other better.


FW: HAProxy: Information request

2020-02-27 Thread EMEA Request

Hi Team,

Apologies for delayed response.


Can you please help with the details provided below and provide a quote.



Thanks and Regards,

 Anandita Sharma | Procurement Specialist –GSDC| SoftwareONE
 anandita.sha...@softwareone.com  | 
www.softwareone.com
 Phone no : +91 8950320646
 Check out: Why SoftwareONE? | 
PyraCloud | Customer 
Transformation 


From: Parsons, Branden 
Sent: Thursday, February 27, 2020 8:14 PM
To: Sharma, Anandita 
Subject: RE: HAProxy: Information request

Hi Anandita

Please see below

On AWS, but not sure on the number of connections. Can they get a quote
without knowing that? We will set up a call once we have an idea of price.


With kind regards,

Branden Parsons
Internal Sales Executive
SoftwareONE UK Ltd
Direct. +44 203 3729 481



From: Sharma, Anandita <anandita.sha...@softwareone.com>
Sent: 24 February 2020 14:16
To: Parsons, Branden <branden.pars...@softwareone.com>
Subject: FW: HAProxy: Information request

Hi Branden,

FYI


 Anandita Sharma | Procurement Specialist –GSDC| SoftwareONE
 anandita.sha...@softwareone.com  | 
www.softwareone.com
 Phone no : +91 8950320646
 Check out: Why SoftwareONE? | 
PyraCloud | Customer 
Transformation 


From: Anamarija Murgic <amur...@haproxy.com>
Sent: Friday, January 17, 2020 7:23 PM
To: EMEA Request <request.e...@softwareone.com>
Cc: Sean Meroth <smer...@haproxy.com>
Subject: Re: HAProxy: Information request


Hi Anandita,

Thanks for letting me know.

Have a great weekend!

Best,
Anamarija
On 17/01/2020 1:34 PM, EMEA Request wrote:
Hi Anamarija ,


Apologies for delay in reply.

Our team is in contact with customer for some clarifications.

Will get back to you after clarifying.


Thanks and Regards,


 Anandita Sharma | Procurement Specialist –GSDC| SoftwareONE
 anandita.sha...@softwareone.com  | 
www.softwareone.com
 Phone no : +91 8950320646
 Check out: Why SoftwareONE? | PyraCloud | Customer Transformation



From: Anamarija Murgic 
Sent: Tuesday, January 14, 2020 4:20 PM
To: Sharma, Anandita 

Cc: Sean Meroth 
Subject: Re: HAProxy: Information request


Hello Anandita,

I am following up on my previous email as I haven't heard back from you. Please 
let me know when is a good time to talk?

Looking forward to hearing from you soon.

Thanks,
Anamarija
On 07/01/2020 6:08 PM, Anamarija Murgic wrote:

Hi Anandita,

My colleagues forwarded me your email request sent to our Open source email 
asking for the product information.

We have both: ALOHA LB, virtual or hardware, and our software-only
HAProxy Enterprise Edition (HAPEE) that you would install on your own
infrastructure. HAProxy Enterprise Edition (HAPEE) comes as an annual
subscription per server, while ALOHA appliance prices are based on the
application performance you need to sustain.

It would be very helpful to know:

- Are they using current appliance on Azure or AWS
- The number of new connections (HTTP or HTTPS) per second
- The number of concurrent connections.

Also, if possible at all, if you can share with us their current ADC 
configuration.

In general, we've found that it's best to get some more context in a quick 
conference call that will help us understand the use case of TheTrainline.com. 
Then we can make the best recommendation for you and the project and go over 
pricing.

Please let me know your availability this week, tomorrow or Friday afternoon?


Re: SRV Record Priority Values

2020-02-27 Thread Luke Seelenbinder
Hi Willy,

> Yes it is! They're typically used to drain old user sessions while
> progressively taking a server off. Some also use them to let an
> overloaded server cool down for a moment with no extra session. This
> is completely unrelated to backup servers in fact, which have their
> own weights and which can even be load balanced when all active servers
> are dead.

This makes sense. I'm glad I know (now) I can use 0 weights to drain servers.
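
For concreteness, a minimal config sketch of the distinction described above
(addresses and server names are illustrative only):

   backend app
       # weight 0: still health-checked and reachable through persistence,
       # but never picked by load balancing, so it drains existing users
       server s1 192.0.2.10:80 check weight 0
       server s2 192.0.2.11:80 check weight 10
       # backup: only used once all active servers are down
       server s3 192.0.2.12:80 check backup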

> I suspect that it's more a property of the resolvers than the servers.
> I mean, if you know that you're using your DNS servers this way, this
> should really have the same meaning for all servers. So you shouldn't
> have a per-server option to adjust this behavior but a per-resolvers
> section.

That's even better! And probably more easily implemented. I'll wait for 
Baptiste's response.

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder
stadiamaps.com

> On 27 Feb 2020, at 16:11, Willy Tarreau  wrote:
> 
> Hi Luke,
> 
> On Thu, Feb 27, 2020 at 03:07:35PM +0100, Luke Seelenbinder wrote:
>> Hello List,
>> 
>> We use SRV records extensively (for internal service discovery, etc.).
>> 
>> When the patch was integrated to support 0-weighted SRV records, I thought
>> that would simplify our setup, because at the time, I thought 0 weight meant
>> "backup server" without a "backup" flag on the server. Unfortunately for our
>> simplicity, that is not the case. A 0 weight means "will never be used unless
>> explicitly chosen".
>> 
>> That leads me to my questions:
>> 
>> - Is that the intended behaviors of 0 weight servers: to not function as a
>>  backup if all other servers are down?
> 
> Yes it is! They're typically used to drain old user sessions while
> progressively taking a server off. Some also use them to let an
> overloaded server cool down for a moment with no extra session. This
> is completely unrelated to backup servers in fact, which have their
> own weights and which can even be load balanced when all active servers
> are dead.
> 
>> - Would you (Willy?) accept a patch that used the Priority field of SRV
>> records to determine backup/non-backup status? Or perhaps an additional
>> server option to specify 0 weighted SRV records means "backup"?
> 
> I suspect that it's more a property of the resolvers than the servers.
> I mean, if you know that you're using your DNS servers this way, this
> should really have the same meaning for all servers. So you shouldn't
> have a per-server option to adjust this behavior but a per-resolvers
> section. I'm personally not opposed to having more flexibility, and I
> even find that it is a good idea. However, I'm really not skilled at all
> in the DNS area and Baptiste is the maintainer, so I'm CCing him and
> will let him decide.
> 
> Cheers,
> Willy



[PR] MINOR: Add health check duration metric to Prometheus service

2020-02-27 Thread PR Bot
Dear list!

Author: Seena Fallah 
Number of patches: 1

This is an automated relay of the Github pull request:
   MINOR: Add health check duration metric to Prometheus service

Patch title(s): 
   MINOR: Add health check duration metric to Prometheus service

Link:
   https://github.com/haproxy/haproxy/pull/520

Edit locally:
   wget https://github.com/haproxy/haproxy/pull/520.patch && vi 520.patch

Apply locally:
   curl https://github.com/haproxy/haproxy/pull/520.patch | git am -

Description:
   Fixes: #519

Instructions:
   This github pull request will be closed automatically; patch should be
   reviewed on the haproxy mailing list (haproxy@formilux.org). Everyone is
   invited to comment, even the patch's author. Please keep the author and
   list CCed in replies. Please note that in absence of any response this
   pull request will be lost.



Re: [PATCH] OPTIM: startup: fast unique_id allocation for acl

2020-02-27 Thread Carl Henrik Holth Lunde
On Thu, 2020-02-27 at 15:48 +0100, Willy Tarreau wrote:
> On Thu, Feb 27, 2020 at 02:35:38PM +, Carl Henrik Holth Lunde
> wrote:
> > > Well, after having looked at it more closely, I'd still rather
> > > keep
> > > the hole-filling algorithm, so if you have a v3 between the two
> > > versions it would be great :-)
> > 
> > Do you mean just with an exit code?
> 
> I suppose you mean the check on pattern_finalize_config() ? I guess
> we can't do much more in this case, unless you find where the
> conflicting ones were declared but I'm not sure we keep that info in
> the patterns.

The exit code I added for v3 was for OOM as you originally suggested
for v1. There is nothing from v2 in v3/v4.

Duplicate IDs are already handled by existing code during config
parsing.

By the way, changing the bsearch start offset would not improve total
speed significantly, and because it would be more code I prefer to not
do that.

> So I think I'm not seeing anything wrong here. If you're OK with that
> as well, I'll just change the sizeof in the calloc to make things
> more future-proof.
> 

Fixed all to sizeof(*arr), attached as v4.

Please backport to 2.1, and 2.0/1.8 if possible too.

> Thanks!
> Willy

Thanks!
From 0088824e56d1806bf89f43586cdf1698c4a07dbc Mon Sep 17 00:00:00 2001
From: Carl Henrik Lunde 
Date: Thu, 27 Feb 2020 16:45:50 +0100
Subject: [PATCH] OPTIM: startup: fast unique_id allocation for acl.

pattern_finalize_config() uses an inefficient algorithm which is a
problem with very large configuration files. This affects startup, and
therefore reload time. When haproxy is deployed as a router in a
Kubernetes cluster the generated configuration file may be large and
reloads are frequently occurring, which makes this a significant issue.

The old algorithm is O(n^2)
* allocate missing uids - O(n^2)
* sort linked list - O(n^2)

The new algorithm is O(n log n):
* find the user allocated uids - O(n)
* store them for efficient lookup - O(n log n)
* allocate missing uids - n times O(log n)
* sort all uids - O(n log n)
* convert back to linked list - O(n)

Performance examples, startup time in seconds:

pat_refs    old     new
    1000    0.02    0.01
   10000    2.1     0.04
   20000   12.3     0.07
   30000   27.9     0.10
   40000   52.5     0.14
   50000   77.5     0.17

Please backport to 1.8, 2.0 and 2.1.
---
 include/proto/pattern.h |  2 +-
 src/haproxy.c   |  6 ++-
 src/pattern.c   | 89 +++--
 3 files changed, 64 insertions(+), 33 deletions(-)

diff --git a/include/proto/pattern.h b/include/proto/pattern.h
index 5b9929614..73d3cdcf3 100644
--- a/include/proto/pattern.h
+++ b/include/proto/pattern.h
@@ -37,7 +37,7 @@ extern void (*pat_prune_fcts[PAT_MATCH_NUM])(struct pattern_expr *);
 extern struct pattern *(*pat_match_fcts[PAT_MATCH_NUM])(struct sample *, struct pattern_expr *, int);
 extern int pat_match_types[PAT_MATCH_NUM];
 
-void pattern_finalize_config(void);
+int pattern_finalize_config(void);
 
 /* return the PAT_MATCH_* index for match name "name", or < 0 if not found */
 static inline int pat_find_match_name(const char *name)
diff --git a/src/haproxy.c b/src/haproxy.c
index f04ccea6e..25a4328cb 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1898,7 +1898,11 @@ static void init(int argc, char **argv)
 		exit(1);
 	}
 
-	pattern_finalize_config();
+	err_code |= pattern_finalize_config();
+	if (err_code & (ERR_ABORT|ERR_FATAL)) {
+		ha_alert("Failed to finalize pattern config.\n");
+		exit(1);
+	}
 
 	/* recompute the amount of per-process memory depending on nbproc and
 	 * the shared SSL cache size (allowed to exist in all processes).
diff --git a/src/pattern.c b/src/pattern.c
index 90067cd23..76a6a4eea 100644
--- a/src/pattern.c
+++ b/src/pattern.c
@@ -2643,53 +2643,80 @@ int pattern_delete(struct pattern_expr *expr, struct pat_ref_elt *ref)
 	return 1;
 }
 
-/* This function finalize the configuration parsing. Its set all the
+/* This function compares two pat_ref** on unique_id */
+static int cmp_pat_ref(const void *_a, const void *_b)
+{
+	struct pat_ref * const *a = _a;
+	struct pat_ref * const *b = _b;
+
+	if ((*a)->unique_id < (*b)->unique_id)
+		return -1;
+	else if ((*a)->unique_id > (*b)->unique_id)
+		return 1;
+	return 0;
+}
+
+/* This function finalize the configuration parsing. It sets all the
  * automatic ids
  */
-void pattern_finalize_config(void)
+int pattern_finalize_config(void)
 {
-	int i = 0;
-	struct pat_ref *ref, *ref2, *ref3;
+	int len = 0;
+	int unassigned_pos = 0;
+	int next_unique_id = 0;
+	int i, j;
+	struct pat_ref *ref, **arr;
 	struct list pr = LIST_HEAD_INIT(pr);
 
 	pat_lru_seed = random();
 
+	/* Count pat_refs with user-defined unique_id and total count */
 	list_for_each_entry(ref, &pattern_reference, list) {
-		if (ref->unique_id == -1) {
-			/* Look for the first free id. */
-			while (1) {
-				list_for_each_entry(ref2, &pattern_reference, list) {
-					if (ref2->unique_id == i) {
-						i++;
-						break;
-			
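
For readers following the thread, here is a minimal standalone sketch of the
hole-filling idea from the commit message: collect the user-assigned ids, sort
them, then hand out the lowest free values to the remaining entries. This is
an illustration only, not the patch itself (which operates on HAProxy's
pat_ref list):

#include <stdlib.h>

/* qsort comparator for ints */
static int cmp_int(const void *a, const void *b)
{
	int x = *(const int *)a, y = *(const int *)b;
	return (x > y) - (x < y);
}

/* ids[i] >= 0 means user-assigned; -1 means "allocate one for me".
 * Automatic ids fill the lowest values not taken by a user: O(n log n).
 */
void assign_ids(int *ids, int n)
{
	int *used = malloc(n * sizeof(*used));
	int nused = 0, i, j = 0, next = 0;

	if (!used)
		return; /* a real implementation would report the failure */

	for (i = 0; i < n; i++)
		if (ids[i] >= 0)
			used[nused++] = ids[i];
	qsort(used, nused, sizeof(*used), cmp_int);

	for (i = 0; i < n; i++) {
		if (ids[i] >= 0)
			continue;
		/* skip over values reserved by the user */
		while (j < nused && used[j] <= next) {
			if (used[j] == next)
				next++;
			j++;
		}
		ids[i] = next++;
	}
	free(used);
}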

Re: [PATCH] supress cirrus-ci OS version check

2020-02-27 Thread Willy Tarreau
On Wed, Feb 26, 2020 at 07:32:44PM +0500, Илья Шипицин wrote:
> I've adjusted commit message.

Now merged, thanks guys!

Willy



Re: Log lines in 2.0

2020-02-27 Thread Willy Tarreau
Hi Igor,

On Thu, Feb 27, 2020 at 10:36:44PM +1100, Igor Cicimov wrote:
> > This looks like you are running HAProxy in debug mode. Debug mode is
> > enabled via the '-d' command line switch or 'debug' configuration option
> > (http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#debug).
> >
> > Best regards
> > Tim Düsterhus
> >
> 
> Yes, I have the debug option on, thanks. The thing is, it is there in 1.8 too
> but I don't see the same effect.

I recently marked the debug option deprecated because it has caused a lot
of trouble over time (services staying in foreground, spamming logs, filling
file-systems with boot log files etc), and indicated that only "-d" should
be used explicitly when you want to use the debug mode.

Do you have a *real* use case of "debug" in the global section that is
not easily solved with "-d" ? I'm asking because I'd really like to see
this design mistake disappear, but am not (too much) stubborn.

Thanks,
Willy



Re: SRV Record Priority Values

2020-02-27 Thread Willy Tarreau
Hi Luke,

On Thu, Feb 27, 2020 at 03:07:35PM +0100, Luke Seelenbinder wrote:
> Hello List,
> 
> We use SRV records extensively (for internal service discovery, etc.).
> 
> When the patch was integrated to support 0-weighted SRV records, I thought
> that would simplify our setup, because at the time, I thought 0 weight meant
> "backup server" without a "backup" flag on the server. Unfortunately for our
> simplicity, that is not the case. A 0 weight means "will never be used unless
> explicitly chosen".
> 
> That leads me to my questions:
> 
> - Is that the intended behaviors of 0 weight servers: to not function as a
>   backup if all other servers are down?

Yes it is! They're typically used to drain old user sessions while
progressively taking a server off. Some also use them to let an
overloaded server cool down for a moment with no extra session. This
is completely unrelated to backup servers in fact, which have their
own weights and which can even be load balanced when all active servers
are dead.

> - Would you (Willy?) accept a patch that used the Priority field of SRV
> records to determine backup/non-backup status? Or perhaps an additional
> server option to specify 0 weighted SRV records means "backup"?

I suspect that it's more a property of the resolvers than the servers.
I mean, if you know that you're using your DNS servers this way, this
should really have the same meaning for all servers. So you shouldn't
have a per-server option to adjust this behavior but a per-resolvers
section. I'm personally not opposed to having more flexibility, and I
even find that it is a good idea. However, I'm really not skilled at all
in the DNS area and Baptiste is the maintainer, so I'm CCing him and
will let him decide.

Cheers,
Willy



Re: We'd Like to Offer You a Free Infographic!

2020-02-27 Thread Paul Je
Hi there,



Just following up on our last email on your article

on RFID referring to another one

on the same topic of RFID.



We wanted to offer you guys an infographic

at the bottom of our page to use (completely free of charge of course) to
include in your article to explain how RFID works

using hexadecimals and bits to store information to do what it needs to.



We believe it will help your readers better understand how it works, and
would supplement your piece.



Would love for you to use it!



Best,



Paul

FobToronto


On 2020-02-25 23:32:40 UTC, Paul Je  wrote:

Hi there!



We came across your article

referring to a different one

regarding the topic of RFID.



We actually have an entire web page

speaking more about radio-frequency identification, and how it works using
the hexadecimal system.



It speaks to how it can be used to differentiate between other tags, to
access buildings and makes it easily understood using an Infographic

at the very bottom of the page to summarize the topic.



We'd like to offer this Infographic for free to provide value to your
readers to better understand your article on RFID, as we've found many of
our readers and customers have a hard time understanding how it can almost
magically react to a building security reader to open a secured door.



We'd love for you to use it! Please check it out to see if you feel it's a
fit to link to it on your website:



https://fobtoronto.ca/how-do-fobs-work/




Best,



Paul

FobToronto


Re: [PATCH] OPTIM: startup: fast unique_id allocation for acl

2020-02-27 Thread Willy Tarreau
On Thu, Feb 27, 2020 at 02:35:38PM +, Carl Henrik Holth Lunde wrote:
> > Well, after having looked at it more closely, I'd still rather keep
> > the hole-filling algorithm, so if you have a v3 between the two
> > versions it would be great :-)
> 
> Do you mean just with an exit code?

I suppose you mean the check on pattern_finalize_config() ? I guess we
can't do much more in this case, unless you find where the conflicting
ones were declared but I'm not sure we keep that info in the patterns.

> And maybe change
> sizeof(struct pat_ref *) to sizeof(arr[0]) - if this is safe even when
> arr is of length 0?

I think you're speaking of the calloc call, right ? It's indeed cleaner,
and the recommended way of doing it (or just sizeof(*arr)). I don't see
the relation with arr being of length zero. I mean you're dereferencing
its *type* to get another type then its size, you're not dereferencing
the pointer's value, if that's your concern.

So I think I'm not seeing anything wrong here. If you're OK with that
as well, I'll just change the sizeof in the calloc to make things more
future-proof.

Thanks!
Willy



[PATCH] BUG/MINOR: dns: ignore trailing dot

2020-02-27 Thread Lukas Tribus
As per issue #435, a hostname with a trailing dot confuses our DNS code:
for the resulting zero-length DNS label we emit a null byte. This change
makes us ignore the zero-length label instead.

Must be backported to 1.8.
---

As discussed in issue #435

---
 src/dns.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/src/dns.c b/src/dns.c
index c131f08..e2fa387 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1208,6 +1208,12 @@ int dns_str_to_dn_label(const char *str, int str_len, char *dn, int dn_len)
if (i == offset)
return -1;
 
+   /* ignore trailing dot */
+   if (i + 2 == str_len) {
+   i++;
+   break;
+   }
+
dn[offset] = (i - offset);
offset = i+1;
continue;
-- 
2.7.4
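
For context on the fix: DNS encodes a name on the wire as a sequence of
length-prefixed labels terminated by a zero-length root label, so
"foo.example.com" becomes (shown as an illustrative byte string):

   \x03foo\x07example\x03com\x00

With a trailing dot ("foo.example.com."), the conversion previously saw an
empty final label and emitted an extra null byte for it (the problem from
issue #435), corrupting the encoded name; the patch stops at the trailing
dot instead.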




Re: [PATCH] OPTIM: startup: fast unique_id allocation for acl

2020-02-27 Thread Carl Henrik Holth Lunde
On Tue, 2020-02-25 at 11:47 +0100, Willy Tarreau wrote:
[...]
> > The patch was written under the assumption that you wanted exactly
> > the same behavior.
> 
> Yes that's preferable indeed.
> 
> > Since you say we maybe can drop them, I have created a
> > new patch which should work fine but does not guarantee that
> > automatic
> > ids form a gapless sequence from 0. This simplifies the algorithm
> > csignificantly. The downside is that an overflow can happen if a
> > very
> > high -u number is used in the config file, so I added a check for
> > that.
> > The maximum is still very high.
> 
> The problem now becomes a matter of backwards compatibility as it
> will refuse to start on some configs otherwise valid on older
> versions.
> I'd rather not do this. We could have simply detected the overflow in
> the automatic assignment but I'd rather keep the feature even if I
> don't use it myself. I've looked at the doc and figured what it's
> used for. It's for when you want to replace ACL or map values from
> the CLI. There is not always a unique reference file name to work
> with, but there's a unique ID that can be used prefixed by a '#'.
> It's possible that some infrastructures automatically update usage
> thresholds from the CLI using this, for example.
> 
> > > I don't know how the current code deals with duplicated IDs nor
> > > what impact these might have.
> > > 
> > 
> > Duplicate IDs are handled when the configuration file is read. That
> > code is not efficient either but I did not look at improving that
> > because I do not know if anyone uses this feature *and* use very
> > large configuration files. For small hand written files the code is
> > fine.
> 
> Agreed. I've seen the pat_ref_lookupid() which does linear search.
> I'm not that much worried by this considering that you managed to
> significantly shrink the startup time in your case already.
> 
> Well, after having looked at it more closely, I'd still rather keep
> the hole-filling algorithm, so if you have a v3 between the two
> versions it would be great :-)

Do you mean just with an exit code? And maybe change
sizeof(struct pat_ref *) to sizeof(arr[0]) - if this is safe even when
arr is of length 0?


> Thanks!
> Willy
From 7f1d255901fb4ed6252154a8d14775180835088f Mon Sep 17 00:00:00 2001
From: Carl Henrik Lunde 
Date: Thu, 27 Feb 2020 15:30:13 +0100
Subject: [PATCH] OPTIM: startup: fast unique_id allocation for acl.

pattern_finalize_config() uses an inefficient algorithm which is a
problem with very large configuration files. This affects startup, and
therefore reload time. When haproxy is deployed as a router in a
Kubernetes cluster the generated configuration file may be large and
reloads are frequently occurring, which makes this a significant issue.

The old algorithm is O(n^2)
* allocate missing uids - O(n^2)
* sort linked list - O(n^2)

The new algorithm is O(n log n):
* find the user allocated uids - O(n)
* store them for efficient lookup - O(n log n)
* allocate missing uids - n times O(log n)
* sort all uids - O(n log n)
* convert back to linked list - O(n)

Performance examples, startup time in seconds:

pat_refs    old     new
    1000    0.02    0.01
   10000    2.1     0.04
   20000   12.3     0.07
   30000   27.9     0.10
   40000   52.5     0.14
   50000   77.5     0.17

Please backport to 1.8 and 2.0.
---
 include/proto/pattern.h |  2 +-
 src/haproxy.c   |  6 ++-
 src/pattern.c   | 89 +++--
 3 files changed, 64 insertions(+), 33 deletions(-)

diff --git a/include/proto/pattern.h b/include/proto/pattern.h
index 5b9929614..73d3cdcf3 100644
--- a/include/proto/pattern.h
+++ b/include/proto/pattern.h
@@ -37,7 +37,7 @@ extern void (*pat_prune_fcts[PAT_MATCH_NUM])(struct pattern_expr *);
 extern struct pattern *(*pat_match_fcts[PAT_MATCH_NUM])(struct sample *, struct pattern_expr *, int);
 extern int pat_match_types[PAT_MATCH_NUM];
 
-void pattern_finalize_config(void);
+int pattern_finalize_config(void);
 
 /* return the PAT_MATCH_* index for match name "name", or < 0 if not found */
 static inline int pat_find_match_name(const char *name)
diff --git a/src/haproxy.c b/src/haproxy.c
index f04ccea6e..25a4328cb 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -1898,7 +1898,11 @@ static void init(int argc, char **argv)
 		exit(1);
 	}
 
-	pattern_finalize_config();
+	err_code |= pattern_finalize_config();
+	if (err_code & (ERR_ABORT|ERR_FATAL)) {
+		ha_alert("Failed to finalize pattern config.\n");
+		exit(1);
+	}
 
 	/* recompute the amount of per-process memory depending on nbproc and
 	 * the shared SSL cache size (allowed to exist in all processes).
diff --git a/src/pattern.c b/src/pattern.c
index 90067cd23..6f9ffcdc5 100644
--- a/src/pattern.c
+++ b/src/pattern.c
@@ -2643,53 +2643,80 @@ int pattern_delete(struct pattern_expr *expr, struct pat_ref_elt *ref)
 	return 1;
 }
 
-/* This function finalize the configuration parsing. Its set all the
+/* This function 

Re: Prometheus service

2020-02-27 Thread Christopher Faulet

Le 27/02/2020 à 11:36, Seena Fallah a écrit :

Hi all.
I have upgraded to HAProxy 2.0.13 and enabled the Prometheus service on it. In
the previous version (1.8.8) I used haproxy_exporter, and I had
haproxy_server_check_duration_milliseconds and new_session_rate for each
server, but in the HAProxy v2.0.13 Prometheus service I don't see these metrics.

How can I have them?


Hi,

check_duration is not exported but it could be. If so, it would be exported
in seconds using a float representation, so the metric name would be
haproxy_server_check_duration_seconds. About new_session_rate, I guess you mean
current_session_rate. This one was removed on purpose: it can be deduced from
the sessions_total metric using the Prometheus rate() function.
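
For example, something along these lines should give a per-server session
rate (metric and label names as exposed by the built-in exporter; the proxy
and server values are illustrative):

   rate(haproxy_server_sessions_total{proxy="be_app",server="srv1"}[1m])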


--
Christopher Faulet



SRV Record Priority Values

2020-02-27 Thread Luke Seelenbinder
Hello List,

We use SRV records extensively (for internal service discovery, etc.).

When the patch was integrated to support 0-weighted SRV records, I thought 
that would simplify our setup, because at the time, I thought 0 weight meant 
"backup server" without a "backup" flag on the server. Unfortunately for our 
simplicity, that is not the case. A 0 weight means "will never be used unless 
explicitly chosen".

That leads me to my questions:

- Is that the intended behaviors of 0 weight servers: to not function as a 
backup if all other servers are down?
- Would you (Willy?) accept a patch that used the Priority field of SRV records 
to determine backup/non-backup status? Or perhaps an additional server option 
to specify 0 weighted SRV records means "backup"?

Best,
Luke

—
Luke Seelenbinder
Stadia Maps | Founder
stadiamaps.com

Re: [PATCH 3/3] MINOR: stream: Use stream_generate_unique_id

2020-02-27 Thread Willy Tarreau
On Thu, Feb 27, 2020 at 02:17:04PM +0100, Tim Düsterhus wrote:
> > Does this mean that the unique-id will now be generated only when
> > logging ? Because if that's the case it won't find any element from
> > the request anymore.
> 
> If neither the unique-id-header is set, nor the unique-id sample fetch
> is used then this is correct. I'm not sure how useful an ID that is
> never referenced elsewhere is,

It's not that it's never referenced. It may be used for various things: for
getting a unique log identifier, passed as part of another header or the
query string, used in a redirect to match the second request against
the first one, used in Lua, passed to an SPOA etc.

> but this indeed would be a breaking change. I'll have a look.

OK thanks!

Willy



Re: [PATCH 3/3] MINOR: stream: Use stream_generate_unique_id

2020-02-27 Thread Tim Düsterhus
Willy,

Am 27.02.20 um 13:32 schrieb Willy Tarreau:
> On Thu, Feb 27, 2020 at 12:50:44PM +0100, Tim Düsterhus wrote:
/* add unique-id if "header-unique-id" is specified */
 +  if (sess->fe->header_unique_id && !LIST_ISEMPTY(&sess->fe->format_unique_id)) {
>>> ^^
>>> All the unique-id generation seems to be enclosed in this. Am I missing
>>> something ?
>>
>> Yes. The `header_unique_id` check is only in `http_process_request`.
>> I've merged the previous two ifs into a single one. A unique ID in
>> `http_process_request` is only generated when it is actually sent to the
>> upstream server. If it is not sent we don't need to generate one,
>> because no one is seeing it. This might technically be a slight behavior
>> change (the unique ID is generated later), but the user won't see a
>> difference.
> 
> Thanks for the explanation (this should be part of the commit message
> so that someone bisecting to that commit later doesn't ask the question
> again).

It looks like I need to change this anyway. I guess I can disregard this
then?

>> By replacing all the ad-hoc generation of the unique IDs with a call to
>> `stream_generate_unique_id` the unique ID will be generated when it is
>> first used. This is happening in this order:
>>
>> 1. unique-id-header
>> 2. unique-id sample fetch
>> 3. Logging
> 
> I'm just having a doubt now. What if the unique-id-format references
> some part of the request ? Let's say I have this :
> 
>  unique-id-format %ci_%cp_%[req.hdr(host),crc32,hex]
> 
> And in my logs I have "%ID".
> 
> Does this mean that the unique-id will now be generated only when
> logging ? Because if that's the case it won't find any element from
> the request anymore.

If neither the unique-id-header is set, nor the unique-id sample fetch
is used then this is correct. I'm not sure how useful an ID that is
never referenced elsewhere is, but this indeed would be a breaking
change. I'll have a look.

>> I was thinking about this, but didn't make the change to keep the diff
>> small and because it increases the size of a stream by an additional
>> `size_t` which I'm not sure is desired. I can create a follow up patch
>> if you want.
> 
> It would be nice. We're trying to progressively clean such things from
> the code. There are still a lot of them, cookie names and whatever, over
> which we run strlen for each and every use case, or we have a separate
> length and do manual operations to keep them in sync while we already
> have everything needed to manipulate them directly.

Okay, sure.

Best regards
Tim Düsterhus



Re: [PATCH 3/3] MINOR: stream: Use stream_generate_unique_id

2020-02-27 Thread Willy Tarreau
On Thu, Feb 27, 2020 at 12:50:44PM +0100, Tim Düsterhus wrote:
> >>/* add unique-id if "header-unique-id" is specified */
> >> +  if (sess->fe->header_unique_id && !LIST_ISEMPTY(&sess->fe->format_unique_id)) {
> > ^^
> > All the unique-id generation seems to be enclosed in this. Am I missing
> > something ?
> 
> Yes. The `header_unique_id` check is only in `http_process_request`.
> I've merged the previous two ifs into a single one. A unique ID in
> `http_process_request` is only generated when it is actually sent to the
> upstream server. If it is not sent we don't need to generate one,
> because no one is seeing it. This might technically be a slight behavior
> change (the unique ID is generated later), but the user won't see a
> difference.

Thanks for the explanation (this should be part of the commit message
so that someone bisecting to that commit later doesn't ask the question
again).

> By replacing all the ad-hoc generation of the unique IDs with a call to
> `stream_generate_unique_id` the unique ID will be generated when it is
> first used. This is happening in this order:
> 
> 1. unique-id-header
> 2. unique-id sample fetch
> 3. Logging

I'm just having a doubt now. What if the unique-id-format references
some part of the request ? Let's say I have this :

 unique-id-format %ci_%cp_%[req.hdr(host),crc32,hex]

And in my logs I have "%ID".

Does this mean that the unique-id will now be generated only when
logging ? Because if that's the case it won't find any element from
the request anymore.

> I specifically added debug code locally that printed a message with the
> current function when the unique ID is actually generated for a request.
> If neither the header is set nor the sample fetch is used, then an ID
> will still be generated for the logging.

OK. I'm just concerned about the case above, to be sure the unique-id
is still built during the request processing and not too late.

> > (...)
> > 
> > Other than that I have a suggestion, I've seen recently in this thread
> > a few calls to strlen() on unique_id and header_unique_id. I think they
> > should be turned to ist so that the length is always stored with them
> > and we don't need to run strlen on them anymore at runtime. And this
> > will simplify the header addition which will basically look like this:
> > 
> >  http_add_header(htx, sess->fe->header_unique_id, s->unique_id);
> 
> I was thinking about this, but didn't make the change to keep the diff
> small and because it increases the size of a stream by an additional
> `size_t` which I'm not sure is desired. I can create a follow up patch
> if you want.

It would be nice. We're trying to progressively clean such things from
the code. There are still a lot of them, cookie names and whatever, over
which we run strlen for each and every use case, or we have a separate
length and do manual operations to keep them in sync while we already
have everything needed to manipulate them directly.

Thanks!
Willy



[PATCH v2 1/2] MINOR: stream: Add stream_generate_unique_id function

2020-02-27 Thread Tim Duesterhus
Currently unique IDs for a stream are generated using repetitive code in
multiple locations, possibly allowing for inconsistent behavior.
---
 include/proto/stream.h |  3 +++
 src/http_ana.c |  1 -
 src/stream.c   | 24 
 3 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/include/proto/stream.h b/include/proto/stream.h
index f8c0887b9..e54ac60cc 100644
--- a/include/proto/stream.h
+++ b/include/proto/stream.h
@@ -53,6 +53,7 @@ extern struct trace_source trace_strm;
 #define IS_HTX_STRM(strm) ((strm)->flags & SF_HTX)
 
 extern struct pool_head *pool_head_stream;
+extern struct pool_head *pool_head_uniqueid;
 extern struct list streams;
 
 extern struct data_cb sess_conn_cb;
@@ -65,6 +66,8 @@ void stream_shutdown(struct stream *stream, int why);
 void stream_dump(struct buffer *buf, const struct stream *s, const char *pfx, char eol);
 void stream_dump_and_crash(enum obj_type *obj, int rate);
 
+int stream_generate_unique_id(struct stream *strm, struct list *format);
+
 void stream_process_counters(struct stream *s);
 void sess_change_server(struct stream *sess, struct server *newsrv);
 struct task *process_stream(struct task *t, void *context, unsigned short state);
diff --git a/src/http_ana.c b/src/http_ana.c
index e3d22445e..20c7b6e50 100644
--- a/src/http_ana.c
+++ b/src/http_ana.c
@@ -5093,7 +5093,6 @@ void http_end_txn(struct stream *s)
 
 
 DECLARE_POOL(pool_head_http_txn, "http_txn", sizeof(struct http_txn));
-DECLARE_POOL(pool_head_uniqueid, "uniqueid", UNIQUEID_LEN);
 
 __attribute__((constructor))
 static void __http_protocol_init(void)
diff --git a/src/stream.c b/src/stream.c
index 9798c5f0f..306444e89 100644
--- a/src/stream.c
+++ b/src/stream.c
@@ -66,6 +66,7 @@
 #include 
 
 DECLARE_POOL(pool_head_stream, "stream", sizeof(struct stream));
+DECLARE_POOL(pool_head_uniqueid, "uniqueid", UNIQUEID_LEN);
 
 struct list streams = LIST_HEAD_INIT(streams);
 __decl_spinlock(streams_lock);
@@ -2657,6 +2658,29 @@ void stream_dump_and_crash(enum obj_type *obj, int rate)
abort();
 }
 
+/* Generates a unique ID based on the given <format>, stores it in the given <strm> and
+ * returns the length of the ID. -1 is returned on memory allocation failure.
+ *
+ * If an ID is already stored within the stream nothing happens and the length of the
+ * stored ID is returned.
+ */
+int stream_generate_unique_id(struct stream *strm, struct list *format)
+{
+   if (strm->unique_id != NULL) {
+   return strlen(strm->unique_id);
+   }
+   else {
+   char *unique_id;
+   if ((unique_id = pool_alloc(pool_head_uniqueid)) == NULL)
+   return -1;
+
+   strm->unique_id = unique_id;
+   strm->unique_id[0] = 0;
+
+   return build_logline(strm, strm->unique_id, UNIQUEID_LEN, format);
+   }
+}
+
 //
 /*   All supported ACL keywords must be declared here.  */
 //
-- 
2.25.1




[PATCH v2 2/2] MINOR: stream: Use stream_generate_unique_id

2020-02-27 Thread Tim Duesterhus
This patch replaces the ad-hoc generation of stream's `unique_id` values
by calls to `stream_generate_unique_id`.
---
 src/http_ana.c   | 14 ++
 src/http_fetch.c | 17 -
 src/log.c|  3 +--
 3 files changed, 15 insertions(+), 19 deletions(-)

diff --git a/src/http_ana.c b/src/http_ana.c
index 20c7b6e50..d6f41b428 100644
--- a/src/http_ana.c
+++ b/src/http_ana.c
@@ -788,20 +788,18 @@ int http_process_request(struct stream *s, struct channel *req, int an_bit)
http_manage_client_side_cookies(s, req);
 
/* add unique-id if "header-unique-id" is specified */
+   if (sess->fe->header_unique_id && !LIST_ISEMPTY(&sess->fe->format_unique_id)) {
+   struct ist n, v;
+   int length;

-   if (!LIST_ISEMPTY(&sess->fe->format_unique_id) && !s->unique_id) {
-   if ((s->unique_id = pool_alloc(pool_head_uniqueid)) == NULL) {
+   if ((length = stream_generate_unique_id(s, &sess->fe->format_unique_id)) < 0) {
if (!(s->flags & SF_ERR_MASK))
s->flags |= SF_ERR_RESOURCE;
goto return_int_err;
}
-   s->unique_id[0] = '\0';
-   build_logline(s, s->unique_id, UNIQUEID_LEN, &sess->fe->format_unique_id);
-   }

-   if (sess->fe->header_unique_id && s->unique_id) {
-   struct ist n = ist2(sess->fe->header_unique_id, strlen(sess->fe->header_unique_id));
-   struct ist v = ist2(s->unique_id, strlen(s->unique_id));

+   n = ist2(sess->fe->header_unique_id, strlen(sess->fe->header_unique_id));
+   v = ist2(s->unique_id, length);
if (unlikely(!http_add_header(htx, n, v)))
goto return_int_err;
diff --git a/src/http_fetch.c b/src/http_fetch.c
index d288e841d..dbbb5ecfd 100644
--- a/src/http_fetch.c
+++ b/src/http_fetch.c
@@ -409,19 +409,18 @@ static int smp_fetch_stcode(const struct arg *args, struct sample *smp, const ch
 
 static int smp_fetch_uniqueid(const struct arg *args, struct sample *smp, const char *kw, void *private)
 {
+   int length;
+
if (LIST_ISEMPTY(&smp->sess->fe->format_unique_id))
return 0;
 
-   if (!smp->strm->unique_id) {
-   if ((smp->strm->unique_id = pool_alloc(pool_head_uniqueid)) == NULL)
-   return 0;
-   smp->strm->unique_id[0] = '\0';
-   build_logline(smp->strm, smp->strm->unique_id, UNIQUEID_LEN, &smp->sess->fe->format_unique_id);
-   }
-   smp->data.u.str.data = strlen(smp->strm->unique_id);
-   smp->data.type = SMP_T_STR;
+   length = stream_generate_unique_id(smp->strm, &smp->sess->fe->format_unique_id);
+   if (length < 0)
+   return 0;
+
smp->data.u.str.area = smp->strm->unique_id;
+   smp->data.u.str.data = length;
+   smp->data.type = SMP_T_STR;
smp->flags = SMP_F_CONST;
return 1;
 }
diff --git a/src/log.c b/src/log.c
index 60b1a5a4d..b46605b8d 100644
--- a/src/log.c
+++ b/src/log.c
@@ -2983,8 +2983,7 @@ void strm_log(struct stream *s)
 
/* if unique-id was not generated */
if (!s->unique_id && !LIST_ISEMPTY(&sess->fe->format_unique_id)) {
-   if ((s->unique_id = pool_alloc(pool_head_uniqueid)) != NULL)
-   build_logline(s, s->unique_id, UNIQUEID_LEN, &sess->fe->format_unique_id);
+   stream_generate_unique_id(s, &sess->fe->format_unique_id);
}
 
if (!LIST_ISEMPTY(&sess->fe->logformat_sd)) {
-- 
2.25.1




Re: [PATCH 3/3] MINOR: stream: Use stream_generate_unique_id

2020-02-27 Thread Tim Düsterhus
Willy,

Am 27.02.20 um 03:43 schrieb Willy Tarreau:
> On Wed, Feb 26, 2020 at 04:20:51PM +0100, Tim Duesterhus wrote:
>> This patch replaces the ad-hoc generation of stream's `unique_id` values
>> by calls to `stream_generate_unique_id`.
> 
> It seems to me that it won't generate the unique_id anymore if there
> is no unique-id-header directive in the config :
> 
>>  http_manage_client_side_cookies(s, req);
>>  
>>  /* add unique-id if "header-unique-id" is specified */
>> +if (sess->fe->header_unique_id && !LIST_ISEMPTY(&sess->fe->format_unique_id)) {
> ^^
> All the unique-id generation seems to be enclosed in this. Am I missing
> something ?

Yes. The `header_unique_id` check is only in `http_process_request`.
I've merged the previous two ifs into a single one. A unique ID in
`http_process_request` is only generated when it is actually sent to the
upstream server. If it is not sent we don't need to generate one,
because no one is seeing it. This might technically be a slight behavior
change (the unique ID is generated later), but the user won't see a
difference.

By replacing all the ad-hoc generation of the unique IDs with a call to
`stream_generate_unique_id` the unique ID will be generated when it is
first used. This is happening in this order:

1. unique-id-header
2. unique-id sample fetch
3. Logging
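
For illustration, a config in which all three consumers appear (the format
string is the documentation's example; the header name and log-format are
illustrative):

   frontend fe
       bind :8080
       unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
       unique-id-header X-Unique-ID
       log-format "%ci:%cp [%tr] %ft %b/%s %ID"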

I specifically added debug code locally that printed a message with the
current function when the unique ID is actually generated for a request.
If neither the header is set nor the sample fetch is used, then an ID
will still be generated for the logging.

> (...)
> 
> Other than that I have a suggestion, I've seen recently in this thread
> a few calls to strlen() on unique_id and header_unique_id. I think they
> should be turned to ist so that the length is always stored with them
> and we don't need to run strlen on them anymore at runtime. And this
> will simplify the header addition which will basically look like this:
> 
>  http_add_header(htx, sess->fe->header_unique_id, s->unique_id);

I was thinking about this, but didn't make the change to keep the diff
small and because it increases the size of a stream by an additional
`size_t` which I'm not sure is desired. I can create a follow up patch
if you want.

Best regards
Tim Düsterhus



Re: [PATCH 2/3] MINOR: stream: Add stream_generate_unique_id function

2020-02-27 Thread Tim Düsterhus
Willy,

Am 27.02.20 um 03:49 schrieb Willy Tarreau:
>> +int stream_generate_unique_id(struct stream *strm, struct list *format) {
> 
> Please put the function's opening brace on its own line. The reason for
> this is when you have many arguments and variables, it easily becomes a
> mess where you cannot always visually tell which ones belong to what.

That happens when working with different projects. My usual code style
is the opening brace on the same line.

>> +if (strm->unique_id != NULL) {
>> +return strlen(strm->unique_id);
>> +}
>> +
>> +if ((strm->unique_id = pool_alloc(pool_head_uniqueid)) == NULL)
>> +return -1;
> 
> Please avoid assignments in "if" conditions. There are two reasons for
> this, both related to debugging:
>- if you want to quickly disable the error check or put a lock
>  around or whatever, you cannot without splitting the line in
>  two ;
> 
>- you often cannot single-step through it in a debugger or put a
>  breakpoint after the pool_alloc() call.

Sure, adjusted. In this case I just "copied" the existing code / control
flow.

Will send the updated series when I had a look at the other mail.

Best regards
Tim Düsterhus



Re: Log lines in 2.0

2020-02-27 Thread Igor Cicimov
Hi Tim,

On Thu, Feb 27, 2020, 10:09 PM Tim Düsterhus  wrote:

> Igor,
>
> Am 27.02.20 um 05:27 schrieb Igor Cicimov:
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.accept(0009)=0012 from [IP:56142] ALPN=
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clireq[0012:]: GET /monitor-url HTTP/1.1
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clihdr[0012:]: host: 10.0.4.33:PORT
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clihdr[0012:]: user-agent:
> ELB-HealthChecker/1.0
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clihdr[0012:]: accept: */*
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.clicls[0012:]
> > Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> > 0d56:monitor-in.closed[0012:]
> >
> > I don't have any log-format settings thus the default ones should be in
> > play so wonder if this is what I should see?
>
> This looks like you are running HAProxy in debug mode. Debug mode is
> enabled via the '-d' command line switch or 'debug' configuration option
> (http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#debug).
>
> Best regards
> Tim Düsterhus
>

Yes, I have the debug option on, thanks. The thing is, it is there in 1.8 too
but I don't see the same effect.

Cheers,
Igor

>


Re: Log lines in 2.0

2020-02-27 Thread Tim Düsterhus
Igor,

Am 27.02.20 um 05:27 schrieb Igor Cicimov:
> Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> 0d56:monitor-in.accept(0009)=0012 from [IP:56142] ALPN=
> Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> 0d56:monitor-in.clireq[0012:]: GET /monitor-url HTTP/1.1
> Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> 0d56:monitor-in.clihdr[0012:]: host: 10.0.4.33:PORT
> Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> 0d56:monitor-in.clihdr[0012:]: user-agent: ELB-HealthChecker/1.0
> Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> 0d56:monitor-in.clihdr[0012:]: accept: */*
> Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> 0d56:monitor-in.clicls[0012:]
> Feb 27 03:37:21 ip-10-0-4-33 haproxy[21361]:
> 0d56:monitor-in.closed[0012:]
> 
> I don't have any log-format settings thus the default ones should be in
> play so wonder if this is what I should see?

This looks like you are running HAProxy in debug mode. Debug mode is
enabled via the '-d' command line switch or 'debug' configuration option
(http://cbonte.github.io/haproxy-dconv/2.1/configuration.html#debug).

Best regards
Tim Düsterhus



Prometheus service

2020-02-27 Thread Seena Fallah
Hi all.
I have upgraded to HAProxy 2.0.13 and enabled the Prometheus service on it. In
the previous version (1.8.8) I used haproxy_exporter, and I had
haproxy_server_check_duration_milliseconds and new_session_rate for
each server, but in the HAProxy v2.0.13 Prometheus service I don't see these
metrics.
How can I have them?