Re: ELB scaling => sudden backend tragedy

2019-10-24 Thread Jim Freeman
https://github.com/haproxy/haproxy/issues/341

On Thu, Oct 24, 2019 at 11:44 AM Lukas Tribus  wrote:

> Hello,
>
> On Thu, Oct 24, 2019 at 5:53 PM Jim Freeman  wrote:
> >
> > Yesterday we had an ELB scale to 26 IP addresses, at which time ALL of
> > the servers in that backend were suddenly marked down, e.g.:
> >
> >    Server www26 is going DOWN for maintenance (unspecified DNS error)
> >
> > Ergo, ALL requests to that backend got 503s ==> complete outage
> >
> > Mayhap src/dns.c::dns_validate_dns_response() bravely running away when
> > DNS_RESP_TRUNCATED (skipping parsing of the partial list of servers,
> > abandoning TTL updates to perfectly good endpoints) is not the best
> > course of action?
> >
> > Of course we'll hope (MTUs allowing) that we'll be able to paper this
> > over for a while using an accepted_payload_size greater than the
> > default (512).
>
> I agree this is basically a ticking time-bomb for everyone not
> thinking about the DNS payload size every single day.
>
> However, we also need to make sure people become aware of it when they
> are hitting the truncation size. This would have to be at least a
> warning at critical syslog level.
>
>
> Reliable DNS resolution for everyone without surprises will only
> happen with TCP based DNS:
> https://github.com/haproxy/haproxy/issues/185
>
> For the issue in question on the other hand: can you file a bug on github?
>
>
>
> Thanks,
>
> Lukas
>


Re: [PATCH] MINOR: sample: add us/ms support to date sample

2019-10-24 Thread Tim Düsterhus
Damien,

On 24.10.19 at 17:49, Damien Claisse wrote:
> @@ -3259,7 +3275,7 @@ static struct sample_fetch_kw_list smp_kws = {ILH, {
>   { "always_false", smp_fetch_false, 0,            NULL, SMP_T_BOOL, SMP_USE_INTRN },
>   { "always_true",  smp_fetch_true,  0,            NULL, SMP_T_BOOL, SMP_USE_INTRN },
>   { "env",          smp_fetch_env,   ARG1(1,STR),  NULL, SMP_T_STR,  SMP_USE_INTRN },
> - { "date",         smp_fetch_date,  ARG1(0,SINT), NULL, SMP_T_SINT, SMP_USE_INTRN },
> + { "date",         smp_fetch_date,  ARG2(0,SINT,STR), NULL, SMP_T_SINT, SMP_USE_INTRN },

If I fat-finger the unit, I will silently receive "seconds". I recommend
adding validation (replace the "NULL" with the name of a validation function).
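
For illustration, a minimal sketch of what such a validator could look like,
following the usual argument-validation convention (return 1 on success, 0
with an error message otherwise); the name val_date_unit is hypothetical:

   /* hypothetical validator for date's optional unit argument */
   static int val_date_unit(struct arg *arg, char **err_msg)
   {
           if (arg[1].type == ARGT_STR &&
               strcmp(arg[1].data.str.area, "ms") != 0 &&
               strcmp(arg[1].data.str.area, "us") != 0) {
                   memprintf(err_msg, "expects 'ms' or 'us', got '%s'",
                             arg[1].data.str.area);
                   return 0;
           }
           return 1;
   }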

Best regards
Tim Düsterhus





Re: ELB scaling => sudden backend tragedy

2019-10-24 Thread Lukas Tribus
Hello,

On Thu, Oct 24, 2019 at 5:53 PM Jim Freeman  wrote:
>
> Yesterday we had an ELB scale to 26 IP addresses, at which time ALL of the 
> servers in that backend were suddenly marked down, e.g. :
>
>Server www26 is going DOWN for maintenance (unspecified DNS error)
>
> Ergo, ALL requests to that backend got 503s ==> complete outage
>
> Mayhap src/dns.c::dns_validate_dns_response() bravely running away when 
> DNS_RESP_TRUNCATED (skipping parsing of the partial list of servers, 
> abandoning TTL updates to perfectly good endpoints) is not the best course of 
> action ?
>
> Of course we'll hope (MTUs allowing) that we'll be able to paper this over
> for a while using an accepted_payload_size greater than the default (512).

I agree this is basically a ticking time-bomb for everyone not
thinking about the DNS payload size every single day.

However, we also need to make sure people become aware of it when they
are hitting the truncation size. This would have to be at least a
warning at critical syslog level.


Reliable DNS resolution for everyone without surprises will only
happen with TCP based DNS:
https://github.com/haproxy/haproxy/issues/185

For the issue in question on the other hand: can you file a bug on github?



Thanks,

Lukas



RE: Lock contention in pat_match_str in threaded mode

2019-10-24 Thread Brian Diekelman
Thank you for turning that around so quickly, Willy.

We'll pull down the new release when it's available.


-Original Message-
From: Willy Tarreau  
Sent: Tuesday, October 22, 2019 10:53 PM
To: Brian Diekelman 
Cc: haproxy@formilux.org
Subject: Re: Lock contention in pat_match_str in threaded mode


Hi Brian,

On Tue, Oct 22, 2019 at 04:19:58PM +0200, Willy Tarreau wrote:
> At the moment I don't know what it requires to break it down per 
> thread, so I'll add a github issue referencing your report so that we don't 
> forget.
> Depending on the complexity, it may make sense to backport it once 
> done, because it's still a regression.

So this morning I spent a little time on it and it was really trivial to fix.
I've now split the LRU cache per thread (just as it is when using nbproc, in
fact) so that we can get rid of the lock. As a result, I jump from 67kH/s to
369kH/s on a test with 7 threads on 4 cores :-)

I could backport it as far as 1.8, so I did it, as I really consider this a
regression and it can be a showstopper for some users migrating to threads.
The gains are less important in 1.8 since the scalability is much less linear
there. I'm going to work on issuing some releases today or over this week; I'd
encourage you to migrate to 2.0.8 once it's out.

Thanks again for your detailed report!
Willy



[ANNOUNCE] haproxy-1.9.12

2019-10-24 Thread Willy Tarreau
Hi,

HAProxy 1.9.12 was released on 2019/10/24. It added 39 new commits
after version 1.9.11.

This version addresses mostly the same issues as 2.0.8 yesterday. In
addition it addresses a failed backport of the H2 connection timeout
fix, which can cause some rare crashes if the timeout finally strikes
at the exact same moment a new stream is created.

Please note that we're reaching a point where 2.0 starts to work better than
1.9. There are still issues being addressed in 2.0, but they are also present
in 1.9 and will probably be harder to fix there (read: "the fixes may be less
reliable"). Given that 1.9 is not supposed to be maintained for long, it's
worth starting to think about migrating at some point. There's no rush, but
each update should be a reminder that it doesn't require much more work to
jump to a newer branch than to stay on the same one ;-)

Please find the usual URLs below :
   Site index       : http://www.haproxy.org/
   Discourse        : http://discourse.haproxy.org/
   Slack channel    : https://slack.haproxy.org/
   Issue tracker    : https://github.com/haproxy/haproxy/issues
   Sources          : http://www.haproxy.org/download/1.9/src/
   Git repository   : http://git.haproxy.org/git/haproxy-1.9.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-1.9.git
   Changelog        : http://www.haproxy.org/download/1.9/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Christopher Faulet (8):
      BUG/MEDIUM: htx: Catch chunk_memcat() failures when HTX data are formatted to h1
      BUG/MINOR: chunk: Fix tests on the chunk size in functions copying data
      BUG/MINOR: mux-h1: Mark the output buffer as full when the xfer is interrupted
      BUG/MINOR: mux-h1: Capture ignored parsing errors
      BUG/MINOR: http-htx: Properly set htx flags on error files to support keep-alive
      BUG/MINOR: tcp: Don't alter counters returned by tcp info fetchers
      BUG/MINOR: ssl: Fix fd leak on error path when a TLS ticket keys file is parsed
      BUG/MINOR: stick-table: Never exceed (MAX_SESS_STKCTR-1) when fetching a stkctr

Emeric Brun (5):
      CLEANUP: ssl: make ssl_sock_put_ckch_into_ctx handle errcode/warn
      CLEANUP: ssl: make ssl_sock_load_dh_params handle errcode/warn
      CLEANUP: bind: handle warning label on bind keywords parsing.
      BUG/MEDIUM: ssl: 'tune.ssl.default-dh-param' value ignored with openssl > 1.1.1
      BUG/MINOR: ssl: fix memcpy overlap without consequences.

Miroslav Zagorac (1):
      BUG/MINOR: WURFL: fix send_log() function arguments

Olivier Houchard (4):
      BUG/MEDIUM: mux_pt: Make sure we don't have a conn_stream before freeing.
      Revert e8826ded5fea3593d89da2be5c2d81c522070995.
      BUG/MEDIUM: mux_pt: Don't destroy the connection if we have a stream attached.
      BUG/MEDIUM: mux_pt: Only call the wake emthod if nobody subscribed to receive.

Tim Duesterhus (2):
      BUG/MINOR: lua: Properly initialize the buffer's fields for string samples in hlua_lua2(smp|arg)
      BUG/MINOR: sample: Make the `field` converter compatible with `-m found`

William Lallemand (7):
  BUG/MINOR: ssl: abort on sni allocation failure
  BUG/MINOR: ssl: free the sni_keytype nodes
  BUG/MINOR: ssl: abort on sni_keytypes allocation failure
  BUILD: ssl: wrong #ifdef for SSL engines code
  BUG/MINOR: mworker/ssl: close openssl FDs unconditionally
  BUG/MINOR: mworker/cli: reload fail with inherited FD
  BUG/MINOR: cache: alloc shctx after check config

Willy Tarreau (12):
  MINOR: mux-h2: add a per-connection list of blocked streams
  BUILD: ebtree: make eb_is_empty() and eb_is_dup() take a const
  BUG/MEDIUM: mux-h2: do not enforce timeout on long connections
  BUG/MEDIUM: cache: make sure not to cache requests with absolute-uri
  DOC: clarify some points around http-send-name-header's behavior
  MINOR: stats: mention in the help message support for "json" and "typed"
  BUG/MAJOR: idle conns: schedule the cleanup task on the correct threads
  CLEANUP: ssl: make ssl_sock_load_cert*() return real error codes
  BUG/MAJOR: mux-h2: fix incorrect backport of connection timeout fix
  BUG/MINOR: mux-h2: also make sure blocked legacy connections may expire
  BUG/MINOR: stick-table: fix an incorrect 32 to 64 bit key conversion
  BUG/MEDIUM: pattern: make the pattern LRU cache thread-local and lockless

---



ELB scaling => sudden backend tragedy

2019-10-24 Thread Jim Freeman
Yesterday we had an ELB scale to 26 IP addresses, at which time ALL of the
servers in that backend were suddenly marked down, e.g. :

   Server www26 is going DOWN for maintenance (unspecified DNS error)

Ergo, ALL requests to that backend got 503s ==> complete outage

Mayhap src/dns.c::dns_validate_dns_response() bravely running away when
DNS_RESP_TRUNCATED (skipping parsing of the partial list of servers,
abandoning TTL updates to perfectly good endpoints) is not the best course
of action ?

Of course we'll hope (MTUs allowing) that we'll be able to paper this over
for a while using an accepted_payload_size greater than the default (512).

But as-is, this looks to be an avoidable pathology?

Thoughts?

Yours, endlessly impressed with haproxy,
...jfree

https://packages.debian.org/stretch-backports/haproxy
1.8.19-1~bpo9+1
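
For reference, the accepted_payload_size workaround mentioned above lives in
a resolvers section; a minimal sketch (the nameserver address and hold value
are illustrative, only accepted_payload_size matters here):

   resolvers mydns
       nameserver ns1 10.0.0.2:53
       accepted_payload_size 8192   # raise the 512-byte default via EDNS0
       hold valid 10s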


[PATCH] MINOR: sample: add us/ms support to date sample

2019-10-24 Thread Damien Claisse
It can sometimes be interesting to have a timestamp with
a resolution of less than a second. It is currently painful
to obtain this, because the concatenation of date and date_us leads
to a shorter timestamp during the first 100ms of each second, which
is not parseable and needs ugly ACLs in the configuration to prepend
zeros when needed.
To improve this, add an optional <unit> parameter to the date sample.
This allows milliseconds or microseconds to be appended to the
timestamp with the correct length.
---
 doc/configuration.txt |  8 +++-
 src/sample.c  | 20 ++--
 2 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index d8e1b57f1..12c54e6be 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -14151,7 +14151,7 @@ cpu_ns_tot : integer
   high cpu_calls count, for example when processing many HTTP chunks, and for
   this reason it is often preferred to log cpu_ns_avg instead.
 
-date([<offset>]) : integer
+date([<offset>, <unit>]) : integer
   Returns the current date as the epoch (number of seconds since 01/01/1970).
   If an offset value is specified, then it is a number of seconds that is added
   to the current date before returning the value. This is particularly useful
@@ -14163,6 +14163,12 @@ date([<offset>]) : integer
     # set an expires header to now+1 hour in every response
     http-response set-header Expires %[date(3600),http_date]
 
+  <unit> is facultative, and can be set to "ms" for milliseconds or "us" for
+  microseconds. Default unit is seconds.
+  If <unit> is set, microseconds or milliseconds of current date sample are
+  appended to timestamp. This is useful when a time resolution of less than
+  a second is needed.
+
 date_us : integer
   Return the microseconds part of the date (the "second" part is returned by
   date sample). This sample is coherent with the date sample as it is comes
diff --git a/src/sample.c b/src/sample.c
index 98b5d573f..1f6c925d3 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -2941,7 +2941,7 @@ smp_fetch_env(const struct arg *args, struct sample *smp, const char *kw, void *
 }
 
 /* retrieve the current local date in epoch time, and applies an optional offset
- * of args[0] seconds.
+ * of args[0] seconds. Add milli/microseconds if asked to in args[1].
  */
 static int
 smp_fetch_date(const struct arg *args, struct sample *smp, const char *kw, void *private)
@@ -2954,6 +2954,22 @@ smp_fetch_date(const struct arg *args, struct sample *smp, const char *kw, void
 
 	smp->data.type = SMP_T_SINT;
 	smp->flags |= SMP_F_VOL_TEST | SMP_F_MAY_CHANGE;
+
+	if (!args || args[1].type != ARGT_STR)
+		return 1;
+
+	/* add milliseconds if needed */
+	if (strcmp(args[1].data.str.area, "ms") == 0) {
+		smp->data.u.sint *= 1000;
+		smp->data.u.sint += (date.tv_usec + 500) / 1000;
+	}
+
+	/* add microseconds if needed */
+	else if (strcmp(args[1].data.str.area, "us") == 0) {
+		smp->data.u.sint *= 1000000;
+		smp->data.u.sint += date.tv_usec;
+	}
+
 	return 1;
 }
 
@@ -3259,7 +3275,7 @@ static struct sample_fetch_kw_list smp_kws = {ILH, {
 	{ "always_false", smp_fetch_false, 0,            NULL, SMP_T_BOOL, SMP_USE_INTRN },
 	{ "always_true",  smp_fetch_true,  0,            NULL, SMP_T_BOOL, SMP_USE_INTRN },
 	{ "env",          smp_fetch_env,   ARG1(1,STR),  NULL, SMP_T_STR,  SMP_USE_INTRN },
-	{ "date",         smp_fetch_date,  ARG1(0,SINT), NULL, SMP_T_SINT, SMP_USE_INTRN },
+	{ "date",         smp_fetch_date,  ARG2(0,SINT,STR), NULL, SMP_T_SINT, SMP_USE_INTRN },
 	{ "date_us",      smp_fetch_date_us,  0,         NULL, SMP_T_SINT, SMP_USE_INTRN },
 	{ "hostname",     smp_fetch_hostname, 0,         NULL, SMP_T_STR,  SMP_USE_INTRN },
 	{ "nbproc",       smp_fetch_nbproc,   0,         NULL, SMP_T_SINT, SMP_USE_INTRN },
-- 
2.20.1
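
For illustration, with this patch applied a sub-second timestamp could be
fetched in one sample instead of concatenating date and date_us. With invented
numbers: at epoch second 1571900000 and tv_usec = 5000, concatenation yields
"15719000005000" (the zero padding is lost), while the new unit argument
returns 1571900000 * 1000000 + 5000 = 1571900000005000. A config sketch,
assuming the syntax from the patch as posted:

   # sketch: epoch in microseconds via the new optional <unit> argument
   http-response set-header X-Timestamp %[date(0,us)]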




Re: Haproxy tcp:80 and http:80 configuration difference without setting other http options

2019-10-24 Thread 联通集团联通云数据有限公司本部
Yes, thank you. In tcp mode there is no need for forwardfor. In my
understanding, it adds an X-Forwarded-For field to the HTTP header. Besides,
do they have differences in scheduling according to my configuration?


On Oct 24, 2019, at 9:34 PM, Aleksandar Lazic <al-hapr...@none.at> wrote:

Hi.

On 24.10.19 at 13:05, Zhengyu Pan (联通集团联通云数据有限公司本部) wrote:
Hi,
Do mode tcp on port 80 and mode http on port 80 behave differently when no
other http options are set in HAProxy?

No, AFAIK, but `option forwardfor` is an HTTP option which does not work
in tcp mode. You should get a warning at the config parsing and start phase.

Regards
Aleks

This is http:80 configuration:

frontend 18857e04-cd6a-47a8-859e-38a7a4e0d8c5
option tcplog
option forwardfor
bind 0.0.0.0:80
mode http
default_backend 86d030d2-4893-4c37-b391-48e54d86c7e3
backend 86d030d2-4893-4c37-b391-48e54d86c7e3
mode http
balance roundrobin
option forwardfor
server 56653038-5af1-47b3-881c-e0e164bf459f 192.168.2.21:80 weight 1
server 80e5fb55-81fd-4753-98bd-2f330478b953 192.168.2.22:80 weight 1

This is tcp:80 configuration:

frontend 18857e04-cd6a-47a8-859e-38a7a4e0d8c5
option tcplog
option forwardfor
bind 0.0.0.0:80
mode tcp
default_backend 86d030d2-4893-4c37-b391-48e54d86c7e3
backend 86d030d2-4893-4c37-b391-48e54d86c7e3
mode tcp
balance roundrobin
option forwardfor
server 56653038-5af1-47b3-881c-e0e164bf459f 192.168.2.21:80 weight 1
server 80e5fb55-81fd-4753-98bd-2f330478b953 192.168.2.22:80 weight 1


Re: Haproxy tcp:80 and http:80 configuration difference without setting other http options

2019-10-24 Thread Aleksandar Lazic
Hi.

On 24.10.19 at 13:05, Zhengyu Pan (联通集团联通云数据有限公司本部) wrote:
> Hi,
> Do mode tcp on port 80 and mode http on port 80 behave differently when no
> other http options are set in HAProxy?

No, AFAIK, but `option forwardfor` is an HTTP option which does not work
in tcp mode. You should get a warning at the config parsing and start phase.

Regards
Aleks

> This is http:80 configuration:
> 
> frontend 18857e04-cd6a-47a8-859e-38a7a4e0d8c5
> option tcplog
> option forwardfor
> bind 0.0.0.0:80
> mode http
> default_backend 86d030d2-4893-4c37-b391-48e54d86c7e3
> backend 86d030d2-4893-4c37-b391-48e54d86c7e3
> mode http
> balance roundrobin
> option forwardfor
> server 56653038-5af1-47b3-881c-e0e164bf459f 192.168.2.21:80 weight 1
> server 80e5fb55-81fd-4753-98bd-2f330478b953 192.168.2.22:80 weight 1
> 
> This is tcp:80 configuration:
> 
> frontend 18857e04-cd6a-47a8-859e-38a7a4e0d8c5
> option tcplog
> option forwardfor
> bind 0.0.0.0:80
> mode tcp
> default_backend 86d030d2-4893-4c37-b391-48e54d86c7e3
> backend 86d030d2-4893-4c37-b391-48e54d86c7e3
> mode tcp
> balance roundrobin
> option forwardfor
> server 56653038-5af1-47b3-881c-e0e164bf459f 192.168.2.21:80 weight 1
> server 80e5fb55-81fd-4753-98bd-2f330478b953 192.168.2.22:80 weight 1
> 




Haproxy tcp:80 and http:80 configuration difference without setting other http options

2019-10-24 Thread 联通集团联通云数据有限公司本部
Hi,
Do mode tcp on port 80 and mode http on port 80 behave differently when no
other http options are set in HAProxy?

This is http:80 configuration:

frontend 18857e04-cd6a-47a8-859e-38a7a4e0d8c5
option tcplog
option forwardfor
bind 0.0.0.0:80
mode http
default_backend 86d030d2-4893-4c37-b391-48e54d86c7e3
backend 86d030d2-4893-4c37-b391-48e54d86c7e3
mode http
balance roundrobin
option forwardfor
server 56653038-5af1-47b3-881c-e0e164bf459f 192.168.2.21:80 weight 1
server 80e5fb55-81fd-4753-98bd-2f330478b953 192.168.2.22:80 weight 1

This is tcp:80 configuration:

frontend 18857e04-cd6a-47a8-859e-38a7a4e0d8c5
option tcplog
option forwardfor
bind 0.0.0.0:80
mode tcp
default_backend 86d030d2-4893-4c37-b391-48e54d86c7e3
backend 86d030d2-4893-4c37-b391-48e54d86c7e3
mode tcp
balance roundrobin
option forwardfor
server 56653038-5af1-47b3-881c-e0e164bf459f 192.168.2.21:80 weight 1
server 80e5fb55-81fd-4753-98bd-2f330478b953 192.168.2.22:80 weight 1


[PATCH] MINOR: ssl: deduplicate ca-file

2019-10-24 Thread Emmanuel Hocdet
Hi,

Little patch with a big win when ca-file is used on server lines.

++
Manu



0001-MINOR-ssl-deduplicate-ca-file.patch
Description: Binary data




Re: Possible optimization to 51d in multithread

2019-10-24 Thread Илья Шипицин
Thank you, Ben.

There's one more "finding"; I guess we just mark it as intentional (inside
Coverity itself), right?

693  #ifdef FIFTYONEDEGREES_H_PATTERN_INCLUDED

CID 1403837 (#1 of 1): Calling risky function (DC.WEAK_CRYPTO) dont_call:
random should not be used for security-related applications, because linear
congruential algorithms are too easy to break.

Use a compliant random number generator, such as /dev/random or /dev/urandom
on Unix-like systems, and CNG (Cryptography API: Next Generation) on
Windows.

694  _51d_lru_seed = random();
695  if (global_51degrees.cache_size) {
696      _51d_lru_tree = lru64_new(global_51degrees.cache_size);
697  }
698  #endif

On Thu, Oct 24, 2019 at 13:50, Ben Shillito wrote:

> Hi,
>
>
>
> This is intentional, as the calling method would return early if the htx
> is null (this is assuming that smp_prefetch_htx will not return a different
> result the second time?):
>
>    htx = smp_prefetch_htx(smp, chn, 1);
>    if (!htx)
>        return 0;
>
> Rather than doing the null check again, I will instead change the method
> to take the htx which has already been checked. This way coverity should
> not have an issue with it, we avoid having an extra null check, and we
> remove the second call to smp_prefetch_htx which was unnecessary.
>
>
>
> I will submit this as a separate patch to the other change.
>
>
>
> Thanks,
>
>
>
> Ben Shillito
> Developer
>
> O: +44 1183 287152
> E: b...@51degrees.com
> T: @51Degrees
>
>
>
> *From:* Илья Шипицин [mailto:chipits...@gmail.com]
> *Sent:* 24 October 2019 08:04
> *To:* Ben Shillito 
> *Cc:* Willy Tarreau ; HAProxy 
> *Subject:* Re: Possible optimization to 51d in multithread
>
>
>
> Hello, Ben.
>
>
>
> Since you are going to touch the 51d code, can you please review the
> following Coverity finding (it thinks there might be a null pointer
> dereference)?
>
>
>
> 239  // No need to null check as this has already been carried out in the
> 240  // calling method
>
>
>
> 2. returned_null: smp_prefetch_htx returns NULL (checked 35 out of 36
> times).
>
>
>
> 3. var_assigned: Assigning: htx = NULL return value from smp_prefetch_htx.
>
> 241htx = smp_prefetch_htx(smp, chn, 1);
>
> 242
>
>
>
> 4. Condition i < global_51degrees.header_count, taking true branch.
>
> 243for (i = 0; i < global_51degrees.header_count; i++) {
>
> 244name.ptr = (global_51degrees.header_names + i)->area;
>
> 245name.len = (global_51degrees.header_names + i)->data;
>
> 246ctx.blk = NULL;
>
> 247
>
>
>
> CID 1403847 (#1 of 1): Dereference null return value (NULL_RETURNS)
> 5. dereference: Dereferencing a pointer that might be NULL (htx) when
> calling http_find_header.
>
> 248  if (http_find_header(htx, name, &ctx, 1)) {
>
> 249ws->importantHeaders[ws->importantHeadersCount
> ].header = ws->dataSet->httpHeaders + i;
>
> 250ws->importantHeaders[ws->importantHeadersCount
> ].headerValue = ctx.value.ptr;
>
> 251ws->importantHeaders[ws->importantHeadersCount
> ].headerValueLength = ctx.value.len;
>
> 252ws->importantHeadersCount++;
>
> 253}
>
> 254}
>
> 255}
>
> 256#endif
>
>
>
> On Wed, Oct 23, 2019 at 14:40, Ben Shillito wrote:
>
> Hi Willy,
>
> Thanks for letting me know. I will get the work for this scheduled in and
> get a patch over to you.
>
> I agree it looks like a fairly small change for what I'm sure is a
> considerable performance increase for large systems.
>
> Good to see this level of attention to detail when it comes to the
> performance of HAProxy.
>
> Thanks,
>
> Ben Shillito
> Developer
> O: +44 1183 287152
> E: b...@51degrees.com
> T: @51Degrees
>
> -Original Message-
> From: Willy Tarreau [mailto:w...@1wt.eu]
> Sent: 23 October 2019 07:04
> To: Ben Shillito 
> Cc: HAProxy 
> Subject: Possible optimization to 51d in multithread
>
> Hi Ben,
>
> after Brian reported the thread performance regression affecting the
> pattern matchings in haproxy when relying on the LRU cache, I had a look at
> other users of the LRU cache and found that 51d.c is using it with a lock
> as well and may also suffer from a lack of linearity with threads.
>
> You may want to have a look at this patch I just committed to make the
> pattern LRU cache per thread :
>
>   403bfbb130 ("BUG/MEDIUM: pattern: make the pattern LRU cache
> thread-local and lockless")
>
> It's quite straightforward, and if you want to do the same 

RE: Possible optimization to 51d in multithread

2019-10-24 Thread Ben Shillito
Hi,

This is intentional, as the calling method would return early if the htx is 
null (this is assuming that smp_prefetch_htx will not return a different result 
the second time?):
   htx = smp_prefetch_htx(smp, chn, 1);
   if (!htx)
       return 0;
Rather than doing the null check again, I will instead change the method to 
take the htx which has already been checked. This way coverity should not have 
an issue with it, we avoid having an extra null check, and we remove the second 
call to smp_prefetch_htx which was unnecessary.

I will submit this as a separate patch to the other change.
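
A minimal sketch of that refactor (names are hypothetical, not the actual
patch): the caller keeps its null check and passes the validated htx down, so
no second smp_prefetch_htx() call remains for Coverity to flag:

   /* hypothetical helper; htx is guaranteed non-NULL by the caller */
   static void fill_important_headers(struct htx *htx, fiftyoneDegreesWorkset *ws)
   {
           struct http_hdr_ctx ctx;
           struct ist name;
           int i;

           for (i = 0; i < global_51degrees.header_count; i++) {
                   name.ptr = (global_51degrees.header_names + i)->area;
                   name.len = (global_51degrees.header_names + i)->data;
                   ctx.blk = NULL;
                   if (http_find_header(htx, name, &ctx, 1)) {
                           /* record the header as in the snippet below */
                   }
           }
   }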

Thanks,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com
T: @51Degrees

From: Илья Шипицин [mailto:chipits...@gmail.com]
Sent: 24 October 2019 08:04
To: Ben Shillito 
Cc: Willy Tarreau ; HAProxy 
Subject: Re: Possible optimization to 51d in multithread

Hello, Ben.

Since you are going to touch the 51d code, can you please review the following
Coverity finding (it thinks there might be a null pointer dereference)?

239  // No need to null check as this has already been carried out in the
240  // calling method

2. returned_null: smp_prefetch_htx returns NULL (checked 35 out of 36 times).

3. var_assigned: Assigning: htx = NULL return value from smp_prefetch_htx.
241  htx = smp_prefetch_htx(smp, chn, 1);
242

4. Condition i < global_51degrees.header_count, taking true branch.
243  for (i = 0; i < global_51degrees.header_count; i++) {
244      name.ptr = (global_51degrees.header_names + i)->area;
245      name.len = (global_51degrees.header_names + i)->data;
246      ctx.blk = NULL;
247

CID 1403847 (#1 of 1): Dereference null return value (NULL_RETURNS)
5. dereference: Dereferencing a pointer that might be NULL (htx) when calling
http_find_header.
248      if (http_find_header(htx, name, &ctx, 1)) {
249          ws->importantHeaders[ws->importantHeadersCount].header = ws->dataSet->httpHeaders + i;
250          ws->importantHeaders[ws->importantHeadersCount].headerValue = ctx.value.ptr;
251          ws->importantHeaders[ws->importantHeadersCount].headerValueLength = ctx.value.len;
252          ws->importantHeadersCount++;
253      }
254  }
255  }
256  #endif

On Wed, Oct 23, 2019 at 14:40, Ben Shillito <b...@51degrees.com> wrote:
Hi Willy,

Thanks for letting me know. I will get the work for this scheduled in and get a 
patch over to you.

I agree it looks like a fairly small change for what I'm sure is a considerable 
performance increase for large systems.

Good to see this level of attention to detail when it comes to the performance 
of HAProxy.

Thanks,

Ben Shillito
Developer
O: +44 1183 287152
E: b...@51degrees.com
T: @51Degrees

-Original Message-
From: Willy Tarreau [mailto:w...@1wt.eu]
Sent: 23 October 2019 07:04
To: Ben Shillito <b...@51degrees.com>
Cc: HAProxy <haproxy@formilux.org>
Subject: Possible optimization to 51d in multithread

Hi Ben,

after Brian reported the thread performance regression affecting the pattern 
matchings in haproxy when relying on the LRU cache, I had a look at other users 
of the LRU cache and found that 51d.c is using it with a lock as well and may 
also suffer from a lack of linearity with threads.

You may want to have a look at this patch I just committed to make the pattern 
LRU cache per thread :

  403bfbb130 ("BUG/MEDIUM: pattern: make the pattern LRU cache thread-local and 
lockless")

It's quite straightforward, and if you want to do the same on 51d.c, you just 
have to make your lru_tree pointer thread_local and allocate it for each thread 
in a small callback registered to be called after threads are initialized. Same 
for the call to lru64_destroy(). Then you can remove the lru lock and gain a 
lot of scalability.
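
A minimal sketch of what that could look like in 51d.c, assuming the
per-thread init/deinit hooks used elsewhere in HAProxy (this is not the
actual patch):

   static THREAD_LOCAL struct lru64_head *_51d_lru_tree = NULL;

   /* one LRU tree per thread, so no lock is needed */
   static int _51d_per_thread_init(void)
   {
           if (!global_51degrees.cache_size)
                   return 1;
           _51d_lru_tree = lru64_new(global_51degrees.cache_size);
           return _51d_lru_tree != NULL;
   }

   static void _51d_per_thread_deinit(void)
   {
           if (_51d_lru_tree)
                   lru64_destroy(_51d_lru_tree);
   }

   /* registered once at startup:
    *   hap_register_per_thread_init(_51d_per_thread_init);
    *   hap_register_per_thread_deinit(_51d_per_thread_deinit);
    */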

I'll let you have a look because I'd rather not break something non-obvious in
your code and also because you know better than me how to benchmark the changes
using the real lib and database, but if you need some help, just let me know.

Given that it's quite small and simple a change, I'm fine with merging such a 
patch for 2.1 even a bit late (but only this, no other changes that are not 
bugfixes please).

Willy

Re: [PATCH] MINOR: tcp: avoid confusion in time parsing init

2019-10-24 Thread Christopher Faulet

On 23/10/2019 at 19:31, William Dauchy wrote:

If I'm not wrong, we never enter val_fc_time_value when an associated
fetcher such as `fc_rtt` is called without an argument, meaning
`type == ARGT_STOP` will never be true and so the default
`data.sint = TIME_UNIT_MS` will never be set.
Remove this part to avoid giving the impression that the default data.sint
is set to ms while reading the code.



Hi William,

You're right. I merged your patch. Thanks.

--
Christopher Faulet



Re: Possible optimization to 51d in multithread

2019-10-24 Thread Илья Шипицин
Hello, Ben.

Since you are going to touch the 51d code, can you please review the following
Coverity finding (it thinks there might be a null pointer dereference)?

239  // No need to null check as this has already been carried out in the
240  // calling method

2. returned_null: smp_prefetch_htx returns NULL (checked 35 out of 36 times).

3. var_assigned: Assigning: htx = NULL return value from smp_prefetch_htx.
241  htx = smp_prefetch_htx(smp, chn, 1);
242

4. Condition i < global_51degrees.header_count, taking true branch.
243  for (i = 0; i < global_51degrees.header_count; i++) {
244      name.ptr = (global_51degrees.header_names + i)->area;
245      name.len = (global_51degrees.header_names + i)->data;
246      ctx.blk = NULL;
247

CID 1403847 (#1 of 1): Dereference null return value (NULL_RETURNS)
5. dereference: Dereferencing a pointer that might be NULL (htx) when calling
http_find_header.
248      if (http_find_header(htx, name, &ctx, 1)) {
249          ws->importantHeaders[ws->importantHeadersCount].header = ws->dataSet->httpHeaders + i;
250          ws->importantHeaders[ws->importantHeadersCount].headerValue = ctx.value.ptr;
251          ws->importantHeaders[ws->importantHeadersCount].headerValueLength = ctx.value.len;
252          ws->importantHeadersCount++;
253      }
254  }
255  }
256  #endif

On Wed, Oct 23, 2019 at 14:40, Ben Shillito wrote:

> Hi Willy,
>
> Thanks for letting me know. I will get the work for this scheduled in and
> get a patch over to you.
>
> I agree it looks like a fairly small change for what I'm sure is a
> considerable performance increase for large systems.
>
> Good to see this level of attention to detail when it comes to the
> performance of HAProxy.
>
> Thanks,
>
> Ben Shillito
> Developer
> O: +44 1183 287152
> E: b...@51degrees.com
> T: @51Degrees
>
> -Original Message-
> From: Willy Tarreau [mailto:w...@1wt.eu]
> Sent: 23 October 2019 07:04
> To: Ben Shillito 
> Cc: HAProxy 
> Subject: Possible optimization to 51d in multithread
>
> Hi Ben,
>
> after Brian reported the thread performance regression affecting the
> pattern matchings in haproxy when relying on the LRU cache, I had a look at
> other users of the LRU cache and found that 51d.c is using it with a lock
> as well and may also suffer from a lack of linearity with threads.
>
> You may want to have a look at this patch I just committed to make the
> pattern LRU cache per thread :
>
>   403bfbb130 ("BUG/MEDIUM: pattern: make the pattern LRU cache
> thread-local and lockless")
>
> It's quite straightforward, and if you want to do the same on 51d.c, you
> just have to make your lru_tree pointer thread_local and allocate it for
> each thread in a small callback registered to be called after threads are
> initialized. Same for the call to lru64_destroy(). Then you can remove the
> lru lock and gain a lot of scalability.
>
> I'll let you have a look because I'd rather not break something non-obvious
> in your code and also because you know better than me how to benchmark the
> changes using the real lib and database, but if you need some help, just
> let me know.
>
> Given that it's quite small and simple a change, I'm fine with merging
> such a patch for 2.1 even a bit late (but only this, no other changes that
> are not bugfixes please).
>
> Willy