Re: HAproxy constant memory leak
Hi Georges-Etienne, On Thu, Feb 05, 2015 at 09:10:25PM -0500, Georges-Etienne Legendre wrote: Hi Willy, I'm not sure how to document this leak. I don't know exactly how the firewall SSL health check is implemented... Would the Wireshark trace be enough to report the issue? Yes, I think it will definitely help, given that the exchange is very short; it is basically an SSL hello. It is important to report the strace output as well and to mention that haproxy is running in a chroot. If that helps you, I can send you the script I wrote from your capture. You could try it to confirm it has the same effect as the firewall's checks. Once confirmed, it will be much easier to build a bug report given that you'll simply have to attach the script as a reproducer. Best regards, Willy
Re: [PATCH 0/2] Allow suppression of email alerts by log level
Hi Simon, On Fri, Feb 06, 2015 at 11:11:55AM +0900, Simon Horman wrote: Hi, This series adds a new option which allows configuration of the maximum log level of messages for which email alerts will be sent. (...) Great! Both patches applied. Thanks! Willy
Re: SSL Performance increase?
On 2/5/2015 5:54 AM, Klavs Klavsen wrote: Adding nbproc 4, improved performance of https from 511 req/s to 1296 req/s.. not quite an exponential scaling.. We tested with 8 cores and got 1328 req/s.. so it seems we're hitting something else already after 2,5 core.. vmstat 1 - also reveals a lot of cpu-idle time.. For cleartext performance I really don't know for sure what you can do, except maybe using bare metal rather than a virtual machine. Other people have been around this community a lot longer than I have and may have better ideas. Getting that cleartext performance up to a reasonable level will be your first step. Once that's done, there are a lot of things that will help with performance using SSL. This is an AWESOME video on that subject: https://www.youtube.com/watch?v=0EB7zh_7UE4 The current haproxy version implements almost every performance-enhancing method mentioned in that video, as long as your openssl is new enough. Thanks, Shawn
Re: SSL Performance increase?
Hi, On Thu, Feb 05, Klavs Klavsen wrote: Jarno Huuskonen wrote on 02/05/2015 01:28 PM: Hi, On Thu, Feb 05, Klavs Klavsen wrote: Hi guys, I'm testing our haproxy setup in regards to SSL performance - by simply using ab, and fetching a favicon.ico file.. over http haproxy delivers 3.000 req/s. over https haproxy delivers 511 req/s. I tried giving haproxy more cores (it's a virtual server) - but this did not help at all :( Silly question: when you added more vCPUs to the virtual machine did you change the haproxy nbproc setting (make haproxy use more than 1 process)? Not silly at all.. I had naively figured it would scale on CPUs per default :) Adding daemon and nbproc setting = number-of-cores and testing again. I tried also setting cpu-map - but apparently that needs to be specifically enabled at build time.. Is that an unsafe feature? Also - in regards to stats, I understand that the stats will no longer be accurate? You can bind different procs to use different stats sockets. Something like:

    stats socket /path/to/stats  level admin process 1
    stats socket /path/to/stats2 level admin process 2
    stats socket /path/to/stats3 level admin process 3
    stats socket /path/to/stats4 level admin process 4

We currently fetch stats and insert them in our graphite system.. which will be useless then.. :( You could collect stats from all processes and aggregate the results. Or have dedicated processes for https that send traffic to the http frontend, something like:

    listen HTTPS_in
        bind-process 2 3 4
        mode tcp            # (or http if you want http acls here)
        bind address:443 crt ...
        server http_in 127.0.0.1:666 send-proxy-v2

    frontend HTTP_in
        bind-process 1
        bind 127.0.0.1:666 accept-proxy
        bind *:80 ...
        mode http
        ...
        default_backend ...

And then you can collect stats from the socket that's bound to process 1. I think you can find better examples from the list archives.
(Or I think this describes a similar setup: http://brokenhaze.com/blog/2014/03/25/how-stack-exchange-gets-the-most-out-of-haproxy/) -Jarno Is it in the works for haproxy to perhaps use some shared-memory section, with semaphore locking of course, to collect stats for all in one? -- Jarno Huuskonen
Re: SSL Performance increase?
Adding nbproc 4, improved performance of https from 511 req/s to 1296 req/s.. not quite an exponential scaling.. We tested with 8 cores and got 1328 req/s.. so it seems we're hitting something else already after 2,5 core.. vmstat 1 - also reveals a lot of cpu-idle time.. Jarno Huuskonen wrote on 02/05/2015 01:28 PM: Hi, On Thu, Feb 05, Klavs Klavsen wrote: Hi guys, I'm testing our haproxy setup in regards to SSL performance - by simply using ab, and fetching a favicon.ico file.. over http haproxy delivers 3.000 req/s. over https haproxy delivers 511 req/s. I tried giving haproxy more cores (it's a virtual server) -but this did not help at all :( Silly question: when you added more vCPUs to the virtual machine did you change haproxy nbproc setting (make haproxy use more than 1process) ? -Jarno I can't seem to find anything on haproxy actually hitting a bottleneck.. I did try to do the https test from 2 clients simultaneously- and then they just get half the req/s - so total is the same. What should I look at, to improve https performance in haproxy? -- Regards, Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200 Those who do not understand Unix are condemned to reinvent it, poorly. --Henry Spencer
nbproc > 1 and stats in ADMIN mode?
Hi guys, Just to check.. if I set nbproc to f.ex. 4 - then I understand I need to define 4xstats.. and when I visit the webinterface.. I'll actually only get stats from one of the 4 processes.. But we have ADMIN enabled for stats - so we can disable backend servers etc.. will we have to do that for each of the 4 stats editions - before it's actually active or is that state shared among them all? -- Regards, Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200 Those who do not understand Unix are condemned to reinvent it, poorly. --Henry Spencer
Re: SSL Performance increase?
Jarno Huuskonen wrote on 02/05/2015 01:28 PM: Hi, On Thu, Feb 05, Klavs Klavsen wrote: Hi guys, I'm testing our haproxy setup in regards to SSL performance - by simply using ab, and fetching a favicon.ico file.. over http haproxy delivers 3.000 req/s. over https haproxy delivers 511 req/s. I tried giving haproxy more cores (it's a virtual server) - but this did not help at all :( Silly question: when you added more vCPUs to the virtual machine did you change the haproxy nbproc setting (make haproxy use more than 1 process)? Not silly at all.. I had naively figured it would scale on CPUs per default :) Adding daemon and nbproc setting = number-of-cores and testing again. I tried also setting cpu-map - but apparently that needs to be specifically enabled at build time.. Is that an unsafe feature? Also - in regards to stats, I understand that the stats will no longer be accurate? We currently fetch stats and insert them in our graphite system.. which will be useless then.. :( Is it in the works for haproxy to perhaps use some shared-memory section, with semaphore locking of course, to collect stats for all in one? -- Regards, Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200 Those who do not understand Unix are condemned to reinvent it, poorly. --Henry Spencer
Re: SSL Performance increase?
Baptiste wrote on 02/05/2015 04:44 PM: [CUT] 3000 req/s in clear is low and such a round number is not normal :) Move (far far) away from this provider. You're wasting your time investigating a performance issue while the limitation is in the hypervisor and multitenancy of your supplier. it's running on vmware 5.5 on local hardware - nowhere else to go :( If I set haproxy to just send a 301 response (ie. not relay to varnish delivering the favicon.ico) - I get approx 15k req/s.. -- Regards, Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200 Those who do not understand Unix are condemned to reinvent it, poorly. --Henry Spencer
Re: SSL Performance increase?
On Thu, Feb 5, 2015 at 2:03 PM, Jarno Huuskonen jarno.huusko...@uef.fi wrote: Hi, On Thu, Feb 05, Klavs Klavsen wrote: Jarno Huuskonen wrote on 02/05/2015 01:28 PM: Hi, On Thu, Feb 05, Klavs Klavsen wrote: Hi guys, I'm testing our haproxy setup in regards to SSL performance - by simply using ab, and fetching a favicon.ico file.. over http haproxy delivers 3.000 req/s. over https haproxy delivers 511 req/s. Hi, 3000 req/s in clear is low and such a round number is not normal :) Move (far far) away from this provider. You're wasting your time investigating a performance issue while the limitation is in the hypervisor and multitenancy of your supplier. Baptiste
Re: nbproc > 1 and stats in ADMIN mode?
On Thu, Feb 5, 2015 at 3:44 PM, Pavlos Parissis pavlos.paris...@gmail.com wrote: On 05/02/2015 03:01 μμ, Klavs Klavsen wrote: Hi guys, Just to check.. if I set nbproc to f.ex. 4 - then I understand I need to define 4x stats.. and when I visit the webinterface.. I'll actually only get stats from one of the 4 processes.. But we have ADMIN enabled for stats - so we can disable backend servers etc.. will we have to do that for each of the 4 stats editions - before it's actually active or is that state shared among them all? Yes, you have to do all your admin operations on each webinterface. Cheers, Pavlos You can also write a small application to take the admin requests and send them to each haproxy process' web interface, or you can set the backend server health check to something you can dynamically change to start failing and bring the server down gracefully that way.
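To script such per-process admin operations, one admin-level stats socket per process can be declared and each command replayed against every socket. A minimal sketch along the lines suggested elsewhere in the thread (socket paths are illustrative):

```
global
    nbproc 4
    # one admin socket per process; a "disable server" command must be
    # sent to each socket so that all processes agree on the state
    stats socket /var/run/haproxy1.sock level admin process 1
    stats socket /var/run/haproxy2.sock level admin process 2
    stats socket /var/run/haproxy3.sock level admin process 3
    stats socket /var/run/haproxy4.sock level admin process 4
```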
tcp-response inspect-delay with WAIT_END
Hello, We have some complex logic in our application that will at times determine that the response to a specific query should be delayed. Currently this is handled in the application with a short (~100ms) sleep. We would like to move this delay in response to the load balancer. I have tried to do this by adding a response header as a flag for HAProxy to act on and adding configuration like the following to the backend:

    acl trigger_delay res.hdr(response-delay) -m found
    tcp-response inspect-delay 100ms
    tcp-response content accept unless trigger_delay
    tcp-response content accept if WAIT_END

With the above configuration, the response is delayed until the client times out (2 minutes) regardless of how trigger_delay evaluates. The following configurations exhibit the same behavior:

    tcp-response inspect-delay 100ms
    tcp-response content accept if WAIT_END

- or -

    acl trigger_delay res.hdr(response-delay) -m found
    tcp-response inspect-delay 100ms
    tcp-response content accept unless trigger_delay

It seems that either a header-based ACL or WAIT_END causes any tcp-response inspect-delay to time out. It does not seem to matter if the header-based ACL returns true or false. Are they not compatible with a response delay? Ideally when we encounter the delay flag in the response of the app server, we would also add the src to a stick-table for reference in delaying subsequent incoming connections from that IP (maybe the next, say 5 minutes or so). Is this possible/reasonable? Thank you, Chris
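The stick-table idea could be sketched roughly as follows. This is an untested sketch, assuming a HAProxy version where sc-inc-gpc0 is available as a tcp-response content action (1.6 or later); all names are illustrative:

```
backend app
    server s1 10.0.0.1:8080
    # remember flagged client IPs for 5 minutes
    stick-table type ip size 100k expire 5m store gpc0
    acl trigger_delay res.hdr(response-delay) -m found
    # bump the counter for the tracked source when the app sets the flag
    tcp-response content sc-inc-gpc0(0) if trigger_delay

frontend fe
    bind *:80
    default_backend app
    # track the source address in the backend's table
    tcp-request content track-sc0 src table app
    acl flagged sc0_get_gpc0 gt 0
    # delay only previously-flagged sources on subsequent connections
    tcp-request inspect-delay 100ms
    tcp-request content accept unless flagged
    tcp-request content accept if WAIT_END
```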
Re: HAProxy backend server AWS S3 Static Web Hosting
On 03/02/2015 02:02 πμ, Thomas Amsler wrote: Hello, Is it possible to front AWS S3 Static Web Hosting with HAProxy? I have tried to set up a backend to proxy requests to http://SomeHost.s3-website-us-east-1.amazonaws.com:80. But I am getting an error from S3 indicating that the bucket SomeHost does not exist. Has anybody tried to do that? Best, Thomas Amsler Please provide more information on what you are trying to achieve and paste your HAProxy configuration. Cheers, Pavlos
Re: Global ACLs
On 02/02/2015 05:31 μμ, Willy Tarreau wrote: Hi Christian, [...snip...] We've been considering this for a while now without any elegant solution. Recently while discussing with Emeric we got an idea to implement scopes, and along these lines I think we could instead try to inherit ACLs from other frontends/backends/defaults sections. Currently defaults sections support having a name, though this name is not internally used; admins often put some notes there such as "tcp" or a customer's id. Here we could have something like this:

    defaults foo
        acl local src 127.0.0.1

    frontend bar
        acl client src 192.168.0.0/24
        use_backend c1 if client
        use_backend c2 if foo/local

It would also bring the extra benefit of allowing complex shared configs to use their own global ACLs regardless of what is being used in other sections. That's just an idea, of course. That sounds awesome, please bring it on :-) Cheers, Pavlos
Re: SSL Performance increase?
On Thu, Feb 5, 2015 at 4:54 PM, Klavs Klavsen k...@vsen.dk wrote: Baptiste wrote on 02/05/2015 04:44 PM: [CUT] 3000 req/s in clear is low and such a round number is not normal :) Move (far far) away from this provider. You're wasting your time investigating a performance issue while the limitation is in the hypervisor and multitenancy of your supplier. it's running on vmware 5.5 on local hardware - nowhere else to go :( If I set haproxy to just send a 301 response (ie. not relay to varnish delivering the favicon.ico) - I get approx 15k req/s.. this is very low. We can get more than 50K conn/s in our VMWare lab using our HAProxy based ALOHA appliance. You must have an issue somewhere. Baptiste
Re: Help haproxy
Do you have the words 'option forwardfor' in your config? http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20forwardfor Can you copy/paste your config (without sensitive info if needed). Regards, Long Wu Yuan 龙 武 缘 Sr. Linux Engineer 高级工程师 ChinaNetCloud 云络网络科技(上海)有限公司 | www.ChinaNetCloud.com 1238 Xietu Lu, X2 Space 1-601, Shanghai, China | 中国上海市徐汇区斜土路1238号X2空间1-601室 24x7 Support Hotline: +86-400-618-0024 | Office Tel: +86-(21)-6422-1946 We are hiring! http://careers.chinanetcloud.com | Customer Portal - https://customer-portal.service.chinanetcloud.com/ On Mon, Feb 2, 2015 at 11:45 PM, Sander Klein roe...@roedie.nl wrote: On 02.02.2015 16:33, Mathieu Sergent wrote: Hi Sander, Yes, I reloaded haproxy and my web server too. But no change. And I'm not using the proxy protocol. To give you more details: on my web server I used tcpdump, which shows me the headers of the HTTP request. And in these I found my client's address. But it is really strange that I can do it without the forwardfor. The only other thing that I can think of is that your client is behind a proxy server which adds the X-Forwarded-For header for you... Or you got something strange in your config... Sander
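For reference, the option in context; a minimal sketch (frontend/backend names and addresses are placeholders):

```
frontend http_in
    bind *:80
    mode http
    # append the client address to each request in an X-Forwarded-For header
    option forwardfor
    default_backend web

backend web
    mode http
    server app1 10.0.0.1:8080
```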
Setting uuid cookies not for sticky sessions
I have multiple back ends using different stacks. All I need is to ensure that every client gets a unique cookie. They don't need to be used for sticky sessions. Pretty much all the examples I find are for hard coding, prefixing and/or for sticky session purposes. Is there a way to get haproxy to just set a simple uuid cookie if one isn't there? Thanks, Alberto
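Not something 1.5 offers, but on recent versions this can be sketched with the uuid() sample fetch (available since HAProxy 2.1; the cookie name is illustrative and this is an untested sketch):

```
frontend http_in
    bind *:80
    mode http
    default_backend web
    # capture any existing cookie value into a transaction variable
    http-request set-var(txn.cid) req.cook(clientid)
    # only hand out a fresh uuid when the client did not present the cookie
    http-response add-header Set-Cookie "clientid=%[uuid()]; Path=/" unless { var(txn.cid) -m found }
```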
Interpreting ttime, rtime friends correctly
Hello everybody, I have wondered this several times before and searched the mailing list to no avail, so I thought I'd just go ahead and ask: Can somebody shed some more light on how to read the ttime metric that haproxy supplies? I am confident I understand qtime, ctime and rtime, but ttime I don't understand. The docs say: 61. ttime [..BS]: the average total session time in ms over the 1024 last requests Now, we are not using any session tracking at all. When I read this value for a backend, how are the 1024 last requests mapped to session times? Does one request map to one session? If so, I am very concerned about how high the ttime values are compared to rtime for some of our backends. What other factors, besides [qcr]time, make up a session's ttime? If not, wouldn't some of the last 1024 requests belong to sessions that are not yet over? If so, is any value for these sessions used to calculate the final metric? Any insights greatly appreciated, Conrad -- Conrad Hoffmann Traffic Engineer SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany Managing Director: Alexander Ljung | Incorporated in England & Wales with Company No. 6343600 | Local Branch Office | AG Charlottenburg | HRB 110657B
Re: SSL Performance increase?
On 05.02.2015 20:09, Baptiste wrote: On Thu, Feb 5, 2015 at 4:54 PM, Klavs Klavsen k...@vsen.dk wrote: Baptiste wrote on 02/05/2015 04:44 PM: [CUT] 3000 req/s in clear is low and such a round number is not normal :) Move (far far) away from this provider. You're wasting your time investigating a performance issue while the limitation is in the hypervisor and multitenancy of your supplier. it's running on vmware 5.5 on local hardware - nowhere else to go :( If I set haproxy to just send a 301 response (ie. not relay to varnish delivering the favicon.ico) - I get approx 15k req/s.. this is very low. We can get more than 50K conn/s in our VMWare lab using our HAProxy based ALOHA appliance. You must have an issue somewhere. Are there any best practices on how to test haproxy in this regard? What are the recommended tools and settings to make a realistic conn/s test? Regards, Dennis
Re: SSL Performance increase?
Hi, On Thu, Feb 05, Klavs Klavsen wrote: Hi guys, I'm testing our haproxy setup in regards to SSL performance - by simply using ab, and fetching a favicon.ico file.. over http haproxy delivers 3.000 req/s. over https haproxy delivers 511 req/s. I tried giving haproxy more cores (it's a virtual server) -but this did not help at all :( Silly question: when you added more vCPUs to the virtual machine did you change haproxy nbproc setting (make haproxy use more than 1process) ? -Jarno I can't seem to find anything on haproxy actually hitting a bottleneck.. I did try to do the https test from 2 clients simultaneously- and then they just get half the req/s - so total is the same. What should I look at, to improve https performance in haproxy? -- Jarno Huuskonen
Re: HAproxy constant memory leak
Hi Willy, I'm not sure how to document this leak. I don't know exactly how is implemented the firewall SSL health check... Would the Wireshark trace be enough to report the issue? Thanks! -- Georges-Etienne On Tue, Feb 3, 2015 at 5:52 PM, Willy Tarreau w...@1wt.eu wrote: Hi Georges-Etienne, On Tue, Feb 03, 2015 at 08:09:15AM -0500, Georges-Etienne Legendre wrote: Hi Willy, Thanks a lot for this investigation, it was really helpful. My OpenSSL is up-to-date on this server. I first tried to remove the chroot statement. I'm pretty sure this in itself solved the leak, but I no longer have the traces and couple of hours after, our Ops changed the SSL check to a simple TCP check on port 443. So, I cannot confirm 100%. I can however confirm that I no longer experience the leak. I put back the chroot command to be safer. OK that's great. This also prompted me to tweak the SSL ciphers. I now use a more thoughtful list of ciphers ( https://mozilla.github.io/server-side-tls/ssl-config-generator/) and disabled SSLv3. This indeed disables KRB5. ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA ssl-default-bind-options no-sslv3 Wow! 
When we introduced SSL, I expected that a lot of difficulties would come from it, but not that the ugliest config statements would come with it as well :-) I will keep a close eye on the memory usage... HAproxy has been running for about 16 hours now, and here is the ps output:

    # ps -u nobody u
    USER     PID   %CPU %MEM   VSZ   RSS TTY  STAT START  TIME COMMAND
    nobody   63985  0.5  0.0 53868 10960 ?    Ss   Feb02  5:19 /usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid

Looks good :-) Yes indeed. Now I think it will really be important to report this leak to whomever it concerns (probably the distro vendor so that they decide whether it's in their own patches or in openssl upstream). My openssl version doesn't have krb5 and I have never understood what is needed to enable it nor what it provides. Crypto libs tend to be cryptic ... Willy
[PATCH 2/2] MEDIUM: Allow suppression of email alerts by log level
This patch adds a new option which allows configuration of the maximum log level of messages for which email alerts will be sent. The default is alert which is more restrictive than the current code which sends email alerts for all priorities. That behaviour may be configured using the new configuration option to set the maximum level to notice or greater. email-alert level notice Signed-off-by: Simon Horman ho...@verge.net.au --- doc/configuration.txt | 36 ++-- include/proto/checks.h | 4 ++-- include/types/proxy.h | 3 +++ src/cfgparse.c | 20 +--- src/checks.c | 7 --- src/server.c | 6 -- 6 files changed, 60 insertions(+), 16 deletions(-) diff --git a/doc/configuration.txt b/doc/configuration.txt index bd2de33..9a50ef4 100644 --- a/doc/configuration.txt +++ b/doc/configuration.txt @@ -1375,6 +1375,7 @@ description - X X X disabled X X X X dispatch - - X X email-alert from X X X X +email-alert level X X X X email-alert mailers X X X X email-alert myhostnameX X X X email-alert toX X X X @@ -2697,7 +2698,30 @@ email-alert from emailaddr Also requires email-alert mailers and email-alert to to be set and if so sending email alerts is enabled for the proxy. - See also : email-alert mailers, email-alert myhostname, email-alert to, + See also : email-alert level, email-alert mailers, + email-alert myhostname, email-alert to, section 3.6 about mailers. + + +email-alert level level + Declare the maximum log level of messages for which email alerts will be + sent. This acts as a filter on the sending of email alerts. + May be used in sections:defaults | frontend | listen | backend + yes |yes | yes | yes + + Arguments : + +level One of the 8 syslog levels: + emerg alert crit err warning notice info debug +The above syslog levels are ordered from lowest to highest. + + By default level is alert + + Also requires email-alert from, email-alert mailers and + email-alert to to be set and if so sending email alerts is enabled + for the proxy. 
+ + See also : email-alert from, email-alert mailers, + email-alert myhostname, email-alert to, section 3.6 about mailers. @@ -2713,8 +2737,8 @@ email-alert mailers mailersect Also requires email-alert from and email-alert to to be set and if so sending email alerts is enabled for the proxy. - See also : email-alert from, email-alert myhostname, email-alert to, - section 3.6 about mailers. + See also : email-alert from, email-alert level, email-alert myhostname, + email-alert to, section 3.6 about mailers. email-alert myhostname hostname @@ -2733,8 +2757,8 @@ email-alert myhostname hostname email-alert to to be set and if so sending email alerts is enabled for the proxy. - See also : email-alert from, email-alert mailers, email-alert to, - section 3.6 about mailers. + See also : email-alert from, email-alert level, email-alert mailers, + email-alert to, section 3.6 about mailers. email-alert to emailaddr @@ -2750,7 +2774,7 @@ email-alert to emailaddr Also requires email-alert mailers and email-alert to to be set and if so sending email alerts is enabled for the proxy. - See also : email-alert from, email-alert mailers, + See also : email-alert from, email-alert level, email-alert mailers, email-alert myhostname, section 3.6 about mailers. diff --git a/include/proto/checks.h b/include/proto/checks.h index b4faed0..67d659f 100644 --- a/include/proto/checks.h +++ b/include/proto/checks.h @@ -47,8 +47,8 @@ static inline void health_adjust(struct server *s, short status) const char *init_check(struct check *check, int type); void free_check(struct check *check); -void send_email_alert(struct server *s, const char *format, ...) - __attribute__ ((format(printf, 2, 3))); +void send_email_alert(struct server *s, int priority, const char *format, ...) 
+ __attribute__ ((format(printf, 3, 4))); #endif /* _PROTO_CHECKS_H */ /* diff --git a/include/types/proxy.h b/include/types/proxy.h index 230b804..9689460 100644 --- a/include/types/proxy.h +++ b/include/types/proxy.h @@ -402,6 +402,9 @@ struct proxy { char *from; /* Address to send email alerts from */ char *to; /* Address(es) to send email alerts to */ char *myhostname;
[PATCH 0/2] Allow suppression of email alerts by log level
Hi, This series adds a new option which allows configuration of the maximum log level of messages for which email alerts will be sent. The default is 'alert', which is more restrictive than the current code, which sends email alerts for all priorities. That behaviour may be configured using the new configuration option to set the maximum level to 'notice' or greater. email-alert level notice The first patch in the series provides a contextual dependency for the second patch. Simon Horman (2): LOW: Remove trailing '.' from email alert messages MEDIUM: Allow suppression of email alerts by log level doc/configuration.txt | 36 ++-- include/proto/checks.h | 4 ++-- include/types/proxy.h | 3 +++ src/cfgparse.c | 20 +--- src/checks.c | 7 --- src/server.c | 6 -- 6 files changed, 60 insertions(+), 16 deletions(-) -- 2.1.4
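Putting the documentation from this series together, usage would look something like the following sketch (the mailers section name and addresses are illustrative):

```
mailers mymailers
    mailer smtp1 192.0.2.10:25

backend app
    server s1 10.0.0.1:8080 check
    email-alert mailers mymailers
    email-alert from haproxy@example.com
    email-alert to admin@example.com
    # only mail messages of level "notice" or more urgent
    email-alert level notice
```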
[PATCH 1/2] LOW: Remove trailing '.' from email alert messages
This removes the trailing '.' from both the header and the body of email alerts. The main motivation for this change is to make the format of email alerts generated from srv_set_stopped() consistent with those generated from set_server_check_status(). Signed-off-by: Simon Horman ho...@verge.net.au
---
 src/server.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/server.c b/src/server.c
index 6acd332..4458642 100644
--- a/src/server.c
+++ b/src/server.c
@@ -255,7 +255,7 @@ void srv_set_stopped(struct server *s, const char *reason)
 	             "%sServer %s/%s is DOWN", s->flags & SRV_F_BACKUP ? "Backup " : "",
 	             s->proxy->id, s->id);
-	send_email_alert(s, "%s.", trash.str);
+	send_email_alert(s, "%s", trash.str);
 	srv_append_status(&trash, s, reason, xferred, 0);
 	Warning("%s.\n", trash.str);
-- 
2.1.4
Re: [PATCH/RFC 0/8] Email Alerts
On 04/02/2015 01:26 πμ, Simon Horman wrote: On Tue, Feb 03, 2015 at 05:13:02PM +0100, Baptiste wrote: On Tue, Feb 3, 2015 at 4:59 PM, Pavlos Parissis pavlos.paris...@gmail.com wrote: On 01/02/2015 03:15 μμ, Willy Tarreau wrote: Hi Simon, On Fri, Jan 30, 2015 at 11:22:52AM +0900, Simon Horman wrote: Hi Willy, Hi All, the purpose of this email is to solicit feedback on an implementation of email alerts for haproxy, the design of which is based on a discussion in this forum some months ago. It would be great if we could use something like this:

    acl low_capacity nbsrv(foo_backend) lt 2
    mail alert if low_capacity

In some environments you only care to wake up the on-call sysadmin if you are in real trouble and not because 1-2 servers failed. Nice work, Pavlos This might be doable using monitor-uri and monitor fail directives in a dedicated listen section which would fail if the number of servers in a monitored farm goes below a threshold. That said, this is a dirty hack. I agree entirely that there is a lot to be said for providing a facility for alert suppression and escalation. To my mind the current implementation, which internally works with a queue, lends itself to these kinds of extensions. The key question in my mind is how to design advanced features such as the one you have suggested in such a way that they can be useful in a wide range of use-cases. So far there seem to be three semi-related ideas circulating on this list. I have added a fourth: 1. Suppressing alerts based on priority. e.g. Only send alerts for events whose priority is at least x. 2. Combining alerts into a single message. e.g. If n alerts are queued up to be sent within time t then send them in one message rather than n. 3. Escalate alerts e.g. Only send alerts of priority x if more than n have occurred within time t. This seems to be a combination of 1 and 2.
This may or may not involve raising the priority of the resulting combined alert (internally or otherwise). An extra qualification may be that the events need to relate to something common: e.g. servers of the same proxy. Losing one may not be bad; losing all of them I may wish to get out of bed for. 4. Suppressing transient alerts e.g. I may not care if server s goes down then comes back up again within time t. But I may if it keeps happening. This part seems like a variant of 3. I expect we can grow this list of use-cases. I also think things may become quite complex quite quickly. But it would be nice to implement something not overly convoluted yet useful. What you have done so far provides the basic 'monitoring' alert functionality and it is the first step to something that can become bigger, better but complex as you say. The functionality you have listed is covered by several monitoring systems, either dummy ones like nagios or 'smart' ones which apply real-time anomaly detection (skyline, etc) by either actively probing services or passively receiving events. HAProxy is another service inside a data center which produces events: servers go down/up, dips/spikes in traffic and so on. In small companies which can't afford a centralized monitoring system and prefer to just receive various e-mails from ~10 systems, having some monitoring intelligence (aggregation, alerts based on thresholds) built-in is perfect and very much appreciated. But in large installations where you have 10K servers and 400 services, you want to receive raw events without any aggregation and the 'smart' monitoring system will figure out what to do before it wakes up the on-call sysadmin (I am one of them). To sum up, the current data exposed over the stats socket satisfies the needs of large installations. I know that because I am quite happy with the amount of data HAProxy exposes and I work in an environment where we utilize these 'smart' monitoring systems.
At my friend's start-up company, which has 8 services, I don't want to develop scripts/tools to pull info from the stats socket; just mail me and I will alert myself based on the amount of e-mails I receive, and if HAProxy can do some kind of aggregation/threshold then my mailbox will thank HAProxy a lot. I hope it helps and once again thanks for your hard work, Pavlos
Re: HAProxy 1.5.10 on FreeBSD 9.3 - status page questions
On 04/02/2015 11:38 πμ, Tobias Feldhaus wrote: Hi, To refresh the page did not help (the number of seconds the PRIMARY backend was considered to be down increased continuously, but not the number of Bytes or the color).

    [deploy@haproxy-tracker-one /var/log] /usr/local/sbin/haproxy -vv
    HA-Proxy version 1.5.10 2014/12/31 Copyright 2000-2014 Willy Tarreau w...@1wt.eu
    Build options :
      TARGET = freebsd
      CPU = generic
      CC = cc
      CFLAGS = -O2 -pipe -fstack-protector -fno-strict-aliasing -DFREEBSD_PORTS
      OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1
    Default settings :
      maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
    Encrypted password support via crypt(3): yes
    Built with zlib version : 1.2.8
    Compression algorithms supported : identity, deflate, gzip
    Built with OpenSSL version : OpenSSL 0.9.8za-freebsd 5 Jun 2014
    Running on OpenSSL version : OpenSSL 0.9.8za-freebsd 5 Jun 2014
    OpenSSL library supports TLS extensions : yes
    OpenSSL library supports SNI : yes
    OpenSSL library supports prefer-server-ciphers : yes
    Built with PCRE version : 8.35 2014-04-04
    PCRE library supports JIT : yes
    Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
    Available polling systems :
      kqueue : pref=300, test result OK
      poll : pref=200, test result OK
      select : pref=150, test result OK
    Total: 3 (3 usable), will use kqueue.

- haproxy.conf -

    global
        daemon
        stats socket /var/run/haproxy.sock level admin
        log /var/run/log local0 notice

    defaults
        mode http
        stats enable
        stats hide-version
        stats uri /lbstats
        log global

    frontend LBSTATS *:
        mode http

    frontend KAFKA *:8090
        mode tcp
        default_backend KAFKA_BACKEND

    backend KAFKA_BACKEND
        mode tcp
        log global
        option tcplog
        option dontlog-normal
        option httpchk GET /

httpchk in tcp mode? Have you managed to load HAProxy with this setting without getting an error like [ALERT] 035/213450 (17326) : Unable to use proxy 'foo_com' with wrong mode, required: http, has: tcp. [ALERT] 035/213450 (17326) : You may want to use 'mode http'.

    server KAFKA_PRIMARY kafka-primary.acc:9092 check port 9093 inter 2000 rise 302400 fall 5

rise 302400!! Are you sure? HAProxy will have to wait 302400 * 2 seconds before it detects the server up.

    server KAFKA_SECONDARY kafka-overflow.acc:9092 check port 9093 inter 2000 rise 2 fall 5 backup

I can't reproduce your problem even when I use your server settings but in http mode for the backend. Cheers, Pavlos
Re: nbproc > 1 and stats in ADMIN mode?
On 05/02/2015 03:01 μμ, Klavs Klavsen wrote: Hi guys, Just to check.. if I set nbproc to f.ex. 4 - then I understand I need to define 4x stats.. and when I visit the webinterface.. I'll actually only get stats from one of the 4 processes.. But we have ADMIN enabled for stats - so we can disable backend servers etc.. will we have to do that for each of the 4 stats editions - before it's actually active or is that state shared among them all? Yes, you have to do all your admin operations on each webinterface. Cheers, Pavlos