Re: Multiprocess Loadsharing
Hi,

thank you for confirming.

Best regards,
Annika

---
Systemadministration
Travian Games GmbH
Wilhelm-Wagenfeld-Str. 22
80807 München, Germany
Tel: +49 / (0)89 / 324 915 - 171
Fax: +49 / (0)89 / 324 915 - 970
a.wick...@traviangames.com
www.traviangames.de
Sitz der Gesellschaft München, AG München HRB: 173511
Geschäftsführer: Siegfried Müller, USt-IdNr.: DE246258085

This email and its attachments are strictly confidential and are intended solely for the attention of the person to whom it is addressed. If you are not the intended recipient of this email, please delete it including its attachments immediately and inform us accordingly.

On 04 Dec 2013, at 17:20, Chris Burroughs chris.burrou...@gmail.com wrote:

> On 11/28/2013 03:10 AM, Annika Wickert wrote:
>> Is this a normal behaviour?
>
> http://imgur.com/I7sRWy2
>
> A graph of similar behavior at nbproc=3. Anecdotally the variance seems
> to be higher under lower loads.
splice(0xedb, 0, 0xf09, 0, 0x72b0, 0x3) = -1 EAGAIN (Resource temporarily unavailable)
Hi everybody,

we have a few questions regarding load on our HAProxy 1.5-dev19 cluster. We constantly run at a load of 12-15, most of it system load. I started debugging with strace and constantly see the following messages:

  epoll_ctl(0, EPOLL_CTL_ADD, 1541, {EPOLLIN|0x2000, {u32=1541, u64=1541}}) = 0
  epoll_ctl(0, EPOLL_CTL_ADD, 1032, {EPOLLIN|0x2000, {u32=1032, u64=1032}}) = 0
  epoll_wait(0, {{EPOLLIN|0x2000, {u32=3685, u64=3685}}}, 200, 0) = 1
  splice(0xedb, 0, 0xf09, 0, 0x72b0, 0x3) = -1 EAGAIN (Resource temporarily unavailable)
  recvfrom(5110, 0xaeb50a4, 8192, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable)

On our old cluster I do not see any of the "Resource temporarily unavailable" errors on splice operations. Could this lead to such a performance impact? Has something changed in kernel 3.11.5? Are there any things which can be tried out on the staging cluster to narrow down this problem?

Best regards,
Annika

---
Systemadministration
Travian Games GmbH
Wilhelm-Wagenfeld-Str. 22
80807 München, Germany
a.wick...@traviangames.com
www.traviangames.de
RE: splice(0xedb, 0, 0xf09, 0, 0x72b0, 0x3) = -1 EAGAIN (Resource temporarily unavailable)
Hi Annika,

> we have a few questions regarding load on our HAProxy 1.5-dev19 cluster.
> We constantly run at a load of 12-15, most of it system load. [...]
> On our old cluster I do not see any of the "Resource temporarily
> unavailable" errors on splice operations.

We can't tell if that kind of load is normal for your box; please don't make us guess from the context. Also please tell:

- hardware (cpu/ram/nic at least) on old/new cluster
- software (kernel/OS) on old/new cluster
- HAProxy configuration on old/new cluster
- what is the actual number of concurrent sessions?

> Has something changed in kernel 3.11.5?

Compared to what kernel release?

> Are there any things which can be tried out on the staging cluster to
> narrow down this problem?

It seems you have a load problem; what happens if you disable splicing? Are you using splice-auto, or forcing splice by configuring splice-request / splice-response?

Could be a kernel thing, could be a NIC limitation, or it could simply be a higher load due to more concurrent connections ...

Regards,
Lukas
Re: splice(0xedb, 0, 0xf09, 0, 0x72b0, 0x3) = -1 EAGAIN (Resource temporarily unavailable)
There are some bugs with splice in 1.5-dev19... they have been fixed. See this thread for the patches: http://comments.gmane.org/gmane.comp.web.haproxy/12774

(Or google for: "Oh and by the way, the bug was present since 1.5-dev12.")

On Mon, Dec 9, 2013 at 2:56 PM, Lukas Tribus luky...@hotmail.com wrote:
> Hi Annika,
> [...]
> Could be a kernel thing, could be a NIC limitation, or it could simply
> be a higher load due to more concurrent connections ...
>
> Regards,
> Lukas

--
Mark Janssen -- maniac(at)maniac.nl
Unix / Linux Open-Source and Internet Consultant
Maniac.nl Sig-IO.nl Vps.Stoned-IT.com
Re: splice(0xedb, 0, 0xf09, 0, 0x72b0, 0x3) = -1 EAGAIN (Resource temporarily unavailable)
Hi,

I am sorry, I forgot to attach the necessary files :x.

On 09 Dec 2013, at 14:56, Lukas Tribus luky...@hotmail.com wrote:

> Also please tell:
> - hardware (cpu/ram/nic at least) on old/new cluster

- Two Intel(R) Xeon(R) CPU X6550 @ 2.00GHz in each cluster node
- 2x Emulex Corporation OneConnect 10Gb NIC (rev 02) in each cluster node
- 32 GB RAM in each cluster node
- Two nodes per cluster (active-active in the new one)

> - software (kernel/OS) on old/new cluster

- Debian Squeeze / 3.1.0-1-amd64 / tickrate 250
- CentOS release 6.4 (Final) / 3.11.5-1.el6 / tickrate 1000

> - HAProxy configuration on old/new cluster

- HAProxy configs use nbproc 24 without binding to a specific bind-process
- Enabled SSL for 2 frontends on the new cluster

> - what is the actual number of concurrent sessions?

- Cannot get the sessions from the stats socket because of multiple processes
- The firewall is seeing 300k concurrent sessions

> It seems you have a load problem; what happens if you disable splicing?

- Going to try.

> Are you using splice-auto or forcing splice by configuring
> splice-request / splice-response?

- We are forcing by splice-request / splice-response.

> Could be a kernel thing, could be a NIC limitation, or it could simply
> be a higher load due to more concurrent connections ...

- Concurrent connections stayed the same during the migration. What we see is four times more load on the new loadbalancers. In our testing environment the impact does not seem to be as severe.
But the response times stay the same regardless of the load. So is it just a display issue that does not affect the users?

Best regards,
Annika
Re: SSL client mode
Hi,

On 08.12.2013 21:34, Igor wrote:
> Hi, it may be like stunnel's client mode. In haproxy, we may get
> something like this to terminate an SSL server to an HTTP server:
>
>   listen http
>       bind: 80
>       mode ssl-client
>       use-server sslsrv 127.0.0.1:443

I think this should work:

  listen http :80
      mode http
      server sslsrv 127.0.0.1:443 ssl

As Lukas mentioned, haproxy-devel has a builtin for client SSL mode.

cheers
thomas

> Bests,
> -Igor
>
> On Mon, Dec 9, 2013 at 4:25 AM, Lukas Tribus luky...@hotmail.com wrote:
>> Hi Igor,
>>> For testing and bench purpose, client mode like stud[1] would be
>>> useful, any plan to implement this feature?
>> Not sure what that means, can you elaborate on the use case?
>> SSL encrypted backend connections are already supported.
>> Regards,
>> Lukas

--
Thomas Heil - ! note my new number !
Skype: phiber.sun
Email: h...@terminal-consulting.de
Tel: 0176 / 44555622
--
Re: splice(0xedb, 0, 0xf09, 0, 0x72b0, 0x3) = -1 EAGAIN (Resource temporarily unavailable)
Hi,

> There are some bugs with splice in 1.5-dev19... they have been fixed.
> See this thread for the patches:
> http://comments.gmane.org/gmane.comp.web.haproxy/12774
> (Or google for: "Oh and by the way, the bug was present since 1.5-dev12.")

Thank you for the hint. I will try this on staging.

On Mon, Dec 9, 2013 at 2:56 PM, Lukas Tribus luky...@hotmail.com wrote:
> [...]
RE: splice(0xedb, 0, 0xf09, 0, 0x72b0, 0x3) = -1 EAGAIN (Resource temporarily unavailable)
Hi,

> There are some bugs with splice in 1.5-dev19... they have been fixed.
> See this thread for the patches:
> http://comments.gmane.org/gmane.comp.web.haproxy/12774
> (Or google for: "Oh and by the way, the bug was present since 1.5-dev12.")

This is not what Annika is seeing; that bug is about 100% CPU load in userspace haproxy, but Annika is seeing higher system load.

>> Also please tell:
>> - hardware (cpu/ram/nic at least) on old/new cluster
> - Two Intel(R) Xeon(R) CPU X6550 @ 2.00GHz in each cluster node
> - 2x Emulex Corporation OneConnect 10Gb NIC (rev 02) in each cluster node
> - 32 GB RAM in each cluster node
> - Two nodes per cluster (active-active in the new one)

The hardware of the old and the new cluster is the same?

> - Debian Squeeze / 3.1.0-1-amd64 / tickrate 250
> - CentOS release 6.4 (Final) / 3.11.5-1.el6 / tickrate 1000

The higher the tickrate, the higher the CPU load. You quadrupled the tickrate, and your load did what - quadrupled? I suggest you try a lower tickrate with the very same configuration. That said, splice should be way more efficient in 3.11 than in 3.1.

>> Are you using splice-auto or forcing splice by configuring
>> splice-request / splice-response?
> - We are forcing by splice-request / splice-response.

I believe splice is not always more efficient than recv/send; use splice-auto to use it less aggressively (doc: splice-auto):

  Haproxy uses heuristics to estimate if kernel splicing might improve
  performance or not. Both directions are handled independently. Note
  that the heuristics used are not much aggressive in order to limit
  excessive use of splicing.

Don't know if those heuristics are still fully valid for post-3.5 kernels, but it probably doesn't hurt.

Regards,
Lukas
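For reference, the difference between the forced setup and the suggested one boils down to a one-line change in the proxy section. A minimal sketch (the section below is illustrative, not Annika's actual configuration):

```
defaults
    mode http

    # current setup: always attempt kernel splicing in both directions
    #option splice-request
    #option splice-response

    # suggested: enable splicing only when the heuristics expect a win,
    # decided independently for each direction
    option splice-auto
```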
Compile warning on OS X
  include/common/time.h:111:29: warning: implicit conversion from
      'unsigned long' to '__darwin_suseconds_t' (aka 'int') changes value
      from 18446744073709551615 to -1 [-Wconstant-conversion]
          tv->tv_sec = tv->tv_usec = TV_ETERNITY;
                     ~             ^~~~~~~~~~~
  include/common/time.h:32:26: note: expanded from macro 'TV_ETERNITY'

Can I ignore this warning even though the compile succeeds? Thanks for any suggestion.

Bests,
-Igor
Re: SSL client mode
Thanks Thomas and Lukas, that's what I was looking for.

Bests,
-Igor

On Mon, Dec 9, 2013 at 10:17 PM, Thomas Heil h...@terminal-consulting.de wrote:
> [...]
> I think this should work:
>
>   listen http :80
>       mode http
>       server sslsrv 127.0.0.1:443 ssl
>
> As Lukas mentioned, haproxy-devel has a builtin for client SSL mode.
> [...]
RE: SSL client mode
Hi,

>> listen http
>>     bind: 80
>>     mode ssl-client
>>     use-server sslsrv 127.0.0.1:443
>
> I think this should work:
>
>   listen http :80
>       mode http
>       server sslsrv 127.0.0.1:443 ssl

Yes exactly, or something like this when using the frontend/backend approach:

  frontend myfrontend
      mode http
      bind :80
      default_backend mybackend

  backend mybackend
      mode http
      server s4 10.0.0.3:443 ssl

It's really that simple, because when in http mode, once we strip SSL, we have straightforward plaintext HTTP and can do with it what we want.

Regards,
Lukas
Re: splice(0xedb, 0, 0xf09, 0, 0x72b0, 0x3) = -1 EAGAIN (Resource temporarily unavailable)
Hi,

> This is not what Annika is seeing; that bug is about 100% CPU load in
> userspace haproxy, but Annika is seeing higher system load.

Yes, system load is much higher than userspace load.

> The hardware of the old and the new cluster is the same?

Yes.

> The higher the tickrate, the higher the CPU load. You quadrupled the
> tickrate, and your load did what - quadrupled? I suggest you try a lower
> tickrate with the very same configuration.

For testing we disabled splicing on one of the cluster members of the new cluster (after successful tests). Now load drops below 8, from 16. So maybe I will try splice-auto, and if that does not help, a new haproxy build with the following git commits:

http://haproxy.1wt.eu/git?p=haproxy.git;a=commit;h=61d39a0e2a047df78f7f3bfcf5584090913cdc65
http://haproxy.1wt.eu/git?p=haproxy.git;a=commit;h=fa8e2bc68c583a227ebc78bab5779b84065b28da

> I believe splice is not always more efficient than recv/send; use
> splice-auto to use it less aggressively (doc: splice-auto):
> [...]
> Don't know if those heuristics are still fully valid for post-3.5
> kernels, but it probably doesn't hurt.

Thank you very much,
Annika
RE: splice(0xedb, 0, 0xf09, 0, 0x72b0, 0x3) = -1 EAGAIN (Resource temporarily unavailable)
Hi,

> For testing we disabled splicing on one of the cluster members of the
> new cluster (after successful tests). Now load drops below 8, from 16.
> So maybe I will try splice-auto, and if that does not help, a new
> haproxy build with the following git commits:

Yes, but please fix the tickrate of the kernel; I believe that is the real issue here, everything else will just hide the real problem.

Regards,
Lukas
Rate limiting on specific endpoint
Dear HAProxy community,

First of all, thank you for the wonderful work and the support contained within the archives of this mailing list.

I am trying to protect a specific endpoint of my application (let's assume that it ends with /athanasios), and only that endpoint. The limits that I am trying to enforce are 30 requests per 10 minutes, with a burst rate of 10 per 1 second. How can I implement this using HAProxy? I googled around, but no solutions match i) the burst requirement OR ii) the specific-endpoint requirement.

Any pointers or configuration snippets would be more than welcome. Again, keep up the good work and thanks for any replies.
New bug?
Hi, after upgrading to haproxy-ss-20131207, haproxy failed to start due to the errors:

  [ALERT] 343/024837 (19081) : parsing [/etc/haproxy/conf.conf:15] : error detected while parsing a 'rspideny' condition : missing args for fetch method 'table_cnt' in sample expression 'table_cnt'.
  [ALERT] 343/024837 (19081) : parsing [/etc/haproxy/conf.conf:19] : error detected while parsing ACL 'too_fast' : missing args for fetch method 'fe_sess_rate' in sample expression 'fe_sess_rate'.
  [ALERT] 343/024837 (19081) : parsing [/etc/haproxy/conf.conf:23] : 'tcp-request content accept' : error detected in frontend 'zorayoyo9881' while parsing 'if' condition : no such ACL : 'too_fast'

Bests,
-Igor
Re: New bug?
Hi,

On 09.12.2013 20:14, Igor wrote:
> Hi, after upgrading to haproxy-ss-20131207, haproxy failed to start due
> to the errors:
>
>   [ALERT] 343/024837 (19081) : parsing [/etc/haproxy/conf.conf:15] : error detected while parsing a 'rspideny' condition : missing args for fetch method 'table_cnt' in sample expression 'table_cnt'.
>   [ALERT] 343/024837 (19081) : parsing [/etc/haproxy/conf.conf:19] : error detected while parsing ACL 'too_fast' : missing args for fetch method 'fe_sess_rate' in sample expression 'fe_sess_rate'.
>   [ALERT] 343/024837 (19081) : parsing [/etc/haproxy/conf.conf:23] : 'tcp-request content accept' : error detected in frontend 'zorayoyo9881' while parsing 'if' condition : no such ACL : 'too_fast'

Could you please send us the output of haproxy -vv, and maybe your config after cleaning up any confidential data?

cheers
thomas
Re: Rate limiting on specific endpoint
On Mon, Dec 9, 2013 at 8:07 PM, Athanasios | ZenGuard athanas...@zenguard.org wrote:
> Dear HAProxy community,
> [...]
> I am trying to protect a specific endpoint of my application (let's
> assume that it ends with /athanasios), and only that endpoint. The
> limits that I am trying to enforce are 30 requests per 10 minutes, with
> a burst rate of 10 per 1 second. How can I implement this using HAProxy?

Hi Athanasios,

Do you want to protect per IP on this specific URL, or should this URL not be hit more than 30 times in 10 minutes by anyone?

Baptiste
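Assuming the per-source-IP interpretation, one possible sketch using stick-tables (this assumes a recent 1.5 development snapshot where the sc0/sc1 tracking counters are available; all section names, table sizes, and addresses are placeholders, and the two tables implement the 10-minute limit and the 1-second burst separately):

```
frontend ft_web
    bind :80
    acl is_protected path_end /athanasios

    # long-term limit: 30 requests per 10 minutes per source IP
    stick-table type ip size 100k expire 10m store http_req_rate(10m)
    http-request track-sc0 src if is_protected
    http-request deny if is_protected { sc0_http_req_rate gt 30 }

    # burst limit: 10 requests per second, tracked in a second table
    http-request track-sc1 src table st_burst if is_protected
    http-request deny if is_protected { sc1_http_req_rate(st_burst) gt 10 }

    default_backend bk_app

backend st_burst
    # dummy backend, only used to host the burst stick-table
    stick-table type ip size 100k expire 10s store http_req_rate(1s)

backend bk_app
    server app1 10.0.0.10:80
```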
Re: dst host IP ACL support?
On Fri, Dec 6, 2013 at 7:03 PM, Igor j...@owind.com wrote:
> Hi, is it possible to create an ACL based on the destination host's IP
> in HTTP mode? For example:
>
>   acl n1 dst_ip 1.1.1.1
>   use_backend b1 if n1
>
> If the request host u2.abc.com has IP 1.1.1.1, it will use backend b1.
> Thanks.
>
> Bests,
> -Igor

Hi Igor,

There is an ACL called 'dst'.

Baptiste
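In other words, Igor's example would presumably look like this with the 'dst' fetch, which matches the destination IP of the accepted connection (addresses and backend name taken from his illustration):

```
acl n1 dst 1.1.1.1
use_backend b1 if n1
```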
Re: XForwardfor Varnish behind HaProxy
Hi Clemence,

If I were you, I would ask HAProxy to forward the client IP address in a header named differently from X-Forwarded-For, something like X-Client-IP. Then configure your nginx to log the IP found in X-Client-IP and you're done.

Baptiste
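A minimal sketch of that suggestion (the backend and server names are placeholders; 'option forwardfor header' sets the custom header name):

```
backend bk_varnish
    mode http
    # add the client source IP in X-Client-IP instead of X-Forwarded-For
    option forwardfor header X-Client-IP
    server varnish1 10.0.0.20:80
```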
Re: Haproxy Load-Balance Scaling
On Fri, Dec 6, 2013 at 7:00 AM, Qingshan Xie xieq...@yahoo.com wrote:
> Hello Experts, not sure if this subject was already discussed or not;
> I'd like to hear your advice and suggestions. If a single HAProxy
> instance as a load-balancer could not handle the high-load traffic, how
> do I scale multiple instances as a group of load-balancers to handle
> the high load?
> Thanks, Q.Xie

Hi,

First, please stop sending HTML messages.

You can load-balance your HAProxy servers using LVS, which is more or less a packet forwarder. It's dumb but very fast and can sustain millions of connections (since it manages only packets, it requires much less memory than HAProxy).

Baptiste
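As a rough illustration of the LVS approach (all addresses are placeholders; this configures a virtual service in direct-routing mode with round-robin across two HAProxy nodes — a sketch, not a tested recipe):

```
# virtual TCP service on the VIP, round-robin scheduling
ipvsadm -A -t 192.0.2.10:80 -s rr
# two real servers (the HAProxy boxes), direct routing (-g)
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.1:80 -g
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.2:80 -g
```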
Re: Improving behaviour when nbproc > 1
On 4 Dec 2013 16:23, Chris Burroughs chris.burrou...@gmail.com wrote:
> On 12/03/2013 04:07 PM, Chris Burroughs wrote:
>> This could just be me not being adept at email patches. Sorry if this
>> is obvious, but is this supposed to apply against 1.4 or 1.5?
>
> To answer my own question, this applies against 1.5. I'm not sure of the
> feasibility or desirability of backporting to 1.4.
>
> I ran with this in a production load test and as far as I can tell it
> worked as advertised. We were able to run with nbproc > 1 and still have
> useful-looking stats sockets for haptop, ganglia etc. At least in our
> use case, stats sockets with this patch solve the primary objection to
> running with nbproc > 1.

Apologies for missing your message the other day. To answer: yes, it is against 1.5. The caveats are that peers don't work, and the session table and load balancing can get messed up due to the lack of shared information between processes. But if you just need to utilise multiple stats sockets and the rest doesn't matter so much, then it works nicely.
Re: Improving behaviour when nbproc > 1
On 12/09/2013 05:02 PM, James Hogarth wrote:
> To answer: yes, it is against 1.5. The caveats are that peers don't work,
> and the session table and load balancing can get messed up due to the
> lack of shared information between processes. But if you just need to
> utilise multiple stats sockets and the rest doesn't matter so much, then
> it works nicely.

But these are the same caveats that nbproc has always had, not new ones, correct?
RE: Compile warning on OS X
Hi Igor,

>   include/common/time.h:111:29: warning: implicit conversion from
>       'unsigned long' to '__darwin_suseconds_t' (aka 'int') changes
>       value from 18446744073709551615 to -1 [-Wconstant-conversion]
>           tv->tv_sec = tv->tv_usec = TV_ETERNITY;
>   include/common/time.h:32:26: note: expanded from macro 'TV_ETERNITY'
>
> Can I ignore this warning even though the compile succeeds? Thanks for
> any suggestion.

Not sure, could you git bisect this?

Lukas
RE: Haproxy Load-Balance Scaling
Hi,

> Hello Experts, not sure if this subject was already discussed or not;
> I'd like to hear your advice and suggestions. If a single HAProxy
> instance as a load-balancer could not handle the high-load traffic, how
> do I scale multiple instances as a group of load-balancers?
> [...]
>> You can load-balance your HAProxy servers using LVS, which is more or
>> less a packet forwarder. It's dumb but very fast and can sustain
>> millions of connections (since it manages only packets, it requires
>> much less memory than HAProxy).

If you need to grow even larger, some DNS tricks may come in handy, like active-active round-robin and geolocation-based redirects.

If even that is not enough and you need to scale horizontally, then you can load-balance the incoming traffic at your routers via ECMP. This scales as long as your router has enough bandwidth and ports.

Also, some folks (CloudFlare) use anycast even with TCP traffic like HTTP and HTTPS to scale. But you need to consider the downsides of anycast with TCP very carefully and design your network to compensate for them. You do not want to just switch this on.

Regards,
Lukas
RE: Three patches to the haproxy-systemd-wrapper
Hi folks,

On Sat, 23 Nov 2013 12:05:24 +0100 Willy Tarreau w...@1wt.eu wrote:
> Hi Marc-Antoine,
>
> On Sat, Nov 23, 2013 at 07:37:21PM +0900, Marc-Antoine Perennou wrote:
>> I don't have access to a computer to actually test those, but:
>> - the first one looks nice, never felt really confident hard coding
>>   SBINDIR, and the solution makes sense
>> - nice catch for the second one, didn't think of the sigint when
>>   writing it, lgtm
>> - third one is trivial enough not to harm anyone
>> +1 for me
>
> OK that's perfect, I'm merging them. Thanks for your fast response!
> Willy

Sounds great, thank you both! Let me know if there are any issues.

There is a compiler warning after commit 1b6e75fa84 (MEDIUM: haproxy-systemd-wrapper: Use haproxy in same directory):

  src/haproxy-systemd-wrapper.c: In function 'locate_haproxy':
  src/haproxy-systemd-wrapper.c:28:10: warning: ignoring return value of
  'readlink', declared with attribute warn_unused_result [-Wunused-result]

Can we silence the warning as per the diff attached? If you agree, I will send a proper patch to Willy.

Thanks,
Lukas

[attachment: silence-readlink.diff]
Re: Three patches to the haproxy-systemd-wrapper
Hi Lukas,

On Tue, Dec 10, 2013 at 01:12:59AM +0100, Lukas Tribus wrote:
> There is a compiler warning after commit 1b6e75fa84 (MEDIUM:
> haproxy-systemd-wrapper: Use haproxy in same directory):
>
>   src/haproxy-systemd-wrapper.c: In function 'locate_haproxy':
>   src/haproxy-systemd-wrapper.c:28:10: warning: ignoring return value of
>   'readlink', declared with attribute warn_unused_result [-Wunused-result]
>
> Can we silence the warning as per the diff attached? If you agree, I
> will send a proper patch to Willy.

I agree with you, it looks good. Thanks!

Willy