RE: possible bug with CumReq info stat

2015-02-02 Thread Lukas Tribus
 There is no SSL protected repo. I'm surprised that you found the
 haproxy.org
 site slow, usually it's reasonably fast. Are you sure you weren't
 cloning from 1wt.eu instead, which is the slow master ?


 Would it be possible to get the haproxy org on github to be synced with
 your repos.

 Haproxy.org is supposed to be synced. Github's haproxy account doesn't
 belong to me and is not an official one (and it's different).

Since there is also a haproxy github organisation behind this github fork,
it does look like an official mirror, and I can see why it attracts bug
reports on the github issue tracker for those repositories, and merge
requests on github, instead of the mailing list. That's bad, very bad.

For anyone forking or mirroring a particular piece of code: please do not
impersonate the author or the project. Make it abundantly clear that the
mirror is unofficial (not only in the description or README, but in the
title).

I'm CC'ing Jeff Buchbinder since he seems to be the one merging real haproxy
git back to this github fork.



 That might provide a nice SSL protected location.

 It will bring absolutely nothing

I tend to disagree:

 1) trees are replicated to multiple places

Of course the actual replication would have to be protected as well, not
just the download from the mirror. SSH or HTTPS as a transport protocol
will do that.



 and 2) the git structure by itself provides protection.

I don't see how it can protect against a MITM'ed git mirror that is serving
tainted repositories.



 However SSL would be even slower and more resource intensive.

I believe the performance penalty of SSL is negligible.

Let's look at some numbers:
From my box here I have about 55ms of RTT to git.haproxy.org and
115ms RTT to github.com (the latter crosses the Atlantic).

If I clone via unprotected HTTP from git.haproxy.org I need about
2 and a half minutes.

If I clone via protected HTTPS from the github.com fork (which is not
exactly the same, but not that different either) I need about 30 seconds.


So although the RTT to github is about twice as much and we use HTTPS
instead of HTTP, it still is about five times faster than git.haproxy.org.


This is most likely due to the plain-HTTP usage instead of git-over-HTTP,
as Warren mentioned.
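
For reference, the two clones compared above were along these lines (the
exact URL of the github fork is an assumption here):

    # dumb HTTP from the official mirror (the ~2.5 minutes measured above):
    $ git clone http://git.haproxy.org/git/haproxy-1.5.git/

    # smart git-over-HTTPS from the github fork (the ~30 seconds measured above):
    $ git clone https://github.com/haproxy/haproxy.git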


Do we absolutely need that kind of speed? No. I'm just saying that performance
is not a showstopper for SSL, especially in this case.



 And would require a cert and careful maintenance.

Correct, unless we used something like github ourselves. That would
mean leveraging their platform for all those nice things like HTTPS and
git-over-HTTP, without the actual maintenance burden. That requires,
of course, trusting github.

What this would also bring to the table is a bug tracker with strong ties
to git (which is of course github-proprietary).



Regards,

Lukas

  


RE: Backend DOWN but Layer 7 check pass

2015-02-02 Thread Lukas Tribus
 Hey,

 I have run into an odd scenario where the backend is DOWN; however, the
 layer 7 checks are passing. I have included the check which we
 received. The haproxy setup is fairly simple using proxy protocol. I
 could only find one example of this issue here, however, no follow up
 was done on the reason:
 http://comments.gmane.org/gmane.comp.web.haproxy/10454

What release are you running? I guess we will have to see the actual
health check as a traffic capture, like Willy suggested in that thread.


Lukas

  


Backend DOWN but Layer 7 check pass

2015-02-02 Thread Rob
Hey,

I have run into an odd scenario where the backend is DOWN; however, the
layer 7 checks are passing. I have included the check result which we
received. The haproxy setup is fairly simple, using the proxy protocol. I
could only find one example of this issue here, but no follow-up
was done on the reason:
http://comments.gmane.org/gmane.comp.web.haproxy/10454

The current setup is: the haproxy box uses the proxy protocol to talk to
nginx, which sends the traffic on to the backends (see the sketch below).
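
For reference, a minimal sketch of that kind of setup (names, addresses and
the check URI are made up here, not taken from the linked gists):

    listen www
        bind :80
        mode http
        option httpchk GET /health
        server nginx1 10.0.0.11:80 check send-proxy
        server nginx2 10.0.0.12:80 check send-proxy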

The only thing I suspect may have affected this was limit_conn slowdown 50;
in nginx. The nodes did hit the slowdown limit due to volume; however, I was
unable to replicate this same DOWN with Layer7 check passed behaviour in
synthetic testing afterwards.

localhost haproxy[27338]: Server backend_name/ip is DOWN, reason:
Layer7 check passed, code: 200, check duration: 4ms. 0 active and 0
backup servers left. 344 sessions active, 0 requeued, 0 remaining in
queue.

HAProxy Config (IP's and Service names changed):
https://gist.github.com/coosh/63939a070ac60509ef16

Nginx Config all of them are very simple (IP's and Service names
changed): https://gist.github.com/coosh/0dfefd164d4ccbce9ffa

Rob



Re: Backend DOWN but Layer 7 check pass

2015-02-02 Thread Rob
Currently running 1.5-dev19. It is very tricky to get a packet capture as it
only happened in production; when I try to replicate it with synthetic
testing in a staging environment I cannot get it to happen. When
a backend does go down, the layer 7 check shows a valid status code for
a down host. The other thought is that the agent is marking it as down; I
know in 1.4.24 and 1.5-dev19 there was some flap detection work done
due to crashes. I was unable to find any reference to it in the docs,
but is there any flap detection whereby, if a node flaps too many times, it
is marked down for a period of time even if layer 7 checks start to
pass?
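
For context, the per-server check tuning being discussed here lives on the
server line; a minimal sketch (name, address and values are hypothetical):

    server app1 10.0.0.21:80 check inter 2s rise 2 fall 3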

On Mon, Feb 2, 2015 at 4:18 PM, Lukas Tribus luky...@hotmail.com wrote:
 Hey,

  I have run into an odd scenario where the backend is DOWN; however, the
 layer 7 checks are passing. I have included the check which we
 received. The haproxy setup is fairly simple using proxy protocol. I
 could only find one example of this issue here, however, no follow up
 was done on the reason:
 http://comments.gmane.org/gmane.comp.web.haproxy/10454

 What release are you running? I guess we will have to see the actual
 health check as a traffic capture, like Willy suggested in that thread.


 Lukas





HAProxy backend server AWS S3 Static Web Hosting

2015-02-02 Thread Thomas Amsler
Hello,

Is it possible to front AWS S3 Static Web Hosting with HAProxy? I have
tried to set up a backend to proxy requests to
SomeHost.s3-website-us-east-1.amazonaws.com:80, but I am getting an error
from S3 indicating that the bucket SomeHost does not exist. Has anybody
tried to do that?
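
For reference, a minimal sketch of the kind of backend described above
(backend name is illustrative; note that S3 website endpoints resolve the
bucket from the Host header they receive, so the Host header that reaches
S3 matters):

    backend s3_static
        mode http
        # S3 picks the bucket based on the Host header it receives
        server s3 SomeHost.s3-website-us-east-1.amazonaws.com:80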

Best,
Thomas Amsler


Re: [PATCH/RFC 0/8] Email Alerts

2015-02-02 Thread Willy Tarreau
Hi Simon,

On Mon, Feb 02, 2015 at 11:16:09AM +0900, Simon Horman wrote:
   * No options to configure the format of the email alerts
  
  You know, even if we make this format very flexible, some users
  will complain that they cannot send it in html and attach graphs :-)
 
 Haha, yes indeed.
 
 One reason I choose not to tackle that problem at all was
 that it seems to have some similarity to Pandora's box.

Absolutely!

 The best idea I have had so far is to allow a template with some
 meta-fields that are filled in at run-time, e.g. %p might mean insert the
 name of the proxy here, and for all other text to be treated as literals.
 This may be flexible enough for many use-cases, though as you point out,
 it's hard to cover all the bases.

Yes and it quickly becomes a mess where we see some %[] appear again
because all possible tags have been used and are not enough :-/

 I'd also be happy to let this be and see what requests people have
 to enhance or configure the messages.

I agree on this!

   * No options to configure delays associated with the checks
 used to send alerts.
  
  Maybe an improvement could be to later implement a small queue
  and group messages that are too close together. It would require
  a timeout as well so that queued messages are sent once the max
  aggregation delay is elapsed.
 
 Do you mean include multiple alerts in a single email message?

Yes that was it, to preserve the reader's patience.

 If so, yes that sounds reasonable to me and it is something that
 had crossed my mind while working on this series. I suspect it would
 not be terribly difficult to implement on top of this series.

Possible indeed.

   * No Documentation
 
 This one is not so difficult to fix and I will see about doing so.

OK.

   * No support for STLS. This one will be a little tricky.
  
  I don't think it is a big problem. Users may very well route to
  the local relay if they absolutely don't want to send a clear-text
  alert over a shared infrastructure. But given that they are the
  same people who read their e-mails from their smartphone without
  ever entering a password, I'd claim that there are a lot of things
  to improve on their side before STLS becomes their first reasonable
  concern.
 
 Yes, I agree. I think it would be nice to have. But I also think
 it is best classified as future work.

It might be harder now than what it could be later (eg: switch the
transport protocol in the middle of a connection), so let's forget
about it for now.

   Again the purpose is to solicit feedback on the code so far,
   in particular the design, before delving into further implementation 
   details.
  
  I've reviewed the whole patch set and have nothing to say about
  it, it's clean and does what you explained here. I feel like you're
  not very satisfied with abusing the check infrastructure to send
  an e-mail, but I wouldn't worry about that now given that nowhere
  in the configuration these checks are visible, so once we find it
  easier to handle the mailers using a regular task managing its own
  session, it shouldn't be that hard to convert the code.
 
 I was a little apprehensive about reusing checks in this way.
 But the code turned out to be a lot cleaner than I expected.
 And I am not comfortable with this approach.

I know, the idea is that it's in order to provide our beloved users
with the features they want right now, while we'll be the ones having
to scratch our heads behind the curtain later. What you did didn't
destroy health checks so in the worst case it could simply be undone,
and I don't think there would be any reason for this. You're just
reusing the check infrastructure because it's the only one capable
of initiating a standalone connection outside for now.

  So for now I'm totally positive about your work. Please tell me
  if you want me to merge it now in case that makes things easier
  for you to continue. Some breakage is expected to happen soon on
  the buffer management, requiring some painful rebases, so at least
  we'll have to sync on this to limit the painful work.
 
 Thanks for your positive review.
 
 I would like to take your offer and ask for the code to be merged.
 I'll see about working on some of the lower hanging fruit discussed
 above. Especially providing some documentation.

So I've merged it now, all 8 patches. Now I trust you to provide the
doc so that users can quickly start testing it. Do not hesitate either
if you want to fix issues or change your mind about certain points you
are not happy with, this is still in development, no worries!

Thanks!
Willy




Re: Help haproxy

2015-02-02 Thread Sander Klein

On 02.02.2015 12:09, Mathieu Sergent wrote:

Hi,

I am trying to set up load balancing with HAProxy and 3 web servers.
I want my web servers to receive the client's address.
I read that it is possible with the source ip usesrc option, but
you need to be root.
If you don't want to be root, you have to use HAProxy with Tproxy,
but Tproxy requires too much system configuration.
Is there another solution?
I hope that you have understood my problem.

Yours sincerely.

Mathieu Sergent

PS : Sorry for my English.


Your English is no problem. ;-)

You can add an X-Forwarded-For header using haproxy. If you then use
mod_rpaf for apache or the realip module on nginx, you can easily substitute
the load balancer IP with the IP of the client.
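
A minimal sketch of that on the haproxy side (frontend name, port and
backend name are illustrative):

    frontend www
        bind :80
        mode http
        option forwardfor       # inserts X-Forwarded-For with the client IP
        default_backend webservers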


Regards,

Sander




Re: HAproxy constant memory leak

2015-02-02 Thread Georges-Etienne Legendre
Thanks for your help.

The configuration is now back to 5000 maxconn, and Haproxy has been running
with this config over the last weekend. The memory footprint is now 1G.

# ps -u nobody u
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
nobody9103  0.7  3.9 1334192 1291740 ? Ss   Jan30  30:03
/usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid

# telnet localhost 1935
show pools
Dumping pools usage. Use SIGQUIT to flush them.
  - Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 3 users [SHARED]
  - Pool capture (64 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool channel (80 bytes) : 168 allocated (13440 bytes), 6 used, 1 users [SHARED]
  - Pool task (112 bytes) : 149 allocated (16688 bytes), 67 used, 1 users [SHARED]
  - Pool uniqueid (128 bytes) : 0 allocated (0 bytes), 0 used, 1 users [SHARED]
  - Pool connection (320 bytes) : 168 allocated (53760 bytes), 6 used, 1 users [SHARED]
  - Pool hdr_idx (416 bytes) : 84 allocated (34944 bytes), 2 used, 1 users [SHARED]
  - Pool session (864 bytes) : 85 allocated (73440 bytes), 3 used, 1 users [SHARED]
  - Pool requri (1024 bytes) : 21 allocated (21504 bytes), 0 used, 1 users [SHARED]
  - Pool buffer (16416 bytes) : 168 allocated (2757888 bytes), 6 used, 1 users [SHARED]
Total: 10 pools, 2971824 bytes allocated, 111984 used.



I've also executed the tcpdump + strace for about 30 seconds. There should
be no confidential info, but to be sure, I will send them to you personally.

Let me know if I can capture anything else that could be helpful.

Thanks!

-- Georges-Etienne

On Sat, Jan 31, 2015 at 8:40 AM, Willy Tarreau w...@1wt.eu wrote:

 On Sat, Jan 31, 2015 at 12:59:34AM +0100, Lukas Tribus wrote:
   The maxconn was set to 4096 before, and after 45 days, haproxy was
   using 20gigs...
 
  Ok, can you set maxconn back to 4096, reproduce the leak (to at least
  a few gigabytes) and run show pools a few times to see where
  exactly the memory consumption comes from?

 Also, could you please send a network capture of the checks from
 the firewall to haproxy (if possible, taken on the haproxy side) ?
 It is possible that there is a specific sequence leading to an
 improper close (eg: some SSL structs not being released at certain
 steps in the handshake, etc).

 Please use this to take your capture :

  tcpdump -vs0 -pi eth0 -w checks.cap host <firewall-ip> and port <local-port>

 Wait for several seconds, then Ctrl-C. Be careful, your capture
 will contain all the traffic flowing between haproxy and the
 firewall's address facing it, so there might be confidential
 information there, only send to the list if you think it's OK.

 Ideally, in parallel you can try to strace haproxy during this
 capture :

strace -tts200 -o checks.log -p $(pgrep haproxy)

 Thanks,
 Willy




Re: Help haproxy

2015-02-02 Thread Jarno Huuskonen
Hi,

On Mon, Feb 02, Sander Klein wrote:
 On 02.02.2015 12:09, Mathieu Sergent wrote:
 Hi,
 
 I am trying to set up load balancing with HAProxy and 3 web servers.
 I want my web servers to receive the client's address.
 I read that it is possible with the source ip usesrc option, but
 you need to be root.
 If you don't want to be root, you have to use HAProxy with Tproxy,
 but Tproxy requires too much system configuration.
 Is there another solution?
 I hope that you have understood my problem.
 
 Yours sincerely.
 
 Mathieu Sergent
 
 PS : Sorry for my English.
 
 Your English is no problem. ;-)
 
 You can add an X-Forwarded-For header using haproxy. If you then use
 mod_rpaf for apache or realip on nginx you can easily substitute the
 loadbalancer ip with the ip of the client.

Or if you're running apache 2.4 then it should come with
mod_remoteip: http://httpd.apache.org/docs/current/mod/mod_remoteip.html

And for tomcat there's:
https://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/valves/RemoteIpValve.html

-Jarno

-- 
Jarno Huuskonen



Help haproxy

2015-02-02 Thread Mathieu Sergent
Hi,

I am trying to set up load balancing with HAProxy and 3 web servers.
I want my web servers to receive the client's address.
I read that it is possible with the source ip usesrc option, but you
need to be root.
If you don't want to be root, you have to use HAProxy with Tproxy, but
Tproxy requires too much system configuration.
Is there another solution?
I hope that you have understood my problem.

Yours sincerely.

Mathieu Sergent

PS : Sorry for my English.


[PATCH] MEDIUM: Document email alerts

2015-02-02 Thread Simon Horman
Signed-off-by: Simon Horman ho...@verge.net.au
---
 doc/configuration.txt | 104 ++
 1 file changed, 104 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c829590..aa3f30f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1224,6 +1224,36 @@ peer <peername> <ip>:<port>
 server srv2 192.168.0.31:80
 
 
+3.6. Mailers
+
+It is possible to send email alerts when the state of servers changes.
+If configured, email alerts are sent to each mailer that is configured
+in a mailers section. Email is sent to mailers using SMTP.
+
+mailers <mailersect>
+  Creates a new mailer list with the name <mailersect>. It is an
+  independent section which is referenced by one or more proxies.
+
+mailer <mailername> <ip>:<port>
+  Defines a mailer inside a mailers section.
+
+  Example:
+mailers mymailers
+mailer smtp1 192.168.0.1:587
+mailer smtp2 192.168.0.2:587
+
+backend mybackend
+mode tcp
+balance roundrobin
+
+email-alert mailers mymailers
+email-alert from te...@horms.org
+email-alert to te...@horms.org
+
+server srv1 192.168.0.30:80
+server srv2 192.168.0.31:80
+
+
 4. Proxies
 --
 
@@ -1344,6 +1374,10 @@ default_backend   X  X   
  X -
 description   -  X X X
 disabled  X  X X X
 dispatch  -  - X X
+email-alert from  X  X X X
+email-alert mailers   X  X X X
+email-alert myhostnameX  X X X
+email-alert toX  X X X
 enabled   X  X X X
 errorfile X  X X X
 errorloc  X  X X X
@@ -2650,6 +2684,76 @@ errorloc303 <code> <url>
   See also : errorfile, errorloc, errorloc302
 
 
+email-alert from <emailaddr>
+  Declare the from email address to be used in both the envelope and header
+  of email alerts.  This is the address that email alerts are sent from.
+  May be used in sections: defaults | frontend | listen | backend
+                              yes   |   yes    |   yes  |   yes
+
+  Arguments :
+
+    <emailaddr> is the from email address to use when sending email alerts
+
+  Also requires email-alert mailers and email-alert to to be set
+  and if so sending email alerts is enabled for the proxy.
+
+  See also : email-alert mailers, email-alert myhostname, email-alert to,
+             section 3.6 about mailers.
+
+
+email-alert mailers <mailersect>
+  Declare the mailers to be used when sending email alerts
+  May be used in sections: defaults | frontend | listen | backend
+                              yes   |   yes    |   yes  |   yes
+
+  Arguments :
+
+    <mailersect> is the name of the mailers section to send email alerts.
+
+  Also requires email-alert from and email-alert to to be set
+  and if so sending email alerts is enabled for the proxy.
+
+  See also : email-alert from, email-alert myhostname, email-alert to,
+             section 3.6 about mailers.
+
+
+email-alert myhostname <hostname>
+  Declare the hostname to be used when communicating with
+  mailers.
+  May be used in sections: defaults | frontend | listen | backend
+                              yes   |   yes    |   yes  |   yes
+
+  Arguments :
+
+    <hostname> is the hostname to use when communicating with mailers
+
+  By default the system's hostname is used.
+
+  Also requires email-alert from, email-alert mailers and
+  email-alert to to be set and if so sending email alerts is enabled
+  for the proxy.
+
+  See also : email-alert from, email-alert mailers, email-alert to,
+             section 3.6 about mailers.
+
+
+email-alert to <emailaddr>
+  Declare both the recipient address in the envelope and the to address in
+  the header of email alerts. This is the address that email alerts are sent to.
+  May be used in sections: defaults | frontend | listen | backend
+                              yes   |   yes    |   yes  |   yes
+
+  Arguments :
+
+    <emailaddr> is the to email address to use when sending email alerts
+
+  Also requires email-alert mailers and email-alert from to be set
+  and if so sending email alerts is enabled for the proxy.
+
+  See also : email-alert from, email-alert mailers,
+             email-alert myhostname, section 3.6 about mailers.
+
+
 force-persist { if | unless } <condition>
   Declare a condition to force persistence on down servers
   May be used in sections: defaults | frontend | listen | backend
-- 
2.1.4




Re: Help haproxy

2015-02-02 Thread Mathieu Sergent
Hi Sander,

Yes, I reloaded haproxy and my web server too, but no change.
And I'm not using the proxy protocol.

To give you more details: on my web server I used tcpdump, which
gives me back the headers of the HTTP request, and in them I found my
client's address.
But it is really strange that I can see it without the forwardfor.
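
For reference, a typical way to look at the request headers on the web
server would be something like this (interface and port are assumptions):

    tcpdump -A -s0 -ni eth0 port 80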

Regards,

Mathieu


2015-02-02 16:15 GMT+01:00 Sander Klein roe...@roedie.nl:

 Hi Mathieu,

 Please keep the list in CC.

 On 02.02.2015 15:26, Mathieu Sergent wrote:

 Thanks for your reply.

 I just used the option forwardfor in the haproxy configuration, and I
 can find the client's address from my web server (with tcpdump).
 But if I don't use the option forwardfor, the web server still finds
 the client's address. Does that make any sense?


 To be honest, that doesn't make any sense to me. Are you sure you have
 reloaded the haproxy process after you removed the forwardfor?

 Or, could it be you are using the proxy protocol (send-proxy)?

 Greets,

 Sander



Re: HAproxy constant memory leak

2015-02-02 Thread Willy Tarreau
Hi Georges-Etienne,

On Mon, Feb 02, 2015 at 08:35:21AM -0500, Georges-Etienne Legendre wrote:
 Thanks for your help.
 
 The configuration is now back to 5000 maxconn, and Haproxy has been running
 with this config over the last weekend. The memory footprint is now 1G.

OK, so there's no doubt about it.

 # ps -u nobody u
 USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
 nobody9103  0.7  3.9 1334192 1291740 ? Ss   Jan30  30:03
 /usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
 
 # telnet localhost 1935
 show pools
 Dumping pools usage. Use SIGQUIT to flush them.
   - Pool pipe (32 bytes) : 5 allocated (160 bytes), 5 used, 3 users [SHARED]
   - Pool capture (64 bytes) : 0 allocated (0 bytes), 0 used, 1 users
 [SHARED]
   - Pool channel (80 bytes) : 168 allocated (13440 bytes), 6 used, 1 users
 [SHARED]
   - Pool task (112 bytes) : 149 allocated (16688 bytes), 67 used, 1 users
 [SHARED]
   - Pool uniqueid (128 bytes) : 0 allocated (0 bytes), 0 used, 1 users
 [SHARED]
   - Pool connection (320 bytes) : 168 allocated (53760 bytes), 6 used, 1
 users [SHARED]
   - Pool hdr_idx (416 bytes) : 84 allocated (34944 bytes), 2 used, 1 users
 [SHARED]
   - Pool session (864 bytes) : 85 allocated (73440 bytes), 3 used, 1 users
 [SHARED]
   - Pool requri (1024 bytes) : 21 allocated (21504 bytes), 0 used, 1 users
 [SHARED]
   - Pool buffer (16416 bytes) : 168 allocated (2757888 bytes), 6 used, 1
 users [SHARED]
 Total: 10 pools, 2971824 bytes allocated, 111984 used.

Impressive, nothing *seems* to be used here. So we're leaking somewhere else.
I spent the whole afternoon on Friday trying to reproduce various cases using
connection probes, with/without data, with/without request, etc, but couldn't
come to anything related to what you're seeing.

 I've also executed the tcpdump + strace for about 30 seconds. There should
 be no confidential info, but to be sure, I will send them to you personally.

Great, thank you, I'll study all this. I really appreciate your help on
this bug! I hope I can reproduce it and spot what is happening.

Best regards,
Willy




Re: Help haproxy

2015-02-02 Thread Sander Klein

Hi Mathieu,

Please keep the list in CC.

On 02.02.2015 15:26, Mathieu Sergent wrote:

Thanks for your reply.

I just used the option forwardfor in the haproxy configuration, and I
can find the client's address from my web server (with tcpdump).
But if I don't use the option forwardfor, the web server still finds
the client's address. Does that make any sense?


To be honest, that doesn't make any sense to me. Are you sure you have 
reloaded the haproxy process after you removed the forwardfor?


Or, could it be you are using the proxy protocol (send-proxy)?

Greets,

Sander



Re: HAproxy constant memory leak

2015-02-02 Thread Willy Tarreau
Georges-Etienne,

your captures were extremely informative. While I cannot reproduce the
behaviour here even by reinjecting the same health check requests, I'm
seeing two really odd things in your trace below :

We accept an SSL connection from the firewall :

08:15:52.297357 accept(6, {sa_family=AF_INET, sin_port=htons(32764), sin_addr=inet_addr(firewall)}, [16]) = 1

It sends 48 bytes :

08:15:52.297717 read(1, \200.\1\3\0\0\25\0\0\0\20, 11) = 11
08:15:52.297831 read(1, \0\0\3\0\0\10\0\0\6\4\0\200\0\0\4\0\0\5O\0\0@\202J#i\242K7)\300\2536o\245=\23, 37) = 37

Then we're checking for /etc/krb5.conf :

08:15:52.297984 stat(/etc/krb5.conf, 0x7fff544b1990) = -1 ENOENT (No such file or directory)

Then trying to read some random :

08:15:52.298082 open(/dev/urandom, O_RDONLY) = -1 ENOENT (No such file or directory)

Then trying to figure the local host name :

08:15:52.298187 uname({sys=Linux, node=node's local hostname, ...}) = 0

Then doing some netlink-based stuff :

08:15:52.298316 socket(PF_NETLINK, SOCK_RAW, 0) = 2
08:15:52.298395 bind(2, {sa_family=AF_NETLINK, pid=0, groups=}, 12) = 0
08:15:52.298471 getsockname(2, {sa_family=AF_NETLINK, pid=9103, 
groups=}, [12]) = 0
08:15:52.298550 sendto(2, \24\0\0\0\26\0\1\3\210x\317T\0\0\0\0\0\0\0\0, 20, 
0, {sa_family=AF_NETLINK, pid=0, groups=}, 12) = 20
08:15:52.298650 recvmsg(2, {msg_name(12)={sa_family=AF_NETLINK, pid=0, 
groups=}, 
msg_iov(1)=[{0\0\0\0\24\0\2\0\210x\317T\217#\0\0\2\10\200\376\1\0\0\0\10\0\1\0\177\0\0\1\10\0\2\0\177\0\0\1\7\0\3\0lo\0\0\0\0\0\24\0\2\0\210x\317T\217#\0\0\2\32\200\0\n\0\0\0\10\0\1\0\n\0\35\22\10\0\2\0\n\0\35\22\10\0\4\0\n\0\35?\n\0\3\0bond0\0\0\0\0\0\0\24\0\2\0\210x\317T\217#\0\0\2\32\200\0\f\0\0\0\10\0\1\0\n\2\177\217\10\0\2\0\n\2\177\217\10\0\4\0\n\2\177\277\n\0\3\0bond2\0\0\0\0\0\0\24\0\2\0\210x\317T\217#\0\0\2\32\200\0\r\0\0\0\10\0\1\0\nZ\6j...,
 4096}], msg_controllen=0, msg_flags=0}, 0) = 356
08:15:52.298841 recvmsg(2, {msg_name(12)={sa_family=AF_NETLINK, pid=0, 
groups=}, 
msg_iov(1)=[{@\0\0\0\24\0\2\0\210x\317T\217#\0\0\n\200\200\376\1\0\0\0\24\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\24\0\6\0\377\377\377\377\377\377\377\377H\3\0\0H\3\0\0@\0\0\0\24\0\2\0\210x\317T\217#\0\0\n@\200\375\n\0\0\0\24\0\1\0\376\200\0\0\0\0\0\0\3064k\377\376\256\37@\24\0\6\0\377\377\377\377\377\377\377\377f\5\0\0f\5\0\0@\0\0\0\24\0\2\0\210x\317T\217#\0\0\n@\200\375\v\0\0\0\24\0\1\0\376\200\0\0\0\0\0\0\3064k\377\376\256\37A\24\0\6\0\377\377\377\377\377\377\377\377\232\5\0\0\232\5\0\0@\0\0\0\24\0\2\0...,
 4096}], msg_controllen=0, msg_flags=0}, 0) = 448
08:15:52.299059 recvmsg(2, {msg_name(12)={sa_family=AF_NETLINK, pid=0, 
groups=}, 
msg_iov(1)=[{\24\0\0\0\3\0\2\0\210x\317T\217#\0\0\0\0\0\0\1\0\0\0\24\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\24\0\6\0\377\377\377\377\377\377\377\377H\3\0\0H\3\0\0@\0\0\0\24\0\2\0\210x\317T\217#\0\0\n@\200\375\n\0\0\0\24\0\1\0\376\200\0\0\0\0\0\0\3064k\377\376\256\37@\24\0\6\0\377\377\377\377\377\377\377\377f\5\0\0f\5\0\0@\0\0\0\24\0\2\0\210x\317T\217#\0\0\n@\200\375\v\0\0\0\24\0\1\0\376\200\0\0\0\0\0\0\3064k\377\376\256\37A\24\0\6\0\377\377\377\377\377\377\377\377\232\5\0\0\232\5\0\0@\0\0\0\24\0\2\0...,
 4096}], msg_controllen=0, msg_flags=0}, 0) = 20
08:15:52.299242 close(2)= 0

Then trying to open nsswitch.conf :

08:15:52.299353 open(/etc/nsswitch.conf, O_RDONLY) = -1 ENOENT (No such file or directory)

Then does the netlink + nsswitch dance a second time, followed by about
10 times the following with various domain name suffixes :

08:15:52.300841 open(/etc/resolv.conf, O_RDONLY) = -1 ENOENT (No such file or directory)
08:15:52.300938 socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 2
08:15:52.301018 connect(2, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr(127.0.0.1)}, 16) = 0
08:15:52.301100 poll([{fd=2, events=POLLOUT}], 1, 0) = 1 ([{fd=2, revents=POLLOUT}])
08:15:52.301179 sendto(2, \327\r\1\0\0\1\0\0\0\0\0\0\t_kerberos\fvarious domain suffixes\0\0\20\0\1, 51, MSG_NOSIGNAL, NULL, 0) = 51
08:15:52.301296 poll([{fd=2, events=POLLIN}], 1, 5000) = 1 ([{fd=2, revents=POLLERR}])
08:15:52.301373 close(2) = 0

Etc. It does that *a lot*. A few times we're seeing brk() with an
increasing value though it's not huge enough to prove everything leaks
there, but it proves that it happens inside openssl, since it's between
a read() performed by openssl and a stat() performed by it as well :

08:16:02.055371 epoll_wait(0, {{EPOLLIN, {u32=6, u64=6}}, {EPOLLIN, {u32=5, u64=5}}}, 200, 159) = 2
08:16:02.055457 accept(6, {sa_family=AF_INET, sin_port=htons(13053), sin_addr=inet_addr(some-public-address)}, [16]) = 2
08:16:02.00 fcntl(2, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
08:16:02.055658 accept(6, 0x7fff544b3e70, [128]) = -1 EAGAIN (Resource temporarily unavailable)
08:16:02.055806 read(2, \200.\1\3\0\0\25\0\0\0\20, 11) = 11
08:16:02.055908 read(2, \0\0\3\0\0\10\0\0\6\4\0\200\0\0\4\0\0\5\233G\314..., 37) = 37
08:16:02.056005 

Re: Help haproxy

2015-02-02 Thread Sander Klein

On 02.02.2015 16:33, Mathieu Sergent wrote:

Hi Sander,

Yes, I reloaded haproxy and my web server too, but no change.
And I'm not using the proxy protocol.

To give you more details: on my web server I used tcpdump, which
gives me back the headers of the HTTP request, and in them I found
my client's address.
But it is really strange that I can see it without the forwardfor.


The only other thing that I can think of is that your client is behind a
proxy server which adds the X-Forwarded-For header for you...


Or you got something strange in your config...

Sander



Re: [PATCH] MEDIUM: Document email alerts

2015-02-02 Thread Willy Tarreau
On Tue, Feb 03, 2015 at 01:00:44PM +0900, Simon Horman wrote:
 Signed-off-by: Simon Horman ho...@verge.net.au
 ---
  doc/configuration.txt | 104 
 ++
(...)

Great! I changed the commit tag to DOC and applied it as-is.

Thank you Simon!
Willy




RE: HAproxy constant memory leak

2015-02-02 Thread Lukas Tribus
 OpenSSL sometimes acts stupidly like this inside a chroot. We've
 encountered a few issues in the past with openssl doing totally crazy
 stuff inside a chroot, including abort() on krb5-related things. From
 what I understood (others, please correct me if I'm wrong), such
 processing may be altered by the type of key or ciphers.
 
 In my opinion, you should attempt a few things :
 
 1) ensure that your ssl library is up to date (double checking doesn't
 cost much)
 
 2) try it again without the chroot statement to see if when openssl finds
 what it's looking for, the leak stops.
 
 3) maybe file a report to the openssl list about a memory leak in that
 exact situation, with the traces you sent to me. Maybe they'll want
 to have your public key as well to verify some assumptions about
 what could be done inside the lib with its properties.

I suppose you can exclude KRB5 ciphers from the negotiation by
adding !KRB5 to your cipher configuration. If that works, it is probably
the simplest workaround, unless you actually need KRB5.
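
A minimal sketch of what that could look like on a bind line (the frontend
name, certificate path and base cipher string are assumptions):

    frontend https_in
        bind :443 ssl crt /etc/haproxy/site.pem ciphers DEFAULT:!KRB5
        default_backend webservers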


Lukas

  

Re: Global ACLs

2015-02-02 Thread Willy Tarreau
Hi Christian,

On Mon, Feb 02, 2015 at 04:55:56PM +0100, Christian Ruppert wrote:
 Hey,
 
 are there some kind of global ACLs perhaps? I think that could be really 
 useful. In my case I have ~70 frontends and ~100 backends. I often use 
 the same ACLs on multiple frontends/backends for specific whitelisting 
 etc.
 It would be extremely helpful to specify some of those ACLs in the
 global scope and use them where needed without having to re-define them
 again and again.
 Technically that shouldn't be much different from what is done in the
 local scope, should it?
 Since the ACL is presumably prepared once on startup, it shouldn't matter
 where that is done. Using it, i.e. actually evaluating it, would still
 (as before) happen in the local scope, depending on the actual layer etc.

 So adding support for global ACLs should be easy and helpful, or am I
 wrong? Did I forget something important here?
 
 Example:
 
 global
 acl foo src 192.168.1.1
 acl foobar hdr_ip(X-Forwarded-For,-1) 192.168.1.2 # This *might* be 
 a special case... Not yet further verified.
 
 
 frontend example
 
 use_backend ... if foo
 use_backend ... if foobar
 

We've been considering this for a while now without any elegant solution.
Recently while discussing with Emeric we got an idea to implement scopes,
and along these lines I think we could instead try to inherit ACLs from
other frontends/backends/defaults sections. Currently defaults sections
support having a name, though this name is not internally used; admins
often put some notes there, such as tcp or a customer's id.

Here we could have something like this :

defaults foo
acl local src 127.0.0.1

frontend bar
acl client src 192.168.0.0/24
use_backend c1 if client
use_backend c2 if foo/local

It would also bring the extra benefit of allowing complex shared configs
to use their own global ACLs regardless of what is being used in other
sections.

That's just an idea, of course.

Regards,
Willy




Global ACLs

2015-02-02 Thread Christian Ruppert

Hey,

are there some kind of global ACLs perhaps? I think that could be really 
useful. In my case I have ~70 frontends and ~100 backends. I often use 
the same ACLs on multiple frontends/backends for specific whitelisting 
etc.
It would be extremely helpful to specify some of those ACLs in the
global scope and use them where needed without having to re-define them
again and again.
Technically that shouldn't be much different from what is done in the
local scope, should it?
Since the ACL is presumably prepared once on startup, it shouldn't matter
where that is done. Using it, i.e. actually evaluating it, would still
(as before) happen in the local scope, depending on the actual layer etc.


So adding support for global ACLs should be easy and helpful, or am I
wrong? Did I forget something important here?


Example:

global
acl foo src 192.168.1.1
acl foobar hdr_ip(X-Forwarded-For,-1) 192.168.1.2 # This *might* be 
a special case... Not yet further verified.



frontend example

use_backend ... if foo
use_backend ... if foobar



--
Regards,
Christian Ruppert



Re: possible bug with CumReq info stat

2015-02-02 Thread Warren Turkal
All fair points. Too bad you don't have the haproxy org on github. It would
be nice if that were a trustworthy source.

With regard to the slowness, I am using the following remote config:
$ git remote -v
origin http://git.haproxy.org/git/haproxy-1.5.git/ (fetch)
origin http://git.haproxy.org/git/haproxy-1.5.git/ (push)

It looks like that server uses the dumb http protocol for git, which
requires more roundtrips.

$ curl -si http://git.haproxy.org/git/haproxy-1.5.git/info/refs?service=git-upload-pack | grep --binary-files=text '^Content-Type'
Content-Type: text/plain -- this would be different in the smart protocol

That may explain the slowness.
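
For comparison, the same probe against a server that speaks the smart
protocol should report a git-specific content type rather than text/plain
(a sketch; the github fork URL is an assumption):

    $ curl -si https://github.com/haproxy/haproxy.git/info/refs?service=git-upload-pack | grep -i '^Content-Type'
    # a smart server advertises: Content-Type: application/x-git-upload-pack-advertisement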

Thanks,
wt

On Mon, Feb 2, 2015 at 9:40 AM, Willy Tarreau w...@1wt.eu wrote:

 On Mon, Feb 02, 2015 at 09:34:37AM -0800, Warren Turkal wrote:
  On Sat, Jan 31, 2015 at 4:57 AM, Willy Tarreau w...@1wt.eu wrote:
 
   There is no SSL protected repo. I'm surprised that you found the
   haproxy.org
   site slow, usually it's reasonably fast. Are you sure you weren't
 cloning
   from 1wt.eu instead, which is the slow master ?
  
 
  Would it be possible to get the haproxy org on github to be synced with
  your repos.

 Haproxy.org is supposed to be synced. Github's haproxy account doesn't
 belong to me and is not an official one (and it's different).

  That might provide a nice SSL protected location.

 It will bring absolutely nothing considering that 1) trees are replicated
 to multiple places, and 2) the git structure by itself provides protection.
 However SSL would be even slower and more resource intensive. And would
 require a cert and careful maintenance.

 Willy




-- 
Warren Turkal


Re: Global ACLs

2015-02-02 Thread Warren Turkal
That sounds pretty cool. I would love to only have to define my ACLs in one
place.

wt

On Mon, Feb 2, 2015 at 8:31 AM, Willy Tarreau w...@1wt.eu wrote:

 Hi Christian,

 On Mon, Feb 02, 2015 at 04:55:56PM +0100, Christian Ruppert wrote:
  Hey,
 
  are there some kind of global ACLs perhaps? I think that could be really
  useful. In my case I have ~70 frontends and ~100 backends. I often use
  the same ACLs on multiple frontends/backends for specific whitelisting
  etc.
  It would be extremely helpful to specify some of those ACLs in the
  global scope and use them where needed without having to re-define them
  again and again.
  Technically that shouldn't be much different from what is done in the
  local scope, should it?
  Since the ACL is presumably prepared once on startup, it shouldn't matter
  where that is done. Using it, i.e. actually evaluating it, would still
  (as before) happen in the local scope, depending on the actual layer etc.

  So adding support for global ACLs should be easy and helpful, or am I
  wrong? Did I forget something important here?
 
  Example:
 
  global
  acl foo src 192.168.1.1
  acl foobar hdr_ip(X-Forwarded-For,-1) 192.168.1.2 # This *might* be
  a special case... Not yet further verified.
  
 
  frontend example
  
  use_backend ... if foo
  use_backend ... if foobar
  

 We've been considering this for a while now without any elegant solution.
 Recently while discussing with Emeric we got an idea to implement scopes,
 and along these lines I think we could instead try to inherit ACLs from
 other frontends/backends/defaults sections. Currently defaults sections
 support having a name, though this name is not internally used, admins
 often put some notes there such as tcp or a customer's id.

 Here we could have something like this :

 defaults foo
 acl local src 127.0.0.1

 frontend bar
 acl client src 192.168.0.0/24
 use_backend c1 if client
 use_backend c2 if foo/local

 It would also bring the extra benefit of allowing complex shared configs
 to use their own global ACLs regardless of what is being used in other
 sections.

 That's just an idea, of course.

 Regards,
 Willy





-- 
Warren Turkal


Re: possible bug with CumReq info stat

2015-02-02 Thread Willy Tarreau
On Mon, Feb 02, 2015 at 09:34:37AM -0800, Warren Turkal wrote:
 On Sat, Jan 31, 2015 at 4:57 AM, Willy Tarreau w...@1wt.eu wrote:
 
  There is no SSL protected repo. I'm surprised that you found the
  haproxy.org
  site slow, usually it's reasonably fast. Are you sure you weren't cloning
  from 1wt.eu instead, which is the slow master ?
 
 
 Would it be possible to get the haproxy org on github to be synced with
 your repos.

Haproxy.org is supposed to be synced. Github's haproxy account doesn't
belong to me and is not an official one (and it's different).

 That might provide a nice SSL protected location.

It will bring absolutely nothing considering that 1) trees are replicated
to multiple places, and 2) the git structure by itself provides protection.
However SSL would be even slower and more resource intensive. And would
require a cert and careful maintenance.

Willy




Re: possible bug with CumReq info stat

2015-02-02 Thread Warren Turkal
On Sat, Jan 31, 2015 at 4:57 AM, Willy Tarreau w...@1wt.eu wrote:

 There is no SSL protected repo. I'm surprised that you found the
 haproxy.org
 site slow, usually it's reasonably fast. Are you sure you weren't cloning
 from 1wt.eu instead, which is the slow master ?


Would it be possible to get the haproxy org on github to be synced with
your repos? That might provide a nice SSL protected location.

wt
-- 
Warren Turkal