Re: haproxy 1.6.0 crashes

2015-10-16 Thread Willy Tarreau
Hi Christopher,

sorry for the delay, I spent the whole day in meetings :-/

On Fri, Oct 16, 2015 at 11:42:38AM +0200, Christopher Faulet wrote:
> >On 16/10/2015 10:38, Willy Tarreau wrote:
> >Thus this sparks a new question : when the cache is disabled, are we sure
> >to always free the ssl_ctx on all error paths after it's generated ? Or
> >are we certain that we always pass through ssl_sock_close() ?
> >
> 
> That's a good question. By grepping for SSL_free, it should be good.

OK.

> >The other problem I'm having is related to how we manage the LRU cache.
> >Is there a risk that we kill some objects in this cache while they're
> >still in use ?
> 
> The SSL_CTX and SSL objects are reference-counted objects, so there is 
> no problem.
> 
> When an SSL_CTX object is created, its refcount is set to 1. When an SSL 
> connection uses it, it is incremented, and when the connection is closed, 
> it is decremented. Of course, it is also decremented when SSL_CTX_free 
> is called.
> If, during a call to SSL_free or SSL_CTX_free, the reference count reaches 
> 0, the SSL_CTX object is freed. Note that SSL_free and SSL_CTX_free can 
> be called in any order.

OK, so the unused objects in the tree have a refcount of 1 while the used
ones have 2 or more, thus the refcount is always valid. Good. That also
means we must not test whether the tree is null in ssl_sock_close();
we must always free the ssl_ctx as long as it was dynamically created,
so that its refcount decreases, otherwise it keeps increasing upon every
reuse.

> So, if SSL_CTX_free is called while an SSL connection uses the 
> corresponding SSL_CTX object, there is no problem. Actually, this 
> happens when a SSL_CTX object is evicted from the cache. There is no 
> need to check whether it is used by a connection or not.

Not only is it not needed, but we must not do it.

> We do not track any reference count on SSL_CTX, it is done internally by 
> openssl. The only thing we must do is to know whether it is a generated 
> certificate

I totally agree.

> and to track if it is in the cache or not.

And here I disagree for the reason explained above since this is already
covered by the refcount.

> >>finally, we can of course discuss the design of this feature. There is
> >>no problem. I will be happy to find a more elegant way to handle it, if
> >>it is possible.
> >
> >Ideally we'd have the info in the ssl_ctx itself, but I remember that 
> >Emeric
> >told me a while ago that we couldn't store anything in an ssl_ctx. Thus I
> >can understand that we can't easily "tag" the ssl_ctx as being statically
> >or dynamically allocated, which is why I understand the need for the flag
> >on the connection as an alternative.
> >
> 
> Well, I'm not an openssl guru. It is possible to store and retrieve data 
> on a SSL_CTX object using the SSL_CTX_set_ex_data/SSL_CTX_get_ex_data 
> functions. But I don't know if it is good practice to use them. And I 
> don't know whether this is expensive or not.

That's also what Rémi suggested. I don't know how it's used; I'm seeing
an index with it, and that's already used for DH data, so I don't know how
it mixes (if at all) with this. I'm not much concerned by the access cost,
in fact, since we're supposed to access it once at session creation and once
during the release. It's just that I don't understand how this works. Maybe
the connection flag is simpler for now.

Willy




Re: Resolvable host names in backend server throw invalid address error

2015-10-16 Thread Willy Tarreau
Hi,

On Fri, Oct 16, 2015 at 12:26:20AM -0400, Mark Betz wrote:
> Hi, I have a hopefully quick question about setting up backends for
> resolvable internal service addresses.
> 
> We are putting together a cluster on Google Container Engine (kubernetes)
> and have haproxy deployed in a container based on Ubuntu 14.04 LTS.
> 
> Our backend server specifications are declared using an internal resolvable
> service name. For example:
> 
> logdata-svc
> logdata-svc.default.svc.cluster.local
> 
> Both of these names correctly resolve to an internal IP address in the
> range 10.xxx.xxx.xxx, as shown by installing dnsutils into the container
> and running nslookup on the name prior to starting haproxy:
> 
> Name: logdata-svc.default.svc.cluster.local
> Address: 10.179.xxx.xxx
> 
> However, regardless of whether I use the short form or the fqdn, haproxy
> fails to start, emitting the following to stdout:
> 
> [ALERT] 288/041651 (52) : parsing [/etc/haproxy/haproxy.cfg:99] : 'server
> logdata-service' : invalid address: 'logdata-svc.default.svc.cluster.local'
> in 'logdata-svc.default.svc.cluster.local:1'
> 
> We can use IPV4 addresses in the config, but if we do so we would be giving
> up a certain amount of flexibility and resilience obtained from the kubedns
> service name resolution layer.
> 
> Anything we can do here? Thanks!

What exact version are you using (haproxy -vv) ? I'd be interested to
see if you're using getaddrinfo() or gethostbyname() (this will appear
in the dump above). Getaddrinfo() is known for being able to produce
such oddities in certain corner cases, and there was a recent fix for
a somewhat related issue appearing on freebsd and apparently not on
linux. Depending on your version, it may mean that linux is in fact
impacted as well or that the fix caused some breakage there. That's
just a supposition of course.

Also could you check that you only have IPv4 addresses for this name :

host -a logdata-svc.default.svc.cluster.local

I wouldn't be surprised if you got an IPv6 address while IPv6 is
currently not enabled on your system, for example, preventing the
address from being used.
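The getaddrinfo() behaviour is easy to check directly. The sketch below is self-contained ("localhost" merely stands in for the real service name) and prints every address family the resolver returns, which shows whether an unexpected IPv6 record sneaks in:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
    const char *name = "localhost";      /* replace with the real FQDN */
    struct addrinfo hints, *res, *p;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;         /* ask for both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(name, NULL, &hints, &res) != 0) {
        printf("resolution failed\n");
        return 1;
    }
    /* walk the linked list of results and report each family */
    for (p = res; p; p = p->ai_next)
        printf("family: %s\n", p->ai_family == AF_INET  ? "IPv4" :
                               p->ai_family == AF_INET6 ? "IPv6" : "other");
    freeaddrinfo(res);
    return 0;
}
```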

Regards,
willy




Re[2]: Multiple Monitor-net

2015-10-16 Thread Bryan Rodriguez

Thank you!

Worked perfectly!


[Bryan]



-- Original Message --
From: "Willy Tarreau" 
To: "Bryan Rodriguez" 
Cc: haproxy@formilux.org
Sent: 10/16/2015 10:28:13 AM
Subject: Re: Multiple Monitor-net


On Fri, Oct 16, 2015 at 05:18:24PM +, Bryan Rodriguez wrote:
 AWS health check monitoring comes from the following networks.  Logging
 is going crazy.  I read that only the last monitor-net is read.  Is
 there a way to filter from the logs all the following requests?

monitor-net 54.183.255.128/26
monitor-net 54.228.16.0/26
monitor-net 54.232.40.64/26
monitor-net 54.241.32.64/26
monitor-net 54.243.31.192/26
monitor-net 54.244.52.192/26
monitor-net 54.245.168.0/26
monitor-net 54.248.220.0/26
monitor-net 54.250.253.192/26
monitor-net 54.251.31.128/26
monitor-net 54.252.254.192/26
monitor-net 54.252.79.128/26
monitor-net 54.255.254.192/26
monitor-net 107.23.255.0/26
monitor-net 176.34.159.192/26
monitor-net 177.71.207.128/26


Yes, instead of using monitor-net, you can use a redirect (if the checker
accepts it) or go to a specific backend instead, and use the "silent"
log-level :

  http-request set-log-level silent if { src -f aws-checks.list }
  http-request redirect location /  if { src -f aws-checks.list }

Or :

  use-backend aws-checks if { src -f aws-checks.list }

  backend aws-checks
 http-request set-log-level silent
 error-file 503 /path/to/forged/response.http

Then you put all those networks (one per line) in a file called
"aws-checks.list" and that will be easier.

Hoping this helps,
Willy






Re: Lua complete example ?

2015-10-16 Thread Willy Tarreau
Hello,

On Fri, Oct 16, 2015 at 06:38:16PM +0200, One Seeker wrote:
> Hello,
> 
> I would like to manipulate some data from a TCP backend (modify data before
> it is forwarded to client), and this is not supported (it is for HTTP with
> rewrites, but not in TCP mode).
> 
> With v1.6, Lua scripting brings hope, but the documentation is lacking
> (doc/lua-api/index.rst is a bit of a harsh place to start learning this
> aspect of HAProxy)..
> Is there an "elaborate" (or advanced) example of using Lua with HAProxy
> (not a Hello World) I can learn from (I'm very good at learning from
> real-world code :), not necessarily doing what I'm describing here, but
> just doing some real stuff to showcase Lua for HAProxy..

I understand what you're looking for. I've seen that Thierry is currently
working on a nice doc, but as with any doc, it takes at least as long to write
as it took to implement the documented features. There are some simple
examples on blog.haproxy.com, I don't know if they help you enough. Maybe
at some point if you post what you came up with, someone here could help
you finish.

That's all I can provide for now :-/

willy




Re: haproxy + ipsec -> general socket error

2015-10-16 Thread Willy Tarreau
On Fri, Oct 16, 2015 at 02:08:37PM +0200, wbmtfrdlxm wrote:
> when using ipsec on the backend side, this error pops up in the haproxy log 
> from time to time: 
> 
> Layer4 connection problem, info: "General socket error (No buffer space 
> available)

This error normally means that there is no more memory for the sockets
in kernel space. This must never happen during a socket() or connect()
call, otherwise it indicates that your system is under strong memory
contention. Is your system swapping ? Or worse, is it virtualized with
memory ballooning or other such horrors^H^H^H^H^H^Hbeauties that make
you believe you're running with unlimited resources while in fact you're
running with no more resources at all ?

Willy




Dynamically change server maxconn possible?

2015-10-16 Thread Daren Sefcik
I am thinking the answer is no but figured I would ask just to make
sure...basically can I change individual server maxconn numbers on-the-fly
while haproxy is running or do I need to do a full restart to have them
take effect?

TIA...


Re: [blog] What's new in HAProxy 1.6

2015-10-16 Thread Willy Tarreau
On Fri, Oct 16, 2015 at 04:07:01PM +0200, Pavlos Parissis wrote:
> 1.6.0 comes with excellent documentation as well. Just look at the
> amount of information anyone can find in:
> http://www.haproxy.org/download/1.6/doc/management.txt
> http://cbonte.github.io/haproxy-dconv/intro-1.6.html

Thank you Pavlos, that's really pleasant to get such feedback! It took
me I-don't-know-how-many hours of pain to write those because I felt
they were definitely needed. I still find they're quite incomplete,
and I'm sad that I didn't have the time to update architecture.txt,
but knowing that this work is well received is much appreciated. So
thank you for this!

Cheers,
Willy




Re: Multiple Monitor-net

2015-10-16 Thread Willy Tarreau
On Fri, Oct 16, 2015 at 05:18:24PM +, Bryan Rodriguez wrote:
> AWS health check monitoring comes from the following networks.  Logging 
> is going crazy.  I read that only the last monitor-net is read.  Is 
> there a way to filter from the logs all the following requests?
> 
>monitor-net 54.183.255.128/26
>monitor-net 54.228.16.0/26
>monitor-net 54.232.40.64/26
>monitor-net 54.241.32.64/26
>monitor-net 54.243.31.192/26
>monitor-net 54.244.52.192/26
>monitor-net 54.245.168.0/26
>monitor-net 54.248.220.0/26
>monitor-net 54.250.253.192/26
>monitor-net 54.251.31.128/26
>monitor-net 54.252.254.192/26
>monitor-net 54.252.79.128/26
>monitor-net 54.255.254.192/26
>monitor-net 107.23.255.0/26
>monitor-net 176.34.159.192/26
>monitor-net 177.71.207.128/26
 
Yes, instead of using monitor-net, you can use a redirect (if the checker
accepts it) or go to a specific backend instead, and use the "silent"
log-level :

  http-request set-log-level silent if { src -f aws-checks.list }
  http-request redirect location /  if { src -f aws-checks.list }

Or :

  use-backend aws-checks if { src -f aws-checks.list }

  backend aws-checks
 http-request set-log-level silent
 error-file 503 /path/to/forged/response.http

Then you put all those networks (one per line) in a file called
"aws-checks.list" and that will be easier. 
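For instance, based on the networks quoted above, /etc/haproxy/aws-checks.list (the path is just an example) would contain:

```
54.183.255.128/26
54.228.16.0/26
54.232.40.64/26
54.241.32.64/26
54.243.31.192/26
54.244.52.192/26
54.245.168.0/26
54.248.220.0/26
54.250.253.192/26
54.251.31.128/26
54.252.254.192/26
54.252.79.128/26
54.255.254.192/26
107.23.255.0/26
176.34.159.192/26
177.71.207.128/26
```

The list must of course be kept in sync with the ranges AWS publishes.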

Hoping this helps,
Willy




Re: Resolvable host names in backend server throw invalid address error

2015-10-16 Thread Mark Betz
Hi, Willy. Thanks for the reply. The version of haproxy installed into the
container is:

$ /usr/sbin/haproxy --version
HA-Proxy version 1.5.14 2015/07/02

Also, for completeness:

$ uname -a
Linux haproxy 3.19.0-30-generic #34-Ubuntu SMP Fri Oct 2 22:08:41 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux

I don't believe ipv6 addresses are coming back in our cluster. I did an
nslookup on that name from inside the container earlier and just got back
the internal ipv4 address.

Thanks a bunch for the assistance. At the moment we have reverted the
config to use private IPs, but I would like to pursue this if there is a
chance to get names working, so let me know if there is any additional info
I can provide.

Regards,



On Fri, Oct 16, 2015 at 1:20 PM, Willy Tarreau  wrote:

> Hi,
>
> On Fri, Oct 16, 2015 at 12:26:20AM -0400, Mark Betz wrote:
> > Hi, I have a hopefully quick question about setting up backends for
> > resolvable internal service addresses.
> >
> > We are putting together a cluster on Google Container Engine (kubernetes)
> > and have haproxy deployed in a container based on Ubuntu 14.04 LTS.
> >
> > Our backend server specifications are declared using an internal
> resolvable
> > service name. For example:
> >
> > logdata-svc
> > logdata-svc.default.svc.cluster.local
> >
> > Both of these names correctly resolve to an internal IP address in the
> > range 10.xxx.xxx.xxx, as shown by installing dnsutils into the container
> > and running nslookup on the name prior to starting haproxy:
> >
> > Name: logdata-svc.default.svc.cluster.local
> > Address: 10.179.xxx.xxx
> >
> > However regardless of whether I use the short form or fqdn haproxy fails
> to
> > start, emitting the following to stdout:
> >
> > [ALERT] 288/041651 (52) : parsing [/etc/haproxy/haproxy.cfg:99] : 'server
> > logdata-service' : invalid address:
> 'logdata-svc.default.svc.cluster.local'
> > in 'logdata-svc.default.svc.cluster.local:1'
> >
> > We can use IPV4 addresses in the config, but if we do so we would be
> giving
> > up a certain amount of flexibility and resilience obtained from the
> kubedns
> > service name resolution layer.
> >
> > Anything we can do here? Thanks!
>
> What exact version are you using (haproxy -vv) ? I'd be interested to
> see if you're using getaddrinfo() or gethostbyname() (this will appear
> in the dump above). Getaddrinfo() is known for being able to produce
> such oddities in certain corner cases, and there was a recent fix for
> a somewhat related issue appearing on freebsd and apparently not on
> linux. Depending on your version, it may mean that linux is in fact
> impacted as well or that the fix caused some breakage there. That's
> just a supposition of course.
>
> Also could you check that you only have IPv4 addresses for this name :
>
> host -a logdata-svc.default.svc.cluster.local
>
> I wouldn't be surprised if you got an IPv6 address while IPv6 is
> currently not enabled on your system for example, preventing the
> address from being used.
>
> Regards,
> willy
>
>


-- 
Mark Betz
Sr. Software Engineer
*icitizen*

Mobile: 908-328-8666
Office: 908-223-5453
Email: mark.b...@icitizen.com
Twitter: @markbetz


Re: Resolvable host names in backend server throw invalid address error

2015-10-16 Thread Mark Betz
Hi, Willy. You're quite right that I misread your instructions. Have not
had a lot of time to put into this today. Apologies. Here is the
information I gathered. Hope this helps. It's interesting to me that
nslookup returns a record but host -a does not, however I don't know enough
about how Google plumbs this out to speculate as to why. Also note that I
tried the host command with both the short and fqdn names with the same
result, but have included only the short form query below.

$ nslookup logdata-svc
Server: 10.179.240.10
Address: 10.179.240.10#53

Name: logdata-svc.default.svc.cluster.local
Address: 10.179.249.177

$ host -a logdata-svc
Trying "logdata-svc.default.svc.cluster.local"
Trying "logdata-svc.svc.cluster.local"
Trying "logdata-svc.cluster.local"
Trying "logdata-svc.c.icitizen-dev3-stack-1069.internal"
Trying "logdata-svc.555239384585.google.internal"
Trying "logdata-svc.google.internal"
Trying "logdata-svc"
Host logdata-svc not found: 3(NXDOMAIN)
Received 104 bytes from 169.254.169.254#53 in 68 ms

$ /usr/sbin/haproxy -vv
HA-Proxy version 1.5.14 2015/07/02
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat
-Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
Running on OpenSSL version : OpenSSL 1.0.1f 6 Jan 2014
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


On Fri, Oct 16, 2015 at 4:51 PM, Willy Tarreau  wrote:

> On Fri, Oct 16, 2015 at 04:40:20PM -0400, Mark Betz wrote:
> > Hi, Willy. Thanks for the reply. The version of haproxy installed into
> the
> > container is:
> >
> > $ /usr/sbin/haproxy --version
> > HA-Proxy version 1.5.14 2015/07/02
>
> I precisely asked for "haproxy -vv" because it says a lot more and what
> we need to check (support for getaddrinfo) is there.
>
> > Also, for completeness:
> >
> > $ uname -a
> > Linux haproxy 3.19.0-30-generic #34-Ubuntu SMP Fri Oct 2 22:08:41 UTC
> 2015
> > x86_64 x86_64 x86_64 GNU/Linux
>
> OK.
>
> > I don't believe ipv6 addresses are coming back in our cluster. I did an
> > nslookup on that name from inside the container earlier and just got back
> > the internal ipv4 address.
>
> Are you sure it returned *all* records ?
>
> When I do "nslookup 1wt.eu", I get "Address: 62.212.114.60". When I do
> "host -a 1wt.eu", I get :
>
>   1wt.eu.   2586    IN  A       62.212.114.60
>   1wt.eu.   978     IN  AAAA    2001:7a8:363c:2::2
>
> As you can see that's quite different. That's why I asked for these: to
> be sure of what we're seeing. I'm pretty sure we can get this from nslookup as
> well, it's just that I never managed to use it, because when a tool does
> not provide any help on its command line, you're not encouraged to read
> man pages to learn it...
>
> Regards,
> Willy
>
>


-- 
Mark Betz
Sr. Software Engineer
*icitizen*

Email: mark.b...@icitizen.com
Twitter: @markbetz




Re: Resolvable host names in backend server throw invalid address error

2015-10-16 Thread Willy Tarreau
On Fri, Oct 16, 2015 at 04:40:20PM -0400, Mark Betz wrote:
> Hi, Willy. Thanks for the reply. The version of haproxy installed into the
> container is:
> 
> $ /usr/sbin/haproxy --version
> HA-Proxy version 1.5.14 2015/07/02

I precisely asked for "haproxy -vv" because it says a lot more and what
we need to check (support for getaddrinfo) is there.

> Also, for completeness:
> 
> $ uname -a
> Linux haproxy 3.19.0-30-generic #34-Ubuntu SMP Fri Oct 2 22:08:41 UTC 2015
> x86_64 x86_64 x86_64 GNU/Linux

OK.

> I don't believe ipv6 addresses are coming back in our cluster. I did an
> nslookup on that name from inside the container earlier and just got back
> the internal ipv4 address.

Are you sure it returned *all* records ?

When I do "nslookup 1wt.eu", I get "Address: 62.212.114.60". When I do
"host -a 1wt.eu", I get :

  1wt.eu.   2586    IN  A       62.212.114.60
  1wt.eu.   978     IN  AAAA    2001:7a8:363c:2::2

As you can see that's quite different. That's why I asked for these: to
be sure of what we're seeing. I'm pretty sure we can get this from nslookup as
well, it's just that I never managed to use it, because when a tool does
not provide any help on its command line, you're not encouraged to read
man pages to learn it...

Regards,
Willy




Re: Lua complete example ?

2015-10-16 Thread One Seeker
Thank you Willy, that's an honest answer.
You grasped my "practical" concern (I always thought "Examples" section in
man pages should be moved up high :)
I've been to blog.haproxy.com, and no full-fat Lua meals there as of yet.
I understand this is a new hot thing with HAProxy, so we'll have to wait
for it to grow & thrive..
Sure I'll report back here if I get to the point where I get some Lua code
to do this (even a lame contraption).

That being said, if a charitable soul here around can provide some working
Lua code for modifying backend response in TCP mode, or a battle plan for
the concept, that would be nice.

Also, maybe this can be made a new feature in 1.7, even minimally : search
for a stretch of bytes and replace with some other byte sequence (rewrites
for TCP, basically), append a piece of data, .. that kind of simple stuff
(at the user's own peril, of course. Shuffling bytes is for adults).
Would be nice, and doesn't sound daunting to implement..

Forgot to mention : HAProxy is freakin'great !

On Fri, Oct 16, 2015 at 7:15 PM, Willy Tarreau  wrote:

> Hello,
>
> On Fri, Oct 16, 2015 at 06:38:16PM +0200, One Seeker wrote:
> > Hello,
> >
> > I would like to manipulate some data from a TCP backend (modify data
> before
> > it is forwarded to client), and this is not supported (it is for HTTP
> with
> > rewrites, but not in TCP mode).
> >
> > With v1.6, Lua scripting brings hope, but the documentation is lacking
> > (doc/lua-api/index.rst is a bit of a harsh place to start learning this
> > aspect of HAProxy)..
> > Is there an "elaborate" (or advanced) example of using Lua with HAProxy
> > (not a Hello World) I can learn from (I'm very good at learning from
> > real-world code :), not necessarily doing what I'm describing here, but
> > just doing some real stuff to showcase Lua for HAProxy..
>
> I understand what you're looking for. I've seen that Thierry is currently
> working on a nice doc, but as any doc, it takes at least as long to write
> as it took to implement the documented features. There are some simple
> examples on blog.haproxy.com, I don't know if they help you enough. Maybe
> at some point if you post what you came up with, someone here could help
> you finish.
>
> That's all I can provide for now :-/
>
> willy
>
>


Build failure of 1.6 and openssl 0.9.8

2015-10-16 Thread Willy Tarreau
Hi Christopher,

Marcus (in CC) reported that 1.6 doesn't build anymore on SuSE 11
(which uses openssl 0.9.8). After some digging, we found that it
is caused by the absence of EVP_PKEY_get_default_digest_nid(),
which only appeared in openssl 1.0.0 and whose use was introduced by
this patch :

  commit 7969a33a01c3a70e48cddf36ea5a66710bd7a995
  Author: Christopher Faulet 
  Date:   Fri Oct 9 11:15:03 2015 +0200

MINOR: ssl: Add support for EC for the CA used to sign generated certificate

This is done by adding EVP_PKEY_EC type in supported types for the CA private
key when we get the message digest used to sign a generated X509 certificate.
So now, we support DSA, RSA and EC private keys.

And to be sure, when the type of the private key is not directly supported,
get its default message digest using the function
'EVP_PKEY_get_default_digest_nid'.

We also use the key of the default certificate instead of generating it. So we
are sure to use the same key type instead of always using a RSA key.

Interestingly, not all 0.9.8 builds will see the same problem, since SNI is
not enabled there by default; it requires a build option. This explains why
on my old PC I didn't get this problem with the same version.

I initially thought it would just be a matter of adding a #if on the
openssl version but it doesn't appear that easy given that the previous
code was different, so I have no idea how to fix this. Do you have any
idea ? Probably we can have a block of code instead of EVP_PKEY_... on
older versions and that will be fine. I even wonder if EC was supported
on 0.9.8.

It's unfortunate that we managed to break things just a few days before
the release with code that looked obviously right :-(

Thanks for any insight.

Willy




Re: Resolvable host names in backend server throw invalid address error

2015-10-16 Thread Willy Tarreau
On Fri, Oct 16, 2015 at 05:11:08PM -0400, Mark Betz wrote:
> Hi, Willy. You're quite right that I misread your instructions. Have not
> had a lot of time to put into this today. Apologies. Here is the
> information I gathered. Hope this helps. It's interesting to me that
> nslookup returns a record but host -a does not, however I don't know enough
> about how Google plumbs this out to speculate as to why. Also note that I
> tried the host command with both the short and fqdn names with the same
> result, but have included only the short form query below.

Indeed, and the most puzzling thing is that they both try the exact same name
and don't get the same result! host seems to use a different server here.
Maybe you have several nameservers in your resolv.conf, and some have
valid information while others don't, which could explain a different behaviour.
At least your build status doesn't show any use of getaddrinfo(), so what
you're seeing isn't an incompatibility related to the flag I was speaking
about. You're using the plain old gethostbyname() which works everywhere.

I guess you'll have to figure one way or another how it is possible that
"host -a" fails below. Maybe it's time to try to play with your resolv.conf
to find if changing something there fixes it.

You may be interested in testing whether "ping" on this fqdn works fine,
and does so all the time.

Regards,
Willy




Re: Dynamically change server maxconn possible?

2015-10-16 Thread Willy Tarreau
On Fri, Oct 16, 2015 at 12:07:17PM -0700, Daren Sefcik wrote:
> I am thinking the answer is no but figured I would ask just to make
> sure...basically can I change individual server maxconn numbers on-the-fly
> while haproxy is running or do I need to do a full restart to have them
> take effect?

It's not possible right now, but given that we support dynamic maxconn,
I see no technical problem in implementing it, and I actually think it would
be a good idea to support this on the CLI, as "set maxconn server XXX",
just like we have "set maxconn frontend YYY".

If you (or anyone else) are interested in trying to implement it, I'm
willing to review the patch and help if any difficulty is faced.

Regards,
Willy




Re: Resolvable host names in backend server throw invalid address error

2015-10-16 Thread Mark Betz
I'm going to take this up with Google on the kubernetes user group and see
what they have to say about the difference in behavior. I will report back
with what I learn.

Regards,

On Fri, Oct 16, 2015 at 5:16 PM, Willy Tarreau  wrote:

> On Fri, Oct 16, 2015 at 05:11:08PM -0400, Mark Betz wrote:
> > Hi, Willy. You're quite right that I misread your instructions. Have not
> > had a lot of time to put into this today. Apologies. Here is the
> > information I gathered. Hope this helps. It's interesting to me that
> > nslookup returns a record but host -a does not, however I don't know
> enough
> > about how Google plumbs this out to speculate as to why. Also note that I
> > tried the host command with both the short and fqdn names with the same
> > result, but have included only the short form query below.
>
> Indeed, and the most puzzling is that they both try the exact same name
> and don't get the same result! Host seems to use a different server here.
> Maybe you have several nameservers in your resolv.conf and certain have
> valid information and others not, which could explain a different
> behaviour.
> At least your build status doesn't show any use of getaddrinfo() so what
> you're seeing isn't an incompatibility related to the flag I was speaking
> about. You're using the plain old gethostbyname() which works everywhere.
>
> I guess you'll have to figure one way or another how it is possible that
> "host -a" fails below. Maybe it's time to try to play with your resolv.conf
> to find if changing something there fixes it.
>
> You may be interested in testing if "ping" on this fqdn works fine
> and all the time.
>
> Regards,
> Willy
>
>


-- 
Mark Betz
Sr. Software Engineer
*icitizen*

Email: mark.b...@icitizen.com
Twitter: @markbetz


Re: [ANNOUNCE] haproxy-1.6.0 now released!

2015-10-16 Thread Godbach

Great. A lot of new features and optimizations!

--
Best Regards,
Godbach




Re: haproxy 1.6.0 crashes

2015-10-16 Thread Christopher Faulet

On 15/10/2015 16:55, Willy Tarreau wrote:

Hi Christopher,

On Thu, Oct 15, 2015 at 03:22:52PM +0200, Christopher Faulet wrote:

On 15/10/2015 14:45, Seri, Kim wrote:

Christopher Faulet  writes:


I confirm the bug. Here is a very quick patch. Could you confirm that it
works for you ?



Hi,

I can confirm this patch fixes the crash!!

NB: because of my mail service, I've changed my e-mail address

Thanks a lot.


Great!

Willy, is it ok to you if I add the CO_FL_DYN_SSL_CTX flag to track
connections with a generated SSL certificate or do you prefer I find
another way to fix the bug ?


I'm still having doubts on the fix, because I feel like we're working
around a design issue here. First, the problem is that it's unclear
to me in which condition we may end up calling this code. How can it
happen that we end up in this code with an empty LRU tree ? Can we
generate cookies without a cert cache ? Or can the cert cache be empty
with some certs still in use ? If the latter, maybe instead we should
keep a reference to the cache using the refcount so that we don't kill
the entry as long as it's being used.

Indeed, this is mostly a matter of being sure that we free an ssl_ctx
that was allocated, so there should be other ways to do it than adding
more SSL knowledge into the session. I'm not opposed to merging this
fix as a quick one to fix the trouble for the affected users, but I'd
prefer that we find a cleaner solution if possible.



Hi,

First, the LRU tree is only initialized when the SSL certs generation is 
configured on a bind line. So, in most cases, it is NULL (which is 
not the same thing as empty).
When the SSL certs generation is used, if the cache is not NULL, such a 
certificate is pushed into the cache and there is no need to release it 
when the connection is closed.
But the cache can be disabled in the configuration. So in that case, we must 
free the generated certificate when the connection is closed.


Then, here we really have a bug. Here is the buggy part:

3125)  if (conn->xprt_ctx) {
3126) #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
3127)          if (!ssl_ctx_lru_tree && objt_listener(conn->target)) {
3128)                  SSL_CTX *ctx = SSL_get_SSL_CTX(conn->xprt_ctx);
3129)                  if (ctx != objt_listener(conn->target)->bind_conf->default_ctx)
3130)                          SSL_CTX_free(ctx);
3131)          }
3132) #endif
3133)          SSL_free(conn->xprt_ctx);
3134)          conn->xprt_ctx = NULL;
3135)          sslconns--;
3136)  }

The check on line 3127 is not enough to determine whether this is a 
generated certificate or not. Because ssl_ctx_lru_tree is NULL, 
generated certificates, if any, must be freed. But here ctx should also 
be compared to all the SNI certificates, and not only to default_ctx. Because 
of this bug, when an SNI certificate is used for a connection, it is 
erroneously freed when this connection is closed.


In my patch, I chose to use a specific flag on the connection instead 
of doing certificate comparisons. This seems to me easier to understand 
and more efficient. But it can be discussed; there are many other 
possible solutions, I guess.


Finally, we can of course discuss the design of this feature; that is 
no problem. I will be happy to find a more elegant way to handle it, if 
possible.


--
Christopher Faulet




Looking for help about "req.body" logging

2015-10-16 Thread Alberto Zaccagni
Hello,

Sorry for the repost, but it's really not clear to me how to use this
feature: "Processing of HTTP request body" in
http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/, can it be
used to log the body of a request?

I am trying to use it like this in both my HTTP and HTTPS frontends:

option http-buffer-request
log-format "%[req.body]"

The error I get is "'log-format' : sample fetch  may not be
reliably used here because it needs 'HTTP request headers' which is not
available here.", where should I be using it?
Does that mean that we cannot log req.body at all or that I have to enable
another option before trying to use it?

Any hint or help is much appreciated.
Thank you.

Cheers


Re: haproxy 1.6.0 crashes

2015-10-16 Thread Willy Tarreau
Hi Christopher,

On Fri, Oct 16, 2015 at 10:03:06AM +0200, Christopher Faulet wrote:
> First, the LRU tree is only initialized when SSL cert generation is 
> configured on a bind line. So, in most cases, it is NULL (which is not 
> the same thing as empty).
> When SSL cert generation is used and the cache is not NULL, such a 
> certificate is pushed into the cache and there is no need to release it 
> when the connection is closed.
> But the cache can be disabled in the configuration. In that case, we must 
> free the generated certificate when the connection is closed.
> 
> Then, here we really have a bug. Here is the buggy part:
> 
> 3125)  if (conn->xprt_ctx) {
> 3126) #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
> 3127)  if (!ssl_ctx_lru_tree && objt_listener(conn->target)) {
> 3128)  SSL_CTX *ctx = SSL_get_SSL_CTX(conn->xprt_ctx);
> 3129)  if (ctx != objt_listener(conn->target)->bind_conf->default_ctx)
> 3130)  SSL_CTX_free(ctx);
> 3131)  }
> 3132) #endif
> 3133)  SSL_free(conn->xprt_ctx);
> 3134)  conn->xprt_ctx = NULL;
> 3135)  sslconns--;
> 3136)  }
> 
> The check on line 3127 is not enough to determine whether this is a 
> generated certificate or not. Because ssl_ctx_lru_tree is NULL, 
> generated certificates, if any, must be freed. But here ctx should also 
> be compared against all SNI certificates, not only against default_ctx. 
> Because of this bug, when an SNI certificate is used for a connection, 
> it is erroneously freed when that connection is closed.

Yep, thanks for the reminder. You already explained this to me once but
it appears that it didn't remain obvious in my mind.

Thus this sparks a new question : when the cache is disabled, are we sure
to always free the ssl_ctx on all error paths after it's generated ? Or
are we certain that we always pass through ssl_sock_close() ?

The other problem I'm having is related to how we manage the LRU cache.
Is there a risk that we kill some objects in this cache while they're
still in use ?

> In my patch, I chose to use a specific flag on the connection instead 
> of doing certificate comparisons. That seems easier to understand and 
> more efficient to me. But it is open to discussion; there are many other 
> solutions, I guess.

I'm not disagreeing with your proposal, it may even end up being the only
solution. I'm just having a problem with keeping this information outside
of the ssl_ctx itself while I think we have to deal with a ref count there
due to it being present both in the tree and in sessions which make use of
it. Very likely we'll have the flag in the connection to indicate that the
cert was generated and must be freed one way or another, but I'm still
bothered by checking the lru_tree when doing this because I fear that it
means that we don't properly track the ssl_ctx's usage.

> Finally, we can of course discuss the design of this feature; that is 
> no problem. I will be happy to find a more elegant way to handle it, if 
> possible.

Ideally we'd have the info in the ssl_ctx itself, but I remember that Emeric
told me a while ago that we couldn't store anything in an ssl_ctx. Thus I
can understand that we can't easily "tag" the ssl_ctx as being statically
or dynamically allocated, which is why I understand the need for the flag
on the connection as an alternative.

Willy




Re: Unexpected error messages

2015-10-16 Thread Baptiste
Is your problem fixed?

We may emit a warning for such configuration.

Baptiste
On 15 Oct 2015 07:34, "Krishna Kumar (Engineering)" <
krishna...@flipkart.com> wrote:

> Hi Baptiste,
>
> Thank you for the advise and solution, I didn't realize retries had to be
> >1.
>
> Regards,
> - Krishna Kumar
>
> On Wed, Oct 14, 2015 at 7:51 PM, Baptiste  wrote:
> > On Wed, Oct 14, 2015 at 3:03 PM, Krishna Kumar (Engineering)
> >  wrote:
> >> Hi all,
> >>
> >> We are occasionally getting these messages (about 25 errors/per
> occurrence,
> >> 1 occurrence per hour) in the *error* log:
> >>
> >> 10.xx.xxx.xx:60086 [14/Oct/2015:04:21:25.048] Alert-FE
> >> Alert-BE/10.xx.xx.xx 0/5000/1/32/+5033 200 +149 - - --NN 370/4/1/0/+1
> >> 0/0 {10.xx.x.xxx||367||} {|||432} "POST /fk-alert-service/nsca
> >> HTTP/1.1"
> >> 10.xx.xxx.xx:60046 [14/Oct/2015:04:21:19.936] Alert-FE
> >> Alert-BE/10.xx.xx.xx 0/5000/1/21/+5022 200 +149 - - --NN 302/8/2/0/+1
> >> 0/0 {10.xx.x.xxx||237||} {|||302} "POST /fk-alert-service/nsca
> >> HTTP/1.1"
> >> ...
> >>
> >> We are unsure what errors were seen at the client. What could possibly
> be the
> >> reason for these? Every error line has retries value as "+1", as seen
> above. The
> >> specific options in the configuration are (HAProxy v1.5.12):
> >>
> >> 1. "retries 1"
> >> 2. "option redispatch"
> >> 3. "option logasap"
> >> 4. "timeout connect 5000", server and client timeouts are high - 300s
> >> 5. Number of backend servers is 7.
> >> 6. ulimit is 512K
> >> 7. balance is "roundrobin"
> >>
> >> Thank you for any leads/insights.
> >>
> >> Regards,
> >> - Krishna Kumar
> >>
> >
> > Hi Krishna,
> >
> > First, I don't understand how the "retries 1" and the "redispatch"
> > works together in your case.
> > I mean, redispatch is supposed to be applied at 'retries - 1'...
> >
> > So basically, what may be happening:
> > - because of logasap, HAProxy does not wait until the end of the
> > session to generate the log line
> > - this log is in error because a connection was attempted (and failed)
> > on a server
> >
> > You should not setup any ulimit and let HAProxy do the job for you.
> >
> > Baptiste
>


Re: Unexpected error messages

2015-10-16 Thread Krishna Kumar (Engineering)
Hi Baptiste,

Thanks for your follow up!

Sorry, I was unable to test that, since it was seen only on the production
server. However, I tested the same on a test box, with retries=1 and
redispatch, and saw that redispatch does happen even with retries=1
when the backend is down (health check disabled, retries=1, redispatch
enabled). However, retries=0 does not redispatch, as expected.

So the issue remains. We are also checking whether packet losses might
explain this problem. The confusing part is that the error line contains
status = 200.

Thanks,
- Krishna Kumar


On Fri, Oct 16, 2015 at 3:49 PM, Baptiste  wrote:
> Is your problem fixed?
>
> We may emit a warning for such configuration.
>
> Baptiste
>
> On 15 Oct 2015 07:34, "Krishna Kumar (Engineering)"
>  wrote:
>>
>> Hi Baptiste,
>>
>> Thank you for the advise and solution, I didn't realize retries had to be
>> >1.
>>
>> Regards,
>> - Krishna Kumar
>>
>> On Wed, Oct 14, 2015 at 7:51 PM, Baptiste  wrote:
>> > On Wed, Oct 14, 2015 at 3:03 PM, Krishna Kumar (Engineering)
>> >  wrote:
>> >> Hi all,
>> >>
>> >> We are occasionally getting these messages (about 25 errors/per
>> >> occurrence,
>> >> 1 occurrence per hour) in the *error* log:
>> >>
>> >> 10.xx.xxx.xx:60086 [14/Oct/2015:04:21:25.048] Alert-FE
>> >> Alert-BE/10.xx.xx.xx 0/5000/1/32/+5033 200 +149 - - --NN 370/4/1/0/+1
>> >> 0/0 {10.xx.x.xxx||367||} {|||432} "POST /fk-alert-service/nsca
>> >> HTTP/1.1"
>> >> 10.xx.xxx.xx:60046 [14/Oct/2015:04:21:19.936] Alert-FE
>> >> Alert-BE/10.xx.xx.xx 0/5000/1/21/+5022 200 +149 - - --NN 302/8/2/0/+1
>> >> 0/0 {10.xx.x.xxx||237||} {|||302} "POST /fk-alert-service/nsca
>> >> HTTP/1.1"
>> >> ...
>> >>
>> >> We are unsure what errors were seen at the client. What could possibly
>> >> be the
>> >> reason for these? Every error line has retries value as "+1", as seen
>> >> above. The
>> >> specific options in the configuration are (HAProxy v1.5.12):
>> >>
>> >> 1. "retries 1"
>> >> 2. "option redispatch"
>> >> 3. "option logasap"
>> >> 4. "timeout connect 5000", server and client timeouts are high - 300s
>> >> 5. Number of backend servers is 7.
>> >> 6. ulimit is 512K
>> >> 7. balance is "roundrobin"
>> >>
>> >> Thank you for any leads/insights.
>> >>
>> >> Regards,
>> >> - Krishna Kumar
>> >>
>> >
>> > Hi Krishna,
>> >
>> > First, I don't understand how the "retries 1" and the "redispatch"
>> > works together in your case.
>> > I mean, redispatch is supposed to be applied at 'retries - 1'...
>> >
>> > So basically, what may be happening:
>> > - because of logasap, HAProxy does not wait until the end of the
>> > session to generate the log line
>> > - this log is in error because a connection was attempted (and failed)
>> > on a server
>> >
>> > You should not setup any ulimit and let HAProxy do the job for you.
>> >
>> > Baptiste



Re: Re: haproxy 1.6.0 crashes

2015-10-16 Thread Remi Gacogne
Hi Willy, Christopher,

> Ideally we'd have the info in the ssl_ctx itself, but I remember that Emeric
> told me a while ago that we couldn't store anything in an ssl_ctx. Thus I
> can understand that we can't easily "tag" the ssl_ctx as being statically
> or dynamically allocated, which is why I understand the need for the flag
> on the connection as an alternative.

Well, I am not sure it will suit your needs, but it is possible to store
some info in an SSL_CTX using SSL_CTX_set_ex_data(). We are already doing
that for DH parameters and Certificate Transparency data.

-- 
Remi





signature.asc
Description: OpenPGP digital signature


Re: Looking for help about "req.body" logging

2015-10-16 Thread Baptiste
On 16 Oct 2015 10:46, "Alberto Zaccagni" <
alberto.zacca...@lazywithclass.com> wrote:
>
> Hello,
>
> Sorry for the repost, but it's really not clear to me how to use this
feature: "Processing of HTTP request body" in
http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/, can it be
used to log the body of a request?
>
> I am trying to use it like this in both my HTTP and HTTPS frontends:
>
> option http-buffer-request
> log-format "%[req.body]"
>
> The error I get is "'log-format' : sample fetch  may not be
reliably used here because it needs 'HTTP request headers' which is not
available here.", where should I be using it?
> Does that mean that we cannot log req.body at all or that I have to
enable another option before trying to use it?
>
> Any hint or help is much appreciated.
> Thank you.
>
> Cheers

Have you turned on 'mode http'?

Baptiste
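[Editor's note: a hedged configuration sketch, untested and with illustrative names. With 1.6, one way to get the body into the logs is to buffer the request and copy req.body into a capture slot, which log-format can then reference:]

```
frontend www
    bind :80
    mode http
    option http-buffer-request
    declare capture request len 4096
    http-request capture req.body id 0
    log-format "%ci:%cp [%t] %ft body=%[capture.req.hdr(0)]"
    default_backend app
```

Binary bodies or bodies larger than the capture length will be truncated or unreadable in the log, so the capture length is a trade-off.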


Re: Looking for help about "req.body" logging

2015-10-16 Thread Alberto Zaccagni
Yes, I did turn it on. Or so I think, please have a look at my
configuration file:
https://gist.github.com/lazywithclass/d255bb4d2086b07be178

Thank you

Alberto

On Fri, 16 Oct 2015 at 10:12 Baptiste  wrote:

>
> On 16 Oct 2015 10:46, "Alberto Zaccagni" <
> alberto.zacca...@lazywithclass.com> wrote:
> >
> > Hello,
> >
> > Sorry for the repost, but it's really not clear to me how to use this
> feature: "Processing of HTTP request body" in
> http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/, can it be
> used to log the body of a request?
> >
> > I am trying to use it like this in both my HTTP and HTTPS frontends:
> >
> > option http-buffer-request
> > log-format "%[req.body]"
> >
> > The error I get is "'log-format' : sample fetch  may not be
> reliably used here because it needs 'HTTP request headers' which is not
> available here.", where should I be using it?
> > Does that mean that we cannot log req.body at all or that I have to
> enable another option before trying to use it?
> >
> > Any hint or help is much appreciated.
> > Thank you.
> >
> > Cheers
>
> Have you turned on 'mode http'?
>
> Baptiste
>


Re: haproxy 1.6.0 crashes

2015-10-16 Thread Christopher Faulet

On 16/10/2015 10:38, Willy Tarreau wrote:

Thus this sparks a new question : when the cache is disabled, are we sure
to always free the ssl_ctx on all error paths after it's generated ? Or
are we certain that we always pass through ssl_sock_close() ?



That's a good question. By grepping on SSL_free, it should be good.


The other problem I'm having is related to how we manage the LRU cache.
Is there a risk that we kill some objects in this cache while they're
still in use ?


The SSL_CTX and SSL objects are reference-counted objects, so there is 
no problem.


When an SSL_CTX object is created, its refcount is set to 1. When an SSL 
connection uses it, the refcount is incremented, and when the connection 
is closed, it is decremented. Of course, it is also decremented when 
SSL_CTX_free is called.
When, during a call to SSL_free or SSL_CTX_free, the reference count 
reaches 0, the SSL_CTX object is freed. Note that SSL_free and 
SSL_CTX_free can be called in any order.


So, if SSL_CTX_free is called while an SSL connection is using the 
corresponding SSL_CTX object, there is no problem. This is actually what 
happens when an SSL_CTX object is evicted from the cache. There is no 
need to check whether it is used by a connection or not.



In my patch, I chose to use a specific flag on the connection instead
of doing certificate comparisons. That seems easier to understand and
more efficient to me. But it is open to discussion; there are many other
solutions, I guess.


I'm not disagreeing with your proposal, it may even end up being the only
solution. I'm just having a problem with keeping this information outside
of the ssl_ctx itself while I think we have to deal with a ref count there
due to it being present both in the tree and in sessions which make use of
it. Very likely we'll have the flag in the connection to indicate that the
cert was generated and must be freed one way or another, but I'm still
bothered by checking the lru_tree when doing this because I fear that it
means that we don't properly track the ssl_ctx's usage.



We do not track any reference count on SSL_CTX; it is done internally by 
openssl. The only things we must do are to know whether it is a generated 
certificate and to track whether it is in the cache or not.



Finally, we can of course discuss the design of this feature; that is
no problem. I will be happy to find a more elegant way to handle it, if
possible.


Ideally we'd have the info in the ssl_ctx itself, but I remember that Emeric
told me a while ago that we couldn't store anything in an ssl_ctx. Thus I
can understand that we can't easily "tag" the ssl_ctx as being statically
or dynamically allocated, which is why I understand the need for the flag
on the connection as an alternative.



Well, I'm not an openssl guru. It is possible to store and retrieve data 
on an SSL_CTX object using the SSL_CTX_set_ex_data/SSL_CTX_get_ex_data 
functions. But I don't know whether it is good practice to use them, nor 
whether they are expensive.


Functionally, I agree with you. It would be better to keep info about an 
SSL_CTX object inside the object itself. And, at the beginning, I 
considered using these functions, but I was not confident enough to do 
it. Maybe Emeric can enlighten us.


--
Christopher Faulet



Re: haproxy + ipsec -> general socket error

2015-10-16 Thread wbmtfrdlxm
What Linux distribution are you using?

Light traffic is simulating 100 users browsing a website with simple HTTP 
requests. We have 2 backend nodes and, after a while, both of them become 
unavailable. After lowering or stopping the traffic, everything goes back 
to normal.
Without ipsec, there is no problem at all.


 On Fri, 16 Oct 2015 14:40:51 +0200 Jarno Huuskonen jarno.huusko...@uef.fi wrote  

Hi, 
 
On Fri, Oct 16, wbmtfrdlxm wrote: 
 when using ipsec on the backend side, this error pops up in the haproxy 
log from time to time: 
 
 Layer4 connection problem, info: "General socket error (No buffer space 
available) 
 
We're using ipsec(libreswan) on backend, but I haven't seen any problems 
with ipsec (just checked logs for past few months). 
 
 we have tried both strongswan and libreswan, error is still the same. 
there is nothing strange in the ipsec logs, connection seems stable. but as 
soon as we start generating some light traffic, haproxy loses connectivity with 
the backend nodes. 
 we are running centos 7, standard repositories. 
 
What's light traffic for you ? Have you tried w/out ipsec (does it 
work w/out problems) ? 
 
-Jarno 
 
-- 
Jarno Huuskonen 
 






haproxy + ipsec -> general socket error

2015-10-16 Thread wbmtfrdlxm
when using ipsec on the backend side, this error pops up in the haproxy log 
from time to time: 

Layer4 connection problem, info: "General socket error (No buffer space 
available)


we have tried both strongswan and libreswan, error is still the same. there is 
nothing strange in the ipsec logs, connection seems stable. but as soon as we 
start generating some light traffic, haproxy loses connectivity with the 
backend nodes.
we are running centos 7, standard repositories.

any ideas what could be wrong?



Re: [PATCH] BUG: ssl: Fix conditions to release SSL_CTX when a SSL connection is closed

2015-10-16 Thread Christopher Faulet

On 15/10/2015 16:50, Christopher Faulet wrote:

Hi,

Here is a proper patch to fix the recent bug reported on haproxy 1.6.0
when SNI is used.

Willy, I didn't wait for your reply, in order to speed up the code review.
But if there is any problem with this patch, let me know.

Regards,


After our discussion on this bug, I reworked my patch to use 
SSL_CTX_set_ex_data/SSL_CTX_get_ex_data functions.


Willy, I let you choose the patch you prefer :)

PS: I did some checks and, AFAIK, it works. But a double-check would not 
be too much...


--
Christopher Faulet
>From 171bb9ca0522c12d2f4f9a105c557e01c0011ecc Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Thu, 15 Oct 2015 15:29:34 +0200
Subject: [PATCH] BUG: ssl: Fix conditions to release SSL_CTX object when a
 connection is closed

When a SSL connection is closed, if its associated SSL_CTX object was generated
and if it was not cached[1], then it must be freed[2].

But, we must check that it was generated. And this check is buggy when multiple
certificates are used on the same bind line. We check that the SSL_CTX object is
not the default one (ssl_ctx != bind_conf->default_ctx). But it is not enough to
determine if it was generated or not. We should also check it against SNI
certificates (bind_conf->sni_ctx and bind_conf->sni_wc_ctx). This bug was
introduced with commit d2cab92 and it leads to a segfault in certain
circumstances: an SNI certificate can be erroneously released when a
connection is closed.

This commit fixes the bug. Now, when an SSL_CTX object is generated, we mark it
using the SSL_CTX_set_ex_data function. Then, when the connection is closed, we
check for this mark using the SSL_CTX_get_ex_data function. If the SSL_CTX
object is marked and not cached (because ssl_ctx_lru_tree is NULL), it is freed.

More information on this bug can be found on the HAProxy mailing-list[3]

[1] This happens when the cache used to store generated SSL_CTX objects does not
exist (ssl_ctx_lru_tree == NULL) because the 'tune.ssl.ssl-ctx-cache-size'
option is set to 0. This cache is also NULL when the dynamic generation of SSL
certificates is not used on any listener.

[2] Cached SSL_CTX objects are released when the cache is destroyed (during
HAProxy shutdown) or when one of them is evicted from the cache.

[3] https://www.mail-archive.com/haproxy@formilux.org/msg19937.html
---
 src/ssl_sock.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 5319532..35a3edf 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -153,8 +153,10 @@ static char *x509v3_ext_values[X509V3_EXT_SIZE] = {
 };
 
 /* LRU cache to store generated certificate */
-static struct lru64_head *ssl_ctx_lru_tree = NULL;
-static unsigned int   ssl_ctx_lru_seed = 0;
+static struct lru64_head *ssl_ctx_lru_tree  = NULL;
+static unsigned int   ssl_ctx_lru_seed  = 0;
+static int gen_ssl_ctx_ptr_index = -1;
+static int is_gen_ssl_ctx = 1;
 #endif // SSL_CTRL_SET_TLSEXT_HOSTNAME
 
 #if (defined SSL_CTRL_SET_TLSEXT_STATUS_REQ_CB && !defined OPENSSL_NO_OCSP)
@@ -1128,6 +1130,9 @@ ssl_sock_do_create_cert(const char *servername, unsigned int serial,
 	}
 #endif
  end:
+	if (!SSL_CTX_set_ex_data(ssl_ctx, gen_ssl_ctx_ptr_index, &is_gen_ssl_ctx))
+		goto mkcert_error;
+
 	return ssl_ctx;
 
  mkcert_error:
@@ -3124,11 +3129,10 @@ static void ssl_sock_close(struct connection *conn) {
 
 	if (conn->xprt_ctx) {
 #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
-		if (!ssl_ctx_lru_tree && objt_listener(conn->target)) {
-			SSL_CTX *ctx = SSL_get_SSL_CTX(conn->xprt_ctx);
-			if (ctx != objt_listener(conn->target)->bind_conf->default_ctx)
-SSL_CTX_free(ctx);
-		}
+		SSL_CTX *ctx = SSL_get_SSL_CTX(conn->xprt_ctx);
+		int *flag = SSL_CTX_get_ex_data(ctx, gen_ssl_ctx_ptr_index);
+		if (!ssl_ctx_lru_tree && flag != NULL && *flag == is_gen_ssl_ctx)
+			SSL_CTX_free(ctx);
 #endif
 		SSL_free(conn->xprt_ctx);
 		conn->xprt_ctx = NULL;
@@ -5392,6 +5396,11 @@ static void __ssl_sock_init(void)
 #ifndef OPENSSL_NO_DH
 	ssl_dh_ptr_index = SSL_CTX_get_ex_new_index(0, NULL, NULL, NULL, NULL);
 #endif
+
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+	gen_ssl_ctx_ptr_index = SSL_CTX_get_ex_new_index(0, NULL, NULL, NULL, NULL);
+#endif
+
 }
 
 __attribute__((destructor))
-- 
2.4.3



RE: haproxy + ipsec -> general socket error

2015-10-16 Thread Lukas Tribus
> when using ipsec on the backend side, this error pops up in the haproxy 
> log from time to time: 
> 
> Layer4 connection problem, info: "General socket error (No buffer space 
> available) 
> 
> 
> we have tried both strongswan and libreswan, error is still the same. 
> there is nothing strange in the ipsec logs, connection seems stable. 
> but as soon as we start generating some light traffic, haproxy loses 
> connectivity with the backend nodes. 
> we are running centos 7, standard repositories. 
> 
> any ideas what could be wrong? 

The error comes from the kernel, so you will have to troubleshoot it
there (both strongswan and libreswan probably use the kernel's
ipsec stack, which is why the behavior is the same).

- make sure you use the latest centos 7 kernel.
- try increasing /proc/sys/net/ipv4/xfrm4_gc_thresh
- report the issue (to CentOs/RedHat)


There is nothing that can be done in userspace/haproxy (except maybe
lowering the load by using keep-alive and connection pooling).
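[Editor's note: the xfrm4_gc_thresh suggestion above could be tried along these lines. This is a hedged sketch; the value 32768 is illustrative, and the right number depends on how many concurrent ipsec states the box handles.]

```
sysctl net.ipv4.xfrm4_gc_thresh                    # inspect the current value
sysctl -w net.ipv4.xfrm4_gc_thresh=32768           # raise it for the running kernel
echo 'net.ipv4.xfrm4_gc_thresh = 32768' >> /etc/sysctl.d/90-ipsec.conf
sysctl --system                                    # reload so it persists across reboots
```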


Regards,

Lukas

  


Re: haproxy + ipsec -> general socket error

2015-10-16 Thread Jarno Huuskonen
Hi,

On Fri, Oct 16, wbmtfrdlxm wrote:
> when using ipsec on the backend side, this error pops up in the haproxy log 
> from time to time: 
> 
> Layer4 connection problem, info: "General socket error (No buffer space 
> available)

We're using ipsec(libreswan) on backend, but I haven't seen any problems
with ipsec (just checked logs for past few months).

> we have tried both strongswan and libreswan, error is still the same. there 
> is nothing strange in the ipsec logs, connection seems stable. but as soon as 
> we start generating some light traffic, haproxy loses connectivity with the 
> backend nodes.
> we are running centos 7, standard repositories.

> What's light traffic for you ? Have you tried w/out ipsec (does it
work w/out problems) ?

-Jarno

-- 
Jarno Huuskonen




Re: [blog] What's new in HAProxy 1.6

2015-10-16 Thread Pavlos Parissis
On 14/10/2015 12:40 PM, Baptiste wrote:
> Hey,
> 
> I summarized what's new in HAProxy 1.6 with some configuration
> examples in a blog post to help quick adoption of new features:
> http://blog.haproxy.com/2015/10/14/whats-new-in-haproxy-1-6/
> 
> Baptiste
> 

1.6.0 comes with excellent documentation as well. Just look at the
amount of information anyone can find in:
http://www.haproxy.org/download/1.6/doc/management.txt
http://cbonte.github.io/haproxy-dconv/intro-1.6.html

To get an idea of how rich the documentation is, see
sections '4. Stopping and restarting HAProxy' and '5. File-descriptor
limitations' in management.txt.

I believe this should be mentioned in the blog post, as good quality
documentation is _very_ important.

Cheers,
Pavlos





signature.asc
Description: OpenPGP digital signature



Re: Resolvable host names in backend server throw invalid address error

2015-10-16 Thread Mark Betz
I am not having much luck getting output from tcpdump inside the container.
I don't have much experience with the tool so any tips will be appreciated.
I'm starting the command in the container start-up script right before
haproxy is launched...

sudo nohup tcpdump -i any -U -nn -XX -e -v -S -s 0 -w
/var/log/icitizen/tcpdump.out &

So far all I have managed to capture is:

link-type LINUX_SLL (Linux cooked)
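[Editor's note: a likely explanation, offered as a hedged suggestion. With `-w`, tcpdump writes raw packets to the file and decodes nothing on the console, so only the "link-type LINUX_SLL" banner shows up in the nohup output. The capture file has to be read back with `-r` (path as used above; port 53 assumes the interesting traffic is DNS):]

```
tcpdump -r /var/log/icitizen/tcpdump.out -nn -vv port 53
```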

On Fri, Oct 16, 2015 at 1:17 AM, Baptiste  wrote:

>
> On 16 Oct 2015 06:27, "Mark Betz"  wrote:
> >
> > Hi, I have a hopefully quick question about setting up backends for
> resolvable internal service addresses.
> >
> > We are putting together a cluster on Google Container Engine
> (kubernetes) and have haproxy deployed in a container based on Ubuntu 14.04
> LTS.
> >
> > Our backend server specifications are declared using an internal
> resolvable service name. For example:
> >
> > logdata-svc
> > logdata-svc.default.svc.cluster.local
> >
> > Both of these names correctly resolve to an internal IP address in the
> range 10.xxx.xxx.xxx, as shown by installing dnsutils into the container
> and running nslookup on the name prior to starting haproxy:
> >
> > Name: logdata-svc.default.svc.cluster.local
> > Address: 10.179.xxx.xxx
> >
> > However regardless of whether I use the short form or fqdn haproxy fails
> to start, emitting the following to stdout:
> >
> > [ALERT] 288/041651 (52) : parsing [/etc/haproxy/haproxy.cfg:99] :
> 'server logdata-service' : invalid address:
> 'logdata-svc.default.svc.cluster.local' in
> 'logdata-svc.default.svc.cluster.local:1'
> >
> > We can use IPV4 addresses in the config, but if we do so we would be
> giving up a certain amount of flexibility and resilience obtained from the
> kubedns service name resolution layer.
> >
> > Anything we can do here? Thanks!
> >
> > --
> > Mark Betz
> > Sr. Software Engineer
> > icitizen
> >
> > Email: mark.b...@icitizen.com
> > Twitter: @markbetz
>
> Hi,
>
> Weird. Configuration parsing is failing, which means it's a libc/system
> problem.
> Is your resolv.conf properly set up and the server responsive?
> Can you run a tcpdump at haproxy's start-up, on your raw container
> (no dnsutils installed)?
>
> Baptiste
>



-- 
Mark Betz
Sr. Software Engineer
*icitizen*

Mobile: 908-328-8666
Office: 908-223-5453
Email: mark.b...@icitizen.com
Twitter: @markbetz


Re: haproxy + ipsec -> general socket error

2015-10-16 Thread Baptiste
Have you 'tuned' your sysctls?

Baptiste
On 16 Oct 2015 14:56, "wbmtfrdlxm"  wrote:

> what linux distribution are you using?
>
> light traffic is simulating 100 users browsing a website, simple http
> requests. we have 2 backend nodes and after a while, both of them become
> unavailable. after lowering or stopping traffic, everything goes back to
> normal.
> without ipsec, no problem at all.
>
>
>  On Fri, 16 Oct 2015 14:40:51 +0200, *Jarno Huuskonen* wrote 
>
> Hi,
>
> On Fri, Oct 16, wbmtfrdlxm wrote:
> > when using ipsec on the backend side, this error pops up in the haproxy
> log from time to time:
> >
> > Layer4 connection problem, info: "General socket error (No buffer space
> available)
>
> We're using ipsec(libreswan) on backend, but I haven't seen any problems
> with ipsec (just checked logs for past few months).
>
> > we have tried both strongswan and libreswan, error is still the same.
> there is nothing strange in the ipsec logs, connection seems stable. but as
> soon as we start generating some light traffic, haproxy loses connectivity
> with the backend nodes.
> > we are running centos 7, standard repositories.
>
> What's light traffic for you? Have you tried without ipsec (does it
> work without problems)?
>
> -Jarno
>
> --
> Jarno Huuskonen
>
>
>
>


Re: haproxy + ipsec -> general socket error

2015-10-16 Thread wbmtfrdlxm
just those 2:

net.ipv4.tcp_max_syn_backlog = 8192
net.core.somaxconn = 2048
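Those two only size the accept queues, though; the "No buffer space
available" error (ENOBUFS) usually points at socket or device buffers
instead. As a speculative starting point only (the values below are
illustrative, not a confirmed fix for this problem), the buffer-related
sysctls could be raised:

```
# /etc/sysctl.d/90-net-buffers.conf -- illustrative values, not a tested fix
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 8192
```

Applied with `sysctl --system` (or `sysctl -p` on the file).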



 On Fri, 16 Oct 2015 16:13:31 +0200, Baptiste bed...@gmail.com wrote 
 

Have you 'tuned' your sysctls?
 Baptiste
 On 16 Oct 2015 14:56, "wbmtfrdlxm" wbmtfrd...@zoho.com wrote:
what linux distribution are you using?

light traffic is simulating 100 users browsing a website, simple http requests. 
we have 2 backend nodes and after a while, both of them become unavailable. 
after lowering or stopping traffic, everything goes back to normal.
without ipsec, no problem at all.


 On Fri, 16 Oct 2015 14:40:51 +0200, Jarno Huuskonen jarno.huusko...@uef.fi wrote 

Hi, 
 
On Fri, Oct 16, wbmtfrdlxm wrote: 
 when using ipsec on the backend side, this error pops up in the haproxy 
log from time to time: 
 
 Layer4 connection problem, info: "General socket error (No buffer space 
available) 
 
We're using ipsec(libreswan) on backend, but I haven't seen any problems 
with ipsec (just checked logs for past few months). 
 
 we have tried both strongswan and libreswan, error is still the same. 
there is nothing strange in the ipsec logs, connection seems stable. but as 
soon as we start generating some light traffic, haproxy loses connectivity with 
the backend nodes. 
 we are running centos 7, standard repositories. 
 
What's light traffic for you? Have you tried without ipsec (does it 
work without problems)? 
 
-Jarno 
 
-- 
Jarno Huuskonen 
 




Re: Resolvable host names in backend server throw invalid address error

2015-10-16 Thread Mark Betz
Thanks for the reply Baptiste. Here is the dump of /etc/resolv.conf inside
the container:

nameserver 10.179.240.10
nameserver 169.254.169.254
nameserver 10.240.0.1
search default.svc.cluster.local svc.cluster.local cluster.local
c.icitizen-dev3-stack-1069.internal. 555239384585.google.internal.
google.internal.
options ndots:5
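For what it's worth, haproxy 1.6 resolves server host names through the
libc only at parse time unless a "resolvers" section is configured for
runtime re-resolution. A sketch (untested; the nameserver address is taken
from the resolv.conf above, the backend name and port are made up, and
this only governs re-resolution once start-up resolution itself works,
which is the problem being debugged here):

```
resolvers kubedns
    nameserver dns1 10.179.240.10:53
    hold valid 10s

backend logdata
    server logdata-svc logdata-svc.default.svc.cluster.local:80 check resolvers kubedns
```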

I will get some output from tcpdump and include it in a reply in a few
minutes. Thanks again for your time.

On Fri, Oct 16, 2015 at 1:17 AM, Baptiste  wrote:

>
> On 16 Oct 2015 06:27, "Mark Betz"  wrote:
> >
> > Hi, I have a hopefully quick question about setting up backends for
> resolvable internal service addresses.
> >
> > We are putting together a cluster on Google Container Engine
> (kubernetes) and have haproxy deployed in a container based on Ubuntu 14.04
> LTS.
> >
> > Our backend server specifications are declared using an internal
> resolvable service name. For example:
> >
> > logdata-svc
> > logdata-svc.default.svc.cluster.local
> >
> > Both of these names correctly resolve to an internal IP address in the
> range 10.xxx.xxx.xxx, as shown by installing dnsutils into the container
> and running nslookup on the name prior to starting haproxy:
> >
> > Name: logdata-svc.default.svc.cluster.local
> > Address: 10.179.xxx.xxx
> >
> > However regardless of whether I use the short form or fqdn haproxy fails
> to start, emitting the following to stdout:
> >
> > [ALERT] 288/041651 (52) : parsing [/etc/haproxy/haproxy.cfg:99] :
> 'server logdata-service' : invalid address:
> 'logdata-svc.default.svc.cluster.local' in
> 'logdata-svc.default.svc.cluster.local:1'
> >
> > We can use IPV4 addresses in the config, but if we do so we would be
> giving up a certain amount of flexibility and resilience obtained from the
> kubedns service name resolution layer.
> >
> > Anything we can do here? Thanks!
> >
> > --
> > Mark Betz
> > Sr. Software Engineer
> > icitizen
> >
> > Email: mark.b...@icitizen.com
> > Twitter: @markbetz
>
> Hi,
>
> Weird. Configuration parsing is failing, which means it's a libc/system
> problem.
> Is your resolv.conf properly set up and the server responsive?
> Can you run a tcpdump at haproxy's start-up, on your raw container
> (no dnsutils installed)?
>
> Baptiste
>



-- 
Mark Betz
Sr. Software Engineer
*icitizen*

Mobile: 908-328-8666
Office: 908-223-5453
Email: mark.b...@icitizen.com
Twitter: @markbetz


Re: Resolvable host names in backend server throw invalid address error

2015-10-16 Thread Shawn Heisey
On 10/16/2015 9:40 AM, Mark Betz wrote:
> I am not having much luck getting output from tcpdump inside the
> container. I don't have much experience with the tool so any tips will
> be appreciated. I'm starting the command in the container start-up
> script right before haproxy is launched...
> 
> sudo nohup tcpdump -i any -U -nn -XX -e -v -S -s 0 -w
> /var/log/icitizen/tcpdump.out &

Most of those options are not useful when capturing actual packet data
to a file, they are only useful when dumping packet information to
stdout.  They might be confusing tcpdump.

Try a much less complicated command.  You might want to pick a specific
interface rather than "any" ... captures on the "any" interface are not
done promiscuously, and in many cases you do want a promiscuous capture:

tcpdump -i eth0 -s0 -w output.cap

If the idea is to capture both traffic going in and out of haproxy, and
this happens on separate interfaces, you might want to do separate
captures for each interface.
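Since the problem being chased here is DNS resolution, a capture narrowed
to DNS traffic may also be easier to read; a sketch (the interface name
and output path are assumptions):

```
# capture full DNS packets, flushing each one to the file as it arrives
tcpdump -i eth0 -s 0 -U -w /tmp/dns.cap port 53

# read the capture back later without resolving names
tcpdump -nn -r /tmp/dns.cap
```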

I'm not a tcpdump expert, so I won't be able to answer expert-level
questions about it, but I have used it a lot.

Thanks,
Shawn




Re[2]: Multiple Monitor-net

2015-10-16 Thread Bryan Rodriguez
What about TCP requests, or non-HTTP traffic? It seems TCP traffic is 
still logged when using:


http-request set-log-level silent if { src -f aws-checks.list }



[Bryan]



-- Original Message --
From: "Willy Tarreau" 
To: "Bryan Rodriguez" 
Cc: haproxy@formilux.org
Sent: 10/16/2015 10:28:13 AM
Subject: Re: Multiple Monitor-net


On Fri, Oct 16, 2015 at 05:18:24PM +, Bryan Rodriguez wrote:
 AWS health check monitoring comes from the following networks.  Logging
 is going crazy.  I read that only the last monitor-net is read.  Is
 there a way to filter from the logs all the following requests?

monitor-net 54.183.255.128/26
monitor-net 54.228.16.0/26
monitor-net 54.232.40.64/26
monitor-net 54.241.32.64/26
monitor-net 54.243.31.192/26
monitor-net 54.244.52.192/26
monitor-net 54.245.168.0/26
monitor-net 54.248.220.0/26
monitor-net 54.250.253.192/26
monitor-net 54.251.31.128/26
monitor-net 54.252.254.192/26
monitor-net 54.252.79.128/26
monitor-net 54.255.254.192/26
monitor-net 107.23.255.0/26
monitor-net 176.34.159.192/26
monitor-net 177.71.207.128/26


Yes, instead of using monitor-net, you can use a redirect (if the checker
accepts it) or go to a specific backend instead, and use the "silent"
log-level :

  http-request set-log-level silent if { src -f aws-checks.list }
  http-request redirect location /  if { src -f aws-checks.list }

Or :

  use-backend aws-checks if { src -f aws-checks.list }

  backend aws-checks
 http-request set-log-level silent
 error-file 503 /path/to/forged/response.http

Then you put all those networks (one per line) in a file called
"aws-checks.list" and that will be easier.

Hoping this helps,
Willy
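The file Willy describes would simply hold the networks quoted above, one
per line; a sketch:

```
# aws-checks.list -- AWS health-check source networks, one per line
54.183.255.128/26
54.228.16.0/26
54.232.40.64/26
...
177.71.207.128/26
```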






Re: Multiple Monitor-net

2015-10-16 Thread Willy Tarreau
On Fri, Oct 16, 2015 at 10:52:32PM +, Bryan Rodriguez wrote:
> What about TCP requests, or non-HTTP traffic? It seems TCP traffic is 
> still logged when using:
> 
> http-request set-log-level silent if { src -f aws-checks.list }

Absolutely, and you should get a warning stating that http-request
will not work in TCP mode.

It would have made sense to have set-log-level accessible from TCP
rules; I guess it was implemented before we made it easy to share
actions between multiple rulesets. There's another action I would
have liked in TCP rules: set-src. But we don't have it either.
This is definitely something we need to make more uniform in
1.7!

Willy




Lua complete example ?

2015-10-16 Thread One Seeker
Hello,

I would like to manipulate some data from a TCP backend (modify data before
it is forwarded to client), and this is not supported (it is for HTTP with
rewrites, but not in TCP mode).

With v1.6, Lua scripting brings hope, but the documentation is lacking
(doc/lua-api/index.rst is a bit of a harsh place to start learning this
aspect of HAProxy).
Is there an "elaborate" (or advanced) example of using Lua with HAProxy
(not a Hello World) that I can learn from? I'm very good at learning from
real-world code :) It doesn't have to do what I'm describing here, just
some real stuff to showcase Lua for HAProxy.

With thanks.
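One caveat: in 1.6, Lua cannot rewrite proxied TCP payloads in flight;
the closest pattern is a TCP service (applet) that terminates the
connection and answers the client itself. A minimal sketch of that
pattern (untested; the service name and the upper-casing transformation
are made up for illustration):

```
-- upper_echo.lua: a TCP applet registered with haproxy's Lua API.
-- It replaces the server: each line received from the client is
-- sent back upper-cased.
core.register_service("upper_echo", "tcp", function(applet)
    while true do
        local line = applet:getline()   -- one line from the client
        if not line or line == "" then
            break                       -- end of stream
        end
        applet:send(line:upper())
    end
end)
```

Hooked in with a configuration along these lines (also an assumption):

    global
        lua-load /etc/haproxy/upper_echo.lua

    listen echo
        mode tcp
        bind :9000
        tcp-request content use-service lua.upper_echo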


Multiple Monitor-net

2015-10-16 Thread Bryan Rodriguez
AWS health check monitoring comes from the following networks.  Logging 
is going crazy.  I read that only the last monitor-net is read.  Is 
there a way to filter from the logs all the following requests?


   monitor-net 54.183.255.128/26
   monitor-net 54.228.16.0/26
   monitor-net 54.232.40.64/26
   monitor-net 54.241.32.64/26
   monitor-net 54.243.31.192/26
   monitor-net 54.244.52.192/26
   monitor-net 54.245.168.0/26
   monitor-net 54.248.220.0/26
   monitor-net 54.250.253.192/26
   monitor-net 54.251.31.128/26
   monitor-net 54.252.254.192/26
   monitor-net 54.252.79.128/26
   monitor-net 54.255.254.192/26
   monitor-net 107.23.255.0/26
   monitor-net 176.34.159.192/26
   monitor-net 177.71.207.128/26