Re: Need some help configuring backend health checks

2015-10-19 Thread Jarno Huuskonen
Hi,

On Sun, Oct 18, Daren Sefcik wrote:
> I have an ICAP server backend with servers that each listen on different
> ports, can anyone offer some advice on how to configure health checks for
> it? I am currently using basic but that really doesn't help if the service
> is not responding.
> 
> Here is my haproxy config for the backend:
> 
> backend HTPL_CONT_FILTER_tcp_ipvANY
> mode tcp
> balance roundrobin
> timeout connect 5
> timeout server 5
> retries 3
> server HTPL-WEB-01_10.1.4.153 10.1.4.153:1344 check inter 5000  weight 200
> maxconn 200 fastinter 1000 fall 5
> server HTPL-WEB-02_10.1.4.154 10.1.4.154:1344 check inter 5000  weight 200
> maxconn 200 fastinter 1000 fall 5
> server HTPL-WEB-02_10.1.4.155_01 10.1.4.155:8102 check inter 5000  weight
> 200 maxconn 200 fastinter 1000 fall 5
> server HTPL-WEB-02_10.1.4.155_02 10.1.4.155:8202 check inter 5000  weight
> 200 maxconn 200 fastinter 1000 fall 5

Do the icap servers (squid+diladele?) respond to something like this:
https://support.symantec.com/en_US/article.TECH220980.html
or https://exchange.icinga.org/oldmonex/1733-check_icap.pl/check_icap.pl

Maybe you can use tcp-check to send an ICAP request and look for
an "ICAP/1.0 200" response:
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#tcp-check%20connect
http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-health-check/
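An untested sketch of what that could look like, following the tcp-check
send/expect pattern from the Redis article above (the icap:// URI and Host
values are placeholders — many ICAP servers ignore them for OPTIONS; note
that the check connects to each server's own address:port, so the different
per-server ports need no special handling):

```
backend HTPL_CONT_FILTER_tcp_ipvANY
    mode tcp
    balance roundrobin
    option tcp-check
    # send an ICAP OPTIONS probe and require a 200 reply
    tcp-check send OPTIONS\ icap://localhost/\ ICAP/1.0\r\n
    tcp-check send Host:\ localhost\r\n
    tcp-check send \r\n
    tcp-check expect string ICAP/1.0\ 200
    server HTPL-WEB-01_10.1.4.153 10.1.4.153:1344 check inter 5000 fall 5
```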

-Jarno

-- 
Jarno Huuskonen



Re: lua, changing response-body in http pages 'supported' ?

2015-10-19 Thread thierry . fournier
On Mon, 19 Oct 2015 01:31:42 +0200
PiBa-NL  wrote:

> Hi Thierry,
> 
> > On 18-10-2015 at 21:37, thierry.fourn...@arpalert.org wrote:
> > On Sun, 18 Oct 2015 00:07:13 +0200
> > PiBa-NL  wrote:
> >
> >> Hi haproxy list,
> >>
> >> For testing purposes I am trying to 'modify' a response of a webserver
> >> but am only having limited success. Is this supposed to work?
> >> As a more useful goal than the current LAL to TST replacement, I imagine
> >> rewriting absolute links on a webpage could be possible, which is
> >> sometimes problematic with 'dumb' web applications..
> >>
> >> Or is it outside the current scope of implemented functionality? If
> >> so, is it on the 'lua todo list'?
> >>
> >> I tried for example a configuration like the one below, and get several
> >> different results in the browser:
> >> - Sometimes I get 4 times TSTA
> >> - Sometimes I see, after the 8th TSTA, "Connection: keep-alive" << this
> >> happens most of the time..
> >> - Sometimes I get 9 times TSTA + STOP << this would be the desired
> >> outcome (only seen very few times..)
> >>
> >> Probably due to the response buffer being filled differently due to
> >> 'timing'..
> >>
> >> The "Connection: keep-alive" text is probably from the actual server
> >> reply, which is 'appended' behind the response generated by my Lua
> >> script? However, shouldn't the .done() prevent that from being sent to
> >> the client?
> >>
> >> I've tried putting a loop into the Lua script to call res:get() multiple
> >> times but that didn't seem to work..
> >>
> >> Also, to properly modify a page I would need to know all changes before
> >> sending the headers with a changed content-length back to the client..
> >>
> >> Can someone confirm this is or isn't (reliably) possible? Or how this
> >> can be scripted in Lua differently?
> >
> > Hello,
> >
> > Your script replaces 3 bytes with 3 bytes, so this should work with
> > HTTP, but if your replacement changes the length of the response, you
> > can have some difficulties with clients, or with keepalive.
> Yes, I started with replacing with the same number of bytes to avoid some 
> of the possible troubles caused by changing the length. And as seen in 
> the haproxy.cfg, it is configured with 'mode http'.
> >
> > res:get() returns the current content of the response buffer.
> > Maybe it does not contain the full response. You must execute a loop with
> > regular "core.yield()" to hand control back to HAProxy and wait for new
> Calling yield does allow 'waiting' for more data to come in.. No 
> guarantee that it only takes 1 yield for the data to 'grow'..
> 
> [info] 278/055943 (77431) : luahttpresponse Content-Length XYZ: 14115
> [info] 278/055943 (77431) : luahttpresponse SIZE: 2477
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 6221
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 7469
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 7469
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 7469
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 7469
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 7469
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 7469
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 7469
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 8717
> [info] 278/055943 (77431) : luahttpresponse LOOP
> [info] 278/055943 (77431) : luahttpresponse SIZE: 14337
> [info] 278/055943 (77431) : luahttpresponse DONE?: 14337
> 
> > data. When all the data are read, res:get() returns an error.
> Not sure when/how this error would happen? The result of res:get only 
> seems to get bigger while the webserver is sending the response..
> >
> > res:send() is dangerous because it sends data directly to the client
> > without the rest of HAProxy's analysis. Maybe that is the cause of your
> > problem.
> >
> > Try to use res:set().
> Ok tried that, new try with function below.
> >
> > The difficulty is that another "res:get()" returns the same data that
> > you put.
> >
> > I don't know if you can modify an HTTP response greater than one
> > buffer.
> Would be nice if that was somehow possible. But my current lua script 
> cannot..
> >
> > The function res:close() closes the connection even if HAProxy wants to
> > keep the connection alive. I suggest that you don't use this function.
> It seems txn.res:close() does not exist? txn:done()
> >
> > I reproduced the error message using curl. By default curl tries
> > to transfer data with keepalive, and it is not happy if all the
> > announced data are not transferred.
> >
> > 
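The loop discussed in this thread could be sketched roughly as below. This is
an untested illustration only: it assumes the HAProxy 1.6 Lua API names used
in the thread (core.register_action, txn.res:get()/set(), core.yield()), it
only works while the whole response fits in one buffer, and the "buffer
stopped growing" heuristic is unreliable, as the SIZE log above demonstrates.

```lua
-- Registered as a response action, e.g. in haproxy.cfg:
--   http-response lua.replace-body
core.register_action("replace-body", { "http-res" }, function(txn)
    local body = txn.res:get()           -- current response buffer content
    while true do
        core.yield()                     -- hand control back to HAProxy
        local data = txn.res:get()
        if #data == #body then break end -- stop once the buffer stops growing
        body = data
    end
    -- same-length replacement, so the announced Content-Length stays valid
    txn.res:set(body:gsub("LAL", "TST"))
end)
```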

Re: Build failure of 1.6 and openssl 0.9.8

2015-10-19 Thread Christopher Faulet

On 16/10/2015 22:42, Willy Tarreau wrote:

Hi Christopher,

Marcus (in CC) reported that 1.6 doesn't build anymore on SuSE 11
(which uses openssl 0.9.8). After some digging, we found that it
is caused by the absence of EVP_PKEY_get_default_digest_nid(),
which was introduced in 1.0.0 and whose use was brought in by this
patch :

   commit 7969a33a01c3a70e48cddf36ea5a66710bd7a995
   Author: Christopher Faulet 
   Date:   Fri Oct 9 11:15:03 2015 +0200

 MINOR: ssl: Add support for EC for the CA used to sign generated certificates

 This is done by adding EVP_PKEY_EC type in supported types for the CA private
 key when we get the message digest used to sign a generated X509 certificate.
 So now, we support DSA, RSA and EC private keys.

 And to be sure, when the type of the private key is not directly supported,
 get its default message digest using the function
 'EVP_PKEY_get_default_digest_nid'.

 We also use the key of the default certificate instead of generating it. So we
 are sure to use the same key type instead of always using an RSA key.

Interestingly, not all 0.9.8 will see the same problem since SNI is not
enabled by default, it requires a build option. This explains why on my
old PC I didn't get this problem with the same version.

I initially thought it would just be a matter of adding a #if on the
openssl version but it doesn't appear that easy given that the previous
code was different, so I have no idea how to fix this. Do you have any
idea ? Probably we can have a block of code instead of EVP_PKEY_... on
older versions and that will be fine. I even wonder if EC was supported
on 0.9.8.

It's unfortunate that we managed to break things just a few days before
the release with code that looked obviously right :-(

Thanks for any insight.



Hi Willy,

Damned! I generated a huge amount of disturbance with my patches! Really 
sorry for that.


Adding a #ifdef to check the OpenSSL version seems to be a good fix. I 
don't know if there is a workaround to do the same as 
EVP_PKEY_get_default_digest_nid() for old OpenSSL versions.


This function is used to get the default signature digest associated with 
the private key used to sign generated X509 certificates. It is called when 
the private key differs from EVP_PKEY_RSA, EVP_PKEY_DSA and EVP_PKEY_EC. 
It should be enough for most cases (maybe all cases?).


By the way, I attached a patch to fix the bug.

Regards,
--
Christopher Faulet
>From 76e79a8c8a98474f3caf701b75370f50729516b2 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 19 Oct 2015 13:59:24 +0200
Subject: [PATCH 2/2] BUILD: ssl: fix build error introduced in commit 7969a3
 with OpenSSL < 1.0.0

The function 'EVP_PKEY_get_default_digest_nid()' was introduced in OpenSSL
1.0.0. So for older versions of OpenSSL, compiled with SNI support, the
HAProxy compilation fails with the following error:

src/ssl_sock.c: In function 'ssl_sock_do_create_cert':
src/ssl_sock.c:1096:7: warning: implicit declaration of function 'EVP_PKEY_get_default_digest_nid'
   if (EVP_PKEY_get_default_digest_nid(capkey, &nid) <= 0)
[...]
src/ssl_sock.c:1096: undefined reference to `EVP_PKEY_get_default_digest_nid'
collect2: error: ld returned 1 exit status
Makefile:760: recipe for target 'haproxy' failed
make: *** [haproxy] Error 1

So we must add a #ifdef to check the OpenSSL version (>= 1.0.0) to use this
function. It is used to get the default signature digest associated with the
private key used to sign generated X509 certificates. It is called when the
private key differs from EVP_PKEY_RSA, EVP_PKEY_DSA and EVP_PKEY_EC. It should
be enough for most cases.
---
 src/ssl_sock.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 35a3edf..7c82464 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -1091,12 +1091,16 @@ ssl_sock_do_create_cert(const char *servername, unsigned int serial,
 	else if (EVP_PKEY_type (capkey->type) == EVP_PKEY_EC)
 		digest = EVP_sha256();
 	else {
+#if (OPENSSL_VERSION_NUMBER >= 0x1000000fL)
 		int nid;
 
 		if (EVP_PKEY_get_default_digest_nid(capkey, &nid) <= 0)
 			goto mkcert_error;
 		if (!(digest = EVP_get_digestbynid(nid)))
 			goto mkcert_error;
+#else
+		goto mkcert_error;
+#endif
 	}
 
 	if (!(X509_sign(newcrt, capkey, digest)))
-- 
2.4.3



Re: haproxy 1.6.0 crashes

2015-10-19 Thread Christopher Faulet

Hi Willy,

On 16/10/2015 19:07, Willy Tarreau wrote:


The SSL_CTX and SSL objects are reference-counted objects, so there is
no problem.

When a SSL_CTX object is created, its refcount is set to 1. When an SSL
connection uses it, it is incremented, and when the connection is closed,
it is decremented. Of course, it is also decremented when SSL_CTX_free
is called.
When, during a call to SSL_free or SSL_CTX_free, the reference count reaches
0, the SSL_CTX object is freed. Note that SSL_free and SSL_CTX_free can
be called in any order.


OK so the unused objects in the tree have a refcount of 1 while the used
ones have 2 or more, thus the refcount is always valid. Good that also
means we must not test if the tree is null or not in ssl_sock_close(),
we must always free the ssl_ctx as long as it was dynamically created,
so that its refcount decreases, otherwise it keeps increasing upon every
reuse.


No. Maybe my explanation was not really clear. The SSL_CTX refcount is 
not exposed. It is an internal parameter. So, it is not incremented when 
the SSL_CTX is pushed in the cache tree.


The call to SSL_set_SSL_CTX increases the refcount and the call to 
SSL_free decrements it (when the connection is closed). And, of course, 
the call to SSL_CTX_free decrements it too. The SSL_CTX object is 
released when the refcount reaches 0.


For a SSL_CTX object, SSL_CTX_free must be called exactly once: when it is 
evicted from the cache tree (or when the tree is destroyed), _or_ when 
the connection is closed if there is no cache tree. If we always release 
SSL_CTX objects when the SSL connection is closed, we will have 
dangling references to cached objects, leading to a segfault.




So, if SSL_CTX_free is called while an SSL connection is using the
corresponding SSL_CTX object, there is no problem. Actually, this
happens when a SSL_CTX object is evicted from the cache. There is no
need to check whether it is used by a connection or not.


Not only is it not needed, but we must not.


We do not track any reference count on SSL_CTX, it is done internally by
openssl. The only thing we must do is to know whether it is a generated
certificate


I totally agree.


and to track if it is in the cache or not.


And here I disagree for the reason explained above since this is already
covered by the refcount.


The refcount is not incremented when a SSL_CTX object is pushed in the 
cache. There is no way to manually increment or decrement it. So, we 
must really know if the SSL_CTX object was cached or not when the SSL 
connection is closed.



Well, I'm not an openssl guru. It is possible to store and retrieve data
on a SSL_CTX object using the SSL_CTX_set_ex_data/SSL_CTX_get_ex_data
functions. But I don't know if it is good practice to use them. And I
don't know if this is expensive or not.


That's also what Rémi suggested. I don't know how it's used, I'm seeing
an index with it and that's already used for DH data, so I don't know how
it mixes (if at all) with this. I'm not much concerned by the access cost
in fact since we're supposed to access it once at session creation and once
during the release. It's just that I don't understand how this works. Maybe
the connection flag is simpler for now.


Well, using SSL_CTX_set_ex_data/SSL_CTX_get_ex_data seems to work. But 
I'm not an SSL expert, so maybe I missed something (and the bugs related to 
my recent patches show that this is not false modesty...). I sent 2 
fixes for this bug [1][2]. If you want me to rework one of them, I will be 
happy to do it.


[1] https://www.mail-archive.com/haproxy@formilux.org/msg19962.html
[2] https://www.mail-archive.com/haproxy@formilux.org/msg19995.html

Regards
--
Christopher Faulet



Re: haproxy 1.6.0 crashes

2015-10-19 Thread Willy Tarreau
On Mon, Oct 19, 2015 at 03:06:44PM +0200, Christopher Faulet wrote:
> >OK so the unused objects in the tree have a refcount of 1 while the used
> >ones have 2 or more, thus the refcount is always valid. Good that also
> >means we must not test if the tree is null or not in ssl_sock_close(),
> >we must always free the ssl_ctx as long as it was dynamically created,
> >so that its refcount decreases, otherwise it keeps increasing upon every
> >reuse.
> 
> No. Maybe my explanation was not really clear. The SSL_CTX refcount is 
> not exposed. It is an internal parameter. So, it is not incremented when 
> the SSL_CTX is pushed in the cache tree.
> 
> The call to SSL_set_SSL_CTX increases the refcount and the call to 
> SSL_free decrements it (when the connection is closed). And, of course, 
> the call to SSL_CTX_free decrements it too. The SSL_CTX object is 
> released when the refcount reaches 0.
> 
> For a SSL_CTX object, SSL_CTX_free must be called exactly once: when it is 
> evicted from the cache tree (or when the tree is destroyed), _or_ when 
> the connection is closed if there is no cache tree. If we always release 
> SSL_CTX objects when the SSL connection is closed, we will have 
> dangling references to cached objects, leading to a segfault.

OK, I understood the opposite, which is that we kept a refcount for each
user (cache and/or sessions).

But then how do we know that an SSL_CTX is still in use when we want to
evict it from the cache and that we must not free it ? Is it just the
fact that between the moment it's picked from the cache using
ssl_sock_get_generated_cert() and the moment it's associated to a session
using SSL_set_SSL_CTX() it's not possible to yield and destroy the cached
object so no race is possible here ? If so I'm fine with it for now (though
it will become "fun" when we start to play with threads), I just want to
be certain we're not overlooking this part as well.

Also that raises another point : if the issue is related to SSL_CTX_free()
being called on static contexts, then to me it means that these contexts
were not properly refcounted when assigned to the SSL. Don't you think
that we shouldn't instead do something like the following to properly
refcount any context attached to an SSL and ensure that the SSL_CTX_free()
can always be performed regardless of parallel activities in the LRU tree
or anything else ?

/* Alloc a new SSL session ctx */
conn->xprt_ctx = SSL_new(objt_server(conn->target)->ssl_ctx.ctx);
+   SSL_set_SSL_CTX(conn->xprt_ctx, objt_server(conn->target)->ssl_ctx.ctx);

> The refcount is not incremented when a SSL_CTX object is pushed in the 
> cache. There is no way to manually increment or decrement it. So, we 
> must really know if the SSL_CTX object was cached or not when the SSL 
> connection is closed.

I'm having an issue here as well since the LRU's destroy callback is set
to SSL_CTX_free. Thus we start with a non-null refcount. I'm sorry if I am
not clear, but the problem I'm having could be described like this :

  - two sets of entities can use a shared resource at any instant : cache
and SSL sessions ;
  - each of them uses SSL_CTX_free() at release time to release the object ;
  - SSL_CTX_free() takes care of the refcount to know if it must free or not,
which means that these two entities above are each responsible for one
refcount point ;
  - the SSL_CTX_free() called by the cache is unconditional when the object
is evicted from the cache ;
  - the SSL_CTX_free() is only done if the cache is enabled ;

Due to last line I deduce that with this condition we're leaking SSL_CTX
when the cache is disabled. I'm possibly missing something, it's just that
seeing one entity monitor its adversary in a refcounted system triggers in
me the feeling that we're painting over the real problem.

I understand that what we're trying to achieve is only to avoid calling
SSL_CTX_free() on objects that were never dynamically allocated, which
is why I'm seriously thinking about refcounting these ones as well. I don't
agree with not calling it with objects that were dynamically allocated but
could never be added to the cache (ie cache disabled), for me it's a leak.
And in the end I suspect that we're still facing an imbalanced refcounting
system that instead of crashing will slowly leak some SSL_CTX.

> >>Well, I'm not an openssl guru. It is possible to store and retrieve data
> >>on a SSL_CTX object using SSL_CTX_set_ex_data/SSL_CTX_get_ex_data
> >>functions. But I don't know if this a good practice to use it. And I
> >>don't know if this is expensive or not.
> >
> >That's also what Rémi suggested. I don't know how it's used, I'm seeing
> >an index with it and that's already used for DH data, so I don't know how
> >it mixes (if at all) with this. I'm not much concerned by the access cost
> >in fact since we're supposed to access it once at session creation and once
> >during the release. It's just that I don't 

Re: Build failure of 1.6 and openssl 0.9.8

2015-10-19 Thread Willy Tarreau
Hi Christopher,

On Mon, Oct 19, 2015 at 03:05:05PM +0200, Christopher Faulet wrote:
> Damned! I generated a huge amount of disturbance with my patches! Really 
> sorry for that.

Shit happens sometimes. I had my hours of fame with option
http-send-name-header merged in 1.4-stable years ago, and that was so badly
designed that it still managed to cause a lot of trouble during 1.6-dev.

> Adding a #ifdef to check the OpenSSL version seems to be a good fix. I 
> don't know if there is a workaround to do the same as 
> EVP_PKEY_get_default_digest_nid() for old OpenSSL versions.

I was unsure how the code was supposed to work given that two blocks
were replaced by two others and I was unsure whether there was a
dependence. So as long as we can fall back to the pre-patch behaviour
I'm perfectly fine.

> This function is used to get the default signature digest associated with 
> the private key used to sign generated X509 certificates. It is called when 
> the private key differs from EVP_PKEY_RSA, EVP_PKEY_DSA and EVP_PKEY_EC. 
> It should be enough for most cases (maybe all cases?).

OK great.

> By the way, I attached a patch to fix the bug.

Thank you. Marcus, can you confirm that it's OK for you with this fix so
that I can merge it ?

Thanks!
Willy




Re: Need some help configuring backend health checks

2015-10-19 Thread Daren Sefcik
Thanks Jarno, I am still not sure how I can apply this to each server using
a different port but will poke around at it and see if I can figure it out.

On Mon, Oct 19, 2015 at 1:04 AM, Jarno Huuskonen 
wrote:

> Hi,
>
> On Sun, Oct 18, Daren Sefcik wrote:
> > I have an ICAP server backend with servers that each listen on different
> > ports, can anyone offer some advice on how to configure health checks for
> > it? I am currently using basic but that really doesn't help if the
> service
> > is not responding.
> >
> > Here is my haproxy config for the backend:
> >
> > backend HTPL_CONT_FILTER_tcp_ipvANY
> > mode tcp
> > balance roundrobin
> > timeout connect 5
> > timeout server 5
> > retries 3
> > server HTPL-WEB-01_10.1.4.153 10.1.4.153:1344 check inter 5000  weight
> 200
> > maxconn 200 fastinter 1000 fall 5
> > server HTPL-WEB-02_10.1.4.154 10.1.4.154:1344 check inter 5000  weight
> 200
> > maxconn 200 fastinter 1000 fall 5
> > server HTPL-WEB-02_10.1.4.155_01 10.1.4.155:8102 check inter 5000
> weight
> > 200 maxconn 200 fastinter 1000 fall 5
> > server HTPL-WEB-02_10.1.4.155_02 10.1.4.155:8202 check inter 5000
> weight
> > 200 maxconn 200 fastinter 1000 fall 5
>
> Do the icap servers (squid+diladele?) respond to something like this:
> https://support.symantec.com/en_US/article.TECH220980.html
> or https://exchange.icinga.org/oldmonex/1733-check_icap.pl/check_icap.pl
>
> Maybe you can use tcp-check to send icap request and look for
> "ICAP/1.0 200" response:
>
> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#tcp-check%20connect
> http://blog.haproxy.com/2014/01/02/haproxy-advanced-redis-health-check/
>
> -Jarno
>
> --
> Jarno Huuskonen
>


Re: Build failure of 1.6 and openssl 0.9.8

2015-10-19 Thread Marcus Rueckert
On 2015-10-19 16:29:45 +0200, Willy Tarreau wrote:
> On Mon, Oct 19, 2015 at 03:05:05PM +0200, Christopher Faulet wrote:
> > Damned! I generated a huge amount of disturbance with my patches! Really 
> > sorry for that.
> 
> Shit happens sometimes. I had my hours of fame with option
> http-send-name-header merged in 1.4-stable years ago, and that was so badly
> designed that it still managed to cause a lot of trouble during 1.6-dev.
> 
> > Adding a #ifdef to check the OpenSSL version seems to be a good fix. I 
> > don't know if there is a workaround to do the same as 
> > EVP_PKEY_get_default_digest_nid() for old OpenSSL versions.
> 
> I was unsure how the code was supposed to work given that two blocks
> were replaced by two others and I was unsure whether there was a
> dependence. So as long as we can fall back to the pre-patch behaviour
> I'm perfectly fine.
> 
> > This function is used to get the default signature digest associated with 
> > the private key used to sign generated X509 certificates. It is called when 
> > the private key differs from EVP_PKEY_RSA, EVP_PKEY_DSA and EVP_PKEY_EC. 
> > It should be enough for most cases (maybe all cases?).
> 
> OK great.
> 
> > By the way, I attached a patch to fix the bug.
> 
> Thank you. Marcus, can you confirm that it's OK for you with this fix so
> that I can merge it ?

confirmed: compiles now.

just for my understanding ... we do not hit the compile error we saw
before with ssl_sock_switchctx_cbk now because we jump out of the
ssl_sock_prepare_ctx function early. my question would be ... could we
jump out even earlier if we already know that we will fail? e.g. why
create the private key and set up the new x509 object if we already
know it will fail? why not go to mkcert_error at the top of the function?

darix

-- 
   openSUSE - SUSE Linux is my linux
   openSUSE is good for you
   www.opensuse.org



Re: Dynamically change server maxconn possible?

2015-10-19 Thread Daren Sefcik
Thanks, this will be helpful to find a good load balance as the systems are
running.

On Mon, Oct 19, 2015 at 1:12 PM, Willy Tarreau  wrote:

> On Mon, Oct 19, 2015 at 02:19:52PM -0500, Andrew Hayworth wrote:
> > I was just thinking about how useful this would be, and will submit a
> patch
> > for it.
>
> Thank you Andrew.
>
> Willy
>
>


Re: Dynamically change server maxconn possible?

2015-10-19 Thread Willy Tarreau
On Mon, Oct 19, 2015 at 02:19:52PM -0500, Andrew Hayworth wrote:
> I was just thinking about how useful this would be, and will submit a patch
> for it.

Thank you Andrew.

Willy




Thank You: Proxy by Sub-directory through Domain Rewriting

2015-10-19 Thread Susheel Jalali

Dear Aleks, Bryan, Igor and Willy,

Thank you Aleks and Igor for your domain rewrite insights, and Bryan and 
Willy for your basic configuration guidance in the last few weeks.  Your 
insights and a few Web articles helped us achieve the following milestones:


1)  A working reverse proxy load balancing deployment of HAProxy.

2)  Accessing a product while employing reverse proxy by sub-directory.

Your efforts culminated in an HAProxy configuration that is given below 
for your and other users' review, modification and use.


Also, if you are a blog writer, please feel free to post it on your blog 
for others' benefit and quicker adoption of HAProxy.


Your guidance has been important and relevant to us as HAProxy, Varnish, 
Keepalived / Heartbeat and Pound implementations will be the corporate 
sentinel of our private cloud infrastructure that customers access. 
This corporate infrastructure will help in our journey to IPO.  Your 
encouraging and timely help propelled us faster towards this goal. 
Thank you.



Additional Web articles that helped us:

i. Baptiste Assman: 
http://blog.haproxy.com/2014/04/28/howto-write-apache-proxypass-rules-in-haproxy/


ii. Waldner: 
http://backreference.org/2012/04/25/load-balancing-and-ha-for-multiple-applications-with-apache-haproxy-and-keepalived/




===

HAProxy configuration for reverse proxy by Sub-directory

===

global
[]
defaults
[]

frontend webapps-frontend
bind  *:80 name http
bind  *:443 name https ssl crt /path/to/server.pem

log   global
optionforwardfor
optionhttplog clf

http-request add-header X-Forwarded-Proto https if { ssl_fc }

acl host_https req.hdr(Host) :SSL_FORWARDED_Port  # Port not required if there is no port forwarding

acl path_subdomain_p2 path_beg -i /Product2
use_backend subdomain_p2-backend if host_https path_subdomain_p2

backend subdomain_p2-backend
acl hdr_location res.hdr(Location) -m found
rspirep ^(Location:)\ (http://)(.*)$   Location:\ https://\3 if hdr_location


server Product2.VM0  
cookie c check



Thank you.


Sincerely,

-- --
Susheel Jalali
Coscend Communications Solutions
susheel.jal...@coscend.com

www.Coscend.com




[PATCH] MINOR: cli: ability to set per-server maxconn

2015-10-19 Thread Andrew Hayworth
In another thread "Dynamically change server maxconn possible?",
someone raised the possibility of setting a per-server maxconn via the
stats socket. I believe the below patch implements this functionality.

I'd appreciate any feedback, since I'm not really familiar with this
part of the code. However, I've tested it by curling slow endpoints
(the nginx echo_sleep module, specifically) and can confirm that NOSRV
is returned appropriately according to whatever maxconn settings are
set via the socket.

- Andrew Hayworth
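With the patch below applied, usage would look something like this (the
backend/server names are made up, and the socket path depends on your
`stats socket` setting):

```
$ echo "set maxconn server my_backend/srv1 50" | socat stdio /var/run/haproxy.sock
```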

>From 186f4a33fea210e63ef25b023adab9abf133004d Mon Sep 17 00:00:00 2001
From: Andrew Hayworth 
Date: Mon, 19 Oct 2015 19:15:56 +
Subject: [PATCH] MINOR: cli: ability to set per-server maxconn

This commit adds support for setting a per-server maxconn from the stats
socket. The only really notable part of this commit is that we need to
check if maxconn == minconn before changing things, as this indicates
that we are NOT using dynamic maxconn. When we are not using dynamic
maxconn, we should update maxconn/minconn in lockstep.
---
 src/dumpstats.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index e80e45c..b2bd13b 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -1646,6 +1646,35 @@ static int stats_sock_parse_request(struct stream_interface *si, char *line)

  return 1;
  }
+ else if (strcmp(args[2], "server") == 0) {
+ struct server *sv;
+ int v;
+
+ sv = expect_server_admin(s, si, args[3]);
+ if (!sv)
+ return 1;
+
+ if (!*args[4]) {
+ appctx->ctx.cli.msg = "Integer value expected.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ v = atoi(args[4]);
+ if (v < 0) {
+ appctx->ctx.cli.msg = "Value out of range.\n";
+ appctx->st0 = STAT_CLI_PRINT;
+ return 1;
+ }
+
+ if (sv->maxconn == sv->minconn) { // static maxconn
+  sv->maxconn = sv->minconn = v;
+ } else { // dynamic maxconn
+  sv->maxconn = v;
+ }
+
+ return 1;
+ }
  else if (strcmp(args[2], "global") == 0) {
  int v;

@@ -1681,7 +1710,7 @@ static int stats_sock_parse_request(struct stream_interface *si, char *line)
  return 1;
  }
  else {
- appctx->ctx.cli.msg = "'set maxconn' only supports 'frontend' and 'global'.\n";
+ appctx->ctx.cli.msg = "'set maxconn' only supports 'frontend', 'server', and 'global'.\n";
  appctx->st0 = STAT_CLI_PRINT;
  return 1;
  }
--
2.1.3


0001-MINOR-cli-ability-to-set-per-server-maxconn.patch
Description: Binary data


Re: Dynamically change server maxconn possible?

2015-10-19 Thread Andrew Hayworth
I was just thinking about how useful this would be, and will submit a patch
for it.

On Fri, Oct 16, 2015 at 3:53 PM, Willy Tarreau  wrote:

> On Fri, Oct 16, 2015 at 12:07:17PM -0700, Daren Sefcik wrote:
> > I am thinking the answer is no but figured I would ask just to make
> > sure...basically can I change individual server maxconn numbers
> on-the-fly
> > while haproxy is running or do I need to do a full restart to have them
> > take effect?
>
> It's not possible right now but given that we support dynamic maxconn,
> I see no technical problem to implement it and I actually think it would
> be a good idea to support this on the CLI, as "set maxconn server XXX"
> just like we have "set maxconn frontend YYY".
>
> If you (or anyone else) is interested in trying to implement it, I'm
> willing to review the patch and help if any difficulty is faced.
>
> Regards,
> Willy
>
>
>


-- 
- Andrew Hayworth


1.6.0 Error: Cannot Create Listening Socket for Frontend and Stats Proxies

2015-10-19 Thread Susheel Jalali

Dear HAProxy Developers:

The following error message appears with HAProxy 1.6.0 after start, and
then the load balancer stops.  No haproxy.pid is getting created.  The
same configuration works seamlessly with HAProxy 1.5.14 on the same
server.  We are seeking insights into what we could be missing in our
configuration.

The port numbers below are dedicated to this HAProxy instance and only
one HAProxy instance is running.

/var/log/messages

Frontend:  Cannot create listening socket (0.0.0.0:)
Frontend:  Cannot create listening socket (0.0.0.0:)
Proxy for stats:  Cannot create listening socket ()

Server environment:  Centos 7.1, and dynamic loading of (Lua 5.3.1, PCRE 
8.32, OpenSSL 1.0.1e, zlib 1.2.7)


Thank you.

Sincerely,

-- --
Susheel Jalali
Coscend Communications Solutions
susheel.jal...@coscend.com

www.Coscend.com







Re: [PATCH] MINOR: cli: ability to set per-server maxconn

2015-10-19 Thread Andrew Hayworth
Apologies for two posts in a row: this version of the patch includes a
blurb for doc/management.txt as well.

- Andrew Hayworth

From 6c54812a06706460dd2944ce7d51ea29636ed989 Mon Sep 17 00:00:00 2001
From: Andrew Hayworth 
Date: Mon, 19 Oct 2015 19:15:56 +
Subject: [PATCH] MINOR: cli: ability to set per-server maxconn

This commit adds support for setting a per-server maxconn from the stats
socket. The only really notable part of this commit is that we need to
check if maxconn == minconn before changing things, as this indicates
that we are NOT using dynamic maxconn. When we are not using dynamic
maxconn, we should update maxconn/minconn in lockstep.
---
 doc/management.txt |  5 +++++
 src/dumpstats.c    | 31 ++++++++++++++++++++++++++++++-
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/doc/management.txt b/doc/management.txt
index d67988b..a53a953 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -1356,6 +1356,11 @@ set maxconn frontend <frontend> <value>
   delayed until the threshold is reached. The frontend might be specified by
   either its name or its numeric ID prefixed with a sharp ('#').
 
+set maxconn server <server> <value>
+  Dynamically change the specified server's maxconn setting. Any positive
+  value is allowed including zero, but setting values larger than the global
+  maxconn does not make much sense.
+
 set maxconn global <maxconn>
   Dynamically change the global maxconn setting within the range defined by the
   initial global maxconn setting. If it is increased and connections were
diff --git a/src/dumpstats.c b/src/dumpstats.c
index e80e45c..b2bd13b 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -1646,6 +1646,35 @@ static int stats_sock_parse_request(struct stream_interface *si, char *line)
 
 		return 1;
 	}
+	else if (strcmp(args[2], "server") == 0) {
+		struct server *sv;
+		int v;
+
+		sv = expect_server_admin(s, si, args[3]);
+		if (!sv)
+			return 1;
+
+		if (!*args[4]) {
+			appctx->ctx.cli.msg = "Integer value expected.\n";
+			appctx->st0 = STAT_CLI_PRINT;
+			return 1;
+		}
+
+		v = atoi(args[4]);
+		if (v < 0) {
+			appctx->ctx.cli.msg = "Value out of range.\n";
+			appctx->st0 = STAT_CLI_PRINT;
+			return 1;
+		}
+
+		if (sv->maxconn == sv->minconn) { // static maxconn
+			sv->maxconn = sv->minconn = v;
+		} else { // dynamic maxconn
+			sv->maxconn = v;
+		}
+
+		return 1;
+	}
 	else if (strcmp(args[2], "global") == 0) {
 		int v;

@@ -1681,7 +1710,7 @@ static int stats_sock_parse_request(struct stream_interface *si, char *line)
 		return 1;
 	}
 	else {
-		appctx->ctx.cli.msg = "'set maxconn' only supports 'frontend' and 'global'.\n";
+		appctx->ctx.cli.msg = "'set maxconn' only supports 'frontend', 'server', and 'global'.\n";
 		appctx->st0 = STAT_CLI_PRINT;
 		return 1;
 	}
--
2.1.3


0001-MINOR-cli-ability-to-set-per-server-maxconn.patch
Description: Binary data


[PATCH] MEDIUM: dns: Don't use the ANY query type

2015-10-19 Thread Andrew Hayworth
The ANY query type is weird, and some resolvers don't 'do the legwork'
of resolving useful things like CNAMEs. Given that upstream resolver
behavior is not always under the control of the HAProxy administrator,
we should not use the ANY query type. Rather, we should use A or AAAA
according to either the explicit preferences of the operator, or the
implicit default (AAAA/IPv6).

- Andrew Hayworth

From 8ed172424cbd79197aacacd1fd89ddcfa46e213d Mon Sep 17 00:00:00 2001
From: Andrew Hayworth 
Date: Mon, 19 Oct 2015 22:29:51 +
Subject: [PATCH] MEDIUM: dns: Don't use the ANY query type

Basically, it's ill-defined and shouldn't really be used going forward.
We can't guarantee that resolvers will do the 'legwork' for us and
actually resolve CNAMES when we request the ANY query-type. Case in point
(obfuscated, clearly):

  PRODUCTION! ahaywo...@secret-hostname.com:~$
  dig @10.11.12.53 ANY api.somestartup.io

  ; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> @10.11.12.53 ANY api.somestartup.io
  ; (1 server found)
  ;; global options: +cmd
  ;; Got answer:
  ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62454
  ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 4, ADDITIONAL: 0

  ;; QUESTION SECTION:
  ;api.somestartup.io.IN  ANY

  ;; ANSWER SECTION:
  api.somestartup.io. 20  IN  CNAME
api-somestartup-production.ap-southeast-2.elb.amazonaws.com.

  ;; AUTHORITY SECTION:
  somestartup.io.   166687  IN  NS  ns-1254.awsdns-28.org.
  somestartup.io.   166687  IN  NS  ns-1884.awsdns-43.co.uk.
  somestartup.io.   166687  IN  NS  ns-440.awsdns-55.com.
  somestartup.io.   166687  IN  NS  ns-577.awsdns-08.net.

  ;; Query time: 1 msec
  ;; SERVER: 10.11.12.53#53(10.11.12.53)
  ;; WHEN: Mon Oct 19 22:02:29 2015
  ;; MSG SIZE  rcvd: 242

HAProxy can't handle that response correctly.

Rather than try to build in support for resolving CNAMEs presented
without an A record in an answer section (which may be a valid
improvement further on), this change just skips the ANY query type
altogether. A and AAAA are much more well-defined and predictable.

Notably, this commit preserves the implicit "Prefer IPV6 behavior."
---
 include/types/dns.h |  3 ++-
 src/checks.c        |  6 +++++-
 src/dns.c           |  6 +++++-
 src/server.c        | 18 +++++++-----------
 4 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/include/types/dns.h b/include/types/dns.h
index f8edb73..ea1a9f9 100644
--- a/include/types/dns.h
+++ b/include/types/dns.h
@@ -161,7 +161,8 @@ struct dns_resolution {
 	unsigned int last_status_change;	/* time of the latest DNS resolution status change */
 	int query_id;		/* DNS query ID dedicated for this resolution */
 	struct eb32_node qid;	/* ebtree query id */
-	int query_type;		/* query type to send. By default DNS_RTYPE_ANY */
+	int query_type;
+	/* query type to send. By default DNS_RTYPE_A or DNS_RTYPE_AAAA
+	   depending on resolver_family_priority */
 	int status;		/* status of the resolution being processed RSLV_STATUS_* */
 	int step;		/* */
 	int try;		/* current resolution try */
diff --git a/src/checks.c b/src/checks.c
index ade2428..d3cd567 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -2214,7 +2214,11 @@ int trigger_resolution(struct server *s)
 	resolution->query_id = query_id;
 	resolution->qid.key = query_id;
 	resolution->step = RSLV_STEP_RUNNING;
-	resolution->query_type = DNS_RTYPE_ANY;
+	if (resolution->resolver_family_priority == AF_INET) {
+		resolution->query_type = DNS_RTYPE_A;
+	} else {
+		resolution->query_type = DNS_RTYPE_AAAA;
+	}
 	resolution->try = resolvers->resolve_retries;
 	resolution->try_cname = 0;
 	resolution->nb_responses = 0;
diff --git a/src/dns.c b/src/dns.c
index 7f71ac7..53b65ab 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -102,7 +102,11 @@ void dns_reset_resolution(struct dns_resolution *resolution)
 	resolution->qid.key = 0;
 
 	/* default values */
-	resolution->query_type = DNS_RTYPE_ANY;
+	if (resolution->resolver_family_priority == AF_INET) {
+		resolution->query_type = DNS_RTYPE_A;
+	} else {
+		resolution->query_type = DNS_RTYPE_AAAA;
+	}
 
 	/* the second resolution in the queue becomes the first one */
 	LIST_DEL(&resolution->list);
diff --git a/src/server.c b/src/server.c
index 8ddff00..33d6922 100644
--- a/src/server.c
+++ b/src/server.c
@@ -2692,21 +2692,13 @@ int snr_resolution_error_cb(struct dns_resolution *resolution, int error_code)
 	case DNS_RESP_TRUNCATED:
 	case DNS_RESP_ERROR:
 	case DNS_RESP_NO_EXPECTED_RECORD:
-		qtype_any = resolution->query_type == DNS_RTYPE_ANY;
 		res_preferred_afinet = resolution->resolver_family_priority == AF_INET && resolution->query_type == DNS_RTYPE_A;
 		res_preferred_afinet6 = resolution->resolver_family_priority == AF_INET6 && resolution->query_type == DNS_RTYPE_AAAA;
 
-		if ((qtype_any || res_preferred_afinet || res_preferred_afinet6)
+		if ((res_preferred_afinet || 
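The query-type selection the patch introduces in `trigger_resolution()` and `dns_reset_resolution()` can be sketched in Python. This is a hedged illustration, not HAProxy's code: `resolver_family_priority` is assumed to hold an address family constant, mirroring the C field of the same name.

```python
import socket

def initial_query_type(resolver_family_priority):
    """Pick the first DNS query type: A or AAAA instead of ANY."""
    if resolver_family_priority == socket.AF_INET:
        return "A"
    # Anything else falls through to AAAA, preserving the implicit
    # "prefer IPv6" default the commit message mentions.
    return "AAAA"
```

On failure for the preferred family, the resolver can then retry with the other type, which is what the `res_preferred_afinet*` checks in `snr_resolution_error_cb()` arrange.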

Re: [call to comment] HAProxy's DNS resolution default query type

2015-10-19 Thread Andrew Hayworth
Hi all -

Just to chime in, we just got bit by this in production. Our dns
resolver (unbound) does not follow CNAMES -> A records when you send
an ANY query type. This is by design, so I can't just configure it
differently (and ripping out our DNS resolver is not immediately
feasible).

I therefore vote to stop sending the ANY query type, and instead rely
on A and AAAA queries. I don't have any comments regarding NXDOMAIN
behavior.

NB: There is also support amongst some bigger internet companies to
fully deprecate this query type:
https://blog.cloudflare.com/deprecating-dns-any-meta-query-type/

On Thu, Oct 15, 2015 at 12:49 PM, Lukas Tribus  wrote:
>> I second this opinion. Removing ANY altogether would be the best case.
>>
>> In reality, I think it should use the OS's resolver libraries which
>> in turn will honor whatever the admin has configured for preference
>> order at the base OS level.
>>
>>
>> As a sysadmin, one should reasonably expect that tweaking the
>> preference knob at the OS level should affect most (and ideally, all)
>> applications they are running rather than having to manually fiddle
>> knobs at the OS and various application levels.
>> If there is some discussion and *good* reasons to ignore the OS
>> defaults, I feel this should likely be an *optional* config option
>> in haproxy.cfg ie "use OS resolver, unless specifically told not to
>> for $reason)
>
> It's exactly like you are saying.
>
> I don't think there is any doubt that HAproxy will bypass OS level
> resolvers, since you are statically configuring DNS server IPs in the
> haproxy configuration file.
>
> When you don't configure any resolvers, HAproxy does use libc's
> gethostbyname() or getaddrinfo(), but both are fundamentally broken.
>
> That's why some applications have to implement their own resolvers
> (including nginx).
>
> First of all the OS resolver doesn't provide the TTL value. So you would
> have to guess or use fixed TTL values. Second, both calls are blocking,
> which is a big no-go for any event-loop based application (for this
> reason, it can only be queried at startup, not while the application
> is running).
>
> Just configure a hostname without resolver parameters, and haproxy
> will resolve your hostnames at startup via OS (and then maintain those
> IP's).
>
>
> Applications either have to implement a resolver on their own (haproxy,
> nginx), or use yet another external library, like getdnsapi [1].
>
>
> The point is: there is a reason for this implementation, and you can
> fallback to OS resolvers without any problems (just with their drawbacks).
>
>
>
>
> Regards,
>
> Lukas
>
>
> [1] https://getdnsapi.net/
>
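Lukas's first point, that the OS resolver API gives you no TTL, is easy to see from Python's wrapper around getaddrinfo(). Each result tuple carries family, type, proto, canonname, and sockaddr, and nothing else, so an application has no way to know when a cached answer expires:

```python
import socket

# Resolve via the libc resolver path; "localhost" keeps this
# self-contained (answered from /etc/hosts, no network needed).
infos = socket.getaddrinfo("localhost", 80, type=socket.SOCK_STREAM)

# Every entry is a 5-tuple: (family, type, proto, canonname, sockaddr).
# There is no TTL field anywhere in the result.
addrs = sorted({ai[4][0] for ai in infos})
```

This is why haproxy (and nginx) ship their own non-blocking resolvers: the libc calls block the event loop and discard the TTL that drives re-resolution.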



-- 
- Andrew Hayworth


