Re: [PATCH] BUG/MAJOR: fix a segfault on option http_proxy and url_ip acl

2012-10-24 Thread Willy Tarreau
On Wed, Oct 24, 2012 at 11:47:47PM +0200, Cyril Bonté wrote:
> url2sa() mistakenly takes the address of "addr", which is already a pointer.
> This causes a segfault when option http_proxy or url_ip is used.

Wow, good catch Cyril, thanks a lot!

This is typically why I hate type casts in general and prefer unions
whenever possible.  Casts tell the compiler "I know I'm right", while
unions let it complain about mistakes. Of course here it was not really
possible.
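
To illustrate with a minimal standalone sketch (not the actual haproxy
code): the cast lets the wrong pointer through silently, while a union
makes the same mistake impossible to express.

#include <netinet/in.h>
#include <sys/socket.h>

/* "addr" is already a pointer, so "&addr" is a pointer to a pointer --
 * but the cast silences the compiler, exactly as in the url2sa() bug. */
void with_cast(struct sockaddr_storage *addr)
{
	((struct sockaddr_in *)&addr)->sin_port = 0; /* compiles, crashes */
}

/* With a union there is nothing to cast, so the compiler rejects
 * anything that is not actually the union type. */
union sa {
	struct sockaddr_storage ss;
	struct sockaddr_in in;
};

void with_union(union sa *addr)
{
	addr->in.sin_port = 0; /* no cast needed, no way to get it wrong */
}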

> This bug was introduced in haproxy 1.5 and doesn't need to be backported.

Indeed this is an old one: it was introduced in 1.5-dev5, a year ago, so even
people using the good old dev7 are affected.

Patch applied, of course!

cheers,
Willy




[PATCH] MEDIUM: http: accept IPv6 values with (s)hdr_ip acl

2012-10-24 Thread Cyril Bonté
Commit ceb4ac9c states that IPv6 values are accepted by the "hdr_ip" acl,
but the code didn't allow it. This patch provides the ability to accept IPv6
values.
---
 src/proto_http.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/src/proto_http.c b/src/proto_http.c
index 9cdf3be..bbec4f2 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -8077,9 +8077,9 @@ smp_fetch_hdr_val(struct proxy *px, struct session *l4, void *l7, unsigned int o
 	return ret;
 }
 
-/* Fetch an HTTP header's integer value. The integer value is returned. It
- * takes a mandatory argument of type string and an optional one of type int
- * to designate a specific occurrence. It returns an IPv4 address.
+/* Fetch an HTTP header's IP value. It takes a mandatory argument of type string
+ * and an optional one of type int to designate a specific occurrence.
+ * It returns an IPv4 or IPv6 address.
  */
 static int
 smp_fetch_hdr_ip(struct proxy *px, struct session *l4, void *l7, unsigned int opt,
@@ -8088,9 +8088,22 @@ smp_fetch_hdr_ip(struct proxy *px, struct session *l4, void *l7, unsigned int op
 	int ret;
 
 	while ((ret = smp_fetch_hdr(px, l4, l7, opt, args, smp)) > 0) {
-		smp->type = SMP_T_IPV4;
-		if (url2ipv4((char *)smp->data.str.str, &smp->data.ipv4))
+		if (url2ipv4((char *)smp->data.str.str, &smp->data.ipv4)) {
+			smp->type = SMP_T_IPV4;
 			break;
+		} else {
+			struct chunk *trash = sample_get_trash_chunk();
+			if (smp->data.str.len < trash->size - 1) {
+				memcpy(trash->str, smp->data.str.str, smp->data.str.len);
+				trash->str[smp->data.str.len] = '\0';
+				smp->data.str = *trash;
+				if (inet_pton(AF_INET6, smp->data.str.str, &smp->data.ipv6)) {
+					smp->type = SMP_T_IPV6;
+					break;
+				}
+			}
+		}
+
 		/* if the header doesn't match an IP address, fetch next one */
 		if (!(smp->flags & SMP_F_NOT_LAST))
 			return 0;
-- 
1.7.10.4




unsubscribe

2012-10-24 Thread Peter Miller




[PATCH] BUG/MAJOR: fix a segfault on option http_proxy and url_ip acl

2012-10-24 Thread Cyril Bonté
url2sa() mistakenly takes the address of "addr", which is already a pointer.
This causes a segfault when option http_proxy or url_ip is used.

This bug was introduced in haproxy 1.5 and doesn't need to be backported.
---
 src/standard.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/standard.c b/src/standard.c
index 287931a..76031e9 100644
--- a/src/standard.c
+++ b/src/standard.c
@@ -906,12 +906,12 @@ int url2sa(const char *url, int ulen, struct sockaddr_storage *addr)
 			 * be warned this can slow down global daemon performances
 			 * while handling lagging dns responses.
 			 */
-			ret = url2ipv4(curr, &((struct sockaddr_in *)&addr)->sin_addr);
+			ret = url2ipv4(curr, &((struct sockaddr_in *)addr)->sin_addr);
 			if (!ret)
 				return -1;
 			curr += ret;
 			((struct sockaddr_in *)addr)->sin_port = (*curr == ':') ? str2uic(++curr) : 80;
-			((struct sockaddr_in *)addr)->sin_port = htons(((struct sockaddr_in *)&addr)->sin_port);
+			((struct sockaddr_in *)addr)->sin_port = htons(((struct sockaddr_in *)addr)->sin_port);
 		}
 		return 0;
 }
-- 
1.7.10.4




Re: hdr_ip/url_ip/urlp_ip don't support IPv6 values

2012-10-24 Thread Cyril Bonté

Hi again,

On 24/10/2012 23:41, Willy Tarreau wrote:

No problem. The principle is quite simple: a fetch function is called with
a pointer to a sample in which to store the type and contents. If contents
need storage, you put them in a buffer-sized chunk returned by
sample_get_trash_chunk(). If the fetch callers need to convert the contents,
they call sample_get_trash_chunk() again and get the alternate buffer to
store the conversion.

Don't worry, we'll help you :-)


OK, thanks. The patch is ready but I had to spend some time on a bug in 
url2sa().
I'll send you a patch in a few minutes: this bug causes a segfault as 
soon as url2sa() is called. It concerns both "option http_proxy" and 
"url_ip".


--
Cyril Bonté



Re: hdr_ip/url_ip/urlp_ip don't support IPv6 values

2012-10-24 Thread Willy Tarreau
Hi Cyril,

On Wed, Oct 24, 2012 at 08:33:57PM +0200, Cyril Bonté wrote:
> Hi Willy,
> 
> On 24/10/2012 01:48, Willy Tarreau wrote:
> >Either way, now the doc does not reflect reality, so we must address the
> >issue. I don't think that anything is missing anymore to have the IPv6
> >address parser to fill the gap. The smp_fetch_hdr() function was rewritten
> >precisely to address this need, inet_pton and a few skips of square
> >brackets should be all that's needed. Hmmm I think I get it now. inet_pton
> >needs a zero-terminated string and for this we needed to copy the address
> >to the sample trash chunk (using sample_get_trash_chunk()), trim it, and
> >parse it from there using inet_pton(). We can do that when url2ipv4
> >returns zero it seems, as it stops on anything not a digit nor a dot, and
> >wants 3 dots.
> >
> >I probably won't have time to look into this tomorrow, so if you can, once
> >again your fix will be welcome !
> 
> Yes of course, I can work on it, but probably not before some days for 
> me too. Thanks for pointing me to sample_get_trash_chunk(), this will help.
> I had a quick look on it today, I'm not sure I'll use samples correctly, 
> so maybe the first patch will require some re-reads and comments ;-)

No problem. The principle is quite simple: a fetch function is called with
a pointer to a sample in which to store the type and contents. If contents
need storage, you put them in a buffer-sized chunk returned by
sample_get_trash_chunk(). If the fetch callers need to convert the contents,
they call sample_get_trash_chunk() again and get the alternate buffer to
store the conversion.
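
As a distilled sketch of that contract (a hypothetical fetch function with
a simplified signature, relying on haproxy's internal types; see Cyril's
hdr_ip patch elsewhere in this digest for the genuine version):

/* Sketch: copy the sample's string into the shared trash chunk so it
 * can be zero-terminated, then parse it in place with inet_pton(). */
static int smp_fetch_example_ip6(struct sample *smp)
{
	struct chunk *trash = sample_get_trash_chunk();

	if (smp->data.str.len >= trash->size)
		return 0;                            /* would not fit in the chunk */

	memcpy(trash->str, smp->data.str.str, smp->data.str.len);
	trash->str[smp->data.str.len] = '\0';        /* inet_pton needs this */
	smp->data.str = *trash;                      /* contents now have storage */

	if (inet_pton(AF_INET6, trash->str, &smp->data.ipv6)) {
		smp->type = SMP_T_IPV6;
		return 1;
	}
	return 0;
}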

Don't worry, we'll help you :-)

Thanks,
Willy




Re: gracefully stop accepting requests, remove server until update in place, bring back online

2012-10-24 Thread Baptiste
Hi,

You can issue a "disable server" command on the HAProxy stats socket.
It should stop receiving traffic very quickly and does not require to
restart HAProxy, it'd be taken into account on the fly.
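
For example (assuming a stats socket bound at /var/run/haproxy.sock with
"level admin", a backend named "app" and a server named "srv1"):

echo "disable server app/srv1" | socat stdio /var/run/haproxy.sock
# ... update and restart the application on srv1 ...
echo "enable server app/srv1" | socat stdio /var/run/haproxy.sock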

cheers




On Wed, Oct 24, 2012 at 8:38 PM, S Ahmed  wrote:
> Say I want to update code on my cluster, and do it in a way where no users
> get bad requests if the server is down.
>
> Say I am updating server#1, is it possible for me to tell haproxy to stop
> sending *new* requests to the server.  So after 1 minute I know that the
> server isn't receiving any new requests and it is free now, so I take it
> offline and update the software, then tell haproxy to bring it back online.
>
> How can I achieve this?
>
> Now I would have to update haproxy's config I am guessing, will this bring
> haproxy down also for a second?



RE: option accept-invalid-http-request

2012-10-24 Thread Lukas Tribus

> Because the percent ("%") character serves as the indicator for
> percent-encoded octets, it must be percent-encoded as "%25" for that
> octet to be used as data within a URI.

I don't believe HAproxy understands the difference between a non-encoded
"%" or "%%" and a correctly encoded "%25".

HAproxy does check every single character in the request, but not in
context with the characters before and after, I believe.

  


Re: hdr_ip/url_ip/urlp_ip don't support IPv6 values

2012-10-24 Thread Cyril Bonté

Hi Willy,

On 24/10/2012 01:48, Willy Tarreau wrote:

Either way, now the doc does not reflect reality, so we must address the
issue. I don't think that anything is missing anymore to have the IPv6
address parser to fill the gap. The smp_fetch_hdr() function was rewritten
precisely to address this need, inet_pton and a few skips of square
brackets should be all that's needed. Hmmm I think I get it now. inet_pton
needs a zero-terminated string and for this we needed to copy the address
to the sample trash chunk (using sample_get_trash_chunk()), trim it, and
parse it from there using inet_pton(). We can do that when url2ipv4
returns zero it seems, as it stops on anything not a digit nor a dot, and
wants 3 dots.

I probably won't have time to look into this tomorrow, so if you can, once
again your fix will be welcome !


Yes of course, I can work on it, but probably not before some days for 
me too. Thanks for pointing me to sample_get_trash_chunk(), this will help.
I had a quick look on it today, I'm not sure I'll use samples correctly, 
so maybe the first patch will require some re-reads and comments ;-)


--
Cyril Bonté



Re: Backends Referencing other Backends?

2012-10-24 Thread Joel Krauska
The reasons I would want them are in the original email, but here are
some more details.

1.  -- Gathering of unique stats without having to define a pool twice.
This would be stellar for ad-hoc debugging (how much traffic matches this
ACL?), but also useful for general traffic classification.


2.  -- Allowing a pool to be served by a backup pool if all of the
original pool's servers are down.

Our app tier machines are technically capable of serving /ANY/
content. (we have a common code deploy)
However, we use HAproxy and header or URL matching to group certain
sub-groups of traffic to certain pools.
eg. Send image renders (which take longer and can block other traffic)
to a dedicated pool of image render boxes.

If somehow the entire image render pool died, it would be /ok/ to
briefly allow that traffic to hit other servers in our general pool.


I find that the longer a config file gets, the more error prone it becomes.

Having some 'macros' or 'includes', and some of the techniques I was
asking for, would avoid repeating configuration and reduce errors.



On Wed, Oct 24, 2012 at 12:41 AM, Baptiste  wrote:
> Hi Joel,
>
> Unfortunately, this kind of configuration is not doable.
> Could you tell us why you want to do such a thing, what is the real
> need for this (even if I have some ideas about it ;) )
>
> cheers



HAProxy + tproxy + mysql, starts to never ack

2012-10-24 Thread Gerardo Malazdrewicz
Hello!

I have two read-only mysql servers behind a haproxy. 

client: public IP
haproxy: public IP/private for mysql
mysql: private IP

All involved machines run Linux 3.2

A small number of servers use them.

The whole setup is not yet in production.

When in production (with just one server of the small set using them), after a
while (hours?), the mysql servers stop ACKing.

Other servers get their ACK, and work as expected.

haproxy and mysql have iptables/firewall rules.

haproxy uses the standard tproxy rules in this situation:
/sbin/iptables -t mangle -N DIVERT
/sbin/iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
/sbin/iptables -t mangle -A DIVERT -j MARK --set-mark 1
/sbin/iptables -t mangle -A DIVERT -j ACCEPT
/sbin/ip rule add fwmark 1 lookup 100
/sbin/ip route add local 0.0.0.0/0 dev lo table 100

mysql marks all packets with a private source address and sends them via eth1 to
haproxy.
/sbin/iptables -A OUTPUT -o eth0 -t mangle -p tcp -s $(/sbin/ip route | awk '/eth1/ {print $9 }') -j MARK --set-mark 1
/sbin/ip rule add fwmark 1 table 100
/sbin/ip route add default via 172.16.137.142 dev eth1 table 100
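
For reference, the haproxy side of a setup like this normally pairs those
rules with a transparent source bind, roughly (a sketch with made-up
listener/server names; requires haproxy built with TPROXY support):

listen mysql-ro
    bind :3306
    mode tcp
    source 0.0.0.0 usesrc clientip
    server mysql1 172.16.137.10:3306 check
    server mysql2 172.16.137.11:3306 check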

No conntrack of any kind (needed?).

Does not seem to be a haproxy issue, though perhaps some of you have seen it
before, and know what I am overlooking here.

Thanks in advance,
 Gerardo Malazdrewicz






Re: Graceful handling of garbage collecting servers?

2012-10-24 Thread Baptiste
Or better, use the disable-on-404 check feature a few seconds before the
garbage collection occurs...
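
A rough sketch of what that could look like (assuming an HTTP health-check
URL that the application switches to returning 404 shortly before GC):

backend app
    option httpchk GET /health
    http-check disable-on-404
    server srv1 10.0.0.1:8080 check inter 2s
    server srv2 10.0.0.2:8080 check inter 2s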

Baptiste

On Wed, Oct 24, 2012 at 5:19 PM, Ben Timby  wrote:
> I am not familiar with Java application servers, so please excuse my 
> ignorance.
>
> Is it possible to schedule the garbage collection? If so, you could
> temporarily disable the server, kick off GC, then re-enable the
> server. HAProxy has a stats socket that would allow you to adjust the
> server's weight to 0 temporarily. If you could make a JSP to kick off
> GC, then you could have a simple cron job that uses socat to disable
> the server, curl to hit that page, then socat to re-enable the server.
> Do each server in turn (or on separate intervals). If you can do this
> more often than it would happen "naturally" then you can control the
> process and lose 0 requests.
>



Re: option accept-invalid-http-request

2012-10-24 Thread Dmitry Sivachenko

On 24.10.2012 19:13, Jonathan Matthews wrote:

On 24 October 2012 16:03, Dmitry Sivachenko  wrote:

Hello!

I am running haproxy-1.4.22 with option accept-invalid-http-request turned
on (the default).


Do you actually mean "off" here?



Yes, sorry.





It seems that haproxy successfully validates requests with an unencoded '%'
character in them:

http://some.host.net/api/v1/do_smth?lang=en-ru&text=100%%20Pure%20Mulberry%20Queen

(note unencoded % after 100).

I see such requests in my backend's log.  I expect haproxy to return HTTP 400
(Bad Request) in such cases.

Is it a bug or am I missing something?


Percentage signs are valid in URIs. Your application could be doing
/anything/ with them; HAProxy doesn't know what.
I don't /believe/ it's a validating parser's job to disallow these -
it sounds like you want more of a WAF.



Well, at least from Wikipedia:
http://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_the_percent_character

Because the percent ("%") character serves as the indicator for percent-encoded 
octets, it must be percent-encoded as "%25" for that octet to be used as data 
within a URI.
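
(In the request logged above, that means the literal percent in "100%" would
have to arrive as "100%25", i.e. text=100%25%20Pure%20Mulberry%20Queen, for
the URI to be well-formed.)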


When haproxy encounters, say, an unencoded whitespace character, it returns
HTTP 400.  Why should '%' be an exception?






Re: Graceful handling of garbage collecting servers?

2012-10-24 Thread Ben Timby
I am not familiar with Java application servers, so please excuse my ignorance.

Is it possible to schedule the garbage collection? If so, you could
temporarily disable the server, kick off GC, then re-enable the
server. HAProxy has a stats socket that would allow you to adjust the
server's weight to 0 temporarily. If you could make a JSP to kick off
GC, then you could have a simple cron job that uses socat to disable
the server, curl to hit that page, then socat to re-enable the server.
Do each server in turn (or on separate intervals). If you can do this
more often than it would happen "naturally" then you can control the
process and lose 0 requests.
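
A rough sketch of such a cron job (all names hypothetical: a stats socket
at /var/run/haproxy.sock, backend "app", and a force-gc JSP on each server;
the original weight is assumed to be 100):

#!/bin/sh
# Drain srv1, trigger a GC, then bring it back into rotation.
echo "set weight app/srv1 0" | socat stdio /var/run/haproxy.sock
sleep 30                                   # let in-flight requests finish
curl -s http://server1:8080/force-gc.jsp   # hypothetical GC trigger page
echo "set weight app/srv1 100" | socat stdio /var/run/haproxy.sock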



Re: option accept-invalid-http-request

2012-10-24 Thread Jonathan Matthews
On 24 October 2012 16:03, Dmitry Sivachenko  wrote:
> Hello!
>
> I am running haproxy-1.4.22 with option accept-invalid-http-request turned
> on (the default).

Do you actually mean "off" here?

> It seems that haproxy successfully validates requests with an unencoded '%'
> character in them:
>
> http://some.host.net/api/v1/do_smth?lang=en-ru&text=100%%20Pure%20Mulberry%20Queen
>
> (note unencoded % after 100).
>
> I see such requests in my backend's log.  I expect haproxy to return HTTP 400
> (Bad Request) in such cases.
>
> Is it a bug or am I missing something?

Percentage signs are valid in URIs. Your application could be doing
/anything/ with them; HAProxy doesn't know what.
I don't /believe/ it's a validating parser's job to disallow these -
it sounds like you want more of a WAF.

All IMHO, of course :-)

Jonathan
-- 
Jonathan Matthews // Oxford, London, UK
http://www.jpluscplusm.com/contact.html



option accept-invalid-http-request

2012-10-24 Thread Dmitry Sivachenko

Hello!

I am running haproxy-1.4.22 with option accept-invalid-http-request turned on 
(the default).


It seems that haproxy successfully validates requests with an unencoded '%' 
character in them:


http://some.host.net/api/v1/do_smth?lang=en-ru&text=100%%20Pure%20Mulberry%20Queen

(note unencoded % after 100).

I see such requests in my backend's log.  I expect haproxy to return HTTP 400 (Bad 
Request) in such cases.


Is it a bug or am I missing something?

Thanks!



Re: Graceful handling of garbage collecting servers?

2012-10-24 Thread Finn Arne Gangstad
On Tue, Oct 23, 2012 at 4:02 PM, Mariusz Gronczewski  wrote:
> 2012/10/23 Thomas Heil :
>> Hi,
>>
>> On 23.10.2012 13:55, Finn Arne Gangstad wrote:
>>>
>>> Each request is a reasonably simple GET request that typically takes
>>> 10-20ms to process. This works great until a server needs to GC, then
>>> the query will hang for a few seconds.
>> I am not quite sure, but I think you can play with timeout server and
>> option redispatch and retries, so that when GC occurs the request would be
>> redispatched to the next server in the backend.
>>
> Try using "balance leastconn", if server will slow down/halt because
> of GC his queue will quickly be higher than rest of servers and new
> request will hit non-GCing ones, only disadvantage is that servers
> which respond faster will on average get more requests but that can be
> a good thing, if for any reason (backup, system update etc.) one of
> servers will start answering slower it will automatically get less
> requests.

leastconn helps slightly in this particular situation, we'd lose maybe
6-7 queries
instead of 10 (depending a bit on the load), but we still lose queries and we
don't want to lose any queries at all. Any query that takes more than a second
or two is effectively lost.

haproxy doesn't currently support resubmitting a query, but it would be very
nice if it could do something along the lines of nginx's "proxy_next_upstream".
nginx lets you resubmit a query until you have started sending data back
to the client, haproxy only lets you resubmit until a connection to the
backend server has been established.
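
For completeness, the redispatch approach mentioned earlier in the thread
looks roughly like this (a sketch; as noted above, it only helps before the
connection to the server is established, so it cannot rescue a request that
a GC pause catches mid-flight):

backend app
    balance leastconn
    option redispatch
    retries 3
    timeout connect 2s
    server srv1 10.0.0.1:8080 check
    server srv2 10.0.0.2:8080 check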

- Finn Arne



Inaccurate message for errors on bind parsing

2012-10-24 Thread Holger Just

Hi there,

after half a day of debugging (and subsequently kicking myself), I 
finally noticed that whenever HAProxy (1.5-dev12 in this case) 
encounters an unknown option on a bind line, it will error out with this 
message regardless of whether OpenSSL is enabled or not:


[ALERT] 296/194609 (6625) : parsing [/etc/haproxy/haproxy.cfg:40] : 
'bind' only supports the 'transparent', 'accept-proxy', 'defer-accept', 
'name', 'id', 'mss', 'mode', 'uid', 'gid', 'user', 'group' and 
'interface' options.


I thought I went crazy, thinking OpenSSL support had somehow not compiled 
properly on a certain system, when I had only misconfigured it. It 
would be awesome if you could fix that message in cfgparse.c to reflect 
the actually available options. Unfortunately, I'm not versed enough in 
writing C to fix it myself :(


--Holger



Re: Path_reg to multiple servers

2012-10-24 Thread Baptiste
Hi,

For this type of matching, I would rather use path_beg, which will be
much more efficient:
acl use_server_1 path_beg /q/a /q/b /q/c
use backend server1 if use_server_1

acl use_server_2 path_beg /q/x /q/y
use backend server2 if use_server_2

Or just update your regex to the example below:
acl use_server_1 path_reg ^/q/(a|b|c)
use backend server1 if use_server_1

acl use_server_2 path_reg ^/q/(x|y)
use backend server2 if use_server_2

Note: I did not try these ACLs, but they should work.

cheers



On Wed, Oct 24, 2012 at 1:12 AM, Rahul  wrote:
> Hi,
>   If I have URLs of the form /q/a/1234 or /q/b/456 or /q/c/987
> and /q/x/abc or /q/y/3455
>
> I want to route any URLs of the form /q/[a|b|c]/.*
>
> i.e. anything which is meant for the queues a or b or c to
> one backend,
> and anything which is meant for /q/[x|y]/.*  to a different
> backend, how would I achieve this?
>
> I attempted:
> acl use_server_1 path_reg /a|b|c/
> use backend server1 if use_server_1
>
> acl use_server_2 path_reg /x|y/
> use backend server2 if use_server_2
>
> This does not match routes correctly. Any ideas?
>
>
>
>
>
>



Re: Backends Referencing other Backends?

2012-10-24 Thread Baptiste
Hi Joel,

Unfortunately, this kind of configuration is not doable.
Could you tell us why you want to do such a thing, what is the real
need for this (even if I have some ideas about it ;) )

cheers