[ANNOUNCE] haproxy-1.4.26

2015-01-31 Thread Willy Tarreau
Hi,

This is an update of branch 1.4, after 10 months of fixes.

Aside from the various minor fixes that accumulated over almost one year, we
have four fixes for important bugs:

  - http-send-name-header was still broken and could cause corrupted requests
to be sent when requests were pipelined. The bug was reported by Guillaume
Castagnino and debugged by Cyril and myself. It made us scratch our heads
a lot; I think it has been one of the hardest ones to fix so far because
1.4's infrastructure is not well suited to support this feature. Thus if
you use it, it should now be safe, but if any new bug surfaces, please
upgrade to 1.5 (a minimal sketch of the directive follows this list).

  - a possible integer overflow could happen when computing the available data
in a buffer when combined with http-send-name-header, resulting in a read
overflow which can crash the process. Did I say that we shouldn't use
http-send-name-header in 1.4?

  - using http-send-name-header with a POST request whose body fills the
request buffer could cause a memmove to be performed with a negative
length if the connection to the server fails and is redispatched
to a server with a longer name, crashing the process.

  - issuing "show sess" on the CLI may sometimes maintain a reference to a
session which is not properly released if the CLI is suddenly aborted
while the reference is kept (eg: buffer full). This can silently corrupt
the back ref list and cause haproxy to crash when freeing pools, typically
while soft-stopping on a reload, causing the loss of all established
sessions.
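
As a reminder, the directive in question is typically used like this (a
minimal, made-up sketch; the backend and server names are hypothetical and
not taken from any of the reports above):

   backend be_app
       # send the name of the selected server to that server in a request header
       http-send-name-header X-Backend-Server
       server app1 192.168.0.10:80 check
       server app2 192.168.0.11:80 check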

The other ones are not that important and are probably self-explanatory from
the changelog below:

- BUG/MINOR: stats: fix a typo on a closing tag for a server tracking another one
- BUG/MEDIUM: auth: fix segfault with http-auth and a configuration with an unknown encryption algorithm
- BUG/MEDIUM: config: userlists should ensure that encrypted passwords are supported
- BUG/MINOR: log: fix request flags when keep-alive is enabled
- BUG/MINOR: checks: prevent http keep-alive with http-check expect
- BUG/MEDIUM: backend: Update hash to use unsigned int throughout
- BUG/MINOR: http: fix typo: "401 Unauthorized" => "407 Unauthorized"
- BUG/MINOR: build: handle whitespaces in wc -l output
- DOC: httplog does not support 'no'
- BUG/MEDIUM: regex: fix risk of buffer overrun in exp_replace()
- BUILD: fix Makefile.bsd
- BUILD: also fix Makefile.osx
- BUG/MAJOR: http: fix again http-send-name-header
- BUG/MAJOR: buffer: fix possible integer overflow on reserved size computation
- BUG/MAJOR: buffer: don't schedule data in transit for leaving until connected
- BUG/MINOR: http: don't report server aborts as client aborts
- DOC: stop referencing the slow git repository in the README
- DOC: remove the ultra-obsolete TODO file
- BUILD: remove TODO from the spec file and add README
- MINOR: log: make MAX_SYSLOG_LEN overridable at build time
- DOC: remove references to CPU=native in the README
- BUG/MEDIUM: http: don't dump debug headers on MSG_ERROR
- BUG/MAJOR: cli: explicitly call cli_release_handler() upon error
- BUG/MEDIUM: tcp: don't use SO_ORIGINAL_DST on non-AF_INET sockets
- BUG/MINOR: config: don't inherit the default balance algorithm in frontends
- BUG/MEDIUM: http: fix header removal when previous header ends with pure LF
- BUG/MINOR: http: abort request processing on filter failure

For distro package maintainers, I'd suggest backporting at least all the
MAJOR and MEDIUM fixes.

Usual links below:
 Site index       : http://www.haproxy.org/
 Sources          : http://www.haproxy.org/download/1.4/src/devel/
 Changelog        : http://www.haproxy.org/download/1.4/src/CHANGELOG
 Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/configuration-1.4.html

Willy




Re: Possible to send backend host and port in healthcheck?

2015-01-31 Thread Pavlos Parissis
On 01/02/2015 07:35 πμ, Willy Tarreau wrote:
> Hello Joseph,
> 
> I'm CCing Bhaskar since he was the one proposing the first solution, he
> may have some useful insights. Other points below.
> 
> On Thu, Jan 15, 2015 at 01:23:59PM -0800, Joseph Lynch wrote:
>> Hello,
>>
>> I am trying to set up a health check service similar to the inetd solutions
>> suggested in the documentation. Unfortunately, my backends run on different
>> ports because they are being created dynamically and as far as I can tell I
>> cannot include the server port in my healthcheck either as part of the
>> server declaration, a header, or as part of the healthcheck uri itself.
>>
>> I have been trying to come up with potential solutions that are not overly
>> invasive, and I think that the simplest solution is to include the server
>> host and port in the existing send-state header. I have included a patch
>> that I believe does this at the end of this email. Before I go off
>> maintaining a local fork, I wanted to ask if the haproxy devs would be
>> sympathetic to me trying to upstream this patch?
> 
> I'm personally fine with it. As you say, it's really not invasive, so we
> could merge it and even backport it into 1.5-stable. I'd slightly change
> something however, I'd use "address" instead of "host" in the field, since
> that's what you're copying there. "Host" could be used later to copy the
> equivalent of a host name, so let's not misuse the field name.
> 
>> As for prior art, I found a few posts on this mailing list about the
>> ability to add headers to http checks. I believe that something like
>> http://marc.info/?l=haproxy&m=139181606417120&w=2 would be more than what
>> we need to solve this problem, but that thread seems to have died. I do
>> believe that a general ability to add headers to healthchecks would be
>> superior to my patch, but the general solution is significantly harder to
>> pull off.
> 
> I'd like to re-heat that thread. I didn't even remember about it, indeed
> we were busy finalizing 1.5. Bhaskar, I still think your work makes sense
> for 1.6, so if you still have your patch, it's probably time to resend it :-)
> 

If I understood Bhaskar's suggestion correctly, we could delegate the health
checking of backend servers to a single server which does all the health
checking. Am I right? If this is the case, then the downside of multiple
health checks when nbproc > 1 is gone! But I would like to see a
fall-back mechanism, as we have with the agent check, in case that single
server is gone. Alternatively, we could have Bhaskar's suggestion
implemented in the agent check.
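
For reference, here is a minimal sketch of the existing agent check mechanism
I am referring to (the addresses, ports and intervals are made up):

   backend be_app
       option httpchk GET /health
       # the agent runs on the backend server itself and reports its own state/weight
       server app1 192.168.0.10:80 check agent-check agent-port 9999 agent-inter 5s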

I am re-heating the request to delegate health checks to a central
service with a fall-back mechanism in place, because it:
* Reduces checks in setups where you have servers in multiple backends
* Reduces checks in setups where you have more than one active HAProxy
  server (HAProxy servers behind a Layer 4 load balancer, ECMP, etc.)
* Reduces checks when the multi-process model is used
* Reduces CPU stress on firewalls, when they are present between HAProxy
  and the backend servers.

This assumes that there are enough resources on the 'health-checker'
server to sustain a huge amount of requests, which is not a big deal if
the 'health-checker' solution is designed correctly, meaning that backend
servers push their availability to that 'health-checker' server.
Furthermore, the 'health-checker' server should have a check in place to
detect backend servers not sending their health status and declare them
down after a certain period of inactivity.

In the case of servers located across multiple VLANs, there is an edge case
where backend servers are reported as healthy but HAProxy fails to send
traffic to them due to missing network routes, missing firewall holes, etc.

The main gain of this solution is that you make backend servers
responsible for announcing their availability. It is a mindset change, as
we are used to having LBs perform the health checks and be the
authoritative source of such information.

Cheers,
Pavlos










[ANNOUNCE] haproxy-1.5.11

2015-01-31 Thread Willy Tarreau
Hi!

Here comes another month of fixes. Nothing really important this time, mostly
small annoyances caused by improper behaviours. One of them was not exactly a
bug since it used to work as documented, but as it was documented to work in a
stupid and useless way, I decided to backport it anyway. It's the "http-request
set-header" action, which used to remove the target header prior to computing
the format string, making it impossible to append a value to an existing header
without passing via a dummy header, which added to the complexity. Now the
string is computed before removing the header, so no more insane tricks are
needed. One important fix targets users running 1.5.10: the addition of
"log-tag" uncovered a bug by which we can run with a null logger if no logger
is declared. Since 1.5.10 (with log-tag), this can cause a crash upon startup,
so this was fixed here.
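
For illustration, a minimal sketch of what the fixed behaviour allows (the
frontend and header names below are made up):

   frontend fe_www
       bind :80
       # the format string is now evaluated before the old header is removed,
       # so the existing value can be referenced and appended to
       http-request set-header X-Via %[req.hdr(X-Via)],lb1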

The changes are of low enough importance that the changelog is explicit enough:
- BUG/MEDIUM: backend: correctly detect the domain when use_domain_only is used
- MINOR: ssl: load certificates in alphabetical order
- BUG/MINOR: checks: prevent http keep-alive with http-check expect
- BUG/MEDIUM: Do not set agent health to zero if server is disabled in config
- MEDIUM/BUG: Only explicitly report "DOWN (agent)" if the agent health is zero
- BUG/MINOR: stats: Fix incorrect printf type.
- DOC: add missing entry for log-format and clarify the text
- BUG/MEDIUM: http: fix header removal when previous header ends with pure LF
- BUG/MEDIUM: channel: fix possible integer overflow on reserved size computation
- BUG/MINOR: channel: compare to_forward with buf->i, not buf->size
- MINOR: channel: add channel_in_transit()
- MEDIUM: channel: make buffer_reserved() use channel_in_transit()
- MEDIUM: channel: make bi_avail() use channel_in_transit()
- BUG/MEDIUM: channel: don't schedule data in transit for leaving until connected
- BUG/MAJOR: log: don't try to emit a log if no logger is set
- BUG/MINOR: args: add missing entry for ARGT_MAP in arg_type_names
- BUG/MEDIUM: http: make http-request set-header compute the string before removal
- BUG/MINOR: http: fix incorrect header value offset in replace-hdr/replace-value
- BUG/MINOR: http: abort request processing on filter failure

Usual URLs below:
 Site index       : http://www.haproxy.org/
 Sources          : http://www.haproxy.org/download/1.5/src/
 Git repository   : http://git.haproxy.org/git/haproxy-1.5.git/
 Git Web browsing : http://git.haproxy.org/?p=haproxy-1.5.git
 Changelog        : http://www.haproxy.org/download/1.5/src/CHANGELOG
 Cyril's HTML doc : http://cbonte.github.com/haproxy-dconv/configuration-1.5.html

Regards,
Willy




Re: connection is rejected when using ipad with send-proxy option

2015-01-31 Thread Willy Tarreau
On Thu, Jan 15, 2015 at 12:16:13PM -0800, Alex Wu wrote:
> We enable send-proxy for ssl connections, and have the patched apache module
> to deal with the proxy protocol.
> 
> From Mac OS, we see it works as designed. But when we repeat the same test
> using an iPad, we see the connection rejected. The iPad cannot establish the
> connection to haproxy over ssl.

I don't understand, your iPad doesn't support the proxy protocol, so
why would you expect it to work ?

Willy




Re: tproxy bug in haproxy-1.5.10

2015-01-31 Thread Willy Tarreau
On Thu, Jan 15, 2015 at 08:21:05PM +0100, U.Mutlu wrote:
> global
>maxconn 512
> 
> defaults
>timeout connect 1m
>timeout client  2m
>timeout server  2m
>#option redispatch
> 
> frontend MyFrontend
>bind192.168.100.101:5678

Here you need "transparent" on the "bind" line to enable tproxy. But
I agree with Lukas that your setup is overly complicated, makes no
sense at all, and does not reflect what you'd use in production. I
didn't even know that tproxy used to support the options you were
using, so probably you're not the only one to seek complications...
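
For reference, a minimal sketch of what I mean, based on your frontend (the
backend name is made up); note that this also requires haproxy to be built with
TPROXY support and the appropriate routing and privileges on the host:

   frontend MyFrontend
       bind 192.168.100.101:5678 transparent
       default_backend MyBackend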

Regards,
Willy




Re: Possible to send backend host and port in healthcheck?

2015-01-31 Thread Willy Tarreau
Hello Joseph,

I'm CCing Bhaskar since he was the one proposing the first solution, he
may have some useful insights. Other points below.

On Thu, Jan 15, 2015 at 01:23:59PM -0800, Joseph Lynch wrote:
> Hello,
> 
> I am trying to set up a health check service similar to the inetd solutions
> suggested in the documentation. Unfortunately, my backends run on different
> ports because they are being created dynamically and as far as I can tell I
> cannot include the server port in my healthcheck either as part of the
> server declaration, a header, or as part of the healthcheck uri itself.
> 
> I have been trying to come up with potential solutions that are not overly
> invasive, and I think that the simplest solution is to include the server
> host and port in the existing send-state header. I have included a patch
> that I believe does this at the end of this email. Before I go off
> maintaining a local fork, I wanted to ask if the haproxy devs would be
> sympathetic to me trying to upstream this patch?

I'm personally fine with it. As you say, it's really not invasive, so we
could merge it and even backport it into 1.5-stable. I'd slightly change
something however, I'd use "address" instead of "host" in the field, since
that's what you're copying there. "Host" could be used later to copy the
equivalent of a host name, so let's not misuse the field name.

> As for prior art, I found a few posts on this mailing list about the
> ability to add headers to http checks. I believe that something like
> http://marc.info/?l=haproxy&m=139181606417120&w=2 would be more than what
> we need to solve this problem, but that thread seems to have died. I do
> believe that a general ability to add headers to healthchecks would be
> superior to my patch, but the general solution is significantly harder to
> pull off.

I'd like to re-heat that thread. I didn't even remember about it, indeed
we were busy finalizing 1.5. Bhaskar, I still think your work makes sense
for 1.6, so if you still have your patch, it's probably time to resend it :-)

> If you guys agree that this patch is ok, please let me know what
> I need to do next. I don't see any tests for the send-state option but I
> presume that I would need to update the documentation.

Please check what your patch does with servers not having any IP nor port
(ie: the ones reached over a unix socket). I suspect you'll have the socket
path in the host part, and "0" as the port, but I'm not sure, so I'd like
to be certain. Also, please verify that it's really the server's address
and port, and not the ones from the check, that are passed in the header.
So if you specify "addr" and "port" on the server line, these ones are only
used for the check, and we want to be sure that the real ones are properly
used. Once everything's OK, please update the doc about "send-state".

Thanks,
Willy

> === Patch ===
> 
> diff --git a/src/checks.c b/src/checks.c
> index 15a3c40..d620b5b 100644
> --- a/src/checks.c
> +++ b/src/checks.c
> @@ -477,6 +477,8 @@ static int httpchk_build_status_header(struct server *s, char *buffer, int size)
>      int sv_state;
>      int ratio;
>      int hlen = 0;
> +    char host[46];
> +    char port[6];
>      const char *srv_hlt_st[7] = { "DOWN", "DOWN %d/%d",
>                                    "UP %d/%d", "UP",
>                                    "NOLB %d/%d", "NOLB",
> @@ -507,8 +509,11 @@ static int httpchk_build_status_header(struct server *s, char *buffer, int size)
>                   (s->state != SRV_ST_STOPPED) ? (s->check.health - s->check.rise + 1) : (s->check.health),
>                   (s->state != SRV_ST_STOPPED) ? (s->check.fall) : (s->check.rise));
> 
> -    hlen += snprintf(buffer + hlen,  size - hlen, "; name=%s/%s; node=%s; weight=%d/%d; scur=%d/%d; qcur=%d",
> -                     s->proxy->id, s->id,
> +    addr_to_str(&s->addr, host, sizeof(host));
> +    port_to_str(&s->addr, port, sizeof(port));
> +
> +    hlen += snprintf(buffer + hlen,  size - hlen, "; host=%s; port=%s; name=%s/%s; node=%s; weight=%d/%d; scur=%d/%d; qcur=%d",
> +                     host, port, s->proxy->id, s->id,
>                   global.node,
>                   (s->eweight * s->proxy->lbprm.wmult + s->proxy->lbprm.wdiv - 1) / s->proxy->lbprm.wdiv,
>                   (s->proxy->lbprm.tot_weight * s->proxy->lbprm.wmult + s->proxy->lbprm.wdiv - 1) / s->proxy->lbprm.wdiv,
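
For illustration, with this patch applied the status header added by
"http-check send-state" would look something like this (all values below are
made up):

   X-Haproxy-Server-State: UP 2/3; host=192.168.0.10; port=8001; name=be_app/app1; node=lb1; weight=1/2; scur=13/22; qcur=0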



Re: Stick tables, good guys, bad guys, and NATs

2015-01-31 Thread Yuan
Hi Willy,

Gratitude. Thanks a lot. Appreciate this tons. Helps a lot.

Regards,

Long Wu Yuan 龙 武 缘 
Sr. Linux Engineer 高级工程师
ChinaNetCloud 云络网络科技(上海)有限公司 | www.ChinaNetCloud.com
1238 Xietu Lu, X2 Space 1-601, Shanghai, China | 中国上海市徐汇区斜土路1238号X2空间1-601室

24x7 Support Hotline: +86-400-618-0024 | Office Tel: +86-(21)-6422-1946
We are hiring! http://careers.chinanetcloud.com  | Customer Portal - 
https://customer-portal.service.chinanetcloud.com/


On Jan 31, 2015, at 9:32 PM, Willy Tarreau  wrote:

> Hi guys,
> 
> On Tue, Jan 27, 2015 at 06:01:13AM +0800, Yuan Long wrote:
>> I am in the same fix.
>> No matter what we try, the data to address is the real
>> laptop/desktop/cellphone/server count. That count is skewed as soon as
>> there are a hundred laptops/desktops behind a router.
>> 
>> Best I heard is from Willy himself, suggestion to use base32+src. At the
>> cost of losing plain text and having a binary to use in acl but works for
>> now. Grateful to have HAProxy in the first place.
> 
> There's no universal rule. Everything depends on how the site is made,
> and how the bad guys are acting. For example, some sites may work very
> well with a rate-limit on base32+src. That could be the case when you
> want to prevent a client from mirroring a whole web site. But for sites
> with very few urls, it could be another story. Conversely, some sites
> will provide lots of different links to various objects. Think for
> example about a merchant's site where each photo of an object for sale is
> a different URL. You wouldn't want to block users who simply click on
> "next" and get 50 new photos each time.
> 
> So the first thing to do is to define how the site is supposed to work.
> Next, you define what is a bad behaviour, and how to distinguish between
> intentional bad behaviour and accidental bad behaviour (eg: people who
> have to hit reload several times because of a poor connection). For most
> sites, you have to keep in mind that it's better to let some bad users
> pass through than to block legitimate users. So you want to put the cursor
> on the business side and not on the policy enforcement side.
> 
> Proxies, firewalls etc make the problem worse, but not too much in general.
> You'll easily see some addresses sending 3-10 times more requests than other
> ones because they're proxying many users. But if you realize that a valid
> user may also reach that level of traffic on regular use of the site, it's
> a threshold you have to accept anyway. What would be unlikely however is
> that surprisingly all users behind a proxy browse on steroids. So setting
> blocking levels 10 times higher than the average pace you normally observe
> might already give very good results.
> 
> If your site is very special and needs to enforce strict rules against
> sucking or spamming (eg: forums), then you may need to identify the client
> and observe cookies. But then there's even less of a generic rule, it totally
> depends on the application and the sequence to access the site. To be
> transparent on this subject, we've been involved in helping a significant
> number of sites under abuse or attack at HAProxy Technologies, and it
> turns out that whatever new magic tricks you find for one site are often
> irrelevant to the next one. Each time you have to go back to pencil and
> paper and write down the complete browsing sequence and find a few subtle
> elements there.
> 
> Regards,
> Willy
> 
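
For reference, a minimal sketch of the base32+src rate-limit approach mentioned
above (the names and thresholds are made up and would need tuning per site):

   frontend fe_www
       bind :80
       # track the per-URL+source request rate (base32+src is a binary key)
       stick-table type binary len 20 size 1m expire 10m store http_req_rate(60s)
       http-request track-sc0 base32+src
       # deny clients fetching the same URL more than 100 times per minute
       acl abuse sc0_http_req_rate gt 100
       http-request deny if abuse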



Re: tt calculation

2015-01-31 Thread Willy Tarreau
Hi Laurent,

On Mon, Jan 26, 2015 at 12:42:58PM +0100, Laurent Dormoy wrote:
> Hi,
> 
> According to the haproxy doc, Td = Tt - (Tq + Tw + Tc + Tr)
> 
> But in my haproxy HTTP logs, Td is always equal to 0 (meaning that Tt = Tq + 
> Tw + Tc + Tr)
> 
> The reverse proxy serves clients all over Europe and keep-alive is not 
> enabled.
> 
> Can someone explain this to me?

Yes, if your server sends all the data in very few packets
and these data can be pushed to the system buffers on the client
side, you'll really get 0 ms as seen from haproxy. The larger
the socket buffers, the larger the amount of data that you'll
be able to pass to the client without waiting. That's why transfer
time can only be measured from a receiver and not from a sender.
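
As a made-up worked example: a request logged with timers 10/0/30/25/65 means
Tq=10, Tw=0, Tc=30, Tr=25 and Tt=65, so Td = Tt - (Tq + Tw + Tc + Tr) =
65 - (10 + 0 + 30 + 25) = 0, which is exactly the situation described above.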

Hoping this helps,
Willy




Re: tt calculation

2015-01-31 Thread Warren Turkal
Can you share your config?

wt

On Thu, Jan 29, 2015 at 1:08 AM, Laurent Dormoy 
wrote:

> Up,
>
> did anybody experience or can explain why this happens ? I need to perform
> response time troubleshooting and I miss the whole proxy->client time
> because of this.
>
> Cheers,
>
> --
> Laurent
>
>
> - Original Message -
> > From: "Laurent Dormoy" 
> > To: haproxy@formilux.org
> > Sent: Monday, 26 January 2015 12:42:58
> > Subject: tt calculation
> >
> > Hi,
> >
> > According to the haproxy doc, Td = Tt - (Tq + Tw + Tc + Tr)
> >
> > But in my haproxy HTTP logs, Td is always equal to 0 (meaning that Tt =
> Tq +
> > Tw + Tc + Tr)
> >
> > The reverse proxy serves clients all over Europe and keep-alive is not
> > enabled.
> >
> > Can someone explain me this ?
> >
> > I use v1.4.24-2
> >
> > Thanks,
> >
> > --
> > Laurent
> >
> >
> >
> >
>
>


-- 
Warren Turkal


man page for haproxy.cfg

2015-01-31 Thread Ryan O'Hara

I've been asked to provide a man page for haproxy.cfg, which would be
a massive endeavor. Since Cyril has done such an excellent job
generating the HTML documentation, how difficult would it be to grok
this into man page format? Has anyone done it?

Ryan




Re: HAproxy constant memory leak

2015-01-31 Thread Georges-Etienne Legendre
The maxconn was set to 4096 before, and after 45 days, haproxy was using
20 GB...

What else could it be?

-- Georges-Etienne

On Fri, Jan 30, 2015 at 1:49 PM, Lukas Tribus  wrote:

> With "maxconn 5" this is expected behavior, because haproxy will use
> RAM up to an amount that is justified for 5 concurrent connections.
> Configure maxconn to a proper and real value and the RAM usage will be
> predictable. Lukas
>


[PATCH/RFC 8/8] MEDIUM: Support sending email alerts

2015-01-31 Thread Simon Horman
Signed-off-by: Simon Horman 
---
 include/proto/checks.h |   2 +
 include/types/checks.h |   2 +-
 include/types/proxy.h  |  18 ++-
 src/cfgparse.c |  26 ++--
 src/checks.c   | 321 +
 src/server.c   |   1 +
 6 files changed, 356 insertions(+), 14 deletions(-)

diff --git a/include/proto/checks.h b/include/proto/checks.h
index 24dec79..b4faed0 100644
--- a/include/proto/checks.h
+++ b/include/proto/checks.h
@@ -47,6 +47,8 @@ static inline void health_adjust(struct server *s, short 
status)
 const char *init_check(struct check *check, int type);
 void free_check(struct check *check);
 
+void send_email_alert(struct server *s, const char *format, ...)
+   __attribute__ ((format(printf, 2, 3)));
 #endif /* _PROTO_CHECKS_H */
 
 /*
diff --git a/include/types/checks.h b/include/types/checks.h
index 8162a06..4b35d30 100644
--- a/include/types/checks.h
+++ b/include/types/checks.h
@@ -181,7 +181,7 @@ struct check {
char **envp;/* the environment to use if 
running a process-based check */
struct pid_list *curpid;/* entry in pid_list used for 
current process-based test, or -1 if not in test */
struct protocol *proto; /* server address protocol for 
health checks */
-   struct sockaddr_storage addr;   /* the address to check, if 
different from  */
+   struct sockaddr_storage addr;   /* the address to check */
 };
 
 struct check_status {
diff --git a/include/types/proxy.h b/include/types/proxy.h
index 72d1024..230b804 100644
--- a/include/types/proxy.h
+++ b/include/types/proxy.h
@@ -208,6 +208,19 @@ struct error_snapshot {
char buf[BUFSIZE];  /* copy of the beginning of the message 
*/
 };
 
+struct email_alert {
+   struct list list;
+   struct list tcpcheck_rules;
+};
+
+struct email_alertq {
+   struct list email_alerts;
+   struct check check; /* Email alerts are implemented using 
existing check
+* code even though they are not 
checks. This structure
+* is as a parameter to the check code.
+* Each check corresponds to a mailer */
+};
+
 struct proxy {
enum obj_type obj_type; /* object type == 
OBJ_TYPE_PROXY */
enum pr_state state;/* proxy state, one of PR_* */
@@ -386,9 +399,10 @@ struct proxy {
struct mailers *m;  /* Mailer to send email alerts 
via */
char *name;
} mailers;
-   char *from; /* Address to send email 
allerts from */
-   char *to;   /* Address(es) to send email 
allerts to */
+   char *from; /* Address to send email alerts 
from */
+   char *to;   /* Address(es) to send email 
alerts to */
char *myhostname;   /* Identity to use in HELO 
command sent to mailer */
+   struct email_alertq *queues;/* per-mailer alerts queues */
} email_alert;
 };
 
diff --git a/src/cfgparse.c b/src/cfgparse.c
index de94074..3af0449 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -2019,8 +2019,8 @@ int cfg_parse_mailers(const char *file, int linenum, char 
**args, int kwm)
}
 
proto = protocol_by_family(sk->ss_family);
-   if (!proto || !proto->connect) {
-   Alert("parsing [%s:%d] : '%s %s' : connect() not 
supported for this address family.\n",
+   if (!proto || !proto->connect || proto->sock_prot != 
IPPROTO_TCP) {
+   Alert("parsing [%s:%d] : '%s %s' : TCP not supported 
for this address family.\n",
  file, linenum, args[0], args[1]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
@@ -6607,15 +6607,19 @@ int check_config_validity()
}
}
 
-   if (
-   (curproxy->email_alert.mailers.name || 
curproxy->email_alert.from || curproxy->email_alert.myhostname || 
curproxy->email_alert.to) &&
-   !(curproxy->email_alert.mailers.name && 
curproxy->email_alert.from && curproxy->email_alert.to)) {
-   Warning("config : 'email-alert' will be ignored for %s 
'%s' (the presence any of "
-   "'email-alert from', 'email-alert mailer', 
'email-alert hostname' or 'email-alert to' requrires each of"
-   "'email-alert from', 'email-alert mailer' and 
'email-alert to'  to be present).\n",
-   proxy_type_str(curproxy), curproxy->id);
-   err_code |= ERR_WARN;
-   free_email_alert(curproxy);
+ 

[PATCH/RFC 0/8] Email Alerts

2015-01-31 Thread Simon Horman
Hi Willy, Hi All,

the purpose of this email is to solicit feedback on an implementation
of email alerts for haproxy, the design of which is based on a discussion
in this forum some months ago.


This patchset allows configuration of mailers. These are like the
existing haproxy concept of peers, but for sending email alerts.

It also allows proxies to be configured to send email alerts to mailers.

An example haproxy.cfg snippet is as follows:

listen VIP_Name
...
server RIP_Name ...
email-alert mailers some_mailers
email-alert from te...@horms.org
email-alert to te...@horms.org

mailers some_mailers
mailer us 127.0.0.1:25
mailer them 192.168.0.1:587


And an example email alert is as follows:

 start 
Date: Wed, 28 Jan 2015 17:23:21 +0900 (JST)
From: te...@horms.org
To: te...@horms.org
Subject: [HAproxy Alert] Server VIP_Name/RIP_Name is DOWN.

Server VIP_Name/RIP_Name is DOWN.
 end 


Internally the code makes use of a dummy check using the tcpcheck
code to set up a simple sequence of expect/send rules. This,
along with the notion of mailers, is my interpretation of the
earlier discussion in this forum.

The current code has only been lightly tested and has a number of
limitations including:

* No options to control sending email alerts based on the event
  that has occurred. That is, it's probably way too noisy for most
  people's tastes.
* No options to configure the format of the email alerts
* No options to configure delays associated with the checks
  used to send alerts.
* No Documentation
* No support for STLS. This one will be a little tricky.

Again the purpose is to solicit feedback on the code so far,
in particular the design, before delving into further implementation details.


For reference this patchset is available in git
https://github.com/horms/haproxy devel/email-alert

This patchset is based on the current mainline master branch:
the 1.6 development branch. The current head commit there is
32602d2 ("BUG/MINOR: checks: prevent http keep-alive with http-check expect").


Simon Horman (8):
  MEDIUM: Remove connect_chk
  MEDIUM: Refactor init_check and move to checks.c
  MEDIUM: Add free_check() helper
  MEDIUM: Move proto and addr fields struct check
  MEDIUM: Attach tcpcheck_rules to check
  MEDIUM: Add parsing of mailers section
  MEDIUM: Allow configuration of email alerts
  MEDIUM: Support sending email alerts

 Makefile|   2 +-
 include/proto/checks.h  |   5 +
 include/types/checks.h  |   3 +
 include/types/mailers.h |  65 
 include/types/proxy.h   |  24 +++
 include/types/server.h  |   5 -
 src/cfgparse.c  | 290 
 src/checks.c| 427 ++--
 src/mailers.c   |  17 ++
 src/server.c|  60 ++-
 10 files changed, 803 insertions(+), 95 deletions(-)
 create mode 100644 include/types/mailers.h
 create mode 100644 src/mailers.c

-- 
2.1.4




[PATCH/RFC 1/8] MEDIUM: Remove connect_chk

2015-01-31 Thread Simon Horman
Remove connect_chk and instead call connect_proc_chk()
and connect_conn_chk(). There no longer seems to be any
value in having a wrapper function here.

Signed-off-by: Simon Horman 
---
 src/checks.c | 25 ++---
 1 file changed, 2 insertions(+), 23 deletions(-)

diff --git a/src/checks.c b/src/checks.c
index 1b5b731..321fe34 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1824,27 +1824,6 @@ out:
 }
 
 /*
- * establish a server health-check.
- *
- * It can return one of :
- *  - SN_ERR_NONE if everything's OK
- *  - SN_ERR_SRVTO if there are no more servers
- *  - SN_ERR_SRVCL if the connection was refused by the server
- *  - SN_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
- *  - SN_ERR_RESOURCE if a system resource is lacking (eg: fd limits, ports, 
...)
- *  - SN_ERR_INTERNAL for any other purely internal errors
- * Additionnally, in the case of SN_ERR_RESOURCE, an emergency log will be 
emitted.
- */
-static int connect_chk(struct task *t)
-{
-   struct check *check = t->context;
-
-   if (check->type == PR_O2_EXT_CHK)
-   return connect_proc_chk(t);
-   return connect_conn_chk(t);
-}
-
-/*
  * manages a server health-check that uses a process. Returns
  * the time the task accepts to wait, or TIME_ETERNITY for infinity.
  */
@@ -1875,7 +1854,7 @@ static struct task *process_chk_proc(struct task *t)
 
check->state |= CHK_ST_INPROGRESS;
 
-   ret = connect_chk(t);
+   ret = connect_proc_chk(t);
switch (ret) {
case SN_ERR_UP:
return t;
@@ -2018,7 +1997,7 @@ static struct task *process_chk_conn(struct task *t)
check->bo->p = check->bo->data;
check->bo->o = 0;
 
-   ret = connect_chk(t);
+   ret = connect_conn_chk(t);
switch (ret) {
case SN_ERR_UP:
return t;
-- 
2.1.4




[PATCH/RFC 3/8] MEDIUM: Add free_check() helper

2015-01-31 Thread Simon Horman
Add free_check() helper to free the memory allocated by init_check().

Signed-off-by: Simon Horman 
---
 include/proto/checks.h | 1 +
 src/checks.c   | 7 +++
 2 files changed, 8 insertions(+)

diff --git a/include/proto/checks.h b/include/proto/checks.h
index 1e65652..24dec79 100644
--- a/include/proto/checks.h
+++ b/include/proto/checks.h
@@ -45,6 +45,7 @@ static inline void health_adjust(struct server *s, short 
status)
 }
 
 const char *init_check(struct check *check, int type);
+void free_check(struct check *check);
 
 #endif /* _PROTO_CHECKS_H */
 
diff --git a/src/checks.c b/src/checks.c
index ae981f8..b2f89a5 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -2807,6 +2807,13 @@ const char *init_check(struct check *check, int type)
return NULL;
 }
 
+void free_check(struct check *check)
+{
+   free(check->bi);
+   free(check->bo);
+   free(check->conn);
+}
+
 
 /*
  * Local variables:
-- 
2.1.4




[PATCH/RFC 4/8] MEDIUM: Move proto and addr fields struct check

2015-01-31 Thread Simon Horman
The motivation for this is to make checks more independent of each
other to allow further reuse of their infrastructure.

For now server->check and server->agent still always use the same values
for the addr and proto fields, so this patch should not introduce any
behavioural changes.

Signed-off-by: Simon Horman 
---
 include/types/checks.h |  2 ++
 include/types/server.h |  5 -
 src/checks.c   | 14 +++---
 src/server.c   | 14 +++---
 4 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/include/types/checks.h b/include/types/checks.h
index 831166e..04d79c4 100644
--- a/include/types/checks.h
+++ b/include/types/checks.h
@@ -179,6 +179,8 @@ struct check {
char **argv;/* the arguments to use if 
running a process-based check */
char **envp;/* the environment to use if 
running a process-based check */
struct pid_list *curpid;/* entry in pid_list used for 
current process-based test, or -1 if not in test */
+   struct protocol *proto; /* server address protocol for 
health checks */
+   struct sockaddr_storage addr;   /* the address to check, if 
different from  */
 };
 
 struct check_status {
diff --git a/include/types/server.h b/include/types/server.h
index 1cabb83..4f97e17 100644
--- a/include/types/server.h
+++ b/include/types/server.h
@@ -202,11 +202,6 @@ struct server {
 
int puid;   /* proxy-unique server ID, used 
for SNMP, and "first" LB algo */
 
-   struct {/* configuration  used by 
health-check and agent-check */
-   struct protocol *proto; /* server address protocol for 
health checks */
-   struct sockaddr_storage addr;   /* the address to check, if 
different from  */
-   } check_common;
-
struct check check; /* health-check specific 
configuration */
struct check agent; /* agent specific configuration 
*/
 
diff --git a/src/checks.c b/src/checks.c
index b2f89a5..6624714 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1437,18 +1437,18 @@ static int connect_conn_chk(struct task *t)
 
/* prepare a new connection */
conn_init(conn);
-   conn_prepare(conn, s->check_common.proto, check->xprt);
+   conn_prepare(conn, check->proto, check->xprt);
conn_attach(conn, check, &check_conn_cb);
conn->target = &s->obj_type;
 
/* no client address */
clear_addr(&conn->addr.from);
 
-   if (is_addr(&s->check_common.addr)) {
+   if (is_addr(&check->addr)) {
 
/* we'll connect to the check addr specified on the server */
-   conn->addr.to = s->check_common.addr;
-   proto = s->check_common.proto;
+   conn->addr.to = check->addr;
+   proto = check->proto;
}
else {
/* we'll connect to the addr on the server */
@@ -2498,10 +2498,10 @@ static void tcpcheck_main(struct connection *conn)
/* no client address */
clear_addr(&conn->addr.from);
 
-   if (is_addr(&s->check_common.addr)) {
+   if (is_addr(&check->addr)) {
/* we'll connect to the check addr specified on 
the server */
-   conn->addr.to = s->check_common.addr;
-   proto = s->check_common.proto;
+   conn->addr.to = check->addr;
+   proto = check->proto;
}
else {
/* we'll connect to the addr on the server */
diff --git a/src/server.c b/src/server.c
index 8554e75..31f319b 100644
--- a/src/server.c
+++ b/src/server.c
@@ -901,7 +901,7 @@ int parse_server(const char *file, int linenum, char 
**args, struct proxy *curpr
}
 
newsrv->addr = *sk;
-   newsrv->proto = newsrv->check_common.proto = 
protocol_by_family(newsrv->addr.ss_family);
+   newsrv->proto = newsrv->check.proto = 
newsrv->agent.proto = protocol_by_family(newsrv->addr.ss_family);
newsrv->xprt  = newsrv->check.xprt = newsrv->agent.xprt 
= &raw_sock;
 
if (!newsrv->proto) {
@@ -1109,8 +1109,8 @@ int parse_server(const char *file, int linenum, char 
**args, struct proxy *curpr
goto out;
}
 
-   newsrv->check_common.addr = *sk;
-   newsrv->check_common.proto = 
protocol_by_family(sk->ss_family);
+   newsrv->check.addr = newsrv->agent.addr = *sk;
+   newsrv->check.proto = newsrv->agent.proto = 
prot

[PATCH/RFC 6/8] MEDIUM: Add parsing of mailers section

2015-01-31 Thread Simon Horman
Add mailer and mailers structures and allow parsing of
a mailers section into those structures.

These structures will subsequently be freed as it is
not yet possible to reference them in the configuration.

Signed-off-by: Simon Horman 
---
 Makefile|   2 +-
 include/types/mailers.h |  65 ++
 src/cfgparse.c  | 177 
 src/mailers.c   |  17 +
 4 files changed, 260 insertions(+), 1 deletion(-)
 create mode 100644 include/types/mailers.h
 create mode 100644 src/mailers.c

diff --git a/Makefile b/Makefile
index 4671759..4e3e166 100644
--- a/Makefile
+++ b/Makefile
@@ -664,7 +664,7 @@ OBJS = src/haproxy.o src/sessionhash.o src/base64.o 
src/protocol.o \
src/session.o src/hdr_idx.o src/ev_select.o src/signal.o \
src/acl.o src/sample.o src/memory.o src/freq_ctr.o src/auth.o \
src/compression.o src/payload.o src/hash.o src/pattern.o src/map.o \
-   src/namespace.o
+   src/namespace.o src/mailers.o
 
 EBTREE_OBJS = $(EBTREE_DIR)/ebtree.o \
   $(EBTREE_DIR)/eb32tree.o $(EBTREE_DIR)/eb64tree.o \
diff --git a/include/types/mailers.h b/include/types/mailers.h
new file mode 100644
index 000..582bb94
--- /dev/null
+++ b/include/types/mailers.h
@@ -0,0 +1,65 @@
+/*
+ * include/types/mailer.h
+ * This file defines everything related to mailer.
+ *
+ * Copyright 2015 Horms Solutions Ltd., Simon Horman 
+ *
+ * Based on include/types/peers.h
+ *
+ * Copyright 2010 EXCELIANCE, Emeric Brun 
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  
USA
+ */
+
+#ifndef _TYPES_EMAIL_ALERT_H
+#define _TYPES_EMAIL_ALERT_H
+
+#include 
+#include 
+#include 
+#include 
+
+struct mailer {
+   char *id;
+   struct mailers *mailers;
+   struct {
+   const char *file;   /* file where the section appears */
+   int line;   /* line where the section appears */
+   } conf; /* config information */
+   struct sockaddr_storage addr;   /* SMTP server address */
+   struct protocol *proto; /* SMTP server address's protocol */
+   struct xprt_ops *xprt;  /* SMTP server socket operations at 
transport layer */
+   void *sock_init_arg;/* socket operations's opaque init 
argument if needed */
+   struct mailer *next;/* next mailer in the list */
+};
+
+
+struct mailers {
+   char *id;   /* mailers section name */
+   struct mailer *mailer_list; /* mailers in this mailers section */
+   struct {
+   const char *file;   /* file where the section appears */
+   int line;   /* line where the section appears */
+   } conf; /* config information */
+   struct mailers *next;   /* next mailers section */
+   int count;  /* total number of mailers in this 
mailers section */
+   int users;  /* number of users of this mailers 
section */
+};
+
+
+extern struct mailers *mailers;
+
+#endif /* _TYPES_EMAIL_ALERT_H */
+
diff --git a/src/cfgparse.c b/src/cfgparse.c
index c5f20a3..2db5ed1 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -48,6 +48,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1916,6 +1917,145 @@ out:
return err_code;
 }
 
+
+/*
+ * Parse a line in a , ,  or  section.
+ * Returns the error code, 0 if OK, or any combination of :
+ *  - ERR_ABORT: must abort ASAP
+ *  - ERR_FATAL: we can continue parsing but not start the service
+ *  - ERR_WARN: a warning has been emitted
+ *  - ERR_ALERT: an alert has been emitted
+ * Only the two first ones can stop processing, the two others are just
+ * indicators.
+ */
+int cfg_parse_mailers(const char *file, int linenum, char **args, int kwm)
+{
+   static struct mailers *curmailers = NULL;
+   struct mailer *newmailer = NULL;
+   const char *err;
+   int err_code = 0;
+   char *errmsg = NULL;
+
+   if (strcmp(args[0], "mailers") == 0) { /* new mailers section */
+   if (!*args[1]) {
+   Alert("parsing [%s:%d] : missing name for mailers 
section.\n", file, linenum);
+   err_code |= ERR_ALERT | ERR_ABORT;
+  

[PATCH/RFC 5/8] MEDIUM: Attach tcpcheck_rules to check

2015-01-31 Thread Simon Horman
This is to allow checks to be established whose tcpcheck_rules
are not those of their proxy.

Signed-off-by: Simon Horman 
---
 include/types/checks.h |  1 +
 src/checks.c   | 34 +-
 src/server.c   |  1 +
 3 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/include/types/checks.h b/include/types/checks.h
index 04d79c4..8162a06 100644
--- a/include/types/checks.h
+++ b/include/types/checks.h
@@ -166,6 +166,7 @@ struct check {
char desc[HCHK_DESC_LEN];   /* health check description */
int use_ssl;/* use SSL for health checks */
int send_proxy; /* send a PROXY protocol header 
with checks */
+   struct list *tcpcheck_rules;/* tcp-check send / expect 
rules */
struct tcpcheck_rule *current_step; /* current step when using 
tcpcheck */
struct tcpcheck_rule *last_started_step;/* pointer to latest tcpcheck 
rule started */
int inter, fastinter, downinter;/* checks: time in milliseconds 
*/
diff --git a/src/checks.c b/src/checks.c
index 6624714..0f99d47 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -60,7 +60,7 @@
 #include 
 
 static int httpchk_expect(struct server *s, int done);
-static int tcpcheck_get_step_id(struct server *);
+static int tcpcheck_get_step_id(struct check *);
 static void tcpcheck_main(struct connection *);
 
 static const struct check_status check_statuses[HCHK_STATUS_SIZE] = {
@@ -620,7 +620,7 @@ static void chk_report_conn_err(struct connection *conn, 
int errno_bck, int expi
chk = get_trash_chunk();
 
if (check->type == PR_O2_TCPCHK_CHK) {
-   step = tcpcheck_get_step_id(check->server);
+   step = tcpcheck_get_step_id(check);
if (!step)
chunk_printf(chk, " at initial connection step of 
tcp-check");
else {
@@ -1463,8 +1463,8 @@ static int connect_conn_chk(struct task *t)
/* only plain tcp-check supports quick ACK */
quickack = check->type == 0 || check->type == PR_O2_TCPCHK_CHK;
 
-   if (check->type == PR_O2_TCPCHK_CHK && 
!LIST_ISEMPTY(&s->proxy->tcpcheck_rules)) {
-   struct tcpcheck_rule *r = (struct tcpcheck_rule *) 
s->proxy->tcpcheck_rules.n;
+   if (check->type == PR_O2_TCPCHK_CHK && 
!LIST_ISEMPTY(check->tcpcheck_rules)) {
+   struct tcpcheck_rule *r = (struct tcpcheck_rule *) 
check->tcpcheck_rules->n;
/* if first step is a 'connect', then tcpcheck_main must run it 
*/
if (r->action == TCPCHK_ACT_CONNECT) {
tcpcheck_main(conn);
@@ -2351,23 +2351,23 @@ static int httpchk_expect(struct server *s, int done)
 /*
  * return the id of a step in a send/expect session
  */
-static int tcpcheck_get_step_id(struct server *s)
+static int tcpcheck_get_step_id(struct check *check)
 {
struct tcpcheck_rule *cur = NULL, *next = NULL;
int i = 0;
 
/* not even started anything yet => step 0 = initial connect */
-   if (!s->check.current_step)
+   if (!check->current_step)
return 0;
 
-   cur = s->check.last_started_step;
+   cur = check->last_started_step;
 
/* no step => first step */
if (cur == NULL)
return 1;
 
/* increment i until current step */
-   list_for_each_entry(next, &s->proxy->tcpcheck_rules, list) {
+   list_for_each_entry(next, check->tcpcheck_rules, list) {
if (next->list.p == &cur->list)
break;
++i;
@@ -2384,7 +2384,7 @@ static void tcpcheck_main(struct connection *conn)
struct check *check = conn->owner;
struct server *s = check->server;
struct task *t = check->task;
-   struct list *head = &s->proxy->tcpcheck_rules;
+   struct list *head = check->tcpcheck_rules;
 
/* here, we know that the check is complete or that it failed */
if (check->result != CHK_RES_UNKNOWN)
@@ -2565,14 +2565,14 @@ static void tcpcheck_main(struct connection *conn)
case SN_ERR_SRVTO: /* ETIMEDOUT */
case SN_ERR_SRVCL: /* ECONNREFUSED, ENETUNREACH, ... */
chunk_printf(&trash, "TCPCHK error establishing 
connection at step %d: %s",
-   tcpcheck_get_step_id(s), 
strerror(errno));
+   tcpcheck_get_step_id(check), 
strerror(errno));
set_server_check_status(check, 
HCHK_STATUS_L4CON, trash.str);
goto out_end_tcpcheck;
case SN_ERR_PRXCOND:
case SN_ERR_RESOURCE:
case SN_ERR_INTERNAL:
chunk_printf(&trash, "TCPCHK error establishing 
connection at step %d",
-  

[PATCH/RFC 2/8] MEDIUM: Refactor init_check and move to checks.c

2015-01-31 Thread Simon Horman
Refactor init_check so that an error string is returned
rather than alerts being printed by it. Also move
init_check to checks.c and provide a prototype to allow
it to be used from multiple C files.

Signed-off-by: Simon Horman 
---
 include/proto/checks.h |  2 ++
 src/checks.c   | 26 ++
 src/server.c   | 44 +---
 3 files changed, 37 insertions(+), 35 deletions(-)

diff --git a/include/proto/checks.h b/include/proto/checks.h
index f3d4fa6..1e65652 100644
--- a/include/proto/checks.h
+++ b/include/proto/checks.h
@@ -44,6 +44,8 @@ static inline void health_adjust(struct server *s, short 
status)
return __health_adjust(s, status);
 }
 
+const char *init_check(struct check *check, int type);
+
 #endif /* _PROTO_CHECKS_H */
 
 /*
diff --git a/src/checks.c b/src/checks.c
index 321fe34..ae981f8 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -2781,6 +2781,32 @@ static void tcpcheck_main(struct connection *conn)
return;
 }
 
+const char *init_check(struct check *check, int type)
+{
+   check->type = type;
+
+   /* Allocate buffer for requests... */
+   if ((check->bi = calloc(sizeof(struct buffer) + global.tune.chksize, 
sizeof(char))) == NULL) {
+   return "out of memory while allocating check buffer";
+   }
+   check->bi->size = global.tune.chksize;
+
+   /* Allocate buffer for responses... */
+   if ((check->bo = calloc(sizeof(struct buffer) + global.tune.chksize, 
sizeof(char))) == NULL) {
+   return "out of memory while allocating check buffer";
+   }
+   check->bo->size = global.tune.chksize;
+
+   /* Allocate buffer for partial results... */
+   if ((check->conn = calloc(1, sizeof(struct connection))) == NULL) {
+   return "out of memory while allocating check connection";
+   }
+
+   check->conn->t.sock.fd = -1; /* no agent in progress yet */
+
+   return NULL;
+}
+
 
 /*
  * Local variables:
diff --git a/src/server.c b/src/server.c
index b19ebbe..8554e75 100644
--- a/src/server.c
+++ b/src/server.c
@@ -21,6 +21,7 @@
 
 #include 
 
+#include 
 #include 
 #include 
 #include 
@@ -796,35 +797,6 @@ const char *server_parse_weight_change_request(struct 
server *sv,
return NULL;
 }
 
-static int init_check(struct check *check, int type, const char * file, int 
linenum)
-{
-   check->type = type;
-
-   /* Allocate buffer for requests... */
-   if ((check->bi = calloc(sizeof(struct buffer) + global.tune.chksize, 
sizeof(char))) == NULL) {
-   Alert("parsing [%s:%d] : out of memory while allocating check 
buffer.\n", file, linenum);
-   return ERR_ALERT | ERR_ABORT;
-   }
-   check->bi->size = global.tune.chksize;
-
-   /* Allocate buffer for responses... */
-   if ((check->bo = calloc(sizeof(struct buffer) + global.tune.chksize, 
sizeof(char))) == NULL) {
-   Alert("parsing [%s:%d] : out of memory while allocating check 
buffer.\n", file, linenum);
-   return ERR_ALERT | ERR_ABORT;
-   }
-   check->bo->size = global.tune.chksize;
-
-   /* Allocate buffer for partial results... */
-   if ((check->conn = calloc(1, sizeof(struct connection))) == NULL) {
-   Alert("parsing [%s:%d] : out of memory while allocating check 
connection.\n", file, linenum);
-   return ERR_ALERT | ERR_ABORT;
-   }
-
-   check->conn->t.sock.fd = -1; /* no agent in progress yet */
-
-   return 0;
-}
-
 int parse_server(const char *file, int linenum, char **args, struct proxy 
*curproxy, struct proxy *defproxy)
 {
struct server *newsrv = NULL;
@@ -1592,7 +1564,7 @@ int parse_server(const char *file, int linenum, char 
**args, struct proxy *curpr
}
 
if (do_check) {
-   int ret;
+   const char *ret;
 
if (newsrv->trackit) {
Alert("parsing [%s:%d]: unable to enable checks 
and tracking at the same time!\n",
@@ -1671,9 +1643,10 @@ int parse_server(const char *file, int linenum, char 
**args, struct proxy *curpr
}
 
/* note: check type will be set during the config 
review phase */
-   ret = init_check(&newsrv->check, 0, file, linenum);
+   ret = init_check(&newsrv->check, 0);
if (ret) {
-   err_code |= ret;
+   Alert("parsing [%s:%d] : %s.\n", file, linenum, 
ret);
+   err_code |= ERR_ALERT | ERR_ABORT;
goto out;
}
 
@@ -1681,7 +1654,7 @@ int parse_server(const char *file, int linenum, char 
**args, struct proxy *curpr
}
 
if (do_agent) {
-   int ret;
+   const char *ret;
 

[PATCH/RFC 7/8] MEDIUM: Allow configuration of email alerts

2015-01-31 Thread Simon Horman
This currently does nothing beyond parsing the configuration
and storing it in the proxy, as there is no implementation of email alerts yet.

Signed-off-by: Simon Horman 
---
 include/types/mailers.h |   6 +--
 include/types/proxy.h   |  10 +
 src/cfgparse.c  | 109 
 3 files changed, 122 insertions(+), 3 deletions(-)

diff --git a/include/types/mailers.h b/include/types/mailers.h
index 582bb94..07374a7 100644
--- a/include/types/mailers.h
+++ b/include/types/mailers.h
@@ -23,8 +23,8 @@
  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  
USA
  */
 
-#ifndef _TYPES_EMAIL_ALERT_H
-#define _TYPES_EMAIL_ALERT_H
+#ifndef _TYPES_MAILERS_H
+#define _TYPES_MAILERS_H
 
 #include 
 #include 
@@ -61,5 +61,5 @@ struct mailers {
 
 extern struct mailers *mailers;
 
-#endif /* _TYPES_EMAIL_ALERT_H */
+#endif /* _TYPES_MAILERS_H */
 
diff --git a/include/types/proxy.h b/include/types/proxy.h
index d67fe88..72d1024 100644
--- a/include/types/proxy.h
+++ b/include/types/proxy.h
@@ -380,6 +380,16 @@ struct proxy {
} conf; /* config information */
void *parent;   /* parent of the proxy when 
applicable */
struct comp *comp;  /* http compression */
+
+   struct {
+   union {
+   struct mailers *m;  /* Mailer to send email alerts 
via */
+   char *name;
+   } mailers;
+   char *from; /* Address to send email 
allerts from */
+   char *to;   /* Address(es) to send email 
allerts to */
+   char *myhostname;   /* Identity to use in HELO 
command sent to mailer */
+   } email_alert;
 };
 
 struct switching_rule {
diff --git a/src/cfgparse.c b/src/cfgparse.c
index 2db5ed1..de94074 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -2056,6 +2056,18 @@ out:
return err_code;
 }
 
+static void free_email_alert(struct proxy *p)
+{
+   free(p->email_alert.mailers.name);
+   p->email_alert.mailers.name = NULL;
+   free(p->email_alert.from);
+   p->email_alert.from = NULL;
+   free(p->email_alert.to);
+   p->email_alert.to = NULL;
+   free(p->email_alert.myhostname);
+   p->email_alert.myhostname = NULL;
+}
+
 int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
 {
static struct proxy *curproxy = NULL;
@@ -2352,6 +2364,15 @@ int cfg_parse_listen(const char *file, int linenum, char 
**args, int kwm)
if (defproxy.check_command)
curproxy->check_command = 
strdup(defproxy.check_command);
 
+   if (defproxy.email_alert.mailers.name)
+   curproxy->email_alert.mailers.name = 
strdup(defproxy.email_alert.mailers.name);
+   if (defproxy.email_alert.from)
+   curproxy->email_alert.from = 
strdup(defproxy.email_alert.from);
+   if (defproxy.email_alert.to)
+   curproxy->email_alert.to = 
strdup(defproxy.email_alert.to);
+   if (defproxy.email_alert.myhostname)
+   curproxy->email_alert.myhostname = 
strdup(defproxy.email_alert.myhostname);
+
goto out;
}
else if (!strcmp(args[0], "defaults")) {  /* use this one to assign 
default values */
@@ -2393,6 +2414,7 @@ int cfg_parse_listen(const char *file, int linenum, char 
**args, int kwm)
free(defproxy.conf.lfs_file);
free(defproxy.conf.uif_file);
free(defproxy.log_tag);
+   free_email_alert(&defproxy);
 
for (rc = 0; rc < HTTP_ERR_SIZE; rc++)
chunk_destroy(&defproxy.errmsg[rc]);
@@ -2870,6 +2892,61 @@ int cfg_parse_listen(const char *file, int linenum, char 
**args, int kwm)
err_code |= ERR_ALERT | ERR_FATAL;
}
}/* end else if (!strcmp(args[0], "cookie"))  */
+   else if (!strcmp(args[0], "email-alert")) {
+   if (*(args[1]) == 0) {
+   Alert("parsing [%s:%d] : missing argument after 
'%s'.\n",
+ file, linenum, args[0]);
+   err_code |= ERR_ALERT | ERR_FATAL;
+   goto out;
+}
+
+   if (!strcmp(args[1], "from")) {
+   if (*(args[1]) == 0) {
+   Alert("parsing [%s:%d] : missing argument after 
'%s'.\n",
+ file, linenum, args[1]);
+   err_code |= ERR_ALERT | ERR_FATAL;
+   goto out;
+   }
+   free(curproxy->email_alert.from);
+   curproxy->email_alert.from = strdup(args[2]);
+   }
+   else if (!strcmp(args[1], "

RE: Option no-sslv3 no being honoured with wildcard certs

2015-01-31 Thread Lukas Tribus
> I have a situation where the no-sslv3 is being ignored using version  
> 1.5.10 on centos 6.6 and my test backend Java Rest api test servers are  
> rejecting SSL handshakes with : 

Please post your configuration, "haproxy -vv" output and an ssldump of the
failed ssl handshake.

This doesn't make any sense, because even without "no-sslv3" openssl
would still negotiate TLS if the backend supports it.
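
For reference, a minimal sketch of where no-sslv3 is normally placed (the
certificate path is made up):

   frontend fe_ssl
       bind :443 ssl crt /etc/haproxy/certs/wildcard.pem no-sslv3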


Lukas

  




Compression for response codes other than 200s such as 201s?

2015-01-31 Thread Jesse Hathaway
According to the configuration manual:

> Compression is disabled when:
>   * HTTP status code is not 200

However, many APIs return a 201 response from a post request.

What is the rationale for only enabling compression on 200 response codes?

Thanks, Jesse




Compression for response codes other than 200s such as 201s?

2015-01-31 Thread Jesse Hathaway
The HAProxy configuration manual contains the following note:

> Compression is disabled when:
>   * HTTP status code is not 200

However, many rest APIs send back a 201 status code after receiving a POST.

What is the rationale for only compressing responses with a 200 status code?

Thanks, Jesse




Re: Compression for response codes other than 200s such as 201s?

2015-01-31 Thread Willy Tarreau
On Thu, Jan 29, 2015 at 08:41:04PM +, Jesse Hathaway wrote:
> The HAProxy configuration manual contains the following note:
> 
> > Compression is disabled when:
> >   * HTTP status code is not 200
> 
> However, many rest APIs send back a 201 status code after receiving a POST.
> 
> What is the rationale for only compressing responses with a 200 status code?

I've found it in this commit message :

  commit d300261babe838ac3bdd2f5e2353980845fb3929
  Author: William Lallemand 
  Date:   Mon Nov 26 14:34:47 2012 +0100

MINOR: compression: disable on multipart or status != 200

The compression is disabled when the HTTP status code is not 200, indeed
compression on some HTTP code can create issues (ex: 206, 416).

Multipart message should not be compressed eitherway.

Note that this was one of the very early commits of compression, at a time
when it was only starting to work and we were extremely careful. Now that
compression works well, I think we could broaden the scope of status codes,
and explicitly allow 2xx except 204-206. From what I'm seeing, at least 200,
201, 202, and 203 may contain a response subject to compression.

Well, maybe we could at least allow 200-203 then. What do you think ?

Willy
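
For reference, compression in 1.5 is enabled per proxy roughly like this
(a minimal sketch; the algorithm, types, names and address are placeholders),
and with the current check a 201 returned by such an API on a POST goes out
uncompressed while the same payload with a 200 would be compressed:

    backend api
        mode http
        # compress JSON and plain-text responses with gzip
        compression algo gzip
        compression type application/json text/plain
        server api1 10.0.0.21:8080 check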




Re: [PATCH] BUG/MINOR: checks: prevent http keep-alive with http-check expect

2015-01-31 Thread Willy Tarreau
On Fri, Jan 30, 2015 at 12:07:07AM +0100, Cyril Bonté wrote:
> Sébastien Rohaut reported that string negation in http-check expect didn't
> work as expected.
> 
> The misbehaviour is caused by responses with HTTP keep-alive. When the
> condition is not met, haproxy awaits more data until the buffer is full or the
> connection is closed, resulting in a check timeout when "timeout check" is
> lower than the keep-alive timeout on the server side.
> 
> In order to avoid the issue, when a "http-check expect" is used, haproxy will
> ask the server to disable keep-alive by automatically appending a
> "Connection: close" header to the request.

Very nice, thank you Cyril for this work. I've applied it to both 1.6 and 1.5.

Cheers,
Willy
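
For illustration, a minimal sketch of the situation the patch addresses: a
negated "http-check expect" against a server that answers with keep-alive
(the address, URI and string below are placeholders):

    backend app
        option httpchk GET /health
        http-check expect ! string maintenance
        timeout check 2s
        server srv1 10.0.0.11:80 check

With the fix, haproxy appends "Connection: close" to the check request, so
the server closes after its response and the negated match can be evaluated
immediately instead of waiting for "timeout check" to expire.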





man page for haproxy.cfg

2015-01-31 Thread Ryan O'Hara

I've been asked to provide a man page for haproxy.cfg, which would be
a massive endeavor. Since Cyril has done such an excellent job
generating the HTML documentation, how difficult would it be to grok
this into man page format? Has anyone done it?

Ryan





[PATCH] BUG/MINOR: checks: prevent http keep-alive with http-check expect

2015-01-31 Thread Cyril Bonté
Sébastien Rohaut reported that string negation in http-check expect didn't
work as expected.

The misbehaviour is caused by responses with HTTP keep-alive. When the
condition is not met, haproxy awaits more data until the buffer is full or the
connection is closed, resulting in a check timeout when "timeout check" is
lower than the keep-alive timeout on the server side.

In order to avoid the issue, when a "http-check expect" is used, haproxy will
ask the server to disable keep-alive by automatically appending a
"Connection: close" header to the request.
---
 doc/configuration.txt | 4 ++++
 src/checks.c          | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 2295744..7c1edd8 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2905,6 +2905,10 @@ http-check expect [!] <match> <pattern>
   waste some CPU cycles, especially when regular expressions are used, and that
   it is always better to focus the checks on smaller resources.
 
+  Also "http-check expect" doesn't support HTTP keep-alive. Keep in mind that it
+  will automatically append a "Connection: close" header, meaning that this
+  header should not be present in the request provided by "option httpchk".
+
   Last, if "http-check expect" is combined with "http-check disable-on-404",
   then this last one has precedence when the server responds with 404.
 
diff --git a/src/checks.c b/src/checks.c
index feca96e..1b5b731 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1427,6 +1427,9 @@ static int connect_conn_chk(struct task *t)
 	else if ((check->type) == PR_O2_HTTP_CHK) {
 		if (s->proxy->options2 & PR_O2_CHK_SNDST)
 			bo_putblk(check->bo, trash.str, httpchk_build_status_header(s, trash.str, trash.size));
+		/* prevent HTTP keep-alive when "http-check expect" is used */
+		if (s->proxy->options2 & PR_O2_EXP_TYPE)
+			bo_putstr(check->bo, "Connection: close\r\n");
 		bo_putstr(check->bo, "\r\n");
 		*check->bo->p = '\0'; /* to make gdb output easier to read */
 	}
-- 
2.1.4




Re: HAproxy constant memory leak

2015-01-31 Thread Willy Tarreau
On Sat, Jan 31, 2015 at 12:59:34AM +0100, Lukas Tribus wrote:
> > The maxconn was set to 4096 before, and after 45 days, haproxy was  
> > using 20gigs... 
> 
> Ok, can you set maxconn back to 4096, reproduce the leak (to at least
> a few gigabytes) and run "show pools" a few times to see where
> exactly the memory consumption comes from?

Also, could you please send a network capture of the checks from
the firewall to haproxy (if possible, taken on the haproxy side) ?
It is possible that there is a specific sequence leading to an
improper close (eg: some SSL structs not being released at certain
steps in the handshake, etc.).

Please use this to take your capture :

tcpdump -vs0 -pi eth0 -w checks.cap host  and port 

Wait for several seconds, then Ctrl-C. Be careful, your capture
will contain all the traffic flowing between haproxy and the
firewall's address facing it, so there might be confidential
information there, only send to the list if you think it's OK.

Ideally, in parallel you can try to strace haproxy during this
capture :

   strace -tts200 -o checks.log -p $(pgrep haproxy)

Thanks,
Willy




Re: Stick tables, good guys, bad guys, and NATs

2015-01-31 Thread Willy Tarreau
Hi guys,

On Tue, Jan 27, 2015 at 06:01:13AM +0800, Yuan Long wrote:
> I am in the same fix.
> No matter what we try, what we really need to count is the real
> laptop/desktop/cellphone/server count, and that count is skewed as soon as
> there are a hundred laptops/desktops behind a router.
> 
> The best I've heard is from Willy himself: the suggestion to use base32+src,
> at the cost of losing the plain-text key and having a binary value to use in
> ACLs. But it works for now. Grateful to have HAProxy in the first place.

There's no universal rule. Everything depends on how the site is made,
and how the bad guys are acting. For example, some sites may work very
well with a rate-limit on base32+src. That could be the case when you
want to prevent a client from mirroring a whole web site. But for sites
with very few urls, it could be another story. Conversely, some sites
will provide lots of different links to various objects. Think for
example about a merchant's site where each photo of an object for sale is
a different URL. You wouldn't want to block users who simply click on
"next" and get 50 new photos each time.

So the first thing to do is to define how the site is supposed to work.
Next, you define what is a bad behaviour, and how to distinguish between
intentional bad behaviour and accidental bad behaviour (eg: people who
have to hit reload several times because of a poor connection). For most
sites, you have to keep in mind that it's better to let some bad users
pass through than to block legitimate users. So you want to put the cursor
on the business side and not on the policy enforcement side.

Proxies, firewalls, etc. make the problem worse, but not too much in general.
You'll easily see some addresses sending 3-10 times more requests than other
ones because they're proxying many users. But if you realize that a valid
user may also reach that level of traffic on regular use of the site, it's
a threshold you have to accept anyway. What would be unlikely however is
that surprisingly all users behind a proxy browse on steroids. So setting
blocking levels 10 times higher than the average pace you normally observe
might already give very good results.

If your site is very special and needs to enforce strict rules against
sucking or spamming (eg: forums), then you may need to identify the client
and observe cookies. But then there are even fewer generic rules; it totally
depends on the application and on the sequence used to access the site. To be
transparent on this subject, we've been involved in helping a significant
number of sites under abuse or attack at HAProxy Technologies, and it
turns out that whatever new magic tricks you find for one site are often
irrelevant to the next one. Each time you have to go back to pencil and
paper and write down the complete browsing sequence and find a few subtle
elements there.

Regards,
Willy
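
As a rough, untested sketch of the base32+src approach mentioned above (the
table size, expiry and threshold are arbitrary placeholders that must be
tuned to the site, eg: well above the heaviest legitimate per-URL pace):

    frontend www
        bind :80
        mode http
        # one tracked entry per (URL base hash + source address) pair
        stick-table type binary len 20 size 200k expire 10m store http_req_rate(1m)
        tcp-request content track-sc0 base32+src
        http-request deny if { sc0_http_req_rate gt 300 }
        default_backend app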




RE: HAproxy constant memory leak

2015-01-31 Thread Lukas Tribus
> The maxconn was set to 4096 before, and after 45 days, haproxy was  
> using 20gigs... 

Ok, can you set maxconn back to 4096, reproduce the leak (to at least
a few gigabytes) and run "show pools" a few times to see where
exactly the memory consumption comes from?

Lukas
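
For reference, "show pools" can be sampled over the stats socket, assuming
one is configured (eg: "stats socket /var/run/haproxy.sock" in the global
section; the path is a placeholder):

    echo "show pools" | socat stdio /var/run/haproxy.sock

Running it a few times while the leak grows shows which pool keeps
increasing.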
  


Re: affinity cookie is only set intermittently

2015-01-31 Thread Willy Tarreau
Hi,

[ next time, please put a subject on your e-mail, as it's really
  inconvenient for people reading the list to process messages with
  no subject ]

Some responses below.

On Mon, Jan 26, 2015 at 11:43:39AM -0800, Aaron Golub wrote:
(...)
> So far, so good... the SWAP_SERVERID cookie gets set fine.  Now, from
> here, the servers at the www_prodswap and PROD_https_swap backends
> redirect some of their traffic to these frontends using a reverse proxy that
> is configured in our Python site and effectively uses curl to access URLs
> and then delivers that data back out to the client:

I'm assuming that when you say "redirect", you really mean "forward"
in fact.

> #-
> # "frontend" section describes a set of listening sockets accepting client
> connections.
> frontend PROD_rev_proxy_http
> #-
>bind 10.2.0.202:80 
>mode http
>option  httplog
>default_backend PROD_http
> 
> #-
> # "frontend" section describes a set of listening sockets accepting client
> connections.
> frontend PROD_rev_proxy_tcp
> #-
>bind 10.2.0.202:443
>acl is_port_443 dst_port 443
>mode tcp
>use_backend PROD_https if is_port_443
>default_backend PROD_http

Note that the ACL above will always match given that the frontend
only listens to port 443.

> These frontends then direct traffic to the following backends:
> 
> #-
> # "backend" section describes a set of servers to which the proxy will
> connect to forward incoming connections.
> backend PROD_http
> #-
>mode http
>option httplog
>stats enable
>stats auth :
>balance roundrobin
>stick on src table PROD_https
>cookie PHP_SERVERID insert indirect nocache
>option httpclose
>option forwardfor
>option httpchk /healthcheck.txt
> server prod4 10.2.0.105:80  cookie prod4 weight 34 check
> server prod5 10.2.0.106:80  cookie prod5 weight 33 check
> server prod6 10.5.0.107:80  cookie prod6 weight 33 check
> 
> 
> #-
> backend PROD_https
> #-
>mode tcp
>option tcplog
>balance roundrobin
>stick-table type ip size 200k expire 30m
>stick on src
>server prod4 10.2.0.105:443
>server prod5 10.2.0.106:443
>server prod6 10.5.0.107:443
> 
> 
> 
> So here's the problem. The pages on PROD_http/PROD_https load just
> fine, but the PHP_SERVERID cookie is only set intermittently.  Why would
> that be?  Do I have the cookie settings configured incorrectly?  The
> reason I ask is because we believe that these cookie settings are causing
> server affinity to be lost.  Any insight into this would be greatly
> appreciated.

I'm seeing two possibilities :

  1) are you sure that none of your requests are made over SSL ? Since
 you have a pass-through configuration for SSL which forwards in TCP
 mode without setting any cookie, that could be the explanation.

  2) another possibility could be related to the "indirect" keyword. When
 it is set, haproxy will only add the cookie in responses to requests
 which do not have a valid cookie. Maybe in your case the client
 expects the cookie to be present in every response and would flush
 it if it's not present, then causing the affinity to be lost ? If
 that's the case, you can simply workaround this by removing the
 "indirect" keyword.

> Also... is it possible to have cookies set for HTTPS as well, and can it be
> the same cookie as the HTTP cookie?
> I'm currently using HAProxy 1.4.

Yes but for that you need to decipher SSL. That requires haproxy 1.5 and
that you install your server's certificate on haproxy. It may or may not
be acceptable in your environment.

Regards,
Willy
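
As a rough 1.5 sketch of such an SSL-deciphering frontend (untested; the
certificate path is a placeholder), which would let the existing PROD_http
backend set the same PHP_SERVERID cookie on HTTPS traffic:

    frontend PROD_rev_proxy_https
        bind 10.2.0.202:443 ssl crt /etc/haproxy/site.pem
        mode http
        option httplog
        default_backend PROD_http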




Re: Forum Access Request

2015-01-31 Thread Willy Tarreau
Hi,

On Mon, Jan 26, 2015 at 06:21:18PM +, BGaudreault Brian wrote:
> Hello,
> 
> Can I gain access to your forum to get assistance with our new HAProxy 1.5.10 
> redundant setup?

You're at the right place. Provided that you did your homework first (ie:
read the doc and have specific questions), you may get some assistance
here.

Regards,
Willy




Re: question about X-Forwarded-For and proxy protocol

2015-01-31 Thread Willy Tarreau
On Thu, Jan 29, 2015 at 09:57:32AM -0800, Warren Turkal wrote:
> I am using HAProxy 1.5.10. My config looks something like the following:
> 
> frontend main
>   bind *:8080 accept-proxy
>   use backend blah
> 
> backend blah
>   server 10.0.0.1
> 
> When I am accepting proxy protocol connections on the bind line in my front
> end, I would like to add an X-Forwarded-For header that identifies the
> original client from the proxy protocol info. Is there some pattern folks
> use to do that? Does "option forwardfor" do this, or do I need to reqadd
> the header manually?

The proxy protocol will replace the client's IP address everywhere in
the internal structs, so for haproxy, the *real* client will be the
one advertised there. Thus if you use "option forwardfor", the address
presented in the proxy protocol will appear in the x-forwarded-for
header. For example, let's say you're deploying an haproxy setup in
AWS. You set up ELB to enable the proxy protocol, and haproxy as
configured above plus option forwardfor. The server will then get a
request from haproxy with a header identifying the original client
(the one ELB sees).

hoping this helps,
Willy
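
For illustration, a fleshed-out version of the configuration sketched in the
question (the names and addresses are the question's own placeholders):

    frontend main
        bind *:8080 accept-proxy
        mode http
        # X-Forwarded-For will carry the address received in the PROXY header
        option forwardfor
        default_backend blah

    backend blah
        mode http
        server app1 10.0.0.1:80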




Re: possible bug with CumReq info stat

2015-01-31 Thread Willy Tarreau
Hi Warren,

On Tue, Jan 27, 2015 at 03:04:16PM -0800, Warren Turkal wrote:
> The definition of the global.req_count at include/types/global.h line 109
is an unsigned int. The print code is treating it as a signed int. The
> attached commit fixes that.

Thanks, I've applied it to both 1.5 and 1.6.

> Also, is there an SSL protected location for fetching the haproxy git repo
> whose cert is signed by a widespread CA? The haproxy.org site also seems to
> be pretty slow for git cloning.

There is no SSL-protected repo. I'm surprised that you found the haproxy.org
site slow, usually it's reasonably fast. Are you sure you weren't cloning
from 1wt.eu instead, which is the slow master ?

Regards,
Willy
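
For illustration only (this is not the actual haproxy code), the class of
bug fixed here looks like this:

    #include <stdio.h>

    int main(void)
    {
            /* hypothetical counter that has grown past INT_MAX */
            unsigned int req_count = 3000000000u;

            printf("CumReq: %d\n", req_count); /* signed format: prints -1294967296 */
            printf("CumReq: %u\n", req_count); /* unsigned format: prints 3000000000 */
            return 0;
    }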




question about X-Forwarded-For and proxy protocol

2015-01-31 Thread Warren Turkal
I am using HAProxy 1.5.10. My config looks something like the following:

frontend main
  bind *:8080 accept-proxy
  use backend blah

backend blah
  server 10.0.0.1

When I am accepting proxy protocol connections on the bind line in my front
end, I would like to add an X-Forwarded-For header that identifies the
original client from the proxy protocol info. Is there some pattern folks
use to do that? Does "option forwardfor" do this, or do I need to reqadd
the header manually?

Thanks,
wt
-- 
Warren Turkal


RE: HAproxy constant memory leak

2015-01-31 Thread Lukas Tribus
With "maxconn 5" this is expected behavior, because haproxy will use
RAM up to an amount that is justified for 5 concurrent connections.

Configure maxconn to a proper, realistic value and the RAM usage will be
predictable.


Lukas
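
As a minimal sketch (the values are placeholders to adapt to the expected
load), memory stays bounded when maxconn reflects reality:

    global
        maxconn 4096        # hard limit on concurrent connections; memory use grows roughly with this

    defaults
        maxconn 4000        # per-frontend limit, kept below the global one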

  


haproxy mail delivery issues in the last 24 hours

2015-01-31 Thread Willy Tarreau
Hi,

due to an expired DNS entry, emails to the list have been blocked
since yesterday at noon. This is now fixed. If you found that any
of your e-mails didn't show up on the list in the last 24 hours,
please resend it.

Thanks to Lukas for notifying me!
Willy