Dos-Attack / Drop Connections

2010-03-16 Thread Bernhard Krieger

Hi,

For a few days I have been DoS-attacked on a website hosted on my server.
I set a rule which returns a redirect to 127.0.0.1:

acl aclHPOK  hdr_reg(User-Agent) .*
redirect location  http://127.0.0.1/ if !aclHPOK

Is it possible to set a rule to drop these requests instead of
redirecting, or to return an error code?



regards
Bernhard




This message was sent using IMP, the Internet Messaging Program.




Re: Dos-Attack / Drop Connections

2010-03-16 Thread Willy Tarreau
On Tue, Mar 16, 2010 at 09:12:39AM +0100, Bernhard Krieger wrote:
 Hi,
 
 For a few days I have been DoS-attacked on a website hosted on my server.
 I set a rule which returns a redirect to 127.0.0.1:
 
 acl aclHPOK  hdr_reg(User-Agent) .*
 redirect location  http://127.0.0.1/ if !aclHPOK
 
 Is it possible to set a rule to drop these requests instead of
 redirecting, or to return an error code?

Yes, instead of doing a redirect, you can simply do this:

block if !aclHPOK

Also, your ACL is expensive. You can simply check that the
user-agent header is not empty that way :

  acl aclHPOK  hdr_reg(User-Agent) .

Regards,
Willy




Re: Dos-Attack / Drop Connections

2010-03-16 Thread Bernhard Krieger

Hello Willy,

thanks for the reply.
If I change the rule to block the requests, the session rate grows
to 1000/sec.
If I use the redirection option (to http://127.0.0.1), it decreases
to 500/sec.


The DoS attack itself is very strange: it attacks my old clan page,
which has no more than 10 requests per month ... a very highly visited
page ;)


The attack produces only traffic... the attacker will never reach his final goal :)


Thanks haproxy, thanks Willy

regards
Bernhard











Re: Dos-Attack / Drop Connections

2010-03-16 Thread Bernhard Krieger

Hello Willy,

I am using 1.4.1.

I set up a new rule with the tarpit and the
rate decreased to below 40/sec.

regards
Bernhard


----- Message from w...@1wt.eu -----
     Date: Tue, 16 Mar 2010 10:44:24 +0100
     From: Willy Tarreau w...@1wt.eu
 Reply-To: Willy Tarreau w...@1wt.eu
  Subject: Re: Dos-Attack / Drop Connections
       To: Bernhard Krieger b...@noremorze.at
       Cc: haproxy@formilux.org



On Tue, Mar 16, 2010 at 10:32:40AM +0100, Bernhard Krieger wrote:

Hello Willy,

thanks for the reply.
If I change the rule to block the requests, the session rate grows
to 1000/sec.
If I use the redirection option (to http://127.0.0.1), it decreases
to 500/sec.


It means that the attacker immediately retries. Then use a tarpit, it
will slow him down a lot. What version are you running? With 1.4
you can condition the tarpit with an ACL:

timeout tarpit 1m
reqtarpit . if ! { hdr_reg(user-agent) . }

On 1.3 it will be a bit more complicated, you'll have to branch to a
specific backend for the tarpit :

frontend ...
  acl ua-ok hdr_reg(user-agent) .
  use_backend bk_tarpit if !ua-ok

backend bk_tarpit
  timeout tarpit 1m
  reqtarpit .



The DoS attack itself is very strange: it attacks my old clan page,
which has no more than 10 requests per month ... a very highly visited
page ;)

The attack produces only traffic... the attacker will never reach his final goal :)


Well, never underestimate a DoS attack. There is often a first phase of
identification of the target. You should also avoid publicly discussing
the reasons why you think it will not succeed and the workarounds you
are setting up ! If the guy really wants to take you down, he just has
to read the list's archives to update his attack vector.

Regards,
Willy





----- End of message from w...@1wt.eu -----








Re: dynamic referer protecion

2010-03-16 Thread Mikołaj Radzewicz
Hmm, that is not quite what I meant... We are under DDoS because our links
are put on a high-load site. I wanted to check referrers dynamically and
then block them (the ones with the highest rate).
Maybe it is possible to limit sessions based on the host?

On Tue, Mar 16, 2010 at 6:08 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi,

 On Mon, Mar 15, 2010 at 07:54:19PM +0100, Mikołaj Radzewicz wrote:
 Dear Sir,
 I have been using haproxy for a couple of weeks in some basic
 configuration. Since 2 weeks we have been suffering from some DoS
 attacks to our web servers which made them causes 500 due to extreamly
 high number of connections. All of them are caused through the
 referers - links(urls) to our web servers are put on very load pages
 causing run out of pool of connection on our web servers. Is it some
 way to protect our infrastructur using haproxy? Are you planning to
 add sth like that in the future?

 The first thing you must do is to set a maxconn value on each server
 line in haproxy's config to limit the number of concurrent connections
 per server to a level the server can sustain. This will ensure that your
 servers are not saturated anymore. The second thing is to correctly
 configure your various timeouts so that you don't needlessly keep a
 lot of connections on haproxy or your servers when you suppose that
 the client might already have gone. For instance, a client will not
 wait more than 30-40 seconds for something that does not seem to come,
 so let's have your server timeout and queue timeout at these levels.
 You must also set option abortonclose so that each connection aborted
 while the request is still in the queue will not be sent to a server.
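The three measures above can be sketched in a haproxy configuration; every value below is illustrative, not a recommendation:

```
defaults
    mode http
    # don't wait longer than a browser would for something to happen
    timeout client 30s
    timeout server 30s
    timeout queue  30s
    # drop queued requests whose client has already given up
    option abortonclose

backend bk_web
    # cap concurrent connections at what each server can sustain
    server web1 192.168.0.11:80 maxconn 100
    server web2 192.168.0.12:80 maxconn 100
```

With `maxconn` set, excess requests wait in haproxy's queue instead of saturating the servers, and `timeout queue` bounds how long they wait there.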

 Then if you know what to identify in the requests, you can eliminate
 them or tarpit them, which consists in keeping the connection open
 for some predefined time to slow down the attacker. But since what you
 describe looks more like normal browsers, maybe a drop will be enough.
 If you can identify a set of referrers, you can block on that. You can
 even redirect the request to the site holding the referrer. They will
 probably check their page and fix it quickly after detecting the
 higher load ! For instance :

    acl from_site1 hdr_beg(referer) http://site1.dom/
    redirect location http://site1.dom/ if from_site1
    ...

 Regards,
 Willy





-- 
Regards,
MR



Re: dynamic referer protecion

2010-03-16 Thread Willy Tarreau
On Tue, Mar 16, 2010 at 11:39:11AM +0100, Mikołaj Radzewicz wrote:
 Hmm, that is not quite what I meant... We are under DDoS because our links
 are put on a high-load site. I wanted to check referrers dynamically and
 then block them (the ones with the highest rate).

You mean they put the links on *many* high load sites ? I think that
at some point you'll have found the list.

 Maybe is it possible to limit the session based on host?

You mean on referrer I presume. Right now it's not possible. Most of
the code to do that is present but still requires some non-obvious
changes in order to support that.

In my opinion you should really try to enumerate the few highest-load
sites. Simply capture them (capture request header referer len 32),
then check them in your logs using sort | uniq -c | sort -n, and write
a rule to get most of them away.
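As a worked example of that pipeline: the log lines below are fabricated, following haproxy's HTTP log format with the captured header in braces, and the `sed` expression pulls it out before counting:

```shell
# Fabricated sample of haproxy log lines with a captured Referer in {...}
cat > /tmp/haproxy_ref_sample.log <<'EOF'
Mar 16 12:00:01 lb haproxy[123]: 1.2.3.4:1024 [16/Mar/2010:12:00:01.000] http-in bk/s1 0/0/0/5/5 200 512 - - ---- 1/1/1/1/0 0/0 {http://site1.dom/} "GET / HTTP/1.1"
Mar 16 12:00:02 lb haproxy[123]: 1.2.3.5:1025 [16/Mar/2010:12:00:02.000] http-in bk/s1 0/0/0/5/5 200 512 - - ---- 1/1/1/1/0 0/0 {http://site2.dom/} "GET / HTTP/1.1"
Mar 16 12:00:03 lb haproxy[123]: 1.2.3.6:1026 [16/Mar/2010:12:00:03.000] http-in bk/s1 0/0/0/5/5 200 512 - - ---- 1/1/1/1/0 0/0 {http://site1.dom/} "GET / HTTP/1.1"
EOF

# Pull out the captured header and count occurrences, busiest last
sed -n 's/.*{\([^}]*\)}.*/\1/p' /tmp/haproxy_ref_sample.log \
    | sort | uniq -c | sort -n
```

The last lines of the output are the referrers worth writing a rule against.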

Regards,
Willy




RE: mod_security and/or fail2ban

2010-03-16 Thread Andrew Commons
I would also be very interested in the learned opinions of the other readers
of this list on this topic.

When I first considered this in the HAProxy context a few weeks ago, I
figured that implementing this functionality in HAProxy, or putting it
in-line with HAProxy on the same server, would be a bottleneck, and that
modsecurity implemented on the web servers would scale better. This
(presumptuously) assumes high volumes.

If you do want to implement filtering in front of the web servers - i.e.
implement some form of web application firewall (WAF) -  then I believe it
is possible to do this with modsecurity using Apache as a proxy.

I designed and managed the implementation of a WAF in the late 90's before
they were commercially available items. This sat in front of the web servers
at a financial institution for over 10 years and stopped all the automated
threats. Relatively simple white-lists can be very effective in this context
and can be largely independent of the applications although, obviously, some
inspection of the HTTP traffic is required. At this simple level you can:

* Strip headers that are not in the white-list
* Inspect URIs for invalid characters
* Reject methods you don't want to deal with.
* Inspect POST bodies for invalid characters (although file uploads can
present problems here)
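For what it's worth, the first three of those checks can be approximated in haproxy itself. A rough sketch follows; the header name and character class are placeholders, not a vetted policy, and POST body inspection is beyond what haproxy can do at this level:

```
frontend fe_waf
    bind :80
    # reject methods we don't want to deal with
    acl ok_method method GET HEAD POST
    block if !ok_method
    # reject URIs carrying characters outside a simple white-list
    acl bad_uri url_reg [^A-Za-z0-9/._~?=&%-]
    block if bad_uri
    # delete headers we don't want forwarded; haproxy deletes by
    # pattern and cannot directly express "everything but a white-list"
    reqidel ^X-Unwanted-Header:
    default_backend bk_web
```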

Adding application knowledge is a balancing act between the configuration
overheads and how much of these overheads application developers can
stomach, but it greatly increases the effectiveness of the firewall. I
generally assume that the application developers will not be interested in
security (if not right now then at some time in the future) and that the WAF
is the belt even if they don't supply the braces :-) At this level you might
be able to restrict methods and parameters (in the body of a POST or in the
query string) to specific URIs.

I'm not a great fan of 'learned' behaviour in tools like this, I much prefer
explicit testable rules that do not vary with user behaviour or browsing
history. 

That said, there are some things that can be picked up from inspection of
outgoing traffic and applied to responses. These include:

* Parameters that can come back and whether they are in the URI or the body
of a POST
* Maximum lengths if they are specified as part of the outgoing HTML
* The expected method associated with the response

Adding a cookie going out and using it to index into the session coming back
facilitates this if you are doing it dynamically. You can also specify most
of this up front.

This is not too deep and will stop most Web 1.x nasties. Web 2.x is another
story - I haven't had to worry about it :-)

Cheers
Andrew



-Original Message-
From: Olivier Le Cam [mailto:olivier.le...@crdp.ac-versailles.fr] 
Sent: Wednesday, 17 March 2010 12:07 AM
To: haproxy@formilux.org
Subject: mod_security and/or fail2ban

Hi -

I am exploring various solutions in order to implement some filtering 
features (even basic ones) at the haproxy side. My goal is to get rid of 
the most popular bots and vulnerability scanners.

Would someone be aware of a way to perform such filtering with haproxy 
using, say, the modsecurity CRS?

Another alternative could be to scan the haproxy logs with fail2ban or 
equivalent. I was wondering if that could be satisfying enough and if 
some fail2ban rulesets might already be available for that.
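I'm not aware of a published ruleset, but a fail2ban filter over haproxy's HTTP log is straightforward to sketch. The file name and regex below are untested assumptions keyed to the default HTTP log format (frontend, backend/server, timers, then the status code) and would need adjusting to the real logs:

```
# /etc/fail2ban/filter.d/haproxy-scanners.conf (hypothetical)
[Definition]
# match client IPs that received a 4xx from haproxy
failregex = haproxy\[\d+\]: <HOST>:\d+ \[[^\]]+\] \S+ \S+ \S+ 4\d\d
ignoreregex =
```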

Thanks in anticipation for any idea/pointers!

-- 
Olivier Le Cam
Département des Technologies de l'Information et des Communications
Académie de Versailles




XML Output interrupt / P-FLAG

2010-03-16 Thread Bernhard Krieger

Hello,


After upgrading to 1.4.1 we are getting failures on our XML interface.

Below is the haproxy log entry of the request.

P-FLAG:
The P indicates that the session was prematurely aborted by the proxy,  
because of a connection limit enforcement, because a DENY filter was  
matched, because of a security check which detected and blocked a  
dangerous error in a server response which might have caused an information  
leak (eg: cacheable cookie), or because the response was processed by  
the proxy (redirect, stats, etc...).



Mar 16 15:17:26 hostname haproxy[17065]: 192.168.4.147:2559 [16/Mar/2010:15:17:26.483] http-in PP/BACKEND1 0/0/0/167/168 200 16528 - - PDVN 102/10/9/9/0 0/0 {www.x} GET /ModulServletEdata?param=searchedata&promotion_id=2473&crcsec=1268744841861&crcsum=11297&casesensitive=0&add_average=25&dec=2&status=1&show_eid_only=false&orderclause=ORDER%20BY%20voter_rating,%20creationdate%20DESC&data_is_like=&ts=31739 HTTP/1.1


The request is interrupted, so we didn't get the whole XML output.

If I switch back to version 1.3.22, it works without any problems.

I have no idea which rule, security check, ... causes this issue!

regards
Krieger Bernhard







Re: Truncated health check response from real servers

2010-03-16 Thread nick
diff -ur haproxy-1.4.1/include/types/proxy.h haproxy-1.4.1-ecv-test/include/types/proxy.h
--- haproxy-1.4.1/include/types/proxy.h	2010-03-04 22:39:19.000000000 +0000
+++ haproxy-1.4.1-ecv-test/include/types/proxy.h	2010-03-15 10:15:40.000000000 +0000
@@ -137,6 +137,8 @@
 #define PR_O2_MYSQL_CHK 0x0002  /* use MYSQL check for server health */
 #define PR_O2_USE_PXHDR 0x0004  /* use Proxy-Connection for proxy requests */
 #define PR_O2_CHK_SNDST 0x0008  /* send the state of each server along with HTTP health checks */
+#define PR_O2_EXPECT	0x0010	/* http-check expect sth */
+#define PR_O2_NOEXPECT	0x0020	/* http-check expect ! sth */
 /* end of proxy-options2 */
 
 /* bits for sticking rules */
@@ -274,6 +276,9 @@
 	int grace;				/* grace time after stop request */
 	char *check_req;			/* HTTP or SSL request to use for PR_O_HTTP_CHK|PR_O_SSL3_CHK */
 	int check_len;				/* Length of the HTTP or SSL3 request */
+	char *expect_str;			/* http-check expected content */
+	regex_t *expect_regex;			/* http-check expected content */
+	char *expect_type;			/* type of http-check, such as status, string */
 	struct chunk errmsg[HTTP_ERR_SIZE];	/* default or customized error messages for known errors */
 	int uuid;				/* universally unique proxy ID, used for SNMP */
 	unsigned int backlog;			/* force the frontend's listen backlog */
diff -ur haproxy-1.4.1/include/types/server.h haproxy-1.4.1-ecv-test/include/types/server.h
--- haproxy-1.4.1/include/types/server.h	2010-03-04 22:39:19.000000000 +0000
+++ haproxy-1.4.1-ecv-test/include/types/server.h	2010-03-15 10:15:40.000000000 +0000
@@ -147,6 +147,9 @@
 	struct freq_ctr sess_per_sec;		/* sessions per second on this server */
 	int puid;				/* proxy-unique server ID, used for SNMP */
 
+	char *check_data;			/* storage of partial check results */
+	int check_data_len;			/* length of partial check results stored in check_data */
+
 	struct {
 		const char *file;		/* file where the section appears */
 		int line;			/* line where the section appears */
diff -ur haproxy-1.4.1/src/cfgparse.c haproxy-1.4.1-ecv-test/src/cfgparse.c
--- haproxy-1.4.1/src/cfgparse.c	2010-03-04 22:39:19.000000000 +0000
+++ haproxy-1.4.1-ecv-test/src/cfgparse.c	2010-03-15 10:15:40.000000000 +0000
@@ -2874,8 +2874,65 @@
 			/* enable emission of the apparent state of a server in HTTP checks */
 			curproxy->options2 |= PR_O2_CHK_SNDST;
 		}
+		else if (strcmp(args[1], "expect") == 0) {
+			if (strcmp(args[2], "status") == 0 || strcmp(args[2], "string") == 0) {
+				curproxy->options2 |= PR_O2_EXPECT;
+				if (*(args[3]) == 0) {
+					Alert("parsing [%s:%d] : '%s %s %s' expects <regex> as an argument.\n",
+					      file, linenum, args[0], args[1], args[2]);
+					return -1;
+				}
+				curproxy->expect_type = strdup(args[2]);
+				curproxy->expect_str = strdup(args[3]);
+			}
+			else if (strcmp(args[2], "rstatus") == 0 || strcmp(args[2], "rstring") == 0) {
+				curproxy->options2 |= PR_O2_EXPECT;
+				if (*(args[3]) == 0) {
+					Alert("parsing [%s:%d] : '%s %s %s' expects <regex> as an argument.\n",
+					      file, linenum, args[0], args[1], args[2]);
+					return -1;
+				}
+				curproxy->expect_regex = calloc(1, sizeof(regex_t));
+				if (regcomp(curproxy->expect_regex, args[3], REG_EXTENDED) != 0) {
+					Alert("parsing [%s:%d] : bad regular expression '%s'.\n", file, linenum, args[0]);
+					return -1;
+				}
+				curproxy->expect_type = strdup(args[2]);
+			}
+			else if (strcmp(args[2], "!") == 0) {
+				curproxy->options2 |= PR_O2_NOEXPECT;
+				if (strcmp(args[3], "status") == 0 || strcmp(args[3], "string") == 0) {
+					if (*(args[4]) == 0) {
+						Alert("parsing [%s:%d] : '%s %s %s %s' expects <regex> as an argument.\n",
+						      file, linenum, args[0], args[1], args[2], args[3]);
+						return -1;
+					}
+					curproxy->expect_type = strdup(args[3]);
+					curproxy->expect_str = strdup(args[4]);
+				}
+				else if (strcmp(args[3], "rstatus") == 0 || strcmp(args[3], "rstring") == 0) {
+					if (*(args[4]) == 0) {
+						Alert("parsing [%s:%d] : '%s %s %s %s' expects <regex> as an argument.\n",
+						      file, linenum, args[0], args[1], args[2], args[3]);
+						return -1;
+					}
+
+

Re: Truncated health check response from real servers

2010-03-16 Thread Willy Tarreau
Hi Nick,

Thanks for the update. I've quickly reviewed it and noticed
some of the issues of the initial ECV patch (though I don't
remember them all, I'll have to dig into my mailbox). I'm
putting a few examples below.

What I can propose you is to proceed in 3 phases :

  - I will try to extract the two features from your patch
(response reassembly and ECV), and apply the first one
to next 1.4.

  - I'll try to fix the remaining issues of the ECV code and
post it for review.

  - then once everyone is OK and we get satisfying results, we
merge it into another 1.4, so that we'll finally get it into
1.4 stable.

There's also something we lose with the ECV patch : if ECV is in
use, then we can't enable the disable-on-404 feature anymore. I
think we could still combine them both by checking for the 404
when the response does not match the criterion. I'm not exactly
sure how but we'll see that later.

Some issues below :

On Tue, Mar 16, 2010 at 03:50:46PM +, n...@loadbalancer.org wrote:
 + 		else if (strcmp(args[1], "expect") == 0) {
 + 			if (strcmp(args[2], "status") == 0 || strcmp(args[2], "string") == 0) {
 + 				curproxy->options2 |= PR_O2_EXPECT;
 + 				if (*(args[3]) == 0) {
 + 					Alert("parsing [%s:%d] : '%s %s %s' expects <regex> as an argument.\n",

warning, it's not a regex but a string in the error message.

 + 	char *contentptr = strstr(s->check_data, "\r\n\r\n");

here we can go past the end of the buffer => segfault if pattern missing.

 + 	/* Check the response content against the supplied string
 + 	 * or regex... */
 + 	if (strcmp(s->proxy->expect_type, "string") == 0)

it's a waste of CPU cycles to compare strings for just a type which
fits in an enum.
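For illustration, the enum approach could look roughly like this; a sketch with invented names, not haproxy's actual code. The type is parsed once at config time, so per-check comparisons become integer tests rather than strcmp() on a strdup()ed string:

```c
#include <string.h>

/* possible "http-check expect" modes, parsed once at config time */
enum expect_type {
	EXP_NONE = 0,
	EXP_STATUS,
	EXP_STRING,
	EXP_RSTATUS,
	EXP_RSTRING,
};

/* map the keyword argument to an enum value; EXP_NONE on unknown input */
static enum expect_type parse_expect_type(const char *arg)
{
	if (strcmp(arg, "status") == 0)
		return EXP_STATUS;
	if (strcmp(arg, "string") == 0)
		return EXP_STRING;
	if (strcmp(arg, "rstatus") == 0)
		return EXP_RSTATUS;
	if (strcmp(arg, "rstring") == 0)
		return EXP_RSTRING;
	return EXP_NONE;
}

/* at check time, deciding regex vs. plain match is a cheap compare */
static int expect_uses_regex(enum expect_type t)
{
	return t == EXP_RSTATUS || t == EXP_RSTRING;
}
```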

Regards,
Willy




Re: Truncated health check response from real servers

2010-03-16 Thread Willy Tarreau
On Tue, Mar 16, 2010 at 06:22:09PM +0100, Willy Tarreau wrote:
 What I can propose you is to proceed in 3 phases :
 
   - I will try to extract the two features from your patch
 (response reassembly and ECV), and apply the first one
 to next 1.4.

OK, your code was clean and the two parts were really distinct.
It was child's play to split them.

Doing so, I discovered one minor and one medium issue, which I
fixed. The minor one is that detecting the end of a response
now always requires another poll() call, which can be expensive
on LBs with hundreds of servers, especially since in HTTP the
close is almost always pending right after the response:

  20:20:03.958207 recv(7, "HTTP/1.1 200\r\nConnection: close\r\n"..., 8030, 0) = 145
  20:20:03.958365 epoll_wait(3, {{EPOLLIN, {u32=7, u64=7}}}, 8, 1000) = 1
  20:20:03.958543 gettimeofday({1268767203, 958626}, NULL) = 0
  20:20:03.958694 recv(7, "", 7885, 0) = 0
  20:20:03.958833 shutdown(7, 2 /* send and receive */) = 0

I've arranged the recv() call to be able to read up to EAGAIN or
error or end, and now we get both the response and the close in
the same call:

  20:29:58.797019 recv(7, "HTTP/1.1 200\r\nConnection: close\r\n"..., 8030, 0) = 145
  20:29:58.797182 recv(7, "", 7885, 0) = 0
  20:29:58.797356 shutdown(7, 2 /* send and receive */) = 0

The medium issue was that by supporting multiple calls to recv(), we
woke up the bad old guy who sends us a POLLERR status, causing aborts
before recv() has a chance to read a full response. This happens on
the second recv() if the server has closed a bit quickly and sent an
RST:

  21:11:21.036600 epoll_wait(3, {{EPOLLIN, {u32=7, u64=7}}}, 8, 993) = 1
  21:11:21.054361 gettimeofday({1268770281, 54467}, NULL) = 0
  21:11:21.054540 recv(7, "H", 8030, 0) = 1
  21:11:21.054694 recv(7, 0x967e759, 8029, 0) = -1 EAGAIN (Resource temporarily unavailable)
  21:11:21.054843 epoll_wait(3, {{EPOLLIN|EPOLLERR|EPOLLHUP, {u32=7, u64=7}}}, 8, 975) = 1
  21:11:21.060274 gettimeofday({1268770281, 60386}, NULL) = 0
  21:11:21.060454 close(7)= 0

The fix simply consists in removing this old obsolete test, and now it's
OK:

  21:11:59.402207 recv(7, "H", 8030, 0) = 1
  21:11:59.402362 recv(7, 0x8b5c759, 8029, 0) = -1 EAGAIN (Resource temporarily unavailable)
  21:11:59.402511 epoll_wait(3, {{EPOLLIN|EPOLLERR|EPOLLHUP, {u32=7, u64=7}}}, 8, 974) = 1
  21:11:59.407242 gettimeofday({1268770319, 407353}, NULL) = 0
  21:11:59.407425 recv(7, "TTP/1.0 200 OK\r\n"..., 8029, 0) = 16
  21:11:59.407606 recv(7, 0x8b5c769, 8013, 0) = -1 ECONNRESET (Connection reset by peer)
  21:11:59.407753 shutdown(7, 2 /* send and receive */) = -1 ENOTCONN (Transport endpoint is not connected)

The trained eye would have noticed that I got a two-packet HTTP response
with an RST detected before the second packet and that the check code
still managed to get it right. This is really good news !

I'm now gathering my changes and committing your patch with the small
fixes above. That way we can concentrate on ECV.
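The read-until-EAGAIN change described above can be sketched roughly like this. This is a standalone illustration, not haproxy's actual code; all names are invented, and a socketpair stands in for the health-check connection:

```c
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Drain a non-blocking socket in one pass instead of returning to
 * poll() after the first recv(); returns the number of bytes read. */
static size_t drain_socket(int fd, char *buf, size_t cap)
{
	size_t total = 0;

	while (total < cap) {
		ssize_t n = recv(fd, buf + total, cap - total, 0);

		if (n > 0)
			total += (size_t)n;	/* keep reading */
		else if (n == 0)
			break;			/* orderly close (EOF) */
		else
			break;			/* EAGAIN, ECONNRESET...: keep what we have */
	}
	return total;
}

/* Demo: the "server" sends a response in two pieces and closes; one
 * drain pass picks up both pieces and the EOF without re-polling. */
static size_t demo(void)
{
	int sv[2];
	char buf[256];
	size_t got;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return 0;
	send(sv[1], "HTTP/1.1 200\r\n", 14, 0);
	send(sv[1], "Connection: close\r\n\r\nOK", 23, 0);
	close(sv[1]);
	fcntl(sv[0], F_SETFL, O_NONBLOCK);
	got = drain_socket(sv[0], buf, sizeof(buf));
	close(sv[0]);
	return got;
}
```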

Cheers,
Willy




Re: ha proxy Nagios plugin

2010-03-16 Thread Josh Brown (Mailing List)

On Mar 8, 2010, at 1:19 PM, Willy Tarreau wrote:

 Hi,
 
 On Mon, Mar 08, 2010 at 02:58:14PM +0100, Stéphane Urbanovski wrote:
 Cool, thanks. Are you interested in getting it merged into
 mainline ? If so, we can create an entry into the contrib
 directory.
 
 No objections, but I'm not sure it is the best place. A link in the README 
 should be enough.
 
 OK will do that then. In fact, projects that people want to maintain
 are more suited out of tree, and the ones that are written as one-shot
 and have no reason to change in the future are better merged (eg: the
 net-snmp plugin is a good example). That's why I asked.
 
 Also, it seems to rely only on the HTTP socket. Do you think
 it can easily be adapted to also support the unix socket, which
 is global and does not require opening a TCP port ?
 
 The plugin works with Nagios, which is not installed on the same host. So 
 remote access one way or another is mandatory.
 
 hey, that obviously makes sense !
 
I was looking at the Nagios script that Jean-Christophe wrote and I think there 
may be a need for something a bit different.

I feel that we might want to have the Nagios plugin monitor a specific 
frontend/backend service combination, rather than the entire HAProxy setup.  
This would be useful because we could focus on individual HAProxy services and 
their specific health.  If we try and write one plugin for all HAProxy services 
things start to get muddled.

Here is what I am thinking:
- Specify a listen and backend service(s) or a single listen service where the 
frontend and backend have the same name
- Specify thresholds for sessions, errors, queue, etc... Make this dynamic in 
case any fields change..
- Since most of the stats are based on counters, the nagios plugin would have to 
maintain persistent counters, most likely in an external file
- To keep the load down from admin requests to haproxy, perhaps we have the 
script cache the csv data and check it for freshness every run..  
- Specify thresholds for how many backend services can be up or down as a 
percentage, so like if 50% of backend services are down, go critical, if only 
25% of them are down make it a warning or something..
- Output performance data for sessions, errors, queue, etc.. so that we can 
trend and make pretty pictures

We could try and use Ton Voon's new multiple threshold syntax specified here.. 
http://nagiosplugins.org/rfc/new_threshold_syntax

That's about all I have for now..

Thoughts?

-Josh Brown

Re: XML Output interrupt / P-FLAG

2010-03-16 Thread Willy Tarreau
Hi Bernhard,

On Tue, Mar 16, 2010 at 03:52:36PM +0100, Bernhard Krieger wrote:
 Hello,
 
 
 After upgrading to 1.4.1 we are getting failures on our XML interface.
 
 Below is the haproxy log entry of the request.
 
 P-FLAG:
 The P indicates that the session was prematurely aborted by the proxy,  
 because of a connection limit enforcement, because a DENY filter was  
 matched, because of a security check which detected and blocked a  
 dangerous error in a server response which might have caused an information  
 leak (eg: cacheable cookie), or because the response was processed by  
 the proxy (redirect, stats, etc...).
 
 
 Mar 16 15:17:26 hostname haproxy[17065]: 192.168.4.147:2559 [16/Mar/2010:15:17:26.483] http-in PP/BACKEND1 0/0/0/167/168 200 16528 - - PDVN 102/10/9/9/0 0/0 {www.x} GET /ModulServletEdata?param=searchedata&promotion_id=2473&crcsec=1268744841861&crcsum=11297&casesensitive=0&add_average=25&dec=2&status=1&show_eid_only=false&orderclause=ORDER%20BY%20voter_rating,%20creationdate%20DESC&data_is_like=&ts=31739 HTTP/1.1
 
 The request is interrupted, so we didn't get the whole XML output.
 
 If I switch back to version 1.3.22, it works without any problems.
 
 I have no idea which rule, security check, ... causes this issue!

Do you have any Transfer-Encoding header in the response? If haproxy manages
to get out of sync with one chunk, it could find something very different
from a hexadecimal size and return an error which could look the same.

And could you please send me (in private if the info is sensitive) a tcpdump
capture of the exchange as seen from the haproxy machine? Please use:

   $  tcpdump -s0 -npi eth0 tcp

You can even do that with 1.3.22 running; I'll try to feed 1.4 with the
response to see if I can make it fail.
Thanks!
Willy




Re: [ANNOUNCE] haproxy-1.4.0

2010-03-16 Thread Krzysztof Olędzki

On 2010-03-03 23:12, Willy Tarreau wrote:

On Wed, Mar 03, 2010 at 01:02:16AM +0100, Krzysztof Olędzki wrote:

On 2010-03-03 00:47, Willy Tarreau wrote:

On Wed, Mar 03, 2010 at 12:39:48AM +0100, Willy Tarreau wrote:

Finally, encryption was only tested on Linux and FreeBSD so it could be
nice to verify if it works on Solaris in the same way (with -lcrypt) and
to add USE_LIBCRYPT for ifeq ($(TARGET),solaris).


OK I'm testing it now then and will keep you informed.


it builds with the same warning as we had on linux, despite the
man not mentioning anything specific.


:( Did you include unistd.h? If so, could you try defining _XOPEN_SOURCE
to 600?


Yes unistd.h is included as indicated in the man page, but still
the warning. I've tried with _XOPEN_SOURCE set to 600 and it reported
even more errors than with 500. In fact, any value other than 500
reports a huge number of errors in many includes files.

So in the end I've added a new build option USE_CRYPT_H which for
now I only set by default for solaris (though it works on linux,
various toolchains except dietlibc). It will make it easy to enable
it for other systems if required.

Using the following patch I can build it everywhere here without a
warning. Could you please test on your various FreeBSD versions, I
see no reason why it should change anything, it's just for the sake
of completeness.


Ack, Ack. Sorry for the delay. I know that you have already merged this 
patch and released a new version. ;)


Best regards,

Krzysztof Olędzki