Re: [PATCH] BUG/MINOR: acl: don't use record layer in req_ssl_ver

2015-11-05 Thread Willy Tarreau
On Thu, Nov 05, 2015 at 08:54:39AM +0100, Lukas Tribus wrote:
> >> @@ -402,7 +402,7 @@ smp_fetch_req_ssl_ver(const struct arg *args, struct sample *smp, const char *kw
> >> if (bleft < 5)
> >> goto too_short;
> >>
> >> - version = (data[1] << 16) + data[2]; /* version: major, minor */
> >> + version = (data[9] << 16) + data[10]; /* client hello version: major, minor */
> >> msg_len = (data[3] << 8) + data[4]; /* record length */
> >
> > See above ? we check for 5 bytes minimum because we didn't parse further
> > than data[4], and now we're reading data[10], so the test on bleft above
> > must be changed to bleft < 11.
> >
> > Can someone please check if the other patch referenced above has the same
> > bug?
> 
> Ouch, here's the original patch:
> http://marc.info/?l=haproxy&m=144431273015849&w=2
> 
> It does indeed bump this check to 11 bytes. Also, there is a third change in
> that patch, where it touches the bleft and data values (bleft -= 11; data += 11;)
> some lines below.
> 
> The original patch does not work; at this point I'm not quite sure why.

It's because of bleft -= 11 and data += 11. At the end of the code there's
a comment and a check :


/* We could recursively check that the buffer ends exactly on an SSL
 * fragment boundary and that a possible next segment is still SSL,
 * but that's a bit pointless. However, we could still check that
 * all the part of the request which fits in a buffer is already
 * there.
 */
if (msg_len > channel_recv_limit(req) + req->buf->data - req->buf->p)
	msg_len = channel_recv_limit(req) + req->buf->data - req->buf->p;

msg_len is the length of what starts at byte 5. So by skipping 6 extra bytes
the msg_len becomes incorrect and this check doesn't match anymore.

... which leads to another point. Since we're checking inside a record, are
we sure that we don't need to check the record type ? There are probably
bytes between the record length and the record's version.
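
If I read RFC 5246 correctly, the bytes in between are the handshake message
type and its 24-bit length, so the layout is (offsets matching our parser's
indexing, from the start of the record) :

/* data[0]      ContentType: 0x16 = handshake (already checked above)
 * data[1..2]   record layer version: major, minor
 * data[3..4]   record length
 * data[5]      HandshakeType: 0x01 = client hello
 * data[6..8]   handshake message length (24 bits)
 * data[9..10]  ClientHello.client_version: major, minor
 */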

I'd suggest at a minimum that the comment on "version" is adjusted and that the
version is read *after* msg_len, so that we parse the message from left to
right only, to avoid any confusion :

-   version = (data[1] << 16) + data[2]; /* version: major, minor */
msg_len = (data[3] <<  8) + data[4];  /* record length */
+   version = (data[9] << 16) + data[10]; /* record version: major, minor */

And in fact I'm still not satisfied with this because the check for the
message length is performed a few lines later, so we may parse crap from
a subsequent record. Thus the proper thing to do is this (sorry for the
hand-written patch, I think you'll get the idea) :

/* SSLv3 header format */
-   if (bleft < 5)
+   if (bleft < 11)
goto too_short;

version = (data[1] << 16) + data[2]; /* version: major, minor */
msg_len = (data[3] <<  8) + data[4];  /* record length */

/* format introduced with SSLv3 */
if (version < 0x0003)
goto not_ssl;

+   /* message length between 6 and 2^14 + 2048 */
+   if (msg_len < 6 || msg_len > ((1<<14) + 2048))
+   goto not_ssl;

bleft -= 5; data += 5;

+   /* we want to return the record version, not the envelope's version */
+   version = (data[4] << 16) + data[5]; /* record version: major, minor */

> Thanks and sorry for the near-miss,

No worries, that's exactly why I want to see patches posted to the list and
why I'm strongly against pull requests. I want to see eyes looking at the
code. There are 2000 eyeballs on this list; anyone at any moment can chime
in and ask a question or report a doubt.

Thanks,
Willy




Re: trailing whitespace after HTTP version

2015-11-05 Thread Baptiste
On Thu, Nov 5, 2015 at 9:36 AM, Julien Vermillard  wrote:
> Hi,
> When we migrated from Apache HTTPD to HAProxy we found a strange problem.
> We have some old HTTP clients (embedded devices) which send a trailing
> whitespace after the HTTP version.
> They send: "GET HTTP/1.1 " in place of "GET HTTP/1.1"
>
> HAProxy just drops the message, sending a 400 error; HTTPD accepts it.
>
> It's not a big code change to make haproxy resilient to that, but I
> wonder if you would accept a patch for that, or whether you don't want
> haproxy to be resilient to such behaviour?
>
> Thanks,
> Julien


Hi Julien,

You want to break compliance with the HTTP RFC?
  https://tools.ietf.org/html/rfc7230#section-3.1.1

I'm against such change as a default behavior.
Maybe you could simply upgrade the code of "option
accept-invalid-http-request" to meet your requirement.
http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#4.2-option%20accept-invalid-http-request
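
For instance, assuming the option were extended to also tolerate that trailing
space, you would scope it to the listener facing the legacy devices (untested
sketch, names are just examples):

frontend legacy_devices
    bind :8080
    option accept-invalid-http-request
    default_backend devices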

Or create a new option "option accept-broken-http-version-from-client" :)

Baptiste



Re: use several gpc's ?

2015-11-05 Thread Baptiste
On Thu, Nov 5, 2015 at 2:48 PM, Sylvain Faivre
 wrote:
> Hi,
>
> Is there a way to use several gpc's ?
>
> I already use gpc0 to track client IPs generating too many errors, and I
> need to use another counter to track client IPs requesting some pages too
> fast.
>
> Here are the relevant parts of my current setup :
>
> frontend web
> stick-table type ip size 500k expire 5m store gpc0
> tcp-request content track-sc1 src
> http-request deny if !i_internal { sc1_get_gpc0 gt 0 }
>
> backend front
> stick-table type ip size 100k expire 5m store http_err_rate(10s)
> tcp-request content track-sc2 src
> acl error_rate_abuse sc2_http_err_rate gt 10
> acl mark_as_abuser sc1_inc_gpc0 gt 0
> reqtarpit . if error_rate_abuse !whitelist mark_as_abuser
>
> And I'm trying to add something like this to the frontend :
>
>   stick-table type ip size 50k expire 24h store gpc0_rate(60s)
>   acl pages_info path_sub -i info.php
>   acl too_many_info_requests sc0_gpc0_rate() gt 50
>   acl mark_seen_pages_info sc0_inc_gpc0 gt 0
>   tcp-request content track-sc0 src if pages_info
>   http-request deny if mark_seen_pages_info too_many_info_requests
>
> But I'm afraid that I will not be able to distinguish the info stored in
> gpc0 for the error count and for the requests count...
> What am I missing here ?
>


Hi Sylvain,

Which version of HAProxy are you using?
With 1.6, there are some converters that may be used to avoid relying on
gpc while counting errors.
It means you would store the abuser's client IP in a dedicated table and
simply check if the IP is there:
 http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#in_table
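
For instance, something along these lines (untested sketch; the table names
and the threshold are only examples):

backend abusers
    # presence of an IP here is the only information we need
    stick-table type ip size 100k expire 10m

frontend web
    stick-table type ip size 100k expire 5m store http_err_rate(10s)
    tcp-request content track-sc1 src
    # once the error rate trips, also track the IP in "abusers",
    # which creates the entry
    http-request track-sc2 src table abusers if { sc1_http_err_rate gt 10 }
    # any later request from a listed IP is denied by a simple lookup
    http-request deny if { src,in_table(abusers) }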

I have it on my TODO list to write an article about this on the blog: some
kind of DDoS protection with HAProxy 1.6.

Baptiste



[PATCH v2] BUG/MINOR: acl: don't use record layer in req_ssl_ver

2015-11-05 Thread Lukas Tribus
The initial record layer version in an SSL handshake may be set to TLSv1.0
or similar for compatibility reasons; this is allowed as per RFC5246
Appendix E.1 [1]. Implementations doing this include OpenSSL [2] and NSS [3].

A related issue has been fixed some time ago in commit 57d229747
("BUG/MINOR: acl: req_ssl_sni fails with SSLv3 record version").

Fix this by using the real client hello version instead of the record
layer version.

This was reported by Julien Vehent and analyzed by Cyril Bonté.
The initial patch is from Julien Vehent as well.

This should be backported to stable series, the req_ssl_ver keyword was
first introduced in 1.3.16.

[1] https://tools.ietf.org/html/rfc5246#appendix-E.1
[2] https://github.com/openssl/openssl/commit/4a1cf50187659e60c5867ecbbc36e37b2605d2c3
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=774547
---

Here's the v2 patch addressing the problems of the patch posted yesterday.
I quickly tested it and it does work fine.

Regarding the code comments, I think you mixed the 2 versions up:
the record layer version number (TLSPlaintext.version) is the 'envelope's
version' (we are currently returning this version in req_ssl_ver);

the client hello version (ClientHello.client_version) is what this
patch makes sure we return in req_ssl_ver.
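
To illustrate the difference on a concrete byte sequence, here is a standalone
sketch (not part of the patch; the record and handshake lengths are made up,
but consistent):

#include <stdio.h>

int main(void)
{
	/* TLS record: type 0x16 (handshake), record layer version 3.1
	 * (TLSv1.0 for compatibility), record length; then the handshake
	 * header: type 0x01 (client hello), 24-bit length; then
	 * ClientHello.client_version 3.3 (TLSv1.2). */
	const unsigned char data[] = {
		0x16, 0x03, 0x01, 0x00, 0x31, /* record header, 5 bytes    */
		0x01, 0x00, 0x00, 0x2d,       /* handshake header, 4 bytes */
		0x03, 0x03                    /* client_version            */
	};
	unsigned int record_ver = (data[1] << 16) + data[2];  /* 0x030001 */
	unsigned int hello_ver  = (data[9] << 16) + data[10]; /* 0x030003 */

	printf("record layer: 0x%06x, client hello: 0x%06x\n",
	       record_ver, hello_ver);
	return 0;
}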

---
 src/payload.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/payload.c b/src/payload.c
index ce9280c..710ed4c 100644
--- a/src/payload.c
+++ b/src/payload.c
@@ -399,21 +399,24 @@ smp_fetch_req_ssl_ver(const struct arg *args, struct sample *smp, const char *kw
data = (const unsigned char *)req->buf->p;
if ((*data >= 0x14 && *data <= 0x17) || (*data == 0xFF)) {
/* SSLv3 header format */
-   if (bleft < 5)
+   if (bleft < 11)
goto too_short;
 
-   version = (data[1] << 16) + data[2]; /* version: major, minor */
+   version = (data[1] << 16) + data[2]; /* record layer version: major, minor */
msg_len = (data[3] <<  8) + data[4]; /* record length */
 
/* format introduced with SSLv3 */
if (version < 0x0003)
goto not_ssl;
 
-   /* message length between 1 and 2^14 + 2048 */
-   if (msg_len < 1 || msg_len > ((1<<14) + 2048))
+   /* message length between 6 and 2^14 + 2048 */
+   if (msg_len < 6 || msg_len > ((1<<14) + 2048))
goto not_ssl;
 
bleft -= 5; data += 5;
+
+   /* return the client hello client version, not the record layer version */
+   version = (data[4] << 16) + data[5]; /* client hello version: major, minor */
} else {
/* SSLv2 header format, only supported for hello (msg type 1) */
int rlen, plen, cilen, silen, chlen;
-- 
1.9.1




Fast reloads leave orphaned processes on systemd based systems

2015-11-05 Thread Lukas Loesche
When reloading haproxy too fast on EL7 (RedHat, CentOS), the system is
being filled with orphaned processes.

I encountered this problem on CentOS 7 with
haproxy-1.5.4-4.el7_1.x86_64, but expect it to exist on all systems
using haproxy-systemd-wrapper, not just those based on Fedora.

Steps to reproduce:

1) haproxy is running normally.

[root@localhost ~]# ps ax | grep haproxy
 3140 ?  Ss  0:00 /usr/sbin/haproxy-systemd-wrapper -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid
 3141 ?  S   0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
 3142 ?  Ss  0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

2) Several reloads are executed in quick succession. The problem worsens
when multiple processes happen to execute a reload in parallel.

[root@localhost ~]# while :; do systemctl reload haproxy; done
^C

3) There are multiple haproxy processes running that will never end. As
you can see, there are duplicate pids for the -sf arg. Maybe caused by a
race between haproxy-systemd-wrapper reading the pidfile and the new
haproxy process writing its pid.

[root@localhost ~]# ps ax | grep haproxy
  423 ?  S   0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 419
  429 ?  S   0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 425
  430 ?  Ss  0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 419
  431 ?  Ss  0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 425
31833 ?  Ss  0:01 /usr/sbin/haproxy-systemd-wrapper -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid
36593 ?  S   0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 36587
36600 ?  Ss  0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 36587
38316 ?  S   0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 38311
38324 ?  Ss  0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 38311
38344 ?  S   0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 38325
38350 ?  Ss  0:00 /usr/sbin/haproxy -f
/etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds -sf 38325
...
...


I believe the problem is that there's a race in
haproxy-systemd-wrapper.c line 98, where it's missing a
} else if (nb_pid > 0) { ... } block that waits until the old pid is no
longer found in the pidfile. Or something similarly blocking.

Otherwise the parent will accept new SIGUSR2/SIGHUP reloads before the
new haproxy process that was spawned in line 96 has written its pid
file.

Also note the following from the systemd.service manpage:
"It is strongly recommended to set ExecReload= to a command that not
only triggers a configuration reload of the daemon, but also
synchronously waits for it to complete."
That's currently not the case.
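
To sketch the kind of synchronous wait I mean (hypothetical code, not the
actual wrapper source; the function names are made up):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* return 1 if the pidfile still lists old_pid, 0 otherwise */
static int pidfile_contains(const char *path, pid_t old_pid)
{
	char line[64];
	FILE *f = fopen(path, "r");
	int found = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if ((pid_t)atol(line) == old_pid)
			found = 1;
	fclose(f);
	return found;
}

/* after signalling a reload, block further reloads until the new
 * process has rewritten the pidfile without the old pid */
static void wait_for_new_pidfile(const char *path, pid_t old_pid)
{
	while (pidfile_contains(path, old_pid))
		usleep(100000); /* 100 ms */
}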





Re: LUA, 'retry' failed requests

2015-11-05 Thread PiBa-NL

Hi Thierry,
On 5-11-2015 at 8:08, Thierry FOURNIER wrote:

Hi,

Now, because of you, I have my own FreeBSD installed :). I can
reproduce the segfault. I suppose that OSes other than FreeBSD are
impacted too, but luckily it's not visible.
OK, thanks for installing and trying on FreeBSD :) I suppose some OSes
are more likely to catch/evade a problem than others.


I encountered an easy compilation error. So can you test the two attached
patches? The first one fixes the compilation issue, and the second one
fixes the segfault.
The first patch removes the warning below, though it always built fine as far
as I could tell. Anyway, fewer warnings is better.

include/common/mini-clist.h:114:9: warning: 'LIST_PREV' macro redefined
#define LIST_PREV(lh, pt, el) (LIST_ELEM((lh)->p, pt, el))

I confirm the second patch fixes the core dump.

Thanks as always!

Regards,
PiBa-NL


Thierry


On Mon, 2 Nov 2015 20:50:01 +0100
PiBa-NL  wrote:


On 2-11-2015 at 10:03, Thierry FOURNIER wrote:

On Sat, 31 Oct 2015 21:22:14 +0100
PiBa-NL  wrote:


Hi Thierry, haproxy-list,

Hi Pieter,

Hi Thierry,



I've created another possibly interesting lua script, and it works :)
(mostly). (on my test machine..)

When I visit the 192.168.0.120:9003 website I always see the 'Hello
World' page. So in that regard this is usable; it is left to the browser
to send the request again, and I'm not sure how safe this is in regard to
mutations being sent twice. It should probably check for POST requests
and then just return the error without replacing it with a redirect..
Not sure if that would catch all problem cases..

I've created a lua service that counts how many requests are made, and
returns an error status every 5th request.
Second, there is a lua response script that checks the status, and
replaces it with a redirect if it sees the faulty status 500.
This currently results in the connection being closed and reopened,
probably due to the txn.res:send().

Though I am still struggling with what is and isn't supposed to be possible.
For example, the scripts below are running in 'mode http' and mostly just
changing 'headers'.
I expected to be able to simply read the status by calling
txn.f:status(), but this always seems to result in 'null'.
Manually parsing a duplicate of the response buffer works, but seems ugly..

txn.f:status()  <  it doesn't result in the actual status.

This is a bug which I reproduce. Can you try the attached patches ?

With the patches it works without my 'workaround', thanks.

txn.res:set()  < if used in place of send(), causes a 30 second delay

This function puts data in the input part of the response buffer. This
new data follows the HAProxy stream when the Lua script is finished.
That is your case.

I can't reproduce this behaviour; I suppose that it's because I work
locally, and I'm not impacted by the network latency.

Even when I bind everything to 0.0.0.0 and use 127.0.0.1 to query the
9003 port, it still waits for the timeout to strike..
I'm not sure why it doesn't happen in your setup.. Of course I'm running
on FreeBSD, but I don't expect that to affect this..



txn.done()  < dumps core. (I'm not sure when to call it? The script
below seems to match the description of this function.)

I can't reproduce it either, for the same reasons, I guess.

Please note that both set() and done() need to be uncommented for the
dump to happen, on the 5th request.

Not sure if it helps, but the backtrace of the dump is below (would 'bt full'
be more useful?):
(gdb) bt
#0  0x000801a76bb5 in memmove () from /lib/libc.so.7
#1  0x00417523 in buffer_insert_line2 (b=0x8024a,
  pos=0x8024a0035 "\r\n\ncontent-type: text/plain\r\ncontent-length:
394\r\n\r\nError 5\r\nversion\t\n[HTTP/1.1]\t\nf\t\n   0\t\n
[userdata: 0x802683a68]\t\nsc\t\n   0\t\n   [userdata:
0x802683be8]\t\nc\t\n 0\t\n   [userdata: 0x802683b68]\t\nheader"...,
  str=0x58c695 "Connection: keep-alive", len=22) at src/buffer.c:126
#2  0x0047b3a5 in http_header_add_tail2 (msg=0x8024bb290,
hdr_idx=0x8024bb280, text=0x58c695 "Connection: keep-alive", len=22)
  at src/proto_http.c:595
#3  0x0047f943 in http_change_connection_header
(txn=0x8024bb280, msg=0x8024bb290, wanted=8388608) at src/proto_http.c:2079
#4  0x004900fd in http_process_res_common (s=0x802485600,
rep=0x802485650, an_bit=262144, px=0x8024de000) at src/proto_http.c:6882
#5  0x004d6c90 in process_stream (t=0x8024ab710) at
src/stream.c:1918
#6  0x00420588 in process_runnable_tasks () at src/task.c:238
#7  0x0040ce0e in run_poll_loop () at src/haproxy.c:1559
#8  0x0040dcb2 in main (argc=4, argv=0x7fffeb00) at
src/haproxy.c:1912


Am I trying to do it wrong?

p.s. Is 'health checking' using lua possible? The redis example looks
like a health 'ping'.. It could possibly be much, much more flexible than
the tcp-check send / tcp-check expect routines..

It is not possible. You can write a task which does something (like an

RE: [PATCH v2] BUG/MINOR: acl: don't use record layer in req_ssl_ver

2015-11-05 Thread Lukas Tribus
>> This should be backported to stable series, the req_ssl_ver keyword was
>> first introduced in 1.3.16.
>
> Thanks Lukas, applied to 1.7, 1.6, 1.5 and 1.4. For 1.3 there might be
> other patches pending so this one will get there at the same time.

Great.

I didn't really expect a 1.3 backport, and I don't really think it's necessary;
I just included that information in the commit message for completeness
(actually I wasn't sure whether 1.4 contains this feature or not, that's why I
grepped through git history :) ).


Thanks,

Lukas

  


Re: LUA, 'retry' failed requests

2015-11-05 Thread Willy Tarreau
Hi Pieter,

On Thu, Nov 05, 2015 at 08:21:26PM +0100, PiBa-NL wrote:
> >I encountered an easy compilation error. So can you test the two attached
> >patches? The first one fixes the compilation issue, and the second one
> >fixes the segfault.
> The first patch removes the warning below, though it always built fine as far
> as I could tell. Anyway, fewer warnings is better.

Yes definitely. No warning is the goal unless we really have no other
option. I don't have a single warning on my builds and that helps me
a lot to spot issues I introduce during backports.

> include/common/mini-clist.h:114:9: warning: 'LIST_PREV' macro redefined
> #define LIST_PREV(lh, pt, el) (LIST_ELEM((lh)->p, pt, el))
> 
> I confirm the second patch fixes the core dump.

Great, thanks for this useful report. I've thus merged the patches
into 1.7 and 1.6.

Cheers,
Willy




Re: [PATCH v2] BUG/MINOR: acl: don't use record layer in req_ssl_ver

2015-11-05 Thread Willy Tarreau
On Thu, Nov 05, 2015 at 08:31:28PM +0100, Lukas Tribus wrote:
> >> This should be backported to stable series, the req_ssl_ver keyword was
> >> first introduced in 1.3.16.
> >
> > Thanks Lukas, applied to 1.7, 1.6, 1.5 and 1.4. For 1.3 there might be
> > other patches pending so this one will get there at the same time.
> 
> Great.
> 
> I didn't really expect a 1.3 backport, and I don't really think it's necessary;
> I just included that information in the commit message for completeness
> (actually I wasn't sure whether 1.4 contains this feature or not, that's why I
> grepped through git history :) ).

In 1.4 and 1.3 it's an acl fetch (we didn't have samples by then). But
it doesn't cost anything to fix bugs in older versions in general; you
know, it's basically a "git log --oneline $last_ver..master | grep BUG"
to run, followed by a bunch of "git cherry-pick -x" for each patch to
backport, so that's quite simple.

Cheers,
Willy






Re: use several gpc's ?

2015-11-05 Thread Baptiste
On Thu, Nov 5, 2015 at 3:45 PM, Sylvain Faivre
 wrote:
> On 11/05/2015 03:30 PM, Baptiste wrote:
>>
>> On Thu, Nov 5, 2015 at 2:48 PM, Sylvain Faivre
>>  wrote:
>>>
>>> Hi,
>>>
>>> Is there a way to use several gpc's ?
>>>
>>> I already use gpc0 to track client IPs generating too many errors, and I
>>> need to use another counter to track client IPs requesting some pages too
>>> fast.
>>>
>>> Here are the relevant parts of my current setup :
>>>
>>> frontend web
>>>  stick-table type ip size 500k expire 5m store gpc0
>>>  tcp-request content track-sc1 src
>>>  http-request deny if !i_internal { sc1_get_gpc0 gt 0 }
>>>
>>> backend front
>>>  stick-table type ip size 100k expire 5m store http_err_rate(10s)
>>>  tcp-request content track-sc2 src
>>>  acl error_rate_abuse sc2_http_err_rate gt 10
>>>  acl mark_as_abuser sc1_inc_gpc0 gt 0
>>>  reqtarpit . if error_rate_abuse !whitelist mark_as_abuser
>>>
>>> And I'm trying to add something like this to the frontend :
>>>
>>>stick-table type ip size 50k expire 24h store gpc0_rate(60s)
>>>acl pages_info path_sub -i info.php
>>>acl too_many_info_requests sc0_gpc0_rate() gt 50
>>>acl mark_seen_pages_info sc0_inc_gpc0 gt 0
>>>tcp-request content track-sc0 src if pages_info
>>>http-request deny if mark_seen_pages_info too_many_info_requests
>>>
>>> But I'm afraid that I will not be able to distinguish the info stored in
>>> gpc0 for the error count and for the requests count...
>>> What am I missing here ?
>>>
>>
>>
>> Hi Sylvain,
>>
>> Which version of HAProxy are you using?
>> With 1.6, there are some converters that may be used to avoid relying on
>> gpc while counting errors.
>> It means you would store the abuser's client IP in a dedicated table and
>> simply check if the IP is there:
>>
>> http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#in_table
>>
>> I have it on my TODO list to write an article about this on the blog: some
>> kind of DDoS protection with HAProxy 1.6.
>>
>> Baptiste
>>
>
> We are using HAProxy 1.5; upgrading to 1.6 shouldn't be a huge problem.
>
> I guess I'll wait for your article, since I'm not sure I understand
> everything about all this table stuff.
>
> So, with HAProxy 1.5, one cannot have two types of DDOS protection at the
> same time ? (to flag offenders who send too many requests, and those
> whose requests cause too many errors)

No, you can.
I guess you have already read this article:
http://blog.haproxy.com/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/
It's DDOS "v1" :)

You could have a single table which monitors both the req rate and the err
rate at the same time, and increment gpc0 only when one of the counters
goes over a threshold.
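
For instance (untested sketch; the thresholds and the 10s windows are just
examples):

frontend web
    stick-table type ip size 500k expire 5m store gpc0,http_req_rate(10s),http_err_rate(10s)
    tcp-request content track-sc1 src
    acl is_abuser   sc1_get_gpc0 gt 0
    acl req_abuse   sc1_http_req_rate gt 100
    acl err_abuse   sc1_http_err_rate gt 10
    acl mark_abuser sc1_inc_gpc0 gt 0
    # deny clients already flagged, then flag and deny whichever
    # threshold trips first (ACLs short-circuit, so gpc0 is only
    # incremented when a threshold is exceeded)
    http-request deny if is_abuser
    http-request deny if req_abuse mark_abuser
    http-request deny if err_abuse mark_abuser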

Baptiste



Re: trailing whitespace after HTTP version

2015-11-05 Thread Julien Vermillard
Hi Baptiste,
Thanks for the feedback.
Enabling this only under "option accept-invalid-http-request" is fine for me.
I agree this is not the correct behaviour, but most of the reverse proxies and
HTTP proxies we have used for several years were resilient to that (some even
fixing it), so that's why I'm asking.

Julien

On Thu, Nov 5, 2015 at 3:24 PM Baptiste  wrote:

> On Thu, Nov 5, 2015 at 9:36 AM, Julien Vermillard 
> wrote:
> > Hi,
> > When we migrated from Apache HTTPD to HAProxy we found a strange
> > problem.
> > We have some old HTTP clients (embedded devices) which send a trailing
> > whitespace after the HTTP version.
> > They send: "GET HTTP/1.1 " in place of "GET HTTP/1.1"
> >
> > HAProxy just drops the message, sending a 400 error; HTTPD accepts it.
> >
> > It's not a big code change to make haproxy resilient to that, but I
> > wonder if you would accept a patch for that, or whether you don't want
> > haproxy to be resilient to such behaviour?
> >
> > Thanks,
> > Julien
>
>
> Hi Julien,
>
> You want to break compliance with the HTTP RFC?
>   https://tools.ietf.org/html/rfc7230#section-3.1.1
>
> I'm against such change as a default behavior.
> Maybe you could simply upgrade the code of "option
> accept-invalid-http-request" to meet your requirement.
>
> http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#4.2-option%20accept-invalid-http-request
>
> Or create a new option "option accept-broken-http-version-from-client" :)
>
> Baptiste
>


Re: use several gpc's ?

2015-11-05 Thread Sylvain Faivre

On 11/05/2015 03:30 PM, Baptiste wrote:

On Thu, Nov 5, 2015 at 2:48 PM, Sylvain Faivre
 wrote:

Hi,

Is there a way to use several gpc's ?

I already use gpc0 to track client IPs generating too many errors, and I
need to use another counter to track client IPs requesting some pages too
fast.

Here are the relevant parts of my current setup :

frontend web
 stick-table type ip size 500k expire 5m store gpc0
 tcp-request content track-sc1 src
 http-request deny if !i_internal { sc1_get_gpc0 gt 0 }

backend front
 stick-table type ip size 100k expire 5m store http_err_rate(10s)
 tcp-request content track-sc2 src
 acl error_rate_abuse sc2_http_err_rate gt 10
 acl mark_as_abuser sc1_inc_gpc0 gt 0
 reqtarpit . if error_rate_abuse !whitelist mark_as_abuser

And I'm trying to add something like this to the frontend :

   stick-table type ip size 50k expire 24h store gpc0_rate(60s)
   acl pages_info path_sub -i info.php
   acl too_many_info_requests sc0_gpc0_rate() gt 50
   acl mark_seen_pages_info sc0_inc_gpc0 gt 0
   tcp-request content track-sc0 src if pages_info
   http-request deny if mark_seen_pages_info too_many_info_requests

But I'm afraid that I will not be able to distinguish the info stored in
gpc0 for the error count and for the requests count...
What am I missing here ?




Hi Sylvain,

Which version of HAProxy are you using?
With 1.6, there are some converters that may be used to avoid relying on
gpc while counting errors.
It means you would store the abuser's client IP in a dedicated table and
simply check if the IP is there:
  http://cbonte.github.io/haproxy-dconv/snapshot/configuration-1.6.html#in_table

I have it on my TODO list to write an article about this on the blog: some
kind of DDoS protection with HAProxy 1.6.

Baptiste



We are using HAProxy 1.5; upgrading to 1.6 shouldn't be a huge problem.

I guess I'll wait for your article, since I'm not sure I understand
everything about all this table stuff.

So, with HAProxy 1.5, one cannot have two types of DDOS protection at
the same time ? (to flag offenders who send too many requests, and
those whose requests cause too many errors)



